Recipes

End-to-end working examples showing how ElectriPy Studio components compose. Each recipe is a runnable, commented Python file.

CLI Tool

Build typed CLI tools with ElectriPy's CLI framework. Subcommands, typed arguments, help generation, and config loading wired together.

When to use

When you need a production-grade CLI over your AI pipeline or toolkit.
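ElectriPy's own CLI API isn't reproduced here; the shape of the recipe can be sketched with the standard library's argparse. All names below (mytool, the run and eval subcommands, their flags) are illustrative placeholders.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Top-level parser; help text is generated from these declarations.
    parser = argparse.ArgumentParser(prog="mytool", description="AI pipeline CLI")
    sub = parser.add_subparsers(dest="command", required=True)

    # `run` subcommand: typed arguments with defaults; argparse coerces types.
    run = sub.add_parser("run", help="Run the pipeline once")
    run.add_argument("--model", default="local-model")
    run.add_argument("--temperature", type=float, default=0.0)
    run.add_argument("--max-tokens", type=int, default=256)

    # `eval` subcommand: its own argument set under the same program.
    ev = sub.add_parser("eval", help="Evaluate against a dataset")
    ev.add_argument("dataset", help="Path to evaluation data")
    ev.add_argument("--metric", choices=["accuracy", "f1"], default="accuracy")
    return parser
```

Parsing `["run", "--temperature", "0.7"]` yields a namespace with `command == "run"` and a float `temperature`, which is the typed-argument behavior the recipe builds on.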

LLM Gateway

Provider-agnostic LLM gateway with sync/async support. Wraps OpenAI, Anthropic, Ollama, and HTTP-JSON providers behind a unified interface.

When to use

When you want to swap LLM providers without changing application code.

Policy-Governed LLM Flow

LLM requests with pre/post policy hooks. Block, warn, or transform requests and responses at runtime using a declarative policy chain.

When to use

When you need guardrails, content filtering, or compliance enforcement on LLM traffic.
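A declarative policy chain of the kind this recipe describes can be sketched as a list of functions, each returning a verdict that allows, warns, blocks, or transforms the payload. The two example policies (secret redaction, topic blocking) and all names are illustrative, not ElectriPy's.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    action: str        # "allow", "warn", or "block"
    text: str          # possibly transformed payload
    reason: str = ""

Policy = Callable[[str], Verdict]

def redact_secrets(text: str) -> Verdict:
    # Transform: rewrite sensitive material before it leaves the process.
    return Verdict("allow", text.replace("SECRET_KEY", "[REDACTED]"))

def block_banned(text: str) -> Verdict:
    # Block: refuse the request outright with a reason.
    if "banned-topic" in text:
        return Verdict("block", text, reason="banned topic")
    return Verdict("allow", text)

def apply_chain(policies: list[Policy], text: str) -> Verdict:
    """Run policies in order, threading transforms; stop at the first block."""
    for policy in policies:
        verdict = policy(text)
        if verdict.action == "block":
            return verdict
        text = verdict.text
    return Verdict("allow", text)
```

The same chain can run as a pre-hook on requests and a post-hook on responses, which is how the block/warn/transform seams compose around an LLM call.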

Agent Collaboration Runtime

Multi-agent coordination patterns. Orchestrate specialist agents with a shared context, handoff protocol, and result aggregation.

When to use

When a single LLM call is insufficient and you need multi-agent decomposition.
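The coordination pattern can be sketched without any LLM: specialists are callables over a shared context dict, handoff happens by writing to that context, and the orchestrator aggregates results. The researcher/writer agents here are deterministic placeholders, not ElectriPy's agent API.

```python
from typing import Callable

Agent = Callable[[dict], str]  # reads shared context, returns its contribution

def researcher(ctx: dict) -> str:
    # Specialist 1: produce facts and record them in context for handoff.
    facts = f"facts about {ctx['task']}"
    ctx["facts"] = facts
    return facts

def writer(ctx: dict) -> str:
    # Specialist 2: consume the researcher's handoff from shared context.
    return f"summary based on {ctx['facts']}"

def orchestrate(agents: list[Agent], task: str) -> dict:
    """Run specialists in sequence over one shared context and
    aggregate every agent's result."""
    ctx: dict = {"task": task}
    ctx["results"] = [agent(ctx) for agent in agents]
    return ctx
```

In a real system each agent would wrap an LLM call; the shared-context handoff and the result aggregation are the parts this recipe exercises.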

Policy + Collaboration E2E

End-to-end flow combining policy enforcement with multi-agent collaboration. Demonstrates how safety and orchestration layers compose.

When to use

When building governed multi-agent systems that need both policy enforcement and agent coordination.
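How the two layers compose can be shown in miniature: the orchestrator passes every agent result through a policy seam before it lands in the shared context, so later agents only ever see governed output. All names below are illustrative assumptions.

```python
from typing import Callable

def redact(text: str) -> str:
    # Post-hook policy: transform each agent's output before it is shared.
    return text.replace("SECRET", "[REDACTED]")

def governed_orchestrate(agents: list[Callable[[dict], str]],
                         task: str,
                         post_policy: Callable[[str], str]) -> dict:
    """Run agents in sequence; every result crosses the policy seam
    before entering the shared context."""
    ctx: dict = {"task": task, "results": []}
    for agent in agents:
        ctx["results"].append(post_policy(agent(ctx)))
    return ctx

def leaky_agent(ctx: dict) -> str:
    # Emits sensitive material that the policy layer must catch.
    return f"working on {ctx['task']} with SECRET token"

def reviewer_agent(ctx: dict) -> str:
    # Sees only the governed version of the previous agent's output.
    return f"reviewed: {ctx['results'][0]}"
```

Because governance sits in the orchestration loop rather than in any one agent, adding an agent never bypasses the safety layer.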

RAG Evaluation Runner

Evaluate retrieval-augmented generation pipelines with configurable scorers. Measures retrieval quality, answer correctness, and coherence.

When to use

When you need systematic evaluation of RAG systems before or after deployment.
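The runner's shape can be sketched with two toy scorers: retrieval recall (how many relevant documents were actually retrieved) and a token-overlap proxy for answer correctness. These particular metrics are illustrative stand-ins, not ElectriPy's shipped scorers.

```python
def retrieval_recall(retrieved: list[str], relevant: list[str]) -> float:
    """Fraction of relevant documents present in the retrieved set."""
    if not relevant:
        return 1.0
    hits = sum(1 for doc in relevant if doc in retrieved)
    return hits / len(relevant)

def token_overlap(answer: str, reference: str) -> float:
    """Crude answer-correctness proxy: shared tokens over reference tokens."""
    ref = set(reference.lower().split())
    ans = set(answer.lower().split())
    return len(ref & ans) / len(ref) if ref else 1.0

def evaluate(cases: list[dict]) -> dict:
    """Average each configured scorer across the evaluation cases."""
    n = len(cases)
    return {
        "recall": sum(retrieval_recall(c["retrieved"], c["relevant"])
                      for c in cases) / n,
        "overlap": sum(token_overlap(c["answer"], c["reference"])
                       for c in cases) / n,
    }
```

A real run would swap in stronger scorers (semantic similarity, LLM-as-judge) behind the same per-case interface, then gate deployment on the aggregate report.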

AI Telemetry

Instrument AI workloads with telemetry and cost tracking. Captures latency, token usage, model metadata, and structured outcome events.

When to use

When you need visibility into cost, latency, and quality across AI calls in production.
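The instrumentation pattern can be sketched as a decorator that times each call, estimates token usage, and appends a structured event to a sink. The whitespace token count, per-token cost, and in-memory EVENTS list are simplifying assumptions; a production sink would export to a metrics backend.

```python
import time
from functools import wraps

EVENTS: list[dict] = []  # in-memory sink; production would export these events

def instrument(model: str, cost_per_token: float):
    """Decorator that records latency, token usage, model metadata,
    and cost for every wrapped call."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(prompt: str) -> str:
            start = time.perf_counter()
            result = fn(prompt)
            tokens = len(prompt.split()) + len(result.split())  # rough count
            EVENTS.append({
                "model": model,
                "latency_s": time.perf_counter() - start,
                "tokens": tokens,
                "cost": tokens * cost_per_token,
                "outcome": "ok",
            })
            return result
        return wrapper
    return decorator

@instrument(model="demo-model", cost_per_token=0.00001)
def fake_llm(prompt: str) -> str:
    # Deterministic stand-in for a real model call.
    return "four words of output"
```

Each call then leaves behind one structured outcome event, which is the raw material for the cost and latency dashboards the recipe feeds.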