Patterns for teams shipping AI agents.
A curated reference for orchestration, safety, memory, evals, and human control. Built to help you choose the operating model before the implementation becomes expensive.
The patterns that inform architecture decisions early.
This library is deliberately opinionated. These entries are the fastest way to understand how agent systems break, how they recover, and where control needs to stay explicit.
Editorial Thesis
Reference material should help you make decisions, not just collect examples.
- Bias toward patterns with operational consequences and real trade-offs.
- Separate architecture, reliability, and safety instead of flattening them into tags.
- Make it easy to move from exploration to a concrete build path.
Action-Selector Pattern
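The catalog lists this pattern without a description here. Under its common reading (the model only selects from a fixed whitelist of predefined actions, and its raw output is never executed directly), a minimal sketch might look like the following; the action names and handlers are hypothetical.

```python
# Hypothetical sketch of the Action-Selector pattern: the model may only
# choose from a fixed whitelist of actions, so free-form model output is
# never interpreted or executed.

ALLOWED_ACTIONS = {
    "refund_order": lambda order_id: f"refund issued for {order_id}",
    "check_status": lambda order_id: f"status for {order_id}: shipped",
}

def run_action(selected: str, order_id: str) -> str:
    # Unrecognized selections are rejected outright, not interpreted.
    if selected not in ALLOWED_ACTIONS:
        raise ValueError(f"action {selected!r} is not whitelisted")
    return ALLOWED_ACTIONS[selected](order_id)

print(run_action("check_status", "A-1001"))
```

The design choice worth noting: the selector narrows the model's authority to a closed set, which is what makes the pattern an architectural decision rather than a prompt tweak.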
Human-in-the-Loop Approval Framework
Insert human approval gates at designated high-risk functions while preserving agent autonomy for safe operations, backed by multi-channel approval interfaces and a comprehensive audit trail.
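A minimal sketch of the approval-gate idea, assuming a pluggable `approve` callback (which a real system might back with a Slack message or CLI prompt) and an in-memory audit trail; all names here are illustrative.

```python
import functools
from datetime import datetime, timezone

AUDIT_LOG = []  # audit trail of every gated call, approved or not

def requires_approval(approve):
    """Gate a high-risk tool behind a human approval callback."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # In production this callback would block on a human decision
            # delivered over Slack, email, or a CLI prompt.
            decision = approve(fn.__name__, args, kwargs)
            AUDIT_LOG.append({
                "tool": fn.__name__,
                "args": args,
                "approved": decision,
                "at": datetime.now(timezone.utc).isoformat(),
            })
            if not decision:
                return {"status": "blocked", "reason": "human denied"}
            return fn(*args, **kwargs)
        return wrapper
    return decorator

# Safe operations stay autonomous; only designated tools are gated.
@requires_approval(approve=lambda name, a, k: name != "delete_records")
def delete_records(table):
    return {"status": "deleted", "table": table}

print(delete_records("users"))  # blocked by the approval callback
```

The gate wraps only the functions the team designates as high-risk, so everything else keeps full autonomy, and every decision lands in the audit log regardless of outcome.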
LLM Observability
Integrate LLM observability platforms for span-level tracing of agent workflows, providing visual UI debugging, workflow linking, and aggregate metrics to enable fast navigation of complex multi-step executions.
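Span-level tracing can be sketched without any particular platform: each workflow step records a span with timing and a parent link, which is what lets a UI reconstruct the tree of a multi-step execution. The span store and stack below are simplified stand-ins for a real tracing SDK.

```python
import contextlib
import time
import uuid

SPANS = []   # finished spans; a real system exports these to a backend
_stack = []  # currently open spans, used for parent/child linking

@contextlib.contextmanager
def span(name):
    """Record one agent workflow step as a span with timing and parentage."""
    s = {
        "id": uuid.uuid4().hex,
        "name": name,
        "parent": _stack[-1]["id"] if _stack else None,
        "start": time.perf_counter(),
    }
    _stack.append(s)
    try:
        yield s
    finally:
        _stack.pop()
        s["duration_s"] = time.perf_counter() - s["start"]
        SPANS.append(s)

# Nested spans link automatically: both inner steps point at "agent_run".
with span("agent_run"):
    with span("retrieve_context"):
        pass
    with span("llm_call"):
        pass

print([(s["name"], s["parent"] is not None) for s in SPANS])
```

Spans finish inner-first, so aggregate metrics (per-step durations, counts) fall out of `SPANS` directly, while the parent ids give a UI everything it needs for workflow linking.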
Pick a route through the catalog.
The useful split is not “new vs updated.” It is whether you are choosing architecture, improving feedback loops, or hardening a risky system.
Choose the architecture before you choose the model
Start with orchestration, context, and interface patterns that define how your agent actually behaves.
Improve: Instrument the loop and harden the feedback path
Add evals, observability, and review loops early so the system can improve without guesswork.
Protect: Design guardrails as first-class product behavior
Treat safety, permissions, and approval flows as core UX, not compliance afterthoughts.
Follow the library as it matures.
Product teams do not need more vague AI commentary. They need a sharper feed of patterns, changes, and build guidance.