Reliability & Eval · Emerging

Schema Validation Retry with Cross-Step Learning

By Nikola Balic (@nibzard)

Cite This Pattern
APA
Nikola Balic (@nibzard) (2026). Schema Validation Retry with Cross-Step Learning. In *Awesome Agentic Patterns*. Retrieved March 11, 2026, from https://agentic-patterns.com/patterns/schema-validation-retry-cross-step-learning
BibTeX
@misc{agentic_patterns_schema-validation-retry-cross-step-learning,
  title = {Schema Validation Retry with Cross-Step Learning},
  author = {Nikola Balic (@nibzard)},
  year = {2026},
  howpublished = {\url{https://agentic-patterns.com/patterns/schema-validation-retry-cross-step-learning}},
  note = {Awesome Agentic Patterns}
}
01

Problem

LLMs do not always produce structured output that matches the expected schema. With single-attempt validation, one malformed response fails the entire task, even when a retry would have succeeded.

The issues compound in multi-step workflows:

  • Schema violations: LLM generates JSON that doesn't match the expected Zod/JSON Schema
  • One-and-done failure: Single failed attempt terminates the entire workflow
  • No learning from mistakes: Each step repeats the same errors independently
  • Wasted tokens: Failed responses still consume context and cost money
  • Fragile workflows: Flaky LLM outputs make agents unreliable
02

Solution

Wrap each structured-output call in a bounded retry loop that feeds detailed validation errors back to the model, and accumulate those errors in a workflow-level history. Later steps see the mistakes earlier steps made, so the agent learns from its validation failures across the entire workflow instead of repeating them.

03

How to use it

04

Trade-offs

Pros:

  • Higher success rate: 3-attempt retry significantly improves structured output reliability
  • Cross-step learning: Agent avoids repeating mistakes across workflow
  • Detailed error feedback: Zod errors guide LLM to specific fixes
  • Better debugging: Error history provides diagnostic information
  • Configurable balance: Can tune attempt count vs. cost/latency

Cons:

  • Increased latency: Multiple LLM calls add delay when retries occur
  • Higher cost: Failed attempts still consume tokens
  • Context bloat: Error history consumes tokens if not limited
  • Not guaranteed: some models fail to correct their output even with detailed error feedback
  • Complexity: Additional retry logic and error management

Mitigation strategies:

  • Limit cross-step error window (last 3 errors) to control token usage
  • Use caching to skip retries for repeated workflows
  • Set per-step timeout to prevent runaway retries
  • Log failures to improve prompts over time
  • Consider using models with better structured output adherence
  • Add exponential backoff with jitter for production deployments
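The last mitigation above can be sketched as a one-line "full jitter" delay function; `backoffDelayMs` and its defaults are hypothetical choices, not part of the pattern's required interface:

```typescript
// Sketch of the "exponential backoff with jitter" mitigation.
// Full jitter: uniform in [0, min(cap, base * 2^attempt)], which spreads
// out retries and avoids synchronized bursts across concurrent steps.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * ceiling;
}
```

A retry loop would `await` a timer for `backoffDelayMs(attempt)` milliseconds before each re-prompt.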
06

References