GitHub 3.6K
Context & Memory emerging

Context-Minimization Pattern

By Nikola Balic (@nibzard)
Add to Pack
or

Saved locally in this browser for now.

Cite This Pattern
APA
Nikola Balic (@nibzard) (2026). Context-Minimization Pattern. In *Awesome Agentic Patterns*. Retrieved March 11, 2026, from https://agentic-patterns.com/patterns/context-minimization-pattern
BibTeX
@misc{agentic_patterns_context-minimization-pattern,
  title = {Context-Minimization Pattern},
  author = {Nikola Balic (@nibzard)},
  year = {2026},
  howpublished = {\url{https://agentic-patterns.com/patterns/context-minimization-pattern}},
  note = {Awesome Agentic Patterns}
}
01

Problem

In long agent sessions, raw user text and tool outputs often remain in-context long after they are needed. If those tokens include adversarial instructions, they can silently bias later reasoning steps, even when the current step is unrelated. This creates delayed prompt-injection risk and unnecessary context bloat.

02

Solution

Purge or redact untrusted segments once they've served their purpose:

  • After transforming input into a safe intermediate (query, structured object), strip the original prompt from context.
  • Subsequent reasoning sees only trusted data, eliminating latent injections.
  • A strong variant also removes intermediate LLM outputs that may have been tainted.

Treat context as a staged pipeline: ingest untrusted text, transform it, then aggressively discard the original tainted material. Keep only signed-off structured artifacts that downstream steps are allowed to consume.

sql = LLM("to SQL", user_prompt)
remove(user_prompt)              # tainted tokens gone
rows = db.query(sql)
answer = LLM("summarize rows", rows)
03

How to use it

Customer-service chat, medical Q&A, database query generation, any multi-turn flow where initial text shouldn't steer later steps.

04

Trade-offs

  • Pros: Simple; no extra models needed; helps prevent context window anxiety by reducing overall context usage; provides compliance benefits (HIPAA/GDPR data minimization).
  • Cons: Later turns lose conversational nuance; may hurt UX; overly aggressive minimization can remove useful context; risks broken referential coherence when earlier turns are referenced ("the function I mentioned before").
05

Example

flowchart LR A[User Prompt] --> B[Extract Intent] B --> C[Remove Original] C --> D[Trusted Data] D --> E[Execute Safely] A -.removed.-> C
06

References