Category: Context & Memory · Status: Established

Dynamic Context Injection

By Nikola Balic (@nibzard)

Cite This Pattern
APA
Nikola Balic (@nibzard) (2026). Dynamic Context Injection. In *Awesome Agentic Patterns*. Retrieved March 11, 2026, from https://agentic-patterns.com/patterns/dynamic-context-injection
BibTeX
@misc{agentic_patterns_dynamic-context-injection,
  title = {Dynamic Context Injection},
  author = {Nikola Balic (@nibzard)},
  year = {2026},
  howpublished = {\url{https://agentic-patterns.com/patterns/dynamic-context-injection}},
  note = {Awesome Agentic Patterns}
}
01

Problem

While layered configuration files provide good baseline context, agents often need specific pieces of information on demand during an interactive session, e.g., the contents of a particular file, the output of a script, or a predefined complex prompt. Constantly editing static context files or pasting large chunks of text into prompts is inefficient.

02

Solution

Implement mechanisms for users to dynamically inject context into the agent's working memory during a session. Common approaches include:

  • File/Folder At-Mentions: Allowing users to type a special character (e.g., @) followed by a file or folder path (e.g., @src/components/Button.tsx or @app/tests/). The agent then ingests the content of the specified file or a summary of the folder into its current context for the ongoing task.
  • Custom Slash Commands: Enabling users to define reusable, named prompts or instructions in separate files (e.g., in ~/.claude/commands/foo.md). These can be invoked with a slash command (e.g., /user:foo), causing their content to be loaded into the agent's context. This is useful for frequently used complex instructions or context snippets.

These methods allow for a more fluid and efficient way to provide targeted context exactly when needed.
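As a rough sketch of how a client might implement both mechanisms, the snippet below expands `@path` mentions into file contents (or a folder listing) and resolves `/user:name` commands from saved prompt files. The regexes, the delimiter format, and the `~/.claude/commands` lookup are assumptions for illustration, not any tool's actual implementation:

```python
import re
from pathlib import Path

# Assumed location for saved prompts; real tools may differ.
COMMANDS_DIR = Path.home() / ".claude" / "commands"

MENTION_RE = re.compile(r"@([\w./-]+)")
SLASH_RE = re.compile(r"^/user:(\w+)\s*$")

def expand_prompt(prompt: str) -> str:
    """Replace @path mentions with file contents and /user:name
    commands with the body of the matching saved prompt."""
    m = SLASH_RE.match(prompt.strip())
    if m:
        cmd_file = COMMANDS_DIR / f"{m.group(1)}.md"
        if cmd_file.exists():
            return cmd_file.read_text()

    def inject(match: re.Match) -> str:
        path = Path(match.group(1))
        if path.is_file():
            return f"\n--- {path} ---\n{path.read_text()}\n---\n"
        if path.is_dir():
            listing = "\n".join(p.name for p in sorted(path.iterdir()))
            return f"\n--- {path}/ (listing) ---\n{listing}\n---\n"
        return match.group(0)  # leave unresolvable mentions untouched

    return MENTION_RE.sub(inject, prompt)
```

In practice a folder mention would inject a summary rather than a raw listing, but the shape is the same: resolve the reference, then splice the resolved content into the working context before the model sees the prompt.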

03

How to use it

  • Use this when model quality depends on selecting or retaining the right context.
  • Start with strict context budgets and explicit memory retention rules.
  • Measure relevance and retrieval hit-rate before increasing memory breadth.
  • Implement security controls: allowlist-based directory access, regex-based credential scanning, and file size limits.
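The security controls above can be sketched as a guarded read wrapper. The allowlisted roots, size limit, and credential regex are illustrative assumptions; a real deployment would tune all three:

```python
import re
from pathlib import Path

# Assumed policy values for illustration; tune per deployment.
ALLOWED_ROOTS = [Path("src").resolve(), Path("docs").resolve()]
MAX_BYTES = 64 * 1024  # refuse files larger than 64 KiB
SECRET_RE = re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*\S+")

def safe_read(path_str: str) -> str:
    """Read a file for context injection, enforcing the policy checks."""
    path = Path(path_str).resolve()
    # Allowlist check: the file must live under a permitted root.
    if not any(path.is_relative_to(root) for root in ALLOWED_ROOTS):
        raise PermissionError(f"{path} is outside the allowlisted directories")
    # Size check: refuse oversized files before reading them.
    if path.stat().st_size > MAX_BYTES:
        raise ValueError(f"{path} exceeds the {MAX_BYTES}-byte injection limit")
    text = path.read_text(errors="replace")
    # Credential scan: redact lines that look like secrets instead of injecting them.
    return SECRET_RE.sub("[REDACTED]", text)
```

Redacting rather than refusing keeps the rest of the file usable; either choice is reasonable, but the scan should happen before the content ever reaches the model's context.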
04

Trade-offs

  • Pros: Raises answer quality by keeping context relevant and reducing retrieval noise.
  • Cons: Requires ongoing tuning of memory policies and indexing quality.
05

References

  • Based on the at-mention and slash command features described in "Mastering Claude Code: Boris Cherny's Guide & Cheatsheet," section IV.
  • Lewis, P., et al. (2020). "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks." NeurIPS 2020.
  • Beurer-Kellner, M., et al. (2025). "Design Patterns for Securing LLM Agents against Prompt Injections." arXiv:2506.08837.
