Problem
For AI agents to reliably and effectively use tools, especially APIs or internal libraries, the design of these interfaces matters. APIs designed solely for human consumption might be ambiguous or overly complex for an LLM to use correctly without extensive fine-tuning or elaborate prompting.
Solution
Design or adapt software APIs (including internal libraries and modules) with explicit consideration for LLM consumption. This involves:
- Explicit Versioning: Making API version information clearly visible and understandable to the LLM, so it can request or adapt to specific versions.
- Self-Descriptive Functionality: Ensuring function names, parameter names, type schemas (JSON Schema/OpenAPI), and documentation clearly describe what the API does and how to use it.
- Simplified Interaction Patterns: Favoring simpler, more direct API calls over highly nested or complex interaction sequences where possible, to reduce the chances of the LLM making errors.
- Clear Error Messaging: Designing error responses that are informative and actionable for an LLM, helping it to self-correct or understand why a call failed.
- Reduced Indirection: Structuring code and libraries to minimize layers of indirection (for example, two levels rather than n), making it easier for the model to reason about the codebase.
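The first few principles can be sketched as a single tool definition. This is a hypothetical example, not any framework's required format: the tool name, the `version` field, and the schema shape are all assumptions chosen to illustrate explicit versioning, self-descriptive naming, and constrained parameters.

```python
# Hypothetical LLM-facing tool definition illustrating the principles above.
SEARCH_ORDERS_TOOL = {
    # Explicit versioning: the model can see which contract it is calling.
    "name": "search_orders_v2",
    "version": "2.0.0",
    # Self-descriptive functionality: plain-language purpose and constraints.
    "description": (
        "Search customer orders by status and date range. "
        "Returns at most `limit` orders, newest first."
    ),
    # JSON Schema parameters with enums and bounds instead of free-form text.
    "parameters": {
        "type": "object",
        "properties": {
            "status": {
                "type": "string",
                "enum": ["pending", "shipped", "delivered", "cancelled"],
                "description": "Order status to filter by.",
            },
            "since": {
                "type": "string",
                "format": "date",
                "description": "Earliest order date, ISO 8601 (YYYY-MM-DD).",
            },
            "limit": {
                "type": "integer",
                "minimum": 1,
                "maximum": 100,
                "default": 20,
                "description": "Maximum number of orders to return.",
            },
        },
        "required": ["status"],
    },
}
```

The enum and numeric bounds do double duty: they document valid inputs for the model and let the runtime reject bad calls before they reach the backend.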
The aim is to create interfaces that are robust and intuitive for LLMs to interact with, thereby improving the reliability and effectiveness of agent tool use.
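The reduced-indirection point can be illustrated with a deliberately contrived sketch; every class and function name here is invented. Both paths produce the same result, but the model has far less to trace in the flat version.

```python
# Before: three layers the model must trace to find the real behavior.
class ClientFactory:
    def make(self):
        return _Client()

class _Client:
    def request(self, op, **kwargs):
        return _Transport().send(op, kwargs)

class _Transport:
    def send(self, op, payload):
        return {"op": op, "payload": payload}

# After: one flat, self-descriptive entry point (two levels, not n).
def send_request(op: str, **kwargs) -> dict:
    """Send `op` with keyword arguments; returns the response dict."""
    return {"op": op, "payload": kwargs}
```

Collapsing `ClientFactory().make().request(...)` into `send_request(...)` removes nothing from the behavior; it only shortens the chain of definitions the model must hold in context to predict what a call does.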
How to use it
- Use this when agent success depends on reliable tool invocation and environment setup.
- Start with a narrow tool surface and explicit parameter validation.
- Add observability around tool latency, failures, and fallback paths.
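The three bullets above can be combined in one small wrapper. This is a sketch under stated assumptions: the function names, the error-message wording, and the stand-in backend are all hypothetical, and the print-based latency logging stands in for whatever metrics system is actually in use.

```python
import time

VALID_STATUSES = {"pending", "shipped", "delivered", "cancelled"}

def call_search_orders(status: str, limit: int = 20) -> dict:
    """Narrow tool surface: validate up front, return errors an LLM can act on."""
    if status not in VALID_STATUSES:
        # Clear error messaging: say what was wrong and what is allowed,
        # so the model can self-correct on the next attempt.
        return {
            "ok": False,
            "error": f"Unknown status {status!r}. "
                     f"Allowed values: {sorted(VALID_STATUSES)}.",
        }
    if not 1 <= limit <= 100:
        return {"ok": False,
                "error": f"limit must be in [1, 100], got {limit}."}

    start = time.perf_counter()
    try:
        orders = _search_orders_backend(status, limit)  # hypothetical backend
    except Exception as exc:
        return {"ok": False,
                "error": f"Backend failure: {exc}. Retry or reduce limit."}
    finally:
        # Observability: record latency for every tool call, success or not.
        latency_ms = (time.perf_counter() - start) * 1000
        print(f"search_orders latency: {latency_ms:.1f} ms")
    return {"ok": True, "orders": orders}

def _search_orders_backend(status: str, limit: int) -> list:
    # Stand-in backend so the sketch runs end to end.
    return [{"id": i, "status": status} for i in range(min(limit, 3))]
```

Returning structured `{"ok": ..., "error": ...}` results rather than raising opaque exceptions gives the agent something it can read and act on, which is what makes self-correction possible.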
Trade-offs
- Pros: Improves execution success and lowers tool-call failure rates.
- Cons: Introduces integration coupling and environment-specific upkeep.
References
- Lukas Möller (Cursor) at 0:16:00: "API design is already adjusting such that LLMs are more comfortable with that. For example, changing not only the version number internally but making it very visible to the model that this is a new version of some software just to make sure that the API is used correctly." And at 0:16:20: "...structuring the code in a way where one doesn't have to go through like n levels of indirection but maybe just through two levels of indirection makes, yeah, LLM models better at working with that code base."
- Primary source: https://www.youtube.com/watch?v=BGgsoIgbT_Y
- ReAct: Synergizing Reasoning and Acting in Language Models (Yao et al., ICLR 2023): https://arxiv.org/abs/2210.03629
- Gorilla: Large Language Model Connected with Massive APIs (Patil et al., Berkeley, 2023): https://arxiv.org/abs/2305.15334
- Model Context Protocol (Anthropic): standardized tool schemas for LLM consumption: https://modelcontextprotocol.io