UX & Collaboration · Validated in production

Spectrum of Control / Blended Initiative

By Nikola Balic (@nibzard)


Problem

AI agents for tasks like coding can offer many levels of assistance, from simple completions to complex, multi-step operations. A one-size-fits-all approach to agent autonomy serves neither the diversity of users nor the varying complexity of tasks. Users need to shift fluidly between direct control and delegating work to the agent.


Solution

Design the human-agent interaction to support a spectrum of control, allowing users to choose the level of agent autonomy appropriate for the current task or their familiarity with the codebase. This involves providing multiple modes or features for interaction:

  • Low Autonomy (High Human Control): Simple, inline assistance like tab-completion for code, where the human is primarily driving and the AI augments their input.
  • Medium Autonomy: Agent assistance for more contained tasks, like editing a selected region of code or an entire file based on a specific instruction (e.g., "Command K" functionality). The human defines the scope and the high-level goal.
  • High Autonomy: Agent takes on larger, multi-file tasks or complex refactorings, potentially involving multiple steps, with less direct human guidance on each step (e.g., an "Agent" feature).
  • Very High Autonomy (Asynchronous): Background agents that can take on entire complex tasks like implementing a feature or fixing a set of bugs and creating a pull request, operating largely independently.

Users can seamlessly switch between these modes depending on their needs, allowing for a "blended initiative" where both human and AI contribute effectively.
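The spectrum above can be sketched as an ordered autonomy ladder with a routing function. This is a minimal illustration, not an implementation from any particular product; the level names and the `requires_review` threshold are assumptions chosen to mirror the four modes described.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Hypothetical autonomy ladder mirroring the four modes above."""
    TAB_COMPLETION = 1    # inline completions; the human drives every keystroke
    SCOPED_EDIT = 2       # edit a selected region/file from an instruction
    AGENT = 3             # multi-file, multi-step changes in the foreground
    BACKGROUND_AGENT = 4  # asynchronous: whole feature or bugfix -> pull request

def requires_review(level: AutonomyLevel) -> bool:
    # Higher autonomy widens the blast radius, so gate it behind human review.
    return level >= AutonomyLevel.AGENT

def handle_request(level: AutonomyLevel, instruction: str) -> str:
    """Route a user request to the interaction path matching its autonomy level."""
    if level is AutonomyLevel.TAB_COMPLETION:
        return f"inline-complete: {instruction}"
    if level is AutonomyLevel.SCOPED_EDIT:
        return f"edit-selection: {instruction}"
    if level is AutonomyLevel.AGENT:
        return f"plan-and-edit (review required): {instruction}"
    return f"enqueue-background-task -> pull request: {instruction}"
```

Because the levels are ordered, policies such as "anything at or above AGENT needs review" fall out of a single comparison rather than a mode-by-mode special case.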


How to use it

  • Use this when humans and agents share ownership of work across handoffs.
  • Start with clear interaction contracts for approvals, overrides, and escalation.
  • Capture user feedback in structured form so prompts and workflows can improve.
  • Implement mode-switching controls (keyboard shortcuts, UI toggles) for explicit autonomy level selection.
  • Pair with human-in-the-loop approval at higher autonomy levels for high-risk operations.
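One way to sketch the interaction contract and structured-feedback points above: a small approval gate that pauses high-autonomy or high-risk actions, plus a feedback record that can later feed prompt and workflow tuning. The field names and risk labels here are illustrative assumptions, not part of the pattern's definition.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    autonomy_level: int  # 1 (tab completion) .. 4 (background agent); assumed scale
    risk: str            # "low" or "high", e.g. whether it touches prod config

def needs_human_approval(action: ProposedAction) -> bool:
    """Interaction contract: high-autonomy or high-risk work pauses for a human."""
    return action.autonomy_level >= 3 or action.risk == "high"

def record_feedback(action: ProposedAction, approved: bool, note: str) -> dict:
    """Capture the human's decision in structured form for later analysis."""
    return {
        "action": action.description,
        "autonomy_level": action.autonomy_level,
        "approved": approved,
        "note": note,
    }
```

Logging every approval or override this way turns ad-hoc human judgment into data the team can use to decide when a task class is safe to promote to a higher autonomy level.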

Trade-offs

  • Pros: clearer human-agent handoffs; trust built through progressive autonomy; errors contained at lower autonomy levels; control level matched to context.
  • Cons: multiple modes can confuse users if not clearly presented; several interaction paths must be built and maintained; users may struggle to choose the appropriate autonomy level.

Example

```mermaid
flowchart LR
    subgraph "Human Control"
        A[Tab Completion]
    end
    subgraph "Shared Control"
        B[Command K - Edit Region/File]
    end
    subgraph "Agent Control"
        C[Agent Feature - Multi-File Edits]
    end
    subgraph "Autonomous Agent"
        D[Background Agent - Entire PRs]
    end
    A --> B
    B --> C
    C --> D
    D --> A
```

References