

We can now build autonomous systems that pursue meaningful, high-level goals. Yet for every inspiring success story, there is a deep-seated fear of unpredictability. What happens when an agent produces the wrong output for a goal or makes a costly error? This fear is the single biggest barrier to enterprise adoption, and it raises the question: how do we embrace agentic AI while applying proven risk management principles?
The real breakthrough is recognizing that agency isn't a binary switch. The choice between a predictable automation and an uncontrollable agent is a false one: there is a whole spectrum in between, and the strategic decision becomes where to place your agent along it.
The Four Levers of Agency
We can control an agent’s position on this spectrum by manipulating four levers of governance. Think of them as a layered set of controls, moving from soft guidance to hard-coded rules, each designed to provide a more robust backstop than the last. Understanding this hierarchy is the first step to building with confidence.
- Suggestion: The agent’s instructions, which provide high-level natural language guidance.
- Access: The specific set of tools an agent is permitted to use, defining its fundamental capabilities.
- Constraint: The architectural design of each tool, which enforces rules through specific inputs and outputs.
- Safety Net: The layer of human oversight that provides real-time approval and intervention.
To make these levers concrete, we will illustrate them using Google’s open-source Agent Development Kit, or ADK. The ADK is designed around this philosophy of modular control, providing specific components that allow developers to deliberately set the agency dial. For instance, choosing a SequentialAgent locks the system into a low-agency workflow, while using an LlmAgent enables high-agency, dynamic decision-making.
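Here is a minimal sketch of the two ends of that dial, assuming the google-adk Python package; the class and parameter names follow recent ADK releases and may differ in yours, and the agent names, instructions, and model string are purely illustrative.

```python
# Minimal sketch of the agency dial, assuming the google-adk Python package
# (pip install google-adk). Names and instructions are illustrative only.
from google.adk.agents import LlmAgent, SequentialAgent

# Low agency: a fixed pipeline. The developer, not the model, decides the
# order of steps.
extract = LlmAgent(
    name="extract_fields",
    model="gemini-2.0-flash",
    instruction="Extract the invoice number and total from the input text.",
)
summarize = LlmAgent(
    name="summarize_fields",
    model="gemini-2.0-flash",
    instruction="Summarize the extracted fields in one sentence.",
)
pipeline = SequentialAgent(name="invoice_pipeline", sub_agents=[extract, summarize])

# High agency: a single LlmAgent that plans its own steps and decides which
# tools to call (tools are added in the later examples).
assistant = LlmAgent(
    name="invoice_assistant",
    model="gemini-2.0-flash",
    instruction="Help the user resolve invoice questions end to end.",
)
```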
Lever 1: Suggestion
The first and most brittle layer of control is Suggestion. This comes from the agent’s natural language instructions, provided in a system prompt to define its persona, goal, and constraints. While essential for steering behavior, these instructions are fundamentally suggestions. Think of this as the first line of defense: necessary for direction, but wholly insufficient on its own for enterprise-grade safety.
In Google’s ADK, these instructions are passed to the agent at construction time (via the instruction parameter in recent Python releases), as sketched below.
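In this minimal sketch (again assuming the google-adk Python package, with a made-up persona and agent name), everything protective lives only in natural language, which is exactly why Lever 1 alone is not enough:

```python
# Lever 1 (Suggestion): the guardrails below are only words in a prompt.
# Nothing architecturally prevents the model from drifting away from them.
from google.adk.agents import LlmAgent

support_agent = LlmAgent(
    name="support_agent",
    model="gemini-2.0-flash",
    instruction=(
        "You are a customer-support assistant for Acme Corp. "
        "Answer billing questions politely and concisely. "
        "Never promise refunds; escalate refund requests to a human."
    ),
)
```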
Lever 2: Access
The next layer provides the first truly robust, architectural backstop. Access is determined by the specific set of tools an agent is permitted to use, which defines its fundamental capabilities. This is where our layered defense becomes critical: even if a misunderstood suggestion (a failure at Lever 1) leads an agent to attempt something destructive, the action is architecturally impossible if the agent lacks the tool. An agent that is never given a delete_database tool can never delete the database. This is the security principle of least privilege in action: grant an agent only the absolute minimum capabilities it needs.
In the ADK, an agent’s capabilities are explicitly defined by the list of tools passed to it during creation.
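As a minimal sketch of Lever 2 (assuming the google-adk Python package, which accepts plain Python functions as tools), the read_invoice and delete_database functions below are hypothetical stand-ins; the point is that the dangerous one is simply never handed to the agent:

```python
# Lever 2 (Access): capability is defined by the tools list, nothing more.
from google.adk.agents import LlmAgent

def read_invoice(invoice_id: str) -> dict:
    """Return the stored record for a single invoice (read-only)."""
    return {"invoice_id": invoice_id, "status": "paid", "total": 120.0}

def delete_database() -> None:
    """Destructive operation that this agent is deliberately NOT given."""
    raise RuntimeError("Never wired into any agent.")

billing_agent = LlmAgent(
    name="billing_agent",
    model="gemini-2.0-flash",
    instruction="Answer questions about invoices using the tools provided.",
    tools=[read_invoice],  # least privilege: read access only, no delete_database
)
```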
Lever 3: Constraint
While Lever 2 determines which tools an agent has, Constraint adds another defensive layer by defining precisely how those tools can be used. This lever enforces rules through the architectural design of each tool’s inputs and outputs. A failure at this layer is more subtle: the agent has the right tool but tries to use it improperly. By tightly scoping a tool’s parameters, you can prevent these errors before they happen. The ADK formalizes this by having developers wrap their Python functions in a FunctionTool class, making the function’s signature and docstring available to the agent.
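The sketch below illustrates Lever 3 with a hypothetical issue_refund tool (again assuming the google-adk Python package): the typed signature and docstring tell the agent how the tool may be called, and the function body enforces a hard cap in code regardless of what the model asks for.

```python
# Lever 3 (Constraint): scope the inputs and enforce limits inside the tool.
from google.adk.tools import FunctionTool

def issue_refund(invoice_id: str, amount: float) -> dict:
    """Refund part of an invoice.

    Args:
        invoice_id: The invoice to refund.
        amount: Refund amount in USD; must be greater than 0 and at most 50.
    """
    if not 0 < amount <= 50:
        # The limit is enforced in code, not merely suggested in the prompt.
        return {"status": "rejected", "reason": "amount exceeds the $50 limit"}
    return {"status": "refunded", "invoice_id": invoice_id, "amount": amount}

refund_tool = FunctionTool(func=issue_refund)
```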
Lever 4: Safety Net
The final and most robust layer of defense is the Safety Net: direct human oversight. This layer provides real-time approval and intervention, acting as the ultimate backstop for any action that is too risky or nuanced for pure automation. Far from being a sign of failure, Human-in-the-Loop (HITL) patterns are a critical feature for managing risk when the preceding architectural layers are insufficient. Modern frameworks provide direct implementations for these patterns. For example, ADK includes a built-in tool confirmation flow that can be enabled for any tool, which is a direct, code-level implementation of the “Approval Gate” pattern.
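To keep the idea concrete without reproducing the ADK confirmation API itself, here is a framework-agnostic sketch of the Approval Gate pattern: a hypothetical wrapper that pauses a risky tool until a human approves the call. ADK’s built-in tool confirmation flow implements the same pattern natively.

```python
# A hypothetical, framework-agnostic Approval Gate: every call to the wrapped
# tool must be approved by a human before it executes.
from typing import Callable

def with_approval_gate(tool: Callable, approve: Callable[[str], bool]) -> Callable:
    """Wrap a tool so each invocation requires explicit human approval."""
    def gated(*args, **kwargs):
        summary = f"{tool.__name__}(args={args}, kwargs={kwargs})"
        if not approve(summary):
            return {"status": "blocked", "reason": "human reviewer rejected the action"}
        return tool(*args, **kwargs)
    gated.__name__ = tool.__name__
    gated.__doc__ = tool.__doc__
    return gated

def console_approval(summary: str) -> bool:
    """Simplest possible reviewer: a yes/no prompt on the console."""
    return input(f"Approve {summary}? [y/N] ").strip().lower() == "y"

# Usage with the hypothetical refund tool from the Lever 3 sketch:
# gated_refund = with_approval_gate(issue_refund, console_approval)
```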
Case Studies from the Field
These risk management principles are clearly visible in how agentic AI is successfully deployed across industries today. Each use case represents a deliberate choice about where to position the agent on the spectrum.
In high-stakes fields like finance and healthcare, the dial is turned low for execution. Financial institutions run complex risk models, allowing agents to perform deep analysis while requiring human approval before acting on those insights. This balances powerful analytics with stringent safety and regulatory compliance.
In collaborative domains like software engineering, the dial is set to the middle. Here, the goal is to augment human creativity, not replace it. Tools like the Gemini CLI act as partners, given high agency to generate code, suggest refactors, and write tests, but the human developer always provides the final validation, creating an iterative loop of creation and refinement.
In open-ended fields like scientific research, the dial is turned to the maximum to foster discovery. Biotech firms build AI-enabled maps of human biology, empowering agents to autonomously sift through petabytes of data to find promising treatments for rare diseases. In this context, high agency is not a risk but a requirement for breakthrough innovation.
Your Playbook for the Agentic Era
Navigating the agentic journey requires a disciplined, architecture-first approach. The fear of uncontrollable agents dissolves when we recognize that agency is a dial that we can deliberately set. By layering your defenses, you can build with confidence.
Where does your project sit on this agency spectrum? Continue the discussion with me on LinkedIn, X, and BlueSky.
Source Credit: https://medium.com/google-cloud/the-agency-spectrum-an-ai-risk-management-framework-4d02e536a406