The “Agentic Workforce” is not a staffing metaphor; it is an operational reality. AI agents have varying degrees of autonomy. When prompted, they act on your behalf and as an extension of you in the business. With this autonomy come consequences. That’s why trust matters more than prompting skills alone.
This article explores how new trust boundaries between humans and AI agents emerge, what they are made of, and how to make sure they benefit you as an AI-literate individual. Here I bring together the latest on AI trustworthiness, security recommendations from the OWASP Top 10 for Agentic AI, and new ways to test and think about trust boundaries between AI agents and coworkers.

Functionally, the agentic workforce is like you in the workplace, but extended and amplified. Something to be proud of, not afraid of.
Technically, agents like Claude and Joule make up the agentic workforce in the enterprise. Joule, for example, has become the interface for agentic workflows across finance, HR, and procurement.
These AI agents can be deployed in various parts of the business, can be plugged into multiple systems at once, and have varying degrees of autonomy. However, autonomous agents do not replace people. They are extensions of a person’s point of view, from a process and work-taxonomy perspective. Let me explain.
Jennifer, for example, works in Legal. She needs to stay on top of how the company meets compliance requirements. But the product line grows faster than compliance requirements, and Jennifer needs to keep up with both.
She asks an AI agent to ‘find all apps and LLMs currently processing sensitive data in the business’. The AI agent accesses IT inventories, identifies apps and LLMs, examines documentation about data flows and data classifiers, and returns to Jennifer with an answer.
Jennifer then realizes that the data classifiers don’t match across teams. Product has implemented different classifiers for RAG-level data processing (the sensitive data that’s processed ad hoc at prompt time). She asks the agent to create an IT ticket for the new data classifier to be added to existing legal workflows for review.
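To make the mechanics concrete, here is a hedged sketch of how a request like Jennifer’s might decompose into an agent task plan. The tool names and the Task structure are my own illustration, not any vendor’s API:

```python
from dataclasses import dataclass

@dataclass
class Task:
    tool: str   # hypothetical tool the agent would invoke
    goal: str   # what the step is meant to achieve

# The plan mirrors the steps in the story; every tool name is invented.
plan = [
    Task("query_it_inventory", "List apps and LLMs touching sensitive data"),
    Task("read_data_flow_docs", "Extract the data classifiers each app uses"),
    Task("summarize_findings", "Report classifier gaps back to Jennifer"),
    Task("create_it_ticket", "Add the RAG-level classifier to legal review"),
]

for step in plan:
    print(f"{step.tool}: {step.goal}")
```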
In this example, the AI agent represents Legal in Product and Product in Legal. It mediates between the two teams, translating each request into language the other party is familiar with. The agent is also using and changing existing files and processes on Jennifer’s behalf.
This example illustrates what I mean when I say that the agentic workforce is a functional extension of your role.
What are trust boundaries for the new agentic workforce?
The agentic AI workforce is an operational reality and a functional extension of your role. Either way, it needs to be governed. However, the transition to an autonomous agentic state is not binary. It exists along a gradient of autonomy, which directly correlates with the complexity of trust and safety controls required for each agent.
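To make the gradient concrete, here is a rough sketch of how autonomy levels might map to controls. The levels and control names are my own illustration, not an industry standard:

```python
# Illustrative only: more autonomy implies a heavier control stack.
AUTONOMY_CONTROLS = {
    "suggest_only":      ["audit_log"],
    "act_with_approval": ["audit_log", "human_approval"],
    "act_autonomously":  ["audit_log", "approval_for_high_impact_actions",
                          "rollback", "drift_monitoring"],
}

for level, controls in AUTONOMY_CONTROLS.items():
    print(f"{level}: {', '.join(controls)}")
```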
Whilst enterprises carry the responsibility of implementing these controls, people need to upskill themselves on what the controls are and how to use them. Trust works both ways.
So you have people implementing controls, people verifying these controls, and ultimately people using them. You can find these controls along trust boundaries.
So what are trust boundaries?
Trust boundary #1: The conversational interface. With AI agents, this is the conversation and canvas window where prompting, researching, and planning take shape, usually under the guidance of a human.
The markers of trust at this boundary are: visibility into agentic reasoning and task planning; controls to adjust reasoning, planning, and execution steps; and data deletion controls.
I am particularly interested in how data deletion controls are handled. Coming back to my first live conversation with Gemini this morning: I used the camera to show Gemini my heating system, got carried away in the conversation, and forgot to turn the camera off. Hypothetically, it could have captured credit card details from the cards sitting on my table if the camera had pointed that way. That’s sensitive data. What controls do we have to delete data captured by accident? Are we all adequately aware of the risk of sensitive data capture during live conversations?
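As a thought experiment, here is what conversation-level controls could look like if expressed as configuration. The field names are hypothetical; no vendor exposes exactly this schema today:

```python
# Hypothetical conversation-level controls; every field name is invented.
conversation_controls = {
    "show_reasoning_trace": True,          # visibility into agentic reasoning
    "require_step_approval": ["execute"],  # human gate before execution steps
    "retention_days": 0,                   # do not retain live multimodal captures
    "delete_on_request": True,             # user-triggered deletion of any turn
}
```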
Trust boundary #2: The MCP server. The Model Context Protocol (MCP) is a standard for connecting agents to enterprise apps and data. Vendors carry the responsibility of providing a well-maintained, safe MCP server for agentic clients to plug into and generate more reliable results. The mapping between prompts and the green-flagged tools available to users should be explicit to both employees and developers.
This is the trust boundary that gets established before the conversation with users even happens. The protocol creates a trust layer with vendors so that, for example, when Jennifer asks about applications processing sensitive data, the agent correctly identifies functional business apps instead of platforms. Jennifer trusts that the agent’s definition matches hers because instructions relayed through the MCP server ensure an implicit alignment of definitions.
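To ground this, here is a minimal sketch of an MCP server exposing a single green-flagged tool, assuming the official Python MCP SDK (the `mcp` package). The tool name and inventory data are hypothetical:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("it-inventory")

@mcp.tool()
def list_sensitive_data_apps(classifier: str = "PII") -> list[dict]:
    """Return business apps whose data flows carry the given classifier.

    The docstring doubles as the tool description the agent reasons over,
    so Legal's definition of 'sensitive data' is encoded here, not guessed.
    """
    # Hypothetical inventory; a real server would query the IT CMDB.
    inventory = [
        {"app": "invoice-ocr", "classifier": "PII", "flow": "RAG"},
        {"app": "hr-chatbot", "classifier": "PII", "flow": "batch"},
    ]
    return [app for app in inventory if app["classifier"] == classifier]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```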
Trust boundary #3: The model architecture. This one is much more abstract, and it sits closest to the model itself. At this boundary, where vendor-provided models become available, teams negotiate and discuss model weights and configurations. The goal is to set up adequate controls aligned to the OWASP Top 10 for Agentic AI: controls for model drift, input manipulation (prompt injection), and excessive agency.
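As one example of mitigating excessive agency, here is a hedged sketch of a guardrail that restricts an agent to allowlisted tools and routes high-impact actions through a human approval gate. The tool names and hooks are hypothetical:

```python
from typing import Any, Callable

# Actions that change state in a business system get a human gate.
HIGH_IMPACT = {"create_it_ticket"}
ALLOWLIST = {"list_sensitive_data_apps"} | HIGH_IMPACT

def guarded_call(tool: str, args: dict,
                 dispatch: Callable[[str, dict], Any],
                 approve: Callable[[str, dict], bool]) -> Any:
    """Gate every agent tool call before it reaches a business system."""
    if tool not in ALLOWLIST:
        raise PermissionError(f"'{tool}' is outside this agent's mandate")
    if tool in HIGH_IMPACT and not approve(tool, args):
        raise PermissionError(f"Reviewer declined high-impact call '{tool}'")
    return dispatch(tool, args)

# Usage with stand-in hooks; a real gate would notify the task owner.
print(guarded_call(
    "create_it_ticket",
    {"summary": "Add RAG-level data classifier to legal review"},
    dispatch=lambda tool, args: {"status": "created", **args},
    approve=lambda tool, args: True,  # stand-in: auto-approve for the demo
))
```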
Trust boundaries are the surface where trust and safety controls become visible and available for verification. Trust boundaries for the agentic workforce are non-negotiable because the agents’ ability to act makes them highly consequential for the business. And because agents act on your behalf, under your prompting and supervision, they are highly consequential for individuals, too.
For the agentic workforce, AI literacy is critical.
It helps you make the most of agent-driven visibility. You can embed your POV, results, recommendations, and work philosophy in every conversation with colleagues. You can ask an agent to create and embed review workflows that work for both product and legal representatives.
If you understand the minimum safeguards and emerging trust boundaries, you can safely direct and supervise the agentic workforce.


