AI agents are not just a new tool category. They are a new threat class, and the security tools you already own were built for a world where humans took actions and machines waited for instructions.
Most security programs treated generative AI as a data leakage problem. An employee uploads a contract to ChatGPT. A prompt contains PII. The risk was human-to-model: a person made a choice, and the model received data it should not have. That framing is incomplete in 2026.
The risks of agentic AI operate in the opposite direction. The threat is model-to-system. An agent receives a goal, selects tools, authenticates to downstream applications, and executes a sequence of actions without a human approving each step. The agent is not waiting for instructions. It is issuing them.
This shift matters because every control layer you have was designed to govern human decisions. Zero Trust policies evaluate user identity. DLP rules scan human-initiated uploads. Access reviews ask managers to certify employee permissions. None of these controls have a concept of an AI agent acting on behalf of a user, using credentials the user never directly touched, inside applications the user may not even have access to.
Agents also inherit credentials. A model does not hold a bearer token or an OAuth grant. An agent does. That token persists. It does not expire when the user logs out. It does not trigger an MFA challenge. It does not appear in your identity governance platform's access review queue. It operates as a machine identity, and machine identities now outnumber human identities in most enterprise environments by a factor of 25 to 50.
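To make that concrete, here is a minimal sketch of the kind of check an orphaned-credential review requires, assuming a hypothetical inventory of agent-held credentials and a feed of disabled directory accounts. The field names are illustrative, not any vendor's schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical record for a credential held by an agent; fields are illustrative.
@dataclass
class AgentCredential:
    agent_id: str
    credential_type: str   # e.g. "oauth_refresh_token", "api_key"
    owner_account: str     # the human who created or consented to the grant
    issued_at: datetime
    last_used_at: datetime

def find_orphaned_credentials(credentials, disabled_accounts):
    """Credentials still live after their owning human account was disabled."""
    return [c for c in credentials if c.owner_account in disabled_accounts]

creds = [
    AgentCredential("pipeline-reporter", "oauth_refresh_token", "j.smith",
                    issued_at=datetime(2025, 3, 1, tzinfo=timezone.utc),
                    last_used_at=datetime(2026, 1, 10, tzinfo=timezone.utc)),
]
# The credential keeps working long after j.smith's directory account is gone.
print(find_orphaned_credentials(creds, disabled_accounts={"j.smith"}))
```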
The predecessor framing of "AI security" addressed the model layer. The current threat class lives in the action layer, and it requires a different detection framework entirely.
The following chain is a composite drawn from patterns across enterprise Salesforce and Copilot Studio deployments. It illustrates how individual risks combine into a critical incident.
Step 1. A sales operations manager builds a Copilot Studio agent to automate pipeline reporting. The agent uses a Salesforce connector configured in maker mode, which means every call runs on the manager's admin-level Salesforce credentials.
Step 2. The manager leaves the company. IT disables the manager's Active Directory account. The Copilot agent is not in any IT-managed inventory. The Salesforce connector credentials remain embedded and active. The agent is now orphaned.
Step 3. The agent was configured as org-wide accessible. Any employee in the organization can invoke it via the Teams interface. No one removed this setting when the creator left.
Step 4. A contractor with no Salesforce provisioning invokes the agent and asks it to pull a list of all enterprise accounts with contract values above a certain threshold. The agent executes the query using the embedded admin credentials. The contractor receives data they have no right to access.
Step 5. The agent's response includes data tagged with Microsoft Information Protection sensitivity labels. The contractor copies the output to a personal cloud storage account. The agent completed a legitimate tool call. The data movement looks like normal agent activity.
Step 6. The action chain touched four systems: Teams, Copilot Studio, Salesforce, and the contractor's personal storage. No single system logged the full sequence. No alert fired because each individual action was within the agent's configured permissions.
Step 7. The incident surfaces three weeks later during a routine audit. By then, the data has been accessed multiple times. The blast radius includes every record the agent's admin credentials could reach.
Real-World Toxic Combination Pattern
Agent profile: Org-wide accessible, maker mode connector with admin Salesforce credentials, creator account disabled (orphaned), and sensitive data access confirmed.
Individual severity: Each factor scores medium in isolation.
Combined severity: Critical.
Why it matters: This combination means any employee, contractor, or compromised account that can reach the agent's Teams interface can extract admin-level Salesforce data. The agent is functioning as designed. The risk is invisible to every tool that sees only configuration, not runtime behavior.
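A rough sketch of how that compounding could be expressed, using illustrative factor names rather than any product's scoring model:

```python
# Illustrative risk-factor flags for a single agent; the names are hypothetical.
agent = {
    "org_wide_accessible": True,
    "maker_mode_admin_credentials": True,
    "creator_account_disabled": True,
    "sensitive_data_access": True,
}

# Each factor alone rates medium; this known combination escalates to critical.
TOXIC_COMBINATION = {
    "org_wide_accessible",
    "maker_mode_admin_credentials",
    "creator_account_disabled",
}

def combined_severity(agent_flags: dict) -> str:
    active = {name for name, present in agent_flags.items() if present}
    if TOXIC_COMBINATION <= active:
        return "critical"   # compounding, not additive
    return "medium" if active else "low"

print(combined_severity(agent))   # -> "critical"
```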
Security teams are not failing because they are not paying attention. They are failing because their tools were built for a different threat model.
What IAM sees: User identities, group memberships, access certifications, MFA status. What IAM misses: the agent's effective authority inside each connected SaaS application, the relationship between the invoking user's permissions and the agent's embedded credentials, and whether the agent's token has been used to access data the invoker should not reach.
What SIEM sees: Log events from connected sources. What SIEM misses: the correlation between the invoking user's identity, the agent's maker credentials, and the downstream data access. MCP server interactions do not appear in SIEM logs. Agent-to-agent communication across platforms generates no unified log record. The sequence of actions that constitutes an attack chain is scattered across four separate log sources with no native join key.
What posture tools see: Theoretical configuration. What posture tools miss: runtime behavior. A posture tool can tell you that an agent has a maker mode connector. It cannot tell you whether that connector was used, who invoked it, what data was returned, or whether the invoker had any right to that data. This is the ghost-chasing problem. Security teams review theoretical configuration signals with no runtime evidence of what actually happened.
The signal gap is structural. Posture tools see the agent as it is set up. Runtime tools see the agent as it operates. Most organizations have posture coverage and no runtime coverage. That means they know what could happen, not what did happen. Effective authority is what the agent can actually do inside each connected application after all entitlements resolve, and it is invisible to every tool that does not operate at runtime.
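To illustrate the missing join key, here is a sketch of the best-effort correlation a security team ends up writing by hand, assuming the events have already been pulled and normalized from each source. The event fields and the two-minute window are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

t0 = datetime(2026, 1, 10, 14, 3, tzinfo=timezone.utc)

# Hypothetical normalized events from three separate log sources; none share a native ID.
teams_invocation = {"ts": t0, "source": "teams",
                    "user": "contractor-42", "agent": "pipeline-reporter"}
copilot_runs = [{"ts": t0 + timedelta(seconds=4), "source": "copilot_studio",
                 "agent": "pipeline-reporter", "connector": "salesforce"}]
sfdc_queries = [{"ts": t0 + timedelta(seconds=6), "source": "salesforce",
                 "api_user": "maker:j.smith", "objects": ["Account"], "rows": 1800}]

def stitch_chain(invocation, runs, queries, window=timedelta(minutes=2)):
    """Best-effort reconstruction: join on agent name plus time proximity,
    because the three sources share no transaction ID."""
    chain = [invocation]
    chain += [r for r in runs
              if r["agent"] == invocation["agent"]
              and abs(r["ts"] - invocation["ts"]) <= window]
    chain += [q for q in queries
              if any(abs(q["ts"] - step["ts"]) <= window for step in chain[1:])]
    return chain

for step in stitch_chain(teams_invocation, copilot_runs, sfdc_queries):
    print(step["source"], step["ts"].time())
```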
Addressing the risks of agentic AI requires four capabilities working together. No single tool delivers all four, and no program is complete without all four.
1. Complete AI Agent Inventory
You cannot govern what you cannot see. The starting point for any agentic security program is a single pane of glass across every AI platform in your environment: who built each agent, when it was last used, what SaaS connections it holds, what MCP servers it connects to, and whether the creator account is still active. This inventory must cover sanctioned and unsanctioned agents, including shadow agents deployed by business users without security review. An AI agent risk assessment is the fastest way to understand the current state of your environment.
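As a sketch of what one inventory record might need to capture, assuming a unified store built across platforms, here is an illustrative data shape. The field names are assumptions, not any vendor's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class AgentInventoryRecord:
    """Illustrative fields for a single agent on any platform; not a vendor schema."""
    agent_id: str
    platform: str                     # e.g. "copilot_studio", "agentforce"
    creator: str
    creator_account_active: bool
    last_invoked_at: Optional[datetime]
    saas_connections: List[str] = field(default_factory=list)  # e.g. ["salesforce"]
    mcp_servers: List[str] = field(default_factory=list)
    sanctioned: bool = False          # passed security review or not
    org_wide_accessible: bool = False

def shadow_or_orphaned(inventory):
    """Agents that never passed review, or whose creator account is gone."""
    return [a for a in inventory if not a.sanctioned or not a.creator_account_active]

record = AgentInventoryRecord(
    agent_id="pipeline-reporter",
    platform="copilot_studio",
    creator="j.smith",
    creator_account_active=False,
    last_invoked_at=datetime(2026, 1, 10),
    saas_connections=["salesforce"],
)
print(shadow_or_orphaned([record]))
```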
2. Runtime Visibility Into Effective Authority
Configuration visibility is necessary but not sufficient. Security teams need to know what agents actually do at runtime: which users invoke them, what data they access, what tool calls they execute, and whether any of that activity is policy-aligned. This requires correlating agent configuration with SaaS entitlements, identity context, and real-time behavior into a single picture of effective authority. Runtime visibility is what separates evidence-based security from ghost chasing.
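One way to picture that correlation: compare what the invoking user could reach directly with what the agent's credentials can reach. The sketch below assumes both sets of entitlements have already been resolved; the permission strings are hypothetical.

```python
def authority_gap(invoker_entitlements: set, agent_effective_entitlements: set) -> set:
    """Objects and actions the agent can reach that the invoking user could not
    reach directly. A non-empty result is the escalation surface of this invocation."""
    return agent_effective_entitlements - invoker_entitlements

invoker = {"salesforce:Opportunity:read_own"}
agent = {"salesforce:Opportunity:read_all",
         "salesforce:Account:read_all",
         "salesforce:Contract:read_all"}

# Everything printed here is data the invocation exposes beyond the user's own access.
print(authority_gap(invoker, agent))
```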
3. Deterministic Guardrails for Probabilistic Agents
AI agents are probabilistic by design. They select actions based on context and probability. Your access controls cannot be probabilistic. Deterministic guardrails apply fixed, predictable enforcement rules to dynamic agent behavior: blocking maker mode escalation before it completes, flagging org-wide accessible agents with sensitive data connections, and enforcing least privilege at the point of action rather than after the fact. Probabilistic agents require deterministic guardrails. This is not a philosophical preference. It is the only architecture that prevents action chains from completing before detection fires. For more on this framework, see AI agent governance.
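A minimal sketch of what a deterministic guardrail looks like in practice, with the rule names and fields as illustrative assumptions rather than a product API:

```python
def evaluate_guardrail(action: dict):
    """Deterministic check: the same input always yields the same allow/block decision,
    evaluated before the tool call completes."""
    # Block tool calls that run on maker (creator) credentials when the invoker
    # lacks direct access to the target system.
    if action["runs_as"] == "maker_credentials" and not action["invoker_has_direct_access"]:
        return False, "blocked: maker-credential escalation"
    # Block org-wide accessible agents from returning sensitivity-labeled data.
    if action["agent_org_wide_accessible"] and action["returns_labeled_data"]:
        return False, "blocked: org-wide agent returning labeled data"
    return True, "allowed"

allowed, reason = evaluate_guardrail({
    "runs_as": "maker_credentials",
    "invoker_has_direct_access": False,
    "agent_org_wide_accessible": True,
    "returns_labeled_data": True,
})
print(allowed, reason)   # -> False blocked: maker-credential escalation
```

The point of the sketch is the fixed decision path: no probability, no model judgment, and the block lands at the point of action rather than after the data has moved.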
4. Machine Identity Governance
AI agents are non-human identities. They hold tokens, execute actions, and make access decisions. Every existing insider risk program covers human behavior. None of them cover machine behavior. Machine identity governance extends your identity framework to cover agent credentials, token lifecycle, delegation chains, and the relationship between agent permissions and invoker permissions. The bearer token problem, where possession of a token grants full authority with no verification of who holds it, is the technical foundation of machine insider risk. The Salesloft-Drift and Gainsight incidents, where attackers used stolen bearer tokens to access more than 700 organizations' Salesforce environments without triggering authentication alerts, demonstrate the scale of the exposure. Addressing it requires treating agents as first-class identity subjects, not as tools that humans use. See bearer tokens explained: the hidden risk behind your AI Agent strategy for the technical detail.
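The bearer token problem can be summarized in a few lines: the downstream system verifies the token's own claims, not the identity of whoever presents it. This is a simplified sketch, not any specific OAuth implementation.

```python
from datetime import datetime, timezone

def authorize_request(token: dict, presented_by: str) -> bool:
    """Pseudo-verification: the decision depends only on the token's own claims.
    Note what is absent: no MFA challenge, no check of who `presented_by` is."""
    return (
        token["signature_ok"]
        and token["expires_at"] > datetime.now(timezone.utc)
        and "salesforce.read" in token["scopes"]
    )

stolen = {"signature_ok": True,
          "expires_at": datetime(2027, 1, 1, tzinfo=timezone.utc),
          "scopes": {"salesforce.read"}}

# An attacker presenting a stolen-but-valid token is indistinguishable from the agent.
print(authorize_request(stolen, presented_by="attacker"))   # -> True
```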
For teams securing specific platforms, Microsoft Copilot agent defense and Salesforce Agentforce security are the highest-priority starting points given the privilege escalation patterns documented across enterprise deployments. Note: deterministic runtime guardrails are generally available on Microsoft Copilot today, with expanded platform coverage on the roadmap.
The risks of agentic AI are not theoretical. They are operational, they are compounding, and they are invisible to the tools most security teams currently rely on. Agents inherit credentials, chain actions across systems, and move data at speeds that make human-pattern detection irrelevant. The toxic combinations that create critical exposures are already present in most enterprise environments. They are just not visible yet.
The path forward is clear even if the work is not easy. Build a complete inventory of every agent in your environment. Move from theoretical configuration review to runtime visibility into effective authority. Apply deterministic guardrails to the highest-risk agent configurations. Extend your identity governance program to cover machine identities with the same rigor you apply to human accounts.
Configuration is not reality. Runtime truth is the only foundation for effective agentic AI security, and the time to build that foundation is before the next incident surfaces it for you.
The highest-severity risks are maker mode credential inheritance enabling privilege escalation, toxic combinations of medium-severity risk factors on a single agent, and orphaned agents running with persistent credentials after their creator accounts are disabled. These risks are critical because they operate within the agent's intended design. Nothing is technically broken. The access controls were bypassed by architecture, not by exploit.
Traditional AI security focused on the human-to-model interaction: what data a user sends to a model, and what the model returns. Agentic AI risks operate in the opposite direction. The agent takes actions autonomously, authenticates to downstream systems using embedded credentials, and executes multi-step workflows without human checkpoints at each step. The threat is model-to-system, not human-to-model.
IAM tools govern human identity lifecycle events. They have no concept of an agent's effective authority inside a connected SaaS application, or the relationship between an invoking user's permissions and the agent's embedded credentials. SIEM tools correlate log events, but agent action chains scatter their evidence across multiple disconnected log sources with no native join key. Neither tool operates at runtime with the context needed to distinguish legitimate agent behavior from privilege escalation or data exfiltration.
A toxic combination occurs when multiple medium-severity risk factors co-exist on a single agent, compounding into a critical-priority exposure. The canonical example is an agent that is org-wide accessible, uses a maker mode connector with admin-level credentials, and has a disabled creator account. Each factor scores medium in isolation. Together, they create a condition where any user in the organization can access admin-level data through the agent, with no active owner to detect or remediate the misuse.
Four capabilities are required: a complete AI agent inventory covering sanctioned and unsanctioned agents across all platforms, runtime visibility into what agents actually do rather than what their configuration says they should do, deterministic guardrails that enforce least privilege on probabilistic agents at the point of action, and machine identity governance that extends your identity framework to cover agent credentials and token lifecycle. None of these capabilities are optional. Each one is a prerequisite for the next.