The Headline
Source: VentureBeat
Translation: Every enterprise security system in existence was designed around the assumption that the actor is a human being who can be trained, held accountable, and eventually logged off. That assumption is now wrong, and the consequences are only beginning to surface.
What’s Actually Happening
AI agents are proliferating inside enterprise environments faster than security teams can instrument or govern them. These agents log into systems, fetch sensitive data, call tools, execute workflows, and take action. This happens autonomously, continuously, and at a scale no human user could match. The identity and access management systems designed to control who does what inside an organization were built on a set of assumptions that agentic AI violates entirely.
Those assumptions are specific: that every actor is a human being with consistent behavior, clear intent, and direct accountability; that privileges are relatively stable over time; that anomalous behavior can be detected because human patterns are recognizable; and that every identity traces back to a specific person who can be held responsible. AI agents break all four simultaneously. They can be copied, forked, scaled horizontally, left running indefinitely, and they operate without the moral code or contextual judgment that human accountability depends on.
The result is an identity layer that was designed to enforce trust but is now being operated by actors it was never designed to see.
The Distortion
The primary distortion in enterprise AI adoption is the framing of security as an implementation detail. Something to be addressed after deployment, through bolt-on governance and aftermarket controls. The article’s core argument is that this sequencing is the vulnerability. AI agents are not a new feature being added to an existing security model. They are a new class of actor that invalidates the model’s foundational assumptions.
The secondary distortion is the language of control. Organizations speak of “governing” AI agents, “instrumenting” their behavior, “monitoring” their access. This language implies that existing frameworks, applied more rigorously, will be sufficient. They will not. A static privilege model cannot govern an agent that requires different permission levels at different moments within a single workflow. A behavior-based anomaly detection system cannot flag an agent that operates continuously across multiple systems, because continuous cross-system operation is what legitimate agents do. The tools built for the threat model of human users produce false negatives and false positives simultaneously when applied to agents ending up missing real risks and flagging legitimate workflows.
The deepest distortion is the accountability gap. Legacy identity systems assume that when something goes wrong, there is a person responsible. Agents blur this entirely. When an agent acts under delegated authority, is duplicated, modified, and left running long after its original purpose has been fulfilled, under whose authority is it operating? That question currently has no reliable answer inside most enterprise environments. That is not a governance gap. It is a structural absence.
The Incentive
For AI vendors, the incentive is deployment velocity. Every friction point in enterprise AI adoption (e.g. security review, identity governance, access controls etc.) is a deal cycle extended, a pilot delayed, a contract at risk. The market pressure is to ship agents into production environments and let security teams catch up. The asymmetry is deliberate: moving fast creates facts on the ground that are difficult to reverse.
For enterprise security vendors, the incentive is category creation. Identity and access management is a mature, commoditized market. Agentic AI represents an opportunity to reframe the entire category — to argue, as the article does, that identity is now the fundamental control plane for AI, not one security component among many. The companies that define the new architecture will own the new market. The urgency in this conversation is real, but it is also commercially convenient for the vendors best positioned to sell the solution.
For IT and security leaders, the incentive is a deeply uncomfortable one: admitting that the threat model they have built their careers around is no longer sufficient. The organizations that move fastest on this are the ones whose leaders can tolerate that admission. Most cannot, which is why most enterprises are extending legacy identity models to cover agents rather than rethinking the architecture from the ground up.
For boards and executives, the incentive is plausible deniability. AI adoption is strategically mandatory. Missing it carries real career risk. The security implications are complex, slow-moving, and not yet producing visible incidents at scale. In that environment, the rational executive behavior is to deploy and monitor, not to delay and redesign. The reckoning, when it comes, will be attributed to the sophistication of the attack rather than the inadequacy of the governance.
The Consequence
The near-term consequence is an expanding attack surface that most organizations cannot currently see. Agents operating through inherited or shared credentials, spinning up dynamic identities, ingesting documentation and configuration files as part of their decision-making is already happening inside enterprise environments right now. And it is largely invisible to conventional identity and access management tools. The threat is not theoretical. Prompt injection attacks (think where a seemingly harmless README contains concealed directives that trick an agent into exposing credentials) are already a documented risk in development environments.
The structural consequence is an accountability vacuum that compounds over time. As agents are duplicated, modified, and left running beyond their original scope, the chain of delegated authority becomes impossible to reconstruct. When an incident occurs like a data breach, an unauthorized transaction, or a compliance violation, the audit trail required to determine what happened, under whose authority, and through what chain of actions will not exist in a form current systems can produce. Regulatory exposure in that environment is significant and growing.
The longer-term consequence is a bifurcation of enterprise AI outcomes that mirrors the strategic bifurcation described in last week’s Fast Company piece. Organizations that rebuild their identity architecture to account for context, delegation, and real-time accountability will be able to deploy agents at scale with predictable authority and enforceable trust boundaries. Organizations that extend legacy models will deploy agents at scale with unmanaged risk and likely end up moving faster, with greater automation, toward outcomes they cannot fully audit or control. The gap between them will not be visible until an incident makes it legible. By then, the structural vulnerability will be deeply embedded.
The Calibration
The useful reframe here is conceptual. The question is what identity means when the actor is software that can be copied, scaled, and left running indefinitely under delegated human authority.
That question requires separating three things that legacy systems collapse into one: who invoked the agent, what authority was delegated to it, and what it actually did. Current systems track the third poorly and the first two barely at all. Building systems that capture all three in real time, across complex multi-agent workflows, at the speed agents operate, is not an upgrade to existing architecture. It is a replacement of the foundational assumption on which that architecture rests.
The calibration for security leaders is to resist the instinct to treat this as a compliance problem with a checklist solution. Zero Trust architecture, as NIST defines it, already requires that all non-human entities be considered untrusted until authenticated and authorized. Most enterprises have not implemented that requirement for their existing service accounts, let alone for AI agents. The gap between policy and practice is where the risk lives.
The broader calibration is one that cuts across every AI story we have covered: the infrastructure is moving faster than the governance. In data centers, the bottleneck is permitted land. In private credit, it is origination discipline. In schools, it is the capacity for human attention. In enterprise security, it is the identity layer. In every case, the technology is not waiting for the institution to catch up. And in every case, the cost of that lag will be paid. The only question is when, and by whom.
Next calibration: 1 pm (GMT). Stay sharp.



