Mar 16, 2026

Why Human and AI Insiders Demand a New Risk Model

Most insider risk models were built for people. That is now the blind spot.

The next wave of insider risk is not just human, and it is not just AI. It sits in the interplay between the two. A person provides the intent, access, or instruction. An AI agent executes with speed, scale, and authority. Together, they magnify enterprise risk in ways legacy security controls were never designed to contain.

This is the shift many organizations have yet to fully absorb.

AI agents are already operating inside trusted environments — embedded in workflows, connected to sensitive data, and authorized to act. They are not malicious. They do not harbor grievances or form intent. But they do not need to. When combined with human direction, broad permissions, and exposure to untrusted inputs, they can turn small mistakes, poor judgment, or misuse into significant security events.

That is why the question is no longer whether AI will make mistakes. It will. The real question is whether organizations understand the authority they are delegating — and the risks created when human judgment and machine execution operate together inside the enterprise.

The problem is delegated authority

For years, insider risk was largely framed around negligence, compromise (e.g., credential theft), or malicious intent. That framework still matters, but it is no longer sufficient.

AI agents now sit inside enterprise systems with the ability to retrieve information, trigger actions, interact with users, and in some cases communicate externally. They function as insiders in the operational sense that matters most: they are trusted, connected, and capable of acting.

The issue is not that these systems are becoming more intelligent. The issue is that they are being given more authority.

That distinction matters. A human employee may make a poor decision, overshare information, or misuse access. An AI agent can take that same flawed instruction and execute it across multiple systems, at machine speed, without pause. The risk is not just automation. It is amplification.

Why human and AI insider risks are more dangerous together

The greatest risk is not AI acting alone. It is the combination of human intent and machine execution.

This becomes especially dangerous in the context of the Lethal Trifecta:

  • Access to sensitive data
  • Exposure to untrusted external content
  • Ability to communicate or execute externally

When those conditions converge, small errors can turn into enterprise-scale problems. A prompt, an instruction, a poisoned input, or a bad workflow decision can quickly lead to data exposure, policy violations, or actions that are difficult to unwind.
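To make that convergence concrete, here is a minimal Python sketch of how an organization might flag agents that hold all three conditions at once. The AgentProfile fields and the lethal_trifecta helper are hypothetical names used for illustration, not part of any particular product.

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """Hypothetical summary of an agent's effective capabilities."""
    reads_sensitive_data: bool       # e.g., connected to CRM, source code, or HR systems
    ingests_untrusted_content: bool  # e.g., browses the web, reads inbound email
    can_act_externally: bool         # e.g., sends email, calls external APIs, commits code

def lethal_trifecta(agent: AgentProfile) -> bool:
    """All three conditions at once are what turn a useful agent into a high-risk one."""
    return (
        agent.reads_sensitive_data
        and agent.ingests_untrusted_content
        and agent.can_act_externally
    )

# A support copilot that reads customer records, summarizes inbound email,
# and can reply on its own converges on all three conditions.
copilot = AgentProfile(True, True, True)
print(lethal_trifecta(copilot))  # True -> review its permissions before deployment
```

The point of a check like this is not precise risk scoring; it is to make the trifecta visible in an agent inventory before the conditions converge in production.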

Shadow AI compounds the problem. In practice, many unauthorized agent transactions will stem not from outside attackers, but from internal oversharing, unacceptable use, or misguided AI behavior. According to the Ponemon Institute’s 2026 Cost of Insider Risks Global Report, the cost of negligent insiders rose 17% year over year to $10.3 million, with shadow AI identified as one of the main drivers behind that increase.

And the scale is easy to underestimate. Human insider risk was already challenging because people make mistakes, but there are only so many people inside an organization. Machine identities now outnumber human identities by 82 to 1, and 42% of those machine identities have privileged or sensitive access. When that same finite pool of employees is deploying and directing agents across multiple workflows, negligence is no longer linear. It is amplified at machine scale.

Not every AI insider risk looks the same

One of the biggest mistakes organizations can make is treating AI-related insider incidents as a single category. They are not.

In practice, four patterns matter:

  1. Collusion: a malicious human uses a malicious or manipulated agent to accelerate fraud, abuse, or deception.
  2. Adversarial use: a malicious human exploits a non-malicious agent to carry out harmful objectives more efficiently.
  3. Compromised agent: a non-malicious human relies on an agent that has been influenced by poisoned inputs, prompt injection, or memory manipulation.
  4. Misaligned normal use: both human and agent may be acting without harmful intent, but excessive delegation, broad permissions, or poor controls allow mistakes to cascade.

That last category deserves more attention than it gets. Many organizations still equate the absence of malicious intent with safety. But some of the most consequential incidents will begin in exactly that space: trusted people, trusted systems, and insufficient guardrails.
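One way to see why these four patterns cover the space is to treat them as two axes: whether the human intends harm, and whether the agent has been manipulated or compromised. The sketch below is illustrative only; the classify_incident function and its labels are hypothetical.

```python
def classify_incident(human_malicious: bool, agent_compromised: bool) -> str:
    """Map the two axes onto the four insider risk patterns.

    'agent_compromised' covers manipulation via poisoned inputs, prompt
    injection, or memory tampering. Labels are illustrative, not a standard.
    """
    if human_malicious and agent_compromised:
        return "collusion"
    if human_malicious:
        return "adversarial use"
    if agent_compromised:
        return "compromised agent"
    return "misaligned normal use"

# Even the benign quadrant is a named risk pattern, not a safe default:
print(classify_incident(human_malicious=False, agent_compromised=False))
# -> "misaligned normal use"
```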

Why blocking AI is not a strategy

The instinctive response is often to block AI agents or restrict their use until AI governance catches up. That is understandable, but it is not realistic.

Computer-use agents like Claude CoWork and OpenAI Operator are being adopted because they offer real productivity gains. Their spread across the enterprise will continue. A strategy based only on prohibition will not stop adoption; it will simply reduce visibility and push activity further into the shadows.

The better approach is to assume these systems will be used and build AI governance accordingly.

What the new insider risk model requires

A modern insider risk model must treat AI agents as operational actors with credentials, permissions, authority, and observable behavior.

That starts with a few fundamentals (sketched in code after the list):

  • Visibility into all AI agents, sanctioned and unsanctioned
  • Attribution for actions taken by or through agents
  • Access controls tied to the sensitivity of the data agents can touch
  • Continuous monitoring for risky behavior
  • Blocking and remediation when activity crosses policy or risk thresholds
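As a rough illustration of how those fundamentals could be expressed, here is a hypothetical per-agent governance policy sketched in Python. Every field name and threshold is an assumption made for illustration, not a real product's schema.

```python
# Hypothetical per-agent governance policy; names and values are illustrative.
AGENT_POLICY = {
    "inventory": {                    # visibility: every agent, sanctioned or not
        "require_registration": True,
        "flag_unsanctioned": True,
    },
    "attribution": {                  # tie every agent action to a human sponsor
        "require_human_owner": True,
        "log_initiating_identity": True,
    },
    "access": {                       # scope access to the sensitivity of the data
        "max_data_classification": "internal",
        "deny_external_send_for": ["confidential", "restricted"],
    },
    "monitoring": {                   # continuous monitoring for risky behavior
        "alert_on": ["bulk_export", "off_hours_activity", "novel_destination"],
    },
    "enforcement": {                  # block and remediate past a risk threshold
        "risk_score_block_threshold": 80,
        "actions": ["suspend_agent", "revoke_token", "notify_owner"],
    },
}
```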

This is where guardian agents and guardian capabilities become essential.

Guardian agents: the first line of defense

If organizations are going to rely on autonomous systems, they also need supervisory mechanisms that can monitor AI agent behavior, enforce boundaries, and intervene when actions drift from policy, acceptable use, or business intent.

Gartner defines guardian agents as systems that supervise AI agents to help ensure their actions align with intended goals and boundaries. They monitor and block risky actions and are evolving from a collection of services into autonomous agents that can enforce policies across platforms.

In that sense, guardian agents are becoming the first line of AI defense: a control layer designed not just to observe autonomous systems, but to keep them within safe and intended bounds.
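As a minimal sketch of that supervisory pattern, assuming a simple pre-execution review hook, the Python below checks each proposed agent action against policy before it runs and escalates when a check fails. GuardianAgent, Action, and the example policy are hypothetical names, not a reference to any specific guardian product.

```python
from typing import Callable

# An "action" is whatever the supervised agent proposes to do next.
Action = dict  # e.g., {"type": "send_email", "destination": "external", ...}
PolicyCheck = Callable[[Action], bool]

class GuardianAgent:
    """Minimal guardian pattern: review, block, and escalate agent actions."""

    def __init__(self, checks: list[PolicyCheck], on_block: Callable[[Action], None]):
        self.checks = checks
        self.on_block = on_block  # e.g., alert the security team or the agent's owner

    def review(self, action: Action) -> bool:
        """Return True if the action may proceed; block and escalate otherwise."""
        if all(check(action) for check in self.checks):
            return True
        self.on_block(action)
        return False

# Example boundary: never send restricted data to an external destination.
def no_restricted_exfil(action: Action) -> bool:
    return not (action.get("destination") == "external"
                and action.get("data_classification") == "restricted")

guardian = GuardianAgent([no_restricted_exfil],
                         on_block=lambda a: print("blocked:", a))
guardian.review({"type": "send_email", "destination": "external",
                 "data_classification": "restricted"})  # blocked and escalated
```

The design choice that matters is that the guardian sits in the action path rather than beside it: it can stop an action before execution instead of only reporting it afterward.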

Just as importantly, those guardian capabilities need governance of their own. An oversight layer that is itself unsupervised simply moves the risk around.

The leadership challenge now

Organizations do not need a perfect AI risk model before they move forward. But they do need a more honest one.

The old insider framework assumed risk came from people and could be understood through motive, behavior, and access. The emerging reality is more complex. Risk increasingly sits in the space between human decisions and machine execution. That is where authority is being delegated, where oversight is weakest, and where consequences can spread fastest.

The organizations that get ahead of this will not be the ones that block AI the longest. They will be the ones that build the visibility, controls, and governance needed to use it responsibly.

Because the defining insider risk challenge ahead is not human or AI in isolation.

It is what happens when trusted humans and trusted machines operate together.

DTEX is transforming enterprise security by extending proven insider risk management principles to AI, enabling visibility and control at enterprise scale. For a deep dive on how the DTEX Platform proactively detects and mitigates human and AI risk, request a demo.

FAQ: Navigating the space between human and AI risk

Why do traditional insider risk models fall short?

Traditional insider risk models focus on people. The new blind spot is the interaction between human intent, access, or instruction and AI agents that execute quickly, broadly, and with delegated authority, without the natural friction that slows human action.

What is the biggest risk when humans and AI agents operate together?

The biggest risk is not AI acting alone. It is the combination of human intent and machine execution. Four patterns matter: collusion, adversarial use, compromised agent, and misaligned normal use — where even trusted people and trusted systems can trigger cascading harm without malicious intent.

What is the Lethal Trifecta?

The Lethal Trifecta is the convergence of three conditions: access to sensitive data, exposure to untrusted external content, and the ability to communicate or execute externally. When all three conditions converge, a prompt or poisoned input can quickly become enterprise-scale exposure.

What is a guardian agent?

A guardian agent supervises AI agents, enforces boundaries, monitors risky behavior, and can intervene when actions drift from policy or business intent. It is the control layer that helps keep autonomous systems within safe, intended bounds.

Why isn't blocking AI agents a viable strategy?

Blocking alone is not realistic or sustainable because useful agents will keep spreading. Blanket bans reduce visibility and push usage into the shadows. The stronger approach is governance: visibility, attribution, access controls, continuous monitoring, and blocking or remediation only when behavior crosses policy or risk thresholds.