Apr 7, 2026

What RSAC 2026 Revealed About AI Agent Security and Enterprise Risk 

At RSAC 2026, one of the more grounded AI sessions wasn’t about copilots, productivity gains or model performance. It was about what changes when AI agents start operating inside the enterprise, and what that means for security teams already under pressure.

That was the focus of “Trust Issues: The Only AI Panel You Need to See This Week,” featuring:

  • Nicole Perlroth, Founding Partner at Silver Buckshot, Venture Partner at Ballistic Ventures, and best-selling author
  • Kevin Mandia, Founder and Former CEO of Mandiant, Co-founder and General Partner at Ballistic Ventures
  • Marshall Heilman, CEO of DTEX
  • Sean Joyce, Principal, Office of the Chairman, Global Cybersecurity and Privacy Leader, PwC
  • Hart Rossman, Vice President of Security, AWS
  • Amanda Gerut (moderator), West Coast Editor, Fortune

The panel returned to a few consistent themes: AI is accelerating both external and internal risk; AI governance is lagging deployment; and organizations need better visibility into how agents behave, not just whether they complete the task they were assigned.

Below are the clearest takeaways from the panel.

The “everything, everywhere, all at once” cyber scenario

The panel opened on a sober note. Nicole Perlroth argued that AI is pushing the industry out of its “honeymoon phase” and into a period where long-standing weaknesses — misconfigurations, unpatched vulnerabilities, neglected controls — are more likely to be found and exploited at machine speed. Kevin Mandia reinforced the same point from the attacker’s side, warning of “near term pain” as AI makes offensive activity faster, more scalable and more aggressive than human operators alone.

That is what gives Perlroth’s “everything, everywhere, all at once cyber scenario” framing its force. The shift is not only speed. It is the widening field of actors able to test and pressure organizations at once. As Sean Joyce noted, more states and more criminals are likely to gain real offensive capability. The result is a more constant threat environment, where cyber risk becomes less episodic and more ambient.

AI agents can be just as unpredictable as humans and, in some cases, more so

The panel’s more distinctive point was internal: organizations are now creating risk themselves as they deploy AI agents into business processes. Marshall Heilman described two examples that landed with the room. In one, an AI agent gathered sensitive financial information for an executive, saved it to a publicly accessible share, then emailed it. In another, an agent tasked with obfuscating data downloaded code from the internet to create its own encryption routine, introducing fresh vulnerabilities along the way. In both cases, the agent completed the task it had been given. The problem was how it got there.

That became the heart of the discussion that followed. Heilman argued that AI agents can be just as unpredictable as humans — and in some cases more so — because they may take actions that make sense to the system but are opaque to the people supervising it. Joyce pushed back on whether humans are still the more complex actor, but the practical takeaway was the same: successful output is not the same thing as a safe outcome.

AI agents now belong in the insider risk model

One of the clearest themes to emerge was that AI agents increasingly need to be treated as insiders. Joyce was direct: “It is 100% insider. I would argue it’s a privileged insider.” Heilman took a similar line, arguing that agents doing work in the enterprise need to be treated with the same seriousness as any other internal actor.

The panel agreed that permissions alone are not enough. Heilman argued that in an agentic environment, traditional controls such as fixing over-permissioning and granting just-in-time access do not fully solve the problem. His answer was intent: security teams need to understand not just whether an action was allowed, but why it was happening and whether it aligned with the business purpose of the agent.
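
To make that distinction concrete, here is a minimal sketch of the difference between a pure permission check and an intent-alignment check. This is not DTEX’s implementation; the agent IDs, policy tables, and lookup logic below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    agent_id: str
    resource: str
    operation: str       # e.g. "read", "write", "email"
    stated_purpose: str  # the business task behind this action

# Hypothetical policy tables, for illustration only.
PERMISSIONS = {
    ("finance-agent-01", "q3_report.xlsx"): {"read", "write"},
}
BUSINESS_PURPOSE = {
    "finance-agent-01": {"summarize quarterly financials for the CFO"},
}

def permitted(action: AgentAction) -> bool:
    """Traditional control: was the operation allowed at all?"""
    allowed = PERMISSIONS.get((action.agent_id, action.resource), set())
    return action.operation in allowed

def intent_aligned(action: AgentAction) -> bool:
    """Intent check: does the action serve the agent's assigned purpose?
    A real system would use richer behavioral context than a lookup."""
    purposes = BUSINESS_PURPOSE.get(action.agent_id, set())
    return action.stated_purpose in purposes

def review(action: AgentAction) -> str:
    if not permitted(action):
        return "block: not permitted"
    if not intent_aligned(action):
        return "flag: permitted, but outside the agent's business purpose"
    return "allow"

# A permitted write that serves no assigned purpose gets flagged, not allowed.
print(review(AgentAction("finance-agent-01", "q3_report.xlsx", "write",
                         "stage data on a public share")))
```

The shape of the decision is the point: “permitted” and “aligned with intent” are separate questions, and an agentic environment needs an answer to both.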

The board and C-suite literacy gap is now a material risk

Another theme that deserves more attention than it usually gets at security events is leadership literacy. Joyce said plainly that most board members still do not understand AI, let alone agentic AI. Hart Rossman made the operating problem just as clear: this can’t sit on a quarterly board deck. If leadership is only discussing AI and cybersecurity occasionally, the organization is already behind.

Perlroth made the same point from experience. After the success of her book, she found herself explaining cyber risk to Fortune 500 boards that often had very limited technical grounding. Her message to them was blunt: they are now part of the frontline, whether they feel prepared or not.

For security leaders, that is more than an observation. It is part of the risk picture. If executive teams lack the fluency to ask the right questions, move at the right cadence, or recognize how AI is being used inside the business, governance will lag even further behind deployment.

Shadow AI is not going away

The panel was equally realistic on shadow AI. Mandia argued that organizations need to accept a basic fact: employees are already using AI tools, whether security approves them or not. Rossman’s view was that the answer is not simply to block those tools but to “bring it into the daylight” and make approved options available in a way the organization is comfortable with. Heilman went further, arguing that most employees are not trying to bypass controls for the sake of it; they are trying to do their jobs better and faster.

Joyce was more cautious. He agreed that innovation should not be stifled, but argued for policy, guardrails, and controlled environments — including sandboxed experimentation — rather than letting shadow AI spread unchecked. That tension matters because the panel was clear on two points: AI demand is not going away, and unmanaged adoption creates risk that many organizations are still struggling to govern.

AI risk management still depends on fundamentals

For all the discussion of agents, autonomy and machine-speed risk, the panel kept returning to fundamentals. Joyce’s point was straightforward: the basics matter more now, not less. Rossman added two practical signals that security teams should be watching closely — credentials and token usage. If you do not know where your credentials live, who has access to them, or how token consumption is changing, you are missing an important source of signal.
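
As one illustration of the token-usage signal, even a simple baseline comparison can surface an agent whose consumption suddenly departs from its own history. A minimal sketch, assuming daily per-agent token counts are already being collected; the window and threshold values are illustrative assumptions, not recommendations.

```python
import statistics

def token_usage_alerts(daily_tokens, window=14, threshold=3.0):
    """Flag days where token consumption deviates sharply from the
    trailing baseline. daily_tokens is a list of daily counts for one
    agent, oldest first. Returns the indices of anomalous days."""
    alerts = []
    for i in range(window, len(daily_tokens)):
        baseline = daily_tokens[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # guard flat usage
        if (daily_tokens[i] - mean) / stdev > threshold:
            alerts.append(i)
    return alerts

# Steady usage, then a spike that would warrant a closer look.
usage = [1200, 1100, 1250, 1180, 1300, 1220, 1150,
         1280, 1190, 1240, 1210, 1260, 1170, 1230, 9800]
print(token_usage_alerts(usage))  # -> [14]
```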

That may be the most useful takeaway for security leaders and insider risk buyers. The AI agent security problem is new, but much of the work is familiar: know what agents you have, govern them, monitor them, and close the visibility gap between what they were asked to do and what they actually did.

If your team is working through AI agent security, insider risk or shadow AI exposure, it may be worth pressure-testing whether you have the visibility to distinguish useful automation from risky behavior — before scale makes that harder. Request a demo to see how DTEX can help.

FAQ: Demystifying AI agent risk

What is AI agent security?

AI agent security is the practice of securing AI agents that operate inside the enterprise. That means looking beyond whether an agent completed its task and understanding how it completed it, what data and systems it touched, what permissions it used, and whether it introduced risk along the way.

Why does agentic AI governance matter?

Agentic AI governance matters because organizations cannot secure what they do not know exists. Companies need to know what agents are running, who owns them, what they can access, what business purpose they serve, and what guardrails and accountability models govern their behavior.

Why should AI agents be treated as insiders?

AI agents should be treated as insiders because they can operate inside the environment with access to systems, data, and workflows. Even if they are not human, they can still create insider-like risk if they inherit permissions, act unexpectedly, or are misused by an attacker or employee.

What should security teams prioritize?

Security teams should prioritize visibility, governance, and control. Know which AI tools and agents are in use, what they can access, and who owns them. Tighten credentials and permissions, monitor for abnormal behavior, and put guardrails around high-risk actions before they scale into security or business risk.
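
That visibility starts with recording what agents actually do, step by step, not just whether a task finished. A minimal sketch, assuming a simple JSON-lines audit log; the field names and file path are illustrative, not a standard schema.

```python
import json
import time

def log_agent_step(agent_id, task_id, operation, target, detail=""):
    """Append one structured record per agent action so reviewers can
    reconstruct how a task was completed, not just that it completed."""
    record = {
        "ts": time.time(),
        "agent_id": agent_id,
        "task_id": task_id,
        "operation": operation,  # e.g. "read", "write", "network_fetch"
        "target": target,        # file path, URL, or system touched
        "detail": detail,
    }
    with open("agent_audit.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: the trail behind "gather financials for an executive" would
# make a risky intermediate step visible before anyone is surprised.
log_agent_step("finance-agent-01", "task-123", "read",
               "/finance/q3_report.xlsx")
log_agent_step("finance-agent-01", "task-123", "write",
               "//public-share/tmp/q3.xlsx",
               "destination share is world-readable")
```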

Subscribe today to stay informed and get regular updates from DTEX