Apr 6, 2026

Agentic AI: The Star of RSAC 2026 


There were 650 vendors with exhibits at RSAC 2026, and while it is difficult to get an exact count, it seems like 649 of them were emphasizing agentic AI. Even the one vendor without AI in their handouts admitted that their company is increasingly centering its tools around it. As more than one attendee noted, AI agents are both the security problem and the security solution, often at the same booth.  

A year ago, most of the AI chatter was still focused on threats involving ChatGPT. But in mid-2025, the focus shifted to agentic AI. And it did not just shift: it accelerated, and the alarm bells sounded. Agentic AI moved from buzzword to reality, with agents now acting as digital coworkers, taking on tasks and easing the load on cyber professionals. At the same time, they are introducing entirely new levels of risk. 

Why Has Agentic AI Taken Center Stage at RSAC? 

So what changed this year? Why was nearly everyone talking about AI, and why was it featured at almost every booth on the expo floor?  

The answer is speed. 

Agentic AI is being adopted faster than organizations can understand, control, or secure it. Microsoft predicts more than 1 billion agents will be used across businesses by 2028, and 8 in 10 organizations expect to use them within the next 12 to 18 months. 

Between RSAC 2025 and 2026, the conversation jumped from “What is an agent?” to “How do we control thousands of them before they control us?” 

There is a legitimate concern about a world where AI agents take control. According to recent research: 

  • 63% of organizations cannot enforce purpose limitations on their AI agents 
  • 60% cannot terminate an agent that’s misbehaving 
  • 55% cannot isolate AI systems from their broader networks 
  • 33% plan for autonomous workflow agents that act without human approval 
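To make the first three gaps concrete, here is a minimal, hypothetical sketch of what those missing guardrails look like in practice: a wrapper that enforces a declared purpose, restricts an agent to an isolated set of hosts, and provides a kill switch. The `AgentGovernor` class and its names are illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class AgentGovernor:
    """Hypothetical guardrail wrapping an agent's actions, illustrating the
    three controls the survey says most organizations lack: purpose
    limitation, network isolation, and a termination (kill) switch."""
    allowed_purposes: set[str]   # purpose limitation
    allowed_hosts: set[str]      # network isolation
    terminated: bool = False

    def terminate(self) -> None:
        # Kill switch: once set, every future action is refused.
        self.terminated = True

    def authorize(self, purpose: str, target_host: str) -> bool:
        # Refuse anything outside the agent's declared purpose
        # or outside its isolated network segment.
        if self.terminated:
            return False
        return purpose in self.allowed_purposes and target_host in self.allowed_hosts

gov = AgentGovernor(allowed_purposes={"summarize-tickets"},
                    allowed_hosts={"ticketing.internal"})
print(gov.authorize("summarize-tickets", "ticketing.internal"))  # True
print(gov.authorize("delete-database", "ticketing.internal"))    # False
gov.terminate()
print(gov.authorize("summarize-tickets", "ticketing.internal"))  # False
```

The point of the sketch is that each control is a small, explicit check before any agent action; the survey numbers above suggest most organizations have no equivalent of even this basic gate.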

While vendors may not emphasize these risks in their pitches, they were widely discussed in private conversations and sessions. Terms like governance and red teaming came up frequently as they are seen as the primary guardrails around AI agents. Speakers at the RSAC Cryptographers’ Panel also raised concern about the proliferation of agents and the vast amount of data they will have access to.  

Shadow AI and the New Insider Threat 

While nearly every vendor, session, and keynote mentioned AI, one common thread ran through many of those talks: shadow AI.  

The concept of shadow AI isn’t new. Organizations have dealt with shadow IT and shadow cloud for years. But the urgency around shadow AI is different, particularly in how it amplifies insider risk. 

Speaking at DTEX’s session, “Trust Issues: The Only AI Panel You Need to See This Week,” Hart Rossman, Vice President of AWS Security, discussed the threats around Open Claw and shadow AI, and how easy it is for anyone within a company to install the technology on a work device with no guardrails.  

“I think most organizations would largely agree they don’t want Open Claw on their desktops and laptops, right? They don’t want it deleting their email or databases,” Rossman said. But that reality will not stop adoption. Instead of trying to eliminate shadow AI, organizations need to bring it into the open and manage it intentionally. Trying to lock it down completely is not realistic. 

According to the latest DTEX and Ponemon 2026 Cost of Insider Risks Report, the cost of negligent insiders has reached $10.3 million annually, up 17% year over year, largely driven by the rise of generative AI. 

The data makes the shift clear: 

  • 92% of organizations say GenAI has changed how employees access and share information  
  • Yet only 13% have formally integrated AI into their business strategies 

This gap is where shadow AI lives. 

Employees are adopting AI faster than organizations can govern it. They are pasting sensitive data into tools, connecting AI agents to internal systems, and experimenting in ways that bypass traditional controls. Not because they are malicious, but because they are trying to move faster than the business has enabled them to. 

“We want all the innovation available to employees as quickly as possible, but not unguarded,” Rossman added. “Organizations have to get more savvy about how to do that safely, and do it faster.” 

Importance of Behavior and Intent 

One point stressed repeatedly throughout RSAC is that the insider risk that comes with shadow AI is different from the insider risk that comes with other types of shadow technology. “Most people think that humans are more unpredictable than AI agents,” said Marshall Heilman, DTEX CEO. It is a fair assessment; the fallibility of human behavior is a top cause of insider risk. However, Heilman argued that AI agents are actually more unpredictable, because they often take actions no one would expect them to take. 

“My company’s whole business is understanding why humans take certain actions and trying to predict what they’re going to do in the future. So, I’d argue that humans are relatively well understood. Agents, on the other hand, take actions that you have no idea they will take.” 

This changes the equation for insider risk. If a human takes malicious action, the security team can respond at human speed; it is a problem, but it can be addressed. If multiple AI agents go rogue, however, they do so at machine speed, and it is nearly impossible for humans to react at that pace. The window between action and impact is shrinking. 

Understanding behavior over time, across users, data, and now AI, is what allows organizations to detect risk earlier and determine intent before it becomes an incident. 

Closing

The takeaway from RSAC 2026 is clear: AI is not just changing security. It is changing how risk forms inside the enterprise. Agentic AI is not just this year’s buzzword; it is also the fastest-growing source of risk. It amplifies human behaviors at scale, operates with increasing autonomy, and has direct access to sensitive data and systems. 

Stay tuned for our next blog, where we take a deeper look at the discussion from the DTEX “Trust Issues” panel and explore what it will take to build trust in an AI-driven enterprise. 

Subscribe today to stay informed and get regular updates from DTEX.