Jan 21, 2026

Shadow AI Is Everywhere. What Gartner® Says and How DTEX Can Help You Regain Control.

If you’re treating shadow AI as tomorrow’s problem, you’re already late. The latest Gartner findings and industry surveys converge on a stark reality: unsanctioned GenAI use has exploded behind the scenes. Data is leaking through prompts, and experimentation with AI agents is outpacing governance. Many CISOs know this is happening, but most still lack line of sight into what employees are prompting, where those prompts go, and how autonomous agents are touching sensitive systems. And the risk is compounding every day.

Shadow AI is a today thing, not a future risk

A Gartner survey of 302 cybersecurity leaders, fielded between March and May 2025, found that 69% suspect or have evidence that employees use prohibited public GenAI at work. Gartner also predicts that by 2030, more than 40% of enterprises will face security or compliance incidents tied to unauthorized GenAI use.

Like shadow IT but scarier

Unapproved AI tools, unsanctioned models, and embedded AI features are now top board-level concerns. Notably, this isn’t classic shadow IT you can block with an endpoint agent. It’s a browser window and a copy-paste away from your crown jewels.

Shadow AI is scarier than shadow IT for three big, structural reasons, and they all come down to speed, scale, and invisibility. AI tools can expose sensitive data far more quickly and irreversibly, often without employees realizing it; shadow AI is nearly impossible to detect with traditional security tooling; and AI adoption is exploding much faster than governance can scale.

The risk escalates across borders: Gartner warns that more than 40% of AI-related data breaches by 2027 will stem from cross-border GenAI misuse, often via prompts sent to models or APIs hosted in unknown jurisdictions, outside your governance. Sensitive data is moving, and you don’t control the ‘where’.

Data leakage via prompts: where breaches begin

Prompts are the new payload. Recent reporting from Infosecurity Magazine on Netskope’s Cloud and Threat Report 2026 shows prompt volumes to SaaS GenAI apps grew sixfold in a year, from roughly 3,000 to roughly 18,000 per month. Top-quartile organizations exceed 70,000 prompts per month; the top 1% surpass 1.4 million. Unsurprisingly, GenAI data policy violations doubled: on average, 3% of GenAI users commit roughly 223 violations per month, with the most enthusiastic adopters seeing roughly 2,100 incidents per month.

This isn’t just “bad prompts.” It’s source code, customer records, strategy decks, and credentials pasted into personal AI accounts: a perfect storm of zero visibility, cross-border exposure, and no guarantees about model retention or downstream use. Gartner guidance is explicit: build enterprise-wide AI policies, audit shadow AI activity, and monitor usage continuously.

The next frontier: agent experimentation without guardrails

As teams push beyond chatbots to agentic AI (autonomous or semi-autonomous systems that execute tasks, access data, and integrate with tools), the blast radius widens. Gartner attack landscape data shows that 32% of organizations experienced attacks on GenAI applications that leveraged the application prompt in the last 12 months. Even setting aside phishing and deepfakes, the takeaway is clear: agent misuse can create novel pathways to leak data, invoke APIs, or alter business logic.

Unsupervised agents, especially those configured by enthusiastic builders outside the security team’s purview, can chain actions, traverse integrations, and touch regulated data. Without role-based policies, scoped credentials, and kill switches, you’re a prompt away from autonomous misbehavior.

The root cause: limited AI literacy meets the easy button

Gartner emphasizes education, monitoring, and filtering as cornerstones of shadow AI mitigation. The hard reality: many users don’t grasp what constitutes sensitive data in a prompt, how model memory and history work, or why personal accounts are off-limits. Broader workforce data backs this up: 42% of employees received no basic GenAI communications, and fewer than 35% got role-specific training, a governance vacuum that guarantees unsafe usage.

This literacy gap isn’t just about “bad actors.” It’s well-meaning employees optimizing productivity and unknowingly creating compliance exposure, contractual risk, and irreversible IP leakage.

What CISOs know but struggle to see

Security leaders recognize the urgency. Gartner highlights that emerging GenAI risks demand significant changes to traditional approaches, and that shadow AI is now a top enterprise risk. But many programs still lack actual data on which AI apps are used, what content is uploaded, and which accounts (enterprise vs. personal) are in play. Without those signals, policies are unenforceable, and education can’t be targeted where it matters.

DTEX brings shadow AI into the light without killing innovation

Outright bans tend to push behavior underground. Behavior-centric visibility and AI guardrails preserve innovation while stopping leaks. DTEX Risk-Adaptive DLP, AI Security, and the Ai³ Risk Assistant were built for exactly this reality:

1) Get comprehensive visibility over all AI tools in use, authorized and unauthorized

Security teams can curate safe access to thousands of GenAI apps and differentiate enterprise Copilot accounts from personal use, a crucial control for shadow AI. DTEX also monitors GenAI use in non-browser utilities, such as AI-driven code-completion tools, and detects AI-based notetaking used during video conferencing.
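
To make the enterprise-versus-personal distinction concrete, here is a minimal sketch in Python of how such differentiation logic could work. The host names, domains, and labels are hypothetical illustrations of the idea, not DTEX’s implementation:

```python
# Illustrative only: separate sanctioned enterprise sessions from personal
# accounts and outright shadow AI. Hosts and domains are hypothetical.
SANCTIONED_APPS = {"copilot.microsoft.com"}
ENTERPRISE_DOMAINS = {"example.com"}

def classify_session(app_host: str, login_email: str) -> str:
    """Label a GenAI session as sanctioned, personal, or shadow AI."""
    if app_host not in SANCTIONED_APPS:
        return "shadow_ai"                    # unsanctioned tool
    domain = login_email.rsplit("@", 1)[-1].lower()
    if domain in ENTERPRISE_DOMAINS:
        return "sanctioned_enterprise"
    return "personal_account"                 # approved app, personal login

print(classify_session("copilot.microsoft.com", "jane@gmail.com"))
# -> personal_account
```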

2) Understand the prompts before the data leaves

DTEX monitors AI prompts and tracks content flow to and from GenAI platforms, with native classification for IP and other sensitive markers that can prevent risky data from leaving in the first place. Categorizing prompt questions and answers lets you identify risky behavior and AI misuse well before data loss becomes a threat.
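
As a rough illustration of prompt-level classification (not DTEX’s actual detection logic), a sketch like the following flags common sensitive markers before a prompt leaves the endpoint; the patterns are deliberately simplistic:

```python
import re

# Hypothetical detectors for sensitive content in outbound prompts.
DETECTORS = {
    "credential": re.compile(r"(?i)\b(api[_-]?key|secret|password)\b\s*[:=]\s*\S+"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "source_code": re.compile(r"\bdef |\bclass |\bimport |#include"),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the sensitive-content categories detected in a prompt."""
    return [name for name, rx in DETECTORS.items() if rx.search(prompt)]

risky = "Debug this: api_key = 'sk-live-1234'\ndef charge(cust): ..."
print(classify_prompt(risky))  # ['credential', 'source_code']
```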

3) Monitor behaviors, preserving privacy by design

DTEX pairs rich behavioral telemetry with patented pseudonymization, so analysts can detect risky patterns (e.g., repeated code pastes to personal LLMs) without exposing PII, reducing bias and enabling a proportionate response.
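
The generic technique behind pseudonymized telemetry is keyed hashing: the analyst sees a stable token per user rather than an identity, and unmasking is a separate, controlled step. A minimal sketch of that general pattern (not DTEX’s patented implementation; the key value is a placeholder):

```python
import hmac
import hashlib

# The key would live in a vault and rotate; hard-coded here for illustration.
PSEUDONYM_KEY = b"store-me-in-a-vault"

def pseudonymize(user_id: str) -> str:
    """Derive a stable, non-reversible token for a user identifier."""
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

event = {
    "actor": pseudonymize("jane.doe@example.com"),  # analyst sees a token
    "action": "paste_to_personal_llm",
}
print(event)
# Re-identification happens only via a controlled unmasking workflow
# (e.g., HR/legal approval), never in the analyst console.
```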

4) Safely investigate at AI speed

The Ai³ Risk Assistant lets analysts ask natural-language questions such as “Who uploaded customer data to a public LLM?”, “Was it on purpose?”, and “Run a data loss playbook on GenAI”. It returns explainable findings and recommendations to accelerate triage and containment, and it leverages Azure OpenAI without direct access to customer datasets or the internet.

5) Raise literacy with real signals

Because DTEX shows what employees are putting into AI prompts, you can deliver just-in-time coaching, role-specific training, and enforceable policies that align with Gartner recommendations, closing the literacy gap where it counts.


Sidebar: AI Agent Controls You Should Implement Now

Accountability & change control

  • Assign agent owners, record purposes, capabilities, and data access.
  • Require risk reviews before agents connect to production data or tools (a minimal registry sketch follows).
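
A minimal registry sketch, assuming a simple in-memory store and illustrative field names, shows how ownership and risk review can be enforced before an agent is allowed to register:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    owner: str            # accountable human
    purpose: str
    capabilities: list[str]
    data_scopes: list[str]
    risk_reviewed: bool = False

REGISTRY: dict[str, AgentRecord] = {}

def register(record: AgentRecord) -> None:
    """Refuse registration until a risk review has been completed."""
    if not record.risk_reviewed:
        raise PermissionError(f"{record.agent_id}: risk review required")
    REGISTRY[record.agent_id] = record
```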

Least privilege by design

  • Scope agent credentials to minimal, task-specific permissions; rotate secrets frequently.
  • Enforce per-action approvals for sensitive operations (exports, deletes, bulk updates), as in the sketch below.
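
A least-privilege gate might look like the following sketch, where an agent’s credential carries only its granted scopes and sensitive verbs additionally require an approval token (scope and action names are hypothetical):

```python
SENSITIVE_ACTIONS = {"export", "delete", "bulk_update"}

def authorize(action: str, granted_scopes: set[str],
              approval_token: str | None = None) -> bool:
    """Allow an action only if scoped and, when sensitive, approved."""
    if action not in granted_scopes:
        return False                 # outside the agent's minimal scope set
    if action in SENSITIVE_ACTIONS and approval_token is None:
        return False                 # per-action human approval required
    return True

assert authorize("read", {"read", "export"})
assert not authorize("export", {"read", "export"})            # no approval
assert authorize("export", {"read", "export"}, "APRV-1234")   # approved
```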

Guardrails & kill switches

  • Implement policy constraints (allowed actions, datasets, time windows).
  • Provide an operator kill switch and automatic halt on anomaly detection (e.g., excessive exports), as sketched below.
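
One way to wire a kill switch with an automatic anomaly halt, sketched with an illustrative export threshold:

```python
class KillSwitch:
    """Shared halt flag the agent loop must check before every action."""

    def __init__(self, max_exports: int = 50):
        self.halted = False
        self._exports = 0
        self._max_exports = max_exports

    def halt(self, reason: str) -> None:
        self.halted = True            # operator or automation can trip this
        print(f"HALT: {reason}")

    def check(self, action: str) -> None:
        if self.halted:
            raise RuntimeError("agent halted by kill switch")
        if action == "export":
            self._exports += 1
            if self._exports > self._max_exports:
                self.halt("excessive exports")  # automatic anomaly halt
                raise RuntimeError("agent halted: excessive exports")

switch = KillSwitch(max_exports=2)
switch.check("export")
switch.check("export")
# switch.check("export")  # a third export trips the switch and raises
```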

Prompt hygiene & logging

  • Standardize prompt templates that exclude sensitive data; enable prompt linting.
  • Log all prompts/responses and tool invocations for audit and post-incident forensics (see the sketch below).
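
An append-only audit trail can be as simple as JSON lines; the file path and record shape below are illustrative:

```python
import json
import time

AUDIT_LOG = "agent_audit.jsonl"  # hypothetical path; ship to a SIEM in practice

def log_interaction(agent_id: str, prompt: str, response: str,
                    tool_calls: list[dict]) -> None:
    """Append one audit record per agent interaction."""
    record = {
        "ts": time.time(),
        "agent_id": agent_id,
        "prompt": prompt,          # lint/redact sensitive data before logging
        "response": response,
        "tool_calls": tool_calls,  # e.g., [{"tool": "crm.export", "args": {}}]
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
```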

Monitoring & drift detection

  • Continuously observe agent outputs for policy violations, hallucinated instructions, or scope creep.
  • Alert on unexpected data access patterns or multi-tool chains that exceed approved scope, as in the sketch below.
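
A simple drift heuristic compares observed tool usage against the agent’s approved scope and flags overly long chains; the scopes and limit here are illustrative:

```python
from collections import Counter

APPROVED_TOOLS = {"summarizer-01": {"search", "read_doc"}}  # hypothetical
MAX_CHAIN_LENGTH = 5

def detect_drift(agent_id: str, tool_chain: list[str]) -> list[str]:
    """Return alerts for unapproved tools or oversized tool chains."""
    alerts = []
    approved = APPROVED_TOOLS.get(agent_id, set())
    for tool, count in Counter(tool_chain).items():
        if tool not in approved:
            alerts.append(f"{agent_id} used unapproved tool '{tool}' x{count}")
    if len(tool_chain) > MAX_CHAIN_LENGTH:
        alerts.append(f"{agent_id} chained {len(tool_chain)} tools "
                      f"(limit {MAX_CHAIN_LENGTH})")
    return alerts

print(detect_drift("summarizer-01", ["search", "read_doc", "crm.export"]))
# -> ["summarizer-01 used unapproved tool 'crm.export' x1"]
```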

Education & simulations

  • Train builders and business users on safe agent design, data boundaries, and failure modes.
  • Run tabletop exercises: prompt injection scenarios, cross-border data egress, rogue automation paths.

A pragmatic 30–60–90-day plan, aligned to Gartner guidance

Days 0–30: Discover & contain

Monitor AI activity to discover shadow AI apps, prompt content and data flows, and AI agent activity. Block personal AI accounts and whitelist enterprise endpoints. Read the DTEX Insider Guide to AI.

Days 31–60: Guardrails & literacy

Create and publish AI acceptable-use policies. Launch role-specific training on prompt hygiene, IP handling, and cross-border awareness. Stand up agent controls (RBAC, approvals, kill switches). Read the DTEX Guide to AI Governance.

Days 61–90: Operationalize & scale

Measure and block policy violations. Report prompt leakage trends to executives. Integrate AI risk evaluation into SaaS onboarding and data governance lifecycles to prevent technical debt and compliance surprises.

The DTEX bottom line

The Gartner message is unmistakable: shadow AI will cause real incidents for a large share of enterprises unless organizations act now. Prompt leakage and unsupervised agent behavior are accelerating faster than most organizations can govern. You need continuous visibility, enforceable guardrails, and higher AI literacy today, not next quarter.

DTEX gives you the operational backbone to do it: see what’s being prompted, stop risky uploads, explain insider intent, and enable safe AI adoption without resorting to blanket bans. If you’re ready to bring shadow AI into the light, let’s map your 90-day plan and get the telemetry flowing.

Gartner, What Are the Top AI-Related Cybersecurity Threats to Organizations?, by Craig Porter, 10 September 2025.

Gartner is a trademark of Gartner, Inc. and/or its affiliates.