Executive summary (TL;DR)
- This advisory examines ChatGPT and similar AI chat tools as an AI security and insider threat risk.
- This matters now because employee adoption is rising faster than most monitoring and policy controls.
- Risk is highest for developers, contractors, and staff handling source code or confidential business data.
- In this investigation, DTEX observed AI chat tool use involving sensitive information, including source code.
- Primary risks include data exfiltration, accidental disclosure, and misuse of AI tools to support harmful activity.
- Organizations should prioritize application control, monitoring, policy updates, and targeted employee training.
Threat overview
ChatGPT and similar AI chat tools are now part of many employees’ daily workflows. As these tools become easier to access through browsers, operating systems, and other common business applications, organizations face a practical security challenge: employees can quickly move sensitive content into external AI systems that the organization does not control.
The core concern is not simply tool adoption. It is what happens when employees enter confidential business information, source code, product details, or other sensitive material into AI chat tools to speed up work. Once submitted, that data may be stored on external servers, reviewed by the provider, or reused as part of model improvement processes, depending on the platform’s terms and settings.
For security teams, this creates overlapping risks across AI security, data exfiltration, and insider threats. In some cases, the activity is negligent or convenience-driven. In others, AI tools may be used to assist more deliberate misuse of sensitive information.
This advisory focuses on how to detect and mitigate employee use of ChatGPT and related AI chat tools in the enterprise, with particular attention to confidential document handling, source code exposure, and high-risk user populations.
DTEX investigation and indicators
DTEX i³ has repeatedly heard concerns from organizations about the security impact of AI tools used by employees who access sensitive information, including products in development and proprietary code.
In one observed case, the DTEX i³ team identified a developer using an AI tool alongside source code in an attempt to develop a remote code execution (RCE) backdoor into the organization’s product. That activity was referred to the organization for further investigation.
DTEX i³ has also observed incidents in which employees or contractors entered confidential business information or source code into tools such as ChatGPT to support work-related tasks. Even where intent is not malicious, this behavior increases the risk of unauthorized disclosure and data exfiltration through legitimate services.
What defenders should look for
Security teams should focus on the following indicators; a minimal detection sketch follows the list:
- Access to ChatGPT and other AI chat tools from company-issued endpoints
- Use of confidential documents or source code in close temporal proximity to AI chat tool activity
- Repeated or unusual AI tool usage by sensitive user groups
- Patterns that suggest copying, summarizing, or transforming proprietary information for external submission
- User behavior that departs from normal peer or role-based baselines
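As an illustration of the last two indicators, the sketch below flags users whose AI chat tool activity departs from a role-based peer baseline. It is a minimal sketch, not a reference implementation: the event schema, the domain list, and the thresholds are all assumptions a practitioner would replace with their own telemetry and tuning.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical event schema: (user, role, domain). In practice these fields
# would come from proxy or endpoint telemetry; the names are assumptions.
AI_CHAT_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

def flag_baseline_departures(events, k=2.0, min_peers=5):
    """Flag users whose AI chat tool event count exceeds their role group's
    mean by more than k standard deviations."""
    counts = defaultdict(int)   # (user, role) -> AI chat tool event count
    roles = defaultdict(set)    # role -> users observed in that role
    for user, role, domain in events:
        roles[role].add(user)
        if domain in AI_CHAT_DOMAINS:
            counts[(user, role)] += 1

    flagged = []
    for role, users in roles.items():
        if len(users) < min_peers:   # too few peers for a stable baseline
            continue
        per_user = [counts[(u, role)] for u in users]
        mu, sigma = mean(per_user), stdev(per_user)
        for u in users:
            if sigma > 0 and counts[(u, role)] > mu + k * sigma:
                flagged.append((u, role, counts[(u, role)]))
    return flagged
```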
Detection and visibility
DTEX Intelligence Release 6.10.0 includes a new Data Enrichment category designed to help insider risk practitioners detect ChatGPT and AI chat tool activity.
Internal practitioners are best placed to configure and tune detection logic for their own environment. The AI chat ecosystem is expanding rapidly beyond text to image, video, and audio workflows, so detections should account for broader AI-enabled activity over time.
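The same enrichment idea extends beyond text chat. A minimal sketch of category tagging, assuming a practitioner-maintained domain-to-category map (the entries and category names here are illustrative, not a vendor-supplied list):

```python
# Illustrative domain-to-category map; the entries are assumptions a
# practitioner would maintain and extend as the AI ecosystem grows.
AI_TOOL_CATEGORIES = {
    "chat.openai.com":   "ai_chat_text",
    "chatgpt.com":       "ai_chat_text",
    "gemini.google.com": "ai_chat_text",
    "midjourney.com":    "ai_image",
    "runwayml.com":      "ai_video",
    "elevenlabs.io":     "ai_audio",
}

def enrich(event):
    """Attach an AI tool category tag to a proxy/endpoint event dict."""
    category = AI_TOOL_CATEGORIES.get(event.get("domain", ""))
    if category:
        event["tags"] = event.get("tags", []) + [category]
    return event
```

Keeping the map as data rather than hard-coded logic makes it cheap to extend as new image, video, and audio services appear.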
Insider threat profile
The insider threat described in this advisory is not a single archetype. It is a mix of negligent, policy-unaware, and potentially malicious users who interact with sensitive data and use AI chat tools to accelerate their work.
Common characteristics include:
- Access to confidential business information, source code, or product development material
- Reliance on external AI tools for drafting, troubleshooting, coding, or summarization
- Limited awareness of how submitted data may be stored, reviewed, or reused
- Willingness to bypass security controls for speed or convenience
- In higher-risk cases, intent to use AI tools to facilitate unauthorized or harmful outcomes
The advisory does not suggest that all use of ChatGPT is malicious. Rather, it highlights that employee use of AI chat tools can create material insider threat and data exfiltration risk when sensitive content is involved.
Insider threat persona
A representative persona for this activity is a developer, contractor, or knowledge worker with legitimate access to sensitive information who uses ChatGPT or a similar AI chat tool to solve a work problem more quickly.
That may include:
- A developer entering source code to troubleshoot or accelerate coding tasks
- An employee pasting internal business information into an AI chat tool to draft content or summarize material
- A contractor using external AI services without understanding the organization’s acceptable use policies
- A higher-risk insider using AI assistance to support unauthorized code manipulation or other harmful activity
The common thread is access plus convenience. The risk increases when AI chat tools are used outside approved workflows or without controls around sensitive data handling.
Mitigations: what organizations should do now
Organizations should treat AI chat tool use as a governable security issue rather than a temporary trend. The goal is not simply to block tools, but to understand who is using them, what data is involved, and where policy, monitoring, and user education need to improve.
Review and update acceptable use policies
Review current acceptable use policies for AI-based tools. Where gaps exist, add clear language specifying whether employee use is allowed, under what conditions, and what types of data must never be entered into external AI systems.
Apply application control on company-issued devices
Implement application control to limit the number of AI tools that can be accessed or used from corporate endpoints. Where some AI tools are approved, restricting the toolset can make monitoring more focused and operationally manageable.
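Application control mechanisms differ by platform, but the underlying logic is allowlist-driven: everything known but not approved is blocked. A minimal sketch that derives a web-proxy denylist from an approved-tool inventory; the domain lists and plain-text output format are assumptions, and a real deployment would feed a proxy, DNS filter, or endpoint agent:

```python
# Hypothetical inventory of known AI tool domains versus the approved subset.
# Both lists are illustrative; real inventories would be larger and curated.
KNOWN_AI_TOOL_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "claude.ai", "perplexity.ai",
}
APPROVED_AI_TOOL_DOMAINS = {"chat.openai.com", "chatgpt.com"}

def build_blocklist(known=KNOWN_AI_TOOL_DOMAINS,
                    approved=APPROVED_AI_TOOL_DOMAINS):
    """Block everything known but not approved; a narrow approved set keeps
    monitoring focused, as described above."""
    return sorted(known - approved)

if __name__ == "__main__":
    for domain in build_blocklist():
        print(domain)  # one domain per line, ready for a denylist import
```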
Monitor confidential data use around AI chat tools
Monitor usage of confidential documents, source code, and other sensitive information alongside AI chat tool activity. This is especially important for high-risk roles and users with access to material tied to product development or regulated data.
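One way to operationalize this is a time-window join between sensitive-data events and AI chat tool events for the same user. A minimal sketch, assuming events arrive as (user, epoch-timestamp) tuples and a 15-minute window; both the schema and the window are assumptions to tune per environment:

```python
from collections import defaultdict

WINDOW_SECONDS = 15 * 60  # correlation window; tune per environment

def correlate(sensitive_events, ai_events, window=WINDOW_SECONDS):
    """Yield (user, sensitive_ts, ai_ts) wherever the same user touched
    confidential material within `window` seconds of AI chat tool activity.
    Events are assumed to be (user, epoch_ts) tuples."""
    ai_by_user = defaultdict(list)
    for user, ts in ai_events:
        ai_by_user[user].append(ts)

    for user, s_ts in sensitive_events:
        for a_ts in ai_by_user.get(user, []):
            if abs(a_ts - s_ts) <= window:
                yield (user, s_ts, a_ts)
```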
Quantify current exposure
Determine which user populations are currently accessing AI chat tools, identify common usage patterns, and assess which teams or roles have the greatest exposure. This helps security teams move from broad concern to measurable risk.
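Exposure quantification can start with a simple aggregation of AI tool events by department. A minimal sketch, assuming events are dicts with user, department, and domain fields (the field names and domain list are assumptions):

```python
from collections import Counter

AI_CHAT_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

def exposure_report(events):
    """Count AI chat tool events and distinct users per department.
    Events are assumed to be dicts with 'user', 'department', 'domain'."""
    event_counts = Counter()
    users_per_dept = {}
    for e in events:
        if e["domain"] in AI_CHAT_DOMAINS:
            dept = e["department"]
            event_counts[dept] += 1
            users_per_dept.setdefault(dept, set()).add(e["user"])
    # Rank departments by event volume to prioritize policy and training work.
    return [(d, n, len(users_per_dept[d]))
            for d, n in event_counts.most_common()]
```

Ranking departments by event volume and distinct users turns a broad concern into a prioritized list for follow-up.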
Tune detections before scaling
Use available detections, tagging rules, and behavioral indicators to support anomaly-based or threshold-based alerting. Tuning should occur before broad deployment to reduce noise and improve detection quality.
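One practical tuning step is to replay candidate thresholds against historical activity and choose the lowest threshold that stays within an alert-handling budget. A minimal sketch; the input format, candidate thresholds, and budget are assumptions:

```python
def alert_volume_by_threshold(daily_counts_per_user, thresholds):
    """For each candidate threshold, count how many user-days would have
    alerted. `daily_counts_per_user` maps user -> list of daily AI chat
    tool event counts from historical telemetry (an assumed input format)."""
    results = {}
    for t in thresholds:
        results[t] = sum(
            1 for counts in daily_counts_per_user.values()
            for c in counts if c > t
        )
    return results

# Example: pick the smallest threshold that keeps alerts within budget.
history = {"alice": [2, 3, 40], "bob": [0, 1, 2], "carol": [5, 6, 7]}
volumes = alert_volume_by_threshold(history, thresholds=[5, 10, 20])
budget = 3
chosen = min((t for t, v in volumes.items() if v <= budget),
             default=max(volumes))
```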
Use targeted education and teachable moments
Provide regular employee training on the safe use of AI chat-based tools. Where possible, automate teachable moments around policy violations so users receive immediate guidance when risky behavior occurs.
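Teachable moments can be automated by pairing a detection rule with an immediate, policy-specific message. A minimal sketch, assuming a hypothetical send_message delivery hook and alert format; a real deployment would integrate email, chat, or an endpoint notification agent:

```python
POLICY_REMINDER = (
    "Our acceptable use policy prohibits entering confidential documents or "
    "source code into external AI chat tools. See the AI tool policy for "
    "approved alternatives."
)

def send_message(user, text):
    """Placeholder delivery hook; a real deployment would integrate email,
    chat, or an endpoint notification agent here."""
    print(f"[notify {user}] {text}")

def teachable_moment(alert):
    """On a qualifying alert, remind the user immediately. `alert` is an
    assumed dict with 'user' and 'rule' keys; the rule name is illustrative."""
    if alert.get("rule") == "sensitive_data_near_ai_chat":
        send_message(alert["user"], POLICY_REMINDER)
        return True
    return False
```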
Investigation support
This advisory includes limited-distribution reporting available only to approved insider risk practitioners. To request access to the redacted material, log in to the customer portal or contact DTEX i³. For organizations assessing suspected related activity, DTEX i³ can provide additional intelligence, indicator support, and investigative guidance. Behavioral detections should be tested and tuned prior to enterprise-wide deployment, particularly in large environments where scale can affect signal quality and operational effectiveness.
FAQ
How can organizations detect employee use of ChatGPT?
Organizations can detect employee use of ChatGPT by monitoring access to AI chat tools on company-issued endpoints and correlating that activity with confidential document or source code use. Behavioral tuning is important to separate routine browsing from higher-risk activity involving sensitive data.
What are the main ChatGPT security risks?
The main ChatGPT security risks are data exfiltration, accidental disclosure of confidential information, source code exposure, and misuse of AI tools to support harmful activity. Risk rises when employees submit sensitive business content to external AI services outside approved workflows.
Which employees face the highest risk?
Risk is highest for developers, contractors, and knowledge workers with access to source code, product information, or confidential business data. These users are more likely to use AI chat tools for speed or problem-solving and may expose sensitive information in the process.
What should security teams do first?
Security teams should first update acceptable use policies, apply application control on corporate devices, and monitor sensitive data use around AI chat tools. From there, they should quantify exposure, tune detections, and deliver targeted training to high-risk user groups.