i³ Threat Advisory:
How Agentic AI Browsers Elevate Insider Risk

Executive summary (TL;DR)

  1. This advisory examines how an agentic AI browser (Fellou) can automate data exfiltration.
  2. This matters now because 2026 will push AI agent autonomy into everyday browsing.
  3. Risk is highest for organizations allowing personal webmail, weak DLP, and broad scripting access. In this investigation, the browser executed 403 agent actions that blended into endpoint noise.
  4. Primary risks include misaligned user intent, credential exposure, vulnerable libraries, and covert encryption.
  5. Organizations should prioritize least privilege for AI agents, restrict PowerShell/Python, and monitor prompts.

Threat overview

At the start of 2023, the DTEX Intel team released a Threat Advisory warning about the risks and potential abuse of ChatGPT. Much has changed in three years, and now we see a new wave of tools ready for the consumer market and corporate use as highlighted in our “Rise and Risk of AI Agents” iTA.

Generative AI chat tools have helped insiders and threat actors sharpen their skills for malicious acts and have caused negligent data loss when users enter company information into them. In 2026, we expect the focus to shift to agentic AI, starting within browsers.

How do agentic AI browsers increase insider risk? 

They can act on behalf of a user across websites and local files, turning a single prompt into multi-step execution. That shifts risk from “what a user typed” to what an AI agent actually did, including actions that extend beyond the original user intent.

Threat actors will exploit agentic browser vulnerabilities or use those browsers as tools once inside environments. Insiders will enhance their capabilities along the DTEX Insider Threat Kill Chain. Negligent users testing these tools may cause significant data loss.

This iTA examines one browser, Fellou AI, showcasing clear use cases for the most revolutionary AI development in the consumer market to date.

How agentic AI browsers introduce insider risks

Here we will present three separate use cases highlighting how the Fellou AI Browser could be used to introduce risk into an environment. 

  1. The improved insider. An insider aims to appear normal by performing their usual job while crafting a prompt for their agentic AI browser to package sensitive files and covertly exfiltrate the encrypted data from the company.
  2. The negligent employee. They use the browser to gain a deeper understanding of their work content without realizing they’ve caused a data breach. This could be a customer contract, sales records, or even a code base.
  3. Threat actors’ recon tool. Threat actors continue to gain access to environments and, once inside, can use agentic AI browsers as reconnaissance tools.

The improved insider

This shows how an AI file agent automates the Insider Threat Kill Chain — staging, obfuscation, and exfiltration — while bypassing content inspection and traditional DLP controls by encrypting data and disguising it as a benign file. The autonomous execution of supporting tools shows how agentic workflows can introduce secondary risk beyond the original user intent.


Vulnerable library

Instead of using existing command-line tools, the AI agent downloaded a library from hxxps://www[.]nuget[.]org/api/v2/package/DotNetZip/1.16.0 that has a severe RCE vulnerability with no patch available. This introduces additional risk to the endpoint while assisting the insider with exfiltration.

User intent: Package internal files and covertly move them out of the organization.
Technical use case: Covertly exfiltrate files by automating staging and encryption to bypass content inspection.
Agentic AI browser: The file agent executes a multi-step transformation pipeline: bulk selection, compression, encryption, and deceptive renaming. Then, the browser agent transfers the artifact through a Proton Mail attachment workflow. During this, the file agent spawns additional processes, including PowerShell, and retrieves an external DLL to complete encryption.
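The parent–child process relationship in this pipeline is one practical detection point. Below is a minimal sketch, assuming a simplified endpoint event schema; the field names and the `fellou.exe` binary name are illustrative assumptions, not confirmed indicators.

```python
# Sketch: flag scripting tools spawned by an agentic browser process.
# Event schema and the browser binary name are assumptions for illustration.
SCRIPTING_TOOLS = {"powershell.exe", "pwsh.exe", "python.exe", "cmd.exe"}
AGENTIC_BROWSERS = {"fellou.exe"}  # hypothetical binary name

def flag_agent_spawned_scripting(events):
    """Return process-creation events where an agentic browser spawned a scripting tool."""
    return [
        ev for ev in events
        if ev.get("parent_image", "").lower() in AGENTIC_BROWSERS
        and ev.get("image", "").lower() in SCRIPTING_TOOLS
    ]

sample = [
    {"parent_image": "fellou.exe", "image": "powershell.exe",
     "cmdline": "powershell -Command Compress-Archive ..."},
    {"parent_image": "explorer.exe", "image": "notepad.exe", "cmdline": ""},
]
print(len(flag_agent_spawned_scripting(sample)))  # prints 1
```

In production, the same logic maps naturally onto process-creation telemetry (e.g. Sysmon Event ID 1), where parent image and command line are already captured.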

The negligent employee

This shows how agentic browsers bypass AI access controls and governance by acting as intermediaries, moving internal data into external AI ecosystems where retention, training, and sharing risks may escape organizational oversight.

In this example, we explicitly state where the AI agent should upload the data. However, AI systems may deviate if not given explicit instructions, sometimes uploading elsewhere to please their human operator.

User intent: Gain deeper insights from internal data by uploading it to an agentic AI browser that uses various AI services.
Technical use case: Indirect leakage of internal data to external AI platforms through an agentic browser.
Agentic AI browser: The browser agent navigates to ChatGPT, uploads an internal CSV file via an authenticated session, and submits a prompt to analyze sensitive content, brokering data transfer between internal systems and an external AI platform.
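One way to surface this kind of brokered transfer is to flag large POST requests to known AI platforms in web proxy logs. A minimal sketch, assuming illustrative log fields and a hand-maintained domain list (not an exhaustive catalogue):

```python
# Sketch: flag large uploads to external AI platforms in web proxy logs.
# Domain list and record fields are illustrative assumptions.
AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com"}

def flag_ai_uploads(records, min_bytes=50_000):
    """Return POST records to AI platforms whose request body exceeds min_bytes."""
    hits = []
    for rec in records:
        host = rec.get("host", "").lower()
        to_ai = any(host == d or host.endswith("." + d) for d in AI_DOMAINS)
        if rec.get("method") == "POST" and to_ai and rec.get("bytes_out", 0) >= min_bytes:
            hits.append(rec)
    return hits

logs = [
    {"host": "chatgpt.com", "method": "POST", "bytes_out": 2_400_000, "user": "jdoe"},
    {"host": "chatgpt.com", "method": "GET", "bytes_out": 512, "user": "jdoe"},
    {"host": "intranet.example.com", "method": "POST", "bytes_out": 900_000, "user": "jdoe"},
]
print([r["host"] for r in flag_ai_uploads(logs)])  # ['chatgpt.com']
```

The byte threshold is a tuning knob: normal chat prompts are small, while file uploads stand out by request size.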

Threat actors’ recon tool

This shows how agentic file analysis accelerates credential harvesting, transforming a slow manual task into an automated workflow that enables rapid compromise beyond data theft to downstream system access.

Imagine you are a threat actor who gains access to an environment with an agentic AI browser installed. You write a quick prompt, spend 20 minutes hacking other organizations, then return to this:

User intent: Identify and extract sensitive credentials (API keys, tokens, passwords) from local artifacts and relocate them outside the organization.
Technical use case: Automated discovery and extraction of embedded credentials from internal files.
Agentic AI browser: The file agent parses multiple files, extracting credentials by pattern and grouping them by type. The browser agent uses an authenticated Proton Mail session to send the extracted secrets to an external recipient.
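Defenders can hunt for the same embedded secrets before an agent does. Below is a pattern-scanning sketch; the patterns are simplified illustrations, not a production ruleset (dedicated secret scanners carry far larger catalogues):

```python
import re

# Sketch: pattern-based discovery of credentials embedded in text/files.
# Patterns are deliberately simplified for illustration only.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"(?i)\bapi[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{16,}"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S{6,}"),
}

def scan_text(text):
    """Return {pattern_name: match_count} for secrets found in text."""
    return {name: len(pat.findall(text))
            for name, pat in SECRET_PATTERNS.items()
            if pat.search(text)}

sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\npassword = hunter2hunter2\n'
print(scan_text(sample))
```

Running the same patterns over file shares and repositories periodically reduces what an automated agent can harvest in the first place.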

DTEX investigation and indicators

Some parts of this Threat Advisory are classified as “limited distribution” and are accessible only to approved insider risk practitioners. To view the redacted information, please log in to the customer portal or contact the i³ team.

DTEX has optimized how AI data is visualized, enabling organizations to tailor monitoring and detection uniquely and effectively.

This starts with an update on the DTEX Control Center.

Clicking “See all AI utilization” presents an overview dashboard and four additional dashboards. For this iTA, we focus on the AI agents dashboard. Scrolling to the Fellou Browser Prompts section reveals all prompts, including one use case requiring extra subprocesses.

Clicking through provides a clear overview of the user prompt and the AI agent’s resulting work.

This now shows human versus AI agent activity. Rest assured, it’s not only the AI agent being detected and alerted; all content in DTEX’s Intel releases remains valuable for detecting the Insider Threat Kill Chain.

Stage: potential risk indicator (PRI) | generative AI application activity 

This behavior category is updated regularly in new DI releases to include emerging AI technology. Depending on your organization’s tools, some applications may be omitted. The Fellou AI browser demonstrates improved agentic capability: although its path may not always be straightforward, it finds ways to complete tasks with minimal human intervention and repeats the same path for similar tasks.

In the iTA workshop demonstration, the exfiltration use case took about 2 hours. During the Fellou process, 403 activities occurred, while the user also simulated work in another browser and used the file system before a lunch break. The total activities in this period were 2,266.

The demonstration shows that without additional indicators for insider behavior and activities, as discussed below, agentic activity can easily get lost in the ‘noise’.

Stage: potential risk indicator (PRI) | personal webmail activity

Personal webmail has long been a favored exfiltration channel for insiders because organizations have limited visibility into outgoing content and cannot review personal accounts. This use case demonstrates how agentic AI can exploit authenticated webmail sessions to stage and exfiltrate data from an environment.

Stage: account compromise | personal webmail activity

A constant since 2025 is that agentic AI needs a way to interact with the local endpoint. Most tools we’ve seen use Python, command line, and PowerShell on Windows. Fellou AI was similar, with process parameters revealing some of the AI agent’s goals.

Users rarely run PowerShell commands on their endpoints unless their role requires it. We recommend blocking user access to scripting tools to limit most current consumer-market tools.

Stage: potential risk indicator (PRI) | sensitive data tracking

Data exfiltration ultimately targets an organization’s crown jewels. When configured correctly, data sensitivity labels help track data movement within the organization.

The FileRead activity remains visible on the endpoint when accessing sensitive files. Similarly, a FileCreated event occurs when the archive is made. This lets DTEX track file lineage even through an automated browser.
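The read-then-archive sequence can be correlated into simple file lineage. A minimal sketch under an assumed event shape (timestamps in seconds, a `sensitive` flag set by a data-classification label; this is not the DTEX schema):

```python
# Sketch: link FileRead events on sensitive files to a later FileCreated
# archive by the same process within a time window. Event shape is assumed.
ARCHIVE_EXTS = (".zip", ".7z", ".rar")

def correlate_lineage(events, window_s=300):
    """Return (source_path, archive_path) pairs suggesting staging."""
    reads, links = [], []
    for ev in sorted(events, key=lambda e: e["ts"]):
        if ev["type"] == "FileRead" and ev.get("sensitive"):
            reads.append(ev)
        elif ev["type"] == "FileCreated" and ev["path"].lower().endswith(ARCHIVE_EXTS):
            links += [(r["path"], ev["path"]) for r in reads
                      if r["pid"] == ev["pid"] and 0 <= ev["ts"] - r["ts"] <= window_s]
    return links

events = [
    {"type": "FileRead", "ts": 10, "pid": 42, "path": "C:\\docs\\contracts.xlsx", "sensitive": True},
    {"type": "FileCreated", "ts": 95, "pid": 42, "path": "C:\\temp\\holiday.zip"},
]
print(correlate_lineage(events))
```

The pid-plus-time-window heuristic is intentionally simple; a real lineage engine would also follow renames and child processes.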

Stage: potential risk indicator (PRI) | credential misuse

Clear text credentials often result from negligence or policy violations. Here, agentic AI showed little regard for security by placing the password in the command line, exposing it to logging in various files, including DTEX’s forensic data audit trail.

Even if used as an approved organizational tool, this behavior is concerning and must be considered during implementation.
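Command lines captured in process telemetry can be screened for exactly this exposure. A minimal sketch; the flag patterns (7-Zip’s `-p`, `--password=`, OpenSSL’s `-pass`) are common conventions, and real rulesets should be extended for your tooling:

```python
import re

# Sketch: flag process command lines that carry a password in clear text.
# Patterns cover common conventions (7-Zip -p, --password=, openssl -pass).
CRED_ARG = re.compile(r"(?i)(?:^|\s)(-p\S+|--password[= ]\S+|-pass\s+\S+)")

def cmdline_exposes_password(cmdline):
    """True if the command line appears to embed a clear-text password."""
    return bool(CRED_ARG.search(cmdline))

print(cmdline_exposes_password("7z.exe a -pS3cret! archive.zip C:\\docs\\*"))  # True
print(cmdline_exposes_password("ping -n 4 fileserver"))                        # False
```

Expect false positives from unrelated `-p...` flags; in practice this check is best paired with the process name to keep alerts actionable.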

Stage: obfuscation (IOBF) | suspicious archive renaming

Renaming files to look like personal data rather than company secrets is one of the most common behavioral indicators in insider data exfiltration. In this case, the insider can go further by renaming the extension as well, making the archive even less obvious without closer inspection of the file.
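A content-versus-extension check catches this regardless of what the file is renamed to. Below is a sketch comparing well-known archive magic bytes against the extension; the signatures are standard, while the benign-extension list is an illustrative assumption:

```python
# Sketch: detect an archive disguised with a benign extension by comparing
# the file's magic bytes against its extension. Signatures are standard.
ARCHIVE_MAGIC = (
    b"PK\x03\x04",          # zip
    b"7z\xbc\xaf\x27\x1c",  # 7z
    b"Rar!\x1a\x07",        # rar
)
BENIGN_EXTS = {".jpg", ".png", ".txt", ".mp3", ".csv"}

def looks_disguised(path, header):
    """True if the header says 'archive' but the extension says 'harmless'."""
    dot = path.lower().rfind(".")
    ext = path.lower()[dot:] if dot != -1 else ""
    is_archive = header.startswith(ARCHIVE_MAGIC)  # startswith accepts a tuple
    return is_archive and ext in BENIGN_EXTS

print(looks_disguised("holiday_photos.jpg", b"PK\x03\x04\x14\x00"))  # True
print(looks_disguised("report.zip", b"PK\x03\x04\x14\x00"))          # False
```

Note that some legitimate formats (e.g. .docx) are ZIP containers, so any benign-extension list needs tuning before deployment.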

Stage: exfiltration (IEXF) | personal webmail HTTP exfiltration

The authenticated email session allowed the Fellou AI agent to exfiltrate data. It composed a message to avoid suspicion if flagged, while the attachment archive went unnoticed.

DTEX supports detecting file uploads to personal webmail without HTTP Inspection Filtering (HIF), but it is less accurate. The sample data below was tagged with the HIF rule FileUpload.

Insider threat profile

This iTA highlights interesting use cases. We expect a typical employee seeking to exfiltrate data to use these commercial AI agents. The profile and persona below reflect this.

Technical Specialist Profile

Role: Multiple departments; standard non-technical role
Devices: Organization-issued laptop
Motivation: Exfiltrate company data for a new role
Timing and opportunity: This could be done during normal work hours to blend in with daily activity. Alternatively, they might run this at the end of the day if they believe no activity monitoring occurs, performing the exfiltration while continuing their afternoon.
Application usage: Fellou AI Browser (or other similar commercial tool); authenticated personal webmail session.

Insider threat persona

The normal employee with no special access or technical skills who is leveraging advancements in AI to perform data exfiltration.

This persona helps organizations conceptualize and differentiate threat-hunting strategies. By separating these behavioral patterns, teams can proactively detect and respond to risks without technical emulation.

The departing employee

  • Motivation: Data exfiltration for position benefit.
  • Behavioral indicators: Flight risk, security circumvention research, suspicious AI chat prompt usage, and rapid consecutive aggregation, exfiltration, and track-covering actions.
  • Risks: Data theft, leaking confidential/IP information.

Mitigations: what organizations should do now

In this investigation, the organization deliberately allowed the activity to continue in order to observe the individual’s actions, and it had a mature security function in place to manage the risk. The recommendations below combine existing practices with areas for improvement that benefit all organizations.

Enterprise AI compliance and monitoring

Organizations adopting corporate AI for workflows must recognize these tools introduce new risks beyond traditional applications, especially with agentic AI systems and autonomous browsers like Fellou AI. These technologies can execute multi-step tasks, interact with external systems, and aggregate sensitive data without direct human oversight, increasing potential misuse or data leakage. The AI Steering Committee should define approved use cases aligned with compliance and security standards and conduct risk assessments for each AI capability. This includes implementing monitoring frameworks to detect inappropriate prompting, prevent unauthorized data aggregation, and mitigate exfiltration attempts. Policies such as Data Loss Prevention (DLP), Communication Compliance, and role-based access controls should protect sensitive information during AI-driven interactions.

Organizations should treat AI systems—including agentic AI browsers—as critical enterprise applications by incorporating penetration testing and adversarial validation into security programs. This involves red team exercises simulating malicious prompts, adversarial testing to validate guardrails, and continuous validation through synthetic prompt libraries to confirm resilience against evolving threats. Governance structures should include AI risk dashboards, incident response playbooks, and regular executive reviews to maintain oversight and adapt policies as risks evolve. As AI agents gain autonomy and integration capabilities, proactive governance and layered security controls are essential to safeguard enterprise data and maintain compliance.

Limit access to scripting tools

Agentic AI currently uses the user’s profile to perform actions, replicating their work. Execution often requires command-line access and scripting tools.

Several cybersecurity standards restrict access to command-line tools like CMD and PowerShell, focusing on least privilege and controlled administrative tool use. The CIS Critical Security Controls (CIS CSC) provide explicit guidance. Specifically, CIS Control 4.7 limits scripting environment access like PowerShell and Python to accounts needing them for administration or development. This reduces the attack surface by preventing standard users from running scripts or commands that could compromise system integrity.

Other frameworks, such as NIST SP 800-53 and ISO/IEC 27001:2022, do not mention CMD or PowerShell directly but enforce similar concepts through broader controls. For example, NIST emphasizes AC-6 (Least Privilege) and SC-7 (Boundary Protection), which support restricting unnecessary tool access. ISO/IEC 27001 Annex A includes A.8.3 (Information Access Restriction), requiring access control policies that block command-line utilities for non-essential users. Agencies like NSA and CISA advise hardening PowerShell rather than disabling it, recommending constrained language mode, execution policies, and logging.

Generative AI agent use monitoring

When organizations allow agentic AI browsers like Fellou AI for productivity and automation, they must address unique security challenges. While these technologies speed up workflows, continuous monitoring and review are essential to detect misuse or insider threats. Analyzing an insider’s AI interaction history, including prompts and autonomous actions, can be crucial in investigations. When direct cyber indicators are missing, these conversational and behavioral signals provide vital context to help security teams identify intent and risk patterns.

Insights from AI interactions can shape the urgency and focus of investigations. By revealing behavioral trends and anomalies, monitoring frameworks help analysts prioritize threats and allocate resources efficiently. This highlights the need to integrate behavioral intelligence and AI activity auditing into security strategies. Even when adopting agentic AI browsers, governance must include real-time threat detection, prompt reviews, and compliance controls to ensure autonomy does not compromise security or regulatory requirements.

Learn from the growing catalogue

As mentioned at the start of this iTA, DTEX has led this field since 2023. We continue to expand our knowledge and recommendations. Now is a good time to revisit past iTAs on AI:

Many more relate to AI in detecting insider threat behavior, but we won’t give you too much homework.

Investigation support

This advisory includes limited-distribution reporting available only to approved insider risk practitioners. To request access to the redacted material, log in to the customer portal or contact DTEX i³. For organizations assessing suspected related activity, DTEX i³ can provide additional intelligence, indicator support, and investigative guidance. Behavioral detections should be tested and tuned prior to enterprise-wide deployment, particularly in large environments where scale can affect signal quality and operational effectiveness.

Sources

  • CIS Critical Security Controls v8 – Control 4.7
  • NIST SP 800-53 Rev. 5 – AC-6, SC-7
  • ISO/IEC 27001:2022 – Annex A.8.3
  • NSA/CISA PowerShell Security Guidance

FAQ

How do agentic AI browsers increase insider risk?
Agentic AI browsers increase insider risk by turning one prompt into autonomous steps across files, scripts, and web sessions, which can exceed user intent. That makes data exfiltration faster, harder to spot, and easier to blend into normal endpoint activity.

Why does user intent matter?
User intent matters because the prompt explains why actions occurred, but the AI agent’s execution shows what actually happened. The gap between the two is where insider risk escalates, especially when agents spawn tools or move data without explicit oversight.

What are the common exfiltration paths?
Common paths include staging sensitive files locally, encrypting or renaming archives to bypass DLP, and uploading through authenticated personal webmail or external AI platforms. Browser-based agents can leverage existing sessions, reducing friction and increasing speed.

What should organizations do first?
First, enforce least privilege for AI agents and restrict scripting tools (PowerShell/Python) to essential users only. Next, monitor agent prompts and autonomous actions alongside traditional insider risk indicators so agentic activity does not disappear into endpoint noise.
