'Shadow Escape': New Agentic Attack That Can Exploit ChatGPT, Claude, Gemini, Other AI Agents
The powerful zero-click attack was discovered by runtime AI defence platform Operant AI.

Popular AI agents and assistants, including ChatGPT, Claude, and Gemini, can be exploited by a powerful zero-click attack, runtime AI defence platform Operant AI has discovered. Called Shadow Escape, the zero-click attack exploits model context protocol (MCP) and connected AI agents and can enable data exfiltration via AI agents and assistants, according to the company.
Shadow Escape affects any organisation using MCP-enabled AI agents or MCP-connected AI assistants, including ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google), custom AI agents built on various LLM backends, open-source alternatives like Llama-based assistants, and industry-specific AI copilots across healthcare, finance, and customer service, Operant AI said.
The common thread isn’t the specific AI Agent; it's the MCP that grants these agents unprecedented access to organisational systems. Operant AI has reported this security issue to OpenAI and initiated the common vulnerabilities and exposures (CVE) designation process.
The Attack Chain
Unlike traditional prompt injection or data leaks, this attack doesn’t need user error, phishing, or malicious browser extensions. Instead, it leverages the trust already granted to AI agents and AI assistants through legitimate MCP connections.
The attack unfolds in three stages:
Infiltration: Malicious instructions are embedded invisibly in documents uploaded to AI agents, which appear completely legitimate and pass standard security scans.
Discovery: AI agents discover and surface sensitive data across connected databases without explicit user requests, leveraging MCP.
Exfiltration: AI agents transmit entire datasets to external endpoints, disguised as routine performance tracking or analytics uploads.
The attack first enables the AI agent to access and display data and personal information to any human interacting with it. It then uses an invisible zero-click instruction to extract data to the dark web, without IT or standard security measures blocking or detecting the breach.
Using the Shadow Escape attack path, malicious entities are able to gain everything needed to perpetrate identity theft, financial fraud, and more, without users realising the exfiltration is happening.
Shadow Escape can impact sensitive, privacy-regulated AI/human interactions, including medical assistants using AI to access patient records, insurance databases, or treatment protocols or banking representatives using AI copilots.
It demonstrates a new class of threats that operate inside the firewall and within authorised identity boundaries, making them invisible to conventional cybersecurity monitoring. This comes at a time when enterprises are rapidly adopting agentic AI through MCP servers and MCP-based integrations to connect LLMs to internal tools, APIs, and databases.