Operant AI Launches Security Application For AI Agents, To Boost Runtime Protection For Enterprises
The application protects against rogue agents with features such as trust scoring, agentic access controls, and threat blocking for model context protocols and non-human identities.

As organisations rapidly adopt autonomous artificial intelligence agents and complex multi-agent workflows, especially in high-growth markets like India, security challenges have escalated.
According to Deloitte's State of GenAI report, over 80% of Indian organisations are exploring autonomous agents, with 50% focused on multi-agent setups that require minimal human oversight.
To address these security challenges, Silicon Valley-headquartered Operant AI has launched AI Gatekeeper, a real-time security application for live AI applications, agents and agentic AI workflows — across Kubernetes, private, hybrid and edge environments.
AI Gatekeeper goes beyond Operant's existing 3D Defense capabilities, offering protections against rogue agents, including trust scoring, agentic access controls and threat blocking for model context protocols and non-human identities.
According to Operant AI, Indian enterprises and cybersecurity leaders are interested in deploying AI agents but also rely on third-party vendors for AI deployment, complicating data governance and security. Key concerns include data leakage, model poisoning and rogue agent behaviour. AI Gatekeeper addresses these issues by securing agentic AI deployments at runtime across all platforms.
AI Gatekeeper Capabilities
According to Operant, AI Gatekeeper's capabilities include:
Runtime Defence For AI Across Public, Private and Hybrid Clouds
Moving beyond Kubernetes, Operant's 3D Runtime Protection is now available across public, private and hybrid cloud platforms.
Live, comprehensive catalogues of AI workloads, AI agents, tools, models and AI platforms that update automatically as AI is used across an organisation (covering providers such as OpenAI, DeepSeek, Cohere, Anthropic, Hugging Face and more).
Support for large data platforms, large language models and AI agent platforms.
Defence analytics on threats being blocked at runtime.
Cross-Platform Threat Modelling
AI Security Graphs that map and flag risky data flows between AI workloads, agents and AI APIs.
Mappings to OWASP Top 10 threat vectors for AI/LLMs and AI agents, including sensitive data leakage, API key and secrets leakage, prompt injection and data poisoning risks.
Advanced Threat Detection For AI Agents
Detection of supply-chain risks for AI agents, with mapping of trust scores and trust boundaries.
Unauthenticated and unauthorised AI agent detection and defence.
Least privilege runtime execution and least permissioned trust boundaries for AI agents.
"The AI that we are now securing is a completely new beast compared to even two years ago," said Vrajesh Bhavsar, Operant AI's CEO and co-founder. "AI Gatekeeper can bring Operant's unique defensive capabilities to everywhere customers are deploying AI, alongside critical new capabilities for protecting sensitive data and the rest of the application environment from the new attack surface that is being fuelled by rapid agentic AI adoption."