Beware! From Tool Misuse To Privacy Breaches: Top 10 Risks Of Agentic AI And How To Defend Against Them

Agentic AI comes with its own set of challenges, as Wipro illustrates, exposing systems to new kinds of threats that traditional digital security systems might not be equipped to handle.

Agentic AI is autonomous and has decision-making executive function, as opposed to large language models. (Photo: Freepik)

Agentic AI is fast becoming a ubiquitous asset for firms, turning into an integral part of almost every digital workflow. Unlike large language models, agentic AI is autonomous and has decision-making executive function.

This comes with its own set of challenges, exposing systems to new kinds of threats that a traditional digital security system might not be equipped to handle, according to Saugat Sindhu, Global Head – Advisory Services, Cybersecurity & Risk Services, Wipro Ltd.

Sindhu has highlighted ten major security risks stemming from agentic AI, as well as ways to defend against them:

1. Memory Poisoning 

Security Risk: Malicious or erroneous data could be injected into either the short-term or long-term memory of the AI agent, corrupting its context and altering its decisions. For example, an AI agent utilised by a bank may falsely remember that a loan was approved because its records were tampered with, leading to unauthorised fund disbursement.

Safeguard: Firms should validate memory content consistently, and isolate memory sessions for sensitive tasks. They should also take precautionary steps like implementing a strong authentication system for memory access, and deploy anomaly detection and memory sanitisation routines.
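To make the idea concrete, here is a minimal Python sketch of validating and sanitising entries before they are committed to an agent's memory store, with memories isolated per session. The trusted-source list, injection patterns and MemoryEntry structure are illustrative assumptions, not part of any specific product.

```python
import re
from dataclasses import dataclass

# Illustrative assumptions: which upstream systems are trusted, and which
# phrases count as likely prompt-injection attempts.
TRUSTED_SOURCES = {"core_banking_api", "human_reviewer"}
INJECTION_PATTERNS = [r"ignore (all|previous) instructions", r"system prompt"]

@dataclass
class MemoryEntry:
    session_id: str
    source: str
    text: str

def sanitise_and_validate(entry: MemoryEntry) -> bool:
    """Reject entries from untrusted sources or containing injection markers."""
    if entry.source not in TRUSTED_SOURCES:
        return False
    lowered = entry.text.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def write_memory(store: dict, entry: MemoryEntry) -> None:
    # Keep memories isolated per session so one poisoned session
    # cannot contaminate the context of another.
    if sanitise_and_validate(entry):
        store.setdefault(entry.session_id, []).append(entry.text)
```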

2. Tool Misuse

Security Risk: Attackers can trick AI agents into abusing their integrated tools (APIs, payment gateways, document processors) by employing deceptive prompts, thereby facilitating hijacking. For example, an AI-powered human resource chatbot may be manipulated into sending confidential salary data to an external email via a forged request.

Safeguard: Enforce strict tool access verification; monitor tool usage patterns in real time; set operational boundaries for high-risk tools; and validate all agent instructions before execution.
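A minimal sketch of what pre-execution verification of tool calls might look like, assuming a per-agent allow-list and a hypothetical internal email domain; the agent, tool and domain names are made up for illustration.

```python
# Which tools each agent is permitted to call (illustrative assumption).
ALLOWED_TOOLS = {"hr_chatbot": {"fetch_policy", "send_email"}}
INTERNAL_DOMAIN = "@example-corp.com"  # hypothetical boundary for high-risk sends

def validate_tool_call(agent: str, tool: str, args: dict) -> None:
    """Block tool calls outside the agent's allow-list or operational boundaries."""
    if tool not in ALLOWED_TOOLS.get(agent, set()):
        raise PermissionError(f"{agent} may not call {tool}")
    # Operational boundary for a high-risk tool: no external recipients.
    if tool == "send_email" and not args.get("to", "").endswith(INTERNAL_DOMAIN):
        raise PermissionError("external recipients are blocked for this agent")

# Usage: validate_tool_call("hr_chatbot", "send_email", {"to": "x@evil.com"})
# raises PermissionError instead of letting salary data leave the company.
```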

3. Privilege Compromise

Security Risk: Attackers can exploit improper permission configurations or dynamic role inheritance to execute unauthorised actions. For example, an employee could use an AI agent in a government portal to escalate privileges and access Aadhaar-linked information without proper authorisation.

Safeguard: Companies should apply granular permission controls and validate access dynamically. They should also constantly monitor role changes and audit privilege operations comprehensively.
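One way to picture granular, dynamically validated permissions is a role-to-permission lookup that is checked and audit-logged on every call. The roles and actions below are hypothetical; a real deployment would back this with an IAM system and tamper-evident logs.

```python
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("privilege_audit")

# Illustrative role model: only officers may touch Aadhaar-linked records.
ROLE_PERMISSIONS = {
    "clerk": {"read_public_record"},
    "officer": {"read_public_record", "read_aadhaar_linked"},
}

def check_access(user_role: str, action: str) -> bool:
    """Validate access dynamically and leave an audit trail for every attempt."""
    allowed = action in ROLE_PERMISSIONS.get(user_role, set())
    audit.info("role=%s action=%s allowed=%s", user_role, action, allowed)
    return allowed

# check_access("clerk", "read_aadhaar_linked") -> False, and the attempt is logged.
```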

4. Resource Overload

Security Risk: Overwhelming an AI’s compute, memory, or service capacity can degrade its performance or cause outright failure. This is a dangerous security threat in mission-critical systems such as healthcare or transport. For example, during the festive season, an e-commerce AI agent may be flooded with thousands of simultaneous payment requests, causing transaction failures.

Safeguard: Companies must implement resource management controls using adaptive scaling and quotas. They should also monitor system loads in real time and apply AI rate-limiting policies.
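A common way to implement such rate limiting is a token bucket per agent or per tool; the capacity and refill values below are arbitrary illustrations, not recommendations.

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: requests spend tokens that refill over time."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over quota: queue, shed, or scale out instead of failing hard

# Illustrative quota for the payments path of an e-commerce agent.
payments_limiter = TokenBucket(capacity=100, refill_per_sec=50)
```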

5. Cascading Hallucination Attacks

Security Risk: Information that is AI-generated and false but plausible may circulate through systems, disrupting decisions. This could especially be a problem for financial risk models and legal document generation. For example, an AI agent in a stock trading platform generates a misleading market report, which is then utilised by other financial systems, magnifying the impact of the error.

Safeguard: Firms should validate outputs with multiple trusted sources and apply behavioural constraints. They should also employ feedback loops for corrections and have requirements for secondary validation before making critical decisions.
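A simple form of secondary validation is to accept an agent-generated figure only when enough independent sources agree within a tolerance. The 2% tolerance and two-source requirement below are assumptions for illustration.

```python
def within_tolerance(value: float, references: list[float], tol: float = 0.02) -> bool:
    """Accept an agent's figure only if at least two independent sources agree."""
    agreeing = sum(
        1 for r in references if abs(value - r) / max(abs(r), 1e-9) <= tol
    )
    return agreeing >= 2

# Usage sketch: if the agent's market figure disagrees with independent feeds,
# stop it from propagating and escalate to a human instead.
# if not within_tolerance(agent_price, [feed_a_price, feed_b_price]):
#     escalate_to_human()
```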

6. Intent Breaking And Goal Manipulation

Security Risk: Attackers may alter an AI’s objectives or its reasoning to redirect and manipulate its actions. For example, a procurement AI in a company may be manipulated into always selecting a particular vendor, bypassing competitive bidding.

Safeguard: Companies should validate planning processes, set boundaries for reflection and reasoning to protect goal alignment dynamically, and audit AI behaviour for deviations.
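As an illustration of validating a plan against goal constraints before execution, the sketch below checks each proposed step against an immutable, approved procurement process; the step names are hypothetical.

```python
# Illustrative, immutable description of the approved procurement goal.
APPROVED_PROCUREMENT_STEPS = {"collect_bids", "score_bids", "select_lowest_compliant"}

def validate_plan(plan_steps: list[str]) -> list[str]:
    """Reject any plan whose steps deviate from the approved goal constraints."""
    deviations = [s for s in plan_steps if s not in APPROVED_PROCUREMENT_STEPS]
    if deviations:
        # e.g. a manipulated plan containing "select_vendor:acme_only"
        raise ValueError(f"plan violates procurement goal constraints: {deviations}")
    return plan_steps
```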

7. Overwhelming Human Overseers

Security Risk: Attackers may flood human reviewers with excessive AI output to exploit cognitive overload. This is a serious issue in high-volume sectors such as banking, insurance, and e-governance. For example, an insurance company’s AI agent sends hundreds of claim alerts to staff, making it hard to spot genuine fraud cases.

Safeguard: Firms should build advanced human-AI interaction frameworks and adjust oversight levels based on risk and confidence. They should also use adaptive trust mechanisms.
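One possible adaptive-oversight pattern is risk-based triage, so human reviewers only see low-confidence or high-value items rather than every alert; the thresholds below are arbitrary illustrations.

```python
def route_claim(fraud_score: float, claim_value: float) -> str:
    """Adjust the level of human oversight based on risk and confidence."""
    if fraud_score < 0.2 and claim_value < 50_000:
        return "auto_approve"          # no human alert generated
    if fraud_score > 0.8:
        return "escalate_immediately"  # small, high-signal review queue
    return "daily_batch_review"        # aggregated digest, not one alert per claim
```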

8. Agent Communication Poisoning

Security Risk: Attackers may tamper with communication between AI agents to spread false data or disrupt workflows. This is especially risky in multi-agent systems used in logistics or defence sectors. For example, in a logistics company, two AI agents coordinating deliveries are fed false location data, sending shipments to the wrong cities.

Safeguard: Companies should use cryptographic message authentication and enforce communication validation policies. They should also monitor inter-agent interactions and require multi-agent consensus for critical decisions.
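Cryptographic message authentication between agents can be as simple as an HMAC over each message, as in this standard-library sketch; key distribution and rotation are deliberately out of scope, and the key shown is a placeholder.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"replace-with-key-from-a-secrets-manager"  # placeholder only

def sign(message: dict) -> str:
    """Produce an HMAC-SHA256 signature over a canonical JSON encoding."""
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(message: dict, signature: str) -> bool:
    """Constant-time check that the message was not tampered with in transit."""
    return hmac.compare_digest(sign(message), signature)

# A receiving agent discards any delivery update whose signature fails to verify.
```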

9. Rogue Agents In Multi-Agent Systems

Security Risk: Malicious or compromised AI agents may operate outside their monitoring boundaries and execute unauthorised actions or steal data. For example, in a smart factory, a compromised AI agent may start shutting down machines unexpectedly, disrupting production.

Safeguard: Firms must restrict AI agents' autonomy with policy constraints and continuously monitor agent behaviour. They should host agents in controlled environments and conduct regular AI red teaming exercises.
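A bare-bones policy wrapper can keep an agent's actions inside an allow-list and flag anything outside it for investigation; the agent and action names are hypothetical.

```python
# Illustrative policy constraint for a factory-floor agent.
ALLOWED_ACTIONS = {"adjust_speed", "schedule_maintenance"}

def execute_action(agent_id: str, action: str) -> bool:
    """Run an action only if policy allows it; otherwise block and raise an alert."""
    if action not in ALLOWED_ACTIONS:
        print(f"ALERT: agent {agent_id} attempted off-policy action '{action}'")
        return False
    print(f"agent {agent_id} performed '{action}'")
    return True

# execute_action("factory_agent_7", "shutdown_machine") is blocked and flagged,
# while routine actions proceed inside the controlled environment.
```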

10. Privacy Breaches

Security Risk: Granting agents excessive access to sensitive user data (emails, Aadhaar-linked services, financial accounts) increases the exposure risk if they are compromised. For example, an AI agent in a fintech app that accesses users’ PAN, Aadhaar, and bank details puts all of that data at risk if the agent is breached.

Safeguard: Firms must define clear data usage policies; implement robust consent mechanisms; maintain transparency in AI decision-making; allow user intervention to correct errors.
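A consent-aware data gate is one way to enforce such usage policies and keep access to the minimum required; the consent map, field names and purposes below are illustrative only.

```python
# Illustrative consent record: which sensitive fields each user has
# agreed to share, and for which purposes.
USER_CONSENT = {"user_42": {"bank_details": {"loan_check"}}}

def fetch_field(user: str, field: str, purpose: str, record: dict):
    """Return a sensitive field only when consent exists for this exact purpose."""
    allowed = purpose in USER_CONSENT.get(user, {}).get(field, set())
    if not allowed:
        raise PermissionError(f"no consent to use {field} for {purpose}")
    return record.get(field)

# fetch_field("user_42", "aadhaar", "marketing", record) raises PermissionError,
# so the agent never holds data it has no sanctioned use for.
```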


WRITTEN BY
Prajwal Jayaraj
Prajwal Jayaraj covers business news for NDTV Profit.