Even as agentic artificial intelligence systems reduce our burden by autonomously performing tasks and making decisions, not all is rosy about this technology. In a dangerous development, AI agents are poised to make cyberattacks easier, from social engineering and deepfakes to user credential abuse.
AI agents will increasingly exploit weak authentication by automating credential theft and compromising authentication communication channels. By 2027, these agents will cut the time it takes to exploit account exposures by 50%, according to new research by Gartner Inc.
AI Agents Will Automate User Account Takeovers
Account takeover (ATO) is a common attack vector because weak authentication credentials, such as passwords, can be obtained through a number of methods, including malware, phishing, social engineering and data breaches. Attackers then use bots to automate a flurry of login attempts across other services and platforms in the hope that the same credentials have been reused on several sites, a pattern known as credential stuffing.
AI agents will automate more steps of ATO, from social engineering based on deepfake voices to end-to-end automation of user credential abuse. Because of this, organisations will have to introduce technologies to detect, monitor and classify interactions involving AI agents.
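To make that detection idea concrete, here is a minimal sketch of one common heuristic: flagging a source that generates failed logins across many distinct accounts in a short window, the signature of bot-driven credential stuffing. It is illustrative only; the names and thresholds are assumptions, not drawn from Gartner's research or any particular product.

```typescript
// Sliding-window detector for credential-stuffing patterns.
// All names and thresholds here are illustrative assumptions.

interface FailedLogin {
  sourceIp: string;
  username: string;
  timestamp: number; // milliseconds since epoch
}

class StuffingDetector {
  private events: FailedLogin[] = [];

  constructor(
    private windowMs = 10 * 60 * 1000,  // look at the last 10 minutes
    private maxDistinctAccounts = 20,   // threshold: many *different* accounts
  ) {}

  record(event: FailedLogin): void {
    this.events.push(event);
  }

  // A burst of failures spread across many distinct accounts from one
  // source is the classic stuffing signature, unlike a legitimate user
  // repeatedly mistyping their own password.
  isSuspicious(sourceIp: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    const accounts = new Set(
      this.events
        .filter(e => e.sourceIp === sourceIp && e.timestamp >= cutoff)
        .map(e => e.username),
    );
    return accounts.size >= this.maxDistinctAccounts;
  }
}

// Usage: record each failed login, then step up to MFA or block on a hit.
const detector = new StuffingDetector();
detector.record({ sourceIp: "203.0.113.7", username: "alice", timestamp: Date.now() });
if (detector.isSuspicious("203.0.113.7")) {
  // escalate: challenge, throttle or block the source
}
```

Real deployments layer further signals, such as IP reputation and device fingerprints, on top of simple rate heuristics, since AI agents can rotate addresses and pace their attempts to evade a single threshold.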
According to Gartner, in the face of this threat, security leaders should shift to passwordless, phishing-resistant multi-factor authentication. Users should migrate from passwords to multidevice passkeys whenever possible.
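As a rough illustration of what that migration involves, the sketch below registers a passkey in the browser using the standard WebAuthn API (navigator.credentials.create). The relying-party and user values are placeholders, and in a real deployment the challenge must be generated and verified server-side.

```typescript
// Browser-side sketch of passkey registration via the WebAuthn API.
// The rp and user values are placeholders, not a real configuration.

async function registerPasskey(challenge: Uint8Array): Promise<Credential | null> {
  const options: PublicKeyCredentialCreationOptions = {
    challenge, // server-generated random bytes, verified on the server
    rp: { name: "Example Corp", id: "example.com" }, // placeholder relying party
    user: {
      id: new TextEncoder().encode("user-1234"), // placeholder user handle
      name: "alice@example.com",
      displayName: "Alice",
    },
    // COSE algorithm identifiers: -7 = ES256, -257 = RS256
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },
      { type: "public-key", alg: -257 },
    ],
    authenticatorSelection: {
      residentKey: "required",      // discoverable credential, i.e. a passkey
      userVerification: "required", // biometric or device PIN
    },
  };
  // The private key never leaves the authenticator; only a public key and
  // attestation are sent back, so there is no shared secret to steal.
  return navigator.credentials.create({ publicKey: options });
}
```

Because the resulting credential is cryptographically bound to the site's origin and the private key never leaves the device, it cannot be phished and replayed against a look-alike domain, which is what makes passkeys phishing-resistant.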
Social Engineering Attacks To Target Executives, Employees
Along with ATO, technology-enabled social engineering will also pose a significant threat to corporate cybersecurity. Gartner predicts that by 2028, 40% of social engineering attacks will target the broader workforce, not just executives. Attackers are now combining social engineering tactics with counterfeit reality techniques, such as deepfake audio and video, to deceive employees during calls.
Although only a few high-profile cases have been reported, these incidents have resulted in substantial financial losses for organisations, highlighting the scale of this threat. Deepfake detection is still in its early stages, particularly across the diverse attack surface of real-time person-to-person voice and video communications on various platforms.
Organisations will have to adapt procedures and workflows to better resist attacks that use counterfeit reality techniques, and one of the key steps will be training employees to recognise deepfake-enabled social engineering.