ADVERTISEMENT

ChatGPT Atlas Browser Vulnerability Could Let Hackers Take Control, Study Reveals

The flaw allows attackers to plant malicious code or commands that the browser might later execute, even without the user's knowledge.

<div class="paragraphs"><p>OpenAI first introduced the memory feature in February 2024.&nbsp;(Photo source: Freepik)</p></div>
OpenAI first introduced the memory feature in February 2024. (Photo source: Freepik)
Show Quick Read
Summary is AI Generated. Newsroom Reviewed

A recent study has found that OpenAI's ChatGPT Atlas browser may not be as secure as it seems. Cybersecurity researchers at LayerX Security have found a vulnerability that could let hackers inject hidden instructions into the AI-powered browser's memory. This gives them control over users' devices, reported The Hacker News.

The report mentioned that the flaw allows attackers to plant malicious code or commands that the browser might later execute, even without the user's knowledge. 

"This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware," Or Eshed, LayerX Security CEO and co-founder, said.

"Atlas currently does not include any meaningful anti-phishing protections, meaning that users of this browser are up to 90% more vulnerable to phishing attacks than users of traditional browsers like Chrome or Edge," the security lab said in its report.

OpenAI memory feature

OpenAI first introduced the memory feature in February 2024 to make the chatbot more helpful and personalized. It allows ChatGPT to remember small details like a user's name, preferences, or interests to improve future conversations. 

However, the same function now seems to have a loophole. 

What Is The Flaw In ChatGPT Atlas Browser?

The researchers have found a flaw, called Cross-Site Request Forgery (CSRF), that takes advantage of ChatGPT’s memory feature. So, once the memory is corrupted, it is still available even if the user switches devices or logs in later. 

This means that when a user tries to use ChatGPT for normal tasks, the hidden instructions could secretly allow hackers to take control of their account, browser, or connected systems.

Michelle Levy, head of security research at LayerX Security, said, "What makes this exploit uniquely dangerous is that it targets the AI's persistent memory, not just the browser session."

How Does Exploitation Work?

Once users log in to the ChatGPT account, a hacker tricks them into clicking on a malicious link. After they open the link, the malicious website plants hidden instructions without the knowledge of the users.

The harmful instructions remain saved inside ChatGPT's memory, even after the browser is closed. When you use ChatGPT again, it will allow hackers to run code in the background, giving them control over your account, browser, or system.

Edge, Chrome Outperform AI Rivals

Researchers tested different web browsers to see how well they could block real-world online threats like viruses, fake websites and phishing attacks. In security tests, Microsoft Edge delivered the best performance by blocking 53% of online threats, followed by Google Chrome at 47% and Dia at 46%.

 AI-powered browsers such as Perplexity's Comet and ChatGPT Atlas lagged far behind, stopping only 7% and 5.8% of malicious websites.

Opinion
OpenAI Offering One Year Of Free ChatGPT GO To Indian Users
OUR NEWSLETTERS
By signing up you agree to the Terms & Conditions of NDTV Profit