Google Refutes ‘Misleading’ Allegations Of Using Gmail Data To Train Gemini AI
A report claimed the only way to prevent Gmail from accessing personal data for AI training was to switch off the platform’s ‘Smart Features’.

Google has firmly denied claims that it scans users’ Gmail messages and attachments to train its Gemini AI models, describing these allegations as “misleading”.
The controversy began after Malwarebytes published a report claiming that the only way to prevent Gmail from accessing personal data for AI training was to switch off the platform’s ‘Smart Features’, such as spell check.
The claims sparked a wave of outrage online, as users voiced frustration over the idea that Google was using their personal data for AI training without consent.
In a recent post on social media, Google clarified that it has not made any changes to user settings and does not use Gmail content to train its Gemini AI models.
“Let's set the record straight on recent misleading reports. Here are the facts:
• We have not changed anyone’s settings.
• Gmail Smart Features have existed for many years.
• We do not use your Gmail content to train our Gemini AI model.
We are always transparent and clear if we make changes to our terms & policies,” Gmail said in an X post on November 21, 2025.
The Malwarebytes report had suggested that Google was analysing private Gmail messages and attachments to improve its AI tools, and advised users to disable the Smart Features and personalisation options if they wished to stop Gmail from using their emails for AI training.
Following further review, Malwarebytes updated its article to clarify that Google does not use people’s emails to train its AI models, attributing the initial misunderstanding to ambiguous wording from Google.
“The settings themselves aren’t new, but the way Google recently rewrote and surfaced them led a lot of people (including us) to believe Gmail content might be used to train Google’s AI models, and that users were being opted in automatically. After taking a closer look at Google’s documentation and reviewing other reporting, that doesn’t appear to be the case,” Malwarebytes wrote.
