Privacy Stripped: Meta's AI Glasses Are Exposing Users' Intimate Lives to Human Reviewers, Say Reports

Meta's AI data pipeline in Nairobi is reportedly exposing the intimate lives of smart-glasses users and fueling a global privacy storm.

Meta's AI glasses allow users to activate an AI assistant through voice commands.
(Photo source: Meta)

Meta Platforms' AI-powered smart glasses are facing fresh privacy concerns after two Swedish newspapers reported that contractors reviewing user data have accessed highly intimate recordings captured through the device.

According to reports by Göteborgs-Posten and Svenska Dagbladet, contractors employed by Sama, a Nairobi-based outsourcing firm working with Meta, are tasked with annotating user-generated content to train artificial intelligence systems. The work includes labelling images, transcribing audio and evaluating how Meta's AI assistant responds to user prompts.

Several data annotators told Svenska Dagbladet they had reviewed sensitive footage recorded via the smart glasses.

“In some videos, you can see someone going to the toilet or getting undressed. I don't think they know, because if they knew they wouldn't be recording,” one worker was quoted as saying.

Another worker said, “We see everything from living rooms to naked bodies. Meta has that type of content in its databases. People can record themselves in the wrong way and not even know what they are recording. They are real people like you and me.”

Some annotators claimed that certain recordings included explicit sexual content. Workers also said personal electronic devices are prohibited inside their offices to prevent data leaks.

How The Glasses Work

Meta's AI glasses allow users to activate an AI assistant through voice commands, analyse surroundings via image processing and record short videos. Meta's privacy terms state that “in some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human).”

A Meta spokesperson told The Telegraph: “When people share content with Meta AI, like other companies, we sometimes use contractors to review this data to improve people's experience with the glasses, as stated in our privacy policy. This data is first filtered to protect people's privacy. We take the protection of people's data very seriously and we're continuously refining our efforts and tools in this area.”

Former Meta employees cited by the Swedish newspapers said faces in annotation data are typically blurred automatically. However, Kenyan workers claimed the anonymisation tools do not always perform as intended.

“The algorithms sometimes miss. Especially in difficult lighting conditions, certain faces and bodies become visible,” a former Meta worker reportedly said.

Ethical Concerns

Data annotators interviewed by the publications said they often feel uneasy reviewing such personal material but continue due to financial necessity.

“You understand that it is someone's private life you are looking at, but at the same time you are just expected to carry out the work. You are not supposed to question it. If you start asking questions, you are gone,” one worker said.

Meta has not publicly commented beyond its privacy policy and spokesperson statement, but the reports have intensified scrutiny around AI data training practices and user privacy safeguards.

Essential Business Intelligence, Continuous LIVE TV, Sharp Market Insights, Practical Personal Finance Advice and Latest Stories — On NDTV Profit.

Loading...