Meta Cuts Off 1,000 Kenyan Contractors After They Reportedly Saw People Have Sex on Ray-Ban Glasses
Kenyan contractors said some of what they saw included private moments, such as people changing clothes, using bathrooms, and, in some cases, sexual activity

Meta has ended its contract with a Kenyan outsourcing firm after reports that workers reviewing Ray-Ban Meta smart glasses footage allegedly came across highly private content, including people having sex, the company has confirmed.
The decision has raised further questions about how the Ray-Ban Meta smart glasses handle the videos and photos users record, and how that material is later shared with outside firms that help train Meta's AI systems.
These concerns were first reported in February 2026, when journalists in Sweden and Kenya spoke to Sama employees. Workers said they had seen videos that appeared to show people in private situations, and in some cases, they believed users may not have realised they were being recorded at the time.
Meta Ray-Ban Smart Glasses Data Review Under Fire
The Meta Ray-Ban smart glasses are designed to take photos and videos, and they have a small light that indicates when they are recording. Meta also sends some of this recorded content to outside workers who help label and sort the data so the company can improve its artificial intelligence features.
The issue began after workers at Sama, an outsourcing company based in Kenya, said their jobs required them to review sensitive footage captured by the smart glasses. They claimed that some of what they saw included private moments, such as people changing clothes, using bathrooms, and, in some cases, sexual activity. These claims raised concerns about how clearly users understood what was being recorded and who might later see it.
Meta has said that any content shared with contractors is filtered to protect privacy, including steps like blurring faces. The company also said it paused its work with Sama while it looked into the concerns, and later ended the contract, saying the firm did not meet its standards.
Contractor Dispute Starts
Now, according to Ars Technica, Sama has rejected the implication that it failed to meet requirements. The company said it was not informed of any specific performance issues and maintained that it followed operational and security standards across all its work. It also said it was focused on supporting employees affected by the end of the Meta contract, which reportedly impacted more than 1,000 workers.
The situation has also raised bigger questions about how companies collect and use data to train artificial intelligence systems. People who do this kind of work, often called data annotators, go through large amounts of video and audio and label what they see so AI systems can learn from it. Depending on the material, this can sometimes include upsetting or uncomfortable content.
Workers told journalists they felt they did not really have a choice about what they were assigned to review. They said they were expected to keep going even when the footage was difficult to watch, describing it as something they simply had to process as part of the job.
Meta has said that people agree to data processing when they use its devices and services, and that having humans review content is part of improving its technology. The company has not responded directly to claims that it terminated the contract because workers spoke publicly about their experiences.
Regulatory Attention Follows
The issue has also caught the attention of regulators.
In the UK, the Information Commissioner's Office said it was worried by the reports and stressed that people should clearly understand and control how their personal data is used. In Kenya, data authorities have also started looking into privacy concerns linked to the smart glasses and how the information is used to train AI systems.
At the same time, legal pressure is growing. A class-action lawsuit in the United States alleges that Meta and its partners failed to properly protect users, in violation of privacy and consumer protection laws. The case is still ongoing and has not been decided by a court.
For now, Meta continues to stand by its approach to building AI. But critics say the situation poses a significant privacy risk, as fast-moving AI development can clash with the realities of people reviewing large amounts of personal, and sometimes sensitive, data. Questions about what users are told, what workers actually see, and how transparent companies are remain unresolved.
© Copyright IBTimes 2025. All rights reserved.