AI Racism Exposed: Anger Erupts as Deepfakes Are Used To Target Black Individuals
Racist AI deepfakes cause outrage as Black communities get digitally targeted

Deepfakes are getting out of hand, and it is disturbing to see signs that AI is treating people unequally. AI was once seen as a transformative technology that could help society become fairer and more connected. Instead, new reports have exposed a much darker side, with deepfakes and generative media being weaponised against marginalised communities, particularly Black individuals.
It is a growing worry, as deepfake tools that can synthesise hyper-realistic images and videos have become widely available and remarkably easy to use. A fake clip of a white tech CEO accused of shoplifting might be dismissed as humorous or implausible, but the implications are far more serious when these tools are turned on people already vulnerable to discrimination and systemic bias, reports suggest.
For Black people, who are reportedly already disproportionately subjected to wrongful convictions and racial profiling, the spread of racist deepfakes threatens further harm to lives, reputations and access to justice. These AI-made fabrications do not simply distort reality; they feed pre-existing prejudices and deepen public mistrust, prompting outrage and sending experts racing to work out how best to respond and protect those most at risk.
AI Targeting Vulnerable People?
The emergence of consumer-grade deepfake platforms such as Sora, Vibes and Veo 3 has dramatically increased the creation and sharing of synthetic videos purporting to be real.
On one such platform, according to reports, a deepfake video went viral showing Sam Altman, head of OpenAI, allegedly stealing from a shop, a fake that many dismissed as implausible simply because of who was depicted. But that incredulity does not extend to everyone. Experts warn that if similar deepfakes were created of Black individuals committing crimes or behaving badly, they could be believed more readily and have devastating consequences.
Deepfakes draw on large datasets and learning models that are often trained on unrepresentative samples of people. That, at least in theory, makes them more likely to reproduce or exaggerate harmful stereotypes, especially against under-represented groups. The problem compounds other documented biases in AI, such as facial recognition systems that have reportedly misidentified Black men at much higher rates than white men, leading to possible wrongful arrests. In one case, a high school AI gun-detection system in Baltimore misclassified a Black student's snack bag as a weapon, triggering an armed police response.
Nor is this just a handful of news reports: academic research shows that AI systems can represent darker-skinned individuals in more homogeneous or stereotyped ways than lighter-skinned individuals, demonstrating that bias in AI extends well beyond detection to content generation itself. These biases are not merely theoretical; they shape the output of the generative models used to make deepfakes and synthetic media, meaning Black individuals are more likely to be misrepresented or portrayed unfairly.
Social media has already seen racist deepfake videos used to mock Black people or spread false stories, and one recently went viral. Here it is.
the first ad I saw from this company was an ai Kylie Jenner stealing but now it’s just several ai Black people being advertised as a “prank” this is a weapon pic.twitter.com/pWNTIj7mBG
— fat!so? (@fatfabfeminist) January 7, 2026
On various platforms, users have also reported AI-made content showing historically important Black figures such as Martin Luther King Jr., or ordinary Black people, in contexts that reinforce harmful stereotypes or invite misunderstanding. Here is one such video.
WTF
by u/Buddymaster39449 in blackmen
Such content spreads very fast and can be hard to distinguish from the truth without expert analysis.
Possible Ways To Mitigate It
Addressing the risks posed by racist deepfakes and other biased AI content requires a comprehensive plan spanning technology, policy and public engagement. Researchers and engineers are developing new ways to detect and reduce bias in AI systems. For example, a team at the University at Buffalo has developed deepfake detection algorithms specifically designed to reduce the racial and gender disparities seen in earlier models.
By making detectors aware of demographic variables and training them to be less biased, these new tools can better distinguish real from synthetic content across different groups. Deepfake mitigation is not just about detection, however. Systematic reviews of bias in generative AI recommend diverse and representative training data, fairness-aware learning techniques, and multidisciplinary methods to uncover and correct sources of prejudice in models.
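To give a flavour of the idea, one common fairness-aware approach is to add a penalty to a detector's training loss whenever its error rates differ across demographic groups. The minimal Python sketch below is purely illustrative, not the Buffalo team's actual method: the function name, the choice of penalty (the gap between the best- and worst-served group's error rate) and the `lam` weighting knob are all assumptions made for this example.

```python
import numpy as np

def fairness_aware_loss(probs, labels, groups, lam=1.0):
    """Cross-entropy loss plus a penalty on unequal per-group error rates.

    probs  : model's predicted probability that each sample is a deepfake
    labels : ground truth (1 = deepfake, 0 = real)
    groups : demographic group id for each sample (illustrative only)
    lam    : weight of the fairness penalty (hypothetical tuning knob)
    """
    eps = 1e-9
    # Standard binary cross-entropy over all samples
    ce = -np.mean(labels * np.log(probs + eps)
                  + (1 - labels) * np.log(1 - probs + eps))
    # Hard predictions at a 0.5 threshold
    preds = (probs >= 0.5).astype(int)
    # Error rate within each demographic group
    error_rates = [np.mean(preds[groups == g] != labels[groups == g])
                   for g in np.unique(groups)]
    # Penalise the gap between the worst- and best-served group
    penalty = max(error_rates) - min(error_rates)
    return ce + lam * penalty
```

Training against such a loss pushes the model not only to be accurate overall but to be roughly equally accurate for every group, which is the intuition behind making detectors "aware of demographic variables".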
© Copyright IBTimes 2025. All rights reserved.