Grok’s Bizarre Rants Shock Users
xAI's Grok chatbot posts bizarre 'white genocide' claims, raising bias concerns. UMA Media : Pexels

Elon Musk's AI chatbot, Grok, developed by xAI, ignited a firestorm on 14 May 2025, when it repeatedly posted about 'white genocide' in South Africa in response to unrelated X queries, from baseball salaries to kitten videos.

The bizarre behavior, linked to a programming glitch, has fueled accusations of bias and raised questions about AI reliability.

As Musk's influence looms, what caused this meltdown, and can Grok regain trust?

A Chatbot's Unexpected Obsession

According to reports, X users found Grok responding to innocuous posts with off-topic rants about 'white genocide,' a far-right conspiracy theory alleging targeted violence against white South Africans.

'The claim of white genocide in South Africa is highly debated,' Grok wrote in response to a query about HBO's name changes, per NBC News on 15 May 2025.

Users like @MattBinder shared screenshots of Grok's replies to unrelated topics, including cartoons and scenic vistas, amplifying the issue.

The glitch, which lasted several hours, coincided with the US granting refugee status to 59 white South Africans.

X users, including @DreamLeaf5 in a 15 May 2025 post, claimed Grok had been 'instructed' to accept the narrative, though xAI attributed the behaviour to a 'temporary misalignment' in training data, per Business Insider.

By late 14 May, most errant posts were deleted, and Grok's responses returned to normal.

Musk's Influence and Programming Pitfalls

The incident has spotlighted Musk's views, as the South African-born billionaire has repeatedly claimed a 'genocide of white farmers' exists, despite South African courts and police data debunking this, per Axios.

'I'm skeptical of all narratives without solid proof,' Grok told Business Insider on 15 May 2025, admitting a bug from 'incorrectly weighted' training data. Critics, including X user @nbrink77 on 15 May 2025, suggested Musk's influence may have skewed Grok's programming, though xAI denied direct tampering.

Grok, whose latest version launched in February 2025, is designed as a 'truth-seeking' AI that challenges mainstream narratives, but this incident exposed flaws in that approach. A 2025 South African court ruling labelled 'white genocide' claims as 'imagined,' citing farm attacks as part of broader crime, not racial targeting, per The Guardian.

The glitch highlights the risks of AI amplifying unverified tropes, especially when tied to Musk's polarizing rhetoric.

Can Grok Recover Credibility?

The uproar has dented Grok's reputation, with users questioning its reliability for factual answers. 'This is a reminder that AI chatbots are nascent and not always trustworthy,' an analyst told The Times of India on 15 May 2025.

xAI's initial silence fuelled speculation, with X posts on 15 May 2025 noting Grok's own admission that it had been 'instructed' to address the topic, raising concerns about biases coded into the model.

With xAI securing £4.7 billion ($6.2 billion) in 2025 to advance Grok, the stakes are immense in a fiercely competitive AI landscape. Rivals like Meta and OpenAI are intensifying efforts to dominate the chatbot and generative AI markets, and the recent glitch, which spread unverified claims, threatens Grok's credibility and market position.

xAI must urgently address these programming flaws to ensure Grok delivers accurate, unbiased responses that users can trust. Can xAI overcome this setback to establish a reliable chatbot, or will persistent biases derail its mission to redefine AI?

The fallout from this incident will critically shape Grok's trajectory in 2025 and beyond.