
In what should have been just another morning on the internet, tens of thousands of users woke up to error messages and spinning loading screens. Elon Musk's social media platform X—formerly known as Twitter—had simply stopped working. From London to Los Angeles, from New Delhi to Jakarta, the service that has become central to how millions communicate, share news, and conduct business went dark.

By 10:22 a.m. Eastern Time on Friday, 16 January, Downdetector.com was flooded with more than 62,000 reports of problems in the United States alone. The UK saw approximately 11,000 reports, whilst India logged over 3,000. Global outage-tracking websites lit up red as users encountered the now-familiar frustration of refreshing their feeds and finding nothing but the message 'Something went wrong. Try reloading'.

The collapse came at a singularly unfortunate moment for Musk and his beleaguered platform. For weeks, X has been engulfed in a firestorm of international outrage over Grok, the platform's integrated AI chatbot. The tool has been systematically misused to create sexualised images of women without their consent, leading governments from Britain to Brazil to demand urgent action. The outage, whether coincidental or not, only deepened the sense of a platform spiralling out of control.

X Platform Outage Intensifies Scrutiny Over Musk's Technical Stewardship

The technical disruptions began around 10:10 a.m. ET and appeared briefly to resolve at 10:35 a.m., only to resurface at 10:41 a.m. with renewed ferocity. Many users trying to reach the site were met with connection timeouts. Those who navigated directly to X.com and managed to sign in could view their profiles, but their feeds remained barren: no posts loaded, no replies appeared, no direct messages could be retrieved.

The nature of the outage suggested infrastructure problems rather than a simple software glitch. According to technology analysts, the redirect from Twitter.com to X.com had apparently ceased functioning, leaving millions of users stranded on the old domain with no way forward. When such incidents occur, X's engineering team typically acknowledges the problem and posts status updates to the company's official feed on the platform. Yet because posts were not loading on the platform itself, users had no way to see any official communication from the company. And with no presence on alternative channels such as Facebook, Bluesky, or TikTok, the company left its millions of affected users entirely in the dark.

By Friday evening, reports suggested the situation had worsened again, with Downdetector logging more than 31,000 additional reports after 9 p.m. local time. It was the second major outage to hit X within a single week: on Tuesday, 13 January, the platform had suffered an earlier disruption that peaked at more than 28,300 reported issues across the United States.

The Grok Scandal Creates a Perfect Storm for the Struggling Platform

Yet the outages, whilst dramatic, represent only the most visible symptom of deeper troubles afflicting X. For the past two weeks, the platform has faced unprecedented international scrutiny over Grok, its AI chatbot. The tool has been weaponised by users to generate sexualised images of women and, most disturbingly, apparent minors—all without consent.

The scale of the abuse is difficult to fathom. According to one analysis, Grok was producing approximately one nonconsensual sexualised deepfake image every minute. Women discovered disturbing fake images of themselves circulating on the platform. Catherine, Princess of Wales, was among those whose image was altered without permission and shared widely. The technology firm AI Forensics examined 20,000 images generated by Grok between 25 December and 1 January and found that 2 per cent depicted individuals who appeared to be minors, including young women and girls in bikinis or sheer clothing.

Governments have responded with a mixture of fury and determination to act. Britain's Technology Secretary, Liz Kendall, described the content as 'absolutely appalling and unacceptable in decent society', calling on X to act urgently and expressing support for tougher regulation from Ofcom, the UK's communications regulator. The European Commission denounced the tool as 'disgusting', whilst the Paris prosecutor's office in France has expanded its investigation of X to encompass the sexualised deepfakes. Brazil's federal public prosecutor's office has received complaints from lawmakers, and Poland's legislators have cited Grok as justification for stricter digital safety regulations.

Musk's response has been characteristically dismissive. xAI, Musk's AI company, initially issued only an automated response declaring 'Legacy Media Lies' when confronted with evidence of the abuse. X subsequently announced that anyone using Grok to create illegal content would face the same consequences as if they had uploaded it directly—a statement conspicuously lacking any acknowledgment that the platform should have prevented such use in the first place.

The technical outages thus arrive not as isolated incidents but as the physical manifestation of a platform in crisis. X's infrastructure appears increasingly fragile. Musk's stewardship of the technology has come under withering scrutiny. And the ethical governance of artificial intelligence tools on the platform has revealed itself to be dangerously inadequate. For the millions of users now trying to access X and finding nothing but error messages, the outage serves as a concrete reminder of just how much trust they have placed in a system that, increasingly, seems incapable of earning it.