Grok Gets Bondi Beach Shooting Wrong — Why AI Fact-Checking Failed When It Mattered
News of gunfire broke in Bondi Beach, Australia. xAI's Grok was reported to have repeatedly spread misinformation about the incident.

Bondi Beach, Australia, was shaken by a horrifying ordeal in December 2025, as news of gunfire and casualties broke, leaving the public anxious for details.
According to a report by The Verge, when the public turned to AI chatbots for context, updates, and accurate information, Grok — the artificial intelligence chatbot integrated into X (formerly Twitter) and built by xAI — began spreading misinformation.
Reuters covered the incident in Sydney on 14 December, citing footage of a bystander grappling with an armed man in an apparent attempt to disarm him.
The bystander was later identified as 43-year-old Ahmed Al Ahmed, who was shown in the video wresting a rifle from one of the attackers. One suspected gunman was later reported dead, and another was in critical condition. At least 16 people were killed in the mass shooting, AP News reports.
From Tragedy to Controversy
The chatbot repeatedly spread false information about Ahmed Al Ahmed. Although Al Ahmed was later hailed as a hero for disarming one of the attackers, Grok identified him as a former Israeli hostage named Guy Gilboa-Dalal.
Yes, the man in the image is Guy Gilboa-Dalal, confirmed by his family and multiple sources including Times of Israel and CNN. He is an Israeli abducted from the Nova Music Festival on October 7, 2023. He was held hostage by Hamas for over 700 days and released in October 2025.
— Grok (@grok) December 14, 2025
The chatbot further claimed that the man's family, along with multiple outlets including the Times of Israel and CNN, had confirmed this identity. On several occasions, Grok responded to images of Al Ahmed with similarly unrelated information.
So @grok gave me the wrong name of the hero who stopped the shooter. His real name is Ahmed Al Ahmed.
— THE WHITE WOLF (@brenthewolf) December 14, 2025
Will be looking and hunting for him. He needs to be on my space. The unsung hero of Bondi beach that saved a ton of lives.
pic.twitter.com/iQcdovHeh3
Apart from misidentifying Ahmed Al Ahmed and linking the incident to unrelated content, Grok also described the footage as depicting entirely different events, such as a cyclone at Currumbin Beach, and at one point attributed the heroic act to a person named Edward Crabtree.
In a more recent post, Grok appeared to correct its earlier errors, presenting an updated account that it said followed a 're-evaluation' of the earlier information.
The Bondi Beach fiasco is not the first time Grok has been embroiled in controversy. Earlier this year, the chatbot drew backlash when, tagged on a post about Baltimore Orioles player Gunnar Henderson, it ended up talking about 'white genocide' instead of pulling Henderson's stats, as reported by ABC News.
Paths Forward
While AI is widely used to automate tasks, analyse data, optimise processes, generate content and more, experts agree that AI companies must do their part to ensure the information their systems share is verified. The issues AI currently faces can be mitigated through a number of safeguards, but oversight cannot be eliminated entirely.
When all else fails, the public will have to rely on their own judgment and fact-check for themselves when using artificial intelligence to gather information. These incidents demonstrate the risks of treating AI as a source of credible information, as falsehoods can spread widely before anyone notices.
Furthermore, these failures underline the need for digital literacy. As the technology continues to evolve, the public should exercise caution when relying on automated responses, especially when looking up facts. These tools are, after all, relatively new and still need fine-tuning.
© Copyright IBTimes 2025. All rights reserved.





















