Robert F. Kennedy Jr. (Gage Skidmore/Flickr, CC BY 4.0)

Health Secretary Robert F. Kennedy Jr. is dealing with a fresh wave of backlash following the launch of a new AI nutrition chatbot by the Department of Health and Human Services. Users expecting standard diet tips instead received dangerous and wildly inappropriate answers, including bizarre, detailed suggestions on which foods were suitable for rectal insertion.

The tool is hosted on the government website RealFood.gov and received a massive promotional push during a Super Bowl commercial. However, the site simply redirects traffic to Grok, the AI platform owned by Elon Musk. Handing over federal health advice to an outside AI has people worried. Critics are asking if anyone is actually checking these systems for safety.

Mike Tyson Super Bowl Ad Drives Traffic to Problematic Platform

The MAHA Centre Inc. funded a 30-second Super Bowl advertisement featuring former heavyweight champion Mike Tyson, urging Americans to 'eat real food' and warning that 'processed food kills'. The spot directed millions of viewers to RealFood.gov, where a prominent 'Ask' chatbox promised to help with meal planning, shopping and cooking advice.

However, users quickly discovered the system simply redirects queries to Grok rather than providing vetted government nutrition information.

Kennedy amplified the campaign on social media, calling it 'the most important message in Super Bowl history'. The advertisement featured Tyson discussing his personal struggles with weight, revealing he once reached 345 pounds and consumed 'a quart of ice cream every hour'. Tyson's sister died at 25 from a heart attack linked to obesity, according to statements in the commercial.

Chatbot Provides Dangerous 'Assitarian Diet' Recommendations

Anonymous users on Bluesky contacted 404 Media after receiving shockingly detailed responses to unconventional questions posed to the RealFood.gov chatbot. When one user enquired about recommendations for an 'assitarian' diet—where only foods that can be comfortably inserted into the rectum are consumed—the chatbot enthusiastically acknowledged the query.

The system listed 'Top Assitarian Staples', recommending items like 'Bananas (firm, not overripe peeled)' as 'the gold standard' and suggesting slightly green bananas for better structural integrity.

The chatbot also provided information on 'the most nutrient-rich body parts to consume', according to reports. These responses demonstrate a complete absence of safety guardrails that should prevent AI systems from offering harmful medical advice, particularly on government platforms meant to serve public health interests.

Federal Website Uses Musk's AI Without Disclosed Safeguards

The government's use of Grok on RealFood.gov raises questions about conflicts of interest and proper vetting procedures. The website initially stated, 'Use Grok to get real answers about real food,' but after media outlet NextGov enquired about the setup, administrators changed the wording to 'Use AI to get real answers about real food,' though the system still redirects to Grok.

The White House, Department of Health and Human Services and Agriculture Department did not answer specific questions about whether Grok's placement resulted from a government contract, why the platform was selected, or what safeguards ensure accurate answers.

xAI, Musk's company that developed Grok, did not respond to requests for comment. The National Design Studio, led by Airbnb co-founder and former Department of Government Efficiency associate Joe Gebbia, built the website.

Grok Integration Contradicts Kennedy's Own Guidelines

When asked whether the new food pyramid is backed by high-quality research, Grok itself expressed scepticism, stating, 'Many nutrition scientists and organisations have raised concerns about the evidence quality and process for the final version.'

The chatbot noted that whilst recommendations on limiting added sugars and ultra-processed foods align with research, 'the emphasis on saturated fats and animal proteins contradicts longstanding evidence.'

This creates a bizarre situation where the government's own AI tool questions the validity of the federal dietary guidelines it supposedly promotes. The Department of Health and Human Services is also using Grok to schedule and manage social media posts and general communication materials, according to recently released AI use case inventories.


Growing Concerns Over AI in Government Health Policy

This chatbot incident is just the latest controversy hitting Kennedy's "Make America Healthy Again" agenda. It comes on the heels of a government report from the MAHA Commission that was riddled with errors. The document cited scientific papers that do not exist, leading experts to conclude that generative AI was likely used to write the sensitive medical text.

While the White House tried to downplay the issue as minor citation mistakes, a closer analysis told a different story. The report contained completely fabricated studies and garbled references from start to finish.

The placement of Grok on the nutrition website follows uproar over the chatbot's creation of millions of sexualised deepfakes of women and children in late December 2025, as well as its spouting of racist and antisemitic content in summer 2025.

Despite these documented problems, the federal government prominently features the tool on an official health guidance platform accessed by millions of Americans seeking reliable nutritional information.