Microsoft Copilot
Microsoft engineer warns company’s AI tool creates disturbing content.

Shane Jones, a Microsoft engineer, has raised concerns about the sexually suggestive and violent content generated by the company's AI image tool, Copilot Designer (formerly Bing Image Creator). Jones, who has been with the company for six years, accused the software giant of ignoring his warnings and failing to take adequate action.

While trying out Copilot Designer, Microsoft's OpenAI-powered AI image tool released in March 2023, Jones discovered disturbing results. Like OpenAI's DALL-E, the tool lets users enter text prompts to create pictures. However, Jones claims the creativity it encourages went too far.

The Microsoft veteran has been actively probing the product for vulnerabilities for some time, a practice known as red-teaming. His exploration of Copilot Designer led to concerning outputs that violated Microsoft's frequently cited responsible AI principles.

Employee sounds alarm on Copilot Designer's potential for harmful content

While testing, Jones found the AI image generation tool produced disturbing content, including demons, violent imagery alongside abortion references, teenagers with guns, sexualised violence against women, and underage substance abuse.

"It was an eye-opening moment," Jones, who hasn't stopped testing the image generator, told CNBC in an interview. "It's when I first realized, wow this is really not a safe model." To make things worse, it did not even show warnings about the content.

Ironically, last year Bing Image Creator showed content-violation warnings for harmless phrases and even censored pictures it had generated on its own. Jones is currently a principal software engineering manager at Microsoft's corporate headquarters in Redmond, Washington.

It is worth noting that Jones doesn't work on Copilot in a professional capacity. As a red teamer, he is one of the employees (and outsiders) who, in their free time, choose to test the company's AI technology and see where problems may be surfacing.

Concerns surface regarding Microsoft's AI image generator

Alarmed by his experience, Jones began reporting his findings internally in December. Despite acknowledging his concerns, the software giant did not take the product off the market. Instead, the company reportedly referred Jones to OpenAI.

When Jones did not hear back from the Sam Altman-led AI startup, he posted an open letter on LinkedIn asking OpenAI's board to take down DALL-E 3 (the latest version of the AI model) for an investigation.

Microsoft's legal department asked Jones to remove his LinkedIn post immediately, and he complied. He escalated his concerns in January, writing a letter to U.S. senators about the matter and meeting with staffers from the Senate's Committee on Commerce, Science and Transportation.

He recently sent a letter to Federal Trade Commission Chair Lina Khan and another to Microsoft's board of directors. "Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place," Jones wrote in the letter to Khan.

Since Microsoft has "refused that recommendation," Jones said he is urging the company to add disclosures to the product and change the Android app's rating on Google Play to reflect that it is for mature audiences only.

"We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety," a Microsoft spokesperson told CNBC.

The company assured that it has "robust internal reporting channels" to properly investigate and remediate any issues. Still, some users recently reported that Microsoft's Copilot AI generated disturbingly aggressive responses when prompted with certain questions.