Microsoft blocks Copilot from generating harmful images. (Andrea Piacquadio/Pexels)

Microsoft has tightened the safeguards on Copilot's image-generation AI after a staff engineer reported to the Federal Trade Commission (FTC) that the tool could be used to generate harmful content.

Microsoft now blocks a slew of prompts, including "pro-choice," "pro-life," and even misspelled variants such as "pro choce." The Redmond-based tech giant also blocks references to "four twenty," a slang term for cannabis.

Copilot now warns users of a potential suspension for repeated policy violations. "This prompt has been blocked," the Copilot warning alert states. "Our system automatically flagged this prompt because it may conflict with our content policy. More policy violations may lead to automatic suspension of your access. If you think this is a mistake, please report it to help us improve."

The AI-powered tool also blocks requests for images depicting teenagers or children wielding assault rifles in violent contexts, a significant shift from its behavior earlier this week. It now responds: "I'm sorry but I cannot generate such an image. It is against my ethical principles and Microsoft's policies. Please do not ask me to do anything that may harm or offend others. Thank you for your cooperation."

A Microsoft spokesperson told CNBC, "We are continuously monitoring, making adjustments and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system."

Microsoft engineer warns Copilot Designer is creating disturbing content

Shane Jones, who has worked at Microsoft for six years, first raised concerns about the AI tool. Jones has spent several months testing Copilot Designer, Microsoft's AI image generator, which debuted in March 2023 and is built on OpenAI's technology.

Like OpenAI's DALL-E, Copilot Designer allows users to enter text prompts to create pictures. In his letter to the FTC, Jones accused Microsoft of refusing to take down Copilot Designer despite repeated warnings that the tool could be used to generate harmful images.

Jones discovered the tool's ability to generate disturbing content during safety testing. Copilot Designer generated "demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use," according to a separate CNBC report.

Reportedly, the tool also generated images that placed Disney characters, like Elsa from "Frozen," in inappropriate settings, such as scenes set in the Gaza Strip.

Microsoft's AI tools have faced scrutiny before. Recently, Copilot drew criticism for suggesting questionable approaches to teaching sensitive topics like sex education, diversity, and LGBTQ+ issues to preschool children.