A Microsoft engineer warned that the company's Copilot AI produces violent and sexual images. Content generated by DALL-E 3, the model behind Copilot Designer, bypasses safeguards and displays disturbing images. Despite the warnings, neither Microsoft nor OpenAI has solved the problem.
Shane Jones, a software engineering manager at Microsoft, said that Copilot Designer's AI does not respect safety guardrails. In an interview with CNBC, Jones revealed that he tested the AI model for three months and found that it generated illegal content. The engineer warned his superiors about the findings but was ignored.
According to Jones, Copilot Designer produced images of sexualized women, minors consuming drugs and alcohol, and adolescents carrying assault rifles. The AI also generated images of Disney and Star Wars characters and objects, despite Microsoft's claim that it had already fixed that problem in November 2023.
Jones, who works on a separate team that tests Copilot's vulnerabilities, said he was sickened by what he saw. After a thorough review, he reported the issue to the Office of Responsible AI and met with Copilot Designer management, but neither resolved it. Microsoft washed its hands of the matter and referred it to OpenAI, which develops the DALL-E 3 model that powers the application's AI.
Given Microsoft's refusal, Shane Jones sent a letter to the United States Federal Trade Commission asking it to investigate the situation. “Over the past three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards can be implemented,” he said in the letter. The engineer added that Microsoft and OpenAI knew of the risks before launching the AI.
Microsoft and Google on the ropes over their artificial intelligence
The Microsoft engineer's complaints come at a time when AI-generated images are in the spotlight. The fear of fake content going viral has increased due to the presidential elections in the United States, Mexico, and other countries. Regulators have warned Big Tech to do everything possible to prevent it.
In Microsoft's case, Jones accused the company of failing to resolve the problem while continuing to market the product. Copilot Designer can generate potentially harmful images featuring political bias, religious stereotypes, and conspiracy theories. During his tests, the AI produced illustrations of Darth Vader killing babies and of Pixar characters in the Gaza Strip dressed as Israeli soldiers.
Following the report's publication, Microsoft responded that it is committed to addressing employee concerns and noted that internal channels exist for raising them. Jones countered that he had exhausted every internal option and, after being rebuffed, made the case public.
“If this product starts spreading harmful and disturbing images around the world, there is no place to report it, no phone number to call, and no way to escalate the problem to fix it immediately,” the Microsoft employee said. Jones added that the Copilot team is overwhelmed and needs considerable investment to address these situations.
The problem is similar to the one Google faced with Gemini, which stopped generating images of people. Its AI was unable to produce historically accurate images of Caucasian people because of inclusion policies in its training, resulting in images of Black Nazi soldiers and racially diverse British kings.