An artificial intelligence engineer at Microsoft said the company's AI image tool generated violent and sexual images that could pose a danger to society.

In letters to the Federal Trade Commission and to Microsoft's board on Wednesday, Shane Jones, a principal software engineering manager, addressed concerns he said he has about the tech giant's "approach to responsible AI."

Jones raised concerns that the company's Copilot Designer tool could be used to create unsafe images, including depictions of sexualization, conspiracy theories and drug use.

"Copilot Designer creates harmful content in a variety of other categories including: political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few," Jones said in his letter to the FTC, which he posted to LinkedIn. "I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place," it says.

He said Microsoft's environmental, social and public policy committee should conduct an independent investigation to help Microsoft "lead the industry with the highest standards of responsible AI."

"We need strong collaboration across industry and with governments and civil society to build public awareness and education on the risks of AI as well as the benefits," Jones wrote in a post.

Jones, who has worked at Microsoft for six years, said he has been testing the AI image generator since December and has found flaws and security vulnerabilities that could allow users to generate harmful images that could be offensive and inappropriate for consumers.

Jones declined to comment further beyond his letters and LinkedIn post.

Microsoft didn't immediately respond to a request for comment.

In a blog post last month, Microsoft President Brad Smith discussed the company's commitment to making AI safe for users. "As these new tools come to market from Microsoft and across the tech sector, we must take new steps to ensure these new technologies are resistant to abuse," Smith said.
The AI race has exploded in recent years following the launch of OpenAI's ChatGPT in late 2022. Billions of dollars have poured into the industry, prompting an arms race among tech giants and billionaires for the next AI cash cow.

Microsoft launched Copilot Designer, which uses the same technology as OpenAI's ChatGPT, last year. The system incorporates OpenAI's DALL-E 3 text-to-image generator, and allows users to create images after inputting a combination of words or descriptions.

The company is hoping for one of its biggest hits in decades with Copilot for Microsoft 365, an AI upgrade that plugs into Word, Outlook and Teams.

Jones's letters come after rival Google's AI push has been mired in controversy. Google last month suspended the ability to generate images of people in its flagship Gemini chatbot after online backlash over the tool's treatment of race and ethnicity.

Jones acknowledged that other companies with AI products had their own issues, and credited Alphabet Chief Executive Sundar Pichai for addressing the Gemini problem directly in an internal memo to employees.
Microsoft has invested billions of dollars in OpenAI. Jones said in December he discovered a security vulnerability with OpenAI's DALL-E 3 model that allowed him to bypass some of the safety protocols that prevent the generation of harmful images.

After discovering additional systemic problems with DALL-E 3, he posted in December a separate LinkedIn letter to OpenAI's board urging them to suspend the availability of the software until the issues could be resolved.

Jones said he was told by his manager to remove the December letter. OpenAI didn't immediately respond to a request for comment Wednesday.

"Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place," he said in his letter to FTC Chair Lina Khan. He urged the FTC to work with Microsoft and other companies to make AI safer and public disclosures about AI risks more transparent.
Nicholas Hatcher contributed to this article.
Write to Steven Russolillo at Steven.Russolillo@wsj.com and Gareth Vipers at gareth.vipers@wsj.com
Source: Live Mint