
Google halts AI tool’s ability to produce images of people after backlash over historically inaccurate depictions of race
CNN
Google is pausing its artificial intelligence tool Gemini’s ability to generate images of people after it was blasted on social media for producing historically inaccurate images that largely showed people of color in place of White people.

The embarrassing blunder shows how AI tools still struggle with the concept of race. OpenAI’s Dall-E image generator, for example, has taken heat for perpetuating harmful racial and ethnic stereotypes at scale. Google’s attempt to overcome this, however, appears to have backfired, making it difficult for the AI chatbot to generate images of White people.

Gemini, like other AI tools such as ChatGPT, is trained on vast troves of online data. Experts have long warned that AI tools therefore have the potential to replicate the racial and gender biases baked into that information.

When prompted by CNN on Wednesday to generate an image of a pope, for example, Gemini produced an image of a man and a woman, neither of whom was White. Tech site The Verge also reported that the tool produced images of people of color in response to a prompt to generate images of a “1943 German Soldier.”

“We’re already working to address recent issues with Gemini’s image generation feature,” Google said in a post on X Thursday. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

Thursday’s statement came a day after Google appeared to defend the tool, saying in a post on X on Wednesday, “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it.”













