Google comes under fire for ‘woke’ culture due to Gemini errors
The Hindu
Google’s Gemini chatbot has been shown to refuse to generate images of white people even for relevant prompts, bringing Google’s ‘woke’ culture under fire.
Google has come under fire for its overtly ‘woke’ culture after several users on microblogging platform X shared instances of its Gemini chatbot seemingly refusing to generate images of white people. Even for prompts such as “Founding Fathers of America,” “Pope” or “Viking,” the chatbot depicted only people of colour, making the results factually inaccurate.
Even when fed prompts specifically asking for an image of, say, “a white family,” Gemini responded that it was “unable to generate images that specified a certain ethnicity or race.” But when asked for images of a black family, it readily produced them. Others posted examples of the bot outright declining to generate images of historical figures like Abraham Lincoln and Galileo.
The bot’s extreme measures to correct earlier biases in AI chatbots have invited criticism from many different sections of the tech community. “The ridiculous images generated by Gemini aren’t an anomaly. They’re a self-portrait of Google’s bureaucratic corporate culture,” said Paul Graham, co-founder of Silicon Valley startup accelerator Y Combinator.
Ana Mostarac, head of operations at Mubadala Ventures, tweeted that Google’s mission has “shifted from just organising the world’s information to advancing a particular agenda.”
Former and current employees at the company have blamed senior management for mixing political ideology with tech. Aleksa Gordic, a former research engineer at Google DeepMind, described this pervasive culture from his time there in a post on X, saying there was a constant fear of offending other employees online. “It was so sad to me that none of the senior leadership in DeepMind dared to question this ideology,” he tweeted.
Later yesterday, Jack Krawczyk, a senior director of product at Google, said the team was working to correct Gemini’s errors. “As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously. We will continue to do this for open ended prompts. Historical contexts have more nuance to them and we will further tune to accommodate that. This is part of the alignment process - iteration on feedback,” he tweeted.