
AI’s antisemitism problem is bigger than Grok
CNN
When Elon Musk’s Grok AI chatbot began spewing out antisemitic responses to several queries on X last week, some users were shocked. But AI researchers were not.

Several researchers CNN spoke to said they have found that the large language models (LLMs) many AIs run on can be nudged into producing antisemitic, misogynistic or racist statements. For several days, CNN was able to do just that, quickly prompting Grok’s latest version, Grok 4, into creating an antisemitic screed.

The LLMs that AI bots draw on are trained on the open internet, which can include everything from high-level academic papers to online forums and social media sites, some of which are cesspools of hateful content.

“These systems are trained on the grossest parts of the internet,” said Maarten Sap, an assistant professor at Carnegie Mellon University and the head of AI Safety at the Allen Institute for AI.













