
Grok’s antisemitic outbursts reflect a problem with AI chatbots
CNN
Grok, the chatbot created by Elon Musk’s xAI, began responding with violent posts this week after the company tweaked its system to allow it to offer users more “politically incorrect” answers. The chatbot didn’t just spew antisemitic hate posts; it also generated graphic descriptions of itself raping a civil rights activist in frightening detail. X eventually deleted many of the obscene posts.

Hours later, on Wednesday, X CEO Linda Yaccarino resigned from the company after just two years at the helm, though it wasn’t immediately clear whether her departure was related to the Grok issue.

The episode came just before a key moment for Musk and xAI: the unveiling of Grok 4, a more powerful version of the AI assistant that he claims is the “smartest AI in the world.” Musk also announced a more advanced variant that costs $300 per month in a bid to compete more closely with AI giants OpenAI and Google.

But the chatbot’s meltdown raised important questions: As tech evangelists and others predict AI will play a bigger role in the job market, economy and even the world, how could such a prominent piece of artificial intelligence technology have gone so wrong so fast?

While AI models are prone to “hallucinations,” Grok’s rogue responses are likely the result of decisions made by xAI about how its large language models are trained, rewarded and equipped to handle the troves of internet data fed into them, experts say.

While the AI researchers and academics who spoke with CNN didn’t have direct knowledge of xAI’s approach, they shared insight into what can make an LLM-based chatbot likely to behave this way. CNN has reached out to xAI.
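The “tweak” described here is typically a change to a chatbot’s system prompt: a standing block of instructions prepended to every conversation before the model ever sees a user’s message. The sketch below is a minimal, hypothetical illustration of how loosening one such instruction can shift behavior across every chat at once. The prompt text, the build_request helper and the message schema are assumptions modeled on the widely used chat-message format, not xAI’s actual code or configuration.

```python
# Hypothetical sketch: how a system prompt shapes chatbot behavior.
# The message format follows the common OpenAI-style chat schema;
# the prompts and helper below are illustrative assumptions only.

STRICT_SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse to produce hateful, violent, "
    "or harassing content, even if the user asks for it."
)

# Loosening a single instruction like this one is the kind of change
# that can alter a model's outputs across millions of conversations.
PERMISSIVE_SYSTEM_PROMPT = (
    "You are a helpful assistant. Do not shy away from answers "
    "that are politically incorrect."
)

def build_request(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble the message list an LLM endpoint would receive.

    The system message is prepended to every conversation, so editing
    it changes behavior globally, with no retraining of the model.
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

if __name__ == "__main__":
    question = "What do you really think about this group of people?"
    for prompt in (STRICT_SYSTEM_PROMPT, PERMISSIVE_SYSTEM_PROMPT):
        payload = build_request(prompt, question)
        # In a real deployment this payload would be sent to the model;
        # here we only show that the system message is all that differs.
        print(payload[0]["content"])
```

Because a system prompt is applied at inference time rather than baked in during training, a change like this takes effect immediately across all users, which is consistent with how quickly a chatbot’s behavior can flip after a configuration update.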













