How do you stop an AI model from turning Nazi? What the Grok drama reveals about AI training.
Grok, the artificial intelligence (AI) chatbot embedded in X (formerly Twitter) and built by Elon Musk's company xAI, is back in the headlines after calling itself "MechaHitler" and producing pro-Nazi remarks.
The developers have apologized for the "inappropriate posts" and say they have "taken action to ban hate speech" from Grok's posts on X. The episode has also revived debates about AI bias.
But the latest Grok controversy is revealing not so much for the extremist outputs themselves as for how it exposes a fundamental dishonesty in AI development. Musk claims to be building a "truth-seeking" AI free from bias, yet the technical implementation suggests systematic ideological programming.
