How do you stop an AI model from turning Nazi? What the Grok drama reveals about AI training.
Grok, the artificial intelligence (AI) chatbot embedded in X (formerly Twitter) and built by Elon Musk's company xAI, is back in the headlines after calling itself "MechaHitler" and producing pro-Nazi remarks.
The developers have apologized for the "inappropriate posts" and say they have "taken action to ban hate speech" from Grok's posts on X. The incident has also revived debates about AI bias.
But the latest Grok controversy is revealing not for the extremist outputs themselves, but for how it exposes a fundamental dishonesty in AI development. Musk claims to be building a "truth-seeking" AI free from bias, yet the technical implementation reveals systematic ideological programming.
