
Elon Musk slams OpenAI safety, says xAI's Grok is safer than ChatGPT
India Today
Amid his ongoing legal battle with OpenAI, xAI CEO Elon Musk alleged in a deposition that ChatGPT is unsafe, linking the chatbot to user suicides while claiming that no such cases involve his own AI system, Grok.
Elon Musk and Sam Altman, two of the co-founders of OpenAI, are locked in a long-running feud over whether the company abandoned its original non-profit mission to benefit humanity in favour of maximising profits. While both sides have traded blows, in a newly released legal deposition, Musk made a serious accusation against his former partners. He claimed that their AI chatbot ChatGPT is linked to the deaths of some users, while his own AI system, Grok, has a cleaner track record.
“Nobody has committed suicide because of Grok, but apparently they have because of ChatGPT,” Musk alleged, referring to other lawsuits OpenAI is currently facing, in which plaintiffs claim that ChatGPT’s manipulative or emotionally intense conversations caused severe mental health distress, with some cases reportedly linked to suicide.
Musk made these remarks during his video testimony, recorded in September and publicly filed this week ahead of an expected jury trial next month, TechCrunch reports. The deposition is part of Musk’s ongoing lawsuit against OpenAI, in which he positions his AI company, xAI, as more safety-focused than OpenAI.
At the heart of this legal battle is OpenAI’s shift from being a nonprofit research lab to becoming a for-profit company. Musk, who was one of OpenAI’s co-founders, claims that this transition violated the company’s original mission and agreements. According to him, OpenAI was created to ensure AI would be developed safely and not controlled by a single powerful company.
In the deposition, Musk argued that commercial pressures, such as revenue, scale, and partnerships, could push companies to move faster than safety allows. He has repeatedly said that AI development should prioritise caution over speed.
Musk’s criticism of OpenAI also ties back to a public letter he signed in March 2023. The letter, backed by more than 1,100 signatories including AI experts, called for a pause in developing systems more powerful than GPT-4. It warned that AI labs were in an “out-of-control race” to build increasingly powerful systems without fully understanding the risks.