
Anthropic AI safety researcher quits, says the ‘world is in peril’
Global News
An artificial intelligence researcher left his job at the U.S. firm Anthropic this week with a cryptic warning about the state of the world, marking the latest resignation in a wave of departures over safety risks and ethical dilemmas.
In a letter posted on X, Mrinank Sharma wrote that he had achieved all he had hoped during his time at the AI safety company and was proud of his efforts, but was leaving over fears that the “world is in peril,” not just because of AI, but from a “whole series of interconnected crises,” ranging from bioterrorism to “sycophancy” in AI systems.
He said he felt called to writing: to pursue a degree in poetry and to devote himself to “the practice of courageous speech.”
“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he continued.
Anthropic was founded in 2021 by a breakaway group of former OpenAI employees who pledged to design a more safety-centric approach to AI development than its competitors.
Sharma led the company’s AI safeguards research team.
Anthropic has released reports outlining the safety of its own products, including Claude, its hybrid-reasoning large language model, and markets itself as a company committed to building reliable and understandable AI systems.
The company faced criticism last year after agreeing to pay US$1.5 billion to settle a class-action lawsuit from a group of authors who alleged the company used pirated versions of their work to train its AI models.