
AI safety shake-up: Top researchers quit OpenAI and Anthropic, warning of risks
Newsy
In the past week, some of the researchers tasked with building safety guardrails inside the world’s most powerful AI labs publicly walked away from their jobs, raising fresh questions about whether commercial pressures are beginning to outweigh long-term safety commitments.
At OpenAI, former researcher Zoë Hitzig announced her resignation in a guest essay published Tuesday in The New York Times titled “OpenAI Is Making the Mistakes Facebook Made. I Quit.”
Hitzig warned that OpenAI’s reported exploration of advertising inside ChatGPT risks repeating what she views as social media’s central error: optimizing for engagement at scale.
ChatGPT, she wrote, now contains an unprecedented “archive of human candor,” with users sharing everything from medical fears to relationship struggles and career anxieties. Building an advertising business on top of that data, she argued, could create incentives to subtly shape user behavior in ways “we don’t have the tools to understand, let alone prevent.”
“The erosion of OpenAI’s principles to maximize engagement may already be underway,” she wrote, adding that such optimization “can make users feel more dependent on A.I. for support in their lives.”