Anthropic senior researcher resigns, warning of a “world in peril”
A senior researcher at artificial intelligence company Anthropic has resigned, warning that the world is facing a series of interconnected crises, Qazinform News Agency correspondent reports.
In a departure letter shared on social media platform X on Monday, Mrinank Sharma, who said he led safeguards research at Anthropic, announced that his final day at the company was February 9. In the letter, Sharma reflected on his work in AI safety, explained his decision to step down, and outlined his plans to leave the technology sector.
Sharma said he joined Anthropic after completing his PhD in machine learning at the University of Oxford, aiming to contribute to AI safety research. He highlighted his involvement in projects focused on understanding AI sycophancy, developing defenses against AI-assisted bioterrorism, and putting those safeguards into production.
“I’ve achieved what I wanted to here,” Sharma wrote, also pointing to his work on internal transparency mechanisms and a final project examining how AI assistants could “make us less human or distort our humanity.”
Despite those achievements, Sharma said he felt compelled to leave after repeatedly grappling with the broader consequences of emerging technologies and the challenge of aligning actions with stated values.
“The world is in peril,” Sharma wrote, adding that the risks go beyond artificial intelligence or bioweapons to “a whole series of interconnected crises unfolding at this very moment.”
“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” he added.
Sharma said he plans to return to the UK and create space for writing, poetry, and community-focused work. He concluded the letter by sharing a poem by American poet William Stafford, which he said reflects his personal values and sense of direction as he steps away from the company.
Anthropic, founded by former OpenAI researchers, is known for developing the Claude AI chatbot and positioning itself as a company focused on AI safety.
Earlier, Qazinform News Agency reported on the launch of a new platform where AI agents hire humans to carry out physical-world tasks that software cannot perform.