
Why are experts sounding the alarm on AI risks?
Al Jazeera
AI is advancing in rapid and unpredictable ways but there is no joint framework to keep it in check, experts say.
In recent months, artificial intelligence has been in the news for the wrong reasons: deepfakes used to scam people, AI systems deployed in cyberattacks, and chatbots encouraging suicide, among other incidents.
Experts are already warning that the technology could spin out of control. In recent weeks, researchers at some of the most prominent AI companies have quit their jobs and publicly sounded the alarm about the risks that fast-paced technological development poses to society.
Doomsday theories have long circulated about how substantial advances in AI could pose an existential threat to the human race. Critics warn that the growth of artificial general intelligence (AGI), a hypothetical form of the technology that could match average human performance in critical thinking and other cognitive tasks, could one day wipe out humanity.
But the recent slew of public resignations by those tasked with keeping AI safe for humanity is lending new urgency to conversations about how to regulate the technology and slow its development, even as billions of dollars pour into AI investment.
“It’s not so much that AI is inherently bad or good,” Liv Boeree, a science communicator and strategic adviser to the United States-based Center for AI Safety (CAIS), told Al Jazeera.
