AI could pose "risk of extinction" akin to nuclear war and pandemics, experts say
CBSN
Artificial intelligence could pose a "risk of extinction" to humanity on the scale of nuclear war or pandemics, and mitigating that risk should be a "global priority," according to an open letter signed by AI leaders such as Sam Altman of OpenAI and Geoffrey Hinton, known as the "godfather" of AI.
The one-sentence open letter, issued by the nonprofit Center for AI Safety, is both brief and ominous, and does not explain how the more than 300 signatories foresee AI developing into an existential threat to humanity.
In an email to CBS MoneyWatch, Dan Hendrycks, the director of the Center for AI Safety, wrote that there are "numerous pathways to societal-scale risks from AI."