AI executive Dario Amodei on the red lines Anthropic would not cross
CBSN
"It's about the principle of standing up for what's right," said Dario Amodei, CEO of the artificial intelligence firm Anthropic, who has found himself at the center of a new kind of firestorm. What's wrong, in his view, is why the AI company he co-founded has been banned from the federal government.
"It feels very punitive and inappropriate, given the amount that we've done for U.S. national security," he said.
Anthropic created Claude, an AI chatbot you might use at work or school. Since last summer, its government version has been deeply embedded in military intelligence and classified operations at the Pentagon. This past week, in the lead-up to the attack on Iran, the Defense Department demanded Anthropic hand over its AI without restrictions for lawful military use. The company refused.
"We have these two red lines," said Amodei. "We've had them from Day One. We are still advocating for those red lines. We're not gonna move on those red lines."
Those red lines? Not allowing Anthropic's AI to perform mass surveillance of Americans, and prohibiting its AI from powering fully autonomous weapons without any human involvement.
