AI Alignment Pioneer: 'Shut It All Down'

Eliezer Yudkowsky calls for 'indefinite' moratorium on AI training to save humanity
By Arden Dier,  Newser Staff
Posted Mar 30, 2023 11:12 AM CDT
Halt AI Training Indefinitely, or 'Everyone Will Die'
"Shut it all down."   (Getty Images/EvgeniyShkolenko)

Eliezer Yudkowsky, regarded as a founder of the field of AI alignment research, has big concerns about where artificial intelligence is headed. Yet he refrained from joining Elon Musk, Steve Wozniak, and others in signing an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. As he writes at Time, that's not because he doesn't think the field is in dangerous territory. Rather, "the letter is understating the seriousness of the situation and asking for too little to solve it." Yudkowsky doesn't mince words here. "Shut it all down," he writes. "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."

You'd be wrong to think superhuman AI is currently out of reach. "Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems," some of which claim to be self-aware, Yudkowsky writes. While OpenAI says it "aims to make artificial general intelligence (AGI) aligned with human values and follow human intent," it also plans to build an AI system "that can make faster and better alignment research progress than humans can." According to Yudkowsky, this "ought to be enough to get any sensible person to panic," mainly because "if we go ahead on this everyone will die."

Even an indifferent superhuman AI would see humans as "made of atoms it can use for something else." But facing a hostile superhuman AI would be like facing "an entire alien civilization, thinking at millions of times human speeds" and viewing humans as "very stupid," Yudkowsky writes. "Solving safety of superhuman intelligence—not perfect safety, safety in the sense of 'not killing literally everyone'—could very reasonably take at least [30 years]," he adds, noting we would need to develop systems with a degree of "caring," which we have no idea how to do. At this point, we don't need a six-month moratorium, but an "indefinite and worldwide" one with "no exceptions, including for governments or militaries." As Yudkowsky notes, our lives could literally depend on it.
