Book Hack
If Anyone Builds It, Everyone Dies
By Eliezer Yudkowsky and Nate Soares
In a Nutshell
Machine intelligence experts Eliezer Yudkowsky and Nate Soares explain why artificial superintelligence, a form of AI that would far exceed human cognitive abilities, should never be developed.
Favorite Quote
The preferences that wind up in a mature AI are complicated, practically impossible to predict, and vanishingly unlikely to be aligned with our own, no matter how it was trained.
Eliezer Yudkowsky and Nate Soares
Introduction
In 2023, hundreds of AI scientists, including Geoffrey Hinton and Yoshua Bengio, who shared the Turing Award for their foundational work on the deep learning technology that powers modern AI, published an open letter.
The letter contained one sentence: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.'
The letter was also signed by Eliezer Yudkowsky and Nate Soares, pioneering researchers on the risks and capabilities of machine intelligence.
Yudkowsky co-founded MIRI, a non-profit that has studied machine intelligence since 2001, and Soares is its president.
In their 2025 book, If Anyone Builds It, Everyone Dies, Yudkowsky and Soares explain why developing artificial superintelligence – AI that surpasses humans in every cognitive realm – will lead to our extinction.
Here are the 3 key insights from this Hack
- 1. No one fully understands how an AI ‘thinks’
