Book Hack
Our Final Invention
By James Barrat

In a Nutshell

Published in 2013, Our Final Invention explores the dangers that human-level or super-intelligent AI might pose to the future of humanity.

Favorite Quote

AGI is intrinsically very, very dangerous. And this problem is not terribly difficult to understand.

Michael Vassar, former president of the Machine Intelligence Research Institute

Introduction

The idea that AI might one day take over the world is usually confined to the realm of science fiction. But this threat isn't beyond the realm of possibility — and humanity is grossly underprepared for it.

Within our lifetimes, it is overwhelmingly likely that scientists will develop AGI (Artificial General Intelligence), which will supersede today's ANI (Artificial Narrow Intelligence).

ANI outperforms humans at specific tasks, but AGI will likely have a wide range of abilities, learning capacity, and drives that will make it resemble human-level intelligence.

Once AGI is achieved, it may be only a small step to ASI (Artificial Super Intelligence), which will be very difficult, if not impossible, to control and make work in our interests.

James Barrat is a filmmaker who, through wide-ranging interviews and a study of the current field, explores this under-represented perspective on the potential dangers of AI.

Here is a key insight from this Hack:

  1. There is a high chance that scientists will achieve AGI or ASI in the near future.
