Book Highlights

Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity

by Steven Shwartz

What it's about

Steven Shwartz argues that current AI development is nowhere near creating human-level intelligence, or AGI. He systematically dismantles the hype surrounding autonomous vehicles, machine learning, and sentient computers by highlighting our lack of a foundational theory for how the human mind actually functions.

Key ideas

  • The AGI Gap: Creating general intelligence is impossible without a scientific theory of mind, which we currently lack.
  • Narrow AI Limits: Today’s systems only excel at single, isolated tasks and completely fail at commonsense reasoning.
  • Autonomous Reality: Self-driving cars struggle with unpredictable situations because they cannot rely on human-like intuition or broad world knowledge.
  • Fragility of Vision: Computer vision remains prone to simple errors and visual tricks that humans would never fall for.

You'll love this book if...

  • You feel skeptical about the breathless news headlines claiming that robots are about to take over the world.
  • You want a grounded, technical perspective on why modern machine learning is still just glorified pattern matching.

Best for

Tech-curious readers who want to separate AI reality from science fiction marketing.

Books with the same vibe

  • Superintelligence by Nick Bostrom
  • Rebooting AI by Gary Marcus and Ernest Davis
  • The Master Algorithm by Pedro Domingos

10 popular highlights from this book

The most popular highlights from Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity, saved by readers on Screvi.

it is hard to imagine how processing power by itself can create AGI. If I turn on a computer from the 1970s with no programs loaded, turn on one of today’s computers with no programs loaded, or turn on a computer fifty years from now with no programs loaded, none of these computers will be capable of doing anything at all.
During the first two AI hype cycles, AI systems were developed to further the goal of AGI. Narrow AI did not exist. Because they failed to achieve AGI in the first two hype cycles, researchers created and focused on building narrow AI systems during the third cycle.
Ben Goertzel, who is generally credited with inventing the term AGI, likens it to flying machines. We were able to create blimps, airplanes, and other flying machines because we had a general theory of aerodynamics. We do not have an analogous theory for AGI. What we have are some vague ideas.
The Insurance Institute for Highway Safety analyzed five thousand car accidents and found that if autonomous vehicles do not drive more slowly and cautiously than people, they will only prevent one-third of all crashes.
Computer vision systems are prone to incorrect classifications. Computer vision systems can be fooled in ways that people are usually not.
the AI systems of 2020 are narrow AI systems; they can only perform one task. Second, they are not capable of commonsense reasoning based on general world knowledge or of other types of thinking, such as planning, imagination, and abstract reasoning.
due to the lack of commonsense reasoning in autonomous vehicles, coupled with the seeming impossibility of anticipating every possible situation a vehicle might encounter, we will probably not see autonomous vehicles dominating our highways and city streets for a long time.
Because autonomous vehicles lack the commonsense reasoning capabilities to handle these unanticipated situations, their manufacturers have only two choices. They can try to collect data on human encounters with rare phenomena and use machine learning to build systems that can learn how to handle each of them individually.
How long will it be before we know enough about how people think to make real progress toward AGI? At the current rate of progress, it appears we will need hundreds—maybe thousands—of years, and it may never happen.
Researchers have found some interesting facts about computer-generated short social media posts:

  • The average person is twice as likely to be fooled by these posts as a security researcher is.
  • Computer-generated posts that are contrary to popular belief are more likely to be accepted as true.
  • It is easier to deceive people about entertainment topics than about science topics.
  • It is easier to fool people about pornographic topics than any other topic.
