
The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do
by Erik J. Larson
Key Insights & Memorable Quotes
Below are the 11 most popular and impactful highlights and quotes from The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do:
Notice that the story [of technical progress accelerating indefinitely] is not testable; we just have to wait around and see. If the predicted year of true AI's coming is false, too, another one can be forecast, a few decades into the future. AI in this sense is unfalsifiable and thus—according to the accepted rules of the scientific method—unscientific.
This cuts the myth at an awkward angle: it is because the [artificial intelligence] systems are idiots, but still find their way into business, consumer, and government application, that human-value questions are now infecting what were once purely scientific values.
Understanding natural language is a paradigm case of undercoded abductive inference.
First, intelligence is situational—there is no such thing as general intelligence. Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole. Second, it is contextual—far from existing in a vacuum, any individual intelligence will always be both defined and limited by its environment. (And currently, the environment, not the brain, is acting as the bottleneck to intelligence.) Third, human intelligence is largely externalized, contained not in your brain but in your civilization. Think of individuals as tools, whose brains are modules in a cognitive system much larger than themselves—a system that is self-improving and has been for a long time.
In the early part of the twentieth century, the philosopher of language Paul Grice offered four maxims for successful conversation:
The maxim of quantity. Try to be as informative as you possibly can, and give as much information as is needed, but no more.
The maxim of quality. Try to be truthful, and don’t give information that is false or that is not supported by evidence.
The maxim of relation. Try to be relevant, and say things that are pertinent to the discussion.
The maxim of manner. Try to be as clear, as brief, and as orderly as you can, and avoid obscurity and ambiguity.
Here, it’s best to be clear: equating a mind with a computer is not scientific, it’s philosophical.
General (non-narrow) intelligence of the sort we all display daily is not an algorithm running in our heads, but calls on the entire cultural, historical, and social context within which we think and act in the world.
World knowledge, as Bar-Hillel pointed out, couldn’t really be supplied to computers—at least not in any straightforward, engineering manner—because the “number of facts we human beings know is, in a certain very pregnant sense, infinite.”
Kurzweilians and Russellians alike promulgate a technocentric view of the world that both simplifies views of people—in particular, with deflationary views of intelligence as computation—and expands views of technology, by promoting futurism about AI as science and not myth. Focusing on bat suits instead of Bruce Wayne has gotten us into a lot of trouble. We see unlimited possibilities for machines, but a restricted horizon for ourselves. In fact, the future intelligence of machines is a scientific question, not a mythological one. If AI keeps following the same pattern of overperforming in the fake world of games or ad placement, we might end up, at the limit, with fantastically intrusive and dangerous idiot savants.
Careers were made. Bletchley, meanwhile, also proved a haven for thinking about computation: Bombes were machines, and they ran programs to solve problems that humans, by themselves, could not.
Science, once a triumph of human intelligence, now seems headed into a morass of rhetoric about the power of big data and new computational methods, where the scientists' role is now as a technician, essentially testing existing theories on IBM Blue Gene supercomputers. But computers don't have insights. People do. And collaborative efforts are only effective when individuals are valued. Someone has to have an idea. Turing at Bletchley knew—or learned—this, but the lessons have been lost in the decades since. Technology, or rather AI technology, is now pulling "us" into "it." Stupefyingly, we now disparage Einstein to make room for talking-up machinery.