AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference

by Arvind Narayanan and Sayash Kapoor


Key Insights & Memorable Quotes

Below are the most popular and impactful highlights and quotes from AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference:

“Imagine an alternate universe in which people don’t have words for different forms of transportation—only the collective noun “vehicle.” They use that word to refer to cars, buses, bikes, spacecraft, and all other ways of getting from place A to place B. Conversations in this world are confusing. There are furious debates about whether or not vehicles are environmentally friendly, even though no one realizes that one side of the debate is talking about bikes and the other side is talking about trucks. There is a breakthrough in rocketry, but the media focuses on how vehicles have gotten faster—so people call their car dealer (oops, vehicle dealer) to ask when faster models will be available. Meanwhile, fraudsters have capitalized on the fact that consumers don’t know what to believe when it comes to vehicle technology, so scams are rampant in the vehicle sector.

Now replace the word “vehicle” with “artificial intelligence,” and we have a pretty good description of the world we live in.

Artificial intelligence, AI for short, is an umbrella term for a set of loosely related technologies. ChatGPT has little in common with, say, software that banks use to evaluate loan applicants. Both are referred to as AI, but in all the ways that matter—how they work, what they’re used for and by whom, and how they fail—they couldn’t be more different.”
“[All] modern chatbots are actually trained simply to predict the next word in a sequence of words. They generate text by repeatedly producing one word at a time. For technical reasons, they generate a “token” at a time, tokens being chunks of words that are shorter than words but longer than individual letters. They string these tokens together to generate text.

When a chatbot begins to respond to you, it has no coherent picture of the overall response it’s about to produce. It instead performs an absurdly large number of calculations to determine what the first word in the response should be. After it has output—say, a hundred words—it decides what word would make the most sense given your prompt together with the first hundred words that it has generated so far.

This is, of course, a way of producing text that’s utterly unlike human speech. Even when we understand perfectly well how and why a chatbot works, it can remain mind-boggling that it works at all.

Again, we cannot stress enough how computationally expensive all this is. To generate a single token—part of a word—ChatGPT has to perform roughly a trillion arithmetic operations. If you asked it to generate a poem that ended up having about a thousand tokens (i.e., a few hundred words), it would have required about a quadrillion calculations—a million billion.”
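The loop the quote describes can be sketched in a few lines. This is a toy illustration, not how ChatGPT is implemented: the `next_token` function here is a deterministic stand-in for the trillion-operation neural network the book mentions, and the vocabulary is invented for the example. The point it shows is the autoregressive structure: each token is chosen given the prompt plus everything generated so far, with no overall plan for the response.

```python
# Toy vocabulary of tokens (a real model uses tens of thousands of
# sub-word tokens, not whole words).
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token(context):
    """Stand-in for the model: choose one token given all prior tokens.

    In a real chatbot this single step costs roughly a trillion
    arithmetic operations; here it is a trivial deterministic rule
    so the example is runnable.
    """
    return VOCAB[len(context) % len(VOCAB)]

def generate(prompt_tokens, max_tokens):
    """Autoregressive generation: emit one token at a time, each
    conditioned on the prompt plus everything emitted so far."""
    output = []
    for _ in range(max_tokens):
        token = next_token(prompt_tokens + output)
        output.append(token)
    return output

print(generate(["hello"], 5))  # → ['cat', 'sat', 'on', 'mat', '.']
```

The book's cost arithmetic falls out of the same loop: if each call to the real `next_token` costs about 10^12 operations and a poem runs about 10^3 tokens, the total is about 10^15 operations, the quadrillion the authors cite.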
“It’s true that companies and governments have many misguided commercial or bureaucratic reasons for deploying faulty predictive AI. But part of the reason surely is that decision-makers are people—people who dread randomness like everyone else. This means they can’t stand the thought of the alternative to this way of decision-making—that is, acknowledging that the future cannot be predicted. They would have to accept that they have no control over, say, picking good job performers, and that it’s not possible to do better than a process that is mostly random.”
“Will society be left perpetually reacting to new developments in generative AI? Or do we have the collective will to make structural changes that would allow us to spread out the highly uneven benefits and costs of new innovations, whatever they may be?”
“A good example is what’s happening in schools and colleges, given that AI can generate essays and pass college exams. Let’s be clear—AI is no threat to education, any more than the introduction of the calculator was. With the right oversight, it can be a valuable learning tool. But to get there, teachers will have to overhaul their curricula, their teaching strategies, and their exams.”
