
The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity
by Amy Webb
16 popular highlights from this book
Key Insights & Memorable Quotes
Below are the most popular and impactful highlights and quotes from The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity:
Right now, there is no other country on Earth with as much data as China, as many people as China, and as many electronics per capita. No other country is positioned to have a bigger economy than America’s within our lifetimes. No other country has more potential to influence our planet’s ecosystem, climate, and weather patterns—leading to survival or catastrophe—than China. No other country bridges both the developed and developing world like China does.
There would be no way to create a set of commandments for AI. We couldn’t write out all of the rules to correctly optimize for humanity, and that’s because while thinking machines may be fast and powerful, they lack flexibility.
While plenty of smart people advocate AI for the public good, we are not yet discussing artificial intelligence as a public good. This is a mistake.
As AI advances, a more robust personal data record will afford greater efficiencies to the Big Nine, and so they will nudge us to accept and adopt PDRs, even if we don’t entirely understand the implications of using them. Of course, in China, PDRs are already being piloted under the auspices of its social credit score.
English mathematician Ada Lovelace and scientist Charles Babbage invented a machine called the “Difference Engine” and then later postulated a more advanced “Analytical Engine,” which used a series of predetermined steps to solve mathematical problems. Babbage hadn’t conceived that the machine could do anything beyond calculating numbers. It was Lovelace who, in the footnotes of a scientific paper she was translating, went off on a brilliant tangent speculating that a more powerful version of the Engine could be used in other ways.[13] If the machine could manipulate symbols, which themselves could be assigned to different things (such as musical notes), then the Engine could be used to “think” outside of mathematics. While she didn’t believe that a computer would ever be able to create original thought, she did envision a complex system that could follow instructions and thus mimic a lot of what everyday people did. It seemed unremarkable to some at the time, but Ada had written the first complete computer program for a future, powerful machine—decades before the light bulb was invented.
Turing figured out that a program and the data it used could be stored inside a computer—again, this was a radical proposition in the 1930s. Until that point, everyone agreed that the machine, the program, and the data were each independent. For the first time, Turing’s universal machine explained why all three were intertwined.
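The stored-program idea is easier to see in code than in prose. The toy machine below is purely illustrative (nothing like it appears in the book): it keeps its instructions and its data in one shared memory array, so the “program” is just more data the machine fetches, which is exactly why Turing’s universal machine treats the three as intertwined.

```python
# Toy stored-program machine: instructions and data share one memory array.
# Entirely illustrative; not a historical machine or anything from the book.
memory = [
    ("LOAD", 6),    # 0: acc = memory[6]
    ("ADD", 7),     # 1: acc += memory[7]
    ("STORE", 8),   # 2: memory[8] = acc
    ("PRINT", 8),   # 3: print memory[8]
    ("HALT", 0),    # 4: stop
    None,           # 5: unused
    40,             # 6: data
    2,              # 7: data
    0,              # 8: result goes here
]

acc, pc = 0, 0
while True:
    op, addr = memory[pc]       # fetch an instruction from memory...
    pc += 1
    if op == "LOAD":
        acc = memory[addr]      # ...that reads other memory cells as data
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "PRINT":
        print(memory[addr])     # -> 42
    elif op == "HALT":
        break
```

Because the instructions live in ordinary memory cells, the same machine can run any program loaded into that memory, the property Turing’s universal machine formalized.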
The tools and built environments of hair salons and the platforms powering the airline industry are examples of something called Conway’s law, which says that in the absence of stated rules and instructions, the choices teams make tend to reflect the implicit values of their tribe.
In 1968, Melvin Conway, a computer programmer and high school math and physics teacher, observed that systems tend to reflect the values of the people who designed them. Conway was specifically looking at how organizations communicate internally, but later Harvard and MIT studies proved his idea more broadly. Harvard Business School analyzed different codebases, looking at software that was built for the same purpose but by different kinds of teams: those that were tightly controlled, and those that were more ad hoc and open source.[10] One of their key findings: design choices stem from how teams are organized, and within those teams, bias and influence tend to go overlooked.
Our fast, intuitive mind makes thousands of decisions autonomously all day long, and while it’s more energy efficient, it’s riddled with cognitive biases that affect our emotions, beliefs, and opinions. We make mistakes because of the fast side of our brain. We overeat, or drink to excess, or have unprotected sex. It’s that side of the brain that enables stereotyping.
We are crossing a threshold into a new reality in which AI is generating its own programs, creating its own algorithms, and making choices without humans in the loop. At the moment, no one, in any country, has the right to interrogate an AI and see clearly how a decision was made.
Since AI isn’t being taught to make perfect decisions, but rather to optimize, our response to changing forces in society matters a great deal. Our values are not immutable, and that is what makes the problem of AI’s values so vexing: building AI means predicting the values of the future. So how do we teach machines to reflect our values without influencing them?
What’s not on the table, at the G-MAFIA or BAT, is optimizing for empathy. Take empathy out of the decision-making process, and you take away our humanity. Sometimes what might make no logical sense at all is the best possible choice for us at a particular moment. Like blowing off work to spend time with a sick family member, or helping someone out of a burning car, even if that action puts your own life in jeopardy.
The mantra is part of a troubling ideology that’s pervasive among the Big Nine: build it first, and ask for forgiveness later.
Skills are taught experientially—meaning that students studying AI don’t have their heads buried in books. In order to learn, they need lexical databases, image libraries, and neural nets. For a time, one of the more popular neural nets at universities was called Word2vec, and it was built by the Google Brain team. It was a two-layer system that processed text, turning words into numbers that AI could understand.[17] For example, it learned that “man is to king as woman is to queen.” But the database also decided that “father is to doctor as mother is to nurse” and “man is to computer programmer as woman is to homemaker.”[18] The very system students were exposed to was itself biased. If someone wanted to analyze the further-reaching implications of sexist code, there weren’t any classes where that learning could take place.
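To make the analogy arithmetic concrete: Word2vec represents each word as a vector, and “A is to B as C is to ?” is answered by vector math (B − A + C), then finding the nearest word. Below is a minimal sketch using the gensim library and Google’s pretrained News-corpus vectors; both are assumptions for illustration, since the highlight names only Word2vec itself.

```python
# Minimal sketch of Word2vec analogy arithmetic using the gensim library and
# Google's pretrained News vectors. Both are assumptions for illustration;
# the highlight names only Word2vec itself. (The download is large, ~1.6 GB.)
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # returns KeyedVectors

# "man is to king as woman is to ?" -> vector(king) - vector(man) + vector(woman)
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# typically -> [('queen', ...)]

# The same arithmetic surfaces the biased analogies Webb cites, e.g.
# "man is to computer programmer as woman is to ?" (assuming the phrase
# token exists in this vocabulary, as it does in the News vectors):
print(model.most_similar(positive=["computer_programmer", "woman"],
                         negative=["man"], topn=1))
# The answer reflects gendered patterns in the training text, not any
# stated rule, which is Webb's point about inherited bias.
```

The bias is not a bug in the arithmetic; the same nearest-neighbor query that produces “queen” produces the sexist pairings, because both are patterns in the training corpus.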
One probable near-term outcome of AI and a through-line in all three of the scenarios is the emergence of what I’ll call a “personal data record,” or PDR. This is a single unifying ledger that includes all of the data we create as a result of our digital usage (think internet and mobile phones), but it would also include other sources of information: our school and work histories (diplomas, previous and current employers); our legal records (marriages, divorces, arrests); our financial records (home mortgages, credit scores, loans, taxes); travel (countries visited, visas); dating history (online apps); health (electronic health records, genetic screening results, exercise habits); and shopping history (online retailers, in-store coupon use). In China, a PDR would also include all the social credit score data described in the last chapter.
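As a purely hypothetical illustration of what such a “single unifying ledger” could look like as a data structure, the sketch below maps the categories Webb lists onto one record; every class and field name is invented for illustration, not drawn from the book.

```python
# Purely hypothetical sketch of a "personal data record" (PDR) as one ledger.
# The book describes the categories, not a schema; every name below is invented.
from dataclasses import dataclass, field

@dataclass
class PersonalDataRecord:
    person_id: str
    digital_usage: list[dict] = field(default_factory=list)   # internet, mobile
    education_work: list[dict] = field(default_factory=list)  # diplomas, employers
    legal_records: list[dict] = field(default_factory=list)   # marriages, divorces, arrests
    financial: list[dict] = field(default_factory=list)       # mortgages, credit scores, taxes
    travel: list[dict] = field(default_factory=list)          # countries visited, visas
    dating: list[dict] = field(default_factory=list)          # online app histories
    health: list[dict] = field(default_factory=list)          # EHRs, genetic screens, exercise
    shopping: list[dict] = field(default_factory=list)        # retailers, coupon use
    social_credit: list[dict] = field(default_factory=list)   # China's pilot, per the book

# One record unifying sources that today live in separate silos:
pdr = PersonalDataRecord(person_id="example-001")
pdr.health.append({"source": "ehr", "entry": "annual physical, 2018"})
pdr.travel.append({"country": "JP", "visa": "tourist"})
```

The point of the sketch is the unification itself: once these silos share one key, the efficiencies (and the nudges) Webb describes follow naturally.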
Data is analogous to our world’s oceans. It surrounds us, is an endless resource, and remains totally useless to us unless we desalinate it, treating and processing it for consumption.