
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI
by Karen Hao
Popular highlights from this book
Key Insights & Memorable Quotes
Below are the most popular and impactful highlights and quotes from Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI:
“Over the years, I’ve found only one metaphor that encapsulates the nature of what these AI power players are: empires. During the long era of European colonialism, empires seized and extracted resources that were not their own and exploited the labor of the people they subjugated to mine, cultivate, and refine those resources for the empires’ enrichment. They projected racist, dehumanizing ideas of their own superiority and modernity to justify—and even entice the conquered into accepting—the invasion of sovereignty, the theft, and the subjugation. They justified their quest for power by the need to compete with other empires: In an arms race, all bets are off. All this ultimately served to entrench each empire’s power and to drive its expansion and progress. In the simplest terms, empires amassed extraordinary riches across space and time, through imposing a colonial world order, at great expense to everyone else.”
“The number of independent researchers not affiliated with or receiving funding from the tech industry has rapidly dwindled, diminishing the diversity of ideas in the field not tied to short-term commercial benefit.”
“predetermined. But the question of governance returns: Who will get to shape them?”
“Under the hood, generative AI models are monstrosities, built from consuming previously unfathomable amounts of data, labor, computing power, and natural resources.”
“Neural networks, meanwhile, come with a different trade-off. For years the field has aggressively debated whether such connectionist software can do what the symbolic ones can: store information and reason. Regardless of the answer, it has become clear that if they can, they do so inefficiently. Only with extraordinary amounts of data and computational power have neural networks even begun to have the kinds of behaviors that may suggest the emergence of either property. That said, one area where deep learning models really shine is how easy it is to commercialize them. You do not need perfectly accurate systems with reasoning capabilities to turn a handsome profit. Strong statistical pattern-matching and prediction go a long way in solving financially lucrative problems. The path to reaping a return, despite similarly expensive upfront investment, is also short and predictable, well suited to corporate planning cycles and the pace of quarterly earnings. Even better that such models can be spun up for a range of contexts without specialized domain knowledge, fitting for a tech giant’s expansive ambitions. Not to mention that deep learning affords the greatest competitive advantage to players with the most data.”
“Sutskever intuitively believed it would have to do with one key dimension above all else: the amount of “compute,” a term of art for computational resources, that OpenAI would need to achieve major breakthroughs in AI capabilities. The ImageNet competition and subsequent advancements that he had been a part of had all involved a material increase in the amount of compute that had been used to train an AI model. The advancements had involved other things, too: significantly more data and more sophisticated algorithms. But compute, Sutskever felt, was king. And if it were possible to scale compute enough to train an AI model at human brain scale, he believed, something radical would surely happen: AGI.”
“On Musk’s list of recommended books was Superintelligence: Paths, Dangers, Strategies, in which Oxford philosopher Nick Bostrom argues”
“OpenAI began to keep a road map to systemize its research. Amodei treated it like an investor: He called it having “a portfolio of bets.” He and other researchers kept tabs on different ideas within the field, born out of different philosophies about how to achieve artificial general intelligence, and advanced each one through small-scale experimentation. Those that seemed promising, OpenAI would continue. Those that didn’t pan out, it would abandon.”
“In other words, it was possible to estimate with high accuracy how much data, how much compute, and how many parameters to use to produce a model with a desired level of performance on a discrete capability tightly correlated with next-word-prediction—say, fluency in text generation. For capabilities less but still somewhat correlated, increasing these inputs should also lead to better performance.”
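The scaling laws alluded to in the highlight above are commonly written as a power law in parameter count and training-token count. The sketch below is purely illustrative and is not from the book: the function name is invented, and the coefficients are hypothetical placeholders loosely in the range reported by published scaling-law studies.

```python
# Illustrative sketch of the kind of scaling relationship the highlight
# describes: predicted loss falls smoothly and predictably as parameters (N)
# and training tokens (D) grow. All constants here are hypothetical
# placeholders for illustration, not fitted values from any real study.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Power-law loss estimate of the form L = E + A/N^alpha + B/D^beta."""
    E = 1.69                  # hypothetical irreducible loss floor
    A, B = 406.4, 410.7       # hypothetical scale coefficients
    alpha, beta = 0.34, 0.28  # hypothetical scaling exponents
    return E + A / n_params**alpha + B / n_tokens**beta

# A larger model trained on more tokens is predicted to reach lower loss
# than a smaller model trained on less data:
small = predicted_loss(1e8, 2e9)      # ~100M params, ~2B tokens
large = predicted_loss(7e10, 1.4e12)  # ~70B params, ~1.4T tokens
assert large < small
```

The point of such a formula, as the highlight notes, is that performance on capabilities tightly correlated with next-word prediction can be estimated in advance from the training inputs alone.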
“the term hallucinations is subtly misleading. It suggests that the bad behavior is an aberration, a bug, when it’s actually a feature of the probabilistic pattern-matching mechanics of neural networks.”
“It triggered the very race to the bottom that it had warned about, massively accelerating the technology’s commercialization and deployment without shoring up its harmful flaws or the dangerous ways that it could amplify and exploit the fault lines in our society.”
“OpenAI had grown competitive, secretive, and insular, even fearful of the outside world under the intoxicating power of controlling such a paramount technology. Gone were notions of transparency and democracy, of self-sacrifice and collaboration. OpenAI executives had a singular obsession: to be the first to reach artificial general intelligence, to make it in their own image.”
“Despite his concerns, Amodei believed, as with GPT-3, that the best way to mitigate the possible harms of code generation was simply to build the model faster than anyone else, including even the other teams at OpenAI who he didn’t believe would prioritize AI safety, and use the lead time to conduct research on de-risking the model. Much to the confusion of other employees, the two teams continued to work on duplicate code-generation efforts. “It just seemed from the outside watching this that it was some kind of crazy Game of Thrones stuff,” a researcher says.”
“We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions,” they wrote. If another attempt to create beneficial AGI surpassed OpenAI’s progress, “we commit to stop competing with and start assisting this project.”
“Six years after my initial skepticism about OpenAI's altruism, I've come to firmly believe that OpenAI's mission (to ensure AGI benefits all of humanity) may have begun as a sincere stroke of idealism, but it has since become a uniquely potent formula for consolidating resources and constructing an empire-esque power structure.”
“OpenAI was unconcerned—or in tech startup terms, “unburdened”—by this compliance. It was a classic mindset in Silicon Valley, where founders and investors espouse the mantra that startups could and should move into legal gray areas (think Airbnb, Uber, or Coinbase) to disrupt and revolutionize industries.”
“Intelligence is compression”
“An accounting of the societal impacts of commercializing AI research returned an unsettling scorecard: Automated software being sold to the police, mortgage brokers, and credit lenders were entrenching racial, gender, and class discrimination. Algorithms running Facebook’s News Feed and YouTube’s recommendation systems had likely polarized the public, fueled misinformation and extremism, enabled election interference, and, most horrifying in the case of Facebook, precipitated ethnic cleansing in Myanmar.”
“AI computing globally could use more energy than all of India, the world’s third-largest electricity consumer.”


