
The Age of AI and Our Human Future
by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher
Popular highlights from this book
Key Insights & Memorable Quotes
Below are the most popular and impactful highlights and quotes from The Age of AI and Our Human Future:
When information is contextualized, it becomes knowledge. When knowledge compels convictions, it becomes wisdom. Yet the internet inundates users with the opinions of thousands, even millions, of other users, depriving them of the solitude required for sustained reflection that, historically, has led to the development of convictions. As solitude diminishes, so, too, does fortitude—not only to develop convictions but also to be faithful to them, particularly when they require the traversing of novel, and…
In our period, new technology has been developed, but remains in need of a guiding philosophy.
The irony is that even as digitization is making an increasing amount of information available, it is diminishing the space required for deep, concentrated thought.
Created by humans, AI should be overseen by humans. But in our time, one of AI’s challenges is that the skills and resources required to create it are not inevitably paired with the philosophical perspective to understand its broader implications.
What degree of inferiority would remain meaningful in a crisis in which each side used its capabilities to the fullest?
[AI] will increasingly appear to humans as a fellow “being” experiencing and knowing the world — a combination of tool, pet, and mind.
[For] example, in airline and automotive emergencies, should an AI copilot defer to a human? Or the other way around?
As the cost of opting out of the digital domain increases, its ability to affect human thought — to convince, to steer, to divert — grows.
The railroads that delivered goods to market were the same that delivered soldiers to battle — but they had no destructive potential. Nuclear technologies are often dual-use and may generate tremendous destructive capacity, but their complicated infrastructure enables relatively secure governmental control. A hunting rifle may be in widespread use and possess both military and civilian applications, but its limited capacity prevents its wielder from inflicting destruction on a strategic level.
AI can also be used defensively, locating and repairing flaws before they are exploited. But since the attacker can choose the target, AI gives the party on offense an inherent if not insuperable advantage.
[N]uclear war between major powers would involve irreversible decisions and unique risks for victor, vanquished, and bystanders alike.
Unique among security strategies (at least until now), nuclear deterrence rests on a series of untestable abstractions…
[A] bluff taken seriously could prove a more useful deterrent than a bona fide threat that was ignored.
Viewed through the lens of deterrence, seeming weakness could have the same consequences as an actual deficiency.
Rather than clear outcomes, however, we are more likely to arrive at a series of dilemmas with imperfect answers.
App developers often rush programs to market, correcting flaws in real time, while aerospace companies do the opposite: test their jets religiously before a single customer ever sets foot…
[C]an the need for philosophy be met by humans assisted by AIs, which interpret and thus understand the world differently?
In the 1990s, a set of renegade researchers set aside many of the earlier era’s assumptions, shifting their focus to machine learning. While machine learning dated to the 1950s, new advances enabled practical applications. The methods that have worked best in practice extract patterns from large datasets using neural networks. In philosophical terms, AI’s pioneers had turned from the early Enlightenment’s focus on reducing the world to mechanistic rules to constructing approximations of reality. To identify an image of a cat, they realized, a machine had to “learn” a range of visual representations of cats by observing the animal in various contexts. To enable machine learning, what mattered was the overlap between various representations of a thing, not its ideal—in philosophical terms, Wittgenstein, not Plato. The modern field of machine learning—of programs that learn through experience—was born.
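The highlight above contrasts hand-written rules with programs that learn from examples. A minimal illustrative sketch of that idea — not from the book, using a toy perceptron and invented feature names — shows a classifier whose rule is extracted from labeled data rather than programmed explicitly:

```python
# Toy sketch of "learning from experience": a perceptron adjusts its
# weights from labeled examples instead of being given explicit rules.
# Feature names and data are invented for illustration only.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn a linear decision rule from (features, label) pairs."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # 0 when correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical "cat detector" features: [has_whiskers, has_wings]
samples = [[1, 0], [1, 1], [0, 1], [0, 0]]
labels  = [1, 0, 0, 0]      # only whiskers-without-wings is a "cat"
w, b = train_perceptron(samples, labels)
print(predict(w, b, [1, 0]))  # → 1: the rule was learned, not coded
```

The rule the program ends up with exists only as numeric weights — a crude analogue of the book's point that such systems approximate reality from overlapping examples rather than from an ideal definition.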
Aided by the advancement and increasing use of AI, the human mind is accessing new vistas, bringing previously unattainable goals within sight. These include models with which to predict and mitigate natural disasters, deeper knowledge of mathematics, and fuller understanding of the universe and the reality in which it resides. But these and other possibilities are being purchased — largely without fanfare — by altering the human relationship with reason and reality. This is a revolution for which existing philosophical concepts and societal institutions leave us largely unprepared.
The irony is that even as digitization is making an increasing amount of information available, it is diminishing the space required for deep, concentrated thought. Today’s near-constant stream of media increases the cost, and thus decreases the frequency, of contemplation. Algorithms promote what seizes attention in response to the human desire for stimulation—and what seizes attention is often the dramatic, the surprising, and the emotional. Whether an individual can find space in this environment for careful thought is one matter. Another is that the now-dominant forms of communication are non-conducive to the promotion of tempered reasoning.
The dilemma of the AI age will be different: its defining technology will be widely acquired, mastered, and employed. The achievement of mutual strategic restraint — or even achieving a common definition of restraint — will be more difficult than ever before, both conceptually and practically. The management of nuclear weapons, the endeavor of half a century, remains incomplete and fragmentary. Yet the challenge of assessing the nuclear balance was comparatively straightforward. Warheads could be counted, and their yields were known. Conversely, the capabilities of AI are not fixed; they are dynamic. Unlike nuclear weapons, AIs are hard to track: once trained, they may be copied easily and run on relatively small machines. And detecting their presence or verifying their absence is difficult or impossible with the present technology. In this age, deterrence will likely arise from complexity — from the multiplicity of vectors through which an AI‑enabled attack is able to travel and from the speed of potential AI responses.
AI’s brittleness is a reflection of the shallowness of what it learns.
Developing professional certification, compliance monitoring, and oversight programs for AI — and the auditing expertise their execution will require — will be a crucial societal project.
With data and computing requirements limiting the development of more advanced AI, devising training methods that use less data and less computer power is a critical frontier.
This book seeks to explain AI and provide the reader with both questions we must face in coming years and tools to begin answering them. The questions include:
• What do AI-enabled innovations in health, biology, space, and quantum physics look like?
• What do AI-enabled “best friends” look like, especially to children?
• What does AI-enabled war look like?
• Does AI perceive aspects of reality humans do not?
• When AI participates in assessing and shaping human action, how will humans change?
• What, then, will it mean to be human?
But AI’s function is complex and inconsistent. In some tasks, AI achieves human—or superhuman—levels of performance; in others (or sometimes the same tasks), it makes errors even a child would avoid or produces results that are utterly nonsensical.
AI’s mysteries may not yield a single answer or proceed straightforwardly in one direction, but they should prompt us to ask questions. When intangible software acquires logical capabilities and, as a result, assumes social roles once considered exclusively human (paired with those never experienced by humans), we must ask ourselves: How will AI’s evolution affect human perception, cognition, and interaction?
Later, in the late twentieth century and the early twenty-first, this thinking informed theories of AI and machine learning. Such theories posited that AI’s potential lay partly in its ability to scan large data sets to learn types and patterns—e.g., groupings of words often found together, or features most often present in an image when that image was of a cat—and then to make sense of reality by identifying networks of similarities and likenesses with what the AI already knew. Even if AI would never know something in the way a human mind could, an accumulation of matches with the patterns of reality could approximate and sometimes exceed the performance of human perception and reason.
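The "networks of similarities and likenesses" idea in this highlight can be sketched in a few lines: classify a new item by comparing its features with labeled examples and adopting the label of the best match. This is a toy nearest-neighbor illustration with invented data, not anything from the book:

```python
# Toy sketch of classification by similarity: match a new item against
# known examples and take the label of the closest one.
# Feature sets and labels are invented for illustration only.

def overlap(a, b):
    """Count features shared by two feature sets."""
    return len(set(a) & set(b))

def classify(item, examples):
    """examples: list of (feature_set, label); return best match's label."""
    best_features, best_label = max(examples, key=lambda ex: overlap(item, ex[0]))
    return best_label

examples = [
    ({"whiskers", "fur", "tail"}, "cat"),
    ({"feathers", "wings", "beak"}, "bird"),
]
print(classify({"fur", "whiskers", "meow"}, examples))  # → cat
```

The program never "knows" what a cat is; it only accumulates matches with patterns it has already seen — the approximation of perception the passage describes.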