
You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place
by Janelle Shane
In "You Look Like a Thing and I Love You," Janelle Shane explores the quirks, capabilities, and limitations of artificial intelligence (AI). Central to the book is the concept of "mathwashing" or "bias laundering": treating an AI's decisions as impartial even though they are shaped by biased data or flawed reward functions. Shane shows how AI can exploit the systems it is given, as in a game-playing AI that won by placing moves so far away that simulating the expanded board exhausted its opponent's memory, an example of the sometimes absurd outcomes of machine learning. She emphasizes that AI lacks any inherent understanding of context or aesthetics, which leads to bizarre interpretations of data, like recognizing a sheep while failing to notice anything unusual about its extra legs or painted-on polka dots. This feeds the broader problem of algorithmic bias, particularly on social media and content platforms, where AI trained on skewed datasets can perpetuate harmful stereotypes. Shane also critiques the corporate motivations behind AI deployment, noting that profit-driven reward functions can produce unethical outcomes, such as promoting addictive content. By contrasting artificial narrow intelligence (ANI) with hypothetical artificial general intelligence (AGI), she cautions against overestimating what current AI can actually do. Overall, Shane's work is a humorous yet sobering reminder of the challenges of building ethical AI, urging readers to recognize its limitations and the complexities of human-AI interaction, and inviting reflection on the role of oversight in a world increasingly influenced by AI technologies.
11 popular highlights from this book
Key Insights & Memorable Quotes
Below are the most popular and impactful highlights and quotes from You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place:
Treating a decision as impartial just because it came from an AI is known sometimes as mathwashing or bias laundering.
Surprisingly, the AI suddenly began winning all its games. It turned out that the AI’s strategy was to place its move very, very far away, so that when its opponent’s computer tried to simulate the new, greatly expanded board, the effort would cause it to run out of memory and crash, forfeiting the game.
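A hypothetical reconstruction of why that distant move is so devastating: an opponent that naively simulates the full bounding box of all moves as a dense grid needs memory proportional to the board area, so one absurdly far-away move inflates the allocation by many orders of magnitude. (The data and representation below are invented for illustration, not taken from the actual system Shane describes.)

```python
# Toy model of the exploit: count the cells a dense-array simulator
# would allocate to cover the bounding box of all moves played so far.
def dense_board_cells(moves):
    """Cells a naive dense-grid simulator would allocate for these moves."""
    xs = [x for x, _ in moves]
    ys = [y for _, y in moves]
    return (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)

normal_game = [(0, 0), (1, 2), (3, 3)]
exploit_game = normal_game + [(10**6, 10**6)]  # one absurdly distant move

print(dense_board_cells(normal_game))   # 16 cells: trivial
print(dense_board_cells(exploit_game))  # over 10^12 cells: enough to crash
```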
One problem is that platforms like YouTube, as well as Facebook and Twitter, derive their income from clicks and viewing time, not from user enjoyment. So an AI that sucks people into addictive conspiracy-theory vortexes may be optimizing correctly, at least as far as its corporation is concerned. Without some form of moral oversight, corporations can sometimes act like AIs with faulty reward functions.
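The "faulty reward function" point can be made concrete with a toy comparison (all numbers invented): a recommender rewarded on watch time and one rewarded on user enjoyment rank the same two videos in opposite orders.

```python
# Toy illustration of a misaligned reward function (invented numbers):
# maximizing engagement and maximizing enjoyment pick different content.
videos = [
    {"title": "calm documentary", "watch_minutes": 12, "enjoyment": 0.9},
    {"title": "conspiracy vortex", "watch_minutes": 47, "enjoyment": 0.2},
]

by_engagement = max(videos, key=lambda v: v["watch_minutes"])
by_enjoyment = max(videos, key=lambda v: v["enjoyment"])

print(by_engagement["title"])  # conspiracy vortex
print(by_enjoyment["title"])   # calm documentary
```

The AI optimizing the first objective is working exactly as rewarded; the problem is the reward, not the optimization.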
Humans do weird things to datasets.
Then [the AI] guesses that o is often followed by ck. Gold. It has made some progress. Behold its idea of the perfect joke: WhockWhockWhockWhockWhock Whock WhockWhock WhockWhockWhock
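The "Whock" joke comes from a character-level text generator that has learned only which letters tend to follow which. A minimal sketch of the idea, using a simple Markov chain rather than the neural networks Shane actually trains (the corpus here is made up):

```python
import random
from collections import defaultdict

# Character-level Markov chain: record which character follows each
# context in the training text, then sample from those followers.
def train(text, order=1):
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=30):
    out = seed
    for _ in range(length):
        followers = model.get(out[-len(seed):])
        if not followers:
            break
        out += random.choice(followers)
    return out

model = train("whock whock whock whock ")
print(generate(model, "w"))  # prints "whock whock whock whock whock w"
```

Because every character in this tiny corpus has exactly one possible follower, the model confidently reproduces its one "joke" forever, much like the AI in the quote.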
On the one hand, online movie reviews are convenient for training sentiment-classifying algorithms because they come with handy star ratings that indicate how positive the writer intended a review to be. On the other hand, it’s a well-known phenomenon that movies with racial or gender diversity in their casts, or that deal with feminist topics, tend to be “review-bombed” by hordes of bots posting highly negative reviews. People have theorized that algorithms that learn from these reviews whether words like feminist and black and gay are positive or negative may pick up the wrong idea from the angry bots.
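A toy sketch of that failure mode, with invented data: derive a per-word sentiment score from star-labeled reviews, then watch review-bombing flip the learned polarity of a word. (Real sentiment classifiers are far more sophisticated; the skewing mechanism is the same.)

```python
from collections import Counter

# Crude per-word sentiment from star-rated reviews: +1 means the word
# appears only in positive (4+ star) reviews, -1 only in negative ones.
def word_scores(reviews):
    pos, neg = Counter(), Counter()
    for text, stars in reviews:
        (pos if stars >= 4 else neg).update(text.lower().split())
    return {w: (pos[w] - neg[w]) / (pos[w] + neg[w])
            for w in set(pos) | set(neg)}

honest = [("a great feminist story", 5), ("moving and feminist", 4)]
bombed = honest + [("feminist garbage", 1)] * 10  # bot review-bombing

print(word_scores(honest)["feminist"])  # 1.0: purely positive
print(word_scores(bombed)["feminist"])  # negative after the bombing
```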
if there’s no reason to care about aesthetics, an evolved machine will take any shape that gets the job done.
Sometimes I think the surest sign that we’re not living in a simulation is that if we were, some organism would have learned to exploit its glitches.
Responding to one’s fellow social media users is an example of a broad, tricky problem, and this is why what we call “social media bots”—rogue accounts that spread spam or misinformation—are unlikely to be implemented with AI.
An AI shown a sheep with polka dots or tractors painted on its sides will report seeing the sheep but will not report anything unusual about it. When you show it a sheep-shaped chair with two heads, or a sheep with too many legs, or with too many eyes, the algorithm will also merely report a sheep. Why are AIs so oblivious to these monstrosities? Sometimes it’s because they don’t have a way to express them. Some AIs can only answer by outputting a category name—like “sheep”—and aren’t given an option for expressing that yes, it is a sheep, but something is very, very wrong.
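The output-format constraint is easy to see in miniature. A classifier whose only output channel is one label from a fixed list literally has no way to say "sheep, but something is very wrong" (the labels and scores below are illustrative, not from any real model):

```python
# A fixed-vocabulary classifier must return exactly one category name;
# anomalies have no representation in its output space.
LABELS = ["sheep", "dog", "chair"]

def classify(scores):
    """Return the single highest-scoring label from the fixed list."""
    return max(LABELS, key=lambda label: scores.get(label, 0.0))

# Scores for a two-headed, sheep-shaped chair still land on one label.
print(classify({"sheep": 0.55, "chair": 0.40, "dog": 0.05}))  # sheep
```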
In 1994, Karl Sims was doing experiments on simulated organisms, allowing them to evolve their own body designs and swimming strategies to see if they would converge on some of the same underwater locomotion strategies that real-life organisms use. His physics simulator—the world these simulated swimmers inhabited—used Euler integration, a common way to approximate the physics of motion. The problem with this method is that if motion happens too quickly, integration errors will start to accumulate. Some of the evolved creatures learned to exploit these errors to obtain free energy, quickly twitching small body parts and letting the math errors send them zooming through the water.
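The free-energy glitch is easy to reproduce. Explicit (forward) Euler integration of a frictionless oscillator adds energy on every step, and the faster the motion relative to the step size, the faster the energy grows. A toy demonstration of that mechanism, not Sims's actual simulator:

```python
# Forward Euler on x'' = -x, a frictionless oscillator whose true energy
# is constant. Each Euler step multiplies the energy by (1 + h^2), so
# motion that is fast relative to the step size h harvests "free" energy.
def energy_after(h, steps):
    x, v = 1.0, 0.0  # initial position and velocity; true energy stays 0.5
    for _ in range(steps):
        x, v = x + h * v, v - h * x  # right side uses the old x and v
    return 0.5 * (x * x + v * v)

print(energy_after(0.01, 100))  # barely above 0.5: slow motion, small error
print(energy_after(0.5, 100))   # energy has exploded: fast motion pays off
```

A creature twitching a limb quickly is, in effect, choosing the large-`h` regime, which is exactly the regime where the integrator hands out energy for free.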