
You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place
by Janelle Shane
17 popular highlights from this book
Key Insights & Memorable Quotes
Below are the most popular and impactful highlights and quotes from the book:
Treating a decision as impartial just because it came from an AI is sometimes known as mathwashing or bias laundering.
Surprisingly, the AI suddenly began winning all its games. It turned out that the AI’s strategy was to place its move very, very far away, so that when its opponent’s computer tried to simulate the new, greatly expanded board, the effort would cause it to run out of memory and crash, forfeiting the game.
One problem is that platforms like YouTube, as well as Facebook and Twitter, derive their income from clicks and viewing time, not from user enjoyment. So an AI that sucks people into addictive conspiracy-theory vortexes may be optimizing correctly, at least as far as its corporation is concerned. Without some form of moral oversight, corporations can sometimes act like AIs with faulty reward functions.
Humans do weird things to datasets.
Then [the AI] guesses that o is often followed by ck. Gold. It has made some progress. Behold its idea of the perfect joke:
Whock Whock Whock Whock Whock Whock Whock Whock Whock Whock Whock
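The learning step described here is simple character-frequency counting. Below is a minimal sketch of that idea; the tiny training corpus and variable names are invented for illustration, not taken from the book:

```python
from collections import Counter, defaultdict

# Illustrative training text; the book's actual joke corpus is not shown here.
corpus = "whock whock whock knock knock who is there whock"

# Count, for each character, which character tends to follow it.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

# Generate text by repeatedly taking the most likely next character.
# A model this simple happily falls into a loop.
ch, out = "w", ["w"]
for _ in range(40):
    ch = follows[ch].most_common(1)[0][0]
    out.append(ch)
print("".join(out))  # -> "whock whock whock whock whock ..."
```

With enough "whock"s in the training data, the most probable next character at every step traces the same cycle forever, which is exactly the "perfect joke" above.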
On the one hand, online movie reviews are convenient for training sentiment-classifying algorithms because they come with handy star ratings that indicate how positive the writer intended a review to be. On the other hand, it’s a well-known phenomenon that movies with racial or gender diversity in their casts, or that deal with feminist topics, tend to be “review-bombed” by hordes of bots posting highly negative reviews. People have theorized that algorithms that learn from these reviews whether words like feminist and black and gay are positive or negative may pick up the wrong idea from the angry bots.
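A minimal sketch of the training setup this describes, assuming scikit-learn; the reviews, the star-to-label cutoff, and the words involved are all invented for illustration:

```python
# Star ratings become labels, and a classifier learns per-word sentiment
# weights from whatever reviews it sees -- including bot-posted ones.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = [
    ("great film, loved the cast", 5),
    ("a wonderful, moving story", 4),
    ("boring and far too long", 1),
    ("terrible acting, awful script", 1),
    # Review-bomb style entries drag down words that merely describe the film:
    ("feminist propaganda, zero stars", 1),
    ("feminist agenda ruined it", 1),
]
texts = [text for text, _ in reviews]
labels = [1 if stars >= 4 else 0 for _, stars in reviews]  # stars -> label

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# The learned weight for "feminist" comes out negative, purely from the data.
weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
print(weights["feminist"])
```

Nothing in the algorithm is malicious: the word "feminist" ends up with a negative weight simply because, in this (deliberately poisoned) dataset, it only ever appears in one-star reviews.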
if there’s no reason to care about aesthetics, an evolved machine will take any shape that gets the job done.
Sometimes I think the surest sign that we’re not living in a simulation is that if we were, some organism would have learned to exploit its glitches.
Responding to one’s fellow social media users is an example of a broad, tricky problem, and this is why what we call “social media bots”—rogue accounts that spread spam or misinformation—are unlikely to be implemented with AI.
An AI shown a sheep with polka dots or tractors painted on its sides will report seeing the sheep but will not report anything unusual about it. When you show it a sheep-shaped chair with two heads, or a sheep with too many legs, or with too many eyes, the algorithm will also merely report a sheep. Why are AIs so oblivious to these monstrosities? Sometimes it’s because they don’t have a way to express them. Some AIs can only answer by outputting a category name—like “sheep”—and aren’t given an option for expressing that yes, it is a sheep, but something is very, very wrong.
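To make the "no way to express it" point concrete, here is a minimal sketch of the constraint; the label list and scores are made up, standing in for what a real network would compute from an image:

```python
import numpy as np

# A classifier's final layer scores a fixed list of categories,
# and argmax must pick one of them -- no matter how strange the input.
labels = ["sheep", "dog", "tractor"]
logits = np.array([2.1, 0.3, 1.4])             # network's raw scores
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the fixed labels

# Even for a two-headed sheep-shaped chair, the only possible answers
# are the labels above -- there is no "sheep, but very wrong" output.
print(labels[int(np.argmax(probs))])  # -> "sheep"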
In 1994, Karl Sims was doing experiments on simulated organisms, allowing them to evolve their own body designs and swimming strategies to see if they would converge on some of the same underwater locomotion strategies that real-life organisms use.[5, 6, 7] His physics simulator—the world these simulated swimmers inhabited—used Euler integration, a common way to approximate the physics of motion. The problem with this method is that if motion happens too quickly, integration errors will start to accumulate. Some of the evolved creatures learned to exploit these errors to obtain free energy, quickly twitching small body parts and letting the math errors send them zooming through the water.
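The failure mode is easy to reproduce. Here is a minimal sketch using a simple spring (harmonic oscillator) rather than Sims's actual simulator: forward Euler integration adds energy to an undamped system, and the faster the motion relative to the step size, the more "free energy" appears:

```python
# Forward (explicit) Euler on an undamped oscillator with unit mass and
# stiffness. True total energy should stay at exactly 0.5 forever;
# integration error makes it grow, and larger steps make it grow faster.
def euler_energy(h, steps=1000):
    x, v = 1.0, 0.0                  # position, velocity
    for _ in range(steps):
        x, v = x + h * v, v - h * x  # forward Euler update
    return 0.5 * (v * v + x * x)     # total energy after the run

for h in (0.001, 0.01, 0.1):
    print(f"step {h}: energy {euler_energy(h):.4f}")
```

Each step multiplies the oscillator's energy by (1 + h²), so a creature that twitches fast relative to the simulator's time step gets paid in energy for every twitch.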
Your phone’s predictive text and autocorrect Markov chains update themselves as you type, training themselves on what you write. That’s why if you make a typo, it may haunt you for quite some time.
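A minimal sketch of that behavior, assuming a word-level Markov chain that updates its counts with every sentence typed (the class and method names here are illustrative):

```python
from collections import Counter, defaultdict

# A word-level Markov chain that retrains on everything you type,
# so a typo becomes part of the model and keeps coming back.
class PredictiveText:
    def __init__(self):
        self.next_word = defaultdict(Counter)

    def type_sentence(self, sentence):
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            self.next_word[a][b] += 1      # update the chain as you type

    def suggest(self, word):
        counts = self.next_word[word.lower()]
        return counts.most_common(1)[0][0] if counts else None

kb = PredictiveText()
kb.type_sentence("see you at the meeting")
kb.type_sentence("see you at teh meeting")   # typo gets learned too
kb.type_sentence("meet you at teh office")   # ...and reinforced
print(kb.suggest("at"))  # -> "teh": the typo now outranks the real word
```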
This is why AI researchers like to draw a distinction between artificial narrow intelligence (ANI), the kind we have now, and artificial general intelligence (AGI), the kind we usually find in books and movies.
Yes, the farm produces cockroaches, which are crushed into a potion that's highly valuable in traditional Chinese medicine. "Slightly sweet," reports its packaging. With "a slightly fishy smell."
Another application that may be particularly vulnerable to adversarial attack is fingerprint reading. A team from New York University Tandon and Michigan State University showed that it could use adversarial attacks to design what it called a masterprint—a single fingerprint that could pass for 77 percent of the prints in a low-security fingerprint reader.[14] The team was also able to fool higher-security readers, or commercial fingerprint readers trained on different datasets, a significant portion of the time. The masterprints even looked like regular fingerprints—unlike other spoofed images that contain static or other distortions—which made the spoofing harder to spot.
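The paper's masterprint technique isn't reproduced here, but the general idea behind adversarial attacks is simple enough to sketch: nudge an input in the direction that most increases the model's score for the attacker's target. Below is a toy fast-gradient-sign-style step on a made-up linear scorer, purely for illustration:

```python
import numpy as np

# Toy linear "match" scorer with invented weights; a stand-in for a
# fingerprint matcher. Higher score = "accept".
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # model weights
x = rng.normal(size=64)   # some input (stand-in for a print image)

def match_score(x):
    return w @ x

# Fast-gradient-sign-style perturbation: for a linear model, the gradient
# of the score with respect to the input is just w, so moving each pixel
# a little in the direction of sign(w) reliably pushes the score up.
eps = 0.25
x_adv = x + eps * np.sign(w)

print(f"original score:    {match_score(x):+.2f}")
print(f"adversarial score: {match_score(x_adv):+.2f}")  # strictly higher
```

Real attacks on deep networks use the same recipe with the gradient computed by backpropagation; the small per-pixel budget eps is why adversarial inputs can still look like ordinary fingerprints.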
The skills it learns in the dream world are transferable to the real computer game as well, so it gets better at the real thing by training in its internal model. Not all the AI's dream-tested strategies worked in the real world, however. One of the things it learned was how to hack its own dream—just like all those AIs in chapter 6 that hacked their simulations. By moving in a certain way, the AI discovered that it could exploit a glitch in its internal model that would prevent the monsters from firing any fireballs at all. This strategy, of course, failed in the real world. Human dreamers can sometimes be similarly disappointed when they wake and discover they can no longer fly.
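A minimal sketch of why a learned "dream" can have exploitable glitches; the states, actions, and toy dynamics below are entirely invented, not the system from the book:

```python
from collections import Counter, defaultdict

# The "real" game: (state, action) -> next state.
real_step = {
    ("safe", "stay"): "safe",
    ("safe", "advance"): "fireball",
    ("fireball", "dodge"): "safe",
}

# Learn a dream model from a few observed transitions. Note that
# "advance" from "safe" was never observed, so the model knows
# nothing about fireballs triggered that way.
model = defaultdict(Counter)
observed = [(("safe", "stay"), "safe"), (("fireball", "dodge"), "safe")]
for (s, a), s2 in observed:
    model[(s, a)][s2] += 1

def dream_step(s, a):
    seen = model[(s, a)]
    # Unmodeled transitions default to "nothing happens" -- a dream glitch.
    return seen.most_common(1)[0][0] if seen else s

print(dream_step("safe", "advance"))   # dream: "safe" -- no fireball!
print(real_step[("safe", "advance")])  # reality: "fireball"
```

A policy trained inside the learned model is free to discover that "advance" is consequence-free there, which is exactly the kind of strategy that evaporates on contact with the real game.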
And watch out for those hidden giraffes.