By Joshua Rothman
In your brain, neurons are arranged in networks big and small. With every action, with every thought, the networks change: neurons are included or excluded, and the connections between them strengthen or fade. This process goes on all the time—it’s happening now, as you read these words—and its scale is beyond imagining. You have some eighty billion neurons sharing a hundred trillion connections or more. Your skull contains a galaxy’s worth of constellations, always shifting.
Geoffrey Hinton, the computer scientist who is often called “the godfather of A.I.,” handed me a walking stick. “You’ll need one of these,” he said. Then he headed off along a path through the woods to the shore. It wound across a shaded clearing, past a pair of sheds, and then descended by stone steps to a small dock. “It’s slippery here,” Hinton warned, as we started down.
New knowledge incorporates itself into your existing networks in the form of subtle adjustments. Sometimes they’re temporary: if you meet a stranger at a party, his name might impress itself only briefly upon the networks in your memory. But they can also last a lifetime, if, say, that stranger becomes your spouse. Because new knowledge merges with old, what you know shapes what you learn. If someone at the party tells you about his trip to Amsterdam, the next day, at a museum, your networks may nudge you a little closer to the Vermeer. In this way, small changes create the possibility for profound transformations.
“We had a bonfire here,” Hinton said. We were on a ledge of rock jutting out into Ontario’s Georgian Bay, which stretches to the west into Lake Huron. Islands dotted the water; Hinton had bought this one in 2013, when he was sixty-five, after selling a three-person startup to Google for forty-four million dollars. Before that, he’d spent three decades as a computer-science professor at the University of Toronto—a leading figure in an unglamorous subfield known as neural networks, which was inspired by the way neurons are connected in the brain. Because artificial neural networks were only moderately successful at the tasks they undertook—image categorization, speech recognition, and so on—most researchers considered them to be at best mildly interesting, or at worst a waste of time. “Our neural nets just couldn’t do anything better than a child could,” Hinton recalled. In the nineteen-eighties, when he saw “The Terminator,” it didn’t bother him that Skynet, the movie’s world-destroying A.I., was a neural net; he was pleased to see the technology portrayed as promising.
From the small depression where the fire had been, cracks in the stone, created by the heat, radiated outward. Hinton, who is tall, slim, and English, poked the spot with his stick. A scientist through and through, he is always remarking on what is happening in the physical world: the lives of animals, the flow of currents in the bay, the geology of the island. “I put a mesh of rebar under the wood, so the air could get in, and it got hot enough that the metal actually went all soft,” he said, in a wondering tone. “That’s a real fire—something to be proud of!”
For decades, Hinton tinkered, building bigger neural nets structured in ingenious ways. He imagined new methods for training them and helping them improve. He recruited graduate students, convincing them that neural nets weren’t a lost cause. He thought of himself as participating in a project that might come to fruition a century in the future, after he died. Meanwhile, he found himself widowed and raising two young children alone. During one particularly difficult period, when the demands of family life and research overwhelmed him, he thought that he’d contributed all he could. “I was dead in the water at forty-six,” he said. He didn’t anticipate the speed with which, about a decade ago, neural-net technology would suddenly improve. Computers got faster, and neural nets, drawing on data available on the Internet, started transcribing speech, playing games, translating languages, even driving cars. Around the time Hinton’s company was acquired, an A.I. boom began, leading to the creation of systems like OpenAI’s ChatGPT and Google’s Bard, which many believe are starting to change the world in unpredictable ways.
Hinton set off along the shore, and I followed, the fractured rock shifting beneath me. “Now watch this,” he said. He stood before a lumpy, person-size boulder, which blocked our way. “Here’s how you get across. You throw your stick”—he tossed his to the other side of the boulder—“and then there are footholds here and here, and a handhold here.” I watched as he scrambled over with easy familiarity, and then, more tentatively, I took the same steps myself.
Whenever we learn, our networks of neurons change—but how, exactly? Researchers like Hinton, working with computers, sought to discover “learning algorithms” for neural nets, procedures through which the statistical “weights” of the connections among artificial neurons could change to assimilate new knowledge. In 1949, a psychologist named Donald Hebb proposed a simple rule for how people learn, often summarized as “Neurons that fire together wire together.” When a group of neurons in your brain activates in synchrony, it’s more likely to do so again; this helps explain why doing something is easier the second time. But it quickly became apparent that computerized neural networks needed another approach in order to solve complicated problems. As a young researcher, in the nineteen-seventies and eighties, Hinton drew networks of neurons in notebooks and imagined new knowledge arriving at their borders. How would a network of a few hundred artificial neurons store a concept? How would it revise that concept if it turned out to be flawed?
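Hebb’s rule is simple enough to state in a few lines of code. The sketch below, in Python, is a minimal illustration rather than anything Hinton built; the network size, learning rate, and random firing patterns are arbitrary assumptions chosen only to make the idea concrete.

```python
import numpy as np

# A minimal sketch of Hebb's rule: "neurons that fire together wire together."
# When two connected neurons are active in the same pattern, the weight of the
# connection between them is strengthened. All parameters here are illustrative.

rng = np.random.default_rng(0)

n_neurons = 8
weights = np.zeros((n_neurons, n_neurons))  # connection strengths, all starting at zero
learning_rate = 0.1

for step in range(100):
    # A random pattern of firing: each neuron is either active (1.0) or silent (0.0).
    activity = (rng.random(n_neurons) > 0.5).astype(float)

    # Hebbian update: the weight between neurons i and j grows in proportion to
    # the product of their activities, so it changes only when both fire together.
    weights += learning_rate * np.outer(activity, activity)

np.fill_diagonal(weights, 0.0)  # ignore self-connections
print(weights.round(2))
```

The rule’s limitation is visible in the sketch: each update is purely local, with no notion of error, so it can strengthen associations but cannot tell a network how to correct a concept that turns out to be flawed—the very problem Hinton was puzzling over in his notebooks.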
We made our way around the shore to Hinton’s cottage, the only one on the island. Glass-enclosed, it stood on stilts atop a staircase of broad, dark rocks. “One time, we came out here and a huge water snake stuck his head up,” Hinton said, as we neared the house. It was a fond memory. His father, a celebrated entomologist who’d named a little-known stage of metamorphosis, had instilled in him an affection for cold-blooded creatures. When he was a child, he and his dad kept a pit full of vipers, turtles, frogs, toads, and lizards in the garage. Today, when Hinton is on the island—he is often there in the warmer months—he sometimes finds snakes and brings them into the house, so that he can watch them in a terrarium. He is a good observer of nonhuman minds, having spent a lifetime thinking about thinking from the bottom up.
Earlier this year, Hinton left Google, where he’d worked since the acquisition. He was worried about the potential of A.I. to do harm, and began giving interviews in which he talked about the “existential threat” that the technology might pose to the human species. The more he used ChatGPT, an A.I. system trained on a vast corpus of human writing, the more uneasy he got. One day, someone from Fox News wrote to him asking for an interview about artificial intelligence. Hinton enjoys sending snarky single-sentence replies to e-mails—after receiving a lengthy note from a Canadian intelligence agency, he responded, “Snowden is my hero”—and he began experimenting with a few one-liners. Eventually, he wrote, “Fox News is an oxy moron.” Then, on a lark, he asked ChatGPT if it could explain his joke. The system told him his sentence implied that Fox News was fake news, and, when he called attention to the space before “moron,” it explained that Fox News was addictive, like the drug OxyContin. Hinton was astonished. This level of understanding seemed to represent a new era in A.I.