Blog: Google’s Demis Hassabis is one relentlessly curious public face of AI – ZDNet
As a salesman or an ambassador for artificial intelligence, one could do worse than Demis Hassabis. The 42-year-old co-founded DeepMind, which Google bought in 2014 for several hundred million dollars, and he comes across as a warm, open, good-humored, and relentlessly curious fellow.
At a talk at the Institute for Advanced Study in Princeton, New Jersey, on Saturday, he addressed a packed auditorium with a lightning summary of where AI has been and where it’s going.
Hassabis peppered his responses to questions with frequent exclamations of “good question,” or “that’s a great question,” and went into depth in several of his responses to somewhat technical queries.
He offered a view about AI ethics, too: Just say “no” to bad projects.
“The best thing AI researchers can do is vote with their feet, not work with companies that have outcomes you don’t agree with,” he said.
“We have committed to not ever working on any military or surveillance applications, no matter what,” he said of DeepMind.
“There aren’t enough researchers to go around, and attracting enough talent is quite important, so actually researchers individually have quite a lot of power,” he observed. “So through soft influence, you can influence a lot.”
Hassabis punctuated his talk about “reinforcement learning,” the variety of AI that DeepMind focuses on, with knowing humor. When DeepMind was founded, in 2010, “we thought that step one would be, if we can fundamentally understand the nature of intelligence, and if we can recreate it artificially, it should be possible to use that to solve everything else,” he said.
“Imagine trying to make that pitch to a venture capitalist in 2010!” he said, to much laughter from the audience.
Hassabis, who was a chess master at age thirteen, and who had to delay his entry to Cambridge by a year because he was initially too young, managed to inject an element of appreciation for what is human into his talk. He recounted the strong impression made on him by Garry Kasparov’s defeat in 1997 at the hands of IBM’s “Deep Blue” chess-playing computer.
“I was a little disappointed by this match,” said Hassabis. “It was amazing from a technical standpoint, but I was more impressed with Garry’s mind than with the computer,” he said.
“Garry was able to compete with this brute, and he could do other things: he could tie his shoes, he could talk politics, he could speak languages.”
Deep Blue, he observed, couldn’t even play a much simpler game like tic-tac-toe without additional explicit programming. Something was missing in this approach, Hassabis concluded. “This stuck in my mind, this issue of the lack of generality — something was missing.”
DeepMind, he explained, has moved away from the rules-based, logic-based approach that gave birth to Deep Blue. In reinforcement learning, which DeepMind exploited to defeat the world’s best players at chess and at the ancient strategy game Go, the computer has achieved a level of generality approaching what he described as “the world’s first general-purpose learning system.”
AlphaGo, the first version, in 2016, was “bootstrapped with mimicking the behavior of humans by studying several hundred thousand games.” But the follow-up version, AlphaGo Zero, dispensed with human knowledge and just learned from self-play. AlphaZero, last year, broadened the approach to handling not just one game, but any two-player game of perfect information, including chess and Go and shogi, which is basically the Japanese version of chess.
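The self-play idea Hassabis describes can be sketched in miniature. The snippet below is a hypothetical illustration, not DeepMind’s code: a tabular agent learns tic-tac-toe purely from games against itself, with no human game records, nudging each visited position’s estimated win probability toward the eventual result. All function names and parameters here are invented for the example.

```python
import random

# Rows, columns, and diagonals of a 3x3 board, indexed 0-8.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}  # board state -> estimated probability that "X" wins

def value(board):
    return values.get("".join(board), 0.5)  # unknown states start at 0.5

def self_play_game(epsilon=0.1, alpha=0.2):
    """Play one game against itself, then back up the result through the
    visited states, temporal-difference style."""
    board, player, history = [" "] * 9, "X", []
    while True:
        moves = [i for i, c in enumerate(board) if c == " "]
        if not moves:
            result = 0.5  # draw
            break
        if random.random() < epsilon:
            move = random.choice(moves)  # occasional exploration
        else:
            scored = []
            for m in moves:  # greedy with respect to the learned values
                board[m] = player
                scored.append((value(board), m))
                board[m] = " "
            move = max(scored)[1] if player == "X" else min(scored)[1]
        board[move] = player
        history.append("".join(board))
        w = winner(board)
        if w:
            result = 1.0 if w == "X" else 0.0
            break
        player = "O" if player == "X" else "X"
    # Pull each visited state's value toward the game's outcome.
    target = result
    for state in reversed(history):
        old = values.get(state, 0.5)
        values[state] = old + alpha * (target - old)
        target = values[state]

random.seed(0)
for _ in range(20000):
    self_play_game()
```

Starting from uniform 0.5 estimates ("randomly making moves," in Hassabis’s phrase), the table gradually encodes which positions favor which player, knowledge acquired entirely from self-play.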
“This is not brute force,” Hassabis emphasized repeatedly. Go has roughly 10^170 possible board positions, he noted, more than there are atoms in the universe. So AlphaGo could not search them all, he said. “It really has to extrapolate knowledge, it’s not something as simple as just memorizing.”
Expert systems such as Stockfish search tens of millions of moves, he noted, while a human grandmaster only looks at hundreds of moves. “Why are they able to be competitive? Because they have a way better valuation function versus the weak valuation of chess programs,” he said, meaning the set of human heuristics that assess how good or bad the result will be from a given state of play by taking a given action.
AlphaZero, he said, is in the middle, searching tens of thousands of moves with a much better valuation function. That valuation function is the most difficult part of reinforcement learning, and Hassabis noted that human heuristics in Go are akin to intuition. He said it not with disdain, but with a touch of reverence for the mystical quality of human ability.
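The trade-off Hassabis describes, deep search with a weak valuation versus shallow search with a strong one, can be illustrated with a toy depth-limited search. This is a sketch under simplifying assumptions, not AlphaZero’s algorithm: a negamax search over tic-tac-toe that falls back on a crude hand-written valuation function whenever its search budget runs out. The helper names are invented for the example.

```python
from functools import lru_cache

# Rows, columns, and diagonals of a 3x3 board, indexed 0-8.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def evaluate(board, player):
    # A crude hand-written "valuation function": count the lines each side
    # could still complete. A stronger valuation lets a shallower search compete.
    opp = "O" if player == "X" else "X"
    mine = sum(all(board[i] in (player, " ") for i in line) for line in LINES)
    theirs = sum(all(board[i] in (opp, " ") for i in line) for line in LINES)
    return mine - theirs

@lru_cache(maxsize=None)
def negamax(board, player, depth):
    """Score `board` (a 9-character string) from `player`'s point of view."""
    opp = "O" if player == "X" else "X"
    if winner(board) == opp:
        return -10  # the opponent's previous move won; dominates the heuristic range
    moves = [i for i, c in enumerate(board) if c == " "]
    if not moves:
        return 0    # draw
    if depth == 0:
        # Search budget exhausted: trust the valuation function instead.
        return evaluate(board, player)
    best = -999
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        best = max(best, -negamax(child, opp, depth - 1))
    return best
```

At full depth the valuation function is never consulted; as the depth budget shrinks, the quality of `evaluate` increasingly determines playing strength. That is the spectrum Hassabis sketches, with Stockfish-style programs at the deep-search end and human grandmasters at the strong-valuation end.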
Chess programs, by contrast, were using a weaker form of human heuristics, getting “stuck” in human notions of the game. Stockfish and other programs tend to pursue what’s called “material,” capturing the other player’s pieces. But AlphaZero won by focusing on what’s called mobility, occupying many more places on the board, a “dynamic attack” style. That’s because AlphaZero didn’t have the in-built heuristics Stockfish did. The lesson: “These in-built rules we spent twenty years developing might be getting in the way” of understanding.
It was “mind-blowing” watching AlphaZero develop, he reflected. “AlphaZero got stronger than Stockfish after just four hours of training,” he noted. “It starts out randomly making moves, and in four hours it’s the strongest-playing entity in the history of the world in chess.
“You could press a button, go away, and after you’re back from tea, it’s done!”
Asked how one would debug AlphaStar, DeepMind’s StarCraft II-playing program, Hassabis replied that it’s a “very interesting question.”
“In cases where it didn’t win, it wasn’t actually a bug, it was a knowledge gap,” he said. “It was a question of how much coverage you have of the entire domain space.”
“We thought about creating a version of AlphaGo that wasn’t just getting stronger, but triggering these kinds of problems,” he said, meaning, corner cases that flummoxed the process of generalizing. “It would get rewarded for triggering these delusions, as we call them, to force AlphaGo to explore these kinds of uncomfortable areas in the domain.”
Hassabis talked about his favorite project at DeepMind, understanding protein folding. Proteins are “exquisite molecular machines,” he said, and figuring out how they bundle up into the shapes they do involves showing the machine examples of “labeled” proteins and having it learn things such as “distributions of angles” and “distance histograms.”
“We can now give the system a new protein it has never seen before, and ask it to predict the 3-D structure.”
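The two-stage picture Hassabis describes, predict pairwise geometry first, then realize a 3-D structure consistent with it, can be illustrated with classical multidimensional scaling. This is a simplified stand-in, not AlphaFold’s method (which works from predicted distance distributions, not exact distances): given an exact pairwise distance matrix, the function below recovers coordinates up to rotation, reflection, and translation.

```python
import numpy as np

def coords_from_distances(D, dim=3):
    """Classical multidimensional scaling: recover `dim`-dimensional point
    coordinates from an exact pairwise Euclidean distance matrix `D`."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered squared distances (Gram matrix)
    w, V = np.linalg.eigh(B)              # eigendecomposition, eigenvalues ascending
    idx = np.argsort(w)[::-1][:dim]       # keep the `dim` largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Demonstration on synthetic "residue" positions.
rng = np.random.default_rng(0)
points = rng.normal(size=(20, 3))
D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
recovered = coords_from_distances(D)
D2 = np.linalg.norm(recovered[:, None, :] - recovered[None, :, :], axis=-1)
```

The recovered coordinates reproduce every pairwise distance. The hard part, which this sketch assumes away, is the learned prediction of those distances from sequence alone, the step Hassabis credits for AlphaFold’s results.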
DeepMind’s AlphaFold became the top performer in protein-structure prediction at an annual “Olympics” of the discipline, the CASP13 competition, he boasted.
“So there was something different in the nature of the approach we used.”
The problem of protein folding is not solved, he said, and DeepMind is still “working hard on this.” But more than any one problem, as an institution, he said, DeepMind remained committed to advancing science broadly speaking.
“The philosophy of what we are doing is something I think about a lot,” he said.
“We are trying to find a meta-solution to solve other problems.”
Hassabis wound down his talk with a slide noting that “many key challenges remain.”
Those challenges include “unsupervised learning,” “memory and one-shot learning,” “imagination-based planning with generative models,” “learning abstract concepts,” “transfer learning,” and “language understanding.” The really big picture could be summed up more neatly: it’s still all about enormous amounts of data.
“One thing I see confronting society today is the overload of information we are generating, and just the scale of the complexity of the problems we are confronting, things like climate change,” he said.
“You know, the buzzword today is AI, but for years it was big data,” he observed. “I think it’s still about data. I think actually big data is the problem,” he continued, “AI is the answer to finding the structure in the data,” he added, because “intelligence is a process that converts unstructured information into useful knowledge.”
Hassabis ended his talk with a slide showing the late Caltech physicist Richard Feynman, saying he was in agreement with Feynman’s motto: “What I cannot create, I do not understand.”
During a reception of punch and cookies, Hassabis was mobbed outside the Institute’s Fuld Hall. He seemed to be enjoying fielding questions, pausing to look into the distance, or down at the ground, to listen to the question and then to reflect.
Martin Rees, the Astronomer Royal and an Institute trustee, an older gentleman with a twinkle in his eye, walked up beside a reporter. Looking at the crowd around Hassabis, he smiled and nodded approvingly, remarking, “He’s attracting the young people.”