A very interesting article by @WSJ
The Ultimate Learning Machines
Why are quintessentially geeky places like DARPA and Google suddenly interested in talking about something as profoundly ungeeky as babies? It turns out that understanding babies and young children may be one key to ensuring that the current “AI spring” continues…
…
With a machine-learning system like Google DeepMind's AlphaZero, you can train a computer from scratch to play a videogame or even chess or Go. …
The problem is that these new algorithms are beginning to bump up against significant limitations. They need enormous amounts of data, only some kinds of data will do, and they’re not very good at generalizing from that data. Babies seem to learn much more general and powerful kinds of knowledge than AIs do, from much less and much messier data. In fact, human babies are the best learners in the universe. How do they do it? And could we get an AI to do the same?
First, there’s the issue of data. AIs need enormous amounts of it…
Children, on the other hand, can learn new categories from just a small number of examples. A few storybook pictures can teach them not only about cats and dogs but jaguars and rhinos and unicorns.
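To make the contrast concrete (an illustration, not from the article): one way a machine can learn a category from just a few examples is a nearest-centroid or "prototype" classifier, which averages a handful of example feature vectors per category and assigns a new example to the closest average. The feature vectors and categories below are invented for the sketch.

```python
import numpy as np

def prototype_classifier(support, query):
    """Few-shot "nearest centroid" classification: each category is
    represented by the mean (prototype) of a few example feature
    vectors, and a new example goes to the closest prototype.
    Illustrative sketch only; the vectors below are made up."""
    prototypes = {label: np.mean(examples, axis=0)
                  for label, examples in support.items()}
    return min(prototypes,
               key=lambda label: np.linalg.norm(query - prototypes[label]))

# Three "storybook pictures" per animal, as hypothetical feature vectors.
support = {
    "cat":   [np.array([0.9, 0.1]), np.array([0.8, 0.2]), np.array([0.85, 0.15])],
    "rhino": [np.array([0.1, 0.9]), np.array([0.2, 0.8]), np.array([0.15, 0.85])],
}
print(prototype_classifier(support, np.array([0.88, 0.12])))  # -> "cat"
```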
The kind of data that children learn from is also very different from the data AI needs. The pictures that feed the AI algorithms have been curated by people, so they generally provide good examples and clear categories. (Nobody posts that messed-up smartphone shot where the cat ran halfway out of the picture.)
…Researchers like Linda Smith at Indiana University and Michael Frank at Stanford University have outfitted toddlers with super-light head-mounted cameras—a sort of baby GoPro. …the cameras show a chaotic series of badly filmed videos of a few familiar things—balls and toys and parents and dogs—moving around at odd angles.
AIs also need what computer scientists call “supervision.” In order to learn, they must be given a label for each image they “see” or a score for each move in a game. Baby data, by contrast, is largely unsupervised. …
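As a sketch of what "supervision" means in practice (the data and model here are invented): a supervised learner consumes (input, label) pairs, and every update nudges the model toward the human-provided label. Strip away the labels, and this pipeline has nothing to learn from—which is roughly the situation the baby is in.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))              # 100 "images" as 2-D feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # a human-supplied label for each one

w = np.zeros(2)
for _ in range(200):                       # gradient descent on squared error
    pred = X @ w
    w -= 0.01 * X.T @ (pred - y) / len(y)  # the correction signal comes from y

# A baby's data stream, by contrast, is just X -- no y attached.
```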
Even with a lot of supervised data, AIs can’t make the same kinds of generalizations that human children can. Their knowledge is much narrower and more limited, and they are easily fooled by what are called “adversarial examples.” … A small change in the learning problem means that they have to start all over again.
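The article doesn't spell out how adversarial examples are constructed, but a classic recipe is the fast gradient sign method (FGSM): push every input feature a tiny step in the direction that most increases the model's loss. A toy version, with an invented linear classifier:

```python
import numpy as np

def fgsm(x, grad_wrt_x, eps=0.3):
    """Fast gradient sign method: add a small perturbation in the
    direction that most increases the loss. Illustrative sketch."""
    return x + eps * np.sign(grad_wrt_x)

# Toy linear classifier: predict class 1 when w . x > 0.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.2])   # correctly classified: w @ x = 0.6 > 0
grad = -w                        # loss gradient for the true class points along -w
x_adv = fgsm(x, grad)
print(w @ x, w @ x_adv)          # the tiny nudge flips the sign: 0.6 vs -0.45
```

Real attacks differentiate through a deep network rather than a dot product, but the sign-of-the-gradient step is the same idea.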
One of the secrets of children's learning is that they construct models or theories of the world. Toddlers may not learn how to play chess, but they develop common-sense ideas about physics and psychology. Psychologists like Elizabeth Spelke at Harvard have shown that even 1-year-old babies know a lot about objects: They are surprised if they see a toy car hover in midair or pass through a wall, even if they've never seen the car or the wall before. Babies know something about people, too. Felix Warneken at the University of Michigan has shown that if 1-year-olds see someone accidentally drop a pen on the floor and reach for it, they will pick up the pen and give it to them. But they won't do this if the person intentionally throws the pen to the floor.
…
Another secret of children’s learning is familiar to every parent—they are insatiably curious and active experimenters. Parents call this “getting into everything.” AIs have mostly been stuck inside their mainframes, passively absorbing data. They haven’t had much opportunity to get out there and gather the data themselves, or to select which data will teach them the most.
Recent studies also show just how intelligent this playful everyday experimentation can be. …The babies displayed curiosity, playing more with the toys that did weird things than with those that behaved more predictably. But they also played differently—dropping the gravity-defying car and banging the wall-dissolving one against the table. It’s as if they were trying to figure out just why these objects were so weird.
In my lab at Berkeley, we're collaborating with computer scientists like Deepak Pathak and Pulkit Agrawal, who are trying to make AIs that are similarly curious, active learners. …these AIs get a reward when they do something that leads to a surprising or unexpected result, and this makes them explore weird events, just like the babies. …
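In published curiosity-driven agents, the "surprise" reward is often the prediction error of a learned forward model; the sketch below follows that idea with invented details and is not the lab's actual code.

```python
import numpy as np

class CuriousAgent:
    """Intrinsic-reward sketch: the agent keeps a simple forward model
    of the world and is rewarded in proportion to how badly that model
    predicts what actually happens. Surprising events earn big rewards,
    so the agent keeps probing exactly the "weird" cases—until the
    model catches up and the novelty wears off. Details invented."""

    def __init__(self, obs_dim, lr=0.1):
        self.W = np.zeros((obs_dim, obs_dim))  # linear forward model
        self.lr = lr

    def intrinsic_reward(self, obs, next_obs):
        pred = self.W @ obs
        error = next_obs - pred
        # Update the model so familiar transitions stop being rewarding.
        self.W += self.lr * np.outer(error, obs)
        return float(np.sum(error ** 2))       # reward = surprise

agent = CuriousAgent(obs_dim=3)
obs = np.array([1.0, 0.0, 0.0])
print(agent.intrinsic_reward(obs, np.array([0.0, 1.0, 0.0])))  # novel: 1.0
print(agent.intrinsic_reward(obs, np.array([0.0, 1.0, 0.0])))  # repeated: 0.81
```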
…
AIs can learn from very specific and controlled kinds of human supervision. But human children learn from the people around them in much more sophisticated ways. …
…
Studies in our lab and others show that children decide how to imitate intelligently, based on what they think the other person is trying to do and how the world works. So far, robots can sometimes learn to exactly replicate a particular action, but they can’t imitate in the sophisticated way that children can.
There is another way that social life is a crucial part of babies’ brilliance. Even very young babies already have a moral sense, rooted in their relationships with the people who care for them. Toddlers are already altruistic and empathetic and have basic ideas of fairness and compassion. For babies, learning and love, computation and care, are inextricably connected. Designing a truly intelligent AI, like raising a child, means instilling those ungeeky virtues. This might be a good direction for DARPA and Google too.
Is it possible for physical systems to solve all of these problems? In some sense, it must be, because those physical systems already exist: They’re called babies.
But we are still very far from approaching that level of intelligence in machines. That’s OK, because we don’t really want AIs to replicate human intelligence; what we want is an AI that can help make us even smarter. To create more helpful machines, like curious AIs or imitative robots, the best way forward is to take our cues from babies.
Dr. Gopnik, a columnist for Review, is a professor of psychology at the University of California, Berkeley.