In pursuit of artificial intelligence with a human mind
Getting to the heart of human-like action with robots and computer models
“I was determined to do it precisely because I was told it was impossible.” So says Yasuo Kuniyoshi, professor at the University of Tokyo’s Graduate School of Information Science and Technology, in a quiet tone. However, the sharp glint in his eye betrays his grand ambition of developing a truly clever artificial intelligence to benefit humankind.
AI doesn’t “think” the same way humans do
Some current forms of artificial intelligence (AI), such as speech recognition and automated driving, are as competent as humans, if not more so, at carrying out their given tasks (figure 1). However, just as AI developed for speech recognition cannot play chess, and chess-playing AI cannot drive a car, existing forms of AI are incapable of any actions beyond those intended by their creators. Because AI does not “think” the way humans do, it cannot adapt to conditions outside the context it was programmed for in advance. For AI to be truly intelligent and highly adaptable, it must be able to think in the same way as humans. To achieve this goal, as Kuniyoshi explains, “It is necessary to understand the nature of human intelligence and the basic principles that generate human behavior.”
Human-like action is hatched from physical traits rather than sophisticated control
To investigate the principles that produce human-like action, in the 2000s Kuniyoshi and his colleagues built a robot with a musculoskeletal system resembling an animal’s that could leap onto a chair from the floor, as well as a humanoid robot that could stand up from a reclining position by swinging its legs to build momentum. The key point is that the robots performed these actions from start to finish without precise control of every step. In analyzing how humans rise to their feet, Kuniyoshi’s team realized that although the motion varies considerably, at a given moment there is a very narrow gateway that the motion’s trajectory must cross to reach the upright position. As long as the subject acquires the knack for hitting this gateway, it can get up even if the trajectory deviates before and after it. A robot, likewise, does not need every move tightly controlled so long as it crosses the gateway at the right moment. Based on the results of these studies, Kuniyoshi and his colleagues concluded that “human behavior arises more as a result of the constraints and interactions governed by a person’s physical traits and their environment than being something regulated by the central nervous system” (movies 2 and 3).
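The gateway idea can be illustrated with a deliberately simple thought experiment (this is not Kuniyoshi’s actual robot controller; the 1-D model, the function name, and all parameters here are invented for illustration). A noisy, otherwise uncontrolled motion from lying (0.0) to upright (1.0) succeeds far more often when a single correction squeezes the trajectory through a narrow midpoint window:

```python
import random

def standup_success_rate(use_gateway_correction, trials=2000, seed=0):
    """Toy 1-D stand-up: noisy drift from lying (x=0.0) toward upright (x=1.0).

    No step-by-step control is applied; if enabled, one correction at the
    midpoint steers the state into a narrow 'gateway' window.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        x = 0.0
        for step in range(20):
            x += 0.05 + rng.gauss(0, 0.03)  # uncontrolled, noisy progress
            if use_gateway_correction and step == 10:
                # the one "knack": squeeze the trajectory through the
                # narrow gateway around the nominal midpoint (0.55)
                x = max(0.53, min(x, 0.57))
        successes += abs(x - 1.0) < 0.1     # success = ending near upright
    return successes / trials
```

Running both variants shows that the single gateway correction raises the success rate noticeably even though 19 of the 20 steps remain completely uncontrolled, which is the intuition behind not having to regulate every moment of the motion.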
Musculoskeletal system and the environment spawn spontaneous human-like actions
Building upon his success in getting a humanoid robot to perform a targeted action, Kuniyoshi has now turned his attention to how fetuses develop intelligence, in order to explore the fundamental principles of human intelligence. For his studies, Kuniyoshi has modeled a virtual fetus, consisting of approximately 400 muscles and a skeleton, gestating in a womb-like environment filled with amniotic fluid, and runs it in computer simulations. This fetus possesses no “innate behaviors,” that is, no pre-existing program that generates specific movements. However, when random neural signals from the spinal cord made individual muscles twitch, the resulting movements propagated to the body’s other muscles through the skeleton and through pressure feedback from the amniotic fluid and uterine wall. The muscles and their corresponding spinal circuits then began coordinating their activity, and actions resembling those of an actual fetus in the womb emerged (movie 4).
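The mechanism described above, coordination emerging from random drive plus shared bodily feedback, can be caricatured in a few lines. To be clear, this is a drastically simplified two-“muscle” toy, not Kuniyoshi’s 400-muscle model; the coupling rule, feedback gain, and all names are assumptions made purely for illustration. Two circuits receive only random spinal signals, interact indirectly through a shared “body” term, and a Hebbian rule strengthens their spinal coupling whenever they are active together:

```python
import random

def fetal_coordination(steps=5000, lr=0.001, seed=1):
    """Two 'muscle' circuits driven only by random spinal signals.

    They interact indirectly through shared body feedback (skeleton /
    amniotic pressure, lumped into one term); a Hebbian rule strengthens
    the spinal coupling w whenever the two are coactive.
    Returns (final coupling, early mean coactivation, late mean coactivation).
    """
    rng = random.Random(seed)
    a = b = w = 0.0
    early, late = [], []
    for t in range(steps):
        body = 0.3 * (a + b)                    # shared mechanical feedback
        a, b = (rng.gauss(0, 1) + body + w * b,
                rng.gauss(0, 1) + body + w * a)
        w = min(0.3, max(0.0, w + lr * a * b))  # Hebbian strengthening, bounded
        (early if t < steps // 10 else late).append(a * b)
    return w, sum(early) / len(early), sum(late) / len(late)
```

Starting from zero coupling and no built-in behavior, the shared body feedback alone makes coactivation slightly more likely, the Hebbian rule amplifies it, and the circuits end up moving together, a loose analogue of how coordinated, fetus-like movement can emerge without innate motor programs.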
Recently, Kuniyoshi used a computer model of a fetus in the 32nd week of gestation, equipped with cerebral neural circuits and placed inside a model womb, to observe how the cerebrum receives sensory information and how its neural circuits learn about the body through touch and somatic sensation. He then compared a fetus raised inside the womb with one raised outside it, and found that learning inside the womb led to greater development of the neural circuits. Kuniyoshi believes that human cognition and action, including sociality and awareness of the outside world, gradually emerge from this foundation of cognition of one’s own body.
No pain, no gain
Kuniyoshi describes the ultimate aim of his research as “a robot that has developed the real ability to understand correctly what people are saying, and that can converse and interact with them naturally, just as humans do with each other, based on its own experiences and bodily sensations.” The road Kuniyoshi has traveled in his research has not been a smooth one. As he says, “Ever since I was a student, only a few others seemed to share my research interests. It was very disheartening.” However, he continues, “When I decided to pursue AI as an undergraduate student, I read every book and paper in the library on this subject and on cognitive science. As a graduate student, I read everything I could get my hands on, from academic papers on vision psychology to books on philosophy, in order to create a robot that could recognize and imitate human actions. People would express doubt when I gave presentations at academic conferences, saying things like, ‘That’s a great idea, but it’ll never work,’ but I would grit my teeth and keep plugging away. It was really tough going, but in time I realized that pursuing research that everyone says is impossible is all the more worthwhile precisely because no one else has taken on the challenge.” As he speaks, Kuniyoshi’s penetrating gaze reveals the steely resolve with which he has pursued his vision for the past 30 years, ever since he first set his sights, as a student, on developing a truly intelligent, human-like robot.
Interview/text: Takehiro Mizutani