Thanks to Sebastien Cagnon, a French student in our lab, who kindly translated this article about our laboratory by Pierre Vandeginste (reporting from Japan), published in Les Echos.
Understanding humans rather than looking like them: the new challenge for Japanese robots
Are human-like robots really useful? To help us, the main challenge is deciphering human reactions. In Japan, home of the android, innovation is now heading in that direction.
Japan’s population is shrinking, and therefore aging. During this century, the nation may reach a point where there are so many elderly people that there will not be enough workers to assist them and accompany them through their old age. That is the official argument for a broad research program on humanoid robots. Robots that look like us, with two arms and two legs? Commentary around the program often gives that impression: since these robots are supposed to help tired or disabled people, they must look like us. In the field, however, most specialists point out that this is a shortcut, even a misinterpretation. The first thing we should expect from a robot is an outstanding understanding of the human body, just as any of us has. It should “capture” the human rather than roughly copy him.
“Anthropomorphism is a central problem… because we are anthropomorphists,” summarizes Yoshihiko Nakamura. A researcher at the University of Tokyo and director of the laboratory that bears his name, he is one of the world’s leading specialists in simulating and interpreting the human body. “We are built to understand the bodies of our kind, to guess their efforts and their sensations,” he adds.
Yoshihiko Nakamura and his team have developed a complete model of the human body, together with algorithms that can “read” anyone’s dynamics (gestures, posture, walking pattern…) from motion-capture data obtained with cameras. Thanks to rapid progress in the field, driven in part by pressing demand from Hollywood, the identification of human movement has quickly moved past tracking markers worn on the body to direct markerless 3D reconstruction through computer vision. One can thus foresee a robot that understands, even “feels,” what a person is doing just by watching him.
The mathematical models engineered by the University of Tokyo researchers offer many possibilities: not only can they recognize movements, categorize them, and even name them (a tennis player’s serve, a boxer’s punch…), they can also determine which muscles are used and how much tension and torque act on each muscle and joint. Conversely, other software makes robots reproduce these same movements, mimicking human motion. Such a robot can therefore learn exactly the way we do: by copying the master’s gesture.
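The underlying idea, recovering the forces that must have produced a recorded motion, is known as inverse dynamics. As a rough illustration only (a one-joint toy model with made-up parameters, not the lab’s full-body software), a single limb segment can be treated as a pendulum about its joint, so the muscle torque is the inertial torque plus the torque needed to hold the segment against gravity:

```python
import math

def joint_torques(angles, dt, inertia, mass, com_dist, g=9.81):
    """Toy inverse dynamics for one pendulum-like limb segment.

    Given a joint-angle trajectory sampled every `dt` seconds (radians),
    estimate the torque at each interior sample:
        tau = I * theta_ddot + m * g * l * sin(theta)
    where l is the distance from the joint to the segment's center of mass.
    """
    torques = []
    for i in range(1, len(angles) - 1):
        # Angular acceleration by central finite differences.
        theta_ddot = (angles[i - 1] - 2 * angles[i] + angles[i + 1]) / dt**2
        # Torque needed just to hold the segment against gravity.
        gravity = mass * g * com_dist * math.sin(angles[i])
        torques.append(inertia * theta_ddot + gravity)
    return torques

# A held pose (constant angle) needs only the gravity-compensation torque.
hold = joint_torques([math.pi / 4] * 5, dt=0.01,
                     inertia=0.15, mass=2.0, com_dist=0.2)
```

The lab’s models chain this kind of computation over dozens of joints and map the joint torques back onto individual muscles; the sketch above only shows the principle for one degree of freedom.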
Yoshihiko Nakamura is also working on the wider field of “body language,” interaction through gestures. For example, he staged a kind of virtual fight between a human and a robot: the robot had to understand each blow from its opponent in order to respond with the appropriate parry. The researcher has now been recruited for an application of the future: preparing the Moon landing of the first Japanese, a robot controlled from Earth by humans who simply show it what to do.
The example of R2D2
“When I see someone carrying a heavy object,” says Gentiane Venture, “I already know roughly how much it weighs, what its density is, where its center of gravity lies. I can guess the effort the action requires.” This young French researcher, formerly of the CEA, joined the Nakamura lab in 2004. Since last year she has headed her own laboratory at the Tokyo University of Agriculture and Technology. Her research on human-robot interaction aims to enable the latter to interpret the behavior of the former so as to help effectively. She even tries to collect clues from the “language” of the human body in order to capture emotions. In this way, a robot could assist a patient or a senior according to his moods: fear, anger, boredom… Must such a robot have a human appearance to help? “Does R2D2, from ‘Star Wars,’ need arms and legs to be indispensable?” replies the researcher.
The “most humanoid” robots one can see are in Tsukuba, at the JRL (Joint Robotics Laboratory), a joint venture between the CNRS (including LAAS in Toulouse and LIRMM in Montpellier) and AIST (the National Institute of Advanced Industrial Science and Technology). Its director, the Frenchman Abderrahmane Kheddar, is assisted by Eiichi Yoshida. From the JRL comes a line of androids ranging from HRP-1 (made by Honda) to HRP-4C. But it is still HRP-2 (manufactured by Kawada) that serves as the platform for most projects.
Human-robot collaboration is the main topic here. “Something as simple as carrying a table with a human is a very delicate task for a robot,” admits Eiichi Yoshida. And a research topic for several more years, one can assume when watching video clips of HRP-2 at work. It does what it can, and it is very clumsy. Its arms and legs are impressive, but its perception and coordination skills are very basic compared with what nature has given us.
Body contact is more difficult still. Helping a senior walk, without rushing him, requires subtle body awareness on the robot’s part, and adaptive responses… And those algorithms will still require years of research.
Indeed, one could say that a robot is primarily software. A lot of software. And sensors, a lot of sensors. And finally motors and metal. It is not by showing off with sleek mechanics, by pretending to be the humans they are not, that androids will make themselves useful. And long before artificial creatures in our image enter our lives, gears and roller clamps will probably prove quite useful, provided they know how to capture our humanity.