It prances skilfully over hill and dale: researchers have used machine learning to teach a four-legged robot to walk nimbly and quickly across unknown terrain. “ANYmal” proved this on a demanding hike through the Alps. Beforehand, the robot learned in a virtual world to combine its visual perception of the environment with the sense of touch in its legs. Such sure-footed robotic quadrupeds could in future be used to explore dangerous and complex environments where conventional robots fail, the developers say.
Stones, roots, steep passages on slippery ground and many other challenges have to be mastered: a demanding hike shows what our perceptual system accomplishes so that we do not stumble. The decision as to where and how we place each foot rests on a complex evaluation of various sensory impressions and past experience. Humans and animals automatically combine visual impressions of the surroundings with perceptions of the ground gained through the sensitivity of their limbs. We can recognize slippery or yielding ground, for example, and adapt our stepping accordingly, drawing on the movement experience we have accumulated over the course of our lives.
A challenge for robotics
Transferring these complex capabilities to autonomous robotic systems is a major challenge for engineers, and previous approaches have left much to be desired. “The reason for this is that the information recorded by laser sensors and cameras about the immediate environment is often incomplete and ambiguous,” explains Takahiro Miki of the Swiss Federal Institute of Technology in Zurich (ETH). The robot’s view can be impaired by difficult lighting conditions, dust or fog, and certain features of the environment can easily be misinterpreted.
Tall grass, shallow puddles or loose snow, for example, can appear to be insurmountable obstacles even though the robot could easily cross them. The opposite is of course also possible, and the machine can end up lying helplessly on its back or falling. “Robots must therefore be able to decide for themselves when to trust their images of the environment and move quickly, and when to proceed cautiously and with small steps,” says Miki. “Therein lies the great challenge.”
As Miki and his colleagues report, they have now succeeded in giving their four-legged robot ANYmal the ability to combine visual perception of the environment with the sense of touch in its legs. It can also learn to react spontaneously and sensibly to unexpected challenges. This is made possible by a new control system based on a neural network, which combines the information from ANYmal’s camera system with that from the sensors in its legs. To interpret these impressions meaningfully, the robotic quadruped must first be trained.
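To make the basic idea more concrete for technically inclined readers, the following is a minimal sketch in Python (using PyTorch) of a controller that fuses a visual terrain snapshot with proprioceptive leg readings into a single set of joint commands. All names, input sizes and the layer layout are illustrative assumptions; they do not reproduce the architecture actually used for ANYmal.

```python
# Minimal sketch (PyTorch) of a controller that fuses visual terrain data
# with proprioceptive leg sensing. Dimensions, names and structure are
# illustrative assumptions, not the architecture used for ANYmal.
import torch
import torch.nn as nn

class FusedLocomotionPolicy(nn.Module):
    def __init__(self, vision_dim=208, proprio_dim=48, action_dim=12):
        super().__init__()
        # Encoder for exteroceptive input, e.g. a height map sampled around the feet
        self.vision_encoder = nn.Sequential(
            nn.Linear(vision_dim, 128), nn.ELU(), nn.Linear(128, 64)
        )
        # Encoder for proprioceptive input: joint angles, velocities, contact signals
        self.proprio_encoder = nn.Sequential(
            nn.Linear(proprio_dim, 64), nn.ELU(), nn.Linear(64, 64)
        )
        # Fusion head maps the combined features to joint position targets
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 128), nn.ELU(), nn.Linear(128, action_dim)
        )

    def forward(self, vision, proprio):
        features = torch.cat(
            [self.vision_encoder(vision), self.proprio_encoder(proprio)], dim=-1
        )
        return self.head(features)

# Example call with random tensors standing in for real sensor readings
policy = FusedLocomotionPolicy()
action = policy(torch.randn(1, 208), torch.randn(1, 48))
print(action.shape)  # torch.Size([1, 12]) -> one command per leg joint
```

The point of the sketch is only the structure: the two sensor streams are encoded separately and then combined before a single action is produced, so neither “eyes” nor “touch” alone determines the next step.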
Training in a virtual world
It does this through machine learning in a simulated world that accurately reproduces the physical characteristics and responses of reality. In this virtual training camp, the robot is confronted with numerous obstacles and sources of error, and its artificial mind learns how best to master the challenges. An important aspect is that the system learns which data matter more than others in a given situation. According to the researchers, the system works even when sensor data about the immediate surroundings are ambiguous or diffuse: ANYmal then plays it safe and relies on its sense of touch. “Thanks to the training, the robot is ultimately able to master the most difficult terrain in nature without having seen it beforehand,” says senior author Marco Hutter of ETH.
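The role of the “sources of error” in the virtual training camp can be illustrated roughly as follows: if the simulated vision input is randomly blanked out or perturbed during training, a policy trained under these conditions is forced to fall back on its undisturbed sense of touch whenever the visual data become unreliable. The corruption scheme below is a hypothetical, self-contained example of that idea, not the procedure described in the paper.

```python
# Schematic illustration of training with unreliable vision: random patches
# of the simulated height map are dropped or perturbed, mimicking fog,
# occlusion or tall grass. The scheme and all parameters are assumptions.
import torch

def corrupt_vision(height_map: torch.Tensor,
                   p_dropout: float = 0.2,
                   noise_std: float = 0.1) -> torch.Tensor:
    """Simulate unreliable exteroception: zero out random entries of the
    sampled height map and add measurement noise to the rest."""
    keep = (torch.rand_like(height_map) > p_dropout).float()
    return height_map * keep + noise_std * torch.randn_like(height_map)

# During simulated training, every observation would pass through this
# corruption before reaching the controller, for example:
clean = torch.zeros(1, 208)        # flat ground, standing in for a real terrain scan
noisy = corrupt_vision(clean)      # what the controller actually "sees"
print(noisy.abs().mean())          # non-zero: the visual input is no longer exact
```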
The scientists have already documented this in a series of test runs: ANYmal padded confidently and briskly through narrow tunnel and cave systems and overcame numerous obstacles and stretches of difficult terrain. The most impressive test, however, took place in the Swiss Alps: the robot showed its skills on a hike to the summit of Mount Etzel at the southern end of Lake Zurich. The route to the top featured steep passages, steps and forest paths with scree, roots and occasionally slippery ground. ANYmal managed the 120-meter climb in 31 minutes without falling or taking a wrong step. According to the scientists, it was four minutes faster than an average hiker.
The team is convinced that the clever robotic system has considerable application potential: ANYmal or its “descendants” could, for example, explore areas after earthquakes or nuclear disasters, or during a forest fire, the developers hope.
Source: Swiss Federal Institute of Technology in Zurich (ETH Zurich); journal article: Science Robotics, doi: 10.1126/scirobotics.abk2822
(Credit: Nicole Davidson / ETH Zurich)