New algorithms help four-legged robots operate in the wild


Credit: University of California – San Diego

A team led by the University of California, San Diego, has developed a new system of algorithms that enables four-legged robots to walk and run over challenging terrain while avoiding both static and moving obstacles.

In tests, the system directed the robot to maneuver autonomously and quickly across sandy surfaces, gravel, grass and rugged dirt hills covered in branches and fallen leaves, without crashing into poles, trees, bushes, rocks, benches or people. The robot was also able to navigate a crowded office space without bumping into boxes, desks or chairs.

This work brings researchers closer to building robots that can perform search-and-rescue missions or gather information in places that are too dangerous or difficult for humans.

The team will present their work at the 2022 International Conference on Intelligent Robots and Systems (IROS), which will be held from October 23-27 in Kyoto, Japan.






A new algorithm system developed by UCSD engineers enables four-legged robots to walk and run over challenging terrain while avoiding both static and moving obstacles. This work brings researchers closer to building robots that can perform search-and-rescue missions or gather information in places that are too dangerous or difficult for humans. Credit: University of California, San Diego Jacobs School of Engineering

The system gives the legged robot more versatility because of the way it combines the robot’s sense of sight with another sensing method called proprioception, which includes the robot’s sense of motion, direction, speed, location and touch — in this case, under its feet.

Currently, most approaches to training legged robots to walk and navigate rely on either proprioception or vision, but not both at the same time, said senior study author Xiaolong Wang, a professor of electrical and computer engineering at the University of California San Diego Jacobs School of Engineering.

“In one case, it’s like training a blind robot to walk by just touching and feeling the ground,” said Wang. “In the other, the robot plans its leg movements based on sight alone. It is not learning two things at the same time. In our work, we combine proprioception with computer vision to enable a legged robot to move efficiently and smoothly, while avoiding obstacles, in a variety of challenging environments, not just well-defined ones.”

The system developed by Wang and his team uses a special set of algorithms to fuse data from real-time images captured by a depth camera on the robot’s head with data from sensors on the robot’s legs. This was not a simple task. “The problem is that during real-world operation, there is sometimes a slight delay in receiving images from the camera,” Wang explained, “so the data from the two different sensing modalities do not always arrive at the same time.”

The team’s solution was to simulate this mismatch during training by randomizing the delays between the two sets of inputs, a technique the researchers call multimodal delay randomization. The combined, randomized inputs were then used to train the reinforcement learning policy end to end. This approach helped the robot make decisions quickly during locomotion and anticipate changes in its environment early on, so that it could move and dodge obstacles faster on different types of terrain without the aid of a human operator.
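The delay-randomization idea described above can be illustrated with a minimal sketch. This is not the team's released implementation; the class and method names here are hypothetical, and the depth "frames" are stand-in strings. The point is simply that proprioceptive readings pass through every step, while the image fed to the policy is sampled from a short buffer of recent frames, so the policy learns to tolerate stale camera data:

```python
import random
from collections import deque

class DelayRandomizedObservation:
    """Toy sketch of multimodal delay randomization (hypothetical names).

    Proprioception is passed through fresh every step, while the depth
    image handed to the policy is drawn at random from a small buffer of
    recent frames, simulating camera latency during training.
    """

    def __init__(self, max_delay_steps=2):
        # Buffer holds the current frame plus up to max_delay_steps stale ones.
        self.frame_buffer = deque(maxlen=max_delay_steps + 1)

    def observe(self, proprioception, latest_depth_frame):
        self.frame_buffer.append(latest_depth_frame)
        # Randomly pick a (possibly stale) frame to mimic real-world camera lag.
        delayed_frame = random.choice(list(self.frame_buffer))
        return {"proprio": proprioception, "depth": delayed_frame}

# A reinforcement learning policy would then be trained end to end on
# these delay-randomized observations.
obs_maker = DelayRandomizedObservation(max_delay_steps=2)
for step in range(5):
    obs = obs_maker.observe(proprioception=[0.0] * 12,
                            latest_depth_frame=f"frame_{step}")
```

In a real training loop the randomized delay would be applied inside the simulator at every environment step, so the policy never learns to rely on perfectly synchronized vision.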

Going forward, Wang and his team are working to make legged robots more versatile so they can handle even more challenging terrain. “Currently, we can train a robot to perform simple motions such as walking, running and avoiding obstacles. Our next goals are to enable a robot to walk up and down stairs, walk on stones, change direction and jump over obstacles,” said Wang.

The team has released their code on GitHub, and the paper is available on the arXiv preprint server.




More information:
Chieko Sarah Imai et al, Vision-Guided Quadrupedal Locomotion in the Wild with Multi-Modal Delay Randomization, arXiv (2022). arXiv:2109.14549 [cs.RO], arxiv.org/abs/2109.14549

Conference: iros2022.org/

GitHub: github.com/Mehooz/vision4leg

Journal information:
arXiv


Provided by University of California – San Diego


Citation: New algorithms help four-legged robots operate in the wild (2022, October 4), retrieved October 4, 2022 from https://techxplore.com/news/2022-10-algorithms-four-legged-robots-wild.html

This document is subject to copyright. Notwithstanding any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.

