The new research could help robots feel layers of fabric rather than relying on computer vision tools to just see them.
The work could allow robots to help people with household tasks such as folding clothes.
Humans use their senses of vision and touch to grab a glass or pick up a piece of cloth. It’s so routine that few think about it. For robots, these tasks are very tough.
The sense of touch is hard to quantify, and it has been difficult to convey to robots until recently, says David Held, an associate professor in the School of Computer Science and head of the Robots Perceiving and Doing (R-PAD) Lab at Carnegie Mellon University.
“A lot of the sense of touch that humans do is natural to us. We don’t think much about it, so we don’t realize how important it is,” Held says.
For example, to fold clothes, a robot needs a sensor that mimics the way human fingers can feel the top layer of a towel or shirt and grasp the layers beneath it. The researchers could teach a robot to feel and grab the top layer of fabric, but without sensing the other layers, the robot would only ever grab the top layer and never succeed in folding the fabric.
“How can we solve this problem?” Held asks. “Well, maybe what we need is a tactile sensor.”
ReSkin, developed by researchers at Carnegie Mellon and Meta AI, was the perfect solution. The open-source touch-sensing “skin” is a thin, flexible polymer embedded with magnetic particles that measures triaxial contact signals. In a recent paper, the researchers used ReSkin to help the robot feel layers of fabric rather than relying on its vision sensors to see them.
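The sensing principle can be illustrated with a toy sketch: contact deforms the skin, shifting the embedded magnetic particles and changing the measured field along three axes. The function and values below are hypothetical stand-ins, not ReSkin’s actual interface.

```python
# Toy illustration of magnetic tactile sensing (not ReSkin's real API):
# a magnetometer beneath the elastomer reports a 3-axis field reading, and
# pressing on the skin shifts that field. Subtracting a no-contact baseline
# yields a triaxial contact signal.

def contact_signal(reading, baseline):
    """Field change (x, y, z) caused by contact, relative to rest."""
    return tuple(r - b for r, b in zip(reading, baseline))

baseline = (120.0, -45.0, 300.0)  # made-up field with nothing touching the skin
pressed = (118.2, -44.1, 305.5)   # made-up field while pressing on the skin
delta = contact_signal(pressed, baseline)  # triaxial touch cue
```

A learned model would then map such field changes to contact quantities, such as, in this work, the number of grasped fabric layers.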
“We can use this tactile sensing to determine how many layers of fabric we’ve grasped by pressing on the sensor,” says Thomas Weng, a doctoral student in the R-PAD Lab, who worked on the project with postdoctoral fellow Daniel Seita and graduate student Sashank Tirumala.
Other research has used tactile sensing to grasp rigid objects, but fabric is deformable: it shifts when touched, which makes the task harder. Adjusting the robot’s grasp on the cloth changes both the cloth’s pose and the sensor readings.
The researchers did not teach the robot how or where to grasp the fabric. Instead, the robot estimated how many layers it was holding from ReSkin’s readings and then adjusted its grip to try again. The team evaluated the robot on grasping one and two layers of fabric, and used fabrics with different textures and colors to show that the approach generalizes beyond the training data.
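That estimate-then-adjust loop can be sketched as follows. Everything here (the thresholds, the layer classifier, and the gripper interface) is a hypothetical stand-in for the paper’s learned components, intended only to show the control flow.

```python
# Hypothetical sketch of closed-loop grasp adjustment: estimate the number of
# grasped fabric layers from a tactile reading, then grip deeper or back off
# until the target count is reached. The thresholds below are toy stand-ins
# for the learned classifier described in the article.

def estimate_layers(tactile_reading):
    """Map a triaxial tactile reading to a layer count via toy thresholds."""
    magnitude = sum(abs(v) for v in tactile_reading)
    if magnitude < 1.0:
        return 0  # barely any contact: nothing grasped
    if magnitude < 2.5:
        return 1  # light contact: one layer
    return 2      # strong contact: two or more layers

def adjust_grasp(target_layers, sense, move, max_attempts=10):
    """Re-grasp until the estimated layer count matches the target."""
    for _ in range(max_attempts):
        layers = estimate_layers(sense())
        if layers == target_layers:
            return True
        move(layers < target_layers)  # deeper if too few, back off if too many
    return False

class SimGripper:
    """Stand-in environment: a deeper grip gives a stronger tactile signal."""
    def __init__(self):
        self.depth = 0
    def sense(self):
        return (0.5 * self.depth, 0.3 * self.depth, 0.2 * self.depth)
    def move(self, deeper):
        self.depth += 1 if deeper else -1

g = SimGripper()
grasped_one_layer = adjust_grasp(1, g.sense, g.move)
```

The point of the loop is that the robot never needs to be told where to grip; it only needs feedback on how many layers it is currently holding.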
The thinness and flexibility of the ReSkin sensor made it possible to teach robots to work with something as delicate as layers of fabric.
“The profile of this sensor is so small that we were able to do this very delicate task of inserting it between layers of fabric, something we can’t do with other sensors, particularly optical sensors,” Weng says. “We’ve been able to use it to do tasks that were previously unachievable.”
There is more research to be done before a robot can take over the laundry basket. It starts with steps like smoothing out a wrinkled piece of fabric, grasping the right number of layers to fold, and then folding the fabric in the right direction.
“It’s really an exploration of what we can do with this new sensor,” Weng says. “We’re exploring how to get robots to feel soft objects with this magnetic skin, and we’re exploring simple strategies for manipulating fabric that we’ll need so robots can eventually do our laundry.”
The team presented its paper at the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Kyoto, Japan.
Source: Stacey Federoff for Carnegie Mellon University