MIT researchers have developed a system that enables a robot to learn a new pick-and-place task based on only a handful of human examples. This could allow a human to reprogram a robot to grasp never-before-seen objects, presented in random poses, in about 15 minutes.
Researchers have developed a technique that enables a robot to learn a new pick-and-place task with only a handful of human demonstrations.
With e-commerce orders pouring in, a warehouse robot picks mugs off a shelf and places them into boxes for shipping. Everything is humming along until the warehouse switches products and the robot must now grasp taller, narrower mugs that are stored upside down.
Reprogramming that robot involves hand-labeling thousands of images that show it how to grasp these new mugs, then training the system all over again.
But a new technique developed by MIT researchers would require only a handful of human demonstrations to reprogram the robot. This machine-learning method enables a robot to pick up and place never-before-seen objects that are in random poses it has never encountered. Within 10 to 15 minutes, the robot would be ready to perform a new pick-and-place task.
The technique uses a neural network specifically designed to reconstruct the shapes of 3D objects. With just a few demonstrations, the system uses what the neural network has learned about 3D geometry to grasp new objects that are similar to those in the demos.
In simulations and using a real robotic arm, the researchers show that their system can effectively manipulate never-before-seen mugs, bowls, and bottles, arranged in random poses, using only 10 demonstrations to teach the robot.
“Our major contribution is the general ability to much more efficiently provide new skills to robots that need to operate in more unstructured environments where there could be a lot of variability. The concept of generalization by construction is a fascinating capability because this problem is typically so much harder,” says Anthony Simeonov, a graduate student in electrical engineering and computer science (EECS) and co-lead author of the paper.
Simeonov wrote the paper with co-lead author Yilun Du, an EECS graduate student; Andrea Tagliasacchi, a staff research scientist at Google Brain; Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering; and senior authors Pulkit Agrawal, a professor in CSAIL, and Vincent Sitzmann, an incoming assistant professor in EECS. The research will be presented at the International Conference on Robotics and Automation.
A robot may be trained to pick up a specific item, but if that object is lying on its side (perhaps it fell over), the robot sees this as a completely new scenario. This is one reason it is so hard for machine-learning systems to generalize to new object orientations.
To overcome this challenge, the researchers created a new type of neural network model, a Neural Descriptor Field (NDF), that learns the 3D geometry of a class of items. The model computes the geometric representation for a specific item using a 3D point cloud, which is a set of data points or coordinates in three dimensions. The data points can be obtained from a depth camera that provides information on the distance between the object and a viewpoint. While the network was trained in simulation on a large dataset of synthetic 3D shapes, it can be directly applied to objects in the real world.
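A depth camera reports, for each pixel, the distance to the surface it sees; back-projecting those pixels through the camera's pinhole intrinsics yields the kind of point cloud the model consumes. A minimal sketch (the intrinsic values here are illustrative, not from the paper):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, meters) into an N x 3 point
    cloud using pinhole camera intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # standard pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Toy 2x2 depth image: every pixel sees a surface 1 meter away
depth = np.ones((2, 2))
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
```

Each row of `cloud` is one 3D coordinate; a real depth camera would produce hundreds of thousands of such points per frame.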
The team designed the NDF with a property known as equivariance. With this property, if the model is shown an image of an upright mug, and then shown an image of the same mug on its side, it understands that the second mug is the same object, just rotated.
“This equivariance is what allows us to much more effectively handle cases where the object you observe is in some arbitrary orientation,” Simeonov says.
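The actual NDF learns equivariance with a trained neural network; as a toy illustration of the underlying idea, a descriptor built from geometry *relative to the object* does not change when the whole scene is rotated. The `toy_descriptor` below is a hypothetical stand-in for the learned model, not the paper's method:

```python
import numpy as np

def toy_descriptor(cloud, query):
    """Toy rotation-invariant descriptor: sorted distances from a query
    point to every point in the cloud. Relative distances are unchanged
    when cloud and query are rotated together."""
    return np.sort(np.linalg.norm(cloud - query, axis=1))

rng = np.random.default_rng(0)
mug = rng.normal(size=(50, 3))        # stand-in for a mug point cloud
query = np.array([0.1, 0.2, 0.3])     # a point near the mug, e.g. its handle

# Rotate the whole scene 90 degrees about the z-axis (mug tipped over)
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
d_upright = toy_descriptor(mug, query)
d_rotated = toy_descriptor(mug @ R.T, R @ query)
assert np.allclose(d_upright, d_rotated)  # same object, same descriptor
```

Because the descriptor depends only on relative geometry, the upright mug and the tipped-over mug produce identical values, which is the behavior equivariance buys.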
As the NDF learns to reconstruct shapes of similar objects, it also learns to associate related parts of those objects. For instance, it learns that the handles of mugs are similar, even if some mugs are taller or wider than others, or have smaller or longer handles.
“If you wanted to do this with another approach, you’d have to hand-label all the parts. Instead, our approach automatically discovers these parts from the shape reconstruction,” Du says.
The researchers use this trained NDF model to teach a robot a new skill with only a few physical examples. They move the hand of the robot onto the part of an object they want it to grip, like the rim of a bowl or the handle of a mug, and record the locations of the fingertips.
Because the NDF has learned so much about 3D geometry and how to reconstruct shapes, it can infer the structure of a new shape, which enables the system to transfer the demonstrations to new objects in arbitrary poses, Du explains.
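One way to picture this transfer: record the descriptor at the demonstrated grasp point, then search the new object for the point whose descriptor best matches it. Again using a hypothetical distance-based descriptor in place of the learned network (the real system optimizes over full gripper poses, not single points):

```python
import numpy as np

def toy_descriptor(cloud, query):
    """Toy stand-in for a learned descriptor: sorted distances from a
    query point to the object's point cloud (pose-invariant by construction)."""
    return np.sort(np.linalg.norm(cloud - query, axis=1))

rng = np.random.default_rng(1)
demo_obj = rng.normal(size=(80, 3))   # point cloud of the demo object
demo_grasp = demo_obj[7]              # recorded fingertip location on it

# Descriptor recorded at the demonstrated grasp point
demo_desc = toy_descriptor(demo_obj, demo_grasp)

# The "new" object: the same shape, observed in a rotated pose
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
new_obj = demo_obj @ R.T

# Transfer the demo: pick the point on the new object whose descriptor
# best matches the one recorded at the demonstrated grasp
scores = [np.linalg.norm(toy_descriptor(new_obj, p) - demo_desc)
          for p in new_obj]
best = new_obj[int(np.argmin(scores))]
assert np.allclose(best, R @ demo_grasp)  # the grasp follows the rotation
```

The matched point is exactly the demonstrated grasp location carried along with the rotation, which is why a handful of demonstrations can cover arbitrary poses.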
Picking a winner
They tested their model in simulations and on a real robotic arm using mugs, bowls, and bottles as objects. Their method had a success rate of 85 percent on pick-and-place tasks with new objects in new orientations, while the best baseline was only able to achieve a success rate of 45 percent. Success means grasping a new object and placing it on a target location, like hanging mugs on a rack.
Many baselines use 2D image information rather than 3D geometry, which makes it more difficult for these methods to integrate equivariance. This is one reason the NDF technique performed so much better.
While the researchers were happy with its performance, the method only works for the particular object category on which it is trained. A robot taught to pick up mugs won't be able to pick up boxes or headphones, since these objects have geometric features too different from what the network was trained on.
“In the future, scaling it up to many categories or completely letting go of the notion of category altogether would be ideal,” Simeonov says.
They also plan to adapt the system for nonrigid objects and, in the longer term, enable the system to perform pick-and-place tasks when the target area changes.
“How efficiently we can teach robots new manipulation skills depends on the robots’ ability to generalize from just a few demonstrations. This work shows how a robot can robustly transfer demonstrations of picking up or placing an object to previously unseen objects,” says Dieter Fox, a professor of computer science and engineering at the University of Washington, who was not involved with this research. “This research leverages recent advances in deep learning for neural object representations and introduces several very clever innovations that make them well suited to imitation learning for robot manipulation. The real world experiments are extremely impressive and I expect that many researchers will build on top of these results.”
Original Article: An easier way to teach robots new skills