And now for the latest installment of “teaching robots to pick stuff up”—a long-running, perhaps even never-ending, series.
At MIT, a breakthrough in the quest to teach a robotic arm how to identify a mug, pick it up, and place it on a nearby mug rack.
Researchers worked out that by reducing the number of data points the machine requires for its decision making, they could greatly cut the time it takes to “teach” a robot to pick up as wide an array of mugs* as possible.
Unlike previous techniques, which require hundreds or even thousands of examples for a robot to learn to pick up a mug it has never seen before, this approach requires only a few dozen. The researchers trained the neural network on just 60 scenes of mugs and 60 scenes of shoes to reach a similar level of performance. When the system initially failed to pick up high heels because there were none in the training data, they quickly fixed the problem by adding a few labeled scenes of high heels.
(*How many designs of mug can there be in the world?)