And now for the latest installment of “teaching robots to pick stuff up”—a long-running, perhaps even never-ending, series.
At MIT, a breakthrough in the quest to teach a robotic arm how to identify a mug, pick it up, and place it on a nearby mug rack.
Researchers found that by reducing the number of data points the machine needs for its decision-making, they could dramatically cut the time it takes to “teach” a robot to pick up as wide an array of mugs* as possible.
Unlike previous techniques that require hundreds or even thousands of examples for a robot to learn to pick up a mug it has never seen before, this approach requires only a few dozen. The researchers were able to train the neural network on 60 scenes of mugs and 60 scenes of shoes to reach a similar level of performance. When the system initially failed to pick up high heels because there were none in the data set, they quickly fixed the problem by adding a few labeled scenes of high heels to the data.
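The MIT work itself isn’t published as code here, but the core idea — a few dozen labelled examples per category, plus a handful more to patch a blind spot — can be illustrated with a toy nearest-centroid classifier. Everything below (the feature vectors, the numbers, the category names) is invented for illustration, not taken from the actual system:

```python
# Toy illustration: a nearest-centroid "grasp category" classifier.
# Real few-shot systems learn rich visual features; here each scene is
# just a hand-made 2-vector (height_ratio, opening_ratio) for simplicity.

def centroid(scenes):
    """Mean feature vector of a list of labelled scenes."""
    n = len(scenes)
    return tuple(sum(s[i] for s in scenes) / n for i in range(len(scenes[0])))

def train(labelled):
    """labelled: dict mapping category -> list of feature vectors."""
    return {cat: centroid(scenes) for cat, scenes in labelled.items()}

def classify(model, scene):
    """Pick the category whose centroid is closest to the scene."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda cat: dist(model[cat], scene))

# A few labelled scenes per category stand in for the 60-scene training sets.
data = {
    "mug":  [(0.4, 0.9), (0.5, 0.8), (0.45, 0.85)],
    "shoe": [(0.3, 0.2), (0.35, 0.25), (0.32, 0.22)],
}
model = train(data)

# A high heel (tall, narrow) initially lands in the wrong bin...
high_heel = (0.9, 0.15)
before = classify(model, high_heel)

# ...so, as in the article, we patch the gap with a few labelled scenes.
data["high heel"] = [(0.85, 0.1), (0.95, 0.2), (0.9, 0.12)]
model = train(data)
after = classify(model, high_heel)
```

The point of the sketch is the workflow, not the maths: when a category fails, you don’t retrain from scratch on thousands of examples — you add a few labelled scenes and the gap closes.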
Here it goes:
(*How many designs of mug can there be in the world?)
As I’ve often found to be the case, the robotic revolution is more often filling in where there are labour shortages than pushing humans out. See Australia, where, as the ABC reports, there are too many sheep:
Faced with a shearer shortage, [Australia’s wool industry] is spending $10 million on research to streamline wool harvesting. Projects range from better shed design to robotics, including one project that would fully automate the process of getting wool off a sheep and into a bale. Jane Littlejohn, who oversees the research arm of the industry’s research and development body Australian Wool Innovation (AWI), described automation as “forward thinking”.
It’s desperately needed. According to the ABC, there are 73 million sheep in Australia, and only 2,800 shearers.
Here’s some of this research in action:
Shauna is modelled on a real shorn sheep and has been used by Mickey Clemon and his colleagues to test what’s possible with off-the-shelf technology.
“We found quite a lot is possible,” Dr Clemon said of the nine-month scoping study commissioned by AWI.
Now, my mother grew up on an Irish sheep farm. Having spent many happy summers running around with the sheep, I can confidently tell you the machine will need a lot of practice on a rather more… active participant.
It’s all well and good Boston Dynamics making a video of its robots trotting merrily across a field (in what, for all we know, might have been the 100th take).
A far bigger task is creating autonomy that reacts to brand-new situations – scenarios the robot hasn’t been fed before as part of its machine learning. When it comes to making this kind of technology an everyday reality, that critical component remains a long way off.
This article in VentureBeat today is dense with detail, but the gist is that researchers in California – at UC Berkeley and Google’s AI division, known as Brain – think they have come up with a method to help robots walk over the unexpected, by using reinforcement learning to “reward” the robot when it makes the right decision.
If you’re thinking this sounds like the way you’d train a dog, you’d be absolutely right. From the piece:
After 160,000 steps over two hours with an algorithm that rewarded forward velocity and penalized “large angular accelerations” and pitch angles, they successfully trained the Minitaur to walk on flat terrain, over obstacles like wooden blocks, and up slopes and steps — none of which were present at training time.
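The reward shaping described in that quote is easy to sketch. The weights below are invented for illustration — the researchers’ actual values will differ — but the shape follows the description: reward forward velocity, penalize large angular accelerations and pitch angles:

```python
# Sketch of the reward described above: encourage forward velocity,
# discourage large angular accelerations and pitch angles.
# The weights w_accel and w_pitch are invented for illustration.

def reward(forward_velocity, angular_accel, pitch,
           w_accel=0.01, w_pitch=0.1):
    """Per-step reward for the walking policy."""
    return forward_velocity - w_accel * abs(angular_accel) - w_pitch * abs(pitch)

# A smooth, level stride scores better than a jerky, pitched-forward one,
# even at the same forward speed.
smooth = reward(forward_velocity=0.5, angular_accel=1.0, pitch=0.05)
jerky = reward(forward_velocity=0.5, angular_accel=20.0, pitch=0.6)
```

Because nothing in that formula mentions blocks, slopes or steps, a policy trained to maximize it learns a general walking skill rather than memorizing particular obstacles — which is why the Minitaur could handle terrain it never saw in training.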
In other words, by teaching the robot to walk properly – rather than just handle pre-programmed obstacles – it was able to react to things it had never seen before. Here’s a tape of the learning process: