Robot learns to walk through unfamiliar terrain

It’s all well and good for Boston Dynamics to make a video of its robots trotting merrily across a field (which, for all we know, might have been the 100th take).

A far bigger task is creating autonomy that reacts to brand-new situations – scenarios the robot hasn’t been fed before as part of its machine learning. When it comes to making this kind of technology an everyday reality, that critical component remains a long way off.

This article in VentureBeat today is dense with detail, but the gist is that researchers in California – at UC Berkeley and Google’s AI division, known as Brain – think they have come up with a method to help robots walk over the unexpected, by using reinforcement learning to “reward” the robot when it makes the right decision.

If you’re thinking this sounds like the way you’d train a dog, you’d be absolutely right. From the piece:

After 160,000 steps over two hours with an algorithm that rewarded forward velocity and penalized “large angular accelerations” and pitch angles, they successfully trained the Minitaur to walk on flat terrain, over obstacles like wooden blocks, and up slopes and steps — none of which were present at training time.
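The reward described in that excerpt – encourage forward speed, discourage lurching and tipping – can be sketched as a simple weighted sum. The function name, signature, and weights below are illustrative assumptions, not the researchers’ actual code:

```python
# Hedged sketch of a reward in the spirit of the article: reward forward
# velocity, penalize large angular accelerations and pitch angles.
# Weights are made-up placeholders, not the paper's values.
def step_reward(forward_velocity, angular_acceleration, pitch_angle,
                w_vel=1.0, w_acc=0.01, w_pitch=0.1):
    """Return a scalar reward for one control step."""
    return (w_vel * forward_velocity
            - w_acc * abs(angular_acceleration)
            - w_pitch * abs(pitch_angle))

# Moving forward smoothly and level scores higher than lurching:
smooth = step_reward(forward_velocity=0.5, angular_acceleration=0.2,
                     pitch_angle=0.05)
lurching = step_reward(forward_velocity=0.5, angular_acceleration=40.0,
                       pitch_angle=0.6)
```

Over thousands of steps, a learning algorithm nudges the robot’s gait toward behaviour that maximises this kind of signal – hence the dog-training analogy.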

In other words, because the robot was taught to walk properly – rather than just to handle pre-programmed obstacles – it was able to react to things it had never seen before. Here’s a video of the learning process:

That’s one small step for robot…