Labour shortage means robots might start shearing Aussie sheep

As I’ve often found, the robotic revolution tends to fill in where there are labour shortages rather than push humans out. See Australia, where there are far more sheep than people to shear them, reports the ABC:

Faced with a shearer shortage, [Australia’s wool industry] is spending $10 million on research to streamline wool harvesting. Projects range from better shed design to robotics, including one project that would fully automate the process of getting wool off a sheep and into a bale. Jane Littlejohn, who oversees the research arm of the industry’s research and development body Australian Wool Innovation (AWI), described automation as “forward thinking”.

It’s desperately needed. According to the ABC, there are 73 million sheep in Australia, and only 2,800 shearers.

Here’s some of this research in action:

Shauna is modelled on a real shorn sheep and has been used by Mickey Clemon and his colleagues to test what’s possible with off-the-shelf technology.

“We found quite a lot is possible,” Dr Clemon said of the nine-month scoping study commissioned by AWI.

Now, my mother grew up on an Irish sheep farm. Having spent many happy summers running around with the sheep, I can confidently tell you the machine will need a lot of practice on a rather more… active participant.

Robot learns to walk through unfamiliar terrain

It’s all well and good for Boston Dynamics to make a video of its robots trotting merrily across a field (which, for all we know, might have been the 100th take).

A far bigger task is creating autonomy that reacts to brand-new situations – scenarios the robot hasn’t been fed as part of its machine learning. That adaptability is a critical component of making this kind of technology an everyday reality, and we are still a long way from achieving it.

This article in VentureBeat today is dense with detail, but the gist is that researchers in California – at UC Berkeley and Google’s AI division, known as Brain – think they have come up with a method to help robots walk over the unexpected, using reinforcement learning to “reward” the robot when it makes the right decision.

If you’re thinking this sounds like the way you’d train a dog, you’d be absolutely right. From the piece:

After 160,000 steps over two hours with an algorithm that rewarded forward velocity and penalized “large angular accelerations” and pitch angles, they successfully trained the Minitaur to walk on flat terrain, over obstacles like wooden blocks, and up slopes and steps — none of which were present at training time.
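For the curious, here’s a rough idea of what a reward function along those lines might look like. This is my own illustrative Python sketch, not the researchers’ code – the state fields and the weights are invented for the example:

```python
import numpy as np

def reward(state, prev_state, dt):
    """Reward forward velocity; penalise large angular accelerations and pitch.

    Illustrative sketch only. Assumes `state` and `prev_state` expose a
    forward position `x`, an `angular_velocity` vector and a `pitch` angle,
    sampled `dt` seconds apart.
    """
    # Reward: how fast the robot moved forward over this step.
    forward_velocity = (state.x - prev_state.x) / dt

    # Penalty: sudden changes in angular velocity (angular acceleration).
    angular_accel = np.linalg.norm(
        (state.angular_velocity - prev_state.angular_velocity) / dt
    )

    # Penalty: tipping forwards or backwards.
    pitch_penalty = abs(state.pitch)

    # The weights here are arbitrary; in practice they would be tuned.
    return forward_velocity - 0.01 * angular_accel - 0.1 * pitch_penalty
```

The learning algorithm tries to maximise a reward like this over thousands of steps – and notice that nothing in it mentions blocks, slopes or stairs, which is why the resulting gait can cope with them anyway.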

In other words, because the robot was taught how to walk properly – rather than how to handle pre-programmed obstacles – it was able to react to things it had never seen before. Here’s a tape of the learning process:

That’s one small step for robot…