Robot picks out the right mug and then hangs it on a rack (it’s harder than it sounds, alright?)

And now for the latest installment of “teaching robots to pick stuff up”—a long-running, perhaps even never-ending, series.

At MIT, a breakthrough in the quest to teach a robotic arm how to identify a mug, pick it up, and place it on a nearby mug rack.

Researchers worked out that, by reducing the number of data points the machine requires for its decision-making, they could greatly cut the time it takes to “teach” a robot to pick up as wide an array of mugs* as possible.

From Technology Review’s The Algorithm newsletter:

Unlike previous techniques that require hundreds or even thousands of examples for a robot to learn to pick up a mug it has never seen before, this approach requires only a few dozen. The researchers were able to train the neural network on 60 scenes of mugs and 60 scenes of shoes to reach a similar level of performance. When the system initially failed to pick up high heels because there were none in the data set, they quickly fixed the problem by adding a few labeled scenes of high heels to the data.

Here it goes:


(*How many designs of mug can there be in the world?)
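The newsletter doesn’t spell out the training recipe, but the fix it describes – adding a handful of labelled scenes of a new object class and retraining – is essentially few-shot fine-tuning. Here’s a minimal, entirely hypothetical sketch of that idea; the network, its keypoint output and the data shapes are illustrative placeholders, not MIT’s actual system:

```python
# Hypothetical sketch of the "add a few labelled scenes" fix described above.
# This is NOT the MIT team's code; shapes and architecture are placeholders.
import torch
import torch.nn as nn

class GraspKeypointNet(nn.Module):
    """Toy stand-in for a network mapping an RGB scene to grasp keypoints."""
    def __init__(self, num_keypoints: int = 3):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_keypoints * 2)  # (x, y) per keypoint

    def forward(self, x):
        return self.head(self.backbone(x)).view(-1, self.num_keypoints, 2)

def fine_tune(model, scenes, keypoints, epochs: int = 20):
    """Fine-tune on a small batch of newly labelled scenes (e.g. high heels)."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(scenes), keypoints)
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    model = GraspKeypointNet()              # imagine this was trained on ~60 mug/shoe scenes
    new_scenes = torch.rand(12, 3, 64, 64)  # a dozen labelled "high heel" scenes (synthetic here)
    new_keypoints = torch.rand(12, 3, 2)    # hand-labelled grasp keypoints for those scenes
    fine_tune(model, new_scenes, new_keypoints)
```

The point of the sketch is simply that the expensive part – collecting and labelling thousands of examples – is replaced by a short extra training pass on a dozen or so new scenes.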

Google leads gaming down a perilous path

My take, for BBC News, on Google’s Stadia, a streaming-only gaming platform.

Briefly, I think both gamers and games-makers will be cautious of this idea. Not having to buy a console, or each individual game, sounds great, but the cost of paying for such a high-speed internet connection would probably wipe out any savings made.

And the subscription model – if it indeed ends up being one – could lead to developers having to bastardise their ideas in order to maximise income. My conclusion:

The ad-laden, endorphin-pumping, lootbox-peddling mobile gaming industry might be considered the canary in a very miserable coal mine, here. Paying for a games console, and the games on it, may not be such a bad thing after all.

Ricky Gervais on the increased velocity of “outrage”

Ricky Gervais is doing the media rounds at the moment to promote his new Netflix show, After Life.

I binged it on a recent flight to Canada. Like most of his post-Office* work, it’s… decent enough. Emotional moments are too heavily signposted early on, but delivered so tenderly that you don’t care when they arrive.

And in one episode he offers a charming take on the importance of local newspapers to a community. “They’re not for reading,” his character, a reporter, says. “They’re for being in.”

Anyway. Speaking to the New York Times, Gervais had this to say about the modern outrage cycle:

Twenty years ago, if you saw something on TV that offended you and you wanted to let someone know, you would’ve had to get a pen and paper and write, “Dear BBC, I’m bothered.” But you didn’t do it because it was too much trouble. Now with Twitter, you can just go, “[Expletive] you!” to a comedian who’s offended you. Then a journalist will see that and say, “So-and-so said a thing and people are furious.” No. The rest of us don’t give a [expletive] and wouldn’t have heard about it if it hadn’t been made a headline. Everything is exaggerated.

The most frustrating example of this I can think of from the past year was the “row” over the video of Alexandria Ocasio-Cortez’s teenage dancing. “Conservatives are offended”, read headlines based on, quite literally, one anonymous tweet.

While we’re talking about Ricky Gervais, I did greatly enjoy this part of his recent Netflix comedy special, in which he talks about how people on Twitter become irate at posts that have nothing to do with them:

*(or rather, post-Derek, surely one of the most underrated programmes ever shown on British television)

One year on from Cambridge Analytica

The Guardian’s Julia Carrie Wong on the one-year anniversary of the Facebook–Cambridge Analytica scandal:

What changed was how we saw those facts. It was as if we had all gone away on a long voyage, returned home to an uneasy sense that something was different, and were not immediately able to grasp that it was ourselves who had changed and not the rooms and furnishings that surrounded us.

I like this take. A lot. Facebook hasn’t changed in any meaningful way since the story broke, and I stand by my view that the company, feeling victimised, still sees this as some kind of passing storm.

But if what Wong says is true, that won’t matter. As long as the public’s attitude has changed, which the article argues it has, Facebook will have no choice but to adapt or be left behind.

California governor to suspend death penalty

“I do not believe that a civilized society can claim to be a leader in the world as long as its government continues to sanction the premeditated and discriminatory execution of its people.”

— California governor Gavin Newsom as he prepares to order a halt to California’s death penalty. The state currently has 737 inmates on death row, more than any other US state. California has executed 722 people in its history. The most recent, Clarence Allen, was executed in 2006.

OpenAI becomes for-profit

An interesting about-face from one of the emerging machine learning/AI organisations. OpenAI, set up with funding from Elon Musk and Y Combinator’s Sam Altman, is now becoming a for-profit company after all:

It calls itself a “capped-profit” company. That’s a term it coined to mean it will limit the amount of money it returns to investors and employees and use most of whatever it generates to fund its non-profit entity, which will continue to exist. The non-profit entity will rule the company’s board with more board seats, and investors and employees have to sign a pledge acknowledging that the non-profit comes before their financial interests.

I recently covered OpenAI’s “fake news machine”. As part of my reporting I spoke to several independent AI researchers who had a very dim view of the motives behind OpenAI, pointing to the fact that the “research” released by the company has never been peer-reviewed. OpenAI is designed, one researcher argued, purely to generate publicity.

For a non-profit billed as a generator of discussion about the future of ethics in AI, publicity stunts are perhaps welcome. But for a for-profit firm? I predict a closer watch on what OpenAI does from now on, with tougher scrutiny of what really constitutes a breakthrough.

Slate: What Zuckerberg’s new ‘privacy first’ vision for Facebook is hiding

It’s a diversion, a magician’s misdirection full of red herrings. When it comes to privacy, Facebook has been getting into trouble, deflecting, apologizing, and failing to deliver on promises of meaningful privacy protections for more than a decade. And its CEO wants to distract us from that record with a few well-placed changes so we miss his dangerous inaction elsewhere. Even taking him at his word—a generosity Facebook certainly hasn’t earned—Zuckerberg’s essay shows that he fundamentally misunderstands what “privacy” means. Read more cynically, the post seems to use a narrow definition of the concept to distract us from the ways Facebook will likely continue to expand its invasion of our digital private lives for profit.

— Slate’s Ari Ezra Waldman writing about Mark Zuckerberg’s pledge to make Facebook a more privacy-centric “digital living room”.

Tesla keeping more stores open—but there’s a catch

Tesla clearly hadn’t properly run the numbers before making its announcement about Model 3 pricing and its decision to close most of its stores.

It now says it will keep “significantly more stores open than previously announced”. It’s hard to judge the significance of this, as Tesla didn’t tell us how many stores it was planning to close in the first place, nor does it have a concrete number now, other than to say that 20% of its 300 or so stores are under review.

Keeping those stores open will come at a cost:

As a result of keeping significantly more stores open, Tesla will need to raise vehicle prices by about 3% on average worldwide. In other words, we will only close about half as many stores, but the cost savings are therefore only about half.

Potential Tesla owners will have a week to place their order before prices rise, so current prices are valid until March 18th. There will be no price increase to the $35,000 Model 3. The price increases will only apply to the more expensive variants of Model 3, as well as Model S and X.

No doubt Tesla fans will remain happy about all this. But I wonder how potential customers will feel about having to foot the bill for Tesla’s retail strategy.

Family angry as dying man is delivered test results—via robot

The collision between emerging robotics and human emotion will never be more acutely felt than in the healthcare industry.

Here’s a fascinating story (spotted by my colleague) about a family left deeply upset after a robot, not a nurse, was despatched to tell a man he was gravely ill and about to die.

Ernest Quintana, 79, was taken to hospital on Sunday, the San Jose Mercury News reports.

His granddaughter was told a nurse was going to share some test results:

Then the video-device wheeled itself in. Another machine delivering oxygen through a mask to her grandfather made it so noisy she had to repeat the words over the video to him, and struggled to keep her composure as she realized the gravity of the situation when the doctor on the video told her “I don’t know if he’s going to get home” and suggested giving him morphine “to make sure you’re comfortable.” She videotaped the encounter, fearing she would forget what was said.

The hospital, run by US healthcare mega-chain Kaiser Permanente, offered its “sincere condolences”, though it defended the use of the technology, even in such delicate situations. Spokeswoman Michelle Gaskill-Hames said the company was reaching out to discuss the family’s concerns, but:

Gaskill-Hames bristled at the characterization of the video device as a “robot,” calling it “inaccurate and inappropriate” and insisting that “in every aspect of our care, and especially when communicating difficult information, we do so with compassion in a personal manner.”

The Mercury notes the company that makes the robot, InTouch Health, does indeed call it a robot. Here’s its promo video:

There’s no question this technology is going to become more widely used across the healthcare industry. And with very good reason: anything that makes more efficient use of physicians’ time is a good innovation for (our) health.

And yet, we must remain human.

The Mercury quotes Arthur Caplan from NYU School of Medicine’s ethics division as saying it’s rare for patients to be told they are dying via robot, but that it is something to which we must become accustomed. The piece ends with Caplan acknowledging:

“The mere fact that this family was upset tells me we’ve got to do better.”

No publicity is bad publicity?

One might say Huawei is matching Apple’s ability to innovate – when it comes to marketing, at least. The Telegraph’s James Titcomb:

Oh. Oh my: