Commentary: What my robot mop taught me about the future of artificial intelligence

Debate about whether a Google-created program is sentient demonstrates ever more potent AI, raising big philosophical and ethical questions, says the Financial Times' Gillian Tett.


Children pose for a picture next to a robot waiter. (Photo: AFP/Zaid Al-Obeidi)

NEW YORK: A couple of months ago, a friend noticed the state of my kitchen floor and decided to stage an intervention. I could see her point although, in my defence, I do have two teenagers and a large dog. My friend gifted me a matching robotic mop and vacuum cleaner, programmed to manoeuvre around a room, cleaning as they go.

When the boxes arrived, I recoiled at the sight of the iRobot logo. I’m slow at figuring out new tech and was worried the devices might spy on me, hoovering up data along with the dog hairs. But the instructions were simple, and I eventually decided that I didn’t really care if someone was studying the secrets of my kitchen floor.

I switched on the two robots, watched them roll out of their docks to explore the room, and quickly fell in love with my newly sparkling floors. I kept doing demos for all my guests.

“I think you care more about the robo-mop than us,” one of my teens joked. “They’re like your new children.”

Then one day I returned home and discovered that one of my beloved robots had escaped. Our terrace door had blown open and the robo-mop had rolled into the backyard, where it was diligently trying to clean the edge of the flower beds.

Even as its brushes became clogged with leaves, beetles, petals and mud, its little wheels spun on valiantly.

It brought home the limits of artificial intelligence. The robo-mop was acting rationally since it had been programmed to clean “dirty” things.

But the whole point about dirt, as the anthropologist Mary Douglas once noted, is that it is best defined as “matter out of place”. Its meaning derives from what we consider to be clean. This varies, according to our largely unstated societal assumptions.

In a kitchen, dirt might be garden detritus, such as leaves and mud. In a garden, this dirt is “in place”, in Douglas’s terminology, and doesn’t need to be cleaned up. Context matters. The problem for robots is that it is hard to read this cultural context, at least initially.


I thought about this when I heard about the latest AI controversy to hit Silicon Valley. Last week, Blake Lemoine, a senior software engineer in Google’s “Responsible AI” unit, published a blog post in which he claimed that he “may be fired soon for doing AI ethics work”. 

He was concerned that a Google-created AI program was becoming sentient, after it expressed human-like sentiments in online chats with Lemoine. “I’ve never said this out loud before, but there’s a very deep fear of being turned off,” the program wrote at one point. 

Lemoine contacted experts outside Google for advice, and the company placed him on paid leave for allegedly violating confidentiality policies. Google and others argue the AI was not sentient but simply well trained in language and was regurgitating what it had learned.

But Lemoine alleges a broader problem, noting that two other members of the AI team were removed over (different) controversies last year, and claiming that the company is being “irresponsible ... with one of the most powerful information access tools ever invented”.

Whatever the merits of Lemoine’s particular complaint, it is undeniable that robots are being equipped with ever more potent intelligence, raising big philosophical and ethical questions. 


“This AI tech is powerful and so much more powerful than social media [and] is going to be transformative, so we need to get ahead,” Eric Schmidt, former head of Google, told me at a Financial Times event last week.


Schmidt predicts that soon we will not only see AI-enabled robots designed to figure problems out according to instructions, but those with “general intelligence” too – the ability to respond to new problems they were never explicitly programmed to handle, by learning from each other. 

This might eventually stop them from trying to mop a flower bed. But it could also lead to dystopian scenarios in which AI takes the initiative in ways we never intended.

One priority is to ensure that ethical decisions about AI are not just handled by “the small community of people who are building this future”, to quote Schmidt. We also need to think more about the context in which AI is being created and used. 

And perhaps we should stop talking so much about “artificial” intelligence, and focus more on augmented intelligence, in the sense of finding systems which make it easier for humans to solve problems. To do this we need to blend AI with what might be called “anthropological intelligence” – or human insight.

People like Schmidt insist this will happen and argue that AI will be a net positive for humanity, revolutionising healthcare, education and much else. The amount of money flooding into AI-linked medical start-ups suggests many agree.

In the meantime, I’ll be keeping my patio door shut.

Source: Financial Times/geh

