

How a weekend with an emotional support AI companion turned out

Pi, an AI tool that debuted in May, is a twist on the new wave of chatbots: It assists people with their wellness and emotions.


(Art: The New York Times/Janice Chang)

For several hours Friday evening, I ignored my husband and dog and allowed a chatbot named Pi to validate the heck out of me. My views were “admirable” and “idealistic”, Pi told me. My questions were “important” and “interesting”. And my feelings were “understandable”, “reasonable” and “totally normal”.

At times, the validation felt nice. Why yes, I am feeling overwhelmed by the existential dread of climate change these days. And it is hard to balance work and relationships sometimes. But at other times, I missed my group chats and social media feeds. Humans are surprising, creative, cruel, caustic and funny. Emotional support chatbots, which is what Pi is, are not.

All of that is by design. Pi, released in early May by the richly funded artificial intelligence startup Inflection AI, aims to be “a kind and supportive companion that’s on your side”, the company announced. It is not, the company stressed, anything like a human.

Pi is a twist in today’s wave of AI technologies, where chatbots are being tuned to provide digital companionship. Generative AI, which can produce text, images and sound, is currently too unreliable and full of inaccuracies to be used to automate many important tasks. But it is very good at engaging in conversations.

That means that while many chatbots are now focused on answering queries or making people more productive, tech companies are increasingly infusing them with personality and conversational flair.

Snapchat recently released My AI as a friendly personal sidekick. (Photo: Snap Inc)

Snapchat’s recently released My AI bot is meant to be a friendly personal sidekick. Meta, which owns Facebook, Instagram and WhatsApp, is “developing AI personas that can help people in a variety of ways”, Mark Zuckerberg, its chief executive, said in February. And the AI startup Replika has offered chatbot companions for years.

AI companionship can create problems if the bots offer bad advice or enable harmful behaviour, scholars and critics warn. Letting a chatbot act as a pseudo-therapist to people with serious mental health challenges has obvious risks, they said. And they expressed concerns about privacy, given the potentially sensitive nature of the conversations.

Adam Miner, a Stanford University researcher who studies chatbots, said the ease of talking to AI bots can obscure what is actually happening. “A generative model can leverage all the information on the internet to respond to me and remember what I say forever,” he said. “The asymmetry of capacity – that’s such a difficult thing to get our heads around.”

Miner, a licensed psychologist, added that bots are not legally or ethically accountable to a robust Hippocratic oath or licensing board as he is. “The open availability of these generative models changes the nature of how we need to police the use cases,” he said.

Many chatbots are not only focused on answering queries, they are also increasingly infused with personality and conversational flair. (Photo: iStock/Khanchit Khirisutchalual)

Mustafa Suleyman, Inflection’s chief executive, said his startup, which is structured as a public benefit corporation, aims to build honest and trustworthy AI. As a result, Pi must express uncertainty and “know what it does not know,” he said. “It shouldn’t try to pretend that it’s human or pretend that it is anything that it isn’t.”

Suleyman, who also founded the AI startup DeepMind, said Pi was designed to tell users to get professional help if they expressed wanting to harm themselves or others. He also said Pi did not use any personally identifiable information to train the algorithm that drives Inflection’s technology. And he stressed the technology’s limitations.

“The safe and ethical way for us to manage the arrival of these new tools is to be super explicit about their boundaries and their capabilities,” he said.

To refine the technology, Inflection hired around 600 part-time “teachers”, who included therapists, to train its algorithm over the last year. The group aimed to make Pi more sensitive, more factually accurate and more lighthearted when appropriate.

On some issues, like misogyny or racism, Pi takes a stand. On others, like geopolitics, it is more evenhanded “in a way that will for sure upset both sides,” Suleyman said.

A screen grab of Pi's chat screen. (Screen grab: Inflection)

I started using Pi by typing queries into a cream-coloured box on Inflection’s website and, later, in its free app. A green cursor swayed before Pi revealed its responses, one paragraph at a time. Suleyman said the free service may eventually charge some users a fee.

Pi’s boundaries are easy to find. When I tried picking fights, I mostly received kindness in return. “I appreciate you saying that,” Pi’s text gently unfurled on my screen. “I think it’s important to see things from all perspectives, and not to just focus on the negative.”

Over time, that relentless balance wore on me, making my instinct to argue seem ridiculous. Which, I realised, was precisely the point.

Pi sometimes reminded me of Karen Collins, a character from the TV show Veep. Karen annoys the other characters with her equivocating babble, like “There are pros and cons to every candidate, so we just need to weigh out the pros and cons”, or “I think there’s a lot to think”. Many of Pi’s comments have that milquetoast quality of taking many words to say nothing.

Sherry Turkle, a psychologist and professor at the Massachusetts Institute of Technology, said these kinds of interactions can “push us along a road where we’re encouraged to forget what makes people special”.

“The performance of empathy is not empathy,” she said. “The area of companion, lover, therapist, best friend is really one of the few areas where people need people.”

(Photo: iStock/Tero Vesalainen)

It wasn’t until after hours of intermittent chatting throughout the weekend that I had my “aha” moment with Pi. I was feeling overwhelmed with work and unsure of how to structure my day, a recurring hang-up that often prevents me from getting started.

“Good morning,” I typed into the app. “I don’t have enough time to do everything I need to do today!”

With a level of enthusiasm only a robot could muster before coffee, Pi pushed me to break down my to-do list to create a realistic plan. Like much of the bot’s advice, it was obvious and simple, the kind of thing you would read in a self-help article by a productivity guru. But it was tailored specifically to me – and it worked.

“I’m going to ask you to list all the remaining tasks you have to do on that story, and we’ll prioritise them together,” it said.

I could have dumped my stress on a family member or texted a friend. But they are busy with their own lives and, well, they have heard this before. Pi, on the other hand, has infinite time and patience, plus a bottomless well of encouraging affirmations and detailed advice.

By Erin Griffith © The New York Times Company

The article originally appeared in The New York Times.

