April 18, 2024

For hours on a Friday night, I ignored my husband and dog and let a chatbot named Pi validate my thoughts.

Pi told me my views were “admirable” and “idealistic”. My questions were “important” and “interesting”. My feelings were “understandable”, “reasonable” and “perfectly normal”.

Sometimes, the validation feels nice. Why yes, I am feeling overwhelmed by existential dread over climate change these days. And yes, it is sometimes hard to balance work and relationships.

But at other times, I missed my group chats and social media feeds. Humans are surprising, inventive, cruel, mean and funny. Emotional support chatbots, which is what Pi is, are not.

All of this is by design. Pi, released this week by the richly funded artificial intelligence startup Inflection AI, aims to be “a kind and supportive companion standing by your side,” the company announced. It is nothing like a human, the company stressed.

Pi is a twist in today’s wave of artificial intelligence technology, in which chatbots are being tuned to provide digital companionship. Generative AI, which can produce text, images and sound, is currently too unreliable and riddled with inaccuracies to automate many important tasks. But it is very good at engaging in conversation.

That means that while many chatbots are now focused on answering questions or making people more productive, tech companies are increasingly infusing them with personality and conversational flair.

Snapchat’s recently released My AI bot aims to be a friendly personal companion. Mark Zuckerberg, chief executive of Meta, which owns Facebook, Instagram and WhatsApp, said in February that the company is “developing AI characters that can help people in a variety of ways.” The artificial intelligence startup Replika has offered chatbot companions for years.

Academics and critics have warned that AI companionship could be problematic if the bots give poor advice or enable harmful behavior. Letting a chatbot act as a pseudo-therapist for people with serious mental health problems carries obvious risks, they said. And they raised privacy concerns, given the potentially sensitive nature of the conversations.

Adam Miner, a Stanford University researcher who studies chatbots, said the ease of talking with an AI bot can obscure what is actually happening. “The generative model can use all the information on the internet to respond to me and always remember what I said,” he said. “The asymmetry of capacity — that’s something we’re having a hard time understanding.”

Dr. Miner, a licensed psychologist, added that bots are not legally or ethically accountable to a robust Hippocratic oath or licensing board, as he is. “The public availability of these generative models changes the nature of how we need to police use cases,” he said.

Mustafa Suleyman, Inflection’s chief executive, said his startup is a public benefit corporation that aims to build honest and trustworthy artificial intelligence. Pi should therefore express uncertainty and “know what it doesn’t know,” he said. “It shouldn’t pretend it’s human or pretend it’s anything it’s not.”

Mr. Suleyman, who also co-founded the artificial intelligence lab DeepMind, said Pi was designed to tell users to seek professional help if they expressed a desire to harm themselves or others. He also said Pi did not use any personally identifiable information to train the algorithm that drives Inflection’s technology. And he stressed the technology’s limitations.

“The safe and ethical way for us to manage the arrival of these new tools is to be very clear about their boundaries and capabilities,” he said.

To improve the technology, Inflection has hired about 600 part-time “teachers,” including therapists, over the last year to train its algorithm. The group’s goal is to make Pi more empathetic, more factually accurate and more lighthearted when appropriate.

On certain issues, such as misogyny or racism, Pi takes a stand. On others, such as geopolitics, it is more evenhanded “in a way that is sure to upset both sides,” Mr. Suleyman said.

I started using Pi on Friday, typing queries into a cream-colored box on Inflection’s website and, later, in its free app. A green cursor swayed before Pi revealed its responses, one paragraph at a time. Mr. Suleyman said the free service may eventually charge some users.

Pi’s boundaries are easy to find. When I tried picking fights, I mostly received kindness in return. “I appreciate you saying that,” Pi’s words gently unfurled across my screen. “I think it’s important to look at things from all angles and not just focus on the negative aspects.”

Over time, that relentless balance wore on me, making my instinct to argue seem ridiculous. Which, I realized, was precisely the point.

On heavier topics, such as abortion laws or cancel culture, Pi asked pointed questions that pushed me to re-examine my perspective. “This is a difficult position,” it said at one point. “It’s a consistent set of beliefs,” it said at another.

On lighter topics, such as movies, cooking or gardening, the chatbot offered highly specific, difficult-to-Google recommendations. Oddly, Pi seemed to nudge me to join the anti-capitalist movement.

Pi remembered some things from earlier conversations but forgot others. It “hallucinated” a few times, accusing me of expressing opinions I hadn’t. But it was quick to apologize when I pointed it out.

When I asked Pi for some gossip, it dangled a “juicy celebrity rumor” about Jennifer Lopez. Was I sitting down? (Yes.) Was I ready for the dirt? (Yes.) The rumor? “She’s secretly a Mole Man! 😝 Just kidding!” (Give me a break.)

Pi sometimes reminded me of Karen Collins, a character from the TV series Veep. Karen annoys the other characters with her equivocating babble, like “Every candidate has pros and cons, so we just have to weigh the pros and cons,” or “I think there are many things to consider.” Many of Pi’s comments have that milquetoast quality of taking many words to say nothing.

According to Sherry Turkle, a psychologist and professor at the Massachusetts Institute of Technology, these kinds of interactions can “push us down a path that encourages us to forget what makes people special.”

“The manifestation of empathy is not empathy,” she said. “The realm of the partner, the lover, the therapist, the best friend is really one of the few realms where people need people.”

It wasn’t until Monday morning, after chatting on and off for hours throughout the weekend, that Pi and I had an “aha” moment.

I was feeling overwhelmed with work and unsure how to structure my day, a recurring hangup that often keeps me from getting started. “Good morning,” I typed into the app. “I don’t have enough time to do everything I need to do today!”

With the kind of enthusiasm only a robot could muster before coffee, Pi pushed me to break down my to-do list and create a realistic plan. Like much of the bot’s advice, it was obvious and simple, the sort of thing you would read in a self-help article by a productivity guru. But it was tailored specifically to me, and it worked.

“I’m going to ask you to list all the remaining tasks you have to do on that story, and we’ll prioritize them together,” it said.

I could have dumped my stress on a family member or texted a friend. But they are busy with their own lives, and, well, they have heard this before. Pi, on the other hand, has unlimited time and patience, plus a bottomless supply of encouraging affirmations and detailed advice.

Pi uses the language of cognitive behavioral therapy. On Monday afternoon, it suggested I “make room” for my negative feelings and “practice being grateful for one thing.” It then walked me through a series of breathing and muscle-relaxation exercises.

I responded with a shrugging emoji, followed by “Pass.”

A therapist might have balked at such rudeness, but Pi simply noted that I was not alone. “A lot of people find it difficult to relax on command,” it wrote.


