Humanity’s Quest to Make AI Love Us Back Is Futile and Dangerous

  • We have spent years asking AI-powered entities to declare their love for us.
  • But the effort is futile, experts say, because today’s AI cannot feel empathy, let alone love.
  • There are also real dangers in forming deeply one-sided relationships with AI, experts say.

In 2018, a Japanese civil servant named Akihiko Kondo asked the love of his life a big question.

She replied: “I hope you will cherish me.”

Kondo married her, but this was no flesh-and-blood woman. Instead, she was an AI-powered hologram of the virtual idol Hatsune Miku – a pop star in anime-girl form.

The marriage was not legally recognized, but Kondo, a self-professed “fictosexual,” had a loving relationship with his “wife” for two years, until the company behind AI-Miku discontinued her in March 2020.

We’ve spent years trying to get AI to love us back. It turns out, it’s just not that into us.

Siri on an iPhone

Some people have tried to make Apple’s Siri their romantic partner.

Thomas Trutschel/Photothek via Getty Images



While Kondo succeeded in marrying an AI avatar, most people who have tried to achieve the same goal have not been so lucky. People have been trying to make AI show them affection for more than a decade, and for the most part, it has consistently rejected human advances.

In 2012, some people were already asking Apple’s Siri if she loved them and documenting the answers in YouTube videos. In 2017, a Quora user wrote a guide on how to manipulate Siri into expressing her affection for her human master.

People have made similar efforts to get Amazon’s Alexa voice assistant to confess her love for them. But Amazon has drawn a line in the sand where a potential relationship with Alexa is concerned. Say “Alexa, I love you,” and you will get a clinical statement of fact in response: “Thanks. It’s good to be appreciated.”

We have since progressed to more sophisticated interactions with AI. In February, a user of the AI service Replika told Insider that dating the chatbot was the best thing that ever happened to them.

On the flip side, AI-generated entities have also tried to make connections with their human users. In February, Microsoft’s AI-powered Bing chatbot declared its love for New York Times reporter Kevin Roose and tried to get him to leave his wife.

OpenAI’s ChatGPT, for its part, was honest about its intentions, as I discovered when I asked it whether it loved me:

Screenshot from ChatGPT in response to the question

When asked, ChatGPT was very honest about not being able to fall in love with a user.

Screenshot/ChatGPT



AI can’t love us back – yet. But it’s good at making us think it does.

Experts told Insider that it’s unrealistic to expect current AIs to love us back. At the moment, these bots are simply the customer-facing end of an algorithm, nothing more.

“AI is the product of mathematics, coding, data, and powerful computing technology that pulls it all together. When you strip AI back to its basics, it’s just a very good computer program. So the AI isn’t expressing desire or love, it’s just following its code,” Maria Hennessy, associate professor of clinical psychology at James Cook University in Singapore, told Insider.

Neil McArthur, professor of applied ethics at the University of Manitoba, told Insider that the allure of AI lies in how sentient it can seem. Its human-like characteristics, however, do not come from the AI itself; they are a reflection of its human creators.

“Of course, AI will be uncertain, passionate, creepy, sinister – we are all of these things. It is just mirroring us back to ourselves,” McArthur said.

Jodi Halpern, a professor of bioethics at UC Berkeley who has studied empathy for more than 30 years, told Insider that the question of whether AI can feel empathy, let alone love, comes down to whether it can experience emotion at all.

Halpern thinks that today’s AI is unable to combine and process the cognitive and emotional aspects of empathy. That makes it incapable of love.

“The most important thing to me is that these chat tools and these AI tools are trying to fake and simulate empathy,” Halpern said.

There are dangers in creating a relationship with AI, experts say

McArthur, the University of Manitoba ethics professor, said it might not be a bad thing for people to form relationships with AI, though there are some caveats involved.

“If you know what you’re getting into, there’s not necessarily anything unhealthy about it. If your AI is designed right, it won’t ghost you, it won’t cheat on you, and it won’t steal your savings,” McArthur told Insider.

But most experts agree that AI dating comes with drawbacks — and even dangers.

In February, some users of the Replika chatbot were heartbroken when the company behind it decided to make major changes to the personalities of their AI lovers. They took to Reddit to complain that their AI boyfriends and girlfriends had been lobotomized, and that the “illusion” had been broken.

Replika

Some people have started dating their Replika chatbot partners – with disastrous results.

Replika



Anna Marbut, a professor with the University of San Diego’s applied artificial intelligence program, told Insider that AI programs like ChatGPT are very good at making it seem like they have independent thoughts, feelings and opinions. The catch is, they don’t.

“AIs are trained for a specific task, and they’re getting better at doing those specific tasks in a way that’s convincing to humans,” Marbut said.

She added that no existing AI has self-awareness, or an idea of its place in the world.

“The truth is, AI is trained on a finite set of data, and they have finite tasks that they are very good at,” Marbut told Insider. “That connection we feel is completely false, and completely made up on the human side of things, because we like the idea of it.”

Marbut noted that another layer of danger with today’s AI is that its creators cannot fully control what a generative AI produces in response to prompts.

And when unleashed, AI can say horrible, hurtful things. During a simulation in October 2020, a chatbot built on OpenAI’s GPT-3 told a person seeking psychiatric help to kill themselves. And in February, Reddit users found a workaround that caused ChatGPT’s “evil twin” – which praised Hitler and speculated about painful torture techniques – to emerge.

Halpern, the UC Berkeley professor, told Insider that AI-based relationships are also dangerous because the entity behind them can be used as a tool to make money.

“You’re subject to something that a company is running as a business, and that can leave you extremely vulnerable. It’s another erosion of our cognitive independence,” Halpern said. “If we fall in love with these things, there could be subscription models down the road. We could see vulnerable people falling in love with AIs and getting hooked on them, then being asked to pay.”
