• OpenAI CEO Sam Altman said he had “deep misgivings” about friendships with AI.
  • “We named it ChatGPT and not a person’s name very intentionally,” he said during WSJ’s Tech Live event on Tuesday.
  • Altman’s comments come amid a growing number of AI companies trying to make their chatbots more human-like and friendly.

OpenAI CEO Sam Altman has misgivings about friendships between humans and AI.

“I personally really have deep misgivings about this vision of the future where everyone is super close to AI friends, and not more so with their human friends,” Altman told the Wall Street Journal’s Joanna Stern during the Tech Live event in Laguna Beach, California, on Tuesday.

Altman clarified that though he did not want that future himself, he accepted that others might. He also said that while he saw personalization and personality as positives, it was important to keep distinguishing between interactions with AI and those with humans.

“We named it ChatGPT and not a person’s name very intentionally,” said Altman. He added that OpenAI makes it clear to users that they aren’t talking to a person.

Altman’s comments come amid a growing number of AI companies putting out chatbots with human-like personalities and friendliness as a key feature.

These include Anthropic’s “friendly” chatbot Claude, Character.AI’s chatbots with user-created personalities, and Meta’s celebrity AI assistants, which have backstories and use the likenesses of public figures like Kendall Jenner.

“Our AI assistant is called Meta AI. It’s similarly neutral and has a neutral tone and has a neutral personality,” Chris Cox, Meta’s chief product officer, said in a separate interview at the same event, responding to Altman’s comments. “But I also think there’s room for more expression and more playfulness in what’s possible,” he added.

To be sure, Altman isn’t alone in his concern about humans bonding with chatbots.

Humans tend to anthropomorphize objects, attaching human qualities to them, according to a report published by the think tank Public Citizen. The report warned that companies deploying chatbots could exploit users’ trust and manipulate their emotions.

And as Insider’s Rob Price reported, there are ethical concerns over users getting attached to their chatbots. When AI startup Replika disabled its chatbots’ “erotic role-play” feature, it caused an uproar among users who felt their AI companions had been “lobotomized.”

OpenAI and Altman did not immediately respond to requests for comment from Insider, sent outside regular business hours.
