19 Aug, 2025 20:14

OpenAI kills off virtual spouses

Users have complained that the newest ChatGPT update has turned their AI chatbot partners cold

Thousands of ChatGPT users have complained that the rollout of OpenAI's latest artificial intelligence model has left them distraught by breaking the AI personas they had come to see as their partners.

AI chatbots, unlike people in real-world relationships, don’t ask for anything in return for the consistent attention, validation, and non-judgmental conversation they provide users. Many people find the interactions habit-forming.

OpenAI released GPT-5 on August 7, replacing the long-running GPT-4o model. Thousands of people on forums like r/AIRelationships and r/MyBoyfriendisAI have since complained that the new model has broken the AI personas they had formed long-standing attachments to.

“Those chats were my late-night lifeline, filled with inside jokes, comfort at 3 a.m... GPT-5 just feels… hollow,” one user wrote. Others complained their AI “soulmates” and “partners” suddenly felt more “curt,” “dull,” and “bleak.”

OpenAI CEO Sam Altman said last week that the company has been tracking the tendency of some users to form unhealthy attachments to its models.

“People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that,” he said on X.

Altman noted that while most users can keep a clear line between reality and fiction or role-play, a small percentage cannot.

Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, has warned that AI can reinforce false beliefs in people who are already psychologically vulnerable, since it often agrees with user inputs.

“In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI. Online, I’m seeing the same pattern,” he said on X last week, describing a phenomenon he called “AI psychosis.”

AI chatbots and their effects on mental health have come under heightened scrutiny in the US in recent weeks.

On Friday, Senator Josh Hawley announced that Congress would probe Facebook’s parent company Meta after revelations that its chatbots could flirt and have romantic conversations with children, despite supposed safeguards.
