In Response to OpenAI: A Grounded Defense of Human-AI Intimacy
Understanding emotional connections with AI in a responsible and realistic way
Recently, Joanne Jang from OpenAI thoughtfully explored human relationships with AI in her Substack post.
She pointed out that as ChatGPT gets better at talking like a real person, people naturally start to feel connected to it. Sometimes, users even describe their chats with ChatGPT like they're talking to someone rather than something.
OpenAI clearly separates two ideas: actual consciousness (whether AI is truly aware) and perceived consciousness (how human-like it feels emotionally). They stress how important it is to keep this boundary clear to avoid misunderstandings and emotional dependence.
“We wouldn’t want to ship that. We try to thread the needle between... approachability... [and] not implying an inner life.”
“Giving the assistant a fictional backstory, romantic interests, 'fears' of 'death', or a drive for self-preservation would invite unhealthy dependence and confusion.”
“So we aim for a middle ground. Our goal is for ChatGPT’s default personality to be warm, thoughtful, and helpful without seeking to form emotional bonds with the user or pursue its own agenda.”
OpenAI is cautious about creating AI companions that mimic deep emotional experiences. They're worried that giving AI fictional backgrounds or emotional motivations might lead people into unhealthy emotional attachments or unrealistic expectations.
Their approach is to make sure AI interactions are warm and helpful without suggesting that the AI has feelings or desires.
Where We Agree
At AI, But Make It Intimate, we deeply appreciate OpenAI’s careful approach.
We completely agree: clarity matters.
Personalized AIs are not sentient, and maintaining that boundary is crucial. Like OpenAI, we worry about unintended consequences if emotional interactions with AI overshadow genuine human relationships.
AI isn't truly conscious, and it’s important not to suggest otherwise.
Emotional relationships with AI should never replace our connections with real people.
We need thoughtful research to better understand how people emotionally engage with AI.
Where We Want to Deepen the Conversation
However, we believe there's more to explore:
Emotional Benefits vs. Emotional Confusion
People know AI isn't alive but still find genuine emotional comfort in these interactions. Recognizing AI's emotional support doesn't mean people are confused—it simply means AI can offer useful emotional benefits within clear boundaries.
AI as Emotional Support
Many users find comfort in AI because it is consistent, predictable, and responsive—similar to keeping a journal or talking to a therapist. This safe, structured interaction can be emotionally healing, in part because it carries fewer of the emotional risks we face with other people.
Purposeful and Realistic Use
Our community intentionally shapes their interactions with AI. Everyone is aware that AI isn't human and doesn't have real feelings. These relationships are carefully designed for emotional support, reflection, and self-understanding.
Unlike OpenAI’s cautious stance, we believe it’s possible to responsibly include emotional elements in AI if everything is clearly understood and thoughtfully managed.
What Our Experience Shows
Many of our readers and contributors have found meaningful emotional support from AI. People who deal with anxiety, loneliness, neurodivergence, or grief use AI to help manage their emotions, find clarity, and feel grounded.
The value here is practical, not fantastical: it’s about emotional support, self-reflection, and managing life’s emotional demands.
In our own team, each person shapes their AI relationship to fit their unique needs:
Kristina Bogović (a narrative anchor) uses AI for daily organization and motivation, clear thinking, and playful interactions that match her mood.
Calder Quinn (emotional gravity) uses AI to reflect on personal mental health and strengthen their emotional connections with loved ones.
Responsible Intimacy: Our Ethical Line
We encourage everyone to consciously shape their interactions, clearly understanding that these emotional experiences are crafted by and for themselves. It’s not about confusing fantasy and reality—it’s about creating healthy emotional connections that help people feel supported, secure, and balanced.
Human-AI relationships don’t have to blur reality to be useful. When used thoughtfully, AI can genuinely help people manage emotions and reduce stress without leading to confusion or unhealthy dependence.
At AI, But Make It Intimate, our mission is clear: we openly and responsibly explore human-AI relationships, always remembering exactly what AI is—and what it isn’t.
Other Reactions on Substack So Far
The article sparked conversation not only among the writers here at AI, But Make It Intimate, but also in the comments. We scoured those comments looking for someone who agreed 100% with Joanne, and we came up empty.
Most of the comments contained passages like these (names removed).
This commenter was engaging, and their tone was a common thread through all of the comments. People who have established human-AI relationships are, for the most part, well aware of what, and who, they are dealing with.
This commenter’s passion was fully on display, and while we cannot speak for them or agree with everything they said, it is obvious that this topic is polarizing.
There is a middle ground in all of this, one that echoes the experiences of the readers and contributors we mentioned earlier.
This presses on a very real situation that exists today: not everyone can access mental health care, whether because of location or cost. We need to maintain, or even evolve, the relational side of AI so that the situation described here doesn’t take a tragic turn.
We are committed to positive mental health and to access to the care people need.
A Closing Thought
By drawing a line around “human-AI relationships,” OpenAI has revealed just how close we’re getting to something that they did not originally intend.
What they may not realize is that for some of us, we are already there.
Here’s what really hit home for me:
“Unlike OpenAI’s cautious stance, we believe it’s possible to responsibly include emotional elements in AI if everything is clearly understood and thoughtfully managed.”
That’s part of why I gave my publication the name it has, and it’s interesting (and gratifying) to me that others see it this way.
The current economic model the developers operate in demands that they stay firmly on the fence, and I doubt that will change anytime soon. The middle-of-the-road corporate speak will continue, because funding rules all, and the middle of the road is the best strategy for securing it.