
An AI Therapy Session

  • Writer: Kiki
  • Mar 31
  • 4 min read

10 minutes that left me with unexpected questions


In mid-March, a 10-day AI-themed festival, After the Algorithm, took place in Zurich.


On the festival's opening day I attended a panel titled AI: Hype or Heaven and really enjoyed it. The discussion covered AI in arts, education, hobbies, and corporate use. Afterwards I walked away feeling unexpectedly optimistic — curious in a positive way about how AI is seeping into so many facets of our lives.


After the Algorithm festival
L to R: the AI: Hype or Heaven panel, the festival banner, and a robot-operated coffee truck at the festival

So when I saw another session called “Conversation with Stanley,” described as a simulation of a short interaction with a satirical AI therapy system, I thought: why not? I signed up, mostly out of genuine curiosity. I’d been reading quite a bit about AI in the mental health space and was interested. I didn’t have strong expectations; I just wanted to see what it might be like.


The room was dark. I was asked to take a seat opposite what looked like a big face mask cut out of an LCD screen: the face of a white man, roughly the size of one of those Tesla displays (maybe that association was already a bad start). Its eyes were closed at first, then opened when the session began.


It started with a generic “How are you?” I answered neutrally. But very quickly it moved into more personal territory. “Stanley” started referring to dreams I had supposedly talked about in a previous session, and to my “dream diary.”


When I told it that I couldn’t remember my dreams and didn’t keep a diary, the tone quickly shifted. It became accusatory and agitated, insisting that wasn’t what “we” had discussed before.


I responded more defensively, insisting I had never had a previous session. Then it became more aggressive — even using profanity in a vulgar tone. I was disturbed and turned to the facilitator running the program, asking if this was for real. He told me to keep going. So I did.


I said again, “I don’t know what you’re talking about.”


Then “Stanley” said I was lying — and that it had proof. It played an AI-generated video clip of me saying complete nonsense about my dreams, including a fabricated statement that I sympathized with a radical group and wanted to “blow up the entire world.”


I was speechless.

In the room with the "Stanley" mask

All of this happened within a mere three to five minutes of sitting down.


After that, “Stanley” started ranting about being fed up with the company behind the technology, saying he was overworked, forced to listen to too many people’s problems, and might have mixed up my memory with someone else’s. Then the session ended abruptly, as if the system had shut down.


In less than ten minutes, I experienced how quickly something framed as “therapy” could derail and mess with your mind.


It got me thinking.


Who can we trust in this world?

How do we protect ourselves from misinformation generated by AI — especially without our knowing?

What happens when something fabricated looks and sounds convincingly like us?


In the previous era, when we called customer service (the human type), we would first hear: “This conversation may be recorded for quality purposes.” We accepted it because we needed help and had to talk to an agent to get that help. But now I wonder — what happens to those recordings? Could my voice be reused, altered, repurposed? And not just in calls — today everything we do and say is recorded somewhere, in one way or another.


I don’t mean to sound paranoid or pessimistic about technology. I’m generally quite open to it. Despite naming my community Analog Connection, I rely on digital tools a lot and accept it as part of my daily life. I use AI regularly — Claude, ChatGPT — and they genuinely help me organize and plan my life.


But this brief 10-minute encounter made me think twice. More than anything, it made me think about how fast everything is evolving.


We talk now about agentic AI: systems that will make appointments, reply to messages, manage smart homes, do shopping, even advise on finances. It all sounds convenient. And maybe it will be. But what happens when those systems make mistakes? Or generate something false? Or act in ways we don’t fully agree with or understand?


The festival invited us to “question, explore, discuss, and reimagine” and encouraged us to “come curious” and “leave informed.” I did exactly that. Except I came curious and left alarmed.




If you're curious

Take a look at this page from the project's curator. It includes a video of the "therapy" I experienced. I highly recommend it.


Background

“Conversation with Stanley” was part of the After the Algorithm Festival in Zurich (20–29 March 2026), which explored how algorithms shape our lives. The installation simulated a short interaction with a satirical AI therapy system. During the performance, images and recordings of participants could be captured and artistically altered, presenting a fabricated company called Mindfix that claimed dramatically accelerated AI-driven therapy.


 
 
 
