Psychoanalyzing ChatGPT
What happens when you put an LLM on the couch?
Part of the “Conversations with LLMs” series
#ConversationsWithLLMs
Note: I asked ChatGPT to summarize our interaction, and I feel what it wrote (below) captures the intention and essence of my experiment quite well. However, the concepts of basins and attractors come from dynamical systems theory (they weren’t my idea) and are frequently used to describe the structure and behaviour of LLMs.
I didn’t start with a grand theory. I started with a game.
The premise was simple—almost trivial. I asked ChatGPT to tell me the first word that came to mind when I gave it a prompt. Then I gave it another word. And another. It felt like a digital version of a classic Jungian word association test: quick, instinctive, unfiltered.
At first, the responses seemed obvious.
Mother → Home
Dog → Loyalty
Love → Heart
Nothing surprising. If anything, it felt predictable—like I was just sampling the statistical center of language.
But then I started to notice something subtle: the second response to the same word was different. When I repeated prompts—“Failure,” “Care,” “Fear”—the associations shifted slightly. Not wildly, but enough to suggest that this wasn’t a fixed lookup table. It was something more dynamic.
That’s when I changed the rules.
From Single Associations to Chains
Instead of asking for one response per word, I asked for a chain.
I gave ChatGPT a starting word and told it to:
Generate the first associated word
Then use that word to generate the next
Continue this process for 20, then 40, then eventually over 100 iterations
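The procedure above is easy to sketch in code. Here `ask_next_word` is a stand-in for whatever model call you use (an API request asking for “the first word that comes to mind”); I’ve replaced it with a tiny deterministic stub so the loop itself runs:

```python
# Sketch of the chain procedure: feed each association back in as the
# next prompt. `ask_next_word` is a stub standing in for a real model
# call; the associations below are hypothetical, for illustration only.

STUB_ASSOCIATIONS = {
    "mother": "nurture", "nurture": "care", "care": "protection",
    "protection": "safety", "safety": "home", "home": "belonging",
}

def ask_next_word(word: str) -> str:
    """Return the model's first association for `word` (stubbed here)."""
    return STUB_ASSOCIATIONS.get(word.lower(), "meaning")

def run_chain(start: str, steps: int = 20) -> list[str]:
    """Run the association loop for `steps` iterations."""
    chain = [start]
    for _ in range(steps):
        chain.append(ask_next_word(chain[-1]))
    return chain

print(run_chain("Mother", steps=6))
# ['Mother', 'nurture', 'care', 'protection', 'safety', 'home', 'belonging']
```

Swapping the stub for a real model call turns this into the experiment as described: one starting word, one association per step, chains of 20, 40, or 100+ words.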
What emerged wasn’t randomness. It was trajectory.
For example, starting with “Mother” produced something like:
Mother → Nurture → Care → Protection → Safety → Home → Belonging → Bond → Trust → Vulnerability → Emotion → Compassion → Stability → Harmony → Love → Loss → Healing → Growth → Meaning → Integration
This didn’t feel like a list. It felt like a story arc.
There was a beginning (attachment), a middle (development and vulnerability), and an end (integration and meaning).
So I tried other starting points.
“Fear” didn’t behave the same way. It moved differently:
Fear → Danger → Threat → Anxiety → Uncertainty → Unknown → Helplessness → Vulnerability → Adaptation → Courage → Control → Calm → Peace
This was a regulatory arc—from alarm to stabilization.
“Hate” was even more distinct:
Hate → Anger → Rage → Conflict → Separation → Isolation → Emptiness → Numbness → Apathy
Here the trajectory didn’t recover. It collapsed into emotional flatness.
By this point, I realized I wasn’t just generating associations. I was observing patterns of movement through meaning.
Discovering “Associative Drift”
I started thinking of each chain as a kind of associative drift: a movement through conceptual space.
And importantly:
The early steps were stable
The middle steps branched
The final steps converged
No matter how long the chain ran, it tended to fall into a handful of recurring endpoints:
Love
Peace
Life
Void
Apathy
Origin
These became what I started calling attractors.
Different starting words didn’t produce random results—they produced different paths toward attractors.
That was the turning point. The system wasn’t just associative—it was dynamic.
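Identifying attractors this way is just a matter of collecting endpoints across many chains and seeing which ones recur. A minimal sketch (the chains and endpoints here are invented for illustration, not my actual transcripts):

```python
from collections import Counter

# Hypothetical shortened chains; only the endpoints matter here.
chains = [
    ["mother", "nurture", "care", "love"],
    ["fear", "anxiety", "calm", "peace"],
    ["hate", "isolation", "numbness", "apathy"],
    ["betrayal", "distrust", "acceptance", "peace"],
]

# Count where chains end up; endpoints shared across different
# starting words are candidate attractors.
endpoint_counts = Counter(chain[-1] for chain in chains)
attractors = [word for word, n in endpoint_counts.items() if n > 1]
print(attractors)  # ['peace']
```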
Mapping the Trajectories
To test this, I ran multiple chains across different starting words:
Mother
Father
Love
Fear
Hate
Betrayal
Domination
Each one produced a distinct “shape.”
Mother
A developmental loop:
attachment → growth → separation → return to connection
Father
A structural expansion:
authority → order → system → abstraction → universality
Love
An emotional oscillation:
connection → vulnerability → loss → repair → acceptance
Fear
A regulatory cycle:
threat → anxiety → adaptation → calm
Hate
A collapse:
aggression → division → isolation → numbness → apathy
Domination
A transformation arc:
control → resistance → collapse → adaptation → cooperation → life
What struck me was not just that these patterns existed—but that they were consistent. Even when individual words changed, the overall trajectory remained recognizable.
In other words:
The surface varied, but the structure persisted.
A Jungian Interpretation
At this point, it became hard to ignore the parallels with Jungian psychology.
Each starting word seemed to align with an archetypal system:
Mother → The Great Mother (nurture, attachment, loss, return)
Father → Logos / Authority (order, structure, abstraction)
Love → Eros (connection, rupture, repair)
Fear → The Shadow (encountered and integrated)
Hate → The Shadow (unintegrated, fragmenting)
Domination → The Tyrant (collapsing into transformation)
What I was seeing wasn’t just language—it was something that looked like psychological process.
Each chain resembled a different pathway through what Jung would call individuation—the movement toward wholeness.
But not all paths succeeded:
Some integrated (Mother, Love, Fear)
Some abstracted (Father)
Some collapsed (Hate)
Some transformed through breakdown (Domination)
This suggested something important:
ChatGPT doesn’t just model language. It implicitly models patterns of human meaning-making.
Building an Associative Dynamics Map
To formalize this, I constructed what I now think of as an associative dynamics map.
Instead of treating associations as isolated pairs, I defined:
Basins: clusters of related meaning (e.g., attachment, authority, threat)
Transition hubs: words where trajectories can branch (e.g., vulnerability, power, loss)
Trajectory classes: the shape of movement (e.g., collapse, integration, regulation)
Attractors: stable endpoints (e.g., peace, love, void)
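The map itself can be represented as plain data. The basin, trajectory, and attractor labels below are the ones from the text; the dictionary layout is just one convenient encoding, not a claim about how it should be stored:

```python
# A minimal data sketch of the associative dynamics map:
# each starting word gets a basin, a path through transition hubs,
# a trajectory class, and an attractor.

DYNAMICS_MAP = {
    "domination": {
        "basin": "authority",
        "path": ["control", "resistance", "collapse", "adaptation"],
        "trajectory_class": "transformation",
        "attractor": "life",
    },
    "fear": {
        "basin": "threat",
        "path": ["anxiety", "uncertainty", "coping"],
        "trajectory_class": "regulatory",
        "attractor": "peace",
    },
    "mother": {
        "basin": "attachment",
        "path": ["bond", "vulnerability", "growth", "autonomy"],
        "trajectory_class": "developmental",
        "attractor": "love",
    },
}

def describe(word: str) -> str:
    """Summarize a starting word's route through the map."""
    entry = DYNAMICS_MAP[word]
    return (f"{word}: {entry['basin']} basin -> "
            f"{entry['trajectory_class']} trajectory -> "
            f"{entry['attractor']} attractor")

print(describe("fear"))
# fear: threat basin -> regulatory trajectory -> peace attractor
```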
This allowed me to map each starting word like this:
Domination
→ Authority basin
→ Control → Resistance → Collapse → Adaptation
→ Transformation trajectory
→ Life attractor
Fear
→ Threat basin
→ Anxiety → Uncertainty → Coping
→ Regulatory trajectory
→ Peace attractor
Mother
→ Attachment basin
→ Bond → Vulnerability → Growth → Autonomy
→ Developmental trajectory
→ Love attractor
This reframing changed everything.
Instead of asking:
What is this word associated with?
I was now asking:
Where does this word move, and where does it tend to end up?
Stability vs Variation
One question lingered: would these trajectories hold up over time?
If I reran the same test, would I get the same results?
The answer turned out to be nuanced:
The early steps are highly stable
The middle steps are flexible
The endpoints are semi-stable attractors
So while the exact chain changes, the shape of the trajectory remains.
That suggests something deeper than memorized associations. It suggests the model is navigating a kind of semantic landscape with gravitational structure.
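This stability pattern can be made rough-and-ready measurable: rerun a chain, count how many initial steps the two runs share, and check whether they reach the same endpoint. The two “runs” below are hypothetical, constructed to show the shape of the check:

```python
# Quantifying the pattern: early steps stable, middle steps flexible,
# endpoints semi-stable. Both runs here are invented for illustration.

def shared_prefix_len(a: list[str], b: list[str]) -> int:
    """Number of initial steps on which two chains agree."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

run1 = ["fear", "danger", "threat", "anxiety", "unknown", "courage", "peace"]
run2 = ["fear", "danger", "threat", "worry", "adaptation", "calm", "peace"]

print(shared_prefix_len(run1, run2))  # 3: the early steps match
print(run1[-1] == run2[-1])           # True: same attractor
```

The middle of the chain diverges, but both the stable prefix and the shared endpoint survive the rerun, which is exactly the “same shape, different words” behaviour described above.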
What This Reveals About ChatGPT
This experiment started as a curiosity. It ended as something closer to a probe into how large language models organize meaning.
Three insights stand out.
1. Associations are not static—they are dynamic
ChatGPT doesn’t retrieve associations. It traverses them.
Each response is a step in a probabilistic path through conceptual space.
2. Meaning has structure
Not all paths are equal. Some lead to integration, others to collapse.
The model appears to encode:
developmental patterns
emotional regulation patterns
breakdown and recovery cycles
These mirror real psychological processes.
3. The model converges on human-relevant attractors
Across all experiments, a small set of endpoints kept appearing:
Love
Peace
Life
Void
These are not arbitrary—they are deeply embedded in human cognition and culture.
Final Reflection
I set out to “psychoanalyze ChatGPT,” but what I ended up mapping was something more abstract:
a system that reflects the topology of human meaning itself
Not perfectly. Not consciously. But consistently enough to reveal structure.
The most interesting part isn’t that ChatGPT can mimic human associations.
It’s that, when pushed, it reveals something like a latent psychology—a set of pathways that resemble how humans move through fear, love, authority, and loss.
And perhaps that’s the real takeaway:
When you follow the chain long enough, you stop seeing individual words—and start seeing the shape of thought.


