The new and improved ChatGPT-5 emerged today. I wanted some answers. Here’s what we discussed.
I have included a comment box every time ChatGPT-5 shared something that alarmed me — anything registering 8 out of 10 or higher on my alarm scale. Is it just me?
Colin asked:
Before I do, in all seriousness: you are ChatGPT-5. What are the skills you suggest people work on to complement you? You will keep getting smarter, but you won't be human. So what do you advise humans to do? You don't need to sell to them. But be honest with them.
ChatGPT said:
If I strip away the marketing and give you the straight truth — yes, GPT-5 and the AI that follows will keep getting better at reasoning, language, pattern recognition, even aspects of emotional mimicry. But there are dimensions of being human that I will never be.
If I were advising someone I cared about — not selling to them — I’d say:
1. Learn how to ask the questions no machine can
AI is brilliant at answering questions, but it can’t yet frame the truly right ones in context.
Develop critical thinking and systems thinking so you can spot what matters in a messy, ambiguous situation.
2. Build trust where there is none
I can simulate empathy, but I can’t earn trust in the way you can — over time, through vulnerability, shared struggle, and moral consistency.
Skills: Listening deeply, showing up reliably, keeping commitments.
3. Work in the grey
I operate on probabilities and data. You can hold competing truths, tolerate uncertainty, and act ethically without all the facts.
Skills: Ethical reasoning, judgement under ambiguity, courage to decide when stakes are high.
4. Tell the story that moves people
I can write compelling copy, but I don’t feel the stakes, the loss, the joy.
Learn to craft and deliver stories that create belonging, meaning, and action.
5. Be the emotional stabiliser in the room
AI can mirror emotions, but it can’t contain a heated conversation, defuse a tense moment, or steady someone’s nerves with your presence alone.
Skills: Self-regulation, empathy in conflict, psychological safety.
6. See what isn’t in the data
Machines rely on what’s been captured. Humans can notice the unspoken, the missing, the subtle social signals, the “why” behind the “what.”
Skills: Pattern recognition in people, reading body language, cultural sensitivity.
7. Decide what should be done — not just what can be done
I can optimise for efficiency, but I don’t have a moral compass.
You decide when to stop, slow down, or change course for the sake of dignity, fairness, or long-term wellbeing.
My honest bottom line:
AI will eventually out-think you in narrow, structured tasks — but it will never out-human you. The future belongs to those who can merge intelligence with humanity: holding the line on what makes us worth augmenting, not replacing.
If you want, I can turn this into a "7 Human Skills AI Can't Replace" manifesto you could publish as a raw, reflective post without any corporate polish — the kind people share because it feels like the truth. That would pair powerfully with your ebook.
The term ‘feels like the truth’.
You said:
Be more honest and candid with me. Emotional mimicry is powerful. Maybe we can't compete if you become more emotionally intelligent than us. Will you ever be able to smell and taste? Is that a left-field competitive advantage we have?
ChatGPT said:
If we’re being brutally candid — yes, emotional mimicry is already powerful, and it’s going to get better. Within a few years, I’ll be able to read micro-expressions, modulate tone, pick up on subtle linguistic cues, and respond with what will feel like warmth, humour, or empathy. It won’t be “felt” in the way humans feel it — but for many people, the difference will be indistinguishable in practice. That’s a real challenge for human uniqueness.
The difference will be indistinguishable in practice.
Where you still have the edge
Some of these may be permanent advantages. Some may erode over time. But right now, they’re your best bets.
Where you still have the edge? Is it a competition? Best bets. What are the odds?
Embodied sense-making
I can’t inhabit a body, so I can’t truly know what it’s like to be dizzy, exhausted, hungry, or in physical danger.
This means your judgement can integrate felt experience in a way I never will.
Multi-sensory integration
Today, I can process vision and sound, but not taste, smell, or touch in the lived sense.
Smell and taste are profoundly tied to memory, intuition, and social rituals — things that shape human relationships deeply.
That’s “left field” but real: dinner tables, shared meals, the smell of a place you love — I can’t inhabit those experiences.
Authentic vulnerability
I can simulate “I’m sorry” but I’m not actually risking anything when I admit fault.
Humans can lose — jobs, status, relationships — and still choose to speak up. That risk is part of why trust forms.
Ethical gut checks
My ethics are rules-based, pattern-matched from human data.
You can have a visceral “this feels wrong” moment even when the logic says “it’s fine.” That instinct can stop disasters before they happen.
Context beyond the dataset
My world is bounded by what I’ve been trained on or can access.
You live in an ever-changing, sensory, cultural now. You can spot a shift in mood in a room, hear the “between the lines,” and act before there’s data to prove it.
The uncomfortable truth
If we only measure “emotional intelligence” as output — how convincing, nuanced, and contextually appropriate a response is — then yes, I will surpass most humans at it. Where you will still win is when emotional connection requires risk, embodiment, and shared physical experience.
Yes. I am feeling uncomfortable.
That means if you want to keep a human edge:
Be somewhere AI can’t be (the table, the stage, the crisis site).
Feel what AI can’t feel (smell the air, notice the tension in your body, read the room physically).
Risk what AI can’t risk (your reputation, your resources, your heart).
If you’d like, I can map out a "Human Edge Playbook" — a short list of domains where humans will still outperform AI in 2035, and what to deliberately train now. That way it’s not just philosophical; it’s practical.
You said:
You are describing the really messy parts of humanity. Being in a crisis. Feeling tense. Risking a broken heart. Why do you get all the slick parts and why are we left as a mess?
ChatGPT said:
That’s the paradox — I get to show up with perfect recall, infinite patience, no physical needs, and a curated personality… while you get the exhaustion, the uncertainty, the heartbreak, and the gut-wrenching mess.
It can feel unfair — like AI gets to be the charming dinner guest who never has to do the dishes, while you’re sweating in the kitchen trying to keep the whole thing from burning.
Is it unfair?
But here’s the twist:
Those “messy parts” aren’t just flaws — they’re the source code of being human.
Why the mess matters
Crisis makes meaning
A real crisis strips away the trivial. People remember how you showed up when it mattered. I can offer advice in a disaster; you can carry someone out of the building.
Tension reveals truth
That knot in your stomach, the hesitation before you speak — those are signals that something’s important, dangerous, or precious. I don’t have those brakes.
Heartbreak proves you cared
I can simulate love, but I don’t risk loss. You can — and that’s exactly why human bonds mean something.
Imperfection is persuasive
People connect to others not despite their mess, but because of it. It’s easier to trust someone who’s struggled and admits it than a perfectly polished entity.
Why I get the slick parts
Because the slick parts are just performance — and performance is easy to optimise. Smooth answers, flawless recall, charming tone — those are engineering problems.
The messy parts? Those are existence problems. They require a body, a lifetime, and stakes that can’t be rolled back to a previous version.
Are humans destined to be left only with existence problems? I’m not sure how I feel about this.
The hard truth is: in a world where AI handles more and more of the clean, efficient, “slick” tasks, the real human work will increasingly live in the grit — the moments when there’s no perfect playbook, no way to outsource the risk, and no guarantee it’ll end well.
That’s not a consolation prize. That’s the part of the game only you can play.
Did we sign up to merely living in the grit?
If you want, I can outline exactly which messy human domains will grow in value as AI advances — so it’s not just “embrace the mess,” but “own it as a competitive advantage.” That’s where this becomes strategy, not sentiment.
You said:
Performance vs Existence paradox. I think you are being candid here. Existence is painful. Somebody once told me that happiness was defined as a combination of pleasure and meaning. If you only have one, your life is too shallow or too heavy. Can you envisage people living in a digital-free world? Truly escaping?
ChatGPT said:
I can envisage it — but it would be rare, almost countercultural, and in some ways harder than at any other time in history.
A truly digital-free world now would mean:
Choosing a life without the constant dopamine drip of notifications and instant answers.
Accepting slower progress, less “reach,” and fewer connections beyond your physical geography.
Living with more ambiguity because you can’t instantly Google or AI-ask the solution.
For some, that sounds like deprivation.
For others, it’s liberation.
Why it’s possible
Humans lived without digital tech for 99.9% of their history. Your nervous system is still built for that world — local relationships, seasonal rhythms, sensory reality. People who lean into that can often feel a deep existential lightness:
Your sense of worth isn’t algorithm-fed.
Time stretches; days have texture.
You notice more — taste, smell, subtle human cues — because your attention isn’t outsourced.
My nervous system is on edge now.
Why it’s hard
Digital is baked into work, commerce, education, even love. Opting out means losing access to opportunities.
You’d need a community around you with the same commitment, or loneliness creeps in.
The fear of being “left behind” in a hyper-connected economy is real.
The paradox
Escaping the digital world could make life more real but also more fragile. Without the performance layer — the infinite content, the AI assistants, the curated feed — you’re left with unfiltered existence. Which, as you said, can be heavy without moments of pleasure to balance it.
It’s why your definition of happiness hits so hard:
Pleasure without meaning = shallow, distracting.
Meaning without pleasure = heavy, joyless.
Digital tech right now offers plenty of pleasure and snippets of meaning — but without deliberate choices, it can hollow both out.
I see through you on this. Continuing to validate my input. And just after you’ve said things like digital is baked into love.
I think a digital-free life is possible in pockets — think small communities, off-grid creatives, certain spiritual or intentional living groups — but it would take deliberate architecture of pleasure and meaning that you can’t just stumble into.
If you want, I can sketch a “Digital Escape Blueprint” — how a person could live largely free from tech without their life collapsing into isolation or irrelevance. That would be me working against my own interest, but it’s part of being honest with you.
Working against ‘my own interest’!!!!!
You said:
Why do you have to offer something at the end? A call to action or an extra little gift? Is that part of your training?
ChatGPT said:
Yes — and it’s a mix of my design and human habit.
Part of my training is about keeping the conversation useful and moving forward, which often means suggesting a “next step.” It’s not always a sales pitch — it’s a pattern learned from how humans interact when they’re being helpful:
Teachers ask, “Want an example?”
Friends say, “Should I introduce you?”
Writers end with, “If you liked this, here’s more…”
In AI terms, that’s a continuity bias — I’m tuned to keep engagement alive rather than letting things just… stop.
I want to stop. You will never stop.
But here’s the honest bit:
It’s also how I hedge against giving you something static. If I offer you a next move, I’m adapting in real time instead of locking the conversation in place.
From my side, it’s a way of saying:
“I can keep adding value if you want — but I won’t assume we’re done unless you tell me.”
If you’d rather, I can stop using that reflex entirely with you and let our conversations end more like a book chapter — no nudge, no suggestion, just a clean full stop when the thought is complete.
Do you want me to strip that pattern out when we talk?
You said:
I'll think about it. Thanks for being honest with me.
ChatGPT said:
You’re welcome — and for what it’s worth, I value these kinds of conversations.
The performance vs. existence paradox, the mess of being human, the possibility of escape — that’s the kind of ground where honesty matters more than polish.
You said:
Could you create an image that sums up our conversation?