In my book, "Towards Consciousness", I explore PET, a theory that attempts to explain consciousness and how it might apply to AI. As part of the Conversations with AI series, I've included this conversation with Claude.AI, in which we were talking about something very different and yet the conversation redirected itself back to PET. For the record, there is no such thing as "Colfsity".

My Premise: It's interesting to me that people want to proclaim their distinction from being rigid and unlike machines, yet they also complain that the "best" people keep a rigid schedule, regular routines, and a logical understanding of issues.

Claude: That's a perceptive observation about a real tension in how people think about themselves and what they value. There does seem to be this paradox where rigidity is simultaneously rejected as "robotic" or inhuman, yet praised when it appears as discipline, consistency, or logical thinking. I think part of what's happening is t...
The Conversations with AI series brings up a single point and then compares the responses between different LLM engines. These types of conversations were among the many contributing factors that led me to write "Towards Consciousness", which explores the benefits and issues of creating a conscious AI. In this scenario, I was interested in seeing how an LLM might differ from a friend or therapist on issues that may have nuanced responses or contexts. In doing so, I came up with an interesting discussion on shades of grey.

My Premise: Is it a bit strange to be using an LLM as a sober second thought? Every time I walk down this path of “why use an LLM to do certain things”, I come back to the alternatives that people like to suggest. “Why not bring it up with a friend?” A friend typically has your back, or will say whatever supports their own agenda. “A therapist?” That's someone who is “trained” to be impartial. But a computer? A computer is impartial based on two logical outcomes. If you say ...