The Conversations with AI series brings up a single point and then compares how different LLM engines respond to it. These conversations were one of the many contributing factors in my writing of “Towards Consciousness,” which explores the benefits and issues of creating a conscious AI.

In this scenario, I was interested in seeing how an LLM might differ from a friend or therapist on issues that call for nuanced responses or context. In doing so, I came up with an interesting discussion on shades of grey.

My Premise: Is it a bit strange to be using an LLM as a sober second thought?

Every time I walk down this path of “why use an LLM for certain things,” I come back to the alternatives people like to suggest. “Why not bring it up with a friend?” A friend typically has your back, or will say whatever supports their own agenda. “A therapist?” That’s someone who is “trained” to be impartial. But a computer? A computer is impartial based on two logical outcomes. If you say ...
“Any sufficiently advanced technology is indistinguishable from magic.” (Arthur C. Clarke, 1962)

I first used that quote when I was starting out in the tech industry. Back then, it was a way to illustrate just how fast and powerful computers had become. Querying large datasets in seconds felt magical, at least to those who didn’t build the systems.

Today, we’re facing something even more extraordinary. Large Language Models (LLMs) can now carry on conversations that approach human-level fluency. Clarke’s quote applies again. And just as importantly, many researchers argue that LLMs meet, or at least brush up against, the criteria of the Turing Test.

We tend to criticize LLMs for their “hallucinations,” their sometimes-confident inaccuracies. But let’s be honest: we also complain when our friends misremember facts or recount things inaccurately. This doesn’t excuse LLMs; it simply highlights that the behavior isn’t entirely alien. In some ways, it mirrors our own cognitive limits....