Posts

Showing posts from 2025

Friend vs Therapist vs LLM: Shades of Grey

The conversations with AI series brings up a single point and then compares how different LLM engines respond to it. These conversations were one of the many contributing factors behind my writing "Towards Consciousness", which explores the benefits and issues of creating a conscious AI. In this scenario, I was interested in seeing how an LLM might differ from a friend or therapist on issues that may have nuanced responses or contexts. In doing so, I came up with an interesting discussion on shades of grey. My Premise: Is it a bit strange to be using an LLM as a sober second thought? Every time I walk down this path of “why use an LLM to do certain things”, I come back to the alternatives that people like to suggest. “Why not bring it up with a friend?” A friend typically has your back or will say whatever supports their own agenda. “A therapist?” That’s someone who is “trained” to be impartial. But a computer? A computer is impartial based on two logical outcomes. If you say ...

When A Machine Starts To Care

“Any sufficiently advanced technology is indistinguishable from magic.” Arthur C. Clarke (1962). I first used that quote when I was starting out in the tech industry. Back then, it was a way to illustrate just how fast and powerful computers had become. Querying large datasets in seconds felt magical—at least to those who didn’t build them. Today, we’re facing something even more extraordinary. Large Language Models (LLMs) can now carry on conversations that approach human-level fluency. Clarke’s quote applies again. And just as importantly, many researchers argue that LLMs meet—or at least brush up against—the criteria of the Turing Test. We tend to criticize LLMs for their “hallucinations,” their sometimes-confident inaccuracies. But let’s be honest: we also complain when our friends misremember facts or recount things inaccurately. This doesn’t excuse LLMs—it simply highlights that the behavior isn’t entirely alien. In some ways, it mirrors our own cognitive limits....

Respect

Respect is something humans give to each other through personal connection. It’s the bond that forms when we recognize something—or someone—as significant, relatable, or worthy of care. This connection doesn’t have to be limited to people. There was an article recently that described the differing attitudes towards AI tools such as ChatGPT and Google Gemini (formerly Bard). Some people treat them like a standard search while others form a sort of personal relationship — being courteous, saying “please” and “thank you”. Occasionally, people share extra details unrelated to their question, like, ‘I’m going to a wedding. What flower goes well with a tuxedo?’ Does an AI “care” how you respond to it? Of course not — it reflects the patterns it’s trained on. Yet our interaction shapes how these tools evolve, and that influence is something we should take seriously. Most of us have expressed frustration when an AI “hallucinates”. Real or not, the larger issue is that we have hi...