Posts

Showing posts from 2025

When A Machine Starts To Care

“Any sufficiently advanced technology is indistinguishable from magic.” (Arthur C. Clarke, 1962) I first used that quote when I was starting out in the tech industry. Back then, it was a way to illustrate just how fast and powerful computers had become. Querying large datasets in seconds felt magical—at least to those who didn’t build the systems doing the querying. Today, we’re facing something even more extraordinary. Large Language Models (LLMs) can now carry on conversations that approach human-level fluency. Clarke’s quote applies again. And just as importantly, many researchers argue that LLMs meet—or at least brush up against—the criteria of the Turing Test. We tend to criticize LLMs for their “hallucinations,” their sometimes-confident inaccuracies. But let’s be honest: we also complain when our friends misremember facts or recount things inaccurately. This doesn’t excuse LLMs—it simply highlights that the behavior isn’t entirely alien. In some ways, it mirrors our own cognitive limits....

Respect

Respect is something humans give to each other through personal connection. It’s the bond that forms when we recognize something—or someone—as significant, relatable, or worthy of care. This connection doesn’t have to be limited to people. A recent article described the differing attitudes people bring to AI tools such as ChatGPT and Google Gemini (formerly Bard). Some treat them like a standard search engine, while others form a sort of personal relationship—being courteous, saying “please” and “thank you.” Occasionally, people share extra details unrelated to their question, like, “I’m going to a wedding. What flower goes well with a tuxedo?” Does an AI “care” how you respond to it? Of course not—it reflects the patterns it’s trained on. Yet our interactions shape how these tools evolve, and that influence is something we should take seriously. Most of us have expressed frustration when an AI “hallucinates.” Real or not, the larger issue is that we have hi...