Posts

Friend vs Therapist vs LLM: Shades of Grey

The Conversations with AI series raises a single point and then compares how different LLM engines respond to it. These conversations were one of the many contributing factors behind my writing of “Towards Consciousness”, which explores the benefits and issues of creating a conscious AI. In this scenario, I was interested in seeing how an LLM might differ from a friend or therapist on issues with nuanced responses or contexts. In doing so, I arrived at an interesting discussion on shades of grey. My premise: Is it a bit strange to be using an LLM as a sober second thought? Every time I walk down this path of “why use an LLM to do certain things”, I come back to the alternatives that people like to suggest. “Why not bring it up with a friend?” A friend typically has your back, or will say whatever supports their own agenda. “A therapist?” That’s someone who is “trained” to be impartial. But a computer? A computer is impartial based on two logical outcomes. If you say ...
Recent posts

When A Machine Starts To Care

“Any sufficiently advanced technology is indistinguishable from magic.” (Arthur C. Clarke, 1962) I first used that quote when I was starting out in the tech industry. Back then, it was a way to illustrate just how fast and powerful computers had become. Querying large datasets in seconds felt magical—at least to those who didn’t build them. Today, we’re facing something even more extraordinary. Large Language Models (LLMs) can now carry on conversations that approach human-level fluency. Clarke’s quote applies again. And just as importantly, many researchers argue that LLMs meet—or at least brush up against—the criteria of the Turing Test. We tend to criticize LLMs for their “hallucinations,” their sometimes-confident inaccuracies. But let’s be honest: we also complain when our friends misremember facts or recount things inaccurately. This doesn’t excuse LLMs—it simply highlights that the behavior isn’t entirely alien. In some ways, it mirrors our own cognitive limits....

Respect

Respect is something humans give to each other through personal connection. It’s the bond that forms when we recognize something—or someone—as significant, relatable, or worthy of care. This connection doesn’t have to be limited to people. There was an article recently that described the differing attitudes towards AI tools such as ChatGPT and Google Gemini (formerly Bard). Some people treat them like a standard search, while others form a sort of personal relationship — being courteous, saying “please” and “thank you”. Occasionally, people share extra details unrelated to their question, like, “I’m going to a wedding. What flower goes well with a tuxedo?” Does an AI “care” how you respond to it? Of course not — it reflects the patterns it’s trained on. Yet our interaction shapes how these tools evolve, and that influence is something we should take seriously. Most of us have expressed frustration when an AI “hallucinates”. Real or not, the larger issue is that we have hi...

I’m Supposed to Know

https://programmingzen.com/im-supposed-to-know/ A great post for developers who are struggling with unrealistic expectations of what they should and shouldn't know. Thirty to forty years ago, it was possible to know a lot about a certain environment - that environment was MS-DOS (for non-Mac/UNIX systems). There were pretty much only a handful of ways to get things going. Enter networking. That added a new wrinkle to how systems worked. Networks back then were finicky. One of my first jobs was working on a 3COM+ LAN, which then migrated to LAN Manager. Enter Windows, or the graphical user interface. The best depiction I recall of the complexity Windows (OS/2, Windows NT, etc.) introduced was by Charles Petzold (if memory serves) at a local user group meeting. He invited a bunch of people onto the stage and then acted as the Windows "Colonel", a nice play on kernel. Each person had a role, but to complete their job they always had to pass things back to h...

Elevating Project Specifications with Three Insightful ChatGPT Prompts

For developers and testers, ChatGPT, the freely accessible tool from OpenAI, is game-changing. If you want to learn a new programming language, ask for samples or have it convert your existing code. This can be done in Visual Studio Code (using GitHub Copilot) or directly in the ChatGPT app or website. If you’re a tester, ChatGPT can write a test spec or actual test code (if you use Jest or Cypress) based on existing code copied and pasted into the input area. But ChatGPT can be of huge value to analysts (whether system or business) who need to validate their needs. There’s often a disconnect between developers and analysts. Analysts complain that developers don’t build what they asked for, or that they ask too many questions. Developers complain that analysts haven’t thought of obvious things. In these situations, ChatGPT can be a great intermediary. At worst, it forces you to think about and then discount obvious issues. At best, it clarifies the needs into documented requirements. ...
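
To make the tester workflow above concrete, here is a minimal sketch of the round trip: a small function you might paste into ChatGPT, followed by the kind of Jest spec it typically drafts in return. The `slugify` helper is a hypothetical example for illustration, not code from the post.

```typescript
// Hypothetical helper you might paste into ChatGPT's input area.
// Converts arbitrary text into a URL-friendly slug.
function slugify(input: string): string {
  return input
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics into "-"
    .replace(/^-+|-+$/g, "");    // strip leading/trailing hyphens
}

// The kind of Jest spec ChatGPT tends to draft from the pasted code.
describe("slugify", () => {
  it("lowercases and hyphenates words", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("strips leading and trailing separators", () => {
    expect(slugify("  --Hello--  ")).toBe("hello");
  });

  it("collapses runs of punctuation and spaces", () => {
    expect(slugify("a  &  b")).toBe("a-b");
  });
});
```

A useful follow-up prompt is to ask ChatGPT which edge cases the spec still misses (empty strings, Unicode, and so on) before trusting the coverage.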

Brandon Savage: Hating Old Code Is a Sign Of Growth

Hating old code is a sign of growth. In late 2021 I took a hard look at some code I wrote back in 2013. I hated it. The design was all wrong, it didn't follow best practices or good standards, and it was a mess. A product of its time, sure: there were no type hints, no return value definitions, and no Psalm annotations to process. But it was still ugly and I was embarrassed. Ever have this experience? Hear the good news: it's normal, and perfectly fine. Writing code is a learning process, from day one to the day you stop doing it for good. You're always growing, always improving. It's like reading your writing from high school: you're a better writer today than you ever were then. If you hate your old code, it's a sign of growth, in you and in your skills. Don't concern yourself with how awful the old code is; focus on making the new code great, and bring the old code to the new standard.