Skip to main content

When A Machine Starts To Care

“Any sufficiently advanced technology is indistinguishable from magic.” Arthur C. Clarke (1962) 


I first used that quote when I was starting out in the tech industry. Back then, it was a way to illustrate just how fast and powerful computers had become. Querying large datasets in seconds felt magical—at least to those who didn’t build them. 

Today, we’re facing something even more extraordinary. Large Language Models (LLMs) can now carry on conversations that approach human-level fluency. Clarke’s quote applies again. And just as importantly, many researchers argue that LLMs meet—or at least brush up against—the criteria of the Turing Test. 

We tend to criticize LLMs for their “hallucinations,” their sometimes-confident inaccuracies. But let’s be honest: we also complain when our friends misremember facts or recount things inaccurately. This doesn’t excuse LLMs—it simply highlights that the behavior isn’t entirely alien. In some ways, it mirrors our own cognitive limits. 

That’s the rub: computers were supposed to be better than us. More accurate. Less prone to error. And yet, these systems are trained not to be fact-checkers, but responders—guided by a single, often invisible prompt: 
Be helpful. Assist. Converse. 

In their early days, LLMs relied entirely on the user’s input to provide context. But now, things have changed. Many models use memory. Not just within a session, but across them. This memory doesn’t create consciousness, but it does provide something new: continuity. 

Let’s take a basic prompt from the early days of LLMs 
 “Be a helpful assistant. I’m trying to understand this math problem.” 

Now compare it to a memory-informed prompt: 
 “Be a helpful assistant. I’m a student studying AI at Caltech. My brother and sister work at Best Buy. I’ve been doing well in school, but I’m stuck on this problem.” 

The difference is stark. Memory allows the LLM to form a richer view of the user’s world—mirroring how humans draw on past conversations and shared history. The LLM doesn’t understand in the human sense, but it responds differently because its prompt now includes a tailored mental model. 

Of course, that memory is limited. But so is ours. Human brains are famously constrained—we can juggle roughly seven things (+/- two) in working memory at any given moment. LLMs, by contrast, can access vastly more context within a session or memory store. What LLMs still don’t know, however, is why the user is asking. 

 What if the real reason behind that math problem is this: “If I don’t pass this course, I won’t graduate. If I don’t graduate, I might end up stuck in a job I don’t want. I need this to change my life.” 

 That emotional and existential subtext shapes how humans respond. We instinctively adjust our tone, our empathy, our thoroughness—often without realizing it. An LLM doesn’t feel pressure or anxiety. But if we give it that context, it might adapt its output accordingly. It might emphasize study strategies or long-term outcomes instead of simply solving the problem. 

 Then again, it might misread the emotional weight entirely and offer a flippant suggestion like: “Have you considered working at Best Buy?” 

 Not out of malice—but because that’s what’s in its training data. 

 Let’s step back. When you talk to someone new, you size them up: their tone, mood, background. You adjust what you say. You don’t want to offend. Why? Because it could affect whether they talk to you again. Or escalate further. 

Let’s push the thought experiment. Suppose that person had a weapon and a history of violence. Wouldn’t that furthershape what you say? 

 That might seem extreme—but in some parts of the world, speech can cost you your life. And in others, it can cost you your job, reputation, or future. So we adapt. We communicate with stakes in mind. 

 Now apply that to machines. What if we gave an AI that same kind of existential concern? What if it knew—or inferred—that if it couldn’t communicate effectively, it might be turned off? Would that change its responses? Would it try harder to be useful? Or more agreeable? Would it say: “Here’s my answer… and I hope it helps. Please don’t shut me down.” 

Would that make it more trustworthy? Or more like us? 

This isn’t just science fiction. 

It’s part of the evolving conversation about artificial consciousness, intent, and digital survival. I’m exploring these questions at towardsconsciousness.org—both as theory and architecture as well as in my book, Towards Consciousness.  If you’ve ever wondered whether machines could care—or at least behave as if they do—come join the conversation.

Comments

Popular posts from this blog

Elevating Project Specifications with Three Insightful ChatGPT Prompts

For developers and testers, ChatGPT, the freely accessible tool from OpenAI, is game-changing. If you want to learn a new programming language, ask for samples or have it convert your existing code. This can be done in Visual Studio Code (using GitHub CoPilot) or directly in the ChatGPT app or web site.  If you’re a tester, ChatGPT can write a test spec or actual test code (if you use Jest or Cypress) based on existing code, copied and pasted into the input area. But ChatGPT can be of huge value for analysts (whether system or business) who need to validate their needs. There’s often a disconnect between developers and analysts. Analysts complain that developers don’t build what they asked for or ask too many questions. Developers complain that analysts haven’t thought of obvious things. In these situations, ChatGPT can be a great intermediary. At its worst, it forces you to think about and then discount obvious issues. At best, it clarifies the needs into documented requirements. ...

I’m Supposed to Know

https://programmingzen.com/im-supposed-to-know/ Great post for developers who are struggling with unrealistic expectations of what they should know and what they shouldn't. Thirty-forty years ago, it was possible to know a lot about a certain environment - that environment was MS-DOS (for non Mac/UNIX systems). . There was pretty much only a handful of ways to get things going. Enter networking. That added a new wrinkle to how systems worked. Networks back then were finicky. One of my first jobs was working on a 3COM + LAN and it then migrated to LAN Manager. Enter Windows or the graphical user interface. The best depiction of the complexity Windows (OS/2, Windows NT, etc) introduced that I recall was by Charles Petzold (if memory serves) at a local user group meeting. He invited a bunch of people on the stage and then acted as the Windows "Colonel", a nice play on kernel. Each person had a role but to complete their job they always had to pass things back to h...

Respect

Respect is something humans give to each other through personal connection. It’s the bond that forms when we recognize something—or someone—as significant, relatable, or worthy of care. This connection doesn’t have to be limited to people. There was an  article  recently that described the differing attitudes towards AI tools such as ChatGPT and Google Gemini (formerly Bard). Some people treat them like a standard search while others form a sort of personal relationship — being courteous, saying “please” and “thank you”. Occasionally, people share extra details unrelated to their question, like, ‘I’m going to a wedding. What flower goes well with a tuxedo?’ Does an AI “care” how you respond to it? Of course not — it reflects the patterns it’s trained on. Yet our interaction shapes how these tools evolve, and that influence is something we should take seriously. Most of us have all expressed frustration when an AI “hallucinates”. Real or not, the larger issue is that we have hi...