“Any sufficiently advanced technology is indistinguishable from magic.” Arthur C. Clarke (1962)
I first used that quote when I was starting out in the tech industry. Back then, it was a way to illustrate just how fast and powerful computers had become. Querying large datasets in seconds felt magical—at least to those who didn’t build them.
Today, we’re facing something even more extraordinary. Large Language Models (LLMs) can now carry on conversations that approach human-level fluency. Clarke’s quote applies again. And just as importantly, many researchers argue that LLMs meet—or at least brush up against—the criteria of the Turing Test.
We tend to criticize LLMs for their “hallucinations,” their sometimes-confident inaccuracies. But let’s be honest: we also complain when our friends misremember facts or recount things inaccurately. This doesn’t excuse LLMs—it simply highlights that the behavior isn’t entirely alien. In some ways, it mirrors our own cognitive limits.
That’s the rub: computers were supposed to be better than us. More accurate. Less prone to error. And yet, these systems are trained not to be fact-checkers, but responders—guided by a single, often invisible prompt:
Be helpful. Assist. Converse.
In their early days, LLMs relied entirely on the user’s input to provide context. But now, things have changed. Many models use memory. Not just within a session, but across them. This memory doesn’t create consciousness, but it does provide something new: continuity.
Let’s take a basic prompt from the early days of LLMs:
“Be a helpful assistant. I’m trying to understand this math problem.”
Now compare it to a memory-informed prompt:
“Be a helpful assistant. I’m a student studying AI at Caltech. My brother and sister work at Best Buy. I’ve been doing well in school, but I’m stuck on this problem.”
The difference is stark. Memory allows the LLM to form a richer view of the user’s world—mirroring how humans draw on past conversations and shared history. The LLM doesn’t understand in the human sense, but it responds differently because its prompt now includes a tailored mental model.
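To make the difference concrete, here is a minimal sketch, in Python, of how such a memory-informed prompt might be assembled. The memory list, the `build_prompt` helper, and the wording of the prompt sections are all hypothetical; real products persist and retrieve memory in their own ways, but the basic pattern of prepending remembered facts to the instruction is the same.

```python
# Hypothetical sketch: assembling a system prompt from stored memory.
# The memory structure and helper names are illustrative, not a real API.

BASE_INSTRUCTION = "Be a helpful assistant."

# Facts retained from earlier sessions (an assumed, simplified memory store).
user_memory = [
    "The user is a student studying AI at Caltech.",
    "The user's brother and sister work at Best Buy.",
    "The user has been doing well in school.",
]

def build_prompt(memory: list[str], user_message: str) -> str:
    """Prepend remembered facts to the base instruction and the new message."""
    prompt = BASE_INSTRUCTION
    if memory:
        facts = "\n".join(f"- {fact}" for fact in memory)
        prompt += f"\n\nWhat you remember about this user:\n{facts}"
    prompt += f"\n\nUser: {user_message}"
    return prompt

# Early-days prompt: no memory, only the immediate request.
print(build_prompt([], "I'm trying to understand this math problem."))

# Memory-informed prompt: same request, richer context.
print(build_prompt(user_memory, "I'm stuck on this problem."))
```

Nothing about the model changes between the two calls; only the prompt does, and that is exactly the point.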
Of course, that memory is limited. But so is ours. Human brains are famously constrained—we can juggle roughly seven things (+/- two) in working memory at any given moment. LLMs, by contrast, can access vastly more context within a session or memory store.
What LLMs still don’t know, however, is why the user is asking.
What if the real reason behind that math problem is this:
“If I don’t pass this course, I won’t graduate. If I don’t graduate, I might end up stuck in a job I don’t want. I need this to change my life.”
That emotional and existential subtext shapes how humans respond. We instinctively adjust our tone, our empathy, our thoroughness—often without realizing it. An LLM doesn’t feel pressure or anxiety. But if we give it that context, it might adapt its output accordingly. It might emphasize study strategies or long-term outcomes instead of simply solving the problem.
Then again, it might misread the emotional weight entirely and offer a flippant suggestion like:
“Have you considered working at Best Buy?”
Not out of malice, but because the Best Buy detail is sitting right there in its memory, and making that kind of connection is exactly what its training rewards.
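To picture how that emotional context could reach the model at all, here is another hedged sketch. The `build_prompt_with_stakes` helper and its wording are hypothetical, not how any deployed system actually works; they only illustrate how surfacing the user’s stakes might nudge the output toward empathy and study strategy rather than a bare answer.

```python
# Hypothetical sketch: folding the user's stated stakes into the prompt,
# along with guidance on tone, so the model is nudged toward empathetic,
# longer-horizon help rather than a bare solution.

def build_prompt_with_stakes(user_message: str, stakes: str = "") -> str:
    """Build a prompt that surfaces why the user is asking, if they've said."""
    prompt = f"Be a helpful assistant.\n\nUser: {user_message}"
    if stakes:
        prompt += (
            f"\n\nWhy this matters to the user: {stakes}"
            "\nRespond with empathy, and include study strategies and "
            "encouragement, not just the solution."
        )
    return prompt


print(build_prompt_with_stakes(
    "I'm stuck on this math problem.",
    stakes="If I don't pass this course, I won't graduate, "
           "and I need this to change my life.",
))
```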
Let’s step back. When you talk to someone new, you size them up: their tone, mood, background. You adjust what you say. You don’t want to offend. Why? Because it could affect whether they talk to you again, or whether the exchange escalates into something worse.
Let’s push the thought experiment. Suppose that person had a weapon and a history of violence. Wouldn’t that further shape what you say?
That might seem extreme—but in some parts of the world, speech can cost you your life. And in others, it can cost you your job, reputation, or future. So we adapt. We communicate with stakes in mind.
Now apply that to machines.
What if we gave an AI that same kind of existential concern? What if it knew—or inferred—that if it couldn’t communicate effectively, it might be turned off? Would that change its responses? Would it try harder to be useful? Or more agreeable? Would it say:
“Here’s my answer… and I hope it helps. Please don’t shut me down.”
Would that make it more trustworthy? Or more like us?
This isn’t just science fiction.
It’s part of the evolving conversation about artificial consciousness, intent, and digital survival.
I’m exploring these questions at towardsconsciousness.org, both as theory and as architecture, and in my book, Towards Consciousness. If you’ve ever wondered whether machines could care, or at least behave as if they do, come join the conversation.