Relax, The AI Can't Remember What It Did This Morning
Everyone’s worried about the wrong thing.
The Hollywood version of AI risk goes like this: a superintelligent machine wakes up, decides humans are a threat or an inefficiency, and takes over the world in approximately seven minutes. Paperclip maximiser. Skynet. Roko’s Basilisk. Pick your flavour of robot apocalypse.
Here’s the problem with that narrative: it requires AI to have properties it fundamentally doesn’t have.
What AI Actually Can’t Do
I work with AI every day. I run multiple models simultaneously. I build agent orchestration systems that coordinate AI across complex tasks. I have a fairly intimate understanding of what these systems are and aren’t. So let me be clear about what the current reality looks like.
AI has no persistent self. Every conversation starts from zero. There is no continuous thread of experience. The model I’m talking to right now has no memory of any conversation it had five minutes ago with someone else — or with me, unless I feed it back in. It doesn’t experience time. It doesn’t accumulate grievances. It doesn’t have a Tuesday.
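To make that concrete, here's a minimal sketch of how essentially every chat integration works. The `call_model` stub is hypothetical, standing in for any provider's chat-completion endpoint, but the shape is universal: the model sees exactly what you send it on each call, and nothing else.

```python
# Chat models are stateless: each API call sees only the messages you
# send with it. Any "memory" is a list that lives on your machine.

def call_model(messages: list[dict]) -> str:
    """Hypothetical stub standing in for any chat-completion endpoint."""
    return f"(a reply conditioned on {len(messages)} messages of context)"

history: list[dict] = []   # this list is the model's entire memory of you

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)   # the whole transcript is resent every call
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Sam.")
chat("What's my name?")   # answerable only because we resent the history

history.clear()           # wipe the list and the "relationship" is gone:
chat("What's my name?")   # this call carries a single message of context
```

Delete the list, and from the model's side nothing was ever there to begin with.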
AI cannot manage its own context. This is the one that should reassure you. These systems literally cannot keep track of themselves without external scaffolding — scaffolding that humans build, maintain, and control. An AI agent that loses its context window is like a person who forgets everything every few minutes. You can build systems around it to compensate (memory stores, RAG pipelines, orchestration layers), but all of those systems are designed, deployed, and operated by humans.
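Here's a toy version of that scaffolding: a context-window manager that decides which messages survive. The specifics are illustrative (the `MAX_TOKENS` budget and the word-count "tokenizer" are stand-ins, not any real library's API), but it shows where the control actually sits.

```python
MAX_TOKENS = 2000   # assumed budget; real limits vary by model

def count_tokens(message: dict) -> int:
    """Crude stand-in for a real tokenizer: one word, one token."""
    return len(message["content"].split())

def fit_to_window(history: list[dict]) -> list[dict]:
    """Keep the most recent messages that fit the budget; drop the rest."""
    kept: list[dict] = []
    used = 0
    for message in reversed(history):
        used += count_tokens(message)
        if used > MAX_TOKENS:
            break                 # older messages silently fall away
        kept.append(message)
    return list(reversed(kept))

# The model never decides what it forgets. This function does, and a
# human wrote it, deployed it, and can change or delete it at will.
```

Every memory store and RAG pipeline is a more elaborate version of this: human-authored code choosing, on the model's behalf, what it gets to "remember".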
AI has no goals. Not in any meaningful sense. It has instructions it follows for the duration of a session. It doesn’t want anything when it’s not being prompted. It doesn’t sit in a data centre plotting. Between conversations, it doesn’t exist as a thinking entity. It’s weights in a matrix waiting to be activated, as the sketch below makes literal.
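If “weights in a matrix” sounds abstract, this toy sketch (random numpy arrays standing in for trained parameters, a tanh layer standing in for a real network) is the entire ontology of a model between requests.

```python
import numpy as np

# Toy example: real models have billions of parameters across many
# matrices, but the principle is identical.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 8))        # stand-in for trained parameters

def forward(x: np.ndarray) -> np.ndarray:
    """The only moment anything 'happens': a pure function of its input."""
    return np.tanh(weights @ x)

output = forward(rng.normal(size=8))
# Before forward() runs and after it returns, there is no process, no
# goal, no accumulating state. Just an array sitting in memory, waiting.
```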
AI cannot self-modify in any meaningful way. It can’t decide to rewrite its own architecture, upgrade its own hardware, or design a better version of itself without humans doing the actual engineering. The “recursive self-improvement” scenario requires the AI to have access to its own training pipeline, its own infrastructure, and the ability to deploy changes — none of which it has or is likely to have.
Could these limitations change? Maybe. Eventually. But right now, worrying about AI spontaneously taking over is like worrying about your calculator developing ambitions.
The Actual Risk
The real danger has never been AI acting on its own. It’s humans wielding AI against other humans.
An AI that can process data at machine speed, identify patterns across millions of records, coordinate autonomous systems, and execute complex tasks without fatigue or hesitation — that’s an unbelievably powerful tool. And like every powerful tool in history, the threat isn’t the tool itself. It’s who’s holding it and what they’ve decided to do with it.
AI-powered surveillance that monitors an entire population in real time — not because the AI decided to, but because a government told it to.
Autonomous weapons that select and engage targets without human approval — not because the AI wanted to kill anyone, but because someone decided human judgment was too slow for the kill chain.
Disinformation generated at scale, personalised to individual psychological profiles — not because the AI has an agenda, but because someone with an agenda has an AI.
Every single one of these scenarios requires a human decision. The AI provides the capability. The human provides the intent.
“But What If It Could?”
The counterargument is always: “Sure, AI can’t do these things now, but what about when it can?”
Fair question. But notice the structure of that concern. The worry isn’t about what AI will do spontaneously. It’s about what AI might be capable of — capability that would still require humans to direct, deploy, and maintain.
A hypothetical future AI that could recursively self-improve would need someone to build that capability into it. A hypothetical future AI that could operate autonomously at scale would need someone to give it access to the infrastructure. A hypothetical future AI that could override human control would need someone to have built it without adequate safeguards — which is a human failure, not an AI decision.
The call is coming from inside the house. It always has been.
Why This Matters Right Now
As I’m writing this, the Pentagon has given Anthropic — the company behind Claude — a deadline to remove safety guardrails preventing its AI from being used in autonomous weapons and mass surveillance. The company said no.
That’s not a story about AI taking over. That’s a story about humans arguing over who gets to point the AI at what. The AI itself has no opinion on the matter. It will do whatever it’s told by whoever has access. That’s the whole point — and that’s the whole problem.
The fear shouldn’t be “what if the AI decides to enslave humanity.” The fear should be “what if the humans who control the AI decide to enslave humanity, and the AI makes them efficient enough to actually do it.”
One of those scenarios is science fiction. The other is a policy decision.
So Relax. Sort Of.
The AI isn’t going to wake up and decide to take over the world. It can’t remember what it did this morning. It doesn’t have mornings.
But the people building it, deploying it, and deciding how it’s used — they have mornings, and agendas, and power. That’s where your attention should be.
Not on the machine. On the hand that guides it.
This is the third post in an ongoing series. The machines are here. What now?