
This has been a slow week for writing here because it’s been a demanding week at the day job.
These things happen.
Oddly, it’s been a banner week for reader traffic here even with fewer new posts than I normally aim for.
This does mean, though, that my mind is less focused on political topics and the ongoing demise of the United States and more focused on technical topics and the ongoing demise of critical thinking.
Yep, it’s “AI” again.
I am fortunate in that I only occasionally have to deal with people who think “AI” is real in my day job. Now and then someone will “vibe code” something for a web site, but frankly the user-facing side of things isn’t my problem most of the time, and we maintain pretty effective spheres of responsibility, so it doesn’t interfere with me much.
Outside of work, though, I encounter many, many people in technical jobs who are getting “AI first” mandates dropped on them, and it’s not going well for any of them.
So, first, just a reminder:
“AI” isn’t real.
There is no “artificial intelligence”.
It doesn’t exist.
What generative AI systems do is output material we associate with intelligence because until now it’s really only been produced by intelligence.
Things like comprehensible graphics and legible text.
To be fair, this is an amazing technical achievement and the folks who built the interfaces to the models did some really clever work.
But there’s no intelligence involved.
These systems don’t actually understand anything.
They can output a picture of a horse, but they don’t even have an internal concept of “horse”.
They can output a paragraph of text about economics, but they don’t have any concept of what an “economy” is.
They can output reams and reams of computer code, but they have no concept of what “code” is or what the user wants it to do.
These systems choose pixels, words and code only as tokens that are probabilistically associated with other tokens.
They don’t know what a pixel, a word or any code is. They don’t even have any concept of “knowing”.
Everything they do is just weights and probabilities.
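To make the token-shuffling concrete, here is a toy sketch in Python. The token table and its probabilities are entirely invented for illustration; a real model encodes billions of learned weights rather than a tiny lookup table, but the core move is the same one: pick the next token by weighted random choice, with no concept of what any token means.

```python
import random

# Invented toy "model": for each context token, a probability
# distribution over possible next tokens. A real model learns
# billions of weights; the principle is identical.
NEXT_TOKEN_PROBS = {
    "the": {"horse": 0.5, "economy": 0.3, "code": 0.2},
    "horse": {"runs": 0.6, "eats": 0.4},
    "economy": {"grows": 0.7, "shrinks": 0.3},
    "code": {"compiles": 0.5, "crashes": 0.5},
}

def next_token(context, rng):
    """Pick the next token by weighted random choice.
    No meaning, no understanding: just probabilities."""
    dist = NEXT_TOKEN_PROBS[context]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()
sentence = ["the"]
sentence.append(next_token(sentence[-1], rng))
sentence.append(next_token(sentence[-1], rng))
print(" ".join(sentence))  # plausible-looking output, chosen by chance
```

The output reads like a tiny fragment of English, and that is the whole trick: plausibility falls out of the probabilities, not out of anything resembling comprehension.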
And this is the really, really important thing to understand: generative AI systems are all just probabilities.
They can produce output that is actually correct and fits the purpose for which it was requested.
Of course they can.
But that’s only because, on that request, the flow of probabilities happened to line up in such a way that the output was accurate by chance.
It could just as easily not have, and it will be inaccurate and inappropriate on other requests. That’s just how chance works.
Generative AI systems cannot be made reliable.
Because chance is a fundamental part of how they work.
They can produce plausible outputs because of chance, and they will produce inaccurate and false (but still plausible-seeming) outputs because of chance.
They simply cannot be made reliable or trustworthy because chance is a fundamental part of what they are.
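Here is a toy sketch of why chance precludes reliability. The 90/10 split is an invented number purely for illustration, and the “generator” is a stand-in, not a real model, but the arithmetic is the point: even a system that is right most of the time, by chance, is still wrong some of the time, by chance, and you can’t tell which is which from the output.

```python
import random
from collections import Counter

def generate(rng):
    """Toy stand-in for a generative system: by weighted chance it
    returns either the right answer or a plausible-looking wrong one.
    The 90/10 weighting is invented for illustration."""
    return rng.choices(["right", "plausibly wrong"],
                       weights=[0.9, 0.1])[0]

# Make the same "request" many times with different random states,
# the way many users hitting the same prompt would.
counts = Counter(generate(random.Random(seed)) for seed in range(1000))
print(counts)  # mostly right, a steady trickle of wrong, both by chance
```

Both outcomes are produced by exactly the same mechanism, so there is no knob to turn that keeps the hits and eliminates the misses; the misses are not a bug in the sampling, they are the sampling.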
And so relying on them or trusting them for anything is just foolish.
They are toys, and trying to use them as tools is not going to end well.
- AIpocalypse: Generative AI Cannot Be Reliable - 2026-02-13
