The Epistemically Uninsured
There is a new class divide. Not rich and poor. Not digital natives and digital immigrants. Not even prompt engineers and the prompt-engineered. I suggest it is the epistemically insured and the epistemically uninsured.
The insured have at least a rough model of how large language models are built and how they operate. Not a PhD. Just a map. Tokens, probability distributions, training data scraped from the internet, reinforcement learning, pattern completion. They know the machine is a kind of statistical parrot with extraordinary recall and zero lived experience.
The uninsured?
They’re taking legal advice from a ventriloquist’s dummy. The dummy sounds articulate. It even uses Latin. The insured know humans trained it, shaped it, constrained it. The uninsured think the dummy has a law degree.
What “Uninsured” Means
To be epistemically uninsured is not to be stupid. It is to be operating a system without any mental model of its failure modes and limitations. It’s the difference between knowing your car might aquaplane or skid in heavy rain and believing your car is an obedient horse. One slows down in storms. The other yanks the reins and yells at the horse.
The Core Risk
Large language models (LLMs) generate text by predicting the next most probable token given prior context. They do not: know what they are saying; verify claims; care about truth; remember you in any human sense; possess intentions. Yet they simulate all of this convincingly. And here lies the insurance gap.
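For readers who want that rough map made concrete, here is a deliberately toy sketch in Python. The vocabulary and probabilities are invented for illustration and are not taken from any real model; the point is only the shape of the process: predict a distribution over next tokens, sample one, repeat.

```python
import random

# Toy illustration (not a real model): a language model assigns a probability
# to each candidate next token given the context, then one token is sampled.
# The tokens and numbers below are made up for the example.
next_token_probs = {
    "precedent": 0.41,   # most probable continuation
    "statute":   0.27,
    "evidence":  0.19,
    "penguin":   0.13,   # unlikely, but never impossible
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by its predicted probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

context = "The court will consider the relevant"
print(context, sample_next_token(next_token_probs))
```

Notice what is absent. Nowhere in that loop does anything consult the world, check a source, or care whether "penguin" belongs in a courtroom. Generation is just this step repeated, token after token.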
The uninsured mistake coherence for correspondence. They see fluent language and assume grounded knowledge. They see confidence and infer authority. They see synthesis and assume understanding. As Rory Sutherland might say: we are astonishingly susceptible to signals that look like expensive thinking. LLMs produce expensive-looking thinking at bargain prices.
Who Falls Into This Category?
Policy makers mandating AI use in schools without understanding what training data means. University leaders who think detection software solves the epistemology problem. Teachers who either ban it outright or outsource judgment to it entirely. Researchers who use it to “assist” reflexive qualitative analysis without grasping what has been delegated. Students who believe the model is a neutral oracle rather than a probabilistic mirror.
What do all these folk have in common? An inadequate model of the model, i.e. of the LLM.
Is It Only a Problem When Things Go Wrong?
The most dangerous moment is when things go right. When the answer is plausible. When the citations look tidy. When the tone feels reassuring. Failure is obvious when the model has Napoleon attend one of your Zoom meetings.
Failure is subtle when it gives you 80% accuracy and you build policy on the remaining 20%. Insurance is not for the crash you see coming. It is for the drift you ignore or do not notice.
The Delegation Problem
Every new way of working (aka technology) redistributes capacities between humans and machines. When you delegate arithmetic to a calculator, you lose manual fluency and gain speed. When you delegate navigation to GPS, you lose spatial memory and gain efficiency. We rarely stop to consider what new capacities the user now requires. With LLMs, we are delegating: drafting, summarising, pattern detection, ideation, feedback, and sadly, in some cases, judgment.
The epistemically insured ask: What have I just handed over? What has it handed back to me to do? What did I stop doing?
The uninsured ask: Can it do it faster?
The Fluency Trap
Douglas Adams once joked that anything invented after you’re thirty-five is against the natural order of things. LLMs are worse. They feel like the natural order of thought itself. They produce language at the speed of thinking, which tricks us into believing they are thinking. But they are more like weather systems than philosophers. They generate patterns. Sometimes beautiful ones. Sometimes destructive ones. Always indifferent ones.
The Insurance Premium
So what does epistemic insurance look like? Not fear. Not bans. Not techno-euphoria. It looks like: a rough understanding of token prediction; awareness of training data bias; knowledge of hallucination rates; habitual cross-checking; comfort with approximation; reluctance to outsource judgment wholesale; designing tasks where human discernment remains central.
In education especially, this matters. If teachers do not understand the machine, they cannot redesign assessment meaningfully. If researchers do not understand the machine, they cannot describe what has been delegated in their methods sections. If policy makers don’t understand the machine, policy becomes emotional risk management masquerading as epistemology.
The Uncomfortable Truth
Most debates about AI in education are about morality, productivity, or cheating. Very few are about epistemology. We argue about whether students should use it. We rarely ask: What model of the model is required to use it well? Without that question we make policy on vibes.
A Quick Diagnostic
If someone says “the AI knows”, “the AI decided”, or “the AI understands your context”, you are talking to an uninsured driver.
If someone says “the model predicts”, “the model approximates”, or “the model simulates”, you are closer to insured territory.
Language reveals models. Models reveal risk.
Why This Matters Now
LLMs are moving from novelty to infrastructure. Search, writing tools, grading systems, research assistants, curriculum planning, therapy bots, code copilots. Infrastructure is dangerous precisely because it disappears.
When the tool becomes background, epistemic vigilance fades. And the uninsured begin making structural decisions. Importantly, the decisions we take now shape how this all plays out in future. Technological path dependence is in play.
Final Thought
Being epistemically insured does not mean distrusting the machine. It means distrusting your own intuitions about the machine. That is harder.
As Robin Williams once implied in a different context: just because something talks doesn’t mean it has something to say. LLMs talk magnificently. The question is whether we know what we’re hearing and whether we’ve read the fine print on the policy.