December 14, 2025

Bibs & bobs #30

The Algorithm Everyone Thinks They Understand

How Generative AI Turned Higher Education into an Interpretive Free-for-All


Higher education has encountered many disruptions over the years: the handheld calculator, Wikipedia, PowerPoint, students who email at 2:14 a.m. All of them followed a predictable playbook: ban first, then crudely domesticate.


GenAI is different. It’s a language model, and as such it appears to have become the most productive meaning-generating machine universities have ever seen, without generating any agreed meaning.


Put the same LLM in front of ten academics and you don’t get ten evaluations. You get ten cosmologies. Welcome to the Great Academic Inkblot Test. Everyone sees something. No one sees the same thing. To one group, GenAI is:

  • A stochastic parrot (usually capitalised, always cited)
  • A glorified autocomplete with a marketing budget
  • Proof that students have finally stopped thinking altogether (apparently this happened on a Wednesday)
  • A plot by the tech bros to replace formal education 

To another group, often posting in all caps with rocket emojis:

  • A productivity miracle
  • A personal research assistant
  • The thing that will finally expose how pointless peer review always was
  • And feel free to add to the list… 

Meanwhile, assessment experts see plagiarism in a trench coat. Educational futurists see personalised tutors for every child. Administrators see efficiency, code for fewer academics. Lawyers see billable hours. Students see… well, something useful, but they’re not telling us exactly what.


Same artefact. Wildly different “reads.”


If this were a psychology experiment, and maybe it is, someone would already be publishing a paper on mass pareidolia.


Is it just pareidolia though? Pareidolia is when you see faces in clouds, saints in toast, or emotional depth in your Roomba. 


At first glance, GenAI fits the bill nicely:

  • A deliciously ambiguous system
  • An underestimated, limited “understanding”
  • Hyperactive “meaning-making”

Seth Godin [1] recently warned that we’re projecting intention, personality, even morality onto systems that are, at base, just maths and statistics. Fair enough. But here’s the problem. Pareidolia assumes we’re mistaken.


What’s happening in universities is something a little more deliberate, and more awkward. This isn’t mis-seeing. It’s motivated seeing. Most academic takes on GenAI aren’t hallucinations. They’re strategic interpretations: I have a robust intellectual frame that I can wrap around any damn newfangled phenomenon if I choose to [2]!


People aren’t asking, “What is this thing?” They’re asking, consciously or not, “What does this mean for my workload, my assessment design, my authority, my job, my research agenda?”


The upshot: if you built your career on individual authorship, GenAI is an existential threat. If you’ve been drowning in marking, it’s a lifeboat. If you run an integrity office, it’s a compliance nightmare. If you sell ed-tech consultancy, it’s a once-in-a-lifetime opportunity.


These are not errors of perception. They’re interest-laden readings.

Calling this pareidolia lets us pretend we’re all just confused humans staring at clouds. In reality, we’re lawyers arguing over the will while the patient is still alive.


Why the noise is getting louder, not quieter


There’s a comforting belief circulating online that once we “really understand” GenAI, the bad takes will fade. This is adorable. The opposite is happening. As technical understanding improves, interpretations multiply: new affordances create new anxieties, new capabilities force new boundary disputes, and every update destabilises last month’s certainty.


Clarity doesn’t reduce disagreement. It raises the stakes.

This is why the discourse feels, and actually is, unbearable. It’s not a lack of knowledge. It’s a lack of shared settlement about:

  • What counts as learning
  • What counts as cheating
  • What counts as skill
  • What counts as human contribution anymore

And let’s not forget that universities are famously bad at renegotiating settlements quickly.


The Rorschach machine problem


GenAI functions less like a tool and more like a capacity redistribution device. It quietly shifts who can write, who can code, who can summarise, and who can pass first-year subjects with alarming confidence.


And whenever capacities shift, institutions panic—not because of the machine, but because categories become unstable. That’s when the noise begins: moral panics dressed as policy, policy dressed as pedagogy, pedagogy dressed as technical misunderstanding. Everyone rushes to name the thing first, because naming controls the framing and sets the agenda.


A final, mildly impolite suggestion


If GenAI were just pareidolia, the academic world could wait it out. But it isn’t. What we’re seeing is a kind of interpretive inflation: too many plausible stories competing at once, each anchored to a different institutional fear or hope.


So the real question for higher education isn’t: “What is GenAI, really?” It’s: “Which interpretations are we choosing to stabilise—and who benefits from those choices?”


Because the faces in the clouds aren’t accidental anymore. They are being drawn on purpose.



Notes


[1] See Seth Godin’s recent post. 


[2] See Survivor: Assessment Island: Students vs AI vs Academics.
