On Being Given Homework by a Statistical Parrot
I recently asked a large language model to read through my blog archive and tell me what themes it could see, how they had developed over time, and where the blind spots might be. I use Blogger, which is old, creaky and somewhat difficult to navigate. I am too lazy to Substack it.
I was curious about what it might do, and drew a little on Mark Carrigan’s play with LLMs in exploring his much larger blog.
The task I set the LLM (GPT5.4) proved to be more than interesting and, likely, a mistake. Not a huge mistake. Not one involving fire, lawyers, or a small drone lodged in the ceiling. More the kind of mistake where you casually invite a machine into your study and discover, ten minutes later, that it has developed views about your intellectual habits. OMG!
The machine did a pretty decent job, which was somewhat annoying.
There are few experiences in modern life more undignified than feeding several years of your thoughts into a predictive text engine and having it come back with something that amounts to: recurring themes, strong voice, interesting preoccupations, my attachment to a few favourite concepts, and then, perhaps, the suggestion that it is time to examine the bits I’ve been avoiding.
This was not criticism in the ordinary sense. It looked more like homework.
So I found myself in the unusual position of being assigned a future writing agenda by a machine whose principal qualification is that it has swallowed a great deal of the internet and survived with its grammar mostly intact. Naturally, I have turned this into a blog post.
What the machine thought I’d been doing
Its reading of the archive was irritating but also reasonably accurate. I make no claim for tidy thinking. Here I wonder how to refer to “it” without giving it too much agency or human-like attributes. I’ll stick with it and perhaps machine.
It noticed that the blog keeps circling a few stubborn concerns: the delegation of work to machines, the exchange of capacities between humans and nonhumans, the habit educational institutions have of responding to novelty by trying to staple it to an existing policy template, and the continuing farce in which fluent output gets mistaken for understanding simply because it arrives in complete sentences and doesn’t immediately burst into flames. Fair enough.
It also noticed that the tone has shifted over time, which it has. The earlier posts were more notebook-like: exploratory, connective, mildly civilised. The later ones are more theatrical, more satirical, and more willing to stage conversations with LLMs, dashboards, vice-chancellors, and other entities whose relationship to reality can best be described as tenuous, maybe intermittent or perhaps elastic. Maybe aspirational works better. And, again, fair enough.
Apparently the blog has evolved from “here are some things I’ve been thinking about” into “here is a small comic theatre in which sociotechnical absurdity is allowed to speak for itself.” Too close to the bone.
All of this was apparent to me, though I had not paid it much attention. I guess it is useful to know explicitly. One would prefer to hear this from a distinguished critic rather than from a machine that can also explain photosynthesis in the style of a pirate.
The machine’s complaints
The interesting part came when it stopped summarising the archive and started identifying what I had not done enough of. This is where the whole exchange took on the slightly sinister feel of a performance review being conducted by upholstery.
Its first complaint was that my favourite concept may be becoming too useful. This is a known hazard with one’s favourite theories and ideas. You find one that illuminates a lot, and before long it starts turning up everywhere, like a Swiss Army knife in a culture that has forgotten there are other forms of cutlery. There is more than a touch of intellectual path dependence at play here.
In my case the concept is the exchange of capacities. I’ve found it useful for thinking about calculators, writing, educational technologies, AI, research work, and all the odd little bargains humans strike when they ask nonhumans to do things for them.
The machine (it), with devastating politeness, suggested that if an idea explains everything, it may be at risk of becoming the intellectual equivalent of a universal remote: handy, but increasingly vague about what exactly it is controlling. What it wanted instead was detail. Not another elegant generalisation. A case. One mundane task at a time. One human. One machine. One actual mess. Kind of impertinent, maybe even rude but likely correct.
Its second complaint was that I write a lot about new capacities, complementary skills and hybrid forms of work, but less about what disappears. Memory, perhaps. Patience. Tolerance for ambiguity. The ability to sit with a bad draft without immediately calling in a fluent synthetic intern to produce twelve alternatives, three summaries and a tone that is polished yet faintly, suspiciously LLM-like.
This is awkward territory. I’d rather not join the ranks of those who mistake a minor personal discomfort for the collapse of the social order. Plenty of older scholarly practices were not always profound. They were just slow and accompanied by terrible coffee. But the question remains: what are we relieved to lose, what are we too lazy to defend, and what fades at genuine cost? That does seem worth some attention.
The machine also noted that I talk too often about “humans,” which is one of those generously oversized categories academics wheel out when they are too tired to sort people properly. Students, teachers, support staff, managers, multilingual writers, disciplinary experts, uncertain novices and neurodivergent learners do not all encounter LLMs in the same way. Ouch! That is true, and it should have been made explicit. Which made it especially irritating when talkative software pointed it out.
Then it made a point I genuinely liked: I spend a fair bit of time on meaning, discourse, framing and institutional absurdity, but less on pace. Not meaning. Pace.
LLMs do not simply generate text. They alter the rhythm of thought. They accelerate drafting, multiply options, and make it possible to bury yourself beneath a landslide of plausible prose before you’ve had time to decide whether any of it deserves to exist. Generation scales beautifully. Judgment remains stubbornly artisanal. This post is the result of over twenty lengthy interactions with “it”.
At one point the machine used the phrase cognitive carpet bombing, which was so offensively apt that I immediately stole it while objecting on principle to being out-phrased by software.
Finally, it pointed out that I enjoy satirising institutions from a gratifying altitude while paying somewhat less attention to the pipes. Dashboards, policies, strategic hallucinations and managerial theatre are all very well, but what about “procurement, licensing, support workloads, platform dependence, privacy compromises and the invisible labour required to keep the magic box from dribbling nonsense across the faculty?” In short, it was asking: where is the infrastructure? This may have been the machine’s harshest criticism. It is one thing to mock the uniforms. It is another to inspect the plumbing.
The awkward part
The awkward part was not that the machine said these things. The awkward part was that some of them were actually useful. That is a different problem.
A summary is harmless. A summary just sits there and makes you sound more organised than you really are. But when a machine reads your archive, spots where you have become comfortable, and hands you a more interesting version of your own unfinished to-do list, something slightly stranger has happened. Not intelligence in the grand human and mythic sense. Not understanding in the fully human sense involving biography, vanity, memory, bad knees, awkward conferences and the ability to make tea while holding two contradictory thoughts. Seriously challenging, even for a past-his-use-by-date, loosely unretired academic.
But it was certainly something capable of reorganising my local intellectual weather. Annoyingly, this was very much in keeping with the point I have been making for a while. Delegating work to machines is never simple substitution. It rearranges the work. New tasks appear. New judgments become necessary. The human does not vanish. The human is re-specified.
In this case, I did not ask the LLM to write my next posts. I asked it to read what I had already written and tell me where the productive discomfort lay. It responded by manufacturing new work for me to do, which is something it has always done. Sure, it saves some of the superficial labour, but it always makes more work.
It has become, in effect, one of those colleagues who reads your draft closely, identifies three weaknesses, suggests six better questions, and then is unavailable for a face-to-face meeting, leaving you with the labour and them with the smug aura of insight. In university terms, this makes it dangerously promotable.
A small agenda, courtesy of the robot
So, in the spirit of accepting useful irritation wherever it arises, here are some of the posts the machine appears to think I should now write:
A post about the capacities we lose, not just the ones we gain.
A post about pace, overload and what happens when idea generation outruns judgment.
A post about students not as victims of AI confusion, but as practical theorists of the machine.
A post about infrastructure, procurement and hidden labour.
A post testing whether “AI sensibility” is actually better than “AI literacy,” or whether it sometimes risks becoming a more elegant kind of vagueness.
A post about pleasure, because one of the under-discussed facts of LLM use is that it can be genuinely enjoyable. Not always trustworthy. Not always wise. But enjoyable. And any account of adoption that ignores enjoyment is leaving out one of the engines.
I don’t like admitting this, but it is a decent list. It is also an intensely annoying thing to have been handed by autocomplete with professional ambitions.
Final indignity
The final indignity is that I am now turning the machine’s critique into a blog post. Which means the LLM has not only read the archive and issued the agenda, but has now entered the archive as one of the things helping shape what comes next. Not author. Stay calm for now!
Also not thinker in the full human sense, complete with biography, embarrassment, status anxiety and the memory of sitting through PowerPoint presentations that should have had legal representation.
Participant? Certainly.
A mildly insolent participant. One of those unnervingly competent entities that does not so much replace thought as rearrange the furniture around it and then leave a note suggesting the room works better this way. Which, rather offensively, is almost exactly the argument I have been making all along.
So there it is.
I asked a language model to read my blog.
It read it.
It identified the themes.
It found the blind spots.
It set me homework.
And the really annoying part is that some of the homework was not bad.
The future, it seems, is not intelligent machines replacing human thought. It is machines reading your back catalogue and suggesting you could stand to be more specific. Which is somehow both less glamorous and more unsettling.