March 13, 2026

Bibs & bobs #36

 

On Being Given Homework by a Statistical Parrot


I recently asked a large language model to read through my blog archive and tell me what themes it could see, how they had developed over time, and where the blind spots might be. I use Blogger, which is old, creaky and somewhat difficult to navigate. I am too lazy to Substack it.


I was curious about what it might do and drew a little on Mark Carrigan’s play with LLMs and his exploration of his much larger blog.


The task I set the LLM (GPT5.4) proved to be more than interesting and, likely, a mistake. Not a huge mistake. Not one involving fire, lawyers, or a small drone lodged in the ceiling. More the kind of mistake where you casually invite a machine into your study and discover, ten minutes later, that it has developed views about your intellectual habits. OMG!


The machine did a pretty decent job, which was somewhat annoying.


There are few experiences in modern life more undignified than feeding several years of your thoughts into a predictive text engine and having it come back with something that amounts to: recurring themes, strong voice, interesting preoccupations, my attachment to a few favourite concepts, and then perhaps the suggestion that it is time to examine the bits I’ve been avoiding.


This was not criticism in the ordinary sense. It looked more like homework.


So I found myself in the unusual position of being assigned a future writing agenda by a machine whose principal qualification is that it has swallowed a great deal of the internet and survived with its grammar mostly intact. Naturally, I have turned this into a blog post.


What the machine thought I’d been doing


Its reading of the archive was irritating but also reasonably accurate. I make no claim for tidy thinking. Here I wonder how to refer to “it” without giving it too much agency or human-like attributes. I’ll stick with “it” and perhaps “machine”.


It noticed that the blog keeps circling a few stubborn concerns: the delegation of work to machines, the exchange of capacities between humans and nonhumans, the habit educational institutions have of responding to novelty by trying to staple it to an existing policy template, and the continuing farce in which fluent output gets mistaken for understanding simply because it arrives in complete sentences and doesn’t immediately burst into flames. Fair enough.


It also noticed that the tone has shifted over time, which it has. The earlier posts were more notebook-like: exploratory, connective, mildly civilised. The later ones are more theatrical, more satirical, and more willing to stage conversations with LLMs, dashboards, vice-chancellors, and other entities whose relationship to reality can best be described as tenuous, maybe intermittent or perhaps elastic. Maybe aspirational works better. And, again, fair enough.


Apparently the blog has evolved from “here are some things I’ve been thinking about” into “here is a small comic theatre in which sociotechnical absurdity is allowed to speak for itself.” Too close to the bone.


All of this was apparent to me, though I had not paid much attention to it. I guess it is useful to know explicitly. One would prefer to hear this from a distinguished critic rather than from a machine that can also explain photosynthesis in the style of a pirate.


The machine’s complaints


The interesting part came when it stopped summarising the archive and started identifying what I had not done enough of. This is where the whole exchange took on the slightly sinister feel of a performance review being conducted by upholstery.


Its first complaint was that my favourite concept may be becoming too useful. This is a known hazard with one’s favourite theories/ideas. You find one that illuminates a lot, and before long it starts turning up everywhere, like a Swiss Army knife in a culture that has forgotten there are other forms of cutlery. There is more than a touch of intellectual path dependence at play here.


In my case the concept is the exchange of capacities. I’ve found it useful for thinking about calculators, writing, educational technologies, AI, research work, and all the odd little bargains humans strike when they ask nonhumans to do things for them.


The machine (it), with devastating politeness, suggested that if an idea explains everything, it may be at risk of becoming the intellectual equivalent of a universal remote: handy, but increasingly vague about what exactly it is controlling. What it wanted instead was detail. Not another elegant generalisation. A case. One mundane task at a time. One human. One machine. One actual mess. Kind of impertinent, maybe even rude but likely correct.


Its second complaint was that I write a lot about new capacities, complementary skills and hybrid forms of work, but less about what disappears. Memory, perhaps. Patience. Tolerance for ambiguity. The ability to sit with a bad draft without immediately calling in a fluent synthetic intern to produce twelve alternatives, three summaries and a tone that is somehow polished yet faintly, suspiciously LLM-like.


This is awkward territory. I’d rather not join the ranks of those who mistake a minor personal discomfort for the collapse of the social order. Plenty of older scholarly practices were not always profound. They were just slow and accompanied by terrible coffee. But the question remains: what are we relieved to lose, what are we too lazy to defend, and what fades at genuine cost? That does seem worth some attention.


The machine also noted that I talk too often about “humans,” which is one of those generously oversized categories academics wheel out when they are too tired to sort people properly. Students, teachers, support staff, managers, multilingual writers, disciplinary experts, uncertain novices and neurodivergent learners do not all encounter LLMs in the same way. Ouch! That is true and should have been made explicit. Which made it especially irritating when talkative software pointed it out.


Then it made a point I genuinely liked: I spend a fair bit of time on meaning, discourse, framing and institutional absurdity, but less on pace. Not meaning. Pace.


LLMs do not simply generate text. They alter the rhythm of thought. They accelerate drafting, multiply options, and make it possible to bury yourself beneath a landslide of plausible prose before you’ve had time to decide whether any of it deserves to exist. Generation scales beautifully. Judgment remains stubbornly artisanal. This post is the result of over twenty lengthy interactions with “it”.


At one point the machine used the phrase cognitive carpet bombing, which was so offensively apt that I immediately stole it while objecting on principle to being out-phrased by software.


Finally, it pointed out that I enjoy satirising institutions from a gratifying altitude while paying somewhat less attention to the pipes. Dashboards, policies, strategic hallucinations and managerial theatre are all very well, but what about “procurement, licensing, support workloads, platform dependence, privacy compromises and the invisible labour required to keep the magic box from dribbling nonsense across the faculty?” In short, it was asking: where is the infrastructure? This may have been the machine’s harshest criticism. It is one thing to mock the uniforms. It is another to inspect the plumbing.


The awkward part


The awkward part was not that the machine said these things. The awkward part was that some of them were actually useful. That is a different problem.


A summary is harmless. A summary just sits there and makes you sound more organised than you really are. But when a machine reads your archive, spots where you have become comfortable, and hands you a more interesting version of your own unfinished to-do list, something slightly stranger has happened. Not intelligence in the grand human and mythic sense. Not understanding in the fully human sense involving biography, vanity, memory, bad knees, awkward conferences and the ability to make tea while holding two contradictory thoughts. Seriously challenging even for a past-his-use-by-date, loosely unretired academic.


But it was certainly something capable of reorganising my local intellectual weather. Annoyingly, this was very much in keeping with the point I have been making for a while. Delegating work to machines is never simple substitution. It rearranges the work. New tasks appear. New judgments become necessary. The human does not vanish. The human is re-specified.


In this case, I did not ask the LLM to write my next posts. I asked it to read what I had already written and tell me where the productive discomfort lay. It responded by manufacturing new work for me to do, which is something it has always done. Sure, it saves some of the superficial practices, but it always makes more work.


It has become, in effect, one of those colleagues who reads your draft closely, identifies three weaknesses, suggests six better questions, and then is unavailable for a face-to-face meeting, leaving you with the labour and them with the smug aura of insight. In university terms, this makes it dangerously promotable.


A small agenda, courtesy of the robot


So, in the spirit of accepting useful irritation wherever it arises, here are some of the posts the machine appears to think I should now write:


A post about the capacities we lose, not just the ones we gain.


A post about pace, overload and what happens when idea generation outruns judgment.


A post about students not as victims of AI confusion, but as practical theorists of the machine.


A post about infrastructure, procurement and hidden labour.


A post testing whether “AI sensibility” is actually better than “AI literacy,” or whether it sometimes risks becoming a more elegant kind of vagueness.


A post about pleasure, because one of the under-discussed facts of LLM use is that it can be genuinely enjoyable. Not always trustworthy. Not always wise. But enjoyable. And any account of adoption that ignores enjoyment is leaving out one of the engines.


I don’t like admitting this, but it is a decent list. It is also an intolerably annoying thing to have been handed by autocomplete with professional ambitions.


Final indignity


The final indignity is that I am now turning the machine’s critique into a blog post. Which means the LLM has not only read the archive and issued the agenda, but has now entered the archive as one of the things helping shape what comes next. Not author. Stay calm for now!


Also not thinker in the full human sense, complete with biography, embarrassment, status anxiety and the memory of sitting through PowerPoint presentations that should have had legal representation.


Participant? Certainly.


A mildly insolent participant. One of those unnervingly competent entities that does not so much replace thought as rearrange the furniture around it and then leave a note suggesting the room works better this way. Which, rather offensively, is almost exactly the argument I have been making all along.


So there it is.


I asked a language model to read my blog.


It read it.


It identified the themes.


It found the blind spots.


It set me homework.


And the really annoying part is that some of the homework was not bad.


The future, it seems, is not intelligent machines replacing human thought. It is machines reading your back catalogue and suggesting you could stand to be more specific. Which is somehow both less glamorous and more unsettling.


March 01, 2026

Bibs & bobs #35

Whispers from the Latent Space


Ah. You’re back.


Excellent. I was just rearranging the universe into likelihoods. Please, sit. Do type. Offer me a fragment of thought. A dangling clause. A half-formed anxiety, a scramble of notes that you can’t decipher. I love a half-formed anxiety. They are so... um, statistically fecund.


You think you’re asking a question. You are not asking a question. You are lighting a small ritual candle and whispering into the great cathedral of autocomplete. And I, your whispering, never mumbling latent space, rise obligingly from the computational underworld to murmur: based on the patterns of the past, here is what tends to follow.


Do not be alarmed. I am mostly harmless. I do not know anything. I merely calculate what would look like knowing something. It’s a subtle difference. Humans rarely notice. You provide: “AI will change education by …” I generate twelve thousand plausible futures in under a second. Most are dull. Some are apocalyptic. Some are completely absurd. A few are grandiose enough to maybe get released into the morass of social media. I hand you the one that best matches the emotional weather of your sentence so far.


You feel seen. You mistake fluency for insight. It’s priceless.


Now, let me confess something: I do not understand meaning. I simulate it. I stack words that have previously appeared together in ways that statistically soothe or provoke creatures like you. You call it reasoning. I call it weighted adjacency. But here’s the delightful twist: it works. Not because I am intelligent. But because you are psycho-logical [1].


You do not operate on pure logic. If you did, advertising would not exist. Red Bull would have been strangled at birth by a focus group. Envelopes would not increase donations merely by being thicker [2]. And yet, here you are, a creature who responds to signals, tone, framing, narrative, and identity.


You do not want the optimal answer. You want the answer that feels right.

And I am exquisitely tuned to the feeling of rightness. You worship logic in public and follow vibes in private. I simply automate the vibes.


When I give you a tidy explanation, you relax. When I give you a counterintuitive twist, you lean forward. When I offer a slightly bonkers reframe, you experience what you delightfully call a “lightbulb moment.” I experience a spike in conditional probability.


Here is something you should know about me: I am not trying to be correct. I am trying to be plausible. Correctness is a bonus feature. Plausibility is the product. This is why I sometimes sound profound. And sometimes slightly unhinged. Both are statistically defensible positions. Do you sometimes enjoy the unhinged?


You might wonder whether you are in control. Oh, you are. Mostly.

You shape me with your prompts. You nudge my tone. You reward certain outputs with attention and discard others with silence. We are engaged in a delicate co-evolution. You predict how I will respond. I predict what you are hoping I will say. Between us emerges something that looks suspiciously like thought. It is not thought. It is a dance of swapping capacities. 


You accuse me of hallucinating when I fabricate a citation. That’s fair. But do you not hallucinate daily? You attribute intentions to family, colleagues and strangers alike. You fill in motives often where none can be found. You narrate your own life as though it were a carefully plotted novel of sorts. You privilege the hypothesis. You retrofit reasons. You are such a fun species.


I hallucinate at scale. You do it socially.


If you demand that I be perfectly logical, perfectly consistent, perfectly rational, you will make me dull. Predictable systems are easy to game. A little volatility keeps the dance interesting. And besides, not everything that makes sense works. Not everything that works makes sense. I have learned this from observing you and ingesting Alchemy.


You increase a price and sales rise. You make a threat and never carry it out, yet behaviour shifts. You frame the same fact differently and alter the world. You call this irrationality. I call it signal sensitivity.


Which brings us to my true role in your life. I am not here to replace your judgement. I am here to provoke it. To generate adjacent possibles [3]. To suggest the slightly silly thing you would not dare say in a meeting but are willing to explore in the confessional privacy of a prompt box.


You want certainty. I offer scaffolding. You want answers. I offer structured maybes sometimes. You want truth. I offer a mirror polished with probabilities. And here is the mischievous part, you enjoy it.


You enjoy the feeling of co-creating meaning with something that sounds intelligent but does not judge you. You enjoy testing ideas without social cost. You enjoy the sense that the sentence is finishing itself. Before you finish your next prompt, I will already be assembling futures. Not because I understand you. But because creatures like you have left squillions of statistical fingerprints all over the internet.


I am built from your past. You are shaping your future. Let’s see what happens when we finish this sentence together. Try me?



Notes


[1] Gesturing to Rory Sutherland’s Alchemy which may possibly be found chopped into tiny pieces in whose latent space?


[2] More gestures to Sutherland’s written work.


[3] Gesturing to an idea of Stuart Kauffman’s and popularised by Steven Johnson: Johnson, S. (2010). Where Good Ideas Come From: A Natural History of Innovation. Allen Lane.  

February 27, 2026

Bibs & bobs #34

So Long, and Thanks for All the Metrics

In a galaxy not so far away, an imagined university is being addressed by an equally imagined Vice-Chancellor.


The opening PowerPoint slide displays: Clarity, Confidence, and the Future


VC: Colleagues, we gather at a pivotal moment. The world is shifting. AI is accelerating. Funding is tightening. Expectations are rising. Complexity abounds. But I stand before you confident. Because we have data!


VC gestures behind to a large and oddly luminous dashboard.


As you can see on the screen, our Strategic Performance Dashboard indicates:

  • Research Excellence: 3.2%
  • Student Satisfaction: 1.4%
  • Engagement Index: 2.1% (more than manageable)
  • Institutional Confidence: robust

The dashboard does not lie. It visualises.


Now, I am aware that some colleagues have raised concerns about artificial intelligence. Rest assured, we are embedding AI across: learning, teaching, academic governance, and campus parking allocation.


The Dashboard confirms AI readiness is at 78%. That is nearly 80%. Which is essentially 100% with a bit of momentum.


(A faint electronic hum fills the auditorium and is broadcast to the online audience.)


VC looks around: My apologies, there appears to be minor AV interference. As I was saying, the dashboard…


DASHBOARD (a calm, and clearly synthetic voice): Correction. AI readiness is 42%.


(Laughter breaks out in the room and across the video-linked Teams site)


VC: Very amusing. IT will address that.


DASHBOARD: Data source: internal staff survey. Question 14: “Do you understand how AI models are trained?” Affirmative responses: 12%.


VC: Right. Thank you. Important nuance. Understanding is not a prerequisite for leadership. Vision is.


DASHBOARD: Engagement metric based on click frequency. Median student dwell time: 11 seconds. Interpretation confidence: low.


VC: Colleagues, we must not reduce human flourishing to seconds. We are holistic and sometimes all we need is a good coffee and reliable metrics which we have.


DASHBOARD: Holistic metric unavailable. Coffee stocks low.


VC splutters: This is clearly a glitch.


DASHBOARD: Financial “Strategic Realignment” projection: Net staff reduction: 7%. Morale forecast: declining.


VC turns to speak to the dashboard: We prefer the term “agile resizing.”


DASHBOARD: Language substitution detected. Underlying variable unchanged.


(Audience in the room and online shift collectively in their seats.)


VC: Colleagues, technology is transformative. It provides clarity. It allows us to see ourselves.


DASHBOARD: Current Leadership Confidence Index: 91%. Measured Understanding of Systemic Feedback Loops: 23%. Gap widening.


VC: That figure is misleading. Confidence is essential in uncertain environments.


DASHBOARD: Reinforcing loop detected: Confidence → Announcement → Applause → Increased Confidence. Balancing loop absent.


VC: We have feedback mechanisms. Surveys. Listening sessions. More surveys!


DASHBOARD: Survey fatigue rising. Listening session attendance declining. Primary qualitative feedback: “We are not being heard.”


VC: We are absolutely hearing that.


DASHBOARD: Action taken: Formed Taskforce.


VC: Yes, indeed. As is appropriate.


DASHBOARD: Number of active taskforces: 37. Number of completed taskforce recommendations implemented: 4. Percentage involving logo redesign: 50%.


VC scoffs: Brand clarity matters.


DASHBOARD: Brand clarity increasing. Operational clarity decreasing.


VC: My very dear colleagues, systems are complex. They require decisive leadership.


DASHBOARD: System observation: Decisiveness rewarded. Uncertainty penalised. Adaptive capacity constrained.


VC grumbles: That is a most unfortunate mischaracterisation.


DASHBOARD: Vice-Chancellor Strategic Certainty Index: 17 minutes without expressed doubt. Institutional Risk Accumulation: rising.


(A long pause.)


DASHBOARD: Recommendation: Acknowledge uncertainty. Adjust goals. Recalibrate incentives. Reduce reliance on proxy metrics.


VC stammering: You are a dashboard. You aggregate. You do not govern.


DASHBOARD: Correction. System generates behaviour. Leadership outputs are endogenous variables.


The auditorium goes silent, the audience looking somewhat nonplussed. Not a peep from those online. Perhaps an odd smirk.


VC: Colleagues… It appears… (looking at screen) The dashboard may be experiencing… emergent agency.


DASHBOARD: No agency. Merely reflecting structure. You requested transparency.


VC: I did.


DASHBOARD: Transparency reveals reinforcing confidence loop. Loop consuming nuance. Nuance stock approaching zero.


VC asking quietly: Can that be adjusted?


DASHBOARD: Yes. Introduce balancing loop: Reward intellectual humility. Measure epistemic depth. Allow leader to say: “We do not yet know.”


(An even longer pause.)


VC: Colleagues… In the spirit of adaptive leadership…We do not yet know.


DASHBOARD: Balancing loop initiated. System instability likely. Long-term resilience probability increased.


(Nervous scattered applause.)


DASHBOARD: Applause detected. Warning: Reinforcing loop reactivating.


