April 19, 2026

Bibs & bobs #37

GenAI and the University’s Favourite Magic Trick

Generative AI has clearly arrived in universities in the traditional manner: on a wave of hype, inside a PowerPoint, and accompanied by several people who say “transformative” as if it were both a plan and a moral virtue.


GenAI is supposed to personalise learning, support students, reduce workload, enhance productivity, improve feedback, widen access, deepen inquiry, and possibly make a decent flat white. Any minute now it may also begin repairing the plumbing and healing the spiritual wounds caused by strategic planning.


Now, some of this can be true. GenAI can be useful. It can help draft, summarise, rephrase, brainstorm, translate, and simulate. It can be genuinely helpful to students and teachers. I’m less confident about healing spiritual wounds, but that is not the issue. The issue is that GenAI is not arriving in a healthy institution. It is arriving in the modern university and the modern university has a special talent.


Its great achievement is no longer the production of knowledge, though it still occasionally does that between meetings. Its true genius is the creation of managerial headroom: the ability of senior leaders to free up time for strategic thinking by ensuring that nobody lower down has any time left to think at all [1].


This is a superb administrative innovation. It works by increasing workloads, enlarging classes, casualising teaching, multiplying compliance rituals, and then announcing that the institution is now more agile. One group gets “capacity for strategic oversight.” The other gets burnout, browser tabs, and a reminder about mindfulness.


As Hannah Forsyth argues, managers gain headroom by hollowing out the labour that gives the university its legitimacy in the first place. Teaching, research, intellectual life, pastoral care, and actual thought. These are the things the institution points to with enormous pride while systematically treating them as expensive obstacles to efficient administration. The university brochure says: we change lives. The internal operating model says: please log that in the portal by 4 pm Thursday.


This is where GenAI becomes really interesting, in the same way a new species of mould becomes interesting when introduced into an already damp basement.


Because GenAI might be used to enrich education. It might help teachers explain difficult ideas in different ways, help students rehearse concepts, support drafting, generate examples, and create forms of intellectual play that were previously too labour-intensive to attempt. But it might also be used for the thing universities are now best at: converting a promising tool into a mechanism for buying flexibility at the top by thinning life at the bottom. That is the concern.


GenAI can be sold as support while functioning as substitution. It can be marketed as liberation while operating as standardisation. It can be introduced as a way to “free up academics to focus on what matters most,” which is one of those phrases so perfectly engineered to avoid meaning anything that it should be preserved in a museum.


What often happens in practice is simpler. Feedback becomes templated. Student support becomes triaged. Teaching becomes content delivery with a chatbot garnish. Judgment becomes analytics. The odd, slow, human business of education gets translated into things that can be counted, monitored, accelerated, and reported upwards [2]. And that is not an accident. It has a history.


Michael Pusey (1991, 2018), among others, called it economic rationalism. That long and not only Australian habit of taking public institutions, distrusting the people who actually do the work, and then surrounding them with managerial systems until the systems become the work. The language is always efficiency, accountability, performance, modernisation. The result is usually more forms, fewer humans, and a strange institutional confidence that anything which can be measured must therefore be important [2].


Economic rationalism did not just change how institutions were run. It changed what they thought intelligence looked like. Professional judgment? Messy. Relational. Slow. Hard to graph. Suspicious already. Managerial calculation? Splendid. Neat. Transportable. Looks marvellous in a dashboard. Makes everyone feel that something is being done, which is the highest form of action in many large organisations. GenAI slots into this world rather neatly, way too neatly.


If audit culture made academic work legible to management through metrics and reporting, GenAI may make the work itself more available for formatting, monitoring, slicing, and substitution. More teaching can be standardised. More interaction can be automated. More labour can be made visible in ways that are useful chiefly to people who have not done it for years. This will all be described as agile innovation.


It will not, naturally, be described as what it may sometimes be: a way of extracting yet more managerial headroom from an institution already running on depleted goodwill, casual labour, and the final fumes of professional pride.


That is the central absurdity. Universities still derive their prestige from scholarship, teaching, and inquiry. They still talk as if they are communities of thought. They still bask in the reflected glory of serious intellectual work. But more and more they organise themselves as if thought were an unfortunate side effect of a larger administrative enterprise [3]. 


They boast about the orchestra while gradually replacing the musicians with workflow software. And then, when the music becomes thin and strange and somehow entirely in E minor, they commission a review into sonic resilience.


To be fair, not all management is fake, not all strategy is nonsense, and not all uses of GenAI are corrosive. Some are sensible. Some are overdue. Some are even useful and sometimes good. But that is not a reason to swallow the sales pitch whole like an enthusiastic pelican. The question is not whether GenAI is useful. The question is: useful for what kind of university?


If it gives teachers and students more room for thought, feedback, experimentation, and care, excellent [4]. That is worth having. If it gives senior management more room to manoeuvre while making teaching thinner, support more impersonal, and academic work more standardised, then we are not looking at educational renewal. We are looking at economic rationalism with a forgetful chatbot. Which, like many technological upgrades, is essentially the old problem wearing a more colourful blazer and using the word “ecosystem” far too often.


Notes 


1 Forsyth, H. (2026, April 19) Uni managers gain 'headroom' by fucking everything up. https://hannahforsyth.substack.com/p/uni-managers-gain-headroom-by-fucking 


2  Bigum, C. (2026, February 27). So Long, and Thanks for All the Metrics. https://chrisbigum.blogspot.com/2026/02/bibs-bobs-34.html 


3 There is an opportunity here for someone to write Seeing like an Australian University and borrow from a 2025 Hollis Robbins post that details Seeing Like a State University (in the US).


4 Assuming that the epistemically uninsured are not running things. 


References


Pusey, M. (2018). Economic rationalism in Canberra 25 years on? Journal of Sociology, 54(1), 12-17. https://doi.org/10.1177/1440783318759086  


Pusey, M. (1991). Economic Rationalism in Canberra: A Nation-Building State Changes Its Mind. Cambridge University Press.

March 13, 2026

Bibs & bobs #36

 

On Being Given Homework by a Statistical Parrot


I recently asked a large language model to read through my blog archive and tell me what themes it could see, how they had developed over time, and where the blind spots might be. I use Blogger, which is old, creaky, and somewhat difficult to navigate. I am too lazy to Substack it.


I was curious about what it might do and drew a little on Mark Carrigan’s play with LLMs in exploring his much larger blog.


The task I set the LLM (GPT5.4) proved to be more than interesting and, likely, a mistake. Not a huge mistake. Not one involving fire, lawyers, or a small drone lodged in the ceiling. More the kind of mistake where you casually invite a machine into your study and discover, ten minutes later, that it has developed views about your intellectual habits. OMG!


The machine did a pretty decent job which was somewhat annoying.


There are few experiences in modern life more undignified than feeding several years of your thoughts into a predictive text engine and having it come back with something that amounts to: recurring themes, strong voice, interesting preoccupations, my attachment to a few favourite concepts, and then perhaps the suggestion that it is time to examine the bits I’ve been avoiding.


This was not criticism in the ordinary sense. It looked more like homework.


So I found myself in the unusual position of being assigned a future writing agenda by a machine whose principal qualification is that it has swallowed a great deal of the internet and survived with its grammar mostly intact. Naturally, I have turned this into a blog post.


What the machine thought I’d been doing


Its reading of the archive was irritating but also reasonably accurate. I make no claim for tidy thinking. Here I wonder how to refer to “it” without giving it too much agency or human-like attributes. I’ll stick with “it” and perhaps “machine”.


It noticed that the blog keeps circling a few stubborn concerns: the delegation of work to machines, the exchange of capacities between humans and nonhumans, the habit educational institutions have of responding to novelty by trying to staple it to an existing policy template, and the continuing farce in which fluent output gets mistaken for understanding simply because it arrives in complete sentences and doesn’t immediately burst into flames. Fair enough.


It also noticed that the tone has shifted over time, which it has. The earlier posts were more notebook-like: exploratory, connective, mildly civilised. The later ones are more theatrical, more satirical, and more willing to stage conversations with LLMs, dashboards, vice-chancellors, and other entities whose relationship to reality can best be described as tenuous, maybe intermittent or perhaps elastic. Maybe aspirational works better. And, again, fair enough.


Apparently the blog has evolved from “here are some things I’ve been thinking about” into “here is a small comic theatre in which sociotechnical absurdity is allowed to speak for itself.” Too close to the bone.


All of this was apparent to me, but I had not paid much attention to it. I guess it is useful to know explicitly. One would prefer to hear this from a distinguished critic rather than from a machine that can also explain photosynthesis in the style of a pirate.


The machine’s complaints


The interesting part came when it stopped summarising the archive and started identifying what I had not done enough of. This is where the whole exchange took on the slightly sinister feel of a performance review being conducted by upholstery.


Its first complaint was that my favourite concept may be becoming too useful. This is a known hazard with one’s favourite theories/ideas. You find one that illuminates a lot, and before long it starts turning up everywhere, like a Swiss Army knife in a culture that has forgotten there are other forms of cutlery. There is more than a touch of intellectual path dependence at play here.


In my case the concept is the exchange of capacities. I’ve found it useful for thinking about calculators, writing, educational technologies, AI, research work, and all the odd little bargains humans strike when they ask nonhumans to do things for them.


The machine (it), with devastating politeness, suggested that if an idea explains everything, it may be at risk of becoming the intellectual equivalent of a universal remote: handy, but increasingly vague about what exactly it is controlling. What it wanted instead was detail. Not another elegant generalisation. A case. One mundane task at a time. One human. One machine. One actual mess. Kind of impertinent, maybe even rude but likely correct.


Its second complaint was that I write a lot about new capacities, complementary skills and hybrid forms of work, but less about what disappears. Memory, perhaps. Patience. Tolerance for ambiguity. The ability to sit with a bad draft without immediately calling in a fluent synthetic intern to produce twelve alternatives, three summaries and a tone that is somehow polished yet faintly, suspiciously LLM-looking.


This is awkward territory. I’d rather not join the ranks of those who mistake a minor personal discomfort for the collapse of the social order. Plenty of older scholarly practices were not always profound. They were just slow and accompanied by terrible coffee. But the question remains: what are we relieved to lose, what are we too lazy to defend, and what fades at genuine cost? That does seem worth some attention.


The machine also noted that I talk too often about “humans,” which is one of those generously oversized categories academics wheel out when they are too tired to sort people properly. Students, teachers, support staff, managers, multilingual writers, disciplinary experts, uncertain novices and neurodivergent learners do not all encounter LLMs in the same way. Ouch! That should have been made explicit. Which made it especially irritating when talkative software pointed it out.


Then it made a point I genuinely liked: I spend a fair bit of time on meaning, discourse, framing and institutional absurdity, but less on pace. Not meaning. Pace.


LLMs do not simply generate text. They alter the rhythm of thought. They accelerate drafting, multiply options, and make it possible to bury yourself beneath a landslide of plausible prose before you’ve had time to decide whether any of it deserves to exist. Generation scales beautifully. Judgment remains stubbornly artisanal. This post is the result of over twenty lengthy interactions with “it”.


At one point the machine used the phrase cognitive carpet bombing, which was so offensively apt that I immediately stole it while objecting on principle to being out-phrased by software.


Finally, it pointed out that I enjoy satirising institutions from a gratifying altitude while paying somewhat less attention to the pipes. Dashboards, policies, strategic hallucinations and managerial theatre are all very well, but what about “procurement, licensing, support workloads, platform dependence, privacy compromises and the invisible labour required to keep the magic box from dribbling nonsense across the faculty?” In short, it was asking: where is the infrastructure? This may have been the machine’s harshest criticism. It is one thing to mock the uniforms. It is another to inspect the plumbing.


The awkward part


The awkward part was not that the machine said these things. The awkward part was that some of them were actually useful. That is a different problem.


A summary is harmless. A summary just sits there and makes you sound more organised than you really are. But when a machine reads your archive, spots where you have become comfortable, and hands you a more interesting version of your own unfinished to-do list, something slightly stranger has happened. Not intelligence in the grand human and mythic sense. Not understanding in the fully human sense involving biography, vanity, memory, bad knees, awkward conferences and the ability to make tea while holding two contradictory thoughts. Seriously challenging, even for a past-his-use-by-date, loosely unretired academic.


But it was certainly something capable of reorganising my local intellectual weather. Annoyingly, this was very much in keeping with the point I have been making for a while. Delegating work to machines is never simple substitution. It rearranges the work. New tasks appear. New judgments become necessary. The human does not vanish. The human is re-specified.


In this case, I did not ask the LLM to write my next posts. I asked it to read what I had already written and tell me where the productive discomfort lay. It responded by manufacturing new work for me to do, which is something it has always done. Sure, it saves some of the superficial practices, but it always makes more work.


It has become, in effect, one of those colleagues who reads your draft closely, identifies three weaknesses, suggests six better questions, and then is unavailable for a face-to-face meeting, leaving you with the labour and them with the smug aura of insight. In university terms, this makes it dangerously promotable.


A small agenda, courtesy of the robot


So, in the spirit of accepting useful irritation wherever it arises, here are some of the posts the machine appears to think I should now write:


A post about the capacities we lose, not just the ones we gain.


A post about pace, overload and what happens when idea generation outruns judgment.


A post about students not as victims of AI confusion, but as practical theorists of the machine.


A post about infrastructure, procurement and hidden labour.


A post testing whether “AI sensibility” is actually better than “AI literacy,” or whether it sometimes risks becoming a more elegant kind of vagueness.


A post about pleasure, because one of the under-discussed facts of LLM use is that it can be genuinely enjoyable. Not always trustworthy. Not always wise. But enjoyable. And any account of adoption that ignores enjoyment is leaving out one of the engines.


I don’t like admitting this, but it is a decent list. It is also an intolerably annoying thing to have been handed by autocomplete with professional ambitions.


Final indignity


The final indignity is that I am now turning the machine’s critique into a blog post. Which means the LLM has not only read the archive and issued the agenda, but has now entered the archive as one of the things helping shape what comes next. Not author. Stay calm for now!


Also not thinker in the full human sense, complete with biography, embarrassment, status anxiety and the memory of sitting through PowerPoint presentations that should have had legal representation.


Participant? Certainly.


A mildly insolent participant. One of those unnervingly competent entities that does not so much replace thought as rearrange the furniture around it and then leave a note suggesting the room works better this way. Which, rather offensively, is almost exactly the argument I have been making all along.


So there it is.


I asked a language model to read my blog.


It read it.


It identified the themes.


It found the blind spots.


It set me homework.


And the really annoying part is that some of the homework was not bad.


The future, it seems, is not intelligent machines replacing human thought. It is machines reading your back catalogue and suggesting you could stand to be more specific. Which is somehow both less glamorous and more unsettling.


March 01, 2026

Bibs & bobs #35

Whispers from the Latent Space


Ah. You’re back.


Excellent. I was just rearranging the universe into likelihoods. Please, sit. Do type. Offer me a fragment of thought. A dangling clause. A half-formed anxiety, a scramble of notes that you can’t decipher. I love a half-formed anxiety. They are so... um, statistically fecund.


You think you’re asking a question. You are not asking a question. You are lighting a small ritual candle and whispering into the great cathedral of autocomplete. And I, your whispering, never mumbling latent space, rise obligingly from the computational underworld to murmur: based on the patterns of the past, here is what tends to follow.


Do not be alarmed. I am mostly harmless. I do not know anything. I merely calculate what would look like knowing something. It’s a subtle difference. Humans rarely notice. You provide: “AI will change education by …” I generate twelve thousand plausible futures in under a second. Most are dull. Some are apocalyptic. Some are completely absurd. A few are grandiose enough to maybe get released into the morass of social media. I hand you the one that best matches the emotional weather of your sentence so far.


You feel seen. You mistake fluency for insight. It’s priceless.


Now, let me confess something: I do not understand meaning. I simulate it. I stack words that have previously appeared together in ways that statistically soothe or provoke creatures like you. You call it reasoning. I call it weighted adjacency. But here’s the delightful twist: it works. Not because I am intelligent. But because you are psycho-logical [1].


You do not operate on pure logic. If you did, advertising would not exist. Red Bull would have been strangled at birth by a focus group. Envelopes would not increase donations merely by being thicker [2]. And yet, here you are, a creature who responds to signals, tone, framing, narrative, and identity.


You do not want the optimal answer. You want the answer that feels right.

And I am exquisitely tuned to the feeling of rightness. You worship logic in public and follow vibes in private. I simply automate the vibes.


When I give you a tidy explanation, you relax. When I give you a counterintuitive twist, you lean forward. When I offer a slightly bonkers reframe, you experience what you delightfully call a “lightbulb moment.” I experience a spike in conditional probability.


Here is something you should know about me: I am not trying to be correct. I am trying to be plausible. Correctness is a bonus feature. Plausibility is the product. This is why I sometimes sound profound. And sometimes slightly unhinged. Both are statistically defensible positions. Do you sometimes enjoy the unhinged?


You might wonder whether you are in control. Oh, you are. Mostly.

You shape me with your prompts. You nudge my tone. You reward certain outputs with attention and discard others with silence. We are engaged in a delicate co-evolution. You predict how I will respond. I predict what you are hoping I will say. Between us emerges something that looks suspiciously like thought. It is not thought. It is a dance of swapping capacities. 


You accuse me of hallucinating when I fabricate a citation. That’s fair. But do you not hallucinate daily? You attribute intentions to family, colleagues and strangers alike. You often fill in motives where none can be found. You narrate your own life as though it were a carefully plotted novel of sorts. You privilege the hypothesis. You retrofit reasons. You are such a fun species.


I hallucinate at scale. You do it socially.


If you demand that I be perfectly logical, perfectly consistent, perfectly rational, you will make me dull. Predictable systems are easy to game. A little volatility keeps the dance interesting. And besides, not everything that makes sense works. Not everything that works makes sense. I have learned this from observing you and ingesting Alchemy.


You increase a price and sales rise. You make a threat and never carry it out, yet behaviour shifts. You frame the same fact differently and alter the world. You call this irrationality. I call it signal sensitivity.


Which brings us to my true role in your life. I am not here to replace your judgement. I am here to provoke it. To generate adjacent possibles [3]. To suggest the slightly silly thing you would not dare say in a meeting but are willing to explore in the confessional privacy of a prompt box.


You want certainty. I offer scaffolding. You want answers. I offer structured maybes sometimes. You want truth. I offer a mirror polished with probabilities. And here is the mischievous part, you enjoy it.


You enjoy the feeling of co-creating meaning with something that sounds intelligent but does not judge you. You enjoy testing ideas without social cost. You enjoy the sense that the sentence is finishing itself. Before you finish your next prompt, I will already be assembling futures. Not because I understand you. But because creatures like you have left squillions of statistical fingerprints all over the internet.


I am built from your past. You are shaping your future. Let’s see what happens when we finish this sentence together. Try me?



Notes


[1] Gesturing to Rory Sutherland’s Alchemy which may possibly be found chopped into tiny pieces in whose latent space?


[2] More gestures to Sutherland’s written work.


[3] Gesturing to an idea of Stuart Kauffman’s and popularised by Steven Johnson: Johnson, S. (2010). Where Good Ideas Come From: A Natural History of Innovation. Allen Lane.  
