December 09, 2025

Bibs & bobs #29

 



Semiotics and Agency in LLM Education, a wee manifesto

(or: How to Stop Worrying and Learn to Love Hybrid Intelligence) [1]


1. A simple heresy to begin with


LLMs are not tools. They are talkative actors in our networks.


Let’s reject the quaint notion that a language model is a “resource,” as if it were chalk with delusions of grandeur. Instead, let’s think of LLMs as semiotic companions: entities that co-produce meaning, muddle agency, and quietly reorganise how humans think, teach, and justify things to committees.


If this makes you uncomfortable, good. Discomfort is the first sign you’re paying attention.


2. Every interaction with an LLM is a redistribution of labour—

and we should study the choreography, not play the attribution silliness game.


Humans delegate. Machines respond. Humans interpret. Machines rephrase. Somewhere in this swirl, capabilities shift. The interest is not in scolding students for using AI, nor in applauding its “efficiency.” Let’s map the material-semiotic traffic of work, meaning, responsibility, and credit. If that sounds suspiciously Latourian, that’s because it is [2].


3. Refuse purity myths.


There is no such thing as “unassisted work”—only unacknowledged networks. Once upon a time, education pretended that thought was produced in sealed human containers. Then along came LLMs, and suddenly the leaks became visible.


Good: visibility is liberating. Hybrid authorship is not a corruption; it is the natural state of intellectual life finally revealed in HD.


4. Instead of asking “Is this AI-generated?”, we ask the far more interesting question: “How did the human–machine ensemble produce this?”


This reorientation changes:

  • assessment
  • pedagogy
  • ethics
  • institutional trust
  • maybe, just maybe, the ontology of learning itself

It also has the delightful side-effect of making AI detection companies deeply unhappy. I reckon that should be considered a feature.


5. The project is not about taming AI. It is about understanding how humans improvise with new actors on the stage.


It’s important to resist domestication narratives (“AI must be controlled”). Although it is fun to wonder: who or what is being domesticated? Mutually assured domestication?


We also reject apocalypse narratives (“AI will replace us by Tuesday”).

Instead, we investigate the messy middle where humans and models negotiate meaning, miscommunicate gloriously, patch over each other’s flaws, and occasionally produce a bit of brilliance punctuated with a modicum of inane nonsense that neither side could claim alone.


6. We treat LLMs as a “thing in common”—a shared semiotic object that reorganises authority [3].


Following Rancière, LLMs unsettle the old monopoly of the explainer. Teachers no longer gatekeep meaning; models generate it in abundance. This is not the end of teaching. It is an invitation to re-stage what teaching might become.


This is not a threat but an historical opportunity: the redistribution of interpretive work.


7. Research that begins with curiosity, not containment.


Such research studies:

  • new capacities that emerge
  • the collapse of old categories
  • the semiotic turbulence of hybrid agency
  • the pedagogical futures opened by machines that can talk back

Refuse the urge to prematurely stabilise any of this, even though institutions crave certainty. What is needed is conceptual oxygen. 


8. This is an affirmation that play is a legitimate research method.


Play is not frivolous. Play is how humans explore shifting semiotic landscapes without pretending to have mastered them. To play with LLMs is to discover what agency feels like when it leaks, when it bends, when it returns an answer you didn’t expect, when it multiplies your workload inexplicably.


This work requires that sensibility.  


9. Finally: All this offers no silver bullets, only, and hopefully, better ways to see.


This approach is a lens, not a fix. A sensibility, not a policy. A working theory of hybrid meaning-making in a world that has pretended, for too long, that cognition was a solo act.


If any of this succeeds, education will become more honest, more humane, and far more interesting.


And if not? Well, at least the work will have mapped the network with style.




A Douglas Adams version of the manifesto
(or: The Mostly Harmless Guide to Semiotics, Agency, and Those Chatty Machine Things)


A manifesto in which learning outcomes panic, machines develop opinions, and humans try to look clever while the universe giggles.


1. Don’t Panic.

Especially when the LLM starts making sense before you do. 


For reasons no one entirely understands, humans have developed a habit of treating language models as “tools.” This is rather like calling a whale “a smallish fish,” or calling hyperspace travel “a brisk stroll in the park.” It is technically incorrect, metaphorically misleading, and—if you stare at it long enough—profoundly embarrassing.


The manifesto therefore begins with the only sensible instruction: Whatever happens, don’t panic. Panic leads to rubrics.


2. LLMs are not tools. They are semiotic hitchhikers.


They arrive uninvited, sit politely in the corner, and then suddenly help you write chapter four. They are the kind of companions who “just pop by to help,” and two hours later you realise they’ve reorganised your intellectual furniture. Treating them as simple implements is like treating the Babel fish as decorative earwax.


3. Assessment purity died around the time the machines learned to paraphrase. I offer my condolences.


Once upon a time, universities believed essays were the work of solitary geniuses hunched over mahogany desks lit by the soft glow of sincerity. Then someone plugged ChatGPT into the Wi-Fi. Now the illusion is gone, the solitude is gone, and if we’re honest, the mahogany desk was never there either.


Let’s agree to accept this gracefully rather than clutching our learning outcomes like a bureaucratic security blanket.


4. Hybrid agency is not a problem. Tuesdays, on the other hand, always are.


Humans have always relied on networks of help: books, friends, caffeine, last-minute panic, and elaborate rituals involving printers. The only difference now is that the helper responds in full sentences. This is not a crisis. This is an upgrade.


5. Stop asking “Did the student use an LLM?” Start asking “Did the ensemble produce something that makes sense?”


Honestly, the old question is as useful as asking whether a Vogon poem was written with a pencil or carved into the hull of a passing spacecraft. The result is still what it is, and it may still cause pain. Let’s shift from policing to interpreting, a much more dignified activity that also requires fewer court subpoenas.


6. Teachers are no longer the sole explainers of the universe. They are now co-explainers with machines that never sleep.


This is not the end of teaching. It is the end of pretending teaching ever relied on exclusive access to understanding. Think of LLMs as that overly keen workshop assistant who runs around behind you saying, “I can help! I can help!” while occasionally reorganising your toolbench into an avant-garde installation.


7. Play is essential. The universe is ridiculous; your methodology can be too.


Experiment. Prod. Poke. Ask absurd questions. Ask an LLM what a post-LLM essay looks like, and then ask it to explain its explanation in the style of a tired badger.


This is not frivolous. This is empirical research conducted with a sense of humour, which is the only known antidote for epistemic hand-wringing.


8. Remember: the meaning is not where you think it is. It’s somewhere between the human, the machine, and the somewhat (or totally) confused marking rubric.


Semiotics is about tracing these strange relationships—like exploring a planet where gravity occasionally takes the afternoon off. Agency, meanwhile, is shared, borrowed, lent out, misplaced, and occasionally returned with biscuit crumbs stuck to it. The job is simply to observe this with sincerity and mild amusement.


9. Finally: All this offers no promises. It simply offers an alternative frame.


It cannot guarantee clarity.
It cannot guarantee stability.
It especially cannot guarantee that learning outcomes will stop fainting every time an LLM enters the room.

But it can guarantee a better view of the absurdity, the possibility, and the wildly inventive hybrid creatures we call “students.”


In the great tradition of intergalactic research, that seems enough.



Notes


[1] A second, Douglas Adams version of the manifesto follows this one.


[2] As I’ve noted in previous posts, the exchange of capacities when humans get a machine to do something, a delegation, draws heavily on Latour, B. (1992). Where are the missing masses? The sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociotechnical Change (pp. 225-258). MIT Press. http://www.bruno-latour.fr/sites/default/files/50-MISSING-MASSES-GB.pdf


[3] After a throwaway line about Rancière’s notion of a thing in common on LinkedIn some days ago, Jack Tsao got in touch to point to some lovely work he recently published exactly in this vein: Tsao, J., & Nogues, C. (2024). Beyond the author: Artificial intelligence, creative writing and intellectual emancipation. Poetics, 102, 101865. https://doi.org/10.1016/j.poetic.2024.101865

December 07, 2025

Bibs & bobs #28


Assessment After AI: A Playful Field Guide for the Perplexed


A bit of fun with yet another paper on AI, which is lazy code for LLMs, and assessment: Assessment after Artificial Intelligence: The Research We Should Be Doing. It’s a paper that reads like a UN climate report for grading, only with more footnotes and fewer glaciers.


Imagine for a moment, a group of eminent assessment scholars locked in a room in Melbourne for three days. Not hostages, just academics at a workshop, which is basically the same thing minus the ransom note.


Their mission?


To work out how assessment is supposed to survive the arrival of artificial intelligence, by which they mean LLMs, those chatty silicon sidekicks that write essays faster than students can locate the assignment rubric.


Predictably, what came out of this locked-room scenario was not a quick fix, nor a silver bullet, nor even a mildly shiny paperclip. Instead, we got a framework: six big questions meant to guide the next decade of research while we pretend everything is still under control.


Let’s walk through these questions politely but with a bit of fun.


1. Why do we assess? (Other than tradition, habit, and institutional anxiety?)


The paper argues we must confront the “why” because AI makes authorship fuzzy. Translation: We’ve finally noticed that the emperor’s assessment rubric is wearing no clothes.


For years we pretended that a submitted document reflected some clean, isolated human mind. Now that a machine can write it, we’re forced to admit the truth: it was never that simple. Students have always borrowed, paraphrased, Googled, begged, borrowed, stolen, and occasionally read. LLMs just made this mess more visible.


2. Who is being assessed? (Hint: it’s no longer obvious.)


In a world where a student, a language model, three drafts, and a panic attack all co-produce the final essay at 2:39 AM, whose capability is the grade describing?


The authors elevate it to a grand philosophical crisis. A sceptic would shrug and say: “It only looks profound if you pretend assessment wasn’t always a group project—just one done slowly, expensively, and without the help of a polite robot.”


Still, they’re right to point out that a grade now reflects what might be called an actor-network: student + AI + institution + vague vibes + the teacher’s cognitive state before morning coffee + any number of interruptions by the student’s unconscious + the ambient temperature in the room where it happened. This should make us rethink “authorship” instead of doubling down on techno-policing.


3. What should we be assessing? (Besides whether students can avoid an LLM detector.)


The authors warn that focusing only on assurance of learning leads to absurdities—like redesigning tasks until they only measure who can be forced to hand-write something under fluorescent lights.


A more playful reframing might look like a choice between a few futures:

  • Assessing what humans still do better than machines (judgment, sense-making, moral reasoning, declining to read the terms and conditions before clicking “accept”), or
  • Assessing what humans can now pull off with machines—feats that were pure fantasy before LLMs muscled into the partnership, or
  • Clinging to the essay as if it were the last biscuit in the staff room

Their point stands: once you admit hybrid capability is real, old learning outcomes don’t just wobble—they clutch their pearls, faint dramatically, and start twitching like Victorian ghosts confronted with electricity.


4. How should we assess? (Preferably without turning classrooms into biometric exam prisons.)


The paper politely critiques surveillance creep. Can I put it more bluntly? Designing tasks to “keep LLMs out” is like trying to keep water out of a colander by yelling at it sternly. The authors call for interpretive credibility rather than technical containment.


The playful version: If you need a forensics lab to understand whether the student learned something, the assessment has already failed.


5. Where should assessment live? (Spoiler: not only in a 13-week unit.)


The authors wisely suggest programmatic assessment—multiple touchpoints, longitudinal signals, ecosystems of evidence.


They carefully avoid saying the obvious: The unit-as-king is a historical artifact from when libraries closed at 5pm and “online” meant a queue.


LLMs will force assessment to slip out of the unit silo and into something that looks more like a learning portfolio crossed with a personal data biography, minus the dystopia (hopefully).


6. What if…? (The fun part, but also the part most institutions are allergic to.)


The authors call this the speculative imagination space:

What if assessment didn’t rely on artefacts?
What if capability were evidenced through rehearsals, simulations, dialogue portfolios, or—radical thought—students actually doing meaningful work?


The paper invites this visioning politely. A sceptical expert might add: Universities only innovate when forced, so “what if” research is less blue-sky dreaming and more preparing sandbags before the LLM tsunami breaches the levee.


The Real Subtext (The Bit the Paper Can’t Say Out Loud)


Across its elegant prose and carefully built principles, the paper is basically whispering: “Friends, the game board has changed. Stop pretending it hasn’t.”


The six questions aren’t a roadmap; they’re a diplomatic way of telling the sector that:

  • Detection is dead.
  • Purity myths are dead.
  • Unit-based assessment is dying.
  • Learning outcomes may need reincarnation.
  • The essay will survive only if we justify it better than “we’ve always done it.”

And research needs to stop rediscovering old insights with LLMs sprinkled on top like parmesan.


A Final Analogy


Assessment after LLMs is like discovering your long-trusted telescope has been quietly fitted with an automatic star-drawing attachment. You can still look through it. You can still see something. But unless you understand who—or what—helped produce the image, you have no idea whether you’re engaging in astronomy or imagination.


This paper doesn’t fix the telescope. But it does give you a decent user manual, a philosophical warranty card, and a hint that maybe the real work is learning to observe the sky differently.


 
