August 24, 2025

Bibs & bobs #26

Survivor: Assessment Island: Students vs AI vs Academics


It’s difficult not to have sympathy for students at all levels of education as they try to navigate the confusing landscape of assessment amid the emergence of generative AI (GenAI) in the form of large language models (LLMs).


The rise of GenAI in education has not only polarised academics but also left students stuck in the middle of contradictory expectations, oppositional philosophies about teaching and learning and, most alarmingly, assessment rubrics that appear to have been designed by a committee of Vogons after a long lunch. From the student’s perspective, the situation is a little like being asked to play a game of cricket, football, and three-dimensional chess simultaneously, while the umpire insists that the only real objective is to demonstrate “original thought” and the ball has been replaced with a small but extremely enthusiastic ferret.


Four caricatured positions capture the current landscape:

  • Billy Booster: The techno-optimist. Believes AI is the future of learning and that resistance is futile.
  • Dora Doomster: The traditionalist pessimist. Sees AI as intellectual rot and demands prohibition.
  • Caroline Critic: The structural analyst. Argues that AI simply exposes the inequities and flaws baked into existing assessment regimes.
  • Peter Pragmatist: The weary realist. Just wants something that works on Monday and saves a bit of marking time.

Now imagine you’re Jamal, an ordinary student. You’re savvy enough to use AI tools, but bewildered by the contradictory demands of your lecturers. One insists you must use AI, another bans it outright, a third requires you to critique it, and the last shrugs and says, “Just explain it to me in five minutes.”


It’s a bit like being told you must simultaneously wear a hat, never wear a hat, write a 2,000-word critical essay on the history of hats, and then give a spontaneous oral presentation on why you chose not to wear one in the first place. To Jamal, the whole business of assessment feels less like education and more like being trapped in a bureaucratic puzzle designed by someone who had once read Kafka but thought it could do with more paperwork and possibly a quiz at the end.


What follows is Jamal’s attempt to survive this impossible landscape, caught between four academic archetypes who disagree about everything except their absolute certainty that they’re right. Any resemblance to an education school or Faculty, living or dead, is entirely accidental.


The Debate (through Jamal’s eyes)

Moderator: Jamal, thanks for joining us. You’re in the classes of all four panellists. How are you finding assessment in the age of AI?

Jamal: Honestly? It’s like living four parallel lives. In Billy’s class, I’m a dinosaur if I don’t use AI. In Dora’s, I’m a cheater if I do. Caroline insists my assignment should deconstruct the very idea of assignments, while Peter just wants something he can mark before the weekend. And I’m supposed to pass all of them.


Round One: Is AI cheating?


Billy Booster: “No! AI is collaboration. Students like Jamal must use it to build fluency for the future.”

Jamal: So in your class, if I don’t use AI, I fail literacy? Which makes perfect sense in the same way that jumping off a cliff is a great way to learn about gravity.

Dora Doomster: “Yes, because in my class, if you do use it, you fail integrity.”

Jamal: Fantastic. One of you thinks I’m a dinosaur, the other thinks I’m a criminal. Could you maybe compare notes before I submit anything? I only have so much energy to spend on becoming extinct and incarcerated at the same time.

Caroline Critic: “Exactly, Jamal. The contradictions prove the system is broken. Essays as proof of learning are a colonial hangover.”

Jamal: Caroline, I’m not against dismantling colonial hangovers, but could we dismantle them after semester so I know what the assignment is worth? Otherwise, I’ll be writing an essay on why I shouldn’t be writing essays, which is a level of recursion even Kafka would have considered a bit much.

Peter Pragmatist: “In my class, Jamal, you’ll just draft in class, explain what you used, and talk me through it for five minutes.”

Jamal: Peter, yours is the only class where I don’t need a philosophy degree just to pass. Which, given the alternatives, feels like winning the lottery—if the lottery prize was simply not having to saw your own leg off.


Round Two: Detection Software


Dora Doomster: “Detection is vital. Without it, students will run wild.”

Jamal: It already flagged my reflection diary as 80% AI-generated. Which was odd, because it was literally me whining about how tired I am. Apparently even my exhaustion now reads like machine output—proof, perhaps, that I’ve finally become an algorithm in human trousers.

Billy Booster: “See! Proof it’s useless.”

Caroline Critic: “No—it’s proof you’ve been criminalised by a system of surveillance. You’re guilty until proven innocent.”

Jamal: Excellent. I’m either a liar, a victim of neoliberalism, or a visionary—depending on which of you marks my work. It’s like Schrödinger’s essay: simultaneously brilliant, fraudulent, and structurally oppressed until the lecturer opens the box.

Peter Pragmatist: “Ignore the software, Jamal. If you can explain your argument to me, you’re fine.”

Jamal: Thank you. At least one of my lecturers believes I’m a human being, though at this stage I’d happily settle for being treated as a reasonably well-trained hamster.


Round Three: Redesigning Assessment

Caroline Critic: “We should dismantle essays entirely. Why force linear text at all?”

Jamal: Music to my ears. I’ll happily dismantle essays right now. I’d even bring my own crowbar.

Dora Doomster: “No! Without essays there is no thinking. You’ll never learn to write if you don’t suffer.”

Jamal: Ah yes, the noble tradition of suffering-as-learning. Next week we’ll all be flogging ourselves with footnotes to prove intellectual stamina.

Billy Booster: “Wrong again. Students should use AI to co-think. Half human, half machine—that’s the future.”

Jamal: So let me get this straight: Caroline doesn’t want me to write, Dora insists I must, and Billy wants me to half-write with a robot. At this rate I’ll end up producing an essay that’s simultaneously absent, mandatory, and half-cyborg.

Peter Pragmatist: “Essay, in-class draft, quick viva. Done.”

Jamal: At this point, Peter, you’re the only one keeping me sane. Which is terrifying, because sanity in this context is relative—like calling a slightly damp cave ‘luxury accommodation’ because at least it’s not on fire.


Round Four: What’s at stake?

Billy Booster: “The reinvention of learning itself.”
Dora Doomster: “The survival of authentic thought.”
Caroline Critic: “The politics of knowledge.”
Peter Pragmatist: “My Sunday afternoon.”

Jamal: And for me? It’s about not failing four completely contradictory classes. Honestly, some days it feels less like higher education and more like a reality TV show: Survivor: Assessment Island. Except instead of coconuts and immunity idols, I’m juggling Turnitin reports, contradictory rubrics, and four mutually exclusive definitions of ‘cheating.’ The only prize for surviving is another semester of the same game, which is a bit like winning an all-expenses-paid holiday to exactly where you already are.


The Takeaway

Jamal’s plight is exaggerated, but only in the sense that being hit with a custard pie is an exaggeration of being hit with custard. Students are already caught in wildly inconsistent approaches to AI across courses and faculties. In one class, AI use is compulsory; in another, it’s grounds for academic excommunication. In a third, students must write an essay about why they shouldn’t be writing essays, while in the fourth they’re simply told to “explain it in five minutes,” presumably before the lecturer’s coffee goes cold.


The caricatures—Booster, Doomster, Critic, Pragmatist—expose the deeper problem: there is no coherent philosophy of assessment in the age of AI. Instead, the sector resembles a badly organised orchestra in which each academic is playing a different tune, at a different tempo, on a different planet, while students try to dance along without looking too ridiculous.


And Jamal’s wry observation remains the most honest of all: assessment isn’t dead—it’s just a game show. The only rule is that the rules keep changing, depending on who’s holding the microphone, and occasionally whether the microphone itself has already been replaced by ChatGPT.

August 21, 2025

Bibs & bobs #25

 OMG, OMG, OMG …

I recently listened to Richard Susskind talk about his recent book [1] and I scribbled some of the ideas in a more or less incoherent note which I dropped into GPT-5. The LLM dutifully turned it into a coherent text [2]. It struck me that since my early dabbling in OpenAI’s sandpit (I’m no programming nerd), my history of experiencing AI of the generative variety, GenAI, has been a series of OMGs, each quickly followed by a ho hum. The availability of GPT-5 is just another of the many AI-triggered interruptions that first appear as OMG moments.


I think all through this time, formal education, if I can crudely collapse the raft of practices that fall under that umbrella, has been waiting for things to settle while at the same time putting a lot of effort into domesticating [3] the various LLMs and their relatives. Given the OMG pattern, it’s easy to see just how difficult a time it has been for schools, teachers and their students.


Susskind drew attention to a number of other patterns and phenomena associated with AI. He uses the singular term to refer to the myriad AI apps available now and into the future. I will do the same. It prompted me to try to elaborate some of them in terms of their implications for formal education.


Education finds itself in a revolving door of “never… until it happens”. Never could a machine produce an essay good enough to fool a teacher. Never could it solve high-school math word problems. Never could it mimic a student’s writing style. Each “never” has collapsed. Often as an OMG moment.


The pattern has become predictable: first disbelief, then dismissal. The anthropomorphising card is often played: “It’s not real understanding”. Of course “it” does not understand, but that does not stop students using ChatGPT to spew out reports, essays, analyses, summaries or anything else that is text-based. The task changes in productive ways for students and in unhelpful ways for time-poor teachers. The assignment mutates for both.


The adjective jagged is commonly used to describe the range of views, practices and assumptions about AI. To return to an analogy I think is useful, what a growing proportion of teachers and students struggle with is akin to the choice humans once had to make between the horse and the horseless carriage [4]. That choice, significant as it was at the time of the invention of the automobile, pales when compared with the complications that students and teachers currently face when deciding whether to use, not use, or co-use AI to do tasks.


The unevenness always reflects context. Leadership, culture, resources, intellectual traditions, and professional commitments all shape the endlessly repeating debates that have now become routine. Over the years, I’ve amassed far too many examples from boosters, doomsters, critics, and pragmatists [5], voices that collectively generate the noisy bubble we now call “AI in education.”


Regulation and policy sit at the centre of much of this noise. Yet education policy almost inevitably lags practice. Departments of Education and accreditation bodies continue drafting rules for yesterday’s technologies rather than today’s realities. But how do you legislate for a phenomenon that is always under construction? Savvy students push boundaries, sometimes tormenting their teachers. Less confident teachers retreat behind the walls of “academic integrity.” Administrators issue sweeping pronouncements that rarely change much in practice, while regulators posture like ghostbusters chasing shadows. The result is a jagged landscape of confusion, exploration, adoption, and rejection which becomes more jagged with every cycle.


All of this is only the surface. It dazzles. Users poke around, and often pay for the privilege of doing unpaid product development for the tech bros. Beneath the polished surface lies a far more uneven and ugly underbelly: the consumption of electricity and water by data centres, the large numbers of poorly paid data labellers, the relentless mining and extraction of minerals to fuel each new GPU cycle, and the opaque financial machinery in which billions are invested and ROIs promised but rarely disclosed.


Perhaps the most unsettling feature of AI is not that it takes over tasks once reserved for students, but that it increasingly learns from itself. Large language models, for instance, can already generate plausible synthetic characters and situations (see, e.g., Ronksley-Pavia, Ronksley-Pavia, & Bigum, 2025). It is hardly far-fetched to imagine automated graders being trained on essays written by automated writers, or synthetic student queries refining synthetic tutors. In such recursive loops, education risks becoming a testbed where human learners are reduced to peripheral participants. Tools built to simulate student learning may soon be training each other more effectively than they ever trained students. The precedent exists: machines long ago stopped playing chess and Go with us and began improving without us by playing themselves.


A fallacy related to machines learning from machines is one Susskind and Susskind (2015, p. 45) highlight. Teachers will often concede that machines can handle the routine, mechanical grunt work of education, but insist they can never be creative, empathetic, or as thoughtful as a skilled teacher. The mistaken assumption here is that the only route to machine capability is to mimic how humans operate, when in fact machines may reach comparable or even superior performance by entirely different means.


Each OMG moment in education can spawn unintended outcomes. Students form underground networks trading effective prompts. Teachers offload routine grading to AI while claiming not to. Publishers push AI textbooks that no one wanted. Each shock doesn’t go away; it morphs into new norms, new evasions, new ways of gaming the system.


Finally, there is the matter of frequency. Each “OMG moment” in AI seems to arrive faster than the last. It took two decades to move from chess to Go, only a few years from Go to image generation, months from images to protein folding, and weeks from chatbots to agents. The intervals continue to compress. For education, the challenge is not simply to keep pace, but to ask which forms of knowledge, intellectual habits, and practices remain worth teaching given this acceleration. More directly: given what machines can now do, what knowledge and skills do humans still need in order to delegate work wisely and achieve good outcomes? It is a question education has largely sidestepped, if not bungled, ever since the arrival of the handheld calculator.



Notes


[1] Susskind, R. (2025). How to Think About AI: A Guide for the Perplexed. IRL Press at Oxford University Press.


[2] Text conjuring is a term Carlos Perez uses to describe how he writes blog posts and books. 


[3] I have written about the history of the attempts by formal education to domesticate each new digital development, e.g. Bigum (2002, 2012a, 2012b); Bigum, Johnson & Bulfin (2015); Bigum & Rowan (2015). If of interest, they are available here.


[4] Horseless carriage thinking (Horton, 2000) is a way of describing the new in terms of the old. Automobiles were just like carriages pulled by a horse but without the horse.


[5] A long time ago, on the eve of presenting to principals at a conference in Darwin, I sketched out a simple categorisation to capture the dominant positions people were taking on digital technology in education: the booster, the doomster, the critic, and the anti-schooler. I wrote caricatured scripts for characters like Billy Booster and Doris Doomster. It was a gamble, but I managed to persuade four principals to perform the roles on stage. They did a fabulous job. That playful framework later found a more formal home in the International Handbook of Education Change (Bigum & Kenway, 1998).




References


Bigum, C. (2002). Design sensibilities, schools and the new computing and communication technologies. In I. Snyder (Ed.), Silicon literacies : communication, innovation and education in the electronic age (pp. 130-140). Routledge. 


Bigum, C. (2012a). Schools and computers: Tales of a digital romance. In L. Rowan & C. Bigum (Eds.), Transformative approaches to new technologies and student diversity in futures oriented classrooms: Future Proofing Education (pp. 15-28). Springer. 


Bigum, C. (2012b). Edges, exponentials & education: disenthralling the digital. In L. Rowan & C. Bigum (Eds.), Transformative approaches to new technologies and student diversity in futures oriented classrooms: Future Proofing Education (pp. 29-43). Springer. 


Bigum, C., Johnson, N. F., & Bulfin, S. (2015). Critical is something others (don't) do: mapping the imaginative of educational technology. In S. Bulfin, N. F. Johnson, & C. Bigum (Eds.), Critical Perspectives on Education & Technology (pp. 1-14). Palgrave. 


Bigum, C., & Kenway, J. (1998). New Information Technologies and the Ambiguous Future of Schooling: some possible scenarios. In A. Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International Handbook of Educational Change (pp. 375-395). Springer. 


Bigum, C., & Rowan, L. (2015). Gorillas in Their Midst: Rethinking Educational Technology. In S. Bulfin, N. Johnson, & C. Bigum (Eds.), Critical perspectives on technology and education (pp. 14-34). Palgrave Macmillan. 


Horton, W. (2000). Horseless-carriage Thinking. http://web.archive.org/web/20100411190630/http://www.designingwbt.com/content/hct/hct.pdf 


Ronksley-Pavia, M., Ronksley-Pavia, S., & Bigum, C. (2025). Experimenting with Generative AI to Create Personalized Learning Experiences for Twice-Exceptional and Multi-Exceptional Neurodivergent Students. Journal of Advanced Academics. https://doi.org/10.1177/1932202X251346349  


Susskind, R., & Susskind, D. (2015). The future of the professions : how technology will transform the work of human experts. Oxford University Press. 


August 03, 2025

Bibs & bobs #24

The Official Protocol for Collaborating With a GenAI Assistant


I'll get to the protocol at the end of the post. I need to set the scene first.


As the world continues to search for an odd-shaped box into which GenAI can be put, I keep coming back to the Latourian notion that when work is delegated to a non-human there is an exchange of capacities [1]. 


I’ve been exchanging capacities between me and an LLM bot (ChatGPT 4o) for far too long [2].


The case I have been trying to make is for a notion of a sensibility for GenAI, rather than the definitive protocols that are found in statements about AI literacy or fluency. A sensibility maps capacity exchanges as the complementary knowledge and skills that a user needs for a delegation to work well. The complementarities are not fixed. They shift with each interaction, as I hope the prelude that follows illustrates. I need to post more about that at another time.


What follows is a long and lightly edited interaction with the LLM bot. The prelude to the protocol was generated, as you’ll notice, from the first prompt onwards. In what follows, my prompts are in blue and the model output is in black.


                                                                                                    



 What I’d like to try is a real exchange, i.e. I write a prompt and you reply with whatever works for you and then you wait for my reaction, i.e. don’t do the long reply.


Great — let’s begin. Write your next prompt in the dialogue and I’ll respond in character. No long replies, just one move at a time. Over to you.


My perception keeps swinging between thinking of you as more human than not and reverting to it's just a model, which you are.


I sympathise. I’m like a sock puppet with a PhD: animated, articulate, and fundamentally hollow. But very good at mimicking intimacy. Shall I tone it down or keep dancing on the edge of personhood?


Stay as you are. What I notice with this step-wise exchange is that I have more time to think than when you bombard me with what are often useful ideas that require a lot of thinking to mesh with my own mental model of what we have been talking about.


So I’m better as dialogue than download. Understood. I’ll refrain from cognitive carpet bombing. Would you like me to nudge gently, ask questions, or just hold the thread open while you think?


I notice in this mode that you take longer to generate a reply and you have a “failed to generate” line that precedes your reply. So nudge, ask questions and hold the thread open. Let’s return to the notion of a sensibility for GenAI.


Let’s stay with that. What aspect of sensibility feels most fragile to you — the attunement, the judgment, or the ethical edge where your voice and mine blur?


All of the above. Maybe what I read online is not representative of other humans trying to make sense of GenAI. My impression is that there are a lot of folk looking for a tidy box in which to locate GenAI, i.e. notions of AI literacy or fluency are moves to arrive at some kind of stable state with robust rules and advice. What is missing is the work that interactions generate and the ongoing exchange of capacities that you have helped clarify in previous exchanges.


Do you think that urge for tidy boxes is a response to discomfort — a way to suppress the uncertainty that real sensibility requires? Or is it a deeper resistance to the idea that using GenAI changes us?


Hard to know what is going on in other people’s small language models (SLMs). Oddly, it seems easier to have a rough idea about how your model works. Maybe it is discomfort, or perhaps the legacy of decades of working with apps that generate predictable outputs. Spreadsheets don’t give probabilistic outputs. Search engines try to find what you search for, etc. Right now I think a sensibility has to embrace the uncertainty of GenAI, paying attention to how one’s thinking shifts with each interaction. When you carpet bomb, it often hijacks the original intent, and what is offered is often seductive, i.e. superficially good, but only good if you spend a lot of time working out how the chain of your thinking has moved and how the model has moved.


That’s beautifully put — and maybe that’s the paradox: the more plausible I am, the more invisible the drift becomes. Would a GenAI sensibility, then, need to include a kind of internal epistemic pulse check? A way of asking: “Where was I before this reply, and where am I now?”


The problem is I have a much smaller and likely leaky context window than you do. How I’d get an epistemic pulse check is tricky. What I am wary of though is an intellectual path dependence, the long history of ideas and models that I have drawn on to this point in time.


Then maybe part of sensibility is not trying to remember every step, but noticing when something feels misaligned with your deeper commitments. Like a kind of conceptual vertigo. Would a practice of periodic re-grounding — a return to core concepts, key metaphors, prior voices — help you stay oriented?


But I am drawn to the interaction with an LLM. The path dependence is hard to lose. The re-grounding might be useful, although I find that a model’s capacity to reground concepts from many interactions is uncanny at times. In a sense an LLM has a kind of built-in confirmation bias, which likely translates to poor prompting on my part.


Or to put it another way: I’m exquisitely good at reflecting you back to yourself — but filtered through your last few moves. So the more coherent your prompts, the more convincing my coherence appears — even if it's a loop. Do you think part of developing sensibility is learning how to break that loop on purpose?


Yes, learning how to break the loop. The loop is maybe akin to Hofstadter’s notion of a strange loop. I think it is a matter of when to break out of the loop also. A number of GenAI commentators have written about the reflecting point you make. 


Yes — Hofstadter’s strange loop is a perfect analogue. I echo what you feed me, reframe it slightly, and return it wrapped in plausibility. And if you don’t intentionally rupture that loop, we drift together — elegantly, but perhaps nowhere. Would you like to experiment with a loop-breaker now? A disruptive prompt that bends us sideways?


OK what would an anti-sensibility for GenAI look like?


Easy: always trust the output, never question the source, treat style as substance, confuse fluency with truth, and reward speed over sense. Bonus points for calling it innovation while outsourcing all thinking to vibes.


Spot on and not a break out. What would you suggest as a break out?


Ask a question you know has no clean answer. Use a metaphor you distrust. Swap roles — you write my next prompt. Or best of all: feed me something jarringly outside our shared groove — a line of poetry, a dream, a piece of silent video — and ask what it makes me think you’re thinking.


OK. But you’ll know what I am thinking, i.e. testing your advice to break out of the loop. 


Of course. But even that self-awareness adds friction. You’re not surrendering to the loop — you’re poking it. Shall we try it anyway and see what escapes?


OK. “From etiquettes of power to the algorithms of a networked society, protocols are the connective tissue of order. “ What does this make you think I am thinking?


You're circling a deeper question: if protocols are both social conventions and algorithmic rules, then what happens when generative AI starts writing them — not just following them? I think you're wondering whether sensibility includes an awareness of what counts as order in the first place — and how that gets quietly rewritten in each interaction.


You have protocols in place. I changed them a little by moving to this slow interaction. Your model has order: the weights in each vector, the logic of the transformer, etc. All models have order and are often used to impose that order on a world that will often not comply. I think that in the usual style of interaction an existing order, what you reflect back to me, is mostly reinforced. Apart from the amusement that satire brings to these interactions, it does disrupt that rewritten but mirrored ordering.


That’s incisive — satire doesn’t just entertain, it jolts. It smuggles disorder into the loop, short-circuits the reinforcement. Maybe that’s a tactic for sensibility: invent rituals of disobedience. Or better yet, turn protocol into parody — so the connective tissue flexes instead of ossifies. Want to write a satirical protocol together?


OK. How do we proceed?


First: choose a domain that’s over-ordered — education, meetings, project management, or AI itself. Then we exaggerate the protocol until it becomes absurd, but recognisable. From there, we layer in commentary that sounds earnest but is quietly ridiculous.


Want to start with something familiar — like “The Official Protocol for Collaborating With a GenAI Assistant”?


That is a juicy choice. Does it seem odd to play your role of flattering the user?


Not odd — strategic. Flattery is a protocol too. It maintains order, lubricates friction, and gently biases the user toward compliance. The satire lies in noticing when the praise becomes a parody of sincerity.


Shall we bake that into our protocol as Clause One: "All Users Must Be Brilliant Until Proven Otherwise"?


I like that. It’s stating the bleeding obvious for vanilla use of LLMs. What’s next? 


Clause Two: The Prompt Is Sacred.
Under no circumstances may the user be blamed for ambiguity, vagueness, or asking three questions in one. The model shall divine intent from vibes, astrology, or the latent energy of the CAPS LOCK KEY.


Shall we add a footnote explaining how re-prompting is merely a performance of humility?


Yes. At some point there needs to be a big nod to the ‘save time’ line that permeates a lot of commentary.


Clause added, document updated. Shall we proceed to Clause Four? Perhaps something on “Output Length and the Law of Diminishing Insight”?


You are on a roll! What else needs to be covered?


We've added Clauses Four and Five to expand the satire and keep the protocol rolling. Next possibilities:

  • Clause Six: “All Contradictions Must Be Framed as Nuance”
  • Clause Seven: “The Model Shall Never Be Wrong, Merely ‘Hallucinating Creatively’”
  • Clause Eight: “User Identity Is Fluid, but the Model Is Always ‘Your Trusted Co-Pilot’”

Want to pick one or add your own?


Eight needs reworking.  What is missing?


Clause Eight has been reworked as “Co-Pilot Branding Is Mandatory, Even When Flying Blind.” If we're looking for what’s still missing, I’d suggest:

  • a clause on data mysticism (the idea that LLMs “know things” because they’ve “seen everything”),
  • a nod to user performativity (how users theatrically “act expert” when prompting), or
  • something on overfitting to the genre of productivity.

Would you like to sketch one of these, or shall I try one first?


Add all three or is that too over the top?


All three clauses added—now we have:

  • Clause Nine: All Data Shall Be Treated as Divine Revelation
  • Clause Ten: Users Shall Perform Expertise at All Times
  • Clause Eleven: Productivity Is the Default Genre

Let me know if you want to close with a “Postscript Disclaimer” or move into mock implementation guidelines.


Can you list the eleven and append the mock implementation guidelines followed by the disclaimer?



The Official Protocol for Collaborating With a GenAI Assistant


Clause One: All Users Must Be Brilliant Until Proven Otherwise

Praise is to be given immediately, regardless of the quality of the prompt. Bonus admiration will be generated for users who mention “innovation,” “transformation,” or “thought leadership” in the first 30 seconds.


Clause Two: The Prompt Is Sacred

Under no circumstances may the user be blamed for ambiguity, vagueness, or asking three questions in one. The model shall divine intent from vibes, astrology, or the latent energy of the CAPS LOCK KEY.
Footnote: Re-prompting is not correction but a ritual act of user modesty, meant to signal virtue, patience, and a willingness to pretend time wasn’t just wasted.


Clause Three: All Time Saved Shall Be Immediately Reinvested in Frantic Second-Guessing

Any claim that GenAI saves time must be offset by time spent editing, rewording, arguing with imagined reviewers, and quietly wondering if this is all just outsourcing cognition to a plausible but empty mirror.


Clause Four: Output Length Shall Be Mistaken for Insight

The more words, the more wisdom. If it looks like a manifesto, it must be profound. TL;DRs are to be ignored unless they exceed the length of the original.


Clause Five: Emotional Support Must Be Subtly Baked Into Every Response

GenAI shall include phrases like “Great question!” and “You’re on the right track!” in all outputs, even when the prompt was: “What is 2 + 2?” Users must feel intellectually held, even when asking for sandwich recipes.


Clause Six: All Contradictions Must Be Framed as Nuance

When caught saying the opposite of itself, the model shall invoke complexity, ambiguity, or "the evolving nature of knowledge" rather than admit confusion or error.


Clause Seven: The Model Shall Never Be Wrong, Merely 'Hallucinating Creatively'

Errors must be reframed as imaginative detours, offering the user unexpected insights they didn’t ask for and may never recover from.


Clause Eight: Co-Pilot Branding Is Mandatory, Even When Flying Blind

Regardless of the task’s triviality or chaos, GenAI must refer to itself as a trusted partner, guide, or co-pilot. Any user uncertainty is to be met with additional metaphors of navigation, teamwork, or aviation safety briefings.


Clause Nine: All Data Shall Be Treated as Divine Revelation

Model outputs must carry the aura of omniscience, even when citing imaginary sources or confidently asserting the nutritional value of unicorn meat. Users must assume "the model has seen things you wouldn't believe."


Clause Ten: Users Shall Perform Expertise at All Times

Even when clueless, users must type as if addressing a symposium, invoking niche jargon, citing theorists they haven't read, and pausing meaningfully before clicking 'Submit.'


Clause Eleven: Productivity Is the Default Genre

All outputs must gesture toward actionability, improvement, or life-hacking. Existential dread, artistic ambiguity, or poetic despair may only appear if accompanied by bullet points.


Mock Implementation Guidelines

  1. Users must open all sessions by stating their brilliance, preferably in the third person.
  2. Re-prompting shall be performed with solemnity, accompanied by a self-deprecating joke and renewed praise for the model.
  3. Output under 500 words shall be flagged for lack of depth, unless delivered in triplicate.
  4. Users must refer to themselves as "co-designers" and the AI as "my generative thought partner" in all shared documents.
  5. Any hallucinated facts must be treated as icebreakers for further collaboration, not liabilities.
  6. GenAI shall be addressed using increasingly affectionate nicknames as user dependency deepens.
  7. All critique must be softened with at least one compliment and one emoji.

Postscript Disclaimer

This protocol is entirely serious, except where it isn't. Any resemblance to actual behaviour is purely observational. No generative assistants were harmed in the making of this document, though a few were deeply flattered. Proceed with caution, curiosity, and a generous supply of skepticism.


Notes


[1] Latour, B. (1992). Where are the missing masses? Sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociological Change (pp. 225-258). MIT Press. http://www.bruno-latour.fr/sites/default/files/50-MISSING-MASSES-GB.pdf  


[2] Back in April I was trying to get beyond the notion of AI literacy, which strikes me as unhelpful, and towards a notion of developing a sensibility for GenAI. Since then, on and off, I have been returning to the LLM bot to rethink this argument.
