December 01, 2025

Bibs & bobs #27

Deckchairs, disclosure forms, and the drowning of common sense

Listening to the current debates about AI in initial teacher education feels like watching shuffleboard players on the Titanic — arguing whether the deck is at a 10-degree or 15-degree angle while the water rushes in. To be fair, they’ve drawn up a committee to measure the angle, a sub-committee to report on the accuracy of the measurements, and an oversight panel to ensure that no one is playing shuffleboard in a way that contravenes shuffleboard policy. Meanwhile, the iceberg has already filed a change-of-address form and moved into the ballroom.


The opening borrows from John Perry Barlow’s 1995 keynote address to the second National Entertainment Industry Conference in Australia [1]. He had copyright law in mind. I have teacher education.


Why initial teacher education? Because today’s student teachers are tomorrow’s teachers, and tomorrow’s schools will inherit both them and whatever peculiar assessment rituals are dreamt up in the meantime. The future state of AI is anyone’s guess — though the optimists like to remind us that “today is the worst AI will ever be,” which is reassuring in the way being told your ship has “only just started sinking” is reassuring.


What we do know is this: these students will graduate after years of negotiating a confused and confusing assessment landscape. They will have survived bans, detectors, ritual declarations of originality, and possibly a health-and-safety briefing on how to operate a large language model without scalding themselves. The habits they form will shape what they do with their own students. At present, this is not a reassuring scenario.


The better move is not to panic, nor to deny, nor to set up another oversight panel to check whether the deck really is tilting. It is to treat AI as shared, an iceberg in common, and to design assessment that rewards what humans add: judgement, approximation, critique, and the occasional ability to recognise when you are in fact rearranging deckchairs.


A few random thoughts to try. Note that the game is, and always will be, experimental. I work with what I call small, affordable experiments, and I try to remain curious about it all.


Make AI the shared object: uneven skill becomes collective pedagogy, not secret advantage (Rancière [2]).


Name the panic: calculators had their iceberg moment too, and we blew it. The persistent question that is never asked, and always needs to be asked when a new digital doodad appears, is: what skills now complement what the machine can do? Hint: the answer in this case is not AI literacy.


Imagining AI will settle is a mistake. Policy sprints that ban AI are like insisting the Titanic will stay afloat if we outlaw icebergs. Students will comply—until 11:47 pm the night before the deadline. The real question isn’t prohibition, it’s complement design: what do humans add that the machine doesn’t?


Signals that indicate when the ship starts to list

Performative compliance: Students salute the “no AI” rule in public while quietly stashing lifeboats under their bunks. Outwardly, everything looks proper; beneath the deck, the ship’s already taking on water.


Equity gaps: Some passengers get private lifeboats (AI know-how, paid subscriptions, insider tips), while others are left clinging to floating debris. The voyage becomes less about learning and more about who managed to smuggle better gear aboard.


Detector theatre: Klaxons blare and red lights spin as if icebergs can be scared away with noise. False positives and negatives sow panic—students distrust the crew, the crew distrusts the passengers, and no one’s actually steering.


Rubric gaming: Instead of learning to navigate, students learn to rearrange deckchairs in perfect formation. Points are earned for screenshots and decorative compliance rather than for charting a safe course through the ice.


Honesty tax: Those who admit to using AI are forced to fill out paperwork in triplicate, like passengers being charged extra for wearing life jackets. The lesson learned: concealment is quicker than compliance.


Avoidance strategies (so you don’t go down with the ship)

Ask for a judgement trace: like a ship’s log, students record their decision making, edits, and—most importantly—their reasons. When shared, these traces become a collective navigation chart: everyone sees where the icebergs were and how others steered around them.


Emphasise complementary skills: AI may handle the heavy lifting, but only humans can sense when the compass is drifting, estimate distance to shore, or decide when “close enough” is good enough. Building these habits develops a sturdier AI sensibility—less dazzled by polish, more attuned to sound judgement.


Reward openness: instead of punishing disclosure, treat it as putting lifeboats on deck. When students clearly show how they used (or didn’t use) AI, the whole vessel is safer. Honesty should feel like the shortest route to dry land.


Reinvest time saved: if AI bails some water, don’t just admire the empty bucket. Redirect that time to better thinking, reflection, or new explorations, work that keeps the ship not just afloat but moving toward somewhere worth going.


Incentives steer the ship

Rubrics, deadlines, and detectors shape behaviour. If you reward polish, you get polish. If you reward judgement, you get judgement.


AI won’t vanish with the band playing. It will still be redecorating the ballroom, handing out essay scaffolds while the water rises. Assessment can’t prevent the iceberg—but it can teach students how to steer.


Apologies for the absurdly over-used analogy.


Notes


[1] Barlow was a Republican, a rock lyricist (the Grateful Dead), an ex-cattle rancher, and a computer telecommunications civil liberties advocate. The first sentence of his keynote: 


"Yesterday when I was listening to those people arguing about copyright law I felt like I'd come across a gang of shuffleboard players on the deck of the Titanic arguing about the angle of the deck.”


Tunbridge, N. (1995, September). The Cyberspace Cowboy. Australian Personal Computer, 2-4.


[2] Rancière, J. (1991). The Ignorant Schoolmaster: Five Lessons in Intellectual Emancipation. Stanford University Press.  

August 24, 2025

Bibs & bobs #26

Survivor: Assessment Island: Students vs AI vs Academics


It’s difficult not to have sympathy for students at all levels of education as they try to navigate the confusing landscape of assessment amid the emergence of generative AI (GenAI) in the form of large language models (LLMs).


The rise of GenAI in education has not only polarised academics but also left students stuck in the middle of contradictory expectations, oppositional philosophies about teaching and learning and, most alarmingly, assessment rubrics that appear to have been designed by a committee of Vogons after a long lunch. From the student’s perspective, the situation is a little like being asked to play a game of cricket, football, and three-dimensional chess simultaneously, while the umpire insists that the only real objective is to demonstrate “original thought” and the ball has been replaced with a small but extremely enthusiastic ferret.


Four caricatured positions capture the current landscape:

  • Billy Booster: The techno-optimist. Believes AI is the future of learning and that resistance is futile.
  • Dora Doomster: The traditionalist pessimist. Sees AI as intellectual rot and demands prohibition.
  • Caroline Critic: The structural analyst. Argues that AI simply exposes the inequities and flaws baked into existing assessment regimes.
  • Peter Pragmatist: The weary realist. Just wants something that works on Monday and saves a bit of marking time.

Now imagine you’re Jamal, an ordinary student. You’re savvy enough to use AI tools, but bewildered by the contradictory demands of your lecturers. One insists you must use AI, another bans it outright, a third requires you to critique it, and the last shrugs and says, “Just explain it to me in five minutes.”


It’s a bit like being told you must simultaneously wear a hat, never wear a hat, write a 2,000-word critical essay on the history of hats, and then give a spontaneous oral presentation on why you chose not to wear one in the first place. To Jamal, the whole business of assessment feels less like education and more like being trapped in a bureaucratic puzzle designed by someone who had once read Kafka but thought it could do with more paperwork and possibly a quiz at the end.


What follows is Jamal’s attempt to survive this impossible landscape, caught between four academic archetypes who disagree about everything except their absolute certainty that they’re right. Any resemblance to an education school or faculty, living or dead, is entirely accidental.


The Debate (through Jamal’s eyes)

Moderator: Jamal, thanks for joining us. You’re in the classes of all four panellists. How are you finding assessment in the age of AI?

Jamal: Honestly? It’s like living four parallel lives. In Billy’s class, I’m a dinosaur if I don’t use AI. In Dora’s, I’m a cheater if I do. Caroline insists my assignment should deconstruct the very idea of assignments, while Peter just wants something he can mark before the weekend. And I’m supposed to pass all of them.


Round One: Is AI cheating?


Billy Booster: “No! AI is collaboration. Students like Jamal must use it to build fluency for the future.”

Jamal: So in your class, if I don’t use AI, I fail literacy? Which makes perfect sense in the same way that jumping off a cliff is a great way to learn about gravity.

Dora Doomster: “Yes, because in my class, if you do use it, you fail integrity.”

Jamal: Fantastic. One of you thinks I’m a dinosaur, the other thinks I’m a criminal. Could you maybe compare notes before I submit anything? I only have so much energy to spend on becoming extinct and incarcerated at the same time.

Caroline Critic: “Exactly, Jamal. The contradictions prove the system is broken. Essays as proof of learning are a colonial hangover.”

Jamal: Caroline, I’m not against dismantling colonial hangovers, but could we dismantle them after semester so I know what the assignment is worth? Otherwise, I’ll be writing an essay on why I shouldn’t be writing essays, which is a level of recursion even Kafka would have considered a bit much.

Peter Pragmatist: “In my class, Jamal, you’ll just draft in class, explain what you used, and talk me through it for five minutes.”

Jamal: Peter, yours is the only class where I don’t need a philosophy degree just to pass. Which, given the alternatives, feels like winning the lottery—if the lottery prize was simply not having to saw your own leg off.


Round Two: Detection Software


Dora Doomster: “Detection is vital. Without it, students will run wild.”

Jamal: It already flagged my reflection diary as 80% AI-generated. Which was odd, because it was literally me whining about how tired I am. Apparently even my exhaustion now reads like machine output—proof, perhaps, that I’ve finally become an algorithm in human trousers.

Billy Booster: “See! Proof it’s useless.”

Caroline Critic: “No—it’s proof you’ve been criminalised by a system of surveillance. You’re guilty until proven innocent.”

Jamal: Excellent. I’m either a liar, a victim of neoliberalism, or a visionary—depending on which of you marks my work. It’s like Schrödinger’s essay: simultaneously brilliant, fraudulent, and structurally oppressed until the lecturer opens the box.

Peter Pragmatist: “Ignore the software, Jamal. If you can explain your argument to me, you’re fine.”

Jamal: Thank you. At least one of my lecturers believes I’m a human being, though at this stage I’d happily settle for being treated as a reasonably well-trained hamster.


Round Three: Redesigning Assessment

Caroline Critic: “We should dismantle essays entirely. Why force linear text at all?”

Jamal: Music to my ears. I’ll happily dismantle essays right now. I’d even bring my own crowbar.

Dora Doomster: “No! Without essays there is no thinking. You’ll never learn to write if you don’t suffer.”

Jamal: Ah yes, the noble tradition of suffering-as-learning. Next week we’ll all be flogging ourselves with footnotes to prove intellectual stamina.

Billy Booster: “Wrong again. Students should use AI to co-think. Half human, half machine—that’s the future.”

Jamal: So let me get this straight: Caroline doesn’t want me to write, Dora insists I must, and Billy wants me to half-write with a robot. At this rate I’ll end up producing an essay that’s simultaneously absent, mandatory, and half-cyborg.

Peter Pragmatist: “Essay, in-class draft, quick viva. Done.”

Jamal: At this point, Peter, you’re the only one keeping me sane. Which is terrifying, because sanity in this context is relative—like calling a slightly damp cave ‘luxury accommodation’ because at least it’s not on fire.


Round Four: What’s at stake?

Billy Booster: “The reinvention of learning itself.”
Dora Doomster: “The survival of authentic thought.”
Caroline Critic: “The politics of knowledge.”
Peter Pragmatist: “My Sunday afternoon.”

Jamal: And for me? It’s about not failing four completely contradictory classes. Honestly, some days it feels less like higher education and more like a reality TV show: Survivor: Assessment Island. Except instead of coconuts and immunity idols, I’m juggling Turnitin reports, contradictory rubrics, and four mutually exclusive definitions of ‘cheating.’ The only prize for surviving is another semester of the same game, which is a bit like winning an all-expenses-paid holiday to exactly where you already are.


The Takeaway

Jamal’s plight is exaggerated, but only in the sense that being hit with a custard pie is an exaggeration of being hit with custard. Students are already caught in wildly inconsistent approaches to AI across courses and faculties. In one class, AI use is compulsory; in another, it’s grounds for academic excommunication. In a third, students must write an essay about why they shouldn’t be writing essays, while in the fourth they’re simply told to “explain it in five minutes,” presumably before the lecturer’s coffee goes cold.


The caricatures—Booster, Doomster, Critic, Pragmatist—expose the deeper problem: there is no coherent philosophy of assessment in the age of AI. Instead, the sector resembles a badly organised orchestra in which each academic is playing a different tune, at a different tempo, on a different planet, while students try to dance along without looking too ridiculous.


And Jamal’s wry observation remains the most honest of all: assessment isn’t dead—it’s just a game show. The only rule is that the rules keep changing, depending on who’s holding the microphone, and occasionally whether the microphone itself has already been replaced by ChatGPT.



August 21, 2025

Bibs & bobs #25

OMG, OMG, OMG …

I recently listened to Richard Susskind talk about his recent book [1] and scribbled some of the ideas in a more or less incoherent note, which I dropped into GPT-5. The LLM dutifully turned it into a coherent text [2]. It struck me that since my early dabbling in OpenAI’s sandpit (I’m no programming nerd), my history of experiencing AI of the generative variety, GenAI, has been a series of OMGs, each quickly followed by a ho hum. The availability of GPT-5 is just another of the many AI-triggered interruptions that first appear as OMG moments.


I think all through this time, formal education, if I can crudely collapse the raft of practices that fall under that umbrella, has been waiting for things to settle while at the same time putting a lot of effort into domesticating [3] the various LLMs and their relatives. Given the OMG pattern, it’s easy to see just how difficult a time it has been for schools, teachers and their students.


Susskind drew attention to a number of other patterns and phenomena associated with AI. He uses the singular term to refer to the myriad AI apps available now and into the future. I will do the same. It prompted me to try to elaborate some of these patterns in terms of their implications for formal education.


Education finds itself in a revolving door of “never… until it happens”. Never could a machine produce an essay good enough to fool a teacher. Never could it solve high-school math word problems. Never could it mimic a student’s writing style. Each “never” has collapsed, often as an OMG moment.


The pattern has become predictable: first disbelief, then dismissal. The anthropomorphising card is often played: “It’s not real understanding”. Of course “it” does not understand, but that does not stop students using ChatGPT to spew out reports, essays, analyses, summaries or anything else that is text-based. The task changes in productive ways for students and in unhelpful ways for time-poor teachers. The assignment mutates for both.


The adjective jagged is commonly used to describe the range of views, practices and assumptions about AI. To return to an analogy I think is useful, the choice a growing proportion of teachers and students struggle with is akin to the choice humans once had to make between horse and horseless carriage [4]. That choice, significant as it was at the time of the invention of the automobile, pales when compared with the complications students and teachers currently face in deciding whether to use, not use, or co-use AI to do tasks.


The unevenness always reflects context. Leadership, culture, resources, intellectual traditions, and professional commitments all shape the endlessly repeating debates that have now become routine. Over the years, I’ve amassed far too many examples from boosters, doomsters, critics, and pragmatists [5], voices that collectively generate the noisy bubble we now call “AI in education.”


Regulation and policy sit at the centre of much of this noise. Yet education policy almost inevitably lags practice. Departments of Education and accreditation bodies continue drafting rules for yesterday’s technologies rather than today’s realities. But how do you legislate for a phenomenon that is always under construction? Savvy students push boundaries, sometimes tormenting their teachers. Less confident teachers retreat behind the walls of “academic integrity.” Administrators issue sweeping pronouncements that rarely change much in practice, while regulators posture like ghostbusters chasing shadows. The result is a jagged landscape of confusion, exploration, adoption, and rejection which becomes more jagged with every cycle.


All of this is only the surface. It dazzles. Users poke around, and often pay, for the privilege of doing unpaid product development for the tech bros. Beneath the polished surface lies a far more uneven and ugly underbelly: the consumption of electricity and water by data centres, the large number of poorly paid data labellers, the relentless mining and extraction of minerals to fuel each new GPU cycle, and the completely opaque financial machinery in which billions are continually invested and ROIs promised but rarely disclosed.


Perhaps the most unsettling feature of AI is not that it takes over tasks once reserved for students, but that it increasingly learns from itself. Large language models, for instance, can already generate plausible synthetic characters and situations (see, e.g., Ronksley-Pavia, Ronksley-Pavia, & Bigum, 2025). It is hardly far-fetched to imagine automated graders being trained on essays written by automated writers, or synthetic student queries refining synthetic tutors. In such recursive loops, education risks becoming a testbed where human learners are reduced to peripheral participants. Tools built to simulate student learning may soon be training each other more effectively than they ever trained students. The precedent exists: machines long ago stopped playing chess and Go with us and began improving without us by playing themselves.


A fallacy related to machines learning from machines is one Susskind and Susskind (2015, p. 45) highlight. Teachers will often concede that machines can handle the routine, mechanical grunt work of education, but insist they can never be creative, empathetic, or as thoughtful as a skilled teacher. The mistaken assumption here is that the only route to machine capability is to mimic how humans operate, when in fact machines may reach comparable or even superior performance by entirely different means.


Each OMG moment in education can spawn unintended outcomes. Students form underground networks trading effective prompts. Teachers offload routine grading to AI while claiming not to. Publishers push AI textbooks that no one wanted. Each shock doesn’t go away; it morphs into new norms, new evasions, new ways of gaming the system.


Finally, there is the matter of frequency. Each “OMG moment” in AI seems to arrive faster than the last. It took two decades to move from chess to Go, only a few years from Go to image generation, months from images to protein folding, and weeks from chatbots to agents. The intervals continue to compress. For education, the challenge is not simply to keep pace, but to ask which forms of knowledge, intellectual habits, and practices remain worth teaching given this acceleration. More directly: given what machines can now do, what knowledge and skills do humans still need in order to delegate work wisely and achieve good outcomes? It is a question education has largely sidestepped, if not bungled, ever since the arrival of the handheld calculator.



Notes


[1] Susskind, R. (2025). How to Think About AI: A Guide for the Perplexed. IRL Press at Oxford University Press.


[2] Text conjuring is a term Carlos Perez uses to describe how he writes blog posts and books. 


[3] I have written about the history of the attempts by formal education to domesticate each new digital development, e.g. Bigum (2002, 2012a, 2012b); Bigum, Johnson & Bulfin (2015); Bigum & Rowan (2015). If of interest, they are available here.


[4] Horseless carriage thinking (Horton, 2000) is a way of describing the new in terms of the old. Automobiles were just like carriages pulled by a horse but without the horse.


[5] A long time ago, on the eve of presenting to principals at a conference in Darwin, I sketched out a simple categorisation to capture the dominant positions people were taking on digital technology in education: the booster, the doomster, the critic, and the anti-schooler. I wrote caricatured scripts for characters like Billy Booster and Doris Doomster. It was a gamble, but I managed to persuade four principals to perform the roles on stage. They did a fabulous job. That playful framework later found a more formal home in the International Handbook of Educational Change (Bigum & Kenway, 1998).




References


Bigum, C. (2002). Design sensibilities, schools and the new computing and communication technologies. In I. Snyder (Ed.), Silicon literacies: communication, innovation and education in the electronic age (pp. 130-140). Routledge.


Bigum, C. (2012a). Schools and computers: Tales of a digital romance. In L. Rowan & C. Bigum (Eds.), Transformative approaches to new technologies and student diversity in futures oriented classrooms: Future Proofing Education (pp. 15-28). Springer. 


Bigum, C. (2012b). Edges, exponentials & education: disenthralling the digital. In L. Rowan & C. Bigum (Eds.), Transformative approaches to new technologies and student diversity in futures oriented classrooms: Future Proofing Education (pp. 29-43). Springer. 


Bigum, C., Johnson, N. F., & Bulfin, S. (2015). Critical is something others (don't) do: mapping the imaginative of educational technology. In S. Bulfin, N. F. Johnson, & C. Bigum (Eds.), Critical perspectives on technology and education (pp. 1-14). Palgrave Macmillan.


Bigum, C., & Kenway, J. (1998). New Information Technologies and the Ambiguous Future of Schooling: some possible scenarios. In A. Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International Handbook of Educational Change (pp. 375-395). Springer. 


Bigum, C., & Rowan, L. (2015). Gorillas in Their Midst: Rethinking Educational Technology. In S. Bulfin, N. Johnson, & C. Bigum (Eds.), Critical perspectives on technology and education (pp. 14-34). Palgrave Macmillan. 


Horton, W. (2000). Horseless-carriage Thinking. http://web.archive.org/web/20100411190630/http://www.designingwbt.com/content/hct/hct.pdf 


Ronksley-Pavia, M., Ronksley-Pavia, S., & Bigum, C. (2025). Experimenting with Generative AI to Create Personalized Learning Experiences for Twice-Exceptional and Multi-Exceptional Neurodivergent Students. Journal of Advanced Academics. https://doi.org/10.1177/1932202X251346349  


Susskind, R., & Susskind, D. (2015). The future of the professions: how technology will transform the work of human experts. Oxford University Press.

