December 01, 2025

Bibs & bobs #27

Deckchairs, disclosure forms, and the drowning of common sense

Listening to the current debates about AI in initial teacher education feels like watching shuffleboard players on the Titanic — arguing whether the deck is at a 10-degree or 15-degree angle while the water rushes in. To be fair, they’ve drawn up a committee to measure the angle, a sub-committee to report on the accuracy of the measurements, and an oversight panel to ensure that no one is playing shuffleboard in a way that contravenes shuffleboard policy. Meanwhile, the iceberg has already filed a change-of-address form and moved into the ballroom.


The opening borrows from John Perry Barlow’s 1995 keynote address to the second National Entertainment Industry Conference in Australia [1]. He had copyright law in mind. I have teacher education.


Why initial teacher education? Because today’s student teachers are tomorrow’s teachers, and tomorrow’s schools will inherit both them and whatever peculiar assessment rituals are dreamt up in the meantime. The future state of AI is anyone’s guess — though the optimists like to remind us that “today is the worst AI will ever be,” which is reassuring in the way being told your ship has “only just started sinking” is reassuring.


What we do know is this: these students will graduate after years of negotiating a confused and confusing assessment landscape. They will have survived bans, detectors, ritual declarations of originality, and possibly a health-and-safety briefing on how to operate a large language model without scalding themselves. The habits they form will shape what they do with their own students. At present, this is not a reassuring scenario.


The better move is not to panic, nor to deny, nor to set up another oversight panel to check whether the deck really is tilting. It is to treat AI as shared, an iceberg in common, and to design assessment that rewards what humans add: judgement, approximation, critique, and the occasional ability to recognise when you are in fact rearranging deck chairs.


A few random thoughts to try. Note that the game is, and always will be, experimental. I work with what I call small, affordable experiments. I try to remain curious about it all.


Make AI the shared object: uneven skill becomes collective pedagogy, not secret advantage (Rancière [2]).


Name the panic: calculators had their iceberg moment too, and we blew it. The persistent question, never asked yet always needing to be asked when a new digital doodad appears, is this: what skills now complement what the machine can do? Hint: the answer in this case is not AI literacy.


Imagining AI will settle is a mistake. Policy sprints that ban AI are like insisting the Titanic will stay afloat if we outlaw icebergs. Students will comply—until 11:47 pm the night before the deadline. The real question isn’t prohibition, it’s complement design: what do humans add that the machine doesn’t?


Signals that the ship is starting to list

Performative compliance: Students salute the “no AI” rule in public while quietly stashing lifeboats under their bunks. Outwardly, everything looks proper; beneath the deck, the ship’s already taking on water.


Equity gaps: Some passengers get private lifeboats (AI know-how, paid subscriptions, insider tips), while others are left clinging to floating debris. The voyage becomes less about learning and more about who managed to smuggle better gear aboard.


Detector theatre: Klaxons blare and red lights spin as if icebergs can be scared away with noise. False positives and negatives sow panic—students distrust the crew, the crew distrusts the passengers, and no one’s actually steering.


Rubric gaming: Instead of learning to navigate, students learn to rearrange deckchairs in perfect formation. Points are earned for screenshots and decorative compliance rather than for charting a safe course through the ice.


Honesty tax: Those who admit to using AI are forced to fill out paperwork in triplicate, like passengers being charged extra for wearing life jackets. The lesson learned: concealment is quicker than compliance.


Avoidance strategies (so you don’t go down with the ship)

Ask for a judgment trace: like a ship’s log, students record their decision making, edits, and—most importantly—their reasons. When shared, these traces become a collective navigation chart: everyone sees where the icebergs were and how others steered around them.


Emphasize complementary skills: AI may handle the heavy lifting, but only humans can sense when the compass is drifting, estimate distance to shore, or decide when “close enough” is good enough. Building these habits develops a sturdier AI sensibility—less dazzled by polish, more attuned to sound judgment.


Reward openness: instead of punishing disclosure, treat it as putting lifeboats on deck. When students clearly show how they used (or didn’t use) AI, the whole vessel is safer. Honesty should feel like the shortest route to dry land.


Reinvest time saved: if AI bails some water, don’t just admire the empty bucket. Redirect that time to better thinking, reflection, or new explorations, work that keeps the ship not just afloat but moving toward somewhere worth going.


Incentives steer the ship

Rubrics, deadlines, and detectors shape behaviour. If you reward polish, you get polish. If you reward judgement, you get judgement.


AI won’t vanish with the band playing. It will still be redecorating the ballroom, handing out essay scaffolds while the water rises. Assessment can’t prevent the iceberg—but it can teach students how to steer.


Apologies for the absurdly over-used analogy.


Notes


[1] Barlow was a Republican, a rock lyricist (the Grateful Dead), an ex-cattle rancher, and a computer telecommunications civil liberties advocate. The first sentence of his keynote: 


“Yesterday when I was listening to those people arguing about copyright law I felt like I'd come across a gang of shuffleboard players on the deck of the Titanic arguing about the angle of the deck.”


Tunbridge, N. (1995, September). The Cyberspace Cowboy. Australian Personal Computer, 2–4.


[2] Rancière, J. (1991). The Ignorant Schoolmaster: Five Lessons in Intellectual Emancipation. Stanford University Press.  
