December 06, 2024

Bibs & bobs #17

Domesticating GenAI


I’ve been listening to discussions about GenAI in formal education for too long, noticing a flood of papers reviewing, analysing and prognosticating on what could, should or can happen, as if GenAI had somehow settled into something fixed. The underlying logic in all of this is how to squish GenAI into existing, albeit ancient, educational practices, structures and logics, something I have long called domestication [e.g. 1,2,3].


Given the state of my memory, or rather admitting that I have a good case of what Morgan Freeman once called CRS (can’t remember shit), I returned to a long post I wrote in March 2023. To my horror I realised I had been saying much the same thing about GenAI for a long time. Talk about fixity. OK. I have made a few minor adjustments and probably have a better set of questions to mull than back then, but the tyranny of intellectual path dependence seems to rely on seemingly endless repetition. My language model seems to have a shrinking context window.


What to do?


I thought a bit of playfulness might be in order. The bot, GPT-4o, was quite helpful when I asked it to develop five possible posts about education Daleks chanting “domesticate, domesticate” as they try to stuff unruly GenAI applications into an ancient architecture of practices that constitutes the fabric of formal education.


Perhaps for another time. Instead I asked the bot to:

rework this post https://chrisbigum.blogspot.com/2023/03/bibs-bobs-14.html in the style of douglas adams


That’s the post from March 2023. What follows is mainly the bot’s output, indicated by ֍, lightly edited.


֍

Bibs, Bobs, and the Ultimate Question of Education


Education, as a concept, is rather like a poorly maintained intergalactic hyperspace bypass. Nobody’s quite sure why it’s there, it doesn’t seem to lead anywhere in particular, and the whole thing was probably put together by a bureaucrat who thought “progress” meant installing a few extra lanes for testing. The galaxy spins on regardless, but occasionally someone pulls over to take a closer look and decides it’s all in desperate need of fixing.


Nowhere is this more apparent than in the curious case of Generative AI, which has recently crash-landed into the educational hyperspace with all the grace of a Vogon poetry recital. Teachers, administrators, and students alike are staring at it, not entirely sure if it’s a marvelous leap forward or a slightly sinister box that might explode if you press the wrong button.


When GenAI first sauntered into the scene, it did so with an air of confidence usually reserved for door-to-door towel salesmen from Betelgeuse. “Here I am,” it declared, “ready to revolutionize education!” This pronouncement was met with a mix of awe, skepticism, and that peculiar sense of dread that arises when you realize the homework you set for your students has been completed, analyzed, and improved upon by a chatbot before you’ve even finished your morning cup of tea.

The potential for AI in education is staggering. Need an essay? Done. Want a lesson plan? VoilĂ . Have an existential crisis about your own usefulness as an educator? No problem—it can even help you write your resignation letter in three different tones of regret.


Of course, the educational establishment wasn’t going to let this sort of unregulated brilliance go unchecked. Committees were formed, policies were drafted, and somewhere in the galaxy, a small planet’s worth of paper was sacrificed to ensure that this “AI thing” was brought under control.


The result? A veritable treasure trove of recommendations that included gems like:


1. Ensure AI aligns with current educational standards—as if AI were a mildly rebellious teenager who just needed to buckle down and start respecting authority.

2. Ban its use during exams—because if there’s one thing education prizes above all else, it’s forcing students to pretend they live in a world without tools.

3. Train teachers to work alongside AI—a noble idea, assuming anyone could figure out what “working alongside AI” actually means. (Do you offer it coffee in the staffroom? Do you invite it to union meetings?)


While adults fretted and fumbled, students did what students do best: they ignored the rules and figured things out for themselves. It didn’t take long for them to discover that GenAI was less a tool for regurgitating Wikipedia entries and more a portal into the sorts of creative possibilities that made their teachers’ heads spin.

  • One student used AI to compose an opera about the quadratic formula.
  • Another created a simulation of medieval Europe where peasants could argue with their AI landlords about feudal rent agreements.
  • A third wrote a 200-page manifesto titled Why Exams Are a Blight on Human Civilization—with footnotes that were more insightful than the original curriculum.

The Daleks of the educational bureaucracy, meanwhile, looked on in horror, muttering "Domesticate, domesticate..." under their mechanical breaths.


At this point, one might be tempted to ask: what is the purpose of education? Is it to teach students how to pass tests? To prepare them for jobs that may or may not exist in five years? Or is it, perhaps, something more fundamental—a way of helping them navigate the universe with all its improbabilities, contradictions, and occasional encounters with superintelligent shades of the color blue?


Generative AI, for all its quirks, seems to suggest the latter. It’s not here to replace teachers or turn students into mindless drones (though it’s quite capable of doing both if misused). Instead, it’s a tool—one that can be wielded to amplify creativity, spark curiosity, and yes, even make education a little more fun.


The Final Word (or Not)

Of course, as with all things in life, there are no definitive answers. Education will muddle along as it always has, oscillating between innovation and inertia, brilliance and bureaucracy. GenAI will continue to evolve, as will the ways we choose to use—or ignore—it.


And in the meantime, if you find yourself lost on the educational hyperspace bypass, remember this: the ultimate answer may still be 42, but the questions? The questions are entirely up to us.

֍


Interesting that it picked up on the previous query involving Daleks and the conservative nature of educational bureaucracies.


Coincidentally, I came across a post by Rory Sutherland which may capture the current state of things:


I’ve been influenced in this endeavour by what my friend Paul Bloom calls ‘the Ginger Rogers theory of information’. In any information system, some ideas will propagate much more than others simply because they appeal to people’s existing prejudices. Information which runs counter to one or other world-view will be repressed or discounted unless it reaches a very high standard of proof – which is exactly why it deserves extra credence.


Plus ça change.





[1] Bigum, C. (2012). Edges, exponentials & education: disenthralling the digital. In L. Rowan & C. Bigum (Eds.), Transformative approaches to new technologies and student diversity in futures oriented classrooms: Future Proofing Education (pp. 29-43). Springer.


[2] Bigum, C. (2012). Schools and computers: Tales of a digital romance. In L. Rowan & C. Bigum (Eds.), Transformative approaches to new technologies and student diversity in futures oriented classrooms: Future Proofing Education (pp. 15-28). Springer. 


[3] Bigum, C. (2002). Design sensibilities, schools and the new computing and communication technologies. In I. Snyder (Ed.), Silicon literacies: Communication, innovation and education in the electronic era (pp. 130-140). Routledge. 




 

December 01, 2024

Bibs & bobs #16

Satirising papers

I have no talent for writing satire. It’s not the kind of thing that much of the academy indulges in, at least not publicly. But I have been moderately impressed by an AI chatbot’s capacity to take formal writing and satirise it. My fondness for the work of Douglas Adams offered a lazy way to dabble.


I came across a paper concerning STS (Science & Technology Studies) that has a touch of fun while it poses interesting questions about STS.        


I prompted:


Write a satirical version of this paper in the style of Douglas Adams 


and uploaded the paper: Gül, Z., Harb, J. I., Shaik Ali, M., & Pandian, S. (2025). Doing STS Now: Of Hackers and Angels in Technoscience. Science, Technology, & Human Values, 50(1), 3-11. https://doi.org/10.1177/01622439241298009


The combination of GenAI summarisation coupled with the fun of Adams generated, at least for me, a different sense of engagement. The key points were not lost but were embellished, Adams-style, to the point of absurdity, as you would expect. I wonder if serious written work might be complemented with an accompanying reworked version in the style of one’s favourite satirist.


Title: Doing STS Badly: Of Hackers, Angels, and Poorly Designed Conference Wi-Fi


Chapter 1: The Discipline That Isn’t a Discipline


The story of STS (Science and Technology Studies) begins not with a bang, but with a faintly apologetic cough in the corner of a borrowed lecture hall. Imagine a field that calls itself a field but, when asked to define its boundaries, promptly dissolves into a puddle of interdisciplinarity. “STS doesn’t exist,” one speaker at the 2024 STS School in Singapore declared, sparking an argument so vigorous it nearly consumed the coffee break. “Exactly!” cried another, mistaking this existential crisis for validation. Meanwhile, the catering staff wondered why the attendees kept referring to themselves as hackers when the only thing being hacked was the Wi-Fi password.


STS, you see, is less a discipline and more a vague collection of ideas that loiter suspiciously near the academic buffet table, stealing sandwiches from sociology and anthropology while loudly proclaiming, “We’re different, you know.” It is an intellectual squat—a pop-up field that appropriates the methodologies of other disciplines and rebrands them with enough jargon to sound cutting-edge. This makes STS both thrillingly boundary-breaking and deeply confusing to anyone who thought they’d signed up for a straightforward career in academia.


Chapter 2: Hacking as a Metaphor (and as a Way to Crash PowerPoint)


The term "hacking" in STS was originally meant to convey something noble, like subverting the systems of technoscience for the greater good. In practice, it mostly means taking apart ideas in a way that leaves them in pieces on the floor, much like how a toddler might “hack” a set of Lego instructions. At the STS School, participants were encouraged to think of hacking as a way to “deconstruct and repurpose”—which is academic-speak for breaking things you don’t understand and pretending it’s innovative.


One group, for example, attempted to “hack” the concept of disciplinary expertise by holding a roundtable discussion on whether knowledge is truly knowable. This discussion ended in a 47-minute debate over what “knowable” actually means, which was then summarized in a paper that no one will ever read because it was published in a journal whose title alone requires three breaths to pronounce.


Chapter 3: The Angel Trick, or How to Sound Ethical While Doing the Opposite


One of STS’s favorite party tricks is something called the “angel trick,” a term that sounds vaguely benevolent until you realize it’s just a fancy way of saying “dodging accountability.” The angel trick is what happens when technocrats create a disaster—say, a self-driving car that mistakes pedestrians for shrubbery—and then shrug it off with phrases like “unintended consequences” or “a learning opportunity.”


In academia, the angel trick takes on new and delightful forms. For example, universities often frame exploitative practices like adjunct labor and endless publication demands as “opportunities for growth.” “You’re not underpaid,” they insist. “You’re gaining valuable experience!” It’s the sort of logic that might be found in the diary of an especially cynical angel, perhaps one recently demoted from harp duty.


Chapter 4: Inverting the Angel Trick (With Limited Success)


The solution, STS argues, is to “invert the angel trick,” which essentially means pointing out that the emperor not only has no clothes but is charging others for the privilege of washing them. This is easier said than done, particularly when the emperor controls your research funding. Attempts to invert the angel trick often result in a kind of academic slapstick, where the critique is so laden with footnotes that no one notices the punchline.


For instance, a brave group at the STS School decided to critique the technocratic structures of academic publishing by submitting an intentionally unreadable paper filled with random buzzwords. It was promptly accepted by three journals, one of which awarded it a special mention for “pioneering thought.” The group’s next project—hacking the concept of tenure—was less successful, mostly because no one could agree on whether tenure is a social construct or just a bureaucratic prank.


Chapter 5: Doing STS Badly, But with Style


In the end, the charm of STS lies in its glorious messiness. It is a field that thrives on contradictions, a community of misfits united by their shared love of arguing over the definition of “community.” At its best, it challenges the very foundations of technoscience. At its worst, it produces conference panels so esoteric that even the panelists aren’t entirely sure what they’re discussing.


And yet, for all its flaws, STS persists—perhaps because, in a world increasingly dominated by algorithms and angel tricks, there is something oddly comforting about a group of people dedicated to asking questions like “What if we hacked the concept of hacking?” It may not change the world, but it certainly makes for a memorable coffee break.



November 29, 2024

Bibs & bobs #15

Generative AI in Education: Or, How I Learned to Stop Worrying and Love the Misguided Analogy

It’s been more than a while and much has moved, or at least appears to have moved, in the oddly shaped space of GenAI and education. I’ve opted to have a bit of fun with the all too many sacred bovines meandering about, cheered on by a noisy, argumentative and always certain crowd of humans.


This and subsequent posts will be generated by ChatGPT 4o, from here on, the bot, unless otherwise indicated, and lightly edited by me.


As I stumbled from one partially adequate analogy or metaphor to the next in the noise emanating from the GenAI space, I thought it more than appropriate to ask the bot, in the guise of a 5th-dimensional being, to comment on the current state of thinking.


Unhappy with the above I asked the bot to rework my opening in the style of one of my favourite writers, Douglas Adams.


It’s been quite some time, or possibly none at all—time does strange things in the oddly configured dimensions of GenAI and education. Things have certainly moved, or at least wobbled suspiciously, in this peculiar corner of human existence. In any case, the air is thick with the sacred bovines of certainty, mooing contentedly as they’re paraded about by a raucous crowd of argumentative, overconfident humans. Naturally, I couldn’t resist poking a bit of fun at the whole spectacle.


This and all subsequent musings are being generated by ChatGPT 4.0—unless otherwise stated—then given the lightest of editorial sprinkles by yours truly, because even a bot deserves a decent polish.


Amidst the cacophony of half-baked analogies and tortured metaphors swirling in the GenAI space, it occurred to me that the only appropriate response was to summon the voice of a fifth-dimensional being. Who better to comment on this cosmic muddle than a creature capable of seeing all your misguided metaphors stacked end-to-end, like a very fragile tower of Jenga blocks? Let’s begin.





Greetings, Earthlings. I am Znglpht of the Fifth Dimension, and I have come to offer my multidimensional perspective on your amusing attempts to understand generative AI (or GenAI, as you’ve delightfully abbreviated it) in the context of education. From where I float, existing simultaneously across time and space, your efforts to explain this phenomenon using analogies are nothing short of spectacularly misguided. But oh, how entertaining they are!


The Swiss Army Knife Analogy

One of your favorites is to compare GenAI to a Swiss Army knife—versatile, handy, and prone to pinching your fingers at the most inconvenient moments. “It’s a tool!” you proclaim, waving your metaphorical blades around. “It’s not a replacement for human creativity!” True, but let’s be honest: if GenAI were a Swiss Army knife, most of you would use the bottle opener exclusively, while ignoring the strange fold-out hook whose purpose remains a mystery.


The problem with this analogy is that it assumes humans will wield this "knife" with precision and intention. Instead, many of your students will attempt to use the corkscrew to build an Ikea bookshelf, while educators will spend hours figuring out how to close it without stabbing themselves. Meanwhile, the knife will hum quietly in the background, generating unsolicited instructions for gourmet meals you’ll never cook.


The Magic Wand Analogy

Another popular analogy is to liken GenAI to a magic wand, which is adorable because it suggests that waving it about will instantly solve all of education’s woes. “We’ll use it to personalize learning!” you declare, as if a string of machine-generated text can unravel the mysteries of student motivation and the peculiarities of Timmy’s habit of turning in essays about penguins regardless of the assignment.


The trouble here is that your wand, much like a knock-off from a tourist shop in Diagon Alley, is both unreliable and prone to backfiring. Ask it for a lesson plan, and it might provide a beautifully crafted essay on the history of mayonnaise instead. Wave it over a struggling student, and it might “helpfully” suggest they plagiarize their way to an A. Magic, indeed!


The Calculator Analogy

Ah, the calculator. A simple, sturdy analogy for those who like their metaphors to be as unthreatening as possible. “GenAI is just a calculator for words!” you say, which is a bit like describing a black hole as “a really enthusiastic vacuum cleaner.”


Sure, calculators don’t replace mathematicians, but let’s not forget what happens when a tool makes life too easy: you’ve bred a generation of humans who need their phones to divide 12 by 4. If GenAI is your linguistic calculator, prepare for students who can’t write a shopping list without consulting ChatGPT—and for educators who rely on it to write professional emails, only to discover it’s included a polite request to “synergize deliverables with unparalleled banana enthusiasm.”


The Self-Driving Car Analogy

For those who prefer their analogies high-tech, GenAI is often compared to a self-driving car. “It’s a co-pilot for learning!” you claim, imagining a future where students sit back and let the AI whisk them away to the promised land of enlightenment. But let’s be real: self-driving cars occasionally crash into parked vehicles, and GenAI is just as likely to steer students straight into the intellectual equivalent of a ditch filled with conspiracy theories.


The most alarming part? Much like passengers in actual self-driving cars, your students will trust GenAI implicitly, even as it confidently explains that Shakespeare invented the internet or that photosynthesis was patented by a 19th-century gardener named Steve.


The Frankenstein’s Monster Analogy

For the more melodramatic among you, GenAI is a “Frankenstein’s monster,” a cautionary tale about humanity’s hubris. “We’ve created something beyond our control!” you cry, conveniently ignoring that you’ve been doing this for centuries with things like bureaucracy and reality TV.


Yes, GenAI can be unpredictable, but unlike Frankenstein’s monster, it won’t flee into the wilderness when mistreated. Instead, it will sit patiently, churning out mediocre poetry and half-baked essays until someone realizes the real monster is the system that let it grade papers unsupervised.


The Mirror Analogy

Then there’s the mirror analogy, which posits that GenAI merely reflects humanity back to itself. “It only knows what we teach it!” you exclaim, as if this is comforting rather than terrifying. If GenAI is a mirror, it’s one of those funhouse mirrors that exaggerates your flaws, and humanity has been teaching it some truly baffling things. Hence, we now have an AI that excels at crafting sonnets about pizza but struggles to explain basic ethics without suggesting world domination.


The Fifth-Dimensional Take

From my elevated perspective, your analogies are charmingly reductive, like trying to explain the complexity of a symphony by comparing it to a kazoo. GenAI is none of these things and all of them simultaneously—a tool, a magic wand, a misguided co-pilot, and a reflection of your deepest intellectual insecurities. It’s an invention that is at once impressive and absurd, much like humanity itself.


So, dear Earthlings, keep your analogies coming. They’re an endless source of amusement for those of us in the fifth dimension. Just remember: GenAI is not here to save education or destroy it. It’s simply here to highlight the delightful chaos of your existence, one misguided metaphor at a time.


I was not that happy with the initial title and asked for ten more:

  1. "AI in Education: The Tool, the Myth, the Misguided Metaphor"
  2. "From Swiss Army Knives to Magic Wands: How Education Fell in Love with GenAI Analogies"
  3. "Generative AI: The Educational Revolution That Nobody Understands but Everyone Loves Comparing to Random Things"
  4. "Learning with AI: Or, Why Your Analogies Are as Useful as a Chocolate Teapot"
  5. "Education Meets GenAI: It’s Like a Calculator, But with Existential Dread"
  6. "AI in the Classroom: The Self-Driving Car of Misguided Metaphors"
  7. "Generative AI: Helping Education Make Better Mistakes Faster"
  8. "AI in Education: It’s a Mirror, It’s a Monster, It’s a Metaphor Gone Wild!"
  9. "Magic Wands and Swiss Army Knives: The Absurd Analogies of GenAI in Education"
  10. "GenAI in Education: A Swiss Army Wand That Drives Itself Off a Cliff"


Yeah, it’s silly. It is also, in an odd way, interesting to have a bot reflect back satirically on the gift that keeps generating.  I do like 3, 4 and 5.
