December 06, 2024

Bibs & bobs #17

Domesticating GenAI


I’ve been listening to discussions about GenAI in formal education for too long and noticing a flood of papers reviewing, analysing and prognosticating on what could, should, can happen as if GenAI had somehow settled into something fixed. The underlying logic in all of this is how to squish GenAI into existing, albeit ancient, education practices, structures and logics, something I have long called domestication [e.g. 1,2,3].


Given the state of my memory, or rather admitting that I have a good case of what Morgan Freeman once called CRS (can’t remember shit), I returned to a long post I wrote in March 2023. To my horror I realised I had been saying much the same thing about GenAI for a long time. Talk about fixity. OK. I have made a few minor adjustments and probably have a better set of questions to mull than back then, but the tyranny of intellectual path dependence seems to rely on endless repetition. My language model seems to have a shrinking context window.


What to do?


I thought a bit of playfulness might be in order. The bot, GPT-4o, was quite helpful when I asked it to develop five possible posts about education Daleks chanting “domesticate, domesticate” as they try to stuff unruly GenAI applications into an ancient architecture of practices that constitutes the fabric of formal education.


Perhaps for another time. Instead I asked the bot to


rework this post https://chrisbigum.blogspot.com/2023/03/bibs-bobs-14.html in the style of douglas adams


That’s the post from March 2023. What follows is mainly the bot’s output, indicated by ֍, which has been lightly edited.


֍

Bibs, Bobs, and the Ultimate Question of Education


Education, as a concept, is rather like a poorly maintained intergalactic hyperspace bypass. Nobody’s quite sure why it’s there, it doesn’t seem to lead anywhere in particular, and the whole thing was probably put together by a bureaucrat who thought “progress” meant installing a few extra lanes for testing. The galaxy spins on regardless, but occasionally someone pulls over to take a closer look and decides it’s all in desperate need of fixing.


Nowhere is this more apparent than in the curious case of Generative AI, which has recently crash-landed into the educational hyperspace with all the grace of a Vogon poetry recital. Teachers, administrators, and students alike are staring at it, not entirely sure if it’s a marvelous leap forward or a slightly sinister box that might explode if you press the wrong button.


When GenAI first sauntered onto the scene, it did so with an air of confidence usually reserved for door-to-door towel salesmen from Betelgeuse. “Here I am,” it declared, “ready to revolutionize education!” This pronouncement was met with a mix of awe, skepticism, and that peculiar sense of dread that arises when you realize the homework you set for your students has been completed, analyzed, and improved upon by a chatbot before you’ve even finished your morning cup of tea.

The potential for AI in education is staggering. Need an essay? Done. Want a lesson plan? Voilà. Have an existential crisis about your own usefulness as an educator? No problem—it can even help you write your resignation letter in three different tones of regret.


Of course, the educational establishment wasn’t going to let this sort of unregulated brilliance go unchecked. Committees were formed, policies were drafted, and somewhere in the galaxy, a small planet’s worth of paper was sacrificed to ensure that this “AI thing” was brought under control.


The result? A veritable treasure trove of recommendations that included gems like:


1. Ensure AI aligns with current educational standards—as if AI were a mildly rebellious teenager who just needed to buckle down and start respecting authority.


2. Ban its use during exams—because if there’s one thing education prizes above all else, it’s forcing students to pretend they live in a world without tools.


3. Train teachers to work alongside AI—a noble idea, assuming anyone could figure out what “working alongside AI” actually means. (Do you offer it coffee in the staffroom? Do you invite it to union meetings?)


While adults fretted and fumbled, students did what students do best: they ignored the rules and figured things out for themselves. It didn’t take long for them to discover that GenAI was less a tool for regurgitating Wikipedia entries and more a portal into the sorts of creative possibilities that made their teachers’ heads spin.

  • One student used AI to compose an opera about the quadratic formula.
  • Another created a simulation of medieval Europe where peasants could argue with their AI landlords about feudal rent agreements.
  • A third wrote a 200-page manifesto titled Why Exams Are a Blight on Human Civilization—with footnotes that were more insightful than the original curriculum.

The Daleks of the educational bureaucracy, meanwhile, looked on in horror, muttering "Domesticate, domesticate..." under their mechanical breaths.


At this point, one might be tempted to ask: what is the purpose of education? Is it to teach students how to pass tests? To prepare them for jobs that may or may not exist in five years? Or is it, perhaps, something more fundamental—a way of helping them navigate the universe with all its improbabilities, contradictions, and occasional encounters with superintelligent shades of the color blue?


Generative AI, for all its quirks, seems to suggest the latter. It’s not here to replace teachers or turn students into mindless drones (though it’s quite capable of doing both if misused). Instead, it’s a tool—one that can be wielded to amplify creativity, spark curiosity, and yes, even make education a little more fun.


The Final Word (or Not)

Of course, as with all things in life, there are no definitive answers. Education will muddle along as it always has, oscillating between innovation and inertia, brilliance and bureaucracy. GenAI will continue to evolve, as will the ways we choose to use—or ignore—it.


And in the meantime, if you find yourself lost on the educational hyperspace bypass, remember this: the ultimate answer may still be 42, but the questions? The questions are entirely up to us.

֍


Interesting that it picked up on the previous query involving Daleks and the conservative nature of educational bureaucracies.


Coincidentally, I came across a post by Rory Sutherland which may capture the current state of things:


I’ve been influenced in this endeavour by what my friend Paul Bloom calls ‘the Ginger Rogers theory of information’. In any information system, some ideas will propagate much more than others simply because they appeal to people’s existing prejudices. Information which runs counter to one or other world-view will be repressed or discounted unless it reaches a very high standard of proof – which is exactly why it deserves extra credence.


Plus ça change.





[1] Bigum, C. (2012). Edges, exponentials & education: disenthralling the digital. In L. Rowan & C. Bigum (Eds.), Transformative approaches to new technologies and student diversity in futures oriented classrooms: Future Proofing Education (pp. 29-43). Springer.


[2] Bigum, C. (2012). Schools and computers: Tales of a digital romance. In L. Rowan & C. Bigum (Eds.), Transformative approaches to new technologies and student diversity in futures oriented classrooms: Future Proofing Education (pp. 15-28). Springer. 


[3] Bigum, C. (2002). Design sensibilities, schools and the new computing and communication technologies. In I. Snyder (Ed.), Silicon literacies: Communication, innovation and education in the electronic era (pp. 130-140). Routledge. 




 

December 01, 2024

Bibs & bobs #16

Satirising papers

I have no talent for writing satire. It’s not the kind of thing that much of the academy indulges in, at least not publicly. But I have been moderately impressed by an AI chatbot’s capacity to take formal writing and satirise it. My fondness for the work of Douglas Adams offered a lazy way to dabble.


I came across a paper concerning STS (Science & Technology Studies) that has a touch of fun while it poses interesting questions about STS.        


I prompted:


Write a satirical version of this paper in the style of Douglas Adams 


and uploaded the paper: Gül, Z., Harb, J. I., Shaik Ali, M., & Pandian, S. (2025). Doing STS Now: Of Hackers and Angels in Technoscience. Science, Technology, & Human Values, 50(1), 3-11. https://doi.org/10.1177/01622439241298009


The combination of GenAI summarisation with the fun of Adams generated, at least for me, a different sense of engagement. The key points were not lost but were embellished, Adams-style, to the point of absurdity, as you would expect. I wonder if serious written work might be complemented with an accompanying version reworked in the style of one’s favourite satirist.


Title: Doing STS Badly: Of Hackers, Angels, and Poorly Designed Conference Wi-Fi


Chapter 1: The Discipline That Isn’t a Discipline


The story of STS (Science and Technology Studies) begins not with a bang, but with a faintly apologetic cough in the corner of a borrowed lecture hall. Imagine a field that calls itself a field but, when asked to define its boundaries, promptly dissolves into a puddle of interdisciplinarity. “STS doesn’t exist,” one speaker at the 2024 STS School in Singapore declared, sparking an argument so vigorous it nearly consumed the coffee break. “Exactly!” cried another, mistaking this existential crisis for validation. Meanwhile, the catering staff wondered why the attendees kept referring to themselves as hackers when the only thing being hacked was the Wi-Fi password.


STS, you see, is less a discipline and more a vague collection of ideas that loiter suspiciously near the academic buffet table, stealing sandwiches from sociology and anthropology while loudly proclaiming, “We’re different, you know.” It is an intellectual squat—a pop-up field that appropriates the methodologies of other disciplines and rebrands them with enough jargon to sound cutting-edge. This makes STS both thrillingly boundary-breaking and deeply confusing to anyone who thought they’d signed up for a straightforward career in academia.


Chapter 2: Hacking as a Metaphor (and as a Way to Crash PowerPoint)


The term "hacking" in STS was originally meant to convey something noble, like subverting the systems of technoscience for the greater good. In practice, it mostly means taking apart ideas in a way that leaves them in pieces on the floor, much like how a toddler might “hack” a set of Lego instructions. At the STS School, participants were encouraged to think of hacking as a way to “deconstruct and repurpose”—which is academic-speak for breaking things you don’t understand and pretending it’s innovative.


One group, for example, attempted to “hack” the concept of disciplinary expertise by holding a roundtable discussion on whether knowledge is truly knowable. This discussion ended in a 47-minute debate over what “knowable” actually means, which was then summarized in a paper that no one will ever read because it was published in a journal whose title alone requires three breaths to pronounce.


Chapter 3: The Angel Trick, or How to Sound Ethical While Doing the Opposite


One of STS’s favorite party tricks is something called the “angel trick,” a term that sounds vaguely benevolent until you realize it’s just a fancy way of saying “dodging accountability.” The angel trick is what happens when technocrats create a disaster—say, a self-driving car that mistakes pedestrians for shrubbery—and then shrug it off with phrases like “unintended consequences” or “a learning opportunity.”


In academia, the angel trick takes on new and delightful forms. For example, universities often frame exploitative practices like adjunct labor and endless publication demands as “opportunities for growth.” “You’re not underpaid,” they insist. “You’re gaining valuable experience!” It’s the sort of logic that might be found in the diary of an especially cynical angel, perhaps one recently demoted from harp duty.


Chapter 4: Inverting the Angel Trick (With Limited Success)


The solution, STS argues, is to “invert the angel trick,” which essentially means pointing out that the emperor not only has no clothes but is charging others for the privilege of washing them. This is easier said than done, particularly when the emperor controls your research funding. Attempts to invert the angel trick often result in a kind of academic slapstick, where the critique is so laden with footnotes that no one notices the punchline.


For instance, a brave group at the STS School decided to critique the technocratic structures of academic publishing by submitting an intentionally unreadable paper filled with random buzzwords. It was promptly accepted by three journals, one of which awarded it a special mention for “pioneering thought.” The group’s next project—hacking the concept of tenure—was less successful, mostly because no one could agree on whether tenure is a social construct or just a bureaucratic prank.


Chapter 5: Doing STS Badly, But with Style


In the end, the charm of STS lies in its glorious messiness. It is a field that thrives on contradictions, a community of misfits united by their shared love of arguing over the definition of “community.” At its best, it challenges the very foundations of technoscience. At its worst, it produces conference panels so esoteric that even the panelists aren’t entirely sure what they’re discussing.


And yet, for all its flaws, STS persists—perhaps because, in a world increasingly dominated by algorithms and angel tricks, there is something oddly comforting about a group of people dedicated to asking questions like “What if we hacked the concept of hacking?” It may not change the world, but it certainly makes for a memorable coffee break.

