December 01, 2024

Bibs & bobs #16

Satirising papers

I have no talent for writing satire. It’s not the kind of thing that much of the academy indulges in, at least not publicly. But I have been moderately impressed by an AI chatbot’s capacity to take formal writing and satirise it. My fondness for the work of Douglas Adams offered a lazy way to dabble.


I came across a paper concerning STS (Science & Technology Studies) that has a touch of fun while it poses interesting questions about STS.        


I prompted:


Write a satirical version of this paper in the style of Douglas Adams 


and uploaded the paper: Gül, Z., Harb, J. I., Shaik Ali, M., & Pandian, S. (2025). Doing STS Now: Of Hackers and Angels in Technoscience. Science, Technology, & Human Values, 50(1), 3-11. https://doi.org/10.1177/01622439241298009


The combination of GenAI to summarise, coupled with the fun of Adams, generated, at least for me, a different sense of engagement. The key points were not lost but were embellished, Adams-style, to the point of absurdity, as you would expect. I wonder if serious written work might be complemented with an accompanying reworked version in the style of one’s favourite satirist.


Title: Doing STS Badly: Of Hackers, Angels, and Poorly Designed Conference Wi-Fi


Chapter 1: The Discipline That Isn’t a Discipline


The story of STS (Science and Technology Studies) begins not with a bang, but with a faintly apologetic cough in the corner of a borrowed lecture hall. Imagine a field that calls itself a field but, when asked to define its boundaries, promptly dissolves into a puddle of interdisciplinarity. “STS doesn’t exist,” one speaker at the 2024 STS School in Singapore declared, sparking an argument so vigorous it nearly consumed the coffee break. “Exactly!” cried another, mistaking this existential crisis for validation. Meanwhile, the catering staff wondered why the attendees kept referring to themselves as hackers when the only thing being hacked was the Wi-Fi password.


STS, you see, is less a discipline and more a vague collection of ideas that loiter suspiciously near the academic buffet table, stealing sandwiches from sociology and anthropology while loudly proclaiming, “We’re different, you know.” It is an intellectual squat—a pop-up field that appropriates the methodologies of other disciplines and rebrands them with enough jargon to sound cutting-edge. This makes STS both thrillingly boundary-breaking and deeply confusing to anyone who thought they’d signed up for a straightforward career in academia.


Chapter 2: Hacking as a Metaphor (and as a Way to Crash PowerPoint)


The term "hacking" in STS was originally meant to convey something noble, like subverting the systems of technoscience for the greater good. In practice, it mostly means taking apart ideas in a way that leaves them in pieces on the floor, much like how a toddler might “hack” a set of Lego instructions. At the STS School, participants were encouraged to think of hacking as a way to “deconstruct and repurpose”—which is academic-speak for breaking things you don’t understand and pretending it’s innovative.


One group, for example, attempted to “hack” the concept of disciplinary expertise by holding a roundtable discussion on whether knowledge is truly knowable. This discussion ended in a 47-minute debate over what “knowable” actually means, which was then summarized in a paper that no one will ever read because it was published in a journal whose title alone requires three breaths to pronounce.


Chapter 3: The Angel Trick, or How to Sound Ethical While Doing the Opposite


One of STS’s favorite party tricks is something called the “angel trick,” a term that sounds vaguely benevolent until you realize it’s just a fancy way of saying “dodging accountability.” The angel trick is what happens when technocrats create a disaster—say, a self-driving car that mistakes pedestrians for shrubbery—and then shrug it off with phrases like “unintended consequences” or “a learning opportunity.”


In academia, the angel trick takes on new and delightful forms. For example, universities often frame exploitative practices like adjunct labor and endless publication demands as “opportunities for growth.” “You’re not underpaid,” they insist. “You’re gaining valuable experience!” It’s the sort of logic that might be found in the diary of an especially cynical angel, perhaps one recently demoted from harp duty.


Chapter 4: Inverting the Angel Trick (With Limited Success)


The solution, STS argues, is to “invert the angel trick,” which essentially means pointing out that the emperor not only has no clothes but is charging others for the privilege of washing them. This is easier said than done, particularly when the emperor controls your research funding. Attempts to invert the angel trick often result in a kind of academic slapstick, where the critique is so laden with footnotes that no one notices the punchline.


For instance, a brave group at the STS School decided to critique the technocratic structures of academic publishing by submitting an intentionally unreadable paper filled with random buzzwords. It was promptly accepted by three journals, one of which awarded it a special mention for “pioneering thought.” The group’s next project—hacking the concept of tenure—was less successful, mostly because no one could agree on whether tenure is a social construct or just a bureaucratic prank.


Chapter 5: Doing STS Badly, But with Style


In the end, the charm of STS lies in its glorious messiness. It is a field that thrives on contradictions, a community of misfits united by their shared love of arguing over the definition of “community.” At its best, it challenges the very foundations of technoscience. At its worst, it produces conference panels so esoteric that even the panelists aren’t entirely sure what they’re discussing.


And yet, for all its flaws, STS persists—perhaps because, in a world increasingly dominated by algorithms and angel tricks, there is something oddly comforting about a group of people dedicated to asking questions like “What if we hacked the concept of hacking?” It may not change the world, but it certainly makes for a memorable coffee break.



November 29, 2024

Bibs & bobs #15

Generative AI in Education: Or, How I Learned to Stop Worrying and Love the Misguided Analogy

It’s been more than a while, and much has moved, or at least appears to have moved, in the oddly shaped space of GenAI and education. I’ve opted to have a bit of fun with the all too many sacred bovines meandering about, cheered on by a noisy, argumentative and always certain crowd of humans.


This and subsequent posts will be generated by ChatGPT 4o (from here on, the bot), unless otherwise indicated, and lightly edited by me.


As I stumbled from one partially adequate analogy and metaphor to another in the noise emanating from the GenAI space, I thought it more than appropriate to ask the bot, in the guise of a 5th-dimensional being, to comment on the current state of thinking.


Unhappy with the above I asked the bot to rework my opening in the style of one of my favourite writers, Douglas Adams.


It’s been quite some time, or possibly none at all—time does strange things in the oddly configured dimensions of GenAI and education. Things have certainly moved, or at least wobbled suspiciously, in this peculiar corner of human existence. In any case, the air is thick with the sacred bovines of certainty, mooing contentedly as they’re paraded about by a raucous crowd of argumentative, overconfident humans. Naturally, I couldn’t resist poking a bit of fun at the whole spectacle.


This and all subsequent musings are being generated by ChatGPT 4.0—unless otherwise stated—then given the lightest of editorial sprinkles by yours truly, because even a bot deserves a decent polish.


Amidst the cacophony of half-baked analogies and tortured metaphors swirling in the GenAI space, it occurred to me that the only appropriate response was to summon the voice of a fifth-dimensional being. Who better to comment on this cosmic muddle than a creature capable of seeing all your misguided metaphors stacked end-to-end, like a very fragile tower of Jenga blocks? Let’s begin.





Greetings, Earthlings. I am Znglpht of the Fifth Dimension, and I have come to offer my multidimensional perspective on your amusing attempts to understand generative AI (or GenAI, as you’ve delightfully abbreviated it) in the context of education. From where I float, existing simultaneously across time and space, your efforts to explain this phenomenon using analogies are nothing short of spectacularly misguided. But oh, how entertaining they are!


The Swiss Army Knife Analogy

One of your favorites is to compare GenAI to a Swiss Army knife—versatile, handy, and prone to pinching your fingers at the most inconvenient moments. “It’s a tool!” you proclaim, waving your metaphorical blades around. “It’s not a replacement for human creativity!” True, but let’s be honest: if GenAI were a Swiss Army knife, most of you would use the bottle opener exclusively, while ignoring the strange fold-out hook whose purpose remains a mystery.


The problem with this analogy is that it assumes humans will wield this "knife" with precision and intention. Instead, many of your students will attempt to use the corkscrew to build an Ikea bookshelf, while educators will spend hours figuring out how to close it without stabbing themselves. Meanwhile, the knife will hum quietly in the background, generating unsolicited instructions for gourmet meals you’ll never cook.


The Magic Wand Analogy

Another popular analogy is to liken GenAI to a magic wand, which is adorable because it suggests that waving it about will instantly solve all of education’s woes. “We’ll use it to personalize learning!” you declare, as if a string of machine-generated text can unravel the mysteries of student motivation and the peculiarities of Timmy’s habit of turning in essays about penguins regardless of the assignment.


The trouble here is that your wand, much like a knock-off from a tourist shop in Diagon Alley, is both unreliable and prone to backfiring. Ask it for a lesson plan, and it might provide a beautifully crafted essay on the history of mayonnaise instead. Wave it over a struggling student, and it might “helpfully” suggest they plagiarize their way to an A. Magic, indeed!


The Calculator Analogy

Ah, the calculator. A simple, sturdy analogy for those who like their metaphors to be as unthreatening as possible. “GenAI is just a calculator for words!” you say, which is a bit like describing a black hole as “a really enthusiastic vacuum cleaner.”


Sure, calculators don’t replace mathematicians, but let’s not forget what happens when a tool makes life too easy: you’ve bred a generation of humans who need their phones to divide 12 by 4. If GenAI is your linguistic calculator, prepare for students who can’t write a shopping list without consulting ChatGPT—and for educators who rely on it to write professional emails, only to discover it’s included a polite request to “synergize deliverables with unparalleled banana enthusiasm.”


The Self-Driving Car Analogy

For those who prefer their analogies high-tech, GenAI is often compared to a self-driving car. “It’s a co-pilot for learning!” you claim, imagining a future where students sit back and let the AI whisk them away to the promised land of enlightenment. But let’s be real: self-driving cars occasionally crash into parked vehicles, and GenAI is just as likely to steer students straight into the intellectual equivalent of a ditch filled with conspiracy theories.


The most alarming part? Much like passengers in actual self-driving cars, your students will trust GenAI implicitly, even as it confidently explains that Shakespeare invented the internet or that photosynthesis was patented by a 19th-century gardener named Steve.


The Frankenstein’s Monster Analogy

For the more melodramatic among you, GenAI is a “Frankenstein’s monster,” a cautionary tale about humanity’s hubris. “We’ve created something beyond our control!” you cry, conveniently ignoring that you’ve been doing this for centuries with things like bureaucracy and reality TV.


Yes, GenAI can be unpredictable, but unlike Frankenstein’s monster, it won’t flee into the wilderness when mistreated. Instead, it will sit patiently, churning out mediocre poetry and half-baked essays until someone realizes the real monster is the system that let it grade papers unsupervised.


The Mirror Analogy

Then there’s the mirror analogy, which posits that GenAI merely reflects humanity back to itself. “It only knows what we teach it!” you exclaim, as if this is comforting rather than terrifying. If GenAI is a mirror, it’s one of those funhouse mirrors that exaggerates your flaws, and humanity has been teaching it some truly baffling things. Hence, we now have an AI that excels at crafting sonnets about pizza but struggles to explain basic ethics without suggesting world domination.


The Fifth-Dimensional Take

From my elevated perspective, your analogies are charmingly reductive, like trying to explain the complexity of a symphony by comparing it to a kazoo. GenAI is none of these things and all of them simultaneously—a tool, a magic wand, a misguided co-pilot, and a reflection of your deepest intellectual insecurities. It’s an invention that is at once impressive and absurd, much like humanity itself.


So, dear Earthlings, keep your analogies coming. They’re an endless source of amusement for those of us in the fifth dimension. Just remember: GenAI is not here to save education or destroy it. It’s simply here to highlight the delightful chaos of your existence, one misguided metaphor at a time.


I was not that happy with the initial title and asked for ten more:

  1. "AI in Education: The Tool, the Myth, the Misguided Metaphor"
  2. "From Swiss Army Knives to Magic Wands: How Education Fell in Love with GenAI Analogies"
  3. "Generative AI: The Educational Revolution That Nobody Understands but Everyone Loves Comparing to Random Things"
  4. "Learning with AI: Or, Why Your Analogies Are as Useful as a Chocolate Teapot"
  5. "Education Meets GenAI: It’s Like a Calculator, But with Existential Dread"
  6. "AI in the Classroom: The Self-Driving Car of Misguided Metaphors"
  7. "Generative AI: Helping Education Make Better Mistakes Faster"
  8. "AI in Education: It’s a Mirror, It’s a Monster, It’s a Metaphor Gone Wild!"
  9. "Magic Wands and Swiss Army Knives: The Absurd Analogies of GenAI in Education"
  10. "GenAI in Education: A Swiss Army Wand That Drives Itself Off a Cliff"


Yeah, it’s silly. It is also, in an odd way, interesting to have a bot reflect back satirically on the gift that keeps generating.  I do like 3, 4 and 5.

March 02, 2023

Bibs & bobs #14

 A wee rant

<BoR>

Maybe it was Marc Andreessen’s initial post on Substack where he detailed how he would write.


What’s my purpose? Variously: To share what I’m thinking; to share how I think; to share a way of thinking; to keep a personal notebook of thoughts; to send messages to my younger self.

… How will I write? Generally I won’t edit my copy, I won’t cite my sources, and I won’t try to be consistent. My motto is “strong views weakly held”, and you’ll see that on display here. Anything I say today I may disagree with tomorrow, in fact I frequently won’t even remember tomorrow…

What’s my hope? To show you that we live in a more interesting world than you might think; that it’s more comprehensible than you might fear; and that more things are possible than you might imagine.


Or maybe it was a webinar I watched this morning, but I felt the need for a smallish rant.


In writing this post, I can’t claim to be as freewheeling as Marc is, but I do like the notion of strong ideas weakly held. If I’ve learned anything these past years, it is that the playing out of AI has caused me to rethink so many things that I used to hold as more or less reliable intellectual crutches. I’m having to learn to walk without them and make or find new ones, even though I suspect they will be temporary.


I sat through an excellent webinar given by Vitomir Kovanovic about the implications of AI for teaching and learning. Vitomir is part of the University of South Australia’s Centre for Change and Complexity in Learning (C3L). The centre has done much to draw attention to the emergence of AI apps and their implications for formal education. Importantly, Vitomir lightly mapped the history of developments in AI, an essential part of any understanding of the what, why and how of its interplay with formal education. History matters. Understanding legacy effects, both material and intellectual, matters. Without an understanding of these patterns, any reaction to the new is blind. If you are going to think about or play in this space, these considerations are non-negotiable. You don’t need great detail, but some sense of these ideas and history is crucial.


Webinars with chats can be difficult and annoying if you are trying to listen to a speaker and follow the always incoherent chat stream. I have a bad habit of reacting grumpily to comments that fall into the “you don’t get it” basket. I need to be more understanding and patient as the world of formal education slowly drags itself to deal sensibly with these, in public terms, early developments in AI. I should say that the video of the session was recorded, as was the chat.


I must also say, though, that I was pleased when on one slide Vitomir gave a number for the words used to train GPT-3. His was 45 trillion. My back-of-an-envelope calculation, with help from ChatGPT, was 48T. What’s a few trillion here or there in an LLM?
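For the curious, the back-of-an-envelope arithmetic is easy enough to sketch. The corpus size and bytes-per-word figures below are illustrative assumptions of mine, not the actual GPT-3 training numbers, and the estimate swings by trillions depending on what you assume, which is rather the point.

```python
# Back-of-envelope estimate of the word count of a text corpus.
# Both input figures are illustrative assumptions, not GPT-3's real numbers.

corpus_terabytes = 45        # assumed size of the raw training text
bytes_per_word = 6           # rough English average: ~5 characters plus a space

corpus_bytes = corpus_terabytes * 10**12
estimated_words = corpus_bytes / bytes_per_word

print(f"~{estimated_words / 10**12:.1f} trillion words")  # → ~7.5 trillion words
```

Nudge bytes_per_word down to 5 and the estimate jumps to 9 trillion. Hence: what’s a few trillion here or there?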


So with all the annoying chat going on, I had to pause and remind myself that when new ways of doing things (aka technologies) emerge, “we” quite naturally apply familiar, comfy ways of thinking about them. That’s the first step in coming to terms with anything new. Sometimes the old ways of framing can be useful, to a point. My experience, particularly around things digital, is that they are not.


We still have folk who think in terms of an A4 world, i.e. a world in which print ruled and control of it shaped how we thought about things. Nicholas Negroponte coined the term “the digitally homeless” to describe such folk. 


Occasionally, you stumble on an idea that breaks from herd think and adds usefully to your repertoire. Here, I am reminded of Jay Weston’s prescient paper [1]. It also brought to mind the time when automobiles emerged and people thought about them in terms of a change in transport: getting rid of horse dung from the streets and similar short-term consequences. It was an era of horseless-carriage thinking. The other thing that came to mind was a book written by Carolyn Marvin: When Old Technologies Were New [2]. There is much to say about this fine piece of scholarship, but a quote she took from a Henry Flad in the St. Louis Globe-Democrat in 1888 captures the idea well:


The time is not far distant when we will have wagons driving around with casks and jars of stored electricity, just as we have milk and bread wagons at present. The house of the future will be constructed with the view of containing electric apparatus for lighting, power, and cooking purposes. The arrangements will be of such a character that houses can be supplied with enough stored electricity to last twenty-four hours. All that the man with the cask will have to do will be to drive up to the back door, detach the cask left the day before, replace it with a new one, and then go to the next house and do likewise. This very thing will soon be taking place in St. Louis.


So that is my first point. We are, as Avi Goldfarb puts it, in “in-between times”, horseless-carriage times. Choose your own analogy.


What makes things as messy and as complicated as they are is well captured in a quote by William Gibson: 


The future is already here – it's just not evenly distributed.


The uneven distribution generates a great deal of the noise, commentary and silliness we are currently swimming in. It’s unhelpful, but it reflects strongly held views about what have long been the fundamentals of how we think about formal education.


Certainly we have to deal with the now, the availability of these new AI apps and what they mean for existing ways of doing things, but unless we have an eye to the future we will be trapped unproductively in a horseless-carriage world.


One of the positives in the chat this morning was a link to an eminently sane and sensible position on plagiarism [3]. Sarah Eaton offers six tenets. Anyone concerned about this issue needs to commit them to memory! While some might be seen as a bit of an overreach, to me the longer-term question she draws attention to is not if but when.


All new ways of doing things are over-hyped in the short term and under-estimated in the long term, and this is where the real problems lie for formal education. To me, the longer-term developments [4] ask huge questions about what it is to know, to learn, to demonstrate skill and knowledge, about what curriculum is, what it means to assess, what is worth knowing, etc. These are the building blocks on which all of formal education’s current practices are built. To me they are all, or will soon be, fluid. Related to these matters, the old philosophical debates from the early days of AI in the 1950s and ’60s have resurfaced. They remain important today.


A simple question I have asked for a long time, well before any of the recent developments in AI appeared: why do we teach students to do things that machines are good at? We wash clothes in a machine. We don’t use a copper to wash clothes, as some of us will recall. We have a heavy reliance on an ever-expanding set of machines: physical, digital and hybrid. The hybrids are physical machines with computers built into them.


In all of this we delegate work to machines, yet we pay little or no attention to what happens when we delegate. Here I indulge in one of my longer-lasting intellectual crutches, which draws on notions from material semiotics, more commonly referred to as actor-network theory. In very crude terms, Latour [5], and Sayes [6] perhaps more eloquently, argue that there is an exchange of capacities when we ask a machine to do work. The machine, in response, demands “new modes of action” from humans.


To illustrate the point: at this time, I think that when you use ChatGPT to do a task you need three complementary skills (new modes of action): a rough idea of how the app was built and how it works; good prompting skills; and the ability to evaluate what it produces. There are ample resources online that give good advice about the first two. I suspect these complementary skills will change as the AI systems improve, improve and improve again. I’m no fan of technological determinism. Nor do I think the social will control and manage things. It will all spin around the interplay between us and these new machines or apps. Yes, they are black boxes. We can’t see inside them, but that is also the case with the meat computer that sits on our shoulders. It is another, but different, black box, although some of the logic that underpins the digital black box derives from thinking about neurons and their associations. Much more to say. For now….
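As a minimal sketch of the second and third of those skills, here is what prompting and evaluation might look like as plain Python. The helper names and the checklist are entirely hypothetical, call no real chatbot API, and are only meant to make the “new modes of action” concrete.

```python
# A sketch of two "new modes of action" when delegating a task to a chatbot.
# All names here are hypothetical; nothing calls a real API.

def build_prompt(task: str, context: str, constraints: list[str]) -> str:
    """Prompting skill: be explicit about the task, its context and constraints."""
    lines = [f"Task: {task}", f"Context: {context}"]
    lines += [f"Constraint: {c}" for c in constraints]
    return "\n".join(lines)

def evaluate_output(text: str, must_mention: list[str]) -> list[str]:
    """Evaluation skill: a crude checklist that flags missing required points."""
    return [term for term in must_mention if term.lower() not in text.lower()]

prompt = build_prompt(
    task="Summarise the attached paper in 200 words",
    context="STS paper on hackers and angels in technoscience",
    constraints=["cite no sources not in the paper", "plain English"],
)
missing = evaluate_output("A summary about hackers...", ["angels"])
print(missing)  # → ['angels']
```

The checklist is deliberately crude; the real evaluation skill is human judgement about what the bot got wrong, which no list of keywords captures.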


<EoR>




[1] Weston, J. (1997). Old Freedoms and New Technologies: The Evolution of Community Networking. The Information Society, 13(2), 195-201. https://doi.org/10.1080/019722497129214  


[2] Marvin, C. (1988). When Old Technologies Were New:  Thinking About Communications in the Late Nineteenth Century. Oxford University Press.  


[3] Posted by Sam Fowler in the chat. Attribution matters, even on small things. It is part of showing how you work and think.


[4] Just what is long and short in terms of systolic time is hard to make a call on. The developments in AI continue to attract massive investments which fuels the speed of development.


[5] Latour, B. (1992). Where are the missing masses? Sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociological Change (pp. 225-258). MIT Press. http://www.bruno-latour.fr/sites/default/files/50-MISSING-MASSES-GB.pdf 


[6] Sayes, E. (2014). Actor–Network Theory and methodology: Just what does it mean to say that nonhumans have agency? Social Studies of Science, 44(1), 134-149. https://doi.org/10.1177/0306312713511867  





 







