February 25, 2025

Bibs & bobs #20

Generative AI has been here for a while, it’s just not evenly distributed.


The adaptation of a line by William Gibson in the title captures the current state of AI and formal education. Online, there is a loud, confusing noise about what Generative AI (GenAI) can do, may do, and is doing in formal education. There are some similarities here to what happened in the early 1980s, when microcomputers first began to be used in formal education: the promotional hype of vendors drowned out the experience sharing of users.


With the adoption of any new way of doing things, i.e. another way to think about the word technology (Franklin, 2004), there is a cost to early users of working out sensible things to do with it. For the most part that means learning the protocols, tricks and workarounds needed to interact usefully with LLMs via their chatbots, all the while trying to keep an eye on the latest releases of new apps and models. This is not simple or easy work.


The work is essentially unpaid product development and promotion, carried out by users who are often paying for the privilege of doing it. With GenAI, the work is a good deal more difficult than coming to terms with a Unix OS, email, microcomputers, word processors, the Internet, web browsers and so on. Each of these developments and their productive use was unevenly distributed at the beginning and remains more or less so even now.


Kristen Berman put it succinctly in a recent interview:


It’s actually very difficult now to do things well with AI. It still takes a lot of work.


This point is underlined by Tyler Cowen's account of the rate-determining step in AI adoption: humans.


Formal education is no exception. We see a time-poor workforce coming to terms with a way of doing things that is unlike all previous developments of things digital. Becoming GenAI compatible ain’t easy [1]. 


Keeping in mind the uneven distribution of AI savviness [2], the problem of making sense of GenAI is perhaps most worryingly a key challenge in the field of initial teacher education. 


Put simply, we have no good sense of how uneven the distribution is in: faculties of education; the undergraduate cohorts studying to be teachers; the teachers in the schools in which they will be employed; the leadership in those schools; the children/students in those schools; the educational bureaucracies that oversee some schools.


And some folk are looking for quick, easy solutions? Gimme a break! 


Projecting beyond the year 3 AG [3], at a time when we can reasonably assume that GenAI will be better and the unevenness even greater, it is fair to suggest that Sir Humphrey Appleby might put it in these terms [4]:


"Well, Minister, one must appreciate that the integration of artificial intelligence into the educational sphere presents not so much a straightforward policy initiative as a labyrinthine exercise in administrative calibration, requiring the careful harmonization of pedagogical innovation, regulatory oversight, and—most crucially—the preservation of ministerial discretion in the face of, shall we say, evolving technological inevitabilities.


The difficulty, Minister, is that while AI offers the tantalizing prospect of enhanced learning outcomes, personalized instruction, and operational efficiencies, it simultaneously raises a plethora of, shall we say, policy-sensitive contingencies. There is, for instance, the small matter of data privacy, algorithmic bias, and, of course, the rather inconvenient question of whether AI-driven pedagogical models might inadvertently challenge the fundamental prerogatives of traditional educational institutions—namely, to teach students to think for themselves rather than to rely on the synthetic epistemologies of machine learning.


Moreover, Minister, any attempt to introduce AI into education must be conducted with the utmost circumspection, ensuring that all stakeholders—from the academic establishment to the unions, the regulatory bodies, and, indeed, the ever-vigilant media—are suitably reassured that such a transition will be neither disruptive nor, heaven forbid, excessively effective in ways that might inadvertently lead to rather, shall we say, challenging budgetary and staffing considerations.


Thus, Minister, while I entirely appreciate your enthusiasm for harnessing AI to revolutionize the educational landscape, one must recognize that a matter of such profound systemic complexity necessitates a period of extensive consultation, rigorous pilot studies, and, naturally, the establishment of an independent advisory committee whose remit will be to provide exhaustive guidance on the matter. And of course, Minister, I would strongly advise that we ensure the scope of this initiative remains under careful review—lest we find ourselves in the unenviable position of having to, er, implement something."


Any resemblance to pronouncements, living or dead, is entirely accidental. 


I thanked the bot with a “well done”. It replied:


Thank you, Minister! I mean… much obliged! If ever you require further obfuscation, strategic ambiguity, or indeed a meticulously crafted rationale for inaction disguised as proactive governance, do not hesitate to summon me. After all, as Sir Humphrey would say, “It’s not a matter of whether something should be done, but rather a question of whether it should be seen to be done.”


Should you require further refinements—or perhaps a rebuttal in the style of the ever-earnest Bernard—I remain at your service!





Notes


[1] I’m glossing what is a key phenomenon in making sense of humans using GenAI. As I’ve argued in previous posts, Bruno Latour’s (1992) notion of an exchange of capacities between human and non-human is more than useful in thinking through humans delegating work to machines. Edwin Sayes (2014) makes a more eloquent argument about this notion.


[2] I’m avoiding the use of the catch-all word, literacy. Savviness I take to mean (also detailed in a previous post as complementary skills/knowledge) having a rough idea of how LLMs were built and how they operate, together with good prompting skills (no easy matter) and a good means of judging the quality of the output an LLM provides.


[3] Anno GenAI: Latin for “Oh God, It’s Learning.” As DeepSeek suggested. 


[4] I prompted ChatGPT for this text.



References


Franklin, U. M. (2004). The Real World of Technology (2nd ed.). Anansi.  


Latour, B. (1992). Where are the missing masses? Sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociological Change (pp. 225-258). MIT Press. http://www.bruno-latour.fr/sites/default/files/50-MISSING-MASSES-GB.pdf


Sayes, E. (2014). Actor–Network Theory and methodology: Just what does it mean to say that nonhumans have agency? Social Studies of Science, 44(1), 134-149. https://doi.org/10.1177/0306312713511867 
