Frames and bots and brains
This post is about frames and framing. I wanted a title that was catchy and a bit playful. What kept popping into my head was the melody of “trains and boats and planes” (Bacharach and David). So I got the LLM bot (ChatGPT 4o) to make some suggestions. Oddly, it opted to do research and went online to tackle the request (6 minutes, 2 sources and 17 searches later, it produced ten suggestions). I chose the first.
What’s being framed? It’s not Roger Rabbit. It’s AI and formal education. I think in hindsight the rabbit would have been easier and more fun.
To begin, prompted by Eric Schmidt’s argument that AI is underhyped, I wondered whether that might apply to AI and formal education. As anyone reading this post will have noticed, there is no shortage of commentary, claims, cherry-picked critique and boosterism in this space. I’ve found little of the noise useful until I came across a post by J. Owen Matson which reminded me of the long history of posthumanism and its proponents [1]. It’s a bit embarrassing to have to be reminded, given my long-standing interest in Latour’s material semiotics or actor-network theory (ANT) [2].
To lay out the point I have been stumbling toward, frames matter.
Using an analogy from physics, light behaves as a particle or a wave depending on how you frame or observe it. So too, humans and AI can be seen to behave differently depending on the frame we deploy.
Much of the current educational response to the use of LLMs operates like an experiment to observe the photoelectric effect [3], which sees light as particles.
For the use of LLMs, we assume that the human is a rational, autonomous subject, one that is discrete and separate from other objects and artefacts. Our observations of AI are structured accordingly. We see LLMs as tools. We see risks of cheating. We look for detection protocols. We worry about plagiarism and creativity. We return to an old binary question [4]: what part is social or human and what part is machine? More recently we worry about cognitive offloading [5], in much the same way that people worried about the loss of arithmetic skills when calculators appeared.
The nature of AI in education is not fixed. Like light, it reveals different characteristics of itself depending on how we choose to observe it.
If we switch the experimental setup and look for waves, as in the double-slit experiment, we get a different outcome. We see entanglements of humans, machines and models that enact a reality markedly different from that offered by the alternative frame, a reality where the boundaries between human and non-human are blurry at best.
This frame has a label, the posthuman. It portrays cognition as distributed, enacted through systems, plug-ins, and assemblages. We stop asking the particle question, “Is this human or AI?” and start observing what is being enacted in the human–AI relation. The question becomes: What does this configuration allow that was previously impossible or not obvious?
Acknowledgement of GenAI use
I spent too much time being provoked by and arguing with ChatGPT 4o in thinking through this post. I did get the bot to satirise some of this to disrupt my flow. I won’t bore you with any satire in this post. What is clear to me is that prolonged and thoughtful interaction with an LLM bot resurfaces ideas from the history that had slipped from my much smaller context window, aka my short-term memory.
Notes
[1] Matson draws on the work of Katherine Hayles and scholars who have contributed to the notion of the posthuman like Braidotti, Haraway and many others.
[2] Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press; Michael, M. (2016). Actor Network Theory: Trials, Trails and Translations (1st ed.). SAGE.
[3] When light is shone onto a metal surface, light above a certain frequency causes electrons to be emitted.
[4] The binary problem for me dates back to the late 1970s and repeatedly struggling to work out, in an educational setting, which bit was social and which bit technical. Some time later I came across Latour’s ANT and the point that binaries are not explanations, they are things to be explained.
[5] For example: The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI.