March 02, 2023

Bibs & bobs #14

A wee rant

<BoR>

Maybe it was Marc Andreessen’s initial post on Substack, where he detailed how he would write.


What’s my purpose? Variously: To share what I’m thinking; to share how I think; to share a way of thinking; to keep a personal notebook of thoughts; to send messages to my younger self.

… How will I write? Generally I won’t edit my copy, I won’t cite my sources, and I won’t try to be consistent. My motto is “strong views weakly held”, and you’ll see that on display here. Anything I say today I may disagree with tomorrow, in fact I frequently won’t even remember tomorrow…

What’s my hope? To show you that we live in a more interesting world than you might think; that it’s more comprehensible than you might fear; and that more things are possible than you might imagine.


Or maybe it was a webinar I watched this morning. Either way, I felt the need for a smallish rant.


In writing this post, I can’t claim to be as freewheeling as Marc is, but I do like the notion of strong ideas weakly held. If I’ve learned anything these past years, it is that the playing out of AI has caused me to rethink so many things that I used to hold as more or less reliable intellectual crutches. I’m having to learn to walk without them and make or find new ones, even though I suspect they will be temporary.


I sat through an excellent webinar given by Vitomir Kovanovic about the implications of AI for teaching and learning. Vitomir is part of the University of South Australia’s Centre for Change and Complexity in Learning (C3L). The centre has done much to draw attention to the emergence of AI apps and the implications for formal education. Importantly, Vitomir lightly mapped the history of developments in AI, an essential part of any understanding of the what, why and how of AI’s interplay with formal education. History matters. Understanding legacy effects, both material and intellectual, matters. Without an understanding of these patterns, any reaction to the new is blind. If you are going to think about or play in this space, these considerations are non-negotiable. You don’t need great detail, but some sense of these ideas and their history is crucial.


Webinars with chats can be difficult and annoying if you are trying to listen to a speaker and follow the always incoherent chat stream. I have a bad habit of reacting grumpily to comments that fall into the “you don’t get it” basket. I need to be more understanding and patient as the world of formal education slowly drags itself towards dealing sensibly with these early (in public terms) developments in AI. I should say that the video of the session was recorded, as was the chat.


I must also say, though, that I was pleased when on one slide Vitomir gave a number for the words used to train GPT-3. His was 45 trillion. My back-of-the-envelope calculation, with help from ChatGPT, was 48 trillion. What’s a few trillion here or there in an LLM?
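For the curious, an estimate like that is mostly a units conversion with a couple of guessed constants. Here is a minimal sketch in Python; the corpus size and the bytes-per-word figure are my assumptions for illustration, not the numbers behind Vitomir’s figure or mine:

# Purely illustrative back-of-the-envelope: convert a corpus size in
# bytes to a rough word count. Both constants below are assumptions.
corpus_bytes = 45e12       # assume a 45 TB raw text corpus
bytes_per_word = 6         # ~5 characters per English word, plus a space
words = corpus_bytes / bytes_per_word
print(f"roughly {words / 1e12:.1f} trillion words")  # roughly 7.5 trillion words

Nudge either constant and the answer moves by trillions, which is rather the point.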


So with all the annoying chat going on, I had to pause and remind myself that when new ways of doing things (aka technologies) emerge, “we” quite naturally apply familiar, comfy ways of thinking about things. That’s the first step in coming to terms with anything new. Sometimes the old ways of framing can be useful, to a point. My experience, particularly around things digital, is that they are not.


We still have folk who think in terms of an A4 world, i.e. a world in which print ruled and control of it shaped how we thought about things. Nicholas Negroponte coined the term “the digitally homeless” to describe such folk. 


Occasionally, you stumble on an idea that breaks from herd think and adds usefully to your repertoire. Here I am reminded of Jay Weston’s prescient paper [1]. It also brought to mind the time when automobiles emerged and people thought about them in terms of a change in transport: getting rid of horse dung from the streets and similar short-term consequences. It was an era of horseless carriage thinking. The other thing that came to mind was a book written by Carolyn Marvin, When Old Technologies Were New [2]. There is much to say about this fine piece of scholarship, but a quote she took from Henry Flad in the St. Louis Globe-Democrat in 1888 captures the idea well:


The time is not far distant when we will have wagons driving around with casks and jars of stored electricity, just as we have milk and bread wagons at present. The house of the future will be constructed with the view of containing electric apparatus for lighting, power, and cooking purposes. The arrangements will be of such a character that houses can be supplied with enough stored electricity to last twenty-four hours. All that the man with the cask will have to do will be to drive up to the back door, detach the cask left the day before, replace it with a new one, and then go to the next house and do likewise. This very thing will soon be taking place in St. Louis.


So that is my first point. We are, as Avi Goldfarb puts it, in “in-between times”: horseless carriage times. Choose your own analogy.


What makes things as messy and as complicated as they are is well captured in a quote by William Gibson: 


The future is already here – it's just not evenly distributed.


The uneven distribution generates a great deal of the noise, commentary and silliness that we are currently swimming in. It’s unhelpful, but it reflects strongly held views about what have been, for a long time, the fundamentals of how we think about formal education.


Certainly we have to deal with the now, the availability of these new AI apps and what they mean for existing ways of doing things, but unless we have an eye to the future we will be trapped, unproductively, in a horseless carriage world.


One of the positives in the chat this morning was a link to an eminently sane and sensible position on plagiarism [3]. Sarah Eaton offers six tenets. Anyone concerned about this issue needs to commit them to memory! While some might be seen as a bit of an overreach, to me the question she draws attention to for the longer term is not if but when.


All new ways of doing things are over-hyped in the short term and under-estimated in the long term, and this is where the real problems lie for formal education. To me, the longer-term developments [4] ask huge questions about what it is to know, to learn, to demonstrate skill and knowledge; about what curriculum is, what it means to assess, what is worth knowing, and so on. These are the building blocks on which all of formal education’s current practices are built. To me they are all, or will soon be, fluid. Related to these matters are the old philosophical debates from the early days of AI in the ’50s and ’60s, which have resurfaced. They remain important today.


A simple question I have asked for a long time, well before any of the recent developments in AI appeared: why do we teach students to do things that machines are good at? We wash clothes in a machine. We don’t use a copper to wash clothes, as some of us will recall. We have a heavy reliance on an ever-expanding set of machines: physical, digital and hybrid. The hybrids are physical machines with computers built into them.


In all of this we delegate work to machines, yet we pay little or no attention to what happens when we delegate. Here I indulge in one of my longer-lasting intellectual crutches, one that draws on notions from material semiotics, more commonly referred to as actor-network theory. In very crude terms, Latour [5], and Sayes [6] perhaps more eloquently, argue that there is an exchange of capacities when we ask a machine to do work. The machine, in response, demands “new modes of action” from humans.


To illustrate the point: at this time, I think that when you use ChatGPT to do a task you need three complementary skills/knowledges (new modes of action): you need a rough idea of how the app was built and how it works; you need good prompting skills; and you need to be able to evaluate what it produces. There are ample resources online that give good advice about the first two, and I sketch the second and third below. I suspect these complementary skills will change over time as the AI systems improve, improve and improve again.

I’m no fan of technological determinism. Nor do I think the social will control and manage things. It will all spin around the interplay between us and these new machines or apps. Yes, they are black boxes. We can’t see inside them, but that is also the case with the meat computer that sits on our shoulders. It is another, but different, black box, although some of the logic that underpins the digital black box derives from thinking about neurons and their associations. Much more to say. For now…
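The promised sketch uses OpenAI’s Python client as it stood at the time of writing; the model name, the prompt and the settings are my illustrative assumptions, not recommendations:

# A sketch, not a recipe: prompt deliberately (skill two), then treat the
# output as a draft to be checked (skill three). Assumes `pip install openai`
# and an OPENAI_API_KEY set in the environment.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # Skill two, prompting: give the model a role, a task, constraints
        # and an instruction to flag its own uncertainty.
        {"role": "system",
         "content": "You are a careful research assistant. If you are "
                    "unsure of a claim, say so explicitly."},
        {"role": "user",
         "content": "In three sentences, summarise the actor-network theory "
                    "notion of delegation, noting any points of dispute."},
    ],
    temperature=0.2,  # a lower temperature for a more conservative answer
)

draft = response["choices"][0]["message"]["content"]

# Skill three, evaluation, happens outside the code: check the draft against
# sources you already trust (e.g. Latour, Sayes) before letting it stand in
# for your own words.
print(draft)

The evaluation step is the one that resists automation; it is where the exchange of capacities bites.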


<EoR>




[1] Weston, J. (1997). Old Freedoms and New Technologies: The Evolution of Community Networking. The Information Society, 13(2), 195-201. https://doi.org/10.1080/019722497129214  


[2] Marvin, C. (1988). When Old Technologies Were New: Thinking About Communications in the Late Nineteenth Century. Oxford University Press.


[3] Posted by Sam Fowler in the chat. Attribution matters, even on small things. It is part of showing how you work and think.


[4] Just what is long and short in terms of systolic time is hard to make a call on. The developments in AI continue to attract massive investment, which fuels the speed of development.


[5] Latour, B. (1992). Where are the missing masses? Sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociological Change (pp. 225-258). MIT Press. http://www.bruno-latour.fr/sites/default/files/50-MISSING-MASSES-GB.pdf 


[6] Sayes, E. (2014). Actor–Network Theory and methodology: Just what does it mean to say that nonhumans have agency? Social Studies of Science, 44(1), 134-149. https://doi.org/10.1177/0306312713511867  





 







