December 24, 2022

Bibs & bobs #12



ChatGPT fun

Amid all the doom, panic and boosterism around ChatGPT there are tiny glimmers of fun. Here is one from Mark Schaefer: 20 Entertaining Uses of ChatGPT You Never Knew Were Possible. 


Accessible academic outputs

I mentioned the notion of embedding new text into LLMs which could then be interrogated. A model for this is Sahil Lavingia’s book, The Minimalist Entrepreneur. Here, you can ask his book questions. It’s roughly similar to Google’s Talk to Books.


I began to wonder: what if all academic publications had an option like this, one with which you could ask questions of the paper, chapter or book, with the possibility of then extending your query via the references in the selected work? Yeah, wild, but a fun shuffle of the academic game.
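The mechanics behind asking a book questions are, at their core, retrieval: split the text into chunks, represent each chunk and the question as vectors, then hand the closest chunks plus the question to an LLM to answer from. Here is a toy sketch of the retrieval step, with word-count vectors standing in for real learned embeddings; all of the function names are my own, hypothetical ones, not from any of the apps mentioned:

```python
import math
import re
from collections import Counter

def embed(text):
    # Stand-in "embedding": bag-of-words counts. Real systems use learned
    # vectors from an embedding model, but the retrieval logic is the same.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks):
    # Return the chunk most similar to the question; a real pipeline would
    # pass this chunk, with the question, to an LLM to compose the answer.
    q = embed(question)
    return max(chunks, key=lambda chunk: cosine(q, embed(chunk)))
```

A real pipeline swaps embed for calls to an embedding model and feeds the retrieved passages to the LLM as context, but the shape of the idea is the same.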


This would be a big step beyond what apps like Bearly offer currently. 


AI and writing

A collection of apps that use AI to support writing put together by Jeremy Caplan.


Transcription

There are a lot of options for transcripts of video and audio files. AssemblyAI is worth a play. I was interested in a recent presentation by Venkatesh Rao to DEVCON in Bogota, so I ran it through the app. It mangled his name but the rest was pretty good. The app has a lot of options.


Formal Education and ChatGPT

As predictable as night follows day. Instead of doing the smart thing given what machines can now do, some Oz universities will cling to pre-AI forms of assessment. So so so stupid. The folk making these decisions are being paid big $. How many of them have explored or looked at how any of this is unfolding? Ban or domesticate is the old playbook formal education has used since the digital happened. It has not learned a thing from previous digital developments going back to the late 1970s. A measure of how smart any formal educational system or organisation is will be how well it deals with these developments, now and into the rapidly approaching future. Below, I suggest the likely emergence of a new game that corporate universities will play: Whac-an-AI.


IMHO these developments will make Gutenberg seem like a tiny sneeze in the history of civilisation. 


Models, meddles and envy

The term physics envy is sometimes used to describe a motive behind some research in the sciences of the social. More broadly, the influence of different models drawn from science, e.g. Newtonian physics, chaos, non-linearity, complexity, emergent behaviour and so on, can be found in the logic that underpins a lot of research of the social, particularly in education, which seems prone to picking up ideas that appear shiny or new, even if they are neither.


I’m of the view that there is nothing wrong with drawing on models and ideas from other fields but if you do, you need a decent helicopter view of the idea, its history and its limitations before drawing on it as metaphor or analogy.


For instance, if you wanted a quick and eloquent helicopter view of dimensions and the associated physics and mathematics it would be hard to go past a post of Margaret Wertheim’s. 


Perhaps it is an indication of how difficult it is to do good research in the so-called soft sciences that good helicopter views of the ideas, agendas and models in this field are uncommon.


Whac-an-AI

As each new bit of AI pops up to support writing, coding, planning and so on, and students studying in those fields continue to draw on them, you have a scenario for the perfect game of Whac-an-AI. In this game, the corporate university, in order to protect its brand (our graduates don’t cheat), mistakenly commits to a practice of teaching its graduates how to do things that machines are now good at [1].


One thing that is predictable in this new wild space is that we will see more folk employed to manage and do the whacking.


In anticipation of such a game emerging, I propose a new international standard for university stupidity called the Whac-an-AI. Universities can be scored on an open-ended scale by the number of moles they have more or less managed, or tried, to whack. Other awards spring to mind, like best AI whack of the year, the most diligent AI whacker and so on. In time it could rival the ubiquitous ratings of universities that some universities appear to be obsessed with. If only :).


Sadly, if a tiny fraction of the effort associated with Whac-an-AI was put into:


1. supporting student understanding of what is going on in terms of LLMs and their many relatives,


2. teaching students how to write half-decent prompts, and


3. teaching students how to evaluate what LLMs output,


then life would be so much easier and better for students, staff and even managers!
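The third item, evaluating LLM output, can even be partially mechanised. As a toy illustration, and entirely my own sketch rather than anything proposed in the sources above: a crude first-pass check that flags numbers in a generated answer that do not appear in a trusted source text. It will not catch much, but it makes the point that evaluation is a teachable skill and a process, not an afterthought.

```python
import re

def flag_unsupported_numbers(llm_answer, source_text):
    # Collect every number that appears in the trusted source.
    source_numbers = set(re.findall(r"\d+(?:\.\d+)?", source_text))
    flags = []
    # Split the answer into rough sentences and flag any number
    # in the answer that the source does not contain.
    for sentence in re.split(r"(?<=[.!?])\s+", llm_answer):
        for num in re.findall(r"\d+(?:\.\d+)?", sentence):
            if num not in source_numbers:
                flags.append((num, sentence))
    return flags
```

A student who runs a check like this still has to read the flagged sentences and decide what they mean, which is precisely the complementary skill at issue.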

The problem any university will face trying to ban or block AI use by students is that the moles will keep getting better and will breed rapidly. I am a bit of a fan of helicopter views of things. This tweet from Sterling Crispin puts the technical side of things into some perspective. 


Imagine an AI model that's 3x larger and more powerful than GPT3 aka ChatGPT


Google already built that in April, called PaLM, on their own TPU hardware competing with NVIDIA. People think ChatGPT will replace Google but they basically invented transformers in '17 (the T in GPT)






Imagine students playing with monster moles! The problem, of course, in an exponentially improving space is that the moles will keep getting better and will multiply faster than a corporate university could employ whackers.


Recognising that we are living through times in which some things increase exponentially is an important part of making sense of what is going on. As Dan Shipper recently wrote:


In his 1999 book The Age of Spiritual Machines, Ray Kurzweil wrote: “It is in the nature of exponential growth that events develop extremely slowly for extremely long periods of time, but as one glides through the knee of the curve, events erupt at an increasingly furious pace. And that is what we will experience as we enter the twenty-first century.”


A long time back I opted to use EdExEd as a label for my agenda. The Ex in the label is for exponentials; the other two are edges and education. Education should be plural, I think.






                                                                                                    



[1] This is not a new phenomenon. Each new way of doing things has been met with attempts to ban or domesticate it in formal education. A long time ago the hand-held calculator was subject to banning and eventual domestication. Rarely was any thought given to the complementary knowledge and skills needed to make good use of the new way of doing things. There are lots of examples of this Dalek mindset.


December 20, 2022

Bibs and bobs #11

 

Graphically illustrating research

I’m often drawn to images that convey ideas/information clearly. Maybe it’s the novelty. For example, Erik Brynjolfsson’s diagram about AI and human tasks in B&B #9 has stayed with me and is currently framing how to muddle through the notion of embedding new text into GPT. This post has some examples of using graphical representations. And, of course, the prospect of having an app like DALL·E come up with your representations will likely come to mind.

Learning to live with GPT

This is an interesting take on using ChatGPT: chatting to yourself. I think this approach could possibly work for reflecting on research design, analysis or framings. It’s worth a try. Michelle Huang is an artist from New York; she fed GPT text and then asked it questions. Here is the account.


Bryan Alexander has put together a collection of posts: Resources for exploring ChatGPT and higher education. 


The ripples of ChatGPT continue to play out. The counter ripples of moral panics in education are already growing. I expect a lot of noise and not much sanity to continue. Until we see how any of this plays out in practices, the way things get done, it’s difficult to say much more. 


In the interim, hats off to the small army of folk churning out AI apps. One or two will make the big time and create serious waves. All we can be sure of is that most of the current predictions will get it wrong.


Change, reform and scale in formal education

The ANT daemon that sits annoyingly on my shoulder keeps reminding me that most of the ways change is thought about in education are misleading at best, i.e. the early adopter stuff of Everett Rogers (think early adopters, change agents etc.). Fine if you are into after-the-event categories, but not useful if you are interested in how change happens and what keeps any change in place.


The history of change/reform in education is a sad one. Lots of fab ideas that glow for a bit and then just fade away.


I came across this post by Sam Chaltain that offers a more interesting approach to thinking about change and importantly, scale. Chaltain’s point resonates with Hemant Taneja’s book, Unscaled [1]. We appear to be living through another unscaling in formal education as time-revered but clunky approaches to assessing students become unsettled. 


Thinking like this is anathema to those who would seek to measure the outcomes of a one size fits all education systems. The managerialoso’s [2] existence is likely under some threat.   


Music

I don’t have any formal music knowledge but the how of this app intrigued me. Riffusion is wild in so many ways. The best way to get a take on it is to play.  


                                                                                                    



[1] Taneja, H. (2018). Unscaled: how AI and a new generation of upstarts are creating the economy of the future. PublicAffairs.  


[2] Managerialoso is a term I coined to refer to, as ChatGPT suggested:


The term "managerialoso" could potentially be used to indicate that the group is overly focused on managerial practices or principles, to the point of being excessive.


The particular set of managerial principles and practices is more often than not top-down, hierarchical, and belongs to an era long since gone. Any resemblance to managers living or dead is, of course, purely coincidental.



December 11, 2022

Bibs & bobs #10

New Avenues

A fab post by Robin Sloan [1] on finding new ways of relating online:


Here’s my exhortation:


Let 2023 be a year of experimentation and invention!


Let it come from the edges, the margins, the provinces, the marshes!


THIS MEANS YOU!


I am thinking specifically of experimentation around “ways of relating online”. I’ve used that phrase before, and I acknowledge it might be a bit obscure but, for me, it captures the rich overlap of publishing and networking, media and conviviality. It’s this domain that was so decisively captured in the 2010s, and it’s this domain that is newly up for grabs. …


I want to insist on an amateur internet; a garage internet; a public library internet; a kitchen table internet. Now, at last, in 2023, I want to tell the tech CEOs and venture capitalists: pipe down. Buzz off. Go fave each other’s tweets.


It’s inspiring stuff. If you can’t repurpose, rework, rebuild relating then what is the point of all of this? 


Sloan’s exhortation ought to be seriously heeded by users of the now ubiquitous LMS in formal education.


Search Scholarly Materials Preserved in the Internet Archive

From the post:

Looking for a research paper but can’t find a copy in your library’s catalog or popular search engines? Give Internet Archive Scholar a try! We might have a PDF from a “vanished” Open Access publisher in our web archive, an author’s pre-publication manuscript from their archived faculty webpage, or a digitized microfilm version of an older publication.


Let’s destroy the planet

Via Jason Kottke: 

Creative coder Neal Agarwal has launched his newest project: Asteroid Launcher. You can choose the asteroid’s composition (iron, stone, comet, etc.), size, speed, angle of incidence, and place of impact. Then you click “launch” and see the havoc you’ve wrought upon the world, with all kinds of interesting statistics. 

And yes you can drop one anywhere in Oz!


Delegating work to machines

A fair chunk of the Twitter world and blogosphere appears to be celebrating, complaining, or doom-saying as ChatGPT fever spreads. Unlike COVID-19 and its variants, this AI won’t be easily stopped. There appears to be no mRNA-like approach to slow any of this down. The digital ecosystem has spawned an interesting contagion. I found myself collecting bits and pieces about it all.


The number of apps (see this catalog) and the amount of commentary are more than large. It’s like trying to catch a runaway train on roller skates. If you want to keep an eye on new apps and a selection of commentary then Ben’s Bites is handy.


Another thoughtful post from Henrik Karlsson on GPT wrangling [2]. The argument he makes about augmenting thinking resonates with how I think about this weird word-prediction beast on steroids, aka ChatGPT. Henrik links to a post by Matt Webb.


I was drawn into the speculating and tweeted (7.12.22)

Am musing about the increased production of text online due to the use of LLMs. If we cope by using LLM-based apps to summarise text, are we moving into an endlessly expanding loop akin to the paper clip maximiser?


Right now there is a lot of noisy prognostication. Yes, guilty as charged. It reminds me of Carolyn Marvin’s wonderful book, When Old Technologies Were New. It’s an analysis of humans anticipating and framing the advent of things like electricity and the telephone.


A lot of the thinking about AI, particularly in education, looks a lot like horseless carriage thinking, i.e. it’s just like what we had before but with a digital tweak [3]. So we cling to classical, industrial models of formal education, keep aiming for better models of students and teachers, and wonder why the models don’t behave all that well.


I am working on a draft about the implications of these developments for research in the sciences of the social. The usual tribes of boosters, doomsters, critics and end-of-schooling types can be found in the many posts. It’s likely to be a fun ride, at least for the moment.


Personally, like George Siemens, I think this is a biggie. To me, it is akin to the availability of the first word processors or being connected to the Internet for the first time and paying for email or to the time when the first search engines made their debut. We know how much things changed when the cost of search approached zero. We are living through the moment when the cost of accessible predictability is doing the same.


I keep being drawn back to Drew McDermott’s classic paper, ‘Artificial Intelligence Meets Natural Stupidity’: so much wishful naming. Much more to say, but don’t hold your breath waiting for the draft paper.


How to speak honeybee

An account of the work of Karl von Frisch. Lessons everywhere. And if honeybee is not for you, you might find a touch of kiwi a bit of fun: Taika Waititi reading a letter about a speeding ticket is well worth the time.



                                                                                                    


[1] Ht @vgr


[2] Which is apparently called prompt engineering. I like the wrangling better.


[3] Code for domesticating the new technology.


December 04, 2022

Bibs & bobs #9


Delegating research work to machines

Machines of various sorts have always played a role in doing research. I recall from a long time ago and in a universe too distant to remember that I used a piece of software written in Fortran to model experimental NMR data called LAOCOON.


Today we routinely use apps to write with, to search databases and to model with (spreadsheets and related software), along with many other pieces of software in everyday research use.


Now, with the availability of Large Language Models (LLMs) [1], it is not surprising to see a number of AI apps designed to support research tasks, or ones that might be exploited to help with routine research work. I have mentioned some of these in previous Bibs & bobs.


Paper digest is one of a number of interesting apps that one might play with to see what it is capable of. David Beer has a useful commentary and account of his explorations with the app. 


My sense is that we will try out and sometimes adopt one or two of these apps if they demonstrate that they are better at some of the tasks one routinely does in research. Most of these apps make use of GPT-3 and will likely make use of its soon-to-be-released successor, GPT-4. Supposedly this may improve the quality of output from the various dependent apps. What is in prospect for GPT-n is anyone’s guess.


But back to the here and now: the app called Elicit is, at present, pretty handy in many respects. But to me, the area that is much more interesting is indicated in the figure from Erik Brynjolfsson’s paper, The Turing Trap [2]. It is the new tasks that humans can do with the help of machines. There are many “adjacent possibles” [3] to be explored.



Even at this early stage, the in-between time for AI in research assistance, the odd or weird way in which these models operate (alien intelligence, as Rudy Ruggles [4] once put it) can at times nudge you into seeing dots that can be joined, connections that are new to you at least.


The other spin-off from working with machines like those I have mentioned is that the detail of working with such apps can be easily documented and replicated if necessary. These details ought to make up a useful component of any research reporting. This is a prospect that is easier if you publish in places that have escaped the A4 mindset.


Delegating work to machines in formal education

This past week the Australian Association for Research in Education held its annual conference in Adelaide. One of the keynotes was given by George Siemens. He spoke about AI as one of the biggest challenges facing education [5]. I fully agree that AI is the biggest challenge, but it is also likely to be a significant opportunity, and not in the way typically argued for in education circles, that of improving things. Simply, things will change, and change in ways that are difficult to anticipate and plan for. I’m not heartened by the history of formal education’s engagement with the digital, which boils down to: domesticate it, and if it can’t be domesticated, ban it.


LLMs will prove difficult to shoehorn into the domesticate-or-ban approach. They are developed from text, a lot of text. Having text apps that are predictive, think spellchecker on steroids, generates a lot of intriguing issues for teachers, students, modes of assessment and so on. In relation to this, Richard Gibson wrote a useful post in which he reported his experiments with GPT-3. It mirrors to some degree the opinion piece that Mike Sharples published [6]. Both are well worth reading.


The recent release of ChatGPT will likely shake up the educational establishment. How these developments are framed now will play a big part in what happens in the near future. Even now, the current framings fall into familiar patterns found in the history of formal education’s engagement with the digital. Drawing on a categorisation I developed a long time ago, they can be labelled boosters, doomsters, critics and anti/de-schoolers [7]. I hope that for the new kid on the block, AI, the labels don’t require too much dot joining.


I have never found those framings particularly interesting or even useful. For my part, framing these developments holistically is more helpful, i.e. human plus machine, the centaur as some would suggest, points to an augmentation, a dependence of one on the other. Framing these developments as changes to the way things are done around here [8] is then a first and important step. Seeing a holistic technology as formalised practice necessarily connects to culture. The impact that search which costs next to nothing has had on how we work, think and carry out routine educational tasks illustrates the point well. The prospect of prediction that costs next to nothing will likely be more profound than the cost of search getting close to zero.


My second move draws on the work of Bruno Latour [9]  in which he argues that in any delegation of work to a machine, there is an exchange of capacities, i.e. for a machine to do a task it requires a human to do something new, something additional to the delegation of work, something complementary. A simple example is that of the use of a calculator. If a calculator is used to calculate a sum and the user has no approximation skills then the number provided by the calculator can’t be checked [10]. 
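Latour’s exchange of capacities can be made concrete with the calculator case: the complementary human skill is rough estimation, rounding each operand to one significant figure to sanity-check the machine’s answer. A toy sketch of that checking skill (the function names are mine, purely illustrative):

```python
import math

def one_sig_fig(x):
    # Round x to one significant figure, e.g. 47 -> 50, 82 -> 80.
    magnitude = 10 ** math.floor(math.log10(abs(x)))
    return round(x / magnitude) * magnitude

def plausible(claimed, a, b, tolerance=2.0):
    # Check a calculator's claimed product of a and b against a rough
    # mental estimate: the claim should sit within a factor of the estimate.
    estimate = one_sig_fig(a) * one_sig_fig(b)
    return estimate / tolerance <= claimed <= estimate * tolerance
```

So 47 × 82 claimed as 3854 passes (the estimate is 50 × 80 = 4000), while a slipped-decimal 38540 is caught. The delegation only works because the human brings the estimating skill to it.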


For something like ChatGPT, two things come to mind. There is a skill to be developed in prompting the app. Secondly, once the app has generated its output there will always be a need for evaluation, something these models are incapable of as they have no context.


AI, appropriately framed, can be positioned as an augmentation of what humans do. We all need augmentation. We depend on machines to do so much of our work already, including all sorts of chemical and mechanical augmentations [11]. Some folk are more in need than others, such as those with disabilities of various kinds. But we are all in need of them, me perhaps more than most. The potential here is enormous. There is much more to say. For now this is where I’ll leave it.


Blogging impact on policy

An interesting blog post from LSE on the citation of LSE blog posts in policy documents. It is not a big impact but appears to be one that is steadily growing. The grey literature rises. 


More on path dependence

Trung Phan has a detailed account of some of the classic examples of path dependence in technology [12]. His newsletter is well worth a follow if you have interests in AI as is Ben Tossell’s.  


A directory of AI-based apps

If you find all of these developments of AI apps somewhat bewildering, it’s useful to recall that the driver of these apps is venture capital, which seems to have bottomless pockets, for now. It reminds me of the time when early microcomputers were the state of the art for desktop computing and there was a huge investment, for that era, in software that ran on these devices. Education, then and now, is always identified as the market to crack.


Futurepedia.io documents all new, popular and verified apps with details about costs and what they supposedly can do. 


Big numbers

When it became clear that data on the Internet was growing rapidly, helpful explanations in terms of the number of books stacked between the Earth and the Moon, and similar illustrations, became common. The latest [13]:


By the 2030s, the world will generate around a yottabyte of data per year — that’s 10^24 bytes, or the amount that would fit on DVDs stacked all the way to Mars.

Which has prompted the need for new names for these mind-numbingly large or small numbers. I can vaguely recall when I first came across petabytes. It was much later than when the name was chosen to indicate 10^15.




From the article:

With the annual volume of data generated globally having already hit zettabytes, informal suggestions for 10^27 — including ‘hella’ and ‘bronto’ — were starting to take hold, he says. Google’s unit converter, for example, already tells users that 1,000 yottabytes is 1 hellabyte, and at least one UK government website quotes brontobyte as the correct term.


Just a few prefixes with which to dazzle your colleagues. The article is worth a skim.
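For the curious, the prefix ladder the article describes is easy to play with. A small sketch mapping a byte count to the largest applicable SI prefix, including the newly adopted ronna (10^27) and quetta (10^30):

```python
# The decimal SI prefixes as (name, power of ten), smallest to largest.
# ronna (10^27) and quetta (10^30) are the newly adopted names.
SI_PREFIXES = [
    ("kilo", 3), ("mega", 6), ("giga", 9), ("tera", 12),
    ("peta", 15), ("exa", 18), ("zetta", 21), ("yotta", 24),
    ("ronna", 27), ("quetta", 30),
]

def to_si(n_bytes):
    # Return (value, prefix) using the largest prefix not exceeding n_bytes.
    for name, exp in reversed(SI_PREFIXES):
        if n_bytes >= 10 ** exp:
            return n_bytes / 10 ** exp, name
    return float(n_bytes), ""
```

So 1,000 yottabytes comes out as 1 ronnabyte, the officially adopted name rather than the informal hellabyte or brontobyte mentioned in the quote.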


                                                                                             


[1] Venkatesh Rao suggests LLMs should be called memory creatures. His piece on AI as superhistory, mentioned in Bibs & bobs #2, is important.


[2] Brynjolfsson, E. (2022). The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence. Daedalus, 151(2), 272-287. https://doi.org/10.1162/daed_a_01915  


[3] A term coined by Stuart Kauffman and popularised by Steven Johnson in his 2010 book, Where Good Ideas Come From.


[4] Ruggles, R. L. (1997). Knowledge management tools. Butterworth-Heinemann, p. 205 


[5] There is a brief account of his presentation here. 


[6] Sharples, M. (2022). Automated Essay Writing: An AIED Opinion. International Journal of Artificial Intelligence in Education, 32(4), 1119-1126. https://doi.org/10.1007/s40593-022-00300-7 


[7] This crude categorisation was something I developed the night before a keynote presentation I gave to a principals’ conference in Darwin in 1996. I wrote some scripts for each of the stereotypes and four principals kindly volunteered to play the parts. They all did a brilliant job hamming it up! I still have the scripts if interested.


Subsequently, a good colleague and I made use of the categorisation more formally: Bigum, C., & Kenway, J. (1998). New Information Technologies and the Ambiguous Future of Schooling: some possible scenarios. In A. Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International Handbook of Educational Change (pp. 375-395). Kluwer Academic Publishers. 


[8] A framing Ursula Franklin made, drawing on Kenneth Boulding, in her excellent 2004 book, The Real World of Technology.


[9] Latour, B. (1992). Where are the missing masses? Sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociological Change (pp. 225-258). MIT Press. http://www.bruno-latour.fr/sites/default/files/50-MISSING-MASSES-GB.pdf  


[10] You could add additional complementary skills to this task, like an understanding of significant figures and perhaps, in precision calculations how the calculator does its arithmetic.


While the complementary skills in using AI apps may appear to be simply checking the generated output, I’d argue that we need to proceed on a case-by-case basis, and identifying what complementary skills and knowledge are necessary is not a trivial task.


[11] I’ve lost count of how many I have. Good thing I have a machine to keep track of some of them.


[12] I pointed to the notion of intellectual path dependence in Bibs & bobs #8.


[13] Gibney, E. (2022). How many yottabytes in a quettabyte? Extreme numbers get new names. Nature. https://doi.org/10.1038/d41586-022-03747-9  
