December 31, 2022

Bibs & bobs #13

dis-ELIZAs

In the 1960s Joseph Weizenbaum created a natural language processing program called ELIZA. It was the first chatbot. It can be accessed here. ELIZA was designed to mimic a Rogerian counselor. In broad terms the program did the prompting and the human user often replied at length, which is the reverse of what we have now with chatbots like ChatGPT. While the software’s responses were simple, users’ reactions to them prompted Weizenbaum to explore his ethical concerns in Computer Power and Human Reason [1].
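ELIZA’s Rogerian trick was essentially pattern matching plus pronoun reflection, turning the user’s own words back into a question. A minimal sketch of the idea (not Weizenbaum’s actual script, and with made-up rules) might look like:

```python
import re

# Pronoun reflections used to turn the user's words back on them.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# A few illustrative Rogerian-style rules: a regex and a response template.
RULES = [
    (re.compile(r"i am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.+)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"my (.+)", re.I), "Why does your {0} concern you?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default keeps the human talking

# e.g. respond("I am sad about my job") -> "Why do you say you are sad about your job?"
```

The default response is the key design move: when nothing matches, the program simply prompts for more, which is why the human ends up doing most of the talking.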

Having machines as talkative, responsive companions underpins the interest in having companion robots for the old and young. As Diana Fleischman argues, companionship extends to sexual gratification. 


A more intriguing take on this evolving pattern of interactions between humans and machines is provided by Kazuo Ishiguro’s book, Klara and the Sun [2]. It details a dystopian future in which socialisation of the young has been delegated to androids: AFs, or artificial friends, humanoid robots powered by the sun. Seeing the relation between humans and machines through the eyes of a machine might seem far-fetched at this time, but humans have long used machines to keep the young and old busy, interested and entertained. I’m thinking of radio, television, smartphones and, of course, games of all varieties. Ishiguro’s fiction explores a possible realignment of the human/machine relationship, told from the machine’s point of view.


It is easy to dismiss these musings for what they are, fictions or funny little imaginary worlds that form in a creaky old meat computer. But right now I think they are a largely unspoken part of the debate and commentary swirling around recent developments in generative AI. There is, for want of a better adjective, an emotional side to these debates.


Perhaps it will surface in interactions with various AI apps that slip into what Masahiro Mori calls the uncanny valley, the point at which the affinity we feel when interacting with an app turns to repulsion. It poses an interesting question for the black belt prompters: how to develop prompts that produce that outcome.


Maybe recent work by Carlos Perez will provide that reaction for you. 

Prospects for AI and research in the social sciences

I’ve always been interested in what I think of as secret academic business, i.e. how academics do their work: note making, searching, curation, writing and so on. Research across three European countries suggests that AI, like a lot of the digital aids that have surfaced in recent years, will go largely ignored.


94% of social scientists say they use Microsoft Word to write up their work. The vast majority also use Word to manage their references with just one in five using reference management software. Indeed, one in four humanities scholars draft early versions of their manuscripts in long hand. Most of the researchers we surveyed have no need for digital collaboration tools either – perhaps because the majority work alone – and where collaboration is needed, HSS scholars prefer to use email. They also overwhelmingly use email rather than social channels to promote their work. 


I need to locate more research that explores the attitudes and digital habits of researchers in the so-called soft sciences. There is obviously a visible, some might say noisy group who pick up on most of the shiny new toys that emerge from what remains a primordial digital soup. More broadly, and particularly in education, my sense is that there is a significant proportion of folk who get by with minimal engagement with things digital. There is much to tease out here. 


Persuasion

My curation of ideas is ordered around a small set of notions that I think are important for my making some sense of how the world works. Persuasion is one of those notions. Perhaps my engagement with the early actor-network theory (ANT) accounts of scientific practices underpins my interest. I found some of those ANT ideas in a review by Ingrid Robeyns of Anand Giridharadas’ book The Persuaders.


My take-away from the book is that there is no point in believing you are right (or have the right policy, or the right analysis on what needs to happen on matter X), and believing the only thing that is needed for change is airing those views and that analysis. It’s just not enough. We need to actually spend time and effort to persuade others that this is the right analysis/policy/direction, and this persuasion cannot be merely cognitive; it requires understanding “where people are”, what makes them believe what they believe, and showing respect for them as a person at the outset. All of that requires listening, and being willing to engage in a genuine conversation, and finding out why people believe what they believe. Just believing I am right (and having all the arguments sorted out in my head) and airing my views, is not enough to also make a difference in the world, especially not in deeply divided societies.


While these ideas are often cited as ways to make progress on issues of concern, you would have to conclude, given the current state of disagreement across a wide range of issues, that not much of this approach is actually being practised.


Of course all of this is easier said than done, and it does assume, as Robeyns points out, that both sides of a debate share a common understanding of how to play:

Being open to be persuaded implies being willing to accept, at a meta-level, that there is a distinction between a true claim and a false claim, and that one should not deliberately claim things that one knows to be blatantly false.


I found myself thinking about this analysis closer to home: first, how one persuades oneself, and then the roles machines play in persuasion, something well represented in commentary and analysis of advertising, recommendation systems and so on. But I think there is something else going on.


It was perhaps timely that I came across a recent paper by Jeroen de Ridder [3], who makes an interesting case about what he calls online illusions of understanding. He argues that,


the mere availability and use of internet search tools thus inflates people’s sense of how much they know and understand.  (p.5)


This got me thinking about the impact of prediction machines [4] on our illusions of understanding. One could argue that the machines we draw on are under no illusions about what they are; just ask ChatGPT. But in the hands of humans they can clearly contribute to the inflated sense of knowing that de Ridder points to.


The bottom line, for the paper and for me, is that all this adds up to a heightened importance of the ability to ask good questions. As I wrote that line I resisted the urge to revisit some of the lame stuff I have posed in this blog. But I think there is an important point here. Our largely A4-honed sense of what makes a good question may be in need of some augmentation, augmentation that may even involve machines.


Value pluralism

An interview with Yejin Choi via Stephen Downes is a seriously useful read. Some snippets:


The truth is, what’s easy for machines can be hard for humans and vice versa. You’d be surprised how A.I. struggles with basic common sense. It’s crazy.


She draws on the notion of dark matter to talk about human common sense


Can you explain what “common sense” means in the context of teaching it to A.I.? A way of describing it is that common sense is the dark matter of intelligence. Normal matter is what we see, what we can interact with. We thought for a long time that that’s what was there in the physical world — and just that. It turns out that’s only 5 percent of the universe. Ninety-five percent is dark matter and dark energy, but it’s invisible and not directly measurable. We know it exists, because if it doesn’t, then the normal matter doesn’t make sense. So we know it’s there, and we know there’s a lot of it. We’re coming to that realization with common sense. It’s the unspoken, implicit knowledge that you and I have. It’s so obvious that we often don’t talk about it. For example, how many eyes does a horse have? Two. We don’t talk about it, but everyone knows it. We don’t know the exact fraction of knowledge that you and I have that we didn’t talk about — but still know — but my speculation is that there’s a lot. Let me give you another example: You and I know birds can fly, and we know penguins generally cannot. So A.I. researchers thought, we can code this up: Birds usually fly, except for penguins. But in fact, exceptions are the challenge for common-sense rules. Newborn baby birds cannot fly, birds covered in oil cannot fly, birds who are injured cannot fly, birds in a cage cannot fly. The point being, exceptions are not exceptional, and you and I can think of them even though nobody told us. It’s a fascinating capability, and it’s not so easy for A.I.


And her take on value pluralism

I’m excited about value pluralism, the fact that value is not singular. Another way to put it is that there’s no universal truth. A lot of people feel uncomfortable about this. As scientists, we’re trained to be very precise and strive for one truth. Now I’m thinking, well, there’s no universal truth — can birds fly or not? Or social and cultural norms: Is it OK to leave a closet door open? Some tidy person might think, always close it. I’m not tidy, so I might keep it open. But if the closet is temperature-controlled for some reason, then I will keep it closed; if the closet is in someone else’s house, I’ll probably behave. These rules basically cannot be written down as universal truths, because when applied in your context versus in my context, that truth will have to be bent. Moral rules: There must be some moral truth, you know? Don’t kill people, for example. But what if it’s a mercy killing? Then what?


ChatGPT, its initial and ongoing values training 

A good follow-on from the previous note is a detailed and clear account of ChatGPT’s training, which teases out its cultural biases (ht David Weinberger). The more interesting light at the end of the tunnel is the growing army of folk who, by playing with it, contribute to its ongoing values training.


The beat goes on

I find myself responding to various tweets that reflect an educational moral panic or yet another jaw dropping example of generative AI with the phrase: the beat goes on.


I thought it would be worth rewriting those lyrics to reflect on the impact of generative AI. So I prompted ChatGPT with: Rework these lyrics of the beat goes on to the impact of generative AI: and I supplied the lyrics.


It replied:


The AI beats on, the AI beats on

Models keep training, a dance to the brain

La de da de de, la de da de da

Generation was once the rage, uh huh

Technology has turned the page, uh huh

The neural network is the current thing, uh huh

Synthetic intelligence is our newborn king, uh huh

And the AI beats on, the AI beats on

Models keep training, a dance to the brain

La de da de de, la de da de da

The data sets are the supermart, uh huh

Machine learning still breaks its heart, uh huh

And researchers still keep on working hard

Electronically they keep the algorithms sharp

And the AI beats on, the AI beats on

Models keep training, a dance to the brain

La de da de de, la de da de da

Scientists sit and analyze

Bots keep learning to recognize

The progress keeps accelerating all the time

But humans still ask "Hey buddy, can you do this fine?"

And the AI beats on, the AI beats on

Models keep training, a dance to the brain

La de da de de, la de da de da

And the AI beats on (Yes, the AI beats on)

And the AI beats on (And the AI beats on, on, on, on, on...)

The AI beats on


If you need any evidence for the beat, this rather large collection of AI apps may do the trick.


                                                                                                    



[1] Weizenbaum, J. (1984). Computer power and human reason: From judgement to calculation. Penguin.


[2] Ishiguro, K. (2021). Klara and the sun (1st ed.). Alfred A. Knopf.  


[3] De Ridder, J. (2022). Online Illusions of Understanding. Social Epistemology, 1-16. https://doi.org/10.1080/02691728.2022.2151331 


[4] See Agrawal, A., Gans, J., & Goldfarb, A. (2022). Power and prediction: The disruptive economics of artificial intelligence. Harvard Business Review Press; and Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction machines: The simple economics of artificial intelligence. Harvard Business Review Press.


December 24, 2022

Bibs & bobs #12



ChatGPT fun

Amid all the doom, panic and boosterism around ChatGPT there are tiny glimmers of fun. Here is one from Mark Schaefer: 20 Entertaining Uses of ChatGPT You Never Knew Were Possible. 


Accessible academic outputs

I mentioned the notion of embedding new text into LLMs which could then be interrogated. A model for this is Sahil Lavingia’s book, The Minimalist Entrepreneur. Here, you can ask his book questions. It’s roughly similar to Google’s Talk to Books.


I began to wonder: what if academic publications all had an option like this, one with which you could ask questions of the paper, chapter or book, with the possibility of then extending your query via the references in the selected work. Yeah, wild, but a fun shuffle of the academic game.
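Under the hood, “ask the book” tools typically work by retrieval: split the text into passages, embed each one, fetch the passage closest to the question, and only then hand that passage to the model to draft an answer. A toy sketch of the retrieval step, using word-count vectors as a stand-in for real neural embeddings:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector.
    A real system would use a neural embedding model here."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_passage(question: str, passages: list[str]) -> str:
    """Return the passage most similar to the question; an LLM would
    then be prompted with this passage to compose the actual answer."""
    q = embed(question)
    return max(passages, key=lambda p: cosine(q, embed(p)))
```

The “extend your query via the references” step would simply repeat this lookup over the passages of each cited work, which is why the idea is wild but not technically far off.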


This would be a big step beyond what apps like Bearly offer currently. 


AI and writing

A collection of apps that use AI to support writing put together by Jeremy Caplan.


Transcription

There are a lot of options for transcripts of video and audio files. AssemblyAI is worth a play.  I was interested in a recent presentation by Venkatesh Rao to DEVCON in Bogota.  It mangled his name but the rest was pretty good. The app has a lot of options.


Formal Education and ChatGPT

As predictable as night follows day. Instead of doing the smart thing, i.e. rethinking assessment in light of what machines can now do, some Oz universities will cling to pre-AI forms of assessment. So so so stupid. The folk making these decisions are being paid big $. How many of them have explored or looked at how any of this is unfolding? Ban or domesticate is the old playbook formal education has used since the digital happened. It has not learned a thing from previous digital developments that go back to the late 1970s. A measure of how smart any formal educational system or organisation is will be how well it deals with these developments, now and into the rapidly approaching future. Below, I suggest the likely emergence of a new game that corporate universities will play: Whac-an-AI.


IMHO these developments will make Gutenberg seem like a tiny sneeze in the history of civilisation. 


Models, meddles and envy

The term physics envy is sometimes used to describe a motive behind some research in the sciences of the social. More broadly, the influence of different models drawn from science, i.e. Newtonian physics, chaos, non-linearity, complexity, emergent behaviour and so on, can be found in the logic that underpins a lot of research of the social and particularly in education which seems prone to picking up ideas that appear shiny or new, even if they are neither.


I’m of the view that there is nothing wrong with drawing on models and ideas from other fields but if you do, you need a decent helicopter view of the idea, its history and its limitations before drawing on it as metaphor or analogy.


For instance, if you wanted a quick and eloquent helicopter view of dimensions and the associated physics and mathematics it would be hard to go past a post of Margaret Wertheim’s. 


Perhaps it is an indication of how difficult it is to do good research in the so-called soft sciences that good helicopter views in this field of ideas, agendas and models are uncommon.  


Whac-an-AI

As each new bit of AI pops up to support writing, coding, planning and so on, and students studying in those fields continue to draw on them, you have a scenario for the perfect game of Whac-an-AI. In this game, the corporate university, in order to protect its brand (our graduates don’t cheat), mistakenly commits to teaching its graduates how to do things that machines are now good at [1].


One thing that is predictable in this new wild space is that we will see more folk employed to manage and do the whacking.


In anticipation of such a game emerging, I propose a new international standard for university stupidity called the Whac-an-AI. Universities can be scored on an open-ended scale by the number of moles they have more or less managed or tried to whack. Other awards spring to mind, like best AI whack of the year, the most diligent AI whacker and so on. In time it could rival the ubiquitous ratings of universities that some universities appear to be obsessed with. If only :).


Sadly, if a tiny fraction of the effort associated with Whac-an-AI was put into:


1 supporting student understanding of what is going on in terms of LLMs and their many relatives


2 teaching students how to write half decent prompts and 


3 teaching students how to evaluate what LLMs output 


then life would be so much easier and better for students, staff and even managers!

The problem any university will face trying to ban or block AI use by students is that the moles will keep getting better and will breed rapidly. I am a bit of a fan of helicopter views of things. This tweet from Sterling Crispin puts the technical side of things into some perspective. 


Imagine an AI model that's 3x larger and more powerful than GPT3 aka ChatGPT


Google already built that in April, called PaLM, on their own TPU hardware competing with NVIDIA. People think ChatGPT will replace Google but they basically invented transformers in '17 (the T in GPT)






Imagine students playing with Monster moles! The problem, of course, in an exponentially improving space is that the moles will keep getting better and will multiply faster than a corporate university could employ whackers.


Living through times that have elements that increase exponentially is an important part of making some sense of what is going on. As Dan Shipper recently wrote:


In his 1999 book The Age of Spiritual Machines, Ray Kurzweil wrote: “It is in the nature of exponential growth that events develop extremely slowly for extremely long periods of time, but as one glides through the knee of the curve, events erupt at an increasingly furious pace. And that is what we will experience as we enter the twenty-first century.”


A long time back I opted to use EdExEd as a label for my agenda. The Ex in the label is for exponentials. The other two: edges and education. Education should be plural I think.






                                                                                                    



[1] This is not a new phenomenon. Each new way of doing things has been met with attempts to ban or domesticate in formal education. A long time ago the hand held calculator was subject to banning and eventual domestication. Rarely was any thought given to the complementary knowledge and skills needed to make good use of the new way of doing things. There are lots of examples of this Dalek mindset.


December 20, 2022

Bibs and bobs #11

 

Graphically illustrating research

I’m often drawn to images that convey ideas/information clearly. Maybe it’s the novelty. For example, Erik Brynjolfsson’s diagram about AI and human tasks in B&B #9 has stayed with me and is currently framing how to muddle through the notion of embedding new text into GPT. This post has some examples of using graphical representations. And, of course, the prospect of having an app like DALL·E come up with your representations will likely come to mind.

Learning to live with GPT

This is an interesting take on using ChatGPT: chatting to yourself. I think this approach could work for reflecting on research design, analysis or framings. It’s worth a try. Michelle Huang, an artist from New York, fed GPT text and then asked it questions. Here is the account.


Bryan Alexander has put together a collection of posts: Resources for exploring ChatGPT and higher education. 


The ripples of ChatGPT continue to play out. The counter ripples of moral panics in education are already growing. I expect a lot of noise and not much sanity to continue. Until we see how any of this plays out in practices, the way things get done, it’s difficult to say much more. 


In the interim, hats off to the small army of folk churning out AI apps. One or two will make the big time and create serious waves. All we can be sure of is that most of the current predictions will get it wrong.


Change, reform and scale in formal education

The ANT daemon that sits annoyingly on my shoulder keeps reminding me that most of the ways change is thought about in education are misleading at best, i.e. the early adopter stuff of Everett Rogers (think early adopters, change agents etc.). Fine if you are into after-the-event categories, but not useful if you are interested in how change happens and what keeps any change in place.


The history of change/reform in education is a sad one. Lots of fab ideas that glow for a bit and then just fade away.


I came across this post by Sam Chaltain that offers a more interesting approach to thinking about change and importantly, scale. Chaltain’s point resonates with Hemant Taneja’s book, Unscaled [1]. We appear to be living through another unscaling in formal education as time-revered but clunky approaches to assessing students become unsettled. 


Thinking like this is anathema to those who would seek to measure the outcomes of a one size fits all education systems. The managerialoso’s [2] existence is likely under some threat.   


Music

I don’t have any formal music knowledge but the how of this app intrigued me. Riffusion is wild in so many ways. The best way to get a take on it is to play.  


                                                                                                    



[1] Taneja, H. (2018). Unscaled: how AI and a new generation of upstarts are creating the economy of the future. PublicAffairs.  


[2] Managerialoso is a term I coined to refer to, as ChatGPT suggested:


The term "managerialoso" could potentially be used to indicate that the group is overly focused on managerial practices or principles, to the point of being excessive.


The particular set of managerial principles and practices is more often than not top down, hierarchical and belongs to an era long since gone. Any resemblance to managers living or dead is, of course, purely coincidental.



December 11, 2022

Bibs & bobs #10

New Avenues

A fab post by Robin Sloan [1] on finding new ways of relating online:


Here’s my exhortation:


Let 2023 be a year of experimentation and invention!


Let it come from the edges, the margins, the provinces, the marshes!


THIS MEANS YOU!


I am thinking specifically of experimentation around “ways of relating online”. I’ve used that phrase before, and I acknowledge it might be a bit obscure, but, for me, it captures the rich overlap of publishing and networking, media and conviviality. It’s this domain that was so decisively captured in the 2010s, and it’s this domain that is newly up for grabs. …


I want to insist on an amateur internet; a garage internet; a public library internet; a kitchen table internet. Now, at last, in 2023, I want to tell the tech CEOs and venture capitalists: pipe down. Buzz off. Go fave each other’s tweets.


It’s inspiring stuff. If you can’t repurpose, rework, rebuild relating then what is the point of all of this? 


Sloan’s exhortation ought to be seriously heeded by users of the now ubiquitous LMS in formal education.


Search Scholarly Materials Preserved in the Internet Archive

From the post:

Looking for a research paper but can’t find a copy in your library’s catalog or popular search engines? Give Internet Archive Scholar a try! We might have a PDF from a “vanished” Open Access publisher in our web archive, an author’s pre-publication manuscript from their archived faculty webpage, or a digitized microfilm version of an older publication.


Let’s destroy the planet

Via Jason Kottke: 

Creative coder Neal Agarwal has launched his newest project: Asteroid Launcher. You can choose the asteroid’s composition (iron, stone, comet, etc.), size, speed, angle of incidence, and place of impact. Then you click “launch” and see the havoc you’ve wrought upon the world, with all kinds of interesting statistics. 

And yes you can drop one anywhere in Oz!


Delegating work to machines

A fair chunk of the Twitter world and blogosphere appears to be celebrating, complaining, or doom-saying as ChatGPT fever spreads. Unlike COVID-19 and its variants, this AI won’t be easily stopped. There appears to be no mRNA-like approach to slow any of this down. The digital ecosystem has spawned an interesting contagion. I found myself collecting bits and pieces about it all.


The number of apps (see this catalog) and amount of commentary is more than large. It’s like trying to catch a runaway train using roller skates. If you want to keep an eye on new apps and a selection of commentary then Ben’s Bites is handy.


Another thoughtful post from Henrik Karlsson on GPT wrangling [2]. The argument he makes about augmenting thinking resonates with how I think about this weird word-prediction beast on steroids, aka ChatGPT. Henrik links to a post by Matt Webb.


I was drawn into the speculating and tweeted (7.12.22)

Am musing about the increased production of text online due to the use of LLMs. If we cope by using LLM-based apps to summarise text, are we moving into an endlessly expanding loop akin to the paper clip maximiser?


Right now there is a lot of noisy prognostication. Yes, guilty as charged. It reminds me of Carolyn Marvin’s wonderful book, When Old Technologies Were New. It’s an analysis of humans anticipating and framing the advent of things like electricity and the telephone.


A lot of the thinking about AI, particularly in education, looks a lot like horseless carriage thinking, i.e. it’s just like what we had before but with a digital tweak [3]. So we cling to classical, industrial models of formal education, keep aiming for better models of students and teachers, and wonder why the models don’t behave all that well.


I am working on a draft for the implications of these developments for research in the sciences of the social. The usual tribes of boosters, doomsters, critics and the end of schooling can be found in the many posts. It’s likely to be a fun ride, at least for the moment. 


Personally, like George Siemens, I think this is a biggie. To me, it is akin to the availability of the first word processors or being connected to the Internet for the first time and paying for email or to the time when the first search engines made their debut. We know how much things changed when the cost of search approached zero. We are living through the moment when the cost of accessible predictability is doing the same.


I keep being drawn back to Drew McDermott’s classic paper, ‘Artificial Intelligence Meets Natural Stupidity’, so much wishful naming. Much more to say, but don’t hold your breath waiting for the draft paper.


How to speak honeybee

An account of the work of Karl von Frisch. Lessons everywhere. And if honeybee is not for you, you might find a touch of kiwi a bit of fun: Taika Waititi reading a letter about a speeding ticket is well worth the time.



                                                                                                    


[1] Ht @vgr


[2] Which is apparently called prompt engineering. I like the wrangling better.


[3] Code for domesticating the new technology.

