December 04, 2022

Bibs & bobs #9


Delegating research work to machines

Machines of various sorts have always played a role in doing research. I recall from a long time ago and in a universe too distant to remember that I used a piece of software called LAOCOON, written in Fortran, to model experimental NMR data.


Today we routinely use apps to write with, to search databases, to model with (spreadsheets and related software), and many other pieces of software.


Now, with the availability of Large Language Models (LLMs) [1], it was not surprising to see a number of AI apps designed to support research tasks, or that might be exploited to help in routine research work. I have mentioned some of these in previous Bibs & bobs.


Paper digest is one of a number of interesting apps that one might play with to see what it is capable of. David Beer has a useful commentary and account of his explorations with the app. 


My sense is that we will try out and sometimes adopt one or two of these apps if they manage to demonstrate they are better at doing some of the tasks one routinely does in research. Most of these apps make use of GPT-3 and will likely make use of its soon-to-be-released successor, GPT-4. Supposedly this may improve the quality of output from the various dependent apps. What is in prospect for GPT-n is anyone’s guess.


But back to the here and now: the app called Elicit is, at present, pretty handy in many respects. But to me, the area that is much more interesting is indicated in the figure from Erik Brynjolfsson’s paper, The Turing Trap [2]. It is the new tasks that humans can do with the help of machines. There are many “adjacent possibles” [3] to be explored.



Even at this early stage, the in-between time for AI in research assistance, the odd or weird way in which these models operate, alien intelligence as Rudy Ruggles [4] once put it, can, at times, nudge you towards seeing dots that can be joined, connections that are new to you at least.


The other spin-off from working with machines like those I have mentioned is that the detail of working with such apps can be easily documented and replicated if necessary. They ought to make up a useful component of any research reporting. This is a prospect that is easier if you publish in places that have escaped the A4 mindset.


Delegating work to machines in formal education

This past week the Australian Association for Research in Education held its annual conference in Adelaide. One of the keynotes was given by George Siemens. He spoke about AI as one of the biggest challenges facing education [5]. I fully agree that AI is a huge challenge, but it is also likely to be a significant opportunity, and not in the way typically argued for in education circles, that of improving things. Simply, things will change, and change in ways that are difficult to anticipate and plan for. I’m not heartened by the history of formal education’s engagement with the digital, which boils down to: domesticate it, and if it can’t be domesticated, ban it.


LLMs will prove difficult to shoehorn into the domesticate or ban approach. They are developed from text, a lot of text. Having text apps that are predictive, think spellchecker on steroids, generates a lot of intriguing issues for teachers, students, modes of assessment and so on. In relation to this, Richard Gibson wrote a useful post in which he reported his experiments with GPT-3. It mirrors to some degree the opinion piece that Mike Sharples published [6]. Both are well worth reading.


The recent release of ChatGPT will likely shake up the educational establishment. How these developments are framed now will play a big part in what happens in the near future. Even now, the current framings fall into familiar patterns found in the history of formal education’s engagement with the digital. Drawing on a scheme I developed a long time ago, they can be labelled boosters, doomsters, critics and anti/de-schoolers [7]. I hope that for the new kid on the block, AI, the labels don’t require too much dot joining.


I have never found those framings particularly interesting or even useful. For my part, framing these developments holistically is more helpful, i.e. the human plus machine, the centaur as some would have it, points to an augmentation, a dependence of one on the other. Framing these developments as changes to the way things are done around here [8] is then a first and important step. Seeing a holistic technology as formalised practice necessarily connects to culture. The impact that search which costs next to nothing has had on how we work, think and carry out routine educational tasks illustrates the point well. The prospect of prediction that costs next to nothing will likely be more profound than that of the cost of search getting close to zero.


My second move draws on the work of Bruno Latour [9] in which he argues that in any delegation of work to a machine there is an exchange of capacities, i.e. for a machine to do a task it requires a human to do something new, something additional to the delegation of work, something complementary. A simple example is that of the use of a calculator. If a calculator is used to calculate a sum and the user has no approximation skills, then the number provided by the calculator can’t be checked [10].
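The complementary capacity here can be sketched in code. This is a hypothetical illustration, not anything from Latour’s paper: the human’s residual approximation skill, rounding each operand to one significant figure, is what allows the calculator’s answer to be sanity-checked at all.

```python
import math

def one_sig_fig(x):
    """Round x to one significant figure -- the kind of mental
    rounding a person does without a calculator."""
    if x == 0:
        return 0.0
    exp = math.floor(math.log10(abs(x)))
    return round(x / 10**exp) * 10**exp

def looks_plausible(a, b, claimed_product, tolerance=10.0):
    """Compare a calculator's claimed value of a*b against a rough
    mental estimate; flag results that are wildly off, e.g. from a
    slipped decimal point or a mistyped digit.

    Compares magnitudes only."""
    estimate = abs(one_sig_fig(a) * one_sig_fig(b))
    return estimate / tolerance <= abs(claimed_product) <= estimate * tolerance
```

So looks_plausible(3.9, 52.1, 203.19) passes (the mental estimate is 4 × 50 = 200), while looks_plausible(3.9, 52.1, 2031.9), a slipped decimal point, fails. Delete the approximation skill and the check disappears with it.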


For something like ChatGPT two things come to mind. There is a skill to be developed in terms of prompting the app. Secondly, once the app has generated its output there will always be a need for evaluation, something these models are incapable of as they have no context.


AI appropriately framed, can be positioned as an augmentation to what humans do. We all need augmentation. We depend on machines to do so much of our work already, including all sorts of chemical and mechanical augmentations [11]. Some folk are more in need than others, those with disabilities of various kinds. But we are all in need of them, me perhaps more than most. The potential here is enormous. There is much more to say. For now this is where I’ll leave it.


Blogging impact on policy

An interesting blog post from LSE on the citation of LSE blog posts in policy documents. It is not a big impact but appears to be one that is steadily growing. The grey literature rises. 


More on path dependence

Trung Phan has a detailed account of some of the classic examples of path dependence in technology [12]. His newsletter is well worth a follow if you have interests in AI as is Ben Tossell’s.  


A directory of AI-based apps

If you find all of these developments of AI apps somewhat bewildering, it’s useful to recall that the driver of these apps is venture capital, which seems to have bottomless pockets, for now. It reminds me of the time when early microcomputers were the state of the art for desktop computing and there was a huge investment, for that era, in software that ran on these devices. Education, then as now, was always identified as the market to crack.


Futurepedia.io documents all new, popular and verified apps with details about costs and what they supposedly can do. 


Big numbers

When it became clear that data on the Internet was growing rapidly, helpful explanations in terms of the number of books stacked between Earth and the Moon and similar illustrations became common. The latest [13]:


By the 2030s, the world will generate around a yottabyte of data per year — that’s 10^24 bytes, or the amount that would fit on DVDs stacked all the way to Mars.

This has prompted the need for new names for these mind-numbingly large or small numbers. I can recall vaguely when I first came across petabytes. It was much later than when the name was chosen to indicate 10^15.




Further, from the article:

With the annual volume of data generated globally having already hit zettabytes, informal suggestions for 10^27 — including ‘hella’ and ‘bronto’ — were starting to take hold, he says. Google’s unit converter, for example, already tells users that 1,000 yottabytes is 1 hellabyte, and at least one UK government website quotes brontobyte as the correct term.


Just a few prefixes with which to dazzle your colleagues. The paper is worth a skim.
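The prefix ladder in play here is just powers of ten, so a toy converter is enough to answer the article’s title question. A sketch, assuming the standard large SI prefixes plus ronna (10^27) and quetta (10^30), the pair adopted in November 2022 that the article reports on:

```python
# Exponents (powers of ten) for the large SI prefixes, including the
# 2022 additions ronna and quetta discussed in the article.
SI_PREFIX_EXPONENT = {
    "kilo": 3, "mega": 6, "giga": 9, "tera": 12, "peta": 15,
    "exa": 18, "zetta": 21, "yotta": 24, "ronna": 27, "quetta": 30,
}

def convert(value, from_prefix, to_prefix):
    """Re-express a value given in one prefixed unit in terms of another."""
    shift = SI_PREFIX_EXPONENT[from_prefix] - SI_PREFIX_EXPONENT[to_prefix]
    return value * 10 ** shift
```

So convert(1, "quetta", "yotta") gives a million: a quettabyte is 10^6 yottabytes. And the informal hellabyte (1,000 yottabytes) sits at 10^27, exactly where ronna now lives officially.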


                                                                                             


[1] Venkatesh Rao suggests LLMs should be called memory creatures. His piece on AI as superhistory, mentioned in Bibs & bobs #2, is important.


[2] Brynjolfsson, E. (2022). The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence. Daedalus, 151(2), 272-287. https://doi.org/10.1162/daed_a_01915  


[3] A term coined by Stuart Kauffman and popularised by Steven Johnson in his 2010 book, Where Good Ideas Come From.


[4] Ruggles, R. L. (1997). Knowledge management tools. Butterworth-Heinemann, p. 205 


[5] There is a brief account of his presentation here. 


[6] Sharples, M. (2022). Automated Essay Writing: An AIED Opinion. International Journal of Artificial Intelligence in Education, 32(4), 1119-1126. https://doi.org/10.1007/s40593-022-00300-7 


[7] This crude categorisation was something I developed the night before a keynote presentation I gave to a Principals conference in Darwin in 1996. I wrote some scripts for each of the stereotypes and four Principals kindly volunteered to play the parts. They all did brilliant jobs hamming it up! I still have the scripts if interested.


Subsequently, a good colleague and I made use of the categorisation more formally: Bigum, C., & Kenway, J. (1998). New Information Technologies and the Ambiguous Future of Schooling: some possible scenarios. In A. Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International Handbook of Educational Change (pp. 375-395). Kluwer Academic Publishers. 


[8] A framing Ursula Franklin made, drawing on Kenneth Boulding, in her excellent 2004 book, The Real World of Technology.


[9] Latour, B. (1992). Where are the missing masses? Sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociological Change (pp. 225-258). MIT Press. http://www.bruno-latour.fr/sites/default/files/50-MISSING-MASSES-GB.pdf  


[10] You could add additional complementary skills to this task, like an understanding of significant figures and perhaps, in precision calculations how the calculator does its arithmetic.


While the complementary skills in using AI apps may appear to be simply checking the generated output, I’d argue that we need to proceed on a case-by-case basis, and identifying what complementary skills and knowledge are necessary is not a trivial task.


[11] I’ve lost count of how many I have. Good thing I have a machine to keep track of some of them.


[12] I pointed to the notion of intellectual path dependence in Bibs & bobs #8.


[13] Gibney, E. (2022). How many yottabytes in a quettabyte? Extreme numbers get new names. Nature. https://doi.org/10.1038/d41586-022-03747-9  

November 27, 2022

Bibs & bobs #8


Literature and research

A post derived from a paper the author wrote about the role of literature in AI research brought to mind an idea I have been clumsily wrangling with, what I have called intellectual path dependence. Path dependence is a well-established idea that articulates with my ongoing interest in material semiotics. One of the more famous and colourful instances of path dependence is retold by Kevin Kelly [1]:


There’s an old story about the long reach of early choices that is basically true: Ordinary Roman carts were constructed to match the width of imperial Roman war chariots because it was easier to follow the ruts in the road left by the war chariots. The chariots were sized to accommodate the width of two large warhorses, which translates into our English measurement of 4' 8.5". Roads throughout the vast Roman Empire were built to this specification. When the legions of Rome marched into Britain, they constructed long-distance imperial roads 4' 8.5" wide. When the English started building tramways, they used the same width so the same horse carriages could be used. And when they started building railways with horseless carriages, naturally the rails were 4' 8.5" wide. Imported laborers from the British Isles built the first railways in the Americas using the same tools and jigs they were used to. Fast-forward to the U.S. space shuttle, which is built in parts around the country and assembled in Florida. Because the two large solid-fuel rocket engines on the side of the launch shuttle were sent by railroad from Utah, and that line traversed a tunnel not much wider than the standard track, the rockets themselves could not be much wider in diameter than 4' 8.5". As one wag concluded: “So, a major design feature of what is arguably the world’s most advanced transportation system was determined over two thousand years ago by the width of two horses’ arse.” More or less, this is how technology constrains itself over time.


Academic publishing is, of course, an exercise in demonstrating path dependence, i.e. that research builds on or adds to the research work of others. The standard practice is to make a statement and support it by citing one or more sources that you imply support the statement. I sometimes find it amusing to see how others have bent the arguments of my research to best suit their purpose. This is a phenomenon more common in the sciences of the social. There is much to say about the citation game but for another time.


What I puzzle about are the dots that have been well connected and reinforced over time in brain-1 [2], think the width of Roman war chariots. Those connections are further reinforced in brain-2 [3] which has links that don’t fade over time. 


I guess it is the last point in the Kelly quote that bothers me, how my thinking gets constrained over time as I join more dots and particularly those that are already a dense set of connections.


I suppose this kind of assembling of stuff around core ideas is what the species does, pattern matching, which relates to what Leonard Mlodinow draws attention to in his book Subliminal [4]:


As it turns out, the brain is a decent scientist but an absolutely outstanding lawyer. The result is that in the struggle to fashion a coherent, convincing view of ourselves and the rest of the world, it is the impassioned advocate that usually wins over the truth seeker. We’ve seen in earlier chapters how the unconscious mind is a master at using limited data to construct a version of the world that appears realistic and complete to its partner, the conscious mind.


Protecting one’s butt is a good thing I suppose but perhaps not when it goes to the lengths that Mlodinow illustrates with this example:


For example, in the 1950s and ’60s a debate raged about whether the universe had had a beginning or whether it had always been in existence. One camp supported the big bang theory, which said that the cosmos began in a manner indicated by the theory’s name. The other camp believed in the steady state theory, the idea that the universe had always been around, in more or less the same state that it is in today. In the end, to any disinterested party, the evidence landed squarely in support of the big bang theory, especially after 1964, when the afterglow of the big bang was serendipitously detected by a pair of satellite communications researchers at Bell Labs. That discovery made the front page of the New York Times, which proclaimed that the big bang had won out. What did the steady state researchers proclaim? After three years, one proponent finally accepted it with the words “The universe is in fact a botched job, but I suppose we shall have to make the best of it.” Thirty years later, another leading steady state theorist, by then old and silver-haired, still believed in a modified version of his theory.


I guess this is why science can sometimes appear to move ahead one burial at a time.


And, while I’m joining dots, the two hemisphere hypothesis popularised by Iain McGilchrist [5] adds beautifully to my puzzling:


The brain is, importantly, divided into two hemispheres: you could say, to sum up a vastly complex matter in a phrase, that the brain’s left hemisphere is designed to help us ap-prehend – and thus manipulate – the world; the right hemisphere to com-prehend it – see it all for what it is. The problem is that the very brain mechanisms which succeed in simplifying the world so as to subject it to our control militate against a true understanding of it. Meanwhile, compounding the problem, we take the success we have in manipulating it as proof that we understand it.


If your head does not hurt by now, the drugs are likely working.


Research in education: delegating work to machines

I gave a short, intentionally provocative presentation at an internal education research conference recently which I plan to turn into a short working paper [6]. I was trying to point to the insular nature of much of educational research. I made the analogy with a surfer that only goes to one beach to surf, compared to surfers who go to a variety of beaches. The beaches were crude analogies for intellectual fields in education, e.g. critical sociology, developmental psychology, poststructural feminism etc.  


Being good at surfing at a particular beach has its advantages. It helps progress an academic career but if all you do is work the one beach you miss all of the different opportunities that appear on different beaches, i.e. you more or less insulate yourself from the hummingbird effect via the always interesting and prolific Steven Johnson [7]. 


In the brief conversation that followed my pitch, another Steven, Hodge, offered another view of education’s silo positioning. It was that, very crudely paraphrasing, other fields did not value what education had to offer. Much more to say about that.


Delegating research work to a machine

I’ll try and put together a useful list of apps that might be used to support research tasks. In the interim, Azeem Azhar has a brief commentary on Metaphor and Elicit both of which I have been playing with for some time.


What does AI think of humans?

A fun post by Alberto Romero about a wee experiment he did, having two AIs have a conversation about humans. 


There is quite a bit of fun to be had with large language models as Janelle Shane has shown with many examples.


Along similar lines, a fun bit of GPT-3 wrangling:  A conversation in which I teach GPT-3 to read a book by Henrik Karlsson.


                                                                                                    


[1]  Kelly, K. (2010). What Technology Wants. Penguin, pp 180-181.


[2] The meat computer that sits on my shoulders.


[3] I keep my zettelkasten in an app called DEVONthink (OSX only).


[4] Mlodinow, L. (2012). Subliminal : the revolution of the new unconscious and what it teaches us about ourselves (1st ed.). Allen Lane.  


[5] McGilchrist, I. (2021). The Matter With Things: Our Brains, Our Delusions and the Unmaking of the World. Perspectiva Press.  


[6] Don’t hold your breath.


[7] Johnson, S. (2014). How we got to now : six innovations that made the modern world. Riverhead Books.  







November 14, 2022

Bibs & bobs #7

Change

I’ve been interested in change for a long time, particularly change in education. I have found that the way change is framed in education is often unhelpful. Typically it is some version of Everett Rogers’ Diffusion of Innovations. The use of terms like change agent, early adopter etc are always telltale signs that you are in diffusion land [1].


Education has a long history of reforms of various sorts that have all ended up in the dustbin which, I’d suggest, is due to relying on unhelpful ideas about change. There is a lot of work, other than “rolling it out” or mandating it, that needs to occur. One aspect of a key part of that work was recently written about by Steven Johnson.


Steven Johnson is one of the more interesting thinkers who shares his work online. He recently posted a piece on popularisers. Johnson wrote about the significant role popularisers play in advances in medical practices, i.e. a new approach is developed that is shown to be a good solution to a medical problem. It does not automatically mean that word simply spreads from these initial experiments. It needs one or more folk to make the practice well known. He writes:


The key point here is that when we talk about the history of innovation, we often over-index on the inventors and underplay the critical role of popularizers, the people who are unusually gifted at making the case for adopting a new innovation, or who have a platform that gives them an unusual amount of influence


The notion reminded me of the three kinds of people that can produce large effects described in Malcolm Gladwell’s book The Tipping Point. He called them connectors, mavens and salesmen.


In the academic world it might be assumed that getting published, or even posting in a blog, is sufficient to sell an idea. If the idea is any good, Gladwell would suggest you need folk to make connections, dot joiners, who together with salesmen can spread the idea or new way of doing things.


Whatever label is used, it is an important idea for all the would-be/wannabe reformers or changers of things in education. The wee actor-network daemon that sits on my shoulder reminds me of the quote from Grint and Woolgar’s, The Machine at Work: 


If Foucault is right that truth and power are intimately intertwined, those seeking to change the world might try strategies to recruit powerful allies rather than assuming that the quest for the truth will, in and of itself, lead to dramatic changes in levels and forms of social inequality. p. 168


And then as the daemon nudges me, you need to police the new arrangement, to keep all the things that have gone into a new way of doing something, in place. All too often, education reforms resemble a hit and run approach. Dump the innovation in a site, hold participants hands for a short time, get it working and then leave. 


The discovery ecosystem

Michael Nielsen and Kanjun Qiu have written an important piece titled, A Vision of Metascience: An Engine of Improvement for the Social Processes of Science. They ask the intriguing question:


how well does the discovery ecosystem learn, and can we improve the way it learns?


They begin with the fun alien approach which, simply put, asks: if you had to invent a system for discovery from scratch, would it look like what we have today? The same question can be asked of most of the creaking, ancient systems that operate today (think of your favourite research funding agency, universities, schools etc), all glossed with digital glitter but steadfastly holding the line against any significant attempts to change them. Robert Pirsig [2] captures it well in this long quote:


To speak of certain government and establishment institutions as “the system” is to speak correctly, since these organizations are founded upon the same structural conceptual relationships as a motorcycle. They are sustained by structural relationships even when they have lost all other meaning and purpose. People arrive at a factory and perform a totally meaningless task from eight to five without question because the structure demands that it be that way. There's no villain, no “mean guy” who wants them to live meaningless lives, it's just that the structure, the system demands it and no one is willing to take on the formidable task of changing the structure just because it is meaningless.

But to tear down a factory or to revolt against a government or to avoid repair of a motorcycle because it is a system is to attack effects rather than causes; and as long as the attack is upon effects only, no change is possible. The true system, the real system, is our present construction of systematic thought itself, rationality itself, and if a factory is torn down but the rationality which produced it is left standing, then that rationality will simply produce another factory. If a revolution destroys a systematic government, but the systematic patterns of thought that produced that government are left intact, then those patterns will repeat themselves in the succeeding government. There's so much talk about the system. And so little understanding.


So, it’s not just a matter of tearing down silly structures and pointless measures; you’d have to do a memory wipe of everyone to be sure the structures and measures did not reappear in a different guise.


The difficulty of all the silliness is well captured in a conversation Clay Shirky had with Daniel Pink:

Pink: You say something else about organizations that I found especially compelling—about their instinct for self-perpetuation.

Shirky: Well, organizations that are founded to solve problems end up committed to the preservation of the problems. So Trentway-Wagar, an Ontario-based bus company, sues PickupPal, an online ride-sharing service, because T-W isn’t committed to solving transportation problems. It’s committed to solving transportation problems with buses. In the media world, Britannica is now committed to making reference works that can’t easily be referred to, and the music industry is now distributing music that can’t easily be shared because new ways of distributing music undermine the old business model. [3]


Change, as I have been trying to suggest in this wee blog post, ain’t a simple matter.


The local

There has been a good deal of commentary about the effects of embracing globalisation as the solution to the world’s economic problems. As supply chains have been seriously disrupted, something we are likely to be living with on a semi-permanent basis, there has been push back, and thoughts turn to the local and its geography, economics and politics, among other things.


Geography matters, as Tomas Pueyo keeps wonderfully demonstrating over and over. 


I prompted Metaphor (mentioned below, but simply put it is “you want links, I can find them for you”) with:

There is a growing unease about globalisation


It produced over one hundred links, many of which were particularly useful: links to books, papers, blog posts etc. Sure, there is work to do to sift them, a task also likely to fall to AI down the track.


This snippet is a place holder for me. It may be that we are living through a correction, from globalising anything that moves to a time when the local is noticed for its importance.


Mind blowing

I have been watching The Peripheral, streaming weekly on Prime. It’s based on William Gibson’s book of the same title. Crudely, it is about humans “inhabiting” nonhuman avatars across time. Maybe it is another instance of science beginning to ape science fiction, as this paper points precisely in that direction, without the time travel and with no mention of Gibson.


Delegating work to a machine

Another open access bit of AI. Metaphor:

Metaphor is a search engine that’s trained for link prediction. This means that given some context, it tries to predict the link that would most likely follow that text. You interact with Metaphor search by writing prompts: these are snippets of text that could precede a link.


You need a Discord account (easily done) to access the app. I have only tried it on a few ideas and it was more than useful. I tried it on the Mind Blowing paragraph above. A heck of a set of links was generated.


This app is mentioned in an excellent post by Rodolfo Rosini: The next Google search engine will be Generative AI. 


I expect more and more of these apps, which likely have already found their way into current standard research practices for folk who are not asleep at the wheel. It’s only a matter of time before grant-writing apps begin to appear, which will of course be met by grant-assessing apps. The beat goes on as Sapiens continues to shrink.

                                                                                                    



[1] If you are interested, I wrote about this a long time ago: Bigum, C. (2000). Actor-network theory and online university teaching: translation versus diffusion. In B. A. Knight & L. Rowan (Eds.), Researching Futures Oriented Pedagogies (pp. 7-22). PostPressed.  Download


[2] Pirsig, R. M. (1974). Zen and the art of motorcycle maintenance: an inquiry into values. Morrow.  


[3] Shirky, C., & Pink, D. (2010). Cognitive Surplus: The Great Spare-Time Revolution. Wired (June), np. https://www.wired.com/2010/05/ff-pink-shirky/ 



November 08, 2022

Bibs & bobs #6

Delegating work to nonhumans

A good deal of academic and student work involves coming to terms with publications, which can prove tricky and time-consuming if you are unfamiliar with the genre and/or content. This online app does a fair job of “explaining” chunks of text from any paper you submit to it.


And for the music oriented folk, an app for forming musical ideas: Note. 


This post by Stripe Partners opens the delegation issue further. Specifically, it explores the shift of humans as craftspeople to expert technicians and then, with the advent of AI, to users. Having machines do all the heavy lifting involved in a task that once required significant technical skill results in non-expert users who “self-serve”.





It’s an important framing of what we are going through re all the AI apps that have appeared. What keeps nagging at me, though, is the observation that E. O. Wilson made, as reported in an opinion piece in the NY Times by Tristan Harris:


A decade ago, Edward O. Wilson, the Harvard professor and renowned father of sociobiology, was asked whether humans would be able to solve the crises that would confront them over the next 100 years.

“Yes, if we are honest and smart,” he replied. “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.”



Bruno Latour

If you had not already joined the dots, a big influence, probably the biggest in my puzzling about the delegation of work to nonhumans is Bruno Latour who sadly passed away recently. There have been a number of tributes about Latour’s work and contribution to intellectual life but I think Stephen Muecke’s post on Aeon recently has been one of the better ones I have come across. 


Educating at scale, tutoring and the digital 

I’ve been participating in a local biweekly conversation based at Griffith, the curriculum collective, convened by the always thoughtful Steven Hodge. The group has been working through what might be thought of as a recent collection of publications in and around theorising curriculum.


The problem of how to pass on what might be judged to be good, useful, valuable, important or just interesting to the next generation is something the species has muddled through since it emerged on the planet. That we are able to do so has meant that Homo sapiens, and not cephalopods, run things.


Over time, circumstances have determined that humans have used, and at times experimented with, a variety of formal and informal modes of educating the young, while at the same time mulling the bigger questions of why, what and how.


We now live in an era dominated, at least in terms of student numbers, by what is sometimes called mass schooling. Mass schooling requires a crude application of a one size fits all logic. It is in play in many parts of formal education, e.g. age-based schooling, special needs schooling, year level teaching of a discipline in universities etc. 


Two posts helped to open up the curriculum question for me. Erik Hoel writing about how geniuses used to be raised and Henrik Karlsson musing about GPT-3 augmenting human intelligence.  There is much to be said here. The connection between a history of curriculum and the emergence of mass communication comes into view. The emergence of AI systems that support a notion of curriculum that begins to resemble some aristocratic tutoring is, to say the least, intriguing. 


I’m not holding my breath in this respect given the massive investment in systems of mass schooling, the conservative nature of formal education systems and the sorry history of curriculum reform. Nevertheless, it’s a possibility that is worth keeping an eye on.

October 23, 2022

Bibs & bobs #5



Delegating work to nonhumans

This b&b is short. I’ve been spending way too much time mulling the big framings of automation versus augmentation. I have a lot of sympathy for the argument put forward by Erik Brynjolfsson in this paper: Brynjolfsson, E. (2022). The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence. Daedalus, 151(2), 272-287. https://doi.org/10.1162/daed_a_01915. But what needs a lot more attention is what actually happens when augmentation occurs: the trade-offs, the capacity swaps between human and nonhuman. There is a lot of noise in AI around the neglect of this issue, e.g. the alignment problem and the other panics associated with giving large language models tasks. Much more to say and think about.


Abstractions

A wild, playful and insightful ride courtesy of Venkatesh Rao, @vgr. If you emerge from this unmoved then good luck with keeping those trusty mental routines spinning the way they always have.


Expectations

There are a lot of fun things the brain does when we tinker with things like expectations, placebos and the like. 


Reproducibility

Via Steve Stewart-Williams, 

73 teams tested the same hypotheses with the same data. Some found negative results, some positive, some nada. No effect of expertise or confirmation bias. "Idiosyncratic researcher variability is a threat to the reliability of scientific findings." 


link.


Humour

If you are in need of inspiration for a book dedication, a list.  And if you were a fan of Whose Line is it Anyway?, you might enjoy ToonProv. 



October 10, 2022

Bibs & bobs #4


Maps and their effects

A wonderful video by Johnny Harris about “the island of California”, a period in human history that I found useful when thinking about scenario planning. The video is also a neatly framed instance of fake news.


AI and education

There is no shortage of commentary, hype, spin, doom saying and wishful naming [McDermott, D. (1976). Artificial Intelligence Meets Natural Stupidity. SIGART Newsletter(April), 4-9.] to be found in relation to AI and education. This excellent post by Michael Feldstein gives a useful overview of the current state of what I think of as LLM wrangling. As I have noted, much of my thinking is focused on the problem of delegating work to machines. It seems to be very much black box territory. You poke the LLM with text to see how it responds. 


This is exactly the logic that ought to inform thinking about how to deal with LLMs as they currently exist and their deployment in formal education settings, instead of having educational panic #971: OMG, we can’t use a plagiarism checker to see if this was written by a student or a machine. I recall the time when software that generated crossword puzzles appeared. Many teachers were overjoyed: an app (called software way back when) that created busy work for students. Yay! There were, however, a few teachers who embraced the app differently. They had students use the app to produce crosswords. You can guess which students learned more about the topic a crossword was built around.


Educational panics about the digital go back at least as far as the advent of electronic calculators a very long time ago. The opportunity to think through their use and ask more sensible questions, e.g. what complementary skills do students need in order to use these devices, was largely missed. Approximation skills anyone? 


Formal education requires a selective amnesia. It is illustrated by an almost manic capacity to preserve practices that have long outlived their usefulness. The origins of the practices are long forgotten. They were likely developed to solve a particular problem at the time, a problem that no longer exists. The practice lives on, ghostly, inexorably. Age-based schooling is an obvious example. In time, so too will the current madness around measurement.


So, as we begin to see educational panic after educational panic over AI and formal education, it is reassuring to know that there are sane folk out there, e.g. the book by Mike Sharples & Rafael Pérez y Pérez, Story Machines, which illustrates an alternative approach to thinking about writing and LLMs. The Story Machines website is here.


I’m still of the view that we seem too attached to a single and limiting view of AI, which is why I like the argument in the post Venkatesh Rao wrote about AI as artificial time or super history. It’s a different and, IMHO, better way to think about AI as it is currently being developed. 


Neurotypification


a normal person is anyone who has not been sufficiently investigated - Edmond A. Murphy


Elegant post on measuring mental traits. A demolition job on the notion of normality. 


via @RosemarieNorth

Neurotypical syndrome is a neurobiological disorder characterized by preoccupation with social concerns, delusions of superiority, and obsession with conformity. There is no known cure   —Laura Tisoncik


So many spectra, so little time to find my spot on each.


Searching

This list of search options was compiled by the good crew at Recomendo.



Links

Stephen's Web ~ Education at a Glance 2022 ~ Stephen Downes  links to the OECD annual report.


HOME | OpenAcademics  well worth a prowl around.


ditto for Academic Chatter | Twitter, Instagram | Linktree


and Online Library and Publication Platform | OAPEN for open access books
