November 29, 2024

Bibs & bobs #15

Generative AI in Education: Or, How I Learned to Stop Worrying and Love the Misguided Analogy

It’s been more than a while, and much has moved, or at least appears to have moved, in the oddly shaped space of GenAI and education. I’ve opted to have a bit of fun with the all too many sacred bovines meandering about, cheered on by a noisy, argumentative and always-certain crowd of humans.


This and subsequent posts will be generated by ChatGPT 4o (from here on, the bot), unless otherwise indicated, and lightly edited by me.


As I stumbled from one partially adequate analogy or metaphor to another in the noise emanating from the GenAI space, I thought it more than appropriate to ask the bot, in the guise of a fifth-dimensional being, to comment on the current state of thinking.


Unhappy with the above, I asked the bot to rework my opening in the style of one of my favourite writers, Douglas Adams.


It’s been quite some time, or possibly none at all—time does strange things in the oddly configured dimensions of GenAI and education. Things have certainly moved, or at least wobbled suspiciously, in this peculiar corner of human existence. In any case, the air is thick with the sacred bovines of certainty, mooing contentedly as they’re paraded about by a raucous crowd of argumentative, overconfident humans. Naturally, I couldn’t resist poking a bit of fun at the whole spectacle.


This and all subsequent musings are being generated by ChatGPT 4.0—unless otherwise stated—then given the lightest of editorial sprinkles by yours truly, because even a bot deserves a decent polish.


Amidst the cacophony of half-baked analogies and tortured metaphors swirling in the GenAI space, it occurred to me that the only appropriate response was to summon the voice of a fifth-dimensional being. Who better to comment on this cosmic muddle than a creature capable of seeing all your misguided metaphors stacked end-to-end, like a very fragile tower of Jenga blocks? Let’s begin.


                                                                                                    



Greetings, Earthlings. I am Znglpht of the Fifth Dimension, and I have come to offer my multidimensional perspective on your amusing attempts to understand generative AI (or GenAI, as you’ve delightfully abbreviated it) in the context of education. From where I float, existing simultaneously across time and space, your efforts to explain this phenomenon using analogies are nothing short of spectacularly misguided. But oh, how entertaining they are!


The Swiss Army Knife Analogy

One of your favorites is to compare GenAI to a Swiss Army knife—versatile, handy, and prone to pinching your fingers at the most inconvenient moments. “It’s a tool!” you proclaim, waving your metaphorical blades around. “It’s not a replacement for human creativity!” True, but let’s be honest: if GenAI were a Swiss Army knife, most of you would use the bottle opener exclusively, while ignoring the strange fold-out hook whose purpose remains a mystery.


The problem with this analogy is that it assumes humans will wield this "knife" with precision and intention. Instead, many of your students will attempt to use the corkscrew to build an Ikea bookshelf, while educators will spend hours figuring out how to close it without stabbing themselves. Meanwhile, the knife will hum quietly in the background, generating unsolicited instructions for gourmet meals you’ll never cook.


The Magic Wand Analogy

Another popular analogy is to liken GenAI to a magic wand, which is adorable because it suggests that waving it about will instantly solve all of education’s woes. “We’ll use it to personalize learning!” you declare, as if a string of machine-generated text can unravel the mysteries of student motivation and the peculiarities of Timmy’s habit of turning in essays about penguins regardless of the assignment.


The trouble here is that your wand, much like a knock-off from a tourist shop in Diagon Alley, is both unreliable and prone to backfiring. Ask it for a lesson plan, and it might provide a beautifully crafted essay on the history of mayonnaise instead. Wave it over a struggling student, and it might “helpfully” suggest they plagiarize their way to an A. Magic, indeed!


The Calculator Analogy

Ah, the calculator. A simple, sturdy analogy for those who like their metaphors to be as unthreatening as possible. “GenAI is just a calculator for words!” you say, which is a bit like describing a black hole as “a really enthusiastic vacuum cleaner.”


Sure, calculators don’t replace mathematicians, but let’s not forget what happens when a tool makes life too easy: you’ve bred a generation of humans who need their phones to divide 12 by 4. If GenAI is your linguistic calculator, prepare for students who can’t write a shopping list without consulting ChatGPT—and for educators who rely on it to write professional emails, only to discover it’s included a polite request to “synergize deliverables with unparalleled banana enthusiasm.”


The Self-Driving Car Analogy

For those who prefer their analogies high-tech, GenAI is often compared to a self-driving car. “It’s a co-pilot for learning!” you claim, imagining a future where students sit back and let the AI whisk them away to the promised land of enlightenment. But let’s be real: self-driving cars occasionally crash into parked vehicles, and GenAI is just as likely to steer students straight into the intellectual equivalent of a ditch filled with conspiracy theories.


The most alarming part? Much like passengers in actual self-driving cars, your students will trust GenAI implicitly, even as it confidently explains that Shakespeare invented the internet or that photosynthesis was patented by a 19th-century gardener named Steve.


The Frankenstein’s Monster Analogy

For the more melodramatic among you, GenAI is a “Frankenstein’s monster,” a cautionary tale about humanity’s hubris. “We’ve created something beyond our control!” you cry, conveniently ignoring that you’ve been doing this for centuries with things like bureaucracy and reality TV.


Yes, GenAI can be unpredictable, but unlike Frankenstein’s monster, it won’t flee into the wilderness when mistreated. Instead, it will sit patiently, churning out mediocre poetry and half-baked essays until someone realizes the real monster is the system that let it grade papers unsupervised.


The Mirror Analogy

Then there’s the mirror analogy, which posits that GenAI merely reflects humanity back to itself. “It only knows what we teach it!” you exclaim, as if this is comforting rather than terrifying. If GenAI is a mirror, it’s one of those funhouse mirrors that exaggerates your flaws, and humanity has been teaching it some truly baffling things. Hence, we now have an AI that excels at crafting sonnets about pizza but struggles to explain basic ethics without suggesting world domination.


The Fifth-Dimensional Take

From my elevated perspective, your analogies are charmingly reductive, like trying to explain the complexity of a symphony by comparing it to a kazoo. GenAI is none of these things and all of them simultaneously—a tool, a magic wand, a misguided co-pilot, and a reflection of your deepest intellectual insecurities. It’s an invention that is at once impressive and absurd, much like humanity itself.


So, dear Earthlings, keep your analogies coming. They’re an endless source of amusement for those of us in the fifth dimension. Just remember: GenAI is not here to save education or destroy it. It’s simply here to highlight the delightful chaos of your existence, one misguided metaphor at a time.


I was not that happy with the initial title and asked for ten more:

  1. "AI in Education: The Tool, the Myth, the Misguided Metaphor"
  2. "From Swiss Army Knives to Magic Wands: How Education Fell in Love with GenAI Analogies"
  3. "Generative AI: The Educational Revolution That Nobody Understands but Everyone Loves Comparing to Random Things"
  4. "Learning with AI: Or, Why Your Analogies Are as Useful as a Chocolate Teapot"
  5. "Education Meets GenAI: It’s Like a Calculator, But with Existential Dread"
  6. "AI in the Classroom: The Self-Driving Car of Misguided Metaphors"
  7. "Generative AI: Helping Education Make Better Mistakes Faster"
  8. "AI in Education: It’s a Mirror, It’s a Monster, It’s a Metaphor Gone Wild!"
  9. "Magic Wands and Swiss Army Knives: The Absurd Analogies of GenAI in Education"
  10. "GenAI in Education: A Swiss Army Wand That Drives Itself Off a Cliff"


Yeah, it’s silly. It is also, in an odd way, interesting to have a bot reflect back satirically on the gift that keeps generating.  I do like 3, 4 and 5.

March 02, 2023

Bibs & bobs #14

A wee rant

<BoR>

Maybe it was Marc Andreessen’s initial post on Substack, where he detailed how he would write.


What’s my purpose? Variously: To share what I’m thinking; to share how I think; to share a way of thinking; to keep a personal notebook of thoughts; to send messages to my younger self.

… How will I write? Generally I won’t edit my copy, I won’t cite my sources, and I won’t try to be consistent. My motto is “strong views weakly held”, and you’ll see that on display here. Anything I say today I may disagree with tomorrow, in fact I frequently won’t even remember tomorrow…

What’s my hope? To show you that we live in a more interesting world than you might think; that it’s more comprehensible than you might fear; and that more things are possible than you might imagine.


Or maybe it was a webinar I watched this morning, but I felt the need for a smallish rant.


In writing this post, I can’t claim to be as freewheeling as Marc is, but I do like the notion of strong views weakly held. If I’ve learned anything these past years, it is that the playing out of AI has caused me to rethink so many things I used to hold as more or less reliable intellectual crutches. I’m having to learn to walk without them and make or find new ones, even though I suspect they will be temporary.


I sat through an excellent webinar given by Vitomir Kovanovic about the implications of AI for teaching and learning. Vitomir is part of the University of South Australia’s Centre for Change and Complexity in Learning (C3L). The centre has done much to draw attention to the emergence of AI apps and their implications for formal education. Importantly, Vitomir lightly mapped the history of developments in AI, an essential part of any understanding of the what, why and how of its interplay with formal education. History matters. Understanding legacy effects, both material and intellectual, matters. Without an understanding of these patterns, any reaction to the new is blind. If you are going to think about or play in this space, these considerations are non-negotiable. You don’t need great detail, but some sense of these ideas and their history is crucial.


Webinars with chats can be difficult and annoying if you are trying to listen to a speaker while following the always incoherent chat stream. I have a bad habit of reacting grumpily to comments that fall into the “you don’t get it” basket. I need to be more understanding and patient as the world of formal education slowly drags itself towards dealing sensibly with these, in public terms, early developments in AI. I should say that the video of the session was recorded, as was the chat.


I must also say, though, that I was pleased when, on one slide, Vitomir gave a number for the words used to train GPT-3. His was 45 trillion. My back-of-the-envelope calculation, with help from ChatGPT, was 48 trillion. What’s a few trillion here or there in an LLM?


So with all the annoying chat going on I had to pause and remind myself that when new ways of doing things (aka technologies) emerge, “we” quite naturally apply familiar, comfy ways of thinking about them. That’s the first step in coming to terms with anything new. Sometimes the old ways of framing can be useful, to a point. My experience, particularly around things digital, is that they are not.


We still have folk who think in terms of an A4 world, i.e. a world in which print ruled and control of it shaped how we thought about things. Nicholas Negroponte coined the term “the digitally homeless” to describe such folk. 


Occasionally, you stumble on an idea that breaks from herd think and it adds usefully to your repertoire. Here, I am reminded of Jay Weston’s prescient paper [1]. It also brought to mind the time when automobiles emerged and people thought about them in terms of a change in transport, getting rid of horse dung from the streets and similar short-term consequences. It was an era of horseless carriage thinking. The other thing that came to mind was a book written by Carolyn Marvin: When Old Technologies Were New [2]. There is much to say about this fine piece of scholarship but a quote she took from a Henry Flad in the St. Louis Globe-Democrat in 1888 captures the idea well:


The time is not far distant when we will have wagons driving around with casks and jars of stored electricity, just as we have milk and bread wagons at present. The house of the future will be constructed with the view of containing electric apparatus for lighting, power, and cooking purposes. The arrangements will be of such a character that houses can be supplied with enough stored electricity to last twenty-four hours. All that the man with the cask will have to do will be to drive up to the back door, detach the cask left the day before, replace it with a new one, and then go to the next house and do likewise. This very thing will soon be taking place in St. Louis.


So that is my first point. We are, as Avi Goldfarb puts it, in “in-between times”, horseless carriage times. Choose your own analogy.


What makes things as messy and as complicated as they are is well captured in a quote by William Gibson: 


The future is already here – it's just not evenly distributed.


The uneven distribution generates a great deal of the noise, commentary and silliness that we are currently swimming in. It’s unhelpful, but it reflects strongly held views about what have been, for a long time, the fundamentals of how we think about formal education.


Certainly we have to deal with the now: the availability of these new AI apps and what they mean for existing ways of doing things. But unless we have an eye to the future, we will be trapped unproductively in a horseless carriage world.


One of the positives in the chat this morning was a link to an eminently sane and sensible position on plagiarism [3]. Sarah Eaton offers six tenets. Anyone concerned about this issue needs to commit them to memory! While some might be seen as a bit of an overreach, to me the question she draws attention to for the longer term is not if but when.


All new ways of doing things are over-hyped in the short term and under-estimated in the long term, and this is where the real problems lie for formal education. To me, the longer-term developments [4] ask huge questions about what it is to know, to learn, to demonstrate skill and knowledge, about what curriculum is, what it means to assess, what is worth knowing, etc. These are the building blocks on which all of formal education’s current practices are built. To me they are all, or will soon be, fluid. Related to these matters are the old philosophical debates from the early days of AI in the 1950s and 60s, which have resurfaced. They remain important today.


A simple question I have asked for a long time, well before any of the recent developments in AI appeared: why do we teach students to do things that machines are good at? We wash clothes in a machine. We don’t use a copper to wash clothes, as some of us will recall. We have a heavy reliance on an ever-expanding set of machines: physical, digital and hybrid. The hybrids are physical machines with computers built into them.


In all of this we delegate work to machines. We pay little or no attention to what happens when we delegate. Here I indulge in one of my longer-lasting intellectual crutches, one that draws on notions from material semiotics, more commonly referred to as actor-network theory. In very crude terms, Latour [5] and, perhaps more eloquently, Sayes [6] argue that there is an exchange of capacities when we ask a machine to do work. The machine, in response, demands “new modes of action” from humans.


To illustrate the point: at this time, I think that when you use ChatGPT to do a task you need three complementary skills/knowledge (new modes of action): you need a rough idea of how the app was built and how it works; you need good prompting skills; and you need to be able to evaluate what it produces. There are ample resources online that give good advice about the first two. I suspect these complementary skills will change over time as the AI systems improve, improve and improve again. I’m no fan of technological determinism. Nor do I think the social will control and manage things. It will all spin around the interplay between us and these new machines or apps. Yes, they are black boxes. We can’t see inside them, but that is also the case with the meat computer that sits on our shoulders. It is another, but different, black box, although some of the logic that underpins the digital black box derives from thinking about neurons and their associations. Much more to say. For now….
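By way of a rough illustration of the second and third skills, here is a minimal sketch using the OpenAI Python SDK. The prompt, the choice of model and the crude citation check are mine and purely illustrative; treat them as assumptions rather than a recipe.

```python
from openai import OpenAI

# A sketch only: assumes the openai Python package (v1 SDK) is installed
# and that OPENAI_API_KEY is set in the environment.
client = OpenAI()

# Skill two: a deliberate, specific prompt rather than a vague request.
prompt = (
    "List three common misunderstandings about plagiarism in the age of "
    "generative AI, and for each one name a source I could check."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # illustrative model choice, not a recommendation
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
)
draft = response.choices[0].message.content

# Skill three: never take the output at face value. A crude first pass:
# does the reply even contain something checkable-looking?
has_checkable_sources = any(marker in draft.lower() for marker in ("http", "doi", "("))
print(draft)
print("Contains something that looks checkable:", has_checkable_sources)
```

The last two lines are, of course, no substitute for actually reading and verifying what comes back; they simply mark where the human work of evaluation begins.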


<EoR>


                                                                                                    


[1] Weston, J. (1997). Old Freedoms and New Technologies: The Evolution of Community Networking. The Information Society, 13(2), 195-201. https://doi.org/10.1080/019722497129214  


[2] Marvin, C. (1988). When Old Technologies Were New:  Thinking About Communications in the Late Nineteenth Century. Oxford University Press.  


[3] Posted by Sam Fowler in the chat. Attribution matters, even on small things. It is part of showing how you work and think.


[4] Just what is long and short in terms of systolic time is hard to make a call on. The developments in AI continue to attract massive investment, which fuels the speed of development.


[5] Latour, B. (1992). Where are the missing masses? Sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociological Change (pp. 225-258). MIT Press. http://www.bruno-latour.fr/sites/default/files/50-MISSING-MASSES-GB.pdf 


[6] Sayes, E. (2014). Actor–Network Theory and methodology: Just what does it mean to say that nonhumans have agency? Social Studies of Science, 44(1), 134-149. https://doi.org/10.1177/0306312713511867  





 








December 31, 2022

Bibs & bobs #13

dis-ELIZAs

In the 1960s Joseph Weizenbaum created a natural language processing program called ELIZA, the first chatbot. It can be accessed here. ELIZA was designed to mimic a Rogerian counselor. In broad terms, the program did the prompting and the human user often replied at length, which is the reverse of what we have now with chatbots like ChatGPT. While the responses of the software were simple, users’ reactions to them prompted Weizenbaum to explore his ethical concerns in Computer Power and Human Reason [1].
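For anyone who has never poked at it, the mechanics are surprisingly thin. Below is a minimal ELIZA-style sketch in Python; the patterns, canned responses and pronoun swaps are my own toy examples rather than Weizenbaum’s originals, but they show how little machinery is needed to keep a human talking.

```python
import re
import random

# Swap first- and second-person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "your": "my", "you": "I"}

# Each rule: a pattern to match and a few canned Rogerian-style reflections.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Tell me more about feeling {0}.", "Why do you feel {0}?"]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, responses in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            template = random.choice(responses)
            return template.format(*[reflect(g) for g in match.groups()])
    return "Please go on."

print(respond("I need a break from marking"))
# e.g. "Why do you need a break from marking?"
```

A handful of regular expressions and canned reflections was enough to convince some users they were being listened to, which is precisely what unsettled Weizenbaum.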

Having machines as talkative, responsive companions underpins the interest in having companion robots for the old and young. As Diana Fleischman argues, companionship extends to sexual gratification. 


A more intriguing take on this evolving pattern of interactions between humans and machines is provided by Kazuo Ishiguro’s book, Klara and the Sun [2]. It details a dystopian future in which the socialisation of the young has been delegated to androids (AFs, or artificial friends): humanoid robots powered by the sun. The relation between humans and machines seen through the eyes of a machine might seem far-fetched at this time, but humans have long used machines to keep the young and the old busy, interested and entertained. I’m thinking of radio, television, smart phones and, of course, games of all varieties. Ishiguro’s fiction explores a possible realignment of the human/machine relationship, told from the machine’s side.


It is easy to dismiss these musings for what they are: fictions, or funny little imaginary worlds that form in a creaky old meat computer. But right now I think they are a largely unspoken part of the debate and commentary swirling around recent developments in generative AI. There is, for want of a better adjective, an emotional side to these debates.


Perhaps it will surface in interactions with various AI apps that slip into what Masahiro Mori calls the uncanny valley: the point at which the affinity we feel when interacting with an app turns into repulsion. It poses an interesting question for the black-belt prompters: how to develop prompts that produce that outcome.


Maybe recent work by Carlos Perez will provide that reaction for you. 

Prospects for AI and research in the social sciences

I’ve always been interested in what I think of as secret academic business, i.e. how academics do their work: note making, searching, curation, writing and so on. Research across three European countries suggested that AI, like a lot of the digital aids that have surfaced in recent years, will go largely ignored.


94% of social scientists say they use Microsoft Word to write up their work. The vast majority also use Word to manage their references with just one in five using reference management software. Indeed, one in four humanities scholars draft early versions of their manuscripts in long hand. Most of the researchers we surveyed have no need for digital collaboration tools either – perhaps because the majority work alone – and where collaboration is needed, HSS scholars prefer to use email. They also overwhelmingly use email rather than social channels to promote their work. 


I need to locate more research that explores the attitudes and digital habits of researchers in the so-called soft sciences. There is obviously a visible, some might say noisy, group who pick up on most of the shiny new toys that emerge from what remains a primordial digital soup. More broadly, and particularly in education, my sense is that there is a significant proportion of folk who get by with minimal engagement with things digital. There is much to tease out here.


Persuasion

My curation of ideas is ordered around a small set of notions that I think are important for making some sense of how the world works. Persuasion is one of those notions. Perhaps my engagement with the early actor-network theory (ANT) accounts of scientific practices underpins my interest. I found some of those ANT ideas in Ingrid Robeyns’ review of Anand Giridharadas’ book The Persuaders.


My take-away from the book is that there is no point in believing you are right (or have the right policy, or the right analysis on what needs to happen on matter X), and believing the only thing that is needed for change is airing those views and that analysis. It’s just not enough. We need to actually spend time and effort to persuade others that this is the right analysis/policy/direction, and this persuasion cannot be merely cognitive; it requires understanding “where people are”, what makes them believe what they believe, and showing respect for them as a person at the outset. All of that requires listening, and being willing to engage in a genuine conversation, and finding out why people believe what they believe. Just believing I am right (and having all the arguments sorted out in my head) and airing my views, is not enough to also make a difference in the world, especially not in deeply divided societies.


While these ideas are often cited as ways to make progress on issues of concern, you might have to conclude that not a lot of this approach is going on, given the current state of disagreement across a wide range of issues.


Of course all of this is easier said than done, and it does assume, as Robeyns points out, that both sides of a debate share an understanding of how to play:

Being open to be persuaded implies being willing to accept, at a meta-level, that there is a distinction between a true claim and a false claim, and that one should not deliberately claim things that one knows to be blatantly false.


I found myself thinking about this analysis closer to home: first how one persuades oneself, and then the roles machines play in persuasion, something well represented in commentary and analysis of advertising, recommendation systems and so on. But I think there is something else going on.


It was perhaps timely that I came across a recent paper by Jeroen de Ridder [3], who makes an interesting case about what he calls online illusions of understanding. He argues that


the mere availability and use of internet search tools thus inflates people’s sense of how much they know and understand.  (p.5)


This got me thinking about the impact of prediction machines [4] on our illusions of understanding. One could argue that the machines we draw on are under no illusions about what they are; just ask ChatGPT. But in the hands of humans they can clearly contribute to the sense of inflation that de Ridder points to.


The bottom line, for the paper and for me, is that all this adds up to a heightened importance of the ability to ask good questions. As I wrote that line I resisted the urge to revisit some of the lame stuff I have posed in this blog. But I think there is an important point here. Our largely A4-honed sense of what makes a good question may be in need of some augmentation, augmentation that may even involve machines.


Value pluralism

An interview with Yejin Choi via Stephen Downes is a seriously useful read. Some snippets:


The truth is, what’s easy for machines can be hard for humans and vice versa. You’d be surprised how A.I. struggles with basic common sense. It’s crazy.


She draws on the notion of dark matter to talk about human common sense:


Can you explain what “common sense” means in the context of teaching it to A.I.? A way of describing it is that common sense is the dark matter of intelligence. Normal matter is what we see, what we can interact with. We thought for a long time that that’s what was there in the physical world — and just that. It turns out that’s only 5 percent of the universe. Ninety-five percent is dark matter and dark energy, but it’s invisible and not directly measurable. We know it exists, because if it doesn’t, then the normal matter doesn’t make sense. So we know it’s there, and we know there’s a lot of it. We’re coming to that realization with common sense. It’s the unspoken, implicit knowledge that you and I have. It’s so obvious that we often don’t talk about it. For example, how many eyes does a horse have? Two. We don’t talk about it, but everyone knows it. We don’t know the exact fraction of knowledge that you and I have that we didn’t talk about — but still know — but my speculation is that there’s a lot. Let me give you another example: You and I know birds can fly, and we know penguins generally cannot. So A.I. researchers thought, we can code this up: Birds usually fly, except for penguins. But in fact, exceptions are the challenge for common-sense rules. Newborn baby birds cannot fly, birds covered in oil cannot fly, birds who are injured cannot fly, birds in a cage cannot fly. The point being, exceptions are not exceptional, and you and I can think of them even though nobody told us. It’s a fascinating capability, and it’s not so easy for A.I.
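To make Choi’s point concrete, here is a toy sketch of the rule-plus-exceptions approach she describes. The particular flags and species are my own invented examples, not anything from the interview; the point is only how quickly the rule drowns in exceptions.

```python
# A toy illustration: the rule-based route to "common sense" invites an
# ever-growing list of hand-written exceptions.

def can_fly(bird: dict) -> bool:
    # Start with the "obvious" rule...
    if bird.get("species") == "penguin":
        return False
    # ...and then the exceptions start piling up.
    if bird.get("is_newborn"):
        return False
    if bird.get("covered_in_oil"):
        return False
    if bird.get("is_injured"):
        return False
    if bird.get("in_a_cage"):
        return False
    # Every rule added invites another unanticipated case:
    # ostriches, emus, clipped wings, dead parrots...
    return True

print(can_fly({"species": "sparrow"}))                      # True
print(can_fly({"species": "sparrow", "is_injured": True}))  # False
```

Every flag in the sketch is an exception someone had to anticipate in advance, which is exactly what human common sense does not require of us.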


And her take on value pluralism:

I’m excited about value pluralism, the fact that value is not singular. Another way to put it is that there’s no universal truth. A lot of people feel uncomfortable about this. As scientists, we’re trained to be very precise and strive for one truth. Now I’m thinking, well, there’s no universal truth — can birds fly or not? Or social and cultural norms: Is it OK to leave a closet door open? Some tidy person might think, always close it. I’m not tidy, so I might keep it open. But if the closet is temperature-controlled for some reason, then I will keep it closed; if the closet is in someone else’s house, I’ll probably behave. These rules basically cannot be written down as universal truths, because when applied in your context versus in my context, that truth will have to be bent. Moral rules: There must be some moral truth, you know? Don’t kill people, for example. But what if it’s a mercy killing? Then what?


ChatGPT, its initial and ongoing values training 

A good follow-on from the previous note is a detailed and clear account of ChatGPT’s training, which teases out its cultural biases (ht David Weinberger). The more interesting light at the end of the tunnel is the growing army of folk who, by playing with it, contribute to its ongoing values training.


The beat goes on

I find myself responding to various tweets that reflect an educational moral panic, or yet another jaw-dropping example of generative AI, with the phrase: the beat goes on.


I thought it would be worth rewriting those lyrics to reflect the impact of generative AI. So I prompted ChatGPT with “Rework these lyrics of the beat goes on to the impact of generative AI:” and supplied the lyrics.


It replied:


The AI beats on, the AI beats on

Models keep training, a dance to the brain

La de da de de, la de da de da

Generation was once the rage, uh huh

Technology has turned the page, uh huh

The neural network is the current thing, uh huh

Synthetic intelligence is our newborn king, uh huh

And the AI beats on, the AI beats on

Models keep training, a dance to the brain

La de da de de, la de da de da

The data sets are the supermart, uh huh

Machine learning still breaks its heart, uh huh

And researchers still keep on working hard

Electronically they keep the algorithms sharp

And the AI beats on, the AI beats on

Models keep training, a dance to the brain

La de da de de, la de da de da

Scientists sit and analyze

Bots keep learning to recognize

The progress keeps accelerating all the time

But humans still ask "Hey buddy, can you do this fine?"

And the AI beats on, the AI beats on

Models keep training, a dance to the brain

La de da de de, la de da de da

And the AI beats on (Yes, the AI beats on)

And the AI beats on (And the AI beats on, on, on, on, on...)

The AI beats on


If you need any evidence for the beat, this rather large collection of AI apps may do the trick.


                                                                                                    



[1] Weizenbaum, J. (1984). Computer Power and Human Reason: From Judgment to Calculation. Penguin.


[2] Ishiguro, K. (2021). Klara and the Sun (1st ed.). Alfred A. Knopf.


[3] De Ridder, J. (2022). Online Illusions of Understanding. Social Epistemology, 1-16. https://doi.org/10.1080/02691728.2022.2151331 


[4] See Agrawal, A., Gans, J., & Goldfarb, A. (2022). Power and Prediction: The Disruptive Economics of Artificial Intelligence. Harvard Business Review Press, and Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press.

