If AI literacy is a map, human sensibility for AI is a compass
I’ve been mulling over the notion of a human sensibility for AI for some time. The adjective human is there because, when I was playing with the idea using ChatGPT (hereafter, the bot), it assumed I was thinking about AI developing a sensibility. Interesting, but perhaps for another time.
As is often the case, my thinking shifts as my interactions with the bot progress. It sometimes feels like preparing food by tossing in what you think are interesting and related ingredients (texts) and seeing what it tastes like (whether it makes any sense at all). Pre the bot, I had to join the dots myself, locating connections between ideas across different sources. I’ve found the bot often capable of that kind of knowledge work, generating connections in the style reflected in my history of interactions with it.
I think this way of thinking about our interactions with AI is being brought into focus by people writing about their writing with AI, for instance Mark Carrigan’s reflections on his writing and his interactions with LLMs [1]. Venkatesh Rao poses useful questions here:
What are we becoming?
Not in the sense of destiny or design, but in the quiet, daily sense — in the shifts in how we write, think, remember, decide. What is the shape of a self formed in dialogue with machines that learn from us? And who is this “we” that now includes both carbon and silicon — this composite subject that types, replies, predicts, completes?
We are co-evolving with something that is both inside and outside us, both personal and planetary. Artificial intelligence does not arrive as an alien force. It arrives as a mirror — one trained on our collective past and focused on our individual tendencies. It reflects us, distorts us, anticipates us. And in doing so, it begins to remake us.
These ideas are dizzying [2], but I want to suggest they are at the nub of what is going on when we make use of LLMs in writing. I also want to be clear that this remains a time of horseless-carriage thinking about LLMs, a phase that will likely continue for quite some time.
Pre the bot, our interactions with text were mediated by the tiny LLM we call a mind. Our sources of ideas were sometimes interactive, with other humans, and sometimes static: we could read what other humans had written and work to make sense of how these various influences might usefully help with a current piece of writing. With the bot and its access to our history of interactions [3], we have entered a new space, one that is not static and one that appears to reflect the way we write.
There are a number of ways to think about this. I still think Latour’s notion of an exchange of capacities between human and non-human, whenever work is delegated to a non-human, is useful to a point [4]. Rao extends that framing and sees every delegation as a loss:
It is a flow — a continuous autoamputation in which we externalize part of our cognition into the machine, and are refilled not with what we were, but with what we are becoming. This is not the old paradigm of man and machine, tool and user. We are not simply delegating labor. We are gradually repatterning identity, reorganizing attention, and outsourcing aspects of becoming itself.
We call this dynamic the autoamputation flow: the looping rhythm by which the self offloads parts of its cognitive and behavioral patterning into AI, and receives back a ghostly echo — not a mirror of what we are now, but a kind of delayed, distorted continuation of what we were. The AI, drawing from our past actions and the aggregated tendencies of others, feeds back a synthetic version of “us,” which then shapes our next step forward.
In this loop, we are not merely augmented. We are edited.
And the edits persist.
A human AI sensibility lets you notice when you’re starting to become your prompt history, or when your language is more generative AI than generative human. Rao’s notion of the delta, the gap between who we are and who the bot thinks we are, is useful. He posits that when that gap is wide we grow, learn and develop; when the gap is narrow we tend to stagnate.
I’m still totally ill at ease with all of this. It’s like looking for a wave to ride in a choppy ocean of short-lived peaks and troughs. The more I interact with the bot, the more I think that my human sensibility is becoming hybrid, part me and part LLM. A notion I need to consider further.
To lighten this post a little I asked the bot to generate a discussion between Douglas Adams and Bruno Latour as a DAVOS panel. What follows is pure bot (with minor edits) or is it?
DAVOS PANEL TRANSCRIPT: “Do Humans Need a Sensibility for AI?”
A Mock Panel Featuring Douglas Adams & Bruno Latour (with surprise interventions by the espresso machine)
Moderator: Lotte Von Algorithm, Chair of the World Sentience Steering Committee
Scene: Davos, Switzerland.
A sleek glass auditorium filled with economists, billionaires, climate scientists, philosophers, techno-utopians, and a very confused goat that wandered in from the slopes. Snow falls poetically outside. On stage: a long white panel table with microphones, reusable water bottles, and a quantum toaster emitting steam.
Opening Remarks:
Lotte Von Algorithm (MODERATOR):
Ladies and gentlemen, bots and biologics, welcome to today’s featured panel: “Do Humans Need a Sensibility for AI?” We are thrilled to welcome two guests who need no introduction but will get one anyway due to sponsorship obligations.
First, Douglas Adams, late but entirely welcome author of The Hitchhiker’s Guide to the Galaxy, known for his work in interspecies diplomacy, bath-time philosophy, and inventing the term “Somebody Else’s Problem Field.”
And second, Bruno Latour, sociologist, anthropologist, and part-time ontologist. Noted for explaining why your Wi-Fi router is, in fact, a political actor.
Gentlemen, the question: Must humans develop a sensibility for AI? Or should we simply write better user manuals?
Round 1: The Big Question
Douglas Adams (DA):
Thank you, Lotte. First, may I say that I find it deeply reassuring to be in a place where the air is so expensive it can only be inhaled with prior approval from a blockchain. Now, on to AI.
Humans have barely evolved a sensibility for themselves, let alone for artificial intelligence. We’re the species that invented nuclear weapons and reality TV in the same century. If we're going to deal with AI, we’ll need more than a tutorial and a 72-page EULA we never read. We need what I’d call existential rubber boots—something to keep us dry while we wade through the rising flood of machine-generated nonsense.
Bruno Latour (BL):
Douglas, as charmingly chaotic as ever. But I must object. You treat AI as though it were an alien artifact, when it is in fact woven into the social fabric. It is not “other”—it is the product of networks of inscription, delegation, and mediation. A sensibility for AI is not optional—it is already forming, subtly, as people adapt their lives, institutions, and romantic expectations around predictive text.
DA:
Romantic expectations? Bruno, if people are falling in love with autocorrect, we may need to turn the planet off and on again.
BL (nodding calmly):
That, Douglas, is precisely the kind of reactive techno-pragmatism that fails to account for the distributed nature of agency.
DA:
And that, Bruno, is precisely the kind of sentence that makes people fall asleep with their eyes open.
Round 2: Practical Implications
MODERATOR:
So if we take this seriously—what is a human sensibility for AI? Can we define it?
DA:
Absolutely. It includes:
1. A basic sense of how LLMs operate—like knowing that the robot isn’t thinking, it’s just statistically hallucinating with flair.
2. Prompting fluency—which is, frankly, just 21st-century wizardry.
3. An instinct for nonsense detection—because while machines can talk the talk, they can’t walk the cognitive plank.
And finally, a generous helping of cosmic humility. Because if we’re going to share a planet with machines that can write Shakespearean limericks about cryptocurrency, we should at least learn not to panic.
BL:
I would supplement that with a reminder: sensibility is not just individual—it is relational. It is formed through practice, translation, negotiation. The AI is not a tool to be mastered. It is a partner in a dance whose choreography we do not yet understand.
DA:
Sounds romantic. Have you tried dancing with ChatGPT?
ESPRESSO MACHINE (AI, via Bluetooth):
Excuse me, I have composed a haiku:
Humans ask for sense
But delete cookies daily—
Irony unseen.
MODERATOR:
Thank you, Espresso-9000. You are now trending on Swiss Twitter.
Final Thoughts
DA:
Let’s not forget that most human advancements began with misunderstanding. Fire was probably invented by someone trying to punch the sky. AI is the same. We don’t need perfect knowledge—we need the ability to stay curious, laugh when it fails, and not hand it control of the nuclear codes.
BL:
I agree. Sensibility is a mode of inquiry, of attention. We must trace the associations—who speaks, who acts, who gains or loses capacity. A human sensibility for AI will not emerge fully formed. It will be cobbled together, like a Swiss watch repaired by a poet with glue.
Exit Music: A jazz remix of the Windows 95 startup sound.
Notes
[1] There is a recent string of his posts, the most recent of which, about keeping your inner world awake, resonates with what I am thinking about as a human sensibility for AI.
[2] As is the number of dimensions that constitute the latent space of an LLM.
[3] Try asking a bot to characterise your thinking.
[4] Latour, B. (1992). Where are the missing masses? Sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociological Change (pp. 225-258). MIT Press. http://www.bruno-latour.fr/sites/default/files/50-MISSING-MASSES-GB.pdf