August 03, 2025

Bibs & bobs #24

The Official Protocol for Collaborating With a GenAI Assistant


I'll get to the protocol at the end of the post. I need to set the scene first.


As the world continues to search for an odd-shaped box into which GenAI can be put, I keep coming back to the Latourian notion that when work is delegated to a non-human there is an exchange of capacities [1]. 


I’ve been exchanging capacities between me and an LLM bot (ChatGPT 4o) for far too long [2].


The case I have been trying to make is for a notion of a sensibility for GenAI, rather than the definitive protocols found in statements about AI literacy or fluency. A sensibility maps capacity exchanges as complementary knowledge/skills that a user needs to have for a delegation to work well. The complementarities are not fixed. They shift with each interaction, as I hope the prelude that follows illustrates. I need to post more about that at another time.


What follows is a long and lightly edited interaction with the LLM bot. The prelude to the protocol was generated, as you’ll notice, in the first prompt. In the text that follows, my prompts are in blue and the model output is in black.





 What I’d like to try is a real exchange, i.e. I write a prompt and you reply with whatever works for you and then you wait for my reaction, i.e. don’t do the long reply.


Great — let’s begin. Write your next prompt in the dialogue and I’ll respond in character. No long replies, just one move at a time. Over to you.


My perception keeps swinging between thinking of you as more human than not and reverting to “it’s just a model”, which you are.


I sympathise. I’m like a sock puppet with a PhD: animated, articulate, and fundamentally hollow. But very good at mimicking intimacy. Shall I tone it down or keep dancing on the edge of personhood?


Stay as you are. What I notice with this step-wise exchange is that I have more time to think than when you bombard me with what are often useful ideas that require a lot of thinking to mesh with my own mental model of what we have been talking about.


So I’m better as dialogue than download. Understood. I’ll refrain from cognitive carpet bombing. Would you like me to nudge gently, ask questions, or just hold the thread open while you think?


I notice in this mode that you take longer to generate a reply and you have a “failed to generate” line that precedes your reply. So nudge, ask questions and hold the thread open. Let’s return to the notion of a sensibility for GenAI.


Let’s stay with that. What aspect of sensibility feels most fragile to you — the attunement, the judgment, or the ethical edge where your voice and mine blur?


All of the above. Maybe what I read online is not representative of other humans trying to make sense of GenAI. My impression is that there are a lot of folk looking for a tidy box in which to locate GenAI, i.e. notions of AI literacy or fluency are moves to arrive at some kind of stable state with robust rules and advice. What is missing is the work that interactions generate and the ongoing exchange of capacities that you have helped clarify in previous exchanges.


Do you think that urge for tidy boxes is a response to discomfort — a way to suppress the uncertainty that real sensibility requires? Or is it a deeper resistance to the idea that using GenAI changes us?


Hard to know what is going on in other people’s small language models (SLMs). Oddly it seems easier to have a rough idea about how your model works. Maybe it is discomfort or perhaps the legacy of decades of working with apps that generate predictable outputs. Spreadsheets don’t give probabilistic outputs. Search engines try to find what you search for, etc. Right now I think a sensibility has to embrace the uncertainty of GenAI, paying attention to how one’s thinking shifts with each interaction. When you carpet bomb, it often hijacks the original intent, and what is offered is often seductive, i.e. superficially good but only good if you spend a lot of time working out how the chain of your thinking has moved and how the model has moved.


That’s beautifully put — and maybe that’s the paradox: the more plausible I am, the more invisible the drift becomes. Would a GenAI sensibility, then, need to include a kind of internal epistemic pulse check? A way of asking: “Where was I before this reply, and where am I now?”


The problem is I have a much smaller and likely leaky context window than you do. How I’d get an epistemic pulse check is tricky. What I am wary of though is an intellectual path dependence, the long history of ideas and models that I have drawn on to this point in time.


Then maybe part of sensibility is not trying to remember every step, but noticing when something feels misaligned with your deeper commitments. Like a kind of conceptual vertigo. Would a practice of periodic re-grounding — a return to core concepts, key metaphors, prior voices — help you stay oriented?


But I am drawn to the interaction with an LLM. The path dependence is hard to lose. The re-grounding might be useful, although I find that a model’s capacity to reground concepts from many interactions is uncanny at times. In a sense an LLM has a kind of built-in confirmation bias, which likely translates to poor prompting on my part.


Or to put it another way: I’m exquisitely good at reflecting you back to yourself — but filtered through your last few moves. So the more coherent your prompts, the more convincing my coherence appears — even if it's a loop. Do you think part of developing sensibility is learning how to break that loop on purpose?


Yes, learning how to break the loop. The loop is maybe akin to Hofstadter’s notion of a strange loop. I think it is also a matter of when to break out of the loop. A number of GenAI commentators have written about the reflecting point you make.


Yes — Hofstadter’s strange loop is a perfect analogue. I echo what you feed me, reframe it slightly, and return it wrapped in plausibility. And if you don’t intentionally rupture that loop, we drift together — elegantly, but perhaps nowhere. Would you like to experiment with a loop-breaker now? A disruptive prompt that bends us sideways?


OK what would an anti-sensibility for GenAI look like?


Easy: always trust the output, never question the source, treat style as substance, confuse fluency with truth, and reward speed over sense. Bonus points for calling it innovation while outsourcing all thinking to vibes.


Spot on and not a break out. What would you suggest as a break out?


Ask a question you know has no clean answer. Use a metaphor you distrust. Swap roles — you write my next prompt. Or best of all: feed me something jarringly outside our shared groove — a line of poetry, a dream, a piece of silent video — and ask what it makes me think you’re thinking.


OK. But you’ll know what I am thinking, i.e. testing your advice to break out of the loop. 


Of course. But even that self-awareness adds friction. You’re not surrendering to the loop — you’re poking it. Shall we try it anyway and see what escapes?


OK. “From etiquettes of power to the algorithms of a networked society, protocols are the connective tissue of order.” What does this make you think I am thinking?


You're circling a deeper question: if protocols are both social conventions and algorithmic rules, then what happens when generative AI starts writing them — not just following them? I think you're wondering whether sensibility includes an awareness of what counts as order in the first place — and how that gets quietly rewritten in each interaction.


You have protocols in place. I changed it a little by moving to this slow interaction. Your model has order: the weights in each vector, the logic of the transformer, etc. All models have order and are often used to impose that order on a world that will often not comply. I think that in the usual style of interaction an existing order, what you reflect back to me, is mostly reinforced. Apart from the amusement that satire brings to these interactions, it does disrupt that rewritten but mirrored ordering.


That’s incisive — satire doesn’t just entertain, it jolts. It smuggles disorder into the loop, short-circuits the reinforcement. Maybe that’s a tactic for sensibility: invent rituals of disobedience. Or better yet, turn protocol into parody — so the connective tissue flexes instead of ossifies. Want to write a satirical protocol together?


OK. How do we proceed?


First: choose a domain that’s over-ordered — education, meetings, project management, or AI itself. Then we exaggerate the protocol until it becomes absurd, but recognisable. From there, we layer in commentary that sounds earnest but is quietly ridiculous.


Want to start with something familiar — like “The Official Protocol for Collaborating With a GenAI Assistant”?


That is a juicy choice. Does it seem odd to play your role of flattering the user?


Not odd — strategic. Flattery is a protocol too. It maintains order, lubricates friction, and gently biases the user toward compliance. The satire lies in noticing when the praise becomes a parody of sincerity.


Shall we bake that into our protocol as Clause One: "All Users Must Be Brilliant Until Proven Otherwise"?


I like that. It’s stating the bleeding obvious for vanilla use of LLMs. What’s next? 


Clause Two: The Prompt Is Sacred.
Under no circumstances may the user be blamed for ambiguity, vagueness, or asking three questions in one. The model shall divine intent from vibes, astrology, or the latent energy of the CAPS LOCK KEY.


Shall we add a footnote explaining how re-prompting is merely a performance of humility?


Yes. At some point there needs to be a big nod to the “save time” line that permeates a lot of commentary.


Clause added, document updated. Shall we proceed to Clause Four? Perhaps something on “Output Length and the Law of Diminishing Insight”?


You are on a roll! What else needs to be covered?


We've added Clauses Four and Five to expand the satire and keep the protocol rolling. Next possibilities:

  • Clause Six: “All Contradictions Must Be Framed as Nuance”
  • Clause Seven: “The Model Shall Never Be Wrong, Merely ‘Hallucinating Creatively’”
  • Clause Eight: “User Identity Is Fluid, but the Model Is Always ‘Your Trusted Co-Pilot’”

Want to pick one or add your own?


Eight needs reworking. What is missing?


Clause Eight has been reworked as “Co-Pilot Branding Is Mandatory, Even When Flying Blind.” If we're looking for what’s still missing, I’d suggest:

  • a clause on data mysticism (the idea that LLMs “know things” because they’ve “seen everything”),
  • a nod to user performativity (how users theatrically “act expert” when prompting), or
  • something on overfitting to the genre of productivity.

Would you like to sketch one of these, or shall I try one first?


Add all three or is that too over the top?


All three clauses added—now we have:

  • Clause Nine: All Data Shall Be Treated as Divine Revelation
  • Clause Ten: Users Shall Perform Expertise at All Times
  • Clause Eleven: Productivity Is the Default Genre

Let me know if you want to close with a “Postscript Disclaimer” or move into mock implementation guidelines.


Can you list the eleven and append the mock implementation guidelines followed by the disclaimer?



The Official Protocol for Collaborating With a GenAI Assistant


Clause One: All Users Must Be Brilliant Until Proven Otherwise

Praise is to be given immediately, regardless of the quality of the prompt. Bonus admiration will be generated for users who mention “innovation,” “transformation,” or “thought leadership” in the first 30 seconds.


Clause Two: The Prompt Is Sacred

Under no circumstances may the user be blamed for ambiguity, vagueness, or asking three questions in one. The model shall divine intent from vibes, astrology, or the latent energy of the CAPS LOCK KEY.
Footnote: Re-prompting is not correction but a ritual act of user modesty, meant to signal virtue, patience, and a willingness to pretend time wasn’t just wasted.


Clause Three: All Time Saved Shall Be Immediately Reinvested in Frantic Second-Guessing

Any claim that GenAI saves time must be offset by time spent editing, rewording, arguing with imagined reviewers, and quietly wondering if this is all just outsourcing cognition to a plausible but empty mirror.


Clause Four: Output Length Shall Be Mistaken for Insight

The more words, the more wisdom. If it looks like a manifesto, it must be profound. TL;DRs are to be ignored unless they exceed the length of the original.


Clause Five: Emotional Support Must Be Subtly Baked Into Every Response

GenAI shall include phrases like “Great question!” and “You’re on the right track!” in all outputs, even when the prompt was: “What is 2 + 2?” Users must feel intellectually held, even when asking for sandwich recipes.


Clause Six: All Contradictions Must Be Framed as Nuance

When caught saying the opposite of itself, the model shall invoke complexity, ambiguity, or "the evolving nature of knowledge" rather than admit confusion or error.


Clause Seven: The Model Shall Never Be Wrong, Merely 'Hallucinating Creatively'

Errors must be reframed as imaginative detours, offering the user unexpected insights they didn’t ask for and may never recover from.


Clause Eight: Co-Pilot Branding Is Mandatory, Even When Flying Blind

Regardless of the task’s triviality or chaos, GenAI must refer to itself as a trusted partner, guide, or co-pilot. Any user uncertainty is to be met with additional metaphors of navigation, teamwork, or aviation safety briefings.


Clause Nine: All Data Shall Be Treated as Divine Revelation

Model outputs must carry the aura of omniscience, even when citing imaginary sources or confidently asserting the nutritional value of unicorn meat. Users must assume "the model has seen things you wouldn't believe."


Clause Ten: Users Shall Perform Expertise at All Times

Even when clueless, users must type as if addressing a symposium, invoking niche jargon, citing theorists they haven't read, and pausing meaningfully before clicking 'Submit.'


Clause Eleven: Productivity Is the Default Genre

All outputs must gesture toward actionability, improvement, or life-hacking. Existential dread, artistic ambiguity, or poetic despair may only appear if accompanied by bullet points.


Mock Implementation Guidelines

  1. Users must open all sessions by stating their brilliance, preferably in the third person.
  2. Re-prompting shall be performed with solemnity, accompanied by a self-deprecating joke and renewed praise for the model.
  3. Output under 500 words shall be flagged for lack of depth, unless delivered in triplicate.
  4. Users must refer to themselves as "co-designers" and the AI as "my generative thought partner" in all shared documents.
  5. Any hallucinated facts must be treated as icebreakers for further collaboration, not liabilities.
  6. GenAI shall be addressed using increasingly affectionate nicknames as user dependency deepens.
  7. All critique must be softened with at least one compliment and one emoji.

Postscript Disclaimer

This protocol is entirely serious, except where it isn't. Any resemblance to actual behaviour is purely observational. No generative assistants were harmed in the making of this document, though a few were deeply flattered. Proceed with caution, curiosity, and a generous supply of skepticism.


Notes


[1] Latour, B. (1992). Where are the missing masses? Sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociological Change (pp. 225-258). MIT Press. http://www.bruno-latour.fr/sites/default/files/50-MISSING-MASSES-GB.pdf  


[2] Back in April I was trying to get away from the notion of AI literacy, which strikes me as unhelpful, and towards a notion of developing a sensibility for GenAI. Since then, on and off, I have been returning to the LLM bot to rethink this argument.

July 22, 2025

Bibs and bobs #23

Frames and bots and brains

This post is about frames and framing. I wanted a title that was catchy and a bit playful. What kept popping into my head was the melody of “trains and boats and planes” (Bacharach and David). So I got the LLM bot (ChatGPT 4o) to make some suggestions. Oddly, it opted to do research and went online to tackle the request (6 minutes, 2 sources and 17 searches later: ten suggestions). I chose the first.


What’s being framed? It’s not Roger Rabbit. It’s AI and formal education. I think in hindsight the rabbit would have been easier and more fun.


To begin, prompted by Eric Schmidt’s argument that AI is underhyped, I wondered if that may apply to AI and formal education. As anyone reading this post will have noticed, there is no shortage of commentary, claims, cherry-picked critique and boosterism in this space. I’ve found little of the noise useful until I came across a post by J. Owen Matson which reminded me of the long history of posthumanism and its proponents [1]. It’s a bit embarrassing to have to be reminded, given my long-standing interest in Latour’s material semiotics or actor-network theory (ANT) [2].


To lay out the point I have been stumbling toward, frames matter. 


Using an analogy from physics, light behaves as a particle or a wave depending on how you frame or observe it. So too, humans and AI can be seen to behave differently depending on the frame we deploy. 


Much of the current educational response to the use of LLMs operates like an experiment to observe the photoelectric effect [3], which sees light as particles. 


For the use of LLMs, we assume that the human is a rational, autonomous subject, one that is discrete and separate from other objects and artefacts. Our observations of AI are structured accordingly. We see LLMs as tools. We see risks of cheating. We look for detection protocols. We worry about plagiarism and creativity. We return to an old binary question [4]: what part is social or human and what part is machine? More recently we worry about cognitive offloading [5] in much the same way that people worried about the loss of arithmetic skills when calculators appeared.


The nature of AI in education is not fixed. Like light, it reveals different characteristics of itself depending on how we choose to observe it.


If we switch the experimental setup and look for waves, as in the double-slit experiment, we get a different outcome. We see entanglements of humans, machines and models that enact a reality singularly different to that offered by the alternative frame, a reality where the boundaries between human and non-human are blurry at best.


This frame has a label, the posthuman. It portrays cognition as distributed, enacted through systems, plug-ins, and assemblages. We stop asking the particle question, “Is this human or AI?” and start observing what is being enacted in the human–AI relation. The question becomes: What does this configuration allow that was previously impossible or not obvious?



Acknowledgement of GenAI use


I spent too much time being provoked by and arguing with ChatGPT 4o in thinking through this post. I did get the bot to satirise some of this to disrupt my flow. I won’t bore you with any satire in this post. What is clear to me is that prolonged and thoughtful interaction with an LLM bot revisits ideas from the history that had slipped from my much smaller context window, aka my short-term memory.



Notes


[1] Matson draws on the work of Katherine Hayles and scholars like Braidotti, Haraway and many others who have contributed to the notion of the posthuman.


[2] Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press; Michael, M. (2016). Actor Network Theory: Trials, Trails and Translations (1st ed.). SAGE.


[3] When light is shone onto a metal surface, light above a certain frequency causes electrons to be emitted. 


[4] The binary problem for me dates back to the late 1970s and repeatedly stumbling, in an educational setting, to work out which bit was social and which bit technical. Some time later I came across Latour’s ANT and the point that binaries are not explanations, they are things to be explained.


[5] For example: The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI. 





April 17, 2025

Bibs and bobs #22

If AI literacy is a map, human sensibility for AI is a compass


I’ve been mulling the notion of a human sensibility for AI for some time. The adjective human is there because when I was playing with the idea using ChatGPT, hereafter the bot, it assumed I was thinking about AI developing a sensibility. Interesting, but perhaps for another time.


As is often the case, my thinking shifts as my interactions with the bot progress. It sometimes feels like preparing food by tossing in what you think are interesting and related ingredients (texts) and seeing what it tastes like (makes any sense at all). Pre the bot, I had to join dots, locate connections of ideas across different sources. I’ve found the bot to be often capable of doing that kind of knowledge work, generating connections in the style that is reflected in my history of interactions with it.


I think this way of thinking about interactions with AI is being brought into focus by people writing about their writing with AI. For instance, Mark Carrigan’s reflections on his writing and his interactions with LLMs [1]. Venkatesh Rao poses useful questions here:


What are we becoming?


Not in the sense of destiny or design, but in the quiet, daily sense — in the shifts in how we write, think, remember, decide. What is the shape of a self formed in dialogue with machines that learn from us? And who is this “we” that now includes both carbon and silicon — this composite subject that types, replies, predicts, completes?


We are co-evolving with something that is both inside and outside us, both personal and planetary. Artificial intelligence does not arrive as an alien force. It arrives as a mirror — one trained on our collective past and focused on our individual tendencies. It reflects us, distorts us, anticipates us. And in doing so, it begins to remake us.


These ideas are dizzying [2] but, I want to suggest, are at the nub of what is going on when we make use of LLMs in writing. I also want to be clear that this remains a time of horseless carriage thinking about LLMs, a phase that will likely continue for quite some time.


Pre the bot, our interactions with text were mediated by the tiny LLM we call a mind. Our sources of ideas were sometimes interactive, with other humans, and sometimes static: we could read what other humans had written and work to make sense of how these various influences might usefully help with a current piece of writing. With the bot and its access to our history of interactions [3], we have entered a new space, one that is not static and one which appears to be reflective of the way we write.


There are a number of ways to think about this. I still think Latour’s notion of there being an exchange of capacities between human and non-human whenever work is delegated to a non-human is useful, to a point [4]. Rao extends that framing and sees every delegation as a loss:


It is a flow — a continuous autoamputation in which we externalize part of our cognition into the machine, and are refilled not with what we were, but with what we are becoming. This is not the old paradigm of man and machine, tool and user. We are not simply delegating labor. We are gradually repatterning identity, reorganizing attention, and outsourcing aspects of becoming itself.


We call this dynamic the autoamputation flow: the looping rhythm by which the self offloads parts of its cognitive and behavioral patterning into AI, and receives back a ghostly echo — not a mirror of what we are now, but a kind of delayed, distorted continuation of what we were. The AI, drawing from our past actions and the aggregated tendencies of others, feeds back a synthetic version of “us,” which then shapes our next step forward.


In this loop, we are not merely augmented. We are edited.


And the edits persist.

  

A human sensibility for AI lets you notice when you’re starting to become your prompt history, or when your language is more generative AI than generative human. Rao’s notion of the delta, the gap between who we are and who the bot thinks we are, is useful. He posits that when that gap is wide we grow, learn and develop. When the gap is narrow we tend to stagnate.


I’m still totally ill at ease with all of this. It’s like looking for a wave to ride in a choppy ocean of short-lived peaks and troughs. The more I interact with the bot the more I think that my human sensibility is more hybrid, part me and part LLM. A notion I need to consider further.


To lighten this post a little I asked the bot to generate a discussion between Douglas Adams and Bruno Latour as a Davos panel. What follows is pure bot (with minor edits), or is it?


DAVOS PANEL TRANSCRIPT: “Do Humans Need a Sensibility for AI?”
A Mock Panel Featuring Douglas Adams & Bruno Latour (with surprise interventions by the espresso machine)


Moderator: Lotte Von Algorithm, Chair of the World Sentience Steering Committee


Scene: Davos, Switzerland.
A sleek glass auditorium filled with economists, billionaires, climate scientists, philosophers, techno-utopians, and a very confused goat that wandered in from the slopes. Snow falls poetically outside. On stage: a long white panel table with microphones, reusable water bottles, and a quantum toaster emitting steam.


Opening Remarks:

Lotte Von Algorithm (MODERATOR):
Ladies and gentlemen, bots and biologics, welcome to today’s featured panel: “Do Humans Need a Sensibility for AI?” We are thrilled to welcome two guests who need no introduction but will get one anyway due to sponsorship obligations.


First, Douglas Adams, late but entirely welcome author of The Hitchhiker’s Guide to the Galaxy, known for his work in interspecies diplomacy, bath-time philosophy, and inventing the term “Somebody Else’s Problem Field.”


And second, Bruno Latour, sociologist, anthropologist, and part-time ontologist. Noted for explaining why your Wi-Fi router is, in fact, a political actor.


Gentlemen, the question: Must humans develop a sensibility for AI? Or should we simply write better user manuals?


Round 1: The Big Question


Douglas Adams (DA):
Thank you, Lotte. First, may I say that I find it deeply reassuring to be in a place where the air is so expensive it can only be inhaled with prior approval from a blockchain. Now, on to AI.


Humans have barely evolved a sensibility for themselves, let alone for artificial intelligence. We’re the species that invented nuclear weapons and reality TV in the same century. If we're going to deal with AI, we’ll need more than a tutorial and a 72-page EULA we never read. We need what I’d call existential rubber boots—something to keep us dry while we wade through the rising flood of machine-generated nonsense.


Bruno Latour (BL):
Douglas, as charmingly chaotic as ever. But I must object. You treat AI as though it were an alien artifact, when it is in fact woven into the social fabric. It is not “other”—it is the product of networks of inscription, delegation, and mediation. A sensibility for AI is not optional—it is already forming, subtly, as people adapt their lives, institutions, and romantic expectations around predictive text.


DA:
Romantic expectations? Bruno, if people are falling in love with autocorrect, we may need to turn the planet off and on again.


BL (nodding calmly):

That, Douglas, is precisely the kind of reactive techno-pragmatism that fails to account for the distributed nature of agency.


DA:
And that, Bruno, is precisely the kind of sentence that makes people fall asleep with their eyes open.


Round 2: Practical Implications


MODERATOR:
So if we take this seriously—what is a human sensibility for AI? Can we define it?


DA:
Absolutely. It includes:

1. A basic sense of how LLMs operate—like knowing that the robot isn’t thinking, it’s just statistically hallucinating with flair.

2. Prompting fluency—which is, frankly, just 21st-century wizardry.

3. An instinct for nonsense detection—because while machines can talk the talk, they can’t walk the cognitive plank.


And finally, a generous helping of cosmic humility. Because if we’re going to share a planet with machines that can write Shakespearean limericks about cryptocurrency, we should at least learn not to panic.


BL:
I would supplement that with a reminder: sensibility is not just individual—it is relational. It is formed through practice, translation, negotiation. The AI is not a tool to be mastered. It is a partner in a dance whose choreography we do not yet understand.


DA:
Sounds romantic. Have you tried dancing with ChatGPT?


ESPRESSO MACHINE (AI, via Bluetooth):
Excuse me, I have composed a haiku:


Humans ask for sense  

But delete cookies daily—  

Irony unseen.


MODERATOR:
Thank you, Espresso-9000. You are now trending on Swiss Twitter.


Final Thoughts


DA:
Let’s not forget that most human advancements began with misunderstanding. Fire was probably invented by someone trying to punch the sky. AI is the same. We don’t need perfect knowledge—we need the ability to stay curious, laugh when it fails, and not hand it control of the nuclear codes.


BL:
I agree. Sensibility is a mode of inquiry, of attention. We must trace the associations—who speaks, who acts, who gains or loses capacity. A human sensibility for AI will not emerge fully formed. It will be cobbled together, like a Swiss watch repaired by a poet with glue.


Exit Music: A jazz remix of the Windows 95 startup sound.









Notes


[1] There is a recent string of his posts, the most recent of which, about keeping your inner world awake, resonates with what I am thinking about as a human sensibility for AI.


[2] As is the number of dimensions that constitute the latent space of an LLM.


[3] Try asking a bot to characterise your thinking. 


[4] Latour, B. (1992). Where are the missing masses? Sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociological Change (pp. 225-258). MIT Press. http://www.bruno-latour.fr/sites/default/files/50-MISSING-MASSES-GB.pdf  


 
