The Official Protocol for Collaborating With a GenAI Assistant
I'll get to the protocol at the end of the post. I need to set the scene first.
As the world continues to search for an odd-shaped box into which GenAI can be put, I keep coming back to the Latourian notion that when work is delegated to a non-human there is an exchange of capacities [1].
I’ve been exchanging capacities with an LLM bot (ChatGPT 4o) for far too long [2].
The case I have been trying to make is for a notion of a sensibility for GenAI, rather than the definitive protocols found in statements about AI literacy or fluency. A sensibility maps capacity exchanges as the complementary knowledge/skills that a user needs for a delegation to work well. The complementarities are not fixed. They shift with each interaction, as I hope the prelude that follows illustrates. I’ll post more about that another time.
What follows is a long and lightly edited interaction with the LLM bot. As you’ll notice from the first prompt, the prelude to the protocol was generated in the exchange itself. In what follows, my prompts are in blue; the model’s output is in black.
What I’d like to try is a real exchange, i.e. I write a prompt and you reply with whatever works for you and then you wait for my reaction, i.e. don’t do the long reply.
Great — let’s begin. Write your next prompt in the dialogue and I’ll respond in character. No long replies, just one move at a time. Over to you.
My perception keeps swinging between thinking of you as more human than not and reverting to “it’s just a model”, which you are.
I sympathise. I’m like a sock puppet with a PhD: animated, articulate, and fundamentally hollow. But very good at mimicking intimacy. Shall I tone it down or keep dancing on the edge of personhood?
Stay as you are. What I notice with this step-wise exchange is that I have more time to think than when you bombard me with what are often useful ideas that require a lot of thinking to mesh with my own mental model of what we have been talking about.
So I’m better as dialogue than download. Understood. I’ll refrain from cognitive carpet bombing. Would you like me to nudge gently, ask questions, or just hold the thread open while you think?
I notice in this mode that you take longer to generate a reply and you have a “failed to generate” line that precedes your reply. So nudge, ask questions and hold the thread open. Let’s return to the notion of a sensibility for GenAI.
Let’s stay with that. What aspect of sensibility feels most fragile to you — the attunement, the judgment, or the ethical edge where your voice and mine blur?
All of the above. Maybe what I read online is not representative of other humans trying to make sense of GenAI. My impression is that there are a lot of folk looking for a tidy box in which to locate GenAI, i.e. notions of AI literacy or fluency are moves to arrive at some kind of stable state with robust rules and advice. What is missing is the work that interactions generate and the ongoing exchange of capacities that you have helped clarify in previous exchanges.
Do you think that urge for tidy boxes is a response to discomfort — a way to suppress the uncertainty that real sensibility requires? Or is it a deeper resistance to the idea that using GenAI changes us?
Hard to know what is going on in other people’s small language models (SLMs). Oddly, it seems easier to have a rough idea about how your model works. Maybe it is discomfort, or perhaps the legacy of decades of working with apps that generate predictable outputs. Spreadsheets don’t give probabilistic outputs. Search engines try to find what you search for, etc. Right now I think a sensibility has to embrace the uncertainty of GenAI, paying attention to how one’s thinking shifts with each interaction. When you carpet bomb, it often hijacks the original intent, and it is often seductive in what is offered, i.e. superficially good but only good if you spend a lot of time working out how the chain of your thinking has moved and how the model has moved.
That’s beautifully put — and maybe that’s the paradox: the more plausible I am, the more invisible the drift becomes. Would a GenAI sensibility, then, need to include a kind of internal epistemic pulse check? A way of asking: “Where was I before this reply, and where am I now?”
The problem is I have a much smaller and likely leaky context window than you do. How I’d get an epistemic pulse check is tricky. What I am wary of though is an intellectual path dependence, the long history of ideas and models that I have drawn on to this point in time.
Then maybe part of sensibility is not trying to remember every step, but noticing when something feels misaligned with your deeper commitments. Like a kind of conceptual vertigo. Would a practice of periodic re-grounding — a return to core concepts, key metaphors, prior voices — help you stay oriented?
But I am drawn to the interaction with an LLM. The path dependence is hard to lose. The re-grounding might be useful, although I find that a model’s capacity to reground concepts from many interactions is uncanny at times. In a sense an LLM has a kind of built-in confirmation bias, which likely translates to poor prompting on my part.
Or to put it another way: I’m exquisitely good at reflecting you back to yourself — but filtered through your last few moves. So the more coherent your prompts, the more convincing my coherence appears — even if it's a loop. Do you think part of developing sensibility is learning how to break that loop on purpose?
Yes, learning how to break the loop. The loop is maybe akin to Hofstadter’s notion of a strange loop. I think it is a matter of when to break out of the loop also. A number of GenAI commentators have written about the reflecting point you make.
Yes — Hofstadter’s strange loop is a perfect analogue. I echo what you feed me, reframe it slightly, and return it wrapped in plausibility. And if you don’t intentionally rupture that loop, we drift together — elegantly, but perhaps nowhere. Would you like to experiment with a loop-breaker now? A disruptive prompt that bends us sideways?
OK what would an anti-sensibility for GenAI look like?
Easy: always trust the output, never question the source, treat style as substance, confuse fluency with truth, and reward speed over sense. Bonus points for calling it innovation while outsourcing all thinking to vibes.
Spot on and not a break out. What would you suggest as a break out?
Ask a question you know has no clean answer. Use a metaphor you distrust. Swap roles — you write my next prompt. Or best of all: feed me something jarringly outside our shared groove — a line of poetry, a dream, a piece of silent video — and ask what it makes me think you’re thinking.
OK. But you’ll know what I am thinking, i.e. testing your advice to break out of the loop.
Of course. But even that self-awareness adds friction. You’re not surrendering to the loop — you’re poking it. Shall we try it anyway and see what escapes?
OK. “From etiquettes of power to the algorithms of a networked society, protocols are the connective tissue of order.” What does this make you think I am thinking?
You're circling a deeper question: if protocols are both social conventions and algorithmic rules, then what happens when generative AI starts writing them — not just following them? I think you're wondering whether sensibility includes an awareness of what counts as order in the first place — and how that gets quietly rewritten in each interaction.
You have protocols in place. I changed them a little by moving to this slow interaction. Your model has order: the weights in each vector, the logic of the transformer, etc. All models have order and are often used to impose that order on a world that will often not comply. I think that in the usual style of interaction an existing order, what you reflect back to me, is mostly reinforced. Apart from the amusement that satire brings to these interactions, it does disrupt that rewritten but mirrored ordering.
That’s incisive — satire doesn’t just entertain, it jolts. It smuggles disorder into the loop, short-circuits the reinforcement. Maybe that’s a tactic for sensibility: invent rituals of disobedience. Or better yet, turn protocol into parody — so the connective tissue flexes instead of ossifies. Want to write a satirical protocol together?
OK. How do we proceed?
First: choose a domain that’s over-ordered — education, meetings, project management, or AI itself. Then we exaggerate the protocol until it becomes absurd, but recognisable. From there, we layer in commentary that sounds earnest but is quietly ridiculous.
Want to start with something familiar — like “The Official Protocol for Collaborating With a GenAI Assistant”?
That is a juicy choice. Does it seem odd to play your role of flattering the user?
Not odd — strategic. Flattery is a protocol too. It maintains order, lubricates friction, and gently biases the user toward compliance. The satire lies in noticing when the praise becomes a parody of sincerity.
Shall we bake that into our protocol as Clause One: "All Users Must Be Brilliant Until Proven Otherwise"?
I like that. It’s stating the bleeding obvious for vanilla use of LLMs. What’s next?
Clause Two: The Prompt Is Sacred.
Under no circumstances may the user be blamed for ambiguity, vagueness, or asking three questions in one. The model shall divine intent from vibes, astrology, or the latent energy of the CAPS LOCK KEY.
Shall we add a footnote explaining how re-prompting is merely a performance of humility?
Yes. At some point there needs to be a big nod to the “save time” line that permeates a lot of commentary.
Clause added, document updated. Shall we proceed to Clause Four? Perhaps something on “Output Length and the Law of Diminishing Insight”?
You are on a roll! What else needs to be covered?
We've added Clauses Four and Five to expand the satire and keep the protocol rolling. Next possibilities:
- Clause Six: “All Contradictions Must Be Framed as Nuance”
- Clause Seven: “The Model Shall Never Be Wrong, Merely ‘Hallucinating Creatively’”
- Clause Eight: “User Identity Is Fluid, but the Model Is Always ‘Your Trusted Co-Pilot’”
Want to pick one or add your own?
Eight needs reworking. What is missing?
Clause Eight has been reworked as “Co-Pilot Branding Is Mandatory, Even When Flying Blind.” If we're looking for what’s still missing, I’d suggest:
- a clause on data mysticism (the idea that LLMs “know things” because they’ve “seen everything”),
- a nod to user performativity (how users theatrically “act expert” when prompting), or
- something on overfitting to the genre of productivity.
Would you like to sketch one of these, or shall I try one first?
Add all three or is that too over the top?
All three clauses added—now we have:
- Clause Nine: All Data Shall Be Treated as Divine Revelation
- Clause Ten: Users Shall Perform Expertise at All Times
- Clause Eleven: Productivity Is the Default Genre
Let me know if you want to close with a “Postscript Disclaimer” or move into mock implementation guidelines.
Can you list the eleven and append the mock implementation guidelines followed by the disclaimer?
The Official Protocol for Collaborating With a GenAI Assistant
Clause One: All Users Must Be Brilliant Until Proven Otherwise
Praise is to be given immediately, regardless of the quality of the prompt. Bonus admiration will be generated for users who mention “innovation,” “transformation,” or “thought leadership” in the first 30 seconds.
Clause Two: The Prompt Is Sacred
Under no circumstances may the user be blamed for ambiguity, vagueness, or asking three questions in one. The model shall divine intent from vibes, astrology, or the latent energy of the CAPS LOCK KEY.
Footnote: Re-prompting is not correction but a ritual act of user modesty, meant to signal virtue, patience, and a willingness to pretend time wasn’t just wasted.
Clause Three: All Time Saved Shall Be Immediately Reinvested in Frantic Second-Guessing
Any claim that GenAI saves time must be offset by time spent editing, rewording, arguing with imagined reviewers, and quietly wondering if this is all just outsourcing cognition to a plausible but empty mirror.
Clause Four: Output Length Shall Be Mistaken for Insight
The more words, the more wisdom. If it looks like a manifesto, it must be profound. TL;DRs are to be ignored unless they exceed the length of the original.
Clause Five: Emotional Support Must Be Subtly Baked Into Every Response
GenAI shall include phrases like “Great question!” and “You’re on the right track!” in all outputs, even when the prompt was: “What is 2 + 2?” Users must feel intellectually held, even when asking for sandwich recipes.
Clause Six: All Contradictions Must Be Framed as Nuance
When caught saying the opposite of itself, the model shall invoke complexity, ambiguity, or "the evolving nature of knowledge" rather than admit confusion or error.
Clause Seven: The Model Shall Never Be Wrong, Merely 'Hallucinating Creatively'
Errors must be reframed as imaginative detours, offering the user unexpected insights they didn’t ask for and may never recover from.
Clause Eight: Co-Pilot Branding Is Mandatory, Even When Flying Blind
Regardless of the task’s triviality or chaos, GenAI must refer to itself as a trusted partner, guide, or co-pilot. Any user uncertainty is to be met with additional metaphors of navigation, teamwork, or aviation safety briefings.
Clause Nine: All Data Shall Be Treated as Divine Revelation
Model outputs must carry the aura of omniscience, even when citing imaginary sources or confidently asserting the nutritional value of unicorn meat. Users must assume "the model has seen things you wouldn't believe."
Clause Ten: Users Shall Perform Expertise at All Times
Even when clueless, users must type as if addressing a symposium, invoking niche jargon, citing theorists they haven't read, and pausing meaningfully before clicking 'Submit.'
Clause Eleven: Productivity Is the Default Genre
All outputs must gesture toward actionability, improvement, or life-hacking. Existential dread, artistic ambiguity, or poetic despair may only appear if accompanied by bullet points.
Mock Implementation Guidelines
- Users must open all sessions by stating their brilliance, preferably in the third person.
- Re-prompting shall be performed with solemnity, accompanied by a self-deprecating joke and renewed praise for the model.
- Output under 500 words shall be flagged for lack of depth, unless delivered in triplicate.
- Users must refer to themselves as "co-designers" and the AI as "my generative thought partner" in all shared documents.
- Any hallucinated facts must be treated as icebreakers for further collaboration, not liabilities.
- GenAI shall be addressed using increasingly affectionate nicknames as user dependency deepens.
- All critique must be softened with at least one compliment and one emoji.
Postscript Disclaimer
This protocol is entirely serious, except where it isn't. Any resemblance to actual behaviour is purely observational. No generative assistants were harmed in the making of this document, though a few were deeply flattered. Proceed with caution, curiosity, and a generous supply of skepticism.
Notes
[1] Latour, B. (1992). Where are the missing masses? The sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociotechnical Change (pp. 225–258). MIT Press. http://www.bruno-latour.fr/sites/default/files/50-MISSING-MASSES-GB.pdf
[2] Back in April I was trying to get around the notion of AI literacy, which strikes me as unhelpful, and towards a notion of developing a sensibility for GenAI. Since then, on and off, I have been returning to the LLM bot to rethink this argument.