March 06, 2025

Bibs & bobs #21

 Thinking about the notion of a robust AI sensibility


I’ve been thinking about what I currently call a robust AI sensibility: a knowledge/skill/experience set that goes beyond notions of literacy or fluency. I tried out ChatGPT 4.5 in deep research mode to see how it went. I forgot to record the time it took, but it was over five minutes, and it generated a post of a little under 5,000 words.


Prompts

LLM output

LLM processing

My commentary


I want to make an argument about what I think of as a robust AI sensibility. The notion captures what I think of as an exchange of capacities between human and non-human when an LLM is used by a human. Terms like literacy and fluency are too easily used to gesture vaguely at what I think is a more complex relationship between human and machine. The basis of a robust sensibility to me means having a rough understanding of how LLMs were built and how they operate; good prompting skills; and an ability or approach to judging the quality of the output. After that, sensibility becomes highly idiosyncratic, i.e. it depends on the human, the field being discussed and the capabilities of the LLM.


Your idea of a 'robust AI sensibility' sounds intriguing. To develop your argument effectively, could you clarify:

  • What kind of argument are you aiming for? (e.g., philosophical, technical, practical, ethical?)
  • Do you want to engage with existing scholarship or definitions of literacy, fluency, or human-machine interaction?
  • Should this argument be structured as an essay, an opinion piece, a research paper, or another format?
  • Are there specific examples of human-LLM interactions you want to highlight?
  • Do you have a particular audience in mind (e.g., AI researchers, general public, educators)?

Let me know how you'd like to proceed!


I want a theory-based practical account that does engage with the existing scholarship. The format is an opinion piece for a blog. The audience is educators.


Great! I’ll develop a theory-based yet practical argument on a 'robust AI sensibility' that engages with existing scholarship on literacy, fluency, and human-machine interaction. The format will be an opinion piece tailored for educators, balancing conceptual depth with practical applications.

I'll find relevant research on AI literacy, human-AI collaboration, and educational approaches to working with LLMs. I'll also incorporate examples to illustrate how educators can cultivate this sensibility in their own practices.

I'll let you know once the piece is ready!


Theory-Based Practical Argument on Robust AI Sensibility

ChatGPT

I’m reviewing works on AI literacy, human-machine collaboration, and digital sensibility to build a holistic understanding of robust AI sensibility in education.

I’m focusing on scholarly articles and blogs about AI literacy in education, human-machine interaction, and critical AI literacy to consolidate knowledge for crafting a robust AI sensibility argument.


Searched for AI literacy and human-machine collaboration

ChatGPT

Considering refining the search to "AI literacy vs AI fluency" or "digital literacy" for more precise results.


Searched for AI literacy and LLMs for educators

ChatGPT

I’m focusing on AI literacy courses for educators, balancing theoretical and practical approaches, and considering minimal literature citation. A more precise search might be "robust sensibility" and "AI."


Searched for AI sensibility and literacy fluency

ChatGPT

I'm gathering info on "AI sensibility" and exploring it as a concept similar to sensibility, which involves sensitivity or awareness, and whether "robust AI sensibility" could mean more than just AI literacy.


Searched for LLM and AI literacy

ChatGPT

I'm noting "LLM literacy" and "A Framework for AI Literacy" from EDUCAUSE Review, with a potential link to "Critical AI Literacy" examining LLMs and epistemic injustices. Progressing through these ideas might reveal more about their interplay.


Read about.open.ac.uk

ChatGPT

I’m piecing together a definition of critical AI literacy, emphasizing critical thinking, evaluation, and EDIA. This involves engaging with existing scholarship and considering the gaps in current AI education.


Read goodreads.com

ChatGPT

Piecing together the concept of AI sensibility as melding knowledge, skill, and attitude, discerning it from mere literacy or fluency.


Searched for AI fluency vs AI literacy

ChatGPT

I’m examining entries like corporate documents on corporate perspectives, educational paradigms, and expert talks to understand AI literacy's development from literacy to fluency.


Searched for Magerko 2020 AI literacy definition


Read circlcenter.org

ChatGPT

OK, let me see. I'm thinking about searching within the PDF or "this" for clear definitions.


Read more from circlcenter.org


Read static1.squarespace.com

ChatGPT

Thinking through what AI literacy entails, mapping out a collection of essential competencies.


Read more from static1.squarespace.com

ChatGPT

I'm thinking through the critical evaluation of AI technologies, effective communication and collaboration with AI, and using AI as a tool in various environments, suggesting a comprehensive set of skills.


Read more from static1.squarespace.com

ChatGPT

I'm emphasizing how Content-AI integration can enhance classroom writing through iterative, reflective mediation. This method fosters a more profound human-AI collaboration, avoiding superficial fixes.


Read learningguild.com

ChatGPT

Piecing together AI literacy, fluency, and sensibility: foundational knowledge, innovative application, and intuitive, responsible creativity.


Searched for human-AI collaboration educators with LLM

ChatGPT

Evaluating the potential relevance of a PDF on "AI-Assisted Academic Writing" to discussions on AI literacy and human-AI collaboration.


Read from o3WhatCanWeDo.pdf

ChatGPT

I'm digging into how AI affects academic writing, spotting misinformation risks, and weighing the need for smart user monitoring.


Read from o3WhatCanWeDo.pdf

ChatGPT

Leveraging AI to draft, cite, and respond to reviews, enhancing academic collaboration.

AI tools will streamline academic writing by drafting, citing, and engaging reviewers, significantly boosting efficiency and support.


Read from o3WhatCanWeDo.pdf

ChatGPT

Academic writing might involve AI assistance for drafting, citations, and reviewer comments. This approach shows a dynamic, collaborative process.


Read ibm.com

ChatGPT

OK, let me see. It’s noting LLMs' use in writing but their struggle with deep academic tasks like research and argument formulation.


Read more from ibm.com

ChatGPT

This passage focuses on the inadequacies of LLMs in teaching critical thinking and intellectual tasks, emphasizing the need for human involvement. It explores AI's effective use in specific educational contexts, showcasing examples from English, science, and history, and references educators like Ethan Mollick.


Searched for teacher prompting ChatGPT example

ChatGPT

OK, let me see. This exploration into practical prompting highlights the importance of trial and error. Progress in writing prompts hinges on experimenting and fine-tuning.


Read classwork.com

ChatGPT

I’m pulling together practical advice on effective ChatGPT use and critical AI literacy's context-specific nature to craft a well-rounded citation.


Read about.open.ac.uk

ChatGPT

Wondering about AI literacy's domain specificity, contextual distinctions, and practical examples. Considering various scholarly insights to inform the synthesis.


Read isbos.org

ChatGPT

Examining Dr. Beau's insights on AI literacy, incorporating technical skills, critical thinking, ethics, and communication, and including images to enhance the blog's engagement.


Searched for image of teacher-student AI collaboration in education

ChatGPT

I'm digging into "Teacher Student Collaboration Images" or specific cartoons on AI literacy and education. Freepik and open licenses may have the right visuals.


Searched for "robust AI sensibility" educational blog image

ChatGPT

Hmm, I’m thinking about the search's potential for images, but it seems less promising.


Searched for teacher generative AI stock image

ChatGPT

Focusing on free, embeddable images, I'm leaning towards stocks like Pixabay, which offers diverse content under a generous license.


Read pixabay.com

ChatGPT

I'm assessing various AI-generated images of teachers to identify one that aligns with educational context.


Read cdn.pixabay.com


Searched for Pixabay AI teacher classroom image

ChatGPT

I’m thinking through the possibility that [0] and [1] could be the same or show similar images, as [1] involves sunlight and a blackboard.


Read pixabay.com

ChatGPT

I’m curious about the hyperlink at [58], which might lead to the actual image. Is this the page you are referring to?


Read cdn.pixabay.com

ChatGPT

I’m mapping out methods to address the 'unsupported content type' error, including using the 'embed_image' method, manually checking the image link, and adjusting the image size.


Read more from cdn.pixabay.com

ChatGPT

Trying to resolve "unsupported content-type" error by exploring base64 embedding or linking to smaller pixabay images.

I'm thinking through whether to use Google Images to find relevant cartoon images or concepts pertaining to AI literacy.


Searched for AI literacy illustration human-machine collaboration

ChatGPT

I'm rethinking using AI-generated images or cartoon illustrations since the system might mistakenly flag realistic AI-generated images as real people.


Read more from cdn.pixabay.com

ChatGPT

I’m linking the cartoon teacher to "teaching concept" or "learning about AI" through a caption about building AI sensibility.


Searched for stock illustration human-robot collaboration education

ChatGPT

Considering Freepik with attribution or exploring Pixabay for free, public use.


Searched for Pixabay robot teacher illustration

ChatGPT

"Robot Teach Book" has caught my eye, so I’m taking a closer look.


Read pixabay.com

ChatGPT

I'm thinking about the "Robot Teach Book technology future". It might depict a robot as a teacher or student, possibly reading a book.


Read cdn.pixabay.com

ChatGPT

I’m incorporating a cartoon robot teacher to symbolize AI in education, using it at the introduction with an embed_image citation.


Searched for Pixabay robot handshake human illustration


Read pixabay.com


Read cdn.pixabay.com

ChatGPT

I’m thinking about embedding the image with [99†embed_image] to address the JPG content type issue. This might bypass domain restrictions and avoid a content-type error.


Read more from cdn.pixabay.com

ChatGPT

Working with smaller image dimensions.


Searched for Pixabay AI education concept illustration

ChatGPT

Compiling relevant citations to support points, including definitions of AI literacy, differentiation from fluency, human-AI collaboration, and ethical considerations.


Read isbos.org

ChatGPT

I noticed an error processing an image featuring a female teacher in a classroom. This could be due to it being AI-generated.


Read more from cdn.pixabay.com

ChatGPT

Considering image format or bug causes. Trying different URLs or downloading images sizes to test.

OK, let me see. Considering the suitability of [43] and [99] images, outlining potential themes like AI in education or human teacher guidance.

I’m gathering info on promoting critical thinking, ethical use, and personalizing AI’s role. Progressing towards a balanced, effective approach.

I'm highlighting the importance of continuous development, promoting a growth mindset, and experimenting with AI to guide students effectively in an AI-centric world.


Read more from isbos.org

ChatGPT

Lines 278-283 seem to emphasize the importance of not blindly trusting and asking for references.


Read more from isbos.org

ChatGPT

I’m pulling together citations, considering a mix of academic and blog-style references to keep the content engaging. The goal is to balance readability with credibility, ensuring a good flow throughout.



Beyond AI Literacy: Cultivating a Robust AI Sensibility in Education

(Download Robot, Teach, Book. Royalty-Free Stock Illustration Image - Pixabay) A cartoon robot teacher with a book and pointer, symbolizing AI as a learning partner.


It spent some time trying to find or generate an image to suit the post. Here is the image:




Ugh!


In the past year, many educators have encountered AI tools like ChatGPT with a mix of excitement and anxiety. Some worry that generative AI will encourage shortcut-seeking or even replace human expertise, while others see new possibilities for teaching and learning. What’s becoming clear is that we need more than basic AI literacy or fluency – we need a robust AI sensibility. This sensibility is a nuanced combination of knowledge, skills, and mindset that enables educators to harness AI’s potential critically and creatively. In this opinion piece, I’ll argue that developing a robust AI sensibility is key for educators. I’ll define what this sensibility entails, how it goes beyond simplistic notions of “literacy” or “fluency,” and offer practical guidance (grounded in research and real examples) for cultivating it in our teaching practice.


I don’t write like this. I’m wary of definitions, particularly in a field that keeps moving and changing.


From Literacy and Fluency to Sensibility

Educators often talk about AI literacy – the basic competencies needed to understand and use artificial intelligence. AI literacy is commonly defined as the ability to critically evaluate AI technologies, communicate and collaborate with AI, and use AI as a tool in various contexts (Microsoft Word - CHI 2020 AI Literacy Paper-FINAL.docx). In other words, a literate user knows what AI is (and isn’t), how to get it to do useful tasks, and how to interpret its outputs responsibly. AI literacy builds a foundation of understanding that helps users avoid naive mistakes or misuse. For example, an AI-literate teacher might understand that a tool like ChatGPT generates text by analyzing patterns, which helps the teacher trust it cautiously – appreciating its speed and fluency, yet remaining alert to errors or biases.


Some experts distinguish AI fluency as a step beyond literacy. Where literacy is about essential knowledge and safe use, AI fluency means mastering AI as a creative partner (Bridging the AI Competency Gap: From Literacy to Fluency in L&D : Articles | The Learning Guild). An AI-fluent educator doesn’t just avoid pitfalls; they reimagine workflows and innovate with AI’s help. As one workplace learning guide explains, literacy equips users to navigate AI responsibly (building trust and avoiding risk), while fluency “empowers them to reimagine workflows, solve problems creatively, and unlock AI’s potential as a true partner” (Bridging the AI Competency Gap: From Literacy to Fluency in L&D : Articles | The Learning Guild). In education, a teacher with AI fluency might not only use an AI tool to grade papers more efficiently, but also invent new AI-driven activities that engage students in creative thinking.


Robust AI sensibility, however, goes further than either term. Sensibility implies a grounded intuition and critical mindset developed through experience. It’s not just knowing the concepts or being technically skilled – it’s having the discernment to use AI in context-appropriate, ethical, and personally effective ways. A robust AI sensibility encompasses literacy and fluency but also includes:

  • Deep Understanding – knowing in broad terms how AI models work and what their limits are. For instance, recognizing that a large language model (LLM) like ChatGPT doesn’t “know” facts but predicts words based on training data, so it can sound confident while being wrong. This understanding guards us against over-trusting AI.
  • Critical Judgement – the habit of questioning and verifying AI outputs rather than taking them at face value. This includes spotting when an answer seems off, checking sources, and being alert to biases. A robust sensibility means you never fully “turn off” your human critical thinking just because the AI produced a slick answer.
  • Skillful Interaction – knowing how to communicate with AI effectively. Often this is described as prompt engineering or simply effective prompting. It means being able to translate your goals into instructions the AI can follow, and refining those instructions when the result isn’t what you need.
  • Contextual Adaptation – an awareness that how you use AI should adapt to your specific discipline, task, and the particular AI tool. For example, your approach to using an AI writing assistant in an English class will differ from how you use a coding assistant in a computer science class. Sensibility includes that adaptive know-how.
  • Ethical Mindset – understanding the ethical implications of AI use: issues of academic honesty, bias, privacy, and fairness. An educator with AI sensibility not only uses AI, but does so transparently and teaches students to use it responsibly (for example, by citing AI-assisted work or avoiding misuse in assessments).


I do like how it has structured the argument to put sensibility as an attribute beyond literacy and fluency. 


In short, robust AI sensibility is an idiosyncratic, evolving faculty. It’s “idiosyncratic” because each educator will develop their own style and standards for using AI, shaped by their subject matter and personal values. And it’s “evolving” because AI technologies and our social norms around them are rapidly changing – sensibility requires continual learning and adjustment. Unlike a checklist of skills, sensibility is more like a habit of mind or a social practice that one cultivates (OU Microsoft Word Report Template). Just as critical digital literacy is context-specific and something we do rather than simply have (OU Microsoft Word Report Template), an AI sensibility is something educators build through practice, reflection, and adaptation in their daily work with AI.


The argument I like, the writing not so much.


Understanding How LLMs “Think” (and Why It Matters)

A starting point for robust AI sensibility is demystifying AI itself. Today’s generative AIs – especially large language models – excel at producing human-like text, but in fundamentally different ways from how a person communicates. LLMs generate responses by statistically predicting likely word sequences based on vast training data. They do not have genuine understanding, factual memory in the human sense, or an intent to tell the truth. As a result, they can appear remarkably fluent while being completely wrong or nonsensical. Educators developing an AI sensibility learn about these characteristics so they can work with the AI’s strengths and around its weaknesses.


It appears to be in “teach mode”, not the style I’d use. 


Existing scholarship on AI literacy emphasizes grasping core concepts of how AI works. For example, the AI for K-12 initiative outlines “Five Big Ideas” of AI – such as the fact that AI systems perceive the world through sensors, maintain internal models, learn from data, and can impact society in positive or negative ways (AI Literacy: Closing the Artificial Intelligence Skills Gap | IBM). Knowing these principles helps teachers and students alike approach AI more critically. A key insight is that making AI interact comfortably with humans is still a major challenge (AI Literacy: Closing the Artificial Intelligence Skills Gap | IBM) – meaning these tools may generate output that sounds authoritative or human-like, yet they lack true common sense and can easily miscommunicate or misinterpret intent.


One vivid example is how LLMs handle factual tasks. An LLM might help a student draft an essay, but it cannot teach the underlying research and reasoning skills needed for quality work (AI Literacy: Closing the Artificial Intelligence Skills Gap | IBM). As Cornell’s Center for Teaching and Innovation cautions, an LLM “might help a learner write a paper or report, but [it] cannot teach the learner how to conduct original research, synthesize information from sources, formulate arguments, express opinions or cite sources properly” (AI Literacy: Closing the Artificial Intelligence Skills Gap | IBM). In other words, the heavy cognitive lifting – analyzing, evaluating, creating new ideas – remains with us. If students (or educators) rely on the AI to do more than it actually can, the work suffers. A robust sensibility means always being aware of what tasks AI can genuinely handle and what tasks require distinctly human judgment.


Importantly, LLMs have well-documented issues with accuracy and bias. They sometimes hallucinate – producing incorrect statements or even fabricated sources – and they reflect biases present in their training data. For educators, this means two things. First, don’t assume the AI is correct. Even if an answer looks plausible, double-check any facts or sources it provides. Second, realize that AI outputs may carry subtle biases (for example, reflecting stereotypes or one-sided perspectives), so use them as a starting point to discuss and examine those biases, rather than as neutral truth. Researchers in critical AI literacy note that AI systems can reinforce existing power imbalances and “epistemic injustices” if used uncritically (OU Microsoft Word Report Template). Developing sensibility involves asking whose perspective the AI might be missing or what unintended messages its responses could send in a classroom setting. This critical awareness aligns with what we already do as educators with other media – we teach students to question the authority of a textbook or website, and now we extend that to AI-generated content.


In sum, part of building AI sensibility is building a mental model of the AI: understanding it as a fallible, tool-like entity with strengths (speed, language fluency, breadth of information) and weaknesses (no true understanding, possible errors/bias, lack of context unless provided). With this foundation, an educator is prepared to use AI in an informed way, rather than as a magic oracle. As one teacher put it, “learning more about AI increased [educators’] optimism about its benefits and their ability to shape its use” once they understood how it works and what its limits are (AI Literacy: Closing the Artificial Intelligence Skills Gap | IBM).


The Art of Effective Prompting and Interaction

If understanding AI is one pillar of sensibility, communication with AI is the other. Large language models are conversational by design – they take input (a prompt) and produce output. The quality and usefulness of that output often depends greatly on how we ask. Thus, learning to craft good prompts is emerging as a new literacy for educators.


At first, prompting might seem straightforward: you type a question, you get an answer. But any educator who has experimented with ChatGPT or a similar tool quickly discovers that how you ask makes a big difference. Consider a teacher trying to get help generating quiz questions on a novel. A vague prompt like “Give me some questions on To Kill a Mockingbird” might yield simple recall questions. But if the teacher instead prompts, “You are an English teacher creating a quiz. Generate 5 thoughtful discussion questions about themes and symbolism in To Kill a Mockingbird, with answers.”, the output will likely be deeper and more on target. The teacher might then follow up: “Great, now make question 3 more challenging for advanced students.” In this iterative way, the educator guides the AI to produce a better result.


I would have swapped the focus and suggested having students write prompts to explore the book. Using AI to generate work for students strikes me as unhelpful. I’d set tasks that asked a lot more of students as they made use of AI to think through analysis.


This kind of iterative, strategic prompting is a skill that improves with practice. In fact, resources for teachers recommend experimenting with prompts and refining them to suit your needs. As one guide notes, “The key is to experiment to see what works. It does take a few tries to get good at writing prompts... Keep refining your prompts until you get something that fits your students and classroom.” (50 ChatGPT prompts for teachers | TeacherMade). What does refining a prompt look like? It could mean specifying the reading level of the output (“Explain the water cycle in 5th-grade language”), providing an example or format (“List pros and cons in a table.”), or giving the AI a role or persona (“Act as a tutor and explain step-by-step…”). These techniques help coax the AI toward the kind of response you want.


Effective prompting also involves knowing how to handle partial or unsatisfactory results. If an AI’s first answer is off-base or contains errors, a user with robust sensibility doesn’t just shrug and give up. Instead, they treat it like a rough draft: either ask the AI to improve it (“Can you provide more detail on point 2 and check the facts?”) or use the information as a springboard and do their own editing. In this sense, interacting with an AI can become a dialogical process, almost like coaching a student. You give feedback (through another prompt) and the AI attempts again.


For educators, mastering prompting is especially powerful. It means you can bend the AI to serve pedagogical goals – whether it’s simplifying a complex explanation, generating multiple examples for a concept, or even adopting a Socratic questioning style to challenge a student. All these require carefully worded prompts and follow-ups. Fortunately, teachers are generally skilled communicators, and many find that after a learning curve, they develop an intuition for prompting. They develop a personal style with the AI: some may prefer very structured prompts, others a more conversational approach; some may use a formal tone, others playful – whatever gets the job done. There’s no single right way to prompt, which is why we speak of sensibility: each educator will hone their own approach over time, suited to their voice and needs.


One practical tip is to reflect on successful vs. failed AI interactions. For example, if an attempt to get lesson plan ideas from ChatGPT yielded trivial suggestions, examine your prompt. Were you too broad? Did you provide enough context? Adjust and try again. Conversely, if you got a brilliant activity idea after tweaking your prompt, take note of what you changed. Sharing these experiences with colleagues can accelerate learning this art. Many teachers have begun swapping prompt ideas and even prompt “hacks” (like telling the AI its answer will be reviewed by an expert, to raise its rigor). Such community knowledge-building is part of growing a collective sensibility around AI in education.


There is so much advice like this online. It signals the current thinking around this teacher-centred way of thinking and working. It’s a domestication attempt that, while perhaps useful initially, misses the point of supporting students to develop their own AI sensibilities. 
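Mechanically, the iterate-and-refine pattern the piece describes is just a growing conversation history: assign a role, give a specific task, keep the model’s draft in the transcript, and append a targeted follow-up. A minimal sketch of that structure, using the quiz example above — the function names and dict layout are illustrative, not any particular vendor’s API (the model call itself is omitted):

```python
# Sketch of iterate-and-refine prompting as the message history a
# chat-style model receives. Names here are illustrative only.

def start_conversation(role, task):
    """Begin with a role assignment plus a specific, formatted task."""
    return [
        {"role": "system", "content": f"You are {role}."},
        {"role": "user", "content": task},
    ]

def refine(history, model_reply, follow_up):
    """Treat the reply as a draft: keep it in the history, then append
    a follow-up prompt asking for a targeted improvement."""
    history.append({"role": "assistant", "content": model_reply})
    history.append({"role": "user", "content": follow_up})
    return history

messages = start_conversation(
    "an English teacher creating a quiz",
    "Generate 5 discussion questions about themes and symbolism "
    "in To Kill a Mockingbird, with answers.",
)
messages = refine(
    messages,
    "(the model's first draft of the quiz would appear here)",
    "Make question 3 more challenging for advanced students.",
)
```

The point of laying it out this way is that the “refinement” is nothing magical: the model simply re-reads the whole transcript, draft included, each turn — which is also why students writing and revising their own prompts is a legitimate analytical exercise rather than mere button-pushing.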


Cultivating Your Own AI Sensibility in Practice

What does developing a robust AI sensibility look like in day-to-day educational practice? It’s not a one-time training or a box to check, but an ongoing process. Here are some practical steps and strategies for educators to cultivate this sensibility:

  1. Learn the Basics of Your AI Tool – Start by understanding the particular AI tools you use. Is it a general LLM like ChatGPT, or a subject-specific AI tutor, or a data analysis assistant? What data was it trained on, and what knowledge cutoff or update cycle does it have? Knowing this helps set expectations. For instance, if you know ChatGPT’s training data goes up to 2021, you won’t expect it to have the latest scientific findings. If an AI chemistry tutor is great at explaining concepts but not reliable at complex calculations, you’ll use it accordingly. This tool-specific knowledge is part of sensibility – you tailor your use to the tool’s strengths.
  2. Integrate AI with Lesson Goals in Mind – Rather than using AI for its own sake, anchor its use to your instructional objectives. Ask, “How can this tool enhance what I’m trying to achieve?” Maybe you want to differentiate instruction for varied reading levels – you could prompt the AI to adapt text for struggling readers. Or you need fresh project ideas – you could brainstorm with the AI. Always consider pedagogy first, tool second. This mindset prevents AI from becoming a gimmick; instead, it becomes a valuable extension of your teaching toolkit, used with purpose and intention.
  3. Practice and Refine Prompting – As discussed, make experimenting with prompts a regular habit. Try out new prompt phrasings when you have a spare moment and see how the AI responds. Over time, you’ll build a repertoire of effective prompts (and you can save these somewhere). For example, you might discover that starting a prompt with “Help me brainstorm…” yields more creative outputs, or that asking the AI to “role-play” as a debate opponent produces a great classroom discussion prompt. This is an ongoing learning-by-doing process. The more you converse with AI, the more you learn how to converse with AI. If possible, compare notes with colleagues: what wording worked for them? Prompting is partly art, partly technique – and it develops faster with collaborative learning.
  4. Always Verify and Vet AI Output – Develop a routine of checking AI-generated material before you use it in class or pass it to students. A robust sensibility treats AI output as a draft or suggestion, not a final authority. Got an answer or content from AI? Cross-check key facts. If the AI gave you a historical anecdote to use in a lesson, look it up quickly to ensure it’s real and accurate. If it created a quiz, review each question and answer for correctness and appropriateness. This might sound time-consuming, but it’s the educational equivalent of proofreading – essential to ensure quality. Over time, you’ll get a sense of which types of AI outputs are likely to need heavy editing versus which are usually fine. (For instance, you might learn that AI explanations of math concepts are often good, but its references for literature need double-checking.) By consistently vetting outputs, you’re also modeling the critical stance we want students to adopt. In fact, you can turn this into a learning activity: show your class an AI-generated paragraph with a few errors and have them identify and fix the mistakes. This builds their critical eye as well as yours.
  5. Address Ethical and Quality Concerns Explicitly – Robust sensibility includes an ethical compass. Talk about when and how it’s appropriate to use AI in your classroom. For example, you might decide that using AI to generate practice problems is great, but using AI to write student essays is not allowed. Communicate these boundaries to students, and more importantly, the reasons behind them (e.g. “We value original thought and want to see your reasoning, so you shouldn’t have ChatGPT write your essay for you.”). Also, teach students how to use AI responsibly when they do use it: for instance, if they use AI for research, they should always check the sources or have the AI provide references (AI Literacy: A New Paradigm in Education | Post Details). Emphasize that AI can assist learning but doesn’t replace it – a point backed by experts who warn against “over-reliance” on AI without understanding (AI Literacy: A New Paradigm in Education | Post Details). At the International School of Boston, educators explicitly remind students not to blindly trust AI and to verify its answers by asking for supporting evidence (AI Literacy: A New Paradigm in Education | Post Details). They even encourage students to repeat tasks without AI to ensure they have truly learned the skill, using any discrepancies to highlight where AI might have led them astray or made them skip a learning step (AI Literacy: A New Paradigm in Education | Post Details). Incorporating such practices into your teaching helps nurture students’ own AI sensibility alongside yours.
  6. Adapt to Your Discipline and Personal Style – Finally, remember that sensibility is personal and contextual. Feel free to adapt how you use AI to fit your subject area and your style. A history teacher might use AI to generate multiple perspectives on a historical event and then guide students in analyzing the bias in each perspective. A science teacher might use AI to simulate a lab setup or generate sample data, but will stress the importance of the scientific method and real experimentation. An English teacher might use an AI grammar checker or have students improve a story generated by AI, focusing on literary style and human creativity. In each case, the teacher’s own expertise and goals shape the AI’s role. There is no one-size-fits-all: robust sensibility means finding what works for you and your students. It’s okay if your use of AI looks different from your colleague’s. Maybe you’re an early adopter who uses AI daily in planning lessons, or maybe you’re cautious and only use it occasionally to save time on administrative tasks – both can be examples of sensibility if done thoughtfully. The key is that you remain in the driver’s seat. Use AI as an “augmenter” of your teaching, not a replacement. As one educator puts it, “AI is a tool but not a substitute for human creativity, critical thinking, or deep understanding.” (AI Literacy: A New Paradigm in Education | Post Details). Keeping that principle at heart ensures that your use of AI, however extensive or limited, stays aligned with educational values.


Conclusion: Sensibility as the New Literacy

In the age of intelligent machines, simply knowing how to use AI is not enough; we must also know when, why, and in what way to use AI. This is the crux of a robust AI sensibility. It’s theory-informed – drawing on what we know about AI’s workings and its social implications – yet it’s also deeply practical and personal. For educators, cultivating this sensibility is quickly becoming an essential professional competency, akin to digital literacy in the internet age. The difference is that AI is a far more dynamic and uncanny partner: it can generate content, make suggestions, and even appear to converse like a colleague. This amplifies both the opportunities and the pitfalls.


The good news is that developing an AI sensibility is very much within our reach, and in fact, many teachers are already well on their way. Every time you critically evaluate an AI-generated quiz before giving it to students, every time you tweak a prompt to get a better outcome, every time you discuss an AI’s mistake as a teachable moment – you are practicing the habits of robust AI sensibility. Over time, these habits coalesce into a confident, thoughtful approach to human-machine collaboration in the classroom.


Educators are lifelong learners, and this is just one more arena of learning. By engaging with AI rather than ignoring it, we can shape how these tools fit into our schools. We can model for students what it means to use emerging technology wisely and creatively. In doing so, we prepare them not just to use AI, but to do so with their own sensibility – to question, adapt, and innovate as empowered human partners in a world of smart machines. That, ultimately, is the new literacy we owe our students: not just reading and writing with AI, but thinking and living with it in a way that is informed, critical, and richly human.


Finally it gets to students. And yes, modelling is what it is all about. But it has to be collaborative. I should have included that in formulating the prompt.


Leaving aside the style, the bot can produce a useful line of reasoning when given a specific problem in deep research mode.


Sources:

  1. Long, D. & Magerko, B. (2020). What is AI Literacy? Definition and competencies. CHI 2020.
  2. Learning Guild (2023). Bridging the AI Competency Gap: From Literacy to Fluency in L&D – differentiating basic understanding from innovative use.
  3. Van Noorden, R. & Perkel, J. (2023). Scientists’ use of AI in writing – survey results on collaboration with AI in academia.
  4. Cornell CTI (2023). Guidance on generative AI in education – limits of LLMs in learning.
  5. Beau, M. (2025). AI Literacy: A New Paradigm in Education – blog post on combining technical fluency with critical evaluation.
  6. TeacherMade (2023). 50 ChatGPT Prompts for Teachers – advice on experimenting with prompt engineering.
  7. The Open University (2025). Critical AI Literacy Framework – emphasizes context-specific practice and bias awareness.
  8. International School of Boston (2024). Integrating AI in Coding & Research Classes – strategies to prevent over-reliance and encourage critical use of AI.
  9. International School of Boston (2024). AI Guidelines for Students – stressing verification of AI outputs and the auxiliary role of AI.






