Dominant narratives of AI in HE (by Jonathan Tulloch)

Martin: I read and discussed my colleague Jon’s notes for a recent presentation he did and asked him to write them up to share them further and I am delighted to say he agreed to share them here. Jon’s provocations below are framed from a position of considerable learning tech expertise and experience and in my view offer a really thoughtful challenge to dominant narratives related to AI. As you may know, I love a good prod at the hornets’ nest and Jon asks some really fundamental questions that, like the technologies we all find ourselves confronted with, are themselves disruptive.

I do find myself slightly uncomfortable about my role in relation to AI. As a learning technologist, I am frequently asked to train people in how to use it, and to enthuse about all its possibilities. I am expected to generate engagement from academic staff – and to assure them that ‘AI is essential for Industry 5.0’ (or wherever we are up to now). Now I’m not suggesting this is wrong. But I am uncomfortable about the lack of criticality – and lack of precision – underpinning our reasons for using it. I rarely get asked to address the issue of why we should be using it. Industry 5.0 is often given as the reason, but this concept itself appears to be rather vague guesswork about what the future might look like. And the concept of AI is itself amorphous: constantly changing, and encompassing an impossibly broad set of ideas, systems, processes, functions and platforms. To say ‘we must promote AI’ to equip students for ‘Industry 5.0’ is to say we must promote something undefined, to equip students for something unknown. So I end up feeling like an attendant standing in front of a slide at a water park, enthusiastically encouraging people to dive into the tunnel and promising them how much fun it will be. When really, I have no idea where the tunnel leads. And nobody else seems to know either.

I’m not even convinced it’s a tunnel. So in this post, I want to suggest two things.

  1. There are at present two dominant narratives around AI – neither of which properly addresses what people want from it or why we should be using it.
  2. These narratives appear to be perpetuated within Higher Education.

Based on these suggestions, I want to ask a question:

Can we imagine a different narrative?  If so, how?  And would it be helpful?

So are we ready? Then we can begin…

This is the first narrative – I call this the ‘hey, look at what you can do!’ narrative.  This narrative focuses on all the cool stuff you can do with AI. You know the kind of thing:

Hey look! – you can use AI to generate your own clipart, cute avatars or biologically improbable photos!

You can get AI to predict your email responses, organise your to-do list, and generate your own cartoon characters!

I have been suitably impressed by people who have sent me examples of how they have created AI videos of themselves conducting law lectures while riding horses in the wild west!  All very cool. Of course, none of these were things you actually asked for, or thought you needed.  And most of the time you don’t really understand why you’re doing it.  And yet you feel that somehow you have to. We can feel an imperative to use them – because if we don’t we run the risk of being an outsider.  Disadvantaged – or something similar. You can see this in adverts for platforms like ClickUp, Monday.com or Grammarly.  The suggestion that you should be using AI to write proper sentences, organise your time and manage your projects – regardless of whether you feel you need support in any of those areas.  And of course, if you are NOT using AI – then you are inevitably going to fall behind everyone else.

These tap into a feeling that is very much present when we think about AI. Are you willing to risk being the only one left who can’t use it? The person still trying to fax their job applications to potential employers? AI is simply the way things will have to be done in the future – so get on board with all the cool things AI can do, or be left at the station.

Alternatively, in the second narrative we hear about all the terrifying stuff AI is going to do to you, whether you want it to or not.  I call this the ‘AI controls your future’ narrative. You know the kind of thing.  AI is going to take over our jobs, make all our decisions for us, write all our songs and presumably, eventually conclude we are surplus to requirements and stick us in small tubes to function as a power source.  A lot of films have been based on this narrative – so it’s a good one.  People like it.

This is the narrative behind those headlines that imply that AI will control the future. It is a dystopian vision – where AI ends up either saving humanity or destroying it. This narrative is reflected in the now-famous open letter published by the Future of Life Institute in 2023, calling for a pause on giant AI experiments:

The letter reads: “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

According to this narrative, civilization is at a tipping point, and it is humanity itself that is at risk.  But even as we ponder the fact that Elon Musk was a signatory to this letter, we sense that there is an inevitability to all this.  Mustafa Suleyman, one of the key figures in the development of gen AI, describes it as a wave of technology – that cannot be “uninvented or blocked indefinitely” and that “leads humanity toward either catastrophic or dystopian outcomes”. Underpinning this narrative is anxiety and a sense of powerlessness.  Anxiety about the impact of AI, and a worry that there is absolutely nothing we can do about it.  That we are in the last days of humanity – the last days where things like community, compassion, fallibility, imagination have value.

So – those are the two narratives. AI represents either cool toys that you have to learn to use if you want to stay relevant. Or it is a threat to humanity itself. It is what you can do, or what you must do. What it isn’t, is something you want to do. Now you may already be ahead of me with this. In the whole debate around AI in Universities, we largely seem to be perpetuating these two dominant narratives. We appear increasingly desperate to find some way – any way – of incorporating Gen AI into our modules, partly because – hey look, isn’t it cool? This might engage my students or make them interested, or impress my external examiner. And partly because of the feeling we will be failing them somehow if we don’t immerse them in AI as soon as possible, because if our students are not able to use AI then how will they get a job? So we try teaching them how to use Copilot to plan their assignments, how to use Claude to develop code, or how to use Gemini to teach them new software. Of course AI isn’t necessary for any of these tasks – but if our students can’t use this kind of tool, they might be left behind.

That’s the first narrative.

At the same time, Universities are immersed in the dystopian narrative in which AI poses an existential threat to the values of education. A good example is the issue of academic integrity in the age of AI. The feeling that AI poses a threat to the integrity of education – to authenticity and ethics. I’ve lost count of the number of times I have been told how urgent it is that we get AI-detection software for written assignments. Or how often I have heard people talk about how important it is to develop assessments where AI simply can’t be used.

That’s the second narrative.  The one in which AI is a cataclysmic threat that needs to be countered.

Interestingly, we can see these narratives in systematic reviews listing the most-cited scholarly articles about the use of AI in higher education. Have a look at the titles below and see if you can spot these narratives at work…

Of course there are no right or wrong answers to this – but here are some thoughts:

  • The Dwivedi source implies that generative AI is creating significant challenges for ‘research, practice and policy’ – it is destabilising the status quo, and demanding that we re-think practice to accommodate it. This is the second narrative.
  • The Lee source is coloured by the first narrative – ‘look at these cool AI chatbots!’  The actual study relates more to the need for individualised learning for students, but the title shapes this discussion into one in which the focus is the AI tool.
  • The Rudolph source builds the second narrative explicitly into its title – referring directly to the idea that Gen AI spells ‘the end of traditional assessment in higher education’.
  • The Tlili source again leans into the second narrative, implying that Gen AI is either a force of extreme evil, or a force of extreme good.  Either way, it is something that significantly determines our choices.
  • The Pavlik title reflects the second narrative: It describes the tool but suggests that we ‘collaborate’ with it. This suggests that Gen AI is not a tool used to serve our needs, but something that has equal status with human practitioners. But this extends further to suggest this collaboration is unavoidable: The title does not suggest that this collaboration is a choice to be made, but a reality to confront. And if we have no choice but to collaborate, doesn’t that imply we are not of equal status?

I could go on, but I’m sure you get the point. We can see these narratives in student surveys as well. In 2024, the Digital Education Council released its report on student expectations about AI – titled ‘What Students Want’. Among their findings, they found that 86% of students surveyed “claim to use AI in their studies”. The most recent study from HEPI shows this had risen to 95% by 2026. Mostly they are using it to search for information and fix their grammar – but over the last few years there have been increasing signs of students using AI more as a kind of digital tutor: explaining concepts and providing feedback. But the interesting bit is when you look for evidence of what is driving them.

Because the DEC survey showed 52% of students actually think AI negatively impacts their academic performance. In their 2026 report, HEPI showed that 51% of students think AI negatively impacts their student experience. An earlier HEPI survey showed 82% of students wanting to use AI less. So why are they using it? Well, according to the 2026 HEPI survey, 68% of students believe “it is essential to understand and be able to use AI effectively”. The JISC survey similarly found that students were “concerned about acquiring the necessary generative AI skills for future workplaces”. This is the first narrative. Gen AI is a cool new toy that everyone is using, and there is almost a FOMO (‘Fear of Missing Out’) attitude towards it. Notice that the emphasis is that Gen AI is necessary for future workplaces – not that Gen AI improves those workplaces. But there is anxiety too. Again in the 2026 survey, 65% of students express fear that AI makes learning less valuable.

A Higher Education for Good survey found many students expressing the worry that AI will “render them incapable of functioning without it”, fearing the “dehumanization of education”.  The same report highlights concerns about “the lack of humanity in A.I.” and that “A.I. in education could lead to ubiquitous surveillance of students”.  In all the surveys there is a common thread in which students express implicit or explicit fears about how AI threatens the very humanity of educational communities. They demonstrate fears of dehumanization, a distrust of AI, and the feeling that AI will never be able to provide the same value as human production. 

This is the second narrative.

And we are still no closer to understanding what people actually want. Back in February, I conducted a survey and asked people ‘what do you wish AI could do for you?’ The results were interesting. Staff wanted AI to ease the burden of marking for them. Students wanted AI to help them with time management. And with tidying. Both groups picked something they found frustrating or difficult and said this was what they wanted AI to help them with. It is significant that staff did not suggest they wanted AI to create their teaching resources or write their lectures for them – although at many events like this, that is what they are being shown they can do. And it is significant that students did not suggest they wanted AI to write their essays for them – perhaps because they were worried about admitting it. But at the same time they didn’t want AI to handle their childcare. Or do their cooking. Because some things are difficult, yes. And time-consuming, yes. But they are also fulfilling on a human level – and as the open letter said, we don’t want AI to take over those very human things. If AI is going to do things for us – let it be the mind-numbing, soul-destroying stuff that makes us feel less human. So, what are those things? Not ‘what can AI already do for us’, or ‘what we must learn to do with AI’. But what – in an ideal world – would we actually want AI to do for us?

Or let me put it another way: In the words of Neil Postman…

 “What is the problem to which this technology is the solution?”

Think specifically about yourselves as educators – and your students.  And try and avoid falling into the two dominant narratives.

Bibliography:

Attewell, S. (no date) How will generative AI affect students and employment?, Luminate. Available at: https://luminate.prospects.ac.uk/how-will-generative-ai-affect-students-and-employment (Accessed: 28 May 2025).

Batista, J., Mesquita, A. and Carnaz, G. (2024) ‘Generative AI and Higher Education: Trends, Challenges, and Future Directions from a Systematic Literature Review’, Information, 15(11), p. 676. Available at: https://doi.org/10.3390/info15110676.

Brynjolfsson, E., Li, D. and Raymond, L. (2025) ‘Generative AI at Work’, The Quarterly Journal of Economics, 140(2), pp. 889–942. Available at: https://doi.org/10.1093/qje/qjae044.

Chan, C.K.Y. (2024) Generative AI in Higher Education; The ChatGPT Effect. London: Routledge.

Digital Education Council Global AI Student Survey 2024 (2024). Digital Education Council. Available at: https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-student-survey-2024.

Freeman, J. (no date) ‘Student Generative AI Survey 2025’.

Gebru, T. et al. (2024) Statement from the listed authors of Stochastic Parrots on the “AI pause” letter, Dair Institute. Available at: https://www.dair-institute.org/blog/letter-statement-March2023/ (Accessed: 28 May 2025).

Gulati, P. et al. (2025) ‘Generative AI Adoption and Higher Order Skills’. arXiv. Available at: https://doi.org/10.48550/arXiv.2503.09212.

Hashem, R. et al. (2024) ‘AI to the rescue: Exploring the potential of ChatGPT as a teacher ally for workload relief and burnout prevention’, Research and Practice in Technology Enhanced Learning, 19, pp. 023–023. Available at: https://doi.org/10.58459/rptel.2024.19023.

Jacobides, M.G. and Ma, M.D. (2024) ‘IoD | London Business School Policy Paper – Assessing the expected impact of Generative AI on the UK competitive landscape’.

Laura, R.S. and Chapman, A. (2009) ‘The technologisation of education: philosophical reflections on being too plugged in’, International Journal of Children’s Spirituality, 14(3), pp. 289–298. Available at: https://doi.org/10.1080/13644360903086554.

Leaver, T. and Srdarov, S. (2025) ‘Generative AI and children’s digital futures: New research challenges’, Journal of Children and Media, 19(1), pp. 65–70. Available at: https://doi.org/10.1080/17482798.2024.2438679.

Ogunleye, B. et al. (2024) ‘A Systematic Review of Generative AI for Teaching and Learning Practice’, Education Sciences, 14(6), p. 636. Available at: https://doi.org/10.3390/educsci14060636.

Pang, W. and Wei, Z. (2025) ‘Shaping the Future of Higher Education: A Technology Usage Study on Generative AI Innovations’, Information, 16(2), p. 95. Available at: https://doi.org/10.3390/info16020095.

Postman, N. (1999) Building a Bridge to the 18th Century: How the Past Can Improve Our Future. New York: Vintage Books.

‘Student Generative Artificial Intelligence Survey 2026’ (2026) HEPI, 12 March. Available at: https://www.hepi.ac.uk/reports/student-generative-ai-survey-2026/ (Accessed: 26 March 2026).

Student perceptions of generative AI report (2024). JISC. Available at: https://www.jisc.ac.uk/reports/student-perceptions-of-generative-ai.

Thomson, H. (2025) ‘“Don’t ask what AI can do for us, ask what it is doing to us”: are ChatGPT and co harming human intelligence?’, The Guardian, 19 April. Available at: https://www.theguardian.com/technology/2025/apr/19/dont-ask-what-ai-can-do-for-us-ask-what-it-is-doing-to-us-are-chatgpt-and-co-harming-human-intelligence (Accessed: 1 May 2025).

Wei, X. et al. (2025) ‘The effects of generative AI on collaborative problem-solving and team creativity performance in digital story creation: an experimental study.’, International Journal of Educational Technology in Higher Education, 22(1), pp. 1–27. Available at: https://doi.org/10.1186/s41239-025-00526-0.

Youth Talks on AI (2024). Switzerland: Higher Education for Good. Available at: https://youth-talks.org/wp-content/uploads/2024/06/Youth-Talks-on-AI-Final-report-03062024.pdf.

Yusuf, A., Pervin, N. and Román-González, M. (2024) ‘Generative AI and the future of higher education: a threat to academic integrity or reformation? Evidence from multicultural perspectives.’, International Journal of Educational Technology in Higher Education, 21(1), pp. 1–29. Available at: https://doi.org/10.1186/s41239-024-00453-6.

‘Good’ English & the AI thing

I was once told by a fellow English teacher (back in the day) that it was funny I was teaching English. I asked why. ‘Well, you’re all ‘wiv’ and ‘froo’,’ she said, highlighting my non-standard accent. After that I meekly tried to speak ‘better’ for ages, especially in the company of the other English teachers. It knocked me sideways, but over time I fought back in small ways. Not enough though. Conversations and a couple of articles I read on WonkHE this week have given me a chance to think about how AI is offering leverage to change, and a critical lens on what we value in so-called academic English.

I have just come from a meeting with Kelly Webb-Davies, whose thinking (nicely exemplified here: ‘I wrote this – or did I?’ and here: ‘On the toxicity of assuming writing is thinking’) and ongoing work at Oxford I have long found a perfect prod to the hornets’ nest that is my brain. What Kelly consistently does so well, and what came through again very strongly in our discussion, is a refusal to accept a binary, nuance-less framing of AI as either an existential threat or a panacea for drudgery. Instead, Kelly’s focus in our conversation was on much harder and more important questions: what do we think writing is for (anywhere, but especially in academia, where student writing is the means by which students are evaluated rather than the thing itself being judged), why do we value it so highly in assessment, and where might our assumptions about writing, cognition and academic legitimacy be misplaced?

Kelly’s work is particularly powerful in contrasting privileged and/or colonial thinking about ‘correct’ and ‘acceptable’ academic writing with translanguaging, our idiolects, cultural and linguistic histories and neurodiversity. Kelly problematises assumptions about writing as a technology that suggest writing is a natural proxy for thinking, or even THE way of doing thinking (can you do thinking?). In addition, writing has always evolved alongside other technologies, from inscription to print to word processors, and yet higher education continues to treat a narrow, highly codified form of written English as if it were a timeless measure of intellectual engagement. Thinking manifests in multiple ways, many of which are poorly served by conventional academic writing. AI is able to bridge idiolect to ‘accepted’ language in standard forms (see how I keep using inverted commas?), translating between ways of thinking and the forms the academy currently recognises. This raises questions about how far we might use AI to compromise in this way, or whether we might leverage this truly disruptive phenomenon to challenge yet more of our fundamental beliefs about what is and is not acceptable, as well as how we validate learning. Received wisdom may not be that wise at all; a point I tried to make in my previous post. I’m really looking forward to reading and hearing more about Kelly’s ideas for Voice-First Written Assessment (VFWA), a really practical and thoughtful approach that values the way people think, speak and write, putting that at the heart of assessments (while enabling valid assessment; everyone’s a winner!).

My meeting coincided with the publication of a piece from Jim Dickinson earlier this week. Dickinson does not deny the risks of AI, nor does he minimise the evidence that poorly designed uses of AI can undermine learning. The folk (me included) writing or talking about AI often feel obligated to front-load caveats and boundaries (‘I understand this; I don’t want to talk about that…’ while thinking: ‘please don’t derail this session by insisting we talk about y’), though Dickinson does a pretty good job of weaving a LOT of things in! He very helpfully brings together ‘famous’ – perhaps even notorious – studies from a growing body of research showing, when taken collectively, how uncritical, convenience-driven AI use can hollow out motivation, attention and the owning of ideas and of learning itself, but can also be a boon to ‘productive struggle’ and a valuable scaffold (if used advisedly). What emerged is that framing the problem as AI itself – a broad, abstract threat – is not helpful. The ongoing research is critical, but that shouldn’t legitimise stalling interventions, exploration and confrontation of the harsh realities of the extensive, actual, often unsupported use of AI in research and writing (as production) workflows. The way learning and assessment continue to be designed around immature understandings, constrained by tradition and conservatism (my reading, not his words), gives a sense that what we thought we were doing was frail even before ChatGPT. The same tool that produces passivity in one design can deepen judgement and persistence in another. Potential and frailty coexist, and the difference is pedagogical intent, design and scaffolding.

In another piece this week on WonkHE, Rex McKenzie makes an observation that will, I am sure, cause debate and consternation, but it’s a fundamental one. McKenzie’s comparison between university expectations and journal publishing practices is pretty stark. He shows that while students in many (most? all?) institutions are policed for using AI to shape language, structure and expression, the professional academic world (as manifested in journal policies on AI use) has largely accepted AI-assisted writing, provided accountability and disclosure remain with the human author. This contrast exposes how much of what we currently assess is not intellectual substance but adherence to a particular linguistic performance. Across academic publishing, trust appears to be growing on the assumption that if writing is honed it is OK, because the research and ideas are owned by the authors. This raises a really uncomfortable question about permitted AI use (not least in traffic-light systems), where ideation is often seen as fine but the use of AI to support writing is taboo. Have we got it arse about face? (I used that phrasing as a less than subtle way to signal that no AI is choosing my metaphors.)

Rex McKenzie’s thinking converges strongly with Kelly’s argument. The insistence that “writing is thinking” is not an innocent pedagogical claim; it is historically and culturally situated. It privileges those already fluent in dominant academic registers and marginalises others, whether through class, language background, neurotype or other traits or characteristics. Treating one form of English as the sole legitimate evidence of cognitive engagement risks mistaking conformity for rigour. AI does not create this problem, but (finally!) it makes it harder to ignore. Talking with Kelly, reading these articles and thinking about the ‘problem’ of AI, I arrive at what will be, for many, a really uncomfortable conclusion. It’s been said before, but it’s worth saying again through a lens of challenge to the hegemony of standard forms of expression and writing: the AI threat is not primarily about cheating, efficiency or technological disruption. It’s a threat to convention; to conservatism; to tradition; to imagined halcyon times. Let us re-articulate and argue about what we value, what we recognise and what we are willing to redesign. If we continue to treat writing as both the locus and the pre-eminent proof of thinking, we will remain trapped in defensive and incoherent policy positions. If, instead, we take seriously questions of cognitive engagement, judgement and inclusion, then AI catalyses (and can even enable) long-overdue honesty about assessment, pedagogy and the realities of how systems continue to marginalise.

Kelly will be speaking at a compassionate assessment event on 5th March – you can also see my contribution to the compassionate assessment resources via QAA pages here.

AI and academic misconduct – some context and provocations

I’m sorting through all my files as I prep to hand over my role and start my new job and came across this discussion activity I designed but have yet to run. It’s in 3 parts with each part designed to be ‘released’ after the discussion of the previous part. In this way it could work synchronously and asynchronously.

Part 1: AI, cheating and misuse


AI-related misconduct in HE is far more complex than simple notions of “cheating”. In fact, how we define cheating (both institutionally and individually) is worth revisiting too. Recent prominent news stories (examples at end) reveal sharp inconsistencies in how universities define, detect and sanction what is seen as inappropriate AI use, despite there being, at best, only emergent policy and certainly wide-ranging understandings and interpretations of what is acceptable. Some ban all AI; others permit limited use with disclosure. Cases show students wrongly accused, rules unevenly applied and anxiety rising across the sector amongst both staff and students.

Key points

  • Huge variation exists between and within institutions: one university may expel a student for Grammarly use, another may allow it.
  • Detector-driven accusations often fail under appeal because AI scores are not proof.
  • Policies lag behind practice: vague guidance leaves staff and students uncertain what constitutes fair or inappropriate use.
  • Where detectors are used, false positives disproportionately affect neurodivergent and non-native English writers whose linguistic patterns deviate from ‘naturalistic’ norms/expectations/markers.
  • Misconduct procedures must ensure the provider proves wrongdoing; suspicion is insufficient.
  • I would argue that assignments with fabricated or ‘hallucinated’ references should automatically fail though this does not seem to be consistent, even where such examples are used as markers of inappropriate AI use.

Initial discussion prompts

  • Is ‘AI misconduct’ a meaningful category or an unhelpful new label for old issues? What alternative umbrella term might we use – perhaps one that is more neutral for assessment policy?
  • How do inconsistent institutional rules affect fairness for students across programmes? Where are these inconsistencies? What needs updating/ changing?
  • What does it mean for academic judgement when technology, not evidence, drives decisions? Even though we have made a decision at King’s not to use detectors we need to ensure ‘free’ detectors are not used nor are other digital traps or tricks that undermine trust (eg, deliberately false references included in reading; hidden prompt ‘poisoners’ in assessment briefs).
  • Do we need to overhaul common understandings (eg in academic integrity policy) of what constitutes cheating, how far 3rd-party proofreading can be considered legitimate and whether plagiarism is an adequate umbrella term in the age of AI? Is this a good time to rethink what we assume are shared understandings, and to consider whether this ideal is actually a mask or illusion?

Part 2: Detecting AI: Not as simple or obvious as we might think


Detection of AI-generated writing, whether by technical means (using AI to detect AI) or ‘because it feels like AI’, has never been as easy as some make out, and it is increasingly difficult as tools proliferate and improve. While tools claim to identify machine-written work through statistical cues such as ‘perplexity’ and ‘burstiness’, modern large language models are now better able to mimic these fluctuations. Detection confidence often rests on illusion: humans and detectors alike may only be spotting inexpert AI writing. Skilled prompting, ‘humanisers’ and improved models blur the line between human and synthetic text, exposing the fallibility of ‘gut-feeling detection’.
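To see how crude these statistical cues can be, here is a toy sketch of a ‘burstiness’-style measure – simply the variation in sentence length across a text. This is my own minimal illustration, not the algorithm any real detector uses; the function name and the naive sentence splitting are assumptions for demonstration only. The point is that anything this shallow is trivially gamed: a model (or a ‘humaniser’) just has to vary its sentence lengths.

```python
import statistics

def burstiness(text: str) -> float:
    """Toy 'burstiness' proxy: how much sentence lengths vary.

    Human writing tends to mix short and long sentences; early,
    low-temperature model output was often more uniform. Real
    detectors use far more elaborate (but still fallible) signals.
    """
    # Deliberately naive sentence splitting on end punctuation
    normalised = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalised.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: spread of lengths relative to the mean
    return statistics.stdev(lengths) / statistics.mean(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = "Stop. The dog ran off down the long road past the old mill. Why?"
print(burstiness(uniform) < burstiness(varied))  # → True
```

Notice that the ‘uniform’ text scores zero and the ‘varied’ one scores high – yet a conscientious student taught to write evenly-paced prose would score like a machine, which is exactly the false-positive problem described below.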

Key points

  • Early AI detectors relied on predictable signatures in low-temperature outputs; current models vary their linguistic rhythm automatically, making detection increasingly unreliable.
  • Human readers are equally fallible: many assumed ‘tells’ (clarity, neatness, tone, specific word choices, punctuation norms) may simply reflect the style of conscientious, neurodivergent or multilingual students.
  • False accusations carry ethical and procedural risks. The Office of the Independent Adjudicator has also reinforced risks related to use of AI detectors.
  • King’s guidance (rightly imo) cautions against use of detection tools and emphasises contextual evidence and dialogue. This includes detection by ‘feel’ of writing.
  • The sector consensus (see articles from Jisc, THE and Wonkhe, below) is clear: detectors may inform suspicion but never constitute evidence.

Discussion prompts

  • What do you notice first when you think ‘this feels AI-written’? Are there red lines we can all agree are out of bounds? Are the boundaries of acceptable use fluid? Can we have policy for that?
  • How could you verify suspicion without breaching trust or due process?
  • When detection becomes guesswork, what remains of professional judgement?

Part 3: Mitigation strategies


Preventing AI-related misconduct demands more than surveillance; it requires redesign. UK universities (see media reports below) increasingly emphasise prevention through clarity, curriculum and compassionate assessment. The approach promoted by King’s Academy thus far is one that promotes a culture of integrity with the aim of creating a shared understanding, not fear of detection.

Key points

  • Clarify rules: define what counts as acceptable AI use in each assignment and require students to declare any assistance. This is what the initial guidance and current roadshows are about, but how can we formalise that?
  • Embed AI literacy: teach students how to use AI ethically and critically rather than banning it outright. And staff too, of course.
  • Redesign assessment: prioritise process, originality, and context (drafts, reflections, local or personal data).
  • Diversify formats: include vivas, oral presentations, in-person elements, or authentic tasks resistant to outsourcing.
  • Support equity: ensure guidance accounts for assistive technologies and language tools legitimately used by disabled or multilingual students.
  • Encourage dialogue: normalise discussion of AI use between staff and students rather than treating it as taboo.

Discussion prompts

  • What elements of your assessment design could make AI misuse less tempting or effective?
  • How might explicit permission to use AI (within limits) enhance transparency and trust?
  • Which AI-aware skills do your students most need to learn and how will we accomplish that? Where does the role of policy sit? Where in the academy are the contradictions and tensions?

Further reading

Topinka, R. (2024) ‘The software says my student cheated using AI. They say they’re innocent. Who do I believe?’, The Guardian, 13 February. Available at: https://www.theguardian.com/commentisfree/2024/feb/13/software-student-cheated-combat-ai

Coldwell, W. (2024) ‘‘I received a first but it felt tainted and undeserved’: inside the university AI cheating crisis’, The Guardian, 15 December. Available at: https://www.theguardian.com/technology/2024/dec/15/i-received-a-first-but-it-felt-tainted-and-undeserved-inside-the-university-ai-cheating-crisis

Havergal, C. (2025) ‘Students win plagiarism appeals over generative AI detection tool’, Times Higher Education, 15 July. Available at: https://www.timeshighereducation.com/news/students-win-plagiarism-appeals-over-generative-ai-detection-tool

Dickinson, J. (2025) ‘Academic judgement? Now that’s magic’, Wonkhe, 8 May. Available at: https://wonkhe.com/blogs/academic-judgement-now-thats-magic/

Grove, J. (2024) ‘Student AI cheating cases soar at UK universities’, Times Higher Education, 1 November. Available at: https://www.timeshighereducation.com/news/student-ai-cheating-cases-soar-uk-universities

Rowsell, J. (2025) ‘Universities need to “redefine cheating” in age of AI’, Times Higher Education, 27 June. Available at: https://www.timeshighereducation.com/news/universities-need-redefine-cheating-age-ai

Webb, M. (2023) ‘AI writing detectors – concepts and considerations’, National Centre for AI / Jisc, 17 March. Available at: https://nationalcentreforai.jiscinvolve.org/wp/2023/03/17/ai-writing-detectors/

Illusions of usefulness and singing trees

I am two weeks back from keynoting and conferencing in Switzerland. I have declined most overseas invites (sometimes with a tear in my eye) but this was a chance to work with someone I have known in the faculty development space for several years and whom I greatly admire, Dr Ruth Puhr. I had expected to find time to reflect as soon as I got back from this visit to Zurich for the Swiss Faculty Development Network (SFDN) conference but instead of the usual cheese and chocolate delicacies, I brought back flu and only now is my brain beginning to de-fug. Anyway, it was hosted at the impressive PHZH in the very clean (but still with a livelier side designed for those younger than me) city of Zurich. 

Pädagogische Hochschule Zürich

I was invited to focus specifically on the ‘interesting’ (and challenging, difficult, complex) space academic developers find themselves in when it comes to AI so framed my keynote as ‘Promises, pitfalls and paradoxes’. The purpose of the day was to create a shared, evidence-informed space to think seriously about what AI means for faculty development, not as a technical problem to be solved but as a pedagogical, institutional, ethical and perhaps even existential challenge to be worked through collectively. The programme deliberately balanced framing and provocation with empirical and practice-based contributions. Across the day, sessions ranged from my provocations, the student perspective and through a range of experiments and studies of AI use in teaching and learning, to practical explorations of chatbots, institutional support structures, sustainability and emerging competency frameworks. The institutions represented ranged from large research intensives to small, specialist providers. All against a backdrop of Christmas markets, singing trees and the aroma of Gluhwein.

Taken together, the conference showed that the Swiss context is not so different from the UK: grappling with a lack of consensus, real tensions between the urgency to adapt and the necessity for prudence, and anxiety about the many complex implications. Discussions centred on how AI is actually being used by students and staff (and where it is not), where assumptions do not always align with evidence, and why faculty development must find ways to acknowledge, work with and move beyond blind engagement narratives or refusal to engage towards more nuanced, literate and context-sensitive approaches.

I was particularly delighted to share the keynote platform with Julia Bogdan, whose perspective as co-President of the Swiss Student Union brought an essential counterpoint to institutional and academic viewpoints. The breadth of sessions that followed reinforced this aim, showing how questions of AI adoption cut across levels, roles and disciplines. 

The whole day was excellent but two sessions stood out in particular. The first, presented by Valentina Rossi and Marc Laperrouza from EPFL (the Swiss Federal Institute of Technology in Lausanne), looked at teacher-curated chatbots and provided a solid illustration of core tensions. Evidence from a real course context showed, probably unsurprisingly, that students reported the grounded/curated chatbot as supportive for learning and more trustworthy than a generic chatbot, though still not as effective (or at least valued) as the conventional method used (teaching supported with annotated slides), and with only modest effects on interest. The study was a good reminder that curation (and who does the curation) matters, that student trust is shaped by pedagogical framing (so surfacing rationales for approaches is key), and that AI is not a substitute for teaching so much as a means to reconfigure or augment it. What is paramount though is that these students were given the opportunity to realise this as they worked through the module.

The second session, presented by Adrian Holzer from the University of Neuchâtel, reported findings from a controlled field study examining whether generative AI enhances or impairs active learning and outcomes in a data visualisation course. Undergraduate students completed a 90-minute task under three conditions: students working alone, students working with AI and AI working alone. While students with AI experienced a substantial reduction in perceived task difficulty, their assignment scores did not differ significantly from those of students without AI. By contrast, and a reminder if needed that we need to keep talking about assessment design and validity, AI on its own outperformed both student groups. The most important finding for me, and one that students need to be helped to understand, was that students who perceived AI as most useful were, counterintuitively, those who performed worst, indicating an illusion of usefulness rather than genuine performance gains. High-performing students used AI strategically, giving clear context, iterating on outputs and treating AI as a reviewer or tutor. Lower-performing students struggled to articulate needs, issued vague commands and drifted into off-task interactions. The team concluded that AI’s educational value depends less on access and far more on students’ AI literacy.

Across the day, I was struck by the value of experimentation and sharing practice with others, particularly where this involved calculated risk rather than certainty. One of the sessions came with multiple apologies for the smallness of its scale, but I do not think we should be apologising for such things; we are all learning, and sharing experience and findings is critical. A key lesson was the reminder that experience and application can be profound teachers, often in unexpected ways, and that, increasingly, we (faculty developers) find ourselves with the opportunity to talk good teaching, learning and assessment because of AI… and that’s something we are actually happy about, all things considered!

A second set of reflections concerned scale and difference. While there was a strong sense of common issues and shared opportunities across institutions, these did not remove the need to engage seriously with contextual differences. What felt encouraging was a growing consensus around broad strategies for approach, even where local enactments necessarily diverge. This suggests a maturing subfield of faculty development, one that is moving towards broadly shared principles that acknowledge widely conflicting perspectives and don’t assume or attempt to impose uniform solutions.

Aside from all this was the moment my keynote was disturbed by a deep-throated, guttural scream that resonated across the lecture hall. I wondered out loud whether we should investigate the apparent catastrophe but I was reassured it was the Pädagogische Hochschule’s voice coaching class. I bet they don’t do THAT at the IoE.

So much ornate wood in the lecture halls!

Giant humanoids and automata

Today I had the honour of delivering a lecture as part of the Associate of King’s College (AKC) series. The AKC has its origins in the earliest days of King’s, almost two centuries ago, and continues as a cross-disciplinary programme; this year there are around 5,000 participants, including staff. It was a busy room (most of the 5,000 watch online though), and I felt very aware of the history behind the series while speaking. This post is not intended to summarise the entire lecture but is a quick reflection based on a question posed by one of the students after the lecture.

The lecture itself, titled Rethinking Human Learning in the Age of AI, was structured as a journey through time and across cultures. I wanted to draw attention to the long history of machines, automata and tools designed to work alongside us or, at times, to imitate us. Alongside this, I wanted to acknowledge current concerns about cognitive offloading, over-reliance on AI, and the anxiety that students (and others) may be outsourcing thinking itself. Rather than focus solely on the present moment, I wanted to show that many of these concerns are not new. They have deep roots in myth, invention and cultural imagination.

I began by considering why humans have been drawn to making machines that act or look like us. First, Talos, the bronze giant forged by Hephaestus to guard Crete. Talos, always on, 24/7 sentinel, ever watchful, apparently sentient, yet bound to servitude. Despite his scale, he was defeated by Jason (see how around 6 mins in the video below). The question I raised was: why build a giant in human form to defend an island when there might have been other, more efficient forms of defence? And what are the hidden consequences of a defender shaped like a human? And do we not actually feel sympathy for Talos when he dies?

The second example was the story of Yan Shi, artificer to King Mu around 3000 years ago, who constructed a singing and dancing automaton. The figure was so lifelike that it provoked admiration, but when it began to flirt with women in the court, the king’s admiration turned to fear and fury. Yan Shi had to dismantle it to reveal its workings and save his own life. The story anticipates Masahiro Mori’s ‘Uncanny Valley’ effect. The discomfort arises not simply from human likeness, but from behaviour that unsettles what we assume about intention, autonomy and control.

The third example was the tradition of the Karakuri puppets dating from 17th century Japan, whose fluid, human-like movements still evoke fascination. As with the Bunraku puppets (life size theatrical puppets), we know they are not real, yet we are drawn to the artistry and precision. There is both enchantment and a kind of deception. The craftsmanship invites admiration, but it also encourages us to question what lies beneath the surface.

With all these examples I suggested that our enchantment with lifelike machines can be both captivating and disarming. In each case, the machine is inspired by human design and each in its own way astounds and captivates. But Talos, despite his size, had a single point of vulnerability. Yan Shi’s automaton ultimately profoundly disturbed its audience. The Karakuri mechanical dolls delight and amaze in the main but, like Talos and Yan Shi’s automaton, challenge us to ‘look under the hood’ to see both how they work and to uncover frailties. My point, when I eventually got to it, was that the natural language exchanges and fluent outputs of some modern AI tools can similarly enchant us and lead us to assume capabilities they do not have. We need to, within our own capabilities, look under the hood.

I went on from there… you can see the whole journey reflected in this which I presented as an advance organiser (try to ignore the flags, they spoil it a bit):

After the lecture, a student asked what I thought was an excellent question: why are modern robots, especially those that intersect with novel AI tools, and representations of them in popular culture so frequently humanoid? Why, given that the most effective machines are those designed for highly specific functions, are we drawn to building robots in our own image? We talked about how a Roomba vacuum cleaner is far more practical than a 5-foot humanoid robot pushing around a standard vacuum but, still, in the popular imagination the latter is the imagined domestic help of the future, à la Rosie from the Jetsons.

Industrial robots in car plants are arms, not bodies, because arms are what are required to achieve the task. So why does the humanoid form persist so strongly in imagination and development?

I replied, almost instinctively, that one of the reasons we return to the humanoid shape is a lack of imagination. Even among the highly skilled, it is difficult to escape the pull of the human form. We continue to project ourselves onto our technologies, even when the task at hand requires something entirely different. This is not to say that humanoid robots are always misguided. Sometimes there is a clear functional rationale. But in many cases the fascination with the human shape seems to outweigh the practical benefits. We see this in videos of humanoid robots attempting to play football, really really badly, yet we persist.

I mentioned an example I saw recently at the New Scientist Festival: a robotic elderly human head, connected to a large language model, with articulated features. It was being trialled in care homes. I found this compelling because it was not the typical youthful, idealised (and so typically female – which raises other disturbing assumptions I have to say) robot form that popular technology tends to prioritise. It was designed because there was a specific need to support human interaction where human presence was limited. It was based on their own research evidence that residents responded better to a human face than to a screen or disembodied voice. It did not need a body to fulfil that role but it needed the head. Problem > design > research > testing > honing. It contrasts dramatically with the ‘let’s make a robot that takes us into the uncanny valley and out the other side!’ approach.

What do you think?

Incidentally, I do not know why I included the word automata in my lecture as I failed (as I always do) to say it properly.

Comet limits restless legs

I’m one of those people whose knee is constantly jiggling, especially when I am sat in ‘receive’ mode in a meeting or something. To reduce the jiggling I fiddle with things, and the thing I have been fiddling with will be familiar to anyone who likes to see what all the fuss is about with new tech. I’m always asking myself: novelty or utility? (I had my fingers burnt with interactive whiteboards and have been cautious ever since.) You may be interested in the output of Perplexity’s ‘Comet’, the browser-based AI agent, the outcomes of which are littering LinkedIn right now, or the video below, which is a conversation between me and one of my AI avatars… if neither of these appeals, I’d stop reading now tbh.

The image below links to what I produced using a simple prompt: “display this video in a window with some explanatory text about what it is and then have a self-marking multi choice quiz below it.” [youtube link]

It is a small web application that displays a YouTube video, provides some explanatory text, and then offers a self-marking multiple choice quiz beneath it.

Click on the image to see the artefact and try the quiz

The process was straightforward but illuminating. The agent prepared an interactive webpage with three generated files (index.html, style.css, and app.js) and then assembled them into a functioning app. It automatically embedded the YouTube video correctly (but needed an additional prompt when it did not initially display), added explanatory text about the focus of the video (AI in education at King’s College London), and then generated an eight-question multiple choice quiz based on the transcript.

The quiz has self-marking functionality, with immediate feedback, score tracking and final results. The design is clean and the layout works in my view. The questions cover key points from the transcript: principles, the presenter’s role, policy considerations and recommendations for upskilling. The potential applications are pretty obvious I think. Next step would be to look at likely accessibility issues (a quick check highlights a number of heading and formatting issues), finding a better solution for hosting and then the extent to which fine tuning the questions for level is do-able with ease. But given I only needed to tweak one for this example, even that basic functionality suggests this will be of use.

The real novelty here is the browser but also the execution. I have tried a few side-by-side experiments with Claude and in each the fine tuning needed for a satisfactory output was less here. The one failed experiment so far is converting all my saved links to a searchable/filterable dashboard. The dashboard looks good but I think there were too many links and it kept failing to make all the links active. Where tools like NotebookLM offer a counter UX to the text-in, reams-out LLMs of the ChatGPT variety, this offers a closer-to-seamless agent experience, and it is both ease of use and actual utility that will drive adoption I think.

I’ll believe it when I see it

I finally got round to trying out ‘Nano Banana’, the Google AI Studio image editor. It’s incredible that the ‘naming of AI things’ is as sensible as the naming of cables for Apple products and conducted by a team of 7 year olds. Anyway, long story short, this is pretty remarkable and pretty depressing all in one go.

Here are my first two edits. Each has two iterations. Prompts are followed by the new version in each case.

Unaltered image 1 (Woolwich Ferry, London, taken by me)

Insert a star trek like space ship realistically in the sky

it’s a bit too big, make it smaller, more distant

Boldly going 400 yards. Unless there’s a light breeze.

Unaltered image 2

I look bald in this. I need to be wearing a hat suitable for a spy

My daughter behind me needs to be my arch nemesis. make her stealthy and holding a water pistol

It looks nothing like my daughter and it’s amazing the AI found a hat that actually fits.

The little AI symbol in the corner is sure to fox anyone using such tools for nefariousness.

‘I can tell when it’s been written by AI’

Warning: the first two paragraphs might feel a bit like wading through treacle but I think what follows is useful and is probably necessary context to the activity linked at the end!

LLMs generate text using sophisticated prediction/probability models, and whilst I am no expert (so if you want proper, technical accounts please do go to an actual expert!) I think it useful to home in on three concepts that help explain how their outputs feel and read: temperature, perplexity and burstiness. Temperature sets how adventurous the word-by-word choices are: low values produce steady, highly predictable prose; high values (scales vary by tool, typically running from 0 up to 1 or 2) invite surprise and variation (supposedly more ‘creativity’ and certainly more hallucination). Perplexity measures how hard it is to predict the next word overall, and burstiness captures how unevenly those surprises cluster, like the mix of long and short sentences in some human writing, and maybe even a smattering of stretched metaphor and whimsy. Most early (I say early making it sound like mediaeval times but we’re talking 2-3 years ago!) AI writing felt ‘flat’ or ‘bland’ and therefore more detectable to human readers because default temperatures were conservative and burstiness was low.
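If you fancy seeing the temperature idea in action, here is a toy sketch (entirely my own illustration with made-up numbers, not how any actual vendor implements sampling) of how dividing a model’s raw next-word scores by a temperature value sharpens or flattens the resulting probabilities:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw next-word scores into probabilities.
    Low temperature sharpens the distribution (steady, predictable prose);
    high temperature flattens it (more surprising word choices)."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy candidates for the next word after "The cat sat on the ..."
words = ["mat", "sofa", "roof", "keyboard"]
logits = [4.0, 2.5, 1.5, 0.5]  # the model's raw preferences (invented)

cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
hot = softmax_with_temperature(logits, 1.5)   # much flatter

for word, c, h in zip(words, cold, hot):
    print(f"{word:>8}: cold={c:.3f} hot={h:.3f}")
```

At temperature 0.2 the model picks ‘mat’ almost every time; at 1.5 the also-rans get a real look-in, which is where the surprise and variation (and, yes, the hallucination) come from.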

I imagine most ChatGPT (other tools are available) users do not think much about such things given these are not visible choices in the main user interface. Funnily enough, I do recall these were options in the tools that were publicly available and pre-dated GPT 3.5 (the BIG release in November ’22). Like a lot of things, skilled use can influence them (a user might specify a style or tone in the prompt). Also, with money comes better options: Pro account custom GPTs, for example, can have precise built-in customisations. I also note that few seem to use the personalisation options that override some of the things that many folk find irritating in LLM outputs (mine states, for example, that it should use British English as default, never use em dashes and use ‘no mark up’ as default). I should also note that some tools still allow for temperature manipulation in the main user interface (Google AI Studio, for example) or when using the API (ChatGPT). Google AI Studio also has a ‘top P’ setting allowing users to specify the extent to which word choices are predictable or not. These things can drive you to distraction so it’s probably no wonder that most right-thinking, time-poor people have no time for experimental tweaking of this nature. But as models have evolved, developers have embedded dynamic temperature controls and other tuning methods that automatically vary these qualities. The result is that the claim ‘I can tell when it’s AI’ may be true of inexpert, unmodified outputs from free tools but is much harder to sustain for more sophisticated use and paid-for tools. Interestingly, the same appears true for AI detectors: the early detectors’ reliance on low-temperature signatures now needs revisiting too, for those not already convinced of their vincibility.
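Since I mentioned ‘top P’, here is a companion toy sketch of the nucleus-sampling idea it is based on (again my own simplified illustration with invented numbers, not Google’s or anyone else’s actual implementation): keep only the most probable words until their cumulative probability reaches p, then renormalise what survives.

```python
def top_p_filter(word_probs, p):
    """Keep the smallest set of highest-probability words whose
    cumulative probability reaches p, then renormalise so the
    survivors sum to 1. Low p -> very predictable word choices;
    high p -> more of the long tail stays in play."""
    ranked = sorted(word_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for word, prob in ranked:
        kept.append((word, prob))
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(prob for _, prob in kept)
    return {word: prob / total for word, prob in kept}

# Invented next-word probabilities after "The cat sat on the ..."
probs = {"mat": 0.60, "sofa": 0.25, "roof": 0.10, "keyboard": 0.05}

print(top_p_filter(probs, 0.5))  # only 'mat' survives
print(top_p_filter(probs, 0.9))  # 'mat', 'sofa' and 'roof' survive
```

Dialling p down is another route to the steady, low-surprise prose described above, independent of temperature.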

Evolutionary and embedded changes therefore have a humanising effect on LLM outputs. Modern systems can weave in natural fluctuations of rhythm and unexpected word choices, erasing much of the familiar ChatGPT blandness. Skilled (some would say ‘cynical’) users, whether through careful prompting or passing text through paraphrasers and ‘humanisers’, can amplify this further. Early popular detectors such as GPTZero (at my work we are clear colleagues should NEVER be uploading student work to such platforms btw) leaned heavily on perplexity and burstiness patterns to spot machine-generated work, but this is increasingly a losing battle. Detector developers are responding with more complex model-based classifiers and watermarking ideas, yet the arms race remains uneven: every generation of LLMs makes it easier to sidestep statistical fingerprints and harder to prove authorship with certainty.
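To make the burstiness signal slightly more concrete, here is a deliberately crude toy measure (my own illustration; real detectors used far richer statistical models than this): the variation of sentence lengths within a passage.

```python
import re
import statistics

def burstiness(text):
    """A crude burstiness proxy: the coefficient of variation of
    sentence lengths (in words). Uniform sentence lengths give a low
    score (the 'flat' feel of early AI prose); a mix of very long and
    very short sentences gives a high one."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

flat = ("The cat sat on the mat. The dog lay on the rug. "
        "The bird perched on the sill.")
bursty = ("The cat sat. Meanwhile the dog, exhausted after a long day of "
          "chasing absolutely everything that moved, collapsed in a heap "
          "on the rug. Silence.")

print(round(burstiness(flat), 2))
print(round(burstiness(bursty), 2))
```

The flat passage scores zero; the bursty one scores well above one. The losing-battle point is visible even here: a few prompt instructions (‘vary your sentence lengths’) move machine text straight into the high-scoring band.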

For fun I ran this article through GPT Zero….Phew!

It is also worth reflecting on what kinds of writing we value. My own style, for instance, happily mixes a smorgasbord of metaphors in a dizzying (or maybe it’s nauseating) cocktail of overlong sentences, excessive comma use and dated cultural references (ooh, and sprinkles in frequent parentheses too). Others might genuinely prefer the neat, low-temperature clarity an AI can produce. And some humans write with such regularity that a detector might wrongly flag them as synthetic. I understand that these traits may often reflect the writing of neurodivergent or multilingual students.

To explore this phenomenon and your own thinking further, please try this short activity. I used my own text as a starting point and generated (in Perplexity) five AI variants of varying temperatures. The activity was built in Claude. The idea is it reveals your own preferred ‘perplexity and burstiness combo’ and might prompt a fresh look at your writing preferences and the blurred boundaries between human and machine style. The temperature setting is revealed when you make your selection. Please try it out and let me know how I might improve it (or whether I should chuck it out the window, i.e. DefenestrAIt it).

Obviously, as my job is to encourage thinking and reflection about what this means for those teaching, those studying and broadly the institution they work or study in, I’ll finish with a few questions to stimulate reflection or discussion:

In teaching: Do you think you can detect AI writing? How might you respond when you suspect AI use but cannot prove it with certainty? What happens to the teacher-student relationship when detection becomes guesswork rather than evidence?

For assignment design: Could you shift towards process-focused assessment or tasks requiring personal experience, local knowledge or novel data? What kinds of writing assignments become more meaningful when AI can handle the routine ones? Has that actually changed in your discipline or not?

For your students: How can understanding these technical concepts help students use AI tools more thoughtfully rather than simply trying to avoid detection? What might students learn about their own writing voice through activities that reveal their personal perplexity and burstiness patterns? What is it about AI outputs that students who use them value and what is it that so many teachers disdain?

For your institution: Should institutions invest in detection tools given this technological arms race, or focus resources elsewhere? How might academic integrity policies need updating as reliable detection becomes less feasible?

For equity: Are students with access to sophisticated prompting techniques or ‘humanising’ tools gaining unfair advantages? How do we ensure that AI developments don’t widen existing educational inequalities? Who might we be inadvertently discriminating against with blanket bans or no use policies?

For the bigger picture: What kinds of human writing and thinking do we most want to cultivate in an age when machines can produce increasingly convincing text? How do we help students develop authentic voice and critical thinking skills that remain distinctly valuable?

When you know the answer to the last question, let me know.

Essays & AI: collective reflections on the manifesto one year on

It’s roughly a year since we (Claire Gordon and I plus a collective of academics from King’s & LSE) published the Manifesto for the Essay in the Age of AI. Despite improvements in the tech AND often pretty compelling evidence and arguments for the reduction of take-home, long form writing in summative assessments, I STILL maintain the essay has a role, as I did this time last year. On one of the pages of the AI in Education short course authored by colleagues at King’s from the Institute of Psychiatry, Psychology & Neuroscience (Brenda Williams) and Faculty of Dentistry, Oral & Craniofacial Sciences (Pinsuda Srisontisuk and Isabel Miletich) they detail patterns of student AI usage. They end with a suggestion that participants take a structured approach to analysing the Manifesto, and the outcome is around 150 responses (to date) offering a broad range of thoughts and ideas from educators working across disciplines and educational levels across the world. This was the forum prompt:

Is the essay dead?

The manifesto above argues that this is not the case, but many believe that long form writing is no longer a reliable way to assess students. What do you think?

Although contributors come from diverse contexts, some shared patterns and tensions really stand out which I share below. I finish with a wee bit of my own flag waving (seems to be a popular pastime recently).

Sentiment balance

The overwhelming sentiment is one of broad agreement, reformist in spirit.

  • Most participants explicitly reject the idea that “the essay is dead”. They value essays for nurturing critical thinking, argumentation, independence and the ability to sustain a coherent structure.
  • A minority voice expresses stronger doubts, usually linked to practical issues (e.g. heavy marking loads, students’ shrinking reading stamina, or the ease of AI-generated text), and calls for greater diversification of assessment.
  • There is also a strand of cautious pragmatism: many see the need for significant redesign of both teaching and assessment to remain relevant and credible.

In short, the mood is hopeful and constructive rather than nostalgic or doom ‘n’ gloom. The essay is not to be discarded but has to be re-imagined.

Here are a couple of sample responses:

Not quite dead, no. I think of essays as a ‘thinking tool’ – it’s a difficult cognitive task, but a worthwhile one. I think, as mentioned in the study, an evolution towards ‘process orientated’ assessment could be the saviour of the essay. Perhaps a movement away from the product (an essay itself) being the sole provider of a summative grade is what’s needed. Thinking of coursework, planning, supervisor meetings and a reflective journal on how their understanding developed over the process of researching, synthesising, planning, writing and redrafting could be included. (JF)

In their current form, many take-home essay assessments no longer reliably measure a student’s learning, nor mirror the skills students need for the workplace (as has arguably always been the case for many subjects). I wonder if students may increasingly struggle to see the value of writing essays too. However, I do value the thought processes that go into crafting long form writing. I think if essays are thoughtfully redesigned and include an element of choice for the learner, perhaps with the need to draw on some in-house case study or locally significant issue, then essays are not necessarily dead. (AM)

The neat dodge to this question is to suggest the essay will be like the ship of Theseus. It will remain but every component in it will be made of different materials 🙂 (EP)

Key themes emerging from the comments

1. Process over product
A strikingly common thread is the shift from valuing the final script to valuing the journey of thought and writing. Contributors repeatedly advocate staged submissions, reflective journals, prompt disclosure, oral defences or supervised drafting. This aligns directly with the manifesto’s calls to redefine essay purposes and embed critical reflection (points 3 and 4).

2. Productive integration of AI
Few respondents argue for banning AI (obviously the responses are skewed towards those willing to undertake an AI in Education short course in the first place!). Instead, many echo the manifesto’s seventh and eighth points on integration and equity. Suggestions include:

  • require students to document prompts and edits,
  • use AI to generate counter-arguments or critique drafts,
  • support second-language writers or neurodivergent students with AI grammar or audio aids,
  • design tasks tied to personal data, lab results or workplace contexts that AI cannot easily fabricate.

A persistent caution is that without clear guidance, AI may encourage superficial engagement or plagiarism. Transparent ground rules and explicit teaching of critical AI literacy are seen as essential.

3. Expanding forms and contexts
Many contributors support the manifesto’s second point on diverse forms of written work. They propose hybrid assessments such as essays combined with oral presentations, podcasts, infographics or portfolios. Others emphasise discipline-specific needs: scientific reporting, medical case notes, or creative writing, each with distinct conventions and AI implications.

4. Equity, access and institutional support
There is strong agreement that AI’s benefits and risks are unevenly distributed. Participants highlight the need for:

  • institutional investment in staff development and student training,
  • clarity on acceptable AI use across programmes,
  • assessment designs that do not disadvantage those with limited technological access.

5. Rethinking academic integrity
Several comments resonate with the manifesto’s call to revisit definitions of cheating and originality. Rather than policing AI, some suggest designing assessments that render unauthorised use unhelpful or irrelevant, while foregrounding honesty and reflection.

What this means for the manifesto

The forum feedback affirms the manifesto’s central claim that the essay remains a vital, adaptable form, but it also pushes its agenda in useful directions.

  • Greater emphasis on process-based assessment. While the manifesto highlights process and reflection, practitioners want even stronger endorsement of multi-stage, scaffolded approaches and/or dialogic or presentational components as the cornerstone of future essay design.
  • Operational guidance for AI use. Educators call for more than principles: they need models of prompt documentation, supervised writing practices and examples of AI-resistant or AI-enhanced tasks.
  • Disciplinary specificity. The manifesto could further acknowledge the wide variance in how essays function, from lab reports to creative pieces, and provide pathways for each. Of course we, like everyone, are subject to a major impediment…
  • Workload and resourcing. Several voices stress that meaningful change requires institutional support and realistic marking expectations; without these, even the best principles risk remaining aspirational. This, for me, is likely the biggest impediment, not least because of the ongoing, multi-layered crises HE is confronted with just now.

Overall, the conversation demonstrates an appetite for renewal rather than retreat to sole reliance on in-person exams, though this remains a common call. I stand with the consensus view that the essay (and other long-form writing) is not in terminal decline but in the midst of a necessary transformation. What we need to see is this: educators alert to the affordances and limitations of AI, conversations happening between students and those who support them in discipline and with academic skills, and students writing assessments that are AI-literate. As we find our way to the other side of this transitional space we are in, deluged by inappropriate use and assessments too slow in changing, eventually the writing will (again) be genuinely engaging, students will see value in finding their own voices, and we'll move closer to consensus on accepting some new ways of producing writing as legitimate.

When I read posts on social media advocating a wholesale shift to exams (irrespective of the other competing damages this may connote, and in apparent ignorance of the many ways cheating happens in invigilated in-person exams), or 'writing is pointless' pieces, I am struck by the usually implicit but sometimes overt assumption that writing is ONLY valuable as evidence of learning. Too rarely are formative/developmental aspects rolled into the arguments, alongside a failure to connect to persuasive rationales (in this and wider arguments for learning) for reconsidering the impact of grades on how students approach writing. And, finally, even if 80% of students did want the easiest route to a polished essay, I'm not abandoning the 20% who appreciate the skills development, the desirable difficulties and the will to DO and BE as well as show what they KNOW. Too many of the current narratives advocate not only throwing the baby out with the bathwater but then refusing to feed the baby because, you know, the bathwater was dirty. Unpick THAT strangled metaphor if you can.

Plus ça change; plus c’est a scroll of death

Hang on, it was summer a minute ago
I looked at my blog just now and saw my last post was in July. How did the summer go so fast? There's a wind howling outside, I am wearing a jumper, and both the actual long dark wintry nights and the long dark metaphorical ones of our political climate seem to loom. To warm myself up a little I have been looking through some tools that offer AI integrations into learning management systems (LMS, aka VLEs)* rather than doing 'actual' work. That exploration reminded me of the first ever article I had published, back in 2004. The piece has long since disappeared from wherever I saved the printed version and is no longer online (not everything digital lasts forever, thank goodness), but I dug the text out of an old online storage account, and reading it through has made me realise how much things have changed broadly while, in other ways, it is still the same show rumbling along in the background, like Coronation Street (but no-one really remembers when it went from black and white to colour).

What I wrote back then
In that 2004 article I described the excitement of experimenting with synchronous and asynchronous digital discussion tools in WebCT (for those not ancient like me, Web Course Tools - WebCT - was an early VLE developed by the University of British Columbia which was eventually subsumed into Blackboard). I was teaching GCSE English and was programme leader for an 'Access to Primary Teaching' course, and many of my students were part-time, so only on campus for 6 hours per week across two evenings. I'd earlier taught myself HTML so I could build a website for my history students - it had lots of text! It had hyperlinks! It had a scrolling marquee! Images would have been nice but I knew my limits. When I saw WebCT, I was fired up by the possibilities of discussion forums and live chat. When I set it up and trialled it I saw peer support, increased engagement with tough topics, and participation from 'quiet' students, amongst other benefits. I was so persuaded by the added-value potential that I even ran workshops with colleagues to share that excitement.

See this great intro to WebCT from someone in the CS department at British Columbia, from 1998:

That is still me, of course. My job has changed and so has the context, but the impulse to share enthusiasm for digital tools that foster dialogue and interaction remains why I do what I do. It was nice to read that, and I felt a fleeting affection for that much younger teacher, blissfully unaware of the challenges ahead! Even so, and forming a rattling cognitive dissonance that is still there, I was frustrated by the clunky design and awkward user interface that made persuading colleagues to use it really challenging. Login issues took up a lot of time, and balancing 'learning' use with what I then called 'horseplay' (what was I, 75?!) took a while to calibrate. Nevertheless, I thought these worth working through but, even when some evidence of uptake across the college I was at was apparent, there was a wider scepticism and reluctance. Why wouldn't there be? 'It's too complex'; 'I am too busy'; 'the way I do it now works just fine, thank you'. Pretty much every digital innovation has been accompanied by similar responses; even the good ones! I speculated about whether we needed a blank sheet of paper to rethink what an LMS could be, but concluded that institutions were more likely to tinker and add features than to start again.

2004? Feels like yesterday; feels like centuries ago
It was only 2003–4 (he says, painfully aware that I have colleagues who were born then), yet experimenting with an LMS felt novel, and that comes over really clearly in my article. If you'd asked me this morning when I started using an LMS I might have said 1998 or 99. 2003 feels so recent in the context of my whole teaching career. What the heck was I doing before all that? Thinking back, I realise that in my first full-time job there was only one computer in our office, and John S. got to use that as he was a trained typist (so he said). And older than me. In the article I was carefully explaining what chat and forums were and how they were different from one another, so the need for that dates the phenomenon too, I suppose. Later, after moving to a Moodle institution, I became e-learning lead and engaged with JISC working groups - a JISC colleague who oversaw the VLE working group jokingly called me Mr Anti-Moodle because I was vocal in my critiques. It wasn't quite accurate - I was critical for sure but then, as now, I liked the concept but disliked the way it worked. Persuading people to adopt an LMS was hard, as I said, and, while I have seen some brilliant use of Moodle and the like, my impression is that the majority (argue with me on this though) of LMS courses are functional repositories, with interactive and creative applications the exception rather than the norm. The scroll of death was a thing in 2005 and it is as much of a thing now. It also made me think of the current 'Marmitey' positions folk are taking re: AI. Basically, AI (big and ill-defined as it usually is) has to come with nuance and understanding, so binary, entrenched, one-size-fits-all positions are unhelpful and, in my view, hard to rationalise and sustain.

The familiar LMS problem
Back to the LMS: from WebCT to Moodle and other common current systems, the underlying functionality has barely shifted (I mean from the perspective of your average teacher/lecturer or student). Many still say Moodle feels very 1990s (probably they mean early 2000s, but I suspect they, like me, find it hard to reconcile the idea that any year starting with 2000 could be a long time ago). Ultimately, I think none of these systems offered a genuinely encouraging combination of interface and user experience, and that is an issue that persists to this day. The legacy of those early design decisions lingers, and we are still working around them. People have been predicting the death of the VLE for years (including me) but it has not happened. When I first saw Microsoft Teams just before Covid, I thought 'here's the nail in the coffin'. I was wrong again. Maybe being wrong about the end of the LMS is another running theme.

Will AI change the LMS story?
So what about AI-powered integrations? Will they revolutionise how the LMS works? Will they be part of the reason for a shift away from them? Unlikely in either sense is my best guess. Everything I see now is about embellishments and shortcuts that feed into the existing structure. My old dream of a blank-sheet LMS revolution has faded. Thirty years of teaching and more than twenty years using LMSs suggest that this is one component of digital education that will not fade away. The tools will keep evolving, but the slow, steady thrum of the LMS endures in the background. I realise that I have finally predicted non-change, so don't bet on that, as I have been wrong quite a bit in the past. What I do know is that digital discussions using tools to support dialogic pedagogies have persisted, as have the issues related to them. 'Only 10–20% of my students use the forums!' I hear that still. But what I realised in 2004, and maintain to this day, is that 10–20% is a significant embellishment for some and an alternative for others, so I stick with what I said back then in that sense at least. Oh, and lurking is a legit and fine thing for yet others!

One of the most wonderful things about the AI in Education course (so close to 15,000 participants!) is the forums. They add layers of interest that cannot be planned or produced. I estimate only 10–15% of participants post, but what a contribution they are making, and it's an enhancement that keeps me there and, I am convinced, adds real value to those not posting too.

*I'll stick with LMS as this seems to be pretty ubiquitous these days, though I am aware of the distinctions; when I wrote the piece about WebCT, the term VLE was very much the go-to.