Dominant narratives of AI in HE (by Jonathan Tulloch)

Martin: I read and discussed my colleague Jon’s notes for a recent presentation, asked him to write them up so they could be shared more widely, and I am delighted to say he agreed to share them here. Jon’s provocations below are framed from a position of considerable learning tech expertise and experience and, in my view, offer a really thoughtful challenge to dominant narratives related to AI. As you may know, I love a good prod at the hornets’ nest, and Jon asks some really fundamental questions that, like the technologies we all find ourselves confronted with, are themselves disruptive.

I do find myself slightly uncomfortable about my role in relation to AI. As a learning technologist, I am frequently asked to train people in how to use it, and to enthuse about all its possibilities. I am expected to generate engagement from academic staff – and to assure them that ‘AI is essential for Industry 5.0’ (or wherever we are up to now). Now I’m not suggesting this is wrong. But I am uncomfortable about the lack of criticality – and lack of precision – underpinning our reasons for using it. I rarely get asked to address the issue of why we should be using it. Industry 5.0 is often given as the reason, but this concept itself appears to be rather vague guesswork about what the future might look like. And the concept of AI is itself amorphous: constantly changing, and encompassing an impossibly broad set of ideas, systems, processes, functions and platforms. To say ‘we must promote AI’ to equip students for ‘Industry 5.0’ is to say we must promote something undefined, to equip students for something unknown. So I end up feeling like an attendant standing in front of a slide at a water park, enthusiastically encouraging people to dive into the tunnel and promising them how much fun it will be. When really, I have no idea where the tunnel leads. And nobody else seems to know either.

I’m not even convinced it’s a tunnel. So in this post, I want to suggest two things.

  1. There are at present two dominant narratives around AI – neither of which properly addresses what people want from it or why we should be using it.
  2. These narratives appear to be perpetuated within Higher Education.

Based on these suggestions, I want to ask a question:

Can we imagine a different narrative?  If so, how?  And would it be helpful?

So are we ready? Then we can begin…

This is the first narrative – I call this the ‘hey, look at what you can do!’ narrative.  This narrative focuses on all the cool stuff you can do with AI. You know the kind of thing:

Hey look! – you can use AI to generate your own clipart, cute avatars or biologically improbable photos!

You can get AI to predict your email responses, organise your to-do list, and generate your own cartoon characters!

I have been suitably impressed by people who have sent me examples of how they have created AI videos of themselves conducting law lectures while riding horses in the wild west!  All very cool. Of course, none of these were things you actually asked for, or thought you needed.  And most of the time you don’t really understand why you’re doing it.  And yet you feel that somehow you have to. We can feel an imperative to use them – because if we don’t we run the risk of being an outsider.  Disadvantaged – or something similar. You can see this in adverts for platforms like ClickUp, Monday.com or Grammarly.  The suggestion that you should be using AI to write proper sentences, organise your time and manage your projects – regardless of whether you feel you need support in any of those areas.  And of course, if you are NOT using AI – then you are inevitably going to fall behind everyone else.

These tap into a feeling that is very much present when we think about AI. Are you willing to risk being the only one left who can’t use it? The person still trying to fax their job applications to potential employers? AI is simply the way things will have to be done in the future – so get on board with all the cool things AI can do, or be left at the station.

Alternatively, in the second narrative we hear about all the terrifying stuff AI is going to do to you, whether you want it to or not.  I call this the ‘AI controls your future’ narrative. You know the kind of thing.  AI is going to take over our jobs, make all our decisions for us, write all our songs and presumably, eventually conclude we are surplus to requirements and stick us in small tubes to function as a power source.  A lot of films have been based on this narrative – so it’s a good one.  People like it.

This is the narrative behind those headlines that imply that AI will control the future. It is a dystopian vision – where AI ends up either saving humanity or destroying it. This narrative is reflected in the now-famous open letter published by the Future of Life Institute in 2023, calling for a pause on giant AI experiments:

The letter reads: “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

According to this narrative, civilization is at a tipping point, and it is humanity itself that is at risk. But even as we ponder the fact that Elon Musk was a signatory to this letter, we sense that there is an inevitability to all this. Mustafa Suleyman, one of the key figures in the development of gen AI, describes it as a wave of technology that cannot be “uninvented or blocked indefinitely” and that “leads humanity toward either catastrophic or dystopian outcomes”. Underpinning this narrative is anxiety and a sense of powerlessness. Anxiety about the impact of AI, and a worry that there is absolutely nothing we can do about it. That we are in the last days of humanity – the last days where things like community, compassion, fallibility, imagination have value.

So – those are the two narratives. AI represents either cool toys that you have to learn to use if you want to stay relevant, or it is a threat to humanity itself. It is what you can do, or what you must do. What it isn’t, is something you want to do. Now you may already be ahead of me with this. In the whole debate around AI in universities, we largely seem to be perpetuating these two dominant narratives. We appear increasingly desperate to find some way – any way – of incorporating Gen AI into our modules, partly because – hey look, isn’t it cool? This might engage my students or make them interested, or impress my external examiner. And partly because of the feeling we will be failing them somehow if we don’t immerse them in AI as soon as possible – because if our students are not able to use AI, then how will they get a job? So we try teaching them how to use Copilot to plan their assignments, how to use Claude to develop code, or how to use Gemini to teach them new software. Of course AI isn’t necessary for any of these tasks – but if our students can’t use this kind of tool, they might be left behind.

That’s the first narrative

At the same time, universities are immersed in the dystopian narratives in which AI poses an existential threat to the values of education. A good example is the issue of academic integrity in the age of AI. The feeling that AI poses a threat to the integrity of education – to authenticity and ethics. I’ve lost count of the number of times I have been told how urgent it is that we get AI-detection software for written assignments. Or how often I have heard people talk about how important it is to develop assessments where AI simply can’t be used.

That’s the second narrative.  The one in which AI is a cataclysmic threat that needs to be countered.

Interestingly, we can see these narratives in systematic reviews listing the most-cited scholarly articles about the use of AI in higher education. Have a look at the titles below and see if you can spot these narratives at work…

Of course there are no right or wrong answers to this – but here are some thoughts:

  • The Dwivedi source implies that generative AI is creating significant challenges for ‘research, practice and policy’ – it is destabilising the status quo, and demanding that we re-think practice to accommodate it. This is the second narrative.
  • The Lee source is coloured by the first narrative – ‘look at these cool AI chatbots!’  The actual study relates more to the need for individualised learning for students, but the title shapes this discussion into one in which the focus is the AI tool.
  • The Rudolph source builds the second narrative explicitly into its title – referring directly to the idea that Gen AI spells ‘the end of traditional assessment in higher education’.
  • The Tlili source again leans into the second narrative, implying that Gen AI is either a force of extreme evil, or a force of extreme good.  Either way, it is something that significantly determines our choices.
  • The Pavlik title reflects the second narrative: it describes the tool but suggests that we ‘collaborate’ with it. This suggests that Gen AI is not a tool used to serve our needs, but something that has equal status with human practitioners. But this extends further to suggest this collaboration is unavoidable: the title does not present collaboration as a choice to be made, but as a reality to confront. And if we have no choice but to collaborate, doesn’t that imply we are not of equal status?

I could go on, but I’m sure you get the point. We can see these narratives in student surveys as well. In 2024, the Digital Education Council released its report on student expectations about AI – titled ‘What Students Want’. Among its findings: 86% of students surveyed “claim to use AI in their studies”. The most recent study from HEPI shows this to have risen to 95% in 2026. Mostly they are using it to search for information and fix their grammar – but over the last few years there have been increasing signs of students using AI more as a kind of digital tutor: explaining concepts and providing feedback. But the interesting bit is when you look for evidence of what is driving them.

Because the DEC survey showed 52% of students actually think AI negatively impacts their academic performance. In their 2026 report, HEPI showed that 51% of students think AI negatively impacts their student experience. An earlier HEPI survey showed 82% of students wanting to use AI less. So why are they using it? Well, according to the 2026 HEPI survey, 68% of students believe “it is essential to understand and be able to use AI effectively”. The JISC survey similarly found that students were “concerned about acquiring the necessary generative AI skills for future workplaces”. This is the first narrative. Gen AI is a cool new toy that everyone is using, and there is almost a FOMO (‘Fear of Missing Out’) attitude towards it. Notice that the emphasis is that Gen AI is necessary for future workplaces – not that Gen AI improves those workplaces. But there is anxiety too. Again in the 2026 survey, 65% of students express fear that AI makes learning less valuable.

A Higher Education for Good survey found many students expressing the worry that AI will “render them incapable of functioning without it”, fearing the “dehumanization of education”.  The same report highlights concerns about “the lack of humanity in A.I.” and that “A.I. in education could lead to ubiquitous surveillance of students”.  In all the surveys there is a common thread in which students express implicit or explicit fears about how AI threatens the very humanity of educational communities. They demonstrate fears of dehumanization, a distrust of AI, and the feeling that AI will never be able to provide the same value as human production. 

This is the second narrative.

And we are still no closer to understanding what people actually want. Back in February, I conducted a survey and asked people ‘what do you wish AI could do for you?’ The results were interesting. Staff wanted AI to ease the burden of marking for them. Students wanted AI to help them with time management. And with tidying. Both groups picked something they found frustrating or difficult and said this was what they wanted AI to help them with. It is significant that staff did not suggest they wanted AI to create their teaching resources for them or write their lectures for them – although at many events like this, that is what they are being shown they can do. And it is significant that students did not suggest they wanted AI to write their essays for them – perhaps because they were worried about admitting it. But at the same time they didn’t want AI to handle their childcare. Or do their cooking. Because some things are difficult, yes. And time-consuming, yes. But they are also fulfilling on a human level – and as the open letter asked, we don’t want AI to take over those very human things. If AI is going to do things for us – let it be the mind-numbing, soul-destroying stuff that makes us feel less human. So, what are those things? Not ‘what can AI already do for us’, or ‘what we must learn to do with AI’. But what – in an ideal world – would we actually want AI to do for us?

Or let me put it another way: In the words of Neil Postman…

 “What is the problem to which this technology is the solution?”

Think specifically about yourselves as educators – and your students.  And try and avoid falling into the two dominant narratives.

Bibliography:

Attewell, S. (no date) How will generative AI affect students and employment?, Luminate. Available at: https://luminate.prospects.ac.uk/how-will-generative-ai-affect-students-and-employment (Accessed: 28 May 2025).

Batista, J., Mesquita, A. and Carnaz, G. (2024) ‘Generative AI and Higher Education: Trends, Challenges, and Future Directions from a Systematic Literature Review.’, Information (2078-2489), 15(11), p. 676. Available at: https://doi.org/10.3390/info15110676.

Brynjolfsson, E., Li, D. and Raymond, L. (2025) ‘Generative AI at Work’, The Quarterly Journal of Economics, 140(2), pp. 889–942. Available at: https://doi.org/10.1093/qje/qjae044.

Chan, C.K.Y. (2024) Generative AI in Higher Education; The ChatGPT Effect. London: Routledge.

Digital Education Council Global AI Student Survey 2024 (2024). Digital Education Council. Available at: https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-student-survey-2024.

Freeman, J. (no date) ‘Student Generative AI Survey 2025’.

Gebru, T. et al. (2024) Statement from the listed authors of Stochastic Parrots on the “AI pause” letter, Dair Institute. Available at: https://www.dair-institute.org/blog/letter-statement-March2023/ (Accessed: 28 May 2025).

Gulati, P. et al. (2025) ‘Generative AI Adoption and Higher Order Skills’. arXiv. Available at: https://doi.org/10.48550/arXiv.2503.09212.

Hashem, R. et al. (2024) ‘AI to the rescue: Exploring the potential of ChatGPT as a teacher ally for workload relief and burnout prevention’, Research and Practice in Technology Enhanced Learning, 19, pp. 023–023. Available at: https://doi.org/10.58459/rptel.2024.19023.

Jacobides, M.G. and Ma, M.D. (2024) ‘IoD | London Business School Policy Paper – Assessing the expected impact of Generative AI on the UK competitive landscape’.

Laura, R.S. and Chapman, A. (2009) ‘The technologisation of education: philosophical reflections on being too plugged in’, International Journal of Children’s Spirituality, 14(3), pp. 289–298. Available at: https://doi.org/10.1080/13644360903086554.

Leaver, T. and Srdarov, S. (2025) ‘Generative AI and children’s digital futures: New research challenges’, Journal of Children and Media, 19(1), pp. 65–70. Available at: https://doi.org/10.1080/17482798.2024.2438679.

Ogunleye, B. et al. (2024) ‘A Systematic Review of Generative AI for Teaching and Learning Practice’, Education Sciences, 14(6), p. 636. Available at: https://doi.org/10.3390/educsci14060636.

Pang, W. and Wei, Z. (2025) ‘Shaping the Future of Higher Education: A Technology Usage Study on Generative AI Innovations.’, Information (2078-2489), 16(2), p. 95. Available at: https://doi.org/10.3390/info16020095.

Postman, N. (1999) Building a Bridge to the 18th Century: How the Past Can Improve Our Future. New York: Vintage Books.

‘Student Generative Artificial Intelligence Survey 2026’ (2026) HEPI, 12 March. Available at: https://www.hepi.ac.uk/reports/student-generative-ai-survey-2026/ (Accessed: 26 March 2026).

Student perceptions of generative AI report (2024). JISC. Available at: https://www.jisc.ac.uk/reports/student-perceptions-of-generative-ai.

Thomson, H. (2025) ‘“Don’t ask what AI can do for us, ask what it is doing to us”: are ChatGPT and co harming human intelligence?’, The Guardian, 19 April. Available at: https://www.theguardian.com/technology/2025/apr/19/dont-ask-what-ai-can-do-for-us-ask-what-it-is-doing-to-us-are-chatgpt-and-co-harming-human-intelligence (Accessed: 1 May 2025).

Wei, X. et al. (2025) ‘The effects of generative AI on collaborative problem-solving and team creativity performance in digital story creation: an experimental study.’, International Journal of Educational Technology in Higher Education, 22(1), pp. 1–27. Available at: https://doi.org/10.1186/s41239-025-00526-0.

Youth Talks on AI (2024). Switzerland: Higher Education for Good. Available at: https://youth-talks.org/wp-content/uploads/2024/06/Youth-Talks-on-AI-Final-report-03062024.pdf.

Yusuf, A., Pervin, N. and Román-González, M. (2024) ‘Generative AI and the future of higher education: a threat to academic integrity or reformation? Evidence from multicultural perspectives.’, International Journal of Educational Technology in Higher Education, 21(1), pp. 1–29. Available at: https://doi.org/10.1186/s41239-024-00453-6.

AI and academic misconduct – some context and provocations

I’m sorting through all my files as I prep to hand over my role and start my new job and came across this discussion activity I designed but have yet to run. It’s in 3 parts with each part designed to be ‘released’ after the discussion of the previous part. In this way it could work synchronously and asynchronously.

Part 1: AI, cheating and misuse


AI-related misconduct in HE is far more complex than simple notions of “cheating”. In fact, how we define cheating (both institutionally and individually) is worth revisiting too. Recent prominent news stories (examples at end) reveal sharp inconsistencies in how universities define, detect and sanction what is seen as inappropriate AI use, despite there being, at best, only emergent policy and certainly wide-ranging understandings and interpretations of what is acceptable. Some ban all AI; others permit limited use with disclosure. Cases show students wrongly accused, rules unevenly applied and anxiety rising across the sector amongst both staff and students.

Key points

  • Huge variation exists between and within institutions: one university may expel a student for Grammarly use, another may allow it.
  • Detector-driven accusations often fail under appeal because AI scores are not proof. Policies lag behind practice: vague guidance leaves staff and students uncertain what constitutes fair or inappropriate use.
  • False positives where detectors are used disproportionately affect neurodivergent and non-native English writers whose linguistic patterns deviate from ‘naturalistic’ norms, expectations or markers.
  • Misconduct procedures must ensure the provider proves wrongdoing; suspicion is insufficient.
  • I would argue that assignments with fabricated or ‘hallucinated’ references should automatically fail, though this does not seem to be applied consistently, even where such examples are used as markers of inappropriate AI use.

Initial discussion prompts

  • Is ‘AI misconduct’ a meaningful category or an unhelpful new label for old issues? What alternative umbrella term might we use – perhaps one that is more neutral for assessment policy?
  • How do inconsistent institutional rules affect fairness for students across programmes? Where are these inconsistencies? What needs updating/ changing?
  • What does it mean for academic judgement when technology, not evidence, drives decisions? Even though we have made a decision at King’s not to use detectors, we need to ensure ‘free’ detectors are not used, nor other digital traps or tricks that undermine trust (e.g. deliberately false references included in reading; hidden prompt ‘poisoners’ in assessment briefs).
  • Do we need to overhaul common understandings (e.g. in academic integrity policy) of what constitutes cheating, how far third-party proofreading can be considered legitimate, and whether plagiarism is an adequate umbrella term in the age of AI? Is this a good time to rethink what we assume are shared understandings and to consider whether this ideal is actually a mask or illusion?

Part 2: Detecting AI: Not as simple or obvious as we might think


Detection of AI-generated writing either by tech means (using AI to detect AI) or ‘because it feels like AI’ has never been as easy as some make out,  and it is increasingly difficult as tools proliferate and improve. While tools claim to identify machine-written work through statistical cues such as ‘perplexity’ and ‘burstiness’, modern large language models  are now able to better mimic these fluctuations. Detection confidence often rests on illusion: humans and detectors alike may only be spotting inexpert AI writing. Skilled prompting, ‘humanisers’, and improved models blur the line between human and synthetic text, exposing the fallibility of ‘gut-feeling detection’. Read more here.

Key points

  • Early AI detectors relied on predictable signatures in low-temperature outputs; current models vary their linguistic rhythm automatically, making detection increasingly unreliable.
  • Human readers are equally fallible: many assumed ‘tells’ (clarity, neatness, tone, specific word choices, punctuation norms) may simply reflect the style of conscientious, neurodivergent or multilingual students.
  • False accusations carry ethical and procedural risks. The Office of the Independent Adjudicator has also reinforced risks related to use of AI detectors.
  • King’s guidance (rightly imo) cautions against use of detection tools and emphasises contextual evidence and dialogue. This includes detection by ‘feel’ of writing.
  • The sector consensus (see articles from Jisc, THE and Wonkhe, below) is clear: detectors may inform suspicion but never constitute evidence.

Discussion prompts

  • What do you notice first when you think ‘this feels AI-written’? Are there red lines we can all agree are out of bounds? Are the boundaries of acceptable use fluid? Can we have policy for that?
  • How could you verify suspicion without breaching trust or due process?
  • When detection becomes guesswork, what remains of professional judgement?

Part 3: Mitigation strategies


Preventing AI-related misconduct demands more than surveillance; it requires redesign. UK universities (see media reports below) increasingly emphasise prevention through clarity, curriculum and compassionate assessment. The approach promoted by King’s Academy thus far is one that promotes a culture of integrity with the aim of creating a shared understanding, not fear of detection.

Key points

  • Clarify rules: define what counts as acceptable AI use in each assignment and require students to declare any assistance. This is what the initial guidance and current roadshows are about, but how can we formalise that?
  • Embed AI literacy: teach students how to use AI ethically and critically rather than banning it outright. And staff too, of course.
  • Redesign assessment: prioritise process, originality, and context (drafts, reflections, local or personal data).
  • Diversify formats: include vivas, oral presentations, in-person elements, or authentic tasks resistant to outsourcing.
  • Support equity: ensure guidance accounts for assistive technologies and language tools legitimately used by disabled or multilingual students.
  • Encourage dialogue: normalise discussion of AI use between staff and students rather than treating it as taboo.

Discussion prompts

  • What elements of your assessment design could make AI misuse less tempting or effective?
  • How might explicit permission to use AI (within limits) enhance transparency and trust?
  • Which AI-aware skills do your students most need to learn, and how will we accomplish that? Where does the role of policy sit? Where in the academy are the contradictions and tensions?

Further reading

Topinka, R. (2024) ‘The software says my student cheated using AI. They say they’re innocent. Who do I believe?’, The Guardian, 13 February. Available at: https://www.theguardian.com/commentisfree/2024/feb/13/software-student-cheated-combat-ai

Coldwell, W. (2024) ‘‘I received a first but it felt tainted and undeserved’: inside the university AI cheating crisis’, The Guardian, 15 December. Available at: https://www.theguardian.com/technology/2024/dec/15/i-received-a-first-but-it-felt-tainted-and-undeserved-inside-the-university-ai-cheating-crisis

Havergal, C. (2025) ‘Students win plagiarism appeals over generative AI detection tool’, Times Higher Education, 15 July. Available at: https://www.timeshighereducation.com/news/students-win-plagiarism-appeals-over-generative-ai-detection-tool

Dickinson, J. (2025, May 8). “Academic judgement? Now that’s magic.” Wonkhe. Available at: https://wonkhe.com/blogs/academic-judgement-now-thats-magic/  

Grove, J. (2024) ‘Student AI cheating cases soar at UK universities’, Times Higher Education, 1 November. Available at: https://www.timeshighereducation.com/news/student-ai-cheating-cases-soar-uk-universities

Rowsell, J. (2025) ‘Universities need to “redefine cheating” in age of AI’, Times Higher Education, 27 June. Available at: https://www.timeshighereducation.com/news/universities-need-redefine-cheating-age-ai

Webb, M. (2023, March 17). AI writing detectors – concepts and considerations. National Centre for AI / Jisc. Available at: https://nationalcentreforai.jiscinvolve.org/wp/2023/03/17/ai-writing-detectors/

Giant humanoids and automata

Today I had the honour of delivering a lecture as part of the Associate of King’s College (AKC) series. The AKC has its origins in the earliest days of King’s, almost two centuries ago, and continues as a cross-disciplinary programme; this year there are around 5,000 participants, including staff. It was a busy room (most of the 5,000 watch online, though), and I felt very aware of the history behind the series while speaking. This post is not intended to summarise the entire lecture but is a quick reflection based on a question posed by one of the students after the lecture.

The lecture itself, titled Rethinking Human Learning in the Age of AI, was structured as a journey through time and across cultures. I wanted to draw attention to the long history of machines, automata and tools designed to work alongside us or, at times, to imitate us. Alongside this, I wanted to acknowledge current concerns about cognitive offloading, over-reliance on AI, and the anxiety that students (and others) may be outsourcing thinking itself. Rather than focus solely on the present moment, I wanted to show that many of these concerns are not new. They have deep roots in myth, invention and cultural imagination.

I began by considering why humans have been drawn to making machines that act or look like us. First, Talos, the bronze giant forged by Hephaestus to guard Crete. Talos: always on, a 24/7 sentinel, ever watchful, apparently sentient, yet bound to servitude. Despite his scale, he was defeated by Jason (see how, around 6 minutes into the video below). The question I raised was: why build a giant in human form to defend an island when there might have been other, more efficient forms of defence? And what are the hidden consequences of a defender shaped like a human? And do we not actually feel sympathy for Talos when he dies?

The second example was the story of Yan Shi, artificer to King Mu around 3000 years ago, who constructed a singing and dancing automaton. The figure was so lifelike that it provoked admiration, but when it began to flirt with women in the court, the king’s admiration turned to fear and fury. Yan Shi had to dismantle it to reveal its workings and save his own life. The story anticipates Masahiro Mori’s ‘Uncanny Valley’ effect. The discomfort arises not simply from human likeness, but from behaviour that unsettles what we assume about intention, autonomy and control.

The third example was the tradition of the Karakuri puppets dating from 17th century Japan, whose fluid, human-like movements still evoke fascination. As with the Bunraku puppets (life size theatrical puppets), we know they are not real, yet we are drawn to the artistry and precision. There is both enchantment and a kind of deception. The craftsmanship invites admiration, but it also encourages us to question what lies beneath the surface.

With all these examples I suggested that our enchantment with lifelike machines can be both captivating and disarming. In each case, the machine is inspired by human design and each in its own way astounds and captivates. But Talos, despite his size, had a single point of vulnerability. Yan Shi’s automaton ultimately profoundly disturbed its audience. The Karakuri mechanical dolls delight and amaze in the main but, like Talos and Yan Shi’s automaton, challenge us to ‘look under the hood’ to see both how they work and to uncover frailties. My point, when I eventually got to it, was that the natural language exchanges and fluent outputs of some modern AI tools can similarly enchant us and lead us to assume capabilities they do not have. We need to, within our own capabilities, look under the hood.

I went on from there… you can see the whole journey reflected in this which I presented as an advance organiser (try to ignore the flags, they spoil it a bit):

After the lecture, a student asked what I thought was an excellent question: why are modern robots, especially those that intersect with novel AI tools, and representations of them in popular culture, so frequently humanoid? Why, given that the most effective machines are those designed for highly specific functions, are we drawn to building robots in our own image? We talked about how a Roomba vacuum cleaner is far more practical than a 5-foot humanoid robot pushing around a standard vacuum but, still, in the popular imagination the latter is the imagined domestic help of the future, à la Rosie from The Jetsons.

Industrial robots in car plants are arms, not bodies, because arms are what are required to achieve the task. So why does the humanoid form persist so strongly in imagination and development?

I replied, almost instinctively, that one of the reasons we return to the humanoid shape is a lack of imagination. Even among the highly skilled, it is difficult to escape the pull of the human form. We continue to project ourselves onto our technologies, even when the task at hand requires something entirely different. This is not to say that humanoid robots are always misguided. Sometimes there is a clear functional rationale. But in many cases the fascination with the human shape seems to outweigh the practical benefits. We see this in videos of humanoid robots attempting to play football, really really badly, yet we persist.

I mentioned an example I saw recently at the New Scientist Festival: a robotic elderly human head, connected to a large language model, with articulated features. It was being trialled in care homes. I found this compelling because it was not the typical youthful, idealised (and so typically female – which raises other disturbing assumptions, I have to say) robot form that popular technology tends to prioritise. It was designed because there was a specific need to support human interaction where human presence was limited. It was based on the developers’ own research evidence that residents responded better to a human face than to a screen or disembodied voice. It did not need a body to fulfil that role, but it needed the head. Problem > design > research > testing > honing. It contrasts dramatically with the ‘let’s make a robot that takes us into the uncanny valley and out the other side!’ approach.

What do you think?

Incidentally, I do not know why I included the word automata in my lecture as I failed (as I always do) to say it properly.

Comet limits restless legs

I’m one of those people whose knee is constantly jiggling. Especially when I am sat in ‘receive’ mode in a meeting or something. To reduce the jiggling I fiddle with things, and the thing I have been fiddling with will be familiar to anyone who likes to see what all the fuss is about with new tech. I’m always asking myself – novelty or utility? (I had my fingers burnt with interactive whiteboards and have been cautious ever since.) You may be interested in the output of Perplexity’s ‘Comet’ – the browser-based AI agent, the outcomes of which are littering LinkedIn right now – or the video below, which is a conversation between me and one of my AI avatars… if not either of these, I’d stop reading now tbh.

In the image below is a link to what I instructed using a simple prompt: “display this video in a window with some explanatory text about what it is and then have a self-marking multi choice quiz below it.” [youtube link]

It is a small web application that displays a YouTube video, provides some explanatory text, and then offers a self-marking multiple choice quiz beneath it.

Click on the image to see the artefact and try the quiz

The process was straightforward but illuminating. The agent prepared an interactive webpage with three generated files (index.html, style.css, and app.js) and then assembled them into a functioning app. It embedded the YouTube video (though it needed an additional prompt when the video did not initially display), added explanatory text about the focus of the video (AI in education at King’s College London), and then generated an eight-question multiple choice quiz based on the transcript.

The quiz has self-marking functionality, with immediate feedback, score tracking and final results. The design is clean and the layout works, in my view. The questions cover key points from the transcript: principles, the presenter’s role, policy considerations and recommendations for upskilling. The potential applications are pretty obvious, I think. Next steps would be to look at likely accessibility issues (a quick check highlights a number of heading and formatting issues), to find a better solution for hosting, and then to see how easily the questions can be fine-tuned for level. But given I only needed to tweak one for this example, even that basic functionality suggests this will be of use.

The real novelty here is the browser, but also the execution. I have tried a few side-by-side experiments with Claude and in each the fine-tuning needed for a satisfactory output was less here. The one failed experiment so far is converting all my saved links to a searchable/filterable dashboard. The dashboard looks good, but I think there were too many links and it kept failing to make all the links active. Where tools like NotebookLM offer a counter to the ‘text in; reams out’ UX of LLMs of the ChatGPT variety, this offers a closer-to-seamless agent experience, and it is both ease of use and actual utility that will drive use, I think.

‘I can tell when it’s been written by AI’

Warning: the first two paragraphs might feel a bit like wading through treacle but I think what follows is useful and is probably necessary context to the activity linked at the end!

LLMs generate text using sophisticated prediction/probability models, and whilst I am no expert (so if you want proper, technical accounts please do go to an actual expert!) I think it useful to home in on three concepts that help explain how their outputs feel and read: temperature, perplexity and burstiness. Temperature sets how adventurous the word-by-word choices are: low values produce steady, highly predictable prose; high values (typically on a 0–1 or 0–2 scale, depending on the tool) invite surprise and variation (supposedly more ‘creativity’ and certainly more hallucination). Perplexity measures how hard it is to predict the next word overall, and burstiness captures how unevenly those surprises cluster, like the mix of long and short sentences in some human writing, and maybe even a smattering of stretched metaphor and whimsy. Most early (I say early making it sound like mediaeval times, but we’re talking 2-3 years ago!) AI writing felt ‘flat’ or ‘bland’ and therefore more detectable to human readers because default temperatures were conservative and burstiness was low.
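
For readers who like to see the mechanics, here is a purely illustrative sketch (not how any particular detector actually works) of how perplexity, and one common proxy for burstiness, could be computed if you had per-token probabilities to hand. The probabilities below are invented for the example:

```python
import math

def perplexity(token_probs):
    # Perplexity = exp(mean negative log-probability per token).
    # Lower = the text was easy for the model to predict ('flat' prose).
    avg_neg_logprob = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_logprob)

def burstiness(sentence_perplexities):
    # One common proxy: how much perplexity varies from sentence to sentence.
    # Even, predictable prose -> low burstiness; a mix of plain and surprising
    # sentences -> high burstiness.
    mean = sum(sentence_perplexities) / len(sentence_perplexities)
    variance = sum((x - mean) ** 2 for x in sentence_perplexities) / len(sentence_perplexities)
    return math.sqrt(variance)

# Invented per-token probabilities for two imaginary sentences.
predictable = [0.9, 0.8, 0.85, 0.9, 0.8]   # steady, expected word choices
surprising  = [0.9, 0.2, 0.7, 0.05, 0.6]   # a few unexpected words

print(perplexity(predictable))   # low perplexity
print(perplexity(surprising))    # noticeably higher
print(burstiness([perplexity(predictable), perplexity(surprising)]))
```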

I imagine most ChatGPT (other tools are available) users do not think much about such things, given these are not visible choices in the main user interface. Funnily enough, I do recall these were options in the tools that were publicly available and pre-dated GPT-3.5 (the BIG release in November ’22). Like a lot of things, skilled use can have an impact (so a user might specify a style or tone in the prompt). Also, with money comes better options: Pro account custom GPTs, for example, can have precise built-in customisations. I also note that few seem to use the personalisation options that override some of the things that many folk find irritating in LLM outputs (mine states, for example, that it should use British English as default, never use em dashes and use ‘no mark up’ as default). I should also note that some tools still allow for temperature manipulation in the main user interface (Google Gemini AI Studio, for example) or when using the API (ChatGPT). Google AI Studio also has a ‘top P’ setting allowing users to specify the extent to which word choices are predictable or not. These things can drive you to distraction, so it’s probably no wonder that most right-thinking, time-poor people have no time for experimental tweaking of this nature. But as models have evolved, developers have embedded dynamic temperature controls and other tuning methods that automatically vary these qualities. The result is that the claim ‘I can tell when it’s AI’ may be true of inexpert, unmodified outputs from free tools but is much harder to sustain for more sophisticated use and paid-for tools. Interestingly, the same appears true for AI detectors. The early detectors’ reliance on low-temperature signatures now needs revisiting too, for those not already convinced of their vincibility.
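
If you fancy seeing what those dials actually do, here is a minimal sketch (my own illustration, not any vendor’s implementation) of temperature scaling and top-p (‘nucleus’) filtering applied to an invented next-word distribution:

```python
import math
import random

def apply_temperature(logits, temperature):
    # Divide raw scores (logits) by the temperature, then softmax.
    # T < 1 sharpens the distribution (safer, more predictable choices);
    # T > 1 flattens it (more surprising choices).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(words, probs, p=0.9):
    # Keep the smallest set of words whose cumulative probability reaches p,
    # then renormalise: this is the candidate pool for nucleus sampling.
    ranked = sorted(zip(words, probs), key=lambda wp: wp[1], reverse=True)
    kept, cumulative = [], 0.0
    for word, prob in ranked:
        kept.append((word, prob))
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(prob for _, prob in kept)
    return [(word, prob / total) for word, prob in kept]

# Invented scores for candidate words after "The cat sat on the ..."
words  = ["mat", "sofa", "roof", "keyboard", "volcano"]
logits = [4.0, 2.5, 1.5, 0.5, -1.0]

for t in (0.2, 1.0, 2.0):
    probs = apply_temperature(logits, t)
    pool = top_p_filter(words, probs, p=0.9)
    pick = random.choices([w for w, _ in pool], weights=[q for _, q in pool])[0]
    print(f"T={t}: pool={[(w, round(q, 2)) for w, q in pool]} -> {pick}")
```

At T=0.2 virtually all the probability piles onto ‘mat’ and the pool collapses to a single word; at T=2.0 the pool widens and less likely words come into play – roughly the predictable-versus-adventurous trade-off described above.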

Evolutionary and embedded changes therefore have a humanising effect on LLM outputs. Modern systems can weave in natural fluctuations of rhythm and unexpected word choices, erasing much of the familiar ChatGPT blandness. Skilled (some would say ‘cynical’) users, whether through careful prompting or by passing text through paraphrasers and ‘humanisers’, can amplify this further. Early popular detectors such as GPTZero (at my work we are clear colleagues should NEVER be uploading student work to such platforms, btw) leaned heavily on perplexity and burstiness patterns to spot machine-generated work, but this is increasingly a losing battle. Detector developers are responding with more complex model-based classifiers and watermarking ideas, yet the arms race remains uneven: every generation of LLMs makes it easier to sidestep statistical fingerprints and harder to prove authorship with certainty.

For fun I ran this article through GPT Zero….Phew!

It is also worth reflecting on what kinds of writing we value. My own style, for instance, happily mixes a smorgasbord of metaphors in a dizzying (or maybe it’s nauseating) cocktail of overlong sentences, excessive comma use and dated cultural references (ooh, and sprinkles in frequent parentheses too). Others might genuinely prefer the neat, low-temperature clarity an AI can produce. And some humans write with such regularity that a detector might wrongly flag them as synthetic. I understand that these traits may often reflect the writing of neurodivergent or multilingual students.

To explore this phenomenon and your own thinking further, please try this short activity. I used my own text as a starting point and generated (in Perplexity) five AI variants of varying temperatures. The activity was built in Claude. The idea is that it reveals your own preferred ‘perplexity and burstiness combo’ and might prompt a fresh look at your writing preferences and the blurred boundaries between human and machine style. The temperature setting is revealed when you make your selection. Please try it out and let me know how I might improve it (or whether I should chuck it out the window, i.e. DefenestrAIt it).

Obviously, as my job is to encourage thinking and reflection about what this means for those teaching, those studying and broadly the institution they work or study in, I’ll finish with a few questions to stimulate reflection or discussion:

In teaching: Do you think you can detect AI writing? How might you respond when you suspect AI use but cannot prove it with certainty? What happens to the teacher-student relationship when detection becomes guesswork rather than evidence?

For assignment design: Could you shift towards process-focused assessment or tasks requiring personal experience, local knowledge or novel data? What kinds of writing assignments become more meaningful when AI can handle the routine ones? Has that actually changed in your discipline or not?

For your students: How can understanding these technical concepts help students use AI tools more thoughtfully rather than simply trying to avoid detection? What might students learn about their own writing voice through activities that reveal their personal perplexity and burstiness patterns? What is it about AI outputs that students who use them value and what is it that so many teachers disdain?

For your institution: Should institutions invest in detection tools given this technological arms race, or focus resources elsewhere? How might academic integrity policies need updating as reliable detection becomes less feasible?

For equity: Are students with access to sophisticated prompting techniques or ‘humanising’ tools gaining unfair advantages? How do we ensure that AI developments don’t widen existing educational inequalities? Who might we be inadvertently discriminating against with blanket bans or no use policies?

For the bigger picture: What kinds of human writing and thinking do we most want to cultivate in an age when machines can produce increasingly convincing text? How do we help students develop authentic voice and critical thinking skills that remain distinctly valuable?

When you know the answer to the last question, let me know.

Plus ça change; plus c’est a scroll of death

Hang on, it was summer a minute ago
I looked at my blog just now and saw my last post was in July. How did the summer go so fast? There’s a wind howling outside, I am wearing a jumper, and both actual long dark wintry nights and the long dark metaphorical ones of our political climate seem to loom. To warm myself up a little I have been looking through some tools that offer AI integrations into learning management systems (LMS aka VLEs)* rather than doing ‘actual’ work. That exploration reminded me of the first ever article I had published back in 2004. The piece has long since disappeared from wherever I saved the printed version and is no longer online (not everything digital lasts forever, thank goodness), but I dug the text out of an old online storage account and reading it through has made me realise how much things have changed broadly while, in other ways, it is still the same show rumbling along in the background, like Coronation Street (but no-one really remembers when it went from black and white to colour).

What I wrote back then
In that 2004 article I described the excitement of experimenting with synchronous and asynchronous digital discussion tools in WebCT (for those not ancient like me, Web Course Tools – WebCT – was an early VLE developed by the University of British Columbia which was eventually subsumed into Blackboard). I was teaching GCSE English and was programme leader for an ‘Access to Primary Teaching’ course, and many of my students were part time so only on campus for 6 hours per week across two evenings. I’d earlier taught myself HTML so I could build a website for my history students – it had lots of text! It had hyperlinks! It had a scrolling marquee! Images would have been nice but I knew my limits. When I saw WebCT, I was fired up by the possibilities of discussion forums and live chat. When I set it up and trialled it I saw peer support, increased engagement with tough topics, and participation from ‘quiet’ students, amongst other benefits. I was so persuaded by the added-value potential I even ran workshops with colleagues to share that excitement.

See this great intro to WebCT from someone in the CS department at British Columbia from 1998:

That is still me of course. My job has changed and so has the context, but the impulse to share enthusiasm for digital tools that foster dialogue and interaction remains why I do what I do. It was nice to read that and I felt a fleeting affection for that much younger teacher, blissfully unaware of the challenges ahead! Even so, and forming a rattling cognitive dissonance that is still there, I was frustrated by the clunky design and awkward user interface that made persuading colleagues to use it really challenging. Log-in issues took up a lot of time, and balancing ‘learning’ use with what I then called ‘horseplay’ (what was I, 75?!) took a while to calibrate. Nevertheless, I thought these worth working through but, even though some evidence of uptake across the college I was at was apparent, there was a wider scepticism and reluctance. Why wouldn’t there be? ‘It’s too complex’; ‘I am too busy’; ‘the way I do it now works just fine, thank you’. Pretty much every digital innovation has been accompanied by similar responses; even the good ones! I speculated about whether we needed a blank sheet of paper to rethink what an LMS could be, but concluded that institutions were more likely to tinker and add features than to start again.

2004? Feels like yesterday; feels like centuries ago
It was only 2003–4 (he says, painfully aware that I have colleagues who were born then), yet experimenting with an LMS felt novel, and that comes over really clearly in my article. If you’d asked me this morning when I started using an LMS I might have said 1998 or 99. 2003 feels so recent in the context of my whole teaching career. What the heck was I doing before all that? Thinking back, I realise that in my first full-time job there was only one computer in our office and John S. got to use that as he was a trained typist (so he said). And older than me. In the article I was carefully explaining what chat and forums were and how they were different from one another, so the need for that dates the phenomenon too, I suppose. Later, after moving to a Moodle institution, I became e-learning lead and engaged with JISC working groups – a JISC colleague who oversaw the VLE working group jokingly called me Mr Anti-Moodle because I was vocal in my critiques. It wasn’t quite accurate – I was critical for sure but then, as now, I liked the concept but disliked the way it worked. Persuading people to adopt an LMS was hard, as I said, and, while I have seen some brilliant use of Moodle and the like, my impression is that the majority (argue with me on this though) of LMS courses are functional repositories, with interactive and creative applications the exception rather than the norm. The scroll of death was a thing in 2005 and it is as much of a thing now. It also made me think of current ‘Marmitey’ positions folk are taking re: AI. Basically, AI (big and ill-defined as it usually is) has to come with nuance and understanding, so binary, entrenched, one-size-fits-all positions are unhelpful and, in my view, hard to rationalise and sustain.

The familiar LMS problem
Back to the LMS: from WebCT to Moodle and other common current systems, the underlying functionality has barely shifted (I mean from the perspective of your average teacher/lecturer or student). Many still say Moodle feels very 1990s (probably they mean early 2000s, but I suspect they, like me, find it hard to reconcile the idea that any year starting with 2000 could be a long time ago). Ultimately I think none of these systems offered a genuinely encouraging combination of interface and user experience, and that is an issue that persists to this day. The legacy of those early design decisions lingers, and we are still working around them. People have been predicting the death of the VLE for years (including me) but it has not happened. When I first saw Microsoft Teams just before Covid, I thought here’s the nail in the coffin. I was wrong again. Maybe being wrong about the end of the LMS is another running theme.

Will AI change the LMS story?
So what about AI-powered integrations? Will they revolutionise how the LMS works? Will they be part of the reason for a shift away from them? Unlikely in either sense is my best guess. Everything I see now is about embellishments and shortcuts that feed into the existing structure. My old dream of a blank-sheet LMS revolution has faded. Thirty years of teaching and more than twenty years using LMSs suggest that this is one component of digital education that will not fade away. The tools will keep evolving, but the slow, steady thrum of the LMS endures in the background. I realise that I have finally predicted non-change, so don’t bet on that, as I have been wrong quite a bit in the past. What I do know is that digital discussions using tools to support dialogic pedagogies have persisted, as have the issues related to them. Only 10-20% of my students use the forums! I hear that still. But what I realised in 2004 and maintain to this day is that 10-20% is a significant embellishment for some and an alternative for others, so I stick with what I said back then in that sense at least. Oh, and lurking is a legit and fine thing for yet others!

One of the most wonderful things about the AI in Education course (so close to 15,000 participants!) is the forums. They add layers of interest that cannot be planned or produced. I estimate only 10-15% of participants post, but what a contribution they are making, and it’s an enhancement that keeps me there and, I am convinced, adds real value to those not posting too.

*I’ll stick with LMS as this seems to be pretty ubiquitous these days, though I am aware of the distinctions, and when I wrote the piece about ‘WebCT’ the term VLE was very much the go-to.

Innovation, AI and (weirdly) the new PSF

Mark Twain almost certainly said: 

“substantially all ideas are second-hand, consciously and unconsciously drawn from a million outside sources, and daily used by the garnerer with a pride and satisfaction born of the superstition that he originated them” 

and he is also attributed with saying:

 “A person with a new idea is a crank until the idea succeeds”

Both takes, perhaps even while being a little contradictory, relate to the idea of innovation. In this post, which I initially drafted in an interaction with GPT Pro Advance voice chat while walking to work, I have thrown down some things that have been bothering me a bit about this surprisingly controversial word.

Firstly, what counts as innovation in education? You often hear folk argue that, for example, audio feedback is no innovation as teachers somewhere or other have been doing it for donkey’s years. The more I think, though, the more I’m certain that actions/interventions/experiments/adaptations are rarely innovative by themselves; what matters fundamentally is context. Something that’s been around for years in one field or department might be utterly new and transformative somewhere else.

Objective Structured Clinical Examinations are something I have been thinking about a lot, because I believe they may inspire others to adapt this assessment approach outside the health professions. In medical education, they’re routine. In business or political economy, observable stations to assess performance or professional judgement would probably be deemed innovative. Chatting with colleagues, they could instantly see how something like that might work in their own context, but with different content, different criteria and perhaps a different ethos. In other words, in terms of the thing we might try to show we are doing to evidence the probably impossible-to-achieve ‘continuous improvement’ agenda, innovation isn’t about something being objectively new; it’s about it being new here. It’s about context, relevance and reapplication.

Innovation isn’t (just) what’s shiny

Ages ago I wrote about the danger and tendency for educators (and their leaders) to be dazzled by shiny things. But we need to move away from equating innovation with digital novelty. The current obsession is AI, unsurprisingly, but it’s easy to get swept along in the sheen of it, especially if, like me, you are a vendor target. This, though, reminds me that there’s a tendency to see innovation as synonymous with technological disruption. But I’d argue the more interesting innovations right now are not just about what AI can do, but how people are responding to it.

Arguable I know, but I do believe AI offers clear affordances: supporting diverse staff and student bodies, support for feedback, marking assistance, rewriting for tone, generating examples or case studies. And there’s real experimentation happening, much of it promising, some of it quietly radical. At the same time I’m seeing teams innovate in the opposite, analogue direction. Not because they’re nostalgic, conservative or anti-tech (though some may be!), but because they’re worried about academic integrity or concerned about the over-automation of thinking. We’re seeing a return to in-person vivas, handwritten tasks, oral assessments. These are not new, but they are being re-justified in light of present challenges. It could be seen as innovation via resistance.

Collaboration as a key component of innovation

In amongst the amazing work reflected on, I see a lot of claims for innovative practice in the many Advance HE fellowship submissions I read as internal and external reviewer. In some ways, seemingly very similar activities could be seen as innovative in one place and not another. While not a mandatory criterion, innovation is:

  • Encouraged through the emphasis on evidence-informed practice (V3) and responding to context (V4).
  • Often part of enhancing practice (A5) via continuing professional development.
  • Aligned with Core Knowledge K3, which stresses the importance of critical evaluation as a basis for effective practice – and this often involves improving or innovating methods. In the guidance for King’s applicants, innovation is positioned as a natural outcome of reflective practice.

So while the new PSF (2023) doesn’t promote innovation explicitly, what it does do (and this is new) is promote collaboration. It explicitly recognises the importance of collaboration and working with others, across disciplines, roles and institutions, as a vital part of educational practice. That’s important because, whilst in the past perceptions of innovation have stretched the definition and celebrated individual excellence in this space, many of the most meaningful innovations I’ve seen emerge from collaboration and conversation. This takes us back to Twain and borrowing, adapting, questioning.

We talk of interdisciplinarity (often with considerable insight and expertise, like my esteemed colleagues Dave Ashby and Emma Taylor), and sometimes big but often small-scale, contextual innovation comes from these sideways encounters. But they require time, permission and a willingness to not always be the expert in the room. Something innovators with a lingering sense of the inspiring, individual creative may have trouble reconciling.

Failure and innovation

We have a problem with failure in HE. We prefer success stories and polished case studies. But real innovation involves risk: things not quite working, not going to plan. Even failed experiments are educative. But often we structure our institutions to minimise that kind of risk, to reward what’s provable, publishable, measurable, successful. I have argued that  we do something similar to students. We say we want creativity, risk-taking, deep engagement. But we assess for precision, accuracy, conformity to narrow criteria and expectations. We encourage resilience, then punish failure with our blunt, subjective grading systems. We ask for experimentation but then rank it. So it’s no surprise if staff, like students, when encouraged to be creative or experimental, can be reluctant to try new things.

AI and innovation

I think I am finally getting to my point. The innovation AI catalyses goes far beyond AI use cases. It’s prompting people to re-examine their curricula, reassess assessment designs, and rethink what we mean by original thinking or independent learning. It’s forcing conversations we’ve long avoided, about what we value, how we assess, and how we support students in an age of automated possibility. Even WHETHER we should continue to grade. (Incidentally, amongst many fine presentations yesterday at the King’s/Cadmus event on assessment, I heard an inspiring argument against grading by Professor Bugewa Apampa from UEL. It’s so good to hear clearly articulated arguments on the necessity of confronting the issues related to grading from someone so senior.)

Despite my role (AI and Innovation Lead), some of the best innovations I’ve seen aren’t about tech at all. They’re about human decisions in response to tech. They’re about asking, “What do we not want to automate?” or “How can we protect space for dialogue, for process or for pause?”

If we only recognise innovation when it looks like disruption, we’ll miss a lot. 

Twain, Mark. Letter to Helen Keller, 17 March 1903 [cited 19 June 2025]. Available from: https://www.afb.org/about-afb/history/helen-keller/letters/mark-twain-samuel-l-clemens/letter-miss-keller-mark-twain-st

AI and the pragmatics of curriculum change

Whilst some (many) academic staff and students voice valid and expansive concerns about the use of, or focus on, artificial intelligence in education, I find myself (finally, perhaps, and later in life) much more pragmatic. We hear the loud voices, and I applaud many acts of resistance, but we cannot ignore the ‘explosive increase’ in AI use by students. It’s here, and that is one driver. More positive drivers for change might be the visible trends and future predictions in the global employment landscape, and the affordances, in terms of data analytics and medical diagnostics (for example), that more widely defined AI promises. As I keep saying, this doesn’t mean we need to rush to embrace anything, nor does it imply that educators must become computer scientists overnight. But it does mean something has to give and, rather than (something else I have been saying for a long time) knee-jerk ‘everyone back in the exam halls’ type responses, it’s clear we need to move a wee bit faster in the name of credibility (of the education and the bits of paper we dish out at the end of it), as well as the value to students of what we are teaching and, perhaps more controversially I suppose, how we teach and assess them.

Over the past months, I’ve been working with colleagues to think through what AI’s presence means for curriculum transformation. In discussions with colleagues at King’s most recently, three interconnected areas keep surfacing, and this is my effort to set them out:

Content and Disciplinary Shifts

We need to reflect not just on what we might add, but on what we can subtract or reweight. The core question becomes: How is AI reshaping knowledge and practice in this discipline, and how should our curricula respond?

This isn’t about inserting generic “AI in Society” modules everywhere. It’s about recognising discipline-specific shifts and preparing students to work with, and critically appraise and engage with, new tech, approaches, systems and ideas (and the impacts consequent on their implementation). Update 11th June 2025: My colleague, Dr Charlotte Haberstroh, pointed out on reading this that there is an additional important dimension to this, and I agree. She suggests we need to find a way to enable students to question and make connections explicitly: ‘how does my disciplinary knowledge help me (the student) make sense of what’s happening, how can it inform my analysis of causes/consequences of how AI is being embedded into our society (within the part that I aim to contribute to in the future / participate in as responsible citizen)’. Examples she suggested: in Law it could be around how AI alters the meaning of intellectual property; in HR it is going to be about AI replacing workers (or not) and/or the business model of the tech firms driving these changes; in History it is perhaps how we have adopted technologies in the past and how that helps us understand what we are doing now.

Some additional examples of how AI as content crosses all disciplines: 

  • Law: AI-powered legal research and contract review tools (e.g. Harvey) are changing the role of administration in law firms and the roles of junior solicitors.
  • Medicine: Diagnostic imaging is increasingly supported by machine learning, shifting the emphasis away from manual pattern recognition towards interpretation, communication, and ethical judgement.
  • Geography: Environmental modelling uses real-time AI data analytics, reshaping how students understand climate systems.
  • History and Linguistics: Machine learning is enabling large-scale text and language analysis, accelerating discovery while raising questions about authorship, interpretation, and cultural nuance.

Assessment Integrity and Innovation

Much of the current debate focuses on the security of assessment in the age of AI. That matters a lot, of course, but if it drives all our thinking (and I feel it is still the dominant narrative in HE spaces), we will double down on distrust and always default to suspicion and restriction, rather than making our starting point designing for creativity, authenticity and inclusivity.

The first shift needs to be moving from ‘how do we catch cheating?’ to ‘where and how can we “catch” learning?’, as well as ‘how do we design assessments that AI can’t meaningfully complete without student learning?’ Does this mean redefining ‘assessment’ beyond the narrow ‘evaluative’ definition we tend to elevate? Probably, yes.

Risk is real; inappropriate, foolish, unacceptable, even malicious use of AI is a real thing too. So robustness by design is important too: iterative, multi-stage tasks; oral components; personalised data sets; critical reflection. All are possible without reverting to closed-book exams. These have a place but are no panacea.

Examples of AI-shaping assessment and design:

AI Integration & Critical Literacies

Students need access to AI tools; they need choice (this will be an increasingly big hurdle to navigate); and they need structured opportunities to critique and reflect on their use. This means building critical AI literacy into our programmes or, minimally, into the extra-curricular space, not as a bolt-on but as an embedded activity. Given what I set out above, this will need nuancing to the disciplinary context. It’s happening in pockets, but I would argue it needs more investment and an upping of the pace. Given the ongoing crises in UK HE (if not globally), it’s easy to see why this may not be seen as a priority.

I think we need to do the following for all students. What do you think? 

  • Critical AI literacy (what it is, how it works (and where it doesn’t), all the mess it connotes)
  • Aligned with better information/digital literacy (how to verify, attribute, trace and reflect on outputs, and triangulate)
  • Assessment and feedback literacy (how to judge what’s been learned, and how it’s being measured)

Some examples of where the discipline needs nuance and separate focus and why it is so complex: 

  • English Literature / History / Politics: Is the essay dead? Students can prompt ChatGPT to generate essays, but how are they generating passable essays when so much of the critique is about the banality of homogenised outputs and the lack of anything resembling critical depth? How can we (in a context where anonymous submission is the default) maintain value in something deemed so utterly central to humanities and social science study?
  • Medical and Nursing education: I often feel observed clinical examinations hold a potential template for wider adoption in non-medical disciplines. And AI simulation tools offer lifelike decision-making environments, so we are seeing increasing exploration of the potential here: the literacy lies in knowing what AI can support, what it cannot do, and how to bridge that gap. Who learns this? Where is the time to do it? How are decisions made about which tools to trial or purchase?

Where to Start: Prompting thoughtful change

These three areas are best explored collectively, in programme teams, curriculum working groups or assessment/module review teams. I’d suggest that, to begin with, these teams discuss the following questions and then move on from there.

  1. Where have you designed assessments that acknowledge AI in terms of the content taught? What might you need to modify looking ahead?
    (e.g. updated disciplinary knowledge, methodological changes, professional practice)
  2. Have you modified assessments where vulnerability is a concern (e.g. risk of generative AI substitution, over-reliance on closed tasks, integrity challenges)? Have you drawn on positive reasons for change (e.g. scholarship in effective assessment design)?
  3. Have you designed or planned assessments that incorporate, develop or even fully embed AI use?
    (e.g. requiring students to use, reflect on or critique AI outputs as part of their task)

I do not think this is AI evangelism, though I accept that some will see it as such, because I believe that engagement is necessary and, in fact, an ethical responsibility to our students. That’s a tough sell when some of those students are decrying anything with ‘AI’ in it as inherently and solely evil. I’m not trying to win hearts and minds to embrace anything; rather, I’m arguing that these technologies need much broader definition and understanding, and that from there we may critique and evolve.

Future of Work?

I wanted to drop these two reports in one place. Neither, of course, is concerned with the wider ethical issues of AI, and I do not want to come across as a tech-bro evangelist, but I do think my pragmatism, and the necessary ‘responsible engagement’ approach many institutions are now taking, is buttressed by the profound trends we are seeing (like them or not). The barriers to change (as seen through the eyes of employers) struck me too, as we can see similar manifestations in educational spaces: skills gaps, cultural resistance and outdated regulation.

The Future of Jobs Report 2025 explores how global labour markets will evolve by 2030 in response to intersecting drivers of change: technological advances (especially AI), economic volatility, demographic shifts, climate imperatives and geopolitical tensions. Based on responses from over 1,000 global employers covering more than 14 million workers, the report predicts large-scale job transformation. Jobs equivalent to 14% of current employment (170 million) are expected to be created, while 8% (92 million) will be displaced, resulting in net growth of around 6%. The transition will be skills-intensive, with 59% of workers needing retraining. Those numbers are enough to make you gasp and drop your coffee.
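As a quick sanity check on those percentages, here is a minimal back-of-envelope sketch using only the figures quoted above; the implied employment base is derived from the report’s own stated percentages, so treat the output as an approximation rather than the report’s exact totals.

```python
# Back-of-envelope check of the Future of Jobs 2025 figures quoted above.
# The employment base is inferred from the report's own percentages, so these
# numbers are approximations, not the report's exact totals.

created_m = 170            # jobs expected to be created, in millions (~14% of current employment)
displaced_m = 92           # jobs expected to be displaced, in millions (~8%)

base_m = created_m / 0.14            # implied employment base, ~1,214 million jobs
net_m = created_m - displaced_m      # net new jobs, 78 million
net_pct = 100 * net_m / base_m       # net growth as a share of the base, ~6.4%

print(f"Implied base: ~{base_m:,.0f} million jobs")
print(f"Net change:   +{net_m} million jobs (~{net_pct:.1f}% growth)")
```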

PwC’s 2025 Global AI Jobs Barometer presents an incredibly optimistic analysis of how AI is reshaping the global workforce. Drawing on a billion job ads and thousands of company reports from across the globe, it suggests that AI is enhancing productivity, shifting skill demands and increasing the value of workers. Rather than displacing workers, it argues, AI is acting as a multiplier, especially when deployed agentically. The findings provide a counterpoint to common (and, I’d argue, perfectly reasonable and rational!) fears about AI-induced job losses.

Whilst I am still wearing the biggest of my cynical hats, I concur that urgent investment in skills (and critical engagement) is imperative and that, lest we lose any residual handle on shaping the narratives in this space, we need to invest much more of our effort in considering where we need to adapt what we research, the design and content of our curricula, and the critical and practical skills we need to develop. Given the timeframes suggested in these reports, we’d better get on with it.

Is AI like a cute puppy?

TL;DR? No, it is not, so why would you embrace it?

I have mentioned this before but it keeps cropping up so I am going to labour the point again. The idea of ‘embracing’ AI in education (or anywhere) can be seen to grow as a narrative throughout 2023 and was already on a steep upward trajectory prior to that.

A line chart showing the frequency of the phrase “Embrace AI” in published texts from 2000 to 2022. The horizontal axis runs from 2000 to 2022; the vertical axis shows tiny percentage values from 0% up to 0.00000024%. From 2000 through about 2014, the blue line hugs the baseline at essentially 0%, with a very slight rise between 2006 and 2012 and a dip around 2014. Beginning around 2015, the line climbs steeply, reaching approximately 0.00000022% by 2022. A tooltip at the year 2000 notes a value of 0.00000000%.
Google Ngram viewer for ‘Embrace AI’
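For anyone who wants to poke at the underlying data themselves, here is a minimal sketch of how the chart above might be reproduced programmatically. It assumes Google’s undocumented Ngram Viewer JSON endpoint (https://books.google.com/ngrams/json) and mirrors the parameters visible in the public viewer’s URL; both are unofficial and could change or be rate-limited at any time.

```python
# Minimal sketch: query the (unofficial, undocumented) Google Books Ngram
# Viewer JSON endpoint for the phrase "Embrace AI", 2000-2022.
# Parameter names mirror those in the public viewer URL and may change.
import requests

params = {
    "content": "Embrace AI",
    "year_start": 2000,
    "year_end": 2022,
    "corpus": "en-2019",   # assumed corpus label; older viewer versions used numeric ids
    "smoothing": 0,        # raw yearly frequencies, no smoothing
}
resp = requests.get("https://books.google.com/ngrams/json", params=params, timeout=30)
resp.raise_for_status()

for series in resp.json():
    # "timeseries" holds one relative frequency per year in the requested range
    for year, freq in zip(range(2000, 2023), series["timeseries"]):
        print(year, f"{freq:.2e}")
```

If the endpoint is unavailable, the same query can of course be run manually in the Ngram Viewer itself.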

But a significant contribution to this notion came in this HEPI blog of 5 January 2024, in which Professor Yike Guo urges UK universities to move beyond mere caution and become active adopters of artificial intelligence. Drawing on 34 years of AI, data-mining and machine-learning research at Imperial College London and his role as Provost at HKUST, he warns that AI is not a peripheral tool but a fundamental shift in the educational paradigm. His focus on structural, systemic and pre-existing issues in how we construct education, such as the persistence of rote memorisation in curricula, mirrors my own case for using AI as an opportunity to leverage research-informed changes long needed. Professor Guo advocates for compulsory AI literacy modules that teach students to interrogate and collaborate with digital co-pilots, and insists that the true value of education will lie in cultivating ethical reasoning, emotional intelligence and creativity, which, importantly, are qualities that machines cannot replicate. He says (and I quote this a lot):

“…UK universities face a choice: either embrace AI as an integral component of academic pursuit or risk obsolescence in a world where digital co-pilots could become as ubiquitous as textbooks.”

I tend to agree with much of Professor Guo’s stance: AI is already reshaping higher education, and will continue to do so pretty profoundly, but I find his call to “embrace” AI really troubling. This phrase seems to be everywhere in relation to AI. I hear it every day and I don’t think it is helpful at all. I embrace my wife and daughter (and, somewhat awkwardly, my son and my mum: it’s a generational thing, I think!), a kitten, and even my Spurs-supporting mates last week when we finally won a trophy after 17 years of pain (see picture below).

A photograph taken inside a dimly lit bar showing a joyous celebration among football supporters. In the foreground, an older man wearing glasses and a flat cap laughs with his mouth wide open as a younger man embraces him from behind, both arms wrapped around his shoulders. The younger man, in a light trench coat, leans in close, smiling broadly. Behind them to the right, two other fans—one in a yellow Tottenham Hotspur shirt bearing the name “Kane” and the number 7—are similarly embracing. The background is softly focused, revealing a few more patrons and industrial-style décor with exposed beams and abstract wall art.
Me being embraced by my Spurs buddy ‘JM’ (Photo: Tom Sweetland)

But I do not embrace people or things I neither know nor trust. I do not embrace strangers. Even when I employed someone to complete a loft conversion, and we came to know them well over the course of the (interminable) job, we still didn’t end up hugging each other. Some people love their phones too much and might kiss and hug them, but I think they’re daft. These are tools, nothing more. ‘Embracing AI’ narratives only feed anthropomorphism. They also feed binary framings: do you ‘fully embrace’ or ‘outright reject’? Reality demands something far more nuanced.

To these ends, I am constantly challenging the idea of embracing AI. Instead, I argue for engagement. We can engage with affection, care, warmth and appreciation, but we can also engage with suspicion, trepidation, anxiety, distrust, even fear. Engagement accommodates critical scrutiny as readily as it does positive and productive collaboration. So, bottom line: let’s drop the idea of embracing AI and instead encourage critical engagement with AI (in all its diversity… what we conceptualise AI as is another thing that vexes me, btw). Also: Come on you Spurs!