Essays & AI: collective reflections on the manifesto one year on

It’s roughly a year since we (Claire Gordon and I, plus a collective of academics from King’s & LSE) published the Manifesto for the Essay in the Age of AI. Despite improvements in the tech AND often pretty compelling evidence and arguments for the reduction of take-home, long-form writing in summative assessments, I STILL maintain, as I did this time last year, that the essay has a role. On one of the pages of the AI in Education short course authored by colleagues at King’s from the Institute of Psychiatry, Psychology & Neuroscience (Brenda Williams) and the Faculty of Dentistry, Oral & Craniofacial Sciences (Pinsuda Srisontisuk and Isabel Miletich), they detail patterns of student AI usage. They end with a suggestion that participants take a structured approach to analysing the Manifesto, and the outcome is around 150 responses (to date) offering a broad range of thoughts and ideas from educators working across disciplines and educational levels around the world. This was the forum prompt:

Is the essay dead?

The manifesto above argues that this is not the case, but many believe that long form writing is no longer a reliable way to assess students. What do you think?

Although contributors come from diverse contexts, some shared patterns and tensions really stand out, which I share below. I finish with a wee bit of my own flag-waving (seems to be a popular pastime recently).

Sentiment balance

The overwhelming sentiment is one of broad agreement, with a reformist bent.

  • Most participants explicitly reject the idea that “the essay is dead”. They value essays for nurturing critical thinking, argumentation, independence and the ability to sustain a coherent structure.
  • A minority voice expresses stronger doubts, usually linked to practical issues (e.g. heavy marking loads, students’ shrinking reading stamina, or the ease of AI-generated text), and calls for greater diversification of assessment.
  • There is also a strand of cautious pragmatism: many see the need for significant redesign of both teaching and assessment to remain relevant and credible.

In short, the mood is hopeful and constructive rather than nostalgic or doom ‘n’ gloom. The essay is not to be discarded but has to be re-imagined.

Here are a couple of sample responses:

Not quite dead, no. I think of essays as a ‘thinking tool’ – it’s a difficult cognitive task, but a worthwhile one. I think, as mentioned in the study, an evolution towards ‘process orientated’ assessment could be the saviour of the essay. Perhaps a movement away from the product (an essay itself) being the sole provider of a summative grade is what’s needed. Thinking of coursework, planning, supervisor meetings and a reflective journal on how their understanding developed over the process of researching, synthesising, planning, writing and redrafting could be included. (JF)

In their current form, many take-home essay assessments no longer reliably measure a student’s learning, nor mirror the skills students need for the workplace (as has arguably always been the case for many subjects). I wonder if students may increasingly struggle to see the value of writing essays too. However, I do value the thought processes that go into crafting long form writing. I think if essays are thoughtfully redesigned and include an element of choice for the learner, perhaps with the need to draw on some in-house case study or locally significant issue, then essays are not necessarily dead. (AM)

The neat dodge to this question is to suggest the essay will be like the ship of Theseus. It will remain but every component in it will be made of different materials 🙂 (EP)

Key themes emerging from the comments

1. Process over product
A strikingly common thread is the shift from valuing the final script to valuing the journey of thought and writing. Contributors repeatedly advocate staged submissions, reflective journals, prompt disclosure, oral defences or supervised drafting. This aligns directly with the manifesto’s calls to redefine essay purposes and embed critical reflection (points 3 and 4).

2. Productive integration of AI
Few respondents argue for banning AI (obviously the responses are skewed towards those willing to undertake an AI in Education short course in the first place!). Instead, many echo the manifesto’s seventh and eighth points on integration and equity. Suggestions include:

  • require students to document prompts and edits,
  • use AI to generate counter-arguments or critique drafts,
  • support second-language writers or neurodivergent students with AI grammar or audio aids,
  • design tasks tied to personal data, lab results or workplace contexts that AI cannot easily fabricate.

A persistent caution is that without clear guidance, AI may encourage superficial engagement or plagiarism. Transparent ground rules and explicit teaching of critical AI literacy are seen as essential.

3. Expanding forms and contexts
Many contributors support the manifesto’s second point on diverse forms of written work. They propose hybrid assessments such as essays combined with oral presentations, podcasts, infographics or portfolios. Others emphasise discipline-specific needs: scientific reporting, medical case notes, or creative writing, each with distinct conventions and AI implications.

4. Equity, access and institutional support
There is strong agreement that AI’s benefits and risks are unevenly distributed. Participants highlight the need for:

  • institutional investment in staff development and student training,
  • clarity on acceptable AI use across programmes,
  • assessment designs that do not disadvantage those with limited technological access.

5. Rethinking academic integrity
Several comments resonate with the manifesto’s call to revisit definitions of cheating and originality. Rather than policing AI, some suggest designing assessments that render unauthorised use unhelpful or irrelevant, while foregrounding honesty and reflection.

What this means for the manifesto

The forum feedback affirms the manifesto’s central claim that the essay remains a vital, adaptable form, but it also pushes its agenda in useful directions.

  • Greater emphasis on process-based assessment. While the manifesto highlights process and reflection, practitioners want even stronger endorsement of multi-stage, scaffolded approaches and/or dialogic or presentational components as the cornerstone of future essay design.
  • Operational guidance for AI use. Educators call for more than principles: they need models of prompt documentation, supervised writing practices and examples of AI-resistant or AI-enhanced tasks.
  • Disciplinary specificity. The manifesto could further acknowledge the wide variance in how essays function, from lab reports to creative pieces, and provide pathways for each. Of course we, like everyone, are subject to a major impediment…
  • Workload and resourcing. Several voices stress that meaningful change requires institutional support and realistic marking expectations; without these, even the best principles risk remaining aspirational. This for me is likely the biggest impediment, not least because of the ongoing, multi-layered crises HE is confronted with just now.

Overall, the conversation demonstrates an appetite for renewal rather than retreat to sole reliance on in-person exams, though that retreat remains a common call. I stand with the consensus view that the essay (and other long-form writing) is not in terminal decline but in the midst of a necessary transformation. What we need to see is this: educators alert to the affordances and limitations of AI; conversations happening between students and those who support them, in discipline and with academic skills; and students writing assessments in AI-literate ways. As we find our way to the other side of this transitional space we are in, deluged by inappropriate use and assessments too slow in changing, eventually the writing will (again) be genuinely engaging, students will see value in finding their own voices and we’ll move closer to consensus on accepting some new ways of producing writing as legitimate. When I read posts on social media advocating a wholesale shift to exams (irrespective of the other competing damages this may connote, and in apparent ignorance of the many ways cheating happens in invigilated, in-person exams) or ‘writing is pointless’ pieces, I am struck by the usually implicit but sometimes overt assumption that writing is ONLY valuable as evidence of learning. Too rarely are the formative and developmental aspects of writing rolled into the arguments, and too rarely do they connect with persuasive rationales (in this and wider arguments for learning) for reconsidering the impact of grades on how students approach writing. And, finally, even if 80% of students did want the easiest route to a polished essay, I’m not abandoning the 20% that appreciate the skills development, the desirable difficulties and the will to DO and BE as well as show what they KNOW. Too many of the current narratives advocate not only throwing the baby out with the bathwater but then refuse to feed the baby because, you know, the bathwater was dirty. Unpick THAT strangled metaphor if you can.

Plus ça change; plus c’est a scroll of death

Hang on, it was summer a minute ago
I looked at my blog just now and saw my last post was in July. How did the summer go so fast? There’s a wind howling outside, I am wearing a jumper, and both the actual long dark wintry nights and the long dark metaphorical ones of our political climate seem to loom. To warm myself up a little I have been looking through some tools that offer AI integrations into learning management systems (LMS aka VLEs)* rather than doing ‘actual’ work. That exploration reminded me of the first ever article I had published back in 2004. The piece has long since disappeared from wherever I saved the printed version and is no longer online (not everything digital lasts forever, thank goodness), but I dug the text out of an old online storage account and reading it through has made me realise how much things have changed broadly while, in other ways, it is still the same show rumbling along in the background, like Coronation Street (but no-one really remembers when it went from black and white to colour).

What I wrote back then
In that 2004 article I described the excitement of experimenting with synchronous and asynchronous digital discussion tools in WebCT (for those not ancient like me, Web Course Tools - WebCT - was an early VLE developed by the University of British Columbia which was eventually subsumed into Blackboard). I was teaching GCSE English and was programme leader for an ‘Access to Primary Teaching’ course, and many of my students were part time so only on campus for 6 hours per week across two evenings. I’d earlier taught myself HTML so I could build a website for my history students - it had lots of text! It had hyperlinks! It had a scrolling marquee! Images would have been nice but I knew my limits. When I saw WebCT, I was fired up by the possibilities of discussion forums and live chat. When I set it up and trialled it I saw peer support, increased engagement with tough topics, and participation from ‘quiet’ students, amongst other benefits. I was so persuaded by the added-value potential I even ran workshops with colleagues to share that excitement.

See this great intro to WebCT from someone in the CS department at British Columbia from 1998:

That is still me of course. My job has changed and so has the context, but the impulse to share enthusiasm for digital tools that foster dialogue and interaction remains why I do what I do. It was nice to read that and I felt a fleeting affection for that much younger teacher, blissfully unaware of the challenges ahead! Even so, and forming a rattling cognitive dissonance that is still there, I was frustrated by the clunky design and awkward user interface that made persuading colleagues to use it really challenging. Log-in issues took up a lot of time and balancing ‘learning’ use with what I then called ‘horseplay’ (what was I, 75?!) took a while to calibrate. Nevertheless, I thought these worth working through but, even when some evidence of uptake across the college I was at was apparent, there was a wider scepticism and reluctance. Why wouldn’t there be? ‘It’s too complex’; ‘I am too busy’; ‘the way I do it now works just fine, thank you’. Pretty much every digital innovation has been accompanied by similar responses; even the good ones! I speculated about whether we needed a blank sheet of paper to rethink what an LMS could be, but concluded that institutions were more likely to tinker and add features than to start again.

2004? Feels like yesterday; feels like centuries ago
It was only 2003–4 (he says, painfully aware that I have colleagues who were born then), yet experimenting with an LMS felt novel and that comes over really clearly in my article. If you’d asked me this morning when I started using an LMS I might have said 1998 or 99. 2003 feels so recent in the context of my whole teaching career. What the heck was I doing before all that? Thinking back, I realise that in my first full-time job there was only one computer in our office and John S. got to use that as he was a trained typist (so he said). And older than me. In the article I was carefully explaining what chat and forums were and how they were different from one another, so the need for that dates the phenomenon too I suppose. Later, after moving to a Moodle institution, I became e-learning lead and engaged with JISC working groups - a JISC colleague who oversaw the VLE working group jokingly called me Mr Anti-Moodle because I was vocal in my critiques. It wasn’t quite accurate - I was critical for sure but then, as now, I liked the concept but disliked the way it worked. Persuading people to adopt an LMS was hard, as I said, and, while I have seen some brilliant use of Moodle and the like, my impression is that the majority (argue with me on this though) of LMS courses are functional repositories, with interactive and creative applications the exception rather than the norm. The scroll of death was a thing in 2005 and it is as much of a thing now. It also made me think of the current ‘Marmitey’ positions folk are taking re: AI. Basically, AI (big and ill-defined as it usually is) has to come with nuance and understanding, so binary, entrenched, one-size-fits-all positions are unhelpful and, in my view, hard to rationalise and sustain.

The familiar LMS problem
Back to the LMS: from WebCT to Moodle and other common current systems, the underlying functionality has barely shifted (I mean from the perspective of your average teacher/lecturer or student). Many still say Moodle feels very 1990s (probably they mean early 2000s, but I suspect they, like me, find it hard to accept that any year starting with a 2 could be a long time ago). Ultimately I think none of these systems offered a genuinely encouraging combination of interface and user experience, and that is an issue that persists to this day. The legacy of those early design decisions lingers, and we are still working around them. People have been predicting the death of the VLE for years (including me) but it has not happened. When I first saw Microsoft Teams just before Covid, I thought ‘here’s the nail in the coffin’. I was wrong again. Maybe being wrong about the end of the LMS is another running theme.

Will AI change the LMS story?
So what about AI-powered integrations? Will they revolutionise how the LMS works? Will they be part of the reason for a shift away from them? Unlikely in either sense is my best guess. Everything I see now is about embellishments and shortcuts that feed into the existing structure. My old dream of a blank-sheet LMS revolution has faded. Thirty years of teaching and more than twenty years using LMSs suggest that this is one component of digital education that will not fade away. The tools will keep evolving, but the slow, steady thrum of the LMS endures in the background. I realise that I have finally predicted non-change, so don’t bet on that as I have been wrong quite a bit in the past. What I do know is that digital discussions using tools to support dialogic pedagogies have persisted, as have the issues related to them. ‘Only 10–20% of my students use the forums!’ I hear that still. But what I realised in 2004, and maintain to this day, is that 10–20% is a significant embellishment for some and an alternative for others, so I stick with what I said back then in that sense at least. Oh, and lurking is a legit and fine thing for yet others!

One of the most wonderful things about the AI in Education course (so close to 15,000 participants!) is the forums. They add layers of interest that cannot be planned or produced. I estimate only 10–15% of participants post, but what a contribution they are making and it’s an enhancement that keeps me there and, I am convinced, adds real value to those not posting too.

*I’ll stick with LMS as this seems to be pretty ubiquitous these days, though I am aware of the distinctions, and when I wrote the piece about WebCT the term VLE was very much the go-to.

Transparent AI workflows

I just posted this on innovation and I thought the way I got to that from an idea that popped into my head while listening to a podcast when walking to the station might be interesting for a few reasons. 1. Because we talk a lot about transparency in AI use… well, the link below and then the final post reveal the stages I went through (other than the comprehensive final edit I did in MS Word). 2. I think it shows a lot about the increasing complexity in the nature of authorship. 3. It shows how challenging it is, where AI augments writing, to capture the nature of use and actually ‘be’ transparent, because writing has suddenly become something I can do on the move. And 4. it will challenge many to consider the legitimacy and quality of writing when produced in this way. I should also note that this post I did in the regular way of typing directly into the WordPress author window without (not sure why) the built-in spellchecker.

Here is the full transcript of the audio conversation followed by (at the draft stage) additional text-based prompts I did while strap-hanging on the tube. The final edit I did this afternoon on my laptop.

Image: https://www.pexels.com/@chris-f-38966/

Innovation, AI and (weirdly) the new PSF

Mark Twain almost certainly said: 

“substantially all ideas are second-hand, consciously and unconsciously drawn from a million outside sources, and daily used by the garnerer with a pride and satisfaction born of the superstition that he originated them” 

and he is also credited with saying:

 “A person with a new idea is a crank until the idea succeeds”

Both takes, perhaps even while being a little contradictory, relate to the idea of innovation. In this post that I initially drafted in an interaction with GPT Pro Advance voice chat while walking to work, I have thrown down some things that have been bothering me a bit about this surprisingly controversial word. 

Firstly, what counts as innovation in education? You often hear folk argue that, for example, audio feedback is no innovation as teachers somewhere or other have been doing it for donkey’s years. The more I think, though, the more I’m certain that actions/interventions/experiments/adaptations are rarely innovative by themselves: what matters fundamentally is context. Something that’s been around for years in one field or department might be utterly new and transformative somewhere else.

Objective Structured Clinical Examinations are something I have been thinking about a lot because I believe they may inspire others to adapt this assessment approach outside the health professions. In medical education, they’re routine. In business or political economy, observable stations to assess performance or professional judgement would probably be deemed innovative. Chatting with colleagues, they could instantly see how something like that might work in their own context, but with different content, different criteria and perhaps a different ethos. In other words, in terms of the thing we might try to show we are doing to evidence the probably-impossible-to-achieve ‘continuous improvement’ agenda, innovation isn’t about something being objectively new; it’s about it being new here. It’s about context, relevance and reapplication.

Innovation isn’t (just) what’s shiny

Ages ago I wrote about the danger and tendency for educators (and their leaders) to be dazzled by shiny things. But we need to move away from equating innovation with digital novelty. The current obsession is AI, unsurprisingly, but it’s easy to get swept along in the sheen of it, especially if, like me, you are a vendor target. This, though, reminds me that there’s a tendency to see innovation as synonymous with technological disruption. But I’d argue the more interesting innovations right now are not just about what AI can do, but how people are responding to it.

Arguable I know, but I do believe AI offers clear affordances: supporting diverse staff and student bodies, support for feedback, marking assistance, rewriting for tone, generating examples or case studies. And there’s real experimentation happening, much of it promising, some of it quietly radical. At the same time I’m seeing teams innovate in the opposite, analogue direction. Not because they’re nostalgic, conservative or anti-tech (though some may be!), but because they’re worried about academic integrity or concerned about the over-automation of thinking. We’re seeing a return to in-person vivas, handwritten tasks, oral assessments. These are not new, but they are being re-justified in light of present challenges. It could be seen as innovation via resistance.

Collaboration as a key component of innovation

In amongst the amazing work reflected on, I see a lot of claims for innovative practice in the many Advance HE fellowship submissions I read as internal and external reviewer. In some ways, seemingly very similar activities could be seen as innovative in one place and not another. While not a mandatory criterion, innovation is:

  • Encouraged through the emphasis on evidence-informed practice (V3) and responding to context (V4).
  • Often part of enhancing practice (A5) via continuing professional development.
  • Aligned with Core Knowledge K3, which stresses the importance of critical evaluation as a basis for effective practice, and this often involves improving or innovating methods. In the guidance for King’s applicants, innovation is positioned as a natural outcome of reflective practice.

So while the new PSF (2023) doesn’t promote innovation explicitly, what it does do (and this is new) is promote collaboration. It explicitly recognises the importance of collaboration and working with others, across disciplines, roles and institutions, as a vital part of educational practice. That’s important because, whilst in the past perceptions of innovation have stretched the definition and celebrated individual excellence in this space, many of the most meaningful innovations I’ve seen emerge from collaboration and conversation. This takes us back to Twain and borrowing, adapting, questioning.

We talk of interdisciplinarity (often with considerable insight and expertise, like my esteemed colleagues Dave Ashby and Emma Taylor) and sometimes big, but often small-scale, contextual innovation comes from these sideways encounters. But they require time, permission and a willingness to not always be the expert in the room: something innovators with a lingering sense of the inspired, individual creative may have trouble reconciling.

Failure and innovation

We have a problem with failure in HE. We prefer success stories and polished case studies. But real innovation involves risk: things not quite working, not going to plan. Even failed experiments are educative. But often we structure our institutions to minimise that kind of risk, to reward what’s provable, publishable, measurable, successful. I have argued that we do something similar to students. We say we want creativity, risk-taking, deep engagement. But we assess for precision, accuracy, conformity to narrow criteria and expectations. We encourage resilience, then punish failure with our blunt, subjective grading systems. We ask for experimentation but then rank it. So it’s no surprise if staff, like students, when encouraged to be creative or experimental, can be reluctant to try new things.

AI and innovation

I think I am finally getting to my point. The innovation AI catalyses goes far beyond AI use cases. It’s prompting people to re-examine their curricula, reassess assessment designs, rethink what we mean by original thinking or independent learning. It’s forcing conversations we’ve long avoided, about what we value, how we assess, and how we support students in an age of automated possibility. Even WHETHER we should continue to grade. (Incidentally, I heard, amongst many fine presentations yesterday at the King’s/Cadmus event on assessment, an inspiring argument against grading by Professor Bugewa Apampa from UEL. It’s so good to hear clearly articulated arguments on the necessity of confronting the issues related to grading from someone so senior.)

Despite my role (AI and Innovation Lead), some of the best innovations I’ve seen aren’t about tech at all. They’re about human decisions in response to tech. They’re about asking, “What do we not want to automate?” or “How can we protect space for dialogue, for process or for pause?”

If we only recognise innovation when it looks like disruption, we’ll miss a lot. 

Twain, Mark. Letter to Helen Keller, 17 March 1903 [cited 19 June 2025]. Available from: https://www.afb.org/about-afb/history/helen-keller/letters/mark-twain-samuel-l-clemens/letter-miss-keller-mark-twain-st

AI and the pragmatics of curriculum change

Whilst some (many) academic staff and students voice valid and expansive concerns about the use of, or focus on, artificial intelligence in education, I find myself (finally, perhaps, and later in life) much more pragmatic. We hear the loud voices and I applaud many acts of resistance, but we cannot ignore the ‘explosive increase’ in AI use by students. It’s here, and that is one driver. More positive drivers to change might be visible trends and future predictions in the global employment landscape and the affordances in terms of data analytics and medical diagnostics (for example) that more widely defined AI promises. As I keep saying, this doesn’t mean we need to rush to embrace anything, nor does it imply that educators must become computer scientists overnight. But it does mean something has to give and, rather than (something else I have been saying for a long time) knee-jerk ‘everyone back in the exam halls’ type responses, it’s clear we need to move a wee bit faster in the names of credibility (of the education and the bits of paper we dish out at the end of it) as well as the value to the students of what we are teaching and, perhaps more controversially I suppose, how we teach and assess them.

Over the past months, I’ve been working with colleagues to think through what AI’s presence means for curriculum transformation. In discussions with colleagues at King’s most recently, three interconnected areas keep surfacing and this is my effort to set them out: 

Content and Disciplinary Shifts

We need to reflect not just on what we might add, but on what we can subtract or reweight. The core question becomes: how is AI reshaping knowledge and practice in this discipline, and how should our curricula respond?

This isn’t about inserting generic “AI in Society” modules everywhere. It’s about recognising discipline-specific shifts and preparing students to work with, and critically appraise/engage with, new tech, approaches, systems and ideas (and the impacts consequent on implementation). Update 11th June 2025: My colleague, Dr Charlotte Haberstroh, pointed out on reading this that there is an additional important dimension to this, and I agree. She suggests we need to find a way to enable students to question and make connections explicitly: ‘how does my disciplinary knowledge help me (the student) make sense of what’s happening, how can it inform my analysis of causes/consequences of how AI is being embedded into our society (within the part that I aim to contribute to in the future / participate in as a responsible citizen)?’ Examples she suggested: in Law it could be around how it alters the meaning of intellectual property; in HR it’s going to be about AI replacing workers (or not) and/or the business model of the tech firms driving these changes; in History it’s perhaps how we have adopted technologies in the past and how that helps us understand what we are doing now.

Some additional examples of how AI as content crosses all disciplines: 

  • Law: AI-powered legal research and contract review tools (e.g. Harvey) are changing the role of administration in law firms and the roles of junior solicitors.
  • Medicine: Diagnostic imaging is increasingly supported by machine learning, shifting the emphasis away from manual pattern recognition towards interpretation, communication, and ethical judgement.
  • Geography: Environmental modelling uses real-time AI data analytics, reshaping how students understand climate systems.
  • History and Linguistics: Machine learning is enabling large-scale text and language analysis, accelerating discovery while raising questions about authorship, interpretation, and cultural nuance.

Assessment Integrity and Innovation

Much of the current debate focuses on the security of assessment in the age of AI. That matters a lot, of course, but if it drives all our thinking (and I feel it is still the dominant narrative in HE spaces), we will double down on distrust and always default to suspicion and restriction, rather than making design for creativity, authenticity and inclusivity our starting point.

The first shift needs to be moving from ‘how do we catch cheating?’ to ‘where and how can we catch learning?’ as well as ‘how do we design assessments that AI can’t meaningfully complete without student learning?’ Does this mean redefining ‘assessment’ beyond the narrow ‘evaluative’ definition we tend to elevate? Probably, yes.

Risk is real; inappropriate, foolish, unacceptable, even malicious use of AI is a real thing too. So robustness by design is important too: iterative, multi-stage tasks; oral components; personalised data sets; critical reflection. All are possible without reverting to closed-book exams. These have a place but are no panacea.

Examples of AI-shaping assessment and design:

AI Integration & Critical Literacies

Students need access to AI tools; they need choice (this will be an increasingly big hurdle to navigate); they need structured opportunities to critique and reflect on their use. This means building critical AI literacy into our programmes or, minimally, the extra-curricular space, not as a bolt-on, but as an embedded activity. Given what I set out above, this will need nuancing to a disciplinary context. It’s happening in pockets but, I would argue, needs more investment and an upping of the pace. Given the ongoing crises in UK HE (if not globally), it’s easy to see why this may not be seen as a priority.

I think we need to do the following for all students. What do you think? 

  • Critical AI literacy (what it is, how it works (and where it doesn’t), all the mess it connotes)
  • Aligned with better information/digital literacy (how to verify, attribute, trace and reflect on outputs, and triangulate)
  • Assessment and feedback literacy (how to judge what’s been learned, and how it’s being measured)

Some examples of where the discipline needs nuance and separate focus and why it is so complex: 

  • English Literature/History/Politics: Is the essay dead? Students can prompt ChatGPT to generate essays, but how are they generating passable essays when so much of the critique is about the banality of homogenised outputs and the lack of anything resembling critical depth? How can we (in a context where anonymous submission is the default) maintain value in something deemed so utterly central to humanities and social science study?
  • Medical and Nursing education: I often feel observed clinical examinations hold a potential template for wider adoption in non-medical disciplines. And AI simulation tools offer lifelike decision-making environments, so we are seeing increasing exploration of the potentials here: the literacy lies in knowing what AI can support and what it cannot do, and how to bridge that gap. Who learns this? Where is the time to do it? How are decisions made about which tools to trial or purchase?

Where to Start: Prompting thoughtful change

These three areas are best explored collectively, in programme teams, curriculum working groups or assessment review/module review teams. I’d suggest that, to begin with, these teams need to discuss the following and then move on from there.

  1. Where have you designed assessments that acknowledge AI in terms of the content taught? What might you need to modify looking ahead?
    (e.g. updated disciplinary knowledge, methodological changes, professional practice)
  2. Have you modified assessments where vulnerability is a concern? Have you drawn on positive reasons for change (eg scholarship in effective assessment design)? (e.g. risk of generative AI substitution, over-reliance on closed tasks, integrity challenges)
  3. Have you designed or planned assessments that incorporate, develop or even fully embed AI use?
    (e.g. requiring students to use, reflect on or critique AI outputs as part of their task)

I do not think this is AI evangelism, though I accept that some will see it as such, because I do believe that engagement is necessary and actually an ethical responsibility to our students. That’s a tough sell when some of those students are decrying anything with ‘AI’ in it as inherently and solely evil. I’m not trying to win hearts and minds to embrace anything, other than the idea that these technologies need much broader definition and understanding, and from that we may critique and evolve.

Future of Work?

I wanted to drop these two reports in one place. Neither, of course, is concerned with the wider ethical issues of AI, and I do not want to come over as a tech bro evangelist, but I do think my pragmatism and the necessary ‘responsible engagement’ approach many institutions are now taking is buttressed by the (like it or not) trends we are seeing, which are profound. Barriers to change (as seen through the eyes of employers) struck me too, as we can see similar manifestations in educational spaces: skills gaps, cultural resistance and outdated regulation.

The Future of Jobs Report 2025 explores how global labour markets will evolve by 2030 in response to intersecting drivers of change: technological advances (especially AI), economic volatility, demographic shifts, climate imperatives and geopolitical tensions. Based on responses from over 1,000 global employers covering more than 14 million workers, the report predicts large-scale job transformation. While jobs equivalent to 14% of current employment (170 million) are expected to be created, 8% (92 million) will be displaced, resulting in net growth of 6%. The transition will be skills-intensive, with 59% of workers needing retraining. Those numbers are enough to make you gasp and drop your coffee.
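
(A quick back-of-envelope check on those headline figures, my own arithmetic rather than anything taken from the report itself: 170 million created minus 92 million displaced leaves 78 million net new jobs; and if 170 million is roughly 14% of the jobs covered, the base is roughly 1.2 billion, so 78 million ÷ 1.2 billion ≈ 6%, consistent with the net growth figure above.)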

PwC’s 2025 Global AI Jobs Barometer presents an incredibly optimistic analysis of how AI is reshaping the global workforce. Drawing on a billion job ads and thousands of company reports across the globe, the report suggests that AI is enhancing productivity, shifting skill demands and increasing the value of workers. Rather than displacing workers, it argues, AI is acting as a multiplier, especially when deployed agentically. The findings provide a counter-perspective to common (and, I’d argue, perfectly reasonable and rational!) fears about AI-induced job losses.

Whilst I am still wearing the biggest of my cynical hats, I concur that the need for urgent investment in skills (and critical engagement) is imperative and, lest we lose any residual handle on shaping the narratives in this space, we need to invest much more of our efforts into considering where we need to adapt what we research, the design and content of our curricula and the critical and practical skills we need to develop. Given the timeframes suggested in these reports, we’d better get on with it.

Is AI like a cute puppy?

Audio version of this post

TL;DR? No, it is not, so why would you embrace it?

I have mentioned this before but it keeps cropping up so I am going to labour the point again. The idea of ‘embracing’ AI in education (or anywhere) can be seen to grow as a narrative throughout 2023 and was already on a steep upward trajectory prior to that.

A line chart showing the frequency of the phrase “Embrace AI” in published texts from 2000 to 2022. The horizontal axis runs from 2000 to 2022; the vertical axis shows tiny percentage values from 0 % up to 0.00000024 %. From 2000 through about 2014, the blue line hugs the baseline at essentially 0 %, with a very slight rise between 2006 and 2012 and a dip around 2014. Beginning around 2015, the line climbs steeply, reaching approximately 0.00000022 % by 2022. A tooltip at the year 2000 notes a value of 0.00000000 %.
Google Ngram viewer for ‘Embrace AI’
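
If you want to pull the underlying numbers yourself rather than squint at the chart, here is a minimal sketch in Python. It assumes the unofficial Google Books Ngram JSON endpoint (books.google.com/ngrams/json), which is undocumented and may change or be rate-limited without notice; the corpus identifier and the use of the requests library are also assumptions, not anything the Ngram viewer documents.

    # Minimal sketch: fetch yearly frequencies for a phrase from the
    # unofficial Google Books Ngram JSON endpoint. NOTE: undocumented
    # endpoint; parameters and behaviour may change without notice.
    import requests

    def ngram_frequencies(phrase, start=2000, end=2022):
        resp = requests.get(
            "https://books.google.com/ngrams/json",
            params={
                "content": phrase,
                "year_start": start,
                "year_end": end,
                "corpus": "en-2019",  # assumed corpus identifier
                "smoothing": 0,       # raw yearly values, no averaging
            },
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        # Each result object carries a 'timeseries': one frequency per year.
        series = data[0]["timeseries"] if data else []
        return dict(zip(range(start, end + 1), series))

    if __name__ == "__main__":
        for year, freq in ngram_frequencies("embrace AI").items():
            print(year, f"{freq:.12%}")

Run as-is, this prints one tiny percentage per year, which is essentially the data behind the line above.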

But a significant contribution to this notion came in this HEPI blog of 5 January 2024. Professor Yike Guo urges UK universities to move beyond mere caution and become active adopters of artificial intelligence. Drawing on 34 years of AI, data-mining and machine-learning research at Imperial College London and his role as Provost at HKUST, he warned that AI is not a peripheral tool but a fundamental shift in the educational paradigm. His focus on structural, systemic and pre-existing issues in how we construct education, such as the persistence of rote memorisation in curricula, mirrors my own case for using AI as an opportunity to leverage research-informed changes long needed. Professor Guo advocates for compulsory AI literacy modules that teach students to interrogate and collaborate with digital co-pilots and insists that the true value of education will lie in cultivating ethical reasoning, emotional intelligence and creativity which, importantly, are qualities that machines cannot replicate. He says (and I quote this a lot):

“…UK universities face a choice: either embrace AI as an integral component of academic pursuit or risk obsolescence in a world where digital co-pilots could become as ubiquitous as textbooks.”

I tend to agree with much of Professor Guo’s stance: AI will reshape (and already is reshaping) higher education pretty profoundly, but I find his call to “embrace” AI really troubling. This phrase seems to be everywhere in relation to AI. I hear it every day and I don’t think it is helpful at all. I embrace my wife and daughter (and, somewhat awkwardly, my son and my mum: it’s a generational thing I think!), a kitten, and even my Spurs-supporting mates last week when we finally won a trophy after 17 years of pain (see picture below).

A photograph taken inside a dimly lit bar showing a joyous celebration among football supporters. In the foreground, an older man wearing glasses and a flat cap laughs with his mouth wide open as a younger man embraces him from behind, both arms wrapped around his shoulders. The younger man, in a light trench coat, leans in close, smiling broadly. Behind them to the right, two other fans—one in a yellow Tottenham Hotspur shirt bearing the name “Kane” and the number 7—are similarly embracing. The background is softly focused, revealing a few more patrons and industrial-style décor with exposed beams and abstract wall art.
Me being embraced by my Spurs buddy ‘JM’ (Photo: Tom Sweetland)

But I do not embrace people or things I neither know nor trust. I do not embrace strangers. Even when I employed someone to complete a loft conversion, and we came to know them well over the course of the (interminable) job, we still didn’t end up hugging each other. Some people love their phones too much and might kiss and hug them, but I think they’re daft. These are tools, nothing more. ‘Embracing AI’ narratives only feed anthropomorphism. They also feed binary narratives: do you ‘fully embrace’ or ‘outright reject’? Actually, reality demands something far more nuanced.

To these ends, I am constantly challenging the idea of embracing AI. So, instead, I argue for engagement. We can engage with affection, care, warmth and appreciation, but we can also engage with suspicion, trepidation, anxiety, distrust, even fear. Engagement accommodates critical scrutiny as readily as it does positive and productive collaboration. So, bottom line, let’s drop the idea of embracing AI but encourage critical engagement with AI (in all its diversity… what we conceptualise AI as is another thing that vexes me btw). Also: Come on you Spurs!

Rewilding higher education: weeds and wildflowers

Connie Gillies and Martin Compton

It was a privilege to offer reflections at Professor Cathy Elliott’s inaugural lecture, Rewilding the University, recently. Her lecture was more than a celebration of an academic career: it was also a call to action. A provocation. A gentle but insistent reminder that education (and nature and the world!) does not need to look the way it does now. A packed lecture hall listened intently to Cathy’s arguments, ideas and jokes: it was a tough act to follow. Cathy said she hardly ever lectures, but a skilful lecture is a thing of joy, utterly compelling, and we were lucky to witness one. Here we share some reflections on Cathy’s ideas and how they have helped shape aspects of our own.

Cathy made clear that rewilding is not a metaphor of neglect or abandonment, but of restoration, connection and flourishing. It recognises that overly managed systems, whether ecological or educational, can become depleted, homogenous and fragile. In both cases, monoculture and rigidity are warning signs: what Cathy referred to as ‘command and control’. The invitation we heard was to value and support diversity, likewise in both nature and education; to value what is often dismissed; and to allow for the possibility of unpredictable, unmeasurable growth.

This vision has shaped how we think about education and how we’ve each worked with Cathy. Our own relationships, as a fellow academic (with a similarly unconventional path to current roles) and as a student (who had been disillusioned by educational experiences until encountering Cathy’s course), and now as authors and collaborators, are a component of the network that Connie has described as mycelial: like subterranean fungal connections, nourishing ideas, allowing knowledge to travel, and making future growth possible. Like mycelium in forest ecosystems, these relationships and ideas remain largely invisible to the untrained eye, but they are foundational. They remind us that learning does not happen in isolation, but in intricate, collaborative webs.

When students sign up for Cathy’s Politics of Nature class, they often don’t fully grasp the lasting impact it will have on them. A friend once told Connie, “A Cathy Elliott module will change your life,” and while the statement may seem grand, it’s not far from the truth. For many, this course didn’t just teach content; it reshaped our approach to thinking, learning, and even our careers. Cathy’s teaching blends critical rigour with intellectual play, making the class a rare space where students can be both creatively curious and academically rigorous. Most importantly, she empowers students to discover their unique intellectual passions, encouraging them to contribute perspectives no one else could, simply because they aren’t anyone else.

Education, when rewilded, becomes an ecosystem. A space where mutual dependence is generative. A space where difference is not simply tolerated but required. It is through this lens that we’ve come to understand projects like ungrading, student co-authorship, and the politics of belonging, not as reforms, but as regenerative acts. These are not surface-level interventions, but shifts in the soil.

One of the most notable aspects of Cathy’s work is her broad intellectual curiosity. She’s not confined to any one field of study: from politics and nature to democracy, development, gender, race, disability and sexuality, Cathy’s academic interests are as diverse as they are profound. In an academic world that often pushes students towards ever-narrower specialisation, Cathy’s approach encourages students to break free from this limitation.

Cathy’s teaching has long enacted this ethos. She nurtures students not through control but through trust. Her pedagogy invites learners to bring their whole selves, to make connections across disciplinary and personal boundaries, and to treat knowledge as something to be inhabited, not merely acquired. She encourages risk, slowness, reflection and relationality: qualities too often sidelined in institutional discourses of impact, efficiency and performance.

The dandelion is another metaphor Cathy draws on frequently and one we were also drawn to in our appreciation. Often dismissed as a weed, the dandelion (the French is ‘pissenlit’, which really does say everything about its reputation) is in fact a profoundly restorative plant. It detoxifies soil, strengthens roots and nourishes ecosystems. It grows where it is not wanted and flourishes nonetheless. To children, it is a source of wonder: blown seeds, floating wishes, transformation, softness at one time, vibrant yellow before. But to adults, it is a nuisance to be removed. Cathy’s work, like the dandelion, asks us to reconsider who gets to decide what counts as valuable, as beautiful, as worthy. We need to ask ourselves to what extent we have constructed educational systems that we want to be like perfect lawns: predictable, clean, neat, each blade of grass much like the others. Cathy says: ‘don’t cut the grass and plant wildflowers instead!’ This is a literal and metaphorical phrase we can get behind!

This ethos extends into her work on gender, race and sexuality, which consistently challenges the structures that exclude some or may diminish the presence or experience of others. In classrooms, in curricula, in institutional policy, she reminds us in her work that exclusion is never accidental; it is designed. But that also gives us pause for positive reflection: what is designed can be redesigned.

What we’ve come to understand through Cathy’s influence, and through our ongoing partnership, is that rewilding higher education is not a metaphorical indulgence, it is a pedagogical imperative. It calls us to rethink the terms of participation, the assumptions of merit, the rituals of assessment, and the conditions under which learning takes place. It also calls for attention to scale: recognising that large transformations begin with small shifts, relationships and new practices. 

It felt fitting, then, that the very day after Cathy’s lecture, a special issue of the Journal of Learning Development in Higher Education was published. Co-edited by one of us and containing a piece co-authored by the other, the issue is seeded with many of these same ideas. It features students and a Vice-Chancellor, early career academics and emeritus professors, reimaginings of assessment, and reflections on academic community that echo and extend Cathy’s provocations. The special issue is a timely continuation of many of the conversations we have had with Cathy, who, unsurprisingly, also has a paper in the special issue and was part of the King’s/UCL editorial collective.

We both have very different careers and are at very different ends of them! But we share the sense that the rigid, often foreboding and frequently distrustful academy could be rewilded. It doesn’t have to be this way; more importantly, it could be otherwise.

Meme-ingful reflections on AI, teaching and assessment

I did a session earlier today for the RAISE special interest group on AI. I thought I’d have a bit of fun with it: 1. because I was originally invited by Dr Tadhg Blommerde (and Dr Amarpreet Kaur), who likes a heterodox approach (see his YouTube channel here), and 2. because I was preparing on Friday evening and my daughter was looking over my shoulder and suggesting more and more memes. Anyway, I was just reading the chat back and note my former colleague Steve asked: “Is the rest of the sector really short of memes these days now that Martin has them all?” I felt guilty so decided to share them back.

My point: There’s a danger we assume students will invariably cheat if given the chance. This meme challenges educators to reconsider what they define as cheating and encourages transparent, explicit dialogue around academic integrity. What will we lose if we assume all students are all about pulling a fast one?

My daughter (aged 13) suggested this one. How teachers view ChatGPT output: homogenised, overly polished essays lacking individuality. My daughter used the ‘who will be the next contestant on The Bachelor’ (some reality show, I am told) image to illustrate how teachers confidently claim they can spot AI-generated assignments because “they all look the same.” My point: I think this highlights early scepticism about AI-produced writing, but we should as educators consider the extent to which these tools have evolved beyond initial assumptions and remind our students (and ourselves) that imperfections and quirks can define a style. Just ask anyone reading one of my metaphor-stretched, overly complex sentences. Perhaps, for too long, we have over-valued grammatical accuracy and formulaic writing?

My point: It’s not just about AI detectors of course. It’s more that this is an arms race we can’t win. If we see big tech as our enemy then fighting back with more of their big tech makes no sense. If we see students as the enemy then we have a much bigger problem. Collective punishment and starting with an assumption of guilt are hugely problematic in schools/unis, much as they are in life and tyrannical societies in general. When it comes to revisiting academic integrity I am keen to discuss what it is we are protecting. I am also very much drawn to Ellis and Murdoch’s ‘responsive regulation’ approach. I don’t think I’m quite on the same page regarding automated detection, but I do agree that applying (and resourcing) deserved sanctions for the ‘criminal’ (wilful cheats), along with efforts to widen self-regulation and move as many students as possible from carelessness (or chancer behaviours) towards self-regulation, is critical.

Pretty obvious I guess, but my point is this: We also need to resist assumptions that all students prioritise grades over genuine learning and creativity. Yes, there are those who are wilfully trying to find the easiest path to the piece of paper that confirms a grade or a degree or whatever. Yes, there are those whose heads may be turned by the promise of a corner-cutting opportunity. But there are SO many more who want to learn, who are anxious because they know others who are being accused of using these tech inappropriately (because, for example, they use ‘big’ words… really, this has happened). ALSO, we need to challenge the structural features that define education in terms of employability and value. I know how to use ChatGPT but I am writing this. Why am I bothering writing? Because I like it. Because, I hope, my writing, even when convoluted (much like this sentence), is more compelling. Because it’s more gratifying than the thing I’m supposed to be doing. Above all, for me, it’s because it actually helps me articulate my thoughts better. We must continue valuing intrinsic motivation and the joy students derive from learning and creating independently. But more than that: we need to face up to the systemic issues that drive more students towards corner cutting or wilful cheating. By the way, I often use generated text in things I write. All the alt text in these images is AI generated (then approved/edited by me) for example.

This leads me to the next one. I mean, I do use AI every day for translation, transcription, information management, easing access to information, reformatting, providing alternative media, writing alt text… Many don’t, I know. Many refuse; I know this too. But we are way into majority territory here I think. Students are recognising this real (or imagined) hypocrisy. The only really valid response to this I have heard goes something like: ‘I can use it because I am educated to x level. First-year undergrads do not have the critical awareness or developed voice to make an informed choice’. I mean, I think that may be the case to an extent or in some cases, but it reminds me a bit of the ‘pen licences’ my daughter’s primary school issued: you get one when you prove you can use a pencil first (little Timmy, bless him, is still on crayons). Have you seen the data on student routine use of generative AI? This elevates the tool to some sort of next-level implement, but is it even that? I think I could make a better case for normalisation and acceptance of a future where human/AI hybrid writing is just how it is done (as per Dr Sarah Eaton’s work; note the five other elements in the tenets).

My point: The narratives around essential changes we need to implement ‘because of AI’ present a false dichotomy between reverting to traditional exam halls or relying solely on AI detection tools. Neither option adequately addresses modern academic integrity challenges. Exams can be as problematic and inequitable as AI detection. It is not a binary choice. There are other things that can be done. I’ll leave this one hanging a bit as it overlaps with the next one.

My point: We need to critically re-evaluate how and why essays are used in assessment. We can maintain the essay but evolve its form to better reflect authentic, inclusive and meaningful assessments rather than relying on traditional, formulaic, high-stakes versions. Anyway, I (with Dr Claire Gordon) have said it before, we already have a manifesto, and Dr Alicia Syska takes the argument to the next level here.

Really, though, you should have been there; we had a great time.

But how? And why even? Practical examples of ways assessments have been modified

Modifying or changing assessment ‘because of AI’ always feels like it feeds ‘us and them’ narratives of a forthcoming apocalypse (already predicted) and couches the change as necessary only because of this insidious, awful thing that no-one wants except men in leather chairs who stroke white cats.

It is of course MUCH more complex than that and much of the desired change has been promoted by folk with a progressive, reform, equity, inclusion eye who do (or immerse themselves in) scholarship of HE pedagogy and assessment practices.

Anyway, a colleague suggested that we should have a collection of ideas about practical ways assessments could be modified to either make them more AI ‘robust’ or at least ‘AI aware’ or ‘AI inclusive’ (I’m hesitant to say ‘resistant’ of course). Whilst colleagues across King’s have been sharing and experimenting, it is probably true to say that there is not a single point of reference. We in King’s Academy are working on remedying this as part of the wider push to support TASK (transforming assessment for students at King’s) and growing AI literacy, but first I wanted to curate a few examples from elsewhere to offer a point of reference for me and to share with colleagues in the very near future. I’ve gone for diversity from things I have previously bookmarked. Other than that, they are here only to offer points of discussion, inspiration, provocation or comparison!

Before I start I should remind King’s colleagues of our own guidance and the assessment principles therein, and note that with colleagues at LSE, UCL and Southampton I am working on some guidance on the use of AI to assist with marking (forthcoming and controversial). Some of the College Teaching Fund projects looked at assessment, and this AI Assessment Scale from Perkins et al. (2024) has a lot of traction in the sector too and is not so dissimilar from the King’s 4 levels of use approach. It’s amazing how 2023 can feel a bit dated in terms of resources these days, but this document from the QAA is still relevant and applicable and sets out broader, sector-level principles. In summary:

  • Institutions should review and reimagine assessment strategies, reducing assessment volume to create space for activities like developing AI literacy, a critical future graduate attribute.
  • Promote authentic and synoptic assessments, enabling students to apply integrated knowledge practically, often in workplace-related settings, potentially incorporating generative AI.
  • Move away from traditional, handwritten, invigilated exams towards innovative approaches like digital exams, observed discipline-specific assessments or oral examinations.
  • Design coursework explicitly integrating generative AI, encouraging ethical use, reflection, and hybrid submissions clearly acknowledging AI-generated content.
  • Follow guiding principles ensuring assessments are sustainable, inclusive, aligned to learning outcomes, and effectively demonstrate relevant competencies, including appropriate AI usage.

I’m also increasingly referring to the two-lane approach being adopted by Sydney, which leans heavily into similar principles. The context is different to the UK of course, but I have a feeling we will find ourselves moving much closer to the broad approach here. It feels radical but perhaps no more radical than what many, if not most, unis did in Covid.

Finally, the examples

Example 1 – UCL Medical Sciences BSc

  • Evaluation of coursework assessments to determine susceptibility to generative AI and potential integration of AI tools.
  • Redesign of assessments to explicitly incorporate evaluation of ChatGPT-generated outputs, enhancing critical evaluation skills and understanding of AI limitations.
  • Integration of generative AI within module curricula and teaching practices, providing formative feedback opportunities.
  • Collection of student perspectives and experiences through questionnaires and focus groups on AI usage in learning and assessments.
  • Shift towards rethinking traditional assessment formats (MCQs, SAQs, essays) due to AI’s impact, encouraging ongoing pedagogical innovation discussions.

Example 2 – Cardiff University Immunology Wars

  • Gamification: Complex immunology concepts taught through a Star Wars-inspired, game-based approach.
  • AI-driven game design: ChatGPT 4.0 used to structure game scenarios, resources, and dynamic challenges.
  • Visual resources with AI: DALL-E 3 employed to create engaging imagery for learning materials.
  • Iterative AI prompting: An innovative method using progressive ChatGPT interactions to refine complex game elements.
  • Practical, collaborative learning: Students collaboratively trade resources to combat diseases, supported by iterative testing and refinement of the game.

Example 3 – Traffic lights, University of Wisconsin-Green Bay

The traffic light system they are implementing is reflected in three sample assessments, one at each of these levels:

  1. Red light – prohibited
  2. Yellow light – limited use
  3. Green light – AI embedded into the task

Example 4 – Imperial Business School MBA group work

  • Integration of AI: The original essay task was redesigned to explicitly require students to use an LLM, typically ChatGPT.
  • The change: An individual component of a wider collaborative task. Students submit both the AI-generated output (250 words) and a critical evaluation of that output (250 words) on what is unique about a business proposal.
  • Critical Engagement Emphasis: The new task explicitly focuses on students’ critical analysis of AI capabilities and limitations concerning their business idea.
  • Reflective Skill Development: Students are prompted to reflect on, critique, and consider improvements or extensions of AI-generated content, enhancing their evaluative and adaptive skills.

3 for 1! Example 5 – Harvard

Create a fictional character and interview them

World building for creative writing

Historical journey

More to follow…

Also note:

Manifesto for the essay

Related article (Compton & Gordon, 2024)
 
Also see: Syska (2025) We tried to kill the essay