AI and the pragmatics of curriculum change

Whilst some (many) academic staff and students voice valid and expansive concerns about the use of, or focus on, artificial intelligence in education, I find myself (finally, perhaps, and later in life) much more pragmatic. We hear the loud voices, and I applaud many acts of resistance, but we cannot ignore the ‘explosive increase’ in AI use by students. It’s here, and that is one driver. More positive drivers to change might be visible trends and future predictions in the global employment landscape, and the affordances in terms of data analytics and medical diagnostics (for example) that more widely defined AI promises. As I keep saying, this doesn’t mean we need to rush to embrace anything, nor does it imply that educators must become computer scientists overnight. But it does mean something has to give and, rather than (something else I have been saying for a long time) knee-jerk ‘everyone back in the exam halls’ type responses, it’s clear we need to move a wee bit faster in the names of credibility (of the education and the bits of paper we dish out at the end of it) as well as the value to the students of what we are teaching and, perhaps more controversially I suppose, how we teach and assess them.

Over the past months, I’ve been working with colleagues to think through what AI’s presence means for curriculum transformation. In discussions with colleagues at King’s most recently, three interconnected areas keep surfacing, and this is my effort to set them out:

Content and Disciplinary Shifts

We need to reflect on not just what we might add, but what we can subtract or reweight. The core question becomes: how is AI reshaping knowledge and practice in this discipline, and how should our curricula respond?

This isn’t about inserting generic “AI in Society” modules everywhere. It’s about recognising discipline-specific shifts and preparing students to work with, and critically appraise and engage with, new technologies, approaches, systems and ideas (and the impacts that follow from their implementation). Update 11th June 2025: My colleague, Dr Charlotte Haberstroh, pointed out on reading this that there is an additional important dimension to this, and I agree. She suggests we need to find a way to enable students to question and make connections explicitly: ‘how does my disciplinary knowledge help me (the student) make sense of what’s happening, how can it inform my analysis of causes/consequences of how AI is being embedded into our society (within the part that I aim to contribute to in the future / participate in as responsible citizen)’. Examples she suggested: in Law it could be around how AI alters the meaning of intellectual property; in HR it’s going to be about AI replacing workers (or not) and/or the business model of the tech firms driving these changes; in History it’s perhaps how we have adopted technologies in the past and how that helps us understand what we are doing now.

Some additional examples of how AI as content crosses all disciplines: 

  • Law: AI-powered legal research and contract review tools (e.g. Harvey) are changing the role of administration in law firms and the roles of junior solicitors.
  • Medicine: Diagnostic imaging is increasingly supported by machine learning, shifting the emphasis away from manual pattern recognition towards interpretation, communication, and ethical judgement.
  • Geography: Environmental modelling uses real-time AI data analytics, reshaping how students understand climate systems.
  • History and Linguistics: Machine learning is enabling large-scale text and language analysis, accelerating discovery while raising questions about authorship, interpretation, and cultural nuance.

Assessment Integrity and Innovation

Much of the current debate focuses on the security of assessment in the age of AI. That matters a lot, of course, but if it drives all our thinking (and I feel it is still the dominant narrative in HE spaces), we will double down on distrust and always default to suspicion and restriction, rather than our starting point being designing for creativity, authenticity and inclusivity.

The first shift needs to be moving from ‘how do we catch cheating?’ to ‘where and how can we “catch” learning?’, as well as ‘how do we design assessments that AI can’t meaningfully complete without student learning?’ Does this mean redefining ‘assessment’ beyond the narrow ‘evaluative’ definition we tend to elevate? Probably, yes.

Risk is real; inappropriate, foolish, unacceptable, even malicious use of AI is a real thing too. So robustness by design is important as well: iterative, multi-stage tasks; oral components; personalised data sets; critical reflection. All are possible without reverting to closed-book exams, which have a place but are no panacea.

Examples of AI shaping assessment and design appear later in this post.

AI Integration & Critical Literacies

Students need access to AI tools; they need choice (this will be an increasingly big hurdle to navigate); and they need structured opportunities to critique and reflect on their use. This means building critical AI literacy into our programmes or, minimally, into the extra-curricular space, not as a bolt-on but as an embedded activity. Given what I set out above, this will need nuancing to a disciplinary context. It’s happening in pockets but, I would argue, needs more investment and an upping of the pace. Given the ongoing crises in UK HE (if not globally), it’s easy to see why this may not be seen as a priority.

I think we need to do the following for all students. What do you think? 

  • Critical AI literacy (what it is, how it works (and where it doesn’t), all the mess it connotes)
  • Aligned with better information/digital literacy (how to verify, attribute, trace and reflect on outputs, and triangulate)
  • Assessment and feedback literacy (how to judge what’s been learned, and how it’s being measured)

Some examples of where the discipline needs nuance and separate focus, and why it is so complex:

  • English Literature/History/Politics: Is the essay dead? Students can prompt ChatGPT to generate essays, but how are they generating passable essays when so much of the critique is about the banality of homogenised outputs and the lack of anything resembling critical depth? How can we (in a context where anonymous submission is the default) maintain value in something deemed so utterly central to humanities and social science study?
  • Medical and Nursing education: I often feel observed clinical examinations hold a potential template for wider adoption in non-medical disciplines. AI simulation tools offer lifelike decision-making environments, so we are seeing increasing exploration of the potential here: the literacy lies in knowing what AI can support, what it cannot do, and how to bridge that gap. Who learns this? Where is the time to do it? How are decisions made about which tools to trial or purchase?

Where to Start: Prompting thoughtful change

These three areas are best explored collectively, in programme teams, curriculum working groups or assessment/module review teams. I’d suggest these teams begin by discussing the following questions and move on from there.

  1. Where have you designed assessments that acknowledge AI in terms of the content taught? What might you need to modify looking ahead?
    (e.g. updated disciplinary knowledge, methodological changes, professional practice)
  2. Have you modified assessments where vulnerability is a concern (e.g. risk of generative AI substitution, over-reliance on closed tasks, integrity challenges)? Have you drawn on positive reasons for change (e.g. scholarship in effective assessment design)?
  3. Have you designed or planned assessments that incorporate, develop or even fully embed AI use?
    (e.g. requiring students to use, reflect on or critique AI outputs as part of their task)

I do not think this is AI evangelism, though I accept that some will see it as such, because I do believe that engagement is necessary and actually an ethical responsibility to our students. That’s a tough sell when some of those students are decrying anything with ‘AI’ in it as inherently and solely evil. I’m not trying to win hearts and minds to embrace anything other than the idea that these technologies need much broader definition and understanding, and from that we may critique and evolve.

But how? And why even? Practical examples of ways assessments have been modified

Modifying or changing assessment ‘because of AI’ always feels like it feeds ‘us and them’ narratives of a forthcoming apocalypse (already predicted) and couches the change as necessary only because of this insidious, awful thing that no-one wants except men in leather chairs who stroke white cats.

It is of course MUCH more complex than that and much of the desired change has been promoted by folk with a progressive, reform, equity, inclusion eye who do (or immerse themselves in) scholarship of HE pedagogy and assessment practices.

Anyway, a colleague suggested that we should have a collection of ideas about practical ways assessments could be modified to either make them more AI ‘robust’ or at least ‘AI aware’ or ‘AI inclusive’ (I’m hesitant to say ‘resistant’, of course). Whilst colleagues across King’s have been sharing and experimenting, it is probably true to say that there is not a single point of reference. In King’s Academy we are working on remedying this as part of the wider push to support TASK (transforming assessment for students at King’s) and growing AI literacy, but first I wanted to curate a few examples from elsewhere to offer a point of reference for me and to share with colleagues in the very near future. I’ve gone for diversity from things I have previously bookmarked. Other than that, they are here only to offer points of discussion, inspiration, provocation or comparison!

Before I start, I should remind King’s colleagues of our own guidance and the assessment principles therein, and note that, with colleagues at LSE, UCL and Southampton, I am working on some guidance on the use of AI to assist with marking (forthcoming and controversial). Some of the College Teaching Fund projects looked at assessment, and the AI Assessment Scale from Perkins et al. (2024) has a lot of traction in the sector too; it is not so dissimilar from the King’s four levels of use approach. It’s amazing how 2023 can feel a bit dated in terms of resources these days, but this document from the QAA is still relevant and applicable and sets out broader, sector-level principles. In summary:

  • Institutions should review and reimagine assessment strategies, reducing assessment volume to create space for activities like developing AI literacy, a critical future graduate attribute.
  • Promote authentic and synoptic assessments, enabling students to apply integrated knowledge practically, often in workplace-related settings, potentially incorporating generative AI.
  • Move away from traditional, handwritten, invigilated exams towards innovative approaches like digital exams, observed discipline-specific assessments or oral examinations.
  • Design coursework explicitly integrating generative AI, encouraging ethical use, reflection, and hybrid submissions clearly acknowledging AI-generated content.
  • Follow guiding principles ensuring assessments are sustainable, inclusive, aligned to learning outcomes, and effectively demonstrate relevant competencies, including appropriate AI usage.

I’m also increasingly referring to the two-lane approach being adopted by Sydney, which leans heavily into similar principles. The context is different to the UK, of course, but I have a feeling we will find ourselves moving much closer to the broad approach here. It feels radical, but perhaps no more radical than what many, if not most, universities did during Covid.

Finally, the examples

Example 1. UCL Medical Sciences BSc.

  • Evaluation of coursework assessments to determine susceptibility to generative AI and potential integration of AI tools.
  • Redesign of assessments to explicitly incorporate evaluation of ChatGPT-generated outputs, enhancing critical evaluation skills and understanding of AI limitations.
  • Integration of generative AI within module curricula and teaching practices, providing formative feedback opportunities.
  • Collection of student perspectives and experiences through questionnaires and focus groups on AI usage in learning and assessments.
  • Shift towards rethinking traditional assessment formats (MCQs, SAQs, essays) due to AI’s impact, encouraging ongoing pedagogical innovation discussions.

Example 2 – Cardiff University Immunology Wars

  • Gamification: Complex immunology concepts taught through a Star Wars-inspired, game-based approach.
  • AI-driven game design: ChatGPT 4.0 used to structure game scenarios, resources, and dynamic challenges.
  • Visual resources with AI: DALL-E 3 employed to create engaging imagery for learning materials.
  • Iterative AI prompting: An innovative method using progressive ChatGPT interactions to refine complex game elements.
  • Practical, collaborative learning: Students collaboratively trade resources to combat diseases, supported by iterative testing and refinement of the game.

Example 3 – Traffic lights, University of Wisconsin-Green Bay

The traffic light system they are implementing is reflected in these three sample assessments:

  1. Red light – prohibited
  2. Yellow light – limited use
  3. Green Light – AI embedded into the task

Example 4 Imperial Business School MBA group work

  • Integration of AI: The original essay task was redesigned to explicitly require students to use an LLM, typically ChatGPT.
  • The change: An individual component of a wider collaborative task. Students submit both the AI-generated output (250 words) and a critical evaluation of that output (250 words) on what is unique about a business proposal.
  • Critical Engagement Emphasis: The new task explicitly focuses on students’ critical analysis of AI capabilities and limitations concerning their business idea.
  • Reflective Skill Development: Students prompted to reflect on, critique, and consider improvements or extensions of AI-generated content, enhancing their evaluative and adaptive skills.

3 for 1! Example 5 – Harvard

Create a fictional character and interview them

World building for creative writing

Historical journey

More to follow…

Also note:

Manifesto for the essay

Related article (Compton & Gordon, 2024)
 
Also see: We tried to kill the essay (Syska, 2025)

Old problem, new era

“Empirical studies suggest that a majority of students cheat. Longitudinal studies over the past six decades have found that about 65–87% of college students in America have admitted to at least one form of nine types of cheating at some point during their college studies”

(Yu et al., 2018)

Shocking? Yes. But also reassuring in its own way. When you are presented with something like that from 2018 (i.e. pre-ChatGPT), you realise that this is not a newly massive issue; it’s the same issue with a different aspect, lens or vehicle. Cheating in higher education has always existed, but I do acknowledge that generative AI has illuminated it with an intensity that makes me reach for the eclipse goggles. There are those who argue that essay mills and inappropriate third-party support were phenomena we had inadequately addressed as a sector for a long time. LLMs have somehow opened a fissure in the integrity debate so large that suddenly everyone wants to do something about it. It has become so much more complex because of that, but that visibility could be seen positively (I may be reaching, but I genuinely think there is mileage in this), not least because:

1. We are actually talking about it seriously. 

2. It may give us leverage to effect long needed changes. 

The common narrative I hear is ‘where there’s a will, there’s a way’, and ChatGPT makes the ‘way’ easier. The problem, though, in my view, is that just because the ‘way’ is easier does not mean the ‘will’ will necessarily increase. Assuming all students will cheat does nothing to build bridges, establish trust or provide an environment where the sort of essential mutual respect necessary for transparent and honest working can flourish. You might point to the stat at the top of this page and say we are WAY past the need to keep measuring will! Exams, as I’ve argued before, are no panacea, given the long-standing issues of authenticity and inclusivity they bring (as well as being the place where students have shown themselves to be most creative in their subversion techniques!).

In contrast to this, study after study is finding that students are increasingly anxious about being accused of cheating when that was never their intention. They report unclear and sometimes contradictory guidance, leaving them uncertain about what is and isn’t acceptable. A compounding issue is the lack of consistency in how cheating is defined: it varies significantly between institutions, disciplines and even individual lecturers. I often ask colleagues whether certain scenarios constitute cheating, deliberately using examples involving marginalised students to highlight the inconsistencies. Is it ok to get structural, content or proofreading suggestions from your family? How does your access to human support differ if you are a first-generation, neurodivergent student studying in a new language and country? Policies usually say “no”, but to fool ourselves that this sort of ‘cheating’ is not routine would be hard to achieve and even harder to evidence. The boundaries are blurred, and the lack of consensus only adds to the confusion.

To help my thinking on this, I looked again at some articles on cheating over time (going back to 1941!) that I had put in a folder and badly labelled as per usual, and selected a few to give me a sense of the what and how, as well as the why, and to provide a baseline to inform the context around the current assumptions about cheating. Yu et al. (2018) use a long-established categorisation of types of cheating, with a modification to acknowledge unauthorised digital assistance:

  1. Copying sentences without citation.
  2. Padding a bibliography with unused sources.
  3. Using published materials without attribution.
  4. Accessing exam questions or answers in advance.
  5. Collaborating on homework without permission.
  6. Submitting work done by others.
  7. Giving answers to others during an exam.
  8. Copying from another student in an exam.
  9. Using unauthorised materials in an exam.

The what and how question reveals plenty of expected ways of cheating, especially in exams, but it is also noted where teachers and lecturers are surprised by the extent and creativity. Four broad types:

  1. Plagiarism in various forms from self, to peers to deliberate inappropriate practices in citation.
  2. Homework and assignment cheating such as copying work, unauthorised collaboration, or failing to contribute fairly.
  3. Other academic dishonesty such as falsifying bibliographies, influencing grading or contract cheating.
  4. In exams.

The amount of exam-based cheating reported should really challenge assumptions about the security of exams, at the very least, and remind us that they are no panacea, whether we see this issue through an ongoing or a ChatGPT lens. Stevens and Stevens (1987) in particular share some great pre-internet digital ingenuity, and Simkin and McLeod (2010) show how the internet broadened the scope and potential. These are some of the types reported over time:

  1. Using unauthorised materials.
  2. Obtaining exam information in advance.
  3. Copying from other students.
  4. Providing answers to other students.
  5. Using technology to cheat (microcassettes, pre-storing data in calculators, mobile phones; not mentioned, but now apparently a phenomenon, is the use of bone-conduction tech in glasses and/or smart glasses).
  6. Using encoded materials (rolled up pieces of paper for example).
  7. Hiring a surrogate to take an exam.
  8. Changing answers after scoring (this one in Drake, 1941).
  9. Collaborating during an exam without permission.

These are the main reasons for cheating across the decades I could identify (from across all sources cited at the end):

  1. Difficulty of the work: when students are on the wrong course (I’m sure we can think of many reasons why this might occur), or when teaching is inadequate or insufficiently differentiated.
  2. Pressure to succeed: ‘success’, when seen as the principal goal, can subdue the conscience.
  3. Laziness: this is probably top of many academics’ assumptions, and it is there in the research, but it is also worth considering what else competes for attention and time, and how ‘I can’t be bothered’ may mask other issues even in self-reporting.
  4. Perception that cheating is widespread: if students feel others are doing it and getting away with it, cheating increases.
  5. Low risk of getting caught.
  6. A sense of injustice in the systemic approach: structural inequalities, both real and perceived, can be seen as a valid justification.
  7. External factors, such as evident cheating in wider society. A fascinating example was suggested to me by an academic trained in Soviet-dominated Eastern Europe, who said cheating was (and remains) a marker of subversion and so carries its own respectability.
  8. Lack of understanding of what is and is not allowed: students report they have not been taught this, and degrees of cheating are blurred by some of the other factors here. When does collaboration become collusion?
  9. Cultural influences: different norms and expectations can create issues, and this comes back to my point about individualised (or contextualised) definitions of what is and is not appropriate.
  10. Trauma: my own experiences, over 30 years, of dealing with plagiarism cases often reveal very powerful, often traumatic, experiences that lead students to act in ways that are perceived as cheating.

For each it’s worth asking yourself:

How much is the responsibility for this on the student and how much on the teacher/ lecturer and / or institution (or even society)?

I suspect that the truly wilful, utterly cynical students are the ones least likely to self-declare and least likely to get caught. This furthers my own discomfort about the mechanisms we rely (too heavily?) on to judge integrity.

This skim through really did make clear to me that cheating and plagiarism are not the simple concepts that many say they are. Also, cheating in exams is a much bigger thing than we might imagine. The reasons for cheating are where we need to focus, I think, less so the ‘how’, as that becomes a battleground and further entrenches ‘us and them’ conceptualisations. When designing curricula and assessments, the unavoidable truth is we need to do better: by moving away from one-size-fits-all approaches, by realising that cultural, social and cognitive differences will impact many of the ‘whys’, and by holding ourselves to account when we create or exacerbate structural factors that broaden the likelihood of cheating.

I am definitely NOT saying give wilful cheaters a free pass, but all the work many universities are doing on assessment reform needs to be seen through a much longer lens than the generative AI one. To focus only on that is to lose sight of the wider and longer issue. We DO have the capacity to change things for the better, but that also means that many of us will be compelled (in a tense, under-threat landscape) to learn more about how to challenge conventions and even invest much more time in programme-level, iterative, AI-cognisant teaching and assessment practices. Inevitably the conversations will start with the narrow, hyped and immediate manifestations of inappropriate AI use, but let’s celebrate this as leverage; as a catalyst. We’d do well, at the very least, to reconsider how we define cheating and why we consider some incredibly common behaviours as cheating (is it collusion or collaboration, for example, or proofreading help from third parties?). Beyond that, we should be having serious discussions about augmentation and hybridity in writing: what counts as acceptable support? How does that differ according to context and discipline? It will raise questions about the extent to which writing is the dominant assessment medium, about authenticity in assessment, and about the rationale and perceived value of anonymity.

It’s interesting to read how many of the behaviours witnessed over 80 years ago (Drake, 1941), in both students and their teachers, have 21st-century parallels: strict disciplinarian responses, or ignoring it because ‘they’re only harming themselves’, being common. In other words, the underlying causes were not being addressed. To finish, I think this sets out the challenge confronting us well:

“Teachers in general, and college professors in particular, will not be enthusiastic about proposed changes. They are opposed to changes of any sort that may interfere with long-established routines – and examinations are a part of the hoary tradition of the academic past”

(Drake, 1941, p.420)

Drake, C. A. (1941). Why students cheat. Journal of Higher Education, 12(5).

Gallant, T. B., & Drinan, P. (2006). Organizational theory and student cheating: Explanation, responses, and strategies. The Journal of Higher Education, 77(5), 839–860. https://www.jstor.org/stable/3838789

Hutton, P. A. (2006). Understanding student cheating and what educators can do about it. College Teaching, 54(1), 171–176. https://www.jstor.org/stable/27559254

Miles, P., et al. (2022). Why students cheat. The Journal of Undergraduate Neuroscience Education (JUNE), 20(2), A150–A160.

Rettinger, D. A., & Kramer, Y. (2009). Situational and individual factors associated with academic dishonesty. Research in Higher Education, 50(3), 293–313. https://doi.org/10.1007/s11162-008-9116-5

Simkin, M. G., & McLeod, A. (2010). Why do college students cheat? Journal of Business Ethics, 94, 441–453. https://doi.org/10.1007/s10551-009-0275-x

Stevens, G. E., & Stevens, F. W. (1987). Ethical inclinations of tomorrow’s managers revisited: How and why students cheat. Journal of Education for Business, 63(1), 24–29. https://doi.org/10.1080/08832323.1987.10117269

Yu, H., Glanzer, P. L., Johnson, B. R., Sriram, R., & Moore, B. (2018). Why college students cheat: A conceptual model of five factors. The Review of Higher Education, 41(4), 549–576. https://doi.org/10.1353/rhe.2018.0025

AI3*: Crossing the streams of artificial intelligence, academic integrity and assessment innovation

*That’s supposed to read AI3 but the title font refuses to allow superscript!

Yesterday I was delighted to keynote at the Universities at Medway annual teaching and learning conference. It’s a really interesting collaboration of three universities: the University of Greenwich, the University of Kent and Canterbury Christ Church University. Based at the Chatham campus in Medway, you can’t help but notice the history the moment you arrive. Given that I’d worked at Greenwich for five years I was familiar with the campus but, as was always the case when I went there during my time at Greenwich, I experienced a moment of awe when seeing the campus buildings again. It’s actually part of the Chatham Dockyard World Heritage site and features the remarkable Drill Hall library. The reason I’m banging on about history is because such an environment really underscores for me some of those things that are emblematic of higher education in the United Kingdom (especially for those that don’t work or study in it!).

It has echoes of the cultural shorthands and memes of university life that remain popular in representations of campus life and study. It’s definitely a bit out of date (and overtly UK-centric), like a lot of my cultural references, but it made me think of all the murders in the Oxford-set crime drama ‘Morse’. The campus locations fossilised for a generation the idea of ornate buildings, musty libraries and deranged academics. Most universities, of course, don’t look like that, and by and large academics tend not to be too deranged. Nevertheless, we do spend a lot of time talking about the need for change and transformation whilst merrily doing things the way we’ve done them for decades, if not hundreds of years. Some might call that deranged behaviour. And that, in essence, was the core argument of my keynote: for too long we have twiddled around the edges, but there will be no better opportunity than now, with machine-assisted leverage, to do the things that give the lie to the idea that universities are seats of innovation and dynamism. Despite decades of research that have helped define broad principles for effective teaching, learning, assessment and feedback, we default to lecture and seminar, essay, report and exam across large swathes of programmes. We privilege writing as the principal mechanism of evidencing learning. We think we know what learning looks like, what good writing is, what plagiarism and cheating are, but a couple of quick scenarios put to a room full of academics invariably reveal a lack of consensus and a mass of tacit, hidden and sometimes very privileged understandings of those concepts.

Employing an undoubtedly questionable metaphor and the unashamedly dated (1984) concept of ‘crossing the streams’ from the original Ghostbusters film, I argued that there are several parallels to the situation the citizens of New York first found themselves in way back when, not least for the academics (initially mocked and defunded) who confront the paranormal manifestations in their Ghostbusters guises. First come the appearances of a trickle of ghosts and demons, followed by a veritable deluge. Witness ChatGPT’s release, the unprecedented sign-ups and the ensuing 18 months wherein everything now has AI (even my toothbrush). There’s an AI for That has logged 12,982 AIs to date, to give an indication of that scale (I need to watch the film again to get an estimate of the number of ghosts). Anyway, early in the film we learn that a ghost-catching device called a ‘Proton Pack’ emits energy streams, but:


“The important thing to remember is that you must never under any circumstances, cross the streams.” (Dr Egon Spengler)

Inevitably, of course, the resolution to the escalating crisis is the necessity of crossing the streams to defeat and banish the ghosts and demons. I don’t think that generative AI is something that could or should be defeated and I definitely do not think that an arms race of detection and policing is the way forward either. But I do think we need to cross the streams of the three AIs: Artificial Intelligence; Academic Integrity and Assessment Innovation to help realise the long-needed changes.

Artificial Intelligence represents the catalyst not the reason for needing dramatic change.

Academic Integrity as a goal is fine but too often connotes protected knowledge, archaic practices, inflexible standards and a resistance to evolution.

Assessment innovation is the place where we can, through common language and understanding, address the concerns of perhaps more traditional or conservative voices about perceived robustness of assessments in a world where generative AI exists and is increasingly integrated into familiar tools along with what might be seen as more progressive voices who, well before ChatGPT, were arguing for more authentic, dialogic, process-focussed and, dare I say it, de-anonymised and humanly connected assessments.

Here is our opportunity. Crossing the streams may be the only way we mitigate a drift to obsolescence! My concluding slide showed a (definitely NOT called Casper) friendly ghost which, I hope, connoted the idea that what we fear is the unknown, but as we come to know it we find ways to shift from engagement (sometimes aggressive) to understanding and perhaps even an ‘embrace’, as many who talk of AI encourage us to do.

Incidentally, I asked the Captain (in my custom bot ‘Teaching Trek: Captain’s Counsel’) a question about change and he came up with a similar metaphor:

"Blow Up the Enterprise: Sometimes, radical changes are necessary. I had to destroy the Enterprise to save my crew in 'Star Trek III: The Search for Spock.' Academics should learn when to abandon a failing strategy and embrace new approaches, even if it means starting over."

In a way I think I’d have had an easier time if I’d stuck with Star Trek metaphors. I was gratified to note that ‘The Search for Spock’ was also released in 1984. An auspicious year for dated cultural references from humans and bots alike.

—————–

Thanks:

The conference itself was great and I am grateful to Chloe, Emma, Julie and the team for organising it and inviting me.

Earlier in the day I was inspired by presentations by colleagues from the three universities: Emma, Jimmy, Nicole, Stuart and Laura. The student panel was great too: it started strongly with a rejection of the characterisation of students as idle and disinterested and carried on forcefully from there! Special thanks too to David Bedford (who I first worked with something like 10 years ago), who uses an analytical framework of his own devising called 'BREAD' as an aid to informing critical information literacy. His session adapted the framework for AI interactions and it prompted a question which led, over lunch, to me producing a (rough and ready) custom GPT based on it.

I should also acknowledge the works I referred to: first, Sarah Eaton, whose work on the six tenets of post-plagiarism I heartily recommend; and second, Cath Ellis and Kane Murdoch* for their 'enforcement pyramid', which also works well as one of the vehicles that will help us navigate our way from the old to the new.

*Recommendation of this text does not in any way connote acceptance of Kane’s poor choice when it comes to football team preference.

AI Law

Watch the full video here

In the second AI conversation of the King's Academy 'Interfaculty Insights' series, Professor Dan Hunter, Executive Dean of the Dickson Poon School of Law, shared his multifaceted engagement with artificial intelligence (AI). Prof Hunter discussed the transformative potential of AI, particularly generative AI, in legal education, practice, and beyond. With a long history in the field of AI and law, he offered a unique perspective on the challenges and opportunities presented by this rapidly evolving technology. To say he is firmly in the enthusiast camp is probably an understatement.

A wooden gavel with ‘AI’ embossed on it

From his vantage point, Prof Hunter presents the following key ideas:

  1. AI tools (especially LLMs) are already demonstrating significant productivity gains for professionals and students alike but it is often more about the ways they can do ‘scut work’. Workers and students become more efficient and improve work quality when using these models. For those with lower skill levels the improvement is even more pronounced.
  2. While cognitive offloading to AI models raises concerns about losing specific skills (examples of long division or logarithms were mentioned), Prof Hunter argued that we must adapt to this new reality. The “cat is out of the bag” so our responsibility lies in identifying and preserving foundational skills while embracing the benefits of AI.
  3. Assessment methods in legal education (and by implication across disciplines) must evolve to accommodate AI capabilities. Traditional essay writing can be easily replicated by language models, necessitating more complex and time-intensive assessment approaches. Prof Hunter advocates for supporting the development of prompt engineering skills and requiring students to use AI models while reflecting on the process.
  4. The legal profession will undergo a significant shakeup, with early adopters thriving and those resistant to change struggling. Routine tasks will be automated, obliging lawyers to move up the value chain and offer higher-value services. This disruption may lead to the need for retraining.
  5. AI models can help address unmet legal demand by making legal services more affordable and accessible. However, this will require systematic changes in how law is taught and practiced, with a greater emphasis on leveraging AI’s capabilities.
  6. In the short term, we tend to overestimate the impact of technological innovations, while underestimating their long-term effects. Just as the internet transformed our lives over decades, the full impact of generative AI may take time to unfold, but it will undoubtedly be transformative.
  7. Educators must carefully consider when cognitive offloading to AI is appropriate and when it is necessary for students to engage in the learning process without AI assistance. Finding the right balance is crucial for effective pedagogy in the AI era.
  8. Professional services staff can benefit from AI by identifying repetitive, language-based tasks that can be offloaded to language models. However, proper training on responsible AI use, data privacy, and information security is essential to avoid potential pitfalls.
  9. While AI models can aid in brainstorming, generating persuasive prose, and creating analogies, they currently lack the ability for critical thinking, planning, and execution. Humans must retain these higher-order skills, which cannot yet be outsourced to AI.
  10. Embracing AI in legal education and practice is not just about adopting the technology but also about fostering a mindset of change and continuous adaptation. As Prof Hunter notes, “If large language models were a drug, everyone would be prescribed them.” *

The first in the series was Dr Mandeep Gill Sagoo

* First draft of this summary generated from meeting transcript via Claude

Navigating the Path of Innovation: Dr. Mandeep Gill Sagoo’s Journey in AI-Enhanced Education

Dr. Mandeep Gill Sagoo, a Senior Lecturer in Anatomy at King’s College London, is actively engaged in leveraging artificial intelligence (AI) to enhance education and research. Her work with AI is concentrated on three primary projects that integrate AI to address diverse challenges in the academic and clinical settings. The following summary (and title and image, with a few tweaks from me) was synthesised and generated in ChatGPT using the transcript of a fireside chat with Martin Compton from King’s Academy. The whole conversation can be listened to here.

AI generated image of a path winding through trees in sunlight and shadow
  1. Animated Videos on Cultural Competency and Microaggression: Dr. Sagoo has led a cross-faculty project aimed at creating animated, thought-provoking videos that address microaggressions in clinical and academic environments. This initiative, funded by the race equity and inclusive education fund, involved collaboration with students from various faculties. The videos, designed using AI for imagery and backdrops, serve as educational tools to raise awareness about unconscious bias and microaggression. They are intended for staff and student training at King’s College London and have been utilised in international collaborations. Outputs will be disseminated later in the year.
  2. AI-Powered Question Generator and Progress Tracker: Co-leading with a second-year medical student and working across faculties with a number of others, Dr. Sagoo received a college teaching fund award to develop this project, which is focused on creating an AI system that generates single best answer questions for preclinical students. The system allows students to upload their notes, and the AI generates questions, tracks their progress, and monitors the quality of the questions. This project aims to refine ChatGPT to tailor it for educational purposes, ensuring the questions are relevant and of high quality.
  3. Generating Marking Rubrics from Marking Schemes: Dr. Sagoo has explored the use of AI to transform marking schemes into detailed marking rubrics. This project emerged from a workshop and aims to simplify the creation of rubrics, which are essential for clear, consistent, and fair assessment. By inputting existing marking schemes into an AI system, she has been able to generate comprehensive rubrics that delineate the levels of performance expected from students. This project not only streamlines the assessment process but also enhances the clarity and effectiveness of feedback provided to students.
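As a purely illustrative sketch of project 2 above (not Dr Sagoo's actual system, whose internals aren't described here), the core step of a single-best-answer generator is assembling a prompt that grounds the questions in the student's uploaded notes. The function name and prompt wording below are assumptions; the call to the language model itself is omitted.

```python
# Hypothetical sketch: build an LLM prompt that asks for single-best-answer
# (SBA) questions drawn only from a student's uploaded notes. The wording is
# illustrative; a real system would also validate and store the output.

def sba_prompt(notes: str, n_questions: int = 3) -> str:
    """Assemble a prompt requesting SBA questions grounded in the notes."""
    return (
        f"From the course notes below, write {n_questions} single-best-answer "
        "questions. Each must have five options (A-E) with exactly one correct "
        "answer, and must be based only on the notes; do not invent facts.\n\n"
        f"NOTES:\n{notes}"
    )

prompt = sba_prompt("The femoral nerve arises from L2-L4.", n_questions=2)
print(prompt)
```

The instruction to stay within the supplied notes mirrors the project's stated focus on ensuring the AI does not generate factually wrong questions from its material.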

Dr. Sagoo’s work exemplifies a proactive approach to incorporating AI in education, demonstrating its potential to foster innovation, enhance learning, and streamline administrative processes. Her projects are characterised by a strong emphasis on collaboration, both with students and colleagues, reflecting a commitment to co-creation and the sharing of expertise in the pursuit of educational excellence.

Contact Mandeep

College Teaching Fund AI Projects: a review of the review, by Chris Ince

On Wednesday I attended the mid-point event of the KCL College Teaching Fund projects. Each group has been awarded funding (up to £10,000, though some came in with far smaller budgets) to do more than speculate on the possibility of using AI within their discipline and teaching: to carry out a research project around design and implementation.

Each team had one slide and three minutes to give updates on their progress so far, with Martin acting as compere and facilitator. I started to take notes so that I could possibly share ideas with the faculty that I support (and part-way through thought that I perhaps should have recorded the session and used an AI to summarise each project), but it was fascinating to see links between projects in completely different fields. Some connections and thoughts before each project’s progress so far:

  • The work involving students was carried out in many ways, but pleasingly many projects were presented by student researchers, who had either been part of the initial project bid or who had been employed using CTF funds. Even where involvement was limited to being surveyed or trialling tools, students are involved at all levels of this work, as they should be.
  • Several projects opened with scoping existing student use of gAI in their academic lives and work. This has to be taken with a pinch of salt, as it requires an element of honesty, but King's has been clear that gAI is not prohibited so long as it is acknowledged (and allowed at a local level). What is interesting is that scoping consistently found that students did not seem to be using gAI as much as one might think (about a third reported doing so); however, their use has grown throughout the projects and the academic year as they are taught how to use it.
  • That being said, several projects identify how students are sceptical of the usefulness of gAI to them and in some that scepticism grows through the project. In some ways this is quite pleasing, as they begin to see gAI not as a panacea, but as a tool. They’re identifying what it can and can’t do, and where it is and isn’t useful to them. We’re teaching about something (or facilitating), and they’re learning.
  • Training AIs and ChatBots to assist in specific and complex tasks crops up in a number of projects, and they’re trialling some very different methods for this. Some are external, some are developed and then shared with students, and some give students what they need to train them themselves. Evidence that there are so many approaches, and exactly why this kind of networking is useful.
  • There’s frequently a heavily patronising perception sometimes that young people know more about a technology that older people. It’s always more complex than that, but the involvement of students in CTF projects has fostered some sharing of knowledge, as academic staff have seen what students can do with gAI. However, it’s been clear that the converse is also true, and that ‘we’ not only need to teach them but there is a desire for us to. This is particularly notable when we consider equality of access and unfair advantages, and two projects highlight this when they noted students from China had lower levels of familiarity with AI.
Each project is listed below with its title, lead(s) and my thoughts:

  • How do students perceive the use of genAI for providing feedback (Timothy Pullen): A project from Biochemistry focused on coding, specifically AI tools giving useful feedback on coding. GTAs have developed short coding exercises that have been trialled with students (they get embedded into Moodle and the AI provides the feedback). This has implications for time saved on administering feedback of this kind, but Tim suggests there are significant limits to what customised bots can do here. I need to find out more, and am intrigued by the student perception of this: are there some situations where students would rather have a real person look at their work and offer help?
  • AI-Powered Single Best Answer (SBA) Automatic Question Generation & Enhanced Pre-Clinical Student Progress Tracking (Isaac Ng, student, and Mandeep Sagoo): Isaac, a medical student, presents, and it's interesting that there's quite a clear throughline to producing something that could have commercial prospects further down the line: there's a name and logo! An AI has been 'trained' with resources and question styles that act as the baseline; students can then upload their own notes and the AI uses these to produce questions in an SBA format consistent with the 'real' ones. There's a clear focus on making sure that the AI won't generate questions from the material it's been given that are factually wrong. A nice aspect is that all the questions the AI generates are stored, and in March students will be able to vote on other students' AI questions. I'm intrigued by whether students know what a good or bad question is, and do we need to ensure their notes are high-quality first?
  • Co-designing Encounters with AI in Education for Sustainable Development (Caitlin Bentley): Mira Vogel from King's Academy spoke on the team's behalf; she leads on teaching sustainability in HE. The team have been working on the 'right' scaffolding and framing to find the most appropriate teaching within different areas/subjects/faculties. They have a broad range of staff involved, so have brought this element into the project itself. The first phase has been recursive, recruiting students across King's to develop materials (Mira has a fun phrase about "eating one's own dog food"). They've been identifying common ground across disciplines to find how future work should be organised at scale and more widely to tackle 'wicked problems' (I'm sure this is 'pernicious or thorny problems' and not surfer dude 'wicked', but I like the positivity in the thought of it being both).
  • Testing the Frontier – Generative AI in Legal Education and beyond (Anat Keller and Cari Hyde Vaamonde): Trying to bring critical thinking into student use of AI. There's a Moodle page, an online workshop (120 participants) and a focus group day (12 students and staff) to consider this. How does/should/could the law regulate financial institutions? The project focused on the application of assessment marking criteria and typically identified three key areas of failure: structure, understanding, and a lack of in-depth knowledge (interestingly, probably replicating what many academics would report for most assessment failure). The aim wasn't a pass, but to see if a distinction-level essay could be produced. Students were a lot more critical than staff when assessing the essays (side note: students anthropomorphised the AI, often using terms like 'them' and 'him' rather than 'it'). Students felt that while using AI at the initial ideas and creation stage may feel more appropriate than using it during the actual essay writing, this was where they lost the agency and creativity you'd want and find in a distinction-level student. Perhaps this is the message to get across to students?
  • Exploring literature search and analysis through the lens of AI (Isabelle Miletich): Another project where the students on the research team get to present their work; it's a highlight of the work, which also has a strong co-creation aspect. Focused on Research Rabbit: a free AI platform that sorts and organises literature for literature reviews. Y2 focus groups have been used to inform material that is then used with Y1 dental students. There was a 95.7% response rate to the Y1 survey. Resources were produced to form a toolbox for students, mainly guidance for the use of Research Rabbit, along with a student-produced video on how to use it for Y1s. The conclusion of the project will be narrated student presentations on how they used Research Rabbit.
  • Designing an AI-Driven Curriculum for Employable Business Students: Authentic Assessment and Generative AI (Chahna Gonsalves): Identifying use cases so that academics are better informed about when to put AI into their work. There have been a number of employer-based interviews about how employers are using AI. Student participants are reviewing transcripts to match these to appropriate areas that academics might then slot into the curriculum. An interesting aspect has been that students didn't necessarily know or appreciate how much King's staff do behind the scenes on curriculum development work. It was also a surprise to the team that some employers were not as persuaded by the usefulness of AI (although many were embedding it within work). Some consideration of there being a difference in approach between early adopters and those more reticent.
  • Assessment Innovation integrating Generative AI: Co-creating assessment activities with Undergraduate Students (Rebecca Upsher): Based in Psychology. Students described how assessment to them means anxiety and stress, or "just a means to get a degree" (probably some work to do around the latter, for sure). There's a desire for creative and authentic assessment from all sides. The project started by identifying current student use of AI in and around assessment, with one focus group (a learning and assessment investigation, the clarity of existing AI guidance, and suggestions for improvements) and one workshop (students more actively giving staff suggestions about summative AI use). There is a focus on inclusive and authentic assessment, being mindful of neurodiverse students, and the group have been working with the neurodiverse society. Research students have carried out the literature review, prepared recruitment materials for groups, and mapped assessment types used in the department. A common thread was a desire for assessments to be designed with students, and a shift in power dynamics. Interestingly, AI projects like this are fostering the sorts of co-design work that could have taken place before AI but didn't necessarily: academic staff are now valuing what students know and can do with AI (particularly if they know more than we do).
  • Improving exam questions to decrease the impact of Large Language Models (Victor Turcanu): A medicine-based project. Alignment with authentic professional tasks that allow students to demonstrate their understanding and critical and innovative thinking: can students use LLMs to enhance their creativity and wider conceptual reach? The project is using 300 anonymous exam scripts to compare with ChatGPT answers. More specifically, it's about asking students their opinion on a question that doesn't have an answer (a novel question embedded within an area of research around allergies: can students design a study to investigate something that doesn't have a known solution, talk about the possibilities, or suggest a line of approach to researching an answer?). LLMs may be able to utilise work that has been published, but cannot draw on what hasn't been published or isn't yet understood. While the project was about students using LLMs, there's also an angle here that this is a kind of assessment where an AI can't help as much.
  • Exploring Generative AI in Essay Writing and Marking: A study on Students' and Educators' Perceptions, Trust Dynamics, and Inclusivity (Margherita de Candia): Political science. Working with Saul Jones (an expert on assessment), they've also considered making an essay 'AI-proof'. They're using the PAIR framework developed at King's and have designed an assessment using the framework: a brief they think is AI-proof but still allows students to use AI tools. Workshops in which students write an essay using AI will then be used to refine the assignment brief following a marking phase. If it works, they want to disseminate the AI-proof essay brief to colleagues across the social science faculties. They are also running sessions to investigate student perceptions, particularly around improvements to inclusivity in using AI. An interesting element here is what we consider to be 'AI-proof', but also that students will be asked for their thoughts on feedback for their essays when half of it will have been generated by an AI.
  • Student attitudes towards the use of Generative AI in a Foundation Level English for Academic Purposes course and the impact of in-class interventions on these attitudes (James Ackroyd): Action research from King's Foundations, with the team working on English for Academic Purposes. Two surveys through the year, a focus group, and specific in-class interventions on the use of AI, with another survey to follow. Two-thirds of students initially said that they didn't use AI at the start of the course (40% of students are from China, where AI is less commonly used due to access restrictions), but half-way through the course two-thirds said that they did. Is this King's demystifying things? Student belief in what AI could do reduced during the course, while faith in the micro-skills required for essay writing increased. Lots of fascinating threads on AI literacy and perceptions of it have come out of this so far.
  • Enhancing gAI literacy: an online seminar series to explore generative AI in education, research and employment (Brenda Williams): An online seminar series on the use of AI (students asked for the sessions to be online, and with more than 2,000 students in the target group it's also the best way to get reach). A consultation panel (10 each of staff, students and alumni) is designing five sessions to be delivered in June. Students have been informed about the course, and pre- and post-surveys to find out about participants' use of AI have been prepared. This project in particular has a high mix of staff from multiple areas around King's and highlights that there is more at play than just working with AI in teaching settings.
  • Supporting students to use AI ethically and effectively in academic writing (Ursula Wingate): Preliminary scoping of student use of AI. A focus on fairness and a level playing field: upskilling some students, and reining in others. Four student collaborators were recruited and four focus groups held (23 participants in January). All students reported having used ChatGPT (did this mean in education, or in general?) and there is a wide range of free tools they use. Students are critical and sceptical of AI: they've noticed that it isn't very reliable and have concerns about the IP of others. They're also concerned about not developing their own voice. Sessions designed to focus on some key aspects of using AI in academic writing (cohesion, grammatical compliance, appropriateness of style, etc.) are being planned.
  • Is this a good research question? (Iain Marshall and Kalwant Sidhu): Research topics for possible theses are being discussed at this half-way point of the academic year. Students are consulting chatbots (academics are quite busy, and supervisors are usually only assigned once project titles and themes are decided: can students have somewhere to go beforehand for more detailed input?). The team have been using prompt engineering to create their own chatbot to help themselves and others (I think this works through the application of provided material, so students can input this and then follow with their own questions). This does involve students using quite a number of detailed scripts and some coding, so it is supervised by a team, with the aim that this will be supportive.
  • Evaluating an integrated approach to guide students' use of generative AI in written assessments (Tania Alcantarilla and Karl Nightingale): There are 600 students in the first year of their Bioscience degrees. The team focused on perceptions and student use of AI, the design of a guidance podcast/session, and evaluation of the sessions and then of ultimate gAI use. There were 200 responses to the student survey (which is pretty impressive). Lower use of gAI than expected (a third of students, though this increased after being at King's, mainly among international students). It's only now that I've realised people 'in the know' say gAI and not genAI as I have... am I out of touch?
  • AI-Based Automated Assessment Tools for Code Quality (Marcus Messer and Neil Brown): A project based around the assessment of student-produced code. Here the team have focused on 'chain of thought prompting': an example is given to the LLM comprising a gobbet that includes the data, a demonstration of the reasoning steps, and the solution. Typically eight such examples are used before the gAI is asked to apply what it has learned to a new question or other input. The team will use this to assess the code quality of programming assignments, including readability, maintainability, and overall quality. Ultimately the grades and feedback will be compared with human-graded examples to judge the effectiveness of the tool.
  • Integrating ChatGPT-4 into teaching and assessment (Barbara Piotrowska): Public Policy, in the Department of Political Economy. The broad goal was to get students excited about and comfortable with using gAI. Some of the most hesitant students have been the most inventive in using it to learn new concepts. ChatGPT is being used as a co-writer for an assessment, a policy (advocacy) brief, due next week. Teaching is also a part: conversations with gAI on a topic can be used as an example of a learning task.
  • Generative AI for critical engagement with the literature (Jelena Dzakula): Digital Humanities: reading and marking essays where students engage with a small window of literature. Can gAI summarise articles and chapters that students find difficult? An initial survey showed that students don't use tools for this; they just give up. They mainly use gAI for brainstorming and planning, but not for helping their learning. The team are designing workshops/focus groups to turn gAI into a learning tool, mainly based around complex texts.
  • Adaptive learning support platform using GenAI and personalised feedback (Ievgeniia Kuzminykh): This project aims to embed AI, or at least use it as an integral part, of a programme, where it has access to a lot of information about progress, performance and participation. Moodle has proven quite difficult to work with, as the team wanted an AI that would analyse Moodle data (to do this, a cloned copy was needed, uploaded elsewhere so that it could be accessed externally by the AI). The ChatGPT API not being free has also been an issue. So far, course content, quizzes and answers have been used, with the gAI asked to give feedback and generate new quizzes. A paper on the design of a feedback system is being written and will be disseminated.
  • Evaluating the Reliability and Acceptability of AI Evaluation and Feedback of Medical School Course Work (Helen Oram): Couldn't make the session; updates coming soon!
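The 'chain of thought prompting' described for the code-quality project can be sketched generically. This is an illustration of the technique, not the project's actual implementation: worked examples pairing code with explicit reasoning and a grade are concatenated ahead of the new submission, and the model continues from the final "Reasoning:". The rubric wording and example grades below are assumptions.

```python
# Generic sketch of few-shot chain-of-thought prompting for code-quality
# grading: each worked example shows the data (code), the reasoning steps,
# and the solution (grade) before the new input is appended.

def build_cot_prompt(examples, submission):
    """Assemble a grading prompt from worked examples plus the new submission."""
    parts = [
        "You grade code for readability, maintainability and quality.",
        "Reason step by step before giving a grade.",
        "",
    ]
    for ex in examples:
        parts += [f"Code:\n{ex['code']}",
                  f"Reasoning: {ex['reasoning']}",
                  f"Grade: {ex['grade']}",
                  ""]
    parts += [f"Code:\n{submission}", "Reasoning:"]
    return "\n".join(parts)

examples = [
    {"code": "def f(x):return x*2",
     "reasoning": "Cramped layout and single-letter names hurt readability.",
     "grade": "C"},
    {"code": "def double(value):\n    return value * 2",
     "reasoning": "Descriptive name and clean layout.",
     "grade": "A"},
]
prompt = build_cot_prompt(examples, "def g(a,b):return a+b")
```

In the project as described, roughly eight such examples precede each new submission, and the model's output would then be compared against human-graded scripts.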

Fascinating stuff. For my part, I want to consider how we can take this work from projects funded by the CTF and use them as ideas and models that departments, academics and teaching staff can look to when considering teaching, curriculum and assessment where they may not have funding.

Assessment 2033

What will assessment look like in universities in 2033? There’s a lot of talk about how AI may finally catalyse long-needed changes to a lot of the practices we cling to but there’s also a quite significant clamour to do everything in exam halls. Amidst the disparate voices of change are also those that suggest we ride this storm out and carry on pretty much as we are: it’s a time-served and proven model, is it not?

Anyway, by way of provocation, see below four visions of assessment in 2033. What do you think? Is one more likely? Maybe bits of two or more or none of the below? What other possibilities have I missed?

  1. Assessment 2033: Panopticopia

Alex sat nervously in a sterile examination room, palms clammy, heart pounding, her personal evaluation number stamped on each hand and on her evaluation tablet. The huge digits on the all-wall clock counted down ominously. As she began the timed exam, micro-drones buzzed overhead, scanning for unauthorised augmentations and communications. Proctoring AI software tracked every keystroke and eye movement, erasing any semblance of privacy. The relentless pressure to recall facts and formulas within seconds elevated her already intense anxiety. Alex knew she was better than these exams would suggest, but in the race against technology, ideals like fairness, inclusive practice and assessment validity were almost forgotten.

  2. Assessment 2033: Nova Lingua

Karim sat, feet up, in the study pod on campus, ready to tackle his latest essay. Much of the source material was in his first language so he felt confident the only translation tech he'd need would be for his more whimsical flourishes (usually in the intro and conclusion). He activated 'AiMee', his assistant bot, instructed her to open Microsoft Multi-Platform and set the essay parameters: 'BeeLine text with synthetic voiced audio and an AI avatar presented digest'. AiMee processed the essay brief as Karim scanned it in and started the conversation. Karim was pleased as his thoughts appeared as eloquent prose, simultaneously in both his first language and the two official university languages. As he worked, Karim thought ruefully about how different an education his parents might have had, given that they both, like him, were dyslexic.

  3. Assessment 2033: Nova Aurora

Jordan was flushed with delight at the end of their first term on the flexible, multi-modal 'stackable' degree. It was amazing to think how different it was from their parents' experience. There were no traditional exams or strict deadlines. Instead, they engaged in continuous, project- and problem-based learning. Professors acted as mentors, guiding them through iterative processes of discovery and growth. The emphasis was on individual development, not just the final product. Grades were replaced with detailed feedback, fostering an appreciation for learning for its own sake rather than competition or, what did their mum call it, 'grade grubbing'! Trust was a defining characteristic of academic and student interactions, with collaboration highly valued and 'collusion' an obsolete concept. HE in the UK had somehow shifted from a focus on evaluation and grades to nurturing individual potential, mirrored by dynamic, flexible structures and opportunities to study in many ways, in many institutions, and in ways that aligned with the complexities of life.

  4. Assessment 2033: Plus ça change

Ash sighed as she hunched over her laptop, typing furiously to meet another looming deadline. In 2033, it seemed that little had changed in higher education. Universities clung stubbornly to old assessment methods, reluctant to adapt. Plagiarism and AI detection tools remained easy to circumvent, masking the harsh realities of how students and, with similar frequency, academic staff, relied on technologies that a lot of policy documents effectively banned. The obsession with “students’ own words” pervaded every conversation, drowning out the unheard lobby advocating for a deeper understanding of students’ comprehension and wider acceptance of the realities of new ways of producing work. Ash knew that she wasn’t alone in her frustrations. The system seemed intent on perpetuating the status quo, turning a blind eye to the disconnect between the façade of academic integrity and the hidden truth of how most students and faculty navigated the system.