AI and the pragmatics of curriculum change

Whilst some (many) academic staff and students voice valid and expansive concerns about the use of, or focus on, artificial intelligence in education, I find myself (finally, perhaps, and later in life) much more pragmatic. We hear the loud voices, and I applaud many acts of resistance, but we cannot ignore the ‘explosive increase’ in AI use by students. It’s here, and that is one driver. More positive drivers for change might be visible trends and future predictions in the global employment landscape, and the affordances in terms of data analytics and medical diagnostics (for example) that more widely defined AI promises. As I keep saying, this doesn’t mean we need to rush to embrace anything, nor does it imply that educators must become computer scientists overnight. But it does mean something has to give. Rather than (something else I have been saying for a long time) knee-jerk ‘everyone back in the exam halls’ type responses, it’s clear we need to move a wee bit faster in the names of credibility (of the education and the bits of paper we dish out at the end of it), the value to the students of what we are teaching and, perhaps more controversially I suppose, how we teach and assess them.

Over the past months, I’ve been working with colleagues to think through what AI’s presence means for curriculum transformation. In discussions with colleagues at King’s most recently, three interconnected areas keep surfacing, and this is my effort to set them out:

Content and Disciplinary Shifts

We need to reflect not just on what we might add, but on what we can subtract or reweight. The core question becomes: how is AI reshaping knowledge and practice in this discipline, and how should our curricula respond?

This isn’t about inserting generic “AI in Society” modules everywhere. It’s about recognising discipline-specific shifts and preparing students to work with, and critically appraise and engage with, new tech, approaches, systems and ideas (and the impacts that follow from implementation). Update 11th June 2025: My colleague, Dr Charlotte Haberstroh, pointed out on reading this that there is an additional important dimension, and I agree. She suggests we need to find a way to enable students to question and make connections explicitly: ‘how does my disciplinary knowledge help me (the student) make sense of what’s happening, and how can it inform my analysis of the causes/consequences of how AI is being embedded into our society (within the part that I aim to contribute to in the future / participate in as a responsible citizen)?’ Examples she suggested: in Law it could be around how AI alters the meaning of intellectual property; in HR it’s going to be about AI replacing workers (or not) and/or the business model of the tech firms driving these changes; in History it’s perhaps how we have adopted technologies in the past and how that helps us understand what we are doing now.

Some additional examples of how AI as content crosses all disciplines: 

  • Law: AI-powered legal research and contract review tools (e.g. Harvey) are changing the role of administration in law firms and the roles of junior solicitors.
  • Medicine: Diagnostic imaging is increasingly supported by machine learning, shifting the emphasis away from manual pattern recognition towards interpretation, communication, and ethical judgement.
  • Geography: Environmental modelling uses real-time AI data analytics, reshaping how students understand climate systems.
  • History and Linguistics: Machine learning is enabling large-scale text and language analysis, accelerating discovery while raising questions about authorship, interpretation, and cultural nuance.

Assessment Integrity and Innovation

Much of the current debate focuses on the security of assessment in the age of AI. That matters a lot, of course, but if it drives all our thinking (and I feel it is still the dominant narrative in HE spaces), we will double down on distrust and always default to suspicion and restriction, rather than making our starting point designing for creativity, authenticity and inclusivity.

The first shift needs to be moving from ‘how do we catch cheating?’ to ‘where and how can we catch learning?’, as well as ‘how do we design assessments that AI can’t meaningfully complete without student learning?’ Does this mean redefining ‘assessment’ beyond the narrow evaluative definition we tend to elevate? Probably, yes.

Risk is real; inappropriate, foolish, unacceptable, even malicious use of AI is a real thing too. So robustness by design is important: iterative, multi-stage tasks; oral components; personalised data sets; critical reflection. All are possible without reverting to closed-book exams, which have a place but are no panacea.


AI Integration & Critical Literacies

Students need access to AI tools; they need choice (this will be an increasingly big hurdle to navigate); and they need structured opportunities to critique and reflect on their use. This means building critical AI literacy into our programmes or, minimally, the extra-curricular space, not as a bolt-on but as an embedded activity. Given what I set out above, this will need nuancing to a disciplinary context. It’s happening in pockets but, I would argue, needs more investment and an upping of the pace. Given the ongoing crises in UK HE (if not globally), it’s easy to see why this may not be seen as a priority.

I think we need to do the following for all students. What do you think? 

  • Critical AI literacy (what it is, how it works (and where it doesn’t), and all the mess it connotes)
  • Aligned with better information/digital literacy (how to verify, attribute, trace and reflect on outputs, and to triangulate)
  • Assessment and feedback literacy (how to judge what’s been learned, and how it’s being measured)

Some examples of where disciplines need nuance and separate focus, and why it is so complex:

  • English Literature/History/Politics: Is the essay dead? Students can prompt ChatGPT to generate essays, but how are they generating passable essays when so much of the critique is about the banality of homogenised outputs and the lack of anything resembling critical depth? How can we (in a context where anonymous submission is the default) maintain value in something deemed so utterly central to humanities and social science study?
  • Medical and Nursing education: I often feel observed clinical examinations hold a potential template for wider adoption in non-medical disciplines. AI simulation tools offer lifelike decision-making environments, so we are seeing increasing exploration of the potential here: the literacy lies in knowing what AI can support, what it cannot do, and how to bridge that gap. Who learns this? Where is the time to do it? How are decisions made about which tools to trial or purchase?

Where to Start: Prompting thoughtful change

These three areas are best explored collectively, in programme teams, curriculum working groups or assessment/module review teams. I’d suggest that, to begin with, these teams discuss the following questions and then move on from there.

  1. Where have you designed assessments that acknowledge AI in terms of the content taught? What might you need to modify looking ahead?
    (e.g. updated disciplinary knowledge, methodological changes, professional practice)
  2. Have you modified assessments where vulnerability is a concern (e.g. risk of generative AI substitution, over-reliance on closed tasks, integrity challenges)? Have you drawn on positive reasons for change (e.g. scholarship in effective assessment design)?
  3. Have you designed or planned assessments that incorporate, develop or even fully embed AI use?
    (e.g. requiring students to use, reflect on or critique AI outputs as part of their task)

I do not think this is AI evangelism, though I accept that some will see it as such, because I believe that engagement is necessary and, actually, an ethical responsibility to our students. That’s a tough sell when some of those students are decrying anything with ‘AI’ in it as inherently and solely evil. I’m not trying to win hearts and minds to embrace anything other than this: these technologies need much broader definition and understanding, and from that we may critique and evolve.

Old problem, new era

“Empirical studies suggest that a majority of students cheat. Longitudinal studies over the past six decades have found that about 65–87% of college students in America have admitted to at least one form of nine types of cheating at some point during their college studies”

(Yu et al., 2018)

Shocking? Yes. But also reassuring in its own way. When you are presented with something like that from 2018 (i.e. pre-ChatGPT), you realise that this is not a newly massive issue; it’s the same issue with a different aspect, lens or vehicle. Cheating in higher education has always existed, but I do acknowledge that generative AI has illuminated it with an intensity that makes me reach for the eclipse goggles. There are those who argue that essay mills and inappropriate third-party support were phenomena we had inadequately addressed as a sector for a long time. LLMs have somehow opened a fissure in the integrity debate so large that suddenly everyone wants to do something about it. It has become so much more complex because of that, but the visibility could also be seen positively (I may be reaching, but I genuinely think there is mileage in this), not least because:

1. We are actually talking about it seriously. 

2. It may give us leverage to effect long needed changes. 

The common narrative I hear is ‘where there’s a will, there’s a way’, and ChatGPT makes the ‘way’ easier. The problem, in my view, is that just because the ‘way’ is easier does not mean the ‘will’ will necessarily increase. Assuming all students will cheat does nothing to build bridges, establish trust or provide an environment where the sort of mutual respect essential for transparent and honest working can flourish. You might point to the stat at the top of this page and say we are WAY past the need to keep measuring will! Exams, as I’ve argued before, are no panacea, given the long-standing issues of authenticity and inclusivity they bring (as well as being the place where students have shown themselves to be most creative in their subversion techniques!).

In contrast to this, study after study is finding that students are increasingly anxious about being accused of cheating when that was never their intention. They report unclear and sometimes contradictory guidance, leaving them uncertain about what is and isn’t acceptable. A compounding issue is the lack of consistency in how cheating is defined: it varies significantly between institutions, disciplines and even individual lecturers. I often ask colleagues whether certain scenarios constitute cheating, deliberately using examples involving marginalised students to highlight the inconsistencies. Is it OK to get structural, content or proofreading suggestions from your family? How does your access to human support differ if you are a first-generation, neurodivergent student studying in a new language and country? Policies usually say “no”, but fooling ourselves that this sort of ‘cheating’ is not routine would be hard, and evidencing it harder still. The boundaries are blurred, and the lack of consensus only adds to the confusion.

To help my thinking on this, I looked again at some articles on cheating over time (going back to 1941!) that I had put in a folder and badly labelled, as per usual, and selected a few to give me a sense of the what and the how, as well as the why, and to provide a baseline for the current assumptions about cheating. Yu et al. (2018) use a long-established categorisation of types of cheating, with a modification to acknowledge unauthorised digital assistance:

  1. Copying sentences without citation.
  2. Padding a bibliography with unused sources.
  3. Using published materials without attribution.
  4. Accessing exam questions or answers in advance.
  5. Collaborating on homework without permission.
  6. Submitting work done by others.
  7. Giving answers to others during an exam.
  8. Copying from another student in an exam.
  9. Using unauthorised materials in an exam.

The what-and-how question reveals plenty of expected ways of cheating, especially in exams, but the literature also notes where teachers and lecturers are surprised by the extent and creativity. Four broad types emerge:

  1. Plagiarism in various forms, from self-plagiarism, to copying peers, to deliberately inappropriate citation practices.
  2. Homework and assignment cheating, such as copying work, unauthorised collaboration, or failing to contribute fairly.
  3. Other academic dishonesty, such as falsifying bibliographies, influencing grading or contract cheating.
  4. Cheating in exams.

The amount of exam-based cheating reported should, at the very least, challenge assumptions about the security of exams and remind us that they are no panacea, whether we see this issue through an ongoing or a ChatGPT lens. Stevens and Stevens (1987) in particular share some great pre-internet digital ingenuity, and Simkin and McLeod (2010) show how the internet broadened the scope and potential. These are some of the types reported over time:

  1. Using unauthorised materials.
  2. Obtaining exam information in advance.
  3. Copying from other students.
  4. Providing answers to other students.
  5. Using technology to cheat (microcassettes, pre-storing data in calculators, mobile phones; not mentioned in these studies, but now apparently a phenomenon, is the use of bone-conduction tech in glasses and/or smart glasses).
  6. Using encoded materials (rolled up pieces of paper for example).
  7. Hiring a surrogate to take an exam.
  8. Changing answers after scoring (this one in Drake, 1941).
  9. Collaborating during an exam without permission.

These are the main reasons for cheating across the decades that I could identify (from all the sources cited at the end):

  1. Difficulty of the work, when students are on the wrong course (I’m sure we can think of many reasons why this might occur) or when teaching is inadequate or insufficiently differentiated.
  2. Pressure to succeed. ‘Success’, when seen as the principal goal, can subdue the conscience.
  3. Laziness. This is probably top of many academics’ assumptions, and it is there in the research, but it is also worth considering what else competes for attention and time, and how ‘I can’t be bothered’ may mask other issues even in self-reporting.
  4. Perception that cheating is widespread. If students feel others are doing it and getting away with it, cheating increases.
  5. Low risk of getting caught.
  6. A sense of injustice in the system; structural inequalities, both real and perceived, can be seen as valid justification.
  7. External factors, such as evident cheating in wider society. A fascinating example of this was suggested to me by an academic trained in Soviet-dominated Eastern Europe, who said cheating was (and remains) a marker of subversion and so carries its own respectability.
  8. Lack of understanding of what is and is not allowed: students report they have not been taught this, and degrees of cheating are blurred by some of the other factors here (when does collaboration become collusion?).
  9. Cultural influences. Different norms and expectations can create issues, and this comes back to my point about individualised (or contextualised) definitions of what is and is not appropriate.
  10. My own experience, over 30 years, of dealing with plagiarism cases often reveals very powerful, sometimes traumatic, circumstances that lead students to act in ways that are perceived as cheating.

For each, it’s worth asking yourself:

How much of the responsibility for this lies with the student, and how much with the teacher/lecturer and/or institution (or even society)?

I suspect that the truly wilful, utterly cynical students are the ones least likely to self-declare and the least likely to get caught. This furthers my own discomfort about the mechanisms we rely on (too heavily?) to judge integrity.

This skim through really did make clear to me that cheating and plagiarism are not the simple concepts many say they are. Cheating in exams, too, is a much bigger thing than we might imagine. The reasons for cheating are where we need to focus, I think, less so the ‘how’, as that becomes a battleground and further entrenches ‘us and them’ conceptualisations. When designing curricula and assessments, the unavoidable truth is that we need to do better: by moving away from one-size-fits-all approaches, by realising that cultural, social and cognitive differences will shape many of the ‘whys’, and by holding ourselves to account when we create or exacerbate structural factors that increase the likelihood of cheating.

I am definitely NOT saying give wilful cheaters a free pass, but all the work many universities are doing on assessment reform needs to be seen through a much longer lens than the generative AI one. To focus only on that is to lose sight of the wider and longer issue. We DO have the capacity to change things for the better, but that also means many of us will be compelled (in a tense, under-threat landscape) to learn more about how to challenge conventions, and even to invest much more time in programme-level, iterative, AI-cognisant teaching and assessment practices. Inevitably the conversations will start with the narrow, hyped and immediate manifestations of inappropriate AI use, but let’s celebrate this as leverage; as a catalyst. We’d do well, at the very least, to reconsider how we define cheating and why we consider some incredibly common behaviours to be cheating (is it collusion or collaboration, for example? What about proofreading help from third parties?). Beyond that, we should be having serious discussions about augmentation and hybridity in writing: what counts as acceptable support? How does that differ according to context and discipline? This will raise questions about the extent to which writing is the dominant assessment medium, about authenticity in assessment, and about the rationale and perceived value of anonymity.

It’s interesting to read how, over 80 years ago (Drake, 1941), many behaviours with 21st-century parallels were already evident in both students and their teachers, with strict disciplinarian responses or ignoring it because ‘they’re only harming themselves’ being common. In other words, the underlying causes were not being addressed. To finish, I think this sets out the challenge confronting us well:

“Teachers in general, and college professors in particular, will not be enthusiastic about proposed changes. They are opposed to changes of any sort that may interfere with long-established routines – and examinations are a part of the hoary tradition of the academic past”

(Drake, 1941, p.420)

Drake, C. A. (1941). Why students cheat. The Journal of Higher Education, 12(5), 418–420.

Gallant, T. B., & Drinan, P. (2006). Organizational theory and student cheating: Explanation, responses, and strategies. The Journal of Higher Education, 77(5), 839–860. https://www.jstor.org/stable/3838789

Hutton, P. A. (2006). Understanding student cheating and what educators can do about it. College Teaching, 54(1), 171–176. https://www.jstor.org/stable/27559254

Miles, P., et al. (2022). Why students cheat. The Journal of Undergraduate Neuroscience Education (JUNE), 20(2), A150–A160.

Rettinger, D. A., & Kramer, Y. (2009). Situational and individual factors associated with academic dishonesty. Research in Higher Education, 50(3), 293–313. https://doi.org/10.1007/s11162-008-9116-5

Simkin, M. G., & McLeod, A. (2010). Why do college students cheat? Journal of Business Ethics, 94, 441–453. https://doi.org/10.1007/s10551-009-0275-x

Stevens, G. E., & Stevens, F. W. (1987). Ethical inclinations of tomorrow’s managers revisited: How and why students cheat. Journal of Education for Business, 63(1), 24–29. https://doi.org/10.1080/08832323.1987.10117269

Yu, H., Glanzer, P. L., Johnson, B. R., Sriram, R., & Moore, B. (2018). Why college students cheat: A conceptual model of five factors. The Review of Higher Education, 41(4), 549–576. https://doi.org/10.1353/rhe.2018.0025