Plus ça change, plus c’est a scroll of death

Hang on, it was summer a minute ago
I looked at my blog just now and saw my last post was in July. How did the summer go so fast? There’s a wind howling outside, I am wearing a jumper and both actual long dark wintry nights and the long dark metaphorical ones of our political climate seem to loom. To warm myself up a little I have been looking through some tools that offer AI integrations into learning management systems (LMS aka VLEs)* rather than doing ‘actual’ work. That exploration reminded me of the first ever article I had published, back in 2004. The piece has long since disappeared from wherever I saved the printed version and is no longer online (not everything digital lasts forever, thank goodness) but I dug the text out of an old online storage account and reading it through has made me realise how much things have changed broadly while, in other ways, it is still the same show rumbling along in the background, like Coronation Street (but no-one really remembers when it went from black and white to colour).

What I wrote back then
In that 2004 article I described the excitement of experimenting with synchronous and asynchronous digital discussion tools in WebCT (for those not ancient like me, Web Course Tools - WebCT - was an early VLE developed by the University of British Columbia which was eventually subsumed into Blackboard). I was teaching GCSE English and was programme leader for an ‘Access to Primary Teaching’ course, and many of my students were part time so only on campus for 6 hours per week across two evenings. I’d earlier taught myself HTML so I could build a website for my history students - it had lots of text! It had hyperlinks! It had a scrolling marquee! Images would have been nice but I knew my limits. When I saw WebCT, I was fired up by the possibilities of discussion forums and live chat. When I set it up and trialled it I saw peer support, increased engagement with tough topics and participation from ‘quiet’ students, amongst other benefits. I was so persuaded by the added value potential I even ran workshops with colleagues to share that excitement.

See this great intro to WebCT from someone in the CS department at British Columbia from 1998:

That is still me of course. My job has changed and so has the context, but the impulse to share enthusiasm for digital tools that foster dialogue and interaction remains why I do what I do. It was nice to read that and I felt a fleeting affection for that much younger teacher, blissfully unaware of the challenges ahead! Even so, and forming a rattling cognitive dissonance that is still there, I was frustrated by the clunky design and awkward user interface that made persuading colleagues to use it really challenging. Log-in issues took up a lot of time and balancing ‘learning’ use with what I then called ‘horseplay’ (what was I, 75?!) took a while to calibrate. Nevertheless, I thought these issues worth working through but, even with some evidence of uptake at the college I was at, there was a wider scepticism and reluctance. Why wouldn’t they use it? ‘It’s too complex’; ‘I am too busy’; ‘the way I do it now works just fine, thank you’. Pretty much every digital innovation has been accompanied by similar responses; even the good ones! I speculated about whether we needed a blank sheet of paper to rethink what an LMS could be, but concluded that institutions were more likely to tinker and add features than to start again.

2004? Feels like yesterday; feels like centuries ago
It was only 2003–4 (he says, painfully aware that I have colleagues who were born then), yet experimenting with an LMS felt novel and that comes over really clearly in my article. If you’d asked me this morning when I started using an LMS I might have said 1998 or 99. 2003 feels so recent in the context of my whole teaching career. What the heck was I doing before all that? Thinking back, I realise that in my first full time job there was only one computer in our office and John S. got to use that as he was a trained typist (so he said). And older than me. In the article I was carefully explaining what chat and forums were and how they were different from one another, so the need for that dates the phenomenon too I suppose. Later, after moving to a Moodle institution, I became e-learning lead and engaged with JISC working groups - a JISC colleague who oversaw the VLE working group jokingly called me Mr Anti-Moodle because I was vocal in my critiques. It wasn’t quite accurate - I was critical for sure but then, as now, I liked the concept but disliked the way it worked. Persuading people to adopt an LMS was hard, as I said, and, while I have seen some brilliant use of Moodle and the like, my impression is that the majority (argue with me on this though) of LMS courses are functional repositories, with interactive and creative applications the exception rather than the norm. The scroll of death was a thing in 2005 and it is as much of a thing now. It also made me think of current ‘Marmitey’ positions folk are taking re: AI. Basically, AI (big and ill-defined as it usually is) has to come with nuance and understanding, so binary, entrenched, one-size-fits-all positions are unhelpful and, in my view, hard to rationalise and sustain.

The familiar LMS problem
Back to the LMS: from WebCT to Moodle and other common current systems, the underlying functionality has barely shifted (I mean from the perspective of your average teacher/lecturer or student). Many still say Moodle feels very 1990s (probably they mean early 2000s, but I suspect they, like me, find it hard to reconcile the idea that any year starting with a 2 could be a long time ago). Ultimately I think none of these systems offered a genuinely encouraging combination of interface and user experience and that is an issue that persists to this day. The legacy of those early design decisions lingers, and we are still working around them. People have been predicting the death of the VLE for years (including me) but it has not happened. When I first saw Microsoft Teams just before Covid, I thought: here’s the nail in the coffin. I was wrong again. Maybe being wrong about the end of the LMS is another running theme.

Will AI change the LMS story?
So what about AI powered integrations? Will they revolutionise how the LMS works? Will they be part of the reason for a shift away from them? Unlikely in either sense is my best guess. Everything I see now is about embellishments and shortcuts that feed into the existing structure. My old dream of a blank-sheet LMS revolution has faded. Thirty years of teaching and more than twenty years using LMSs suggest that this is one component of digital education that will not fade away. The tools will keep evolving, but the slow, steady thrum of the LMS endures in the background. I realise that I have finally predicted non-change, so don’t bet on that as I have been wrong quite a bit in the past. What I do know is that digital discussions using tools to support dialogic pedagogies have persisted, as have the issues related to them. ‘Only 10-20% of my students use the forums!’ I hear that still. But what I realised in 2004 and maintain to this day is that 10-20% is a significant embellishment for some and an alternative for others, so I stick with what I said back then in that sense at least. Oh, and lurking is a legit and fine thing for yet others!

One of the most wonderful things about the AI in Education course (so close to 15,000 participants!) is the forums. They add layers of interest that cannot be planned or produced. I estimate only 10-15% of participants post, but what a contribution they are making and it’s an enhancement that keeps me there and, I am convinced, adds real value to those not posting too.

*I’ll stick with LMS as this seems to be pretty ubiquitous these days, though I am aware of the distinctions, and when I wrote the piece about ‘WebCT’ the term VLE was very much the go-to.

AI and the pragmatics of curriculum change

Whilst some (many) academic staff and students voice valid and expansive concerns about the use of, or focus on, artificial intelligence in education, I find myself (finally, perhaps, and later in life) much more pragmatic. We hear the loud voices and I applaud many acts of resistance, but we cannot ignore the ‘explosive increase’ in AI use by students. It’s here, and that is one driver. More positive drivers to change might be visible trends and future predictions in the global employment landscape and the affordances in terms of data analytics and medical diagnostics (for example) that more widely defined AI promises. As I keep saying, this doesn’t mean we need to rush to embrace anything, nor does it imply that educators must become computer scientists overnight. But it does mean something has to give and, rather than (something else I have been saying for a long time) knee-jerk ‘everyone back in the exam halls’ type responses, it’s clear we need to move a wee bit faster in the name of credibility (of the education and the bits of paper we dish out at the end of it) as well as the value to the students of what we are teaching and, perhaps more controversially I suppose, how we teach and assess them.

Over the past months, I’ve been working with colleagues to think through what AI’s presence means for curriculum transformation. In discussions with colleagues at King’s most recently, three interconnected areas keep surfacing and this is my effort to set them out: 

Content and Disciplinary Shifts

We need to reflect not just on what we might add, but on what we can subtract or reweight. The core question becomes: How is AI reshaping knowledge and practice in this discipline, and how should our curricula respond?

This isn’t about inserting generic “AI in Society” modules everywhere. It’s about recognising discipline-specific shifts and preparing students to work with, and critically appraise and engage with, new tech, approaches, systems and ideas (and the impacts consequent on implementation). Update 11th June 2025: My colleague, Dr Charlotte Haberstroh, pointed out on reading this that there is an additional important dimension to this, and I agree. She suggests we need to find a way to enable students to question and make connections explicitly: ‘how does my disciplinary knowledge help me (the student) make sense of what’s happening, how can it inform my analysis of causes/consequences of how AI is being embedded into our society (within the part that I aim to contribute to in the future / participate in as responsible citizen)’. Examples she suggested: in Law it could be around how it alters the meaning of intellectual property; in HR it’s going to be about AI replacing workers (or not) and/or the business model of the tech firms driving these changes; in History it’s perhaps how we have adopted technologies in the past and how that helps us understand what we are doing now.

Some additional examples of how AI as content crosses all disciplines: 

  • Law: AI-powered legal research and contract review tools (e.g. Harvey) are changing the role of administration in law firms and the roles of junior solicitors.
  • Medicine: Diagnostic imaging is increasingly supported by machine learning, shifting the emphasis away from manual pattern recognition towards interpretation, communication, and ethical judgement.
  • Geography: Environmental modelling uses real-time AI data analytics, reshaping how students understand climate systems.
  • History and Linguistics: Machine learning is enabling large-scale text and language analysis, accelerating discovery while raising questions about authorship, interpretation, and cultural nuance.

Assessment Integrity and Innovation

Much of the current debate focuses on the security of assessment in the age of AI. That matters a lot, of course, but if it drives all our thinking (and I feel it is still the dominant narrative in HE spaces), we will double down on distrust and always default to suspicion and restriction, rather than our starting point being designing for creativity, authenticity and inclusivity.

The first shift needs to be moving from ‘how do we catch cheating?’ to ‘where and how can we “catch” learning?’ as well as ‘how do we design assessments that AI can’t meaningfully complete without student learning?’ Does this mean redefining ‘assessment’ beyond the narrow ‘evaluative’ definition we tend to elevate? Probably, yes.

Risk is real; inappropriate, foolish, unacceptable, even malicious use of AI is a real thing too. So robustness by design is important too: iterative, multi-stage tasks; oral components; personalised data sets; critical reflection. All are possible without reverting to closed-book exams. These have a place but are no panacea.

Examples of AI-shaping assessment and design:

AI Integration & Critical Literacies

Students need access to AI tools; they need choice (this will be an increasingly big hurdle to navigate); and they need structured opportunities to critique and reflect on their use. This means building critical AI literacy into our programmes or, minimally, the extra-curricular space, not as a bolt-on, but as an embedded activity. Given what I set out above, this will need nuancing to a disciplinary context. It’s happening in pockets but, I would argue, needs more investment and an upping of the pace - given the ongoing crises in UK HE (if not globally), it’s easy to see why this may not be seen as a priority.

I think we need to do the following for all students. What do you think? 

  • Critical AI literacy (what it is, how it works (and where it doesn’t), all the mess it connotes)
  • Aligned with better Information/digital literacy (how to verify, attribute, trace and reflect on outputs- and triangulate)
  • Assessment and feedback literacy (how to judge what’s been learned, and how it’s being measured)

Some examples of where the discipline needs nuance and separate focus and why it is so complex: 

  • English Literature/ History/ Politics: Is the essay dead? Students can prompt ChatGPT to generate essays, but how are they generating passable essays when so much of the critique is about the banality of homogenised outputs and lack of anything resembling critical depth? How can we (in a context where anonymous submission is the default) maintain value in something deemed so utterly central to humanities and social science study?
  • Medical and Nursing education: I often feel observed clinical examinations hold a potential template for wider adoption in non-medical disciplines. And AI simulation tools offer lifelike decision-making environments, so we are seeing increasing exploration of the potentials here: the literacy lies in knowing what AI can support and what it cannot do, and how to bridge that gap. Who learns this? Where is the time to do it? How are decisions made about which tools to trial or purchase?

Where to Start: Prompting thoughtful change

These three areas are best explored collectively, in programme teams, curriculum working groups or assessment review/ module review teams. I’d suggest that, to begin with, these teams need to discuss the following and then move on from there.

  1. Where have you designed assessments that acknowledge AI in terms of the content taught? What might you need to modify looking ahead?
    (e.g. updated disciplinary knowledge, methodological changes, professional practice)
  2. Have you modified assessments where vulnerability is a concern (e.g. risk of generative AI substitution, over-reliance on closed tasks, integrity challenges)? Have you drawn on positive reasons for change (e.g. scholarship in effective assessment design)?
  3. Have you designed or planned assessments that incorporate, develop or even fully embed AI use?
    (e.g. requiring students to use, reflect on or critique AI outputs as part of their task)

I do not think this is AI evangelism, though I accept that some will see it as such, because I believe that engagement is necessary and actually an ethical responsibility to our students. That’s a tough sell when some of those students are decrying anything with ‘AI’ in it as inherently and solely evil. I’m not trying to win hearts and minds to anything other than the idea that these tech need much broader definition and understanding, and from that we may critique and evolve.

Rewilding higher education: weeds and wildflowers

Connie Gillies and Martin Compton

It was a privilege to offer reflections at Professor Cathy Elliott’s inaugural lecture, Rewilding the University, recently. Her lecture was more than a celebration of an academic career: it was also a call to action. A provocation. A gentle but insistent reminder that education (and nature and the world!) does not need to look the way it does now. A packed lecture hall listened intently to Cathy’s arguments, ideas and jokes: it was a tough act to follow. Cathy said she hardly ever lectures, but a skilful lecture is a thing of joy, utterly compelling, and we were lucky to witness one. Here we share some reflections on Cathy’s ideas and how they have helped shape aspects of our own.

Cathy made clear that rewilding is not a metaphor of neglect or abandonment, but of restoration, connection and flourishing. It recognises that overly managed systems, whether ecological or educational, can become depleted, homogenous and fragile. In both cases, monoculture and rigidity are warning signs: what Cathy referred to as ‘command and control’. The invitation we heard was to value and support diversity, likewise in both nature and education, to value what is often dismissed, and to allow for the possibility of unpredictable, unmeasurable growth.

This vision has shaped how we think about education and how we’ve each worked together with Cathy. Our own relationships, as a fellow academic (with a similarly unconventional path to current roles) and as a student (who had been disillusioned by educational experiences up to the point of encountering Cathy’s course), and now as authors and collaborators, are a component of the network that Connie has described as mycelial: like subterranean fungal connections, nourishing ideas, allowing knowledge to travel, and making future growth possible. Like mycelium in forest ecosystems, these relationships and ideas remain largely invisible to the untrained eye, but they are foundational. They remind us that learning does not happen in isolation, but in intricate, collaborative webs.

When students sign up for Cathy’s Politics of Nature class, they often don’t fully grasp the lasting impact it will have on them. A friend once told Connie, “A Cathy Elliott module will change your life,” and while the statement may seem grand, it’s not far from the truth. For many, this course didn’t just teach content; it reshaped our approach to thinking, learning, and even our careers. Cathy’s teaching blends critical rigour with intellectual play, making the class a rare space where students can be both creatively curious and academically rigorous. Most importantly, she empowers students to discover their unique intellectual passions, encouraging them to contribute perspectives no one else could, simply because they aren’t anyone else.

Education, when rewilded, becomes an ecosystem. A space where mutual dependence is generative. A space where difference is not simply tolerated but required. It is through this lens that we’ve come to understand projects like ungrading, student co-authorship, and the politics of belonging, not as reforms, but as regenerative acts. These are not surface-level interventions, but shifts in the soil.

One of the most notable aspects of Cathy’s work is her broad intellectual curiosity. She’s not confined to any one field of study — from politics and nature to democracy, development, gender, race, disability and sexuality, Cathy’s academic interests are as diverse as they are profound. In an academic world that often pushes students toward ever-narrower specialisation, Cathy’s approach encourages students to break free from this limitation.

Cathy’s teaching has long enacted this ethos. She nurtures students not through control but through trust. Her pedagogy invites learners to bring their whole selves, to make connections across disciplinary and personal boundaries, and to treat knowledge as something to be inhabited, not merely acquired. She encourages risk, slowness, reflection, and relationality which are qualities too often sidelined in institutional discourses of impact, efficiency and performance.

The dandelion is another metaphor Cathy draws on frequently and one we were also drawn to in our appreciation. Often dismissed as a weed, the dandelion (the French is ‘pissenlit’, which really does say everything about its reputation) is in fact a profoundly restorative plant. It detoxifies soil, strengthens roots and nourishes ecosystems. It grows where it is not wanted and flourishes nonetheless. To children, it is a source of wonder: blown seeds, floating wishes, transformation, softness at one time, vibrant yellow before. But to adults, it is a nuisance to be removed. Cathy’s work, like the dandelion, asks us to reconsider who gets to decide what counts as valuable, as beautiful, as worthy. We need to ask ourselves to what extent we have constructed educational systems that we want to be like perfect lawns: predictable, clean, neat, each blade of grass much like the others. Cathy says: ‘don’t cut the grass and plant wildflowers instead!’ This is a literal and metaphorical phrase we can get behind!

This ethos extends into her work on gender, race and sexuality, which consistently challenges the structures that exclude some or may diminish the presence or experience of others. In classrooms, in curricula, in institutional policy, she reminds us in her work that exclusion is never accidental; it is designed. But that also gives us pause for positive reflection: what is designed can be redesigned.

What we’ve come to understand through Cathy’s influence, and through our ongoing partnership, is that rewilding higher education is not a metaphorical indulgence, it is a pedagogical imperative. It calls us to rethink the terms of participation, the assumptions of merit, the rituals of assessment, and the conditions under which learning takes place. It also calls for attention to scale: recognising that large transformations begin with small shifts, relationships and new practices. 

It felt fitting, then, that the very day after Cathy’s lecture, a special issue of the Journal of Learning Development in Higher Education was published. Co-edited by one of us and containing a piece co-authored by the other, the issue is seeded with many of these same ideas. It features students and a Vice Chancellor, early career academics and emeritus professors, reimaginings of assessment, and reflections on academic community that echo and extend Cathy’s provocations. The special issue is a timely continuation of many of the conversations we have had with Cathy, who, unsurprisingly, also has a paper in the special issue and was part of the King’s/ UCL editorial collective.

We both have very different careers and are at very different ends of them! But we share the sense that the rigid, often foreboding and frequently distrustful academy could be rewilded. It doesn’t have to be this way; more importantly, it could be otherwise.

Meme-ingful reflections on AI, teaching and assessment

I did a session earlier today for the RAISE special interest group on AI. I thought I’d have a bit of fun with it: 1. because I was originally invited by Dr. Tadhg Blommerde (and Dr. Amarpreet Kaur), who likes a heterodox approach (see his YouTube channel here) and 2. because I was preparing on Friday evening and my daughter was looking over my shoulder and suggesting more and more memes. Anyway, I was just reading the chat back and note my former colleague Steve asked: “Is the rest of the sector really short of memes these days now that Martin has them all?” I felt guilty so decided to share them back.

My point: There’s a danger we assume students will invariably cheat if given the chance. This meme challenges educators to reconsider what they define as cheating and encourages transparent, explicit dialogue around academic integrity. What will we lose if we assume all students are all about pulling a fast one?

My daughter (aged 13) suggested this one. How teachers view ChatGPT output: homogenised, overly polished essays lacking individuality. My daughter used the ‘who will be the next contestant on The Bachelor’ (some reality show, I am told) image to illustrate how teachers confidently claim they can spot AI-generated assignments because “they all look the same.” My point: I think this highlights early scepticism about AI-produced writing, but we should, as educators, consider the extent to which these tools have evolved beyond initial assumptions and remind our students (and ourselves) that imperfections and quirks can define a style. Just ask anyone reading one of my metaphor-stretched, overly complex sentences. Perhaps, for too long, we have over-valued grammatical accuracy and formulaic writing?

My point: It’s not just about AI detectors of course. It’s more that this is an arms race we can’t win. If we see big tech as our enemy then fighting back with more of their big tech makes no sense. If we see students as the enemy then we have a much bigger problem. Collective punishment and starting with an assumption of guilt are hugely problematic in schools/ unis, much as they are in life and tyrannical societies in general. When it comes to revisiting academic integrity I am keen to discuss what it is we are protecting. I am also very much drawn to Ellis and Murdoch’s ‘responsive regulation’ approach. I don’t think I’m quite on the same page regarding automated detection, but I do agree that applying (and resourcing) deserved sanctions for the ‘criminals’ (wilful cheats), along with efforts to widen self-regulation and move as many students as possible from carelessness (or chancer behaviours) to self-regulation, is critical.

Pretty obvious I guess but my point is this: We also need to resist assumptions that all students prioritise grades over genuine learning and creativity. Yes, there are those who are wilfully trying to find the easiest path to the piece of paper that confirms a grade or a degree or whatever. Yes, there are those whose heads may be turned by the promise of a corner-cutting opportunity. But there are SO many more who want to learn, who are anxious because they know others who are being accused of using these tech inappropriately (because, for example, they use ‘big’ words… really, this has happened). ALSO, we need to challenge the structural features that define education in terms of employability and value. I know how to use ChatGPT but I am writing this. Why am I bothering writing? Because I like it. Because - I hope - my writing, even when convoluted (much like this sentence), is more compelling. Because it’s more gratifying than the thing I’m supposed to be doing. Above all, for me, it’s because it actually helps me articulate my thoughts better. We must continue valuing intrinsic motivation and the joy students derive from learning and creating independently. But more than that: we need to face up to the systemic issues that drive more students towards corner cutting or wilful cheating. By the way, I often use generated text in things I write. All the alt text in these images is AI generated (then approved / edited by me) for example.

This leads me to the next one. I mean, I do use AI every day for translation, transcription, information management, easing access to information, reformatting, providing alternative media, writing alt text… Many don’t, I know. Many refuse; I know this too. But we are way into majority territory here I think. Students are recognising this real (or imagined) hypocrisy. The only really valid response to this I have heard goes something like: ‘I can use it because I am educated to X level; first year undergrads do not have the critical awareness or developed voice to make an informed choice’. I mean, I think that may be the case to an extent or in some cases, but it reminds me a bit of the ‘pen licences’ my daughter’s primary school issued: you get one when you prove you can use a pencil first (little Timmy, bless him, is still on crayons). Have you seen the data on student routine use of generative AI? It elevates the tool to some sort of next-level implement, but is it even that? I think I could make a better case for normalisation and acceptance of a future where human / AI hybrid writing is just how it is done (as per Dr Sarah Eaton’s work - note the five other elements in the tenets).

My point: The narratives around essential changes we need to implement ‘because of AI’ present a false dichotomy between reverting to traditional exam halls or relying solely on AI detection tools. Neither option adequately addresses modern academic integrity challenges. Exams can be as problematic and inequitable as AI detection. It is not a binary choice. There are other things that can be done. I’ll leave this one hanging a bit as it overlaps with the next one.

My point: We need to critically re-evaluate how and why essays are used in assessment. We can maintain the essay but evolve its form to better reflect authentic, inclusive and meaningful assessments rather than relying on traditional, formulaic, high-stakes versions. Anyway, I (with Dr Claire Gordon) have said it before: we already have a manifesto, and Dr Alicia Syskja takes the argument to the next level here.

Really, though, you should have been there; we had a great time.

Old problem, new era

“Empirical studies suggest that a majority of students cheat. Longitudinal studies over the past six decades have found that about 65–87% of college students in America have admitted to at least one form of nine types of cheating at some point during their college studies”

(Yu et al., 2018)

Shocking? Yes. But also reassuring in its own way. When you are presented with something like that from 2018 (i.e. pre-ChatGPT) you realise that this is not a newly massive issue; it’s the same issue with a different aspect, lens or vehicle. Cheating in higher education has always existed, but I do acknowledge that generative AI has illuminated it with an intensity that makes me reach for the eclipse goggles. There are those who argue that essay mills and inappropriate third-party support were phenomena that we had inadequately addressed as a sector for a long time. LLMs have somehow opened a fissure in the integrity debate so large that suddenly everyone wants to do something about it. It has become so much more complex because of that, but that visibility could also be seen positively (I may be reaching but I genuinely think there is mileage in this), not least because:

1. We are actually talking about it seriously. 

2. It may give us leverage to effect long needed changes. 

The common narratives I hear are ‘where there’s a will, there’s a way’ and ChatGPT makes the ‘way’ easier. The problem though, in my view, is that just because the ‘way’ is easier does not mean the ‘will’ will necessarily increase. Assuming all students will cheat does nothing to build bridges, establish trust or provide an environment where the sort of essential mutual respect necessary for transparent and honest working can flourish. You might point to the stat at the top of this page and say we are WAY past the need to keep measuring will! Exams, as I’ve argued before, are no panacea, given the long-standing issues of authenticity and inclusivity they bring (as well as being the place where students have shown themselves to be most creative in their subversion techniques!).

In contrast to this, study after study is finding that students are increasingly anxious about being accused of cheating when that was never their intention. They report unclear and sometimes contradictory guidance, leaving them uncertain about what is and isn’t acceptable. A compounding issue is the lack of consistency in how cheating is defined: it varies significantly between institutions, disciplines and even individual lecturers. I often ask colleagues whether certain scenarios constitute cheating, deliberately using examples involving marginalised students to highlight the inconsistencies. Is it ok to get structural, content or proofreading suggestions from your family? How does your access to human support differ if you are a first generation, neurodivergent student studying in a new language and country? Policies usually say “no”, but fooling ourselves that this sort of ‘cheating’ is not routine would be hard, and evidencing it harder still. The boundaries are blurred, and the lack of consensus only adds to the confusion.

To help my thinking on this I looked again at some articles on cheating over time (going back to 1941!) that I had put in a folder and badly labelled, as per usual, and selected a few to give me a sense of the what and how as well as the why, and to provide a baseline to inform the context around current assumptions about cheating. Yu et al. (2018) use a long-established categorisation of types of cheating, with a modification to acknowledge unauthorised digital assistance:

  1. Copying sentences without citation.
  2. Padding a bibliography with unused sources.
  3. Using published materials without attribution.
  4. Accessing exam questions or answers in advance.
  5. Collaborating on homework without permission.
  6. Submitting work done by others.
  7. Giving answers to others during an exam.
  8. Copying from another student in an exam.
  9. Using unauthorised materials in an exam.

The what and how question reveals plenty of expected ways of cheating, especially in exams, but the literature also notes where teachers/lecturers are surprised by the extent and the creativity involved. Four broad types emerge:

  1. Plagiarism in various forms, from self-plagiarism to copying peers to deliberately inappropriate citation practices.
  2. Homework and assignment cheating such as copying work, unauthorised collaboration, or failing to contribute fairly.
  3. Other academic dishonesty such as falsifying bibliographies, influencing grading or contract cheating.
  4. Cheating in exams.

The amount of exam-based cheating reported should really challenge assumptions about the security of exams at the very least and remind us that they are no panacea, whether we see this issue through an ongoing or a ChatGPT lens. Stevens and Stevens (1987) in particular share some great pre-internet digital ingenuity and Simkin and McLeod (2010) show how the internet broadened the scope and potential. These are some of the types reported over time:

  1. Using unauthorised materials.
  2. Obtaining exam information in advance.
  3. Copying from other students.
  4. Providing answers to other students.
  5. Using technology to cheat (microcassettes, pre-storing data in calculators, mobile phones; not mentioned, but now apparently a phenomenon, is the use of bone conduction tech in glasses and/or smart glasses).
  6. Using encoded materials (rolled up pieces of paper for example).
  7. Hiring a surrogate to take an exam.
  8. Changing answers after scoring (this one in Drake, 1941).
  9. Collaborating during an exam without permission.

These are the main reasons for cheating across the decades I could identify (from across all sources cited at the end):

  1. Difficulty of the work. When students are on the wrong course (I’m sure we can think of many reasons why this might occur), or when teaching is inadequate or insufficiently differentiated.
  2. Pressure to succeed. ‘Success’ when seen as the principal goal can subdue the conscience.
  3. Laziness. This is probably top of many academics’ assumptions and it is there in the research, but it is also worth considering what else competes for attention and time, and how ‘I can’t be bothered’ may mask other issues, even in self-reporting.
  4. Perception that cheating is widespread. If students feel others are doing it and getting away with it, cheating increases.
  5. Low risk of getting caught.
  6. Sense of injustice: systemic approaches and structural inequalities, both real and perceived, can be seen as a valid justification.
  7. External factors, such as evident cheating in wider society. A fascinating example of this was suggested to me by an academic who was trained in Soviet-dominated Eastern Europe, who said cheating was (and remains) a marker of subversion and so carries its own respectability.
  8. Lack of understanding of what is and is not allowed - students report they have not been taught this, and degrees of cheating are blurred by some of the other factors here - when does collaboration become collusion?
  9. Cultural influences. Different norms and expectations can create issues and this comes back to my point about individualised (or contextualised) definitions of what is and is not appropriate. 
  10. My own experience, over 30 years, of dealing with plagiarism cases often reveals very powerful, often traumatic, experiences that lead students to act in ways that are perceived as cheating.

For each it’s worth asking yourself:

How much of the responsibility for this is on the student, and how much on the teacher/ lecturer and/or institution (or even society)?

I suspect that the truly wilful, utterly cynical students are the ones least likely to self-declare and least likely to get caught. This furthers my own discomfort about the mechanisms we rely (too heavily?) on to judge integrity.

This skim through really did make clear to me that cheating and plagiarism are not the simple concepts that many say they are. Also, cheating in exams is a much bigger thing than we might imagine. The reasons for cheating are where we need to focus, I think; less so the ‘how’, as that becomes a battleground and further entrenches ‘us and them’ conceptualisations. When designing curricula and assessments, the unavoidable truth is that we need to do better: by moving away from one-size-fits-all approaches, by realising that cultural, social and cognitive differences will impact many of the ‘whys’, and by holding ourselves to account when we create or exacerbate structural factors that broaden the likelihood of cheating.

I am definitely NOT saying give wilful cheaters a free pass, but all the work many universities are doing on assessment reform needs to be seen through a much longer lens than the generative AI one. To focus only on that is to lose sight of the wider and longer issue. We DO have the capacity to change things for the better, but that also means that many of us will be compelled (in a tense, under-threat landscape) to learn more about how to challenge conventions and even invest much more time in programme-level, iterative, AI-cognisant teaching and assessment practices. Inevitably the conversations will start with the narrow and hyped and immediate manifestations of inappropriate AI use, but let’s celebrate this as leverage; as a catalyst. We’d do well, at the very least, to reconsider how we define cheating and why we consider some incredibly common behaviours as cheating (is it collusion or collaboration, for example, and what about proofreading help from third parties?). Beyond that, we should be having serious discussions about augmentation and hybridity in writing: what counts as acceptable support? How does that differ according to context and discipline? It will raise questions about the extent to which writing is the dominant assessment medium, about authenticity in assessment and about the rationale and perceived value of anonymity.

It’s interesting to read how, over 80 years ago (Drake, 1941), many of the behaviours we witness today in both students and their teachers already had their parallels: strict disciplinarian responses, or ignoring it because ‘they’re only harming themselves’, were common. In other words, the underlying causes were not being addressed. To finish, I think this sets out the challenge confronting us well:

“Teachers in general, and college professors in particular, will not be enthusiastic about proposed changes. They are opposed to changes of any sort that may interfere with long-established routines - and examinations are a part of the hoary tradition of the academic past”

(Drake, 1941, p.420)

Drake, C. A. (1941). Why students cheat. Journal of Higher Education, 12(5).

Gallant, T. B., & Drinan, P. (2006). Organizational theory and student cheating: Explanation, responses, and strategies. The Journal of Higher Education, 77(5), 839-860. https://www.jstor.org/stable/3838789

Hutton, P. A. (2006). Understanding student cheating and what educators can do about it. College Teaching, 54(1), 171-176. https://www.jstor.org/stable/27559254

Miles, P., et al. (2022). Why students cheat. The Journal of Undergraduate Neuroscience Education (JUNE), 20(2), A150-A160.

Rettinger, D. A., & Kramer, Y. (2009). Situational and individual factors associated with academic dishonesty. Research in Higher Education, 50(3), 293-313. https://doi.org/10.1007/s11162-008-9116-5

Simkin, M. G., & McLeod, A. (2010). Why do college students cheat? Journal of Business Ethics, 94, 441-453. https://doi.org/10.1007/s10551-009-0275-x

Stevens, G. E., & Stevens, F. W. (1987). Ethical inclinations of tomorrow’s managers revisited: How and why students cheat. Journal of Education for Business, 63(1), 24-29. https://doi.org/10.1080/08832323.1987.10117269

Yu, H., Glanzer, P. L., Johnson, B. R., Sriram, R., & Moore, B. (2018). Why college students cheat: A conceptual model of five factors. The Review of Higher Education, 41(4), 549-576. https://doi.org/10.1353/rhe.2018.0025

CPD for critical AI literacy: do NOT click here.

In 2018, Timos Almpanis and I co-wrote an article exploring issues with Continuous Professional Development (CPD) in relation to Technology Enhanced Learning (TEL). The article, which we published while working together at Greenwich (in Compass: Journal of Learning and Teaching), highlighted a persistent challenge: despite substantial investment in TEL, enthusiasm for it and use among educators remained inconsistent at best. While students increasingly expect technology to enhance their learning, and there is/ was evidence to support its potential to improve engagement and outcomes, the traditional transmissive CPD models by which teaching academics were introduced to TEL and supported in it could undermine their own purpose. Focusing on technology and systems, as well as using poor (and non-modelling) pedagogy, often gave/ gives a sense of compliance over pedagogic improvement.

Because we are both a bit contrary and subversive we commissioned an undergraduate student (Christina Chitoroaga) to illustrate our arguments with some cartoons which I am duplicating here (I think I am allowed to do that?):

We argued that TEL-focused CPD should prioritise personalised and pedagogy-focused approaches over one-size-fits-all training sessions. Effective CPD that acknowledges need, reflects evidence-informed pedagogic approaches and empowers educators by offering choice, flexibility and relevance will also enable them to explore and apply tools that suit their specific teaching contexts and pedagogical needs. By shifting the focus away from the technology itself and towards its purpose in enhancing learning, we can foster greater engagement and creativity among academic staff. This was exactly the approach I tried to apply when rolling out Mentimeter (a student response system to support increasing engagement in and out of class).

I was reminded of this article recently (because of the ‘click here; click there’ cartoon) when a colleague expressed frustration about a common issue they observed: lecturers teaching ‘regular’ students (I always struggle with this framing as most of my ‘students’ are my colleagues - we need a name for that! I will do a poll - got totally distracted by that but it’s done now) how to use software using a “follow me as I click here and there” method. Given that the “follow me as I click” is still a thing, perhaps it is time to adopt a more assertive and directive approach. Instead of simply providing opportunities to explore better practices, we may need to be clearer in saying: “Do not do this.” I mean, I do not want to be the pedagogy police but, while there is no absolute right way, there are some wrong ways, right? Also, we might want to think about what this means in terms of the AI elephant in every bloomin’ classroom.

The deluge of AI tools and emerging uses of these tech (willingly and unwillingly, appropriately and inappropriately) means the need for effective upskilling is even more urgent. However we support skill development and thinking time, we need of course to realise it requires moving beyond the “click here, click there” model. In my view (and I am aware this is contested) educators and students need to experiment with AI tools in real-world contexts, gaining experience in how AI is impacting curricula, academic use and, potentially, pedagogic practices. The many valid and pressing reasons why teachers might resist or reject engaging with AI tools - workload, ethical implications, data privacy, copyright, eye-watering environmental impacts or even concern about being replaced by technology - are significant barriers to adoption. But adoption is not my goal; critical engagement is. The conflation of the two in the minds of my colleagues is, I think, a powerful impediment before I even get a chance to bore them to death with a ‘click here; click there’. In fact, there’s no getting away from the necessity of empathy and a supportive approach, one that acknowledges these fears while providing space for dialogue and both critical AND creative applications of responsibly used AI tools. Alison Gilmour and I wrote about this too! It’s like all my work actually coheres!

Whatever the approach, CPD cannot be a one-size-fits-all solution, nor can it rely on prescriptive ‘click here, click there’ methods. It must be compassionate and dialogic, enabling experimentation across a spectrum of enthusiasm, from evangelical to steadfastly resistant. While I have prioritised ‘come and play’, ‘let’s discuss’, or ‘did you know you can…’ events, I recognise the need for more structured opportunities to clarify these underpinning values before events begin. If I can find a way to manage such a shift, it will help align the CPD with meaningful, exploratory engagement that puts pedagogy and dialogue at the heart of our ongoing efforts to grow critical AI literacy in a productive, positive way that offers something to everyone, wherever they sit on the parallel spectrums of AI skills and beliefs.

Post script: some time ago I wrote on the WONKHE blog about growing AI literacy and this coincided with the launch of the GEN AI in HE MOOC. We’re working on an expanded version, broadening the scope of AI beyond the utterly divisive ‘generative’ as well as widening the scope to other sectors of education. Release due in May. It’ll be free to access.

Navigating the AI Landscape in HE: Six Opinions

Read my post below or listen to AI me read it. Have to say, I sound very well spoken in this video. To my ears it doesn’t sound much like me. For those that know me: what do you think?

As we attempt to navigate uncharted (as well as expanding and changing) landscapes of artificial intelligence in higher education, it makes sense to reflect on our approaches and understanding. We’ve done ‘headless chicken’ mode; we’ve been in reactive mode. Maybe we can start to take control of the narratives, even if what is ahead of us is disruptive, fast-moving and fraught with tensions. Here are six perspectives from me that I believe will help us move beyond the hype and get on with the engagement that is increasingly pressing but, thus far, inconsistent at best.

1. AI means whatever people think it means

In educational circles, when we discuss AI, we’re primarily referring to generative tools like ChatGPT, DALL-E, or Copilot. While computer scientists might argue - with a ton of justification - that this is a narrow definition, it’s the reality of how most educators and students understand and engage with AI. We mustn’t get bogged down in semantics; instead, we should focus on the practical implications of these tools in our teaching and learning environments whilst taking time to widen some of those definitions, especially when talking with students. Interrogating what we mean when we say ‘AI’ is a great starting point for these discussions, in fact.

2. AI challenges our identities as educators

The rapid evolution of AI is forcing us to reconsider our roles as educators. Whether you buy into the traditional framing of higher education this way or not, we’re no longer the sole gatekeepers of knowledge, dispensing wisdom from the lectern. However much we might want to advocate for notions of co-creation or discovery learning, the lecturer/ teacher as expert is a key component of many of our teacher professional identities. Instead, we need to acknowledge that we’re all navigating this new landscape together – staff and students alike. This shift requires humility and a willingness to learn alongside our students. The alternatives? Fake it until you make it? Bury your head? Neither is viable or sustainable. Likewise, this is not something that is ‘someone else’s job’. HE is being menaced from many corners and workload is one of the many pressures - but I don’t see a beneficial path that does not necessitate engagement. If I’m right then something needs to give. Or be made less burdensome.

3. Engage, not embrace

I’m not really a hugger, tbh. My family? Yes. A cute puppy? Probably. Friends? Awkwardly at best. A disruptive tech? Of course not. While some advocate for ’embracing’ AI, I prefer the term ‘engage’. We needn’t love these technologies or accept them unquestioningly, but we do need to interact with them critically and thoughtfully. Rejection or outright banning is increasingly unsupportable, despite the many oft-cited issues. The sooner we at least entertain the possibilities that some of our assumptions about the nature of writing and what constitutes cheating and how we best judge achievement may need review the better.

4. AI-proofing is a fool’s errand

Attempts to create ‘AI-proof’ assessments or to reliably detect AI-generated content are likely to be futile. The pace of technological advancement means that any barriers we create will swiftly be overcome. Many have written on the unreliability and inherent biases of detection tools and the promotion of flawed proctoring and surveillance tools only deepens the trust divide between staff and students that is already strained to its limit.  Instead, we should focus on developing better, more authentic forms of assessment that prioritise critical thinking and application of knowledge. A lot of people have said this already, so we need to build a bank of practical, meaningful approaches, draw on the (extensive) existing scholarship and, in so doing, find ways to better share things that address some of the concerns that are not: ‘Eek, everyone do exams again!’

5. We need dedicated AI champions and leadership

To effectively integrate AI into our educational practices, we need people at all levels of our institutions who can take responsibility for guiding innovations in assessment and addressing colleagues’ questions. This requires significant time allocation and can’t be achieved through goodwill alone. Local level leadership and engagement (again with dedicated time and resource) is needed to complement central policy and guidance. This is especially true of multi-faculty institutions like my own. There’s only so much you can generalise. The problem of course is that whilst local agency is imperative, too many people do not yet have enough understanding to make fully informed decisions.  

6. Find a personal use for AI

To truly understand the potential and limitations of AI, it’s valuable to find ways to develop understanding with personal engagement – one way to do this is to incorporate it into your own workflows. Whether it’s using AI to summarise meeting or supervision notes, create thumbnails for videos, or transform lecture notes into coherent summaries, personal engagement with these tools can help demystify them and reveal practical benefits for yourself and for your students. My current focus is on how generative AI can open doors for neurodivergent students and those with disabilities or, in fact, any student marginalised by the structures and systems that are slow to change and privilege the few.

Navigating the Path of Innovation: Dr. Mandeep Gill Sagoo’s Journey in AI-Enhanced Education

Dr. Mandeep Gill Sagoo, a Senior Lecturer in Anatomy at King’s College London, is actively engaged in leveraging artificial intelligence (AI) to enhance education and research. Her work with AI is concentrated on three primary projects that integrate AI to address diverse challenges in the academic and clinical settings. The following summary (and title and image, with a few tweaks from me) was synthesised and generated in ChatGPT using the transcript of a fireside chat with Martin Compton from King’s Academy. The whole conversation can be listened to here.

AI generated image of a path winding through trees in sunlight and shadow
  1. Animated Videos on Cultural Competency and Microaggression: Dr. Sagoo has led a cross-faculty project aimed at creating animated, thought-provoking videos that address microaggressions in clinical and academic environments. This initiative, funded by the race equity and inclusive education fund, involved collaboration with students from various faculties. The videos, designed using AI for imagery and backdrops, serve as educational tools to raise awareness about unconscious bias and microaggression. They are intended for staff and student training at King’s College London and have been utilised in international collaborations. Outputs will be disseminated later in the year.
  2. AI-Powered Question Generator and Progress Tracker: Co-leading with a second-year medical student and working across faculties with a number of others, Dr. Sagoo received a college teaching fund award to develop this project, which is focused on creating an AI system that generates single best answer questions for preclinical students. The system allows students to upload their notes, and the AI generates questions, tracks their progress, and monitors the quality of the questions. This project aims to refine ChatGPT to tailor it for educational purposes, ensuring the questions are relevant and of high quality.
  3. Generating Marking Rubrics from Marking Schemes: Dr. Sagoo has explored the use of AI to transform marking schemes into detailed marking rubrics. This project emerged from a workshop and aims to simplify the creation of rubrics, which are essential for clear, consistent, and fair assessment. By inputting existing marking schemes into an AI system, she has been able to generate comprehensive rubrics that delineate the levels of performance expected from students. This project not only streamlines the assessment process but also enhances the clarity and effectiveness of feedback provided to students.
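
To make that third project concrete, here is a minimal, hypothetical Python sketch of the general pattern: an LLM call that expands a marking scheme into a criterion-by-level rubric. To be clear, this is not Dr. Sagoo’s actual code; the model name, prompt wording and function are my own illustrative assumptions, and any real version would need a human edit of the output.

```python
# Hypothetical sketch only: expanding a marking scheme into a draft rubric with an LLM.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def draft_rubric(marking_scheme: str, levels: list[str]) -> str:
    """Ask the model to turn a marking scheme into a criterion-by-level rubric."""
    prompt = (
        "You are helping a lecturer convert a marking scheme into a marking rubric.\n"
        f"Performance levels: {', '.join(levels)}.\n"
        "For each criterion in the scheme, describe what student work looks like at "
        "each level. Return a plain-text table: one row per criterion, one column per level.\n\n"
        f"Marking scheme:\n{marking_scheme}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    scheme = (
        "Criterion 1: accuracy of anatomical terminology (30%). "
        "Criterion 2: clinical reasoning (40%). "
        "Criterion 3: communication (30%)."
    )
    print(draft_rubric(scheme, ["Fail", "Pass", "Merit", "Distinction"]))
```

The point of the sketch is the division of labour: the AI drafts the level descriptors quickly, while the lecturer retains the judgement about whether they are clear, consistent and fair.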

Dr. Sagoo’s work exemplifies a proactive approach to incorporating AI in education, demonstrating its potential to foster innovation, enhance learning, and streamline administrative processes. Her projects are characterised by a strong emphasis on collaboration, both with students and colleagues, reflecting a commitment to co-creation and the sharing of expertise in the pursuit of educational excellence.

Contact Mandeep

College Teaching Fund: AI Projects- A review of the review by Chris Ince

On Wednesday I attended the mid-point event of the KCL College Teaching Fund projects – each group has been awarded some funding (up to £10,000, though some came in with far smaller budgets) to do more than speculate on the possibility of using AI within their discipline and teaching: to carry out a research project around design and implementation.

Each team had one slide and three minutes to give updates on their progress so far, with Martin acting as compere and facilitator. I started to take notes so that I could possibly share ideas with the faculty that I support (and part-way through thought that I perhaps should have recorded the session and used an AI to summarise each project), but it was fascinating to see links between projects in completely different fields. Some connections and thoughts before each project’s progress so far:

  • The work involving students was carried out in many ways, but pleasingly many projects were presented by student researchers, who had either been part of the initial project bid or who had been employed using CTF funds. Even if it is just through being surveyed or trialling tools, students are involved at all levels of this work, as they should be.
  • Several projects opened with scoping existing student use of gAI in their academic lives and work. This has to be taken with a pinch of salt, as it requires an element of honesty, but King’s has been clear that gAI is not prohibited so long as it is acknowledged (and allowed at a local level). What is interesting is that scoping consistently found that students did not seem to be using gAI as much as one might think (about a third of students); however, their use has been growing throughout the projects and the academic year as they are taught how to use it.
  • That being said, several projects identify that students are sceptical of the usefulness of gAI to them, and in some cases that scepticism grows through the project. In some ways this is quite pleasing, as they begin to see gAI not as a panacea, but as a tool. They’re identifying what it can and can’t do, and where it is and isn’t useful to them. We’re teaching about something (or facilitating), and they’re learning.
  • Training AIs and chatbots to assist in specific and complex tasks crops up in a number of projects, and they’re trialling some very different methods for this. Some are external, some are developed and then shared with students, and some give students what they need to train them themselves. That there are so many approaches is evidence of exactly why this kind of networking is useful.
  • There’s a frequent, somewhat patronising perception that young people know more about a technology than older people. It’s always more complex than that, but the involvement of students in CTF projects has fostered some sharing of knowledge, as academic staff have seen what students can do with gAI. However, it’s been clear that the converse is also true: ‘we’ not only need to teach them, but there is a desire for us to. This is particularly notable when we consider equality of access and unfair advantages; two projects highlighted this when they noted that students from China had lower levels of familiarity with AI.
Project: How do students perceive the use of genAI for providing feedback?
Lead: Timothy Pullen
Thoughts: A project from Biochemistry focused on coding, specifically AI tools giving useful feedback on code. Some GTAs have developed short coding exercises that have been trialled with students (they are embedded into Moodle and the AI provides feedback). This has implications for the time saved on administering this kind of feedback, but Tim suggests there are “significant” limits to what customised bots can do here – I need to find out more, and am intrigued by the student perception of this: are there some situations where students would rather have a real person look at their work and offer help?

Project: AI-Powered Single Best Answer (SBA) Automatic Question Generation & Enhanced Pre-Clinical Student Progress Tracking
Lead: Isaac Ng (student) and Mandeep Sagoo
Thoughts: Isaac, a medical student, presents, and it’s interesting that there’s quite a clear throughline to producing something that could have commercial prospects further down the line – there’s a name and logo! An AI has been ‘trained’ with resources and question styles that act as the baseline; students can then upload their own notes and the AI uses these to produce questions in an SBA format consistent with the ‘real’ ones. There’s a clear focus on making sure that the AI won’t generate questions from the material it’s been given that are factually wrong. A nice aspect is that all of the questions the AI generates are stored, and in March students will be able to vote on other students’ AI-generated questions. I’m intrigued by whether students know what a good or bad question is, and whether we need to ensure their notes are high-quality first (the first sketch after the project list is my guess at this notes-to-question loop).

Project: Co-designing Encounters with AI in Education for Sustainable Development
Lead: Caitlin Bentley
Thoughts: Mira Vogel from King’s Academy is speaking on the team’s behalf – she leads on teaching sustainability in HE. The team have been working on the ‘right’ scaffolding and framing to find the most appropriate teaching within different areas/subjects/faculties – how to find the best routes. They have a broad range of staff involved, so have brought this element into the project itself. The first phase has been recursive – recruiting students across King’s to develop materials – and Mira has a fun phrase about “eating one’s own dog food”. They’ve been identifying common ground across disciplines to find how future work should be organised at scale and more widely to tackle ‘wicked problems’ (I’m sure this is ‘pernicious or thorny problems’ and not surfer-dude ‘wicked’, but I like the positivity in the thought of it being both).

Project: Testing the Frontier – Generative AI in Legal Education and beyond
Lead: Anat Keller and Cari Hyde Vaamonde
Thoughts: Trying to bring critical thinking into student use of AI. There’s a Moodle page, an online workshop (120 participants) and a focus group day (12 students and staff) to consider this. How does/should/could the law regulate financial institutions? The project focused on the application of assessment marking criteria and typically identified three key areas of failure: structure, understanding, and a lack of in-depth knowledge (interestingly, probably replicating what many academics would report for most assessment failure). The aim wasn’t a pass, but to see if a distinction-level essay could be produced. Students were a lot more critical than staff when assessing the essays (side-note: students anthropomorphised the AI, often using terms like ‘them’ and ‘him’ rather than ‘it’). Students felt that while using AI at the initial ideas and creation stage may feel more appropriate than using it during the actual essay writing, this was where they lost the agency and creativity that you’d want to find in a distinction-level student – perhaps this is the message to get across to students?

Project: Exploring literature search and analysis through the lens of AI
Lead: Isabelle Miletich
Thoughts: Another project where the students on the research team get to present their work; it’s a highlight of the work, which also has a heavy co-creational aspect. Focused on Research Rabbit: a free AI platform that sorts and organises literature for literature reviews. Y2 focus groups have been used to inform material that is then used with Y1 dental students. There was a 95.7% response rate to the Y1 survey. Resources were produced to form a toolbox for students, mainly guidance for the use of Research Rabbit. There was also a student-produced video on how to use it for Y1s. The conclusion of the project will be narrated student presentations on how they used Research Rabbit.

Project: Designing an AI-Driven Curriculum for Employable Business Students: Authentic Assessment and Generative AI
Lead: Chahna Gonsalves
Thoughts: Identifying use cases so that academics are better informed about when to put AI into their work. There have been a number of employer-based interviews around how employers are using AI. Student participants are reviewing transcripts to match these to appropriate areas that academics might then slot into the curriculum. An interesting aspect has been that students didn’t necessarily know or appreciate how much King’s staff did behind the scenes on curriculum development work. It was also a surprise to the team that some employers were not as persuaded by the usefulness of AI (although many were embedding it within work). Some consideration of there being a difference in approach between early adopters and those more reticent.

Project: Assessment Innovation integrating Generative AI: Co-creating assessment activities with Undergraduate Students
Lead: Rebecca Upsher
Thoughts: Based in Psychology – students described how assessment to them means anxiety and stress or “just a means to get a degree” (probably some work to do around the latter, for sure). There’s a desire for creative and authentic assessment from all sides. The project started by identifying current student use of AI in and around assessment: one focus group (a learning and assessment investigation; clarity of existing AI guidance; suggestions for improvements) and one workshop (students more actively giving staff suggestions about summative AI use). There is a focus on inclusive and authentic assessment, being mindful of neurodiverse students, and the group have been working with the neurodiverse society. Research students have been carrying out the literature review, prepared recruitment materials for groups, and mapped assessment types used in the department. A preliminary finding that has been a common thread was a desire for assessments to be designed with students, and a shift in power dynamics – interestingly, AI projects like this are fostering the sort of co-design work that could have taken place before AI but didn’t necessarily; academic staff are now valuing what students know and can do with AI (particularly if they know more than we do).

Project: Improving exam questions to decrease the impact of Large Language Models
Lead: Victor Turcanu
Thoughts: A medicine-based project looking at alignment with authentic professional tasks that allow students to demonstrate their understanding and critical, innovative thinking: can students use LLMs to enhance their creativity and wider conceptual reach? The project is using 300 anonymous exam scripts to compare with ChatGPT answers. More specifically, it’s about asking students their opinion on a question that doesn’t have an answer (a novel question embedded within an area of research around allergies – can students design a study to investigate something that doesn’t have a known solution: talk about the possibilities, or what they think would be a line of approach to research an answer). LLMs may be able to utilise work that has been published, but cannot draw on what hasn’t been published or isn’t yet understood. While the project was about students using LLMs, there’s also an angle here: it’s a form of assessment where an AI can’t help as much.

Project: Exploring Generative AI in Essay Writing and Marking: A study on Students’ and Educators’ Perceptions, Trust Dynamics, and Inclusivity
Lead: Margherita de Candia
Thoughts: Political science. Working with Saul Jones (an expert on assessment), they’ve also considered making an essay ‘AI-proof’. They’re using the PAIR framework developed at King’s and have designed an assessment using the framework to make a brief they think is AI-proof but still allows students to use AI tools. Workshops in which students write an essay using AI will then be used to refine the assignment brief following a marking phase. If it works, they want to disseminate the AI-proof essay brief to colleagues across the social science faculties; they are also running sessions to investigate student perceptions, particularly around improvements to inclusivity in using AI. An interesting element here is what we consider to be ‘AI-proof’, but also that students will be asked for their thoughts on feedback on their essays when half of it will have been generated by an AI.

Project: Student attitudes towards the use of Generative AI in a Foundation Level English for Academic Purposes course and the impact of in-class interventions on these attitudes
Lead: James Ackroyd
Thoughts: Action research – King’s Foundations, with the team working on English for Academic Purposes. Two surveys through the year and a focus group, with specific in-class interventions on the use of AI; another survey to follow. Two-thirds of students initially said that they didn’t use AI at the start of the course (40% of students are from China, where AI is less commonly used due to access restrictions), but half-way through the course two-thirds said that they did. Is this King’s demystifying things? Student belief in what AI could do reduced during the course, while faith in the micro-skills required for essay writing increased. Lots of fascinating threads of AI literacy and perceptions of it have come out of this so far.

Project: Enhancing gAI literacy: an online seminar series to explore generative AI in education, research and employment
Lead: Brenda Williams
Thoughts: An online seminar series on the use of AI (students asked for them online, but there are also more than 2,000 students in the target group, and it’s the best way to get reach). A consultation panel (10 each of staff/students/alumni) is designing five sessions to be delivered in June. Students have been informed about the course, and a pre-survey to find out about participants’ use of AI (and a post-survey) has been prepared. This project in particular has a high mix of staff from multiple areas around King’s and highlights that there is more at play within AI than just working with AI in teaching settings.

Project: Supporting students to use AI ethically and effectively in academic writing
Lead: Ursula Wingate
Thoughts: Preliminary scoping of student use of AI, with a focus on fairness and a level playing field: upskilling some students and reining in others. Recruited four student collaborators and ran four focus groups (23 participants in January). All students reported having used ChatGPT (did this mean in education, or in general?) and there is a wide range of free tools they use. Students are critical and sceptical of AI: they’ve noticed that it isn’t very reliable and have concerns about the IP of others. They’re also concerned about not developing their own voice. Sessions designed to focus on some key aspects (cohesion, grammatical compliance, appropriateness of style, etc.) when using AI in academic writing are being planned.

Project: Is this a good research question?
Lead: Iain Marshall and Kalwant Sidhu
Thoughts: Research topics for possible theses are being discussed at this half-way point of the academic year. Students are consulting chatbots (academics are quite busy, and supervisors are usually only assigned once project titles and themes are decided – can students have a space to go to beforehand for more detailed input?). The team have been utilising prompt engineering to create their own chatbot to help themselves and others (I think this is through the application of provided material, so students can input this and then follow up with their own questions). This does involve students utilising quite a number of detailed scripts and some coding, so it is supervised by a team – the aim is that this will be supportive.

Project: Evaluating an integrated approach to guide students’ use of generative AI in written assessments
Lead: Tania Alcantarilla and Karl Nightingale
Thoughts: There are 600 students in the first year of their Bioscience degrees. The team focused on perceptions and student use of AI, the design of a guidance podcast/session, and evaluation of the sessions and then of ultimate gAI use. There were 200 responses to the student survey (which is pretty impressive). Lower use of gAI than expected (a third of students, though this increased after being at King’s – mainly among international students). It’s now that I’ve realised people ‘in the know’ are using gAI and not genAI as I have… am I out of touch?

Project: AI-Based Automated Assessment Tools for Code Quality
Lead: Marcus Messer and Neil Brown
Thoughts: A project based around the assessment of student-produced code. Here the team have focused on ‘chain of thought prompting’: an example is given to the LLM as a gobbet that includes the data, the reasoning steps, and the solution. Typically eight are used before the gAI is asked to apply what it has learned to a new question or other input. The team will use this to assess the code quality of programming assignments, including readability and maintainability. Ultimately the grades and feedback will be compared with human-graded examples to judge the effectiveness of the tool (the second sketch after the project list guesses at this prompting pattern).

Project: Integrating ChatGPT-4 into teaching and assessment
Lead: Barbara Piotrowska
Thoughts: Public Policy in the Department of Political Economy – the broad goal was to get students excited about, and comfortable with, using gAI. Some of the most hesitant students have been the most inventive in using it to learn new concepts. ChatGPT is being used as a co-writer for an assessment – a policy brief (advocacy) – due next week. Teaching is also a part (conversations with gAI on a topic can be used as an example of a learning task).

Project: Generative AI for critical engagement with the literature
Lead: Jelena Dzakula
Thoughts: Digital Humanities – reading and marking essays where students engage with a small window of literature. Can gAI summarise what are considered difficult articles and chapters for students? An initial survey showed that students don’t use tools for this; they just give up. They mainly use gAI for brainstorming and planning, but not for helping their learning. The team are designing workshops/focus groups to turn gAI into a learning tool, mainly based around complex texts.

Project: Adaptive learning support platform using GenAI and personalised feedback
Lead: Ievgeniia Kuzminykh
Thoughts: This project aims to embed AI, or at least use it as an integral part of a programme, where it has access to a lot of information about progress, performance and participation. Moodle has proven quite difficult to work with here, as the team wanted an AI that would analyse Moodle (to do this, a cloned copy was needed, uploaded elsewhere so that it could be accessed externally by the AI). The ChatGPT API not being free has also been an issue. So far, course content, quizzes and answers have been utilised, and the gAI asked to give feedback and generate new quizzes. A paper on the design of a feedback system is being written and will be disseminated.

Project: Evaluating the Reliability and Acceptability of AI Evaluation and Feedback of Medical School Course Work
Lead: Helen Oram
Thoughts: Couldn’t make the session – updates coming soon!
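
Two of the projects above describe their mechanisms concretely enough to sketch. First, the SBA question generator: my speculative guess at the notes-in, question-out core, where the prompt wording, model and JSON shape are all my assumptions rather than Isaac and Mandeep’s actual implementation.

```python
# A speculative sketch of the notes-in, question-out loop: the prompt wording,
# model and JSON shape are assumptions, not the project's implementation.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_sba(notes: str) -> dict:
    """Ask the model for one single-best-answer question grounded in the notes."""
    prompt = (
        "Using ONLY the study notes below, write one single-best-answer (SBA) "
        "question with five options (A-E), exactly one of which is correct. "
        "Do not introduce facts that are not in the notes. Reply as JSON: "
        '{"stem": ..., "options": [...], "answer": "A", "explanation": ...}\n\n'
        "NOTES:\n" + notes
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # request parseable JSON back
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)

# An invented fragment of student notes, purely for illustration.
question = generate_sba("The sinoatrial node is the heart's primary pacemaker...")
print(question["stem"])
for letter, option in zip("ABCDE", question["options"]):
    print(f"{letter}. {option}")
```

Constraining the model to the uploaded notes is the cheap part; judging whether the distractors are plausible-but-wrong is presumably where the quality-monitoring effort goes.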

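Second, the code-quality project’s ‘chain of thought prompting’: a rough sketch of worked examples that show reasoning before the grade, followed by the new submission. The examples, rubric wording and model here are invented; only the overall pattern follows what was presented.

```python
# A rough sketch of chain-of-thought, few-shot prompting for grading code
# quality: worked examples (input, reasoning, solution) come before the new
# submission. The rubric wording and examples are invented for illustration.
from openai import OpenAI

client = OpenAI()

# Each worked example shows reasoning steps before the grade, so the model
# imitates that structure; the project reportedly uses around eight of these.
EXAMPLES = [
    {
        "code": "def f(a):\n    return a * 2 if a else 0",
        "reasoning": (
            "Names are uninformative (f, a); the logic is simple and correct; "
            "there is no docstring. Readability low, maintainability medium."
        ),
        "grade": "C",
    },
    # ...further worked examples would go here...
]

def build_messages(submission: str) -> list[dict]:
    """Assemble a few-shot conversation ending with the code to be graded."""
    messages = [{
        "role": "system",
        "content": ("You grade code quality (readability, maintainability) "
                    "from A to F. Reason step by step before giving a grade."),
    }]
    for ex in EXAMPLES:
        messages.append({"role": "user", "content": ex["code"]})
        messages.append({"role": "assistant",
                         "content": ex["reasoning"] + "\nGrade: " + ex["grade"]})
    messages.append({"role": "user", "content": submission})
    return messages

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=build_messages("def total(xs):\n    s = 0\n    for x in xs:\n"
                            "        s += x\n    return s"),
)
print(response.choices[0].message.content)  # reasoning steps, then a grade
```

Comparing these machine grades with human-graded examples, as the team plan to, is the sensible test of whether the pattern holds up.
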
Fascinating stuff. For me, I want to consider how we can take this work from projects that have been funded by the CTF and use it as a source of ideas and models that departments, academics, and teaching staff can look to when considering teaching, curriculum and assessment where they may not have funding.