Old problem, new era

“Empirical studies suggest that a majority of students cheat. Longitudinal studies over the past six decades have found that about 65–87% of college students in America have admitted to at least one form of nine types of cheating at some point during their college studies”

(Yu et al., 2018)

Shocking? Yes. But also reassuring in its own way. When you are presented with something like that from 2018 (i.e. pre-ChatGPT) you realise that this is not a newly massive issue; it’s the same issue with a different aspect, lens or vehicle. Cheating in higher education has always existed, but I do acknowledge that generative AI has illuminated it with an intensity that makes me reach for the eclipse goggles. There are those who argue that essay mills and inappropriate third-party support were phenomena we had inadequately addressed as a sector for a long time. LLMs have somehow opened a fissure in the integrity debate so large that suddenly everyone wants to do something about it. The issue has become much more complex because of that, but the visibility could be seen positively (I may be reaching but I genuinely think there is mileage in this), not least because:

1. We are actually talking about it seriously. 

2. It may give us leverage to effect long needed changes. 

The common narratives I hear are ‘where there’s a will, there’s a way’ and that ChatGPT makes the ‘way’ easier. The problem, though, in my view, is that just because the ‘way’ is easier does not mean the ‘will’ will necessarily increase. Assuming all students will cheat does nothing to build bridges, establish trust or provide an environment where the sort of essential mutual respect necessary for transparent and honest working can flourish. You might point to the stat at the top of this page and say we are WAY past the need to keep measuring will! Exams, as I’ve argued before, are no panacea, given the long-standing issues of authenticity and inclusivity they bring (as well as being the place where students have shown themselves to be most creative in their subversion techniques!).

In contrast to this, study after study is finding that students are increasingly anxious about being accused of cheating when that was never their intention. They report unclear and sometimes contradictory guidance, leaving them uncertain about what is and isn’t acceptable. A compounding issue is the lack of consistency in how cheating is defined: it varies significantly between institutions, disciplines and even individual lecturers. I often ask colleagues whether certain scenarios constitute cheating, deliberately using examples involving marginalised students to highlight the inconsistencies. Is it OK to get structural, content or proofreading suggestions from your family? How does your access to human support differ if you are a first-generation, neurodivergent student studying in a new language and country? Policies usually say “no”, but it would be hard to fool ourselves that this sort of ‘cheating’ is not routine, and even harder to evidence it. The boundaries are blurred, and the lack of consensus only adds to the confusion.

To help my thinking on this, I looked again at some articles on cheating over time (going back to 1941!) that I had put in a folder and badly labelled, as per usual. I selected a few to give me a sense of the what and how as well as the why, and to provide a baseline to inform the context around current assumptions about cheating. Yu et al. (2018) use a long-established categorisation of types of cheating, with a modification to acknowledge unauthorised digital assistance:

  1. Copying sentences without citation.
  2. Padding a bibliography with unused sources.
  3. Using published materials without attribution.
  4. Accessing exam questions or answers in advance.
  5. Collaborating on homework without permission.
  6. Submitting work done by others.
  7. Giving answers to others during an exam.
  8. Copying from another student in an exam.
  9. Using unauthorised materials in an exam.

The what and how question reveals plenty of expected ways of cheating, especially in exams, but the literature also notes where teachers and lecturers are surprised by the extent and creativity. Four broad types emerge:

  1. Plagiarism in various forms, from self-plagiarism to copying peers to deliberately inappropriate citation practices.
  2. Homework and assignment cheating, such as copying work, unauthorised collaboration, or failing to contribute fairly.
  3. Other academic dishonesty, such as falsifying bibliographies, influencing grading or contract cheating.
  4. Exam-based cheating.

The amount of exam-based cheating reported should really challenge assumptions about the security of exams at the very least, and remind us that they are no panacea whether we see this issue through an ongoing or a ChatGPT lens. Stevens and Stevens (1987) in particular share some great pre-internet digital ingenuity and Simkin and McLeod (2010) show how the internet broadened the scope and potential. These are some of the types reported over time:

  1. Using unauthorised materials.
  2. Obtaining exam information in advance.
  3. Copying from other students.
  4. Providing answers to other students.
  5. Using technology to cheat (microcassettes, pre-storing data in calculators, mobile phones; not mentioned in these studies but now apparently a phenomenon is the use of bone-conduction tech in glasses and/or smart glasses).
  6. Using encoded materials (rolled up pieces of paper for example).
  7. Hiring a surrogate to take an exam.
  8. Changing answers after scoring (this one from Drake, 1941).
  9. Collaborating during an exam without permission.

These are the main reasons for cheating across the decades that I could identify (drawn from all the sources cited at the end):

  1. Difficulty of the work, when students are on the wrong course (I’m sure we can think of many reasons why this might occur) or when teaching is inadequate or insufficiently differentiated.
  2. Pressure to succeed. ‘Success’, when seen as the principal goal, can subdue the conscience.
  3. Laziness. This is probably top of many academics’ assumptions and it is there in the research, but it is also worth considering what else competes for attention and time, and how ‘I can’t be bothered’ may mask other issues, even in self-reporting.
  4. Perception that cheating is widespread. If students feel others are doing it and getting away with it, they are more likely to cheat themselves.
  5. Low risk of getting caught.
  6. A sense of injustice in the system; structural inequalities, both real and perceived, can be seen as a valid justification.
  7. External factors such as evident cheating in wider society. A fascinating example of this was suggested to me by an academic trained in Soviet-dominated Eastern Europe, who said cheating was (and remains) a marker of subversion and so carries its own respectability.
  8. Lack of understanding of what is and is not allowed. Students report that they have not been taught this, and degrees of cheating are blurred by some of the other factors here: when does collaboration become collusion?
  9. Cultural influences. Different norms and expectations can create issues, and this comes back to my point about individualised (or contextualised) definitions of what is and is not appropriate.
  10. Trauma and personal circumstances. My own experience, over 30 years, of dealing with plagiarism cases often reveals very powerful, often traumatic, experiences that lead students to act in ways that are perceived as cheating.

For each it’s worth asking yourself:

How much of the responsibility for this lies with the student and how much with the teacher/lecturer and/or institution (or even society)?

I suspect that the truly wilful, utterly cynical students are the ones least likely to self-declare and the least likely to get caught. This deepens my discomfort about the mechanisms we rely on (too heavily?) to judge integrity.

This skim through really did make clear to me that cheating and plagiarism are not the simple concepts that many say they are. Cheating in exams is also a much bigger thing than we might imagine. The reasons for cheating are where we need to focus, I think; less so the ‘how’, as that becomes a battleground and further entrenches ‘us and them’ conceptualisations. When designing curricula and assessments, the unavoidable truth is that we need to do better: by moving away from one-size-fits-all approaches, by realising that cultural, social and cognitive differences will impact many of the ‘whys’, and by holding ourselves to account when we create or exacerbate structural factors that broaden the likelihood of cheating.

I am definitely NOT saying give wilful cheaters a free pass, but all the work many universities are doing on assessment reform needs to be seen through a much longer lens than the generative AI one. To focus only on that is to lose sight of the wider and longer issue. We DO have the capacity to change things for the better, but that also means that many of us will be compelled (in a tense, under-threat landscape) to learn how to challenge conventions and even to invest much more time in programme-level, iterative, AI-cognisant teaching and assessment practices. Inevitably the conversations will start with the narrow, hyped and immediate manifestations of inappropriate AI use, but let’s celebrate this as leverage; as a catalyst. We’d do well, at the very least, to reconsider how we define cheating and why we consider some incredibly common behaviours to be cheating (is it collusion or collaboration, for example, and what about proofreading help from third parties?). Beyond that, we should be having serious discussions about augmentation and hybridity in writing: what counts as acceptable support? How does that differ according to context and discipline? It will raise questions about the extent to which writing is the dominant assessment medium, about authenticity in assessment and about the rationale and perceived value of anonymity.

It’s interesting to read how many of the behaviours we witness today in both students and their teachers have parallels from over 80 years ago (Drake, 1941): strict disciplinarian responses, or ignoring it because ‘they’re only harming themselves’, were common then too. In other words, the underlying causes were not being addressed. To finish, I think this sets out the challenge confronting us well:

“Teachers in general, and college professors in particular, will not be enthusiastic about proposed changes. They are opposed to changes of any sort that may interfere with long-established routines – and examinations are a part of the hoary tradition of the academic past”

(Drake, 1941, p.420)

Drake, C. A. (1941). Why students cheat. The Journal of Higher Education, 12(5).

Hutton, P. A. (2006). Understanding student cheating and what educators can do about it. College Teaching, 54(1), 171–176. https://www.jstor.org/stable/27559254 

Miles, P., et al. (2022). Why students cheat. The Journal of Undergraduate Neuroscience Education (JUNE), 20(2), A150-A160.

Rettinger, D. A., & Kramer, Y. (2009). Situational and individual factors associated with academic dishonesty. Research in Higher Education, 50(3), 293-313. https://doi.org/10.1007/s11162-008-9116-5 

Simkin, M. G., & McLeod, A. (2010). Why do college students cheat?. Journal of Business Ethics, 94, 441-453. https://doi.org/10.1007/s10551-009-0275-x 

Stevens, G. E., & Stevens, F. W. (1987). Ethical inclinations of tomorrow’s managers revisited: How and why students cheat. Journal of Education for Business, 63(1), 24-29. https://doi.org/10.1080/08832323.1987.10117269 

Yu, H., Glanzer, P. L., Johnson, B. R., Sriram, R., & Moore, B. (2018). Why college students cheat: A conceptual model of five factors. The Review of Higher Education, 41(4), 549-576. https://doi.org/10.1353/rhe.2018.0025 

Gallant, T. B., & Drinan, P. (2006). Organizational theory and student cheating: Explanation, responses, and strategies. The Journal of Higher Education, 77(5), 839-860. https://www.jstor.org/stable/3838789 

CPD for critical AI literacy: do NOT click here.

In 2018, Timos Almpanis and I co-wrote an article exploring issues with Continuous Professional Development (CPD) in relation to Technology Enhanced Learning (TEL). The article, which we published while working together at Greenwich (in Compass: Journal of Learning and Teaching), highlighted a persistent challenge: despite substantial investment in TEL, enthusiasm for and use of it among educators remained inconsistent at best. While students increasingly expect technology to enhance their learning, and there is/was evidence to support its potential to improve engagement and outcomes, the traditional transmissive CPD models through which teaching academics were introduced to and supported in TEL could undermine their own purpose. Focusing on technology and systems, and using poor (and non-modelling) pedagogy, often gave/gives a sense of compliance over pedagogic improvement.

Because we are both a bit contrary and subversive we commissioned an undergraduate student (Christina Chitoroaga) to illustrate our arguments with some cartoons which I am duplicating here (I think I am allowed to do that?):

We argued that TEL-focused CPD should prioritise personalised and pedagogy-focused approaches over one-size-fits-all training sessions. Effective CPD that acknowledges need, reflects evidence-informed pedagogic approaches and empowers educators by offering choice, flexibility and relevance will also enable them to explore and apply tools that suit their specific teaching contexts and pedagogical needs. By shifting the focus away from the technology itself and towards its purpose in enhancing learning, we can foster greater engagement and creativity among academic staff. This was exactly the approach I tried to apply when rolling out Mentimeter (a student response system to support increasing engagement in and out of class).

I was reminded of this article recently (because of the ‘click here; click there’ cartoon) when a colleague expressed frustration about a common issue they had observed: lecturers teaching ‘regular’ students (I always struggle with this framing as most of my ‘students’ are my colleagues; we need a name for that! I will do a poll. I got totally distracted by that, but it’s done now) how to use software using a “follow me as I click here and there” method. Given that “follow me as I click” is still a thing, perhaps it is time to adopt a more assertive and directive approach. Instead of simply providing opportunities to explore better practices, we may need to be clearer in saying: “Do not do this.” I mean, I do not want to be the pedagogy police, but while there is no absolute right way there are some wrong ways, right? We might also want to think about what this means in terms of the AI elephant in every bloomin’ classroom.

The deluge of AI tools and emerging uses of these technologies (willing and unwilling, appropriate and inappropriate) means the need for effective upskilling is even more urgent. However we support skill development and thinking time, we need of course to realise that it requires moving beyond the “click here, click there” model. In my view (and I am aware this is contested) educators and students need to experiment with AI tools in real-world contexts, gaining experience of how AI is impacting curricula, academic use and, potentially, pedagogic practices. The many valid and pressing reasons why teachers might resist or reject engaging with AI tools (workload, ethical implications, data privacy, copyright, eye-watering environmental impacts, or even concern about being replaced by technology) are significant barriers to adoption. But adoption is not my goal; critical engagement is. The conflation of the two in the minds of my colleagues is, I think, a powerful impediment before I even get a chance to bore them to death with a ‘click here; click there’. There’s no getting away from the necessity of empathy and a supportive approach, one that acknowledges these fears while providing space for dialogue and both critical AND creative applications of responsibly used AI tools. In fact, Alison Gilmour and I wrote about this too! It’s like all my work actually coheres!

Whatever the approach, CPD cannot be a one-size-fits-all solution, nor can it rely on prescriptive ‘click here, click there’ methods. It must be compassionate and dialogic, enabling experimentation across a spectrum of enthusiasm, from the evangelical to the steadfastly resistant. While I have prioritised ‘come and play’, ‘let’s discuss’, or ‘did you know you can…’ events, I recognise the need for more structured opportunities to clarify these underpinning values before events begin. If I can find a way to manage such a shift, it will help align the CPD with meaningful, exploratory engagement that puts pedagogy and dialogue at the heart of our ongoing efforts to grow critical AI literacy in a productive, positive way that offers something to everyone, wherever they sit on the parallel spectrums of AI skills and beliefs.

Post script: some time ago I wrote on the WONKHE blog about growing AI literacy, and this coincided with the launch of the GEN AI in HE MOOC. We’re working on an expanded version, broadening the scope of AI beyond the utterly divisive ‘generative’ as well as widening its reach to other sectors of education. Release is due in May. It’ll be free to access.

AI positions: Where do you stand?

I have been thinking a lot recently about my own and others’ positions in relation to AI in education. I’m reading a lot more from the ‘ResistAI’ lobby and share many perspectives with its core arguments. I likewise read a lot from the tech communities and enthusiastic educator groups, which often get conflated but are important to distinguish given the bloomin’ obvious as well as more subtle differences in agendas and motivations (see world domination and profit arguments, for example). I see willing adoption, pragmatic adoption, reluctant adoption and a whole bunch of ill-informed adoption/rejection too. My reality is that staff and students are using AI (of different types) in different ways. Some of this is ground-breaking and exciting, some snag-filled and disappointing, some ill-advised and potentially risky. Existing IT infrastructure and processes are struggling to keep pace, and daily conversations range from ‘I have to show you this, it’s going to change my life!’ to ‘I feel like I’m being left behind here’ and a lot more besides.

So it was that this morning I saw a post on LinkedIn (who’d have thought the place where we put our CVs would grow so much as an academic social network?) from Leon Furze, who defines his position as ‘sitting on the fence’. I initially thought ‘yeah, that’s me’ but, in fact, I am not actually sitting on the fence at all in this space. I am trying as best I can to navigate a path that can be defined by the broad two-word strategy we are trying to define and support at my place: Engage Responsibly. Constructive resistance and debate are central, but so is engagement with fundamental ideas, technologies, principles and applications. I have for ages been arguing for more nuanced understanding. I very much appreciate evidence-based and experience-based arguments (counter and pro). The waters are muddied though with, on the one hand, big tech declarations of educational transformation and revolution (we’re always on the cusp, right?) and, on the other, sceptical generalisations like the one I saw gaining social media traction the other day, which went something like:

“Reading is thinking

Writing is thinking

AI is anti-thinking”

If you think that then you are not thinking, in my view. Each of those statements must be contextualised and nuanced. This is exactly the kind of meme-level sound bite that sounds good initially but is not what we should be entertaining as a position in academia. Or is it? Below are some adjectives and definitions of the sorts of positions identified by Leon Furze in the collection linked above and by me and my research partners in crime Shoshi, Olivia and Navyasara. Which one/s would you pick to define your position? (I am aware that many of these terms are loaded; I’m just interested in the broadest sense of where people see themselves, whether they have planted a flag or whether they are still looking for a spot as they wander around in the traffic wide-eyed.)

  • Cautious: Educators who are cautious might see both the potential benefits and risks of AI. They might be hesitant to fully embrace AI without a thorough understanding of its implications.
  • Critical: Educators who are critical might take a stance that focusses on one or more of the ethical concerns surrounding AI and its potential negative impacts, such as the risk of AI being used for surveillance or control, or ways in which data is sourced or used.
  • Open minded: Open minded educators might be willing to explore AI’s possibilities and experiment with its use in education, while remaining aware of potential drawbacks.
  • Engaged: Engaged educators actively seek to understand AI, its capabilities and its implications for education. They seek to shape the way AI is used in their field.
  • Resistant: Resistant educators might actively oppose the integration of AI into education due to concerns about its impact on teaching, learning or ethical considerations.
  • Pragmatic: Pragmatic educators might focus on the practical applications of AI in education, such as using it for administrative tasks or to support personalised learning. They might be less concerned with theoretical debates and more interested in how AI can be used to improve their practice.
  • Concerned: Educators who are concerned might primarily focus on the potential negative impacts of AI on students and educators. They might worry about issues like data privacy, algorithmic bias, or the deskilling of teachers.
  • Hopeful: Hopeful educators might see AI as a tool that can enhance education and create new opportunities for students and teachers. They might be excited about AI’s potential to personalise learning, provide feedback and support students with diverse needs.
  • Sceptical: Sceptical educators might question the claims made about AI’s benefits in education and demand evidence to support its effectiveness. They might be wary of the hype surrounding AI and prefer to wait for more research before adopting it.
  • Informed: Informed educators would stay up-to-date with the latest developments in AI and its applications in education. They would understand both the potential benefits and risks of AI and be able to make informed decisions about its use.
  • Fence-sitting: Educators who are fence-sitting recognise the complexity of the issue and see valid arguments on both sides. They may be delaying making a decision until more information is available or a clearer consensus emerges. This aligns with Furze’s own position of being on the fence, acknowledging both the benefits and risks of AI.
  • Ambivalent: Educators experiencing ambivalence might simultaneously hold positive and negative views about AI. They may, for example, appreciate its potential for personalising learning but be uneasy about its ethical implications. This reflects cognitive dissonance, where conflicting ideas create mental discomfort. Furze’s exploration of both the positive potential of AI and the reasons for resisting it illustrates this tension.
  • Time-poor: Educators who are time-poor may not have the capacity to fully (or even partially) research and understand the implications of AI, leading to delayed decisions or reliance on simplified viewpoints.
  • Inexperienced: Inexperienced educators may lack the background knowledge to confidently assess the potential benefits and risks of AI in education, contributing to hesitation or reliance on the opinions of others.
  • Other: whatever the heck you like!

How many did you choose?

Please select two or three and share them via this Mentimeter Link.

I’ll share the responses soon!

The Essay in the Age of AI: a test case for transformation

We need to get beyond entrenched thinking. We need to see that we are at a threshold of change in many of the ways that we work, write, study, research etc. Large language models as a key development in AI (with ChatGPT as a symbolic shorthand for that) have led to some pretty extreme pronouncements. Many see it as an existential threat, heralding the ‘death of the essay’ for example. These narratives, though, are unhelpful as they oversimplify a complex issue and mask long-standing, evidence-informed calls for change in educational assessment practices (and wider pedagogic practices). The ‘death of the essay’ narratives do though give us an opportunity to interrogate the thinking and (mis)understandings that underpin these discourses and tensions. We have a chance to challenge tacit assumptions about the value and purpose of essays as one aspect of educational practice that has been considered an immutable part of the ways learning and the evaluation of that learning happens. We are at a point where it is not just people like me (teacher trainers; instructional designers; academic developers; enthusiastic tech fiddlers; contrarians; compassionate & critical pedagogues; disability advocates etc.) that are voicing concerns about conventional practices. My view is that we leverage the heck out of this opportunity and find ways to effect change that is meaningful, scalable, responsive and coherent.

So it was that, in a conversation over coffee (in my favourite coffee shop in the Strand area) on these things with Claire Gordon (Director of the Eden Centre at LSE), we decided to use the essay as a stimulus for a synthesis of thinking and to evolve a Manifesto for the Essay (and other long-form writing) in the age of AI. To explore these ideas further, we invited colleagues from King’s College London and the London School of Economics (as well as special guests from Richmond American University and the University of Sydney) to a workshop. We explored questions like:

  • What are the core issues and concerns surrounding essays in the age of AI?
  • What alternatives might we consider in our quest for validity, reliability and authenticity?
  • Why do some educators and students love the essay format, and why do others not?
  • What is the future of writing? What gains can we harness, especially in terms of equity and inclusion?
  • How might we conceptualise human/hybrid writing processes?

A morning of sharing research, discussion, debate and reflection enabled us to draft and subsequently hone and fine-tune a collection of provocations which we have called a ‘Manifesto for the Essay in the Age of AI’.

I invite you to read our full manifesto and the accompanying blog post outlining our workshop discussions. As we navigate this period of significant change in higher education, it’s crucial that we engage in open, critical dialogue about the future of assessment.

What are your thoughts on the role of essays in the age of AI? Or, indeed, how assessment and teaching will change shape over the next few years? I welcome your comments and reflections below.

Navigating the AI Landscape in HE: Six Opinions

Read my post below or listen to AI me read it. I have to say, I sound very well spoken in this video. To my ears it doesn’t sound much like me. For those that know me: what do you think?

As we attempt to navigate the uncharted (as well as expanding and changing) landscapes of artificial intelligence in higher education, it makes sense to reflect on our approaches and understanding. We’ve done ‘headless chicken’ mode; we’ve been in reactive mode. Maybe we can start to take control of the narratives, even if what is ahead of us is disruptive, fast-moving and fraught with tensions. Here are six perspectives from me that I believe will help us move beyond the hype and get on with the engagement that is increasingly pressing but, thus far, inconsistent at best.

1. AI means whatever people think it means

In educational circles, when we discuss AI, we’re primarily referring to generative tools like ChatGPT, DALL-E, or Copilot. While computer scientists might argue (with a ton of justification) that this is a narrow definition, it’s the reality of how most educators and students understand and engage with AI. We mustn’t get bogged down in semantics; instead, we should focus on the practical implications of these tools in our teaching and learning environments whilst taking time to widen some of those definitions, especially when talking with students. Interrogating what we mean when we say ‘AI’ is, in fact, a great starting point for these discussions.

2. AI challenges our identities as educators

The rapid evolution of AI is forcing us to reconsider our roles as educators. Whether you buy into the traditional framing of higher education this way or not, we’re no longer the sole gatekeepers of knowledge, dispensing wisdom from the lectern. However much we might want to advocate for notions of co-creation or discovery learning, the lecturer or teacher as expert is a key component of many of our professional identities as teachers. Instead, we need to acknowledge that we’re all navigating this new landscape together, staff and students alike. This shift requires humility and a willingness to learn alongside our students. The alternatives? Fake it until you make it? Bury your head? Neither is viable or sustainable. Likewise, this is not something that is ‘someone else’s job’. HE is being menaced from many corners and workload is one of the many pressures, but I don’t see a beneficial path that does not necessitate engagement. If I’m right then something needs to give. Or be made less burdensome.

3. Engage, not embrace

I’m not really a hugger, tbh. My family? Yes. A cute puppy? Probably. Friends? Awkwardly at best. A disruptive tech? Of course not. While some advocate for ’embracing’ AI, I prefer the term ‘engage’. We needn’t love these technologies or accept them unquestioningly, but we do need to interact with them critically and thoughtfully. Rejection or outright banning is increasingly unsupportable, despite the many oft-cited issues. The sooner we at least entertain the possibility that some of our assumptions about the nature of writing, what constitutes cheating and how we best judge achievement may need review, the better.

4. AI-proofing is a fool’s errand

Attempts to create ‘AI-proof’ assessments or to reliably detect AI-generated content are likely to be futile. The pace of technological advancement means that any barriers we create will swiftly be overcome. Many have written on the unreliability and inherent biases of detection tools, and the promotion of flawed proctoring and surveillance tools only deepens the trust divide between staff and students, which is already strained to its limit. Instead, we should focus on developing better, more authentic forms of assessment that prioritise critical thinking and application of knowledge. A lot of people have said this already, so we need to build a bank of practical, meaningful approaches, draw on the (extensive) existing scholarship and, in so doing, find ways to better share things that address some of the concerns without resorting to: ‘Eek, everyone do exams again!’

5. We need dedicated AI champions and leadership

To effectively integrate AI into our educational practices, we need people at all levels of our institutions who can take responsibility for guiding innovations in assessment and addressing colleagues’ questions. This requires significant time allocation and can’t be achieved through goodwill alone. Local level leadership and engagement (again with dedicated time and resource) is needed to complement central policy and guidance. This is especially true of multi-faculty institutions like my own. There’s only so much you can generalise. The problem of course is that whilst local agency is imperative, too many people do not yet have enough understanding to make fully informed decisions.  

6. Find a personal use for AI

To truly understand the potential and limitations of AI, it’s valuable to find ways to develop understanding with personal engagement – one way to do this is to incorporate it into your own workflows. Whether it’s using AI to summarise meeting or supervision notes, create thumbnails for videos, or transform lecture notes into coherent summaries, personal engagement with these tools can help demystify them and reveal practical benefits for yourself and for your students. My current focus is on how generative AI can open doors for neurodivergent students and those with disabilities or, in fact, any student marginalised by the structures and systems that are slow to change and privilege the few.

AI3*: Crossing the streams of artificial intelligence, academic integrity and assessment innovation

*That’s supposed to read AI3 but the title font refuses to allow superscript!

Yesterday I was delighted to keynote at the Universities at Medway annual teaching and learning conference. It’s a really interesting collaboration of three universities: the University of Greenwich, the University of Kent and Canterbury Christ Church University. Based at the Chatham campus in Medway, you can’t help but notice the history the moment you arrive. Given that I’d worked at Greenwich for five years I was familiar with the campus but, as was always the case when I went there during my time at Greenwich, I experienced a moment of awe when seeing the campus buildings again. It’s actually part of the Chatham Dockyard World Heritage site and features the remarkable Drill Hall Library. The reason I’m banging on about history is because such an environment really underscores for me some of those things that are emblematic of higher education in the United Kingdom (especially for those that don’t work or study in it!).

It has echoes of the cultural shorthands and memes of university life that remain popular in representations of campus life and study. It’s definitely a bit out of date (and overtly UK-centric), like a lot of my cultural references, but it made me think of all the murders in the Oxford-set crime drama ‘Morse’. The campus locations fossilised for a generation the idea of ornate buildings, musty libraries and deranged academics. Most universities of course don’t look like that and, by and large, academics tend not to be too deranged. Nevertheless, we do spend a lot of time talking about the need for change and transformation whilst merrily doing things the way we’ve done them for decades, if not hundreds of years. Some might call that deranged behaviour. And that, in essence, was the core argument of my keynote: for too long we have twiddled around the edges, but there will be no better opportunity than now, with machine-assisted leverage, to do the things that give the lie to the idea that universities are seats of innovation and dynamism. Despite decades of research that have helped define broad principles for effective teaching, learning, assessment and feedback, we default to lecture-seminar and essay-report-exam across large swathes of programmes. We privilege writing as the principal mechanism of evidencing learning. We think we know what learning looks like, what good writing is, what plagiarism and cheating are, but a couple of quick scenarios put to a room full of academics invariably reveal a lack of consensus and a mass of tacit, hidden and sometimes very privileged understandings of those concepts.

Employing an undoubtedly questionable metaphor and the unashamedly dated (1984) concept of ‘crossing the streams’ from the original Ghostbusters film, I argued that there are several parallels with the situation the citizens of New York first found themselves in way back when, not least for the academics (initially mocked and defunded) who confront the paranormal manifestations in their Ghostbusters guises. First come the appearances of a trickle of ghosts and demons, followed by a veritable deluge. Witness ChatGPT’s release, the unprecedented sign-ups and the ensuing 18 months wherein everything now has AI (even my toothbrush). There’s an AI for That has logged 12,982 AIs to date, to give an indication of that scale (I need to watch the film again to get an estimate of the number of ghosts). Anyway, early in the film we learn that a ghost-catching device called a ‘Proton Pack’ emits energy streams but:


“The important thing to remember is that you must never under any circumstances, cross the streams.” (Dr Egon Spengler)

Inevitably, of course, the resolution to the escalating crisis is the necessity of crossing the streams to defeat and banish the ghosts and demons. I don’t think that generative AI is something that could or should be defeated and I definitely do not think that an arms race of detection and policing is the way forward either. But I do think we need to cross the streams of the three AIs: Artificial Intelligence; Academic Integrity and Assessment Innovation to help realise the long-needed changes.

Artificial Intelligence represents the catalyst, not the reason, for needing dramatic change.

Academic Integrity as a goal is fine but too often connotes protected knowledge, archaic practices, inflexible standards and a resistance to evolution.

Assessment innovation is the place where we can, through common language and understanding, address both the concerns of perhaps more traditional or conservative voices about the perceived robustness of assessments in a world where generative AI exists and is increasingly integrated into familiar tools, and the arguments of more progressive voices who, well before ChatGPT, were calling for more authentic, dialogic, process-focussed and, dare I say it, de-anonymised and humanly connected assessments.

Here is our opportunity. Crossing the streams may be the only way we mitigate a drift to obsolescence! My concluding slide showed a (definitely NOT called Casper) friendly ghost which, I hope, connoted the idea that what we fear is the unknown, but as we come to know it we find ways to shift from (sometimes aggressive) engagement to understanding and perhaps even the ‘embrace’ that many who talk of AI encourage us to adopt.

Incidentally, I asked the Captain (in my custom bot ‘Teaching Trek: Captain’s Counsel’) a question about change and he came up with a similar metaphor:

“Blow Up the Enterprise: Sometimes, radical changes are necessary. I had to destroy the Enterprise to save my crew in ‘Star Trek III: The Search for Spock’. Academics should learn when to abandon a failing strategy and embrace new approaches, even if it means starting over.”

In a way I think I’d have had an easier time if I’d stuck with Star Trek metaphors. I was gratified to note that ‘The Search for Spock’ was also released in 1984. An auspicious year for dated cultural references from humans and bots alike.

—————–

Thanks:

The conference itself was great and I am grateful to Chloe, Emma, Julie and the team for organising it and inviting me.

Earlier in the day I was inspired by presentations from colleagues across the three universities: Emma, Jimmy, Nicole, Stuart and Laura. The student panel was great too; it started strongly with a rejection of the characterisation of students as idle and disinterested, and carried on forcefully from there! Special thanks too to David Bedford (who I first worked with something like 10 years ago), who uses an analytical framework of his own devising called ‘BREAD’ as an aid to informing critical information literacy. His session adapted the framework for AI interactions and it prompted a question which led, over lunch, to me producing a (rough and ready) custom GPT based on it.

I should also acknowledge the works I referred to: 1. Sarah Eaton, whose work on the six tenets of post-plagiarism I heartily recommend, and 2. Cath Ellis and Kane Murdoch* for their ‘enforcement pyramid’, which also works well as one of the vehicles that will help us navigate our way from the old to the new.

*Recommendation of this text does not in any way connote acceptance of Kane’s poor choice when it comes to football team preference.

BAAB Workshop: Gen AI- The Implications for Teaching and Assessment

A summary of the transcript, first drafted via Google Gemini, then prompted and edited by Martin Compton.

The British Acupuncture Accreditation Board (BAAB) recently hosted a workshop on the implications of AI, with a focus on generative AI tools like ChatGPT, for teaching and assessment. Alongside Dr Vivien Shaw from BAAB, who designed and led the breakout element of the session, I was invited to share my thoughts on this rapidly evolving landscape, and it was a fantastic opportunity to engage with acupuncture and Traditional Chinese Medicine educators and practitioners.

We started by noting that the majority of attendees had little or no experience of using these tools and that most were concerned.

Key Points

After a few definitions and live demos, the key points I made were:

  • AI is Bigger Than Generative AI: While generative AI tools like ChatGPT have taken the spotlight, it’s crucial to remember that artificial intelligence encompasses a much broader spectrum of technologies.
  • Generative AI is a Black Box: Even the developers of these tools are often surprised by their capabilities and applications. This unpredictability presents both challenges and opportunities.
  • The Human Must Remain in the Loop: AI should augment, not replace, human expertise. The “poetry” and nuance of human intelligence are irreplaceable.
  • Scepticism is Essential: Don’t trust everything AI produces. Critical thinking and verification of information are more important than ever.
  • AI is Constantly Improving: The capabilities of AI tools are evolving at a breakneck pace. What seems impossible today might be commonplace tomorrow.

Embracing the Opportunities and Addressing the Threats

The workshop highlighted the need for educators to lean into AI, understand its potential, and exploit its capabilities where appropriate. We also discussed the importance of adapting our teaching and assessment methods to this new reality.

In the workshop I shared an AI-generated summary of an article by Saffron Huang on ‘The surprising synergy between acupuncture and AI’ and a Chinese Medicine custom GPT, which was critiqued by the group.

Breakout Sessions: Putting AI to the Test

To get a hands-on feel for AI’s impact, we divided into breakout groups and tackled some standard acupuncture exam questions using ChatGPT and other AI tools. The results were both impressive and concerning.

  • Group 1: Case History: The AI-generated responses were generic and lacked the nuance and depth expected from a student.
  • Group 2: Reflective Task: The AI produced “marshmallow blurb” – responses that sounded good but lacked substance or specific details.
  • Group 3: PowerPoint Presentation: While the AI-generated presentation was a decent starting point, it lacked the specifics and critical analysis required by the assignment.

It was noted that these outputs should not mask the potential for labour saving, for getting something down as a start, or the possibilities of multi-shot prompting (iterating).

The Road Ahead

The workshop sparked lively discussions about the future of teaching and assessment in the age of AI. Some key questions that emerged:

  • How can we ensure that students are truly learning and not just relying on AI to generate answers?
  • What are the ethical implications of using AI in education?
  • How can we adapt our assessments to maintain their validity and relevance?

This will all take work but, as a starting point, and even if you are blown away by the tutoring demo from Sal Khan and GPT-4o this week, value human connection and interaction at all times. Neither dismiss change out of hand nor unthinkingly accept it for its own sake. Transformation is possible with these new technologies because these AI tools are powerful, but it’s up to us to use them responsibly and ethically and to grow our understanding through experimentation and dialogue. We need to engage with the opportunities presented while remaining vigilant about the potential threats.

The wizard of PAIR

Full recording: Listen / watch here

This post is an AI/me hybrid summary of the transcript of a conversation I had with Prof Oz Acar as part of the AI conversations series at KCL. This morning I found that my Copilot window now allows me to upload attachments (now disabled again! 30/4/24), but the output with the same prompt was poor by comparison to Claude or my ‘writemystyle’ custom GPT, unfortunately (for now and at first attempt). I have made some edits to the post for clarity and to remove some of the wilder excesses of ‘AI cringe’.

 

“The beauty of PAIR is its flexibility,” Oz explained. “Educators can customise each component based on learning objectives, student cohorts, and assignments.” An instructor could opt for closed problem statements tailored to specific lessons, or challenge students to formulate their own open-ended inquiries. Guidelines may restrict AI tool choices, or allow students more autonomy to explore the ever-expanding AI ecosystem. That oversight and guidance needs to come from an informed position, of course.

 

Crucially, by emphasising skills like problem formulation, iterative experimentation, critical evaluation, and self-reflection, PAIR aligns with long-established pedagogical models proven to deepen understanding, such as inquiry-based and active learning. “PAIR is really skill-centric, not tool-centric,” Oz clarified. “It develops capabilities that will be invaluable for working with any AI system, now or in the future.”

 

The early results from the more than a dozen King’s modules, across disciplines like business, marketing and the arts, that have piloted PAIR have been overwhelmingly positive. Students have reported marked improvements in their AI literacy – confidence in understanding these technologies’ current capabilities, limitations, and ethical implications. “Over 90% felt their skills in areas like evaluating outputs, recognising bias, and grasping AI’s broader impact had significantly increased,” Oz shared.

 

While valid concerns around academic integrity have catalysed polarising debates, with some advocating outright bans and restrictive detection measures, Oz makes a nuanced case for an open approach centred on responsible AI adoption. “If we prohibit generative AI for assignments, the stellar students will follow the rules while others will use it covertly,” he argued. “Since even expert linguists struggle to detect AI-written text reliably (especially when it has been manipulated rather than simply churned from a single shot prompt), those circumventing the rules gain an unfair advantage.”

 

Instead, Oz advocates assuming AI usage as an integrated part of the learning process, creating an equitable playing field primed for recalibrating expectations and assessment criteria. “There’s less motivation to cheat if we allow appropriate AI involvement,” he explained. “We can redefine what constitutes an exceptional essay or report in an AI-augmented age.”

 

This stance aligns with PAIR’s human-centric philosophy of ensuring students remain firmly in the driver’s seat, leveraging AI as an enabling co-pilot to materialise and enrich their own ideas and outputs. “Throughout the PAIR process, we have mechanisms like reflective reports that reinforce students’ ownership and agency … The AI’s role is as an assistive partner, not an autonomous solution.”

 

Looking ahead, Oz is energised by generative AI’s potential to tackle substantial challenges plaguing education systems globally – from expanding equitable access to quality learning resources, to easing overstretched educators’ burnout through intelligent process optimisation and tailored student support. “We could make education infinitely better by leveraging these technologies thoughtfully…Imagine having the world’s most patient, accessible digital teaching assistants to achieve our pedagogical goals.”

 

However, Oz also acknowledges legitimate worries about the perils of inaction or institutional inertia. “My biggest concern is that we keep talking endlessly about what could go wrong, paralysed by committee after committee, while failing to prepare the next generation for their AI-infused reality,” he cautioned. Without proactive engagement, Oz fears a bifurcated future where students are either obliviously clueless about AI’s disruptive scope, or conversely, become overly dependent on it without cultivating essential critical thinking abilities.

 

Another risk for Oz is generative AI’s potential to propel misinformation and personalised manipulation campaigns to unprecedented scales. “We’re heading into major election cycles soon, and I’m deeply worried about deepfakes fuelling conspiracy theories and political interference,” he revealed. “But even more insidious is AI’s ability to produce highly persuasive, psychologically targeted disinformation tailored to each individual’s profile and vulnerabilities.”

 

Despite these significant hazards, Oz remains optimistic that responsible frameworks like PAIR can steer education towards effectively harnessing generative AI’s positive transformations while mitigating risks.

 

PAIR Framework- Further information

Previous conversation with Dan Hunter

Previous conversation with Mandeep Gill Sagoo

Generative AI in HE- self study short course

An additional point to note: the recording is of course a conversation between two humans (Oz and Martin) and is unscripted. The Q&A towards the end of the recording was facilitated by a third human (Sanjana). I then compared four AI transcription tools: Kaltura, Clipchamp, Stream and YouTube. Kaltura estimated 78% accuracy, Clipchamp crashed twice, and Stream was (in my estimation) around 90-95% accurate, but its editing/download process is less convenient than YouTube’s in my view, so the final transcript is the one initially auto-generated in YouTube, punctuated by ChatGPT, then re-edited for accuracy in YouTube. Whilst accuracy has improved noticeably in the last few years, the faff is still there. The video itself is hosted in Kaltura.

AI Law

Watch the full video here

In the second AI conversation of the King’s Academy ‘Interfaculty Insights’ series, Professor Dan Hunter, Executive Dean of the Dickson Poon School of Law, shared his multifaceted engagement with artificial intelligence (AI). Prof Hunter discussed the transformative potential of AI, particularly generative AI, in legal education, practice, and beyond. With a long history in the field of AI and law, he offered a unique perspective on the challenges and opportunities presented by this rapidly evolving technology. To say he is firmly in the enthusiast camp is probably an understatement.

[Image: a wooden gavel with ‘AI’ embossed on it]

From his vantage point, Prof Hunter presents the following key ideas:

  1. AI tools (especially LLMs) are already demonstrating significant productivity gains for professionals and students alike but it is often more about the ways they can do ‘scut work’. Workers and students become more efficient and improve work quality when using these models. For those with lower skill levels the improvement is even more pronounced.
  2. While cognitive offloading to AI models raises concerns about losing specific skills (examples of long division or logarithms were mentioned), Prof Hunter argued that we must adapt to this new reality. The “cat is out of the bag” so our responsibility lies in identifying and preserving foundational skills while embracing the benefits of AI.
  3. Assessment methods in legal education (and by implication across disciplines) must evolve to accommodate AI capabilities. Traditional essay writing can be easily replicated by language models, necessitating more complex and time-intensive assessment approaches. Prof Hunter advocates for supporting the development of prompt engineering skills and requiring students to use AI models while reflecting on the process.
  4. The legal profession will undergo a significant shakeup, with early adopters thriving and those resistant to change struggling. Routine tasks will be automated, obligating lawyers to move up the value chain and offer higher-value services. This disruption may lead to the need for retraining.
  5. AI models can help address unmet legal demand by making legal services more affordable and accessible. However, this will require systematic changes in how law is taught and practiced, with a greater emphasis on leveraging AI’s capabilities.
  6. In the short term, we tend to overestimate the impact of technological innovations, while underestimating their long-term effects. Just as the internet transformed our lives over decades, the full impact of generative AI may take time to unfold, but it will undoubtedly be transformative.
  7. Educators must carefully consider when cognitive offloading to AI is appropriate and when it is necessary for students to engage in the learning process without AI assistance. Finding the right balance is crucial for effective pedagogy in the AI era.
  8. Professional services staff can benefit from AI by identifying repetitive, language-based tasks that can be offloaded to language models. However, proper training on responsible AI use, data privacy, and information security is essential to avoid potential pitfalls.
  9. While AI models can aid in brainstorming, generating persuasive prose, and creating analogies, they currently lack the ability for critical thinking, planning, and execution. Humans must retain these higher-order skills, which cannot yet be outsourced to AI.
  10. Embracing AI in legal education and practice is not just about adopting the technology but also about fostering a mindset of change and continuous adaptation. As Prof Hunter notes, “If large language models were a drug, everyone would be prescribed them.” *

The first in the series was Dr Mandeep Gill Sagoo

* First draft of this summary generated from meeting transcript via Claude

Navigating the Path of Innovation: Dr. Mandeep Gill Sagoo’s Journey in AI-Enhanced Education

Dr. Mandeep Gill Sagoo, a Senior Lecturer in Anatomy at King’s College London, is actively engaged in leveraging artificial intelligence (AI) to enhance education and research. Her work is concentrated on three primary projects that integrate AI to address diverse challenges in academic and clinical settings. The following summary (and title and image, with a few tweaks from me) was synthesised and generated in ChatGPT using the transcript of a fireside chat with Martin Compton from King’s Academy. The whole conversation can be listened to here.

[Image: AI-generated image of a path winding through trees in sunlight and shadow]
  1. Animated Videos on Cultural Competency and Microaggression: Dr. Sagoo has led a cross-faculty project aimed at creating animated, thought-provoking videos that address microaggressions in clinical and academic environments. This initiative, funded by the race equity and inclusive education fund, involved collaboration with students from various faculties. The videos, designed using AI for imagery and backdrops, serve as educational tools to raise awareness about unconscious bias and microaggression. They are intended for staff and student training at King’s College London and have been utilised in international collaborations. Outputs will be disseminated later in the year.
  2. AI-Powered Question Generator and Progress Tracker: Co-leading with a second-year medical student and working across faculties with a number of others, Dr. Sagoo received a college teaching fund award to develop this project, which is focused on creating an AI system that generates single best answer questions for preclinical students. The system allows students to upload their notes, and the AI generates questions, tracks their progress, and monitors the quality of the questions. This project aims to refine ChatGPT to tailor it for educational purposes, ensuring the questions are relevant and of high quality.
  3. Generating Marking Rubrics from Marking Schemes: Dr. Sagoo has explored the use of AI to transform marking schemes into detailed marking rubrics. This project emerged from a workshop and aims to simplify the creation of rubrics, which are essential for clear, consistent, and fair assessment. By inputting existing marking schemes into an AI system, she has been able to generate comprehensive rubrics that delineate the levels of performance expected from students. This project not only streamlines the assessment process but also enhances the clarity and effectiveness of feedback provided to students.

Dr. Sagoo’s work exemplifies a proactive approach to incorporating AI in education, demonstrating its potential to foster innovation, enhance learning, and streamline administrative processes. Her projects are characterised by a strong emphasis on collaboration, both with students and colleagues, reflecting a commitment to co-creation and the sharing of expertise in the pursuit of educational excellence.

Contact Mandeep