Innovation, AI and (weirdly) the new PSF

Mark Twain almost certainly said: 

“substantially all ideas are second-hand, consciously and unconsciously drawn from a million outside sources, and daily used by the garnerer with a pride and satisfaction born of the superstition that he originated them” 

and he is also credited with saying:

 “A person with a new idea is a crank until the idea succeeds”

Both takes, even if a little contradictory, relate to the idea of innovation. In this post, which I initially drafted in an interaction with GPT Pro Advance voice chat while walking to work, I have thrown down some things that have been bothering me a bit about this surprisingly controversial word.

Firstly, what counts as innovation in education? You often hear folk argue that, for example, audio feedback is no innovation as teachers somewhere or other have been doing it for donkeys' years. The more I think about it, though, the more certain I am that actions, interventions, experiments and adaptations are rarely innovative by themselves; what matters fundamentally is context. Something that's been around for years in one field or department might be utterly new and transformative somewhere else.

Objective Structured Clinical Examinations are something I have been thinking about a lot because I believe they may inspire others to adapt this assessment approach outside the health professions. In medical education, they're routine. In business or political economy, observable stations to assess performance or professional judgement would probably be deemed innovative. Chatting with colleagues, they could instantly see how something like that might work in their own context, but with different content, different criteria and perhaps a different ethos. In other words, in terms of the things we might try to show we are doing to evidence the probably impossible to achieve 'continuous improvement' agenda, innovation isn't about something being objectively new; it's about it being new here. It's about context, relevance and reapplication.

Innovation isn’t (just) what’s shiny

Ages ago I wrote about the danger of, and tendency for, educators (and their leaders) being dazzled by shiny things. But we need to move away from equating innovation with digital novelty. The current obsession is AI, unsurprisingly, but it's easy to get swept along in the sheen of it, especially if, like me, you are a vendor target. This, though, reminds me that there's a tendency to see innovation as synonymous with technological disruption. But I'd argue the more interesting innovations right now are not just about what AI can do, but how people are responding to it.

Arguable, I know, but I do believe AI offers clear affordances: supporting diverse staff and student bodies, support for feedback, marking assistance, rewriting for tone, generating examples or case studies. And there's real experimentation happening, much of it promising, some of it quietly radical. At the same time I'm seeing teams innovate in the opposite, analogue direction. Not because they're nostalgic, conservative or anti-tech (though some may be!), but because they're worried about academic integrity or concerned about the over-automation of thinking. We're seeing a return to in-person vivas, handwritten tasks, oral assessments. These are not new, but they are being re-justified in light of present challenges. It could be seen as innovation via resistance.

Collaboration as a key component of innovation

In amongst the amazing work reflected on, I see a lot of claims for innovative practice in the many Advance HE fellowship submissions I read as an internal and external reviewer. In some ways, seemingly very similar activities could be seen as innovative in one place and not another. While not a mandatory criterion, innovation is:

  • Encouraged through the emphasis on evidence-informed practice (V3) and responding to context (V4).
  • Often part of enhancing practice (A5) via continuing professional development.
  • Aligned with Core Knowledge K3, which stresses the importance of critical evaluation as a basis for effective practice, and this often involves improving or innovating methods. In the guidance for King's applicants, innovation is positioned as a natural outcome of reflective practice.

So while the new PSF (2023) doesn't promote innovation explicitly, what it does do (and this is new) is promote collaboration. It explicitly recognises the importance of collaboration and working with others, across disciplines, roles and institutions, as a vital part of educational practice. That's important because, whilst in the past perceptions of innovation have stretched the definition and celebrated individual excellence in this space, many of the most meaningful innovations I've seen emerge from collaboration and conversation. This takes us back to Twain and borrowing, adapting, questioning.

We talk of interdisciplinarity (often with considerable insight and expertise, like my esteemed colleagues Dave Ashby and Emma Taylor), and sometimes big but often small-scale, contextual innovation comes from these sideways encounters. But they require time, permission and a willingness to not always be the expert in the room. That is something innovators with a lingering sense of the inspiring, individual creative may have trouble reconciling.

Failure and innovation

We have a problem with failure in HE. We prefer success stories and polished case studies. But real innovation involves risk: things not quite working, not going to plan. Even failed experiments are educative. But often we structure our institutions to minimise that kind of risk, to reward what's provable, publishable, measurable, successful. I have argued that we do something similar to students. We say we want creativity, risk-taking, deep engagement. But we assess for precision, accuracy, conformity to narrow criteria and expectations. We encourage resilience, then punish failure with our blunt, subjective grading systems. We ask for experimentation but then rank it. So it's no surprise if staff, like students, are reluctant to try new things even when encouraged to be creative or experimental.

AI and innovation

I think I am finally getting to my point. The innovation AI catalyses goes far beyond AI use cases. It's prompting people to re-examine their curricula, reassess assessment designs, rethink what we mean by original thinking or independent learning. It's forcing conversations we've long avoided, about what we value, how we assess, and how we support students in an age of automated possibility. Even WHETHER we should continue to grade. (Incidentally, I heard, amongst many fine presentations yesterday at the King's/Cadmus event on assessment, an inspiring argument against grading by Professor Bugewa Apampa from UEL. It's so good to hear clearly articulated arguments on the necessity of confronting the issues related to grading from someone so senior.)

Despite my role (AI and Innovation Lead), some of the best innovations I've seen aren't about tech at all. They're about human decisions in response to tech. They're about asking, "What do we not want to automate?" or "How can we protect space for dialogue, for process or for pause?"

If we only recognise innovation when it looks like disruption, we’ll miss a lot. 

Twain, Mark. Letter to Helen Keller. 17 March 1903 [cited 19 June 2025]. Available from: https://www.afb.org/about-afb/history/helen-keller/letters/mark-twain-samuel-l-clemens/letter-miss-keller-mark-twain-st

CPD for critical AI literacy: do NOT click here.

In 2018, Timos Almpanis and I co-wrote an article exploring issues with Continuous Professional Development (CPD) in relation to Technology Enhanced Learning (TEL). The article, which we published while working together at Greenwich (in Compass: Journal of Learning and Teaching), highlighted a persistent challenge: despite substantial investment in TEL, enthusiasm for it and use among educators remained inconsistent at best. While students increasingly expect technology to enhance their learning, and there is/was evidence to support its potential to improve engagement and outcomes, the traditional transmissive CPD models through which teaching academics were introduced to and supported in TEL could undermine their own purpose. Focusing on technology and systems, as well as using poor (and non-modelling) pedagogy, often gave/gives a sense of compliance over pedagogic improvement.

Because we are both a bit contrary and subversive we commissioned an undergraduate student (Christina Chitoroaga) to illustrate our arguments with some cartoons which I am duplicating here (I think I am allowed to do that?):

We argued that TEL-focused CPD should prioritise personalised and pedagogy-focused approaches over one-size-fits-all training sessions. Effective CPD that acknowledges need, reflects evidence-informed pedagogic approaches and empowers educators by offering choice, flexibility and relevance will also enable them to explore and apply tools that suit their specific teaching contexts and pedagogical needs. By shifting the focus away from the technology itself and towards its purpose in enhancing learning, we can foster greater engagement and creativity among academic staff. This was exactly the approach I tried to apply when rolling out Mentimeter (a student response system to support increasing engagement in and out of class).

I was reminded of this article recently (because of the 'click here; click there' cartoon) when a colleague expressed frustration about a common issue they observed: lecturers teaching 'regular' students (I always struggle with this framing as most of my 'students' are my colleagues – we need a name for that! I will do a poll – got totally distracted by that but it's done now) how to use software using a "follow me as I click here and there" method. Given that the "follow me as I click" is still a thing, perhaps it is time to adopt a more assertive and directive approach. Instead of simply providing opportunities to explore better practices, we may need to be clearer in saying: "Do not do this." I mean, I do not want to be the pedagogy police, but while there is no absolute right way there are some wrong ways, right? Also we might want to think about what this means in terms of the AI elephant in every bloomin' classroom.

The deluge of AI tools and emerging uses of these technologies (willingly and unwillingly, appropriately and inappropriately) means the need for effective upskilling is even more urgent. However we support skill development and thinking time, we need of course to realise it requires moving beyond the "click here, click there" model. In my view (and I am aware this is contested) educators and students need to experiment with AI tools in real-world contexts, gaining experience in how AI is impacting curricula, academic use and, potentially, pedagogic practices. The many valid and pressing reasons why teachers might resist or reject engaging with AI tools (workload, ethical implications, data privacy, copyright, eye-watering environmental impacts or even concern about being replaced by technology) are significant barriers to adoption. But adoption is not my goal; critical engagement is. The conflation of the two in the minds of my colleagues is, I think, a powerful impediment before I even get a chance to bore them to death with a 'click here; click there'. There's no getting away from the necessity of empathy and a supportive approach, one that acknowledges these fears while providing space for dialogue and both critical AND creative applications of responsibly used AI tools. In fact, Alison Gilmour and I wrote about this too! It's like all my work actually coheres!

Whatever the approach, CPD cannot be a one-size-fits-all solution, nor can it rely on prescriptive 'click here, click there' methods. It must be compassionate and dialogic, enabling experimentation across a spectrum of enthusiasm, from the evangelical to the steadfastly resistant. While I have prioritised 'come and play', 'let's discuss', or 'did you know you can…' events, I recognise the need for more structured opportunities to clarify these underpinning values before events begin. If I can find a way to manage such a shift it will help align the CPD with meaningful, exploratory engagement that puts pedagogy and dialogue at the heart of our ongoing efforts to grow critical AI literacy in a productive, positive way that offers something to everyone, wherever they sit on the parallel spectrums of AI skills and beliefs.

Postscript: some time ago I wrote on the WONKHE blog about growing AI literacy and this coincided with the launch of the GEN AI in HE MOOC. We're working on an expanded version, broadening the scope of AI beyond the utterly divisive 'generative' as well as widening the scope to other sectors of education. Release is due in May. It'll be free to access.

AI positions: Where do you stand?

I have been thinking a lot recently about my own and others' positions in relation to AI in education. I'm reading a lot more from the 'ResistAI' lobby and share many perspectives with its core arguments. I likewise read a lot from the tech communities and enthusiastic educator groups, which often get conflated but are important to distinguish given the bloomin' obvious as well as more subtle differences in agenda and motivation (see world domination and profit arguments, for example). I see willing adoption, pragmatic adoption, reluctant adoption and a whole bunch of ill-informed adoption/rejection too. My reality is that staff and students are using AI (of different types) in different ways. Some of this is ground-breaking and exciting, some snag-filled and disappointing, some ill-advised and potentially risky. Existing IT infrastructure and processes are struggling to keep pace, and daily conversations range from 'I have to show you this – it's going to change my life!' to 'I feel like I'm being left behind here' and a lot more besides.

So it was that this morning I saw a post on LinkedIn (who'd have thought the place where we put our CVs would grow so much as an academic social network?) from Leon Furze, who defines his position as 'sitting on the fence'. Initially I thought 'yeah, that's me' but, in fact, I am not actually sitting on the fence at all in this space. I am trying as best I can to navigate a path that can be defined by the broad two-word strategy we are trying to define and support at my place: Engage Responsibly. Constructive resistance and debate are central, but so is engagement with fundamental ideas, technologies, principles and applications. I have for ages been arguing for more nuanced understanding. I very much appreciate evidence-based and experience-based arguments (counter and pro). The waters are muddied, though, with, on the one hand, big tech declarations of educational transformation and revolution (we're always on the cusp, right?) and, on the other, sceptical generalisations like the one I saw gaining social media traction the other day, which went something like:

“Reading is thinking

Writing is thinking

AI is anti-thinking”

If you think that then you are not thinking, in my view. Each of those statements must be contextualised and nuanced. This is exactly the kind of meme-level sound bite that sounds good initially but is not what we should be entertaining as a position in academia. Or is it? Below are some adjectives and definitions of the sorts of positions identified by Leon Furze in the collection linked above and by me and my research partners in crime Shoshi, Olivia and Navyasara. Which one/s would you pick to define your position? (I am aware that many of these terms are loaded; I'm just interested in the broadest sense of where people see themselves, whether they have planted a flag or if they are still looking for a spot as they wander around in the traffic wide-eyed.)

  • Cautious: Educators who are cautious might see both the potential benefits and risks of AI. They might be hesitant to fully embrace AI without a thorough understanding of its implications.
  • Critical: Educators who are critical might take a stance that focusses on one or more of the ethical concerns surrounding AI and its potential negative impacts, such as the risk of AI being used for surveillance or control, or ways in which data is sourced or used.
  • Open minded: Open minded educators might be willing to explore AI’s possibilities and experiment with its use in education, while remaining aware of potential drawbacks.
  • Engaged: Engaged educators actively seek to understand AI, its capabilities and its implications for education. They seek to shape the way AI is used in their field.
  • Resistant: Resistant educators might actively oppose the integration of AI into education due to concerns about its impact on teaching, learning or ethical considerations.
  • Pragmatic: Pragmatic educators might focus on the practical applications of AI in education, such as using it for administrative tasks or to support personalised learning. They might be less concerned with theoretical debates and more interested in how AI can be used to improve their practice.
  • Concerned: Educators who are concerned might primarily focus on the potential negative impacts of AI on students and educators. They might worry about issues like data privacy, algorithmic bias, or the deskilling of teachers.
  • Hopeful: Hopeful educators might see AI as a tool that can enhance education and create new opportunities for students and teachers. They might be excited about AI’s potential to personalise learning, provide feedback and support students with diverse needs.
  • Sceptical: Sceptical educators might question the claims made about AI’s benefits in education and demand evidence to support its effectiveness. They might be wary of the hype surrounding AI and prefer to wait for more research before adopting it.
  • Informed: Informed educators would stay up-to-date with the latest developments in AI and its applications in education. They would understand both the potential benefits and risks of AI and be able to make informed decisions about its use.
  • Fence-sitting: Educators who are fence-sitting recognise the complexity of the issue and see valid arguments on both sides. They may be delaying making a decision until more information is available or a clearer consensus emerges. This aligns with Furze’s own position of being on the fence, acknowledging both the benefits and risks of AI.
  • Ambivalent: Educators experiencing ambivalence might simultaneously hold positive and negative views about AI. They may, for example, appreciate its potential for personalising learning but be uneasy about its ethical implications. This reflects cognitive dissonance, where conflicting ideas create mental discomfort. Furze’s exploration of both the positive potential of AI and the reasons for resisting it illustrates this tension.
  • Time-poor: Educators who are time-poor may not have the capacity to fully (or even partially) research and understand the implications of AI, leading to delayed decisions or reliance on simplified viewpoints.
  • Inexperienced: Inexperienced educators may lack the background knowledge to confidently assess the potential benefits and risks of AI in education, contributing to hesitation or reliance on the opinions of others.
  • Other: whatever the heck you like!

How many did you choose?

Please select two or three and share them via this Mentimeter Link.

I’ll share the responses soon!

Navigating the AI Landscape in HE: Six Opinions

Read my post below or listen to AI me read it. I have to say, I sound very well spoken in this video. To my ears it doesn't sound much like me. For those that know me: what do you think?

As we attempt to navigate the uncharted (as well as expanding and changing) landscapes of artificial intelligence in higher education, it makes sense to reflect on our approaches and understanding. We've done 'headless chicken' mode; we've been in reactive mode. Maybe we can start to take control of the narratives, even if what is ahead of us is disruptive, fast-moving and fraught with tensions. Here are six perspectives from me that I believe will help us move beyond the hype and get on with the engagement that is increasingly pressing but, thus far, inconsistent at best.

1. AI means whatever people think it means

In educational circles, when we discuss AI, we're primarily referring to generative tools like ChatGPT, DALL-E, or Copilot. While computer scientists might argue, with a ton of justification, that this is a narrow definition, it's the reality of how most educators and students understand and engage with AI. We mustn't get bogged down in semantics; instead, we should focus on the practical implications of these tools in our teaching and learning environments whilst taking time to widen some of those definitions, especially when talking with students. Interrogating what we mean when we say 'AI' is, in fact, a great starting point for these discussions.

2. AI challenges our identities as educators

The rapid evolution of AI is forcing us to reconsider our roles as educators. Whether you buy into the traditional framing of higher education this way or not, we're no longer the sole gatekeepers of knowledge, dispensing wisdom from the lectern. However much we might want to advocate for notions of co-creation or discovery learning, the lecturer/teacher as expert is a key component of many of our teacher professional identities. Instead, we need to acknowledge that we're all navigating this new landscape together – staff and students alike. This shift requires humility and a willingness to learn alongside our students. The alternatives? Fake it until you make it? Bury your head? Neither is viable or sustainable. Likewise, this is not something that is 'someone else's job'. HE is being menaced from many corners and workload is one of the many pressures, but I don't see a beneficial path that does not necessitate engagement. If I'm right then something needs to give. Or be made less burdensome.

3. Engage, not embrace

I'm not really a hugger, tbh. My family? Yes. A cute puppy? Probably. Friends? Awkwardly at best. A disruptive tech? Of course not. While some advocate for 'embracing' AI, I prefer the term 'engage'. We needn't love these technologies or accept them unquestioningly, but we do need to interact with them critically and thoughtfully. Rejection or outright banning is increasingly unsupportable, despite the many oft-cited issues. The sooner we at least entertain the possibility that some of our assumptions about the nature of writing, what constitutes cheating and how we best judge achievement may need review, the better.

4. AI-proofing is a fool’s errand

Attempts to create ‘AI-proof’ assessments or to reliably detect AI-generated content are likely to be futile. The pace of technological advancement means that any barriers we create will swiftly be overcome. Many have written on the unreliability and inherent biases of detection tools and the promotion of flawed proctoring and surveillance tools only deepens the trust divide between staff and students that is already strained to its limit.  Instead, we should focus on developing better, more authentic forms of assessment that prioritise critical thinking and application of knowledge. A lot of people have said this already, so we need to build a bank of practical, meaningful approaches, draw on the (extensive) existing scholarship and, in so doing, find ways to better share things that address some of the concerns that are not: ‘Eek, everyone do exams again!’

5. We need dedicated AI champions and leadership

To effectively integrate AI into our educational practices, we need people at all levels of our institutions who can take responsibility for guiding innovations in assessment and addressing colleagues’ questions. This requires significant time allocation and can’t be achieved through goodwill alone. Local level leadership and engagement (again with dedicated time and resource) is needed to complement central policy and guidance. This is especially true of multi-faculty institutions like my own. There’s only so much you can generalise. The problem of course is that whilst local agency is imperative, too many people do not yet have enough understanding to make fully informed decisions.  

6. Find a personal use for AI

To truly understand the potential and limitations of AI, it’s valuable to find ways to develop understanding with personal engagement – one way to do this is to incorporate it into your own workflows. Whether it’s using AI to summarise meeting or supervision notes, create thumbnails for videos, or transform lecture notes into coherent summaries, personal engagement with these tools can help demystify them and reveal practical benefits for yourself and for your students. My current focus is on how generative AI can open doors for neurodivergent students and those with disabilities or, in fact, any student marginalised by the structures and systems that are slow to change and privilege the few.

Navigating the Path of Innovation: Dr. Mandeep Gill Sagoo’s Journey in AI-Enhanced Education

Dr. Mandeep Gill Sagoo, a Senior Lecturer in Anatomy at King's College London, is actively engaged in leveraging artificial intelligence (AI) to enhance education and research. Her work with AI is concentrated on three primary projects that integrate AI to address diverse challenges in academic and clinical settings. The following summary (and title and image, with a few tweaks from me) was synthesised and generated in ChatGPT using the transcript of a fireside chat with Martin Compton from King's Academy. The whole conversation can be listened to here.

[AI-generated image of a path winding through trees in sunlight and shadow]
  1. Animated Videos on Cultural Competency and Microaggression: Dr. Sagoo has led a cross-faculty project aimed at creating animated, thought-provoking videos that address microaggressions in clinical and academic environments. This initiative, funded by the race equity and inclusive education fund, involved collaboration with students from various faculties. The videos, designed using AI for imagery and backdrops, serve as educational tools to raise awareness about unconscious bias and microaggression. They are intended for staff and student training at King’s College London and have been utilised in international collaborations. Outputs will be disseminated later in the year.
  2. AI-Powered Question Generator and Progress Tracker: Co-leading with a second-year medical student and working across faculties with a number of others, Dr. Sagoo received a college teaching fund award to develop this project, which is focused on creating an AI system that generates single best answer questions for preclinical students. The system allows students to upload their notes, and the AI generates questions, tracks their progress, and monitors the quality of the questions. This project aims to refine ChatGPT to tailor it for educational purposes, ensuring the questions are relevant and of high quality.
  3. Generating Marking Rubrics from Marking Schemes: Dr. Sagoo has explored the use of AI to transform marking schemes into detailed marking rubrics. This project emerged from a workshop and aims to simplify the creation of rubrics, which are essential for clear, consistent, and fair assessment. By inputting existing marking schemes into an AI system, she has been able to generate comprehensive rubrics that delineate the levels of performance expected from students. This project not only streamlines the assessment process but also enhances the clarity and effectiveness of feedback provided to students.
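
As an aside, the marking-scheme-to-rubric workflow in the third project is easy to picture as a single structured prompt. The sketch below (in Python) shows one way such a prompt might be assembled; the template wording, the example criteria and the four performance levels are my own assumptions for illustration, not Dr. Sagoo's actual prompt or tooling.

```python
# A minimal, illustrative sketch of turning a marking scheme into a rubric-generation prompt.
# The template wording and the four performance levels below are assumptions for illustration.

RUBRIC_PROMPT_TEMPLATE = """You are helping a lecturer build an assessment rubric.
Below is the existing marking scheme. Convert it into a rubric with one row per
criterion and four performance levels (Distinction, Merit, Pass, Fail). For each cell,
describe observable features of student work at that level.

Marking scheme:
{marking_scheme}
"""

def build_rubric_prompt(marking_scheme: str) -> str:
    """Fill the template with the marking-scheme text before sending it to a generative AI tool."""
    return RUBRIC_PROMPT_TEMPLATE.format(marking_scheme=marking_scheme)

if __name__ == "__main__":
    # A made-up, two-criterion marking scheme used purely to show the assembled prompt.
    scheme = "10 marks: correctly labels the brachial plexus; 5 marks: explains clinical relevance."
    print(build_rubric_prompt(scheme))
```

The design point is simply that the marking scheme goes in whole and the structure of the rubric is specified up front, so the output needs checking rather than building from scratch.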

Dr. Sagoo’s work exemplifies a proactive approach to incorporating AI in education, demonstrating its potential to foster innovation, enhance learning, and streamline administrative processes. Her projects are characterised by a strong emphasis on collaboration, both with students and colleagues, reflecting a commitment to co-creation and the sharing of expertise in the pursuit of educational excellence.

Contact Mandeep

College Teaching Fund: AI Projects – a review of the review by Chris Ince

On Wednesday I attended the mid-point event of the KCL College Teaching Fund projects – each group has been awarded some funding (up to £10,000, though some came in with far smaller budgets) not just to speculate on the possibility of using AI within their discipline and teaching, but to carry out a research project around design and implementation.

Each team had one slide and three minutes to give updates on their progress so far, with Martin acting as compere and facilitator. I started to take notes so that I could possibly share ideas with the faculty that I support (and part-way through thought that I perhaps should have recorded the session and used an AI to summarise each project), but it was fascinating to see links between projects in completely different fields. Some connections and thoughts before each project’s progress so far:

  • The work involving students was carried out in many ways, but pleasingly many projects were presented by student researchers, who had either been part of the initial project bid or who had been employed using CTF funds. Even if only through being surveyed and trialled with, students are involved at all levels of this work, as they should be.
  • Several projects opened with scoping existing student use of gAI in their academic lives and work. This has to be taken with a pinch of salt, as it requires an element of honesty, but King's has been clear that gAI is not prohibited so long as it is acknowledged (and allowed at a local level). What is interesting is that scoping consistently found that students did not seem to be using gAI as much as one might think (about a third); however, their use has been growing throughout the projects and the academic year as they are taught how to use it.
  • That being said, several projects identify how students are sceptical of the usefulness of gAI to them and in some that scepticism grows through the project. In some ways this is quite pleasing, as they begin to see gAI not as a panacea, but as a tool. They’re identifying what it can and can’t do, and where it is and isn’t useful to them. We’re teaching about something (or facilitating), and they’re learning.
  • Training AIs and ChatBots to assist in specific and complex tasks crops up in a number of projects, and they’re trialling some very different methods for this. Some are external, some are developed and then shared with students, and some give students what they need to train them themselves. Evidence that there are so many approaches, and exactly why this kind of networking is useful.
  • There's frequently a heavily patronising perception that young people know more about a technology than older people. It's always more complex than that, but the involvement of students in CTF projects has fostered some sharing of knowledge, as academic staff have seen what students can do with gAI. However, it's been clear that the converse is also true, and that 'we' not only need to teach them but there is a desire for us to. This is particularly notable when we consider equality of access and unfair advantages, and two projects highlight this when they noted students from China had lower levels of familiarity with AI.
And the projects themselves – title and lead, followed by my thoughts on progress so far:

  • How do students perceive the use of genAI for providing feedback (Timothy Pullen): A project from Biochemistry that's focused on coding, specifically AI tools giving useful feedback on coding. Some GTAs have developed short coding exercises that have been trialled with students (they get embedded into Moodle and the AI provides student feedback). This has implications in time saved on the administration of feedback of this kind, but Tim suggests there are 'significant' limits to what customised bots can do here – I need to find out more, and am intrigued around the student perception of this: are there some situations where students would rather have a real person look at their work and offer help?
  • AI-Powered Single Best Answer (SBA) Automatic Question Generation & Enhanced Pre-Clinical Student Progress Tracking (Isaac Ng, student, and Mandeep Sagoo): Isaac, a medical student, presents, and it's interesting that there's quite a clear throughline to producing something that could have commercial prospects further down the line – there's a name and logo! An AI has been 'trained' with resources and question styles that act as the baseline; students can then upload their own notes and the AI uses these to produce questions in an SBA format that is consistent with the 'real' ones. There's a clear focus on making sure that the AI won't generate questions from the material it's been given that are factually wrong. A nice aspect is that all of the questions the AI generates are stored, and in March students are going to be able to vote on other student-AI questions. I'm intrigued about the element of students knowing what a good or bad question is, and do we need to ensure their notes are high-quality first?
  • Co-designing Encounters with AI in Education for Sustainable Development (Caitlin Bentley): Mira Vogel from King's Academy is speaking on the team's behalf – she leads on teaching sustainability in HE. The team have been working on the 'right' scaffolding and framing to find the most appropriate teaching within different areas/subjects/faculties – how to find the best routes. They have a broad range of members of staff involved, so have brought this element into the project itself. The first phase has been recursive – recruiting students across King's to develop materials – Mira has a fun phrase about "eating one's own dog food". They've been identifying common ground across disciplines to find how future work should be organised at scale and wider to tackle 'wicked problems' (I'm sure this is 'pernicious or thorny problems' and not surfer dude 'wicked', but I like the positivity in the thought of it being both).
  • Testing the Frontier – Generative AI in Legal Education and beyond (Anat Keller and Cari Hyde Vaamonde): Trying to bring critical thinking into student use of AI. There's a Moodle page, an online workshop (120 participants) and a focus group day (12 students and staff) to consider this. How does/should/could the law regulate financial institutions? The project focused on the application of assessment marking criteria and typically identified three key areas of failure: structure, understanding, and a lack of in-depth knowledge (interestingly, probably replicating what many academics would report for most assessment failure). The aim wasn't a pass, but to see if a distinction-level essay could be produced. Students were a lot more critical than staff when assessing the essays. (Side note: students anthropomorphised the AI, often using terms like 'them' and 'him' rather than 'it'.) Students felt that while using AI at the initial ideas and creation stage may initially feel more appropriate than using it during the actual essay writing, this was where they lost the agency and creativity that you'd want/find in a distinction-level student – perhaps this is the message to get across to students?
  • Exploring literature search and analysis through the lens of AI (Isabelle Miletich): Another project where the students on the research team get to present their work; it's a highlight of the work, which also has a heavy co-creational aspect. Focused on Research Rabbit: a free AI platform that sorts and organises literature for literature reviews. Y2 focus groups have been used to inform material that is then used with Y1 dental students. There was a 95.7% response rate to the Y1 survey. Resources were produced to form a toolbox for students, mainly guidance for the use of Research Rabbit. There was also a student-produced video on how to use it for Y1s. The conclusion of the project will be narrated student presentations on how they used Research Rabbit.
  • Designing an AI-Driven Curriculum for Employable Business Students: Authentic Assessment and Generative AI (Chahna Gonsalves): Identifying use cases so that academics are better informed about when to put AI into their work. There have been a number of employer-based interviews around how employers are using AI. Student participants are reviewing transcripts to match these to appropriate areas that academics might then slot into the curriculum. An interesting aspect has been that students didn't necessarily know/appreciate how much King's staff did behind the scenes on curriculum development work. It was also a surprise to the team how some employers were not as persuaded by the usefulness of AI (although many were embedding this within work). Some consideration of there being a difference in approach between early adopters and those more reticent.
  • Assessment Innovation integrating Generative AI: Co-creating assessment activities with Undergraduate Students (Rebecca Upsher): Based in Psychology – students described how assessment to them means anxiety and stress or "just a means to get a degree" (probably some work around the latter one for sure). There's a desire for creative and authentic assessment from all sides. The project started by identifying current student use of AI in and around assessment. One focus group (a learning and assessment investigation; clarity of existing AI guidance; suggestions for improvements) and one workshop (students more actively giving suggestions about summative AI uses to staff). Focus on inclusive and authentic assessment, being mindful of neurodiverse students, and the group have been working with the neurodiverse society. Research students have been carrying out the literature review, prepared recruitment materials for groups, and mapped assessment types used in the department. A common thread in the preliminary findings was a desire for assessments to be designed with students, and a shift in power dynamics – interestingly, AI projects like this are fostering the sort of co-design work that could have taken place before AI, but didn't necessarily; academic staff are now valuing what students know and can do with AI (particularly if they know more than we do).
  • Improving exam questions to decrease the impact of Large Language Models (Victor Turcanu): A medicine-based project. Alignment with authentic professional tasks that allow students to demonstrate their understanding and critical and innovative thinking: can students use LLMs to enhance their creativity and wider conceptual reach? The project is using 300 anonymous exam scripts to compare with ChatGPT answers. More specifically it's about asking students their opinion in a question that doesn't have an answer (a novel question embedded within an area of research around allergies – can students design a study to investigate something that doesn't have a known solution: talk about the possibilities, or what they think would be a line of approach to research an answer). LLMs may be able to utilise work that has been published, but cannot draw on what hasn't been published or isn't yet understood. While the project was about students using LLMs, there's also an angle here that it's a way of designing an assessment where an AI can't help as much.
  • Exploring Generative AI in Essay Writing and Marking: A study on Students' and Educators' Perceptions, Trust Dynamics, and Inclusivity (Margherita de Candia): Political science. Working with Saul Jones (an expert on assessment), they've also considered making an essay 'AI proof'. They're using the PAIR framework developed at King's and have designed an assessment using the framework to make a brief they think is AI proof but still allows students to use AI tools. Workshops with students where they write an essay using AI will then be used to refine the assignment brief following a marking phase. If it works they want to disseminate the AI-proof brief for essays to colleagues across the social science faculties; they are also running sessions to investigate student perceptions, particularly around improvements to inclusivity in using AI. An interesting element here is what we consider to be 'AI proof', but also that students will be asked for thoughts on feedback for their essays when half will have been generated by an AI.
  • Student attitudes towards the use of Generative AI in a Foundation Level English for Academic Purposes course and the impact of in-class interventions on these attitudes (James Ackroyd): Action research – King's Foundations, with the team working on English for Academic Purposes. Two surveys through the year and a focus group, with specific in-class interventions on the use of AI. Another survey to follow. Two thirds of students initially said that they didn't use AI at the start of the course (40% of students are from China, where AI is less commonly used due to access restrictions), but half-way through the course two thirds said that they did. Is this King's demystifying things? Student belief in what AI could do reduced during the course, while faith in the micro-skills required for essay writing increased. Lots of fascinating threads of AI literacy and perceptions of it have come out of this so far.
  • Enhancing gAI literacy: an online seminar series to explore generative AI in education, research and employment (Brenda Williams): An online seminar series on the use of AI (because students asked for them online, but there are also more than 2,000 students in the target group and it's the best way to get reach). A consultation panel (10 each of staff/students/alumni) has designed five sessions to be delivered in June. Students have been informed about the course, and a pre-survey to find out about use of AI by participants (and a post-survey) has been prepared. This project in particular has a high mix of staff from multiple areas around King's and highlights that there is more at play within AI than just working with AI in teaching settings.
  • Supporting students to use AI ethically and effectively in academic writing (Ursula Wingate): Preliminary scoping of student use of AI. Focus on fairness and a level playing field: to upskill some students, and to rein in others. Recruited four student collaborators. Four focus groups (23 participants in January). All students reported having used ChatGPT (did this mean in education, or in general?) and there is a wide range of free tools they use. Students are critical and sceptical of AI: they've noticed that it isn't very reliable and have concerns about the IP of others. They're also concerned about not developing their own voice. Sessions designed to focus on some key aspects (cohesion, grammatical compliance, appropriateness of style, etc.) when using AI in academic writing are being planned.
  • Is this a good research question? (Iain Marshall and Kalwant Sidhu): Research topics for possible theses are being discussed at this half-way point of the academic year. Students are consulting chatbots (academics are quite busy, but also supervisors are usually only assigned when project titles and themes are decided – can students have a space to go to beforehand for more detailed input?). The team have been utilising prompt engineering to create their own chatbot to help themselves and others (I think this is through the application of provided material, so students can input this and then follow with their own questions). This does involve students utilising quite a number of detailed scripts and coding, so it is supervised by a team – the aim is that this will be supportive.
  • Evaluating an integrated approach to guide students' use of generative AI in written assessments (Tania Alcantarilla and Karl Nightingale): There are 600 students in the first year of their Bioscience degrees. The team focused on perceptions and student use of AI, the design of a guidance podcast/session, and evaluation of the sessions and then of ultimate gAI use. There were 200 responses to the student survey (which is pretty impressive). Lower use of gAI than expected (a third of students, but this increased after being at King's – mainly by international students). It's now that I've realised people 'in the know' are using gAI and not genAI as I have… am I out of touch?
  • AI-Based Automated Assessment Tools for Code Quality (Marcus Messer and Neil Brown): A project based around the assessment of student-produced code. Here the team have focused on 'chain of thought prompting' – an example is given to the LLM as a gobbet that includes the data, a show of the reasoning steps, and the solution. Typically eight are used before the gAI is asked to apply what it has learned to a new question or other input. Here the team will use this to assess the code quality of programming assignments, including readability, maintainability, and quality. Ultimately the grades and feedback will be compared with human-graded examples to judge the effectiveness of the tool. (A minimal sketch of this prompting pattern follows this list.)
  • Integrating ChatGPT-4 into teaching and assessment (Barbara Piotrowska): Public Policy in the Department of Political Economy – the broad goal was to get students excited about and comfortable with using gAI. Some of the most hesitant students have been the most inventive in using it to learn new concepts. ChatGPT is being used as a co-writer for an assessment – a policy brief (advocacy) – due next week. Teaching is also a part (conversations with gAI on a topic can be used as an example of a learning task).
  • Generative AI for critical engagement with the literature (Jelena Dzakula): Digital Humanities – reading and marking essays where students engage with a small window of literature. Can gAI summarise what are considered difficult articles and chapters for students? An initial survey showed that students don't use tools for this, they just give up. They mainly use gAI for brainstorming and planning, but not for helping their learning. Designing workshops/focus groups to turn gAI into a learning tool, mainly based around complex texts.
  • Adaptive learning support platform using GenAI and personalised feedback (Ievgeniia Kuzminykh): This project aims to embed AI, or at least use it as an integral part, of a programme, where it has access to a lot of information about progress, performance and participation. Moodle has proven quite difficult to work with for this project as the team wanted an AI that would analyse Moodle (to do this a cloned copy was needed, uploaded elsewhere so that it can be accessed externally by the AI). The ChatGPT API not being free has also been an issue. So far, course content, quizzes and answers were utilised and gAI was asked to give feedback and generate new quizzes. A paper on the design of a feedback system is being written and will be disseminated.
  • Evaluating the Reliability and Acceptability of AI Evaluation and Feedback of Medical School Course Work (Helen Oram): Couldn't make the session – updates coming soon!
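
The 'chain of thought prompting' in Marcus Messer and Neil Brown's project is worth a concrete illustration. Below is a minimal sketch, in Python, of how a few-shot chain-of-thought prompt for code-quality feedback might be assembled: worked examples (code, reasoning steps, verdict) placed before the new submission. The example data, the field names and the call_llm stub are my own assumptions for illustration; they are not the team's actual implementation.

```python
# A minimal, illustrative sketch of few-shot chain-of-thought prompting for code-quality
# feedback, loosely following the approach described above. The worked examples, field
# names and the call_llm stub are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class WorkedExample:
    code: str        # the student code "gobbet"
    reasoning: str   # the step-by-step reasoning a marker would follow
    verdict: str     # the final quality judgement

# The project reportedly uses around eight worked examples; two are shown here for brevity.
EXAMPLES = [
    WorkedExample(
        code="def add(a,b):\n    return a+b",
        reasoning="Names are clear; no docstring; spacing around operators is missing.",
        verdict="Readable but minor style issues.",
    ),
    WorkedExample(
        code="def f(x):\n    y=[]\n    for i in range(len(x)):\n        y.append(x[i]*2)\n    return y",
        reasoning="Works, but uses index-based looping and opaque names; a list comprehension would be clearer.",
        verdict="Functionally correct, weak readability.",
    ),
]

def build_prompt(new_submission: str) -> str:
    """Assemble a few-shot chain-of-thought prompt: worked examples first, then the new code."""
    parts = ["You assess the quality of student Python code. Reason step by step, then give a verdict.\n"]
    for ex in EXAMPLES:
        parts.append(f"Code:\n{ex.code}\nReasoning: {ex.reasoning}\nVerdict: {ex.verdict}\n")
    parts.append(f"Code:\n{new_submission}\nReasoning:")
    return "\n".join(parts)

def call_llm(prompt: str) -> str:
    """Placeholder for whichever LLM API is actually used (hypothetical)."""
    raise NotImplementedError("Swap in a real API call here.")

if __name__ == "__main__":
    submission = "def mean(xs):\n    return sum(xs)/len(xs)"
    print(build_prompt(submission))  # inspect the assembled prompt before sending it to a model
```

The point is simply that the worked reasoning comes before the new input; the team's real pipeline, grading criteria and choice of model will of course differ.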

Fascinating stuff. For me, I want to consider how we can take this work from projects that have been funded by the CTF, and use them as ideas and models that departments, academics, and teaching staff can look to when considering teaching, curriculum and assessment in ways where they may not have funding.