I just posted this on innovation, and I thought the way I got to it, from an idea that popped into my head while listening to a podcast on the walk to the station, might be interesting for a few reasons. 1. Because we talk a lot about transparency in AI use: the link below, and then the final post, reveal the stages I went through (other than the comprehensive final edit I did in MS Word). 2. I think it shows a lot about the increasing complexity in the nature of authorship. 3. It shows that, where AI augments writing, it is challenging to capture the nature of use and actually ‘be’ transparent, because writing has suddenly become something I can do on the move. And 4. it will challenge many to consider the legitimacy and quality of writing produced in this way. I should also note that this post I did in the regular way, typing directly into the WordPress author window without (not sure why) the built-in spellchecker.
As Mark Twain put it:
“substantially all ideas are second-hand, consciously and unconsciously drawn from a million outside sources, and daily used by the garnerer with a pride and satisfaction born of the superstition that he originated them”
and he is also credited with saying:
“A person with a new idea is a crank until the idea succeeds”
Firstly, what counts as innovation in education? You often hear folk argue that, for example, audio feedback is no innovation as teachers somewhere or other have been doing it for donkey’s years. The more I think, though, the more I’m certain that actions, interventions, experiments and adaptations are rarely innovative by themselves; what matters fundamentally is context. Something that’s been around for years in one field or department might be utterly new and transformative somewhere else.
Objective Structured Clinical Examinations are something I have been thinking about a lot, because I believe they may inspire others to adapt this assessment approach outside the health professions. In medical education, they’re routine. In business or political economy, observable stations to assess performance or professional judgement would probably be deemed innovative. When I chatted with colleagues, they could instantly see how something like that might work in their own context, but with different content, different criteria and perhaps a different ethos. In other words, in terms of the things we might try to show we are doing to evidence the probably impossible to achieve ‘continuous improvement’ agenda, innovation isn’t about something being objectively new; it’s about it being new here. It’s about context, relevance and reapplication.
Innovation isn’t (just) what’s shiny
Ages ago I wrote about the danger and tendency for educators (and their leaders) to be dazzled by shiny things. But we need to move away from equating innovation with digital novelty. The current obsession is AI, unsurprisingly, but it’s easy to get swept along in the sheen of it, especially if, like me, you are a vendor target. This, though, reminds me that there’s a tendency to see innovation as synonymous with technological disruption. But I’d argue the more interesting innovations right now are not just about what AI can do, but how people are responding to it.
Arguable, I know, but I do believe AI offers clear affordances: support for diverse staff and student bodies, support for feedback, marking assistance, rewriting for tone, generating examples or case studies. And there’s real experimentation happening, much of it promising, some of it quietly radical. At the same time I’m seeing teams innovate in the opposite, analogue direction. Not because they’re nostalgic, conservative or anti-tech (though some may be!), but because they’re worried about academic integrity or concerned about the over-automation of thinking. We’re seeing a return to in-person vivas, handwritten tasks, oral assessments. These are not new, but they are being re-justified in light of present challenges. It could be seen as innovation via resistance.
Collaboration as a key component of innovation
In amongst the amazing work reflected on, I see a lot of claims for innovative practice in the many Advance HE fellowship submissions I read as an internal and external reviewer. In some ways, seemingly very similar activities could be seen as innovative in one place and not another. While not a mandatory criterion, innovation is:
Encouraged through the emphasis on evidence-informed practice (V3) and responding to context (V4).
Often part of enhancing practice (A5) via continuing professional development.
Aligned with Core Knowledge K3, which stresses the importance of critical evaluation as a basis for effective practice, and this often involves improving or innovating methods.
In the guidance for King’s applicants, innovation is positioned as a natural outcome of reflective practice.
So while the new PSF (2023) doesn’t promote innovation explicitly, what it does do (and this is new) is promote collaboration. It explicitly recognises the importance of collaboration and working with others, across disciplines, roles and institutions, as a vital part of educational practice. That’s important because, whilst in the past perceptions of innovation have stretched the definition and celebrated individual excellence in this space, many of the most meaningful innovations I’ve seen emerge from collaboration and conversation. This takes us back to Twain and borrowing, adapting, questioning.
We talk of interdisciplinarity (often with considerable insight and expertise, as with my esteemed colleagues Dave Ashby and Emma Taylor), and sometimes big, but often small-scale, contextual innovation comes from these sideways encounters. But these encounters require time, permission and a willingness to not always be the expert in the room. That is something innovators with a lingering sense of the inspired, individual creative may have trouble reconciling.
Failure and innovation
We have a problem with failure in HE. We prefer success stories and polished case studies. But real innovation involves risk: things not quite working, not going to plan. Even failed experiments are educative. Yet often we structure our institutions to minimise that kind of risk, to reward what’s provable, publishable, measurable, successful. I have argued that we do something similar to students. We say we want creativity, risk-taking, deep engagement. But we assess for precision, accuracy, conformity to narrow criteria and expectations. We encourage resilience, then punish failure with our blunt, subjective grading systems. We ask for experimentation but then rank it. So it’s no surprise that staff, like students, can be reluctant to try new things even when encouraged to be creative or experimental.
AI and innovation
I think I am finally getting to my point. The innovation AI catalyses goes far beyond AI use cases. It’s prompting people to re-examine their curricula, reassess assessment designs, rethink what we mean by original thinking or independent learning. It’s forcing conversations we’ve long avoided, about what we value, how we assess, and how we support students in an age of automated possibility. Even WHETHER we should continue to grade. (Incidentally, amongst many fine presentations yesterday at the King’s/Cadmus event on assessment, I heard an inspiring argument against grading by Professor Bugewa Apampa from UEL. It’s so good to hear clearly articulated arguments on the necessity of confronting the issues related to grading from someone so senior.)
Despite my role (AI and Innovation Lead), some of the best innovations I’ve seen aren’t about tech at all. They’re about human decisions in response to tech. They’re about asking, “What do we not want to automate?” or “How can we protect space for dialogue, for process or for pause?”
If we only recognise innovation when it looks like disruption, we’ll miss a lot.
Whilst some (many) academic staff and students voice valid and expansive concerns about the use of, or focus on, artificial intelligence in education, I find myself (finally, perhaps, and later in life) much more pragmatic. We hear the loud voices, and I applaud many acts of resistance, but we cannot ignore the ‘explosive increase’ in AI use by students. It’s here, and that is one driver. More positive drivers of change might be the visible trends and future predictions in the global employment landscape, and the affordances in terms of data analytics and medical diagnostics (for example) that more widely defined AI promises. As I keep saying, this doesn’t mean we need to rush to embrace anything, nor does it imply that educators must become computer scientists overnight. But it does mean something has to give. Rather than (something else I have been saying for a long time) knee-jerk ‘everyone back in the exam halls’ type responses, it’s clear we need to move a wee bit faster in the name of credibility (of the education and the bits of paper we dish out at the end of it), as well as the value to students of what we are teaching and, perhaps more controversially I suppose, how we teach and assess them.
Over the past months, I’ve been working with colleagues to think through what AI’s presence means for curriculum transformation. In discussions with colleagues at King’s most recently, three interconnected areas keep surfacing and this is my effort to set them out:
Content and Disciplinary Shifts
We need to reflect not just on what we might add, but on what we can subtract or reweight. The core question becomes: How is AI reshaping knowledge and practice in this discipline, and how should our curricula respond?
This isn’t about inserting generic “AI in Society” modules everywhere. It’s about recognising discipline-specific shifts and preparing students to work with, and critically appraise and engage with, new tech, approaches, systems and ideas (and the impacts that follow from their implementation). Update 11th June 2025: My colleague, Dr Charlotte Haberstroh, pointed out on reading this that there is an additional important dimension to this, and I agree. She suggests we need to find a way to enable students to question and make connections explicitly: ‘how does my disciplinary knowledge help me (the student) make sense of what’s happening, and how can it inform my analysis of the causes/consequences of how AI is being embedded into our society (within the part that I aim to contribute to in the future / participate in as a responsible citizen)?’ Examples she suggested: in Law it could be around how AI alters the meaning of intellectual property; in HR it’s going to be about AI replacing workers (or not) and/or the business model of the tech firms driving these changes; in History it’s perhaps how we have adopted technologies in the past and how that helps us understand what we are doing now.
Some additional examples of how AI as content crosses all disciplines:
Law: AI-powered legal research and contract review tools (e.g. Harvey) are changing the role of administration in law firms and the roles of junior solicitors.
Medicine: Diagnostic imaging is increasingly supported by machine learning, shifting the emphasis away from manual pattern recognition towards interpretation, communication, and ethical judgement.
Geography: Environmental modelling uses real-time AI data analytics, reshaping how students understand climate systems.
Much of the current debate focuses on the security of assessment in the age of AI. That matters a lot, of course, but if it drives all our thinking (and I feel it is still the dominant narrative in HE spaces), we will double down on distrust and always default to suspicion and restriction, rather than our starting point being design for creativity, authenticity and inclusivity.
The first shift needs to be moving from ‘How do we catch cheating?’ to ‘Where and how can we “catch” learning?’, as well as ‘How do we design assessments that AI can’t meaningfully complete without student learning?’ Does this mean redefining ‘assessment’ beyond the narrow ‘evaluative’ definition we tend to elevate? Probably, yes.
Risk is real: inappropriate, foolish, unacceptable, even malicious use of AI is a real thing too. So robustness by design is important as well: iterative, multi-stage tasks; oral components; personalised data sets; critical reflection. All are possible without reverting to closed-book exams. These have a place but are no panacea.
Computer Science/Coding: With tools like GitHub Copilot generating working code from natural language, assessment may need to focus more on code comprehension, debugging, testing and ethical evaluation, and/or problem-based and collective (and interdisciplinary) tasks.
AI Integration & Critical Literacies
Students need access to AI tools; they need choice (this will be an increasingly big hurdle to navigate); and they need structured opportunities to critique and reflect on their use. This means building critical AI literacy into our programmes or, minimally, into the extra-curricular space, not as a bolt-on but as an embedded activity. Given what I set out above, this will need nuancing to a disciplinary context. It’s happening in pockets but, I would argue, needs more investment and an upping of the pace. Given the ongoing crises in UK HE (if not globally), it’s easy to see why this may not be seen as a priority.
I think we need to do the following for all students. What do you think?
Critical AI literacy (what it is, how it works (and where it doesn’t), all the mess it connotes)
Aligned with better information/digital literacy (how to verify, attribute, trace and reflect on outputs, and triangulate)
Assessment and feedback literacy (how to judge what’s been learned, and how it’s being measured)
Some examples of where disciplines need nuance and a separate focus, and why this is so complex:
English Literature/History/Politics: Is the essay dead? Students can prompt ChatGPT to generate essays, but how are they generating passable essays when so much of the critique is about the banality of homogenised outputs and the lack of anything resembling critical depth? How can we (in a context where anonymous submission is the default) maintain value in something deemed so utterly central to humanities and social science study?
Medical and Nursing education: I often feel observed clinical examinations hold a potential template for wider adoption in non-medical disciplines. And AI simulation tools offer lifelike decision-making environments, so we are seeing increasing exploration of the potential here: the literacy lies in knowing what AI can support, what it cannot do, and how to bridge that gap. Who learns this? Where is the time to do it? How are decisions made about which tools to trial or purchase?
Where to Start: Prompting thoughtful change
These three areas are best explored collectively, in programme teams, curriculum working groups or assessment review/module review teams. I’d suggest that, to begin with, these teams need to discuss the following questions and then move on from there.
Where have you designed assessments that acknowledge AI in terms of the content taught? What might you need to modify looking ahead? (e.g. updated disciplinary knowledge, methodological changes, professional practice)
Have you modified assessments where vulnerability is a concern (e.g. risk of generative AI substitution, over-reliance on closed tasks, integrity challenges)? Have you drawn on positive reasons for change (e.g. scholarship in effective assessment design)?
Have you designed or planned assessments that incorporate, develop or even fully embed AI use? (e.g. requiring students to use, reflect on or critique AI outputs as part of their task)
I do not think this is AI evangelism, though I do accept that some will see it as such, because I do believe that engagement is necessary and actually an ethical responsibility to our students. That’s a tough sell when some of those students are decrying anything with ‘AI’ in it as inherently and solely evil. I’m not trying to win hearts and minds to embrace anything, other than the idea that these technologies need much broader definition and understanding, and that from there we may critique and evolve.
I wanted to drop these two reports in one place. Neither, of course, is concerned with the wider ethical issues of AI, and I do not want to come across as a tech-bro evangelist, but I do think my pragmatism, and the necessary ‘responsible engagement’ approach many institutions are now taking, is buttressed by the (like it or not) trends we are seeing, which are profound. The barriers to change (as seen through the eyes of employers) struck me too, as we can see similar manifestations in educational spaces: skills gaps, cultural resistance and outdated regulation.
The Future of Jobs Report 2025 explores how global labour markets will evolve by 2030 in response to intersecting drivers of change: technological advances (especially AI), economic volatility, demographic shifts, climate imperatives and geopolitical tensions. Based on responses from over 1,000 global employers covering more than 14 million workers, the report predicts large-scale job transformation. New roles equivalent to 14% of current jobs (170 million) are expected to be created, while 8% (92 million) will be displaced, resulting in net growth of around 6%. The transition will be skills-intensive, with 59% of workers needing retraining. Those numbers are enough to make you gasp and drop your coffee.
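Just to make the arithmetic behind those headlines explicit, here is a minimal back-of-the-envelope sketch (purely illustrative; the roughly 1.2 billion jobs baseline is simply what the report’s own 14% figure implies, not a number I am quoting from it):

```python
# Back-of-the-envelope check of the Future of Jobs 2025 headline figures quoted above.
jobs_created = 170_000_000    # new jobs, roughly 14% of current employment (per the report)
jobs_displaced = 92_000_000   # displaced jobs, roughly 8% of current employment (per the report)

implied_baseline = jobs_created / 0.14        # ~1.21 billion jobs implied by the 14% figure (assumption)
net_change = jobs_created - jobs_displaced    # 78 million jobs

print(f"Net change: {net_change:,} jobs (~{net_change / implied_baseline:.0%} growth)")
# -> Net change: 78,000,000 jobs (~6% growth)
```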
PwC’s 2025 Global AI Jobs Barometer presents an incredibly optimistic analysis of how AI is reshaping the global workforce. Drawing on a billion job ads and thousands of company reports from across the globe, it suggests that AI is enhancing productivity, shifting skill demands and increasing the value of workers. Rather than displacing workers, it argues, AI is acting as a multiplier, especially when deployed agentically. The findings provide a counter-perspective to common (and, I’d argue, perfectly reasonable and rational!) fears about AI-induced job losses.
Whilst I am still wearing the biggest of my cynical hats, I concur that the need for urgent investment in skills (and critical engagement) is imperative and, lest we lose any residual handle on shaping the narratives in this space, we need to invest much more of our effort in considering where we need to adapt what we research, the design and content of our curricula, and the critical and practical skills we need to develop. Given the timeframes suggested in these reports, we’d better get on with it.