What do we call colleagues when they are our students?

I love working with ‘Proper’ students. ‘Actual’ students; ‘Real’ students… I don’t think it’s just me in my teacher educator/ academic developer role who says things like this. It’s one odd side effect of the way we refer to the folk we work with because of the nature of our work. A large part of what many of us do – lecturer/ teacher development, training, CPD and so on – often puts me in a linguistic pickle. Obviously these colleagues are not “students” in the traditional sense, but they’re also more than just “colleagues” in that context. They occupy a unique space, in those moments at least: learners, collaborators, peers… and when I want to write or talk about that I find myself saying awkward things like ‘my students who are of course my colleagues’.

Do we need to neologise? Or has this already been sorted, and I just haven’t heard that all the cool academic developers use the term ‘Learn-o-nauts’ or something?

I tried to think of something but I am up early on the weekend and no-one is about, so I sought some AI assistance. A couple are actually mine but I am too embarrassed to say which.

Vote for your favourite from the (possibly cringeworthy and wholly inadequate) list below and add comments or suggestions in the comments or via Bluesky.

AI positions: Where do you stand?

I have been thinking a lot recently about my own and others’ positions in relation to AI in education. I’m reading a lot more from the ‘ResistAI’ lobby and share perspectives with many of its core arguments. I likewise read a lot from the tech communities and enthusiastic educator groups, which often get conflated but are important to distinguish given the bloomin’ obvious as well as more subtle differences in agenda and motivation (see world domination and profit arguments, for example). I see willing adoption, pragmatic adoption, reluctant adoption and a whole bunch of ill-informed adoption/ rejection too. My reality is that staff and students are using AI (of different types) in different ways. Some of this is ground-breaking and exciting, some snag-filled and disappointing, some ill-advised and potentially risky. Existing IT infrastructure and processes are struggling to keep pace, and daily conversations range from ‘I have to show you this – it’s going to change my life!’ to ‘I feel like I’m being left behind here’ and a lot more besides.

So it was that this morning I saw a post on LinkedIn (who’d have thought the place where we put our CVs would grow so much as an academic social network?) from Leon Furze, who defines his position as ‘sitting on the fence’. Initially I thought ‘yeah, that’s me’ but, in fact, I am not sitting on the fence at all in this space. I am trying as best I can to navigate a path defined by the broad two-word strategy we are trying to define and support at my place: Engage Responsibly. Constructive resistance and debate are central, but so is engagement with fundamental ideas, technologies, principles and applications. I have for ages been arguing for more nuanced understanding. I very much appreciate evidence-based and experiential arguments (counter and pro). The waters are muddied, though, by big tech declarations of educational transformation and revolution on the one hand (we’re always on the cusp, right?) and, on the other, sceptical generalisations like the one I saw gaining social media traction the other day, which went something like:

“Reading is thinking

Writing is thinking

AI is anti-thinking”

If you think that then you are not thinking, in my view. Each of those statements must be contextualised and nuanced. This is exactly the kind of meme-level sound bite that sounds good initially but is not what we should be entertaining as a position in academia. Or is it? Below are some adjectives and definitions for the sorts of positions identified by Leon Furze in the collection linked above and by me and my research partners in crime Shoshi, Olivia and Navyasara. Which one/s would you pick to define your position? (I am aware that many of these terms are loaded; I’m just interested, in the broadest sense, in where people see themselves: whether they have planted a flag or are still looking for a spot as they wander around in the traffic, wide-eyed.)

  • Cautious: Educators who are cautious might see both the potential benefits and risks of AI. They might be hesitant to fully embrace AI without a thorough understanding of its implications.
  • Critical: Educators who are critical might take a stance that focusses on one or more of the ethical concerns surrounding AI and its potential negative impacts, such as the risk of AI being used for surveillance or control, or ways in which data is sourced or used.
  • Open-minded: Open-minded educators might be willing to explore AI’s possibilities and experiment with its use in education, while remaining aware of potential drawbacks.
  • Engaged: Engaged educators actively seek to understand AI, its capabilities and its implications for education. They seek to shape the way AI is used in their field.
  • Resistant: Resistant educators might actively oppose the integration of AI into education due to concerns about its impact on teaching, learning or ethical considerations.
  • Pragmatic: Pragmatic educators might focus on the practical applications of AI in education, such as using it for administrative tasks or to support personalised learning. They might be less concerned with theoretical debates and more interested in how AI can be used to improve their practice.
  • Concerned: Educators who are concerned might primarily focus on the potential negative impacts of AI on students and educators. They might worry about issues like data privacy, algorithmic bias, or the deskilling of teachers.
  • Hopeful: Hopeful educators might see AI as a tool that can enhance education and create new opportunities for students and teachers. They might be excited about AI’s potential to personalise learning, provide feedback and support students with diverse needs.
  • Sceptical: Sceptical educators might question the claims made about AI’s benefits in education and demand evidence to support its effectiveness. They might be wary of the hype surrounding AI and prefer to wait for more research before adopting it.
  • Informed: Informed educators would stay up-to-date with the latest developments in AI and its applications in education. They would understand both the potential benefits and risks of AI and be able to make informed decisions about its use.
  • Fence-sitting: Educators who are fence-sitting recognise the complexity of the issue and see valid arguments on both sides. They may be delaying making a decision until more information is available or a clearer consensus emerges. This aligns with Furze’s own position of being on the fence, acknowledging both the benefits and risks of AI.
  • Ambivalent: Educators experiencing ambivalence might simultaneously hold positive and negative views about AI. They may, for example, appreciate its potential for personalising learning but be uneasy about its ethical implications. This reflects cognitive dissonance, where conflicting ideas create mental discomfort. Furze’s exploration of both the positive potential of AI and the reasons for resisting it illustrates this tension.
  • Time-poor: Educators who are time-poor may not have the capacity to fully (or even partially) research and understand the implications of AI, leading to delayed decisions or reliance on simplified viewpoints.
  • Inexperienced: Inexperienced educators may lack the background knowledge to confidently assess the potential benefits and risks of AI in education, contributing to hesitation or reliance on the opinions of others.
  • Other: whatever the heck you like!

How many did you choose?

Please select two or three and share them via this Mentimeter Link.

I’ll share the responses soon!

AI in healthcare pulse check

I have been interested recently in the ways in which AI is being integrated into healthcare, as part of my personal goal to widen my understanding and broaden my own definition of AI. I see an increasing need to do this, both as part of growing awareness and literacy and as a way to show that AI is impacting curricula well beyond the ongoing kerfuffle around generative AI and assessment integrity. This panel, chaired at a recent event by Professor Dan Nicolau Jr, was recommended to me; it looked at the many barriers to advances in a context where early detection, monitoring, business models and data availability shape the ways in which we do medicine and advance it, in a world where ageing populations present an existential threat to global healthcare systems. It struck me when I watched it how much the potentials and barriers expressed here will likely be mirrored in other disciplines. Medicine does seem to be an effective bellwether, though.

Some of the issues that stood out:

Data availability and validity: Just as healthcare AI can produce skewed results from over-represented organisms in protein design, we see similar issues of data bias emerging across AI applications. The challenges around electronic health records – inconsistent, incomplete and error-prone – mirror concerns about data quality in other domains.

Business models and willingness/ ability to use what is available: The difficulty in monetising preventative AI applications in medicine, for example, reflects broader questions about how we value different types of AI innovation. Similarly, the need to shift mindsets from reactive to proactive approaches in healthcare has parallels with cultural change required for effective AI adoption elsewhere. The comments from the panel about human propensities NOT to use devices or take medicines that will help them are quite shocking but still somehow unsurprising. Cracking that, according to the panel, would increase life expectancy more than finding a cure for cancer.

The regulatory landscape: The NHS’s procurement processes, which can stifle AI innovation, demonstrate how existing institutional frameworks may need significant adaptation. This raises important questions about how we balance innovation with appropriate oversight – something all sectors grappling with AI must address.

For me, healthcare exemplifies the complex relationship between technical capability and human behaviour. The adoption issue is obviously one that has parallels with willingness/ openness to using novel technologies, even where they can be shown to make life better or easier. The panel’s observations about patient compliance mirror wider challenges around user adoption and engagement with AI systems. We cannot separate the technology from the human context in which it operates.

Bots with character

This is a swift intro to Character AI (see note 1), a tool that is currently available to use for free (on a freemium model). My daughter showed it to me some months ago. It appears to be a novelty app but is used (as I understand it) beyond entertainment for creative activity, gaming, role playing and even emotional support. For me it is the potential to test the ideas many have about bot potential for learning that is most interesting. By shifting focus away from ‘generating essays’, it is possible to see the appeal of natural language exchanges to augment learning in a novel medium. While I can think of dozens of use cases based on the way I currently (for example) use YouTube to help me learn how to unblock a washing machine, I imagine that is a continuum that goes all the way up to teacher replacement (see note 2). Character AI is built on a large language model, employs ‘reinforcement’ (learning as conversations continue) and provides an easy-to-learn interface (basically typing stuff in boxes) that allows you to ground the bot with ease in a WYSIWYG interface.

As I see it, it offers three significant modifications to the default interface of standard (free) LLMs. 1. You can create characters and define their knowledge and ‘personality’ traits, with space to ground the bot’s behaviour through customisation. 2. You can have voice exchanges by ‘calling’ the character. 3. Most importantly, it shifts the technology back to interaction and away from lengthy generation (though they can still go on a bit if you don’t bake succinctness in!). What interests me most is the potential to use tools like this to augment learning, add some novelty and provide reinforcement opportunities through text- or voice-based exchanges. I have experimented with creating some academic archetypes for my students to converse with. This one is a compassionate pedagogue, this one is keen on AI for teaching and learning, this one a real AI sceptic, this one deeply worried about academic integrity. They each have a back story, a defined university role and expertise. I tried to get people to test arguments and counter-arguments and to work through difficult academic encounters. It’s had mixed reviews so far: some love it; some REALLY do not like it at all!

How do/ could you use a tool like this?

Note 1: This video in no way connotes promotion or recommendation (by me or by my employer) of this software. Never upload data you are not comfortable sharing and never upload your own or others’ personal data.

Note 2: I am not a proponent of this! There may be people who think this is the panacea for chronic educational underfunding, though, so beware.

Meet my slightly posher, potentially evil twin

I have been awestruck by the capabilities of tools like HeyGen and Synthesia in the way they can create videos voiced by AI avatars and, with HeyGen in particular, translate from one language to another. The latest beta tool in HeyGen enables someone with limited technical skills (i.e. me) to create an AI avatar of themselves. This is a screen recording of me conversing with my twin. I could choose to speak with it/ him in a number of languages and on topics outside the grounding, though it is bland and vague in those spaces. Apparently, with some nudging away from the highbrow, his Hindi is pretty good and his French sounds Quebecois. I grounded it in the King’s central guidance on GenAI and a few other things I have written. For now I am only sharing the recording while I get to grips with the implications of this. I have honed the base prompt so that the twin is crisper in response and doesn’t waffle on. What do you think of this? What are genuine educational use cases that are not about putting humans out of work?

Conversing with AI: Natural language exchanges with and among the bots

In the fast evolving landscape of AI tools, two recent releases have really caught my attention: Google’s NotebookLM and the advanced conversational features in ChatGPT. Both offer intriguing possibilities for how we might interact with AI in more natural, fluid ways.

NotebookLM, still in its experimental stage and free to use, is well worth exploring. As one of my King’s colleagues pointed out recently: it’s about time Google did something impressive in this space! Its standout feature is the ability to generate surprisingly natural-sounding ‘auto podcasts’. I’ve been particularly struck by how the AI voice avatars exchange and overlap in their speech patterns, mimicking the cadence of real conversation. This authenticity is both impressive and slightly unsettling; at least two colleagues thought they were listening to human exchanges.

I tested this feature with three distinct topics:

Language learning in the age of AI (based on three online articles):

A rather flattering exchange about my blog posts (created in fact by my former colleague Gerhard Kristandl – I’m not that egotistical):

A summary of King’s generative AI guidance:

The results were remarkably coherent and engaging. Beyond this, NotebookLM offers other useful features such as the ability to upload multiple file formats, synthesise high-level summaries, and generate questions to help interrogate the material. Perhaps most usefully, it visually represents the sources of information cited in response to your queries, making the retrieval-augmented generation process transparent.
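NotebookLM’s internals are proprietary, but the retrieval-augmented generation pattern it makes visible can be sketched in a few lines. This is a toy illustration under stated assumptions: the source snippets and function names are hypothetical, scoring is crude word overlap rather than embeddings, and the final model call is stubbed out as a comment.

```python
# Toy retrieval-augmented generation (RAG) sketch: retrieve relevant sources,
# keep the citations attached, and only then hand the context to a model.
from collections import Counter

# Hypothetical stand-ins for uploaded documents.
SOURCES = {
    "campbell": "AI applications in language learning bring benefits and challenges.",
    "ohrband": "Human interaction remains crucial despite AI tools in language acquisition.",
    "park": "Language learning retains cultural and personal value in an AI-driven world.",
}

def retrieve(query: str, sources: dict[str, str], k: int = 2) -> list[str]:
    """Rank sources by crude word overlap with the query and keep the top k."""
    q = Counter(query.lower().split())
    scores = {
        name: sum(q[word] for word in text.lower().split())
        for name, text in sources.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

def answer_with_citations(query: str) -> dict:
    """Keep the retrieved source names attached so the grounding stays visible."""
    cited = retrieve(query, SOURCES)
    context = " ".join(SOURCES[name] for name in cited)
    # A real system would now send `context` plus `query` to an LLM and
    # display the citations alongside the generated answer.
    return {"query": query, "citations": cited, "context": context}
```

The point of the sketch is the design choice NotebookLM surfaces: because the citations travel with the answer, you can check the response against the sources rather than taking the generation on trust.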

The image is a screenshot of a NotebookLM (experimental) interface with a note titled "Briefing Document: Language Learning in the Age of AI." It includes main themes and insights from three sources on the relationship between artificial intelligence (AI) and language learning:

1. **"Language Learning in the Age of AI" by Richard Campbell**: Discusses AI applications in language learning, highlighting both benefits and challenges.
2. **"The Future of Language Learning in an Age of AI" by Gerhard Ohrband**: Emphasizes that human interaction remains crucial despite AI tools in language acquisition.
3. **"The Timeless Value of Language Learning in the Age of AI" by Sungho Park**: Focuses on the cultural and personal value of language learning in an AI-driven world.

The note then expands on important ideas, specifically on the transformative potential of AI in language learning, such as personalized learning and 24/7 accessibility through AI-driven platforms.

Meanwhile, the latest update to ChatGPT’s advanced voice feature (not available in the EU, by the way) has addressed previous latency issues, resulting in a much more realistic exchange. To test this, I engaged in a brief conversation, asking it to switch accents mid-dialogue. The fluidity of the interaction was notable, feeling much closer to a natural conversation than previous iterations. Watch here:

What struck me during this exchange was how easily I slipped into treating the AI as a sentient being. At one point, I found myself saying “thank you”, while at another I felt a bit bad when I abruptly interrupted. This tendency to anthropomorphise these tools is deeply ingrained and hard to avoid, especially as the interactions become more natural. It raises interesting questions about how we relate to AI and whether this human-like interaction is beneficial or potentially problematic.

These developments challenge our conventions around writing and authorship. As these tools become more sophisticated, the line between human and AI-generated content blurs further. What constitutes a ‘valid’ tool for authorship in this new landscape? How do we navigate the ethical implications of using AI in this way?

What are your thoughts on these developments? How might you see yourself using tools like NotebookLM or the advanced ChatGPT in your work?

Sources used for the Language ‘podcast’:

  1. “Language Learning in the Age of AI” by Richard Campbell
  2. “The Future of Language Learning in an Age of AI” by Gerhard Ohrband
  3. “The Timeless Value of Language Learning in the Age of AI” by Sungho Park

The Essay in the Age of AI: a test case for transformation

We need to get beyond entrenched thinking. We need to see that we are at a threshold of change in many of the ways that we work, write, study, research and so on. Large language models as a key development in AI (with ChatGPT as a symbolic shorthand for that) have led to some pretty extreme pronouncements. Many see them as an existential threat, heralding the ‘death of the essay’, for example. These narratives, though, are unhelpful as they oversimplify a complex issue and mask long-standing, evidence-informed calls for change in educational assessment practices (and wider pedagogic practices). The ‘death of the essay’ narratives do, though, give us an opportunity to interrogate the thinking and (mis)understandings that underpin these discourses and tensions. We have a chance to challenge tacit assumptions about the value and purpose of essays as one aspect of educational practice that has been considered an immutable part of the ways learning and the evaluation of that learning happen. We are at a point where it is not just people like me (teacher trainers; instructional designers; academic developers; enthusiastic tech fiddlers; contrarians; compassionate & critical pedagogues; disability advocates etc.) who are voicing concerns about conventional practices. My view is that we leverage the heck out of this opportunity and find ways to effect change that is meaningful, scalable, responsive and coherent.

So it was that, in a conversation over coffee (in my favourite coffee shop in the Strand area) about these things with Claire Gordon (Director of the Eden Centre at LSE), we decided to use the essay as a stimulus for a synthesis of thinking and to evolve a Manifesto for the Essay (and other long-form writing) in the age of AI. To explore these ideas further, we invited colleagues from King’s College London and the London School of Economics (as well as special guests from Richmond American University and the University of Sydney) to a workshop. We explored questions like:

  • What are the core issues and concerns surrounding essays in the age of AI?
  • What alternatives might we consider in our quest for validity, reliability and authenticity?
  • Why do some educators and students love the essay format, and why do others not?
  • What is the future of writing? What gains can we harness, especially in terms of equity and inclusion?
  • How might we conceptualise human/hybrid writing processes?

A morning of sharing research, discussion, debate and reflection enabled us to draft and subsequently hone and fine-tune a collection of provocations which we have called a ‘Manifesto for the Essay in the Age of AI’.

I invite you to read our full manifesto and the accompanying blog post outlining our workshop discussions. As we navigate this period of significant change in higher education, it’s crucial that we engage in open, critical dialogue about the future of assessment.

What are your thoughts on the role of essays in the age of AI? Or, indeed, how assessment and teaching will change shape over the next few years? I welcome your comments and reflections below.

Situationally oblivious

I’m reading Leopold Aschenbrenner’s extended collection of essays, Situational Awareness: The Decade Ahead (June 2024). It made me think about so many things, so I thought I’d start the academic year on my blog with a breathless brain dump. You never know, I might need one for every chapter! In the first chapter Aschenbrenner extrapolates predictions about AI capabilities in the near future from several examples, stating: “I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer.”

The (heavily caveated!) prediction of near-future AGI and the replacement of cognitive jobs within a timeframe that doesn’t even see me to retirement is bold, but definitely not set out in a terminator/ tin foil hat way either. Given the structured and systematic approach, including pushing us to engage with our reactions at stages in the very recent past (‘this is what impressed us in 2019!’), it is hard not to be convinced by the extrapolations. For a relative layperson like me, the trajectory from GPT-2 to GPT-4 has indeed been jaw-dropping, and I definitely feel the described amazement at how we (humans) so quickly normalise things that dropped our jaws so recently. But extrapolating this progress linearly still seems improbable to my simple brain (as if to prove this ‘simple’ assertion to myself, I accidentally typed ‘brian’ three times before hitting on the correct spelling). The challenges of data availability and quality, algorithmic breakthroughs and hardware limitations acknowledged in the chapter are not trivial hurdles to overcome, as I understand it, but this first chapter seems to promise me a challenge to my thinking. Neither are the scaling issues and the relative money and environmental costs, which must be a top priority whichever lens on capitalism you are looking through.

That being said, the potential for AI to reach or exceed PhD-level expertise in many fields by 2027 is sobering, though I remain sceptical about the ways in which each new iteration ‘achieves’ the benchmarks: much of the ‘achievement’ masks the very real and essential human interventions, but then compares human and AI ability relatively, in an apples-versus-oranges way, without acknowledging those essential leg-ups. It reminds me a bit of some of the controversies around what merits celebration of achievement in this year’s Olympics: ‘acceptable’ and ‘unacceptable’ levels of privilege, augmentation, diets, birth differences and so on are largely masked and set aside behind narratives of wonder, until someone with an agenda picks on something as if to reveal, as a surprise, that Olympic athletes are actually very different from the vast majority of us (some breakdancers notwithstanding).

If the near-future cognitive performance predictions are realised, this will have profound implications for higher education and the job market. The current tinkering around the edges, as we blunder towards prudent and risk-averse change, may seem quaint much sooner than many imagine, and it definitely keeps me awake at night, tbh. So, yes, human intelligence encompasses more than just information processing and recall, but we shouldn’t ignore the success against benchmarks that exist, irrespective of any frailty in them in terms of design or efficacy. Aschenbrenner says one lesson from the last decade is that we shouldn’t bet against deep learning. We can certainly see how ‘AI can’t do x’ narratives so often and so swiftly make fools of us. Aschenbrenner shares in that first chapter this image from ‘Our World in Data’, which has a dynamic/ editable version. The data come from Kiela et al. (2023), who acknowledge that benchmark ‘bottlenecking’ is a hindrance to progress but that significant improvements are in train.

Look at the abilities in image recognition, for example. Based on some books and papers I read from the late 2010s and early 2020s, I get the sense that, even within much of the AI community, the abilities of AI systems in that domain will have come as a big surprise. By way of illustration, here is my alt text for the image above, which I share unedited from ChatGPT:

A graph showing the test scores of AI systems on various capabilities relative to human performance. The graph tracks multiple AI capabilities over time, with human performance set as a baseline at 0. The AI capabilities include reading comprehension, image recognition, language understanding, speech recognition, and others. As the lines cross the zero line, it indicates that AI systems have surpassed human performance in those specific areas.

I asked for a 50-word overview suitable for alt text. Not only does it save me labour (without, I should add, diminishing the cognitive labour necessary to get my head around what I am looking at), it also tells me there’s no excuse not to alt-text things now that we can streamline workflows with tools that can support me in this very way.
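For anyone wanting to fold this into a workflow, the idea can be sketched in a few lines of Python. This is a hypothetical helper, not the tool I used: it builds the alt-text request and enforces the word budget locally, while the actual call to a vision-capable model is left as a comment because the exact API will vary.

```python
def build_alt_text_prompt(word_limit: int = 50) -> str:
    """Prompt asking a vision-capable model for concise, screen-reader-friendly alt text."""
    return (
        f"Describe this image in at most {word_limit} words, suitable for alt text: "
        "plain language, no 'image of' preamble, focus on the information the "
        "chart conveys."
    )

def enforce_word_budget(text: str, word_limit: int = 50) -> str:
    """Trim a model response that overshoots the requested word count."""
    words = text.split()
    if len(words) <= word_limit:
        return text
    return " ".join(words[:word_limit]) + "…"

# In practice you would send build_alt_text_prompt() together with the image
# to whichever vision model you have access to, then pass the reply through
# enforce_word_budget() before pasting it into your CMS or slide deck.
```

Models don’t always respect a word limit stated in the prompt, hence the local trim as a belt-and-braces step.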

The nuanced aspects of creativity, emotional intelligence and contextual understanding may prove more challenging for AI to replicate meaningfully, but even there we are being challenged to define what is human about human cognition, emotion, art and creativity. As educators, the challenges are huge. Even if the extrapolations are only 10% right, this connotes disruption like never before. For me, in the short term, it suggests we need to double down on the endeavours many of us have been pushing in terms of redefining what an education means at UG and PG level, what is valuable, and how we augment what we do and how we learn with AI. We can’t ignore it, that’s for sure, whether we wish to or not. We should be preparing our students for a world where AI is a powerful tool, so that we can avert drifts towards dangerous replacement of human cognition and decision-making, at the very least.

Kiela, D., Thrush, T., Ethayarajh, K., & Singh, A. (2023) ‘Plotting Progress in AI’, Contextual AI Blog. Available at: https://contextual.ai/blog/plotting-progress (Accessed: 20 Aug 2024).


Navigating the AI Landscape in HE: Six Opinions

Read my post below or listen to AI me read it. I have to say, I sound very well spoken in this video. To my ears it doesn’t sound much like me. For those who know me: what do you think?

As we attempt to navigate the uncharted (as well as expanding and changing) landscapes of artificial intelligence in higher education, it makes sense to reflect on our approaches and understanding. We’ve done ‘headless chicken’ mode; we’ve been in reactive mode. Maybe we can start to take control of the narratives, even if what is ahead of us is disruptive, fast-moving and fraught with tensions. Here are six perspectives from me that I believe will help us move beyond the hype and get on with the engagement that is increasingly pressing but, thus far, inconsistent at best.

1. AI means whatever people think it means

In educational circles, when we discuss AI, we’re primarily referring to generative tools like ChatGPT, DALL-E or Copilot. While computer scientists might argue (with a ton of justification) that this is a narrow definition, it’s the reality of how most educators and students understand and engage with AI. We mustn’t get bogged down in semantics; instead, we should focus on the practical implications of these tools in our teaching and learning environments, whilst taking time to widen some of those definitions, especially when talking with students. Interrogating what we mean when we say ‘AI’ is, in fact, a great starting point for these discussions.

2. AI challenges our identities as educators

The rapid evolution of AI is forcing us to reconsider our roles as educators. Whether you buy into this traditional framing of higher education or not, we’re no longer the sole gatekeepers of knowledge, dispensing wisdom from the lectern. However much we might want to advocate for notions of co-creation or discovery learning, the lecturer/ teacher as expert is a key component of many of our teacher professional identities. Instead, we need to acknowledge that we’re all navigating this new landscape together, staff and students alike. This shift requires humility and a willingness to learn alongside our students. The alternatives? Fake it until you make it? Bury your head? Neither is viable or sustainable. Likewise, this is not ‘someone else’s job’. HE is being menaced from many corners and workload is one of the many pressures, but I don’t see a beneficial path that does not necessitate engagement. If I’m right, then something needs to give. Or be made less burdensome.

3. Engage, not embrace

I’m not really a hugger, tbh. My family? Yes. A cute puppy? Probably. Friends? Awkwardly at best. A disruptive tech? Of course not. While some advocate for ‘embracing’ AI, I prefer the term ‘engage’. We needn’t love these technologies or accept them unquestioningly, but we do need to interact with them critically and thoughtfully. Rejection or outright banning is increasingly unsupportable, despite the many oft-cited issues. The sooner we at least entertain the possibility that some of our assumptions about the nature of writing, what constitutes cheating and how we best judge achievement may need review, the better.

4. AI-proofing is a fool’s errand

Attempts to create ‘AI-proof’ assessments or to reliably detect AI-generated content are likely to be futile. The pace of technological advancement means that any barriers we create will swiftly be overcome. Many have written on the unreliability and inherent biases of detection tools, and the promotion of flawed proctoring and surveillance tools only deepens the trust divide between staff and students that is already strained to its limit. Instead, we should focus on developing better, more authentic forms of assessment that prioritise critical thinking and application of knowledge. A lot of people have said this already, so we need to build a bank of practical, meaningful approaches, draw on the (extensive) existing scholarship and, in so doing, find better ways to share approaches that address these concerns without amounting to: ‘Eek, everyone do exams again!’

5. We need dedicated AI champions and leadership

To effectively integrate AI into our educational practices, we need people at all levels of our institutions who can take responsibility for guiding innovations in assessment and addressing colleagues’ questions. This requires significant time allocation and can’t be achieved through goodwill alone. Local level leadership and engagement (again with dedicated time and resource) is needed to complement central policy and guidance. This is especially true of multi-faculty institutions like my own. There’s only so much you can generalise. The problem of course is that whilst local agency is imperative, too many people do not yet have enough understanding to make fully informed decisions.  

6. Find a personal use for AI

To truly understand the potential and limitations of AI, it’s valuable to find ways to develop understanding with personal engagement – one way to do this is to incorporate it into your own workflows. Whether it’s using AI to summarise meeting or supervision notes, create thumbnails for videos, or transform lecture notes into coherent summaries, personal engagement with these tools can help demystify them and reveal practical benefits for yourself and for your students. My current focus is on how generative AI can open doors for neurodivergent students and those with disabilities or, in fact, any student marginalised by the structures and systems that are slow to change and privilege the few.

AI3*: Crossing the streams of artificial intelligence, academic integrity and assessment innovation

*That’s supposed to read AI3 but the title font refuses to allow superscript!

Yesterday I was delighted to keynote at the Universities at Medway annual teaching and learning conference. It’s a really interesting collaboration of three universities: the University of Greenwich, the University of Kent and Canterbury Christ Church University. Based at the Chatham campus in Medway, you can’t help but notice the history the moment you enter the campus. Having worked at Greenwich for five years I was familiar with the campus but, as was always the case during my time there, I experienced a moment of awe on seeing the buildings again. It’s actually part of the Chatham Dockyard World Heritage site and features the remarkable Drill Hall library. The reason I’m banging on about history is that such an environment really underscores for me some of those things that are emblematic of higher education in the United Kingdom (especially for those who don’t work or study in it!)

It has echoes of cultural shorthands and memes of university life that remain popular in representations of campus life and study. It’s definitely a bit out of date (and overtly UK-centric) like a lot of my cultural references, but it made me think of all the murders in the Oxford-set crime drama ‘Morse’. The campus locations fossilised for a generation the idea of ornate buildings, musty libraries and deranged academics. Most universities of course don’t look like that and by and large academics tend not to be too deranged. Nevertheless we do spend a lot of time talking about the need for change and transformation whilst merrily doing things the way we’ve done them for decades if not hundreds of years. Some might call that deranged behaviour. And that, in essence, was the core argument of my keynote: for too long we have twiddled around the edges, but there will be no better opportunity than now, with machine-assisted leverage, to do the things that give the lie to the idea that universities are seats of innovation and dynamism. Despite decades of research that have helped define broad principles for effective teaching, learning, assessment and feedback, we default to lecture – seminar and essay – report – exam across large swathes of programmes. We privilege writing as the principal mechanism of evidencing learning. We think we know what learning looks like, what good writing is, what plagiarism and cheating are, but a couple of quick scenarios put to a room full of academics invariably reveal a lack of consensus and a mass of tacit, hidden and sometimes very privileged understandings of those concepts.

Employing the undoubtedly questionable metaphor and unashamedly dated (1984) concept of ‘crossing the streams’ from the original Ghostbusters film, I argued that there are several parallels with the situation the citizens of New York found themselves in way back when, not least for the academics (initially mocked and defunded) who confront the paranormal manifestations in their Ghostbusters guises. First come the appearances of a trickle of ghosts and demons, followed by a veritable deluge. Witness ChatGPT’s release, the unprecedented sign-ups and the ensuing 18 months wherein everything now has AI (even my toothbrush). There’s an AI for That has logged 12,982 AIs to date, to give an indication of that scale (I need to watch the film again to get an estimate of the number of ghosts). Anyway, early in the film we learn that a ghost-catching device called a ‘Proton Pack’ emits energy streams but:


“The important thing to remember is that you must never under any circumstances, cross the streams.” (Dr Egon Spengler)

Inevitably, of course, the resolution to the escalating crisis is the necessity of crossing the streams to defeat and banish the ghosts and demons. I don’t think that generative AI is something that could or should be defeated and I definitely do not think that an arms race of detection and policing is the way forward either. But I do think we need to cross the streams of the three AIs – Artificial Intelligence, Academic Integrity and Assessment Innovation – to help realise the long-needed changes.

Artificial Intelligence represents the catalyst not the reason for needing dramatic change.

Academic Integrity as a goal is fine but too often connotes protected knowledge, archaic practices, inflexible standards and a resistance to evolution.

Assessment innovation is the place where we can, through common language and understanding, bring two camps together. On one side are the perhaps more traditional or conservative voices, concerned about the perceived robustness of assessments in a world where generative AI exists and is increasingly integrated into familiar tools. On the other are what might be seen as more progressive voices who, well before ChatGPT, were arguing for more authentic, dialogic, process-focussed and, dare I say it, de-anonymised and humanly connected assessments.

Here is our opportunity. Crossing the streams may be the only way we mitigate a drift to obsolescence! My concluding slide showed a (definitely NOT called Casper) friendly ghost which, I hope, connoted the idea that what we fear is the unknown, but as we come to know it we find ways to shift from (sometimes aggressive) engagement to understanding and perhaps even an ‘embrace’, as many who talk of AI encourage us to do.

Incidentally, I asked the Captain (in my custom bot ‘Teaching Trek: Captain’s Counsel’) a question about change and he came up with a similar metaphor:

“Blow Up the Enterprise: Sometimes, radical changes are necessary. I had to destroy the Enterprise to save my crew in “Star Trek III: The Search for Spock.” Academics should learn when to abandon a failing strategy and embrace new approaches, even if it means starting over.”

In a way I think I’d have had an easier time if I’d stuck with Star Trek metaphors. I was gratified to note that ‘The Search for Spock’ was also released in 1984. An auspicious year for dated cultural references from humans and bots alike.

—————–

Thanks:

The conference itself was great and I am grateful to Chloe, Emma, Julie and the team for organising it and inviting me.

Earlier in the day I was inspired by presentations by colleagues from the three universities: Emma, Jimmy, Nicole, Stuart and Laura. The student panel was great too – it started strongly with a rejection of the characterisation of students as idle and disinterested and carried on forcefully from there! Special thanks too to David Bedford (who I first worked with something like 10 years ago), who uses an analytical framework of his own devising called ‘BREAD’ as an aid to informing critical information literacy. His session adapted the framework for AI interactions and it prompted a question which led, over lunch, to me producing a (rough and ready) custom GPT based on it.

I should also acknowledge the works I referred to: Sarah Eaton, whose work on the six tenets of post-plagiarism I heartily recommend, and Cath Ellis and Kane Murdoch* for their ‘enforcement pyramid’, which also works well as one of the vehicles that will help us navigate our way from the old to the new.

*Recommendation of this text does not in any way connote acceptance of Kane’s poor choice when it comes to football team preference.