Bots with character

This is a swift intro to Character AI (note 1), a tool that is currently free to use (on a freemium model). My daughter showed it to me some months ago. It appears to be a novelty app but is used (as I understand it) beyond entertainment for creative activity, gaming, role playing and even emotional support. For me, it is the potential to test ideas that many have about the learning potential of bots that is most interesting. By shifting focus away from ‘generating essays’, it is possible to see the appeal of natural language exchanges to augment learning in a novel medium. I can think of dozens of use cases based on the way I currently (for example) use YouTube to help me learn how to unblock a washing machine, and I imagine a continuum that runs from there all the way up to teacher replacement (note 2). Character AI is built on a large language model, employs ‘reinforcement’ (learning as conversations continue) and provides an easy-to-learn interface (basically typing stuff in boxes) that allows you to ground the bot with ease in a WYSIWYG editor.

As I see it, it offers three significant modifications to the default interface of standard (free) LLMs. 1. You can create characters and define their knowledge and ‘personality’ traits, with space to ground the bot’s behaviour through customisation. 2. You can have voice exchanges by ‘calling’ the character. 3. Most importantly, it shifts the technology back towards interaction and away from lengthy generation (though they can still go on a bit if you don’t bake succinctness in!). What interests me most is the potential to use tools like this to augment learning, add some novelty and provide reinforcement opportunities through text- or voice-based exchanges. I have experimented with creating some academic archetypes for my students to converse with: this one is a compassionate pedagogue, this one is keen on AI for teaching and learning, this one a real AI sceptic, this one deeply worried about academic integrity. They each have a back story, a defined university role and expertise. I tried to get people to test arguments and counter-arguments and to work through difficult academic encounters. It’s had mixed reviews so far: some love it; some REALLY do not like it at all!
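If you want to play with the same grounding idea outside a point-and-click tool, the sketch below shows how a persona can be baked into a bot through a system prompt. It is a minimal illustration only: the archetype text is invented, the model name and endpoint assume an OpenAI-style chat completions API with a key in your environment, and none of this is a description of how Character AI itself is built.

```python
import os
import requests

# A minimal sketch of persona grounding: the archetype below is a fictional
# example, and the endpoint/model are assumptions (any chat-style API would do).

ARCHETYPE = (
    "You are Dr Imogen Hale (a fictional example), a senior lecturer and "
    "compassionate pedagogue with twenty years of experience. You are warm but "
    "rigorous, sceptical of quick fixes, and you answer in no more than three "
    "sentences so the exchange stays conversational."
)

def ask_archetype(question: str) -> str:
    """Send one student question to the persona-grounded bot and return its reply."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [
                {"role": "system", "content": ARCHETYPE},  # the grounding/persona
                {"role": "user", "content": question},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_archetype("My seminar group won't talk. What would you try first?"))
```

The same pattern would work for any of the archetypes mentioned above; swapping the system message is, in principle, all it takes to turn the compassionate pedagogue into the AI sceptic.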

How do, or could, you use a tool like this?

Note 1: This video in no way connotes promotion or recommendation (by me or by my employer) of this software. Never upload data you are not comfortable sharing and never upload your own or others’ personal data.

Note 2: I am not a proponent of this! There may be people who think this is the panacea for chronic educational underfunding, though, so beware.

Conversing with AI: Natural language exchanges with and among the bots

In the fast evolving landscape of AI tools, two recent releases have really caught my attention: Google’s NotebookLM and the advanced conversational features in ChatGPT. Both offer intriguing possibilities for how we might interact with AI in more natural, fluid ways.

NotebookLM, still in its experimental stage and free to use, is well worth exploring. As one of my King’s colleagues pointed out recently: it’s about time Google did something impressive in this space! Its standout feature is the ability to generate surprisingly natural-sounding ‘auto podcasts’. I’ve been particularly struck by how the AI voice avatars exchange and overlap in their speech patterns, mimicking the cadence of real conversation. This authenticity is both impressive and slightly unsettling, and at least two colleagues thought they were listening to human exchanges.

I tested this feature with three distinct topics:

Language learning in the age of AI (based on three online articles):

A rather flattering exchange about my blog posts (created in fact by my former colleague Gerhard Kristandl – I’m not that egotistical):

A summary of King’s generative AI guidance:

The results were remarkably coherent and engaging. Beyond this, NotebookLM offers other useful features such as the ability to upload multiple file formats, synthesise high-level summaries, and generate questions to help interrogate the material. Perhaps most usefully, it visually represents the sources of information cited in response to your queries, making the retrieval-augmented generation process transparent.
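To make the retrieval part concrete, here is a toy, dependency-free Python sketch of the general idea: candidate passages are scored against a question and the sources behind the answer are surfaced back to the reader. The snippets and file names are invented stand-ins for the three articles, the scoring is crude word overlap rather than embeddings, and it is emphatically not how NotebookLM is implemented under the hood.

```python
from collections import Counter

# Toy illustration of source-grounded retrieval: rank passages against a query
# and report which sources would be cited. Real systems use embeddings plus an
# LLM to draft the answer; this sketch only demonstrates the citation idea.

SOURCES = {
    "campbell.txt": "AI applications in language learning bring benefits and challenges.",
    "ohrband.txt": "Human interaction remains crucial despite AI tools in language acquisition.",
    "park.txt": "Language learning retains cultural and personal value in an AI-driven world.",
}

def tokenise(text: str) -> Counter:
    """Lowercase, strip basic punctuation and count words."""
    return Counter(word.strip(".,?!").lower() for word in text.split())

def retrieve(query: str, k: int = 2) -> list[tuple[str, int]]:
    """Rank source passages by crude word overlap with the query."""
    q = tokenise(query)
    scores = {
        name: sum((tokenise(passage) & q).values())
        for name, passage in SOURCES.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:k]

if __name__ == "__main__":
    for name, score in retrieve("Does human interaction still matter for language learning?"):
        print(f"cited source: {name} (overlap score {score})")
```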

The image is a screenshot of a NotebookLM (experimental) interface with a note titled "Briefing Document: Language Learning in the Age of AI." It includes main themes and insights from three sources on the relationship between artificial intelligence (AI) and language learning:

1. **"Language Learning in the Age of AI" by Richard Campbell**: Discusses AI applications in language learning, highlighting both benefits and challenges.
2. **"The Future of Language Learning in an Age of AI" by Gerhard Ohrband**: Emphasizes that human interaction remains crucial despite AI tools in language acquisition.
3. **"The Timeless Value of Language Learning in the Age of AI" by Sungho Park**: Focuses on the cultural and personal value of language learning in an AI-driven world.

The note then expands on important ideas, specifically on the transformative potential of AI in language learning, such as personalized learning and 24/7 accessibility through AI-driven platforms.

Meanwhile, the advanced voice feature in ChatGPT’s latest update (not available in the EU, by the way) has addressed previous latency issues, resulting in a much more realistic exchange. To test this, I engaged in a brief conversation, asking it to switch accents mid-dialogue. The fluidity of the interaction was notable, feeling much closer to a natural conversation than previous iterations. Watch here:

What struck me during this exchange was how easily I slipped into treating the AI as a sentient being. At one point, I found myself saying “thank you”, while at another I felt a bit bad when I abruptly interrupted. This tendency to anthropomorphise these tools is deeply ingrained and hard to avoid, especially as the interactions become more natural. It raises interesting questions about how we relate to AI and whether this human-like interaction is beneficial or potentially problematic.

These developments challenge our conventions around writing and authorship. As these tools become more sophisticated, the line between human and AI-generated content blurs further. What constitutes a ‘valid’ tool for authorship in this new landscape? How do we navigate the ethical implications of using AI in this way?

What are your thoughts on these developments? How might you see yourself using tools like NotebookLM or the advanced ChatGPT in your work?

Sources used for the Language ‘podcast’:

  1. “Language Learning in the Age of AI” by Richard Campbell
  2. “The Future of Language Learning in an Age of AI” by Gerhard Ohrband
  3. “The Timeless Value of Language Learning in the Age of AI” by Sungho Park

The Essay in the Age of AI: a test case for transformation

We need to get beyond entrenched thinking. We need to see that we are at a threshold of change in many of the ways that we work, write, study, research etc. Large language models as a key development in AI (with ChatGPT as a symbolic shorthand for that) have led to some pretty extreme pronouncements. Many see it as an existential threat, heralding the ‘death of the essay’ for example. These narratives, though, are unhelpful as they oversimplify a complex issue and mask long-standing, evidence-informed calls for change in educational assessment practices (and wider pedagogic practices). The ‘death of the essay’ narratives do though give us an opportunity to interrogate the thinking and (mis)understandings that underpin these discourses and tensions. We have a chance to challenge tacit assumptions about the value and purpose of essays as one aspect of educational practice that has been considered an immutable part of the ways learning, and the evaluation of that learning, happens. We are at a point where it is not just people like me (teacher trainers; instructional designers; academic developers; enthusiastic tech fiddlers; contrarians; compassionate & critical pedagogues; disability advocates etc.) who are voicing concerns about conventional practices. My view is that we should leverage the heck out of this opportunity and find ways to effect change that is meaningful, scalable, responsive and coherent.

So it was that, in a conversation over coffee (in my favourite coffee shop in the Strand area) on these things with Claire Gordon (Director of the Eden Centre at LSE), we decided to use the essay as a stimulus for a synthesis of thinking and to evolve a manifesto for the essay (and other long-form writing) in the age of AI. To explore these ideas further, we invited colleagues from King’s College London and the London School of Economics (as well as special guests from Richmond American University and the University of Sydney) to a workshop. We explored questions like:

  • What are the core issues and concerns surrounding essays in the age of AI?
  • What alternatives might we consider in our quest for validity, reliability and authenticity?
  • Why do some educators and students love the essay format, and why do others not?
  • What is the future of writing? What gains can we harness, especially in terms of equity and inclusion?
  • How might we conceptualise human/hybrid writing processes?

A morning of sharing research, discussion, debate and reflection enabled us to draft and subsequently hone and fine-tune a collection of provocations, which we have called a ‘Manifesto for the Essay in the Age of AI’.

I invite you to read our full manifesto and the accompanying blog post outlining our workshop discussions. As we navigate this period of significant change in higher education, it’s crucial that we engage in open, critical dialogue about the future of assessment.

What are your thoughts on the role of essays in the age of AI? Or, indeed, how assessment and teaching will change shape over the next few years? I welcome your comments and reflections below.

Navigating the AI Landscape in HE: Six Opinions

Read my post below or listen to AI me read it. I have to say, I sound very well spoken in this video. To my ears it doesn’t sound much like me. For those that know me: what do you think?

As we attempt to navigate uncharted (as well as expanding and changing) landscapes of artificial intelligence in higher education, it makes sense to reflect on our approaches and understanding. We’ve done ‘headless chicken’ mode; we’ve been in reactive mode. Maybe we can start to take control of the narratives, even if what is ahead of us is disruptive, fast-moving and fraught with tensions. Here are six perspectives from me that I believe will help us move beyond the hype and get on with the engagement that is increasingly pressing but, thus far, inconsistent at best.

1. AI means whatever people think it means

In educational circles, when we discuss AI, we’re primarily referring to generative tools like ChatGPT, DALL-E, or Copilot. While computer scientists might argue, with a ton of justification, that this is a narrow definition, it’s the reality of how most educators and students understand and engage with AI. We mustn’t get bogged down in semantics; instead, we should focus on the practical implications of these tools in our teaching and learning environments whilst taking time to widen some of those definitions, especially when talking with students. Interrogating what we mean when we say ‘AI’ is, in fact, a great starting point for these discussions.

2. AI challenges our identities as educators

The rapid evolution of AI is forcing us to reconsider our roles as educators. Whether you buy into the traditional framing of higher education this way or not, we’re no longer the sole gatekeepers of knowledge, dispensing wisdom from the lectern. However much we might want to advocate for notions of co-creation or discovery learning, the lecturer or teacher as expert is a key component of many of our professional identities as teachers. Instead, we need to acknowledge that we’re all navigating this new landscape together – staff and students alike. This shift requires humility and a willingness to learn alongside our students. The alternatives? Fake it until you make it? Bury your head? Neither is viable or sustainable. Likewise, this is not something that is ‘someone else’s job’. HE is being menaced from many corners and workload is one of the many pressures, but I don’t see a beneficial path that does not necessitate engagement. If I’m right, then something needs to give. Or be made less burdensome.

3. Engage, not embrace

I’m not really a hugger, tbh. My family? Yes. A cute puppy? Probably. Friends? Awkwardly at best. A disruptive tech? Of course not. While some advocate for ‘embracing’ AI, I prefer the term ‘engage’. We needn’t love these technologies or accept them unquestioningly, but we do need to interact with them critically and thoughtfully. Rejection or outright banning is increasingly unsupportable, despite the many oft-cited issues. The sooner we at least entertain the possibility that some of our assumptions about the nature of writing, what constitutes cheating and how we best judge achievement may need review, the better.

4. AI-proofing is a fool’s errand

Attempts to create ‘AI-proof’ assessments or to reliably detect AI-generated content are likely to be futile. The pace of technological advancement means that any barriers we create will swiftly be overcome. Many have written on the unreliability and inherent biases of detection tools, and the promotion of flawed proctoring and surveillance tools only deepens the trust divide between staff and students that is already strained to its limit. Instead, we should focus on developing better, more authentic forms of assessment that prioritise critical thinking and application of knowledge. A lot of people have said this already, so we need to build a bank of practical, meaningful approaches, draw on the (extensive) existing scholarship and, in so doing, find better ways to share responses to these concerns that are not: ‘Eek, everyone do exams again!’

5. We need dedicated AI champions and leadership

To effectively integrate AI into our educational practices, we need people at all levels of our institutions who can take responsibility for guiding innovations in assessment and addressing colleagues’ questions. This requires significant time allocation and can’t be achieved through goodwill alone. Local level leadership and engagement (again with dedicated time and resource) is needed to complement central policy and guidance. This is especially true of multi-faculty institutions like my own. There’s only so much you can generalise. The problem of course is that whilst local agency is imperative, too many people do not yet have enough understanding to make fully informed decisions.  

6. Find a personal use for AI

To truly understand the potential and limitations of AI, it’s valuable to find ways to develop understanding with personal engagement – one way to do this is to incorporate it into your own workflows. Whether it’s using AI to summarise meeting or supervision notes, create thumbnails for videos, or transform lecture notes into coherent summaries, personal engagement with these tools can help demystify them and reveal practical benefits for yourself and for your students. My current focus is on how generative AI can open doors for neurodivergent students and those with disabilities or, in fact, any student marginalised by the structures and systems that are slow to change and privilege the few.

AI3*: Crossing the streams of artificial intelligence, academic integrity and assessment innovation

*That’s supposed to read AI3 but the title font refuses to allow superscript!

Yesterday I was delighted to keynote at the Universities at Medway annual teaching and learning conference. It’s a really interesting collaboration of three universities: the University of Greenwich, the University of Kent and Canterbury Christ Church University. Based at the Chatham campus in Medway, you can’t help but notice the history the moment you enter the campus. Given that I’d worked at Greenwich for five years I was familiar with the campus but, as was always the case when I went there during my time at Greenwich, I experienced a moment of awe when seeing the campus buildings again. It’s actually part of the Chatham Dockyard World Heritage site and features the remarkable Drill Hall Library. The reason I’m banging on about history is that such an environment really underscores for me some of those things that are emblematic of higher education in the United Kingdom (especially for those that don’t work or study in it!).

It has echoes of the cultural shorthands and memes of university life that remain popular in representations of campus life and study. It’s definitely a bit out of date (and overtly UK-centric), like a lot of my cultural references, but it made me think of all the murders in the Oxford-set crime drama ‘Morse’. The campus locations fossilised for a generation the idea of ornate buildings, musty libraries and deranged academics. Most universities of course don’t look like that, and by and large academics tend not to be too deranged. Nevertheless, we do spend a lot of time talking about the need for change and transformation whilst merrily doing things the way we’ve done them for decades, if not hundreds of years. Some might call that deranged behaviour. And that, in essence, was the core argument of my keynote: for too long we have twiddled around the edges, but there will be no better opportunity than now, with machine-assisted leverage, to do the things that give the lie to the idea that universities are seats of innovation and dynamism. Despite decades of research that have helped define broad principles for effective teaching, learning, assessment and feedback, we default to lecture – seminar and essay – report – exam across large swathes of programmes. We privilege writing as the principal mechanism of evidencing learning. We think we know what learning looks like, what good writing is, what plagiarism and cheating are, but a couple of quick scenarios put to a room full of academics invariably reveal a lack of consensus and a mass of tacit, hidden and sometimes very privileged understandings of those concepts.

Employing the undoubtedly questionable metaphor and unashamedly dated (1984) concept of ‘crossing the streams’ from the original Ghostbusters film, I argued that there are several parallels to the situation the citizens of New York first found themselves in way back when, not least for the academics (initially mocked and defunded) who confront the paranormal manifestations in their Ghostbusters guises. First come the appearances of a trickle of ghosts and demons, followed by a veritable deluge. Witness ChatGPT’s release, the unprecedented sign-ups and the ensuing 18 months wherein everything now has AI (even my toothbrush). There’s An AI For That has logged 12,982 AIs to date, to give an indication of that scale (I need to watch the film again to get an estimate of the number of ghosts). Anyway, early in the film we learn that a ghost-catching device called a ‘Proton Pack’ emits energy streams but:


“The important thing to remember is that you must never under any circumstances, cross the streams.” (Dr Egon Spengler)

Inevitably, of course, the resolution to the escalating crisis is the necessity of crossing the streams to defeat and banish the ghosts and demons. I don’t think that generative AI is something that could or should be defeated and I definitely do not think that an arms race of detection and policing is the way forward either. But I do think we need to cross the streams of the three AIs: Artificial Intelligence; Academic Integrity and Assessment Innovation to help realise the long-needed changes.

Artificial Intelligence represents the catalyst not the reason for needing dramatic change.

Academic Integrity as a goal is fine but too often connotes protected knowledge, archaic practices, inflexible standards and a resistance to evolution.

Assessment innovation is the place where we can, through common language and understanding, address the concerns of perhaps more traditional or conservative voices about the perceived robustness of assessments in a world where generative AI exists and is increasingly integrated into familiar tools, alongside what might be seen as more progressive voices who, well before ChatGPT, were arguing for more authentic, dialogic, process-focussed and, dare I say it, de-anonymised and humanly connected assessments.

Here is our opportunity. Crossing the streams may be the only way we mitigate a drift to obsolescence! My concluding slide showed a (definitely NOT called Casper) friendly ghost which, I hope, connoted the idea that what we fear is the unknown; as we come to know it, we find ways to shift from engagement (sometimes aggressive) to understanding and perhaps even to the ‘embrace’ that many who talk of AI encourage us to adopt.

Incidentally, I asked the Captain (in my custom bot ‘Teaching Trek: Captain’s Counsel’) a question about change and he came up with a similar metaphor:

“Blow Up the Enterprise: Sometimes, radical changes are necessary. I had to destroy the Enterprise to save my crew in ‘Star Trek III: The Search for Spock’. Academics should learn when to abandon a failing strategy and embrace new approaches, even if it means starting over.”

In a way I think I’d have had an easier time if I’d stuck with Star Trek metaphors. I was gratified to note that ‘The Search for Spock’ was also released in 1984. An auspicious year for dated cultural references from humans and bots alike.

—————–

Thanks:

The conference itself was great and I am grateful to Chloe, Emma, Julie and the team for organising it and inviting me.

Earlier in the day I was inspired by presentations from colleagues from the three universities: Emma, Jimmy, Nicole, Stuart and Laura. The student panel was great too; it started strongly with a rejection of the characterisation of students as idle and disinterested and carried on forcefully from there! And special thanks too to David Bedford (who I first worked with something like 10 years ago), who uses an analytical framework of his own devising called ‘BREAD’ as an aid to informing critical information literacy. His session adapted the framework for AI interactions and it prompted a question which led, over lunch, to me producing a (rough and ready) custom GPT based on it.

I should also acknowledge the works I referred to: 1. Sarah Eaton, whose work on the six tenets of post-plagiarism I heartily recommend, and 2. Cath Ellis and Kane Murdoch*, whose ‘enforcement pyramid’ also works well as one of the vehicles that will help us navigate our way from the old to the new.

*Recommendation of this text does not in any way connote acceptance of Kane’s poor choice when it comes to football team preference.

Responsible AI Use: A Call to Reflection and Action

To watch or listen to the recording, access the KCL media pages here

Nb. The summary below was generated from the transcript via Claude with a prompt focussing on the issues highlighted by Dr Bentley.

As AI continues to permeate various aspects of our lives, it is crucial to engage with its responsible use and consider the broader social and ethical implications. In this discussion (the fifth in the King’s Academy series AI Conversations), Dr Caitlin Bentley, a lecturer in AI Education at King’s College London, highlighted several critical issues surrounding the responsible adoption of AI technologies.

Privatisation and Commercialisation of AI
One of the major concerns raised by Dr Bentley is the rapid privatisation and commercialisation of AI technologies. With large technology companies capturing much of the technological infrastructure, driven by a surveillance-driven business model, there is a risk of solidifying the position of a few dominant players. This could lead to a lack of diversity and potential biases in AI systems.

Language Representation and Preservation
Another important issue highlighted is the impact of AI on less-used or less-resourced languages. Dr Bentley emphasised the need to monitor and ensure that AI tools do not inadvertently accelerate the disappearance of linguistic diversity. Initiatives aimed at preserving and representing these languages in AI systems are crucial.

Academic Integrity and Meaningful Learning
While the focus on academic integrity concerning AI tools like large language models is valid, Dr Bentley suggests that it might also indicate underlying issues within educational programmes. If students feel the need to turn to AI for assistance, it could signify a lack of meaningful engagement or relevance in the learning experience. Educators should reflect on creating more engaging and relevant curricula.

Responsible Use and Social Justice
Despite the potential challenges, Dr Bentley firmly believes that AI can be used for social good and to advance social justice. She highlighted examples of students using AI to create culturally relevant learning materials, assist insulin pump users, and develop multidisciplinary workshops on AI and sustainable development.

Call to Action: Reflection, Action Planning, and Research
To positively and responsibly engage with AI, Dr Bentley recommends a process of reflection, action planning, and research. This includes:

  • Engaging with communities and considering the impacts of AI on society.
  • Developing personal ethical stands and understanding one’s power to influence change.
  • Collaborating with others who share similar interests in driving positive and responsible AI use.
  • Utilising toolkits and resources (Dr Bentley is working on building toolkits for reflection, expected to be available by August).

UKRI Responsible Artificial Intelligence UK (RAI UK) programme.

    Watch/ listen to the rest of the conversations here

    BAAB Workshop: Gen AI- The Implications for Teaching and Assessment

    A summary of the transcript, first drafted via Google Gemini, then prompted and edited by Martin Compton

    The British Acupuncture Accreditation Board (BAAB) recently hosted a workshop on the implications of AI, with a focus on generative AI tools like ChatGPT, for teaching and assessment. Alongside Dr Vivien Shaw from BAAB, who designed and led the breakout element of the session, I was invited to share my thoughts on this rapidly evolving landscape, and it was a fantastic opportunity to engage with acupuncture and Traditional Chinese Medicine educators and practitioners.

    We started by noting that the majority of attendees had little or no experience of using these tools and that most were concerned:

    Key Points

    After a few definitions and live demos, the key points I made were:

    • AI is Bigger Than Generative AI: While generative AI tools like ChatGPT have taken the spotlight, it’s crucial to remember that artificial intelligence encompasses a much broader spectrum of technologies.
    • Generative AI is a Black Box: Even the developers of these tools are often surprised by their capabilities and applications. This unpredictability presents both challenges and opportunities.
    • The Human Must Remain in the Loop: AI should augment, not replace, human expertise. The “poetry” and nuance of human intelligence are irreplaceable.
    • Scepticism is Essential: Don’t trust everything AI produces. Critical thinking and verification of information are more important than ever.
    • AI is Constantly Improving: The capabilities of AI tools are evolving at a breakneck pace. What seems impossible today might be commonplace tomorrow.

    Embracing the Opportunities and Addressing the Threats

    The workshop highlighted the need for educators to lean into AI, understand its potential, and exploit its capabilities where appropriate. We also discussed the importance of adapting our teaching and assessment methods to this new reality.

    In the workshop I shared an AI-generated summary of an article by Saffron Huang on ‘The surprising synergy between acupuncture and AI’

    and a Chinese Medicine custom GPT, which was critiqued by the group

    Breakout Sessions: Putting AI to the Test

    To get a hands-on feel for AI’s impact, we divided into breakout groups and tackled some standard acupuncture exam questions using ChatGPT and other AI tools. The results were both impressive and concerning.

    • Group 1: Case History: The AI-generated responses were generic and lacked the nuance and depth expected from a student.
    • Group 2: Reflective Task: The AI produced “marshmallow blurb” – responses that sounded good but lacked substance or specific details.
    • Group 3: PowerPoint Presentation: While the AI-generated presentation was a decent starting point, it lacked the specifics and critical analysis required by the assignment.

    It was noted that these outputs should not mask the potential for labour saving, for getting something down as a start, or the possibilities of multi-shot prompting (iterating).

    The Road Ahead

    The workshop sparked lively discussions about the future of teaching and assessment in the age of AI. Some key questions that emerged:

    • How can we ensure that students are truly learning and not just relying on AI to generate answers?
    • What are the ethical implications of using AI in education?
    • How can we adapt our assessments to maintain their validity and relevance?

    This will all take work but, as a starting point, and even if you are blown away by the tutoring demo from Sal Khan and GPT-4o this week, value human connection and interaction at all times. Neither dismiss change out of hand nor unthinkingly accept it for its own sake. Transformation is possible with these new technologies because they are powerful tools, but it’s up to us to use them responsibly and ethically and to grow our understanding through experimentation and dialogue. We need to engage with the opportunities presented while remaining vigilant about the potential threats.

    The wizard of PAIR

    Full recording: Listen / watch here

    This post is an AI/me hybrid summary of the transcript of a conversation I had with Prof Oz Acar as part of the AI Conversations series at KCL. This morning I found that my Copilot window now allows me to upload attachments (now disabled again! 30/4/24) but the output with the same prompt was poor by comparison to Claude or my ‘writemystyle’ custom GPT, unfortunately (for now and at first attempt). I have made some edits to the post for clarity and to remove some of the wilder excesses of ‘AI cringe’.


    “The beauty of PAIR is its flexibility,” Oz explained. “Educators can customise each component based on learning objectives, student cohorts, and assignments.” An instructor could opt for closed problem statements tailored to specific lessons, or challenge students to formulate their own open-ended inquiries. Guidelines may restrict AI tool choices, or allow students more autonomy to explore the ever-expanding AI ecosystem. That oversight and guidance needs to come from an informed position, of course.


    Crucially, by emphasising skills like problem formulation, iterative experimentation, critical evaluation, and self-reflection, PAIR aligns with long-established pedagogical models proven to deepen understanding, such as inquiry-based and active learning. “PAIR is really skill-centric, not tool-centric,” Oz clarified. “It develops capabilities that will be invaluable for working with any AI system, now or in the future.”


    The early results from the more than a dozen King’s modules, across disciplines like business, marketing and the arts, that have piloted PAIR have been overwhelmingly positive. Students have reported marked improvements in their AI literacy – confidence in understanding these technologies’ current capabilities, limitations, and ethical implications. “Over 90% felt their skills in areas like evaluating outputs, recognising bias, and grasping AI’s broader impact had significantly increased,” Oz shared.


    While valid concerns around academic integrity have catalysed polarising debates, with some advocating outright bans and restrictive detection measures, Oz makes a nuanced case for an open approach centred on responsible AI adoption. “If we prohibit generative AI for assignments, the stellar students will follow the rules while others will use it covertly,” he argued. “Since even expert linguists struggle to detect AI-written text reliably (especially when it has been manipulated rather than simply churned from a single shot prompt), those circumventing the rules gain an unfair advantage.”


    Instead, Oz advocates assuming AI usage as an integrated part of the learning process, creating an equitable playing field primed for recalibrating expectations and assessment criteria. “There’s less motivation to cheat if we allow appropriate AI involvement,” he explained. “We can redefine what constitutes an exceptional essay or report in an AI-augmented age.”


    This stance aligns with PAIR’s human-centric philosophy of ensuring students remain firmly in the driver’s seat, leveraging AI as an enabling co-pilot to materialise and enrich their own ideas and outputs. “Throughout the PAIR process, we have mechanisms like reflective reports that reinforce students’ ownership and agency … The AI’s role is as an assistive partner, not an autonomous solution.”


    Looking ahead, Oz is energised by generative AI’s potential to tackle substantial challenges plaguing education systems globally – from expanding equitable access to quality learning resources, to easing overstretched educators’ burnout through intelligent process optimisation and tailored student support. “We could make education infinitely better by leveraging these technologies thoughtfully…Imagine having the world’s most patient, accessible digital teaching assistants to achieve our pedagogical goals.”


    However, Oz also acknowledges legitimate worries about the perils of inaction or institutional inertia. “My biggest concern is that we keep talking endlessly about what could go wrong, paralysed by committee after committee, while failing to prepare the next generation for their AI-infused reality,” he cautioned. Without proactive engagement, Oz fears a bifurcated future where students are either obliviously clueless about AI’s disruptive scope, or conversely, become overly dependent on it without cultivating essential critical thinking abilities.


    Another risk for Oz is generative AI’s potential to propel misinformation and personalised manipulation campaigns to unprecedented scales. “We’re heading into major election cycles soon, and I’m deeply worried about deepfakes fuelling conspiracy theories and political interference,” he revealed. “But even more insidious is AI’s ability to produce highly persuasive, psychologically targeted disinformation tailored to each individual’s profile and vulnerabilities.”


    Despite these significant hazards, Oz remains optimistic that responsible frameworks like PAIR can steer education towards effectively harnessing generative AI’s positive transformations while mitigating risks.


    PAIR Framework- Further information

    Previous conversation with Dan Hunter

    Previous conversation with Mandeep Gill Sagoo

    Generative AI in HE- self study short course

    An additional point to note: the recording is of course a conversation between two humans (Oz and Martin) and is unscripted. The Q&A towards the end of the recording was facilitated by a third human (Sanjana). I then compared four AI transcription tools: Kaltura, Clipchamp, Stream and YouTube. Kaltura estimated 78% accuracy, Clipchamp crashed twice, and Stream was (in my estimation) around 90-95% accurate but its editing and download process is less convenient than YouTube’s in my view, so the final transcript is the one initially auto-generated in YouTube, punctuated via ChatGPT, then re-edited for accuracy in YouTube. Whilst accuracy has improved noticeably in the last few years, the faff is still there. The video itself is hosted in Kaltura.

    Nuancing the discussions around GenAI in HE

    Audio version (produced using Speechify text-to-voice; requires free sign-up to listen)

    While we collectively and individually (cross-college and in faculties) reflect on the impacts over the last year or so of (Big) AI and generative AI on what we teach, how we teach, how we assess and what students can, can’t, should and shouldn’t be doing, I am finding that (finally) some of the conversations are cohering around themes. Thankfully, it’s not all about academic integrity (as fascinating as that is). Below is my effort at organising some of those themes; it is a bit of a brain dump!

    Balancing institutional consistency with disciplinary diversity

    One of the primary challenges we face is how to balance the need for institutional consistency with the fact that GenAI is developing in diverse ways across different disciplines and industries. This issue is particularly pertinent at multi-disciplinary institutions like KCL, where we have nine faculties, each witnessing emerging differences not just between faculties but between departments, programmes, and even among colleagues within the same programme.

    The fractious, new, contentious, ill-understood, unknown, and unpredictable nature of GenAI exacerbates this challenge. To address this, we are adopting a two-pronged approach:

    1. Absolute clarity about the broad direction: ENGAGE at KCL (not embrace!) with clear central guidance that can be adapted locally, allowing a degree of agency.

    2. A multi-faceted approach to evolving staff and student literacy, both centrally and locally, recognising that we all know roughly nothing about the implications and what will actually emerge in terms of teaching and assessment practices.

    What we are not doing is articulating explicit policy (yet), given the unknowns and unpredictability, but we are trying to make more explicit where existing policy applies and where there are tensions or even perceptions of contradictions.

    Enabling innovation while supporting the ‘engagement’ strategy

    To enable and support staff in innovating with GenAI while fostering engagement and endeavouring to ensure compliance with ethical, broader policy and even legal requirements, our multi-faceted approach includes:

    1. Student engagement in research, in developing guidance and in supporting literacy initiatives

    2. Supported/funded research projects to help diversify fields of interest, to build communities of enthusiasts and to share outcomes within (and beyond) the College.

    3. Collaboration within (e.g. with AI institute; involvement of libraries and collections, careers, academic skills) and across institutions (sharing within networks, participating at national and international events; building national and international communities of shared interest).

    4. Investment in technologies and leadership to facilitate innovation at a more rapid pace, where such piloting and experimentation has typically taken much longer in the past.

    5. Providing spaces for dialogue such as student events, the forthcoming AI Institute festival, research dissemination events, workshops and a college-wide working group.

    As we navigate this new territory, consistent messaging and clear guidance are paramount. We need to learn from others’ successes and mistakes while avoiding inadvertently breaching data privacy or other ethical and legal boundaries; in a fast-moving landscape the sharing of experience and intelligence is essential. One example (from another university!) is the pitfall of uploading students’ work into ChatGPT to determine if an LLM wrote it, only to discover that this constitutes a massive data breach, and the LLM couldn’t even provide that information.

    Fostering digital literacy and critical thinking

    Everything above connotes learning (and therefore time) investment for all staff and students. Where will we find this time? Framed as critical AI literacy, it is (imho) unavoidable even for the world’s leading sceptics. Wherever you situate yourself on the AI enthusiasm continuum (and I’m very much a vacillator and certainly not firmly at the evangelical end!), we have to address this, and there’s no better way than first-hand experience rather than the (often hype-tainted, simplistic) second-hand narratives peddled by those with vested interests (whether they be big (and small) tech companies with a whizzy tool or detector to sell you, or educational conservatives keen to exploit a perceived opportunity to return to the halcyon days of squeaky-shoed invigilation of exams for everyone for everything).

    My biggest worry for the whole educational sector (especially where leadership from government is woolly at best) is that the complexity and necessary nuance of discussion and decision-making will give way to either a threatening or punitive approach to assessment or an over-exuberant, ill-conceived deal with the devil… both of which will be counterproductive if good education is your goal. In my view we should:

    1. Work with, not against, both students and the technology.

    2. Model good practices ourselves.

    3. Accept that mistakes will be made, but provide clear guidelines on what is and is not advised/permitted for any given teaching or assessment or activity.

    4. Drive the narratives more ourselves from within the broader academy- stop reacting; start demanding (much easier collectively, of course).

    At KCL, we have implemented three “golden rules” for students to mitigate risks during the transition to better understanding:

    Golden Rule 1: Learn with your interactions with AI, but never copy-paste text generated from a prompt directly into summative assignments.

    Golden Rule 2: Ask if you are uncertain about what is allowed in any given assessment.

    Golden Rule 3: Ensure you take time before submission to acknowledge the use of generative AI.

    Empowering critical and creative engagement

    This is easy to set as a goal but of course much harder to realise. To empower all students (and staff) to engage critically and creatively with GenAI tools, we must acknowledge the potential benefits while addressing justified concerns. In an environment of reduced real-terms funding, international student recruitment challenges, and widespread redundancies in several HE institutions, some colleagues might view GenAI as yet another burden. I have been encouraging colleagues (with one eye on a firmly held view that first-hand experience equips you much better to make informed judgements) to look for ways to exploit these technologies in relatively risk-free ways, not only to build self-efficacy but also to shift the more entrenched and narrow narratives of GenAI as an essay generator and existential threat! Some examples:

    1. Can you find ways to actually realise workflow optimisation? GenAI tools offer amazing potential for translation, transcript generation, meeting summaries, and clarifying and reformatting content.

    2. Accessibility and neurodiversity support: Many colleagues and students are already benefiting from GenAI’s ability to present content in alternative formats, making it easier to process text or generate alt-text.

    3. Educational support in underserved areas: GenAI tools at a macro level could potentially support regions where there are too few teachers, while at a micro level they can enable students with complex commitments to access a degree of support outside ‘office hours’.

    Implications for curriculum design, teaching and assessment

    The advent of GenAI has potential implications for curriculum design, instructional strategies, and assessment methods. One concern is the potential homogenisation (and Americanisation) of content by LLMs. While LLMs can provide decent structures, learning outcomes, and assessment suggestions, there is a risk of losing the spark, humanity, visceral connection and novelty that human educators bring.

    However, this does not have to be an either/or scenario and I think this is the critical point to raise. We can leverage GenAI to achieve both creativity and consistency. For example, freely available LLMs can generate scenarios, case studies, multiple-choice questions based on specific texts, single-best-answer databases, and interactive simulations for developing skills like clinical engagement or client interaction. A colleague has found GenAI helpful in designing Team-Based Learning (TBL) activities, although the quality of outputs depends on the tool used and the quality of the prompts, underscoring the importance of GenAI literacy.
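As a hedged illustration of the multiple-choice-question idea above, the sketch below asks an LLM to write questions grounded only in a supplied course text. The endpoint, model name and prompt wording are my assumptions (any chat-style API would do), and anything it produces would still need checking by a subject expert before going anywhere near students.

```python
import os
import requests

# Hedged sketch: generate MCQs grounded in a supplied text via a chat-style API.
# The model, endpoint and prompt are illustrative assumptions, not a recipe.

SOURCE_TEXT = """Paste the course text the questions must be based on here."""

PROMPT = (
    "Using ONLY the text below, write three multiple-choice questions. "
    "Each question needs four options, one correct answer marked with an "
    "asterisk, and a one-sentence explanation quoting the relevant passage.\n\n"
    + SOURCE_TEXT
)

def generate_mcqs() -> str:
    """Return the model's draft questions as plain text for human review."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-4o-mini",
              "messages": [{"role": "user", "content": PROMPT}]},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(generate_mcqs())
```

Constraining the prompt to ‘ONLY the text below’ is the simplest guard against the model inventing content, though it is no guarantee; this is where the GenAI literacy mentioned above does the real work.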

    When discussing academic integrity and rigour, we must separate our concerns about GenAI from broader issues around plagiarism and well-masked cheating, which have long been challenges. We need to re-evaluate why we use specific assessments, what they measure overtly and tacitly, and the importance of writing in different programmes.

    Moving beyond ‘Cheating’ and ‘AI-Proofing’

    To move the conversation around AI and assessment beyond ‘cheating’ and ‘AI-proofing’, we must recognise that ‘AI-proofing’ is an arms race we cannot win. We also need to accept that we have lived for a very long time with very varied definitions of what constitutes cheating, what constitutes plagiarism and even the extent to which things like proofreading support are or should be allowed. I think the time has come for us to re-evaluate everything we do (easy!) – our assessments, their purposes, what they measure, the importance of writing in each programme, and what we define as cheating, plagiarism, and authorship in the context of GenAI. If we do this well, we will surface the tacit criteria many students are judged on, the hidden curricula buttressing programme and assessment design, and the covert (often even from those assessing) privileges that dictate the what and how of assessments and the ways in which they are evaluated.

    Ethical dilemmas: Energy consumption and a whole lot more

    Many have written on the many controversies GenAI raises: copyright, privacy, exploitation, sustainability. One is energy consumption. While figures vary, some suggest that using an LLM for a basic search query costs 40 times more in cooling than a conventional search. Shocking! Conversely, others argue that using LLMs to generate content that would otherwise be time-consuming and laborious could be less costly in terms of consumption. What to think?! At the very least, and as technology improves, we must distinguish between legitimate, purposeful use and novelty or wasteful use, just as we should with any technology. But we need to find trusted sources and points of referral as, in my experience at least, a lot of what I read is based on figures that are hard to pin down in terms of provenance and veracity.

    We cannot pretend that the issues of copyright, data privacy, lack of transparency, and the exploitation of human reinforcement workers do not exist, and these are challenges compounded by the tech industry’s race for a sustainable market share. But we should be wary of ignoring pre-existing controversies, being inconsistent in the ways we scrutinise different technologies and, from my point of view at least, failing to recognise the potentials as a consequence of some of the more shocking and outlandish stories we hear. Again, we come back to complexity and nuance. Currently, education seems to be in reaction mode, but we need to drive the narratives around these ethical concerns.

    Intellectual property rights, authorship, and attribution

    As I say above, we need to re-examine the fundamentals of higher education, such as our definitions of authorship, writing, cheating, and plagiarism. For example, while most institutional policies prohibit proofreading, many students from privileged backgrounds have long benefited from having family members review their work – a form of cultural capital and privilege that is generally accepted and not questioned, even if, by the letter of the academic integrity law, such support is as much cheating as getting a third-party piece of tech to ‘proofread’ for you.

    The opportunity for students from diverse backgrounds, including those who find conventional reading and studying challenging, to leverage GenAI for similar benefits is a reality we must address. Unless the quality of writing or the writing process itself is being assessed, we may need to be more open to how technology changes the way we approach writing, just as Google and word processing revolutionised information-finding and writing processes. I think we (as a sector) have realised that citation of LLMs is inappropriate, but for how long, and in which disciplines, will we feel the need to make lengthy acknowledgements of how we have used these technologies?

    Regardless of the discipline, engaging with GenAI is crucial – not doing so would be irresponsible and unfair to our students and ourselves. However, engagement also connotes investment in time and other resources, which raises the question of where we find those resources.

    AI Law

    Watch the full video here

    In the second AI conversation of the King’s Academy ‘Interfaculty Insights’ series, Professor Dan Hunter, Executive Dean of the Dickson Poon School of Law, shared his multifaceted engagement with artificial intelligence (AI). Prof Hunter discussed the transformative potential of AI, particularly generative AI, in legal education, practice, and beyond. With a long history in the field of AI and law, he offered a unique perspective on the challenges and opportunities presented by this rapidly evolving technology. To say he is firmly in the enthusiast camp is probably an understatement.

    A wooden gavel with ‘AI’ embossed on it

    From his vantage point, Prof Hunter presents the following key ideas:

    1. AI tools (especially LLMs) are already demonstrating significant productivity gains for professionals and students alike but it is often more about the ways they can do ‘scut work’. Workers and students become more efficient and improve work quality when using these models. For those with lower skill levels the improvement is even more pronounced.
    2. While cognitive offloading to AI models raises concerns about losing specific skills (examples of long division or logarithms were mentioned), Prof Hunter argued that we must adapt to this new reality. The “cat is out of the bag” so our responsibility lies in identifying and preserving foundational skills while embracing the benefits of AI.
    3. Assessment methods in legal education (and by implication across disciplines) must evolve to accommodate AI capabilities. Traditional essay writing can be easily replicated by language models, necessitating more complex and time-intensive assessment approaches. Prof Hunter advocates for supporting the development of prompt engineering skills and requiring students to use AI models while reflecting on the process.
    4. The legal profession will undergo a significant shakeup, with early adopters thriving and those resistant to change struggling. Routine tasks will be automated, obliging lawyers to move up the value chain and offer higher-value services. This disruption may lead to the need for retraining.
    5. AI models can help address unmet legal demand by making legal services more affordable and accessible. However, this will require systematic changes in how law is taught and practiced, with a greater emphasis on leveraging AI’s capabilities.
    6. In the short term, we tend to overestimate the impact of technological innovations, while underestimating their long-term effects. Just as the internet transformed our lives over decades, the full impact of generative AI may take time to unfold, but it will undoubtedly be transformative.
    7. Educators must carefully consider when cognitive offloading to AI is appropriate and when it is necessary for students to engage in the learning process without AI assistance. Finding the right balance is crucial for effective pedagogy in the AI era.
    8. Professional services staff can benefit from AI by identifying repetitive, language-based tasks that can be offloaded to language models. However, proper training on responsible AI use, data privacy, and information security is essential to avoid potential pitfalls.
    9. While AI models can aid in brainstorming, generating persuasive prose, and creating analogies, they currently lack the ability for critical thinking, planning, and execution. Humans must retain these higher-order skills, which cannot yet be outsourced to AI.
    10. Embracing AI in legal education and practice is not just about adopting the technology but also about fostering a mindset of change and continuous adaptation. As Prof Hunter notes, “If large language models were a drug, everyone would be prescribed them.” *

    The first in the series was Dr Mandeep Gill Sagoo

    * First draft of this summary generated from meeting transcript via Claude