Understanding and Integrating AI in Teaching

This morning I discussed this topic with colleagues from King’s Natural, Mathematical and Engineering Sciences (NMES) faculty. The session was recorded and a transcript is available to NMES colleagues but, as I pointed out in the session, AI is enabling ways of enhancing and/or adding to the ways we access core information. By way of illustration, the post below was generated from the transcript (after I sifted the content to remove other speakers). The only edit I made was to remove the words ‘in summary’ from the final paragraph.

TL;DR: Autopodcast version

Slides can be seen here

Screenshot from title slide showing AI generated image of a foot with only 4 toes and a quote purportedly from da Vinci which says: ‘The human foot is a masterpiece of engineering and a work of art’

Understanding and Integrating AI in Teaching

Martin Compton’s contribution to the NMES Education Elevenses session revolved around the integration of AI into teaching, learning, and assessment. His perspective is deeply rooted in practical application and a cautious understanding of these technologies, especially large language models like ChatGPT or Microsoft Copilot.

——-

My approach towards AI in education is multifaceted. I firmly believe we need a basic understanding of these technologies to avoid pitfalls. The misuse of AI can lead to serious consequences, as seen in instances like the professor in Texas who misused ChatGPT for student assessment or the lawyer in Australia who relied on fabricated legal precedents from ChatGPT. These examples underline the importance of understanding the capabilities and limitations of AI tools.

The Ethical and Practical Application of AI

The heart of my argument lies in engaging with AI responsibly. It’s not just about using AI tools but also understanding and teaching about them. Whether it’s informatics, chemistry, or any other discipline, integrating AI into the curriculum demands a balance between utilisation and ethical considerations. I advocate for a metacognitive approach, where we reflect on how we’re learning and interacting with AI. It’s crucial to encourage students to critically evaluate AI-generated content.

Examples of AI Integration in Education

I routinely use AI in various aspects of my work. For instance, AI-generated thumbnails for YouTube videos, AI transcription in Teams, upscaling transcripts using large language models, and even translations and video manipulation techniques that were beyond my skill set a year ago. These tools are not just about easing workflows but also about enhancing the educational experience.

One significant example I use is AI for creating flashcards. Using tools like Quizlet, combined with AI, I can quickly generate educational resources, which not only saves time but also introduces an interactive and engaging way for students to learn.
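
To give a flavour of the kind of thing I mean, here is a minimal sketch that asks a large language model for term/definition pairs and saves them as tab-separated text, which Quizlet’s import option can accept. It assumes the openai Python package (v1 or later) and an API key in the environment; the model name is just a placeholder, not a recommendation.

```python
# Minimal sketch: generate flashcards from notes and save them as tab-separated
# term/definition pairs. Assumes the openai package (v1+) and OPENAI_API_KEY set;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

notes = """Photosynthesis converts light energy into chemical energy.
Chlorophyll absorbs mainly red and blue wavelengths of light."""

prompt = (
    "Create 10 revision flashcards from the notes below. "
    "Return one card per line, with the term and definition separated by a tab character.\n\n"
    + notes
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model will do
    messages=[{"role": "user", "content": prompt}],
)

cards = response.choices[0].message.content

# Save as plain text; the contents can then be pasted into Quizlet's import box.
with open("flashcards.txt", "w", encoding="utf-8") as f:
    f.write(cards)
```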

The Future of AI in Education

I believe that UK universities, and educational institutions worldwide, face a critical choice: either embrace AI as an integral component of academic pursuit or risk becoming obsolete. AI tools could become as ubiquitous as textbooks, and we need to prepare for this reality. It’s not about whether AI will lead us to a utopia or dystopia; it’s about engaging with the reality of AI as it exists today and its potential future impact on our students.

My stance on AI in education is one of cautious optimism. The potential benefits are immense, but so are the risks. We must tread carefully, ensuring that we use AI to enhance education without compromising on ethical standards or the quality of learning. Our responsibility lies in guiding students to use these tools ethically and responsibly, preparing them for a future where AI is an integral part of everyday life.

The key is to balance the use of AI with critical thinking and an understanding of its limitations. As educators, we are not just imparting knowledge but also shaping how the next generation interacts with and perceives technology. Therefore, it’s not just about teaching with AI but also teaching about AI, its potential, and its pitfalls.

The AI Literacy Frontier

Despite the best efforts of Storm Isha I still managed to present at the 2024 National Conference on Gen AI in Ulster today (albeit remotely). Following on from my WONKHE post, I focussed on the ‘how’ and ‘who’ of AI literacy in universities and proposed 10 (and a bit) principles.

When I was planning it I happened to have a chat with my son about AI translation getting us a step closer to Star Trek universal translators and how AI is akin to a journey … ‘where no-one has gone before’. Before I knew it my abstract was chock-full of Star Trek refs and my presentation played fast and loose with the entire franchise.

The slides and my suggested principles are here

AI image depicting a scene on the bridge of a Star Trek-inspired starship, with a baby in the captain’s chair wearing a Starfleet-inspired uniform.

In the presentation I connected with Dr Kara Kennedy’s AI Literacy Framework, exemplified a critical point with reference to Dr Sarah Eaton’s Tenets of Post-plagiarism and shared some resources, including my Custom GPT ‘Trek: The Captain’s Counsel’ and a really terrible AI-generated song about my presentation.

Abstract

A year on from our first contact with ChatGPT, the Russell Group has set a course with their principles on generative AI’s use in education, acting as an essential guide for the USS Academia. Foremost among these is the commitment to fostering AI literacy: an enterprise where universities pledge to equip students and staff for the journey ahead. This mission, however, navigates through sectors where perspectives on AI range from likely nemesis to potential prodigy.

Amidst the din of divergent voices, the necessity for critical, cohesive, and focused discourse in our scholarly collectives is paramount. In this talk Martin argues that we need to view AI and all associated opportunities and challenges as an undiscovered country where we have a much greater chance not only of survival in this strange new world but also of flourishing if we navigate it together. This challenge to the conventional knowledge hierarchies in higher education suggests genuine dialogue and collaboration are essential prerequisites to success.

Martin will chart the course he’s plotted at King’s College London, navigating through the nebula of complex AI narratives. He will share insights from a multifaceted strategy aimed at fostering AI understanding and literacy across the community of stakeholders in their endeavour to ensure the voyage is one of shared discovery.

13 ways you could integrate AI tools into teaching

For a session I am facilitating with our Natural, Mathematical and Engineering Sciences faculty, I have pulled together below a few ideas drawn from a ton of brilliant suggestions colleagues from across the sector have shared in person, at events or via social media. There’s a bit of overlap, but I am trying to address the often-heard criticism that what’s missing from the guidance, theory and tools out there is some easily digestible, accessible and practically-focussed suggestions that focus on teaching rather than assessment and feedback. Here’s my first tuppenceworth:

1. AI Ideator: Students write prompts to produce a given number of outputs (visual, text or code) to a design or problem brief. Groups select top 2-3 and critique in detail the viability of solutions. (AI as inspiration)

2. AI Case Studies: Students analyse real-world examples where AI has influenced various practices (e.g., medical diagnosis, finance, robotics) to develop contextual intelligence and critical evaluation skills. (AI as disciplinary content focus)

3. AI Case Study Creator: Students are given AI-generated vignettes, micro case studies or scenarios related to a given topic and discuss responses/solutions. (AI as content creator)

4. AI Chatbot Research: For foundational theoretical principles or contextual understanding, students interact with AI chatbots, document the conversation, and evaluate the experience, enhancing their research, problem-solving, and understanding of user experience. (AI as tool to further understanding of content)

5. AI Restructuring: Students are tasked with using AI tools to reformat content into different media according to pre-defined principles. (AI for multi-media reframing)

6. AI Promptathon: Students formulate prompts for AI to address significant questions in their discipline, critically evaluate the AI-generated responses, and reflect on the process, thereby improving their AI literacy and collaborative skills. (Critical AI literacy and disciplinary formative activity)

7. AI Audit: Students use AI to generate short responses to open questions, critically assess the AI’s output, and then give a group presentation on their findings. Focus could be on accuracy and/or clarity of outputs. (Critical AI literacy)

8. AI Solution Finder: Applicable post work placement or with case studies/ scenarios, students identify real-world challenges and propose AI-based solutions, honing their creativity, research skills, and professional confidence. (AI in context)

9. AI Think, Pair & Share: Students individually generate AI responses to a key challenge, then pair up to discuss and refine their prompts, improving their critical thinking, evaluation skills, and AI literacy. (AI as dialogic tool)

10. Analyse Data: Students use AI tools to work with open-source data sets to answer pressing questions in their discipline, thereby developing cultural intelligence, data literacy, and ethical understanding. (AI as analytical tool)

11. AI Quizmaster: Students design quiz questions and use AI to generate initial ideas, which they then revise and peer-review, fostering foundational knowledge, research skills, and metacognition. (AI as concept checking tool)

12. Chemistry, Physics or Maths Principle Exploration with AI Chatbot: Students engage with an AI chatbot to learn and understand a specific principle. The chatbot can explain concepts, answer queries, and provide examples. Students (with the support of a GTA, near peer or academic tutor) compare the AI’s approach to their own process/understanding. (AI chatbot tutor)

13. Coding Challenge (AI vs. Manual Code Comparison): Coding students use an AI tool to create a short piece of code for a specific purpose and then compare it to pre-existing, manually produced code for the same purpose; a hypothetical illustration is sketched below. This comparison can include an analysis of efficiency, creativity, and effectiveness. (AI as point of comparison)
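
By way of a purely hypothetical illustration of activity 13, the sketch below shows two Python versions of the same small task: a manually written reference version with an explicit loop, and the kind of compact version an AI assistant might produce. Students could compare the pair for readability, efficiency and robustness; neither function comes from a real classroom exercise.

```python
# Hypothetical illustration of activity 13: the same task solved two ways,
# ready for students to compare on readability, efficiency and robustness.
from collections import Counter

def word_counts_manual(text: str) -> dict:
    """Pre-existing, manually written reference version: explicit loop and bookkeeping."""
    counts = {}
    for word in text.lower().split():
        word = word.strip(".,;:!?")
        if word:
            counts[word] = counts.get(word, 0) + 1
    return counts

def word_counts_ai(text: str) -> dict:
    """The kind of compact version an AI assistant might produce, using Counter."""
    words = (w.strip(".,;:!?") for w in text.lower().split())
    return dict(Counter(w for w in words if w))

sample = "To boldly go, to seek out new life and new civilisations."
assert word_counts_manual(sample) == word_counts_ai(sample)
```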

Team Based Learning revisited

originally posted here: https://reflect.ucl.ac.uk/mcarena/2022/03/31/tbl/

I have been forced to confront a prejudice this week and I’m very glad I have because I have significantly changed my perspective on Team Based Learning (TBL) as a result. When I cook I rarely use a recipe: rough amounts and a ‘bit of this; bit of that’ get me results that wouldn’t win Bake Off but they do the job. I’m a bit anti-authority I suppose and I might, on occasion, be seen as contrary given a tendency to take devil’s advocate positions. As a teacher educator, and unlike many of my colleagues over the years, I tend to advocate a more flexible approach to planning, am most certainly not a stickler for detailed lesson plans and maintain a scepticism (that I think is healthy) about the affordances of learning outcomes and predictably aligned teaching. I think this is why I was put off TBL when I first read about it. Call something TBL and most people would imagine something loose, active, collaborative and dialogic. But TBL purists (and maybe this was another reason I was resistant) would holler: ‘Hang on! TBL is a clearly delineated thing! It has a clear structure and process and language of its own.’ However, after attending a very meta-level session run by my colleague, Dr Pete Fitch, this week I was embarrassed to realise how thoroughly I’d misunderstood its potential flexibility and adaptability, as well as the potential of aspects I might be sceptical of in other contexts.

Established as a pedagogic approach in the US in the 1970s, TBL is now used widely across medical education globally as well as in many other disciplinary areas. In essence, it provides a seemingly rigid structure to a flipped approach that typically looks like this:

  • Individual pre-work – reading, videos etc.
  • Individual readiness assurance test (IRAT) – in-class multi-choice test
  • Team readiness assurance test (TRAT) – same questions, discussed and agreed; points awarded according to how few errors are made getting to the correct response (see the scoring sketch after this list)
  • Discussion and clarification (and challenge) – opportunities to argue, contest, seek clarification from the tutor
  • Application – opportunity to take core knowledge and apply it
  • Peer evaluation
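
To make the scoring idea concrete, here is a rough Python sketch of the TRAT ‘scratch until correct’ scheme. The point values (4/2/1/0 for first/second/third/later attempts) are purely illustrative; actual schemes vary between implementations.

```python
# Rough sketch of TRAT 'scratch card' scoring: teams keep scratching options until
# they uncover the right answer, and the fewer attempts needed, the more points scored.
# The point values below are illustrative only.

POINTS_BY_ATTEMPT = {1: 4, 2: 2, 3: 1}  # a 4th or later attempt scores 0

def trat_score(attempts_per_question: list[int]) -> int:
    """Total team score, given how many scratches each question took."""
    return sum(POINTS_BY_ATTEMPT.get(attempts, 0) for attempts in attempts_per_question)

# e.g. a team that got Q1 first time, Q2 on the second scratch and Q3 on the fourth:
print(trat_score([1, 2, 4]))  # -> 6
```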

This video offers a really clear summary of the stages:

Aside from the rigid structure, my original resistance was rooted in the knowledge-focussed tests and how this would mean sessions started with silent, individual work. However, having been through the process myself (always a good idea before mud-slinging!), I realised that this stage could achieve a number of goals as well as the ostensible self-check on understanding. It provides a framing point for students to measure understanding of materials read; it offers, completely anonymously (even to the tutor), an opportunity to gauge understanding within a group; it provides an ipsative opportunity to measure progress week by week; and it acts additionally as a motivator to actually engage with the pre-session work (increasingly so as the learning culture is established). It turns a typically high-stakes, high-anxiety activity (an individual test) into a much lower-stakes one and provides a platform from which initial arguments can start at the TRAT stage. A further advantage therefore could be that it helps students formatively with their understanding of and approaches to multi-choice examinations in those programmes that utilise this summative assessment methodology. In this session I changed my mind on three questions during the TRAT, two of which I was quietly (perhaps even smugly) confident I’d got right. A key part of the process is the ‘scratch to reveal if correct’ cards, which Pete had re-imagined with some clever manipulation of Moodle questions. We discussed the importance of the visceral ‘scratching’ commitment in comparison to a digital alternative and I do wonder if this is one of those things that will always work better analogue!

The cards are somewhat like those shown in this short video:

To move beyond knowledge development, it is clear the application stage is fundamental. Across all stages it was evident how much effort is needed in the design stage. Writing meaningful, level-appropriate multi-choice questions is hard. Level-appropriate, authentic application activities are similarly challenging to design. But the payoffs can be great and, as Pete said in session, the design lasts more than a single iteration. I can see why TBL lends itself so well to medical education but this session did make me wish I was still running my own programme so I could test this formula in a higher ed or digital education context.

An example of how it works in the School of Medicine in Nanyang Technological University can be seen here:

The final (should have been obvious) thing spelt out was that the structure and approach can be manipulated. Despite appearances, TBL does enable a flexible approach. I imagine one-off and routine adaptations according to contextual need are commonplace. I think if I were to design a TBL curriculum, I’d certainly want to collaborate on its design. This would in itself be a departure for me, but preparing quality pre-session materials, writing good questions and working up appropriate application activities are all essential and all benefit from collaboration or, at least, a willing ‘sounding board’ colleague.

Custom GPTs

There are two main audiences for custom GPTs built within the ChatGPT Pro infrastructure. The first is anyone with a pro account. There are other tools that allow me to build custom GPTs with minimal skills that are open to wider audiences, so I think it’ll be interesting to see whether OpenAI will continue to leverage this feature to encourage new subscription purchases or open it up more widely to further stifle competitor development. In education the ‘custom bots for others’ potential is huge but, for now, I am realising how potentially valuable they might be for the audience I did not initially consider – me.

One that is already proving useful is ‘My thesis helper’, which I constructed to pull information only from my thesis (given that even the really obvious papers never materialised, I am wondering whether this might catalyse that!). It’s an opportunity to use as source material much larger documents than copy/paste token limits allow, or even than the relatively generous (and free) 100k tokens and document upload that Claude AI permits. In particular, it facilitates much swifter searching within the document as well as opportunities for synthesising and summarising specific sections. Another is ‘Innovating in the Academy’ (try it yourself if you have a pro account), which uses two great sources of case studies from across King’s, collated and edited by my immediate colleagues in King’s Academy. The bot enables a more refined search as well as an opportunity to synthesise thinking.
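
For the curious, the sketch below illustrates the general idea that makes a document-grounded helper feel so much swifter than copy/paste: split the long document into chunks, score each chunk against a query, and hand only the best matches to the model as context. To be clear, this is not how OpenAI’s retrieval over uploaded ‘knowledge’ files actually works under the hood; it is a deliberately crude, standard-library-only illustration, and the file name is hypothetical.

```python
# Illustrative only: chunk a long document, score chunks against a query, and keep
# the best few to pass to a language model as context. Real custom GPT retrieval is
# more sophisticated; this just shows why 'search within the document' is so quick.
import re

def chunk(text: str, size: int = 1000) -> list[str]:
    """Split a long document into roughly size-character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(query: str, passage: str) -> int:
    """Crude relevance score: how often the query's words appear in the passage."""
    words = set(re.findall(r"\w+", query.lower()))
    return sum(passage.lower().count(w) for w in words)

def top_chunks(query: str, document: str, k: int = 3) -> list[str]:
    """Return the k chunks most relevant to the query."""
    return sorted(chunk(document), key=lambda c: score(query, c), reverse=True)[:k]

# thesis_text = open("thesis.txt", encoding="utf-8").read()   # hypothetical file name
# context = "\n---\n".join(top_chunks("assessment and feedback", thesis_text))
# ...then send `context` plus the actual question to the LLM of your choice.
```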

Designed to be more outward facing is ‘Captain’s Counsel’. This I made to align with a ‘Star Trek’ extended (and undoubtedly excruciating) metaphor I’ll be using in a presentation for the forthcoming GenAI in Education conference in Ulster. Here I have uploaded some reference material but also opened it up to the web. I have tried to tap into my own Star Trek enthusiasm whilst focussing on broader questions about teaching. The web-openness means it will happily respond to questions about many things under the broad scope I have defined, though I have also identified some taboos. Most useful and interesting is the way it follows my instruction to address each issue with reference to Captain Kirk’s own experiences.

Both the creation and use of customised bots enable different ways of perceiving and accessing existing information, and it is in these functions, broadly, that LLMs and image generators (within customised bots and beyond) are likely to establish a utility niche, I think, especially for folk yet to dip their toes in or whose perceptions are dominated by a view of LLMs as free essay mills.