College Teaching Fund: AI Projects – A review of the review by Chris Ince

On Wednesday I attended the mid-point event for the KCL College Teaching Fund projects – each group has been awarded funding (up to £10,000, though some came in with far smaller budgets) to do more than speculate on the possibility of using AI within their discipline and teaching, and instead carry out a research project around design and implementation.

Each team had one slide and three minutes to give updates on their progress so far, with Martin acting as compere and facilitator. I started to take notes so that I could possibly share ideas with the faculty that I support (and part-way through thought that I perhaps should have recorded the session and used an AI to summarise each project), but it was fascinating to see links between projects in completely different fields. Some connections and thoughts first, then each project’s progress so far:

  • The work involving students was carried out in many ways, but pleasingly many projects were presented by student researchers, who had either been part of the initial project bid or had been employed using CTF funds. Even where their involvement was only being surveyed or taking part in trials, students feature at every level of this work, as they should.
  • Several projects opened by scoping existing student use of gAI in their academic lives and work. This has to be taken with a pinch of salt, as it requires an element of honesty, but King’s has been clear that gAI is not prohibited so long as it is acknowledged (and allowed at a local level). What is interesting is that scoping consistently found students using gAI less than one might think (about a third of them); however, their use has grown through the projects and the academic year as they are taught how to use it.
  • That being said, several projects identify how students are sceptical of the usefulness of gAI to them and in some that scepticism grows through the project. In some ways this is quite pleasing, as they begin to see gAI not as a panacea, but as a tool. They’re identifying what it can and can’t do, and where it is and isn’t useful to them. We’re teaching about something (or facilitating), and they’re learning.
  • Training AIs and chatbots to assist in specific and complex tasks crops up in a number of projects, and they’re trialling some very different methods for this. Some use external tools, some are developed and then shared with students, and some give students what they need to do the training themselves. The sheer range of approaches is evidence of exactly why this kind of networking is useful.
  • There’s a frequent, and heavily patronising, perception that young people know more about a technology than older people. It’s always more complex than that, but the involvement of students in CTF projects has fostered some sharing of knowledge, as academic staff have seen what students can do with gAI. However, it’s been clear that the converse is also true: ‘we’ not only need to teach them, but there is a desire for us to. This is particularly notable when we consider equality of access and unfair advantages; two projects highlighted this when they noted that students from China had lower levels of familiarity with AI.
Below, each project’s title and lead, with my thoughts:
How do students perceive the use of genAI for providing feedback? (Timothy Pullen): A project from Biochemistry that’s focused on coding, specifically AI tools giving useful feedback on coding. GTAs have developed short coding exercises that have been trialled with students (they are embedded into Moodle and the AI provides the student feedback). This has implications for time saved on administering feedback of this kind, but Tim suggests that there are significant limits to what customised bots can do here – I need to find out more, and am intrigued by the student perception of this: are there some situations where students would rather have a real person look at their work and offer help?
AI-Powered Single Best Answer (SBA) Automatic Question Generation & Enhanced Pre-Clinical Student Progress Tracking (Isaac Ng (student), Mandeep Sagoo): Isaac, a medical student, presents, and it’s interesting that there’s quite a clear throughline to producing something with commercial prospects further down the line – there’s a name and logo! An AI has been ‘trained’ with resources and question styles that act as the baseline; students can then upload their own notes and the AI uses these to produce questions in an SBA format consistent with the ‘real’ ones. There’s a clear focus on making sure that the AI won’t generate questions from the material it’s been given that are factually wrong. A nice aspect is that all of the questions the AI generates are stored, and in March students will be able to vote on other students’ AI-generated questions. I’m intrigued by the element of students knowing what a good or bad question is – and do we need to ensure their notes are high-quality first?
Co-designing Encounters with AI in Education for Sustainable Development (Caitlin Bentley): Mira Vogel from King’s Academy is speaking on the team’s behalf – she leads on teaching sustainability in HE. The team have been working on the ‘right’ scaffolding and framing to find the most appropriate teaching within different areas/subjects/faculties – how to find the best routes. They have a broad range of members of staff involved, so have brought this element into the project itself. The first phase has been recursive – recruiting students across King’s to develop materials – and Mira has a fun phrase about “eating one’s own dog food”. They’ve been identifying common ground across disciplines to find how future work should be organised, at scale and more widely, to tackle ‘wicked problems’ (I’m sure this is ‘pernicious or thorny problems’ and not surfer dude ‘wicked’, but I like the positivity in the thought of it being both).
Testing the Frontier – Generative AI in Legal Education and beyond (Anat Keller and Cari Hyde Vaamonde): Trying to bring critical thinking into student use of AI. There’s a Moodle page, an online workshop (120 participants) and a focus group day (12 students and staff) to consider this. How does/should/could the law regulate financial institutions? The project focused on the application of assessment marking criteria and typically identified three key areas of failure: structure, understanding, and a lack of in-depth knowledge (interestingly, probably replicating what many academics would report for most assessment failure). The aim wasn’t a pass, but to see if a distinction-level essay could be produced. Students were a lot more critical than staff when assessing the essays. (Side-note: students anthropomorphised the AI, often using terms like ‘them’ and ‘him’ rather than ‘it’.) Students felt that while using AI at the initial ideas and creation stage may feel more appropriate than using it during the actual essay writing, this was where they lost the agency and creativity you’d want/find in a distinction-level student – perhaps this is the message to get across to students?
Exploring literature search and analysis through the lens of AI (Isabelle Miletich): Another project where the students on the research team get to present their work; it’s a highlight of the work, which also has a heavy co-creational aspect. Focused on Research Rabbit: a free AI platform that sorts and organises literature for literature reviews. Y2 focus groups have been used to inform material that is then used with Y1 dental students. There was a 95.7% response rate to the Y1 survey. Resources were produced to form a toolbox for students, mainly guidance for the use of Research Rabbit. There was also a student-produced video on how to use it for Y1s. The conclusion of the project will be narrated student presentations on how they used Research Rabbit.
Designing an AI-Driven Curriculum for Employable Business Students: Authentic Assessment and Generative AI (Chahna Gonsalves): Identifying use cases so that academics are better informed about when to put AI into their work. There have been a number of employer-based interviews around how employers are using AI. Student participants are reviewing transcripts to match these to appropriate areas that academics might then slot into the curriculum. An interesting aspect has been that students didn’t necessarily know or appreciate how much King’s staff did behind the scenes on curriculum development work. It was also a surprise to the team that some employers were not as persuaded of the usefulness of AI (although many were embedding it within work). Some consideration of there being a difference in approach between early adopters and those more reticent.
Assessment Innovation integrating Generative AI: Co-creating assessment activities with Undergraduate Students (Rebecca Upsher): Based in Psychology – students described how assessment to them means anxiety and stress, or is “just a means to get a degree” (probably some work to do around the latter, for sure). There’s a desire for creative and authentic assessment from all sides. The project started by identifying current student use of AI in and around assessment. One focus group (covering learning and assessment, the clarity of existing AI guidance, and suggestions for improvements) and one workshop (students more actively giving staff suggestions about summative uses of AI). Focus on inclusive and authentic assessment, being mindful of neurodiverse students; the group have been working with the neurodiverse society. Research students have been carrying out the literature review, prepared recruitment materials for groups, and mapped assessment types used in the department. A common preliminary thread was a desire for assessments to be designed with students, and a shift in power dynamics. What’s interesting is that AI projects like this are fostering the sort of co-design work that could have taken place before AI, but didn’t necessarily – academic staff are now valuing what students know and can do with AI (particularly if they know more than we do).
Improving exam questions to decrease the impact of Large Language Models (Victor Turcanu): A medicine-based project. Alongside alignment with authentic professional tasks that allow students to demonstrate their understanding and critical and innovative thinking: can students use LLMs to enhance their creativity and wider conceptual reach? The project is using 300 anonymous exam scripts to compare with ChatGPT answers. More specifically, it’s about asking students their opinion on a question that doesn’t have an answer (a novel question embedded within an area of research around allergies – can students design a study to investigate something that doesn’t have a known solution: talk about the possibilities, or what they think would be a line of approach to research an answer?). LLMs may be able to utilise work that has been published, but cannot draw on what hasn’t been published or isn’t yet understood. While the project was about students using LLMs, there’s also an angle here that this is a form of assessment where an AI can’t help as much.
Exploring Generative AI in Essay Writing and Marking: A study on Students’ and Educators’ Perceptions, Trust Dynamics, and Inclusivity (Margherita de Candia): Political science. Working with Saul Jones (an expert on assessment), they’ve also considered making an essay ‘AI-proof’. They’re using the PAIR framework developed at King’s and have designed an assessment using it to make a brief they think is AI-proof but still allows students to use AI tools. Workshops in which students write an essay using AI will then be used to refine the assignment brief following a marking phase. If it works, they want to disseminate the AI-proof essay brief to colleagues across the social science faculties; they are also running sessions to investigate student perceptions, particularly around improvements to inclusivity in using AI. An interesting element here is what we consider to be ‘AI-proof’, but also that students will be asked for their thoughts on feedback on their essays when half of it will have been generated by an AI.
Student attitudes towards the use of Generative AI in a Foundation Level English for Academic Purposes course and the impact of in-class interventions on these attitudes (James Ackroyd): Action research – King’s Foundations, with the team working on English for Academic Purposes. Two surveys through the year and a focus group, with specific in-class interventions on the use of AI; another survey to follow. Two-thirds of students initially said that they didn’t use AI at the start of the course (40% of the students are from China, where AI is less commonly used due to access restrictions), but half-way through the course two-thirds said that they did. Is this King’s demystifying things? Student belief in what AI could do reduced during the course, while faith in the micro-skills required for essay writing increased. Lots of fascinating threads about AI literacy and perceptions of it have come out of this so far.
Enhancing gAI literacy: an online seminar series to explore generative AI in education, research and employment (Brenda Williams): An online seminar series on the use of AI (because students asked for sessions online, but also because there are more than 2,000 students in the target group and it’s the best way to get reach). A consultation panel (10 each of staff/students/alumni) is designing five sessions to be delivered in June. Students have been informed about the course, and a pre-survey to find out about participants’ use of AI (and a post-survey) has been prepared. This project in particular has a high mix of staff from multiple areas around King’s and highlights that there is more at play within AI than just working with AI in teaching settings.
Supporting students to use AI ethically and effectively in academic writing (Ursula Wingate): Preliminary scoping of student use of AI. Focus on fairness and a level playing field: to upskill some students, and to rein in others. Recruited four student collaborators. Four focus groups (23 participants in January). All students reported having used ChatGPT (did this mean in education, or in general?) and there is a wide range of free tools they use. Students are critical and sceptical of AI: they’ve noticed that it isn’t very reliable and have concerns about the IP of others. They’re also concerned about not developing their own voice. Sessions designed to focus on some key aspects (cohesion, grammatical compliance, appropriateness of style, etc.) when using AI in academic writing are being planned.
Is this a good research question? (Iain Marshall, Kalwant Sidhu): Research topics for possible theses are being discussed at this half-way point of the academic year. Students are consulting chatbots (academics are quite busy, but also supervisors are usually only assigned once project titles and themes are decided – can students have a space to go to beforehand for more detailed input?). The team have been utilising prompt engineering to create their own chatbot to help themselves and others (I think this is through the application of provided material, so students can input this and then follow with their own questions). This does involve students utilising quite a number of detailed scripts and some coding, so it is supervised by a team – the aim is that this will be supportive.
Evaluating an integrated approach to guide students’ use of generative AI in written assessments (Tania Alcantarilla & Karl Nightingale): There are 600 students in the first year of their Bioscience degrees. The team focused on perceptions and student use of AI, the design of a guidance podcast/session, and evaluation of the sessions and then of ultimate gAI use. There were 200 responses to the student survey (which is pretty impressive). Lower use of gAI than expected (a third of students, though this increased after being at King’s – mainly among international students). It’s only now that I’ve realised people ‘in the know’ are using ‘gAI’ and not ‘genAI’ as I have… am I out of touch?
AI-Based Automated Assessment Tools for Code Quality (Marcus Messer, Neil Brown): A project based around the assessment of student-produced code. Here the team have focused on ‘chain-of-thought prompting’: an example is given to the LLM as a gobbet that includes the data, a demonstration of the reasoning steps, and the solution. Typically eight are used before the gAI is asked to apply what it has learned to a new question or other input. The team will use this to assess the code quality of programming assignments, including their readability and maintainability. Ultimately the grades and feedback will be compared with human-graded examples to judge the effectiveness of the tool.
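The chain-of-thought setup described here can be sketched as a few-shot prompt builder. This is a minimal illustration of the technique, not the project’s actual code: the field names, rubric wording and one-example demo are all my own assumptions.

```python
# A minimal sketch of few-shot chain-of-thought prompting for code review.
# Each worked example (a 'gobbet') pairs a code snippet with explicit
# reasoning steps and a verdict; the student's submission is appended for
# the LLM to complete in the same pattern. Field names and the rubric are
# illustrative assumptions, not the CTF project's actual design.

def build_cot_prompt(examples, submission):
    """Assemble a chain-of-thought prompt from worked examples.

    examples: list of dicts with 'code', 'reasoning' (a list of steps)
              and 'verdict' keys.
    submission: the student code to be assessed.
    """
    parts = ["Assess the readability and maintainability of each program."]
    for i, ex in enumerate(examples, start=1):
        steps = "\n".join(
            f"  Step {n}: {s}" for n, s in enumerate(ex["reasoning"], start=1)
        )
        parts.append(
            f"Example {i}:\nCode:\n{ex['code']}\n"
            f"Reasoning:\n{steps}\nVerdict: {ex['verdict']}"
        )
    # End with an open 'Reasoning:' cue so the model continues the pattern.
    parts.append(f"Now assess:\nCode:\n{submission}\nReasoning:")
    return "\n\n".join(parts)

# One worked example; a real prompt would typically include around eight.
example = {
    "code": "def f(x):\n    return x*2",
    "reasoning": [
        "The names 'f' and 'x' convey nothing about purpose.",
        "The logic is correct but there is no docstring or comment.",
    ],
    "verdict": "Works, but poor readability.",
}
prompt = build_cot_prompt([example], "def double(n):\n    return n * 2")
```

The resulting string would then be sent to whichever LLM the team is using; the key design point is that the model sees the reasoning steps, not just question-and-answer pairs.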
Integrating ChatGPT-4 into teaching and assessment (Barbara Piotrowska): Public Policy, in the Department of Political Economy. The broad goal was to get students excited about, and comfortable with, using gAI. Some of the most hesitant students have been the most inventive in using it to learn new concepts. ChatGPT is being used as a co-writer for an assessment – a policy brief (advocacy) – due next week. Teaching is also a part (conversations with gAI on a topic can be used as an example of a learning task).
Generative AI for critical engagement with the literature (Jelena Dzakula): Digital Humanities – reading and marking essays where students engage with a small window of literature. Can gAI summarise articles and chapters that students find difficult? An initial survey showed that students don’t use tools for this; they just give up. They mainly use gAI for brainstorming and planning, but not for helping their learning. Designing workshops/focus groups to turn gAI into a learning tool, mainly based around complex texts.
Adaptive learning support platform using GenAI and personalised feedback (Ievgeniia Kuzminykh): This project aims to embed AI, or at least use it as an integral part of a programme, where it has access to a lot of information about progress, performance and participation. Moodle has proven quite difficult to work with for this project, as the team wanted an AI that would analyse Moodle (to do this a cloned copy was needed, uploaded elsewhere so that it could be accessed externally by the AI). The ChatGPT API not being free has also been an issue. So far, course content, quizzes and answers have been utilised, with the gAI asked to give feedback and generate new quizzes. A paper on the design of a feedback system is being written and will be disseminated.
Evaluating the Reliability and Acceptability of AI Evaluation and Feedback of Medical School Course Work (Helen Oram): Couldn’t make the session – updates coming soon!

Fascinating stuff. For me, I want to consider how we can take the work from these CTF-funded projects and use it as ideas and models that departments, academics, and teaching staff can look to when considering teaching, curriculum and assessment in contexts where they may not have funding.

Understanding and Integrating AI in Teaching

This morning I discussed this topic with colleagues from King’s Natural, Mathematical and Engineering Sciences (NMES) faculty. The session was recorded and a transcript is available to NMES colleagues but, as I pointed out in the session, AI is enabling ways of enhancing and/or adding to the alternative ways of accessing the core information. By way of illustration, the post below is generated from the transcript (after I sifted the content to remove other speakers). The only thing I edited out was the words ‘in summary’ from the final paragraph.

TL;DR Autopodcast version

Slides can be seen here

Screenshot from title slide showing AI generated image of a foot with only 4 toes and a quote purportedly from da Vinci which says: ‘The human foot is a masterpiece of engineering and a work of art’

Understanding and Integrating AI in Teaching

Martin Compton’s contribution to the NMS Education Elevenses session revolved around the integration of AI into teaching, learning, and assessment. His perspective is deeply rooted in practical application and cautious understanding of these technologies, especially large language models like ChatGPT or Microsoft Co-pilot.

——-

My approach towards AI in education is multifaceted. I firmly believe we need a basic understanding of these technologies to avoid pitfalls. The misuse of AI can lead to serious consequences, as seen in instances like the professor in Texas who misused ChatGPT for student assessment or the lawyer in Australia who relied on fabricated legal precedents from ChatGPT. These examples underline the importance of understanding the capabilities and limitations of AI tools.

The Ethical and Practical Application of AI

The heart of my argument lies in engaging with AI responsibly. It’s not just about using AI tools but also understanding and teaching about them. Whether it’s informatics, chemistry, or any other discipline, integrating AI into the curriculum demands a balance between utilisation and ethical considerations. I advocate for a metacognitive approach, where we reflect on how we’re learning and interacting with AI. It’s crucial to encourage students to critically evaluate AI-generated content.

Examples of AI Integration in Education

I routinely use AI in various aspects of my work. For instance, AI-generated thumbnails for YouTube videos, AI transcription in Teams, upscaling transcripts using large language models, and even translations and video manipulation techniques that were beyond my skill set a year ago. These tools are not just about easing workflows but also about enhancing the educational experience.

One significant example I use is AI for creating flashcards. Using tools like Quizlet, combined with AI, I can quickly generate educational resources, which not only saves time but also introduces an interactive and engaging way for students to learn.

The Future of AI in Education

I believe that UK universities, and educational institutions worldwide, face a critical choice: either embrace AI as an integral component of academic pursuit or risk becoming obsolete. AI tools could become as ubiquitous as textbooks, and we need to prepare for this reality. It’s not about whether AI will lead us to a utopia or dystopia; it’s about engaging with the reality of AI as it exists today and its potential future impact on our students.

My stance on AI in education is one of cautious optimism. The potential benefits are immense, but so are the risks. We must tread carefully, ensuring that we use AI to enhance education without compromising on ethical standards or the quality of learning. Our responsibility lies in guiding students to use these tools ethically and responsibly, preparing them for a future where AI is an integral part of everyday life.

The key is to balance the use of AI with critical thinking and an understanding of its limitations. As educators, we are not just imparting knowledge but also shaping how the next generation interacts with and perceives technology. Therefore, it’s not just about teaching with AI but also teaching about AI, its potential, and its pitfalls.

The AI Literacy Frontier

Despite the best efforts of Storm Isha, I still managed to present at the 2024 National Conference on Gen AI in Ulster today (albeit remotely). Following on from my WONKHE post I focussed on the ‘how’ and ‘who’ of AI literacy in universities and proposed 10 (and a bit) principles.

When I was planning it I happened to have a chat with my son about AI translation getting us a step closer to Star Trek universal translators, and how AI is akin to a journey ‘…where no-one has gone before’. Before I knew it my abstract was chock-full of Star Trek refs and my presentation played fast and loose with the entire franchise.

The slides and my suggested principles are here

AI image depicting a scene on the bridge of a Star Trek-inspired starship, with a baby in the captain’s chair wearing a Starfleet-inspired uniform.

In the presentation I connected with Dr Kara Kennedy’s AI Literacy Framework, exemplified a critical point with reference to Dr Sarah Eaton’s Tenets of Post-plagiarism, and shared some resources including my Custom GPT ‘Trek: The Captain’s Counsel’ and a really terrible AI-generated song about my presentation.

Abstract

A year beyond our initial first contact with ChatGPT, the Russell Group has set a course with their principles on generative AI’s use in education, acting as an essential guide for the USS Academia. Foremost among these is the commitment to fostering AI literacy: an enterprise where universities pledge to equip students and staff for the journey ahead. This mission, however, navigates through sectors where perspectives on AI range from likely nemesis to potential prodigy.

Amidst the din of divergent voices, the necessity for critical, cohesive, and focused discourse in our scholarly collectives is paramount. In this talk Martin argues that we need to view AI and all associated opportunities and challenges as an undiscovered country where we have a much greater chance not only of survival in this strange new world but also of flourishing if we navigate it together. This challenge to the conventional knowledge hierarchies in higher education suggests genuine dialogue and collaboration are essential prerequisites to success.

Martin will chart the course he’s plotted at King’s College London, navigating through the nebula of complex AI narratives. He will share insights from a multifaceted strategy aimed at fostering AI understanding and literacy across the community of stakeholders in their endeavour to ensure the voyage is one of shared discovery.

13 ways you could integrate AI tools into teaching

For a session I am facilitating with our Natural, Mathematical and Engineering Sciences faculty I have pulled together below a few ideas drawn from a ton of brilliant suggestions colleagues from across the sector have shared in person, at events or via social media. There’s a bit of overlap, but I am trying to address the often-heard criticism that what’s missing from the guidance, theory and tools out there is some easily digestible, accessible and practically-focussed suggestions that focus on teaching rather than assessment and feedback. Here’s my first tuppenceworth:

1. AI ideator: Students write prompts to produce a given number of outputs (visual, text or code) to a design or problem brief. Groups select the top 2-3 and critique in detail the viability of the solutions. (AI as inspiration)

2. AI Case Studies: Students analyse real-world examples where AI has influenced various practices (e.g., medical diagnosis, finance, robotics) to develop contextual intelligence and critical evaluation skills. (AI as disciplinary content focus)

3. AI Case Study Creator: Students are given AI generated vignettes, micro case studies or scenarios related to a given topic and discuss responses/ solutions. (AI as content creator)

4. AI Chatbot Research: For foundational theoretical principles or contextual understanding, students interact with AI chatbots, document the conversation, and evaluate the experience, enhancing their research, problem-solving, and understanding of user experience. (AI as tool to further understanding of content)

5. AI Restructuring: Students are tasked with using AI tools to reformat content into different media according to pre-defined principles. (AI for multi-media reframing)

6. AI Promptathon: Students formulate prompts for AI to address significant questions in their discipline, critically evaluate the AI-generated responses, and reflect on the process, thereby improving their AI literacy and collaborative skills. (Critical AI literacy and disciplinary formative activity)

7. AI audit: Students use AI to generate short responses to open questions, critically assess the AI’s output, and then give a group presentation on their findings. Focus could be on accuracy and/ or clarity of outputs. (Critical AI literacy)

8. AI Solution Finder: Applicable post work placement or with case studies/ scenarios, students identify real-world challenges and propose AI-based solutions, honing their creativity, research skills, and professional confidence. (AI in context)

9. AI Think, Pair & Share: Students individually generate AI responses to a key challenge, then pair up to discuss and refine their prompts, improving their critical thinking, evaluation skills, and AI literacy. (AI as dialogic tool)

10. Analyse Data: Students work with open-source data sets to answer pressing questions in their discipline, thereby developing cultural intelligence, data literacy, and ethical understanding. (AI as analytical tool)

11. AI Quizmaster: Students design quiz questions and use AI to generate initial ideas, which they then revise and peer-review, fostering foundational knowledge, research skills, and metacognition. (AI as concept checking tool)

12. Chemistry / Physics or Maths Principle Exploration with AI Chatbot: Students engage with an AI chatbot to learn and understand a specific principle. The chatbot can explain concepts, answer queries, and provide examples. Students (with support of GTA/ near peer or academic tutor) compare the AI’s approach to their own process/ understanding. (AI chatbot tutor)

13. Coding Challenge – AI vs. Manual Code Comparison: Coding students use AI to create a short piece of code for a specific purpose and then compare it to a pre-existing, manually produced piece of code for the same purpose. This comparison can include an analysis of efficiency, creativity, and effectiveness. (AI as point of comparison)

Team Based Learning revisited

originally posted here: https://reflect.ucl.ac.uk/mcarena/2022/03/31/tbl/

I have been forced to confront a prejudice this week and I’m very glad I have, because I have significantly changed my perspective on Team Based Learning (TBL) as a result. When I cook I rarely use a recipe: rough amounts and a ‘bit of this; bit of that’ get me results that wouldn’t win Bake Off but they do the job. I’m a bit anti-authority, I suppose, and I might, on occasion, be seen as contrary given a tendency to take devil’s advocate positions. As a teacher educator, and unlike many of my colleagues over the years, I tend to advocate a more flexible approach to planning, am most certainly not a stickler for detailed lesson plans and maintain a scepticism (that I think is healthy) about the affordances of learning outcomes and predictably aligned teaching. I think this is why I was put off TBL when I first read about it. Call something TBL and most people would imagine something loose, active, collaborative and dialogic. But TBL purists (and maybe this was another reason I was resistant) would holler: ‘Hang on! TBL is a clearly delineated thing! It has a clear structure and process and language of its own.’ However, after attending a very meta-level session run by my colleague, Dr Pete Fitch, this week I was embarrassed to realise how thoroughly I’d misunderstood its potential flexibility and adaptability, as well as the potential of aspects I might be sceptical of in other contexts.

Established as a pedagogic approach in medical education in the US in the 1970s, it is now used widely across medical education globally as well as in many other disciplinary areas. In essence, it provides a seemingly rigid structure to a flipped approach that typically looks like this:

  • Individual pre-work – reading, videos etc.
  • Individual readiness assurance test (IRAT) – an in-class multi-choice test
  • Team readiness assurance test (TRAT) – the same questions, discussed and agreed – points awarded according to how few errors are made getting to the correct response
  • Discussion and clarification (and challenge) – opportunities to argue, contest, and seek clarification from the tutor
  • Application – the opportunity to take core knowledge and apply it
  • Peer evaluation
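The TRAT scoring step above (fewer scratches before reaching the right answer means more points) can be sketched in a few lines. The 4/2/1/0 points scale here is my own illustrative assumption; real TBL implementations use a variety of scales.

```python
# Illustrative TRAT scoring: a team 'scratches' answer options until it
# finds the correct one, and the fewer attempts used, the more points
# the question earns. The 4/2/1/0 scale below is an assumption for
# illustration only; actual schemes vary between implementations.

POINTS_BY_ATTEMPT = {1: 4, 2: 2, 3: 1}  # a 4th or later attempt scores 0

def trat_score(attempts_per_question):
    """Total team score, given how many scratches each question took."""
    return sum(POINTS_BY_ATTEMPT.get(a, 0) for a in attempts_per_question)

# A team that got three questions right first time, one on the second
# scratch, and one on the fourth:
team_total = trat_score([1, 1, 1, 2, 4])
```

Whatever the exact scale, the design choice is the same: rewarding early correct answers keeps the team discussion honest, because guessing through the options costs points.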

This video offers a really clear summary of the stages:

Aside from the rigid structure, my original resistance was rooted in the knowledge-focussed tests and how this would mean sessions started with silent, individual work. However, having been through the process myself (always a good idea before mud slinging!), I realised that this stage could achieve a number of goals beyond the ostensible self-check on understanding. It provides a framing point for students to measure their understanding of the materials read; it offers – completely anonymously, even to the tutor – an opportunity to gauge understanding within a group; it provides an ipsative opportunity to measure progress week by week; and it acts additionally as a motivator to actually engage with the pre-session work (increasingly so as the learning culture is established). It turns a typically high stakes, high anxiety activity (an individual test) into a much lower stakes one and provides a platform from which initial arguments can start at the TRAT stage. A further advantage, therefore, could be that it helps students formatively with their understanding of and approaches to multiple-choice examinations on those programmes that utilise this summative assessment methodology. In this session I changed my mind on three questions during the TRAT, two of which I was quietly (perhaps even smugly) confident I’d got right. A key part of the process is the ‘scratch to reveal if correct’ cards, which Pete had re-imagined with some clever manipulation of Moodle questions. We discussed the importance of the visceral ‘scratching’ commitment in comparison to a digital alternative and I do wonder if this is one of those things that will always work better analogue!

The cards are somewhat like those shown in this short video:

To move beyond knowledge development, it is clear the application stage is fundamental. Across all stages it was evident how much effort is needed in the design stage. Writing meaningful, level appropriate multi-choice questions is hard. Level-appropriate, authentic application activities are similarly challenging to design. But the payoffs can be great and, as Pete said in session, the design lasts more than a single iteration. I can see why TBL lends itself so well to medical education but this session did make me wish I was still running my own programme so I could test this formula in a higher ed or digital education context.

An example of how it works in the School of Medicine in Nanyang Technological University can be seen here:

The final (should have been obvious) thing spelt out was that the structure and approach can be manipulated. Despite appearances, TBL does enable a flexible approach. I imagine one-off and routine adaptations according to contextual need are commonplace. I think if I were to design a TBL curriculum, I’d certainly want to collaborate on its design. This would in itself be a departure for me, but preparing quality pre-session materials, writing good questions and working up appropriate application activities are all essential, and all benefit from collaboration or, at least, a willing ‘sounding board’ colleague.

Custom GPTs

There are two main audiences for custom GPTs built within the ChatGPT Pro infrastructure. The first is anyone with a Pro account. There are other tools that allow me to build custom GPTs with minimal skills that are open to wider audiences, so I think it’ll be interesting to see whether OpenAI continues to leverage this feature to encourage new subscription purchases or whether it will open up to further stifle competitor development. In education the ‘custom bots for others’ potential is huge but, for now, I am realising how potentially valuable they might be for the audience I did not initially consider – me.

One that is already proving useful is ‘My thesis helper’, which I constructed to pull information only from my thesis (given that even the really obvious papers never materialised, I am wondering whether this might catalyse that!). It’s an opportunity to use as source material much larger documents than copy/paste token limits allow, or even the relatively generous (and free) 100k-token context and document upload that Claude AI permits. In particular, it facilitates much swifter searching within the document as well as opportunities for synthesising and summarising specific sections. Another is ‘Innovating in the Academy’ (try it yourself if you have a Pro account), which uses two great sources of case studies from across King’s, collated and edited by my immediate colleagues in King’s Academy. The bot enables a more refined search as well as an opportunity to synthesise thinking.
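For the curious, the document-grounding behaviour I describe can be loosely imitated in a few lines. This is not how OpenAI actually implements custom GPTs (that is opaque to users); it is just a minimal keyword-overlap sketch of the general idea of retrieving the most relevant chunks of a long document before summarising:

```python
# Illustrative sketch only: grounding a bot in one long document amounts
# to splitting the text into chunks and surfacing the chunks most
# relevant to a query, which an LLM would then summarise or synthesise.

def chunk_text(text, size=40):
    """Split text into chunks of roughly `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def score(chunk, query):
    """Count how many query words appear in a chunk (case-insensitive)."""
    chunk_words = set(chunk.lower().split())
    return sum(1 for w in query.lower().split() if w in chunk_words)


def retrieve(text, query, top_k=2):
    """Return the top_k chunks most relevant to the query."""
    chunks = chunk_text(text)
    return sorted(chunks, key=lambda c: score(c, query), reverse=True)[:top_k]
```

Real systems use embeddings rather than raw keyword overlap, but the shape of the workflow – chunk, rank, pass the winners to the model – is the same.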

Designed to be more outward facing is ‘Captain’s Counsel’. This I made to align with a ‘Star Trek’ extended (and undoubtedly excruciating) metaphor I’ll be using in a presentation for the forthcoming GenAI in Education conference in Ulster. Here I have uploaded some reference material but also opened it to the web. I have tried to tap into my own Star Trek enthusiasm whilst focussing on broader questions about teaching. The web-openness means it will happily respond to questions about many things under the broad scope I have identified though I have also identified some taboos. Most useful and interesting is the way it follows my instruction to address the issue with reference to Captain Kirk’s own experiences. 

Both the creation and use of customised bots enable different ways of perceiving and accessing existing information, and it is in these functions broadly, I think, that LLMs and image generators – within customised bots and beyond – are likely to establish a utility niche, especially for folk yet to dip their toes in, or whose perceptions are dominated by LLMs as free essay mills.

Assessment 2033

What will assessment look like in universities in 2033? There’s a lot of talk about how AI may finally catalyse long-needed changes to a lot of the practices we cling to but there’s also a quite significant clamour to do everything in exam halls. Amidst the disparate voices of change are also those that suggest we ride this storm out and carry on pretty much as we are: it’s a time-served and proven model, is it not?

Anyway, by way of provocation, see below four visions of assessment in 2033. What do you think? Is one more likely? Maybe bits of two or more or none of the below? What other possibilities have I missed?

  1. Assessment 2033: Panopticopia

Alex sat nervously in a sterile examination room, palms clammy, heart pounding, her personal evaluation number stamped on each hand and on her evaluation tablet. The huge digits on the all-wall clock counted down ominously. As she began the timed exam, micro-drones buzzed overhead, scanning for unauthorised augmentations and communications. Proctoring AI software tracked every keystroke and eye movement, erasing any semblance of privacy. The relentless pressure to recall facts and formulas within seconds elevated her already intense anxiety. Alex knew she was better than these exams would suggest, but in the race against technology, ideals like fairness, inclusive practice and assessment validity were almost forgotten.

  2. Assessment 2033: Nova Lingua

Karim sat, feet up, in the study pod on campus, ready to tackle his latest essay. Much of the source material was in his first language so he felt confident the only translation tech he’d need would be with his more whimsical flourishes (usually in the intro and conclusion). He activated ‘AiMee’, his assistant bot, instructed her to open Microsoft Multi-Platform and set the essay parameters: ‘BeeLine text with synthetic voiced audio and an AI avatar presented digest’. AiMee processed the essay brief as Karim scanned it in and started the conversation. Karim was pleased as his thoughts appeared as eloquent prose, simultaneously in both his first language and the two official university languages. As he worked, Karim thought ruefully about how different an education his parents might have had, given that they both, like him, were dyslexic.

  3. Assessment 2033: Nova Aurora

Jordan was flushed with delight at the end of their first term on the flexible, multi-modal ‘stackable’ degree. It was amazing to think how different it was from their parents’ experience. There were no traditional exams or strict deadlines. Instead, they engaged in continuous, project and problem-based learning. Professors acted as mentors, guiding them through iterative processes of discovery and growth. The emphasis was on individual development, not just the final product. Grades were replaced with detailed feedback, fostering an appreciation for learning for its own sake rather than competition or – what did their mum call it? – ‘grade grubbing’! Trust was a defining characteristic of academic and student interactions, with collaboration highly valued and ‘collusion’ an obsolete concept. HE in the UK had somehow shifted from a focus on evaluation and grades to nurturing individual potential, mirrored by dynamic, flexible structures and opportunities to study in many ways, in many institutions and in ways that aligned with the complexities of life.

  4. Assessment 2033: Plus ça change

Ash sighed as she hunched over her laptop, typing furiously to meet another looming deadline. In 2033, it seemed that little had changed in higher education. Universities clung stubbornly to old assessment methods, reluctant to adapt. Plagiarism and AI detection tools remained easy to circumvent, masking the harsh realities of how students and, with similar frequency, academic staff, relied on technologies that a lot of policy documents effectively banned. The obsession with “students’ own words” pervaded every conversation, drowning out the unheard lobby advocating for a deeper understanding of students’ comprehension and wider acceptance of the realities of new ways of producing work. Ash knew that she wasn’t alone in her frustrations. The system seemed intent on perpetuating the status quo, turning a blind eye to the disconnect between the façade of academic integrity and the hidden truth of how most students and faculty navigated the system.



Evolving AI Literacy – A Shared Journey

This post and its slightly cheesy title (above) were generated using Claude and are based only on the transcript* from the recording of the Oxford Brookes webinar (part of the Talking Teaching across the globe series) I spoke at today, on how we actually achieve that Russell Group commitment:

Universities will support students and staff to become AI-literate

This is a ‘recast’ AI-generated podcast of the article below – the emphases are not brilliant, but I hope it offers colleagues an idea of what can be done to supplement things like the webinar. Frankly, neither this post nor the recast summary would exist without the ability to produce them in minimal time. (The whole process, from downloading the transcript to hitting ‘update’ on this post, has taken 19 minutes.)

Introduction

We find ourselves in a complex moment as emerging generative AI both captivates and concerns academics. Powerful new tools hold promise yet prompt apprehension. How do we steer constructive conversations amidst clashing viewpoints and high stakes? Martin Compton offers insightful perspectives grounded in ethical priorities – perspectives that reframe AI literacy as a collective journey of discovery requiring diverse voices, embracing practical possibilities, and creating space for critical debate.

Multiple Voices Needed to Balance the Discussion

Martin emphasizes that no one person possesses definitive expertise in this nascent domain. Varied voices deserve air time, even those with “limited credentials.” Since AI intersects with so many fields and its societal ramifications span from climate impacts to employment shifts, cross-disciplinary dialogue matters deeply. We have much to learn from each other.

Further, the computer science sphere itself lacks internal concord on timelines and capabilities. Some hail rapid transformational change while others dismiss the possibility of huge impacts. These mixed messages understandably breed public confusion, sparking doomsday headlines alongside boundless optimism. Socratic humility may serve us well here – acknowledging the expanse of what we do not know.

Given such uncertainty, multiplicity of perspective becomes essential. We need humanities scholars probing ethical quandaries, social scientists weighing systemic biases, natural scientists modeling environmental tradeoffs, employers voicing hiring needs, students sharing studied hopes and fears. No singular authoritative stance exists. Martin rightly warns against perpetuating traditional classroom power dynamics that position instructors as all-knowing arbiters. Hierarchical positioning will not serve us in unmapped territory.

Practical Possibilities Over Limitations to Expand Understanding

Beyond balanced dialogue, Martin advises pivoting more conversations toward practical possibilities versus current limitations. Generative AI’s flaws are abundantly clear, including bias, inaccuracy, and authenticity concerns. These absolutely warrant continued attention, as does debate around academic integrity. But dwelling solely on weaknesses risks blinding us to potentially constructive use cases.

We owe it to students to explore how these technologies may assist real work in real fields, shaping their future employability. Are there accessibility gains for neurodiverse learners? Streamlined workflows for overwhelmed academics? Even those who condemn generative AI must grapple with its impending workplace uptake to best serve graduates. Beyond hypotheticals, where might AI tangibly supplement – not supplant – rich pedagogical environments if guided by ethical priorities?

Illustrating authentic applications can also demystify these tools for skeptical faculty and counteract media hyperbole around “robot grading essays.” When we broaden understanding of AI’s diversity beyond chatbots, we dispel myths. Asking, “how might this aid human creativity?” rather than “will this replace human jobs?” reveals unconsidered potentials.

Spaces for Critical Debate Across Campus

Finally, Martin asks the pivotal question of where open-ended debate will unfold on our campuses given diverse conflicting views. Even within single institutions, some departments welcome generative AI while others seek bans. For literacy efforts to prove lasting, they must transcend one-off workshops and invite ongoing dialogue around community priorities.

Martin offers models like King’s College London’s FutureLearn course allowing global participants to weigh complex issues like algorithmic bias. He spotlights the power of hackathons for convening multiple perspectives to spawn inventive projects. Funding student-faculty partnerships around AI applications grounds exploration in lived curriculum.

Constructing designated forums for airing ethical tensions matters deeply, given disparate departmental stances. We need space to hash out appropriate usage guides for our institutional contexts. No top-down policy prescription or unilateral corporate partnership will address the full scope of concerns. By mapping key campus constituents – from disability support offices to career centers to individual faculty across disciplines – we gain fuller understanding of the landscape before charting next wise steps.

Ultimately AI literacy lives in human connections – the degree to which we foreground multiplicity of voice, balance pragmatic possibility with diligent critique, and carve out shared venues for unpacking this technology on our own terms. The questions loom large, as does potential for substantive harm. But committing to collective discovery widens possibilities for accountable innovation. We travel this emerging terrain together.

Event slides here

KCL GenAI in HE short course

my prompt: Isolate the comments from martin compton and using ONLY his ideas and contributions write an informal blog post for an academic audience that focusses on how we evolve AI literacy including his thoughts on what that means and how we should approach it. Use at least three subheadings

PGCert Session: an AI generated summary

Yesterday, via CODE, I had the pleasure of working with two groups studying for the online PGCert LTHE at the University of London. I replaced ‘Speaker 1’ with my name in the AI-generated transcript, ran it through (sensible colleague) Claude AI to generate this summary and then asked (much weirder colleague) ChatGPT to illustrate it. I particularly like the ‘Ethical Ai Ai!’ in the second one, but generally would not use either image: I am not a fan of this sort of representation. The alt texts are verbatim from ChatGPT too.

I forgot to request UK spelling conventions but I’m reasonably happy with the tone. The content feels less passionate and more balanced than I think I am in real life but, realistically, this follow-up post would not exist at all if I’d had to type it up myself.

Introduction

Recent advances in artificial intelligence (AI) are raising profound questions for me as an educator. New generative AI tools like ChatGPT can create convincing text, images, and other media on demand. While this technology holds great promise, it also poses challenges regarding academic integrity, student learning, and the nature of assessment. In a lively discussion with teachers recently, I shared my insights on navigating this complex terrain.

Alt text: An educator, a middle-aged individual with a thoughtful expression, stands in a modern classroom looking at a holographic display of AI technology. The hologram includes elements like text, images, and graphs, symbolizing the impact of AI on education. The classroom is equipped with digital tools, and there are a few students in the background, reflecting a contemporary educational setting

The Allure and Risks of Generative AI

I acknowledge the remarkable capabilities of tools like ChatGPT. In minutes, these AIs can generate personalized lessons, multiple choice quizzes, presentation slides and more tailored to specific educational contexts. Teachers and students alike are understandably drawn to technologies that can enhance learning and make routine tasks easier. However, I also caution that generative AI carries risks. If over-relied on, it can undermine academic integrity, entrench biases, and deprive students of opportunities to develop skills. As I argue, banning these tools outright is unrealistic, but educators must remain vigilant about their appropriate usage. The solution resides in open communication, modifying assessments, and focusing more on process than rote products.

Rethinking Assessment in the AI Era

For me, the rise of generative AI necessitates rethinking traditional assessment methods. With tools that can rapidly generate convincing text and other media, essays and exams are highly susceptible to academic dishonesty. However, I suggest assessment strategies like oral defenses, process-focused assignments, and interactive group work hold more integrity. Additionally, I propose reconsidering the primacy placed on summative grading over formative feedback. Removing or deemphasizing grades on early assignments could encourage intellectual risk-taking. Overall, rather than intensifying surveillance, I argue educators should seize this moment to make assessment more dynamic, dialogic and better aligned with course objectives.

Leveraging AI to Enhance Learning

While acknowledging the risks, I also see generative AI as holding great potential for enhancing my teaching and student learning. I demonstrated how these tools can aid personalized learning via customized study materials and tutoring interactions. They can also streamline time-consuming academic tasks like note-taking, literature reviews and citation formatting. Further, I showed how AI could facilitate more personalized feedback for students at scale. Technologies cannot wholly replace human judgment and mentoring relationships, but they can expand educator capacities. However, to gain the full benefits, both educators and students will need guidance on judiciously leveraging these powerful tools.

Problematizing Generative AI and Bias

One major concern I surface is how biases embedded in training data get reproduced in generative AI outputs. For instance, visual AI tools typically depict professors as predominantly white males. However, I note programmers are already working to address this issue by introducing checks against bias and nudging AIs to produce more diverse output. While acknowledging these efforts, I also caution about potential superficiality if underlying data sources and assumptions go unexamined. Pushing back on biases in AI requires grappling with systemic inequities still present in society and institutions. There are no quick technical fixes for complex socio-historical forces, but civil rights and ethics must be central considerations moving forward.

Alt text: An educator confidently leads a group of diverse students through a digitally-enhanced educational landscape. The classroom is integrated with ethical AI technology, featuring computers and interactive displays. This symbolizes a positive and creative use of AI in education. The students appear engaged and focused, while the educator points towards a bright, digital future, illustrating hope and adaptability in navigating the challenges of AI

Navigating Uncharted Waters

In conclusion, I characterize this as a watershed moment full of possibility, but also peril. Preparing myself and fellow educators and students to navigate this terrain will require open, flexible mindsets and reexamining some core assumptions. Knee-jerk reactions could compound risks, while inaction leaves us vulnerable. Addressing challenges presented by AI demands engaging ethical perspectives, but also pedagogical creativity. By orienting toward learning processes over products and developing more robust assessments, educators can cultivate academic integrity and the all-important critical thinking skills needed in an AI-driven world. This will necessitate experimentation, ongoing dialogue and reconsidering conventional practices. While daunting, I ultimately express confidence that with care, wisdom and humanity, educators can guide students through the promise and complexity of this new frontier.

My first GPT

Just when you think you’re getting a handle on things…

So this week’s big announcement was the (still being rolled out) ability to create custom GPTs. Just as I was getting to grips with doing this in the OpenAI playground, it’s now completely WYSIWYG (for many) from within ChatGPT, which has had an 8-bit to 16-bit type upgrade in graphics to boot. As much as I want to encourage use of Bing Chat for a bunch of institutional reasons, I am yet again pulled back to ChatGPT by the promise of custom GPTs (do we have to call them that?). After a few false starts yesterday, due to issues of some sort with the roll out I imagine, today it has been seamless and smooth. I learnt quickly that you can get real precision from configuring the instructions. For example, I have given mine the specific instruction to share link X in instance Y and link A in instance B. To create the foundation I have combined links with uploaded documents and so far my outputs have been pretty good. I think I will need longer and much more precise instructions, as the responses still veer to the general a little too much, but it is feeding from my foundation well. Here’s how it looks in the creation screen:

Alt text: Screenshot of chatGPT custom GPT authoring window showing boxes to fill in including Name, Description, Instruction and Conversation starters

It comes with a testing space adjacent to the creation window and options to share:

Alt text: Screenshot of dropdown menu in custom GPT authoring window showing choice to publish bot to only me, only people with link and public

And this is the screen you get if you access the link (but recipients themselves must have the new creation ability to access custom bots):

Alt text: Screenshot from bot homescreen showing familiar ChatGPT interface but with personalised image, bot name and suggested questions

And finally the chatbot window is familiar and, as can be seen, focussed on my data:

Alt text: Q & A with bot. The question is ‘How do I sign up for the AI course? It gives detailed information and links directly to it

I actually think this will re-revolutionise how we think about large language models in particular and will ultimately impact workflows for professional service and academic staff as well as students in equal measure.