PGCert Session: an AI generated summary

Yesterday, via CODE, I had the pleasure of working with two groups studying for the online PGCert LTHE at the University of London. I replaced ‘Speaker 1’ with my name in the AI generated transcript, ran it through (sensible colleague) Claude AI to generate this summary and then asked (much weirder colleague) ChatGPT to illustrate it. I particularly like the ‘Ethical Ai Ai!’ in the second one but generally would not use either image: I am not a fan of this sort of representation. The alt text is verbatim from ChatGPT too.

I forgot to request UK spelling conventions but I’m reasonably happy with the tone. The content feels less passionate and more balanced than I think I am in real life but, realistically, this follow-up post would not exist at all if I’d had to type it up myself.

Introduction

Recent advances in artificial intelligence (AI) are raising profound questions for me as an educator. New generative AI tools like ChatGPT can create convincing text, images, and other media on demand. While this technology holds great promise, it also poses challenges regarding academic integrity, student learning, and the nature of assessment. In a lively discussion with teachers recently, I shared my insights on navigating this complex terrain.

Alt text: An educator, a middle-aged individual with a thoughtful expression, stands in a modern classroom looking at a holographic display of AI technology. The hologram includes elements like text, images, and graphs, symbolizing the impact of AI on education. The classroom is equipped with digital tools, and there are a few students in the background, reflecting a contemporary educational setting

The Allure and Risks of Generative AI

I acknowledge the remarkable capabilities of tools like ChatGPT. In minutes, these AIs can generate personalized lessons, multiple choice quizzes, presentation slides and more, tailored to specific educational contexts. Teachers and students alike are understandably drawn to technologies that can enhance learning and make routine tasks easier. However, I also caution that generative AI carries risks. If over-relied on, it can undermine academic integrity, entrench biases, and deprive students of opportunities to develop skills. As I argue, banning these tools outright is unrealistic, but educators must remain vigilant about their appropriate usage. The solution resides in open communication, modifying assessments, and focusing more on process than rote products.

Rethinking Assessment in the AI Era

For me, the rise of generative AI necessitates rethinking traditional assessment methods. With tools that can rapidly generate convincing text and other media, essays and exams are highly susceptible to academic dishonesty. However, I suggest assessment strategies like oral defenses, process-focused assignments, and interactive group work hold more integrity. Additionally, I propose reconsidering the primacy placed on summative grading over formative feedback. Removing or deemphasizing grades on early assignments could encourage intellectual risk-taking. Overall, rather than intensifying surveillance, I argue educators should seize this moment to make assessment more dynamic, dialogic and better aligned with course objectives.

Leveraging AI to Enhance Learning

While acknowledging the risks, I also see generative AI as holding great potential for enhancing my teaching and student learning. I demonstrated how these tools can aid personalized learning via customized study materials and tutoring interactions. They can also streamline time-consuming academic tasks like note-taking, literature reviews and citation formatting. Further, I showed how AI could facilitate more personalized feedback for students at scale. Technologies cannot wholly replace human judgment and mentoring relationships, but they can expand educator capacities. However, to gain the full benefits, both educators and students will need guidance on judiciously leveraging these powerful tools.

Problematizing Generative AI and Bias

One major concern I surface is how biases embedded in training data get reproduced in generative AI outputs. For instance, visual AI tools typically depict professors as predominantly white males. However, I note programmers are already working to address this issue by introducing checks against bias and nudging AIs to produce more diverse output. While acknowledging these efforts, I also caution about potential superficiality if underlying data sources and assumptions go unexamined. Pushing back on biases in AI requires grappling with systemic inequities still present in society and institutions. There are no quick technical fixes for complex socio-historical forces, but civil rights and ethics must be central considerations moving forward.

Alt text: An educator confidently leads a group of diverse students through a digitally-enhanced educational landscape. The classroom is integrated with ethical AI technology, featuring computers and interactive displays. This symbolizes a positive and creative use of AI in education. The students appear engaged and focused, while the educator points towards a bright, digital future, illustrating hope and adaptability in navigating the challenges of AI

Navigating Uncharted Waters

In conclusion, I characterize this as a watershed moment full of possibility, but also peril. Preparing myself and fellow educators and students to navigate this terrain will require open, flexible mindsets and reexamining some core assumptions. Knee-jerk reactions could compound risks, while inaction leaves us vulnerable. Addressing challenges presented by AI demands engaging ethical perspectives, but also pedagogical creativity. By orienting toward learning processes over products and developing more robust assessments, educators can cultivate academic integrity and the all-important critical thinking skills needed in an AI-driven world. This will necessitate experimentation, ongoing dialogue and reconsidering conventional practices. While daunting, I ultimately express confidence that with care, wisdom and humanity, educators can guide students through the promise and complexity of this new frontier.

My first GPT

Just when you think you’re getting a handle on things…

So this week’s big announcement was the (still being rolled out) ability to create custom GPTs. Just as I was getting to grips with doing this in the OpenAI playground, it’s now completely WYSIWYG (for many) from within ChatGPT, which has had an 8-bit to 16-bit type upgrade in graphics to boot. As much as I want to encourage use of Bing Chat for a bunch of institutional reasons, I am yet again pulled back to ChatGPT with the promise of custom GPTs (do we have to call them that?). After a few false starts yesterday, due to issues of some sort with the roll out I imagine, today it has been seamless and smooth. I learnt quickly that you can get real precision from configuring the instructions. For example, I have given mine the specific instruction to share link X in instance Y and link A in instance B. To create the foundation I have combined links with uploaded documents and so far my outputs have been pretty good. I think I will need longer and much more precise instructions, as the responses still veer to the general a little too much, but it is feeding from my foundation well. Here’s how it looks in the creation screen:

Alt text: Screenshot of chatGPT custom GPT authoring window showing boxes to fill in including Name, Description, Instruction and Conversation starters

It comes with a testing space adjacent to the creation window and options to share:

Alt text: Screenshot of dropdown menu in custom GPT authoring window showing choice to publish bot to only me, only people with link and public

And this is the screen you get if you access the link (but recipients themselves must have the new creation ability to access custom bots):

Alt text: Screenshot from bot homescreen showing familiar ChatGPT interface but with personalised image, bot name and suggested questions

And finally the chatbot window is familiar and, as can be seen, focussed on my data:

Alt text: Q & A with bot. The question is ‘How do I sign up for the AI course?’ It gives detailed information and links directly to it.

I actually think this will re-revolutionise how we think about large language models in particular and will ultimately impact workflows for professional service and academic staff as well as students in equal measure.

Making and using assistant bots in OpenAI playground

The ChatGPT numbers are staggering: 100 million weekly users and 2 million developers. I wish I had the time and skills to be included in the developer stat, but what is notable is that even someone like me (a compulsive fiddler with very limited tech skills) can use the ‘playground’ to create a personal assistant bot. Using GPT-4, the difference between outputs here and from within, say, ChatGPT or any other large language model, is that I define the instruction (in this case focussing on sources to refer responses to and the nature of the role) and, crucially, use uploaded documents as the foundation information. The post below was generated using a prompt that asked for a summary of key information from across our guidance documentation on Gen AI at King’s, then improved because the first output included non-descriptive hyperlinks. I hear and have seen that the system is struggling to keep up with interest, but it’s well worth having a look I think, because in no time OpenAI and/or smart developers will create user-friendly wrappers that will enable anyone to have their own pre-trained bots to support whatever it is they are working on. This, I think, will change the way we think about LLMs relatively quickly.

Alt text: Screenshot of the assistant bot creation interface. The bulk of the text is duplicated below.

Sample output (I have tweaked it slightly to ensure it is factually accurate but have left the tone and style as is, just for illustration):

King’s Generative AI info

In the realm of higher education, generative AI stands as a groundbreaking technology that’s redefining the scope of learning and teaching. King’s College London remains a beacon of innovation by methodically embracing the capabilities of AI, while also tactfully navigating the intricacies it introduces to the academic environment. We have devised a comprehensive framework that facilitates informed engagement with generative AI across the university community.

Navigating the AI Landscape in Higher Education

To equip our community with a nuanced understanding of generative AI’s role in academia, we have developed an insightful short course titled Generative AI in Higher Education, available on FutureLearn. This course invites participants to explore the multifaceted ways in which AI can enrich academic disciplines and paves the way for meaningful discussions about the evolution of educational practices.

Tailored Guidance for All Stakeholders

Our dedication to fostering a responsible AI-integrated culture is reflected in the guidance documents created specifically for each stakeholder within King’s College London:

For programme and module leaders, it’s essential to consider whether guidance on the responsible use of generative AI has been articulated to students. As a cornerstone of our initiative, we have drafted a detailed guide for programme and module leaders to ensure transparency in the permissible applications of AI.

Lecturers and GTAs find themselves equipped with recommendations designed to seamlessly integrate AI into their instructional and evaluative methodologies.

Students’ guidance addresses the pivotal aspects of generative AI usage within their academic journey while maintaining academic integrity standards. An integral part of this guidance is an encouragement to engage with tutors on the appropriate use of AI in particular assignments.

Policy Enforcements and Ethical Considerations

Information on detection and acknowledgment of AI-generated content has been published, underscoring the significance of integrity in academic pursuits.

Conceptual Clarity through Definitions

The university community can access a comprehensive lexicon and context for generative AI in our context and definitions document, poised to enhance understanding and conversation about this emergent technology.

Revolutionising Assessment

With generative AI reshaping potential assessment methods, King’s College London embraces new approaches to assessment, dedicated to cultivating profound learning experiences and reflective critical thinking among students.

Practical Insights and Knowledge Sharing

Martin Compton shares his unique perspective and inventive tactics for integrating generative AI into academic settings through his personal blog, a source of inspiration and practical advice.

Conclusion

The proactive stance of King’s College London in the sphere of generative AI serves as a testament to our commitment to excellence. By providing clear and structured guidance that caters to every role within the institution, we stride forward into the educational future armed with foresight, creativity, and an unwavering commitment to academic integrity.

Bias in AI image outputs

A really convenient illustration of how biases in training data find their way into generative AI outputs for me these last few months has been to show images of ‘a university professor from….’ as generated by Midjourney. The first example below is exactly that and I will use it today when talking with colleagues in the history department: ‘A history professor at King’s College London’. But, updates and new releases appear at first glance to be tackling the issue head on. The second set of images is exactly the same prompt generated via Dall-e 3 (via the mobile version of ChatGPT-4). Positive me is delighted. Cynical me, though, is more than sceptical. The training data is unlikely to have changed so much as to result in this, so what has? A shroud of diversity, produced through clever algorithmic tweaking, masking the realities of the foundation that sits beneath. The system level technique is a deliberate effort to reflect more accurately the diversity of the world where prompts do not specify gender or ethnicity. This may well make a significant contribution to tackling the bias issue but we shouldn’t kid ourselves that it has been resolved.

Alt text: This is a single image divided into four panels, and generated in Midjourney via Discord. Each panel displays a photo realistic portrait of a different man against an architectural backdrop, presumably a historic building or courtyard. All appear white, all wear glasses, all are formally dressed and three wear ties.
Alt text: A collection of four images showcasing AI Dall-e 3 interpretations of the prompt “history professors at Kings College London” in different scenarios and styles: two women and two men of different ethnicities in different styles from photorealistic to simple line drawing.

Babies and Bathwater: How Far Will AI Necessitate an Assessment Revolution?

By Martin Compton & Chris Rowell

Recast version (auto podcast)

Caveat: This two-for-one post was generated using multiple AI technologies. It is drawn from the transcript of an event held this afternoon (6th October 2023), which was the first in a series of conversations about AI hosted by Chris Rowell at UAL. We thought it would be an interesting experiment to produce a blog summary of the key ideas and themes but then we realised that it was Friday afternoon and we both have lives too. So… we put AI tools to work: first MS Teams AI provided an instant transcript, then Claude AI filtered the content and separated it into two main chunks (Martin answering questions and then open discussion). Third, we used the prompt in ChatGPT: Using the points made by Martin Compton write a blog post of 500-750 words that captures the key points he raises in full prose, using the style and tone he uses here. Call the post “Babies and bathwater: how far will AI necessitate an assessment revolution?”. Then we did something similar with the open discussion and that led to part two of this post below. Finally, I used some keywords to generate some images in Bing Chat, which uses Dall-e 3, to decorate the text.

Part 1: The conversation

Attempt 1: AI generated image (Using Dall-e3 via Bing Chat) of computer monitor showing article called ‘Babies and Bathwater’ below which is an image of two babies in a sort of highchair/ bath combo

The ongoing dialogue around AI’s influence on education often has us pondering over the depth and dimensions of the issue. Our peers frequently express their concerns about students using AI to craft essays and generate images for their assessments. Recently, I (Chris) stumbled upon the AI guidelines by King’s, urging institutions to enable students and staff to become AI literate. But the bigger question looms large: what does being AI literate truly entail?

Attempt 2: AI generated image (Using Dall-e3 via Bing Chat) of computer monitor showing article called ‘Babies and Bathwater?’ below which is an image of a robot

For me (Martin), this statement from the Russell Group principles on generative AI has been instrumental in persuading some skeptics in the academic realm of the necessity to engage. It’s clear that AI literacy isn’t just another buzzword. It’s a doorway to stimulating dialogue. It’s about addressing our anxieties and reservations, then channeling those emotions to drive conversations around teaching, assessment, and learning.

Truth be told, when we dive deep into the matter of AI literacy, we’re essentially discussing another facet of information literacy. It’s a skill we aim to foster in our students and one that, as educators, we should continually refine in ourselves. Yet, I often feel that the larger academic community might not be doing enough to hone these skills, especially in the digital age where misinformation spreads like wildfire.

With the rise of AI technologies like ChatGPT, I was both amazed and slightly concerned. The first time I tested it, the results left me in awe. However, on introspection, I realized that if an AI can flawlessly generate a university-level essay, then it’s high time we scrutinized our assessments. It’s not about the capabilities of AI; it’s about reassessing the nature and objectives of our examinations.

When my colleagues seek advice on navigating this AI-augmented educational landscape, my primary counsel is simple: don’t panic. Instead, let’s critically analyze our current assessment methodologies. Our focus should pivot from regurgitation of facts to evaluating understanding and application. And if a certain subject demands instant recall of information, like in medical studies, we should stick to time-constrained evaluations.

Attempt 3: AI generated image (Using Dall-e3 via Bing Chat) of computer monitor showing article called ‘Babies and Bathnwater’ [sic], below which is an image of some very disturbingly muscled babies

To make our existing assessments less susceptible to AI, it’s crucial to reflect on their core objectives. This takes me back to the fundamental essence of pedagogy, where we need to continuously question and redefine our approach. Are we merely conducting assessments as a formality, or are they genuinely driving learning? It’s imperative to emphasize the process as much as the final output.

Now, if you ask me whether we should incorporate AI into our summative assessments, my perspective remains fluid. While today it might seem like a radical notion, in the future, it could be as commonplace as using the internet for research. But while we’re in this transitional phase, understanding and integrating AI should be done judiciously.

Lastly, when it comes to AI-generated feedback for students, I believe there’s potential, albeit with certain limitations. There’s undeniable value in students receiving feedback from various sources. Yet, we must tread cautiously to ensure academic integrity.

In essence, as educators and advocates of lifelong learning, we must embrace the challenges AI brings to our table, approach them with a critical lens, and adapt our strategies to nurture an equitable, AI-literate generation.

Part 2: Thoughts from the (bathroom) floor: Assessing Process Over Product in the Age of AI

The following is a synthesis of comments made during the discussion that ensued after the initial Q & A conversation.

Valuing Creation Process over End Product

There’s been a long-standing tradition in education of assessing the final product. Be it a project, an essay, or a painting, the emphasis has always been on the end result. But isn’t the journey as significant, if not more so? The time has come for assessments to shift their focus from the finished piece to the process behind its creation. Such an approach would not only value the hard work and thought process of a student but also celebrate their research journey.

Moving Beyond Memorization

Currently, knowledge reproduction assessments rule the roost. Students cram facts, only to regurgitate them during exams. However, the real essence of learning lies in fostering higher-order thinking skills. It’s crucial to design assessments that challenge students to analyze, evaluate, and create. This way, we’re nurturing thinkers and not just fact-repeating robots.

Embracing AI in the Classroom

The introduction of AI image generators in classroom projects was met with varied reactions. Some students weren’t quite thrilled with what the AI generated for them. However, this sparked a pivotal dialogue about the value of showcasing one’s process rather than merely submitting an end product.

It became evident that possessing a good amount of subject knowledge positions students better to use AI tools effectively, minimizing misuse. This draws a clear parallel between disciplinary knowledge and sophisticated AI usage. Today, employers prize graduates who can adeptly wield AI. Declining to use AI is no longer a strength but a weakness.

The Ever-Evolving AI Landscape

As AI tools constantly evolve and become more sophisticated, we can expect students to step into universities already acquainted with these tools. However, just familiarity isn’t enough. Education must pivot towards fostering honest AI usage and teaching students to discern between appropriate and inappropriate uses.

Critical AI Literacy: The Need of the Hour

AI tools, no matter how advanced, are just tools. They might churn out outputs that match a user’s intent, but it’s up to the individual to critically evaluate the AI’s output. Does it align with what you wanted to express? Does it represent your research accurately? Developing a robust AI literacy is paramount to navigate this digital landscape.

Attempt 4: AI generated image (Using Dall-e3 via Bing Chat) of computer monitor showing article called ‘Babies and Bathwater?’ below which is a photorealistic image of a baby

The Intrinsic Value of Creation

We must remember that the act of writing or creating is in itself a learning experience. Merely receiving an AI’s output doesn’t equate to learning. There’s an intrinsic value in the process of creation, an enrichment that often transcends the final product.

To sum it up, as the lines between human ingenuity and AI blur, our educational paradigm must pivot, placing process over product, fostering critical thinking, and embracing the AI wave, all while ensuring we retain our unique human touch in creation. The future beckons, and it’s up to us to shape it judiciously.

King’s approach to all things GenAI

I have used the image below a few times internally to summarise the various strands of GenAI activity from a King’s Academy/central College perspective. The stuff that’s happening in faculties is huge too but appears here somewhat inadequately as the top bubble on the right. The other two bubbles represent how we are contributing at sector level (such as the framework for responsible use), drawing on and being informed by the great work at Jisc, and how we have centred the Russell Group principles on AI as well as working closely with Microsoft as innovations and integrations are rolled out. The Staff Guidance on GenAI is published, as is Student Guidance; these are represented on the left as two main elements of the approach we are badging as the King’s AI in education Laboratory (KAIeLAB). It also references the PAIR Framework, the FREE short course and the College Teaching Fund (internal only), which this year is AI focussed; all funded projects will necessarily have a student engagement element or co-leadership.

Graphical representation of the KCL multi-faceted approach to generative AI engagement in teaching, learning and assessment (explained in text above)

Generative AI as assistive technology

In this personal account from my colleague Amy Aisha Brown, a Technology Enhanced Learning Manager at KCL, we can see a worked example of how Amy uses ChatGPT. Amy describes herself in the video as a neurodivergent member of staff and draws on personal experience of using ChatGPT for assistance in writing a succinct bio for an online learning platform. Despite the apparent simplicity of the task, Amy illustrates how generative AI significantly eased her process by aiding in initiating the task, organising ideas, and ensuring language accuracy. Through this, Amy demonstrates the potential of AI as an invaluable assistant in alleviating common challenges faced. For anyone who’s interested, you can explore Amy’s chat via OpenAI.

Video Translation: Hindi & Turkish

I tried the remarkable HeyGen in two other languages, this time ones that I don’t speak. Friends and family tell me the Hindi is accurate. The only oddity is how my glasses in the Hindi version are partially put back on my face before I actually did it in the original. AI translation is impressive. Voice synthesis in another language is impressive. Manipulating facial expressions to track translation is impressive. Put them all together and it is jaw-droppingly impressive. The audio version of this text was created using Eleven Labs, by the way. The voice is ‘Joseph’: I chose it because it is one of three British voices available and is also my son’s name.

Auto Translated English to Hindi (English captions available; Hindi captions not yet available)
Auto translated video English to Turkish (English captions available; Turkish captions not yet available)

Freedom to Learn

Resharing via my blog this video I made as a contribution to discussions we were having in my old job (feels like eons ago) under the banner of ‘freedom to learn’. I’m sharing again because we will soon be sharing a call for contributions to the ‘Freedom to Learn’ conference – save the date 5/4/24 at King’s College London – where we will be exploring the following themes:

Themes: 

  1. Rekindling a joy of learning: Was there ever a ‘golden age’ where learning for its own sake provided sufficient value? What role does/could/should ‘joy’ play in higher education? Why is there an apparent mental health crisis amongst undergraduate students? How can we innovate for joyful learning (and teaching)?
  2. Decentring grades: Realising ungrading possibilities from the micro (one class) to the meso (whole modules) through to the macro (entire programmes or even institutions). Why is there so much resistance? What are the barriers to change and how might they be overcome? To what extent can change happen given the current status quo? Do we need a grading revolution, or might we simply chip away?
  3. Caring and compassionate pedagogies: Is content still king (or queen)? How far have we realised an endeavour to weave care and compassion into our learning designs, teaching and assessments? What else could we do? Where are the pitfalls?
  4. Myth busting the modern academy: “We’ve always done it that way!” “We’re not allowed to change!” “Employers want….” “PSRBs want….” “Students want…” “A degree is all about employability” A lot of what we hear when discussing change is met with arguments like this. Why? Is tradition a strong reason to stick with convention? Are the best pedagogies those that favour economies of scale? How far do structures really impede innovation and transformation? What are you doing? What would you like to do?