Just when you think you’re getting a handle on things…
So this week’s big announcement was the (still being rolled out) ability to create custom GPTs. Just as I was getting to grips with doing this in the OpenAI playground, it’s now completely WYSIWYG (for many) from within ChatGPT, which has had an 8-bit to 16-bit type upgrade in graphics to boot. As much as I want to encourage use of Bing Chat for a bunch of institutional reasons, I am yet again pulled back to ChatGPT with the promise of custom GPTs (do we have to call them that?). After a few false starts yesterday, due to issues of some sort with the rollout I imagine, today it has been seamless and smooth. I learnt quickly that you can get real precision from configuring the instructions. For example, I have given mine the specific instruction to share link X in instance Y and link A in instance B. To create the foundation I have combined links with uploaded documents and so far my outputs have been pretty good. I think I will need longer and much more precise instructions, as the responses still veer to the general a little too much, but it is feeding from my foundation well. Here’s how it looks in the creation screen:
Alt text: Screenshot of ChatGPT custom GPT authoring window showing boxes to fill in including Name, Description, Instructions and Conversation starters
It comes with a testing space adjacent to the creation window and options to share:
Alt text: Screenshot of dropdown menu in custom GPT authoring window showing choice to publish bot to only me, only people with link and public
And this is the screen you get if you access the link (but recipients themselves must have the new creation ability to access custom bots):
Alt text: Screenshot from bot homescreen showing familiar ChatGPT interface but with personalised image, bot name and suggested questions
And finally the chatbot window is familiar and, as can be seen, focussed on my data:
Alt text: Q &amp; A with bot. The question is ‘How do I sign up for the AI course?’ The bot gives detailed information and links directly to it.
I actually think this will re-revolutionise how we think about large language models in particular, and will ultimately impact workflows for professional services and academic staff, as well as students, in equal measure.
The ChatGPT numbers are staggering: 100 million weekly users and 2 million developers. I wish I had the time and skills to be included in the developer stat, but what is notable is that even someone like me – the compulsive fiddler with very limited tech skills – can use the ‘playground’ to create a personal assistant bot. Using GPT-4, the difference between outputs here and from within, say, ChatGPT or any other large language model is that I define the instructions (in this case focussing on sources to refer responses to and the nature of the role) and, crucially, use uploaded documents as the foundation information. The post below was generated using a prompt that asked for a summary of key information from across our guidance documentation on Gen AI at King’s, and then improved because the first output included non-descriptive hyperlinks. I hear and have seen that the system is struggling to keep up with interest, but it’s well worth having a look I think, because in no time OpenAI and/or smart developers will create user-friendly wrappers that will enable anyone to have their own pre-trained bots to support whatever it is they are working on. This, I think, will change the way we think about LLMs relatively quickly.
Alt text: Screenshot of the assistant bot creation interface. The bulk of the text is duplicated below.
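For anyone tempted by the developer route rather than the WYSIWYG screen, the same idea (defined instructions plus uploaded documents as the foundation) can be expressed through the OpenAI Assistants API. A minimal sketch only: the assistant name, instruction text and file ID below are hypothetical placeholders, and the resulting dictionary would be passed to `client.beta.assistants.create(**config)` in the OpenAI Python client rather than used directly.

```python
# Sketch of configuring a retrieval-grounded assistant programmatically.
# All names, instructions and IDs here are illustrative placeholders,
# not the actual bot described in this post.

def build_assistant_config(name, instructions, file_ids):
    """Assemble the configuration dictionary that would be passed to
    client.beta.assistants.create(**config) in the OpenAI Python client."""
    return {
        "name": name,
        "model": "gpt-4-1106-preview",
        "instructions": instructions,
        # The retrieval tool lets the assistant draw on the uploaded
        # documents as its foundation information.
        "tools": [{"type": "retrieval"}],
        "file_ids": file_ids,
    }

config = build_assistant_config(
    name="King's GenAI Guide",  # hypothetical
    instructions=(
        "Answer only from the attached guidance documents. "
        "Share link X in instance Y and link A in instance B."
    ),
    file_ids=["file-abc123"],  # IDs come back from client.files.create(...)
)
```

The precision described above lives almost entirely in the `instructions` string, which is why longer and more specific instructions tend to reduce generic responses.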
Sample output (I have tweaked it slightly to ensure it is factually accurate but have left the tone and style as is, just for illustration):
King’s Generative AI info
In the realm of higher education, generative AI stands as a groundbreaking technology that’s redefining the scope of learning and teaching. King’s College London remains a beacon of innovation by methodically embracing the capabilities of AI, while also tactfully navigating the intricacies it introduces to the academic environment. We have devised a comprehensive framework that facilitates informed engagement with generative AI across the university community.
Navigating the AI Landscape in Higher Education
To equip our community with a nuanced understanding of generative AI’s role in academia, we have developed an insightful short course titled Generative AI in Higher Education, available on FutureLearn. This course invites participants to explore the multifaceted ways in which AI can enrich academic disciplines and paves the way for meaningful discussions about the evolution of educational practices.
Tailored Guidance for All Stakeholders
Our dedication to fostering a responsible AI-integrated culture is reflected in the guidance documents created specifically for each stakeholder within King’s College London:
For programme and module leaders, it’s essential to consider whether guidance on the responsible use of generative AI has been articulated to students. As a cornerstone of our initiative, we have drafted a detailed guide for programme and module leaders to ensure transparency in the permissible applications of AI.
Lecturers and GTAs find themselves equipped with recommendations designed to seamlessly integrate AI into their instructional and evaluative methodologies.
Students’ guidance addresses the pivotal aspects of generative AI usage within their academic journey while maintaining academic integrity standards. An integral part of this guidance is an encouragement to engage with tutors on the appropriate use of AI in particular assignments.
Policy Enforcements and Ethical Considerations
Information on detection and acknowledgment of AI-generated content has been published, underscoring the significance of integrity in academic pursuits.
Conceptual Clarity through Definitions
The university community can access a comprehensive lexicon and context for generative AI in our context and definitions document, poised to enhance understanding and conversation about this emergent technology.
Revolutionising Assessment
With generative AI reshaping potential assessment methods, King’s College London embraces new approaches to assessment, dedicated to cultivating profound learning experiences and reflective critical thinking among students.
Practical Insights and Knowledge Sharing
Martin Compton shares his unique perspective and inventive tactics for integrating generative AI into academic settings through his personal blog, a source of inspiration and practical advice.
Conclusion
The proactive stance of King’s College London in the sphere of generative AI serves as a testament to our commitment to excellence. By providing clear and structured guidance that caters to every role within the institution, we stride forward into the educational future armed with foresight, creativity, and an unwavering commitment to academic integrity.
A really convenient illustration of how biases in training data find their way into generative AI outputs has, for me these last few months, been to show images of ‘a university professor from…’ as generated by Midjourney. The first example below is exactly that, and I will use it today when talking with colleagues in the history department: ‘A history professor at King’s College London’. But updates and new releases appear at first glance to be tackling the issue head on. The second set of images is exactly the same prompt generated via Dall-e 3 (via the mobile version of ChatGPT-4). Positive me is delighted. Cynical me, though, is more than sceptical. The training data is unlikely to have changed so much as to result in this, so what has? A shroud of diversity, masking the realities of the foundation that sits beneath, through clever algorithmic tweaking. The system-level technique is a deliberate effort to reflect more accurately the diversity of the world where prompts do not specify gender or ethnicity. This may well mount a significant challenge to the bias issue, but we shouldn’t kid ourselves that it has been resolved.
Alt text: This is a single image divided into four panels, generated in Midjourney via Discord. Each panel displays a photorealistic portrait of a different man against an architectural backdrop, presumably a historic building or courtyard. All appear white, all wear glasses, all are formally dressed and three wear ties.
Alt text: A collection of four images showcasing Dall-e 3 interpretations of the prompt “history professors at King’s College London” in different scenarios and styles: two women and two men of different ethnicities, in styles ranging from photorealistic to simple line drawing.
Caveat: This two-for-one post was generated using multiple AI technologies. It is drawn from the transcript of an event held this afternoon (6th October 2023), which was the first in a series of conversations about AI hosted by Chris Rowell at UAL. We thought it would be an interesting experiment to produce a blog summary of the key ideas and themes, but then we realised that it was Friday afternoon and we both have lives too. So… we put AI tools to work: first, MS Teams AI provided an instant transcript; then Claude AI filtered the content and separated it into two main chunks (Martin answering questions and then open discussion). Third, we used this prompt in ChatGPT: ‘Using the points made by Martin Compton write a blog post of 500-750 words that captures the key points he raises in full prose, using the style and tone he uses here. Call the post “Babies and bathwater: how far will AI necessitate an assessment revolution?”’. Then we did something similar with the open discussion, and that led to part two of this post below. Finally, I used some keywords to generate some images in Bing Chat (which uses Dall-e 3) to decorate the text.
Part 1: The conversation
Attempt 1: AI generated image (Using Dall-e3 via Bing Chat) of computer monitor showing article called ‘Babies and Bathwater’ below which is an image of two babies in a sort of highchair/ bath combo
The ongoing dialogue around AI’s influence on education often has us pondering over the depth and dimensions of the issue. Our peers frequently express their concerns about students using AI to craft essays and generate images for their assessments. Recently, I (Chris) stumbled upon the AI guidelines by King’s, urging institutions to enable students and staff to become AI literate. But the bigger question looms large: what does being AI literate truly entail?
Attempt 2: AI generated image (Using Dall-e3 via Bing Chat) of computer monitor showing article called ‘Babies and Bathwater?’ below which is an image of a robot
For me (Martin), this statement from the Russell Group principles on generative AI has been instrumental in persuading some skeptics in the academic realm of the necessity to engage. It’s clear that AI literacy isn’t just another buzzword. It’s a doorway to stimulating dialogue. It’s about addressing our anxieties and reservations, then channeling those emotions to drive conversations around teaching, assessment, and learning.
Truth be told, when we dive deep into the matter of AI literacy, we’re essentially discussing another facet of information literacy. It’s a skill we aim to foster in our students and one that, as educators, we should continually refine in ourselves. Yet, I often feel that the larger academic community might not be doing enough to hone these skills, especially in the digital age where misinformation spreads like wildfire.
With the rise of AI technologies like ChatGPT, I was both amazed and slightly concerned. The first time I tested it, the results left me in awe. However, on introspection, I realized that if an AI can flawlessly generate a university-level essay, then it’s high time we scrutinized our assessments. It’s not about the capabilities of AI; it’s about reassessing the nature and objectives of our examinations.
When my colleagues seek advice on navigating this AI-augmented educational landscape, my primary counsel is simple: don’t panic. Instead, let’s critically analyze our current assessment methodologies. Our focus should pivot from regurgitation of facts to evaluating understanding and application. And if a certain subject demands instant recall of information, like in medical studies, we should stick to time-constrained evaluations.
Attempt 3: AI generated image (Using Dall-e3 via Bing Chat) of computer monitor showing article called ‘Babies and Bathnwater’ [sic], below which is an image of some very disturbingly muscled babies
To make our existing assessments less susceptible to AI, it’s crucial to reflect on their core objectives. This takes me back to the fundamental essence of pedagogy, where we need to continuously question and redefine our approach. Are we merely conducting assessments as a formality, or are they genuinely driving learning? It’s imperative to emphasize the process as much as the final output.
Now, if you ask me whether we should incorporate AI into our summative assessments, my perspective remains fluid. While today it might seem like a radical notion, in the future, it could be as commonplace as using the internet for research. But while we’re in this transitional phase, understanding and integrating AI should be done judiciously.
Lastly, when it comes to AI-generated feedback for students, I believe there’s potential, albeit with certain limitations. There’s undeniable value in students receiving feedback from various sources. Yet, we must tread cautiously to ensure academic integrity.
In essence, as educators and advocates of lifelong learning, we must embrace the challenges AI brings to our table, approach them with a critical lens, and adapt our strategies to nurture an equitable, AI-literate generation.
Part 2: Thoughts from the (bathroom) floor: Assessing Process Over Product in the Age of AI
The following is a synthesis of comments made during the discussion that ensued after the initial Q &amp; A conversation.
Valuing Creation Process over End Product
There’s been a long-standing tradition in education of assessing the final product. Be it a project, an essay, or a painting, the emphasis has always been on the end result. But isn’t the journey as significant, if not more so? The time has come for assessments to shift their focus from the finished piece to the process behind its creation. Such an approach would not only value the hard work and thought process of a student but also celebrate their research journey.
Moving Beyond Memorization
Currently, knowledge reproduction assessments rule the roost. Students cram facts, only to regurgitate them during exams. However, the real essence of learning lies in fostering higher-order thinking skills. It’s crucial to design assessments that challenge students to analyze, evaluate, and create. This way, we’re nurturing thinkers and not just fact-repeating robots.
Embracing AI in the Classroom
The introduction of AI image generators in classroom projects was met with varied reactions. Some students weren’t quite thrilled with what the AI generated for them. However, this sparked a pivotal dialogue about the value of showcasing one’s process rather than merely submitting an end product.
It became evident that possessing a good amount of subject knowledge positions students better to use AI tools effectively, minimizing misuse. This draws a clear parallel between disciplinary knowledge and sophisticated AI usage. Today, employers prize graduates who can adeptly wield AI; declining to use it is no longer a strength but a weakness.
The Ever-Evolving AI Landscape
As AI tools constantly evolve and become more sophisticated, we can expect students to step into universities already acquainted with these tools. However, just familiarity isn’t enough. Education must pivot towards fostering honest AI usage and teaching students to discern between appropriate and inappropriate uses.
Critical AI Literacy: The Need of the Hour
AI tools, no matter how advanced, are just tools. They might churn out outputs that match a user’s intent, but it’s up to the individual to critically evaluate the AI’s output. Does it align with what you wanted to express? Does it represent your research accurately? Developing a robust AI literacy is paramount to navigate this digital landscape.
Attempt 4: AI generated image (Using Dall-e3 via Bing Chat) of computer monitor showing article called ‘Babies and Bathwater?’ below which is a photorealistic image of a baby
The Intrinsic Value of Creation
We must remember that the act of writing or creating is in itself a learning experience. Merely receiving an AI’s output doesn’t equate to learning. There’s an intrinsic value in the process of creation, an enrichment that often transcends the final product.
To sum it up, as the lines between human ingenuity and AI blur, our educational paradigm must pivot, placing process over product, fostering critical thinking, and embracing the AI wave, all while ensuring we retain our unique human touch in creation. The future beckons, and it’s up to us to shape it judiciously.
I have used the image below a few times internally to summarise the various strands of GenAI activity from a King’s Academy/central College perspective. The stuff that’s happening in faculties is huge too, but appears here somewhat inadequately as the top bubble on the right. The other two bubbles represent how we are contributing at sector level (such as the framework for responsible use), drawing on and being informed by the great work at Jisc, centring the Russell Group principles on AI, and working closely with Microsoft as innovations and integrations are rolled out. The Staff Guidance on GenAI is published, as is the Student Guidance; these are represented on the left as two main elements of the approach we are badging as the King’s AI in Education Laboratory (KAIeLAB). It also references the PAIR Framework, the free short course and the College Teaching Fund (internal only), which this year is AI focussed; all funded projects will necessarily have a student engagement element or co-leadership.
Graphical representation of the KCL multi-faceted approach to generative AI engagement in teaching, learning and assessment (explained in text above)
In this personal, exemplified account from my colleague Amy Aisha Brown, a Technology Enhanced Learning Manager at KCL, we can see a worked example of how Amy uses ChatGPT. Amy describes herself in the video as a neurodivergent member of staff and draws on personal experience of using ChatGPT for assistance in writing a succinct bio for an online learning platform. Despite the apparent simplicity of the task, Amy illustrates how generative AI significantly eased her process by helping her initiate the task, organise ideas, and ensure language accuracy. Through this, Amy demonstrates the potential of AI as an invaluable assistant in alleviating common challenges. For anyone who’s interested, you can explore Amy’s chat via OpenAI.
I tried the remarkable HeyGen in two other languages, this time ones that I don’t speak. Friends and family tell me the Hindi is accurate. The only oddity is how my glasses in the Hindi version are partially put back on my face before I actually did it in the original. AI translation is impressive. Voice synthesis in another language is impressive. Manipulating facial expressions to track the translation is impressive. Put them all together and it is jaw-droppingly impressive. The audio version of this text was created using ElevenLabs, by the way. The voice is ‘Joseph’; I chose it because it is one of three British voices available and is also my son’s name.
Auto Translated English to Hindi (English captions available; Hindi captions not yet available)
Auto translated video English to Turkish (English captions available; Turkish captions not yet available)
Resharing via my blog this video I made as a contribution to discussions we were having in my old job (feels like eons ago) under the banner of ‘freedom to learn’. I’m sharing again because we will soon be sharing a call for contributions to the ‘Freedom to Learn’ conference – save the date, 5/4/24 at King’s College London – where we will be exploring the following themes:
Themes:
Rekindling a joy of learning: Was there ever a ‘golden age’ where learning for its own sake provided sufficient value? What role does/could/should ‘joy’ play in higher education? Why is there an apparent mental health crisis amongst undergraduate students? How can we innovate for joyful learning (and teaching)?
Decentring grades: Realising ungrading possibilities from the micro (one class) to the meso (whole modules) through to the macro (entire programmes or even institutions). Why is there so much resistance? What are the barriers to change and how might they be overcome? To what extent can change happen given the current status quo? Do we need a grading revolution, or might we chip away?
Caring and compassionate pedagogies: Is content still king (or queen)? How far have we realised an endeavour to weave care and compassion into our learning designs, teaching and assessments? What else could we do? Where are the pitfalls?
Myth busting the modern academy: “We’ve always done it that way!” “We’re not allowed to change!” “Employers want…” “PSRBs want…” “Students want…” “A degree is all about employability.” A lot of what we hear when discussing change is met with arguments like these. Why? Is tradition a strong reason to stick with convention? Are the best pedagogies those that favour economies of scale? How far do structures really impede innovation and transformation? What are you doing? What would you like to do?
Below is the automated MP3 you would get if this post was an uploaded file in KEATS. If you get time, have a listen to how this sounds, especially the tabulated sections. What issues might there be?
This post accompanies a CPD event for colleagues at King’s. In it are resources referred to in the event, but please do read and try the linked activities for yourself even if you can’t attend! The resource is designed to raise awareness of what digital accessibility means and what a ‘by design’ approach to digital accessibility requires us to know and to do. The session is also an opportunity for us to pilot aspects of an (in-development) Accessibility Engagement Tool being worked on in a collaboration between KCL and UCL (two of the best ‘CLs’, in fact!). The tool is being designed to help colleagues discuss their accessibility engagement and get clear direction on what they can do to further improve the accessibility of their teaching, as far as possible in an anticipatory and planned way rather than reactively, in response to a need that had not been anticipated. The goal is to enable colleagues to set some clear digital accessibility goals irrespective of their starting point.
Accessibility in its broadest sense is about making activities, environments, and information as useable and meaningful as possible in ways that do not exclude people. It is about empowerment, about minimising frustration and about effective anticipatory design. Digital accessibility therefore ‘provides ramps and lifts into information.’ It includes ensuring that all information we create at KCL can be seamlessly consumed by everyone that wishes to access it.
The accessibility engagement model and accompanying self-assessment tool are being designed to enable colleagues to plot their own level according to a series of questions about aspects of digital accessibility. The idea is that, through a series of questions related to:
Values and beliefs
Knowledge and skills
Actions and behaviours
…the tool will plot an overall position as well as noting areas of developmental or resourcing need. As we have shaped this model, one area that has led to much discussion, consultation and head scratching is the labels we are appending to the levels. As a starting point we propose six levels of ‘maturity status’ and invite colleagues to decide which level they are currently at:
Accessibility Engagement Model
Each level below pairs an accessibility maturity status with its characteristics and indicative practices.
Level 0 – Unwilling: Context means that this is not prioritised in the current working environment given competing commitments and pressures. Time is a key point of resistance.
Level 1 – Unable: Don’t know where to start and/or in need of direction, support, and prioritisation.
Level 2 – Reluctant compliant: Awareness of accessibility principles and drivers; only adopting the bare minimum when encouraged.
Level 3 – Willing compliant: Awareness of accessibility design principles; willingly adopting a good basic level of accessibility.
Level 4 – Ally: Connected to wider pedagogical values; allies are vocal on behalf of students. Role models who provide case studies/templates for others in their departments.
Level 5 – Champion and Co-creator: Activists/innovators who work with students to understand and design more accessible approaches and resources. Potential contributors to institutional policy and strategy.
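As a thought experiment only – the actual KCL/UCL tool is still in development and its scoring logic is not published, so the rule below is pure assumption on my part – plotting an overall position from per-dimension question scores might look something like this:

```python
# Hypothetical sketch: map self-assessment scores (0-5) in each of the
# three question dimensions to one of the six maturity levels, and flag
# the weakest dimension as a developmental need. The averaging/rounding
# rule is an assumption, not the real tool's design.

LEVELS = ["Unwilling", "Unable", "Reluctant compliant",
          "Willing compliant", "Ally", "Champion and Co-creator"]

def overall_position(scores_by_dimension):
    """Return (overall maturity level, weakest dimension)."""
    averages = {dim: sum(s) / len(s)
                for dim, s in scores_by_dimension.items()}
    overall = round(sum(averages.values()) / len(averages))
    weakest = min(averages, key=averages.get)
    return LEVELS[overall], weakest

level, focus = overall_position({
    "Values and beliefs": [4, 5, 4],
    "Knowledge and skills": [2, 3, 2],
    "Actions and behaviours": [3, 3, 2],
})
# Here the averages come out around 3, suggesting 'Willing compliant',
# with 'Knowledge and skills' flagged as the area needing support.
```

Whatever rule the real tool adopts, the useful output is less the single level than the per-dimension picture it reveals.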
Digitally Accessible Learning Design
Whilst the online tool is still under construction, we will invite colleagues in the session to respond briefly to some ‘actions/behaviours’ statements. The results will be shared here after the event.
For each statement you are able to choose from 0-5 as follows
The statements in the slides can also be seen below.
I use descriptive hyperlinks rather than ‘click here’ or unconcealed links
I ensure that visual materials are conveyed effectively to those who cannot see them using alternative text descriptions and audio descriptions
I ensure my documents are navigable with a structured set of headings
I ensure tables are easy to read and have clear heading rows
I can use/ enable automatic speech recognition captions in live sessions
I caption and/or provide transcripts for all multimedia I create
I offer a range of formats for my materials e.g., PDF, html and docx
I signpost students to assistive technologies so that they can have more support accessing materials
I share electronic content with students (such as slides) ahead of teaching sessions
I accessibility check my documents before finalising them
I explain acronyms and jargon when I use them
I check my work for colour contrast issues
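The colour contrast check in that last statement is the most mechanical of the list, and the formula behind it (from WCAG 2.1) is simple enough to sketch. This is illustrative only, not a replacement for a proper accessibility checker, and the colours tested are just examples:

```python
# WCAG 2.1 contrast ratio between two sRGB colours (0-255 per channel).
# Illustrative sketch of the published formula, not a full checker.

def relative_luminance(rgb):
    """Relative luminance of an sRGB colour."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio from 1:1 (identical) up to 21:1 (black on white)."""
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background gives the maximum 21:1 ratio;
# WCAG AA requires at least 4.5:1 for normal body text.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
```

Running pairs of brand colours through a check like this before finalising a document is exactly the kind of anticipatory habit the statements above describe.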
As you will note, this is not an exhaustive list like the King’s Digital Education Accessibility Baseline, but it is a series of indicative behaviours that will allow us to position ourselves and set clear developmental targets (as well as a way into engaging more thoroughly with the baseline itself).