The AI Literacy Frontier

Despite the best efforts of Storm Isha, I still managed to present at the 2024 National Conference on Gen AI in Ulster today (albeit remotely). Following on from my WONKHE post, I focussed on the ‘how’ and ‘who’ of AI literacy in universities and proposed 10 (and a bit) principles.

When I was planning it, I happened to have a chat with my son about AI translation getting us a step closer to Star Trek universal translators and how AI is akin to a journey …‘where no-one has gone before’. Before I knew it my abstract was chock-full of Star Trek refs and my presentation played fast and loose with the entire franchise.

The slides and my suggested principles are here.

AI image depicting a scene on the bridge of a Star Trek-inspired starship, with a baby in the captain’s chair wearing a Starfleet-inspired uniform.

In the presentation I connected with Dr Kara Kennedy’s AI Literacy Framework, exemplified a critical point with reference to Dr Sarah Eaton’s Tenets of Post-plagiarism and shared some resources, including my custom GPT ‘Trek: The Captain’s Counsel’ and a really terrible AI-generated song about my presentation.

Abstract

A year beyond our first contact with ChatGPT, the Russell Group has set a course with their principles on generative AI’s use in education, acting as an essential guide for the USS Academia. Foremost among these is the commitment to fostering AI literacy: an enterprise where universities pledge to equip students and staff for the journey ahead. This mission, however, navigates through sectors where perspectives on AI range from likely nemesis to potential prodigy.

Amidst the din of divergent voices, the necessity for critical, cohesive, and focused discourse in our scholarly collectives is paramount. In this talk Martin argues that we need to view AI and all associated opportunities and challenges as an undiscovered country where we have a much greater chance not only of survival in this strange new world but also of flourishing if we navigate it together. This challenge to the conventional knowledge hierarchies in higher education suggests genuine dialogue and collaboration are essential prerequisites to success.

Martin will chart the course he’s plotted at King’s College London, navigating through the nebula of complex AI narratives. He will share insights from a multifaceted strategy aimed at fostering AI understanding and literacy across the community of stakeholders in their endeavour to ensure the voyage is one of shared discovery.

13 ways you could integrate AI tools into teaching

For a session I am facilitating with our Natural, Mathematical and Engineering Sciences faculty, I have pulled together below a few ideas drawn from a ton of brilliant suggestions colleagues from across the sector have shared in person, at events or via social media. There’s a bit of overlap, but I am trying to address the often-heard criticism that what’s missing from the guidance, theory and tools out there are some easily digestible, accessible and practically-focussed suggestions that focus on teaching rather than assessment and feedback. Here’s my first tuppenceworth:

1. AI Ideator: Students write prompts to produce a given number of outputs (visual, text or code) in response to a design or problem brief. Groups select the top 2-3 and critique the viability of the solutions in detail. (AI as inspiration)

2. AI Case Studies: Students analyse real-world examples where AI has influenced various practices (e.g., medical diagnosis, finance, robotics) to develop contextual intelligence and critical evaluation skills. (AI as disciplinary content focus)

3. AI Case Study Creator: Students are given AI-generated vignettes, micro case studies or scenarios related to a given topic and discuss responses/solutions. (AI as content creator)

4. AI Chatbot Research: For foundational theoretical principles or contextual understanding, students interact with AI chatbots, document the conversation, and evaluate the experience, enhancing their research, problem-solving, and understanding of user experience. (AI as tool to further understanding of content)

5. AI Restructuring: Students are tasked with using AI tools to reformat content into different media according to pre-defined principles. (AI for multimedia reframing)

6. AI Promptathon: Students formulate prompts for AI to address significant questions in their discipline, critically evaluate the AI-generated responses, and reflect on the process, thereby improving their AI literacy and collaborative skills. (Critical AI literacy and disciplinary formative activity)

7. AI Audit: Students use AI to generate short responses to open questions, critically assess the AI’s output, and then give a group presentation on their findings. Focus could be on accuracy and/or clarity of outputs. (Critical AI literacy)

8. AI Solution Finder: Applicable post work placement or with case studies/scenarios, students identify real-world challenges and propose AI-based solutions, honing their creativity, research skills, and professional confidence. (AI in context)

9. AI Think, Pair & Share: Students individually generate AI responses to a key challenge, then pair up to discuss and refine their prompts, improving their critical thinking, evaluation skills, and AI literacy. (AI as dialogic tool)

10. Analyse Data: Students use AI tools to work with open-source data sets to answer pressing questions in their discipline, thereby developing cultural intelligence, data literacy, and ethical understanding. (AI as analytical tool)

11. AI Quizmaster: Students design quiz questions and use AI to generate initial ideas, which they then revise and peer-review, fostering foundational knowledge, research skills, and metacognition. (AI as concept checking tool)

12. Chemistry, Physics or Maths Principle Exploration with AI Chatbot: Students engage with an AI chatbot to learn and understand a specific principle. The chatbot can explain concepts, answer queries, and provide examples. Students (with support of a GTA, near peer or academic tutor) compare the AI’s approach to their own process/understanding. (AI chatbot tutor)

13. Coding Challenge – AI vs. Manual Code Comparison: Coding students create a short piece of code for a specific purpose and then compare their code to an AI-generated version produced for the same purpose (see the sketch below). This comparison can include an analysis of efficiency, creativity, and effectiveness. (AI as point of comparison)
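
By way of illustration of #13, here is a minimal sketch of how the efficiency strand of that comparison might be scaffolded in Python. To be clear, this is an invented example: both functions are stand-ins for the sort of thing a student and an AI assistant might respectively produce, not code from a real session.

```python
import timeit

def manual_unique_words(text: str) -> list[str]:
    # A plausible hand-written approach: keep first-seen order,
    # checking membership in a list (an O(n) operation each time).
    seen = []
    for word in text.lower().split():
        if word not in seen:
            seen.append(word)
    return seen

def ai_suggested_unique_words(text: str) -> list[str]:
    # The sort of alternative an AI assistant often proposes:
    # dict.fromkeys preserves order with O(1) membership checks.
    return list(dict.fromkeys(text.lower().split()))

sample = "to boldly go where no one has gone before " * 2000

# Students time both versions, then discuss *why* one is faster,
# alongside readability, creativity and fitness for purpose.
for fn in (manual_unique_words, ai_suggested_unique_words):
    secs = timeit.timeit(lambda: fn(sample), number=10)
    print(f"{fn.__name__}: {secs:.3f}s")
```

The numbers themselves matter less than the discussion they provoke about why the two approaches differ.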

Team Based Learning revisited

originally posted here: https://reflect.ucl.ac.uk/mcarena/2022/03/31/tbl/

I have been forced to confront a prejudice this week and I’m very glad I have because I have significantly changed my perspective on Team Based Learning (TBL) as a result. When I cook I rarely use a recipe: rough amounts and a ‘bit of this; bit of that’ get me results that wouldn’t win Bake Off but they do the job. I’m a bit anti-authority I suppose and I might, on occasion, be seen as contrary given a tendency to take devil’s advocate positions. As a teacher educator, and unlike many of my colleagues over the years, I tend to advocate a more flexible approach to planning, am most certainly not a stickler for detailed lesson plans and maintain a scepticism (that I think is healthy) about the affordances of learning outcomes and predictably aligned teaching. I think this is why I was put off TBL when I first read about it. Call something TBL and most people would imagine something loose, active, collaborative and dialogic. But TBL purists (and maybe this was another reason I was resistant) would holler: ‘Hang on! TBL is a clearly delineated thing! It has a clear structure and process and language of its own.’ However, after attending a very meta-level session run by my colleague, Dr Pete Fitch, this week, I was embarrassed to realise how thoroughly I’d misunderstood its potential flexibility and adaptability, as well as the potential of aspects I might be sceptical of in other contexts.

Established as a pedagogic approach in the US in the 1970s, TBL is now used widely across medical education globally as well as in many other disciplinary areas. In essence, it provides a seemingly rigid structure to a flipped approach that typically looks like this:

  • Individual pre-work – reading, videos etc.
  • Individual readiness assurance test (IRAT) – in-class multiple-choice test
  • Team readiness assurance test (TRAT) – same questions, discussed and agreed; points awarded according to how few errors are made getting to the correct response (see the sketch after this list)
  • Discussion and clarification (and challenge) – opportunities to argue, contest, seek clarification from the tutor
  • Application – opportunity to take core knowledge and apply it
  • Peer evaluation
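
For anyone (like me) who wanted the scoring spelt out, here is a minimal sketch of the TRAT points logic, assuming the descending 4/2/1/0 schedule I have seen used elsewhere – the exact values vary between implementations, so treat these as placeholders:

```python
def trat_points(scratches_used: int, schedule=(4, 2, 1)) -> int:
    """Points for one question: the fewer scratches a team needs
    before revealing the correct answer, the more points it earns.
    The 4/2/1/0 schedule is a common convention, not a TBL rule."""
    if scratches_used < 1:
        raise ValueError("at least one scratch is needed to reveal an answer")
    index = scratches_used - 1
    return schedule[index] if index < len(schedule) else 0

# A team that was right first time on Q1, second attempt on Q2,
# and needed four attempts on Q3:
print([trat_points(n) for n in (1, 2, 4)])  # -> [4, 2, 0]
```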

This video offers a really clear summary of the stages:

Aside from the rigid structure, my original resistance was rooted in the knowledge-focussed tests and how this would mean sessions started with silent, individual work. However, having been through the process myself (always a good idea before mud-slinging!), I realised that this stage could achieve a number of goals as well as the ostensible self-check on understanding. It provides a framing point for students to measure understanding of materials read; it offers – completely anonymously, even to the tutor – an opportunity to gauge understanding within a group; it provides an ipsative opportunity to measure progress week by week and acts additionally as a motivator to actually engage with the pre-session work (increasingly so as the learning culture is established). It turns a typically high-stakes, high-anxiety activity (an individual test) into a much lower-stakes one and provides a platform from which initial arguments can start at the TRAT stage. A further advantage, therefore, could be that it helps students formatively with their understanding of and approaches to multiple-choice examinations in those programmes that utilise this summative assessment methodology. In this session I changed my mind on three questions during the TRAT, two of which I was quietly (perhaps even smugly) confident I’d got right. A key part of the process is the ‘scratch to reveal if correct’ cards, which Pete had re-imagined with some clever manipulation of Moodle questions. We discussed the importance of the visceral ‘scratching’ commitment in comparison to a digital alternative and I do wonder if this is one of those things that will always work better analogue!

The cards are somewhat like those shown in this short video:

To move beyond knowledge development, it is clear the application stage is fundamental. Across all stages it was evident how much effort is needed at the design stage. Writing meaningful, level-appropriate multiple-choice questions is hard. Level-appropriate, authentic application activities are similarly challenging to design. But the payoffs can be great and, as Pete said in the session, the design lasts more than a single iteration. I can see why TBL lends itself so well to medical education but this session did make me wish I was still running my own programme so I could test this formula in a higher ed or digital education context.

An example of how it works in the School of Medicine in Nanyang Technological University can be seen here:

The final (should have been obvious) thing spelt out was that the structure and approach can be manipulated. Despite appearances, TBL does enable a flexible approach. I imagine one-off and routine adaptations according to contextual need are commonplace. I think if I were to design a TBL curriculum, I’d certainly want to collaborate on its design. This would in itself be a departure for me, but preparing quality pre-session materials, writing good questions and working up appropriate application activities are all essential and all benefit from collaboration or, at least, a willing ‘sounding board’ colleague.

Custom GPTs

There are two main audiences for custom GPTs built within the ChatGPT Pro infrastructure. The first is anyone with a Pro account. There are other tools that allow me to build custom GPTs with minimal skills that are open to wider audiences, so I think it’ll be interesting to see whether OpenAI continue to leverage this feature to encourage new subscription purchases or whether they will open it up to stifle competitor development further. In education the ‘custom bots for others’ potential is huge but, for now, I am realising how potentially valuable they might be for the audience I did not initially consider – me.

One that is already proving useful is ‘My thesis helper’, which I constructed to pull information only from my thesis (given that even the really obvious papers never materialised, I am wondering whether this might catalyse that!) It’s an opportunity to use as source material much larger documents than copy/paste token limits allow, or even than the relatively generous (and free) 100k-token document upload Claude AI permits. In particular, it facilitates much swifter searching within the document as well as opportunities for synthesising and summarising specific sections. Another is ‘Innovating in the Academy’ (try it yourself if you have a Pro account), which uses two great sources of case studies from across King’s, collated and edited by my immediate colleagues in King’s Academy. The bot enables a more refined search as well as an opportunity to synthesise thinking.

Designed to be more outward facing is ‘Captain’s Counsel’. This I made to align with a ‘Star Trek’ extended (and undoubtedly excruciating) metaphor I’ll be using in a presentation for the forthcoming GenAI in Education conference in Ulster. Here I have uploaded some reference material but also opened it to the web. I have tried to tap into my own Star Trek enthusiasm whilst focussing on broader questions about teaching. The web-openness means it will happily respond to questions about many things under the broad scope I have identified though I have also identified some taboos. Most useful and interesting is the way it follows my instruction to address the issue with reference to Captain Kirk’s own experiences. 

Both the creation and use of customised bots enable different ways of perceiving and accessing existing information, and it is in these functions, broadly, that LLMs and image generators – whether wrapped in customised bots or not – are likely to establish a utility niche, I think, especially for folk yet to dip their toes in or whose perceptions are dominated by LLMs as free essay mills.

Assessment 2033

What will assessment look like in universities in 2033? There’s a lot of talk about how AI may finally catalyse long-needed changes to a lot of the practices we cling to but there’s also a quite significant clamour to do everything in exam halls. Amidst the disparate voices of change are also those that suggest we ride this storm out and carry on pretty much as we are: it’s a time-served and proven model, is it not?

Anyway, by way of provocation, see below four visions of assessment in 2033. What do you think? Is one more likely? Maybe bits of two or more or none of the below? What other possibilities have I missed?

  1. Assessment 2033: Panopticopia

Alex sat nervously in a sterile examination room, palms clammy, heart pounding, her personal evaluation number stamped on each hand and on her evaluation tablet. The huge digits on the all-wall clock counted down ominously. As she began the timed exam, micro-drones buzzed overhead, scanning for unauthorised augmentations and communications. Proctoring AI software tracked every keystroke and eye movement, erasing any semblance of privacy. The relentless pressure to recall facts and formulas within seconds elevated her already intense anxiety. Alex knew she was better than these exams would suggest but in the race against technology, ideals like fairness, inclusive practice and assessment validity were almost forgotten.

  2. Assessment 2033: Nova Lingua

Karim sat, feet up, in the study pod on campus, ready to tackle his latest essay. Much of the source material was in his first language so he felt confident the only translation tech he’d need would be with his more whimsical flourishes (usually in the intro and conclusion). He activated ‘AiMee’, his assistant bot, instructed her to open Microsoft Multi-Platform and set the essay parameters: ‘BeeLine text with synthetic voiced audio and an AI avatar presented digest’. AiMee processed the essay brief as Karim scanned it in and started the conversation. Karim was pleased as his thoughts appeared as eloquent prose, simultaneously in both his first language and the two official university languages. As he worked, Karim thought ruefully about how different an education his parents might have had given that they both, like him, were dyslexic.

  3. Assessment 2033: Nova Aurora

Jordan was flushed with delight at the end of their first term on the flexible, multi-modal ‘stackable’ degree. It was amazing to think how different it was from their parents’ experience. There were no traditional exams or strict deadlines. Instead, they engaged in continuous, project- and problem-based learning. Professors acted as mentors, guiding them through iterative processes of discovery and growth. The emphasis was on individual development, not just the final product. Grades were replaced with detailed feedback, fostering an appreciation for learning for its own sake rather than competition or – what did their mum call it? – ‘grade grubbing’! Trust was a defining characteristic of academic and student interactions, with collaboration highly valued and ‘collusion’ an obsolete concept. HE in the UK had somehow shifted from a focus on evaluation and grades to nurturing individual potential, mirrored by dynamic, flexible structures and opportunities to study in many ways, in many institutions, aligned with the complexities of life.

  4. Assessment 2033: Plus ça change

Ash sighed as she hunched over her laptop, typing furiously to meet another looming deadline. In 2033, it seemed that little had changed in higher education. Universities clung stubbornly to old assessment methods, reluctant to adapt. Plagiarism and AI detection tools remained easy to circumvent, masking the harsh realities of how students and, with similar frequency, academic staff relied on technologies that a lot of policy documents effectively banned. The obsession with “students’ own words” pervaded every conversation, drowning out the largely unheard lobby advocating for a deeper understanding of students’ comprehension and wider acceptance of the realities of new ways of producing work. Ash knew that she wasn’t alone in her frustrations. The system seemed intent on perpetuating the status quo, turning a blind eye to the disconnect between the façade of academic integrity and the hidden truth of how most students and faculty navigated the system.



Evolving AI Literacy – A Shared Journey

This post and its slightly cheesy title (above) were generated using Claude and are based only on the transcript* from the recording of the Oxford Brookes webinar (part of the Talking Teaching across the globe series) I spoke at today on how we actually achieve that Russell Group commitment:

Universities will support students and staff to become AI-literate

This is a ‘recast’ AI-generated podcast of the article below – the emphases are not brilliant but I hope it offers colleagues an idea of what can be done to supplement things like the webinar. Frankly, neither this post nor the recast summary would exist without the ability to produce it in minimal time. (The whole process, from downloading the transcript to hitting ‘update’ on this post, has taken 19 minutes.)

Introduction

We find ourselves in a complex moment as emerging generative AI both captivates and concerns academics. Powerful new tools hold promise yet prompt apprehension. How do we steer constructive conversations amidst clashing viewpoints and high stakes? Martin Compton offers insightful perspectives grounded in ethical priorities – perspectives that reframe AI literacy as a collective journey of discovery requiring diverse voices, embracing practical possibilities, and creating space for critical debate.

Multiple Voices Needed to Balance the Discussion

Martin emphasizes that no one person possesses definitive expertise in this nascent domain. Varied voices deserve air time, even those with “limited credentials.” Since AI intersects with so many fields and its societal ramifications span from climate impacts to employment shifts, cross-disciplinary dialogue matters deeply. We have much to learn from each other.

Further, the computer science sphere itself lacks internal concord on timelines and capabilities. Some hail rapid transformational change while others dismiss the possibility of huge impacts. These mixed messages understandably breed public confusion, sparking doomsday headlines alongside boundless optimism. Socratic humility may serve us well here – acknowledging the expanse of what we do not know.

Given such uncertainty, multiplicity of perspective becomes essential. We need humanities scholars probing ethical quandaries, social scientists weighing systemic biases, natural scientists modeling environmental tradeoffs, employers voicing hiring needs, students sharing studied hopes and fears. No singular authoritative stance exists. Martin rightly warns against perpetuating traditional classroom power dynamics that position instructors as all-knowing arbiters. Hierarchical positioning will not serve us in unmapped territory.

Practical Possibilities Over Limitations to Expand Understanding

Beyond balanced dialogue, Martin advises pivoting more conversations toward practical possibilities versus current limitations. Generative AI’s flaws are abundantly clear, including bias, inaccuracy, and authenticity concerns. These absolutely warrant continued attention, as does debate around academic integrity. But dwelling solely on weaknesses risks blinding us to potentially constructive use cases.

We owe it to students to explore how these technologies may assist real work in real fields, shaping their future employability. Are there accessibility gains for neurodiverse learners? Streamlined workflows for overwhelmed academics? Even those who condemn generative AI must grapple with its impending workplace uptake to best serve graduates. Beyond hypotheticals, where might AI tangibly supplement – not supplant – rich pedagogical environments if guided by ethical priorities?

Illustrating authentic applications can also demystify these tools for skeptical faculty and counteract media hyperbole around “robot grading essays.” When we broaden understanding of AI’s diversity beyond chatbots, we dispel myths. Asking, “how might this aid human creativity?” rather than “will this replace human jobs?” reveals unconsidered potentials.

Spaces for Critical Debate Across Campus

Finally, Martin asks the pivotal question of where open-ended debate will unfold on our campuses given diverse conflicting views. Even within single institutions, some departments welcome generative AI while others seek bans. For literacy efforts to prove lasting, they must transcend one-off workshops and invite ongoing dialogue around community priorities.

Martin offers models like King’s College London’s FutureLearn course allowing global participants to weigh complex issues like algorithmic bias. He spotlights the power of hackathons for convening multiple perspectives to spawn inventive projects. Funding student-faculty partnerships around AI applications grounds exploration in lived curriculum.

Constructing designated forums for airing ethical tensions matters deeply, given disparate departmental stances. We need space to hash out appropriate usage guides for our institutional contexts. No top-down policy prescription or unilateral corporate partnership will address the full scope of concerns. By mapping key campus constituents – from disability support offices to career centers to individual faculty across disciplines – we gain fuller understanding of the landscape before charting next wise steps.

Ultimately AI literacy lives in human connections – the degree to which we foreground multiplicity of voice, balance pragmatic possibility with diligent critique, and carve out shared venues for unpacking this technology on our own terms. The questions loom large, as does the potential for substantive harm. But committing to collective discovery widens possibilities for accountable innovation. We travel this emerging terrain together.

Event slides here

KCL GenAI in HE short course

*my prompt: Isolate the comments from martin compton and using ONLY his ideas and contributions write an informal blog post for an academic audience that focusses on how we evolve AI literacy including his thoughts on what that means and how we should approach it. Use at least three subheadings

PGCert Session: an AI generated summary

Yesterday, via CODE, I had the pleasure of working with two groups studying for the online PGCert LTHE at University of London. I replaced ‘Speaker 1’ with my name in the AI-generated transcript, ran it through (sensible colleague) Claude AI to generate this summary and then asked (much weirder colleague) ChatGPT to illustrate it. I particularly like the ‘Ethical Ai Ai!’ in the second one but generally would not use either image: I am not a fan of this sort of representation. The alt text is verbatim from ChatGPT too.

I forgot to request UK spelling conventions but I’m reasonably happy with the tone. The content feels less passionate and more balanced than I think I am in real life but, realistically, this follow-up post would not exist at all if I’d had to type it up myself.

Introduction

Recent advances in artificial intelligence (AI) are raising profound questions for me as an educator. New generative AI tools like ChatGPT can create convincing text, images, and other media on demand. While this technology holds great promise, it also poses challenges regarding academic integrity, student learning, and the nature of assessment. In a lively discussion with teachers recently, I shared my insights on navigating this complex terrain.

Alt text: An educator, a middle-aged individual with a thoughtful expression, stands in a modern classroom looking at a holographic display of AI technology. The hologram includes elements like text, images, and graphs, symbolizing the impact of AI on education. The classroom is equipped with digital tools, and there are a few students in the background, reflecting a contemporary educational setting

The Allure and Risks of Generative AI

I acknowledge the remarkable capabilities of tools like ChatGPT. In minutes, these AIs can generate personalized lessons, multiple choice quizzes, presentation slides and more tailored to specific educational contexts. Teachers and students alike are understandably drawn to technologies that can enhance learning and make routine tasks easier. However, I also caution that generative AI carries risks. If over-relied on, it can undermine academic integrity, entrench biases, and deprive students of opportunities to develop skills. As I argue, banning these tools outright is unrealistic, but educators must remain vigilant about their appropriate usage. The solution resides in open communication, modifying assessments, and focusing more on process than rote products.

Rethinking Assessment in the AI Era

For me, the rise of generative AI necessitates rethinking traditional assessment methods. With tools that can rapidly generate convincing text and other media, essays and exams are highly susceptible to academic dishonesty. However, I suggest assessment strategies like oral defenses, process-focused assignments, and interactive group work hold more integrity. Additionally, I propose reconsidering the primacy placed on summative grading over formative feedback. Removing or deemphasizing grades on early assignments could encourage intellectual risk-taking. Overall, rather than intensifying surveillance, I argue educators should seize this moment to make assessment more dynamic, dialogic and better aligned with course objectives.

Leveraging AI to Enhance Learning

While acknowledging the risks, I also see generative AI as holding great potential for enhancing my teaching and student learning. I demonstrated how these tools can aid personalized learning via customized study materials and tutoring interactions. They can also streamline time-consuming academic tasks like note-taking, literature reviews and citation formatting. Further, I showed how AI could facilitate more personalized feedback for students at scale. Technologies cannot wholly replace human judgment and mentoring relationships, but they can expand educator capacities. However, to gain the full benefits, both educators and students will need guidance on judiciously leveraging these powerful tools.

Problematizing Generative AI and Bias

One major concern I surface is how biases embedded in training data get reproduced in generative AI outputs. For instance, visual AI tools typically depict professors as predominantly white males. However, I note programmers are already working to address this issue by introducing checks against bias and nudging AIs to produce more diverse output. While acknowledging these efforts, I also caution about potential superficiality if underlying data sources and assumptions go unexamined. Pushing back on biases in AI requires grappling with systemic inequities still present in society and institutions. There are no quick technical fixes for complex socio-historical forces, but civil rights and ethics must be central considerations moving forward.

Alt text: An educator confidently leads a group of diverse students through a digitally-enhanced educational landscape. The classroom is integrated with ethical AI technology, featuring computers and interactive displays. This symbolizes a positive and creative use of AI in education. The students appear engaged and focused, while the educator points towards a bright, digital future, illustrating hope and adaptability in navigating the challenges of AI

Navigating Uncharted Waters

In conclusion, I characterize this as a watershed moment full of possibility, but also peril. Preparing myself and fellow educators and students to navigate this terrain will require open, flexible mindsets and reexamining some core assumptions. Knee-jerk reactions could compound risks, while inaction leaves us vulnerable. Addressing challenges presented by AI demands engaging ethical perspectives, but also pedagogical creativity. By orienting toward learning processes over products and developing more robust assessments, educators can cultivate academic integrity and the all-important critical thinking skills needed in an AI-driven world. This will necessitate experimentation, ongoing dialogue and reconsidering conventional practices. While daunting, I ultimately express confidence that with care, wisdom and humanity, educators can guide students through the promise and complexity of this new frontier.

My first GPT

Just when you think you’re getting a handle on things…

So this week’s big announcement was the (still being rolled out) ability to create custom GPTs. Just as I was getting to grips with doing this in the OpenAI playground, it’s now completely WYSIWYG (for many) from within ChatGPT, which has had an 8-bit to 16-bit type upgrade in graphics to boot. As much as I want to encourage use of Bing Chat for a bunch of institutional reasons, I am yet again pulled back to ChatGPT with the promise of custom GPTs (do we have to call them that?). After a few false starts yesterday, due to issues of some sort with the roll-out I imagine, today it has been seamless and smooth. I learnt quickly that you can get real precision from configuring the instructions. For example, I have given mine the specific instruction to share link X in instance Y and link A in instance B. To create the foundation I have combined links with uploaded documents and so far my outputs have been pretty good. I think I will need longer and much more precise instructions as the responses still veer a little too much towards the general, but it is feeding from my foundation well. Here’s how it looks in the creation screen:

Alt text: Screenshot of chatGPT custom GPT authoring window showing boxes to fill in including Name, Description, Instruction and Conversation starters

It comes with a testing space adjacent to the creation window and options to share:

Alt text: Screenshot of dropdown menu in custom GPT authoring window showing choice to publish bot to only me, only people with link and public

And this is the screen you get if you access the link (but recipients themselves must have the new creation ability to access custom bots):

Alt text: Screenshot from bot homescreen showing familiar ChatGPT interface but with personalised image, bot name and suggested questions

And finally, the chatbot window is familiar and, as can be seen, focussed on my data:

Alt text: Q & A with bot. The question is ‘How do I sign up for the AI course?’ It gives detailed information and links directly to it

I actually think this will re-revolutionise how we think about large language models in particular and will ultimately impact workflows for professional service and academic staff as well as students in equal measure.

Making and using assistant bots in OpenAI playground

The ChatGPT numbers are staggering: 100 million weekly users and 2 million developers. I wish I had the time and skills to be included in the developer stat, but what is notable is that even someone like me – the compulsive fiddler with very limited tech skills – can use the ‘playground’ to create a personal assistant bot. Using GPT-4, the difference between outputs here and from within, say, ChatGPT or any other large language model is that I define the instruction (in this case focussing on the sources responses should refer to and the nature of the role) and, crucially, use uploaded documents as the foundation information. The post below was generated using a prompt that asked for a summary of key information from across our guidance documentation on Gen AI at King’s and then improved because the first output included non-descriptive hyperlinks. I hear and have seen that the system is struggling to keep up with interest, but it’s well worth having a look I think because in no time OpenAI and/or smart developers will create user-friendly wrappers that will enable anyone to have their own pre-trained bots to support whatever it is they are working on. This, I think, will change the way we think about LLMs relatively quickly.
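
For the more curious, roughly the same setup can be scripted against the (beta, so liable to change) Assistants API rather than clicked together in the playground. This is a minimal sketch only – the file name and instruction text are placeholders rather than my actual configuration:

```python
import time
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Upload the guidance document that grounds the bot's answers
# ("kings_genai_guidance.pdf" is a placeholder name).
doc = client.files.create(
    file=open("kings_genai_guidance.pdf", "rb"),
    purpose="assistants",
)

# Create the assistant: the instructions pin it to the uploaded
# sources, and the retrieval tool lets it search within them.
assistant = client.beta.assistants.create(
    name="Guidance helper",
    instructions=(
        "Answer questions using only the uploaded guidance documents. "
        "If the answer is not in them, say so rather than guessing."
    ),
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[doc.id],
)

# Ask a question on a fresh thread and poll the run until it finishes.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarise the key points of the generative AI guidance.",
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# The newest message in the thread is the assistant's reply.
reply = client.beta.threads.messages.list(thread_id=thread.id)
print(reply.data[0].content[0].text.value)
```

The key move is the same as in the playground: the instructions plus the uploaded file, not the prompt alone, anchor what the bot will say.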

Alt text: Screenshot of the assistant bot creation interface. The bulk of the text is duplicated below.

Sample output (I have tweaked it slightly to ensure it is factually accurate but have left the tone and style as is, just for illustration):

King’s Generative AI info

In the realm of higher education, generative AI stands as a groundbreaking technology that’s redefining the scope of learning and teaching. King’s College London remains a beacon of innovation by methodically embracing the capabilities of AI, while also tactfully navigating the intricacies it introduces to the academic environment. We have devised a comprehensive framework that facilitates informed engagement with generative AI across the university community.

Navigating the AI Landscape in Higher Education

To equip our community with a nuanced understanding of generative AI’s role in academia, we have developed an insightful short course titled Generative AI in Higher Education, available on FutureLearn. This course invites participants to explore the multifaceted ways in which AI can enrich academic disciplines and paves the way for meaningful discussions about the evolution of educational practices.

Tailored Guidance for All Stakeholders

Our dedication to fostering a responsible AI-integrated culture is reflected in the guidance documents created specifically for each stakeholder within King’s College London:

For programme and module leaders, it’s essential to consider whether guidance on the responsible use of generative AI has been articulated to students. As a cornerstone of our initiative, we have drafted a detailed guide for program and module leaders to ensure transparency in the permissible applications of AI.

Lecturers and GTAs find themselves equipped with recommendations designed to seamlessly integrate AI into their instructional and evaluative methodologies.

Students’ guidance addresses the pivotal aspects of generative AI usage within their academic journey while maintaining academic integrity standards. An integral part of this guidance is an encouragement to engage with tutors on the appropriate use of AI in particular assignments.

Policy Enforcements and Ethical Considerations

Information on detection and acknowledgment of AI-generated content has been published, underscoring the significance of integrity in academic pursuits.

Conceptual Clarity through Definitions

The university community can access a comprehensive lexicon and context for generative AI in our context and definitions document, poised to enhance understanding and conversation about this emergent technology.

Revolutionising Assessment

With generative AI reshaping potential assessment methods, King’s College London embraces new approaches to assessment, dedicated to cultivating profound learning experiences and reflective critical thinking among students.

Practical Insights and Knowledge Sharing

Martin Compton shares his unique perspective and inventive tactics for integrating generative AI into academic settings through his personal blog, a source of inspiration and practical advice.

Conclusion

The proactive stance of King’s College London in the sphere of generative AI serves as a testament to our commitment to excellence. By providing clear and structured guidance that caters to every role within the institution, we stride forward into the educational future armed with foresight, creativity, and an unwavering commitment to academic integrity.