Evolving AI Literacy – A Shared Journey

This post and its slightly cheesy title (above) were generated using Claude and are based only on the transcript* from the recording of the Oxford Brookes webinar (part of the Talking Teaching across the globe series) I spoke at today on how we actually achieve that Russell Group commitment:

Universities will support students and staff to become AI-literate

This is a ‘recast’ AI-generated podcast version of the article below; the emphases are not brilliant, but I hope it offers colleagues an idea of what can be done to supplement things like the webinar. Frankly, neither this post nor the recast summary would exist without the ability to produce them in minimal time. (The whole process, from downloading the transcript to hitting ‘update’ on this post, has taken 19 minutes.)

Introduction

We find ourselves in a complex moment as emerging generative AI both captivates and concerns academics. Powerful new tools hold promise yet prompt apprehension. How do we steer constructive conversations amidst clashing viewpoints and high stakes? Martin Compton offers insightful perspectives grounded in ethical priorities – perspectives that reframe AI literacy as a collective journey of discovery requiring diverse voices, embracing practical possibilities, and creating space for critical debate.

Multiple Voices Needed to Balance the Discussion

Martin emphasizes that no one person possesses definitive expertise in this nascent domain. Varied voices deserve air time, even those with “limited credentials.” Since AI intersects with so many fields and its societal ramifications span from climate impacts to employment shifts, cross-disciplinary dialogue matters deeply. We have much to learn from each other.

Further, the computer science sphere itself lacks internal concord on timelines and capabilities. Some hail rapid transformational change while others dismiss the possibility of huge impacts. These mixed messages understandably breed public confusion, sparking doomsday headlines alongside boundless optimism. Socratic humility may serve us well here – acknowledging the expanse of what we do not know.

Given such uncertainty, multiplicity of perspective becomes essential. We need humanities scholars probing ethical quandaries, social scientists weighing systemic biases, natural scientists modeling environmental tradeoffs, employers voicing hiring needs, students sharing studied hopes and fears. No singular authoritative stance exists. Martin rightly warns against perpetuating traditional classroom power dynamics that position instructors as all-knowing arbiters. Hierarchical positioning will not serve us in unmapped territory.

Practical Possibilities Over Limitations to Expand Understanding

Beyond balanced dialogue, Martin advises pivoting more conversations toward practical possibilities versus current limitations. Generative AI’s flaws are abundantly clear, including bias, inaccuracy, and authenticity concerns. These absolutely warrant continued attention, as does debate around academic integrity. But dwelling solely on weaknesses risks blinding us to potentially constructive use cases.

We owe it to students to explore how these technologies may assist real work in real fields, shaping their future employability. Are there accessibility gains for neurodiverse learners? Streamlined workflows for overwhelmed academics? Even those who condemn generative AI must grapple with its impending workplace uptake to best serve graduates. Beyond hypotheticals, where might AI tangibly supplement – not supplant – rich pedagogical environments if guided by ethical priorities?

Illustrating authentic applications can also demystify these tools for skeptical faculty and counteract media hyperbole around “robots grading essays.” When we broaden understanding of AI’s diversity beyond chatbots, we dispel myths. Asking, “how might this aid human creativity?” rather than “will this replace human jobs?” reveals unconsidered potentials.

Spaces for Critical Debate Across Campus

Finally, Martin asks the pivotal question of where open-ended debate will unfold on our campuses given diverse conflicting views. Even within single institutions, some departments welcome generative AI while others seek bans. For literacy efforts to prove lasting, they must transcend one-off workshops and invite ongoing dialogue around community priorities.

Martin offers models like King’s College London’s FutureLearn course allowing global participants to weigh complex issues like algorithmic bias. He spotlights the power of hackathons for convening multiple perspectives to spawn inventive projects. Funding student-faculty partnerships around AI applications grounds exploration in lived curriculum.

Constructing designated forums for airing ethical tensions matters deeply, given disparate departmental stances. We need space to hash out appropriate usage guides for our institutional contexts. No top-down policy prescription or unilateral corporate partnership will address the full scope of concerns. By mapping key campus constituents – from disability support offices to career centers to individual faculty across disciplines – we gain fuller understanding of the landscape before charting next wise steps.

Ultimately AI literacy lives in human connections – the degree to which we foreground multiplicity of voice, balance pragmatic possibility with diligent critique, and carve out shared venues for unpacking this technology on our own terms. The questions loom large, as does the potential for substantive harm. But committing to collective discovery widens possibilities for accountable innovation. We travel this emerging terrain together.

Event slides here

KCL GenAI in HE short course

my prompt: Isolate the comments from martin compton and using ONLY his ideas and contributions write an informal blog post for an academic audience that focusses on how we evolve AI literacy including his thoughts on what that means and how we should approach it. Use at least three subheadings

PGCert Session: an AI generated summary

Yesterday, via CODE, I had the pleasure of working with two groups studying for the online PGCert LTHE at University of London. I replaced ‘Speaker 1’ with my name in the AI-generated transcript, ran it through (sensible colleague) Claude AI to generate this summary and then asked (much weirder colleague) ChatGPT to illustrate it. I particularly like the ‘Ethical Ai Ai!’ in the second one but generally would not use either image: I am not a fan of this sort of representation. The alt text is verbatim from ChatGPT too.
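For anyone who would rather script that find-and-replace step than do it by hand, here is a minimal sketch in Python (the transcript filename is hypothetical):

```python
# Minimal sketch: swap the generic speaker label for a real name
# before handing the transcript to an AI tool.
# "transcript.txt" is a hypothetical filename.
from pathlib import Path

path = Path("transcript.txt")
text = path.read_text(encoding="utf-8")
path.write_text(text.replace("Speaker 1", "Martin Compton"), encoding="utf-8")
```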

I forgot to request UK spelling conventions but I’m reasonably happy with the tone. The content feels less passionate and more balanced than I think I am in real life but, realistically, this follow-up post would not exist at all if I’d had to type it up myself.

Introduction

Recent advances in artificial intelligence (AI) are raising profound questions for me as an educator. New generative AI tools like ChatGPT can create convincing text, images, and other media on demand. While this technology holds great promise, it also poses challenges regarding academic integrity, student learning, and the nature of assessment. In a lively discussion with teachers recently, I shared my insights on navigating this complex terrain.

Alt text: An educator, a middle-aged individual with a thoughtful expression, stands in a modern classroom looking at a holographic display of AI technology. The hologram includes elements like text, images, and graphs, symbolizing the impact of AI on education. The classroom is equipped with digital tools, and there are a few students in the background, reflecting a contemporary educational setting

The Allure and Risks of Generative AI

I acknowledge the remarkable capabilities of tools like ChatGPT. In minutes, these AIs can generate personalized lessons, multiple choice quizzes, presentation slides and more tailored to specific educational contexts. Teachers and students alike are understandably drawn to technologies that can enhance learning and make routine tasks easier. However, I also caution that generative AI carries risks. If over-relied on, it can undermine academic integrity, entrench biases, and deprive students of opportunities to develop skills. As I argue, banning these tools outright is unrealistic, but educators must remain vigilant about their appropriate usage. The solution resides in open communication, modifying assessments, and focusing more on process than rote products.

Rethinking Assessment in the AI Era

For me, the rise of generative AI necessitates rethinking traditional assessment methods. With tools that can rapidly generate convincing text and other media, essays and exams are highly susceptible to academic dishonesty. However, I suggest assessment strategies like oral defenses, process-focused assignments, and interactive group work hold more integrity. Additionally, I propose reconsidering the primacy placed on summative grading over formative feedback. Removing or deemphasizing grades on early assignments could encourage intellectual risk-taking. Overall, rather than intensifying surveillance, I argue educators should seize this moment to make assessment more dynamic, dialogic and better aligned with course objectives.

Leveraging AI to Enhance Learning

While acknowledging the risks, I also see generative AI as holding great potential for enhancing my teaching and student learning. I demonstrated how these tools can aid personalized learning via customized study materials and tutoring interactions. They can also streamline time-consuming academic tasks like note-taking, literature reviews and citation formatting. Further, I showed how AI could facilitate more personalized feedback for students at scale. Technologies cannot wholly replace human judgment and mentoring relationships, but they can expand educator capacities. However, to gain the full benefits, both educators and students will need guidance on judiciously leveraging these powerful tools.

Problematizing Generative AI and Bias

One major concern I surface is how biases embedded in training data get reproduced in generative AI outputs. For instance, visual AI tools typically depict professors as predominantly white males. However, I note programmers are already working to address this issue by introducing checks against bias and nudging AIs to produce more diverse output. While acknowledging these efforts, I also caution about potential superficiality if underlying data sources and assumptions go unexamined. Pushing back on biases in AI requires grappling with systemic inequities still present in society and institutions. There are no quick technical fixes for complex socio-historical forces, but civil rights and ethics must be central considerations moving forward.

Alt text: An educator confidently leads a group of diverse students through a digitally-enhanced educational landscape. The classroom is integrated with ethical AI technology, featuring computers and interactive displays. This symbolizes a positive and creative use of AI in education. The students appear engaged and focused, while the educator points towards a bright, digital future, illustrating hope and adaptability in navigating the challenges of AI

Navigating Uncharted Waters

In conclusion, I characterize this as a watershed moment full of possibility, but also peril. Preparing myself and fellow educators and students to navigate this terrain will require open, flexible mindsets and reexamining some core assumptions. Knee-jerk reactions could compound risks, while inaction leaves us vulnerable. Addressing challenges presented by AI demands engaging ethical perspectives, but also pedagogical creativity. By orienting toward learning processes over products and developing more robust assessments, educators can cultivate academic integrity and the all-important critical thinking skills needed in an AI-driven world. This will necessitate experimentation, ongoing dialogue and reconsidering conventional practices. While daunting, I ultimately express confidence that with care, wisdom and humanity, educators can guide students through the promise and complexity of this new frontier.

My first GPT

Just when you think you’re getting a handle on things…

So this week’s big announcement was the (still being rolled out) ability to create custom GPTs. Just as I was getting to grips with doing this in the OpenAI playground, it’s now completely WYSIWYG (for many) from within ChatGPT, which has had an 8-bit to 16-bit style upgrade in graphics to boot. As much as I want to encourage use of Bing Chat for a bunch of institutional reasons, I am yet again pulled back to ChatGPT with the promise of custom GPTs (do we have to call them that?). After a few false starts yesterday, due to issues of some sort with the rollout I imagine, today it has been seamless and smooth.

I learnt quickly that you can get real precision from configuring the instructions. For example, I have given mine the specific instruction to share link X in instance Y and link A in instance B. To create the foundation I have combined links with uploaded documents and so far my outputs have been pretty good. I think I will need longer and much more precise instructions, as the responses still veer a little too much towards the general, but it is feeding from my foundation well. Here’s how it looks in the creation screen:

Alt text: Screenshot of chatGPT custom GPT authoring window showing boxes to fill in including Name, Description, Instruction and Conversation starters
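To give a flavour of the sort of conditional precision I mean, the instructions can take roughly this shape (the links and scenarios below are invented for illustration, not my actual configuration):

```text
You are a friendly assistant supporting colleagues with our generative AI guidance.
Answer only from the uploaded documents and linked pages.
If someone asks how to sign up for the AI course, share this link: https://example.org/ai-course
If someone asks about assessment guidance, share this link: https://example.org/assessment-guide
If the uploaded documents do not cover a question, say so rather than guessing.
```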

It comes with a testing space adjacent to the creation window and options to share:

Alt text: Screenshot of dropdown menu in custom GPT authoring window showing choice to publish bot to only me, only people with link and public

And this is the screen you get if you access the link (but recipients themselves must have the new creation ability to access custom bots):

Alt text: Screenshot from bot homescreen showing familiar ChatGPT interface but with personalised image, bot name and suggested questions

And finally the chatbot window is familiar and, as can be seen, focussed on my data:

Alt text: Q & A with bot. The question is ‘How do I sign up for the AI course? It gives detailed information and links directly to it

I actually think this will re-revolutionise how we think about large language models in particular and will ultimately impact workflows for professional service and academic staff as well as students in equal measure.

Making and using assistant bots in OpenAI playground

The ChatGPT numbers are staggering: 100 million weekly users and 2 million developers. I wish I had the time and skills to be included in the developer stat, but what is notable is that even someone like me – a compulsive fiddler with very limited tech skills – can use the ‘playground’ to create a personal assistant bot. Using GPT-4, the difference between outputs here and from within, say, ChatGPT or any other large language model is that I define the instruction (in this case focussing on the sources responses should refer to and the nature of the role) and, crucially, use uploaded documents as the foundation information.

The post below was generated using a prompt that asked for a summary of key information from across our guidance documentation on Gen AI at King’s, then improved because the first output included non-descriptive hyperlinks. I hear and have seen that the system is struggling to keep up with interest, but it’s well worth having a look because in no time OpenAI and/or smart developers will create user-friendly wrappers that enable anyone to have their own pre-trained bots to support whatever it is they are working on. This, I think, will change the way we think about LLMs relatively quickly.

Alt text: Screenshot of the assistant bot creation interface. The bulk of the text is duplicated below.
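For anyone who prefers scripting to clicking, here is a minimal sketch of the same idea using the beta Assistants endpoints of the openai v1.x Python package (as they stood at the time of writing); the filename, instructions and question are all illustrative rather than my actual setup:

```python
# Minimal sketch: an assistant grounded in an uploaded guidance document,
# using the beta Assistants endpoints of the openai v1.x Python package.
# The filename, instructions and question below are illustrative.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a document to serve as the foundation information
doc = client.files.create(file=open("genai_guidance.pdf", "rb"), purpose="assistants")

# Define the role and point retrieval at the uploaded file
assistant = client.beta.assistants.create(
    name="Guidance assistant",
    instructions=(
        "You support staff and students with our generative AI guidance. "
        "Answer only from the attached documents and include the relevant links."
    ),
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[doc.id],
)

# Ask a question in a new thread and run the assistant over it
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="How do I sign up for the AI course?"
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# Poll until the run finishes, then print the newest assistant reply
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

for message in client.beta.threads.messages.list(thread_id=thread.id):
    if message.role == "assistant":
        print(message.content[0].text.value)
        break
```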

Sample output (I have tweaked it slightly to ensure it is factually accurate but have left the tone and style as is just for illustration):

King’s Generative AI info

In the realm of higher education, generative AI stands as a groundbreaking technology that’s redefining the scope of learning and teaching. King’s College London remains a beacon of innovation by methodically embracing the capabilities of AI, while also tactfully navigating the intricacies it introduces to the academic environment. We have devised a comprehensive framework that facilitates informed engagement with generative AI across the university community.

Navigating the AI Landscape in Higher Education

To equip our community with a nuanced understanding of generative AI’s role in academia, we have developed an insightful short course titled Generative AI in Higher Education, available on FutureLearn. This course invites participants to explore the multifaceted ways in which AI can enrich academic disciplines and paves the way for meaningful discussions about the evolution of educational practices.

Tailored Guidance for All Stakeholders

Our dedication to fostering a responsible AI-integrated culture is reflected in the guidance documents created specifically for each stakeholder within King’s College London:

For programme and module leaders, it’s essential to consider whether guidance on the responsible use of generative AI has been articulated to students. As a cornerstone of our initiative, we have drafted a detailed guide for program and module leaders to ensure transparency in the permissible applications of AI.

Lecturers and GTAs find themselves equipped with recommendations designed to seamlessly integrate AI into their instructional and evaluative methodologies.

Students’ guidance addresses the pivotal aspects of generative AI usage within their academic journey while maintaining academic integrity standards. An integral part of this guidance is an encouragement to engage with tutors on the appropriate use of AI in particular assignments.

Policy Enforcements and Ethical Considerations

Information on detection and acknowledgment of AI-generated content has been published, underscoring the significance of integrity in academic pursuits.

Conceptual Clarity through Definitions

The university community can access a comprehensive lexicon and context for generative AI in our context and definitions document, poised to enhance understanding and conversation about this emergent technology.

Revolutionising Assessment

With generative AI reshaping potential assessment methods, King’s College London embraces new approaches to assessment, dedicated to cultivating profound learning experiences and reflective critical thinking among students.

Practical Insights and Knowledge Sharing

Martin Compton shares his unique perspective and inventive tactics for integrating generative AI into academic settings through his personal blog, a source of inspiration and practical advice.

Conclusion

The proactive stance of King’s College London in the sphere of generative AI serves as a testament to our commitment to excellence. By providing clear and structured guidance that caters to every role within the institution, we stride forward into the educational future armed with foresight, creativity, and an unwavering commitment to academic integrity.