The wizard of PAIR

Full recording: Listen / watch here

This post is an AI/me hybrid summary of the transcript of a conversation I had with Prof Oz Acar as part of the AI conversations series at KCL. This morning I found that my Copilot window now allows me to upload attachments (now disabled again! 30/4/24), but the output with the same prompt was poor by comparison with Claude or my ‘writemystyle’ custom GPT (for now, and at first attempt). I have made some edits to the post for clarity and to remove some of the wilder excesses of ‘AI cringe’.

“The beauty of PAIR is its flexibility,” Oz explained. “Educators can customise each component based on learning objectives, student cohorts, and assignments.” An instructor could opt for closed problem statements tailored to specific lessons, or challenge students to formulate their own open-ended inquiries. Guidelines may restrict AI tool choices, or allow students more autonomy to explore the ever-expanding AI ecosystem. That oversight and guidance needs to come from an informed position, of course.

Crucially, by emphasising skills like problem formulation, iterative experimentation, critical evaluation, and self-reflection, PAIR aligns with long-established pedagogical models proven to deepen understanding, such as inquiry-based and active learning. “PAIR is really skill-centric, not tool-centric,” Oz clarified. “It develops capabilities that will be invaluable for working with any AI system, now or in the future.”

The early results from the more than a dozen King’s modules that have piloted PAIR, across disciplines like business, marketing, and the arts, have been overwhelmingly positive. Students have reported marked improvements in their AI literacy – confidence in understanding these technologies’ current capabilities, limitations, and ethical implications. “Over 90% felt their skills in areas like evaluating outputs, recognising bias, and grasping AI’s broader impact had significantly increased,” Oz shared.

While valid concerns around academic integrity have catalysed polarising debates, with some advocating outright bans and restrictive detection measures, Oz makes a nuanced case for an open approach centred on responsible AI adoption. “If we prohibit generative AI for assignments, the stellar students will follow the rules while others will use it covertly,” he argued. “Since even expert linguists struggle to detect AI-written text reliably (especially when it has been manipulated rather than simply churned from a single-shot prompt), those circumventing the rules gain an unfair advantage.”

Instead, Oz advocates treating AI usage as an integrated part of the learning process, creating an equitable playing field primed for recalibrating expectations and assessment criteria. “There’s less motivation to cheat if we allow appropriate AI involvement,” he explained. “We can redefine what constitutes an exceptional essay or report in an AI-augmented age.”

This stance aligns with PAIR’s human-centric philosophy of ensuring students remain firmly in the driver’s seat, leveraging AI as an enabling co-pilot to materialise and enrich their own ideas and outputs. “Throughout the PAIR process, we have mechanisms like reflective reports that reinforce students’ ownership and agency … The AI’s role is as an assistive partner, not an autonomous solution.”

Looking ahead, Oz is energised by generative AI’s potential to tackle substantial challenges plaguing education systems globally – from expanding equitable access to quality learning resources, to easing overstretched educators’ burnout through intelligent process optimisation and tailored student support. “We could make education infinitely better by leveraging these technologies thoughtfully… Imagine having the world’s most patient, accessible digital teaching assistants to achieve our pedagogical goals.”

However, Oz also acknowledges legitimate worries about the perils of inaction or institutional inertia. “My biggest concern is that we keep talking endlessly about what could go wrong, paralysed by committee after committee, while failing to prepare the next generation for their AI-infused reality,” he cautioned. Without proactive engagement, Oz fears a bifurcated future where students are either oblivious to AI’s disruptive scope or, conversely, become overly dependent on it without cultivating essential critical thinking abilities.

Another risk for Oz is generative AI’s potential to propel misinformation and personalised manipulation campaigns to unprecedented scales. “We’re heading into major election cycles soon, and I’m deeply worried about deepfakes fuelling conspiracy theories and political interference,” he revealed. “But even more insidious is AI’s ability to produce highly persuasive, psychologically targeted disinformation tailored to each individual’s profile and vulnerabilities.”

Despite these significant hazards, Oz remains optimistic that responsible frameworks like PAIR can steer education towards effectively harnessing generative AI’s positive transformations while mitigating risks.

PAIR Framework – further information

Previous conversation with Dan Hunter

Previous conversation with Mandeep Gill Sagoo

Generative AI in HE – self-study short course

An additional point to note: the recording is of course a conversation between two humans (Oz and Martin) and is unscripted. The Q&A towards the end of the recording was facilitated by a third human (Sanjana). I then compared four AI transcription tools: Kaltura, Clipchamp, Stream and YouTube. Kaltura estimated 78% accuracy, Clipchamp crashed twice, and Stream was (in my estimation) around 90-95% accurate, but its editing/download process is less convenient than YouTube’s in my view, so the final transcript is the one initially auto-generated in YouTube, punctuated by ChatGPT and then re-edited for accuracy in YouTube. Whilst accuracy has improved noticeably in the last few years, the faff is still there. The video itself is hosted in Kaltura.

AI Law

Watch the full video here

In the second AI conversation of the King’s Academy ‘Interfaculty Insights’ series, Professor Dan Hunter, Executive Dean of the Dickson Poon School of Law, shared his multifaceted engagement with artificial intelligence (AI). Prof Hunter discussed the transformative potential of AI, particularly generative AI, in legal education, practice, and beyond. With a long history in the field of AI and law, he offered a unique perspective on the challenges and opportunities presented by this rapidly evolving technology. To say he is firmly in the enthusiast camp is probably an understatement.

A wooden gavel with ‘AI’ embossed on it

From his vantage point, Prof Hunter presents the following key ideas:

  1. AI tools (especially LLMs) are already delivering significant productivity gains for professionals and students alike, but it is often more about the ways they can do ‘scut work’. Workers and students become more efficient and improve work quality when using these models. For those with lower skill levels, the improvement is even more pronounced.
  2. While cognitive offloading to AI models raises concerns about losing specific skills (long division and logarithms were mentioned as examples), Prof Hunter argued that we must adapt to this new reality. The “cat is out of the bag”, so our responsibility lies in identifying and preserving foundational skills while embracing the benefits of AI.
  3. Assessment methods in legal education (and by implication across disciplines) must evolve to accommodate AI capabilities. Traditional essay writing can be easily replicated by language models, necessitating more complex and time-intensive assessment approaches. Prof Hunter advocates for supporting the development of prompt engineering skills and requiring students to use AI models while reflecting on the process.
  4. The legal profession will undergo a significant shakeup, with early adopters thriving and those resistant to change struggling. Routine tasks will be automated, obliging lawyers to move up the value chain and offer higher-value services. This disruption may lead to the need for retraining.
  5. AI models can help address unmet legal demand by making legal services more affordable and accessible. However, this will require systematic changes in how law is taught and practiced, with a greater emphasis on leveraging AI’s capabilities.
  6. In the short term, we tend to overestimate the impact of technological innovations, while underestimating their long-term effects. Just as the internet transformed our lives over decades, the full impact of generative AI may take time to unfold, but it will undoubtedly be transformative.
  7. Educators must carefully consider when cognitive offloading to AI is appropriate and when it is necessary for students to engage in the learning process without AI assistance. Finding the right balance is crucial for effective pedagogy in the AI era.
  8. Professional services staff can benefit from AI by identifying repetitive, language-based tasks that can be offloaded to language models. However, proper training on responsible AI use, data privacy, and information security is essential to avoid potential pitfalls.
  9. While AI models can aid in brainstorming, generating persuasive prose, and creating analogies, they currently lack the ability for critical thinking, planning, and execution. Humans must retain these higher-order skills, which cannot yet be outsourced to AI.
  10. Embracing AI in legal education and practice is not just about adopting the technology but also about fostering a mindset of change and continuous adaptation. As Prof Hunter notes, “If large language models were a drug, everyone would be prescribed them.” *

The first in the series was Dr Mandeep Gill Sagoo

* First draft of this summary generated from meeting transcript via Claude

Understanding and Integrating AI in Teaching

This morning I discussed this topic with colleagues from King’s Natural, Mathematical and Engineering Sciences faculty. The session was recorded and a transcript is available to NMES colleagues but, as I pointed out in the session, AI is enabling ways of enhancing and/or adding to the alternative ways of accessing the core information. By way of illustration, the post below is generated from the transcript (after I sifted the content to remove other speakers). The only edit I made was to remove the words ‘in summary’ from the final paragraph.

TL;DR: Autopodcast version

Slides can be seen here

Screenshot from title slide showing AI generated image of a foot with only 4 toes and a quote purportedly from da Vinci which says: ‘The human foot is a masterpiece of engineering and a work of art’

Understanding and Integrating AI in Teaching

Martin Compton’s contribution to the NMES Education Elevenses session revolved around the integration of AI into teaching, learning, and assessment. His perspective is deeply rooted in practical application and cautious understanding of these technologies, especially large language models like ChatGPT or Microsoft Copilot.

——-

My approach towards AI in education is multifaceted. I firmly believe we need a basic understanding of these technologies to avoid pitfalls. The misuse of AI can lead to serious consequences, as seen in instances like the professor in Texas who misused ChatGPT for student assessment or the lawyer in Australia who relied on fabricated legal precedents from ChatGPT. These examples underline the importance of understanding the capabilities and limitations of AI tools.

The Ethical and Practical Application of AI

The heart of my argument lies in engaging with AI responsibly. It’s not just about using AI tools but also understanding and teaching about them. Whether it’s informatics, chemistry, or any other discipline, integrating AI into the curriculum demands a balance between utilisation and ethical considerations. I advocate for a metacognitive approach, where we reflect on how we’re learning and interacting with AI. It’s crucial to encourage students to critically evaluate AI-generated content.

Examples of AI Integration in Education

I routinely use AI in various aspects of my work. For instance, AI-generated thumbnails for YouTube videos, AI transcription in Teams, upscaling transcripts using large language models, and even translations and video manipulation techniques that were beyond my skill set a year ago. These tools are not just about easing workflows but also about enhancing the educational experience.

One significant example I use is AI for creating flashcards. Using tools like Quizlet, combined with AI, I can quickly generate educational resources, which not only saves time but also introduces an interactive and engaging way for students to learn.
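To make that concrete, here is a minimal sketch of the sort of thing I mean, written in Python against the OpenAI API. Everything in it is illustrative rather than a recipe I actually ran: the model name is a placeholder, the source text is invented, and the prompt wording is one of many that would work. The output is tab-separated term/definition pairs, which Quizlet’s import option can take as a paste.

```python
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Any source text will do: lecture notes, a reading, a transcript...
source_text = """Photosynthesis converts light energy into chemical energy.
Chlorophyll absorbs mainly red and blue wavelengths of light."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; use whichever you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You write study flashcards. Output ONLY flashcard lines, one per "
                "line, with the term and definition separated by a single tab "
                "character. No commentary."
            ),
        },
        {
            "role": "user",
            "content": f"Create 10 flashcards from these notes:\n\n{source_text}",
        },
    ],
)

# Save as a .tsv whose contents can be pasted into Quizlet's import box.
with open("flashcards.tsv", "w", encoding="utf-8") as f:
    f.write(response.choices[0].message.content)
```

The point is less the specific code than the pattern: content in, structured study resource out, with the teacher (or student) reviewing and editing the cards before anyone relies on them.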

The Future of AI in Education

I believe that UK universities, and educational institutions worldwide, face a critical choice: either embrace AI as an integral component of academic pursuit or risk becoming obsolete. AI tools could become as ubiquitous as textbooks, and we need to prepare for this reality. It’s not about whether AI will lead us to a utopia or dystopia; it’s about engaging with the reality of AI as it exists today and its potential future impact on our students.

My stance on AI in education is one of cautious optimism. The potential benefits are immense, but so are the risks. We must tread carefully, ensuring that we use AI to enhance education without compromising on ethical standards or the quality of learning. Our responsibility lies in guiding students to use these tools ethically and responsibly, preparing them for a future where AI is an integral part of everyday life.

The key is to balance the use of AI with critical thinking and an understanding of its limitations. As educators, we are not just imparting knowledge but also shaping how the next generation interacts with and perceives technology. Therefore, it’s not just about teaching with AI but also teaching about AI, its potential, and its pitfalls.

13 ways you could integrate AI tools into teaching

For a session I am facilitating with our Natural, Mathematical and Engineering Sciences faculty I have pulled together below a few ideas drawn from a ton of brilliant suggestions colleagues from across the sector have shared in person, at events or via social media. There’s a bit of overlap, but I am trying to address the often-heard criticism that what’s missing from the guidance and theory and tools out there is some easily digestible, accessible and practically-focussed suggestions that focus on teaching rather than assessment and feedback. Here’s my first tuppenceworth:

1. AI ideator: Students write prompts to produce a given number of outputs (visual, text or code) to a design or problem brief. Groups select top 2-3 and critique in detail the viability of solutions. (AI as inspiration)

2. AI Case Studies: Students analyse real-world examples where AI has influenced various practices (e.g., medical diagnosis, finance, robotics) to develop contextual intelligence and critical evaluation skills. (AI as disciplinary content focus)

3. AI Case Study Creator: Students are given AI generated vignettes, micro case studies or scenarios related to a given topic and discuss responses/ solutions. (AI as content creator)

4. AI Chatbot Research: For foundational theoretical principles or contextual understanding, students interact with AI chatbots, document the conversation, and evaluate the experience, enhancing their research, problem-solving, and understanding of user experience. (AI as tool to further understanding of content)

5. AI Restructuring: Students are tasked with using AI tools to reformat content into different media according to pre-defined principles. (AI for multi-media reframing)

6. AI Promptathon: Students formulate prompts for AI to address significant questions in their discipline, critically evaluate the AI-generated responses, and reflect on the process, thereby improving their AI literacy and collaborative skills. (Critical AI literacy and disciplinary formative activity)

7. AI audit: Students use AI to generate short responses to open questions, critically assess the AI’s output, and then give a group presentation on their findings. Focus could be on accuracy and/ or clarity of outputs. (Critical AI literacy)

8. AI Solution Finder: Applicable post work placement or with case studies/ scenarios, students identify real-world challenges and propose AI-based solutions, honing their creativity, research skills, and professional confidence. (AI in context)

9. AI Think, Pair & Share: Students individually generate AI responses to a key challenge, then pair up to discuss and refine their prompts, improving their critical thinking, evaluation skills, and AI literacy. (AI as dialogic tool)

10. Analyse Data: Students work with open-source data sets to answer pressing questions in their discipline, thereby developing cultural intelligence, data literacy, and ethical understanding. (AI as analytical tool)

11. AI Quizmaster: Students design quiz questions and use AI to generate initial ideas, which they then revise and peer-review, fostering foundational knowledge, research skills, and metacognition. (AI as concept checking tool)

12. Chemistry / Physics or Maths Principle Exploration with AI Chatbot: Students engage with an AI chatbot to learn and understand a specific principle. The chatbot can explain concepts, answer queries, and provide examples. Students (with support of GTA/ near peer or academic tutor) compare the AI’s approach to their own process/ understanding. (AI chatbot tutor)

13. Coding Challenge – AI vs. Manual Code Comparison: Coding students create a short piece of code for a specific purpose and then compare their code to AI-generated code for the same purpose. This comparison can include an analysis of efficiency, creativity, and effectiveness. (AI as point of comparison)

Custom GPTs

There are two main audiences for custom GPTs built within the ChatGPT Plus infrastructure. The first is anyone with a Plus account. There are other tools that allow me to build custom GPTs with minimal skills that are open to wider audiences, so I think it’ll be interesting to see whether OpenAI continues to leverage this feature to encourage new subscription purchases or whether it will open up to further stifle competitor development. In education the ‘custom bots for others’ potential is huge but, for now, I am realising how potentially valuable they might be for the audience I did not initially consider – me.

One that is already proving useful is ‘My thesis helper’, which I constructed to pull information only from my thesis (given that even the really obvious papers never materialised, I am wondering whether this might catalyse that!). It’s an opportunity to use as source material much larger documents than copy/paste token limits allow, or even than the relatively generous (and free) 100k tokens and document upload that Claude AI permits. In particular, it facilitates much swifter searching within the document as well as opportunities for synthesising and summarising specific sections. Another is ‘Innovating in the Academy’ (try it yourself if you have a Plus account), which uses two great sources of case studies from across King’s, collated and edited by my immediate colleagues in King’s Academy. The bot enables a more refined search as well as an opportunity to synthesise thinking.
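As an aside for anyone without a Plus account: the core of a document-restricted helper like this can be approximated in a few lines of API code by inlining the document and instructing the model to answer only from it. The sketch below is my own rough approximation of the idea, not how OpenAI actually implements custom GPTs (which chunk and retrieve from uploaded files, so they cope with documents far beyond the context window); the model name and file path are placeholders.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the source document; a custom GPT would draw on uploaded files instead.
thesis = Path("thesis.txt").read_text(encoding="utf-8")

SYSTEM_PROMPT = (
    "Answer questions using ONLY the document below. If the answer is not "
    "in the document, say so rather than guessing.\n\nDOCUMENT:\n" + thesis
)

def ask(question: str) -> str:
    """One-shot Q&A grounded in the supplied document."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute your preferred model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Summarise the methodology chapter in three sentences."))
```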

Designed to be more outward facing is ‘Captain’s Counsel’. This I made to align with a ‘Star Trek’ extended (and undoubtedly excruciating) metaphor I’ll be using in a presentation for the forthcoming GenAI in Education conference in Ulster. Here I have uploaded some reference material but also opened it to the web. I have tried to tap into my own Star Trek enthusiasm whilst focussing on broader questions about teaching. The web-openness means it will happily respond to questions about many things under the broad scope I have identified, though I have also specified some taboos. Most useful and interesting is the way it follows my instruction to address the issue with reference to Captain Kirk’s own experiences.

Both the creation and use of customised bots enable different ways of perceiving and accessing existing information, and it is in these functions, broadly, that LLMs and image generators (within customised bots or otherwise) are likely to establish a utility niche, I think, especially for folk yet to dip their toes in, or whose perceptions are dominated by LLMs as free essay mills.

Babies and Bathwater: How Far Will AI Necessitate an Assessment Revolution?

By Martin Compton & Chris Rowell

Recast version (auto podcast)

Caveat: This two-for-one post was generated using multiple AI technologies. It is drawn from the transcript of an event held this afternoon (6th October 2023), which was the first in a series of conversations about AI hosted by Chris Rowell at UAL. We thought it would be an interesting experiment to produce a blog summary of the key ideas and themes, but then we realised that it was Friday afternoon and we both have lives too. So… we put AI tools to work: first, MS Teams AI provided an instant transcript; then Claude AI filtered the content and separated it into two main chunks (Martin answering questions, and then open discussion). Third, we used this prompt in ChatGPT: “Using the points made by Martin Compton write a blog post of 500-750 words that captures the key points he raises in full prose, using the style and tone he uses here. Call the post ‘Babies and bathwater: how far will AI necessitate an assessment revolution?’”. Then we did something similar with the open discussion, and that led to part two of this post below. Finally, I used some keywords to generate some images in Bing Chat (which uses DALL-E 3) to decorate the text.

Part 1: The conversation

Attempt 1: AI generated image (Using Dall-e3 via Bing Chat) of computer monitor showing article called ‘Babies and Bathwater’ below which is an image of two babies in a sort of highchair/ bath combo

The ongoing dialogue around AI’s influence on education often has us pondering over the depth and dimensions of the issue. Our peers frequently express their concerns about students using AI to craft essays and generate images for their assessments. Recently, I (Chris) stumbled upon the AI guidelines by King’s, urging institutions to enable students and staff to become AI literate. But the bigger question looms large: what does being AI literate truly entail?

Attempt 2: AI generated image (Using Dall-e3 via Bing Chat) of computer monitor showing article called ‘Babies and Bathwater?’ below which is an image of a robot

For me (Martin), this statement from the Russell Group principles on generative AI has been instrumental in persuading some skeptics in the academic realm of the necessity to engage. It’s clear that AI literacy isn’t just another buzzword. It’s a doorway to stimulating dialogue. It’s about addressing our anxieties and reservations, then channeling those emotions to drive conversations around teaching, assessment, and learning.

Truth be told, when we dive deep into the matter of AI literacy, we’re essentially discussing another facet of information literacy. It’s a skill we aim to foster in our students and one that, as educators, we should continually refine in ourselves. Yet, I often feel that the larger academic community might not be doing enough to hone these skills, especially in the digital age where misinformation spreads like wildfire.

With the rise of AI technologies like ChatGPT, I was both amazed and slightly concerned. The first time I tested it, the results left me in awe. However, on introspection, I realized that if an AI can flawlessly generate a university-level essay, then it’s high time we scrutinized our assessments. It’s not about the capabilities of AI; it’s about reassessing the nature and objectives of our examinations.

When my colleagues seek advice on navigating this AI-augmented educational landscape, my primary counsel is simple: don’t panic. Instead, let’s critically analyze our current assessment methodologies. Our focus should pivot from regurgitation of facts to evaluating understanding and application. And if a certain subject demands instant recall of information, like in medical studies, we should stick to time-constrained evaluations.

Attempt 3: AI generated image (Using Dall-e3 via Bing Chat) of computer monitor showing article called ‘Babies and Bathnwater’ [sic] below which is an image of some very disturbingly muscled babies

To make our existing assessments less susceptible to AI, it’s crucial to reflect on their core objectives. This takes me back to the fundamental essence of pedagogy, where we need to continuously question and redefine our approach. Are we merely conducting assessments as a formality, or are they genuinely driving learning? It’s imperative to emphasize the process as much as the final output.

Now, if you ask me whether we should incorporate AI into our summative assessments, my perspective remains fluid. While today it might seem like a radical notion, in the future, it could be as commonplace as using the internet for research. But while we’re in this transitional phase, understanding and integrating AI should be done judiciously.

Lastly, when it comes to AI-generated feedback for students, I believe there’s potential, albeit with certain limitations. There’s undeniable value in students receiving feedback from various sources. Yet, we must tread cautiously to ensure academic integrity.

In essence, as educators and advocates of lifelong learning, we must embrace the challenges AI brings to our table, approach them with a critical lens, and adapt our strategies to nurture an equitable, AI-literate generation.

Part 2: Thoughts from the (bathroom) floor: Assessing Process Over Product in the Age of AI

The following is a synthesis of comments made during the discussion that ensued after the initial Q&A conversation.

Valuing Creation Process over End Product

There’s been a long-standing tradition in education of assessing the final product. Be it a project, an essay, or a painting, the emphasis has always been on the end result. But isn’t the journey as significant, if not more so? The time has come for assessments to shift their focus from the finished piece to the process behind its creation. Such an approach would not only value the hard work and thought process of a student but also celebrate their research journey.

Moving Beyond Memorization

Currently, knowledge reproduction assessments rule the roost. Students cram facts, only to regurgitate them during exams. However, the real essence of learning lies in fostering higher-order thinking skills. It’s crucial to design assessments that challenge students to analyze, evaluate, and create. This way, we’re nurturing thinkers and not just fact-repeating robots.

Embracing AI in the Classroom

The introduction of AI image generators in classroom projects was met with varied reactions. Some students weren’t quite thrilled with what the AI generated for them. However, this sparked a pivotal dialogue about the value of showcasing one’s process rather than merely submitting an end product.

It became evident that possessing a good amount of subject knowledge positions students better to use AI tools effectively, minimizing misuse. This draws a clear parallel between disciplinary knowledge and sophisticated AI usage. Today, employers prize graduates who can adeptly wield AI. Declining AI usage is no longer a strength but a weakness.

The Ever-Evolving AI Landscape

As AI tools constantly evolve and become more sophisticated, we can expect students to step into universities already acquainted with these tools. However, just familiarity isn’t enough. Education must pivot towards fostering honest AI usage and teaching students to discern between appropriate and inappropriate uses.

Critical AI Literacy: The Need of the Hour

AI tools, no matter how advanced, are just tools. They might churn out outputs that match a user’s intent, but it’s up to the individual to critically evaluate the AI’s output. Does it align with what you wanted to express? Does it represent your research accurately? Developing a robust AI literacy is paramount to navigate this digital landscape.

Attempt 4: AI generated image (Using Dall-e3 via Bing Chat) of computer monitor showing article called ‘Babies and Bathwater?’ below which is a photorealistic image of a baby

The Intrinsic Value of Creation

We must remember that the act of writing or creating is in itself a learning experience. Merely receiving an AI’s output doesn’t equate to learning. There’s an intrinsic value in the process of creation, an enrichment that often transcends the final product.

To sum it up, as the lines between human ingenuity and AI blur, our educational paradigm must pivot, placing process over product, fostering critical thinking, and embracing the AI wave, all while ensuring we retain our unique human touch in creation. The future beckons, and it’s up to us to shape it judiciously.

Lost in translation?

I have just spent a week in Egypt and, I suppose unsurprisingly, have returned to find that yet more new AI tools have been released, along with important tweaks to existing ones. The things I have been drawn to are the ‘Smart Slides’ plug-in in GPT-4 and the image interpreter in Bing Chat. Before I show examples of my ‘fiddling when I should be working’, the one AI tool I found very useful in Egypt was the Google Lens translation tool. When I did have wifi I used it quite a lot to translate Arabic text, as below. We have grown used to easy translation using tools like Google Translate, but this really does take things to the next level, especially when dealing with a script you may not be familiar with.

We are discussing at work this week the extent to which AI translation might form a legitimate part of the production of assessed work, and I think it is going to be quite divisive. I imagine that study in the future will naturally become increasingly translingual and, whilst I acknowledge and understand the underpinning context of studying for degrees in any given linguistic medium, I feel we may need to address our assumptions about what that connotes in terms of skills and the ways students approach study. Key questions will be: If I think and produce in Language 1 and use AI to translate portions of that into Language 2 (the degree host language), how far is that a step over an academic integrity line? How much does it differ, and matter, in different disciplines? Are we in danger of thinking neo-colonially in persisting with an insistence on certain levels of linguistic competence (in Global North internationalised degrees)?

Screenshot from Google Lens on an iPhone screen showing an Egyptian banknote with Arabic text translated into English on screen.

As a ChatGPT ‘Plus’ user I have, for my 20 bucks a month, access to GPT-4 and the growing stack of plug-ins. I saw the ‘Smart Slides’ plug-in demoed on Twitter and thought I’d better give it a whirl. I wanted to see how it compared to other tools like Tome.app, which can instantly produce presentational materials like this, and I was blown away by Gamma.app when I first saw it. The generation of pre-formatted, editable slides on a given topic based on a prompt is very impressive the first time you see it but, like the anodyne ChatGPT-generated ‘essays’, it’s not the ‘here’s something I made earlier’ that will likely be the most useful, but (in this case) the web-based, intuitive creation and sharing of a PowerPoint-alternative format. This one I generated in seconds to illustrate to a colleague, and it remains un-tweaked.

I have found for my own use that ideation, summarisation and structuring are my go-to uses for ChatGPT, and all of these could feed the creation of a slide deck. Plus, whilst I tend to use tools like Mentimeter to create slide decks, I am not daft enough to think that PowerPoint is no longer the Monarch of Monarchs in a sector where content (for better or worse!) remains King.

In my limited experiments, the ChatGPT Smart Slides plug-in works best if you supply a decent amount of prompt material, but it also gives a reasonable starting point from only a minimal prompt. To create the one that follows, I used my own pre-authored rationale and suggested structure for a short course on generative AI, downloaded the output, re-opened it in PowerPoint, changed the design away from the default ugly white text on purple background, and then used the in-app design tools to tweak the look and feel but not the content.

Ugly default PowerPoint design showing multiple slides in graduated purples with white text

It took 5 minutes to turn a document into useable slides, so it’s hard to argue with that as a template at least.

The completed slides after a little (PowerPoint-y) AI design fiddling
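The plug-in itself is a black box to me, but the underlying step of turning an outline into a deck you can restyle is easy to script. Below is a minimal sketch using the python-pptx library; it is my own stand-in to illustrate the idea, not what Smart Slides actually does, and the outline content is invented.

```python
from pptx import Presentation

# (slide title, bullet points) pairs; in practice an LLM could generate this
# outline from a pre-authored rationale document.
outline = [
    ("Why generative AI matters in HE", ["Productivity", "Access", "New literacies"]),
    ("Risks to manage", ["Bias", "Hallucination", "Academic integrity"]),
]

prs = Presentation()
layout = prs.slide_layouts[1]  # built-in 'Title and Content' layout

for title, bullets in outline:
    slide = prs.slides.add_slide(layout)
    slide.shapes.title.text = title
    body = slide.placeholders[1].text_frame
    body.text = bullets[0]
    for bullet in bullets[1:]:
        body.add_paragraph().text = bullet

prs.save("draft_deck.pptx")  # open in PowerPoint and apply a design theme
```

As with the plug-in output, the result is a plain, editable starting point; the design polish still happens in PowerPoint.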

Finally, I noted the release of image-reading AI as a Bing Chat option (note you can only use it in the Edge browser and with safe search off or set to moderate). The first thing I tried (predictably, I guess) was an image of me taken this morning (post-holiday, warts ‘n’ all), and the description is impressive as a starting point for generating Alt-Text, for example.

Screenshot from Bing chat of uploaded image of a man waving at a camera in an office, with below it the AI image description which reads: This image shows an office space with a person in the foreground. The person is wearing a white shirt and is holding up their hand in a waving gesture. The background consists of desks with computer monitors and office chairs. There are other people visible in the background, but their faces are blurred to protect their privacy. The lighting in the office is bright and natural. Is there anything else you would like to know? 😊

I then thought of the advice universities were giving staff about making assessments ‘AI-proof’ (!) and how use of images was one recommendation. So for my next experiment I tried giving it a maths question in the form of an image.

Screenshot of Bing Chat where a maths problem is posted and then responded to in the following way by the AI: This is a black and white image of a math problem. The problem is to find the value of x in a right triangle. The triangle has a base of 4 cm and a height of 3 cm. The value of x is humorously marked with a circle and an arrow pointing to it, with the text “Here it is” written below the triangle. The image is a meme poking fun at math problems.

The actual answer to the problem can be found using the Pythagorean theorem, which states that in a right triangle, the square of the length of the hypotenuse (the side opposite the right angle) is equal to the sum of the squares of the lengths of the other two sides. In this case, x represents the length of the hypotenuse, so we can solve for x using the equation x^2 = 4^2 + 3^2. Solving this equation gives us x = 5, so the length of the hypotenuse is 5 cm.

Given that it got the answer right, explained it, and also noted the ‘humorous meme’ nature of the image, that bit of advice at least is well and truly redundant.
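For completeness, the working the chatbot gives is just the classic 3-4-5 right triangle:

```latex
x^2 = 4^2 + 3^2 = 16 + 9 = 25 \quad \Rightarrow \quad x = \sqrt{25} = 5~\text{cm}
```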

Using rubrics and generative AI tools to reflect on and develop writing

One of the biggest worries about generative AI concerns how it could impact the development of the ability to write, and the learning that is, in essence, formed and evolved through the construction of sentences, paragraphs and the outputs of writing, from songs to blogs to academic essays. There’s been some really thoughtful work in this area already, and Anna Mills has collected some amazing resources that offer a range of perspectives and approaches, as well as plenty of food for thought about impacts and issues. This series of videos, ‘Generative AI practicals’, is designed to suggest ways in which tools like ChatGPT and Google Bard might be used by academic staff and students in ways other than pumping out text indiscriminately and uncritically! In this video I isolate one element from a marking rubric and, using two genAI tools, ask them to assess a paragraph and then suggest alternatives across grade bands.

Transcript

Prompt & Outputs from ChatGPT and Google Bard

Generative AI practicals: Making sense of lecture notes (with ChatGPT)

There are loads of things we (in HE and education more broadly) need to think about and do when it comes to generative AI, both cognitively and practically. I am alert to and concerned about the ethical and practical implications of generative AI tools, but here I want to focus on ways in which we (teachers and students) might use these tools productively (as well as ethically and with integrity). My view is that we need to look beyond the ‘wow’ (or OMG) moment experienced when you witness tools like ChatGPT spouting text, and to explore and share ways in which the mooted idea of AI personal assistants can actually be realised. As a compulsive fiddler I am sometimes struck by how little other people have experimented, but I need to remember that stuff I might do in my spare time may have limited appeal for others (I am, after all, a Spurs supporter).

This first video, then (4 mins), shows how I might take some lecture notes (which could be notes from anything, of course) and then use ChatGPT to make sense of them.

Transcript

Prompts used, outputs and original notes