AI generated vignettes

I wanted to compare outputs for an activity I have used generative text AI for quite a bit: the creation of case studies and vignettes to prompt discussion or to illuminate theoretical models. I used the same prompt in Bing Chat, Google Bard and ChatGPT on a topic that is out of my usual teaching domain. Here are the results. It’s fascinating to see the differences.

First my prompt:

I am working with a group of first year undergraduates. I need 5 vignettes to prompt discussion about legal issues and the health service in the UK. Each should be a short scenario that sketches out an issue, centring a different person each time from diverse backgrounds who has some kind of issue or conflict with a doctor or in a hospital. No further details are needed other than an explanation of the issue. Each should be maximum 150 words.

Bing Chat chooses ‘a patient’ as the only identifier whilst Google Bard suggests some context for the patient. ChatGPT creates a persona with a name and is much more affective in this way, but does it make them more effective? I suppose it depends on the nature of the discussion I want my students to have. Both Bing Chat and ChatGPT set out the vignettes as a whole but Google Bard suggests additional guidance; in many ways reflecting (without me requesting or stating this) the approach I would take if using scenarios in a class: the gradual layering on of more information or targeted prompts to steer a discussion in a certain direction without being over-interventionist. I asked for diverse backgrounds but there’s something disconcerting about the way ChatGPT responded to this. Perhaps it’s because one feels very close to a personal experience? I should note, Google Bard’s alternatives were closer to the ChatGPT initial response. This is just one quick comparison of output and I can’t imagine using any without some adjustments, but I do think this is an area of teaching where we could save a lot of labour.

Bing Chat

Here are five vignettes that could prompt discussion about legal principles and the law in the UK in relation to medical profession:

  1. A patient is admitted to a hospital for a procedure but is not given enough information about the procedure or its risks. The patient is not able to give informed consent and suffers complications as a result.
  2. A patient is given a medication that has been prescribed by their doctor. The medication causes an allergic reaction that was not identified by the doctor. The patient suffers serious harm as a result.
  3. A patient is diagnosed with a serious illness but is not given appropriate treatment or care by their doctor. The patient’s condition worsens as a result.
  4. A patient is discharged from hospital too early and without appropriate follow-up care. The patient’s condition worsens as a result.
  5. A patient is given incorrect or incomplete information about their diagnosis or treatment. The patient suffers harm as a result.

I hope this helps.

Google Bard

Sure, here are 5 vignettes that you can use to prompt discussion about legal principles and the law in the UK:

  1. A young woman from a minority ethnic group is refused a termination of pregnancy by her doctor, even though she has clearly stated that she does not want to continue with the pregnancy.
  2. A man from a low-income background is denied access to a new cancer treatment because it is not yet available on the NHS.
  3. A patient with learning disabilities is not given the same level of information about their treatment as other patients.
  4. A patient is discriminated against because of their sexual orientation when they are seeking treatment for a mental health condition.
  5. A patient is injured during a medical procedure and the hospital refuses to take responsibility for their injuries.

These vignettes all raise important legal issues, such as the right to autonomy, the right to equality, and the right to be free from discrimination. They can be used to prompt discussion about how these legal principles are applied in the context of healthcare.

Here are some additional details that you can provide to your students to help them better understand the legal issues involved in each vignette:

  • In the first vignette, the young woman’s right to autonomy is at stake. Autonomy is the right to make decisions about one’s own body and life. The doctor’s refusal to perform a termination of pregnancy could be seen as a violation of this right.
  • In the second vignette, the man’s right to equality is at stake. Equality means that everyone should be treated fairly and without discrimination. The man’s denial of access to a new cancer treatment could be seen as a violation of this right.
  • In the third vignette, the patient with learning disabilities may not be able to understand the information that is being given to them about their treatment. This could lead to them being denied treatment that they need or being given treatment that is not in their best interests.
  • In the fourth vignette, the patient is being discriminated against because of their sexual orientation. This is a form of discrimination that is prohibited by law.
  • In the fifth vignette, the patient is being denied compensation for their injuries. This could be a violation of their right to be compensated for harm that they have suffered.

These are just a few of the legal issues that can be raised by these vignettes. By discussing these issues, your students can learn more about the law and how it applies to healthcare.

ChatGPT

Here are five vignettes designed to prompt discussion about legal principles and the law in the UK, particularly in the context of healthcare settings:

  1. Vignette 1: Consent and Minors
    • Sarah, a 15-year-old girl, is pregnant and wants an abortion. Her parents are devoutly religious and strongly oppose the procedure. Sarah’s doctor is conflicted about whether to proceed without parental consent.
  2. Vignette 2: Data Protection
    • Ahmed, a 40-year-old software engineer, discovers that his medical records were accessed by a hospital employee without a valid reason. The employee is a friend of Ahmed’s estranged wife, and he worries that his private information will be misused.
  3. Vignette 3: Discrimination
    • Priya, a transgender woman, goes to the hospital for a routine check-up. She feels that her doctor is treating her differently and making assumptions about her health based solely on her gender identity.
  4. Vignette 4: End-of-Life Decisions
    • John, an 80-year-old man with terminal cancer, wants to refuse further treatment to let nature take its course. His adult children insist that he continue treatment, believing that a new experimental drug might help. The doctor is unsure whose wishes to honor.
  5. Vignette 5: Medical Negligence
    • Fiona, a 25-year-old professional dancer, goes in for a minor knee surgery but ends up with a severe infection due to improperly sterilized equipment. Her career is jeopardized, and she considers taking legal action against the hospital.

Each of these vignettes presents a unique legal issue, from consent and data protection to discrimination, end-of-life decisions, and medical negligence.

Lost in translation?

I have just spent a week in Egypt and, I suppose unsurprisingly, have returned to find that there have been yet more new AI tools released and important tweaks to existing ones. The things that I have been drawn to are the ‘Smart Slides’ plugin in GPT-4 and the image interpreter in Bing Chat. Before I show examples of my ‘fiddling when I should be working’, the one AI tool I found very useful in Egypt was the Google Lens translation tool. When I did have wifi I used it quite a lot to translate Arabic text, as below. We have grown used to easy translation using tools like Google Translate but this really does take things to the next level, especially when dealing with a script you may not be familiar with. We are discussing at work this week the extent to which AI translation might form a legitimate part of the production of assessed work, and I think it is going to be quite divisive. I imagine that study in the future will naturally become increasingly translingual and, whilst I acknowledge and understand the underpinning context of studying for degrees in any given linguistic medium, I feel we may need to address our assumptions about what that connotes in terms of skills and the ways students approach study. Key questions will be:

  • If I think and produce in Language 1 and use AI to translate portions of that into Language 2 (which is the degree host language), how much is that a step over an academic integrity line?
  • How much does it differ and matter in different disciplines?
  • Are we in danger of thinking neo-colonially in persisting with an insistence on certain levels of linguistic competence (in Global North internationalised degrees)?

Screenshot from Google Lens on an iPhone screen showing an Egyptian banknote with Arabic text translated into English on screen.

As a ChatGPT ‘plus’ user I have, for my 20 bucks a month, access to GPT-4 and the growing stack of plugins. I saw the ‘Smart Slides’ plugin demoed on Twitter and thought I’d better give that a whirl. I wanted to see how it compared to other tools like Tome.app, which can instantly produce presentational materials like this, and I was blown away by Gamma.app when I first saw it. The generation of pre-formatted, editable slides on a given topic based on a prompt is very impressive the first time you see it but, like the anodyne ChatGPT-generated ‘essays’, it’s not the ‘here’s something I made earlier’ that will likely be the most useful, but (in this case) the web-based, intuitive creation and sharing of a PowerPoint-alternative format. This one I generated in seconds to illustrate to a colleague and it remains un-tweaked.

I have found for my own use that ideation, summarisation and structuring are my go-to uses for ChatGPT, and all of these could feed the creation of a slide deck. Plus, whilst I tend to use tools like Mentimeter to create slide decks, I am not daft enough to think that PowerPoint is not still the Monarch of Monarchs in a sector where content (for better or worse!) remains King.

The ChatGPT Smart Slides plugin works best, in my limited experiments, if you supply a decent amount of prompt material, but it also gives a decent starting point when using only a minimal prompt. To create the one that follows I used my own pre-authored rationale and suggested structure for a short course on generative AI, downloaded the output, re-opened it in PowerPoint, changed the design away from the default ugly white text on purple background and then used the in-app design tools to tweak the look and feel but not the content.

Ugly default PowerPoint design showing multiple slides in graduated purples with white text

It took 5 minutes to turn a document into usable slides, so it’s hard to argue with that as a template at least.

The completed slides after a little (PowerPoint-y) AI design fiddling

Finally, I noted the release of image-reading AI as a Bing Chat option (note you can only use it in the Edge browser and with safe search off or set to moderate). The first thing I tried (predictably, I guess) was an image of me taken this morning (post-holiday, warts ‘n’ all) and the description is impressive, as a starting point for generating alt-text for example.

Screenshot from Bing Chat of an uploaded image of a man waving at a camera in an office, with the AI image description below, which reads: This image shows an office space with a person in the foreground. The person is wearing a white shirt and is holding up their hand in a waving gesture. The background consists of desks with computer monitors and office chairs. There are other people visible in the background, but their faces are blurred to protect their privacy. The lighting in the office is bright and natural. Is there anything else you would like to know? 😊

I then thought of the advice universities were giving staff about making assessments ‘AI-proof’ (!) and how use of images was one recommendation. So for my next experiment I tried giving it a maths question in the form of an image.

Screenshot of Bing Chat where a maths problem is posted and then responded to in the following way by the AI: This is a black and white image of a math problem. The problem is to find the value of x in a right triangle. The triangle has a base of 4 cm and a height of 3 cm. The value of x is humorously marked with a circle and an arrow pointing to it, with the text “Here it is” written below the triangle. The image is a meme poking fun at math problems.

The actual answer to the problem can be found using the Pythagorean theorem, which states that in a right triangle, the square of the length of the hypotenuse (the side opposite the right angle) is equal to the sum of the squares of the lengths of the other two sides. In this case, x represents the length of the hypotenuse, so we can solve for x using the equation x^2 = 4^2 + 3^2. Solving this equation gives us x = 5, so the length of the hypotenuse is 5 cm.
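Bing’s arithmetic here is easy to verify for yourself; a one-line Python check of the Pythagorean result (purely illustrative, nothing to do with Bing’s own workings):

```python
import math

# Hypotenuse of a right triangle with legs 4 cm and 3 cm:
# x = sqrt(4**2 + 3**2)
x = math.hypot(4, 3)
print(x)  # 5.0
```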

Given that it got the answer right, explained it, and also noted the humorous meme nature of the image, that bit of advice at least is well and truly redundant.

Creating a marking rubric (using ChatGPT)

Long term, I have no doubt we will find ways of working with generative AI tools that will feel as second nature as Googling something or picking up a mobile phone to do any one of a million things (that are not ‘phoning’). I’m particularly interested in how we might use them effectively NOW, especially for jobs that may connote ‘drudge’. There are wider issues around consistency in feedback and marking, of course, but having a rubric is certainly one way to move towards this. In my experience, though, the expectation of a rubric shared between markers and/or with students is not always realised. Creating one from scratch can be as daunting as it is time-consuming. This video shows how, with a carefully worded prompt, you can get an AI assistant to take on a lot of the drudgery.

Transcript

Prompt and outputs from ChatGPT and Google Bard

Using rubrics and generative AI tools to reflect on and develop writing

One of the biggest worries about generative AI is in terms of how it could impact the development of the ability to write and the learning that is, in essence, formed and evolved through the construction of sentences, paragraphs and the outputs of writing, from songs to blogs to academic essays. There’s been some really thoughtful work in this area already and Anna Mills has collected some amazing resources that offer a range of perspectives and approaches as well as plenty of food for thought about impacts and issues. This series of videos, ‘Generative AI practicals’, is designed to suggest ways in which tools like ChatGPT and Google Bard might be used by academic staff and students in ways other than pumping out text indiscriminately and uncritically! In this video I isolate one element from a marking rubric and, using two genAI tools, ask them to assess a paragraph and then suggest alternatives across grade bands.

Transcript

Prompt & Outputs from ChatGPT and Google Bard

Generative AI practicals: Making sense of lecture notes (with ChatGPT)

There are loads of things we (in HE and education more broadly) need to think about and do when it comes to generative AI, both cognitively and practically. I am alert to and concerned about the ethical and practical implications of generative AI tools but here want to focus on ways in which we (teachers and students) might find ways to use these tools productively (as well as ethically and with integrity). My view is that we need to look beyond the ‘wow’ (or OMG) moment experienced when you witness tools like ChatGPT spouting text, and explore and share ways in which the mooted idea of AI personal assistants can actually be realised. As a compulsive fiddler I am sometimes struck by how little other people have experimented, but need to remember that stuff I might do in my spare time may have limited appeal for others (I am, after all, a Spurs supporter).

This first video, then (4 mins), shows how I might take some lecture notes (which may be notes from anything, of course) and then use ChatGPT to make sense of them.

Transcript

Prompts used, outputs and original notes

The Ideal Learning Environment: A Student’s Perspective – Inaya Compton (an AI bonanza experiment)

This is a guest post from my daughter. It is the outcome of an effort today to kill multiple birds with one stone. First, I have a daughter at home on summer holiday who needs to do more than watch TV or play video games all day and as much as I’d like to go for a long walk or bike ride, I do have work. Second, after overhearing online meeting discussions about plans for new campus spaces the other day she offered some really interesting insights I wanted to capture. Third, in my efforts to find genuine productivity affordances for generative AI I wanted to test the MS Teams transcription function and combine that with ChatGPT text analysis and re-write. So, below is the result. We chatted via Teams for about 15 minutes. I uploaded the transcript to ChatGPT with this prompt: Isolate all comments from ‘Inaya Compton’ then using her arguments and ideas compose a 500 word blog post written in the first person. We then edited the output together and deleted ChatGPT’s typical formulaic conclusion. We decorated the post with a couple of Midjourney images using her prompt: top down view of a classroom with tables in groups of four with two chairs on each side and a board at 90 degrees to the chairs along one wall. Finally, we used the inbuilt AI assistant to review grammar and do a final edit (Paragraph 5 was reworked to be ‘serious’ in tone).

Screenshot from the editor window of this post showing toolbar with three star icon for ‘AI assistant’ highlighted

———————–

Hi, I’m Inaya Compton. As a student, I am enthusiastic and eager to learn, valuing both the intellectual richness of the classroom and the importance of conversation in broadening perspectives. Here, I would like to share some of my insights on the most conducive learning environments, based on my experiences as a learner.

Let’s begin with the significance of the learning space itself. The size of the classroom matters a lot to me. If it’s too large, it creates a sense of separation, while a small space can feel too restrictive. It’s essential to strike the right balance between creating a sense of unity and allowing for individual freedom.

Top down view of classroom generated in Midjourney showing 5 desks each with 6 or 8 chairs around it.

When it comes to teaching small groups, say up to about 10 students, a primary challenge is student engagement and cooperation. The smaller the group, the lesser the interaction dynamics. There’s also a risk of students not learning how to engage with a wider range of people, which is a crucial life skill.

Teachers often struggle with how to address students who don’t readily answer questions. My take on this is that students should have their space. Instead of forcing participation, provide them with an option to indicate their comfort levels with a topic before the session. By giving them an overview of questions and letting them hear other students’ responses, they can still participate indirectly and learn in their way.

When it comes to managing large classes, such as university lectures, adapting the pace to accommodate the diverse learning needs of students can be quite challenging. Within a single class, there exists a wide spectrum of learning speeds, ranging from quick learners to slow learners, with many falling somewhere in between. The task at hand is to ensure that no student is left behind while also avoiding boredom among the more advanced students. Achieving this delicate balance necessitates educators who possess the versatility and adaptability required to effectively cater to the varied learning speeds within the classroom.

Now, let’s talk about the physical arrangement of the classroom. I believe the teacher should remain at the front as it gives a comprehensive range to the class. As for student seating, desks should not be arranged in rows, but instead, every student should be in groups but also able to see the teacher or, more importantly the board or the learning materials. The table configuration in my primary school worked well, with two students on each side of the table, thus avoiding the need for turning around completely.

Top-down view of a classroom, showing a random range of desks of different sizes with varying numbers of chairs and a central paper-covered desk.

It’s worth considering multiple screens in a medium-sized room. Having more than one screen would benefit those seated at difficult angles. And yes, the desks should be movable for a fresh feel every once in a while. It gives a sense of newness, fostering a comfortable learning environment.

Speaking of comfort, this includes the right lighting, temperature, and air flow. A classroom that’s too bright or too dark can be distracting, as can one that’s too cold or too hot. Here in England, for instance, heaters should be used only in winter – don’t keep them on as soon as February hits! Air flow is crucial too, to prevent the room from becoming stuffy.

Finally, seating arrangements should be thought through. My suggestion is to allow students some agency in deciding their seating, depending on their specific needs, such as those with glasses or those who prefer more or less involvement in class. Students with learning difficulties or disabilities should be seated closer to the board and have access to an assistant teacher for personalized help, thereby ensuring that everyone gets the attention they need.

——————-

The various stages we went through to produce this make me think that this will likely become increasingly routine and, although I was keen to really push the generative AI contribution, I can already see how the boundaries in some writing between human and AI authorship will be increasingly blurred, hard to track and profoundly complex in provenance.

Using ChatGPT to support neurodivergent reading and comprehension

I am sitting in an ALT ELESIG webinar and just saw Dominik Lukes from Oxford University suggest that one way students, in all their glorious neurodivergence, could exploit the potential of generative AI is to reorganise journal content into a different format to aid processing and comprehension. So, no time like the present: I thought I’d try it myself. The idea is to copy/paste the original text into a tool like ChatGPT and then use a prompt like this:

Identify all authors in this text, put them in a table and add content to each cell based on the following headings: authors; title of text; key points raised in this article [then pasted in the text from my article]

This is the output:

| Authors | Title of Text | Key Points Raised in This Article |
| --- | --- | --- |
| Compton (2023) | N/A | Critiques the traditional examination system, citing issues of scalability and pedagogical incompatibility. The pandemic has prompted the consideration of alternative assessment methods, however there’s a renewed call for traditional exams since campuses have reopened. |
| Williams and Wong (2009) | The efficacy of final examinations: A comparative study of closed-book, invigilated exams and open-book, open-web exams | Closed book exams are deemed outdated in the context of a knowledge economy and counterproductive to constructive learning theories that promote deep learning. Open book exams, which reduce pressure on memorization, provide fewer opportunities and temptations for academic dishonesty. |
| Lawrence & Day (2021) | How do we navigate the brave new world of online exams? | Argues that tradition and convenience often inform the continued use of traditional exams. These exams could lead to reduced marking time when feedback is not required. |
| Durning et al. (2016) | Comparing open-book and closed-book examinations: a systematic review | Supports that traditional exams promote rigor, particularly in medical education. Students tend to study more assiduously for these exams, which could lead to better outcomes for patients. |
| Johanns et al. (2017) | A systematic review comparing open-book and closed-book examinations: Evaluating effects on development of critical thinking skills | Points out that misunderstandings about open book exams and overconfidence can lead students to waste time searching for answers. Well-designed open book exams can be seen as more authentic, applicable to real-world scenarios, and more learner-directed. |
| Downes (2017) | University scandal, reputation and governance | Argues that closed-book, in-person exams can reduce cheating compared to other assessment types. Also notes that online exams increase the likelihood of cheating and confusion about what is permissible. |
| D’Souza and Siegfeldt (2017) | A conceptual framework for detecting cheating in online and take-home exams | Aligns with Downes, arguing that in-person exams are less susceptible to cheating. |
| Ottaway et al. (2017) | Cheating after the test: who does it and how often? | Claims that high-achieving students are more likely to cheat on exams. |
| Simonite (2010) | The impact of coursework on degree classifications and the performance of individual students | Suggests that exams favour high-achieving students, however, the combination of coursework and exams tend to give better results in terms of final degree outcomes. |
| Williams (2006) | The place of the closed book, invigilated final examination in a knowledge economy | Contests the belief that closed book exams are less prone to cheating. |
| Chirumamilla et al. (2020) | Cheating in e-exams and paper exams: the perceptions of engineering students and teachers in Norway | Points out that online exams are perceived to be more reliable and efficient, but both staff and students perceive more opportunities for cheating in online modes. |
| Bengtsson (2019) | Take-home exams in higher education: a systematic review | Argues that traditional exams focus on recall and memorization rather than higher order/critical thinking. Such exams also put unnatural pressure on students, impacting their performance. |
| Clemmer et al. (2018) | Will that be on the exam? Student perceptions of memorization and success in engineering | Points out that students using memorization techniques may perceive exams as unfair if questions do not reflect content they have practiced. |
| Rich (2011) | An experimental study of differences in study habits and long-term retention rates between open-book and closed-book exams in a water resources engineering class | Suggests that open-book exams encourage students to adopt more effective study habits and demonstrate better long-term retention of course material. |

For me, this would have been a complete game changer when I was a student, especially when conducting literature reviews.

Source article is here.

How to do this- step by step:

  1. Select the Content: First, you need to choose the article or content you want to summarise. The content could be a journal article, a blog post, a news report, or any written material.
  2. Copy the Content: Once you have the content, copy the entire text or the relevant portions you want summarised. This is your raw material for the summarisation task.
  3. Navigate to ChatGPT: Open a new browser window and navigate to the ChatGPT interface. This could be a website or an application where you can interact with the ChatGPT model.
  4. Paste the Content: Paste the copied content into the chatbox of ChatGPT. You need to be careful not to exceed the maximum character limit allowed by the tool.
  5. Create a Prompt: After pasting the content, create a prompt for the model to structure the summary. Here’s an example prompt: “Identify all authors in this text, put them in a table and add content to each cell based on the following headings: authors; title of text; key points raised in this article.”
  6. Run the Model: After inputting the prompt, press enter or click the appropriate button to execute the command. The model will process the input and produce an output based on your prompt.
  7. Review the Output: Review the generated summary and make sure all the key points from the original content are captured accurately. If necessary, refine your prompt and rerun the model.
  8. Copy the Output: If you’re satisfied with the output, copy it for use in your desired application.
  9. Refine and Iterate: Keep in mind that AI models like ChatGPT may require a few iterations to get the desired output. Don’t hesitate to refine your prompts and iterate the process.

This method will allow you to create a structured summary of any written content. Note that the AI will follow your prompts, so be as specific as you can to get the best results.
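If you wanted to script steps 4 and 5 rather than pasting by hand, the chunking could be sketched in a few lines of Python. This is a minimal, illustrative sketch only: the `char_limit` value and the `build_prompts` function are my own assumptions, not part of any official ChatGPT interface.

```python
# Wrap long source text into prompt-sized chunks so each submission
# stays under a tool's character limit (steps 4-5 above).

PROMPT = (
    "Identify all authors in this text, put them in a table and add "
    "content to each cell based on the following headings: authors; "
    "title of text; key points raised in this article.\n\n"
)

def build_prompts(text, char_limit=12000):
    """Split text on paragraph breaks and prepend the summarising
    prompt to each chunk that fits within char_limit characters."""
    budget = char_limit - len(PROMPT)
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = (current + "\n\n" + para).strip()
        if len(candidate) <= budget:
            current = candidate
        else:
            if current:
                chunks.append(PROMPT + current)
            current = para[:budget]  # truncate any single oversize paragraph
    if current:
        chunks.append(PROMPT + current)
    return chunks
```

Each returned string can then be pasted into ChatGPT in turn, with the resulting table rows merged by hand at the review stage (step 7).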