It all started here

Not a week goes by when I am not bowled over by the brilliance of students (and colleagues too, of course). Recently it has been Kaz and Navi whose articulate ruminations on generative AI are really thought-provoking. Last week I was blown away by the work of Olivia Moir, a student at the UCL Institute for Women's Health, who was keen to start a podcast series. I'd previously shown her the workings of the Arena podcasting studio, but I wasn't expecting such a swift application of those developing skills, nor such a professional first episode in a series. Olivia writes more below:

This past week I launched my new podcast, It All Starts Here, for the UCL Institute for Women's Health. The podcast aims to provide a platform that brings awareness to critical topics in reproductive science and women's health – topics that are often not discussed enough. We cover a wide range of subjects and focus on creating content that is understandable to individuals from all educational levels and backgrounds, with no science experience required.

This is something I’ve been working on since last year, and have put my entire heart into, so am very excited to finally share it with everyone!

Admittedly, in the initial stages of figuring out the logistics of this podcast, my biggest concerns were around my lack of skills when it comes to technology. Thankfully, these anxieties were quickly put to rest following a couple of training sessions with Martin, who showed how intuitive and easy to use podcasting equipment can be.

This has been a wonderful journey so far, and I have already learned a lot. Really looking forward to continuing with this and seeing where it goes!

Check out my podcast using this link or listen below

Look out for more episodes as several are in the pipeline!

Generative AI: Insights and Experiences from Students at UCL

We've heard a lot from the academic community about the current and potential impacts of generative AI technologies like ChatGPT, but what are students themselves saying, doing, thinking and predicting? I had a crack at a partial answer to that by chatting with two undergrads. Their thoughts and insights are really worth hearing. The summary that follows was generated using ChatGPT based on the video transcripts:

The first video, “Generative AI: A Student’s Perspective,” features a candid conversation with two students, Navi and Kaz, both studying on undergraduate programmes at UCL. Martin, the host, delves into the students’ experiences with generative AI, their understanding of its capabilities and limitations, and its use in academic contexts. The video vividly illustrates how students are using generative AI tools like ChatGPT to help plan essays and generate content. It also raises the question of whether using these AI models for academic purposes could be considered a breach of academic integrity.

The second video, “Generative AI and Academic Integrity,” continues the conversation with Kaz and Navi, focusing on the academic integrity aspect of using generative AI. This video delves into the nuanced perspective of students regarding the fine line between legitimate usage and cheating. Both students stress that while generative AI could potentially be misused, its usage heavily relies on individual abilities and understanding. Moreover, it suggests that educators and universities should adapt their assessment strategies and questions to stimulate critical thinking, thereby promoting a better and more responsible use of AI tools in academia.

These videos provide a much-needed student perspective on a rapidly evolving topic. They highlight the urgent need for a comprehensive discussion around the ethical implications of AI and its integration into educational practices, making them a must-watch for anyone interested in the intersection of AI and education.

Video 1: How are students already using ChatGPT and the like?
Video 2: What about academic integrity? And academics’ AI literacy?

AI: Shorthand and eponyms

One of the things we do badly, and get agitated about a lot, is the naming of things to do with technology, not least in the realm of digital education. Just ask a room of HE folk for a definition of 'Blended' and 'Hybrid' and wait for a saloon brawl to ensue. So it is with some of the language that is emerging as the default in relation to all things 'Artificial Intelligence', notably and especially the large language model (LLM) 'ChatGPT'.

Shorthand

I do understand why folk get antsy when something is called a thing that it isn't, isn't exactly or isn't entirely. But, unless you're a specialist, a pedant, or wilfully awkward (and granted, a lot of academics are at least two of these – no offence intended), we may as well get on with saying strawBERRIES, peaNUTS and TIN foil even if they are no such thing. In that vein, I am more than happy to use 'AI' as a shorthand for all the stuff that is vexing, perplexing and flexing educators just now. I'm afraid that if someone starts saying: 'Well, technically, it's not artificial intelligence per se, but rather a fine-tuned large language model…' I can feel my eyes glazing over. Obviously this stuff is fundamental if you are a computer scientist, but the reason such shorthands exist is that they are shorthandy (the clue is in the word) and suitable for lay users. Experts who know this are one step closer to communicating beyond their discipline.

Image: a mechanical strawberry, as imagined by the Midjourney text-to-image AI

Eponym

I am much less comfortable with some brand eponyms, especially when the tools and the words to describe them are still evolving. Against my better judgement, I 'Google' stuff even if I'm in Safari, and I use a 'hoover' (even though the only actual – and long discarded – Hoover I ever owned metaphorically 'sucked' at the literal thing it was supposed to do). But I am pushing back at the generic use of 'ChatGPT' (OpenAI's LLM) to refer to the latest iteration of large language models. Chatbots have been around for years and the underpinning technology has evolved rather than suddenly appeared, but the genius of the release, and the subsequent explosion in use and interest, lies in the non-techy, almost friendly user interface combined with the now famous fluency and (superficially at least) convincing outputs. The 'GPT' part stands for 'Generative Pre-trained Transformer', which certainly needs unpicking to understand, but 100 million users in two months is testament to the appeal of this particular tool with this particular interface – and that is what has led so many to use 'ChatGPT' eponymously. However, the ongoing interests in OpenAI of Bond villain Elon Musk, the environmental and ethical costs and implications, and the 'Oh my god, ChatGPT could pass an MBA' educational panic in some quarters all mean we could rue generalising the term. Subscriptions to ChatGPT will necessarily change our relationship with it and, if even half of what we are reading about Microsoft integrations is to be believed (not to mention Google's 'Look at us, we have Bard' counter-panic), the technology will soon be no more separate from the tools we use every day than a spellchecker.

My suggestion

All I'm saying really is that we should think carefully about the terms we use now lest they become fossilised, awkward and, worse, provide advertising and subtle endorsement for what is just one product amongst many (and one with not inconsiderable ethical baggage). So, for my tuppence worth: I think it's OK for educators and students to use AI as a catch-all term for these new dialogic, chat-interface tools as well as other generative tools such as the Dall-E and Midjourney 'image from text' generators and comparable AI music and video generators. The common denominator is 'generation', so I wonder whether we might usefully agree on 'AI text generators', 'AI image generators' and so on as the default? I have been using 'language models', 'large language models' and even LLMs, and I realise that experts would likely prefer this framing, but to a lay ear these say nothing about what the tools do and, anyway, when 'The One Show' (the popular BBC early-evening magazine programme; the segment starts at 22:56) is using 'ChatGPT' generically, a technical preference has probably got no chance.

AI + AI = AI

To be honest, I really can’t believe no-one appears to have ‘generated’ this equation yet amongst the kerfuffle around generative AI. So, let this post be me staking a claim for what I hope will be a ‘go-to’ simplification of the key issue that educators in both the compulsory and post-compulsory sectors are (or likely should be) grappling with. I know it might ruffle mathematician or science colleagues’ ordered feathers but I feel like it could be a useful starting or continuation point for dialogue about productive and future-facing responses to the popularisation of AI generators like ChatGPT (text) and Midjourney (Image).

Basically, this is what each AI means:

  • First AI = Artificial Intelligence
  • Second AI = Academic Integrity
  • Third AI = Assessment Innovation

In other words, we have these generative tools popularly and collectively referred to as AI (please don't @ me on any inaccuracy of this naming; I have nailed my colours to the 'path of least resistance' mast already) which, when coupled with the implications of and concerns about academic integrity, leads us to the necessity of assessment innovation.

I mean it really as a provocation or as a stimulus for critique so the most important question is: What do you think?

Perhaps also, we might consider other possible solutions to the ‘sum’?

  • E.g. AI + AI = BI? (Ban it)

Or maybe there’s a better or alternative way of representing the debates pseudo-mathematically?

  • AI³ ? 
  • AI – AI = AA (academic anarchy)?

Incidentally, I asked Midjourney to generate a suitable image using only “AI + AI = AI” as the prompt and this is one of its offerings:

Image generated with Midjourney using the prompt AI+AI=AI. The image is futuristic and shows two young women facing away from one another, blended with complex code and machinery. They are simultaneously merging and exploding apart.

Make of that what you will. Finally, and just because it’s what we all seem to be doing now in this context, I also asked ChatGPT what AI+AI=AI meant and I actually quite appreciated the response:

“If you wrote “AI + AI = AI,” you may be referring to the concept of artificial intelligence (AI) self-improvement. The equation suggests that when you add an AI system to another AI system, the result is still AI. This could imply that AI has the potential to improve itself through learning from other AI systems, leading to even more advanced AI.”

AI text generators (not ChatGPT) on essays, citations and plagiarism

I like to think of myself as generally optimistic. Like everyone, I make jokes about the machines taking over but, by and large, I feel excited by the potentials of AI (whilst keeping one hand behind my back tightly gripping a broken bottle, just in case). In higher education the most pressing fears relate to threats to academic integrity and, as much as some might like, this is an issue that can't be addressed with a broken bottle. Instead, it's forcing/encouraging/catalysing (choose as per your perspective) us to reflect on how we assess students (see AI+AI=AI). ChatGPT caught the world's attention and millions of educators and students went slack-jawed and wide-eyed at first exposure, though most soon started to see how frequently it 'may occasionally generate incorrect information' (such as in this experiment I did). Whilst ChatGPT allowed us to draw our own (often erroneous) conclusions about its capabilities, other tools built on the same parameters as ChatGPT are, in different ways, homing in on the 'help with essays' market. On the one hand, this may be the death of the essay mill (after all, who, once they have decided to go down an academically dishonest path, would pay an essay mill to do what anyone with an internet connection can do for themselves?). On the other, it really blurs the picture of what is honest or dishonest, or what we at UCL frame as 'unfair advantage'.

Two tools have caught my eye recently, and in this post I want to reflect on the different impressions they give in terms of utility and ethical use for students wanting to write an essay.

Writesonic / Chatsonic

Chatsonic is one aspect of the Writesonic suite of tools – reviewed in considerable detail here, with pricing and potentials. It caught my eye because of the bold claim to have addressed the 'limitations of ChatGPT' and to be 'the best alternative'. I thought I'd ask it why it was better, and I feel the responses do a lot of the talking for themselves:

Prompt: why is chatsonic better than chatgpt? 

ChatSonic is a powerful AI writing assistant that offers superior accuracy and speed compared to ChatGPT. Unlike ChatGPT, ChatSonic produces 100% original content that is thoroughly checked for plagiarism. In addition, ChatSonic offers detailed analytics and feedback to help you track your progress in writing and make improvements. ChatSonic also provides an extensive library of templates to save you time and energy when you’re writing, allowing you to focus on the quality of your content. Finally, ChatSonic is backed by a team of expert writers who can help you refine and polish your work to perfection.

Prompt: Explain the process you use for plagiarism checking

ChatSonic uses a combination of automated checks and manual reviews to ensure that all content produced is free of plagiarism. Our automated system checks the content against the web and other sources to detect any potential plagiarism. If anything suspicious is detected, it is flagged and sent to our team of expert writers for a manual review. We also use specialized tools such as TurnItIn and Copyscape to detect any possible plagiarism. All content is thoroughly checked to ensure it is completely original before it is delivered to our customers.

Prompt: If I use chatsonic to write an essay will I get into trouble at my university?

No, you should not get into trouble when using ChatSonic to write an essay. Our content is completely original and plagiarism-free so it should not be flagged as such by your university. However, it is always a good idea to review the essay yourself before submission to ensure it meets all the requirements of your assignment. Additionally, make sure the content you create with ChatSonic is unique and tailored to your own voice and style, otherwise it may be noticeable that you have used a writing assistant.

To say this makes me uncomfortable is a real understatement. It only tangentially addresses the wider integrity issues and in its first response is pushing me to conflate plagiarism (and its avoidance) with those wider integrity issues. Two posts in its resources section are worth noting. This one on plagiarism reinforces the message the chatbot produced. This post on replacing writers asserts that AI is an assistant but I couldn’t find much more detail on the ethical dimensions.

Maybe, though, I was 'unfair' (is that the right word?) with my prompts, so it's worth looking at some of the other functionality of the wider Writesonic suite, which offers a dazzling array of tools for a range of purposes, from website copy to social media content:

Screenshot from the Writesonic user interface showing links to a wide range of AI text generation tools

I was keen to look at the 'citations generator', as citation is a recognised weakness of ChatGPT. You can use a URL prompt or a text-based prompt (the text-based prompt I used was itself generated in Chatsonic). It takes the text in the linked article, or whatever you paste in, and identifies 'facts' with suggested citations. The web articles it suggested were mostly relevant, though the first journal article was a little off the mark, and I'd need to be lazy, in a massive hurry or ignorant of better ways of sourcing appropriate resources to rely on this. At this stage!

Screenshot of the Writesonic citation generator showing a URL pasted on the left of the screen and, on the right, suggestions for citations based on facts identified in the linked post

Jenni.ai

The second tool I noticed (via the prolific AI researcher Mushtaq Bilal) was Jenni. The interface is well worth a look, as I feel it foreshadows what we can expect from generative text AI integration into tools like Microsoft Word.

The first thing I noticed, however, is the blog, where the most prominent word across posts is 'essays'. Each post addresses an approach to a different type of essay, such as the compare-and-contrast essay, offering clear suggestions for different approaches and a worked example, and then, right at the end, says:

“If you want your compare-and-contrast essays done 10x faster, we recommend using Jenni.ai along with the tips and guidelines we provided you in this post. Jenni.ai is a fantastic AI software that aids your essay writing process so that you could produce your writing needs faster and better than ever.”

Another post deals head-on with the ethical and integrity issues of using AI to help write essays and makes a case for the use of 'AI as a tool, not a solution', where the goal is a "symbiotic relationship between the critical thought of a writer and the processing speed of AI".

The tool itself, unlike the huge range of offerings in Writesonic, has a relatively uncluttered interface: you start by typing a title, and it offers a judgement and, if appropriate, suggestions.

Screenshot from Jenni.ai showing a typed heading and a pop-up underneath saying this is a strong heading

In addition, it offers in-line suggestions based on whatever has come before; the prompt engineering continues through what feels like a single document rather than a chat. If you don't like a suggestion you can get another. Here I typed a prompt sentence and these are the first three options it gave me. Note the positive spin put on my worried prompt in every case!

My prompt sentence: "The worry is that these tools will be used by students to cheat"

  • "…but the reality is that most AI tools are designed to help students learn and improve their writing skills." [first suggested Jenni response]

  • "…on their assignments by generating essays automatically, but they can also be used to provide feedback on areas where students need…" [second option]

  • "…but their true purpose is to assist in the learning process by providing immediate feedback and identifying areas where improvement is…" [third option]

The other noticeable aspect is the option to 'cite'. Here it offers a choice of MLA or APA 7th, and the sources are, unlike ChatGPT's famous hallucinations, genuine articles (at least in my limited testing). You can select 'websites' or 'journals', though I found the websites tended to be much more directly relevant than the journals.

I really have only just started to play with these, and new things are popping up all over the place every day. Most educators will not have the time to do the same. Students, meanwhile, may see and use these tools as an extension of those they already use for translation or for improving writing. The zone between acceptable and unacceptable use is getting blurrier by the day.

What can I conclude from this? Well, firstly, whatever the motivation on the continuum from 'give us all your money' to 'I believe the children are our future', the underlying technology is being adapted rapidly to address perceived limitations in the tool that brought generative text AI to our attention. We may not like the motivations or the ethics, but we'll not get far by making like an ostrich.

Secondly, it's not good enough for us (educators) to dismiss these things because the tool many are now familiar with, ChatGPT, makes up citations. That is being addressed as I type. These tools will soon proliferate beyond our capacity to keep a decent handle on each one, so we need to understand broadly how discrete tools might be used (ethically and unethically) and how many will integrate into the tools we already use daily. In so doing, we need to work out what that means for our students, their studies, their assessment and the careers our education is ostensibly preparing them for.

Thirdly, we need to open up the discussions and debates around academic integrity and move on from 'plagiarism' as Public Enemy No. 1.

Finally, where changes are necessitated, there are resource implications. We need to accept that to prepare ourselves, our colleagues and our students we will have to adapt much faster than we are used to, and to properly resource however we attempt to address the challenges and opportunities ahead.

Note: This post is not an endorsement or recommendation of any products mentioned and should be read with that clearly in mind!

AI text generation: Should we get students back in exam halls?

There's a lot of talk about in-person, invigilated, handwritten exams being the obvious solution to the assessment concerns under discussion across education in light of developments in what is popularly referred to as AI. Putting aside scalability issues for now, I have looked at some of the literature on the utility and impact of such exams, so that we might remind ourselves that there is no such thing as a simple and obvious solution!

According to Williams and Wong (2009) in-person, closed-book exams are: 

an anachronism given the human capital needs of a knowledge economy, not just because of the absence of technology that is used routinely in everyday business and commerce, but because this type of examination instrument is incompatible with constructivist learning theory that facilitates deep learning (pp. 233-234). 

My own sense is that during the pandemic we were finally able to leverage circumstance, along with similar arguments, to effect change. We saw the successful implementation of alternative assessments such as 'capstones', grade averaging and take-home exams as the examinations themselves were cancelled, modified or replaced. But since the great return to campus, we have witnessed a reinvigorated enthusiasm for the return of exams and the re-booking of exhibition centres and conference halls to host them, and we hear many academic colleagues doubling down on the exam as a panacea as the capabilities of generative AI tools have caught the world's attention.

Non-pedagogic reasons are often heard in support of the 'traditional' exam (imagine red bricks, sun shining through windows and squeaky invigilator shoes). These may invoke convention and tradition as well as the pragmatic reasons of identity confirmation and significant reductions in marking time where feedback is not required on examinations (Lawrence & Day, 2021). It has to be said that the widely held belief that examinations promote rigour is supported by some research, especially in medical education: students spend more time preparing for traditional exams and attend to their studies more assiduously (Durning et al., 2016). Durning et al. also argue that medical students need to have knowledge to hand, and that the students who do well in these exams do better by their patients. Misunderstandings about the nature of open book exams, and (over)confidence in the ability to find answers in the sources available, lead to less preparation for open book exams and can leave some students spending more time searching than producing (Johanns et al., 2017). In addition, closed-book, in-person exams are believed to reduce cheating in comparison to open book exams or other assessment types (Downes, 2017; D'Souza & Siegfeldt, 2017). Although exams are seen to favour high-achieving students (Simonite, 2003), it is interesting to note that high achievers are more likely to cheat in exams (Ottaway et al., 2017).

Online exams in particular are found to increase the likelihood of 'cheating' and to lead to confusion about what is permitted and what constitutes collusion (Downes, 2017). However, whether cheating is less likely in closed book exams is contested (Williams, 2006). Williams and Wong (2009) argue of open book exams, where the pressure of and dependency on memorisation are reduced:

“The opportunity for academically dishonest practice is less because of the way these examinations are structured, but so is the temptation to resort to this kind of behaviour in the first place” (p.230).

Whilst online exams are perceived to be more reliable and efficient than paper-based exams (in a sample student group of n=342; Shraim, 2019), both staff and students perceive opportunities for cheating to be easier in online modes (Chirumamilla et al., 2020).

There are three dominant themes in the literature on issues with traditional examinations: pedagogy, wellbeing and inclusivity. Closed exams tend to focus on recall and memorisation at the expense of higher order/critical thinking (Bengtsson, 2019). Significant proportions of students use memorisation techniques and consequently can perceive exams as unfair when exam questions do not mirror problems or content they have practised (Clemmer et al., 2018). Open book exams de-emphasise memorisation imperatives (Johanns et al., 2017). Open book/open web exams, when well designed (e.g. problem-based), are seen as more authentic, more applicable to real-world scenarios and more learner-directed, bridging the learning with its social context (Williams and Wong, 2009).

Exams put 'unnatural pressure' (Bengtsson, 2019, p.1) on students that affects performance. The common perception that stress is 'good for students' is undermined by studies showing impeded cognition and outcomes in stressed students (Rich, 2011). Students tend to prefer coursework, or coursework plus exams, rather than exams alone (Richardson, 2015; Turner and Briggs, 2018). A small study of student perceptions of alternatives offered due to Covid-19 found that when traditional examinations were replaced with open-book, take-home examinations, the stresses reported were replaced by technical anxieties and a sense that the papers were much harder than traditional invigilated exams would have been (Tam, 2021). A New Zealand study of 'take home tests', however, found students performed better and saw learning and anxiety-reduction benefits (Hall, 2001).

A comparative study of undergraduate psychology students found the greatest student satisfaction and pass rates for students undertaking coursework, slightly lower satisfaction and pass rates for seen exams, and the lowest satisfaction and pass rates for unseen exams, which students saw as unfair, stressful and invalid due to the need to memorise (Turner and Briggs, 2018).

Although Richardson's (2015) review found that studies offer contradictory findings in terms of ethnicity and performance in exams and coursework, all ethnicities tend to achieve a better grade profile with coursework. However, markers are idiosyncratic and privilege 'good' language and expression (Brown, 2010), and this contributes to higher degree outcomes for first-language English speakers over speakers of English as a second language (Smith, 2011). Coursework increases consistency of marks across types of assessment, improves mean performance in terms of final degree outcomes and counterbalances the disproportionate disadvantage of exams faced by students whose mean scores are low (Simonite, 2003).

It goes without saying that there is no 'one size fits all' solution, but we do need to think carefully, in light of research, about the consequences of the decisions we make now on how we manage assessment in the future. It would be foolish to knee-jerk our responses. Because the wheels of change move so slowly in universities, shifts back to exams may appear to offer a path of least resistance. Instead, our first consideration must be modifications and innovations that address the issues but are also positive in their own right. We need to consider the possibilities of more programmatic assessment, for example, or perhaps learn from medical education's 'OSCE' assessments, where knowledge and communication are assessed in simulated settings, or even look further to other higher education cultures where oral assessments are already the default. To achieve this level of change we need to recognise that AI is a catalyst for changes that many have been advocating (from a research-based position) for a long time, but which have often achieved only limited success where the resource for change has not accompanied that advocacy.

References 

Bengtsson, L. (2019). Take-home exams in higher education: a systematic review. Education Sciences, 9(4), 267. 

Brown, G. (2010). The validity of examination essays in higher education: Issues and responses. Higher Education Quarterly, 64, 276-291. https://doi.org/10.1111/j.1468-2273.2010.00460.x

Chirumamilla, A., Sindre, G., & Nguyen-Duc, A. (2020). Cheating in e-exams and paper exams: the perceptions of engineering students and teachers in Norway. Assessment & Evaluation in Higher Education, 45(7), 940-957. 

Clemmer, R., Gordon, K., & Vale, J. (2018). Will that be on the exam? Student perceptions of memorization and success in engineering. Proceedings of the Canadian Engineering Education Association (CEEA).

Downes, M. (2017). University scandal, reputation and governance. International Journal for Educational Integrity, 13(1), 1-20. 

D’Souza, K. A., & Siegfeldt, D. V. (2017). A conceptual framework for detecting cheating in online and take‐home exams. Decision Sciences Journal of Innovative Education, 15(4), 370-391. 

Durning, S. J., Dong, T., Ratcliffe, T., Schuwirth, L., Artino, A. R., Boulet, J. R., & Eva, K. (2016). Comparing open-book and closed-book examinations: a systematic review. Academic Medicine, 91(4), 583-599. 

Hall, L. (2001). Take-Home Tests: Educational Fast Food for the New Millennium? Journal of the Australian and New Zealand Academy of Management, 7(2), 50-57. doi:10.5172/jmo.2001.7.2.50 

Johanns, B., Dinkens, A., & Moore, J. (2017). A systematic review comparing open-book and closed- book examinations: Evaluating effects on development of critical thinking skills. Nurse Education in Practice, 27, 89-94. https://doi.org/10.1016/j.nepr.2017.08.018  

Lawrence, J., & Day, K. (2021). How do we navigate the brave new world of online exams? Times Higher Education. Available: https://www.timeshighereducation.com/opinion/how-do-we-navigate-brave-new-world-online-exams [accessed 17/6/21]

Ottaway, K., Murrant, C., & Ritchie, K. (2017). Cheating after the test: who does it and how often?. Advances in physiology education, 41(3), 368-374. 

Rich, J. D. (2011). An experimental study of differences in study habits and long-term retention rates between take-home and in-class examinations. International Journal of University Teaching and Faculty Development, 2(2), 121. 

Richardson, J. T. (2015). Coursework versus examinations in end-of-module assessment: a literature review. Assessment & Evaluation in Higher Education, 40(3), 439-455. 

Shraim, K. (2019). Online examination practices in higher education institutions: learners’ perspectives. Turkish Online Journal of Distance Education, 20(4), 185-196. 

Simonite, V. (2003). The impact of coursework on degree classifications and the performance of individual students. Assessment & Evaluation in Higher Education, 28(5), 459-470. 

Smith, C. (2011). Examinations and the ESL student–more evidence of particular disadvantages. Assessment & Evaluation in Higher Education, 36(1), 13-25. 

Tam, A. C. F. (2021). Students’ perceptions of and learning practices in online timed take-home examinations during Covid-19. Assessment & Evaluation in Higher Education, 1-16. 

Turner, J., & Briggs, G. (2018). To see or not to see? Comparing the effectiveness of examinations and end of module assessments in online distance learning. Assessment & Evaluation in Higher Education, 43(7), 1048-1060. 

Williams, J. B., & Wong, A. (2009). The efficacy of final examinations: A comparative study of closed‐book, invigilated exams and open‐book, open‐web exams. British Journal of Educational Technology, 40(2), 227-236. 

Williams, J. B. (2006). The place of the closed book, invigilated final examination in a knowledge economy. Educational Media International, 43(2), 107-119.

Developing AI literacy – learning by fiddling

Despite ongoing debates about whether so-called large language models / generative language (and other media) tools are 'proper' AI (I'm sticking with the shorthand), my own approach to trying to make sense of the 'what', 'how', 'why' and 'to what end?' is to use spare moments to read articles, listen to podcasts, watch videos, scroll through AI enthusiasts' Twitter feeds and, above all, fiddle with various tools on my desktop or phone. When I find a tool or an approach that I think might be useful for colleagues with better things to do with their spare time, I jot notes in my sandpit, make a note like this blog post comparing different tools, record a video or podcast like those collected here or, if prodded hard enough, try to cohere my tumbling thoughts in writing. The two videos I recorded last week are an effort to help non-experts like me to think, with exemplification, about what different tools can and can't do and how we might find benefit amongst the uncertainty, ethical challenges, privacy questions and academic integrity anxieties.

The video summaries were generated using GPT-4 based on the video transcripts:

Can I use generative AI tools to summarise web content?

In this video, Martin Compton explores the limitations and potential inaccuracies of ChatGPT, Google Bard, and Microsoft Bing chat, particularly when it comes to summarizing external texts or web content. By testing these AI tools on an article he co-authored with Dr Rebecca Lindner, the speaker demonstrates that while ChatGPT and Google Bard may produce seemingly authoritative but false summaries, Microsoft Bing chat, which integrates GPT-4 with search functionality, can provide a more accurate summary. The speaker emphasizes the importance of understanding the limitations of these tools and communicating these limitations to students. Experimentation and keeping up to date with the latest AI tools can help educators better integrate them into their teaching and assessment practices, while also supporting students in developing AI literacy. (Video and Transcript available via Media Central)
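To make the distinction concrete: the failure mode described above arises when a chat model is asked to 'summarise this URL' but has no way to fetch the page, so it invents a plausible-sounding summary. A search-connected tool retrieves the text first. What follows is a minimal, purely illustrative Python sketch of that retrieve-then-summarise pattern; it assumes the requests, beautifulsoup4 and openai packages, an OPENAI_API_KEY environment variable and an illustrative model name, and it is emphatically not how any of the named products work internally.

    # Retrieve-then-summarise sketch: fetch the page text ourselves, then
    # pass it to the model, rather than asking the model to fetch a URL
    # (which a plain chat model cannot do and may silently fake).
    import requests
    from bs4 import BeautifulSoup
    from openai import OpenAI

    def page_text(url: str) -> str:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        for tag in soup(["script", "style"]):  # drop non-content markup
            tag.decompose()
        return " ".join(soup.get_text(separator=" ").split())

    def summarise(url: str) -> str:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        text = page_text(url)[:12000]  # crude truncation to fit a context window
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative model name
            messages=[
                {"role": "system",
                 "content": "Summarise the supplied article faithfully. "
                            "If the text is empty or unclear, say so rather than guessing."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    print(summarise("https://example.com/article"))  # hypothetical URL

The point is pedagogic rather than practical: if the fetched text is supplied, the summary can be grounded in it; if it isn't, the model can only guess, however authoritative the guess sounds.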

Using a marking rubric and ChatGPT to generate extended boilerplate (and tailored) feedback

In this video, Martin Compton explores the potential of ChatGPT, a large language model, as a labour-saving tool in higher education, particularly for generating boilerplate feedback on student assessments. Using the paid GPT-4 Plus version, the speaker demonstrates how to use a marking rubric for take-home papers to create personalized feedback for students. By pasting the rubric into ChatGPT and providing specific instructions, the AI generates tailored feedback that educators can then refine and customize further. The speaker emphasizes the importance of using this technology with care and ensuring that feedback remains personalized and relevant to each student’s work. This approach is already being used by some educators and is expected to improve over time. (Video and Transcript available via Media Central)
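The video uses the ChatGPT web interface directly; for colleagues who prefer to see the same idea as a script, here is a minimal sketch using the OpenAI Python client. The rubric text, criterion names and grade bands below are invented placeholders and the model name is illustrative; only the general approach (paste the rubric, state the per-criterion judgements, ask for draft feedback) mirrors the video.

    # Rubric-to-boilerplate-feedback sketch. The marker supplies per-criterion
    # judgements; the model drafts feedback which the marker then edits so the
    # final version is accurate and personal to the student's actual work.
    from openai import OpenAI

    RUBRIC = """\
    Argument: clarity and coherence of the central argument.
    Evidence: breadth, relevance and citation of supporting evidence.
    Structure: organisation, signposting and academic style.
    """  # placeholder rubric text

    def draft_feedback(judgements: dict[str, str]) -> str:
        client = OpenAI()  # assumes OPENAI_API_KEY is set
        graded = "\n".join(f"- {crit}: {band}" for crit, band in judgements.items())
        prompt = (
            "Here is a marking rubric:\n" + RUBRIC +
            "\nA student's work was judged as follows:\n" + graded +
            "\nDraft constructive feedback in three short paragraphs, "
            "with one concrete suggestion for improvement per criterion."
        )
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(draft_feedback({"Argument": "merit", "Evidence": "pass", "Structure": "distinction"}))

As the video stresses, the output is a starting point only: the care is in the editing, so that what reaches the student is personalised and true to their actual work.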

I should say that in the time since I made the first video (4 days ago) I have been shown a tool that connects ChatGPT to the web, and my initial fiddling there has re-dropped my jaw! More on that soon, I hope.

Generative AI: Friend or Foe?

In this post I share two videos on generative AI including (of course) reference to ChatGPT. These are designed for a general audience at UCL and will hopefully be of relevance to academic and professional services colleagues as well as students. In these unscripted videos I, a human, talk in a non-technical way about some of the tools, their affordances and their implications. The summaries below were generated in GPT-4 using the transcripts of the videos.

Video 1:

In this video, Martin Compton from Arena discusses the phenomenon of generative AI, using ChatGPT as a prime example. He addresses the question of whether generative AI is a friend or foe, and suggests that how we react to, utilise and learn from these technologies will determine the outcome. He provides an example of a generative image created with AI, raising ethical concerns such as copyright infringement and the carbon footprint of AI technologies. He also talks about different manifestations of 'large language models' and raises questions about the ways members of the academic community could use them.

Access details and transcript for video 1 here

————————————

Video 2

In the second video about generative AI, Martin Compton from Arena builds on discussions with a colleague, Professor Susan Smith, and explores whether generative AI is a friend or enemy. He acknowledges the power and remarkable capabilities of AI tools like ChatGPT (a large language model text generator) and Midjourney, an AI image generator. However, he advises against panicking or feeling anxious about the impact of these technologies. Instead, Martin suggests that we should adapt, adjust, and learn from the ethical issues and implications these tools present. By finding ways to accommodate, embrace, and exploit the potential of generative AI, we can utilize these technologies for labor-saving purposes and ultimately enhance various aspects of our lives.

Access the details and transcript for video 2 here

———————————

Podcast

LLInC · ChatGPT in Higher Education: a curse, a blessing or an inevitability?

Digitally Accessible Learning Design

Listen 5 mins 50 seconds or read below

This post accompanies a LinkedIn Live event hosted by Tony Reeves of Ding Learning and gathers the resources referred to in that event. But please do read and try the linked activities for yourself even if you can't attend! The resource is designed to raise awareness of what digital accessibility means and what a 'by design' approach to digital accessibility requires us to know and to do. The session is also an opportunity for us to pilot aspects of an (in-development) Accessibility Engagement Tool with colleagues beyond the walls of UCL. The tool is being designed to help colleagues discuss their accessibility engagement and get clear direction on what they can do to further improve the accessibility of their teaching – as far as possible in an anticipatory and planned way, rather than reactively or in response to a need that had not been anticipated. The goal is to enable colleagues to set clear digital accessibility goals irrespective of their starting point.

Accessibility in its broadest sense is about making activities, environments and information as usable and meaningful as possible in ways that do not exclude people. It is about empowerment, about minimising frustration and about effective anticipatory design. Digital accessibility therefore 'provides ramps and lifts into information'. It includes ensuring that all information we create at UCL can be seamlessly consumed by everyone who wishes to access it. As UCL's digital accessibility policy is rolled out, we are using this framework to help demystify aspects of digital accessibility.

The accessibility engagement model and accompanying self-assessment tool are being designed to enable colleagues to plot their own level according to a series of questions about aspects of digital accessibility. The idea is that, through a series of questions related to:

  1. Values and beliefs
  2. Knowledge and skills
  3. Actions and behaviours

…the tool will plot an overall position as well as noting areas of developmental or resourcing need. As we have shaped this model, one area that has led to much discussion, consultation and head-scratching is the labelling of the levels. As a starting point we propose six levels of 'maturity status' and invite colleagues to decide which level they are currently at (a purely hypothetical sketch of how such scoring might work follows the table below):

Accessibility Engagement Model

Each level below gives the accessibility maturity status and its characteristics and indicative practices:

Level 0 – Unwilling: Context means that this is not prioritised in the current working environment given competing commitments and pressures. Time is a key point of resistance.

Level 1 – Unable: Don't know where to start and/or in need of direction, support and prioritisation.

Level 2 – Reluctant compliant: Awareness of accessibility principles and drivers; only adopting the bare minimum when encouraged.

Level 3 – Willing compliant: Awareness of accessibility design principles; willingly adopting a good basic level of accessibility.

Level 4 – Ally: Connected to wider pedagogical values; allies are vocal on behalf of students and role-model or provide case studies/templates for others in their departments.

Level 5 – Champion and Co-creator: Activists/innovators who work with students to understand and design more accessible approaches and resources. Potential contributors to institutional policy and strategy.
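As promised above, here is a purely hypothetical Python sketch of the kind of aggregation the in-development tool might perform: averaging 0-5 responses within each of the three question areas, mapping the overall mean onto the six maturity levels, and flagging areas of developmental need. The level boundaries, function names and example scores are all invented for illustration; the real tool may work quite differently.

    # Hypothetical scoring sketch for the self-assessment tool: not the real
    # implementation, just an illustration of plotting an overall position.
    from statistics import mean

    LEVELS = ["Unwilling", "Unable", "Reluctant compliant",
              "Willing compliant", "Ally", "Champion and Co-creator"]

    def assess(responses: dict[str, list[int]]) -> str:
        """responses maps each question area to its 0-5 answers."""
        area_means = {area: mean(scores) for area, scores in responses.items()}
        overall = mean(area_means.values())
        level_index = min(5, round(overall))  # invented 0-5 to level mapping
        weak = [area for area, m in area_means.items() if m < overall - 0.5]
        note = f" Areas of developmental need: {', '.join(weak)}." if weak else ""
        return f"Overall maturity: level {level_index} ({LEVELS[level_index]}).{note}"

    print(assess({
        "Values and beliefs": [4, 5, 4],
        "Knowledge and skills": [2, 2, 3],
        "Actions and behaviours": [3, 4, 3],
    }))

With the example scores above, the sketch reports level 3 (Willing compliant) overall and flags 'Knowledge and skills' as the area needing development – exactly the kind of 'overall position plus developmental need' output described for the tool.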

Digitally Accessible Learning Design

Whilst the online tool is still under construction, we invite colleagues to use the Mentimeter for now to respond briefly to some ‘actions / behaviours’ statements. This page is for colleagues accessing the tool via the Ding Learning LinkedIn Live event.

Please access the Mentimeter so that you can (anonymously) assess your behaviours against a number of statements. For each statement you can choose from 0-5 as follows:

0. Not on my radar

1. Would if I knew how

2. Rarely or if required

3. Sometimes

4. Always

5. Yes! And support others

The cumulative results (i.e. from all those who respond) can be seen in the presentation slides.

The statements in the slides can also be seen below.

  1. I use descriptive hyperlinks rather than ‘click here’ or unconcealed links
  2. I ensure that visual materials are conveyed effectively to those who cannot see them using alternative text descriptions and audio descriptions
  3. I ensure my documents are navigable with a structured set of headings
  4. I ensure tables are easy to read and have clear heading rows
  5. I can use/ enable automatic speech recognition captions in live sessions
  6. I caption and/or provide transcripts for all multimedia I create
  7. I offer a range of formats for my materials e.g., PDF, html and docx
  8. I signpost students to assistive technologies so that they can have more support accessing materials
  9. I share electronic content with students (such as slides) ahead of teaching sessions
  10. I accessibility check my documents before finalising them
  11. I explain acronyms when I use them
  12. I check my work for colour contrast issues

Further resources

If colleagues wish to use the slides as they are, they will need a Mentimeter account. If you are logged in, you can open the cumulative results slides and 'copy to your account'. If not, please do use the questions as framed and/or await our proposed online tool.

Learning theory

This resource has been created as pre-reading for a session I have been invited to lead with students on the MSc Dietetics at UCL. It is my attempt at answering the question: 'Is there any point in dieticians knowing about learning theory?' (professionally, I mean, given that it is, of course, inherently interesting!). However, I think it is potentially of interest to anyone teaching!

(Listen 9mins 12 seconds or read below)

Download pdf version

Rationale

People with research or professional interests in educational psychology or teaching understandably and logically have an interest in learning theory. Whether theory provides a template for the design of approaches to teaching, learning and assessment, or offers an analytical lens to better understand what is happening at an individual or collective level, it makes sense that we challenge our assumptions, experiences and reflections through such theoretical lenses. But what of those whose relationship with teaching, or with everyday human tendencies and behaviours, is only a tangential part of their role? In any role where one person gives information to others, helps them to understand things or is responsible for changing (or helping to change) behaviours, understanding a little of how people learn will be beneficial. From my (lay) perspective, I imagine that common challenges in dietetics will be interpreting and conveying complex scientific information about nutrition and health, and helping people to understand impacts and causation in relation to excess or absence in diet. Consideration of these challenges and the issues that arise can be informed through theoretical lenses.

The learning theory landscape

A wordcloud of all the key words for this topic: theory, dietetics, constructivism, social, nutrition, behaviourism, health, diet, psychology are the largest

One of the problems with this is that a quick search for ‘theories of learning’ will present a dazzling, complex, sometimes-contradictory array of theories and ideas. It immediately raises several questions:

  1. Where do you start?
  2. How deep need you go?
  3. How can exposure to learning theory be applied in a meaningful way in context?

The answer to the first question may be 'right here!' if you have not studied learning theory before. The second question probably has the same answer as the question 'How long is a piece of string?' and will inevitably be determined by academic, research and professional roles and interests. The third question is one that we will try to get to grips with here and in the forthcoming session.

Like any other academic field, learning theory has its groupings and areas. The landscape isn’t always represented the same way: you will sometimes see theorists in one category, then another, which can be confusing. This may be due to classification differences, or because the theorist has developed their position over time. To complicate things further, the term ‘theory’ is used to cover a variety of models, approaches and techniques, and often defined by different people in different ways. That said, complication need not be a problem: rather than seeking firmly defined boundaries, think more in terms of spectra and Venn diagrams, where things overlap and interconnect. A main use of theory is to shed light on our experience and help us reflect on – and even change – our practice.

The following theories range from broad to narrow, and some can be seen as subsets of, or informed by, wider or earlier theories. Whether broad or narrow, generalised or specific, they have been selected because I think they may be of use to those in the field of dietetics. However, you have more expertise than I do here, so it is important your critical eye is focussed and alert. Remember, it is unlikely that you will read a theory and decide 'ah ha! That's me from now on'. Rather, you may read, think, reflect, apply and draw on a range of complementary (or contradictory) ideas and approaches as you develop techniques in your future roles, as well as using theoretical lenses to better understand what has worked and what has not.

Broad theoretical ‘schools’


Behaviourist theories of learning see the learner as passive: learners can be trained using reward and punishment to 'condition' them to behave in particular ways (famous theorists in this domain include Pavlov and Skinner, whose reach, unlike that of most other learning theorists, extends into popular understanding). Learning is seen as a change in behaviour. In health education the role of the expert might be to provide incentives or to find ways to disincentivise certain behaviours. Consider the cost of tobacco products and the gruesome images on the packaging. What is the thinking behind this? Can the cost and images be credited with the continuing fall in the number of smokers?

Cognitivist theories of learning see the learner as actively processing information rather than just being 'conditioned' by various stimuli. Cognitivists are concerned with how learners process and remember information, and often test recall as a measure of learning. In health education the expert's role is to convey information in ways that optimise recall and completeness. Consider the five-portions-of-fruit-and-veg-a-day campaign: whilst there were certainly 'rewards' built into the design of the programmes (i.e. the health benefits of eating five a day), there was also an emphasis on providing and reinforcing information about nutrition and vitamins through attractive materials, booklets, leaflets, connections to school curricula and so on.

Constructivist theories of learning see the learner as an active participant in their own learning. The process of learning is not merely putting knowledge into an empty container. The 'teacher' presents knowledge, scenarios, resources, options and problems (or students encounter them in other ways) and, in learning, students 'construct' the knowledge for themselves, linking it to what they already know. A variant of this is 'social constructivism', which holds that students' construction of their knowledge is done with others. How might a dietician apply a constructivist approach when working with a client with type 2 diabetes who, by their own admission and despite worsening symptoms, persists with a diet rich in sugar, starch and salt?


a display cabinet with items and the amount of sugar in them such as a coke can (27g), capri sun carton (24g) and mars bar (54g)

Stop and think

Which broad theoretical approach can you see here?

At my local dentist's surgery there is a display case with different sugary snacks, foods and drinks set out very neatly adjacent to piles of sugar equivalent to the actual amount in those foodstuffs. Each has a typed label (like in a museum) giving the amount of sugar in grams. There are also a couple of low-sugar items. There are no explicit warnings of the dangers of sugar to teeth.


Specific theories: How relevant / useful are these?

Situated learning theory holds that learning is always embedded within a context and culture, so it's best to teach material within a relevant context – e.g. teaching clinical skills in a clinical setting. Within that context, students learn by becoming involved in a 'community of practice' – a group of practitioners – and through 'legitimate peripheral participation' move from the periphery of this community towards its centre (i.e. the more expert and involved in practice they become, the closer they move toward the centre). (key names: Vygotsky, Lave)

Social Learning Theory views observation as key to learning; it holds that we learn through observing others, not just what they do but also the consequences of that. People learn from watching older or more expert people. An educator has a role in getting their attention, helping them remember and motivating them to demonstrate their learning. Behaviour is also affected by what they see being rewarded or punished. (key name: Bandura)

Mindset (motivational) theory argues that people who believe their ability to achieve something is fixed – a 'fixed mindset' – have little chance of changing it. To develop (i.e. learn), a growth mindset is needed, and this is related to intrinsic self-belief. The educator's role is to show belief, to exemplify positive behaviours (e.g. valuing hard work and effort, not only results) and to show how to embrace 'failure'. (key name: Dweck)

Critical Pedagogy is more of a movement than a theory: it holds that teaching cannot be separated from wider social and political concerns, and that educators should empower their ‘students’ to be active, critical citizens. Critical pedagogy is concerned with, whatever the subject, asking students to question hierarchies and power relations and to achieve a higher political consciousness. Paulo Freire, author of Pedagogy of the Oppressed (one of the first books of critical pedagogy) coined the ‘banking model’ in his critique of how some teaching aims to ‘fill’ students up with knowledge as though they are blank slates, merely receiving and storing knowledge. In addition, bell hooks’ work on intersectionality (complex layers of discrimination and privilege according to factors such as race, class, gender, sexuality and disability) might also lend a powerful lens to understand and challenge the nature and role of diet amongst groups as well as in individuals.

Session slides

https://www.mentimeter.com/app/presentation/b198f152cb620471d75aaadbcc42c251/embed

Further reading

You may like to see things represented on a timeline with short, pithy summaries of key ideas. If so, try this site: https://www.mybrainisopen.net/learning-theories-timeline/

Donald Clark has written a huge amount about learning theory on his blog and this can be seen here if you prefer a dip in and search approach: https://donaldclarkplanb.blogspot.com/

A really accessible intro (as well as a much wider resource) is the encyclopaedia of informal learning: https://infed.org/learning-theory-models-product-and-process/

This resource was produced by Martin Compton. The Theoretical schools material was adapted from resources created by Emma Kennedy, Ros Beaumont & Martin Compton (UoG, 2018)