The wizard of PAIR

Full recording: Listen / watch here

This post is an AI/me hybrid summary of the transcript of a conversation I had with Prof Oz Acar as part of the AI conversations series at KCL. This morning I found that my Copilot window now allows me to upload attachments (now disabled again! 30/4/24), but the output with the same prompt was poor by comparison to Claude or my ‘writemystyle’ custom GPT unfortunately (for now, and at first attempt). I have made some edits to the post for clarity and to remove some of the wilder excesses of ‘AI cringe’.


“The beauty of PAIR is its flexibility,” Oz explained. “Educators can customise each component based on learning objectives, student cohorts, and assignments.” An instructor could opt for closed problem statements tailored to specific lessons, or challenge students to formulate their own open-ended inquiries. Guidelines may restrict AI tool choices, or allow students more autonomy to explore the ever-expanding AI ecosystem. That oversight and guidance needs to come from an informed position, of course.


Crucially, by emphasising skills like problem formulation, iterative experimentation, critical evaluation, and self-reflection, PAIR aligns with long-established pedagogical models proven to deepen understanding, such as inquiry-based and active learning. “PAIR is really skill-centric, not tool-centric,” Oz clarified. “It develops capabilities that will be invaluable for working with any AI system, now or in the future.”


The early results from the more than a dozen King’s modules, across disciplines like business, marketing, and the arts, that have piloted PAIR have been overwhelmingly positive. Students have reported marked improvements in their AI literacy: confidence in understanding these technologies’ current capabilities, limitations, and ethical implications. “Over 90% felt their skills in areas like evaluating outputs, recognising bias, and grasping AI’s broader impact had significantly increased,” Oz shared.


While valid concerns around academic integrity have catalysed polarising debates, with some advocating outright bans and restrictive detection measures, Oz makes a nuanced case for an open approach centred on responsible AI adoption. “If we prohibit generative AI for assignments, the stellar students will follow the rules while others will use it covertly,” he argued. “Since even expert linguists struggle to detect AI-written text reliably (especially when it has been manipulated rather than simply churned out from a single-shot prompt), those circumventing the rules gain an unfair advantage.”


Instead, Oz advocates assuming AI usage as an integrated part of the learning process, creating an equitable playing field primed for recalibrating expectations and assessment criteria. “There’s less motivation to cheat if we allow appropriate AI involvement,” he explained. “We can redefine what constitutes an exceptional essay or report in an AI-augmented age.”


This stance aligns with PAIR’s human-centric philosophy of ensuring students remain firmly in the driver’s seat, leveraging AI as an enabling co-pilot to materialise and enrich their own ideas and outputs. “Throughout the PAIR process, we have mechanisms like reflective reports that reinforce students’ ownership and agency … The AI’s role is as an assistive partner, not an autonomous solution.”


Looking ahead, Oz is energised by generative AI’s potential to tackle substantial challenges plaguing education systems globally – from expanding equitable access to quality learning resources, to easing overstretched educators’ burnout through intelligent process optimisation and tailored student support. “We could make education infinitely better by leveraging these technologies thoughtfully…Imagine having the world’s most patient, accessible digital teaching assistants to achieve our pedagogical goals.”


However, Oz also acknowledges legitimate worries about the perils of inaction or institutional inertia. “My biggest concern is that we keep talking endlessly about what could go wrong, paralysed by committee after committee, while failing to prepare the next generation for their AI-infused reality,” he cautioned. Without proactive engagement, Oz fears a bifurcated future where students are either obliviously clueless about AI’s disruptive scope, or conversely, become overly dependent on it without cultivating essential critical thinking abilities.


Another risk for Oz is generative AI’s potential to propel misinformation and personalised manipulation campaigns to unprecedented scales. “We’re heading into major election cycles soon, and I’m deeply worried about deepfakes fuelling conspiracy theories and political interference,” he revealed. “But even more insidious is AI’s ability to produce highly persuasive, psychologically targeted disinformation tailored to each individual’s profile and vulnerabilities.”


Despite these significant hazards, Oz remains optimistic that responsible frameworks like PAIR can steer education towards effectively harnessing generative AI’s positive transformations while mitigating risks.


PAIR Framework - Further information

Previous conversation with Dan Hunter

Previous conversation with Mandeep Gill Sagoo

Generative AI in HE - self-study short course

An additional point to note: the recording is of course a conversation between two humans (Oz and Martin) and is unscripted. The Q&A towards the end of the recording was facilitated by a third human (Sanjana). I then compared four AI transcription tools: Kaltura, Clipchamp, Stream and YouTube. Kaltura estimated 78% accuracy, Clipchamp crashed twice, and Stream was (in my estimation) around 90-95% accurate, but its editing/download process is less convenient than YouTube’s in my view, so the final transcript is the one initially auto-generated in YouTube, punctuated by ChatGPT, then re-edited for accuracy in YouTube. Whilst accuracy has improved noticeably in the last few years, the faff is still there. The video itself is hosted in Kaltura.

Nuancing the discussions around GenAI in HE

Audio version (produced using Speechify text-to-voice; requires free sign-up to listen)

While we collectively and individually (cross-college and in faculties) reflect on the impacts over the last year or so of (Big) AI and Generative AI on what we teach, how we teach, how we assess, and what students can, can’t, should and shouldn’t be doing, I am finding that (finally) some of the conversations are cohering around themes. Thankfully, it’s not all about academic integrity (as fascinating as that is). Below is my effort at organising some of those themes; it is a bit of a brain dump!

Balancing institutional consistency with disciplinary diversity

One of the primary challenges we face is how to balance the need for institutional consistency with the fact that GenAI is developing in diverse ways across different disciplines and industries. This issue is particularly pertinent at multi-disciplinary institutions like KCL, where we have nine faculties, each witnessing emerging differences not just between faculties but between departments, programmes, and even among colleagues within the same programme.

The fractious, new, contentious, ill-understood, unknown, and unpredictable nature of GenAI exacerbates this challenge. To address this, we are adopting a two-pronged approach:

1. Absolute clarity about the broad direction: ENGAGE at KCL (not embrace!) with clear central guidance that can be adapted locally, allowing a degree of agency.

2. A multi-faceted approach to evolving staff and student literacy, both centrally and locally, recognising that we all know roughly nothing about the implications and what will actually emerge in terms of teaching and assessment practices.

What we are not doing is articulating explicit policy (yet), given the unknowns and unpredictability, but we are trying to make more explicit where existing policy applies and where there are tensions or even perceived contradictions.

Enabling innovation while supporting the ‘engagement’ strategy

To enable and support staff in innovating with GenAI while fostering engagement and endeavouring to ensure compliance with ethical, broader policy and even legal requirements, our multi-faceted approach includes:

1. Student engagement in research, in developing guidance and in supporting literacy initiatives

2. Supported/funded research projects to help diversify fields of interest, to build communities of enthusiasts and to share outcomes within (and beyond) the College.

3. Collaboration within (e.g. with AI institute; involvement of libraries and collections, careers, academic skills) and across institutions (sharing within networks, participating at national and international events; building national and international communities of shared interest).

4. Investment in technologies and leadership to facilitate innovation at a more rapid pace, where such piloting and experimentation have typically taken much longer in the past.

5. Providing spaces for dialogue such as student events, the forthcoming AI Institute festival, research dissemination events, workshops and a college-wide working group.

As we navigate this new territory, consistent messaging and clear guidance are paramount. We need to learn from others’ successes and mistakes while avoiding inadvertently breaching data privacy or other ethical and legal boundaries; in a fast-moving landscape the sharing of experience and intelligence is essential. One example (from another university!) is the pitfall of uploading students’ work into ChatGPT to determine whether an LLM wrote it, only to discover that this constitutes a massive data breach, and that the LLM couldn’t even provide that information.

Fostering digital literacy and critical thinking

Everything above connotes learning (and therefore time) investment for all staff and students. Where will we find this time? Framed as critical AI literacy, it is (imho) unavoidable, even for the world’s leading sceptics. Wherever you situate yourself on the AI enthusiasm continuum (and I’m very much a vacillator, certainly not firmly at the evangelical end!), we have to address this, and there’s no better way than first-hand experience rather than the (often hype-tainted, simplistic) second-hand narratives peddled by those with vested interests, whether they be big (and small) tech companies with a whizzy tool or detector to sell you, or educational conservatives keen to exploit a perceived opportunity to return to halcyon days of squeaky-shoed invigilation of exams for everyone, for everything.

My biggest worry for the whole educational sector (especially where leadership from government is woolly at best) is that the complexity and necessary nuancing of discussion and decision-making will give way to either a threatening or punitive approach to assessment or an over-exuberant, ill-conceived deal with the devil… both of which will be counterproductive if good education is your goal. In my view we should:

1. Work with, not against, both students and the technology.

2. Model good practices ourselves.

3. Accept that mistakes will be made, but provide clear guidelines on what is and is not advised/permitted for any given teaching or assessment activity.

4. Drive the narratives more ourselves from within the broader academy: stop reacting; start demanding (much easier collectively, of course).

At KCL, we have implemented three “golden rules” for students to mitigate risks during the transition to better understanding:

Golden Rule 1: Learn from your interactions with AI, but never copy-paste text generated from a prompt directly into summative assignments.

Golden Rule 2: Ask if you are uncertain about what is allowed in any given assessment.

Golden Rule 3: Ensure you take time before submission to acknowledge the use of generative AI.

Empowering critical and creative engagement

This is easy to set as a goal but of course much harder to realise. To empower all students (and staff) to engage critically and creatively with GenAI tools, we must acknowledge the potential benefits while addressing justified concerns. In an environment of reduced real-terms funding, international student recruitment challenges, and widespread redundancies in several HE institutions, some colleagues might view GenAI as yet another burden. I have been encouraging colleagues (with one eye on a firmly held view that first-hand experience equips you much better to make informed judgements) to look for ways to exploit these technologies in relatively risk-free ways, not only to build self-efficacy but also to shift the more entrenched and narrow narratives of GenAI as an essay generator and existential threat! Some examples:

1. Can you find ways to actually realise workflow optimisation? GenAI tools offer amazing potential for translation, transcript generation, meeting summaries, and clarifying and reformatting content.

2. Accessibility and neurodiversity support: Many colleagues and students are already benefiting from GenAI’s ability to present content in alternative formats, making it easier to process text or generate alt-text.

3. Educational support in underserved areas: GenAI tools at a macro level could potentially support regions where there are too few teachers, but at a micro level they can also enable students with complex commitments to access a degree of support outside ‘office hours’.

Implications for curriculum design, teaching and assessment

The advent of GenAI has potential implications for curriculum design, instructional strategies, and assessment methods. One concern is the potential homogenisation (and Americanisation) of content by LLMs. While LLMs can provide decent structures, learning outcomes, and assessment suggestions, there is a risk of losing the spark, humanity, visceral connection and novelty that human educators bring.

However, this does not have to be an either/or scenario and I think this is the critical point to raise. We can leverage GenAI to achieve both creativity and consistency. For example, freely available LLMs can generate scenarios, case studies, multiple-choice questions based on specific texts, single-best-answer databases, and interactive simulations for developing skills like clinical engagement or client interaction. A colleague has found GenAI helpful in designing Team-Based Learning (TBL) activities, although the quality of outputs depends on the tool used and the quality of the prompts, underscoring the importance of GenAI literacy.
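
As a purely illustrative sketch of the text-grounded question generation mentioned above (not a recommended workflow; the file and model names are placeholders, and it assumes the openai Python package and an API key), a few lines are enough to draft multiple-choice questions from a specific reading. The point made throughout this post stands, though: the outputs still need expert review.

```python
# Illustrative sketch: ask an LLM to draft MCQs grounded in a set text.
from openai import OpenAI

client = OpenAI()
source_text = open("week3_reading.txt", encoding="utf-8").read()  # placeholder

prompt = (
    "Based only on the text below, write five multiple-choice questions, "
    "each with four options, the correct answer marked with an asterisk, "
    "and a one-sentence explanation of the correct answer.\n\n"
    f"TEXT:\n{source_text}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```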

When discussing academic integrity and rigour, we must separate our concerns about GenAI from broader issues around plagiarism and well-masked cheating, which have long been challenges. We need to re-evaluate why we use specific assessments, what they measure overtly and tacitly, and the importance of writing in different programmes.

Moving beyond ‘Cheating’ and ‘AI-Proofing’

To move the conversation around AI and assessment beyond ‘cheating’ and ‘AI-proofing,’ we must recognise that ‘AI-proofing’ is an arms race we cannot win. We also need to accept that we have lived for a very long time with very varied definitions of what constitutes cheating, what constitutes plagiarism and even the extent to which things like proof-reading support are or should be allowed. I think the time has come for us to re-evaluate everything we do (easy!): our assessments, their purposes, what they measure, the importance of writing in each programme, and what we define as cheating, plagiarism, and authorship in the context of GenAI. If we do this well, we will surface the tacit criteria many students are judged on, the hidden curricula buttressing programme and assessment design, and the privileges (covert, often even from those assessing) that dictate the what and how of assessments and the ways in which they are evaluated.

Ethical dilemmas: Energy consumption and a whole lot more

Many have written on the many controversies GenAI raises: copyright, privacy, exploitation, sustainability. One is energy consumption. While figures vary, some suggest that using an LLM for a basic search query costs 40 times more in cooling than a conventional search. Shocking! Conversely, others argue that using LLMs to generate content that would otherwise be time-consuming and laborious could be less costly in terms of consumption. What to think?! At the very least, and as technology improves, we must distinguish between legitimate, purposeful use and novelty or wasteful use, just as we should with any technology. But we need to find trusted sources and points of referral as, in my experience at least, a lot of what I read is based on figures whose provenance and veracity are hard to pin down.

We cannot pretend that the issues of copyright, data privacy, lack of transparency, and the exploitation of human reinforcement workers do not exist, and these are challenges compounded by the tech industry’s race for a sustainable market share. But we should be wary of ignoring pre-existing controversies, of being inconsistent in the ways we scrutinise different technologies and, from my point of view at least, of failing to recognise the potentials as a consequence of some of the more shocking and outlandish stories we hear. Again, we come back to complexity and nuance. Currently, education seems to be in reaction mode, but we need to drive the narratives around these ethical concerns.

Intellectual property rights, authorship, and attribution

As I say above, we need to re-examine the fundamentals of higher education, such as our definitions of authorship, writing, cheating, and plagiarism. For example, while most institutional policies prohibit proofreading, many students from privileged backgrounds have long benefited from having family members review their work: a form of cultural capital and privilege that is generally accepted and not questioned, even if, by the letter of the academic integrity law, such support is as much cheating as getting a third-party piece of tech to ‘proofread’ for you.

The opportunity for students from diverse backgrounds, including those who find conventional reading and studying challenging, to leverage GenAI for similar benefits is a reality we must address. Unless the quality of writing or the writing process itself is being assessed, we may need to be more open to how technology changes the way we approach writing, just as Google and word processing revolutionised information-finding and writing processes. I think we (as a sector) have realised that citation of LLMs is inappropriate, but for how long, and in which disciplines, will we feel the need to make lengthy acknowledgements of how we have used these technologies?

Regardless of the discipline, engaging with GenAI is crucial – not doing so would be irresponsible and unfair to our students and ourselves. However, engagement also connotes investment in time and other resources, which raises the question of where we find those resources.

AI Law

Watch the full video here

In the second AI conversation of the King’s Academy ‘Interfaculty Insights’ series, Professor Dan Hunter, Executive Dean of the Dickson Poon School of Law, shared his multifaceted engagement with artificial intelligence (AI). Prof Hunter discussed the transformative potential of AI, particularly generative AI, in legal education, practice, and beyond. With a long history in the field of AI and law, he offered a unique perspective on the challenges and opportunities presented by this rapidly evolving technology. To say he is firmly in the enthusiast camp is probably an understatement.

A wooden gavel with ‘AI’ embossed on it

From his vantage point, Prof Hunter presents the following key ideas:

  1. AI tools (especially LLMs) are already demonstrating significant productivity gains for professionals and students alike but it is often more about the ways they can do ‘scut work’. Workers and students become more efficient and improve work quality when using these models. For those with lower skill levels the improvement is even more pronounced.
  2. While cognitive offloading to AI models raises concerns about losing specific skills (examples of long division or logarithms were mentioned), Prof Hunter argued that we must adapt to this new reality. The “cat is out of the bag” so our responsibility lies in identifying and preserving foundational skills while embracing the benefits of AI.
  3. Assessment methods in legal education (and by implication across disciplines) must evolve to accommodate AI capabilities. Traditional essay writing can be easily replicated by language models, necessitating more complex and time-intensive assessment approaches. Prof Hunter advocates for supporting the development of prompt engineering skills and requiring students to use AI models while reflecting on the process.
  4. The legal profession will undergo a significant shakeup, with early adopters thriving and those resistant to change struggling. Routine tasks will be automated, obliging lawyers to move up the value chain and offer higher-value services. This disruption may lead to the need for retraining.
  5. AI models can help address unmet legal demand by making legal services more affordable and accessible. However, this will require systematic changes in how law is taught and practiced, with a greater emphasis on leveraging AI’s capabilities.
  6. In the short term, we tend to overestimate the impact of technological innovations, while underestimating their long-term effects. Just as the internet transformed our lives over decades, the full impact of generative AI may take time to unfold, but it will undoubtedly be transformative.
  7. Educators must carefully consider when cognitive offloading to AI is appropriate and when it is necessary for students to engage in the learning process without AI assistance. Finding the right balance is crucial for effective pedagogy in the AI era.
  8. Professional services staff can benefit from AI by identifying repetitive, language-based tasks that can be offloaded to language models. However, proper training on responsible AI use, data privacy, and information security is essential to avoid potential pitfalls.
  9. While AI models can aid in brainstorming, generating persuasive prose, and creating analogies, they currently lack the ability for critical thinking, planning, and execution. Humans must retain these higher-order skills, which cannot yet be outsourced to AI.
  10. Embracing AI in legal education and practice is not just about adopting the technology but also about fostering a mindset of change and continuous adaptation. As Prof Hunter notes, “If large language models were a drug, everyone would be prescribed them.” *

The first in the series was Dr Mandeep Gill Sagoo

* First draft of this summary generated from meeting transcript via Claude

Breathless AI for EDU

Microsoft EDU presentations at the BETT show were high energy and breathless, and this video adopts the same tone. Being ancient myself, I carry within me hard-to-shake cultural norms and, despite my love of so many things from across the pond, still blink nervously when confronted with ‘Wow, look at this, people!’ approaches to sales; BETT is increasingly like this, it has to be said. (Side note: all the Twitter folk who shout ‘MOST PEOPLE ARE USING CHATGPT WRONG!’ get automatic hard passes from me every single time.)

Anyway, this video is a summary of one of the presentations: I watched it in full at the time and recently watched the video summary too. There’s a lot to be sceptical about and a lot that wouldn’t leave me quite as breathlessly excited, but there’s also a ton of things in here that are indicative of the direction MS products are going in the education space, particularly in relation to schools. The MS Teams for Education integrations suggest we may soon be talking again about the what and where of VLEs too. To be fair, the Copilot ‘side by side’ in MS Edge approach is something I don’t routinely use, but it may finally nudge me towards a browser other than Safari or Chrome (or maybe not!). The Copilot for Educators resource mentioned is very useful. The big deal towards the end is the school-focussed Teams embellishments, but they are worth thinking about as they suggest likely trajectories for all sectors. Much of the reader and speaker AI support looks like tools that would transfer to the HE context, and the admin/resource creation ideas will likely be popular too.

The presenter’s examples and own style really underline the American bias in the tool development and the way the reading and speaker coach tools will further homogenise accent and dialect. My daughter already says ‘gotten’ and ‘sidewalk’ and was delighted yesterday to find out a show she wants to see was ‘on Broadway’, until we explained that Broadway is in New York. The question for us in the ‘not America’ English-speaking/using world is how much loss to homogenisation will be perceived to be acceptable for assumed gains: actually a question you might ask about a lot of these technologies. Predicted degrees of divergence in orthography and dialect leading to an inability to understand one another never manifested beyond some pretty well-known differences (though subtitles are a solid friend with some TV), so I think accent and tone variants are the most at risk.

Anyway, what I came here to say was I think it’s worth a watch (32 mins) or having on in the background when you’re doing something placid, calm and terribly British, like drinking tea, having a curry or watching football.

TL;DW? I used Gemini (whilst waving my fist at Copilot) to produce a summary based on the transcript.

College Teaching Fund: AI Projects - A review of the review by Chris Ince

On Wednesday I attended the mid-point event of the KCL College Teaching Fund projects. Each group has been awarded some funding (up to £10,000, though some came in with far smaller budgets) to do more than speculate on the possibility of using AI within their discipline and teaching, and instead carry out a research project around design and implementation.

Each team had one slide and three minutes to give updates on their progress so far, with Martin acting as compere and facilitator. I started to take notes so that I could possibly share ideas with the faculty that I support (and part-way through thought that I perhaps should have recorded the session and used an AI to summarise each project), but it was fascinating to see links between projects in completely different fields. Some connections and thoughts before each project’s progress so far:

  • The work involving students was carried out in many ways, but pleasingly many projects were presented by student researchers, who had either been part of the initial project bid or who had been employed using CTF funds. Even if only as participants being surveyed and trialled, students are involved at all levels of this work, as they should be.
  • Several projects opened with scoping existing student use of gAI in their academic lives and work. This has to be taken with a pinch of salt, as it requires an element of honesty, but King’s has been clear that gAI is not prohibited so long as it is acknowledged (and allowed at a local level). What is interesting is that scoping consistently found that students did not seem to be using gAI as much as one might think (about a third); however, their use has been growing throughout the projects and the academic year as they are taught how to use it.
  • That being said, several projects identify how students are sceptical of the usefulness of gAI to them and in some that scepticism grows through the project. In some ways this is quite pleasing, as they begin to see gAI not as a panacea, but as a tool. They’re identifying what it can and can’t do, and where it is and isn’t useful to them. We’re teaching about something (or facilitating), and they’re learning.
  • Training AIs and chatbots to assist in specific and complex tasks crops up in a number of projects, and they’re trialling some very different methods for this. Some are external, some are developed and then shared with students, and some give students what they need to train bots themselves. It’s evidence that there are many possible approaches, and exactly why this kind of networking is useful.
  • There’s a frequently aired, heavily patronising perception that young people know more about a technology than older people. It’s always more complex than that, but the involvement of students in CTF projects has fostered some sharing of knowledge, as academic staff have seen what students can do with gAI. However, it’s been clear that the converse is also true, and that ‘we’ not only need to teach them but there is a desire for us to. This is particularly notable when we consider equality of access and unfair advantages; two projects highlighted this when they noted students from China had lower levels of familiarity with AI.
The projects, with their leads in brackets, and my thoughts on each:
How do students perceive the use of genAI for providing feedback (Timothy Pullen): A project from Biochemistry that’s focused on coding, specifically AI tools giving useful feedback on coding. Some GTAs have developed short coding exercises that have been trialled with students (they get embedded into Moodle and the AI provides student feedback). This has implications for time saved on the administration of feedback of this kind, but Tim suggests that there are ‘significant’ limits to what customised bots can do here. I need to find out more, and am intrigued by the student perception of this: are there some situations where students would rather have a real person look at their work and offer help?
AI-Powered Single Best Answer (SBA) Automatic Question Generation & Enhanced Pre-Clinical Student Progress Tracking (Isaac Ng, student, and Mandeep Sagoo): Isaac, a medical student, presents, and it’s interesting that there’s quite a clear throughline to producing something that could have commercial prospects further down the line; there’s a name and logo! An AI has been ‘trained’ with resources and question styles that act as the baseline; students can then upload their own notes and the AI uses these to produce questions in an SBA format consistent with the ‘real’ ones. There’s a clear focus on making sure that the AI won’t generate questions from the material it’s been given that are factually wrong. A nice aspect is that all of the questions the AI generates are stored, and in March students are going to be able to vote on other students’ AI-generated questions. I’m intrigued by whether students know what a good or bad question is, and do we need to ensure their notes are high-quality first?
Co-designing Encounters with AI in Education for Sustainable Development (Caitlin Bentley): Mira Vogel from King’s Academy is speaking on the team’s behalf; she leads on teaching sustainability in HE. The team have been working on the ‘right’ scaffolding and framing to find the most appropriate teaching within different areas/subjects/faculties, and how to find the best routes. They have a broad range of members of staff involved, so have brought this element into the project itself. The first phase has been recursive, recruiting students across King’s to develop materials; Mira has a fun phrase about “eating one’s own dog food”. They’ve been identifying common ground across disciplines to find how future work should be organised, at scale and more widely, to tackle ‘wicked problems’ (I’m sure this is ‘pernicious or thorny problems’ and not surfer-dude ‘wicked’, but I like the positivity in the thought of it being both).
Testing the Frontier: Generative AI in Legal Education and beyond (Anat Keller and Cari Hyde Vaamonde): Trying to bring critical thinking into student use of AI. There’s a Moodle page, an online workshop (120 participants) and a focus group day (12 students and staff) to consider this. How does/should/could the law regulate financial institutions? The project focused on the application of assessment marking criteria and typically identified three key areas of failure: structure, understanding, and a lack of in-depth knowledge (interestingly, probably replicating what many academics would report for most assessment failure). The aim wasn’t a pass, but to see if a distinction-level essay could be produced. Students were a lot more critical than staff when assessing the essays. (Side note: students anthropomorphised the AI, often using terms like ‘them’ and ‘him’ rather than ‘it’.) Students felt that while using AI at the initial ideas and creation stage may feel more appropriate than using it during the actual essay writing, this was where they lost the agency and creativity that you’d want and find in a distinction-level student; perhaps this is the message to get across to students?
Exploring literature search and analysis through the lens of AI (Isabelle Miletich): Another project where the students on the research team get to present their work; it’s a highlight of the work, which also has a heavy co-creational aspect. Focused on Research Rabbit, a free AI platform that sorts and organises literature for literature reviews. Y2 focus groups have been used to inform material that is then used with Y1 dental students. There was a 95.7% response rate to the Y1 survey. Resources were produced to form a toolbox for students, mainly guidance for the use of Research Rabbit, and there was also a student-produced video for Y1s on how to use it. The conclusion of the project will be narrated student presentations on how they used Research Rabbit.
Designing an AI-Driven Curriculum for Employable Business Students: Authentic Assessment and Generative AI (Chahna Gonsalves): Identifying use cases so that academics are better informed about when to put AI into their work. There have been a number of employer-based interviews around how employers are using AI. Student participants are reviewing transcripts to match these to appropriate areas that academics might then slot into the curriculum. An interesting aspect has been that students didn’t necessarily know or appreciate how much King’s staff do behind the scenes on curriculum development work. It was also a surprise to the team how some employers were not as persuaded of the usefulness of AI (although many were embedding it within their work). There was some consideration of there being a difference in approach between early adopters and those more reticent.
Assessment Innovation integrating Generative AI: Co-creating assessment activities with Undergraduate Students (Rebecca Upsher): Based in Psychology. Students described how assessment to them means anxiety and stress, or is “just a means to get a degree” (probably some work to do around the latter, for sure). There’s a desire for creative and authentic assessment from all sides. The project started by identifying current student use of AI in and around assessment: one focus group (a learning and assessment investigation; clarity of existing AI guidance; suggestions for improvements) and one workshop (students more actively giving staff suggestions about summative AI use). There is a focus on inclusive and authentic assessment, being mindful of neurodiverse students, and the group have been working with the neurodiverse society. Research students have been carrying out the literature review, preparing recruitment materials for groups, and mapping assessment types used in the department. A preliminary common thread was a desire for assessments to be designed with students, and a shift in power dynamics. What is interesting is that AI projects like this are fostering the sorts of co-design work that could have taken place before AI but didn’t necessarily; academic staff are now valuing what students know and can do with AI (particularly if they know more than we do).
Improving exam questions to decrease the impact of Large Language Models (Victor Turcanu): A medicine-based project on alignment with authentic professional tasks that allow students to demonstrate their understanding and critical and innovative thinking: can students use LLMs to enhance their creativity and wider conceptual reach? The project is using 300 anonymous exam scripts to compare with ChatGPT answers. More specifically, it’s about asking students their opinion on a question that doesn’t have an answer (a novel question embedded within an area of research around allergies: can students design a study to investigate something that doesn’t have a known solution, talk about the possibilities, or say what they think would be a line of approach to researching an answer?). LLMs may be able to utilise work that has been published, but cannot draw on what hasn’t been published or isn’t yet understood. While the project was about students using LLMs, there’s also an angle here that this is a form of assessment where an AI can’t help as much.
Exploring Generative AI in Essay Writing and Marking: A study on Students’ and Educators’ Perceptions, Trust Dynamics, and Inclusivity (Margherita de Candia): Political science. Working with Saul Jones (an expert on assessment), they’ve also considered making an essay ‘AI-proof’. They’re using the PAIR framework developed at King’s and have designed an assessment with it: a brief they think is AI-proof but that still allows students to use AI tools. Workshops in which students write an essay using AI will then be used to refine the assignment brief following a marking phase. If it works, they want to disseminate the AI-proof essay brief to colleagues across the social science faculties; they are also running sessions to investigate student perceptions, particularly around improvements to inclusivity in using AI. An interesting element here is what we consider to be ‘AI-proof’, but also that students will be asked for their thoughts on feedback for their essays when half of it will have been generated by an AI.
Student attitudes towards the use of Generative AI in a Foundation Level English for Academic Purposes course and the impact of in-class interventions on these attitudes (James Ackroyd): Action research from King’s Foundations, with the team working on English for Academic Purposes. Two surveys through the year and a focus group, with specific in-class interventions on the use of AI; another survey to follow. Two-thirds of students initially said that they didn’t use AI at the start of the course (40% of students are from China, where AI is less commonly used due to access restrictions), but half-way through the course two-thirds said that they did. Is this King’s demystifying things? Student belief in what AI could do reduced during the course, while faith in the micro-skills required for essay writing increased. Lots of fascinating threads on AI literacy and perceptions of it have come out of this so far.
Enhancing gAI literacy: an online seminar series to explore generative AI in education, research and employment (Brenda Williams): An online seminar series on the use of AI (because students asked for them online, but there are also more than 2,000 students in the target group and it’s the best way to get reach). A consultation panel (10 each of staff, students and alumni) has been set up to design five sessions to be delivered in June. Students have been informed about the course, and a pre-survey to find out about participants’ use of AI (and a post-survey) has been prepared. This project in particular has a high mix of staff from multiple areas around King’s and highlights that there is more at play within AI than just working with AI in teaching settings.
Supporting students to use AI ethically and effectively in academic writing (Ursula Wingate): Preliminary scoping of student use of AI. Focus on fairness and a level playing field: to upskill some students, and to rein in others. Recruited four student collaborators. Four focus groups (23 participants in January). All students reported having used ChatGPT (did this mean in education, or in general?) and there is a wide range of free tools they use. Students are critical and sceptical of AI: they’ve noticed that it isn’t very reliable and have concerns about the IP of others. They’re also concerned about not developing their own voice. Sessions designed to focus on some key aspects (cohesion, grammatical compliance, appropriateness of style, etc.) when using AI in academic writing are being planned.
Is this a good research question? (Iain Marshall, Kalwant Sidhu): Research topics for possible theses are being discussed at this half-way point of the academic year, and students are consulting chatbots (academics are quite busy, and supervisors are usually only assigned once project titles and themes are decided: can students have a space to go to beforehand for more detailed input?). The team have been using prompt engineering to create their own chatbot to help themselves and others (I think this works through the application of provided material, so students can input this and then follow with their own questions). This does involve students using quite a number of detailed scripts and some coding, so it is supervised by a team, with the aim that this will be supportive.
Evaluating an integrated approach to guide students’ use of generative AI in written assessments (Tania Alcantarilla & Karl Nightingale): There are 600 students in the first year of their Bioscience degrees. The team focused on perceptions and student use of AI, the design of a guidance podcast/session, and evaluation of the sessions and then of ultimate gAI use. There were 200 responses to the student survey (which is pretty impressive). Lower use of gAI than expected (a third of students, but this increased after being at King’s, mainly among international students). It’s only now that I’ve realised people ‘in the know’ are using ‘gAI’ and not ‘genAI’ as I have… am I out of touch?
AI-Based Automated Assessment Tools for Code Quality (Marcus Messer, Neil Brown): A project based around the assessment of student-produced code. Here the team have focused on ‘chain-of-thought prompting’: an example given to the LLM is a gobbet that includes the data, a show of the reasoning steps, and the solution. Typically eight are used before the gAI is asked to apply what it has learned to a new question or other input (a sketch of this prompting pattern follows these project notes). The team will use this to assess the code quality of programming assignments, including readability, maintainability, and quality. Ultimately the grades and feedback will be compared with human-graded examples to judge the effectiveness of the tool.
Integrating ChatGPT-4 into teaching and assessment (Barbara Piotrowska): Public Policy, in the Department of Political Economy. The broad goal was to get students excited about and comfortable with using gAI. Some of the most hesitant students have been the most inventive in using it to learn new concepts. ChatGPT is being used as a co-writer for an assessment, a policy brief (advocacy), due next week. Teaching is also a part (conversations with gAI on a topic can be used as an example of a learning task).
Generative AI for critical engagement with the literature (Jelena Dzakula): Digital Humanities: reading and marking essays where students engage with a small window of literature. Can gAI summarise what are considered difficult articles and chapters for students? An initial survey showed that students don’t use tools for this; they just give up. They mainly use gAI for brainstorming and planning, but not for helping their learning. The team are designing workshops/focus groups to turn gAI into a learning tool, mainly based around complex texts.
Adaptive learning support platform using GenAI and personalised feedback (Ievgeniia Kuzminykh): This project aims to embed AI, or at least use it as an integral part of a programme, where it has access to a lot of information about progress, performance and participation. Moodle has proven quite difficult to work with for this project, as the team wanted an AI that would analyse Moodle (to do this, a cloned copy was needed, uploaded elsewhere so that it could be accessed externally by the AI). The ChatGPT API not being free has also been an issue. So far, course content, quizzes and answers have been utilised, with the gAI asked to give feedback and generate new quizzes. A paper on the design of a feedback system is being written and will be disseminated.
Evaluating the Reliability and Acceptability of AI Evaluation and Feedback of Medical School Course Work (Helen Oram): Couldn’t make the session; updates coming soon!
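
As flagged in the Messer/Brown project notes above, here is a minimal sketch of few-shot chain-of-thought prompting for code-quality feedback. It illustrates the general pattern, not their implementation: the worked example, the rubric and the model name are my own assumptions, it uses a single worked example where they reportedly use around eight, and it assumes the openai Python package plus an OPENAI_API_KEY environment variable.

```python
# Minimal sketch of few-shot chain-of-thought prompting for code review.
# Each worked example pairs code with visible reasoning steps and a final
# judgement, so the model imitates the reasoning before grading new input.
from openai import OpenAI

# One illustrative worked example (real use would supply several).
EXAMPLE = """Code:
def f(l):
    s = 0
    for x in l:
        s = s + x
    return s

Reasoning: single-letter names (f, l, s) hurt readability; the loop manually
reimplements sum(); there is no docstring; the logic itself is correct.
Grades: readability 2/5, maintainability 3/5, correctness 5/5."""

def review(code: str) -> str:
    """Grade a code snippet by showing the model a worked example first."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You grade code quality. Reason "
             "step by step, then give grades, following the example."},
            {"role": "user", "content": EXAMPLE},
            {"role": "user", "content": f"Code:\n{code}\n\nReasoning:"},
        ],
    )
    return response.choices[0].message.content

print(review("def add(a,b): return a+b"))
```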

Fascinating stuff. For me, I want to consider how we can take this work from projects that have been funded by the CTF and use it as ideas and models that departments, academics, and teaching staff can look to when considering teaching, curriculum and assessment in contexts where they may not have funding.

Understanding and Integrating AI in Teaching

This morning I discussed this topic with colleagues from King’s Natural, Mathematical and Engineering Sciences (NMES) faculty. The session was recorded and a transcript is available to NMES colleagues but, as I pointed out in the session, AI is enabling ways of enhancing and/or adding to the alternative ways of accessing the core information. By way of illustration, the post below is generated from the transcript (after I sifted the content to remove other speakers). The only edit I made was to remove the words ‘in summary’ from the final paragraph.

TL;DR: Autopodcast version

Slides can be seen here

Screenshot from the title slide showing an AI-generated image of a foot with only four toes and a quote purportedly from da Vinci which says: ‘The human foot is a masterpiece of engineering and a work of art’

Understanding and Integrating AI in Teaching

Martin Compton’s contribution to the NMES Education Elevenses session revolved around the integration of AI into teaching, learning, and assessment. His perspective is deeply rooted in practical application and a cautious understanding of these technologies, especially large language models like ChatGPT or Microsoft Copilot.

——-

My approach towards AI in education is multifaceted. I firmly believe we need a basic understanding of these technologies to avoid pitfalls. The misuse of AI can lead to serious consequences, as seen in instances like the professor in Texas who misused ChatGPT for student assessment or the lawyer in Australia who relied on fabricated legal precedents from ChatGPT. These examples underline the importance of understanding the capabilities and limitations of AI tools.

The Ethical and Practical Application of AI

The heart of my argument lies in engaging with AI responsibly. It’s not just about using AI tools but also understanding and teaching about them. Whether it’s informatics, chemistry, or any other discipline, integrating AI into the curriculum demands a balance between utilisation and ethical considerations. I advocate for a metacognitive approach, where we reflect on how we’re learning and interacting with AI. It’s crucial to encourage students to critically evaluate AI-generated content.

Examples of AI Integration in Education

I routinely use AI in various aspects of my work. For instance, AI-generated thumbnails for YouTube videos, AI transcription in Teams, upscaling transcripts using large language models, and even translations and video manipulation techniques that were beyond my skill set a year ago. These tools are not just about easing workflows but also about enhancing the educational experience.

One significant example I use is AI for creating flashcards. Using tools like Quizlet, combined with AI, I can quickly generate educational resources, which not only saves time but also introduces an interactive and engaging way for students to learn.

The Future of AI in Education

I believe that UK universities, and educational institutions worldwide, face a critical choice: either embrace AI as an integral component of academic pursuit or risk becoming obsolete. AI tools could become as ubiquitous as textbooks, and we need to prepare for this reality. It’s not about whether AI will lead us to a utopia or dystopia; it’s about engaging with the reality of AI as it exists today and its potential future impact on our students.

My stance on AI in education is one of cautious optimism. The potential benefits are immense, but so are the risks. We must tread carefully, ensuring that we use AI to enhance education without compromising on ethical standards or the quality of learning. Our responsibility lies in guiding students to use these tools ethically and responsibly, preparing them for a future where AI is an integral part of everyday life.

The key is to balance the use of AI with critical thinking and an understanding of its limitations. As educators, we are not just imparting knowledge but also shaping how the next generation interacts with and perceives technology. Therefore, it’s not just about teaching with AI but also teaching about AI, its potential, and its pitfalls.

13 ways you could integrate AI tools into teaching

For a session I am facilitating with our Natural, Mathematical and Engineering Sciences faculty, I have pulled together below a few ideas drawn from a ton of brilliant suggestions colleagues from across the sector have shared in person, at events or via social media. There’s a bit of overlap, but I am trying to address the often-heard criticism that what’s missing from the guidance and theory and tools out there is some easily digestible, accessible and practically focussed suggestions that centre on teaching rather than assessment and feedback. Here’s my first tuppenceworth:

1. AI ideator: Students write prompts to produce a given number of outputs (visual, text or code) in response to a design or problem brief. Groups select the top 2-3 and critique in detail the viability of the solutions. (AI as inspiration)

2. AI Case Studies: Students analyse real-world examples where AI has influenced various practices (e.g., medical diagnosis, finance, robotics) to develop contextual intelligence and critical evaluation skills. (AI as disciplinary content focus)

3. AI Case Study Creator: Students are given AI generated vignettes, micro case studies or scenarios related to a given topic and discuss responses/ solutions. (AI as content creator)

4. AI Chatbot Research: For foundational theoretical principles or contextual understanding, students interact with AI chatbots, document the conversation, and evaluate the experience, enhancing their research, problem-solving, and understanding of user experience. (AI as tool to further understanding of content)

5. AI Restructuring: Students are tasked with using AI tools to reformat content into different media according to pre-defined principles. (AI for multi-media reframing)

6. AI Promptathon: Students formulate prompts for AI to address significant questions in their discipline, critically evaluate the AI-generated responses, and reflect on the process, thereby improving their AI literacy and collaborative skills. (Critical AI literacy and disciplinary formative activity)

7. AI audit: Students use AI to generate short responses to open questions, critically assess the AI’s output, and then give a group presentation on their findings. Focus could be on accuracy and/or clarity of outputs. (Critical AI literacy)

8. AI Solution Finder: Applicable post work placement or with case studies/ scenarios, students identify real-world challenges and propose AI-based solutions, honing their creativity, research skills, and professional confidence. (AI in context)

9. AI Think, Pair & Share: Students individually generate AI responses to a key challenge, then pair up to discuss and refine their prompts, improving their critical thinking, evaluation skills, and AI literacy. (AI as dialogic tool)

10. Analyse Data: Students work with open-source data sets to answer pressing questions in their discipline, thereby developing cultural intelligence, data literacy, and ethical understanding. (AI as analytical tool)

11. AI Quizmaster: Students design quiz questions and use AI to generate initial ideas, which they then revise and peer-review, fostering foundational knowledge, research skills, and metacognition. (AI as concept checking tool)

12. Chemistry / Physics or Maths Principle Exploration with AI Chatbot: Students engage with an AI chatbot to learn and understand a specific principle. The chatbot can explain concepts, answer queries, and provide examples. Students (with support of GTA/ near peer or academic tutor) compare the AI’s approach to their own process/ understanding. (AI chatbot tutor)

13. Coding Challenge- AI vs. Manual Code Comparison: Coding students create a short piece of code for a specific purpose and then compare their code to a pre-existing manually produced code for the same purpose. This comparison can include an analysis of efficiency, creativity, and effectiveness. (AI as point of comparison)

Custom GPTs

There are two main audiences for custom GPTs built within the ChatGPT Pro infrastructure. The first is anyone with a Pro account. There are other tools that allow me to build custom GPTs, with minimal skills, that are open to wider audiences, so I think it’ll be interesting to see whether OpenAI continues to leverage this feature to encourage new subscription purchases or whether it will open up to further stifle competitor development. In education the ‘custom bots for others’ potential is huge but, for now, I am realising how potentially valuable they might be for the audience I did not initially consider: me.

One that is already proving useful is ‘My thesis helper’, which I constructed to pull information only from my thesis (given that even the really obvious papers never materialised, I am wondering whether this might catalyse that!). It’s an opportunity to use as source material much larger documents than copy/paste token limits allow, or even than the relatively generous (and free) 100k tokens and document upload Claude AI permits. In particular, it facilitates much swifter searching within the document as well as opportunities for synthesising and summarising specific sections. Another is ‘Innovating in the Academy’ (try it yourself if you have a Pro account), which uses two great sources of case studies from across King’s, collated and edited by my immediate colleagues in King’s Academy. The bot enables a more refined search as well as an opportunity to synthesise thinking.
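
For anyone curious how this might be approximated outside the ChatGPT interface, below is a minimal sketch of the underlying pattern: split a long document into chunks, retrieve the most relevant one, and ask the model to answer only from it. This is an assumption-laden toy (naive keyword retrieval rather than embeddings; placeholder file and model names; the openai Python package and an API key), not how custom GPTs actually work internally, but it illustrates why such bots make searching and summarising a large document so swift.

```python
# Toy retrieve-then-ask pattern over a long document (e.g. a thesis).
# Illustrative only: real systems use embeddings rather than keyword overlap.
from openai import OpenAI

def chunk(text: str, size: int = 3000) -> list[str]:
    """Split a long document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def best_chunk(chunks: list[str], query: str) -> str:
    """Naive retrieval: pick the chunk sharing the most query words."""
    terms = set(query.lower().split())
    return max(chunks, key=lambda c: sum(t in c.lower() for t in terms))

client = OpenAI()
thesis = open("thesis.txt", encoding="utf-8").read()  # placeholder file
question = "Summarise the methodology section."

context = best_chunk(chunk(thesis), question)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer only from the supplied extract."},
        {"role": "user", "content": f"Extract:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```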

Designed to be more outward-facing is ‘Captain’s Counsel’. This I made to align with a ‘Star Trek’ extended (and undoubtedly excruciating) metaphor I’ll be using in a presentation for the forthcoming GenAI in Education conference in Ulster. Here I have uploaded some reference material but also opened it to the web. I have tried to tap into my own Star Trek enthusiasm whilst focussing on broader questions about teaching. The web-openness means it will happily respond to questions about many things under the broad scope I have identified, though I have also specified some taboos. Most useful and interesting is the way it follows my instruction to address the issue with reference to Captain Kirk’s own experiences.

Both the creation and use of customised bots enable different ways of perceiving and accessing existing information, and it is broadly in these functions that LLMs and image generators, within customised bots and beyond, are likely to establish a utility niche I think, especially for folk yet to dip their toes in or whose perceptions are dominated by the idea of LLMs as free essay mills.

Assessment 2033

Recast version (automated podcast, 2-minute listen)

What will assessment look like in universities in 2033? There’s a lot of talk about how AI may finally catalyse long-needed changes to a lot of the practices we cling to but there’s also a quite significant clamour to do everything in exam halls. Amidst the disparate voices of change are also those that suggest we ride this storm out and carry on pretty much as we are: it’s a time-served and proven model, is it not?

Anyway, by way of provocation, see below four visions of assessment in 2033. What do you think? Is one more likely? Maybe bits of two or more or none of the below? What other possibilities have I missed?

  1. Assessment 2033: Panopticopia

Alex sat nervously in a sterile examination room, palms clammy, heart pounding, her personal evaluation number stamped on each hand and her evaluation tablet. The huge digits on the all-wall clock counted down ominously. As she began the timed exam, micro-drones buzzed overhead, scanning for unauthorised augmentations and communications. Proctoring AI software tracked every keystroke and eye movement, erasing any semblance of privacy. The relentless pressure to recall facts and formulas within seconds elevated her already intense anxiety. Alex knew she was better than these exams would suggest but in the race against technology ideals like fairness, inclusive practice and assessment validity were almost forgotten.

  2. Assessment 2033: Nova Lingua

Karim sat, feet up, in the study pod on campus, ready to tackle his latest essay. Much of the source material was in his first language, so he felt confident the only translation tech he’d need would be for his more whimsical flourishes (usually in the intro and conclusion). He activated ‘AiMee’, his assistant bot, instructed her to open Microsoft Multi-Platform and set the essay parameters: ‘BeeLine text with synthetic-voiced audio and an AI avatar presented digest’. AiMee processed the essay brief as Karim scanned it in and started the conversation. Karim was pleased as his thoughts appeared as eloquent prose, simultaneously in his first language and the two official university languages. As he worked, Karim thought ruefully about how different an education his parents might have had, given that they both, like him, were dyslexic.

  3. Assessment 2033: Nova Aurora

Jordan was flushed with delight at the end of their first term on the flexible, multi-modal ‘stackable’ degree. It was amazing to think how different it was from their parents’ experience. There were no traditional exams or strict deadlines. Instead, they engaged in continuous, project- and problem-based learning. Professors acted as mentors, guiding them through iterative processes of discovery and growth. The emphasis was on individual development, not just the final product. Grades were replaced with detailed feedback, fostering an appreciation for learning for its own sake rather than competition or (what did their mum call it?) ‘grade grubbing’! Trust was a defining characteristic of academic and student interactions, with collaboration highly valued and ‘collusion’ an obsolete concept. HE in the UK had somehow shifted from a focus on evaluation and grades to nurturing individual potential, mirrored by dynamic, flexible structures and opportunities to study in many modes, at many institutions, and in ways that aligned with the complexities of life.

  4. Assessment 2033: Plus ça change

Ash sighed as she hunched over her laptop, typing furiously to meet another looming deadline. In 2033, it seemed that little had changed in higher education. Universities clung stubbornly to old assessment methods, reluctant to adapt. Plagiarism and AI-detection tools remained easy to circumvent, masking the harsh reality that students and, with similar frequency, academic staff relied on technologies that many policy documents effectively banned. The obsession with “students’ own words” pervaded every conversation, drowning out the lobby advocating a deeper understanding of students’ comprehension and wider acceptance of the realities of new ways of producing work. Ash knew that she wasn’t alone in her frustrations. The system seemed intent on perpetuating the status quo, turning a blind eye to the disconnect between the façade of academic integrity and the hidden truth of how most students and faculty navigated the system.



Babies and Bathwater: How Far Will AI Necessitate an Assessment Revolution?

By Martin Compton & Chris Rowell

Recast version (auto podcast)

Caveat: This two-for-one post was generated using multiple AI technologies. It is drawn from the transcript of an event held this afternoon (6th October 2023), which was the first in a series of conversations about AI hosted by Chris Rowell at UAL. We thought it would be an interesting experiment to produce a blog summary of the key ideas and themes, but then we realised that it was Friday afternoon and we both have lives too. So… we put AI tools to work: first, MS Teams AI provided an instant transcript; then Claude AI filtered the content and separated it into two main chunks (Martin answering questions, then open discussion). Third, we used this prompt in ChatGPT: “Using the points made by Martin Compton write a blog post of 500-750 words that captures the key points he raises in full prose, using the style and tone he uses here. Call the post ‘Babies and bathwater: how far will AI necessitate an assessment revolution?’”. Then we did something similar with the open discussion, and that led to part two of this post below. Finally, I used some keywords to generate some images in Bing Chat, which uses DALL-E 3, to decorate the text.

Part 1: The conversation

Attempt 1: AI generated image (using DALL-E 3 via Bing Chat) of a computer monitor showing an article called ‘Babies and Bathwater’, below which is an image of two babies in a sort of highchair/bath combo

The ongoing dialogue around AI’s influence on education often has us pondering over the depth and dimensions of the issue. Our peers frequently express their concerns about students using AI to craft essays and generate images for their assessments. Recently, I (Chris) stumbled upon the AI guidelines by King’s, urging institutions to enable students and staff to become AI literate. But the bigger question looms large: what does being AI literate truly entail?

Attempt 2: AI generated image (using DALL-E 3 via Bing Chat) of a computer monitor showing an article called ‘Babies and Bathwater?’, below which is an image of a robot

For me (Martin), this statement from the Russell Group principles on generative AI has been instrumental in persuading some sceptics in the academic realm of the necessity to engage. It’s clear that AI literacy isn’t just another buzzword. It’s a doorway to stimulating dialogue. It’s about addressing our anxieties and reservations, then channelling those emotions to drive conversations around teaching, assessment, and learning.

Truth be told, when we dive deep into the matter of AI literacy, we’re essentially discussing another facet of information literacy. It’s a skill we aim to foster in our students and one that, as educators, we should continually refine in ourselves. Yet, I often feel that the larger academic community might not be doing enough to hone these skills, especially in the digital age where misinformation spreads like wildfire.

With the rise of AI technologies like ChatGPT, I was both amazed and slightly concerned. The first time I tested it, the results left me in awe. However, on introspection, I realised that if an AI can flawlessly generate a university-level essay, then it’s high time we scrutinised our assessments. It’s not about the capabilities of AI; it’s about reassessing the nature and objectives of our examinations.

When my colleagues seek advice on navigating this AI-augmented educational landscape, my primary counsel is simple: don’t panic. Instead, let’s critically analyse our current assessment methodologies. Our focus should pivot from regurgitation of facts to evaluating understanding and application. And if a certain subject demands instant recall of information, as in medical studies, we should stick to time-constrained evaluations.

Attempt 3: AI generated image (using DALL-E 3 via Bing Chat) of a computer monitor showing an article called ‘Babies and Bathnwater’ [sic], below which is an image of some very disturbingly muscled babies

To make our existing assessments less susceptible to AI, it’s crucial to reflect on their core objectives. This takes me back to the fundamental essence of pedagogy, where we need to continuously question and redefine our approach. Are we merely conducting assessments as a formality, or are they genuinely driving learning? It’s imperative to emphasise the process as much as the final output.

Now, if you ask me whether we should incorporate AI into our summative assessments, my perspective remains fluid. While today it might seem like a radical notion, in the future, it could be as commonplace as using the internet for research. But while we’re in this transitional phase, understanding and integrating AI should be done judiciously.

Lastly, when it comes to AI-generated feedback for students, I believe there’s potential, albeit with certain limitations. There’s undeniable value in students receiving feedback from various sources. Yet, we must tread cautiously to ensure academic integrity.

In essence, as educators and advocates of lifelong learning, we must embrace the challenges AI brings to our table, approach them with a critical lens, and adapt our strategies to nurture an equitable, AI-literate generation.

Part 2: Thoughts from the (bathroom) floor: Assessing Process Over Product in the Age of AI

The following is a synthesis of comments made during the discussion that ensued after the initial Q&A conversation.

Valuing Creation Process over End Product

There’s been a long-standing tradition in education of assessing the final product. Be it a project, an essay, or a painting, the emphasis has always been on the end result. But isn’t the journey as significant, if not more so? The time has come for assessments to shift their focus from the finished piece to the process behind its creation. Such an approach would not only value the hard work and thought process of a student but also celebrate their research journey.

Moving Beyond Memorisation

Currently, knowledge-reproduction assessments rule the roost. Students cram facts, only to regurgitate them during exams. However, the real essence of learning lies in fostering higher-order thinking skills. It’s crucial to design assessments that challenge students to analyse, evaluate, and create. This way, we’re nurturing thinkers and not just fact-repeating robots.

Embracing AI in the Classroom

The introduction of AI image generators in classroom projects was met with varied reactions. Some students weren’t quite thrilled with what the AI generated for them. However, this sparked a pivotal dialogue about the value of showcasing one’s process rather than merely submitting an end product.

It became evident that possessing a good amount of subject knowledge positions students better to use AI tools effectively, minimising misuse. This draws a clear parallel between disciplinary knowledge and sophisticated AI usage. Today, employers prize graduates who can adeptly wield AI. Declining to use AI is no longer a strength but a weakness.

The Ever-Evolving AI Landscape

As AI tools constantly evolve and become more sophisticated, we can expect students to step into universities already acquainted with these tools. However, familiarity alone isn’t enough. Education must pivot towards fostering honest AI usage and teaching students to discern between appropriate and inappropriate uses.

Critical AI Literacy: The Need of the Hour

AI tools, no matter how advanced, are just tools. They might churn out outputs that match a user’s intent, but it’s up to the individual to critically evaluate the AI’s output. Does it align with what you wanted to express? Does it represent your research accurately? Developing robust AI literacy is paramount to navigating this digital landscape.

Attempt 4: AI generated image (using DALL-E 3 via Bing Chat) of a computer monitor showing an article called ‘Babies and Bathwater?’, below which is a photorealistic image of a baby

The Intrinsic Value of Creation

We must remember that the act of writing or creating is in itself a learning experience. Merely receiving an AI’s output doesn’t equate to learning. There’s an intrinsic value in the process of creation, an enrichment that often transcends the final product.

To sum it up, as the lines between human ingenuity and AI blur, our educational paradigm must pivot, placing process over product, fostering critical thinking, and embracing the AI wave, all while ensuring we retain our unique human touch in creation. The future beckons, and it’s up to us to shape it judiciously.