
I have been thinking a lot recently about my own and others’ positions in relation to AI in education. I’m reading a lot more from the ‘ResistAI’ lobby and share many perspectives with its core arguments. I likewise read a lot from the tech communities and enthusiastic educator groups, which often get conflated but are important to distinguish given the bloomin’ obvious as well as more subtle differences in agenda and motivation (see world domination and profit arguments, for example). I see willing adoption, pragmatic adoption, reluctant adoption and a whole bunch of ill-informed adoption/rejection too. My reality is that staff and students are using AI (of different types) in different ways. Some of this is ground-breaking and exciting, some snag-filled and disappointing, some ill-advised and potentially risky. Existing IT infrastructure and processes are struggling to keep pace, and daily conversations range from ‘I have to show you this – it’s going to change my life!’ to ‘I feel like I’m being left behind here’ and a lot more besides.
So it was that this morning I saw a post on LinkedIn (who’d have thought the place where we put our CVs would grow so much as an academic social network?) from Leon Furze, who defines his position as ‘sitting on the fence’. Initially I thought ‘yeah, that’s me’ but, in fact, I am not sitting on the fence at all in this space. I am trying as best I can to navigate a path that can be defined by the broad two-word strategy we are trying to define and support at my place: Engage Responsibly. Constructive resistance and debate are central, but so is engagement with fundamental ideas, technologies, principles and applications. I have for ages been arguing for more nuanced understanding. I very much appreciate evidence-based and experience-based arguments (counter and pro). The waters are muddied, though, by big tech declarations of educational transformation and revolution on the one hand (we’re always on the cusp, right?) and sceptical generalisations on the other, like the one I saw gaining social media traction the other day which went something like:
“Reading is thinking
Writing is thinking
AI is anti-thinking”
If you think that, then you are not thinking, in my view. Each of those statements must be contextualised and nuanced. This is exactly the kind of meme-level sound bite that sounds good initially but is not what we should be entertaining as a position in academia. Or is it? Below are some adjectives and definitions of the sorts of positions identified by Leon Furze in the collection linked above, and by me and research partners in crime Shoshi, Olivia and Navyasara. Which one/s would you pick to define your position? (I am aware that many of these terms are loaded; I’m just interested, in the broadest sense, in where people see themselves, whether they have planted a flag or whether they are still looking for a spot as they wander around in the traffic, wide-eyed.)
- Cautious: Educators who are cautious might see both the potential benefits and risks of AI. They might be hesitant to fully embrace AI without a thorough understanding of its implications.
- Critical: Educators who are critical might take a stance that focusses on one or more of the ethical concerns surrounding AI and its potential negative impacts, such as the risk of AI being used for surveillance or control, or ways in which data is sourced or used.
- Open-minded: Open-minded educators might be willing to explore AI’s possibilities and experiment with its use in education, while remaining aware of potential drawbacks.
- Engaged: Engaged educators actively seek to understand AI, its capabilities and its implications for education. They seek to shape the way AI is used in their field.
- Resistant: Resistant educators might actively oppose the integration of AI into education due to concerns about its impact on teaching, learning or ethical considerations.
- Pragmatic: Pragmatic educators might focus on the practical applications of AI in education, such as using it for administrative tasks or to support personalised learning. They might be less concerned with theoretical debates and more interested in how AI can be used to improve their practice.
- Concerned: Educators who are concerned might primarily focus on the potential negative impacts of AI on students and educators. They might worry about issues like data privacy, algorithmic bias, or the deskilling of teachers.
- Hopeful: Hopeful educators might see AI as a tool that can enhance education and create new opportunities for students and teachers. They might be excited about AI’s potential to personalise learning, provide feedback and support students with diverse needs.
- Sceptical: Sceptical educators might question the claims made about AI’s benefits in education and demand evidence to support its effectiveness. They might be wary of the hype surrounding AI and prefer to wait for more research before adopting it.
- Informed: Informed educators would stay up-to-date with the latest developments in AI and its applications in education. They would understand both the potential benefits and risks of AI and be able to make informed decisions about its use.
- Fence-sitting: Educators who are fence-sitting recognise the complexity of the issue and see valid arguments on both sides. They may be delaying making a decision until more information is available or a clearer consensus emerges. This aligns with Furze’s own position of being on the fence, acknowledging both the benefits and risks of AI.
- Ambivalent: Educators experiencing ambivalence might simultaneously hold positive and negative views about AI. They may, for example, appreciate its potential for personalising learning but be uneasy about its ethical implications. This reflects cognitive dissonance, where conflicting ideas create mental discomfort. Furze’s exploration of both the positive potential of AI and the reasons for resisting it illustrates this tension.
- Time-poor: Educators who are time-poor may not have the capacity to fully (or even partially) research and understand the implications of AI, leading to delayed decisions or reliance on simplified viewpoints.
- Inexperienced: Inexperienced educators may lack the background knowledge to confidently assess the potential benefits and risks of AI in education, contributing to hesitation or reliance on the opinions of others.
- Other: whatever the heck you like!
How many did you choose?
Please select two or three and share them via this Mentimeter Link.
I’ll share the responses soon!