Learning theory

This resource has been created as pre-reading for a session I have been invited to lead with students on the MSc Dietetics at UCL. It is my attempt at answering the question: ‘Is there any point in dieticians knowing about learning theory?’ (professionally, I mean, given that it is, of course, inherently interesting!). However, I think it is potentially of interest to anyone who teaches!

(Listen 9 mins 12 secs or read below)

Download PDF version

Rationale

People with research or professional interests in educational psychology or teaching understandably have an interest in learning theory. Whether theory provides a template for designing approaches to teaching, learning and assessment, or offers an analytical lens to better understand what is happening at an individual or collective level, it makes sense that we challenge our assumptions, experiences and reflections through such theoretical lenses. But what of those whose relationship with teaching, or with everyday human tendencies and behaviours, is only a tangential part of their role? In any role where one person gives information to others, helps them to understand things or is responsible for changing (or helping to change) behaviours, understanding a little of how people learn will be beneficial. From my (lay) perspective, I imagine that common challenges in dietetics will include interpreting and conveying complex scientific information about nutrition and health, and helping people to understand impacts and causation in relation to excess or absence in diet. Consideration of these challenges, and of the issues that arise, can be informed through theoretical lenses.

The learning theory landscape

A wordcloud of all the key words for this topic: theory, dietetics, constructivism, social, nutrition, behaviourism, health, diet, psychology are the largest

One of the problems here is that a quick search for ‘theories of learning’ will present a dazzling, complex, sometimes contradictory array of theories and ideas. It immediately raises several questions:

  1. Where do you start?
  2. How deep need you go?
  3. How can exposure to learning theory be applied in a meaningful way in context?

The answer to the first question may be ‘right here!’ if you have not studied learning theory before. The second question probably has the same answer as ‘How long is a piece of string?’ and will inevitably be determined by academic, research and professional roles and interests. The third question is one that we will try to get to grips with here and in the forthcoming session.

Like any other academic field, learning theory has its groupings and areas. The landscape isn’t always represented the same way: you will sometimes see theorists in one category, then another, which can be confusing. This may be due to classification differences, or because the theorist has developed their position over time. To complicate things further, the term ‘theory’ is used to cover a variety of models, approaches and techniques, and is often defined by different people in different ways. That said, complication need not be a problem: rather than seeking firmly defined boundaries, think more in terms of spectra and Venn diagrams, where things overlap and interconnect. A main use of theory is to shed light on our experience and help us reflect on – and even change – our practice.

The following theories range from broad to narrow, and some can be seen as subsets of, or informed by, wider or earlier theories. Whether broad or narrow, generalised or specific, they have been selected because I think they may be of use to those in the field of dietetics. However, you have more expertise than I do here, so it is important your critical eye is focussed and alert. Remember, it is unlikely that you will read a theory and decide ‘ah ha! That’s me from now on’. Rather, you may read, think, reflect, apply and draw on a range of complementary (or contradictory) ideas and approaches as you develop techniques in your future roles, as well as using theoretical lenses to better understand what has worked and what has not.

Broad theoretical ‘schools’


Behaviourist theories of learning see the learner as passive; they can be trained using reward and punishment to ‘condition’ them to behave in particular ways (famous theorists in this domain include Pavlov and Skinner, whose reach, unlike that of most other theorists, extends into popular understanding). Learning is seen as a change in behaviour. In health education the role of the expert might be to provide incentives or find ways to disincentivise certain behaviours. Consider the cost of tobacco products and the gruesome images on the packaging. What is the thinking behind this? Can the cost and images be credited with the continuing fall in the number of smokers?

Cognitivist theories of learning see the learner as actively processing information rather than just being ‘conditioned’ by various stimuli. Cognitivists are concerned with how learners process and remember information, and often test recall as a measure of learning. In health education the expert’s role is to convey information in ways that optimise recall and completeness. Consider the ‘5 portions of fruit/veg a day’ campaign: whilst there were certainly ‘rewards’ built into the design of the programmes (i.e. the health benefits of eating 5 a day), there was also an emphasis on providing and reinforcing information about nutrition and vitamins through attractive materials, booklets, leaflets, connections to school curricula and so on.

Constructivist theories of learning see the learner as an active participant in their own learning. The process of learning is not merely putting knowledge into an empty container. The ‘teacher’ presents knowledge, scenarios, resources, options and problems (or students encounter them in other ways) and, in learning, students ‘construct’ the knowledge for themselves, linking it to what they already know. A variant of this is ‘social constructivism’, which holds that students’ construction of their knowledge is done with others. How might a dietician apply a constructivist approach when working with a client with type 2 diabetes who, by their own admission and despite worsening symptoms, persists in keeping to a diet rich in sugar, starch and salt?


A display cabinet with items and the amount of sugar in them, such as a Coke can (27g), Capri Sun carton (24g) and Mars bar (54g)

Stop and think

Which broad theoretical approach can you see here?

At my local dentist’s surgery there is a display case with different sugary snacks, foods and drinks set out very neatly adjacent to piles of sugar equivalent to the actual amount in those foodstuffs. Each has a typed label (like in a museum) with the amount of sugar in grams. There are also a couple of low-sugar items. There are no explicit warnings of the dangers of sugar to teeth.


Specific theories: How relevant / useful are these?

Situated Learning theory holds that learning is always embedded within a context and culture, so it is best to teach particular material within a relevant context – e.g. teaching clinical skills in a clinical setting. Within that context, students learn by becoming involved in a ‘community of practice’ – a group of practitioners – and through ‘legitimate peripheral participation’ move from the periphery of this community towards its centre (i.e. the more expert and involved in practice they become, the closer they move toward the centre). (key names: Vygotsky, Lave)

Social Learning Theory views observation as key to learning; it holds that we learn through observing others: not just what they do but also the consequences of what they do. People learn from watching older or more expert people. An educator has a role in getting their attention, helping them remember and motivating them to demonstrate their learning. Behaviour is also affected by what they see being rewarded or punished. (key name: Bandura)

Mindset (motivational) Theory argues that if people believe that their ability to achieve something is fixed, they have little chance of changing it: they have a ‘fixed mindset’. To develop (i.e. learn), a ‘growth mindset’ is needed, and this is related to intrinsic self-belief. The educator’s role is to show belief, to exemplify positive behaviours (e.g. valuing hard work and effort, not only results) and to show how to embrace ‘failure’. (key name: Dweck)

Critical Pedagogy is more of a movement than a theory: it holds that teaching cannot be separated from wider social and political concerns, and that educators should empower their ‘students’ to be active, critical citizens. Whatever the subject, critical pedagogy is concerned with asking students to question hierarchies and power relations and to achieve a higher political consciousness. Paulo Freire, author of Pedagogy of the Oppressed (one of the first books of critical pedagogy), coined the ‘banking model’ in his critique of how some teaching aims to ‘fill’ students up with knowledge as though they are blank slates, merely receiving and storing knowledge. In addition, bell hooks’ work on intersectionality (the complex layers of discrimination and privilege according to factors such as race, class, gender, sexuality and disability) might also lend a powerful lens through which to understand and challenge the nature and role of diet in groups as well as in individuals.

Session slides

https://www.mentimeter.com/app/presentation/b198f152cb620471d75aaadbcc42c251/embed

Further reading

You may like to see things represented on a timeline with short, pithy summaries of key ideas. If so, try this site: https://www.mybrainisopen.net/learning-theories-timeline/

Donald Clark has written a huge amount about learning theory on his blog and this can be seen here if you prefer a dip in and search approach: https://donaldclarkplanb.blogspot.com/

A really accessible intro (as well as a much wider resource) is the encyclopaedia of informal education: https://infed.org/learning-theory-models-product-and-process/

This resource was produced by Martin Compton. The Theoretical schools material was adapted from resources created by Emma Kennedy, Ros Beaumont & Martin Compton (UoG, 2018)

Big tech headlights

Listen (7 mins) or read (5 mins)

Whether it’s non-existent problems, unscalable solutions or a lack of imagination, we need to be careful about what educational technology appears to promise.

Close-up of Oldsmobile headlights in monochrome

I have written before about how easy it is to get dazzled by shiny tech things and, most dangerously, to think that those shiny things will herald an educational sea change. More often than not they don’t. Or if they do, it’s nowhere near at the pace often predicted. It is remarkable to look back at the promises interactive whiteboards (IWBs) held, for example. I think I still have a broken Promethean whiteboard pen in a drawer somewhere. I was sceptical from the off, given that one of the biggest selling points seemed to be something like: “You can get students up to move things around”. I like tech, but as someone teaching 25+ hours per week (how the heck did I do that?) I could immediately see a lot of unnecessary faff. My experience in schools and colleges suggests they are, at best, glorified projectors, rarely fulfilling their promise. Research I have seen on impact tends to be muted at best, and studies in HE like this one (Benoit, 2022) suggest potential detrimental impacts. IWBs for me are emblematic of much of what I feel is often wrong with the way ed tech is purchased and used: big companies selling big ideas to people in educational institutions with purchasing power and problems to solve but, crucially, at least one step removed from the teaching coal face. Nevertheless, because of my role at the time (‘ILT programme coordinator’, thank you very much) I did my damnedest to get colleagues using IWBs interactively and at all (I was going to say ‘effectively’) other than as a screen, until I realised that it was a pointless endeavour. For most colleagues the IWB was a solution to a problem that didn’t exist.

A problem that is better articulated is the extent of student engagement, coupled with tendencies towards uni-directional teaching and passivity in large classes. One solution is ‘clickers’. These have in fact been kicking around since the 1960s and foreshadowed modern student/audience response systems like Mentimeter, still sometimes referred to as clickers (probably by older-generation types like me). Research was able to show improvements in engagement, enjoyment and academic performance, as well as useful intelligence for lecturing staff (see Kay and LeSage, 2009; Keough, 2012; Hedgcock and Rouwenhorst, 2014), but the big problem was scalability. Enthusiasts could secure the necessary hardware, trial use with small groups of students and report positively on impact. I remember the gorgeous aluminium cases our media team held, containing maybe 30 devices each. I also recall the form filling, the traipse to the other campus, the device registering and the laborious question authoring processes. My enthusiasm quickly waned and the shiny cases gathered dust on media room shelves. I expect there are plenty still doing so, and many more with gadgets and gizmos that looked so cool and full of potential but quickly became redundant. BYOD (bring your own device) and cloud-based alternatives changed all that, of course. The key is not whether enthusiasts can get the right kit but whether very busy teachers can, and whether the results-versus-effort balance sheet firmly favours the results. There are of course issues (socio-economic, data, confidentiality and security, to name a few!) with cloud-based BYOD solutions, but at least the tech is not of the overnight-obsolescence variety. This is why I am very nervous about big-ticket kit purchases such as VR headsets or smart glasses, and very sceptical about the claims made about the extent to which education in the near future will be virtual. Second Life’s second life might be a multi-million pound white elephant.

Finally, one of the big buzzes in the kinds of bubbles I live in on Twitter is about the ‘threat’ of AI. On the one hand you have the ‘kid in the sweetshop’ excitement of developers marvelling at AI text authoring and video making, and on the other doom-mongering teachers frothing about what these (currently massively inflated) affordances offer our cheating, conniving, untrustworthy youth. The argument goes that problems of plagiarism, collusion and supervillain levels of academic dishonesty will be massively exacerbated. The ed tech solution: more surveillance! More checking! Plagiarism detection! Remote proctoring! I just think we need to say ‘whoa!’ before committing ourselves to anything and see whether we might imagine things a little differently. Firstly, do existing systems for, say, plagiarism detection (putting aside major ethical concerns) actually do what we imagine them to do? They can pick up poor academic practice, but can they detect ‘intelligent’ reworking? The counter-argument runs: how else will we know what someone has written themselves? But where is our global perspective on this? Where is our 21st-century eye? Where is acknowledgement of existing tools used routinely by many? There are many ways to ‘stand on the shoulders of giants’ and different educational traditions value different ways to represent this. Remixes, mashups and sampling are a fundamental part of popular culture and the 20s zeitgeist. Could we not better embrace that reality and way of being? Spellcheckers and grammar checkers do a lot of the work that would have meant lower marks in the past, but we use them now unthinkingly. Is it such a leap to imagine positive and open employment of new tools such as AI?

Solutions to collusion in online exams offer more options, it seems: 1. scrap online exams and get them all back in huge halls, or 2. [insert Mr Burns’ gif] employ remote proctoring. The issues centre on students’ abilities to 1. look things up to make sure they have the correct answer and 2. work together to ensure they have a correct answer. I find it really hard not to see that as a good thing and an essential skill. I want people to have the right answer. If it is essential to find out what any individual student knows, our starting point needs to be re-thinking the way we assess, NOT looking for ed tech solutions so that we can carry on regardless. While we’re thinking about that, we may also want to re-appraise the role new tech does and will likely play in the ways that we access and share information, and do what we can to weave it in positively rather than go all King Canute.

Benoit, A. (2022). ‘Investigating the Impact of Interactive Whiteboards in Higher Education: A Case Study.’ Journal of Learning Spaces.

Hedgcock, W. and Rouwenhorst, R. (2014). ‘Clicking their way to success: using student response systems as a tool for feedback.’ Journal for Advancement of Marketing Education.

Kay, R. and LeSage, A. (2009). ‘Examining the benefits and challenges of using audience response systems: A review of the literature.’ Computers & Education.

Keough, S. (2012). ‘Clickers in the Classroom: A Review and a Replication.’ Journal of Management Education.

Will Covid-19 finally catalyse the way we exploit digital options in assessment and feedback?

Listen 7 mins 32 secs or read below

(Previously posted on the Bloomsbury Learning Exchange blog, 29/3/21) 

The typical child will learn to listen first, then talk, then read, then write. In life, most of us tend to use these abilities in roughly the same order of proportion: we listen most, speak next most, read next most frequently and write the least. Yet in educational assessment and feedback, and especially in higher education (HE), we value writing above all else. After writing comes reading, then speaking, and the least assessed is listening. In other words, we value most what we use least. I realise this is a huge generalisation and that there are nuances and arguments to be had around this, but it is the broad principle and tendencies here that I am interested in. Given the ways in which technology makes such things as recording and sharing audio and video much easier than even a few years ago (i.e. tools that provide the opportunity to favour speaking and listening), it is perhaps surprising how conservative we are in HE when it comes to changing assessment and feedback practices. We are, though, at the threshold of an opportunity: our increased dependency on technology, the necessarily changing relationships we are all experiencing due to the ongoing implications of Covid-19, and the inclusion, access and pedagogic affordances of the digital mean we may finally be at a stage where change is inevitable and inexorable.

In 2009, while working in Bradford, I did some research on using audio and video feedback on a postgraduate teaching programme. I was amazed at the impact, the increased depth of understanding of the content of the feedback and the positivity with which it was received. I coupled it with delayed grade release too. The process was: listen to (or watch) the feedback, e-mail me with the grade band the feedback suggested, and then I would return the actual grade and use the similarity or difference (usually, in fact, there was pretty close alignment) to prompt discussion about the work and what could be fed forward. A few really did not like the process, but this was more to do with the additional steps involved in finding out the grades they had been given than with the feedback medium itself. Only one student (out of 39) preferred written feedback as a default, and the cohort included three deaf students (I arranged for them to receive BSL-signed feedback recorded synchronously with an interpreter while I spoke the words). Most of the students not only favoured it, they actively sought it. While most colleagues were happy to experiment or at least consider the pros, cons and effort needed, at least one senior colleague was a little frosty, hinting that I was making their life more difficult. On balance, I found that once I had worked through the mechanics of the process and established a pattern, I was actually saving myself perhaps 50% of marking time per script, though there was certainly some front-loading of effort necessary the first time. I concluded that video feedback was powerful but, at that time, too labour- and resource-intensive, and I stuck with audio feedback for most students unless video was requested or needed. I continued to use it in varying ways in my teaching, supporting others in their experimentation and, above all, persuading the ‘powers that be’ that it was not only legitimate but powerful and, for many, preferable. I also began encouraging students to consider audio or video alternatives to reflective pieces as I worked up a digital alternative to the scale-tipping professional portfolios that were the usual end-of-year marking delight.

Microphone in close up as seen from the perspective of the user

Two years later I found myself in a new job back in London, confronted with a very resistant culture. As is not uncommon, it was an embedded faith in, and dependency on, the written word that determined policy and practice, rather than research and pedagogy. In performative cultures, written ‘evidence’ carries so much more weight and trust, apparently irrespective of impact. Research (much better and more credible than my own) has continued to show similar outcomes and benefits (see the summary in Winstone and Carless, 2019), but the overwhelming majority of feedback is still of the written/typed variety. Given the wealth of tools available and the voluminous advocacy generated through the scholarship of teaching and learning, and of the potential of technology in particular (see Newman and Beetham, 2018, for example), it is often frustrating for me that assessment and feedback practices that embrace the opportunities afforded by digital media seem few and far between. So, will there ever be a genuine shift towards employing digital tools for assessment design and feedback? As technology makes these approaches easier and easier, what is preventing it? In many ways the Covid-19 crisis, the immediate ‘emergency response’ of remote teaching and assessing, and the way things are shaping up for the future have given a real impetus to notions of innovative assessment. Many of us were forced to confront our practice in terms of timed examinations and, amid inevitable discussions around the proctoring possibilities technology offered (to be clear: I am not a fan!), we saw discussions about effective assessment and feedback processes occurring and a re-invigorated interest in how we might do things differently. I am hoping we might continue those discussions to include all aspects of assessment, from the informal, in-session formative activities we do through to the ‘big’, high-stakes summatives.

Change will not happen easily or rapidly, however. Hargreaves (2010) argues that a principal enemy of educational change is social and political conservatism, and I would add to that a form of departmental, faculty or institutional conservatism that errs on the side of caution lest evaluation outcomes be negatively impacted. Covid-19 has disrupted everything and, whilst tensions remain between the conservative (very much of the small ‘c’ variety in this context) and change-oriented voices, it is clear that recognition is growing of a need to modify (rather than transpose) pedagogic practices in new environments, and this applies equally to assessment and feedback. In the minds of many lecturers, the technology that is focal to approaches to technology-enhanced learning is often ill-defined or uninspiring (Bayne, 2015), and the frequent de-coupling of tech investment from pedagogically informed continuing professional development (CPD) opportunities (Compton and Almpanis, 2018) has often reinforced these tendencies towards pedagogic conservatism. Pragmatism, insight, digital preparedness, skills development and new ways of working through necessity are combining to reveal both a need for and a willingness to embrace significant change in assessment practices.

As former programme leader of an online PGCertHE (a lecturer training programme), I was always in the very fortunate position of being able to collect and share theories, principles and practices with colleagues, many of whom were novices in teaching. Though of course they had experienced HE as students, they were less likely to have a fossilised sense of what assessments and feedback should or could look like. I also had the professional and experiential agency to draw on research-informed practices, not only by talking about them but through exemplification and modelling (Compton and Almpanis, 2019). By showing that unconventional assessment (and feedback) is allowed and can be very rewarding, we are able to sow seeds of enthusiasm that lead to a bottom-up (if still slow!) shift away from conservative assessment practices. Seeing some colleagues embrace these strategies is rewarding, but I would love to see more.

References 

Bayne, S. (2015). ‘What’s the matter with “technology-enhanced learning”?’ Learning, Media and Technology, 40(1), 5-20.

Bryan, C., & Clegg, K. (Eds.). (2019). Innovative assessment in higher education: A handbook for academic practitioners. Routledge.

Compton, M. & Almpanis, T. (2019). ‘Transforming lecturer practice and mindset: re-engineered CPD and modelled use of cloud tools and social media by academic developers.’ In Rowell, C. (ed.) Social Media and Higher Education: Case Studies, Reflections and Analysis. Open Book Publishers.

Compton, M., & Almpanis, T. (2018). One size doesn’t fit all: rethinking approaches to continuing professional development in technology enhanced learning. Compass: Journal of Learning and Teaching,11(1).

Hargreaves, A. (2010). ‘Presentism, individualism, and conservatism: The legacy of Dan Lortie’s Schoolteacher: A sociological study’. Curriculum Inquiry, 40(1), 143-154.

Newman, T. and Beetham, H. (2018) Student Digital Experience Tracker 2018: The voices of 22,000 UK learners. Bristol: Jisc.

Winstone, N., & Carless, D. (2019). Designing effective feedback processes in higher education: A learning-focused approach. Routledge.

Can ‘ungrading’ change the way students engage with feedback and learning?

Dr Eva Mol and Dr Martin Compton – summary of a paper presented at the UCL Education Conference, 6th April 2022

‘Ungrading’ is a broad term for approaches that seek to minimise the centrality of grades in feedback and assessment. The goal is to enable students to focus on feedback purely as a developmental tool and to subvert the hegemony and potentially destructive power of grades. At one end of the scale, ungrading means completely stopping the process of adding grades to student work. A less radical change might be to shift from graded systems to far fewer gradations, such as pass/not yet passed (so-called ‘minimal grading’).

You don’t fatten the pig by weighing it

In addition to the summary offered in this post, we began with the definition above and encouraged colleagues to consider critiques of the existing grading-dominated zeitgeist in terms of reliability, validity and fairness. Grades become a proxy for learning in the minds of both students and lecturers and a huge distraction from the potential of feedback and genuine dialogue about the work, rather than about the percentage or grade letter appended to it.

Grades can dampen existing intrinsic motivation… enhance fear of failure, reduce interest, decrease enjoyment in class work, increase anxiety, hamper performance on follow-up tasks, stimulate avoidance of challenging tasks and heighten competitiveness (Schinske & Tanner, 2014)

We summarised the range of possibilities for those interested, from simply talking about the threats and potential detrimental effects of grades (as well as their perceived benefits) through to wholesale, systemic change:

  • Scepticism/discussion/dialogue
  • Piloting no grades on small or low-stakes work
  • ‘Concealing’ grades in feedback
  • Discussing (even negotiating) grades after engagement with feedback
  • Designing out grading
  • Students collaborating on criteria
  • Grading only final summatives
  • Minimal grading (e.g. pass/fail)
  • Removing grades for early modules or years
  • Students self-grading
  • All students graded ‘A’
  • Institution-level no-grades policies

One of my ungrading experiences (Eva Mol)

These are based on teaching I did at Brown University (Providence, Rhode Island), with a class of graduate and undergraduate students from archaeology and philosophy. I decided to give them all an A (the highest mark possible) before the class started.

What did I learn about students?

  • Initially it was a shock to get students out of the system of marks! For most, it is really their only mode of thinking about progress and learning; they wondered why they should take a course if there was no mark (which I think is very disconcerting).
  • However, this shifted quickly from shock to viewing the class as a few hours of relief from the system, followed by less anxiety, more experimentation, and students thinking freely and critically both about the system and about what they wanted to achieve in a course.
  • Much more engagement with the content of the course material and weekly readings
  • Discussions were more lively as there was less performance anxiety, and students were more personal as well.
  • They set their own personal goals for the class, and I as instructor helped them achieve them. These were a variety of things: speaking at a conference, writing a blog, writing an article. At the end, they realised they had got much more out of a course than they ever could with just a mark.

What did I learn as a teacher?

  • It is not less work! I still had to read what my students wrote, respond to emails and give feedback. But it is really different and much more enjoyable work: when not reading in the context of how writing scores against a grading scale, you can allow yourself to appreciate what students accomplished in their writing.
  • Writing feedback comments was much more rewarding because the purpose was not to justify the mark for the administration but to help students improve; and because there is no mark involved, students actually read the feedback.
  • It made me a more engaged instructor, more flexible, creative, and more relaxed.
  • Because I could be flexible, I was much better equipped to deal with building in equity and inclusion.
  • It also forced me to critically reflect on the relationship between grading and teaching, contextualize how we have normalized the artificial frame of numerical feedback, and look for alternatives aligned with my personal pedagogy.

I felt empowered to question all aspects of the folklore. Why am I assigning a research paper even though it’s always a disappointment? Why do I care whether students use MLA formatting correctly down to the last parenthesis and comma? (I don’t.) Why should I worry about first-year writing as a course meant to prepare students for the rest of college? Why can’t I have autonomy over what I think students should experience? (Warner 2020, 215).

Now is the time

The pandemic showed that we can change if necessary; perhaps now is the right time to reflect on the system. We have an opportunity to shift the way students feel about their own learning and move away from the more traditional language associated with grading.

The classroom remains the most radical space of possibility in the academy. (bell hooks 1994)

—————————————————

References and more about ungrading

bell hooks (1994). Teaching to Transgress: Education as the Practice of Freedom. New York: Routledge.

Blum, S. and A. Kohn (eds.), (2020). Ungrading: Why rating students undermines learning (and what to do instead). West Virginia University Press.

Blum, S. (2019). Why Don’t Anthropologists Care about Learning (or Education or School)? An Immodest Proposal for an Integrative Anthropology of Learning Whose Time Has Finally Come. American Anthropologist 121(3): 641–54.

Eyler, J. R. (2018). How Humans Learn: The Science and Stories behind Effective College Teaching. Morgantown: West Virginia University Press.

Inoue, A.B. (2019). Labor-Based Grading Contracts: Building Equity and Inclusion in the Compassionate Writing Classroom. Fort Collins, CO: WAC Clearinghouse and University Press of Colorado. https://wac.colostate.edu/books/perspectives/labor/.

Rust, C. (2007). Towards a scholarship of assessment. Assessment & Evaluation in Higher Education, 32(2), 229-237.

Sackstein, S. (2015). Hacking Assessment: 10 Ways to Go Gradeless in a Traditional Grades School. Cleveland, OH: Times 10 Publications

Schinske, J., and K. Tanner (2014). Teaching more by grading less (or differently). CBE – Life Sciences Education, 13(2), 159-166.

Stommel, J. (2017). Why I Don’t Grade. JesseStommel.com https://www.jessestommel.com/why-i-dont-grade/

Warner, J. (2020). Wile E. Coyote, the Hero of Ungrading. In S. Blum (ed.), Ungrading: Why Rating Students Undermines Learning (and What to Do Instead). West Virginia University Press, 204-218.

Wormeli, R. (2018). Fair Isn’t Always Equal: Assessment and Grading in the Differentiated Classroom. 2nd ed. Portland, ME: Stenhouse.

Team Based Learning revisited

I have been forced to confront a prejudice this week, and I’m very glad I have, because I have significantly changed my perspective on Team Based Learning (TBL) as a result. When I cook I rarely use a recipe: rough amounts and a ‘bit of this, bit of that’ get me results that wouldn’t win Bake Off but do the job. I’m a bit anti-authority, I suppose, and I might, on occasion, be seen as contrary, given a tendency to take devil’s advocate positions. As a teacher educator, and unlike many of my colleagues over the years, I tend to advocate a more flexible approach to planning, am most certainly not a stickler for detailed lesson plans and maintain a scepticism (that I think is healthy) about the affordances of learning outcomes and predictably aligned teaching. I think this is why I was put off TBL when I first read about it. Call something TBL and most people would imagine something loose, active, collaborative and dialogic. But TBL purists (and maybe this was another reason I was resistant) would holler: ‘Hang on! TBL is a clearly delineated thing! It has a clear structure and process and a language of its own.’ However, after attending a very meta-level session run by my colleague, Dr Pete Fitch, this week, I was embarrassed to realise how thoroughly I’d misunderstood its potential flexibility and adaptability, as well as the potential of aspects I might be sceptical of in other contexts.

Established as a pedagogic approach in the US in the 1970s, TBL is now used widely across medical education globally, as well as in many other disciplinary areas. In essence, it provides a seemingly rigid structure to a flipped approach that typically looks like this:

  • Individual pre-work – reading, videos etc.
  • Individual readiness assurance test (IRAT) – an in-class multiple-choice test
  • Team readiness assurance test (TRAT) – the same questions, discussed and agreed as a team; points are awarded according to how few errors are made in getting to the correct response (a rough sketch of this scoring logic follows below)
  • Discussion and clarification (and challenge) – opportunities to argue, contest and seek clarification from the tutor
  • Application – the opportunity to take core knowledge and apply it
  • Peer evaluation

This video offers a really clear summary of the stages:

Aside from the rigid structure, my original resistance was rooted in the knowledge-focussed tests and how this would mean sessions started with silent, individual work. However, having been through the process myself (always a good idea before mud slinging!), I realised that this stage could achieve a number of goals beyond the ostensible self-check on understanding. It provides a framing point for students to measure their understanding of materials read; it offers (completely anonymously, even to the tutor) an opportunity to gauge understanding within a group; it provides an ipsative opportunity to measure progress week by week; and it acts additionally as a motivator to actually engage with the pre-session work (increasingly so as the learning culture is established). It turns a typically high-stakes, high-anxiety activity (an individual test) into a much lower-stakes one and provides a platform from which initial arguments can start at the TRAT stage. A further advantage, therefore, could be that it helps students formatively with their understanding of, and approaches to, multiple-choice examinations on those programmes that use this summative assessment methodology. In this session I changed my mind on three questions during the TRAT, two of which I was quietly (perhaps even smugly) confident I’d got right. A key part of the process is the ‘scratch to reveal if correct’ cards, which Pete had re-imagined with some clever manipulation of Moodle questions. We discussed the importance of the visceral ‘scratching’ commitment in comparison to a digital alternative, and I do wonder if this is one of those things that will always work better analogue!

The cards are somewhat like those shown in this short video:
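
For anyone who wants to see the TRAT scoring logic concretely, here is a minimal sketch in Python. The 4/2/1/0 point values for successive scratches are my assumption of a common IF-AT-style scheme, not necessarily the values used in Pete’s session, and the function name is hypothetical.

```python
# TRAT "scratch card" scoring sketch: the fewer scratches a team needs
# before revealing the correct option, the more points it earns.
# The 4/2/1/0 values below are an assumed example, not a TBL standard.

def trat_score(correct_option: str, scratches: list[str]) -> int:
    """Return the points for one question, given the order of options scratched."""
    points_by_attempt = {1: 4, 2: 2, 3: 1}  # a fourth or later attempt scores 0
    for attempt, choice in enumerate(scratches, start=1):
        if choice == correct_option:
            return points_by_attempt.get(attempt, 0)
    return 0  # correct option never revealed

# Example: a team scratches B first, then the correct answer D, scoring 2
print(trat_score("D", ["B", "D"]))  # -> 2
```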

To move beyond knowledge development, it is clear the application stage is fundamental. Across all stages it was evident how much effort is needed at the design stage. Writing meaningful, level-appropriate multiple-choice questions is hard. Level-appropriate, authentic application activities are similarly challenging to design. But the payoffs can be great and, as Pete said in the session, the design lasts for more than a single iteration. I can see why TBL lends itself so well to medical education, but this session did make me wish I was still running my own programme so I could test this formula in a higher ed or digital education context.

An example of how it works in the School of Medicine at Nanyang Technological University can be seen here:

The final (should have been obvious) thing spelt out was that the structure and approach can be manipulated. Despite appearances, TBL does enable a flexible approach. I imagine one-off and routine adaptations according to contextual need are commonplace. I think if I were to design a TBL curriculum, I’d certainly want to collaborate on its design. This would in itself be a departure for me, but preparing quality pre-session materials, writing good questions and working up appropriate application activities are all essential, and all benefit from collaboration or, at least, a willing ‘sounding board’ colleague. I hope to work with Pete on modelling TBL across some of the sessions we offer in Arena, and I really need to get my hands on some of those scratch cards!

How effective are your questions?

[Listen (10 mins) or read below]

Questions you ask students are at the heart of teaching and assessment, but where and how you ask them, the types of questions you ask and the ways you ask them can sometimes be neglected. This is especially true of the informal (often unplanned) questions you might ask in a live session (whether in-person or online), where a little additional forethought into your rationale, your approach and the actual questions themselves could be very rewarding. I was prompted to update this post when reviewing some ‘hot questions’ from new colleagues about to embark on lecturing roles for the first time. They expressed the very common fears over ‘tumbleweed’ moments when asking a question, concerns over nerves showing generally, and worries about a sea of blank screens and ways to check students are understanding when teaching online. What I offer below is written with these colleagues in mind and is designed to be practically oriented:

What is your purpose? It sounds obvious, but knowing why you are asking a question and considering some of the possible reasons can be one way to overcome some of the anxieties that many of us have when thinking about teaching. Thinking about why you are asking questions, and what happens when you do, can also be a useful self-analysis tool. Questions aren’t always about working out what students know already or have learned in-session. They can be a way of teaching (see the Socratic method overview here; this article also has some useful and interesting comments on handling responses to questions), a way of provoking, a way of changing the dynamic and even of managing behaviour. In terms of student understanding: are you diagnosing (i.e. seeing what they know already), encouraging speculation, seeking exemplification or checking comprehension? Very often what we are teaching – the pure concepts – is what gets neglected in questioning. How do we know students are understanding? For a nice worked example, see this example of concept checking.

The worst but most common question (in my view). Before I go any further, I’d like to suggest that there is one question (or question type) that should, for the most part, be avoided. What do you think that question might be? It is a question that will almost always lead to a room full of people nodding away or replying in other positive ways. It makes us feel good about ourselves because of the positive response we usually get, but it can actually be harmful. The problem is that when we ask it, there are all sorts of reasons why any given student might not give a genuine response. Instead of replying honestly, they see others nodding, do not want to lose face, appear daft or go against the flow, and so they join in. But how many of those other students are doing the same? How does it feel when everyone else appears to understand something and you don’t? Do you know what the question is yet? See the foot of this post to check** (then argue with me in the comments if you disagree).

Start with low-stakes questions. Ask questions that call for an opinion or perspective, or that ask students to make a choice, or even something not related to the topic. Get students to respond in different ways (a quick show of hands, an emoji in chat if teaching online, a thumbs up/thumbs down to an e-poll or a controversial quotation – Mentimeter does this well). All these interactions build confidence and ease students into ‘ways of being’ in any live taught session. Anything that challenges assumptions they may have that teaching ‘should’ be uni-directional, and that helps avoid disengagement, is likely to help foster a safe environment in which exchange, dialogue, discussion and the questions that are at the heart of those things are comfortably accepted. Caveat: it is worth noting here that a student at the back who is not contributing will almost certainly have reasons that are NOT to do with indolence or distraction. A student looking at their phone may be anxious about their comprehension and be using a translator, for example. They are there! This is key. Be compassionate and don’t force it. Build slowly.

Plan your questions. Another obvious thing, but actually wording questions in advance of a session makes a huge difference. You can plan for questions beyond the opinion and fact-checking types (the easiest to come up with on the fly). Perhaps use something like the Conversational Framework or Bloom’s Taxonomy to write questions for different purposes or of different types. Think about the verbal questions you asked in your last teaching session. How many presented a real challenge? How many required analysis, synthesis or evaluation? Contrast that with the number that required (or could only have) a single correct response. The latter are much easier to come up with, so, naturally, we ask more of them. If framing higher-order questions is tough on the spot, maybe jot a few down ahead of the lecture or seminar. If you use a tool like Mentimeter to design and structure slide content, it has many built-in tools to encourage you to think about questions that enable anonymous contributions from students.

The big question. A session or even a topic could be driven by a single question. Notions of Enquiry- and Problem-Based Learning (EBL/PBL) exploit well-designed problems or questions that students are required to resolve. These cannot, of course, be ‘Google-able’ set-response questions; they require research, evidence gathering, rationalisation and so on. This reflects core components of constructivist learning theory.

The question is your answer. Challenging students to come up with questions based on current areas of study can be a very effective way of gauging the depth to which they have engaged with the topic. What they select and what they avoid is often a way of getting insights into where they are most and least comfortable.

Wait time. Did you know that the average time lapse between a question being asked and a student response is typically one second? In effect, the sharpest students (the ‘usual suspects’, you might see them as) get in quick. The lack of even momentary additional processing time means that a significant proportion (perhaps the majority) have not had time to mentally articulate a response. Mental articulation goes some way to challenging cognitive overload, so even where people don’t get a chance to respond, the thinking time still helps (formatively). There are other benefits to building in wait time too. This finding by Rowe (1974)* was made long enough ago for us to have done something about it. It’s easy to see why we may not have done, though… I ask a question; I get a satisfyingly quick and correct response… I can move on. But instilling a culture of ‘wait time’ can have a profound effect on the progress of the whole group. Such a strategy will often need to be accompanied by…

Targeting. One of the things we often notice when observing colleagues ‘in action’ is that questions are very often thrown out to a whole group. The result is either a response from the lightning-fast usual suspect or, with easier questions, a sort of choral chant. These sorts of questions have their place. They signify the important. They can demarcate one section from another. But are they a genuine measurement of comprehension? And what are the consequences of allowing some (or many) never to have to answer if they don’t want to? Many lecturers will baulk at the thought of targeting individuals by name, and this is something that I’d counsel against until you have a good working relationship with a group of students, but why not target by section? By row? By table? “Someone from the back row tell me…”. By doing this you can move away from ‘the usual suspects’ and change your focus. One thing we can inadvertently do is focus eye contact, attention and pace on students who are willing and eager to respond, thereby further disconnecting those who are less confident, less comfortable or less inclined to ‘be’ the same.

Tumbleweed. The worry of asking a question and getting nothing in response can be one of those things that leads to uni-directional teaching. A bad experience early on can dissuade us from asking further questions, and then the whole thing develops its own momentum and only gets worse. Low-stakes questions, embedding wait time and building a community comfortable with (at least minimal) targeting are all ways to pre-empt this. My own advice is that the numbers are with you if you can hold your nerve and a relaxed smile. Ask a question, look at the students and wait. Thirty seconds is nothing but feels like an eternity in such a situation. However, there are many more of them than you and one of them will break eventually! Resist re-framing the question or repeating it too soon, but be prepared to ask a lower-stakes version and build from there. More advice is available in this easy-access article.

Technology as a question, not the answer. Though they may seem gimmicky (and you have to be careful that you don’t subvert your pedagogy for colour and excitement), there are a number of in- or pre-session tools that can be used. Tools like Mentimeter, Poll Everywhere, Socrative, Slido and Kahoot all enable different sorts of questions to be answered, as does the ‘Hot Questions’ function in Moodle that prompted me to re-post this.

Putting thought into questions, the reasons you are asking them and how you will manage contributions (or the lack thereof) is something we might all do a little more of, especially when tasked with teaching new topics, new groups or new modalities.

*Rowe, M. B. (1974). Wait-time and rewards as instructional variables, their influence on language, logic, and fate control: Part one – wait-time. Journal of Research in Science Teaching, 11(2), 81-94. (Though this original study was of elementary teaching situations, the principles are applicable to HE settings.)

**Worst question? ‘Does everyone understand?’ or some variant, such as nodding and smiling at your students whilst asking ‘All OK?’ or ‘Got that?’. Instead, ask a question that is focussed on a specific point. Additionally, you might want to routinely invite students to jot their most troubling point on a post-it, or have an open forum in Moodle (or an equivalent space) for areas that need clarifying.

[This is an update – actually more of a significant re-working – of my own post, previously shared here: https://blogs.gre.ac.uk/learning-teaching/2016/11/07/thinking-about-questions/]

Are you not engaged?

Some colleagues and I were tasked with producing a ‘toolkit’ for other colleagues looking to improve/optimise engagement. The toolkit can be seen in this online version of the ‘engagement’ toolkit or downloaded from here in Word format. I hope colleagues find it useful.

What struck me most when talking about and reading around this topic is how problematic it is as a concept and how little time is actually given to deconstructing its meaning and principles. We throw words like ‘engagement’, ‘employability’ and ‘wellbeing’ (should it be hyphenated?) around without even checking that we have a shared understanding of what we mean. The same could be said of ‘learning’ and ‘teaching’ too, I suppose. For a start, engagement in activity is not a proxy for learning, but it is easily confused and conflated as such. Back when I was a sessional lecturer in several further education colleges, the quality assurance processes in every one of them were informed by the lengthy shadow cast by Ofsted. As I recall, along with ‘differentiation strategies’ and ‘negotiated, individualised outcomes’ (I kid you not), we needed to show that we were on top of student engagement. So lesson plans were expected to specify student engagement activities, and observation forms sought ratings in terms of ‘successful’ engagement. I can imagine people asking ‘what’s wrong with that?’ and I suspect I didn’t question it then, to be honest. I was aware that I was constructing an artifice in those plans and observed sessions, though. What it tended to do (definitely in my case and certainly later in the case of many of the teachers I observed) was encourage a cynical acknowledgement of this demand. You end up shoe-horning in activities where engagement (read: students being busy) is visible, because the big problem was that only engagement that was in-your-face obvious was likely to count.

This is why I very much like the engagement framework suggested by Redmond et al. (2018), which is represented below. First, it encourages us to conceptualise types of engagement and secondly, and crucially in my view, it implores us NOT to find ways to measure engagement but to eschew measurement and focus on developing an environment where different types of engagement are valued and fostered.

Blended learning engagement framework showing the five aspects: social, emotional, cognitive, behavioural and collaborative
Online and blended engagement framework with definitions (adapted from Redmond et al., 2018)

Redmond, P., Abawi, L. A., Brown, A., Henderson, R., & Heffernan, A. (2018). An online engagement framework for higher education. Online Learning, 22(1), 183-204.

5 reasons why Mentimeter works so well

[if you have never seen or used Mentimeter then a quick look here may help]

[Listen (11 mins) or read below]

When it comes to tools that will do a teaching and learning job, there is a world of dedicated educational technology and ‘productivity’ tools to choose from. I’m very much an experimenter and a fiddler. If I see someone using or referring to a website or tool that looks interesting in a meeting or at a conference, I am there in seconds, signing up, playing around and making judgements through what has become my needs- and preference-focussed filtration system. In broad terms, I like to be able to try things for free, for a tool to be relatively intuitive and straightforward and (most importantly) fit for either pre-defined or imagined teaching or assessment purposes. I have written more about the how and why of this with my former colleague Dr Timos Almpanis here and in chapter 4 of this collection. I try not to evangelise, and I am very much of the school that would argue that purpose, rather than any given tool, should be the starting point for discussions about integrating digital approaches, but there’s something about what Mentimeter can do, and how it does it, that means I do sometimes slip into ultra-enthusiast mode. Unlike a lot of tech approaches and tools that pass the initial ‘free, easy, fit for purpose’ test, there’s something about the breadth of purpose that Mentimeter is fit for, and its intuitiveness, that, for me, makes it a class above other tools. Also see here, where Chris Little at Keele made this point a while back, and here for an evaluation of a number of student response tools, including Mentimeter, from when a colleague at Greenwich and I were tasked with identifying the best institution-wide student response system.

1. It’s not hardware dependent

Like a lot of people similarly enthusiastic about the opportunities for enhancing student interaction and engagement with digital technologies, I spent a lot of time (much of which was ultimately wasted) focussed on hardware. From interactive whiteboards to in-class iPad sets to PDAs and ‘flipcams’, the issues that directly impeded scaling of use, as well as my own enthusiasm, were related to one or more of the following:

  • Amount of training needed
  • Device security and storage
  • ‘Just in time’ access limits
  • Responsibility for maintenance
  • Rapidity of obsolescence of kit

In my view, all these factors afflicted ‘clickers’ (voting pods that were handed round in face-to-face sessions): as revolutionary as they promised to be, they were only ever used by the few, despite the gleaming aluminium cases and the sumptuous foam inserts that the clicker devices sat in. The BYOD dependence on user devices when it comes to cloud-based software alternatives like Mentimeter means that:

  • People usually know how to use their own devices or at least access the internet
  • Device security, maintenance, updating is not an issue
  • They are, by definition, available, turning the oft-cited teacher frustration of mobile device distraction into a potential virtue

‘What if students don’t have a device?’ is a common question but, like many things in this domain, it’s largely about framing. I always make participation optional, making it clear it is ‘if you have a device on you’ in a face-to-face setting, or ‘if you have a big enough screen or a separate device nearby’ if online, and I frequently subvert the assumption that responses need to be individual by preceding voting with group-based discussion, with one person per group responding.

I have moved between institutions in the last year and both have invested in a site licence and access to the full suite of tools and functionality Mentimeter offers. This privilege is something that must be acknowledged, so it’s certainly not ‘free’ any more (though I personally pay nothing, of course!). Even so, the free version is still relatively generous. In my view it’s an exemplary freemium set-up: just on the right side of frustrating, in amongst the persuasive.

2. It’s a slide/ presentation tool that has many merits in its own right

One thing that is often missed, because the ‘point’ of Mentimeter is interaction, is how well it works as an alternative to PowerPoint as a presentation tool. Even though PowerPoint remains the default across higher education for slide production (even during that weird period when everyone was doing Prezis!), for the most part colleagues seem to struggle to break from the desktop app habit. As a consequence, sharing of slides becomes an upload/download faff or, even if sharing is managed via MS Office cloud storage, there are often restrictions on who can view. Mentimeter generates a link, so the first benefit is that slides can be shared as easily as any website link. Secondly, the participation link enables the students to see the slides (including the detail of pictures) on their own devices in real time (as well as, or possibly INSTEAD OF, on the main screen). Thirdly, the authoring interface is simple, there is a variety of slide types and styles, and the copyright-free image gallery is easy to use, as is the ALT text prompt. Fourthly, the ability to add simple interactions (e.g. thumbs up or down) means that students can be invited to contribute even to content-delivery slides by, for example, agreeing or disagreeing with a controversial idea or quotation. The slides have more limited space for text, and this (to some a limitation) is an excellent discipline when preparing slides: it minimises the text and challenges the tendency many of us have to use too much of it.

A screenshot from the editing window of Mentimeter showing a bulleted slide with image and also the content slide types available
The editing window of Mentimeter showing a bulleted slide with a copyright free image and also the content slide types available

3. The participation and interaction options are substantial and adaptable

As I described in a previous post, there have been a few occasions where students chose notoriety over maturity and tried to undermine sessions by being abusive in open text questions. This led to something of a knee-jerk response from some colleagues, who questioned whether the tool should be supported or used at all. Much as (way, way back) access to YouTube was banned for all students AND teaching staff in a college I worked in because ONE student accessed a (seriously) inappropriate video. The sledgehammer/nut response was not the way to address things, not least because Mentimeter’s existing tools and functionality enable users to avoid and tackle such behaviours. So, if open text questions are used, there are ways of monitoring and filtering content (including a profanity filter), and of the ten interaction/question types only three are open text. To grasp this, however, does often necessitate more than superficial exploration and experimentation (or coming to one of my hour-long workshops!). One thing I commonly do is encourage colleagues to consider how they might eschew the favoured word cloud and open text formats and find ways of fully exploiting the lesser-used types. In addition, it’s important to think about how the interactions are presented and managed. A well-designed question can be an excellent vehicle for prompting discussion prior to ‘voting’, or a prompt for analysing/rationalising responses that have already been offered.

screenshot from Mentimeter authoring dashboard showing all the question types available
Mentimeter authoring dashboard showing all the question types available

4. Frequent updates and improvements

There’s no resting on laurels with Mentimeter, and there does seem to be acknowledgement of user requests. For example, the ability to embed YouTube video in slides is a real blessing and, if using Mentimeter as a slide tool as well as for interaction, further minimises shifting between tabs or different software. The recently introduced collaborative authoring of presentations was much requested at UCL and enables more efficient working in addition to its collaborative potential. A very recent and welcome improvement is the ability to have active hyperlinks (in both participation and presentation modes). The ‘Mentimote’ tool, which allows you to use your smartphone as a slide clicker, moderation tool and presentation embellisher, has also recently switched from beta to ‘fully fledged’ mode and works very well, especially for live in-person events.

5. When Covid came, Mentimeter was equipped to adapt.

The default pace setting in Mentimeter is ‘presenter paced’. That is, the presenter advances the slides and only then can participants see them. This is very much in keeping with how Mentimeter (presumably) was conceived and how many users regard it. However, the non-default option (‘audience paced’) allows slide collections with interactions to be accessed at the audience’s own pace. When lessons switched online almost across the board, it was common for academic colleagues to take the intuitive approach and try to replicate face-to-face teaching in online environments via Zoom, Teams or Collaborate. They often tried to incorporate Mentimeter slides too. Whilst this is do-able, and it is something I routinely do myself, the complexity, and the mental and actual bandwidth, this layer added for already struggling staff and students (with kit, with space, with the implications of Covid) meant that it often felt unsatisfactory. Alongside my and colleagues’ recommendations to rethink how online time could be exploited and optimised, I encouraged colleagues to think about the possibilities of using Mentimeter asynchronously. By encouraging participation ahead of a session and then presenting the results in the session, much faffing and device and screen changing is removed, but students still have a buy-in to the content. When I came to my current post, it was fascinating to see how colleagues in similar positions to my own, such as Dr Silvia Colaiacomo, were saying the same thing here.

If you want to read more of my thoughts about Mentimeter, see this post and also this collaboration with two former colleagues (Dr Gerhard Kristandl from Greenwich and paramedic extraordinaire Richard Ward, who is at Cumbria).

Here, too, is a video case study I made with a colleague and student from the Division of Psychiatry on academic and student use of Mentimeter.

Colleagues at UCL interested in using Mentimeter start here: https://blogs.ucl.ac.uk/digital-education/2020/07/09/mentimeter-at-ucl/

Sell them what they want; give them what they need [audio version]

The recently published special edition compendium from JLDHE of reflections on the impact of Covid-19 on higher education teaching, learning and assessment is an excellent, accessible and diverse resource. The range and quality of the articles make me feel quite proud to be a part of it! The contents page can be accessed here. It is an open access journal and each article uses a common format under strict wordage guidelines, so it really is possible to dip in and out.

My article is here and I offer an audio version of it below for those that prefer to listen or who may gain something from my efforts at being expressive.

In that role I was working closely with Dr Alison Gilmour (now at UWS) and I would also recommend her piece on ‘Adopting a pedagogy of kindness’.