Evolving AI Literacy – A Shared Journey

This post and its slightly cheesy title (above) were generated using Claude and are based only on the transcript* from the recording of the Oxford Brookes webinar (part of the Talking Teaching across the globe series) I spoke at today on how we actually achieve that Russell Group commitment:

Universities will support students and staff to become AI-literate

This is a ‘recast’ AI-generated podcast of the article below – the emphases are not brilliant but I hope it offers colleagues an idea of what can be done to supplement things like the webinar. Frankly, neither this post nor the recast summary would exist without the ability to produce them in minimal time. (The whole process, from downloading the transcript to hitting ‘update’ on this post, has taken 19 minutes.)

Introduction

We find ourselves in a complex moment as emerging generative AI both captivates and concerns academics. Powerful new tools hold promise yet prompt apprehension. How do we steer constructive conversations amidst clashing viewpoints and high stakes? Martin Compton offers insightful perspectives grounded in ethical priorities – perspectives that reframe AI literacy as a collective journey of discovery requiring diverse voices, embracing practical possibilities, and creating space for critical debate.

Multiple Voices Needed to Balance the Discussion

Martin emphasizes that no one person possesses definitive expertise in this nascent domain. Varied voices deserve air time, even those with “limited credentials.” Since AI intersects with so many fields and its societal ramifications span from climate impacts to employment shifts, cross-disciplinary dialogue matters deeply. We have much to learn from each other.

Further, the computer science sphere itself lacks internal concord on timelines and capabilities. Some hail rapid transformational change while others dismiss the possibility of huge impacts. These mixed messages understandably breed public confusion, sparking doomsday headlines alongside boundless optimism. Socratic humility may serve us well here – acknowledging the expanse of what we do not know.

Given such uncertainty, multiplicity of perspective becomes essential. We need humanities scholars probing ethical quandaries, social scientists weighing systemic biases, natural scientists modeling environmental tradeoffs, employers voicing hiring needs, and students sharing their hopes and fears. No singular authoritative stance exists. Martin rightly warns against perpetuating traditional classroom power dynamics that position instructors as all-knowing arbiters. Hierarchical positioning will not serve us in unmapped territory.

Practical Possibilities Over Limitations to Expand Understanding

Beyond balanced dialogue, Martin advises pivoting more conversations toward practical possibilities versus current limitations. Generative AI’s flaws are abundantly clear, including bias, inaccuracy, and authenticity concerns. These absolutely warrant continued attention, as does debate around academic integrity. But dwelling solely on weaknesses risks blinding us to potentially constructive use cases.

We owe it to students to explore how these technologies may assist real work in real fields, shaping their future employability. Are there accessibility gains for neurodiverse learners? Streamlined workflows for overwhelmed academics? Even those who condemn generative AI must grapple with its impending workplace uptake to best serve graduates. Beyond hypotheticals, where might AI tangibly supplement – not supplant – rich pedagogical environments if guided by ethical priorities?

Illustrating authentic applications can also demystify these tools for skeptical faculty and counteract media hyperbole around “robots grading essays.” When we broaden understanding of AI’s diversity beyond chatbots, we dispel myths. Asking, “how might this aid human creativity?” rather than “will this replace human jobs?” reveals unconsidered potentials.

Spaces for Critical Debate Across Campus

Finally, Martin asks the pivotal question of where open-ended debate will unfold on our campuses given diverse conflicting views. Even within single institutions, some departments welcome generative AI while others seek bans. For literacy efforts to prove lasting, they must transcend one-off workshops and invite ongoing dialogue around community priorities.

Martin offers models like King’s College London’s FutureLearn course allowing global participants to weigh complex issues like algorithmic bias. He spotlights the power of hackathons for convening multiple perspectives to spawn inventive projects. Funding student-faculty partnerships around AI applications grounds exploration in lived curriculum.

Constructing designated forums for airing ethical tensions matters deeply, given disparate departmental stances. We need space to hash out appropriate usage guides for our institutional contexts. No top-down policy prescription or unilateral corporate partnership will address the full scope of concerns. By mapping key campus constituents – from disability support offices to career centers to individual faculty across disciplines – we gain fuller understanding of the landscape before charting next wise steps.

Ultimately, AI literacy lives in human connections – the degree to which we foreground multiplicity of voice, balance pragmatic possibility with diligent critique, and carve out shared venues for unpacking this technology on our own terms. The questions loom large, as does the potential for substantive harm. But committing to collective discovery widens possibilities for accountable innovation. We travel this emerging terrain together.

Event slides here

KCL GenAI in HE short course

my prompt: Isolate the comments from martin compton and using ONLY his ideas and contributions write an informal blog post for an academic audience that focusses on how we evolve AI literacy including his thoughts on what that means and how we should approach it. Use at least three subheadings
