A really convenient illustration of how biases in training data find their way into generative AI outputs has, for me over the last few months, been to show images of ‘a university professor from….’ as generated by Midjourney. The first example below is exactly that, and I will use it today when talking with colleagues in the history department: ‘A history professor at King’s College London’. Updates and new releases, though, appear at first glance to be tackling the issue head on. The second set of images is exactly the same prompt generated via DALL-E 3 (through the mobile version of ChatGPT-4). Positive me is delighted. Cynical me is more than sceptical.

The training data is unlikely to have changed enough to produce this result, so what has? Clever algorithmic tweaking has draped a shroud of diversity over the realities of the foundation that sits beneath. The system-level technique is a deliberate effort to reflect the diversity of the world more accurately where prompts do not specify gender or ethnicity. This may well make a significant contribution to tackling the bias issue, but we shouldn’t kid ourselves that it has been resolved.
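
OpenAI has not published the exact mechanism, but a minimal sketch of how such system-level prompt augmentation might work is below. Everything here is a hypothetical illustration of the general technique, not OpenAI’s actual implementation: the attribute pools, keyword lists, and the `augment_prompt` function are all my own assumptions, and the keyword matching is deliberately naive.

```python
import random

# Hypothetical attribute pools and keyword lists; a production system would
# use far more sophisticated prompt analysis than simple substring checks.
GENDERS = ["woman", "man", "non-binary person"]
ETHNICITIES = ["Black", "East Asian", "South Asian", "white", "Middle Eastern"]

GENDER_KEYWORDS = ["woman", "man", "male", "female", "non-binary"]
ETHNICITY_KEYWORDS = [e.lower() for e in ETHNICITIES]


def augment_prompt(prompt: str) -> str:
    """Append diversity attributes when the user hasn't specified any.

    The model and its training data are untouched; only the prompt the
    image model actually receives is rewritten behind the scenes.
    """
    lowered = prompt.lower()
    qualifiers = []
    # Only inject an attribute if the user left that dimension open.
    if not any(k in lowered for k in ETHNICITY_KEYWORDS):
        qualifiers.append(random.choice(ETHNICITIES))
    if not any(k in lowered for k in GENDER_KEYWORDS):
        qualifiers.append(random.choice(GENDERS))
    if not qualifiers:
        return prompt  # the user was explicit, so leave the prompt alone
    return f"{prompt}, {' '.join(qualifiers)}"


# Four variations the image model might actually receive for one user prompt:
for _ in range(4):
    print(augment_prompt("A history professor at King's College London"))
```

The point of the sketch is that the diversity appears downstream of the user and upstream of the model: the underlying distribution the model learned from its training data is exactly as skewed as it ever was.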

