Bias in AI image outputs

For me, a really convenient illustration of how biases in training data find their way into generative AI outputs over these last few months has been to show images of 'a university professor from….' as generated by Midjourney. The first example below is exactly that, and I will use it today when talking with colleagues in the history department: 'A history professor at King's College London'. But updates and new releases appear, at first glance, to be tackling the issue head-on. The second set of images is exactly the same prompt generated via DALL-E 3 (through the mobile version of ChatGPT-4). Positive me is delighted. Cynical me, though, is more than sceptical. The training data is unlikely to have changed enough to produce this result, so what has? More likely, clever algorithmic tweaking has draped a shroud of diversity over the foundation that sits beneath. The system-level technique is a deliberate effort to reflect the diversity of the world more accurately where prompts do not specify gender or ethnicity. This may well make a significant dent in the bias issue, but we shouldn't kid ourselves that it has been resolved.
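To make the system-level technique concrete: the idea is that, before an image is generated, the prompt is checked for demographic specifiers and, if none are present, attributes are injected to diversify the output. The sketch below is purely illustrative and hypothetical; the term lists, function name, and injection wording are my own assumptions, not the actual mechanism used by any specific provider.

```python
import random

# Illustrative only: terms a system might treat as explicit demographic
# specifiers (real systems would use far more sophisticated detection).
DEMOGRAPHIC_TERMS = {"man", "woman", "male", "female", "white", "black",
                     "asian", "hispanic", "latino", "latina"}

# Hypothetical attribute pools used to diversify unspecified prompts.
GENDERS = ["a man", "a woman"]
ETHNICITIES = ["Black", "white", "East Asian", "South Asian", "Hispanic"]

def augment_prompt(prompt: str, rng: random.Random) -> str:
    """Append demographic attributes only when the prompt specifies none."""
    words = {w.strip(".,'\"").lower() for w in prompt.split()}
    if words & DEMOGRAPHIC_TERMS:
        return prompt  # the user already specified; leave the prompt alone
    return (f"{prompt}, depicted as {rng.choice(GENDERS)} "
            f"of {rng.choice(ETHNICITIES)} ethnicity")
```

The point of the sketch is the one I make above: the underlying model is untouched; only the prompt it receives has changed, which is why the training-data bias remains beneath the surface.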

Alt text: A single image divided into four panels, generated in Midjourney via Discord. Each panel displays a photorealistic portrait of a different man against an architectural backdrop, presumably a historic building or courtyard. All appear white, all wear glasses, all are formally dressed, and three wear ties.
Alt text: A collection of four images showcasing DALL-E 3 interpretations of the prompt "history professors at Kings College London" in different scenarios and styles: two women and two men of different ethnicities, rendered in styles ranging from photorealistic to simple line drawing.