Suzanne Srdarov and I have a new publication out in the Oxford Intersection on AI and Society:

Generative Imaginaries of Australia: How Generative AI Tools Visualize Australia and Australianness
https://doi.org/10.1093/9780198945215.003.0150

It's paywalled, so we've also got a summary piece in The Conversation:

‘Australiana’ images made by AI are racist and full of tired cliches, new study shows https://theconversation.com/australiana-images-made-by-ai-are-racist-and-full-of-tired-cliches-new-study-shows-263117

but please ping me if you need a PDF of the main piece! #generativeAI #Australia #racism #auspol

‘Australiana’ images made by AI are racist and full of tired cliches, new study shows 

Big tech company hype sells generative artificial intelligence (AI) as intelligent, creative, desirable, inevitable, and about to radically reshape the future in many ways.

Published by Oxford University Press, our new research on how generative AI depicts Australian themes directly challenges this perception.

We found that when generative AI tools produce images of Australia and Australians, these outputs are riddled with bias. They reproduce sexist and racist caricatures more at home in the country’s imagined monocultural past.
Abstract

Generative AI (GenAI) has the potential to “imagine,” create, and render novel images in a seemingly endless combination of possibilities. However, the capacity of digital technologies to reduce cultural paradigms through the algorithmic monocultures they produce is well documented. As GenAI evokes powerful imaginaries, it is vital to ask what sorts of stories are included, and who is made more and less visible in them. To answer this, the authors tested a series of prompts across five of the largest commercially available GenAI engines—Adobe Firefly, Dream Studio, DALL-E 3, Meta AI, and Midjourney. The prompts were “Australian-centric” in nature, designed to elicit the visual data of Australia through the lens of GenAI. Through an analysis of a corpus of approximately 700 images, the authors found that GenAI frequently invokes tired and clichéd tropes to communicate “Australianness,” such as depictions of red dirt, Uluru, the “outback,” and a sense of wildness, in both its wildlife and in its depictions of “typical” Indigenous Australians. Various forms of bias were evident in the visualizations produced. The optics and interpretation of these images span the puzzling to the troubling; this paper contends that “Australiana” as a category surfaces the limitations and blind spots of GenAI. Moreover, GenAI operates as something of a cultural time machine, surfacing old and defunct caricatures of Australianness despite the seeming newness of the “GenAI moment.”