Suzanne Srdarov and I have a new publication out in the Oxford Intersection on AI and Society:

Generative Imaginaries of Australia: How Generative AI Tools Visualize Australia and Australianness
https://doi.org/10.1093/9780198945215.003.0150

It's paywalled, so we've also got a summary piece in The Conversation:

‘Australiana’ images made by AI are racist and full of tired cliches, new study shows https://theconversation.com/australiana-images-made-by-ai-are-racist-and-full-of-tired-cliches-new-study-shows-263117

but please ping me if you need a PDF of the main piece! #generativeAI #Australia #racism #auspol

‘Australiana’ images made by AI are racist and full of tired cliches, new study shows 

Big tech company hype sells generative artificial intelligence (AI) as intelligent, creative, desirable, inevitable, and about to radically reshape the future in many ways.

Published by Oxford University Press, our new research on how generative AI depicts Australian themes directly challenges this perception.

We found that when generative AIs produce images of Australia and Australians, these outputs are riddled with bias. They reproduce sexist and racist caricatures more at home in the country’s imagined monocultural past.
Abstract

Generative AI (GenAI) has the potential to “imagine,” create, and render novel images in a seemingly endless combination of possibilities. However, the capacity of digital technologies to reduce cultural paradigms through the algorithmic monocultures they produce is well documented. As GenAI evokes powerful imaginaries, it is vital to ask what sorts of stories are included, and who is made more and less visible in them. To answer this, the authors tested a series of prompts across five of the largest commercially available GenAI engines—Adobe Firefly, Dream Studio, DALL-E 3, Meta AI, and Midjourney. The prompts were “Australian-centric” in nature, designed to elicit the visual data of Australia through the lens of GenAI. Through an analysis of a corpus of approximately 700 images, the authors found that GenAI frequently invokes tired and cliched tropes to communicate “Australianness,” such as depictions of red dirt, Uluru, the “outback,” and a sense of wildness, in both its wildlife and in its depictions of “typical” Indigenous Australians. Various forms of bias were evident in the visualizations produced. The optics and interpretation of these images span from the puzzling to the troubling; this paper contends that “Australiana” as a category surfaces the limitations and blind spots of GenAI. Moreover, GenAI operates as something of a cultural time machine, surfacing old and defunct caricatures of Australianness despite the seeming novelty of the “GenAI moment.”

The latest FOSS Academic post involves more wrestling with the implications of #generativeAI for academic peer review:

https://fossacademic.tech/2025/08/06/reviewing-ai.html

In this post, I take observations from software #developers and #openSource podcasters (such as the folks at @latenightlinux ) about how genAI is swamping things like bug bounties and code reviews. This is similar to some of the issues faced by academic peer reviewers.

#academicChatter #FOSSacademic

Having a deceased shooting victim give an interview as an AI avatar clearly gets a reaction, but I fear it's not the right one. It's such an important issue, but the novelty of this form of AI 'ghost' eclipses what is actually being said. It also raises some big ethical questions about personal data use!

https://www.theguardian.com/us-news/2025/aug/04/jim-acosta-parkland-shooting-victim-ai-interview

[Thread ... ] #digitalDeath #generativeAI #privacy #posthumousData

Mark Zuckerberg's vision of 'personal superintelligence' just sounds like supersurveillance. Again.

"Personal devices like glasses that understand our context because they can see what we see, hear what we hear, and interact with us throughout the day will become our primary computing devices."

https://www.meta.com/superintelligence/ #meta #generativeAI #hype

@researchfairy arguing that LLMs are a fascist technology: "well suited to centralizing authority, eliminating checks on that authority and advancing an anti-science agenda."

https://blog.bgcarlisle.com/2025/05/16/a-plausible-scalable-and-slightly-wrong-black-box-why-large-language-models-are-a-fascist-technology-that-cannot-be-redeemed/

"And because LLM prompts can be repeated at industrial scales, an unscrupulous user can cherry-pick the plausible-but-slightly-wrong answers they return to favour their own agenda."

#LLMs #generativeAI

I talked to @404mediaco about generative AI's impact on teaching. Apparently I wasn't alone... not by a long shot. @jasonkoebler got a ton of responses about this topic and ran many of them here:

https://www.404media.co/teachers-are-not-ok-ai-chatgpt/

While it's distressing to read all of them, it's good to see I'm not alone.

#generativeAI #higherEducation #teaching #academicChatter