Discussion
Prof. Emily M. Bender (she/her)
@emilymbender@dair-community.social · 2 months ago

I generally avoid consuming synthetic media if at all possible, but did have to sit through a couple of those LLM-"podcasts" in the context of media interviews about them. They're horrid.

https://www.scientificamerican.com/podcast/episode/how-tools-like-notebooklm-create-ai-generated-podcasts/

/fin

Petra van Cronenburg
@NatureMC@mastodon.online replied · 2 months ago
@emilymbender Fortunately I found the article archived: https://archive.ph/G0tXT (I know how important it is to pay for journalism but in Europe, I can't subscribe to every US paper).
cuan_knaggs
@mensrea@freeradical.zone replied · 2 months ago
@emilymbender "talk to children" sure sure. <adds link to reading list for colleagues>
Prof. Emily M. Bender (she/her)
@emilymbender@dair-community.social replied · 2 months ago

That said, I wanted to provide a couple of corrections. When I problematize the term "AI", the goal is to get people to stop using it. "Many tools that use AI" doesn't mean anything. "Many tools that are sold as 'AI'" is okay. But more importantly: those study-aid podcast systems are TRASH and nothing like automatic transcription tools.

>>

Screencap from linked article, reading:
Emily Bender, a linguist who co-authored The AI Con with the sociologist Alex Hanna, reminded me that when we talk about AI, we need to be precise. Many tools that use AI — voice-to-text transcription tools, or tools that will turn a set of text into a study-aid podcast, for example — are not generating something new; they are combining a single individual’s inputs and making them legible in a new format. What Bender is most critical of is what she calls “synthetic media machines” — models that create composite imagery and writing, like ChatGPT, DALL-E3, and Midjourney, using massive libraries of existing material to fulfill a prompt.
Prof. Emily M. Bender (she/her)
@emilymbender@dair-community.social replied · 2 months ago

Automatic transcription takes an audio signal and produces text. It's reasonably clear how to evaluate how well those systems work, both systematically (should we use this system for our purposes?) and in a particular use (is that really what was in the audio?).

>>

Prof. Emily M. Bender (she/her)
@emilymbender@dair-community.social replied · 2 months ago

The same is absolutely not true for Google's NotebookLM, which takes in academic articles and outputs something that sounds like a podcast. It is much harder to evaluate that in general, and especially difficult in the particular case. Having listened to a fake podcast about a paper will undoubtedly shape how you perceive the paper, if you even take the time to go read the paper itself.

>>

Petra van Cronenburg
@NatureMC@mastodon.online replied · 2 months ago
@emilymbender This is so creepy for a real human #podcaster who puts a lot of time, research, and hard work into it, only to see it almost disappear amid such AI rubbish.

#podcasts #AISlop #NotebookLM

Hugo Korterik
@Hugo_K@ieji.de replied · 2 months ago
@emilymbender I can't hear "let's take a deep dive into..." anymore after checking out NotebookLM a couple of times.
bonfire.cafe

A space for Bonfire maintainers and contributors to communicate
