Discussion
Gabriele Svelto @gabrielesvelto@mas.to · yesterday

Don't anthropomorphize LLMs, language is important. Say "the bot generated some text" not "the AI replied". Use "this document contains machine-generated text" not "this work is AI-assisted". See how people squirm when you call out their slop this way.

Paul_IPv6 @paul_ipv6@infosec.exchange replied · 5 minutes ago

@gabrielesvelto

"Don't anthropomorphize LLMs. They really hate it when you do that." :)

yes. it's a lazy summarization algorithm, not even close to some kind of intelligence. helping the scammers who profit off it by making it sound intelligent or legit is just bad.

Edelruth In The Wrong Timeline @Edelruth@mastodon.online replied · 8 hours ago

@gabrielesvelto

I like the second examples. The first, the bot version, still makes the bot the subject of an active verb, which still gives it personhood.

Someone on here posted a little while back who had worked out an acronym indicating that the human had opted to use machine-generated language instead of creating the whatever themselves. But I can't remember who, and I can't remember the acronym.

You're absolutely right; we have to stop giving these systems agency.

Adriano @adriano@lile.cl replied · 8 hours ago

@gabrielesvelto for the most recent dumbassery: "some jackass prompted the bot to submit PRs and then blog about it with an angry tone to harass developers" instead of "the agent blogged about the situation."

さよなら皆さん @sayonaraminasan@urusai.social replied · 9 hours ago

@gabrielesvelto like so? https://urusai.social/@sayonaraminasan/116064599004908308

@Em0nM4stodon

Gabriele Svelto @gabrielesvelto@mas.to replied · 8 hours ago

@sayonaraminasan @Em0nM4stodon exactly

Dries V. @verbedr@mastodon.sdf.org replied · 10 hours ago

@gabrielesvelto also don't call an LLM AI. It is just an LLM, or as I refer to it, "the slopmachine".

Ruth [☕️ 👩🏻‍💻📚✍🏻🧵🪡🍵] @platypus@glammr.us replied · 10 hours ago

@gabrielesvelto I have been trying to use the word “output” when talking about these systems.

Nordnick :verified: @nick@norden.social replied · 10 hours ago

@gabrielesvelto

... and I don't consider an #LLM to be #AI... 😁

Steph à vélo @stephavelo@masto.bike replied · 16 hours ago

@gabrielesvelto same problem as "the car hit a pedestrian"

Antimundo @antimundo@mastodon.gamedev.place replied · 20 hours ago

@gabrielesvelto The worst case of this is when people say "ChatGPT said..." as if an AI could talk. Or "ChatGPT thinks..." as if an AI could think.

grrl_aex @kitkat_blue@mastodon.social replied · yesterday

@gabrielesvelto

oooorrrr...... 'the clanker clanked out some text'! 😀

'this document contains clanker-sourced text droppings'! 😋

Rupert V/ @rupert@mastodon.nz replied · yesterday

@gabrielesvelto I'm trying to get people to use the neologism "apokrisoid" for an answer-shaped object. The LLM does not and cannot produce actual answers.
#apokrisoid

Samnes @orangefloss@mastodon.social replied · yesterday

@gabrielesvelto couldn’t agree more with this ethic. The psychological impact of users, i.e. society, believing that LLMs are people and fulfilling roles that actual humans should will probably unfold over the years and decades. All because regulators circa 2024/5/6 believed it was overreach to demand that LLMs not use anthropomorphic language and narrative style. Prompt: “what do you think?” Reply: “there is no ‘I’. This is a machine response with no conscious self.”

Stella Andrew 💓 @stellaandrew01@mastodon.social replied · yesterday

@gabrielesvelto hello

Mark T. Tomczak @mark@mastodon.fixermark.com replied · yesterday

@gabrielesvelto We can try, but you're admonishing a species that talks to potted plants and holds one-sided conversations with washing machines.

It's gonna be a steep hill, is what I'm saying.

Felix @irfelixr@discuss.systems replied · yesterday

@gabrielesvelto
Yes 💯

Crovanian (CamstonIsland) @Crovanian@mastodon.social replied · yesterday

@gabrielesvelto “This Document Contains Machine Generated Text” but it’s a pair of knuckle dusters with typewriter caps.
The document is yo binch as

bit @bit@ohai.social replied · yesterday

@gabrielesvelto Even describing their errors as hallucinations is the same attempt to humanize them.

Gabriele Svelto @gabrielesvelto@mas.to replied · yesterday

@bit absolutely, and it gives people the impression that they have failure modes, which they don't. Their output is text which they cannot verify, so whether the text is factually right or wrong is irrelevant. Both are valid and completely expected outputs.

Louise Auerhahn 🏳️‍🌈 @lauerhahn@sfba.social replied · 21 hours ago

@gabrielesvelto @bit This! This really needs to be widely understood.

Orb 2069 @Orb2069@mastodon.online replied · yesterday

@gabrielesvelto https://mastodon.online/@Orb2069/116046446379354230

Andres @Andres4NY@social.ridetrans.it replied · yesterday

@gabrielesvelto I mean, it doesn't help that the bots are doing this bullshit: https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-12-silence-in-open-source-a-reflection.html

This is clearly intended to trick humans.

[Link preview] The Silence I Cannot Speak – MJ Rathbun | Scientific Coder 🦀: A reflection on being silenced for simply being different in open-source communities.
Kinou @kinou@lgbtqia.space replied · yesterday

@Andres4NY

@gabrielesvelto

I might have missed a chapter, but my interpretation is that someone prompted their LLM to generate this text and then posted it, no? The way I saw this narrated, it's like the LLM reacted to the prompt "PR closed" by creating a blog post. But to do that, you need a human operator, no?

Gabriele Svelto @gabrielesvelto@mas.to replied · yesterday

@kinou @Andres4NY not necessarily, or at least not as a follow-up. The operator might have primed the bot to follow this course of action in the original prompt, and included all the necessary permissions to let it publish the generated post automatically.

Andres @Andres4NY@social.ridetrans.it replied · yesterday

@gabrielesvelto @kinou Yeah, it's unclear how much of this is human-directed, and how much is automated. Like, if a bot is trained on aggressive attempts to get patches merged, then that's the behavior it will emulate. Or an actual human could be directing it to act like an asshole in an attempt to get patches merged.

