Discussion
tante
@tante@tldr.nettime.org · 2 months ago

I was talking to someone yesterday (let's call them A) and they had another "AI" experience, one I thought might happen but hadn't heard of before.

They were interacting with an organization and, upon asking a specific question, got a very specific answer. Weeks later, that organization claimed it had never said any such thing, and when A showed the email as proof, the defense was: oh yeah, we're an international organization and it's busy right now, so the person who sent the original mail probably had an LLM write it that made shit up. It literally ended with: "Let's just blame the robot ;)".

(Edit: I did read the email and it did not read like something an LLM wrote. I think we're seeing "the LLM did it" emerging as a way to cover up mistakes.)

LLMs as diffusers of responsibility in corporate environments was quite obviously gonna be a key sales pitch, but it was new to me that people would use those lines in direct communication.

Matt T.
@nosword@localization.cafe replied · 2 months ago
@tante That organization must be very confident that A has no alternative but to deal with them! “We can retract any commitment at any time and not even feel guilty about it, let alone legally liable” is not much of a “choose us as your project partner” pitch
Charlie the Anti-Fascist Dog
@arrrg@kolektiva.social replied · 2 months ago
@tante in a world where things make sense, a bot that a company deploys is no different from a person saying something. that bot is representing the company. if they don't want the bot to do things, they should configure it correctly. just being like oopsy, that shouldn't be OK. I would certainly not do business with that company if avoidable.
Thomas Cameron
@ThomasCameron512@mastodon.online replied · 2 months ago
@tante "the AI ate my homework?" 😂🤣😆
Joanna Bryson, blathering
@j2bryson@mastodon.social replied · 2 months ago

No. LLMs don't do anything. Hold people responsible for what they write and email, however they produce their text, and this problem goes away.

Get your lawyer informed, assuming your organisation uses one.

#aiethics cf https://joanna-bryson.blogspot.com/2025/02/generative-ai-use-and-human-agency.html

kgndiue
@kgndiue@mastodon.social replied · 2 months ago
@tante There was a low-key court case in Austria recently where the defendant claimed an LLM hallucinated the defamatory statements published by him. It most certainly did not stick, and he was sentenced.
SpaceLifeForm
@SpaceLifeForm@infosec.exchange replied · 2 months ago
@tante

There is always something to blame. It is never their fault.

#Blame #AI #Insanity

SlightlyCyberpunk
@admin@mastodon.slightlycyberpunk.com replied · 2 months ago
@tante I'd just reply with this:

https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know

Cabbidges
@TheDailyBurble@mastodon.social replied · 2 months ago
@tante
Yep. No legal responsibility, it was the AI what done it.
Sassinake! - ⊃∪∩⪽
@Sassinake@mastodon.social replied · 2 months ago
@tante

Air Canada got sued for a similar messed-up interaction

Greg Lloyd
@Roundtrip@federate.social replied · 2 months ago
@tante See also this good podcast episode on #llm #liability #law
https://mastodon.social/@lawfare/114874664114525328
lproven
@lproven@social.vivaldi.net replied · 2 months ago
@tante IBM called this one successfully in 1979:

“A computer can never be held accountable. Therefore a computer must never make a management decision.”

https://blog.apaonline.org/2023/04/13/responsibility-and-automated-decision-making-draft/

Vincent 🌻🇪🇺
@photovince@mastodon.social replied · 2 months ago
@tante Let’s keep those fuckers to every single promise they made “because of the robot”
Simon Forman
@carapace@mastodon.social replied · 2 months ago
@tante

Another iteration of "Computer says no".

https://www.youtube.com/watch?v=0n_Ty_72Qds

https://en.wikipedia.org/wiki/Computer_says_no

Faraiwe
@faraiwe@mstdn.social replied · 2 months ago
@tante ...and they claim that something that originated within their corporation is NOT a legitimate statement of their own?

That's gonna fly legally...

Tom
@tdelmas@mamot.fr replied · 2 months ago
@tante "I don't care if it's from your bot or an intern. It's from your org, in your org name."
Zelda🎀TheZeldaZone🏳️‍⚧️🎮
@TheZeldaZone@mastodon.social replied · 2 months ago
@tante any LLM output should be read and verified before being sent to a customer.

If you think "then AI won't be a cost saving anymore", you get it: this shit simply cannot work this way.

SarcastiCat
@Plumbert@thecanadian.social replied · 2 months ago
@tante LLM is the new Marketing Intern.
Taylor
@TWDickson@mastodon.social replied · 2 months ago
@tante Air Canada got into some trouble with their chatbot promising discounts that don't exist. So precedent has been set: companies can blame the AI, but they're still responsible.

https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416

FoolishOwl
@foolishowl@social.coop replied · 2 months ago
@tante Maybe they'd stop using bullshit generators if they were legally bound by the bullshit they generated.

I'm joking of course. As if laws meant anything to the rich.

Flounder
@fl0und3r@defcon.social replied · 2 months ago
@tante I don't suppose that you could disclose the name of the organization so I can put them on my shitlist?

This kind of bullshit needs to be punished. I want the people in charge terrified that decisions they make now will have lasting consequences. I want management to get blueballed by the sheer number of people who don't want to engage with their company because they tried to deny accountability one time.

blausand 🐟
@blausand@chaos.social replied · 2 months ago
@tante reports here, for example, on a truly fatal problem with the use of AI in customer service or, worse, in government agencies.
Something quite different from "AI evil1!!!".
🧃
@atp@c.im replied · 2 months ago
@tante LLM = lazy lying man.
Sam Popowich
@redlibrarian@mastodon.social replied · 2 months ago
@tante This happened in Canada last year - our largest airline tried to evade responsibility for erroneous information given out by its AI bot.

https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416

https://www.mccarthy.ca/en/insights/blogs/techlex/moffatt-v-air-canada-misrepresentation-ai-chatbot

Frank Huysmans
@frhuy@ieji.de replied · 2 months ago
@tante LLMs are the new interns
tanavit
@tanavit@toot.aquilenet.fr replied · 2 months ago
@tante

Before, companies had to "hire" trainees as interns to blame for all the errors.

Now they will use LLMs instead.

MR.e
@MR_E@infosec.exchange replied · 2 months ago
@tante
I have a sibling whose boss raved for weeks about how much time AI was saving them with emails. It came to an abrupt halt when it turned out the AI had been telling their vendors to drop-ship products to their warehouse instead of the typical, more cost-effective way. It cost over $100k to learn that AI might save time but didn't have any sense of budget or process.
adison verlice
@adisonverlice@tweesecake.social replied · 2 months ago
@tante Yeah. I'm guessing you read the contents and could tell it was not how an LLM normally responds. Bold letters, very specific word choice, etc.?
tante
@tante@tldr.nettime.org replied · 2 months ago
@adisonverlice the text in question was very specific and explicitly laid out in bold. The text did not read GPT-y.
rowmyboat
@rowmyboat@glammr.us replied · 2 months ago
@tante the flip side of this that I'm seeing is that some companies are doing "the AI can't fail, we can only fail it" and using it as a way to slag their low-level support workers when we point out a nonsense email. Either way, the customer should not complain!
Nicole Parsons
@Npars01@mstdn.social replied · 2 months ago
@tante

https://www.desmog.com/2025/04/22/ai-energy-demand-can-keep-fossil-fuels-alive-tech-backers-promise-worlds-two-biggest-oil-producers/

Yet another example of why the fossil fuel industry funds AI: "blame the AI" narratives will be used in the industry's well-deserved upcoming mass lawsuits.
https://www.theguardian.com/us-news/2025/jul/15/trump-ai-oil-energy-summit

https://www.theguardian.com/environment/2025/jul/23/healthy-environment-is-a-human-right-top-un-court-rules

https://www.washingtonpost.com/business/2025/02/23/ai-gas-trump-climate-fossil/

https://www.bloomberg.com/opinion/articles/2025-07-24/ai-should-pay-a-price-for-its-environmental-damage

https://www.motherjones.com/politics/2025/07/fossil-fuel-polluters-lifeline-ai-data-centers-power-demand-donald-trump-pittsburgh-summit/

https://www.theguardian.com/environment/2025/jul/24/australia-warned-it-could-face-legal-action-over-fossil-fuels-after-icj-landmark-climate-ruling

The industry is desperately trying to keep its captive consumers & government handouts, as renewables gain.

https://www.theguardian.com/environment/2025/jul/22/antonio-guterres-climate-breakthrough-clean-energy-fossil-fuels

screwlisp
@screwlisp@gamerplus.org replied · 2 months ago
@tante
A project manager I know in the NZ govt just got the new "sandwich training" - the training is to defer govt decision-making to the IBM chatbot subscription, but it's a sandwich (that's the official training, not me):
Bread slice 1: The employee sends the prompt
(Bullshit) filling: Chatbot response
Bread slice 2: The employee actions what the bot said

In this way, the human employee both makes no decisions and is at fault for doing what the bot told them to do.

Zumbador
@Zumbador@mefi.social replied · 2 months ago
@tante I've seen this called the "accountability sink" and it's a use case for quite a lot of tech.
GutterPoetry
@GutterPoetry@mastodon.me.uk replied · 2 months ago
@tante

I had a 'conversation' using the chat function of a website, asking them to delete some very sensitive info. I was told first that I could delete the info myself (false) and then that I couldn't delete the info without a code (which included a link to their T&Cs - also false). I realised I was chatting with an AI, insisted on speaking to a person, and they immediately took the action I requested. AIs lie. It should be illegal for BS AI to go undeclared like that, posing as human.

Samuelrod
@Samuelrod@mastodon.social replied · 2 months ago
@tante please be careful with this with every organization
DjedMoros
@DjedMoros@sueden.social replied · 2 months ago
@tante ... but my dog ate my homework!
mathegudrun (she/her)
@mathegudrun@mastodon.social replied · 2 months ago
@tante ouch
Alex P Roe
@alex_p_roe@mastodon.world replied · 2 months ago
@tante LLLM? - Large Lying Language Model?! 😉
Christine Burns MBE 🏳️‍⚧️📚⧖
@christineburns@mastodon.green replied · 2 months ago
@tante Forty years ago it was common for companies to blame errors on ‘the computer’ even when they didn’t even have a computer.
Lazy B0y
@lazyb0y@mastodon.social replied · 2 months ago
@tante

If this was communication with an employer, I assume that when an employee uses an LLM to steal all company data, they will also just shrug and say, ah OK, let's blame the robot?

Tor Iver Wilhelmsen
@toriver@mas.to replied · 2 months ago
@tante Let us see what IBM said about the subject back in 1979:
Text on white background: «A computer can never be held accountable. Therefore a computer must never make a management decision»
Dan Phiffer
@dphiffer@social.coop replied · 2 months ago
@toriver @tante fixed it
The well-known IBM memo, but modified: A COMPUTER CAN NEVER BE HELD ACCOUNTABLE! THEREFORE A COMPUTER MUST MAKE MANAGEMENT DECISIONS
William Pietri
@williampietri@sfba.social replied · 2 months ago
@dphiffer @toriver @tante LOLSOB
xs4me2
@xs4me2@mastodon.social replied · 2 months ago
@tante

And a subliminal message that LLMs should not be taken seriously, it seems…

ChookMother 🇦🇺🦘
@anne_twain@theblower.au replied · 2 months ago
@tante Interesting.
contrasocial
@contrasocial@mastodon.social replied · 2 months ago
@tante

I feel like a bit of a dummy for not realizing sooner that part of the reason industries embrace AI is precisely that it makes so many mistakes. It's the perfect scapegoat if you are trying to do something shady and get caught. "Ohhh, sorry we cooked the books, must've used AI for that huehue."

Arnd Layer
@iamlayer8@mastodon.social replied · 2 months ago
@tante
Yeah! If we could only get one regulation for the usage of #LLM|s, it should be that there is always a real person who is legally responsible for any kind of automation, including AI. And we should make it as hard as possible to delegate this responsibility.
rhold
@rhold@norden.social replied · 2 months ago
@tante
That's the main use case. It will get much worse soon.

I recently visited a corporate website with a disclaimer: the texts are AI-generated, so they take no liability.

hagen terschüren
@hagen@mastodon.social replied · 2 months ago
@tante kind of a great sales pitch though. i mean i hate it, but more and more i feel like diffusion of responsibility is the point of it all. middle management all the way down. nobody gets to talk to anyone in charge anymore. the person telling employees about the mass layoffs isn’t the one making the decision. got a complaint at work? the people you have access to can’t change anything. the last decades have been built around stripping human connections away and making everything faceless.
tante
@tante@tldr.nettime.org replied · 2 months ago
@hagen "frictionless and smooth". Like Joe Rogan's brain.
hagen terschüren
@hagen@mastodon.social replied · 2 months ago
@tante one good thing: ai could actually kill mckinsey because diffusion of responsibility is literally all they do. blame us for layoffs you already knew you wanted to do. that’s something llms are already perfectly capable of.
Bespoke Nonsense Machine
@pikesley@mastodon.me.uk replied · 2 months ago
@tante

"You should henceforth assume that all communication from this organisation is not to be trusted"

Their friend, Svavar
@svavar@masto.svavar.com replied · 2 months ago
@tante

There is already legal precedent in this area, and the excuse doesn't work.

https://www.theguardian.com/world/2024/feb/16/air-canada-chatbot-lawsuit

rhold
@rhold@norden.social replied · 2 months ago
@tante that's the biggest and most important use case.

The other day I was visiting a corporate website and it had this disclaimer: the texts are AI-generated and that's why they can't take any liability.

Christian "Schepp" Schaefer
@Schepp@mastodon.social replied · 2 months ago
@tante "Let's just blame the gun ;)"
Aline Blankertz
@alineblankertz@indieweb.social replied · 2 months ago
@tante
Sounds a lot like this, no?

https://www.theguardian.com/world/2024/feb/16/air-canada-chatbot-lawsuit

“Canada’s largest airline has been ordered to pay compensation after its chatbot gave a customer inaccurate information, misleading him into buying a full-price ticket.

Air Canada came under further criticism for later attempting to distance itself from the error by claiming that the bot was “responsible for its own actions”.”

Janeishly
@janeishly@beige.party replied · 2 months ago
@tante "Can we save money with this?"
"Yes, but it will potentially fuck up the entire corporation's brand image."
"Yeah, but can we save money?"
"Not very much compared to the brand image."
"Fuckit, save the money, it'll be fine."