I was talking to someone yesterday (let's call them A), and they had an "AI" experience I thought might happen but hadn't heard of before.

They were interacting with an organization and, upon asking a specific question, got a very specific answer. Weeks later the organization claimed it had never said any such thing, and when A showed the email as proof, the defense was: oh yeah, we're an international organization and things are busy right now, so the person who sent the original mail probably had an LLM write it that made shit up. It literally ended with: "Let's just blame the robot ;)".

(Edit: I did read the email, and it did not read like something an LLM wrote. I think we're seeing "the LLM did it" emerge as a way to cover up mistakes.)

LLMs as diffusers of responsibility in corporate environments were quite obviously gonna be a key sales pitch, but it was new to me that people would use those lines in direct communication.

@tante In a world where things make sense, a bot that a company deploys is no different than a person saying something: that bot is representing the company. If they don't want the bot to do things, they should configure it correctly. Just saying "oopsy" shouldn't be OK. I would certainly not do business with that company if avoidable.
@tante I don't suppose that you could disclose the name of the organization so I can put them on my shitlist?

This kind of bullshit needs to be punished. I want the people in charge terrified that the decisions they make now will have lasting consequences. I want management stunned by the sheer number of people who refuse to engage with their company because it tried to deny accountability one time.

@tante
I have a sibling whose boss raved for weeks about how much time AI was saving them on emails. It came to an abrupt halt when it turned out the AI had been telling their vendors to drop-ship products to their warehouse instead of using the typical, more cost-effective method. It cost over $100k to learn that AI might save time but has no sense of budget or process.
@tante
A project manager I know in the NZ govt just got the new "sandwich training": employees are trained to defer govt decision-making to the IBM chatbot subscription, framed as a sandwich (that's the official training's framing, not mine):
Bread slice 1: The employee sent the prompt
(Bullshit) filling: Chatbot response
Bread slice 2: The employee actions what the bot said

In this way, the human employee both makes no decisions and is at fault for doing what the bot told them to do.

@tante

I had a 'conversation' using the chat function of a website, asking them to delete some very sensitive info. I was told first that I could delete the info myself (false), and then that I couldn't delete it without a code, with a link to their T&Cs (also false). I realised I was chatting with an AI, insisted on speaking to a person, and they immediately took the action I requested. AIs lie. It should be illegal for BS AI to go undeclared like that, posing as human.

@tante kind of a great sales pitch though. i mean i hate it, but more and more i feel like diffusion of responsibility is the point of it all. middle management all the way down. nobody gets to talk to anyone in charge anymore. the person telling employees about the mass layoffs isn’t the one making the decision. got a complaint at work? the people you have access to can’t change anything. the last decades have been built around stripping human connections away and making everything faceless.
@tante
Sounds a lot like this, no?

https://www.theguardian.com/world/2024/feb/16/air-canada-chatbot-lawsuit

“Canada’s largest airline has been ordered to pay compensation after its chatbot gave a customer inaccurate information, misleading him into buying a full-price ticket.

Air Canada came under further criticism for later attempting to distance itself from the error by claiming that the bot was “responsible for its own actions”.”