I was talking to someone yesterday (let's call them A), and they had another "AI" experience, one I thought might happen but hadn't heard of before.
They were interacting with an organization and, upon asking a specific question, got a very specific answer. Weeks later the organization claimed it had never said any such thing, and when A showed the email as proof, the defense was: oh, we're an international organization and it's busy right now, so the person who sent the original mail probably had an LLM write it, and it made shit up. It literally ended with: "Let's just blame the robot ;)".
(Edit: I did read the email, and it did not read like something an LLM wrote. I think we're seeing "the LLM did it" emerging as a way to cover up mistakes.)
LLMs as diffusers of responsibility in corporate environments was quite obviously gonna be a key sales pitch, but it was new to me that people would be using those lines in direct communication.
No. LLMs don't do anything. Hold people responsible for what they write and email, however they produce their text, and this problem goes away.
Get your lawyer informed, assuming your organisation uses one.
#aiethics cf https://joanna-bryson.blogspot.com/2025/02/generative-ai-use-and-human-agency.html

I built a free tool to help students compare the energy/water use of AI tasks—like a 3-sec video gen or 500-word GPT reply—to everyday ones like Netflix, Google, or cloud storage. Try it at https://what-uses-more.com
Adjust variables like prompt complexity or the energy source and climate of local data centers to see how usage shifts. All data comes from sources listed in a public Google Sheet. Feedback and additional sources welcome! (A rough sketch of the kind of comparison the tool makes follows below.)
#AIinEducation #AIliteracy #AIethics #Environment #Climate #Sustainability
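For readers who want to see the idea in code: here is a minimal sketch of the kind of per-task comparison the tool makes. All energy and carbon figures below are illustrative placeholders I made up for the example, not values from what-uses-more.com or its public Google Sheet.

```python
# Minimal sketch of comparing the energy of one task to another.
# All numbers are hypothetical placeholders, NOT data from the tool.

# Rough per-task energy estimates in watt-hours (illustrative only)
ENERGY_WH = {
    "500-word GPT reply": 0.3,
    "3-sec AI video generation": 30.0,
    "1 hour of Netflix streaming": 80.0,
    "1 Google search": 0.3,
}

# Hypothetical grid carbon intensity (g CO2 per Wh) by data-center location
CARBON_INTENSITY = {
    "coal-heavy grid": 0.9,
    "mostly renewable grid": 0.05,
}


def compare(task_a: str, task_b: str) -> None:
    """Print how many of task_b one task_a is roughly equivalent to."""
    ratio = ENERGY_WH[task_a] / ENERGY_WH[task_b]
    print(f"{task_a} uses about {ratio:.1f}x the energy of {task_b}")


if __name__ == "__main__":
    compare("3-sec AI video generation", "500-word GPT reply")
    compare("1 hour of Netflix streaming", "3-sec AI video generation")

    # Show how the carbon footprint of the same task shifts with the grid mix
    for grid, intensity in CARBON_INTENSITY.items():
        grams = ENERGY_WH["3-sec AI video generation"] * intensity
        print(f"3-sec video generation on a {grid}: ~{grams:.1f} g CO2")
```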

We have moved to KIT's own new Mastodon server: https://social.kit.edu/@DiTraRe
This is the DiTraRe Leibniz Science Campus on "Digital Transformation of Research" tooting.
#Introduction #AI #AIethics #AIact #generativeAI #research #science #humanities #digitalisation #ethics #chemistry #neuhier @fiz_karlsruhe @KIT_Karlsruhe @ITAS_KIT @Feelix @AnnaJacyszyn @sourisnumerique @GenAsefa @enorouzi @joerg @fizise @lysander07 #dh #YoMigroaMastodon

🚨 Today in the Intro to the Ethics of AI lecture: Data Protection & Fundamental Rights
🔹 What’s the difference between privacy and data protection?
🔹 How do the US and Europe approach data protection differently?
🔹 Why we protect fundamental rights – not just data.
🧠 Join live on Zoom | 14:15–15:45 CEST: https://tinyurl.com/EoAI25
🎥 Watch later on YouTube: https://lnkd.in/ePcdbrvi
✍️ : A blog post I've wanted to write for two years!