what's the dumbest shit in AI annoying everyone today, I need to write one up
Post
@davidgerard This was like six weeks ago, but the head of Norway’s $2 trillion sovereign wealth fund is very high on AI. He’s mandated all developers to use AI, saying those who want it the least need it the most and “you need to be on them like a wasp”. His favourite pastime is walking around asking devs what they’re doing with AI and asking them to explain what’s on their screens. Really some smashing quotes in this article, unfortunately in Norwegian. https://www.kode24.no/artikkel/hvis-du-nekter-a-bruke-ki-er-du-basically-tjukk-i-huet/259609
@davidgerard can this robot buddhist monk ceremony qualify? https://www.aljazeera.com/video/newsfeed/2026/5/7/humanoid-robot-becomes-buddhist-monk-in-south-korea
@davidgerard It's got to be Dawkins.
@davidgerard can you go meta with this? the need to respond to and pay attention to all the stupid shit feels like it has accelerated *dramatically* even in the last 6 weeks or so. literally everything is dumb shit right now, a fact much more remarkable than any individual piece of dumb shit
@davidgerard like don't get me wrong I appreciate the work you're doing on pivot to AI and I pay attention to it, but that's like the max amount I want to pay attention to it every day. I do not want every blog post, every toot, every podcast, every youtube video to be about the same dumb shit. like I just take a 10 minute break every day to figure out if we're still at war and every interaction that I have is some eye-wateringly stupid and evil AI thing or a reaction to same
@glyph so the whole premise is "Web 3 is Going Great But It's AI", originally in 250-word chunks. They're longer chunks now, but it's the same format. The think pieces are in there, but they're always riffs on the thing of the day. I do try to answer "so what does this meeeean". I mean, today's is just "dumb thing, the situation, here's what to do," so it varies. OTOH today's is real short.
thankfully my attention span is real low too
@davidgerard yeah, and you've been killing it. but why isn't it a hint to anyone involved in the topic that "daily news, but only about this one single topic, and everything is bad and stupid" is a reliable business model, now over multiple domains. it really feels like if I had a blog called "Nathan Sucks Today" and every single day I had a minimum of 5 topics about all the cataclysmically stupid shit that Nathan did that day, maybe Nathan should think about making some changes
@davidgerard The Google "Prompt API", which is omnibullshit. Everything from how it's being pushed into open standards, to how it's outsourcing their compute bill to users, and finding yet more ways to avoid actually writing software and making usable UIs.
@davidgerard Apparently Marca said a dumb thing that is riling people up but I think everyone should just stop talking about him
@jwz The Defector nailed that one anyway https://defector.com/you-should-never-be-the-most-sycophantic-participant-in-a-conversation-with-a-chatbot
@davidgerard Remember last week when Jer Crane of PocketOS wrote up the incident where Cursor deleted their production database and its backup? The incident has been covered ad nauseam but I don’t think I’ve seen anyone analyze Crane’s statements:
“It confessed in writing. … The Agent’s Confession … This is the agent on the record, in writing.”
This is so mind-bogglingly stupid I don’t know where to begin. Maybe you do.
@stuartmarks the guy is a far gone MAGA and very very very stupid
@davidgerard Heh, didn’t know about him being MAGA. Correlates with stupid I guess!
I wanted to point out though that saying an AI “confessed” to something is meaningless. It ascribes a bunch of human meaning (guilt, contrition, right vs wrong, etc.) that simply doesn’t exist in an AI. And having it “in writing”? An AI “writing” something doesn’t imply any commitment to the truth as it might with a human. And it’s not as if the AI had any choice about whether its output is written…
There is the group of American Senators who want to spend government education funds on "K-12 AI curricula and teacher training".
By which they apparently mean "waste education funds on hyping the automated plagiarism machines"
@davidgerard Maybe this bullshit: https://www.techdirt.com/2026/05/06/more-liability-will-make-ai-chatbots-worse-at-preventing-suicide/
Apparently, demanding provider liability for chatbots' potentially terrible suicide prevention advice is a "moral panic", and also somehow makes the chatbot responses even worse...? 🤨🤔
@haverholm Masnick will never not make excuses for slop
@davidgerard Not worth writing about but.
1. I work for a company which is green, responsible, thinks about the planet and proudly puts that on its trucks.
2. We need to burn more forests, pollute more air, waste more water (aka use more AI).
@davidgerard Anthropic getting in bed with SpaceX is the "daaaaaamn" for today. Just look at THAT datacenter.
@davidgerard idk how new this is but I recently noticed YouTube shorts pushing a "reimagine" feature where you're prompted to use the short as the basis for an AI slop video based on it (like, take the guy in the video and make him do a little dance, or a spider falls into frame and everyone freaks out)
@davidgerard Apparently stress about ignoring/resisting use at work is bleeding into my dreams. That may not be writeable but it is annoying af personally.
@davidgerard Amazon is testing AI generated "podcasts" on product pages. Not sure if you've already covered that.
@davidgerard Feel free to take this idea, but I speculate that the tech companies that have leaderboards encouraging their engineers to token-max will switch to an Uber driver model and make their workers pay for their own tokens, and that will be hilarious/horrible.
@davidgerard
https://www.ai-wellbeing.org/
I cannot express how angry the recent push to consider the "wellbeing" of token generators (while ignoring the wellbeing of humans) makes me.
This is new to me, and is the most barftastic thing I've seen in a while.
@jztusk @peter_mcmahan yeah those are hard AI doomsday cranks
@davidgerard Somebody on YouTube is using ChatGPT to vibe code a game in Commodore 64 BASIC.
@davidgerard Have you done Chrome's snagging 4G of storage for the local AI model it wants to use? I'm not sure that's the most stupid but certainly the most annoying.
@wordshaper @davidgerard The true irony, if you read the most cogent cybersecurity analysis posted for this situation, is that it DOESN'T EVEN USE THE LOCAL MODEL. It just sets it up for future abuse of the user's system. Some have suggested this violates the Computer Misuse Act in the UK. https://www.thatprivacyguy.com/blog/chrome-silent-nano-install/
@bms48 @davidgerard Eh, I'd be inclined to think most of the legal analysis in that blog post is perhaps a bit hyperbolic, since you could reasonably apply it to things like pre-downloaded fonts, or shared libraries, or a big cache of images that any app might have for its use. (I can say many things about large blobs of unused data that an app might have but "this is illegal" isn't in that set)
@wordshaper @davidgerard The key difference here is if an "agentic" approach is adopted. Those things you've cited are largely static; there's nothing inherently pseudo-autonomous about them. People ascribe conscious agency to tech sold as "AI" that isn't there, but if a local LLM model is used to enable "agentic" services with a degree of local pseudo-autonomy (still directed by human prompting), we might have a cybersecurity problem.
@bms48 @davidgerard "Those things you've cited are largely static"... I have some bad news about font files. Also shared libraries. :)
This argument would also mean that if Chrome had a blob of JavaScript libraries it exposed then that'd be an issue. Or a shared library that added JavaScript functions. (Even if they weren't used or exposed)
The legal argument there is really thin. But that's fine, snagging 4G on every install everywhere is more than bad enough, even if it were just fonts.
@wordshaper @davidgerard I am of course using the term "static" in the sense of agency, not code linkage (e.g. ELF PLT and ECOFF thunks) or the possible Turing completeness of certain glyph formats... JS does take the cake. FWIW I'm evaluating a certain RPC substrate for some tasks, and its schemata allow for annotations, perhaps containing YAML, which requires sandboxing and breaking the loop to defeat Turing completeness from a defensive cybersecurity posture.
@davidgerard the CEO slogan "AI AI AI".
@njoseph @davidgerard In Greek, this is a lament.