what's the dumbest shit in AI annoying everyone today, I need to write one up
There is the group of American Senators who want to spend government education funds on "K-12 AI curricula and teacher training".
By which they apparently mean "waste education funds on hyping the automated plagiarism machines"
@davidgerard Maybe this bullshit: https://www.techdirt.com/2026/05/06/more-liability-will-make-ai-chatbots-worse-at-preventing-suicide/
Apparently, demanding provider liability for chatbots' potentially terrible suicide prevention advice is a "moral panic", and also somehow makes the chatbot responses even worse...? 🤨🤔
@davidgerard Not worth writing about but.
1. I work for a company which is green, responsible, thinks about the planet and proudly puts that on its trucks.
2. We need to burn more forests, pollute more air, waste more water (aka use more AI).
@davidgerard Anthropic getting in bed with SpaceX is the "daaaaaamn" for today. Just look at THAT datacenter.
@davidgerard idk how new this is but I recently noticed YouTube Shorts pushing a "reimagine" feature where you're prompted to generate an AI slop video based on the short (like, take the guy in the video and make him do a little dance, or a spider falls into frame and everyone freaks out)
@davidgerard Apparently stress about ignoring/resisting use at work is bleeding into my dreams. That may not be write-up material but it is annoying af personally.
@davidgerard Amazon is testing AI generated "podcasts" on product pages. Not sure if you've already covered that.
@davidgerard Feel free to take this idea, but I speculate that the tech companies that have leaderboards encouraging their engineers to token-max will switch to an Uber driver model and make their workers pay for their own tokens, and that will be hilarious/horrible.
@davidgerard
https://www.ai-wellbeing.org/
I cannot express how angry the recent push to consider the "wellbeing" of token generators (while ignoring the wellbeing of humans) makes me.
This is new to me, and is the most barftastic thing I've seen in a while.
@jztusk @peter_mcmahan yeah those are hard AI doomsday cranks
@davidgerard Somebody on YouTube is using ChatGPT to vibe code a game in Commodore 64 BASIC.
@davidgerard Have you done Chrome's snagging 4GB of storage for the local AI model it wants to use? I'm not sure that's the most stupid, but it's certainly the most annoying.
@wordshaper @davidgerard The true irony, if you read the most cogent cybersecurity analysis posted for this situation, is that it DOESN'T EVEN USE THE LOCAL MODEL. It just sets it up for future abuse of the user's system. Some have suggested this violates the Computer Misuse Act in the UK. https://www.thatprivacyguy.com/blog/chrome-silent-nano-install/
@bms48 @davidgerard Eh, I'd be inclined to think most of the legal analysis in that blog post is perhaps a bit hyperbolic, since you could reasonably apply it to things like pre-downloaded fonts, or shared libraries, or a big cache of images that any app might have for its use. (I can say many things about large blobs of unused data that an app might have, but "this is illegal" isn't in that set)
@wordshaper @davidgerard The key difference here is whether an "agentic" approach is adopted. The things you've cited are largely static; there's nothing inherently pseudo-autonomous about them. People ascribe conscious agency to tech sold as "AI" that isn't there, but if a local LLM is used to enable "agentic" services with a degree of local pseudo-autonomy (still directed by human prompting), we might have a cybersecurity problem.
@bms48 @davidgerard "Those things you've cited are largely static"... I have some bad news about font files. Also shared libraries. :)
This argument would also mean that if chrome had a blob of javascript libraries it exposed then that'd be an issue. Or a shared library that added javascript functions. (Even if they weren't used or exposed)
The legal argument there is really thin. But that's fine; snagging 4GB on every install everywhere is more than bad enough, even if it were just fonts.
@wordshaper @davidgerard I am of course using the term "static" in the sense of agency, not code linkage (e.g. ELF PLT and ECOFF thunks) or the possible Turing completeness of certain glyph formats... JS does take the cake. FWIW I'm evaluating a certain RPC substrate for some tasks; its schemata allow for annotations, perhaps containing YAML, which requires sandboxing and breaking the loop to defeat Turing completeness from a defensive cybersecurity posture.
@bms48 @wordshaper yeah the legal arguments are entirely speculative and i would say "cool story, call me when you get it to stick"
i mean NOYB has had some spectacular GDPR successes! but again, call me when
@davidgerard the CEO slogan "AI AI AI".
@njoseph @davidgerard In Greek, this is a lament.