Really, really wishing that headlines had to be specific when they say "AI": are they talking about a) small, bespoke AI algorithms built on well-known causal relationships and relevant data sources, or b) just some LLM?
@researchfairy Thank you for pointing out this distinction; it cleared things up for me.
@researchfairy I've seen someone on here, who probably didn't know better, conflating "AI" as in the algorithms governing non-player character (NPC) motion in videogames with "AI" as in LLMs. I haven't come across "AI" used to refer to the old-school chess-playing programs in the wild, but I'm sure it's been done.
Because sometimes the article is talking about (a), but readers come away believing there's actually good work being done by (b), thanks to equivocation on the meaning of the term "AI"
"Oh I read that AI is going to help end the climate crisis, cure cancer, distribute wealth more equitably," you say
And I guarantee you
Chad Jeepity will do none of those things and will very certainly make all of those things worse
Also, I did a bunch of research on even the "good" AI prior to the advent of the modern generative AI hellscape
Most of that was shit too
@researchfairy “given sufficient [ed. note: infinite, properly labeled] data, we can train a classifier that will ____”
> Our results suggest significant validity threats, dissonance in reporting practices, and challenges to clinical translation. We outline practical recommendations for the successful implementation of AI research in acute ischemic stroke treatment and diagnosis.
https://www.ahajournals.org/doi/full/10.1161/STROKEAHA.122.041442