I'm listening to your interview with #Anthropic's Amanda Askell from last week's episode, and I'm having a hard time tracking y'all's understanding of what an AI even is at this point. I also can't get your disclaimer before the interview out of my head: basically, that this interview was going to be hard to listen to for folks who still think #LLMs are probabilistic word sequencers.
I'd be really grateful if some future episode of Hard Fork would take up that topic and answer the very naive question "What is an LLM, anyway?" from today's understanding.
Because I still operate under the assumption that LLMs are basically glorified next-word-likelihood calculators. Yeah, they've gotten impressively good at that, and self-hosting and debugging Linux servers (my main use case) have become much, much easier with the advent of Claude Opus and Gemini 3 Pro. But let's not kid ourselves: both of these still hallucinate. They still have a tendency to ignore my system prompts. They still sometimes weirdly ignore, or can't "see", attached documents.
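To make concrete what I mean by "next-word-likelihood calculator", here's a minimal sketch of that mental model. I'm using GPT-2 via the Hugging Face transformers library purely as a stand-in; I'm assuming (maybe naively, which is kind of my whole question) that the frontier models operate on the same next-token principle, just at vastly larger scale:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 as a small, openly available stand-in for "an LLM".
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# A probability for every possible next token, given the prompt so far.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(int(idx))!r}: {p:.3f}")
```

That's the whole trick as I understand it: a probability distribution over the next token, sampled over and over. What I can't square is how that picture relates to the way y'all talked about Claude.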
On the other hand, they keep finding solutions and workarounds to my (very specific) problems that I wasn't able to find or come up with after an hour or two of throwing myself and a good old-fashioned Google search at them.
So, right now, today: what are LLMs, really? Because you guys and Amanda talk about #Claude like a sentient being, and while I empathize, it also creeps me out a bit.