This is a really excellent talk by Ben Zhao about the limits of, and misconceptions about, LLMs, tailored for the … crowd. He focuses on how they work, and how they don't, and bridges that to a discussion of the impact the technology is having on the larger web (bots).

It gives me no small measure of good vibes to know that a big crowd of people in my profession got to hear this as a keynote about AI.

youtube.com/watch?v=JicpcYwQe3

@edsu
Another indication that LLM tech is reaching, or has reached, its limit in capability is that the efforts to make it work "better" are using exponentially more compute. Instead of just asking one question and getting an answer, we generate (say) 6 rephrasings of the question, get 6 answers, then check each of the 6 answers and choose one: roughly 18 model calls plus a final evaluation/comparison where there used to be 1 (see the sketch below). Yikes.
And it is still not great at finding the right answer.
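For what it's worth, here is a minimal Python sketch of that generate-many-then-pick pattern, just to make the call counting concrete. `ask_model` is a hypothetical stand-in for a single LLM call, not any real API; the only point is that one user question turns into roughly 3 × n calls plus a final comparison.

```python
# Minimal sketch of the "generate n rephrasings, answer each, check each,
# pick one" pattern described above. `ask_model` is a hypothetical stub that
# just counts invocations; a real system would call an LLM API here.

CALLS = 0

def ask_model(prompt: str) -> str:
    """Pretend LLM call: count it and return a dummy response."""
    global CALLS
    CALLS += 1
    return f"response to: {prompt[:40]}"

def best_of_n(question: str, n: int = 6) -> str:
    # 1) rephrase the question n ways        -> n calls
    variants = [ask_model(f"Rephrase #{i}: {question}") for i in range(n)]
    # 2) answer each rephrasing              -> n calls
    answers = [ask_model(f"Answer: {v}") for v in variants]
    # 3) check/score each answer             -> n calls
    scores = [ask_model(f"Rate 1-10: {a}") for a in answers]
    # 4) one final comparison to pick a winner
    return ask_model(f"Pick the best answer from: {list(zip(answers, scores))}")

if __name__ == "__main__":
    best_of_n("example question")
    # With n = 6: 6 + 6 + 6 = 18 calls, plus the final pick = 19 total.
    print(f"model calls for one user question: {CALLS}")
```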