in the context of writing a talk for an upcoming pre-conference workshop at #cogsci2025 I’m looking for examples of researchers who explicitly maintain that #LLMs do not or cannot “reason”….
all suggestions welcome!
Inching further through the writing of a talk on whether #LLMs do or do not reason, I’ve now read Shannon Vallor’s recent book “The AI mirror”
(thanks to @ecosdelfuturo for pointing it out in this context!).
It’s a really worthwhile read 🧵
The main thesis of Vallor's book is that current AI systems pose a profound threat to humanity because they cause us to get lost in our own reflections. That reflection, being based on past human expression (training data), has the potential to trap us in the past at precisely the moment when the challenges we face (climate change, biodiversity collapse, global political instability) require us to find radical new ways of living.
3/ Basically, AI's notion of 'knowledge' takes our focus away from key virtues: strengthening our character, attending to what matters, nourishing relational bonds with others… This feels to me like an important argument.
BUT I have issues with its foundation. The exposition rests to a good extent on the idea that LLMs simply mimic human behaviour: they mimic language use and they mimic reasoning. This is what supports the analogy with Narcissus, who dies gazing at his own reflection in a pool.
4/ Given the centrality of this notion of 'mimicry', it was surprising to me how thin the argument for it actually is. There is only one main passage in the text (as far as I could see) that gives an outright argument for how we know that LLMs don't reason but merely mimic reasoning. Or in Vallor's words, that
“An AI mirror is not a mind. It’s a mathematical tool for extracting statistical patterns from past human generated data and projecting these patterns forward into optimised predictions, selections, classifications, and compositions”
https://garymarcus.substack.com/p/llms-dont-do-formal-reasoning-and
(He jumps very uncritically on anything that appears to support this position, but that gives you a good collection.)
Melanie Mitchell is much more of a "serious person", and won't be pinned down to one position or the other, but she gives some good pointers here:
https://aiguide.substack.com/p/the-llm-reasoning-debate-heats-up
Bender & Koller's octopus paper argues that LLMs have access only to form, not meaning, which may imply no reasoning.
I think they're all regrouping for their next pivot.
I guess there's also the classic quote from Dijkstra: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." Although this predates LLMs by 40 years... https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD867.html