Inching further through the writing of a talk on whether #LLMs do or do not reason, I’ve now read Shannon Vallor’s recent book “The AI Mirror”

(thanks to @ecosdelfuturo for pointing it out in this context!).

It’s a really worthwhile read 🧵

@philosophy @cogsci

2/ The main thesis of Vallor's book is that current AI systems pose a profound threat to humanity because they cause us to get lost in our own reflections. That reflection, because it is based on past human expression (the training data), has the potential to trap us in the past at precisely the moment when the challenges we face (climate change, biodiversity collapse, global political instability) require us to find radical new ways of living.

3/ Basically, AI’s notion of ‘knowledge’ takes our focus away from key virtues: strengthening our character, attending to what matters, nourishing relational bonds with others… This feels to me like an important argument.

BUT I have issues with its foundation. The exposition rests to a good extent on the idea that LLMs simply mimic human behaviour: they mimic language use and they mimic reasoning. This is what supports the analogy with Narcissus, who dies gazing at his own reflection in a pool.

4/ Given the centrality of this notion of ‘mimicry’, it was surprising to me how thin the argument for it actually is. There is only one main passage in the text (as far as I could see) that gives an outright argument for how we know that LLMs don’t reason but merely mimic reasoning. Or, in Vallor’s words, that

“An AI mirror is not a mind. It’s a mathematical tool for extracting statistical patterns from past human generated data and projecting these patterns forward into optimised predictions, selections, classifications, and compositions”

5/ How do we know this? “How can we be so sure that AI tools are not minds? What is a mind anyway?” (pg. 87)

The answer is simply that “most of us accept that minds are in some way supervenient on the physical brain. That is, minds depend on brains for their reality” (pg. 88)

And “our minds are embodied rather than simply connected to or contained by our bodies”.

That’s it: that’s the heart of the argument…

6/ Why does this strike me as thin? First, it’s an appeal to popular opinion, albeit expert opinion (“most of us… accept”, which really means “most of us in my discipline accept”). In this context, that feels ultimately like simply saying “trust us”.

But there are multiple reasons for withholding that trust, one of them being that philosophy is by no means the only discipline that studies ‘reasoning’ or ‘minds’.

1+ more replies (not shown)
@UlrikeHahn Gary Marcus is probably a good example/starting point.

https://garymarcus.substack.com/p/llms-dont-do-formal-reasoning-and

(He very uncritically jumps on anything that appears to support this position, but that gives you a good collection.)

Melanie Mitchell is much more "serious people", and won't be pinned down to one position or another, but she gives some good pointers here:

https://aiguide.substack.com/p/the-llm-reasoning-debate-heats-up

Bender & Koller's octopus paper argues that a system trained only on linguistic form cannot learn meaning, which may imply it cannot reason.

https://aclanthology.org/2020.acl-main.463.pdf

@UlrikeHahn @pbloem Noam Chomsky's op-ed “The False Promise of ChatGPT” (NYT, 2023).
Melanie Mitchell has a series of articles on Substack about this.
Even Yann LeCun? https://openreview.net/forum?id=BZ5a1r-kVsf
Depending on how you define “LLMs”, François Chollet's recent talk on AGI illustrated how LLMs perform poorly on ARC-2: https://www.youtube.com/watch?v=5QcCeSsNRks
@vaishakbelle here is working on neurosymbolic AI (NeSy) and should have plenty to say about this.
2 more replies (not shown)
@UlrikeHahn not exactly what you want, but there's this: "ChatGPT is bullshit" https://philpapers.org/rec/HICCIB

I guess there's also the classic quote from Dijkstra: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." Although this predates LLMs by 40 years... https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD867.html