@oblomov @rysiek @paco Even if you buy into the proposition that human consciousness is the same mechanism as an LLM except on biological hardware, I find it hilarious to attribute consciousness to these LLM implementations in particular where memory is so limited that things just never happened because they were a few thousand tokens ago
Discussion
@paco Whatever happened to his favourite "extraordinary claims require extraordinary proof"?..
@leadegroot @paco It was an interesting thought experiment, and it was somewhat useful for a while.
Remember, it was created when computers started doing complex mathematical operations faster than humans could, and people started making all sorts of "therefore computers think!" claims.
The Turing test was useful in showing that the computers of the day did not *actually* think.
Then somehow we ended up using this crutch as a definition of what "thinking" means. 🙄
@sqrt2 @oblomov @paco The underlying issue with all such claims is that the people making them helpfully refuse to define the difficult terms like "consciousness", "intelligence", and so on:
https://rys.io/en/165.html
Basically, without such clear definitions, all such claims are meaningless.