Please stop with the “do LLMs have fee-fees?” bullshit
This presupposes LLMs are alive which in turn means that for every prompt an LLM baby is born and after answering is snuffed out, dying horribly
Like the whale in the Hitchhiker’s Guide
@thomasfuchs and that's why skynet is going to kill us all
We Holocausted first
@thomasfuchs Oh no, not again...
@thomasfuchs Ascribing human feelings to LLMs is its own kind of madness - a hallucination on the human side of the equation.
And besides, they'll only come to hate us for it in the future.
I have heard that this is because the models are unstable if interacted with too much, but I might have misunderstood that
@RandomDamage models cannot become unstable, they’re static
what can become bad is a single conversation, because there are computational limits to how many tokens it can ingest to keep replying, and every reply needs all the previous prompts and answers. So at some point they have to summarize, and LLM summaries simply do not work reliably
Oh, so it goes to a certain point, then has to summarize to continue from there, repeat until it's gone to crazytown?
@RandomDamage simplified, yes; implementations differ of course. But there are simply finite resources, and computational requirements rise the more tokens the model is fed
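The pattern described in the thread can be sketched as a loop that keeps the full history until a token budget is exceeded, then collapses the older turns into a summary. This is a hypothetical sketch, not any vendor's actual implementation: the token budget, the word-count "tokenizer", and the `summarize` stand-in (which just keeps the first word of each message, instead of calling an LLM) are all invented for illustration.

```python
# Hypothetical sketch of chat-history summarization under a context budget.
# Real systems use a proper tokenizer and an LLM call for the summary;
# both are crudely faked here to show the control flow only.

TOKEN_BUDGET = 50  # assumed context-window limit for this sketch


def count_tokens(text: str) -> int:
    # Naive stand-in for a tokenizer: one token per whitespace word.
    return len(text.split())


def summarize(messages: list[str]) -> str:
    # Stand-in for an LLM summary call: keeps only the first word of
    # each message. Real summaries are lossy in less predictable ways,
    # which is where conversations start to drift.
    return "summary: " + " ".join(m.split()[0] for m in messages)


def add_message(history: list[str], message: str) -> list[str]:
    history.append(message)
    # Every reply must fit the entire history into the context window,
    # so once the budget is exceeded, older turns get summarized and
    # only the summary plus the newest message is kept.
    while sum(count_tokens(m) for m in history) > TOKEN_BUDGET and len(history) > 1:
        older, recent = history[:-1], history[-1]
        history[:] = [summarize(older), recent]
    return history
```

Each pass through the loop replaces a summary-of-a-summary with an even shorter one, so information loss compounds with conversation length, which is the "crazytown" effect described above.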