I don’t mean as a pejorative. I think it’s a useful way to frame the thing we’re invoking, at least for me.
Discussion
I think this misses the technical definition of hallucinations; not just returning things that are untrue, but things that aren't in the model's training set. It also misses that an application like a chatbot also has rules about when it refuses a low-confidence answer. There's a lot going on.
…I would say I am explicitly sidestepping the technical definition to reclaim the older usage as a (to me) useful figure for What Is Going On In There. I do understand that this will be annoying for some people, but they mostly don't pay any attention to me to begin with.
(If it helps, the piece I am trying very hard not to write is mostly about continuity of divination practices.)
@kissane.myatproto.social it’s your boss who is loved by management who knows all the words and how to string them together to sound extremely competent, but if he makes sense it’s an accident and he doesn’t care either way anyway.
One million people will now re-explain to me that the models don't know what is true (which is fine, I posted words, that can't go unpunished), and I am not sure how to explain that I am not really just talking about truth, in this particular thread.