I don’t mean as a pejorative. I think it’s a useful way to frame the thing we’re invoking, at least for me.
I clearly don’t understand how they work bc I can’t get past the “world’s most advanced Markov chain generator” concept in my head.
I am an "a difference in scale becomes a difference in kind" person—with this as with everything, I guess. Like, the Markov-chainer's obviously not unrelated! I just don't think it's capturing how weird the models have become.
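For anyone who hasn't seen the metaphor in action: a word-level Markov chain picks each next word using only counts of what followed the previous word in its training text—no broader context at all. A minimal sketch (the toy corpus below is illustrative, not from the thread):

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: the next word is sampled purely from
# the words observed to follow the current word in the training text.
corpus = "the cat sat on the mat and the cat ran"
words = corpus.split()

chain = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    chain[prev].append(nxt)

random.seed(0)  # deterministic for the example
out = ["the"]
for _ in range(5):
    out.append(random.choice(chain[out[-1]]))
print(" ".join(out))
```

The point of the thread's disagreement: current models are "not unrelated" to this, but a one-word window of memory is a very different object from what the large models are doing.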
And not in a Roose/Dawkins way, I am not in awe of their consciousness or whatever. I think getting current capabilities out of this architecture is just incredibly strange and our metaphors are mostly not up to it.
I think this misses the technical definition of hallucinations; not just returning things that are untrue, but things that aren't in the model's training set. It also misses that an application like a chatbot also has rules about when it refuses a low-confidence answer. There's a lot going on.
…I would say I am explicitly sidestepping the technical definition to reclaim the older usage as a (to me) useful figure for What Is Going On In There. I do understand that this will be annoying for some people, but they mostly don't pay any attention to me to begin with.
(If it helps, the piece I am trying very hard not to write is mostly about continuity of divination practices.)
@kissane.myatproto.social it’s your boss who is loved by management who knows all the words and how to string them together to sound extremely competent, but if he makes sense it’s an accident and he doesn’t care either way anyway.
One million people will now re-explain to me that the models don't know what is true (which is fine, I posted words, that can't go unpunished) and I am not sure how to explain that I am not really just talking about truth, in this particular thread.