So many humans fail to understand what an #LLM is and isn't, but it really shocks me how many people who I believe *should* understand this by now still have no clue.
I wonder if this tendency comes from their use of #LLMs, from being part of the community that uses them, and from the persistent anthropomorphic terminology used to describe them (such as "hallucination" instead of "producing wrong results that appear correct to a human").