The illusion itself is not the core concern. Those discussing ChatGPT often
invoke its distant ancestor, the Eliza “psychotherapist” chatbot developed in
the mid-1960s, which produced a similar illusion. By modern standards, Eliza was
primitive: it generated responses via simple heuristics, often rephrasing
input as a question or making generic comments. Memorably, Eliza’s creator,
the computer scientist Joseph Weizenbaum, was surprised, and worried, by
how many users seemed to feel that Eliza, in some sense, understood them. But
what modern chatbots produce is more insidious than the “Eliza effect”.
Where Eliza merely reflected, ChatGPT magnifies.
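
To make the contrast concrete, here is a minimal sketch, in Python, of the kind of heuristic Eliza relied on: match a keyword pattern, swap the pronouns, and hand the user's own words back as a question. The patterns and pronoun table below are invented for illustration; they are not Weizenbaum's original DOCTOR script.

```python
import re

# A toy Eliza-style responder (illustrative only; not Weizenbaum's DOCTOR
# script). It swaps first- and second-person words, then turns a matched
# "I feel/am/think ..." statement back into a question, or falls back to a
# generic prompt.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "am": "are", "you": "i", "your": "my",
}

def reflect(fragment: str) -> str:
    # Rewrite the user's fragment from their perspective to the bot's.
    words = re.findall(r"[a-z']+", fragment.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(user_input: str) -> str:
    # Rephrase the input as a question if a simple pattern matches,
    # otherwise fall back to a generic comment.
    match = re.search(r"\bi (?:feel|am|think) (.+)", user_input.lower())
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please, go on."

print(respond("I feel that nobody understands my work"))
# Why do you feel that nobody understands your work?
print(respond("Hello"))
# Please, go on.
```

Even this crude sketch is enough to produce the reflective, vaguely attentive replies that led Eliza's users to feel understood.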