Chatbots can provide something like therapy, but it's not the genuine article — so I think it's good that more states are cracking down on how companies present them https://www.platformer.news/ai-therapy-paxton-meta-character/

A Stanford University study, covered in Ars Technica last month, found that therapist-branded chatbots from Character.AI and other providers can encourage delusional thinking and express stigma toward people with certain mental health conditions. But one of its co-authors, Nick Haber, argued that AI likely does have positive applications in therapy, including in training human therapists and in helping clients with journaling and coaching.

That strikes me as true — and still not quite enough. Part of the problem here surely relates to language: the words "therapy" and "therapist" connote a level of trust and care that no automated system can provide. Tools like ChatGPT can clearly provide a convincing therapy-like experience — even one that has therapeutic benefits — but should never be mistaken for the genuine article.