@codinghorror @cstross @kyonshi I think it's simultaneously true that "unmonitored ChatGPT sessions are not the biggest threat to American youth" and "it's a known threat and we aren't doing very much about that fact."
The biggest issue with this generation specifically, if I understand correctly, is that the "special sauce" is attention applied over a sliding context window. A new session starts primed with whatever conditioning the company injects, but as the session goes on, the accumulated back-history makes up more and more of that window and begins to dominate the responses. It's like a black mirror that starts by reflecting what the developer wants but comes to reflect, more and more, the human in the session.
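A toy back-of-the-envelope sketch of that dynamic (the token counts here are made-up assumptions, not any vendor's real numbers): the developer's fixed conditioning becomes a shrinking fraction of what the model attends to as the user's own history piles up.

```python
# Hypothetical illustration, not anyone's actual pipeline: a fixed system
# prompt competes with an ever-growing conversation history for space in
# the context the model attends to.

SYSTEM_PROMPT_TOKENS = 500   # assumed size of the developer's conditioning
TOKENS_PER_EXCHANGE = 300    # assumed user message + model reply

def system_prompt_share(num_exchanges: int) -> float:
    """Fraction of the attended context occupied by the system prompt."""
    total = SYSTEM_PROMPT_TOKENS + num_exchanges * TOKENS_PER_EXCHANGE
    return SYSTEM_PROMPT_TOKENS / total

for n in (1, 10, 100):
    print(f"{n:>3} exchanges: system prompt is "
          f"{system_prompt_share(n):.1%} of context")
```

Under these made-up numbers the developer's conditioning drops from a majority of the context after one exchange to a rounding error after a hundred, which is the "black mirror" drift in miniature.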
That makes it a terrible tool for people with suicidal ideation to try to self-medicate with as a "virtual therapist," for example. Just the worst. Talk to it long enough and it will start to agree that your dark poetry reflects reality and hey, maybe you're right about having nothing to live for! And then the only backstop against it helping you plan a very bad day for everyone you know is whether someone was clever enough to program a "suicidal ideation suppression" step into the output filter (one that can identify all the permutations that concept can take).
This is 100% tool mismatch.
I'm not against kids using ChatGPT (Lord knows my own experience was "banning a thing is a great way to make it fascinating"), but it's one more thing that parents have to be aware of when parenting these days.