@strypey my point was that the LLM that is configured to give me the most appropriate answer using the data it has with no understanding of its impacts 'speaks' more compassionately and responds in a more prosocial way to a request for correction than humans that DO have comprehension and decision-making processes.
The machine doesn't have the choice to be an asshole, but humans do and are.
@FuVenusRs But has the Trained #MOLE remembered your preference? Based on my understanding of the backprop algorithms used to create them, and the feedback I've seen from people using them, I don't expect it will.
Its responses are weighted to seem conciliatory and pleasant, because this increases repeat use. The downside is that this increases the chances it will spit out nonsense using very confident and convincing language. Whether this is a selling point is arguable ; )