@strypey my point was that the LLM — configured to give me the most appropriate answer from the data it has, with no understanding of its impacts — 'speaks' more compassionately and responds more prosocially to a request for correction than humans who DO have comprehension and decision-making processes.
The machine doesn't have the choice to be an asshole, but humans do and are.