@laurenshof @timbray
I see your point here, but to me it's not a matter of "where are death threats allowed" so much as "where are these statements considered actual condoning of violence, and where are they not." There are cultural contexts (and I don't just mean in an international sense, but in a "community" sense as well) where certain statements are read as genuine intent to do harm, or to encourage others to do so, and other contexts where they are not.
A platform like today's Bluesky (I know, they aspire to move - and are moving - in a more decentralized direction), where one moderation team handles 99.5% of the network's users, has to pick one standard and apply it everywhere (well, with exceptions for the authoritarian regimes, as we discussed elsewhere in this thread). And of course this standard is not at *all* neutral. It is a white American standard. That's fine, if that's the userbase they want.
But this is what I mean when I say it's a good thing for moderators and reported users to have an established relationship and established trust. It's not just "hey, my buddy is not going to kick me off": in an online community that is closer in size to a natural human community, the social tools for establishing expected behavior are certainly imperfect, but much more human. Moderators and members have much more shared context. A distant, faceless moderation team is a necessity at very large scale, but it is also fundamentally more authoritarian.