@FediTips I agree with a pro-active approach. No idea how it would work technically, but let's keep in mind that on too many huge instances, moderators experience stress and burnout. So I'd suggest another possibility.
Let's say an account receives 100 flags. That doesn't mark it as a bigot, an extremist, or whatever; everyone has their own reason to flag a post or a user.
It happened to me a couple of days ago, when a far-left extremist said "better 100 dead cops each day and global change than nothing"...
Not because of the cops as such, but saying "better dead than..." when talking about politics is a very frightening sign.
And after enough flags, you'd get a warning, like a content warning: "potentially disturbing message: what do you want to do? Block the person, block the instance?"...
Not like Facebook, where you can organize a big group of people to report a user and the algorithm shuts them down. Here, EVERY SINGLE USER would see a CW (or popup or whatever) that, when they click on the post, opens the options: "keep reading, reply, report, block user, block instance"...
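Just to make the idea concrete, here's a toy sketch. None of this is the real Mastodon/ActivityPub API; the names and the 100-flag threshold are invented for illustration.

```typescript
// Purely illustrative: made-up types and names, not a real fediverse API.

type ReaderChoice = "read" | "reply" | "report" | "blockUser" | "blockInstance";

interface Post {
  id: string;
  authorAcct: string;   // e.g. "user@instance.example"
  instance: string;
  content: string;
  flagCount: number;    // hypothetical server-side tally of flags
}

const FLAG_THRESHOLD = 100; // the "100 flags" from the example above

// Decide, per reader, whether to hide the post behind a warning overlay.
// Flags only change how the post is *presented* to each individual reader;
// nothing is decided automatically and nothing is sent to the author.
function renderForReader(post: Post): { warned: boolean; choices: ReaderChoice[] } {
  if (post.flagCount < FLAG_THRESHOLD) {
    return { warned: false, choices: [] };
  }
  return {
    warned: true, // show "potentially disturbing message" instead of the content
    choices: ["read", "reply", "report", "blockUser", "blockInstance"],
  };
}

// Each choice is handled locally (or via the reader's own instance),
// so the decision stays with the single user, not with an algorithm.
function handleChoice(post: Post, choice: ReaderChoice): void {
  switch (choice) {
    case "read":          /* reveal post.content */ break;
    case "reply":         /* open the reply composer */ break;
    case "report":        /* file a report with the reader's moderators */ break;
    case "blockUser":     /* block post.authorAcct for this reader only */ break;
    case "blockInstance": /* block post.instance for this reader only */ break;
  }
}
```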
It's up to each single user. But the post's author, IMHO, should _not_ see that they've been flagged or anything. It's like when I was moderating a Zoom room: we were all blind, no video. One participant started screaming and insulting people; we muted his mic, and he kept screaming at the moon for minutes!