@szakib
> filtering toots by matching patterns, like this

I wonder if a plug-in system is the way to go? So devs can build generic fediverse moderation tools and plug them into whichever software a server runs. Or plug in existing tools, like spam filters designed for email servers or blog comments. Or those existing tools could be forked or extended to create generic fediverse tools. A rough sketch of what that interface could look like is below.
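
For illustration only, here's a minimal sketch of such a plug-in interface (all names here are hypothetical, not an existing Mastodon or fediverse API): a plug-in gets a post and returns a moderation decision, so the same filter could be reused across different server software.

```python
from dataclasses import dataclass
from typing import Protocol
import re

@dataclass
class Post:
    """A minimal, software-agnostic view of an incoming post (hypothetical)."""
    author: str    # e.g. "@someone@example.social"
    content: str   # plain-text body of the post

class ModerationPlugin(Protocol):
    """Hypothetical interface any fediverse server software could call."""
    def check(self, post: Post) -> str:
        """Return 'allow', 'flag', or 'reject'."""
        ...

class PatternFilter:
    """Example plug-in: reject posts matching known spam patterns."""
    def __init__(self, patterns: list[str]) -> None:
        self.patterns = [re.compile(p, re.IGNORECASE) for p in patterns]

    def check(self, post: Post) -> str:
        if any(p.search(post.content) for p in self.patterns):
            return "reject"
        return "allow"

# The server software just runs every installed plug-in over each post.
plugins: list[ModerationPlugin] = [PatternFilter([r"buy followers", r"crypto giveaway"])]
post = Post(author="@someone@example.social", content="Huge crypto giveaway, click now!")
print([plugin.check(post) for plugin in plugins])  # ['reject']
```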

#fediverse #moderation

@Gargron

(1/?)

@emergencygg
> more support for finding and having the resources to evaluate them/admin support too

In the model I envision, people find their server through their IRL social networks, not by trawling through a web increasingly polluted by algorithm-generated slop. The evaluation is kind of built in.

Think of it as akin to the way a community group chooses a venue for in-person meetings. They're evaluating safety, values fit, etc., through direct knowledge of the people and institutions involved.

(2/2)

There's a much-discussed #moderation challenge in the fediverse that's solved by the smallest possible servers. If a server is large enough, it can get away with shifting the burden of moderating misbehaving accounts onto the rest of the network. Either:

* bad actors on a large server have to be individually blocked,

OR

* a few bad apples rot the whole bunch: everyone on your server loses access to millions of well-behaved accounts to avoid the handful of under-moderated ones.