Discussion
Emelia
@thisismissem@hachyderm.io · 2 months ago

A plain and simple fact is that we can already prevent the distribution of terroristic or violent extremist content (TVEC) and child sexual abuse material (CSAM) without invading everyone's privacy.

Why? Because most of this stuff is shared publicly on large social media platforms.

There's a bunch that does go through apps like Telegram, but it typically doesn't happen in end-to-end encrypted places. It happens in public chats that Telegram can and should moderate.

Our problems with combating CSAM and TVEC in the EU are not solved by scanning everyone's devices for potentially harmful material; in fact, that creates a *way* larger problem.

Our problems are that we don't have reporting hotlines for ESPs (electronic service providers) to actually report illegal conduct on their platforms across the EU. We have nothing like NCMEC here that coordinates reports of content with law enforcement & distributes hashes of known harmful content.

You report CSAM in the EU, and most likely that report goes to NCMEC in the US, because we simply don't have the institutions to handle it.

So given the lack of reporting and response management hotlines, what do you think is going to happen when you mandate scanning of private content on everyone's devices and the sheer number of false positives that's going to generate? The already barely working system is going to completely collapse.

Chat Control will not protect kids and it won't prevent the distribution of TVEC or CSAM; it will just overburden an already broken and fragmented system. We already can't handle the basics of reporting harmful content, let alone all the AI-generated CSAM and TVEC content.

Anyone who thinks Chat Control is the answer is a fool with authoritarian leanings.

https://fightchatcontrol.eu/

#FightChatControl #chatcontrol #TrustAndSafety #EU

RejZoR
@rejzor@mastodon.social replied · 2 months ago
@thisismissem Thanks for this page. I see my country Slovenia is undecided. I need to contact our representatives.
Miner34
@Miner34@mastodon.social replied · 2 months ago
@thisismissem They want to use an "AI" model that Meta will have to create for this. They will have to provide CSAM in order to train this stupid model. Instead of fixing the problem, they're making it worse.
Emelia
@thisismissem@hachyderm.io replied · 2 months ago
@Miner34 They don't need Meta's model; Haidra Safety already exists (open source), and there are others (Thorn has one, and they're one of the non-profit but effectively for-profit organisations that stand to benefit from this being mandated).
Lydie :blahaj:
@Lydie@tech.lgbt replied · 2 months ago
@thisismissem Clowns. So they're going to monitor encrypted traffic from the millions of open-source ways to communicate? They're kidding themselves.