As much as I admire the techlash, I have some serious reservations. I worry that there are some pretty useful tech babies that we are at risk of throwing out with the bathwater.

--

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2025/07/23/resto-modding/#itch-scratchers-r-us

1/

@pluralistic your post got me thinking (which is why i read them) about the bigger picture, so i fired up a (local) chatbot and filled it in on recent geopolitics, and it really struggled to be positive. but eventually, it came up with two solutions: grassroots movements and decentralised systems.

so, as usual, it didn't tell me anything i didn't already know. hopefully i didn't just put a car fire's worth of co2 in the atmosphere to get there.

as penance, i'll buy some (audio)books that may have been used in the training to reach that conclusion. but i already have most of yours.

oh, and i'm a tinkerer at heart, which is why this post got my attention. when listening to the internet con, all i could think of as part of the solution is for people to have a little bit of literacy around how computation works. computers used to start in basic, forcing the user to have some semblance of understanding of how a computer worked.

we've spent 40-50 years going from fascination with things computers can do that humans can't, to essentially selling functionality based on how many humans it replaces. i'm sure the roman empire heard arguments similar to today's, when slaves gradually replaced workers who paid taxes.

@pluralistic

Cory, what about platforms that actively *promote* harmful content? My issue with Section 230 is that it's taken as carte blanche by all social media sites.

If Xitter promotes a hateful (and false) post about someone to a million users just to get its "engagement" numbers up, then it's certainly acting as a publisher and should be liable. Create harmful AI slop for user engagement purposes? Boom, you're a publisher of original content and are liable.

This seems simple to me - what am I missing?

1+ more replies (not shown)
@pluralistic
Couldn't an even narrower change to section 230 preserve the liability shield if someone isn't making any "editorial decisions"?

For example, RSS and chronological feeds: no problem, fully shielded. But as soon as you promote certain content, you become a "publisher" and can be held liable?

Section 230 reads "No provider or user of an interactive computer service shall be treated as the publisher..." But, when Facebook and YouTube are trying to boost engagement by selecting the most rage-inducing content and forcing it into your feed, they do seem like publishers.

I'm sure the devil's in the details. Like, is a stickied forum post enough to make someone a publisher? But generally, this would draw a distinction between algorithmic platforms (big tech) and chronological platforms (traditional forums), as sketched below.
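To make that distinction concrete, here's a minimal sketch in Python (the names and the `engagement_score` field are hypothetical illustrations, not any platform's real code) contrasting a chronological feed with an engagement-ranked one. Under the narrower rule proposed above, only the second would count as an "editorial decision":

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    engagement_score: float  # likes + shares + rage-clicks, however the platform tallies them

def chronological_feed(posts):
    """Newest-first, no curation -- the RSS/traditional-forum model
    that would stay fully shielded."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def engagement_ranked_feed(posts):
    """The platform chooses what to boost based on predicted engagement --
    the kind of editorial decision that would make it a 'publisher'."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
```

The fuzzy cases (a stickied post, a spam filter) are one-off moderation choices rather than wholesale ranking, which is exactly where the line-drawing gets hard.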

@pluralistic If Meta and friends were to lose the ability to manipulate feeds and emotions without being seen as publishers, losing the explore tab seems like the tiniest possible price to pay.

But, yeah, the devil is in the details. Losing the option to block spam would be a problem.

If a law could be crafted by Tim Wu, Lina Khan, Elizabeth Warren, etc., I think it could improve 230. But this Congress would do the exact wrong thing, especially once big tech got its FAANGs into it.