Michał "rysiek" Woźniak · 🇺🇦 boosted
whoever loves Digit 🇵🇸🇺🇸🏴‍☠️
@iloveDigit@piefed.social  ·  activity timestamp yesterday
⁂ Article

Weak "AI filters" are dark pattern design & "web of trust" is the real solution

The worst examples are when bots can get through the “ban” just by paying a monthly fee.

So-called “AI filters”

An increasing number of websites lately are claiming to ban AI-generated content. This is a lie deeply tied to other lies.

Building on a well-known lie: that they can tell what is and isn’t generated by a chat bot, when every “detector tool” has been proven unreliable, and sometimes even humans can only guess.

Helping slip a bigger lie past you: that today’s “AI algorithms” are “more AI” than the algorithms a few years ago. The lie that machine learning has just changed at the fundamental level, that suddenly it can truly understand. The lie that this is the cusp of AGI - Artificial General Intelligence.

Supporting future lying opportunities:

  • To pretend a person is a bot, because the authorities don’t like the person
  • To pretend a bot is a person, because the authorities like the bot (or it pays the monthly fee)
  • To pretend bots have become “intelligent” enough to outsmart everyone and break “AI filters” (yet another reframing of gullible people being tricked by liars with a shiny object)
  • Perhaps later - when bots are truly smart enough to reliably outsmart these filters - to pretend it’s nothing new, it was the bots doing it the whole time, don’t look behind the curtain at the humans who helped
  • And perhaps - with luck - to suggest you should give up on the internet, give up on organizing for a better future, give up on artistry, just give up on everything, because we have no options that work anymore

The solution: Web of Trust

You want to show up in “verified human” feeds, but you don’t know anyone in real life that uses a web of trust app, so nobody in the network has verified you’re a human.

You ask any verified human to meet up with you for lunch. After confirming you exist, they give your account the “verified human” tag too.

They will now see your posts in their “tagged human by me” feed.

Their followers will see your posts in the “tagged human by me and others I follow” feed.

And their followers will see your posts in the “tagged human by me, others I follow, and others they follow” feed…

And so on.
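The expanding feeds described above amount to a breadth-first walk over the graph of "verified human" tags, cut off at a chosen number of degrees of separation. Here is a minimal sketch of that idea in Python; it is not any particular app's implementation, and all names (`humans_within`, `tag_graph`) are hypothetical:

```python
from collections import deque

def humans_within(tag_graph, me, max_degrees):
    """Return accounts reachable from `me` through 'verified human' tags.

    tag_graph maps each account to the set of accounts it has tagged as
    human. Degree 1 is the 'tagged human by me' feed; degree 2 adds tags
    made by those people; and so on, breadth-first.
    """
    seen = {me}
    frontier = deque([(me, 0)])
    verified = set()
    while frontier:
        account, depth = frontier.popleft()
        if depth == max_degrees:
            continue  # don't expand past the chosen degree of separation
        for tagged in tag_graph.get(account, set()):
            if tagged not in seen:
                seen.add(tagged)
                verified.add(tagged)
                frontier.append((tagged, depth + 1))
    return verified

# A toy chain: me -> alice -> bob -> carol
graph = {
    "me": {"alice"},
    "alice": {"bob"},
    "bob": {"carol"},
}
first = humans_within(graph, "me", 1)   # {'alice'}
second = humans_within(graph, "me", 2)  # {'alice', 'bob'}
```

With the six-degrees figure mentioned below, a cutoff of six would in principle reach almost everyone who participates in the web of trust at all.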

I’ve heard everyone on Earth is generally at most six degrees of separation from everyone else, so this could be a more robust solution than you’d think.

The tag should have a timestamp on it. You’d want to renew it, because the older it gets, the less people trust it.
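One simple way to model that fading trust is an exponential half-life on the tag's age, where renewing the tag resets it to full weight. This is a sketch of the idea only; the 90-day half-life and the function name are arbitrary illustrations, not something the article specifies:

```python
import time

def tag_weight(tagged_at, now=None, half_life_days=90):
    """Trust weight of a 'verified human' tag.

    The weight starts at 1.0 when the tag is fresh and halves every
    half_life_days; renewing the tag resets tagged_at and thus the weight.
    Timestamps are Unix seconds.
    """
    now = now if now is not None else time.time()
    age_days = max(0.0, (now - tagged_at) / 86400)
    return 0.5 ** (age_days / half_life_days)

now = time.time()
fresh = tag_weight(now, now=now)             # 1.0
aging = tag_weight(now - 90 * 86400, now=now)  # ~0.5 after one half-life
```

A feed could then rank or threshold posts by the product of tag weights along the trust path, so stale chains count for less than fresh ones.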

This doesn’t hit the same goalposts, of course.

If your goal is to avoid thinking, and just be told lies that sound good to you, this isn’t as good as a weak “AI filter.”

If your goal is to scroll through a feed where none of the creators used any software “smarter” than you’d want, this isn’t as good as an imaginary strong “AI filter” that doesn’t exist.

But if your goal is to survive, while others are trying to drive the planet to extinction…

If your goal is to be able to tell the truth and not be drowned out by liars…

If your goal is to be able to hold the liars accountable, when they do drown out honest statements…

If your goal is to have at least some vague sense of “public opinion” in online discussion, that actually reflects what humans believe, not bots…

Then a “human tag” web of trust is a lot better than nothing.

It won’t stop someone from copying and pasting what ChatGPT says, but it should make it harder for them to copy and paste 10 answers across 10 fake faces.

Speaking of fake faces - even though you could use this system for ID verification, you might never need to. People can choose to be anonymous, using stuff like anime profile pictures, only showing their real face to the person who verifies them, never revealing their name or other details. But anonymous accounts will naturally be treated differently from recognizable individuals in political discussions, which makes it harder for them to game the system.

To flood a discussion with lies, racist statements, etc., the people flooding the discussion should have to take some accountability for those lies, racist statements, etc. At least if they want to show up on people’s screens and be taken seriously.

A different dark pattern design

You could say the human-tagging web of trust system is “dark pattern design” too.

This design takes advantage of human behavioral patterns, but in a completely different way.

When pathological liars encounter this system, they naturally face certain temptations. Creating cascading webs of false “human tags” to confuse people and waste time. Meanwhile, accusing others of doing it - wasting even more time.

And a more important temptation: echo chambering with others who use these lies the same way. Saying “ah, this person always accuses communists of using false human tags, because we know only bots are communists. I will trust this person.”

They can cluster together in a group, filtering everyone else out, calling them bots.

And, if they can’t resist these temptations, it will make them just as easy to filter out, for everyone else. Because at the end of the day, these chat bots aren’t late-gen Synths from Fallout. Take away the screen, put us face to face, and it’s very easy to discern a human from a machine. These liars get nothing to hide behind.

So you see, like strong is the opposite of weak [citation needed], the strong filter’s “dark pattern design” is quite different from the weak filter’s. Instead of preying on honesty, it preys on the predatory.

Perhaps, someday, systems like this could even change social pressures and incentives to make more people learn to be honest.

Hacker News
@h4ckernews@mastodon.social  ·  activity timestamp 3 weeks ago

tc-ematch(8) extended matches for use with "basic", "cgroup" or "flow" filters

https://man7.org/linux/man-pages/man8/tc-ematch.8.html

#HackerNews #tc-ematch #extended #matches #basic #cgroup #flow #filters #Linux


Camelia :tranarchy_a_nonbinary: 🇵🇸
@camelia@fedi.camelia.dev  ·  activity timestamp last month

When you're on the #fediverse, please avoid censoring specific words in your toots/posts (e.g. with numbers or asterisks instead of letters).

Why? Because many of us around here have set up #filters against specific words. We do not want to read posts about these topics, because they trigger us and cause us great discomfort or distress.

If you censor words, what you're actually doing is circumventing our filters, and the result is that we'll see posts that trigger us.

Same goes for pictures. If you're posting screenshots of other people's posts, please ensure you have a short description of what the picture is about (and a content warning when needed). Otherwise, not only is your post inaccessible for people with screen readers (or people who choose to hide all media by default), it will also escape our filters.

#accessibility #anxiety

Em :official_verified: boosted
Em :official_verified:
@Em0nM4stodon@infosec.exchange  ·  activity timestamp 11 months ago

Tiny Mastodon Tip to Hide Topics 🫣

If you get tired of seeing some topics in your Mastodon feeds, know that you can use "Filters" to help you save your spoons.

HOW TO ❓

From the desktop web interface:

1. Go to "Preferences" on the right-side menu.

2. Click on "Filters" on the left-side menu.

3. At the top-right, click on the "Add new filter" button.

4. Give a name to your filter in the "Title" field at the top, for example "US Politics". This Title will be displayed instead of the post and you will have the option to click on the Title to reveal a filtered post in your feeds 👀

5. Select the options you prefer. I recommend selecting "Hide with a warning" at first. That way, you will see if your Filter works and get the option described above to see it ⚠️

6. In "KEYWORDS", add the hashtags, words, names, or expressions you wish to hide for this topic. Select "Whole word" on the right.

7. Click on "Add keyword" at the bottom to add as many hashtags, names, or words as you wish for the same topic.

8. Click on "Save new filter".

9. Magic! 🥄✨
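Under the hood, the "Whole word" option in step 6 amounts to a word-boundary match, so that filtering "war" hides "war" but not "warmth". Here is a rough sketch of that behavior in Python; this is an illustration of the matching idea, not Mastodon's actual implementation, and `matches_filter` is a hypothetical name:

```python
import re

def matches_filter(post, keywords, whole_word=True):
    """Return True if any filter keyword appears in the post text.

    With whole_word=True the keyword must stand alone between word
    boundaries; otherwise any substring occurrence counts.
    """
    for kw in keywords:
        pattern = re.escape(kw)  # treat the keyword literally, not as regex
        if whole_word:
            pattern = r"\b" + pattern + r"\b"
        if re.search(pattern, post, re.IGNORECASE):
            return True
    return False

matches_filter("Thoughts on the election", ["election"])  # True
matches_filter("electioneering tactics", ["election"])    # False (whole word)
```

A matching post would then be hidden, or replaced by the filter's Title with a click-to-reveal, depending on the option chosen in step 5.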

Bonus Tip! To help everyone's filters work, on Mastodon it's good to use uncensored names and hashtags in your posts. That way, people will be able to filter them out if they wish 💚

#TinyMastodonTip #Mastodon #Filters

Fox Trenton 🎱
@sintrenton@todon.nl  ·  activity timestamp 2 months ago

Photoshop 7 had a great filter for adding a grainy surface and dust to an image.
Does anyone know how to do this in GIMP? I have been trying all kinds of filters.

#gimp #filters

Craig Grannell boosted
Retrospecs
@retrospecs@mastodon.social  ·  activity timestamp 3 months ago

You can now unlock the full version of Retrospecs for £1.99/€1.99/$1.99

In addition to the full suite of system and mode presets, the full version contains custom emulation, font and palette editors as well as additional features such as the emulation mutator.

Find out more at https://8bitartwork.co.uk

#ios #pixelart #8bitart #pixelate #retro #filters

An image of Peter Finch as the newscaster Howard Beale in the 1976 film Network. It depicts a man in his late middle age standing in front of multiple clocks with his hands in the air, shouting in frustration.
Aral Balkan
@aral@mastodon.ar.al  ·  activity timestamp 3 months ago

Algorithmed (v): to be conditioned into thinking a certain way by algorithms that filter your reality in line with the goals of those who author them.

e.g., “They were algorithmed into thinking that way about trans people.”

#algorithmed #algorithms #truth #reality #tech #filters #BigTech #SiliconValley #technoFascism


bonfire.cafe

A space for Bonfire maintainers and contributors to communicate
