Discussion
Em :official_verified:
@Em0nM4stodon@infosec.exchange  ·  activity timestamp 3 days ago

There will never be an AI tool that
is truly private if it was trained on nonconsensual data.

Even if a platform were able to
create the perfect protections for its users' prompts and results,

if that platform is built from or utilizes an AI model that was trained on, or is updated and optimized with, data scraped from millions of people without their consent, then of course the platform isn't "privacy-respectful."

How could it be?

The company is saying:
"We respect the privacy of our users while they are using our platform, but outside of it, it's fair game."

Users thinking they are using a privacy-respectful platform are in fact saying:

"Privacy for me and not for thee,"

And are directly contributing to the platform needing to scrape even more nonconsensual data to improve.

Always ask: Where does the training data come from?

Without the assurance that a platform only uses AI models trained exclusively on ethically acquired data, it is not a privacy-respectful platform.

#Privacy #AI #Consent #HumanRights #NoAI

crazyeddie
@crazyeddie@mastodon.social replied  ·  activity timestamp yesterday

@Em0nM4stodon "Users thinking they are using a privacy-respectful platform are in fact saying:

"Privacy for me and not for thee,""

Which is pretty short-sighted, since they're probably not using that particular platform 24x7, and that makes them fair game for all the other time.

Pip
@pip@infosec.exchange replied  ·  activity timestamp yesterday

@Em0nM4stodon Ask not whether fashtech is private. Ask why anyone is using fashtech.

Mugita Sokio
@msokiovt@uwu.social replied  ·  activity timestamp 3 days ago

@Em0nM4stodon To play devil's advocate on the matter: anything that people willingly publish to the public will be scraped, and that's a given. No matter how much poisoning of the data is done to prevent it, it gets reverse-engineered and scraped regardless. I honestly don't mind if an AI model scrapes things I say in public, though private is a different story.

Watchful Citizen
@watchfulcitizen@goingdark.social replied  ·  activity timestamp 3 days ago

@Em0nM4stodon well said! An interesting question, though: what counts as ethical scraping? All public data? No scraping at all?

Respect robots.txt?

I fully agree with you. Another issue is the lack of transparency from those who train. It's largely unknown what data has been used or where it came from.

I'm not saying we shouldn't invest in AI. But the current form isn't ethical.

teefax
@awalter@mastodon.bawue.social replied  ·  activity timestamp 3 days ago

@Em0nM4stodon What if you run a large language model locally on your device?

Em :official_verified:
@Em0nM4stodon@infosec.exchange replied  ·  activity timestamp 3 days ago

@awalter Where does the data to train the LLM initially come from?
