Discussion
Em :official_verified: boosted
Miguel Afonso Caetano
@remixtures@tldr.nettime.org  ·  activity timestamp 2 weeks ago

"Everyone sharing his or her data to train A.I. is great if we agree with the goals that were given to the A.I. It’s not so great if we don’t agree with these goals; and if the algorithm’s decisions might cost us our jobs, happiness, liberty or even lives.

To safeguard ourselves from collective harm, we need to build institutions and pass laws that give people affected by A.I. algorithms a voice over how those algorithms are designed, and what they aim to achieve. The first step is transparency. Similar to corporate financial reporting requirements, companies and agencies that use A.I. should be required to disclose their objectives and what their algorithms are trying to maximize — whether that’s ad clicks on social media, hiring workers who won’t join unions or total deportation counts.

The second step is participation. The people whose data are used to train the algorithms — and whose lives are shaped by them — should help decide their goals. Like a jury of peers who hear a civil or criminal case and render a verdict together, we might create citizens’ assemblies where a representative randomly chosen set of people deliberates and decides on appropriate goals for algorithms. That could mean workers at a firm deliberating about the use of A.I. at their workplace, or a civic assembly that reviews the objectives of predictive policing tools before government agencies deploy them. These are the kinds of democratic checks that could align A.I. with the public good, not just private power.

The future of A.I. will not be decided by smarter algorithms or faster chips. It will depend on who controls the data — and whose values and interests guide the machines. If we want A.I. that serves the public, the public must decide what it serves."

https://www.nytimes.com/2025/11/02/opinion/ai-privacy.html?unlocked_article_code=1.yU8.8BEa.DltbW_WwVhxN&smid=nytcore-android-share

#AI #Algorithms #Privacy #DifferentialPrivacy #AITraining

https://www.nytimes.com

Opinion | How A.I. Can Use Your Personal Data to Hurt Your Neighbor

Lauren Weinstein
@lauren@mastodon.laurenweinstein.org  ·  activity timestamp 4 months ago

WARNING: #GOOGLE IS TRYING TO TRICK YOU INTO USING GEMINI AI AND FEEDING GEMINI YOUR DATA IN GMAIL AND OTHER APPS!

What Google is now doing should be ILLEGAL. PERIOD. For the first time I can recall in my history of using Gmail, it just now popped a modal dialogue box -- DEMANDING that I choose whether or not I wanted "Smart Features" turned on -- which, when you read the verbiage, mostly means goddamned Gemini AI. AND if you enable this, you're giving Google permission to use your data to "improve" this horrifically invasive, inept, and misinformation-spewing tech that steals data from websites for its own use without the permission of those sites. DON'T LET IT SUCK IN YOUR EMAIL AS WELL!

There was no way I could find to exit the modal window without choosing YES or NO, which means my existing selection to NOT use Gmail Smart Features (long my preference) was NOT being honored. After saying NO to this disgusting query by Google, I was pushed to ANOTHER page where I was forced to choose again about "smart features" in "other" Google apps. I chose NO again and finally was permitted to escape this trap.

Note that while you can fairly easily check to make sure "smart features" are turned off in Gmail settings, I offhand don't have a clue as to how to find the similar settings in other Google apps that may have been affected by this absolutely disrespectful forced dialogue, as Google keeps trying to ram Gemini AI down our throats.

ENOUGH IS ENOUGH! Google has become a DISGRACE.

#Google #Gmail #Gemini #AI

Petra van Cronenburg
@NatureMC@mastodon.online replied  ·  activity timestamp 4 months ago
@lauren I'm not that familiar with legal stuff, but I ask myself whether we could get Google fined for infringing the DSA or the GDPR in the EU? https://www.disinfo.eu/wp-content/uploads/2022/11/20221020_DSAUserGuide_Final.pdf Through complaints we should be able to get the @EUCommission to open an investigation: https://digital-strategy.ec.europa.eu/en/policies/dsa-enforcement

Or am I too optimistic? 🤔

#Google #gemini #GoogleMail #DSA #GDPR #AItraining #privacy

Tim Chambers boosted
Ars Technica News
@arstechnica@c.im  ·  activity timestamp 5 months ago

Book authors made the wrong arguments in Meta AI training case, judge says https://arstechni.ca/8nfR #copyrightinfringement #AItraining #torrenting #copyright #leeching #Policy #LLaMA #meta #AI

Stefan Bohacek
@stefan@stefanbohacek.online  ·  activity timestamp 6 months ago

What on earth, SoundCloud??

"In the absence of a separate agreement that states otherwise, You explicitly agree that your Content may be used to inform, train, develop or serve as input to artificial intelligence or machine intelligence technologies or services as part of and for providing the services."

https://soundcloud.com/terms-of-use

via @sarahdal https://crispsandwi.ch/@sarahdal/114477755873097160

#soundcloud #AI #copyright #musicians #AITraining
