Discussion
Jon
@jdp23@neuromatch.social · 8 hours ago

RE: https://infosec.exchange/@thenexusofprivacy/116045129087836176

If you've noticed I've been quieter than usual here ... it's because the Washington state legislative session has been all-consuming ever since it started in early January. This article looks at the current state of play on the 10+ bills I've testified on -- and since I approach legislative activism as a full-contact sport, I've also been organizing on all of them.

Results so far are mixed. The good news is that we've killed two odious age verification bills -- one based on Texas' horrible law, the other based on the even worse Mississippi law that would have required age verification on Mastodon and every other social network, forum, and chat room. On the other hand, the bill prohibiting surveillance pricing is also dead, which is a huge disappointment. The bill regulating AI use in therapy also didn't make it out of its initial policy committee. Big tech's lobbyists have a lot of influence here ...

#waleg

Jon
@jdp23@neuromatch.social replied · 8 hours ago

Over the second half of the session, I'll probably be focusing primarily on four bills (although I'm also supporting several other AI bills whose prospects are unclear):

  • SB 6002 (the Driver Privacy Act), regulating Flock and other ALPRs. The version the Senate passed is too weak to offer much protection; we'll be pushing to strengthen it in the House. Here's a good article on the situation by Danni Askini of Gender Justice League and Jaelynn Scott of Lavender Rights Project. https://www.thestranger.com/guest-editorial/2026/02/05/80458651/we-need-to-regulate-automated-license-plate-readers-now

  • HB 1834 / SB 5708, two more "child safety" bills with age verification requirements, this time focused on "addictive feeds". Attorney General Brown is pushing these bills hard, with some great Instagram videos, and Democratic leadership supports them as well. Then again, LGBTQ+ communities are fired up after stomping the other age verification bills, ACLU of Washington thinks it's unconstitutional, big tech hates it, and Conservative Ladies of Washington also like it. It's possible they'll amend the bill to remove the age verification requirement, which would still leave other problems. We shall see.

  • HB 2225 / SB 5984 are "Governor's Request" bills to regulate AI Companion Chatbots. Big tech and their allies hate these bills, of course, making ludicrous claims like "the requirement that chatbots tell people they're not human is unconstitutional compelled speech." Do they really live in a state where the government doesn't mandate warning labels on cigarettes or gasoline pumps? That said, I don't support the current versions either. In the policy hearings I testified OTHER (neither pro nor con), highlighting that the bill as written makes it legal for ChatGPT et al to use manipulative engagement techniques to exploit seniors and other adults, and also has a potential major loophole that would make it easy for unscrupulous chatbot operators to escape regulation. These are "Governor's Request" bills with a lot of momentum behind them, so once again we shall see.

The Stranger

We Need to Regulate Automated License Plate Readers Now

Glaring down at us from light posts, traffic lights, and even convenience stores, is a privately-built, publicly paid-for surveillance network of license plate readers. Whenever a vehicle passes by, cameras not only capture and store data about our license plates but information about our vehicles, location, images of passengers, and our movements. Right now, your data can be freely shared between local, county, state law enforcement, unregulated third party vendors, and through public requests.
Jon
@jdp23@neuromatch.social replied · 7 hours ago

BTW @onepict since we're so often talking about consent here on fedi, I should mention that I also discussed it in my testimony on HB 2599, regulating AI in therapy. The bill quite rightly prohibits some uses of "AI" by therapists (like emotion detection and making independent therapeutic decisions). It allows other uses, but requires disclosure and consent. That's good, but the wording wasn't strong enough -- I briefly mentioned it in my live testimony, where I only had 2 minutes, and went into more detail in my written testimony. Here's an excerpt.

@emilymbender gave me some great feedback on how to improve the disclosure, and the sponsor appreciated the feedback ... the bill isn't going anywhere this session but we'll hopefully have discussions in the interim and give it another try next year.

Section 2(2)’s requirements for disclosure and consent for other uses of "AI" are extremely important....

As I discussed in my oral testimony, patients need to know if their therapist's use of cloud-based "AI" transcription tools could lead to details of their daily lives being shared with ICE or CBP (or law enforcement in hostile states) because data has been sent to a state where the protections of Keep Washington Working and the Shield Law don't apply. If they are not informed of that, their consent is not meaningful....

For consent to be meaningful, it needs to be informed, so the disclosure should clearly describe the uses -- and the risks.
Esther Payne :bisexual_flag:
@onepict@chaos.social replied · 7 hours ago

@emilymbender @jdp23 I'm glad you were giving testimony. Especially with the risks of the data going out of state.

I do worry about the potential for getting private data out of these systems, or worse, as you point out, ICE using that information.

I'd been doing some reading for my digital poppets talk, although in the end I didn't refer to data being extracted from AI systems.

https://www.onepict.com/digitalpoppet.html

Jon
@jdp23@neuromatch.social replied · 5 hours ago

Excellent talk! And yeah, totally agree -- all this mass data gathering ties together. Loyalty cards relate to the surveillance pricing bill that was being considered here ... Maya Morales of WA People's Privacy pointed out in her testimony that law enforcement can access all this information too.

And data brokers are relevant to surveillance pricing and age verification -- in fact, some age verification companies are actually owned by data brokers. The bills we killed (like the Texas bill that the Supreme Court found constitutional) restricted age verification companies from using the age verification data for other purposes, but didn't restrict them from transferring or even selling the data. The surviving age verification bill has somewhat stronger restrictions, but even so, it's not clear how effective they'll be in practice. Big tech companies routinely break the law, string out enforcement as long as possible, and pay the fines as a cost of doing business. The only real solution is not to collect the data in the first place.

@onepict @emilymbender

Esther Payne :bisexual_flag:
@onepict@chaos.social replied · 5 hours ago

@emilymbender @jdp23 Thank you for taking the time to read -- interesting that that's being covered as well. I didn't realise data brokers owned some verification companies, but then why wouldn't you, if your business is collecting and collating personal data?

Jon
@jdp23@neuromatch.social replied · 5 hours ago

I didn't know that either until people testified about it on one of the age verification bills. But sure enough, here are links for Equifax and Experian.

@onepict

Age Verification Service | Age Verification Software UK | Experian Business

Our age verification service helps you develop confident relationships with your genuine customers. Operate responsibly and offer your customers a streamlined experience. Find out more.

Age Verification | Business | Equifax

Equifax Age Verification checks help you confirm the age and identity of customers swiftly and with confidence.
Jon
@jdp23@neuromatch.social replied · 7 hours ago

And @emilymbender I made sure to get the 🦜 in there in the reference!

The need for regulating "AI" use by licensed professionals is clear. The Digital Futures in Mind report, from 2022, is an excellent look at the expanding use of algorithmic and data-driven technologies in the mental health context. Since then, uses have expanded significantly. And many people are unaware of the risks. Anecdotally, when I have told health care professionals about the privacy risks of the "AI"-based systems they use, they are often shocked. While I personally am fortunate that I have always been asked for consent, others tell me that is not always the case.

A complication here is that the term "AI" is used in many different ways to refer to many different technologies. Indeed, researchers such as UW Professor Emily Bender (co-author of the book The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want and the famed On the Dangers of Stochastic Parrots 🦜 paper) caution against the use of the term. The bill as written contains two definitions of "artificial intelligence", in Sections 2(4) and 3(1). For brevity, in these comments I'll simply use the term "AI."
Marcel Waldvogel
@marcel@waldvogel.family replied · 7 hours ago

@jdp23 @emilymbender
Now what is a good emoji representing "stochastic"?

I would prefer 🍵 (tea) to 🎁 (surprise), but I doubt anyone would get the relationship at first glance.

Esther Payne :bisexual_flag:
@onepict@chaos.social replied · 6 hours ago

@jdp23 @emilymbender @marcel I really like @davidrevoy's AI parrot series. It's awesome.

