Have you been asked by a medical provider recently for consent to have an "AI" scribe record your visit? Us, too. And we have **thoughts**
https://buttondown.com/maiht3k/archive/why-you-should-refuse-to-let-your-doctor-record/
Yes, and of course said no.
But then I discovered they had used AI transcription when adding notes to my record after the meetings, as the notes were full of obvious errors. So I needed to lecture them again about my right to have it kept out of my medical record.
What makes this even worse is that they all know how badly it works, as the media frequently report complaints from the medical community about horrific errors, as well as the inefficiency this overhyped piece of crap creates.
@emilymbender no but yesterday, I did see a poster mentioning it at a local clinic
@emilymbender thankfully my therapist was like "yeah dude don't worry about it it's weird" but i still get an email alongside every 'upcoming appointment' email reminding me to sign the permission form
@emilymbender
One of my first jobs was providing tech support to doctors in a hospital setting. They were some of the most tech-illiterate folks I've ever encountered. They have no concept of operational security.
No doctor has ever asked me for permission to store any information about me in whatever systems they're using. For all I know they store it in plain text on an insecure S3 bucket.
@emilymbender no but in the agreement they ask us to sign periodically it said that they might use AI. So I said I wasn’t signing if they were going to. They asked the doc and she said no I don’t use AI transcription at all and I didn’t know that was in there!
@emilymbender
I have-- and refused!
@emilymbender Funnily enough, transcription can preserve privacy without issues. Whisper.cpp runs decently well on phones, and can be run on servers that process patient records under the same security constraints. It could easily be run locally, even.
The problem is that this is extremely hard to prove in the current "just slap a gear on it and call it steampunk" climate. I would definitely not trust a random provider.
And if they do "summarization", forget privacy.
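(For the curious: a minimal sketch of what such a fully local setup could look like, assuming whisper.cpp is built from source. The exact binary name and script paths vary by version, and `visit-audio.wav` is a hypothetical input file; treat this as an illustration, not a vetted clinical workflow.)

```shell
# Build whisper.cpp locally and fetch a model once; after that,
# transcription runs entirely on this machine -- no audio or text
# ever leaves the device.
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
make                                      # build the CLI
./models/download-ggml-model.sh base.en   # one-time model download

# Transcribe a (hypothetical) recording offline, writing a .txt file:
./main -m models/ggml-base.en.bin -f visit-audio.wav -otxt
```

Whether a given vendor actually does this, rather than shipping audio to a third-party cloud, is exactly the part that's hard for a patient to verify.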
@emilymbender psychiatry did it without informed consent. I am livid
@emilymbender for a doctor's perspective on the more profound side effects of "efficiency":
https://benngooch.substack.com/p/i-was-an-enthusiastic-early-adopter
“I felt myself becoming a passive observer in encounters where I had previously been an active architect. I felt my clinical memory, my narrative identity, and my sense of connection to my patients beginning to erode at the edges.”
@emilymbender Really helpful. Thanks for sharing.
I often reason about it this way. There are very few things like this where, if you opt out this time, you can’t opt in next time. On the other hand, there are LOTS of situations, this being a good example, where opting out after you opted in is substantially more effort (or even impossible).
Opting out by default is usually a safe thing to do. You can always opt in later if you change your mind.
@emilymbender
Evidence shows litigation decreases if doctors have scribes. A doctor isn't allowed to rely on memory in their defence. As the saying goes, "If it is not documented, it didn't happen," even if it did happen and recall can be verified.
The direct effect:
1: more litigation = more insurance cost for the Dr & thus higher consult fees.
2: Drs who have psychological & emotional injury from spurious claims reduce/stop practice.
So there is high motivation for having a scribe.
@emilymbender I opted out at my physical therapist last week. They told me all their patients who work in tech have opted out.
yes, i declined. when she asked why, i said because
1. the companies used sites with CSAM and other abuses
2. it’s spyware. each prompt acts like a honey-pot. since you are giving them the info, it bypasses HIPAA. in turn they get to use and sell that info however they please
3. as an antifascist activist, it puts my life in danger by giving companies run by fascists access to my whole medical history.
my MD was shocked. they had no idea about the spyware angle or CSAM
@emilymbender Yes, I was asked to sign a consent (stuck in with the other standard consents) authorizing the doctor’s practice to use an AI scribe. I left the room, went up to the front desk and told them I would not sign the consent under any circumstances. They looked a little surprised, but agreed to have one of the techs act as a scribe as normal. Glad I stood my ground - there is no way in Hell I would let a Doc use AI for anything medical related
@emilymbender I'm fortunate my GP doesn't even trust the national health database.
@emilymbender Usually I ask my doctor to turn that off.
@emilymbender
Completely agree! I have started having conversations with my patients about the fact that I will never use this technology and to warn them that some institutions/practices have decided they don’t even need patient consent. Aside from all of the privacy issues, it is literally a major part of my job to create detailed, accurate and nuanced notes.
@emilymbender I tried to subscribe to that newsletter and it said "this e-mail address can not be subscribed" Why does it not like my email address?
@emilymbender I do share a lot of AI skepticism, but from a physician's perspective (I use it in about 25-30% of visits), there are many highly speculative aspects of this take:
🧵 1/2
1) Point #1 is valid, however, the same data safety questions can be asked regarding other integrated systems. Like where is your EMR data stored, how does your radiology data integrate (reviewed in 3rd party software), etc.
2) Consent: valid concern, but the fullest version would be a long EULA-like text with a checkbox...
@P__X Your experience is your experience, but I am **appalled** at what you're saying about consent here. The fullest version would be too long, so we're not actually doing informed consent? No thank you.
@emilymbender "The fullest version would be too long, so we're not actually doing informed consent?"
No, that is not what is being said there. Unlike a blog post, I am restricted in space. I explicitly said that it is a valid concern. A basic research consent form is 8+ pages of legalese, and I'm afraid that the future solution will be to add it as a checkbox for 30 pages of text at check-in that nobody reads and that doesn't actually inform better. And again, my point #1.
@emilymbender Agreed. For any points that were valid, none of them necessitate the use of LLMs. Never mind without consent. Disgusting.
@emilymbender Thanks for the write-up! How do you feel about human scribes? I've been saying "yes to human scribes, no to AI scribes" for a while now, but your list makes me realize a lot of the concerns apply there, too.
@emilymbender My provider always asks and always says that the data "doesn't leave our server." Are there really transcription/summarizing AIs that are completely local?
@emilymbender I finally said "no" the last time I went in... and I found out the nurse also wasn't really fond of it either.
@emilymbender I've spotted medically significant errors in the transcription during the session. It's galling.
@emilymbender I have been asked multiple times by medical providers this year if I would consent to AI transcription, and I have said no every time. I'm tempted to ask next time if they have information from an independent audit of the system performance and privacy policy
@emilymbender YES, multiple times, and with wording and timing of the "request" that amounted to coercion. Very little information given in response to questions about what tool exactly was being used, where my data would be sent, their privacy policies etc. Just "this makes it easier for me to help you."
Also recently at the emergency vet hospital, and again in a context that made it extremely difficult to refuse, lest you be seen as a difficult or problem client that creates extra work for the poor vets.
So glad you are writing about this.
@emilymbender I've been asked multiple times, and I always chicken out when they ask and say yes. I think it's the power imbalance, and I don't think it's fair.
Refuse! Don't do the doctor's job of taking notes, let alone support their choice to have AI transcribe notes for them. The accuracy rate is poor, but incorrect notes would still be used to treat you.
Yes, and by a couple of specialists whom it has taken a lot of time/money to get in to see. I felt almost coerced into agreeing because I didn't want to risk them declining to see me, so I consented against my better judgement.
Do you think there are other ways we can fight this outside of directly declining consent in these situations? One of the providers I see who has been using one for ~2 years now is the only person I've been able to see locally who can prescribe a tightly controlled medication, so I'm quite reluctant to risk the (tenuous) relationship I have established (I already don't feel like I can be completely open and honest with her, and fear what may happen if I decline the use of the scribe).
@emilymbender Great article. The privacy of my protected health information is my immediate concern. There are many ways it can and does leak. It's not the provider's fault; it is in the whole chain of info systems backing them up. Using an LLM multiplies the risk. Training data is EXTREMELY valuable. There is no way the recording or transcript are getting deleted. It's getting sold. The incentives are all wrong to ensure HIPAA compliance.
@emilymbender Yes. I think this is becoming quite common. Medical groups are providing this service through enterprise-wide medical information systems. I was grateful to be asked. I’ll just add that doctors do not like lectures from patients.
@meltedcheese Nowhere am I suggesting that patients lecture their doctors, though?
This information can help patients make decisions for themselves about this. If consent is being requested (as it should be), then 'no' is a complete sentence.
@emilymbender Agreed. I’m sorry that I miscommunicated. => I am the one who “lectured” and only because AI is my area of deep expertise. If I can convince a doctor or two to at least ask the right questions and consult with other doctors before simply accepting the use of LLM technology, that’s a good thing. Patients should have the info, as you say, to make their own decisions.
@emilymbender THANK YOU for point 9. It drives me batty how poorly people (read: management) rate the importance of mental review, slower moments, and backburner thinking for improving or maintaining skills. Something something working out is 10% active work and 90% fueling/rest.
Not to mention that a rushed/stressed provider makes ME stressed and less likely to be candid or “be complicated”, which is not the point of a visit!
@emilymbender yesterday, and I said no.
thanks for putting out a newsletter talking about it
@emilymbender yes. I said no. That I was very much against it.
My doc did say she was going to use it, but I said no. She said it's easier for her. I said no. She no longer brings the thing in the room. My cardiologist, however, just put the device on the desk & I said no. He started to hem & haw, reached to turn it on, then said it wasn't working. I talked about lots of things, and how it felt like I had a basketball in my gut. The summary "he" wrote said that I play basketball for exercise. I'm 71 with health issues. He lied.
@emilymbender The booking system wouldn't let me proceed without providing consent. I'm allowed to opt-out by providing a written request; I've not (yet) done that.
@BoydStephenSmithJr That ... isn't really consent.
@emilymbender My biggest concern is the potential for psychiatric violence. Inaccurate medical notes produced by these systems could very easily be used as evidence of psychosis or some other kind of psychopathology, leading to forced medical treatment. Having already experienced some of that system, it really worries me. I don’t let medical providers use these systems with me.
@emilymbender there are signs at the doctor's office saying you can refuse, but when I did I got a lecture on how this helps, and they acted like I had no clue what I was talking about. I mentioned I worked in tech and it was dismissed. Since I am in an area with few doctors accepting new patients at the moment... how do I really refuse?
My therapist asked for permission, I declined, and after my session we got into a long conversation about why. At least they were curious about it.
@commonst @emilymbender Medical providers are quick to point fingers at patients for being tech-naïve. Medical providers, and the medical industry in general, are notoriously the worst at being informed about tech; worse than any industry short of lawyers. That's actually why HIPAA exists.
@randocity @emilymbender I am in Canada. No HIPAA, but we do tend to go where the US goes on a lot of things.