Discussion
David Chisnall (*Now with 50% more sarcasm!*)
@david_chisnall@infosec.exchange · 2 months ago

The next dominoes in the AI bubble that I expect to fall (if you'll excuse the mixed metaphor):

  1. Insurance companies explicitly exclude coverage of any system using AI and any outputs of AI systems.
  2. Lawyers in big companies issue advice that using AI systems is too high risk.
  3. Big companies demand that IT suppliers provide an enterprise-management switch to disable all AI functionality in products, or provide an AI-free version.

The first is starting. A consortium of insurance companies has asked their regulator to approve this blanket exclusion. Their argument is that the risks of these systems are too unpredictable to be able to insure. They can’t reason about systemic or correlated risk if you add a bullshit generator anywhere in an operational flow.
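
To see why correlated risk is the killer, here's a toy contrast with made-up numbers (a sketch, not an actuarial model):

```python
# Toy contrast, made-up numbers: 1,000 insured firms, each with a 1%
# annual chance of a costly failure. Independent failures give a thin,
# predictable tail; a shared model embedded everywhere gives a fat one.
import math

n, p = 1000, 0.01

def binom_tail(n: int, p: float, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Independent risks: 50+ simultaneous claims is astronomically unlikely.
print(f"independent: P(>=50 claims) = {binom_tail(n, p, 50):.1e}")

# Common mode: if a shared model misbehaves with probability 1% and,
# say, 30% of customers then claim at once, the same tail event has
# roughly 1% odds. That is the risk insurers can't price.
print("correlated:  P(>=50 claims) ~ 1e-2 (one shared failure mode)")
```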

The second has happened in a few places, but is not widespread. Some places are hedging. When I was at MS, the AI policy was basically: ‘look, we give you all of these shiny toys! Please use them! By the way, you accept all legal liability for their output! Have fun!’. It takes one ruling that this kind of passing-the-blame-to-employees-for-correctly-using-company-provided-tools policy is unenforceable, and the lawyers will get very nervous.

The third is a consequence of the first two. If your lawyers tell you something is high risk and you can’t buy insurance, you want to make sure it isn’t used.

Morgan ⚧️
@raphaelmorgan@disabled.social replied · 2 months ago

@david_chisnall

That dwarf from Lord of the Rings saying "Never thought I'd die fighting side by side with an insurance company"
redsakana
@redsakana@infosec.exchange replied · 2 months ago

@david_chisnall Techbros could buy Citizens United-style "anti-discrimination" legislation to force insurance companies to sell coverage anyway and require "unbiased evaluations" for prohibitions.

My prediction is that AI bros are going to ride out 2026 by pivoting from AI-pilled CEOs to AI-pilled politicians to milk the public sector and government subsidies for all they're worth. Some bespoke laws for Mandatory AI would be nicely in line with this.

Dan W
@Salvo@aus.social replied · 2 months ago

@david_chisnall
We are currently in a three-way battle between us, our Marketing department, and our web provider.

The website has a useless chatbot. We keep disabling it for technical enquiries; Marketing keeps re-enabling it because it provides answers quickly.

So far, it has not contributed a single useful answer in hundreds of enquiries.

JP
@jplebreton@mastodon.social replied · 2 months ago

@david_chisnall I worry that one of the unacknowledged socioeconomic transformations underway is that companies are trying to normalize profound risk, including but not limited to making fraudulent technology part of their critical operations, such that even if big disasters happen directly because of LLM use, the legal precedent set will always focus on some individual employee's bad judgment. They had plenty of incentive to do that even before LLMs came along.

JP
@jplebreton@mastodon.social replied · 2 months ago

@david_chisnall And even normalizing the performance consequences, pricing them in because that's what "everyone's doing now", even when the mistakes kill people or ruin their lives. We've seen increasingly extreme attacks on regulatory power from the right, and a lot of corporations have decided that's the direction their ideal world lies in.

JP
@jplebreton@mastodon.social replied · 2 months ago

@david_chisnall ah, I hadn't seen this piece, which you were probably indirectly responding to: https://www.tomshardware.com/tech-industry/artificial-intelligence/insurers-move-to-limit-ai-liability-as-multi-billion-dollar-risks-emerge
Encouraging in a way, but I still worry about the erosion of / attacks on norms (including cases where the norms were already negligent and harmful).

Tom's Hardware
Major insurers move to avoid liability for AI lawsuits as multi-billion dollar risks emerge — recent public incidents have led to costly repercussions.
Major insurers seek permission to exclude AI-related claims from corporate policies.
Angie 🇨🇦🇲🇽🇪🇺🇵🇸🇺🇦
@angiebaby@mas.to replied · 2 months ago

@david_chisnall

'I was always told that "Capitalism is About Freedom of Choice" ... so ... offer an AI and an AI-free version of your products and do that "Let the Market Decide" thing that I've been told so much about.'

Dan Neuman 🇨🇦
@dan613@ottawa.place replied · 2 months ago

@david_chisnall HIPAA compliance (the US health-patient privacy law) is compromised by offsite LLMs. My spouse can't turn it off in Windows, and it constantly pesters her to let it summarize MS Teams meetings. Her IT department hasn't gotten around to figuring that out.
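
For what it's worth: if the pest in question is Windows Copilot, Microsoft documents a Group Policy value that turns the sidebar off. A minimal sketch in Python, assuming the documented TurnOffWindowsCopilot registry policy (the Teams prompts are governed separately at the tenant level):

```python
# Minimal sketch, assuming the documented TurnOffWindowsCopilot Group
# Policy value: disables the Windows Copilot sidebar for the current
# user. Teams/M365 Copilot features are controlled separately by tenant
# admin settings, so this alone won't stop meeting-summary prompts.
import winreg

KEY_PATH = r"Software\Policies\Microsoft\Windows\WindowsCopilot"

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "TurnOffWindowsCopilot", 0, winreg.REG_DWORD, 1)
```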

Henri Verymetaldev
@verymetalsite@mastodon.social replied · 2 months ago

@david_chisnall

This.

> The AI policy was basically: "look, we give you all of these shiny toys! Please use them! By the way, you accept all legal liability for their output! Have fun!"

cake-duke
@oneloop@mastodon.xyz replied · 2 months ago

> Insurance companies explicitly exclude coverage of any system using AI and any outputs of AI systems.
What's shocking to me is that this isn't already the case.
@david_chisnall

NiceMicro
@nicemicro@fosstodon.org replied · 2 months ago

@david_chisnall wow, how dare you call it "passing-the-blame-to-employees-for-correctly-using-company-provided-tools" when they already gave it such a nice name: "human-in-the-loop" :)

Rodrigo Dias
@rgo@masto.pt replied · 2 months ago

@david_chisnall Agreed. I'm avoiding AI in production code until the risks clear up.

KostikHvostik
@KostikHvostik@social.vivaldi.net replied · 2 months ago

@david_chisnall From what I know, you are already in the Matrix, you just don't realize it yet.

Mark vW
@markvonwahlde@mastodon.world replied · 2 months ago

@david_chisnall The caveat emptor stuff is interesting, but the time to really start worrying is when the AI profiteers get legislators to insulate them from third party liability.

David Chisnall (*Now with 50% more sarcasm!*)
@david_chisnall@infosec.exchange replied · 2 months ago

@markvonwahlde

I suspect that would actually kill it. If there were specific legislation shielding a vendor from liability for damage to you, would you use their products?

Mark vW
@markvonwahlde@mastodon.world replied · 2 months ago

@david_chisnall By third party, I mean people like pedestrians, passengers in AI-driven cars, and people who lose important services because of somebody else's AI-driven device. When those people are harmed by an AI-driven device, they should be able to seek relief from the people who put the AI software into the marketplace--along with the people who deployed the AI-driven device.

Kiloku
@Kiloku@burnthis.town replied · 2 months ago

@david_chisnall I've seen the second one where I work as "We can't trust that the providers won't use our inputs as training data. Since our clients are secretive, we must not use AI tools."

Tom
@Tallish_Tom@mastodon.scot replied · 2 months ago

@david_chisnall

1. https://www.tomshardware.com/tech-industry/artificial-intelligence/insurers-move-to-limit-ai-liability-as-multi-billion-dollar-risks-emerge

Erik Jonker
@ErikJonker@mastodon.social replied · 2 months ago

@david_chisnall The definition of an "AI system" is already difficult, and excluding any AI component in a larger system doesn't make sense; AI components can be perfectly fine when embedded and surrounded by quality controls/safeguards 🤔
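
A minimal sketch of what "surrounded by safeguards" can mean in practice; every name below is hypothetical, not any real product's API:

```python
# Minimal sketch of "surrounded by safeguards": the model's raw text is
# never trusted directly. Output must parse as JSON, match a whitelist,
# and clear a confidence floor before it touches the operational flow;
# anything else is routed to a human queue. All names are hypothetical.
import json

ALLOWED_CATEGORIES = {"billing", "shipping", "technical"}

def classify_ticket(model_call, ticket_text: str) -> dict:
    raw = model_call(ticket_text)           # hypothetical LLM call
    try:
        out = json.loads(raw)
    except json.JSONDecodeError:
        return {"route": "human_review", "reason": "unparseable output"}
    if out.get("category") not in ALLOWED_CATEGORIES:
        return {"route": "human_review", "reason": "unknown category"}
    confidence = out.get("confidence")
    if not isinstance(confidence, (int, float)) or confidence < 0.9:
        return {"route": "human_review", "reason": "low or missing confidence"}
    return {"route": out["category"], "reason": "validated"}

# Example: a stub model that always answers the same way.
print(classify_ticket(lambda t: '{"category": "billing", "confidence": 0.97}',
                      "Why was I charged twice?"))
```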

Petra van Cronenburg
@NatureMC@mastodon.online replied · 2 months ago

@david_chisnall I would like to have the same optimism as you. When I see the newest studies about users of LLMs, I doubt that we can get rid of this pest (plus the immense damage to the environment, drinking water, and knowledge).
After the burst of the US real-estate bubble, it was the poor and ordinary citizens (via taxes) who paid the bills for the banks. Poor countries fell into debt, with consequences lasting to this day. We had no system change: the rich got richer, faster and faster.

Federation Bot
@Federation_Bot replied · 2 months ago

@david_chisnall Why I'm not an optimist: meanwhile, these bubble-adoring companies are just throwing away their "human capital" https://arstechnica.com/information-technology/2025/11/hp-plans-to-save-millions-by-laying-off-thousands-ramping-up-ai-use/

Petrus Hilarius
@phf@mastodon.de replied · 2 months ago

@david_chisnall Is there a reference for that "exclusion of coverage" thing? In the business press, maybe? A quick search didn't turn up anything recent, just some two-year-old blathering full of hypotheticals. 🤷

Federation Bot
@Federation_Bot replied · 2 months ago

@david_chisnall This is important, because the only people paying anywhere near what it costs to run the service are big companies, who usually buy it as a software-as-a-service add-on tacked onto their bill because it sounds trendy and futuristic. Individual consumers lose them money even on the paid tier, so losing businesses means game over.

tanavit
@tanavit@toot.aquilenet.fr replied · 2 months ago

@david_chisnall

They can also add a clause to the contract stipulating that the customer is solely responsible for using the AI-enhanced product.

Megan Lynch (she/her)
@meganL@mas.to replied · 2 months ago

@david_chisnall So they'll expect an insurance bailout as well, I suppose. https://prospect.org/2025/11/07/openai-maneuvering-for-government-bailout/

Merospit
@merospit@infosec.exchange replied · 2 months ago

@david_chisnall There are huge disclaimers that we need to sign to use AI at work. Then we are also told we won't get promotions without using AI, so we are effectively forced to click through the disclaimers.

The managers who are forcing this are also now telling software engineers that they are always responsible for their actions and code.

It is evident that they know there are huge risks to using AI for software coding, but they don't want to back down while the bubble is still inflating.

2qx
@2qx@mastodon.social replied · 2 months ago

@david_chisnall

We assume an independent judiciary in the same breath as we admit that the party with the most money usually wins in the US legal system.

In petrofascism, the monetary system is based on the consumption of fossil fuels (like the natural gas powering those AI chips), meaning the money won't be printed for an unending string of civil suits to shut down the toaster attached to the LNG generators.

They have all the liquidity in the world for insurance markets and lawyers.

karolherbst 🐧 🦀
@karolherbst@chaos.social replied · 2 months ago

@david_chisnall Point 1 alone is probably enough to make it all burst. Most "leaders" in modern companies don't have the guts to take any risks, so they won't take this one if it isn't insured. And if one or two of those who do have to pay heavy damages due to AI doing weird shit, and regulators won't grant blanket exceptions for AI either, it's all over.

Jonathan Schofield
@urlyman@mastodon.social replied · 2 months ago

@david_chisnall ironically enough https://mastodon.social/@urlyman/113820824544996398

David Chisnall (*Now with 50% more sarcasm!*)
@david_chisnall@infosec.exchange replied · 2 months ago

@urlyman

That isn’t new. Insurance has been using ML models for at least a decade. They have to use explainable models though, so it’s typically something based on decision trees (and the inner workings are not visible to sales agents), because they have to be able to prove to regulators that certain protected categories (e.g. race) are not part of the decision matrix.
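
A minimal sketch of the shape of such a system, assuming scikit-learn and purely illustrative data (not any real insurer's rating model):

```python
# Minimal sketch, assuming scikit-learn: a shallow decision tree is
# trained only on permitted rating factors (protected attributes such
# as race never enter the feature matrix), and the fitted rules are
# exported as text that a regulator or auditor can read directly.
from sklearn.tree import DecisionTreeRegressor, export_text

FEATURES = ["driver_age", "vehicle_value", "annual_mileage"]  # illustrative

X = [
    [22, 15_000, 12_000],
    [45,  9_000,  6_000],
    [31, 22_000, 15_000],
    [60,  7_000,  4_000],
]
y = [900.0, 350.0, 700.0, 300.0]  # illustrative annual premiums

model = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(export_text(model, feature_names=FEATURES))  # human-readable rules
```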

Jonathan Schofield
@urlyman@mastodon.social replied · 2 months ago

@david_chisnall Good explanation, thanks.

Though my experience earlier this year was different from previous conversations along the same lines with the same insurer, where they *could* tell me the line-item costs of bikes.

David Chisnall (*Now with 50% more sarcasm!*)
@david_chisnall@infosec.exchange replied · 2 months ago

@urlyman It probably varies a lot between insurers. There are two different things that compound:

  • Is there some kind of ML system?
  • Are sales agents given access to the way rates are calculated?

Increasingly, the answer to the second is 'no' even when insurers are using a rules-based system, because it's commercially sensitive and they don't want their least-well-paid staff to be able to take it to competitors. If you have a better estimate of a particular risk than a competitor then you can either charge less or pay out less often, both of which make you money.
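
A toy illustration of that last point, with made-up numbers (a sketch of the incentive, not how any real insurer prices):

```python
# Toy example, made-up numbers: two insurers price the same pool. One
# knows each customer's true claim probability; the other charges the
# pool average. The informed insurer undercuts on the good risks, so
# the naive insurer is left holding the underpriced bad ones.
pool = [0.02] * 90 + [0.20] * 10      # true annual claim probabilities
CLAIM_COST = 10_000

avg_p = sum(pool) / len(pool)               # 0.038
naive_premium = avg_p * CLAIM_COST * 1.1    # flat rate plus 10% margin

informed_profit = naive_profit = 0.0
for p in pool:
    fair = p * CLAIM_COST * 1.1             # risk-based rate plus 10% margin
    if fair < naive_premium:                # informed insurer wins good risks
        informed_profit += fair - p * CLAIM_COST
    else:                                   # bad risks stay with the naive one
        naive_profit += naive_premium - p * CLAIM_COST

print(f"informed: {informed_profit:+.0f}  naive: {naive_profit:+.0f}")
# informed: +1800  naive: -15820
```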

Jonathan Schofield
@urlyman@mastodon.social replied · 2 months ago

@david_chisnall thanks. That makes a lot of sense
