Bill
@Sempf@infosec.exchange  ·  4 weeks ago

Looks like you can poison an LLM with around 250 pieces of malicious content. Doesn't matter how big the LLM is. Take a minute to let that sink in. There's math there that's scary.

https://www.darkreading.com/application-security/only-250-documents-poison-any-ai-model

#llm #poison
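A rough back-of-the-envelope sketch of why a fixed document count is so unsettling (the per-document length and corpus sizes below are assumptions for illustration, not figures from the article): the absolute number of poisoned documents stays at 250 while the fraction of the training corpus they make up collapses as the corpus grows.

```python
# Back-of-the-envelope: a fixed number of poisoned documents becomes a
# vanishingly small share of the training data as the corpus scales up,
# yet (per the article) the poisoning still works.
POISONED_DOCS = 250
TOKENS_PER_DOC = 1_000  # assumed average document length

for corpus_tokens in (10**9, 10**11, 10**13):  # assumed 1B, 100B, 10T token corpora
    fraction = POISONED_DOCS * TOKENS_PER_DOC / corpus_tokens
    print(f"{corpus_tokens:>16,} token corpus -> {fraction:.8%} poisoned")
```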
