h o ʍ l e t t
@homlett@mamot.fr · 3 weeks ago

→ We Are Still Unable to Secure LLMs from #Malicious Inputs
https://www.schneier.com/blog/archives/2025/08/we-are-still-unable-to-secure-llms-from-malicious-inputs.html

“This kind of thing should make everybody stop and really think before deploying any AI agents. We simply don’t know how to defend against these attacks. We have zero agentic AI systems that are secure against these attacks.”

“It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there.”

#AI #LLMs #stop #agents #secure #attacks #problem

