Discussion
Loading...

Post

Log in
  • About
  • Code of conduct
  • Privacy
  • Users
  • Instances
  • About Bonfire
h o ʍ l e t t
h o ʍ l e t t
@homlett@mamot.fr  ·  activity timestamp 4 months ago

→ We Are Still Unable to Secure LLMs from #Malicious Inputs
https://www.schneier.com/blog/archives/2025/08/we-are-still-unable-to-secure-llms-from-malicious-inputs.html

“This kind of thing should make everybody stop and really think before deploying any AI agents. We simply don’t know to defend against these attacks. We have zero agentic AI systems that are secure against these attacks.”

“It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there.”

#AI#LLMs #stop #agents #secure #attacks #problem

  • Copy link
  • Flag this post
  • Block

bonfire.cafe

A space for Bonfire maintainers and contributors to communicate

bonfire.cafe: About · Code of conduct · Privacy · Users · Instances
Bonfire social · 1.0.1-alpha.44 no JS en
Automatic federation enabled
Log in
  • Explore
  • About
  • Members
  • Code of Conduct