Discussion
Loading...

Post

Log in
  • About
  • Code of conduct
  • Privacy
  • Users
  • Instances
  • About Bonfire
Fastly Devs
Fastly Devs
@fastlydevs@mastodon.social  ·  activity timestamp 3 days ago

Why do LLMs fall for prompt injection attacks that wouldn’t fool a fast-food worker?

In this piece, Fastly Distinguished Engineer Barath Raghavan and security expert Bruce Schneier explain how AI flattens context—and why that makes autonomous AI agents especially risky.

A sharp, practical take on AI security. 🍔🤖: https://spectrum.ieee.org/prompt-injection-attack

#AISecurity #PromptInjection #LLMs #Cybersecurity

Sorry, no caption provided by author
Sorry, no caption provided by author
Sorry, no caption provided by author
IEEE Spectrum

Why AI Keeps Falling for Prompt Injection Attacks

Why AI falls for scams that wouldn't trick a fast-food worker—and what that reveals about AI security.
⁂
More from
IEEE Spectrum
  • Copy link
  • Flag this post
  • Block

bonfire.cafe

A space for Bonfire maintainers and contributors to communicate

bonfire.cafe: About · Code of conduct · Privacy · Users · Instances
Bonfire social · 1.0.2-alpha.7 no JS en
Automatic federation enabled
Log in
  • Explore
  • About
  • Members
  • Code of Conduct