Discussion
Loading...

Post

Log in
  • About
  • Code of conduct
  • Privacy
  • Users
  • Instances
  • About Bonfire
Mike Elgan
Mike Elgan
@MikeElgan@mastodon.social  ·  activity timestamp 1 hour ago

Fact of the moment: "It’s easier than assumed to hack and hijack a robot by placing the right words in its environment. The robot reads the words, interpreting them as commands, and obeys."
https://machinesociety.ai/p/how-to-ron-burgundy-a-robot

Sorry, no caption provided by author
Sorry, no caption provided by author
Sorry, no caption provided by author

How to “Ron Burgundy” a robot

Turns out robots will read whatever's on the prompter.
  • Copy link
  • Flag this post
  • Block
Ross Grady
Ross Grady
@rossgrady@dood.net replied  ·  activity timestamp 1 hour ago
@MikeElgan is there a link in your article to the source UCSC research paper? If so, I can’t seem to locate it so maybe it’s not rendering for me . . .
  • Copy link
  • Flag this comment
  • Block
Mike Elgan
Mike Elgan
@MikeElgan@mastodon.social replied  ·  activity timestamp 1 hour ago

@rossgrady Sorry about that. Here's a link to a press release, which contains a link to the paper: https://news.ucsc.edu/2026/01/misleading-text-can-hijack-ai-enabled-robots/

News

Misleading text in the physical world can hijack AI-enabled robots, cybersecurity study shows

New research anticipates hijacking against AI systems in order to create defenses for a more secure future.
  • Copy link
  • Flag this comment
  • Block

bonfire.cafe

A space for Bonfire maintainers and contributors to communicate

bonfire.cafe: About · Code of conduct · Privacy · Users · Instances
Bonfire social · 1.0.2-alpha.7 no JS en
Automatic federation enabled
Log in
  • Explore
  • About
  • Members
  • Code of Conduct