Parastoo Abtahi
@parastoo@hci.social · 2 weeks ago

A fundamental challenge in human-robot interaction is that capabilities and limitations are often opaque to users.

In our upcoming #HRI2026 paper, X-OOHRI, led by the brilliant @laurenwang, we use AR to make robot capabilities and limits visible during object-oriented interactions 🤖

1/4

[Video] Split-screen video showing a person wearing an AR headset pointing at books on a shelf. With the controller, she moves a virtual copy of a book to the top shelf. The book turns red with the text “too high to reach.” The user moves the virtual copy down to a lower shelf, and the robot then moves the physical book.

X-OOHRI is a mixed-initiative UI!

Users manipulate life-size, colocated virtual twins of objects in AR to issue precise robot instructions 🪄

When problems arise, robots simulate resolutions or suggest alternatives, and users can help through virtual interactions or physical actions ✨

2/4

[Video] The user draws a circle on the chair to lasso-select multiple foam blocks. She drags their virtual copies into a basket under a table. They turn red with a label saying “Target placement is occluded.” She selects the “auto” resolution option, and a simulation shows a virtual copy of the basket moving forward before the foam blocks move. The robot then performs the physical action of moving the basket and placing the physical blocks inside it.

Beyond pick-and-place, X-OOHRI exposes abstract robot actions via a radial menu after selecting a real-world object. Users then manipulate virtual twins to specify missing spatial parameters.

This can also support remote teleoperation 🎮

3/4

[Video] The user selects the blinds, and a radial menu appears with three actions: draw, brighten, and dim. They select brighten, but the tilt cord is out of reach, so they choose draw instead. They move the virtual copy of the blinds to open them halfway. The robot then moves the pull cord to open the physical blinds to that position.

Check out “Explainable OOHRI: Communicating Robot Capabilities and Limitations as AR Affordances” at #HRI2026 for more details!
🔗 Project page: xoohri.github.io
📄 Paper: arxiv.org/abs/2601.14587
Led by Lauren Wang, in collaboration with Mo Kari @hci and @princetoncs.
#HCI #AR #Robotics #HRI
4/4

