Discussion
Parastoo Abtahi
@parastoo@hci.social · last month

📣 I’m recruiting 1–2 #HCI PhD students interested in spatial computing, #AR, and #HRI. More information: https://parastooabtahi.com/applicants

If you’re interested, apply to Princeton CS by Dec 15 and mention my name in your application.

Prism-shaped Princeton HCI logo animated to fade in and out on an orange background.
Princeton HCI Applicants — Parastoo Abtahi

sohyeon hwang boosted
Lei Zhang
@raynez@hci.social · 4 months ago

Most AI-driven AR for children casts them as consumers, not creators. That’s like being able to read but not write. How can we help them write with #AI and #AR? 🪄

In our #UIST2025 paper, we introduce Capybara, an authoring tool that empowers kids to build expressive, AI-enabled AR experiences.

🧵1/6

Screen recording of Capybara, an AR authoring tool for kids. A child drags and drops colorful code blocks in AR to program interactions. A 3D character appears, customized with accessories generated from voice commands. The system automatically rigs the character, which then mirrors the child’s body movements using body tracking. Scenes show the character typing on a real keyboard and performing daily activities like workouts, blending virtual and physical worlds.
Parastoo Abtahi
@parastoo@hci.social · 5 months ago

Even virtual agents’ actions can have physical effects, with motion paths that divert attention from the hidden robot. 🐝

3/5

A bee-like virtual character flies toward a Pringles can on a table to the right of the scene. As it arrives, the real can is visually masked and a virtual duplicate appears. The bee carries the virtual can along a curving path toward the left. Near the end, the bee heads straight to a drop point; at the same moment, the physical can arrives there (moved by a hidden mobile robot). The virtual and physical coincide, the bee exits, and the can seems to materialize.
Parastoo Abtahi
@parastoo@hci.social replied · 5 months ago

In #AR, using real-time on-device 3D Gaussian splatting, we create the illusion that physical changes occur instantaneously, while a hidden robot fulfills the “reality promise” moments later, updating the physical world to match what users already perceive visually. 🤖

4/5

A user wearing a headset looks at a mobile robot in a room. From the user’s eyes, a viewing cone spans the robot from top to bottom. Point-based “splats” inside the cone are colored red, showing what is visible; those outside are gray and are discarded. As the headset or the robot moves, the cone adapts to always cover the robot. A second cone covers the robot’s arm, growing and shrinking as the arm extends and retracts. Feathering blends the cone with the background, acting like an invisibility cloak.
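
A minimal sketch of the per-splat cone test described above, assuming NumPy arrays of splat centers and robot sample points; the function name and the feather parameter are illustrative, not the paper’s actual code:

# Hypothetical sketch, not the paper's implementation: each frame, build a
# viewing cone from the eye that covers the robot, then compute a per-splat
# visibility weight with a feathered edge so masked splats blend in.
import numpy as np

def cone_mask_weights(eye, splat_centers, robot_points, feather=0.1):
    """Weight in [0, 1] per splat: 0 = masked (inside cone), 1 = visible."""
    # Cone axis: from the eye toward the robot's centroid.
    axis = robot_points.mean(axis=0) - eye
    axis /= np.linalg.norm(axis)

    # Half-angle: widest angle needed to cover every robot point; because it
    # is recomputed each frame, the cone adapts as the headset or robot moves.
    to_robot = robot_points - eye
    to_robot /= np.linalg.norm(to_robot, axis=1, keepdims=True)
    half_angle = np.arccos(np.clip(to_robot @ axis, -1.0, 1.0)).max()

    # Angle of each splat center relative to the cone axis.
    to_splat = splat_centers - eye
    to_splat /= np.linalg.norm(to_splat, axis=1, keepdims=True)
    angle = np.arccos(np.clip(to_splat @ axis, -1.0, 1.0))

    # Smoothstep from 0 (inside the cone) to 1 over `feather` radians,
    # giving the feathered "invisibility cloak" edge.
    t = np.clip((angle - half_angle) / feather, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)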
Parastoo Abtahi
@parastoo@hci.social · 5 months ago

Check out Lauren Wang’s #UIST2025 poster on GhostObjects: life-size, world-aligned virtual twins for fast and precise robot instruction, with real-world lasso selection, multi-object manipulation, and snap-to-default placement.

This is the first piece in her ongoing work on #AR for #HRI 🤖👓 @hci

A user in AR lasso-selects a pile of books and boxes on the floor, which generates GhostObjects: virtual twins colocated with the real objects. The user then drags the GhostObjects along a trajectory, snapping them into their default shelf positions. A robot then enters the scene and physically places the real objects on the shelves.
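
As a hedged illustration of the snap-to-default step (the names and threshold below are assumptions, not from the poster):

# Hypothetical sketch of snap-to-default placement: while a GhostObject is
# dragged, snap it to the nearest default shelf pose once it is close enough.
import numpy as np

SNAP_RADIUS = 0.15  # meters; an assumed threshold

def snap_to_default(dragged_position, default_positions):
    """Return the default position to snap to, or None if none is in range."""
    distances = np.linalg.norm(default_positions - dragged_position, axis=1)
    nearest = int(np.argmin(distances))
    if distances[nearest] < SNAP_RADIUS:
        return default_positions[nearest]
    return None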
Pedro Lopes
@pedrolopes@hci.social · 6 months ago

Want to work with one of the best thinkers out there, with endless creativity and a super supportive environment, in futuristic Taipei? 😎
My former PhD student (now Prof.) Shan-Yuan Teng just started the Dexterous Interaction Lab @ NTU: https://lab.tengshanyuan.info/ #hci #VR #AR #haptics #phd #AcademicChatter 😍

Dexterous Interaction Lab, with three projects shown. On the left is a tactile device that allows you to feel touch on the fingerpad and yet keeps the fingerpad free (I know, mind-blowing!); in the middle is a force-feedback glove with muscle stimulation that helps you play guitar; and the last is a sensory substitution device that allows you to see by feeling. All co-authored by Prof. Teng.
