Communicating Inferred Goals With Passive Augmented Reality and Active Haptic Feedback

dc.contributor.author: Mullen, James F.
dc.contributor.author: Mosier, Josh
dc.contributor.author: Chakrabarti, Sounak
dc.contributor.author: Chen, Anqi
dc.contributor.author: White, Tyler
dc.contributor.author: Losey, Dylan P.
dc.date.accessioned: 2022-02-11T21:20:39Z
dc.date.available: 2022-02-11T21:20:39Z
dc.date.issued: 2021-10-01
dc.date.updated: 2022-02-11T21:20:36Z
dc.description.abstract: Robots learn as they interact with humans. Consider a human teleoperating an assistive robot arm: as the human guides and corrects the arm's motion, the robot gathers information about the human's desired task. But how does the human know what their robot has inferred? Today's approaches often focus on conveying intent: for instance, using legible motions or gestures to indicate what the robot is planning. However, closing the loop on robot inference requires more than just revealing the robot's current policy: the robot should also display the alternatives it thinks are likely, and prompt the human teacher when additional guidance is necessary. In this letter we propose a multimodal approach for communicating robot inference that combines both passive and active feedback. Specifically, we leverage information-rich augmented reality to passively visualize what the robot has inferred, and attention-grabbing haptic wristbands to actively prompt and direct the human's teaching. We apply our system to shared autonomy tasks where the robot must infer the human's goal in real-time. Within this context, we integrate passive and active modalities into a single algorithmic framework that determines when and which type of feedback to provide. Combining both passive and active feedback experimentally outperforms single modality baselines; during an in-person user study, we demonstrate that our integrated approach increases how efficiently humans teach the robot while simultaneously decreasing the amount of time humans spend interacting with the robot. Videos here: https://youtu.be/swq_u4iIP-g
dc.description.version: Accepted version
dc.format.extent: Pages 8522-8529
dc.format.extent: 8 page(s)
dc.format.mimetype: application/pdf
dc.identifier.doi: https://doi.org/10.1109/LRA.2021.3111055
dc.identifier.eissn: 2377-3766
dc.identifier.issn: 2377-3766
dc.identifier.issue: 4
dc.identifier.uri: http://hdl.handle.net/10919/108317
dc.identifier.volume: 6
dc.language.iso: en
dc.publisher: IEEE
dc.relation.uri: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000697817600002&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=930d57c9ac61a043676db62af60056c1
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Robotics
dc.subject: Haptics and haptic interfaces
dc.subject: virtual reality and interfaces
dc.subject: intention recognition
dc.subject: 0913 Mechanical Engineering
dc.title: Communicating Inferred Goals With Passive Augmented Reality and Active Haptic Feedback
dc.title.serial: IEEE Robotics and Automation Letters
dc.type: Article - Refereed
dc.type.dcmitype: Text
dc.type.other: Article
dc.type.other: Journal
pubs.organisational-group: /Virginia Tech
pubs.organisational-group: /Virginia Tech/Engineering
pubs.organisational-group: /Virginia Tech/Engineering/Mechanical Engineering
pubs.organisational-group: /Virginia Tech/All T&R Faculty
pubs.organisational-group: /Virginia Tech/Engineering/COE T&R Faculty
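
Note: the abstract above describes two algorithmic ingredients: inferring the human's goal in real time from their corrections, and deciding when to give passive (AR) versus active (haptic) feedback. As an illustrative aside only, and not the authors' implementation, a minimal Python sketch of Bayesian goal inference with an entropy-based modality switch could look like the following; the Boltzmann observation model, the goal set, and the threshold value are all assumptions for the sake of the example.

    import numpy as np

    # Hypothetical setup (not from the paper): candidate goals are 2D
    # positions and the human's joystick input is a 2D direction.
    GOALS = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    BETA = 5.0         # rationality coefficient in an assumed Boltzmann model
    H_THRESHOLD = 0.9  # assumed belief-entropy threshold for active prompting

    def update_belief(belief, state, human_input):
        # Bayesian update: P(g | u, s) is proportional to P(u | s, g) * P(g).
        directions = GOALS - state
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        u = human_input / (np.linalg.norm(human_input) + 1e-9)
        # Inputs that point toward a goal are modeled as likelier under it.
        likelihood = np.exp(BETA * directions @ u)
        posterior = belief * likelihood
        return posterior / posterior.sum()

    def entropy(belief):
        return -np.sum(belief * np.log(belief + 1e-12))

    def choose_feedback(belief):
        # High uncertainty -> actively prompt (e.g., haptic wristband);
        # otherwise passively visualize the inferred goals (e.g., AR).
        return "active" if entropy(belief) > H_THRESHOLD else "passive"

    # Usage: start from a uniform prior, then fold in one human correction.
    belief = np.ones(len(GOALS)) / len(GOALS)
    belief = update_belief(belief, np.zeros(2), np.array([0.8, 0.2]))
    print(belief, choose_feedback(belief))
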

Files

Original bundle
Name: mullen_ral2021.pdf
Size: 1.61 MB
Format: Adobe Portable Document Format
Description: Accepted version