Learning latent actions to control assistive robots

dc.contributor.author: Losey, Dylan P.
dc.contributor.author: Jeon, Hong Jun
dc.contributor.author: Li, Mengxi
dc.contributor.author: Srinivasan, Krishnan
dc.contributor.author: Mandlekar, Ajay
dc.contributor.author: Garg, Animesh
dc.contributor.author: Bohg, Jeannette
dc.contributor.author: Sadigh, Dorsa
dc.date.accessioned: 2022-02-11T21:14:34Z
dc.date.available: 2022-02-11T21:14:34Z
dc.date.issued: 2021-08-04
dc.date.updated: 2022-02-11T21:14:31Z
dc.description.abstract: Assistive robot arms enable people with disabilities to conduct everyday tasks on their own. These arms are dexterous and high-dimensional; however, the interfaces people must use to control their robots are low-dimensional. Consider teleoperating a 7-DoF robot arm with a 2-DoF joystick. The robot is helping you eat dinner, and currently you want to cut a piece of tofu. Today’s robots assume a pre-defined mapping between joystick inputs and robot actions: in one mode the joystick controls the robot’s motion in the x–y plane, in another mode the joystick controls the robot’s z–yaw motion, and so on. But this mapping misses out on the task you are trying to perform! Ideally, one joystick axis should control how the robot stabs the tofu, and the other axis should control different cutting motions. Our insight is that we can achieve intuitive, user-friendly control of assistive robots by embedding the robot’s high-dimensional actions into low-dimensional and human-controllable latent actions. We divide this process into three parts. First, we explore models for learning latent actions from offline task demonstrations, and formalize the properties that latent actions should satisfy. Next, we combine learned latent actions with autonomous robot assistance to help the user reach and maintain their high-level goals. Finally, we learn a personalized alignment model between joystick inputs and latent actions. We evaluate our resulting approach in four user studies where non-disabled participants reach marshmallows, cook apple pie, cut tofu, and assemble dessert. We then test our approach with two disabled adults who leverage assistive devices on a daily basis.
dc.description.version: Accepted version
dc.format.extent: Pages 115-147
dc.format.extent: 33 page(s)
dc.format.mimetype: application/pdf
dc.identifier.doi: https://doi.org/10.1007/s10514-021-10005-w
dc.identifier.eissn: 1573-7527
dc.identifier.issn: 0929-5593
dc.identifier.issue: 1
dc.identifier.other: 10005 (PII)
dc.identifier.pmid: 34366568
dc.identifier.uri: http://hdl.handle.net/10919/108315
dc.identifier.volume: 46
dc.language.iso: en
dc.publisher: Springer
dc.relation.uri: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000681168800001&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=930d57c9ac61a043676db62af60056c1
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Computer Science, Artificial Intelligence
dc.subject: Robotics
dc.subject: Computer Science
dc.subject: Assistive robotics
dc.subject: Teleoperation
dc.subject: Shared autonomy
dc.subject: Latent representations
dc.subject: SHARED AUTONOMY
dc.subject: TELEOPERATION
dc.subject: OPTIMIZATION
dc.subject: 0801 Artificial Intelligence and Image Processing
dc.subject: 0913 Mechanical Engineering
dc.subject: 1702 Cognitive Sciences
dc.subject: Industrial Engineering & Automation
dc.title: Learning latent actions to control assistive robots
dc.title.serial: Autonomous Robots
dc.type: Article - Refereed
dc.type.dcmitype: Text
dc.type.other: Article
dc.type.other: Journal
dcterms.dateAccepted: 2021-07-06
pubs.organisational-group: /Virginia Tech
pubs.organisational-group: /Virginia Tech/Engineering
pubs.organisational-group: /Virginia Tech/Engineering/Mechanical Engineering
pubs.organisational-group: /Virginia Tech/All T&R Faculty
pubs.organisational-group: /Virginia Tech/Engineering/COE T&R Faculty
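
The abstract's central idea, embedding the robot's high-dimensional actions into low-dimensional, human-controllable latent actions learned from offline demonstrations, can be sketched as a conditional autoencoder. The code below is a minimal illustration of that idea, not the authors' released implementation: the choice of PyTorch, the network sizes, the dimension constants (STATE_DIM, ACTION_DIM, LATENT_DIM), and the training loop are all assumptions made for the sketch.

```python
# Minimal sketch (assumed, not the paper's code): learn a 2-D latent action
# space for a 7-DoF arm with a conditional autoencoder over demonstrations.
import torch
import torch.nn as nn

STATE_DIM = 7    # e.g., joint positions of a 7-DoF arm (assumed)
ACTION_DIM = 7   # high-dimensional robot action (assumed)
LATENT_DIM = 2   # matches a 2-DoF joystick

class LatentActionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress (state, action) pairs from demonstrations into z.
        self.encoder = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
            nn.Linear(64, LATENT_DIM),
        )
        # Decoder: condition on the current state so the same low-DoF input
        # can decode to different high-dimensional actions in different contexts.
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM + STATE_DIM, 64), nn.Tanh(),
            nn.Linear(64, ACTION_DIM),
        )

    def forward(self, state, action):
        z = self.encoder(torch.cat([state, action], dim=-1))
        return self.decoder(torch.cat([z, state], dim=-1))

# Train on offline task demonstrations: minimize reconstruction error so that
# z captures the task-relevant variation in the demonstrated actions.
model = LatentActionModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for state, action in []:  # replace [] with a DataLoader of (state, action) pairs
    recon = model(state, action)
    loss = nn.functional.mse_loss(recon, action)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At run time, the joystick input is treated as the latent action z and the
# decoder maps (z, state) to a full robot action.
```

Conditioning the decoder on the current state is what lets the same 2-DoF joystick input mean different things in different contexts, for example stabbing versus cutting motions near a piece of tofu, as the abstract describes.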

Files

Original bundle
Name: losey_auro2021.pdf
Size: 5.08 MB
Format: Adobe Portable Document Format
Description: Accepted version