Learning Human Objectives from Sequences of Physical Corrections
dc.contributor.author | Li, Mengxi | en |
dc.contributor.author | Canberk, Alper | en |
dc.contributor.author | Losey, Dylan P. | en |
dc.contributor.author | Sadigh, Dorsa | en |
dc.date.accessioned | 2022-02-11T21:24:24Z | en |
dc.date.available | 2022-02-11T21:24:24Z | en |
dc.date.issued | 2021-05-30 | en |
dc.date.updated | 2022-02-11T21:24:22Z | en |
dc.description.abstract | When personal, assistive, and interactive robots make mistakes, humans naturally and intuitively correct those mistakes through physical interaction. In simple situations, one correction is sufficient to convey what the human wants. But when humans are working with multiple robots, or when the robot is performing an intricate task, the human often must make several corrections to fix the robot’s behavior. Prior research assumes that each of these physical corrections is an independent event, and learns from them one at a time. However, this misses crucial information: these interactions are interconnected, and may only make sense when viewed together. Alternatively, other work reasons over the final trajectory produced by all of the human’s corrections. But this method must wait until the end of the task to learn from corrections, rather than inferring from them in an online fashion. In this paper, we formalize an approach for learning from sequences of physical corrections during the current task. To do this, we introduce an auxiliary reward that captures the human’s trade-off between making corrections that improve the robot’s immediate reward and those that improve its long-term performance. We evaluate the resulting algorithm in remote and in-person human-robot experiments, and compare against both independent and final baselines. Our results indicate that users are best able to convey their objective when the robot reasons over their sequence of corrections. | en |
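The abstract describes inferring the human's objective online from a sequence of corrections rather than treating each correction independently. Below is a minimal illustrative sketch of that general idea, not the paper's actual algorithm: it assumes a discrete set of candidate reward weights `thetas`, a Boltzmann-rational correction model, and a hand-picked effort weight `lam` standing in for the auxiliary trade-off between reward improvement and correction effort. All names and values are hypothetical.

```python
import numpy as np

def boltzmann_loglik(theta, f_before, f_after, effort, beta=1.0, lam=0.1):
    """Log-likelihood of one correction under a noisily-rational human model.

    The human is assumed to trade off the reward improvement a correction
    produces (under candidate weights theta) against the effort of making it.
    """
    gain = theta @ (f_after - f_before)  # immediate reward improvement
    return beta * (gain - lam * effort)

def update_belief(log_belief, thetas, f_before, f_after, effort):
    """Bayesian update over candidate reward weights after one correction.

    Because updates accumulate across the task, later corrections are
    interpreted jointly with earlier ones rather than independently.
    """
    for i, theta in enumerate(thetas):
        log_belief[i] += boltzmann_loglik(theta, f_before, f_after, effort)
    log_belief -= np.max(log_belief)       # numerical stability
    belief = np.exp(log_belief)
    return np.log(belief / belief.sum())   # renormalize in log space

# Toy usage: two candidate objectives, three sequential corrections.
# Each correction is (trajectory features before, features after, effort).
thetas = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
log_belief = np.log(np.full(len(thetas), 0.5))
corrections = [
    (np.array([0.2, 0.5]), np.array([0.6, 0.4]), 0.3),
    (np.array([0.6, 0.4]), np.array([0.9, 0.3]), 0.2),
    (np.array([0.9, 0.3]), np.array([1.0, 0.3]), 0.1),
]
for f_before, f_after, effort in corrections:
    log_belief = update_belief(log_belief, thetas, f_before, f_after, effort)
print(np.exp(log_belief))  # posterior over candidate objectives
```

Because the belief persists across updates, each new correction is weighed against everything the robot has already observed, which is the sequential reasoning the abstract contrasts with independent and end-of-task baselines.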
dc.description.version | Accepted version | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.doi | https://doi.org/10.1109/icra48506.2021.9560829 | en |
dc.identifier.uri | http://hdl.handle.net/10919/108318 | en |
dc.language.iso | en | en |
dc.publisher | IEEE | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.title | Learning Human Objectives from Sequences of Physical Corrections | en |
dc.title.serial | 2021 IEEE International Conference on Robotics and Automation (ICRA) | en |
dc.type | Conference proceeding | en |
dc.type.dcmitype | Text | en |
pubs.finish-date | 2021-06-05 | en |
pubs.organisational-group | /Virginia Tech | en |
pubs.organisational-group | /Virginia Tech/Engineering | en |
pubs.organisational-group | /Virginia Tech/Engineering/Mechanical Engineering | en |
pubs.organisational-group | /Virginia Tech/All T&R Faculty | en |
pubs.organisational-group | /Virginia Tech/Engineering/COE T&R Faculty | en |
pubs.start-date | 2021-05-30 | en |
Files
Original bundle
- Name: li_icra2021.pdf
- Size: 4.71 MB
- Format: Adobe Portable Document Format
- Description: Accepted version