Here's What I've Learned: Asking Questions that Reveal Reward Learning

Date

2022-09-08

Publisher

ACM

Abstract

Robots can learn from humans by asking questions. In these questions, the robot demonstrates a few different behaviors and asks the human for their favorite. But how should robots choose which questions to ask? Today's robots optimize for informative questions that actively probe the human's preferences as efficiently as possible. But while informative questions make sense from the robot's perspective, human onlookers may find them arbitrary and misleading. In this paper we formalize active preference-based learning from the human's perspective. We hypothesize that, from the human's point of view, the robot's questions reveal what the robot has and has not learned. This insight enables robots to use questions to make their learning process transparent to the human operator. We develop and test a model that robots can leverage to relate the questions they ask to the information those questions reveal. We then introduce a trade-off between informative and revealing questions that considers both human and robot perspectives: a robot that optimizes this trade-off actively gathers information from the human while simultaneously keeping the human up to date with what it has learned. We evaluate our approach across simulations, online surveys, and in-person user studies.
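The trade-off the abstract describes can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual model: the robot keeps a discrete belief over candidate reward-weight vectors, each question shows the human two behaviors (feature vectors), informativeness is the expected reduction in belief entropy under a Boltzmann choice model, and the "revealing" term is a simple proxy for how clearly the question separates the options under the robot's current belief. The names `info_gain`, `reveal_score`, and the weight `lam` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: 5 candidate reward-weight vectors over 3 features,
# a uniform prior belief, and 20 candidate questions (pairs of behaviors).
W = rng.normal(size=(5, 3))                 # candidate reward weights
belief = np.ones(5) / 5                     # robot's belief over candidates
questions = [(rng.normal(size=3), rng.normal(size=3)) for _ in range(20)]

def answer_probs(q, W):
    """P(human picks option A | each candidate w), Boltzmann choice model."""
    a, b = q
    return 1.0 / (1.0 + np.exp(W @ b - W @ a))

def info_gain(q, belief, W):
    """Expected entropy reduction in the belief (mutual information)."""
    pA = answer_probs(q, W)
    pa = belief @ pA                        # marginal P(answer = A)
    def ent(p):
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return ent(pa) - belief @ ent(pA)

def reveal_score(q, belief, W):
    """Proxy (an assumption, not the paper's model) for how much the
    question reveals what the robot has learned: how strongly the
    belief-averaged reward separates the two shown behaviors."""
    a, b = q
    return abs((belief @ W) @ (a - b))

lam = 0.5                                   # assumed trade-off weight
scores = [lam * info_gain(q, belief, W)
          + (1 - lam) * reveal_score(q, belief, W)
          for q in questions]
best_question = questions[int(np.argmax(scores))]
```

With `lam = 1` the robot asks purely informative questions; with `lam = 0` it asks purely revealing ones. The paper's contribution, per the abstract, is optimizing a trade-off between the two so that gathering information and communicating what has been learned happen simultaneously.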
