Authors: Wahed, Muntasir; Gruhl, Daniel; Lourentzou, Ismini
Date accessioned: 2023-11-02
Date available: 2023-11-02
Date issued: 2023-10-21
URI: http://hdl.handle.net/10919/116603
Abstract: The modern-day research community has an embarrassment of riches regarding pre-trained AI models. Even for a simple task such as lexicon set expansion, where an AI model suggests new entities to add to a predefined seed set of entities, thousands of models are available. However, deciding which model to use for a given set expansion task is non-trivial. In hindsight, some models can be ‘off topic’ for specific set expansion tasks, while others might work well initially but quickly exhaust what they have to offer. Additionally, certain models may require more careful priming in the form of samples or feedback before being fine-tuned to the task at hand. In this work, we frame this model selection as a sequential non-stationary problem, where there exist a large number of diverse pre-trained models that may or may not fit a task at hand, and an expert is shown one suggestion at a time to include in the set or not, i.e., accept or reject the suggestion. The goal is to expand the set with as many entities as possible, as quickly as possible. We introduce MArBLE, a hierarchical multi-armed bandit method for this task, and two strategies designed to address cold-start problems. Experimental results on three set expansion tasks demonstrate MArBLE’s effectiveness compared to baselines.
Format: application/pdf
Language: en
Rights: Creative Commons Attribution-NonCommercial 4.0 International
Title: MArBLE: Hierarchical Multi-Armed Bandits for Human-in-the-Loop Set Expansion
Type: Article - Refereed
Date updated: 2023-11-01
Rights holder: The author(s)
DOI: https://doi.org/10.1145/3583780.3615485
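
Illustration: the sketch below shows only the sequential accept/reject framing described in the abstract, not MArBLE's hierarchical design or its cold-start strategies. It uses a sliding-window UCB bandit (one hypothetical "arm" per pre-trained model, with the expert's accept/reject decision as the reward); the class name, arm count, window size, and acceptance probabilities are all assumptions made for this example.

import math
import random

class SlidingWindowUCB:
    """Minimal non-stationary bandit: only the most recent pulls inform the
    arm estimates, so models that stop yielding accepted entities fade out."""

    def __init__(self, n_arms, window=50, c=1.0):
        self.n_arms = n_arms
        self.window = window          # number of recent rounds to remember
        self.c = c                    # exploration strength
        self.history = []             # list of (arm, reward) pairs

    def select_arm(self):
        recent = self.history[-self.window:]
        counts = [0] * self.n_arms
        rewards = [0.0] * self.n_arms
        for arm, r in recent:
            counts[arm] += 1
            rewards[arm] += r
        # Pull any arm with no recent observations before applying UCB.
        for arm in range(self.n_arms):
            if counts[arm] == 0:
                return arm
        total = len(recent)
        ucb = [rewards[a] / counts[a] + self.c * math.sqrt(math.log(total) / counts[a])
               for a in range(self.n_arms)]
        return max(range(self.n_arms), key=lambda a: ucb[a])

    def update(self, arm, reward):
        self.history.append((arm, reward))

# Toy simulation: three hypothetical "models" whose suggestions are accepted
# with different (and, for arm 0, drifting) probabilities.
def expert_accepts(arm, t):
    accept_prob = [0.7 if t < 100 else 0.2, 0.4, 0.55][arm]
    return 1.0 if random.random() < accept_prob else 0.0

bandit = SlidingWindowUCB(n_arms=3)
accepted = 0
for t in range(300):
    arm = bandit.select_arm()
    reward = expert_accepts(arm, t)   # 1.0 = expert accepts the suggested entity
    accepted += reward
    bandit.update(arm, reward)
print(f"Entities accepted: {int(accepted)} / 300")

The sliding window is one simple way to handle the non-stationarity the abstract mentions (models that "work well initially but quickly exhaust what they have to offer"); the paper's hierarchical method and cold-start strategies are not reproduced here.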