Hierarchical Bayesian Dataset Selection
dc.contributor.author | Zhou, Xiaona | en |
dc.contributor.committeechair | Lourentzou, Ismini | en |
dc.contributor.committeemember | Jin, Ran | en |
dc.contributor.committeemember | Thomas, Christopher Lee | en |
dc.contributor.department | Computer Science | en |
dc.date.accessioned | 2024-06-11T13:45:56Z | en |
dc.date.available | 2024-06-11T13:45:56Z | en |
dc.date.issued | 2024-05 | en |
dc.description.abstract | Despite the profound impact of deep learning across various domains, supervised model training critically depends on access to large, high-quality datasets, which are often challenging to identify. To address this, we introduce <b>H</b>ierarchical <b>B</b>ayesian <b>D</b>ataset <b>S</b>election (<b>HBDS</b>), the first dataset selection algorithm that utilizes hierarchical Bayesian modeling, designed for collaborative data-sharing ecosystems. The proposed method efficiently decomposes the contributions of dataset groups and individual datasets to local model performance using Bayesian updates with small data samples. Our experiments on two benchmark datasets demonstrate that HBDS not only offers a computationally lightweight solution but also enhances interpretability compared to existing data selection methods by revealing insights into dataset interrelationships through learned posterior distributions. HBDS outperforms traditional non-hierarchical methods by correctly identifying all relevant datasets and achieving optimal accuracy with fewer computational steps, even when initial model accuracy is low. Specifically, HBDS surpasses its non-hierarchical counterpart by 1.8% on DIGIT-FIVE and 0.7% on DOMAINNET, on average. In settings with limited resources, HBDS achieves 6.9% higher accuracy than its non-hierarchical counterpart. These results confirm HBDS's effectiveness in identifying datasets that improve the accuracy and efficiency of deep learning models when collaborative data utilization is essential. | en |
dc.description.abstractgeneral | Deep learning technologies have revolutionized many domains and applications, from voice recognition in smartphones to automated recommendations on streaming services. However, the success of these technologies heavily relies on access to large, high-quality datasets, and selecting the right datasets can be a daunting challenge. To tackle this, we have developed a new method that can quickly identify which datasets or groups of datasets contribute most to improving a model's performance, using only a small amount of data. Our experiments show that this method is not only effective and computationally lightweight but also helps us better understand how different datasets relate to one another. | en |
dc.description.degree | Master of Science | en |
dc.format.medium | ETD | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.uri | https://hdl.handle.net/10919/119391 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | en |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | en |
dc.subject | Hierarchical Bayesian | en |
dc.subject | Data-Sharing | en |
dc.subject | Reinforcement Learning | en |
dc.subject | Dataset Selection | en |
dc.title | Hierarchical Bayesian Dataset Selection | en |
dc.type | Thesis | en |
thesis.degree.discipline | Computer Science and Applications | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | Master of Science | en |