ModelPred: A Framework for Predicting Trained Model from Training Data

dc.contributor.author: Zeng, Yingyan
dc.contributor.committeechair: Jia, Ruoxi
dc.contributor.committeemember: Abbott, A. Lynn
dc.contributor.committeemember: Jin, Ran
dc.contributor.department: Electrical and Computer Engineering
dc.date.accessioned: 2024-08-07T18:53:54Z
dc.date.available: 2024-08-07T18:53:54Z
dc.date.issued: 2024-06-06
dc.description.abstract: In this work, we propose ModelPred, a framework that helps to understand the impact of changes in training data on a trained model. This is critical for building trust at various stages of a machine learning pipeline: from cleaning poor-quality samples and tracking important ones to be collected during data preparation, to calibrating the uncertainty of model predictions, to interpreting why certain behaviors of a model emerge during deployment. Specifically, ModelPred learns a parameterized function that takes a dataset S as input and predicts the model obtained by training on S. Our work differs from the recent work on Datamodels in that we aim to predict the trained model parameters directly rather than the trained model's behavior. We demonstrate that a neural network-based set function class is capable of learning the complex relationship between training data and model parameters. We introduce novel global and local regularization techniques to prevent overfitting, and we rigorously characterize the expressive power of neural networks (NN) in approximating the end-to-end training process. Through extensive empirical investigations, we show that ModelPred enables a variety of applications that boost the interpretability and accountability of machine learning (ML), such as data valuation, data selection, memorization quantification, and model calibration.
dc.description.abstractgeneral: With the prevalence of large and complicated Artificial Intelligence (AI) models, it is important to build trust at the various stages of a machine learning pipeline: from cleaning poor-quality samples and tracking important ones to be collected during training data preparation, to calibrating the uncertainty of model predictions during the inference stage, to interpreting why certain behaviors of a model emerge during deployment. In this work, we propose ModelPred, a framework that helps to understand the impact of changes in training data on a trained model. To achieve this, ModelPred learns a parameterized function that takes a dataset S as input and predicts the model obtained by training on S, thus efficiently learning the impact of the data on the model. Our work differs from the recent work on Datamodels [28] in that we aim to predict the trained model parameters directly rather than the trained model's behavior. We demonstrate that a neural network-based set function class is capable of learning the complex relationship between training data and model parameters. We introduce novel global and local regularization techniques to enhance generalizability and prevent overfitting. We also rigorously characterize the expressive power of neural networks (NN) in approximating the end-to-end training process. Through extensive empirical investigations, we show that ModelPred enables a variety of applications that boost the interpretability and accountability of machine learning (ML), such as data valuation, data selection, memorization quantification, and model calibration. This greatly enhances the trustworthiness of machine learning models.
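The abstract describes ModelPred as a neural set function that maps a dataset S to the parameters of the model trained on S. The key structural requirement is permutation invariance: a dataset is an unordered set, so reordering its samples must not change the predicted parameters. As a minimal sketch of this idea only (the dimensions, randomly initialized weights, and DeepSets-style encoder/pooling/decoder layout below are illustrative assumptions, not the thesis's actual architecture or training procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: each training sample is a (feature, label) pair.
d_in, d_hid, d_out = 5, 16, 6  # d_out = size of the predicted parameter vector

# Randomly initialized weights stand in for a trained ModelPred network.
W_phi = rng.normal(size=(d_in + 1, d_hid))  # per-sample encoder phi
W_rho = rng.normal(size=(d_hid, d_out))     # decoder rho -> model parameters

def model_pred(X, y):
    """Set function: dataset S = (X, y) -> predicted model parameter vector."""
    S = np.hstack([X, y[:, None]])  # concatenate features and labels per sample
    h = np.tanh(S @ W_phi)          # phi: embed each sample independently
    pooled = h.sum(axis=0)          # permutation-invariant sum pooling
    return pooled @ W_rho           # rho: map pooled embedding to parameters

X = rng.normal(size=(20, d_in))
y = rng.integers(0, 2, size=20).astype(float)

theta = model_pred(X, y)
perm = rng.permutation(20)
theta_perm = model_pred(X[perm], y[perm])
```

In an actual instantiation, such a network would be fit by regressing predicted parameters against models trained on many sampled subsets of the data; the sum pooling is what makes the output depend only on which samples are in S, not on their order.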
dc.description.degree: Master of Science
dc.description.notes: Also published as Zeng, Y., Wang, J. T., Chen, S., Just, H. A., Jin, R., & Jia, R. (2023, February). ModelPred: A Framework for Predicting Trained Model from Training Data. In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) (pp. 432-449). IEEE. https://doi.org/10.1109/SaTML54575.2023.00037
dc.description.sponsorship: Amazon-Virginia Tech Initiative in Efficient and Robust Machine Learning
dc.format.medium: ETD
dc.format.mimetype: application/pdf
dc.identifier.uri: https://hdl.handle.net/10919/120887
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Neural Network Approximability
dc.subject: Data Valuation
dc.subject: Trustworthy Machine Learning
dc.title: ModelPred: A Framework for Predicting Trained Model from Training Data
dc.type: Thesis
thesis.degree.discipline: Computer Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science

Files

Original bundle (1 of 1)
Name: Zeng_Y_T_2024.pdf
Size: 10.67 MB
Format: Adobe Portable Document Format

License bundle (1 of 1)
Name: license.txt
Size: 1.5 KB
Format: Item-specific license agreed upon to submission