Author: Bui, Dat
Dates: 2020-05-13; 2020-05-13; 2020-05-11
URI: http://hdl.handle.net/10919/98240
Abstract: DeepSqueak is a deep learning-based system for the detection and analysis of ultrasonic vocalizations. The original DeepSqueak model was created by Kevin R. Coffey, Russell G. Marx, and John F. Neumaier. Rodents engage in social communication through ultrasonic vocalizations, and Dr. Bowers uses DeepSqueak to study rats in his lab. AviSoft is another software package that Dr. Bowers has used to record and manually analyze sound files gathered from the rats. Dr. Bowers would like to use all available data to train DeepSqueak's classification model, further improving its accuracy and reducing manual analysis and labeling work. The purpose of the Vocalization Detection project is to assist with that effort, leveraging the available data, the two software packages, and our processing. Initial efforts involved studying DeepSqueak, AviSoft, and the available data files. Further exploration considered automating use of the tools and assisting with the training of DeepSqueak models. The work then pivoted to developing matching methods that transform data processed with AviSoft into labeled data for improving the training of DeepSqueak models.
Language: en
Rights: Creative Commons CC0 1.0 Universal Public Domain Dedication
Subjects: Deep learning (Machine learning); Classification; Rat; Vocalization; DeepSqueak; AviSoft; MATLAB
Title: Vocalization Detection
Type: Article