Enhanced Neural Network Training Using Selective Backpropagation and Forward Propagation

dc.contributor.author: Bendelac, Shiri
dc.contributor.committeechair: Ernst, Joseph M.
dc.contributor.committeechair: Huang, Jia-Bin
dc.contributor.committeemember: Wyatt, Chris L.
dc.contributor.committeemember: Headley, William C.
dc.contributor.department: Electrical and Computer Engineering
dc.date.accessioned: 2018-06-23T08:00:16Z
dc.date.available: 2018-06-23T08:00:16Z
dc.date.issued: 2018-06-22
dc.description.abstract: Neural networks are making headlines every day as the tool of the future, powering artificial intelligence programs and supporting technologies never seen before. However, training a neural network can take days or even weeks for larger networks, and achieving state-of-the-art results in academia and industry requires supercomputers and GPUs. This thesis discusses employing selective measures to determine when to backpropagate and forward propagate in order to reduce training time while maintaining classification performance. The new algorithms are tested on the MNIST and CASIA datasets, with successful results on both. The selective backpropagation algorithm reduces the number of backpropagations completed by up to 93.3%, and the selective forward propagation algorithm reduces the forward propagations and backpropagations completed by up to 72.90%, compared to baseline runs that always forward propagate and backpropagate. This work also discusses applying the selective backpropagation algorithm to a modified dataset in which some classes are disproportionately under-represented. (A minimal sketch of both ideas appears after this record.)
dc.description.abstractgeneral: Neural networks are some of the most commonly used and best performing tools in machine learning. However, training them to perform well is a tedious task that can take days or even weeks, since larger networks perform better but take substantially longer to train. What can be done to reduce training time? Imagine a student studying for a test. The student solves practice problems covering the different topics that may appear on the test, evaluates which topics they already know well, and forgoes extensive practice and review on those in favor of focusing on the topics they missed or were less confident about. This thesis discusses following a similar approach to training neural networks in order to reduce the training time needed to achieve desired performance levels.
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:15455
dc.identifier.uri: http://hdl.handle.net/10919/83714
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Machine learning
dc.subject: neural networks
dc.subject: convolutional neural networks
dc.subject: backpropagation
dc.subject: forward propagation
dc.subject: training
dc.title: Enhanced Neural Network Training Using Selective Backpropagation and Forward Propagation
dc.type: Thesis
thesis.degree.discipline: Computer Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science
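
The two algorithms named in the abstract lend themselves to a short illustration. The following is a minimal sketch, not the thesis's implementation: it assumes PyTorch, a toy fully connected MNIST-style model, and a per-sample loss threshold (`loss_threshold`) as the selection rule, all of which are illustrative choices rather than details taken from the thesis.

```python
# Illustrative sketch only; the thesis's exact selection criteria and
# network architecture may differ. Selective backpropagation: forward-
# propagate everything, but only backpropagate samples whose loss is
# still high. Selective forward propagation: additionally skip the
# forward pass for samples that were already "easy" last epoch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                      nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss(reduction="none")   # per-sample losses
loss_threshold = 0.5                                # assumed criterion


def train_epoch(images, labels, easy_mask):
    """One pass over a toy in-memory dataset of N samples.

    easy_mask[i] is True if sample i fell below the loss threshold
    last epoch; those samples are not forward-propagated this epoch.
    """
    active = ~easy_mask                  # selective forward propagation
    optimizer.zero_grad()
    losses = criterion(model(images[active]), labels[active])
    hard = losses > loss_threshold       # selective backpropagation
    if hard.any():
        losses[hard].mean().backward()   # backward pass on hard samples only
        optimizer.step()
    # Mark active samples that came in under the threshold as easy for
    # the next epoch; skipped samples simply stay easy.
    new_easy = easy_mask.clone()
    new_easy[active] = losses.detach() <= loss_threshold
    return new_easy
```

The savings come from the two masks: samples the network already handles well cost no backward pass and, once marked easy, no forward pass either, which is the mechanism behind the reductions in completed propagations reported in the abstract.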

Files

Original bundle
Name: Bendelac_S_T_2018.pdf
Size: 2.9 MB
Format: Adobe Portable Document Format
