Rdoc2vec CS4624 Project for Spring 2017

dc.contributor.author: Cooke, Austin
dc.contributor.author: Clark, Jake
dc.contributor.author: Rolph, Steven
dc.contributor.author: Sherrard, Stephen
dc.date.accessioned: 2017-05-13T01:41:33Z
dc.date.available: 2017-05-13T01:41:33Z
dc.date.issued: 2017-04-28
dc.description.abstract: This submission includes the deliverables for the capstone project Rdoc2vec, created by Jake Clark, Austin Cooke, Steven Rolph, and Stephen Sherrard for their client, Eastman Chemical Corporation. Doc2Vec is a machine learning model that places the words occurring in one or more collections of text into a vector space. After several documents are analyzed, every word that appears in them has a vector, and the distance between two vectors indicates how similar the words are: words that appear in similar contexts lie close together. Researchers have used this algorithm for document analysis, primarily through the Gensim Python library. Our client wanted to take the same approach in a language better suited to their business; much of their software is statistical and written in R. Our job therefore had the following components: become familiar with Doc2Vec and R, develop Rdoc2vec, and apply it to parse documents, build a vector space, and run tests.

First, to become familiar with the language, we spent a few weeks on tutorials, including the Lynda library provided by Virginia Tech. Once comfortable with R, we studied the two dominant training algorithms, Distributed Bag of Words (DBOW) and Distributed Memory (DM), after which we felt prepared to begin development.

Second, we developed a class structure similar to Gensim's. Keeping this as a skeleton, we wrote the parsing algorithm used to train the model. The parser takes a list of documents stored on the system, computes the frequency of each word, and passes those frequencies along the pipeline (a short R sketch of this step follows the abstract).

The next step was to create a neural network for training the model. We elected to use the R neural network package nnet. A neural network takes an input vector as a parameter; for our purposes it made sense to use a "one-hot" vector, which has exactly one nonzero element. This cuts down on later calculation, because multiplying the weight matrix by a one-hot vector amounts to selecting a single row of that matrix. The input is multiplied by a set of weights to form the hidden layer, which nnet handles, and the hidden-layer values are multiplied by a second set of weights to form the output layer (see the forward-pass and nnet sketches after the abstract). After writing functions that call nnet, we began work on testing. In the meantime, we also started designing our own implementation of a neural network. Building the network ourselves would get around the major problem with nnet: it is a black box we cannot modify, so we cannot be sure it is optimized for our application, and since one-hot inputs are not its default use case, there is likely room to improve speed in our own library. We were not able to finish and test this neural network, so it is left for future groups.

Finally, we began testing. We created a Web scraper that collects Wikipedia articles and used it to gather pages on the congressional districts of several states. This yields document sets that can be quite large (several states at once) or smaller (individual states). We performed tests on these datasets, and the results are kept with our code.
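To illustrate the parsing step described above, here is a minimal R sketch that tokenizes a set of documents and computes a frequency for each word. The example documents and the tokenization rule are assumptions made for illustration; the project's actual parser is in r-doc2vec-master.zip.

# Minimal sketch of the parsing step: count word frequencies across documents.
# Here the "documents" are in-memory strings; the project's parser reads files
# from disk, but the counting logic is the same.
docs <- c("the first congressional district covers the northern part of the state",
          "the second congressional district covers the coast and the eastern shore")
tokens <- unlist(lapply(docs, function(doc) {
  words <- unlist(strsplit(tolower(doc), "[^a-z0-9]+"))  # split on non-alphanumerics
  words[nzchar(words)]                                   # drop empty tokens
}))
word_freq <- sort(table(tokens), decreasing = TRUE)
head(word_freq)   # the most frequent words, passed along the training pipeline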
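The remark about one-hot inputs can be made concrete: multiplying a weight matrix by a one-hot vector simply selects one row of the matrix, so the first layer needs no full matrix product. The dimensions and random weights below are illustrative only, not the project's trained network.

# Sketch of a forward pass with a one-hot input vector (illustrative dimensions).
set.seed(1)
vocab_size  <- 10                                 # input and output layer size
hidden_size <- 4                                  # hidden layer (embedding) size
W_in  <- matrix(rnorm(vocab_size * hidden_size), vocab_size, hidden_size)
W_out <- matrix(rnorm(hidden_size * vocab_size), hidden_size, vocab_size)
word_index <- 3
x <- numeric(vocab_size); x[word_index] <- 1      # one-hot input
hidden <- as.vector(x %*% W_in)                   # hidden layer activations
all(hidden == W_in[word_index, ])                 # TRUE: the product just picks one row
scores <- as.vector(hidden %*% W_out)             # output layer scores
probs  <- exp(scores) / sum(exp(scores))          # softmax over the vocabulary

This is also why a purpose-built network can beat a general-purpose one on one-hot inputs: the general code still carries out the full multiplication.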
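For the nnet-based training mentioned above, a call might look roughly like the following sketch, using a toy vocabulary and randomly paired words; the vocabulary, sizes, and iteration counts here are assumptions, and the real training code and parameters are in r-doc2vec-master.zip.

# Hedged sketch of training a one-hidden-layer network with the nnet package.
library(nnet)
set.seed(1)
vocab    <- c("river", "bank", "money", "water", "loan")     # toy vocabulary
inputs   <- factor(sample(vocab, 200, replace = TRUE), levels = vocab)
contexts <- factor(sample(vocab, 200, replace = TRUE), levels = vocab)
X <- class.ind(inputs)     # one-hot encoding of the input words
Y <- class.ind(contexts)   # one-hot encoding of the words to predict
# One hidden layer of 10 units stands in for the embedding dimension.
fit <- nnet(X, Y, size = 10, softmax = TRUE, maxit = 100, trace = FALSE)
# fit$wts holds all fitted weights; the input-to-hidden block would be read
# out as the word vectors.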
dc.description.notes:
Rdoc2vecPresentation.pdf: PDF version of the final presentation
Rdoc2vecPresentation.pptx: PowerPoint version of the final presentation
Rdoc2vecReport.docx: Word document of the final semester report
Rdoc2vecReport.pdf: PDF version of the final semester report
r-doc2vec-master.zip: ZIP file containing all computer code from the semester project
dc.description.sponsorship: Eastman Chemical
dc.identifier.uri: http://hdl.handle.net/10919/77622
dc.language.iso: en_US
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: R
dc.subject: Doc2Vec
dc.subject: Machine Learning
dc.subject: Software
dc.subject: Open Source
dc.title: Rdoc2vec CS4624 Project for Spring 2017
dc.type: Dataset
dc.type: Presentation
dc.type: Report
dc.type: Software

Files

Original bundle (5 files):
r_doc2vec-master.zip (300.07 KB)
Rdoc2vecPresentation.pdf (1.78 MB, Adobe Portable Document Format)
Rdoc2vecReport.pdf (1.04 MB, Adobe Portable Document Format)
Rdoc2vecReport.docx (1.3 MB, Microsoft Word XML)
Rdoc2vecPresentation.pptx (2.31 MB, Microsoft PowerPoint XML)
License bundle (1 file):
license.txt (1.5 KB, item-specific license agreed upon to submission)