Big Data Text Summarization - Hurricane Harvey

dc.contributor.author: Geissinger, Jack
dc.contributor.author: Long, Theo
dc.contributor.author: Jung, James
dc.contributor.author: Parent, Jordan
dc.contributor.author: Rizzo, Robert
dc.description.abstract: Natural language processing (NLP) has advanced rapidly in recent years. Accordingly, we present progressively more complex generated text summaries on the topic of Hurricane Harvey. We began with TextRank, an unsupervised extractive summarization algorithm. TextRank is computationally expensive, and the sentences it selects are not always directly related or essential to the topic at hand. When evaluating TextRank, we found that a single interjected sentence ruined the flow of the summary. The ROUGE scores for our TextRank summary were quite low compared to the gold standard prepared for us, although the same summary scored well against the Wikipedia article lead for Hurricane Harvey. To improve on TextRank, we used template summarization with named entities. Template summarization runs faster than TextRank but is supervised: the author of the template and script chooses which named entities are valuable. It is therefore highly dependent on human intervention to produce reasonable, readable summaries that are not error-prone. As expected, the template summary evaluated well against both the gold standard and the Wikipedia article lead, mainly because we could include the named entities we considered pertinent. Beyond extractive methods like TextRank and template summarization, we pursued abstractive summarization using pointer-generator networks, and multi-document summarization combining pointer-generator networks with maximal marginal relevance. The benefit of abstractive summarization is that it is more in line with how humans summarize documents. Pointer-generator networks, however, require GPUs to run properly and a large amount of training data; fortunately, we were able to use a pre-trained network to generate summaries.
The pointer-generator network is the centerpiece of our abstractive methods and is what made these summaries possible. NLP is at an inflection point due to deep learning, and the summaries generated by our state-of-the-art pointer-generator network are filled with details about Hurricane Harvey, including the damage incurred, average rainfall amounts, and the locations affected most. The summary is also free of grammatical errors. We also used a novel Python library, written by Logan Lebanoff at the University of Central Florida, for multi-document summarization with deep learning, applying it to our Hurricane Harvey dataset of 500 articles and to the Wikipedia article for Hurricane Harvey. The summary of the Wikipedia article is our final summary and achieved the highest ROUGE scores we could attain.
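The abstract's starting point, TextRank, ranks sentences with PageRank over a sentence-similarity graph and extracts the top-ranked ones. The following is a minimal sketch of that idea in pure Python, not the team's actual implementation: the sentence splitter, the word-overlap similarity from the original TextRank paper, and the power-iteration PageRank here are all simplifications.

```python
import math
import re

def textrank_summary(text, n_sentences=2, damping=0.85, iters=50):
    """Extract the top-ranked sentences from `text` (a hedged TextRank sketch).

    Builds a graph whose nodes are sentences and whose edge weights are
    word-overlap similarities, then scores nodes by power-iteration PageRank
    and returns the top sentences in their original order.
    """
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    words = [set(re.findall(r'\w+', s.lower())) for s in sentences]
    n = len(sentences)

    # Similarity from the TextRank paper: shared words, normalized by
    # log sentence lengths so long sentences are not automatically favored.
    sim = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and len(words[i]) > 1 and len(words[j]) > 1:
                overlap = len(words[i] & words[j])
                sim[i][j] = overlap / (math.log(len(words[i])) + math.log(len(words[j])))

    # Weighted PageRank by power iteration.
    scores = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = sum(
                sim[j][i] / sum(sim[j]) * scores[j]
                for j in range(n)
                if sim[j][i] > 0 and sum(sim[j]) > 0
            )
            new.append((1 - damping) + damping * rank)
        scores = new

    # Pick the n_sentences best, then restore document order for readability.
    top = sorted(sorted(range(n), key=lambda i: -scores[i])[:n_sentences])
    return ' '.join(sentences[i] for i in top)
```

Because the output is stitched together from verbatim source sentences, it can exhibit exactly the flaw the abstract notes: a highly ranked but tangential sentence can interject and break the flow of the summary.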
dc.description.notes:
- BDTS_Hurricane_Harvey_final_report.docx: Editable version of the final report
- BDTS_Hurricane_Harvey_final_report.pdf: PDF version of the final report
- BDTS_Hurricane_Harvey_presentation.pptx: Editable version of the presentation slides
- BDTS_Hurricane_Harvey_presentation.pdf: PDF version of the presentation slides
Source files in zip:
- - Finds the most frequent words in a JSON file that contains a sentences field. Requires a file to be passed through the -f option.
- - Performs basic part-of-speech tagging on a JSON file that contains a sentences field. Requires a file to be passed through the -f option.
- - Performs TextRank summarization with a JSON file that contains a sentences field. Requires a file to be passed through the -f option.
- - Performs template summarization with a JSON file that contains a sentences field. Requires a file to be passed through the -f option.
- - Extracts content from a Wikipedia page given a topic and formats the information for the pointer-generator network using the “” script. Requires a topic to be given in the -t option and an output directory for “” to read from with the -o option.
- - Called by "" to convert story files to .bin files.
- - Used to clean up the large dataset
- requirements.txt - Used with Anaconda for installing all of the dependencies.
- small_dataset.json - Properly formatted JSON file for use with other files.
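The notes mention a script that performs template summarization over a JSON file with a sentences field. A minimal sketch of that idea follows; the template wording, the regex patterns, and the fallback value are all assumptions for illustration, not the actual script's logic (which, per the abstract, relies on named entities chosen by the template's author).

```python
import re

# Hypothetical template; the real script's template and slots may differ.
TEMPLATE = ("{storm} made landfall in {state}, dropping up to {rain} "
            "of rain and causing an estimated {damage} in damage.")

def template_summary(sentences):
    """Fill a fixed summary template from a list of sentences (hedged sketch).

    Extracts slot values with simple regex patterns standing in for
    real named-entity recognition.
    """
    text = ' '.join(sentences)

    def grab(pattern, default='[unknown]'):
        # Return the first captured group, or a placeholder if no match.
        m = re.search(pattern, text)
        return m.group(1) if m else default

    return TEMPLATE.format(
        storm=grab(r'(Hurricane \w+)'),
        state=grab(r'landfall in (\w+)'),
        rain=grab(r'(\d+(?:\.\d+)? inches)'),
        damage=grab(r'(\$\d+(?:\.\d+)? billion)'),
    )
```

This illustrates the trade-off the abstract describes: the approach is fast and produces fluent output, but the quality hinges entirely on the human-authored template and extraction rules.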
dc.description.sponsorship: NSF: IIS-1619028
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.subject: text summarization
dc.subject: deep learning
dc.subject: template filling
dc.subject: pointer-generator network
dc.subject: big data
dc.subject: computational linguistics
dc.subject: information extraction
dc.subject: neural networks
dc.subject: multi-document summarization
dc.subject: natural language processing
dc.subject: Hurricane Harvey
dc.subject: event summarization
dc.subject: topic summarization
dc.subject: big data text summarization
dc.subject: abstractive summarization
dc.subject: extractive summarization
dc.title: Big Data Text Summarization - Hurricane Harvey

