 

Big Data Text Summarization - Hurricane Irma

dc.contributor.author: Chava, Raja Venkata Satya Phanindra
dc.contributor.author: Dhar, Siddharth
dc.contributor.author: Gaur, Yamini
dc.contributor.author: Rambhakta, Pranavi
dc.contributor.author: Shetty, Sourabh
dc.date.accessioned: 2018-12-13T15:22:01Z
dc.date.available: 2018-12-13T15:22:01Z
dc.date.issued: 2018-12-13
dc.description.abstract: With the increasing rate of content generation on the Internet, there is a pressing need for tools that automate the extraction of meaningful data. Big data analytics deals with discovering patterns or implicit correlations within large collections of data. Data can be drawn from many sources, such as news websites, social media platforms (for example, Facebook and Twitter), sensors, and other IoT (Internet of Things) devices. Social media platforms like Twitter are especially important sources of data because activity on them increases significantly during major events such as hurricanes, floods, and other events of global importance. To generate summaries, we first converted the WARC file we were given into the more tractable JSON format. We then cleaned the text by removing boilerplate and redundant information, removed stopwords, and collected the most important words occurring in the documents. This ensured that the resulting summary would contain the key information from our corpus and would still be able to answer all the questions. One challenge at this point was deciding how to correlate words in order to extract the most relevant words from a document; we tried several techniques, such as TF-IDF, to resolve this. Correlating words with one another is an important factor in generating a cohesive summary: even if a word is not among the most frequently occurring words in the corpus, it can still be relevant and carry significant information about the event. Because Hurricane Irma occurred around the same time as Hurricane Harvey, a large number of documents were not about Hurricane Irma; all such documents were deemed non-relevant and eliminated.
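The abstract mentions ranking document terms with TF-IDF after stopword removal. A minimal, self-contained sketch of that ranking step might look like the following (illustrative pure Python with a toy corpus and a tiny stopword list; the project's actual pipeline used Apache Spark, per the subject keywords, and a larger stopword set):

```python
import math
import re
from collections import Counter

# Hypothetical, tiny stopword list for illustration only.
STOPWORDS = frozenset({"the", "a", "of", "in", "and"})

def tfidf_top_terms(docs, k=3):
    """For each document, return its k highest-scoring terms by TF-IDF
    after lowercasing, tokenizing, and stopword removal."""
    tokenized = [
        [w for w in re.findall(r"[a-z]+", d.lower()) if w not in STOPWORDS]
        for d in docs
    ]
    n_docs = len(tokenized)
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    top = []
    for toks in tokenized:
        tf = Counter(toks)
        scores = {
            term: (count / len(toks)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        }
        top.append([t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]])
    return top

docs = [
    "hurricane irma made landfall in florida",
    "hurricane irma evacuation orders in florida",
    "a recipe for key lime pie from florida",
]
print(tfidf_top_terms(docs))
```

Note how a term appearing in every document (here "florida") gets an IDF of zero and is never ranked highly, which is exactly why TF-IDF surfaces event-specific words rather than merely frequent ones.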
Classifying documents as relevant or non-relevant ensured that our deep learning summaries were not generated from data that was not crucial to building the final summary. We initially attempted to use Mahout classifiers, but the results were not satisfactory; instead, we used a much simpler word-filtering approach for classification, which eliminated a significant number of documents as non-relevant. For the deep learning abstractive summary, we used the Pointer-Generator technique, which implements a Recurrent Neural Network (RNN). We combined data from multiple relevant documents into single documents and thus generated multiple summaries, each corresponding to a set of documents. We wrote a Python script to post-process the generated summary, converting every alphabetic character that follows a period and a space to uppercase. This step was necessary because the whole dataset is converted to lowercase for lemmatization, stopword removal, and POS tagging. The script also converts the first letter of every POS-tagged proper noun to uppercase. ROUGE (Recall-Oriented Understudy for Gisting Evaluation) was used to evaluate the generated summary against the Gold Standard summary. The abstractive summary scored well against the Gold Standard on the ROUGE_sent evaluation. The ROUGE_para and cov_entity results were not up to the mark, but we believe this was mainly due to the writing style of the Gold Standard, as our abstractive summary was able to provide most of the information related to Hurricane Irma.
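The post-processing step described above (re-capitalizing sentence starts and proper nouns in an all-lowercase summary) can be sketched as follows. This is not the authors' actual script; the proper-noun set here is a hypothetical stand-in for the NNP tags their POS-tagging stage would have produced:

```python
import re

# Hypothetical stand-in for proper nouns identified upstream by POS tagging.
PROPER_NOUNS = {"irma", "florida", "harvey"}

def restore_case(text, proper_nouns=PROPER_NOUNS):
    """Re-capitalize a lowercased summary: uppercase the first character
    of the text, any letter following a period and a space, and the first
    letter of every known proper noun."""
    # Sentence starts: the very first letter, or a letter after ". ".
    text = re.sub(r"(?:^|(?<=\. ))[a-z]", lambda m: m.group(0).upper(), text)

    # Proper nouns: capitalize any word found in the tagged set.
    def cap_noun(m):
        word = m.group(0)
        return word.capitalize() if word.lower() in proper_nouns else word

    return re.sub(r"[A-Za-z]+", cap_noun, text)

print(restore_case("hurricane irma hit florida. thousands evacuated."))
```

Running both substitutions in sequence keeps the two concerns separate, mirroring the two distinct rules the report describes (sentence-initial capitalization and proper-noun capitalization).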
dc.description.notes: Big_Data_Text_Summarization_Report_Hurricane_Irma.pdf - Report in PDF format. Big_Data_Text_Summarization_Report_Hurricane_Irma.zip - Report in zip format. Hurricane Irma Final Presentation.pptx - Final Presentation in PowerPoint (pptx) format. Hurricane Irma Final Presentation.pdf - Final Presentation in PDF format. Code_files.zip - Code files compressed in a single zip file.
dc.description.sponsorship: NSF: IIS-1619028
dc.identifier.uri: http://hdl.handle.net/10919/86372
dc.language.iso: en_US
dc.publisher: Virginia Tech
dc.rights: Creative Commons CC0 1.0 Universal Public Domain Dedication
dc.rights.uri: http://creativecommons.org/publicdomain/zero/1.0/
dc.subject: Text Classification
dc.subject: Abstractive Summary
dc.subject: Apache Spark
dc.subject: webpage
dc.subject: Deep learning (Machine learning)
dc.title: Big Data Text Summarization - Hurricane Irma
dc.type: Presentation
dc.type: Report
dc.type: Software

Files

Original bundle (showing 1 - 5 of 5)

Name: Code_files.zip
Size: 259.72 MB
Format:

Name: Big_Data_Text_Summarization_Report_Hurricane_Irma.pdf
Size: 4.02 MB
Format: Adobe Portable Document Format

Name: Big_Data_Text_Summarization_Report_Hurricane_Irma.zip
Size: 3.85 MB
Format:

Name: Hurricane Irma Final Presentation.pptx
Size: 1.36 MB
Format: Microsoft Powerpoint XML

Name: Hurricane Irma Final Presentation.pdf
Size: 343.49 KB
Format: Adobe Portable Document Format

License bundle (showing 1 - 1 of 1)

Name: license.txt
Size: 1.5 KB
Format: Item-specific license agreed upon to submission