CS4984: Special Topics
The title of the CS4984 Special Topics course changes from year to year, for example, Computational Linguistics (2014) and Big Data Text Summarization (2018). The course also includes a graduate section, CS5984.
Browsing CS4984: Special Topics by Subject "automatic summarization"
- Automatic Summarization of News Articles about Hurricane Florence
  Wanye, Frank; Ganguli, Samit; Tuckman, Matt; Zhang, Joy; Zhang, Fangzheng (Virginia Tech, 2018-12-07)
  We present our approach for generating automatic summaries from a collection of news articles acquired from the World Wide Web relating to Hurricane Florence. Our approach consists of 10 distinct steps, at the end of which we produce three separate summaries using three distinct methods:
  1. A template summary, in which we extract information from the web page collection to fill in blanks in a template.
  2. An extractive summary, in which we extract the most important sentences from the web pages in the collection.
  3. An abstractive summary, in which we use deep learning techniques to rephrase the contents of the web pages in the collection.
  The first six steps of our approach involve extracting important words, synsets, words constrained by part of speech, a set of discriminating features, important named entities, and important topics from the collection. This information is then used by the algorithms that generate the automatic summaries.
  To produce the template summary, we employed a modified version of the hurricane summary template provided to us by the instructor. For each blank space in the modified template, we used regular expression matching with selected keywords to filter relevant sentences from the collection, and then a combination of regex matching and entity tagging to select the information for filling in the blanks. Most values also required unit conversion to capture all values from the articles, not just values expressed in a specific unit. Numerical analysis was then performed on these values to obtain either the mode or the mean of the set, and for some values, such as rainfall, the standard deviation was used to estimate the maximum.
  To produce the extractive summary, we employed existing extractive summarization libraries. To synthesize information from multiple articles, we used an iterative approach, concatenating the generated summaries and then summarizing the concatenation.
  To produce the abstractive summary, we employed existing deep learning summarization techniques, in particular a pre-trained Pointer-Generator neural network model. As with the extractive summary, we clustered the web pages in the collection by topic before running them through the neural network model, to reduce the amount of repeated information produced.
  Of the three summaries we generated, the template summary is the best overall due to its coherence. The abstractive and extractive summaries both provide a fair amount of information but are severely lacking in organization and readability, and they include specific details that are irrelevant to the hurricane. All three summaries could be improved with further data cleaning, and the template summary could easily be extended to cover more information about the event so that it would be more complete.
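  The template-filling step described in this abstract amounts to regex extraction, unit normalization, and simple aggregation over the matched values. The Python sketch below illustrates that general idea for a single blank (wind speed); the pattern, conversion factor, and template sentence are illustrative assumptions, not the team's actual code.

  ```python
  import re
  from statistics import mean

  # Hypothetical pattern: matches phrases such as "winds of 105 mph" or "150 km/h winds".
  WIND_PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*(mph|km/h)", re.IGNORECASE)

  def extract_wind_speeds_mph(sentences):
      """Collect wind-speed mentions, converting km/h to mph so every value shares one unit."""
      speeds = []
      for sentence in sentences:
          for value, unit in WIND_PATTERN.findall(sentence):
              speed = float(value)
              if unit.lower() == "km/h":
                  speed *= 0.621371  # unit conversion, as the abstract describes
              speeds.append(speed)
      return speeds

  def fill_wind_blank(sentences):
      """Fill an assumed wind-speed blank in the template with the mean of the extracted values."""
      speeds = extract_wind_speeds_mph(sentences)
      if not speeds:
          return "Hurricane Florence brought sustained winds of ___ mph."
      return f"Hurricane Florence brought sustained winds of about {mean(speeds):.0f} mph."

  if __name__ == "__main__":
      docs = [
          "Florence made landfall with winds of 90 mph on Friday.",
          "Forecasters had reported 150 km/h winds offshore earlier in the week.",
      ]
      print(fill_wind_blank(docs))
  ```

  A mode-based aggregate, which the abstract also mentions, would simply swap statistics.mean for statistics.mode over rounded values.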
- Computational Linguistics Hurricane Group
  Crowder, Nicholas; Nguyen, David; Hsu, Andy; Mecklenburg, Will; Morris, Jeff (2014-12)
  The problem/project-based learning described in our presentation and report addresses automatic summarization of web content using natural language processing. Initially, we used simple techniques such as word frequencies, WordNet, and n-grams to create summaries. Later approaches became more complex with the introduction of tools such as Mahout and k-means for topic modeling and clustering. This work culminated in the use of custom templates and a grammar to generate English sentences that accurately summarize a corpus. Our English summary was created using a grammar alongside regular expressions to extract information. The earlier units built up to the construction of quality regular expressions, supported by a clean dataset and additional tools such as a classifier trained on our data and a part-of-speech tagger.
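  The early units this abstract mentions rely on word frequencies to pick summary-worthy sentences. Below is a minimal, assumed sketch of that style of frequency-based extractive scoring in Python; the sentence splitting, tokenization, and example text are illustrative and omit the WordNet and n-gram components.

  ```python
  import re
  from collections import Counter

  def summarize_by_frequency(text, num_sentences=2):
      """Rank sentences by the average frequency of their words, a simple extractive baseline."""
      sentences = re.split(r"(?<=[.!?])\s+", text.strip())
      words = re.findall(r"[a-z']+", text.lower())
      freqs = Counter(words)

      def score(sentence):
          tokens = re.findall(r"[a-z']+", sentence.lower())
          return sum(freqs[t] for t in tokens) / (len(tokens) or 1)

      chosen = sorted(sentences, key=score, reverse=True)[:num_sentences]
      # Emit the chosen sentences in their original order for readability.
      return " ".join(s for s in sentences if s in chosen)

  if __name__ == "__main__":
      article = (
          "The hurricane made landfall early Friday. Officials ordered evacuations "
          "along the coast. The hurricane weakened as it moved inland. Rainfall "
          "totals broke records in several counties."
      )
      print(summarize_by_frequency(article))
  ```

  In practice, stopword removal and synonym grouping (for example via WordNet, as the abstract describes) would be layered on top of this frequency baseline.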