Browsing by Author "Lee, Jun"
- Big Data: New Zealand Earthquakes Summary
  Bochel, Alexander; Edmisten, William; Lee, Jun; Chandalura, Rohit (Virginia Tech, 2018-12-14)

  The purpose of this Big Data project was to create a computer-generated text summary of a major earthquake event in New Zealand. The summary was to be built from a large webpage dataset supplied to our team, containing 280MB of data. Our team used basic and advanced machine learning techniques to create the summary. Research into optimal ways of creating such summaries matters because it lets us analyze large bodies of text and identify their most important parts. Writing an accurate summary by hand takes a long time and may even be impossible given the number of documents in our dataset; using computers to do this automatically drastically increases the rate at which important information can be extracted from a collection.

  The process our team followed was as follows. First, we extracted the most frequently appearing words in our dataset. Second, we examined these words and tagged them with their parts of speech. Next, we found and examined the most frequent named entities. We then improved our set of important words through TF-IDF vectorization and repeated the prior steps with the improved set. Finally, we built an extractive summary and used templating to produce our final summary. Minimal sketches of these steps appear after this entry.

  Our team made several discoveries along the way. We learned how to use Zeppelin notebooks effectively as a tool for prototyping code. We found an efficient way to process our large dataset on the Hadoop cluster with PySpark. We learned how to clean our dataset effectively before running our programs on it, and how to create a summary from a template populated with our important named entities.

  Our final summary was generated with the templating system, a form of abstractive summarization, and was readable and accurate with respect to the dataset we were given. The purely extractive technique also gave decent results, producing mostly readable summaries that still included some noise. Because the templated summary is very specific, it is the most coherent and contains only relevant information.
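As a rough illustration of the first step, here is a minimal word-frequency count in PySpark, the stack the report names for running the dataset on the Hadoop cluster. The input path and whitespace tokenization are assumptions; the report does not describe either.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("word-frequency").getOrCreate()

# Hypothetical HDFS path; the report does not name the dataset files.
lines = spark.sparkContext.textFile("hdfs:///data/nz_earthquake_pages.txt")

# Classic word count: tokenize, count occurrences, sort by frequency.
counts = (
    lines.flatMap(lambda line: line.lower().split())
         .map(lambda word: (word, 1))
         .reduceByKey(lambda a, b: a + b)
         .sortBy(lambda pair: pair[1], ascending=False)
)

for word, count in counts.take(20):
    print(word, count)
```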
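The part-of-speech tagging and named-entity steps could look like the sketch below. The report does not say which NLP library the team used, so spaCy (with its small English model, installed via `python -m spacy download en_core_web_sm`) is an assumption here.

```python
import spacy

# Assumes the en_core_web_sm model has been downloaded separately.
nlp = spacy.load("en_core_web_sm")
doc = nlp("A magnitude 7.8 earthquake struck Kaikoura, New Zealand in 2016.")

# Part-of-speech tag for each token.
for token in doc:
    print(token.text, token.pos_)

# Named entities with their labels (GPE, DATE, ...).
for ent in doc.ents:
    print(ent.text, ent.label_)
```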
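For the TF-IDF step, the report does not name a library; scikit-learn's TfidfVectorizer is one common choice, and the documents below are hypothetical stand-ins for the cleaned webpage collection.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical documents standing in for the cleaned webpage dataset.
documents = [
    "a magnitude 7.8 earthquake struck near kaikoura new zealand",
    "aftershocks continued across the south island after the earthquake",
    "officials reported damage to roads and buildings in wellington",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents)

# Rank terms by their highest TF-IDF score in any document.
terms = vectorizer.get_feature_names_out()
scores = tfidf.max(axis=0).toarray().ravel()
ranked = sorted(zip(terms, scores), key=lambda pair: pair[1], reverse=True)
print(ranked[:10])
```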
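The final templating step amounts to filling a fixed sentence pattern with the extracted named entities and values. The slot names, values, and template text below are hypothetical; the report does not show its actual template.

```python
# Hypothetical slot values; in the project these came from the most
# frequent named entities and the TF-IDF-improved word set.
slots = {
    "magnitude": "7.8",
    "location": "Kaikoura, New Zealand",
    "date": "November 14, 2016",
}

template = (
    "A magnitude {magnitude} earthquake struck {location} on {date}, "
    "causing widespread damage."
)

print(template.format(**slots))
```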
- Event Trend Detector
  Ward, Ryan; Lee, Jun; Beard, Stuart; Edwards, Skylar; Su, Spencer (Virginia Tech, 2018-05-07)

  The Global Event and Trend Archive Research (GETAR) project is supported by NSF (IIS-1619028 and 1619371) through 2019. It will devise interactive, integrated, digital library/archive systems coupled with linked and expert-curated webpage/tweet collections. In support of GETAR, the 2017 project built a tool that scrapes the news to identify important global events and generates seeds (URLs of relevant webpages, along with Twitter-related hashtags, keywords, and mentions). A display of the results can be seen from the hall outside 2030 Torgersen Hall.

  This project extends that work in several ways. First, the quality of the earlier work has been improved, as is evident in changes to the clustering algorithm and to the user interface that displays clusters of global events. Second, in addition to events reported in the news, trends are now identified, and a database of trends and related events was built with a corresponding user interface matching the client's preferences. Third, the detection results are connected to software for collecting tweets and crawling webpages, so automated daily runs find and archive webpages related to each trend and event.

  The final deliverables include a trend detection feature based on Reddit news, integration of Google Trends into trend detection, an improved k-means clustering algorithm that produces more accurate clusters, an improved UI for important global events that reflects the client's requests, and an aesthetically pleasing UI to display the trend information. Sketches of the clustering and trend-detection pieces appear after this entry. Work accomplished included setting up a table of tagged entities for trend detection, configuring the clustering and trends database to work on our personal machines, and completing the deliverables. Many lessons were learned about the importance of using existing tools, starting early, doing research, holding regular meetings, and keeping good documentation.
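The report says the clustering uses k-means but does not name an implementation. A minimal sketch with scikit-learn, clustering TF-IDF vectors of hypothetical headlines, looks like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical headlines standing in for the scraped news items.
headlines = [
    "Earthquake strikes off the coast of Japan",
    "Tsunami warning issued after Japan earthquake",
    "Stock markets rally on strong earnings",
    "Tech shares lead market gains",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(headlines)

# k=2 is chosen for illustration; the project tuned its own clustering.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
for headline, label in zip(headlines, kmeans.labels_):
    print(label, headline)
```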
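The Reddit trend-detection deliverable might be approached as in the sketch below, using PRAW, a common Reddit API client; the report does not name its tooling, and the credentials and the crude word-count trend signal are placeholders.

```python
from collections import Counter

import praw  # one common Reddit API client; an assumption here

# Hypothetical credentials; registration at reddit.com/prefs/apps is required.
reddit = praw.Reddit(
    client_id="CLIENT_ID",
    client_secret="CLIENT_SECRET",
    user_agent="trend-detector/0.1",
)

# Pull top r/news submissions and count repeated title words
# as a crude trend signal.
words = Counter()
for submission in reddit.subreddit("news").hot(limit=100):
    words.update(w.lower() for w in submission.title.split() if len(w) > 4)

print(words.most_common(10))
```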
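For the Google Trends integration, the report does not say how trends were fetched; pytrends, an unofficial Google Trends client, is one possibility:

```python
from pytrends.request import TrendReq

# Unofficial Google Trends client; this library choice is an assumption.
pytrends = TrendReq(hl="en-US", tz=360)

# Daily trending searches for a region, returned as a pandas DataFrame.
trending = pytrends.trending_searches(pn="united_states")
print(trending.head(10))
```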