
English Wikipedia on Hadoop Cluster

Date

2016-05-04

Abstract

To develop and test big data software, a large dataset is required. The full English Wikipedia dataset would serve well for testing and benchmarking purposes. Loading this dataset onto a system such as an Apache Hadoop cluster, and indexing it into Apache Solr, would allow researchers and developers at Virginia Tech to benchmark configurations and big data analytics software. This project focuses on importing the full English Wikipedia into an Apache Hadoop cluster and indexing it with Apache Solr so that it can be searched.
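
To illustrate the search side of the pipeline, the following is a minimal sketch of sending one article to Solr through its JSON update API. The Solr host, core name ("wikipedia"), and field names are assumptions for illustration, not the project's actual configuration.

    # Minimal sketch: index one Wikipedia page into Solr via its JSON update API.
    # Host, core name, and field names are illustrative assumptions.
    import requests

    SOLR_UPDATE_URL = "http://localhost:8983/solr/wikipedia/update?commit=true"

    doc = {
        "id": "12",                       # Wikipedia page id
        "title": "Anarchism",             # page title
        "text": "Anarchism is a political philosophy ...",  # article body
    }

    # Solr's update endpoint accepts a JSON array of documents.
    resp = requests.post(SOLR_UPDATE_URL, json=[doc], timeout=30)
    resp.raise_for_status()
    print(resp.json()["responseHeader"]["status"])  # 0 indicates success

In practice the documents would be read back out of HDFS (or the Avro files) and posted in batches rather than one at a time.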

A prototype was designed and implemented. A small subset of the Wikipedia data was unpacked and imported into Apache Hadoop's HDFS. The entire Wikipedia dataset was also downloaded onto a Hadoop cluster at Virginia Tech. A portion of the dataset was converted from XML to Avro and imported into HDFS on the cluster.
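
The conversion and import step can be sketched as follows, assuming the standard MediaWiki XML export layout (page elements containing title, id, and revision text). The Avro schema, file paths, and HDFS target directory are illustrative assumptions rather than the project's actual script.

    # Minimal sketch: stream pages out of a Wikipedia XML dump, write them as Avro,
    # and push the Avro file into HDFS. Schema, paths, and the HDFS directory are
    # illustrative assumptions.
    import subprocess
    import xml.etree.ElementTree as ET

    import fastavro

    SCHEMA = fastavro.parse_schema({
        "name": "WikipediaPage",
        "type": "record",
        "fields": [
            {"name": "id", "type": "long"},
            {"name": "title", "type": "string"},
            {"name": "text", "type": "string"},
        ],
    })

    def localname(tag):
        # Strip the MediaWiki export namespace, e.g. "{...}page" -> "page".
        return tag.rsplit("}", 1)[-1]

    def pages(xml_path):
        # Stream <page> elements so the full dump never has to fit in memory.
        for _, elem in ET.iterparse(xml_path, events=("end",)):
            if localname(elem.tag) != "page":
                continue
            fields = {}
            for child in elem.iter():
                name = localname(child.tag)
                if name in ("id", "title", "text") and name not in fields:
                    fields[name] = child.text or ""
            yield {
                "id": int(fields["id"]),          # first <id> is the page id
                "title": fields.get("title", ""),
                "text": fields.get("text", ""),
            }
            elem.clear()  # free memory for the processed page

    def convert_and_import(xml_path, avro_path, hdfs_dir):
        with open(avro_path, "wb") as out:
            fastavro.writer(out, SCHEMA, pages(xml_path))
        # Push the Avro file into HDFS; assumes the hdfs client is on PATH.
        subprocess.run(["hdfs", "dfs", "-put", "-f", avro_path, hdfs_dir], check=True)

    convert_and_import("enwiki-part.xml", "enwiki-part.avro", "/user/wikipedia/avro/")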

Future work would be to finish unpacking the full dataset and to repeat the steps carried out with the prototype system for all of Wikipedia. Unpacking the remaining data, converting it to Avro, and importing it into HDFS can be done with minimal adjustments to the script written for this job. Run continuously, this job would take an estimated 30 hours to complete.
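
A minimal sketch of such a batch job is given below, assuming the remaining dump parts are bzip2-compressed files in a local directory and reusing a convert_and_import helper like the one sketched above. The file-name pattern, directories, and module name are illustrative assumptions.

    # Minimal sketch of the batch job: decompress each remaining dump part,
    # convert it to Avro, and import it into HDFS, one part at a time.
    import bz2
    import glob
    import os
    import shutil

    from convert_wiki import convert_and_import  # conversion helper sketched above (hypothetical module)

    DUMP_DIR = "/data/enwiki/parts"   # downloaded .bz2 dump parts (assumed location)
    WORK_DIR = "/data/enwiki/work"    # scratch space for decompressed XML and Avro

    for part in sorted(glob.glob(os.path.join(DUMP_DIR, "enwiki-*.xml*.bz2"))):
        base = os.path.basename(part)[:-len(".bz2")]
        xml_path = os.path.join(WORK_DIR, base)
        avro_path = xml_path + ".avro"

        # Unpack one part at a time to keep local disk usage bounded.
        with bz2.open(part, "rb") as src, open(xml_path, "wb") as dst:
            shutil.copyfileobj(src, dst)

        convert_and_import(xml_path, avro_path, "/user/wikipedia/avro/")

        # Remove local intermediates once the part is safely in HDFS.
        os.remove(xml_path)
        os.remove(avro_path)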

Description

CS 4624 Multimedia/Hypertext/Information Retrieval final project.

Files submitted:
CS4624WikipediaHadoopReport.docx - Final Report in DOCX
CS4624WikipediaHadoopReport.pdf - Final Report in PDF
CS4624WikipediaHadoopPresentation.pptx - Final Presentation in PPTX
CS4624WikipediaHadoopPresentation.pdf - Final Presentation in PDF
wikipedia_hadoop.zip - Project files and data

Keywords

Wikipedia, Hadoop Cluster, Solr, XML, Avro, Apache
