Show simple item record

dc.contributor.author: Doan, Viet
dc.contributor.author: Crawford, Matt
dc.contributor.author: Nicholakos, Aki
dc.contributor.author: Rizzo, Robert
dc.contributor.author: Salopek, Jackson
dc.date.accessioned: 2019-08-02T19:26:00Z
dc.date.available: 2019-08-02T19:26:00Z
dc.date.issued: 2019-05-08
dc.identifier.uri: http://hdl.handle.net/10919/92622
dc.description.abstract: This submission resulted from a semester-long team project focused on obtaining the data of 50 DMO websites, parsing the data, storing it in a database, and then visualizing it on a website. We worked on this project for our client, Dr. Florian Zach, as part of the Multimedia / Hypertext / Information Access course taught by Dr. Edward A. Fox. We created a rudimentary website with much of the infrastructure necessary to visualize the data once it had been entered into the database. We experimented extensively with web scraping technologies such as Heritrix3 and Scrapy, but then learned that members of the Internet Archive could give us the data we wanted. We therefore tabled our work on web scraping and focused instead on the website and visualizations. We constructed an API in GraphQL to query the database and relay the fetched data to the front-end visualizations. The website with the visualizations was hosted on Microsoft Azure using a serverless model. The website has a homepage, a page for visualizations, and a page with information about the project, along with a functional navigation bar to switch between the three pages. The homepage currently features a basic map of the USA with the ability to change a state's color on mouse hover. After complications with funding, and after learning that the Internet Archive would not be able to provide the data in time for us to complete the project, we pivoted away from the website and visualizations and refocused on data collection and parsing. Using Scrapy, we gathered the homepages of 98 tourism destination websites for each month they were available, from April 2019 back to January 1996. We then used a series of Python scripts to parse this data into a dictionary of general information about the scraped sites, as well as a set of CSV files recording the external links of the websites in the given months.
dc.description.sponsorship: NSF (IIS-1619028 and 1619371)
dc.language.iso: en_US
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Tourism
dc.subject: WaybackMachine
dc.subject: Python
dc.subject: Scrapy
dc.subject: Parsing
dc.subject: Web Scraping
dc.title: Tourism Destination Websites
dc.type: Presentation
dc.type: Report
dc.description.notes: The files associated with this upload are:
- "TourismDestinationWebsites.pdf": our report in PDF format.
- "TourismDestinationWebsites.docx": our report in editable Word .docx format.
- "TourismDestinationWebsitesFinalPresentation.pdf": our final presentation in PDF format.
- "TourismDestinationWebsitesFinalPresentation.pptx": our final presentation in editable PowerPoint format.
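The abstract describes parsing the scraped homepages into CSV files that record each site's external links per month. A minimal sketch of that parsing step is shown below, using only the Python standard library; the function names, CSV column names, and example URLs are illustrative assumptions, not the team's actual scripts.

```python
import csv
import io
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags in an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def external_links(html, site_domain):
    """Return the links in `html` that point outside `site_domain`.

    Relative links (empty netloc) are treated as internal and skipped.
    """
    parser = LinkExtractor()
    parser.feed(html)
    return [
        link
        for link in parser.links
        if urlparse(link).netloc and urlparse(link).netloc != site_domain
    ]


def write_links_csv(rows, fileobj):
    """Write (site, month, external_link) rows to a CSV file object."""
    writer = csv.writer(fileobj)
    writer.writerow(["site", "month", "external_link"])
    writer.writerows(rows)


# Hypothetical snapshot of one DMO homepage for one archived month.
html = '<a href="http://partner.example.org/deals">Deals</a><a href="/about">About</a>'
links = external_links(html, "www.visitexample.com")
buf = io.StringIO()
write_links_csv(
    [("www.visitexample.com", "1999-06", link) for link in links], buf
)
```

In a full pipeline, the same extraction would run once per site per archived month, appending one row per external link to the month's CSV file.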

