CS4624: Multimedia, Hypertext, and Information Access
Permanent URI for this collection
This collection contains the final projects of the students in the course Computer Science 4624: Multimedia, Hypertext, and Information Access, at Virginia Tech.
This course, taught by Professor Ed Fox, is part of the Human-Computer Interaction track, the Knowledge, Information, and Data track, and the Media/Creative Computing track. The curriculum introduces the architectures, concepts, data, hardware, methods, models, software, standards, structures, technologies, and issues involved with: networked multimedia (e.g., image, audio, video) information, access and systems; hypertext and hypermedia; electronic publishing; virtual reality. Coverage includes text processing, search, retrieval, browsing, time-based performance, synchronization, quality of service, video conferencing and authoring.
Browsing CS4624: Multimedia, Hypertext, and Information Access by Content Type "Software"
- 21st Inventory. Garner, Elliot; Dean, Brandon; Mason, Brannon (2015-05-14). Currently, Network Infrastructure & Services (NI&S) takes inventory of equipment assigned to employees (computers, laptops, tablets, tools) and sends reports of higher-value items to the Controller’s Office. All items have a VT tag number and a CNS number, which can currently only be matched up via an Oracle Forms interface. An inventory clerk must personally verify the existence and location of each piece of equipment. An improvement would be an app that scans an inventory number or barcode and records the GPS location of the scan and the custodian of that equipment. This data could then be uploaded to a more accessible Google spreadsheet or similar web-based searchable table. The 21st Century Inventory app aims to solve this problem by integrating barcode scanning into a mobile app that writes the scanned asset ID to a CSV-formatted output file. By directly tying a product’s asset ID to the user and their information, allowing a product’s barcode to be scanned to simplify inventory lookup, saving product information to a CSV file, and letting the user edit a product’s current information in the application, we provide a significant upgrade to a system that currently relies solely on an Oracle Forms interface.
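A rough sketch of the CSV output step described above (illustrative only, not the team's code; the column layout and the record_scan helper are assumptions):

import csv
from datetime import datetime, timezone

def record_scan(csv_path, asset_id, latitude, longitude, custodian):
    """Append one scanned inventory record to a CSV output file."""
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow([
            asset_id,                                   # scanned barcode / asset ID
            f"{latitude:.6f}", f"{longitude:.6f}",      # GPS fix at scan time
            custodian,                                  # person assigned the equipment
            datetime.now(timezone.utc).isoformat(),     # timestamp of the scan
        ])

# Example call; real values would come from the barcode scanner and the device's GPS.
record_scan("inventory.csv", "VT-0012345", 37.2296, -80.4139, "jdoe")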
- 3-Dimensional Weather Visualisation. Nimitz, Sarah; Forsyth, Duke; Knittle, Andrew (2016-05-04). Project deliverables are provided, including a detailed description of the creation of a polling and parsing system for keeping track of severe weather warnings, as delivered by the National Weather Service, and an interface to allow the user to view a representation of Doppler radar data in three dimensions. The report describes the roles of the team members, the work accomplished over the Spring 2016 semester, and the methods by which the team accomplished this work.
- 4624S14DSpaceEmbargo. Schiefer, Jeb; Sharma, Paul (2014-05-07). DSpace [1] is an open source repository application used by many organizations and institutions. It provides a way to access and manage all kinds of digital documents. The 4624S14DSpaceEmbargo project was intended to extend the functionality of the ItemImport command line tool. Specifically, the goal was to add the ability to embargo uploaded items until a specified date. This functionality was already implemented for the two web interfaces (XMLUI and JSPUI). DSpace is used by the Virginia Tech library in the form of VTechWorks [2]. The project was overseen initially by Keith Gilbertson and Zhiwu Xie, who work for the Virginia Tech library. Near the end of the semester we were introduced to another software developer for the library, Jay Chen. We helped Jay set up the DSpace environment on his local computer and demonstrated how to use the ItemImport command line tool. Embargoes are used to limit access until a specified date. An embargo can be applied as a resource policy at the item, group, or bitstream level. An item level embargo restricts access to all of the files uploaded for a particular item. A group level embargo restricts access for anyone who is a member of the specified group; by default, the Anonymous group is used. A bitstream level embargo restricts access only to a specific uploaded file. The date format expected for setting an embargo must adhere to the ISO 8601 date format [3], specifically the YYYY-MM-DD, YYYY-MM, and YYYY variations. The deliverables for this project were the source code and this documentation. The source code will be available on VTechWorks as well as GitHub. The GitHub repository [4] will be more up to date than the VTechWorks copy because we will continue some work on the project after the due date for this project, based on feedback from the DSpace developers. The JIRA ticket for this feature to be implemented in DSpace 5.0 is DS-1996 [5]. [1] DuraSpace, “DSpace”, 2014, http://dspace.org/ [2] Virginia Tech, “VTechWorks”, 2014, http://vtechworks.lib.vt.edu/ [3] ISO, “Date and time format - ISO 8601,” 2014, http://www.iso.org/iso/home/standards/iso8601.htm [4] GitHub, “jebschiefer/DSpace,” 2014, https://github.com/jebschiefer/DSpace/ [5] DuraSpace JIRA, “[DS-1996] Embargo Support in ItemImport,” 2014, https://jira.duraspace.org/browse/DS-1996
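Illustrative only (ItemImport itself is Java; this is not the project's code): the three ISO 8601 embargo date variants the abstract mentions could be normalized like this in Python:

from datetime import date

def parse_embargo_date(value):
    """Normalize YYYY-MM-DD, YYYY-MM, or YYYY to the first day it covers."""
    parts = value.split("-")
    if len(parts) == 3:
        year, month, day = int(parts[0]), int(parts[1]), int(parts[2])
    elif len(parts) == 2:
        year, month, day = int(parts[0]), int(parts[1]), 1
    elif len(parts) == 1 and parts[0]:
        year, month, day = int(parts[0]), 1, 1
    else:
        raise ValueError("Unsupported embargo date: " + repr(value))
    return date(year, month, day)

for example in ("2014-09-01", "2014-09", "2015"):
    print(example, "->", parse_embargo_date(example))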
- Analyzing Microblog Feeds to Trade Stocks. Watts, Joseph; Anderson, Nick; Asbill, Connor; Mehr, Joseph (Virginia Tech, 2017-05-10). The goal of this project is to leverage microblogging data about the stock market to predict price trends and execute trades based on these predictions. Predicting the price trends of stocks with microblogging data involves a complex opinion aggregation model. For this, we built upon previous research, specifically a paper called "CrowdIQ" submitted by a team including Virginia Tech faculty. This paper details a method of aggregating an accurate opinion by modeling judge reliability and interdependence. Once the overall sentiment of the judges was deduced, we built trading strategies that take this information into account to execute trades. The first step of the project was a sentiment analysis of posts on a microblogging site named StockTwits. These messages can contain a label indicating a bullish or bearish sentiment, which helps indicate a specific position to take on a given stock. However, most users choose not to apply these labels to their posts, so a classifier for the unlabeled posts is required to autonomously use StockTwits to drive the proposed trading strategies. With a working sentiment analysis model, we implemented the opinion aggregation model described by CrowdIQ. This can gauge an accurate market sentiment for a particular stock based on the collection of sentiments received from users on StockTwits. The next step was the creation of a trading simulation platform, including a complete virtual portfolio management system and an API for retrieving historical and current stock data. These tools allow us to run quick and repeatable tests of our trading strategies on historical data, and to compare the performance of strategies by running them against the same historical data. After we had a viable testing environment set up, we implemented trading strategies. This required research into and analysis of other attempts to use microblogging data to predict stock returns. The testing environment was focused on a set of stocks consistent with those used in CrowdIQ. The implementation of the CrowdIQ strategy served as a baseline against which we compared our results. Development of new trading strategies is an open-ended task that involved a process of trial and error. It is possible for a strategy to find success in 2014 but not perform as well in other years, because market climates can be fickle. To assess how much our strategies' success depends on the market climate, we also tested against data for the year 2015 and compared the performance. The final deliverable is a viable trading simulation environment coupled with various trading strategies and an analysis of their performance in 2014 and 2015. The analysis of each strategy's performance indicated that our sentiment-based strategies perform better than the index in bullish markets like that of 2014, but, when they encounter a bear market, they typically make poor trading decisions which result in a loss of value.
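A minimal sketch of turning aggregated bullish/bearish sentiment into a trade signal (a naive majority average, not the CrowdIQ model; the posts, ticker, and thresholds are invented):

# Naive stand-in for the opinion aggregation step: average labeled posts
# per ticker, then map the aggregate to a trade action.
posts = [
    {"ticker": "AAPL", "label": "bullish"},
    {"ticker": "AAPL", "label": "bearish"},
    {"ticker": "AAPL", "label": "bullish"},
]

def aggregate_sentiment(posts, ticker):
    scores = [1 if p["label"] == "bullish" else -1
              for p in posts if p["ticker"] == ticker]
    return sum(scores) / len(scores) if scores else 0.0

def trade_signal(sentiment, buy_threshold=0.2, sell_threshold=-0.2):
    if sentiment >= buy_threshold:
        return "BUY"
    if sentiment <= sell_threshold:
        return "SELL"
    return "HOLD"

s = aggregate_sentiment(posts, "AAPL")
print("AAPL sentiment %+.2f -> %s" % (s, trade_signal(s)))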
- Arabic News Article Summarization. Ayoub, Souleiman; Freeman, Julia (2015-05-14). This project takes Arabic PDF news articles and produces results from our new program that indexes, categorizes, and summarizes them. We fill out a template that summarizes each news article with predetermined attributes. These values are obtained using a named entity recognizer (NER) that recognizes organizations and people, topic generation using an LDA algorithm, and direct extraction of each article’s author and date. We use LucidWorks Fusion (a Solr-based system) to help index our data and to provide an interface for the user to search and browse the articles with their summaries; Solr is used for information retrieval. The final program should enable end users to sift through news articles quickly.
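A hedged illustration of the LDA topic-generation step mentioned above (not the project's pipeline, which relied on LucidWorks Fusion/Solr; the toy documents are invented, and real input would be tokenized Arabic text):

from gensim import corpora
from gensim.models import LdaModel

docs = [
    ["election", "minister", "vote", "parliament"],
    ["match", "team", "goal", "league"],
    ["election", "vote", "campaign", "minister"],
]

dictionary = corpora.Dictionary(docs)                    # term <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in docs]       # bag-of-words vectors
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10)

for topic_id, terms in lda.print_topics(num_words=4):
    print(topic_id, terms)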
- Artificial Immune System (AIS) Based Intrusion Detection System (IDS) for Smart Grid Advanced Metering Infrastructure (AMI) Networks. Song, Kevin; Kim, Paul; Tyagi, Vedant; Rajasekaran, Shivani (Virginia Tech, 2018-05-09). The Smart Grid is a large system consisting of many components that contribute to the bidirectional exchange of power. It is “smart” because vast amounts of data are transferred between the meter components and the control systems which manage the data. The scale of the smart grid is too large to micromanage, which is why smart grids must use Artificial Intelligence (AI) to be resilient and self-healing against cyber-attacks that occur on a daily basis. Unlike traditional cyber defense methods, Artificial Immune System (AIS) principles have an advantage because they can detect attacks from inside the network and stop them before they occur. The goal of the report is to provide a proof of concept that an AIS can be implemented on smart grid AMI (Advanced Metering Infrastructure) networks and, furthermore, can detect intrusions and anomalies in the network data. The report describes a proof of concept implementation of an AIS for intrusion detection with a synthetic packet capture (pcap) dataset containing common Internet protocols used in smart grid AMI networks. The report also provides the necessary background for understanding the implementation in the later sections. The background section defines what a smart grid is and how its Advanced Metering Infrastructure (AMI) works, describing all three networks the AMI consists of. The Wide Area Network (WAN) is one of the three networks, and our project was scoped down to the WAN. The report goes on to discuss the current cyber threats as well as defense solutions related to the smart grid network infrastructure today. One of the most widely used defense mechanisms is the Intrusion Detection System (IDS), which has many important techniques that can be used in the AIS based IDS implementation of this report. The most commonly used AIS algorithms are defined; specifically, the Negative Selection Algorithm (NSA) is used for our implementation. The NSA components used in the implementation section are thoroughly explained and the AIS based IDS framework is defined. A list of AIS uses and values in enterprise networks is presented, as well as research on current NSA use in AIS implementations. The latter portion of the report consists of the design and implementation. Due to data constraints and various other limitations, the team wasn’t able to complete the initial implementation successfully. Therefore, a second implementation design was created, leading to the main implementation, which meets the project’s objective. The implementation employs a proof of concept approach using a C# console application which performs all steps of an AIS on user-created network data. In conclusion, the second implementation can detect intrusions in a synthetic dataset of “man-made” network data. This demonstrates that the AIS algorithm works and suggests that, if the implementation were scaled up and used on real-time WAN network data, it would run successfully and help prevent attacks. The report also documents the limitations and problems one can run into when attempting to implement a solution of this scale.
The ending sections of the report consist of the Requirements, Assessment, Assumptions, Results, and Lessons Learned, followed by the Acknowledgments to MITRE Corporation, which helped immensely throughout the development of the report.
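A minimal sketch of the Negative Selection Algorithm idea used in the report (the team's implementation is a C# console application over network data; the feature vectors, matching radius, and detector count here are invented):

import random

SELF = [(0.1, 0.2), (0.15, 0.25), (0.2, 0.2)]   # features of "normal" traffic
MATCH_RADIUS = 0.15

def matches(detector, sample, radius=MATCH_RADIUS):
    return sum((d - s) ** 2 for d, s in zip(detector, sample)) ** 0.5 < radius

def generate_detectors(n, self_set):
    """Keep random candidates that do NOT match any self (normal) sample."""
    detectors = []
    while len(detectors) < n:
        candidate = (random.random(), random.random())
        if not any(matches(candidate, s) for s in self_set):
            detectors.append(candidate)
    return detectors

def is_anomalous(sample, detectors):
    return any(matches(d, sample) for d in detectors)

detectors = generate_detectors(200, SELF)
print(is_anomalous((0.12, 0.22), detectors))  # near self: likely normal (False)
print(is_anomalous((0.85, 0.90), detectors))  # far from self: likely anomalous (True)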
- Autism Support Portal. Quayum, Sib; Galliher, Ryan; Nagies, Kenneth; Ritchie, Ayumi (Virginia Tech, 2018-05-08). The Autism Support Portal project involves the creation of a portal site that helps users find the information they need about autism. The primary goal of the project is to help users quickly find credible information for their specific need. With the amount of information available online, it can be hard for those interested in autism to find information that is not only credible but also useful and updated to reflect current research. The site needs to be easy to use both for the users and for the future administrators of the site. The site also needs to help guide people towards reliable resources while potentially exposing users to new resources. To ensure that our project meets the needs of our potential users, the project was divided into phases involving data collection, research, design, and implementation. To gather data for our project, we used resources such as the Virginia Tech Center for Autism Research and their connections to send out anonymous surveys to some of our potential users. We asked several questions pertaining to their interests in the site, what they needed from the site, and what resources were useful to them. This data allowed us to implement a site as specific to the user needs as possible while also giving us other resources from which to collect credible information. In addition, Dr. Scarpa provided many other resources that addressed some of the needs of users, allowing this project to focus on the implementation of our search engine and on guiding our users towards effective answers, solutions, and resources. Upon entering the site, users have direct access to the search and are provided with search tips and external resources to help them. The site is set up entirely using WordPress.org. WordPress was chosen as the CMS, or content management system, for the site because it is very easy to use and allows administrators to do a lot for the site without the need for extensive technical knowledge. The site needs to be very easy to modify and change after its initial set up so that those who work on it at the Virginia Tech Center for Autism Research can do so quickly. However, using solely WordPress and its plugins created a variety of new obstacles stemming from the different uses of different plugins. To save time and money, research needed to be done on several different plugins to find the ones that not only met the needs of the site but were also affordable. Even with these obstacles, using WordPress not only allows for easier creation and maintenance, but also easy modification of the site if additional features are wanted or needed. The design of the site allows users to find necessary information very quickly through alphabetically sorted lists that will expose the user to terms that may have been unknown previously. One of the problems with researching autism is asking the right questions. For example, a child with a special need such as autism needs an IEP, or individualized education program, which requires a specific search for an IEP. When a user explores education information, the user also needs to be shown some specifics such as IEPs. This example also illustrates the need to have our site easily modifiable, as a change in law or name would require someone to change the resource in the site.
Using the data and implementation techniques discussed, the end result is a portal composed of help and resource pages as well as a refined search that links questions to reliable answers. In addition, the site is designed such that any user without prior technical experience can use the site and adjust the sites that are searched, as well as any other information within the site that needs to be changed.
- Automated Crisis Collection Builder - Final Project Report. Brian Hays; Alex Zhang; Mitchel Rifae; Trevor Kappauf; Parsa Nikpour (2023-11-30). In the contemporary digital landscape, access to timely and relevant information during crisis events is crucial for effective decision-making and response coordination. This project addresses the need for a specialized web application equipped with a sophisticated crawler system to streamline the process of collecting pertinent information related to a user-specified crisis event. The inherent challenge lies in the vast and dynamic nature of online content, where identifying and extracting valuable data from a multitude of sources can be overwhelming. This project aims to empower users by allowing them to input a list of newline-delimited URLs associated with the crisis at hand. The embedded crawler software then systematically traverses these URLs, extracting additional outgoing links for further exploration. The contents of each outgoing URL are then run through a predict function, which evaluates the relevance of each URL based on a scoring system ranging from 0 to 1. This scoring mechanism serves as a critical filter, ensuring that the collected web pages are not only related to the specified crisis event but also possess a significant degree of pertinence. We allow the user to set these thresholds, which enhances the efficiency of information retrieval by prioritizing content most likely to be valuable to the user's needs. Throughout the crawling process, our system tracks a range of statistics, including individual website domains, the origin of each child URL, and the average score assigned to each domain. To provide users with a comprehensive and visually intuitive experience, our user interface leverages React and D3 to display these statistics effectively. Moreover, to enhance user engagement and customization, our platform allows users to create individual accounts. This feature not only provides a personalized experience but also grants users access to a historical record of every crawl they have executed. Users are further empowered with the ability to effortlessly export or delete any of their previous crawls based on their preferences. In terms of deliverables, our project provides fully developed code encompassing both frontend and backend components. Complementing this, we furnish comprehensive user and developer manuals, facilitating seamless continuity for future students or developers who may build upon our work. Additionally, our final deliverables include a detailed report and a presentation, serving the dual purpose of showcasing our team's progress across various project stages and providing insights into the functionalities and outcomes achieved.
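A minimal sketch of the seeded crawl-and-score loop described above (the predict() relevance model, the threshold value, and the seed URL are placeholders; the real scorer, statistics tracking, and React/D3 interface are not reproduced here):

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def predict(text):
    """Hypothetical relevance scorer returning a value in [0, 1]."""
    return 1.0 if "flood" in text.lower() else 0.1

def crawl(seed_urls, threshold=0.5, max_pages=50):
    frontier, seen, kept, fetched = list(seed_urls), set(seed_urls), [], 0
    while frontier and fetched < max_pages:
        url = frontier.pop(0)
        fetched += 1
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        soup = BeautifulSoup(html, "html.parser")
        if predict(soup.get_text()) >= threshold:      # keep only relevant pages
            kept.append(url)
        for a in soup.find_all("a", href=True):        # expand outgoing links
            link = urljoin(url, a["href"])
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return kept

print(crawl(["https://example.com/crisis-report"]))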
- Background Check for R4 OpSec, LLC. Hyres, Thomas; Tea, Zachary; Yang, Ted; Gray, Philippe; Springsteen, Timothy; Bierly, Alex (Virginia Tech, 2017-04-28). The main project deliverable was a website for R4 OpSec (r4opsec.com). The purpose of this website is to display information about the company’s services and to accept résumés for new hires. The company is owned by Joe Romagnoli and is based in Chantilly, VA. The company works in the field of background investigation checks for the federal, state, and local government, as well as the civilian sector. The background investigation process starts with a company or a government agency reaching out to independent companies that handle the investigation of a new hire to that company. A background investigation usually includes verifying identity, past employment, credit history, and criminal history. The process can take anywhere from a week to a month, depending on how quickly the company is able to verify a person’s information given what the person provides to the company (i.e., proof of past education, W2 forms, date of birth, etc.). The website has a home landing page that displays images and text. There is a section explaining what services the company provides, another section displaying a simple about-us description, and a button that brings a user to another page to upload a résumé. There is also an admin login page, where employees at R4 OpSec can view past submissions. An admin can download the résumé, delete the submission information, search past submissions, or mark submissions as “pending”, “accepted”, or “rejected”. The admin is also able to create new admin accounts, edit their email address, or change their password from the same screen. The client needed the website to be fully functional in about 90 days. The client did not have a basic design in mind; however, the client did provide a basic website that we could reference when thinking of designs for this website. In November, the client purchased a year-long subscription from GoDaddy.com to host his website. We raised concerns about shared web hosting that we thought the client should know about, which we discuss in the report (Section 3.2.5). Lastly, the client wanted to make sure that this project would be expandable, so that in the future other groups or employees of R4 OpSec would be able to build upon what we delivered.
- Blacksburg to Guatemala Archive. Joshi, Arth; Agarwal, Ankit; Crowson, John (2015-05-14). The primary objective of the Blacksburg to Guatemala Archive Project is to create a medium for cultural exchange between Christ Church, Blacksburg, and their sister parish in San Andres Itzapa, Guatemala. This project will create a website and will allow these two parishes to remain in close contact following the recent visit of a delegation from Christ Church, Blacksburg, to Guatemala in January 2015. We are completing this project as soon as possible in order to minimize the delay between their visit and the establishment of such a cultural exchange. Both parishes will benefit from our project, satisfying a desire to remain in touch and embrace cultural differences. We decided to use WordPress to offer the simplest possible solution. WordPress will allow the client to easily maintain stories on the website and will also give readers an easy way to enjoy them. We chose a simple theme and then refined it to reduce any complications. We decided to only display aspects that were absolutely necessary to the project. Aside from the stories, categories and search are the only other modules visible to the readers. The website also allows users to leave comments on every story so they can interact with the parishes in an easy way. Another major request from the client was to allow for stories written in Spanish. We incorporated WordPress’s bilingual tools to support this functionality. Stories can be written in either language and will be formatted appropriately. A search engine has also been implemented to display results in both English and Spanish. Overall, the website was a success. The primary focus was usability, and the tests we ran showed the website was easy to use. The client was also happy with the results and can see the website being very useful to both parishes.
- Blog and Forum Collection for Trail Study. Eason, Andrew D.; Cianfarini, Kevin M.; Hansen, Marshall C.; Davies, Shane J. (Virginia Tech, 2018-05-07). This project is focused on the culture and trends of the Triple Crown Trails (Appalachian Trail, Pacific Crest Trail, and Continental Divide Trail). The goal of this project is to create a large collection of forum and blog posts that relate to the previously stated trails through the use of web crawling and internet searching. One reason for this project is to assist our client with her Master’s Thesis. Our client, Abigail Bartolome, is focusing her thesis on the different trends and ways of life on the Triple Crown Trails, and the use of our tool will help her. The impact of our project is that it will allow our client to sift through information much faster in order to find what she does and does not need for her thesis, instead of wasting time searching through countless entries with non-relevant information. Abigail will also be able to filter the kind of information she wants through the use of our tagging system. We have provided the date, title, and author of each post so she can immediately see whether the article has relevant information and was posted in an applicable time frame. The project has two main parts, the frontend and the backend. The frontend is an easy-to-use interface for Abigail. It allows her to search for specific tags, which filter the blog posts based on what information she seeks. The tags are generated automatically based on the content of all of the forums and blogs together, making them very specific, which is good for searching for the kind of content desired by our client. When she finishes adding tags, she can then search for blogs or forums that relate to the topics tagged. The page displays them in a neat format, with the title of the article hyperlinked so she can click on it to see the information from the article, as well as the author, date, and source of the post. The backend is where all the heavy lifting is done, but it is invisible to the client. This is where we go through each of the blog or forum websites fed into the web crawler to store all of the relevant information in our database. The backend is also where the tagging system is implemented and where tags are generated and applied to blog posts. WordPress and BlogSpot (for the most part) have a uniform way of going through blogs, so our web crawler acts accordingly based on which website it is, and is able to go through until there are no more blogs on that site. All of the blog posts, contents, pictures, tags, URLs, etc. are stored in the backend database and then linked to our frontend so that we can display them neatly and organized to the liking of Abigail. From 31 sources we have collected 3,423 blog posts, to which 87,618 tags have been assigned. Together, the frontend and the backend provide Abigail with a method to both search and view blog post content in an efficient manner.
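One plausible way to derive content-based tags, in the spirit of the automatic tagging the abstract describes (illustrative only, not the project's tagging code; the sample posts are invented):

from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "Resupply strategy for the Appalachian Trail in Virginia",
    "Pacific Crest Trail desert section water caches and heat",
    "Continental Divide Trail navigation and snow in Colorado",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(posts)          # rows: posts, columns: terms
terms = vectorizer.get_feature_names_out()

for i, post in enumerate(posts):
    weights = tfidf[i].toarray().ravel()
    top = weights.argsort()[::-1][:3]            # three highest-weighted terms
    print(post[:40], "->", [terms[j] for j in top])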
- Boy Scout Medical Record System for Blue Ridge Mountain Council. Kurlak, John; Whelan, Pat; Greer, Zack; De La Barra, Mauricio (2012-05-03). For this semester project, our team decided to partner with the Boy Scouts of America in Pulaski County. Our coordinator, Gregory W. Harmon, works for the Boy Scouts and manages all of their camping facilities. Since they serve over 120,000 users per day, they were looking for ways to improve their medical recording procedures for filing injuries and accidents. Currently, everything is written by hand into a log book and supplemented with various forms. Our project is a web-based digitization of this recording procedure. This system has one main form that goes into a database. This main form has the ability to create arbitrary reports with electronic signatures (for legal reasons) as well as the ability to auto-populate other form fields. The technologies we used for this project include object-oriented PHP, MySQL, JavaScript, jQuery, phpass, CSS, and HTML5 (appcache/localStorage). The website that we developed has a home login page. After the user has successfully logged in with his or her user account information, there are multiple things he or she can do. The user can create a new user account with user information, delete an existing user, change the password of the currently logged in user, file an injury report (and upload photos of the injury), view previous injury reports, search reports (which can be downloaded and printed), manage backups (manually and automatically), access forms offline, and contact support for help. Some of the other features of the website include automatic output minification (for CSS, HTML, JavaScript, and fonts), client- and server-side input validation, and robust error handling. Our final website solution ended up being 7,141 lines (158 pages) of code. Our website is divided up into nine directories (root, backend, backups, css, fonts, form-templates, images, js, and photos), and the code is split up across 55 files. The root folder contains all of the website views and controllers. The backend folder contains all of the website models. The backups folder stores all manual and automatic backups in gzip format. The css folder stores all CSS. The fonts folder stores all custom web fonts. The form-templates folder stores RTF templates for each of the output forms. A user can easily modify these RTF templates, which have variable placeholders, to change the way the report forms look. The images folder contains all of the icons and images used by the website. The js folder stores all of the front-end JavaScript and jQuery code. The photos folder contains all of the photos that users have uploaded with injury forms. Our database stores user account information and injury forms. We developed and normalized the database design in MySQL Workbench. We ended up with seventeen tables. Each injury form is broken up across a series of tables. A report table stores foreign keys to each of these injury tables. We managed our tables in phpMyAdmin, a web control panel. We perform database backups using mysqldump, a binary executable that comes with MySQL. To make the website secure, we used the phpass library, which effectively combats rainbow tables and password crackers by using salted, per-user bcrypt password hashes. We also prepared SQL queries to prevent SQL injections. Finally, we sanitized output to prevent cross-site scripting (XSS) attacks.
Overall, the website we developed provides a nice alternative to the current paper solution that the Blue Ridge Mountain Council is using. It is our hope that the Blue Ridge Mountain Council can continue to use and modify our system for the years to come.
- Breathe-EZ. Walker, T. Colton; Toda, Christopher Aska; Cornett, Christopher P.; Robohn, Benjamin F. (2016-05-08). The breathalyzer application is part of an ongoing research project Mikhail Koffarnus is pursuing as a research professor at the Addiction Recovery Research Center (ARRC). Participants, who may number in the hundreds, will take part in an alcohol addiction recovery program that includes random breathalyzer tests. For each test a participant passes, they will be monetarily rewarded. The amount they are rewarded will increase with each successive passed test. The hope is that continued clean tests incentivized by monetary rewards will aid and motivate users on their road to recovery. In order for the ARRC to be able to identify participants, the application must take pictures. The camera is set to take three photos of the participant as they use the BACtrack breathalyzer. By taking these pictures during the measurement process, the application ensures that the BACtrack device will be in the picture with the user. The application will then store the picture that it finds has the highest confidence of face detection as determined by an algorithm. This picture is important because it will help the ARRC confirm the user’s identity, keeping users from easily exploiting the system by having a sober friend blow for them.
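A sketch of choosing which captured photo to keep based on face-detection confidence, as described above (face_confidence stands in for whatever detector the app actually uses; it is not the project's function):

def pick_best_photo(photos, face_confidence):
    """Return the photo whose face-detection confidence is highest."""
    return max(photos, key=face_confidence)

# Example with a dummy scorer; a real scorer would run a face detector on each frame.
photos = ["frame1.jpg", "frame2.jpg", "frame3.jpg"]
scores = {"frame1.jpg": 0.42, "frame2.jpg": 0.91, "frame3.jpg": 0.65}
print(pick_best_photo(photos, lambda p: scores[p]))  # frame2.jpg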
- BTDImporter. Bice, Nathanael; Brink, Scott; Piorkowski, Adam (2014-05-07). The BTD Importer is used to import Bound Thesis Dissertations (BTDs) into the Electronic Thesis and Dissertation (ETD) database. The process involves taking a hard-copy thesis and scanning it into PDF form. Once in PDF form, the importer script would locate the new PDF and extract its library call number, which is located in the PDF file’s name. Using the call number, the importer script would fetch the metadata of the thesis, such as title and author, by scraping the metadata using AirPAC Classic. The PDF would then be uploaded to the ETD database along with its metadata. The BTD Importer deliverables listed a new importer script that would take new PDFs and look up their metadata using the Sierra APIs to access Addison directly, then take that metadata and construct an XML file containing the data. The script would then move the PDF and the new XML file to a new output file structure, which would later be read by another section of the new importer process. That final section would then upload the PDF and XML file to VTechWorks. The project would require PHP skills, which Nathanael had, and SQL skills, which Adam and Scott had knowledge of, so our group felt we could complete the project satisfactorily. The project spec also listed the project as being high impact, as our work would be used to import roughly 13,000 BTDs into VTechWorks. We completed the project by splitting up the work amongst the group and meeting weekly to discuss milestones and our next goals. We decided to stick with PHP, as that was what the original importer script was written in. The PHP libraries made it very straightforward to construct an XML file and a directory structure.
- Carilion Case Simulator Project. Rajasekaran, Vikram; Goldberg, Aaron; Murphy, Ryan (2012-05-03).
- Catawba Multimedia Website. White, Aleksi; Dancy, Zac (2012-05-07). The website for the Catawba Sustainability Center (CSC) was in its infancy, and it needed to be expanded with descriptions for onsite land demonstrations, showcases for student and faculty projects, and spotlights of the businesses on site. The lead content director for the site is Christy Gabbard, and the head of website development is Joe Gabbard.
- CEED Phone Application. Mahajan, Madhur; Hensley, Zach; Liang, Randy; Greynolds, Sean (Virginia Tech, 2018-05-01). This project addresses a problem that is often faced by many current and prospective Center for Enhancement of Engineering Diversity (CEED) members and staff. Members may range from pre-college students (e.g., high school) to parents of students. CEED needs an easier way to communicate information about their programs to current and prospective members and their parents as well. Our solution to this problem is a cross-platform mobile application for an end user. In our application, a user can learn more about CEED, and familiarize themselves with all the services offered. Thereafter they can make an informed decision about their program choices, and also can reach out to CEED employees and fellow students with any questions that they might have. Key features that are included in the mobile app are: enabled push notifications, a forum that allows users to interact with one another, and the ability to view embedded content within the app (e.g., Google Calendar and videos). The implementation is through a cloud platform for cross-platform app development known as Appery.io. This platform provides the ability to create databases and implement custom HTML/CSS/Javascript code for the front-end. Prospective CEED members and parents will be able to download this app from Google Play and/or the App Store once CEED finalizes approval of fees that are required. Below is a list of all the functionalities implemented in our application:
1. User Account Functionality
2. Announcements: The client will have the ability to make important announcements. Users will be notified through an in-app inbox message or push notifications.
3. Calendar: Users will have the ability to view the CEED event calendar.
4. Database functionality: NoSQL database to store forum posts and other organization information.
5. Programs: Users will have the ability to view various programs offered by CEED. Programs will be categorized by undergraduate and graduate categories.
6. Forum: Users will have the ability to make forum posts about any topic. The client will have the ability to delete any inappropriate forum posts. Users will be able to post comments on existing forum posts.
- Chapter Classification and Summarization. Jackson, Miles; Zhao, Yinhjie (2024-05-07). The US corpus of Electronic Theses and Dissertations (ETDs), partly captured in our research collection numbering over 500,000, is a valuable resource for education and research. Unfortunately, as the average length of these documents is around 100 pages, finding specific research information is not a simple task. Our project aims to tackle this issue by segmenting our sample of 500,000 ETDs and providing a web interface that summarizes individual chapters from the previously segmented sample. The first step of the project was to verify that the automatic segmentation process, performed in advance by our client, could be relied upon. This required each team member to analyze 50 segmented documents and verify their integrity by confirming that each chapter was correctly identified and separated into a PDF. During this process, we noted any peculiarities, to identify recurring issues and improve the segmentation process. The rest of our time and effort went into creating an efficient web interface that would allow users to upload ETD chapters and display each chapter’s summary and classification results. We completed a web interface that allows a user to upload an ETD chapter PDF from the sampled ETD database and view the summary of the PDF along with all of the metadata (author, title, publication date, etc.) of the associated ETD. Additionally, the group verified approximately 60 of the automatically segmented documents and thoroughly detailed any errors or peculiarities. Our group delivered both the web interface, as a GitHub repository, and an Excel spreadsheet detailing the complete results of our segmentation verification process. The interface was designed to aid research on ETDs. Although this application won’t be available publicly, researchers may use it privately to assist with any ETD research projects they participate in. The web interface uses Streamlit, which is a Python framework for web development. This was the first time anyone in the group had used Streamlit, so we had to learn each feature that we used, which caused quite a few issues. However, quickly searching and accessing the metadata database, which was originally an Excel sheet with 500,000 entries, posed the biggest threat to the usability of our interface. Luckily, we were able to solve all issues through the use of API documentation, our client, Bipasha Banerjee, and our extremely helpful instructor, Professor Edward A. Fox. In terms of technical skills, we learned how to work with a Streamlit web interface as well as how to use MySQL. However, we also learned a few life lessons. First, do not use the first tool available when attempting to solve a problem; it is wise to take extra time to search for the best tool for a given situation instead of wasting time compensating for using the wrong tool. Second, life happens without regard and without warning, but the best move is to reanalyze the situation and push forward to complete the work that must be done.
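A minimal Streamlit sketch in the spirit of the interface described above (summarize() and lookup_metadata() are placeholders, not the team's code; the real app queries a MySQL copy of the 500,000-entry metadata sheet):

import streamlit as st
from pypdf import PdfReader

def summarize(text):
    return text[:300] + "..."                     # stand-in for the real summarizer

def lookup_metadata(filename):
    return {"author": "Unknown", "title": "Unknown", "publication date": "Unknown"}

st.title("ETD Chapter Summarizer")
uploaded = st.file_uploader("Upload an ETD chapter PDF", type="pdf")
if uploaded is not None:
    text = "".join(page.extract_text() or "" for page in PdfReader(uploaded).pages)
    st.subheader("Summary")
    st.write(summarize(text))
    st.subheader("ETD metadata")
    st.write(lookup_metadata(uploaded.name))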
- Cinemacraft: Virtual Minecraft Presence Using OPERAcraft. Barnes, Brittany; Elsi, Godolja; Kiseleva, Marina (2016-04-28). Cinemacraft is an interactive system built off of a Minecraft modification developed at Virginia Tech, OPERAcraft. The adapted system allows users to view their mirror image, as captured by Kinect sensors, in the form of a Minecraft avatar. OPERAcraft, the foundation of the project, was designed to engage K-12 students by allowing users to create and perform virtual operas in Minecraft. With the advanced functionality of Cinemacraft, the reinvented system aims to alter the perspective of how real-time productions will be produced, filmed, and viewed. The system uses Kinect motion-sensing devices that track user movement and extract the data associated with it. The processed data is then sent through middleware, Pd-L2Ork, to Cinemacraft, where it is translated into avatar movement to be displayed on the screen, resulting in a realistic reflection of the user in the form of an avatar in the Minecraft world. Within the display limitations presented by Minecraft, the avatar can replicate the user’s skeletal and facial movements; movements involving minor extremities like hands or feet cannot be recreated because Minecraft avatars do not have elbows, knees, ankles, or wrists. For the skeletal movements, three-dimensional points relating to specific joints of the user are retrieved from the Kinect device and converted into three-dimensional vectors. Using geometry, the angles of movement around each axis (X, Y, and Z) for each body region (arms, legs, etc.) are determined. The facial expressions are computed by mapping eyebrow and mouth movements within certain thresholds to specific facial expressions (mouth smiling, mouth frowning, eyebrows furrowed, etc.).
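A short sketch of the joint-angle geometry described above (the joint coordinates are invented sample values, not Kinect output):

import numpy as np

def joint_angle(parent, joint, child):
    """Angle in degrees at `joint` between the segments joint->parent and joint->child."""
    v1 = np.asarray(parent, dtype=float) - np.asarray(joint, dtype=float)
    v2 = np.asarray(child, dtype=float) - np.asarray(joint, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: shoulder, elbow, and wrist positions from one skeleton frame.
shoulder, elbow, wrist = (0.0, 1.4, 2.0), (0.25, 1.15, 2.0), (0.30, 0.90, 1.8)
print("Elbow angle: %.1f degrees" % joint_angle(shoulder, elbow, wrist))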
- Cloud Digital Repo Optimization. Fowler, Tom A.; Howe, Christian J. (Virginia Tech, 2018-05-02). The goal of the project is to scale down the CloudFormation templates for deploying the Hyku digital repository application. We have attempted to reduce the cost of running the Hyku application with a base level of performance, essentially reducing it to the minimum viable scale. We have accomplished this by changing these templates and their configuration parameters to use fewer, smaller instances. After evaluating a number of different options for reducing the base cost, including using other AWS offerings, we settled on a number of parameters that work well at the base level of performance. In testing these changes, we used a qualitative method of testing the functionality of the existing feature set on the original deployment and comparing that to the functionality of the new deployment. We have seen no changes in functionality from the original deployment. These reduced instance sizes cut the cost to about one third of the original, a substantial saving given that the original cost of running the application was about $800-900 a month. The new cost of running our modified templates with the parameters we have tested is about $300 a month. Given that the original feature set is still functioning as it was before, we believe that we have achieved a satisfactory reduction of cost from the original deployment, and therefore have accomplished the goal we set out to complete. We provide documentation on our process and the changes we made, including how to reproduce those changes in the future. Since the templates require some level of maintenance, this documentation is vital for deploying them in the future. The documentation provided by the report helps future maintainers quickly get up and running, describes the potential problems encountered when working with the templates, and gives future groups the insight to predict the kinds of challenges they will face when working on the Hyku CloudFormation templates.
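A hedged sketch of how smaller instance sizes might be applied to an existing stack with boto3 (the stack name and parameter keys below are hypothetical; the actual Hyku templates define their own parameter names and values):

import boto3

cfn = boto3.client("cloudformation")
cfn.update_stack(
    StackName="hyku-demo",                 # hypothetical stack name
    UsePreviousTemplate=True,              # keep the template, change only parameters
    Parameters=[
        {"ParameterKey": "WebInstanceType", "ParameterValue": "t3.small"},
        {"ParameterKey": "WorkerInstanceType", "ParameterValue": "t3.small"},
        {"ParameterKey": "InstanceCount", "ParameterValue": "1"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)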