CS4624: Multimedia, Hypertext, and Information Access
This collection contains the final projects of the students in the course Computer Science 4624: Multimedia, Hypertext, and Information Access, at Virginia Tech.
This course, taught by Professor Ed Fox, is part of the Human-Computer Interaction track, the Knowledge, Information, and Data track, and the Media/Creative Computing track. The curriculum introduces the architectures, concepts, data, hardware, methods, models, software, standards, structures, technologies, and issues involved with: networked multimedia (e.g., image, audio, video) information, access, and systems; hypertext and hypermedia; electronic publishing; virtual reality. Coverage includes text processing, search, retrieval, browsing, time-based performance, synchronization, quality of service, video conferencing, and authoring.
Browsing CS4624: Multimedia, Hypertext, and Information Access by Content Type "Video"
Now showing 1 - 20 of 39
- ABC Drone Team. Bartal, Connor; Cooper, Jared (Virginia Tech, 2021-05-13) The ABC Sports Drone capstone team is an extension of the ABC Drone Project, a group spearheaded by client Charles Kerr in conjunction with the VT Club Ultimate team, Burn. The goal of the project as a whole is to provide high-quality footage and streaming of amateur sports to the masses. This capstone team is a subsection of the ABC Drone Project that has been tasked with creating software solutions and developing new techniques to help push this drone project to fruition. This report covers the progress of the capstone team in developing new routines for the drone, and the pivots that have been introduced as the team has received new data. The first goal tackled was identifying players on a field from an endzone-to-endzone view. This started with analyzing contours, along with their position and attributes, to determine whether a contour was a player. Artifacts from off the field of play proved highly troublesome, so a field bounding solution was created to eliminate as many artifacts as possible that were not on the field of play. Fairly good accuracy was achieved with this method (~75%), but the goal was set at 85%+ accuracy for identification. After experimenting with motion detection and object persistence, the best course of action seemed to be identification via a convolutional neural network. No datasets were available that matched the application of this network, so an original dataset needed to be created. An application was developed that allowed for fairly quick extraction of data from sample videos. This data was fed to the neural network, which consistently yields around 94% identification accuracy. Although the accuracy is high, it reduces frame rates to approximately 1 FPS. Market interviews with actual coaches revealed a larger interest in post-processing capability than live identification, so the client decided to pivot.
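The field-bounding idea described above can be sketched in outline. The team's actual code is not included in the abstract, so the following is a minimal pure-Python illustration, assuming contours arrive as lists of (x, y) points (as a library such as OpenCV would supply) and the field boundary is a polygon:

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: is point pt = (x, y) inside polygon poly?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count edge crossings of a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def filter_contours(contours, field_poly):
    """Keep only contours whose centroid falls inside the field polygon,
    discarding off-field artifacts."""
    kept = []
    for c in contours:
        cx = sum(p[0] for p in c) / len(c)
        cy = sum(p[1] for p in c) / len(c)
        if point_in_polygon((cx, cy), field_poly):
            kept.append(c)
    return kept
```

The centroid-inside-field test is the core of the artifact-elimination step; a production version would operate on detected contours each frame before player classification.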
A system that allows for speed-editing of footage has been developed, and a (proof of concept) companion application will allow coaches to easily track stats and pre-edit film via a GUI. The speed-editing program takes in the footage and allows the coach to use a video game controller to create quick cuts to eliminate downtime, as well as pan, tilt, and zoom on the footage to ensure the action is always framed. The edits are recorded in an edit decision list (EDL) file, which is then sent along with the video file to Amazon Web Services. AWS takes the EDL file and original video and returns a fully edited game film. With this method, a 90-minute game can be edited in 5 minutes or less. If coaches are recording stats during the game, the footage will also be annotated with important plays, which are recorded in a similar EDL for gameplay statistics. Players will then have access to a program that will allow them to click their name to see the timestamps of all of their highlights.
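The cut-application step can be illustrated with a short sketch. The team's actual EDL format and AWS pipeline are not described in the report, so the (start, end)-seconds cut representation and the ffmpeg invocation below are assumptions about how such a step could work:

```python
def build_ffmpeg_cut_command(video_path, cuts, out_path):
    """Build one ffmpeg command that keeps only the listed segments.
    cuts: list of (start_sec, end_sec) tuples taken from a cut-style EDL.
    Audio is omitted (a=0) for brevity."""
    parts = []
    for i, (start, end) in enumerate(cuts):
        # Trim each kept segment and reset its timestamps so they concatenate.
        parts.append(
            f"[0:v]trim=start={start}:end={end},setpts=PTS-STARTPTS[v{i}];"
        )
    inputs = "".join(f"[v{i}]" for i in range(len(cuts)))
    filter_graph = "".join(parts) + f"{inputs}concat=n={len(cuts)}:v=1:a=0[out]"
    return (
        f'ffmpeg -i "{video_path}" -filter_complex "{filter_graph}" '
        f'-map "[out]" "{out_path}"'
    )
```

For example, keeping 0-30 s and 95-140 s of a game recording produces a single command whose filter graph trims both ranges and concatenates them, which is the essence of turning an EDL plus the original video into an edited film.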
- Adult Day Services Memory Masterclass Promotional Video. Kulik, Maddie; Castillo, Pablo; Zurita, Jose (Virginia Tech, 2019-05-01) The goal of the project was to create a promotional video for Virginia Tech’s Adult Day Services center, specifically to advertise their Memory Masterclass program. Adult Day Services is a center located within the Human Development and Family Sciences Department at Virginia Tech. They are licensed by the Department of Social Services to offer personal care, health monitoring, meals, therapeutic activities, dementia care, and recovery assistance. They typically serve 18 participants each operating day, averaging about 75 years of age. According to ADS’s mission statement, the center is dedicated to providing a center focused on the well-being and optimal functioning of its participants, a resource for caregiver support, an education opportunity for students, and a community among generations of children, college students, and adults. One of ADS’s main service offerings is their Memory Masterclass course. This course is offered in 6-week sessions to participants over 55 years of age who want to maximize their brain health. The focus of the course is to educate and serve people who have been diagnosed with Mild Cognitive Impairment (MCI). MCI is not a symptom or precursor of Alzheimer's or dementia, but rather a condition that occurs as aging changes brain function. In the 6-week course, participants learn strategies for daily life that can strengthen brain reserve as they age, and get connected with others who have similar concerns about memory. Our main objective was to create a promotional video that Adult Day Services could use on their website to inform and attract people to take the class. This project was broken up into several stages. The first stage was to meet with our clients, Adult Day Services professionals, to gain a better understanding of the project requirements.
Our clients described that they would like a video showcasing the active, healthy lifestyle of one of their Memory Masterclass participants. This would include footage of men and women doing outdoor activities, participating in class, and doing mentally stimulating activities. From meeting with our clients, we came to realize that they wanted a specific aesthetic for their video: a combination of active and “homey” footage. An important goal for our clients was to have the video ready to present at an AARP event in mid-March, so the first stage of this project had to be completed by that deadline. The second stage was scheduling time to physically shoot the videos. This involved renting camera and sound equipment, coordinating with our clients and course participants, deciding on filming locations, and collecting the raw footage. Once we had shot all of the raw footage, the third stage comprised condensing, cleaning, and enhancing it to create a preliminary draft of the video. The draft was delivered to the client, we received feedback, and we began revising the video to meet client specifications. The client will be able to use this video for advertising on the ADS website, as well as at different events where their services are promoted. The fourth stage, which we are currently working on, is to revise the initial version of the video based on client feedback. This involved sitting down with our client and gaining specific insight into which details they liked and which they wanted modified. After we acquired feedback, we were able to reshoot footage that was not preferable and take more shots of outdoor activities. The client also recommended that we prepare a shorter video, approximately 90 seconds long, that could be used as a shorter promotion; it will likely be a condensed version of highlights from the 4-minute video.
The final version of the video incorporated footage from both stages of filming and implemented the client's desired changes. This version of the video was also shown to an applicable user pool of Memory Masterclass students, who gave us further feedback.
- AI Aided Annotation. Bishop, Jonah B. M.; David, Isaac; Lubana, Ishaandeep (Virginia Tech, 2022-05-11) Human annotation of long documents is a very important task in training and evaluation in NLP. The process generally starts with the human annotators reading the document in its entirety. Once an annotator feels they have a sufficient grasp of the document, they can begin to annotate it. Specifically, annotators look for questions that can be answered, and then write down the question and answer. In our client’s case, the chosen long documents are electronic theses and dissertations (ETDs), which are often 100-150 pages minimum, making annotation a time-consuming and expensive process. The ETDs are annotated on a chapter-by-chapter basis, as content can vary significantly between chapters. The annotations generated are then used to help evaluate downstream tasks such as summarization, topic modeling, and question answering. The system aids the annotators in the creation of a Knowledge Base that is rich with topics/keywords and question-answer pairs for each chapter in ETDs. The core of the system revolves around an algorithm known as Maximal Marginal Relevance (MMR). By utilizing the MMR algorithm with a changeable lambda value, keywords, and a couple of other elements, we can identify sentences based on their similarity or diversity relative to a collection of sentences. This algorithm greatly enhances the annotation process for ETDs by automating the identification of the most relevant sentences. Thus, annotators do not have to sift through the ETDs one sentence at a time, and can instead build a comprehensive summary as fast as the MMR algorithm can work. As a result, annotators can save many hours per ETD, resulting in more human-generated annotations in a shorter amount of time.
The final deliverables are the project, a final slideshow presenting our work throughout the semester, a final report, and a video demonstrating exactly how to use our platform. All of this is available here on VTechWorks in this report. Additionally, the project is being built using GitHub, making it free and available to the public to fork and modify in any way they see fit.
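The selection step at the heart of such a system can be sketched as a greedy loop over candidate sentences. This is a generic illustration of Maximal Marginal Relevance, not the project's own code; it assumes sentences have already been embedded as numeric vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def mmr_select(query_vec, sentence_vecs, lam=0.7, k=3):
    """Greedy MMR: each pick balances relevance to the query (weight lam)
    against redundancy with already-selected sentences (weight 1 - lam).
    Returns the indices of the selected sentences in pick order."""
    selected, remaining = [], list(range(len(sentence_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            rel = cosine(query_vec, sentence_vecs[i])
            red = max((cosine(sentence_vecs[i], sentence_vecs[j])
                       for j in selected), default=0.0)
            return lam * rel - (1 - lam) * red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Lowering the lambda value pushes the selection toward diversity (a near-duplicate of an already-chosen sentence scores poorly), which matches the report's description of tuning similarity versus diversity.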
- Apple Ridge Farms Corporate Retreat Video. Vernon, Tyler; Dallachie, Charles; Mykich, Andrew; Duval, Matthew (2012-05-03) Apple Ridge Farms, an NPO in the Roanoke area, sponsors an academic summer camp for underprivileged youths. They also host corporate retreats and other events on their grounds in the off-season. They requested a short video for internet distribution to increase revenue from the corporate retreat portion of their business. We filmed the grounds on April 27th, 2012, and created a video for them using captured video, images, and audio, as well as images they provided.
- CellCycleViz. Smokowski, Cesar J; Lin, Shuai; Shin, Donghyeon E (Virginia Tech, 2022-05-07) The CellCycleViz Project teamed up with our client, Dr. Cao, to create an educational website aimed at teaching users about the cell cycle. The website includes content for a wide range of users, including young students, the general public, and experienced users interested in research data. Content on the website includes introductory information for users first learning about the cell cycle, detailed cell cycle models for those interested in more detailed information, and mathematical models created with research data from Professor John Tyson's lab. Professor John Tyson is a Distinguished Professor in the Biology department. His lab focuses on studying Caulobacter cells, a type of bacteria widely distributed in freshwater lakes and streams. The website was developed using one HTML file for each webpage and JavaScript files to create interactive cell cycle visualizations.
- Cholera Database. Croxall, Emily; Roberto, Michael; Sharma, Hemakshi; Alcantara, Gabriela; García Solares, Andrés (Virginia Tech, 2020-05-12) This project involved work toward a database of cholera records from 2010–2020. The WHO repository was used to extract and normalize data to build CSV files. Each year for which data is available has a CSV file containing each location and the total number of cases there. The ProMED repository was used to collect data for the same timeframe. That data was extracted, condensed, and tagged for easier manual viewing, and is given in one CSV file covering all available years. Data from WHO can be viewed in logarithmically colored maps based on the number of cases in each location. These visualizations are produced for each year in the study. The data from ProMED can be viewed in bar graphs of the number of articles, and the weeks in which the articles were written, for each country. These visualizations can be seen or downloaded at choleradb.cs.vt.edu. Additionally, all the CSV files of data produced are available for download on our website. Due to the complexity of NLP and the inconsistencies in the ProMED articles, our data is not completely normalized and requires some manual work. Unforeseen circumstances, including the COVID-19 crisis, slowed the project’s progress. Therefore, the ProMED data extraction did not proceed further, other data repositories have not been explored, and interactive visualizations have not been built. The results of this project are compiled datasets and data visualizations from the WHO and ProMED repositories. These are useful to our client for future analysis, as well as to anyone else interested in the trends of cholera outbreaks. The results of data collection are formatted for easy analysis and reading. The graphics provide a simple visual for those more interested in higher-level analysis.
This project can be useful to developers working on data extraction and representation in epidemiology or other case-based global studies. In the future, more repositories can be explored for more extensive results. Additionally, further work can be done on the ProMED dataset to condense it further and eliminate the need for any manual analysis after our program is run. The results of this project are all available publicly on choleradb.cs.vt.edu, including for download. All code is open source and available on GitLab.
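The logarithmic coloring used for the case-count maps can be illustrated with a small sketch. The project's actual binning scheme is not given, so the bin count and maximum case value below are assumptions:

```python
import math

def log_color_bin(cases, n_bins=6, max_cases=100000):
    """Map a case count to a color bin index on a logarithmic scale,
    so locations with 10 and 10,000 cases land in visibly different bins
    instead of being crushed together by a linear scale."""
    if cases <= 0:
        return 0
    # Position of log10(cases) within [0, log10(max_cases)], clamped.
    frac = math.log10(cases) / math.log10(max_cases)
    return min(n_bins - 1, int(frac * n_bins))
```

Each country's bin index would then be looked up in a fixed color ramp when rendering the yearly map.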
- Classifying ETDs. Shah, Vedant; Ramesh, Vaishali; Daniel, Reema; Gathani, Mihir D. (Virginia Tech, 2023-05-17) Electronic Theses and Dissertations (ETDs) are academic documents that provide in-depth insight into the research work of a graduate student and are designed to be stored in machine archives and retrieved globally. These documents contain abundant information that may be utilized by various machine learning tasks such as classification, summarization, and question answering. However, these documents often have incomplete, incorrect, or inconsistent metadata, which makes it challenging to accurately categorize them without manual intervention, since there is no uniform format for the metadata. Therefore, through the Classifying ETDs capstone project, we aim to create a gold-standard classification dataset, leverage machine learning and deep learning algorithms to automatically classify ETDs with missing metadata, and develop a website that allows a user to classify an ETD with missing metadata and view already-classified ETDs. The expected impact of this project is to advance information availability from long documents and eventually aid in improving long-document information accessibility through regular search engines.
- Contemplative Practices Interviews. Brunner, Kevin; Spillane, Evan (2015-05-14) This technical document covers the contemplative practices interview project. This project is part of the CS 4624 Multimedia, Hypertext, and Information Access capstone course at Virginia Tech. This report describes our requirements, design, outcomes, implementation, prototype, solution refinement phases, testing and evaluation, deliverables, plan, and more. The goal of this project is to raise the visibility of contemplative practices on campus and provide support for developing proposals for contemplative practices. We aim to achieve this goal through a composite video collection containing interviews with various individuals around campus about their contemplative practice and its impact on their lives. The majority of this report was written in increments over the semester, as check-ins and documentation every few weeks among the team, course advisor, and client. Therefore, sections one through seven are written from the perspective of work still in progress on earlier stages of the project between January and April 2015. The user’s manual contains sections one and two. The developer’s manual consists of section three. The lessons learned portion is contained in sections four through seven and the final presentation PowerPoint. The acknowledgements are contained at the end, before the references. When the project was finished, we found students had experience in many different disciplines. We met a lot of great people and conducted our interviews to get some excellent footage. Generally, students feel more relaxed and stress-free after practicing. There are endless benefits for their quality of life. Students highly recommended that other students try out a contemplative practice. Performance in the classroom is even helped through practicing. Once we were done filming, we put together an intriguing composite video, uploaded to YouTube for the public to view the final product.
Some of the problems faced and lessons learned include: finding interviewees; convincing random students that it was worth their spare time to help with our project for free; coordinating our schedules with interviewees’ schedules; equipment availability; learning to use the equipment; learning to edit the footage; asking the “right” questions that provide the information the client is looking for in the final video; making the interviewees feel comfortable enough to open up on camera; creating a memorable storyboard; and editing the video so that it actually captures the attention of viewers, rather than boring them with an interview. This journey is chronicled below. As a developer, to continue this project, you should contact the client. Future developers can film their own footage and interviews and then edit their own videos to continue the goals of the project. Their work will be added to the YouTube collection with past years’ videos.
- DLRL Cluster. Lech, Adam; Pontani, Joseph; Bollinger, Matthew (2014-05-09) The Digital Library Research Laboratory is a group focused on researching and implementing a full-stack Hadoop cluster for data storage and analysis. The DLRL Cluster project is focused on learning and teaching the technologies behind the cluster itself. To accomplish this, we were given three primary goals. First, we were to create tutorials to teach new users how to use Mahout, HBase, Hive, and Impala. The idea was to have basic tutorials that would provide users with introductory coverage of these modern technologies, including what they are, what they’re used for, and a fundamental level of how they’re used. The first goal was met by creating an in-depth tutorial for each technology. Each tutorial contains step-by-step instructions on how to get started with each technology, along with pictures that allow users to follow along and compare their progress to ensure that they are successful. Second, we would use these tools to demonstrate their capabilities on real data from the IDEAL project. Rather than having to show a demo to each new user of the system firsthand, we created a short (5 to 10 minute) demo video for each technology. This way users could see for themselves how to go about utilizing the software to accomplish tasks. With a video, users are able to pause and go back at their leisure to better familiarize themselves with the commands and interfaces involved. Finally, we would utilize the knowledge gained from researching these technologies and apply it to the actual cluster. We took a real, large dataset from the DLRL cluster and ran it through each respective technology. Reports were generated, focusing on efficiency and performance, and a result dataset was generated for some data analysis.
- Dynamic Optimizations of Irregular Applications on Many-core Architectures (CS Seminar Lecture Series). Parton, Eric; Zehr, David; Wellington, Jake; Zhang, Zheng (2012-03-02) Enhancing the match between software executions and hardware features is key to computing efficiency in terms of both performance and energy consumption. The match is constantly complicated by emerging architecture features in computing systems and has become a continuously evolving problem. In this talk, I will present some recent findings on the implications of three prominent features of modern systems: heterogeneity, the rapid growth of processor-level parallelism, and the increasingly complex interplay among computing units. In particular, I will focus on how to streamline computations containing dynamic irregularities for General Purpose Graphics Processing Units (GPGPUs), a broadly adopted many-core architecture. The talk will begin with the theoretical foundations of GPGPU program-level transformation techniques, and further describe a runtime optimization system, named G-Streamline, as a unified software solution to irregularities in both memory references and control flows. The system enables on-the-fly elimination of irregularities through adaptive CPU-GPU pipelining and kernel splitting schemes. Working in a holistic fashion, it maximizes whole-program performance by resolving conflicts among optimizations. In the end, I will briefly describe my other work, which includes a study of the influence of shared cache on multicore and a new paradigm, named shared-cache-aware optimizations, for parallel software locality enhancement. Bio: Zheng (Eddy) Zhang is a PhD candidate in the Computer Science Department of the College of William & Mary. She received her M.S. in Computer Science at William & Mary with a Computational Operations Research (COR) specialization.
Her research generally lies in the area of compilers and programming systems, with a focus on revealing and exploiting the implications of emerging hardware features on the development, compilation, and execution of software. She is the lead author of a paper that won the Best Paper Award at PPoPP'10, and a recipient of a Google Anita Borg Memorial Scholarship. The Computer Science Seminar Lecture Series is a collection of weekly lectures about topics at the forefront of contemporary computer science research, given by speakers knowledgeable in their field of study. These speakers come from a variety of different technical and geographic backgrounds, with many of them traveling from other universities across the globe to come here and share their knowledge. These weekly lectures were recorded with an HD video camera, edited with Apple Final Cut Pro X, and outputted in such a way that the resulting .mp4 video files were economical to store and stream utilizing the university's limited bandwidth and disk space resources.
- FFMPEG on the IBM Cloud. Ishairzay, Rishi; De, Puloma; Hwang, Andrew (2012-05-06) This module aims to introduce FFMPEG to students in a Linux environment (the IBM Cloud).
- Food Waster. Liu, Michael; Wong, James; Sengar, Divya; Chuba, Andrew; Kai, Alan (Virginia Tech, 2017) Approximately 40% of all edible food is wasted each year, costing families approximately $1,500 a year. Consequently, we undertook a task for our client, Susan Chen, in an effort to combat this issue. Our client, a first-year graduate student at Virginia Tech pursuing a Master’s degree in Human Nutrition, Food, and Exercise, requested that we create an online public service announcement tool to raise awareness. After several rounds of concept and design refinement, the solution was realized in the form of a website. The purpose of this website is to allow visitors to visualize the current and long-term, extrapolated impacts on them and society from food wasted in just a single meal. Two videos were also created for this website to provide both an educational and entertaining experience while visitors learn more about wasted food in the United States. The front end, i.e., the webpage experienced by visiting users, is ultimately an HTML document. It is also powered by jQuery to add a number of useful functionalities. One such function is an auto-complete feature, so that users can dynamically see available options as they search for food types. On submitting their inputs, visitors are shown a statistics page powered by D3.js, a JavaScript library for data-driven documents. Node.js is used on the server side to provide the user input and statistics webpages. When a visitor submits the food types and amounts wasted to the server, the server queries the MySQL database for the appropriate data. The MySQL database is built on top of two datasets, one from a study by the USDA Agricultural Research Service and another from a separate study by the USDA Economic Research Service. These provide 7-year, nationwide unit price averages for numerous food groups, used to calculate the desired statistics.
Certainly, there were a few challenges that appeared during the project’s development. One was attempting to fuse two independently gathered datasets. Another was dealing with improper user inputs. However, as the issues were debated, they were eventually solved one by one. Ultimately, the current product fulfills the required task and goal. Users can calculate their wasted food and from a single source, see the result of that and the impact on them and those around them. Nevertheless, the current richness of the data in the database and modernity of the webpage design means that there is still untapped potential for improvement for this product.
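The extrapolation behind the statistics page can be sketched in outline. The actual server-side code is Node.js and is not included in the abstract, so the following is a hypothetical Python illustration of the idea, assuming per-pound average prices have already been fetched from the database and that one meal's waste is repeated three meals a day, year-round:

```python
def waste_statistics(items, household_count=1):
    """Extrapolate the cost of food wasted in a single meal to a yearly total.
    items: list of (food_name, pounds_wasted, price_per_pound) tuples,
    with prices drawn from averaged unit-price data."""
    meal_cost = sum(lbs * price for _, lbs, price in items)
    # Assumed extrapolation: three meals a day, 365 days a year.
    yearly_cost = meal_cost * 3 * 365
    return {
        "meal_cost": round(meal_cost, 2),
        "yearly_cost": round(yearly_cost, 2),
        "yearly_cost_all": round(yearly_cost * household_count, 2),
    }
```

A D3.js front end would then render these figures as charts; the multiplication factors here are illustrative assumptions, not the site's actual model.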
- FreeSpeechApp4VT. Gupta, Chaitanya; Oh, Samuel; Cho, Andy (Virginia Tech, 2022-05-08) Over the course of his career, Mr. Matthew Newton, the coordinator of Assistive and Education Technology at Virginia Tech, has been working with applications and assistive technology to give those who require non-verbal communication the means to do so. The FreeSpeech4VT project was launched under the direction of Mr. Newton to provide a free-of-cost tablet application that would allow users to easily communicate using type-to-speech functionality. Our goal as a team was to create an application that could provide basic and advanced implementations of features that paid applications offer, while also promoting user customization of the application itself. The project consists of one mobile application that pulls from the device’s local data and is able to store data into the device’s memory based on user input of words/tiles. Although a cross-platform application could provide more flexibility and accessibility for users regardless of their devices' operating systems, our team found it best to implement a thorough iPadOS application due to the high frequency of iPad usage for communicative purposes. This application is designed for anyone who may struggle to communicate verbally, whether temporarily or long-term. Some design choices were made to be friendlier toward those who may have difficulty with motor function as well. The range of people who can use this application is immense: from someone who is unable to speak at all and may have some motor function issues, to someone who has laryngitis and can type out sentences but cannot speak, or speak loudly, for some time. We began by meeting with Mr. Newton to discuss his vision.
Because ergonomics with the target user-group in mind was so important, we spent a lot of time on iterative design and wireframing, getting feedback and making improvements, and finally getting approval. We then began implementing the design as well as basic Text-To-Speech (TTS) functionality. After that, we implemented more customizability-centric functionality to allow both ease-of-setup for caretakers and ease-of-use for users. We obtained feedback via bi-weekly client meetings. We have delivered an iPadOS application, this report, a final slide presentation, and a video walkthrough of the application in use.
- Giles County Animal Rescue. Parsons, Amber; Myrick, Gregory (2013-05-18) Giles County Animal Rescue is a volunteer organization located in Giles County, Virginia. This group of volunteers assists the Giles County Animal Shelter in placing animals in homes. They also campaign for awareness of the importance of spaying and neutering. Since most of their information is accessed on the web, our client Christine Link-Ownes believes that it is important to have a website that is easy to use and update. For this project, we worked with our client and Giles County Animal Rescue to redesign their website, fix bugs, and add new functionality using Drupal. This included recreating the Giles County Animal Rescue website and adding features such as newsletters and animal statuses.
- Human Potential Program for Professionals. Love, Kara M.; Higgins, James P.; Wirth, Jeremy S.; Whitcomb, Philip; Abdulrahman, Emad; Mitchell, Calvin (Virginia Tech, 2017-04-28) This report describes the Human Potential Program for Professionals (HPPP) CS4624 Multimedia, Hypertext, and Information Access capstone project and its deliverables. The goal of the HPPP project centers on assisting Dr. Anna Pittman by creating introductory video material for her HPPP program. The HPPP program consists of eight one-hour modules, each of which has an accompanying two-to-five-minute introductory video provided by this capstone project. These videos feature Dr. Pittman giving a brief overview of the module and highlighting its main topics. After several meetings with Dr. Pittman to discuss her vision for the introductory videos, a schedule was devised for filming. Dr. Pittman also wanted a logo for HPPP, which our team provided. Another aspect of the videos was the accompanying background music. This music was original, and the final three tracks used in the videos were created with Cycling ’74’s Max/MSP software. The raw footage was then edited in Apple’s iMovie software and combined with the logo and original music. The material was provided to Dr. Pittman for review. After receiving her comments, the team was able to address each concern and make adjustments. This was an iterative process requiring the team to work very closely with Dr. Pittman.
- LucidWorks Vectorize Module for the Digital Library Curriculum Initiative. Kniphuisen, David; Tran, Alan (2013-05-18) The goal of our project was to create a learning module for students interested in converting a large number of documents into a usable form for machine learning, information retrieval, and related purposes. To complete this task, we wrote a module that explains how LucidWorks Big Data software handles the task of vectorizing documents using a workflow. This module details the approach that LucidWorks implements, and gives detailed instructions on how to create a collection, start the workflow, check the status of the workflow, and finally access the results after the workflow completes. Upon completion of our module, users will be able to test their understanding using the example documents provided by the LucidWorks software, and be familiar with Hadoop’s distributed file system. After users are familiar with how the software works, they will be able to create their own vectorized representations of documents. Our module also provides information about installing the LucidWorks software on a virtual machine; if users have no access to the software, they will then be able to create their own instance of it. The module will also be available through http://en.wikiversity.org/wiki/Curriculum_on_Digital_Libraries.
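Document vectorization of the kind the module teaches can be illustrated with a minimal bag-of-words sketch. The LucidWorks workflow itself (built on Mahout and Hadoop) is far more elaborate; this only shows the underlying idea of turning text into fixed-length numeric vectors:

```python
from collections import Counter

def build_vocabulary(documents):
    """Collect the sorted set of all terms across the corpus and
    assign each term a fixed vector index."""
    vocab = sorted({term for doc in documents for term in doc.lower().split()})
    return {term: i for i, term in enumerate(vocab)}

def vectorize(doc, vocab):
    """Turn one document into a term-frequency vector over the vocabulary.
    Terms outside the vocabulary are ignored."""
    counts = Counter(doc.lower().split())
    return [counts.get(term, 0) for term in vocab]
```

Real pipelines add tokenization rules, TF-IDF weighting, and distributed execution, but the vector representation produced is conceptually the same.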
- Marine Blender. Shaffer, Zachary; Macht, Henry; Shirazi, Adrian; Campbell, Mitchell (Virginia Tech, 2023-05-17) A model of a realistic marine environment is needed to train a rugged, onboard optical sensor designed by Cell Matrix Corporation, a VTCRC COgro member (i.e., a small company in Virginia Tech's Corporate Research Center), in a project led by Dr. Peter Athanas, an ECE professor at Virginia Tech. The model is built in Blender, a free and open-source 3D modeling and rendering tool. The chosen environment is the intracoastal waters of the Palm Beach Inlet in Florida, between the Port of Palm Beach and the Inlet, approaching the Inlet from the south side of Peanut Island. This active inlet and port area forms the scene of the Blender model. To build an accurate representation of the specified area, we constructed a terrain model for the Palm Beach Inlet water area from the Port of Palm Beach to the Inlet, including where the Intracoastal Waterway channel meets the Inlet channel, south of Peanut Island. This covers the surrounding islands and land masses, bridges, and large structures. There are also roughly five classes of boats to model (e.g., yachts, sailboats, mega-yachts, cargo ships, fishing boats, and other boats commonly found in the area), representing different situations. Distinct-looking classes of boats are needed to train the marine sensor to recognize them, so we chose different classes and created, or found and customized, a boat model from each class. The team was provided with the trajectories of individual boats traveling this area from AIS ship tracking data published by the US Coast Guard. To simulate these realistic situations, we wrote a Blender script that animates boats transiting along these AIS tracks. The renders we created from our Blender project are representations of the Palm Beach Inlet water area, and will hopefully serve as a useful resource for AI model training.
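The core of animating a boat along an AIS track can be illustrated outside Blender: given timestamped waypoints, interpolate the boat's position for any frame time. The function and toy track below are ours, a sketch of the approach under the assumption of linear motion between fixes, not the team's actual script; in Blender, the interpolated values would be assigned to a boat object's location and keyframed:

```python
from bisect import bisect_right

def position_at(track, t):
    """Linearly interpolate an (x, y) position along an AIS track.

    track: list of (time, x, y) tuples sorted by time.
    Times before the first fix or after the last clamp to the ends.
    """
    times = [p[0] for p in track]
    if t <= times[0]:
        return track[0][1:]
    if t >= times[-1]:
        return track[-1][1:]
    i = bisect_right(times, t)          # first waypoint after t
    t0, x0, y0 = track[i - 1]
    t1, x1, y1 = track[i]
    a = (t - t0) / (t1 - t0)            # fraction of the leg covered
    return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

# A toy two-leg track: east for 10 s, then north for 10 s.
track = [(0.0, 0.0, 0.0), (10.0, 100.0, 0.0), (20.0, 100.0, 50.0)]
print(position_at(track, 5.0))   # → (50.0, 0.0), halfway along leg 1
print(position_at(track, 15.0))  # → (100.0, 25.0), halfway along leg 2
```

Evaluating this once per animation frame yields a boat that follows the recorded trajectory at its recorded speed, which is what makes the rendered scenes usable as realistic training situations.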
- Mathematics Education Recruitment Video. May, Daniel; Gates, Greg; Zhang, Jeff (2012-05-02) This video was created to help recruit graduate students for the Mathematics Education program at Virginia Tech.
- Max Video Tutorials. DeYoung, Tyler; Kahn, Amanda; Darivemula, Deepika; Russell, John (2016-05-04) In MUS 3065, Computer Music and Multimedia, students learn to use the Max programming environment to compose interactive digital music. This project aims to assist these students by producing video tutorials on the more useful aspects of Max, reinforcing core concepts and methods that Dr. Nichols teaches in class. The selected topics for the videos are the coll object, additive synthesis, audio modulation, quad-speaker spatialization, timing in Max, and Max basics. Each video presents a screen recording of the Max environment with a voice-over explaining what the narrator is doing and why.
- Micro-Aggression Video Vignettes. Sharma, Divya; Patha, Laxmi Harshitha; Sethi, Gurkiran; Kotagiri, Pranavi (2016-05-10) The goal of our project is to construct video vignettes of scenarios illustrating different types of micro-aggressions. Micro-aggressions are the everyday verbal, nonverbal, and environmental slights, snubs, or insults, whether intentional or unintentional, that communicate hostile, derogatory, or negative messages to target persons based solely upon their marginalized group membership (from Diversity in the Classroom, UCLA Diversity & Faculty Development, 2014). Interactions and conversations between peers and faculty happen constantly, and the biggest concern with micro-aggression is that individuals may not even know they are committing one, which is why we want to inform as many people as possible about this topic. In fact, a micro-aggression can occur even when someone is giving a compliment. By raising awareness of micro-aggressions we can have safer, more alert, and more intelligent interactions. We created videos displaying different types of micro-aggression events, and have completed shooting and editing of three videos, each 1-3 minutes long. The editing was done in the Innovation Space Center in Torgersen Hall, and the resulting videos are available through YouTube. We have also completed preliminary work on a fourth video. The raw files for the videos are located in the Innovation Space in the "Save Work Here" folder under the names "Sharma" and "Patha". The overall goal of these videos is to garner attention and awareness regarding micro-aggressions that take place on a day-to-day basis. We hope our videos can be a stepping-stone toward addressing an everyday problem, and perhaps inspire others to produce additional videos on this important topic.