CS4624: Multimedia, Hypertext, and Information Access


This collection contains the final projects of the students in the course Computer Science 4624: Multimedia, Hypertext, and Information Access, at Virginia Tech. This course, taught by Professor Ed Fox, is part of the Human-Computer Interaction track, the Knowledge, Information, and Data track, and the Media/Creative Computing track. The curriculum introduces the architectures, concepts, data, hardware, methods, models, software, standards, structures, technologies, and issues involved with: networked multimedia (e.g., image, audio, video) information, access, and systems; hypertext and hypermedia; electronic publishing; virtual reality. Coverage includes text processing, search, retrieval, browsing, time-based performance, synchronization, quality of service, video conferencing, and authoring.


Recent Submissions

  • Building Web App for Automated Vehicles Fuel/Energy Estimation
    Quinn, Courtney; Scott, Layla; Chao, Christina; Tapia, Eric; Batra, Rohin (Virginia Tech, 2024-12-16)
    “Is switching from a gasoline engine to an electric or hybrid vehicle worth it?” The decision involves multiple factors: while fuel prices are high, electric vehicles also come with significant upfront costs, raising questions about cost-effectiveness and environmental impact. Consumers and researchers alike need accessible tools to evaluate such factors in the context of sustainability. As the transportation sector embraces eco-friendly vehicles, the demand grows for user-centered tools that clarify energy consumption data for both industry experts and general users. This interdisciplinary project responds to this need by creating a web-based application that integrates the Virginia Tech Transportation Institute (VTTI) models for vehicle energy consumption within a graphical user interface. By making these sophisticated models accessible, the platform empowers both researchers and non-expert users to make data-driven decisions on vehicle energy efficiency and environmental impact. Past studies, such as those by Madziel and Campisi, highlight the impact of variables like temperature, vehicle load, and driving style on electric vehicle (EV) energy usage [1]. Our platform provides an intuitive interface, enabling users to upload speed data, select a vehicle type, and view real-time energy analytics through interactive charts. The system includes support for:
    • Internal Combustion Engine Vehicles (ICEV)
    • Battery Electric Vehicles (BEV)
    • Hybrid Electric Vehicles (HEV)
    • Hydrogen Fuel Cell Vehicles (HFCV)
    Built using React and Python Flask, the site includes data upload, calculation results, and visualization features for all user levels. By bridging complex analytics with user-friendly design, this platform supports informed, data-driven decisions in sustainable transportation.
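Purely as an illustration of how such a site might wire the pieces together, here is a minimal Flask endpoint sketch; the route name, vehicle factors, and the estimate_energy() formula are placeholders, not the VTTI models the project actually integrates.

```python
# Minimal sketch: a Flask route that accepts uploaded speed data and returns an
# energy estimate. The per-vehicle factors and formula are illustrative only.
from flask import Flask, request, jsonify

app = Flask(__name__)

def estimate_energy(speeds_kmh, vehicle_type):
    # Placeholder model: energy grows with the square of speed, scaled per type.
    factors = {"ICEV": 0.8, "BEV": 0.2, "HEV": 0.4, "HFCV": 0.3}
    factor = factors.get(vehicle_type, 0.5)
    return sum(factor * (v ** 2) / 1000.0 for v in speeds_kmh)

@app.route("/api/estimate", methods=["POST"])
def estimate():
    payload = request.get_json()
    speeds = payload.get("speeds", [])          # e.g., [32.0, 45.5, 60.2]
    vehicle_type = payload.get("vehicle", "BEV")
    return jsonify({"vehicle": vehicle_type,
                    "energy_estimate": estimate_energy(speeds, vehicle_type)})

if __name__ == "__main__":
    app.run(debug=True)
```

A React frontend like the one described could POST the parsed speed trace to this route and render the returned figures in its charts.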
  • Crash Rate Prediction from Traffic Volume Data using AI
    Chan, Travis; Hossain, Syeda; Le, Jonathan; Asam, Arham; Khadka, Devanshu (Virginia Tech, 2024-12)
    In today's fast-paced, technology-driven world, we're generating more transportation data than ever before. This data offers opportunities to make roads safer and more efficient, but it is often hard to take advantage of. Our client, Dr. Mohamed Farag, is a researcher in the Center for Sustainable Mobility (CSM) at the Virginia Tech Transportation Institute, a research institute whose work contributes to the advancement of the transportation industry. To address this challenge, we have developed a user-friendly web application that harnesses machine learning to predict crash rates based on traffic volume data. The application allows users to run machine learning models that predict crash rates for roads. It comprises four main components: a frontend interface, a backend server, an API, and offline machine learning model development using Google Colaboratory. Administrators have additional privileges, such as managing machine learning models through the Model Management section. They can upload new models, specify model details like names and attributes, and monitor existing models via the Model List page, which displays all models along with their creation dates and statuses. We've implemented secure user authentication on the frontend using JWT tokens for login and sign-up processes. The Home Page presents users with a tabular view of past predictions, allowing them to see the date, model used, and results, as well as the option to add new predictions. Our backend architecture features a Next.js server for the web backend and a FastAPI server for the machine learning backend. The web backend handles user authentication, prediction collections, and model management, while interfacing with the FastAPI ML backend to generate predictions. To ensure quality and reliability, we've conducted extensive testing and evaluation, including machine learning model testing, model evaluation, and client assessments. We recognize that further work is needed to finalize the product. This report outlines our plans for the remainder of the semester and proposes ideas for future enhancements beyond the current project scope, all aimed at making our roads safer through data-driven insights.
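As a rough sketch of the kind of FastAPI route such an ML backend could expose, assuming hypothetical feature names (aadt, lane_count, speed_limit) and a simple placeholder where the trained model would sit:

```python
# Illustrative FastAPI prediction route; the path, request fields, and the
# linear placeholder below are assumptions, not the team's actual API or model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TrafficVolume(BaseModel):
    aadt: float            # annual average daily traffic (assumed feature)
    lane_count: int
    speed_limit: float

@app.post("/predict")
def predict(volume: TrafficVolume):
    # A real deployment would load a trained model (e.g., with joblib) at startup;
    # a simple linear expression stands in for it here.
    crash_rate = 0.001 * volume.aadt / max(volume.lane_count, 1)
    return {"crash_rate": round(crash_rate, 4)}
```

The Next.js web backend described above would then forward authenticated requests to a route of this shape and store the returned prediction in the user's collection.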
  • Frontend-Crisis Events Digital Library
    Hall, James; Seth, Ananya; Patel, Ansh (Virginia Tech, 2024-12-11)
    The Frontend for Crisis Events Digital Library project addresses the need for a streamlined, consolidated interface to assist users in understanding and analyzing crisis events. Current tools for crisis event analysis exist as separate applications, including Text Summarization, Knowledge Graph Generation, News Coverage Analysis, and Information Extraction. These applications are widely used by responders, analysts, and the public, who often face difficulties navigating through multiple platforms to obtain comprehensive insights. By unifying these tools within a single web interface, the project aims to reduce complexity, save time, and improve accessibility for users. The project will link this frontend interface with an integrated backend that processes and retrieves data efficiently across applications. This front-end solution will be documented extensively, covering its architecture, development process, API integration, and usage guidelines to ensure ease of adoption, scalability, and maintainability. Ultimately, this project enhances users' ability to gather, interpret, and act upon crisis data, fostering a better understanding and response to such events.
  • Traffic Visualization Dashboard Final Report
    Worsley, Gabriel; Noneman, Brett; Borghese, Matt; Xinchen, Liao; Xi, Chen (2024-12)
  • A Discovery Portal for Twitter Collections
    Casery, Christina; Anderson, Quinn; Omotosho, Abdul; Patel, Kirti; Johnson, Adrian (2024-12-15)
    This report documents the continuation of a project begun by previous students in 2021. About six billion Tweets have been collected in three formats, Social Feed Manager (SFM), yourTwapperKeeper (YTK), and Digital Methods Initiative Twitter Capture and Analysis Toolset (DMI-TCAT), by the Digital Library Research Laboratory (DLRL) at Virginia Tech. The overall goal of this project is to organize these Tweets into event collections and consolidate the collection information that is stored in three different schemas and databases into one web app, making the data more accessible. In Fall 2021, the Library6BTweet team designed an individual Tweet and collection-level Tweet schema. They also worked on converting Tweet data. In Spring 2022, the Twitter Collections team optimized the conversion scripts, converted Tweet data, and looked into implementing a machine learning model to categorize Tweets. In Spring 2024, the Twitter Database Discovery Portal team consolidated the collected data into a local MongoDB database and built a web app with minimal features that displays the collected data and allows the user to search and filter the collections. The Twitter Database Discovery Portal team did not complete extracting the data from the SFM database. Our team’s goal is to build upon the past teams’ contributions, finish extracting the data from the SFM database, and add new features to the web app.
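A minimal sketch of what consolidating one source schema into a unified MongoDB collection might look like with pymongo; the field names, database name, and source labels are assumptions rather than the team's actual schemas.

```python
# Hedged sketch: map a raw tweet record from one source format into a unified
# document and insert it into a local MongoDB collection.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["twitter_portal"]

def to_unified(raw_tweet, source):
    return {
        "tweet_id": str(raw_tweet["id"]),
        "text": raw_tweet.get("text", ""),
        "created_at": raw_tweet.get("created_at"),
        "source_format": source,          # e.g., "SFM", "YTK", or "DMI-TCAT"
    }

raw = {"id": 123456789, "text": "example tweet", "created_at": "2021-09-01"}
db.tweets.insert_one(to_unified(raw, "SFM"))
```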
  • Building an Intelligent QA/Chatbot with LangChain and Open Source LLMs
    Cross, Patrick; Syed, Mikail; Scott, Sean; Singh, Aditya; Zhang, Maokun (2024-12)
    This project developed a web-application Q/A chatbot that enables users to interact with Large Language Models (LLMs) through a collection-based format. The system implemented a Retrieval Augmented Generation (RAG) pipeline to provide context-specific responses based on either user-uploaded documents (.txt, .html, and .zip formats) or user-uploaded URLs. The application features secure user authentication, multiple instances of chat/document contexts through collections, document upload, and standard LLM chatbot functionalities, including the ability to switch between LLMs. This report will give readers an understanding of how the application was designed and developed; how to install and use the application; how to continue development of the application; lessons learned during development; and future plans for the project.
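To illustrate the retrieval step of a RAG pipeline in miniature, the sketch below uses TF-IDF similarity in place of the project's actual LangChain components; the chunks, query, and prompt format are placeholders.

```python
# Simplified RAG retrieval step: rank stored document chunks against a query,
# then prepend the top matches to the prompt sent to the LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = [
    "The uploaded report describes quarterly revenue growth.",
    "Section two covers hiring plans for the engineering team.",
    "The appendix lists datacenter energy usage by region.",
]

vectorizer = TfidfVectorizer()
chunk_vectors = vectorizer.fit_transform(chunks)

def retrieve(query, k=2):
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, chunk_vectors)[0]
    ranked = sorted(zip(scores, chunks), key=lambda pair: pair[0], reverse=True)
    return [text for _, text in ranked[:k]]

context = "\n".join(retrieve("How much energy do the datacenters use?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
```

If no chunk scores above a threshold, the application can report that the collection cannot answer the query, which matches the behavior described above.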
  • A discovery portal for CS4624 projects
    Arze, Henry; Natysin, Logan; Titi, Matthew; Underwood, Patrick (Virginia Tech, 2024-12-12)
    CS4624 is one of many capstone courses a student within the CS curriculum is able to take. The multimedia and hypertext course digs into the diverse range of multimedia content such as images, audio, and video, and any information retrieval and access relating to it. With this comes the capstone project, a semester-long project given to us students to allow a display of mastery within our discipline. It has been a pleasure to have Dr. Farag’s guidance with the project. An insight into real-world applications as well as a diverse approach to different problems has allowed us to grow as both people and developers. The current discovery portal for CS4624 student projects serves as a platform for students working within the course to submit and hold their projects. Details included within the pages on the discovery portal consist of abstract, date, author, presentation, final report, source code, and collections. Additionally, the discovery portal contains filtering features to allow users to search for any project depending on: recent submissions, issue semester, author, title, subject, content type, and department. This allows teachers to easily access desired projects and provides safe storage for the semester-long projects that students worked hard on. With this comes the purpose of the project. After reviewing the functionality and appearance of the existing discovery portal, there were many things that needed to be improved. The first step was reworking the backend, where we made a transition from MongoDB to MySQL. This change was necessary in order to support a more scalable and relational database; project files proved to be large, and MongoDB had limitations in supporting such file sizes. We also transitioned from Node/Express to a Flask backend that could easily interact with the ReactJS frontend. Once our system was structured to our liking, we aimed to repair the core features. The login, project sign-up, and other features were not functioning properly and required fixes to support distinct user roles. Linking all the existing features between the frontend and backend was essential to the continuation of this project. After analyzing the discovery portal, our main focus became expanding the features for the three user roles: admin, client, and student. Because they share functionality and view collections similarly, we created a reusable home view tailored to each user's permissions. Protected pages were introduced to secure our system, and we redesigned the frontend with Tailwind and ShadCN components for a more modern interface. This overhaul now provides CS4624 students, instructors, and clients with a more efficient, centralized platform for accessing, managing, and preserving semester-long projects, eliminating the need for Canvas or manual entries. Upon completion, we hope to provide future CS4624 students and staff with a more convenient tool to guide them in their journey of completing their capstone projects.
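As a hedged sketch of how Flask routes might be protected per role (admin, client, student), assuming purely for illustration that the role arrives in a request header rather than however the project actually decodes it from its authentication tokens:

```python
# Minimal role-based route protection in Flask; the header name and routes are
# illustrative assumptions, not the project's actual implementation.
from functools import wraps
from flask import Flask, request, jsonify, abort

app = Flask(__name__)

def require_role(*allowed):
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            role = request.headers.get("X-Role", "student")
            if role not in allowed:
                abort(403)
            return view(*args, **kwargs)
        return wrapped
    return decorator

@app.route("/api/projects", methods=["GET"])
@require_role("admin", "client", "student")
def list_projects():
    return jsonify([{"title": "Example project"}])

@app.route("/api/projects", methods=["POST"])
@require_role("admin")
def create_project():
    return jsonify({"status": "created"}), 201
```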
  • CS4624 Projects Discovery Portal
    Perryman, Shayne; Kanaan, Adam; Fuentes, Mark; Gonzalez, Maddie; Moshin, Faizan (Virginia Tech, 2024-05-01)
    CS4624 is one of many capstone courses a student within the CS curriculum is able to take. The multimedia and hypertext course digs into the diverse range of multimedia content such as images, audio, and video, and any information retrieval and access relating to it. With this comes the capstone project, a semester-long project given to us students to allow a display of mastery within our discipline. It has been a pleasure to have Dr. Farag and Vedant Shah guide and assist us with the project. An insight into real-world applications as well as a diverse approach to different problems has allowed us to grow as both people and developers. The current discovery portal for CS4624 student projects serves as a platform for students working within the course to submit and hold their projects. Details included within the pages on the discovery portal consist of abstract, date, author, presentation, final report, source code, and collections. Additionally, the discovery portal contains filtering features to allow users to search for any project depending on: recent submissions, issue date, author, title, subject, content type, and department. This allows teachers to easily access desired projects and provides safe storage for the semester-long projects that students worked hard on. With this comes the purpose of the project. After reviewing the functionality and appearance of the discovery portal, there were many things that needed to be improved. The first very noticeable issue was that the search features either did not function properly or did not function at all. For starters, the ‘By Issue Date’ filter would not allow filtering by month or year alone; it required that both were specified, and the open search required an exact date format (xx-xx-xxxx). The ‘By Author’ filter expected that the full name was typed out, including the comma separating the first and last name. All open search features expected a prompt that would produce an exact match. It also came to our attention that some features produced over a thousand categories for only 269 projects. This should never be the case, as the purpose of filtering is to narrow the search for projects. After analyzing the discovery portal, our main focus became improving upon the search and filtering features. This required us to completely recreate the discovery portal, because the existing source code was unavailable. With this, we first needed to create a front-end and back-end that would relay information requests so a web page could display them. To replicate the discovery portal further, we also implemented authentication. Accounts are divided into ‘admin’, ‘professor’, and ‘user’, each having distinct permissions on what they are able to insert, delete, and modify. Following this was the development of our database to store all of the projects’ files as well as a schema that the search and filtering features would utilize. Finally, we implemented a search API for the back-end to access, completed the schema for each project, and created a functioning search and filter feature. Upon completion, we hope to provide future CS4624 students and staff with a more convenient tool to guide them in their journey of completing their capstone projects.
  • Crisis Events Information Extraction
    Rabbani, Eitsaam; Spies, Will; Gregory, Sully; Brown, Brianna; Saikrishna, Nishesh  (Virginia Tech, 2024-05-01)
    Unfortunately, crises occur quite frequently throughout the world. In an increasingly digital age, where most news outlets post articles about events online, there are often tens or even hundreds of articles about the same event. Although the information found in each article is often similar, some information may be specific to a certain article or news outlet. And, as each news outlet usually writes a lengthy article for each crisis event that happens, it can be hard to quickly locate and learn the basic, important information about a given crisis event. This web app project aims to expedite this lengthy process by consolidating any number of articles about a crisis event into who, what, where, when, and how (WWWWH). This information extraction is accomplished using machine learning for named entity recognition and dependency parsing. The extracted WWWWH info is displayed to the user in an easily digestible table, which allows users to quickly learn the essential information regarding any given crisis event. Both the user’s input and the output data are saved to a database, so that users can revisit their previous uses of the program at any time. While users must manually input web articles into the program, whether as links or .txt files, there is potential in the future to use a web crawler to automate this initial article gathering. The application is built on the MERN stack. MongoDB was chosen due to its flexible document structure. For back-end features such as natural language processing and our server, we utilized Python and Express/Node.js. The front-end consists of React, which is used to fetch our data and utilizes component libraries such as MUI for a consistent design language. The deliverables for this project include our Final Presentation and Final Report, which show our progress throughout the development stages, and our code for the application, which are submitted to our professor and client, Mohamed Farag.
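A small, hedged sketch of the named-entity portion of WWWWH extraction using spaCy; bucketing entity labels into who/where/when this way is a simplification, not necessarily the project's exact pipeline, and the sample sentence is invented.

```python
# Named entity recognition with spaCy, sorted into rough WWWWH buckets.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

text = ("A magnitude 6.2 earthquake struck central Italy on Wednesday, "
        "officials from the Civil Protection Department said.")

doc = nlp(text)
wwwwh = {"who": [], "where": [], "when": []}
for ent in doc.ents:
    if ent.label_ in ("PERSON", "ORG"):
        wwwwh["who"].append(ent.text)
    elif ent.label_ in ("GPE", "LOC"):
        wwwwh["where"].append(ent.text)
    elif ent.label_ in ("DATE", "TIME"):
        wwwwh["when"].append(ent.text)

print(wwwwh)
```

The "what" and "how" slots would draw on dependency parses of the same `doc`, which spaCy also provides.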
  • Integrated Web App for Traffic Simulator
    Nguyen, Phu; Knight, Ryan; Issing, Alex; Shah, Karan; Desai, Joey (2024-05-01)
  • Crisis Events Text Summarization
    Shah, Tarek; Crisafulli, Francesco; Sonnakul, Aniket; Mohammed, Farhan; Guadiamos, Santiago (2024-05-01)
    From mass shootings to public health emergencies, crisis events have unfortunately become a prevalent part of today’s world. This project contributes to the advancement of crisis response capabilities by providing an accessible tool for extracting key insights from diverse sources of information. The system allows users to create collections to aggregate articles and data relevant to specific crisis events. Users can upload files in various formats, including text files of links, zip files containing articles in text format, or zip files with HTML content. The program extracts and organizes information from these sources, storing it efficiently in a SQLite database for future retrieval and analysis. One of the key features of the system is its flexibility in text summarization. The current summarizers available are BERT, T5, and NLTK, but it would be relatively easy to add new summarizers at a later date. Currently, the NLTK and T5 summarizers work relatively quickly, but the BERT summarizer takes minutes before it finishes summarizing. This is because the BERT summarizer is the most powerful, being a larger model and requiring more processing. The front-end of the application is written in React.js using JavaScript. The back-end is composed of the database, the scraper, and the summarizers. The code for accessing the database is written in Python. The Flask framework facilitates back-end operations, allowing seamless integration between frontend and database functionalities. The code for the summarizers is also written using Python. The libraries used in the summarizer code are NLTK, Transformers, PyTorch, and Summarizer. The code for the web scraper is also written using Python and utilizes the BeautifulSoup4 library for parsing HTML. Overall, this project aims to empower users with a crisis information management tool that efficiently aggregates, extracts, and summarizes data to aid in crisis response and decision-making.
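For a sense of what one summarizer path might look like, here is a minimal Hugging Face pipeline sketch with a T5 model; the model name, length limits, and sample text are assumptions, and the project's BERT and NLTK summarizers follow different code paths.

```python
# Abstractive summarization with a small T5 model via the transformers pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")

article = (
    "Emergency crews responded to flooding across the county on Tuesday after "
    "heavy overnight rain. Officials opened two shelters and closed several "
    "roads while utility workers restored power to affected neighborhoods."
)

summary = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```

Swapping in a larger checkpoint improves quality at the cost of the longer run times the abstract notes for the BERT-based summarizer.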
  • CS4624: Crisis Events Knowledge Graph Generation
    de Chutkowski, James; Abhilash, Geethika; Turkiewicz, Justin; Tran, Anthony; Walters, Matthew (2024-05-01)
    In a world inundated by information during crisis events, the challenge isn’t just finding data, it’s making sense of it. Knowledge graphs rise to this challenge by structuring disparate data into interconnected insights, enabling a clearer understanding of complex situations through visual relationships and contextual analysis. This report presents a web-based application for generating and managing knowledge graphs and details the process taken to create it. The application integrates React with Material-UI for the frontend, Flask for the backend, and MongoDB and Neo4j for data storage. Users input multi-document collections which are processed using Beautiful Soup, Stanford’s CoreNLP, NLTK, and SpaCy to extract and analyze data, forming triples and named entities. These elements are then used to generate knowledge graphs that are stored in Neo4j and rendered on the web via Sigma.js and Graphology. This report addresses development processes, features, testing, step-by-step guidance for users and developers, the lessons we learned working on this project, and potential enhancements that can be implemented by future student groups picking up our project.
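A brief sketch, under assumed node labels, relationship type, and connection details, of how one extracted (subject, relation, object) triple could be written to Neo4j with the official Python driver:

```python
# Store a single extracted triple in Neo4j; labels and credentials are placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_triple(tx, subject, relation, obj):
    tx.run(
        "MERGE (s:Entity {name: $subject}) "
        "MERGE (o:Entity {name: $obj}) "
        "MERGE (s)-[:RELATION {type: $relation}]->(o)",
        subject=subject, relation=relation, obj=obj,
    )

with driver.session() as session:
    session.execute_write(add_triple, "Hurricane Ida", "made landfall in", "Louisiana")

driver.close()
```

A frontend renderer such as Sigma.js would then read these nodes and relationships back out to draw the graph.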
  • Creating a Website for Election Predictions
    Tran, Pierre; Pham, Danny; Hong, Eunice; Chung, Danny; Tran, Bryan (2024-05-01)
    This project, Creating a Website for Election Predictions, aims to present 2024 presidential election predictions at the county level using demographic variables. By leveraging machine learning techniques on historical survey data, our website offers an interactive map and data visualization tools for public access. This non-academic approach seeks to provide a more accurate and representative analysis of election predictions, diverging from traditional poll-based methods. Additionally, it serves as a user-friendly platform for policymakers and the public to gain insightful, data-driven perspectives on election outcomes.
  • Crisis Events News Coverage Analysis
    Alemu, Hiwot; Hashmi, Rayyan; Wallace, Eric; Harrison, Dylan (2024-05-01)
    Analysis of the coverage of crisis events by news agencies provides important information for historians, activists, and the general public. Detecting bias in news coverage is a direct benefit. Thus, there is a need for an automated tool that, given a set of crisis events and a dataset of webpages about these events, can extract the set of news media outlets that reported about these events and how frequently, the types of events covered by each media outlet, and how each news media outlet links to other outlets, if any. Bias detection and sentiment analysis can then be applied to each media outlet to discover hidden patterns. The web application we have designed will allow users to provide a collection of URLs or HTML files for webpages reporting on a crisis event. The program will provide a thorough analysis of the provided collection, detecting bias in news coverage as well as linkage between different domains. The results of this analysis will then be returned to the user, offering insights into their provided collection of news articles in a way that is accurate, informative, and easy to understand. Our team is optimistic that the application we have developed will assist users in navigating the complexities of news reporting during periods of uncertainty. In today's increasingly divided and turbulent political landscape, discerning the truth from misinformation is more crucial than ever. We believe that our application will empower individuals to make more informed decisions through enhancing the transparency of online news organizations, ultimately contributing to a culture of more responsible journalism and improved civic discourse.
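As one small, illustrative slice of such an analysis, the snippet below counts how often each outlet's domain appears in a supplied list of article URLs; the URLs are placeholders, and the real application layers bias and linkage analysis on top of this.

```python
# Count news outlets by domain from a list of article URLs.
from collections import Counter
from urllib.parse import urlparse

urls = [
    "https://www.example-news.com/2024/03/earthquake-coverage",
    "https://www.example-news.com/2024/03/earthquake-follow-up",
    "https://another-outlet.org/stories/earthquake-report",
]

outlet_counts = Counter(urlparse(u).netloc.removeprefix("www.") for u in urls)
print(outlet_counts.most_common())
```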
  • Building an Intelligent QA/Chatbot with LangChain and Open Source LLMs
    Bogusz, William; Mohbat, Cedric; Liu, James; Neeser, Andrew; Sigua, Alex (2024-05-01)
    We have created a web application enabling access to intelligent Q/A chatbots, where the end user can query large language models (LLMs) to retrieve context-specific information. The web application provides a collection-based interface, where documents uploaded by the user provide the context for the model's responses to user input. This is accomplished through a retrieval augmented generation (RAG) pipeline. To reduce inaccuracies and fulfill the user needs identified by the client, the model will notify the user if a query cannot be sufficiently answered given the documents in a collection. As such, our application emphasizes collection management, with the functionality to upload (in .txt, .html, or .zip format) and delete documents as well as select specific collections, while providing a familiar interface not much different from the web interface for established AI chatbot services such as OpenAI’s ChatGPT or Anthropic’s Claude. The final product also encompasses a landing page and user login, with access to a document upload portal for creating document collections.
  • Discovery Portal for Twitter Collection
    Saif, Hamza; Mustard, Fiona; Duduru, Sai; Forest, Kyra; Agadkar, Vaasu (Team 10, 2024-05-01)
  • Integrated Web App for Crisis Events Crawling
    Hong, Michelle; Rathje, Sondra; Angeley, Stephen; Teaford, Jordan; Braun, Kristian (Virginia Tech, 2024-04)
    The integration of a web crawler and a text classifier into a unified web application is a practical advancement in digital tools for crisis event information retrieval and parsing. This project combines HTML text processing techniques and a priority-based web crawling algorithm into a system capable of gathering and classifying web content with high relevance to specific crisis events. Utilizing the classifier project’s model trained with targeted data, the application enhances the crawler's capability to identify and prioritize content that is most pertinent to the crisis at hand. The transition from Firebase to MongoDB for backend services provides a much more flexible, accessible, and permanent database solution. In addition, the system’s backend is supported by a Flask API, which facilitates the interaction between the frontend, the machine learning model, and the database. This setup not only streamlines the data flow within the application but also simplifies the maintenance and scalability of the system. This integrated web app aims to serve as a valuable tool for stakeholders involved in crisis management, such as journalists, first responders, and policy makers, enabling them to access timely and relevant information swiftly. During development there were many challenges in fixing the two projects; out of the box, neither was functional when obtained from its repository. The projects also had incomplete documentation, leaving a lot for our team to figure out on our own. The result of our team's work is a redesigned frontend, backend, and local MongoDB database combined into a cohesive, full application.
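A compact sketch of the priority-based crawling idea described above, with a keyword heuristic standing in for the team's trained relevance classifier; the keywords, limits, and scoring are illustrative assumptions.

```python
# Priority-based crawler sketch: URLs scored for relevance are popped from a
# heap, fetched, and their outgoing links re-scored and pushed back.
import heapq
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def score_relevance(url):
    # Placeholder for the ML classifier: favor URLs mentioning crisis keywords
    # (lower scores pop first from the min-heap).
    keywords = ("flood", "earthquake", "wildfire")
    return -1.0 if any(k in url.lower() for k in keywords) else 0.0

def crawl(seed_urls, max_pages=10):
    frontier = [(score_relevance(u), u) for u in seed_urls]
    heapq.heapify(frontier)
    visited = set()
    while frontier and len(visited) < max_pages:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        soup = BeautifulSoup(html, "html.parser")
        for link in soup.find_all("a", href=True):
            next_url = urljoin(url, link["href"])
            if next_url not in visited:
                heapq.heappush(frontier, (score_relevance(next_url), next_url))
    return visited
```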
  • ScrapingGenAI
    Do, James; Bae, Heewoon; Colby, Julius (2024-05-10)
    AI has been widely used for many years and has been a constant front-page news topic. The recent but fast development of generative AI inspired many conversations, from concerns to aspirations. Understanding how the topic develops and when people become more supportive of generative AI is critical for social scientists to pinpoint which developments inspire public discussions. The use of generative AI is relatively new. The data and insight gathered could be used to determine if use in a commercial setting (like in Travel/Hospitality) is viable and what the potential feedback from the public might look like. We developed two specialized web scrapers. The first targets specific keywords within Reddit subreddits to gauge public opinion, and the second extracts discussions from corporate earnings calls to capture the business perspective. The collected data were then processed and analyzed using Python libraries, with visualizations created in Matplotlib, Pandas, and Tkinter to depict trends through line charts, pie charts, and bar charts. We limited our analysis period from August 2022 to March 2024, which is significant as ChatGPT was released in November 2022, allowing us to observe notable changes. These tools not only show changes in public interest and sentiment but also provide a graphical representation of temporal shifts in the perception of AI technologies over time. The final product is designed for anyone interested in company transcripts and in comparing them to the public perspective. The product offers users access to detailed data representations, including numerical trends and visual summaries to further understand the correlation between the company and the public. This comprehensive overview assists in understanding how public and corporate sentiments towards AI have shifted during a recent 20-month period. A significant hurdle was using the PRAW API for Reddit data scraping. Through review of documentation, tutorials, and additional support from a teaching assistant, we successfully implemented the functionality needed to extract and process the data from subreddits effectively. To make our findings more accessible and engaging, future additional work transforming this product into a fully functional website would be beneficial. This platform would make the insights more readily available to a wider audience, including the general public and industry stakeholders. Doing so could enhance the impact and usefulness of our project.
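As a hedged sketch of the Reddit scraping step with PRAW, using placeholder credentials, subreddit, and keyword rather than the team's actual configuration:

```python
# Keyword search within a subreddit using PRAW; credentials are placeholders.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="scraping-genai-demo",
)

posts = []
for submission in reddit.subreddit("travel").search("generative AI", limit=50):
    posts.append({
        "title": submission.title,
        "score": submission.score,
        "created_utc": submission.created_utc,   # used to filter the analysis window
    })

print(f"Collected {len(posts)} posts")
```

Filtering `created_utc` against the August 2022 to March 2024 window and aggregating by month would feed the Matplotlib trend charts described above.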
  • Assistive Voice Assistant
    Satnur, Abishek Ajai; Bruner, Charles (2024-05-09)
  • SharkPulse App
    Hagood, Mia; Warner, Patrick; Tran, Anhtuan Vuong (2024-05-09)
    This project is an extension of work that has been done in previous years on the sharkPulse website. sharkPulse was created due to the escalating exploitation of shark species and the difficulty of classifying shark sightings. Due to sharks’ low population dynamics, exploitation has only exacerbated the issue and made sharks the most endangered group of marine animals. sharkPulse retrieves sightings from several sources such as Flickr, Instagram, and user submissions to generate shark population data. The website utilizes WordPress , HTML, and CSS for the front end and R-Shiny, PostgreSQL, and PHP to connect the website to the back end database. The team was tasked with improving the general usability of the site by integrating dynamic data-informed visualizations. The major clients of the project are Assistant Professor Franceso Ferreti from the Virginia Tech Department of Fish and Wildlife Conservation and Graduate Research Assistant Jeremy Jenrette. The team established regular contact through Slack, scheduled weekly meetings online with both clients, and acquired access to all major code repositories and relevant databases. The team was tasked with creating dynamic and data-informed visualizations, general UI/UX improvements, and stretch goals for improving miscellaneous pages throughout the site. The team developed PHP scripts to model a variety of statistics by dynamically querying the database. These scripts were then sourced directly through the site via the Elementor WordPress module. All original requirements from the clients have been met as well as some stretch goals established later in the semester. The team created a Leaflet global network map of affiliate links which dynamically sourced the sharkPulse social network groups from an Excel spreadsheet and generated country border markers and links to each country’s social network sites as well as a Taxonomic Accuracy Table for the Shark Detector AI. The team created and distributed a survey form to collect user feedback on the general usability of the site which was compiled and sent to the client for future work.