Doctoral Dissertations

Recent Submissions

Now showing 1 - 20 of 17920
  • Modulation of Hepatic Lipid Metabolism by Dietary Fats in Neonatal Pigs: Implications for Steatotic Liver Disease
    Yadav, Ravi (Virginia Tech, 2025-09-23)
    Steatotic liver disease (SLD) is increasingly recognized in pediatric populations, yet its nutritional origins and development are poorly understood. Using a neonatal pig model, we conducted three nutrition-based studies to evaluate how dietary lipid composition influences the onset, progression, and metabolic regulation of SLD. In Study Ia, we demonstrated that steatosis develops as early as day 7 in pigs fed medium-chain fatty acid (MCFA)-rich formulas and rapidly progresses to steatohepatitis by day 14, independent of whole-body adiposity. In Study Ib, we identified a paradoxical metabolic state characterized by simultaneous upregulation of lipolytic and lipogenic pathways in MCFA-fed pigs, in which increased fatty acid oxidation failed to prevent hepatic lipid accumulation. In Study II, we compared distinct lipid sources and found that laurate/myristate-rich coconut oil exacerbated steatosis and lipogenesis, while caprylate/caprate-rich medium-chain triglyceride (MCT) oil was hepatoprotective, despite no measurable increase in oxidation. In Study III, we tested animal- and plant-based formula fats and found that lard and butter supported growth but promoted mild steatosis, coconut oil induced severe steatosis and central adiposity, and MCT oil reduced formula intake and prevented progression to steatohepatitis. Altogether, this dissertation work reveals that early-life dietary lipid composition exerts differential effects on hepatic outcomes, independent of obesity, and that MCFA species confer divergent metabolic and histopathological consequences. These findings highlight the need to reconsider infant formula lipid blends not only for growth and energy but also for their long-term implications in metabolic programming and pediatric liver health.
  • Beyond the Checkbox: Leveraging AI Chatbots for Inclusive Demographic Data Collection
    Chekili, Amel (Virginia Tech, 2025-09-19)
    Traditional demographic surveys compress rich identities into rigid checkboxes. This dissertation asks whether a conversational chatbot, powered by GPT-4o, can restore that nuance. In a within-subjects experiment, 230 participants completed both a chatbot conversation and the standard Office of Management and Budget (OMB) form. Exploratory analyses showed that participants' open-ended narratives frequently moved beyond the OMB labels. Encoding these responses with the INSTRUCTOR embedding model and organizing them via hierarchical clustering yields a categorization that can be "cut" at multiple levels of granularity, producing coarser groupings that satisfy regulatory reporting and finer leaves that reveal national, regional, and mixed-heritage detail. Hypothesis-driven tests of user experience reinforced these advantages. On the User Experience Questionnaire, the chatbot outscored the demographic checklist on hedonic qualities, novelty, and stimulation, while the checklist retained pragmatic strengths such as dependability. Perceived group inclusivity also rose when data were collected through the chatbot, regardless of how closely respondents' identities aligned with OMB categories. Overall, the findings indicate that a carefully engineered chatbot, paired with advanced natural-language-processing analyses, can enhance race and ethnicity data collection by producing richer information and fostering a more inclusive, engaging respondent experience.
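    To make the clustering step concrete, the sketch below cuts a hierarchical clustering of narrative embeddings at two levels of granularity. It is a minimal illustration under stated assumptions: the embedding matrix, cluster counts, and labels are placeholders, not the dissertation's actual INSTRUCTOR pipeline or data.

```python
# Hypothetical sketch: cut a hierarchical clustering of narrative embeddings at a
# coarse level (reporting-style categories) and a fine level (detailed identities).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(230, 768))          # placeholder for INSTRUCTOR embeddings

tree = linkage(embeddings, method="ward")         # agglomerative clustering
coarse = fcluster(tree, t=7, criterion="maxclust")    # e.g., OMB-scale categories
fine = fcluster(tree, t=40, criterion="maxclust")     # finer, heritage-level leaves
print(len(set(coarse)), "coarse clusters;", len(set(fine)), "fine clusters")
```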
  • Advances in Survival Analysis: Accurate Partial Likelihood Computation by Poisson-Binomial Distributions and Nonparametric Competing Risk Cox Model
    Cho, Youngjin (Virginia Tech, 2025-09-19)
    Two novel contributions to survival analysis are presented. The first project revisits the partial likelihood in the Cox model, which traditionally approximates conditional probabilities using risk score ratios under a continuous-time assumption. We propose a new accurate partial likelihood computation method based on the Poisson-binomial distribution. Although ties are common in real studies, existing Cox model theory largely overlooks tied data. In contrast, our approach accommodates both grouped data with ties and continuous data without ties, offering a unified theoretical framework for accurate partial likelihood computation regardless of data type. Simulations and real data analyses show that the method reduces bias and mean squared error while improving confidence interval coverage rates, particularly when ties are frequent or risk score variability is high. The second project develops a nonparametric regression model for competing risks survival data by combining the proportional cause-specific hazards framework with a smoothing spline ANOVA approach. We establish estimation procedures and theoretical convergence rates. Simulation studies demonstrate the method's effectiveness, and application to a multiple myeloma dataset reveals that for each gene expression covariate, at least one cause-specific effect is nonlinear and differs from the others. The proposed model fills a gap in the existing literature, where competing risks are often overlooked or covariate effects are assumed to follow parametric forms, by providing a flexible and practical framework for data analysis.
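    For reference, the conventional Cox partial likelihood that the first project revisits can be written in its standard textbook form (general notation, not the dissertation's own development):

```latex
% Standard Cox partial likelihood under the continuous-time (no-ties) assumption.
% delta_i = 1 marks an observed event, x_i is the covariate vector, and R(t_i) is
% the risk set at event time t_i; each factor approximates a conditional probability.
L(\beta) \;=\; \prod_{i:\,\delta_i = 1}
  \frac{\exp\!\left(x_i^{\top}\beta\right)}{\sum_{j \in R(t_i)} \exp\!\left(x_j^{\top}\beta\right)}
```

    As summarized above, the proposed method instead computes these conditional probabilities via the Poisson-binomial distribution, accommodating both tied and untied event times within one framework.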
  • Deep Learning Methods for Built Environment Operational Management
    Gu, Yueyan (Virginia Tech, 2025-09-19)
    This dissertation investigated the development of efficient, reliable, and scalable time series (TS) deep learning (DL) frameworks toward enhancing operational management in the built environment, with case studies on (i) reliable infrastructure anomaly detection (AD) and (ii) scalable energy forecasting. An unsupervised, univariate probabilistic anomaly detection framework—DEGAN: Density Estimation-based Generative Adversarial Networks (GANs)—was studied to enhance detection accuracy, with an emphasis on balancing the recall-precision trade-off, using a real-world case study of railroad track monitoring. By leveraging repeated inspection data, employing standalone discriminator models trained solely on normal time series samples, and using kernel density estimation for probabilistic AD, DEGAN achieved a balanced F1 score of 0.83 (R = 0.8 | P = 0.86) and outperformed classical unsupervised machine learning baseline methods. The findings demonstrated the potential of DL architectures to effectively encode domain-specific human knowledge in infrastructure monitoring tasks. The second study extended the univariate DEGAN framework for effective and efficient multivariate time series AD. A flexible framework was introduced to support both one-dimensional (1D) and two-dimensional (2D) DL architectures, including Autoencoders (AEs and VAEs) and GANs. Using this framework, 14 combinations of data embedding techniques (ensemble, reshaping, stacking, TS-to-image conversion) and model types (1D and 2D DL models) were evaluated. Using multi-channel railroad track inspection data, a 2D convolutional AE with channel stacking and a 1D convolutional GAN with reshaping (flattening multi-channel sequences into vectors) were identified as the best-performing models. Both achieved an F1 score of 0.86 and demonstrated higher computational efficiency than classical ML models. Expanding the scope beyond context-specific models, the third study addressed the scalability and generalizability of DL models. Given the need for large and heterogeneous datasets, scalable DL models were studied in the context of energy forecasting tasks through the lens of foundation models (FMs)—large models trained on such datasets. A comprehensive literature synthesis was first conducted on Time Series Foundation Models (TSFMs), which represent promising alternatives to specialist energy forecasting models. The synthesis covered general-purpose TSFMs, including native TSFMs (trained exclusively on TS data) and large language model (LLM)-adapted variants. Using data from more than 1,000 buildings, a comprehensive comparative study was then conducted and showed that GEM (a dedicated FM trained solely on the large energy dataset) and a representative TSFM fine-tuned on the large energy dataset (TimesFM2.0-E) consistently outperformed baseline DL models trained on individual buildings, with zero-shot mean absolute error (MAE) improvements ranging from 16.3% to 7.3% across 24h to 168h horizons. Building-level fine-tuning of these two FMs further increased gains to 17.8%–8.5%, with adaptation times reduced to 11–35 seconds, compared to 301–963 seconds for baselines. Although general-purpose TSFMs exhibited weaker zero-shot performance, all of their building-level fine-tuned variants outperformed baselines. These findings demonstrate the effectiveness of TSFMs—particularly energy-pretrained or domain-adapted models—as scalable and high-performing solutions for building energy forecasting.
Together, these studies offer insights into achieving reliable and scalable deep learning in infrastructure operational management, advancing the use of generative artificial intelligence and foundation models in real-world, data-driven built environment management.
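    As a quick consistency check on the reported detection scores, the balanced F1 follows from the stated recall and precision using the standard harmonic-mean definition (nothing here is specific to DEGAN):

```latex
% Harmonic mean of precision (P) and recall (R); values as reported above.
F_1 = \frac{2PR}{P + R} = \frac{2 \times 0.86 \times 0.80}{0.86 + 0.80} \approx 0.83
```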
  • Engineering the future of Hybrid Materials with Coarse-Grained Molecular Dynamics
    Joshi, Soumil Yogesh (Virginia Tech, 2025-09-19)
    The rapid evolution of hybrid materials has opened new frontiers in materials science by combining the distinct properties of polymers, nanoparticles, ceramics, metal oxides, and biomolecules to create adaptable, high-performance systems tailored to complex applications. These materials are critical in areas such as energy storage, environmental sustainability, and biomedical engineering, where conventional single-component materials often fall short of the required versatility and performance. A key tool for understanding and designing these complex materials is coarse-grained (CG) molecular dynamics (MD) which allows researchers to capture molecular-level interactions while efficiently modeling larger, structurally diverse systems over longer timescales, bridging the gap between atomic-level insights and macroscopic material properties. This work advances CG MD methodologies by developing chemically mapped, transferable models tailored to a broad library of polymers, lipids, and biomolecules. These models provide predictive insights into the behavior of hybrid systems, focusing on critical transitions and structural responses to external stimuli. For example, the simulation of thermosensitive polymers such as poly(N-isopropylacrylamide) (PNIPAM) reveals key mechanisms underlying coil-to-globule transitions and self-assembly behaviors, which are essential for applications in drug delivery and responsive materials. Our studies further address experimentally observed, complex behaviors in hybrid materials, such as carbohydrate-protein binding dynamics in glycopolymers, by using CG MD simulations to decode structure-function relationships that govern molecular recognition and interaction efficiency. These simulations showcase the significance of factors such as glycan density in enhancing solvent accessibility, which directly impacts binding affinities and bioactivity. In parallel, CG model development efforts are directed at physically representing lipid bilayer systems, facilitating the simulation of cellular interfaces and hybrid membrane systems. By optimizing parameters with advanced techniques like particle swarm optimization (PSO), these CG models replicate key experimental properties such as bilayer thickness, area per lipid, and bending rigidity, ensuring transferability across diverse conditions. Collectively, this dissertation showcases the strategic development and application of CG MD models for hybrid materials, highlighting the ability of molecular simulations to guide the rational design of materials with tailored functionalities. The methodologies established here lay the groundwork for next-generation CG MD studies, aiming to bridge experimental findings with computational insights to drive innovation in hybrid material science.
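    As an illustration of the parameter-optimization step mentioned above, here is a minimal, generic particle swarm optimization loop. The objective function is a stand-in (in CG force-field calibration it would score agreement with experimental targets such as bilayer thickness or area per lipid), and nothing below reflects the dissertation's actual implementation.

```python
# Generic particle swarm optimization (PSO) sketch; the objective is a placeholder.
import numpy as np

def objective(x):
    # Stand-in objective (sphere function); a CG calibration would instead measure
    # the mismatch between simulated and experimental bilayer properties.
    return np.sum(x**2, axis=-1)

rng = np.random.default_rng(1)
n_particles, dim = 30, 4
pos = rng.uniform(-1.0, 1.0, (n_particles, dim))   # candidate parameter sets
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)]

w, c1, c2 = 0.7, 1.5, 1.5                          # inertia / acceleration weights
for _ in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print("best parameters:", gbest, "objective:", pbest_val.min())
```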
  • Precision Adjuvant Design Enabling Tailored Nanoparticle Immunization Platforms for Oxycodone and Other Substance Abuse Vaccines
    Bian, Yuanzhi (Virginia Tech, 2025-09-18)
    Substance use disorders (SUDs) constitute a growing global health burden, with opioids responsible for nearly two‑thirds of SUD‑related fatalities in the United States. Current pharmacotherapies for opioid use disorder are limited by adherence challenges, diversion risks, and inconsistent efficacy. In contrast, vaccines that elicit antibodies capable of sequestering drug molecules in the peripheral circulation provide a non‑addictive, durable alternative. However, their success hinges on judicious adjuvant selection and optimized antigen delivery. This dissertation integrates systematic adjuvant characterization with nanotechnology‑enabled delivery systems to advance a precision‑vaccinology framework targeting oxycodone and related opioids. A comprehensive analysis of preclinical and clinical literature demonstrated that adjuvant efficacy is drug‑specific. No single formulation suffices across nicotine, stimulant, and opioid antigens. Structural compatibility between the adjuvant and the delivery system, together with synergistic adjuvant combinations, consistently augments vaccine efficacy, underscoring the importance of empirical formulation tailoring and rational adjuvant design. Building on these insights, in vitro cytometric analyses revealed that interferon‑γ (IFN-γ) efficiently stimulated dendritic cells and, when combined with toll‑like receptor (TLR) 3 or 7/8 agonists, elevated dendritic cell activation from 33 to nearly 60 percent, indicating complementary signaling pathways that potentiated innate immunity. These findings were translated into a modular lipid‑poly(lactic-co-glycolic) acid (PLGA) hybrid nanoparticle (hNP)-based vaccine against oxycodone. Relative to a conventional hapten-carrier conjugate vaccine formulated with aluminum hydroxide, the lipid-PLGA hNP formulation elicited higher anti‑oxycodone antibody titers, enhanced peripheral drug sequestration, and markedly lowered brain oxycodone levels in mice. Subsequent encapsulation of IFN‑γ in combination with the TLR agonists polyinosinic–polycytidylic acid (TLR3 agonist) or resiquimod (TLR7/8 agonist) within the same hNP scaffold yielded the highest serum antibody levels, improved antibody affinity, and the greatest attenuation of brain drug exposure following oxycodone challenge, illustrating the translational value of co‑localized and complementary adjuvants. Collectively, these results demonstrated that rational adjuvant design integrated with scalable nanotechnology can overcome the intrinsically poor immunogenicity of small-molecule substances and markedly enhance vaccine performance. This work provides a strategic and technological foundation for next‑generation immunotherapies against oxycodone and other psychoactive substances, complementing existing pharmacological and behavioral interventions to address the ongoing opioid crisis.
  • Essays in Empirical Asset Pricing
    Easterwood, Sara Bernadette (Virginia Tech, 2025-09-18)
    This dissertation explores three topics in empirical asset pricing, with a focus on cross-sectional anomalies, factor model evaluation, and the role of information infrastructure in shaping cross-sectional returns and institutional investor demand. In the first chapter, co-authored with colleagues, I show that merger announcement returns account for virtually all of the measured size premium. An empirical proxy for ex ante takeover exposure positively and robustly relates to cross-sectional expected returns. The relation between size and expected returns becomes positive or insignificant, rather than negative, conditional on this takeover characteristic. Asset pricing models that include a factor based on the takeover characteristic outperform otherwise similar models that include the conventional size factor. We conclude that the takeover factor should replace the conventional size factor in benchmark asset pricing models. The second chapter, co-authored with a colleague, critiques the prevailing methods used to evaluate asset pricing models. Many popular models incorporate factors motivated by previously documented cross-sectional return patterns. We argue that popular "out-of-sample" methods for evaluating and comparing such models do not adequately protect against biases driven by the data-instigated nature of the models. Empirically, we show that maximum Sharpe ratio estimates fall substantially for many models when computed using validation samples designed to mitigate data-instigated model bias. Our lower estimates are easier to reconcile with leading risk-based economic models. However, it is also less clear to what extent popular multifactor models actually outperform the classic capital asset pricing model. The final chapter investigates the role of financial data vendors in capital markets. These vendors collect, aggregate, and process data on clients' behalf. I show that data vendors' coverage decisions affect institutional investor demand. The focal vendor in this study, Standard and Poor's ('S&P') Compustat database, provides subscribers with decades of 10-K and 10-Q data; however, it does not cover every public firm in every period. I show that institutional investment in firms with no Compustat coverage is over 36% below its unconditional mean, even controlling for other firm characteristics. A novel quasi-natural experiment establishes a plausibly causal connection: a technology improvement at S&P in the 1990s causes a discrete reduction in missing data. This change in data coverage is followed by a significant increase in institutional investment for treated firms relative to control firms. I then show that missing Compustat data is associated with lower informational efficiency of equity prices. I conclude that data vendors' actions can exert a material influence on capital markets because they affect firms' access to institutional capital.
  • Robot See, Robot Do: On the Development of Robust and Adaptive Imitation Learning for Robots
    Mehta, Shaunak Abhijit (Virginia Tech, 2025-09-18)
    As robots transition from isolated industrial settings to working in close proximity to humans in dynamic environments, their ability to learn and adapt to human feedback and unseen circumstances becomes crucial. Imitation learning offers a promising paradigm for robots to learn complex tasks by mimicking human behavior. However, traditional imitation learning approaches face key challenges in integrating diverse feedback types, managing noisy and inconsistent inputs, and maintaining stability in learning. In this thesis, we develop imitation learning approaches that advance the capabilities of robots and enable them to efficiently learn from humans and adapt to unseen data in diverse environments. This research is structured around four key contributions. First, we consider the scenario where a human is readily available to provide high-quality feedback to the robot. We develop a learning algorithm to enable robots to learn from diverse sources of optimal human feedback: demonstrations, corrections, and preferences. Demonstrations provide high-level task overviews, corrections fine-tune specific motions, and preferences rank robot behaviors for task improvement. By incorporating these active and passive feedback sources under a unified reward learning framework, we enable robots to infer task objectives more effectively and optimize their trajectories using constrained optimization techniques. Second, we explore scenarios where the human feedback is noisy or biased due to task complexity or physical constraints. We model the robot's learning rule as a dynamical system and apply Lyapunov stability analysis to derive conditions for convergence. Leveraging these conditions, we modify the robot's learning rule to expand the basins of attraction around the possible tasks (equilibrium points) in the environment. This approach enables the robot to infer the correct task representations from a wider range of human inputs, making the learning robust to suboptimal feedback without destabilizing the robot behavior. Next, we consider imitation learning settings where a human is not available to provide additional feedback. In such scenarios, imitation learning algorithms are often prone to covariate shift when they encounter data not seen during training. To tackle this challenge, we develop Stable Behavior Cloning (Stable-BC), a stability-driven imitation learning algorithm. This algorithm ensures that robots maintain reliable performance by encouraging policy stability around demonstrated behaviors without the need for additional training data or complex reinforcement learning methods. Finally, we look at the problem of imitation learning from the users' perspective and aim to reduce the time and effort required to teach the robot. We propose L2D2, a sketching interface and imitation learning algorithm where humans can provide demonstrations by drawing the task. L2D2 leverages vision-language segmentation to autonomously vary object locations and generates synthetic images of the environment for the human to draw upon. By collecting a few physical demonstrations from the users, L2D2 then grounds these diverse 2D drawings in the real world. This approach reduces the time and effort required to teach the robots by enabling the users to rapidly provide a large set of diverse demonstrations. The findings from this research highlight the importance of adaptability and stability when robots and autonomous agents work around and interact with humans in diverse environments.
This research contributes to the broader field of robot learning by offering scalable, adaptable, and user-friendly solutions for imitation learning and human-robot interaction, paving the way for more intuitive and robust robotic systems in human environments.
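    For context, the sketch below shows the standard behavior cloning baseline that covariate shift affects: fitting a policy to (state, action) demonstration pairs by regression. It is a minimal, generic sketch with placeholder dimensions and data, not Stable-BC or the thesis's implementation.

```python
# Generic behavior cloning (BC) sketch in PyTorch; data and dimensions are placeholders.
import torch
import torch.nn as nn

state_dim, action_dim = 10, 4
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Placeholder demonstrations; in practice these come from human demonstrations.
states = torch.randn(512, state_dim)
actions = torch.randn(512, action_dim)

for epoch in range(100):
    pred = policy(states)
    loss = nn.functional.mse_loss(pred, actions)   # imitate demonstrated actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final BC loss:", loss.item())
```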
  • Discovering LC3-interacting region (LIR) motifs in hemorrhagic fever viruses: Implications for host autophagy and viral replication.
    Petraccione, Kaylee Diana (Virginia Tech, 2025-09-17)
    Hemorrhagic fever viruses (HFVs) pose significant global health and economic burdens, yet limited FDA-approved therapeutics exist to combat their spread. Determining alterations in host processes during infection can enhance our understanding of viral replication and lead to potential host-based therapeutic targets. Rift Valley fever virus (RVFV) is an HFV that causes significant disease in humans and livestock and served as the initial model HFV for the present study. The RVFV nonstructural small (NSs) protein is the main virulence factor of RVFV, and we discovered four novel LC3-interacting region (LIR) motifs within NSs (NSs1-4), indicating that NSs interacts with LC3, a key host autophagy protein. Autophagy is a cellular process that can act anti-virally by enhancing immune responses and promoting viral degradation, or in some cases, be pro-viral, facilitating viral replication. LC3 proteins are critical modulators of autophagy and autophagosome maturation. We hypothesize that HFV proteins interact with LC3-family members via LIR motifs to modulate the host autophagy pathway, which represents a target for therapeutic development. The NSs protein of RVFV was selected for analysis because it is a virulence factor and contains an intrinsically disordered region, a type of region that is a hot spot for LIR motifs. Isothermal titration calorimetry, X-ray crystallography, co-immunoprecipitation, and co-localization experiments confirmed that the C-terminal LIR motif (NSs4) interacts with all six human LC3 proteins. We identified phenylalanine 261 (F261) in NSs4 as essential for the LC3 interaction, nuclear retention, and autophagy inhibition in RVFV-infected cells, highlighting how RVFV inhibits autophagy via the NSs4 LIR motif. Mechanistically, LC3 is located in the nucleus under cellular homeostasis, Sirt1 deacetylates LC3, and DOR then interacts with LC3 to transport it to the cytosol, where it interacts with various autophagy proteins to form autophagosomes. We discovered that RVFV NSs interacts with Sirt1, competes with DOR for binding to LC3, and retains unacetylated LC3 in the nucleus, thereby inhibiting autophagy and enhancing viral pathogenesis in a mouse model. Building on our success with studying an LIR in RVFV NSs, an AI/ML-driven LIR discovery pipeline was developed to identify and analyze LIR motifs in all HFV proteins. The LIR discovery pipeline identified 42 putative LIR motifs in 166 proteins from 22 HFVs using the iLIR and ELM databases. This list was further narrowed down to 17 HFV proteins that contained a LIR motif in a predicted unstructured region and showed a favorable interaction with LC3 via AlphaFold3 and FoldX analysis. Our LIR discovery pipeline identified a highly favorable interaction between LC3 and the Marburg virus nucleoprotein (MARV NP). Utilizing isothermal titration calorimetry and co-immunoprecipitation, we confirmed the interaction of MARV NP and LC3 via a LIR motif and determined that the interaction modulates autophagy in cellulo, demonstrating that our results translate across HFV families. Given the threat of emerging and reemerging HFVs, this research is crucial for public health, exploring LIR motifs as therapeutic targets to disrupt viral replication and mitigate future outbreaks.
  • Developing Play-Based Learning Experiences with Integrated Technology: Research-Informed Approaches for Face-to-Face and Blended Early Childhood Classrooms
    Wang, Xuqing (Virginia Tech, 2025-09-15)
    This study offers an integrative review of the literature focused on how digital technologies are purposefully integrated to enhance play-based learning experiences for children aged 3 to 8 in both traditional and blended early childhood classroom settings (Marsh et al., 2016). This study draws upon both the Technological Pedagogical Content Knowledge (TPACK) model (Mishra and Koehler, 2006) and the Digital Play Framework (DPF) (Bird and Edwards, 2015) to analyze peer-reviewed theoretical and empirical literature, aiming to explore three primary research questions: (1) What technology integration strategies for play-based early childhood classrooms are supported in the research literature? (2) What learning theories align with those strategies? (3) What evidence-based recommendations can be synthesized into practical strategies for educators and instructional designers? Findings are organized thematically to reveal key strategies including child-centered digital play, adaptive learning technologies, multimedia content delivery, immersive environments, and scaffolded support systems. The research aligns these strategies with major learning theories such as constructivist, cognitive developmental, social constructivist, psychosocial, and play-based learning theories. Six evidence-based recommendation categories emerge from the analysis: strategic integration approaches, child-centered pedagogies, intentional instructional design, professional development, developmentally appropriate practices, and assessment frameworks that maintain the centrality of play while leveraging technology to enhance rather than replace hands-on, relational, and exploratory learning. The study highlights both the promise and challenges of technology use in ECE, emphasizing the critical importance of pedagogical intentionality, broad access to digital tools, and comprehensive professional development. The conclusions demonstrate that effective technology integration requires intentional pedagogical planning, deep understanding of developmental theory, and careful attention to maintaining play-based learning principles. This dissertation contributes a practical, theory-informed roadmap offering actionable guidance for educators, instructional designers, researchers, and policymakers aiming to enhance early learning through thoughtful, play-driven digital integration.
  • Natural Language Processing for Behavior Change and Health Improvement
    Ahmadi, Sareh (Virginia Tech, 2025-09-11)
    Maladaptive health behaviors contribute to lifestyle-related diseases such as obesity and type 2 diabetes. One key factor driving these behaviors is delay discounting, the tendency to prioritize immediate rewards over delayed ones. Episodic Future Thinking (EFT) is an intervention designed to reduce delay discounting by engaging individuals in vivid mental simulations of future events, thereby influencing decision-making and emotional well-being. In EFT, participants generate descriptions of personally significant future events. Research studies have shown EFT's effectiveness in promoting healthier behaviors, including improved exercise and medication adherence. However, the mechanisms underlying EFT and the factors influencing its efficacy remain unclear. With advancements in machine learning (ML) and natural language processing (NLP), new techniques can enhance EFT analysis and user experience. This study investigates EFT cue texts to identify characteristics that make them effective, and explores how a chatbot can assist users in generating impactful cues. We use language models and fine-tune pre-trained models to classify EFT cue texts, facilitating further analysis of their features. Additionally, we leverage instruction-tuned Large Language Models (LLMs) to classify cue texts and address annotation variability. We explore zero-shot and few-shot prompt tuning, demonstrating that a small number of high-quality labeled samples significantly improves classification performance. We present the design, implementation, and evaluation of an AI-powered chatbot using the GPT-4 LLM that generates EFT cue texts for individuals with lifestyle-related conditions. We evaluated the chatbot using both quantitative and qualitative approaches. This included automated assessment by prompting language models as well as user studies incorporating usability assessments and qualitative evaluations. These methods demonstrated the chatbot's effectiveness in generating personalized EFT cues.
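    To make the few-shot setup concrete, the sketch below assembles a labeled-exemplar classification prompt for cue texts. The label set, exemplars, and wording are invented for illustration and are not the study's actual prompts, data, or annotation scheme.

```python
# Hypothetical few-shot prompt builder for EFT cue-text classification.
# Exemplars and labels are illustrative placeholders only.
FEW_SHOT_EXAMPLES = [
    ("In two years I will walk my daughter to her first day of school.", "effective"),
    ("Something good might happen to me eventually.", "ineffective"),
]

def build_prompt(cue_text: str) -> str:
    lines = ["Classify each episodic future thinking cue as 'effective' or 'ineffective'.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Cue: {text}\nLabel: {label}\n")
    lines.append(f"Cue: {cue_text}\nLabel:")
    return "\n".join(lines)

# The resulting string would be sent to an instruction-tuned LLM for completion.
print(build_prompt("Next summer I will run a 5K with my brother in our hometown park."))
```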
  • Biomechanical Responses and Functional Outcomes in Large Animal and Human Surrogate Models of Primary Blast Injury
    Nelson, Allison Julianne (Virginia Tech, 2025-09-10)
    The growing use of explosive weapons over the past century has led to a rise in blast traumatic brain injuries (bTBIs), particularly among military personnel. More than 515,000 servicemembers have been diagnosed with a traumatic brain injury (TBI) since the year 2000, and blast was reported to be the most common cause of TBI in modern U.S. military conflicts, comprising 33.1% of reported cases (Lindquist et al., 2017; Traumatic Brain Injury Center of Excellence (TBICoE), 2025). Conflicts between Ukraine and Russia continue to highlight the enduring and serious risk of bTBI affecting both military forces and civilian populations (Lawry et al., 2025). These injuries are frequently associated with a wide range of physical, cognitive, behavioral, and psychological symptoms, such as headaches, dizziness, impulsivity, sleep disturbances, memory deficits, anxiety, depression, and mood alterations, which can significantly interfere with daily activities. However, the injury mechanisms causing these functional impairments remain poorly understood. The high incidence and impact of bTBI have brought increased attention to the effectiveness of combat helmets in mitigating blast injury, as current infantry combat helmets were not designed for protection against primary blast. Previous computational and experimental findings have suggested that during a blast exposure, the shock wave infiltrates the gap between the head and helmet and generates regions of increased pressure that are orientation-dependent (Mott et al., 2008; Thomas and Johnson, 2024). The overall objective of this study was to improve the understanding of bTBI biomechanics by evaluating the protective effectiveness of a combat helmet in mitigating blast loading and associated injury outcomes. The influence of blast orientation on functional outcomes was also evaluated at an acute timepoint. Using an instrumented human surrogate model, the effect of blast intensity, orientation, and the presence of a combat helmet on blast loading was examined. In a frontal blast orientation, peak pressures were shown to be reduced at the forehead and front of the head but increased at the back of the head with the combat helmet. When the headform was rotated 45 degrees about the transverse axis, peak pressures and total impulses were notably increased at all measured locations on the head with the addition of the combat helmet, highlighting the need for protective equipment that prevents this increased loading on the head surface. The effect of this increased pressure on injury response was evaluated in a clinically relevant pig model of bTBI. Findings from this study suggested that the blast exposure group exhibited greater motivation and interest in rewards than the sham group, which could be an indication of impulsive or risk-taking behaviors. The effects of blast orientation on affective behavior and memory and cognition were also evaluated in a translational preclinical model. The frontal blast group expressed primarily increased motivation and interest in rewards compared to sham, which is suggestive of impulsivity, while the lateral blast group primarily exhibited decreased approach behaviors relative to sham, which could be indicative of anhedonia, or a reduced ability to experience pleasure.
This work demonstrated that orientation, blast intensity, and the presence of the combat helmet each influenced the blast dynamics and loading on the surface of the head and offered preliminary, yet meaningful, insights into the effects of orientation and the combat helmet on injury outcomes after blast exposure.
  • Targeted Priority Mechanisms in Organ Transplantation
    Wang, Ruochen (Virginia Tech, 2025-09-08)
    The persistent shortage of transplantable organs, compounded by high rates of organ underutilization, necessitates innovative allocation mechanisms. This dissertation develops and analyzes targeted priority mechanisms, voluntary incentive-based programs designed to enhance access for disadvantaged patient groups and improve organ-recipient matching. Using a rigorous queueing-theoretic framework, I characterize patients' equilibrium participation strategies, identifying conditions under which no-, full-, and mixed-participation equilibria emerge. I further establish the necessary and sufficient conditions for their existence and uniqueness, highlighting how careful mechanism design can align individual incentives with socially optimal outcomes. The study extends the analysis to class-separating allocations, demonstrating the feasibility of equilibria that improve social welfare while safeguarding non-participating patients' access to high-quality organs. A clinically detailed simulation of the U.S. kidney allocation system, focusing on elderly patients, illustrates the potential benefits: a targeted Kidney Donor Profile Index (KDPI) threshold of 84% yields approximately 220 additional annual transplants, reduces the waiting list by more than 450 patients, and prevents over 60 pre-transplant deaths annually, with minimal impact on graft survival rates. Overall, the findings provide both theoretical and practical guidance for the design of efficient, implementable allocation mechanisms.
  • Hardware Memory Compression for Large-scale Systems
    Laghari, Muhammad (Virginia Tech, 2025-09-08)
    Memory has become an increasingly costly resource for both users and service providers, while also contributing significantly to global energy consumption and environmental impact. Memory compression offers a promising solution to mitigate these costs, as memory values exhibit an average compression ratio of up to 3x. Recent work has proposed enhancing the CPU's memory controller to compress memory values transparently, thereby increasing effective memory capacity—an approach referred to as hardware memory compression. This dissertation focuses on hardware memory compression for large-scale systems. In general, large-scale systems have unique properties: (1) applications place greater demands on address translation, (2) many users execute workloads requesting different amounts of memory concurrently, and (3) these systems have stricter reliability requirements. These properties introduce new challenges when implementing hardware memory compression. This dissertation explores hardware-software co-design to address these challenges. To reduce address translation overhead, we propose selectively compressing cold memory pages while keeping hot pages uncompressed. To enable precise memory allocation, we introduce a novel memory allocation mechanism coupled with a dedicated interface. Finally, this dissertation proposes a novel scheduling scheme that avoids relying on existing speculation-based scheduling, thereby improving system reliability. Collectively, these works aim to make hardware memory compression deployable in large-scale systems.
  • Advanced Robust Statistical Learning Methods with Application in Healthcare and Manufacturing
    Chen, Yixin (Virginia Tech, 2025-09-08)
    This dissertation presents the development and validation of advanced robust statistical methods tailored for applications in healthcare and manufacturing. This work consists of three main parts, each addressing unique challenges and demonstrating the necessity of robust algorithms in statistical learning. In the first part, motivated by the need to understand the relationship between brain networks and phenotypes of interest in small-scale neuroimaging studies with limited sample size, I developed a flow-based generative model termed Disentangled Adversarial Flow or DAF for short, which leverages large-scale multi-source datasets to improve prediction accuracy in neuroimaging studies with smaller sample sizes. A bidirectional-generative architecture and a kernel-based dependence measure are utilized to generate domain-invariant brain connectomes. An ensemble-based DAF regression framework is proposed to integrate information from multiple source datasets to improve prediction on the target dataset. This framework ensures reliable predictions with limited sample sizes by borrowing information from other data sources despite the heterogeneity across different sources, exemplifying robustness in statistical learning. Similar challenges arise in the manufacturing context, where variations in product designs, process parameters, and sensor configurations generate diverse data distributions. This poses challenges for developing machine learning pipelines that can consistently achieve high performance under varying conditions. Motivated by this, the second part of the dissertation introduces a weighted ensemble mechanism based on the Bayesian Latent Space Model recommender system that optimizes sparse ensemble weights while incorporating uncertainty quantification. This method allows automatically selecting and adapting optimal pipelines, which helps data-driven decision-making in industrial settings. By automating the selection and adaptation of optimal machine learning pipelines, this method demonstrates robustness by maintaining high performance in the face of changing industrial data conditions. Distribution shifts are also common in medical records, where heterogeneity across different individuals hinders automated diagnosis for patients. A robust algorithm could generalize across different patients and lead to more accurate personalized patient care. Inspired by this, the third part proposes a latent factor model based on an Interleaved-window Transformer to characterize inter-subject heterogeneity, focusing on heterogeneous physiological time series data derived from Electronic Health Records, electrocardiograms, and electroencephalograms. Different factors in the latent factor model represent different characteristics of the time series. These latent factors are linked to the response through subject-specific weights, which capture varying contributions to the response across subjects. Contrastive learning is utilized to estimate the weights for new subjects not seen in the training phase. This part underlines the theme of robustness by developing a model that adapts to individual differences, ensuring that the statistical learning methods are effective across diverse patient data. This dissertation demonstrates the value of robustness as a unifying theme in advancing statistical learning methodologies and their applications.
  • Towards Interpretable AI for Longitudinal Disease Monitoring and Clinical Reporting from Chest X-Rays
    Madu, Amarachi Blessing (Virginia Tech, 2025-09-08)
    Chest radiography (CXR) plays a pivotal role in diagnostic imaging for monitoring disease progression and evaluating treatment effectiveness. Despite notable advancements in machine learning, disease progression monitoring remains relatively underexplored. Challenges arise from the specificity of biomarkers that detect change, which vary in their mechanisms, manifestations, and progression rates across diseases, alongside individual variability in response to illness and the complexity of incorporating multimodal longitudinal data. Monitoring disease progression in chest imaging involves intricate tasks, such as anatomical motion estimation and image registration, which require the spatial alignment of sequential X-rays and modeling temporal dynamics. This thesis addresses these challenges by harnessing artificial intelligence techniques for effective disease progression monitoring using non-co-registered sequential CXRs. We investigate three research directions: 1) learning a disease progression model with local and global information, 2) explainable hierarchical learnable differences for disease progression monitoring, and 3) retrieval-augmented longitudinal disease report generation. The overarching goal of this thesis is to develop models that not only accurately track disease progression but also provide interpretable insights about the patient's condition. The three directions in this thesis form a unified strategy for building interpretable, temporally aware AI models that enhance disease monitoring and reporting. Together, these contributions advance early detection and informed treatment decisions, with the potential to significantly improve patient outcomes at scale.
  • Student Changes in Growth Mindset, Social Awareness, and Supportive Relationship Perception in a Trauma-Sensitive School During the COVID-19 Pandemic
    Smith, Brandy Leigh (Virginia Tech, 2025-09-05)
    Trauma-sensitive practices in schools are necessary to address the widespread prevalence of childhood adversity and its impact on cognitive, social, and emotional development. Trauma-sensitive schools integrate social and emotional learning to create safe, supportive, and equitable learning environments for all students. With the COVID-19 pandemic amplifying existing experiences of adversity and potentially creating new experiences of trauma, schools face the challenge of strengthening existing trauma-sensitive practices to build resilience and mitigate the impact of trauma on students. Consideration of student perceptions is one aspect of enhancing trauma-sensitive educational practice delivery. This study examines changes in key elements of social and emotional learning (i.e., student growth mindset, social awareness, and perceptions of supportive relationships) at West Elementary, a trauma-sensitive school, during Fall 2020, Spring 2021, and Fall 2021, using data from the Panorama Social-Emotional Learning survey. This study also explores changes by special education status, English language learner (ELL) status, gender, race/ethnicity, and virtual attendance status. Analyses found that gender may influence students' development in social awareness, while other demographic factors, such as receiving virtual instruction and special education services, did not show a significant impact. These findings provide insight into improving trauma-sensitive practices and social and emotional learning in education. Keywords: trauma-sensitive schools, COVID-19 pandemic, resilience, social and emotional learning,
  • Design, Optimization, and Integration of a SiC-Based Traction Inverter with Enhanced Current Sharing for Paralleled Discrete Devices
    Chang, Che-Wei (Virginia Tech, 2025-09-04)
    Electric vehicles (EVs) with hybrid or full electric traction drives have emerged as leading contenders for reducing exhaust emissions. In the traction drive system, dc/ac inverters that spin the motor need to deliver high power, high efficiency, and high density. According to the U.S. Department of Energy (DOE) roadmap, the 2025 targets for traction inverters include achieving a power density of 100 kW/L, reducing inverter cost to $2.7/kW, reaching an efficiency of 98%, and supporting a voltage of 800 V. To meet these aggressive targets, this dissertation first investigates one of the critical challenges in high-power inverters: current sharing among paralleled devices. In Chapter 2, the current-sharing mechanisms are comprehensively analyzed, and mathematical models are developed to describe both dynamic and static sharing. These models enable clear identification of key impact parameters, providing practical layout guidelines for designers. Building upon the current sharing analysis, Chapter 3 explores passive current-balancing methods to improve both static and dynamic current sharing. The layout for paralleled devices is first optimized by categorizing and comparing different layout types with a focus on minimizing the parasitic loop inductance L_loop and the overlapping capacitance C. Then, a novel distributed-block (DB) layout concept is proposed to improve current sharing by mitigating asymmetric parasitics among paralleled traces. Furthermore, the differential-mode-choke (DMC) gate driver is introduced to enhance the dynamic current sharing without impairing power loop and switching performance. While passive methods are effective for layout-induced current imbalance, they remain limited when device mismatch is significant. To address this, Chapter 4 proposes an active gate driver (AGD) solution. A low-cost and compact di/dt-RC current sensing technique is introduced, along with a novel RK sensing structure to improve sensing accuracy. The proposed RK sensing structure requires only three tiny components (2 resistors and 1 capacitor) per MOSFET and can be easily scaled for more paralleled devices, making it highly advantageous for industrial applications. Leveraging this sensing technique, the AGD is developed to balance dynamic currents among paralleled devices, offering near-perfect balancing performance regardless of the cause of imbalance. Beyond device-level current balancing, achieving high power density for traction inverters remains challenging in both academia and commercial EVs, and requires system-level circuit and mechanical integration. Chapter 5 proposes a systematic "single-board" integration strategy, and an all-in-one half-bridge (HB) printed circuit board (PCB) is built to demonstrate the proposed strategy. This design not only simplifies integration but also eliminates the constraints of conventional "sandwich" structures, achieving a power density of 101.7 kW/L. A comprehensive experimental evaluation of the designed all-in-one HB PCB is also performed. Finally, Chapter 6 addresses the thermal challenges of high-density inverters, which become increasingly critical as both power and density increase. A systematic thermal design methodology is proposed for high-density inverters and is validated using two prototypes: a 200 kW multi-level inverter in a harsh high-altitude environment and a 200 kW traction inverter using the all-in-one HB PCB.
Results reveal that conventional sandwich structures create stagnating air spaces that degrade cooling performance and generate localized hot spots. By contrast, the proposed single-board approach eliminates these thermal bottlenecks and enables robust heat dissipation.
  • The Effect of Sonification in Informal Learning: From Fairy Tales to Art Galleries
    Lee, Yeaji (Virginia Tech, 2025-09-04)
    As technology continues to advance in the present era, lifelong learning has become essential across all stages of life. Learning occurs not only in formal institutional settings but also in informal contexts embedded in daily life. Within these informal learning environments, sonification can play a pivotal role by facilitating learning through auditory interaction, interpretation, and engagement. However, the effectiveness of sonification can vary based on contextual factors and individual capabilities. To investigate the potential of sonification in informal learning, this dissertation presents four empirical studies across two domains: human-robot interaction (HRI) and art appreciation in gallery settings. The first study examined how emotion-reflecting sonification, compared to emotion-mitigating sonification, influences the experience of listening to fairy tales, focusing on participants' enjoyment, empathy, and immersion. The second study compared sentiment-based sonification with classical music in robot storytelling, evaluating their impact on comprehension, emotional interpretation, and engagement. The third study investigated the role of Augmented Audio in an art gallery environment, assessing its influence on participants' emotional responses and engagement during art appreciation. The fourth study explored how Augmented Audio and Augmented Visual elements influence comprehension, emotional reaction, and sustained engagement with artworks. Collectively, this dissertation offers insights into how sonification can be effectively applied across varied informal learning contexts, from narrative storytelling to visual art appreciation, to foster immersive, emotionally resonant, and cognitively enriching educational experiences.
  • Age and Family History of Alzheimer's Disease and Related Dementias as Predictors of Locus Coeruleus and Salience Network Connectivity
    Seago, Elayna Rose (Virginia Tech, 2025-09-04)
    The locus coeruleus (LC) is a nucleus in the brainstem that produces the majority of the norepinephrine in the brain. The LC interacts with the salience network (SN), specifically the dorsal anterior cingulate cortex (dACC) and the insula, to coordinate and direct attention. The LC is a site of early Alzheimer's pathology and also experiences normal age-related declines in functioning. In this study, the associations between age, family history of Alzheimer's Disease, and functional connectivity between the LC and key nodes of the SN were explored in a sample of 110 older and younger adults while they completed two different attentionally demanding tasks. Additionally, the relationship between age, family history of Alzheimer's Disease, and performance on the cognitive tasks was examined. The results of this study indicate that age and family history of Alzheimer's Disease and Related Dementias (ADRD) interact to influence the level of LC-dACC functional connectivity, but not LC-insula functional connectivity, in the Attention Network Task (ANT), but that neither LC-dACC nor LC-insula functional connectivity was predicted by age and family history of ADRD during the Place Discrimination Task (PDT). Furthermore, age predicted task performance on the PDT, but not on the ANT, and family history of ADRD was not associated with performance on either task. These findings suggest that the relationship between family history of ADRD and LC-SN functional connectivity varies based on the region of the salience network examined and the age of the individual.