All Faculty Deposits
The "All Faculty Deposits" collection contains works deposited by faculty and their appointed delegates via the Elements (EFARs) system. For help with Elements, see the Frequently Asked Questions on the Provost's website. In general, an item can be deposited only if it is a scholarly article covered by Virginia Tech's open access policy, is openly licensed or in the public domain, is permitted to be posted online under the journal/publisher policy, or the depositor owns the copyright. See Right to Deposit on the VTechWorks Help page. If you have questions, email us at vtechworks@vt.edu.
Recent Submissions
- Applying the Midas Touch of Reproducibility to High-Performance Computing. Minor, A. C.; Feng, Wu-chun (IEEE, 2022-09-19). With the exponentially improving serial performance of CPUs from the 1980s and 1990s slowing to a standstill by the 2010s, the high-performance computing (HPC) community has seen parallel computing become ubiquitous, which, in turn, has led to a proliferation of parallel programming models, including CUDA, OpenACC, OpenCL, OpenMP, and SYCL. This diversity in hardware platforms and programming models has forced application users to port their codes from one hardware platform to another (e.g., CUDA on NVIDIA GPU to HIP or OpenCL on AMD GPU) and demonstrate reproducibility via ad hoc testing. To more rigorously ensure reproducibility between codes, we propose Midas, a system that ensures the results of the original code match the results of the ported code by leveraging the power of snapshots to capture the state of a system before and after the execution of a kernel.
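The snapshot-and-compare idea behind Midas can be illustrated with a minimal sketch (all names here are illustrative, not Midas's actual API): hash the buffers a kernel touches before and after execution, then check that the original and ported kernels leave identical state.

```python
import hashlib
import numpy as np

def snapshot(*buffers):
    """Hash the contents of a set of buffers: a stand-in for Midas-style
    snapshots of system state before/after a kernel launch."""
    h = hashlib.sha256()
    for b in buffers:
        h.update(np.ascontiguousarray(b).tobytes())
    return h.hexdigest()

def saxpy_original(a, x, y):      # the "original" kernel
    return a * x + y

def saxpy_ported(a, x, y):        # the "ported" kernel on another platform
    return np.add(a * x, y)

x = np.arange(8, dtype=np.float64)
y = np.ones(8)
pre = snapshot(x, y)              # state before either kernel runs
out1 = saxpy_original(2.0, x, y)
out2 = saxpy_ported(2.0, x, y)
assert snapshot(x, y) == pre                 # inputs were not clobbered
assert snapshot(out1) == snapshot(out2)      # ported output matches original
```

Comparing digests rather than full buffers is what makes the check cheap enough to run around every kernel launch.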
- Characterization and Optimization of the Fitting of Quantum Correlation Functions. Chuang, Pi-Yueh; Shah, Niteya; Barry, Patrick; Cloet, Ian; Constantinescu, Emil M.; Sato, Nobuo; Qiu, Jian-Wei; Feng, Wu-chun (IEEE, 2024-09). This case study presents a characterization and optimization of an application code for extracting parton distribution functions from high energy electron-proton scattering data. Profiling this application code reveals that the phase-space density computation accounts for 93% of the overall execution time for a single iteration on a single core. When executing multiple iterations in parallel on a multicore system, the application spends 78% of its overall execution time idling due to load imbalance. We address these issues by first transforming the application code from Python to C++ and then tackling the application load imbalance via a hybrid scheduling strategy that combines dynamic and static scheduling. These techniques result in a 62% reduction in CPU idle time and a 2.46× speedup in overall execution time per node. In addition, the typically enabled power-management mechanisms in supercomputers (e.g., AMD Turbo Core, Intel Turbo Boost, and RAPL) can significantly impact intra-node scalability when more than 50% of the CPU cores are used. This finding underscores the importance of understanding system interactions with power management, as they can adversely impact application performance, and highlights the necessity of intra-node scaling tests to identify performance degradation that inter-node scaling tests might otherwise overlook.
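The hybrid scheduling strategy described above, combining a statically pre-assigned share of iterations with a dynamically scheduled remainder, can be sketched as follows (a minimal illustration, not the paper's implementation):

```python
from queue import Queue, Empty
from threading import Thread

def hybrid_schedule(tasks, n_workers, static_frac=0.5):
    """Hybrid scheduling sketch: a static_frac share of tasks is
    pre-assigned round-robin (low scheduling overhead), and the rest
    is pulled from a shared queue (load balancing for stragglers)."""
    n_static = int(len(tasks) * static_frac)
    static_part = [tasks[i:n_static:n_workers] for i in range(n_workers)]
    dynamic_part = Queue()
    for t in tasks[n_static:]:
        dynamic_part.put(t)

    done = [[] for _ in range(n_workers)]

    def worker(wid):
        for t in static_part[wid]:            # fixed assignment first
            done[wid].append(t)
        while True:                           # then drain the shared queue
            try:
                done[wid].append(dynamic_part.get_nowait())
            except Empty:
                return

    threads = [Thread(target=worker, args=(w,)) for w in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return done                               # every task appears exactly once
```

Tuning `static_frac` trades scheduling overhead against load balance, which is the knob such a hybrid strategy exploits.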
- Experiences with VITIS AI for Deep Reinforcement Learning. Chaudhury, Nabayan; Gondhalekar, Atharva; Feng, Wu-chun (IEEE, 2024-09). Deep reinforcement learning has found use cases in many applications, such as natural language processing, self-driving cars, and spacecraft control applications. Many use cases of deep reinforcement learning seek to achieve inference with low latency and high accuracy. As such, this work articulates our experiences with the AMD Vitis AI toolchain to improve the latency and accuracy of inference in deep reinforcement learning. In particular, we evaluate the soft actor-critic (SAC) model that is trained to solve the MuJoCo humanoid environment, where the objective of the humanoid agent is to learn a policy that allows it to stay in motion for as long as possible without falling over. During the training phase, we prune the model using the weight sparsity pruner from the Vitis AI optimizer at different timesteps. Our experimental results show that pruning leads to an improvement in the evaluation of the reinforcement learning policy, where the trained agent can remain balanced in the environment and accumulate higher rewards, compared to a trained agent without pruning. Specifically, we observe that pruning the network during training can deliver up to 20% better mean episode length and 23% higher reward (better accuracy), compared to a network without any pruning. Additionally, there is an improvement of up to 20% in decision-making latency, i.e., the time between the observation of the agent's state and a control decision.
- On the Scalability of Computing Genomic Diversity Using SparkLeBLAST: A Feasibility Study. Prabhu, Ritvik; Moussad, Bernard; Youssef, Karim; Vatai, Emil; Feng, Wu-chun (IEEE, 2024-09). Studying the genomic diversity of viruses can help us understand how viruses evolve and how that evolution can impact human health. Rather than use a laborious and tedious wet-lab approach to conduct a genomic diversity study, we take a computational approach, using the ubiquitous NCBI BLAST and our parallel and distributed SparkLeBLAST, across 53 patients (40,000,000 query sequences) on Fugaku, the world's fastest homogeneous supercomputer with 158,976 nodes, where each node contains a 48-core A64FX processor and 32 GB RAM. To project how long BLAST and SparkLeBLAST would take to complete a genomic diversity study of COVID-19, we first perform a feasibility study on a subset of 50 query sequences from a single COVID-19 patient to identify bottlenecks in sequence alignment processing. We then create a model using Amdahl's law to project the run times of NCBI BLAST and SparkLeBLAST on supercomputing systems like Fugaku. Based on the data from this 50-sequence feasibility study, our model predicts that NCBI BLAST, when running on all the cores of the Fugaku supercomputer, would take approximately 26.7 years to complete the full-scale study. In contrast, SparkLeBLAST, using both our query and database segmentation, would reduce the execution time to 0.0026 years (i.e., 22.9 hours), resulting in more than a 10,000× speedup over the ubiquitous NCBI BLAST.
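An Amdahl's-law projection of this kind works by scaling a measured single-core feasibility run to the full query count and dividing by the achievable speedup. A minimal sketch, with purely hypothetical timing numbers (not the paper's measurements):

```python
def amdahl_speedup(serial_frac, n_cores):
    """Amdahl's law: achievable speedup on n_cores when serial_frac of
    the work cannot be parallelized."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / n_cores)

def project_runtime(t_feasibility, n_feasibility, n_full, serial_frac, n_cores):
    """Scale a single-core feasibility run (t_feasibility seconds for
    n_feasibility queries) to n_full queries on n_cores."""
    t_full_serial = t_feasibility * (n_full / n_feasibility)
    return t_full_serial / amdahl_speedup(serial_frac, n_cores)

# Hypothetical inputs (NOT the paper's measurements): 50 queries in 600 s
# on one core, projected to 40,000,000 queries on all of Fugaku's cores.
t = project_runtime(600.0, 50, 40_000_000, serial_frac=0.01,
                    n_cores=48 * 158_976)
```

The serial fraction dominates at this core count: even a 1% serial share caps the speedup near 100×, which is why reducing the non-parallelizable portion matters more than adding nodes.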
- Optimizing and Scaling the 3D Reconstruction of Single-Particle Imaging. Shah, Niteya; Sweeney, Christine; Ramakrishnaiah, Vinay; Donatelli, Jeffrey; Feng, Wu-chun (IEEE, 2024-05). An X-ray free electron laser (XFEL) facility can produce on the order of 1,000,000 extremely bright X-ray light pulses per second. Using an XFEL to image the atomic structure of a molecule requires fast analysis of an enormous amount of data, estimated to exceed one terabyte per second and requiring petabytes of storage. The SpiniFEL application provides such analysis by determining the 3D structure of proteins from single-particle imaging (SPI) experiments performed using XFELs, but it needs significantly better performance and efficiency to scale and keep up with the terabyte-per-second data production. Thus, this paper addresses the high-performance computing optimizations and scaling needed to improve this 3D reconstruction of SPI data. First, we optimize data movement, memory efficiency, and algorithms to improve the per-node computational efficiency and deliver a 5.28× speedup over the baseline GPU implementation. In addition, we achieve a 485× speedup for the post-analysis reconstruction resolution, which previously took as long as the 3D reconstruction of SPI data. Second, we present a novel distributed shared-memory computational algorithm to hide data latency and load-balance network traffic, thus enabling the processing of 128× more orientations than previously possible. Third, we conduct an exploratory study over the hyperparameter space for the SpiniFEL application to identify the optimal parameters for our underlying target hardware, which ultimately delivers up to a 1.25× speedup from tuning the number of streams. Overall, we achieve a 6.6× speedup (i.e., 5.28×1.25) over the previous fastest GPU/MPI-based SpiniFEL realization.
- Improved 2-D Chest CT Image Enhancement With Multi-Level VGG Loss. Chaturvedi, Ayush; Prabhu, Ritvik; Yadav, Mukund; Feng, Wu-chun; Cao, Guohua (IEEE, 2025-03). Chest CT scans play an important role in diagnosing abnormalities associated with the lungs, such as tuberculosis, sarcoidosis, pneumonia, and, more recently, COVID-19. However, because conventional normal-dose chest CT scans require a much larger amount of radiation than x-rays, practitioners seek to replace conventional CT with low-dose CT (LDCT). LDCT often generates a low-quality CT image that introduces noise and, in turn, negatively affects the accuracy of diagnosis. Therefore, in the context of COVID-19, due to the large number of affected populations, efficient image-denoising techniques are needed for LDCT images. Here, we present a deep learning (DL) model that combines two neural networks to enhance the quality of low-dose chest CT images. The DL model leverages a previously developed DenseNet and deconvolution-based network (DDNet) for feature extraction and extends it with a pretrained VGG network inside the loss function to suppress noise. Outputs from selected multiple levels in the VGG network (ML-VGG) are leveraged for the loss calculation. We tested our DDNet with ML-VGG loss using several sources of CT images and compared its performance to DDNet without VGG loss as well as DDNet with an empirically selected single-level VGG loss (DDNet-SL-VGG) and other state-of-the-art DL models. Our results show that DDNet combined with ML-VGG (DDNet-ML-VGG) achieves state-of-the-art denoising capabilities and improves the perceptual and quantitative image quality of chest CT images. Thus, DDNet with multilevel VGG loss could potentially be used as a post-acquisition image enhancement tool for medical professionals to diagnose and monitor chest diseases with higher accuracy.
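The structure of a multi-level perceptual loss, a weighted sum of per-level reconstruction errors, can be sketched as follows; here the "levels" are simple downsamplings standing in for the pretrained VGG feature maps the paper actually uses:

```python
import numpy as np

def feature_levels(img, n_levels=3):
    """Stand-in for multi-level feature maps: each 'level' is a 2x average
    downsampling. The paper instead takes activations from selected layers
    of a pretrained VGG network."""
    levels, x = [], img
    for _ in range(n_levels):
        levels.append(x)
        x = 0.25 * (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2])
    return levels

def multi_level_loss(denoised, reference, weights=(1.0, 1.0, 1.0)):
    """ML-VGG-style loss structure: a weighted sum of per-level MSEs."""
    total = 0.0
    for w, f_d, f_r in zip(weights, feature_levels(denoised),
                           feature_levels(reference)):
        total += w * np.mean((f_d - f_r) ** 2)
    return total
```

Penalizing errors at multiple scales is what distinguishes the multi-level variant from a single-level perceptual loss, which only sees one layer's features.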
- Looking Back to Look Forward: 15 Years of the Green500. Adhinarayanan, Vignesh; Feng, Wu-chun (IEEE, 2025-01). We revisit a Computer article from 15 years ago that introduced the Green500 -- a list ranking the most energy-efficient supercomputers. Our exploration centers on the advancements achieved during this time, highlighting a notable trend: the energy efficiency of supercomputers has approximately doubled every two years.
- On the Landscape of Graph Clustering at Scale. Dey, Saikat; Jha, Sonal; Wanye, Frank; Feng, Wu-chun (IEEE, 2025-06). Graph clustering, also known as community detection, is used to partition and analyze data across a gamut of disciplines, leading to new insights in fields like bioinformatics, networking, and cybersecurity. To keep pace with the exponential growth in collected data, much of the graph clustering research has increasingly pivoted towards developing parallel and distributed clustering algorithms. However, little work has been done to rigorously characterize such algorithms with respect to each other when using the same software stack, hardware stack, and graph dataset inputs. In this manuscript, we identify three open-source, state-of-the-art graph clustering algorithms and characterize the trade-offs between their accuracy and performance on real-world graphs. We show that the ideal choice of graph clustering algorithm depends on the (1) use case, (2) runtime requirements, and (3) accuracy requirements of the user. We provide guidelines for selecting the appropriate state-of-the-practice graph clustering algorithm and conduct a performance characterization of these algorithms through which we identify opportunities for future research in scalable and accurate graph clustering algorithms.
- Scalable and Maintainable Distributed Sequence Alignment Using Spark. Youssef, Karim; Elnady, Yusuf; Tilevich, Eli; Feng, Wu-chun (IEEE, 2025-07). The exponential growth of genomic data presents a challenge to bioinformatics research. NCBI BLAST, a popular pairwise sequence alignment tool, does not scale with the hundreds of gigabytes (GB) of sequenced data. Therefore, mpiBLAST was widely adopted and scaled up to 65,536 processors. However, mpiBLAST is tightly coupled with an obsolete NCBI BLAST version, creating a challenge to upgrading mpiBLAST with the ever-changing NCBI BLAST code. Recent parallel BLAST implementations, like SparkBLAST, use parallelism wrappers separate from NCBI BLAST to overcome this issue. However, query partitioning, a parallel method that duplicates the genome database on each compute node, makes SparkBLAST scale poorly with databases larger than a single node's memory. Thus, no parallel BLAST utility simultaneously addresses performance, scalability, and software maintainability. To fill this gap, we introduce SparkLeBLAST, a parallel BLAST tool that uses the Spark framework and efficient data partitioning to combine mpiBLAST's performance and scalability with SparkBLAST's simplicity and maintainability. SparkLeBLAST democratizes scalable genomic analysis for domain scientists without extensive distributed computing experience. SparkLeBLAST runs up to 6.68× faster than SparkBLAST. SparkLeBLAST also accelerates taxonomic assignment of COVID-19 genomic diversity analysis by 20.9× as it speeds up the BLAST search component by 88.6× using 128 compute nodes.
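The combination of query and database segmentation can be sketched as a map-reduce over (query shard, database shard) pairs, so that no node must hold the entire database in memory (a toy illustration with a stand-in scoring function, not SparkLeBLAST's implementation):

```python
from itertools import product

def segment(items, n_parts):
    """Split a list into n_parts roughly equal contiguous shards."""
    k, r = divmod(len(items), n_parts)
    shards, start = [], 0
    for i in range(n_parts):
        end = start + k + (1 if i < r else 0)
        shards.append(items[start:end])
        start = end
    return shards

def align(query, subject):
    """Stand-in scoring function; a real pipeline would call BLAST here."""
    return sum(a == b for a, b in zip(query, subject))

def search(queries, database, n_query_parts, n_db_parts):
    """Query AND database segmentation: each (query shard, db shard) pair
    is an independent work unit, so no node needs the whole database."""
    results = {}
    for q_shard, d_shard in product(segment(queries, n_query_parts),
                                    segment(database, n_db_parts)):
        for q in q_shard:
            best = max((align(q, s) for s in d_shard), default=0)
            results[q] = max(results.get(q, 0), best)   # merge partial hits
    return results
```

The merge step (keeping the best hit per query across database shards) is what makes database segmentation transparent to the caller: the final result is the same as searching the whole database at once.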
- Optimizing Management of Persistent Data Structures in High-Performance Analytics. Youssef, Karim; Iwabuchi, Keita; Gokhale, Maya; Feng, Wu-chun; Pearce, Roger (IEEE, 2026-01-01). Large-scale data analytics workflows ingest massive input data into various data structures, including graphs and key-value datastores. These data structures undergo multiple transformations and computations and are typically reused in incremental and iterative analytics workflows. Persisting in-memory views of these data structures enables reusing them beyond the scope of a single program run while avoiding repetitive raw data ingestion overheads. Memory-mapped I/O enables persisting in-memory data structures without data serialization and deserialization overheads. However, memory-mapped I/O lacks the key feature of persisting consistent snapshots of these data structures for incremental ingestion and processing. The obstacles to efficient virtual memory snapshots using memory-mapped I/O include background writebacks outside the application's control, and the significantly high storage footprint of such snapshots. To address these limitations, we present Privateer, a memory and storage management tool that enables storage-efficient virtual memory snapshotting while also optimizing snapshot I/O performance. We integrated Privateer into Metall, a state-of-the-art persistent memory allocator for C++, and the Lightning Memory-Mapped Database (LMDB), a widely used key-value datastore in data analytics and machine learning. Privateer improves application performance by 1.22× when storing data structure snapshots to node-local storage, and up to 16.7× when storing snapshots to a parallel file system. Privateer also improves the storage efficiency of incremental data structure snapshots by up to 11× using data deduplication and compression.
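The memory-mapped I/O mechanism that such tools build on can be sketched in a few lines: map a backing file, mutate the in-memory view, and flush it, persisting the structure without serialization. This sketch deliberately omits the hard parts the paper addresses (consistent snapshots, deduplication, compression):

```python
import mmap
import os
import tempfile

def open_persistent(path, size):
    """Memory-map a backing file so an in-memory structure outlives the
    process, without serialization or deserialization."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, size)
    return fd, mmap.mmap(fd, size)

path = os.path.join(tempfile.mkdtemp(), "store.bin")
fd, view = open_persistent(path, 4096)
view[0:5] = b"hello"     # mutate the in-memory view directly
view.flush()             # explicit writeback: a crude whole-region "snapshot"
view.close()
os.close(fd)

with open(path, "rb") as f:
    assert f.read(5) == b"hello"   # data persisted without serialization
```

Note that the kernel may also write pages back at any time between flushes; those uncontrolled background writebacks are exactly the consistency obstacle the abstract describes.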
- Reinforcement Learning-Based Fuzzer for 5G RRC Security Evaluation. Parikh, Dhairya; Dessources, Dimitri A.; Tripathi, Nishith D.; Reed, Jeffrey H.; Burger, Eric W. (IEEE, 2026-03-09). Open Radio Access Network (O-RAN) and modern Fifth Generation Mobile Networks (5G) Standalone (SA) deployments increase protocol complexity and broaden the attack surface of cellular infrastructure. This paper introduces a reinforcement-learning-based fuzz tester designed to evaluate the Radio Resource Control (RRC) layer in 5G SA networks. The fuzzer operates as a software-defined "false" User Equipment (UE) that attaches to the target network, intercepts and mutates uplink RRC messages, and injects malformed test cases targeting RRC handlers. The system integrates Reinforcement Learning (RL)-driven test-case generation with an automated execution pipeline for message injection and packet-capture analysis, allowing the agent to iteratively learn which mutations most effectively trigger anomalous behavior. Reinforcement feedback is computed from system metrics such as Central Processing Unit (CPU) utilization, thread count, and network Input/Output (I/O) to guide learning toward high-impact inputs. Experimental results demonstrate that the proposed fuzzer uncovers previously unseen protocol-handling anomalies, malformed-message behaviors, and resource-exhaustion conditions, including reproducible RRC/NGAP inconsistencies identified through a deterministic Proof-of-Concept (PoC) evaluation. The paper presents the overall architecture, reinforcement learning formulation, and evaluation results, highlighting how feedback-driven adaptive fuzzing can prioritize high-impact mutations for stateful 5G RRC security assessment.
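Metric-based reinforcement feedback of this kind can be sketched as a weighted score of how far a mutated message pushes target-side metrics above a pre-fuzzing baseline (metric names and weights here are illustrative, not the paper's values):

```python
def reward(baseline, observed, weights=(1.0, 0.5, 0.2)):
    """Score a mutated message by how far it pushes target-side metrics
    (CPU %, thread count, network I/O) above a pre-fuzzing baseline.
    Only increases count: a quiet target yields zero reward."""
    w_cpu, w_thr, w_io = weights
    return (w_cpu * max(0.0, observed["cpu"] - baseline["cpu"])
            + w_thr * max(0.0, observed["threads"] - baseline["threads"])
            + w_io * max(0.0, observed["net_io"] - baseline["net_io"]))
```

Feeding this scalar back to the RL agent biases test-case generation toward mutations that measurably stress the RRC handlers, rather than mutations the target silently discards.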
- A previously unrecognized class of fungal ice-nucleating proteins with bacterial ancestry. Eufemio, Rosemary J.; Rojas, Mariah; Shaw, Kaden; de Almeida Ribeiro, Ingrid; Guo, Hao-Bo; Renzer, Galit; Belay, Kassaye; Liu, Haijie; Suseendran, Parkesh; Wang, Xiaofeng; Fröhlich-Nowoisky, Janine; Pöschl, Ulrich; Bonn, Mischa; Berry, Rajiv J.; Molinero, Valeria; Vinatzer, Boris A.; Meister, Konrad (American Association for the Advancement of Science, 2026-03-13). Ice-nucleating proteins (INpros) catalyze ice formation at high subzero temperatures, with major biological and environmental implications. While bacterial INpros have been structurally characterized, their counterparts in other organisms have remained largely unknown. Here, we identify membrane-independent proteins in fungi of the Mortierellaceae family that promote ice formation with high efficiency. These proteins are predicted to adopt β-solenoid folds and multimerize to form extended ice-binding surfaces, exhibiting mechanistic parallels with bacterial INpros. Structural modeling, phylogenetic analysis, and heterologous gene expression leading to ice nucleation in Escherichia coli and Saccharomyces cerevisiae show that the fungal INpros are encoded by orthologs of the bacterial InaZ gene, which was likely acquired by a fungal ancestor through horizontal gene transfer. The discovery of cell-free fungal INpros provides tools for innovative freezing applications and reveals biophysical constraints on ice nucleation across life.
- Partial Courses of Fidaxomicin Followed by Oral Vancomycin and the Effect on Recurrence of Clostridioides difficile Infections. Papamanolis, Irene-Constantina; Stornelli, Nicholas; Everson, Nathan; Ahmad, Zayd; Kamrada, Meghan; Lockhart, Ellen Rachel; McDaniel, Lauren (Sage, 2025-12-01). Background: Clostridioides difficile infection (CDI) causes a significant national health care burden. Literature has demonstrated lower rates of CDI recurrence with fidaxomicin compared with oral vancomycin. However, patients are sometimes switched to oral vancomycin before completing a fidaxomicin course. Objective: The objective of this study is to evaluate rates of CDI recurrence in full courses of fidaxomicin versus partial courses of fidaxomicin followed by a switch to oral vancomycin. Methods: In this single-center, retrospective, cohort study of adults with CDI, patients were screened for inclusion if they received either a full 10-day course of fidaxomicin or a partial course of fidaxomicin followed by a switch to oral vancomycin. The primary outcome was the rate of CDI recurrence within 30 days after completion of initial therapy, determined by a positive CDI test and initiation of treatment. Results: Ninety-nine patients received a full course of fidaxomicin, and 95 patients received a partial course of fidaxomicin followed by oral vancomycin. Mean age was lower in the full course group compared with the partial course (65.3 years vs 71.5 years, P < 0.002). Clostridioides difficile infection recurrence occurred in 5.1% of the full course group and 7.4% of the partial therapy group (P = 0.503) at 30 days and 13.1% versus 14.7% (P = 0.747) at 90 days. Clostridioides difficile infection–related readmissions at 30 days were similar in the full course and partial course groups (7.1% vs 4.2%, P = 0.389). Conclusion and Relevance: Partial courses of fidaxomicin followed by oral vancomycin had similar 30-day CDI recurrence compared with full course fidaxomicin.
- Benchmarking Deep Legendre-SNN for Time Series Classification – Analysis and Enhancements. Gaurav, Ramashish; Agarwal, Shrestha; Stewart, Terrence C.; Yi, Yang (IEEE, 2025-10-29). Compute- and energy-efficient Time Series Classification (TSC) is the need of the hour, catering to the continually growing sources and applications of temporal data. State-of-the-Art (SoTA) temporal computational models, e.g., LSTMs/RNNs, HIVE-COTE, Transformers, etc., are high performing, but are also resource intensive, resulting in high energy consumption on CPUs/GPUs. On the contrary, Reservoir Computing (RC) based models are resource-efficient and perform well for simple TSC datasets; and when implemented with spiking neurons, spiking RC-based models offer the promise of high energy-efficiency on neuromorphic hardware. In this work, we analyse, enhance, and benchmark the newly introduced spiking RC-based "Legendre Spiking Neural Network" (Legendre-SNN or LSNN) model for TSC. We theoretically investigate the Legendre Delay Network (LDN) that acts as a reservoir in the LSNN model, and bring some useful insights into the design of LDN-based models. In our analysis, we find that a higher order LDN is necessary for optimal performance with input signals composed of higher frequencies. We also extend the existing LSNN model to multivariate time-series signals and propose the "DeepLSNN" model. We conduct experiments with DeepLSNN on 102 benchmark TSC datasets (comprising both univariate and multivariate signals). Via such large scale experiments, we present the first benchmark results for spiking-TSC. Considering DeepLSNN's best results, we find that it outperforms the non-spiking LSTM-FCN on more than 31% of the 102 datasets. We note that our benchmark results can serve as a comparison criterion for other spiking-TSC experiments.
- Transit time modeling framework for predicting freshwater salinization in urban catchments. Bhide, Shantanu V.; Grant, Stanley B.; McGuire, Kevin J.; Prestegaard, Karen; Kaushal, Sujay S.; Sekellick, Andrew J.; Rippy, Megan A.; Schenk, Todd; Curtis, Shannon; Gomez-Velez, Jesus D.; Hotchkiss, Erin R.; Vikesland, Peter J.; Saksena, Siddharth (Elsevier, 2026-03). The salinity of inland freshwaters is rising globally, particularly in urban watersheds where winter road deicers are widely applied. Attributing stream salinity dynamics to specific sources and transport pathways remains challenging due to episodic salt inputs, engineered drainage, and strong coupling between hydrology and subsurface storage. We present a modeling framework that couples climate-driven deicer build-up and wash-off with transient transit time distribution theory to simulate salt transport through drainage, interflow, and groundwater pathways. Applied to an urban watershed in Northern Virginia (USA), the model reproduces ten years of high-frequency stream salinity measurements across daily-to-decadal timescales. The calibrated model implies an average deicer application of 206 tonnes Cl yr⁻¹, or roughly one 20 kg bag of rock salt person⁻¹ yr⁻¹ when normalized by the 20,000 people living in the watershed. In winter months, higher infiltration routes a large fraction of snowmelt and deicers into shallow subsurface pathways, enhancing vadose-zone and interflow contributions to stream salinity. Limited subsurface storage capacity and seasonal hydrologic turnover flush excess chloride from the vadose zone and groundwater during subsequent summer storms. By linking climate-driven deicer inputs, hydrologic connectivity, and stream water age, the framework provides a transferable basis for diagnosing and managing freshwater salinization in urban watersheds.
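The build-up/wash-off component of such a framework can be sketched as a daily mass balance: applied deicer accumulates on the surface, and a fraction washes off toward the stream each day. The rate constants below are illustrative only; the paper additionally routes salt through transit time distributions for subsurface pathways:

```python
def simulate_chloride(application, runoff_frac=0.3, loss_rate=0.05):
    """Daily build-up/wash-off mass balance: applied deicer accumulates on
    the surface; each day a runoff fraction washes off toward the stream
    and a further share is lost to other pathways. Rate constants are
    illustrative, not calibrated values from the paper."""
    surface, washoff = 0.0, []
    for applied in application:            # kg of salt applied per day
        surface += applied                 # build-up
        flux = runoff_frac * surface       # wash-off to the stream
        surface -= flux + loss_rate * surface
        washoff.append(flux)
    return washoff
```

A single application pulse produces a geometrically decaying wash-off tail, which is the simplest form of the "salt applied in winter keeps arriving in later storms" behavior the abstract describes.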
- Words matter when gangs cyberbang: Predicting imminent urban violence from gang members' social media posts. Fowler, Sherry L.; Stylianou, Antonis C.; Zhang, Dongsong; Lowry, Paul Benjamin; Mousavi, Reza; Reid, Shannon E. (2026). The rise in violent crime across major U.S. cities, fueled mainly by gang members using social media to broadcast messages of loss and aggression, poses an urgent challenge. Although prior research has examined gang-affiliated social media content, there remains a crucial gap in identifying which posts serve as credible signals of impending violence. Addressing this gap is essential for enhancing community safety, improving resource allocation, and optimizing law enforcement strategies. This study introduces a novel research model grounded in a contextualized adaptation of signaling theory. The model identifies key indicators of credible signals, such as follower count, specific hashtags, and retweet counts, which correlate with gang-related aggression. Environmental factors, such as temperature, are also examined for their influence on violent crime escalation. Using this contextualized theory, we designed a machine learning model to predict violent crime counts, training it on a dataset of 143,700 gang-affiliated tweets and their accompanying text and metadata. This approach enables automated identification of credible social media signals related to gang violence. The findings contribute to theory and practice by offering new insights into social media credibility and its link to violent crime, and by demonstrating how such signals can be used for prediction. Furthermore, the predictive model provides law enforcement with advanced tools to anticipate crime and inform community-based prevention strategies and policy development.
- Fostering Information Disclosure in Telemental Healthcare Settings: How Telehealth Can Mitigate the Deleterious Effects of Stigma. Raimi, Ryan; Lowry, Paul Benjamin; Straub, Detmar (2026). Insufficient patient disclosure and persistent stigma undermine effective mental health care, a challenge magnified during the COVID-19 pandemic. Telehealth offers a promising avenue to reduce access barriers and improve equity, yet its effectiveness depends on patients' willingness to disclose sensitive information online. This study develops a middle-range, contextually adapted version of the disclosure processes model (DPM) to explain and predict how stigma and technological features shape online self-disclosure in mental health settings. We conducted a randomized web-based experiment with 309 participants who viewed a video vignette depicting a consultation between a patient and a psychiatrist. The vignette manipulated diagnosis (ADHD vs. schizophrenia) and consultation mode (in-person vs. virtual). Results show that willingness to disclose increases with greater trust in technology, higher perceived social presence, and richer communication media. Initial disclosure goals align with differing levels of technological trust and self-disclosure. However, perceived stigma weakens these positive relationships, reducing patients' readiness to share sensitive information. The research advances theory by extending the DPM into a context-specific, middle-range information systems framework that integrates stigma and media characteristics in online mental health care. Practically, the findings identify key communication features, such as social presence, richness, and trust in telehealth platforms, that can be calibrated to foster disclosure of stigmatized information. These insights inform the design and implementation of telehealth services that promote open communication and improve treatment engagement in mental health and other stigma-laden domains.
- Multivariate Legendre-SNN on Loihi-2 for Time Series Classification and 5G Jamming Detection. Gaurav, Ramashish; Sinha, Sujata; Lin, Chunxiao; Stewart, Terrence C.; Liu, Lingjia; Yi, Yang (IEEE, 2026). 5G-&-Beyond technologies offer the promise of improved speed and bandwidth, ultra low latency, and high network reliability, and have the potential to enable new applications and services. It only seems fitting to complement the transformative future of 5G-&-Beyond with the low energy offering of Spiking Neural Networks (SNNs) on neuromorphic chips. In this work, we develop Loihi-2 (Intel's neuromorphic chip) -compatible versions of our previously proposed Legendre-SNN model for univariate and multivariate Time-Series Classification (TSC), as well as for 5G wireless applications. The Legendre-SNN is a reservoir-based SNN, where the non-spiking Legendre Delay Network (LDN) is used as a static reservoir, followed by a trainable spiking network. Deploying such an SNN model (a mix of non-spiking and spiking components) entirely on Loihi-2 is nontrivial, owing to the scarcity of related approaches and technical documentation. In this work, we present our approach and the technicalities of implementing the non-spiking LDN on the rarely used "Lakemont core" (embedded on Loihi-2), thereby adding to the scarce technical documentation on programming on-chip Lakemont cores. Thus, our presented approach can be leveraged by other researchers as well, to implement their non-spiking components right on-chip. Our proposed hardware-friendly versions of Legendre-SNN, when evaluated on Loihi-2, outperform LSTM-based models (executed on a GPU) on 7 of 24 TSC datasets. Here, we also emphasize the applications of our Legendre-SNN versions for 5G Jamming Detection on Loihi-2, and complement them with a real-time video demonstration of Jamming Detection (with simulated signals) on our physical Kapoho-Point Single Chip Loihi-2 board, followed by a detailed energy analysis. Overall, this work is directed towards the (comparatively) understudied technical side of neuromorphic computing to enable researchers to leverage the Lakemont cores and deploy their SNNs entirely on Loihi-2, with a push towards the cause for neuromorphics in Wireless Communications.
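The LDN reservoir at the core of the Legendre-SNN has a closed-form state-space realization. The sketch below follows the standard construction (exact scaling conventions vary across implementations, so treat the signs and factors as one common variant rather than the paper's code):

```python
import numpy as np

def ldn_matrices(order, theta=1.0):
    """State-space (A, B) of a Legendre Delay Network of the given order;
    theta is the length of the delay window in seconds. The linear system
    theta * dm/dt = A*theta * m + B*theta * u compresses the last theta
    seconds of input u onto the Legendre polynomial basis."""
    A = np.zeros((order, order))
    B = np.zeros((order, 1))
    for i in range(order):
        B[i, 0] = (2 * i + 1) * (-1.0) ** i / theta
        for j in range(order):
            A[i, j] = ((2 * i + 1) / theta
                       * (-1.0 if i < j else (-1.0) ** (i - j + 1)))
    return A, B

A, B = ldn_matrices(6)
# The LDN system is stable: all eigenvalues lie in the left half-plane,
# so the reservoir state stays bounded for bounded inputs.
assert np.all(np.linalg.eigvals(A).real < 0)
```

Because A and B are fixed (the reservoir is static, as the abstract notes), only the spiking network reading out of the LDN state needs training.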
- Residential Mobility, Housing Instability, Adverse Childhood Experiences, and the Moderating Role of Neighborhood Contexts. Yoo, Jaeyong; Fisher, Satya; Kim, Jaehwan (MDPI, 2026-03-06). Housing instability, particularly frequent residential moves, has been associated with poor developmental outcomes, yet its relationship with adverse childhood experiences (ACEs) remains insufficiently understood at the national level. This study addresses this gap by investigating how frequent moves shape children's exposure to ACEs, and whether community and household contexts influence these effects. Using the 2020–2021 National Survey of Children's Health data, we ask two questions: (1) Do children who experience frequent moves face greater risk of ACEs? and (2) Do neighborhood and metropolitan contexts mitigate or exacerbate this association? Our contribution is twofold. First, we examine both directions of the relationship: how ACEs predict frequent moves and how frequent moves increase ACE exposure. Second, we incorporate contextual moderators, including supportive neighborhoods, safety, amenities, and urban residence, to provide a more nuanced account of how environments shape resilience or vulnerability. Using logistic and negative binomial regression models, we find that all ACEs significantly predict frequent moves, with parental divorce/separation showing the largest effect. Economic hardship is also a strong predictor of frequent residential mobility, and while food or cash assistance is associated with higher mobility, it moderates the hardship-mobility association. Supportive neighborhoods are associated with lower odds of moving. In turn, frequent moves more than double children's risk of ACEs. Supportive and safe neighborhoods provide protective benefits, while detracting elements exacerbate adversity. We conclude that reducing frequent moves and strengthening neighborhood supports are critical strategies for mitigating childhood adversity.
- Bot Automation Using Large Language Models (LLMs) and Plugins. Ramakrishnan, Naren; Butler, Patrick; Mayer, Brian B.; Neeser, Andrew (2024-07). The aim of this research study was to create tools that automate information extraction pipelines to support business processes in contract and procurement management. The research team was specifically asked to explore opportunities to use Large Language Models (LLMs) to accomplish this task. After reviewing the problem space and the potential solutions, the team designed and created a tool to generate reports on the status of entries from the Contractor Performance Assessment Reporting System (CPARS), broken down by contracting division. This tool automates the extraction of the Contracting Officer's Representative (COR) status information. The team also explored methods for using LLM pipelines to automate other potential contractual management tasks and presented some demonstrations of possible uses. The research indicated that LLMs have significant potential to enhance contract and procurement management processes, e.g., automating field extraction from existing contracts, assisting contract generation and customization, rapid contract analysis, and streamlining routine document processing tasks. Based on these demonstrations, the sponsor agreed on their potential. Yet, while the potential benefits are substantial, there are concerns with data privacy and security, accuracy and reliability, legal and compliance issues, and integration with existing systems. To mitigate these concerns and maximize benefits, the research team suggests focusing on local, open-source LLM solutions like LLaMA or Phi. These models can be deployed on-premises, ensuring data privacy and security while providing powerful LLM capabilities including customization and specialization.