Browsing by Author "Haghighat, Alireza"
Now showing 1 - 20 of 34
- An Agent-based Platform for Demand Response Implementation in Smart Buildings. Khamphanchai, Warodom (Virginia Tech, 2016-04-28). Efficiency, security, and resiliency are important factors for the operation of a distribution power system. Taking into account customer demand and energy resource constraints, electric utilities not only need to provide reliable services but also need to operate the power grid as efficiently as possible. The objective of this dissertation is to design, develop, and deploy multi-agent systems (MAS), together with control algorithms, that enable demand response (DR) implementation at the customer level, focusing on both residential and commercial customers. For residential applications, the main objective is to propose an approach for smart distribution transformer management. The DR objective at a distribution transformer is to keep the instantaneous power demand below a certain demand limit while minimizing the impacts of demand restrikes. The DR objectives at residential homes are to secure critical loads, mitigate occupant comfort violation, and minimize appliance run-time after a DR event. For commercial applications, the goal is to propose a MAS architecture and platform that facilitate the implementation of a Critical Peak Pricing (CPP) program. The main objectives of the proposed DR algorithm are to minimize power demand and energy consumption during the period that a CPP event is called, to minimize occupant comfort violation, to minimize the impacts of demand restrikes after a CPP event, and to control device operation to avoid restrikes. Overall, this study provides insight into the design and implementation of MAS, together with associated control algorithms, for DR implementation in smart buildings. The proposed approaches can serve as alternative solutions to the current practices of electric utilities for engaging end-use customers in DR programs, where occupancy level, tenant comfort conditions and preferences, and controllable devices and sensors are taken into account in both simulated and real-world environments. Research findings show that the proposed DR algorithms perform effectively and efficiently during DR events in residential homes and during CPP events in commercial buildings.
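As a hedged illustration of the kind of priority-based curtailment logic such DR agents apply, the sketch below keeps aggregate demand at a transformer under a limit by shedding the least critical loads first. The load names, ratings, and priorities are invented for the example and are not taken from the dissertation.

```python
# Illustrative sketch (not the dissertation's code): priority-based load
# curtailment that keeps aggregate demand at a transformer below a limit.

def curtail(loads, demand_limit_kw):
    """loads: list of dicts with 'name', 'kw', 'priority' (1 = most critical).
    Sheds the lowest-priority loads first until demand <= limit."""
    total = sum(l["kw"] for l in loads)
    shed = []
    # Consider least-critical loads first (highest priority number).
    for load in sorted(loads, key=lambda l: l["priority"], reverse=True):
        if total <= demand_limit_kw:
            break
        total -= load["kw"]
        shed.append(load["name"])
    return total, shed

loads = [
    {"name": "medical equipment", "kw": 0.5, "priority": 1},
    {"name": "HVAC",              "kw": 3.0, "priority": 2},
    {"name": "water heater",      "kw": 4.5, "priority": 3},
    {"name": "EV charger",        "kw": 7.2, "priority": 4},
]
print(curtail(loads, demand_limit_kw=6.0))  # -> (3.5, ['EV charger', 'water heater'])
```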
- Analysis and Improvement of the bRAPID Algorithm and its Implementation. Bartel, Jacob Benjamin (Virginia Tech, 2019-07-18). This thesis presents a detailed analysis of the bRAPID (burnup for RAPID – Real Time Analysis for Particle transport and In-situ Detection) code system, and the implementation and validation of two new algorithms for improved burnup simulation. bRAPID is a fuel burnup algorithm capable of performing full-core 3D assembly-wise burnup calculations in real time, through its use of the RAPID Fission Matrix methodology. A study into the effect of time step resolution on isotopic composition in Monte Carlo burnup calculations is presented to provide recommendations for time step scheme development in bRAPID. Two novel algorithms are implemented into bRAPID, which address: i) the generation of time-dependent correction factors for the fission density distribution in boundary nuclear fuel assemblies within a reactor core; ii) the calculation of pin-wise burnup distributions and isotopic concentrations. Time step resolution analysis shows that a variable time step scheme, developed to accurately characterize important isotope evolution, can be used to optimize burnup calculations and minimize computation time. The two new algorithms have been benchmarked against the Monte Carlo code system Serpent. Results indicate that the time-dependent boundary correction algorithm improves fission density distribution calculations by including a more detailed representation of boundary physics. The pin-wise burnup algorithm expands bRAPID capabilities to provide material composition data at the pin level, with accuracy comparable to the reference calculation. In addition, wall-clock time analyses show that burnup calculations performed using bRAPID with these novel algorithms require a fraction of the time of Serpent.
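As a generic illustration of the depletion step such burnup codes perform (this is not bRAPID's actual solver), the sketch below advances the Bateman equations dN/dt = A N over one time step with a matrix exponential, for a made-up two-nuclide chain.

```python
# Toy depletion step, assuming the usual matrix form of the Bateman equations
# dN/dt = A N. A couples decay and transmutation rates; N(t) = expm(A*dt) @ N0
# advances the atom densities one burnup step. All numbers are illustrative.
import numpy as np
from scipy.linalg import expm

lam = 1e-5                    # decay constant of nuclide 1 (1/s), made up
A = np.array([[-lam, 0.0],
              [ lam, 0.0]])   # nuclide 1 decays into nuclide 2
N0 = np.array([1.0e20, 0.0])  # initial atom densities
dt = 30 * 24 * 3600           # one 30-day time step, in seconds

N = expm(A * dt) @ N0
print(N)  # nuclide 1 depleted, nuclide 2 built up; total is conserved here
```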
- APECS: A Polychrony based End-to-End Embedded System Design and Code Synthesis. Anderson, Matthew Eric (Virginia Tech, 2015-05-19). The development of high-integrity embedded systems remains an arduous and error-prone task, despite researchers' efforts in inventing tools and techniques for design automation. Much of the problem arises from the fact that the semantics of the modeling languages for the various tools are often distinct, and the semantic gaps are often filled manually through the engineer's understanding of one model or an abstraction. This provides an opportunity for bugs to creep in, beyond the standard software engineering errors germane to such complex system engineering. Since embedded systems applications such as avionics, automotive, and industrial automation are safety critical, it is very important to invent tools and methodologies for safe and reliable system design. Most tools and techniques address either the design of embedded platforms (hardware, networking, firmware, etc.) or the software stack, separately. The semantic gap between these two, as well as between the models of computation used to capture semantics, must be bridged in order to design safer embedded systems. In this dissertation we propose a methodology for the end-to-end modeling and analysis of safety-critical embedded systems. Our approach consists of formal platform modeling and analysis; formal application modeling; and 'correct-by-construction' code synthesis, with the aim of bridging the semantic gaps between the various abstractions and models required for end-to-end system design. While the platform modeling language AADL has formal semantics and analysis tools for real-time and performance verification, application behavior modeling in AADL is weak and confined to an annex. In our work, we create the APECS (AADL and Polychrony based Embedded Computing Synthesis) methodology to allow an embedded system design specification to cover the platform architecture and platform components, the real-time behavior, non-functional properties, and the application software modeling. Our main contribution is to integrate a polychronous application software modeling language and synthesis algorithms so that the embedded software running on the target platform can be synthesized with the required constraints met. We believe that a polychronous approach is particularly well suited for a multiprocessor/multi-controller distributed platform where different components often operate at independent rates and concurrently. Further, the use of a formal polychronous language allows for formal validation of the software prior to code generation. We present a prototype framework that implements this approach, which we refer to as the AADL and Polychrony based Embedded Computing System (APECS). Our prototype utilizes an extended version of Ocarina to provide code generation for the AADL model. Our polychronous modeling language is MRICDF. Our prototype extends Ocarina to support software specification in MRICDF and to generate multi-threaded software. Additionally, we implement an automated translation from Simulink to MRICDF, allowing designers to benefit from MRICDF's formal semantics while exploiting engineers' familiarity with Simulink tools and legacy models. We present case studies utilizing APECS to implement safety-critical systems both natively in MRICDF and in Simulink through automated translation.
- Automated Analysis of Astrocyte Activities from Large-scale Time-lapse Microscopic Imaging Data. Wang, Yizhi (Virginia Tech, 2019-12-13). The advent of multi-photon microscopes and highly sensitive protein sensors enables the recording of astrocyte activities in a large population of cells over a long time period in vivo. Existing tools cannot fully characterize these activities, both within single cells and at the population level, because current region-of-interest (ROI)-based approaches are insufficient to describe activity that is often spatially unfixed, size-varying, and propagative. Here, we present Astrocyte Quantitative Analysis (AQuA), an analytical framework that releases astrocyte biologists from the ROI-based paradigm. The framework takes an event-based perspective to model and accurately quantify the complex activity in astrocyte imaging datasets, with an event defined jointly by its spatial occupancy and temporal dynamics. To model signal propagation in astrocytes, we developed graphical time warping (GTW) to align curves with graph-structured constraints and integrated it into AQuA. To make AQuA easy to use, we designed a comprehensive software package. The software implements the detection pipeline in an intuitive step-by-step GUI with visual feedback, and also supports proofreading and the incorporation of morphology information. With synthetic data, we showed that AQuA performs much better in accuracy than existing methods developed for astrocytic and neuronal data. We applied AQuA to a range of ex vivo and in vivo imaging datasets. Since AQuA is data-driven and based on machine learning principles, it can be applied across model organisms, fluorescent indicators, experimental modes, and imaging resolutions and speeds, enabling researchers to elucidate fundamental astrocyte physiology.
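GTW itself generalizes classic dynamic time warping (DTW) with graph-structured constraints; the minimal DTW below shows only the underlying alignment primitive, as a point of reference, and is not the AQuA implementation.

```python
# Minimal dynamic time warping (DTW) between two 1-D curves, the classic
# primitive that GTW generalizes with graph-structured constraints.
import numpy as np

def dtw(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = np.array([0.0, 0.2, 0.9, 1.0, 0.4])
y = np.array([0.0, 0.8, 1.0, 0.5, 0.1])
print(dtw(x, y))  # total alignment cost
```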
- Automated Identification and Tracking of Motile Oligodendrocyte Precursor Cells (OPCs) from Time-lapse 3D Microscopic Imaging Data of Cell Clusters in vivo. Wang, Yinxue (Virginia Tech, 2021-06-02). Advances in time-lapse 3D in vivo fluorescence microscopic imaging techniques enable the observation and investigation of the migration of oligodendrocyte precursor cells (OPCs) and its role in the central nervous system. However, current practice in image-based OPC motility analysis relies heavily on manual labeling and tracking on 2D max projections of the 3D data, which suffers from massive human labor, subjective biases, weak reproducibility, and especially information loss and distortion. Besides, due to the lack of an OPC-specific genetically encoded indicator, OPCs can only be distinguished from other oligodendrocyte lineage cells by their observed motion patterns. Automated analytical tools are needed for the identification and tracking of OPCs. In this dissertation work, we propose an analytical framework, MicTracker (Migrating Cell Tracker), for the integrated task of identifying, segmenting, and tracking migrating cells (OPCs) from in vivo time-lapse fluorescence imaging data of high-density cell clusters composed of cells with different modes of motion. As a component of the framework, we present a novel strategy for cell segmentation with global temporal consistency enforced, tackling the challenges caused by the highly clustered cell population and temporally inconsistently blurred boundaries between touching cells. We also design a data association algorithm to address the violation of the usual assumption of small displacements. Recognizing that the violation occurs in the mixed cell population composed of two cell groups while the assumption holds within each group, we propose to solve this seemingly impossible task by de-mixing the two groups of cell motion modes without known labels. We demonstrate the effectiveness of MicTracker in solving our problem on real in vivo data.
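A standard baseline for the frame-to-frame data-association step is minimum-cost assignment between detections in consecutive frames. MicTracker's actual algorithm handles mixed motion modes; the sketch below, with made-up centroids, only shows the generic machinery.

```python
# Frame-to-frame data association as a minimum-cost assignment problem,
# solved with the Hungarian algorithm. Centroid values are placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

prev = np.array([[10.0, 5.0, 3.0], [22.0, 8.0, 4.0]])   # 3-D centroids, frame t
curr = np.array([[11.0, 5.5, 3.1], [21.0, 9.0, 4.2]])   # 3-D centroids, frame t+1

cost = cdist(prev, curr)                  # pairwise Euclidean distances
rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
for r, c in zip(rows, cols):
    print(f"cell {r} in frame t -> cell {c} in frame t+1 (d={cost[r, c]:.2f})")
```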
- Benchmarking of the RAPID Eigenvalue Algorithm using the ICSBEP Handbook. Butler, James Michael (Virginia Tech, 2019-09-17). The purpose of this thesis is to examine the accuracy of the RAPID (Real-Time Analysis for Particle Transport and In-situ Detection) eigenvalue algorithm based on a few problems from the ICSBEP (International Criticality Safety Benchmark Evaluation Project) Handbook. RAPID is developed based on the MRT (Multi-Stage Response-Function Transport) methodology and it uses the fission matrix (FM) method for performing eigenvalue calculations. RAPID has already been benchmarked based on several real-world problems including spent fuel pools and casks, and reactor cores. This thesis examines the accuracy of the RAPID eigenvalue algorithm for modeling the physics of problems with unique geometric configurations. Four problems were selected from the ICSBEP Handbook; these problems differ by their unique configurations which can effectively examine the capability of the RAPID code system. For each problem, a reference Serpent Monte Carlo calculation has been performed. Using the same Serpent model in the pRAPID (pre- and post-processing for RAPID) utility code, a series of fixed-source Serpent calculations are performed to determine spatially-dependent FM coefficients. RAPID calculations are performed using these FM coefficients to obtain the axially-dependent, pin-wise fission density distribution and system eigenvalue for each problem. It is demonstrated that the eigenvalues calculated by RAPID and Serpent agree with the experimental data within the given experimental uncertainty. Further, the detailed 3-D pin-wise fission density distribution obtained by RAPID agrees with the reference prediction by Serpent, which itself has converged to less than 1% weighted uncertainty. While achieving accurate results, RAPID calculations are significantly faster than the reference Serpent calculations, with a calculation time speed-up of between 4x and 34x demonstrated in this thesis. In addition to examining the accuracy of the RAPID algorithm, this thesis provides useful information on the use of the FM method for simulation of nuclear systems.
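The fission matrix eigenvalue solve at the core of this approach reduces to finding the dominant eigenpair of the FM. A minimal sketch, with a made-up 3-cell matrix: once coefficients a_ij (fission neutrons produced in cell i per fission neutron born in cell j) are precomputed, power iteration yields k-eff and the fission density shape.

```python
# Power iteration on a (made-up) fission matrix F: the dominant eigenvalue
# is k-eff and the dominant eigenvector is the fission density distribution.
import numpy as np

F = np.array([[0.60, 0.20, 0.05],
              [0.20, 0.60, 0.20],
              [0.05, 0.20, 0.60]])   # illustrative 3-cell fission matrix

s = np.ones(3) / 3                   # initial fission source guess
for _ in range(200):                 # power iteration
    s_new = F @ s
    k = s_new.sum() / s.sum()        # eigenvalue (k-eff) estimate
    s = s_new / s_new.sum()          # renormalize the fission density

print(k, s)                          # k-eff and converged source shape
```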
- Determination of critical parameters in the analysis of road tunnel fires. Haghighat, Alireza; Luxbacher, Kramer Davis (Elsevier, 2018-07-12). The analysis of the fluid characteristics downstream of a fire source in transportation tunnels is one of the most important factors in emergency response, evacuation, and rescue service studies. Several crucial parameters can affect the fluid characteristics downstream of the fire. This research develops a statistical analysis of computational fluid dynamics (CFD) data from road tunnel fire simulations in order to quantify the significance of tunnel dimensions, inlet air velocity, heat release rate, and the physical fire size (fire perimeter) on the fluid characteristics downstream of the fire source. The selected characteristics of the fluid (response variables) were the average temperature, the average density, the average viscosity, and the average velocity. The predictions of the designed statistical models were assessed; then the significant parameters' effects and the parameters' interactive effects on different response variables were determined individually. Next, the effect of computational domain length on the selection of the significant parameters downstream of the fire source was analyzed. In this statistical analysis, the linear models were found to provide statistically good predictions. The effects of the fire perimeter and of the parameters' interactive effects on the selected response variables downstream of the fire were found to be insignificant.
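For readers unfamiliar with this kind of factorial significance analysis, the sketch below fits a linear model with an interaction term and inspects p-values. The data are synthetic placeholders, and statsmodels is simply one convenient tool, not necessarily what the authors used.

```python
# Hedged illustration of factorial regression with an interaction term:
# fit an ordinary least squares model and inspect p-values for significance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 80
velocity = rng.uniform(1.0, 6.0, n)        # inlet air velocity (m/s), synthetic
hrr = rng.uniform(5.0, 30.0, n)            # heat release rate (MW), synthetic
temp = 20 + 3.0 * hrr - 4.0 * velocity + rng.normal(0, 2.0, n)  # response

X = sm.add_constant(np.column_stack([velocity, hrr, velocity * hrr]))
fit = sm.OLS(temp, X).fit()
print(fit.pvalues)  # the interaction term's p-value is typically large here,
                    # mirroring the paper's finding that interactive effects
                    # were insignificant
```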
- Development and benchmarking of advanced FM-based particle transport algorithms for steady-state and transient conditions, implementation in RAPID and its VRS web-application. Mascolino, Valerio (Virginia Tech, 2021-06-14). There is a significant need for 3-D steady-state and transient neutron transport formulations and codes that yield accurate, high-fidelity solutions with reasonable computing resources and time. These tools are essential for modeling innovative nuclear systems, such as next-generation reactor designs. Existing methods generally compromise heavily between accuracy and affordability in terms of computation times. In this dissertation, novel algorithms for the simulation of reactor transient conditions have been developed and implemented in the RAPID code system. In addition, extensive computational verification and experimental validation of RAPID's steady-state and transient algorithms was performed, and a novel virtual reality system (VRS) web-application was developed for the RAPID code system. The new algorithms, collectively referred to as tRAPID, are based on the Transient Fission Matrix (TFM) methodology. By decoupling the kinetic neutron transport problem into two stages (an accurate pre-calculation to generate a database, and an on-line solution of linear partial differential equations), the method preserves the highest level of accuracy while allowing high-fidelity modeling and simulation of nuclear reactor kinetics in a short time with minimal computing resources. The tRAPID algorithms have been computationally verified using several computational benchmarks and experimentally validated using the JSI TRIGA Mark-II reactor. In developing these algorithms, the steady-state capabilities of RAPID were first successfully benchmarked against the GBC-32 spent fuel cask system, also highlighting issues with standard eigenvalue Monte Carlo calculations that our code is capable of overcoming. A novel methodology for accounting for the movement of control rods in the JSI TRIGA reactor was developed. This methodology, referred to as FM-CRd, is capable of determining the neutron flux distribution changes due to the presence of control rods in real time, and has been successfully validated using the JSI TRIGA reactor. The time-dependent, kinetic capabilities of the new tRAPID algorithm have been implemented based on the TFM method, and tRAPID has been verified and validated using the Flattop-Pu benchmark, as well as reference calculations and measurements with the JSI TRIGA reactor. In addition to the main tRAPID algorithm development and benchmarking efforts, a new web-application for the RAPID code system, for input preparation and interactive output visualization, was developed. VRS-RAPID greatly enhances the usability, intuitiveness, and outreach possibilities of the RAPID Code System.
- Development of a Methodology for Interface Boundary Selection in the Multiscale Road Tunnel Fire Simulations. Haghighat, Alireza; Luxbacher, Kramer Davis; Lattimer, Brian Y. (2018-07). The simulation of large complex dynamical systems such as a fire in a road tunnel is necessary but costly. Therefore, there is a crucial need to design efficient models. Coupling computational fluid dynamics (CFD) models with 1D network modeling simulations of a fire event, a multiscale method, can be a useful tool to increase computational efficiency while the accuracy of the simulations is maintained. The boundary between a CFD model (near field) and a 1D model (far field) plays a key role in the accuracy of simulations of large systems. The research presented in this paper develops a novel methodology to select the interface boundary between the 3D CFD model and the 1D model in multiscale simulations of vehicle fire events in a tunnel. The methodology is based on the physics of the fluid structure, the turbulent kinetic energy of the dynamical system, and the vortex dynamics. The methodology was applied to a tunnel with a 73.73 m² cross section and 960 m length. Three different vehicle fire scenarios were investigated based on two different heat release rates (10 MW and 30 MW) and two different inlet velocities (1.5 m/s and 5 m/s). All parameters upstream and downstream of the fire source in all scenarios were investigated at t = 900 s. The effect of changes in heat release rate (HRR) and air velocity on the selection of an interface boundary was investigated. The ratio between maximum longitudinal and transversal velocities was within a range of 10 to 20 in the quasi-1D region downstream of the fire source. The selected downstream interface boundary was 12 D_h downstream of the fire for the simulations. The upstream interface boundary was selected at 0.5 D_h upstream of the tip of the object when the velocity was greater than or equal to the critical velocity V_c. In the simulations with backlayering (V < V_c), the interface boundary was selected 10 m beyond the tip of the backlayering (1.2 D_h). An indirect coupling strategy was utilized to couple CFD models to 1D models at the selected interface boundary; the coupled model results were then compared to the full CFD model results. The calculated errors between the CFD and coupled models for mean temperature and velocity at different cross sections were less than 5%. The findings were used to recommend a modification to the selection of the interface boundary in multiscale fire simulations in road tunnels and in more complex geometries such as mines.
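The stated selection rules translate almost directly into code. A minimal helper is sketched below; the hydraulic diameter value is only a rough estimate assuming a near-square section, and the backlayering tip location must come from the CFD solution, so it is a user-supplied input here.

```python
# Direct transcription of the boundary-selection rules quoted above;
# d_h is the hydraulic diameter (m) and v_c the critical velocity (m/s).

def interface_boundaries(d_h, v, v_c, backlayer_tip_upstream_m=0.0):
    downstream = 12.0 * d_h                # 12 D_h past the fire source
    if v >= v_c:
        upstream = 0.5 * d_h               # 0.5 D_h upstream of the object tip
    else:
        # with backlayering, go 10 m (about 1.2 D_h for this tunnel)
        # beyond the backlayering tip, which the CFD solution provides
        upstream = backlayer_tip_upstream_m + 10.0
    return upstream, downstream

# Tunnel from the paper: 73.73 m^2 cross section; assuming a roughly square
# section gives D_h on the order of 8.6 m.
print(interface_boundaries(d_h=8.6, v=5.0, v_c=2.5))
```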
- Development of a Novel Detector Response Formulation and Algorithm in RAPID and its Benchmarking. Wang, Meng Jen (Virginia Tech, 2019-10-24). Solving radiation shielding problems, i.e., deep penetration problems, is a challenging task in the field of nuclear engineering from both computation time and resource aspects. This is mainly because of the complexity of the governing equation for neutral particle transport, the linear Boltzmann equation (LBE), which includes seven independent variables along with integral and differential operators. Moreover, the low success rate of radiation shielding simulations (few source particles contribute to the detector tally) adds to the challenge. In this dissertation, the Detector Response Function (DRF) methodology is proposed and developed for real-time, accurate radiation shielding calculation. The real-time capability of solving radiation shielding problems is very important for: (1) safety and monitoring of nuclear systems; (2) nuclear non-proliferation; and (3) sensitivity studies and uncertainty quantification. Traditionally, the difficulties of solving radiation shielding problems are: (1) very long computation times using the Monte Carlo method; (2) extremely large memory requirements for deterministic methods; and (3) re-calculations using hybrid methods. Among these, the hybrid method, typically Monte Carlo + deterministic, is capable of solving radiation shielding problems more efficiently than either Monte Carlo or deterministic methods alone. However, none of the aforementioned methods is capable of performing real-time radiation shielding calculation. A literature survey reveals a number of investigations into improving or developing efficient methods for radiation shielding calculation. These methods can be categorized as: (1) using variance reduction techniques to improve the success rate of the Monte Carlo method; and (2) developing numerical techniques to improve the convergence rate and avoid unphysical behavior in deterministic methods. These methods are clever and useful for the radiation transport community; however, real-time radiation shielding calculation capability is still missing, even though such advanced methods accelerate the calculations significantly. In addition, very few methods are physics-based. For example, the mean free path of a neutron is typically orders of magnitude smaller than a nuclear system, e.g., a nuclear reactor. Each individual neutron will not travel far before its history is terminated; this is the "loosely coupled" nature of nuclear systems. In principle, a radiation shielding problem can therefore be decomposed into pieces and solved more efficiently. In the DRF methodology, the DRF coefficients are pre-calculated with dependence on several parameters. These coefficients can be directly coupled with radiation sources calculated by another code system, e.g., the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system. With this arrangement, detector/dosimeter response can be calculated on the fly. Thus far, the DRF methodology has been incorporated into the RAPID code system and applied to four different benchmark problems: (1) the GBC-32 spent nuclear fuel (SNF) cask flooded with water, with a ³He detector placed on the cask surface; (2) the VENUS-3 experimental reactor pressure vessel (RPV) neutron fluence benchmark problem; (3) RPV dosimetry using the Three Mile Island Unit 1 (TMI-1) commercial reactor; and (4) a dry-storage SNF cask external dosimetry problem.
The results show that dosimeter/detector response or dose value calculations using the DRF methodology are all within the 2σ relative statistical uncertainties of MCNP5 + CADIS (Consistent Adjoint Driven Importance Sampling) standard fixed-source calculations. The DRF methodology requires only on the order of seconds for these calculations using one processor, provided the DRF coefficients are appropriately prepared, and the coefficients can be reused without re-calculation when the model configuration changes. In contrast, the standard MCNP5 calculations typically require more than an hour using 8 processors, even with the CADIS methodology. The DRF methodology thus enables real-time radiation shielding calculation, from which the radiation transport community can benefit greatly. Users can easily apply the DRF methodology to perform parametric studies, sensitivity studies, and uncertainty quantification, and it can be applied to various radiation shielding problems, such as nuclear system monitoring and medical radiation facilities. The appropriate DRF procedure and the parameters on which the DRF coefficients depend are discussed in detail in this dissertation.
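At run time, the DRF idea reduces to a dot product: with precomputed coefficients DRF_i for each source mesh cell i, the detector response for any source distribution S is R = Σ_i DRF_i · S_i. The numbers below are placeholders, but the structure follows directly from the description above.

```python
# The on-the-fly DRF evaluation: detector response as a dot product between
# precomputed response-function coefficients and the current source vector.
import numpy as np

drf = np.array([3.2e-9, 1.1e-9, 4.0e-10, 1.5e-10])  # precomputed coefficients
source = np.array([1.0e9, 8.0e8, 6.0e8, 4.0e8])     # source strength per cell

response = drf @ source   # detector response, evaluated in microseconds
print(response)

# Changing the source (e.g., a different burnup/cooling state) only changes
# the source vector; the expensive transport work embodied in the DRF
# coefficients is reused unchanged.
```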
- Development of a Novel Fuel Burnup Methodology and Algorithm in RAPID and its Benchmarking and Automation. Roskoff, Nathan (Virginia Tech, 2018-08-02). Fuel burnup calculations provide material concentrations and intrinsic neutron and gamma source strengths as a function of irradiation and cooling time. Detailed, full-core 3D burnup calculations are critical for nuclear fuel management studies, including core design and spent fuel storage safety and safeguards analysis. For core design, specifically during refueling, full-core pin-wise, axially-dependent burnup distributions are necessary to determine assembly positioning so as to efficiently utilize fuel resources. In spent fuel storage criticality safety analysis, detailed burnup distributions enable best-estimate analysis, which allows for more effective utilization of storage space. Additionally, detailed knowledge of neutron and gamma source distributions provides the ability to ensure nuclear material safeguards. The need for accurate and efficient burnup calculations has become more urgent for the simulation of advanced reactors and the monitoring and safeguarding of spent fuel pools. To this end, the Virginia Tech Transport Theory Group (VT3G) has been working on advanced computational tools for accurate modeling and simulation of nuclear systems in real time. These tools are based on the Multi-stage Response-function Transport (MRT) methodology. For monitoring and safety evaluation of spent fuel pools and casks, the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system has been developed. This dissertation presents a novel methodology and algorithm for performing 3D fuel burnup calculations, referred to as bRAPID (Burnup with RAPID). bRAPID utilizes the existing RAPID code system, with its accurate calculation of 3D fission source distributions, as the transport tool driving the 3D burnup calculation. bRAPID is capable of accurately and efficiently calculating assembly-wise, axially-dependent fission source and burnup distributions, and irradiated-fuel properties including material compositions, neutron source, gamma source, spontaneous fission source, and activities. bRAPID performs 3D burnup calculations in a fraction of the time required by state-of-the-art methodologies because it utilizes a pre-calculated database of response functions. The bRAPID database pre-calculation procedure, and its automation, is presented. The existing RAPID code is then benchmarked against the MCNP and Serpent Monte Carlo codes for a spent fuel pool and the U.S. Naval Academy Subcritical Reactor facility. RAPID is shown to accurately calculate eigenvalue, subcritical multiplication, and 3D fission source distributions. Finally, bRAPID is compared to traditional, state-of-the-art Serpent Monte Carlo burnup calculations and its performance is evaluated. It is important to note that the automated pre-calculation procedure is required for evaluating the performance of bRAPID. Additionally, benchmarking of the RAPID code is necessary to understand RAPID's ability to solve problems with variable burnup distributions and to assess its accuracy.
- Development of a Software Platform with Distributed Learning Algorithms for Building Energy Efficiency and Demand Response Applications. Saha, Avijit (Virginia Tech, 2017-01-24). In the United States, over 40% of the country's total energy consumption occurs in buildings, most of which are either small (<5,000 sqft) or medium-sized (5,000-50,000 sqft). These buildings offer excellent opportunities for energy saving and demand response (DR), but these opportunities are rarely utilized due to the lack of effective building energy management systems and automated algorithms that can assist a building in participating in a DR program. Considering the low load factor in the US and many other countries, DR can serve as an effective tool to reduce peak demand through demand-side load curtailment. A convenient option for customers to benefit from a DR program is to use automated DR algorithms within software that can learn user comfort preferences for building loads and make automated load curtailment decisions without affecting customer comfort. The objective of this dissertation is to provide such a solution. First, this dissertation contributes to the development of key features of an open-source building energy management software platform that enable ease of use through plug-and-play and interoperability of devices in a building, cost-effectiveness through deployment on a low-cost computer, and DR through communication infrastructure between building and utility and among multiple buildings, while ensuring the security of the platform. Second, a set of reinforcement learning (RL) based algorithms is proposed for the three main types of loads in a building: heating, ventilation, and air conditioning (HVAC) loads; lighting loads; and plug loads. In the absence of a DR program, these distributed agent-based learning algorithms are designed to learn the user comfort ranges through explorative interaction with the environment and accumulated user feedback, and then operate through policies that favor maximum user benefit in terms of saving energy while ensuring comfort. Third, two sets of DR algorithms are proposed for an incentive-based DR program in a building. A user-defined, priority-based DR algorithm with smart thermostat control and utilization of distributed energy resources (DER) is proposed for residential buildings. For commercial buildings, a learning-based algorithm is proposed that builds on the RL algorithms, using a pre-cooling/pre-heating load reduction method for HVAC loads and a mixed integer linear programming (MILP) based optimization method for other loads to dynamically maintain total building demand below a demand limit set by the utility during a DR event, while minimizing total user discomfort. A user-defined, priority-based DR algorithm is also proposed for multiple buildings in a community so that they can participate in realizing combined DR objectives. The software solution proposed in this dissertation is expected to encourage increased participation of small and medium-sized buildings in demand response and energy saving activities. This will help alleviate power system stress conditions by employing the untapped DR potential in such buildings.
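As a hedged sketch of the kind of learning such distributed load agents could use, the snippet below runs generic tabular Q-learning on a discretized thermostat problem. The states, actions, and comfort-based reward are invented for illustration and are not the dissertation's exact formulation.

```python
# Generic tabular Q-learning for a thermostat agent (illustrative only).
import random

states  = range(18, 28)        # indoor temperature (deg C), discretized
actions = [-1, 0, +1]          # lower / hold / raise the setpoint
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, eps = 0.1, 0.95, 0.1

def reward(temp):
    # penalize discomfort outside an assumed 21-24 C comfort band,
    # plus a small constant energy cost per step
    comfort = 0.0 if 21 <= temp <= 24 else -abs(temp - 22.5)
    return comfort - 0.05

temp = 26
for step in range(10000):
    # epsilon-greedy action selection: explore sometimes, else exploit
    a = random.choice(actions) if random.random() < eps else \
        max(actions, key=lambda x: Q[(temp, x)])
    nxt = min(max(temp + a, 18), 27)            # simplistic room dynamics
    best_next = max(Q[(nxt, b)] for b in actions)
    Q[(temp, a)] += alpha * (reward(nxt) + gamma * best_next - Q[(temp, a)])
    temp = nxt
```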
- Development of High-Speed Camera Techniques for Droplet Measurement in Annular Flows. Cohn, Ayden Seth (Virginia Tech, 2024-06-03). This research addresses the critical need for precise two-phase flow data in the development of computer simulation models, with a specific focus on droplet behavior in the annular flow regime. The study aims to contribute to the evaluation of safety and efficiency in nuclear reactors that handle fluids transitioning between liquid and gas states for thermal energy transport. Central to the investigation is the collection and analysis of droplet size and velocity distribution data, particularly to help develop models for water-cooled nuclear power plants. The experimental setup employs advanced tools, including a high-speed camera, lens, teleconverter, and a selected light source, to capture high-resolution images of droplets. Calibration procedures, incorporating depth-of-field testing, are implemented to ensure accurate droplet size measurements. A critical component of the research is the introduction of a droplet identification program, developed using Matlab, which facilitates efficient processing of experimental data. Preliminary results from the Virginia Tech test facility demonstrate the system's capability to eliminate out-of-focus droplets and obtain precise droplet data in a reasonable amount of time. Experimental results from the Rensselaer Polytechnic Institute test facility provide droplet size and velocity distributions for a variety of annular flow conditions. This facility has a concurrent two-phase flow system that pumps air and water at different rates through a 9.525 mm inner-diameter tube. The conditions tested include gas superficial velocities ranging from 22 to 40 m/s and liquid superficial velocities ranging from 0.09 to 0.44 m/s. The measured flow has a temperature of 21°C and a pressure of 1 atm.
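The droplet identification program described above is written in Matlab; the following is an analogous, hypothetical pipeline in Python to illustrate the typical steps of thresholding a backlit frame, labeling connected components, and filtering by size. The file name and size cutoff are placeholders.

```python
# Hypothetical droplet-detection pipeline (not the thesis code): threshold,
# label connected components, and keep only droplets above a size cutoff.
from skimage import io, filters, measure

img = io.imread("frame_0001.png", as_gray=True)   # placeholder file name
binary = img < filters.threshold_otsu(img)        # droplets appear dark on a
                                                  # bright backlit background
labels = measure.label(binary)
for region in measure.regionprops(labels):
    d_px = region.equivalent_diameter
    if d_px < 5:          # reject noise and blobs below a calibrated
        continue          # depth-of-field size cutoff (placeholder value)
    y, x = region.centroid
    print(f"droplet at ({x:.1f}, {y:.1f}) px, diameter {d_px:.1f} px")
```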
- Development of the Adaptive Collision Source Method for Discrete Ordinates Radiation Transport. Walters, William Jonathan (Virginia Tech, 2015-05-08). A novel collision source method has been developed to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained, with potentially a different quadrature order used for each. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This method allows for an optimal use of processing power, by using a high order quadrature for the first few iterations that need it, before shifting to lower order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and is referred to as the adaptive collision source (ACS) method. The ACS methodology has been implemented in the 3-D, parallel, multigroup discrete ordinates code TITAN. This code was tested on a variety of test problems including fixed-source and eigenvalue problems. The ACS implementation in TITAN has shown a reduction in computation time by a factor of 1.5-4 on the fixed-source test problems, for the same desired level of accuracy, as compared to the standard TITAN code.
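A schematic of the ACS iteration may help: each collision component is solved with its own quadrature order, starting high and decreasing, and the components are summed. The transport sweep and scattering operator below are trivial stubs standing in for TITAN's actual S_N solver, so only the control flow is meaningful.

```python
# Schematic of the adaptive collision source (ACS) loop: per-iteration fluxes
# are computed with decreasing quadrature orders and accumulated.
import numpy as np

def transport_sweep(source, quadrature_order):
    # Stub: a real sweep would invert the streaming-collision operator for
    # this source using an S_N quadrature of the given order.
    return 0.5 * source            # pretend half the source becomes flux

def scattering_source(flux):
    return 0.4 * flux              # pretend scattering ratio c = 0.4

orders = [16, 8, 4, 2]             # high order first; later collision
flux_total = np.zeros(10)          # components are more isotropic, so
source = np.ones(10)               # lower orders suffice (per the abstract)

for n in orders:
    flux_n = transport_sweep(source, quadrature_order=n)
    flux_total += flux_n           # accumulate this collision component
    source = scattering_source(flux_n)

print(flux_total[0])
```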
- Differential Dependency Network and Data Integration for Detecting Network Rewiring and Biomarkers. Fu, Yi (Virginia Tech, 2020-01-30). Rapid advances in high-throughput molecular profiling techniques have enabled large-scale genomics, transcriptomics, and proteomics-based biomedical studies, generating an enormous amount of multi-omics data. Processing and summarizing multi-omics data, modeling interactions among biomolecules, and detecting condition-specific dysregulation using multi-omics data are some of the most important yet challenging analytics tasks. In detecting somatic DNA copy number aberrations using bulk tumor samples in cancer research, normal cell contamination is a significant confounding factor that weakens detection power regardless of which method is used. To address this problem, we propose a computational approach, BACOM 2.0, to more accurately estimate the normal cell fraction and accordingly reconstruct DNA copy number signals in cancer cells. Specifically, by introducing allele-specific absolute normalization, BACOM 2.0 can accurately detect deletion types and aneuploidy in cancer cells directly from DNA copy number data. Genes work through complex networks to support cellular processes. Dysregulated genes can cause structural changes in biological networks, also known as network rewiring. Genes with a large number of rewired edges are more likely to be associated with functional alterations leading to phenotype transitions, and hence are potential biomarkers in diseases such as cancers. The differential dependency network (DDN) method was proposed to detect such network rewiring and biomarkers. However, the existing DDN method and software tool have two major drawbacks. First, with imbalanced sample groups, DDN suffers from systematic bias and produces false positive differential dependencies. Second, the computational time of the block coordinate descent algorithm in DDN increases rapidly with the number of involved samples and molecular entities. To address the imbalanced sample group problem, we propose a sample-scale-wide normalized formulation to correct the systematic bias and design a simulation study to test its performance. To address the high computational complexity, we propose several strategies to accelerate DDN learning: two reformulated algorithms for block-wise coefficient updating in the DDN optimization problem, a strategy for discarding predictors, and a strategy for accelerating parallel computing. Importantly, experimental results show that the new DDN learning speed with the combined accelerating strategies is hundreds of times faster than that of the original method on medium-sized data. We applied the DDN method to several biomedical omics datasets and detected significant phenotype-specific network rewiring. With a random-graph-based detection strategy, we discovered hub-node-defined biomarkers that helped generate or validate several novel scientific hypotheses in collaborative research projects. For example, the hub genes detected by the DDN method in proteomics data from artery samples are significantly enriched in the citric acid cycle pathway, which plays a critical role in the development of atherosclerosis. To detect intra-omics and inter-omics network rewiring, we propose a method called multiDDN that uses a multi-layer signaling model to integrate multi-omics data. We adapt the block coordinate descent algorithm to solve the multiDDN optimization problem with accelerating strategies. The simulation study shows that, compared with the DDN method on single omics data, the multiDDN method has a considerable advantage in the accuracy of detecting network rewiring. We applied the multiDDN method to real multi-omics data from the CPTAC ovarian cancer dataset and detected multiple hub genes that are associated with histone protein deacetylation and were previously reported in independent ovarian cancer data analyses.
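The block coordinate descent being accelerated here operates on a lasso-type objective. As a generic illustration (not the DDN or multiDDN code itself), the sketch below runs coordinate descent with soft thresholding for min_b ||y - Xb||²/(2n) + λ||b||₁, the per-coefficient update at the heart of such solvers.

```python
# Generic coordinate descent with soft thresholding for the lasso objective.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]     # partial residual excluding j
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X[:, 0] * 2.0 - X[:, 3] * 1.5 + rng.normal(scale=0.1, size=200)
print(np.round(lasso_cd(X, y, lam=0.05), 2))  # sparse estimate: mostly zeros
```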
- Formal Techniques for Design and Development of Safety Critical Embedded Systems from Polychronous Models. Nanjundappa, Mahesh (Virginia Tech, 2015-05-28). Formally-based design and implementation techniques for complex safety-critical embedded systems are required not only to handle the complexity, but also to provide correctness guarantees. Traditional design approaches struggle to cope with this complexity and generally require extensive testing to guarantee correctness. As designs get larger and more complex, traditional approaches face many limitations. An alternative design approach is to adopt a "correct-by-construction" paradigm and synthesize the desired hardware and software from high-level descriptions expressed in one of the many formal modeling languages. Since these languages are equipped with formal semantics, formally-based tools can be employed for various analyses. In this dissertation, we adopt one such formal modeling language: MRICDF (Multi-Rate Instantaneous Channel-connected Data Flow). MRICDF is a graphical, declarative, polychronous modeling language whose formalism allows the modeler to easily describe multi-clocked systems without the need for a global clock. Unnecessary synchronizations among concurrent computation entities can be avoided using a polychronous language such as MRICDF. We have explored Boolean theory-based techniques for synthesizing multi-threaded/concurrent code and extended the technique to improve the performance of the synthesized multi-threaded code. We also explored synthesizing ASIPs (Application Specific Instruction Set Processors) from MRICDF models. Further, we have developed formal techniques to identify constructive causality in polychronous models, as well as SMT (Satisfiability Modulo Theory)-based techniques to identify dimensional inconsistencies and to perform value-range analysis of polychronous models.
- Graph Neural Networks: Techniques and Applications. Chen, Zhiqian (Virginia Tech, 2020-08-25). Effective information analysis generally boils down to the geometry of the data, represented by a graph. Typical applications include social networks, transportation networks, the spread of epidemic disease, the brain's neuronal networks, gene data on biological regulatory networks, telecommunication networks, and knowledge graphs, all of which lie in the non-Euclidean graph domain. To describe the geometric structures, graph matrices such as the adjacency matrix or graph Laplacian can be employed to reveal latent patterns. This thesis focuses on the theoretical analysis of graph neural networks and the development of methods for specific applications using graph representations. Four methods are proposed: rational neural networks for jump graph signal estimation, RemezNet for robust attribute prediction on graphs, ICNet for integrated circuit security, and CNF-Net for dynamic circuit deobfuscation. Regarding the first method, an important recent state-of-the-art approach, the graph convolutional network (GCN), nicely integrates local vertex features and graph topology in the spectral domain. However, current studies suffer from a drawback: graph CNNs rely on Chebyshev polynomial approximation, which results in oscillatory approximation at jump discontinuities, since Chebyshev polynomials require degree Ω(poly(1/ε)) to approximate a jump signal such as |x|. To reduce complexity, RationalNet is proposed, integrating rational functions and neural networks for graph node-level embeddings. For the second method, we address function approximation, where existing approaches suffer from several drawbacks: non-robustness and infeasibility issues; the inability of neural networks to extract analytical representations; and the absence of any reported study integrating the strengths of neural networks and the Remez algorithm. This work proposes a novel neural network model to address these issues. Specifically, our method utilizes the characterizations of Remez to design objective functions. To avoid the infeasibility issue and deal with non-robustness, a set of constraints is imposed, inspired by the equioscillation theorem of best rational approximation. The third method proposes an approach for circuit security. Circuit obfuscation is a recently proposed defense mechanism to protect digital integrated circuits (ICs) from reverse engineering. Estimating the deobfuscation runtime is a challenging task due to the complexity and heterogeneity of graph-structured circuits and the unknown, sophisticated mechanisms of attackers. To address these challenges, this work proposes the first graph-based approach that predicts deobfuscation runtime based on graph neural networks. The fourth method proposes a representation for circuit graphs of dynamic size. By analyzing the SAT attack method, a conjunctive normal form (CNF) bipartite graph is utilized to characterize the complexity of the SAT problem. To overcome the difficulty of capturing the dynamic size of the CNF graph, an energy-based kernel is proposed to aggregate dynamic features.
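For context, the Chebyshev spectral filtering that the thesis critiques computes a filtered signal as Σ_k θ_k T_k(L̂) x, with T_k the Chebyshev polynomials of the rescaled Laplacian. The sketch below shows only this polynomial baseline on a toy graph; RationalNet replaces the polynomial with a rational function.

```python
# Chebyshev graph filtering baseline: y = sum_k theta_k * T_k(L_hat) @ x.
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)      # toy 3-node path graph
D = np.diag(A.sum(axis=1))
L = D - A                                    # combinatorial graph Laplacian
lmax = np.linalg.eigvalsh(L).max()
L_hat = 2.0 * L / lmax - np.eye(3)           # rescale spectrum into [-1, 1]

def cheb_filter(x, theta):
    t_prev, t_curr = x, L_hat @ x            # T_0(L)x = x, T_1(L)x = L_hat x
    out = theta[0] * t_prev + theta[1] * t_curr
    for k in range(2, len(theta)):
        # Chebyshev recurrence: T_k = 2 L_hat T_{k-1} - T_{k-2}
        t_prev, t_curr = t_curr, 2.0 * (L_hat @ t_curr) - t_prev
        out += theta[k] * t_curr
    return out

x = np.array([1.0, 0.0, -1.0])               # a graph signal
print(cheb_filter(x, theta=[0.5, 0.3, 0.2]))
```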
- Implementation and Verification of the Subgroup Decomposition Method in the TITAN 3-D Deterministic Radiation Transport Code. Roskoff, Nathan J. (Virginia Tech, 2014-05-07). The subgroup decomposition method (SDM) has recently been developed as an improvement over the consistent generalized energy condensation theory for treatment of the energy variable in deterministic particle transport problems. By explicitly preserving reaction rates of the fine-group energy structure, the SDM directly couples a consistent coarse-group transport calculation with a set of fixed-source "decomposition sweeps" to provide a fine-group flux spectrum. This paper outlines the implementation of the SDM into the three-dimensional, discrete ordinates (S_N) deterministic transport code TITAN. The new version of TITAN, TITAN-SDM, is tested using 1-D and 2-D benchmark problems based on the Japanese-designed High Temperature Engineering Test Reactor (HTTR). In addition to accuracy, this study examines the efficiency of the SDM algorithm in a 3-D S_N transport code.
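As background for the condensation at the heart of SDM-style methods, the snippet below shows the standard flux-weighted collapse of fine-group cross sections to coarse groups, σ_G = Σ_{g∈G} σ_g φ_g / Σ_{g∈G} φ_g. The decomposition sweeps that recover fine-group detail are not shown, and all numbers are made up.

```python
# Flux-weighted energy condensation: collapse 6 fine groups into 2 coarse.
import numpy as np

sigma_fine = np.array([2.1, 1.8, 1.2, 0.9, 0.7, 0.5])  # fine-group XS (made up)
phi_fine   = np.array([0.1, 0.3, 0.6, 0.8, 0.5, 0.2])  # weighting spectrum
groups = [slice(0, 3), slice(3, 6)]                     # fine-to-coarse mapping

sigma_coarse = [(sigma_fine[g] * phi_fine[g]).sum() / phi_fine[g].sum()
                for g in groups]
print(np.round(sigma_coarse, 3))   # flux-weighted coarse-group cross sections
```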
- Medical Isotope Production of Actinium-225 by Linear Accelerator Photon Irradiation of Radium-226. VanSant, Paul Daniel (Virginia Tech, 2013-06-12). There is a present and future need for the medical isotope actinium-225, which is currently in short supply worldwide; only a couple of manufacturers produce it, in very low quantities. Over roughly the past 10 years, the medical community has explored the use of Ac-225 and its daughter bismuth-213 for targeting a number of different cancers by way of targeted alpha therapy (TAT). This method utilizes the alpha decay of both Ac-225 (half-life 10 days) and Bi-213 (half-life 46 min) to kill cancerous cells on a localized basis. Maximum energy is delivered to the cancer cells, thereby greatly minimizing damage to healthy tissue. This research proposes a production method using a high-energy photon spectrum (generated by a linear accelerator, or LINAC) to irradiate a sample of radium-226 (half-life 1600 years). The photo-neutron reaction liberates neutrons from Ra-226 atoms, leaving behind radium-225 (half-life 14.7 days), which decays naturally through beta emission to Ac-225. Previous research demonstrated that it is possible to produce Ac-225 using a LINAC; however, the very low yields called the feasibility of this production method into question. This research proposes a number of LINAC and radium sample modifications that could greatly increase yields for practical use. Additionally, photo-neutron cross-section data for Ra-226 were used, leading to improved yield calculations for Ra-225. A MATLAB® model was also created, which enables users to perform quick yield estimates given several key model parameter inputs. Obtaining a sufficient supply of radium material is also of critical importance to this research; therefore, information was gathered regarding the availability and inventory of radium-226. This production method would serve not only as a way to eliminate many hazardous radium sources destined for interim storage, but also to provide a substantial supply of Ac-225 for future cancer treatment.
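The Ra-225 to Ac-225 in-growth after irradiation follows the standard two-member Bateman solution, using the half-lives quoted above (14.7 days and 10 days). The initial Ra-225 atom count below is an arbitrary placeholder, not a value from the thesis.

```python
# Two-member Bateman solution for Ac-225 in-growth from Ra-225 decay.
import numpy as np

l1 = np.log(2) / 14.7          # Ra-225 decay constant (1/day)
l2 = np.log(2) / 10.0          # Ac-225 decay constant (1/day)
n1_0 = 1.0e15                  # Ra-225 atoms at end of irradiation (made up)

t = np.linspace(0, 60, 601)    # days of cooling / in-growth
n2 = n1_0 * l1 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))  # Ac-225 atoms

t_peak = t[np.argmax(n2)]
print(f"Ac-225 inventory peaks near day {t_peak:.1f}")  # when to harvest
```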
- Methods for Radioactive Source Localization via Uncrewed Aerial Systems. Adams, Caleb Jeremiah (Virginia Tech, 2024-03-28). Uncrewed aerial systems (UAS) have steadily become more prevalent in both defense and industrial applications. Nuclear detection and deterrence is one field that has given rise to many new opportunities for UAS operations. There is a need to research and develop methods to integrate existing radiation detection technology with UAS capable of flying low-altitude missions. This low-altitude scanning can be achieved by combining small, lightweight radiation detectors with state-of-the-art aircraft and avionics. High-resolution mapping can then be conducted using the results of these scans. Significant work has been conducted in this field by both private industry and academic institutions, including the Uncrewed Systems Lab (USL) at Virginia Tech. This work seeks to expand this body of knowledge and provide practical experimental information to showcase and validate the efficacy of radiation detection via UAS. Multiple missions were conducted using samples of 137Cs and 60Co as radioactive sources. Various filtering methods were applied to the results of these missions to produce visual maps that aid in the localization of an unknown source and to compare various flight parameters. In addition, significant work was conducted to characterize two radiation detectors available to the USL, providing metrics to assist in UAS design and flight planning. Finally, the detectors were taken to Savannah River National Laboratory to conduct experiments that provide information to aid future designs and missions aiming to detect a wider variety of radioactive sources.
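A common localization baseline consistent with the mapping described above is to model the count rate as c = S/r² + b and fit the source position, strength, and background to the airborne measurements by nonlinear least squares. The measurements below are simulated, not from the thesis flights, and the altitude and waypoints are invented.

```python
# Inverse-square least-squares source localization from airborne count rates.
import numpy as np
from scipy.optimize import least_squares

true_src = np.array([12.0, -4.0])                         # ground truth (m)
pts = np.random.default_rng(2).uniform(-20, 20, (50, 2))  # UAS waypoints (m)
r2 = ((pts - true_src) ** 2).sum(axis=1) + 25.0           # + altitude^2 (5 m AGL)
counts = 5.0e4 / r2 + 2.0                                 # inverse-square + background

def residuals(p):
    x, y, strength, bkg = p
    model = strength / (((pts - [x, y]) ** 2).sum(axis=1) + 25.0) + bkg
    return model - counts

fit = least_squares(residuals, x0=[0.0, 0.0, 1.0e4, 1.0])
print(fit.x[:2])   # estimated source position, close to (12, -4)
```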