Browsing by Author "Liu, Yang"
Now showing 1 - 20 of 56
- Application of Defocusing Technique to Bubble Depth Measurement. Mugikura, Yuki (Virginia Tech, 2017). This thesis presents a defocusing technique for extracting bubble depth information. Typically, when a bubble is out of focus in an image, it is discarded by filtering or thresholding. However, a bubble image becomes increasingly blurred as the bubble moves away from the focal plane, so the blurriness, quantified by the intensity gradient at the bubble edge, can be used to determine the bubble's distance along the optical path. Using an image processing algorithm, images captured in three different experiments are analyzed to develop a correlation between bubble distance and intensity gradient. Models to predict the bubble depth are developed from, and evaluated against, the measurement data. The models predict the distance more accurately when the intensity gradient is low, i.e., when the bubble is far from the focal plane, but show larger absolute and relative errors near the focal plane; improving prediction in that region will require a different model. A depth-of-field analysis is also introduced to compare the three experimental results obtained with different imaging setups, and the applicability of the approach is analyzed and evaluated.
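The core idea above, mapping edge blurriness to distance from the focal plane, can be sketched in a few lines. This is a minimal illustration rather than the thesis's actual algorithm: the power-law correlation form and the constants `a` and `b` are hypothetical stand-ins for the calibrated model.

```python
import numpy as np

def edge_gradient(profile):
    """Maximum absolute intensity gradient across a bubble edge profile."""
    return np.max(np.abs(np.gradient(profile.astype(float))))

def depth_from_gradient(g, a=2.0, b=-0.5):
    """Hypothetical calibrated correlation z = a * g**b: blurrier edges
    (smaller gradient g) map to larger distances z from the focal plane."""
    return a * g ** b
```

A sharp edge yields a large gradient and hence a small inferred depth; a defocused (smoothly ramped) edge yields the opposite, which is the qualitative behavior the correlation encodes.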
- Application of Optical Fiber Sensors for Quenching Temperature Measurement. Hurley, Paul Raymond (Virginia Tech, 2020-06-17). The critical heat flux (CHF) point of a reactor core system is one of the most important factors in reactor safety. If this point is reached, standard coolant systems cannot handle the temperature increase in the cladding, and the likelihood of meltdown greatly increases. While the nucleate and film boiling regimes have been well investigated, the transition boiling regime between the point of departure from nucleate boiling (DNB) and the minimum film boiling temperature (Tmin) remains difficult to study. This is due both to the complexity of the phenomena and to measurement limitations, since experiments typically rely on thermocouples for temperature data acquisition. Advances in fiber optics now make it possible to measure the quenching temperature with much higher precision. Optical fiber sensors can take many more measurements along a fuel simulator length than thermocouples, which are restricted to discrete points; a fiber acts as a nearly continuous sensor, resolving temperature at sub-millimeter spacing where a thermocouple measures at a single point. This thesis discusses the results of a series of quenching experiments performed on stainless steel, Monel K500, and Inconel 600 rods at atmospheric pressure with different subcooling levels and surface roughnesses. Rewetting temperature measurements on a 30 cm rod are used to compare thermocouples against optical fiber sensors, and the results are discussed with regard to future application in two-phase flow experiments.
- Augmentation of Jet Impingement Heat Transfer on a Grooved Surface Under Wet and Dry Conditions. Alsaiari, Abdulmohsen Omar (Virginia Tech, 2018-11-27). Array jet impingement cooling experiments were performed on flat and grooved surfaces held at constant temperature. For the flat surface, power and temperature measurements were used to obtain convection coefficients over a wide range of operating conditions, including jet speed, orifice-to-surface stand-off distance, and open area percentage. Cooling performance (CP) was calculated as the ratio of heat transfer to fan power. An empirical model was developed to predict jet impingement heat transfer while accounting for entrainment effects, an effect rarely considered in the literature, and this improved prediction accuracy. The experiments showed that jet impingement can provide high transfer rates at lower cooling cost than conventional techniques: CP values over 279 were measured, significantly higher than the 70 to 95 typical of current technology. Experiments on the grooved surfaces were performed under dry and wet conditions. Under dry conditions, heat transfer improved by 10%~55% relative to the flat surface. The improvement tends to be larger at wider gaps between the orifice array and the grooved surface. An improvement of 30%~40% was observed when increasing Re, whether by increasing orifice diameter or jet speed, and a similar improvement was observed at higher open area percentages. Decreasing the groove size from 3.56 mm to 2.54 mm produced no significant improvement, and moving the jet impingement location from the tops of the grooves to the bottoms produced no noticeable change.
Deeper grooves with twice the depth gave statistically similar average heat transfer coefficients to shallower grooves. Under wet conditions, a hybrid cooling approach was proposed: air jets impinging on a grooved surface whose grooves contain water. The approach is evaluated experimentally for its feasibility as an alternative to the cooling towers of thermoelectric power plants. Convection heat and mass transfer coefficients were measured using the heat-mass transfer analogy. Hybrid jet impingement provided high heat fluxes at low jet speeds and flow rates: coefficients of performance CP > 3000 and heat fluxes > 8,000 W/m2 were observed, a 500% improvement over jet impingement on a dry flat surface, and CP values 600% to 1,500% higher than those of air-cooled condensers and wet cooling towers. Water use in hybrid jet impingement cooling is efficient because the evaporation energy is absorbed directly from the surface instead of cooling air to near the wet-bulb temperature.
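The two figures of merit used throughout this abstract reduce to simple ratios; the sketch below spells them out (the function names are mine, not the author's, and CP is defined here exactly as in the abstract: heat transfer rate over fan power).

```python
def convection_coefficient(q, area, t_surface, t_jet):
    """Average convection coefficient h = q / (A * dT) from measured
    heater power q [W], surface area A [m^2], and temperatures [K or C]."""
    return q / (area * (t_surface - t_jet))

def cooling_performance(q, fan_power):
    """Cooling performance CP: heat transfer rate per unit fan power."""
    return q / fan_power
```

For example, removing 279 W of heat per watt of fan power corresponds to the CP > 279 values reported above.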
- Cellular Composition and Differentiation Signaling in Chicken Small Intestinal Epithelium. Zhang, Haihan; Li, Dongfeng; Liu, Lingbin; Xu, Ling; Zhu, Mo; He, Xi; Liu, Yang (MDPI, 2019-10-27). The small intestine plays an important role in digesting and absorbing nutrients. The epithelial lining of the intestine develops from the embryonic endoderm. The mature intestinal epithelium is composed of different types of functional epithelial cells derived from stem cells, which are located in the crypts. Chickens have been widely used as an animal model for researching vertebrate embryonic development; however, little is known about the molecular basis of development and differentiation within the chicken small intestinal epithelium. This review introduces the processes of development and growth in the chicken gut and compares the cellular characteristics and signaling pathways between chicken and mammals, including the Notch and Wnt signaling that controls differentiation in the small intestinal epithelium. There is evidence that the chicken intestinal epithelium has a distinct cellular architecture and proliferation zone compared to mammals. The establishment of an in vitro cell culture model for chickens will provide a novel tool to explore the molecular regulation of chicken intestinal development and differentiation.
- Collaborative efforts to forecast seasonal influenza in the United States, 2015–2016. McGowan, Craig J.; Biggerstaff, Matthew; Johansson, Michael; Apfeldorf, Karyn M.; Ben-Nun, Michal; Brooks, Logan; Convertino, Matteo; Erraguntla, Madhav; Farrow, David C.; Freeze, John; Ghosh, Saurav; Hyun, Sangwon; Kandula, Sasikiran; Lega, Joceline; Liu, Yang; Michaud, Nicholas; Morita, Haruka; Niemi, Jarad; Ramakrishnan, Naren; Ray, Evan L.; Reich, Nicholas G.; Riley, Pete; Shaman, Jeffrey; Tibshirani, Ryan; Vespignani, Alessandro; Zhang, Qian; Reed, Carrie; Rosenfeld, Roni; Ulloa, Nehemias; Will, Katie; Turtle, James; Bacon, David; Riley, Steven; Yang, Wan; The Influenza Forecasting Working Group (Nature Publishing Group, 2019-01-24). Since 2013, the Centers for Disease Control and Prevention (CDC) has hosted an annual influenza season forecasting challenge. The 2015–2016 challenge received fourteen models from eleven teams, each providing weekly probabilistic forecasts of multiple targets. Forecast skill was evaluated using a modified logarithmic score. We averaged the submitted forecasts into a mean ensemble model and compared them against predictions based on historical trends. Forecast skill was highest for seasonal peak intensity and short-term forecasts, while forecast skill for the timing of season onset and peak week was generally low. Higher forecast skill was associated with participation in previous influenza forecasting challenges and with the use of ensemble forecasting techniques. The mean ensemble consistently performed well and outperformed historical trend predictions. CDC and the contributing teams will continue to advance influenza forecasting and work to improve the accuracy and reliability of forecasts to facilitate their incorporation into public health response efforts.
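The scoring and ensembling described above can be sketched as follows. This is a hedged approximation of the CDC's modified logarithmic score, in which a forecast earns the log of the probability mass placed on the true bin and a small window of neighboring bins; the exact window width and bin structure vary by forecast target.

```python
import numpy as np

def mean_ensemble(forecasts):
    """Average bin probabilities across models (rows: models, cols: bins)."""
    return np.asarray(forecasts).mean(axis=0)

def log_score(probs, true_bin, window=0):
    """Modified logarithmic score: log of the probability mass assigned
    to the true bin and its +/- `window` neighbouring bins."""
    lo = max(0, true_bin - window)
    hi = min(len(probs), true_bin + window + 1)
    return float(np.log(np.sum(probs[lo:hi])))
```

Averaging probabilities bin-by-bin, as `mean_ensemble` does, is the simplest way to build the mean ensemble model the abstract refers to.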
- Data Analysis of an Unsteady Cavitating Flow on a Venturi-type Profile. Nemati Kourabbasloo, Navid (Virginia Tech, 2021-12-01). The instability modes and non-linear behavior of a cavitating flow are studied using experimental data from planar Particle Image Velocimetry (PIV). Three data-driven techniques, Proper Orthogonal Decomposition (POD), Dynamic Mode Decomposition (DMD), and Cluster-based Reduced-Order Modeling (CROM), are applied to snapshots of the fluctuating velocity to investigate the instability modes of the cavitating flow. For a fixed inlet flow condition, DMD and POD yield multiple modes corresponding to slowly varying drift flow, cloud shedding, and Kelvin-Helmholtz (KH) instability. The high coherence measured among the instabilities suggests a transfer of energy from the largest scales, the fluctuating mean flow, to smaller scales such as cloud cavitation and the KH instability. The POD decorrelation of length scales is shown to yield inherently quasi-periodic time dynamics, e.g., incommensurate frequencies. Moreover, the eigenvalues obtained from DMD reveal multiple harmonics with different decay rates associated with the cloud cavitation. The intermittent transition between distinct cloud-shedding regimes is investigated via CROM: four aperiodic shedding regimes are identified, with triplets of vortices forming 68% of the time and vortex pairs forming 28% of the time in the near wake of the throat. The dominant mechanisms governing momentum transport and the production, destruction, and redistribution of turbulence kinetic energy in distinct regions of the flow field are identified using Gaussian Mixture Models (GMMs). These data-driven techniques and in-depth analysis of the results facilitated modeling of cavitation inception and break-up.
Accordingly, a phase-transition field model is developed using ultra-fast Time-Resolved PIV (TR-PIV) and spatial and temporal vapor void fraction data acquired in a Venturi-type test section. The approximate Reynolds number based on the throat height is 10,000, and the approximate cavitation number is 1.95. The non-equilibrium cavitation model assumes that phase production and destruction are correlated with the static pressure field, its spatial derivatives, the void fraction, and the divergence of the velocity field. Finally, the dependence of the model on its empirical constants is investigated.
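Of the three decompositions named above, POD is the simplest to illustrate: it is an SVD of the mean-subtracted snapshot matrix, with the singular values squared giving the modal energies. A minimal sketch, not the dissertation's implementation:

```python
import numpy as np

def pod(snapshots, r):
    """POD of a snapshot matrix (rows: grid points, cols: time snapshots).
    Returns the r leading spatial modes, their energies (squared singular
    values), and the corresponding temporal coefficients."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)  # fluctuating part
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r], s[:r] ** 2, Vt[:r]
```

On real PIV data the energy spectrum decays gradually; the sharp rank-1 case below is only a sanity check of the decomposition.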
- Density-Wave Instability Characterization in Boiling Water Reactors under MELLLA+ Domain during ATWS. Hurley, Paul Raymond (Virginia Tech, 2023-07-09). Density wave oscillations (DWO) are a class of two-phase flow instabilities that can pose significant safety concerns for boiling water reactors (BWR). During an anticipated transient without scram (ATWS) while operating in the proposed extended operating domain MELLLA+, natural circulation conditions can lead to DWO-type instabilities capable of developing into cycles of fuel surface dryout and rewet, damaging core integrity. To provide data on these phenomena, a series of tests was performed at the KATHY facility in which DWO was developed with and without simulated neutronic feedback. In this dissertation, the test data are analyzed to determine the onset conditions for DWO. Several models, chosen to span a suitably large range of prediction methodologies, are then assessed for their capability to predict this stability boundary against the experimental results. Two analytical drift-flux models, developed with and without thermal equilibrium, are presented and their differences compared. A computational model of the full KATHY natural circulation loop is built using the 1D thermal-hydraulics code TRACE and extended with a point-kinetics model of neutronic feedback for comparison with experiment. With both the analytical models and the TRACE model, parametric studies show the effects of inlet/outlet flow restrictions, pressure, channel geometry, and axial power profile on the stability boundary. Finally, two machine learning neural-network models are developed and trained on various subsets of the experimental data. Each model shows benefits and drawbacks related to its complexity and physical interpretability.
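A standard scalar for characterizing how close a channel is to the DWO stability boundary is the decay ratio: the amplitude ratio of successive oscillation peaks, with values below 1 indicating a decaying (stable) oscillation. The dissertation does not necessarily use this exact estimator; the sketch below works only on an idealized noise-free signal, and real test data would need filtering and noise-robust peak detection.

```python
import numpy as np

def decay_ratio(signal):
    """Decay ratio of a damped oscillation: amplitude ratio of the second
    local peak to the first. DR < 1 means the oscillation decays (stable)."""
    s = np.asarray(signal, dtype=float)
    interior = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])  # local maxima mask
    peaks = s[1:-1][interior]
    return peaks[1] / peaks[0]
```

For a signal exp(-a t) cos(w t), successive peaks are separated by one period, so the decay ratio is exp(-2*pi*a/w).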
- Design of Optical Measurements for Plasma Actuators for the Validation of Quiescent and Flow Control Simulations. Lam, Derrick Chuk-Wung (Virginia Tech, 2016-01-27). Plasma flow control is a relatively new concept based on placing an atmospheric plasma near the edge of an airfoil to reduce boundary layer losses. As with any new concept, it is important to validate theoretical assumptions against known experimental results. A variety of experiments are being done to better understand plasma flow control; one approach is multi-physics modeling of dielectric barrier discharge actuators. The research in this thesis uses optical measurement techniques to validate computational models of flow control actuators being developed concurrently at Virginia Tech. The primary focus of this work is to design, build, and test plasma actuators in order to determine plasma characteristics, specifically electron temperatures and densities. Optical measurement techniques such as plasma spectroscopy are used to measure electron temperatures and densities for comparison with theoretical calculations of plasma flow control under a variety of flow conditions. This thesis covers the background plasma physics, the optical measurement techniques, and the design of the plasma actuator setups used to measure atmospheric plasmas.
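One common spectroscopic route to an excitation temperature is the Boltzmann plot: for emission lines with known upper-level energies, ln(I*lambda/(g*A)) falls on a straight line of slope -1/(kT). The thesis does not necessarily use this exact method; the sketch below assumes local thermodynamic equilibrium and synthetic line data.

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def boltzmann_plot_te(e_upper_ev, intensities, wavelengths_nm, g_a):
    """Excitation temperature [K] from a Boltzmann plot: fit a line to
    ln(I * lambda / (g * A)) vs upper-level energy; slope = -1/(k * T)."""
    y = np.log(np.asarray(intensities) * np.asarray(wavelengths_nm)
               / np.asarray(g_a))
    slope = np.polyfit(e_upper_ev, y, 1)[0]
    return -1.0 / (K_B_EV * slope)
```

With real spectra, the line intensities must first be corrected for the spectrometer's wavelength-dependent response.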
- Design of the high Pressure HIgh temperature annuLUS flow (PHILUS) Facility. Karabacak, Ali Haydar (Virginia Tech, 2022-06-17). Critical heat flux (CHF) and post-CHF are two phenomena critical to the safety of light-water-cooled nuclear power plants. Even though the general trends of CHF and post-CHF are known, the exact mechanisms are still unknown. To better understand them, experimental flow boiling facilities have been constructed around the world; however, these facilities are limited in the experimental conditions and spatial resolution necessary to advance our understanding of two-phase heat transfer. Previous rod surface measurements used thermocouples to capture the CHF location and temperature excursion, yet thermocouples provide limited spatial resolution, which leads to significant uncertainties in CHF prediction. Optical fiber temperature sensors, by contrast, can measure the temperature and the CHF propagation with high spatial resolution, and their capability at high temperatures has been proven in previous studies. The current study aims to apply optical fibers at high pressures and high mass fluxes. The high-Pressure HIgh-temperature annuLUS flow (PHILUS) facility was designed to provide the desired working conditions in a test section instrumented with optical fiber temperature sensors. The PHILUS test section is 1320 mm long, with 1000 mm of heated length, and operates at pressures up to 18 MPa, temperatures up to 357 °C, and coolant mass fluxes from 500 to 3700 kg/m2s. The main components of the loop are a steam separator, two heat exchangers (a condenser and a cooler), a bladder-type accumulator, two bypass lines, and a high-pressure pump. The Coolant-Boiling in Rod Arrays-Two Fluids (COBRA-TF) code was used to design the CHF and post-CHF experiments to be performed at the PHILUS facility.
- Developing a Novel Ultrafine Coal Dewatering Process. Huylo, Michael H. (Virginia Tech, 2022-01-13). Dewatering fine coal is needed in many applications but has remained a great challenge. The hydrophobic-hydrophilic separation (HHS) method is a powerful technology for addressing this problem; however, the organic solvents in the solvent-coal slurries produced during HHS must be recovered for the method to be economically viable. Here, experimental studies of recovering solvents from pentane-coal and hexane-coal slurries are reported, combining liquid-solid filtration with in-situ vaporization and removal of the solvent by a carrier gas (i.e., drying). Filtration behavior is studied under different solid mass loadings and filtration pressures. Using pressure filtration driven by 20 psig nitrogen, over 95% of the solvent by mass can be recovered from the slurries, and filtration cakes can be formed in 60 s. Drying behavior was studied using nitrogen and steam at different temperatures and pressures: residual solvents in the filtration cakes can be reduced below 1400 ppm within 10 s by 15 psig steam superheated to 150 °C, while other parameter combinations are far less effective at removing solvents. The physical processes involved in drying and the structure of solvent-laden filtration cakes are analyzed in light of these results.
- Development and benchmarking of advanced FM-based particle transport algorithms for steady-state and transient conditions, implementation in RAPID and its VRS web-application. Mascolino, Valerio (Virginia Tech, 2021-06-14). There is a significant need for 3-D steady-state and transient neutron transport formulations and codes that yield accurate, high-fidelity solutions with reasonable computing resources and time. These tools are essential for modeling innovative nuclear systems such as next-generation reactor designs, yet existing methods generally compromise heavily between accuracy and computation time. In this dissertation, novel algorithms for the simulation of reactor transient conditions are developed and implemented in the RAPID code system. In addition, extensive computational verification and experimental validation of RAPID's steady-state and transient algorithms is performed, and a novel virtual reality system (VRS) web-application is developed for the RAPID code system. The new algorithms, collectively referred to as tRAPID, are based on the Transient Fission Matrix (TFM) methodology. By decoupling the kinetic neutron transport problem into two stages, an accurate pre-calculation that generates a database and an on-line solution of linear differential equations, the method preserves the highest level of accuracy while allowing high-fidelity modeling and simulation of nuclear reactor kinetics in a short time with minimal computing resources. The tRAPID algorithms are computationally verified against several computational benchmarks and experimentally validated using the JSI TRIGA Mark-II reactor. To develop these algorithms, the steady-state capabilities of RAPID were first benchmarked against the GBC-32 spent fuel cask system, highlighting issues with standard eigenvalue Monte Carlo calculations that RAPID is capable of overcoming.
A novel methodology for accounting for the movement of control rods in the JSI TRIGA reactor has also been developed. This methodology, referred to as FM-CRd, determines the changes in neutron flux distribution due to the presence of control rods in real time, and has been validated successfully using the JSI TRIGA reactor. The time-dependent, kinetic capabilities of tRAPID, implemented based on the TFM method, have been verified and validated using the Flattop-Pu benchmark together with reference calculations and measurements on the JSI TRIGA reactor. In addition to the main tRAPID development and benchmarking efforts, a new web-application for the RAPID code system, VRS-RAPID, was developed for input preparation and interactive output visualization; it greatly enhances the usability, intuitiveness, and outreach possibilities of the RAPID code system.
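The fission matrix (FM) idea behind these algorithms can be illustrated compactly: once a matrix F has been pre-computed, where F[i, j] is the expected number of fission neutrons born in cell i per fission neutron born in cell j, the eigenvalue problem F s = k s is solved by cheap power iteration instead of a new transport calculation. A toy sketch with a hypothetical 2x2 matrix, not RAPID's actual implementation:

```python
import numpy as np

def fission_matrix_solve(F, tol=1e-12, max_iter=1000):
    """Power iteration on a fission matrix F. Returns the dominant
    eigenvalue k (multiplication factor) and the normalized fission
    source distribution s (sums to 1)."""
    s = np.ones(F.shape[0]) / F.shape[0]
    for _ in range(max_iter):
        s_new = F @ s
        k = s_new.sum()          # eigenvalue estimate for sum-normalized s
        s_new /= k
        if np.abs(s_new - s).max() < tol:
            return k, s_new
        s = s_new
    return k, s
```

The expensive Monte Carlo work is confined to building F once; afterwards, k and the source shape follow from a few matrix-vector products.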
- Development and Validation of Reconstruction Algorithms for 3D Tomography Diagnostics. Lei, Qingchun (Virginia Tech, 2017-01-10). This work reports three reconstruction algorithms developed to address practical issues encountered in 3D tomography diagnostics: the limited view angles available in many applications, the large scale and nonlinearity of the problems in 3D, and measurement uncertainty. The algorithms are an algebraic reconstruction technique (ART) screening algorithm, a nonlinear iterative reconstruction technique (NIRT), and an iterative reconstruction technique integrating view registration optimization (IRT-VRO). The ART screening algorithm enhances the performance of the traditional ART algorithm on linear tomography problems, the NIRT solves nonlinear tomography problems, and the IRT-VRO addresses view registration uncertainty in both linear and nonlinear problems. This dissertation describes the mathematical formulations and the experimental and numerical validations of these algorithms. The results are expected to lay the groundwork for their further development and expanded adoption in the deployment of tomography diagnostics in practical applications.
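The classical ART underlying the screening algorithm is the Kaczmarz row-action scheme: each ray measurement defines a hyperplane a_i . x = b_i, and the image estimate is projected onto each hyperplane in turn. A minimal dense sketch; practical tomography uses sparse system matrices, relaxation tuning, and a stopping/screening criterion not shown here.

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=1.0):
    """Algebraic Reconstruction Technique (Kaczmarz iteration).
    A: system (ray-weight) matrix, b: measured projections.
    Each update projects x onto the constraint of one ray."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x
```

For consistent systems, the iterates converge to a solution; for noisy, inconsistent data, the relaxation factor and sweep count control the trade-off between fit and noise amplification.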
- Development of a Fast X-ray Line Detector System for Two-Phase Flow Measurement. Song, Kyle (Virginia Tech, 2016-12-08). Measuring the void fraction distribution in two-phase flow has been a challenging task for decades because of its complex and fast-changing interfacial structure. In this study, a non-intrusive X-ray measurement system is developed and calibrated to mitigate this challenge. The approach has several advantages over conventional methods such as the multi-sensor conductivity probe, wire-mesh sensor, impedance void meter, or direct optical imaging: X-ray densitometry is non-intrusive, insensitive to flow regime changes, capable of measuring high-temperature or high-pressure flows, and has reasonable penetration depth. With the advancement of detector technology, the system developed in this work further achieves high spatial resolution (100 microns per pixel) and high temporal resolution (1000 frames per second). This work focuses on three aspects of the system development: establishing a geometrical model for the line detector system, conducting spectral analysis of X-ray attenuation in two-phase flow, and performing calibration tests. The geometrical model considers the measurement plane, the geometry of the test-section wall and flow channel, and the relative positions of the X-ray source and detector pixels. Assuming axisymmetry, an algorithm has been developed to convert the void fraction distribution along the detector pixels into a radial void profile in a circular pipe. The X-ray spectral analysis yielded a new prediction model for non-monochromatic X-rays and non-uniform structures such as internal two-phase flow, which contains gas, liquid, and solid wall materials. A calibration experiment was carried out to optimize the conversion factor for each detector pixel.
Finally, the data measured by the developed X-ray system are compared with double-sensor conductivity probe and gas flow meter measurements for sample bubbly flow and slug flow conditions. The results show reasonable agreement among these measuring techniques.
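For a monochromatic beam, the densitometry principle reduces to Beer-Lambert attenuation: with reference intensities recorded for the all-liquid and all-gas channel, the chord-averaged void fraction follows directly from the measured intensity. The abstract's spectral model generalizes beyond this; the sketch below is only the textbook monochromatic baseline with a fixed wall contribution.

```python
import numpy as np

def chordal_void_fraction(i_meas, i_liquid, i_gas):
    """Line-averaged void fraction from X-ray intensities, assuming
    monochromatic Beer-Lambert attenuation along a fixed chord:
    alpha = ln(I / I_liquid) / ln(I_gas / I_liquid)."""
    return float(np.log(i_meas / i_liquid) / np.log(i_gas / i_liquid))
```

Repeating this for every detector pixel gives the chordal void profile that the abstract's axisymmetric algorithm then converts into a radial profile.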
- Development of a Novel Detector Response Formulation and Algorithm in RAPID and its Benchmarking. Wang, Meng Jen (Virginia Tech, 2019-10-24). Solving radiation shielding problems, i.e., deep penetration problems, is a challenging task in nuclear engineering in terms of both computation time and resources. This is mainly because of the complexity of the governing equation for neutral particle transport, the Linear Boltzmann Equation (LBE), which involves seven independent variables together with integral and differential operators; the low success rate of particle tallies in shielding problems adds to the difficulty. In this dissertation, the Detector Response Function (DRF) methodology is developed for real-time, accurate radiation shielding calculation. Real-time capability is important for (1) safety and monitoring of nuclear systems, (2) nuclear non-proliferation, and (3) sensitivity studies and uncertainty quantification. Traditionally, the difficulties in solving shielding problems are (1) very long computation times with the Monte Carlo method, (2) extremely large memory requirements with deterministic methods, and (3) re-calculation requirements with hybrid methods. Among these, hybrid methods, typically Monte Carlo + deterministic, solve shielding problems more efficiently than either Monte Carlo or deterministic methods alone; however, none of them is capable of real-time radiation shielding calculation. A literature survey reveals a number of investigations into more efficient shielding methods, which can be categorized as (1) variance reduction techniques that improve the success rate of the Monte Carlo method, and (2) numerical techniques that improve convergence rates and avoid unphysical behavior in deterministic methods.
These methods are clever and useful for the radiation transport community, and they accelerate calculations significantly, but real-time capability is still missing. In addition, very few methods are physics-based. For example, the mean free path of a neutron is typically orders of magnitude smaller than a nuclear system such as a nuclear reactor, so an individual neutron does not travel far before its history is terminated; this is the "loosely coupled" nature of nuclear systems. In principle, a radiation shielding problem can therefore be decomposed into pieces and solved more efficiently. In the DRF methodology, the DRF coefficients are pre-calculated with dependencies on several parameters. These coefficients can be coupled directly with a radiation source calculated by another code system, e.g., the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system, so that detector/dosimeter responses can be calculated on the fly. Thus far, the DRF methodology has been incorporated into the RAPID code system and applied to four benchmark problems: (1) the GBC-32 spent nuclear fuel (SNF) cask flooded with water, with a 3He detector placed on the cask surface; (2) the VENUS-3 experimental benchmark for reactor pressure vessel (RPV) neutron fluence calculation; (3) RPV dosimetry for the Three-Mile Island Unit-1 (TMI-1) commercial reactor; and (4) external dosimetry of a dry-storage SNF cask. The results show that dosimeter/detector responses and dose values calculated with the DRF methodology all lie within the 2σ relative statistical uncertainties of standard MCNP5 + CADIS (Consistent Adjoint Driven Importance Sampling) fixed-source calculations.
The DRF methodology requires only on the order of seconds for a dosimeter/detector response or dose calculation on a single processor, provided the DRF coefficients are appropriately prepared, and the coefficients can be reused without re-calculation when the model configuration changes. In contrast, the standard MCNP5 calculations typically require more than an hour on 8 processors, even with the CADIS methodology. The DRF methodology thus enables real-time radiation shielding calculation, from which the radiation transport community can benefit greatly: users can easily perform parametric studies, sensitivity studies, and uncertainty quantification, and the methodology can be applied to various shielding problems such as nuclear system monitoring and medical radiation facilities. The appropriate DRF procedure and the necessary parameters for the DRF coefficient dependencies are discussed in detail in this dissertation.
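The "on the fly" step of the DRF methodology is, at its core, a linear fold of pre-computed coefficients with a source distribution; all of the expensive transport is buried in the coefficient pre-calculation. A schematic sketch, with array shapes and names that are illustrative rather than RAPID's actual interface:

```python
import numpy as np

def detector_response(drf, source):
    """Fold pre-computed DRF coefficients (e.g., mesh cells x energy
    groups) with a source distribution of the same shape:
    R = sum over cells and groups of c[i, g] * S[i, g].
    Changing the source costs O(n); no new transport run is needed."""
    return float(np.sum(np.asarray(drf) * np.asarray(source)))
```

This is why a new source configuration, for example from a burnup update, yields a new detector response in seconds rather than hours.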
- Development of a Novel Fuel Burnup Methodology and Algorithm in RAPID and its Benchmarking and AutomationRoskoff, Nathan (Virginia Tech, 2018-08-02)Fuel burnup calculations provide material concentrations and intrinsic neutron and gamma source strengths as a function of irradiation and cooling time. Detailed, full-core 3D burnup calculations are critical for nuclear fuel management studies, including core design and spent fuel storage safety and safeguards analysis. For core design, specifically during refueling, full- core pin-wise, axially-dependent burnup distributions are necessary to determine assembly positioning to efficiently utilize fuel resources. In spent fuel storage criticality safety analysis, detailed burnup distributions enable best-estimate analysis which allows for more effective utilization of storage space. Additionally, detailed knowledge of neutron and gamma source distributions provide the ability to ensure nuclear material safeguards. The need for accurate and efficient burnup calculations has become more urgent for the simulation of advanced reactors and monitoring and safeguards of spent fuel pools. To this end, the Virginia Tech Transport Theory Group (VT3G) has been working on advanced computational tools for accurate modeling and simulation of nuclear systems in real-time. These tools are based on the Multi-stage Response-function Transport (MRT) methodology. For monitoring and safety evaluation of spent fuel pools and casks, the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system has been developed. This dissertation presents a novel methodology and algorithm for performing 3D fuel bur- nup calculations, referred to as bRAPID- Burnup with RAPID . bRAPID utilizes the existing RAPID code system for accurate calculation of 3D fission source distributions as the trans- port calculation tool to drive the 3D burnup calculation. 
bRAPID is capable of accurately and efficiently calculating assembly-wise, axially-dependent fission source and burnup distributions, and irradiated-fuel properties including material compositions, neutron source, gamma source, spontaneous fission source, and activities. bRAPID performs 3D burnup calculations in a fraction of the time required by state-of-the-art methodologies because it utilizes a pre-calculated database of response functions. The bRAPID database pre-calculation procedure, and its automation, is presented. The existing RAPID code is then benchmarked against the MCNP and Serpent Monte Carlo codes for a spent fuel pool and the U.S. Naval Academy Subcritical Reactor facility. RAPID is shown to accurately calculate eigenvalue, subcritical multiplication, and 3D fission source distributions. Finally, bRAPID is compared to traditional, state-of-the-art Serpent Monte Carlo burnup calculations and its performance is evaluated. It is important to note that the automated pre-calculation procedure is required for evaluating the performance of bRAPID. Additionally, benchmarking of the RAPID code is necessary to understand RAPID's ability to solve problems with variable burnup distributions and to assess its accuracy.
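Response-function transport of the kind RAPID performs is closely related to the fission matrix method, in which a precomputed matrix element gives the fission neutrons produced in region i per fission neutron born in region j; the dominant eigenpair of that matrix then yields the multiplication factor and the 3D fission source. A toy power-iteration sketch (the 3×3 matrix and tolerances are invented for illustration and are not bRAPID's actual database):

```python
import numpy as np

# Invented 3-region fission matrix: A[i, j] = fission neutrons produced in
# region i per fission neutron born in region j (precomputed once).
A = np.array([[0.50, 0.20, 0.05],
              [0.20, 0.50, 0.20],
              [0.05, 0.20, 0.50]])

def power_iteration(A, tol=1e-12, max_iter=10000):
    """Return the dominant eigenvalue (multiplication-factor analogue)
    and the normalized fission source distribution."""
    s = np.ones(A.shape[0]) / A.shape[0]  # flat initial source, sums to 1
    k_old = 0.0
    for _ in range(max_iter):
        s_next = A @ s
        k = s_next.sum()            # eigenvalue estimate (s sums to 1)
        s = s_next / s_next.sum()   # renormalize the fission source
        if abs(k - k_old) < tol:
            break
        k_old = k
    return k, s

k, source = power_iteration(A)
```

Transport work goes into building the matrix once; the eigenvalue solve itself is then cheap, which is the flavor of speedup the dissertation describes.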
- Development of Advanced Image Processing Algorithms for Bubbly Flow MeasurementFu, Yucheng (Virginia Tech, 2018-10-16)An accurate measurement of bubbly flow is of significant value for understanding bubble behavior and heat and energy transfer patterns in different engineering systems. It also helps to advance theoretical model development in two-phase flow study. Due to the interaction between the gas and liquid phases, the flow patterns in recorded image data are complicated. The segmentation and reconstruction of overlapping bubbles in these images is a challenging task. This dissertation provides a complete set of image processing algorithms for bubbly flow measurement. The developed algorithms can deal with bubble overlapping issues and reconstruct bubble outlines in 2D high-speed images over a wide void fraction range. Key bubbly flow parameters such as void fraction, interfacial area concentration, bubble number density, and velocity can be computed automatically after bubble segmentation. The time-averaged bubbly flow distributions are generated based on the extracted parameters for flow characteristic study. A 3D imaging system is developed for 3D bubble reconstruction. The proposed 3D reconstruction algorithm can restore the bubble shape in a time sequence for accurate flow visualization with minimal assumptions. The 3D reconstruction algorithm shows an error of less than 2% in volume measurement compared to the syringe reading. Finally, a new image synthesis framework called Bubble Generative Adversarial Networks (BubGAN) is proposed by combining conventional image processing algorithms and deep learning techniques. This framework aims to provide a generic benchmark tool for assessing the performance of existing image processing algorithms, with significant quality improvement in synthetic bubbly flow image generation.
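Once bubbles are segmented, the integral parameters listed above follow from simple geometry. A minimal sketch assuming spherical bubbles of known radii in a known interrogation volume (the numbers and function name are illustrative; the dissertation's algorithms additionally handle overlapping and non-spherical bubbles):

```python
import math

def bubbly_flow_parameters(radii_m, volume_m3):
    """Void fraction, interfacial area concentration (IAC), and number
    density for spherical bubbles in an interrogation volume."""
    gas_volume = sum(4.0 / 3.0 * math.pi * r**3 for r in radii_m)
    interface_area = sum(4.0 * math.pi * r**2 for r in radii_m)
    return {
        "void_fraction": gas_volume / volume_m3,              # dimensionless
        "iac_1_per_m": interface_area / volume_m3,            # 1/m
        "number_density_1_per_m3": len(radii_m) / volume_m3,  # 1/m^3
    }

# Example: three 1 mm-radius bubbles in a 1 cm^3 interrogation volume.
params = bubbly_flow_parameters([1e-3, 1e-3, 1e-3], 1e-6)
```

For this example the void fraction is about 1.26% and the interfacial area concentration about 37.7 m⁻¹; time-averaged distributions then follow by accumulating such per-frame values.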
- Development of High-Performance Optofluidic Sensors on Micro/Nanostructured SurfacesCheng, Weifeng (Virginia Tech, 2020-01-22)Optofluidic sensing utilizes the advantages of both microfluidic and optical science to achieve tunable, reconfigurable, high-performance sensing, and has established itself as a new and dynamic research field for exciting developments at the interface of photonics, microfluidics, and the life sciences. With the trend toward miniaturized electronic devices and the integration of multi-functional units on lab-on-a-chip instruments, there has been growing demand in recent years for novel and powerful approaches to integrating optical elements and fluids on the same chip-scale system. By taking advantage of the electrowetting phenomenon, the wettability of liquid droplets on micro/nano-structured surfaces, and the Leidenfrost effect, this doctoral research focuses on developing high-performance optofluidic sensing systems, including optical beam adaptive steering, whispering gallery mode (WGM) optical sensing, and surface-enhanced Raman spectroscopy (SERS) sensing. A watermill-like beam steering system is developed that can adaptively guide a concentrated optical beam to targeted receivers. The system comprises a liquid droplet actuation mechanism based on electrowetting-on-dielectric, a superlattice-structured rotation hub, and an enhanced optical reflecting membrane. The specular reflector can be adaptively tuned over a lateral orientation of 360°, and the steering speed can reach ~353.5°/s. This work demonstrates the feasibility of driving a macro-size solid structure with liquid microdroplets, opening a new avenue for developing reconfigurable components such as optical switches in next-generation sensor networks. Furthermore, the WGM sensing system is demonstrated to be stimulated along the meridian plane of a liquid microdroplet, instead of the equatorial plane, resting on a properly designed nanostructured chip surface.
The unavoidable deformation along the meridian rim of the sessile microdroplet can be controlled and regulated by tailoring the nanopillar structures and their associated hydrophobicity. The nanostructured superhydrophobic chip surface and its impact on the microdroplet morphology are modeled by Surface Evolver (SE), which is subsequently validated by the Cassie-Wenzel theory of wetting. The influence of the microdroplet morphology on the optical characteristics of WGMs is further numerically studied using the Finite-Difference Time-Domain (FDTD) method, and it is found that meridian WGMs with intrinsic quality factor Q exceeding 10⁴ can exist. Importantly, such meridian WGMs can be efficiently excited by a waveguiding structure embedded in the planar chip, which could significantly reduce the overall system complexity by eliminating conventional mechanical coupling parts. Our simulation results also demonstrate that this optofluidic resonator can achieve a sensitivity as high as 530 nm/RIU. This on-chip coupling scheme could pave the way for developing lab-on-a-chip resonators for high-resolution sensing of trace analytes in applications ranging from chemical detection and biological reaction processes to environmental protection. Lastly, this research reports a new type of high-performance SERS substrate with nanolaminated plasmonic nanostructures patterned on a hierarchical micro/nanostructured surface, demonstrating a SERS enhancement factor as high as 1.8 × 10⁷. Unlike current SERS substrates, which rely heavily on poorly durable surface structure modifications and various chemical coatings that can deteriorate the SERS enhancement factor (EF) because the coating materials may block hot spots, a Leidenfrost effect-inspired evaporation approach is proposed to minimize the analyte deposition area and maximize the analyte concentration on the SERS sensing substrate.
By intentionally regulating the temperature of the SERS substrate during the evaporation process, the Rhodamine 6G (R6G) molecules inside a droplet with an initial concentration of 10⁻⁹ M are deposited within an area of 450 μm², and can be successfully detected with a practical detection time of 0.1 s and a low excitation power of 1.3 mW.
- Development of High-Speed Camera Techniques for Droplet Measurement in Annular FlowsCohn, Ayden Seth (Virginia Tech, 2024-06-03)This research addresses the critical need for precise two-phase flow data in the development of computer simulation models, with a specific focus on droplet behavior in the annular flow regime. The study aims to contribute to the evaluation of safety and efficiency in nuclear reactors that handle fluids transitioning between liquid and gas states for thermal energy transport. Central to the investigation is the collection and analysis of droplet size and velocity distribution data, particularly to support model development for water-cooled nuclear power plants. The experimental setup employs advanced tools, including a high-speed camera, lens, teleconverter, and a selected light source, to capture high-resolution images of droplets. Calibration procedures, incorporating depth of field testing, are implemented to ensure accurate droplet size measurements. A critical component of the research is the introduction of a droplet identification program, developed in Matlab, which facilitates efficient processing of experimental data. Preliminary results from the Virginia Tech test facility demonstrate the system's capability to eliminate out-of-focus droplets and obtain precise droplet data in a reasonable amount of time. Experimental results from the Rensselaer Polytechnic Institute test facility provide droplet size and velocity distributions for a variety of annular flow conditions. This facility has a co-current two-phase flow system that pumps air and water at different rates through a 9.525 mm inner diameter tube. The conditions tested include gas superficial velocities ranging from 22 to 40 m/s and liquid superficial velocities ranging from 0.09 to 0.44 m/s. The measured flow has a temperature of 21°C and a pressure of 1 atm.
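The out-of-focus rejection mentioned above can be implemented by thresholding a focus metric, such as the mean intensity-gradient magnitude over each candidate droplet patch: defocused droplets have blurred edges and score lower. A small numpy sketch (the metric, threshold, and synthetic patches are illustrative assumptions, not the thesis's calibrated Matlab procedure):

```python
import numpy as np

def sharpness(patch):
    """Mean gradient magnitude of an image patch (a simple focus metric)."""
    gy, gx = np.gradient(patch.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def keep_in_focus(patches, threshold):
    """Keep only the droplet patches whose focus metric meets the threshold."""
    return [p for p in patches if sharpness(p) >= threshold]

# A synthetic droplet with crisp edges versus a featureless (fully blurred) patch.
sharp = np.zeros((9, 9))
sharp[3:6, 3:6] = 1.0
blurred = np.full((9, 9), sharp.mean())  # same mean intensity, no edges

in_focus = keep_in_focus([sharp, blurred], threshold=0.05)
```

In practice the threshold would be set from the depth-of-field calibration, so that only droplets within the calibrated measurement depth are sized.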
- Development of Metallic Fuel Additives and Alloys for Sodium-cooled Fast ReactorsZhuo, Weiqian (Virginia Tech, 2022-07-11)The major goal of this work is to develop effective additives for U-10Zr (wt.%) metallic fuel to mitigate the fuel-cladding chemical interactions (FCCIs) caused by fission product lanthanides, and to optimize the fuel phase, mainly by lowering the gamma-onset temperature. The additives Sb, Mo, Nb, and Ti have been investigated. Metallic fuels with one or two of the additives, and with or without lanthanide fission products, were fabricated. In this study, Ce was selected as the representative lanthanide fission product. A series of tests and characterizations were carried out on the additive-bearing fuels, including annealing, diffusion couple tests, scanning electron microscopy (SEM), X-ray powder diffraction (XRD), and differential scanning calorimetry (DSC). Sb was investigated to mitigate FCCIs because available studies show its potential as a lanthanide immobilizer. This work extends the knowledge of Sb in U-10Zr, including its effect in the Zr-free region. Sb forms precipitates with the fuel constituents, either U or Zr. However, it combines with the lanthanide fission product Ce when Ce is present. Those Sb precipitates are found to be stable upon annealing and are compatible with the cladding. The additive does not change the phase transition of U-10Zr. Mo, Nb, and Ti have been investigated for phase optimization based on the known characteristics shown in the binary phase diagrams. The quaternary alloys, i.e., two Mo-bearing alloys and two Nb-bearing alloys, were investigated. Compared to U-10Zr, a few weight percent of Zr is replaced by these additives in the quaternary alloys. The solid-state phase transitions were determined (alpha and U2Ti transform into gamma). The transition temperature varies depending on the composition. The Mo-bearing alloys have lower gamma-onset temperatures than the Nb-bearing alloys.
All of them have lower gamma-onset temperatures than that of U-10Zr. Since a low gamma-onset temperature is favorable, the results indicate that the fuel phase can be optimized by replacing a few weight percent of Zr with these additives. All the experiments were out-of-pile tests; therefore, in-pile experiments will be necessary to fully evaluate the performance of the additives in the future.
- Development, Evaluation and Improvement of Correlations for Interphase Friction in Gas-Liquid Vertical UpflowClark, Randy R. Jr. (Virginia Tech, 2015-10-15)In this study, liquid-vapor vertical upflow has been researched with the intent of finding an improved method of modelling the interphase friction in two-phase vertical flow in nuclear thermal-hydraulic codes. An improved method of modelling interphase friction should allow for better prediction of the pressure gradient, void fraction, and phasic velocities. Data have been acquired from several published resources and analyzed to determine the interphase friction using a force balance between the liquid and vapor phases. Using the Buckingham Pi theorem, a dimensionless interphase friction force was tested and refined before being compared against seven other dimensionless parameters. Three correlations have been developed that express the dimensionless interphase friction force as a function of the Weber number, the Froude number, and the mixture Froude number. Statistical analysis of the three correlations shows that the mixture Froude number correlation should be the most accurate. The correlations have a weakness that makes them largely ineffective for bubbly flow and some slug flow scenarios, while they should perform significantly better for annular flow cases. Comparisons have been made against the interphase friction calculations published in the manuals of RELAP5/MOD2, RELAP5/MOD3.3, RELAP5-3D, and TRACE. The findings have generally shown that the equations in the manuals provide very inaccurate approximations of the interphase friction compared to the interphase friction found via force balance. When analyzing the source code of RELAP5/MOD3.3, several differences were noticed between the source code and the manual, which have been discussed.
Calculations with the source code equations reveal that the source code provides a modestly improved prediction of the interphase friction force, but still has significant errors. Despite the fact that the manual and source code equations indicate that RELAP5/MOD3.3 should perform poorly in modelling interphase friction, actual RELAP5/MOD3.3 model runs perform very well in predicting pressure gradient, void fraction, the liquid and vapor velocities and the interphase friction force. This is largely due to RELAP5/MOD3.3 being able to adjust parameters to converge to a solution that fits within the boundary conditions established in the input file. Modifications to the RELAP5/MOD3.3 code were first made with the three correlations developed using dimensionless parameters, and were tested with data points that the RELAP5/MOD3.3 flow regime map had predicted would be annular flow. While the mixture Froude number correlation has been analyzed to be the most statistically accurate of the three correlations, it was found that the Weber number correlation performed best when implemented into RELAP5/MOD3.3. In a parametric study of the Weber number correlation, it performed optimally at 150% of the original correlation, improving upon the original RELAP model in almost every metric examined. Additional investigations were performed with individual annular flow correlations that model specific physical parameters. Results with the annular flow physical models were inconclusive as no particular model provided a significant improvement over the original RELAP5/MOD3.3 model, and there was no clear indication that combining the models would provide significant improvement.
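The dimensionless groups named in the correlations above have standard textbook forms. As a sketch (common definitions shown; the thesis's actual correlation coefficients and choice of characteristic density, velocity, and length scales are not reproduced here, and the input values are illustrative air-water numbers):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def weber(rho, velocity, diameter, sigma):
    """Weber number: ratio of inertial to surface tension forces."""
    return rho * velocity**2 * diameter / sigma

def froude(velocity, diameter):
    """Froude number: ratio of inertial to gravitational forces."""
    return velocity / math.sqrt(G * diameter)

# Illustrative values: air (rho ~1.2 kg/m^3) at 30 m/s in a 25.4 mm tube,
# with an air-water surface tension of 0.072 N/m.
we = weber(rho=1.2, velocity=30.0, diameter=0.0254, sigma=0.072)
fr = froude(velocity=30.0, diameter=0.0254)
```

A mixture Froude number would use a mixture velocity and density in place of the single-phase values; the correlations then fit the dimensionless interphase friction force as a function of one such group.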