Browsing by Author "Chung, Matthias"
Now showing 1 - 20 of 26
- Advanced Sampling Methods for Solving Large-Scale Inverse Problems. Attia, Ahmed Mohamed Mohamed (Virginia Tech, 2016-09-19). Ensemble and variational techniques have gained wide popularity as the two main approaches for solving data assimilation and inverse problems. The majority of the methods in these two approaches are derived (at least implicitly) under the assumption that the underlying probability distributions are Gaussian. It is well accepted, however, that the Gaussianity assumption is too restrictive when applied to large nonlinear models, nonlinear observation operators, and large levels of uncertainty. This work develops a family of fully non-Gaussian data assimilation algorithms that work by directly sampling the posterior distribution. The sampling strategy is based on a Hybrid/Hamiltonian Monte Carlo (HMC) approach that can handle non-normal probability distributions. The first algorithm proposed in this work is the "HMC sampling filter", an ensemble-based data assimilation algorithm for solving the sequential filtering problem. Unlike traditional ensemble-based filters, such as the ensemble Kalman filter and the maximum likelihood ensemble filter, the proposed sampling filter naturally accommodates non-Gaussian errors and nonlinear model dynamics, as well as nonlinear observations. To test the capabilities of the HMC sampling filter, numerical experiments are carried out using the Lorenz-96 model and observation operators with different levels of nonlinearity and differentiability. The filter is also tested with a shallow water model on the sphere with a linear observation operator. Numerical results show that the sampling filter performs well even in highly nonlinear situations where the traditional filters diverge. Next, the HMC sampling approach is extended to the four-dimensional case, where several observations are assimilated simultaneously, resulting in the second member of the proposed family of algorithms.
The new algorithm, named the "HMC sampling smoother", is an ensemble-based smoother for four-dimensional data assimilation that works by sampling from the posterior probability density of the solution at the initial time. The sampling smoother naturally accommodates non-Gaussian errors and nonlinear model dynamics and observation operators, and provides a full description of the posterior distribution. Numerical experiments for this algorithm are carried out using a shallow water model on the sphere with observation operators of different levels of nonlinearity. The numerical results demonstrate the advantages of the proposed method compared to the traditional variational and ensemble-based smoothing methods. The HMC sampling smoother, in its original formulation, is computationally expensive due to the innate requirement of running the forward and adjoint models repeatedly. This work therefore develops computationally efficient versions of the HMC sampling smoother based on reduced-order approximations of the underlying model dynamics. The reduced-order HMC sampling smoothers, developed as extensions to the original HMC smoother, are tested numerically using the shallow-water equations model in Cartesian coordinates. The results reveal that the reduced-order versions of the smoother are capable of accurately capturing the posterior probability density, while being significantly faster than the original full-order formulation. In the presence of nonlinear model dynamics, a nonlinear observation operator, or non-Gaussian errors, the prior distribution in the sequential data assimilation framework is not analytically tractable. In the original formulation of the HMC sampling filter, the prior distribution is approximated by a Gaussian distribution whose parameters are inferred from the ensemble of forecasts. The Gaussian prior assumption in the original HMC filter is then relaxed.
Specifically, a clustering step is introduced after the forecast phase of the filter, and the prior density function is estimated by fitting a Gaussian Mixture Model (GMM) to the prior ensemble. The base filter developed following this strategy is named the cluster HMC sampling filter (ClHMC). A multi-chain version of the ClHMC filter, namely MC-ClHMC, is also proposed to guarantee that samples are taken from the vicinities of all probability modes of the formulated posterior. These methodologies are tested using a quasi-geostrophic (QG) model with double-gyre wind forcing and bi-harmonic friction. Numerical results demonstrate the usefulness of using GMMs to relax the Gaussian prior assumption in the HMC filtering paradigm. To provide a unified platform for data assimilation research, a flexible and highly extensible testing suite, named DATeS, is developed and described in this work. The core of DATeS is implemented in Python to take advantage of object-oriented capabilities. The main components, such as the models, the data assimilation algorithms, the linear algebra solvers, and the time discretization routines, are independent of each other, offering maximum flexibility to configure data assimilation studies.
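The HMC machinery shared by this family of filters and smoothers can be illustrated with a minimal sketch. This is not the DATeS implementation; the target density, step size, and trajectory length below are illustrative assumptions. The proposal integrates Hamiltonian dynamics with a leapfrog scheme and is then accepted or rejected with a Metropolis test on the total energy.

```python
import numpy as np

def hmc_step(x, U, grad_U, eps, n_leap, rng):
    """One Hamiltonian Monte Carlo transition targeting pi(x) ~ exp(-U(x))."""
    p = rng.standard_normal(x.shape)                     # auxiliary momentum
    x_new = x.copy()
    p_new = p - 0.5 * eps * grad_U(x_new)                # initial half kick
    for i in range(n_leap):
        x_new = x_new + eps * p_new                      # position drift
        if i < n_leap - 1:
            p_new = p_new - eps * grad_U(x_new)          # full momentum kick
    p_new = p_new - 0.5 * eps * grad_U(x_new)            # final half kick
    # Metropolis test on H = U + kinetic energy
    h_old = U(x) + 0.5 * p @ p
    h_new = U(x_new) + 0.5 * p_new @ p_new
    if rng.random() < np.exp(min(0.0, h_old - h_new)):
        return x_new
    return x

# Toy target: standard normal, U(x) = x^2/2, grad U(x) = x.
rng = np.random.default_rng(0)
x = np.zeros(1)
draws = []
for _ in range(3000):
    x = hmc_step(x, lambda z: 0.5 * z @ z, lambda z: z, 0.2, 10, rng)
    draws.append(x[0])
samples = np.array(draws[500:])   # discard burn-in
```

Because the leapfrog integrator is reversible and volume-preserving, the Metropolis correction needs only the energy difference, which is what makes HMC attractive for the high-dimensional, non-Gaussian posteriors described above.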
- Antibiotics ameliorate lupus-like symptoms in mice. Mu, Qinghui; Tavella, Vincent J.; Kirby, Jay L.; Cecere, Thomas E.; Chung, Matthias; Lee, Jiyoung; Li, Song; Ahmed, Sattar Ansar; Eden, Kristin; Allen, Irving C. (Nature, 2017-10-20). Gut microbiota and the immune system interact to maintain tissue homeostasis, but whether this interaction is involved in the pathogenesis of systemic lupus erythematosus (SLE) is unclear. Here we report that oral antibiotics given during active disease removed harmful bacteria from the gut microbiota and attenuated SLE-like disease in lupus-prone mice. Using MRL/lpr mice, we showed that antibiotics given after disease onset ameliorated systemic autoimmunity and kidney histopathology. They decreased IL-17-producing cells and increased the level of circulating IL-10. In addition, antibiotics removed Lachnospiraceae and increased the relative abundance of Lactobacillus spp., two groups of bacteria previously shown to be associated with deteriorated or improved symptoms in MRL/lpr mice, respectively. Moreover, we showed that the attenuated disease phenotype could be recapitulated with a single antibiotic, vancomycin, which reshaped the gut microbiota and changed microbial functional pathways in a time-dependent manner. Furthermore, vancomycin treatment increased the barrier function of the intestinal epithelium, thus preventing the translocation of lipopolysaccharide, a cell wall component of Gram-negative Proteobacteria and a known inducer of lupus in mice, into the circulation. These results suggest that mixed antibiotics or the single antibiotic vancomycin ameliorate SLE-like disease in MRL/lpr mice by changing the composition of the gut microbiota.
- A Bayesian Approach to Estimating Background Flows from a Passive Scalar. Krometis, Justin (Virginia Tech, 2018-06-26). We consider the statistical inverse problem of estimating a background flow field (e.g., of air or water) from the partial and noisy observation of a passive scalar (e.g., the concentration of a pollutant). Here the unknown is a vector field that is specified by a large or infinite number of degrees of freedom. We show that the inverse problem is ill-posed, i.e., there may be many or no background flows that match a given set of observations. We therefore adopt a Bayesian approach, incorporating prior knowledge of background flows and models of the observation error to develop probabilistic estimates of the fluid flow. In doing so, we leverage frameworks developed in recent years for infinite-dimensional Bayesian inference. We provide conditions under which the inference is consistent, i.e., the posterior measure converges to a Dirac measure on the true background flow as the number of observations of the solute concentration grows large. We also define several computationally efficient algorithms adapted to the problem. One is an adjoint method for computation of the gradient of the log-likelihood, a key ingredient in many numerical methods. A second is a particle method that allows direct computation of point observations of the solute concentration, leveraging the structure of the inverse problem to avoid approximation of the full infinite-dimensional scalar field. Finally, we identify two interesting example problems with very different posterior structures, which we use to conduct a large-scale benchmark of the convergence of several Markov chain Monte Carlo methods that have been developed in recent years for infinite-dimensional settings.
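Among the dimension-robust MCMC methods commonly benchmarked in this infinite-dimensional setting is the preconditioned Crank-Nicolson (pCN) sampler. The sketch below is illustrative only and is not the thesis's setup: it targets a posterior built from an N(0, I) Gaussian prior and a negative log-likelihood Phi, with a toy one-dimensional Phi, step parameter, and chain length chosen as assumptions.

```python
import numpy as np

def pcn_chain(phi, dim, beta=0.3, n_steps=5000, seed=0):
    """pCN sampler: posterior ~ exp(-phi(x)) against an N(0, I) prior."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    phi_x = phi(x)
    samples = []
    for _ in range(n_steps):
        # Prior-preserving AR(1) proposal toward a fresh prior draw.
        prop = np.sqrt(1.0 - beta**2) * x + beta * rng.standard_normal(dim)
        phi_prop = phi(prop)
        # The acceptance probability involves only the likelihood term,
        # which is what makes the chain robust to discretization refinement.
        if rng.random() < np.exp(min(0.0, phi_x - phi_prop)):
            x, phi_x = prop, phi_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy likelihood: a single observation y = 1 of x with unit noise,
# so the exact posterior is N(0.5, 0.5).
chain = pcn_chain(lambda x: 0.5 * np.sum((x - 1.0) ** 2), dim=1)
posterior_mean = chain[1000:].mean()
```

The key design point is that the proposal leaves the Gaussian prior invariant, so the acceptance ratio never involves prior densities that degenerate as the discretization of the unknown field is refined.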
- Bayesian Parameter Estimation on Three Models of Influenza. Torrence, Robert Billington (Virginia Tech, 2017-05-11). Mathematical models of viral infections have been informing virology research for years. Estimating parameter values for these models can lead to understanding of biological values. This has been successful in HIV modeling for the estimation of values such as the lifetime of infected CD8 T-cells. However, estimating these values is notoriously difficult, especially for highly complex models. We use Bayesian inference and Markov chain Monte Carlo methods to estimate the underlying densities of the parameters (assumed to be continuous random variables) for three models of influenza. We discuss the advantages and limitations of parameter estimation using these methods. The data and influenza models used for this project are from the lab of Dr. Amber Smith in Memphis, Tennessee.
- Compartmental Process-based Model for Estimating Ammonia Emission from Stored Scraped Liquid Dairy Manure. Karunarathne, Sampath Ashoka (Virginia Tech, 2017-07-06). The biogeochemical processes responsible for the production and emission of ammonia from stored liquid dairy manure are governed by environmental factors (e.g., manure temperature, moisture) and manure characteristics (e.g., total ammoniacal nitrogen concentration, pH). These environmental factors and manure characteristics vary spatially as a result of spatially heterogeneous physical, chemical, and biological properties of manure. Existing process-based models used for estimating ammonia emission treat stored manure as a homogeneous system and do not account for these spatial variations, leading to inaccurate estimates. In this study, a one-dimensional compartmental biogeochemical model was developed to (i) estimate the spatial variation of temperature and substrate concentration, (ii) estimate the spatial variations and rates of biogeochemical processes, and (iii) estimate the production and emission of ammonia from stored scraped liquid dairy manure. A one-dimensional compartmentalized modeling approach was used whereby the manure storage is partitioned into several sections in the vertical domain, assuming that conditions are spatially uniform within the horizontal domain. The spatial variations of temperature and substrate concentration were estimated using established principles of heat and mass transfer. Pertinent biogeochemical processes were assigned to each compartment to estimate the production and emission of ammonia. Model performance was evaluated using experimental data obtained from the National Air Emissions Monitoring Study conducted by the United States Environmental Protection Agency. A sensitivity analysis was performed, and air temperature, manure pH, wind speed, and manure total ammoniacal nitrogen concentration were identified as the most sensitive model inputs.
The model was used to estimate ammonia emission from a liquid dairy manure storage of a dairy farm located in Rockingham and Franklin counties in Virginia. Ammonia emission was estimated under different management and weather scenarios: two manure storage periods, November to April and May to October, using historical weather data from the two counties. Results suggest greater ammonia emissions and manure nitrogen loss for the storage period in the warm season (May to October) than for the storage period in the cold season (November to April).
- Computational Advancements for Solving Large-scale Inverse Problems. Cho, Taewon (Virginia Tech, 2021-06-10). For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in many global problems and medical imaging problems such as greenhouse gas tracking and computed tomography reconstruction. This dissertation describes advancements in computational tools for solving large-scale inverse problems and for uncertainty quantification. Oftentimes, inverse problems are ill-posed and large-scale. Iterative projection methods have dramatically reduced the computational costs of solving large-scale inverse problems, and regularization methods have been critical in obtaining stable estimates by applying prior information about the unknowns via Bayesian inference. By combining iterative projection methods and variational regularization methods, hybrid projection approaches, in particular generalized hybrid methods, create a powerful framework that can maximize the benefits of each method. In this dissertation, we describe various advancements and extensions of hybrid projection methods that we developed to address three recent open problems. First, we develop hybrid projection methods that incorporate mixed Gaussian priors, where we seek more sophisticated estimates in which the unknowns are treated as random variables from a mixture of distributions. Second, we describe hybrid projection methods for mean estimation in a hierarchical Bayesian approach. By including more than one prior covariance matrix (e.g., mixed Gaussian priors) or estimating unknowns and hyper-parameters simultaneously (e.g., hierarchical Gaussian priors), we show that better estimates can be obtained.
Third, we develop computational tools for a respirometry system that incorporate various regularization methods for both linear and nonlinear respirometry inversions. For the nonlinear systems, blind deconvolution methods are developed, and prior knowledge of the nonlinear parameters is used to reduce the dimension of the nonlinear systems. Simulated and real-data experiments on the respirometry problems are provided. This dissertation provides advanced tools for computational inversion and uncertainty quantification.
- Diagonal Estimation with Probing Methods. Kaperick, Bryan James (Virginia Tech, 2019-06-21). Probing methods for trace estimation of large, sparse matrices have been studied for several decades. In recent years, there has been some work to extend these techniques to instead estimate the diagonal entries of these systems directly. We extend some analysis of trace estimators to their corresponding diagonal estimators, propose a new class of deterministic diagonal estimators which are well-suited to parallel architectures along with heuristic arguments for the design choices in their construction, and conclude with numerical results on diagonal estimation and ordering problems, demonstrating the strengths of our newly-developed methods alongside existing methods.
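A standard stochastic baseline that such probing schemes are compared against is the Rademacher-vector diagonal estimator: average v ∘ (Av) over random ±1 probe vectors v, using only matrix-vector products. The sketch below uses illustrative sizes and probe counts (it is not the thesis's deterministic probing construction).

```python
import numpy as np

def estimate_diagonal(matvec, n, n_probes, seed=0):
    """Estimate diag(A) using only matrix-vector products with +/-1 probes."""
    rng = np.random.default_rng(seed)
    est = np.zeros(n)
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        est += v * matvec(v)                  # E[v o (A v)] = diag(A)
    return est / n_probes

# Check against a dense random symmetric matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 30))
A = (A + A.T) / 2
diag_est = estimate_diagonal(lambda v: A @ v, 30, n_probes=4000)
```

The estimator is unbiased because the off-diagonal contributions v_i v_j A_ij average to zero; its variance is driven by the off-diagonal mass of each row, which is exactly what structured (deterministic) probing vectors aim to suppress.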
- An Effective Method for Parameter Estimation with PDE Constraints with Multiple Right-Hand Sides. Haber, Eldad; Chung, Matthias; Herrmann, Felix (SIAM Publications, 2012). Often, parameter estimation problems of parameter-dependent PDEs involve multiple right-hand sides. The computational cost and memory requirements of such problems increase linearly with the number of right-hand sides. For many applications this is the main bottleneck of the computation. In this paper we show that problems with multiple right-hand sides can be reformulated as stochastic programming problems by combining the right-hand sides into a few "simultaneous" sources. This effectively reduces the cost of the forward problem and results in problems that are much cheaper to solve. We discuss two solution methodologies, namely sample average approximation and stochastic approximation. To illustrate the effectiveness of our approach we present two model problems, direct current resistivity and seismic tomography.
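The identity behind simultaneous sources can be verified quickly: with independent ±1 weights w, the misfit summed over all right-hand sides equals the expected misfit of a single randomly weighted combined source, so one combined forward solve gives an unbiased estimate of the full objective. The toy check below uses an illustrative linear residual matrix rather than an actual PDE forward map.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rhs, n_obs = 12, 40
# Residuals P u_i(m) - d_i for each of the n_rhs sources, stacked as rows.
R = rng.standard_normal((n_rhs, n_obs))

full_misfit = np.sum(R ** 2)   # cost of evaluating all n_rhs residuals

# Stochastic estimate: each sample is one "simultaneous source".
n_trials = 6000
acc = 0.0
for _ in range(n_trials):
    w = rng.choice([-1.0, 1.0], size=n_rhs)   # Rademacher source weights
    acc += np.sum((w @ R) ** 2)               # misfit of combined residual
sa_estimate = acc / n_trials                  # unbiased for full_misfit
```

In practice each sample of `w @ R` requires only one forward solve with the weighted combined source, which is where the cost reduction over solving all right-hand sides separately comes from.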
- Efficient 𝐻₂-Based Parametric Model Reduction via Greedy Search. Cooper, Jon Carl (Virginia Tech, 2021-01-19). Dynamical systems are mathematical models of physical phenomena widely used throughout the world today. When a dynamical system is too large to use effectively, we turn to model reduction to obtain a smaller dynamical system that preserves the behavior of the original. In many cases these models depend on one or more parameters other than time, which leads to the field of parametric model reduction. Constructing a parametric reduced-order model (ROM) is not an easy task, and for very large parametric systems it can be difficult to know how well a ROM models the original system, since assessing this usually involves many computations with the full-order system, which is precisely what we want to avoid. Building on efficient 𝐻-infinity approximations, we develop a greedy algorithm for efficiently modeling large-scale parametric dynamical systems in an 𝐻₂ sense. We demonstrate the effectiveness of this greedy search on a fluid problem, a mechanics problem, and a thermal problem. We also investigate Bayesian optimization for solving the optimization subproblem, and conclude by extending this algorithm to MIMO systems.
- Galerkin Projections Between Finite Element Spaces. Thompson, Ross Anthony (Virginia Tech, 2015-06-17). Adaptive mesh refinement schemes are used to find accurate low-dimensional approximating spaces when solving elliptic PDEs with Galerkin finite element methods. For nonlinear PDEs, solving the nonlinear problem with Newton's method requires an initial guess of the solution on a refined space, which can be found by interpolating the solution from a previous refinement. Improving the accuracy of the representation of the converged solution computed on a coarse mesh for use as an initial guess on the refined mesh may reduce the number of Newton iterations required for convergence. In this thesis, we present an algorithm to compute an orthogonal L^2 projection between two-dimensional finite element spaces constructed from a triangulation of the domain. Furthermore, we present numerical studies that investigate the efficiency of using this algorithm to solve various nonlinear elliptic boundary value problems.
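For a single piecewise-linear space, the L^2 projection reduces to a mass-matrix solve, M c = b with M_ij = ∫ φ_i φ_j and b_i = ∫ φ_i f. The sketch below is a one-dimensional analogue of the thesis's triangulated 2D setting, with an illustrative uniform mesh and a two-point Gauss quadrature.

```python
import numpy as np

def l2_project(f, nodes):
    """L2-project f onto the piecewise-linear FE space on a uniform 1D mesh."""
    n = len(nodes) - 1
    h = nodes[1] - nodes[0]
    M = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # 2-point Gauss rule on [-1, 1]
    for e in range(n):
        xl, xr = nodes[e], nodes[e + 1]
        # Local mass matrix of the two hat functions on this element.
        M[e:e + 2, e:e + 2] += (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
        for g in gp:
            x = 0.5 * (xl + xr) + 0.5 * h * g   # mapped quadrature point
            phi = np.array([(xr - x) / h, (x - xl) / h])
            b[e:e + 2] += 0.5 * h * phi * f(x)  # quadrature weight is h/2
    return np.linalg.solve(M, b)

nodes = np.linspace(0.0, 1.0, 11)
c = l2_project(lambda x: x, nodes)   # f(x) = x already lies in the space
```

Since f(x) = x belongs to the piecewise-linear space, the projection reproduces it exactly and the coefficients equal the nodal values, which makes a convenient correctness check.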
- Integrating Machine Learning Into Process-Based Modeling to Predict Ammonia Losses From Stored Liquid Dairy Manure. Genedy, Rana Ahmed Kheir (Virginia Tech, 2023-06-16). Storing manure on dairy farms is essential for maximizing its fertilizer value, reducing management costs, and minimizing potential environmental pollution challenges. However, ammonia loss through volatilization during storage remains a challenge. Quantifying these losses is necessary to inform decision-making processes to improve manure management and to design ammonia mitigation strategies. In 2003, the National Research Council recommended using process-based models to estimate emissions of pollutants, such as ammonia, from animal feeding operations. While much progress has been made to meet this call, their accuracy is still limited because of inadequate values of manure properties such as heat and mass transfer coefficients. Additionally, process-based models lack realistic estimates of manure temperature; they use ambient air temperature as a surrogate, which was found to underestimate atmospheric emissions during storage. This study uses the unique abilities of machine learning algorithms to address some of the challenges of process-based modeling. Firstly, ammonia concentrations, manure temperature, and local meteorological factors were measured at three dairy farms with different manure management practices and storage types. These data were used to estimate the influence of manure characteristics and meteorological factors on the trend of ammonia emissions. Secondly, the data were subjected to four data-driven machine learning algorithms and a physics-informed neural network (PINN) to predict manure temperature. Finally, a deep-learning approach that combines process-based modeling and long short-term memory (LSTM) recurrent neural networks was introduced to estimate ammonia loss from dairy manure during storage.
This method involves inverse problem-solving to estimate the heat and mass transfer coefficients for ammonia transport and emission from stored manure using the hyperparameter optimization tool Optuna. Results show that ammonia flux patterns mirrored manure temperature more closely than ambient air temperature, with wind speed and crust thickness significantly influencing ammonia emissions. The data-driven machine learning models used to estimate ammonia emissions had high predictive ability; however, their generalization accuracy was poor. In contrast, the PINN model had superior generalization accuracy, with R² exceeding 0.70 during the testing phase, compared to -0.03 and 0.66 for the finite-element heat transfer model and the data-driven neural network, respectively. In addition, optimizing the process-based model parameters significantly improved performance. Finally, the physics-informed LSTM has the potential to replace conventional process-based models because of its computational efficiency and because it does not require extensive data collection. The outcomes of this study contribute to precision agriculture, specifically the design of suitable on-farm strategies to minimize nutrient loss and greenhouse gas emissions during manure storage periods.
- Linear Parameter Uncertainty Quantification using Surrogate Gaussian Processes. Macatula, Romcholo Yulo (Virginia Tech, 2020-07-21). We consider uncertainty quantification using surrogate Gaussian processes. We take a previous sampling algorithm and provide a closed-form expression for the resulting posterior distribution. We extend the method to weighted least squares and to a Bayesian approach, both with closed-form expressions for the resulting posterior distributions. We test the methods on 1D deconvolution and 2D tomography. Our new methods improve on the previous algorithm; however, they fall short of a typical Bayesian inference method in some aspects.
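The closed-form Gaussian process posterior at the heart of such surrogate approaches is standard: with training data (X, y), noise variance σ², and kernel k, the posterior mean is k(X*, X)(k(X, X) + σ²I)⁻¹y. The sketch below uses a squared-exponential kernel with illustrative hyperparameters and test function (assumptions, not the thesis's surrogate).

```python
import numpy as np

def gp_posterior(X, y, Xs, length=0.15, sigma2=1e-6):
    """Posterior mean/cov of a zero-mean GP with an RBF kernel and noisy data."""
    def k(A, B):
        d = A[:, None] - B[None, :]               # pairwise distances
        return np.exp(-0.5 * (d / length) ** 2)
    K = k(X, X) + sigma2 * np.eye(len(X))         # noisy training covariance
    Ks = k(X, Xs)                                  # train/test cross-covariance
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = k(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks)
    return mean, cov

X = np.linspace(0.0, 1.0, 8)
y = np.sin(2.0 * np.pi * X)
mean, cov = gp_posterior(X, y, X)   # evaluate back at the training points
```

With near-zero observation noise the posterior mean interpolates the training data, and the posterior covariance collapses there; both facts follow directly from the closed-form expressions above.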
- Linking neuronal brain activity to the glucose metabolism. Göbel, Britta; Oltmanns, Kerstin M.; Chung, Matthias (2013-08-29). Background: Energy homeostasis ensures the functionality of the entire organism. The human brain as a missing link in the global regulation of the complex whole body energy metabolism is subject to recent investigation. The goal of this study is to gain insight into the influence of neuronal brain activity on cerebral and peripheral energy metabolism. In particular, the tight link between brain energy supply and metabolic responses of the organism is of interest. We aim to identify regulatory elements of the human brain in whole body energy homeostasis. Methods: First, we introduce a general mathematical model describing the human whole body energy metabolism. It takes into account the two central roles of the brain in terms of energy metabolism: the brain is considered as an energy consumer as well as a regulatory instance. Second, we validate our mathematical model with experimental data. Cerebral high-energy phosphate content and peripheral glucose metabolism are measured in healthy men upon neuronal activation induced by transcranial direct current stimulation versus sham stimulation. By parameter estimation we identify model parameters that provide insight into the underlying neurophysiological processes. The identified parameters reveal effects of neuronal activity on regulatory mechanisms of systemic glucose metabolism. Results: Our examinations support the view that the brain increases its glucose supply upon neuronal activation. The results indicate that the brain supplies itself with energy according to its needs, reflecting the preeminence of cerebral energy supply. This mechanism ensures balanced cerebral energy homeostasis. Conclusions: The hypothesis of the central role of the brain in whole body energy homeostasis as an active controller is supported.
- Mathematical Modeling of Dengue Viral Infection. Nikin-Beers, Ryan Patrick (Virginia Tech, 2014-06-06). In recent years, dengue viral infection has become one of the most widely-spread mosquito-borne diseases in the world, with an estimated 50-100 million cases annually, resulting in 500,000 hospitalizations. Due to the nature of the immune response to each of the four serotypes of dengue virus, secondary infections put patients at higher risk for more severe infection than primary infections. The current hypothesis for this phenomenon is antibody-dependent enhancement, where strain-specific antibodies from the primary infection enhance infection by a heterologous serotype. To determine the mechanisms responsible for the increase in disease severity, we develop mathematical models of within-host virus-cell interaction, epidemiological models of virus transmission, and a combination of the within-host and between-host models. The main results of this thesis focus on the within-host model. We model the effects of antibody responses against primary and secondary virus strains. We find that secondary infections lead to a reduction of virus removal. This differs slightly from the current antibody-dependent enhancement hypothesis, which suggests that the rate of virus infectivity is higher during secondary infections due to antibody failure to neutralize the virus. We use the results from the within-host model in an epidemiological multi-scale model. We start by constructing a two-strain SIR model and vary the parameters to account for the effect of antibody-dependent enhancement.
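One common way to encode antibody-dependent enhancement in a two-strain SIR skeleton is an enhancement factor φ that scales the force of infection acting on individuals recovered from the other strain. The sketch below is an illustrative formulation only; the compartments, parameter values, and forward-Euler integration are assumptions, not the thesis's model.

```python
import numpy as np

def two_strain_sir(beta1, beta2, gamma, phi, dt=0.01, t_end=200.0):
    """Euler-integrate a two-strain SIR model with ADE factor phi >= 1.
    State: S, I1, I2 (primary infections), R1, R2 (strain-specific recovered),
    J1, J2 (enhanced secondary infections), R (fully recovered)."""
    y = np.array([0.999, 0.0005, 0.0005, 0.0, 0.0, 0.0, 0.0, 0.0])
    for _ in range(int(t_end / dt)):
        S, I1, I2, R1, R2, J1, J2, R = y
        dS = -beta1 * S * I1 - beta2 * S * I2
        dI1 = beta1 * S * I1 - gamma * I1
        dI2 = beta2 * S * I2 - gamma * I2
        dR1 = gamma * I1 - phi * beta2 * R1 * I2   # enhanced 2nd infection
        dR2 = gamma * I2 - phi * beta1 * R2 * I1
        dJ1 = phi * beta1 * R2 * I1 - gamma * J1
        dJ2 = phi * beta2 * R1 * I2 - gamma * J2
        dR = gamma * (J1 + J2)
        y = y + dt * np.array([dS, dI1, dI2, dR1, dR2, dJ1, dJ2, dR])
    return y

final = two_strain_sir(beta1=0.5, beta2=0.4, gamma=0.2, phi=2.0)
```

Because the right-hand sides sum to zero, the total population is conserved, which provides a simple sanity check on the integration; varying φ above 1 is how this sketch mimics the ADE effect described in the abstract.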
- Mathematical Models of Hepatitis B Virus Dynamics during Antiviral Therapy. Carracedo Rodriguez, Andrea (Virginia Tech, 2016-04-21). Antiviral therapy for patients infected with hepatitis B virus is only partially efficient. There is high demand in the field for understanding the connections between the virus, immune responses, short-term and long-term drug efficacy, and the overall health of the liver. A mathematical model was introduced in 2009 to help elucidate the host-virus dynamics after the start of therapy. The model allows the study of complicated viral patterns observed in HBV patients. In our research, we analyze this model to determine the biological markers (e.g., liver proliferation, immune responses, and drug efficacy) that determine the different decay patterns. We also investigate how such markers affect the length of therapy and the amount of liver damage.
- Mathematical Models of Immune Responses to Infectious Diseases. Erwin, Samantha H. (Virginia Tech, 2017-04-04). In this dissertation, we investigate the mechanisms behind diseases and the immune responses required for successful disease resolution in three projects: i) a study of HIV and HPV co-infection, ii) a germinal center dynamics model, and iii) a study of monoclonal antibody therapy. We predict that the condition leading to HPV persistence during HIV/HPV co-infection is the permissive immune environment created by HIV, rather than the direct HIV/HPV interaction. In the second project, we develop a germinal center model to understand the mechanisms that lead to the formation of potent long-lived plasma cells. We predict that T follicular helper cells are a limiting resource and present possible mechanisms that can overcome this limitation in the presence of non-mutating and mutating antigen. Finally, we develop a pharmacokinetic model of 3BNC117 antibody dynamics and HIV viral dynamics following antibody therapy. We fit the models to clinical trial data and conclude that antibody binding is delayed and that the combined effects of initial CD4 T cell count, initial HIV levels, and virus production are strong indicators of a good response to antibody immunotherapy.
- Numerical Methods for Separable Nonlinear Inverse Problems with Constraint and Low Rank. Cho, Taewon (Virginia Tech, 2017-11-20). Today there are many applications of inverse problems, in areas ranging from astronomy to geoscience. For example, image reconstruction and deblurring require methods for solving inverse problems. Since these problems are subject to many factors and to noise, we cannot simply apply general inversion methods. Furthermore, in the problems of interest the number of unknown variables is huge, and some may depend nonlinearly on the data, so that we must solve nonlinear problems. Solving nonlinear problems is quite different from, and significantly more challenging than, solving linear inverse problems, and we need more sophisticated methods for these kinds of problems.
- Optimizing Feeding Efficiency in Dairy Cows Using a Precision Feeding System. Marra Campos, Leticia (Virginia Tech, 2024-08-26). Current feeding strategies aim to maximize efficiency at the pen level. However, feed intake varies across animals and in response to diet composition, making it difficult to capture these variations and control feeding effectively. A precision feeding system is required to feed animals individually, continuously monitor responses, and make timely adjustments to feed tailoring. Such a system would efficiently integrate dairy operations to enhance profitability and reduce their environmental footprint. Thus, the objectives of this dissertation were to build, test, and apply a precision feeding system able to tailor feeding strategies to animals more precisely and to more closely match their individual requirements. In Chapter 3, we describe the precision feeding system framework using directional data streams. The system integrates real-time farm data, segmented into data-analytic modules for independent testing and troubleshooting. It provides feeding instructions to automatic feeders and generates animal and financial monitoring reports. In Chapter 4, we describe the "Animal Performance" system module. This study developed a predictive model to estimate individual dry matter intake (DMI) by integrating markers, animal characteristics, dietary nutrient concentrations, and chewing sensor data. The performance of the developed model was then assessed and contrasted with the NASEM (2021) DMI equations. By incorporating covariates derived from short-term use of external and internal markers, we demonstrated greater accuracy of DMI predictions when using a fixed effects model, supporting its predictive capabilities for further application. In Chapter 5, we describe the "Diet Optimization" system module, used to maximize profit by optimizing rations using a compact, vectorized version of NASEM (2021) developed for this work.
The study aimed to simulate optimized diets, evaluate the economic impact of feeding individually optimized (IND) diets, compare feed costs and income over feed cost (IOFC) for optimized group diets derived by clustering (CLU), and compare optimized diets against pen-average (PEN) solutions. The results showed that IND diets had lower costs, higher milk production, and increased IOFC compared to CLU diets. Additionally, both IND and CLU diets outperformed PEN solutions. This work established methods for deriving efficient diet solutions for individual animals and using clustering techniques for more precise pen-level feeding. In Chapter 6, we describe the application of the "Animal Performance", "Diet Optimization", and "Nutrient Titration" system modules. The first applied the DMI model described in Chapter 4 to the experimental data. The second utilized optimized diets generated by the optimizer developed in Chapter 5, with additional algorithm updates. The third investigated individual milk true protein production responses of dairy cows to varying levels of metabolizable protein (MP) and rumen-protected amino acids (RPAA) using automatic feeding systems and ranked animals based on their individual gross milk protein efficiencies. Results demonstrated heterogeneous animal responses across MP and RPAA levels, ranging from linear and quadratic to no response, emphasizing the necessity of addressing individual variability within a common pen. High-efficiency animals behaved consistently across MP treatments with lower variability, while low-efficiency animals showed high variability but consistently remained in the bottom efficiency rank. In conclusion, the precision feeding system demonstrates a true capability to tailor nutrient delivery to individual cows, maximizing economic and environmental benefits, and sets the stage for future research focused on further refinement and automation of these technologies.
- Parameter Estimation Methods for Ordinary Differential Equation Models with Applications to MicrobiologyKrueger, Justin Michael (Virginia Tech, 2017-08-04)The compositions of in-host microbial communities (microbiota) play a significant role in host health, and a better understanding of the microbiota's role in a host's transition from health to disease or vice versa could lead to novel medical treatments. One of the first steps toward this understanding is modeling interaction dynamics of the microbiota, which can be exceedingly challenging given the complexity of the dynamics and difficulties in collecting sufficient data. Methods such as principal differential analysis, dynamic flux estimation, and others have been developed to overcome these challenges for ordinary differential equation models. Despite their advantages, these methods are still vastly underutilized in mathematical biology, and one potential reason for this is their sophisticated implementation. While this work focuses on applying principal differential analysis to microbiota data, we also provide comprehensive details regarding the derivation and numerics of this method. For further validation of the method, we demonstrate the feasibility of principal differential analysis using simulation studies and then apply the method to intestinal and vaginal microbiota data. In working with these data, we capture experimentally confirmed dynamics while also revealing potential new insights into those dynamics. We also explore how we find the forward solution of the model differential equation in the context of principal differential analysis, which amounts to a least-squares finite element method. We provide alternative ideas for how to use the least-squares finite element method to find the forward solution and share the insights we gain from highlighting this piece of the larger parameter estimation problem.
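The gradient-matching idea at the heart of principal differential analysis can be sketched in a few lines: smooth the observed trajectories, estimate their derivatives, and fit the ODE parameters by least squares. The example below is a minimal illustration on a logistic model, not the dissertation's implementation; because the model x' = a*x + b*x^2 is linear in (a, b), the fit reduces to a single linear solve.

```python
# Minimal gradient-matching sketch in the spirit of principal differential
# analysis (not the dissertation's implementation): take data, estimate
# derivatives, then fit ODE parameters by least squares.  The logistic model
# x' = a*x + b*x**2 is linear in (a, b), so the fit is one linear solve.
import numpy as np

r, K = 0.8, 10.0                # "true" parameters: a = r = 0.8, b = -r/K = -0.08
t = np.linspace(0.0, 10.0, 201)
x = K / (1.0 + (K / 0.5 - 1.0) * np.exp(-r * t))   # exact logistic trajectory

dxdt = np.gradient(x, t)        # stand-in for the derivative of a smoothed fit
A = np.column_stack([x, x**2])  # regressors: the model is linear in (a, b)
(a_hat, b_hat), *_ = np.linalg.lstsq(A, dxdt, rcond=None)

print(a_hat, b_hat)             # estimates of (0.8, -0.08)
```

With noisy microbiota data, the derivative would come from a penalized smoother rather than finite differences, and the regressors would be the (linear-in-parameter) right-hand side of the community model.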
- Parametric Dynamical Systems: Transient Analysis and Data Driven ModelingGrimm, Alexander Rudolf (Virginia Tech, 2018-07-02)Dynamical systems are a commonly used and studied tool for simulation, optimization, and design. In many applications, such as inverse problems, optimal control, shape optimization, and uncertainty quantification, these systems typically depend on a parameter. The need for high fidelity in the modeling stage leads to large-scale parametric dynamical systems. Since these models need to be simulated for a variety of parameter values, the computational burden they incur becomes increasingly prohibitive. To address these issues, parametric reduced models have gained popularity in recent years. We are interested in constructing parametric reduced models that represent the full-order system accurately over a range of parameters. First, we define a global joint error measure in the frequency and parameter domain to assess the accuracy of the reduced model. Then, by assuming a rational form for the reduced model with poles both in the frequency and parameter domain, we derive necessary conditions for an optimal parametric reduced model in this joint error measure. Similar to the nonparametric case, Hermite interpolation conditions at the reflected images of the poles characterize the optimal parametric approximant. This result extends the well-known interpolatory H2 optimality conditions of Meier and Luenberger to the parametric case. We also develop a numerical algorithm to construct locally optimal reduced models. The theory and algorithm are data-driven, in the sense that only function evaluations of the parametric transfer function are required, not access to the internal dynamics of the full model.
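The data-driven setting described above can be made concrete with a small sketch: an interpolatory or least-squares method only needs point evaluations of the parametric transfer function H(s, p) = C(sE - A(p))^{-1}B, never the internal matrices of the full model. The synthetic system below is purely illustrative, not a model from the dissertation.

```python
# Hedged sketch of the data-driven setting: the reduction framework only
# needs point evaluations of the parametric transfer function
#   H(s, p) = C (s*E - A(p))^{-1} B,
# never the internal matrices of the full model.  The system below is a
# small synthetic example, not a model from the dissertation.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A0 = -np.diag(rng.uniform(0.5, 5.0, n))   # stable nominal dynamics
A1 = 0.1 * rng.standard_normal((n, n))    # parameter-dependent part
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
E = np.eye(n)

def H(s, p):
    """Evaluate the transfer function at frequency s and parameter p."""
    return (C @ np.linalg.solve(s * E - (A0 + p * A1), B)).item()

# Frequency/parameter samples -- the only data an interpolatory or
# least-squares method (e.g., parametric Vector Fitting) would see.
freqs, params = (0.1, 1.0, 10.0), (0.0, 0.5, 1.0)
samples = np.array([[H(1j * w, p) for w in freqs] for p in params])
print(np.round(np.abs(samples), 4))
```

A reduced model is then fit to such a table of samples, which is exactly the fixed-data situation the parametric Vector Fitting extension addresses when re-evaluation of H is not possible.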
While this first framework operates on the continuous function level, assuming repeated transfer function evaluations are available, in some cases merely frequency samples might be given without an option to re-evaluate the transfer function at desired points; in other words, the function samples in parameter and frequency are fixed. In this case, we construct a parametric reduced model that minimizes a discretized least-squares error over the finite set of measurements. Toward this goal, we extend Vector Fitting (VF) to the parametric case, solving a global least-squares problem in both frequency and parameter. The output of this approach may still be a reduced model of moderate size. In this case, we perform a post-processing step that further reduces the output of the parametric VF approach using H2 optimal model reduction for a special parametrization. The final model inherits the parametric dependence of the intermediate model but is of smaller order. A special case of a parameter in a dynamical system is a delay in the model equation, arising, e.g., from a feedback loop, reaction time, delayed response, or various other physical phenomena. Modeling such a delay comes with several challenges for the mathematical formulation, analysis, and solution. We address the issue of transient behavior for scalar delay equations. Besides the choice of an appropriate measure, we analyze the impact of the coefficients of the delay equation on the finite-time growth, which can be arbitrarily large purely due to the influence of the delay.
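The transient-behavior question for scalar delay equations can be probed numerically. The sketch below simulates x'(t) = a*x(t) + b*x(t - tau) with forward Euler and a history buffer and records finite-time growth in the max-norm; the coefficient values are illustrative choices inside the asymptotically stable regime, not values from the dissertation.

```python
# Hedged numerical sketch of transient behavior for a scalar delay equation
#   x'(t) = a*x(t) + b*x(t - tau),
# simulated with forward Euler and a history buffer.  The coefficients are
# illustrative choices in the asymptotically stable regime, not values
# taken from the dissertation.
import numpy as np

a, b, tau = 0.5, -1.0, 1.0
dt, T = 1e-3, 30.0
lag = int(round(tau / dt))      # number of steps spanned by the delay
n = int(round(T / dt))

# x[0:lag] holds the constant history phi(t) = 1 on [-tau, 0); x[lag] = x(0).
x = np.ones(lag + n + 1)
for k in range(lag, lag + n):
    x[k + 1] = x[k] + dt * (a * x[k] + b * x[k - lag])

# Finite-time growth in the max-norm measure: the largest excursion of the
# trajectory relative to its initial size, before asymptotic decay sets in.
growth = np.max(np.abs(x[lag:])) / abs(x[lag])
print(growth, x[-1])
```

Sweeping a and b over such simulations traces how the delay coefficients shape the finite-time growth, which is the quantity the transient analysis above studies.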