Browsing by Author "Chung, Julianne"
Now showing 1 - 17 of 17
- Compressive Sensing Approaches for Sensor based Predictive Analytics in Manufacturing and Service Systems. Bastani, Kaveh (Virginia Tech, 2016-03-14). Recent advancements in sensing technologies offer new opportunities for quality improvement and assurance in manufacturing and service systems. The sensor advances provide a vast amount of data, accommodating quality improvement decisions such as fault diagnosis (root cause analysis) and real-time process monitoring. These quality improvement decisions are typically made based on predictive analysis of the sensor data, so-called sensor-based predictive analytics. Sensor-based predictive analytics encompasses a variety of statistical, machine learning, and data mining techniques to identify patterns between the sensor data and historical facts. Given these patterns, predictions are made about the quality state of the process, and corrective actions are taken accordingly. Although recent advances in sensing technologies have facilitated quality improvement decisions, they typically result in high-dimensional sensor data, making the use of sensor-based predictive analytics challenging due to its inherently intensive computation. This research begins in Chapter 1 by raising a question: are all these sensor data required for making effective quality improvement decisions, and if not, is there a way to systematically reduce the number of sensors without affecting the performance of the predictive analytics? Chapter 2 addresses this question by reviewing related research in the area of signal processing, namely compressive sensing (CS), a novel sampling paradigm that departs from the traditional sampling strategy following the Shannon-Nyquist rate. According to CS theory, a signal can be reconstructed from a reduced number of samples; this motivates developing CS-based approaches to facilitate predictive analytics using a reduced number of sensors.
The proposed research methodology in this dissertation encompasses CS approaches developed to deliver the following two major contributions: (1) CS sensing to reduce the number of sensors while capturing the most relevant information, and (2) CS predictive analytics to conduct predictive analysis on the reduced number of sensor data. The proposed methodology provides a generic framework that can be utilized for numerous real-world applications. However, for the sake of brevity, the validity of the proposed methodology has been verified with real sensor data associated with multi-station assembly processes (Chapters 3 and 4), additive manufacturing (Chapter 5), and wearable sensing systems (Chapter 6). Chapter 7 summarizes the contributions of the research and outlines potential future research directions with applications to big data analytics.
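The central CS claim in this abstract, that a sparse signal can be recovered from far fewer measurements than unknowns, can be illustrated with a toy reconstruction. The sketch below uses orthogonal matching pursuit on synthetic data; it is a generic illustration of the sampling paradigm, not the dissertation's method, and all sizes are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a 3-sparse signal of length 64, observed through
# only 32 random linear measurements (half the ambient dimension).
n, m, k = 64, 32, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                  # compressed measurements

# Orthogonal matching pursuit: greedily add the column most correlated
# with the residual, then refit the coefficients on the chosen support.
support, r = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ r)))
    if j not in support:
        support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
```

With a random Gaussian sensing matrix and this level of sparsity, the greedy reconstruction typically recovers the signal exactly, which is the property that motivates using far fewer sensors than unknowns.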
- Computational Advancements for Solving Large-scale Inverse Problems. Cho, Taewon (Virginia Tech, 2021-06-10). For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in many global and medical imaging problems, such as greenhouse gas tracking and computed tomography reconstruction. This dissertation describes advancements in computational tools for solving large-scale inverse problems and for uncertainty quantification. Oftentimes, inverse problems are ill-posed and large-scale. Iterative projection methods have dramatically reduced the computational costs of solving large-scale inverse problems, and regularization methods have been critical in obtaining stable estimations by incorporating prior information about the unknowns via Bayesian inference. However, by combining iterative projection methods and variational regularization methods, hybrid projection approaches, in particular generalized hybrid methods, create a powerful framework that can maximize the benefits of each method. In this dissertation, we describe various advancements and extensions of hybrid projection methods that we developed to address three recent open problems. First, we develop hybrid projection methods that incorporate mixed Gaussian priors, where we seek more sophisticated estimations in which the unknowns can be treated as random variables from a mixture of distributions. Second, we describe hybrid projection methods for mean estimation in a hierarchical Bayesian approach. By including more than one prior covariance matrix (e.g., mixed Gaussian priors) or estimating unknowns and hyper-parameters simultaneously (e.g., hierarchical Gaussian priors), we show that better estimations can be obtained.
Third, we develop computational tools for a respirometry system that incorporate various regularization methods for both linear and nonlinear respirometry inversions. For the nonlinear systems, blind deconvolution methods are developed, and prior knowledge of the nonlinear parameters is used to reduce the dimension of the nonlinear systems. Simulated and real-data experiments on the respirometry problems are provided. This dissertation provides advanced tools for computational inversion and uncertainty quantification.
- Computational tools for inversion and uncertainty estimation in respirometry. Cho, Taewon; Pendar, Hodjat; Chung, Julianne (PLoS, 2021-05-21). In many physiological systems, real-time endogenous and exogenous signals in living organisms provide critical information and interpretations of physiological functions; however, these signals or variables of interest are not directly accessible and must be estimated from noisy, measured signals. In this paper, we study an inverse problem of recovering gas exchange signals of animals placed in a flow-through respirometry chamber from measured gas concentrations. For large-scale experiments (e.g., long scans with a high sampling rate) that have many uncertainties (e.g., noise in the observations or an unknown impulse response function), this is a computationally challenging inverse problem. We first describe various computational tools that can be used for respirometry reconstruction and uncertainty quantification when the impulse response function is known. Then, we address the more challenging problem where the impulse response function is not known or only partially known. We describe nonlinear optimization methods for reconstruction, where both the unknown model parameters and the unknown signal are reconstructed simultaneously. Numerical experiments show the benefits and potential impacts of these methods in respirometry.
- Computationally efficient methods for large-scale atmospheric inverse modeling. Cho, Taewon; Chung, Julianne; Miller, Scot M.; Saibaba, Arvind K. (Copernicus, 2022-07-20). Atmospheric inverse modeling describes the process of estimating greenhouse gas fluxes or air pollution emissions at the Earth's surface using observations of these gases collected in the atmosphere. The launch of new satellites, the expansion of surface observation networks, and a desire for more detailed maps of surface fluxes have yielded numerous computational and statistical challenges for standard inverse modeling frameworks that were often originally designed with much smaller data sets in mind. In this article, we discuss computationally efficient methods for large-scale atmospheric inverse modeling and focus on addressing some of the main computational and practical challenges. We develop generalized hybrid projection methods, which are iterative methods for solving large-scale inverse problems, and specifically we focus on the case of estimating surface fluxes. These algorithms confer several advantages. They are efficient, in part because they converge quickly, exploit efficient matrix-vector multiplications, and do not require the inversion of any matrices. These methods are also robust, because they can accurately reconstruct surface fluxes; automatic, since regularization or covariance matrix parameters and stopping criteria can be determined as part of the iterative algorithm; and flexible, because they can be paired with many different types of atmospheric models. We demonstrate the benefits of generalized hybrid methods with a case study from NASA's Orbiting Carbon Observatory 2 (OCO-2) satellite. We then address the more challenging problem of solving the inverse model when the mean of the surface fluxes is not known a priori; we do so by reformulating the problem, thereby extending the applicability of hybrid projection methods to include hierarchical priors.
We further show that by exploiting mathematical relations provided by the generalized hybrid method, we can efficiently calculate an approximate posterior variance, thereby providing uncertainty information.
- Diagonal Estimation with Probing Methods. Kaperick, Bryan James (Virginia Tech, 2019-06-21). Probing methods for trace estimation of large, sparse matrices have been studied for several decades. In recent years, there has been some work to extend these techniques to instead estimate the diagonal entries of these systems directly. We extend some analysis of trace estimators to their corresponding diagonal estimators and propose a new class of deterministic diagonal estimators that are well suited to parallel architectures, along with heuristic arguments for the design choices in their construction. We conclude with numerical results on diagonal estimation and ordering problems, demonstrating the strengths of our newly developed methods alongside existing methods.
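The probing idea behind this line of work can be illustrated with the classical stochastic diagonal estimator (in the style of Bekas, Kokiopoulou, and Saad), which uses only matrix-vector products: averaging v ⊙ (Av) over random ±1 probe vectors v converges to diag(A). This is a generic sketch of the starting point, not the deterministic estimators proposed in the thesis, and the matrix below is synthetic.

```python
import numpy as np

def estimate_diagonal(matvec, n, num_probes, rng):
    """Estimate diag(A) from matrix-vector products only:
    diag(A) ~ (1/s) * sum_k v_k * (A v_k), with Rademacher probes v_k
    (entries +/-1, so the elementwise normalizer v_k*v_k is 1)."""
    acc = np.zeros(n)
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)
        acc += v * matvec(v)
    return acc / num_probes

rng = np.random.default_rng(0)
n = 200
# Synthetic test matrix: dominant diagonal plus small off-diagonal noise.
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * rng.standard_normal((n, n))
d = estimate_diagonal(lambda v: A @ v, n, num_probes=500, rng=rng)
```

The estimator's variance at entry i is governed by the off-diagonal mass of row i divided by the number of probes, which is why structured (deterministic) probe choices, exploiting sparsity or known structure, can do much better than random probes.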
- Global Structure of the Mantle Transition Zone Discontinuities and Site Response Effects in the Atlantic and Gulf Coastal Plain. Guo, Zhen (Virginia Tech, 2019-09-03). This thesis focuses on two different topics in seismology: imaging the global structures of the mantle transition zone discontinuities and studying the site response effects in the Atlantic and Gulf Coastal Plain. Global structures of the mantle transition zone discontinuities provide important constraints on thermal structures and dynamic processes in the mid mantle. In this thesis, global topographic structures of the 410- and 660-km discontinuities are obtained from finite-frequency tomography of SS precursors. The finite-frequency sensitivities of SS waves and precursors are calculated based on a single-scattering (Born) approximation and can be used for data selection. The new global models show a number of smaller-scale features that were absent in back-projection models. Good correlation between the mantle transition zone thickness and wave speed variations suggests dominantly thermal origins for the lateral variations in the transition zone. The high-resolution global models of the 410- and 660-km discontinuities in this thesis show strong positive correlation beneath the western North America and eastern Asia subduction zones, with both discontinuities occurring at greater depths. Wavespeed and anisotropy models support vertical variations in thermal structure in the mid mantle, suggesting return flows from the lower mantle occur predominantly in the vicinity of stagnant slabs and the region overlying the stagnant slabs. In oceanic regions, the two discontinuities show a weak anti-correlation, indicating the existence of a secondary global far-field return flow. The Atlantic and Gulf Coastal Plain is covered by extensive Cretaceous and Cenozoic marine sediments.
In this thesis, the site response effects of sediments in the Coastal Plain region relative to the reference condition outside that region are investigated using Lg and coda spectral ratios. The high-frequency attenuation factors (kappa) in the Coastal Plain are strongly correlated with the sediment thickness. At frequencies between 0.1 and 2.86 Hz, the Lg spectral ratio amplitudes are modeled as functions of frequency and thickness of the sediments in the Coastal Plain. Analysis of the residuals from the stochastic ground motion prediction method suggests that incorporating the site response effects as functions of sediment thickness may improve ground motion prediction models for the Coastal Plain region.
- Learning Hyperparameters for Inverse Problems by Deep Neural Networks. McDonald, Ashlyn Grace (Virginia Tech, 2023-05-08). Inverse problems arise in a wide variety of applications including biomedicine, environmental sciences, astronomy, and more. Computing reliable solutions to these problems requires the inclusion of prior knowledge in a process that is often referred to as regularization. Most regularization techniques require suitable choices of regularization parameters. In this work, we will describe new approaches that use deep neural networks (DNN) to estimate these regularization parameters. We will train multiple networks to approximate mappings from observation data to individual regularization parameters in a supervised learning approach. Once the networks are trained, we can efficiently compute regularization parameters for newly-obtained data by forward propagation through the DNNs. The network-obtained regularization parameters can be computed more efficiently and may even lead to more accurate solutions compared to existing regularization parameter selection methods. Numerical results for tomography demonstrate the potential benefits of using DNNs to learn regularization parameters.
- Numerical Methods for Separable Nonlinear Inverse Problems with Constraint and Low Rank. Cho, Taewon (Virginia Tech, 2017-11-20). Inverse problems have many applications, in areas ranging from astronomy to geoscience. For example, image reconstruction and deblurring require methods for solving inverse problems. Because these problems are subject to many factors and to noise, we cannot simply apply general inversion methods. Furthermore, in the problems of interest, the number of unknown variables is huge, and some may depend nonlinearly on the data, so that we must solve nonlinear problems. Solving nonlinear problems is quite different from, and significantly more challenging than, solving linear inverse problems, and we need more sophisticated methods for these kinds of problems.
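Separable structure, where the model is linear in some parameters and nonlinear in others, is commonly exploited by variable projection: eliminate the linear parameters in closed form and search only over the nonlinear ones. The sketch below does this for a hypothetical one-exponential model a·exp(-λt), using a simple grid search for the nonlinear parameter; it illustrates the general principle only, not the constrained low-rank methods of the thesis.

```python
import numpy as np

# Hypothetical data: a single decaying exponential plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 100)
a_true, lam_true = 2.0, 1.3
y = a_true * np.exp(-lam_true * t) + 0.01 * rng.standard_normal(t.size)

def projected_residual(lam):
    """For a fixed nonlinear parameter lam, the model a*phi(lam) is linear
    in a, so a has a closed-form least-squares value; return the residual
    of the 'projected' problem in lam alone, plus that optimal a."""
    phi = np.exp(-lam * t)
    a = (phi @ y) / (phi @ phi)
    return np.sum((y - a * phi) ** 2), a

# Grid search over the single nonlinear parameter (a real solver would
# use Gauss-Newton on the projected functional instead).
lams = np.linspace(0.1, 3.0, 300)
res = [projected_residual(l)[0] for l in lams]
lam_hat = lams[int(np.argmin(res))]
a_hat = projected_residual(lam_hat)[1]
```

The payoff is that the optimization runs over one nonlinear variable instead of two coupled ones, and the same elimination scales to many linear parameters via a linear least-squares solve.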
- On the Use of Arnoldi and Golub-Kahan Bases to Solve Nonsymmetric Ill-Posed Inverse Problems. Brown, Matthew Allen (Virginia Tech, 2015-02-20). Iterative Krylov subspace methods have proven to be efficient tools for solving linear systems of equations. In the context of ill-posed inverse problems, they tend to exhibit semiconvergence behavior, making it difficult to detect "inverted noise" and stop iterations before solutions become contaminated. Regularization methods such as spectral filtering methods use the singular value decomposition (SVD) and are effective at filtering inverted noise from solutions, but are computationally prohibitive on large problems. Hybrid methods apply regularization techniques to the smaller "projected problem" that is inherent to iterative Krylov methods at each iteration, thereby overcoming the semiconvergence behavior. Commonly, the Golub-Kahan bidiagonalization is used to construct a set of orthonormal basis vectors that span the Krylov subspaces from which solutions will be chosen, but seeking a solution in the orthonormal basis generated by the Arnoldi process (which is fundamental to the popular iterative method GMRES) has been of renewed interest recently. We discuss some of the positive and negative aspects of each process and use example problems to examine some qualities of the bases they produce. Computing optimal solutions in a given basis gives some insight into the performance of the corresponding iterative methods and how hybrid methods can contribute.
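The Arnoldi process mentioned above builds an orthonormal Krylov basis for a nonsymmetric operator with one matrix-vector product per step (avoiding the transpose products that Golub-Kahan requires). A minimal generic sketch, with illustrative sizes:

```python
import numpy as np

def arnoldi(A, b, steps):
    """Arnoldi process: orthonormal basis Q for the Krylov space
    span{b, Ab, ..., A^(steps-1) b} and upper Hessenberg H satisfying
    A @ Q[:, :steps] = Q @ H (Q has one extra column)."""
    n = A.shape[0]
    Q = np.zeros((n, steps + 1)); H = np.zeros((steps + 1, steps))
    Q[:, 0] = b / np.linalg.norm(b)
    for k in range(steps):
        w = A @ Q[:, k]
        for j in range(k + 1):            # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ w
            w -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        Q[:, k + 1] = w / H[k + 1, k]
    return Q, H

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 40))         # nonsymmetric operator
b = rng.standard_normal(40)
Q, H = arnoldi(A, b, steps=8)
```

Note the trade-off the abstract alludes to: Arnoldi needs no A-transpose products but requires full orthogonalization against all previous vectors, whereas Golub-Kahan uses a short recurrence at the cost of working with both A and A^T.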
- Parameter Estimation Methods for Ordinary Differential Equation Models with Applications to Microbiology. Krueger, Justin Michael (Virginia Tech, 2017-08-04). The compositions of in-host microbial communities (microbiota) play a significant role in host health, and a better understanding of the microbiota's role in a host's transition from health to disease or vice versa could lead to novel medical treatments. One of the first steps toward this understanding is modeling interaction dynamics of the microbiota, which can be exceedingly challenging given the complexity of the dynamics and difficulties in collecting sufficient data. Methods such as principal differential analysis, dynamic flux estimation, and others have been developed to overcome these challenges for ordinary differential equation models. Despite their advantages, these methods are still vastly underutilized in mathematical biology, and one potential reason for this is their sophisticated implementation. While this work focuses on applying principal differential analysis to microbiota data, we also provide comprehensive details regarding the derivation and numerics of this method. For further validation of the method, we demonstrate the feasibility of principal differential analysis using simulation studies and then apply the method to intestinal and vaginal microbiota data. In working with these data, we capture experimentally confirmed dynamics while also revealing potential new insights into those dynamics. We also explore how we find the forward solution of the model differential equation in the context of principal differential analysis, which amounts to a least-squares finite element method. We provide alternative ideas for how to use the least-squares finite element method to find the forward solution and share the insights we gain from highlighting this piece of the larger parameter estimation problem.
- Primary/Soft Biometrics: Performance Evaluation and Novel Real-Time Classifiers. Alorf, Abdulaziz Abdullah (Virginia Tech, 2020-02-19). The relevance of faces in our daily lives is indisputable. We learn to recognize faces as newborns, and faces play a major role in interpersonal communication. The spectrum of computer vision research about face analysis includes, but is not limited to, face detection and facial attribute classification, which are the focus of this dissertation. The face is a primary biometric because by itself it reveals the subject's identity, while facial attributes (such as hair color and eye state) are soft biometrics because by themselves they do not reveal the subject's identity. In this dissertation, we proposed a real-time model for classifying 40 facial attributes, which preprocesses faces and then extracts 7 types of classical and deep features. These features were fused together to train 3 different classifiers. Our proposed model achieved an average accuracy of 91.93%, outperforming 7 state-of-the-art models. We also developed a real-time model for classifying the states of human eyes and mouth (open/closed), and the presence/absence of eyeglasses in the wild. Our method begins by preprocessing a face by cropping the regions of interest (ROIs), and then describing them using RootSIFT features. These features were used to train a nonlinear support vector machine for each attribute. Our eye-state classifier achieved the top performance, while our mouth-state and glasses classifiers were tied as the top performers with deep learning classifiers. We also introduced a new facial attribute related to Middle Eastern headwear (called igal) along with its detector. Our proposed idea was to detect the igal using a linear multiscale SVM classifier with a HOG descriptor. Thereafter, false positives were discarded using dense SIFT filtering, bag-of-visual-words decomposition, and nonlinear SVM classification.
Due to the similarity in real-life applications, we compared the igal detector with state-of-the-art face detectors; the igal detector significantly outperformed the face detectors, with the lowest false positives. We also fused the igal detector with a face detector to improve the detection performance. Face detection is the first process in any facial attribute classification pipeline. As a result, we reported a novel study that evaluates the robustness of current face detectors with respect to: (1) diffraction blur, (2) image scale, and (3) the IoU classification threshold. This study should enable users to pick a robust face detector for their intended applications.
- Recovering signals in physiological systems with large datasets. Pendar, Hodjat; Socha, John J.; Chung, Julianne (Company of Biologists, 2016-08-15). In many physiological studies, variables of interest are not directly accessible, requiring that they be estimated indirectly from noisy measured signals. Here, we introduce two empirical methods to estimate the true physiological signals from indirectly measured, noisy data. The first method is an extension of Tikhonov regularization to large-scale problems, using a sequential update approach. In the second method, we improve the conditioning of the problem by assuming that the input is uniform over a known time interval, and then use a least-squares method to estimate the input. These methods were validated computationally and experimentally by applying them to flow-through respirometry data. Specifically, we infused CO2 in a flow-through respirometry chamber in a known pattern, and used the methods to recover the known input from the recorded data. The results from these experiments indicate that these methods are capable of subsecond accuracy. We also applied the methods on respiratory data from a grasshopper to investigate the exact timing of abdominal pumping, spiracular opening, and CO2 emission. The methods can be used more generally for input estimation of any linear system.
- Recovering signals in physiological systems with large datasets. Pendar, Hodjat (Virginia Tech, 2020-09-11). In many physiological studies, variables of interest are not directly accessible, requiring that they be estimated indirectly from noisy measured signals. Here, we introduce two empirical methods to estimate the true physiological signals from indirectly measured, noisy data. The first method is an extension of Tikhonov regularization to large-scale problems, using a sequential update approach. In the second method, we improve the conditioning of the problem by assuming that the input is uniform over a known time interval, and then we use a least-squares method to estimate the input. These methods were validated computationally and experimentally by applying them to flow-through respirometry data. Specifically, we infused CO2 in a flow-through respirometry chamber in a known pattern, and used the methods to recover the known input from the recorded data. The results from these experiments indicate that these methods are capable of sub-second accuracy. We also applied the methods on respiratory data from a grasshopper to investigate the exact timing of abdominal pumping, spiracular opening, and CO2 emission. The methods can be used more generally for input estimation of any linear system.
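The input-estimation problem described in the two entries above, recovering an emission signal that has been smeared by a chamber's impulse response, is a deconvolution, and Tikhonov regularization is the classical tool for it. The sketch below uses a hypothetical exponential washout kernel and a first-difference penalty; all parameters are illustrative, and this is the basic Tikhonov formulation rather than the sequential large-scale extension developed in these works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order washout kernel for a flow-through chamber:
# the recorded trace is the true emission signal convolved with an
# exponential impulse response, plus measurement noise.
dt, n = 0.1, 300
t = np.arange(n) * dt
kernel = np.exp(-t / 2.0) * dt

A = np.zeros((n, n))                       # lower-triangular convolution matrix
for i in range(n):
    A[i, :i + 1] = kernel[:i + 1][::-1]

x_true = ((t >= 5) & (t <= 10)).astype(float)   # a square emission pulse
b = A @ x_true + 1e-3 * rng.standard_normal(n)  # smeared, noisy recording

# Tikhonov regularization with a first-difference operator L, penalizing
# rough solutions:  min_x ||A x - b||^2 + lam^2 ||L x||^2.
L = np.eye(n) - np.eye(n, k=-1)
lam = 0.05                                 # illustrative; chosen from data in practice
x_hat = np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)
```

Direct inversion of A would amplify the noise; the difference penalty trades a small amount of edge smearing for a stable estimate of the pulse.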
- Recycling Techniques for Sequences of Linear Systems and Eigenproblems. Carr, Arielle Katherine Grim (Virginia Tech, 2021-07-09). Sequences of matrices arise in many applications in science and engineering. In this thesis we consider matrices that are closely related (or closely related in groups), and we take advantage of the small differences between them to efficiently solve sequences of linear systems and eigenproblems. Recycling techniques, such as recycling preconditioners or subspaces, are popular approaches for reducing computational cost. In this thesis, we introduce two novel approaches for recycling previously computed information for a subsequent system or eigenproblem, and demonstrate good results for sequences arising in several applications. Preconditioners are often essential for fast convergence of iterative methods. However, computing a good preconditioner can be very expensive, and when solving a sequence of linear systems, we want to avoid computing a new preconditioner too often. Instead, we can recycle a previously computed preconditioner, for which we have good convergence behavior of the preconditioned system. We propose an update technique we call the sparse approximate map, or SAM update, that approximately maps one matrix to another matrix in our sequence. SAM updates are very cheap to compute and apply, preserve good convergence properties of a previously computed preconditioner, and help to amortize the cost of that preconditioner over many linear solves. When solving a sequence of eigenproblems, we can reduce the computational cost of constructing the Krylov space starting with a single vector by warm-starting the eigensolver with a subspace instead. We propose an algorithm to warm-start the Krylov-Schur method using a previously computed approximate invariant subspace. We first compute the approximate Krylov decomposition for a matrix with minimal residual, and use this space to warm-start the eigensolver.
We account for the residual matrix when expanding, truncating, and deflating the decomposition and show that the norm of the residual monotonically decreases. This method is effective in reducing the total number of matrix-vector products, and computes an approximate invariant subspace that is as accurate as the one computed with standard Krylov-Schur. In applications where the matrix-vector products require an implicit linear solve, we incorporate Krylov subspace recycling. Finally, in many applications, sequences of matrices take the special form of the sum of the identity matrix, a very low-rank matrix, and a small-in-norm matrix. We consider convergence rates for GMRES applied to these matrices by identifying the sources of sensitivity.
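The sparse approximate map idea above can be sketched in a much-simplified form: restrict the map N to a diagonal sparsity pattern, so that minimizing ||A2 N - A1|| in the Frobenius norm decouples into one scalar least-squares problem per column. This is an illustration of the principle only (with synthetic matrices and an artificially easy pattern), not the SAM algorithm as developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
A1 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
# A nearby matrix in the "sequence": a column scaling of A1 plus a small
# perturbation, mimicking slowly changing systems.
A2 = A1 @ np.diag(1.0 + 0.2 * rng.standard_normal(n)) \
     + 0.01 * rng.standard_normal((n, n))

# Diagonal-pattern map N minimizing ||A2 N - A1||_F: with N diagonal the
# problem decouples, and column j needs only one scalar least-squares solve.
N = np.diag([(A2[:, j] @ A1[:, j]) / (A2[:, j] @ A2[:, j])
             for j in range(n)])

# A2 @ N is at least as close to A1 as A2 itself (N = I is a candidate),
# so a preconditioner computed for A1 can be reused on the mapped
# operator A2 @ N instead of being recomputed from scratch.
```

Richer sparsity patterns for N make the map more accurate while keeping each column's least-squares problem small, which is what keeps the update cheap relative to recomputing a preconditioner.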
- Reusing and Updating Preconditioners for Sequences of Matrices. Grim-McNally, Arielle Katherine (Virginia Tech, 2015-06-15). For sequences of related linear systems, the computation of a preconditioner for every system can be expensive. Often a fixed preconditioner is used, but this may not be effective as the matrix changes. This research examines the benefits of both reusing and recycling preconditioners, with special focus on ILUTP and factorized sparse approximate inverses, and proposes an update that we refer to as a sparse approximate map, or SAM update. Analysis of the residual and eigenvalues of the map will be provided. Applications include the Quantum Monte Carlo method, model reduction, oscillatory hydraulic tomography, diffuse optical tomography, and Helmholtz-type problems.
- Row-Action Methods for Massive Inverse Problems. Slagel, Joseph Tanner (Virginia Tech, 2019-06-19). Numerous scientific applications have seen the rise of massive inverse problems, where there is too much data to implement an all-at-once strategy to compute a solution. Additionally, tools for regularizing ill-posed inverse problems are infeasible when the problem is too large. This thesis focuses on the development of row-action methods, which can be used to iteratively solve inverse problems when it is not possible to access the entire data set or forward model simultaneously. We investigate these techniques for linear inverse problems and for separable, nonlinear inverse problems where the objective function is nonlinear in one set of parameters and linear in another set of parameters. For the linear problem, we perform a convergence analysis of these methods, which shows favorable asymptotic and initial convergence properties, as well as a trade-off between convergence rate and precision of iterates that is based on the step size. These row-action methods can be interpreted as stochastic Newton and stochastic quasi-Newton approaches on a reformulation of the least squares problem, and they can be analyzed as limited-memory variants of the recursive least squares algorithm. For ill-posed problems, we introduce sampled regularization parameter selection techniques, which include sampled variants of the discrepancy principle, the unbiased predictive risk estimator, and generalized cross-validation. We demonstrate the effectiveness of these methods using examples from super-resolution imaging, tomography reconstruction, and image classification.
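The classical Kaczmarz method is the prototypical row-action iteration: each update touches a single row of the system, projecting the current iterate onto that row's hyperplane, so the full matrix never needs to be in memory at once. A minimal sketch on a small consistent system (sizes are illustrative; the thesis's methods build sampling and regularization on top of this basic idea):

```python
import numpy as np

def kaczmarz(A, b, sweeps, x0=None):
    """Cyclic Kaczmarz iteration for A x = b: each inner update reads one
    row a_i of A and projects the iterate onto {x : a_i @ x = b_i}."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((120, 40))     # tall, consistent system
x_true = rng.standard_normal(40)
b = A @ x_true
x = kaczmarz(A, b, sweeps=50)
```

For consistent systems the iterates converge to a solution; randomizing the row order and damping the step, as in the stochastic variants the thesis analyzes, controls the trade-off between convergence rate and the precision of the iterates.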
- The Sherman Morrison Iteration. Slagel, Joseph Tanner (Virginia Tech, 2015-06-17). The Sherman Morrison iteration method is developed to solve regularized least squares problems. Notions of pivoting and splitting are considered to make the method more robust. The Sherman Morrison iteration method is shown to be effective when dealing with an extremely underdetermined least squares problem. The performance of the Sherman Morrison iteration is compared to that of classic direct methods, as well as iterative methods, in a number of experiments. A specific Matlab implementation of the Sherman Morrison iteration is discussed, with Matlab codes for the method available in the appendix.
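The identity underlying this method is the Sherman-Morrison formula, which converts a solve with a rank-one-updated matrix A + u v^T into two solves with A itself: (A + u v^T)^{-1} b = A^{-1} b - A^{-1} u (v^T A^{-1} b) / (1 + v^T A^{-1} u). A minimal sketch in Python (the thesis itself works in Matlab), using a trivially invertible diagonal A for illustration:

```python
import numpy as np

def sherman_morrison_solve(A_inv_apply, u, v, b):
    """Solve (A + u v^T) x = b given only the action of A^{-1}, via
    Sherman-Morrison: x = A^{-1}b - A^{-1}u (v @ A^{-1}b)/(1 + v @ A^{-1}u).
    Requires 1 + v^T A^{-1} u != 0 (i.e., the update keeps A invertible)."""
    Ainv_b = A_inv_apply(b)
    Ainv_u = A_inv_apply(u)
    return Ainv_b - Ainv_u * (v @ Ainv_b) / (1.0 + v @ Ainv_u)

rng = np.random.default_rng(0)
n = 30
d = 1.0 + rng.random(n)                  # diagonal A: inverse is elementwise
u, v, b = (rng.standard_normal(n) for _ in range(3))
x = sherman_morrison_solve(lambda w: w / d, u, v, b)
```

Applied repeatedly, one rank-one term at a time, this is what lets a sequence of cheap solves with a simple matrix stand in for a direct solve with a dense regularized system.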