Browsing by Author "Watson, Layne T."
- Acoustic propagation in nonuniform circular ducts carrying near sonic mean flows. Kelly, Jeffrey J. (Virginia Tech, 1981-01-05). A linear model based on the wave-envelope technique is used to study the propagation of axisymmetric and spinning acoustic modes in hard-walled and lined nonuniform circular ducts carrying near sonic mean flows. This method is valid for large as well as small axial variations, as long as the mean flow does not separate. The wave-envelope technique is based on solving for the envelopes of the quasiparallel acoustic modes that exist in the duct instead of solving for the actual wave, thereby reducing the computational time and the round-off error encountered in purely numerical techniques. The influence of the throat Mach number, frequency, boundary-layer thickness, and liner admittance on both upstream and downstream propagation of acoustic modes is considered. A numerical procedure for the analysis of nonlinear acoustic propagation through nearly sonic mean flows, stable for cases of strong interaction, is also developed. This procedure combines the Adams-PECE integration scheme with the singular value decomposition scheme, and it does not develop the numerical instability associated with the Runge-Kutta and matrix inversion methods for nearly sonic duct flows. The numerical results show that an impedance condition can be satisfied at the duct exit and a corresponding solution obtained. The numerical results confirm that the nonlinearity intensifies the acoustic disturbance in the throat region, reduces the intensity of the fundamental frequency at the duct exit, and increases the reflections. This implies that the mode conversion properties of variable area ducts can reflect and focus the acoustic signal in the vicinity of the throat in high subsonic flows. The numerical results also indicate that a shock develops if certain limits on the input parameters are exceeded.
- An Active Set Algorithm for Tracing Parametrized Optima. Rakowska, Joanna; Haftka, Raphael T.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1990). Optimization problems often depend on parameters that define constraints or objective functions. It is often necessary to know the effect of a change in a parameter on the optimum solution. An algorithm is presented here for tracking paths of optimal solutions of inequality constrained nonlinear programming problems as a function of a parameter. The proposed algorithm employs homotopy zero-curve tracing techniques to track segments where the set of active constraints is unchanged. The transition between segments is handled by considering all possible sets of active constraints and eliminating nonoptimal ones based on the signs of the Lagrange multipliers and the derivatives of the optimal solutions with respect to the parameter.
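  The segment-tracking idea can be summarized with the standard parametric KKT system below; this is a generic textbook formulation, not quoted from the report.

  ```latex
  % On a segment where the active set A is fixed, the optimum path
  % (x(t), mu(t)) of min_x f(x,t) s.t. g(x,t) <= 0 satisfies:
  \begin{aligned}
  \nabla_x f\bigl(x(t),t\bigr)
    + \sum_{i\in\mathcal{A}} \mu_i(t)\,\nabla_x g_i\bigl(x(t),t\bigr) &= 0,\\
  g_i\bigl(x(t),t\bigr) &= 0, \qquad i \in \mathcal{A}.
  \end{aligned}
  % A segment ends when some multiplier mu_i(t) reaches zero or an
  % inactive constraint becomes active, matching the sign tests above.
  ```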
- An Adaptive Noise Filtering Algorithm for AVIRIS Data with Implications for Classification Accuracy. Phillips, Rhonda D.; Blinn, Christine E.; Watson, Layne T.; Wynne, Randolph H. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2008). This paper describes a new algorithm used to adaptively filter a remote sensing dataset based on signal-to-noise ratios (SNRs) once the maximum noise fraction (MNF) has been applied. This algorithm uses Hermite splines to calculate the approximate area underneath the SNR curve as a function of band number, and that area is used to place bands into “bins” with other bands having similar SNRs. A median filter with a variable-sized kernel is then applied to each band, with the same size kernel used for each band in a particular bin. The proposed adaptive filters are applied to a hyperspectral image generated by the AVIRIS sensor, and results are given for the identification of three different pine species located within the study area. The adaptive filtering scheme improves image quality as shown by estimated SNRs, and classification accuracies improved by more than 10% on the sample study area, indicating that the proposed methods improve the image quality, thereby aiding in species discrimination.
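  As a rough illustration of the binning and filtering scheme described in this abstract, a minimal Python sketch follows; the PCHIP spline (a Hermite-type interpolant), the equal-area bin edges, and the kernel sizes are assumptions made for the sketch, not the paper's exact choices.

  ```python
  # Hedged sketch: bin bands by area under a smoothed SNR curve, then
  # median-filter each band with a kernel size shared within its bin.
  import numpy as np
  from scipy.interpolate import PchipInterpolator  # Hermite-type spline
  from scipy.ndimage import median_filter

  def adaptive_filter(cube, snr, n_bins=4, kernels=(1, 3, 5, 7)):
      """cube: (bands, rows, cols) image; snr: per-band SNR estimates."""
      bands = np.arange(len(snr))
      spline = PchipInterpolator(bands, snr)       # smooth SNR-vs-band curve
      # Approximate area under the SNR curve up to each band.
      area = np.array([spline.integrate(0, b) for b in bands])
      # Bands with similar cumulative area share a bin (assumed rule).
      edges = np.linspace(area[0], area[-1], n_bins + 1)
      bin_of = np.clip(np.digitize(area, edges) - 1, 0, n_bins - 1)
      out = np.empty_like(cube)
      for b in bands:
          k = kernels[bin_of[b]]                   # same kernel within a bin
          out[b] = cube[b] if k <= 1 else median_filter(cube[b], size=k)
      return out
  ```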
- Adaptive Numerical Methods for Large Scale Simulations and Data Assimilation. Constantinescu, Emil Mihai (Virginia Tech, 2008-05-26). Numerical simulation is necessary to understand natural phenomena, make assessments and predictions in various research and engineering fields, develop new technologies, etc. New algorithms are needed to take advantage of the increasing computational resources and utilize the emerging hardware and software infrastructure with maximum efficiency. Adaptive numerical discretization methods can accommodate problems with various physical, scale, and dynamic features by adjusting the resolution, order, and the type of method used to solve them. In applications that simulate real systems, the numerical accuracy of the solution is typically just one of the challenges. Measurements can be included in the simulation to constrain the numerical solution through a process called data assimilation in order to anchor the simulation in reality. In this thesis we investigate adaptive discretization methods and data assimilation approaches for large-scale numerical simulations. We develop and investigate novel multirate and implicit-explicit methods that are appropriate for multiscale and multiphysics numerical discretizations. We construct and explore data assimilation approaches for, but not restricted to, atmospheric chemistry applications. A generic approach for describing the structure of the uncertainty in initial conditions that can be applied to the most popular data assimilation approaches is also presented. We show that adaptive numerical methods can effectively address the discretization of large-scale problems. Data assimilation complements the adaptive numerical methods by correcting the numerical solution with real measurements. Test problems and large-scale numerical experiments validate the theoretical findings. Synergistic approaches that use adaptive numerical methods within a data assimilation framework need to be investigated in the future.
- Adjusting Process Count on Demand for Petascale Global Optimization. Radcliffe, Nicholas Ryan (Virginia Tech, 2011-12-15). There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This thesis describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
- Adjusting process count on demand for petascale global optimization. Radcliffe, Nicholas R.; Watson, Layne T.; Sosonkina, Masha; Haftka, Raphael T.; Trosset, Michael W. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2011). There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
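  A minimal sketch of the monitor-and-spawn idea, assuming mpi4py, psutil, a hypothetical worker.py script, and an arbitrary memory threshold; pVTdirect itself is Fortran/MPI, so this only shows the shape of the approach.

  ```python
  # Spawn extra MPI processes when available memory runs low, so the
  # total memory available to the computation grows on demand.
  import psutil
  from mpi4py import MPI

  LOW_MEMORY = 2 * 1024**3  # 2 GiB threshold (assumed, tune per system)

  def maybe_spawn_workers(n_new=4, worker="worker.py"):
      """Return an intercommunicator to newly spawned workers, or None."""
      if psutil.virtual_memory().available < LOW_MEMORY:
          return MPI.COMM_SELF.Spawn("python", args=[worker],
                                     maxprocs=n_new)
      return None
  ```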
- ADML: Aircraft Design Markup Language for Multidisciplinary Aircraft Design and Analysis. Deshpande, Shubhangi; Watson, Layne T.; Love, Nathan J.; Canfield, Robert A.; Kolonay, Raymond M. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2013-12-31). The process of conceptual aircraft design has advanced tremendously in the past few decades due to rapidly developing computer technology. Today's modern aerospace systems exhibit strong, interdisciplinary coupling and require a multidisciplinary, collaborative approach. Efficient transfer, sharing, and manipulation of aircraft design and analysis data in such a collaborative environment demands a formal structured representation of data. XML, a W3C recommendation, is one such standard concomitant with a number of powerful capabilities that alleviate interoperability issues in a collaborative environment. A compact, generic, and comprehensive XML schema for an aircraft design markup language (ADML) is proposed here to represent aircraft conceptual design and analysis data. The purpose of this unified data format is to provide a common language for data communication, and to improve efficiency and productivity within a multidisciplinary, collaborative aircraft design environment. An important feature of the proposed schema is the very expressive and efficient low level schemata (raw data, mathematical objects, and basic geometry). As a proof of concept the schema is used to encode an entire Convair B58. As the complexity of models and number of disciplines increases, the reduction in effort to exchange data models and analysis results in ADML also increases.
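  For flavor only, a tiny Python fragment building an ADML-like element with the standard library; the tag names here are hypothetical, since the abstract does not reproduce the actual schema.

  ```python
  # Hypothetical ADML-style fragment (assumed tag names, not the schema).
  import xml.etree.ElementTree as ET

  wing = ET.Element("wing")
  ET.SubElement(wing, "span", unit="m").text = "17.3"
  ET.SubElement(wing, "aspectRatio").text = "2.09"
  print(ET.tostring(wing, encoding="unicode"))
  # <wing><span unit="m">17.3</span><aspectRatio>2.09</aspectRatio></wing>
  ```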
- Advances in aircraft design: multiobjective optimization and a markup language. Deshpande, Shubhangi Govind (Virginia Tech, 2014-01-23). Today's modern aerospace systems exhibit strong interdisciplinary coupling and require a multidisciplinary, collaborative approach. Analysis methods that were once considered feasible only for advanced and detailed design are now available and even practical at the conceptual design stage. This changing philosophy for conducting conceptual design poses additional challenges beyond those encountered in a low fidelity design of aircraft. This thesis takes some steps towards bridging the gaps in existing technologies and advancing the state-of-the-art in aircraft design. The first part of the thesis proposes a new Pareto front approximation method for multiobjective optimization problems. The method employs a hybrid optimization approach using two derivative free direct search techniques, and is intended for solving blackbox simulation based multiobjective optimization problems with possibly nonsmooth functions where the analytical form of the objectives is not known and/or the evaluation of the objective function(s) is very expensive (very common in multidisciplinary design optimization). A new adaptive weighting scheme is proposed to convert a multiobjective optimization problem to a single objective optimization problem. Results show that the method achieves an arbitrarily close approximation to the Pareto front with a good collection of well-distributed nondominated points. The second part deals with the interdisciplinary data communication issues involved in a collaborative multidisciplinary aircraft design environment. Efficient transfer, sharing, and manipulation of design and analysis data in a collaborative environment demands a formal structured representation of data. XML, a W3C recommendation, is one such standard concomitant with a number of powerful capabilities that alleviate interoperability issues. A compact, generic, and comprehensive XML schema for an aircraft design markup language (ADML) is proposed here to provide a common language for data communication, and to improve efficiency and productivity within a multidisciplinary, collaborative environment. An important feature of the proposed schema is the very expressive and efficient low level schemata. As a proof of concept the schema is used to encode an entire Convair B58. As the complexity of models and number of disciplines increases, the reduction in effort to exchange data models and analysis results in ADML also increases.
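  The adaptive weighting idea mentioned in the first part rests on the standard weighted-sum scalarization below; this is the generic form, and the thesis's specific weight-update rule is not reproduced here.

  ```latex
  % Weighted-sum scalarization of k objectives over feasible set Omega:
  \min_{x \in \Omega} \; \sum_{i=1}^{k} w_i f_i(x),
  \qquad w_i \ge 0, \quad \sum_{i=1}^{k} w_i = 1.
  % Adapting the weights between iterations steers new nondominated
  % points toward sparsely covered regions of the Pareto front.
  ```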
- Aircraft Multidisciplinary Design Optimization using Design of Experiments Theory and Response Surface Modeling Methods. Giunta, Anthony A. (Virginia Tech, 1997-05-01). Design engineers often employ numerical optimization techniques to assist in the evaluation and comparison of new aircraft configurations. While the use of numerical optimization methods is largely successful, the presence of numerical noise in realistic engineering optimization problems often inhibits the use of many gradient-based optimization techniques. Numerical noise causes inaccurate gradient calculations which in turn slows or prevents convergence during optimization. The problems created by numerical noise are particularly acute in aircraft design applications where a single aerodynamic or structural analysis of a realistic aircraft configuration may require tens of CPU hours on a supercomputer. The computational expense of the analyses coupled with the convergence difficulties created by numerical noise are significant obstacles to performing aircraft multidisciplinary design optimization. To address these issues, a procedure has been developed to create two types of noise-free mathematical models for use in aircraft optimization studies. These two methods use elements of statistical analysis and the overall procedure for using the methods is made computationally affordable by the application of parallel computing techniques. The first modeling method, which has been the primary focus of this work, employs classical statistical techniques in response surface modeling and least squares surface fitting to yield polynomial approximation models. The second method, in which only a preliminary investigation has been performed, uses Bayesian statistics and an adaptation of the Kriging process in Geostatistics to create exponential function-based interpolating models. The particular application of this research involves modeling the subsonic and supersonic aerodynamic performance of high-speed civil transport (HSCT) aircraft configurations. The aerodynamic models created using the two methods outlined above are employed in HSCT optimization studies so that the detrimental effects of numerical noise are reduced or eliminated during optimization. Results from sample HSCT optimization studies involving five and ten variables are presented here to demonstrate the utility of the two modeling methods.
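  A minimal sketch of the first method's core step, a quadratic polynomial response surface fit by least squares; the sample data and variable count are invented for illustration.

  ```python
  # Fit a full quadratic polynomial surrogate to noisy response samples.
  import numpy as np

  def quadratic_design_matrix(X):
      """Columns: 1, x_i, and x_i*x_j (i <= j) for each sample row."""
      n, d = X.shape
      cols = [np.ones(n)]
      cols += [X[:, i] for i in range(d)]
      cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
      return np.column_stack(cols)

  rng = np.random.default_rng(0)
  X = rng.uniform(-1, 1, size=(50, 2))             # 50 designs, 2 variables
  y = 1 + X[:, 0] - 2 * X[:, 1]**2 + 0.05 * rng.standard_normal(50)
  coef, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
  # 'coef' defines a smooth, noise-filtering surrogate for optimization.
  ```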
- Algorithm 1028: VTMOP: Solver for Blackbox Multiobjective Optimization Problems. Chang, Tyler; Watson, Layne T.; Larson, Jeffrey; Neveu, Nicole; Thacker, William; Deshpande, Shubhangi; Lux, Thomas (ACM, 2022-09-10). VTMOP is a Fortran 2008 software package containing two Fortran modules for solving computationally expensive bound-constrained blackbox multiobjective optimization problems. VTMOP implements the algorithm of Deshpande et al. [2016], which handles two or more objectives, does not require any derivatives, and produces well-distributed points over the Pareto front. The first module contains a general framework for solving multiobjective optimization problems by combining response surface methodology, trust region methodology, and an adaptive weighting scheme. The second module features a driver subroutine that implements this framework when the objective functions can be wrapped as a Fortran subroutine. Support is provided for both serial and parallel execution paradigms, and VTMOP is demonstrated on several test problems as well as one real-world problem in the area of particle accelerator optimization.
- Algorithm XXX: QNSTOP—Quasi-Newton Algorithm for Stochastic Optimization. Amos, Brandon D.; Easterling, David R.; Watson, Layne T.; Thacker, William I.; Castle, Brent S.; Trosset, Michael W. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2014-07-01). QNSTOP consists of serial and parallel (OpenMP) Fortran 2003 codes for the quasi-Newton stochastic optimization method of Castle and Trosset. For stochastic problems, convergence theory exists for the particular algorithmic choices and parameter values used in QNSTOP. Both the parallel driver subroutine, which offers several parallel decomposition strategies, and the serial driver subroutine can be used for stochastic optimization or deterministic global optimization, based on an input switch. QNSTOP is particularly effective for “noisy” deterministic problems, using only objective function values. Some performance data for computational systems biology problems is given.
- Algorithm XXX: SHEPPACK: Modified Shepard Algorithm for Interpolation of Scattered Multivariate Data. Thacker, William I.; Zhang, Jingwei; Watson, Layne T.; Birch, Jeffrey B.; Iyer, Manjula A.; Berry, Michael W. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2009). Scattered data interpolation problems arise in many applications. Shepard’s method for constructing a global interpolant by blending local interpolants using local-support weight functions usually creates reasonable approximations. SHEPPACK is a Fortran 95 package containing five versions of the modified Shepard algorithm: quadratic (Fortran 95 translations of Algorithms 660, 661, and 798), cubic (Fortran 95 translation of Algorithm 791), and linear variations of the original Shepard algorithm. An option to the linear Shepard code is a statistically robust fit, intended to be used when the data is known to contain outliers. SHEPPACK also includes a hybrid robust piecewise linear estimation algorithm RIPPLE (residual initiated polynomial-time piecewise linear estimation) intended for data from piecewise linear functions in arbitrary dimension m. The main goal of SHEPPACK is to provide users with a single consistent package containing most existing polynomial variations of Shepard’s algorithm. The algorithms target data of different dimensions. The linear Shepard algorithm, robust linear Shepard algorithm, and RIPPLE are the only algorithms in the package that are applicable to arbitrary dimensional data.
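  The basic Shepard blend that all the variants build on fits in a few lines of Python; SHEPPACK's modified algorithms replace the raw data values with local least-squares fits and use compactly supported weights, so this global inverse-distance form is only a sketch.

  ```python
  # Original Shepard interpolation: inverse-distance-weighted blending.
  import numpy as np

  def shepard(x, nodes, values, p=2):
      """Interpolate at point x from scattered (nodes, values)."""
      d = np.linalg.norm(nodes - x, axis=1)
      if np.any(d == 0):                 # query coincides with a data point
          return values[np.argmin(d)]
      w = 1.0 / d**p
      return np.dot(w, values) / w.sum()

  nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
  values = np.array([1.0, 2.0, 3.0])
  print(shepard(np.array([0.25, 0.25]), nodes, values))
  ```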
- Algorithm XXX: VTDIRECT95: Serial and Parallel Codes for the Global Optimization Algorithm DIRECT. He, Jian; Watson, Layne T.; Sosonkina, Masha (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007). VTDIRECT95 is a Fortran 95 implementation of D.R. Jones' deterministic global optimization algorithm called DIRECT, which is widely used in multidisciplinary engineering design, biological science, and physical science applications. The package includes both a serial code and a data-distributed massively parallel code for different problem scales and optimization (exploration vs. exploitation) goals. Dynamic data structures are used to organize local data, handle unpredictable memory requirements, reduce the memory usage, and share the data across multiple processors. The parallel code employs multilevel functional and data parallelism to boost concurrency and mitigate data dependency, thus improving the load balancing and scalability. In addition, checkpointing features are integrated into both versions to provide fault tolerance and hot restarts. Important algorithm modifications and design considerations are discussed regarding data structures, parallel schemes, error handling, and portability. Using several benchmark functions and real-world applications, the software is evaluated on different systems in terms of optimization effectiveness, data structure efficiency, parallel performance, and checkpointing overhead. The package organization and usage are also described in detail.
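  For context, the box-selection rule at the heart of DIRECT (due to Jones et al.) is reproduced below in its standard form; VTDIRECT95's own modifications are described in the report.

  ```latex
  % A box j with center c_j and diameter d_j is "potentially optimal",
  % and hence subdivided, if for some K > 0:
  f(c_j) - K\,d_j \le f(c_i) - K\,d_i \quad \forall i,
  \qquad
  f(c_j) - K\,d_j \le f_{\min} - \varepsilon\,\lvert f_{\min} \rvert .
  ```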
- An Alternative to Full Configuration Interaction Based on a Tensor Product Decomposition. Senese, Frederick A.; Beattie, Christopher A.; Schug, John C.; Viers, Jimmy W.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1989). A new direct full variational approach exploits a tensor (Kronecker) product decomposition of the Hamiltonian. Explicit assembly and storage of the Hamiltonian matrix is avoided by using the Kronecker product structure to form matrix-vector products directly from the molecular integrals. Computation-intensive integral transformations and formula tapes are unnecessary. The wave function is expanded in terms of spin-free primitive sets rather than Slater determinants or configuration state functions and is equivalent to a full configuration interaction expansion. The approach suggests compact storage schemes and algorithms which are naturally suited to parallel and pipelined machines.
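  The trick the abstract alludes to, applying a Kronecker product to a vector without ever assembling the large matrix, looks like this in NumPy; the matrix sizes are illustrative.

  ```python
  # With NumPy's row-major reshape, (A kron B) x = vec(A X B^T),
  # where X is x reshaped to (cols of A) x (cols of B).
  import numpy as np

  rng = np.random.default_rng(1)
  A = rng.standard_normal((3, 3))
  B = rng.standard_normal((4, 4))
  x = rng.standard_normal(3 * 4)

  X = x.reshape(3, 4)                    # row-major "unvec" of x
  y_fast = (A @ X @ B.T).reshape(-1)     # never forms the 12 x 12 matrix
  y_slow = np.kron(A, B) @ x             # explicit product, for checking
  assert np.allclose(y_fast, y_slow)
  ```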
- Analysis and Application of Haseltine and Rawlings's Hybrid Stochastic Simulation Algorithm. Wang, Shuo (Virginia Tech, 2016-10-06). Stochastic effects in cellular systems are usually modeled and simulated with Gillespie's stochastic simulation algorithm (SSA), which follows the same theoretical derivation as the chemical master equation (CME), but the low efficiency of SSA limits its application to large chemical networks. To improve the efficiency of stochastic simulations, Haseltine and Rawlings proposed a hybrid ODE/SSA algorithm, which combines ordinary differential equations (ODEs) for traditional deterministic models with SSA for stochastic models. In this dissertation, accuracy analysis, efficient implementation strategies, and application of Haseltine and Rawlings's hybrid method (HR) to a budding yeast cell cycle model are discussed. Accuracy of the hybrid method HR is studied based on a linear chain reaction system, motivated by the modeling practice used for the budding yeast cell cycle control mechanism. Mathematical analysis and numerical results both show that the hybrid method HR is accurate if either the numbers of molecules of reactants in fast reactions are above certain thresholds, or the rate constants of fast reactions are much larger than those of slow reactions. Our analysis also shows that the hybrid method HR allows for a much greater region in system parameter space than the slow scale SSA (ssSSA) and the stochastic quasi steady state assumption (SQSSA) methods do. Implementation of the hybrid method HR requires a stiff ODE solver for numerical integration and an efficient event-handling strategy for slow reaction firings. In this dissertation, an event-handling strategy is developed based on inverse interpolation. The performance of five widely used stiff ODE solvers is measured in three numerical experiments. Furthermore, inspired by the strategy of the hybrid method HR, a hybrid ODE/SSA stochastic model for the budding yeast cell cycle is developed, based on a deterministic model in the literature. Simulation results of this hybrid model match very well with biological experimental data, and this model is the first to do so with these recently available experimental data. This study demonstrates that the hybrid method HR has great potential for stochastic modeling and simulation of large biochemical networks.
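  For readers unfamiliar with SSA, a bare-bones Gillespie simulation of a toy birth-death system illustrates the stochastic half of the hybrid; the system and rates are invented for the sketch.

  ```python
  # Gillespie SSA for a toy birth-death process: 0 -> X and X -> 0.
  import numpy as np

  def ssa(x0=10, k_birth=1.0, k_death=0.1, t_end=50.0, seed=2):
      rng = np.random.default_rng(seed)
      t, x, history = 0.0, x0, [(0.0, x0)]
      while t < t_end:
          a_birth, a_death = k_birth, k_death * x   # reaction propensities
          a0 = a_birth + a_death
          if a0 == 0.0:
              break                                 # no reaction can fire
          t += rng.exponential(1.0 / a0)            # exponential waiting time
          x += 1 if rng.random() < a_birth / a0 else -1
          history.append((t, x))
      return history
  ```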
- Analysis of a Nonhierarchical Decomposition Algorithm. Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.; Sobieszczanski-Sobieski, Jaroslaw (Department of Computer Science, Virginia Polytechnic Institute & State University, 1992). Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. This paper carefully analyzes the algorithm for quadratic programs, and suggests a number of modifications to improve its robustness.
- Analysis of Function Component Complexity for Hypercube Homotopy Algorithms. Chakraborty, Amal; Allison, Donald C. S.; Ribbens, Calvin J.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1990). Probability-one homotopy algorithms are a class of methods for solving nonlinear systems of equations that are globally convergent from an arbitrary starting point with probability one. The essence of these homotopy algorithms is the construction of a homotopy map $\rho_a$ and the subsequent tracking of a smooth curve $\gamma$ in the zero set $\rho_a^{-1}(0)$ of $\rho_a$. Tracking the zero curve $\gamma$ requires repeated evaluation of the map $\rho_a$, its $n \times (n+1)$ Jacobian matrix $D\rho_a$, and numerical linear algebra for calculating the kernel of $D\rho_a$. This paper analyzes parallel homotopy algorithms on a hypercube, considering the numerical linear algebra, several communication topologies and problem decomposition strategies, function component complexity, problem size, and the effect of different component complexity distributions. These parameters interact in complicated ways, but some general principles can be inferred based on empirical results.
- Analysis of Function Component Complexity for Hypercube Homotopy Algorithms. Chakraborty, Amal; Allison, Donald C. S.; Ribbens, Calvin J.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1991). Probability-one homotopy algorithms are a class of methods for solving nonlinear systems of equations that are globally convergent from an arbitrary starting point with probability one. The essence of these homotopy algorithms is the construction of a homotopy map $\rho_a$ and the subsequent tracking of a smooth curve $\gamma$ in the zero set $\rho_a^{-1}(0)$ of $\rho_a$. Tracking the zero curve $\gamma$ requires repeated evaluation of the map $\rho_a$, its $n \times (n+1)$ Jacobian matrix $D\rho_a$, and numerical linear algebra for calculating the kernel of $D\rho_a$. This paper analyzes parallel homotopy algorithms on a hypercube, briefly reviewing the numerical linear algebra, several communication topologies and problem decomposition strategies, and concentrating on function component complexity, problem size, and the effect of different component complexity distributions. These parameters interact in complicated ways, but some general principles can be inferred based on empirical results. Implications for developing reliable and efficient parallel mathematical software packages for this problem area are also discussed.
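  The classic probability-one homotopy map for solving $F(x) = 0$ has the form below; this is the standard construction, and the map used in these papers may be problem-specific.

  ```latex
  % Zero-finding homotopy with random seed point a:
  \rho_a(\lambda, x) = \lambda\,F(x) + (1 - \lambda)\,(x - a),
  \qquad \gamma \subset \rho_a^{-1}(0).
  % The curve gamma is tracked from (lambda, x) = (0, a), where the zero
  % is trivially x = a, toward lambda = 1, where F(x) = 0.
  ```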
- Analysis of the Fitness Effect of Compensatory Mutations. Zhang, Liqing; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2008). We extend our previous work on the fitness effect of the fixation of deleterious mutations on a population by incorporating the effect of compensatory mutations. Compensatory mutations are important in the sense that they make deleterious mutations less deleterious, thus reducing the genetic load of the population. The essential phenomenon underlying compensatory mutations is the nonindependence of mutations in biological systems. Therefore, it is an important phenomenon that cannot be ignored when considering the fixation and fitness effect of deleterious mutations. Since having compensatory mutations essentially changes the distributional shapes of deleterious mutations, we can consider the effect of compensatory mutations by comparing two distributions, where one distribution reflects the reduced fitness effects of deleterious mutations under the influence of compensatory mutations. We compare different distributions of deleterious mutations without compensatory mutations to those with compensatory mutations, and study the effect of population sizes, the shape of the distribution, and the mutation rates of the population on the total fitness reduction of the population.
- Application of panel methods for subsonic aerodynamics. Kim, Meung Jung (Virginia Polytechnic Institute and State University, 1985). Several panel methods are developed to model subsonic aerodynamics. The vorticity panel method for two-dimensional problems is capable of handling general unsteady, potential, lifting flows. The lifting surface is modelled with a vortex sheet and the wakes by discrete vortices. To imitate the conditions at the trailing edge, stagnation conditions are imposed on both surfaces. The over-determined system is solved by an optimization scheme. The present predictions are in good agreement with experimental data and other computations, and the present approach provides an attractive alternative to those developed earlier. Two panel methods for three-dimensional nonlifting problems are developed: one uses source distributions over curved elements and the other vorticity distributions over flat elements. For the source formulation, the effect of weakly nonlinear geometry on the numerical results is shown to accelerate the convergence of numerical values in general. In addition, extensive comparisons between the two formulations reveal that the vorticity panel method is more stable and accurate than the curved source panel method. Another vorticity panel method is developed to study lifting flows past three-dimensional bodies with sharp edges. The body is modelled by a single vortex sheet for thin bodies and two vortex sheets for thick bodies, while the wakes are modelled with a number of strings of discrete vortices. The flows are assumed to separate along the sharp edges. The combination of continuous vorticity on the lifting surface and discrete vortices in the wakes yields excellent versatility and the capability of handling tightly rolled wakes and predicting continuous pressure distributions on the lifting surface. The method is applied to thin and thick low-aspect-ratio delta wings and rectangular wings. The computed aerodynamic forces and wake shapes are in quantitative agreement with experimental data and other computational results.