Browsing by Author "Renaud, John E."
Now showing 1 - 5 of 5
- Convergence analysis of hybrid cellular automata for topology optimization
  Penninger, Charles L.; Watson, Layne T.; Tovar, Andres; Renaud, John E. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2009-03-01)
  The hybrid cellular automaton (HCA) algorithm was inspired by the structural adaptation of bones to their ever-changing mechanical environment. This methodology has been shown to be an effective topology synthesis tool. In previous work, it has been observed that the convergence of the HCA methodology is affected by the parameters of the algorithm. As a result, questions have been raised regarding the conditions under which HCA converges to an optimal design. The objective of this investigation is to examine the conditions that guarantee convergence to a Karush-Kuhn-Tucker (KKT) point. In this paper, it is shown that the HCA algorithm is a fixed point iterative scheme, and the previously reported KKT optimality conditions are corrected. To demonstrate the convergence properties of the HCA algorithm, a simple cantilevered beam example is utilized. Plots of the spectral radius for projections of the design space are used to show regions of guaranteed convergence.
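The spectral-radius argument in this abstract can be illustrated on a small linear fixed point iteration: the iteration matrix below is a hypothetical stand-in for one HCA design update, not the actual HCA rules, but the convergence criterion is the same.

```python
import numpy as np

# A fixed-point iteration x_{k+1} = G(x_k) converges locally when the
# spectral radius of the Jacobian of G at the fixed point is below 1.
# For this linear sketch the Jacobian is just the matrix G itself.
G = np.array([[0.5, 0.2],
              [0.1, 0.4]])             # hypothetical iteration matrix
b = np.array([1.0, 2.0])

rho = max(abs(np.linalg.eigvals(G)))   # spectral radius
assert rho < 1.0                       # contraction: convergence guaranteed

x = np.zeros(2)
for _ in range(200):
    x = G @ x + b                      # fixed-point iteration

x_star = np.linalg.solve(np.eye(2) - G, b)   # exact fixed point
print(np.allclose(x, x_star))                # True
```

Plotting rho over slices of the design space, as the paper does, identifies the regions where this contraction condition holds.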
- Convergence of Trust Region Augmented Lagrangian Methods Using Variable Fidelity Approximation Data
  Rodriguez, Jose F.; Renaud, John E.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1997-08-01)
  To date, the primary focus of most constrained approximate optimization strategies has been that application of the method should lead to improved designs. Few researchers have focused on developing constrained approximate optimization strategies that are assured of converging to a Karush-Kuhn-Tucker (KKT) point for the problem. Recent work by the authors based on a trust region model management strategy has shown promise in managing the convergence of constrained approximate optimization in application to a suite of single level optimization test problems. Using a trust region model management strategy coupled with an augmented Lagrangian approach for constrained approximate optimization, the authors have shown in application studies that the approximate optimization process converges to a KKT point for the problem. The approximate optimization strategy sequentially builds a cumulative response surface approximation of the augmented Lagrangian, which is then optimized subject to a trust region constraint. In this research the authors develop a formal proof of convergence for the response surface approximation based optimization algorithm. Previous application studies were conducted on single level optimization problems for which response surface approximations were developed using conventional statistical response sampling techniques, such as central composite design, to query a high fidelity model over the design space. In this research the authors extend the scope of the application studies to include the class of multidisciplinary design optimization (MDO) test problems. More importantly, the authors show that response surface approximations constructed from variable fidelity data generated during concurrent subspace optimizations (CSSOs) can be effectively managed by the trust region model management strategy. Results for two multidisciplinary test problems are presented in which convergence to a KKT point is observed. The formal proof of convergence and the successful MDO application of the algorithm using variable fidelity data generated by CSSO are original contributions to the growing body of research in MDO.
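The trust region model management loop described above can be sketched in a few lines. The test function, the local quadratic model, and the acceptance thresholds below are illustrative assumptions, not the authors' CSSO framework; the essential mechanism is that the ratio of actual to model-predicted reduction decides whether to accept a step and how to resize the trust region.

```python
import numpy as np

def f(x):  # hypothetical "high-fidelity" response, minimum at (2, 1)
    return (x[0] - 2)**4 + (x[0] - 2*x[1])**2

def grad(x):
    return np.array([4*(x[0]-2)**3 + 2*(x[0]-2*x[1]),
                     -4*(x[0]-2*x[1])])

def hess(x):
    return np.array([[12*(x[0]-2)**2 + 2, -4.0],
                     [-4.0, 8.0]])

x, radius = np.array([0.0, 3.0]), 1.0
for _ in range(200):
    g, H = grad(x), hess(x)
    s = np.linalg.solve(H, -g)            # minimizer of the quadratic model
    if np.linalg.norm(s) > radius:
        s *= radius / np.linalg.norm(s)   # clip the step to the trust region
    pred = -(g @ s + 0.5 * s @ H @ s)     # reduction predicted by the model
    if pred < 1e-16:
        break                             # model predicts no further progress
    ratio = (f(x) - f(x + s)) / pred      # actual vs. predicted reduction
    if ratio > 0.1:
        x = x + s                         # model was good enough: accept
    if ratio > 0.75:
        radius *= 2.0                     # model very trustworthy: expand
    elif ratio < 0.25:
        radius *= 0.5                     # model poor: contract
print(x)   # approaches the minimizer (2, 1)
```

In the paper the quadratic model is replaced by a cumulative response surface of the augmented Lagrangian built from variable fidelity CSSO data, but the same accept/resize logic manages convergence.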
- Homotopy methods for constraint relaxation in unilevel reliability based design optimization
  Agarwal, Harish; Gano, Shawn E.; Renaud, John E.; Perez, Victor M.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007)
  Reliability based design optimization is a methodology for finding optimized designs that are characterized by a low probability of failure. The main objective in reliability based design optimization is to minimize a merit function while satisfying the reliability constraints. The reliability constraints are constraints on the probability of failure corresponding to each of the failure modes of the system, or a single constraint on the system probability of failure. The probability of failure is usually estimated by performing a reliability analysis. During the last few years, a variety of different techniques have been developed for reliability based design optimization. Traditionally, these have been formulated as a double-loop (nested) optimization problem. The upper level optimization loop generally involves optimizing a merit function subject to reliability constraints, and the lower level optimization loop(s) compute the probabilities of failure corresponding to the failure mode(s) that govern the system failure. This formulation is, by nature, computationally intensive. A new efficient unilevel formulation for reliability based design optimization was developed by the authors in earlier studies. In this formulation, the lower level optimization (the evaluation of reliability constraints in the double loop formulation) was replaced by its corresponding first order Karush-Kuhn-Tucker (KKT) necessary optimality conditions at the upper level optimization. It was shown that the unilevel formulation is computationally equivalent to solving the original nested optimization if the lower level optimization is solved by numerically satisfying the KKT conditions (which is typically the case), and that the two formulations are mathematically equivalent under constraint qualification and generalized convexity assumptions. In the unilevel formulation, the KKT conditions of the inner optimization for each probabilistic constraint evaluation are imposed at the system level as equality constraints. Most commercial optimizers are numerically unreliable when applied to problems with many equality constraints. In this investigation an optimization framework for reliability based design using the unilevel formulation is developed. Homotopy methods are used for constraint relaxation and to obtain a relaxed feasible design. A series of optimization problems is solved as the relaxed optimization problem is transformed via a homotopy to the original problem. A heuristic scheme is employed in this paper to update the homotopy parameter. The proposed algorithm is illustrated with example problems.
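A minimal sketch of the relaxation idea, on an assumed toy problem rather than the paper's RBDO formulation: minimize a quadratic subject to one equality constraint, whose KKT system happens to be linear, and march the constraint right-hand side from a trivially feasible value to the true one as the homotopy parameter t goes from 0 to 1.

```python
import numpy as np

# Toy problem: minimize x1^2 + x2^2 subject to x1 + x2 = b. The KKT
# conditions (stationarity plus the equality constraint) form a linear
# system here; each homotopy step solves the relaxed KKT conditions.
Q = 2.0 * np.eye(2)           # Hessian of the objective
a = np.array([1.0, 1.0])      # constraint gradient
b, b0 = 2.0, 0.0              # true and relaxed right-hand sides

K = np.block([[Q, a[:, None]],
              [a[None, :], np.zeros((1, 1))]])    # KKT matrix

x = np.zeros(2)
for t in np.linspace(0.0, 1.0, 11):   # fixed-step stand-in for the
    rhs = np.concatenate([np.zeros(2),  # paper's heuristic t-update
                          [(1 - t) * b0 + t * b]])
    sol = np.linalg.solve(K, rhs)     # solve the relaxed KKT system
    x = sol[:2]                       # warm start for the next t
print(x)   # -> [1. 1.], the solution of the original (t = 1) problem
```

In the unilevel RBDO setting the KKT system is nonlinear and each step is itself an optimization, but the continuation structure, each relaxed solution warm-starting the next, is the same.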
- KKT conditions satisfied using adaptive neighboring in hybrid cellular automata for topology optimization
  Penninger, Charles L.; Tovar, Andres; Watson, Layne T.; Renaud, John E. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2009)
  The hybrid cellular automaton (HCA) method is a biologically inspired algorithm capable of topology synthesis that was developed to simulate the behavior of the bone functional adaptation process. In this algorithm, the design domain is divided into cells with some communication property among neighbors. Local evolutionary rules, obtained from classical control theory, iteratively establish the values of the design variables in order to minimize the local error between a field variable and a corresponding target value. Karush-Kuhn-Tucker (KKT) optimality conditions have been derived to determine the expression for the field variable and its target. While averaging techniques mimicking intercellular communication have been used to mitigate numerical instabilities such as checkerboard patterns and mesh dependency, questions have been raised as to whether the KKT conditions are fully satisfied in the final topologies. Furthermore, the averaging procedure might result in cancellation or attenuation of the error between the field variable and its target. Several examples are presented showing that HCA converges to different final designs for different neighborhood configurations or averaging schemes. Although it has been claimed that these final designs are optimal, this might not be true in a precise mathematical sense: the use of the averaging procedure introduces a mathematical inconsistency that has to be addressed. In this work, a new adaptive neighboring scheme is employed that applies a weighting function to the influence of a cell's neighbors, a weight that decreases to zero over time. When the weighting function reaches zero, the algorithm satisfies the aforementioned optimality conditions. Thus, the HCA algorithm retains the benefits of utilizing neighborhood information while also obtaining an optimal solution.
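The adaptive-neighboring idea can be sketched on a one-dimensional ring of cells. The local update rule, the neighbor weights, and the decay schedule below are assumptions for illustration, not the paper's exact scheme; the point is that once the neighbor weight w(t) has decayed to zero, each cell's update depends only on its own error, so the averaging no longer perturbs the stationarity conditions.

```python
import numpy as np

def neighborhood_average(u):
    # 1-D ring of cells: average of the left and right neighbors
    return 0.5 * (np.roll(u, 1) + np.roll(u, -1))

def blended_update(u, target, w):
    # control-style local step toward the target, blended with the
    # neighborhood average by the time-dependent weight w
    local = u + 0.1 * (target - u)
    return (1 - w) * local + w * neighborhood_average(local)

u = np.array([0.0, 1.0, 0.0, 1.0])     # checkerboard-like initial field
target = np.full(4, 0.5)
for k in range(200):
    w = 0.4 * np.exp(-0.05 * k)        # neighbor influence decays to zero
    u = blended_update(u, target, w)
print(u)   # every cell reaches its target once w has vanished
```

Early iterations still smooth the field, which is what suppresses checkerboarding; the decay restores the unaveraged optimality condition in the limit.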
- Reduced Sampling for Construction of Quadratic Response Surface Approximations Using Adaptive Experimental Design
  Perez, Victor M.; Renaud, John E.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007)
  The purpose of this paper is to reduce the computational complexity per step from O(n^2) to O(n) for optimization based on quadratic surrogates, where n is the number of design variables. Applying nonlinear optimization strategies directly to complex multidisciplinary systems can be prohibitively expensive when the complexity of the simulation codes is large. Increasingly, response surface approximations, and specifically quadratic approximations, are being integrated with nonlinear optimizers in order to reduce the CPU time required for the optimization of complex multidisciplinary systems. For evaluation by the optimizer, response surface approximations provide a computationally inexpensive, lower fidelity representation of the system performance. The curse of dimensionality is a major drawback in the implementation of these approximations, as the amount of required data grows quadratically with the number n of design variables in the problem. In this paper a novel technique for reducing the sampling requirement from O(n^2) to O(n) is presented. The technique uses prior information to approximate the eigenvectors of the Hessian matrix of the response surface approximation, so that only the eigenvalues need to be computed by response surface techniques. The technique is implemented in a sequential approximate optimization algorithm and applied to engineering problems of varying size and characteristics. Results demonstrate that the data required per step can be reduced from O(n^2) to O(n) points without significantly compromising the performance of the optimization algorithm. A reduction in the time (number of system analyses) required per step from O(n^2) to O(n) is significant, and increasingly so as n grows. The novelty lies in how only O(n) system analyses can be used to approximate a Hessian matrix whose estimation normally requires O(n^2) system analyses.
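The counting argument behind the O(n) claim can be sketched directly. The test function below is an assumed quadratic and the prior eigenvector estimate is taken as exact for clarity; given approximate eigenvectors V, each eigenvalue is recovered from a central difference along one eigenvector, costing 2n + 1 function evaluations instead of the O(n^2) a full quadratic fit would need.

```python
import numpy as np

def f(x):   # hypothetical quadratic response with known Hessian
    H_true = np.diag([2.0, 10.0, 50.0])
    return 0.5 * x @ H_true @ x

n, h = 3, 1e-3
x0 = np.zeros(n)
V = np.eye(n)                            # eigenvector estimate from prior info
f0 = f(x0)                               # 1 evaluation
lam = np.array([(f(x0 + h*V[:, i]) - 2*f0 + f(x0 - h*V[:, i])) / h**2
                for i in range(n)])      # 2n evaluations: one curvature
                                         # per eigen-direction
H_approx = V @ np.diag(lam) @ V.T        # reconstructed Hessian
print(np.round(lam, 3))   # close to the true eigenvalues [2, 10, 50]
```

When the prior eigenvectors are only approximate, H_approx is correspondingly approximate, which is why the paper embeds the scheme in a sequential approximate optimization loop that tolerates surrogate error.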