Browsing by Author "Haftka, Raphael T."
Now showing 1 - 20 of 108
- Accuracy analysis of the semi-analytical method for shape sensitivity analysis. Barthelemy, Bruno (Virginia Polytechnic Institute and State University, 1987). The semi-analytical method, widely used for calculating derivatives of static response with respect to design variables for structures modeled by finite elements, is studied in this research. The research shows that the method can have serious accuracy problems for shape design variables in structures modeled by beam, plate, truss, frame, and solid elements. Local and global indices are developed to test the accuracy of the semi-analytical method. The local indices provide insight into the problem of large errors for the semi-analytical method. Local error magnification indices are developed for beam and plane truss structures, and several examples showing the severity of the problem are presented. The global index provides a general method for checking the accuracy of the semi-analytical method for any type of model. It characterizes the difference in errors between a general finite-difference method and the semi-analytical method. Moreover, a method for improving the accuracy of the semi-analytical method (when possible) is provided. Examples are presented showing the use of the global index.
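The contrast between the two methods is easy to see in a small example. The sketch below (an illustration with an assumed stiffness expression, not a model from the thesis) computes a static displacement sensitivity for a two-degree-of-freedom spring system both ways: by re-solving a perturbed system, and by differentiating K u = f with a finite-difference approximation of dK/db.

```python
import numpy as np

def stiffness(b):
    # Hypothetical 2-DOF stiffness matrix depending on a shape-like variable b.
    k1, k2 = 10.0 * b, 5.0 * b**2
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

f = np.array([0.0, 1.0])   # static load vector
b, db = 2.0, 1e-6          # design variable and finite-difference step

u = np.linalg.solve(stiffness(b), f)

# Overall finite difference: re-solve the whole perturbed system.
du_fd = (np.linalg.solve(stiffness(b + db), f) - u) / db

# Semi-analytical: differentiate K u = f exactly, approximating only dK/db
# by finite differences, then solve K (du/db) = -(dK/db) u  (f is constant).
dK = (stiffness(b + db) - stiffness(b)) / db
du_sa = np.linalg.solve(stiffness(b), -dK @ u)

print(du_fd, du_sa)  # the two estimates agree closely for this smooth problem
```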
- An Active Set Algorithm for Tracing Parametrized Optima. Rakowska, Joanna; Haftka, Raphael T.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1990). Optimization problems often depend on parameters that define constraints or objective functions. It is often necessary to know the effect of a change in a parameter on the optimum solution. An algorithm is presented here for tracking paths of optimal solutions of inequality-constrained nonlinear programming problems as a function of a parameter. The proposed algorithm employs homotopy zero-curve tracing techniques to track segments where the set of active constraints is unchanged. The transition between segments is handled by considering all possible sets of active constraints and eliminating nonoptimal ones based on the signs of the Lagrange multipliers and the derivatives of the optimal solutions with respect to the parameter.
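As a rough illustration of tracing optima along a parameter (a simplified stand-in, not the paper's homotopy algorithm), the sketch below re-solves a small inequality-constrained problem on a grid of parameter values, warm-starting each solve from the previous optimum and reporting where the active constraint set changes.

```python
import numpy as np
from scipy.optimize import minimize

# One inequality constraint, x1 + x2 <= 1.5, written as g(x) >= 0 for SLSQP.
con = {'type': 'ineq', 'fun': lambda x: 1.5 - x[0] - x[1]}

x = np.zeros(2)
active_prev = None
for t in np.linspace(0.0, 2.0, 21):
    # Parametrized objective: its unconstrained minimum moves to (t, 1).
    res = minimize(lambda x, t=t: (x[0] - t)**2 + (x[1] - 1.0)**2,
                   x, constraints=[con], method='SLSQP')
    x = res.x                                  # warm start for the next solve
    active = abs(1.5 - x[0] - x[1]) < 1e-6     # constraint active at optimum?
    if active_prev is not None and active != active_prev:
        print(f"active set changes near t = {t:.1f}")  # expect t around 0.5
    active_prev = active
```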
- An active-constraint logic for nonlinear programming. Das, Alok (Virginia Polytechnic Institute and State University, 1982). The choice of active-constraint logic for screening inequalities has a major impact on the performance of the gradient-projection method. It has been found that least-constrained strategies, which keep the number of constraints in the active set as small as possible, are computationally most efficient. However, these strategies are often prone to cycling of constraints between active and inactive status. This occurs mainly due to the violation of some of the constraints, taken as inactive, by the resulting step. This research develops methods for choosing an active set such that the constraints in the active set satisfy the Kuhn-Tucker conditions and the resulting step does not violate the linear approximations to any of the constraints satisfied as equalities but considered inactive. Some of the existing active-constraint logics, specifically the dual-violator rule, yield the desired active set when two constraints are satisfied as equalities. However, when three or more constraints are satisfied as equalities, none of the existing logics gives the desired active set. A number of general results, which help in the selection of the active set, have been developed in this research. An active-constraint logic has been developed for the case of three constraints; this logic gives the desired active set. For the general case, when more than three constraints are satisfied as equalities, a separate active-set logic is suggested. This guarantees that the resulting step does not violate the linear approximations to any of the constraints taken as inactive. The resulting active set may not, however, satisfy the Kuhn-Tucker conditions. The efficiency of the proposed logic was tested computationally using quadratic programming problems. Three existing active-set strategies were used for comparison. The proposed logic almost always performed as well as or better than the best of the three existing strategies.
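A small illustration of the Kuhn-Tucker sign test that active-constraint logics build on (illustrative numbers only, not the dissertation's rules): estimate Lagrange multipliers for the constraints satisfied as equalities, and flag any with the wrong sign as candidates to drop from the active set.

```python
import numpy as np

g = np.array([1.0, 2.0])       # objective gradient at the current point
N = np.array([[1.0, 0.0],      # rows: gradients of the constraints
              [0.0, -1.0]])    # currently satisfied as equalities

# Stationarity g + N^T lam = 0 gives a least-squares estimate of the multipliers.
lam, *_ = np.linalg.lstsq(N.T, -g, rcond=None)
for i, l in enumerate(lam):
    verdict = "keep active" if l >= 0 else "drop (wrong Kuhn-Tucker sign)"
    print(f"constraint {i}: lambda = {l:+.2f} -> {verdict}")
```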
- Adjusting process count on demand for petascale global optimization. Radcliffe, Nicholas R.; Watson, Layne T.; Sosonkina, Masha; Haftka, Raphael T.; Trosset, Michael W. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2011). There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
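The monitoring idea can be sketched in a few lines. The following is a hypothetical illustration (psutil-based, not pVTdirect's actual MPI mechanism, and the 2 GiB threshold is an assumed tuning value): check available memory each iteration and decide when to grow the process count.

```python
import psutil

MIN_AVAILABLE = 2 * 1024**3   # 2 GiB threshold (an assumed tuning value)

def maybe_grow(nprocs):
    # In the MPI code this decision would trigger spawning of new worker
    # processes; here we only report it.
    if psutil.virtual_memory().available < MIN_AVAILABLE:
        print(f"available memory low: growing from {nprocs} processes")
        return nprocs + 1
    return nprocs

nprocs = 4
for iteration in range(10):
    # ... one iteration of the optimization would run here ...
    nprocs = maybe_grow(nprocs)
```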
- Aircraft Multidisciplinary Design Optimization using Design of Experiments Theory and Response Surface Modeling Methods. Giunta, Anthony A. (Virginia Tech, 1997-05-01). Design engineers often employ numerical optimization techniques to assist in the evaluation and comparison of new aircraft configurations. While the use of numerical optimization methods is largely successful, the presence of numerical noise in realistic engineering optimization problems often inhibits the use of many gradient-based optimization techniques. Numerical noise causes inaccurate gradient calculations which in turn slow or prevent convergence during optimization. The problems created by numerical noise are particularly acute in aircraft design applications where a single aerodynamic or structural analysis of a realistic aircraft configuration may require tens of CPU hours on a supercomputer. The computational expense of the analyses, coupled with the convergence difficulties created by numerical noise, is a significant obstacle to performing aircraft multidisciplinary design optimization. To address these issues, a procedure has been developed to create two types of noise-free mathematical models for use in aircraft optimization studies. These two methods use elements of statistical analysis, and the overall procedure for using the methods is made computationally affordable by the application of parallel computing techniques. The first modeling method, which has been the primary focus of this work, employs classical statistical techniques in response surface modeling and least squares surface fitting to yield polynomial approximation models. The second method, in which only a preliminary investigation has been performed, uses Bayesian statistics and an adaptation of the Kriging process in geostatistics to create exponential function-based interpolating models. The particular application of this research involves modeling the subsonic and supersonic aerodynamic performance of high-speed civil transport (HSCT) aircraft configurations. The aerodynamic models created using the two methods outlined above are employed in HSCT optimization studies so that the detrimental effects of numerical noise are reduced or eliminated during optimization. Results from sample HSCT optimization studies involving five and ten variables are presented here to demonstrate the utility of the two modeling methods.
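The core response-surface idea is compact enough to sketch. Below, a noisy one-variable "analysis" (a stand-in for an expensive aerodynamic code, with invented constants) is sampled at design-of-experiments points and fit with a least-squares quadratic, giving a smooth surrogate whose optimum is insensitive to the numerical noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_analysis(x):
    # Stand-in for an expensive analysis contaminated by numerical noise.
    return (x - 1.5)**2 + 0.02 * rng.standard_normal(np.shape(x))

x_doe = np.linspace(0.0, 3.0, 15)        # design-of-experiments sample points
y_doe = noisy_analysis(x_doe)

coeffs = np.polyfit(x_doe, y_doe, 2)     # least-squares quadratic response surface
x_star = -coeffs[1] / (2.0 * coeffs[0])  # vertex of the fitted parabola
print(f"surrogate minimum at x = {x_star:.3f} (true noise-free minimum at 1.5)")
```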
- Analysis and optimal design of pressurized, imperfect, anisotropic ring-stiffened cylinders. Ley, Robert Paul (Virginia Tech, 1992-06-15). Development of an algorithm to perform the structural analysis and optimal sizing of buckling-resistant, imperfect, anisotropic ring-stiffened cylinders subjected to axial compression, torsion, and internal pressure is presented. The structure is modeled as a branched shell. A nonlinear axisymmetric prebuckling equilibrium state is assumed which is amenable to exact solution within each branch. Buckling displacements are represented by a Fourier series in the circumferential coordinate and finite elements in the axial or radial coordinate. A separate, more detailed analytical model is employed to predict prebuckling stresses in the flange/skin interface region. Results of case studies indicate that a nonlinear prebuckling analysis is needed to accurately predict buckling loads and mode shapes of these cylinders, that the rings have a greater influence on the buckling resistance as the relative magnitude of the torsional loading to axial compression loading is increased, but that this ring effectiveness decreases somewhat when internal pressure is added. The enforcement of stability constraints is treated in a way that does not require any eigenvalue analysis. Case studies performed using a combination of penalty function and feasible direction optimization methods indicate that the presence of the axisymmetric initial imperfection in the cylinder wall can significantly affect the optimal designs. Weight savings associated with the addition of two rings to the unstiffened cylinder and/or the addition of internal pressure are substantial when torsion makes up a significant fraction of the combined load state. Assumption of criticality of the stability constraints and neglect of the stress constraints during the optimal sizing of the cylinders produced designs that nevertheless satisfied all of the stress constraints, in general, as well as the stability constraints. Subsequent re-sizing of one cylinder to satisfy a violated in-plane matrix cracking constraint resulted in an optimal design that was 49% heavier than the optimal design produced when this constraint was ignored. The additional internal pressure necessary to produce a violation of a stress constraint for each optimal design was calculated. Using an unsymmetrically laminated ring flange, a substantial increase in the strength of the flange/skin joint was observed.
- Analysis of a Nonhierarchical Decomposition Algorithm. Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.; Sobieszczanski-Sobieski, Jaroslaw (Department of Computer Science, Virginia Polytechnic Institute & State University, 1992). Large-scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two-dimensional quadratic programs. This paper carefully analyzes the algorithm for quadratic programs and suggests a number of modifications to improve its robustness.
- Analytical and experimental comparison of deterministic and probabilistic optimization. Ponslet, Eric (Virginia Tech, 1994). The probabilistic approach to design optimization has received increased attention in the last two decades. It is widely recognized that such an approach should lead to designs that make better use of resources than designs obtained with the classical deterministic approach, by distributing safety among the different components and/or failure modes of a system in an optimal manner. However, probabilistic models rely on a number of assumptions regarding the magnitude of the uncertainties, their distributions, correlations, etc. In addition, modelling errors and approximate reliability calculations (first-order methods, for example) introduce uncertainty in the predicted system reliability. Because of these inaccuracies, it is not clear whether a design obtained from probabilistic optimization will really be more reliable than a design based on deterministic optimization. The objective of this work is to provide a partial answer to this question through laboratory experiments; such experimental validation is not currently available in the literature. A cantilevered truss structure is used as a test case. First, the uncertainties in stiffness and mass properties of the truss elements are evaluated from a large number of measurements. The transmitted scatter in the natural frequencies of the truss is computed and compared to experimental estimates obtained from measurements on 6 realizations of the structure. The experimental results are in reasonable agreement with the predictions, although the magnitude of the transmitted scatter is extremely small. The truss is then equipped with passive viscoelastic tuned dampers for vibration control. The controlled structure is optimized by selecting locations for the dampers and for tuning masses added to the truss. The objective is to satisfy upper limits on the acceleration at given points on the truss for a specified excitation. The properties of the dampers are the primary sources of uncertainty. Two optimal designs are obtained from deterministic and probabilistic optimizations; the deterministic approach maximizes safety margins, while the probability of failure (i.e., exceeding the acceleration limit) is minimized in the probabilistic approach. The optimizations are performed with genetic algorithms. The predicted probability of failure of the optimum probabilistic design is less than half that of the deterministic optimum. Finally, the optimal deterministic and probabilistic designs are compared in the laboratory. Because small differences in failure rates between two designs are not measurable with a reasonable number of tests, we use anti-optimization to identify a design problem that maximizes the contrast in probability of failure between the two approaches. The anti-optimization is also performed with a genetic algorithm. For the problem identified by the anti-optimization, the probability of failure of the optimum probabilistic design is 25 times smaller than that of the deterministic design. The rates of failure are then measured by testing 29 realizations of each optimum design. The results agree well with the predictions and confirm the greater reliability of the probabilistic design. However, the probabilistic optimum is shown to be very sensitive to modelling errors. This sensitivity can be reduced by including the modelling errors as additional uncertainties in the probabilistic formulation.
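A minimal Monte Carlo sketch of the probabilistic side of such a comparison (hypothetical response model and numbers, not the truss experiment): sample the uncertain damper parameter, push it through the response, and count exceedances of the acceleration limit.

```python
import numpy as np

rng = np.random.default_rng(1)

def peak_acceleration(c):
    # Hypothetical response: acceleration grows as damping c drifts from nominal 1.0.
    return 1.0 + 4.0 * (c - 1.0)**2

limit = 1.05                                        # acceleration limit (assumed)
c = rng.normal(loc=1.0, scale=0.08, size=100_000)   # uncertain damper parameter
p_fail = np.mean(peak_acceleration(c) > limit)
print(f"estimated probability of failure: {p_fail:.3f}")
```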
- Analytical and experimental study of control effort associated with model reference adaptive control. Messer, Richard Scott (Virginia Tech, 1992). During the past decade, researchers have shown much interest in the control and identification of Large Space Structures (LSS). Our inability to model these LSS accurately has generated extensive research into robust controllers capable of maintaining stability in the presence of large structural uncertainties as well as changing structural characteristics. In this work the performance of Model Reference Adaptive Control (MRAC) is studied in numerical simulations and verified experimentally to understand how differences between the plant and the reference model affect the control effort. MRAC is applied analytically and experimentally to a single-degree-of-freedom system and analytically to a multi-degree-of-freedom system with multiple inputs and outputs. Good experimental and analytical agreement is demonstrated in control experiments, and it is shown that MRAC does an excellent job of controlling the structures and achieving the desired performance even when large differences between the plant and the ideal reference model exist. However, it is shown that reasonable differences between the reference model and the plant significantly increase the required control effort. The effects of increased damping in the reference model are considered, and it is shown that requiring the controller to provide increased damping actually decreases the required control effort when differences between the plant and reference model exist. This result is very useful because one of the first attempts to counteract the increased control effort due to differences between the plant and reference model might be to require less damping; however, this would actually increase the control effort. The use of optimization to improve performance and reduce control effort is shown to be limited, because the actual control-structure system cannot realize all the performance improvements of the analytical optimal system. Finally, it is shown that very large sampling rates may be required to accurately implement MRAC.
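The flavor of MRAC and its control effort can be sketched with the classic first-order example adapted by the MIT rule (a textbook-style illustration with assumed gains and plant, not the thesis's multi-degree-of-freedom setup): the plant output is driven toward a reference model, and the adapted gain directly reflects the control effort required.

```python
import numpy as np

dt, T = 0.001, 50.0
a, b = 2.0, 0.5            # plant:           dy/dt  = -a*y   + b*u
am, bm = 2.0, 2.0          # reference model: dym/dt = -am*ym + bm*r
gamma = 1.0                # adaptation gain (assumed)

y = ym = phi = theta = 0.0
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if t % 20.0 < 10.0 else -1.0   # square-wave reference command
    u = theta * r                          # adjustable feedforward controller
    e = y - ym                             # model-following error
    y += (-a * y + b * u) * dt             # integrate plant
    ym += (-am * ym + bm * r) * dt         # integrate reference model
    phi += (-am * phi + r) * dt            # filtered sensitivity de/dtheta
    theta += -gamma * e * phi * dt         # MIT rule: descend gradient of e^2

# With matched poles (a = am), perfect following needs b*theta = bm, i.e. theta = 4.
print(f"adapted gain theta = {theta:.2f}")
```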
- An Application of Anti-Optimization in the Process of Validating Aerodynamic Codes. Cruz, Juan Ramón (Virginia Tech, 2003-04-04). An investigation was conducted to assess the usefulness of anti-optimization in the process of validating aerodynamic codes. Anti-optimization is defined here as the intentional search for regions where the computational and experimental results disagree. Maximizing such disagreements can be a useful tool in uncovering errors and/or weaknesses in both analyses and experiments. The codes chosen for this investigation were an airfoil code and a lifting line code used together as an analysis to predict three-dimensional wing aerodynamic coefficients. The parameter of interest was the maximum lift coefficient of the three-dimensional wing, CL max. The test domain encompassed Mach numbers from 0.3 to 0.8 and Reynolds numbers from 25,000 to 250,000. A simple rectangular wing was designed for the experiment. A wind tunnel model of this wing was built and tested in the NASA Langley Transonic Dynamics Tunnel. Selection of the test conditions (i.e., Mach and Reynolds numbers) was made by applying the techniques of response surface methodology and considerations involving the predicted experimental uncertainty. The test was planned and executed in two phases. In the first phase, runs were conducted at the pre-planned test conditions. Based on these results, additional runs were conducted in areas where significant differences in CL max were observed between the computational results and the experiment, in essence applying the concept of anti-optimization. These additional runs were used to verify the differences in CL max and assess the extent of the region where these differences occurred. The results of the experiment showed that the analysis was capable of predicting CL max to within 0.05 over most of the test domain. The application of anti-optimization succeeded in identifying a region where the computational and experimental values of CL max differed by more than 0.05, demonstrating the usefulness of anti-optimization in the process of validating aerodynamic codes. This region was centered at a Mach number of 0.55 and a Reynolds number of 34,000. Including considerations of the uncertainties in the computational and experimental results confirmed that the disagreement was real and not an artifact of the uncertainties.
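The anti-optimization step itself can be sketched as an ordinary optimization of the disagreement. The functions below are invented stand-ins (the real study used wind-tunnel data and the airfoil/lifting-line analysis); the point is only the mechanic of searching the Mach-Reynolds domain for the largest discrepancy in CL max.

```python
import numpy as np
from scipy.optimize import minimize

def cl_max_predicted(mach, re):
    # Stand-in for the airfoil + lifting-line prediction of CL_max.
    return 1.2 - 0.4 * mach - 0.05 * np.log10(2.5e5 / re)

def cl_max_measured(mach, re):
    # Stand-in for the experimental trend, with a localized discrepancy
    # planted near Mach 0.55 and low Reynolds number for illustration.
    bump = 0.08 * np.exp(-((mach - 0.55)**2 / 0.01
                           + (np.log10(re) - 4.5)**2 / 0.1))
    return cl_max_predicted(mach, re) - bump

def neg_disagreement(x):
    mach, log_re = x
    return -abs(cl_max_predicted(mach, 10**log_re)
                - cl_max_measured(mach, 10**log_re))

res = minimize(neg_disagreement, x0=[0.5, 5.0],
               bounds=[(0.3, 0.8), (np.log10(2.5e4), np.log10(2.5e5))])
mach, log_re = res.x
print(f"largest disagreement near Mach {mach:.2f}, Re {10**log_re:.0f}")
```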
- Automated design of composite plates for improved damage tolerance. Gürdal, Zafer (Virginia Polytechnic Institute and State University, 1985). An automated procedure for designing minimum-weight composite plates subject to a local damage constraint under tensile and compressive loadings has been developed. A strain-based criterion was used to obtain the fracture toughness of cracked plates under tension. Results of an experimental investigation of the effects of simulated through-the-thickness cracks on the buckling, postbuckling, and failure characteristics of composite flat plates are presented. A model for kinking failure of fibers at the crack tip was developed for compression loadings. A finite element program based on linear elastic fracture mechanics for calculating the stress intensity factor (SIF) was incorporated in the design cycle. A general-purpose mathematical optimization algorithm was used for the weight minimization. Analytical sensitivity derivatives of the SIF, obtained by employing the adjoint variable technique, were used to enhance the computational efficiency of the procedure. Design results for both unstiffened and stiffened plates are presented.
- Clean Wing Airframe Noise Modeling for Multidisciplinary Design and Optimization. Hosder, Serhat (Virginia Tech, 2004-07-29). A new noise metric has been developed that may be used for optimization problems involving aerodynamic noise from a clean wing. The modeling approach uses a classical trailing edge noise theory as the starting point. The final form of the noise metric includes characteristic velocity and length scales that are obtained from three-dimensional, steady RANS simulations with a two-equation k-omega turbulence model. The noise metric is not the absolute value of the noise intensity, but an accurate relative noise measure, as shown in the validation studies. One of the unique features of the new noise metric is the modeling of the length scale, which is directly related to the turbulent structure of the flow at the trailing edge. The proposed noise metric model has been formulated so that it can capture the effect of different design variables on the clean wing airframe noise, such as the aircraft speed, lift coefficient, and wing geometry. It can also capture three-dimensional effects, which become important at high lift coefficients, since the characteristic velocity and length scales are allowed to vary along the span of the wing. Noise metric validation was performed with seven test cases selected from a two-dimensional NACA 0012 experimental database. The agreement between the experiment and the predictions obtained with the new noise metric was very good at various speeds, angles of attack, and Reynolds numbers, which showed that the noise metric is capable of capturing the variations in the trailing edge noise as a relative noise measure when different flow conditions and parameters are changed. Parametric studies were performed to investigate the effect of different design variables on the noise metric. Two-dimensional parametric studies were done using two symmetric NACA four-digit airfoils (NACA 0012 and NACA 0009) and two supercritical (SC(2)-0710 and SC(2)-0714) airfoils. The three-dimensional studies were performed with two versions of a conventional transport wing at realistic approach conditions. The twist distribution of the baseline wing was changed to obtain a modified wing, which was used to investigate the effect of twist on the trailing edge noise. An example study with the NACA 0012 and NACA 0009 airfoils demonstrated a reduction in the trailing edge noise by decreasing the thickness ratio and the lift coefficient, while increasing the chord length to keep the same lift at a constant speed. Both the two- and three-dimensional studies demonstrated that the trailing edge noise remains almost constant at low lift coefficients and grows larger at higher lift values. The increase in the noise metric can be dramatic when there is separation on the wing. Three-dimensional effects observed in the wing cases indicate the importance of calculating the noise metric with a characteristic velocity and length scale that vary along the span. The twist change does not have a significant effect on the noise at low lift coefficients; however, it may give significant noise reduction at higher lift values. The results obtained in this study show the importance of the lift coefficient on the airframe noise of a clean wing and favor having a larger wing area to reduce the lift coefficient for minimizing the noise. The results also point to the fact that noise reduction studies should be performed in a multidisciplinary design and optimization framework, since many of the parameters that change the trailing edge noise also affect other aircraft design requirements. It is hoped that the noise metric developed here can aid in such multidisciplinary design and optimization studies.
- A Coarse Grained Parallel Variable-Complexity Multidisciplinary Optimization Paradigm. Burgee, Susan L.; Giunta, Anthony A.; Balabanov, Vladimir; Grossman, Bernard M.; Mason, William H.; Narducci, Robert; Haftka, Raphael T.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1995-10-01). Modern aerospace vehicle design requires the interaction of multiple disciplines, traditionally processed in a sequential order. Multidisciplinary optimization (MDO), a formal methodology for the integration of these disciplines, is evolving towards methods capable of replacing the traditional sequential methodology of aerospace vehicle design with concurrent algorithms, yielding both an overall gain in product performance and a decrease in design time. A parallel MDO paradigm using variable-complexity modeling and multipoint response surface approximations is presented here for the particular instance of the design of a high speed civil transport (HSCT). This paradigm interleaves the disciplines at one level of complexity and processes them hierarchically at another level of complexity, achieving parallelism within disciplines rather than across disciplines. A master-slave paradigm manages a coarse-grained parallelism of the analysis and optimization codes required by the disciplines, showing reasonable speedups and efficiencies on an Intel Paragon.
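The master-slave pattern itself is simple to sketch (a generic multiprocessing illustration, not the Intel Paragon code): a master process farms independent analyses out to a pool of workers and collects results as they finish.

```python
from multiprocessing import Pool

def analysis(design_id):
    # Stand-in for an expensive discipline analysis of one candidate design.
    return design_id, sum(i * i for i in range(100_000))

if __name__ == "__main__":
    with Pool(processes=4) as pool:    # the "slaves"
        # The master hands out independent analyses and collects results
        # in whatever order they complete.
        for design_id, result in pool.imap_unordered(analysis, range(16)):
            print(f"design {design_id} analyzed: {result}")
```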
- Comparing Probabilistic and Fuzzy Set Approaches for Designing in the Presence of Uncertainty. Chen, Qinghong (Virginia Tech, 2000-08-08). Probabilistic models and fuzzy set models describe different aspects of uncertainty. Probabilistic models primarily describe random variability in parameters. In engineering system safety, examples are variability in material properties, geometrical dimensions, or wind loads. In contrast, fuzzy set models of uncertainty primarily describe vagueness, such as vagueness in the definition of safety. When there is only limited information about variability, it is possible to use probabilistic models by making suitable assumptions about the statistics of the variability. However, it has been repeatedly shown that this can entail serious errors. Fuzzy set models, which require little data, appear to be well suited to designing for uncertainty when little is known about the uncertainty. Several studies have compared fuzzy set and probabilistic methods in the analysis of the safety of systems under uncertainty. However, no study has compared the two approaches systematically as a function of the amount of available information. Such a comparison, in the context of design against failure, is the objective of this dissertation. First, the theoretical foundations of probability and possibility theories are compared. We show that a major difference between probability and possibility is in the axioms about the union of events. Because of this difference, probability and possibility calculi are fundamentally different, and one cannot simulate possibility calculus using probabilistic models. We also show that possibility-based methods tend to be more conservative than probability-based methods in systems that fail only if many unfavorable events occur simultaneously. Based on these theoretical observations, two design problems are formulated to demonstrate the strengths and weaknesses of probabilistic and fuzzy set methods. We consider the design of a tuned damper system and the design and construction of domino stacks. These problems contain narrow failure zones in their uncertain variables and are tailored to demonstrate the pitfalls of probabilistic methods when little information is available for uncertain variables. Using these design problems, we demonstrate that probabilistic methods are better than possibility-based methods if sufficient information is available. Just as importantly, we show that possibility-based methods can be better if little information is available. Our conclusion is that when there is little information available about uncertainties, a hybrid method should be used to ensure a safe design.
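The axiom difference for unions fits in four lines: for disjoint events, probabilities add, while possibilities combine by the maximum (the numbers here are arbitrary).

```python
# Disjoint events A and B.
p_a, p_b = 0.2, 0.3
print("probability of A or B:", p_a + p_b)        # additive: 0.5

pi_a, pi_b = 0.2, 0.3
print("possibility of A or B:", max(pi_a, pi_b))  # maxitive: 0.3
```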
- Computational aspects of sensitivity calculations in linear transient structural analysis. Greene, William H. (Virginia Polytechnic Institute and State University, 1989). A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed-mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity, and for small numbers of modes the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes. In both techniques the computational cost is much less than would result if the vibration modes were recalculated and then used in an overall finite difference method.
- Delamination initiation in postbuckled dropped-ply laminates. Dávila, Carlos G. (Virginia Tech, 1991-04-06). The compression strength of dropped-ply, graphite-epoxy laminated plates for the delamination mode of failure is studied by analysis and corroborated with experiments. The nonlinear response of the test specimens is modeled by a geometrically nonlinear finite element analysis. The methodology for predicting delamination is based on a quadratic interlaminar stress criterion evaluated at a characteristic distance from the ply drop-off. The details of the complex state of stress in the region of the thickness discontinuity are studied using three-dimensional solid elements, while the uniform sections of the plate are modeled with quadrilateral shell elements. A geometrically nonlinear transition element was developed to couple the shell elements to the solid elements. The analysis was performed using the COmputational MEchanics Testbed (COMET), an advanced structural analysis software environment developed at the NASA Langley Research Center to provide a framework for research in structural analysis methods. Uniaxial compression testing of dropped-ply, graphite-epoxy laminated plates has confirmed that delamination along the interfaces above and/or below the dropped plies is a common mode of failure initiation. The compression strength of specimens exhibiting a linear response is greater than the compression strength of specimens with the same layup exhibiting geometrically nonlinear response. Experimental and analytical results also show a decrease in laminate strength with increasing number of dropped plies. For linear response there is a large decrease in compression strength with increasing number of dropped plies. For nonlinear response there is less of a reduction in compression strength with increasing number of dropped plies because the nonlinear response causes a redistribution and concentration of interlaminar stresses toward the unloaded edges of the laminate.
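A generic quadratic interlaminar stress criterion of the kind referenced above can be sketched as follows (the strength values and the tension-only treatment of the normal stress are assumptions for illustration; the thesis's exact form and constants may differ).

```python
def delamination_index(sigma_z, tau_xz, tau_yz, Zt=50.0, Sxz=90.0, Syz=90.0):
    # Quadratic interlaminar criterion: failure predicted when the index
    # reaches 1. Only tensile normal stress opens a delamination, so the
    # compressive part is clipped. Strengths (MPa) are placeholder values.
    normal = max(sigma_z, 0.0) / Zt
    return normal**2 + (tau_xz / Sxz)**2 + (tau_yz / Syz)**2

# Stresses averaged at a characteristic distance from the ply drop-off (MPa).
print(delamination_index(20.0, 60.0, 10.0))  # < 1: no delamination predicted
```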
- Design of automotive joints: using optimization to translate performance criteria to physical design parameters. Zhu, Min (Virginia Tech, 1994). In the preliminary design stage of a car body, targets are first set on the performance characteristics of the overall body and its components using optimization and engineering judgment. Designers then try to design the components to meet the determined performance targets and keep the weight low using empirical, trial-and-error procedures. This process usually yields poor results because it is difficult to find a good design that satisfies the targets using trial and error, and there might even be no feasible design that meets the targets. To improve the current design process, we need tools to link the performance targets and the physical design parameters. A methodology is presented for developing two such tools for design guidance of joints in car bodies. The first tool predicts the performance characteristics of a given joint quickly (in a fraction of a second). The second finds a joint design that meets given performance targets and satisfies packaging and manufacturing constraints. These tools can be viewed as translators that translate the design parameters defining the geometry of a joint into performance characteristics of that joint, and vice versa. The methodology for developing the first translator involves parameterization of a joint; identification of packaging, manufacturing, and styling constraints; and establishment of a neural network and a response surface polynomial to predict the performance of a given joint quickly. The neural network is trained using results from finite element analysis of several joint designs. The second translator is an optimizer that finds the joint with the smallest mass that meets given performance targets and satisfies packaging, manufacturing, and styling constraints. The methodology is demonstrated on a joint of an actual car.
- Design of Composite Laminates by a Genetic Algorithm with Memory. Kogiso, N.; Watson, Layne T.; Gürdal, Zafer; Haftka, Raphael T.; Nagendra, S. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1994). This paper describes the use of a genetic algorithm with memory for the design of minimum thickness composite laminates subject to strength, buckling and ply contiguity constraints. A binary tree is used to efficiently store and retrieve information about past designs. This information is used to construct a set of linear approximations to the buckling load in the neighborhood of each member of the population of designs. The approximations are then used to seek nearby improved designs in a procedure called local improvement. The paper demonstrates that this procedure substantially reduces the number of analyses required for the genetic search. The paper also demonstrates that the use of genetic algorithms helps find several alternate designs with similar performance, thus giving the designer a choice of alternatives.
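The "memory" idea is easy to illustrate: cache every analyzed design so the genetic search never pays for the same analysis twice. The sketch below uses a dict keyed by the ply-orientation chromosome as a stand-in for the paper's binary tree, with a toy objective in place of the laminate analysis.

```python
import random

random.seed(0)
PLIES = ['0', '45', '90']
memory = {}   # chromosome -> objective; the paper uses a binary tree

def analyze(design):
    # Toy stand-in for an expensive strength/buckling analysis.
    return sum(1.0 if p == '45' else 2.0 for p in design)

def evaluate(design):
    key = tuple(design)
    if key not in memory:          # pay for the analysis only once per design
        memory[key] = analyze(key)
    return memory[key]

population = [[random.choice(PLIES) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    population.sort(key=evaluate)
    parents = population[:10]
    population = [[random.choice(genes)                    # uniform crossover
                   for genes in zip(*random.sample(parents, 2))]
                  for _ in range(20)]

print(f"{len(memory)} unique analyses served {20 * 50} fitness evaluations")
```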
- Design of Laminated Plates for Maximum Buckling Load. Sing, Yung S.; Haftka, Raphael T.; Watson, Layne T.; Plaut, Raymond H. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1988). The buckling load of laminated plates having midplane symmetry is maximized for a given total thickness. The thicknesses of the layers are taken as the design variables. Buckling analysis is carried out using the finite element method. The optimality equations are solved by a homotopy method which permits tracing optima as a function of total thickness. It is shown that for any design with a given stacking sequence of ply orientations, there exists a design associated with any other stacking sequence which possesses the same bending stiffness matrix and same total thickness. Hence, from the optimum design for a given stacking sequence, one can directly determine the optimum design for any rearrangement of the ply orientations, and the optimum buckling load is independent of the stacking sequence.
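The stacking-sequence result can be checked numerically for the simplest case, a symmetric laminate with one 0-degree and one 90-degree layer: swapping which angle sits outside while re-choosing the layer thicknesses (total held fixed) reproduces the z-cubed weighting integrals that enter the bending stiffness matrix D. This is a self-contained illustration, not code from the paper.

```python
import numpy as np

h = 1.0    # laminate half-thickness (total thickness 2h, held fixed)
c = 0.6    # stacking A: 90-deg core occupies |z| < c, 0-deg layers outside

# z^3 integrals that weight each angle's reduced stiffness in the D matrix
I0_a, I90_a = (2/3) * (h**3 - c**3), (2/3) * c**3

# Stacking B swaps the angles: 0-deg core of half-thickness d, 90-deg outside.
# Choosing d so the 0-deg integral matches automatically matches the 90-deg one.
d = (h**3 - c**3) ** (1/3)
I0_b, I90_b = (2/3) * d**3, (2/3) * (h**3 - d**3)

print(np.isclose(I0_a, I0_b), np.isclose(I90_a, I90_b))  # True True
```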
- Detecting Delaminations in Composite Structures Using Anti-Optimization. Lee, Jaechong; Haftka, Raphael T.; Griffin, Odis Hayden Jr.; Watson, Layne T.; Sensmeier, Mark (Department of Computer Science, Virginia Polytechnic Institute & State University, 1994-07-01). The present study proposes a detection technique for delaminations in a laminated composite structure. The proposed technique optimizes the spatial distribution of harmonic excitation so as to magnify the difference between the delaminated and intact structure. The technique is evaluated by numerical simulation of two-layered aluminum beams. Effects of measurement and geometric noise are included in the analysis. A finite element model for a delaminated composite, based on the layer-wise laminated plate theory in conjunction with a step function to simulate delaminations, is used.