dc.contributor.author: Deshpande, Shubhangi [en_US]
dc.contributor.author: Watson, Layne T. [en_US]
dc.contributor.author: Shu, Jiang [en_US]
dc.contributor.author: Kamke, Frederick A. [en_US]
dc.contributor.author: Ramakrishnan, Naren [en_US]
dc.date.accessioned: 2013-06-19T14:36:51Z
dc.date.available: 2013-06-19T14:36:51Z
dc.date.issued: 2009
dc.identifier: http://eprints.cs.vt.edu/archive/00001093/ [en_US]
dc.identifier.uri: http://hdl.handle.net/10919/19640
dc.description.abstract: Large scale, multidisciplinary engineering design is difficult because of the complexity and dimensionality of the underlying problems. Directly coupling the analysis codes to the optimization routines can be prohibitively time consuming because of the cost of the underlying simulation codes. One way of tackling this problem is to construct computationally cheaper approximations of the expensive simulations that mimic the behavior of the simulation model as closely as possible. This paper presents a data driven, surrogate based optimization algorithm that uses a trust region based sequential approximate optimization (SAO) framework and a statistical sampling approach based on design of experiment (DOE) arrays. The algorithm is implemented using techniques from two packages, SURFPACK and SHEPPACK, which provide a collection of approximation algorithms for building the surrogates; three DOE techniques, full factorial (FF), Latin hypercube sampling (LHS), and central composite design (CCD), are used to train the surrogates. The results are compared with the optimization results obtained by directly coupling an optimizer with the simulation code. The biggest concern in using an SAO framework based on statistical sampling is the generation of the required database: as the number of design variables grows, the computational cost of generating the database grows rapidly. A data driven approach is proposed to address this, where the expensive simulation is run if and only if a nearby data point does not already exist in the cumulatively growing database. Over time the database matures and is enriched as more optimizations are performed. Results show that the proposed methodology dramatically reduces the total number of calls to the expensive simulation during the optimization process. [en_US]
dc.format.mimetype: application/pdf [en_US]
dc.publisher: Department of Computer Science, Virginia Polytechnic Institute & State University [en_US]
dc.subject: Problem solving environments [en_US]
dc.title: Data Driven Surrogate Based Optimization in the Problem Solving Environment WBCSim [en_US]
dc.type: Technical report [en_US]
dc.identifier.trnumber: TR-09-24 [en_US]
dc.type.dcmitype: Text [en_US]
dc.identifier.sourceurl: http://eprints.cs.vt.edu/archive/00001093/01/wbcEwC09.pdf
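
The data driven caching strategy described in the abstract above (run the expensive simulation only when no sufficiently close design already exists in the database) can be sketched as follows. This is a minimal illustrative sketch in Python, not the WBCSim or SURFPACK/SHEPPACK implementation; the Latin hypercube generator, the Euclidean distance test, and the tolerance value tol are assumptions chosen for exposition.

    import numpy as np

    def latin_hypercube(n_samples, n_dims, rng):
        # Illustrative Latin hypercube sample on the unit cube: one stratum per
        # sample in each dimension, with the strata permuted independently.
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        return np.column_stack([rng.permutation(strata) for _ in range(n_dims)])

    def evaluate_with_cache(x, database, expensive_simulation, tol=1e-2):
        # Run the expensive simulation only if no cached design lies within tol
        # (Euclidean distance); otherwise reuse the stored response.
        for x_cached, y_cached in database:
            if np.linalg.norm(x - x_cached) <= tol:
                return y_cached
        y = expensive_simulation(x)          # pay the full simulation cost
        database.append((x.copy(), y))       # enrich the cumulative database
        return y

    # Hypothetical usage: a toy 2-D objective stands in for a WBCSim run.
    def simulate(x):
        return float(np.sum((x - 0.3) ** 2))

    rng = np.random.default_rng(0)
    database = []
    X = latin_hypercube(20, 2, rng)
    for x in X:                                   # first study fills the database
        evaluate_with_cache(x, database, simulate)
    first_run_calls = len(database)
    for x in X + rng.normal(0.0, 1e-3, X.shape):  # later study with nearby designs
        evaluate_with_cache(x, database, simulate)
    print(first_run_calls, "calls initially;", len(database), "after the second study")

In the actual framework the cached and newly computed responses would feed the surrogate fits (built with SURFPACK or SHEPPACK) inside the trust region SAO loop; the sketch only shows the proximity test that avoids redundant simulation runs.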

