Lagrangian Relaxation / Dual Approaches For Solving Large-Scale Linear Programming Problems
dc.contributor.author | Madabushi, Ananth R. | en |
dc.contributor.committeechair | Sherali, Hanif D. | en |
dc.contributor.committeemember | Kobza, John E. | en |
dc.contributor.committeemember | Jacobson, Sheldon H. | en |
dc.contributor.department | Industrial and Systems Engineering | en |
dc.date.accessioned | 2014-03-14T20:51:56Z | en |
dc.date.adate | 1997-02-17 | en |
dc.date.available | 2014-03-14T20:51:56Z | en |
dc.date.issued | 1997-02-17 | en |
dc.date.rdate | 1997-02-17 | en |
dc.date.sdate | 1998-07-18 | en |
dc.description.abstract | This research effort focuses on large-scale linear programming problems that arise in the context of solving various problems such as discrete linear or polynomial programming problems and continuous nonlinear, nonconvex programming problems, using linearization and branch-and-cut algorithms for the discrete case, and polyhedral outer-approximation methods for the continuous case. These problems arise in applications in production planning, location-allocation, game theory, economics, and many engineering and systems design problems. During the solution process of discrete or continuous nonconvex problems using polyhedral approaches, one has to contend with repeatedly solving large-scale linear programming (LP) relaxations. Thus, it becomes imperative to employ an efficient method for solving these problems. It has been amply demonstrated that solving LP relaxations using a simplex-based algorithm, or even an interior-point procedure, can be inadequately slow (especially in the presence of complicating constraints, dense coefficient matrices, and ill-conditioning) in comparison with a Lagrangian relaxation approach. With this motivation, we present a practical primal-dual subgradient algorithm that incorporates a dual ascent, a primal recovery, and a penalty function approach to recover a near-optimal and feasible pair of primal and dual solutions. The proposed primal-dual approach comprises three stages. Stage I solves the Lagrangian dual problem using various subgradient deflection strategies such as the Modified Gradient Technique (MGT), the Average Direction Strategy (ADS), and a new direction strategy called the Modified Average Direction Strategy (M-ADS). In the latter, the deflection parameter is determined by projecting the unknown optimal direction onto the space spanned by the current subgradient direction and the previous direction. This projected direction approximates the desired optimal direction as closely as possible using the conjugate subgradient concept. The step-length rules implemented in this regard are the Quadratic Fit Line Search Method and a new line search method called the Directional Derivative Line Search Method, in which we start with a prescribed step-length and then ascertain whether to increase or decrease its value based on the right-hand and left-hand derivative information available at each iteration. In the second stage of the algorithm (Stage II), a sequence of updated primal solutions is generated using convex combinations of the Lagrangian subproblem solutions. Alternatively, a starting primal optimal solution can be obtained using the complementary slackness conditions. Depending on the extent of feasibility and optimality attained, Stage III applies a penalty function method to improve the obtained primal solution toward a near-feasible and near-optimal solution. We present computational experience using a set of randomly generated, structured linear programming problems of the type that might typically arise in the context of discrete optimization. | en |
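To make the Stage I and Stage II ideas summarized in the abstract more concrete, the sketch below shows a deflected-subgradient ascent on the Lagrangian dual of a small LP, together with primal recovery by averaging the Lagrangian subproblem solutions. It is an illustrative sketch only, not the thesis code: the toy data, the diminishing step-length rule, and the specific deflection formula (a simple form of the Average Direction Strategy) are assumptions made for the example, whereas the thesis itself uses the line-search rules and the M-ADS deflection described above.

```python
# Minimal sketch (illustrative assumptions, not the thesis implementation):
# subgradient ascent on the Lagrangian dual of an LP with a deflected
# direction in the spirit of the Average Direction Strategy, plus primal
# recovery by convex combinations of the subproblem solutions (Stage II idea).
import numpy as np

# Toy LP: minimize c.x  subject to  A x >= b,  0 <= x <= ub  (data is made up)
c  = np.array([2.0, 3.0, 1.0])
A  = np.array([[1.0, 2.0, 1.0],
               [2.0, 1.0, 3.0]])
b  = np.array([4.0, 5.0])
ub = np.array([3.0, 3.0, 3.0])

def lagrangian_subproblem(u):
    """Minimize (c - A^T u).x + u.b over the box 0 <= x <= ub, solved componentwise."""
    red_cost = c - A.T @ u
    x = np.where(red_cost < 0.0, ub, 0.0)   # take the upper bound where reduced cost is negative
    value = red_cost @ x + u @ b             # dual function value L(u)
    subgrad = b - A @ x                      # a subgradient of L at u
    return x, value, subgrad

u = np.zeros(len(b))        # dual multipliers (kept nonnegative for the >= constraints)
d_prev = np.zeros(len(b))   # previous search direction, used for deflection
x_bar = np.zeros(len(c))    # running convex combination of subproblem solutions
best_dual = -np.inf

for k in range(1, 201):
    x_k, L_u, g = lagrangian_subproblem(u)
    best_dual = max(best_dual, L_u)

    # Deflection: blend the raw subgradient with the previous direction
    # (d = g + (||g|| / ||d_prev||) d_prev), a simple ADS-style rule.
    if np.linalg.norm(d_prev) > 1e-12:
        d = g + (np.linalg.norm(g) / np.linalg.norm(d_prev)) * d_prev
    else:
        d = g
    d_prev = d

    # Diminishing step length (an assumption; the thesis uses line-search rules).
    step = 1.0 / k
    u = np.maximum(0.0, u + step * d)

    # Stage II idea: ergodic primal recovery via equal-weight convex combinations.
    x_bar = ((k - 1) * x_bar + x_k) / k

print("best dual bound:", round(best_dual, 4))
print("recovered primal point:", np.round(x_bar, 4))
print("constraint violation (A x_bar - b):", np.round(A @ x_bar - b, 4))
```

The deflection step blends the current subgradient with the previous direction, which tends to dampen the zig-zagging of pure subgradient ascent, and the running average of subproblem solutions yields a primal point whose remaining constraint violation can be monitored before applying a Stage III style penalty refinement.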
dc.description.degree | Master of Science | en |
dc.identifier.other | etd-612112439741131 | en |
dc.identifier.sourceurl | http://scholar.lib.vt.edu/theses/available/etd-612112439741131/ | en |
dc.identifier.uri | http://hdl.handle.net/10919/36833 | en |
dc.publisher | Virginia Tech | en |
dc.relation.haspart | etd.pdf | en |
dc.relation.haspart | Amadhabu.pdf | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | lagrangian relaxation | en |
dc.subject | subgradient | en |
dc.subject | line search | en |
dc.subject | primal recovery | en |
dc.subject | penalty function | en |
dc.title | Lagrangian Relaxation / Dual Approaches For Solving Large-Scale Linear Programming Problems | en |
dc.type | Thesis | en |
thesis.degree.discipline | Industrial and Systems Engineering | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | Master of Science | en |