Browsing by Author "Sherali, Hanif D."
- An Airspace Planning and Collaborative Decision Making Model Under Safety, Workload, and Equity Considerations. Staats, Raymond William (Virginia Tech, 2003-04-04). We develop a detailed, large-scale airspace planning and collaborative decision-making model (APCDM) that is part of an $11.5B, 10-year, Federal Aviation Administration (FAA)-sponsored effort to increase U.S. National Airspace (NAS) capacity by 30 percent. Given a set of flights that must be scheduled during some planning horizon, we use a mixed-integer programming formulation to select a set of flight plans from among alternatives subject to flight safety, air traffic control workload, and airline equity constraints. Novel contributions of this research include three-dimensional probabilistic conflict analyses, the derivation of valid inequalities to tighten the conflict safety representation constraints, the development of workload metrics based on average (and its variance from) peak load measures, and the consideration of equity among airline carriers in absorbing the costs related to re-routing, delays, and cancellations. We also propose an improved set of flight plan cost factors for representing system costs and investigating fairness issues by addressing flight dependencies occurring in hubbed operations, as well as market factors such as schedule convenience, reliability, and the timeliness of connections. The APCDM model has potential use for both tactical and strategic applications, such as air traffic control in response to severe weather phenomena or spacecraft launches, FAA policy evaluation, Homeland Defense contingency planning, and military air campaign planning. The model is tested under various airspace restriction scenarios imposed by dynamic severe weather systems and space launch Special Use Airspace (SUA) impositions. The results from this model can also serve to augment the FAA's National Playbook of standardized flight profiles in different disruption-prone regions of the National Airspace.
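The flight-plan selection core of such a model can be pictured with a tiny, purely hypothetical instance: one plan must be chosen per flight, conflicting plan pairs are mutually exclusive, and an equity term penalizes uneven cost burdens across carriers. The brute-force sketch below is a stand-in for the mixed-integer program, with made-up plan data and a simplified equity penalty.

```python
# Minimal illustrative sketch (not the APCDM model itself): pick one flight
# plan per flight so that no two selected plans conflict, minimizing total
# cost plus a simple equity penalty. All data are hypothetical.
from itertools import product

# plans[flight] = list of (plan_id, airline, cost)
plans = {
    "F1": [("F1a", "A", 10), ("F1b", "A", 14)],
    "F2": [("F2a", "B", 12), ("F2b", "B", 9)],
    "F3": [("F3a", "A", 8), ("F3b", "B", 11)],
}
# Plan pairs flagged as unsafe by (here, imaginary) conflict analysis;
# at most one plan of each pair may be selected.
conflicts = {("F1a", "F2b"), ("F2a", "F3a")}

def equity_penalty(selection):
    # Spread between the most- and least-burdened airline, lightly weighted.
    by_airline = {}
    for _, airline, cost in selection:
        by_airline[airline] = by_airline.get(airline, 0) + cost
    return 0.5 * (max(by_airline.values()) - min(by_airline.values()))

best = None
for combo in product(*plans.values()):
    ids = {p[0] for p in combo}
    if any(a in ids and b in ids for a, b in conflicts):
        continue  # violates a conflict-safety constraint
    obj = sum(c for _, _, c in combo) + equity_penalty(combo)
    if best is None or obj < best[0]:
        best = (obj, combo)

print(best)
```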
- Algorithmic Approaches for Solving the Euclidean Distance Location and Location-Allocation Problems. Al-Loughani, Intesar Mansour (Virginia Tech, 1997-07-08). This dissertation is concerned with the development of algorithmic approaches for solving the minisum location and location-allocation problems in which the Euclidean metric is used to measure distances. To overcome the nondifferentiability difficulty associated with the Euclidean norm function, specialized solution procedures are developed for both the location and the location-allocation problems. For the multifacility location problem (EMFLP), two equivalent convex differentiable reformulations are proposed. The first of these is formulated directly in the primal space, and relationships between its Karush-Kuhn-Tucker (KKT) conditions and the necessary and sufficient optimality conditions for EMFLP are established in order to explore the use of standard convex differentiable nonlinear programming algorithms that are guaranteed to converge to KKT solutions. The second equivalent differentiable formulation is derived via a Lagrangian dual approach based on the optimum of a linear function over a unit ball (circle). For this dual approach, which recovers Francis and Cabot's (1972) dual problem, we also characterize the recovery of primal location decisions, hence settling an issue that has remained open since 1972. In another approach for solving EMFLP, conjugate or deflected subgradient based algorithms along with suitable line-search strategies are proposed. The subgradient deflection method considered is the Average Direction Strategy (ADS) embedded within the Variable Target Value Method (VTVM). The generation of two types of subgradients to be employed in conjunction with ADS is investigated. The first type is a simple valid subgradient that assigns zero components corresponding to the nondifferentiable terms in the objective function. The second type expends more effort to derive a low-norm member of the subdifferential in order to enhance the prospect of obtaining a descent direction. Furthermore, a Newton-based line-search is also designed and implemented in order to enhance the convergence behavior of the developed algorithm. Various combinations of the above strategies are composed and evaluated on a set of test problems. Computational results for all the proposed algorithmic approaches are presented, using a set of test problems that include some standard problems from the literature. These results exhibit the relative advantages of employing the new proposed procedures. Finally, we study the capacitated Euclidean distance location-allocation problem. No global optimization algorithm has been developed and tested for this class of problems, aside from a total enumeration approach. We develop a branch-and-bound algorithm that implicitly/partially enumerates the vertices of the feasible region of the transportation constraints in order to determine a global optimum for this nonconvex problem. For deriving lower bounds on node subproblems, a specialized variant of the Reformulation-Linearization Technique (RLT) is suitably designed, which transforms the representation of this nonconvex problem from the original defining space into a higher-dimensional space associated with a lower bounding (largely linear) convex program. The maximum of the RLT relaxation-based lower bound, obtained via a deflected subgradient strategy applied to a Lagrangian dual formulation of this problem, and another readily computed lower bound in the projected location space is considered at each node of the branch-and-bound tree for fathoming purposes. In addition, certain cut-set inequalities in the allocation space, and objective function based cuts in the location space, are generated to further tighten the lower bounding relaxation. Computational experience is provided on a set of randomly generated test problems to investigate both the RLT-based and the projected location-space lower bounding schemes. The results indicate that the proposed global optimization approach for this class of problems offers a promising, viable solution procedure. In fact, for two instances available in the literature, we report significantly improved solutions. The dissertation concludes with recommendations for further research on this challenging class of problems. Data for the collection of test problems are provided in the Appendix to facilitate further testing in this area.
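To make the first ("type 1") subgradient concrete, here is a minimal sketch for the single-facility Euclidean minisum (Fermat-Weber) problem: at a kink, where the iterate coincides with a demand point, that term simply contributes a zero component, which is a valid subgradient choice. The demand points, weights, and diminishing step rule are illustrative, not from the dissertation.

```python
# Subgradient descent for f(x) = sum_j w_j * ||x - a_j|| with the simple
# "zero at kinks" valid subgradient described above. Hypothetical data.
import math

points = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]   # demand points a_j
weights = [1.0, 1.0, 1.0]                        # weights w_j

def objective(x):
    return sum(w * math.dist(x, a) for w, a in zip(weights, points))

def subgradient(x):
    g = [0.0, 0.0]
    for w, a in zip(weights, points):
        d = math.dist(x, a)
        if d > 1e-12:                            # differentiable term
            g[0] += w * (x[0] - a[0]) / d
            g[1] += w * (x[1] - a[1]) / d
        # else: assign a zero component at the nondifferentiable term
    return g

x = [1.0, 1.0]
for k in range(1, 201):
    g = subgradient(x)
    step = 1.0 / k                               # diminishing step size
    x = [x[0] - step * g[0], x[1] - step * g[1]]

print(x, objective(x))
```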
- Algorithms and Optimization for Wireless Networks. Shi, Yi (Virginia Tech, 2007-10-25). Recently, many new types of wireless networks have emerged for both civil and military applications, such as wireless sensor networks and ad hoc networks. To improve the performance of these wireless networks, many advanced communication techniques have been developed at the physical layer. For both theoretical and practical purposes, it is important for a network researcher to understand the performance limits of these new wireless networks. Such performance limits are important not only for theoretical understanding, but also in that they can be used as benchmarks for the design of distributed algorithms and protocols. However, due to some unique characteristics associated with these networks, existing analytical techniques may not apply directly. As a result, new theoretical results, along with new mathematical techniques, need to be developed. In this dissertation, we focus on the design of new algorithms and optimization techniques to study the theoretical performance limits of these new wireless networks, concentrating on sensor networks and ad hoc networks. Wireless sensor networks consist of battery-powered nodes that are endowed with a multitude of sensing modalities. A wireless sensor network can provide in-situ, unattended, high-precision, and real-time observation over a vast area. Wireless ad hoc networks are characterized by the absence of infrastructure support. Nodes in an ad hoc network are able to organize themselves into a multi-hop network. An ad hoc network can operate in a stand-alone fashion or could possibly be connected to a larger network such as the Internet (also known as mesh networks). For these new wireless networks, a number of advanced physical layer techniques, e.g., ultra wideband (UWB), multiple-input and multiple-output (MIMO), and cognitive radio (CR), have been employed. These new physical layer technologies have the potential to improve network performance. However, they also introduce some unique design challenges. For example, CR is capable of reconfiguring RF (on the fly) and switching to newly-selected frequency bands. It is much more advanced than the current multi-channel multi-radio (MC-MR) technology. MC-MR remains a hardware-based radio technology: each radio can operate on only a single channel at a time, and the number of concurrent channels that can be used at a wireless node is limited by the number of radio interfaces. A CR, in contrast, can use multiple bands at the same time. In addition, an MC-MR based wireless network typically assumes there is a set of "common channels" available for all nodes in the network, whereas in CR networks each node may have a different set of frequency bands based on its particular location. These important differences between MC-MR and CR mean that the algorithmic design for a CR network is substantially more complex than that under MC-MR. Due to the unique characteristics of these new wireless networks, it is necessary to consider models and constraints at multiple layers (e.g., physical, link, and network) when we explore network performance limits. The formulations of these cross-layer problems are usually in very complex forms and are mathematically challenging. We aim to develop novel algorithmic design and optimization techniques that provide optimal or near-optimal solutions. The main contributions of this dissertation are summarized as follows.
1. Node lifetime and rate allocation. We study the sensor node lifetime problem by considering not only maximizing the time until the first node fails, but also maximizing the lifetimes of all the nodes in the network. For fairness, we maximize node lifetimes under the lexicographic max-min (LMM) criterion. Our contributions are two-fold. First, we develop a polynomial-time algorithm based on a parametric analysis (PA) technique, which has a much lower computational complexity than an existing state-of-the-art approach. We also present a polynomial-time algorithm to calculate the flow routing schedule such that the LMM-optimal node lifetime vector can be achieved. Second, we show that the same approach can be employed to address a different but related problem, called the LMM rate allocation problem. More importantly, we discover an elegant duality relationship between the LMM node lifetime problem and the LMM rate allocation problem. We show that it is sufficient to solve only one of the two problems and that important insights can be obtained from the duality results.
2. Base station placement. Base station location has a significant impact on sensor network lifetime. We aim to determine the best location for the base station so as to maximize the network lifetime. For a multi-hop sensor network, this problem is particularly challenging as data routing strategies also affect the network lifetime performance. We present an approximation algorithm that can guarantee (1 - ε)-optimal network lifetime performance with any desired error bound ε > 0. The key step is to divide the continuous search space into a finite number of subareas and represent each subarea with a "fictitious cost point" (FCP). We prove that the largest network lifetime achieved by one of these FCPs is (1 - ε)-optimal. This approximation algorithm offers a significant reduction in complexity when compared to a state-of-the-art algorithm, and represents the best known result for this problem.
3. Mobile base station. The benefits of using a mobile base station to prolong sensor network lifetime have been well recognized. However, due to the complexity of the problem (time-dependent network topology and traffic routing), theoretical performance limits and provably optimal algorithms remain difficult to develop. Our main result hinges upon a novel transformation of the joint base station movement and flow routing problem from the time domain to the space domain. Based on this transformation, we first show that if the base station is allowed to be present only on a set of pre-defined points, then we can find the optimal sojourn time for the base station on each of these points so that the overall network lifetime is maximized. Based on this finding, we show that when the location of the base station is unconstrained (i.e., it can move to any point in the two-dimensional plane), we can develop an approximation algorithm for the joint mobile base station and flow routing problem such that the network lifetime is guaranteed to be at least (1 - ε) of the maximum network lifetime, where ε can be made arbitrarily small. This is the first theoretical result with a performance guarantee for this problem.
4. Spectrum sharing in CR networks. Cognitive radio is a revolution in radio technology that promises unprecedented flexibility in radio communications and is viewed as an enabling technology for dynamic spectrum access. We consider a cross-layer design of scheduling and routing with the objective of minimizing the required network-wide radio spectrum usage to support a set of user sessions. Here, scheduling considers how to use a pool of unequal-size frequency bands for concurrent transmissions, and routing considers how to transmit data for each user session. We develop a near-optimal algorithm based on a sequential fixing (SF) technique, where the determination of scheduling variables is performed iteratively through a sequence of linear programs (LPs). Upon completing the fixing of these scheduling variables, the values of the other variables in the optimization problem can be obtained by solving an LP.
5. Power control in CR networks. We further consider the case of variable transmission power in CR networks. Here, our objective is to minimize the total required bandwidth footprint product (BFP) to support a set of user sessions. As a basis, we first develop an interference model for scheduling when power control is performed at each node. This model extends existing so-called protocol models for wireless networks in which transmission power is deterministic. As a result, this model can be used for a broad range of problems where power control is part of the optimization space. An efficient solution procedure based on the branch-and-bound framework and convex hull relaxations is proposed to provide (1 - ε)-optimal solutions. This is the first theoretical result on this important problem.
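The sequential fixing idea generalizes well beyond this setting. A minimal sketch, on a hypothetical knapsack-style toy model rather than the dissertation's spectrum-sharing formulation: solve the LP relaxation, permanently fix the relaxed binary variable closest to 0 or 1, and repeat until all binaries are fixed.

```python
# Hedged sketch of a sequential-fixing (SF) style heuristic on a tiny,
# hypothetical 0/1 model. A production version would also guard against
# infeasibility after a fixing step.
import numpy as np
from scipy.optimize import linprog

c = np.array([-8.0, -11.0, -6.0, -4.0])       # maximize value -> minimize -c
A_ub = np.array([[5.0, 7.0, 4.0, 3.0]])       # a single capacity constraint
b_ub = np.array([14.0])
bounds = [(0.0, 1.0)] * 4                     # relaxed binaries
fixed = {}                                    # index -> fixed 0/1 value

while len(fixed) < len(c):
    bnds = [(fixed[i], fixed[i]) if i in fixed else bounds[i]
            for i in range(len(c))]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bnds, method="highs")
    x = res.x
    # fix the not-yet-fixed variable whose LP value is closest to 0 or 1
    free = [i for i in range(len(c)) if i not in fixed]
    i_star = min(free, key=lambda i: min(x[i], 1.0 - x[i]))
    fixed[i_star] = int(round(x[i_star]))

value = -sum(c[i] * v for i, v in fixed.items())
print(fixed, value)
```

As in the dissertation's usage, the procedure is a heuristic: on this toy instance it returns a good but not necessarily optimal 0/1 solution.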
- Analysis of Decision Postponement Strategies for Aircraft Assignment under Uncertainty. Suwandechochai, Rawee (Virginia Tech, 2002-04-14). The ability to effectively match supply and demand can lead to significant revenue benefits in the airline industry. Airline supply management deals with assigning the right resources (i.e., aircraft and crew) to the right routes in the flight network. Due to certain crew regulations, operating characteristics, and constraints of the airline companies, these supply management decisions need to be made well in advance of departures, at a time when demand is highly uncertain. However, demand forecasts improve markedly over time, as more information on demand patterns is gathered. Thus, exploiting the flexibilities in the system that allow the partial postponement of supply decisions to a later time, when more accurate demand information is obtained, can significantly improve the airline's revenue. In this thesis, we propose and analyze the Demand Driven Swapping (DDS) approach, which aims at improving the airline's revenue by reducing supply-demand mismatches through dynamically swapping aircraft as departures approach. This research has been done in collaboration with our industrial partner, the United Airlines Research and Development Division. Due to the proximity to departures, the DDS problem is restricted by two main constraints: 1) the initial crew schedule needs to be kept intact (due to certain union contracts); and 2) airport services and operations need to be preserved to the greatest extent possible. As a result, only a limited number of simple swaps can be performed between aircraft types of the same family (i.e., crew-compatible aircraft types). However, the swaps can potentially be performed on a daily basis, given the initial fleet assignments. Clearly, the swapping criteria, frequency, and timing will strongly affect the revenue benefits of the DDS approach. When the swapping decisions are made several weeks prior to departures (i.e., 4-6 weeks before departure), they will not cause much disturbance to operations, but will be performed under highly uncertain demand information. On the other hand, swapping decisions that are delayed to a later time (i.e., 1-3 weeks before departure) will decrease the possibility of bad swaps, but will result in larger costs due to the higher disruptions to airport services and operations. Thus, our research objective is to provide guidelines and principles on how the flexible capacity should be managed in the system. For this purpose, we study the effectiveness of different swapping strategies, characterized in terms of their frequency and timing, for hedging against demand uncertainty. We first study stylized analytical models to gain insights into the critical parameters that affect these benefits. Simulation studies are then conducted to test the validity of our analytical findings, as well as to analyze more complex strategies and assess the dynamic performance of these strategies. The analytical results indicate that strategies that make the swapping decision early in time (in order to minimize disturbances to operations) perform very well on routes where the demand uncertainty is low and the expected demands on the legs are well-balanced. Otherwise, a swapping strategy that revises the swapping decision over time should be implemented. Our simulation results, based on real data obtained from United Airlines, confirm the analytical findings.
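The value of postponing the swap decision can be illustrated with a deliberately simple Monte Carlo experiment (all numbers hypothetical): two crew-compatible aircraft serve two legs, and we compare committing from the mean forecast against deciding after demand is observed, ignoring disruption costs.

```python
# Toy Monte Carlo comparison of early vs. late swap decisions. Capacities,
# demand distributions, and the revenue model are all made up.
import random

random.seed(1)
CAP = (120, 160)                   # two crew-compatible aircraft

def revenue(assign, d):            # assign: index of the leg given the big jet
    caps = (CAP[1], CAP[0]) if assign == 0 else (CAP[0], CAP[1])
    return min(d[0], caps[0]) + min(d[1], caps[1])

early_rev, late_rev, N = 0.0, 0.0, 10000
for _ in range(N):
    d = (random.gauss(130, 40), random.gauss(130, 40))
    # Early decision: made from the mean forecast only; both legs look
    # identical in expectation, so assignment 0 is as good as a coin flip.
    early_rev += revenue(0, d)
    # Late decision: made after observing demand, at no disruption cost here.
    late_rev += max(revenue(0, d), revenue(1, d))

print("early:", early_rev / N, "late:", late_rev / N)
```

The gap between the two averages grows with demand uncertainty and leg imbalance, which is consistent with the thesis's analytical finding about when early swapping suffices.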
- Analysis of the Benefits of Resource Flexibility, Considering Different Flexibility Structures. Hong, Seong-Jong (Virginia Tech, 2004-04-30). We study the benefits of resource flexibility, considering two different flexibility structures. First, we want to understand the impact of the firm's pricing strategy on its resource investment decision, considering a partially flexible resource. Second, we study the benefits of a flexible resource strategic approach, considering a resource flexibility structure that has not been previously studied in the literature. First, we study the capacity investment decision faced by a firm that offers two products/services and that is a price-setter for both products/services. The products offered by the firm are of varying levels (complexities), such that the resources that can be used to produce the higher-level product can also be used to produce the lower-level one. Although the firm needs to make its capacity investment decision under high demand uncertainty, it can utilize this limited (downward) resource flexibility, in addition to pricing, to more effectively match its supply with demand. Sample applications include a service company, whose technicians are of different capabilities, such that a higher-level technician can perform all tasks performed by a lower-level technician; and a firm that owns a main plant, satisfying both end-product and intermediate-product demand, and a subsidiary, satisfying intermediate-product demand only. We formulate this decision problem as a two-stage stochastic programming problem with recourse, and characterize the structural properties of the firm's optimal resource investment strategy when resource flexibility and pricing flexibility are considered in the investment decision. We show that the firm's optimal resource investment strategy follows a threshold policy. This structure allows us to understand the impact of coordinated decision-making, when the resource flexibility is taken into account in the investment decision, on the firm's optimal investment strategy, and to establish the conditions under which the firm invests in the flexible resource. We also study the impact of demand correlation on the firm's optimal resource investment strategy, and show that it may be optimal for the firm to invest in both flexible and dedicated resources when product demand patterns are perfectly positively correlated. Our results offer managerial principles and insights on the firm's optimal resource investment strategy, and extend the newsvendor problem with pricing by allowing for multiple resources (suppliers), multiple products, and resource pooling. Second, we study the benefits of a delayed decision-making strategy under demand uncertainty, considering a system that satisfies two demand streams with two capacitated and flexible resources. Resource flexibility allows the firm to delay its resource allocation decision to a time when partial information on demands is obtained and demand uncertainty is reduced. We characterize the structure of the firm's optimal delayed resource allocation strategy. This characterization allows us to study how the revenue benefits of the delayed resource allocation strategy depend on demand and capacity parameters, and on the length of the selling season. Our study shows that the revenue benefits of this strategy can be significant, especially when the demand rates of the different types are close, while resource capacities are much different. Based on our analysis, we provide guidelines on the utilization of such strategies. Finally, we incorporate uncertainty in the demand parameters into our models and study the effectiveness of several delayed capacity allocation mechanisms that utilize the resource flexibility. In particular, we consider that demand forecasts are uncertain at the start of the selling season and are updated using a Bayesian framework as early demand figures are observed. We propose several heuristic capacity allocation policies that are easy to implement, as well as a heuristic procedure that relies on a stochastic dynamic programming formulation, and perform a numerical study. Our study determines the conditions under which each policy is effective.
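The Bayesian updating step can be sketched with a standard conjugate pair, assuming (purely for illustration) Poisson daily demand with a Gamma prior on its rate; observed early-season figures tighten the posterior that a delayed allocation rule would then use.

```python
# Gamma-Poisson conjugate update as a stand-in for the thesis's Bayesian
# forecast revision. Prior parameters and observations are hypothetical.
alpha, beta = 20.0, 0.5          # Gamma prior: mean alpha/beta = 40 per day

observed = [31, 28, 35, 30]      # early-season daily demand figures
for d in observed:
    alpha += d                   # conjugate update: alpha += demand
    beta += 1.0                  #                   beta  += one period

post_mean = alpha / beta
post_var = alpha / beta ** 2
print("posterior mean rate:", post_mean, "posterior variance:", post_var)

# An illustrative delayed-allocation rule: split capacity across two demand
# streams in proportion to posterior mean rates (stream 2 still at its prior
# mean of 40 here).
cap = 300
share = post_mean / (post_mean + 40.0)
print("allocate", round(cap * share), "units to stream 1")
```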
- The Approach-dependent, Time-dependent, Label-constrained Shortest Path Problem and Enhancements for the CART Algorithm with Application to Transportation Systems. Jeenanunta, Chawalit (Virginia Tech, 2004-05-10). In this dissertation, we consider two important problems pertaining to the analysis of transportation systems. The first of these is an approach-dependent, time-dependent, label-constrained shortest path problem that arises in the context of the Route Planner Module of the Transportation Analysis Simulation System (TRANSIMS), which has been developed by the Los Alamos National Laboratory for the Federal Highway Administration. This is a variant of the shortest path problem defined on a transportation network composed of a set of nodes and a set of directed arcs, such that each arc has an associated label designating a mode of transportation, and an associated travel time function that depends on the time of arrival at the tail node, as well as on the node via which this node was approached. The last feature is a new concept injected into the time-dependent, label-constrained shortest path problem, and is used to model turn penalties in transportation networks. The time spent at an intersection before entering the next link depends on whether we travel straight through the intersection, turn right, or turn left. Accordingly, we model this situation by incorporating within each link's travel time function a dependence on the link via which its tail node was approached. We propose two effective algorithms to solve this problem by adapting two efficient existing algorithms to handle time dependency and label constraints: the Partitioned Shortest Path (PSP) algorithm and the Heap-Dijkstra (HP-Dijkstra) algorithm, and we present related theoretical complexity results. In addition, we explore various heuristic methods to curtail the search: an Augmented Ellipsoidal Region Technique (A-ERT) and a Distance-Based A-ERT, along with some variants, restrict the search for an optimal path between a given origin and destination to more promising subsets of the network. This helps speed up computation without sacrificing optimality. We also incorporate an approach-dependent delay estimation function and, in concert with a search tree level-based technique, derive a total estimated travel time and use this as a key to prioritize node selections or to sort elements in the heap. As soon as we reach the destination node, while its key is within some p% of the minimum key value of the heap, we terminate the search. We name the versions of PSP and HP-Dijkstra that employ this method the Early Terminated PSP (ET-PSP) and Early Terminated Heap-Dijkstra (ETHP-Dijkstra) algorithms. All of these procedures are compared with the original Route Planner Module within TRANSIMS, which is implemented in the Linux operating system, using C++ along with the g++ GNU compiler. Extensive computational testing has been conducted using available data from the Portland, Oregon, and Blacksburg, Virginia, transportation networks to investigate the efficacy of the developed procedures. In particular, we have tested twenty-five different combinations of network curtailment and algorithmic strategies on three test networks: the Blacksburg-light, the Blacksburg-full, and the BigNet network. The results indicate that the Heap-Dijkstra algorithm implementations are much faster than the PSP algorithmic approaches for solving the underlying problem exactly. Furthermore, among the curtailment schemes, ETHP-Dijkstra with p = 5% yields the best overall results. This method produces solutions within 0.37-1.91% of optimality, while decreasing CPU effort by 56.68% on average, as compared with applying the best available exact algorithm. The second part of this dissertation is concerned with the Classification and Regression Tree (CART) algorithm, and its application to the Activity Generation Module of TRANSIMS. The CART algorithm has been widely used in various contexts by transportation engineers and planners to correlate a set of independent household demographic variables with certain dependent activity or travel time variables. However, the algorithm lacks an automated mechanism for deriving classification trees based on optimizing specified objective functions and handling desired side-constraints that govern the structure of the tree and the statistical and demographic nature of its leaf nodes. Using a novel set partitioning formulation, we propose new tree development and, more importantly, optimal pruning strategies to accommodate the consideration of such objective functions and side-constraints, and we establish the theoretical validity of our approach. This general enhancement of the CART algorithm is then applied to the Activity Generator module of TRANSIMS. Related computational results are presented using real data pertaining to the Portland, Oregon, and Blacksburg, Virginia, transportation networks to demonstrate the flexibility and effectiveness of the proposed approach in classifying data, as well as to examine its numerical performance. The results indicate that a variety of objective functions and constraints can be readily accommodated to efficiently control the structural information that is captured by the developed classification tree, as desired by the planner or analyst, dependent on the scope of the application at hand.
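The core state-space trick, labeling search states by (node, approach link) so that turn penalties and approach-dependent delays are handled exactly, can be sketched compactly. The network, travel-time functions, and penalties below are hypothetical, and the sketch assumes FIFO travel times so that a Dijkstra-style search remains valid.

```python
# Dijkstra-style search on (node, approach) states with time-dependent,
# approach-dependent arc travel times. Toy data only.
import heapq

def tt(base, turn_penalty_from):
    # travel time depends on arrival time t and on the approach node `via`
    def f(t, via):
        penalty = 2.0 if via in turn_penalty_from else 0.0
        return base + 0.1 * t + penalty        # mildly time-dependent
    return f

arcs = {
    ("A", "B"): tt(5.0, set()),
    ("A", "C"): tt(4.0, set()),
    ("B", "D"): tt(3.0, {"A"}),   # approaching B via A incurs a turn penalty
    ("C", "D"): tt(6.0, set()),
}
succ = {}
for (u, v) in arcs:
    succ.setdefault(u, []).append(v)

def shortest(origin, dest, t0=0.0):
    best = {(origin, None): t0}                # label states by (node, approach)
    heap = [(t0, origin, None)]
    while heap:
        t, u, via = heapq.heappop(heap)
        if u == dest:
            return t
        if t > best.get((u, via), float("inf")):
            continue                           # stale heap entry
        for v in succ.get(u, []):
            tv = t + arcs[(u, v)](t, via)
            if tv < best.get((v, u), float("inf")):
                best[(v, u)] = tv
                heapq.heappush(heap, (tv, v, u))
    return float("inf")

print(shortest("A", "D"))
```

Early termination in the ET variants would simply relax the `u == dest` test to fire once the destination's key is within p% of the heap's minimum key.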
- Assessing the Finite-Time Performance of Local Search Algorithms. Henderson, Darrall (Virginia Tech, 2001-05-13). Identifying a globally optimal solution for an intractable discrete optimization problem is often cost-prohibitive. Therefore, solutions that are within a predetermined threshold are often acceptable in practice. This dissertation introduces the concept of B-acceptable solutions, where B is a predetermined threshold for the objective function value. It is difficult to assess a priori the effectiveness of local search algorithms, which makes the process of choosing parameters to improve their performance difficult. This dissertation introduces the B-acceptable solution probability, defined in terms of B-acceptable solutions, as a finite-time performance measure for local search algorithms. The B-acceptable solution probability reflects how effectively an algorithm has performed to date and how effectively it can be expected to perform in the future. The B-acceptable solution probability is also used to obtain necessary asymptotic convergence (with probability one) conditions. Upper and lower bounds for the B-acceptable solution probability are presented. These expressions assume particularly simple forms when applied to specific local search strategies such as Monte Carlo search and threshold accepting. Moreover, these expressions provide guidelines on how to manage the execution of local search algorithm runs. Computational experiments are reported to estimate the probability of reaching a B-acceptable solution for a fixed number of iterations. Logistic regression is applied as a tool to estimate the probability of reaching a B-acceptable solution for values of B close to the objective function value of a globally optimal solution, as well as to estimate this objective function value. Computational experiments are reported with logistic regression for pure local search, simulated annealing, and threshold accepting applied to instances of the TSP with known optimal solutions.
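The finite-time measure itself is easy to estimate empirically. A minimal sketch, on a made-up 0/1 toy problem: run many independent local-search trials with a fixed iteration budget and record how often the incumbent objective reaches the threshold B.

```python
# Empirical estimate of the B-acceptable solution probability for a fixed
# iteration budget. The toy problem and parameters are hypothetical.
import random

random.seed(7)

def local_search(iters):
    # minimize |sum of selected weights - target| over 0/1 vectors
    w = [3, 7, 1, 9, 4, 6]
    target = 15
    x = [random.randint(0, 1) for _ in w]
    obj = abs(sum(wi * xi for wi, xi in zip(w, x)) - target)
    for _ in range(iters):
        i = random.randrange(len(x))
        x[i] ^= 1                                  # flip one component
        new = abs(sum(wi * xi for wi, xi in zip(w, x)) - target)
        if new <= obj:
            obj = new                              # accept improving move
        else:
            x[i] ^= 1                              # reject uphill move
    return obj

B, trials, iters = 1, 2000, 30
hits = sum(local_search(iters) <= B for _ in range(trials))
print("estimated P(B-acceptable within", iters, "iterations) =", hits / trials)
```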
- Bi-criteria Scheduling Problems on Parallel Machines. Prakash, Divya (Virginia Tech, 1995-07-20). Mathematical programming has not been used extensively for the solution of scheduling problems. Moreover, the study of bicriteria problems on single and parallel machines is an open field for research. This thesis is aimed at developing algorithms to solve bicriteria problems more efficiently, in a reasonable amount of time, and with little compromise in the optimality of the solutions obtained. Two classes of problems are considered. The first class consists of scheduling unit-duration tasks on parallel machines. Various combinations of primary and secondary criteria are considered, and optimal-seeking algorithms of polynomial time complexity are developed. The second class of problems assumes tasks with general processing times. An algorithm is developed for the primary criterion of total tardiness and the secondary criterion of total flow time. This algorithm is based on the solution of the underlying mathematical program and makes use of dominance relationships among the jobs and the fixing of variables. Experimental results are presented regarding its performance.
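One standard template for such primary/secondary structures is lexicographic optimization: optimize the primary criterion, then optimize the secondary subject to not degrading the primary. A generic sketch on hypothetical LP data (not the thesis's scheduling model):

```python
# Lexicographic (primary-then-secondary) optimization via two LP solves.
import numpy as np
from scipy.optimize import linprog

c1 = np.array([2.0, 1.0, 3.0])            # primary criterion
c2 = np.array([1.0, 4.0, 1.0])            # secondary criterion
A_ub = np.array([[-1.0, -1.0, -1.0]])     # cover at least 10 units of work
b_ub = np.array([-10.0])
bounds = [(0.0, 8.0)] * 3

res1 = linprog(c1, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
z1 = res1.fun                             # best achievable primary value

# Second stage: minimize c2 subject to c1 @ x <= z1 (+ tiny tolerance).
A2 = np.vstack([A_ub, c1])
b2 = np.append(b_ub, z1 + 1e-9)
res2 = linprog(c2, A_ub=A2, b_ub=b2, bounds=bounds, method="highs")
print("primary:", c1 @ res2.x, "secondary:", res2.fun)
```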
- Cell design in a cellular system using guard channels, call queueing and channel borrowing. Jain, Nikhil (Virginia Tech, 1993-12-05). This dissertation develops an analytic framework to undertake cell design in a cellular system. The cell is modeled in a broader sense than ever before. In our analytical model, we incorporate the use of guard channels, queueing of new calls, and hybrid channel allocation. A numerically stable and efficient solution to a queueing system with two arrival streams having reserved and borrowable servers is developed. This queueing system is used to model the cell behavior. The model provides valuable insights into the behavior of the cell, and this in turn has been used to devise an efficient stochastic optimization algorithm for determining the minimum number of channels required by the cell. Our techniques are stable, easy to implement for practical systems, and produce optimized solutions quickly. This is particularly useful because we expect that future designs of cellular systems may execute such algorithms on cell-site processors.
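For intuition, the classical guard-channel cell model (without the dissertation's call-queueing and borrowing features) reduces to a one-dimensional birth-death chain whose stationary distribution yields the blocking probabilities directly; the sketch below uses hypothetical traffic rates.

```python
# Guard-channel model: C channels, G reserved for handoffs. New calls are
# admitted only while more than G channels remain free; handoffs are admitted
# while any channel is free. Rates are made up.
C, G = 10, 2
lam_new, lam_ho, mu = 4.0, 1.0, 1.0        # new-call, handoff, service rates

def arrival(n):                            # state-dependent arrival rate
    return (lam_new + lam_ho) if n < C - G else lam_ho

# birth-death balance: p[n+1] * (n+1) * mu = p[n] * arrival(n)
p = [1.0]
for n in range(C):
    p.append(p[-1] * arrival(n) / ((n + 1) * mu))
total = sum(p)
p = [x / total for x in p]

p_block_new = sum(p[n] for n in range(C - G, C + 1))  # fewer than G+1 free
p_block_ho = p[C]                                     # all channels busy
print("P(block new) =", p_block_new, "P(drop handoff) =", p_block_ho)
```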
- Characterizing the Dynamics of Vulnerability for Roadway Infrastructure Systems. Dehghani Sanij, Mohammad Saied (Virginia Tech, 2013-12-30). Critical infrastructure systems, such as transportation, energy, water, and communication, are the backbones of sustainable economic and social development. The tragedies and catastrophic events of the past few years have motivated researchers to study the vulnerability of infrastructure systems to disastrous events. A number of existing studies address roadway networks, where researchers have characterized the robustness and vulnerability of roadways to earthquakes, floods, and targeted attacks. However, extreme events with infrequent return periods are not very likely to occur during a 50-60 year analysis period of roadways, and many roadways are located in areas that are not exposed to floods or earthquakes at all. On the other hand, endogenous roadway network characteristics, such as condition and degradation over time, not only increase the vulnerability of roadways to disastrous events, but also make the roadway network vulnerable to disruptions caused by maintenance and repair activities on the roadway system. Nevertheless, the impacts of these endogenous network characteristics on roadway vulnerability have not been explicitly addressed in the existing studies. This dissertation introduces the concept of condition-based vulnerability assessment (CBVA) to capture the effects of endogenous roadway characteristics, such as condition and condition uncertainties, roadway network deterioration over time, topological properties of roadways, and travel rate and travel pattern, on the dynamics of roadway network vulnerability. First, a methodological framework is developed and the method is applied to an illustrative roadway system. The results show that the vulnerability of the roadway system is more affected by the average condition of the roadway network than by the condition of individual roads in the system. Moreover, the findings show that small uncertainties associated with the condition of individual roads can significantly increase the variance of the predicted vulnerability. This initial methodological framework is then enhanced to account for physical degradation of the network over time and network equilibrium, and is applied to a real highway system. For the network studied, network degradation increases roadway system vulnerability in a nonlinear manner. The results also suggest that the network vulnerability pattern is not very sensitive to travel pattern and link topological properties when the average network disruption probability (representing average network condition) is less than about 0.5. In other words, at low values of average disruption probability, it does not matter which link has which disruption probability level or how travelers move across the network. However, with further network degradation, as the average network disruption probability increases, the dynamics of network vulnerability depend on the travel pattern on the network as well as on the link topological properties.
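A condition-based vulnerability calculation can be proxied, in miniature, by sampling link failures from condition-dependent disruption probabilities and measuring how often an origin-destination pair disconnects; the four-node network below is entirely hypothetical.

```python
# Monte Carlo proxy for condition-based vulnerability: links fail
# independently with condition-dependent probabilities; vulnerability is the
# estimated O-D disconnection probability.
import random

random.seed(3)
links = {("O", "a"): 0.2, ("O", "b"): 0.3,
         ("a", "D"): 0.25, ("b", "D"): 0.2, ("a", "b"): 0.4}

def connected(up, src="O", dst="D"):
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for (i, j) in up:                      # undirected reachability
            for x, y in ((i, j), (j, i)):
                if x == u and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return dst in seen

N, cut = 20000, 0
for _ in range(N):
    up = [e for e, q in links.items() if random.random() > q]
    cut += not connected(up)
print("estimated O-D disconnection probability:", cut / N)
```

Raising all link disruption probabilities together in such an experiment reproduces, qualitatively, the nonlinear growth of vulnerability with average network condition described above.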
- A Complementary Column Generation Approach for the Graph Equipartition Problem. Al-Ykoob, Salem M.; Sherali, Hanif D. (2020-03-23). This paper investigates the problem of partitioning a complete weighted graph into complete subgraphs, each having the same number of vertices, with the objective of minimizing the sum of edge weights of the resulting subgraphs. This NP-complete problem arises in many applications, such as assignment and scheduling-related group partitioning problems and micro-aggregation techniques. In this paper, we present a mathematical programming model and propose a complementary column generation approach to solve the resulting model. A dual-based lower bounding feature is also introduced to curtail the notorious tailing-off effects often induced when using column generation methods. Computational results are presented for a wide range of test problems.
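The column-generation mechanics can be sketched at toy scale: a restricted master LP covers all vertices with equal-size groups (columns), and a pricing step hunts for a negative reduced-cost group, here by brute force over all size-k subsets. The instance is random and tiny; the paper's model, pricing scheme, and dual-based bound are more elaborate.

```python
# Toy column generation for equal-size group partitioning.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

V, k = 6, 3                                   # 6 vertices, groups of size 3
rng = np.random.default_rng(0)
W = rng.integers(1, 9, size=(V, V))
W = np.triu(W, 1); W = W + W.T                # symmetric edge weights

def cost(S):
    return sum(W[i][j] for i, j in combinations(sorted(S), 2))

cols = [frozenset(range(0, k)), frozenset(range(k, V))]   # initial partition
while True:
    A = np.array([[1.0 if v in S else 0.0 for S in cols] for v in range(V)])
    c = np.array([float(cost(S)) for S in cols])
    # LP relaxation of set partitioning: each vertex covered exactly once
    res = linprog(c, A_eq=A, b_eq=np.ones(V),
                  bounds=[(0, None)] * len(cols), method="highs")
    duals = res.eqlin.marginals               # dual prices for vertices
    # pricing: most negative reduced-cost column among all size-k subsets
    best = min(combinations(range(V), k),
               key=lambda S: cost(S) - sum(duals[v] for v in S))
    if cost(best) - sum(duals[v] for v in best) >= -1e-9:
        break                                 # no improving column: LP optimal
    cols.append(frozenset(best))

print("LP bound:", res.fun, "columns generated:", len(cols))
```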
- A Convergence Analysis of Generalized Hill Climbing Algorithms. Sullivan, Kelly Ann (Virginia Tech, 1999-04-12). Generalized hill climbing (GHC) algorithms provide a unifying framework for describing several local search heuristics for discrete optimization problems, including simulated annealing and tabu search. A necessary and a sufficient convergence condition for GHC algorithms are presented. The convergence conditions presented in this dissertation are based upon a new iteration classification scheme for GHC algorithms. The convergence theory for particular formulations of GHC algorithms is presented and the implications are discussed. Examples are provided to illustrate the relationship between the new convergence conditions and previously existing convergence conditions in the literature. The contributions of the necessary and the sufficient convergence conditions for GHC algorithms are discussed, and future research endeavors are suggested.
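A minimal sketch of the GHC template (hypothetical objective and parameters): accept any move whose deterioration is at most a random threshold R_k. Drawing R_k from an exponential distribution with mean t_k reproduces simulated annealing's acceptance probabilities, while R_k = 0 gives pure local search.

```python
# Generalized hill climbing on a toy 1-D objective. The acceptance rule
# f(y) - f(x) <= R_k, with R_k ~ Exp(mean t_k), matches simulated annealing:
# P(accept a deterioration d) = exp(-d / t_k).
import math
import random

random.seed(11)

def f(x):                       # multimodal toy objective
    return 0.1 * x * x + 3.0 * math.sin(x)

x = 6.0
best = (f(x), x)
for k in range(1, 3001):
    y = x + random.uniform(-1.0, 1.0)       # neighborhood move
    t_k = 2.0 / math.log(k + 1)             # cooling-style parameter
    R_k = random.expovariate(1.0 / t_k)     # random hill-climbing threshold
    if f(y) - f(x) <= R_k:                  # GHC acceptance rule
        x = y
        if f(x) < best[0]:
            best = (f(x), x)

print("best value:", best[0], "at x =", best[1])
```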
- Cooperation in Wireless Networks. Sharma, Sushant (Virginia Tech, 2010-12-16). Spatial diversity, in the form of employing multiple antennas (i.e., MIMO), has proved to be very effective in increasing network capacity and reliability. However, equipping a wireless node with multiple antennas may not be practical, as the footprint of multiple antennas may not fit on a wireless node (particularly on handheld wireless devices). In order to achieve spatial diversity without requiring multiple antennas on the same node, so-called cooperative communications (CC) has been introduced. Under CC, each node is equipped with only a single antenna, and spatial diversity is achieved by exploiting the antennas on other nodes in the network through cooperative relaying. The goal of this dissertation is to maximize throughput at the network level through CC at the physical layer. A number of problems are explored in this investigation. The main contributions of this dissertation can be summarized as follows.
1. Optimal Relay Assignment. We first consider a simple CC model where each source-destination pair may employ only a single relay. For this three-node model, the choice of a relay node (among a set of available relay nodes) for a given session is critical to the overall network performance. We study the relay node assignment problem in a cooperative ad hoc network environment, where multiple source-destination pairs compete for the same pool of relay nodes in the network. Our objective is to assign the available relay nodes to different source-destination pairs so as to maximize the minimum data rate among all pairs. We present an optimal polynomial-time algorithm, called ORA, that solves this problem. A novel idea in this algorithm is a "linear marking" mechanism, which maintains linear complexity at each iteration. We offer a formal proof of optimality for ORA and use numerical results to demonstrate its capability.
2. Incorporating Network Coding. It has been shown that network coding (NC) can reduce the time-slot overhead when multiple sessions share the same relay node in CC. Such an approach is called network-coded CC (or NC-CC). Most of the existing works have mainly focused on the benefits of this approach. The potential adverse effects under NC-CC remain unknown. We explore this important problem by introducing the concept of network coding noise (NC noise). We show that due to NC noise, NC may not always be beneficial to CC. We substantiate this important finding in two important scenarios: analog network coding (ANC) in amplify-and-forward (AF) CC, and digital network coding (DNC) in decode-and-forward (DF) CC. We analyze the origin of NC noise via a careful study of signal aggregation at a relay node and signal extraction at a destination node. We derive a closed-form expression for the NC noise at each destination node and show that the existence of NC noise could diminish the advantage of NC in CC. Our results shed new light on how to use NC in CC effectively.
3. Session Grouping and Relay Node Selection. When there are multiple sessions in the network, it may be necessary to combine sessions into different groups, and then have each group select the most beneficial relay node for NC-CC. We study this joint grouping and relay node selection problem for NC-CC. By studying matching problems in hypergraphs, we show that this problem is NP-hard. We then propose a distributed and online algorithm to solve this problem. The key idea in our algorithm is to have each neighboring relay node of a newly joined session determine and offer the best group for this session from among the groups that it is currently serving, and then to have the source node of this newly joined session select the best group among all received offers. We show that our distributed algorithm has polynomial complexity. Using extensive numerical results, we show that our distributed algorithm is near-optimal and adapts well to online network dynamics.
4. Grouping and Matching for Multi-Relay Cooperation. Existing models of NC-CC consider only a single relay node for each session group. We investigate how NC-CC behaves when multiple relay nodes are employed. For a given session, we develop closed-form formulas for the mutual information and achievable rate under multi-relay NC-CC. In multi-relay NC-CC, the achievable rate of a session depends on the other sessions in its group as well as on the set of relay nodes used for NC-CC. Therefore, we study NC-CC via the joint optimization of grouping and matching of session and relay groups in an ad hoc network. Although we show that the joint problem is NP-hard, we develop an efficient polynomial-time algorithm for grouping and matching (called G²M). G²M first builds beneficial relay groups for individual sessions. This is followed by multiple iterations during which sessions are combined with other sessions to form larger and better session groups (while the corresponding relay groups are merged and updated accordingly). Using extensive numerical results, we show the efficiency and near-optimality of our G²M algorithm.
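Two standard ingredients of the single-relay setting can be sketched together: the classical amplify-and-forward achievable rate for a three-node model, and a brute-force relay assignment maximizing the minimum session rate. The brute force is only a tiny stand-in for the polynomial-time ORA algorithm, and all SNR values are hypothetical.

```python
# AF relay rate (classical three-node formula) plus exhaustive max-min
# relay assignment on a 3x3 toy instance.
import math
from itertools import permutations

def af_rate(snr_sd, snr_sr, snr_rd, bw=1.0):
    # C = (W/2) log2(1 + SNR_sd + SNR_sr * SNR_rd / (SNR_sr + SNR_rd + 1))
    gain = snr_sr * snr_rd / (snr_sr + snr_rd + 1.0)
    return 0.5 * bw * math.log2(1.0 + snr_sd + gain)

sessions = ["s1", "s2", "s3"]
relays = ["r1", "r2", "r3"]
snr_sd = {"s1": 2.0, "s2": 1.0, "s3": 3.0}
snr_sr = {("s1", "r1"): 9, ("s1", "r2"): 4, ("s1", "r3"): 6,
          ("s2", "r1"): 5, ("s2", "r2"): 8, ("s2", "r3"): 3,
          ("s3", "r1"): 2, ("s3", "r2"): 7, ("s3", "r3"): 9}
snr_rd = {("s1", "r1"): 6, ("s1", "r2"): 5, ("s1", "r3"): 2,
          ("s2", "r1"): 7, ("s2", "r2"): 6, ("s2", "r3"): 4,
          ("s3", "r1"): 3, ("s3", "r2"): 8, ("s3", "r3"): 5}

best = max(permutations(relays),
           key=lambda p: min(af_rate(snr_sd[s], snr_sr[s, r], snr_rd[s, r])
                             for s, r in zip(sessions, p)))
print(dict(zip(sessions, best)))
```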
- A Demand Driven Airline and Airport Evolution Study. Seshadri, Anand (Virginia Tech, 2009-05-11). The events of September 11, 2001, followed by the oil price hike and the economic crisis of 2008, have led to a drop in the demand for air travel. Airlines have attempted to return to profitability by cutting service on certain unattractive routes and at certain airports. Simultaneously, delays and excess demand at a few major hubs have led to airlines introducing service at reliever airports. This dissertation attempts to capture the changes in the airline network by utilizing a supply-demand framework.
- A Demand Driven Re-fleeting Approach for Aircraft Assignment Under Uncertainty. Zhu, Xiaomei (Virginia Tech, 2001-07-23). The current airline practice is to assign aircraft capacity to scheduled flights well in advance of departure. At such an early stage in this process, the high uncertainty of demand poses a major impediment for airlines to best match airplane capacities with the final demand. However, the accuracy of the demand forecast improves markedly over time, and revisions to the initial fleet assignment become naturally pertinent when the observed demand differs considerably from the assigned aircraft capacity. The Demand Driven Re-fleeting (DDR) approach proposed in this thesis offers a dynamic re-assignment of aircraft capacity to the flight network, as and when improved demand forecasts become available, so as to maximize the total revenue. Because of the need to preserve the initial crew schedule, this re-assignment approach is limited to a single family of aircraft and to the flights assigned to this particular family. This restriction significantly reduces the problem size. As a result, it becomes computationally tractable to include path-level demand information in the DDR model, although the problem size can then become very large because of the numerous combinations for composing paths from legs. As an extension, models considering path-class level differences, day-of-week demand variations, and re-capture effects are also presented. The DDR model for a single family with path-level demand considerations is formulated as a mixed-integer programming problem. The model's polyhedral structure is studied to explore ways of tightening its representation and deriving certain classes of valid inequalities. Various approaches for implementing such reformulation techniques are investigated and tested. The best of these procedures for solving large-scale challenging instances of the problem turns out to be an integrated approach that uses certain selected model augmentations and valid inequalities generated via a suitable separation routine and a partial convex hull construction process. Using this strategy in concert with properly selected CPLEX options reduces the CPU time by an average factor of 7.48 over an initial model for a test-bed of problems, each having 200 flights in total. Prompted by this integrated heuristic approach, a procedure for finding solutions within a prescribed limit of optimality is suggested. To demonstrate the effectiveness of the developed methodologies, we also solved two large-scale practical-sized networks that respectively involve 800 and 1060 flights, and 18196 and 33105 paths in total, with 300 and 396 flights belonging to the designated family. These problems were typically solved within 6 hours on a SUN Ultra 1 Workstation having 260 MB of RAM and a clock speed of 167 MHz, with one exception that required 14 hours of CPU time. This level of computational effort is acceptable considering that such models are solved at a planning stage in the decision process.
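The re-fleeting decision itself can be illustrated at toy scale (all data hypothetical): re-assign aircraft variants within one crew-compatible family to flights so that expected captured revenue is maximized subject to fleet counts. Brute force suffices here; the DDR model is a large mixed-integer program.

```python
# Toy within-family re-fleeting by exhaustive enumeration.
from itertools import product

flights = ["f1", "f2", "f3", "f4"]
exp_demand = {"f1": 95, "f2": 150, "f3": 70, "f4": 120}
fare = {"f1": 1.0, "f2": 1.2, "f3": 0.9, "f4": 1.1}
cap = {"S": 100, "L": 140}          # small/large variants in the family
avail = {"S": 2, "L": 2}            # aircraft available of each variant

def revenue(assign):
    return sum(min(exp_demand[f], cap[t]) * fare[f]
               for f, t in zip(flights, assign))

best = None
for assign in product("SL", repeat=len(flights)):
    if any(assign.count(t) > avail[t] for t in avail):
        continue                     # respect fleet availability
    r = revenue(assign)
    if best is None or r > best[0]:
        best = (r, dict(zip(flights, assign)))
print(best)
```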
- Demand Management in Evacuation: Models, Algorithms, and Applications. Bish, Douglas R. (Virginia Tech, 2006-07-31). Evacuation planning is an important disaster management tool. A large-scale evacuation of a region by automobile is a difficult task, especially as demand is often greater than supply. This is made more difficult because the imbalance of supply and demand actually reduces supply due to congestion. Currently, most of the emphasis in evacuation planning is on supply management. The purpose of this dissertation is to introduce and study sophisticated demand management tools, specifically, the staging and routing of evacuees. These tools can be used to produce evacuation strategies that reduce or eliminate congestion. A strategic planning model is introduced that accounts for evacuation dynamics and the non-linearities in travel times associated with congestion, yet is tractable and can be applied to large-scale networks. Objective functions of potential interest in evacuation planning are introduced and studied in the context of this model. Insights into the use of staging and routing in evacuation management are delineated, and solution techniques are developed. Two different strategic approaches are studied in the context of this model. The first strategic approach is to control the evacuation at a disaggregate level, where customized staging and routing plans are produced for each individual or family unit. The second strategic approach is to control the evacuation at a more aggregate level, where evacuation plans are developed for larger groups of evacuees, based on pre-defined geographic areas. In both approaches, shelter requirements and preferences can also be considered. Computational experience using these two strategic approaches, and their respective solution techniques, is provided using a real network pertaining to Virginia Beach, Virginia, in order to demonstrate the efficacy of the proposed methodologies.
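The staging idea, spreading departures over time so that flow never exceeds the throughput at which congestion sets in, can be sketched with a simple greedy release schedule on hypothetical numbers; the dissertation's model optimizes this jointly with routing.

```python
# Greedy staging sketch: release evacuees per time slot up to an uncongested
# capacity, largest remaining zone first. All numbers are made up.
zone_demand = {"Z1": 900, "Z2": 600, "Z3": 450}   # vehicles per zone
cap_per_slot = 500                                 # uncongested throughput

schedule = []                   # per slot: {zone: vehicles released}
remaining = dict(zone_demand)
while any(v > 0 for v in remaining.values()):
    slot, room = {}, cap_per_slot
    for z in sorted(remaining, key=remaining.get, reverse=True):
        send = min(remaining[z], room)
        if send > 0:
            slot[z] = send
            remaining[z] -= send
            room -= send
    schedule.append(slot)

for t, slot in enumerate(schedule):
    print("slot", t, slot)
```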
- A Design Methodology for Physical Design for Testability. Almajdoub, Salahuddin A. (Virginia Tech, 1996-07-01). Physical design for testability (PDFT) is a strategy to design circuits in a way that avoids or reduces realistic physical faults. The goal of this work is to define and establish a specific methodology for PDFT. The proposed design methodology includes techniques to reduce potential bridging faults in complementary metal-oxide-semiconductor (CMOS) circuits. To compare faults, the design process utilizes a new parameter called the fault index. The fault index for a particular fault is the probability of occurrence of the fault divided by the testability of the fault. Faults with the highest fault indices are considered the worst faults and are targeted by the PDFT design process, which aims to eliminate them or reduce their probability of occurrence. An implementation of the PDFT design process is constructed using several new tools in addition to other "off-the-shelf" tools. The first tool developed in this work is a testability measure tool for bridging faults. Two other tools are developed to eliminate or reduce the probability of occurrence of bridging faults with high fault indices. The row enhancer targets faults inside the logic elements of the circuit, while the channel enhancer targets faults inside the routing part of the circuit. To demonstrate the capabilities and test the effectiveness of the PDFT design process, this work conducts an experiment which includes designing three CMOS circuits from the ISCAS 1985 benchmark circuits. Several layouts are generated for every circuit. Every layout, except the first one, utilizes information from the previous layout to minimize the probability of occurrence of faults with high fault indices. Experimental results show that the PDFT design process successfully achieves two goals of PDFT: providing layouts with fewer faults, and minimizing the probability of occurrence of hard-to-test faults. Improvement in the total fault index was about 40 percent in some cases, while improvement in total critical area was about 30 percent in some cases.
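The fault-index ranking is simple to state in code: for each candidate bridging fault, the index is its probability of occurrence divided by its testability, and the layout enhancers target the highest-index faults first. The fault list below is hypothetical.

```python
# Fault-index ranking: index = P(occurrence) / testability.
faults = [
    # (fault id, probability of occurrence, testability in (0, 1])
    ("bridge_n3_n7", 0.012, 0.60),
    ("bridge_n1_n9", 0.004, 0.05),
    ("bridge_n2_n4", 0.020, 0.90),
]

ranked = sorted(faults, key=lambda f: f[1] / f[2], reverse=True)
for fid, p, t in ranked:
    print(f"{fid}: fault index = {p / t:.4f}")
# The worst fault here is bridge_n1_n9: rare, but very hard to test.
```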
- Design Optimization of Fuzzy Logic Systems. Dadone, Paolo (Virginia Tech, 2001-05-18). Fuzzy logic systems are widely used for control, system identification, and pattern recognition problems. In order to maximize their performance, it is often necessary to undertake a design optimization process in which the adjustable parameters defining a particular fuzzy system are tuned to maximize a given performance criterion. Some data to approximate are commonly available and yield what is called the supervised learning problem. In this problem we typically wish to minimize the sum of the squares of the errors in approximating the data. We first introduce fuzzy logic systems and the supervised learning problem that, in effect, is a nonlinear optimization problem that at times can be non-differentiable. We review the existing approaches and discuss their weaknesses and the issues involved. We then focus on one of these problems, i.e., the non-differentiability of the objective function, and show how current approaches that do not account for non-differentiability can diverge. Moreover, we also show that non-differentiability may have an adverse practical impact on algorithmic performance. We reformulate both the supervised learning problem and piecewise linear membership functions in order to obtain a polynomial or factorable optimization problem. We propose the application of a global nonconvex optimization approach, namely, a reformulation and linearization technique. The expanded problem dimensionality does not make this approach feasible at this time, even though this reformulation, along with the proposed technique, still bears theoretical interest. Moreover, some future research directions are identified. We propose a novel approach to step-size selection in batch training. This approach uses a limited-memory quadratic fit on past convergence data. Thus, it is similar to response surface methodologies, but it differs from them in the type of data that are used to fit the model; that is, already available data from the history of the algorithm are used instead of data obtained according to an experimental design. The step-size along the update direction (e.g., negative gradient or deflected negative gradient) is chosen according to a criterion of minimum distance from the vertex of the quadratic model. This approach rescales the complexity of the step-size selection from the order of the (large) number of training data, as in the case of exact line searches, to the order of the number of parameters (generally lower than the number of training data). The quadratic fit approach and a reduced variant are tested on some function approximation examples, yielding distributions of the final mean square errors that are improved (i.e., skewed toward lower errors) with respect to the ones obtained with the commonly used pattern-by-pattern approach. Moreover, the quadratic fit is also competitive with, and sometimes better than, batch training with optimal step-sizes, thus showing an improved performance of this approach. The quadratic fit approach is also tested in conjunction with gradient deflection strategies and memoryless variable metric methods, showing errors smaller by 1 to 7 orders of magnitude. Moreover, the convergence speed when using either the negative gradient direction or a deflected direction is higher than that of the pattern-by-pattern approach, although the computational cost of the algorithm per iteration is moderately higher than that of the pattern-by-pattern method. Finally, some directions for future research are identified.
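The quadratic-fit step-size idea can be sketched in a few lines: fit a quadratic to recent (step size, objective) pairs drawn from the algorithm's own history, and move to the vertex of the fit, falling back to a default when the fit is not convex. The history points below are hypothetical.

```python
# Limited-memory quadratic fit for step-size selection.
import numpy as np

history = [(0.05, 0.82), (0.10, 0.64), (0.20, 0.58), (0.40, 0.71)]

def quadratic_fit_step(history, default=0.1):
    if len(history) < 3:
        return default                         # not enough data to fit
    alphas, values = map(np.array, zip(*history))
    a, b, _ = np.polyfit(alphas, values, 2)    # f(s) ~ a*s^2 + b*s + c
    if a <= 0:
        return default                         # concave fit: vertex is a max
    return float(-b / (2.0 * a))               # vertex of the convex parabola

print("next step size:", quadratic_fit_step(history))
```

The cost of the fit depends only on the (short) history length, which is the complexity rescaling the abstract describes relative to exact line searches over all training data.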
- Development of Optimization and Simulation Models for the Analysis of Airfield Operations. Baik, Hojong (Virginia Tech, 2000-05-22). This research is concerned with the modeling and development of algorithmic approaches for solving airport operational problems that arise in Air Traffic Control (ATC) systems within the terminal area at hub airports. Specifically, the problems addressed include the Aircraft Sequencing Problem (ASP) for runway operations, the Network Assignment Problem (NAP) for taxiway operations, and a simulation model for the detailed evaluation of current or proposed ATC systems. For the ASP, we develop a mathematical model and apply the Reformulation-Linearization Technique (RLT) of Sherali and Adams to construct an enhanced tightened version of the proposed model. Since the ASP is NP-hard (in fact, it is a variation of the well-known Traveling Salesman Problem with time windows), sub-optimal solutions are usually derived to accommodate the real-time constraints of ATC systems. Nevertheless, we exhibit a significant advancement in this challenging class of problems. Also, for the purpose of solving relatively large-sized problems in practice, we develop and test suitable heuristic procedures. For the NAP, we propose a quasi-dynamic assignment scheme based on the incremental assignment technique. This quasi-dynamic assignment method assumes that the current aircraft route is influenced only by the previous aircraft assigned to the network. This simplifying assumption obviates the need for iterative rerouting procedures to reach a pure equilibrium state, which might not be achievable in practical taxiway operations. To evaluate the overall system, we develop a microscopic simulation model. The simulation model is designed to reproduce not only the dynamic behavior of aircraft, but also the communication activities between controllers and pilots. These activities are critical in ATC operations and, in some instances, might limit the capacity of the facility. Finally, using the developed simulation model, named the Virginia Tech Airport Simulation Model (VTASM), in concert with ASP and NAP, we compare the overall efficiencies of several control strategies, including that of the existing control system as well as that of the proposed advanced control system.
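A toy version of the sequencing decision (hypothetical data): order arrivals on one runway, respecting wake-turbulence separations that depend on the leading/trailing pair, to minimize the time of the last landing. Brute force is fine for a handful of aircraft; the ASP adds time windows and the tightened RLT-based MIP.

```python
# Exhaustive runway sequencing with sequence-dependent separations.
from itertools import permutations

aircraft = ["heavy1", "small1", "heavy2", "small2"]
wclass = {"heavy1": "H", "heavy2": "H", "small1": "S", "small2": "S"}
sep = {("H", "H"): 1.5, ("H", "S"): 3.0,    # trailing a heavy is costly
       ("S", "H"): 1.0, ("S", "S"): 1.5}    # minutes between landings

def makespan(order):
    t = 0.0
    for lead, trail in zip(order, order[1:]):
        t += sep[wclass[lead], wclass[trail]]
    return t

best = min(permutations(aircraft), key=makespan)
print(best, makespan(best))
```

Even this toy shows the structure the MIP exploits: grouping aircraft of like wake class avoids the expensive heavy-to-small transitions.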
- A Disassembly Optimization Problem. Bhootra, Ajay (Virginia Tech, 2002-09-20). The rapid technological advancement of the past century resulted in a decreased life cycle for a large number of products and, consequently, increased the rate of technological obsolescence. The disposal of obsolete products has resulted in rapid landfilling and now poses a major environmental threat. Governments in many countries around the world have started imposing regulations to curb uncontrolled product disposal. Consumers, too, are now aware of the adverse effects of product disposal on the environment and increasingly favor environmentally benign products. In the wake of imminent stringent government regulations and consumer awareness about ecosystem-friendly products, manufacturers need to think about alternatives to product disposal. One way to deal with this problem is to disassemble an obsolete product and utilize some of its components/subassemblies in the manufacturing of new products. This seems to be a promising solution because products nowadays are made in accordance with the highest quality standards and, although an obsolete product may not be in the required functional state as a whole, it is possible that several of its components or subassemblies are still in near-perfect condition. However, product disassembly is a complex task requiring human labor as well as automated processes and, consequently, a huge monetary investment. This research addresses a disassembly optimization problem, which aims at minimizing the costs associated with the disassembly process (namely, the costs of breaking the joints and the sequence-dependent set-up costs associated with disassembly operations), while maximizing the benefits resulting from the recovery of components/subassemblies from a product. We provide a mathematical abstraction of the disassembly optimization problem in the form of integer programming models. One of our formulations includes a new way of modeling the subtour elimination constraints (SECs), which are usually encountered in the well-known traveling salesman problem. Based on these SECs, a new valid formulation for the asymmetric traveling salesman problem (ATSP) was developed. The ATSP formulation was further extended to obtain a valid formulation for the precedence-constrained ATSP. Detailed experimentation was conducted to compare the performance of the proposed formulations with that of other well-known formulations discussed in the literature. Our results indicate that, in comparison to other well-known formulations, the proposed formulations are quite promising in terms of the LP relaxation bounds obtained and the number of branch-and-bound nodes explored to reach an optimal integer solution. These new formulations, along with the results of experimentation, are presented in Appendix A. To solve the disassembly optimization problem, a three-phase iterative solution procedure was developed that can determine optimal or near-optimal disassembly plans for complex assemblies. The first phase helps in obtaining an upper bound on our maximization problem through an application of a Lagrangian relaxation scheme. The second phase helps to further improve this bound through the addition of a few valid inequalities to our models. In the third phase, we fix some of our decision variables based on the solutions obtained in the iterations of phases 1 and 2, and then implement a branch-and-bound scheme to obtain the final solution. We test our procedure on several randomly generated data sets and identify the factors that render a problem computationally difficult. Also, we establish the practical usefulness of our approach through case studies on the disassembly of a computer processor and a laser printer.
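The cost/benefit trade-off at the heart of the problem can be illustrated in miniature (all data hypothetical): choose how deep to disassemble and in what order, paying joint-breaking and sequence-dependent set-up costs while collecting recovery values, subject to precedence among joints. Exhaustive search over prefixes of precedence-feasible orders stands in for the integer-programming and branch-and-bound machinery.

```python
# Toy disassembly planning: maximize recovery value minus break and
# sequence-dependent set-up costs over partial, precedence-feasible sequences.
from itertools import permutations

joints = ["j1", "j2", "j3"]
break_cost = {"j1": 2.0, "j2": 3.0, "j3": 1.5}
recovery = {"j1": 4.0, "j2": 2.5, "j3": 5.0}     # value freed by each joint
setup = {("j1", "j2"): 1.0, ("j1", "j3"): 0.5, ("j2", "j1"): 1.2,
         ("j2", "j3"): 0.8, ("j3", "j1"): 0.6, ("j3", "j2"): 1.5}
precedes = {("j1", "j3")}                        # j1 must be broken before j3

best = (0.0, ())                                 # doing nothing is allowed
for order in permutations(joints):
    for depth in range(1, len(order) + 1):
        seq = order[:depth]
        pos = {j: i for i, j in enumerate(seq)}
        if any((a in pos and b in pos and pos[a] > pos[b]) or
               (b in pos and a not in pos) for a, b in precedes):
            continue                             # violates precedence
        profit = sum(recovery[j] - break_cost[j] for j in seq)
        profit -= sum(setup[a, b] for a, b in zip(seq, seq[1:]))
        if profit > best[0]:
            best = (profit, seq)
print(best)
```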