Browsing by Author "Nachlas, Joel A."
Now showing 1 - 20 of 72
- Analysis and Approximations of Time Dependent Queueing Models. Nasr, Walid (Virginia Tech, 2008-01-18). Developing equations to compute congestion measures for the general G/G/s/c queueing model and networks of such nodes has always been a challenge. One approach to analyzing such systems is to approximate the model-specified general input processes and distributions by processes and distributions from the more computationally friendly family of phase-type processes and distributions. We develop numerical approximation methods for analysis of general time-dependent queueing nodes by introducing new approximations for the time-dependent first two moments of the number-in-system and departure-count processes.
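As a point of reference for the transient measures described in this abstract, the time-dependent first two moments of the number-in-system process can be estimated by brute-force simulation. The sketch below does this for a plain M/M/1 node; the rates, horizon, and replication count are illustrative assumptions, not values from the thesis.

```python
import random

def mm1_number_in_system(lam, mu, t_end, seed):
    """One M/M/1 sample path starting empty; return N(t_end)."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while True:
        rate = lam + (mu if n > 0 else 0.0)
        t += rng.expovariate(rate)
        if t > t_end:
            return n
        # the next event is an arrival w.p. lam/rate, else a departure
        if rng.random() < lam / rate:
            n += 1
        else:
            n -= 1

def transient_moments(lam, mu, t_end, reps=20000):
    """Estimate the first two moments of N(t_end) across independent paths."""
    samples = [mm1_number_in_system(lam, mu, t_end, s) for s in range(reps)]
    m1 = sum(samples) / reps
    m2 = sum(x * x for x in samples) / reps
    return m1, m2

m1, m2 = transient_moments(lam=0.5, mu=1.0, t_end=2.0)
print(round(m1, 3), round(m2 - m1 * m1, 3))   # transient mean and variance
```

A phase-type approximation of a general G/G/s/c node could be validated against exactly this kind of Monte Carlo estimate.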
- The application of a single control chart for dependent variables in multivariate quality control. Hanson, Robert Alexander (Virginia Tech, 1990-08-05). Most control charts monitor only one quality characteristic. There are, however, many manufactured products for which good quality requires meeting specifications in more than one physical characteristic. Typical practice when dealing with multiple quality characteristics is to take a separate sample for each characteristic and then create individual univariate control charts which are independently monitored. This method can result in errors due to not accounting for the effects of correlation. In order to avoid these errors, an alternate approach to multivariate quality control problems is proposed and studied here. The original problem is converted into a univariate problem by using the transformation y = Σᵢ αᵢxᵢ, where αᵢ is the weighting coefficient for the ith quality characteristic and xᵢ is the ith quality characteristic. This transformation retains sensitivity to changes in the original quality variables. The resulting univariate quality control model takes into account the sampling error probabilities for each of several candidate hypotheses. The probabilities of correctly diagnosing process shifts when an out-of-control state occurs are calculated and tabulated, as are the probabilities that the model will signal when an out-of-control state occurs.
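The transformation described in this abstract can be sketched in a few lines; the weights, samples, and control limits below are hypothetical, chosen only to show how a multivariate observation collapses to a single charted value.

```python
# Hypothetical weights, samples, and control limits (not values from the thesis).
alpha = [0.6, 0.4]              # weighting coefficients for the two characteristics
samples = [
    [10.2, 5.1],                # in-control observation
    [10.0, 5.0],                # in-control observation
    [11.5, 6.2],                # observation with a shifted mean
]

def weighted_statistic(x, alpha):
    """y = sum_i alpha_i * x_i: collapse a multivariate observation to one value."""
    return sum(a * xi for a, xi in zip(alpha, x))

center, ucl, lcl = 8.1, 8.6, 7.6   # illustrative control limits for y
ys = [weighted_statistic(x, alpha) for x in samples]
for y in ys:
    print(round(y, 2), "in" if lcl <= y <= ucl else "out")
```

Because y mixes both characteristics, a correlated shift that would look mild on each univariate chart can push the single statistic past its limit.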
- Applied statistical analysis software system. Campbell, Allen Webb (Virginia Tech, 1991-08-08). The personal computer furnishes engineers and managers with an appropriate tool for the analysis of statistical problems, particularly in areas such as quality control. A review of the literature found no simple, low-cost software packages that allow the personal computer user to design, execute, and statistically analyze physical processes while providing graphical display of the results. The primary objective of this research is to develop a software system which runs on a personal computer and provides management with an affordable vehicle for the statistical analysis of problems, particularly those faced by the decision maker. This software system includes the design of inferential procedures as well as their execution. Many of the inferential procedures are based on the assumptions of independent and normally distributed observations, and in the case of confidence intervals and hypothesis tests for means, the system offers a technique which assists the user in overcoming problems associated with correlated sample data. The software system attempts to detect when these assumptions appear to be violated. In the case of independence, a methodology is applied which attempts to offset the deleterious effects of the correlation on parameter estimates. Although statistical methods are used to interpret sampled data, extensive knowledge of statistics is not necessary to use this software. The software is intended for managers and quality engineers who possess knowledge of basic statistics, the ability to interpret control chart information, and a basic working knowledge of personal computers. Execution requires an IBM PC (or a true compatible) which contains a color graphics card and at least 512K of memory. Both color and monochrome monitors are supported. The software is written in Turbo Basic 1.0, a product of Borland International.
The software operates under the MS-DOS 2.0 (or later version) operating system. DOS, Disk Operating System, provides for communication between the keyboard, the disk drives, and the system unit.
- Assessment of server location on system availability by computer simulation. Weissmann, Eric (Virginia Tech, 1994). An important characteristic of all systems is availability. Availability is the probability that a system or piece of equipment will operate in a prescribed manner when used under specified conditions. It is primarily a design-dependent parameter. Availability derives from a system's reliability and maintainability. Reliability is the probability that a system will operate for a specified time under specified operating conditions. It is commonly measured by the mean-time-between-failure (MTBF). Maintainability is the ability of a system to be maintained. For this report, maintainability is measured by the mean maintenance down time (MDT). The MDT is a function of several variables, including the mean-time-to-repair (MTTR) and the logistics delay time (LDT). MTTR is the time during which active maintenance is being performed. The LDT is the time delay due to spare part availability, transportation, repair facility availability, traveling to the location of the malfunction, etc. LDT is a major portion of the MDT. In order to meet a requirement of improved system availability, the MTBF and/or the MDT must be improved. Decreasing the distance between the repair organization and the location of the failure may have a significant impact on LDT, assuming that the system's LDT is positively correlated with the distance traveled. An improvement in the LDT corresponds to an improvement in the MDT, and hence the availability. For a situation where the repair function (referred to as the repair unit) travels to the location of the failed component, the deployment location may have a critical impact on a system's availability. In a system where the operating components are located over a wide area and the repair organization must travel to the component to effect a repair, there are numerous ways to deploy the servicing units. The system's maintenance concept addresses this issue.
The deployment of the repair units is the central focus of this report.
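The availability arithmetic sketched in this abstract, A = MTBF / (MTBF + MDT) with MDT = MTTR + LDT, can be illustrated directly. The linear travel-time model and all numeric values below are assumptions for illustration only, not data from the report.

```python
def availability(mtbf, mttr, ldt):
    """A = MTBF / (MTBF + MDT), with mean down time MDT = MTTR + LDT."""
    return mtbf / (mtbf + mttr + ldt)

def logistics_delay(distance_km, speed_kmh, fixed_delay_h=2.0):
    """Assumed linear travel-time model: LDT grows with distance to the failure."""
    return fixed_delay_h + distance_km / speed_kmh

# Compare two candidate deployment distances for the repair unit (times in hours).
mtbf, mttr = 500.0, 4.0
a_near = availability(mtbf, mttr, logistics_delay(50.0, speed_kmh=80.0))
a_far = availability(mtbf, mttr, logistics_delay(400.0, speed_kmh=80.0))
print(round(a_near, 4), round(a_far, 4))
```

Even this toy model shows the lever the report studies: moving the repair unit closer shortens LDT, shrinks MDT, and raises availability, all else equal.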
- Availability Analysis for the Quasi-Renewal Process. Rehmert, Ian Jon (Virginia Tech, 2000-10-09). The behavior of repairable equipment is often modeled under assumptions such as perfect repair, minimal repair, or negligible repair. However, the majority of equipment behavior does not fall into any of these categories. Rather, repair actions do take time and the condition of equipment following repair is not strictly "as good as new" or "as bad as it was" prior to repair. Non-homogeneous processes that reflect this type of behavior are not studied nearly as much as the minimal repair case, but they are far more realistic in many situations. For this reason, the quasi-renewal process provides an appealing alternative to many existing models for describing a non-homogeneous process. A quasi-renewal process is characterized by a parameter that indicates process deterioration or improvement by falling in the interval [0,1) or (1,∞), respectively. This parameter is the amount by which subsequent operation or repair intervals are scaled in terms of the immediately previous operation or repair interval. Two equivalent expressions for the point availability of a system with operation intervals and repair intervals that deteriorate according to a quasi-renewal process are constructed. In addition to general expressions for the point availability, several theoretical distributions on the operation and repair intervals are considered and specific forms of the quasi-renewal and point availability functions are developed. The two point availability expressions are used to provide upper and lower bounds on the approximated point availability. Numerical results and the general behavior of the point availability and quasi-renewal functions are examined. The framework provided here allows for the description and prediction of the time-dependent behavior of a non-homogeneous process without the assumption of limiting behavior, a specific cost structure, or minimal repair.
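A quasi-renewal on/off process of the kind analyzed here is easy to simulate, which gives a Monte Carlo check on any analytical point availability expression. The sketch below assumes exponential interval distributions and illustrative parameter values; it is not the thesis's analytical construction. The kth operating interval is scaled by a^(k-1) (deterioration, a < 1) and the kth repair interval by b^(k-1) (lengthening repairs, b > 1).

```python
import random

def point_availability(t, a=0.95, b=1.05, mean_up=100.0, mean_rep=5.0,
                       reps=5000, seed=1):
    """Monte Carlo point availability A(t) for a quasi-renewal on/off process.

    Successive operating intervals shrink by factor a and successive repair
    intervals grow by factor b; exponential base distributions are assumed
    purely for illustration."""
    rng = random.Random(seed)
    up_count = 0
    for _ in range(reps):
        clock, k, state_up = 0.0, 0, True
        while True:
            if state_up:
                clock += (a ** k) * rng.expovariate(1.0 / mean_up)
            else:
                clock += (b ** k) * rng.expovariate(1.0 / mean_rep)
                k += 1          # one full operate/repair cycle completed
            if clock > t:
                up_count += state_up   # state occupied at time t
                break
            state_up = not state_up
    return up_count / reps

av = point_availability(200.0)
print(round(av, 3))
```

Such a simulation is a natural sanity check for the upper and lower bounds the thesis derives from its two truncated point availability expressions.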
- Availability Analysis for the Quasi-Renewal Process with an Age-Dependent Preventive Maintenance Policy. Intiyot, Boonyarit (Virginia Tech, 2007-09-05). A quasi-renewal process is more realistic in modeling the behavior of a repairable system than traditional models such as perfect repair and minimal repair, since it reflects the deterioration process of the system over time while traditional models do not. The quasi-renewal parameter is set to a value between 0 and 1 to indicate the rate of deterioration. Moreover, a quasi-renewal process can also be used to model the increasing duration of maintenance actions due to the increasing difficulty of maintaining an aging system, by setting the parameter to a value larger than 1. We construct a model in which the operating times follow a quasi-renewal process and the corrective/preventive maintenance times follow another quasi-renewal process. A quasi-renewal function and two equivalent point availability expressions are developed for the model described by a quasi-renewal process with an age-dependent preventive maintenance policy. In addition, numerical results from various theoretical distributions are obtained to illustrate the behavior of the models. The two equivalent point availability functions each contain an infinite sum and must be truncated to obtain a numerical approximation. The two approximated point availability functions form upper and lower bounds on the true value. The bounds are useful for determining the accuracy of the result, which can be arbitrarily increased by adding more terms to the truncated summation. Our framework provides a new time-dependent availability model for a non-stationary process with a preventive maintenance policy without any cost structure or optimization problem.
- Availability analysis of opportunistic age replacement policies. Degbotse, Alfred Tsatsu (Virginia Tech, 1996-09-15). This research develops the availability function for a two-component series system in which a component is replaced because of component failure or because it reaches a prescribed age. Each component replacement also provides an opportunity for the replacement of the other component; this maintenance policy is called an opportunistic replacement strategy. The system functions only if both components of the system are functioning; the system fails if either of the components fails. Component i is replaced if it fails before attaining age Ti since it was last replaced or maintained. Component i is preventively maintained if it has not failed by the age Ti. This type of replacement plan is called an age replacement policy. When component i is being replaced or preventively maintained, if the age of component j ≠ i exceeds τj, then both components i and j are replaced at the same time. This type of replacement is called opportunistic replacement of component j, and τj is called the opportunistic replacement time for component j. The time-dependent and long-run availability measures for the system are developed. A nested renewal theory approach is used to develop the system availability function. The nesting is defined by considering the replacement of a specific one of the components as an elementary renewal event and the simultaneous replacement of both components as the macroscopic renewal event. More specifically, the simultaneous replacement of both components constitutes a regeneration point for the entire system, so the sequence of these events is itself a renewal process. The intervals between system regeneration points are called "major intervals". The age replacement time Ti and opportunistic replacement time τi are treated as decision parameters during the model development.
The probability distribution of the major interval is developed and the Laplace transform of the system availability is developed. Four replacement models are obtained from the main opportunistic age replacement policy. These are a failure replacement policy, an opportunistic failure model, a partial opportunistic age replacement policy and an opportunistic age replacement policy. These models are obtained as specific cases of the general model. The long run availability measure for the failure replacement model is proven to be the same measure as that developed by Barlow and Proschan. This proof validates the modeling approach.
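A Monte Carlo sketch of the opportunistic age replacement policy described above can serve as a numerical check on the analytical availability. Exponential lifetimes, fixed replacement times, frozen ages during downtime, and all parameter values below are assumptions for illustration, not the thesis's nested-renewal derivation.

```python
import random

def long_run_availability(T, tau, mean_life, repl_time, horizon=2e5, seed=3):
    """Simulated long-run availability of a two-component series system under
    an opportunistic age replacement policy.

    Component i is replaced on failure or at age T[i]; whenever component i is
    replaced, component j is also replaced if its age exceeds tau[j].
    Component ages are frozen during replacement downtime."""
    rng = random.Random(seed)
    age = [0.0, 0.0]
    life = [rng.expovariate(1.0 / mean_life[i]) for i in range(2)]
    clock, downtime = 0.0, 0.0
    while clock < horizon:
        # time until each component triggers a replacement (failure or age T)
        trigger = [min(life[i] - age[i], T[i] - age[i]) for i in range(2)]
        dt = min(trigger)
        i = trigger.index(dt)
        clock += dt
        age = [age[0] + dt, age[1] + dt]
        replace = {i}
        j = 1 - i
        if age[j] >= tau[j]:        # opportunistic replacement of the other one
            replace.add(j)
        clock += repl_time
        downtime += repl_time
        for k in replace:
            age[k] = 0.0
            life[k] = rng.expovariate(1.0 / mean_life[k])
    return 1.0 - downtime / clock

a = long_run_availability(T=[80.0, 120.0], tau=[50.0, 70.0],
                          mean_life=[100.0, 150.0], repl_time=2.0)
print(round(a, 3))
```

Sweeping T and τ in such a simulation mimics treating them as the decision parameters the thesis analyzes.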
- Availability of continuously-operated, coherent, multifunctional systems. Sols, Alberto (Virginia Tech, 1992). Modern systems are characterized by a multifunctional capability. They are designed to accomplish not one but a series of missions, each one requiring the performance of certain functions. Each of those functions requires the support of some system elements. The fact that an element is not available at a certain point in time by no means implies that the entire system is "down" at that moment, as the traditional availability definition and formulation require. Depending on what the mission-function and function-element support requirements are, the "down" condition of a certain element may prevent the accomplishment of some system missions, but others will still be available. Moreover, the traditional approach assumes that the time to failure and the time to repair associated with each element both follow a negative exponential distribution. Therefore, a more comprehensive treatment of the concept of system availability is required. All the assumptions necessary to enable the definition and quantification of availability figures of merit are listed. Then, definitions are established for availability and degraded availability at different levels in the system structure, from element to system. In addition, some related concepts such as mission reliability and dependability are defined. The developed model enables the prediction of the defined availability figures of merit. The foundation of the model is the renewal process associated with each system element and the links that specify the mission-function and function-element support requirements. The formulation for some related concepts is also presented. Some well-known pairs of distributions are considered and the general expressions are particularized for them.
Finally, an example is conducted in order to show the applicability of the derived expressions and to compare the obtained results with those obtained using the traditional approach.
- The availability of multifunctional systems with multistate components. Houzouris, Adrienne (Virginia Tech, 1993-05-05). Reliability and availability measures are well defined for simple systems, but many real systems are not simple. Many systems are designed to perform more than one task. In addition, these systems may be performing at one of many levels. When these factors are considered, the system availability analysis changes considerably. Many modern systems are designed to perform more than one function. Sols (1992) and Sols and Nachlas (1993) discuss the importance of multifunctional systems and the need to redefine availability to accommodate them. Furthermore, in most traditional reliability theory, components are assumed to be either functioning or failed, which oversimplifies many physical situations. Multistate models have been developed in the literature, and conditions for the system structure function have been defined. Multifunctional systems and systems with multistate components have each been discussed in the literature individually; however, a model for systems with both multifunctional capability and potentially degraded components has not. By synthesizing the previous work on multistate systems and on multifunctional systems, this research presents a general model for multifunctional, multistate systems. The model is specialized to a general class of multifunctional, multistate systems. Expressions are derived for the element, function, and system availability. The availability for an example system is then computed.
- Balanced, capacitated, location-allocation problems on networks with a continuum of demand. Nordai, Frederick Leon (Virginia Polytechnic Institute and State University, 1985). Location-allocation problems can be described generically as follows: given the location or distribution (perhaps probabilistic) of a set of customers and their associated demands for a given product or service, determine the optimum location of a number of service facilities and the allocation of products or services from facilities to customers, so as to minimize total (expected) location and transportation costs. This study is concerned with a particular subclass of location-allocation problems involving capacitated facilities and a continuum of demand. Specifically, two minisum, network-based location-allocation problems are analyzed in which facilities having known finite capacities are to be located so as to optimally supply/serve a known continuum of demand. The first problem considered herein is an absolute p-median problem in which p > 1 capacitated facilities are to be located on a chain graph having both nodal and link demands, the latter of which are defined by nonnegative, integrable demand functions. In addition, the problem is balanced, in that it is assumed the total demand equals the total supply. An exact solution procedure is developed, wherein the optimality of a certain location-allocation scheme (for any given ordering of the facilities) is used to effect a branch and bound approach by which one can identify an optimal solution to the problem. Results from the chain graph analysis are then used to develop an algorithm with which one can solve a dynamic, sequential location-allocation problem in which a single facility per period is required to be located on the chain.
Finally, an exact solution procedure is developed for locating a capacitated, absolute 2-median on a tree graph having both nodal and link demands and for which the total demand is again equal to the total supply. This procedure utilizes an algorithm to construct two subtrees, each of whose ends constitute a set of candidate optimal locations for one of the two elements of an absolute 2-median. Additional localization results are used to further reduce the number of candidate pairs (of ends) that need to be considered, and then a post-localization analysis provides efficient methods of comparing the relative costs of the remaining pairs.
- Bi-objective multi-assignment capacitated location-allocation problem. Maach, Fouad (Virginia Tech, 2006-10-10). Optimization problems of location-assignment correspond to a wide range of real situations, such as factory network design. However, most previous works seek only to minimize a cost function. Traffic incidents routinely impact the performance and the safety of the supply; these incidents cannot be totally avoided and must be taken into account. One way to account for them is to design a network on which multiple assignments are performed. Specifically, the problem we focus on deals with power supply, which has become an increasingly complex and crucial question. Many international companies have customers who are located all around the world, usually one customer per country. At the other side of the scale, power extraction or production is done at several sites that are spread over several continents and seas. A strong desire to become less energy-dependent has led many governments to increase the diversity of supply locations. For each kind of energy, many countries expect to deal ideally with 2 or 3 supply sites. As a decrease in power supply can have serious consequences for the economic performance of a whole country, companies prefer to balance the production rate equally among all sites, since the reliability of all the sites is considered to be very similar. Sharing the demand equally between the 2 or 3 sites assigned to a given area is the most common way. Although the cost of the network is important, it is also crucial to balance the loading between the sites to guarantee that no site takes on more importance than the others for a given area. If an accident happens at a site, or if technical problems prevent the site from satisfying the demand assigned to it, the overall power supply is still likely to be ensured by the one or two remaining available site(s).
It is common to assign a cost per open power plant and another cost that depends on the distance between the factory or power extraction point and the customer. On the whole, companies concerned with the service quality of power supply have to find a good trade-off between this factor and their overall operating cost. This situation also exists for companies who supply power at the national scale. The expected number of areas, as well as that of potential sites, can reach 100; however, the targeted size of the problem to be solved is 50. This thesis focuses on devising an efficient methodology to provide all the solutions of this bi-objective problem. The proposal is an investigation of closely related problems to identify the most relevant approaches to this atypical problem. This work allows us to present one exact method and an evolutionary algorithm that may provide a good answer to this problem.
- A Bivariate Renewal Process and Its Applications in Maintenance Policies. Yang, Sang-Chin (Virginia Tech, 1999-12-13). Systems of the same type and the same age usually have accumulated different amounts of usage, and when in operation they usually have different performance and effectiveness. In this case the existing univariate measures of system effectiveness are inadequate and incomplete. For example, the univariate availability measures for these same-aged systems are all the same even though their usage differs. This is the motivation for this research to pursue a bivariate approach to reliability and maintenance modeling. This research presents a framework for bivariate modeling of a single-unit system. Five key efforts are identified and organized as: (i) bivariate failure modeling, (ii) bivariate renewal modeling, (iii) bivariate corrective maintenance (CM) modeling, (iv) bivariate preventive maintenance (PM) modeling, and (v) bivariate availability modeling. The results provide a foundation for further study of bivariate and multivariate models. For bivariate failure modeling, several bivariate failure models are constructed to represent the possible correlation structures of the two system aging variables, time and usage. The behavior of these models is examined under various correlation structures. The developed models are used to analyze example maintenance problems. Models for bivariate renewal, bivariate CM, and bivariate PM are derived based on the constructed bivariate failure models and the developed bivariate renewal theory. For bivariate CM modeling, corrective maintenance is modeled as an alternating bivariate renewal process or simply an ordinary bivariate renewal process. For bivariate PM modeling, PM models are examined under a bivariate age replacement preventive maintenance policy. The Laplace transforms of the renewal functions (and densities) for these models are obtained.
Definitions for bivariate availability functions are developed. Based on the derived CM and PM models, the Laplace transforms for their corresponding bivariate availability models are constructed. The idea of the quality of availability measure is also defined in terms of bivariate availability models. The most significant observation is that this framework provides a new way to study the reliability and maintenance of equipment for which univariate measures are incomplete. Therefore, a new area of reliability research is identified. The definitions offered may be modified and the approach to model formulation presented may be used to define other models.
- Cell design in a cellular system using guard channels, call queueing and channel borrowing. Jain, Nikhil (Virginia Tech, 1993-12-05). This dissertation develops an analytic framework to undertake cell design in a cellular system. The cell is modeled in a broader sense than ever done before. In our analytical model, we incorporated the use of guard channels, queueing of new calls, and hybrid channel allocation. A numerically stable and efficient solution to a queueing system with two arrival streams having reserved and borrowable servers has been developed. This queueing system is used to model the cell behavior. The model provides valuable insights into the behavior of the cell, and this in turn has been used to devise an efficient stochastic optimization algorithm for determining the minimum number of channels required by the cell. Our techniques are stable, easy to implement for practical systems and produce optimized solutions quickly. This is particularly useful because we expect that future designs of cellular systems may execute such algorithms on cell-site processors.
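The guard-channel admission rule described in this abstract can be captured, in a simplified form without call queueing or channel borrowing, by a one-dimensional birth-death chain. The sketch below computes stationary blocking probabilities under that simplification, with illustrative traffic values; the dissertation's actual model is richer.

```python
def guard_channel_probabilities(C, g, lam_new, lam_handoff, mu):
    """Stationary blocking probabilities for a cell with C channels, g of which
    are guard channels reserved for handoff calls.

    New calls are admitted only while fewer than C - g channels are busy;
    handoff calls are admitted while any channel is free. No queueing or
    borrowing is modeled in this birth-death simplification."""
    lam_total = lam_new + lam_handoff
    # unnormalized stationary probabilities p[n] of n busy channels
    p = [1.0]
    for n in range(1, C + 1):
        arrival = lam_total if n - 1 < C - g else lam_handoff
        p.append(p[-1] * arrival / (n * mu))
    z = sum(p)
    p = [x / z for x in p]
    p_block_new = sum(p[C - g:])     # new call finds >= C-g channels busy
    p_drop_handoff = p[C]            # handoff finds all C channels busy
    return p_block_new, p_drop_handoff

pb, pd = guard_channel_probabilities(C=10, g=2, lam_new=4.0, lam_handoff=1.0, mu=1.0)
print(round(pb, 4), round(pd, 4))
```

Reserving guard channels trades a higher new-call blocking probability for a lower handoff dropping probability, which is the design tension the dissertation's optimization addresses.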
- A comparison of alternative methods to the Shewhart-type control chart. Hall, Deborah A. (Virginia Tech, 1989-01-05). A control chart that simultaneously tracks the mean and variance of a normally distributed variable with no compensation effect is defined in this work. This joint control chart is compared to five other charts: an X̄ chart, an s² chart, a Reynolds and Ghosh chart, a Repko process capability plot, and a t-statistic chart. The criterion for comparison is the probability of a Type II sampling error. Several out-of-control cases are examined. In the case of Repko, an equation is defined to compute the Type II error probability. The results indicate that the Reynolds and Ghosh statistic is powerful for cases when the variance shifts out of control. The X̄ chart is powerful when the mean shifts with moderate changes in the variance. The joint chart is powerful for moderate changes in the mean and variance.
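For the X̄ chart in this comparison, the Type II error probability under a sustained mean shift has a standard closed form, β = Φ(k - δ√n) - Φ(-k - δ√n), where δ is the shift in process standard deviations and k is the control-limit width. A small sketch (the shift sizes and sample size are illustrative):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def beta_xbar(delta, n, k=3.0):
    """Type II error probability of an X-bar chart with k-sigma limits when
    the process mean shifts by delta standard deviations."""
    return phi(k - delta * sqrt(n)) - phi(-k - delta * sqrt(n))

for delta in (0.5, 1.0, 2.0):
    print(delta, round(beta_xbar(delta, n=5), 4))
```

As expected, β falls as the shift grows, which is the kind of operating-characteristic behavior the thesis tabulates for each competing chart.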
- Component availability for an age replacement preventive maintenance policy. Murdock, William P. (Virginia Tech, 1995). This research develops the availability function for a continuously demanded component which is maintained by an age replacement preventive maintenance policy. The availability function, A(t), is a function of time and is defined as the probability that the component functions at time t. The component is considered to have two states: operating and failed. In this policy, the component is repaired or replaced at time of failure. Otherwise, if the component survives T time units, a preventive maintenance service is performed. T is known as the age replacement period or preventive maintenance policy. The component is considered to be as good as new after either service action is completed. A renewal theory approach is used to develop A(t). Past research has concerned infinite time horizons, letting analysis proceed with limiting values. This research considers component economic life that is finite. The lifetime, failure service time and preventive maintenance service time probability distributions are unique and independent. Laplace transforms are used to simplify model development. The age replacement period, T, is treated as a parameter during model development. The partial Laplace transform is developed to deal with truncated random time periods. A general model is developed in which the resulting availability function is dependent on both continuous time and T. An exact expression for the Laplace transform of A(t, T) is developed. Two specific cases are considered. In the first case, the lifetime, repair and preventive maintenance times are unique exponential distributions. This case is used to validate model performance. Tests are performed for t→0, t→∞ and for times in between these extremes. Results validate model performance.
The second case models the lifetime as a Weibull distribution with exponential failure repair and preventive maintenance times. Results validate model performance in this case also. Exact infinite series for the partial and normal Laplace transform of the Weibull distribution and survivor function are presented. Research results show that the optimum infinite time horizon age replacement period does not maximize average availability for all finite values of component economic life. This result is critical in lifecycle maintenance planning.
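The long-run (limiting) counterpart of the availability studied here has a standard renewal-reward form, A(T) = ∫₀ᵀ R(t)dt / (∫₀ᵀ R(t)dt + MTTR·F(T) + MPM·R(T)), which the sketch below evaluates numerically for a Weibull lifetime. The parameter values are illustrative; the thesis itself works with the transient A(t, T) via Laplace transforms rather than this limit.

```python
from math import exp

def limiting_availability(T, eta, beta, mttr, mpm, steps=10000):
    """Long-run availability under an age replacement policy with a Weibull
    lifetime (scale eta, shape beta), mean failure-repair time mttr, and
    mean preventive maintenance time mpm.

    A = E[uptime per cycle] / E[cycle length]."""
    R = lambda t: exp(-((t / eta) ** beta))   # Weibull survivor function
    h = T / steps
    up = h * sum(R((i + 0.5) * h) for i in range(steps))   # midpoint rule
    rT = R(T)
    return up / (up + mttr * (1.0 - rT) + mpm * rT)

for T in (50.0, 100.0, 200.0):
    print(T, round(limiting_availability(T, eta=150.0, beta=2.0,
                                         mttr=10.0, mpm=2.0), 4))
```

Comparing this limit against a finite-horizon A(t, T) makes concrete the thesis's finding that the infinite-horizon optimal T need not maximize average availability over a finite economic life.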
- A Convergence Analysis of Generalized Hill Climbing Algorithms. Sullivan, Kelly Ann (Virginia Tech, 1999-04-12). Generalized hill climbing (GHC) algorithms provide a unifying framework for describing several discrete optimization problem local search heuristics, including simulated annealing and tabu search. A necessary and a sufficient convergence condition for GHC algorithms are presented. The convergence conditions presented in this dissertation are based upon a new iteration classification scheme for GHC algorithms. The convergence theory for particular formulations of GHC algorithms is presented and the implications discussed. Examples are provided to illustrate the relationship between the new convergence conditions and previously existing convergence conditions in the literature. The contributions of the necessary and the sufficient convergence conditions for GHC algorithms are discussed and future research endeavors are suggested.
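A minimal skeleton of a GHC algorithm, shown with simulated annealing recovered as a special case of the random acceptance threshold. The test function, neighborhood, and cooling schedule are all illustrative assumptions, not examples from the dissertation.

```python
import random

def ghc(neighbors, f, x0, accept, iterations=3000, seed=7):
    """Generalized hill climbing skeleton: a random neighbor is accepted when
    the increase in cost is at most a random threshold drawn by accept(k).
    A zero threshold gives ordinary hill climbing; an exponential threshold
    with geometrically decaying mean gives simulated annealing."""
    rng = random.Random(seed)
    x = best = x0
    for k in range(iterations):
        y = rng.choice(neighbors(x))
        if f(y) - f(x) <= accept(k, rng):
            x = y
            if f(x) < f(best):
                best = x
    return best

# Illustrative problem: minimize a bumpy function on the integers -50..50,
# with penalty bumps at multiples of 5 that trap a pure hill climber.
f = lambda x: (x - 17) ** 2 + 40 * (x % 5 == 0)
neighbors = lambda x: [max(-50, x - 1), min(50, x + 1)]

# Simulated annealing as a GHC instance: exponential threshold whose mean
# (the "temperature") decays geometrically with the iteration count.
sa_accept = lambda k, rng: rng.expovariate(1.0 / (50.0 * 0.99 ** k + 1e-9))

best = ghc(neighbors, f, x0=0, accept=sa_accept)
print(best, f(best))
```

The convergence conditions the dissertation studies amount to conditions on the sequence of random thresholds drawn by `accept`, which is why the framework covers several heuristics at once.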
- Decision Support System to Predict the Manufacturing Yield of Printed Circuit Board Assembly Lines. Helo, Felipe (Virginia Tech, 1999-12-01). This research focuses on developing a model to predict the yield of a printed circuit board manufactured on a given assembly line. Based on an extensive literature review as well as discussion with industrial partners, it was determined that there is no tool available for assisting engineers in determining reliable estimates of their production capabilities as they introduce new board designs onto their current production lines. Motivated by this need, a more in-depth study of manufacturing yield as well as the electronic assembly process was undertaken. The relevant literature research was divided into three main fields: process modeling, board design, and PCB testing. The model presented in this research combines elements from process modeling and board design into a single yield model. An optimization model was formulated to determine the fault probabilities that minimize the difference between actual yield values and predicted yield values. This model determines fault probabilities (per component type) based on past production yields for the different board designs assembled. These probabilities are then used to estimate the yields of future board designs. Two different yield models were tested and their assumptions regarding the nature of the faults were validated. The model that assumes independence between faults provided better yield predictions. A preliminary case study was performed to compare the performance of the presented model with that of previous models using data available from the literature. The proposed yield model predicts yield within 3% of the actual yield value, outperforming previous regression models that predicted yield within 10%, and artificial neural network models that predicted yield within 5%. A second case study was performed using data gathered from actual production lines.
The proposed yield model continued to provide very good yield predictions. The average difference with respect to the actual yields from this case study ranged between 1.25% and 2.27% for the lines studied. Through sensitivity analysis, it was determined that certain component types have a considerably higher effect on yield than others. Once the proposed yield model is implemented, design suggestions can be made to account for manufacturability issues during the design process.
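The independence assumption that performed best here leads to a simple multiplicative yield model, Y = Π_c (1 - p_c)^(n_c): each placement of component type c escapes a fault with probability 1 - p_c, independently. The component counts and fitted fault probabilities below are hypothetical, not data from the case studies.

```python
def predicted_yield(counts, fault_prob):
    """First-pass yield under independent faults: Y = prod_c (1 - p_c) ** n_c,
    where n_c is the number of placements of component type c."""
    y = 1.0
    for ctype, n in counts.items():
        y *= (1.0 - fault_prob[ctype]) ** n
    return y

# Hypothetical board: placement counts and fitted per-type fault probabilities.
counts = {"resistor": 120, "capacitor": 80, "qfp": 4, "bga": 2}
p = {"resistor": 1e-4, "capacitor": 2e-4, "qfp": 5e-3, "bga": 1e-2}
y = predicted_yield(counts, p)
print(round(y, 4))
```

Fitting the p values would mean minimizing the gap between such predictions and observed yields across past board designs, which is the optimization the research formulates; the model also exposes which component types dominate yield loss, supporting the design-for-manufacturability suggestions mentioned above.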
- Design and Evaluation of a Data-distributed Massively Parallel Implementation of a Global Optimization Algorithm---DIRECTHe, Jian (Virginia Tech, 2007-11-15)The present work aims at an efficient, portable, and robust design of a data-distributed massively parallel DIRECT, the deterministic global optimization algorithm widely used in multidisciplinary engineering design, biological science, and physical science applications. The original algorithm is modified to adapt to different problem scales and optimization (exploration vs. exploitation) goals. Enhanced with a memory reduction technique, dynamic data structures are used to organize local data, handle unpredictable memory requirements, reduce memory usage, and share data across multiple processors. The parallel scheme employs multilevel functional and data parallelism to boost concurrency and mitigate data dependency, thus improving load balancing and scalability. In addition, checkpointing features are integrated to provide fault tolerance and hot restarts. Important algorithm modifications and design considerations are discussed regarding data structures, parallel schemes, error handling, and portability. Using several benchmark functions and real-world applications, the present work is evaluated in terms of optimization effectiveness, data structure efficiency, memory usage, parallel performance, and checkpointing overhead. Modeling and analysis techniques are used to investigate the design effectiveness and performance sensitivity under various problem structures, parallel schemes, and system settings. Theoretical and experimental results are compared for two parallel clusters with different system scales and network connectivity. An analytical bounding model is constructed to measure the load balancing performance under different schemes. Additionally, linear regression models are used to characterize two major overhead sources---interprocessor communication and processor idleness---and are also applied to the isoefficiency functions in scalability analysis. For a variety of high-dimensional problems and large scale systems, the data-distributed massively parallel design has achieved reasonable performance. The results of the performance study provide guidance for efficient problem and scheme configuration. More importantly, the generalized design considerations and analysis techniques are beneficial for transforming many global search algorithms into effective large scale parallel optimization tools.
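To give a flavor of the serial algorithm this thesis parallelizes: DIRECT samples the center of each box in the search domain and repeatedly subdivides the boxes that could contain the optimum. The sketch below is a drastically simplified one-dimensional variant that fixes a single assumed Lipschitz constant K; real DIRECT avoids choosing K by subdividing every "potentially optimal" box, and the thesis's contribution is the distributed-memory organization of that process, not this toy loop.

```python
def trisect_search(f, lo, hi, iters=30, K=2.0):
    """Simplified 1-D DIRECT-flavored search: sample interval centers,
    then repeatedly trisect the interval with the best Lipschitz lower
    bound f(center) - K * width / 2, for an assumed constant K."""
    intervals = [(lo, hi, f((lo + hi) / 2))]
    for _ in range(iters):
        # Select the interval whose lower bound is smallest.
        a, b, fc = min(intervals, key=lambda t: t[2] - K * (t[1] - t[0]) / 2)
        intervals.remove((a, b, fc))
        w = (b - a) / 3
        for k in range(3):  # trisect and sample the three new centers
            s, e = a + k * w, a + (k + 1) * w
            intervals.append((s, e, f((s + e) / 2)))
    a, b, fc = min(intervals, key=lambda t: t[2])
    return (a + b) / 2, fc

x, fx = trisect_search(lambda x: (x - 1.3) ** 2, -4.0, 4.0)
```

In the data-distributed design, the interval (box) list above is exactly the structure that grows unpredictably and must be partitioned across processors, which motivates the dynamic data structures and load-balancing schemes the abstract describes.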
- Determination of Unit Costs for Library ServicesNachlas, Joel A.; Pierce, Anthony R. (ACRL Publications, 1979-05-01)As with other public service activities, inflationary trends and public opinion provide a clear mandate for attempts to control the increasing costs of providing library services. Cost control necessarily requires knowledge of the quantities and sources of costs. A methodology, known as microcosting, for identifying the unit costs of providing specific services is presented here. The method is designed to enable library managers to identify at a detailed level the resources consumed in providing a particular service. This information provides a quantitative basis for and a monitor of library management decisions. To illustrate the use of the methodology, it is applied to the determination of the unit costs of tracking overdue materials at a major university library.
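The arithmetic behind microcosting is straightforward: itemize the resources a service consumes, cost each at its unit rate, and divide the total by the service volume. A minimal sketch with hypothetical figures (the paper's actual cost categories and rates for overdue tracking are not reproduced here):

```python
def unit_cost(resource_usage, service_volume):
    """Microcosting sketch: unit cost of a service is the total cost
    of the resources it consumes divided by the number of service
    units delivered. resource_usage is a list of
    (quantity_consumed, cost_per_unit) pairs."""
    total = sum(qty * rate for qty, rate in resource_usage)
    return total / service_volume

# Hypothetical annual inputs to overdue tracking:
# staff minutes, notices mailed, computing minutes.
cost_per_overdue = unit_cost(
    [(12000, 0.30), (2000, 0.05), (500, 0.20)],  # (qty, $ per unit)
    4000,  # overdue items tracked per year
)
```

The value of the method lies less in this division than in the detailed resource inventory it forces: each (quantity, rate) pair is a measured input that management can then monitor or target.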
- Dynamic Pricing with Early Cancellation and ResaleAn, Kwan-Ang (Virginia Tech, 2003-01-24)We consider a continuous time dynamic pricing model where a seller needs to sell a single item over a finite time horizon. Customers arrive in accordance with a Poisson process. Upon arrival, a customer either purchases the item if the posted price is lower than his/her reservation price, or leaves empty-handed. After purchasing the item, however, some customers will return it to the seller for a full refund after an exponentially distributed delay. We assume that a returned item is in mint condition and the seller can resell it to future customers. The objective of the seller is to dynamically adjust the price in order to maximize the expected total revenue when the sale horizon ends. We formulate the dynamic pricing problem as a dynamic programming model and derive the structural properties of the optimal policy and the optimal value function. For cases in which the customer's reservation price is exponentially distributed, we derive the optimal policy in closed form. For general reservation price distributions, we consider an approximation of the original model by discretizing both time and the allowable price set. We then present an algorithm for numerically computing the optimal policy in this discrete time model. Numerical examples show that if the discrete price set is carefully chosen, the expected total revenue is nearly the same as when the allowable price set is continuous.
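The discretization strategy the abstract describes, backward induction over a time grid and a finite price set, can be sketched for the simplest version of the model: a single item, Poisson arrivals, exponential reservation prices, and no return/resale feature (the thesis's distinguishing extension, which would add a state for sold-but-returnable items). All parameter values below are illustrative assumptions.

```python
import math

def dp_price_policy(T=1.0, steps=200, lam=10.0, alpha=1.0, prices=None):
    """Backward induction on a discretized single-item pricing model.
    In each short interval dt, a customer arrives w.p. lam*dt and buys
    at posted price p w.p. exp(-alpha*p) (exponential reservation
    price). V[k] = optimal expected revenue with k steps remaining;
    policy[k] = an optimal price from the finite allowable set."""
    if prices is None:
        prices = [0.1 * i for i in range(1, 101)]  # allowable price grid
    dt = T / steps
    V = [0.0] * (steps + 1)       # V[0]: horizon over, item worthless
    policy = [0.0] * (steps + 1)
    for k in range(1, steps + 1):
        best_v, best_p = V[k - 1], prices[0]
        for p in prices:
            sale = lam * dt * math.exp(-alpha * p)  # P(arrival and buy)
            v = sale * p + (1 - sale) * V[k - 1]
            if v > best_v:
                best_v, best_p = v, p
        V[k], policy[k] = best_v, best_p
    return V, policy

V, policy = dp_price_policy()
```

The run exhibits the structural properties the thesis proves for the full model: the value function increases with the remaining sale horizon, and the optimal posted price is higher when more time remains.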