Scholarly Works, Industrial and Systems Engineering
Research articles, presentations, and other scholarship
Browsing Scholarly Works, Industrial and Systems Engineering by Issue Date
Now showing 1 - 20 of 329
- Determination of Unit Costs for Library Services
  Nachlas, Joel A.; Pierce, Anthony R. (ACRL Publications, 1979-05-01)
  As with other public service activities, inflationary trends and public opinion provide a clear mandate for attempts to control the increasing costs of providing library services. Cost control necessarily requires knowledge of the quantities and sources of costs. A methodology, known as microcosting, for identifying the unit costs of providing specific services is presented here. The method is designed to enable library managers to identify at a detailed level the resources consumed in providing a particular service. This information provides a quantitative basis for and a monitor of library management decisions. To illustrate the use of the methodology, it is applied to the determination of the unit costs of tracking overdue materials at a major university library.
- The effect of burst duration, interstimulus onset interval, and loudspeaker arrangement on auditory apparent motion in the free field
  Strybel, Thomas Z.; Neale, Wayne (Acoustical Society of America, 1994-12-01)
  The illusion of auditory apparent motion (AAM) was examined in order to determine the burst durations and interstimulus onset intervals (ISOIs) at which AAM is heard when spatial information regarding source location was varied. In the first experiment AAM was examined in the free field under monaural and binaural listening conditions. AAM was heard at the same burst duration-ISOI combinations for both listening conditions, but the location of the lead source could be determined only under binaural listening. In the second experiment AAM was measured with two and three sound sources. The number of sources did not affect the burst duration-ISOI combinations that produced AAM, but did affect the determination of the location of the lead source. In the third experiment AAM was tested when the sources were located in the median plane. The sources were located either at 0 degrees and 180 degrees azimuth, or both at 0 degrees azimuth, one in the horizontal plane and one 20 degrees above. The location of the speakers did not affect the timing requirements for the perception of AAM, only the timing requirements for the detection of the lead source. In the fourth experiment, AAM was measured when the vertical separation between the sources was either 2.5 degrees or 20 degrees. AAM was heard at both separations, even though 2.5 degrees is less than the vertical minimum audible angle (MAA). In each of these experiments only burst duration and ISOI determined whether motion was heard. Localization cues were important only for the determination of the direction of motion. Copyright 1994 Acoustical Society of America.
- Intractability results in discrete-event simulation
  Jacobson, Sheldon H.; Yücesan, E. (EDP Sciences, 1995)
  Simulation is often viewed as a modeling methodology of last resort. This is due to the lack of automated algorithms and procedures to aid in the construction and analysis of simulation models. Jacobson and Yücesan (1994) present four structural issue search problems associated with simulation model building, and prove them to be NP-hard, hence intractable under the worst-case analysis of computational complexity theory. In this article, three new structural issue search problems are presented and proven to be NP-hard. The consequences and implications of these results are discussed.
- Development of a new standard laboratory protocol for estimating the field attenuation of hearing protection devices. I. Research of Working Group 11, Accredited Standards Committee S12, Noise
  Royster, Julia D.; Berger, Elliott H.; Merry, Carol J.; Nixon, Charles W.; Franks, John R.; Behar, Alberto; Casali, John G.; Dixon-Ernst, Christine; Kieper, Ronald W.; Mozo, Ben T.; Ohlin, Doug; Royster, Larry H. (Acoustical Society of America, 1996-03-01)
  This paper describes research conducted by Working Group 11 of Accredited Standards Committee S12, Noise, to develop procedures to estimate the field performance of hearing protection devices (HPDs). Current standardized test methods overestimate the attenuation achieved by workers in everyday use on the job. The goal was to approximate the amount of attenuation that can be achieved by noise-exposed populations in well-managed real-world hearing conservation programs, while maintaining acceptable interlaboratory measurement variability. S12/WG11 designed two new laboratory-based protocols for measuring real-ear attenuation at threshold, with explicit procedures for subject selection, training, supervision, and HPD fitting. After pilot-testing, S12/WG11 conducted a full-scale study of three types of earplugs and one earmuff tested by four independent laboratories using both protocols. The protocol designated as "subject-fit" assessed the attenuation achieved by subjects who were experienced in threshold audiometry, but naive with respect to the use of hearing protection, when they fit HPDs by following manufacturers' instructions without any experimenter assistance. The attenuation results from the subject-fit method corresponded more closely to real-world data than results from the other protocol tested, which allowed the experimenter to coach subjects in HPD use. Comparisons of interlaboratory measurement variability for the subject-fit procedure to previous interlaboratory studies using other protocols indicated that the measurements with the new procedure are at least as reproducible as those obtained with existing standardized methods. Therefore, the subject-fit protocol was selected for consideration for use in future revisions of HPD attenuation test standards. Copyright 1996 Acoustical Society of America.
- Development of a new standard laboratory protocol for estimating the field attenuation of hearing protection devices. Part III. The validity of using subject-fit data
  Berger, Elliott H.; Franks, John R.; Behar, Alberto; Casali, John G.; Dixon-Ernst, Christine; Kieper, Ronald W.; Merry, Carol J.; Mozo, Ben T.; Nixon, Charles W.; Ohlin, Doug; Royster, Julia D.; Royster, Larry H. (Acoustical Society of America, 1998-02-01)
  The mandate of ASA Working Group S12/WG11 has been to develop "laboratory and/or field procedure(s) that yield useful estimates of field performance" of hearing protection devices (HPDs). A real-ear attenuation at threshold procedure was selected, devised, tested via an interlaboratory study, and incorporated into a draft standard that was approved in 1997 [J. D. Royster et al., "Development of a new standard laboratory protocol for estimating the field attenuation of hearing protection devices. Part I. Research of Working Group 11, Accredited Standards Committee S12, Noise," J. Acoust. Soc. Am. 99, 1506-1526 (1996); ANSI S12.6-1997, "American National Standard Methods for Measuring Real-Ear Attenuation of Hearing Protectors" (American National Standards Institute, New York, 1997)]. The real-world estimation procedure utilizes a subject-fit methodology with listeners who are audiometrically proficient, but inexperienced in the use of HPDs. A key factor in the decision to utilize the subject-fit method was an evaluation of the representativeness of the laboratory data vis-a-vis attenuation values achieved by workers in practice. Twenty-two field studies were reviewed to develop a database for comparison purposes. Results indicated that laboratory subject-fit attenuation values were typically equivalent to or greater than the field attenuation values, and yielded a better estimate of those values than did experimenter-fit or experimenter-supervised fit types of results. Recent data which are discussed in the paper, but which were not available at the time of the original analyses, confirm the findings. (C) 1998 Acoustical Society of America. [S0001-4966(98)03001-X].
- Information theory and the finite-time behavior of the simulated annealing algorithm: Experimental results
  Fleischer, M.; Jacobson, Sheldon H. (INFORMS, 1999)
  This article presents an empirical approach that demonstrates a theoretical connection between (information theoretic) entropy measures and the finite-time performance of the simulated annealing algorithm. The methodology developed leads to several computational approaches for creating problem instances useful in testing and demonstrating the entropy/performance connection: use of generic configuration spaces, polynomial transformations between NP-hard problems, and modification of penalty parameters. In particular, the computational results show that higher entropy measures are associated with superior finite-time performance of the simulated annealing algorithm.
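The abstract above relates entropy measures to the finite-time behavior of simulated annealing. For context, here is a minimal simulated annealing sketch; the quadratic pseudo-Boolean objective, bit-flip neighborhood, and geometric cooling schedule are illustrative assumptions, not the problem instances studied in the paper.

```python
"""Illustrative simulated annealing sketch (not the paper's test instances)."""
import math
import random

random.seed(0)
n = 20
# Assumed random quadratic pseudo-Boolean objective x' Q x over binary vectors.
Q = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]

def cost(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def simulated_annealing(t0=2.0, alpha=0.999, iters=5000):
    x = [random.randint(0, 1) for _ in range(n)]
    fx, t = cost(x), t0
    best_f = fx
    for _ in range(iters):
        i = random.randrange(n)                  # propose a single bit flip
        x[i] ^= 1
        fy = cost(x)
        # Metropolis rule: accept improvements, otherwise accept with
        # probability exp(-(increase in cost) / temperature).
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            fx = fy
            best_f = min(best_f, fy)
        else:
            x[i] ^= 1                            # reject: undo the flip
        t *= alpha                               # geometric cooling
    return best_f

print("best objective found:", round(simulated_annealing(), 3))
```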
- Effects of stereopsis and head tracking on performance using desktop virtual environment displays
  Barfield, Woodrow S.; Hendrix, Claudia; Bystrom, Karl-Erik (MIT Press, 1999-04-01)
  This study investigated performance in a desktop virtual environment as a function of stereopsis and head tracking. Ten subjects traced a computer-generated wire using a virtual stylus that was slaved to the position of a real-world stylus tracked with a 6-DOF position sensor. The objective of the task was to keep the virtual stylus centered on the wire. Measures collected as the subjects performed the task were performance time and the number of times the stylus overstepped the virtual wire. The time to complete the wire-tracing task was significantly reduced by the addition of stereopsis, but was not affected by the presence of head tracking. The number of times the virtual stylus overstepped the wire was significantly reduced when head-tracking cues were available, but was not affected by the presence of stereoscopic cues. Implications of the results for performance using desktop virtual environments are discussed.
- A conceptual model of the sense of presence in virtual environments
  Bystrom, Karl-Erik; Barfield, Woodrow S.; Hendrix, Claudia (MIT Press, 1999-04-01)
  This paper proposes a model of interaction in virtual environments which we term the immersion, presence, performance (IPP) model. This model is based on previous models of immersion and presence proposed by Barfield and colleagues and Slater and colleagues. The IPP model describes the authors' current conceptualization of the effects of display technology, task demands, and attentional resource allocation on immersion, presence, and performance in virtual environments. The IPP model may be useful for developing a theoretical framework for research on presence and for interpreting the results of empirical studies on the sense of presence in virtual environments. The model may also be of interest to designers of virtual environments.
- Collaborative task performance for learning using a virtual environment
  Bystrom, Karl-Erik; Barfield, Woodrow S. (MIT Press, 1999-08-01)
  This paper describes a study on the sense of presence and task performance in a virtual environment as affected by copresence (one subject working alone versus two subjects working as partners), level of control (control of movement and control of navigation through the virtual environment), and head tracking. Twenty subjects navigated through six versions of a virtual environment and were asked to identify changes in locations of objects within the environment. After each trial, subjects completed a questionnaire designed to assess their level of presence within the virtual environment. Results indicated that collaboration did not increase the sense of presence in the virtual environment, but did improve the quality of the experience in the virtual environment. Level of control did not affect the sense of presence, but subjects did prefer to control both movement and navigation. Head tracking did not affect the sense of presence, but did contribute to the spatial realism of the virtual environment. Task performance was affected by the presence of another individual, by head tracking, and by level of control, with subjects performing significantly more poorly when they were both alone and without control and head tracking. In addition, a factor analysis indicated that questions designed to assess the subjects' experience in the virtual environment could be grouped into three factors: (1) presence in the virtual environment, (2) quality of the virtual environment, and (3) task difficulty.
- Enhanced model representations for an intra-ring synchronous optical network design problem allowing demand splitting
  Sherali, Hanif D.; Smith, J. Cole; Lee, Youngho (INFORMS, 2000)
  In this paper, we consider a network design problem arising in the context of deploying synchronous optical networks (SONET) using a unidirectional path switched ring architecture, a standard of transmission using optical fiber technology. Given several rings of this type, the problem is to find an assignment of nodes to possibly multiple rings, and to determine what portion of demand traffic between node pairs spanned by each ring should be allocated to that ring. The constraints require that the demand traffic between each node pair should be satisfiable given the ring capacities, and that no more than a specified maximum number of nodes should be assigned to each ring. The objective function is to minimize the total number of node-to-ring assignments, and hence, the capital investment in add-drop multiplexer equipment. We formulate the problem as a mixed-integer programming model, and propose several alternative modeling techniques designed to improve the mathematical representation of this problem. We then develop various classes of valid inequalities for the problem along with suitable separation procedures for tightening the representation of the model, and accordingly, prescribe an algorithmic approach that coordinates tailored routines with a commercial solver (CPLEX). We also propose a heuristic procedure which enhances the solvability of the problem and provides bounds within 5-13% of the optimal solution. Promising computational results are presented that exhibit the viability of the overall approach and that lend insights into various modeling and algorithmic constructs.
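To make the modeling setting concrete, below is a toy mixed-integer program in the spirit of the node-to-ring assignment problem with demand splitting. The data, the single aggregate capacity constraint per ring, and the use of the PuLP package are assumptions for illustration; the paper's enhanced representations, valid inequalities, and heuristics are not reproduced.

```python
"""Toy intra-ring SONET assignment model with demand splitting (sketch only).

Requires PuLP (pip install pulp), which bundles the CBC solver.
"""
import pulp

nodes = list(range(5))
rings = [0, 1]
ring_capacity = 20            # assumed uniform capacity per ring
max_nodes_per_ring = 4        # assumed limit on nodes assigned to a ring
demand = {(0, 1): 6, (0, 2): 4, (1, 3): 5, (2, 4): 7, (3, 4): 3}   # assumed

prob = pulp.LpProblem("sonet_ring_assignment", pulp.LpMinimize)

# x[i][r] = 1 if node i is assigned to ring r.
x = pulp.LpVariable.dicts("x", (nodes, rings), cat="Binary")
# f[i, j, r] = portion of demand (i, j) carried on ring r (demand splitting).
f = {(i, j, r): pulp.LpVariable("f_%d_%d_%d" % (i, j, r), lowBound=0)
     for (i, j) in demand for r in rings}

# Objective: total node-to-ring assignments (a proxy for ADM equipment cost).
prob += pulp.lpSum(x[i][r] for i in nodes for r in rings)

for (i, j), d in demand.items():
    prob += pulp.lpSum(f[i, j, r] for r in rings) == d       # satisfy demand
    for r in rings:
        prob += f[i, j, r] <= d * x[i][r]                     # flow allowed only if both
        prob += f[i, j, r] <= d * x[j][r]                     # endpoints are on ring r

for r in rings:
    prob += pulp.lpSum(f[i, j, r] for (i, j) in demand) <= ring_capacity
    prob += pulp.lpSum(x[i][r] for i in nodes) <= max_nodes_per_ring

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("node-to-ring assignments used:", int(pulp.value(prob.objective)))
```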
- Applying variance reduction ideas in queuing simulations
  Ross, S. M.; Lin, K. Y. (Cambridge University Press, 2001)
  Variance reduction techniques are often underused in simulation studies. In this article, we indicate how certain ones can be efficiently employed when analyzing queuing models. The first technique considered is that of dynamic stratified sampling; the second is the utilization of multiple control variates; the third concerns the replacement of random variables by their conditional expectations when trying to estimate the expected value of a sum of random variables.
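As a small illustration of the control-variate idea mentioned in the abstract, the sketch below estimates the mean waiting time of the first few customers in an M/M/1 queue via the Lindley recursion and uses the average service time (whose mean is known exactly) as a control; the queue, parameter values, and choice of control are assumptions, and the paper itself also treats stratified sampling and conditioning.

```python
"""Control-variate sketch for a queueing simulation (illustrative assumptions)."""
import random
import statistics

random.seed(1)
lam, mu, m, reps = 0.9, 1.0, 50, 2000   # assumed arrival/service rates, horizon, replications

def one_replication():
    w, wait_sum, serv_sum = 0.0, 0.0, 0.0
    for _ in range(m):
        s = random.expovariate(mu)            # service time
        a = random.expovariate(lam)           # next interarrival time
        wait_sum += w
        serv_sum += s
        w = max(0.0, w + s - a)               # Lindley recursion for the next wait
    return wait_sum / m, serv_sum / m

raw, ctrl = [], []
for _ in range(reps):
    y, c = one_replication()
    raw.append(y)
    ctrl.append(c)

# Control variate: average service time per replication, known mean 1/mu.
mean_y, mean_c = statistics.mean(raw), statistics.mean(ctrl)
cov = sum((y - mean_y) * (c - mean_c) for y, c in zip(raw, ctrl)) / (reps - 1)
beta = cov / statistics.variance(ctrl)
adjusted = [y - beta * (c - 1.0 / mu) for y, c in zip(raw, ctrl)]

print("crude estimator      mean %.4f  variance %.5f"
      % (mean_y, statistics.variance(raw)))
print("control-variate est. mean %.4f  variance %.5f"
      % (statistics.mean(adjusted), statistics.variance(adjusted)))
```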
- The Ph-t/Ph-t/infinity queueing system: Part I - the single node
  Nelson, B. L.; Taaffe, Michael R. (INFORMS, 2004)
  We develop a numerically exact method for evaluating the time-dependent mean, variance, and higher-order moments of the number of entities in a Ph-t/Ph-t/infinity queueing system. We also develop a numerically exact method for evaluating the distribution function and moments of the virtual sojourn time for any time t; in our setting, the virtual sojourn time is equivalent to the service time for virtual entities arriving to the system at that time t. We include several examples using software that we have developed and have put in downloadable form in the Online Supplement to this paper on the journal's website.
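For intuition about time-dependent moment computations of this kind, the sketch below integrates the known mean ODE for the much simpler M_t/M/infinity special case, m'(t) = lambda(t) - mu*m(t). The sinusoidal arrival-rate function and Euler step size are assumptions for illustration; the paper's numerically exact method for Ph-t/Ph-t/infinity systems is considerably more general.

```python
"""Time-dependent mean of an M_t/M/infinity queue via forward Euler (simplified special case)."""
import math

mu = 1.0                           # exponential service rate (assumed)

def lam(t):
    return 2.0 + math.sin(t)       # assumed time-varying Poisson arrival rate

def mean_in_system(t_end=20.0, dt=0.001, m0=0.0):
    m, t = m0, 0.0
    while t < t_end:
        m += dt * (lam(t) - mu * m)    # forward Euler step on m'(t) = lambda(t) - mu*m(t)
        t += dt
    return m

print("approximate mean number in system at t = 20:", round(mean_in_system(), 3))
```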
- The [Ph-t/Ph-t/infinity](K) queueing system: Part II - the multiclass network
  Nelson, B. L.; Taaffe, Michael R. (INFORMS, 2004)
  We demonstrate a numerically exact method for evaluating the time-dependent mean, variance, and higher-order moments of the number of entities in the multiclass [Ph-t/Ph-t/infinity](K) queueing network system, as well as at the individual network nodes. We allow for multiple, independent, time-dependent entity classes and develop time-dependent performance measures by entity class at the nodal and network levels. We also demonstrate a numerically exact method for evaluating the distribution function and moments of virtual sojourn time through the network for virtual entities, by entity class, arriving to the system at time t. We include an example using software that we have developed and have put in downloadable form in the Online Supplement to this paper on the journal's website.
- Enhancing Lagrangian dual optimization for linear programs by obviating nondifferentiability
  Sherali, Hanif D.; Lim, C. (INFORMS, 2007)
  We consider nondifferentiable optimization problems that arise when solving Lagrangian duals of large-scale linear programs. Different from traditional subgradient-based approaches, we design two new methods that attempt to circumvent or obviate the nondifferentiability of the objective function, so that standard differentiable optimization techniques could be used. These methods, called the perturbation technique and the barrier-Lagrangian reformulation, are implemented as initialization procedures to provide a warm start to a theoretically convergent nondifferentiable optimization algorithm. Our computational study reveals that this two-phase strategy produces much better solutions with less computation in comparison with both the stand-alone nondifferentiable optimization procedure employed and the popular Held-Wolfe-Crowder subgradient heuristic. Furthermore, the best version of this composite algorithm is shown to consume only about 3.19% of the CPU time required by the commercial linear programming solver CPLEX 8.1 (using the dual simplex option) to produce the same quality solutions. We also demonstrate that this initialization technique greatly facilitates quick convergence in the primal space when used as a warm start for ergodic-type primal recovery schemes.
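For context on the nondifferentiable dual functions discussed above, the sketch below applies a traditional projected subgradient scheme (the baseline the paper seeks to improve upon, not its perturbation or barrier-Lagrangian methods) to the Lagrangian dual of a tiny LP. The LP data and the diminishing step-size rule are assumptions for illustration.

```python
"""Projected subgradient ascent on the Lagrangian dual of a tiny LP (baseline sketch).

Dualizing Ax >= b while keeping the box 0 <= x <= u gives
    theta(lambda) = min_{0 <= x <= u}  c'x + lambda'(b - Ax),
which can be evaluated coordinate-wise in closed form.
"""
c = [2.0, 3.0, 4.0]                 # assumed LP data
u = [10.0, 10.0, 10.0]
A = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0]]
b = [5.0, 8.0]
m, n = len(b), len(c)

def inner_minimizer(lam):
    # x_j goes to its upper bound when its reduced cost c_j - (A'lam)_j is negative.
    red = [c[j] - sum(lam[i] * A[i][j] for i in range(m)) for j in range(n)]
    return [u[j] if red[j] < 0 else 0.0 for j in range(n)]

def theta(lam, x):
    return sum(c[j] * x[j] for j in range(n)) + sum(
        lam[i] * (b[i] - sum(A[i][j] * x[j] for j in range(n))) for i in range(m))

lam = [0.0] * m
best = float("-inf")
for k in range(1, 2001):
    x = inner_minimizer(lam)
    best = max(best, theta(lam, x))
    g = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]   # subgradient
    lam = [max(0.0, lam[i] + (1.0 / k) * g[i]) for i in range(m)]          # project onto lam >= 0

# For this data the LP optimum is 24; theta(lam) is always a lower bound on it.
print("best Lagrangian dual bound found:", round(best, 3))
```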
- Solutions and optimality criteria to box constrained nonconvex minimization problems
  Gao, D. Y. (American Institute of Mathematical Sciences, 2007-05-01)
  The design of elastic structures to optimize strength and economy of materials is a fundamental problem in structural engineering and related areas of applied mathematics. In this article we explore a finite dimensional framework for approximate solution of such design problems based on linear elasticity with a range of elastic coefficients assumed available as design parameters. Solution methods for related optimization problems based on the matrix trace norm are suggested and analyzed, providing existence and uniqueness theorems. Results of computations for sample problems are presented and compared with parallel results in the literature based on other approaches.
- An Effective Deflected Subgradient Optimization Scheme for Implementing Column Generation for Large-Scale Airline Crew Scheduling Problems
  Subramanian, Shivaram; Sherali, Hanif D. (INFORMS, 2008)
  We present a new deflected subgradient scheme for generating good-quality dual solutions for linear programming (LP) problems and utilize this within the context of large-scale airline crew planning problems that arise in practice. The motivation for the development of this method came from the failure of a black-box-type approach implemented at United Airlines for solving such problems using column generation in concert with a commercial LP solver, where the software was observed to stall while yet remote from optimality. We identify a phenomenon called dual noise to explain this stalling behavior and present an analysis of the desirable properties of dual solutions in this context. The proposed deflected subgradient approach has been embedded within the crew pairing solver at United Airlines and tested using historical data sets. Our computational experience suggests a strong correlation between the dual noise phenomenon and the quality of the final solution produced, as well as with the accompanying algorithmic performance. Although we observed that our deflected subgradient scheme yielded an average speed-up factor of 10 for the column generation scheme over the commercial solver, the average reduction in the optimality gap over the same number of iterations was better by a factor of 26, along with an average reduction in the dual noise by a factor of 30. The results from the column generation implementation suggest that significant benefits can be obtained by using the deflected subgradient-based scheme instead of a black-box-type or standard solver approach to solve the intermediate linear programs that arise within the column generation scheme.
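The deflection idea itself can be shown compactly: instead of stepping along the raw subgradient g_k, step along d_k = g_k + beta*d_{k-1} to damp zig-zagging. The sketch below does this for a small piecewise-linear concave dual function; the data, fixed deflection parameter, and step rule are assumptions and do not reproduce the paper's scheme or its column-generation setting.

```python
"""Deflected subgradient sketch on a small piecewise-linear concave dual function.

theta(lam) = min_i (a_i . lam + b_i) is maximized over lam >= 0; the active
piece supplies a subgradient, and successive directions are deflected.
"""
pieces = [([-1.0, 2.0], 4.0), ([3.0, -1.0], 1.0), ([-0.5, -0.5], 3.0)]   # assumed (a_i, b_i)

def theta_and_subgradient(lam):
    vals = [(sum(a * l for a, l in zip(ai, lam)) + bi, ai) for ai, bi in pieces]
    return min(vals, key=lambda t: t[0])      # active piece gives value and a subgradient

lam, d = [0.0, 0.0], [0.0, 0.0]
beta, best = 0.6, float("-inf")               # assumed fixed deflection parameter
for k in range(1, 501):
    val, g = theta_and_subgradient(lam)
    best = max(best, val)
    d = [gi + beta * di for gi, di in zip(g, d)]          # deflected direction
    lam = [max(0.0, li + (1.0 / k) * di) for li, di in zip(lam, d)]   # project onto lam >= 0

print("best dual value found:", round(best, 3))
```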
- An Optimal Constrained Pruning Strategy for Decision Trees
  Sherali, Hanif D.; Hobeika, Antoine G.; Jeenanunta, Chawalit (INFORMS, 2009)
  This paper is concerned with the optimal constrained pruning of decision trees. We present a novel 0-1 programming model for pruning the tree to minimize some general penalty function based on the resulting leaf nodes, and show that this model possesses a totally unimodular structure that enables it to be solved as a shortest-path problem on an acyclic graph. Moreover, we prove that this problem can be solved in strongly polynomial time while incorporating an additional constraint on the number of residual leaf nodes. Furthermore, the framework of the proposed modeling approach renders it suitable to accommodate different (multiple) objective functions and side-constraints, and we identify various such modeling options that can be applied in practice. The developed methodology is illustrated using a numerical example to provide insights, and some computational results are presented to demonstrate the efficacy of solving generically constrained problems of this type. We also apply this technique to a large-scale transportation analysis and simulation system (TRANSIMS), and present related computational results using real data to exhibit the flexibility and effectiveness of the proposed approach.
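A simple way to see the constrained-pruning problem is as a bottom-up dynamic program over the tree: each node either becomes a leaf or keeps its children, subject to a cap on the number of residual leaves. The sketch below implements that DP for a toy binary tree with an assumed per-node penalty; the paper instead formulates and solves a 0-1 model with a shortest-path (totally unimodular) structure.

```python
"""Bottom-up DP sketch for constrained pruning of a binary decision tree (toy example)."""
class Node:
    def __init__(self, penalty, left=None, right=None):
        self.penalty = penalty        # penalty incurred if this node becomes a leaf (assumed)
        self.left, self.right = left, right

def prune(node, max_leaves):
    """Return {leaf_count: minimum total penalty} for the subtree rooted at node."""
    table = {1: node.penalty}                         # option 1: prune here
    if node.left and node.right:                      # option 2: keep both children
        lt = prune(node.left, max_leaves)
        rt = prune(node.right, max_leaves)
        for a, pa in lt.items():
            for b, pb in rt.items():
                if a + b <= max_leaves and pa + pb < table.get(a + b, float("inf")):
                    table[a + b] = pa + pb
    return table

# Assumed toy tree: internal nodes carry a high penalty if turned into leaves.
tree = Node(10.0,
            Node(6.0, Node(1.0), Node(2.0)),
            Node(7.0, Node(2.5), Node(0.5)))

best = prune(tree, max_leaves=3)
print("minimum penalty with at most 3 residual leaves:", min(best.values()))
```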
- The Nested Event Tree Model with Application to Combating Terrorism
  Lunday, B. J.; Sherali, Hanif D.; Glickman, T. S. (INFORMS, 2010)
  In this paper, we model and solve the strategic problem of minimizing the expected loss inflicted by a hostile terrorist organization. An appropriate allocation of certain capability-related, intent-related, vulnerability-related, and consequence-related resources is used to reduce the probabilities of success in the respective attack-related actions and to ameliorate losses in case of a successful attack. We adopt a nested event tree optimization framework and formulate the problem as a specially structured nonconvex factorable program. We develop two branch-and-bound schemes based, respectively, on utilizing a convex nonlinear relaxation and a linear outer approximation, both of which are proven to converge to a global optimal solution. We also design an alternative direct mixed-integer programming model representation for this case, and we investigate a fundamental special-case variant for this scheme that provides a relaxation and affords an optimality gap measure. Several range reduction, partitioning, and branching strategies are proposed, and extensive computational results are presented to study the efficacy of different compositions of these algorithmic ingredients, including comparisons with the commercial software BARON. A sensitivity analysis is also conducted to explore the effect of certain key model parameters.
- Situativity Approaches for Improving Interdisciplinary Team Processes
  Kim, Kahyun; McNair, Lisa D.; Coupey, Eloise; Martin, Tom; Dorsa, Edward A.; Kemnitzer, Ron (ASEE, 2010)
  Interdisciplinary teaming requires not only multiple levels of expertise but also social competencies gained through interactive contexts. In the classroom, a situativity approach that encourages student engagement can help students learn to value differing perspectives. To foster students' interdisciplinary collaborative skills, an interdisciplinary capstone design class that brings together students and faculty from electrical and computer engineering, industrial design, and marketing was developed, and twelve fourth-year students participated (four from each discipline). The students were tasked with designing a next-generation firefighter helmet that incorporates innovative computing technology. Various interventions such as learning modules and teaming exercises were implemented throughout the class to help students learn how to communicate across disciplines. Direct observation, interviews, questionnaires, and assessment of course assignments indicated both benefits and limitations of the class. Implications and future directions are also discussed.
- An Algorithm for Fast Generation of Bivariate Poisson Random Vectors
  Shin, K.; Pasupathy, R. (INFORMS, 2010)
  We present the "trivariate reduction extension" (TREx), an exact algorithm for the fast generation of bivariate Poisson random vectors. Like the normal-to-anything (NORTA) procedure, TREx has two phases: a preprocessing phase when the required algorithm parameters are identified, and a generation phase when the parameters identified during the preprocessing phase are used to generate the desired Poisson vector. We prove that the proposed algorithm covers the entire range of theoretically feasible correlations, and we provide efficient-computation directives and rigorous bounds for truncation error control. We demonstrate through extensive numerical tests that TREx, being a specialized algorithm for Poisson vectors, has a preprocessing phase that is uniformly a hundred to a thousand times faster than a fast implementation of NORTA. The generation phases of TREx and NORTA are comparable in speed, with that of TREx being marginally faster. All code is publicly available.
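As background for the abstract, the classical trivariate-reduction construction that TREx extends can be sketched directly: X1 = Y0 + Y1 and X2 = Y0 + Y2 with independent Poisson components share Y0 and are therefore correlated. The parameter values below are assumptions, and this basic construction covers only nonnegative correlations, unlike TREx, which the paper shows covers the entire feasible range.

```python
"""Classical trivariate-reduction construction for bivariate Poisson vectors (sketch).

With independent Y0 ~ Poisson(lam0), Y1 ~ Poisson(lam1 - lam0), Y2 ~ Poisson(lam2 - lam0),
the pair (Y0 + Y1, Y0 + Y2) has Poisson(lam1) and Poisson(lam2) marginals and
correlation lam0 / sqrt(lam1 * lam2).
"""
import math
import random

def poisson(mean, rng):
    # Poisson generation by sequential search through the CDF (adequate for small means).
    u, p, k = rng.random(), math.exp(-mean), 0
    cdf = p
    while u > cdf:
        k += 1
        p *= mean / k
        cdf += p
    return k

def bivariate_poisson(lam1, lam2, rho, rng):
    """Return (X1, X2) with Poisson(lam1), Poisson(lam2) marginals and correlation rho >= 0."""
    lam0 = rho * math.sqrt(lam1 * lam2)
    assert 0.0 <= lam0 <= min(lam1, lam2), "correlation not attainable by this construction"
    y0 = poisson(lam0, rng)
    return y0 + poisson(lam1 - lam0, rng), y0 + poisson(lam2 - lam0, rng)

rng = random.Random(7)
sample = [bivariate_poisson(3.0, 5.0, 0.4, rng) for _ in range(20000)]   # assumed parameters
m1 = sum(x for x, _ in sample) / len(sample)
m2 = sum(y for _, y in sample) / len(sample)
v1 = sum((x - m1) ** 2 for x, _ in sample) / (len(sample) - 1)
v2 = sum((y - m2) ** 2 for _, y in sample) / (len(sample) - 1)
cov = sum((x - m1) * (y - m2) for x, y in sample) / (len(sample) - 1)
print("sample means %.2f %.2f   sample correlation %.3f"
      % (m1, m2, cov / math.sqrt(v1 * v2)))
```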