Browsing by Author "Myers, Raymond H."
- Bayesian Two Stage Design Under Model Uncertainty. Neff, Angela R. (Virginia Tech, 1997-01-16)
  Traditional single stage design optimality procedures can be used to efficiently generate data for an assumed model y = f(x(m), b) + ε. The model assumptions include the form of f, the set of regressors x(m), and the distribution of ε. The nature of the response, y, often provides information about the model form (f) and the error distribution. It is more difficult to know, a priori, the specific set of regressors which will best explain the relationship between the response and a set of design (control) variables x. Misspecification of x(m) will result in a design which is efficient, but for the wrong model. A Bayesian two stage design approach makes it possible to efficiently design experiments when initial knowledge of x(m) is poor. This is accomplished by using a Bayesian optimality criterion in the first stage which is robust to model uncertainty. Bayesian analysis of first stage data reduces uncertainty associated with x(m), enabling the remaining design points (the second stage design) to be chosen with greater efficiency. The second stage design is then generated from an optimality procedure which incorporates the improved model knowledge. Using this approach, numerous two stage design procedures have been developed for the normal linear model. Extending this concept, a Bayesian design augmentation procedure has been developed for the purpose of efficiently obtaining data for variance modeling, when initial knowledge of the variance model is poor.
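As a rough illustration of a first-stage criterion that is robust to uncertainty in the regressor set, the sketch below averages a D-type log-determinant criterion over candidate regressor subsets weighted by prior model probabilities. The candidate design, candidate models, and weights are invented for the example and are not the specific criterion developed in the dissertation.

```python
import numpy as np

def weighted_d_criterion(design, candidate_models, weights):
    """Average log-determinant of the information matrix over candidate
    regressor subsets (column indices of the full model matrix)."""
    score = 0.0
    for cols, w in zip(candidate_models, weights):
        X = design[:, cols]                      # model matrix for this candidate
        sign, logdet = np.linalg.slogdet(X.T @ X)
        score += w * (logdet if sign > 0 else -np.inf)
    return score

# Illustrative first-stage design in two factors, coded to [-1, 1]
x1, x2 = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
F = np.column_stack([np.ones(9), x1.ravel(), x2.ravel(), (x1 * x2).ravel()])

# Candidate regressor sets: main effects only, and main effects plus interaction
candidates = [[0, 1, 2], [0, 1, 2, 3]]
weights = [0.5, 0.5]                             # assumed prior model weights
print(weighted_d_criterion(F, candidates, weights))
```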
- Connectedness and optimality in multidimensional designs. Chaffin, Wilkie Willis (Virginia Tech, 1975-05-14)
  Sennetti (1972) showed the existence of minimal multidimensional designs (MMD's) and minimal augmented multidimensional designs (MAMD's) which allow estimation of type I and type II contrasts. For an MMD, only one more design point is required than there are degrees of freedom for the parameter vector. For MAMD's, the number of assemblies added is equal to the difference between the number of degrees of freedom for the parameter vector and the rank of the design matrix. Using the chain concept of connectedness as defined by Bose (1947), this work suggests a practical procedure to obtain an MMD for estimating type I contrasts and proves the procedure valid. In addition, a procedure is discussed that may be used to obtain an MMD for estimating type II contrasts. After proof of the validity of the procedure, advantages of this procedure over some other possible procedures to obtain an MMD are given. It is shown that only a slight modification of the procedure is necessary to obtain an MAMD for estimating type II contrasts. If there is a restriction on the number of replicates of factor levels for an experiment, then a different approach is suggested. If m_ij denotes the number of replicates of level j of factor F_i, then it is desired to increase the number of estimators for type I contrasts without altering any of the m_ij. The interchange algorithm used by Eccleston and Hedayat (1974) to accomplish this for a proper, locally connected (l-connected) randomized block design is extended to two-factor, no-interaction designs. The design obtained is pseudo globally connected (pg-connected), thus guaranteeing more estimates for main effect contrasts. In addition, the new design will be better than the old with respect to the S-optimality criterion. It is shown that the procedure can also be used in an experiment with two or more factors to pg-connect an l-connected design for two factors. The new design obtained will be better than the old with respect to a new criterion, C-optimality. The algorithm described is proved to have no effect on the amount of aliasing (based on a norm suggested by Hedayat, Raktoe, and Federer, 1974) due to a possibly incorrect assumption of no interaction. The use of the interchange algorithm to pg-connect a design for level combinations is suggested because of the increased number of estimators for type II contrasts that may be obtained. A theorem is proved which gives the minimum number of estimates that will be available for estimating a type II contrast if a design is pg-connected for level combinations. The last topic discussed is the use of a criterion for choosing a particular MAMD for estimating type II contrasts. The sequentially S-optimal design is defined. It is shown that the sequentially S-optimal design is easy to obtain and is similar to the S-optimal design.
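For a two-factor, no-interaction design, the chain notion of connectedness can be checked as ordinary graph connectivity: levels of the two factors are vertices and each observed treatment combination is an edge. The sketch below uses a made-up design; it illustrates the connectedness condition, not the specific MMD construction of the dissertation.

```python
from collections import defaultdict, deque

def is_connected(design_points, levels_a, levels_b):
    """Check Bose-style connectedness of a two-factor design: treat levels
    of A and B as vertices, each observed (a, b) combination as an edge,
    and test connectivity of the resulting bipartite graph."""
    graph = defaultdict(set)
    for a, b in design_points:
        graph[("A", a)].add(("B", b))
        graph[("B", b)].add(("A", a))
    all_nodes = {("A", a) for a in levels_a} | {("B", b) for b in levels_b}
    start = next(iter(all_nodes))
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen == all_nodes

# Hypothetical 3x3 design with 5 assemblies: connected, so all
# main-effect (type I) contrasts are estimable.
points = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2)]
print(is_connected(points, levels_a=range(3), levels_b=range(3)))
```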
- Construction and Analysis of Linear Trend-Free Factorial Designs Under a General Cost Structure. Kim, Kiho (Virginia Tech, 1997-07-28)
  When experimental units exhibit a smooth trend over time or in space, random allocation of treatments may no longer be appropriate. Instead, systematic run orders may have to be used to reduce or eliminate the effects of such a trend. The resulting designs are referred to as trend-free designs. We consider here, in particular, linear trend-free designs for factorial treatment structures such that estimates of main effects and two-factor interactions are trend-free. In addition to trend-freeness we incorporate a general cost structure and propose methods of constructing optimal or near-optimal full or fractional factorial designs. Building upon the generalized foldover scheme (GFS) introduced by Coster and Cheng (1988), we develop a procedure for the selection of foldover vectors (SFV), which is a construction method for an appropriate generator matrix. The final optimal or near-optimal design can then be developed from this generator matrix. To reduce the amount of work, i.e., the large number of possible generator matrices to be examined, and to make the whole process easier for a practitioner to use, we introduce the systematic selection of foldover vectors (SSFV). This method does not always produce optimal designs, but it produces practical compromise designs in all cases. The cost structure for factorial designs can be modeled according to the number of level changes for the various factors. In general, if cost needs to be kept to a minimum, factor level changes will have to be kept at a minimum. This introduces a covariance structure for the observations from such an experiment. We consider the consequences of this covariance structure for the analysis of trend-free factorial designs. We formulate an appropriate underlying mixed linear model and, using simulation studies, propose an AIC-based method that leads to a useful practical linear model when the theoretical model is not feasible. Overall, we show that estimation of main effects and two-factor interactions, trend-freeness, and minimum cost cannot always be achieved simultaneously. As a consequence, compromise designs have to be considered which satisfy the requirements as far as possible while remaining practical. The proposed methods achieve this aim.
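Two quantities mentioned in this abstract are easy to state concretely: the cost of a run order measured by the number of factor-level changes, and linear trend-freeness of an effect, which requires the effect's contrast coefficients to be orthogonal to a (centered) linear time trend. The sketch below checks both for a made-up run order of a 2^3 factorial; it is illustrative only and not the GFS/SFV construction itself.

```python
import numpy as np

def level_changes(run_order):
    """Total number of factor-level changes between consecutive runs."""
    runs = np.asarray(run_order)
    return int((runs[1:] != runs[:-1]).sum())

def is_linear_trend_free(contrast):
    """A contrast is linear trend-free if it is orthogonal to the
    centered linear time trend over the run order."""
    c = np.asarray(contrast, dtype=float)
    t = np.arange(len(c)) - (len(c) - 1) / 2
    return bool(np.isclose(c @ t, 0.0))

# Made-up run order: a 2^3 factorial in standard order (factors coded +-1)
runs = np.array([[-1, -1, -1], [ 1, -1, -1], [-1,  1, -1], [ 1,  1, -1],
                 [-1, -1,  1], [ 1, -1,  1], [-1,  1,  1], [ 1,  1,  1]])
print("level changes:", level_changes(runs))
for j in range(3):
    print(f"main effect {j} linear trend-free:", is_linear_trend_free(runs[:, j]))
```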
- Construction and properties of Box-Behnken designs. Jo, Jinnam (Virginia Tech, 1992-10-05)
  Box-Behnken designs are used to estimate parameters in a second-order response surface model (Box and Behnken, 1960). These designs are formed by combining ideas from incomplete block designs (BIBD or PBIBD) and factorial experiments, specifically 2^k full or 2^(k-1) fractional factorials. In this dissertation, a more general mathematical formulation of the Box-Behnken method is provided, a general expression for the coefficient matrix in the least squares analysis for estimating the parameters in the second-order model is derived, and the properties of Box-Behnken designs with respect to the estimability of all parameters in a second-order model are investigated when 2^k full factorials are used. The results show that for all pure quadratic coefficients to be estimable, the PBIB(m) design has to be chosen such that its incidence matrix is of full rank, and for all mixed quadratic coefficients to be estimable the PBIB(m) design has to be chosen such that the parameters λ₁, λ₂, ..., λ_m are all greater than zero. In order to reduce the number of experimental points, the use of 2^(k-1) fractional factorials instead of 2^k full factorials is considered. Of particular interest and importance are separate considerations of fractions of resolution III, IV, and V. The construction of Box-Behnken designs using such fractions is described and the properties of the designs concerning estimability of regression coefficients are investigated. Designs obtained from resolution V fractions have the same properties as those using full factorials. Resolution III and IV fractions may lead to non-estimability of certain coefficients and to correlated estimators. The final topic is concerned with Box-Behnken designs in which treatments are applied to experimental units sequentially in time or space and in which there may exist a linear trend effect. For this situation, one wants to find run orders that yield a linear trend-free Box-Behnken design, so that the trend can be removed with a simple analysis of variance rather than the more complicated analysis of covariance. Construction methods for linear trend-free Box-Behnken designs are introduced for different values of the block size k of the underlying PBIB design. For k = 2 or 3, it may not always be possible to find linear trend-free Box-Behnken designs. However, for k ≥ 4 linear trend-free Box-Behnken designs can always be constructed.
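The classical construction referred to here pairs each block of an incomplete block design with a two-level factorial in the blocked factors, holding the remaining factors at their centers, and appends center runs. The sketch below builds the familiar three-factor case (each pair of factors as a block with a 2^2 factorial); the number of center runs is an arbitrary choice for the example.

```python
from itertools import combinations, product
import numpy as np

def box_behnken(k, n_center=3):
    """Box-Behnken design for k factors (coded -1, 0, +1): for each pair of
    factors run a 2^2 factorial with all other factors at 0, then append
    center runs."""
    rows = []
    for i, j in combinations(range(k), 2):       # blocks of the underlying (P)BIBD
        for a, b in product((-1, 1), repeat=2):  # 2^2 factorial in the paired factors
            run = [0] * k
            run[i], run[j] = a, b
            rows.append(run)
    rows.extend([[0] * k for _ in range(n_center)])
    return np.array(rows)

design = box_behnken(3)
print(design.shape)   # (15, 3): 12 edge-midpoint runs + 3 center runs
print(design)
```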
- Correlation between arrival and service patterns as a means of queue regulation. Hadidi, Nasser (Virginia Tech, 1968-03-05)
  A major cause of congestion in queuing situations, that is, of immoderate waits and lengthening queues, is often the assumed independence of the arrival and service mechanisms. This dissertation is concerned with single server "correlated" models, defined to be such that either the service mechanism is somehow tailored to the arrival pattern, or vice versa. The greatest attention is given to a particular model in which the service time allotted to the nth arrival is λT_n, where λ is a constant that does not depend on time and numerically equals the congestion index, and T_n is the interval between the (n-1)th and the nth arrivals which, it is important to note, can be observed by the server before service is initiated. It is shown that the effect of the correlation mechanism is to reduce congestion under a given level of traffic intensity, as compared with single server systems in which arrivals and service are independent. This result is achieved without inflicting on the service facility the penalty of increased periods of idleness. The particular model is a queuing interpretation of a stochastic-kinematic situation studied by B. W. Conolly in connection with a military tactical analysis. The dissertation is divided into two parts. Part I develops the theory of the main model with particular reference to state probabilities, waiting time, busy period, and output. Some consideration is also given to a related model where service depends on the arrival pattern, and to what is referred to as the "dual" problem in which the arrival mechanism is geared to service capability. Further, the state probabilities at arrival epochs for a conventional M/M/1 queue are obtained by employing a simple probabilistic argument; this is needed for Part II. Part II applies the theory to give a practical comparison of the correlation mechanism with the elementary "independent" single server queues M/M/1, M/D/1 and D/M/1, and it is shown in detail that the practical result referred to above is achieved. The superiority of the correlation mechanism increases with traffic intensity. State probability, busy period and output comparisons are made only with the M/M/1 system. The main conclusions are found to extend also to these processes. It is concluded that, where its application is practicable, a mechanism of correlation can achieve important gains in efficiency.
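The congestion-reducing effect of tying service to the arrival pattern can be seen with a short Lindley-recursion simulation: the correlated rule (service time of the nth customer equal to λ·T_n) is compared with an ordinary M/M/1 queue at the same traffic intensity. The arrival rate, λ, and sample size below are arbitrary choices for the illustration; this is a numerical sanity check, not the analytical treatment in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.8                       # congestion index / traffic intensity
n = 200_000
T = rng.exponential(1.0, n)     # interarrival times: Poisson arrivals at rate 1

def mean_wait(service, interarrival):
    """Mean waiting time via Lindley's recursion:
    W_n = max(0, W_{n-1} + S_{n-1} - T_n)."""
    w, total = 0.0, 0.0
    for s_prev, t in zip(service[:-1], interarrival[1:]):
        w = max(0.0, w + s_prev - t)
        total += w
    return total / (len(interarrival) - 1)

corr_service = lam * T                     # correlated model: S_n = lam * T_n
mm1_service = rng.exponential(lam, n)      # independent M/M/1 at the same intensity
print("correlated model mean wait:", mean_wait(corr_service, T))
print("M/M/1 mean wait           :", mean_wait(mm1_service, T))
# For M/M/1, theory gives E[W] = lam**2 / (1 - lam) = 3.2 here.
```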
- Design of an experiment to investigate the effects of electrode bearing area, weld-pressure, and current on the penetration and tensile-shear strength of resistance spot weldments in SAE CR 1010 sheet steel. Fitzgerald, William Roy (Virginia Tech, 1964-05-05)
  The results of this investigation are based on the statistical and visual analyses of the data collected during this experiment.
- Dual Model Robust Regression. Robinson, Timothy J. (Virginia Tech, 2004-07-30)
  In typical normal theory regression, the assumption of homogeneity of variances is often not appropriate. Instead of treating the variances as a nuisance and transforming away the heterogeneity, the structure of the variances may itself be of interest, and it is desirable to model the variances. Aitkin (1987) proposes a parametric dual model in which a log-linear dependence of the variances on a set of explanatory variables is assumed. Aitkin's parametric approach is an iterative one, providing estimates for the parameters in the mean and variance models through joint maximum likelihood. Estimation of the mean and variance parameters is interrelated, as the responses in the variance model are the squared residuals from the fit to the means model. When one or both of the models (the mean or variance model) are misspecified, parametric dual modeling can lead to faulty inferences. An alternative to parametric dual modeling is to let the data completely determine the form of the true underlying mean and variance functions (nonparametric dual modeling). However, nonparametric techniques often result in estimates which are characterized by high variability, and they ignore important knowledge that the user may have regarding the process. Mays and Birch (1996) have demonstrated an effective semiparametric method in the one-regressor, single-model regression setting which is a "hybrid" of parametric and nonparametric fits. Using their techniques, we develop a dual modeling approach which is robust to misspecification in either or both of the two models. Examples are presented to illustrate the new technique, termed here Dual Model Robust Regression.
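A bare-bones version of the parametric dual-modeling iteration described above: fit the mean model by weighted least squares, update a log-linear variance model from the squared residuals, and repeat. This simplified sketch regresses the log of the squared residuals rather than carrying out full joint maximum likelihood, and uses simulated data; it is an illustration of the idea, not the robust semiparametric method developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = rng.uniform(-1, 1, n)
X_mean = np.column_stack([np.ones(n), x])          # mean-model regressors
X_var = np.column_stack([np.ones(n), x])           # variance-model regressors
true_sd = np.exp(0.2 + 0.8 * x)                    # log-linear variance structure
y = 1.0 + 2.0 * x + rng.normal(0, true_sd)

beta = np.linalg.lstsq(X_mean, y, rcond=None)[0]   # start from OLS
for _ in range(20):
    resid2 = (y - X_mean @ beta) ** 2
    # Variance-model update: regress log squared residuals on X_var
    # (the intercept absorbs E[log chi^2_1]; the slope tracks the log-variance trend)
    gamma = np.linalg.lstsq(X_var, np.log(resid2 + 1e-12), rcond=None)[0]
    w = 1.0 / np.exp(X_var @ gamma)                # weights = 1 / fitted variance
    Xw = X_mean * w[:, None]
    beta = np.linalg.solve(X_mean.T @ Xw, Xw.T @ y)   # weighted least squares

print("mean coefficients    :", beta)
print("variance coefficients:", gamma)
```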
- Economically optimum design of cusum charts when there is a multiplicity of assignable causes. Hsu, Margaretha Mei-Ing (Virginia Tech, 1978-10-05)
  This study is concerned with the design of cumulative sum charts based on a minimum cost criterion when there are multiple assignable causes occurring randomly, but with known effect. A cost model is developed that relates the design parameters of a cusum chart (sampling interval, decision limit, reference value and sample size) and the cost and risk factors of the process to the long run average loss-cost per hour for the process. Optimum designs for various sets of cost and risk factors are found by minimizing the long run average loss-cost per hour of the process with respect to the design parameters of the cusum chart. Optimization is accomplished by use of Brown's method. A modified Brownian motion approximation is used for calculating ARLs in the cost model. The nature of the loss-cost function is investigated numerically. The effects of changes in the design parameters and in the cost and risk factors are also studied. An investigation of the limiting behavior of the loss-cost function as the decision limit approaches infinity reveals that in some cases there exist points that yield a lower loss-cost than that of the local minimum obtained by Brown's method. It is conjectured that if the model is extended to include a more realistic assumption about the occurrence of assignable causes, then only the local minimum solutions will remain. This study also shows that the multiple assignable cause model can be well approximated by a matched single cause model. In practice it may therefore be sufficient to find the optimum design for the matched single cause model.
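ARLs enter economic cusum design through approximations of the kind mentioned above. One widely used Brownian-motion-based approximation is Siegmund's formula for a one-sided cusum, sketched below; it is shown only to illustrate how an ARL approximation is computed, and is not necessarily the modified approximation used in the dissertation.

```python
import math

def cusum_arl_siegmund(delta, k, h):
    """Siegmund's Brownian-motion approximation to the ARL of a one-sided
    CUSUM with reference value k and decision interval h, for a
    standardized mean shift delta (all quantities in sigma units)."""
    Delta = delta - k
    b = h + 1.166                      # correction for overshoot of the boundary
    if abs(Delta) < 1e-9:
        return b ** 2
    return (math.exp(-2 * Delta * b) + 2 * Delta * b - 1) / (2 * Delta ** 2)

# In-control (delta = 0) and out-of-control (delta = 1) ARLs for k = 0.5, h = 4
print("ARL0:", cusum_arl_siegmund(0.0, 0.5, 4.0))   # roughly 336
print("ARL1:", cusum_arl_siegmund(1.0, 0.5, 4.0))   # roughly 8.3
```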
- An examination of outliers and interaction in a nonreplicated two-way table. Kuzmak, Barbara R. (Virginia Tech, 1990-12-05)
  The additive-plus-multiplicative model, Y_ij = μ + α_i + β_j + Σ_{p=1}^{k} λ_p τ_{pi} γ_{pj}, has been used to describe multiplicative interaction in an unreplicated experiment. Outlier effects often appear as interaction in a two-way analysis of variance with one observation per cell. I use this model in the same setting to study outliers. In data sets with significant interaction, one may be interested in determining whether the cause of the interaction is a true interaction, outliers or both. I develop a new technique which shows how outliers can be distinguished from interaction when there are simple outliers in a two-way table. Several examples illustrating the use of this model to describe outliers and interaction are presented. I briefly address the topics of leverage and influence. Leverage measures the impact a change in an observation has on fitted values, whereas influence evaluates the effect deleting an observation has on model estimates. I extend the leverage tables for an additive-plus-multiplicative model of rank 1 to a rank k model. Several examples studying influence in a two-way nonreplicated table are given.
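The multiplicative terms of the additive-plus-multiplicative model are commonly obtained from a singular value decomposition of the residuals after fitting the additive part (grand mean plus row and column effects). The sketch below is that standard least-squares fit on a small simulated table with one planted outlying cell; it is not the outlier-versus-interaction diagnostic developed in the dissertation.

```python
import numpy as np

def additive_plus_multiplicative(Y, k=1):
    """Least-squares fit of Y_ij = mu + alpha_i + beta_j +
    sum_{p<=k} lambda_p * tau_pi * gamma_pj, via SVD of the residuals
    from the additive (row/column means) fit of a complete table."""
    mu = Y.mean()
    alpha = Y.mean(axis=1) - mu
    beta = Y.mean(axis=0) - mu
    resid = Y - mu - alpha[:, None] - beta[None, :]
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)
    return mu, alpha, beta, s[:k], U[:, :k], Vt[:k, :]

rng = np.random.default_rng(0)
a, b = rng.normal(size=5), rng.normal(size=4)
Y = 10 + a[:, None] + b[None, :] + rng.normal(0, 0.1, (5, 4))
Y[2, 3] += 4.0                       # a single outlying cell
mu, alpha, beta, lam, tau, gamma = additive_plus_multiplicative(Y)
print("lambda_1:", lam[0])           # a lone outlier shows up as a dominant term
print("tau_1   :", np.round(tau[:, 0], 2))
print("gamma_1 :", np.round(gamma[0], 2))
```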
- Experimental design issues in impaired reproduction applications. Chiacchierini, Lisa M. (Virginia Tech, 1996-12-04)
  Within biological and medical research, toxicity studies which measure impaired reproduction are becoming more and more common, yet methods for efficiently designing experiments for these studies have received little attention. In this research, response surface design criteria are applied to four models for impaired reproduction data. The important role of control observations in impairment studies is discussed, and for one model, a normal error linear model, a design criterion is introduced for allocating a portion of the sample to the control. Special attention is focused on issues surrounding optimal design of experiments for two of the models, a Poisson exponential model and a Poisson linear model. As most of the optimal designs for these models are obtained via numerical methods rather than directly from criteria, equivalence theory is used to prove analytically that the numerically obtained designs are truly optimal. A further complication associated with designing experiments for Poisson regression is the need to know parameter values in order to implement the optimal designs. Thus, two stage design of experiments is investigated as one solution to this problem. Finally, since researchers frequently do not know the appropriate model for their data a priori, the optimal designs for these two different models are compared, and designs which are robust to model misspecification are highlighted.
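The parameter-dependence problem mentioned above is easy to see for Poisson regression with a log link, where the information matrix is X'WX with W = diag(exp(x_i'β)). The sketch below compares a D-type criterion for two candidate one-factor designs under an assumed β; the designs, parameter values, and models are illustrative assumptions, not the four impairment models of the dissertation.

```python
import numpy as np

def poisson_d_criterion(doses, beta):
    """log-determinant of the Poisson (log-link) information matrix X'WX,
    W = diag(exp(x_i' beta)), for a one-factor design with intercept."""
    X = np.column_stack([np.ones(len(doses)), doses])
    w = np.exp(X @ beta)
    info = X.T @ (X * w[:, None])
    return np.linalg.slogdet(info)[1]

beta_assumed = np.array([1.0, -2.0])          # impairment: mean count falls with dose
design_a = np.array([0.0, 0.5, 1.0, 1.5])     # equally spaced doses plus a control
design_b = np.array([0.0, 0.0, 1.0, 1.0])     # two-point design with replicates
for name, d in [("A", design_a), ("B", design_b)]:
    print(name, poisson_d_criterion(d, beta_assumed))
# Rerunning with a different beta_assumed can change the ranking, which is
# what motivates two-stage and model-robust design strategies.
```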
- GEM, generalized estuary model: a variation on the Schofield-Krutchkoff stochastic model for estuaries. DePietro, Sandra Ann (Virginia Tech, 1975-08-05)
  In recent years, many mathematical models have been developed to serve as mechanisms for carrying out stream and estuary investigations. In 1971, W. R. Schofield and R. G. Krutchkoff completed work on a stochastic model in an attempt to accurately describe the behavior of an estuary. Through the use of a high-speed computer, this one-dimensional model predicts the concentrations of twelve interacting components, subdivided into five biological and seven chemical factors. This is a valuable tool, but from a practical viewpoint the model is difficult to apply without a fairly strong background in computer science. The aim of the present study is to simplify the use of the Schofield-Krutchkoff estuary model so that it is readily accessible to the appropriate personnel, irrespective of their previous exposure to computer programming. Depending on the particular estuary studied, it was previously necessary to make internal program adjustments with respect to boundary conditions, applicable rate constants, tidal lag, and maximum tidal velocity rates. These constants have been replaced by variables for the user to define as input data to the main program segment. Options to choose one of several expressions for the oxygen reaeration rate K₂, to weight this equation with wind velocity, to vary the volumetric freshwater flow rate with position, and to request plotted output for each day modeled have also been added.
- Groupings in item demand problems. Carter, Walter (Virginia Tech, 1968-03-06)
  In this dissertation an iterative procedure, due to Hartley [9], for obtaining the maximum likelihood estimators of the parameters of underlying discrete distributions is studied for the case of grouped random samples. It is shown that when the underlying distribution is Poisson the process always converges, and does so regardless of the initial value taken for the unknown parameter. In showing this, a rather interesting property of the Poisson distribution was derived: if one defines a connected group of integers to be one that contains all the integers between and including its end points, then the variance of the sub-distribution defined on this connected set is strictly less than the variance of the complete Poisson distribution. A Monte Carlo study was performed to indicate how increasing group sizes affect the variances of the maximum likelihood estimators. As a result of a problem encountered by the Office of Naval Research, combinations of distributions were introduced. The difference between such combinations and the classical mixtures of distributions is that a new distribution must be considered whenever the random variable in question increases by an integral multiple of a known integer constant, b. When all the data are present, the estimation problem is no more complicated than estimating the individual parameters of the component distributions. However, it is pointed out that very frequently the observed samples are defective in that none of the component frequencies are observed. Hence, horizontal grouping of the sample values occurs, as opposed to the vertical grouping encountered in the one-parameter Poisson case. The iterative procedure used to obtain the maximum likelihood estimator of the single-parameter grouped Poisson distribution is extended to obtain the estimators of the parameters from a horizontally grouped sample. As a practical example, the component distributions were all taken to be from the Poisson family; the estimators were obtained and their properties were studied. The regularity conditions which are sufficient to show that a consistent and asymptotically normally distributed solution to the likelihood equations exists are seen to be satisfied for such combinations of Poisson distributions. Further, in the full data case, a set of jointly sufficient statistics is exhibited and, since in the presence of sufficient statistics the solutions to the likelihood equations are unique, the estimators are consistent and asymptotically normal. Such combinations of distributions can be applied to problems in item demands. A justification of the Poisson distribution is given for such applications, but it is also pointed out that the Negative Binomial distribution might be applicable. It is also shown that such a probability model might have an application in testing the efficiency of an anti-ballistic missile system under attack by missiles carrying multiple warheads. However, no data were available, and hence the study of this application could be carried no further.
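An iterative scheme for maximum likelihood from grouped Poisson counts can be written in an EM-like form: allocate each group's count across its member cells in proportion to the current Poisson probabilities, then set the new mean to the resulting expected sample mean. The sketch below uses made-up grouped demand data and truncates the open-ended top group for simplicity; it illustrates the idea of grouped-data ML iteration rather than reproducing Hartley's procedure exactly.

```python
import math

def poisson_pmf(y, lam):
    return math.exp(-lam) * lam ** y / math.factorial(y)

def grouped_poisson_mle(groups, counts, lam=1.0, iters=200):
    """MLE of a Poisson mean from grouped counts. Each group is a connected
    set of integers; counts[g] observations fell somewhere in groups[g].
    E-step: split each group's count over its cells in proportion to the
    current probabilities. M-step: new lambda = expected sample mean."""
    n = sum(counts)
    for _ in range(iters):
        total = 0.0
        for cells, c in zip(groups, counts):
            p = [poisson_pmf(y, lam) for y in cells]
            s = sum(p)
            total += c * sum(y * pi for y, pi in zip(cells, p)) / s
        lam = total / n
    return lam

# Hypothetical grouped demand data: 0, 1, 2 items, and "3 or more"
# (the open-ended group is truncated at 25 for this illustration).
groups = [[0], [1], [2], list(range(3, 26))]
counts = [30, 25, 20, 25]
print(grouped_poisson_mle(groups, counts))
```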
- Integrated empirical models based on a sequential research strategy. Han, Sung Ho (Virginia Tech, 1991-01-05)
  A systematic research approach is necessary to investigate complex systems. This approach should provide a tool for examining many factors in an efficient manner, since a large number of factors is usually involved in the design and evaluation of complex systems. This study develops empirical models which describe the functional relationships of many independent variables in the design of a telephone information system. The development is based on integrating several data sets using sequential experimentation. Reanalyses of previous experiments were conducted to examine necessary and sufficient conditions for integrating data sets resulting from previous studies. As a result, an experiment was conducted to investigate the effects of independent variables which were not manipulated in the previous experiments. An additional experiment was conducted to provide a bridge among several data sets. The integrated data set was then used to build second-order empirical models using polynomial regression. Determinant values of X'X matrices served as a statistical criterion for achieving minimum variances of coefficients and prediction variances of the models. Based upon the empirical models developed, optimum configurations of the telephone information system were obtained using a nonlinear programming technique. A separate optimization method was used since the empirical models included both continuous variables and discrete variables. Specific procedures and guidelines are suggested for planning and conducting sequential research which deals with a large number of independent variables in an efficient and systematic manner. The procedures and guidelines are summarized based upon the lessons learned from the dissertation research. These include administrative requirements, alternative experimental designs, methodological considerations in conducting sequential experiments, and other necessary rules and decision criteria for bridging data sets and optimizing empirical models. This approach is expected to provide a tool for obtaining generalizable results in human factors research.
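The determinant-of-X'X criterion mentioned above can be computed directly once a second-order (full quadratic) model matrix is built from a candidate set of runs. A small sketch for two factors follows; the two candidate designs are made up for the comparison and are not the telephone-system experiments themselves.

```python
import numpy as np

def quadratic_model_matrix(runs):
    """Full second-order model matrix in two factors: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = runs[:, 0], runs[:, 1]
    return np.column_stack([np.ones(len(runs)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def log_det_xtx(runs):
    X = quadratic_model_matrix(runs)
    return np.linalg.slogdet(X.T @ X)[1]

def ccd(alpha, n_center=1):
    """Central composite design in two factors with axial distance alpha."""
    factorial = [[-1, -1], [1, -1], [-1, 1], [1, 1]]
    axial = [[-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha]]
    return np.array(factorial + axial + [[0, 0]] * n_center, dtype=float)

print("rotatable CCD (alpha = 1.414):", log_det_xtx(ccd(np.sqrt(2))))
print("face-centered CCD (alpha = 1):", log_det_xtx(ccd(1.0)))
```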
- Invariant tests for scale parameters under elliptical symmetry. Chmielewski, Margaret A. (Virginia Tech, 1978-11-16)
  In the parametric development of statistical inference it is often assumed that observations are independent and Gaussian. The Gaussian assumption is sometimes justified by appeal to central limit theory or on the grounds that certain normal theory procedures are robust. The independence assumption, usually unjustified, routinely facilitates the derivation of needed distribution theory. In this thesis a variety of standard tests for scale parameters is considered when the observations are not necessarily either Gaussian or independent. The distributions considered are the spherically symmetric vector laws, i.e. laws for which x (n×1) and Px have the same distribution for every (n×n) orthogonal matrix P, and natural extensions of these to laws of random matrices. If x has a spherical law, then the distribution of Ax + b is said to be elliptically symmetric. The class of spherically symmetric laws contains such heavy-tailed distributions as the spherical Cauchy law and other symmetric stable distributions. As such laws need not have moments, the emphasis here is on tests for scale parameters, which become tests regarding dispersion parameters whenever second-order moments are defined. Using the principle of invariance it is possible to characterize the invariant tests of certain hypotheses for all elliptically symmetric distributions. The particular problems treated are tests for the equality of k scale parameters, tests for the equality of k scale matrices, tests for sphericity, tests for block diagonal structure, tests for the uncorrelatedness of two variables within a set of m variables, and tests for the hypothesis of equi-correlatedness. In all cases except the last three, the null and non-null distributions of invariant statistics are shown to be unique for all elliptically symmetric laws. The usual normal-theory procedures associated with these particular testing problems are thus exactly robust, and many of their known properties extend directly to this larger class. In the last three cases, the null distributions of certain invariant statistics are unique but the non-null distributions depend on the underlying elliptically symmetric law. In testing for block diagonal structure in the case of two blocks, a monotone power property is established for the subclass of all elliptically symmetric unimodal distributions.
- Logistic growth curve parameter estimates for scrotal circumference and relationships with female reproduction in crossbred sheep. Fossceco, Stewart Lee (Virginia Tech, 1991-09-15)
  Data from two groups of lambs were analyzed. In group one, seasonal patterns of testis growth through 16 mo of age were assessed on 40 spring-born ram lambs (eight Barbados Blackbelly, 10 Suffolk and 22 1/2-Dorset, 1/4-Finnish Landrace, 1/4-Rambouillet). Scrotal circumference (sc) and body weight (wt) were measured at mean ages of 30, 62, 96, 124, 153, 180, 212, 243, 290, 333, 364, 398, 427, 454, 488 and 517 d. A multivariate repeated measures analysis indicated that there were breed differences in ram sc and wt measurements at each age. When logistic growth curves were fit to ram sc data, breed differences were associated with parameters of the logistic curve that defined mature testis size and the period of rapid testis growth. For group two, data were collected on 1,044 lambs from 727 spring lambings over 5 years; 67 sires and 525 dams were represented. Sc and wt were measured in rams at 5 times (mean ages of 44, 63, 97, 129 and 156 d); ewes were weighed at these times and at three additional times (187, 230 and 271 d). All ewe lambs were kept for fall breeding. Fertility, prolificacy and postweaning spring mating behavior of ewes that had lambed were measured. After ewes lambed, they were exposed to vasectomized rams and checked for postweaning spring mating behavior. Restricted maximum likelihood (REML) was used to estimate variance components for additive genetic, ewe, and litter effects in group two lambs. Heritability estimates for wt at birth to 150 d ranged from .14 to .42. Heritabilities for sc and sc scaled to the 1/3 power of body weight (rsc) ranged from .09 to .57 and from .13 to .55, respectively, and were largest at approximately 90 d. Logistic sc growth curves were fitted to data from individual ram lambs. Heritabilities of the estimated logistic parameters mature sc (A), sc maturing rate (k), age at inflection of the sc growth curve (t₁) and initial 14-d sc (SC14) were estimated at .09±.15, .17±.18, .37±.29 and .40±.14, respectively. Heritability estimates for fertility and spring mating behavior (spbrd) were .04±.13 and .41±.19, respectively. The heritability estimate for prolificacy was zero. Longitudinal additive genetic covariances among wt, sc and rsc at the second, third and fourth measurements were estimated from approximate multivariate REML analysis treating variances as known. Estimated genetic correlations among wts were largest, ranging from .77 to .93. Estimated genetic correlations for rsc traits were between .48 and .90. Estimated genetic correlations for sc ranged only from .10 to .67. Pairwise genetic correlations of sc or rsc with fertility or spbrd were estimated to be moderate and positive (.20 and .34, respectively); t₁ had correlations of -.32 and -.48 with fertility and spbrd, respectively.
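A common parameterization of the three-parameter logistic growth curve used for traits like scrotal circumference is sc(t) = A / (1 + exp(-k (t - t1))), with A the mature size, k the maturing rate, and t1 the age at the inflection point. The sketch below fits that curve to made-up measurements at the group-one ages; the data values and starting guesses are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, k, t1):
    """Logistic growth curve: A = mature size, k = maturing rate,
    t1 = age at the inflection point (half of mature size)."""
    return A / (1.0 + np.exp(-k * (t - t1)))

# Made-up scrotal circumference (cm) at the first eleven measurement ages (days)
age = np.array([30, 62, 96, 124, 153, 180, 212, 243, 290, 333, 364], dtype=float)
sc = np.array([7, 10, 15, 22, 27, 30, 31, 32, 33, 33, 34], dtype=float)

params, _ = curve_fit(logistic, age, sc, p0=[35.0, 0.03, 130.0])
A_hat, k_hat, t1_hat = params
print(f"A = {A_hat:.1f} cm, k = {k_hat:.3f} per day, t1 = {t1_hat:.0f} days")
```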
- Measurement Error in Designed Experiments for Second Order Models. McMahan, Angela Renee (Virginia Tech, 1997-11-04)
  Measurement error (ME) in the factor levels of designed experiments is often overlooked in the planning and analysis of experimental designs. A familiar model for this type of ME, called the Berkson error model, is discussed at length. Previous research has examined the effect of Berkson error on two-level factorial and fractional factorial designs. This dissertation extends the examination to designs for second order models. The results are used to suggest optimal values for axial points in Central Composite Designs. The proper analysis for experimental data including ME is outlined for first and second order models. A comparison of this analysis to a typical Ordinary Least Squares analysis is made for second order models. The comparison is used to quantify the difference in performance of the two methods, both of which yield unbiased coefficient estimates. Robustness to misspecification of the ME variance is also explored. A solution for experimental planning is also suggested. A design optimality criterion, called the DME criterion, is used to create a second-stage design when ME is present. The performance of the criterion is compared to a D-optimal design augmentation. A final comparison is made between methods accounting for ME and methods ignoring ME.
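Under the Berkson model the realized factor level equals the target setting plus random error, x = x_target + u, and for a first-order model ordinary least squares on the target settings still gives unbiased coefficient estimates. The short simulation below illustrates that basic point; the design, error size, and true coefficients are invented, and the dissertation's second-order and DME-criterion results are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims, sd_u, sd_e = 2000, 0.15, 0.5
x_target = np.tile([-1.0, -0.5, 0.0, 0.5, 1.0], 6)            # planned factor settings
X = np.column_stack([np.ones_like(x_target), x_target])       # analysis uses targets

betas = []
for _ in range(n_sims):
    x_actual = x_target + rng.normal(0, sd_u, x_target.size)  # Berkson measurement error
    y = 2.0 + 3.0 * x_actual + rng.normal(0, sd_e, x_target.size)
    betas.append(np.linalg.lstsq(X, y, rcond=None)[0])

print("mean estimates (true values 2 and 3):", np.mean(betas, axis=0))
```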
- A methodology for evaluating energy efficient lighting technologies for their performance, power quality and environmental impacts. Choudhry, Mohammad A. (Virginia Tech, 1995-02-07)
  Recent developments in compact fluorescent lamps, electronic ballasts and adjustable speed drives have expedited the process of tapping the energy saving potential of these technologies. The proliferation of these loads, however, has raised new concerns about power quality in commercial buildings. Higher repair costs and a reduction in the average life of equipment, both on the supply and load sides, could result if these issues are overlooked or ignored. Because lighting loads are the largest fraction of the load in most commercial buildings, a small increase in the harmonic distortion level in a commercial building may jeopardize other loads in the building or loads connected to the same utility bus. As these devices were tested to quantify their energy saving potential, it was found that they can create undesirable harmonic problems. Such characteristics were quantified for different samples. It was observed that certain combinations of these lamps and ballasts are much more acceptable from a power quality viewpoint than the components tested individually. A generic algorithm was developed that can help select energy efficient lighting technologies so as to minimize the harmonic distortion level in the building. Results from the algorithm were validated on a building load model to test the accuracy of the algorithm. The proposed algorithm helps avoid the problems of selecting energy efficient technologies randomly during retrofitting of commercial buildings for energy savings. Pollution mitigation features, and a summary of the environmental and power quality status of energy efficient lighting devices, are also discussed. A brief description of other nonlinear loads present in commercial facilities is also given to evaluate their role in reaping the benefit of energy savings from new lighting technologies. Energy savings and environmental benefits of new lighting devices are highlighted in the presence of other nonlinear loads. This study provides a complete illustration of the benefits and power quality issues related to these technologies.
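The harmonic distortion level being minimized is commonly summarized by total harmonic distortion, THD = sqrt(sum of squared harmonic magnitudes) / fundamental magnitude. The sketch below computes current THD from harmonic amplitudes; the amplitudes are invented and are not measurements from the dissertation.

```python
import math

def current_thd(harmonic_amps):
    """Total harmonic distortion of a current waveform, given the rms
    amplitudes [I1, I2, I3, ...] of the fundamental and its harmonics."""
    fundamental, *harmonics = harmonic_amps
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Invented spectra for two hypothetical lamp/ballast combinations
combo_a = [1.00, 0.02, 0.14, 0.01, 0.05]
combo_b = [1.00, 0.03, 0.25, 0.02, 0.15, 0.01, 0.10]
print(f"combination A THD: {current_thd(combo_a):.1%}")
print(f"combination B THD: {current_thd(combo_b):.1%}")
```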
- Model selection and analysis tools in response surface modeling of the process mean and variance. Griffiths, Kristi L. (Virginia Tech, 1995-04-15)
  Product improvement is a serious issue facing industry today. While response surface methods have been developed which address the process mean involved in improving the product, there has been little research on the process variability. Lack of quality in a product can be attributed to inconsistency in its performance, which highlights the need for a methodology that addresses process variability. The key to working with process variability lies in the handling of the two types of factors which make up the product design: control and noise factors. Control factors can be fixed in both the lab setting and the real application. However, while the noise factors can be fixed in the lab setting, they are assumed to be random in the real application. A response-model can be created which models the response as a function of both the control and noise factors. This work introduces criteria for selecting an appropriate response-model which can be used to create accurate models for both the process mean and process variability. These two models can then be used to identify settings of the control factors which minimize process variability while maintaining an acceptable process mean. If the response-model is known, or at least well estimated, response surface methods can be extended to building various confidence regions related to the process variance. Among these are a confidence region on the location of minimum process variance and a confidence region on the ratio of the process variance to the error variance. The importance of research on process variability is thus clear, and this work offers practical methods for improving the design of a product.
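For a response-model that is linear in the noise variable, say y = b0 + b1*x + (c + d*x)*z + error with z ~ N(0, sigma_z^2), taking the expectation and variance over z gives a process mean of b0 + b1*x and a process variance of (c + d*x)^2 * sigma_z^2 + sigma^2, minimized at x = -c/d. The numeric sketch below uses made-up coefficients and a single control factor, purely to show how the mean and variance models fall out of one response-model.

```python
import numpy as np

# Assumed fitted response-model: y = b0 + b1*x + (c + d*x)*z + error,
# with control factor x and noise factor z ~ N(0, sigma_z^2).
b0, b1, c, d = 50.0, 4.0, 3.0, -5.0
sigma_z2, sigma2 = 1.0, 0.8

def process_mean(x):
    return b0 + b1 * x

def process_variance(x):
    # variance over the noise factor z plus the residual error variance
    return (c + d * x) ** 2 * sigma_z2 + sigma2

x_grid = np.linspace(-1, 1, 201)
x_min_var = x_grid[np.argmin(process_variance(x_grid))]
print("x minimizing process variance:", round(x_min_var, 2), "(theory: -c/d =", -c / d, ")")
print("mean, variance at that x     :", process_mean(x_min_var), process_variance(x_min_var))
```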
- Multivariate control charts for the mean vector and variance-covariance matrix with variable sampling intervals. Cho, Gyo-Young (Virginia Tech, 1991-08-10)
  When using control charts to monitor a process it is frequently necessary to monitor more than one parameter of the process simultaneously. Multivariate control charts for monitoring the mean vector, for monitoring the variance-covariance matrix, and for simultaneously monitoring the mean vector and the variance-covariance matrix of a process with a multivariate normal distribution are investigated. A variable sampling interval (VSI) feature is considered in these charts. Two basic approaches for using past sample information in the development of multivariate control charts are considered. The first approach, called the combine-accumulate approach, reduces each multivariate observation to a univariate statistic and then accumulates over past samples. The second approach, called the accumulate-combine approach, accumulates past sample information for each parameter and then forms a univariate statistic from the multivariate accumulations. Multivariate control charts are compared on the basis of their average time to signal (ATS) performance. The numerical results show that the multivariate control charts based on the accumulate-combine approach are more efficient, in terms of ATS, than the corresponding charts based on the combine-accumulate approach. VSI charts are also more efficient than the corresponding fixed sampling interval (FSI) charts.
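A well-known example of the accumulate-combine idea is the multivariate EWMA chart of Lowry et al.: past deviation vectors are accumulated by exponential smoothing and only then reduced to a single charting statistic. The sketch below computes that statistic with the asymptotic smoothed-vector covariance; it illustrates the general approach and is not necessarily one of the specific charts studied in the dissertation.

```python
import numpy as np

def mewma_statistics(X, mu0, Sigma, r=0.2):
    """Accumulate-combine monitoring: smooth the deviation vectors,
    Z_i = r (x_i - mu0) + (1 - r) Z_{i-1}, then combine each Z_i into a
    single statistic T2_i = Z_i' Cov(Z_i)^{-1} Z_i, using the asymptotic
    covariance r / (2 - r) * Sigma."""
    Sigma_z_inv = np.linalg.inv(Sigma * (r / (2 - r)))
    z = np.zeros(X.shape[1])
    stats = []
    for x in X:
        z = r * (x - mu0) + (1 - r) * z
        stats.append(z @ Sigma_z_inv @ z)
    return np.array(stats)

rng = np.random.default_rng(3)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
in_control = rng.multivariate_normal([0, 0], Sigma, size=30)
shifted = rng.multivariate_normal([0.7, 0.7], Sigma, size=30)   # small mean shift
t2 = mewma_statistics(np.vstack([in_control, shifted]), np.zeros(2), Sigma)
print(np.round(t2, 2))   # the statistic drifts upward after the shift
```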
- On the Efficiency of Designs for Linear Models in Non-regular Regions and the Use of Standard Designs for Generalized Linear Models. Zahran, Alyaa R. (Virginia Tech, 2002-07-01)
  The design of an experiment involves selection of levels of one or more factors in order to optimize one or more criteria, such as prediction variance or parameter variance criteria. Good experimental designs have several desirable properties. Typically, one cannot achieve all the ideal properties in a single design; therefore, there are frequently several good designs, and choosing among them involves tradeoffs. This dissertation contains three components centered on the area of optimal design: developing a new graphical evaluation technique, discussing designs for non-regular regions for first-order models with interaction in the two- and three-factor case, and using standard designs in the case of generalized linear models (GLMs). The Fraction of Design Space (FDS) technique is proposed as a new graphical evaluation technique that addresses good prediction. The new technique is comprised of two tools that give the researcher more detailed information by quantifying the fraction of the design space where the scaled prediction variance is less than or equal to any pre-specified value. The FDS technique complements Variance Dispersion Graphs (VDGs) to give the researcher more insight into a design's prediction capability. Several standard designs are studied with both methods, VDG and FDS. Many standard designs are constructed for a factor space that is either a p-dimensional hypercube or hypersphere, where any point inside or on the boundary of the shape is a candidate design point. However, economic or practical constraints may restrict factor settings and result in an irregular experimental region. For the two- and three-factor case with one corner of the cuboidal design space excluded, three sensible alternative designs are proposed and compared. Properties of these designs and the relative tradeoffs are discussed. Optimum experimental designs for GLMs depend on the values of the unknown parameters. Several solutions to this dependence of the optimality criterion on the parameters have been suggested in the literature, but they are often unrealistic in practice. The behavior of factorial designs, the well-known standard designs of the linear case, is studied for the GLM case. Conditions under which these designs have high G-efficiency are formulated.
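The FDS quantity can be computed by Monte Carlo: sample points uniformly over the design region, evaluate the scaled prediction variance v(x) = N f(x)'(X'X)^{-1} f(x) at each, and report the fraction at or below a cutoff. The sketch below does this for a first-order-with-interaction model in two factors on the square; the design, model, and cutoff are illustrative choices, not the designs evaluated in the dissertation.

```python
import numpy as np

def model_terms(x1, x2):
    """First-order model with interaction: 1, x1, x2, x1*x2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])

def fds(design, cutoff, n_samples=20_000):
    """Fraction of the [-1, 1]^2 design space where the scaled prediction
    variance v(x) = N * f(x)' (X'X)^{-1} f(x) is at most `cutoff`."""
    X = model_terms(design[:, 0], design[:, 1])
    XtX_inv = np.linalg.inv(X.T @ X)
    N = len(design)
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1, 1, size=(n_samples, 2))
    F = model_terms(pts[:, 0], pts[:, 1])
    spv = N * np.einsum("ij,jk,ik->i", F, XtX_inv, F)   # row-wise quadratic forms
    return (spv <= cutoff).mean()

two_squared = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)
print("FDS(v <= 2):", fds(two_squared, cutoff=2.0))
```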