Browsing by Author "Krutchkoff, Richard G."
- Connectedness and optimality in multidimensional designs. Chaffin, Wilkie Willis (Virginia Tech, 1975-05-14). Sennetti (1972) showed the existence of minimal multidimensional designs (MMD's) and minimal augmented multidimensional designs (MAMD's) which allow estimation of type I and type II contrasts. For an MMD, only one more design point is required than there are degrees of freedom for the parameter vector. For MAMD's, the number of assemblies added is equal to the difference between the number of degrees of freedom for the parameter vector and the rank of the design matrix. Using the chain concept of connectedness as defined by Bose (1947), this work suggests a practical procedure to obtain an MMD for estimating type I contrasts and proves the procedure valid. In addition, a procedure is discussed that may be used to obtain an MMD for estimating type II contrasts. After proof of the validity of the procedure, advantages of this procedure over some other possible procedures to obtain an MMD are given. It is shown that only a slight modification of the procedure is necessary to be able to obtain an MAMD for estimating type II contrasts. If there is a restriction on the number of replicates of factor levels for an experiment, then a different approach is suggested. If m_ij denotes the number of replicates of level j of factor F_i, then it is desired to increase the number of estimators for type I contrasts without altering any of the m_ij. The interchange algorithm used by Eccleston and Hedayat (1974) to accomplish this for a proper, locally connected (l-connected) randomized block design is extended to two-factor, no-interaction designs. The design obtained is pseudo globally connected (pg-connected), thus guaranteeing more estimates for main effect contrasts. In addition, the new design will be better than the old with respect to the S-optimality criterion. It is shown that the procedure can also be used in a two or more factor experiment to pg-connect an l-connected design for two factors. The new design obtained will be better than the old with respect to a new criterion, C-optimality. The algorithm described is proved to have no effect on the amount of aliasing (based on a norm suggested by Hedayat, Raktoe, and Federer (1974)) due to a possibly incorrect assumption of no interaction. The use of the interchange algorithm to pg-connect a design for level combinations is suggested because of the increased number of estimators for type II contrasts that may be obtained. A theorem is proved which gives the minimum number of estimates that will be available for estimating a type II contrast if a design is pg-connected for level combinations. The last topic discussed is the use of a criterion for choosing a particular MAMD for estimating type II contrasts. The sequentially S-optimal design is defined. It is shown that the sequentially S-optimal design is easy to obtain and is similar to the S-optimal design.
- Correlation between arrival and service patterns as a means of queue regulation. Hadidi, Nasser (Virginia Tech, 1968-03-05). A major cause of congestion in queuing situations, that is of immoderate waits and lengthening queues, is often the assumed independence of the arrival and service mechanisms. This dissertation is concerned with single-server "correlated" models, defined to be such that either the service mechanism is somehow tailored to the arrival pattern, or vice versa. The greatest attention is given to a particular model in which the service time allotted to the nth arrival is λ Tn, where λ is a non-time-dependent constant that numerically has the value of the congestion index, and Tn is the interval between the (n-1)th and the nth arrivals which, it is important to note, could be observed by the server before service is initiated. It is shown that the effect of the correlation mechanism is to reduce congestion under a given level of traffic intensity, as compared with single-server systems in which arrivals and service are independent. This result is achieved without inflicting on the service facility the penalty of increased periods of idleness. The particular model is a queuing interpretation of a stochastic-kinematic situation studied by B. W. Conolly in connection with a military tactical analysis. The dissertation is divided into two parts. Part I develops the theory of the main model with particular reference to state probabilities, waiting time, busy period, and output. Some consideration is also given to a related model where service depends on the arrival pattern, and to what is referred to as the "dual" problem in which the arrival mechanism is geared to service capability. Further, the state probabilities at arrival epochs for a conventional M/M/1 queue are obtained by employing a simple probabilistic argument. This is needed for Part II. Part II applies the theory to give a practical comparison of the correlation mechanism with the elementary "independent" single-server queues M/M/1, M/D/1 and D/M/1; and it is shown in detail that the practical result referred to above is achieved. The superiority of the correlation mechanism increases with traffic intensity. State probability, busy period and output comparisons are made only with the M/M/1 system. The main conclusions are found to extend also to these processes. It is concluded that, where its application is practicable, a mechanism of correlation can achieve important gains in efficiency.
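
  A minimal simulation sketch of the comparison described above, using Lindley's waiting-time recursion and assuming a unit arrival rate; the function names and parameter choices are illustrative, not taken from the dissertation.

  ```python
  # Compare the correlated rule (service of customer i equals lam times the
  # preceding interarrival time) with an independent M/M/1 queue at the same
  # traffic intensity lam, via mean waiting time in queue.
  import random

  def mean_wait_correlated(lam, arrival_rate=1.0, n=200_000, seed=1):
      rng = random.Random(seed)
      w, total = 0.0, 0.0
      prev_t = rng.expovariate(arrival_rate)      # interarrival time before customer 1
      for _ in range(n):
          total += w                              # accumulate current customer's wait
          t = rng.expovariate(arrival_rate)       # next interarrival time
          w = max(0.0, w + lam * prev_t - t)      # Lindley recursion with S_i = lam * T_i
          prev_t = t
      return total / n

  def mean_wait_mm1(rho, arrival_rate=1.0, n=200_000, seed=2):
      rng = random.Random(seed)
      service_rate = arrival_rate / rho           # same traffic intensity rho
      w, total = 0.0, 0.0
      for _ in range(n):
          total += w
          w = max(0.0, w + rng.expovariate(service_rate) - rng.expovariate(arrival_rate))
      return total / n

  for rho in (0.5, 0.8, 0.9):
      print(rho, mean_wait_correlated(rho), mean_wait_mm1(rho))
  ```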
- Creativity quotient: a statistical instrument for combining cognitive and personality components of creative thinking. Sobhany, Maryam Saffaripour (Virginia Polytechnic Institute and State University, 1985). Creative thinking is a multi-faceted trait. It encompasses a constellation of intellectual abilities and personality characteristics. In this study cognitive and personality components of creative thinking were included in an instrument. From the relevant literature the most important cognitive components, in order of importance, were problem finding, original problem solving, general problem solving, knowledge, and attentiveness to detail. Lack of conformity was suggested to be the most important personality component. Measures of these components of creative thinking were developed. Data were obtained by interviewing 110 third-grade children (mean age 8.9 years), from which 80 sets were randomly selected to develop a scoring scheme. The scoring scheme was utilized to derive a statistical equation to quantify creative thinking for each individual. To ascertain the reliability and consistency of the developed scoring scheme, the author and two graduate students independently scored the remaining data (30 sets). The coefficient of variability for the three groups of scores was computed by means of a pooled estimate of variance. This quantity was found to be .02, which is remarkably small. The relative contribution of each component to creative thinking and the interrelationships between the components are discussed, as is the question of whether problem finding and problem solving are two separate cognitive processes.
- Density estimation and some topics in multivariate analysis. Gaskins, Ray Allen (Virginia Tech, 1972-05-15). Part I, entitled "A test of goodness-of-fit for multivariate distributions with special emphasis on multinormality", investigates a modification of the chi-squared goodness-of-fit statistic which eliminates certain objectionable properties of other multivariate goodness-of-fit tests. Special emphasis is given to the multinormal distribution, and computer simulation is used to generate an empirical distribution for this goodness-of-fit statistic for the standardized bivariate normal density. Attempts to fit a four-parameter generalized gamma density function to this empirical distribution were only partially successful. Part II, entitled "The centroid method of numerical integration", begins with a discussion of the often slighted midpoint method of numerical integration; then, using Taylor's theorem, generalized formulae for the centroid method of numerical integration of a function of several variables over a closed bounded region are developed. These formulae are in terms of the derivatives of the integrand and the moments of the region of integration with respect to its centroid. Since most nonpathological bounded regions can be well approximated by a finite set of simplexes, formulae are developed for the moments of general as well as special simplexes. Several numerical examples are given and a comparison is made between the midpoint and Gaussian quadrature methods. FORTRAN programs are included. Part III, entitled "Non-parametric density estimation", begins with an extensive literature review of non-parametric methods for estimating probability densities based on a sample of N observations and goes on to suggest a new method, which is to subtract a penalty for roughness from the log-likelihood before maximizing. The roughness penalty is a functional of the assumed density function, and the recommendation is to use a linear combination of the squares of the first and second derivatives of the square root of the density function. Many numerical examples and graphs are given and show that the estimated density function, for selected values of the coefficients in the linear expression, turns out to be very smooth even for very small sample sizes. Computer programs are not included but are available upon request. Part IV, entitled "On separation of product and error variability", surveys standard techniques of partitioning the total variance into product (or item) variance and error (or testing) variance when destructive testing makes replication over the same item impossible. The problem of negative variance estimates is also investigated. The factor-analysis model and related iterative techniques are suggested as an alternative method for dealing with this separation when three or more independent measurements per item are available. The problem of dependent measurements is discussed. Numerical examples are included.
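
  One compact way to write the penalized-likelihood criterion sketched in Part III, under assumed notation (γ for the square root of the density; α and β for the coefficients of the linear combination); an illustrative rendering rather than the dissertation's exact formulation.

  ```latex
  % Penalized log-likelihood for a density f based on observations x_1, ..., x_N,
  % with gamma = sqrt(f) and non-negative roughness coefficients alpha and beta.
  \max_{f \ge 0,\ \int f = 1} \;
    \sum_{i=1}^{N} \log f(x_i)
    \;-\; \alpha \int \bigl[\gamma'(x)\bigr]^{2}\,dx
    \;-\; \beta \int \bigl[\gamma''(x)\bigr]^{2}\,dx,
    \qquad \gamma(x) = \sqrt{f(x)}.
  ```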
- The Effects of Technology Education, Science, and Mathematics Integration Upon Eighth Graders' Technological Problem-Solving Ability. Childress, Vincent William (Virginia Tech, 1994-07-01). This study investigated the effects of technology education, science, and mathematics (TSM) curriculum integration on the technological problem-solving ability of eighth-grade technology education students. The researcher used a quasi-experimental, nonequivalent control group design to compare the performance of students receiving correlated TSM integration to those not receiving integration in an adapted Technology, Science, Mathematics Integration Project Activity (LaPorte & Sanders, 1993). The students were to design, construct, and evaluate wind collectors to generate electricity. The collectors were mounted on a generator for the pretest and posttest measurements. The measure for treatment effect was the output wattage of the generator for each student's wind collector. The samples were drawn from middle schools that had two technology education teachers in the same school, each teaching eighth graders. The pilot study sample (N = 51) was selected from a middle school in rural south-central Virginia. The study sample (N = 33) was selected from a middle school in a suburb of Richmond, Virginia. Treatment group technology education teachers employed technological problem solving, and they correlated instruction of key concepts with science and mathematics teachers using the adapted TSM Integration Activity. The control group technology education teachers did not correlate instruction with science and mathematics teachers. There was no significant difference between the treatment and control groups for technological problem solving. Evidence suggested that students were applying science and mathematics concepts. The researcher concluded that TSM curriculum integration may promote the application of science and mathematics concepts to technological problem solving and does not hinder the technological problem-solving ability of eighth-grade technology education students.
- Empirical Bayes methods in time series analysis. Khoshgoftaar, Taghi M. (Virginia Polytechnic Institute and State University, 1982). In the case of repetitive experiments of a similar type, where the parameters vary randomly from experiment to experiment, the empirical Bayes method often leads to estimators which have smaller mean squared errors than the classical estimators. Suppose there is an unobservable random variable θ, where θ ~ G(θ), usually called a prior distribution. The Bayes estimator of θ cannot be obtained in general unless G(θ) is known. In the empirical Bayes method we do not assume that G(θ) is known, but the sequence of past estimates is used to estimate θ. This dissertation develops empirical Bayes estimates of various time series parameters: the autoregressive model, the moving average model, the mixed autoregressive-moving average model, regression with time series errors, regression with unobservable variables, serial correlation, multiple time series, and the spectral density function. In each case, empirical Bayes estimators are obtained using the asymptotic distributions of the usual estimators. By Monte Carlo simulation the empirical Bayes estimator of the first-order autoregressive parameter, ρ, was shown to have smaller mean squared errors than the conditional maximum likelihood estimator for 11 past experiences.
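
  A minimal empirical Bayes sketch in the spirit of the approach described above, assuming a normal prior fitted by the method of moments to past estimates and the standard asymptotic variance (1 - ρ²)/n of the AR(1) estimator; the function name and example values are illustrative, and this is not the estimator derived in the dissertation.

  ```python
  # Shrink the current AR(1) estimate toward the mean of past estimates, weighting
  # by estimated prior variance versus the current estimate's sampling variance.
  import numpy as np

  def eb_ar1(current_rho, n_current, past_rhos, past_ns):
      past_rhos = np.asarray(past_rhos, dtype=float)
      past_ns = np.asarray(past_ns, dtype=float)
      v_past = (1.0 - past_rhos**2) / past_ns           # asymptotic sampling variances
      prior_mean = past_rhos.mean()
      # spread of past estimates minus their average sampling variance
      prior_var = max(past_rhos.var(ddof=1) - v_past.mean(), 1e-8)
      v_cur = (1.0 - current_rho**2) / n_current
      shrink = prior_var / (prior_var + v_cur)          # weight placed on the data
      return shrink * current_rho + (1.0 - shrink) * prior_mean

  # Example with eleven past experiences, as in the simulation described above.
  rng = np.random.default_rng(0)
  past = rng.normal(0.5, 0.1, size=11)
  print(eb_ar1(current_rho=0.35, n_current=50, past_rhos=past, past_ns=[50] * 11))
  ```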
- Empirical Bayes procedures in time series regression models. Wu, Ying-keh (Virginia Polytechnic Institute and State University, 1986). In this dissertation empirical Bayes estimators for the coefficients in time series regression models are presented. Due to the uncontrollability of time series observations, explanatory variables in each stage do not remain unchanged. A generalization of the results of O'Bryan and Susarla is established and shown to be an extension of the results of Martz and Krutchkoff. Alternatively, as the distribution function of sample observations is hard to obtain except asymptotically, the results of Griffin and Krutchkoff on empirical linear Bayes estimation are extended and then applied to estimating the coefficients in time series regression models. Comparisons between the performance of these two approaches are also made. Finally, predictions in time series regression models using empirical Bayes estimators and empirical linear Bayes estimators are discussed.
- GEM, generalized estuary model: a variation on the Schofield-Krutchkoff stochastic model for estuaries. DePietro, Sandra Ann (Virginia Tech, 1975-08-05). In recent years, many mathematical models have been developed to be used as mechanisms for carrying out stream and estuary investigations. In 1971, W. R. Schofield and R. G. Krutchkoff completed work on a stochastic model in an attempt to accurately describe the behavior of an estuary. Through the use of a high-speed computer this one-dimensional model predicts the concentrations of twelve interacting components, subdivided into five biological and seven chemical factors. This is a valuable tool, but from a practical viewpoint, the model is difficult to apply without a fairly strong background in computer science. It is the aim of the present study to simplify the use of the Schofield-Krutchkoff estuary model so that it can be readily accessible to the appropriate personnel, irrespective of their previous exposure to computer programming. Depending upon the particular estuary studied, it was necessary to make internal program adjustments with respect to boundary conditions, applicable rate constants, tidal lag, and maximum tidal velocity rates. These constants have been replaced by variables for the user to define as input data to the main program segment. Options have also been added to choose one of several expressions for the oxygen reaeration rate K₂, to weight this equation with wind velocity, to vary the volumetric freshwater flow rate with position, and to request plotted output for each day modeled.
- Generalized initial conditions for the stochastic model for pollution and dissolved oxygen in streams. Moushegian, Richard H.; Krutchkoff, Richard G. (Water Resources Research Center, Virginia Polytechnic Institute, 1969). Today there is a tremendous volume of waste material that is being deposited daily into the streams and rivers throughout the United States. The waste material is a by-product of an industrial and population expansion and is increasing in volume and complexity daily. These wastes cannot all be treated and transformed into inert, non-toxic compounds prior to being released into the streams. In the last decade the problem received considerable attention, and state and federal water-pollution laws have been enacted. When a regulatory agency wishes to restrict the quality of organic waste discharged into a body of water, it will need some criteria for judging the pollutants introduced into the stream. A sanitary engineer within the regulatory agency has several general methods at his disposal...
- High power shunt regulation of spacecraft solar arrays. Patil, Ashok R. (Virginia Tech, 1995). The operation of the basic shunt system for solar arrays is considered. The system is analyzed for stability with a constant power load. The implications of using switching-type shunt elements for high power outputs are investigated. The input filter is shown to affect the closed-loop design of the system, as well as its weight. Analysis and modeling techniques are developed for a sequential shunt unit. The analysis of bus impedance and loop gain is verified against measurements on hardware. The factors that affect the design are described. The effect of non-linearities in the system is shown to cause limit-cycle operation. For more effective use of the input filters, alternatives to the existing scheme are considered, where the on-off and fine control sections are kept distinct. The basic requirements of the scheme are shown to be the suppression of on-off section current and the inclusion of hysteresis in the control loop.
- Iterated Grid Search Algorithm on Unimodal Criteria. Kim, Jinhyo (Virginia Tech, 1997-06-02). The unimodality of a function seems a simple concept, but in the Euclidean space R^m, m = 3, 4, ..., it is not easy to define. We have an easy tool to find the minimum point of a unimodal function. The goal of this project is to formalize and support distinctive strategies that typically guarantee convergence. Support is given both by analytic arguments and by simulation study. Application is envisioned in low-dimensional but non-trivial problems. The convergence of the proposed iterated grid search algorithm is presented along with the results of particular application studies. It has been recognized that derivative methods, such as Newton-type methods, are not entirely satisfactory, so a variety of other tools are being considered as alternatives. Many other tools have been rejected because of apparent manipulative difficulties. In the current research we focus on a simple algorithm with guaranteed convergence for unimodal functions, avoiding the possible chaotic behavior of the function. Furthermore, in case the loss function to be optimized is not unimodal, we suggest a weaker condition, almost (noisy) unimodality, under which the iterated grid search finds an estimated optimum point.
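
  A minimal sketch of an iterated (refining) grid search for a unimodal objective on a box in R^m, assuming a fixed shrink factor and grid size per iteration; the names and constants are illustrative choices, not the thesis's algorithm as stated.

  ```python
  # Evaluate a coarse grid over the box, move to the best point, shrink the box
  # around it, and repeat; for unimodal objectives this homes in on the optimum.
  import itertools
  import numpy as np

  def iterated_grid_search(f, lower, upper, points=5, shrink=0.5, iters=20):
      lower = np.asarray(lower, dtype=float)
      upper = np.asarray(upper, dtype=float)
      best_x, best_f = None, np.inf
      for _ in range(iters):
          axes = [np.linspace(lo, hi, points) for lo, hi in zip(lower, upper)]
          for x in itertools.product(*axes):          # full grid over the current box
              fx = f(np.array(x))
              if fx < best_f:
                  best_x, best_f = np.array(x), fx
          half = shrink * (upper - lower) / 2.0       # shrink the box around the best point
          lower = np.maximum(lower, best_x - half)
          upper = np.minimum(upper, best_x + half)
      return best_x, best_f

  # Example: a smooth unimodal function of two variables.
  print(iterated_grid_search(lambda x: (x[0] - 1.2)**2 + (x[1] + 0.7)**2,
                             lower=[-5, -5], upper=[5, 5]))
  ```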
- Lp norm estimation procedures and an L1 norm algorithm for unconstrained and constrained estimation for linear models. Kim, Buyong (Virginia Polytechnic Institute and State University, 1986). When the distribution of the errors in a linear regression model departs from normality, the method of least squares seems to yield relatively poor estimates of the coefficients. One alternative approach to least squares which has received a great deal of attention of late is minimum Lp norm estimation. However, the statistical efficiency of an Lp estimator depends greatly on the underlying distribution of errors and on the value of p. Thus, the choice of an appropriate value of p is crucial to the effectiveness of Lp estimation. Previous work has shown that L₁ estimation is a robust procedure in the sense that it leads to an estimator which has greater statistical efficiency than the least squares estimator in the presence of outliers, and that L₁ estimators have some desirable statistical properties asymptotically. This dissertation is mainly concerned with the development of a new algorithm for L₁ estimation and constrained L₁ estimation. The mainstream of computational procedures for L₁ estimation has been the simplex-type algorithms via the linear programming formulation. Other procedures are the reweighted least squares method and nonlinear programming techniques using the penalty function approach or a descent method. A new computational algorithm is proposed which combines the reweighted least squares method and the linear programming approach. We employ a modified Karmarkar algorithm to solve the linear programming problem instead of the simplex method. We prove that the proposed algorithm converges in a finite number of iterations. From our simulation study we demonstrate that our algorithm requires fewer iterations to solve standard problems than are required by the simplex-type methods, although the amount of computation per iteration is greater for the proposed algorithm. The proposed algorithm for unconstrained L₁ estimation is extended to the case where the L₁ estimates of the parameters of a linear model satisfy certain linear equality and/or inequality constraints. These two procedures are computationally simple to implement since a weighted least squares scheme is adopted at each iteration. Our results indicate that the proposed L₁ estimation procedure yields very accurate and stable estimates and is efficient even when the problem size is large.
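
  The dissertation's algorithm couples reweighting with a modified Karmarkar linear-programming step; the sketch below shows only the reweighted least squares idea for unconstrained L₁ (least absolute deviations) regression, with illustrative names and tolerances.

  ```python
  # Iteratively reweighted least squares for L1 regression: refit a weighted
  # least squares problem with weights proportional to 1/|residual|.
  import numpy as np

  def l1_regression_irls(X, y, iters=50, eps=1e-6):
      beta = np.linalg.lstsq(X, y, rcond=None)[0]       # ordinary least squares start
      for _ in range(iters):
          r = np.abs(y - X @ beta)
          w = 1.0 / np.maximum(r, eps)                  # guard against zero residuals
          beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
          if np.max(np.abs(beta_new - beta)) < 1e-8:
              break
          beta = beta_new
      return beta

  # Example with a gross outlier in y: the L1 fit is barely affected.
  rng = np.random.default_rng(1)
  X = np.column_stack([np.ones(50), rng.normal(size=50)])
  y = X @ np.array([2.0, 3.0]) + rng.normal(scale=0.5, size=50)
  y[0] += 25.0
  print(l1_regression_irls(X, y))
  ```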
- A Monte Carlo study of the robustness of the standard deviation of the sample correlation coefficient to the assumption of normality. Brooks, Camilla Anita (Virginia Tech, 1970-03-08). From the case studies presented, one could conclude that for large values of n the standard deviation of r, the usual estimator of the correlation coefficient, and of its transform z are only negligibly affected by variation in skewness or variation in kurtosis, the effect being slightly greater for variation in kurtosis. When the variations are in both skewness and kurtosis, the standard deviations of r and of z are more affected by non-normality, a few significantly so. In small samples (n = 10, n = 5) the standard deviations of r and z are quite visibly larger for variations in skewness and variations in kurtosis. The effect is greater for the simultaneous variation of the two. However, all of the values fall within a 95% confidence interval. It would appear then that the increase in the standard deviations of r and z is due more to the natural rise of the standard deviation in small samples than to non-normality. Viewing the studies made in totality, we may in final conclusion state that the effect of non-normality on the standard deviation of r for samples of any size is not significant enough for concern; i.e., from this Monte Carlo study we will state that the standard deviation of the sample correlation coefficient is robust to the assumption of normality.
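
  A small Monte Carlo sketch in the spirit of this study: it estimates the standard deviation of r (and of Fisher's z) under normal versus skewed marginals. The sample sizes, replication count, and the construction used to induce correlation are illustrative assumptions, not the study's design.

  ```python
  # Simulate the sampling standard deviation of the sample correlation r and of
  # Fisher's z = arctanh(r) for two error distributions.
  import numpy as np

  def sd_of_r(sampler, n, rho=0.5, reps=10_000, seed=0):
      rng = np.random.default_rng(seed)
      rs = np.empty(reps)
      for i in range(reps):
          e1, e2 = sampler(rng, n), sampler(rng, n)
          x = e1
          y = rho * e1 + np.sqrt(1 - rho**2) * e2      # induce population correlation rho
          rs[i] = np.corrcoef(x, y)[0, 1]
      z = np.arctanh(rs)                               # Fisher's z transform
      return rs.std(ddof=1), z.std(ddof=1)

  normal = lambda rng, n: rng.standard_normal(n)
  skewed = lambda rng, n: (rng.chisquare(3, n) - 3) / np.sqrt(6)   # standardized chi-square(3)

  for n in (10, 50):
      print(n, sd_of_r(normal, n), sd_of_r(skewed, n))
  ```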
- Predicting pollution in the James River Estuary: a stochastic model. Bard, Harry; Krutchkoff, Richard G. (Water Resources Research Center, Virginia Polytechnic Institute and State University, 1974). One function of water-quality management is ensuring that no pollutants enter a given body of water in sufficient quantity to degrade water quality. To do this most effectively, a manager should be able to forecast what effects additional amounts of various pollutants would have on the body of water. And that is a complex and difficult task...
- Probability forecasts of 30-day precipitation. Philpot, John W.; Krutchkoff, Richard G. (Water Resources Research Center, Virginia Polytechnic Institute, 1969). Rainfall is a very important factor in the overall picture of water resources. The question "How much rain will we get this month?" has long been asked, but never answered with any degree of accuracy. The U.S. Weather Bureau presently provides, each month, a map of the United States divided into regions of Light, Moderate, and Heavy rainfall predictions. This method of prediction falls short for at least three reasons...
- A response surface approach to data analysis in robust parameter design. Kim, Yoon G. (Virginia Tech, 1992-09-15). It has become obvious that combined arrays and a response surface approach can be effective tools in our quest to reduce (process) variability. An important aspect of the improvement of quality is to suppress the magnitude of the influence coming from subtle changes of noise factors. To model and control process variability induced by noise factors we take a response surface approach. The derivatives of the standard response function with respect to the noise factors, i.e., the slopes of the response function in the direction of the noise factors, play an important role in the study of the minimum process variance. For a better understanding of the process variability, we study various properties of both biased and unbiased estimators of the process variance. Response surface modeling techniques, together with the idea of modeling and estimating the process variance as a function of the aforementioned derivatives, are central to this study. In what follows, we describe the use of response surface methodology for situations in which noise factors are used. The approach is to combine Taguchi's notion of heterogeneous variability with standard design and modeling techniques available in response surface methodology.
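
  A standard way to write the variance model alluded to above, given as an illustration under assumed notation: x denotes the control factors, z the noise factors taken to have mean zero and common variance σ_z², and ε the residual error.

  ```latex
  % Transmitted-variance approximation built from the slopes of the response
  % surface in the noise-factor directions, evaluated at the nominal noise level.
  \operatorname{Var}_{z,\varepsilon}\bigl[y(\mathbf{x},\mathbf{z})\bigr]
    \;\approx\; \sigma_z^{2} \sum_{i}
    \left( \left. \frac{\partial y(\mathbf{x},\mathbf{z})}{\partial z_i} \right|_{\mathbf{z}=\mathbf{0}} \right)^{2}
    \;+\; \sigma^{2}.
  ```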
- A response surface approach to the mixture problem when the mixture components are categorized. Cornell, John A. (Virginia Tech, 1968-12-05). A method is developed for experiments with mixtures where the mixture components are categorized (acids, bases, etc.), and each category of components contributes a fixed proportion to the total mixture. The number of categories of mixture components is general, and each category will be represented in every mixture by one or more of its member components. The purpose of this paper is to show how standard response surface designs and polynomial models can be used for estimating the response to mixtures of the k mixture components. The experimentation is concentrated in an ellipsoidal region chosen by the experimenter, subject to the constraints placed on the components. The selection of this region, the region of interest, permits the exclusion of work in areas not of direct interest. The transformation from a set of linearly dependent mixture components to a set of linearly independent design variables is shown. This transformation is accomplished with the use of an orthogonal matrix. Since we want the properties of the predictor ŷ at a point w to be invariant to the arbitrary elements of the transformation matrix, we choose to use rotatable designs. Frequently, there are underlying sources of variation in the experimental program whose effects can be measured by dividing the experimentation into stages, that is, blocking the observations. With the use of orthogonal contrasts of the observations, it is shown how these effects can be measured. This concept of dividing the program of experiments into stages is extended to include second-degree designs. The radius of the largest sphere, in the metric of the design variables, that will fit inside the factor space is derived. This sphere provides an upper bound on the size of an experimental design. This is important when one desires to use a design to minimize the average variance of ŷ only for a first-degree model. It is also shown with an example how, with the use of the largest sphere, one can cover almost all combinations of the mixture components, subject to the constraints.
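
  A minimal sketch of the kind of transformation described above, assuming a Helmert-type orthogonal matrix whose last row is proportional to (1, ..., 1); the particular matrix and the function names are illustrative, not the ones used in the paper.

  ```python
  # Map k mixture proportions (which sum to one and are therefore linearly
  # dependent) to k-1 linearly independent design variables via an orthogonal
  # matrix; the coordinate along (1, ..., 1) is constant and is dropped.
  import numpy as np

  def helmert_like(k):
      rows = []
      for i in range(1, k):
          r = np.zeros(k)
          r[:i] = 1.0
          r[i] = -float(i)
          rows.append(r / np.linalg.norm(r))          # orthonormal contrast rows
      rows.append(np.ones(k) / np.sqrt(k))            # last row spans the constraint direction
      return np.vstack(rows)                          # orthogonal k x k matrix

  def to_design_variables(x):
      x = np.asarray(x, dtype=float)
      w = helmert_like(len(x)) @ x
      return w[:-1]                                   # last coordinate equals 1/sqrt(k), a constant

  print(to_design_variables([0.2, 0.3, 0.5]))         # two independent design variables
  ```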
- Some aspects of time-dependent one-dimensional random walks. Gibson, Allen Edward (Virginia Tech, 1968-09-05). This dissertation contains a study of related topics connected with the one-dimensional random walk which proceeds by steps of ±1 occurring at random time intervals. In general it is assumed that these intervals are identically and independently distributed. This model may be specialized to the queuing process by inserting a reflecting barrier at the origin so that the displacement S(t) of the random walk at any time t is non-negative. Throughout most of the dissertation it is assumed that the time intervals between steps of the same kind are independently and negative-exponentially distributed with non-time-dependent parameter λ for positive steps, and μ for negative steps. Under this assumption we designate the single-server queuing process by the usual notation M/M/1. Using an obvious extension to the queuing notation, we denote by ∞²/M/M the unrestricted walk in which S(t) may range over the entire set of positive and negative integers including zero. Topics of classical interest are discussed, such as first-passage times, first maxima, the time of occurrence of the rth return to zero, and the number of returns to zero during an arbitrary time interval (0,t). In addition to the discussion of these topics for ∞²/M/M and M/M/1, probability density functions are obtained for the first-passage times and the epoch of the maximum on the assumption that time intervals between steps of +1 have a general distribution and steps of -1 occur in a Poisson stream, and vice versa. These more general expressions are new. Special emphasis is placed on the two-state sojourn problem in which it is assumed that at any time t, S(t) belongs to one of two possible states, A and B. The distribution of the sojourn time σB(t) in a given state B during the arbitrary time interval (0,t) is given. The general result for the distribution of σB(t) is applied to the M/M/1 queuing process to obtain the distribution of the busy time. A similar application is made to the walk ∞²/M/M to obtain the distribution of σB(t) for the two cases: (i) B is the set of all non-zero integers; and (ii) B is the set of all positive integers. New expressions are given for the distribution function of σB(t) in all three cases. New asymptotic formulae for these cases are derived and compared numerically with those obtained by Takacs using different methods. For the more difficult sojourn-time problem assuming three possible states, A, B₁, and B₂, the joint probability density function of σA(t) and σB₁(t) is derived. This result, not published before, is applied to ∞²/M/M assuming that A contains zero only and that B₁ and B₂ consist of the sets of positive and negative integers, respectively. The dissertation also includes a discussion of several results by E. Sparre Andersen concerning fluctuations of sums of random variables and their time-dependent analogues.
- Stochastic model for a dynamic ecosystem. Schofield, William R.; Krutchkoff, Richard G. (Water Resources Research Center, Virginia Polytechnic Institute and State University, 1973). Because of increasing concern for our environment, the American public, through its governing bodies, is preparing to invest vast quantities of the nation's resources in the prevention and control of water pollution, as well as the control of air, noise, and radiation pollution, and solid waste disposal. To have wise choices made in expending these resources, it is necessary first to understand the relationship between the discharge of pollutants into a body of water and the ultimate effect on the quality of that water. Once the cause-effect relationship is known, the effectiveness of a prospective pollution-control investment can be evaluated before the investment is made. In this way the best of many alternate control schemes could be selected for a given locality based on the needs, resources, and conditions of that locality. Usually, this cause-effect relationship is expressed in the form of a mathematical model, where each known step, process, mechanism, etc., is represented by a corresponding mathematical analogue. Obviously, the better the pollution mechanism is understood, the more accurate its translation into a mathematical analogue, and thus the more reliable the comparison of the alternatives. It is also evident that a rigorous comparison must be made between any mathematical model and actual data before the model may be confidently used in a predictive capacity.
- A stochastic model for pollution and dissolved oxygen in streams. Thayer, Richard P.; Krutchkoff, Richard G. (Water Resources Research Center, Virginia Polytechnic Institute, 1966). The problem of the pollution of the rivers and estuaries of this nation and the world is now receiving considerable attention, and rightly so. Many rivers are so grossly polluted that there is scum on the surface and the odor of methane and hydrogen sulfide is noticeable. Consequently, there is a danger to health. For example, part of the Hudson River in New York is so foul that only eels can live in it. The lower Mississippi River is full of fish that have died. Through a great effort over a long period of time, the sewage load on the Potomac River has been reduced. The river has improved greatly and the more desirable species of fish are beginning to return...