Browsing by Author "Hinkelmann, Klaus H."
- Analysis of Zero-Heavy Data Using a Mixture Model Approach
  Wang, Shin Cheng (Virginia Tech, 1998-03-18)
  The problem of a high proportion of zeroes has long been of interest in data analysis and modeling; however, there is no unique solution. The solution to an individual problem depends on its particular situation and the design of the experiment. For example, different biological, chemical, or physical processes may follow different distributions and behave differently, and different mechanisms may generate the zeroes and require different modeling approaches, so a single general solution is neither possible nor flexible enough. In this dissertation, I focus on cases where zeroes are produced by mechanisms that create distinct sub-populations of zeroes. The dissertation is motivated by problems in chronic toxicity testing, which produces data sets containing a high proportion of zeroes. The analysis of chronic test data is complicated because there are two different sources of zeroes in the data: mortality and non-reproduction. Researchers therefore have to separate zeroes due to mortality from zeroes due to non-reproduction. A mixture-model approach, which combines the two mechanisms, is appropriate here because it can incorporate the extra zeroes arising from mortality. A zero-inflated Poisson (ZIP) model is used for modeling fecundity in the Ceriodaphnia dubia toxicity test. A generalized estimating equation (GEE) based ZIP model is developed to handle longitudinal data with zeroes due to mortality. A joint estimate of inhibition concentration (ICx) is also developed as a potency estimate based on the mixture-model approach. It is found that the ZIP model performs better than the regular Poisson model when mortality is high. This kind of toxicity testing also involves longitudinal data, where the same subject is measured over a period of seven days. The GEE model allows the flexibility to incorporate the extra zeroes and a correlation structure among the repeated measures. The problem of zero-heavy data also exists in environmental studies in which the growth or reproduction rates of multiple species are measured, giving rise to multivariate data. Since the inter-relationships between different species are embedded in the correlation structure, the study of the information in the correlation of the variables, often accessed through principal component analysis, is one of the major interests in multivariate data. In the case where mortality influences the variables of interest but is not itself the subject of interest, the mixture approach can be applied to recover the information in the correlation structure. To investigate the effect of zeroes on multivariate data, simulation studies on principal component analysis are performed, and a method that recovers the information in the correlation structure is presented.
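As a rough, self-contained sketch of the ZIP likelihood described above (the data, starting values, and function names here are hypothetical illustrations, not taken from the dissertation):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, y):
    # params = (logit of p, log of lambda), so the optimizer is unconstrained
    p = 1.0 / (1.0 + np.exp(-params[0]))   # probability of a "structural" zero (e.g., mortality)
    lam = np.exp(params[1])                # Poisson mean of the reproducing component
    n0 = np.sum(y == 0)
    ypos = y[y > 0]
    ll = n0 * np.log(p + (1.0 - p) * np.exp(-lam))   # a zero can come from either source
    ll += np.sum(np.log(1.0 - p) - lam + ypos * np.log(lam) - gammaln(ypos + 1))
    return -ll

# Simulated brood counts with extra zeroes mimicking mortality
rng = np.random.default_rng(1)
y = np.where(rng.random(200) < 0.3, 0, rng.poisson(5.0, 200))
fit = minimize(zip_negloglik, x0=[0.0, 1.0], args=(y,))
print(1.0 / (1.0 + np.exp(-fit.x[0])), np.exp(fit.x[1]))   # estimates of p and lambda
```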
- Bilateral Asymmetry in Chickens of Different Genetic Backgrounds
  Yang, Aiming (Virginia Tech, 1998-05-04)
  The dissertation consists of a series of experiments conducted to study the developmental stability of various genetic stocks at different stages in the life cycle. The primary measures of stability were the type and degree of asymmetry of bilateral traits and heterosis. Higher relative asymmetry (RA), defined as (|L-R| / [(L+R)/2]) x 100, was observed in lines of White Leghorns selected for 23 generations for high or low antibody response to sheep red blood cells (SRBC) than in their F1 crosses. The bilateral traits were 39-day shank length and the length and weight of the first primary wing feather. Shank length was measured again on day 49, while body, heart, shank, and lung weights and ceca lengths were obtained on day 56. Heterosis was positive for organ sizes and negative for degree of RA. Shank length and diameter, weight and length of the first primary wing feather, and the distance between the junction of maxilla and mandible and the auditory canal (face length) were used to classify bilateral types and measure RA in six genetic stocks. The stocks were the S23 generation of White Leghorn lines selected for high or low antibody response to SRBC, sublines in which selection had been relaxed for eight generations, and reciprocal crosses of the selected lines. Differences were found among all stocks for the traits measured. The ranking of traits for RA in descending order was face length, shank diameter, feather weight, and shank and feather lengths; the RAs of shank and feather lengths did not differ from each other. The mean RA of the five traits was higher for the two selected lines than for the crosses between them. The RAs of the two lines in which selection had been relaxed were similar to those of the selected lines. In a line of White Rocks selected for 39 generations for low eight-week body weight, the bilateral traits measured were shank length and diameter, face length, and the weight and length of the first primary wing feather of females at 240 days of age. The RAs of individuals that had not commenced egg production by 245 days of age were similar to those that had entered lay; in both cases, these RAs were higher than those of a subline in which selection had been relaxed for four generations. Broiler sire lines had higher RA than dam lines for lung weight at hatch. Heterosis of RAs suggested superior homeostasis in F1 crosses compared with the sire lines. Based on the populations studied, it may be concluded that RAs were trait specific, with the RA of shank length being lower (0 < RA < 2%) than that of lung weight, which was 10% or higher regardless of genetic background. The types of bilateral asymmetry exhibited, although less consistent, showed some regularity: feather weight and ceca weight exhibited antisymmetry across different stocks, while length and width of shank and weight of lung generally showed fluctuating asymmetry. Heart:lung ratios differed among genetic stocks. In White Leghorns, lungs from late embryonic development to 25 days after hatch were heavier in a line with heavier juvenile body weight than in one with lower juvenile body weight. In commercial broilers, heart:lung ratios at hatch were lower, and thus inferior, in parental lines than in their F1 crosses.
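The RA measure defined in this abstract translates directly into code; the measurement values below are hypothetical:

```python
def relative_asymmetry(left, right):
    """RA as defined above: (|L - R| / [(L + R) / 2]) x 100."""
    return abs(left - right) / ((left + right) / 2.0) * 100.0

# Example: shank lengths of 101.2 (left) and 99.6 (right)
print(relative_asymmetry(101.2, 99.6))   # about 1.6, within the 0-2% range reported for shank length
```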
- Connectedness and optimality in multidimensional designs
  Chaffin, Wilkie Willis (Virginia Tech, 1975-05-14)
  Sennetti (1972) showed the existence of minimal multidimensional designs (MMDs) and minimal augmented multidimensional designs (MAMDs) that allow estimation of type I and type II contrasts. For an MMD, only one more design point is required than there are degrees of freedom for the parameter vector. For MAMDs, the number of assemblies added equals the difference between the number of degrees of freedom for the parameter vector and the rank of the design matrix. Using the chain concept of connectedness as defined by Bose (1947), this work suggests a practical procedure for obtaining an MMD for estimating type I contrasts and proves the procedure valid. In addition, a procedure is discussed that may be used to obtain an MMD for estimating type II contrasts. After proof of the validity of the procedure, advantages of this procedure over some other possible procedures for obtaining an MMD are given. It is shown that only a slight modification of the procedure is needed to obtain an MAMD for estimating type II contrasts. If there is a restriction on the number of replicates of factor levels for an experiment, a different approach is suggested. If m_ij denotes the number of replicates of level j of factor F_i, then it is desired to increase the number of estimators for type I contrasts without altering any of the m_ij. The interchange algorithm used by Eccleston and Hedayat (1974) to accomplish this for a proper, locally connected (l-connected) randomized block design is extended to two-factor, no-interaction designs. The design obtained is pseudo-globally connected (pg-connected), thus guaranteeing more estimates for main effect contrasts. In addition, the new design is better than the old with respect to the S-optimality criterion. It is shown that the procedure can also be used in an experiment with two or more factors to pg-connect an l-connected design for two factors. The new design obtained is better than the old with respect to a new criterion, C-optimality. The algorithm described is proved to have no effect on the amount of aliasing (based on a norm suggested by Hedayat, Raktoe, and Federer, 1974) due to a possibly incorrect assumption of no interaction. The use of the interchange algorithm to pg-connect a design for level combinations is suggested because of the increased number of estimators for type II contrasts that may be obtained. A theorem is proved which gives the minimum number of estimates available for estimating a type II contrast if a design is pg-connected for level combinations. The last topic discussed is the use of a criterion for choosing a particular MAMD for estimating type II contrasts. The sequentially S-optimal design is defined; it is shown to be easy to obtain and similar to the S-optimal design.
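To illustrate Bose's chain concept in the two-factor, no-interaction case: a design is connected exactly when the bipartite graph joining factor levels through observed level combinations is connected. A minimal sketch with hypothetical design points (not the dissertation's MMD procedure):

```python
from collections import defaultdict, deque

def is_connected(design_points):
    """design_points: (level of F1, level of F2) pairs. All main-effect
    contrasts are estimable iff the level graph is connected."""
    graph = defaultdict(set)
    for a, b in design_points:
        graph[("F1", a)].add(("F2", b))
        graph[("F2", b)].add(("F1", a))
    nodes = list(graph)
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for w in graph[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(nodes)

print(is_connected([(0, 0), (1, 0), (1, 1), (2, 1)]))  # True: one chain links every level
print(is_connected([(0, 0), (1, 1)]))                  # False: two disconnected pieces
```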
- Construction and Analysis of Linear Trend-Free Factorial Designs Under a General Cost Structure
  Kim, Kiho (Virginia Tech, 1997-07-28)
  When experimental units exhibit a smooth trend over time or in space, random allocation of treatments may no longer be appropriate. Instead, systematic run orders may have to be used to reduce or eliminate the effects of such a trend. The resulting designs are referred to as trend-free designs. We consider here, in particular, linear trend-free designs for factorial treatment structures such that estimates of main effects and two-factor interactions are trend-free. In addition to trend-freeness, we incorporate a general cost structure and propose methods of constructing optimal or near-optimal full or fractional factorial designs. Building upon the generalized foldover scheme (GFS) introduced by Coster and Cheng (1988), we develop a procedure for selection of foldover vectors (SFV), a construction method for an appropriate generator matrix. The final optimal or near-optimal design can then be developed from this generator matrix. To reduce the amount of work, i.e., the large number of possible generator matrices, and to make the whole process easier for a practitioner to use, we introduce the systematic selection of foldover vectors (SSFV). This method does not always produce optimal designs, but it produces practical compromise designs in all cases. The cost structure for factorial designs can be modeled according to the number of level changes for the various factors: in general, if cost is to be kept to a minimum, factor level changes have to be kept to a minimum. This introduces a covariance structure for the observations from such an experiment. We consider the consequences of this covariance structure for the analysis of trend-free factorial designs. We formulate an appropriate underlying mixed linear model and propose an AIC-based method, supported by simulation studies, that leads to a practical linear model when the theoretical model is not feasible. Overall, we show that estimation of main effects and two-factor interactions, trend-freeness, and minimum cost cannot always be achieved simultaneously. As a consequence, compromise designs have to be considered, which satisfy the requirements as far as possible while remaining practical. The proposed methods achieve this aim.
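Since the cost structure here is driven by factor-level changes, a small sketch of counting them for competing run orders may help (the run orders and unit costs are hypothetical; this is not the SFV/SSFV machinery itself):

```python
import numpy as np

def level_change_cost(run_order, unit_costs):
    """run_order: (runs x factors) array in time order;
    unit_costs: cost of one level change per factor."""
    changes = (np.diff(run_order, axis=0) != 0).sum(axis=0)  # changes per factor
    return changes @ np.asarray(unit_costs), changes

# A 2^3 factorial in standard order vs. a one-change-per-run (Gray code) order
standard = np.array([[0,0,0],[1,0,0],[0,1,0],[1,1,0],[0,0,1],[1,0,1],[0,1,1],[1,1,1]])
gray     = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],[0,1,1],[1,1,1],[1,0,1],[0,0,1]])
print(level_change_cost(standard, [1, 1, 1]))  # 11 level changes in total
print(level_change_cost(gray,     [1, 1, 1]))  # 7, the minimum possible for 8 runs
```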
- Construction and properties of Box-Behnken designs
  Jo, Jinnam (Virginia Tech, 1992-10-05)
  Box-Behnken designs are used to estimate parameters in a second-order response surface model (Box and Behnken, 1960). These designs are formed by combining ideas from incomplete block designs (BIBD or PBIBD) and factorial experiments, specifically 2^k full or 2^(k-1) fractional factorials. In this dissertation, a more general mathematical formulation of the Box-Behnken method is provided, a general expression for the coefficient matrix in the least squares analysis for estimating the parameters in the second-order model is derived, and the properties of Box-Behnken designs with respect to the estimability of all parameters in a second-order model are investigated when 2^k full factorials are used. The results show that for all pure quadratic coefficients to be estimable, the PBIB(m) design has to be chosen such that its incidence matrix is of full rank, and for all mixed quadratic coefficients to be estimable the PBIB(m) design has to be chosen such that the parameters λ₁, λ₂, ..., λ_m are all greater than zero. In order to reduce the number of experimental points, the use of 2^(k-1) fractional factorials instead of 2^k full factorials is considered. Of particular interest and importance are separate considerations of fractions of resolutions III, IV, and V. The construction of Box-Behnken designs using such fractions is described, and the properties of the designs concerning estimability of regression coefficients are investigated. Designs obtained from resolution V fractions have the same properties as those using full factorials; resolution III and IV fractions may lead to non-estimability of certain coefficients and to correlated estimators. The final topic concerns Box-Behnken designs in which treatments are applied to experimental units sequentially in time or space and in which a linear trend effect may exist. For this situation, one wants to find appropriate run orders yielding a linear trend-free Box-Behnken design, so that the linear trend can be removed by the simple technique of analysis of variance rather than the more complicated analysis of covariance. Construction methods for linear trend-free Box-Behnken designs are introduced for different values of the block size k of the underlying PBIB design. For k = 2 or 3, it may not always be possible to find linear trend-free Box-Behnken designs; for k ≥ 4, they can always be constructed.
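As an illustration of the basic Box and Behnken (1960) construction for small k, where the incomplete block design is simply all pairs of factors (for larger k, BIBDs or PBIBDs with larger block size are used; the function name is ours):

```python
import numpy as np
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """Cross each pair of factors with a 2^2 factorial at levels +/-1,
    holding the remaining factors at 0, then append center runs."""
    runs = []
    for pair in combinations(range(k), 2):
        for signs in product((-1, 1), repeat=2):
            row = [0] * k
            row[pair[0]], row[pair[1]] = signs
            runs.append(row)
    runs += [[0] * k] * n_center
    return np.array(runs)

design = box_behnken(3)
print(design.shape)   # (15, 3): 12 edge points plus 3 center points
```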
- Correlated response and sexual dimorphism in bidirectional selection experiments
  Carte, Ira Franklin (Virginia Tech, 1968-02-05)
  This dissertation involved two experiments: (1) a study of realized heritabilities of correlated traits, and (2) a study of the inheritance of sexual dimorphism of body weight. The first experiment included data from four generations of double two-way selection for body weight and breast angle at eight weeks of age. Breast angle was considered a correlated trait in the weight subpopulation and body weight a correlated trait in the angle subpopulation. There was a significant divergence between lines for both selected traits. The response to direct selection for breast angle was asymmetrical, with the response in the narrow direction being greater than that in the broad direction. The response of body weight to two-way selection was symmetrical through the F₄ generation. Divergence of body weight between the lines selected for breast angle was significant in the F₁, F₃, and F₄ generations. Divergence of breast angle between the lines selected for body weight was significant in the F₂ and subsequent generations. Heritabilities of the unselected traits were obtained both as the cumulative difference between lines divided by the expected secondary selection differential and as the regression of the cumulative difference between lines on the expected secondary selection differential. The correlated realized heritability of breast angle was greater in the weight lines than the correlated realized heritability of body weight in the angle lines. Examination of the components of the correlated realized heritability showed that they were the ratio of the genetic to phenotypic covariances of the two traits. The second experiment investigated sexual dimorphism for body weight at eight weeks of age. The heritability estimate for sexual dimorphism of this trait was .02, and the genetic correlation of the trait in males and females was .98. The genetic variability (.02) in sexual dimorphism was evidenced by a greater response in males to selection for eight-week body weight than in females. The biological reason for this was additive sex-linkage.
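The two estimators of realized heritability described above reduce to simple arithmetic; the cumulative differentials and divergences below are invented for illustration:

```python
import numpy as np

# Hypothetical cumulative secondary selection differentials (x) and
# cumulative divergence between lines (y) over four generations
cum_sel_diff   = np.array([0.0, 110.0, 215.0, 330.0, 440.0])
cum_divergence = np.array([0.0, 30.0, 62.0, 88.0, 121.0])

h2_ratio = cum_divergence[-1] / cum_sel_diff[-1]          # cumulative-ratio estimator
h2_reg = np.polyfit(cum_sel_diff, cum_divergence, 1)[0]   # regression estimator
print(round(h2_ratio, 3), round(h2_reg, 3))
```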
- Economically optimum design of cusum charts when there is a multiplicity of assignable causes
  Hsu, Margaretha Mei-Ing (Virginia Tech, 1978-10-05)
  This study is concerned with the design of cumulative sum charts based on a minimum-cost criterion when there are multiple assignable causes occurring randomly but with known effect. A cost model is developed that relates the design parameters of a cusum chart (sampling interval, decision limit, reference value, and sample size) and the cost and risk factors of the process to the long-run average loss-cost per hour for the process. Optimum designs for various sets of cost and risk factors are found by minimizing the long-run average loss-cost per hour with respect to the design parameters of the cusum chart. Optimization is accomplished by Brown's method, and a modified Brownian motion approximation is used for calculating ARLs in the cost model. The nature of the loss-cost function is investigated numerically, and the effects of changes in the design parameters and in the cost and risk factors are studied. An investigation of the limiting behavior of the loss-cost function as the decision limit approaches infinity reveals that in some cases there exist points that yield a lower loss-cost than the local minimum obtained by Brown's method. It is conjectured that if the model is extended to include a more realistic assumption about the occurrence of assignable causes, only the local minimum solutions will remain. It is also shown that the multiple assignable cause model can be well approximated by a matched single-cause model, so in practice it may be sufficient to find the optimum design for the matched single-cause model.
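For readers unfamiliar with the chart itself, here is the standard one-sided tabular cusum recursion that these design parameters govern (the data and parameter values are hypothetical, and the economic optimization is not shown):

```python
import numpy as np

def cusum_signal(x, k, h):
    """S_i = max(0, S_{i-1} + (x_i - k)); signal at the first i with S_i > h.
    k is the reference value and h the decision limit."""
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - k))
        if s > h:
            return i
    return None

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0.0, 1.0, 50),   # in control
                    rng.normal(1.0, 1.0, 50)])  # after an assignable cause shifts the mean
print(cusum_signal(x, k=0.5, h=4.0))            # typically signals soon after sample 50
```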
- Evaluating And Interpreting Interactions
  Hinkelmann, Klaus H. (Virginia Tech, 2004-12-13)
  The notion of interaction plays an important, and sometimes frightening, role in the analysis and interpretation of results from observational and experimental studies. In general, results are much easier to explain and to implement if interaction effects are not present. It is for this reason that they are often assumed to be negligible. This may, however, lead to erroneous conclusions and poor actions. One reason why interactions are sometimes feared is limited understanding of what the word "interaction" actually means, in a practical sense and, in particular, in a statistical sense. As far as the latter is concerned, simply stating that an interaction is significant is generally not sufficient. Subsequent interpretation of that finding is needed, and that brings us back to the definition and meaning of interaction within the context of the experimental setting. In the following sections we define and discuss various types of variables that affect the response and the types of interactions among them. These notions are illustrated for one particular experiment to which we return throughout the discussion. To help in the interpretation of interactions we take a closer look at the definitions of two-factor and three-factor interactions in terms of simple effects. This is followed by a discussion of the nature of interactions and the role they play in the context of the experiment, from the statistical point of view and with regard to the interpretation of the results. After a general overview of how to dissect interactions we return to our example and perform a detailed analysis and interpretation of the data using SAS® (SAS Institute, 2000), in particular PROC GLM and some of its options, such as SLICE. We also mention different methods for the analysis when interaction is actually present. We conclude the analytical part with a discussion of a useful graphical method for when no error term is available for testing for interactions. Finally, we summarize the results with some recommendations, reminding the reader that in all of this the experimental design is of fundamental importance.
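A worked example of the "interaction in terms of simple effects" idea, with hypothetical cell means for a 2x2 factorial:

```python
import numpy as np

means = np.array([[10.0, 14.0],    # A low:  (B low, B high)
                  [12.0, 22.0]])   # A high: (B low, B high)

simple_B_at_Alow  = means[0, 1] - means[0, 0]   # effect of B when A is low:  4.0
simple_B_at_Ahigh = means[1, 1] - means[1, 0]   # effect of B when A is high: 10.0

# Under the usual 2^2 convention, the AB interaction effect is half the
# difference of the simple effects; it is nonzero exactly when the effect
# of B depends on the level of A.
AB = (simple_B_at_Ahigh - simple_B_at_Alow) / 2.0
print(AB)   # 3.0
```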
- Fisher Information Test of Normality
  Lee, Yew-Haur Jr. (Virginia Tech, 1998-09-03)
  An extremal property of normal distributions is that they have the smallest Fisher information for location among all distributions with the same variance. A new test of normality proposed by Terrell (1995) utilizes this property by finding the maximum likelihood density constrained to have the expected Fisher information under normality, based on the sample variance. The test statistic is then constructed as a ratio of the resulting likelihood against that of normality. Since the asymptotic distribution of this test statistic is not available, the critical values for n = 3 to 200 have been obtained by simulation and smoothed using polynomials. An extensive power study shows that the test has superior power against distributions that are symmetric and leptokurtic (long-tailed). Another advantage of the test over existing ones is the direct depiction of any deviation from normality in the form of a density estimate, which is evident when the test is applied to several real data sets. Testing of normality in residuals is also investigated. Various approaches to dealing with residuals that are possibly heteroscedastic and correlated suffer from a loss of power. The approach with the fewest undesirable features is to use the ordinary least squares (OLS) residuals in place of independent observations. Simulations show that one has to be careful about the levels of the normality tests and also in generalizing the results.
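The extremal property that motivates the test can be checked numerically: the Fisher information for location, I(f) = ∫ (f'(x))²/f(x) dx, equals 1/σ² for the normal and is larger for any other density with the same variance. A sketch (our own numeric check, not Terrell's statistic):

```python
import numpy as np

def fisher_info_location(pdf, lo, hi, n=200001):
    """Trapezoid-rule evaluation of I(f) = integral of (f')^2 / f."""
    x = np.linspace(lo, hi, n)
    f = pdf(x)
    fp = np.gradient(f, x)
    g = fp**2 / f
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))

normal = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # variance 1
b = 1 / np.sqrt(2)                                          # Laplace scale giving variance 2b^2 = 1
laplace = lambda x: np.exp(-np.abs(x) / b) / (2 * b)

print(fisher_info_location(normal, -10, 10))    # ~1.0, the minimum
print(fisher_info_location(laplace, -10, 10))   # ~2.0, larger, as the property predicts
```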
- General Weighted Optimality of Designed Experiments
  Stallings, Jonathan W. (Virginia Tech, 2014-04-22)
  Design problems involve finding optimal plans that minimize cost and maximize information about the effects of changing experimental variables on some response. Information is typically measured through statistically meaningful functions, or criteria, of a design's corresponding information matrix. The most common criteria implicitly assume equal interest in all effects, and certain forms of information matrices tend to optimize them. However, these criteria can be poor assessments of a design when there is unequal interest in the experimental effects. Morgan and Wang (2010) addressed this potential pitfall by developing a concise weighting system based on quadratic forms of a diagonal matrix W that allows a researcher to specify the relative importance of information for any effects. They were then able to generate a broad class of weighted optimality criteria that evaluate a design's ability to maximize the weighted information, ultimately targeting those designs that efficiently estimate effects assigned larger weight. This dissertation considers a much broader class of potential weighting systems, and hence weighted criteria, by allowing W to be any symmetric, positive definite matrix. Assuming the response and experimental effects may be expressed as a general linear model, we provide a survey of the standard approach to optimal design based on real-valued, convex functions of information matrices. Motivated by this approach, we introduce fundamental definitions and preliminary results underlying the theory of general weighted optimality. A class of weight matrices is established that allows an experimenter to directly assign weights to a set of estimable functions, and we show how optimality of transformed models may be placed in a weighted optimality context. Straightforward modifications to SAS PROC OPTEX are shown to provide an algorithmic search procedure for weighted optimal designs, including A-optimal incomplete block designs. Finally, a general theory is given for design optimization when only a subset of all estimable functions is assumed to be in the model. We use this to develop a weighted criterion to search for A-optimal completely randomized designs for baseline factorial effects, assuming all high-order interactions are negligible.
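A minimal sketch of a weighted A-type criterion in the diagonal-W spirit of Morgan and Wang (2010): a weighted sum of the variances of the least squares estimators, trace(W (X'X)^{-1}). The design, model matrix, and weights here are hypothetical:

```python
import numpy as np

def weighted_A_value(X, W):
    """trace(W (X'X)^{-1}); with W = I this is the ordinary A-criterion.
    Larger diagonal weights penalize imprecision in those effects more."""
    return np.trace(W @ np.linalg.inv(X.T @ X))

# 2^3 full factorial with intercept; factor 1 is of primary interest
F = np.array([[1,1,1],[1,1,-1],[1,-1,1],[1,-1,-1],
              [-1,1,1],[-1,1,-1],[-1,-1,1],[-1,-1,-1]])
X = np.hstack([np.ones((8, 1)), F])
W = np.diag([1.0, 4.0, 1.0, 1.0])
print(weighted_A_value(X, W))   # 7/8: each estimator has variance 1/8 here
```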
- Genetic Analysis of Sheep Discrete Reproductive Traits Using Simulation and Field Data
  Rao, Shaoqi (Virginia Tech, 1997-01-14)
  The applicability of restricted maximum likelihood (REML) in genetic analyses of categorical data was evaluated using simulation and field data. Four genetic models were used to simulate underlying phenotypic variates, derived as the sum of additive genetic and environmental effects (Models 1A and 1B) or of additive genetic and permanent and temporary environmental effects (Models 2A and 2B). Fifty-eight replicates were simulated, each containing 5000 ewes by 500 sires and 5000 dams, with up to five records per ewe. The usual transformation of heritability estimated on the categorical scale to the normal scale for fertility and litter size performed better for a simple animal model than for a repeatability model. Genetic correlation estimates between the two categorical traits for Models 1B and 2B were .49 ± .01 and .48 ± .04, respectively, close to the expected value of .50. However, permanent and temporary environmental correlations, whose input values were each .50, were underestimated, with estimates of .41 ± .05 and .26 ± .03, respectively, for Model 2B, and .33 ± .02 for the temporary environmental correlation for Model 1B. Bivariate genetic analyses of litter size with growth and fleece traits were carried out by REML for data from Suffolk, Targhee, and Polypay sheep. Direct heritabilities for most growth traits in all breeds were low (<.20). Maternal genetic and maternal permanent environmental effects were important for all body weights except weaning weight at 120 d for Polypay sheep. Estimates of heritability and permanent environmental effects for litter size for these breeds ranged from .09 to .12 and .00 to .05, respectively. Heritabilities for grease fleece weight and fiber diameter were high for Targhee and Polypay sheep. Direct genetic correlations between growth and litter size were favorable for Suffolk and Targhee but weak for Polypay sheep. Genetic correlations between maternal effects for growth and direct effects for litter size were generally small across breeds. Within-trait maternal-direct genetic correlations for growth were variable and generally negative. Direct genetic correlations of litter size with grease fleece weight and fiber diameter were variable across breeds.
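The "usual transformation" from the categorical to the normal (liability) scale is presumably the Dempster and Lerner (1950) threshold correction; under that assumption, a sketch:

```python
from scipy.stats import norm

def h2_observed_to_liability(h2_obs, p):
    """h2_liability = h2_obs * p(1-p) / z^2, where p is the incidence and
    z the standard normal density at the corresponding threshold."""
    z = norm.pdf(norm.ppf(1.0 - p))
    return h2_obs * p * (1.0 - p) / z**2

print(h2_observed_to_liability(0.05, 0.9))   # observed-scale 0.05 at 90% incidence -> ~0.15
```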
- The growth pattern of various body and carcass parts and proportions of beef steers as influenced by different planes of nutrition
  De Ramos, Mariano Bauyon (Virginia Tech, 1968-12-15)
  Ten attributes representing various body and carcass measurements of beef steers were considered for statistical analysis. The slaughter data were obtained from an experiment conducted at Blacksburg, Virginia, by members of the Animal Science Department of the Virginia Polytechnic Institute, described by Kelly et al. (1968). The objective of the study was to obtain estimates of the effects of slaughter time (age) and of the energy level of the ration fed on the body proportions and carcass composition of beef steers from approximately 7 to 30 months of age. The nutritional regimens were: Ration I, maintenance; Ration II, full feed of hay; Ration III, limited concentrate plus full feed of hay; Ration IV, full feed of a fattening ration. According to the design of the experiment, the linear model included the general mean, the effects of slaughter time, breed, trial, slaughter time by trial interaction and slaughter time by breed interaction, and an error term. The first part of the investigation dealt with the analysis of variance of the percentage hot carcass to determine whether the effects of breed and trial were significant. In the second part, only the general mean, the slaughter time effect, and an error term were included in the model. When slaughter time was found significant or highly significant, the sum of squares due to slaughter time was partitioned into regression components to determine which polynomial regression model best described the relationship between the body component mean and age. The results of the statistical analyses were as follows:
  1. Breed and trial effects on the percentage hot carcass were nonsignificant in all rations except ration II, in which trial was significant.
  2. The slaughter time by trial interaction effect was significant in all rations; slaughter time by breed was not.
  3. The percentage hot carcass behaved in a parabolic manner with age, concave upward at lower planes of nutrition; the pattern changed to cubic at higher planes.
  4. Age had no effect on the mean empty-body weight of steers on ration I; the effect was linear on ration II and quadratic on rations III and IV. A similar growth pattern was obtained for the percentage meat.
  5. The percentage of front quarter to total carcass increased proportionately with age in all rations; the opposite trend was obtained for the percentage hind quarter.
  6. The relationship of head weight, expressed as a percentage of live weight, to age or slaughter time was linear, with positive slope on rations I and II and negative slope on rations III and IV.
  7. The percentage moisture in the meat showed a quartic regression with age on ration I, presumably due to random fluctuation of the means. For rations II and III, the relationship was quadratic and concave downward; for ration IV, it was linear with negative slope.
  8. The percentage crude protein and the percentage ash behaved similarly to the percentage head, while the opposite pattern was obtained for the ether extract.
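The partitioning of the slaughter-time sum of squares into polynomial regression components amounts to comparing fits of increasing degree; a sketch on invented means:

```python
import numpy as np

age = np.array([7.0, 12.0, 18.0, 24.0, 30.0])   # months (hypothetical)
y = np.array([55.1, 53.8, 53.6, 54.9, 57.2])    # mean % hot carcass (hypothetical)

for degree in (1, 2, 3):
    resid = y - np.polyval(np.polyfit(age, y, degree), age)
    print(degree, round(np.sum(resid**2), 3))   # residual SS by polynomial degree
# A large drop in residual SS from degree 1 to 2, with little further gain at 3,
# points to the quadratic, concave-upward pattern reported for the lower rations.
```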
- Impact of Environmental Classification on Steel Girder Bridge Elements Using Bridge Inspection Data
  Dadson, Daniel K. (Virginia Tech, 2001-05-14)
  State Departments of Transportation (DOTs) have established Bridge Management Systems (BMS) with procedures to aid in estimating the service lives of bridge components and elements. Service life estimates, together with cost information, are used to develop life-cycle costs of bridges. These estimates are necessary to prioritize and optimize bridge improvement programs within budgetary constraints. Several factors, including age, traffic, and environment, have been identified in the current BMS literature as directly responsible for the deterioration of bridge components or elements. However, no formal methodology exists to determine the effects of the environment, and estimating bridge element service lives without considering environmental factors could lead to biased estimates. A methodology is proposed that uses statistical analysis to determine the effects of environmental regions on service life estimates of a steel girder bridge component (concrete deck) and element/protective system (girder paint), using bridge inspection field data collected by bridge inspectors. Further, existing deterioration models are incapable of using the non-numeric element-level inspection data which most state DOTs have been collecting for nearly thirty years per Federal Highway Administration guidelines. The data formats used were the numerical condition appraisal scale (9 through 0) for the concrete deck component and the letter condition appraisal (G-F-P-C) for the steel girder paint element. The methodology proposes an environmental classification system for use in BMS programs. In addition, least squares means with corresponding standard errors, as well as means with corresponding standard deviations, of service lives were estimated at the component and element/protective system levels. The estimated service lives for steel girder paint can be used in scheduling maintenance, repair, and rehabilitation operations, and in life-cycle cost analysis at the project and network levels. Because of limitations in the concrete deck data sets, the estimated concrete deck service lives are not true estimates of service life but do reflect the influence of environmental exposure characteristics on performance.
- Improvement in accuracy using records lacking sire information in the animal model
  Do, Changhee (Virginia Tech, 1992-05-05)
  Four alternative methods were examined with computer-simulated data to improve the accuracy of animal model genetic evaluations by including records lacking sire identification. Methods 1 and 2 assumed the genetic values of cows missing sire identity were the population average and the management group average, respectively. Methods 3 and 4 accounted for genetic values through producing abilities estimated as random and fixed effects, respectively. Correlations between true and estimated management group effects and breeding values of cows and sires were used as measures of estimation accuracy. The alternative methods were examined to determine 1) the optimum minimum management group size, 2) the increase in estimation accuracy of the alternative methods relative to the conventional method of discarding records lacking sire identity, 3) the effects on accuracy of missing sire identity for sires of lower true breeding value, and 4) the potential to use different alternative methods in herds of varying size, proportion of sire-identified cows, and level of variation. Management group effects were estimated more accurately as the minimum management group size increased (3 to 6 to 9), but breeding values were less accurate. Accuracies of the alternative methods slightly exceeded those of the conventional method for all estimated effects and all minimum group sizes. Accuracies of the alternative and conventional methods were compared in 60 populations with 250 sires and averages of 11,139 cows with 23,849 records. The alternative methods were always more accurate than the conventional method for estimating group effects. Methods 1 and 3 were uniformly more accurate in estimating breeding values of cows, and estimated breeding values of sires more accurately in 55 and 54, respectively, of the 60 populations. Increases in accuracy were largest for method 3 but small for all methods. Intentionally omitting identity for daughters of sires with low breeding value reduced the accuracy of estimated breeding values but not of group effects; even so, the alternative methods remained more accurate than the conventional method. The alternative methods were relatively most accurate for estimating breeding values in small herds with high variance and low proportions of sire-identified cows. Method 3 had uniformly highest accuracy, but method 1 was often similar at lower computing cost.
- Mechanical behavior of red oak in transverse compression as affected by hydro-treatments and its relations to changes in cell wall structure and composition
  Kubinsky, Eugene Joseph (Virginia Tech, 1971-11-15)
  The influence of steam treatment on the properties, structure, and composition of red oak was investigated. Small specimens of red oak heartwood were subjected to steam treatments at atmospheric pressure for 1.5, 3, 6, 12, 24, 48, and 96 hours. Short steaming induced little or no change in the properties and composition of red oak. Prolonged steaming, however, resulted in significant changes in physical and mechanical properties as well as in the structure and chemical composition of the wood. Shrinkage increased significantly with increasing steaming time: after 96 hours of steaming, volumetric shrinkage to the air-dry condition was 4.4 times that of the non-treated wood, indicative of cell wall collapse. Specific gravity and equilibrium moisture content decreased, and air-dry density increased, with increasing steaming time. The color of the wood became darker, and that of the fluid content brighter, with prolonged steaming.
- Mechanism of Flake Drying and Its Correlation to Quality
  Deomano, Edgar Dela Cruz (Virginia Tech, 2001-07-16)
  This research focuses on experimental investigations of the drying and bending properties of wood flakes. Three species (southern yellow pine, sweetgum, and yellow-poplar) were tested. Experiments on flake drying and on the effects of flake properties (cutting direction and dimension) and an external factor (temperature) were used to evaluate the flake drying process. Drying experiments were conducted using a convection oven. Bending properties of dried flakes were also measured: modulus of elasticity (MOE), modulus of rupture (MOR), and strength at proportional limit (SPL) were determined following Methods of Testing Small Clear Specimens of Timber (ASTM D143-94) using a miniature material tester. The drying curve was characterized by a second-order (quadratic) equation, which was then differentiated to obtain the drying-rate curve. Examination of the drying and drying-rate curves revealed that the rate of moisture loss consists of two falling-rate periods; no constant-rate drying period was observed. The first falling-rate period is controlled by convective heat transfer; bound-water diffusion controls the second. Species, cutting direction, dimension, and temperature were all found to have a significant effect on the drying rate of wood flakes. Southern yellow pine had the fastest drying rate, followed by sweetgum and then yellow-poplar; differences in drying rate between species were attributed to differences in specific gravity and other factors. Radially cut specimens had a slower drying rate than tangentially cut specimens. There were also significant differences in drying rate among the four flake dimensions, with thickness the more sensitive dimensional parameter. As expected, drying temperature also had a highly significant effect on drying rate, which increased as drying temperature increased. Simulation of flake drying using a numerical model yielded a different result: simulated flake drying has two drying periods, a constant-rate and a falling-rate period. During the constant-rate period the moisture content of the flake decreases steadily while the surface temperature rises rapidly to the boiling point and remains there. During the falling-rate period, the rate of moisture transport is limited by the ability of water to diffuse through wood, and the flake temperature starts to rise. Bending properties were found to vary between and within the three species. Southern yellow pine had the lowest bending stiffness and strength, followed by sweetgum, while yellow-poplar had the highest bending properties. Radially cut specimens had lower MOE, MOR, and SPL than tangentially cut specimens. Drying temperature also had a significant effect on bending stiffness and strength: a decreasing trend in bending properties was observed as drying temperature increased.
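The quadratic drying-curve characterization and its derivative translate directly into code; the moisture readings below are invented:

```python
import numpy as np

t  = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])   # seconds
mc = np.array([80.0, 52.0, 31.0, 17.0, 8.0, 4.0])     # moisture content, %

a, b, c = np.polyfit(t, mc, 2)          # mc(t) ~ a*t^2 + b*t + c
rate = lambda time: 2 * a * time + b    # drying rate d(mc)/dt
print(round(rate(0.0), 2), round(rate(100.0), 2))
# The rate shrinks in magnitude toward zero over time: falling-rate behavior
# throughout, with no constant-rate period.
```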
- Model selection and analysis tools in response surface modeling of the process mean and variance
  Griffiths, Kristi L. (Virginia Tech, 1995-04-15)
  Product improvement is a serious issue facing industry today. While response surface methods have been developed that address the process mean involved in improving the product, little research has been done on process variability. Lack of quality in a product can be attributed to inconsistency in its performance, highlighting the need for a methodology that addresses process variability. The key to working with process variability lies in the handling of the two types of factors that make up the product design: control and noise factors. Control factors can be fixed both in the lab setting and in the real application. Noise factors, while they can be fixed in the lab setting, are assumed to be random in the real application. A response model can be created which models the response as a function of both the control and noise factors. This work introduces criteria for selecting an appropriate response model, which can then be used to create accurate models for both the process mean and the process variability. These two models can be used to identify settings of the control factors that minimize process variability while maintaining an acceptable process mean. If the response model is known, or at least well estimated, response surface methods can be extended to building various confidence regions related to the process variance, among them a confidence region on the location of minimum process variance and a confidence region on the ratio of the process variance to the error variance. The importance of research on process variability is thus easy to see, and this work offers practical methods for improving the design of a product.
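The control-by-noise mechanism behind the mean and variance models can be made concrete. If the response model is y = β₀ + x'β + z'γ + x'Δz + ε, with noise z random in the field with covariance Σ_z, then E[y] = β₀ + x'β and Var[y] = (γ + Δ'x)'Σ_z(γ + Δ'x) + σ². A sketch with hypothetical coefficients:

```python
import numpy as np

def process_mean_and_variance(x, beta0, beta, gamma, delta, sigma_z, sigma2_eps):
    mean = beta0 + x @ beta
    slope = gamma + delta.T @ x            # sensitivity of y to noise at this x
    return mean, slope @ sigma_z @ slope + sigma2_eps

x = np.array([0.5])
m, v = process_mean_and_variance(x, beta0=50.0, beta=np.array([3.0]),
                                 gamma=np.array([2.0]), delta=np.array([[-4.0]]),
                                 sigma_z=np.array([[1.0]]), sigma2_eps=0.5)
print(m, v)   # at x = 0.5 the noise sensitivity 2 - 4x vanishes: variance is minimized
```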
- A Monte Carlo study of the robustness of the standard deviation of the sample correlation coefficient to the assumption of normality
  Brooks, Camilla Anita (Virginia Tech, 1970-03-08)
  From the case studies presented, one may conclude that for large values of n the standard deviation of r, the usual estimator of the correlation coefficient, and of its transform z are only negligibly affected by variation in skewness or variation in kurtosis, the effect being slightly greater for variation in kurtosis. When there is variation in both skewness and kurtosis, the standard deviations of r and z are more affected by non-normality, a few significantly so. In small samples (n = 10, n = 5), the standard deviations of r and z are quite visibly larger under variations in skewness and in kurtosis, the effect being greater for simultaneous variation of the two. However, all of the values fall within a 95% confidence interval. It would appear, then, that the increase in the standard deviations of r and z is due more to the natural rise of the standard deviation in small samples than to non-normality. Viewing the studies in totality, we conclude that the effect of non-normality on the standard deviation of r for samples of any size is not significant enough for concern; i.e., from this Monte Carlo study we state that the standard deviation of the sample correlation coefficient is robust to the assumption of normality.
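The study's core computation is easy to reproduce in outline; here, for the bivariate normal case, is a Monte Carlo estimate of the standard deviations of r and of Fisher's z = arctanh(r) (sample sizes and replication count are illustrative):

```python
import numpy as np

def sd_of_r(n, rho=0.0, reps=10000, seed=0):
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    rs = np.array([np.corrcoef(rng.multivariate_normal([0, 0], cov, n).T)[0, 1]
                   for _ in range(reps)])
    return rs.std(), np.arctanh(rs).std()

for n in (5, 10, 50):
    sd_r, sd_z = sd_of_r(n)
    print(n, round(sd_r, 3), round(sd_z, 3), round(1 / np.sqrt(n - 3), 3))
    # last column: the large-sample value of sd(z), for comparison
```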
- Multivariate control charts for the mean vector and variance-covariance matrix with variable sampling intervals
  Cho, Gyo-Young (Virginia Tech, 1991-08-10)
  When using control charts to monitor a process, it is frequently necessary to monitor more than one parameter of the process simultaneously. Multivariate control charts for monitoring the mean vector, for monitoring the variance-covariance matrix, and for simultaneously monitoring both, for a process with a multivariate normal distribution, are investigated, and a variable sampling interval (VSI) feature is considered in these charts. Two basic approaches for using past sample information in the development of multivariate control charts are considered. The first approach, called the combine-accumulate approach, reduces each multivariate observation to a univariate statistic and then accumulates over past samples. The second approach, called the accumulate-combine approach, accumulates past sample information for each parameter and then forms a univariate statistic from the multivariate accumulations. The multivariate control charts are compared on the basis of their average time to signal (ATS) performance. The numerical results show that charts based on the accumulate-combine approach are more efficient than the corresponding charts based on the combine-accumulate approach in terms of ATS, and that VSI charts are more efficient than corresponding fixed sampling interval (FSI) charts.
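A minimal sketch of the accumulate-combine idea for the mean vector (essentially a multivariate EWMA chart): accumulate an EWMA per variable first, then combine into one quadratic-form statistic. Data and parameters here are hypothetical:

```python
import numpy as np

def mewma_t2(X, sigma, lam=0.2):
    """Accumulate first (EWMA vector z), then combine (T^2 statistic).
    Signal when T^2 exceeds a limit chosen for the desired in-control ATS."""
    z = np.zeros(X.shape[1])
    t2 = []
    for i, x in enumerate(X, start=1):
        z = lam * x + (1 - lam) * z
        cz = sigma * lam / (2 - lam) * (1 - (1 - lam) ** (2 * i))  # exact cov of z
        t2.append(z @ np.linalg.solve(cz, z))
    return np.array(t2)

rng = np.random.default_rng(3)
sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
X = np.vstack([rng.multivariate_normal([0, 0], sigma, 30),
               rng.multivariate_normal([0.8, 0.8], sigma, 30)])  # small mean shift
print(mewma_t2(X, sigma)[25:35].round(2))   # the statistic climbs after the shift
```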
- Multivariate nonparametric control charts using small samples
  Kapatou, Alexandra (Virginia Tech, 1996-02-05)
  The problem under consideration is the simultaneous monitoring of the means of two or more correlated variables of a process by collecting a small fixed random sample at fixed time intervals. The target values are considered known, whereas the variance-covariance matrix of the data must be estimated. A typical parametric chart to monitor this process would involve the assumption that the data follow a multivariate normal distribution. If this assumption is not reasonable, or if it is difficult to verify, for example in a short production run, a multivariate control chart based on classical nonparametric statistics can be used. Control charts based on the sign and signed-rank statistics are explored. Past sample information for each variable is retained through an exponentially weighted moving average (EWMA) statistic in order to increase the sensitivity of the charts to small shifts from the target. The properties of the charts are evaluated using simulation. Such charts are not distribution-free in the nonparametric sense, but they are more robust than the equivalent parametric chart because, among other reasons, they require only covariance estimates. The nonparametric charts are less efficient than the equivalent parametric chart when the measurements follow a normal distribution, but they improve significantly when the measurements follow a distribution with heavier tails.
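One plausible reading of the sign-statistic chart (our sketch, not necessarily the dissertation's exact construction): compute a per-variable sign statistic within each small sample, smooth each with an EWMA, and combine via a quadratic form:

```python
import numpy as np

def sign_ewma_chart(X, targets, lam=0.1):
    """X: (samples x sample_size x variables). Sign statistic per variable,
    EWMA-smoothed, combined with an (approximate) EWMA covariance."""
    signs = np.sign(X - targets).sum(axis=1)        # per-sample sign statistics
    cov = np.cov(signs[:20].T)                      # estimated from early samples
    z = np.zeros(signs.shape[1])
    stats = []
    for s in signs:
        z = lam * s + (1 - lam) * z
        stats.append(z @ np.linalg.solve(cov * lam / (2 - lam), z))  # asymptotic cov of z
    return np.array(stats)

rng = np.random.default_rng(9)
X = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=(40, 5))
X[20:] += [0.8, 0.0]                                # mean shift after sample 20
print(sign_ewma_chart(X, np.zeros(2))[::5].round(2))
```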