Browsing by Author "Coleman, Garry D."
- A critique of the VPC's planning methodology
  Coleman, Garry D. (Virginia Tech, 1988-12-15)
  The VPC's Planning Methodology (Strategic Performance Improvement Planning Process) has been based primarily on action research. This thesis attempts to externally validate the methodology by asking "has the methodology evolved consistently with the findings of others?" This was accomplished by comparing the methodology to other recent strategic planning/management methodologies and by having planning practitioners and consultants compare the VPC's Methodology to their own. A second objective was to identify potential improvements to the methodology. In most cases, the VPC's Methodology was more comprehensive than the methodologies found in the literature. The only potential shortcoming was the lack of an explicit component for a coordinated strategy, although the methodology includes vision-of-the-future and strategic-objectives components. The planning practitioners and consultants offered several minor suggestions for improvement, but none found any significant shortcomings. Interestingly, none of the practitioners/consultants mentioned the lack of a strategy component; however, at least two of them felt a better link was needed between strategic and tactical objectives. This leads me to believe the VPC's Methodology has evolved consistently with the findings of others. Potential improvements identified include: relocating planning assumptions within the process model, a revised technique for analyzing planning assumptions, the addition of a strategy component, clarification of the role of Key Performance Indicators, clarification of and emphasis on under-utilized components of the methodology, and revisions to the process model.
- An Empirical Analysis of Rating Effectiveness for a State Quality Award
  Sienknecht, Ronald Theodore Jr. (Virginia Tech, 1999-06-28)
  This research clarified existing inconsistencies in the self-assessment literature and added to the body of knowledge on rating effectiveness for organizational assessments by defining relationships among rating effectiveness criteria (ratee, rater, rating scale, rating process) and measures (interrater reliability, halo error, leniency and severity, range restriction), based on an extensive literature review. A research framework was developed from this review and employed to compute rating effectiveness measures at the individual level (i.e., examiner or eight rating scale dimensions) and the sector level (e.g., Private Manufacturing Sector, Private Service Sector, Public Local Sector, Public State & Federal Sector) for a State Quality Award (SQA), using data from the 1998 applications. Interrater reliability (measured by intraclass correlations for each rating scale dimension) was low to moderate and differed by dimension. Halo error (measured by the determinant of the dimension intercorrelation matrix for each examiner) was present for all examiners. Leniency and severity (measured by the presence of a statistically significant rater main effect for each dimension) were present in 11 of 32 cases and differed by dimension. Range restriction (measured by variance analysis for each dimension) was present in 22 of 32 cases and differed by dimension. A post-hoc principal component analysis indicated poor internal reliability for the rating scale. To improve, the SQA should replace the existing rating scale and provide in-depth training on all elements of the rating process. The importance of the SQA using boxplots, histograms, and rating effectiveness measures to make fully informed decisions was discussed.
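  The halo-error index described in the abstract above (the determinant of a dimension intercorrelation matrix) can be sketched in a few lines. This is a hedged illustration only: the ratings below are hypothetical, and the six-applicant, four-dimension layout is an assumption, not the SQA's actual data or dimension count.

  ```python
  import numpy as np

  # Hypothetical ratings: one examiner scores 6 applicants on 4 dimensions.
  # Halo error is indexed by the determinant of the dimension
  # intercorrelation matrix: a determinant near 0 means the dimensions are
  # highly intercorrelated, suggesting a global impression drives all scores.
  ratings = np.array([
      [7, 6, 7, 7],
      [3, 3, 4, 3],
      [5, 5, 5, 6],
      [8, 7, 8, 8],
      [2, 3, 2, 2],
      [6, 6, 7, 6],
  ])

  corr = np.corrcoef(ratings, rowvar=False)  # 4x4 dimension intercorrelations
  halo_index = np.linalg.det(corr)           # near 1 = independent, near 0 = halo
  print(halo_index)
  ```

  With these strongly intercorrelated hypothetical ratings, the determinant comes out close to zero, the pattern the study reports for all examiners.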
- Estimating the impact of third-party evaluator training and characteristics on the scoring of written organizational self-assessments
  Coleman, Garry D. (Virginia Tech, 1996-07-15)
  This study examined the process of third-party scoring of organizational self-assessments. An experiment was conducted to illustrate the magnitude of score consistency and accuracy among evaluators, estimate the impact of frame-of-reference (FOR) training on score consistency and accuracy, and explore the relationship between evaluator characteristics and score accuracy. The organizational self-assessment used was the 1995 Malcolm Baldrige National Quality Award Colony Fasteners Case Study. The subjects were 81 graduate students enrolled in two televised graduate engineering courses with considerable quality management content. Subjects were randomly assigned to groups and randomly assigned to four of the seven categories of the Baldrige Award. Each subject evaluated the case study against two categories prior to the treatment. Subjects in the control group then evaluated two additional categories, after which a two-and-one-half-hour FOR training intervention was provided to all subjects. Next, subjects in the treatment group evaluated their two additional categories. Finally, a questionnaire was administered regarding evaluator characteristics related to previous experience and education. Accuracy was assessed by comparing subjects' scores to experts' scores and calculating indices (elevation and dimensional accuracy) for each subject's scores on each category. Prior to training, no statistical differences were found between groups, but a leniency effect was observed for all subjects. Category 6.0, Business Results, and Category 7.0, Customer Focus and Satisfaction, had statistically smaller score variances than the other five categories. After training, group × time ANOVAs found evidence of an interaction.
  Examination of simple effects found significant differences between the group mean scores for all three items from Category 6.0 and two of the four items from Category 5.0. Significant simple time effects were found for all three items from Category 6.0 for the treatment group. No meaningful differences were found between group score variances. A significant difference in category score variance was seen across categories for the untrained group. Training improved elevation accuracy, but no evidence was seen of effects on dimensional accuracy (DA). Exploratory regression produced a prediction equation for DA with an adjusted R-square of 0.538; predictors included work experience, QA/QC experience, employer's industry, and employer's size.
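  The elevation index mentioned in the abstract above can be sketched as the signed difference between a subject's overall mean score and the experts' overall mean on the same items. This is a minimal illustration under assumptions: the item scores are hypothetical, and the 0-100 percent scale and four-item category are stand-ins, not values from the study.

  ```python
  import numpy as np

  # Hypothetical item scores for one Baldrige category (0-100 percent scale).
  expert_scores  = np.array([40, 50, 30, 45])   # experts' "true" item scores
  subject_scores = np.array([60, 70, 55, 65])   # one subject's item scores

  # Elevation: subject's grand mean minus the experts' grand mean.
  # A positive value indicates leniency relative to the expert standard,
  # the effect observed for all subjects prior to training.
  elevation = subject_scores.mean() - expert_scores.mean()
  print(elevation)  # → 21.25
  ```

  Dimensional accuracy, by contrast, concerns how well the subject's pattern of scores across items tracks the experts' pattern, which is why training can improve one index without measurably affecting the other.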