Browsing by Author "Blinn, Christine Elizabeth"
- Estimation of Important Scenic Beauty Covariates from Remotely Sensed Data. Blinn, Christine Elizabeth (Virginia Tech, 2000-06-13).
  The overall objective of this study was to determine whether remotely sensed data could be used to model scenic beauty. Terrestrial digital images taken within forest stands in Prince Edward Gallion State Forest near Farmville, Virginia, were rated for scenic beauty by a group of students to obtain scenic beauty estimates (SBEs). Because inter-rater reliability for the SBEs was low, they were not used in the modeling efforts. Instead, stand parameters collected on tenth-acre plots that have been used in scenic beauty prediction models, such as mean diameter at breast height (dbh), served as the dependent variables in regression analyses. A color-infrared aerial photograph from the National Aerial Photography Program (NAPP) was scanned to a pixel ground resolution of one meter, rectified, and used as the remotely sensed data. Since the aerial photograph was taken in April, only conifer stands were included in the analyses. Summary statistics were obtained from a 23 x 23 pixel window around plot locations in three images: the original image, a texture image created with the variance algorithm and a 7 x 7 window, and the first principal component image. These summary statistics served as the independent variables in the regression analyses. The mean texture digital number for the green band predicted the mean dbh of a plot with an R² of 0.623. At most 44.3 and 27.4 percent of the variability in trees per acre and basal area per acre, respectively, was explained by the models developed in this study. It therefore seems unlikely that the remotely sensed forest stand variables would perform well as surrogates for the field measurements used in scenic quality models. (An illustrative sketch of the texture and window-statistic computation appears after this list.)
- Increasing the Precision of Forest Area Estimates through Improved Sampling for Nearest Neighbor Satellite Image Classification. Blinn, Christine Elizabeth (Virginia Tech, 2005-07-29).
  The impacts of training data sample size and sampling method on the accuracy of forest/nonforest classifications of three mosaicked Landsat ETM+ images with the nearest neighbor decision rule were explored. Large training data pools of single pixels were used in simulations to create samples with three sampling methods (random, stratified random, and systematic) and eight sample sizes (25, 50, 75, 100, 200, 300, 400, and 500). Two forest area estimation techniques were used to estimate the proportion of forest in each image and to calculate precision estimates for forest area. Training data editing was explored as a way to remove problem pixels from the training data pools. All possible combinations of the six non-thermal ETM+ bands were evaluated for every sample draw, and classification accuracies were compared to determine whether all six bands were needed. The utility of separability indices, minimum and average Euclidean distances, and cross-validation accuracies for selecting band combinations, predicting classification accuracies, and assessing sample quality was also evaluated. Larger training data samples produced classifications with higher average accuracies and lower variability, and all three sampling methods performed similarly. Training data editing improved the average classification accuracies by a minimum of 5.45%, 5.31%, and 3.47%, respectively, for the three images. Band combinations with fewer than all six bands almost always produced the maximum classification accuracy for a single sample draw. The number and combination of bands that maximized classification accuracy depended on the characteristics of the individual training data sample draw, the image, the sample size, and, to a lesser extent, the sampling method. None of the three band selection measures selected band combinations that produced higher accuracies on average than all six bands. Cross-validation accuracies at sample size 500 were highly correlated with classification accuracies and provided an indication of sample quality. Collection of a high-quality training data sample is key to the performance of the nearest neighbor classifier; larger samples are necessary to guarantee classifier performance and the utility of cross-validation accuracies. Further research is needed to identify the characteristics of "good" training data samples. (A second sketch after this list illustrates the sample draw, nearest neighbor classification, and cross-validation steps.)
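The texture and window-statistic steps in the first abstract can be sketched briefly. The Python example below is not from the thesis: it assumes a single image band held in a NumPy array and hypothetical plot coordinates, computes a 7 x 7 local-variance texture image, and takes the mean digital number in a 23 x 23 window around the plot, the kind of value used there as a regressor for mean dbh.

```python
# Illustrative sketch only (not from the thesis): a 7 x 7 local-variance
# texture image and a 23 x 23 window summary statistic around a plot.
# The band array, image size, and plot coordinates are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def variance_texture(band, size=7):
    """Local variance in a size x size moving window: E[X^2] - (E[X])^2."""
    band = band.astype(float)
    mean = uniform_filter(band, size=size)
    mean_sq = uniform_filter(band * band, size=size)
    return mean_sq - mean * mean

def window_mean(image, row, col, size=23):
    """Mean digital number in a size x size window centered on (row, col)."""
    half = size // 2
    return float(image[row - half:row + half + 1,
                       col - half:col + half + 1].mean())

# Synthetic stand-in for the green band of the scanned, rectified NAPP photo.
rng = np.random.default_rng(0)
green = rng.integers(0, 256, size=(500, 500))

texture = variance_texture(green, size=7)
mean_texture_dn = window_mean(texture, row=250, col=250, size=23)  # hypothetical plot center
print(mean_texture_dn)  # candidate regressor for mean dbh
```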
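For the second abstract, the sketch below is a rough, hedged illustration of one simulation step using scikit-learn on synthetic data rather than the thesis materials: it draws a simple random training sample of single pixels from a labeled pool, classifies forest/nonforest with a 1-nearest-neighbor decision rule on six bands, computes a cross-validation accuracy for that draw, and estimates the proportion of forest from the predicted labels.

```python
# Illustrative sketch only (not the thesis code): simple random sample draw,
# 1-nearest-neighbor forest/nonforest classification on six bands, and a
# cross-validation accuracy for the draw. The pool, labels, and sample size
# are synthetic assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Training data pool of single pixels: rows are pixels, columns stand in for
# the six non-thermal ETM+ bands; labels are 1 = forest, 0 = nonforest.
pool_X = rng.normal(size=(5000, 6))
pool_y = (pool_X[:, 3] > pool_X[:, 2]).astype(int)  # crude NIR > red proxy

def draw_random_sample(X, y, n, rng):
    """Simple random sample of n single-pixel training observations."""
    idx = rng.choice(len(y), size=n, replace=False)
    return X[idx], y[idx]

sample_X, sample_y = draw_random_sample(pool_X, pool_y, n=200, rng=rng)

clf = KNeighborsClassifier(n_neighbors=1)  # nearest neighbor decision rule
cv_accuracy = cross_val_score(clf, sample_X, sample_y, cv=5).mean()
print(f"cross-validation accuracy for this draw: {cv_accuracy:.3f}")

# Classify the full pool as a stand-in for image pixels and estimate the
# proportion of forest from the predicted labels.
clf.fit(sample_X, sample_y)
forest_proportion = clf.predict(pool_X).mean()
print(f"estimated forest proportion: {forest_proportion:.3f}")
```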