Browsing by Author "Ehrich, Roger W."
Now showing 1 - 20 of 85
- Abnormal Pattern Recognition in Spatial Data. Kou, Yufeng (Virginia Tech, 2006-11-29). In recent years, abnormal spatial pattern recognition has received a great deal of attention from both industry and academia, and has become an important branch of data mining. Abnormal spatial patterns, or spatial outliers, are observations whose characteristics are markedly different from those of their spatial neighbors. The identification of spatial outliers can reveal hidden but valuable knowledge in many applications. For example, it can help locate extreme meteorological events such as tornadoes and hurricanes, identify aberrant genes or tumor cells, discover highway traffic congestion points, pinpoint military targets in satellite images, determine possible locations of oil reservoirs, and detect water pollution incidents. Numerous traditional outlier detection methods have been developed, but they cannot be directly applied to spatial data to extract abnormal patterns. Traditional outlier detection focuses mainly on "global comparison" and identifies deviations from the remainder of the entire data set. In contrast, spatial outlier detection concentrates on discovering neighborhood instabilities that break spatial continuity. In recent years, a number of techniques have been proposed for spatial outlier detection, but they have the following limitations. First, most focus primarily on single-attribute outlier detection. Second, they may not accurately locate outliers when multiple outliers exist in a cluster and correlate with each other. Third, the existing algorithms tend to abstract spatial objects as isolated points and do not consider their geometrical and topological properties, which may lead to inexact results. This dissertation reports a study of the problem of abnormal spatial pattern recognition and proposes a suite of novel algorithms. Contributions include: (1) formal definitions of various spatial outliers, including single-attribute outliers, multi-attribute outliers, and region outliers; (2) a set of algorithms for the accurate detection of single-attribute spatial outliers; (3) a systematic approach to identifying and tracking region outliers in continuous meteorological data sequences; (4) a novel Mahalanobis-distance-based algorithm to detect outliers with multiple attributes; (5) a set of graph-based algorithms to identify point outliers and region outliers; and (6) extensive analysis of experiments on several spatial data sets (e.g., West Nile virus data and NOAA meteorological data) to evaluate the effectiveness and efficiency of the proposed algorithms.
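The multi-attribute detection idea can be illustrated with a short sketch (not the dissertation's exact algorithm): compare each site's attribute vector with the average of its k nearest spatial neighbors, and flag sites whose deviation has a large Mahalanobis distance under the empirical covariance of those deviations. Function names, the choice of k, and the threshold below are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def spatial_outliers(coords, attrs, k=8, threshold=3.0):
    """Flag multi-attribute spatial outliers via the Mahalanobis distance
    between each site's attributes and its k-neighborhood average.
    coords: (n, 2) spatial locations; attrs: (n, d) attribute vectors."""
    tree = cKDTree(coords)
    # k+1 because the nearest neighbor of a point is the point itself
    _, idx = tree.query(coords, k=k + 1)
    neigh_mean = attrs[idx[:, 1:]].mean(axis=1)         # (n, d) neighborhood averages
    diff = attrs - neigh_mean                           # deviation from neighbors
    cov_inv = np.linalg.pinv(np.atleast_2d(np.cov(diff, rowvar=False)))
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared Mahalanobis distance
    return np.where(np.sqrt(d2) > threshold)[0]

# toy example: one site whose two attributes disagree sharply with its neighbors
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
attrs = np.column_stack([coords.sum(axis=1), coords.prod(axis=1)])
attrs += rng.normal(0, 0.1, attrs.shape)
attrs[17] += [15.0, -20.0]
print(spatial_outliers(coords, attrs))  # expected to include index 17
```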
- Activity-based Knowledge Management Tool Design for Educators. Zietz, Jason (Virginia Tech, 2006-08-07). Traditionally, knowledge management tool design has fit into the repository paradigm: a database of stored information that can be queried by an individual seeking information. These tools often rely on two distinct user groups: those who produce the knowledge and those who seek it. The disparity between these two groups, with one group benefiting from the other group's work, is a leading cause of a knowledge management tool's failure. Knowledge management tools also fail because the work processes of target users are not fully understood and therefore not addressed in the tool design. Developing knowledge management tools for educators presents additional obstacles in this already hazardous environment. The traditional impediments found in the development of knowledge management systems, such as trust and incentive concerns, are present along with additional concerns faced by educators, such as strict time and resource constraints. As with teaching itself, educators also hold differing views of how knowledge management should be practiced. Any knowledge management tool for educators must therefore address these obstacles in order to be effective. This research describes the development of an activity-centric knowledge management tool. Activity-centric knowledge management tools avoid the repository paradigm by focusing on the processes in which work is done rather than on storing the information that results from such work. This approach to knowledge management in an educational environment allows teachers to focus on the work involved in teaching rather than on knowledge management itself, which typically involves added tasks such as entering information into a database. First, I describe current knowledge management practices of teachers by reviewing literature from education and knowledge management as well as interviews and surveys of teachers regarding how they incorporate knowledge management into their teaching practices. Next, I examine the development of the Survey Data Visualization Tool, an activity-based knowledge management tool. Finally, I analyze the use of the Survey Data Visualization Tool by a group of teachers.
- Advanced spatial information processes: modeling and application. Zhang, Mingchuan (Virginia Polytechnic Institute and State University, 1985). Making full use of spatial information is an important problem in information processing and decision making. In this dissertation, two Bayesian decision-theoretic frameworks for context classification are developed that make full use of spatial information. The first framework is a new multispectral image context classification technique based on a recursive algorithm for optimal estimation of the state of a two-dimensional discrete Markov Random Field (MRF). The implementation of the recursive algorithm is a form of dynamic programming. The second framework is based on a stochastic relaxation algorithm and Markov-Gibbs Random Fields. The relaxation algorithm constitutes an optimization using annealing. We also discuss how to estimate the Markov Random Field model parameters, which is a key problem in using MRFs in image processing and pattern recognition. The estimation of transition probabilities in a 2-D MRF is converted into two 1-D estimation problems, and a space-varying estimation method for transition probabilities is then discussed.
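As a hedged illustration of reducing the 2-D transition-probability estimation to two 1-D problems, the sketch below simply counts adjacent label transitions along rows and along columns of a class-label image and normalizes the counts; the dissertation's space-varying estimator is more elaborate, and the function and variable names here are illustrative.

```python
import numpy as np

def transition_probs_1d(labels, axis, num_classes):
    """Estimate a 1-D Markov transition matrix P[i, j] = Pr(next = j | current = i)
    by counting adjacent label pairs along the given axis of a 2-D label image."""
    a = np.moveaxis(labels, axis, 1)              # scan along the chosen axis
    cur, nxt = a[:, :-1].ravel(), a[:, 1:].ravel()
    counts = np.zeros((num_classes, num_classes))
    np.add.at(counts, (cur, nxt), 1)
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1, row_sums)

# toy label image with horizontally elongated class regions
labels = np.array([[0, 0, 0, 1, 1],
                   [0, 0, 1, 1, 1],
                   [2, 2, 2, 2, 1]])
P_rows = transition_probs_1d(labels, axis=1, num_classes=3)  # transitions along rows
P_cols = transition_probs_1d(labels, axis=0, num_classes=3)  # transitions along columns
print(P_rows)
print(P_cols)
```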
- Analogical representation in temporal, spatial, and mnemonic reasoning. Hostetter, Michael (Virginia Tech, 1990-01-05). The traditional Euclidean approach to problem solving in AI has been to design representations for a domain and then spend considerable effort on methods for efficiently searching the representation in order to extract the desired information. We feel that the emphasis in problem solving should be on the automated construction of the knowledge representation rather than on searching the representation. This thesis proposes and implements an alternative approach: that of analogical representation. Analogical representation differs from the Euclidean methodology in that it creates a representation of the data from which information can be acquired by simple 'observation.' It is not our goal to propose a system that reduces the NP-hard problem of temporal reasoning to a lower complexity; our approach simply minimizes the number of times that the exponential expense must be paid. Furthermore, the representation can encode uncertainty and unknownness in an efficient manner. This allows for 'intelligent' creation of a representation and removes 'mindless' mechanical search techniques from information retrieval, placing the computational effort where it should be: on representation construction.
- Analysis and Reduction of Moire Patterns in Scanned Halftone Pictures. Liu, Xiangdong (Virginia Tech, 1996-05-01). In this dissertation we provide a comprehensive theory for the formation of a moire pattern in a sampled halftone image. We explore techniques for restoring a sampled halftone image with a moire pattern and techniques for preventing a moire pattern when a halftone picture is scanned. Specifically, we study the frequency, phase, and spatial geometry of a moire pattern. We observe and explain the half-period phase reversal phenomenon that a moire pattern may exhibit. As a case study, we examine the moire patterns generated by a commercial scanner. We propose three restoration methods: a notch filtering method, a simulation method, and a relaxation method. We also describe a moire prevention method, the partial inverse Fourier transform method. Finally, we propose a research agenda for further investigation.
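A minimal sketch of the notch-filtering idea (not the dissertation's full method): transform the scanned image to the frequency domain, zero small neighborhoods around the moire peaks, assumed to have been located beforehand, and transform back. The peak locations, notch radius, and test image below are illustrative.

```python
import numpy as np

def notch_filter(image, peaks, radius=3):
    """Suppress moire components by zeroing small disks around known
    peak frequencies (and their mirror images) in the centered 2-D spectrum.
    peaks: list of (row, col) positions in the fftshifted spectrum."""
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    rr, cc = np.ogrid[:rows, :cols]
    for (pr, pc) in peaks:
        for r0, c0 in [(pr, pc), (rows - pr, cols - pc)]:   # conjugate-symmetric pair
            F[(rr - r0) ** 2 + (cc - c0) ** 2 <= radius ** 2] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# synthetic example: a smooth image plus a sinusoidal "moire" component
rows = cols = 128
y, x = np.mgrid[:rows, :cols]
clean = (x + y) / (rows + cols)
moire = 0.3 * np.cos(2 * np.pi * (8 * x + 5 * y) / rows)
restored = notch_filter(clean + moire, peaks=[(rows // 2 + 5, cols // 2 + 8)])
print(np.abs(restored - clean).mean())   # should be much smaller than 0.3
```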
- Analyzing perspective line drawings using hypothesis based reasoning. Mulgaonkar, Prasanna Govind (Virginia Polytechnic Institute and State University, 1984). One of the important issues in the middle levels of computer vision is how world knowledge should be gradually inserted into the reasoning process. In this dissertation, we develop a technique that uses hypothesis-based reasoning to reason about perspective line drawings using only the constraints supplied by the equations of perspective geometry. We show that the problem is NP-complete and that it can be solved using modular inference engines for propagating constraints over the set of world-level entities. We also show that theorem-proving techniques, with their attendant complexity, are not necessary, because the real-valued attributes of the world can be computed in closed form based only on the spatial relationships between world entities and measurements from the given image.
- Arabic News Text Classification and Summarization: A Case of the Electronic Library Institute SeerQ (ELISQ). Kan'an, Tarek Ghaze (Virginia Tech, 2015-07-21). Arabic news articles in heterogeneous electronic collections are difficult for users to work with. Two problems are that they are not categorized in a way that would aid browsing, and that there are no summaries or detailed metadata records that would be easier to work with than full articles. To address the first problem, schema mapping techniques were adapted to construct a simple taxonomy for Arabic news stories that is compatible with the subject codes of the International Press Telecommunications Council. So that each article would be labeled with the proper taxonomy category, automatic classification methods were researched to identify the most appropriate one. Experiments showed that the best features to use in classification resulted from a new tailored stemming approach (i.e., a new Arabic light stemmer called P-Stemmer). When coupled with binary classification using SVM, the newly developed approach proved superior to state-of-the-art techniques. To address the second problem, i.e., summarization, preliminary work was done with English corpora in the context of a new Problem Based Learning (PBL) course wherein students produced template summaries of big text collections. The techniques used in the course were extended to work with Arabic news. Due to the lack of high-quality tools for Named Entity Recognition (NER) and topic identification for Arabic, two new tools were constructed: RenA, for Arabic NER, and ALDA, an Arabic topic extraction tool using Latent Dirichlet Allocation. Controlled experiments with each of RenA and ALDA, involving Arabic speakers and a randomly selected corpus of 1000 Qatari news articles, showed that the tools produced very good results (i.e., names, organizations, locations, and topics). The categorization, NER, topic identification, and additional information extraction techniques were then combined to produce approximately 120,000 summaries for Qatari news articles, which are searchable, along with the articles, using LucidWorks Fusion, which builds upon Solr software. Evaluation of the summaries showed high ratings based on the 1000-article test corpus. Contributions of this research with Arabic news articles thus include a new test corpus, taxonomy, light stemmer, classification approach, NER tool, topic identification tool, and template-based summarizer, all shown through experimentation to be highly effective.
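The classification pipeline can be sketched as follows, with the caveat that P-Stemmer's actual rule set is not reproduced here: a simplified prefix/suffix stripper stands in for the light stemmer, combined with TF-IDF features and a linear SVM used in one-vs-rest fashion to mirror the binary-classification setup. The affix lists, tiny corpus, and labels are illustrative assumptions; scikit-learn is required.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

PREFIXES = ("وال", "بال", "كال", "فال", "ال", "و")   # illustrative, not P-Stemmer's rules
SUFFIXES = ("ات", "ون", "ين", "ها", "ية", "ة")

def light_stem(token):
    """Strip one common prefix and one common suffix (simplified light stemming)."""
    for p in PREFIXES:
        if token.startswith(p) and len(token) - len(p) >= 3:
            token = token[len(p):]
            break
    for s in SUFFIXES:
        if token.endswith(s) and len(token) - len(s) >= 3:
            token = token[:-len(s)]
            break
    return token

def analyzer(doc):
    return [light_stem(tok) for tok in doc.split()]

# tiny illustrative corpus; the real experiments used large labeled news collections
train_docs = ["الفريق فاز في المباراة", "الحكومة أعلنت ميزانية جديدة",
              "اللاعب سجل هدفين", "الوزير ناقش القانون الجديد"]
train_labels = ["sport", "politics", "sport", "politics"]

model = make_pipeline(TfidfVectorizer(analyzer=analyzer),
                      OneVsRestClassifier(LinearSVC()))
model.fit(train_docs, train_labels)
print(model.predict(["المباراة انتهت بفوز الفريق"]))   # expected: ['sport']
```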
- Automated Quality Assurance for Magnetic Resonance Imaging with Extensions to Diffusion Tensor Imaging. Fitzpatrick, Atiba Omari (Virginia Tech, 2005-06-23). Since its inception, Magnetic Resonance Imaging (MRI) has largely been used for qualitative diagnosis. Radiologists and physicians are increasingly interested in quantitative assessments. The American College of Radiology (ACR) developed an accreditation program that incorporates tests pertaining to quantitative and qualitative analyses, and sites often use the ACR procedure for daily quality assurance (QA) testing. The ACR accreditation program uses information obtained from clinical and phantom images to assess the overall image quality of a scanner. For the phantom assessment, a human observer performs manual tests on T1- and T2-weighted volumes of the provided phantom. As these tests are tedious and time consuming, the primary goal of this research was to fully automate the procedure for QA purposes. The performance of the automated procedure was assessed by comparing its test results with the decisions made by human observers, and the two were well correlated. The automated ACR QA procedure takes approximately 5 minutes to complete, and upon completion the test results are logged in multiple text files. To date, no QA procedure has been reported for Diffusion Tensor Imaging (DTI). Therefore, the secondary goal of this thesis was to develop a DTI QA procedure that assesses two of the features used most in diagnosis, namely diffusion anisotropy and the direction of primary diffusion. To this end, a physical phantom was constructed to model restricted diffusion, relative to axon size, using water-filled polytetrafluoroethylene (PTFE) microbore capillary tubes. Automated procedures were developed to test fractional anisotropy (FA) map contrast and capillary bundle (axon) orientation accuracy.
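The two DTI quantities being checked can be computed per voxel from the diffusion tensor's eigen-decomposition; the sketch below shows the standard fractional anisotropy formula and the primary diffusion direction, not the thesis's full phantom QA procedure. The example tensors are illustrative.

```python
import numpy as np

def fractional_anisotropy(tensor):
    """FA from a 3x3 symmetric diffusion tensor using the standard eigenvalue
    formula FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    lam = np.clip(np.linalg.eigvalsh(tensor), 0, None)   # negative eigenvalues are noise
    denom = np.linalg.norm(lam)
    if denom == 0:
        return 0.0
    return float(np.sqrt(1.5) * np.linalg.norm(lam - lam.mean()) / denom)

def principal_direction(tensor):
    """Unit eigenvector of the largest eigenvalue: the primary diffusion direction."""
    w, v = np.linalg.eigh(tensor)
    return v[:, np.argmax(w)]

# isotropic tensor -> FA near 0; capillary-tube-like tensor -> high FA (about 0.87)
iso = np.diag([1.0, 1.0, 1.0]) * 1e-3
tube = np.diag([1.7e-3, 0.2e-3, 0.2e-3])      # strong diffusion along x only
print(fractional_anisotropy(iso), fractional_anisotropy(tube))
print(principal_direction(tube))              # approximately +/-[1, 0, 0]
```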
- Automatic Detection of Elongated Objects in X-Ray Images of Luggage. Liu, Wenye III (Virginia Tech, 1997-09-05). This thesis presents a part of the research work at Virginia Tech on developing a prototype automatic luggage scanner for explosive detection, and it deals with the automatic detection of elongated objects (detonators) in x-ray images using matched filtering, the Hough transform, and information fusion techniques. A sophisticated algorithm has been developed for detonator detection in x-ray images, and computer software utilizing this algorithm was programmed to implement the detection on both UNIX and PC platforms. A variety of template matching techniques were evaluated, and the filtering parameters (template size, template model, thresholding value, etc.) were optimized. A variation of matched filtering was found to be reasonably effective, while a Gabor-filtering method was found not to be suitable for this problem. The developed software for both single orientations and multiple orientations was tested on x-ray images generated on AS&E and Fiscan inspection systems, and was found to work well for a variety of images. The effects of object overlapping, luggage position on the conveyor, and detonator orientation variation were also investigated using the single-orientation algorithm. It was found that the effectiveness of the software depended on the extent of overlapping as well as on the objects the detonator overlapped. The software was found to work well regardless of the position of the luggage bag on the conveyor, and it was able to tolerate a moderate amount of orientation change.
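A hedged sketch of the multi-orientation matched-filtering idea (not the thesis's tuned algorithm): correlate the image with a thin, zero-mean elongated template at several orientations, keep the maximum response at each pixel, and threshold the normalized response map. The template size, angle step, and threshold below are illustrative; scipy is required.

```python
import numpy as np
from scipy.ndimage import correlate, rotate

def elongated_template(length=21, width=3):
    """Zero-mean bar template: a bright elongated core on a darker surround."""
    t = np.zeros((length, length))
    c = length // 2
    t[:, c - width // 2: c + width // 2 + 1] = 1.0
    return t - t.mean()

def detect_elongated(image, angles=range(0, 180, 15), thresh=0.6):
    """Maximum matched-filter response over orientations, then a relative threshold."""
    template = elongated_template()
    response = np.full(image.shape, -np.inf)
    for a in angles:
        k = rotate(template, a, reshape=False, order=1)
        response = np.maximum(response, correlate(image, k, mode='nearest'))
    response /= response.max()
    return response > thresh

# synthetic "x-ray": dark background with one thin bright bar at a shallow angle
img = np.zeros((100, 100))
for i in range(40):
    img[30 + int(i * 0.5), 20 + i] = 1.0
mask = detect_elongated(img)
print(mask.sum(), "pixels flagged along the bar")
```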
- Automatic detection of roads in spot satellite images. Das, Sujata (Virginia Polytechnic Institute and State University, 1988). The improved spatial resolution of the data from the SPOT satellite provides a substantially better basis for monitoring urban land use and growth with remote sensing than Landsat data. The purpose of this study is to delineate the road network in 20-m resolution SPOT images of urban areas automatically. The roads appear as linear features; however, most edge and line detectors are not effective in detecting roads in these images because of the low signal-to-noise ratio, low contrast, and blur in the imagery. For the automatic recognition of roads, a new line detector based on surface modelling is developed. A line can be approximated by a piecewise straight curve composed of short linear line elements, called linels, each characterized by a direction, a length, and a position. The approach to linel detection is to fit a directional surface that models the ideal local intensity profile of a linel in the least-squares sense. A Gaussian surface with a direction of invariance forms an adequate basis for modelling the ideal local intensity profile of the roads. The residual of the least-squares fit as well as the parameters of the fitted surface characterize the linel detected. The reliable performance of this line operator makes the problems of linking linels more manageable.
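A hedged sketch of the linel idea under simplifying assumptions: within a small window, model the intensity as a Gaussian ridge perpendicular to a candidate direction, fit amplitude and offset by linear least squares for each direction, and keep the direction with the smallest residual; the residual then indicates how line-like the window is. Unlike the thesis's surface model, the ridge width is fixed here, and all names and parameters are illustrative.

```python
import numpy as np

def best_linel(window, sigma=1.0, n_dirs=12):
    """Fit I(x, y) ~ a * exp(-d^2 / (2 sigma^2)) + b for each candidate direction,
    where d is the perpendicular distance to a line through the window centre.
    Returns (direction in radians, residual of the best least-squares fit)."""
    h, w = window.shape
    y, x = np.mgrid[:h, :w]
    x = x - (w - 1) / 2.0
    y = y - (h - 1) / 2.0
    b_vec = window.ravel().astype(float)
    best = (None, np.inf)
    for theta in np.linspace(0, np.pi, n_dirs, endpoint=False):
        d = -np.sin(theta) * x + np.cos(theta) * y        # distance to a line along theta
        ridge = np.exp(-d ** 2 / (2 * sigma ** 2)).ravel()
        A = np.column_stack([ridge, np.ones_like(ridge)])  # unknowns: amplitude a, offset b
        coef, _, _, _ = np.linalg.lstsq(A, b_vec, rcond=None)
        resid = np.linalg.norm(A @ coef - b_vec)
        if resid < best[1]:
            best = (theta, resid)
    return best

# synthetic 9x9 window containing a horizontal bright line through the centre
win = np.zeros((9, 9))
win[4, :] = 1.0
theta, resid = best_linel(win)
print(np.degrees(theta), resid)   # direction near 0 degrees, small residual
```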
- Automatic image analysis methods for use with local operators. Tatem, James E. (Virginia Tech, 1990-05-01). Just as image processing and image databases have moved out of the lab and into the office environment, so has the need for image enhancement. Image scanners must be able to capture and store a wide variety of information, including faded documents, carbon copies, signatures, postmarks, etc. OCR systems put further demands on scanned image quality in terms of low noise and unbroken, disconnected characters. Straight thresholding techniques do not always meet the performance requirements, but some of these problems can be solved by applying simple image processing techniques. This, however, places more burden on the users to control the image enhancement techniques, and the users, most of whom have little technical background, want no part in adjusting parameters. This paper proposes a method of examining small windows of the image to derive parameter settings autonomously. Histograms allow rudimentary measures to be used in setting parameters for edge detection, non-linear filters, and point operators such as non-linear gray scale mapping. Some examples of automatic parameter setting are given in chapter three.
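As a stand-in illustration of deriving a parameter from a small window's histogram (not the thesis's particular measures), the sketch below picks a per-window binarization threshold with Otsu's histogram criterion and applies it tile by tile, which handles a background whose brightness drifts across the page. The window size and toy image are illustrative.

```python
import numpy as np

def otsu_threshold(window, bins=256):
    """Pick the gray level that maximizes between-class variance of the window histogram."""
    hist, edges = np.histogram(window, bins=bins, range=(0, 255))
    p = hist.astype(float) / max(hist.sum(), 1)
    levels = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                        # class probability below each level
    m = np.cumsum(p * levels)                # cumulative mean
    mg = m[-1]                               # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        between = (mg * w0 - m) ** 2 / (w0 * (1 - w0))
    between[~np.isfinite(between)] = 0
    return levels[np.argmax(between)]

def local_thresholds(image, win=32):
    """Threshold each win x win tile with its own histogram-derived value."""
    out = np.zeros_like(image, dtype=bool)
    for r in range(0, image.shape[0], win):
        for c in range(0, image.shape[1], win):
            tile = image[r:r + win, c:c + win]
            out[r:r + win, c:c + win] = tile > otsu_threshold(tile)
    return out

# faded-document toy example: dark text (~60) on a background whose brightness drifts
rng = np.random.default_rng(1)
img = np.fromfunction(lambda r, c: 150 + 0.5 * c, (64, 128)) + rng.normal(0, 5, (64, 128))
img[20:22, 10:110] = 60.0                    # a "text stroke"
binary = local_thresholds(img)
print(binary[20:22, 10:110].mean())          # stroke pixels mostly mapped to False (dark)
```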
- Cellular automata models for excitable media. Weimar, Jörg Richard (Virginia Tech, 1991-05-15). A cellular automaton is developed for simulating excitable media. First, general "masks" as discrete approximations to the diffusion equation are examined, showing how to calculate the diffusion coefficient from the elements of the mask. The mask is then combined with a thresholding operation to simulate the propagation of waves (shock fronts) in excitable media, showing that (for well-chosen masks) the waves obey a linear "speed-curvature" relation with slope given by the predicted diffusion coefficient. The utility of different masks in terms of computational efficiency and adherence to a linear speed-curvature relation is assessed. Then, a cellular automaton model for wave propagation in reaction-diffusion systems is constructed based on these "masks" for the diffusion component and on singular perturbation analysis for the reaction component. The cellular automaton is used to model spiral waves in the Belousov-Zhabotinskii reaction. The behavior of the spiral waves and the movement of the spiral tip are analyzed. By comparing these results to solutions of the Oregonator PDE model, the automaton is shown to be a useful and efficient replacement for the standard numerical solution of the PDEs.
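A hedged sketch of the mask-plus-threshold idea under simplifying assumptions: for a normalized, symmetric mask applied once per unit time step on a unit grid, the implied diffusion coefficient along an axis is half the mask's second moment along that axis (D = sigma^2 / (2 dt)), and one automaton step diffuses the activator and then excites resting cells that exceed a threshold. No recovery dynamics are modeled, and all names and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def diffusion_coefficient(mask, dt=1.0, dx=1.0):
    """Implied D along x for a normalized mask applied once per time step:
    D = (second moment of the mask along x) * dx^2 / (2 * dt)."""
    mask = mask / mask.sum()
    h, w = mask.shape
    y, x = np.mgrid[:h, :w]
    x = x - (w - 1) / 2.0
    return (mask * x ** 2).sum() * dx ** 2 / (2.0 * dt)

def ca_step(u, excited, mask, threshold=0.3):
    """One excitable-media update: diffuse the activator with the mask, then
    excite resting cells whose diffused value exceeds the threshold."""
    u_new = convolve(u, mask / mask.sum(), mode='wrap')
    newly_excited = (~excited) & (u_new > threshold)
    u_new[newly_excited] = 1.0               # newly excited cells fire
    return u_new, excited | newly_excited

# 3x3 averaging mask and a planar wave front started at the left edge
mask = np.ones((3, 3))
print("implied D =", diffusion_coefficient(mask))   # 1/3 for the uniform 3x3 mask
u = np.zeros((64, 64))
u[:, :2] = 1.0
excited = u > 0.5
for _ in range(5):
    u, excited = ca_step(u, excited, mask)
print("front has advanced to column", np.max(np.where(excited)[1]))
```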
- Characterizing Web Response Time. Liu, Binzhang M.S. (Virginia Tech, 1998-04-22). It is critical to understand WWW latency in order to design better HTTP protocols. In this study we characterize Web response time and examine the effects of proxy caching, network bandwidth, traffic load, persistent connections for a page, and periodicity. Based on studies with four workloads, we show that at least a quarter of the total elapsed time is spent on establishing TCP connections with HTTP/1.0. The distributions of connection time and elapsed time can be modeled using Pearson, Weibull, or log-logistic distributions. We also characterize the effect of a user's network bandwidth on response time: average connection time from a client via a 33.6 Kbps modem is two times longer than that from a client via switched Ethernet. We estimate the elapsed-time savings from using persistent connections for a page to vary from about a quarter to a half. Response times display strong daily and weekly patterns. This study finds that a proxy caching server is sensitive to traffic loads. Contrary to the typical thought about Web proxy caching, this study also finds that a single stand-alone Squid proxy cache does not always reduce response time for our workloads. Implications of these results for future versions of the HTTP protocol and for Web application design are also discussed.
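In the spirit of the measurement described above, the small sketch below times the TCP connect separately from a full non-persistent HTTP/1.0-style request and reports the connect share of the elapsed time; it is an illustrative measurement harness, not the study's instrumentation, and the host is a placeholder.

```python
import socket
import time

def timed_fetch(host, path="/", port=80, timeout=10):
    """Return (connect_time, total_time) for one non-persistent HTTP request,
    mirroring the per-page cost of opening a fresh TCP connection in HTTP/1.0."""
    t0 = time.perf_counter()
    sock = socket.create_connection((host, port), timeout=timeout)
    t_connect = time.perf_counter() - t0
    try:
        request = (f"GET {path} HTTP/1.0\r\nHost: {host}\r\n"
                   "Connection: close\r\n\r\n").encode()
        sock.sendall(request)
        while sock.recv(4096):      # drain the response until the server closes
            pass
    finally:
        sock.close()
    return t_connect, time.perf_counter() - t0

if __name__ == "__main__":
    connect, total = timed_fetch("example.com")   # placeholder host
    print(f"connect {connect * 1000:.1f} ms, total {total * 1000:.1f} ms, "
          f"connect share {connect / total:.0%}")
```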
- Classroom resources and impact on learning. Kurdziolek, Margaret Angela (Virginia Tech, 2011-08-05). In the past, educators and policy makers believed that by providing more resources they could directly improve student learning outcomes. To their frustration, this turns out not to be entirely true. Resources may be necessary, but they are not sufficient. Resources themselves are not self-enacting; that is, they do not make change inevitable. Differences in their effects depend on differences in their use. This is also true of educational technologies. As developers of these technologies, we need to understand how resources fit within the classroom environment as enacted and how they can be used effectively to increase student learning. I report on four case studies conducted within the context of the Scaling-Up SimCalc study. In the study, "treatment" teachers were given a set of new resources to use: a combination of curriculum, educational software, and teacher professional development. "Delayed treatment" (control) teachers were asked to use their usual curriculum. Year-one results demonstrated, through randomized controlled testing, the successful use of technology in class settings; however, there was little information on how the students and teachers actually interacted with the resources. Case study classrooms were selected to examine the effects of variation in computational resource arrangements: one utilized a computer lab, two used mobile laptop carts, and one used a laptop connected to a projector. The first round of coding and analysis shows that the observed classrooms varied not only in their classroom set-ups but also in how teachers and students interacted with the software, the workbooks, and one another. The variety of resource interaction points to the robustness of the SimCalc project: students and teachers can interact with the SimCalc resources in a variety of ways and still achieve student-learning gains. However, subsequent review and analysis of the observation data revealed five themes that suggest commonalities in classroom practices surrounding the use of resources. Two new theoretical constructs, "socio-physical resource richness" and "resource use withitness," help describe (1) the physical and social arrangements of resources and (2) how teachers and students manage resource use.
- Color Face Recognition using Quaternionic Gabor FiltersJones, Creed F. III (Virginia Tech, 2004-12-13)This dissertation reports the development of a technique for automated face recognition, using color images. One of the more powerful techniques for recognition of faces in monochromatic images has been extended to color by the use of hypercomplex numbers called quaternions. Two software implementations have been written of the new method and the analogous method for use on monochromatic images. Test results show that the new method is superior in accuracy to the analogous monochrome method. Although color images are generally collected, the great majority of published research efforts and of commercially available systems use only the intensity features. This surprising fact provided motivation to the three thesis statements proposed in this dissertation. The first is that the use of color information can increase face recognition accuracy. Face images contain many features, some of which are only easily distinguishable using color while others would seem more robust to illumination variation when color is considered. The second thesis statement is that the currently popular technique of graph-based face analysis and matching of features extracted from application of a family of Gabor filters can be extended to use with color. A particular method of defining a filter appropriate for color images is used; the usual complex Gabor filter is adapted to the domain of quaternions.. Four alternative approaches to the extension of complex Gabor filters to quaternions are defined and discussed; the most promising is selected and used as the basis for subsequent implementation and experimentation. The third thesis statement is that statistical analysis can identify portions of the face image that are highly relevant — i.e., locations that are especially well suited for use in face recognition systems. Conventionally, the Gabor-based graph method extracts features at locations that are equally spaced, or perhaps selected manually on a non-uniform graph. We have defined a relevance image, in which the intensity values are computed from the intensity variance across a number of images from different individuals and the mutual information between the pixel distributions of sets of images from different individuals and the same individual. A complete software implementation of the new face recognition method has been developed. Feature vectors called jets are extracted by application of the novel quaternion Gabor filter, and matched against models of other faces. In order to test the validity of the thesis statements, a parallel software implementation of the conventional monochromatic Gabor graph method has been developed and side-by-side testing has been conducted. Testing results show accuracy increases of 3% to 17% in the new color-based method over the conventional monochromatic method. These testing results demonstrate that color information can indeed provide a significant increase in accuracy, that the extension of Gabor filters to color through the use of quaternions does give a viable feature set, and that the face landmarks chosen via statistical methods do have high relevance for face discrimination.
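To ground the terminology, the sketch below shows the monochrome baseline that the dissertation extends: a family of complex Gabor kernels over several frequencies and orientations, with the response magnitudes at a landmark collected into a "jet" and compared by a normalized dot product. The quaternionic color extension itself is not reproduced here, and the kernel parameters and synthetic image are illustrative.

```python
import numpy as np

def gabor_kernel(frequency, theta, sigma=3.0, size=21):
    """Complex Gabor kernel: a Gaussian envelope times a complex sinusoid
    oriented along theta with the given spatial frequency (cycles/pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * frequency * xr)

def jet(image, row, col, frequencies=(0.1, 0.2, 0.3), n_orient=8, size=21):
    """Magnitudes of Gabor responses at one landmark, concatenated into a jet."""
    half = size // 2
    patch = image[row - half:row + half + 1, col - half:col + half + 1]
    responses = []
    for f in frequencies:
        for k in range(n_orient):
            kern = gabor_kernel(f, np.pi * k / n_orient, size=size)
            responses.append(abs(np.sum(patch * np.conj(kern))))
    return np.array(responses)

def jet_similarity(j1, j2):
    """Normalized dot product, the usual similarity measure between two jets."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2)))

# two noisy copies of the same synthetic patch give highly similar jets
rng = np.random.default_rng(2)
base = np.fromfunction(lambda r, c: np.sin(0.2 * r) + np.cos(0.15 * c), (64, 64))
j_a = jet(base + rng.normal(0, 0.05, base.shape), 32, 32)
j_b = jet(base + rng.normal(0, 0.05, base.shape), 32, 32)
print(jet_similarity(j_a, j_b))   # close to 1.0
```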
- Communication of Emotion in Mediated and Technology-Mediated Contexts: Face-to-Face, Telephone, and Instant Messaging. Burge, Jamika D. (Virginia Tech, 2007-04-23). This dissertation work considers communication between people. I look at coordinating dyads (couples in relationships) and people in working relationships to develop an understanding of how people engage in high-stakes, or emotional, communication via various communicative media. The approach for this research is to observe and measure people's behavior during interaction and their subsequent reporting of that behavior and the associated internal experiences. Qualitative and quantitative methods are employed. Quantitative data are analyzed using a range of statistical analyses, including correlation matrices, ANOVAs, and multivariate statistics. Two controlled laboratory experiments involving couples in relationships were conducted for this research. Couples were brought into the lab and argued with each other across one of three technological media: face-to-face, telephone, and instant messaging (IM). In the first couples' experiment, the couples argued for twenty minutes; in the subsequent experiment, couples were encouraged to take as much time as they needed for their arguments. One of the main results from the first experiment is that couples did, indeed, argue when brought into a laboratory setting. One of the important findings from the second experiment is that time did not affect couples' tendency to reach closure during their arguments. This research contributes an examination of how people engage in highly emotional communication using various technological media. In a society with ever-increasing communication needs that require technology, it becomes necessary to study technology's communicative affordances. Understanding the context of highly emotional interactions between members of couples gives insight into how technology meets (or fails to meet) these communication needs.
- Comparative Assessment of Network-Centric Software Architectures. Krishnamurthy, Likhita (Virginia Tech, 2006-05-01). The purpose of this thesis is to characterize, compare, and contrast four network-centric software architectures, namely Client-Server Architecture (CSA), Distributed Objects Architecture (DOA), Service-Oriented Architecture (SOA), and Peer-to-Peer Architecture (PPA), and seven associated frameworks, consisting of .NET, Java EE, CORBA, DCOM, Web Services, Jini, and JXTA, with respect to a set of derived criteria. Network-centric systems are gaining in popularity, as they have the potential to solve more complex problems than we have been able to in the past. However, with the rise of SOA, Web Services, a set of standards widely used for implementing service-oriented solutions, is being touted as the "silver bullet" for all problems afflicting the software engineering domain, with the danger of making other architectures seem obsolete. Thus, there is an urgent need to study the various architectures and frameworks in comparison to each other and to understand their relative merits and demerits for building network-centric systems. The architectures studied here were selected on the basis of their fundamentality and generality; the frameworks were chosen on the basis of their popularity and representativeness for building solutions in a particular architecture. The criteria used for comparative assessment are derived from a combination of two approaches: a close examination of the unique characteristics and requirements of network-centric systems, and an examination of the constraints and mechanisms present in the architectures and frameworks under consideration that may contribute towards realizing those requirements. Not all of the criteria are equally relevant for the architectures and frameworks, and some, when relevant, are relevant in a different sense from one architecture (or framework) to another. One conclusion that can be drawn from this study is that the different architectures are not completely different from each other. In fact, CSA, DOA, and SOA are a natural evolution in that order and share several characteristics. At the same time, significant differences do exist, so it is clearly possible to differentiate one from the other. All three architectures can coexist in a single system or system of systems, but the advantages of each architecture become apparent only when it is used in its proper scope. A sharp difference can, however, be perceived between these three architectures and the peer-to-peer architecture, because PPA aims to solve a totally different class of problems than the other three and hence has certain unique characteristics not observed in the others. Further, all of the frameworks have certain unique architectural features and mechanisms not found in the others that contribute towards achieving network-centric quality characteristics. The two broad frameworks, .NET and Java EE, offer almost equivalent capabilities and features; what can be achieved in one can be achieved in the other. This thesis deals with the study of all four architectures and their related frameworks. The criteria used, while fairly comprehensive, are not exhaustive, and variants of the fundamental architectures are not considered. However, system/software architects seeking an understanding of the tradeoffs involved in using the various architectures and frameworks, and of their subtle nuances, should benefit considerably from this work.
- Computer Analysis of User Interfaces Based on Repetition in Transcripts of User Sessions. Siochi, Antonio C.; Ehrich, Roger W. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1990). It is generally acknowledged that the production of quality user interfaces requires a thorough understanding of the user, and that this involves evaluating the interface by observing the user working with the system or by performing human factors experiments. Such methods traditionally involve the use of videotape, protocol analysis, critical incident analysis, etc. These methods require time-consuming analyses and may be invasive. In addition, the data obtained through such methods represent a relatively small portion of the use of a system. An alternative approach is to record all user input and system output, i.e., to log the user session. Such transcripts can be collected automatically and non-invasively over a long period of time. Unfortunately, this produces voluminous amounts of data. There is, therefore, a need for tools and techniques that allow an evaluator to identify potential performance and usability problems from such data. It is hypothesized that repetition of user actions is an important indicator of potential user interface problems.
- Computer-based user interface evaluation by analysis of repeating usage patterns in transcripts of user sessions. Siochi, Antonio C. (Virginia Polytechnic Institute and State University, 1989). It is generally acknowledged that the production of quality user interfaces requires a thorough understanding of the user, and that this involves evaluating the system by observing the user working with it or by performing human factors experiments. Such methods traditionally involve the use of videotape, protocol analysis, critical incident analysis, etc. These methods require time-consuming analyses and may be invasive. In addition, the data obtained through such methods represent a relatively small portion of the use of a system. An alternative approach is to record all user input and system output in a file, i.e., to log the user session. Such transcripts can be collected automatically and over a long period of time. Unfortunately, this produces voluminous amounts of data. There is, therefore, a need for tools and techniques that allow an evaluator to extract potential performance and usability problems from such data. It is hypothesized that repetition of user actions is an important indicator of potential user interface problems. This research reports on the use of the repetition indicator as a means of studying user session transcripts in the evaluation of user interfaces. The dissertation discusses the algorithms involved, the interactive tool constructed, the results of an extensive application of the technique in the evaluation of a large image-processing system, and extensions and refinements to the technique. Evidence suggests that the hypothesis is justified and that the technique is convincingly useful.
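The repetition indicator can be illustrated with a minimal sketch (not the dissertation's maximal-pattern algorithm): count contiguous command subsequences of a few fixed lengths in a logged transcript and report the most frequent ones as candidates for closer inspection. The transcript, command names, and thresholds below are illustrative.

```python
from collections import Counter

def repeated_patterns(commands, min_len=2, max_len=5, min_count=3):
    """Count contiguous command subsequences of length min_len..max_len and
    return those occurring at least min_count times, most frequent first."""
    counts = Counter()
    for n in range(min_len, max_len + 1):
        for i in range(len(commands) - n + 1):
            counts[tuple(commands[i:i + n])] += 1
    frequent = [(seq, c) for seq, c in counts.items() if c >= min_count]
    return sorted(frequent, key=lambda item: (-item[1], -len(item[0])))

# illustrative transcript: the user keeps re-opening a zoom dialog to adjust it,
# a repetition that might point at an awkward zoom interface
log = ["open", "zoom", "set_factor", "close", "open", "zoom", "set_factor",
       "close", "open", "zoom", "set_factor", "close", "save", "quit"]
for pattern, count in repeated_patterns(log)[:3]:
    print(count, "x", " -> ".join(pattern))
```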
- Contextual Boundary Formation by One-dimensional Edge Detection And Scan Line Matching. Ehrich, Roger W.; Schroeder, F. H. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1980). No abstract available.