Browsing by Author "Roach, John W."
- Advanced spatial information processes: modeling and application. Zhang, Mingchuan (Virginia Polytechnic Institute and State University, 1985). Making full use of spatial information is an important problem in information processing and decision making. In this dissertation, two Bayesian decision-theoretic frameworks for context classification are developed which make full use of spatial information. The first framework is a new multispectral image context classification technique which is based on a recursive algorithm for optimal estimation of the state of a two-dimensional discrete Markov Random Field (MRF). The implementation of the recursive algorithm is a form of dynamic programming. The second framework is based on a stochastic relaxation algorithm and Markov-Gibbs Random Fields. The relaxation algorithm constitutes an optimization using annealing. We also discuss how to estimate the Markov Random Field model parameters, which is a key problem in using MRFs in image processing and pattern recognition. The estimation of transition probabilities in a 2-D MRF is converted into two 1-D estimation problems. Then a space-varying estimation method for transition probabilities is discussed.
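The reduction of the 2-D estimation problem to two 1-D problems can be sketched as follows (a minimal illustration, not the dissertation's implementation; the pooling scheme and function names are assumptions): adjacent-label transition counts are pooled over all rows to estimate horizontal transition probabilities, and over all columns for vertical ones.

```python
from collections import Counter

def pooled_transitions(sequences):
    """Maximum-likelihood transition probabilities pooled over 1-D label
    sequences: P(b | a) = count(a -> b) / count(a as a predecessor)."""
    counts, totals = Counter(), Counter()
    for seq in sequences:
        counts.update(zip(seq, seq[1:]))   # adjacent label pairs within one sequence
        totals.update(seq[:-1])            # predecessor occurrences
    return {(a, b): c / totals[a] for (a, b), c in counts.items()}

def estimate_2d_as_two_1d(label_image):
    """Split 2-D transition estimation into two 1-D problems:
    one over the rows, one over the columns."""
    rows = [list(r) for r in label_image]
    cols = [list(c) for c in zip(*label_image)]
    return pooled_transitions(rows), pooled_transitions(cols)
```

For example, on the 2x3 label image `[[0, 0, 1], [0, 1, 1]]` the horizontal estimate gives P(1 | 0) = 2/3, since two of the three horizontal successors of label 0 are label 1.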
- Analogical representation in temporal, spatial, and mnemonic reasoning. Hostetter, Michael (Virginia Tech, 1990-01-05). The traditional Euclidean approach to problem solving in AI has always designed representations for a domain and then spent considerable effort on the methods of efficiently searching the representation in order to extract the desired information. We feel that the emphasis in problem solving should be on the automated construction of the knowledge representation and not on the searching of the representation. This thesis proposes and implements an alternative approach: that of analogical representation. Analogical representation differs from the Euclidean methodology in that it creates a representation for the data from which the acquisition of information is done by simple 'observation.' It is not our goal to propose a system that reduces the NP-hard problem of temporal reasoning to a lower complexity. Our approach simply minimizes the number of times that we must pay the exponential expense. Furthermore, the representation can encode uncertainty and unknownness in an efficient manner. This allows for 'intelligent' creation of a representation and removes the 'mindless' mechanical search techniques from information retrieval, placing the computational effort where it should be: on representation construction.
- An Analysis of Conjunctive Goal Planning. Joslin, David E.; Roach, John W. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1986). This thesis develops a formal theory of planning, using a simple paradigm of planning that has been previously explored in work such as GPS, HACKER, STRIPS and NOAH. This thesis analyzes the goal interactions that occur when the goal is stated as a conjunction of subgoals. In this analysis we assume that the problem has a finite state space, and that operators are reversible. Graph theory can be used to characterize these subgoal interactions. The entire state space is treated as a graph, and each subgoal or conjunction of subgoals defines a subgraph. Each subgraph is composed of one or more connected components. Solving each subgoal by choosing a connected component that contains a final goal state is a necessary and sufficient condition for solving any planning problem. In the worst case, analyzing goal interactions is shown to be no more effective than enumerating the state space and searching. This complexity proves that no complete algorithm can solve all planning problems in linear time. The technique of goal ordering is analyzed, along with several extensions to that technique. While a generalization of goal ordering is possible, in the worst case generating the goal order requires as much computation as solving the problem by a brute-force search. A technique called capability analysis, derived from the connected-component results, uses first-order logic to find the constraints that must apply as subgoals are achieved. A partial implementation uses counterfactual logic to identify the components of a world state that prevent the remaining subgoals from being achieved.
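The connected-component condition above is easy to illustrate. With reversible operators the state graph is undirected, so a conjunction of subgoals is solvable exactly when some goal state lies in the same connected component as the start state. The sketch below assumes an explicit neighbor function and is illustrative only, not the thesis's capability-analysis machinery:

```python
from collections import deque

def reachable(start, neighbors):
    """Connected component containing `start`; reversible operators make
    the state graph undirected, so plain BFS suffices."""
    seen = {start}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for t in neighbors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

def subgoal_solvable(start, goal_states, neighbors):
    """Necessary and sufficient test: a goal state shares the start
    state's connected component."""
    return bool(reachable(start, neighbors) & goal_states)
```

As the abstract notes, this test is no cheaper than search in the worst case: the component may be the whole state space.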
- Analyzing perspective line drawings using hypothesis based reasoning. Mulgaonkar, Prasanna Govind (Virginia Polytechnic Institute and State University, 1984). One of the important issues in the middle levels of computer vision is how world knowledge should be gradually inserted into the reasoning process. In this dissertation, we develop a technique which uses hypothesis based reasoning to reason about perspective line drawings using only the constraints supplied by the equations of perspective geometry. We show that the problem is NP-complete, and that it can be solved using modular inference engines for propagating constraints over the set of world level entities. We also show that theorem proving techniques, with their attendant complexity, are not necessary because the real valued attributes of the world can be computed in closed form based only on the spatial relationships between world entities and measurements from the given image.
- An application of artificial intelligence methods to scheduling parallel processors. Steffen, Mitchell S. (Virginia Tech, 1985-08-05). This research investigated applying Artificial Intelligence (AI) methods to develop a scheduling and sequencing system for parallel processors, subject to preference, sequencing, and buffer inventory constraints. Specifically, hierarchical planning and constraint-directed search were used to develop a prototype scheduling system for a case study problem. This research also investigated dividing the scheduling problem into sub-periods to allow parallel scheduling and efficient handling of time-dependent constraints. The prototype system uses problem constraints to define sub-period boundaries and to determine which processors and jobs to include in the sub-period problems. It then solves the sub-period schedules in sequence. The prototype system was tested using operational data from the case study and compared to schedules created by the case study scheduler. The prototype system produced schedules very similar to the human scheduler's, and relaxed constraints only slightly more than the scheduler did in searching for solutions. The success of the prototype system demonstrated: 1) the effectiveness of hierarchical planning and constraint-directed search as methods for developing scheduling systems for parallel processors; 2) that constraint satisfaction, as opposed to solving an objective function, is a useful alternative method for modeling scheduling problems; and 3) that dividing the scheduling problem into sub-period problems reduces the size of the search space encountered in parallel scheduling while allowing fulfillment of time-dependent constraints.
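The thesis's constraint-directed system is not reproduced here, but the structure of solving sub-period schedules in sequence on parallel processors can be sketched with a simple greedy longest-processing-time rule (the heuristic, names, and job format are all illustrative assumptions, not the thesis's method):

```python
import heapq

def schedule_subperiods(subperiods, n_procs):
    """For each sub-period (a list of (job, duration) pairs), assign jobs
    to the currently least-loaded processor, longest job first, and
    return the resulting makespan of each sub-period."""
    makespans = []
    for jobs in subperiods:
        heap = [(0.0, p) for p in range(n_procs)]  # (load, processor id)
        heapq.heapify(heap)
        for _, dur in sorted(jobs, key=lambda j: -j[1]):
            load, p = heapq.heappop(heap)          # least-loaded processor
            heapq.heappush(heap, (load + dur, p))
        makespans.append(max(load for load, _ in heap))
    return makespans
```

In a constraint-directed system like the one described, this greedy assignment would be replaced by a search that also checks preference, sequencing, and buffer-inventory constraints before each placement.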
- Automatic detection of roads in SPOT satellite images. Das, Sujata (Virginia Polytechnic Institute and State University, 1988). The improved spatial resolution of the data from the SPOT satellite provides a substantially better basis for monitoring urban land use and growth with remote sensing than Landsat data. The purpose of this study is to delineate the road network in 20-m resolution SPOT images of urban areas automatically. The roads appear as linear features. However, most edge and line detectors are not effective in detecting roads in these images because of the low signal to noise ratio, low contrast and blur in the imagery. For the automatic recognition of roads, a new line detector based on surface modelling is developed. A line can be approximated by a piecewise straight curve composed of short linear line-elements, called linels, each characterized by a direction, a length and a position. The approach to linel detection is to fit a directional surface that models the ideal local intensity profile of a linel in the least-squares sense. A Gaussian surface with a direction of invariance forms an adequate basis for modelling the ideal local intensity profile of the roads. The residual of the least-squares fit as well as the parameters of the fit surface characterize the linel detected. The reliable performance of this line operator makes the problems of linking linels more manageable.
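The core operation, a least-squares fit of an ideal ridge profile to local intensity data, can be sketched in one dimension (a simplified assumption: only the amplitude and offset of a fixed-width Gaussian profile are fitted; the dissertation's detector fits a full directional surface):

```python
import math

def gaussian_template(length, sigma):
    """Ideal ridge cross-section: a Gaussian bump centered on the window."""
    c = (length - 1) / 2
    return [math.exp(-((i - c) ** 2) / (2 * sigma ** 2)) for i in range(length)]

def fit_ridge(profile, sigma=1.0):
    """Linear least-squares fit of profile ~ a*template + b via the
    2x2 normal equations. Returns (a, b, residual_sum_of_squares);
    a low residual with positive amplitude suggests a linel is present."""
    g = gaussian_template(len(profile), sigma)
    n = len(profile)
    sg, sgg = sum(g), sum(x * x for x in g)
    sp, sgp = sum(profile), sum(x * y for x, y in zip(g, profile))
    det = n * sgg - sg * sg  # nonzero: the template is not constant
    a = (n * sgp - sg * sp) / det
    b = (sp * sgg - sg * sgp) / det
    rss = sum((y - (a * x + b)) ** 2 for x, y in zip(g, profile))
    return a, b, rss
```

As in the abstract, both the residual and the fitted parameters characterize the detection: the residual measures how road-like the profile is, and the amplitude and offset describe its contrast against the background.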
- Automatic Positioning and Design of a Variable Baseline Stereo Boom. Fanto, Peter Louis (Virginia Tech, 2012-07-17). Conventional stereo vision systems rely on two spatially fixed cameras to gather depth information about a scene. The cameras typically have a fixed distance between them, known as the baseline. As the baseline increases, the estimated 3D information becomes more accurate, which makes it advantageous to have as large a baseline as possible. However, large baselines have problems whenever objects approach the cameras. The objects begin to leave the field of view of the cameras, making it impossible to determine where they are located in 3D space. This becomes especially important if an object of interest must be actuated upon and is approached by a vehicle. In an attempt to overcome this limitation, this thesis introduces a variable baseline stereo system that can adjust its baseline automatically based on the location of an object of interest. This allows accurate depth information to be gathered when an object is both near and far. The system was designed to operate at, and automatically move between, a large range of different baselines. This thesis presents the mechanical design of the stereo boom. This is followed by a derivation of a control scheme that adjusts the baseline based on an estimated object location, which is gathered from stereo vision. This algorithm ensures that a certain incident angle on an object of interest is never surpassed. This maximum angle is determined by where a stereo correspondence algorithm, Semi-Global Block Matching, fails to create full reconstructions.
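The geometry behind the baseline/angle trade-off can be sketched with the standard pinhole-stereo relations (a textbook simplification, not the thesis's control derivation; the symmetric-camera assumption and function names are mine):

```python
import math

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified pinhole stereo: Z = f * B / d, with focal length in
    pixels, baseline in meters, and disparity in pixels."""
    return f_px * baseline_m / disparity_px

def max_baseline_for_angle(depth_m, max_incident_deg):
    """Largest baseline that keeps each camera's incident angle on an
    object at depth Z below the limit, assuming the cameras sit
    symmetrically about the object: tan(theta) = (B/2)/Z, so
    B = 2 * Z * tan(theta_max)."""
    return 2 * depth_m * math.tan(math.radians(max_incident_deg))
```

This captures the tension described above: a larger baseline B increases disparity (and thus depth accuracy) for a given depth, but the admissible baseline shrinks linearly as the object gets closer.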
- A behavioral evaluation of command-selection aids for inexperienced computer users. Elkerton, Jay (Virginia Polytechnic Institute and State University, 1985). Two experiments were conducted to determine the feasibility of providing online command-selection aids to novice users of an information retrieval system. The results of the first experiment revealed a difference in the mean and variability of search performance between novice and expert computer users. Half of the novices were performing much like experts, while the rest of the sample was extremely slow. These slower novices were using inefficient scrolling strategies and appeared to be unfamiliar with the structure of the database. The second experiment evaluated whether novices could be assisted or trained with command-selection aids developed from the behavior of experts. The command-selection aids were defined in a 3 X 3 mixed factor design with type of model (frequency, sequence, or plan-based) as the between-subjects variable and dialogue initiative (user, computer, or mixed) as the within-subjects variable. The frequency and sequence models presented and ranked search procedures based on a command-usage profile and a command-transition matrix, respectively. The plan-based model presented an ordered set of search procedures with verbal explanations. All models were constructed for groups of homogeneous search problems selected by a sorting and cluster analysis. The three dialogue initiatives determined whether the user, the computer, or both the user and computer controlled presentation of advice. Administration of the dialogue initiatives was completely counterbalanced and was followed by a final unaided transfer session. As a result of receiving online aiding, the wide-ranging search performance of novice subjects was improved both during assistance and transfer. Performance of aided novices was superior to the slow novices and equal to the fast novices and experts. All three command-selection models were equally effective, with the exception of the sequence model, which sometimes presented frequent and complicated advice. Of the dialogues, mixed-initiative advice was ineffective during the first aiding session, possibly due to the difficulties novices faced deciding whether to receive the suggested assistance. The conclusion of the study was that online command-selection aids can be effective if they provide appropriate feedback and minimize the amount of dialogue in aiding.
- CHARTMAKER: a "true consultant" expert system for designing charts. Shulok, Thomas Aaron (Virginia Tech, 1988-06-05). Expert system technology has produced systems that perform heuristic classification. These systems solve problems of a type determined by the knowledge engineer and the expert at system design time. A "true consultant," on the contrary, applies domain knowledge to solve a problem not previously seen. For example, a graphic design consultant must accept the statement of almost any problem from a client and turn it into a visual design. This thesis reports the successful construction of the first such true consultant for a well-understood domain: the visual design task of chart construction. The system leads a client in a dialogue to define a problem in the client's terms and then maps the problem representation into a knowledge base for constructing charts. Extensions of the technology reported in this thesis may aid the creation of a new class of expert systems.
- A Comparative Evaluation of Indexing Schemes. Guru Prasad, Comarapalyam K.; Roach, John W. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1990). An empirical experiment is reported that compares three indexing techniques used to help answer queries for a medium-sized database. The experiment compares memory and time utilization for B+ tree, superimposed codeword, and perfect hashing indexing techniques. Costs for creating and querying the database are compared over a range of different page and buffer sizes for the database. We compare and contrast the advantages and disadvantages of the indexing techniques.
- A comparison of block-stacking heuristics used by preschool children and classic artificial intelligence planning paradigms. Johnson, Alison Virginia (Virginia Tech, 1988-11-15). Despite the large body of research in psychology concerning human planning and problem-solving both by adults and children, the study of planning and problem-solving in the field of Artificial Intelligence has proceeded along its own development with very little concurrent exploration of the methods people use to plan and solve problems. Some of the classic planning programs are unable to solve problems which are trivial for children, and it may be that by exploration of the methods children use we will discover certain heuristics which can be incorporated into AI planning paradigms. This thesis explores this possibility. Children aged 3 to 5 years were recorded performing a block-stacking task which simulates the type of problem given to planners to test their efficiency. The data were analyzed in order to determine those heuristics which are common to planners and children as well as those which are unique to the children. Based on this analysis, the psychological validity of the planning programs is evaluated.
- Concurrent versus retrospective verbal protocol for comparing window usability. Bowers, Victoria A. (Virginia Tech, 1990-01-07). The measurement of software usability has become an important issue in recent years. Metrics of usability include time, errors, questionnaires, ratings, and results of verbal protocols. Concurrent verbal protocol, a method in which the user "thinks aloud" while completing given tasks, has been heavily employed by software usability researchers who want to know the reason a user is having difficulties. Possible problems associated with using concurrent verbal protocol are (1) that verbalization may interfere with the processing required to complete the task, and (2) that subjects may not be able to monitor and express the information of interest to the researcher. A relatively new approach which may avoid these problems is heavily cued retrospective verbal protocol, in which the user is presented afterward with a representation (a video tape, for example) which helps him recall his thoughts during the task without interfering with task completion. This research compared the performance of subjects while completing tasks using both methods of verbal protocol. The verbal data collected by the two protocol techniques were compared to assess any information differences due to the methods of collection. No performance differences were found between the two protocol methods. Reasons for this lack of degradation due to concurrent verbalization are discussed. The kinds of information gathered were quite different for the two methods, with concurrent protocol subjects giving procedural information and retrospective protocol subjects giving explanations and design statements. Implications for usability testing are discussed. The two methods of protocol were employed in a comparison of two different-size monitors, a 30.48 cm diagonal and a 53.34 cm diagonal. The subjects' performance, as measured by steps to completion, task completion time, and errors committed, was compared across the two monitors. Subjects were required to complete 12 tasks which varied in the difficulty of the windowing required. Subjective data were also collected in the form of task difficulty ratings, as well as a global measure of user satisfaction. These performance measures and subjective measures were compared across protocol methods as well as monitors. Performance data, as well as subjective data, indicate that on tasks that do not require extensive windowing, there is no difference between the two monitor sizes. As windowing difficulty increases, however, the large monitor's advantages become apparent. Tasks with a high level of windowing difficulty are judged to be easier and require fewer steps on the large monitor than on the small monitor.
- Conjunctive polymorphic type checking with explicit types. Flannery, Kevin E. (Virginia Polytechnic Institute and State University, 1989). An expressive type language and the ability to do compile-time type inference are desirable goals in language design, but the attainment of the former may preclude the possibility of the latter. Specifically, the type conjunction operator (type intersection) induces a rich type language at the expense of decidability of the typeable expressions. Two extreme alternatives to this dilemma are to abandon type inference (and force the programmer to, essentially, supply a derivation for his type claims) or to abandon (or restrict) type conjunction. This work presents a third alternative in which the programmer, at times, may be required to supply explicit types in order for type inference to succeed. In this way, the power of conjunctive types is preserved, yet compile-time type inference can be done for a large class of polymorphic functions, including those typeable with parametric types. To this end, we introduce a simple combinator-based language with typing rules based on type conjunction and a subtype relation, of sorts, called "weaker." The validity of the type rules with respect to the usual interpretation of "type" is shown, along with the undecidability of the type relation. It is shown how the computational portion of the language can be modified to accommodate explicit type information which may direct an automatic type derivation. This new language has the principal type property with respect to a decidable relation, although deciding this relation is shown to be an NP-complete problem. The language is extended to accommodate type fixed points, and extended further to allow all expressions with parametric types to be typed automatically, and to accommodate integers, pairs, sums, and abstract types in the form of type generators.
- Constraints, a model of computation. Mantha, Suryanarayana M. (Virginia Tech, 1987-07-05). In this thesis constraint solving/satisfaction is presented as a model of computation. Advantages of using constraints as a paradigm of programming are presented. A semantic schema for constraint based computations is given, following a brief survey of the more important systems based on constraints. These systems range from particular algorithms to problem solvers to constraint based general purpose programming languages. Finally, constraint satisfaction is applied to logic programming and theorem proving. It is shown that incorporating constraint solving in definite clause programs enhances their expressive power. Also, an alternative semantics, based on constraint satisfaction, is given for theorem proving.
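As a minimal illustration of constraint satisfaction as a computational model (a toy example assumed for exposition, not drawn from the thesis), the following enforces arc consistency over finite domains, pruning each variable's domain until every remaining value has support under every binary constraint:

```python
from collections import deque

def revise(domains, x, y, allowed):
    """Drop values of x that no value of y supports under `allowed`."""
    removed = False
    for vx in list(domains[x]):
        if not any((vx, vy) in allowed for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """AC-3: `constraints` maps each ordered pair (x, y) to the set of
    allowed (vx, vy) value pairs; domains are pruned in place."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y, constraints[(x, y)]):
            # x's domain shrank: recheck every arc pointing at x.
            for (a, b) in constraints:
                if b == x and (a, b) != (y, x):
                    queue.append((a, b))
    return domains
```

For the single constraint X < Y with both domains {1, 2, 3}, propagation alone computes the answer: X's domain shrinks to {1, 2} and Y's to {2, 3}, with no search.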
- Cyrano: a meta model for federated database systems. Dzikiewicz, Joseph (Virginia Tech, 1996-05-01). The emergence of new data models requires further research into federated database systems. A federated database system (FDBS) provides uniform access to multiple heterogeneous databases. Most FDBSs provide access to only the older data models such as relational, hierarchical, and network models. A federated system requires a meta data model. The meta model is a uniform data model through which users access data regardless of the data model of the data's native database. This dissertation examines the question of meta models for use in an FDBS that provides access to relational, object oriented, and rule based databases. This dissertation proposes Cyrano, a hybrid of object oriented and rule based data models. The dissertation demonstrates that Cyrano is suitable as a meta model by showing that Cyrano satisfies the following three criteria: 1) Cyrano fully supports relational, object oriented, and rule based member data models. 2) Cyrano provides sufficient capabilities to support integration of heterogeneous databases. 3) Cyrano can be implemented as the meta model of an operational FDBS. This dissertation describes four primary products of this research: 1) The dissertation presents Cyrano, a meta model designed as part of this research that supports both the older and the newer data models. Cyrano is an example of analytic object orientation, a conceptual approach that combines elements of object oriented and rule based data models. 2) The dissertation describes Roxanne, a proof-of-concept FDBS that uses Cyrano as its meta model. 3) The dissertation proposes a set of criteria for the evaluation of meta models and uses these criteria to demonstrate Cyrano's suitability as a meta model. 4) The dissertation presents an object oriented FDBS reference architecture suitable for use in describing and designing an FDBS.
- Deceptive robots. Klock, Brian Lee (Virginia Tech, 1990-08-09). Plan recognition in cooperative or adversarial situations requires the ability to reason with beliefs. The problem is complicated in adversarial situations because an opponent may employ deception. In this case, an agent must also be able to reason about an opponent's beliefs (nested beliefs) as well as his own beliefs. A system has been developed that permits agents to reason with nested beliefs using possible worlds semantics; consistency maintenance is employed to allow agents to revise their beliefs when an inconsistency occurs. The advantages over prior systems are that belief revision occurs without user interaction and that beliefs are treated as objects having equal status with facts; this permits complex interaction between beliefs and actions. The theory includes treatment of many areas necessary to create a system of multiple autonomous reasoning agents. These agents are given the ability to deceive each other and to predict when they are being deceived. The system is shown to be practical by its implementation in a simulation.
- The design of a virtual fact base for Prolog. Haugh, J. Steven (Virginia Tech, 1991-04-08). The fact and rule list internal to Prolog is capable of handling as many facts as available memory resources permit. A solution to this limitation is to store facts on disk, retrieving them into a main memory database buffer only as needed. Allocating a fixed portion of main memory to buffer database facts frees up scarce main memory for more frequently accessed rules and data structures internal to Prolog. The Prolog Database System built in connection with this project transparently stores and retrieves facts on disk and evaluates them in the order they were asserted, allowing for the transfer of existing small scale prototypes into large scale production systems. Since existing relational database techniques were not designed to function in a Prolog environment, where facts are evaluated in the order they were asserted, new database facilities were designed, developed, and integrated into Prolog. These database facilities include a unique page replacement policy designed to minimize expensive page faults during the execution of a Prolog program. The look ahead page replacement policy looks ahead on database pages while they are in main memory in order to determine whether they are likely to be accessed again in the future. In this way, a near optimal working set of database pages is maintained in the database buffer, assisting with minimizing expensive page faults.
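The thesis's look-ahead policy inspects resident database pages to predict future accesses; as a rough stand-in for look-ahead replacement in general (an idealized Bélády-style sketch, not the thesis's algorithm, and it assumes the future reference string is known), the following evicts the page whose next use lies farthest in the future:

```python
def lookahead_faults(refs, frames):
    """Count page faults for reference string `refs` with `frames` buffer
    slots: on a fault with a full buffer, evict the resident page whose
    next use is farthest ahead (never used again counts as infinity)."""
    buf = []
    faults = 0
    for i, page in enumerate(refs):
        if page in buf:
            continue                      # hit: page already buffered
        faults += 1
        if len(buf) < frames:
            buf.append(page)              # free slot available
            continue
        def next_use(p):
            try:
                return refs.index(p, i + 1)
            except ValueError:
                return float("inf")       # page never referenced again
        victim = max(buf, key=next_use)
        buf[buf.index(victim)] = page
    return faults
```

This policy is the theoretical optimum when the future is known exactly; a practical policy like the one described above must instead estimate future accesses from page contents and program state, approximating the same "near optimal working set."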
- Determination of Normal or Abnormal Gait Using a Two-Dimensional Video Camera. Smith, Benjamin Andrew (Virginia Tech, 2007-04-06). The extraction and analysis of human gait characteristics using image sequences, and the subsequent classification of these characteristics, are currently an area of intense research. Recently, the focus of this research area has turned to the realm of computer vision as an unobtrusive way of performing this analysis. With such systems becoming more common, a gait analysis system that will quickly and accurately determine if a subject is walking normally becomes more valuable. Such a system could be used as a preprocessing step in a more sophisticated gait analysis system or could be used for rehabilitation purposes. In this thesis a system is proposed which utilizes a novel fusion of spatial computer vision operations as well as motion in order to accurately and efficiently determine if a subject moving through a scene is walking normally or abnormally. Specifically this system will yield a classification of the type of motion being observed, whether it is a human walking normally or some other kind of motion taking place within the frame. Experimental results will show that the system provides accurate detection of normal walking and can reliably distinguish abnormalities as subtle as limping or walking with a straight leg.
- Development of an object library for a design support system. De Dios, Bettina G. (Virginia Tech, 1990-05-05). This thesis describes the development of an object library for a design support system presently being developed by the Fine Tools Group of the Architecture faculty. The object library is conceived as a library of “types” based on the premise that a major part of design solution is accomplished by reference to prior solutions. It contains graphical representations of physical objects that make up a building, wherein each object representation possesses the capability of being depicted as a building component or as a building product. The proposed object library is organized by a typology. The usage of terms in the typology, in turn, is controlled by a thesaurus which reflects an ordering of terms used to form object descriptors and contains classifiers derived from the CSI and CI/SfB classification systems.
- Dynamics and control of manipulating robots. Varghese, Philip (Virginia Tech, 1987-02-05). The dynamics and control of manipulating robots is currently a major field of research in robotics. Various methods of efficiently formulating the non-linear dynamics have been proposed, with an emphasis on reducing the computations necessary to solve the dynamics. Several control concepts and algorithms have been suggested.