Browsing by Author "Henry, Sallie M."
Now showing 1 - 20 of 84
- ADLIF: a structured design language for metric analysis. Selig, Calvin Lee (Virginia Tech, 1987-05-15). Since the inception of software engineering, the major goal has been to control the development and maintenance of reliable software. To this end, many different design methodologies have been presented as a means to improve software quality through semantic clarity and syntactic accuracy during the specification and design phases of the software life cycle. On the other end of the life cycle, software quality metrics have been proposed to supply quantitative measures of the resultant software. This study is an attempt to unify the two concepts by providing a means to determine the quality of a design before its implementation.
- An Application Layer Framework for Location-based Service Discovery and Provisioning for Mobile Devices. Gopinath, Sunil (Virginia Tech, 2001-01-23). There has been a tremendous rise in the use of Wireless Application Protocol (WAP) services for cellular telephones. Such services include electronic mail, printing, fax delivery, and weather reports. But current services are limited in both type and nature. Today, mobile telephone users need access to more dynamic, location-based, distributed services that include both hardware resources, like printers and computers, and software services, like application software. Problems due to mobility include clients disconnecting from the network, services leaving the network, and communication problems. This research proposes and demonstrates the feasibility of a framework for a system to meet such a need. More specifically, this work develops and demonstrates a distributed environment where mobile telephone users have access to services dynamically as they enter and leave different service areas. It also provides a framework to support mobility in the application layer context. This work utilizes Sun Microsystems' JINI connection technology to provide distributed services to mobile telephones over WAP. It provides a prototype system to provide Java-based software services to mobile telephones. The work also provides several optimizations with respect to client communication by harnessing key features of WAP. This provides a robust, dynamic environment for service provisioning.
- The application of structure and code metrics to large scale systems. Canning, James Thomas (Virginia Polytechnic Institute and State University, 1985). This work extends the area of research termed software metrics by applying measures of system structure and measures of system code to three realistic software products. Previous research in this area has typically been limited to the application of code metrics such as lines of code, McCabe's Cyclomatic number, and Halstead's software science variables. However, this research also investigates the relationship of four structure metrics (Henry's Information Flow measure, Woodfield's Syntactic Interconnection Model, Yau and Collofello's Stability measure, and McClure's Invocation complexity) to various observed measures of complexity such as ERRORS, CHANGES, and CODING TIME. These metrics are referred to as structure measures since they measure control flow and data flow interfaces between system components. Spearman correlations between the metrics revealed that the code metrics were similar measures of system complexity, while the structure metrics were typically measuring different dimensions of software. Furthermore, correlating the metrics to observed measures of complexity indicated that the Information Flow metric and the Invocation measure typically performed as well as the three code metrics when project factors and subsystem factors were taken into consideration. However, it was generally true that no single metric was able to satisfactorily identify the variations in the data for a single observed measure of complexity. Trends between many of the metrics and the observed data were identified when individual components were grouped together. Code metrics typically formed groups of increasing complexity which corresponded to increases in the mean values of the observed data. The strength of the Information Flow metric and the Invocation measure is their ability to form a group containing highly complex components, which was found to be populated by outliers in the observed data.
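The Information Flow measure named above is usually computed, following Henry and Kafura's published formulation, as length × (fan-in × fan-out)² per procedure. A minimal sketch of that calculation (the procedure names and counts below are illustrative, not taken from the study):

```python
# Henry-Kafura Information Flow metric:
#   complexity(p) = length(p) * (fan_in(p) * fan_out(p)) ** 2
# fan_in  = number of information flows into a procedure
# fan_out = number of information flows out of a procedure

def information_flow(length: int, fan_in: int, fan_out: int) -> int:
    """Henry-Kafura complexity score for a single procedure."""
    return length * (fan_in * fan_out) ** 2

# Illustrative procedure table (not from the dissertation's data).
procedures = {
    "parse_input":  {"length": 40, "fan_in": 2, "fan_out": 3},
    "update_state": {"length": 25, "fan_in": 4, "fan_out": 1},
}

for name, p in procedures.items():
    score = information_flow(p["length"], p["fan_in"], p["fan_out"])
    print(f"{name}: {score}")
```

The squared fan-in × fan-out term is what makes the metric flag heavily connected components as outliers, which matches the grouping behavior the abstract describes.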
- Applying software maintenance metrics in the object oriented software development life cycle. Li, Wei (Virginia Tech, 1992-09-05). Software complexity metrics have been studied in the procedural paradigm as a quantitative means of assessing the software development process as well as the quality of software products. Several studies have validated that various metrics are useful indicators of maintenance effort in the procedural paradigm. However, software complexity metrics have rarely been studied in the object oriented paradigm. Very few complexity metrics have been proposed to measure object oriented systems, and the proposed ones have not been validated. This research concentrates on several object oriented software complexity metrics and the validation of these metrics with maintenance effort in two commercial systems. The results of an empirical study of the maintenance activities in the two commercial systems are also described. A metric instrumentation in an object oriented software development framework is presented.
- Assessing software quality in Ada based products with the objectives, principles, attributes framework. Bundy, Gary Neal (Virginia Tech, 1990-09-08). This thesis describes the results of a research effort focusing on the validation of a procedure for assessing the quality of an Ada-based product. Starting with the identification of crucial Ada constructs, this thesis outlines a seven-step process for defining metrics that support software quality assessment within a framework based on linkages among software engineering objectives, principles, and attributes. The thesis presents the impact of the use of crucial Ada constructs on the software engineering attributes and describes measurement approaches for assessing that impact. This thesis also outlines a planned research effort to develop an automated analyzer for the assessment of software quality in Ada-based products and plans for validating the assessment procedure.
- The automated assessment of computer software documentation quality using the objectives/principles/attributes framework. Dorsey, Edward Vernon (Virginia Tech, 1992-10-15). Since humans first put pen to paper, people have critically assessed written work; thus, the assessment of documents per se is not new. Only recently, however, has the issue of formalized document quality assessment become feasible. Enabled by the rapid progress in computing technology, the prospect of an automated, formalized system of quality assessment, based on the presence of certain attributes deemed essential to the quality of a document, is feasible. The existing Objectives/Principles/Attributes Framework, previously applied to code assessment, is modified to allow application to documentation quality assessment. An automated procedure for the assessment of software documentation quality and the development of a prototype documentation analyzer are described. A major shortcoming of the many quality metrics that are proposed in computer science is their lack of empirical validation. In pursuit of such necessary validation for the measures proposed within this thesis, a study is performed to determine the agreement of the measures rendered by Docalyze with those of human evaluators. This thesis demonstrates the applicability of a quality assessment framework to the documentation component of a software product. Further, the validity of a subset of the proposed metrics is demonstrated.
- Automatic, incremental, on-the-fly garbage collection of actors. Nelson, Jeffrey Ernest (Virginia Tech, 1989-02-15). Garbage collection is an important topic of research for operating systems, because applications are easier to write and maintain if they are unburdened by the concerns of storage management. The actor computation model is another important topic: it is a powerful, expressive model of concurrent computation. This thesis is motivated by the need for an actor garbage collector for a distributed real-time system under development by the Real-Time Systems Group at Virginia Tech. It is shown that traditional garbage collectors—even those that operate on computational objects—are not sufficient for actors. Three algorithms, with varying degrees of efficiency, are presented as solutions to the actor garbage collection problem. The correctness and execution complexity of the algorithms are derived. Implementation methods are explored, and directions for future research are proposed.
- Belbin's Company Worker, The Self-Perception Inventory, and Their Application to Software Engineering Teams. Schoenhoff, Peter Klaus (Virginia Tech, 2001-12-11). Software engineering often requires a team arrangement because of the size and scope of modern projects. Several team structures have been defined and used, but these structures generally define only the tasks and jobs required for the team. Various process and product metrics seek to improve quality, even though it is generally agreed that the greatest potential benefit lies in people issues. This study uses a team-based personality profiling tool, the Belbin Self-Perception Inventory, to explore the characteristics offered by the Company Worker, one of the team roles defined by Belbin.
- CHARTMAKER: a "true consultant" expert system for designing charts. Shulok, Thomas Aaron (Virginia Tech, 1988-06-05). Expert system technology has produced systems that perform heuristic classification. These systems solve problems of a type determined by the knowledge engineer and the expert at system design time. A "true consultant," on the contrary, applies domain knowledge to solve a problem not previously seen. For example, a graphic design consultant must accept the statement of almost any problem from a client and turn it into a visual design. This thesis reports the successful construction of the first such true consultant for a well-understood domain: the visual design task of chart construction. The system leads a client in a dialogue to define a problem in the client's terms and then maps the problem representation into a knowledge base for constructing charts. Extensions of the technology reported in this thesis may aid the creation of a new class of expert systems.
- Class hierarchy design for space time problems. Chopra, Sanjay (Virginia Tech, 1995-07-06). The purpose of the project is to design a class hierarchy that will aid in the development of simulations for certain space time problems. The class hierarchy and the problem domain to which it applies are illustrated by considering simulations of three representative problems: a pool game, a collision detection system for robot arms, and an automated highway system. The emphasis in the simulations is on the class hierarchy. The class hierarchy contains base classes to model objects, space, time, and interactions among objects. These classes could be applied to other similar problems in the problem domain. For example, the object classes help to model various objects like cars, pool balls, robots, trains, and birds. The space class allows the user to subdivide the problem space into smaller dynamic sub-spaces. The user can define rules to decompose the space into 'n' smaller spaces when there are more than 'x' objects in the space.
- Communicational Measurement. Mayo, Kevin A.; Henry, Sallie M. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1991-05-01). A software system is an aggregate of communicating modules. The interfaces supporting module (procedure, etc.) communication characterize the system. Therefore, understanding these interfaces (areas of communication) gives a better description of system complexity. Understanding in an empirical sense implies measuring, and measuring interfaces involves examining both the communicational environment and the exchanged data. There are several different measures associated with the communication environment. Obviously, the structure or nesting level at the communication point is very interesting. The need to measure the data communicated also raises some very interesting questions concerning data type and expressional form. This paper reports on the efforts at Virginia Tech to measure, and thus capture, the complexities of software interfaces. Analyzing an Ada system of 85,000 lines of code validated the measures proposed here. The results of this research are very encouraging.
- The Comparison and Improvement of Effort Estimates from Three Software Cost Models. Kafura, Dennis G.; Henry, Sallie M.; Gintner, Mark (Department of Computer Science, Virginia Polytechnic Institute & State University, 1987). This paper presents an empirical investigation of effort estimation techniques using three software cost models: Boehm's basic COCOMO model, Thebaut's COPMO model, and Putnam's model. The results, based on proprietary historical project data provided by a major computer manufacturer, are in three parts: (1) a comparison of the three models showing that more accurate estimates are obtained from the COPMO model; (2) improvements in the COPMO estimates by means of two techniques termed "submodeling" and "constructive modeling"; and (3) the definition and evaluation of a promising new technique, termed "adjustment multipliers", for improving a model's estimates. Finally, a second set of industrial data is used to confirm the general applicability of these improvement techniques.
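For context, Boehm's basic COCOMO model referenced above estimates effort in person-months as a × KLOC^b, with a published coefficient pair per development mode. A minimal sketch of that calculation (the 32-KLOC project size is illustrative, not from the paper's proprietary data):

```python
# Basic COCOMO: effort (person-months) = a * KLOC ** b.
# Coefficient pairs (a, b) are Boehm's published values per mode.
COEFFICIENTS = {
    "organic":       (2.4, 1.05),  # small, in-house projects
    "semi-detached": (3.0, 1.12),  # intermediate projects
    "embedded":      (3.6, 1.20),  # tightly constrained projects
}

def basic_cocomo_effort(kloc: float, mode: str) -> float:
    """Estimated effort in person-months for a project of `kloc` thousand lines."""
    a, b = COEFFICIENTS[mode]
    return a * kloc ** b

# Illustrative example: a 32-KLOC organic-mode project.
print(round(basic_cocomo_effort(32.0, "organic"), 1))
```

The exponent b > 1 encodes the diseconomy of scale that motivates comparing such models against historical project data, as the paper does.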
- Comparison of a Graphical and a Textual Design Language Using Software Quality Metrics. Henry, Sallie M.; Goff, Roger (Department of Computer Science, Virginia Polytechnic Institute & State University, 1988). For many years the software engineering community has been attacking the software reliability problem on two fronts: first via design methodologies, languages, and tools as a precheck on quality, and second by measuring the quality of produced software as a postcheck. This research attempts to unify the approach to creating reliable software by providing the ability to measure the quality of a design prior to its implementation. A comparison of a graphical and a textual design language is presented in an effort to support research findings that the human brain works more effectively in images than in text.
- Comparison of an Object-Oriented Programming Language to a Procedural Programming Language for Effectiveness in Program Maintenance. Henry, Sallie M.; Humphrey, Matthew C. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1988). New software tools and methodologies make claims that managers often believe intuitively, without evidence. Many unsupported claims have been made about object-oriented programming. However, without scientific evidence, it is impossible to accept these claims as valid. Although experimentation has been done in the past, most of the research is very recent and the most relevant research has serious drawbacks. This paper describes an experiment which compares the maintainability of two functionally equivalent systems in order to explore the claim that systems developed with object-oriented languages are more easily maintained than those programmed with procedural languages. We found supporting evidence that programmers produce more maintainable code with an object-oriented language than with a standard procedural language.
- Comparison of an object-oriented programming language to a procedural programming language for effectiveness in program maintenance. Humphrey, Matthew Cameron (Virginia Tech, 1988-05-05). New software tools and methodologies make claims that managers often believe intuitively, without evidence. Many unsupported claims have been made about object-oriented programming. However, without rigorous scientific evidence, it is impossible to accept these claims as valid. Although experimentation has been done in the past, most of the research is very recent and the most relevant research has serious drawbacks. This study attempts to empirically verify the claim that object-oriented languages produce programs that are more easily maintained than those programmed with procedural languages. Measurements of subjects performing maintenance tasks on two functionally identical programs, one object-oriented and the other procedure-oriented, show the object-oriented version to be more maintainable.
- Complexity Measurement of a Graphical Programming Language. Henry, Sallie M.; Goff, Roger (Department of Computer Science, Virginia Polytechnic Institute & State University, 1987). For many years the software engineering community has been attacking the software reliability problem on two fronts: first via design methodologies, languages, and tools as a precheck on quality, and second by measuring the quality of produced software as a postcheck. This research attempts to unify the approach to creating reliable software by providing the ability to measure the quality of a design prior to its implementation. Using a graphical design language in an effort to support cognitive science research, we have successfully defined and applied software quality metrics to graphical designs in an effort to predict software quality early in the software lifecycle. Metric values from the graphical design are input to predictor equations, provided in this paper, to give metric values for the resultant source code.
- Complexity measurement of a graphical programming language and comparison of a graphical and a textual design language. Goff, Roger Allen (Virginia Tech, 1987-06-15). For many years the software engineering community has been attacking the software reliability problem on two fronts: first via design methodologies, languages, and tools as a precheck on quality, and second by measuring the quality of produced software as a postcheck. This research attempts to unify the approach to creating reliable software by providing the ability to measure the quality of a design prior to its implementation. Also presented is a comparison of a graphical and a textual design language in an effort to support cognitive science research findings that the human brain works more effectively in images than in text.
- A Controlled Experiment to Evaluate Maintainability of Object-Oriented Software. Henry, Sallie M.; Humphrey, Matthew C. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1990). New software tools and methodologies make claims that managers often believe intuitively, without evidence. Many unsupported claims have been made about object-oriented programming. However, without scientific evidence, it is impossible to accept these claims as valid. Although experimentation has been done in the past, most of the research is very recent and the most relevant research has serious drawbacks. This paper describes an experiment which compares the maintainability of two functionally equivalent systems, in order to explore the claim that systems developed with object-oriented languages are more easily maintained than those programmed with procedural languages. We found supporting evidence that programmers produce more maintainable code with an object-oriented language than with a standard procedural language.
- A controlled experiment to identify and test a representative primitive set of user object-oriented cursor actions. Chase, Joseph D. (Virginia Tech, 1990-07-06). A method for decomposing the user cursor action component of human-computer interfaces into individual components based on four categories (target size, target distance, target direction, and selection mode) was investigated. A primitive task set consisting of the Cartesian product of specific elements of the four categories listed above was proposed based on observation of user tasks, and a cursor action benchmark task set was developed to measure a user's performance for each element of the set of primitive elements with a given cursor control device. An experiment was conducted to test the proposed primitive task set and associated benchmark task set as a predictor of performance for a set of representative graphics tasks. The predicted times and actual times were shown to have very strong correlations, and the data were also shown to conform to Fitts' Law. A description of the experiment, the data collected, and the analysis of these data are included.
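Fitts' Law, to which the data above were shown to conform, is commonly written in the Shannon formulation MT = a + b·log2(D/W + 1), where D is the distance to the target and W its width. A minimal sketch (the regression constants a and b below are illustrative assumptions, not the experiment's fitted coefficients):

```python
import math

# Fitts' Law, Shannon formulation: MT = a + b * log2(D/W + 1).
# a, b are device/user regression constants (illustrative values here,
# not fitted from the thesis data).
def fitts_time(distance: float, width: float,
               a: float = 0.1, b: float = 0.15) -> float:
    """Predicted movement time in seconds for a pointing task."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Illustrative target: 300 px away, 20 px wide -> ID = log2(16) = 4 bits.
print(round(fitts_time(distance=300, width=20), 3))
```

Correlating predicted times of this form against measured times is the standard way such cursor-action data are shown to "conform to Fitts' Law."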
- Definition and evaluation of a synthesis-oriented, user-centered task analysis technique: the Task Mapping Model. Mayo, Kevin A. (Virginia Tech, 1994-12-15). A software system is an aggregate of communicating modules, and there are several different types of communication among these modules (direct, indirect, and global). Therefore, the interfaces among these modules characterize the system and are a major factor in the system's complexity. These interfaces may also reveal and predict inadequacies in the reliability and maintainability of a system. Interfaces are defined early in the development life cycle, at a detailed or high-level design stage. Knowing that these interfaces exist, and knowing their structure, leads us to measure them for an indication of the designed interface complexity. This designed interface complexity can then be utilized for software quality assurance by allowing users to choose from among several designs. With data provided by an Ada software developer, the interface complexity metrics correlated with established metrics, but also found complex interfaces that established metrics missed.