Browsing by Author "Arthur, James D."
Now showing 1 - 20 of 173
- Abstraction Mechanisms in Support of Top-Down and Bottom-Up Task Specification
  Raghu, K. S.; Arthur, James D. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1988)
  Abstraction is a powerful mechanism for describing objects and relationships from multiple, yet consistent, perspectives. When properly applied to interface design, abstraction mechanisms can provide the interaction flexibility and simplicity so desperately needed and demanded by today's diverse user community. Fundamental to achieving such goals has been the integration of visual programming techniques with a unique blend of abstraction mechanisms to support user interaction and task specification. The research presented in this paper describes crucial abstraction mechanisms employed within the Taskmaster environment to support top-down and bottom-up task specification. In particular, this paper (a) provides an overview of the Taskmaster environment, (b) describes top-down specification based on multi-level, menu-driven interaction and (c) describes bottom-up specification based on cutset identification and pseudo-tool concepts.
- Achieving Asynchronous Speedup While Preserving Synchronous Semantics: An Implementation of Instructional Footprinting in Linda
  Landry, Kenneth D.; Arthur, James D. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1993)
  Linda is a coordination language designed to support process creation and inter-process communication within conventional computational languages. Although the Linda paradigm touts architectural and language independence, it often suffers performance penalties, particularly on local area network platforms. Instructional Footprinting is an optimization technique with the primary goal of enhancing the execution speed of Linda programs. The two main aspects of Instructional Footprinting are instructional decomposition and code motion. This paper addresses the semantic issues encountered when the Linda primitives, IN and RD, are decomposed and moved past other Linda operations. Formal semantics are given as well as results showing significant speedup (as high as 64%) when Instructional Footprinting is used.
  (An illustrative sketch for this entry appears after this listing.)
- ACT++ 3.0: implementation of the actor model using POSIX threads
  Khare, Arjun (Virginia Tech, 1994-07-15)
  The actor model provides a framework for writing concurrent programs. ACT++ is an implementation of the actor model in C++, allowing concurrent programs to be written in an object-oriented style. In ACT++, each actor is an object possessing one or more independent threads of control. Version 2.0 of ACT++ uses the PRESTO threads package. As the PRESTO threads package is available only for certain architectures and operating systems, its use does not meet one of the goals of ACT++, namely portability among a variety of architectures. To facilitate portability, ACT++ 3.0 is written using the IEEE POSIX 1003.4a standard for threads (Pthreads). This project deals with the implementation of ACT++ 3.0, the testing of the implementation, and its performance.
  (An illustrative sketch for this entry appears after this listing.)
- Active Library Resolution in Active Networks
  Lee, David C. (Virginia Tech, 1998-02-20)
  An active network is a computer network in which new protocols can be installed at run-time in any node within the network. For example, the deployment of Internet multicast technology has been slow because service providers have been reluctant to upgrade and reconfigure their routing nodes. Under the active network scheme, users who desire multicast services can have the service automatically installed without any direct intervention by the user or the provider. One major question in realizing active networks is how the code for the new active library can be found, or resolved, and retrieved. A model of the resolution and retrieval mechanisms is the major focus of this research. To validate the model, a proof-of-concept experimental system that realizes a simple active network architecture was developed. An active library resolution service model, suitable for a global Internet, was investigated using this experimental platform and a simulation system. The two protocol components that were built and studied are the active transport protocol and the active library resolution protocol. The experimental and simulation systems were used to evaluate the extensibility, overhead, resolution time, scalability, and policy constraint support of the service. Extensibility and policy constraint support are an integral part of the proposed design. For libraries located on servers that are at most ten hops away from the requesting source, the resolution time is under 2.6 seconds. Simulations of networks of different sizes and with different error rates exhibit linear resolution time and overhead characteristics, which indicates potential scalability. Behavior under high loss rates showed better than expected performance. The results indicate that the library resolution concept is feasible and that the proposed strategy is a good solution.
- Actor systems platform design and implementation of the actor paradigm in a distributed object-oriented environment
  Joshi, Nandan (Virginia Tech, 1993-08-05)
  This project was undertaken as part of an effort to explore the design of object-oriented systems that are distributed, concurrent, real-time and/or embedded in nature. This work seeks to integrate the concurrency features of the actor model in a distributed, object-oriented environment, ESP. The integrated system, called the Actor Systems Platform (ASP), provides a platform for designing concurrent, distributed applications. The actor model provides a mechanism for expressing the inherent concurrency in an application. The concurrency in the application can be exploited by the distributed features available in ESP. The actor abstraction in ASP is provided by an application-level class hierarchy in ESP. The message passing semantics of the actor model are implemented by using special operator overloading in C++. Cboxes are implemented to provide a synchronization mechanism and a means of returning replies. In a concurrent system, simultaneous execution of an object's methods can cause its state to be inconsistent. This is prevented by providing a method locking mechanism using behavior sets. While integrating the concurrency features of the actor model in an object-oriented environment, differences were encountered in determining the invocation semantics of the actor model and those of inherited methods. The problem is investigated and a taxonomy of solutions is presented.
  (An illustrative sketch for this entry appears after this listing.)
- An Adaptive Time Window Algorithm for Large Scale Network Emulation
  Kodukula, Surya Ravikiran (Virginia Tech, 2002-01-25)
  With the continuing growth of the Internet and network protocols, there is a need for Protocol Development Environments. Simulation environments like ns and OPNET require protocol code to be rewritten in a discrete event model. Direct Code Execution Environments (DCEE) solve the Verification and Validation problems by supporting the execution of unmodified protocol code in a controlled environment. Open Network Emulator (ONE) is a system supporting Direct Code Execution in a parallel environment - allowing unmodified protocol code to run on top of a parallel simulation layer, capable of simulating complex network topologies. Traditional approaches to the problem of Parallel Discrete Event Simulation (PDES) broadly fall into two categories. Conservative approaches allow processing of events only after it has been asserted that the event handling would not result in a causality error. Optimistic approaches allow for causality errors and support means of restoring state, i.e., rollback. All standard approaches to the problem of PDES are either flawed by their assumption of existing event patterns in the system or cannot be applied to ONE because their analysis is restricted to simplified models like queues and Petri nets. The Adaptive Time Window algorithm is a bounded optimistic parallel simulation algorithm with the capability to change the degree of optimism with changes in the degree of causality in the network. The optimism at any instant is bounded by an amount of virtual time called the time window. The algorithm assumes efficient rollback capabilities supported by the 'Weaves' framework. The algorithm is reactive and responds to changes in the degree of causality in the system by adjusting the length of its time window. With sufficient history gathered, the algorithm adjusts to increasing causality in the system with a small time window (conservative approach) and increases to a higher value (optimistic approach) during idle periods. The problem of splitting the entire simulation run into time windows of arbitrary length such that the total number of rollbacks in the system is minimal is NP-complete. The Adaptive Time Window algorithm is compared against offline greedy approaches to this NP-complete problem, called Oracle Computations. The total number of rollbacks in the system and the total execution time for the Adaptive Time Window algorithm were comparable to those for the Oracle Computations.
  (An illustrative sketch for this entry appears after this listing.)
- Adding Value to the Software Development Process: A Study in Independent Verification and Validation
  Arthur, James D.; Groener, Markus K.; Hayhurst, Kelly J.; Michael Holloway, C. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1998-08-01)
  Independent Verification and Validation (IV&V) is best viewed as an overlay process supporting a software development effort. While the touted benefits of a properly managed IV&V activity are many, they specifically emphasize: (a) early fault detection, (b) reduced time to remove faults, and (c) a more robust end-product. This paper outlines a study funded by NASA-Langley Research Center to examine an existing IV&V methodology, and to confirm (or refute) the touted beneficial claims. In the study two distinct development groups are established, with only one having an IV&V contingent. Both groups are tasked to produce a software product using the same set of requirements. Within each phase of the development effort, fault detection and fault removal data are recorded. An analysis of that data reveals that the group having the IV&V contingent: (a) detected errors earlier in the software development process, and (b) on the average, required significantly less time to remove those faults. Moreover, a test for operational correctness further reveals that the system developed by the group having the IV&V component was substantially more robust than the one produced by the other development group. A statistical analysis of our results is also provided to establish significance.
- ADLIF-a structured design language for metric analysis
  Selig, Calvin Lee (Virginia Tech, 1987-05-15)
  Since the inception of software engineering, the major goal has been to control the development and maintenance of reliable software. To this end, many different design methodologies have been presented as a means to improve software quality through semantic clarity and syntactic accuracy during the specification and design phases of the software life cycle. On the other end of the life cycle, software quality metrics have been proposed to supply quantitative measures of the resultant software. This study is an attempt to unify the two concepts by providing a means to determine the quality of a design before its implementation.
- Agile Requirements Generation Model: A Soft-structured Approach to Agile Requirements Engineering
  Soundararajan, Shvetha (Virginia Tech, 2008-07-31)
  The agile principles applied to software engineering include iterative and incremental development, frequent releases of software, direct stakeholder involvement, minimal documentation, and welcoming changing requirements even late in the development cycle. Agile Requirements Engineering applies the above-mentioned principles to the Requirements Engineering process. It welcomes changing requirements even late in the development cycle by using the agile practice of evolutionary requirements, which suggests that requirements should evolve over the course of many iterations rather than being gathered and specified upfront. Hence, changes to requirements even late in the development cycle can be accommodated easily. There is, however, no real process to the agile approach to Requirements Engineering. To overcome this disadvantage, we propose to adapt the Requirements Generation Model (a plan-driven Requirements Engineering model) to an agile environment in order to structure the Agile Requirements Engineering process. The hybrid model, named the Agile Requirements Generation Model, is a soft-structured process that supports the intents of the agile approach. This model combines the best features of the Requirements Generation Model and Agile Software Development.
- Analysis and Evaluation of Methods for Activities in the Expanded Requirements Generation Model (x-RGM)
  Lobo, Lester Oscar (Virginia Tech, 2004-07-26)
  In recent years, the requirements engineering community has proposed a number of models for the generation of a well-formulated, complete set of requirements. However, these models are often highly abstract or narrowly focused, providing only pieces of structure and parts of guidance to the requirements generation process. Furthermore, many of the models fail to identify methods that can be employed to achieve the activity objectives. As a consequence of these problems, the requirements engineer lacks the necessary guidance to effectively apply the requirements generation process, resulting in the production of an inadequate set of requirements. To address these concerns, we propose the expanded Requirements Generation Model (x-RGM), which consists of activities at a more appropriate level of abstraction. This decomposition of the model ensures that the requirements engineer has a clear understanding of the activities involved in the requirements generation process. In addition, the objectives of all the activities defined by the x-RGM are identified and explicitly stated so that no assumptions are made about the goals of the activities involved in the generation of requirements. We also identify sets of methods that can be used during each activity to effectively achieve its objectives. The mapping of methods to activities guides the requirements engineer in selecting the appropriate techniques for a particular activity in the requirements engineering process. Furthermore, we prescribe small subsets of methods for each activity based on commonly used selection criteria such that the chosen criterion is optimized. This list of methods is created with the intention of simplifying the task of choosing methods for the activities defined by the x-RGM that best meet the selection criterion goal.
- Anticipating and Mitigating the Professional Challenge to Independent Verification and Validation
  Dabney, James B.; Arthur, James D. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1998-03-01)
  Independent Verification and Validation faces three classes of challenges: the Technical Challenge, the Management Challenge, and the Professional Challenge. In this paper we focus on the Professional Challenge, and, in particular, the four phases that characterize it: Denial, Anger, Cooperation and Dependence. We believe that to implement an effective IV&V effort, one must understand the relationship among the phases and the critical issues underlying them. For each of the phases we (a) provide a characteristic description, (b) discuss how they affect the IV&V effort, (c) present representative issues and examples, and (d) describe steps to reduce the adverse impact of the three detrimental phases. The examples provided are those we have encountered while serving in an IV&V capacity; "lessons learned" guide our suggestions for addressing phase-specific issues.
- Applying Dynamic Software Updates to Computationally-Intensive Applications
  Kim, Dong Kwan (Virginia Tech, 2009-06-22)
  Dynamic software updates change the code of a computer program while it runs, thus saving the programmer's time and using computing resources more productively. This dissertation establishes the value of and recommends practices for applying dynamic software updates to computationally-intensive applications—a computing domain characterized by long-running computations, expensive computing resources, and a tedious deployment process. This dissertation argues that updating computationally-intensive applications dynamically can reduce their time-to-discovery metrics—the total time it takes from posing a problem to arriving at a solution—and, as such, should become an intrinsic part of their software lifecycle. To support this claim, this dissertation presents the following technical contributions: (1) a distributed consistency algorithm for synchronizing dynamic software updates in a parallel HPC application, (2) an implementation of the Proxy design pattern that is more efficient than the existing implementations, and (3) a dynamic update approach for Java Virtual Machine (JVM)-based applications using the Proxy pattern to offer flexibility and efficiency advantages, making it suitable for computationally-intensive applications. The contributions of this dissertation are validated through performance benchmarks and case studies involving computationally-intensive applications from the bioinformatics and molecular dynamics simulation domains.
  (An illustrative sketch for this entry appears after this listing.)
- Applying software maintenance metrics in the object oriented software development life cycle
  Li, Wei (Virginia Tech, 1992-09-05)
  Software complexity metrics have been studied in the procedural paradigm as a quantitative means of assessing the software development process as well as the quality of software products. Several studies have validated that various metrics are useful indicators of maintenance effort in the procedural paradigm. However, software complexity metrics have rarely been studied in the object oriented paradigm. Very few complexity metrics have been proposed to measure object oriented systems, and the proposed ones have not been validated. This research concentrates on several object oriented software complexity metrics and the validation of these metrics with maintenance effort in two commercial systems. The results of an empirical study of the maintenance activities in the two commercial systems are also described. A metric instrumentation in an object oriented software development framework is presented.
- An Approach to Real Time Adaptive Decision Making in Dynamic Distributed Systems
  Adams, Kevin Page (Virginia Tech, 2005-12-12)
  Efficient operation of a dynamic system requires (near) optimal real-time control decisions. Those decisions depend on a set of control parameters that change over time. Very often, the optimal decision can be made only with knowledge of the future values of the control parameters. As a consequence, the decision process is heuristic in nature. The optimal decision can be determined only after the fact, once the uncertainty is removed. For some types of dynamic systems, the heuristic approach can be very effective. The basic premise is that the future values of the control parameters can be predicted with sufficient accuracy. We can predict those values either based on a good model of the system or based on historical data. In many cases, a good model is not available; in that case, prediction using historical data is the only option. It is necessary to detect similarities with the current situation and extrapolate future values. In other words, we need to (quickly) identify patterns in historical data that match the current data pattern. Low sensitivity of the optimal solution is critical: small variations in data patterns should affect the optimal solution only minimally. Resource allocation problems and other "discrete decision systems" are good examples of such systems. The main contribution of this work is a novel heuristic methodology that uses neural networks for classifying, learning and detecting changing patterns, as well as making (near) real-time decisions. We improve on existing approaches by providing a real-time adaptive approach that takes into account changes in system behavior with minimal operational delay and without the need for an accurate model. The methodology is validated by extensive simulation and practical measurements. Two metrics are proposed to quantify the quality of control decisions as well as a comparison to the optimal solution.
- Architecture-Centric Project Estimation
  Henry, Troy Steven (Virginia Tech, 2007-05-14)
  In recent years, studies have been conducted which suggest that taking an architecture-first approach to managing large software projects can reduce a significant amount of the uncertainty present in project estimates. As the project progresses, more concrete information is known about the planned system and less risk is present. However, the rate at which risk is alleviated varies across the life-cycle. Research suggests that there exists a significant drop-off in risk when the architecture is developed. Software risk assessment techniques have been developed which attempt to quantify the amount of risk that varying uncertainties convey to a software project. These techniques can be applied to architecture-specific issues to show that in many cases, taking an architecture-centric approach to development will remove more risk than the cost of developing the architecture. By committing to developing the architecture prior to the formal estimation process, specific risks can be more tightly bounded, or even removed from the project. The premise presented here is that through the process of architecture-centric management, it is possible to remove substantial risk from the project. This decrease in risk exceeds that at other phases of the life-cycle, especially in comparison to the effort involved. Notably, at architecture, a sufficient amount of knowledge is gained by which effort estimations may be tightly bounded, yet the project is early enough in the life-cycle for proper planning and scheduling. Thus, risk is mitigated through the increase in knowledge and the ability to maintain options at an early point. Further, architecture development and evaluation have been shown to incorporate quality factors normally insufficiently considered in the system design. The approach taken here is to consider specific knowledge gained through the architecting process and how this is reflected in parametric effort estimation models. This added knowledge is directly reflected in risk reduction. Drawing on the experience of architecture researchers as well as project managers employing this approach, this thesis considers what benefits to the software development process are gained by taking this approach. Noting a strong reluctance of owners to incorporate solid software engineering practices, the thesis concludes with an outline for an experiment intended to show that the reduction in risk at architecture exceeds the cost of developing the architecture.
- Assessing Agile Methods: Investigating Adequacy, Capability, and Effectiveness (An Objectives, Principles, Strategies Approach)
  Soundararajan, Shvetha (Virginia Tech, 2013-06-10)
  Agile methods provide an organization or a team with the flexibility to adopt a selected subset of principles and practices based on their culture, their values, and the types of systems that they develop. More specifically, every organization or team implements a customized agile method, tailored to better accommodate its needs. However, the extent to which a customized method supports the organizational objectives, i.e. the 'goodness' of that method, should be demonstrable. Existing agile assessment approaches focus on comparative analyses, or are limited in scope and application. In this research, we propose a systematic, comprehensive approach to assessing the 'goodness' of agile methods. We examine an agile method based on (1) its adequacy, (2) the capability of the organization to support the adopted principles and strategies specified by the method, and (3) the method's effectiveness. We propose the Objectives, Principles and Strategies (OPS) Framework to guide our assessment process. The Framework identifies (a) objectives of the agile philosophy, (b) principles that support the objectives and (c) strategies that implement the principles. It also defines (d) linkages that relate objectives to principles, and principles to strategies, and finally, (e) indicators for assessing the extent to which an organization supports the implementation and effectiveness of those strategies. The propagation of indicator values along the linkages provides a multi-level assessment view of the agile method. In this dissertation, we present our assessment methodology, guiding Framework, validation approach, results and findings, and future directions.
  (An illustrative sketch for this entry appears after this listing.)
- Assessing Security Vulnerabilities: An Application of Partial and End-Game Verification and Validation
  Frazier, Edward Snead (Virginia Tech, 2010-04-21)
  Modern software applications are becoming increasingly complex, prompting a need for expandable software security assessment tools. Violable constraints/assumptions presented by Bazaz [1] are expandable and can be modified to fit the changing landscape of software systems. Partial and End-Game Verification, Validation, and Testing (VV&T) strategies utilize the violable constraints/assumptions and are established by this research as viable software security assessment tools. The application of Partial VV&T to the Horticulture Club Sales Assistant is documented in this work. Development artifacts relevant to Partial VV&T review are identified. Each artifact is reviewed for the presence of constraints/assumptions by translating the constraints/assumptions to target the specific artifact and software system. A constraint/assumption review table and accompanying status nomenclature are presented that support the application of Partial VV&T. Both the constraint/assumption review table and status nomenclature are generic, allowing them to be used in applying Partial VV&T to any software system. Partial VV&T, using the constraint/assumption review table and associated status nomenclature, is able to effectively identify software vulnerabilities. End-Game VV&T is also applied to the Horticulture Club Sales Assistant. Base test strategies presented by Bazaz [1] are refined to target system specific resources such as user input, database interaction, and network connections. Refined test strategies are used to detect violations of the constraints/assumptions within the Horticulture Club Sales Assistant. End-Game VV&T is able to identify violation of constraints/assumptions, indicating vulnerabilities within the Horticulture Club Sales Assistant. Addressing vulnerabilities identified by Partial and End-Game VV&T will enhance the overall security of a software system.
- Assessing software quality in Ada based products with the objectives, principles, attributes framework
  Bundy, Gary Neal (Virginia Tech, 1990-09-08)
  This thesis describes the results of a research effort focusing on the validation of a procedure for assessing the quality of an Ada-based product. Starting with the identification of crucial Ada constructs, this thesis outlines a seven-step process for defining metrics that support software quality assessment within a framework based on linkages among software engineering objectives, principles, and attributes. The thesis presents the impact of the use of crucial Ada constructs on the software engineering attributes and describes measurement approaches for assessing that impact. This thesis also outlines a planned research effort to develop an automated analyzer for the assessment of software quality in Ada-based products and plans for validating the assessment procedure.
- An Assessment of SEES Based on Operational Experiences
  Arthur, James D.; Groener, Markus K. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1997-03-01)
  During the Fall of 1995, a modified form of SEES (Software Engineering Evaluation System), denoted SEES, was defined and used in a study to examine the value added by Independent Verification and Validation. A follow-on study, and the topic of this report, focuses on an assessment of SEES based on operational experiences gained from the 1995 study. The report partitions its findings relative to phases of the software development process.
- The automated assessment of computer software documentation quality using the objectives/principles/attributes framework
  Dorsey, Edward Vernon (Virginia Tech, 1992-10-15)
  Since humans first put pen to paper, people have critically assessed written work; thus, the assessment of documents per se is not new. Only recently, however, has the issue of formalized document quality assessment become feasible. Enabled by the rapid progress in computing technology, the prospect of an automated, formalized system of quality assessment, based on the presence of certain attributes deemed essential to the quality of a document, is feasible. The existing Objectives/Principles/Attributes Framework, previously applied to code assessment, is modified to allow application to documentation quality assessment. An automated procedure for the assessment of software documentation quality and the development of a prototype documentation analyzer are described. A major shortcoming of the many quality metrics that are proposed in computer science is their lack of empirical validation. In pursuit of such necessary validation for the measures proposed within this thesis, a study is performed to determine the agreement of the measures rendered by Docalyze with those of human evaluators. This thesis demonstrates the applicability of a quality assessment framework to the documentation component of a software product. Further, the validity of a subset of the proposed metrics is demonstrated.
  (An illustrative sketch for this entry appears after this listing.)
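For the entry "Achieving Asynchronous Speedup While Preserving Synchronous Semantics" above, the central idea is that a blocking Linda primitive such as RD can be decomposed into an early, non-blocking request and a later wait, so that independent work overlaps the retrieval while the use site keeps its synchronous semantics. The C++ sketch below illustrates only that request/wait split; the names (TupleSpace, rd_request, rd_wait) are my assumptions, and the paper's actual decomposition, code motion, and formal semantics are not reproduced here.

```cpp
// Minimal sketch of splitting a blocking Linda-style rd() into a request phase and a
// wait phase, so independent work can be overlapped with tuple retrieval.
// Names (TupleSpace, rd_request, rd_wait) are illustrative, not from the paper.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <map>
#include <mutex>
#include <string>
#include <thread>

class TupleSpace {
public:
    // out(): deposit a tuple (key -> value).
    void out(const std::string& key, int value) {
        std::lock_guard<std::mutex> lock(m_);
        tuples_[key] = value;
        cv_.notify_all();
    }
    // rd_request(): record interest in a tuple; returns immediately.
    void rd_request(const std::string& key) { requested_ = key; }
    // rd_wait(): block until the requested tuple exists, then read it non-destructively.
    int rd_wait() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] { return tuples_.count(requested_) > 0; });
        return tuples_[requested_];
    }
private:
    std::map<std::string, int> tuples_;
    std::string requested_;
    std::mutex m_;
    std::condition_variable cv_;
};

int main() {
    TupleSpace ts;
    // A worker eventually produces the tuple we need.
    std::thread producer([&] {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        ts.out("result", 42);
    });

    ts.rd_request("result");        // request issued early (the "footprint" moved up)
    int local = 0;
    for (int i = 0; i < 1000; ++i)  // overlap independent work with the pending rd
        local += i;
    int value = ts.rd_wait();       // synchronous semantics preserved at the use site

    std::cout << "local=" << local << " tuple=" << value << "\n";
    producer.join();
    return 0;
}
```

In the paper's setting the request would be moved earlier past other Linda operations by the footprinting transformation; this toy only shows why the split creates room for overlap.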
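The ACT++ 3.0 entry describes moving the actor runtime from PRESTO threads to POSIX threads for portability. As a rough, hedged illustration of the thread-per-actor idea expressed directly in the Pthreads API, here is a minimal sketch; MiniActor and its mailbox protocol are invented for this example and do not reflect the ACT++ interface.

```cpp
// Minimal sketch: an "actor" that owns a POSIX thread and processes mailbox messages.
// Illustrates the thread-per-actor idea only; it is not the ACT++ 3.0 API.
// Build with: g++ -pthread miniactor.cpp
#include <pthread.h>
#include <cstdio>
#include <queue>
#include <string>

class MiniActor {
public:
    MiniActor() {
        pthread_mutex_init(&m_, nullptr);
        pthread_cond_init(&cv_, nullptr);
        pthread_create(&tid_, nullptr, &MiniActor::run, this);
    }
    ~MiniActor() {
        send("quit");                 // ask the actor's thread to stop
        pthread_join(tid_, nullptr);
        pthread_mutex_destroy(&m_);
        pthread_cond_destroy(&cv_);
    }
    // Asynchronous send: enqueue a message for the actor's own thread.
    void send(const std::string& msg) {
        pthread_mutex_lock(&m_);
        mailbox_.push(msg);
        pthread_cond_signal(&cv_);
        pthread_mutex_unlock(&m_);
    }
private:
    static void* run(void* arg) {
        static_cast<MiniActor*>(arg)->loop();
        return nullptr;
    }
    void loop() {
        for (;;) {
            pthread_mutex_lock(&m_);
            while (mailbox_.empty())
                pthread_cond_wait(&cv_, &m_);
            std::string msg = mailbox_.front();
            mailbox_.pop();
            pthread_mutex_unlock(&m_);
            if (msg == "quit") break;
            std::printf("actor handled: %s\n", msg.c_str());  // the actor's "behavior"
        }
    }
    pthread_t tid_;
    pthread_mutex_t m_;
    pthread_cond_t cv_;
    std::queue<std::string> mailbox_;
};

int main() {
    MiniActor a;
    a.send("hello");
    a.send("world");
    return 0;  // destructor sends "quit" and joins the actor's thread
}
```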
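The Actor Systems Platform entry notes that message sends are expressed through operator overloading in C++ and that replies come back through Cboxes. The single-threaded toy below guesses at the general flavor of such an interface: operator<< enqueues a message and a Cbox-like object later yields the reply. Every name and semantic detail here is an assumption; the real ASP is distributed and concurrent.

```cpp
// Sketch of message sends via operator overloading with a reply placeholder ("Cbox").
// Names and semantics are illustrative assumptions, not the ASP interface.
#include <iostream>
#include <queue>
#include <string>

// Cbox: a placeholder the sender can hold while a reply is pending.
class Cbox {
public:
    void put(int v) { value_ = v; ready_ = true; }
    int  claim() const { return ready_ ? value_ : -1; }  // -1 if no reply yet
private:
    int value_ = 0;
    bool ready_ = false;
};

struct Message {
    std::string selector;
    int arg;
    Cbox* reply;
};

// A toy "actor": messages are queued and dispatched explicitly.
class CounterActor {
public:
    // operator<< performs an asynchronous-style send: it only enqueues the message.
    CounterActor& operator<<(const Message& m) {
        queue_.push(m);
        return *this;
    }
    // Process pending messages (a real platform would run this in the actor's thread).
    void dispatch() {
        while (!queue_.empty()) {
            Message m = queue_.front();
            queue_.pop();
            if (m.selector == "add") total_ += m.arg;
            if (m.selector == "get" && m.reply) m.reply->put(total_);
        }
    }
private:
    std::queue<Message> queue_;
    int total_ = 0;
};

int main() {
    CounterActor counter;
    Cbox reply;
    counter << Message{"add", 5, nullptr}
            << Message{"add", 7, nullptr}
            << Message{"get", 0, &reply};
    counter.dispatch();
    std::cout << "reply via Cbox: " << reply.claim() << "\n";  // prints 12
    return 0;
}
```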
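The Adaptive Time Window entry describes bounding optimism by a window of virtual time and adapting that window to the observed degree of causality. The loop below restates that feedback idea schematically: shrink the window when rollbacks occur, grow it when they do not. The constants, bounds, and the stubbed simulate_window hook are illustrative assumptions, not the thesis's algorithm or its Weaves-based rollback support.

```cpp
// Schematic sketch of a bounded-optimism simulation loop whose time window shrinks
// when rollbacks are frequent and grows when they are rare. Constants, window bounds,
// and the stub hook are illustrative assumptions.
#include <algorithm>
#include <cstdio>

// Hook a real simulator would provide; stubbed with a fake causality pattern so the
// sketch is self-contained.
int simulate_window(double virtual_time, double window) {
    // Pretend rollbacks cluster in the first third of every 30 units of virtual time.
    (void)window;
    return (static_cast<int>(virtual_time) % 30 < 10) ? 3 : 0;
}

int main() {
    const double kMinWindow = 1.0, kMaxWindow = 64.0, kEndTime = 200.0;
    double window = 8.0;
    double now = 0.0;

    while (now < kEndTime) {
        int rollbacks = simulate_window(now, window);  // optimistically run [now, now+window]
        now += window;                                 // commit the interval just simulated
        if (rollbacks > 0) {
            // High causality observed: behave more conservatively next interval.
            window = std::max(kMinWindow, window / 2.0);
        } else {
            // Idle / low-causality period: allow more optimism next interval.
            window = std::min(kMaxWindow, window * 2.0);
        }
        std::printf("t=%6.1f  next window=%5.1f  rollbacks=%d\n", now, window, rollbacks);
    }
    return 0;
}
```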
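The dynamic software updates entry lists a Proxy-based update mechanism among its contributions. As a small, general illustration of how a proxy lets an implementation be swapped at run time while clients keep a stable reference, here is a C++ sketch; the dissertation's mechanism targets JVM-based applications and handles far more (state transfer, consistency across processes), and all names below are invented.

```cpp
// Sketch of the Proxy idea behind dynamic updating: clients hold a stable proxy,
// and the implementation behind it can be replaced at run time.
// An illustration of the general pattern, not the dissertation's mechanism.
#include <iostream>
#include <memory>

class Solver {                       // interface the client programs against
public:
    virtual ~Solver() = default;
    virtual double step(double x) = 0;
};

class SolverV1 : public Solver {
public:
    double step(double x) override { return x + 1.0; }
};

class SolverV2 : public Solver {     // the "updated" code
public:
    double step(double x) override { return x * 2.0; }
};

class SolverProxy : public Solver {  // stable handle given to clients
public:
    explicit SolverProxy(std::unique_ptr<Solver> impl) : impl_(std::move(impl)) {}
    void update(std::unique_ptr<Solver> impl) { impl_ = std::move(impl); }  // dynamic update
    double step(double x) override { return impl_->step(x); }              // delegate
private:
    std::unique_ptr<Solver> impl_;
};

int main() {
    SolverProxy solver(std::make_unique<SolverV1>());
    std::cout << solver.step(10.0) << "\n";   // 11, old behavior

    // A long-running computation can be updated mid-run without restarting the client.
    solver.update(std::make_unique<SolverV2>());
    std::cout << solver.step(10.0) << "\n";   // 20, new behavior
    return 0;
}
```

For a long-running computation the attraction is that accumulated state and progress survive the swap; deciding when the swap happens consistently across parallel processes is the role of the distributed consistency algorithm mentioned in the entry.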
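The "Assessing Agile Methods" entry describes objectives linked to principles, principles linked to strategies, and indicator values propagated along those linkages into a multi-level view. The sketch below shows one plausible way such a propagation could be computed, using an unweighted average at each level; the linkages, indicator names, and scores are invented, and the OPS Framework's actual definitions live in the dissertation.

```cpp
// Sketch: propagate strategy-level indicator scores up through principles to objectives
// by simple averaging. Linkages, names, and scores are illustrative assumptions only.
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Scores = std::map<std::string, double>;

double average(const std::vector<std::string>& children, const Scores& scores) {
    if (children.empty()) return 0.0;
    double sum = 0.0;
    for (const auto& c : children) sum += scores.at(c);
    return sum / children.size();
}

int main() {
    // Strategy-level indicator values (e.g., from questionnaires or project data), 0..1.
    Scores strategy = {{"daily standup", 0.9}, {"continuous integration", 0.6},
                       {"onsite customer", 0.4}, {"iteration demo", 0.8}};

    // Principle -> strategies linkage (assumed).
    std::map<std::string, std::vector<std::string>> principle_links = {
        {"frequent feedback", {"iteration demo", "onsite customer"}},
        {"technical excellence", {"continuous integration", "daily standup"}}};

    // Objective -> principles linkage (assumed).
    std::map<std::string, std::vector<std::string>> objective_links = {
        {"respond to change", {"frequent feedback", "technical excellence"}}};

    Scores principle, objective;
    for (const auto& [p, strategies] : principle_links)
        principle[p] = average(strategies, strategy);   // roll strategies up to principles
    for (const auto& [o, principles] : objective_links)
        objective[o] = average(principles, principle);  // roll principles up to objectives

    for (const auto& [p, v] : principle) std::cout << "principle  " << p << ": " << v << "\n";
    for (const auto& [o, v] : objective) std::cout << "objective  " << o << ": " << v << "\n";
    return 0;
}
```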
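The documentation-quality entry adapts the Objectives/Principles/Attributes framework so that a tool (Docalyze, in the thesis) can score a document on the presence of quality-related attributes. The toy below counts simple textual indicators per sentence to produce attribute scores; the indicator lists and the scoring rule are invented for illustration and are not the thesis's metrics.

```cpp
// Toy sketch of attribute-presence scoring for a documentation fragment: count how
// often simple textual indicators occur per sentence. The indicator lists and the
// scoring rule are illustrative assumptions, not the metrics defined in the thesis.
#include <iostream>
#include <map>
#include <string>
#include <vector>

int count_occurrences(const std::string& text, const std::string& needle) {
    int n = 0;
    for (std::size_t pos = text.find(needle); pos != std::string::npos;
         pos = text.find(needle, pos + needle.size()))
        ++n;
    return n;
}

int main() {
    std::string doc =
        "The module shall validate all inputs. See Section 3 for the interface. "
        "For example, negative values are rejected. The module shall log each error.";

    // Assumed textual indicators for two documentation attributes.
    std::map<std::string, std::vector<std::string>> indicators = {
        {"traceability", {"Section", "shall"}},
        {"illustration", {"For example", "e.g."}}};

    int sentences = count_occurrences(doc, ".");  // crude sentence count
    for (const auto& [attribute, needles] : indicators) {
        int hits = 0;
        for (const auto& n : needles) hits += count_occurrences(doc, n);
        double per_sentence = sentences ? static_cast<double>(hits) / sentences : 0.0;
        std::cout << attribute << ": " << hits << " hits, "
                  << per_sentence << " per sentence\n";
    }
    return 0;
}
```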