Browsing by Author "Kafura, Dennis G."
Now showing 1 - 20 of 147
- ACT++ 2.0: A Class Library for Concurrent Programming in C++ Using Actors. Kafura, Dennis G.; Mukherji, Manibrata; Lavender, R. Gregory (Department of Computer Science, Virginia Polytechnic Institute & State University, 1992) ACT++ 2.0 is the most recent version of a class library for concurrent programming in C++. The underlying model of concurrent computation is the Actor model. Programs in ACT++ consist of a collection of active objects called actors. Actors execute concurrently and cooperate by sending request and reply messages. An agent, termed the behavior of an actor, is responsible for processing a single request message and for specifying a replacement behavior which processes the next available request message. One of the salient features of ACT++ is its ability to handle the Inheritance Anomaly---the interference between the inheritance mechanism of object-oriented languages and the specification of synchronization constraints in the methods of a class---using the notion of behavior sets. ACT++ has been implemented on the Sequent Symmetry multiprocessor using the PRESTO threads package.
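A minimal sketch of the behavior-set idea described in this abstract (class and method names here are illustrative, not the actual ACT++ 2.0 API): the actor accepts only the requests valid in its current state, and handling a request effectively installs the replacement behavior.

```cpp
// Illustrative sketch of behavior sets (hypothetical names, not the ACT++ API).
// An actor processes one request at a time; its current "behavior" names the
// set of methods that may accept the next message.
#include <deque>
#include <iostream>
#include <string>

class BoundedBufferActor {
    std::deque<int> items;
    std::size_t capacity = 4;
public:
    // Behavior set: which requests are acceptable in the current state.
    bool accepts(const std::string& op) const {
        if (items.empty())            return op == "put";   // empty behavior
        if (items.size() == capacity) return op == "get";   // full behavior
        return op == "put" || op == "get";                  // partial behavior
    }
    void put(int v) { items.push_back(v); }  // completing put installs the next behavior
    int  get()      { int v = items.front(); items.pop_front(); return v; }
};

int main() {
    BoundedBufferActor buf;
    if (buf.accepts("put")) buf.put(42);           // dispatch only if allowed
    if (buf.accepts("get")) std::cout << buf.get() << "\n";
}
```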
- ACT++ 3.0: implementation of the actor model using POSIX threads. Khare, Arjun (Virginia Tech, 1994-07-15) The actor model provides a framework for writing concurrent programs. ACT++ is an implementation of the actor model in C++, allowing concurrent programs to be written in an object-oriented style. In ACT++, each actor is an object possessing one or more independent threads of control. Version 2.0 of ACT++ uses the PRESTO threads package. As PRESTO threads are available only for certain architectures and operating systems, their use does not meet one of the goals of ACT++, namely portability among a variety of architectures. To facilitate portability, ACT++ 3.0 is written using the IEEE POSIX 1003.4a standard for threads (Pthreads). This project deals with the implementation of ACT++ 3.0, the testing of the implementation, and its performance.
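For context, a minimal Pthreads example of the primitive such a runtime builds on (plain POSIX threads usage, not ACT++ code):

```cpp
// Minimal POSIX threads (Pthreads) usage of the kind ACT++ 3.0 is built on.
// Each ACT++ actor owns one or more threads; here we simply start one.
#include <pthread.h>
#include <cstdio>

void* actor_body(void* arg) {
    std::printf("processing request %d\n", *static_cast<int*>(arg));
    return nullptr;
}

int main() {
    pthread_t tid;
    int request = 1;
    pthread_create(&tid, nullptr, actor_body, &request);
    pthread_join(tid, nullptr);   // wait for the actor's thread to finish
    return 0;
}
```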
- ACT++: Building a Concurrent C++ with Actors. Kafura, Dennis G.; Lee, Keung Hae (Department of Computer Science, Virginia Polytechnic Institute & State University, 1989) ACT++ (Actors in C++) is a concurrent object-oriented language being designed for distributed real-time applications. The language is a hybrid of the actor kernel language and the object-oriented language C++. The concurrency abstraction of ACT++ is derived from the actor model as defined by Agha. This paper discusses our experience in building a concurrent extension of C++ with the concurrency abstraction of the actor model. The current design of ACT++ and its implementation are described. Some problems found in Agha's actor model are discussed in the context of distributed real-time applications. The use of ACT++ disclosed the difficulty of combining the actor model of concurrency with class inheritance in an object-oriented language.
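The difficulty of combining concurrency with inheritance noted above can be pictured with a small sketch (hypothetical classes, not ACT++ code): adding one subclass method with a new waiting condition forces the inherited synchronization logic to be reworked.

```cpp
// Sketch of the inheritance anomaly (illustrative, not ACT++ code).
#include <condition_variable>
#include <deque>
#include <mutex>
#include <utility>

class Buffer {                       // base class: synchronization per method
protected:
    std::mutex m;
    std::condition_variable cv;
    std::deque<int> items;
public:
    virtual void put(int v) {
        std::lock_guard<std::mutex> lk(m);
        items.push_back(v);
        cv.notify_all();
    }
    virtual int get() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this]{ return !items.empty(); });
        int v = items.front(); items.pop_front();
        return v;
    }
    virtual ~Buffer() = default;
};

// Subclass adds gget() ("get two items"). Its waiting condition
// (items.size() >= 2) cannot be obtained by reusing the inherited
// synchronization code; the acceptance conditions of get()/put() must
// also be revisited. That interference is the anomaly.
class Buffer2 : public Buffer {
public:
    std::pair<int,int> gget() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this]{ return items.size() >= 2; });
        int a = items.front(); items.pop_front();
        int b = items.front(); items.pop_front();
        return {a, b};
    }
};
```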
- Actor systems platform design and implementation of the actor paradigm in a distributed object-oriented environment. Joshi, Nandan (Virginia Tech, 1993-08-05) This project was undertaken as part of an effort to explore the design of object-oriented systems that are distributed, concurrent, real-time and/or embedded in nature. This work seeks to integrate the concurrency features of the actor model in a distributed, object-oriented environment, ESP. The integrated system, called the Actor Systems Platform (ASP), provides a platform for designing concurrent, distributed applications. The actor model provides a mechanism for expressing the inherent concurrency in an application. The concurrency in the application can be exploited by the distributed features available in ESP. The actor abstraction in ASP is provided by an application-level class hierarchy in ESP. The message passing semantics of the actor model are implemented by using special operator overloading in C++. Cboxes are implemented to provide a synchronization mechanism and a means of returning replies. In a concurrent system, simultaneous execution of an object's methods can cause its state to be inconsistent. This is prevented by providing a method locking mechanism using behavior sets. While integrating the concurrency features of the actor model in an object-oriented environment, differences were encountered in determining the invocation semantics of the actor model and those of inherited methods. The problem is investigated and a taxonomy of solutions is presented.
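One plausible reconstruction of the mechanisms this abstract names (Cbox and operator overloading come from the abstract; everything else here is assumed for illustration, not the real ASP API): an overloaded operator queues an asynchronous request, and the Cbox is a placeholder from which the reply is later claimed.

```cpp
// Hypothetical reconstruction of ASP-style message passing (not the real ASP API).
// operator<< sends an asynchronous request; a Cbox holds the eventual reply.
#include <future>
#include <iostream>
#include <string>

template <typename T>
using Cbox = std::future<T>;   // a Cbox behaves like a future: claim the reply later

struct Actor {
    // "actor << request" queues the request and returns a Cbox for the reply.
    Cbox<std::string> operator<<(const std::string& request) {
        return std::async(std::launch::async,
                          [request] { return "reply to " + request; });
    }
};

int main() {
    Actor a;
    Cbox<std::string> reply = a << "compute";  // asynchronous send
    std::cout << reply.get() << "\n";          // synchronize only when the reply is claimed
}
```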
- Adapting Protocols to Massively Interconnected Systems. Kafura, Dennis G.; Abrams, Marc (Department of Computer Science, Virginia Polytechnic Institute & State University, 1991-05-01) This paper describes ongoing research focused on two critical problems posed by the interconnection of a massive number of computer systems. The interconnection may be achieved through wide area or local area networks. The two problems considered in this research are as follows: (1) performance analysis of the protocols used in an internetwork connecting thousands to millions of nodes, and (2) application development in a massively distributed, heterogeneous environment where components implemented in different programming languages must be integrated and/or reused. The performance analysis problem is addressed by employing large-scale parallel simulation, extended finite state machines, and object-oriented simulation techniques. The approach to solving the application development problem is based on an environment which exploits the synergism between object-oriented programming and layered communication protocols (specifically, OSI).
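A toy extended finite state machine (EFSM) of the kind used to model protocol endpoints, shown only to fix the term (an illustrative sketch, not the simulator described above): states plus extended variables, with guarded transitions.

```cpp
// Minimal EFSM for a stop-and-wait sender: control states plus a sequence
// variable, with transitions guarded on both. (Illustrative sketch only.)
#include <iostream>

class StopAndWaitSender {
    enum class State { Ready, Waiting } state = State::Ready;
    int seq = 0;                                  // extended state variable
public:
    void send() {
        if (state == State::Ready) {              // guard on control state
            std::cout << "send frame " << seq << "\n";
            state = State::Waiting;               // transition
        }
    }
    void ack(int n) {
        if (state == State::Waiting && n == seq) { // guard on the variable too
            seq ^= 1;                              // alternate the sequence bit
            state = State::Ready;
        }
    }
};

int main() {
    StopAndWaitSender s;
    s.send();   // send frame 0
    s.ack(0);   // acknowledged; ready again
    s.send();   // send frame 1
}
```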
- ADLIF-a structured design language for metric analysis. Selig, Calvin Lee (Virginia Tech, 1987-05-15) Since the inception of software engineering, the major goal has been to control the development and maintenance of reliable software. To this end, many different design methodologies have been presented as a means to improve software quality through semantic clarity and syntactic accuracy during the specification and design phases of the software life cycle. On the other end of the life cycle, software quality metrics have been proposed to supply quantitative measures of the resultant software. This study is an attempt to unify the two concepts by providing a means to determine the quality of a design before its implementation.
- Analysis and Modeling of World Wide Web Traffic. Abdulla, Ghaleb (Virginia Tech, 1998-04-27) This dissertation deals with monitoring, collecting, analyzing, and modeling of World Wide Web (WWW) traffic and client interactions. The rapid growth of WWW usage has not been accompanied by an overall understanding of models of information resources and their deployment strategies. Consequently, the current Web architecture often faces performance and reliability problems. Scalability, latency, bandwidth, and disconnected operations are some of the important issues that should be considered when attempting to adjust for the growth in Web usage. The WWW Consortium launched an effort to design a new protocol that will be able to support future demands. Before doing that, however, we need to characterize current users' interactions with the WWW and understand how it is being used. We focus on proxies since they provide a good medium for caching, filtering information, payment methods, and copyright management. We collected proxy data from our environment over a period of more than two years. We also collected data from other sources such as schools, information service providers, and commercial sites. Sampling times range from days to years. We analyzed the collected data looking for important characteristics that can help in designing a better HTTP protocol. We developed a modeling approach that considers Web traffic characteristics such as self-similarity and long-range dependency. We developed an algorithm to characterize users' sessions. Finally we developed a high-level Web traffic model suitable for sensitivity analysis. As a result of this work, we developed statistical models of parameters such as arrival times, file sizes, file types, and locality of reference. We describe an approach to model long-range dependent Web traffic and we characterize activities of users accessing a digital library courseware server or Web search tools. Temporal and spatial locality of reference within examined user communities is high, so caching can be an effective tool to help reduce network traffic and to help solve the scalability problem. We recommend utilizing our findings to promote a smart distribution or push model to cache documents when there is likelihood of repeat accesses.
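For reference, the self-similarity property this abstract relies on is conventionally defined as follows (standard textbook form, not a formula taken from the dissertation):

```latex
% Second-order self-similarity: X^{(m)} is the series averaged over
% non-overlapping blocks of size m; H is the Hurst parameter, with
% 1/2 < H < 1 corresponding to long-range dependence.
X^{(m)}_k = \frac{1}{m} \sum_{i=(k-1)m+1}^{km} X_i ,
\qquad
\operatorname{Var}\bigl(X^{(m)}\bigr) \sim \sigma^2 \, m^{2H-2} \quad (m \to \infty).
```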
- Android Hypovisors: Securing Mobile Devices through High-Performance, Light-Weight, Subsystem Isolation with Integrity Checking and Auditing Capabilities. Krishnan, Neelima (Virginia Tech, 2014-12-12) The cellphone turned 40 years old in 2013, and its evolution has been phenomenal in these 40 years. Its name has evolved from "cellphone" to "mobile phone" and "smartphone" to "mobile device." Its transformation has been multi-dimensional in size, functionality, application, and the like. This transformation has allowed the mobile device to be utilized for casual use, personal use, and enterprise use. Usage is further driven by the availability of an enormous number of useful applications for easy download from application (App) markets. Casual download of a seemingly useful application from an untrusted source can cause immense security risks to personal data and any official data resident in the mobile device. Intruding malicious code can also enter enterprise networks and create serious security challenges. Thus, a mobile device architecture that supports secure multi-persona operation is strongly needed. The architecture should be able to prevent system intrusions and should be able to perform regular integrity checking and auditing. Since Android has the largest user base among mobile device operating systems (OS), the architecture presented here is implemented for Android. This thesis describes how an architecture named the "Android Hypovisor" has been developed and implemented successfully as part of this project work. The key contributions of the project work are: 1. Enhancement of kernel security. 2. Incorporation of an embedded Linux distribution layer that supports Glibc/shared libraries so that open-source tools can be added easily. 3. Integration of integrity checking and auditing tools (Intrusion Detection and Prevention System; IDPS). 4. Integration of container infrastructure to support multiple OS instances. 5. Analysis shows that the hypovisor increases memory usage by 40-50 MB. As the proposed OS is stripped down to support the embedded hypovisor, power consumption is only minimally increased. This thesis describes how the implemented architecture secures mobile devices through high-performance, light-weight, subsystem isolation with integrity checking and auditing capabilities.
- Anomaly Detection Through System and Program Behavior Modeling. Xu, Kui (Virginia Tech, 2014-12-15) Various vulnerabilities in software applications become easy targets for attackers. The trend constantly being observed in the evolution of advanced modern exploits is their growing sophistication in stealthy attacks. Code-reuse attacks such as return-oriented programming allow intruders to execute mal-intended instruction sequences on a victim machine without injecting external code. Successful exploitation leads to hijacked applications or the download of malicious software (drive-by download attack), which usually happens without notice or permission from users. In this dissertation, we address the problem of host-based system anomaly detection, specifically by predicting expected behaviors of programs and detecting run-time deviations and anomalies. We first introduce an approach for detecting the drive-by download attack, which is one of the major vectors for malware infection. Our tool enforces the dependencies between user actions and system events, such as file-system access and process execution. It can be used to provide real-time protection of a personal computer, as well as for diagnosing and evaluating untrusted websites for forensic purposes. We perform extensive experimental evaluation, including a user study with 21 participants, thousands of legitimate websites (for testing false alarms), 84 malicious websites in the wild, as well as lab-reproduced exploits. Our solution demonstrates a usable host-based framework for controlling and enforcing the access of system resources. Secondly, we present a new anomaly-based detection technique that probabilistically models and learns a program's control flows for high-precision behavioral reasoning and monitoring. Existing solutions suffer from either incomplete behavioral modeling (for dynamic models) or overestimating the likelihood of call occurrences (for static models). We introduce a new probabilistic anomaly detection method for modeling program behaviors. Its uniqueness is the ability to quantify the static control flow in programs and to integrate the control flow information in probabilistic machine learning algorithms. The advantage of our technique is the significantly improved detection accuracy. We observed 11- to 28-fold improvements in detection accuracy compared to state-of-the-art HMM-based anomaly models. We further integrate context information into our detection model, which achieves both strong flow-sensitivity and context-sensitivity. Our context-sensitive approach gives, on average, over a 10-fold improvement for system call monitoring, and three orders of magnitude for library call monitoring, over existing regular HMM methods. Evaluated with a large amount of program traces and real-world exploits, our findings confirm that the probabilistic modeling of program dependences provides a significant source of behavior information for building high-precision models for real-time system monitoring. Abnormal traces (obtained through reproducing exploits and synthesized abnormal traces) can be well distinguished from normal traces by our model.
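As a point of reference, the regular HMM/Markov-style scoring that the abstract compares against takes the following standard form (textbook notation; the dissertation's contribution is conditioning such probabilities on statically derived control flow and calling context):

```latex
% First-order Markov scoring of a call trace s_1 ... s_n (textbook form).
% A trace is flagged anomalous when its log-likelihood falls below a
% threshold tau learned from normal traces.
\log P(s_1, \dots, s_n) = \log P(s_1) + \sum_{i=2}^{n} \log P(s_i \mid s_{i-1}),
\qquad \text{flag if } \log P(s_1, \dots, s_n) < \tau .
```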
- The Application of Concurrent Object-Oriented Techniques to Reactive Systems. Kafura, Dennis G.; Lavender, R. Gregory (Department of Computer Science, Virginia Polytechnic Institute & State University, 1992) A language and system model combining concurrency, abstract communication and an object orientation offers several advantages in the design and implementation of large-scale reactive systems. An object orientation captures the abstraction and variety of entities inhabiting the environment, while the autonomy of actual entities is clearly reflected by expressions of concurrency in the program of the reactive system. Abstract communication is necessary to achieve data sharing among heterogeneous systems. However, attempts to design and implement a paradigm unifying these three features have encountered unexpected difficulties. These difficulties include the interference between concurrency control (synchronization) and inheritance, inadequate application-oriented communication abstractions, the absence of a useful model of exception handling for concurrent object-oriented applications, and the lack of a powerful and useful theory of computation based on asynchrony.
- The application of structure and code metrics to large scale systems. Canning, James Thomas (Virginia Polytechnic Institute and State University, 1985) This work extends the area of research termed software metrics by applying measures of system structure and measures of system code to three realistic software products. Previous research in this area has typically been limited to the application of code metrics such as: lines of code, McCabe's Cyclomatic number, and Halstead's software science variables. However, this research also investigates the relationship of four structure metrics: Henry's Information Flow measure, Woodfield's Syntactic Interconnection Model, Yau and Collofello's Stability measure, and McClure's Invocation complexity, to various observed measures of complexity such as ERRORS, CHANGES, and CODING TIME. These metrics are referred to as structure measures since they measure control flow and data flow interfaces between system components. Spearman correlations between the metrics revealed that the code metrics were similar measures of system complexity, while the structure metrics were typically measuring different dimensions of software. Furthermore, correlating the metrics to observed measures of complexity indicated that the Information Flow metric and the Invocation Measure typically performed as well as the three code metrics when project factors and subsystem factors were taken into consideration. However, it was generally true that no single metric was able to satisfactorily identify the variations in the data for a single observed measure of complexity. Trends between many of the metrics and the observed data were identified when individual components were grouped together. Code metrics typically formed groups of increasing complexity which corresponded to increases in the mean values of the observed data. The strength of the Information Flow metric and the Invocation measure is their ability to form a group containing highly complex components which was found to be populated by outliers in the observed data.
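For reference, the Information Flow measure named above has a compact standard form (Henry and Kafura, 1981): for a procedure p,

```latex
% Henry-Kafura information flow complexity of a procedure p:
% length(p) weighted by the square of its information-flow coupling.
C(p) = \text{length}(p) \times \bigl(\text{fan-in}(p) \times \text{fan-out}(p)\bigr)^{2}
```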
- Applying software maintenance metrics in the object oriented software development life cycle. Li, Wei (Virginia Tech, 1992-09-05) Software complexity metrics have been studied in the procedural paradigm as a quantitative means of assessing the software development process as well as the quality of software products. Several studies have validated that various metrics are useful indicators of maintenance effort in the procedural paradigm. However, software complexity metrics have rarely been studied in the object oriented paradigm. Very few complexity metrics have been proposed to measure object oriented systems, and the proposed ones have not been validated. This research concentrates on several object oriented software complexity metrics and the validation of these metrics with maintenance effort in two commercial systems. The results of an empirical study of the maintenance activities in the two commercial systems are also described. A metric instrumentation in an object oriented software development framework is presented.
- Applying Structure and Code Metrics to Three Large-Scale Systems. Kafura, Dennis G.; Canning, James (Department of Computer Science, Virginia Polytechnic Institute & State University, 1985) This work extends the area of research termed software metrics by applying measures of system structure and measures of system code to three realistic software products. Previous research in this area has typically been limited to the application of code metrics such as: lines of code, McCabe's Cyclomatic number, and Halstead's software science variables. However, this research also investigates the relationship of four structure metrics: Henry's Information Flow measure, Woodfield's Syntactic Interconnection Model, Yau and Collofello's Stability measure, and McClure's Invocation complexity, to various observed measures of complexity such as ERRORS, CHANGES, and CODING TIME. These metrics are referred to as structure measures since they measure control flow and data flow interfaces between system components. Correlating the metrics to observed measures of complexity indicated that the Information Flow metric and the Invocation Measure typically performed as well as the three code metrics when project factors and subsystem factors were taken into consideration. However, it was generally true that no single metric was able to satisfactorily identify the variations in the data.
- Automated Assessment of Student-written Tests Based on Defect-detection Capability. Shams, Zalia (Virginia Tech, 2015-05-05) Software testing is important, but judging whether a set of software tests is effective is difficult. This problem also appears in the classroom as educators more frequently include software testing activities in programming assignments. The most common measures used to assess student-written software tests are coverage criteria—tracking how much of the student’s code (in terms of statements or branches) is exercised by the corresponding tests. However, coverage criteria have limitations and sometimes overestimate the true quality of the tests. This dissertation investigates alternative measures of test quality based on how many defects the tests can detect either from code written by other students—all-pairs execution—or from artificially injected changes—mutation analysis. We also investigate a new potential measure called checked code coverage that calculates coverage from the dynamic backward slices of test oracles, i.e., all statements that contribute to the checked result of any test. Adoption of these alternative approaches in automated classroom grading systems requires overcoming a number of technical challenges. This research addresses these challenges and experimentally compares different methods in terms of how well they predict defect-detection capabilities of student-written tests when run against over 36,500 known, authentic, human-written errors. For data collection, we use CS2 assignments and evaluate students’ tests with 10 different measures—all-pairs execution, mutation testing with four different sets of mutation operators, checked code coverage, and four coverage criteria. Experimental results encompassing 1,971,073 test runs show that all-pairs execution is the most accurate predictor of the underlying defect-detection capability of a test suite. The second best predictor is mutation analysis with the statement deletion operator. Further, no strong correlation was found between defect-detection capability and coverage measures.
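The statement-deletion operator singled out above can be pictured with a small sketch (illustrative code, not from the dissertation): a test suite gets credit only if some test "kills" the mutant, i.e. distinguishes it from the original.

```cpp
// Illustrative statement-deletion (SDL) mutant and the tests that do or
// do not kill it. (Not code from the dissertation.)
#include <cassert>

int clampedSum(int a, int b, int limit) {
    int s = a + b;
    if (s > limit) s = limit;       // the statement an SDL mutant deletes
    return s;
}

int clampedSum_mutant(int a, int b, int limit) {
    int s = a + b;
    // if (s > limit) s = limit;    // deleted by the SDL operator
    (void)limit;                    // silence unused-parameter warning
    return s;
}

int main() {
    // Weak test: passes on both versions, so it does NOT kill the mutant.
    assert(clampedSum(1, 2, 10) == 3 && clampedSum_mutant(1, 2, 10) == 3);
    // Stronger test: original yields 5, mutant yields 7 -- mutant killed.
    assert(clampedSum(3, 4, 5) == 5);
    assert(clampedSum_mutant(3, 4, 5) != 5);
}
```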
- Automated Detection of Surface Defects on Barked Hardwood Logs and Stems Using 3-D Laser Scanned Data. Thomas, Liya (Virginia Tech, 2006-09-08) This dissertation presents an automated detection algorithm that identifies severe external defects on the surfaces of barked hardwood logs and stems. The defects detected are at least 0.5 inch in height and at least 3 inches in diameter, which are severe, medium to large in size, and have external surface rises. Hundreds of real log defect samples were measured, photographed, and categorized to summarize the main defect features and to build a defect knowledge base. Three-dimensional laser-scanned range data capture the external log shapes and portray bark pattern, defective knobs, and depressions. The log data are extremely noisy, have missing data, and include severe outliers induced by loose bark that dangles from the log trunk. Because the circle model is nonlinear and presents both additive and non-additive errors, a new robust generalized M-estimator has been developed that is different from the ones proposed in the statistical literature for linear regression. Circle fitting is performed by standardizing the residuals via scale estimates calculated by means of projection statistics and incorporated in the Huber objective function to bound the influence of the outliers in the estimates. The projection statistics are based on 2-D radial-vector coordinates instead of the row vectors of the Jacobian matrix as proposed in the statistical literature dealing with linear regression. This approach proves effective in that it makes the GM-estimator influence-bounded and thereby robust against outliers. Severe defects are identified through the analysis of 3-D log data using decision rules obtained from analyzing the knowledge base. Contour curves are generated from radial distances, which are determined by robust 2-D circle fitting to the log-data cross sections. The algorithm detected 63 of a total of 68 severe defects. There were 10 non-defective regions falsely identified as defects. When these were calculated as areas, the algorithm located 97.6% of the defect area and falsely identified 1.5% of the total clear area as defective.
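In standard notation, the robust circle fit described above takes roughly this shape (the Huber function is textbook material; the per-point scales s_i computed from projection statistics are the dissertation's contribution):

```latex
% Circle residuals, standardized by per-point scales s_i, minimized under
% the Huber objective with tuning constant c (standard form).
r_i = \sqrt{(x_i - a)^2 + (y_i - b)^2} - R ,
\qquad
\min_{a,\,b,\,R} \; \sum_i \rho\!\left(\frac{r_i}{s_i}\right),
\qquad
\rho(u) =
\begin{cases}
\frac{1}{2} u^2 , & |u| \le c ,\\
c\,|u| - \frac{1}{2} c^2 , & |u| > c .
\end{cases}
```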
- Automated Identification and Application of Code Refactoring in Scratch to Promote the Culture of Quality from the Ground up. Techapalokul, Peeratham (Virginia Tech, 2020-06-04) Much of software engineering research and practice is concerned with improving software quality. While enormous prior efforts have focused on improving the quality of programs, this dissertation instead provides the means to educate the next generation of programmers who care deeply about software quality. If they embrace the culture of quality, these programmers would be positioned to drastically improve the quality of the software ecosystem. This dissertation describes novel methodologies, techniques, and tools for introducing novice programmers to software quality and its systematic improvement. This research builds on the success of Scratch, a popular novice-oriented block-based programming language, to support the learning of code quality and its improvement. This dissertation improves the understanding of quality problems of novice programmers, creates analysis and quality improvement technologies, and develops instructional approaches for teaching quality improvement. The contributions of this dissertation are as follows. (1) We identify twelve code smells endemic to Scratch, show their prevalence in a large representative codebase, and demonstrate how they hinder project reuse and communal learning. (2) We introduce four new refactorings for Scratch, develop an infrastructure to support them in the Scratch programming environment, and evaluate their effectiveness for the target audience. (3) We study the impact of introducing code quality concepts alongside the fundamentals of programming with and without automated refactoring support. Our findings confirm that it is not only feasible but also advantageous to promote the culture of quality from the ground up. The contributions of this dissertation can benefit both novice programmers and introductory computing educators.
- Automatic, incremental, on-the-fly garbage collection of actors. Nelson, Jeffrey Ernest (Virginia Tech, 1989-02-15) Garbage collection is an important topic of research for operating systems, because applications are easier to write and maintain if they are unburdened by the concerns of storage management. The actor computation model is another important topic: it is a powerful, expressive model of concurrent computation. This thesis is motivated by the need for an actor garbage collector for a distributed real-time system under development by the Real-Time Systems Group at Virginia Tech. It is shown that traditional garbage collectors—even those that operate on computational objects—are not sufficient for actors. Three algorithms, with varying degrees of efficiency, are presented as solutions to the actor garbage collection problem. The correctness and execution complexity of the algorithms are derived. Implementation methods are explored, and directions for future research are proposed.
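The reason traditional collectors fall short, as the abstract notes, is that actor liveness is not plain reachability: an unreachable but active actor that can still send a message into the live set must be retained. A hedged sketch of that extra marking rule (hypothetical structures, not the thesis algorithms):

```cpp
// Sketch of actor liveness (hypothetical structures, not the thesis code).
// Live = reachable from a root, OR active and able to reach the live set.
#include <unordered_set>
#include <vector>

struct Actor {
    bool active = false;                // has pending work / may send messages
    std::vector<Actor*> acquaintances;  // actors it can send to
};

static void reach(Actor* a, std::unordered_set<Actor*>& seen) {
    if (!seen.insert(a).second) return;
    for (Actor* b : a->acquaintances) reach(b, seen);
}

std::unordered_set<Actor*> live(const std::vector<Actor*>& roots,
                                const std::vector<Actor*>& all) {
    std::unordered_set<Actor*> marked;
    for (Actor* r : roots) reach(r, marked);   // ordinary reachability first
    bool grew = true;
    while (grew) {                             // iterate to a fixed point
        grew = false;
        for (Actor* a : all) {
            if (marked.count(a) || !a->active) continue;
            std::unordered_set<Actor*> from_a;
            reach(a, from_a);                  // everything a could message
            for (Actor* t : from_a) {
                if (marked.count(t)) {         // a can send into the live set,
                    reach(a, marked);          // so a and its acquaintances live
                    grew = true;
                    break;
                }
            }
        }
    }
    return marked;                             // everything else is garbage
}
```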
- A Case Study in the Participatory Design of a Collaborative Science-Based Learning Environment. Chin, George (Virginia Tech, 2004-08-02) Educational technology research studies have found computer and software technologies to be underutilized in U.S. classrooms. In general, many teachers have had difficulty integrating computer and software technologies into learning activities and classroom curricula because specific technologies are ill-suited to their needs, or they lack the ability to make effective use of these technologies. In the development of commercial and business applications, participatory design approaches have been applied to facilitate the direct participation of users in system analysis and design. Among the benefits of participatory design are mutual learning between users and developers, envisionment of software products and their use contexts, empowerment of users in analysis and design, grounding of design in the practices of users, and growth of users as designers and champions of technology. In the context of educational technology development, these same consequences of participatory design may lead to more appropriate and effective educational systems as well as greater capacity for teachers to apply and integrate educational systems into their teaching and classroom practices. We present a case study of a participatory design project that took place over a period of two and one-half years, and in which teachers and developers engaged in the participatory analysis and design of a collaborative science learning environment. A significant aspect of the project was the development methodology we followed - Progressive Design. Progressive Design evolved as an integration of methods for participatory design, ethnography, and scenario-based design. In this dissertation, we describe the Progressive Design approach, how it was used, and its specific impacts and effects on the development of educational systems and the social and cognitive growth of teachers.
- CATY: an ASN.1-C++ translator in support of distributed object-oriented applications. Long, Wendy (Virginia Tech, 1994-04-15) When heterogeneous computers exchange data over a network, they must agree on a common interpretation of the data. The OSI suite of protocols includes a standard notation, Abstract Syntax Notation One (ASN.1), for describing the structure ("abstract syntax") of data. Previous work has shown that C++ is a good language for work with layered network architectures and specifically with ASN.1: the inheritance and polymorphism features of C++ are nicely suited for work with layered protocols, which can be seen and used in object-oriented terms; a C++ class hierarchy, designed to capture the language concepts of ASN.1, successfully separates the abstract syntax (or application level) from the encoding used during transfer (the "transfer syntax" at presentation level); and the class construct and scoping rules of C++ and the design of the class hierarchy much better preserve the structure and content of ASN.1 than do past attempts with C. This report presents CATY (Class-oriented ASN.1 Translator, Yacc-based), a translator from ASN.1 to a corresponding C++ abstract syntax class hierarchy. It is shown in this report that the translations produced by CATY are preferable to those produced by other translators based on the following criteria: preservation of names and types, consistent access to elements, support of modularity and subtypes, resolution of forward references, flexibility of encoding, and generality of use. Furthermore, it is shown that CATY has better throughput than PEPSY, an ASN.1 to C translator from ISODE.
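The flavor of such a translation can be conveyed with a small example (the ASN.1 is standard notation; the generated C++ shape is a hypothetical illustration, not CATY's actual output):

```cpp
// ASN.1 source (standard notation):
//   Person ::= SEQUENCE {
//       name      IA5String,
//       employed  BOOLEAN
//   }
//
// Hypothetical C++ abstract-syntax class a translator like CATY might emit.
// The transfer syntax (e.g. BER) sits behind a virtual interface, keeping
// the abstract syntax independent of the encoding.
#include <cstdint>
#include <string>
#include <vector>

class ASN1Type {                          // common base for generated types
public:
    virtual std::vector<std::uint8_t> encode() const = 0;  // transfer syntax
    virtual ~ASN1Type() = default;
};

class Person : public ASN1Type {          // SEQUENCE -> class, one member each
public:
    std::string name;                     // IA5String -> std::string
    bool employed = false;                // BOOLEAN   -> bool
    std::vector<std::uint8_t> encode() const override {
        std::vector<std::uint8_t> out;
        out.push_back(0x30);              // BER tag for SEQUENCE; a real
        return out;                       // translator emits the full encoding
    }
};
```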
- Class hierarchy design for space time problems. Chopra, Sanjay (Virginia Tech, 1995-07-06) The purpose of the project is to design a class hierarchy that will aid in the development of simulations for certain space time problems. The class hierarchy and the problem domain to which it applies are illustrated by considering simulations of three representative problems: a pool game; a collision detection system for robot arms; an automated highway system. The emphasis in the simulations is on the class hierarchy. The class hierarchy contains base classes to model objects, space, time, and interactions among objects. These classes could be applied to other similar problems in the problem domain. For example, the objects class helps to model various objects such as cars, pool balls, robots, trains, and birds. The space class allows the user to subdivide the problem space into smaller dynamic sub-spaces. The user can define rules to decompose the space into 'n' smaller spaces when there are more than 'x' objects in the space.
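The subdivision rule described above (split into 'n' subspaces once a space holds more than 'x' objects) might be sketched as follows (hypothetical names, not the thesis code):

```cpp
// Sketch of the dynamic space-subdivision rule (illustrative only).
#include <memory>
#include <vector>

struct Object { double x, y; };           // a simulated entity's position

class Space {
    std::vector<Object*> objects;
    std::vector<std::unique_ptr<Space>> subspaces;
    std::size_t max_objects;              // the threshold "x"
    std::size_t fanout;                   // the subdivision count "n"
public:
    Space(std::size_t x, std::size_t n) : max_objects(x), fanout(n) {}

    void insert(Object* o) {
        objects.push_back(o);
        if (subspaces.empty() && objects.size() > max_objects)
            subdivide();                  // rule fires: split into n subspaces
    }
private:
    void subdivide() {
        for (std::size_t i = 0; i < fanout; ++i)
            subspaces.push_back(std::make_unique<Space>(max_objects, fanout));
        // A full implementation would redistribute `objects` among the
        // subspaces by spatial extent; omitted in this sketch.
    }
};
```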