Computer Science Technical Reports
The Department of Computer Science collection of technical reports began in 1973. Please use the subject headings listed below for all submissions.
Subject Headings:
- Algorithms
- Big Data
- Bioinformatics
- Computational Biology
- Computational Science and Engineering
- Computer Graphics/Animation
- Computer Science Education
- Computer Systems
- Cyberarts
- Cybersecurity
- Data and Text Mining
- Digital Education
- Digital Libraries
- Discrete Event Simulation
- High Performance Computing
- Human Computer Interaction
- Information Retrieval
- Machine Learning
- Mathematical Programming
- Mathematical Software
- Modeling and Simulation
- Networking
- Numerical Analysis
- Parallel and Distributed Computing
- Problem Solving Environments
- Software Engineering
- Theoretical Computer Science
- Virtual/Augmented Reality
- Visualization
Browse
Browsing Computer Science Technical Reports by Title
Now showing 1 - 20 of 1036
- Abstraction Mechanisms in Support of Top-Down and Bottom-Up Task Specification
  Raghu, K. S.; Arthur, James D. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1988)
  Abstraction is a powerful mechanism for describing objects and relationships from multiple, yet consistent, perspectives. When properly applied to interface design, abstraction mechanisms can provide the interaction flexibility and simplicity so desperately needed and demanded by today's diverse user community. Fundamental to achieving such goals has been the integration of visual programming techniques with a unique blend of abstraction mechanisms to support user interaction and task specification. The research presented in this paper describes crucial abstraction mechanisms employed within the Taskmaster environment to support top-down and bottom-up task specification. In particular, this paper (a) provides an overview of the Taskmaster environment, (b) describes top-down specification based on multi-level, menu-driven interaction, and (c) describes bottom-up specification based on cutset identification and pseudo-tool concepts.
- The Abstraction Refinement Model and the Modification-Cost Problem
  Keller, Benjamin J.; Nance, Richard E. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1992)
  A problem common to systems and software engineering is that of estimating the cost of making changes to a system. For system modifications that include changes to the design history of the system, this is the "modification-cost" problem. A solution to this problem is important to planning changes in large systems engineering projects. In this paper, a cost model based on the Abstraction Refinement Model (ARM) is proposed as a framework for deriving solutions to the modification-cost problem. The ARM is a characterization of software evolution that is also applicable to general systems. Modifications to systems and their design histories are described using the components of the ARM. The cost model is defined by functions on the ARM components. The derived solution is given by an abstract expression of the cost functions.
- Accelerating Data-Serial Applications on Data-Parallel GPGPUs: A Systems Approach
  Aji, Ashwin M.; Feng, Wu-chun (Department of Computer Science, Virginia Polytechnic Institute & State University, 2008)
  The general-purpose graphics processing unit (GPGPU) continues to make significant strides in high-end computing by delivering unprecedented performance at a commodity price. However, the many-core architecture of the GPGPU currently allows only data-parallel applications to extract the full potential out of the hardware. Applications that require frequent synchronization during their execution do not experience much performance gain on the GPGPU. This is mainly due to the lack of explicit hardware or software support for inter-thread communication across the entire GPGPU chip. In this paper, we design, implement, and evaluate a highly efficient software barrier that synchronizes all the thread blocks of an offloaded kernel on the GPGPU without having to transfer execution control back to the host processor. We show that our custom software barrier achieves a three-fold performance improvement over the existing approach, i.e., synchronization via the host processor. To illustrate the aforementioned performance benefit, we parallelize a data-serial application, specifically an optimal sequence-search algorithm called Smith-Waterman (SWat), that requires frequent barrier synchronization across the many cores of the NVIDIA GeForce GTX 280 GPGPU. Our parallelization consists of a suite of optimization techniques: optimal data layout, coalesced memory accesses, and blocked data decomposition. Then, when coupled with our custom software-barrier implementation, we achieve nearly a nine-fold speedup over the serial implementation of SWat. We also show that our solution delivers 25-fold faster on-chip execution than the naïve implementation.
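
The Smith-Waterman kernel mentioned in the entry above is a local-alignment dynamic program. As a point of reference only, here is a minimal serial Python sketch of the scoring recurrence; it is not the GPGPU implementation described in the report, and the scoring parameters are illustrative choices.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Serial reference sketch of Smith-Waterman local-alignment scoring.
    H[i][j] holds the best score of a local alignment ending at a[i-1], b[j-1]."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))    # best local alignment score
```

The anti-diagonal wavefront dependence visible in the recurrence is what forces the frequent whole-chip synchronization that the report's software barrier addresses.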
- Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units
  Anandakrishnan, Ramu; Scogland, Thomas R. W.; Fenley, Andrew T.; Gordon, John; Feng, Wu-chun; Onufriev, Alexey V. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2009)
  Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. This paper demonstrates how one can take advantage of graphics processing units (GPUs) available in today’s typical desktop computer, together with a multiscale approximation method, to significantly speed up such computations. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is implemented on an ATI Radeon 4870 GPU in combination with the hierarchical charge partitioning (HCP) multiscale approximation. This implementation delivers a combined 1800-fold speedup for a 476,040-atom viral capsid.
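
The multiscale part of the speedup above comes from replacing distant groups of charges with far fewer effective charges. The Python sketch below shows only that idea, under heavy simplification: a plain Coulomb kernel instead of the ALPB model, a single two-level grouping rather than HCP's full structural hierarchy, and an arbitrary distance threshold. All of those simplifications are assumptions made for illustration.

```python
import math

def coulomb(point, atoms):
    """Direct sum of q / r over a list of (x, y, z, q) charges (arbitrary units)."""
    return sum(q / math.dist(point, (x, y, z)) for x, y, z, q in atoms)

def multiscale_potential(point, components, threshold=15.0):
    """Two-level multiscale approximation in the spirit of HCP: a component whose
    centroid lies farther than `threshold` from the evaluation point is replaced
    by one effective charge at its centroid; nearby components are summed exactly."""
    total = 0.0
    for atoms in components:
        n = len(atoms)
        centroid = tuple(sum(a[i] for a in atoms) / n for i in range(3))
        q_total = sum(a[3] for a in atoms)
        r = math.dist(point, centroid)
        if r > threshold:
            total += q_total / r            # coarse, aggregated contribution
        else:
            total += coulomb(point, atoms)  # exact, per-atom contribution
    return total
```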
- Accelerating Workloads on FPGAs via OpenCL: A Case Study with OpenDwarfs
  Verma, Anshuman; Helal, Ahmed E.; Krommydas, Konstantinos; Feng, Wu-chun (Department of Computer Science, Virginia Polytechnic Institute & State University, 2016-05-13)
  For decades, the streaming architecture of FPGAs has delivered accelerated performance across many application domains, such as option-pricing solvers in finance, computational fluid dynamics in oil and gas, and packet processing in network routers and firewalls. However, this performance comes at the expense of programmability. FPGA developers use hardware design languages (HDLs) to implement the application data and control path and to design hardware modules for computational pipelines, memory management, synchronization, and communication. This process requires extensive knowledge of logic design, design automation tools, and low-level details of FPGA architecture, and it consumes significant development time and effort. To address this lack of programmability, OpenCL provides an easy-to-use and portable programming model for CPUs, GPUs, APUs, and now FPGAs. Although OpenCL significantly improves programmability, an OpenCL kernel optimized for a GPU may lack performance portability on an FPGA. To improve the performance of OpenCL kernels on FPGAs, we identify general techniques for optimizing OpenCL kernels for FPGAs under device-specific hardware constraints. We then apply these optimization techniques to the OpenDwarfs benchmark suite, which has diverse parallelism profiles and memory access patterns, in order to evaluate the effectiveness of the optimizations in terms of performance and resource utilization. Finally, we present the performance of the structured grids and N-body dwarf-based benchmarks under various optimizations, along with their potential refactoring. We find that careful design of kernels for FPGAs can result in a highly efficient pipeline that achieves 91% of theoretical throughput for the structured grids dwarf.
  Index Terms: OpenDwarfs; FPGA; OpenCL; GPU; MIC; Accelerators; Performance Portability
- Accounts from a Claims Reuse Experience: Design of an Airline Fares Tracker
  Montabert, Cyril; Tarpley, Anderson Ray III (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007-12-01)
  Previous research efforts have led to the establishment of a repository of claims as reusable knowledge entities. Through the analysis, design, and prototyping of a notification system aimed at monitoring airfares across time, airlines, and location, this paper presents the various work-products resulting from a scenario-based design approach coupled with the Claims Reuse Library to support reuse-centric claims analysis. Finally, we share our experience and findings using the Claims Reuse Library as a core to knowledge transfer.
- Achieving Asynchronous Speedup While Preserving Synchronous Semantics: An Implementation of Instructional Footprinting in Linda
  Landry, Kenneth D.; Arthur, James D. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1993)
  Linda is a coordination language designed to support process creation and inter-process communication within conventional computational languages. Although the Linda paradigm touts architectural and language independence, it often suffers performance penalties, particularly on local area network platforms. Instructional Footprinting is an optimization technique with the primary goal of enhancing the execution speed of Linda programs. The two main aspects of Instructional Footprinting are instructional decomposition and code motion. This paper addresses the semantic issues encountered when the Linda primitives IN and RD are decomposed and moved past other Linda operations. Formal semantics are given, as well as results showing significant speedup (as high as 64%) when Instructional Footprinting is used.
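
For readers unfamiliar with Linda, the IN and RD primitives named above withdraw or read a tuple that matches a template, blocking until one is available. The Python sketch below is only meant to make those semantics concrete; it is not the authors' implementation and ignores Instructional Footprinting entirely, and the use of None as a template wildcard is an assumption of this sketch.

```python
import threading

class TupleSpace:
    """Toy tuple space: out() deposits a tuple, rd() reads a matching tuple,
    in_() reads and removes one. None acts as a wildcard in templates."""
    def __init__(self):
        self._tuples = []
        self._cv = threading.Condition()

    def out(self, tup):
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()

    def _match(self, template):
        for t in self._tuples:
            if len(t) == len(template) and all(
                    p is None or p == v for p, v in zip(template, t)):
                return t
        return None

    def rd(self, template):          # Linda RD: blocking read, tuple remains
        with self._cv:
            while (t := self._match(template)) is None:
                self._cv.wait()
            return t

    def in_(self, template):         # Linda IN: blocking read plus removal
        with self._cv:
            while (t := self._match(template)) is None:
                self._cv.wait()
            self._tuples.remove(t)
            return t
```

A worker might block on ts.in_(("task", None)) while a master deposits work with ts.out(("task", 17)); RD behaves the same way but leaves the tuple in place. These are exactly the operations whose decomposition and code motion the report formalizes.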
- Achieving Very High Order for Implicit Explicit Time Stepping: Extrapolation Methods
  Constantinescu, Emil M.; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2008-07-01)
  In this paper we construct extrapolated implicit-explicit time stepping methods that make it possible to efficiently solve problems with both stiff and non-stiff components. The proposed methods can provide very high order discretizations of ODEs, index-1 DAEs, and PDEs in the method-of-lines framework. These methods are simple to construct and easy to implement and parallelize. We establish the existence of perturbed asymptotic expansions of global errors, explain the convergence orders of these methods, and explore their linear stability properties. Numerical results with stiff ODEs, DAEs, and PDEs illustrate the theoretical findings and the potential of these methods to solve multiphysics, multiscale problems.
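
To make the construction above concrete, the sketch below applies classical Aitken-Neville extrapolation to a first-order IMEX Euler step on a simple split test problem. The test problem, the harmonic substep sequence, and the step sizes are assumptions chosen for illustration; the report itself treats general ODEs, index-1 DAEs, and PDEs and establishes the error expansions that justify the extrapolation.

```python
import math

# Prothero-Robinson-type test problem split into stiff and non-stiff parts:
#   y' = lam*(y - cos(t)) - sin(t),  y(0) = 1,  exact solution y(t) = cos(t).
LAM = -1000.0
def f_stiff(t, y):     return LAM * (y - math.cos(t))
def g_nonstiff(t, y):  return -math.sin(t)

def imex_euler(t0, y0, h, n):
    """First-order IMEX Euler: implicit in the stiff part, explicit in the rest.
    The implicit stage is solved in closed form because f_stiff is linear in y."""
    t, y = t0, y0
    for _ in range(n):
        rhs = y + h * g_nonstiff(t, y) - h * LAM * math.cos(t + h)
        y = rhs / (1.0 - h * LAM)
        t += h
    return y

def extrapolated_step(t0, y0, H, levels=4):
    """Aitken-Neville extrapolation of IMEX Euler over one macro-step H,
    using the harmonic sequence n_j = j substeps."""
    T = [[None] * (levels + 1) for _ in range(levels + 1)]
    for j in range(1, levels + 1):
        T[j][1] = imex_euler(t0, y0, H / j, j)
        for k in range(1, j):
            T[j][k + 1] = T[j][k] + (T[j][k] - T[j - 1][k]) / (j / (j - k) - 1.0)
    return T[levels][levels]

y, t, H = 1.0, 0.0, 0.1
for _ in range(10):
    y = extrapolated_step(t, y, H)
    t += H
print("error at t = 1:", abs(y - math.cos(1.0)))
```

Each successive column of the extrapolation tableau cancels one more term of the base method's error expansion, which is the mechanism the report exploits to reach very high order.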
- Acquisition of an Interactive Computing System for Academic Use: A Case Study
  Heafner, John F.; Nance, Richard E. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1981)
  The acquisition of a large-scale computer system is a complex and important task that universities face periodically. The large capital expenditures and the always expanding need for computing services ensure an increasing importance. Virginia Tech recently made such an acquisition. This paper describes the evaluation procedures leading to the acquisition, beginning with needs assessment and concluding with system selection. The acquisition of a computing system, in this instance a system primarily for interactive instructional support of undergraduates, is a decision that is subject to a variety of influences: technical, managerial, political, and personal. This paper describes the authors' attempts (as then Associate Director of the Computing Center and then Head of the Computer Science Department, respectively) to deal with these influences through the use of quantitative techniques, behavioral analysis, and common sense.
- ACT++ 2.0: A Class Library for Concurrent Programming in C++ Using Actors
  Kafura, Dennis G.; Mukherji, Manibrata; Lavender, R. Gregory (Department of Computer Science, Virginia Polytechnic Institute & State University, 1992)
  ACT++ 2.0 is the most recent version of a class library for concurrent programming in C++. The underlying model of concurrent computation is the Actor model. Programs in ACT++ consist of a collection of active objects called actors. Actors execute concurrently and cooperate by sending request and reply messages. An agent, termed the behavior of an actor, is responsible for processing a single request message and for specifying a replacement behavior which processes the next available request message. One of the salient features of ACT++ is its ability to handle the Inheritance Anomaly (the interference between the inheritance mechanism of object-oriented languages and the specification of synchronization constraints in the methods of a class) using the notion of behavior sets. ACT++ has been implemented on the Sequent Symmetry multiprocessor using the PRESTO threads package.
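
ACT++ itself is a C++ library; purely to illustrate the actor concepts named above (a mailbox per actor and a behavior that processes one request and names its replacement), here is a small Python sketch under those assumptions. It is not ACT++ code.

```python
import queue
import threading

class Actor:
    """Minimal actor: a mailbox plus a current behavior. The behavior handles
    one message and returns the behavior that will handle the next message,
    mimicking the replacement-behavior idea described above."""
    def __init__(self, behavior):
        self._mailbox = queue.Queue()
        self._behavior = behavior
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self._mailbox.put(message)

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message is None:                  # shutdown sentinel
                return
            self._behavior = self._behavior(message)

def counter(value=0):
    """A behavior parameterized by its state; each call returns the
    replacement behavior for the next request."""
    def behavior(message):
        kind, reply = message
        if kind == "inc":
            return counter(value + 1)
        if kind == "read":
            reply.put(value)
        return counter(value)
    return behavior

if __name__ == "__main__":
    a = Actor(counter())
    a.send(("inc", None))
    a.send(("inc", None))
    replies = queue.Queue()
    a.send(("read", replies))
    print(replies.get())                         # prints 2
    a.send(None)                                 # stop the actor thread
```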
- ACT++: Building a Concurrent C++ with Actors
  Kafura, Dennis G.; Lee, Keung Hae (Department of Computer Science, Virginia Polytechnic Institute & State University, 1989)
  ACT++ (Actors in C++) is a concurrent object-oriented language being designed for distributed real-time applications. The language is a hybrid of the actor kernel language and the object-oriented language C++. The concurrency abstraction of ACT++ is derived from the actor model as defined by Agha. This paper discusses our experience in building a concurrent extension of C++ with the concurrency abstraction of the actor model. The current design of ACT++ and its implementation are described. Some problems found in Agha's actor model are discussed in the context of distributed real-time applications. The use of ACT++ disclosed the difficulty of combining the actor model of concurrency with class inheritance in an object-oriented language.
- An Active Set Algorithm for Tracing Parametrized Optima
  Rakowska, Joanna; Haftka, Raphael T.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1990)
  Optimization problems often depend on parameters that define constraints or objective functions. It is often necessary to know the effect of a change in a parameter on the optimum solution. An algorithm is presented here for tracking paths of optimal solutions of inequality-constrained nonlinear programming problems as a function of a parameter. The proposed algorithm employs homotopy zero-curve tracing techniques to track segments where the set of active constraints is unchanged. The transition between segments is handled by considering all possible sets of active constraints and eliminating nonoptimal ones based on the signs of the Lagrange multipliers and the derivatives of the optimal solutions with respect to the parameter.
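
As a toy illustration of how the active set can change as a parameter varies, consider minimizing (x - p)^2 subject to x <= 1. The sketch below simply evaluates the KKT conditions on a grid of parameter values; the report's algorithm instead traces each constant-active-set segment with homotopy zero-curve techniques and handles the transitions, so this is a conceptual example only.

```python
# Toy parametrized problem: minimize (x - p)^2 subject to x <= 1.
# KKT gives x* = p with multiplier 0 while the constraint is inactive (p <= 1),
# and x* = 1 with multiplier 2(p - 1) >= 0 once it becomes active (p > 1).
def trace_optimum(p_values):
    history = []
    for p in p_values:
        if p <= 1.0:                       # constraint inactive
            x_opt, lam, active = p, 0.0, False
        else:                              # constraint active at x = 1
            x_opt, lam, active = 1.0, 2.0 * (p - 1.0), True
        history.append((p, x_opt, lam, active))
    return history

for p, x, lam, active in trace_optimum([0.0, 0.5, 1.0, 1.5, 2.0]):
    print(f"p={p:.1f}  x*={x:.2f}  lambda={lam:.2f}  active={active}")
```

The active set switches at p = 1, which is precisely the kind of transition point the algorithm must detect while following the path of optima.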
- Adapting Protocols to Massively Interconnected Systems
  Kafura, Dennis G.; Abrams, Marc (Department of Computer Science, Virginia Polytechnic Institute & State University, 1991-05-01)
  This paper describes ongoing research focused on two critical problems posed by the interconnection of a massive number of computer systems. The interconnection may be achieved through wide area or local area networks. The two problems considered in this research are as follows: (1) performance analysis of the protocols used in an internetwork connecting thousands to millions of nodes, and (2) application development in a massively distributed, heterogeneous environment where components implemented in different programming languages must be integrated and/or reused. The performance analysis problem is addressed by employing large-scale parallel simulation, extended finite state machines, and object-oriented simulation techniques. The approach to solving the application development problem is based on an environment which exploits the synergism between object-oriented programming and layered communication protocols (specifically, OSI).
- Adaptive Key Protection in Complex Cryptosystems with Attributes
  Wang, Zilong; Yao, Danfeng (Daphne); Feng, Rongquan (Department of Computer Science, Virginia Polytechnic Institute & State University, 2012)
  In the attribute-based encryption (ABE) model, attributes (as opposed to identities) are used to encrypt messages, and all the receivers with qualifying attributes can decrypt the ciphertext. However, compromised attribute keys may affect the communications of many users who share the same access control policies. We present the notion of forward-secure attribute-based encryption (fs-ABE) and give a concrete construction based on bilinear maps and the decisional bilinear Diffie-Hellman assumption. Forward security means that a private key compromised by an adversary at time t does not break the confidentiality of communication that took place prior to t. We describe how to achieve both forward security and encryption with attributes, and formally prove our security against adaptive chosen-ciphertext adversaries. Our scheme is non-trivial, and the key size grows only polynomially with log N (where N is the number of time periods). We further generalize our scheme to support an individualized key-updating schedule for each attribute, which provides finer granularity for key management. Our insights on the properties that an ABE scheme needs to possess in order to be forward-secure compatible are useful beyond the specific fs-ABE construction given here. We raise an open question at the end of the paper on the escrow problem of the master key in ABE schemes.
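
The notion of forward security itself is easiest to see with a symmetric key that evolves through a one-way function: compromising the current key reveals nothing about keys that have already been erased. The sketch below shows only that notion; the fs-ABE construction in the report is public-key, attribute-based, built from bilinear maps, and keeps key size O(log N), none of which a linear hash chain captures.

```python
import hashlib
import hmac

def evolve(key):
    """One-way key update K_{t+1} = H(K_t); once K_t is erased, a key stolen
    later cannot be run backwards to recover it."""
    return hashlib.sha256(key).digest()

def period_key(key, purpose=b"encrypt"):
    """Derive a per-period working key so the evolving secret itself is never
    exposed to message processing."""
    return hmac.new(key, purpose, hashlib.sha256).digest()

k = hashlib.sha256(b"initial secret").digest()
for t in range(3):
    print(f"period {t}: {period_key(k).hex()[:16]}...")
    k = evolve(k)          # discard the old k at the end of each period
```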
- An Adaptive Noise Filtering Algorithm for AVIRIS Data with Implications for Classification Accuracy
  Phillips, Rhonda D.; Blinn, Christine E.; Watson, Layne T.; Wynne, Randolph H. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2008)
  This paper describes a new algorithm used to adaptively filter a remote sensing dataset based on signal-to-noise ratios (SNRs) once the maximum noise fraction (MNF) has been applied. This algorithm uses Hermite splines to calculate the approximate area underneath the SNR curve as a function of band number, and that area is used to place bands into “bins” with other bands having similar SNRs. A median filter with a variable-sized kernel is then applied to each band, with the same size kernel used for each band in a particular bin. The proposed adaptive filters are applied to a hyperspectral image generated by the AVIRIS sensor, and results are given for the identification of three different pine species located within the study area. The adaptive filtering scheme improves image quality as shown by estimated SNRs, and classification accuracies improved by more than 10% on the sample study area, indicating that the proposed methods improve image quality, thereby aiding in species discrimination.
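
The pipeline described above has three steps: score each band by the area under its SNR curve, bin bands with similar scores, and median-filter each band with a kernel size fixed per bin. The numpy sketch below follows that outline, but the trapezoidal area (standing in for Hermite splines), the quantile-based binning, the kernel sizes, and the band-last array layout are all assumptions made for illustration.

```python
import numpy as np

def snr_areas(snr, window=5):
    """Approximate area under the SNR curve around each band
    (trapezoidal stand-in for the report's Hermite splines)."""
    half = window // 2
    return np.array([np.trapz(snr[max(0, b - half):min(len(snr), b + half + 1)])
                     for b in range(len(snr))])

def bin_bands(areas, n_bins=4):
    """Group bands with similar SNR areas into n_bins bins (quantile edges assumed)."""
    edges = np.quantile(areas, np.linspace(0, 1, n_bins + 1))
    return np.clip(np.digitize(areas, edges[1:-1]), 0, n_bins - 1)

def median_filter2d(img, k):
    """Plain sliding-window median filter with a k x k kernel."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def adaptive_filter(cube, snr, kernel_sizes=(9, 7, 5, 3)):
    """Filter each band of a (rows, cols, bands) cube with a kernel chosen by its
    SNR bin: noisier bands (smaller SNR area) get larger kernels."""
    bins = bin_bands(snr_areas(snr), n_bins=len(kernel_sizes))
    return np.stack([median_filter2d(cube[..., b], kernel_sizes[bins[b]])
                     for b in range(cube.shape[-1])], axis=-1)
```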
- Adding Value to the Software Development Process: A Study in Independent Verification and Validation
  Arthur, James D.; Groener, Markus K.; Hayhurst, Kelly J.; Holloway, C. Michael (Department of Computer Science, Virginia Polytechnic Institute & State University, 1998-08-01)
  Independent Verification and Validation (IV&V) is best viewed as an overlay process supporting a software development effort. While the touted benefits of a properly managed IV&V activity are many, they specifically emphasize: (a) early fault detection, (b) reduced time to remove faults, and (c) a more robust end-product. This paper outlines a study funded by NASA Langley Research Center to examine an existing IV&V methodology and to confirm (or refute) the touted beneficial claims. In the study two distinct development groups are established, with only one having an IV&V contingent. Both groups are tasked to produce a software product using the same set of requirements. Within each phase of the development effort, fault detection and fault removal data are recorded. An analysis of that data reveals that the group having the IV&V contingent: (a) detected errors earlier in the software development process, and (b) on average, required significantly less time to remove those faults. Moreover, a test for operational correctness further reveals that the system developed by the group having the IV&V component was substantially more robust than the one produced by the other development group. A statistical analysis of our results is also provided to establish significance.
- Adjusting process count on demand for petascale global optimization
  Radcliffe, Nicholas R.; Watson, Layne T.; Sosonkina, Masha; Haftka, Raphael T.; Trosset, Michael W. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2011)
  There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
- ADML: Aircraft Design Markup Language for Multidisciplinary Aircraft Design and Analysis
  Deshpande, Shubhangi; Watson, Layne T.; Love, Nathan J.; Canfield, Robert A.; Kolonay, Raymond M. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2013-12-31)
  The process of conceptual aircraft design has advanced tremendously in the past few decades due to rapidly developing computer technology. Today’s modern aerospace systems exhibit strong interdisciplinary coupling and require a multidisciplinary, collaborative approach. Efficient transfer, sharing, and manipulation of aircraft design and analysis data in such a collaborative environment demands a formal, structured representation of data. XML, a W3C recommendation, is one such standard, offering a number of powerful capabilities that alleviate interoperability issues in a collaborative environment. A compact, generic, and comprehensive XML schema for an aircraft design markup language (ADML) is proposed here to represent aircraft conceptual design and analysis data. The purpose of this unified data format is to provide a common language for data communication, and to improve efficiency and productivity within a multidisciplinary, collaborative aircraft design environment. An important feature of the proposed schema is its very expressive and efficient low-level schemata (raw data, mathematical objects, and basic geometry). As a proof of concept, the schema is used to encode an entire Convair B-58. As the complexity of models and the number of disciplines increases, the reduction in effort to exchange data models and analysis results in ADML also increases.
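
Since the ADML tag set is not reproduced in the abstract, the fragment below uses purely hypothetical element and attribute names to show what exchanging structured design data through Python's standard ElementTree API looks like; it should not be read as the actual ADML schema.

```python
import xml.etree.ElementTree as ET

# All element and attribute names here are hypothetical placeholders,
# not the real ADML vocabulary.
aircraft = ET.Element("aircraft", name="example")
wing = ET.SubElement(aircraft, "wing")
ET.SubElement(wing, "span", units="m").text = "17.3"
ET.SubElement(wing, "aspectRatio").text = "7.8"

# A collaborating tool can read the same document back without any
# knowledge of the tool that produced it.
doc = ET.fromstring(ET.tostring(aircraft))
span = float(doc.find("./wing/span").text)
print(span, doc.find("./wing/span").get("units"))
```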
- Affordances and Feedback in Nuance-Oriented Interfaces
  Wingrave, Chadwick A.; Bowman, Douglas A.; Ramakrishnan, Naren (Department of Computer Science, Virginia Polytechnic Institute & State University, 2001)
  Virtual Environments (VEs) and perceptive user interfaces must deal with complex users and their modes of interaction. One way to approach this problem is to recognize users’ nuances (subtle conscious or unconscious actions). In exploring nuance-oriented interfaces, we attempted to let users work as they preferred without being biased by feedback or affordances in the system. The hope was that we would discover the users’ innate models of interaction. The results of two user studies were that users are guided not by any innate model but by affordances and feedback in the interface. So, without this guidance, even the most obvious and useful components of an interface will be ignored.
- An Agenda for Human-Computer Interaction Research: User Interface Development Processes and Methodologies
  Hartson, H. Rex (Department of Computer Science, Virginia Polytechnic Institute & State University, 1991)
  This paper is the result of one working group in a workshop entitled "An Agenda for Human-Computer Interaction Research: Science and Engineering Serving Human Needs." The workshop, sponsored by the National Science Foundation, brought together "20-25 of the most prominent HCI researchers from the disciplines of computer science, engineering, information science, psychology, and human factors along with several NSF staff members." The workshop results, to appear in the HCI literature, identify critical research issues and potential avenues for assaulting them, along with necessary infrastructure recommendations related to educational, professional, and facility problems. The overall topical area was divided into five areas, each with an individual researcher in charge of directing discussion and reporting on the area. The areas included theory and taxonomy of HCI models, the interface development process, I/O devices and interaction styles, software tools, and computer-supported collaborative work.