Department of Computer Science
The Department of Computer Science has an accredited undergraduate program that offers specialized ‘tracks’ of study in key areas. By graduation, undergraduates are prepared to pursue a computing career or graduate study. Our active corporate partners program offers internships and permanent employment to our students. Students are encouraged to participate in research experiences during their studies. Capstone courses provide significant team project experiences.
The graduate program offers M.S. and Ph.D. degrees, emphasizing thesis work both at the main campus in Blacksburg and at the Northern Virginia Center. About two-thirds of the graduate students are pursuing the Ph.D. degree. The faculty, who include 12 NSF or DOE CAREER Award winners, are active researchers, visible contributors to the profession, and recipients of significant honors.
Browsing Department of Computer Science by Department "Computer Science"
Now showing 1 - 20 of 1514
- 5SL: A Language for Declarative Specification and Generation of Digital LibrariesGoncalves, Marcos A.; Fox, Edward A. (2002-07-01)Digital Libraries (DLs) are among the most complex kinds of information systems, due in part to their intrinsic multi-disciplinary nature. Nowadays DLs are built within monolithic, tightly integrated, and generally inflexible systems, or by assembling disparate components in an ad hoc way, with resulting problems in interoperability and adaptability. More importantly, conceptual modeling, requirements analysis, and software engineering approaches are rarely supported, making it extremely difficult to tailor DL content and behavior to the interests, needs, and preferences of particular communities. In this paper, we address these problems. In particular, we present 5SL, a declarative language for specifying and generating domain-specific digital libraries. 5SL is based on the 5S formal theory for digital libraries and enables high-level specification of DLs in five complementary dimensions: the kinds of multimedia information the DL supports (Stream Model); how that information is structured and organized (Structural Model); different logical and presentational properties and operations of DL components (Spatial Model); the behavior of the DL (Scenario Model); and the different societies of actors and managers of services that act together to carry out the DL behavior (Societal Model). The practical feasibility of the approach is demonstrated by the presentation of a 5SL digital library generator for the MARIAN digital library system.
- The Abridgment and Relaxation Time for a Linear Multi-Scale Model Based on Multiple Site PhosphorylationWang, Shuo; Cao, Yang (PLOS, 2015-08-11)Random effects in cellular systems are an important topic in systems biology and are often simulated with Gillespie’s stochastic simulation algorithm (SSA). Abridgment refers to model reduction that approximates a group of reactions by a smaller group with fewer species and reactions. This paper presents a theoretical analysis, based on comparison of the first exit time, for the abridgment of a linear chain reaction model motivated by systems with multiple phosphorylation sites. The analysis shows that if the relaxation time of the fast subsystem is much smaller than the mean firing time of the slow reactions, the abridgment can be applied with little error. The analysis is further verified with numerical experiments on models of a bistable switch and of oscillations in which the linear chain system plays a critical role.
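For readers unfamiliar with the simulation machinery behind the abridgment analysis, the sketch below is a minimal Gillespie SSA for a linear conversion chain in Python. It is illustrative only: the chain length, rate constants, and initial population are assumptions, not the paper's model.

```python
import math
import random

def ssa_linear_chain(k_fwd, n_sites, t_end, seed=0):
    """Minimal Gillespie SSA for a linear chain S0 -> S1 -> ... -> Sn.

    state[i] counts molecules with i modified sites; k_fwd[i] is the
    (illustrative) rate constant of step i. Not the paper's model.
    """
    rng = random.Random(seed)
    state = [100] + [0] * n_sites          # start: 100 molecules, no sites filled
    t = 0.0
    while t < t_end:
        # Propensity of each forward conversion S_i -> S_{i+1}.
        props = [k_fwd[i] * state[i] for i in range(n_sites)]
        total = sum(props)
        if total == 0.0:                    # everything fully converted
            break
        t += -math.log(1.0 - rng.random()) / total   # exponential waiting time
        r, acc = rng.random() * total, 0.0
        for i, p in enumerate(props):       # choose which reaction fires
            acc += p
            if r <= acc:
                state[i] -= 1
                state[i + 1] += 1
                break
    return t, state

print(ssa_linear_chain(k_fwd=[10.0, 8.0, 12.0], n_sites=3, t_end=1.0))
```

The paper's criterion compares the relaxation time of such a fast chain against the mean firing time of the slow reactions coupled to it; when the former is much smaller, the chain can be abridged with little error.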
- Abstraction Mechanisms in Support of Top-Down and Bottom-Up Task SpecificationRaghu, K. S.; Arthur, James D. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1988)Abstraction is a powerful mechanism for describing objects and relationships from multiple, yet consistent, perspectives. When properly applied to interface design, abstraction mechanisms can provide the interaction flexibility and simplicity so desperately needed and demanded by today's diverse user community. Fundamental to achieving such goals has been the integration of visual programming techniques with a unique blend of abstraction mechanisms to support user interaction and task specification. The research presented in this paper describes crucial abstraction mechanisms employed within the Taskmaster environment to support top-down and bottom-up task specification. In particular, this paper (a) provides an overview of the Taskmaster environment, (b) describes top-down specification based on multi-level, menu-driven interaction and (c) describes bottom-up specification based on cutset identification and pseudo-tool concepts.
- The Abstraction Refinement Model and the Modification-Cost ProblemKeller, Benjamin J.; Nance, Richard E. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1992)A problem common to systems and software engineering is that of estimating the cost of making changes to a system. For system modifications that include changes to the design history of the system this is the "modification-cost" problem. A solution to this problem is important to planning changes in large systems engineering projects. In this paper, a cost model based on the Abstraction Refinement Model (ARM) is proposed as a framework for deriving solutions to the modification-cost problem. The ARM is a characterization of software evolution that is also applicable to general systems. Modifications to systems and their design histories are described using the components of the ARM. The cost model is defined by functions on the ARM components. The derived solution is given by an abstract expression of the cost functions.
- The Academy: A Community of Information Retrieval AgentsFrance, Robert K. (1994-09-06)We commonly picture text as a sequence of words, or alternatively as a sequence of paragraphs, each of which is composed of a sequence of sentences, each of which is itself a sequence of words. It is also worth noting that text is not so much a sequence of words as a sequence of terms, most commonly words, but also names, numbers, code sequences, and a variety of other $#*&)&@^ tokens. Just as we commonly simplify text into a sequence of words, so too it is common in information retrieval to regard documents as single texts. Nothing is less common, though, than a document with only a single part, and that unstructured text. Search and retrieval in such a universe involves new questions: Where does a document begin and end? How can we decide how much to show to a user? When does a query need to be matched by a single node in a hypertext, and when may partial matches in several nodes count?
- Accelerating Atmospheric Modeling Through Emerging Multi-core TechnologiesLinford, John Christian (Virginia Tech, 2010-05-05)The new generations of multi-core chipset architectures achieve unprecedented levels of computational power while respecting physical and economical constraints. The cost of this power is bewildering program complexity. Atmospheric modeling is a grand-challenge problem that could make good use of these architectures if they were more accessible to the average programmer. To that end, software tools and programming methodologies that greatly simplify the acceleration of atmospheric modeling and simulation with emerging multi-core technologies are developed. A general model is developed to simulate atmospheric chemical transport and atmospheric chemical kinetics. The Cell Broadband Engine Architecture (CBEA), General Purpose Graphics Processing Units (GPGPUs), and homogeneous multi-core processors (e.g. Intel Quad-core Xeon) are introduced. These architectures are used in case studies of transport modeling and kinetics modeling and demonstrate per-kernel speedups as high as 40x. A general analysis and code generation tool for chemical kinetics called "KPPA" is developed. KPPA generates highly tuned C, Fortran, or Matlab code that uses every layer of heterogeneous parallelism in the CBEA, GPGPU, and homogeneous multi-core architectures. A scalable method for simulating chemical transport is also developed. The Weather Research and Forecasting Model with Chemistry (WRF-Chem) is accelerated with these methods with good results: real forecasts of air quality are generated for the Eastern United States 65% faster than the state-of-the-art models.
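KPPA itself emits tuned C, Fortran, or Matlab; at its core, the generated kernel evaluates the reaction-rate right-hand side of the kinetics ODE. The toy Python version below shows that computation for an invented three-species mechanism (the reactions and rate constants are illustrative, not taken from WRF-Chem or KPPA):

```python
import numpy as np

def kinetics_rhs(conc, k):
    """Toy 3-species mechanism: A -> B (k0), B + B -> C (k1), C -> A (k2).

    A hand-written RHS like this is the kind of kernel a tool such as KPPA
    generates and tunes automatically for mechanisms with many species.
    """
    a, b, c = conc
    r0 = k[0] * a
    r1 = k[1] * b * b
    r2 = k[2] * c
    return np.array([r2 - r0,            # dA/dt
                     r0 - 2.0 * r1,      # dB/dt
                     r1 - r2])           # dC/dt

# Explicit Euler integration of the toy mechanism (illustrative rates).
conc, k, h = np.array([1.0, 0.0, 0.0]), (1.0, 0.5, 0.2), 0.01
for _ in range(1000):
    conc = conc + h * kinetics_rhs(conc, k)
print(conc)   # concentrations after t = 10
```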
- Accelerating Data-Serial Applications on Data-Parallel GPGPUs: A Systems ApproachAji, Ashwin M.; Feng, Wu-chun (Department of Computer Science, Virginia Polytechnic Institute & State University, 2008)The general-purpose graphics processing unit (GPGPU) continues to make significant strides in high-end computing by delivering unprecedented performance at a commodity price. However, the many-core architecture of the GPGPU currently allows only data-parallel applications to extract the full potential out of the hardware. Applications that require frequent synchronization during their execution do not experience much performance gain out of the GPGPU. This is mainly due to the lack of explicit hardware or software support for inter-thread communication across the entire GPGPU chip. In this paper, we design, implement, and evaluate a highly efficient software barrier that synchronizes all the thread blocks running in an offloaded kernel on the GPGPU without having to transfer execution control back to the host processor. We show that our custom software barrier achieves a three-fold performance improvement over the existing approach, i.e., synchronization via the host processor. To illustrate the aforementioned performance benefit, we parallelize a data-serial application, specifically an optimal sequence-search algorithm called Smith-Waterman (SWat), that requires frequent barrier synchronization across the many cores of the NVIDIA GeForce GTX 280 GPGPU. Our parallelization consists of a suite of optimization techniques: optimal data layout, coalesced memory accesses, and blocked data decomposition. Then, when coupled with our custom software-barrier implementation, we achieve nearly a nine-fold speed-up over the serial implementation of SWat. We also show that our solution delivers 25% faster on-chip execution than the naïve implementation.
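The software barrier itself is GPU-specific, but the workload's synchronization pattern is visible in the recurrence: each anti-diagonal of the Smith-Waterman matrix can be filled in parallel, yet successive anti-diagonals depend on each other, which is why a chip-wide barrier between them pays off. Below is a minimal serial Python version of the scoring recurrence (the scoring parameters and demo inputs are illustrative):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Serial Smith-Waterman: best local-alignment score of strings a, b.

    H[i][j] is the best score of an alignment ending at a[i-1], b[j-1];
    the maximum over the whole matrix is the local-alignment score.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GCATGCU"))  # small demo inputs
```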
- Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing UnitsAnandakrishnan, Ramu; Scogland, Thomas R. W.; Fenley, Andrew T.; Gordon, John; Feng, Wu-chun; Onufriev, Alexey V. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2009)Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. This paper demonstrates how one can take advantage of the graphics processing units (GPUs) available in today’s typical desktop computer, together with a multiscale approximation method, to significantly speed up such computations. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is implemented on an ATI Radeon 4870 GPU in combination with the hierarchical charge partitioning (HCP) multiscale approximation. This implementation delivers a combined 1800-fold speedup for a 476,040-atom viral capsid.
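The flavor of the HCP approximation can be conveyed with a toy Coulomb-only calculation: atoms near an evaluation point contribute individually, while each distant group is replaced by a single point charge. This sketch is not the ALPB/HCP method; there is no solvent screening, and the geometry, cutoff, and grouping are invented for the demo:

```python
import numpy as np

def potential_multiscale(point, atoms, charges, group_ids, cutoff=10.0):
    """Toy multiscale electrostatic potential at one evaluation point.

    Atoms within `cutoff` contribute individually; each distant group is
    replaced by its net charge at a charge-weighted centroid. Units are
    arbitrary (Coulomb constant taken as 1).
    """
    d = np.linalg.norm(atoms - point, axis=1)
    near = d < cutoff
    phi = np.sum(charges[near] / d[near])            # exact near field
    for g in np.unique(group_ids[~near]):
        sel = (~near) & (group_ids == g)
        q = charges[sel].sum()
        if abs(q) < 1e-12:
            continue                                  # neutral group: skip
        centroid = np.average(atoms[sel], axis=0,
                              weights=np.abs(charges[sel]))
        phi += q / np.linalg.norm(centroid - point)  # one charge per far group
    return phi

# Tiny demo with random atoms in a few groups (illustrative data).
rng = np.random.default_rng(0)
atoms = rng.uniform(-20, 20, size=(200, 3))
charges = rng.choice([-1.0, 1.0], size=200)
groups = rng.integers(0, 8, size=200)
print(potential_multiscale(np.array([30.0, 0.0, 0.0]), atoms, charges, groups))
```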
- Accelerating Workloads on FPGAs via OpenCL: A Case Study with OpenDwarfsVerma, Anshuman; Helal, Ahmed E.; Krommydas, Konstantinos; Feng, Wu-chun (Department of Computer Science, Virginia Polytechnic Institute & State University, 2016-05-13)For decades, the streaming architecture of FPGAs has delivered accelerated performance across many application domains, such as option-pricing solvers in finance, computational fluid dynamics in oil and gas, and packet processing in network routers and firewalls. However, this performance comes at the expense of programmability. FPGA developers use hardware design languages (HDLs) to implement the application data and control path and to design hardware modules for computational pipelines, memory management, synchronization, and communication. This process requires extensive knowledge of logic design, design automation tools, and low-level details of FPGA architecture, and it consumes significant development time and effort. To address this lack of programmability, OpenCL provides an easy-to-use and portable programming model for CPUs, GPUs, APUs, and now, FPGAs. Although OpenCL significantly improves programmability, a kernel optimized for a GPU may lack performance portability on an FPGA. To improve the performance of OpenCL kernels on FPGAs, we identify general techniques for optimizing OpenCL kernels under device-specific hardware constraints. We then apply these optimization techniques to the OpenDwarfs benchmark suite, which has diverse parallelism profiles and memory access patterns, in order to evaluate their effectiveness in terms of performance and resource utilization. Finally, we present the performance of the structured-grids and N-body dwarf-based benchmarks under various optimizations, along with their potential refactoring. We find that careful kernel design for FPGAs can yield a highly efficient pipeline, achieving 91% of theoretical throughput for the structured-grids dwarf. Index Terms—OpenDwarfs; FPGA; OpenCL; GPU; MIC; Accelerators; Performance Portability
- Access to Autism Spectrum Disorder Services for Rural Appalachian CitizensScarpa, Angela; Jensen, Laura S.; Gracanin, Denis; Ramey, Sharon L.; Dahiya, Angela V.; Ingram, L. Maria; Albright, Jordan; Gatto, Alyssa J.; Scott, Jen Pollard; Ruble, Lisa (2020-01)Background: Low-resource rural communities face significant challenges regarding availability and adequacy of evidence-based services. Purposes: With respect to accessing evidence-based services for Autism Spectrum Disorder (ASD), this brief report summarizes needs of rural citizens in the South-Central Appalachian region, an area notable for persistent health disparities. Methods: A mixed-methods approach was used to collect quantitative and qualitative data during focus groups with 33 service providers and 15 caregivers of children with ASD in rural southwest Virginia. Results: Results supported the barriers of availability and affordability of ASD services in this region, especially relating to the need for more ASD-trained providers, better coordination and navigation of services, and addition of programs to assist with family financial and emotional stressors. Results also suggested cultural attitudes related to autonomy and trust towards outside professionals that may prevent families from engaging in treatment. Implications: Relevant policy recommendations are discussed related to provider incentives, insurance coverage, and telehealth. Integration of autism services into already existing systems and multicultural sensitivity of providers are also implicated.
- Accounts from a Claims Reuse Experience: Design of an Airline Fares TrackerMontabert, Cyril; Tarpley, Anderson Ray III (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007-12-01)Previous research efforts have led to the establishment of a repository of claims as reusable knowledge entities. Through the analysis, design, and prototyping of a notification system aimed at monitoring airfares across time, airlines, and location, this paper presents the various work-products resulting from a scenario-based design approach coupled with the Claims Reuse Library to support reuse-centric claims analysis. Finally, we share our experience and findings using the Claims Reuse Library as a core to knowledge transfer.
- Achieving Asynchronous Speedup While Preserving Synchronous Semantics: An Implementation of Instructional Footprinting in LindaLandry, Kenneth D.; Arthur, James D. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1993)Linda is a coordination language designed to support process creation and inter-process communication within conventional computational languages. Although the Linda paradigm touts architectural and language independence, it often suffers performance penalties, particularly on local area network platforms. Instructional Footprinting is an optimization technique with the primary goal of enhancing the execution speed of Linda programs. The two main aspects of Instructional Footprinting are instructional decomposition and code motion. This paper addresses the semantic issues encountered when the Linda primitives, IN and RD, are decomposed and moved past other Linda operations. Formal semantics are given as well as results showing significant speedup (as high as 64%) when Instructional Footprinting is used.
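Linda's primitives are easiest to see against a toy tuple space. The single-process Python analogue below (not the paper's Linda system) shows OUT depositing a tuple, RD reading a match without removing it, and IN reading and removing it; splitting IN into a read phase and a removal phase that can be moved past other operations is the kind of transformation Instructional Footprinting reasons about:

```python
import threading

class TupleSpace:
    """Toy Linda-style tuple space. None in a template is a wildcard."""
    def __init__(self):
        self._tuples = []
        self._cv = threading.Condition()

    def out(self, tup):                   # Linda OUT: deposit a tuple
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()

    def _match(self, template):
        for tup in self._tuples:
            if len(tup) == len(template) and all(
                    t is None or t == v for t, v in zip(template, tup)):
                return tup
        return None

    def rd(self, template):               # Linda RD: blocking, non-destructive
        with self._cv:
            while (tup := self._match(template)) is None:
                self._cv.wait()
            return tup

    def in_(self, template):              # Linda IN: blocking read + remove
        with self._cv:
            while (tup := self._match(template)) is None:
                self._cv.wait()
            self._tuples.remove(tup)
            return tup

ts = TupleSpace()
ts.out(("count", 1))
print(ts.rd(("count", None)))    # ('count', 1) stays in the space
print(ts.in_(("count", None)))   # ('count', 1) is removed
```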
- Achieving Very High Order for Implicit Explicit Time Stepping: Extrapolation MethodsConstantinescu, Emil M.; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2008-07-01)In this paper we construct extrapolated implicit-explicit time stepping methods that make it possible to efficiently solve problems with both stiff and non-stiff components. The proposed methods can provide very high order discretizations of ODEs, index-1 DAEs, and PDEs in the method-of-lines framework. These methods are simple to construct and easy to implement and parallelize. We establish the existence of perturbed asymptotic expansions of global errors, explain the convergence orders of these methods, and explore their linear stability properties. Numerical results with stiff ODEs, DAEs, and PDEs illustrate the theoretical findings and the potential of these methods to solve multiphysics multiscale problems.
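A minimal instance of the construction, under simplifying assumptions: a scalar problem with a linear stiff term, first-order IMEX Euler as the base method, and a single Richardson-extrapolation level (the paper builds much higher orders the same way):

```python
import math

def imex_euler(y, h, a, g):
    """One IMEX Euler step for y' = a*y + g(y): the stiff linear term a*y
    is treated implicitly, the non-stiff g(y) explicitly."""
    return (y + h * g(y)) / (1.0 - h * a)

def imex_extrapolated(y, h, a, g):
    """Richardson extrapolation of the first-order base step: combining one
    h-step with two h/2-steps cancels the leading error term (order two)."""
    t1 = imex_euler(y, h, a, g)
    t2 = imex_euler(imex_euler(y, 0.5 * h, a, g), 0.5 * h, a, g)
    return 2.0 * t2 - t1

# Illustrative stiff test problem: y' = -1000*y + 1, y(0) = 1.
a, g, h, steps = -1000.0, (lambda y: 1.0), 0.01, 5
y_h = y_x = 1.0
for _ in range(steps):
    y_h = imex_euler(y_h, h, a, g)
    y_x = imex_extrapolated(y_x, h, a, g)
exact = -1.0 / a + (1.0 + 1.0 / a) * math.exp(a * h * steps)
print(abs(y_h - exact), abs(y_x - exact))   # extrapolation shrinks the error
```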
- Acoustic differences between healthy and depressed people: a cross-situation studyWang, Jingying; Zhang, Lei; Liu, Tianli; Pan, Wei; Hu, Bin; Zhu, Tingshao (2019-10-15)Background Abnormalities in vocal expression during a depressed episode have frequently been reported in people with depression, but less is known about whether these abnormalities exist only in specific situations. In addition, previous studies did not control for the impact of irrelevant demographic variables on voice. Therefore, this study compares vocal differences between depressed and healthy people across various situations, treating irrelevant variables as covariates. Methods To examine whether the vocal abnormalities in people with depression exist only in specific situations, this study compared vocal differences between healthy people and patients with unipolar depression in 12 situations (speech scenarios). Positive, negative, and neutral voice expressions of depressed and healthy people were compared across four tasks. Multivariate analysis of covariance (MANCOVA) was used to evaluate the main effects of group (depressed vs. healthy) on acoustic features. The significance of acoustic features was evaluated by both statistical significance and magnitude of effect size. Results The multivariate analysis of covariance showed significant differences between the two groups in all 12 speech scenarios. Although the significant acoustic features were not the same across scenarios, three acoustic features (loudness, MFCC5, and MFCC7) differed consistently between people with and without depression, with large effect magnitudes. Conclusions Vocal differences between depressed and healthy people exist across all 12 scenarios. Acoustic features including loudness, MFCC5, and MFCC7 have the potential to serve as indicators for identifying depression via voice analysis. These findings support the view that depressed people’s voices exhibit both situation-specific and cross-situational patterns of acoustic features.
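The features named in the conclusions are standard speech descriptors. As an illustration of how one might extract them, the sketch below uses the librosa library, with frame-wise RMS energy standing in for perceptual loudness and the file path as a placeholder; this is not the study's own feature-extraction pipeline:

```python
import librosa
import numpy as np

def voice_features(path):
    """Per-utterance loudness proxy and MFCC means from an audio file."""
    y, sr = librosa.load(path, sr=None)                  # native sampling rate
    rms = librosa.feature.rms(y=y)                       # frame-wise RMS energy
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # 13 MFCCs per frame
    return {
        "rms_mean": float(np.mean(rms)),
        "mfcc5_mean": float(np.mean(mfcc[4])),           # MFCC5 (1-based)
        "mfcc7_mean": float(np.mean(mfcc[6])),           # MFCC7
    }

print(voice_features("utterance.wav"))                   # placeholder path
```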
- Acquisition of an Interactive Computing System for Academic Use: A Case StudyHeafner, John F.; Nance, Richard E. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1981)The acquisition of a large-scale computer system is a complex and important task that universities face periodically. The large capital expenditures and the always expanding need for computing services ensure its increasing importance. Virginia Tech recently made such an acquisition. This paper describes the evaluation procedures leading to the acquisition, beginning with needs assessment and concluding with system selection. The acquisition of a computing system, in this instance a system primarily for interactive instructional support of undergraduates, is a decision that is subject to a variety of influences: technical, managerial, political, and personal. This paper describes the authors' attempts (as then Associate Director of the Computing Center and then Head of the Computer Science Department, respectively) to deal with these influences through the use of quantitative techniques, behavioral analysis, and common sense.
- ACT++ 2.0: A Class Library for Concurrent Programming in C++ Using ActorsKafura, Dennis G.; Mukherji, Manibrata; Lavender, R. Gregory (Department of Computer Science, Virginia Polytechnic Institute & State University, 1992)ACT++ 2.0 is the most recent version of a class library for concurrent programming in C++. The underlying model of concurrent computation is the Actor model. Programs in ACT++ consist of a collection of active objects called actors. Actors execute concurrently and cooperate by sending request and reply messages. An agent, termed the behavior of an actor, is responsible for processing a single request message and for specifying a replacement behavior which processes the next available request message. One of the salient features of ACT++ is its ability to handle the Inheritance Anomaly---the interference between the inheritance mechanism of object-oriented languages and the specification of synchronization constraints in the methods of a class---using the notion of behavior sets. ACT++ has been implemented on the Sequent Symmetry multiprocessor using the PRESTO threads package.
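The replacement-behavior idea is easy to sketch outside C++. Below is a minimal thread-based Python analogue of an ACT++ actor (not the library itself): each behavior processes one request and returns the behavior that handles the next one, which is how state-dependent synchronization is expressed without guards scattered through methods:

```python
import queue
import threading
import time

class Actor:
    """Toy actor: one mailbox, one behavior per message, with replacement."""
    def __init__(self, behavior):
        self._mailbox = queue.Queue()
        self._behavior = behavior
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self._mailbox.put(msg)

    def _run(self):
        while True:
            # Each behavior handles one request, then names its replacement.
            self._behavior = self._behavior(self._mailbox.get())

def empty_buffer(msg):      # in this state only 'put' is expected; a real
    kind, value = msg       # ACT++ behavior set would defer other requests
    print("stored", value)
    return full_buffer

def full_buffer(msg):       # in this state only 'get' is expected
    kind, value = msg
    print("released")
    return empty_buffer

buf = Actor(empty_buffer)
buf.send(("put", 42))
buf.send(("get", None))
time.sleep(0.2)             # let the daemon thread drain the mailbox
```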
- ACT++: Building a Concurrent C++ with ActorsKafura, Dennis G.; Lee, Keung Hae (Department of Computer Science, Virginia Polytechnic Institute & State University, 1989)ACT++ (Actors in C++) is a concurrent object-oriented language being designed for distributed real-time applications. The language is a hybrid of the actor kernel language and the object-oriented language C++. The concurrency abstraction of ACT++ is derived from the actor model as defined by Agha. This paper discusses our experience in building a concurrent extension of C++ with the concurrency abstraction of the actor model. The current design of ACT++ and its implementation are described. Some problems found in Agha's actor model are discussed in the context of distributed real-time applications. The use of ACT++ revealed the difficulty of combining the actor model of concurrency with class inheritance in an object-oriented language.
- An Active Set Algorithm for Tracing Parametrized OptimaRakowska, Joanna; Haftka, Raphael T.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1990)Optimization problems often depend on parameters that define constraints or objective functions. It is often necessary to know the effect of a change in a parameter on the optimum solution. An algorithm is presented here for tracking paths of optimal solutions of inequality-constrained nonlinear programming problems as a function of a parameter. The proposed algorithm employs homotopy zero-curve tracing techniques to track segments where the set of active constraints is unchanged. The transition between segments is handled by considering all possible sets of active constraints and eliminating nonoptimal ones based on the signs of the Lagrange multipliers and the derivatives of the optimal solutions with respect to the parameter.
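Not the paper's homotopy algorithm, but a minimal illustration of the task it solves: sweep a parameter p, re-solve min (x - p)^2 subject to x <= 1 with a warm start from the previous optimum, and watch the active set change (the constraint becomes binding at p = 1). The objective and constraint are illustrative, and scipy's SLSQP stands in for the paper's zero-curve tracing:

```python
import numpy as np
from scipy.optimize import minimize

def track_optima(params):
    """Track x*(p) = argmin (x - p)^2  s.t.  x <= 1, warm-starting each solve."""
    x0, path = np.array([0.0]), []
    cons = [{"type": "ineq", "fun": lambda x: 1.0 - x[0]}]   # 1 - x >= 0
    for p in params:
        res = minimize(lambda x: (x[0] - p) ** 2, x0, method="SLSQP",
                       constraints=cons)
        active = 1.0 - res.x[0] < 1e-6        # is the constraint binding?
        path.append((p, float(res.x[0]), active))
        x0 = res.x                             # warm start the next solve
    return path

for p, x, active in track_optima(np.linspace(0.0, 2.0, 9)):
    print(f"p={p:.2f}  x*={x:.3f}  active={active}")
```

The active-set flip at p = 1 is exactly the kind of segment transition the paper's algorithm detects from Lagrange-multiplier signs instead of re-solving from scratch.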
- Adapting Protocols to Massively Interconnected SystemsKafura, Dennis G.; Abrams, Marc (Department of Computer Science, Virginia Polytechnic Institute & State University, 1991-05-01)This paper describes ongoing research focused on two critical problems posed by the interconnection of a massive number of computer systems. The interconnection may be achieved through wide area or local area networks. The two problems considered in this research are as follows: (1) performance analysis of the protocols used in an internetwork connecting thousands to millions of nodes, and (2) application development in a massively distributed, heterogeneous environment where components implemented in different programming languages must be integrated and/or reused. The performance analysis problem is addressed by employing large-scale parallel simulation, extended finite state machines, and object-oriented simulation techniques. The approach to solving the application development problem is based on an environment which exploits the synergism between object-oriented programming and layered communication protocols (specifically, OSI).
- Adaptive Key Protection in Complex Cryptosystems with AttributesWang, Zilong; Yao, Danfeng (Daphne); Feng, Rongquan (Department of Computer Science, Virginia Polytechnic Institute & State University, 2012)In the attribute-based encryption (ABE) model, attributes (as opposed to identities) are used to encrypt messages, and all the receivers with qualifying attributes can decrypt the ciphertext. However, compromised attribute keys may affect the communications of many users who share the same access control policies. We present the notion of forward-secure attribute-based encryption (fs-ABE) and give a concrete construction based on a bilinear map and the decisional bilinear Diffie-Hellman assumption. Forward security means that a private key compromised by an adversary at time t does not break the confidentiality of communication that took place prior to t. We describe how to achieve both forward security and encryption with attributes, and formally prove security against adaptive chosen-ciphertext adversaries. Our scheme is non-trivial, and the key size grows only polynomially with log N, where N is the number of time periods. We further generalize our scheme to support an individualized key-updating schedule for each attribute, which provides a finer granularity for key management. Our insights on the properties that an ABE scheme must possess in order to be forward-secure compatible are useful beyond the specific fs-ABE construction given here. We close by raising an open question on the escrow problem of the master key in ABE schemes.