Browsing by Author "Ryder, Barbara G."
- Anomaly Detection Through System and Program Behavior Modeling. Xu, Kui (Virginia Tech, 2014-12-15). Various vulnerabilities in software applications become easy targets for attackers. The trend constantly observed in the evolution of advanced modern exploits is their growing sophistication in stealthy attacks. Code-reuse attacks such as return-oriented programming allow intruders to execute malicious instruction sequences on a victim machine without injecting external code. Successful exploitation leads to hijacked applications or the download of malicious software (drive-by download attack), which usually happens without the user's notice or permission. In this dissertation, we address the problem of host-based system anomaly detection, specifically by predicting expected behaviors of programs and detecting run-time deviations and anomalies. We first introduce an approach for detecting the drive-by download attack, one of the major vectors for malware infection. Our tool enforces the dependencies between user actions and system events, such as file-system access and process execution. It can be used to provide real-time protection of a personal computer, as well as to diagnose and evaluate untrusted websites for forensic purposes. We perform extensive experimental evaluation, including a user study with 21 participants, thousands of legitimate websites (for testing false alarms), 84 malicious websites in the wild, as well as lab-reproduced exploits. Our solution demonstrates a usable host-based framework for controlling and enforcing access to system resources. Secondly, we present a new anomaly-based detection technique that probabilistically models and learns a program's control flows for high-precision behavioral reasoning and monitoring. Existing solutions suffer from either incomplete behavioral modeling (for dynamic models) or overestimating the likelihood of call occurrences (for static models). We introduce a new probabilistic anomaly detection method for modeling program behaviors. Its uniqueness is the ability to quantify the static control flow in programs and to integrate the control-flow information into probabilistic machine learning algorithms. The advantage of our technique is significantly improved detection accuracy. We observed 11- to 28-fold improvements in detection accuracy compared to the state-of-the-art HMM-based anomaly models. We further integrate context information into our detection model, which achieves both strong flow-sensitivity and context-sensitivity. Our context-sensitive approach gives on average over a 10-fold improvement for system call monitoring, and three orders of magnitude for library call monitoring, over existing regular HMM methods. Evaluated with a large number of program traces and real-world exploits, our findings confirm that probabilistic modeling of program dependences provides a significant source of behavior information for building high-precision models for real-time system monitoring. Abnormal traces (obtained by reproducing exploits and synthesizing abnormal traces) can be well distinguished from normal traces by our model.
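As a rough illustration of likelihood-based trace scoring (not the dissertation's actual model, which integrates static control-flow probabilities into HMM-style learning), the following minimal sketch trains a first-order Markov model over normal system-call traces and flags traces with low average log-likelihood. The call names and threshold are hypothetical.

```python
from collections import defaultdict
import math

def train_markov_model(normal_traces, smoothing=1e-6):
    """Estimate first-order transition probabilities between system calls
    from a corpus of normal traces (each trace is a list of call names)."""
    counts = defaultdict(lambda: defaultdict(float))
    for trace in normal_traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1.0
    model = {}
    for a, succ in counts.items():
        total = sum(succ.values())
        model[a] = {b: c / total for b, c in succ.items()}
    return model, smoothing

def trace_log_likelihood(model_and_smoothing, trace):
    """Average per-transition log-likelihood; lower values are more anomalous."""
    model, smoothing = model_and_smoothing
    ll, n = 0.0, 0
    for a, b in zip(trace, trace[1:]):
        p = model.get(a, {}).get(b, smoothing)
        ll += math.log(p)
        n += 1
    return ll / max(n, 1)

# Example: a trace with unseen transitions scores far below traces like the training data.
normal = [["open", "read", "write", "close"], ["open", "read", "close"]]
m = train_markov_model(normal)
print(trace_log_likelihood(m, ["open", "write", "exec", "exec"]))  # very low score -> anomaly
```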
- DroidCat: Unified Dynamic Detection of Android Malware. Cai, Haipeng; Meng, Na; Ryder, Barbara G.; Yao, Danfeng (Daphne) (Department of Computer Science, Virginia Polytechnic Institute & State University, 2016). Various dynamic approaches have been developed to detect or categorize Android malware. These approaches execute software, collect call traces, and then detect abnormal system calls or sensitive API usage. Consequently, attackers can evade them by intentionally obfuscating the calls under focus. Additionally, existing approaches treat detection and categorization of malware as separate tasks, although intuitively both tasks are related and could be performed simultaneously. This paper presents DroidCat, the first unified dynamic malware detection approach, which not only detects malware but also pinpoints the malware family. DroidCat leverages supervised machine learning to train a multi-class classifier using diverse behavioral profiles of benign apps and different kinds of malware. Compared with prior heuristics-based machine-learning approaches, the feature set used in DroidCat is chosen purely based on a systematic dynamic characterization study of benign and malicious apps. All differentiating features that show behavioral differences between benign and malicious apps are included. In this way, DroidCat is robust to existing evasion attacks. We evaluated DroidCat using leave-one-out cross-validation with 136 benign apps and 135 malicious apps. The evaluation shows that DroidCat provides an effective and scalable unified malware detection solution with 81% precision, 82% recall, and 92% accuracy.
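A minimal sketch of the unified detection-plus-categorization setup described above: a single multi-class classifier evaluated with leave-one-out cross-validation. The feature values, family labels, and choice of random forest are hypothetical placeholders, not DroidCat's actual feature set or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Each row is one app's behavioral profile; labels are "benign" or a malware family name.
X = np.random.rand(20, 5)                       # e.g., ICC ratios, callback ratios, ... (hypothetical)
y = np.array(["benign"] * 10 + ["familyA"] * 5 + ["familyB"] * 5)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())   # one held-out app per fold
print("accuracy:", (pred == y).mean())
```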
- Effective Fusion and Separation of Distribution, Fault-Tolerance, and Energy-Efficiency Concerns. Kwon, Young Woo (Virginia Tech, 2014-07-03). As software applications become increasingly distributed and mobile, their design and implementation are characterized by distributed software architectures, the possibility of faults, and the need for energy awareness. Thus, software developers should be able to simultaneously reason about and handle the concerns of distribution, fault-tolerance, and energy-efficiency. Being closely intertwined, these concerns can introduce significant complexity into the design and implementation of modern software. In other words, to develop reliable and energy-efficient applications, software developers must understand how distribution, fault-tolerance, and energy-efficiency interplay with each other and how to implement these concerns while keeping the complexity in check. This dissertation addresses five technical issues that stand in the way of engineering reliable and energy-efficient software: (1) how can developers select and parameterize middleware to achieve the requisite levels of performance, reliability, and energy-efficiency? (2) how can one streamline the process of implementing and reusing fault-tolerance functionality in distributed applications? (3) can automated techniques be developed to help transition centralized applications to using cloud-based services efficiently and reliably? (4) how can one leverage cloud-based resources to improve the energy-efficiency of mobile applications? (5) how can middleware be adapted to improve the energy-efficiency of distributed mobile applications operated over heterogeneous mobile networks? To address these issues, this research studies the concerns of distribution, fault-tolerance, and energy-efficiency as well as their interaction. It also develops novel approaches, techniques, and tools that effectively fuse and separate these concerns as required by particular software development scenarios. The specific innovations include (1) a systematic assessment of the performance, conciseness, complexity, reliability, and energy consumption of middleware mechanisms for accessing remote functionality, (2) a declarative approach to hardening distributed applications with resiliency against partial failure, (3) cloud refactoring, a set of automated program transformations for transitioning to using cloud-based services efficiently and reliably, (4) a cloud offloading approach that improves the energy-efficiency of mobile applications without compromising their reliability, and (5) a middleware mechanism that optimizes energy consumption by adapting execution patterns dynamically in response to fluctuations in network conditions.
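To illustrate the kind of trade-off behind cloud offloading (innovation 4), here is a hedged sketch of a textbook energy-aware offloading decision: offload a task when transferring its data is estimated to cost less energy than computing it locally, and fall back to local execution when the network is unavailable. This is not the dissertation's middleware; all constants are hypothetical.

```python
def should_offload(cycles, data_bytes, local_joules_per_cycle,
                   radio_joules_per_byte, network_up):
    """Return True if offloading is estimated to save energy and is currently possible."""
    if not network_up:
        return False                                   # reliability: run locally if offline
    e_local = cycles * local_joules_per_cycle          # estimated energy of local compute
    e_offload = data_bytes * radio_joules_per_byte     # estimated energy of data transfer
    return e_offload < e_local

print(should_offload(5e9, 2e5, 1e-9, 5e-6, True))      # 1 J to transfer vs 5 J locally -> True
```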
- An Evaluation of Change-Based Coverage Criteria. Fisher II, Marc; Wloka, Jan; Tip, Frank; Ryder, Barbara G.; Luchansky, Alexander (Department of Computer Science, Virginia Polytechnic Institute & State University, 2011-03-01). Various coverage criteria are commonly used to assess the quality of test suites, but achieving full coverage according to these criteria is often impossible or impractical. Our research starts from the popular assumption that a disproportionate number of faults is likely to reside in recently changed code. Based on this assumption, we propose several change-based coverage criteria that reflect to what extent changes with respect to a previous program version are exercised by a test suite. In a set of experiments on programs from the SIR repository, we found change-based criteria to reveal faults better than traditional criteria, and to enable the construction of much smaller test suites with similar fault detection effectiveness. We also report on a case study that shows that achieving (near) 100% coverage according to a change-based criterion is both feasible and useful.
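A minimal sketch of what a change-based coverage criterion measures: the fraction of lines changed with respect to the previous version that the test suite exercises. The inputs are hypothetical; a real tool would derive them from a diff and a coverage report.

```python
def change_coverage(changed_lines, covered_lines):
    """changed_lines, covered_lines: sets of (file, line_number) pairs."""
    if not changed_lines:
        return 1.0                       # nothing changed, criterion trivially satisfied
    return len(changed_lines & covered_lines) / len(changed_lines)

changed = {("Account.java", 42), ("Account.java", 43), ("Util.java", 7)}
covered = {("Account.java", 42), ("Util.java", 7), ("Util.java", 8)}
print(change_coverage(changed, covered))   # 0.67: two of three changed lines are exercised
```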
- An Experimental Study of the Performance, Energy, and Programming Effort Trade-offs of Android Persistence Frameworks. Pu, Jing (Virginia Tech, 2016-08-16). One of the fundamental building blocks of a mobile application is the ability to persist program data between different invocations. Referred to as persistence, this functionality is commonly implemented by means of persistence frameworks. When choosing a particular framework, Android, the most popular mobile platform, offers a wide variety of options to developers. Unfortunately, the energy, performance, and programming effort trade-offs of these frameworks are poorly understood, leaving Android developers in the dark when trying to select the most appropriate option for their applications. To address this problem, this thesis reports the results of the first systematic study of six Android persistence frameworks (i.e., ActiveAndroid, greenDAO, OrmLite, Sugar ORM, Android SQLite, and Realm Java) in their application to and performance with popular benchmarks, such as DaCapo. Having measured and analyzed the energy, performance, and programming effort trade-offs for each framework, we present a set of practical guidelines for developers choosing between Android persistence frameworks. Our findings can also help framework developers optimize their products to meet the desired design objectives.
- Exploring the Impact of Context Sensitivity on Blended Analysis. Fisher II, Marc; Dufour, Bruno; Basu, Shrutarshi; Ryder, Barbara G. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2010-04-01). This paper explores the use of context sensitivity both intra- and interprocedurally in a blended (static/dynamic) program analysis for performance diagnosis of framework-intensive Web-based applications. Empirical experiments with an existing blended analysis algorithm [9] compare combinations of (i) use of a context-insensitive call graph versus a context-sensitive calling context tree, and (ii) use (or not) of context-sensitive code pruning within methods. These experiments demonstrate achievable gains in scalability and performance in terms of several metrics designed for blended escape analysis, and report results in terms of object instances created, allowing more realistic conclusions from the data than were possible previously.
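To make the distinction in factor (i) concrete, the following sketch builds both dynamic calling structures from the same trace: a context-insensitive call graph (caller-to-callee edges) and a calling context tree (one node per distinct call chain). The trace format and function names are hypothetical.

```python
from collections import defaultdict

def build_structures(trace):
    """trace: list of ('enter', fn) / ('exit', fn) events from a dynamic run."""
    call_graph = defaultdict(set)   # caller -> set of callees (context-insensitive)
    cct = {}                        # calling-context tuple -> set of callees (context-sensitive)
    stack = ["<root>"]
    for event, fn in trace:
        if event == "enter":
            call_graph[stack[-1]].add(fn)
            cct.setdefault(tuple(stack), set()).add(fn)
            stack.append(fn)
        else:
            stack.pop()
    return call_graph, cct

trace = [("enter", "a"), ("enter", "c"), ("exit", "c"), ("exit", "a"),
         ("enter", "b"), ("enter", "c"), ("exit", "c"), ("exit", "b")]
cg, cct = build_structures(trace)
print(cg["a"], cg["b"], len(cct))   # the CCT keeps the a->c and b->c contexts distinct
```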
- Identity-sensitive Points-to Analysis for the Dynamic Behavior of JavaScript Objects. Wei, Shiyi; Ryder, Barbara G. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2013-12-13). JavaScript object behavior is dynamic and adheres to prototype-based inheritance. The behavior of a JavaScript object can be changed by adding and removing properties at runtime. Points-to analysis calculates the set of values a reference property or variable may have during execution. We present a novel, partially flow-sensitive, context-sensitive points-to algorithm that accurately models dynamic changes in object behavior. The algorithm represents objects by their creation sites and local property names; it tracks property updates via a new control-flow graph representation. The calling context comprises the receiver object, its local properties, and its prototype chain. We compare the new points-to algorithm with an existing JavaScript points-to algorithm in terms of performance and accuracy on a client application. Experimental results on real JavaScript websites show that the new points-to analysis significantly improves precision, uniquely resolving on average 11% more property lookup statements.
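As a purely illustrative sketch (not the paper's algorithm), the idea of naming abstract objects by creation site plus locally added property names can be pictured as follows: a runtime property update produces a new abstract identity instead of being merged into the old one. All names are hypothetical.

```python
class AbstractObject:
    """Abstract object named by (creation site, set of locally added property names)."""
    def __init__(self, creation_site, local_props=frozenset()):
        self.creation_site = creation_site
        self.local_props = local_props

    def with_property(self, name):
        # A property write yields a refined abstract object rather than mutating this one.
        return AbstractObject(self.creation_site, self.local_props | {name})

    def identity(self):
        return (self.creation_site, self.local_props)

o1 = AbstractObject("foo.js:12")          # e.g., var x = {} created at foo.js line 12
o2 = o1.with_property("onload")           # x.onload = ... changes the abstract identity
print(o1.identity() != o2.identity())     # True: the update is tracked, not merged away
```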
- Learning-based Cyber Security Analysis and Binary Customization for Security. Tian, Ke (Virginia Tech, 2018-09-13). This thesis presents machine-learning based malware detection and post-detection rewriting techniques for mobile and web security problems. In mobile malware detection, we focus on detecting repackaged mobile malware. We design and demonstrate an Android repackaged-malware detection technique based on code heterogeneity analysis. In post-detection rewriting, we aim to enhance app security with bytecode rewriting. We describe how flow- and sink-based risk prioritization improves the rewriting scalability. We build an interface prototype with natural language processing in order to customize apps according to natural language inputs. In web malware detection for iframe injection, we present a tag-level detection system that aims to detect the injection of malicious iframes for both online and offline cases. Our system detects malicious iframes by combining selective multi-execution and machine learning algorithms. We design multiple contextual features, considering iframe style, destination, and context properties.
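For the iframe-injection part, here is a hedged sketch of tag-level feature extraction in the spirit of the style/destination/context categories named above. The specific attributes, thresholds, and downstream classifier are hypothetical, not the thesis's feature set.

```python
from urllib.parse import urlparse

def iframe_features(attrs, page_domain):
    """attrs: dict of an <iframe> tag's attributes; page_domain: domain of the hosting page."""
    width = int(attrs.get("width", 300) or 0)
    height = int(attrs.get("height", 150) or 0)
    dest = urlparse(attrs.get("src", ""))
    return {
        # style: invisible or near-invisible frames are a common injection pattern
        "tiny_or_hidden": int(width <= 1 or height <= 1 or
                              "display:none" in attrs.get("style", "").replace(" ", "")),
        # destination: frame points off the hosting site
        "cross_domain": int(bool(dest.netloc) and dest.netloc != page_domain),
        "ip_literal_dest": int(dest.netloc.replace(".", "").isdigit()),
    }

print(iframe_features({"src": "http://198.51.100.7/x", "width": "1", "height": "1"},
                      "example.com"))
```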
- Practical Analysis of the Dynamic Characteristics of JavaScript. Wei, Shiyi (Virginia Tech, 2015-10-05). JavaScript is a dynamic object-oriented programming language designed with flexible programming mechanisms. JavaScript is widely used in developing sophisticated software systems, especially web applications. Despite its popularity, there is a lack of software tools that support JavaScript for software engineering clients. Dataflow analysis approximates software behavior by analyzing the program code; it is the foundation for many software tools. However, several unique features of JavaScript render existing dataflow analysis techniques ineffective. Reflective constructs, which generate code at runtime, make it difficult to acquire the complete program at compile time. Dynamic typing, resulting in changes in object behavior, poses a challenge for building accurate models of objects. A variadic function can exhibit different functionalities, because its behavior may depend on arguments whose values are known only at runtime. Object constructors may be polymorphic, such that objects created by the same constructor may contain different properties. In addition to object-oriented programming, JavaScript supports functional and procedural programming paradigms; this feature renders dataflow analysis techniques ineffective when a JavaScript application uses multiple paradigms. Dataflow analysis needs to handle these challenges. In this work, we present an analysis framework and several dataflow analyses that can handle dynamic features in JavaScript. The first contribution of our work is the design and instantiation of the JavaScript Blended Analysis Framework (JSBAF). This general-purpose and flexible framework judiciously combines dynamic and static analyses. We have implemented an instance of JSBAF, blended taint analysis, to demonstrate the practicality of the framework. Our second contribution is a novel context-sensitive points-to analysis for JavaScript that accurately models object property changes. This algorithm uses a new program representation that enables partially flow-sensitive analysis, a more accurate object representation, and an expanded points-to graph. We have defined parameterized state sensitivity (i.e., k-state sensitivity) and evaluated the effectiveness of 1-state-sensitive analysis as the static phase of JSBAF. The third contribution of our work is an adaptive context-sensitive analysis that selectively applies context-sensitive analysis at the function level. This two-staged adaptive analysis extracts function characteristics from an inexpensive points-to analysis and uses learning-based heuristics to decide on an appropriate context-sensitive analysis per function. The experimental results show that the adaptive analysis is more precise than any single context-sensitive analysis for several programs in the benchmarks, especially the multi-paradigm programs.
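The adaptive, per-function selection described in the third contribution can be pictured with the following minimal sketch: cheap per-function characteristics drive a rule that picks a context-sensitivity flavor for each function. The feature names and rules are hypothetical stand-ins for the dissertation's learning-based heuristics.

```python
def choose_context_sensitivity(fn_stats):
    """fn_stats: dict of counts gathered by an inexpensive baseline points-to pass."""
    if fn_stats["num_call_sites"] == 0:
        return "context-insensitive"
    if fn_stats["receiver_types"] > 3 or fn_stats["is_constructor"]:
        return "object-sensitive"          # many distinct receivers: split by receiver object
    if fn_stats["num_params"] >= 2 and fn_stats["higher_order_args"]:
        return "call-site-sensitive"       # behavior driven by argument values
    return "context-insensitive"

print(choose_context_sensitivity(
    {"num_call_sites": 14, "receiver_types": 5, "is_constructor": False,
     "num_params": 1, "higher_order_args": False}))      # -> object-sensitive
```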
- A Practical Blended Analysis for Dynamic Features in JavaScript. Wei, Shiyi; Ryder, Barbara G. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2012). JavaScript is widely used in Web applications; however, its dynamism renders static analysis ineffective. Our JavaScript Blended Analysis Framework is designed to handle JavaScript dynamic features. It performs a flexible combined static/dynamic analysis. The blended analysis focuses static analysis on a dynamic calling structure collected at runtime in a lightweight manner, and refines the static analysis using dynamic information. The framework is instantiated for points-to analysis with statement-level MOD analysis and tainted input analysis. Using JavaScript code from actual webpages as benchmarks, we show that blended points-to analysis for JavaScript obtains good coverage (86.6% on average per website) of the pure static analysis solution and finds additional points-to pairs (7.0% on average per website) contributed by dynamically generated/loaded code. Blended tainted input analysis reports all six true positives reported by static analysis, but without false alarms, and finds three additional true positives.
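A small sketch of the "focus static analysis on the dynamic calling structure" idea: keep only the statically derived call edges that were actually exercised at runtime, and add the edges the static graph could not see (e.g., dynamically generated or lazily loaded code). The edge sets are hypothetical, not the framework's data structures.

```python
def blend_call_graphs(static_edges, dynamic_edges):
    """static_edges/dynamic_edges: sets of (caller, callee) pairs."""
    pruned = static_edges & dynamic_edges     # exercised portion of the static graph
    added = dynamic_edges - static_edges      # e.g., calls into eval'd or lazily loaded code
    return pruned | added

static_edges = {("main", "render"), ("main", "log"), ("render", "helper")}
dynamic_edges = {("main", "render"), ("render", "helper"), ("render", "eval$1")}
print(blend_call_graphs(static_edges, dynamic_edges))
```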
- A Practical Blended Analysis for Dynamic Features in JavaScript. Wei, Shiyi; Ryder, Barbara G. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2012-08-01). The JavaScript Blended Analysis Framework is designed to perform a general-purpose, practical combined static/dynamic analysis of JavaScript programs, while handling dynamic features such as run-time generated code and variadic functions. The idea of blended analysis is to focus static analysis on a dynamic calling structure collected at runtime in a lightweight manner, and to refine the static analysis using additional dynamic information. We perform blended points-to analysis of JavaScript with our framework and compare results with those computed by a pure static points-to analysis. Using JavaScript code from actual webpages as benchmarks, we show that optimized blended analysis for JavaScript obtains good coverage (86.6% on average per website) of the pure static analysis solution and finds additional points-to pairs (7.0% on average per website) contributed by dynamically generated/loaded code.
- ScriptSpaces: An Isolation Abstraction for Web Browsers. Deka, Amarjyoti (Virginia Tech, 2010-07-22). Current web browsers are ill-prepared to manage execution of scripts embedded in web pages, because they treat all JavaScript code executing in a page as one unit. All code shares the same namespace, the same security domain, and uncontrolled access to the same heap; some browsers even use the same thread for multiple tabs or windows. This lack of isolation frequently causes problems that range from loss of functionality to security compromises. ScriptSpace is an abstraction that provides separate, isolated execution environments for parts or all of a web page. Within each ScriptSpace, we maintain the traditional, single-threaded JavaScript environment to provide compatibility with existing code written under this assumption. Multiple ScriptSpaces within a page are isolated with respect to namespace, CPU, and memory consumption. The user has the ability to safely terminate failing scripts without affecting the functionality of still-functional components of the page, or of other pages. We implemented a prototype of ScriptSpace based on the Firefox 3.0 browser. Rather than mapping ScriptSpaces to OS-level threads, we exploit a migrating-thread model in which threads enter and leave the ScriptSpaces associated with the respective sections of the document tree during event dispatching. A proportional-share scheduler ensures that the number of bytecode instructions executed within each ScriptSpace is controlled. Our prototype can isolate resource-hogging gadgets within an iGoogle mashup page as well as across multiple pages loaded in the browser, while still retaining interactive response.
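A hedged sketch of proportional-share scheduling over per-ScriptSpace bytecode budgets, in the spirit of stride scheduling. The quantum and share values are hypothetical, and the actual prototype lives inside the Firefox JavaScript engine rather than in Python.

```python
import heapq

def schedule(spaces, total_instructions, quantum=1000):
    """spaces: {space_name: share}. Returns bytecode instructions executed per space."""
    executed = {s: 0 for s in spaces}
    heap = [(0.0, s) for s in spaces]              # (virtual time, space)
    heapq.heapify(heap)
    remaining = total_instructions
    while remaining > 0:
        vtime, s = heapq.heappop(heap)             # space with the least virtual time runs next
        run = min(quantum, remaining)
        executed[s] += run                         # interpreter executes `run` bytecodes in s
        remaining -= run
        heapq.heappush(heap, (vtime + run / spaces[s], s))
    return executed

print(schedule({"gadget_A": 1, "gadget_B": 3}, 40000))   # roughly a 1:3 instruction split
```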
- A Static Assurance Analysis of Android Applications. Elish, Karim O.; Yao, Danfeng (Daphne); Ryder, Barbara G.; Jiang, Xuxian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2013-07-11). We describe an efficient approach to identify malicious Android applications through specialized static program analysis. Our solution, referred to as user-intention program dependence analysis, performs offline analysis to find the dependence relations between user triggers and entry points to methods providing critical system functions. Analyzing these types of dependences in programs can identify privileged operations (e.g., file and network operations and sensitive data access) that are not intended by users. We apply our technique to 708 free popular apps and 482 malware apps for Android OS, and the experimental results show that our technique can differentiate between legitimate and malware applications with high accuracy. We also explain the limitations of the user-intention-based approach and point out the need for practitioners to adopt multiple analysis tools when evaluating the assurance of Android applications.
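An illustrative sketch of the core check described above: flag a sensitive operation whose node in the dependence graph is not reachable from any user-event handler, i.e., a privileged operation with no user trigger behind it. The graph, handler names, and operation names are hypothetical.

```python
from collections import deque

def untriggered_sensitive_ops(dep_graph, trigger_nodes, sensitive_nodes):
    """dep_graph: node -> list of dependent nodes; returns sensitive ops with no user trigger."""
    reachable = set(trigger_nodes)
    queue = deque(trigger_nodes)
    while queue:
        n = queue.popleft()
        for m in dep_graph.get(n, ()):
            if m not in reachable:
                reachable.add(m)
                queue.append(m)
    return [s for s in sensitive_nodes if s not in reachable]

dep_graph = {"onClick": ["sendSms"], "onCreate": ["readContacts", "httpPost"]}
print(untriggered_sensitive_ops(dep_graph, ["onClick"], ["sendSms", "httpPost"]))
# -> ['httpPost']: a network write that does not depend on any user trigger
```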
- Supporting Effective Reuse and Safe Evolution in Metadata-Driven Software Development. Song, Myoungkyu (Virginia Tech, 2013-04-29). In recent years, metadata-driven software development has gained prominence. In this implementation model, various application concerns are provided as third-party frameworks and libraries that the programmer configures through metadata, such as XML configuration files or Java annotations. Metadata-driven software development is a special case of declarative programming: metadata serves as a domain-specific language that the programmer uses to declare various concerns, whose implementation is provided by an elaborate ecosystem of libraries and frameworks that serve as pre-defined application building blocks. Examples abound: transparent persistence mechanisms facilitate data management; security frameworks provide access control and encryption; unit testing frameworks provide abstractions for implementing and executing unit tests; etc. Metadata-driven software development has been particularly embraced in enterprise computing as a means of providing standardized solutions to common application scenarios. Despite the conciseness and simplicity benefits of metadata-driven software development, this implementation model introduces a unique set of reuse and evolution challenges. In particular, metadata is not reusable across application modules, and program evolution causes unsafe discrepancies between the main source code and its corresponding metadata. The research described in this dissertation addresses five fundamental problems of metadata-driven software development: (1) bytecode enhancements that transparently introduce concerns hinder program understanding and debugging; (2) mainstream enterprise metadata formats are hard to understand, evolve, and reuse; (3) concerns declared via metadata cannot be reused when source-to-source compiling emerging languages to mainstream ones; (4) metadata correctness cannot be automatically ensured as application source code is being refactored and enhanced; and (5) lacking built-in metadata, JavaScript programs can be enhanced with additional concerns only through manual source code changes. The research described in this dissertation leverages domain-specific languages and automated code generation to enable effective reuse and safe evolution in metadata-driven software development. The specific innovations that address the problems outlined above are as follows: (1) a domain-specific language (DSL) describing bytecode enhancement that facilitates the understanding and debugging of additional concerns; (2) a novel metadata format expressed as a DSL that is easier to author, understand, reuse, and maintain than existing metadata formats; (3) automated metadata translation that enables effective reuse of target-language additional concerns from source-to-source compiled source-language programs; (4) metadata invariants, a new abstraction for expressing and verifying metadata coding conventions; and (5) a new approach to declaratively enhancing JavaScript programs with additional concerns.
- Supporting Software Development Tools with An Awareness of Transparent Program Transformations. Song, Myoungkyu (Virginia Tech, 2013-03-05). Programs written in managed languages are compiled to a platform-independent intermediate representation, such as Java bytecode. The relatively high level of Java bytecode has engendered a widespread practice of changing the bytecode directly, without modifying the maintained version of the source code. This practice, called bytecode engineering or enhancement, has become indispensable for transparently introducing various concerns, including persistence, distribution, and security. For example, transparent persistence architectures help avoid the entanglement of business and persistence logic in the source code by changing the bytecode directly to synchronize objects with stable storage. With functionality added directly at the bytecode level, the source code reflects only partial semantics of the program. Specifically, the programmer can neither ascertain the program's runtime behavior by browsing its source code, nor map the runtime behavior back to the original source code. This research presents an approach that improves the utility of source-level programming tools by providing enhancement specifications written in a domain-specific language. By interpreting the specifications, a source-level programming tool can gain an awareness of the bytecode enhancements and improve its precision and usability. We demonstrate the applicability of our approach by making a source code editor and a symbolic debugger enhancements-aware.
- Threat Detection in Program Execution and Data Movement: Theory and Practice. Shu, Xiaokui (Virginia Tech, 2016-06-25). Program attacks are one of the oldest and most fundamental cyber threats. They compromise the confidentiality of data, the integrity of program logic, and the availability of services. This threat becomes even more severe when followed by other malicious activities such as data exfiltration. The integration of primitive attacks constructs comprehensive attack vectors and forms advanced persistent threats. Along with the rapid development of defense mechanisms, program attacks and data leak threats survive and evolve. Stealthy program attacks can hide in long execution paths to avoid being detected. Sensitive data transformations weaken existing leak detection mechanisms. New adversaries, e.g., semi-honest service providers, emerge and pose threats. This thesis presents theoretical analysis and practical detection mechanisms against stealthy program attacks and data leaks. The thesis presents a unified framework for understanding different branches of program anomaly detection and sheds light on possible future program anomaly detection directions. The thesis investigates modern stealthy program attacks hidden in long program executions and develops a program anomaly detection approach with data mining techniques to reveal the attacks. The thesis advances network-based data leak detection mechanisms by relaxing strong requirements in existing methods. The thesis presents practical solutions to outsource data leak detection procedures to semi-honest third parties and to identify noisy or transformed data leaks in network traffic.
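A minimal sketch of shingle-based leak screening, which relaxes the exact-match requirement of naive detection: fingerprint sensitive content as a set of hashed n-grams and measure how much of it appears in outbound traffic, so partial or reordered leaks still score. The parameters are hypothetical; the thesis additionally addresses transformed leaks and privacy-preserving outsourcing to semi-honest parties.

```python
def shingles(data: bytes, n=8):
    """Hashes of all length-n substrings (shingles) of the data."""
    return {hash(data[i:i + n]) for i in range(len(data) - n + 1)}

def leak_score(sensitive: bytes, traffic: bytes, n=8):
    """Fraction of the sensitive content's shingles observed in the traffic."""
    s = shingles(sensitive, n)
    return len(s & shingles(traffic, n)) / max(len(s), 1)

secret = b"SSN=078-05-1120;DOB=1970-01-01"
packet = b"POST /x HTTP/1.1\r\n\r\nuser=bob&note=SSN=078-05-1120"
print(round(leak_score(secret, packet), 2))   # nonzero overlap flags a candidate leak
```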
- Understanding Application Behaviours for Android Security: A Systematic Characterization. Cai, Haipeng; Ryder, Barbara G. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2016). In contrast to most existing research on Android, which focuses on specific security issues, there is little broad understanding of Android application run-time characteristics and their security implications. To mitigate this gap, we present the first dynamic characterization study of Android applications that targets such a broad understanding for Android security. Through lightweight method-level profiling, we collected 33GB of traces of method calls and inter-component communication (ICC) from 114 popular Android applications on Google Play and 61 communicating pairs among them, enabling an extensive empirical investigation of the run-time behaviours of Android applications. Our study revealed that (1) the Android framework was the target of 88.3% of all calls during application executions, (2) callbacks accounted for merely 3% of the total method calls, (3) 75% of ICCs did not carry any data payloads, and those that did preferred bundles over URIs, and (4) 85% of sensitive data sources and sinks targeted one or two top categories of information or operations, which were also most likely to constitute data leaks. We discuss the security implications of our findings for secure development and effective security defense of modern Android applications.
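To illustrate how such run-time metrics can be computed from a profiled trace, here is a minimal sketch; the trace format and package prefixes are hypothetical simplifications of the study's profiling data.

```python
def characterize(trace, framework_prefixes=("android.", "java.", "javax.")):
    """trace: list of (caller, callee, is_callback) tuples of fully qualified method names."""
    total = len(trace)
    to_framework = sum(callee.startswith(framework_prefixes) for _, callee, _ in trace)
    callbacks = sum(is_cb for _, _, is_cb in trace)
    return {"framework_call_ratio": to_framework / total,
            "callback_ratio": callbacks / total}

trace = [("com.app.A.run", "android.util.Log.d", False),
         ("android.os.Handler", "com.app.A.onMessage", True),
         ("com.app.A.onMessage", "java.util.List.add", False)]
print(characterize(trace))
```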
- User-Centric Dependence Analysis For Identifying Malicious Mobile Apps. Elish, Karim O.; Yao, Danfeng (Daphne); Ryder, Barbara G. (IEEE, 2012). This paper describes an efficient approach for identifying malicious Android mobile applications through specialized static program analysis. Our solution performs offline analysis and enforces the normal properties of legitimate dataflow patterns to identify programs that violate these properties. To demonstrate the feasibility of our user-centric dependence analysis, we implement a tool to generate a data dependence graph and perform preliminary evaluation to characterize both legitimate and malicious Android apps. Our preliminary results confirm our hypothesis on the differences in user-centric data dependence behaviors between legitimate and malicious apps.
- User-Intention Based Program Analysis for Android Security. Elish, Karim Omar Mahmoud (Virginia Tech, 2015-07-29). The number of mobile applications (i.e., apps) is rapidly growing as mobile computing becomes an integral part of the modern user experience. Malicious apps have infiltrated open marketplaces for mobile platforms. These malicious apps can exfiltrate users' private data, abuse system resources, or disrupt regular services. Despite recent advances in mobile security, the problem of detecting vulnerable and malicious mobile apps with high detection accuracy remains open. In this thesis, we address the problem of Android security by presenting a new quantitative program analysis framework for security vetting of Android apps. We first introduce a highly accurate proactive detection solution for detecting individual malicious apps. Our approach enforces a benign property instead of chasing malware signatures, and uses one complex feature rather than many features, as in existing malware detection methods. In particular, we statically extract a data-flow feature on how user inputs trigger sensitive critical operations, a property referred to as user-trigger dependence. This feature is extracted through nontrivial Android-specific static program analysis, and can be used in various quantitative analytical methods. Our evaluation on thousands of malicious apps and free popular apps gives a detection accuracy (2% false negative rate and false positive rate) that is better than, or at least competitive with, the state of the art. Furthermore, our method discovers new malicious apps available in the Google Play store that have not previously been detected by anti-virus scanning tools. Second, we present a new app-collusion detection approach and algorithms to analyze pairs or groups of communicating apps. App collusion is a new technique utilized by attackers to evade standard detection. It is a threat in which two or more apps, appearing benign, communicate to perform a malicious task. Most of the existing solutions assume the attack model of a stand-alone malicious app, and hence cannot detect app collusion. We first present experimental evidence on the technical challenges associated with detecting app collusion. Then, we address these challenges by introducing a scalable, in-depth cross-app static flow analysis approach to identify the risk level associated with communicating apps. Our approach statically analyzes the sensitivity and the context of each inter-app communication with low analysis complexity, and defines fine-grained security policies for inter-app communication risk detection. Our evaluation results on thousands of free popular apps indicate that our technique is effective. It generates four times fewer false positives compared to the state-of-the-art collusion-detection solution, enhancing the detection capability. The advantages of our inter-app communication analysis approach are its analysis scalability with low complexity and its substantially improved detection accuracy compared to the state-of-the-art solution. Such proactive defense solutions allow defenders to stay ahead of constantly evolving malware threats.
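A hedged sketch of flagging risky inter-app communication in the spirit of the cross-app analysis above: combine the sensitivity of what the sending app can read with the criticality of what the receiving app can do with it. The category lists and policy are hypothetical, not the thesis's actual security policies.

```python
SENSITIVE_SOURCES = {"READ_CONTACTS", "READ_SMS", "ACCESS_FINE_LOCATION"}
CRITICAL_SINKS = {"INTERNET", "SEND_SMS"}

def icc_risk(sender_sources, intent_has_data, receiver_sinks):
    """Classify one inter-app communication (ICC) by source sensitivity and sink criticality."""
    if not intent_has_data:
        return "low"
    if sender_sources & SENSITIVE_SOURCES and receiver_sinks & CRITICAL_SINKS:
        return "high"       # sensitive data may flow out through the second app
    return "medium"

print(icc_risk({"READ_CONTACTS"}, True, {"INTERNET"}))   # -> high
```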
- Visualizing the Results of a Complex Hybrid Dynamic-Static Analysis. Fisher II, Marc; Marrs, Luke; Ryder, Barbara G. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2010-06-01). Complex static or hybrid static-dynamic analyses produce large quantities of structured data. In the past, this data was generally intended for use by compilers or other software tools that used the produced information to transform the application being analyzed. However, it is becoming increasingly common for the results of these analyses to be used directly by humans. For example, in our own prior work we developed a hybrid dynamic-static escape analysis intended to help developers identify sources of object churn within large framework-based applications. In order to facilitate human use of complex analysis results, visualizations need to be developed that allow a user to browse these results and to identify points of interest within these large data sets. In this paper we present Hi-C, a visualization tool for our hybrid escape analysis that has been implemented as an Eclipse plugin. We show how Hi-C can help developers identify sources of object churn in a large framework-based application and how we have used the tool to assist in understanding the results of a complex analysis.