Browsing by Author "Xiong, Huijun"
Now showing 1 - 5 of 5
- Evaluation in Information Retrieval
  Peddi, Bhanu; Xiong, Huijun; ElSherbiny, Noha (2010-10-26)
  This module addresses the methods used to evaluate an Information Retrieval system. We focus on evaluating a system using relevance and put that knowledge into practice using TREC_EVAL.
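Relevance-based evaluation of the kind TREC_EVAL automates can be illustrated with a small sketch. Average precision is one of the standard measures such tools report; the following is a minimal stand-alone implementation of that one measure, not TREC_EVAL itself:

```python
def average_precision(ranked_ids, relevant):
    """Average precision of one ranked result list.

    ranked_ids: document ids in the order the system returned them.
    relevant:   set of ids judged relevant for the query.
    """
    hits = 0
    precision_sum = 0.0
    for rank, doc in enumerate(ranked_ids, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank  # precision at this relevant hit
    return precision_sum / len(relevant) if relevant else 0.0

# Example: relevant docs d1 and d3 retrieved at ranks 1 and 3.
ap = average_precision(["d1", "d2", "d3", "d4"], {"d1", "d3"})
```

Averaging this value over all topics gives MAP, the mean average precision that TREC-style evaluations commonly report.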
- Personal Anomaly Detection and Smart-Phone Security
  Xiong, Huijun; Yao, Danfeng (Daphne) (Virginia Tech, 2010-04-22)
  Mobile devices increasingly serve as the computing platform for networked applications such as Web and email. This development requires strong guarantees on the system integrity and data security of mobile devices against malicious software (malware for short). This work introduces a new personalized anomaly detection approach that achieves host security by modeling and enforcing the legitimate behavior characteristics of a human user. Specifically, we identify characteristic human-user behaviors (namely, application-level user inputs via keyboard and mouse), develop protocols for fine-grained traffic-input analysis, and prevent forgeries and attacks by malware. Our solution combines cryptographic techniques, correlation analysis, and hardware-based integrity measures. Our evaluation is performed on computers with real-world and synthetic malware. The uniqueness of this personalized anomaly detection technique is that it allows computer security to be realized without the need to continually monitor ever-changing malware patterns.
- Secure Data Service Outsourcing with Untrusted Cloud
  Xiong, Huijun (Virginia Tech, 2013-06-10)
  Outsourcing data services to the cloud is a natural fit for cloud usage. However, growing security and privacy concerns from both enterprises and individuals about their outsourced data inhibit this trend. In this dissertation, we introduce service-centric solutions that address two types of security threats in current cloud environments: semi-honest cloud providers and malicious cloud customers. Our solution aims not only to provide confidentiality and access controllability of outsourced data with strong cryptographic guarantees but, more importantly, to fulfill the specific security requirements of different cloud services in effective, systematic ways. To provide strong cryptographic guarantees for outsourced data, we study the generic security problem posed by semi-honest cloud providers and introduce a novel proxy-based secure data outsourcing scheme. Specifically, our scheme improves the efficiency of the traditional proxy re-encryption algorithm by integrating symmetric encryption with proxy re-encryption. By avoiding the cost of applying the re-encryption operation directly to the bulk encrypted data, our scheme allows flexible and efficient user revocation without revealing the underlying data or requiring heavy computation in the untrusted cloud. To address the specific requirements of different cloud services, we investigate two of them: cloud-based content delivery and cloud-based data processing. For the former, we focus on preserving the cache property of the content delivery network and propose CloudSeal, a scheme for securely and flexibly sharing and distributing content via the public cloud. By caching the major part of a stored encrypted content object in the delivery network for distribution and keeping the minor part with the data owner for content authorization, CloudSeal achieves both security and efficiency, demonstrated theoretically and experimentally. For the latter service, we design and realize CloudSafe, a framework that supports secure and efficient data processing with minimal key leakage in the vulnerable cloud virtualization environment. Through a one-time cryptographic key strategy and a centralized key-management framework, CloudSafe efficiently avoids cross-VM side-channel attacks from malicious cloud customers. Our experimental results confirm the practicality and scalability of CloudSafe.
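The key-separation structure the abstract describes (bulk data encrypted once under a symmetric content key, while only the small key is re-wrapped per user, so granting or revoking access never touches the bulk ciphertext) can be sketched in a few lines. This is a toy illustration in which XOR stands in for real symmetric-cipher and proxy re-encryption primitives; it shows the data flow only and is not the dissertation's actual scheme:

```python
import os

def xor_bytes(data, key):
    # Toy stand-in for a real cipher (illustration only, not secure).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Data owner: encrypt the bulk object once under a random content key.
content_key = os.urandom(16)
ciphertext = xor_bytes(b"outsourced data object", content_key)

# Per-user grant: wrap only the 16-byte content key, never the bulk data.
# In the real scheme this wrapping step is where proxy re-encryption acts.
user_key = os.urandom(16)
wrapped_key = xor_bytes(content_key, user_key)

# Authorized user: unwrap the content key, then decrypt the bulk object.
recovered = xor_bytes(ciphertext, xor_bytes(wrapped_key, user_key))
```

Revocation in this structure means re-wrapping a 16-byte key for the remaining users, rather than re-encrypting the whole stored object in the untrusted cloud.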
- Storytelling Security: User-Intention Based Traffic Sanitization
  Xiong, Huijun; Yao, Danfeng (Daphne); Zhang, Zhibin (Department of Computer Science, Virginia Polytechnic Institute & State University, 2010-12-01)
  Malicious software (malware) with a decentralized communication infrastructure, such as peer-to-peer botnets, is difficult to detect. In this paper, we describe a traffic-sanitization method for identifying malware-triggered outbound connections from a personal computer. Our solution correlates user activities with the content of outbound traffic. Our key observation is that user-initiated outbound traffic typically has corresponding human inputs, i.e., keystrokes or mouse clicks. Our analysis of the causal relations between user inputs and packet payloads enables efficient enforcement of inter-packet dependencies at the application level. We formalize our approach within the framework of a protocol-state machine and define new application-level traffic-sanitization policies that enforce the inter-packet dependencies. These dependencies are derived from transitions among protocol states that involve both user actions and network events. We refer to our methodology as storytelling security. We demonstrate a concrete realization of our methodology in the context of a peer-to-peer file-sharing application and describe its use in blocking P2P-bot traffic on a host. We implement and evaluate our prototype on the Windows operating system in both online and offline deployment settings. Our experimental evaluation, along with case studies of real-world P2P applications, demonstrates the feasibility of verifying inter-packet dependencies. Our deep packet inspection incurs overhead on the outbound network flow. Our solution can also be used as an offline collect-and-analyze tool.
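The core idea of the protocol-state machine above — an outbound request is legitimate only if it follows a corresponding user action — can be sketched as a tiny state machine. This is a deliberately simplified toy with a single state bit (the paper's machines track full protocol transitions, payload content, and timing); all names here are illustrative:

```python
class TrafficSanitizer:
    """Toy two-state sanitizer: an outbound request is allowed only
    when a user input event (keystroke/click) has been observed first."""

    def __init__(self):
        self.user_acted = False  # state: has a human input been seen?

    def on_user_input(self):
        # Transition on a user action (keystroke or mouse click).
        self.user_acted = True

    def on_outbound_packet(self):
        # Outbound traffic with no corresponding human input is suspect
        # (e.g., a bot phoning home), so the policy blocks it.
        if not self.user_acted:
            return "blocked"
        self.user_acted = False  # one input authorizes one request
        return "allowed"
```

A real deployment would key this per application and per protocol state rather than using one global flag, but the enforcement pattern — correlate each network event with a preceding user event — is the same.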
- Weka
  Peddi, Bhanu; Xiong, Huijun; ElSherbiny, Noha (2010-12-10)
  This module covers the methods of text classification used in information retrieval. We focus on the use of Weka, a data-mining toolkit, for data processing with three classification algorithms: Naive Bayes [1], k-Nearest Neighbor [2], and Support Vector Machine [3], as discussed in the textbook [7].
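The first of the three classifiers, multinomial Naive Bayes with Laplace smoothing, is simple enough to sketch from scratch. This stand-alone Python toy mirrors the train-then-classify workflow the module carries out in Weka, but it is not Weka code and omits the feature extraction a real text pipeline needs:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (token_list, label) pairs. Returns a model tuple."""
    class_counts = Counter()              # documents per class
    word_counts = defaultdict(Counter)    # token frequencies per class
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def predict_nb(model, tokens):
    """Pick the class maximizing log P(class) + sum log P(token | class)."""
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best_label, best_logp = None, float("-inf")
    for label in class_counts:
        logp = math.log(class_counts[label] / total_docs)
        # Laplace (add-one) smoothing over the shared vocabulary.
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            logp += math.log((word_counts[label][t] + 1) / denom)
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label
```

In Weka the same workflow would load an ARFF file into `Instances` and train `weka.classifiers.bayes.NaiveBayes`; the toy above just makes the underlying probability model explicit.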