    Man vs. Machine: Practical Adversarial Detection of Malicious Crowdsourcing Workers

View/Open: WangManVS.Machine2014.pdf (581.4 KB)
    Date
    2014-08
    Author
Wang, Gang
    Wang, Tianyi
    Zheng, Haitao
    Zhao, Ben Y.
    Abstract
Recent work in security and systems has embraced the use of machine learning (ML) techniques for identifying misbehavior, e.g., email spam and fake (Sybil) users in social networks. However, ML models are typically derived from fixed datasets and must be periodically retrained. In adversarial environments, attackers can adapt by modifying their behavior or even sabotaging ML models by polluting training data. In this paper, we perform an empirical study of adversarial attacks against machine learning models in the context of detecting malicious crowdsourcing systems, where sites connect paying users with workers willing to carry out malicious campaigns. By using human workers, these systems can easily circumvent deployed security mechanisms, e.g., CAPTCHAs. We collect a dataset of malicious workers actively performing tasks on Weibo, China’s Twitter, and use it to develop ML-based detectors. We show that traditional ML techniques are accurate (95%–99%) in detection but can be highly vulnerable to adversarial attacks, including simple evasion attacks (workers modify their behavior) and powerful poisoning attacks (where administrators tamper with the training set). We quantify the robustness of ML classifiers by evaluating them in a range of practical adversarial models using ground truth data. Our analysis provides a detailed look at practical adversarial attacks on ML models, and helps defenders make informed decisions in the design and configuration of ML detectors.
    URI
    http://hdl.handle.net/10919/83430
    Collections
    • Destination Area: Integrated Security (IS) [106]
    • Scholarly Works, Department of Computer Science [297]
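The abstract describes training ML-based detectors on worker behavior and then stress-testing them with evasion attacks (workers shift their behavior toward benign profiles) and poisoning attacks (mislabeled samples injected into the training set). The following is a minimal, hypothetical Python sketch of those two attack ideas using synthetic features and a scikit-learn random forest; it is an illustration only, not the authors' dataset, feature set, or experimental setup.

# Hypothetical sketch of the evasion and poisoning ideas from the abstract.
# All data, features, and fractions here are synthetic illustrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic behavioral features (e.g., task rate, account age, follower ratio).
n = 2000
benign = rng.normal(loc=0.0, scale=1.0, size=(n, 4))
malicious = rng.normal(loc=1.5, scale=1.0, size=(n, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Baseline detector trained on clean labels.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("clean accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Poisoning attack: an adversary flips labels on a fraction of the training set.
poison_frac = 0.2
idx = rng.choice(len(y_tr), size=int(poison_frac * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]
clf_poisoned = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_poisoned)
print("accuracy after poisoning:", accuracy_score(y_te, clf_poisoned.predict(X_te)))

# Evasion attack: malicious workers shift their behavior toward the benign profile.
evasive = malicious - 1.0
print("fraction of evasive workers still flagged:", clf.predict(evasive).mean())

Running the sketch typically shows the clean detector performing well, the poisoned detector degrading as the flipped-label fraction grows, and fewer evasive workers being flagged as their features move toward the benign distribution, mirroring the qualitative behavior the abstract reports.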
