 

Man vs. Machine: Practical Adversarial Detection of Malicious Crowdsourcing Workers

dc.contributor.author: Wang, Gang
dc.contributor.author: Wang, Tianyi
dc.contributor.author: Zheng, Haitao
dc.contributor.author: Zhao, Ben Y.
dc.contributor.department: Computer Science
dc.date.accessioned: 2018-05-31T14:37:49Z
dc.date.available: 2018-05-31T14:37:49Z
dc.date.issued: 2014-08
dc.description.abstract: Recent work in security and systems has embraced the use of machine learning (ML) techniques for identifying misbehavior, e.g. email spam and fake (Sybil) users in social networks. However, ML models are typically derived from fixed datasets, and must be periodically retrained. In adversarial environments, attackers can adapt by modifying their behavior or even sabotaging ML models by polluting training data. In this paper, we perform an empirical study of adversarial attacks against machine learning models in the context of detecting malicious crowdsourcing systems, where sites connect paying users with workers willing to carry out malicious campaigns. By using human workers, these systems can easily circumvent deployed security mechanisms, e.g. CAPTCHAs. We collect a dataset of malicious workers actively performing tasks on Weibo, China’s Twitter, and use it to develop ML-based detectors. We show that traditional ML techniques are accurate (95%–99%) in detection but can be highly vulnerable to adversarial attacks, including simple evasion attacks (workers modify their behavior) and powerful poisoning attacks (where administrators tamper with the training set). We quantify the robustness of ML classifiers by evaluating them in a range of practical adversarial models using ground truth data. Our analysis provides a detailed look at practical adversarial attacks on ML models, and helps defenders make informed decisions in the design and configuration of ML detectors.
dc.identifier.uri: http://hdl.handle.net/10919/83430
dc.language.iso: en_US
dc.publisher: USENIX
dc.relation.ispartof: Proceedings of the USENIX Security Symposium, 2014
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.title: Man vs. Machine: Practical Adversarial Detection of Malicious Crowdsourcing Workers
dc.type: Conference proceeding
dc.type: Presentation
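
The abstract above describes training ML-based detectors on crowdturfing worker behavior and then evaluating them under evasion and poisoning attacks. As a minimal Python sketch (not the authors' code), the example below trains a random-forest classifier on synthetic stand-in features and shows how flipping a fraction of training labels, as a crowdturfing-site administrator with control over ground truth might do, degrades detection accuracy. All feature counts, poisoning fractions, and classifier settings are invented for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(0)

# Synthetic stand-in for per-account behavioral features
# (e.g. tasks/day, follower ratio); class 1 = crowdturfing worker.
X, y = make_classification(n_samples=5000, n_features=10, n_informative=6,
                           weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

def train_and_score(y_tr):
    # Train on the (possibly poisoned) labels, score on a clean test set.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_tr)
    return accuracy_score(y_test, clf.predict(X_test))

print("clean training set accuracy:", round(train_and_score(y_train), 3))

# Poisoning attack: flip the labels of a fraction of training samples.
for frac in (0.05, 0.2, 0.4):
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned), size=int(frac * len(y_poisoned)),
                     replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    print(f"{int(frac * 100)}% of training labels flipped:",
          round(train_and_score(y_poisoned), 3))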

Files

Original bundle
Name: WangManVS.Machine2014.pdf
Size: 581.45 KB
Format: Adobe Portable Document Format