Authors: Gill, Waris; Anwar, Ali; Gulzar, Muhammad Ali
Date accessioned: 2024-03-01
Date available: 2024-03-01
Date issued: 2023-12-04
URI: https://hdl.handle.net/10919/118228

Abstract: Federated Learning (FL) is a privacy-preserving distributed machine learning technique that enables individual clients (e.g., user participants, edge devices, or organizations) to train a model on their local data in a secure environment and then share the trained model with an aggregator to build a global model collaboratively. In this work, we propose FedDefender, a defense mechanism against targeted poisoning attacks in FL that leverages differential testing. FedDefender first applies differential testing to clients' models using a synthetic input. Instead of comparing the output (predicted label), which is unavailable for a synthetic input, FedDefender fingerprints the neuron activations of clients' models to identify a potentially malicious client containing a backdoor. We evaluate FedDefender using the MNIST and FashionMNIST datasets with 20 and 30 clients, and our results demonstrate that FedDefender effectively mitigates such attacks, reducing the attack success rate (ASR) to 10% without deteriorating the global model's performance.

Format: application/pdf
Language: en
Rights: Creative Commons Attribution 4.0 International
Title: FedDefender: Backdoor Attack Defense in Federated Learning
Type: Article - Refereed
Date updated: 2024-01-01
Rights holder: The author(s)
DOI: https://doi.org/10.1145/3617574.3617858
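The abstract's core idea, fingerprinting neuron activations on a shared synthetic input and flagging the client whose fingerprint deviates from the rest, can be illustrated with a minimal sketch. This is not the paper's implementation: client models are reduced to single linear layers, the fingerprint is simply the ReLU activation vector, and the consensus is taken as the element-wise median across clients; the function names `activation_fingerprint` and `flag_malicious` are hypothetical.

```python
import numpy as np

def activation_fingerprint(weights, x):
    # Hypothetical fingerprint: ReLU activations of a one-layer client model
    # on the shared synthetic input x (no predicted label is needed).
    return np.maximum(weights @ x, 0.0)

def flag_malicious(client_weights, x):
    # Differential testing across clients: compare each client's activation
    # fingerprint against the element-wise median of all fingerprints and
    # flag the client farthest from that consensus.
    fps = np.stack([activation_fingerprint(w, x) for w in client_weights])
    consensus = np.median(fps, axis=0)
    distances = np.linalg.norm(fps - consensus, axis=1)
    return int(np.argmax(distances)), distances

# Toy scenario: nine near-identical benign clients and one client whose
# weights are shifted, standing in for a backdoored model.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 4))
clients = [base + 0.01 * rng.normal(size=(8, 4)) for _ in range(9)]
clients.append(base + 5.0)           # synthetic "backdoored" client, index 9
synthetic_input = rng.normal(size=4)
suspect, dists = flag_malicious(clients, synthetic_input)
print(suspect)                       # the outlier client index
```

In the actual system the fingerprints would come from a real model's hidden layers and the aggregator would use them to down-weight or exclude the suspect update before aggregation; the median-distance rule here is just one plausible way to realize the comparison step.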