pFedSAM: Secure Federated Learning Against Backdoor Attacks via Personalized Sharpness-Aware Minimization
Posted: 2026-03-05
Venue: IEEE International Conference on Communications (CCF-C)
Abstract: Federated learning is a distributed learning paradigm that allows clients to collaboratively train a model without sharing their local data. Despite its benefits, federated learning is vulnerable to backdoor attacks, in which malicious clients inject backdoors into the global model aggregation process so that the resulting model misclassifies samples containing backdoor triggers while performing normally on benign samples. Existing defenses against backdoor attacks are either effective only under very specific attack models or severely degrade model performance on benign samples. To address these deficiencies, this paper proposes pFedSAM, a new federated learning method based on partial model personalization and sharpness-aware training. Theoretically, we analyze the convergence properties of pFedSAM in the general nonconvex and heterogeneous data setting. Empirically, we conduct extensive experiments on a suite of federated datasets and show the superiority of pFedSAM over state-of-the-art robust baselines in terms of both robustness and accuracy.
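The abstract names sharpness-aware training as one of pFedSAM's two building blocks. As background, here is a minimal sketch of a generic sharpness-aware minimization (SAM) update on a toy loss; this is an illustration of the standard SAM two-step rule, not the paper's actual algorithm, and the function names, learning rate, and perturbation radius `rho` are assumptions for the example.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One generic SAM step: first perturb the weights along the
    normalized loss-ascent direction (radius rho), then take a
    descent step using the gradient at the perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent perturbation
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed weights
    return w - lr * g_sharp

# Toy quadratic loss L(w) = ||w||^2 / 2, whose gradient is simply w.
grad_fn = lambda w: w
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, grad_fn)
print(np.linalg.norm(w))  # the iterates shrink toward the minimum at 0
```

In a federated setting, a step like this would replace the plain SGD step in each client's local update; pFedSAM additionally personalizes part of the model per client, which the sketch above does not capture.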
Co-authors: Yuanxiong Guo, Yanmin Gong
First author: Zhenxiao Zhang
Translated work: No
Publication date: 2025-06-08
