Venue: IEEE International Conference on Communications (CCF-C)
Abstract: Federated learning is a distributed learning paradigm that allows clients to collaboratively train a model without sharing their local data. Despite its benefits, federated learning is vulnerable to backdoor attacks, in which malicious clients inject backdoors into the global model aggregation process so that the resulting model misclassifies samples containing backdoor triggers while behaving normally on benign samples. Existing defenses against backdoor attacks are either effective only under very specific attack models or severely degrade model performance on benign samples. To address these deficiencies, this paper proposes pFedSAM, a new federated learning method based on partial model personalization and sharpness-aware training. Theoretically, we analyze the convergence properties of pFedSAM in the general nonconvex and heterogeneous data setting. Empirically, we conduct extensive experiments on a suite of federated datasets and show the superiority of pFedSAM over state-of-the-art robust baselines in terms of both robustness and accuracy.
Co-author: Yuanxiong Guo, Yanmin Gong
First Author: Zhenxiao Zhang
Translation or Not: no
Date of Publication: 2025-06-08
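The sharpness-aware training that the abstract builds on can be illustrated with a minimal sketch. This is a generic sharpness-aware minimization (SAM) step, not the paper's pFedSAM algorithm itself: the weights are first perturbed toward the locally worst-case direction within a radius `rho`, and the update then uses the gradient computed at that perturbed point. The toy quadratic loss, `lr`, and `rho` values below are illustrative assumptions.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization (SAM) step on weights w.

    grad_fn(w) returns the loss gradient at w. SAM first ascends to a
    nearby worst-case point (perturbation of norm rho along the gradient
    direction), then descends using the gradient taken at that point,
    which biases training toward flat minima.
    """
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed weights
    return w - lr * g_sharp

# Toy example: quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda v: v)
# w is driven toward the (flat) minimum at the origin
```

In pFedSAM, per the abstract, such sharpness-aware updates are combined with partial model personalization, so only the shared part of each client's model participates in the global aggregation.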