Abstract: Federated learning has attracted much attention because it can alleviate the problem of data silos. However, it also faces challenges such as the risk of privacy leakage and performance degradation caused by model heterogeneity under non-independent and identically distributed data. To this end, this study proposes a personalized federated learning method based on Bregman divergence and differential privacy (FedBDP). This method employs Bregman divergence to measure the difference between local and global parameters and adopts it as a regularization term in the loss function, thereby reducing the divergence between models and improving model accuracy. Meanwhile, an adaptive differential privacy mechanism is used to perturb the local model parameters, and an attenuation coefficient is defined to dynamically adjust the differential privacy noise level in each round, thereby allocating the privacy noise more reasonably and improving model utility. Theoretical analysis shows that FedBDP satisfies convergence conditions for both strongly convex and non-convex smooth objective functions. Experimental results on the MNIST and CIFAR10 datasets demonstrate that FedBDP maintains model accuracy while satisfying differential privacy.
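As a minimal illustrative sketch of the two ingredients named above, the snippet below shows a Bregman-regularized local objective and a per-round noise schedule with an attenuation coefficient. It assumes a squared-Euclidean generating function for the Bregman divergence (so the regularizer reduces to a proximal term), Gaussian perturbation with norm clipping, and hypothetical parameter names (mu, sigma0, gamma, clip) that are not taken from the paper.

```python
import torch

def bregman_divergence(w_local, w_global):
    # Bregman divergence induced by phi(w) = 0.5 * ||w||^2, which reduces to
    # the squared Euclidean distance (an assumption; the paper may use a
    # different generating function phi).
    return 0.5 * sum((wl - wg).pow(2).sum() for wl, wg in zip(w_local, w_global))

def local_loss(model, global_params, data, target, criterion, mu=0.1):
    # Task loss plus the Bregman regularizer weighted by a hypothetical
    # coefficient mu, penalizing drift of the local model from the global one.
    task_loss = criterion(model(data), target)
    reg = bregman_divergence(list(model.parameters()), global_params)
    return task_loss + mu * reg

def perturb_parameters(params, round_t, sigma0=1.0, gamma=0.95, clip=1.0):
    # Adaptive Gaussian perturbation: the noise scale decays by an attenuation
    # coefficient gamma each round (sigma0, gamma, clip are illustrative
    # values, not values reported in the paper).
    sigma_t = sigma0 * (gamma ** round_t)
    noisy = []
    for p in params:
        p_clipped = p / max(1.0, p.norm() / clip)  # bound per-parameter sensitivity
        noisy.append(p_clipped + torch.randn_like(p) * sigma_t * clip)
    return noisy
```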