A New Look and Convergence Rate of Federated Multi-Task Learning with Laplacian Regularization
Published in IEEE Transactions on Neural Networks and Learning Systems, 2022
Recommended citation: Canh T. Dinh, Tung T. Vu, Nguyen H. Tran, Minh N. Dao, Hongyu Zhang (2022). "A New Look and Convergence Rate of Federated Multi-Task Learning with Laplacian Regularization." IEEE Transactions on Neural Networks and Learning Systems. https://arxiv.org/pdf/2102.07148.pdf
Non-Independent and Identically Distributed (non-IID) data distribution among clients is considered the key factor that degrades the performance of federated learning (FL). Several approaches to handling non-IID data, such as personalized FL and federated multi-task learning (FMTL), are of great interest to the research community. In this work, we first formulate the FMTL problem using Laplacian regularization to explicitly leverage the relationships among the clients' models for multi-task learning. We then introduce a new view of the FMTL problem, which shows for the first time that the formulated FMTL problem can be used for both conventional FL and personalized FL. We also propose two algorithms, FedU and dFedU, to solve the formulated FMTL problem in communication-centralized and decentralized schemes, respectively. Theoretically, we prove that the convergence rates of both algorithms achieve linear speedup for strongly convex objectives and sublinear speedup of order 1/2 for nonconvex objectives. Experimentally, we show that our algorithms outperform FedAvg, FedProx, SCAFFOLD, and AFL in FL settings, MOCHA in FMTL settings, and pFedMe and Per-FedAvg in personalized FL settings.
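For concreteness, a minimal sketch of the Laplacian-regularized objective described above, written in our own notation (the symbols $N$, $f_k$, $w_k$, $\eta$, and $a_{kl}$ are assumptions rather than the paper's exact notation): each client $k$ holds a local loss $f_k$ and a model $w_k$, the weights $a_{kl} \ge 0$ encode how strongly the tasks of clients $k$ and $l$ are related, and $\eta \ge 0$ trades off local fitting against model similarity:

$$
\min_{w_1,\dots,w_N} \; \sum_{k=1}^{N} f_k(w_k) \;+\; \frac{\eta}{2} \sum_{k=1}^{N} \sum_{l=1}^{N} a_{kl}\, \lVert w_k - w_l \rVert^2 .
$$

Under this reading, $\eta = 0$ reduces to purely local training, a sufficiently large $\eta$ forces all client models toward a common solution as in conventional FL, and intermediate values yield personalized models, which is the sense in which the single formulation covers conventional FL, personalized FL, and FMTL.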