Accelerating federated edge learning

Nguyen, T. D.; Sharif University of Technology | 2021

  1. Type of Document: Article
  2. DOI: 10.1109/LCOMM.2021.3103536
  3. Publisher: Institute of Electrical and Electronics Engineers Inc., 2021
  4. Abstract: Transferring large models in federated learning (FL) networks is often hindered by clients' limited bandwidth. We propose FedAA, an FL algorithm that achieves fast convergence by exploiting regularized Anderson acceleration (AA) at the global level. First, we demonstrate that FL can benefit from acceleration methods from numerical analysis. Second, FedAA improves the convergence rate for quadratic losses and the empirical performance for smooth, strongly convex objectives, compared with FedAvg, an FL algorithm using gradient descent (GD) local updates. Experimental results demonstrate that employing AA can significantly improve the performance of FedAvg, even when the objective is non-convex. (A minimal sketch of the global AA update follows this record.)
  5. Keywords: Learning systems; Numerical methods; Acceleration method; Anderson acceleration; Convergence rates; Distributed optimization; Fast convergence; Federated learning; Large models; Learning network; Limited bandwidth; Quadratic loss; Gradient methods
  6. Source: IEEE Communications Letters; Volume 25, Issue 10, 2021, Pages 3282-3286; ISSN 1089-7798
  7. URL: https://ieeexplore.ieee.org/document/9509410
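
The abstract describes applying regularized Anderson acceleration to the server-side fixed-point map of FedAvg: clients take local GD steps, the server averages, and AA extrapolates the averaged model. The sketch below is illustrative only, not the authors' released code; the function name regularized_aa_step, the Tikhonov weight lam, the memory window, and the toy quadratic losses are all assumptions made for this example.

import numpy as np

def regularized_aa_step(x_hist, f_hist, lam=1e-8):
    # One type-II Anderson acceleration step with Tikhonov regularization.
    # x_hist: recent iterates x_{k-m}, ..., x_k (1-D arrays)
    # f_hist: matching residuals f_i = g(x_i) - x_i of the fixed-point map g
    # lam:    regularization weight (illustrative default, not from the paper)
    x_k, f_k = x_hist[-1], f_hist[-1]
    if len(x_hist) < 2:                 # not enough history: plain Picard step
        return x_k + f_k
    dX = np.stack([x_hist[i + 1] - x_hist[i] for i in range(len(x_hist) - 1)], axis=1)
    dF = np.stack([f_hist[i + 1] - f_hist[i] for i in range(len(f_hist) - 1)], axis=1)
    # gamma = argmin ||f_k - dF @ gamma||^2 + lam * ||gamma||^2
    gamma = np.linalg.solve(dF.T @ dF + lam * np.eye(dF.shape[1]), dF.T @ f_k)
    return x_k + f_k - (dX + dF) @ gamma

# Toy FL setup with quadratic client losses 0.5 * x'A_i x - b_i'x (assumed,
# for illustration; it mirrors the quadratic-loss regime the letter analyzes).
rng = np.random.default_rng(0)
d, n_clients, memory = 5, 4, 3
As = [np.diag(rng.uniform(0.5, 2.0, size=d)) for _ in range(n_clients)]
bs = [rng.normal(size=d) for _ in range(n_clients)]

def fedavg_map(x, lr=0.3):
    # FedAvg-style fixed-point map g: each client takes one local GD step
    # on its own loss; the server averages the resulting models.
    return np.mean([x - lr * (A @ x - b) for A, b in zip(As, bs)], axis=0)

x = np.zeros(d)
x_hist, f_hist = [], []
for _ in range(30):
    f = fedavg_map(x) - x               # global residual this round
    x_hist.append(x)
    f_hist.append(f)
    x_hist, f_hist = x_hist[-(memory + 1):], f_hist[-(memory + 1):]
    x = regularized_aa_step(x_hist, f_hist)

print("fixed-point residual:", np.linalg.norm(fedavg_map(x) - x))

Because the acceleration runs only at the server ("the global level"), the per-round communication pattern is unchanged from FedAvg; the bandwidth saving the abstract motivates comes from needing fewer communication rounds to converge.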