
One-shot federated learning: Theoretical limits and algorithms to achieve them

Salehkaleybar, S.; Sharifnassab, A.; Golestani, S. J. | Sharif University of Technology | 2021

  1. Type of Document: Article
  2. Publisher: Microtome Publishing, 2021
  3. Abstract: We consider distributed statistical optimization in the one-shot setting, in which m machines each observe n i.i.d. samples. Based on its observed samples, each machine sends a B-bit message to a server. The server collects the messages from all machines and estimates a parameter that minimizes an expected convex loss function. We investigate the impact of the communication budget B on the expected error and derive a tight lower bound on the error achievable by any algorithm. We then propose an estimator, called the Multi-Resolution Estimator (MRE), whose expected error (when B ≥ d log mn, where d is the dimension of the parameter) meets this lower bound up to a poly-logarithmic factor in mn. Unlike that of existing algorithms, the expected error of MRE tends to zero as the number of machines m goes to infinity, even when the number of samples per machine n remains bounded by a constant. We also address learning under a tiny communication budget and present lower and upper error bounds for the case in which B is a constant. © 2021 Saber Salehkaleybar, Arsalan Sharifnassab, and S. Jamaloddin Golestani. (A toy simulation sketch of this one-shot protocol appears after this record.)
  4. Keywords: Budget control; Error analysis; Communication efficiency; Distributed learning; Federated learning; Few-shot learning; Loss functions; Lower bound; Observed samples; Statistical optimization; Theoretical algorithms; Theoretical limits; Parameter estimation
  5. Source: Journal of Machine Learning Research, Volume 22, 2021, Pages 1-47; ISSN 1532-4435
  6. URL: https://jmlr.org/papers/v22/19-1048.html
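
To make the one-shot setting described in the abstract concrete, here is a minimal Python sketch of the communication protocol for a toy Gaussian mean-estimation problem: each of m machines quantizes its local estimate to a B-bit message, and the server averages the received messages. This is a hypothetical illustration only, not the authors' MRE estimator; the function names quantize and one_shot_estimate, the uniform quantizer, and the clipping range [-4, 4] are assumptions made for the example.

```python
import numpy as np

def quantize(x, bits, lo=-4.0, hi=4.0):
    """Uniform quantizer: map each value to one of 2**bits levels on [lo, hi].
    A hypothetical toy scheme standing in for the B-bit message constraint."""
    levels = 2 ** bits
    x = np.clip(x, lo, hi)
    idx = np.round((x - lo) / (hi - lo) * (levels - 1))
    return lo + idx * (hi - lo) / (levels - 1)

def one_shot_estimate(theta=1.0, m=1000, n=5, bits=8, seed=None):
    """One-shot protocol: each of m machines observes n i.i.d. N(theta, 1)
    samples, sends a single bits-long quantized local mean, and the server
    averages the m received messages."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(theta, 1.0, size=(m, n))  # n samples per machine
    local_means = samples.mean(axis=1)             # each machine's local estimate
    messages = quantize(local_means, bits)         # B-bit communication constraint
    return messages.mean()                         # server-side aggregation

if __name__ == "__main__":
    est = one_shot_estimate(theta=1.0, m=10_000, n=5, bits=8, seed=0)
    print(f"server estimate: {est:.4f}")  # close to theta = 1.0, up to quantization error
```

In this toy problem the local means are unbiased, so naive averaging already concentrates as m grows. The paper's contribution concerns general convex losses, where averaging local minimizers is biased for bounded n; the abstract's claim is that MRE's error, unlike that of such baselines, tends to zero as m goes to infinity even with n bounded by a constant.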