DONE: Distributed Approximate Newton-type Method for Federated Edge Learning
Dinh, C. T. ; Sharif University of Technology | 2022
- Type of Document: Article
- DOI: 10.1109/TPDS.2022.3146253
- Publisher: IEEE Computer Society, 2022
- Abstract:
- There is growing interest in applying distributed machine learning to edge computing, forming federated edge learning. Federated edge learning faces non-i.i.d. and heterogeneous data, and the communication between edge workers, possibly across distant locations and over unstable wireless networks, is more costly than their local computational overhead. In this work, we propose DONE, a distributed approximate Newton-type algorithm with a fast convergence rate for communication-efficient federated edge learning. First, with strongly convex and smooth loss functions, DONE approximates the Newton direction in a distributed manner using the classical Richardson iteration on each edge worker. Second, we prove that DONE has linear-quadratic convergence and analyze its communication complexity. Finally, experimental results with non-i.i.d. and heterogeneous data show that DONE attains performance comparable to Newton's method. Notably, DONE requires fewer communication iterations than distributed gradient descent and outperforms the state-of-the-art approaches DANE, FEDL, and GIANT in the case of non-quadratic loss functions.
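- Illustrative sketch: the abstract's core idea, approximating the Newton direction on each worker with the classical Richardson iteration and averaging the resulting directions, can be sketched in Python as below. This is a hedged reconstruction based only on the abstract, not the paper's implementation; all names (`QuadraticWorker`, `richardson_newton_direction`, `done_round`, `alpha`, `R`) and the toy least-squares setup are hypothetical.

```python
import numpy as np


class QuadraticWorker:
    """Toy edge worker holding a local least-squares loss 0.5 * ||A w - b||^2 (hypothetical)."""

    def __init__(self, A, b):
        self.A, self.b = A, b

    def gradient(self, w):
        return self.A.T @ (self.A @ w - self.b)

    def hessian(self, w):
        return self.A.T @ self.A


def richardson_newton_direction(H, grad, alpha, R):
    """Approximate d = H^{-1} grad with the classical Richardson iteration
    d_{r+1} = d_r + alpha * (grad - H d_r); converges for 0 < alpha < 2 / lambda_max(H)."""
    d = np.zeros_like(grad)
    for _ in range(R):
        d = d + alpha * (grad - H @ d)
    return d


def done_round(workers, w, alpha=0.01, R=200):
    """One communication round: aggregate local gradients, let each worker run
    Richardson iterations against its local Hessian, then average the directions."""
    grad = np.mean([wk.gradient(w) for wk in workers], axis=0)
    dirs = [richardson_newton_direction(wk.hessian(w), grad, alpha, R) for wk in workers]
    return w - np.mean(dirs, axis=0)


# Small usage example on synthetic data (strongly convex quadratic losses).
rng = np.random.default_rng(0)
workers = [QuadraticWorker(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(5)
for _ in range(10):
    w = done_round(workers, w)
```

  The design point illustrated here is that each Richardson pass needs only local Hessian-vector products, so the Newton-type direction is formed without shipping Hessians between workers; only gradients and directions cross the network each round.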
- Keywords:
- Distributed machine learning ; Approximation algorithms ; Artificial intelligence ; Learning systems ; Newton-Raphson method ; Optimization ; Complexity theory ; Convergence ; Distributed database ; Federated learning ; Heterogeneous data ; IID data ; Newton's methods ; Optimisations ; Optimization decomposition ; Computational complexity
- Source: IEEE Transactions on Parallel and Distributed Systems; Volume 33, Issue 11, 2022, Pages 2648-2660; ISSN 1045-9219
- URL: https://ieeexplore.ieee.org/document/9695269