
Private Distributed Computing for Machine Learning Algorithms

Mousavi, Mohammad Hossein | 2019

  1. Type of Document: M.Sc. Thesis
  2. Language: Farsi
  3. Document No: 52343 (05)
  4. University: Sharif University of Technology
  5. Department: Electrical Engineering
  6. Advisor(s): Maddah-Ali, Mohammad Ali; Mirmohseni, Mahtab
  7. Abstract:
  8. In this thesis, we argue that many basic machine learning algorithms, including the support vector machine (SVM) for classification, principal component analysis (PCA) for dimensionality reduction, and regression for dependency estimation, require only the inner products of the data samples rather than the data samples themselves (a small numerical sketch of this observation is given after the keyword list below). Motivated by this observation, we introduce the problem of private inner product retrieval for distributed machine learning, in which a database of files is duplicated across a number of non-colluding servers. A user wishes to retrieve a subset of a specific size of the inner products of the data files with minimum communication load, without revealing any information about the identity of the requested subset. For achievability, we use algorithms for multi-message private information retrieval. For the converse, we establish that as the length of the files becomes large, the set of all inner products converges to independent random variables with a uniform distribution, and we derive the rate of convergence. To prove this, we construct special dependencies among the sequences of the sets of all inner products at different lengths, which form a time-homogeneous irreducible Markov chain, without affecting the marginal distributions. We show that this Markov chain has the uniform distribution as its unique stationary distribution, with a rate of convergence dominated by the second largest eigenvalue of the transition probability matrix. This allows us to develop a converse bound that, in some cases, becomes tight as the size of the files grows. Although this converse builds on the one for multi-message private information retrieval, the fact that inner products rather than the data itself are retrieved requires some changes to reach the desired result.
    We further explore the idea of retrieving inner products when the data files are real-valued vectors rather than finite-field vectors, motivated by the prevalence of real-valued data in learning algorithms. We derive upper and lower bounds on the minimum download cost needed to retrieve these inner products within an acceptable distortion, without revealing the identity of the requested subset. For achievability, we first quantize the inner products and then apply the same scheme as before (a sketch of this quantization step also follows the keyword list). For the converse, we show that as the length of the files becomes large, the set of inner products converges to independent random variables with a normal distribution, and we derive the rate of convergence using the central limit theorem.
  9. Keywords:
  10. Privacy ; Coding ; Distributed System ; Machine Learning ; Privacy Preserving
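
The observation that these algorithms depend on the data only through inner products can be made concrete with a small numerical sketch. The Python snippet below is an illustration under simple assumptions, not the retrieval scheme of the thesis; all names and parameters are chosen for illustration. It computes dual ridge regression and PCA sample scores purely from the Gram matrix of pairwise inner products, and cross-checks the PCA scores against a direct computation from the raw data.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 50, 10
    X = rng.standard_normal((n, d))                       # raw data samples
    y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    # Gram matrix of all pairwise inner products: the only statistic the
    # learner below ever touches (apart from the final cross-check).
    G = X @ X.T

    # Dual (kernelized) ridge regression: training needs G, and prediction for
    # a new point needs only its inner products with the training samples.
    lam = 1e-2
    alpha = np.linalg.solve(G + lam * np.eye(n), y)
    x_new = rng.standard_normal(d)
    k_new = X @ x_new                                     # inner products with x_new
    y_pred = float(k_new @ alpha)

    # Dual PCA: sample scores come from the eigendecomposition of the
    # double-centered Gram matrix alone.
    J = np.eye(n) - np.ones((n, n)) / n
    Gc = J @ G @ J
    evals, evecs = np.linalg.eigh(Gc)
    top = np.argsort(evals)[::-1][:2]                     # two largest eigenvalues
    scores_from_gram = evecs[:, top] * np.sqrt(np.maximum(evals[top], 0.0))

    # Cross-check against PCA computed directly from the centered data
    # (principal components are defined only up to a sign flip).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores_direct = Xc @ Vt[:2].T
    print(y_pred)
    print(np.allclose(np.abs(scores_from_gram), np.abs(scores_direct), atol=1e-6))

The same reasoning applies to the dual formulation of the SVM, whose objective involves the labels and the Gram matrix only.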
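
For the real-valued setting, the achievability argument first maps each real inner product into a finite alphabet. The following sketch uses an assumed uniform scalar quantizer, an assumed dynamic range, and illustrative parameters; it shows only this quantization step and the distortion it introduces, while the private retrieval of the resulting finite-alphabet symbols would proceed as in the finite-field case and is not implemented here.

    import numpy as np

    rng = np.random.default_rng(1)
    m, d = 20, 1000
    X = rng.standard_normal((m, d)) / np.sqrt(d)          # real-valued files (normalized for illustration)

    # All pairwise inner products: the quantities to be retrieved privately.
    G = X @ X.T
    inner_products = G[np.triu_indices(m, k=1)]

    # Assumed uniform scalar quantizer with 2**b levels on an assumed range [-B, B].
    b, B = 8, 2.0
    levels = 2 ** b
    delta = 2 * B / levels                                # quantizer step size
    idx = np.clip(np.floor((inner_products + B) / delta), 0, levels - 1).astype(int)
    reconstructed = -B + (idx + 0.5) * delta              # mid-point reconstruction

    # The indices live in an alphabet of size 2**b, so they can be embedded in
    # a finite field and handed to the finite-field scheme; the cost is a
    # per-symbol squared error of at most (delta / 2) ** 2.
    mse = float(np.mean((inner_products - reconstructed) ** 2))
    print(f"levels={levels}, worst-case squared error={(delta / 2) ** 2:.2e}, empirical MSE={mse:.2e}")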
