Sample complexity of classification with compressed input
Hafez-Kolahi, H. ; Sharif University of Technology | 2020
- Type of Document: Article
- DOI: 10.1016/j.neucom.2020.07.043
- Publisher: Elsevier B.V., 2020
- Abstract:
- One of the most studied problems in machine learning is finding reasonable constraints that guarantee the generalization of a learning algorithm. These constraints are usually expressed as simplicity assumptions on the target. For instance, in Vapnik–Chervonenkis (VC) theory the space of possible hypotheses is assumed to have limited VC dimension. One way to formulate such a simplicity assumption is via information-theoretic concepts. In this paper, a constraint on the entropy H(X) of the input variable X is studied as a simplicity assumption. It is proven that the sample complexity to achieve an ∊-δ Probably Approximately Correct (PAC) hypothesis is bounded by [Formula presented], which is sharp up to the [Formula presented] factor (a and c are constants). Moreover, it is shown that if a feature learning process is employed to learn the compressed representation from the dataset, this bound no longer holds. These findings have important implications for the Information Bottleneck (IB) theory, which has been used to explain the generalization power of Deep Neural Networks (DNNs) but whose applicability for this purpose is currently debated by researchers. In particular, the result is a rigorous proof of the earlier heuristic claim that compressed representations are exponentially easier to learn. However, our analysis pinpoints two factors that prevent the IB, in its current form, from being applicable to the study of neural networks. Firstly, the sample complexity depends exponentially on 1/∊, which can have a dramatic effect on the bounds in practical applications when ∊ is small. Secondly, our analysis reveals that arguments based on input compression are inherently insufficient to explain the generalization of methods like DNNs, in which the features are also learned from the available data. © 2020 Elsevier B.V.
- Keywords:
- Compressed representation ; Generalization bound ; Information bottleneck ; Complex networks ; Deep learning ; Deep neural networks ; Information theory ; Learning systems ; Neural networks ; Exponential dependence ; Feature learning ; Information bottleneck theories ; Input variables ; Probably approximately correct ; Sample complexity ; VC dimension ; Learning algorithms ; Article ; Compression ; Deep neural network ; Entropy ; Learning
- Source: Neurocomputing; Volume 415, 2020, Pages 286-294
- URL: https://www.sciencedirect.com/science/article/abs/pii/S0925231220311516
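
For context, the entropy H(X) and the ∊-δ PAC guarantee referenced in the abstract follow the standard definitions sketched below; the paper's own bound appears only as "[Formula presented]" in this record and is therefore not reproduced here.

```latex
% Shannon entropy of the (discrete) input variable X
H(X) = -\sum_{x} p(x) \log_2 p(x)

% (\epsilon,\delta)-PAC guarantee: with probability at least 1-\delta over an
% i.i.d. training sample S of size m drawn from distribution D, the learned
% hypothesis h_S has error at most \epsilon (realizable setting; in the
% agnostic setting the error is measured relative to the best hypothesis
% in the class):
\Pr_{S \sim D^m}\big[\, \mathrm{err}_D(h_S) \le \epsilon \,\big] \ge 1 - \delta,
\qquad
\mathrm{err}_D(h) = \Pr_{(X,Y) \sim D}\big[\, h(X) \ne Y \,\big]

% The sample complexity m(\epsilon,\delta) is the smallest m for which such a
% guarantee can be achieved.
```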