Information theoretic limits of learning of the causal features in a linear model

Tahmasebi, B. ; Sharif University of Technology | 2018

  1. Type of Document: Article
  2. DOI: 10.1109/IWCIT.2018.8405042
  3. Publisher: Institute of Electrical and Electronics Engineers Inc., 2018
  4. Abstract:
  5. In this paper, we study the problem of causal feature detection in a linear model. We consider a dataset of N samples, each represented by a sequence of G binary features. Associated with each sample is a binary label. The labels are assumed to be related to a latent subset of the features, called the causal features, via a linear function; more precisely, each label is a noisy observation of a linear function of the causal features. We assume that the number of causal features is bounded by L, where L is a given positive integer. Our objective is to detect the set of causal features. In the limit of the parameters N, G, and L, we observe a threshold effect at Gh(L/G)/N, where h(.) is the binary entropy function. Hence, we define the rate of the causal feature detection problem as Gh(L/G)/N and characterize the capacity using an achievable scheme and a matching converse. © 2018 IEEE
  6. Keywords:
  7. Information theory ; Binary features ; Entropy function ; Features detections ; Information-theoretic limits ; Linear functions ; Noisy observations ; Positive integers ; Threshold effect ; Feature extraction
  8. Source: 2018 Iran Workshop on Communication and Information Theory, IWCIT 2018, 25 April 2018 through 26 April 2018 ; 2018 , Pages 1-6 ; ISBN 9781538641491
  9. URL: https://ieeexplore.ieee.org/document/8405042
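The model in the abstract can be sketched in a few lines of code. The following is a minimal illustrative simulation, not the paper's scheme: the parameter values, the integer weights, and the modulo-2 noise model are assumptions made here for concreteness, while the rate expression Gh(L/G)/N follows the abstract's definition.

```python
import numpy as np

# Illustrative simulation of the linear causal-feature model (parameter
# values, weights, and noise model are assumptions, not from the paper):
# N samples of G binary features; labels come from a noisy linear
# function of a latent subset of at most L causal features.
rng = np.random.default_rng(0)
N, G, L = 200, 50, 5                           # samples, features, causal bound

X = rng.integers(0, 2, size=(N, G))            # binary feature matrix
causal = rng.choice(G, size=L, replace=False)  # latent causal subset
w = rng.integers(1, 4, size=L)                 # assumed integer weights
noise = rng.integers(0, 2, size=N)             # binary observation noise
y = (X[:, causal] @ w + noise) % 2             # binary labels

def binary_entropy(p):
    """h(p) = -p log2(p) - (1-p) log2(1-p), with h(0) = h(1) = 0."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Rate of the detection problem, as defined in the abstract: G h(L/G) / N.
rate = G * binary_entropy(L / G) / N
print(f"rate = {rate:.4f}")
```

The threshold effect means that, as N, G, and L grow, reliable detection of the causal subset is possible when this rate is below capacity and impossible above it.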