
Deep relative attributes

Souri, Y. ; Sharif University of Technology | 2017

  1. Type of Document: Article
  2. DOI: 10.1007/978-3-319-54193-8_8
  3. Publisher: Springer Verlag, 2017
  4. Abstract: Visual attributes are an effective means of describing images or scenes in a way that both humans and computers understand. Relative attributes were introduced to establish correspondences between images and to compare the strength of each property across images. Since their introduction, however, hand-crafted and engineered features have been used to learn increasingly complex models for relative attribute prediction, which limits the applicability of those methods in more realistic settings. We introduce a deep neural network architecture for the task of relative attribute prediction. A convolutional neural network (ConvNet) is adopted to learn the features, with an additional layer (the ranking layer) that learns to rank the images based on these features. We adopt an appropriate ranking loss to train the whole network in an end-to-end fashion. Our proposed method outperforms the baseline and state-of-the-art methods in relative attribute prediction on various coarse- and fine-grained datasets. Our qualitative results, along with visualizations of the saliency maps, show that the network learns effective features for each specific attribute. Source code of the proposed method is available at https://github.com/yassersouri/ghiaseddin. © Springer International Publishing AG 2017. (An illustrative sketch of the described ranking setup follows this record.)
  5. Keywords: Deep neural networks ; Network architecture ; Neural networks ; Complex model ; Convolutional neural network ; End-to-end ; Fine-grained ; Saliency map ; Source codes ; State-of-the-art methods ; Visual attributes ; Computer vision
  6. Source: 13th Asian Conference on Computer Vision, ACCV 2016, 20 November 2016 through 24 November 2016; Volume 10115 LNCS, 2017, Pages 118-133; 03029743 (ISSN); 9783319541921 (ISBN)
  7. URL: https://link.springer.com/chapter/10.1007%2F978-3-319-54193-8_8
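
The abstract describes a ConvNet feature extractor followed by a ranking layer that scores each image, trained end to end with a pairwise ranking loss. The sketch below is an independent illustration of that setup, not the authors' released code (which lives at https://github.com/yassersouri/ghiaseddin): the ResNet-18 backbone, layer sizes, and the logistic (RankNet-style) ranking loss are assumptions chosen for brevity, not the paper's exact configuration.

```python
# Hypothetical sketch of a relative-attribute ranker: shared ConvNet features,
# a linear "ranking layer" producing a scalar attribute strength, and a
# pairwise logistic ranking loss on the score difference of an image pair.
import torch
import torch.nn as nn
import torchvision.models as models


class RelativeAttributeRanker(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed backbone, not the paper's
        backbone.fc = nn.Identity()               # keep the 512-d feature vector
        self.features = backbone
        self.ranking_layer = nn.Linear(512, 1)    # scalar attribute strength

    def score(self, x):
        return self.ranking_layer(self.features(x)).squeeze(-1)

    def forward(self, img_a, img_b):
        # Siamese-style: both images pass through the same weights;
        # the output is the difference of their attribute scores.
        return self.score(img_a) - self.score(img_b)


def pairwise_ranking_loss(score_diff, target):
    # target = 1.0 if image A shows the attribute more strongly than image B, else 0.0.
    return nn.functional.binary_cross_entropy_with_logits(score_diff, target)


if __name__ == "__main__":
    model = RelativeAttributeRanker()
    img_a = torch.randn(4, 3, 224, 224)
    img_b = torch.randn(4, 3, 224, 224)
    target = torch.tensor([1.0, 0.0, 1.0, 1.0])
    loss = pairwise_ranking_loss(model(img_a, img_b), target)
    loss.backward()  # gradients flow through features and ranking layer end to end
    print(float(loss))
```

Because the ranking layer is trained jointly with the feature extractor, the backpropagated ranking loss shapes the features themselves, which is the end-to-end property the abstract emphasizes over methods built on hand-crafted features.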