- Type of Document: Article
- DOI: 10.1007/978-3-319-54193-8_8
- Publisher: Springer Verlag, 2017
- Abstract:
- Visual attributes are a powerful means of describing images or scenes in a way that both humans and computers understand. Relative attributes were introduced to establish correspondences between images and to compare the strength of each property across them. Since their introduction, however, hand-crafted, engineered features have been used to learn increasingly complex models for relative attribute prediction, which limits the applicability of those methods in more realistic settings. We introduce a deep neural network architecture for the task of relative attribute prediction. A convolutional neural network (ConvNet) learns the features, with an additional layer (the ranking layer) that learns to rank the images based on these features. We adopt an appropriate ranking loss to train the whole network in an end-to-end fashion (an illustrative sketch of this setup follows this record). Our proposed method outperforms the baseline and state-of-the-art methods for relative attribute prediction on various coarse- and fine-grained datasets. Our qualitative results, along with visualizations of the saliency maps, show that the network learns effective features for each specific attribute. Source code of the proposed method is available at https://github.com/yassersouri/ghiaseddin. © Springer International Publishing AG 2017
- Keywords:
- Deep neural networks; Network architecture; Neural networks; Complex model; Convolutional neural network; End-to-end; Fine-grained; Saliency map; Source code; State-of-the-art methods; Visual attributes; Computer vision
- Source: 13th Asian Conference on Computer Vision (ACCV 2016), 20-24 November 2016; LNCS Volume 10115, 2017, Pages 118-133; ISSN 0302-9743; ISBN 9783319541921
- URL: https://link.springer.com/chapter/10.1007%2F978-3-319-54193-8_8
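The abstract describes a ConvNet backbone topped with a ranking layer that outputs a scalar attribute strength per image, trained end-to-end with a pairwise ranking loss. The sketch below is only an illustration of that general setup, not the authors' implementation (their released code is at the GitHub link above); the VGG-16 backbone, the RankNet-style sigmoid/cross-entropy loss, and all names here are assumptions made for the example.

```python
# Illustrative sketch: ConvNet features + a linear "ranking layer" producing a
# scalar attribute score, trained end-to-end with a pairwise ranking loss
# (sigmoid of score differences followed by binary cross-entropy).
import torch
import torch.nn as nn
import torchvision.models as models


class RelativeAttributeRanker(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.vgg16(weights=None)      # any ConvNet backbone could be used
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        # reuse the fully connected layers, dropping the 1000-way classifier head
        self.fc = nn.Sequential(*list(backbone.classifier.children())[:-1])
        self.ranking_layer = nn.Linear(4096, 1)    # scalar attribute strength

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.ranking_layer(self.fc(x)).squeeze(1)


def pairwise_ranking_loss(score_a, score_b, target):
    """target is 1.0 if image A shows the attribute more strongly than B,
    0.0 if B is stronger, and 0.5 for (roughly) equal pairs."""
    prob = torch.sigmoid(score_a - score_b)
    return nn.functional.binary_cross_entropy(prob, target)


# Toy usage with random tensors standing in for a batch of image pairs.
model = RelativeAttributeRanker()
img_a = torch.randn(2, 3, 224, 224)
img_b = torch.randn(2, 3, 224, 224)
target = torch.tensor([1.0, 0.0])
loss = pairwise_ranking_loss(model(img_a), model(img_b), target)
loss.backward()                                    # gradients flow through the whole network
```

Because the loss depends only on score differences, gradients reach both the ranking layer and the convolutional features, which is what "end-to-end" training of the ranking network means in this context.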