
One-shot learning from demonstration approach toward a reciprocal sign language-based HRI

Hosseini, S. R.; Sharif University of Technology | 2021

  1. Type of Document: Article
  2. DOI: 10.1007/s12369-021-00818-1
  3. Publisher: Springer Science and Business Media B.V., 2021
  4. Abstract:
  5. This paper addresses the lack of proper Learning from Demonstration (LfD) architectures for Sign Language-based Human–Robot Interactions to make them more extensible. The paper proposes and implements a Learning from Demonstration structure for teaching new Iranian Sign Language signs to a teacher assistant social robot, RASA. This LfD architecture uses one-shot learning techniques and a Convolutional Neural Network to learn to recognize and imitate a sign after seeing its demonstration (captured with a data glove) just once. Despite using a small, low-diversity data set (~500 signs in 16 categories), the recognition module reached a promising 4-way accuracy of 70% on the test data and showed good potential for increasing the extensibility of the sign vocabulary in sign language-based human–robot interactions. The extensibility and promising results of the one-shot Learning from Demonstration technique in this study are the main achievements of applying such machine learning algorithms in social Human–Robot Interaction. © 2021, The Author(s), under exclusive licence to Springer Nature B.V. (A sketch of the 4-way one-shot evaluation protocol follows this record.)
  6. Keywords:
  7. Agricultural robots ; Convolutional neural networks ; Demonstrations ; Machine learning ; Network architecture ; Social robots ; Statistical tests ; Data glove ; Data set ; Learning from demonstration ; One-shot learning ; Proper learning ; Robot interactions ; Sign language ; Test data ; Learning algorithms
  8. Source: International Journal of Social Robotics ; 2021 ; 1875-4791 (ISSN)
  9. URL: https://link.springer.com/article/10.1007/s12369-021-00818-1
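The abstract reports a 4-way one-shot accuracy of 70%. As a rough illustration of what that evaluation protocol involves, the sketch below runs repeated 4-way, one-shot trials with a nearest-embedding classifier over toy data. The embed function, the episode routine, and the random data are illustrative assumptions standing in for the paper's CNN and glove-recorded Iranian Sign Language signs; none of this is the authors' code.

```python
# Illustrative sketch only: a 4-way, one-shot recognition trial with a
# nearest-embedding classifier. The real system embeds data-glove recordings
# of Iranian Sign Language signs with a trained CNN; everything here is a
# placeholder (assumption, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    # Stand-in for the paper's CNN feature extractor: collapse a
    # (time steps x glove channels) recording into one feature vector.
    return x.mean(axis=0)

def one_shot_episode(signs_by_class, n_way=4):
    """One N-way, one-shot trial: one support example per sampled class,
    plus a held-out query from one randomly chosen target class."""
    classes = rng.choice(len(signs_by_class), size=n_way, replace=False)
    target = rng.integers(n_way)
    support, query = [], None
    for i, c in enumerate(classes):
        samples = rng.permutation(signs_by_class[c])
        support.append(embed(samples[0]))
        if i == target:
            query = embed(samples[1])  # different recording of the same sign
    # Classify the query as the nearest support embedding.
    dists = [np.linalg.norm(query - s) for s in support]
    return int(np.argmin(dists) == target)

# Toy stand-in for the ~500-sign, 16-category data set: 16 classes,
# ~30 random "recordings" each of shape (20 time steps, 11 glove channels).
data = [rng.normal(c, 1.0, size=(30, 20, 11)) for c in range(16)]
accuracy = np.mean([one_shot_episode(data) for _ in range(1000)])
print(f"4-way one-shot accuracy on toy data: {accuracy:.2f}")
```

Averaging the per-episode outcomes over many such trials is how a single figure like the 70% reported in the abstract is typically obtained for one-shot recognition.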