Simultaneous recognition of facial expression and identity via sparse representation
, Article 2014 IEEE Winter Conference on Applications of Computer Vision, WACV 2014 ; 2014 , Pages 1066-1073 ; ISBN: 9781479949854 ; Fatemizadeh, E ; Mahoor, M. H ; Sharif University of Technology
Abstract
Automatic recognition of facial expression and facial identity from visual data are two challenging problems that are tied together. In the past decade, researchers have mostly tried to solve these two problems separately, aiming at face identification systems that are expression-independent and facial expression recognition systems that are person-independent. This paper presents a new framework using sparse representation for simultaneous recognition of facial expression and identity. Our framework is based on the assumption that any facial appearance is a sparse combination of identities and expressions (i.e., one identity and one expression). Our experimental results using the CK+...
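The sparse-combination assumption above leads naturally to sparse-representation classification (SRC): represent a test face over a dictionary of training samples and pick the class whose coefficients reconstruct it with the smallest residual. A minimal sketch, assuming a toy random unit-norm dictionary and a basic ISTA solver for the l1 problem (the paper's actual dictionary construction and solver may differ):

```python
import numpy as np

def ista(D, y, lam=0.01, n_iter=500):
    """Approximate argmin_x 0.5*||y - D@x||^2 + lam*||x||_1 by
    iterative soft-thresholding (a basic l1 solver)."""
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - step * (D.T @ (D @ x - y))    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x

def src_classify(D, labels, y):
    """SRC rule: keep only one class's coefficients at a time and choose the
    class whose partial reconstruction leaves the smallest residual."""
    x = ista(D, y)
    residuals = {}
    for c in set(labels):
        xc = np.where(np.array(labels) == c, x, 0.0)
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)

# Toy dictionary: 4 unit-norm training "faces" as columns, two per class.
rng = np.random.default_rng(0)
D = rng.normal(size=(20, 4))
D /= np.linalg.norm(D, axis=0)
labels = [0, 0, 1, 1]
y = D[:, 2]                    # test sample identical to a class-1 atom
pred = src_classify(D, labels, y)
```

With the test sample equal to a class-1 atom, almost all coefficient mass lands on that atom, so the class-1 residual is near zero and SRC returns class 1.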
Human Facial Activity Recognition using RGBD Videos
, M.Sc. Thesis Sharif University of Technology ; Jamzad, Mansoor (Supervisor)
Abstract
Human facial activity recognition is one of the endeavors to improve human-computer interaction. Recognizing excitement and emotion on the human face by machine, and producing a corresponding reaction, is essential for man-machine interaction. The purpose of this project is recognizing activities such as speaking, eating, laughing, agreeing, and disagreeing, which are more complex than the usual emotions, such as fear and happiness, contained in common datasets. Accordingly, a dataset covering the above-mentioned five activities was collected, and an appropriate feature vector for analyzing these facial activities was implemented. Distances between the interest points located on the face were used as parameters in...
Facial Expression Recognition using Kinect Sensor and Alice Humanoid Robot Real-time Facial Expression Imitation
, M.Sc. Thesis Sharif University of Technology ; Meghdari, Ali (Supervisor) ; Bagheri Shouraki, Saeed (Supervisor)
Abstract
In recent years, the development of new technologies in the field of cognitive science has had a major effect on people's social lifestyles. Facial expression imitation with applications in the design of human-robot interaction (HRI) systems is an active area of research. In this study, we propose an approach using a humanoid social robot, Alice, for real-time imitation of human facial expressions. The facial keypoints of the user are extracted by using the Kinect sensor together with customized SDK 2.0 code. A Kinect output array is collected for each expression; the training dataset is then created from these arrays. An accurate artificial neural network (ANN), which has a...
Designing a Robot Head for Studying Social Interaction with the Ability to Express Emotions Using a Projector
, M.Sc. Thesis Sharif University of Technology ; Meghdari, Ali (Supervisor) ; Shariati, Azadeh (Supervisor)
Abstract
The most crucial physical component in human-robot interaction is the head of the robot, which may have the potential to interact via representing facial expressions. There are several different types of robotic heads. This thesis presents the design process and realization of a retro-projected social robotic head, “Taban”. Taban is a cost-effective, portable robot with a lifelike robotic face that can produce different facial expressions and different 3D face animation avatars (varying in age, race, and gender) with the help of a rear projector in its head that projects animations onto a translucent 3D-printed mask. It has a neck system with two degrees of freedom, a camera in the...
PCA-based dictionary building for accurate facial expression recognition via sparse representation
, Article Journal of Visual Communication and Image Representation ; Vol. 25, issue. 5 , July , 2014 , pp. 1082-1092 ; ISSN: 10473203 ; Fatemizadeh, E ; Mahoor, M. H ; Sharif University of Technology
Abstract
Sparse representation is a new approach that has received significant attention for image classification and recognition. This paper presents a PCA-based dictionary building method for sparse representation and classification of universal facial expressions. In our method, expressive facial images of each subject are subtracted from a neutral facial image of the same subject. PCA is then applied to these difference images to model the variations within each class of facial expressions. The learned principal components are used as the atoms of the dictionary. In the classification step, a given test image is sparsely represented as a linear combination of the principal components of six basic...
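The dictionary-building step described above (neutral-subtracted difference images, per-class PCA, leading components as atoms) can be sketched with a plain SVD. The toy data shapes and the two class names are illustrative, not the paper's:

```python
import numpy as np

def build_pca_dictionary(neutral, expressive_by_class, n_atoms=2):
    """For each expression class: subtract each subject's neutral face from the
    expressive faces, run PCA on the difference images, and keep the leading
    principal components as dictionary atoms (columns)."""
    atoms, atom_labels = [], []
    for label, faces in expressive_by_class.items():
        diffs = faces - neutral                # (n_subjects, n_pixels) differences
        diffs = diffs - diffs.mean(axis=0)     # center before PCA
        _, _, vt = np.linalg.svd(diffs, full_matrices=False)
        atoms.append(vt[:n_atoms].T)           # right singular vectors = components
        atom_labels += [label] * n_atoms
    return np.hstack(atoms), atom_labels

# Toy data: 5 subjects x 64 "pixels", two expression classes.
rng = np.random.default_rng(1)
neutral = rng.normal(size=(5, 64))
expressive = {"happy": neutral + rng.normal(size=(5, 64)),
              "angry": neutral + rng.normal(size=(5, 64))}
D, D_labels = build_pca_dictionary(neutral, expressive, n_atoms=2)
```

Each atom is a unit-norm principal direction of one class's expression variation, so the resulting dictionary is naturally block-structured by class.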
Facial expression recognition using geometric normalization and appearance representation
, Article Iranian Conference on Machine Vision and Image Processing, MVIP ; 2013 , Pages 159-163 ; 21666776 (ISSN) ; 9781467361842 (ISBN) ; Raie, A. A ; Mohammadi, M. R ; Sharif University of Technology
IEEE Computer Society
2013
Abstract
Facial expression recognition is a challenging and interesting problem in computer vision and pattern recognition. Geometric variability in both the expressed emotion and the neutral face is a fundamental challenge in facial expression recognition. This variability not only directly affects geometric facial expression recognition methods, but is also a critical problem for appearance-based methods. To overcome this problem, this paper presents an approach that eliminates geometric variability in emotion expression, so that appearance features can be used accurately for facial expression recognition. Therefore, a fixed geometric model is used for geometric normalization of facial images. This model...
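Normalization against a fixed geometric model is commonly implemented by fitting an affine transform from the detected landmarks to the model and warping with it. A minimal landmark-level sketch in NumPy; the 4-point template and the synthetic "detected" landmarks are hypothetical stand-ins for the paper's model:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map taking landmark set src onto dst:
    solves dst ~= [src | 1] @ P for the 3x2 parameter matrix P."""
    X = np.hstack([src, np.ones((src.shape[0], 1))])
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return P

def apply_affine(P, pts):
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ P

# Fixed template (hypothetical geometric model) and a rotated, scaled,
# shifted copy standing in for landmarks detected on a real face.
template = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
detected = 1.5 * template @ R.T + np.array([2.0, -1.0])

P = fit_affine(detected, template)   # map detected landmarks back onto the model
normalized = apply_affine(P, detected)
```

In a full pipeline the same transform P would be applied to the whole image (e.g. an inverse warp), so that appearance features are sampled in the model's canonical frame.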
Facial Expression Recognition Using Soft Computing
, M.Sc. Thesis Sharif University of Technology ; Manzuri, Mohammad Taghi (Supervisor)
Abstract
Human face-to-face communication is an ideal model for designing a multimodal/media human-computer interface (HCI). Recent advances in image analysis and pattern recognition open up the possibility of automatic detection and classification of emotional and conversational facial signals. Automating facial expression analysis could bring facial expressions into man-machine interaction as a new modality and make the interaction tighter and more efficient. In this research, an accurate real-time sequence-based system for representation, recognition, and analysis of low-intensity facial expressions and facial action units (FAUs) is presented. The feature extraction is done using facial feature...
Facial Expression Recognition Using a Mobile Camera
, M.Sc. Thesis Sharif University of Technology ; Jamzad, Mansour (Supervisor)
Abstract
Detecting emotions and facial expressions, as a means of nonverbal communication between humans and machines, has attracted a great deal of attention in recent decades, with the developments in artificial intelligence and the recognized ties between robotics and future human life. The human face plays a key role in communication, and processing it in a video source for mood recognition can be employed in different applications, such as improving human-machine communication and analyzing emotions in different circumstances. Facial expression detection is useful in understanding not only momentary emotions, but also mental activities, social interactions, and psychological...
Audio-visual speech recognition techniques in augmented reality environments
, Article Visual Computer ; Vol. 30, issue. 3 , March , 2014 , pp. 245-257 ; ISSN: 01782789 ; Ghorshi, S ; Mortazavi, M ; Sharif University of Technology
Abstract
Many recent studies show that Augmented Reality (AR) and Automatic Speech Recognition (ASR) technologies can be used to help people with disabilities. Many of these studies have been performed only in their specialized field. Audio-Visual Speech Recognition (AVSR) is one of the advances in ASR technology that combines audio, video, and facial expressions to capture a narrator's voice. In this paper, we combine AR and AVSR technologies to make a new system to help deaf and hard-of-hearing people. Our proposed system can take a narrator's speech instantly and convert it into a readable text and show the text directly on an AR display. Therefore, in this system, deaf people can read the...
Genetic algorithm-optimised structure of convolutional neural network for face recognition applications
, Article IET Computer Vision ; Volume 10, Issue 6 , 2016 , Pages 559-566 ; 17519632 (ISSN) ; Pooyan, M ; Manzuri Shalmani, M. T ; Sharif University of Technology
Institution of Engineering and Technology
2016
Abstract
Proposing a proper method for face recognition is still a challenging subject in biometric and computer vision applications. Although some reliable systems were introduced under relatively controlled conditions, their recognition rate is not satisfactory in the general settings. This is especially true when there are variations in pose, illumination, and facial expression. To alleviate these problems, a hybrid face recognition system is proposed which benefits from the superiority of both convolutional neural network (CNN) and support vector machine (SVM). To this end, first a genetic algorithm is employed to find the optimum structure of CNN. Then, the performance of the system is improved...
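The genetic-algorithm step can be sketched as a standard generational GA over discrete hyperparameter genomes. Everything below is an illustrative stand-in: the gene choices mimic per-layer filter counts, and the fitness is a cheap mock with a known optimum, since the real fitness (CNN validation accuracy) is far too expensive for a snippet:

```python
import random

# Hypothetical search space standing in for CNN structure parameters
# (e.g. filter counts per layer); TARGET plays the role of the unknown optimum.
GENE_CHOICES = list(range(8, 129, 8))
TARGET = [64, 96, 32]

def fitness(genome):
    # Higher is better; peaks at TARGET (mock for validation accuracy).
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def evolve(pop_size=30, n_gen=60, p_mut=0.2, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(GENE_CHOICES) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < p_mut:            # random-reset mutation
                child[rng.randrange(len(child))] = rng.choice(GENE_CHOICES)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Swapping `fitness` for "train this CNN structure briefly and return validation accuracy" turns the same loop into structure search of the kind the paper describes.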
The real-time facial imitation by a social humanoid robot
, Article 4th RSI International Conference on Robotics and Mechatronics, ICRoM 2016, 24 March 2017 ; 2017 , Pages 524-529 ; 9781509032228 (ISBN) ; Bagheri Shouraki, S ; Siamy, A ; Shariati, A ; Sharif University of Technology
Institute of Electrical and Electronics Engineers Inc
2017
Abstract
Facial expression imitation with applications in the design of human-robot interaction (HRI) systems is an active area of research. In this study, we propose an approach for real-time imitation of human facial expressions by the humanoid social robot 'Alice'. An artificial neural network (ANN) and a Kinect sensor are used for recognizing and classifying facial expressions such as happiness, sadness, fear, anger, and surprise, with the Alice humanoid robot imitating the comprehended expressions. Results and experiments demonstrate the effectiveness of the approach. © 2016 IEEE
Human–robot facial expression reciprocal interaction platform: case studies on children with autism
, Article International Journal of Social Robotics ; Volume 10, Issue 2 , April , 2018 , Pages 179-198 ; 18754791 (ISSN) ; Taheri, A ; Alemi, M ; Meghdari, A ; Sharif University of Technology
Springer Netherlands
2018
Abstract
Reciprocal interaction and facial expression are some of the most interesting topics in the fields of social and cognitive robotics. On the other hand, children with autism show a particular interest toward robots, and facial expression recognition can improve these children’s social interaction abilities in real life. In this research, a robotic platform has been developed for reciprocal interaction consisting of two main phases, namely Non-structured and Structured interaction modes. In the Non-structured interaction mode, a vision system recognizes the facial expressions of the user through a fuzzy clustering method. The interaction decision-making unit is combined with a fuzzy finite...
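Fuzzy clustering of facial features typically means a fuzzy c-means variant: each feature vector receives a soft membership in every expression cluster rather than a hard label. A minimal fuzzy c-means sketch on toy 2-D features (the paper's actual features, cluster count, and fuzzifier are not specified here):

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Basic fuzzy c-means: alternate the two standard updates for the soft
    membership matrix U (n_samples x n_clusters) and the cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # rows are probability-like
    for _ in range(n_iter):
        W = U ** m                             # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))     # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated blobs: memberships should end up close to crisp.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(5.0, 0.1, (20, 2))])
U, centers = fuzzy_c_means(X)
```

The graded memberships are what make such a clustering a natural front end for a fuzzy decision-making unit: borderline expressions carry their ambiguity forward instead of being forced into one class.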
Design and Implementation of an Emotion Recognition and Expression System and Evaluation of Social Robots’ Effectiveness in Speech Therapy Interventions
, Ph.D. Dissertation Sharif University of Technology ; Meghdari, Ali (Supervisor) ; Taheri, Alireza (Supervisor) ; Alemi, Minoo (Co-Supervisor)
Abstract
Employing social robots in interactions with children for educational and healthcare objectives could enhance the efficiency of interventions and boost the children’s cognitive and affective outcomes by increasing engagement and providing motivation. This dissertation investigates three main subtopics to determine the efficacy of utilizing social robots in speech therapy sessions in interactions with children suffering from speech and language disorders. We hypothesize that interacting with social robots acting as therapists’ assistants in speech therapy interventions contributes to the formation of language-based communications and improves the individuals’ language skills. The first...
Human-Robot Facial Expression Interaction Using Kinect and Humanoid
, M.Sc. Thesis Sharif University of Technology ; Meghdari, Ali (Supervisor) ; Alemi, Minoo (Co-Supervisor)
Abstract
From the creation of the first robots, researchers have been fascinated by the possibility of interaction between a robot and its environment, by the possibility of robots interacting with each other and with humans. The common, underlying assumption is that humans prefer to interact with machines in the same way that they interact with other people. In this work, an assistant robot is developed based on a commercial platform, known as Alice R-50 (with the Iranian name of Mina). Alice is designed specifically for human-robot social interaction and has been used widely for studies on developmental and social robotics. It is used to improve and encourage the development of communication and...
Fuzzy local binary patterns: A comparison between Min-Max and Dot-Sum operators in the application of facial expression recognition
, Article Iranian Conference on Machine Vision and Image Processing, MVIP, Zanjan ; 2013 , Pages 315-319 ; 21666776 (ISSN) ; 9781467361842 (ISBN) ; Fatemizadeh, E ; Sharif University of Technology
Abstract
The Local Binary Patterns (LBP) feature extraction method is a theoretically and computationally simple and efficient methodology for texture analysis. The LBP operator is used in many applications, such as facial expression recognition and face recognition. The original LBP is based on hard thresholding of the neighborhood of each pixel, which makes the texture representation sensitive to noise. In addition, LBP cannot distinguish between a strong and a weak pattern. In order to enhance the LBP approach, Fuzzy Local Binary Patterns (FLBP) are proposed. In FLBP, a neighborhood is not represented by only one code; rather, it is represented by all possible codes with different membership degrees. In FLBP, any...
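The contrast between the two operators shows up already on a single 3x3 patch: hard LBP emits exactly one binary code, while the fuzzy variant grades each neighbor's "brighter than the center" decision. A sketch assuming a linear membership function with a hypothetical fuzzification width T (the paper then compares Min-Max versus Dot-Sum aggregation on top of such memberships, which is not reproduced here):

```python
import numpy as np

def lbp_code(patch):
    """Hard LBP code of a 3x3 patch: bit i is 1 when the i-th neighbor
    (clockwise from top-left) is >= the center pixel."""
    center = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, n in enumerate(neighbors) if n >= center)

def fuzzy_memberships(patch, T=4.0):
    """Fuzzy LBP replaces the hard threshold with a membership in [0, 1]:
    0 when the neighbor is more than T below the center, 1 when more than
    T above, linear in between (one common membership choice)."""
    center = patch[1, 1]
    neighbors = np.array([patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                          patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]],
                         dtype=float)
    return np.clip((neighbors - center + T) / (2.0 * T), 0.0, 1.0)

peak = np.array([[0, 0, 0], [0, 10, 0], [0, 0, 0]])  # bright center, dark ring
code = lbp_code(peak)              # single crisp code
m = fuzzy_memberships(peak)        # graded per-neighbor memberships
flat_code = lbp_code(np.ones((3, 3)))
```

From the memberships, every possible code receives a degree (e.g. by multiplying per-bit memberships), which is exactly the "all codes with different degrees" representation the abstract describes.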
Recognizing combinations of facial action units with different intensity using a mixture of hidden Markov models and neural network
, Article Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 7 April 2010 through 9 April 2010 ; Volume 5997 LNCS , April , 2010 , Pages 304-313 ; 03029743 (ISSN) ; 9783642121265 (ISBN) ; Manzuri Shalmani, M. T ; Kiapour, M. H ; Kiaei, A. A ; Sharif University of Technology
2010
Abstract
The Facial Action Coding System consists of 44 action units (AUs) and more than 7000 combinations. Hidden Markov model (HMM) classifiers have been used successfully to recognize facial action units (AUs) and expressions due to their ability to deal with AU dynamics. However, a separate HMM is necessary for each single AU and each AU combination. Since AU combinations number in the thousands, a more efficient method is needed. In this paper, an accurate real-time sequence-based system for representation and recognition of facial AUs is presented. Our system has the following characteristics: 1) employing a mixture of HMMs and a neural network, we develop a novel accurate classifier, which can...
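The HMM side of such a classifier scores a feature sequence under one model per class with the forward algorithm and picks the best-scoring class. A minimal sketch with toy one-state models; the paper's mixture-with-neural-network stage, and its actual models, are not reproduced here:

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """HMM forward algorithm: P(obs | model), with initial distribution pi,
    transition matrix A (A[i, j] = P(state j | state i)) and emission matrix
    B (B[i, k] = P(symbol k | state i))."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate then weight by emission
    return alpha.sum()

def classify_sequence(models, obs):
    """Score obs under one HMM per class, return the best-scoring class."""
    return max(models, key=lambda c: forward_likelihood(*models[c], obs))

# Degenerate one-state HMM: likelihood is just the product of emission probs.
pi = np.array([1.0]); A = np.array([[1.0]]); B = np.array([[0.5, 0.5]])
lik = forward_likelihood(pi, A, B, [0, 1, 0])   # 0.5 * 0.5 * 0.5

# Two toy "AU" models that prefer opposite observation symbols.
models = {"AU_up":   (np.array([1.0]), np.array([[1.0]]), np.array([[0.9, 0.1]])),
          "AU_down": (np.array([1.0]), np.array([[1.0]]), np.array([[0.1, 0.9]]))}
pred = classify_sequence(models, [0, 0, 0])
```

The scaling problem the abstract targets is visible here: one model per AU combination means thousands of such scoring passes, which is what motivates sharing per-AU HMMs and combining them with a neural network.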
Spontaneous human-robot emotional interaction through facial expressions
, Article 8th International Conference on Social Robotics, ICSR 2016, 1 November 2016 through 3 November 2016 ; Volume 9979 LNAI , 2016 , Pages 351-361 ; 03029743 (ISSN) ; 9783319474366 (ISBN) ; Alemi, M ; Ghorbandaei Pour, A ; Taheri, A ; Sharif University of Technology
Springer Verlag
2016
Abstract
One of the main issues in the field of social and cognitive robotics is the robot’s ability to recognize emotional states and the emotional interaction between robots and humans. Through effective emotional interaction, robots will be able to perform many tasks in human society. In this research, we have developed a robotic platform and a vision system to recognize the emotional state of the user through its facial expressions, which leads to a more realistic human-robot interaction (HRI). First, a number of features are extracted according to points detected by a vision system from the face of the user. Then, the emotional state of the user is analyzed with the help of these features. For the...
Investigating time-varying functional connectivity derived from the Jackknife Correlation method for distinguishing between emotions in fMRI data
, Article Cognitive Neurodynamics ; Volume 14, Issue 4 , 2020 , Pages 457-471 ; Farahani, N ; Fatemizadeh, E ; Motie Nasrabadi, A ; Sharif University of Technology
Springer
2020
Abstract
Investigating human brain activity during the expression of emotional states provides deep insight into complex cognitive functions and their neurological correlates inside the brain. To resemble brain function in the best manner, a complex and natural stimulus should be applied; moreover, the method used for data analysis should involve fewer assumptions, simplifications, and parameter adjustments. In this study, we examined a functional magnetic resonance imaging dataset obtained during an emotional audio-movie stimulus associated with human life. We used the Jackknife Correlation (JC) method to derive a representation of time-varying functional connectivity. We applied different binary measures...
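Jackknife Correlation turns an ordinary Pearson correlation into a time-resolved estimate by leaving out one time point at a time. A minimal two-signal sketch; the sign inversion follows one common convention (a time point whose removal lowers the correlation was supporting it), and the original formulation additionally standardizes the series, which is omitted here:

```python
import numpy as np

def jackknife_connectivity(x, y):
    """Time-resolved connectivity via Jackknife Correlation: for each time
    point t, Pearson-correlate the two series with t left out, then invert
    the sign so supportive time points score high (one common convention)."""
    T = len(x)
    jc = np.empty(T)
    for t in range(T):
        keep = np.arange(T) != t          # leave-one-out mask
        jc[t] = -np.corrcoef(x[keep], y[keep])[0, 1]
    return jc

# Two strongly coupled toy time series standing in for ROI signals.
rng = np.random.default_rng(2)
x = rng.normal(size=50)
y = x + 0.5 * rng.normal(size=50)
jc = jackknife_connectivity(x, y)
```

Because the estimate at each t reuses all other samples, JC needs no sliding-window length or other tuning parameter, which is the "fewer parameter adjustments" property the abstract emphasizes.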
Intensity estimation of spontaneous facial action units based on their sparsity properties
, Article IEEE Transactions on Cybernetics ; Volume 46, Issue 3 , 2016 , Pages 817-826 ; 21682267 (ISSN) ; Fatemizadeh, E ; Mahoor, M. H ; Sharif University of Technology
Institute of Electrical and Electronics Engineers Inc
2016
Abstract
Automatic measurement of spontaneous facial action units (AUs) defined by the Facial Action Coding System (FACS) is a challenging problem. The recent FACS user manual defines 33 AUs to describe different facial activities and expressions. In spontaneous facial expressions, only a subset of AUs is active at any given time. Given that AUs occur sparsely over time, we propose a novel method to detect the absence and presence of AUs and estimate their intensity levels via sparse representation (SR). We use robust principal component analysis to decompose expression from facial identity and then estimate the intensity of multiple AUs jointly using a regression model...