1 Introduction to Emotion Recognition
1.1 Basics of Pattern Recognition
1.2 Emotion Detection as a Pattern Recognition Problem
1.3.1 Facial Expression–Based Features
1.3.3 EEG Features Used for Emotion Recognition
1.3.4 Gesture- and Posture-Based Emotional Features
1.3.5 Multimodal Features
1.4 Feature Reduction Techniques
1.4.1 Principal Component Analysis
1.4.2 Independent Component Analysis
1.4.3 Evolutionary Approach to Nonlinear Feature Reduction
1.5 Emotion Classification
1.5.3 Hidden Markov Model-Based Classifiers
1.5.4 k-Nearest Neighbor Algorithm
1.5.5 Naïve Bayes Classifier
1.6 Multimodal Emotion Recognition
1.7 Stimulus Generation for Emotion Arousal
1.8 Validation Techniques
1.8.1 Performance Metrics for Emotion Classification
2 Exploiting Dynamic Dependencies Among Action Units for Spontaneous Facial Action Recognition
2.3 Modeling the Semantic and Dynamic Relationships Among AUs With a DBN
2.3.1 A DBN for Modeling Dynamic Dependencies Among AUs
2.3.2 Constructing the Initial DBN
2.3.4 AU Recognition Through DBN Inference
2.4.1 Facial Action Unit Databases
2.4.2 Evaluation on the Cohn–Kanade Database
2.4.3 Evaluation on Spontaneous Facial Expression Database
3 Facial Expressions: A Cross-Cultural Study
3.2 Extraction of Facial Regions and Ekman’s Action Units
3.2.1 Computation of Optical Flow Vector Representing Muscle Movement
3.2.2 Computation of Region of Interest
3.2.3 Computation of Feature Vectors Within ROI
3.2.4 Facial Deformation and Ekman’s Action Units
3.3 Cultural Variation in Occurrence of Different AUs
3.4 Classification Performance Considering Cultural Variability
4 A Subject-Dependent Facial Expression Recognition System
4.2.3 Facial Feature Extraction
4.2.5 Facial Expression Recognition
4.3.1 Parameter Determination of the RBFNN
4.3.2 Comparison of Facial Features
4.3.3 Comparison of Face Recognition Using “Inner Face” and Full Face
4.3.4 Comparison of Subject-Dependent and Subject-Independent Facial Expression Recognition Systems
4.3.5 Comparison with Other Approaches
5 Facial Expression Recognition Using Independent Component Features and Hidden Markov Model
5.2.1 Expression Image Preprocessing
5.2.3 Codebook and Code Generation
5.2.4 Expression Modeling and Training Using HMM
6 Feature Selection for Facial Expression Based on Rough Set Theory
6.2 Feature Selection for Emotion Recognition Based on Rough Set Theory
6.2.1 Basic Concepts of Rough Set Theory
6.2.2 Feature Selection Based on Rough Set and Domain-Oriented Data-Driven Data Mining Theories
6.2.3 Attribute Reduction for Emotion Recognition
6.3 Experiment Results and Discussion
6.3.1 Experiment Condition
6.3.2 Experiments for Feature Selection Method for Emotion Recognition
6.3.3 Experiments for the Features Concerning Mouth for Emotion Recognition
7 Emotion Recognition from Facial Expressions Using Type-2 Fuzzy Sets
7.2 Preliminaries on Type-2 Fuzzy Sets
7.3 Uncertainty Management in Fuzzy-Space for Emotion Recognition
7.3.1 Principles Used in the IT2FS Approach
7.3.2 Principles Used in the GT2FS Approach
7.4 Fuzzy Type-2 Membership Evaluation
7.5.2 Creating the Type-2 Fuzzy Face-Space
7.5.3 Emotion Recognition of an Unknown Facial Expression
7.6.3 The Confusion Matrix-Based RMS Error
8 Emotion Recognition from Non-frontal Facial Images
8.2 A Brief Review of Automatic Emotional Expression Recognition
8.2.1 Framework of Automatic Facial Emotion Recognition System
8.2.2 Extraction of Geometric Features
8.2.3 Extraction of Appearance Features
8.3 Databases for Non-Frontal Facial Emotion Recognition
8.3.3 CMU Multi-PIE Database
8.3.4 Bosphorus 3D Database
8.4 Recent Advances of Emotion Recognition from Non-Frontal Facial Images
8.4.1 Emotion Recognition from 3D Facial Models
8.4.2 Emotion Recognition from Non-frontal 2D Facial Images
8.5 Discussions and Conclusions
9 Maximum a Posteriori-Based Fusion Method for Speech Emotion Recognition
9.2 Acoustic Feature Extraction for Emotion Recognition
9.3 Proposed MAP-Based Fusion Method
9.3.3 Addressing the Small Training Dataset Problem—Calculation of f_{c|CL}(c_r)
9.3.4 Training and Testing Procedure
9.4.2 Experiment Description
9.4.3 Results and Discussion
10 Emotion Recognition in Naturalistic Speech and Language—A Survey
10.2 Tasks and Applications
10.2.1 Use-Cases for Automatic Emotion Recognition from Speech and Language
10.2.3 Modeling and Annotation: Categories versus Dimensions
10.3 Implementation and Evaluation
10.3.1 Feature Extraction
10.3.2 Feature and Instance Selection
10.3.3 Classification and Learning
10.3.4 Partitioning and Evaluation
10.3.5 Research Toolkits and Open-Source Software
10.4.1 Non-prototypicality, Reliability, and Class Sparsity
10.4.3 Real-Time Processing
10.4.4 Acoustic Environments: Noise and Reverberation
10.5 Conclusion and Outlook
11 EEG-Based Emotion Recognition Using Advanced Signal Processing Techniques
11.2 Brain Activity and Emotions
11.3 EEG-ER Systems: An Overview
11.5 Advanced Signal Processing in EEG-ER
11.6 Concluding Remarks and Future Directions
12 Frequency Band Localization on Multiple Physiological Signals for Human Emotion Classification Using DWT
12.3 Research Methodology
12.3.1 Physiological Signals Acquisition
12.3.2 Preprocessing and Normalization
12.3.3 Feature Extraction
12.3.4 Emotion Classification
12.4 Experimental Results and Discussions
13 Toward Affective Brain–Computer Interface: Fundamentals and Analysis of EEG-Based Emotion Classification
13.1.1 Brain–Computer Interface
13.1.2 EEG Dynamics Associated with Emotion
13.1.3 Current Research in EEG-Based Emotion Classification
13.2 Materials and Methods
13.2.2 EEG Feature Extraction
13.2.3 EEG Feature Selection
13.2.4 EEG Feature Classification
13.3 Results and Discussion
13.3.1 Superiority of Differential Power Asymmetry
13.3.2 Gender Independence in Differential Power Asymmetry
13.3.3 Channel Reduction from Differential Power Asymmetry
13.3.4 Generalization of Differential Power Asymmetry
13.5 Issues and Challenges Toward ABCIs
13.5.1 Directions for Improving Estimation Performance
13.5.2 Online System Implementation
14 Bodily Expression for Automatic Affect Recognition
14.2 Background and Related Work
14.2.1 Body as an Autonomous Channel for Affect Perception and Analysis
14.2.2 Body as an Additional Channel for Affect Perception and Analysis
14.2.3 Bodily Expression Data and Annotation
14.3 Creating a Database of Facial and Bodily Expressions: The FABO Database
14.4 Automatic Recognition of Affect from Bodily Expressions
14.4.1 Body as an Autonomous Channel for Affect Analysis
14.4.2 Body as an Additional Channel for Affect Analysis
14.5 Automatic Recognition of Bodily Expression Temporal Dynamics
14.5.1 Feature Extraction
14.5.2 Feature Representation and Combination
14.6 Discussion and Outlook
15 Building a Robust System for Multimodal Emotion Recognition
15.3 The Callas Expressivity Corpus
15.3.1 Segmentation of Data
15.4.1 Classification Model
15.4.2 Feature Extraction
15.4.6 Recognizing Missing Data
15.5 Multisensor Data Fusion
15.5.1 Feature-Level Fusion
15.5.2 Ensemble-Based Systems and Decision-Level Fusion
15.6.4 Contradictory Cues
15.7 Online Recognition System
15.7.1 Social Signal Interpretation
15.7.2 Synchronized Data Recording and Annotation
15.7.3 Feature Extraction and Model Training
15.7.4 Online Classification
16 Semantic Audiovisual Data Fusion for Automatic Emotion Recognition
16.3 Data Set Preparation
16.4.1 Classification Model
16.4.2 Emotion Estimation from Speech
17 A Multilevel Fusion Approach for Audiovisual Emotion Recognition
17.2 Motivation and Background
17.3 Facial Expression Quantification
17.4.2 Facial Deformation Features
17.4.3 Marker-Based Audiovisual Features
17.4.4 Expression Classification and Multilevel Fusion
17.5 Experimental Results and Discussion
17.5.1 Facial Expression Quantification
17.5.2 Facial Expression Classification Using SVDF and VDF Features
17.5.3 Audiovisual Fusion Experiments
18 From a Discrete Perspective of Emotions to Continuous, Dynamic, and Multimodal Affect Sensing
18.2 A Novel Method for Discrete Emotional Classification of Facial Images
18.2.1 Selection and Extraction of Facial Inputs
18.2.2 Classifiers Selection and Combination
18.3 A 2D Emotional Space for Continuous and Dynamic Facial Affect Sensing
18.3.1 Facial Expressions Mapping to the Whissell Affective Space
18.3.2 From Still Images to Video Sequences through 2D Emotional Kinematics Modeling
18.4 Expansion to Multimodal Affect Sensing
18.4.1 Step 1: 2D Emotional Mapping to the Whissell Space
18.4.2 Step 2: Temporal Fusion of Individual Modalities to Obtain a Continuous 2D Emotional Path
18.4.3 Step 3: “Emotional Kinematics” Path Filtering
18.5 Building Tools That Care
18.5.1 T-EDUCO: A T-Learning Tutoring Tool
18.5.2 Multimodal Fusion Application to Instant Messaging
18.6 Concluding Remarks and Future Work
19 Audiovisual Emotion Recognition Using Semi-Coupled Hidden Markov Model with State-Based Alignment Strategy
19.2.1 Facial Feature Extraction
19.2.2 Prosodic Feature Extraction
19.3 Semi-Coupled Hidden Markov Model
19.3.2 State-Based Bimodal Alignment Strategy
19.4.2 Experimental Results
20 Emotion Recognition in the Car Industry
20.2 An Overview of Applications for the Car Industry
20.3 Modality-Based Categorization
20.3.1 Video-Image-Based Emotion Recognition
20.3.2 Speech-Based Emotion Recognition
20.3.3 Biosignal-Based Emotion Recognition
20.3.4 Multimodal-Based Emotion Recognition
20.4 Emotion-Based Categorization
20.4.3 Confusion and Nervousness
20.6 Open Issues and Future Steps