Automatic Speech Emotion Recognition Using Machine Learning
Abstract
Emotion detection from speech signals has been a research topic in human-machine interface applications for several years, and a variety of systems have been developed to discern emotions from speech. Theoretical definitions, categorizations, and modalities of emotion expression are also discussed.
To conduct this research, a speech emotion recognition (SER) framework based on various classifiers and feature extraction methods was developed. Mel-frequency cepstral coefficients (MFCC) and modulation spectral (MS) features are extracted from the speech signals and fed into the classifiers for training, and feature selection is applied to identify the most relevant feature subset (FS). The features extracted from the emotional speech samples that make up the database for the speech emotion recognition system include power, pitch, linear prediction cepstrum coefficients (LPCC), and mel-frequency cepstral coefficients (MFCC). The effectiveness of classification depends on the extracted features.
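As a rough sketch of this feature-extraction stage, the Python snippet below computes per-utterance MFCC, energy (power), and pitch statistics with the librosa library. The file path, sampling rate, and feature dimensions are placeholders rather than the settings used in the study, and the LPCC and modulation spectral features mentioned above would require additional code not shown here.

```python
import numpy as np
import librosa

# Hypothetical path to one emotional speech sample from a SER corpus.
WAV_PATH = "speech_sample.wav"

# Load the audio as 16 kHz mono (assumed sampling rate).
signal, sr = librosa.load(WAV_PATH, sr=16000)

# 13 mel-frequency cepstral coefficients per frame.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)

# Frame-level energy (power) and a simple pitch contour via pYIN.
energy = librosa.feature.rms(y=signal)
f0, _, _ = librosa.pyin(signal,
                        fmin=librosa.note_to_hz("C2"),
                        fmax=librosa.note_to_hz("C7"),
                        sr=sr)

# Summarise each feature over time (mean and standard deviation) to obtain
# one fixed-length vector per utterance for the classifiers.
feature_vector = np.concatenate([
    mfcc.mean(axis=1), mfcc.std(axis=1),
    energy.mean(axis=1), energy.std(axis=1),
    [np.nanmean(f0), np.nanstd(f0)],   # pYIN returns NaN for unvoiced frames
])
print(feature_vector.shape)
```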
Seven emotions are classified using a recurrent neural network (RNN) classifier. Its results are then compared with techniques commonly used for emotion recognition from spoken audio signals, such as multivariate linear regression (MLR) and support vector machines (SVM).
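The following is a minimal sketch of how such a comparison could be set up, using scikit-learn for the SVM baseline and a small Keras LSTM as the RNN. The feature matrix, labels, and network size are placeholders rather than the configuration used in the study, and the MLR baseline is omitted for brevity.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from tensorflow import keras

# Assumed inputs: X holds one feature vector per utterance (as sketched above),
# y holds integer labels for the seven emotion classes (0..6).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30)).astype("float32")   # placeholder data
y = rng.integers(0, 7, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Baseline: support vector machine with an RBF kernel.
svm = SVC(kernel="rbf", C=1.0)
svm.fit(X_train, y_train)
print("SVM accuracy:", svm.score(X_test, y_test))

# RNN classifier: treat the feature vector as a short sequence so an LSTM applies.
rnn = keras.Sequential([
    keras.layers.Input(shape=(30,)),
    keras.layers.Reshape((30, 1)),
    keras.layers.LSTM(64),
    keras.layers.Dense(7, activation="softmax"),
])
rnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
rnn.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)
print("RNN accuracy:", rnn.evaluate(X_test, y_test, verbose=0)[1])
```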
Article Details
COPYRIGHT
Submission of a manuscript implies that the work described has not been published before; that it is not under consideration for publication elsewhere; and that, if and when the manuscript is accepted for publication, the authors agree to the automatic transfer of the copyright to the publisher.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
- The journal allows the author(s) to retain publishing rights without restrictions.
- The journal allows the author(s) to hold the copyright without restrictions.