Kombinasi Metode MFCC dan KNN dalam Pengenalan Emosi Manusia Melalui Ucapan (Combining the MFCC and KNN Methods for Human Emotion Recognition from Speech)

  • Putu Widya Eka Safitri, Universitas Udayana
  • Anak Agung Istri Ngurah Eka Karyawati

Abstract

Emotions are expressions through which humans respond to events affecting themselves or their surroundings. They let people convey how they feel about a situation or event, and they serve as a means of communication beyond language, since they allow humans to perceive what is happening to those around them. The voice is one such expressive channel: speech can also be used to identify the type of emotion a speaker is experiencing. Mel-Frequency Cepstral Coefficient (MFCC) is a feature-extraction method widely used in speech technology, in which a recording of the human voice is converted into a matrix representation of the voice signal, namely its spectrogram. K-Nearest Neighbor (K-NN) is a method that classifies new data based on its distance (neighborhood) to existing data. In this study of classifying human emotions from speech, the K-Nearest Neighbor (K-NN) method proved unsuitable, achieving only 50% accuracy.
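The MFCC-plus-KNN pipeline described above can be sketched in a few lines of numpy. This is a minimal, illustrative sketch only: the feature extractor below computes a DCT of the framed log power spectrum and omits the mel filterbank that true MFCCs apply (a real pipeline would use a library routine such as `librosa.feature.mfcc`), and the helper names `mfcc_like` and `knn_predict` are hypothetical, not taken from the paper.

```python
import numpy as np

def mfcc_like(signal, n_fft=512, hop=256, n_ceps=13):
    """Simplified MFCC-style features: frame the signal, take the log
    power spectrum of each windowed frame, apply a DCT-II to get
    cepstral coefficients, then average over frames so every recording
    maps to one fixed-length feature vector. (True MFCCs also pass the
    spectrum through a mel filterbank, which is omitted here.)"""
    window = np.hanning(n_fft)
    feats = []
    for start in range(0, len(signal) - n_fft, hop):
        frame = signal[start:start + n_fft] * window
        log_spec = np.log(np.abs(np.fft.rfft(frame)) ** 2 + 1e-10)
        n = len(log_spec)
        k = np.arange(n)
        # DCT-II of the log spectrum yields the cepstral coefficients
        ceps = np.array([np.sum(log_spec * np.cos(np.pi * q * (2 * k + 1) / (2 * n)))
                         for q in range(n_ceps)])
        feats.append(ceps)
    return np.mean(feats, axis=0)

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training vectors
    (Euclidean distance), as in the K-NN scheme described above."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

With labeled feature vectors extracted from emotion-tagged recordings, `knn_predict` assigns a new utterance the emotion label held by the majority of its nearest neighbors in feature space.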

Published: 2022-11-25

How to Cite:
EKA SAFITRI, Putu Widya; EKA KARYAWATI, Anak Agung Istri Ngurah. Kombinasi Metode MFCC dan KNN dalam Pengenalan Emosi Manusia Melalui Ucapan. Jurnal Nasional Teknologi Informasi dan Aplikasinya (JNATIA), [S.l.], v. 1, n. 1, p. 133-140, Nov. 2022. Available at: <https://ojs.unud.ac.id/index.php/jnatia/article/view/92655>. Date accessed: 26 Jan. 2023.
