Departamento de Ingeniería Eléctrica y Electrónica
Browsing Departamento de Ingeniería Eléctrica y Electrónica by advisor "Dongo Escalante, Irvin Franco Benito"
Item
A combined CNN architecture for speech emotion recognition (Universidad Católica San Pablo, 2024)
Begazo Huamani, Rolinson Jhiampier; Dongo Escalante, Irvin Franco Benito

Emotion recognition through speech is a technique employed in various scenarios of Human–Computer Interaction (HCI). Existing approaches have achieved significant results; however, limitations persist, most notably the quantity and diversity of data available when deep learning techniques are used. The lack of a standard in feature selection leads to continuous development and experimentation, and choosing and designing the appropriate network architecture constitutes another challenge. This study addresses the challenge of recognizing emotions in the human voice using deep learning techniques, proposing a comprehensive approach that develops preprocessing and feature selection stages and constructs a dataset, EmoDSc, by combining several available databases. The synergy between spectral features and spectrogram images is investigated. Evaluated independently, spectral features alone yielded a weighted accuracy of 89%, while spectrogram images alone reached 90%. These results, although surpassing previous research, highlight the strengths and limitations of each representation when used in isolation. Based on this exploration, a neural network architecture composed of a CNN1D, a CNN2D, and an MLP that fuses spectral features and spectrogram images is proposed. The model, supported by the unified dataset EmoDSc, demonstrates a remarkable accuracy of 96%.

Item
Evaluation of Robot Emotion Expressions for Human–Robot Interaction (Universidad Católica San Pablo, 2024)
Cardenas Santander, Pedro Jesus; Dongo Escalante, Irvin Franco Benito

Emotion recognition has fostered more suitable and effective human–robot interaction (HRI). In particular, social robots have to imitate the expression of feelings through their voices and body gestures in order to improve this interaction. However, a robot's hardware limitations (few joints and limited computational resources) may restrict the quality of its expressions. To contribute to this area, we conducted a study on how emotions are expressed by humans through gestures, body language, and movements. This study allows understanding universal representations of emotions (movements and gestures) and designing similar movements for robots despite their hardware limitations. Based on that, we develop and evaluate an emotional interaction system for robots, specifically for the Pepper robot. This system uses verbal emotion recognition based on deep learning techniques to interpret and respond with movements and emojis, thus enriching the dynamics of HRI. We implemented two versions of this interaction system: an on-board implementation (the emotion recognition process is executed by the robot) and a server-based implementation (the emotion recognition is performed by an external server connected to the robot). We assessed the performance of both versions, as well as the acceptance of the robot's expressions for HRI. Results show that the combined use of emotional movements and emojis by the robot significantly improves the accuracy of emotional conveyance.
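
The fusion architecture described in the first item (a CNN1D branch over spectral features, a CNN2D branch over spectrogram images, and an MLP over their concatenated embeddings) can be illustrated with a minimal PyTorch sketch. All layer sizes, input shapes (40 spectral features, 128×128 spectrograms), and the seven-class output below are illustrative assumptions, not the configuration published in the thesis.

```python
# Minimal sketch of a CNN1D + CNN2D + MLP late-fusion model for speech
# emotion recognition. Layer sizes, input shapes, and the number of
# emotion classes are assumptions chosen only for illustration.
import torch
import torch.nn as nn

class FusionSER(nn.Module):
    def __init__(self, n_spectral=40, n_classes=7):
        super().__init__()
        # 1D branch over a vector of spectral features (e.g. MFCC-style)
        self.cnn1d = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),       # -> (16, 8)
            nn.Flatten(),                  # -> 128
        )
        # 2D branch over a single-channel spectrogram image
        self.cnn2d = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # -> (32, 4, 4)
            nn.Flatten(),                  # -> 512
        )
        # MLP classifies the concatenated branch embeddings
        self.mlp = nn.Sequential(
            nn.Linear(128 + 512, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, n_classes),
        )

    def forward(self, spectral, spectrogram):
        # spectral: (batch, 1, n_spectral); spectrogram: (batch, 1, H, W)
        z = torch.cat([self.cnn1d(spectral), self.cnn2d(spectrogram)], dim=1)
        return self.mlp(z)

model = FusionSER()
logits = model(torch.randn(2, 1, 40), torch.randn(2, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 7])
```

Per the abstract, this kind of late fusion is what lifted performance beyond either branch alone (96% accuracy versus 89–90% weighted accuracy for the isolated branches).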
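
The server-based variant described in the second item can be sketched as a small HTTP service: the robot posts recognized text to an external server, which classifies the emotion and returns a label the robot maps to a gesture and a tablet emoji. The endpoint name, payload fields, stub classifier, and emotion-to-expression table below are hypothetical; the actual system uses a deep learning recognizer and Pepper's own animation API.

```python
# Minimal sketch of the server side of a server-based emotion interaction
# system. Endpoint, payload, and the emotion-to-expression mapping are
# illustrative assumptions, not the authors' published protocol.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical mapping from recognized emotion to a gesture name and an
# emoji; real gesture identifiers depend on the robot's animation library.
EXPRESSIONS = {
    "happy": {"gesture": "raise_arms", "emoji": "😀"},
    "sad":   {"gesture": "lower_head", "emoji": "😢"},
    "angry": {"gesture": "shake_head", "emoji": "😠"},
}

def classify_emotion(text: str) -> str:
    """Stub classifier; the paper uses a deep learning model here."""
    for label in EXPRESSIONS:
        if label in text.lower():
            return label
    return "happy"

@app.route("/emotion", methods=["POST"])
def emotion():
    text = request.get_json(force=True).get("text", "")
    label = classify_emotion(text)
    # The robot client reads this response and triggers the matching
    # animation and tablet emoji locally.
    return jsonify({"emotion": label, **EXPRESSIONS[label]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

In this split, the on-board version would instead run the classifier on the robot itself, trading the server round trip for the robot's limited compute, which is the performance comparison the study reports.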