A multi-modal visual emotion recognition method to instantiate an ontology

dc.contributor.author: A. Heredia, Juan Pablo
dc.contributor.author: Cardinale, Yudith
dc.contributor.author: Dongo, Irvin
dc.contributor.author: Díaz-Amado, Jose
dc.date.accessioned: 2022-03-10T13:59:19Z
dc.date.available: 2022-03-10T13:59:19Z
dc.date.issued: 2021
dc.description.abstract: "Human emotion recognition from visual expressions is an important research area in computer vision and machine learning owing to its significant scientific and commercial potential. Since visual expressions can be captured from different modalities (e.g., facial expressions, body posture, hand pose), multi-modal methods are becoming popular for analyzing human reactions. In contexts in which human emotion detection is performed to associate emotions with certain events or objects, to support decision making or further analysis, it is useful to keep this information in semantic repositories, which offer a wide range of possibilities for implementing smart applications. We propose a multi-modal method for human emotion recognition and an ontology-based approach to store the classification results in EMONTO, an extensible ontology to model emotions. The multi-modal method analyzes facial expressions, body gestures, and features from the body and the environment to determine an emotional state; it processes each modality with a specialized deep learning model and applies a fusion method. Our fusion method, called EmbraceNet+, consists of a branched architecture that integrates the EmbraceNet fusion method with others. We experimentally evaluate our multi-modal method on an adaptation of the EMOTIC dataset. Results show that our method outperforms the single-modal methods."
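The abstract describes a branched architecture in which modality-specific networks produce embeddings that are then combined by an EmbraceNet-based fusion step. As a rough illustration of the EmbraceNet idea (each fused feature dimension is drawn stochastically from one modality's embedding), here is a minimal NumPy sketch; the modality names, embedding size, and equal selection probabilities are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def embracenet_fuse(embeddings, probs, rng):
    """EmbraceNet-style fusion: each output dimension takes its value
    from exactly one modality, chosen at random with the given
    per-modality probabilities."""
    stacked = np.stack(embeddings)                 # (n_modalities, dim)
    n_mod, dim = stacked.shape
    choice = rng.choice(n_mod, size=dim, p=probs)  # per-dimension modality index
    return stacked[choice, np.arange(dim)]         # (dim,)

# Hypothetical per-modality embeddings (face, body, scene context),
# assumed already projected to a common dimensionality by
# modality-specific deep networks.
face = rng.normal(size=128)
body = rng.normal(size=128)
context = rng.normal(size=128)

fused = embracenet_fuse([face, body, context], probs=[1/3, 1/3, 1/3], rng=rng)
print(fused.shape)  # (128,)
```

In the actual EmbraceNet+ method this stochastic fusion is one branch among several; the sketch only shows the core per-dimension selection mechanism.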
dc.description.uri: Academic work
dc.format: application/pdf
dc.identifier.doi: 10.5220/0010516104530464
dc.identifier.isbn: 978-989758523-4
dc.identifier.uri: https://hdl.handle.net/20.500.12590/17035
dc.language.iso: eng
dc.publisher: SciTePress
dc.publisher.country: PE
dc.relation.uri: https://www.scopus.com/record/display.uri?eid=2-s2.0-85111776744&origin=resultslist&sort=plf-f&src=s&nlo=&nlr=&nls=&sid=388854f699364393473c7d2625e8af59&sot=aff&sdt=cl&cluster=scopubyr%2c%222021%22%2ct&sl=48&s=AF-ID%28%22Universidad+Cat%c3%b3lica+San+Pablo%22+60105300%29&relpos=60&citeCnt=0&searchTerm=&featureToggles=FEATURE_NEW_DOC_DETAILS_EXPORT:1
dc.rights: info:eu-repo/semantics/restrictedAccess
dc.source: Universidad Católica San Pablo
dc.source: Repositorio Institucional - UCSP
dc.subject: Emotion Ontology
dc.subject: Emotion Recognition
dc.subject: Multi-modal Method
dc.subject: Visual Expressions
dc.subject.ocde: https://purl.org/pe-repo/ocde/ford#1.02.00
dc.title: A multi-modal visual emotion recognition method to instantiate an ontology
dc.type: info:eu-repo/semantics/article
dc.type.version: info:eu-repo/semantics/publishedVersion
renati.type: https://purl.org/pe-repo/renati/type#trabajoAcademico
thesis.degree.discipline: Computer science and informatics
thesis.degree.grantor: Universidad Católica San Pablo. Facultad de Ciencia de la Computación