Bi-modal annoyance level detection from speech and text

Raquel Justo, Jon Irastorza, Saioa Pérez, M. Inés Torres

Abstract


The main goal of this work is the identification of emotional hints from speech. Machine learning researchers have analysed sets of acoustic parameters as potential cues for the identification of discrete emotional categories or, alternatively, of the dimensions of emotions. However, the semantic information carried by the text associated with an utterance can also provide valuable cues for emotion detection. In this work, this textual information is combined with the acoustic information, leading to better system performance. Moreover, a notable aspect of this work is the use of a corpus that includes spontaneous emotions gathered in a realistic environment. It is well known that emotion expression depends not only on cultural factors but also on the individual and on the specific situation. Thus, the conclusions extracted from the present work can be more easily extrapolated to a real system than those obtained from a classical corpus with simulated emotions.
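As a point of reference for how such a bi-modal combination can be realised, the sketch below illustrates simple feature-level fusion: per-utterance acoustic parameters and a TF-IDF representation of the utterance transcription are concatenated and fed to a single classifier. The specific features, the TF-IDF text representation, and the SVM classifier are illustrative assumptions, not necessarily the configuration used in the paper.

```python
# Illustrative sketch only: early (feature-level) fusion of acoustic and
# textual features for annoyance-level classification.  The features and
# the classifier are assumptions made for this example.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

# Toy data: per-utterance acoustic parameters (e.g. pitch/energy statistics)
# and the transcription of each utterance, with an annoyance-level label.
acoustic = np.array([[210.3, 0.71], [180.5, 0.42], [250.1, 0.88]])
texts = ["this is taking far too long",
         "thank you for your help",
         "I have already explained this three times"]
labels = ["annoyed", "neutral", "annoyed"]

# Text features derived from the transcriptions.
vectorizer = TfidfVectorizer()
text_feats = vectorizer.fit_transform(texts)

# Feature-level fusion: concatenate acoustic and textual features.
fused = hstack([csr_matrix(acoustic), text_feats])

# A single classifier is trained on the fused bi-modal representation.
clf = SVC(kernel="linear").fit(fused, labels)
print(clf.predict(fused))
```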

Full text: PDF


DOI: http://dx.doi.org/10.26342/2018-61-9