Since emotions are expressed through a combination of verbal and non-verbal channels, a joint analysis of speech and gestures is required to understand expressive human communication. To facilitate such investigations, this paper describes a new corpus named the "interactive emotional dyadic motion capture database" (IEMOCAP), collected by the Speech Analysis and Interpretation Laboratory (SAIL) at the University of Southern California (USC). This database was recorded from ten actors in dyadic sessions with markers on the face, head, and hands, which provide detailed information about their facial expressions and hand movements during scripted and spontaneous spoken communication scenarios. The actors performed selected emotional scripts and also improvised hypothetical scenarios designed to elicit specific types of emotions (happiness, anger, sadness, frustration and neutral state). The corpus contains approximately 12 hours of data. The detailed motion capture information, the interactive setting to elicit authentic emotions, and the size of the database make this corpus a valuable addition to the existing databases in the community for the study and modeling of multimodal and expressive human communication.
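The description implies a simple utterance-level view of such a corpus: dyadic sessions, two actors per session, and one of five categorical labels (happiness, anger, sadness, frustration, neutral) per utterance. The following is a minimal sketch of how that metadata might be tallied; the directory layout, the session*/labels.tsv file name, and the CORPUS_ROOT path are hypothetical placeholders and do not describe the actual IEMOCAP release format.

```python
from collections import Counter
from pathlib import Path

# Hypothetical layout (NOT the actual IEMOCAP release format):
#   corpus_root/
#     session01/ ... sessionNN/      one directory per dyadic session
#       labels.tsv                   utterance_id <TAB> speaker <TAB> emotion
CORPUS_ROOT = Path("corpus_root")    # placeholder path to a local copy
EMOTIONS = {"happiness", "anger", "sadness", "frustration", "neutral"}

def count_labels(root: Path) -> Counter:
    """Tally utterance-level emotion labels across all sessions."""
    counts = Counter()
    for label_file in sorted(root.glob("session*/labels.tsv")):
        for line in label_file.read_text().splitlines():
            utt_id, speaker, emotion = line.split("\t")
            if emotion in EMOTIONS:
                counts[emotion] += 1
    return counts

if __name__ == "__main__":
    if CORPUS_ROOT.exists():
        for emotion, n in count_labels(CORPUS_ROOT).most_common():
            print(f"{emotion:12s} {n:6d}")
    else:
        print("Point CORPUS_ROOT at a local copy of the corpus first.")
```

A real loader would follow whatever annotation files ship with the corpus; the point of the sketch is only the session/utterance/label structure implied by the description above.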