Published on Mon Jan 13 2020

Detecting depression in dyadic conversations with multimodal narratives and visualizations

Joshua Y. Kim, Greyson Y. Kim, Kalina Yacef

Abstract

Conversations contain a wide spectrum of multimodal information that gives us hints about the emotions and moods of the speaker. In this paper, we developed a system that supports humans in analyzing conversations. Our main contribution is the identification of appropriate multimodal features and the integration of such features into verbatim conversation transcripts. We demonstrate the ability of our system to take in a wide range of multimodal information and automatically generate a prediction score for the depression state of the individual. Our experiments showed that this approach yielded better performance than the baseline model. Furthermore, the multimodal narrative approach makes it easy to integrate learnings from other disciplines, such as conversation analysis and psychology. Lastly, this interdisciplinary and automated approach is a step towards emulating how practitioners record the course of treatment, as well as how conversation analysts have been analyzing conversations by hand.