A Sequence of Tools for Multimodal Analysis of Verbal and Nonverbal Behaviors Related to Interactional Rapport
We describe a method for annotating and analyzing multimodal video data using a sequence of specialized tools that we have made interoperable, and we present initial results derived with this method on a test sample. This report details the building blocks of cross-language, multimodal analyses planned for a large corpus of audio-videotaped dyadic conversation data comprising elicitations from Gulf-region Arabic speakers, Mexican Spanish speakers, and American English speakers. We discuss how our approach meets the challenge of preparing audio, video, and transcription text data from these three diverse languages for annotation and comparative analysis of multimodal language behaviors related to the maintenance of interactional rapport.