Paper ID: 2405.08454
Alignment Helps Make the Most of Multimodal Data
Christian Arnold, Andreas Küpfer
When studying political communication, combining information from text, audio, and video signals promises to reflect the richness of human communication more comprehensively than confining analysis to individual modalities alone. However, the heterogeneity, connectedness, and interaction of these modalities are challenging to address when modeling such multimodal data. We argue that aligning the respective modalities can be an essential step toward fully exploiting the potential of multimodal data because it injects human understanding into the model. Attending to the data-generating process of multimodal data, our framework proposes four principles for organizing alignment and thus addressing these challenges. We illustrate the utility of these principles by analyzing how German MPs address members of the far-right AfD in their speeches and by predicting the tone of video advertising in the context of the 2020 US presidential race. Our paper offers important insights to anyone seeking to analyze multimodal data effectively.
Submitted: May 14, 2024
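To make the abstract's notion of alignment concrete, the sketch below shows one generic way to time-align a word-level speech transcript with frame-level audio features. The data, column names, and timestamp-based merge are illustrative assumptions, not the paper's framework or its four principles.

```python
# Illustrative sketch only: time-based alignment of a word-level transcript
# with frame-level audio features. All data and column names are hypothetical.
import pandas as pd

# Hypothetical word-level transcript with word onset times in seconds.
transcript = pd.DataFrame({
    "start": [0.00, 0.42, 0.95, 1.60],
    "word": ["Sehr", "geehrte", "Damen", "Herren"],
})

# Hypothetical audio features extracted every 0.5 seconds (e.g., pitch).
audio = pd.DataFrame({
    "time": [0.0, 0.5, 1.0, 1.5],
    "pitch_hz": [110.0, 132.5, 128.0, 140.2],
})

# Align each word with the most recent audio frame by timestamp,
# yielding one multimodal record per word.
aligned = pd.merge_asof(
    transcript.sort_values("start"),
    audio.sort_values("time"),
    left_on="start",
    right_on="time",
    direction="backward",
)
print(aligned[["word", "start", "pitch_hz"]])
```

Such a timestamp-based merge is just one possible alignment unit; the paper's principles concern how to choose and organize such units given the data-generating process.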