Paper ID: 2205.10902
The Case for Perspective in Multimodal Datasets
Marcelo Viridiano, Tiago Timponi Torrent, Oliver Czulo, Arthur Lorenzi Almeida, Ely Edison da Silva Matos, Frederico Belcavello
This paper argues in favor of the adoption of annotation practices for multimodal datasets that recognize and represent the inherently perspectivized nature of multimodal communication. To support our claim, we present a set of annotation experiments in which FrameNet annotation is applied to the Multi30k and the Flickr 30k Entities datasets. We assess the cosine similarity between the semantic representations derived from the frame annotation of both pictures and captions. Our findings indicate that: (i) frame semantic similarity between captions of the same picture produced in different languages is sensitive to whether the caption is a translation of another caption or not, and (ii) picture annotation for semantic frames is sensitive to whether the image is annotated in the presence of a caption or not.
Submitted: May 22, 2022
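
The abstract describes comparing frame-based semantic representations of pictures and captions via cosine similarity. The sketch below is an illustrative assumption, not the authors' exact pipeline: it treats each annotated item as a bag-of-frames count vector and computes the cosine similarity between two such vectors. The frame names and counts are hypothetical.

```python
# Minimal sketch: cosine similarity between bag-of-frames count vectors.
# Representation as count vectors is an assumption for illustration only.
from collections import Counter
import math

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse frame-count vectors."""
    shared = set(a) & set(b)
    dot = sum(a[f] * b[f] for f in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical frame annotations for one picture and one of its captions.
picture_frames = Counter({"People": 2, "Clothing": 1, "Self_motion": 1})
caption_frames = Counter({"People": 1, "Self_motion": 1, "Locale": 1})

print(f"{cosine_similarity(picture_frames, caption_frames):.3f}")
```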