Multimodal characterization of communication: presentation of a new dataset and first experiments

In-house seminar

In face-to-face interaction, the messages exchanged among participants are not limited to chains (patterns) of words, but are characterized by a continuous interplay of information expressed through multiple channels. In addition to language, other faculties such as hearing and sight are involved in the construction of meaning: we decode not only the sequences of words, but also the “way” in which they are produced (intonation, volume, speed) and how we move and present ourselves when we pronounce them. The conveyed meaning depends not only on the combination of these characteristics, but also on the involvement, or the effect, that the message has on the recipient(s). Is it possible to predict this effect automatically? This seminar presents a new multimodal dataset for the study and prediction of engagement potential in the context of guided visits to sites of cultural interest.

Speaker(s): Andrea Amelio Ravelli

He has been a postdoctoral research fellow since 2020 within the ItaliaNLP Group of the Institute for Computational Linguistics “A. Zampolli” of the National Research Council of Italy. He studied at the University of Florence, training as a modern philologist and developing an interest in Computational Linguistics. He received his PhD in 2020 with a thesis on the construction and annotation of multimodal datasets for the representation of actions and events, using the visual medium as a semantic discriminant for the disambiguation of action verbs in natural language.

Brochure

Video Recording: Multimodal characterization of communication: presentation of a new dataset and first experiments