Overview of SatiSPeech at IberLEF 2025: Multimodal Audio-Text Satire Classification in Spanish
Abstract
This article provides an overview of the SatiSPeech 2025 shared task, organized as part of the IberLEF 2025 workshop, held in conjunction with the XLI International Congress of the Spanish Society for Natural Language Processing (SEPLN 2025). The main goal of this task is to advance research on the automatic recognition of satire—a complex form of communication that presents unique challenges for natural language processing, particularly in areas such as subjectivity analysis, emotion recognition, and deep language understanding. The task is divided into two independent subtasks. The first subtask focuses on satire detection using transcriptions from YouTube videos, distinguishing between satirical and non-satirical content through a text classification approach. The second subtask introduces a multimodal perspective, combining textual and acoustic information, which requires addressing challenges in data representation and the design of models capable of modality fusion. Eleven teams participated in SatiSPeech 2025, each proposing and evaluating different strategies to tackle these problems. This overview analyzes the proposed approaches, the techniques employed, the results obtained, and the key insights gained from this edition.


