On the Impact of Syntactic Infusion for Gender Categorization Across Contextual Dimensions
Abstract
This paper investigates how incorporating syntactic information can enhance the categorization of text into multiple gender dimensions, defined by our own identity (as category), the person we are addressing (to category), or the individual we are discussing (about category). Specifically, we explore the use of dependency grammars to integrate explicit syntactic embeddings while leveraging the strengths of pre-trained masked language models (MLMs). Our goal is to determine whether dependency grammars add value beyond the implicit syntactic understanding already captured by MLMs. We begin by establishing a baseline using standard MLMs. Next, we propose a neural architecture that explicitly integrates dependency-based structures into this baseline, enabling a comparative analysis of performance and variations. Finally, in addition to evaluating the results, we analyze the training dynamics of the two proposed variants to provide additional insights into their behavior during the fine-tuning stage. Our results show that explicit syntactic information boosts performance in single-task setups, although its gains fade in multitask scenarios.
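To illustrate the general idea of infusing explicit syntactic information into an MLM-based classifier, the sketch below concatenates per-token MLM hidden states with trainable dependency-relation embeddings before pooling and classification. All names, dimensions, and the random stand-ins for MLM outputs are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed sizes: a Universal-Dependencies-like relation inventory,
# a BERT-base-like hidden size, and three gender-dimension labels.
N_DEP, MLM_H, DEP_D, N_LABELS = 45, 768, 64, 3

# Parameter tables (randomly initialized here for illustration;
# in practice these would be trained jointly with the MLM).
dep_table = rng.normal(size=(N_DEP, DEP_D))     # dependency-relation embeddings
W = rng.normal(size=(MLM_H + DEP_D, N_LABELS))  # classifier weights
b = np.zeros(N_LABELS)                          # classifier bias

def classify(mlm_hidden, dep_ids, mask):
    # mlm_hidden: (batch, seq, MLM_H) hidden states from a pre-trained MLM
    # dep_ids:    (batch, seq) dependency-relation id per token
    # mask:       (batch, seq) 1 for real tokens, 0 for padding
    # Concatenate MLM states with explicit syntactic embeddings
    fused = np.concatenate([mlm_hidden, dep_table[dep_ids]], axis=-1)
    # Mean-pool over non-padding tokens, then apply a linear classifier
    m = mask[..., None].astype(float)
    pooled = (fused * m).sum(axis=1) / np.clip(m.sum(axis=1), 1.0, None)
    return pooled @ W + b

# Smoke test with random stand-ins for MLM outputs and parsed relations
h = rng.normal(size=(2, 10, MLM_H))
deps = rng.integers(0, N_DEP, size=(2, 10))
mask = np.ones((2, 10), dtype=int)
logits = classify(h, deps, mask)
print(logits.shape)  # (2, 3)
```

A multitask variant of this setup would share the fused representation across the as/to/about dimensions with one classifier head per task.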


