An Analysis of Gender Bias in Text-to-Image Models Using Neutral Prompts in Spanish

Victoria Muñoz-García, María Villalba-Osés, Juan Pablo Consuegra-Ayala

Abstract

Text-to-image generative models can create visual content from text but often reflect biases present in their training data. This study examines gender bias in three widely used models—ChatGPT (DALL-E), Copilot, and Gemini—using gender-neutral prompts in Spanish, a language underexplored in bias research. A dataset of 300 images generated from 50 neutral prompts on health and well-being was manually analyzed for gender representation biases. ChatGPT showed the highest stereotyping and lowest neutrality, Copilot maintained strict neutrality, and Gemini exhibited intermediate behavior. Across models, neutrality dropped when analyzing the main subject (gender-target annotations) versus contextual elements (gender-related annotations). These findings underscore persistent gender bias, even with neutral prompts, and highlight the need for fairer AI systems through systematic evaluation.
