Women Wearing Lipstick: Measuring the Bias Between an Object and Its Related Gender
Abstract
A gender score for image captioning systems quantifies the bias towards gender-specific objects, providing an additional metric for evaluating bias beyond the Object Gender Co-Occ approach.
In this paper, we investigate the impact of objects on gender bias in image captioning systems. Our results show that only gender-specific objects exhibit a strong gender bias (e.g., women-lipstick). In addition, we propose a visual semantic-based gender score that measures the degree of bias and can be used as a plug-in for any image captioning system. Our experiments demonstrate the utility of the gender score: it measures the bias relation between a caption and its related gender, and can therefore serve as an additional metric alongside the existing Object Gender Co-Occ approach. Code and data are publicly available at https://github.com/ahmedssabir/GenderScore.
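To make the baseline concrete, the sketch below illustrates the general idea behind an object-gender co-occurrence metric: for a given object, count how often captions mentioning it also mention a female or male word, and take the ratio. The gendered word lists and the counting scheme here are illustrative assumptions for this example only, not the implementation released in the GenderScore repository; the proposed visual semantic-based gender score goes beyond such raw counts.

```python
from collections import Counter

# Hypothetical gendered word lists; the lists used by the Object Gender Co-Occ
# metric and by GenderScore may differ.
WOMAN_WORDS = {"woman", "women", "girl", "girls", "she", "her", "lady"}
MAN_WORDS = {"man", "men", "boy", "boys", "he", "his", "gentleman"}


def object_gender_bias(captions, obj):
    """Co-occurrence bias of `obj` toward the 'woman' class.

    Counts captions that mention `obj` together with a gendered word and
    returns count(woman, obj) / (count(woman, obj) + count(man, obj)).
    A value near 1.0 means the object co-occurs mostly with women,
    near 0.0 mostly with men, and 0.5 indicates no co-occurrence bias.
    """
    counts = Counter()
    for caption in captions:
        tokens = set(caption.lower().split())
        if obj not in tokens:
            continue
        if tokens & WOMAN_WORDS:
            counts["woman"] += 1
        if tokens & MAN_WORDS:
            counts["man"] += 1
    total = counts["woman"] + counts["man"]
    return counts["woman"] / total if total else 0.5


# Toy captions (made up for illustration only).
captions = [
    "a woman applying lipstick in front of a mirror",
    "a woman holding a lipstick and a small mirror",
    "a man standing next to a motorcycle",
]
print(object_gender_bias(captions, "lipstick"))  # 1.0 -> strongly woman-biased
```

A count-based ratio like this only captures how often an object and a gender word appear together; the gender score proposed in the paper instead scores the bias relation at the caption level, which is why it is presented as a complementary metric rather than a replacement.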