In-group and Out-group Performance Bias in Facial Retouching Detection
Date Issued
2022-01-01
Author(s)
Bharati, Aparna
Connors, Emma
Vatsa, Mayank
Singh, Richa
Bowyer, Kevin
DOI
10.1109/IJCB54206.2022.10007942
Abstract
Accuracy alone is not sufficient to establish the efficacy of an AI algorithm; issues of demographic bias are an important area of concern. Demographic bias in face recognition algorithms has attracted more attention from the research community to date, but bias can also be a problem for face image analysis algorithms, such as those for detecting manipulated face images. In this paper, we investigate the performance of humans and algorithms at detecting retouched face images of subjects from different origin (America, India, China) and gender groups. To be representative of the state of the art in retouching detection, we use eight different algorithms from the literature. In addition to overall human accuracy, differences across the origin and gender of the human performing the task are analyzed. We observe different bias patterns: algorithms show higher in-group accuracy than out-group accuracy, while for humans the extent of retouching and familiarity drive differences in detection accuracy. This is the first work to analyze and compare the bias exhibited by humans and algorithms on similar tasks of detecting retouched face images.