In this paper, we present a novel framework that combines Explainable AI (XAI) with image darkness assessment to detect and mitigate bias in cervical histology image classification. Four deep learning architectures were employed (AlexNet, ResNet-50, EfficientNet-B0, and DenseNet-121), with EfficientNet-B0 demonstrating the highest accuracy after mitigation. Grad-CAM and saliency maps were used to identify biases in the models' predictions. After brightness normalisation and synthetic data augmentation were applied, the models shifted their focus toward clinically relevant features, improving both accuracy and fairness. Statistical analysis using ANOVA confirmed a reduction in the influence of image darkness on model predictions after mitigation, with the F-statistic falling from 120.79 to 14.05, indicating improved alignment of the models with clinically relevant features. The study underscores the importance of addressing biases in medical AI models to improve fairness, generalizability, and clinical utility.
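To illustrate the kind of test the abstract describes, the following is a minimal sketch (not the authors' code) of a one-way ANOVA checking whether model confidence differs across image-darkness groups, the comparison behind the reported F-statistics (120.79 before mitigation, 14.05 after). The brightness binning, variable names, and the synthetic data in the usage example are assumptions for illustration only.

```python
# Sketch: one-way ANOVA on predicted probabilities grouped by image darkness.
# Binning thresholds and names (mean_brightness, pred_prob) are illustrative assumptions.
import numpy as np
from scipy.stats import f_oneway

def darkness_anova(mean_brightness: np.ndarray, pred_prob: np.ndarray,
                   bins=(0.0, 0.33, 0.66, 1.0)):
    """Group predicted probabilities by brightness bin and run a one-way ANOVA."""
    groups = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (mean_brightness >= lo) & (mean_brightness < hi)
        if mask.any():
            groups.append(pred_prob[mask])
    f_stat, p_value = f_oneway(*groups)
    return f_stat, p_value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    brightness = rng.uniform(0.0, 1.0, size=500)  # normalised mean pixel intensity per image
    # Hypothetical pre-mitigation behaviour: confidence correlated with brightness.
    prob_before = np.clip(0.4 + 0.5 * brightness + rng.normal(0, 0.05, 500), 0, 1)
    # Hypothetical post-mitigation behaviour: confidence largely independent of brightness.
    prob_after = np.clip(0.7 + rng.normal(0, 0.05, 500), 0, 1)
    print("before mitigation:", darkness_anova(brightness, prob_before))
    print("after mitigation: ", darkness_anova(brightness, prob_after))
```

A large drop in the F-statistic between the two runs, as in the paper's reported values, would indicate that image darkness explains much less of the variance in model predictions after mitigation.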
Skarga-Bandurova, Inna; Sharifnia, Golshid; Biloborodova, Tetiana
School of Engineering, Computing and Mathematics
Year of publication: 2025
Date of RADAR deposit: 2025-07-04