Book Chapter


Bias detection in histology images using explainable AI and image darkness assessment

Abstract

The study underscores the importance of addressing biases in medical AI models to improve fairness, generalizability, and clinical utility. In this paper, we present a novel framework that combines Explainable AI (XAI) with image darkness assessment to detect and mitigate bias in cervical histology image classification. Four deep learning architectures were employed (AlexNet, ResNet-50, EfficientNet-B0, and DenseNet-121), with EfficientNet-B0 demonstrating the highest accuracy post-mitigation. Grad-CAM and saliency maps were used to identify biases in the models' predictions. After applying brightness normalisation and synthetic data augmentation, the models shifted focus toward clinically relevant features, improving both accuracy and fairness. Statistical analysis using ANOVA confirmed a reduction in the influence of image darkness on model predictions after mitigation, as evidenced by a decrease in the F-statistic from 120.79 to 14.05, indicating improved alignment of the models with clinically relevant features.
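The darkness-bias test described in the abstract can be illustrated with a minimal sketch: score each image's "darkness" as its mean grayscale intensity, bin images into darkness groups, and run a one-way ANOVA to test whether model confidence differs across groups. This is not the authors' implementation; the data below are synthetic and the grouping into brightness terciles is an assumption made for illustration.

```python
# Minimal sketch of an image-darkness bias check, assuming darkness is
# measured as mean grayscale intensity and bias is tested with a
# one-way ANOVA across darkness groups. All data here are synthetic.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-image mean brightness in [0, 255] and a
# model confidence score in [0, 1] for 300 images, with confidence
# deliberately correlated with brightness to mimic a biased model.
brightness = rng.uniform(30, 220, size=300)
confidence = np.clip(
    0.5 + 0.002 * (brightness - 125) + rng.normal(0, 0.05, size=300),
    0.0, 1.0,
)

# Bin images into three darkness groups at the brightness terciles.
edges = np.quantile(brightness, [1 / 3, 2 / 3])
groups = np.digitize(brightness, edges)  # 0 = dark, 1 = mid, 2 = bright

# One-way ANOVA: does mean confidence differ between darkness groups?
samples = [confidence[groups == g] for g in range(3)]
f_stat, p_value = f_oneway(*samples)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")
```

A large F-statistic (small p-value) indicates that predictions depend on image darkness rather than on tissue morphology alone; re-running the same test after brightness normalisation and augmentation should yield a smaller F, mirroring the drop from 120.79 to 14.05 reported above.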

Authors

Skarga-Bandurova, Inna
Sharifnia, Golshid
Biloborodova, Tetiana

Oxford Brookes departments

School of Engineering, Computing and Mathematics

Dates

Year of publication: 2025
Date of RADAR deposit: 2025-07-04


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License


Related resources

This RADAR resource is identical to Bias Detection in Histology Images Using Explainable AI and Image Darkness Assessment

Details

  • Owner: Isabel Virgo
  • Collection: Outputs
  • Version: 1
  • Status: Live