
Automated segmentation of COVID-19 lesions in CT scans using attention U-net with hybrid loss functions

  • Samy Bakheet
  • Rehab Youssef
  • Mahmoud H. Mofaddel
  • Moatamad Hassan
  • Asma Alshehri

Research output: Contribution to journal › Article › peer-review

Abstract

The COVID-19 pandemic has spread rapidly across the globe, presenting significant public health challenges. Biomedical imaging techniques, particularly computed tomography (CT), are vital for detecting and monitoring disease. Accurately segmenting pneumonia lesions in CT scans is essential for diagnosing COVID-19 and assessing its severity. However, low-contrast infected regions pose a major challenge for automated segmentation methods. In this paper, we present an accessible deep learning framework for the automatic segmentation of COVID-19-infected regions. The framework combines Contrast-Limited Adaptive Histogram Equalization (CLAHE) preprocessing with an Attention U-Net model trained using a hybrid Dice-Tversky loss, supported by extensive data augmentation to improve generalization. We evaluated our approach on a publicly available COVID-19 CT dataset using 5-fold cross-validation, achieving a Dice score of 0.83, an Intersection over Union (IoU) of 0.71, and an accuracy of 99.74%. To enhance the interpretability of the model, we applied Explainable Artificial Intelligence (XAI) techniques, such as Gradient-weighted Class Activation Mapping (Grad-CAM). These results demonstrate the effectiveness of the proposed framework and highlight its potential as a practical tool for medical imaging applications.
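The abstract names a hybrid Dice-Tversky loss as the training objective but does not report its exact formulation or hyperparameters. The following is a minimal PyTorch sketch of one common way to combine the two terms; the mixing weight w and the Tversky parameters alpha/beta are illustrative assumptions, not the paper's reported settings.

import torch

def dice_loss(pred, target, eps=1e-6):
    # pred, target: tensors of shape (N, 1, H, W) with values in [0, 1]
    inter = (pred * target).sum(dim=(1, 2, 3))
    denom = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-6):
    # alpha weights false positives, beta weights false negatives;
    # alpha=0.7/beta=0.3 is an assumed setting favoring recall on lesions
    tp = (pred * target).sum(dim=(1, 2, 3))
    fp = (pred * (1.0 - target)).sum(dim=(1, 2, 3))
    fn = ((1.0 - pred) * target).sum(dim=(1, 2, 3))
    return 1.0 - ((tp + eps) / (tp + alpha * fp + beta * fn + eps)).mean()

def hybrid_dice_tversky(pred, target, w=0.5):
    # Convex combination of the two terms; w=0.5 is an assumed weight.
    # Usage: loss = hybrid_dice_tversky(torch.sigmoid(logits), masks)
    return w * dice_loss(pred, target) + (1.0 - w) * tversky_loss(pred, target)

Blending the two terms this way lets the Dice component reward overall overlap while the Tversky component trades off false positives against false negatives, which is one standard motivation for such hybrids on small, low-contrast lesions.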

Original language: English
Article number: 1551
Journal: Scientific Reports
Volume: 16
Issue number: 1
DOIs
State: Published - Dec 2026
