2pCePd-Net: Two-Path Cross-Context Encoder With Probability Map-Based Bandpass Decoder for Retinal Vessel Segmentation

Supratim Ghosh, Sourav Pramanik, Anoop Kumar Tiwari, Kottakkaran Sooppy Nisar, Mahantapas Kundu, Mita Nasipuri

Research output: Contribution to journal › Article › peer-review

Abstract

Accurate automatic segmentation of retinal blood vessels in fundus images plays an important role in the early diagnosis of ocular diseases. However, most prior work has yet to attain superior results, primarily due to a lack of sufficient annotated data and the complexity of vessel structures under challenging background conditions. In this work, we propose a coherence measure-guided data augmentation model, named lambda-coherence measure-guided Cartesian-square (λCMgC2), that enriches existing datasets with synthetic yet structurally coherent fundus images, thus alleviating the issue of data scarcity. Subsequently, we propose a novel end-to-end convolutional network, called the two-path cross-context encoder with probability map-based bandpass decoder (2pCePd-Net), for the segmentation of blood vessels; it is endowed with a novel 2pCd+ encoder block with a CERg skip connection and a novel p̂BPf-enabled decoder block. The proposed work has been evaluated on four standard datasets, namely DRIVE, STARE, CHASEDB1, and HRF, and has obtained benchmark accuracies (Ac) of 97.6%, 98.1%, 98.2%, and 97.7%, respectively. Statistically, our model has further achieved benchmark results on the sensitivity (Se), specificity (Sp), F1, and AUC measures of evaluation as well.
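The Ac, Se, Sp, and F1 figures quoted in the abstract follow the standard pixel-level definitions used in vessel segmentation benchmarks. A minimal sketch of those definitions is given below; this is an illustrative helper (`segmentation_metrics` is a hypothetical name), not the authors' evaluation code, and AUC is omitted since it is computed from the continuous probability map rather than the binary mask.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-wise accuracy (Ac), sensitivity (Se), specificity (Sp),
    and F1 for binary vessel masks (1 = vessel, 0 = background)."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    tp = np.sum(pred & gt)    # vessel pixels correctly detected
    tn = np.sum(~pred & ~gt)  # background pixels correctly rejected
    fp = np.sum(pred & ~gt)   # background mislabelled as vessel
    fn = np.sum(~pred & gt)   # vessel pixels missed
    ac = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn) if (tp + fn) else 0.0
    sp = tn / (tn + fp) if (tn + fp) else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return ac, se, sp, f1

# Toy 2x2 example: one TP, one FN, one FP, one TN.
pred = np.array([[1, 0], [1, 0]])
gt = np.array([[1, 1], [0, 0]])
print(segmentation_metrics(pred, gt))  # → (0.5, 0.5, 0.5, 0.5)
```

Se measures how much of the true vessel tree is recovered, while Sp measures how cleanly the background is preserved; F1 balances the two on the (minority) vessel class.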

Original language: English
Article number: 5031314
Journal: IEEE Transactions on Instrumentation and Measurement
Volume: 74
State: Published - 2025

Keywords

  • Cross-attention
  • data augmentation
  • data fusion
  • deep learning (DL)
  • fundus image
