sajjad ahmed shaaz
Under review · ICPR / Springer LNCS

Wilson-Prime Channel Attention and Gini-Adaptive Dual-Backbone Fusion of Deep Models for Medical Image Classification

Sajjad Ahmed Shaaz et al. — CMATER Lab, Jadavpur University


Abstract

Proposes a dual-backbone architecture (MobileNetV2 + DenseNet121) with novel Wilson Prime Channel Attention, Prime-Gini Adaptive Fusion, and linear-complexity global context modeling for medical image classification.

Datasets: LC25000, BreakHis, PneumoniaMNIST

Motivation

Medical image classification is a domain where the cost of a wrong prediction is high and the available training data is limited. Standard single-backbone architectures tend to specialise — MobileNetV2 captures fine local texture well, DenseNet121 captures broader structural context — but neither alone covers the full feature space reliably across pathology, histology, and radiology tasks.

The core question: can we design a fusion mechanism that is adaptive rather than fixed? Most dual-backbone work concatenates features or learns a static weighting. We wanted the fusion to respond to the content of the image.

The key ideas

Wilson Prime Channel Attention uses a primality-inspired weighting scheme to selectively amplify channels carrying non-redundant information between the two backbones. The intuition: prime-indexed channels in the feature map tend to encode less correlated information than composite-indexed ones — weighting them differently reduces redundancy in the fused representation.
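The paper's exact Wilson-prime weighting isn't reproduced here, but the general shape of the idea can be sketched as a multiplicative gain on prime-indexed channels. The gain value, the normalisation, and the function names below are illustrative assumptions, not the published formulation:

```python
import numpy as np

def is_prime(n: int) -> bool:
    """Trial-division primality test; cheap for small channel indices."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def prime_channel_attention(features: np.ndarray, prime_gain: float = 1.5) -> np.ndarray:
    """Amplify prime-indexed channels relative to composite-indexed ones.

    `features` has shape (channels, height, width). The weight vector is
    renormalised so the overall activation scale stays comparable before
    and after the reweighting.
    """
    c = features.shape[0]
    weights = np.array([prime_gain if is_prime(i) else 1.0 for i in range(c)])
    weights = weights / weights.sum() * c  # mean weight stays 1.0
    return features * weights[:, None, None]
```

With 8 channels, indices 2, 3, 5, and 7 are prime, so those four channels end up weighted above 1 and the rest below, nudging the fused representation toward the (hypothesised) less-correlated channels.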

Prime-Gini Adaptive Fusion weights each backbone's contribution dynamically, based on a Gini impurity score computed over the feature distribution. The backbone whose features show higher impurity — more discriminative spread — gets more weight for that particular input. This makes the fusion input-aware rather than globally fixed.
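A minimal numpy sketch of impurity-driven fusion, assuming pooled feature vectors of equal shape from the two backbones; the histogram binning and the softmax-free weighting are my simplifications, not the paper's exact scheme:

```python
import numpy as np

def gini_impurity(features: np.ndarray, bins: int = 16) -> float:
    """Gini impurity 1 - sum(p_i^2) of the histogrammed activation
    distribution. Higher impurity = activations spread more evenly."""
    hist, _ = np.histogram(features, bins=bins)
    p = hist / hist.sum()
    return 1.0 - float(np.sum(p ** 2))

def adaptive_fuse(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Input-aware fusion: the backbone whose features show higher Gini
    impurity gets proportionally more weight for this particular input."""
    g_a, g_b = gini_impurity(feat_a), gini_impurity(feat_b)
    w_a = g_a / (g_a + g_b + 1e-12)  # eps guards the all-constant case
    return w_a * feat_a + (1.0 - w_a) * feat_b
```

If one branch's activations collapse to a near-constant vector (impurity ≈ 0), its fusion weight goes to zero for that input, which is the "input-aware rather than globally fixed" behaviour described above.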

Linear-complexity global context modeling replaces the quadratic attention typically used for global context with a linear approximation, making it feasible at full feature resolution without prohibitive memory cost.
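One standard way to get linear-complexity global context, shown here as a sketch and not necessarily the variant used in the paper, is kernelised linear attention in the style of Katharopoulos et al.: re-associate the attention product so the O(n²) position-by-position matrix is never formed. The positive feature map phi below is an assumed elu+1 analogue:

```python
import numpy as np

def linear_global_context(x: np.ndarray) -> np.ndarray:
    """Linear self-attention over a flattened feature map.

    `x` has shape (n, d): n spatial positions, d channels. Instead of
    softmax(Q K^T) V, which costs O(n^2 * d), compute
    phi(Q) @ (phi(K)^T @ V), which costs O(n * d^2): the (d, d)
    summary phi(K)^T V is independent of sequence length.
    """
    q = k = v = x                             # self-attention: Q = K = V
    phi = lambda t: np.maximum(t, 0) + 1.0    # positive feature map
    kv = phi(k).T @ v                         # (d, d) global summary
    z = phi(q) @ phi(k).sum(axis=0)           # per-position normaliser, (n,)
    return (phi(q) @ kv) / z[:, None]
```

By associativity this is exactly equal to the quadratic form `(phi(Q) phi(K)^T) V` with row-sum normalisation, which is why it can run at full feature resolution without the quadratic memory cost.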

Writing a paper for the first time

This was my first experience going from idea to submission — not summarising others' work, but generating new ideas and being responsible for defending them.

The hardest part wasn't the implementation. It was learning to write precisely enough that a reviewer who hasn't seen your code can reproduce your method. I rewrote the methods section four times. My supervisor's feedback on the first draft was essentially: "I can tell what you built, but I can't tell why any of it is a good idea." That note changed how I think about technical writing.

The paper is under review. Whatever the outcome, the process taught me more about what research actually is than any coursework has.

