Hybridized Wavelet-Transformer-Assisted Shearlet-Ripplet (WT-SR) Framework for Medical Image Compression

C. Nandhini, G. Vijaiprabhu

Abstract

Medical image analysis requires a balance between efficient compression and accurate classification to ensure clinical applicability in storage- and bandwidth-limited environments. In this study, we propose a novel Wavelet-Transformer-Assisted Shearlet-Ripplet (WT-SR) framework that integrates multiresolution decomposition, cross-domain attention-based feature embedding, and CNN–BiLSTM classification with joint compression optimization. The framework performs patch-wise feature extraction, applies modality-aware attention to capture discriminative patterns, and leverages entropy-constrained quantization for high-fidelity compression. To validate its robustness, experiments were conducted on three benchmark brain tumor MRI datasets: Figshare, SARTAJ, and Br35H. Comparative evaluations against state-of-the-art methods, including CNN-based models, hybrid CNN–SVM, ResNet-50, CapsNet fusion, JPEG2000, ROI-JPEG, and hybrid DWT–PCA–Huffman, demonstrate that WT-SR achieves superior classification accuracy (96.6% on average) while simultaneously attaining a higher compression ratio (78.6%) and PSNR (42.3 dB). Importantly, the degradation in classification performance after compression was marginal (<0.5%), confirming clinical reliability. These results establish WT-SR as an effective end-to-end solution for medical image management, integrating diagnostic accuracy with computational efficiency. The framework is suitable for telemedicine, cloud-based medical imaging, and large-scale archival systems where diagnostic integrity and storage optimization are equally critical.
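
To make the compression stage described above concrete, the following minimal Python sketch illustrates the general idea under simplifying assumptions: a plain 2-D biorthogonal wavelet decomposition (via the PyWavelets library) and uniform coefficient quantization stand in for the paper's shearlet/ripplet transforms and entropy-constrained quantizer, and all function names and parameter values are illustrative rather than taken from the WT-SR implementation.

```python
# Minimal, illustrative sketch only; not the authors' WT-SR code.
# A plain 2-D wavelet decomposition (PyWavelets) with uniform quantization
# stands in for the shearlet/ripplet transforms and the entropy-constrained
# quantizer described in the abstract. All names and values are hypothetical.
import numpy as np
import pywt


def compress_decompress(image: np.ndarray, wavelet: str = "bior4.4",
                        levels: int = 3, step: float = 8.0) -> np.ndarray:
    """Decompose a grayscale image, quantize all subbands, and reconstruct."""
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=levels)
    quantized = [np.round(coeffs[0] / step) * step]            # approximation subband
    for cH, cV, cD in coeffs[1:]:                               # detail subbands per level
        quantized.append(tuple(np.round(c / step) * step for c in (cH, cV, cD)))
    return pywt.waverec2(quantized, wavelet)


def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (the fidelity metric quoted in the abstract)."""
    mse = np.mean((original.astype(np.float64) - reconstructed) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)


if __name__ == "__main__":
    # Random stand-in for an MRI slice; replace with a real image in practice.
    img = np.random.randint(0, 256, (256, 256)).astype(np.float64)
    rec = compress_decompress(img)[: img.shape[0], : img.shape[1]]
    print(f"Reconstruction PSNR: {psnr(img, rec):.2f} dB")
```

In this simplified setting, the quantization step size acts as the rate–distortion knob: larger steps raise the compression ratio and lower the PSNR, mirroring the trade-off the framework is described as optimizing jointly with classification.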
