REFINED MASK R-CNN MODEL TO SEGMENT MICROSCOPY IMAGES FOR ACCURATE BLOOD CANCER DETECTION


Indhumathi Palanisami
Suresh Nachimuthu

Abstract

Early detection of Acute Lymphocytic Leukemia (ALL) and Multiple Myeloma (MM) is critical for reducing mortality rates. Deep Learning (DL) models are a promising approach for the early detection of these blood malignancies. However, some models lack data-diversity enhancement and therefore cannot supply the high-quality microscopy images needed for highly accurate blood cancer detection. DeepBCDnet was developed to generate high-quality microscopic images for ALL and MM prediction using a Resolution Enhanced and Noise Suppression Generative Adversarial Network (RENS-GAN). That model, however, requires a segmentation stage to improve its accuracy. When the Mask Region-based Convolutional Neural Network (Mask R-CNN) is applied to image segmentation, its scale-invariant structure ignores spatial variation across receptive fields, causing pixels at object edges to be misclassified. This research proposes R-Mask R-CNN, a Refined Mask R-CNN that fuses deep semantic features with shallow high-resolution features in the Region Proposal Network (RPN) and Region of Interest (RoI) layers using an attention mechanism and a bottom-up structure. The model detects and segments micrographs accurately at the pixel level. Incorporating the bottom-up structure into Mask R-CNN's Feature Pyramid Network (FPN) shortens the path between the lower and upper layers, so features from the lower layers are used more effectively. Channel-wise and spatial attention mechanisms weight the feature maps to sharpen pixel-level focus. A new semantic segmentation layer replaces the earlier fully connected (FC) layer; it enables feature fusion by constructing an FPN and summing forward- and backward-propagated feature maps of identical resolution. This design improves information propagation between layers, which in turn improves detection and segmentation accuracy. To aid classification during segmentation, the network considers receptive fields of several sizes simultaneously by combining inputs from multi-scale feature maps. The mask head structure optimizes feature fusion by adjusting the input image scale. Finally, the blood cancer types (ALL and MM) are classified with Dense Convolutional Neural Networks (DCNNs). The complete model is named the Deep Blood Cancer Segmentation and Detection network (DeepBCSDnet). DeepBCSDnet outperforms state-of-the-art models in accuracy, achieving 94.71% and 95.57% across the SN-AM, MiMM_SBILab, and C-NMC datasets.
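
The abstract's key additions to Mask R-CNN are a bottom-up fusion path over the FPN outputs and channel-wise plus spatial attention over the resulting feature maps. The PyTorch sketch below is an illustrative reconstruction of those two ideas only, not the authors' implementation: the module names (ChannelSpatialAttention, BottomUpFusion), the CBAM-style attention formulation, and all hyperparameters (256 channels, four pyramid levels, reduction ratio 16) are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelSpatialAttention(nn.Module):
    """Re-weights a feature map channel-wise, then spatially (CBAM-style assumption)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from global average- and max-pooled descriptors.
        avg = self.channel_mlp(F.adaptive_avg_pool2d(x, 1).view(b, c))
        mx = self.channel_mlp(F.adaptive_max_pool2d(x, 1).view(b, c))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise mean and max maps.
        s = torch.cat([x.mean(1, keepdim=True), x.max(1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))

class BottomUpFusion(nn.Module):
    """Adds a PANet-style bottom-up path over FPN outputs, then applies attention,
    shortening the route from high-resolution lower layers to upper layers."""
    def __init__(self, channels=256, levels=4):
        super().__init__()
        self.down = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, stride=2, padding=1)
            for _ in range(levels - 1)
        )
        self.attn = nn.ModuleList(ChannelSpatialAttention(channels) for _ in range(levels))

    def forward(self, fpn_feats):  # fpn_feats: [P2, P3, P4, P5], highest resolution first
        outs = [fpn_feats[0]]
        for i, conv in enumerate(self.down):
            # Downsample the fused lower level and sum with the
            # same-resolution FPN map (fusion of maps of identical resolution).
            outs.append(conv(outs[-1]) + fpn_feats[i + 1])
        return [attn(f) for attn, f in zip(self.attn, outs)]

if __name__ == "__main__":
    # Dummy pyramid: 64x64 down to 8x8, 256 channels each.
    feats = [torch.randn(1, 256, 64 // 2**i, 64 // 2**i) for i in range(4)]
    fused = BottomUpFusion()(feats)
    print([f.shape for f in fused])

In a full R-Mask R-CNN pipeline the fused, attention-weighted maps would feed the RPN and RoI layers; that wiring, the semantic segmentation head, and the DCNN classifier are not sketched here.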

