REFINED MASK R-CNN MODEL TO SEGMENT MICROSCOPY IMAGES FOR ACCURATE BLOOD CANCER DETECTION
Abstract
Early detection of Acute Lymphocytic Leukemia (ALL) and Multiple Myeloma (MM) is critical for reducing mortality rates. Deep Learning (DL) models are a promising approach to the early detection of these blood malignancies. However, some models lack data diversity enhancement and therefore cannot supply the high-quality microscopy images needed for highly accurate blood cancer detection. DeepBCDnet was developed to generate high-quality microscopic images for the prediction of ALL and MM using a Resolution Enhanced and Noise Suppression Generative Adversarial Network (RENS-GAN). This model, however, requires a segmentation stage to improve its accuracy. When the Mask Regional Convolutional Neural Network (Mask R-CNN) is used for image segmentation, its scale-invariant structure ignores spatial variation across receptive fields, causing misclassification of pixels at object edges. This research proposes R-Mask R-CNN, a Refined Mask R-CNN that fuses deep semantic and shallow high-resolution features in the Region Proposal Network (RPN) and Region of Interest (RoI) layers using an attention mechanism and a bottom-up structure, enabling micrographs to be detected and segmented at the pixel level. Incorporating the bottom-up structure into Mask R-CNN's Feature Pyramid Network (FPN) shortens the path between the lower and upper layers, so features from the lower layers are used more effectively. To fine-tune pixel-level focus, channel-wise and spatial attention mechanisms apply weights to the feature maps. A new semantic segmentation layer replaces the earlier fully connected (FC) layer; it enables feature fusion by constructing an FPN and summing the backward and forward transmissions of feature maps of identical resolution. This layout improves data propagation between layers, which in turn improves detection and segmentation accuracy.
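The channel-wise and spatial attention weighting described above can be sketched in plain Python. This is a minimal illustration only: the helper names are hypothetical, the learned gating weights of the actual model are replaced by a fixed sigmoid of the pooled activations, and feature volumes are represented as nested lists of shape [channels][height][width].

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    """Scale each channel by a gate derived from its global average
    (a simplified squeeze-and-excitation-style step; the model's learned
    gating is replaced here by a plain sigmoid of the pooled mean)."""
    gated = []
    for channel in feature_maps:
        mean = sum(sum(row) for row in channel) / (len(channel) * len(channel[0]))
        g = _sigmoid(mean)                       # channel-wise weight in (0, 1)
        gated.append([[v * g for v in row] for row in channel])
    return gated

def spatial_attention(feature_maps):
    """Scale each spatial position by a sigmoid of its cross-channel mean,
    sharpening pixel-level focus at informative locations."""
    c = len(feature_maps)
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    mask = [[_sigmoid(sum(feature_maps[k][i][j] for k in range(c)) / c)
             for j in range(w)] for i in range(h)]
    return [[[feature_maps[k][i][j] * mask[i][j] for j in range(w)]
             for i in range(h)] for k in range(c)]
```

In the refined model these two weightings would be applied to convolutional feature maps inside the RPN and RoI layers; here they simply demonstrate the per-channel and per-pixel reweighting idea.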
To aid classification during segmentation, the network considers receptive fields of several sizes simultaneously by combining input from multi-scale feature maps, and the mask head structure optimizes feature fusion by adjusting the input image scale. Finally, the blood cancer types (ALL and MM) are classified using Dense Convolutional Neural Networks (DCNNs). The complete model is named the Deep Blood Cancer Segmentation and Detection network (DeepBCSDnet). DeepBCSDnet outperforms state-of-the-art models, achieving accuracies of 94.71% and 95.57% on the SN-AM (MiMM_SBILab) and C-NMC datasets, respectively.
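The summation of same-resolution feature maps that underlies this fusion can be sketched as follows. This is a schematic sketch only: the function names are hypothetical, nearest-neighbour upsampling stands in for the model's learned upsampling, and 2-D lists stand in for convolutional feature maps.

```python
def upsample_nearest(fmap, factor):
    """Nearest-neighbour upsample of a 2-D feature map by an integer factor:
    each value is repeated `factor` times along both axes."""
    return [[v for v in row for _ in range(factor)]
            for row in fmap for _ in range(factor)]

def fuse_levels(fine, coarse):
    """Fuse a deep, low-resolution map into a shallow, high-resolution one:
    upsample the coarse map to the fine map's resolution, then sum
    element-wise, mirroring the FPN-style summation of identical-resolution
    forward and backward feature maps."""
    factor = len(fine) // len(coarse)
    up = upsample_nearest(coarse, factor)
    return [[a + b for a, b in zip(fine_row, up_row)]
            for fine_row, up_row in zip(fine, up)]
```

In a full pyramid this fusion would be applied level by level, so each output level carries both shallow high-resolution detail and deep semantic context.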
Article Details
COPYRIGHT
Submission of a manuscript implies: that the work described has not been published before; that it is not under consideration for publication elsewhere; and that, if and when the manuscript is accepted for publication, the authors agree to automatic transfer of the copyright to the publisher.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
- The journal allows the author(s) to retain publishing rights without restrictions.
- The journal allows the author(s) to hold the copyright without restrictions.