MULTIMODAL MEDICAL IMAGE FUSION USING IHS-DTCWT-PCA INTEGRATED APPROACH FOR EXTRACTING TUMOR FEATURES

Abstract: Medical image fusion is a technique in which multiple imaging modalities are merged to obtain a single image that contains maximum information. Spatial domain approaches such as IHS-PCA fusion provide the best visual quality but require large storage space and lack directional information. We propose a unique technique to improve the fusion of MRI with PET/SPECT images. To achieve this, we integrate the Dual Tree Complex Wavelet Transform (DTCWT) with PCA-based histogram matching and IHS color space. The IHS transform is applied to convert the RGB channels of the multispectral PET/SPECT image into intensity, hue and saturation components. Pathological information in the images can be highlighted at multiple scales and in multiple directions using DTCWT. PCA with a weighted-average fusion rule is used to extract tumor features through the principal components. Qualitative and quantitative analysis shows that, compared with existing techniques, the proposed method provides higher structural information content and higher mutual information with high spatial and spectral resolution, thereby enhancing the tumor region in the merged image.


INTRODUCTION
Image fusion is the process of generating a single image by combining two or more images so as to integrate complementary and multi-view information from the source images [1]. The merged image combines the best features of the sources. Diagnosis and treatment planning for diseases such as brain tumor and Alzheimer's can be more effective with multimodal image fusion. The source images considered for the fusion process are at different resolutions and intensity values, and all the relevant features are not available in a single image of one modality. In general, anatomical information with high resolution is provided by CT and MRI, while functional information with low resolution is given by PET and SPECT images [2]. A. P. James and B. V. Dasarathy [3] argued that it is more useful and necessary to combine anatomical and functional information so as to extract more features related to pathology.
In our work we have considered MRI images in gray scale, while PET and SPECT images are shown in pseudocolor. Spatial domain approaches produce spectral distortions in the fused image; to overcome this difficulty, transform domain approaches are used to obtain better quality fused images. IHS-PCA is one of the spatial domain approaches used to preserve spatial resolution [4]-[7]. In the transform domain, DTCWT is an enhanced version of the DWT, and transform domain techniques are helpful in retrieving spectral content in the fused image. V. Bhateja et al. [8] described that DTCWT offers better shift invariance and directional selectivity than the DWT, at the cost of increased memory usage but with reduced computational time. From the images decomposed using DTCWT, principal components are extracted and a weighted-averaging fusion rule is applied to acquire the fused image.
The remaining part of the paper is structured as follows. Histogram matching and the IHS-DTCWT-PCA fusion model are described in section 2, followed by the proposed methodology in section 3. Assessment parameters are described in section 4, and section 5 presents the experimental analysis and discussion.

A. Histogram Matching
Histogram matching is a technique for finding a transformation of a discrete image such that its histogram matches a specified histogram. It equalizes the histogram of the second image to the histogram of the reference image. W. Dou and Y. Chen [5] proposed "a new histogram matching process to improve IHS method in spectral fidelity". Several techniques are described in the literature, such as exact histogram matching and multiple histogram matching.
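As an illustration, histogram matching via CDF mapping can be sketched in NumPy (a minimal sketch assuming 8-bit integer gray levels; the paper does not specify an implementation):

```python
import numpy as np

def match_histogram(source, reference, levels=256):
    """Map source gray levels so their CDF follows the reference CDF.

    Assumes integer-valued images in [0, levels).
    """
    src_hist, _ = np.histogram(source, bins=levels, range=(0, levels))
    ref_hist, _ = np.histogram(reference, bins=levels, range=(0, levels))
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For each source level, pick the reference level with the nearest CDF.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1)
    return lut[source.astype(np.intp)]
```

In the proposed pipeline this mapping would be applied to the MRI image so that its histogram matches that of the PET/SPECT intensity channel.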

B. IHS Transform
Various mathematical representations are available to convert red, green and blue (RGB) tricolor values into the parameters of human color perception. The IHS transformation is an image sharpening technique whose conversion system is a linear transform [1]. It converts the RGB channels of a multispectral image into intensity, hue and saturation components. Intensity (I) describes the amount of brightness present in the image [9], hue (H) the visual sensation of different parts of the color spectrum, and saturation (S) the purity of the spectrum [9]-[11].
The RGB-IHS conversion model is a linear transformation. C. He, Q. Liu et al. [9] exploited the IHS triangular model, indicating that "the entire IHS fusion process can lead to a fused and enhanced image". The purpose of the IHS transform [9] is to separate the intensity and color information of an RGB color image. T. Tu et al. [10], in their work "A new look at IHS-like image fusion methods", presented a detailed study demonstrating that a change in saturation during the merging process gives rise to color distortion. Thus, the I channel describes the PET image with its color information removed and therefore resembles the MRI gray scale image. Consequently, the histogram of the MRI image can be matched with that of the PET, and the low resolution PET intensity channel is replaced by the MRI image with high spatial resolution [9].
In the proposed work, the triangular spectral model is employed to obtain intensity, hue and saturation channels from RGB color space [9].
The inverse transform converts the IHS channels back to the RGB color channels. This RGB-IHS conversion process leads to an enhanced spectral image after fusion [9].
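The paper relies on the triangular model of [9], whose exact matrices are not reproduced here; as an illustration, a common linear IHS-like forward/inverse pair (an assumption, not the paper's exact model) can be sketched as:

```python
import numpy as np

# A common linear IHS-like transform used in fusion work (an assumed
# variant; the paper itself uses the triangular model of He et al. [9]).
_FWD = np.array([
    [1/3,             1/3,              1/3],
    [-np.sqrt(2)/6,   -np.sqrt(2)/6,    2*np.sqrt(2)/6],
    [1/np.sqrt(2),    -1/np.sqrt(2),    0.0],
])
_INV = np.linalg.inv(_FWD)  # exact inverse of the linear transform

def rgb_to_ihs(rgb):
    """rgb: (..., 3) array -> intensity plus two chromatic axes (I, v1, v2)."""
    return rgb @ _FWD.T

def ihs_to_rgb(ihs):
    """Inverse transform back to RGB color channels."""
    return ihs @ _INV.T
```

In the fusion pipeline, the intensity channel `ihs[..., 0]` would be replaced by the fused intensity image before inverting back to RGB.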

C. Dual Tree Complex Wavelet Transform [12]
The frequency information present in an image can be extracted by the DWT, but the DWT fails to provide sufficient directional information and suffers from shift variance and aliasing. To overcome these problems, extensions of the DWT with additional properties have been proposed. I. W. Selesnick et al. [12], in their tutorial, discussed the theory behind the DTCWT and showed how complex wavelet transforms overcome the shortcomings of the DWT such as oscillations, shift variance, aliasing and poor directionality. The DTCWT [12] is useful for deriving the structure of an image: it is nearly shift invariant, directionally selective and computationally efficient compared to the DWT [8][12][13]. The main difference from the DWT is that the DTCWT uses two filter trees instead of one [12]. Its directional selectivity plays an important role in reproducing image content across edges, borders and other important directional features, preserving edge information that is essential in medical image fusion [12][13].

D. PCA Fusion Approach
Principal Component Analysis is a linear transformation method employed to project the data onto a new coordinate system. The projection on the first coordinate provides the greatest variance and is known as the first principal component; the direction of next highest variance is the second principal component, and so on. It permits a decrease in the number of channels by reducing the inter-channel dependencies [7].
The two principal components from the matched new MRI image and the PET intensity image are combined using weights based on spatial frequency (SF) [7][16][17]. For an M × N image F(m, n), the spatial frequency is defined as

SF = √(RF² + CF²) (7)

where RF is the row frequency [17],

RF = √( (1/MN) Σ_{m=1..M} Σ_{n=2..N} [F(m, n) − F(m, n−1)]² )

and CF is the column frequency [17],

CF = √( (1/MN) Σ_{m=2..M} Σ_{n=1..N} [F(m, n) − F(m−1, n)]² )

The normalized weights w₁ and w₂ are computed as

w₁ = SF₁ / (SF₁ + SF₂), w₂ = SF₂ / (SF₁ + SF₂)

and the fused image coefficients based on SF are obtained from the weighted principal components as

F = w₁P₁ + w₂P₂

where P₁ and P₂ represent the principal components after the PCA transform.
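The SF-based weighting of eq. (7) can be sketched in NumPy as follows (a sketch only; the PCA step that produces the principal components P₁, P₂ is assumed to have been applied already):

```python
import numpy as np

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2) for an M x N array, per eq. (7)."""
    rf2 = np.sum(np.diff(img, axis=1) ** 2) / img.size  # row frequency^2
    cf2 = np.sum(np.diff(img, axis=0) ** 2) / img.size  # column frequency^2
    return np.sqrt(rf2 + cf2)

def sf_weighted_fusion(p1, p2):
    """Weighted-average rule with normalized spatial-frequency weights."""
    sf1, sf2 = spatial_frequency(p1), spatial_frequency(p2)
    w1 = sf1 / (sf1 + sf2)
    return w1 * p1 + (1.0 - w1) * p2
```

A component with zero spatial variation receives zero weight, so the fused coefficients then come entirely from the other component.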

PROPOSED METHODOLOGY
The proposed method, shown in Figure 3, considers a high resolution MRI image and a low resolution PET/SPECT image of a brain tumor for fusion. The MRI is an intensity image while the PET is in RGB format.
Step 1: The IHS transform is applied to the PET/SPECT image to separate it into intensity, hue and saturation components.
Step 2: The histogram of the MRI image is matched to that of the PET/SPECT intensity component.
Step 3: DTCWT is applied to the matched MRI image and the PET/SPECT intensity component to obtain low and high frequency subbands.
Step 4: The PCA fusion rule is applied separately on the low frequency and high frequency subband coefficients of the MRI and PET images.
Step 5: A new fused intensity image is generated by applying the inverse DTCWT.
Step 6: Finally, inverse IHS is performed on the new fused intensity image and the old hue and saturation components to obtain the final fused image back in RGB space.

PERFORMANCE PARAMETERS
The foremost aim of an image fusion process is that all functional and structural information must be preserved; at the same time, the reconstructed image must not be degraded by undesirable artifacts [8]. Performance assessment of the proposed method is conducted using various metrics. J. Kaur [18] described no-reference metrics such as Entropy (EN) [18] and Standard Deviation (SD) [18]. P. Jagalingam and A. V. Hegde [19] described full-reference metrics for quantitative analysis, such as Mutual Information (MI) [19] and Peak Signal to Noise Ratio (PSNR) [19], which account for the restored information content in the fused image.
(a) Entropy (E) [18]
The information content present in the fused image is estimated using entropy [18]; the higher the entropy value, the richer the information content in the fused image [18]. It is given by

E = − Σ_{i=0..L−1} h_f(i) log₂ h_f(i)

where E is the entropy of the fused image [18] and h_f(i) is the normalized histogram count of the fused image.

(b) Standard Deviation (SD) [18]
Standard deviation is used to quantify the contrast of an image [18]; an image with high contrast has a high standard deviation [18]. It is given by

SD = √( Σ_{i=0..L−1} (i − m)² h_f(i) )

where SD is the standard deviation of the fused image, h_f(i) is the normalized histogram count of the fused image, i is the index of summation and m is the mean of the histogram.
(c) Peak Signal to Noise Ratio (PSNR) [19]
PSNR is a widely used metric, computed from the ratio of the squared peak gray level to the mean squared error between corresponding pixels of the reference and fused images. A higher value of PSNR indicates superior fusion [19]. It is given by

PSNR = 10 log₁₀( L² / MSE ), MSE = (1/MN) Σ_{i=1..M} Σ_{j=1..N} [R(i, j) − F(i, j)]²

where L is the maximum gray level, R(i, j) is the reference image and F(i, j) is the fused image.
(d) Mutual Information (MI) [19]
MI is used to quantify the similarity of image intensity between the fused and reference images; a higher value of MI indicates better image quality [19]. For a reference image A and fused image F, it is computed from the joint and marginal gray-level distributions as

MI_AF = Σ_{a,f} p_AF(a, f) log₂( p_AF(a, f) / (p_A(a) p_F(f)) )
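Using the definitions above, the four metrics can be sketched in NumPy as follows (a sketch assuming 8-bit integer images; `levels` is an assumed parameter, not from the paper):

```python
import numpy as np

def _hist(img, levels=256):
    """Normalized gray-level histogram (probabilities)."""
    h, _ = np.histogram(img, bins=levels, range=(0, levels))
    return h / img.size

def entropy(img):
    p = _hist(img)
    p = p[p > 0]                       # 0 * log 0 is taken as 0
    return -np.sum(p * np.log2(p))

def std_dev(img):
    return float(np.std(img))          # equals sqrt(sum (i-m)^2 h_f(i))

def psnr(ref, fused, levels=256):
    mse = np.mean((ref.astype(float) - fused.astype(float)) ** 2)
    return 10 * np.log10((levels - 1) ** 2 / mse)

def mutual_information(ref, fused, levels=256):
    joint, _, _ = np.histogram2d(ref.ravel(), fused.ravel(),
                                 bins=levels, range=[[0, levels], [0, levels]])
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))
```

For a constant image the entropy is 0 bits, and MI of an image with itself equals its own entropy, which is a quick sanity check on the implementation.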

RESULTS AND DISCUSSIONS
In our work, we have proposed a new method integrating DTCWT-PCA with the IHS transform for fusion. In order to evaluate the efficiency and validity of the technique, two pairs of MRI-PET images and two pairs of MRI-SPECT images of brain tumor are chosen for fusion. In the proposed IHS-DTCWT-PCA fusion with histogram matching, the fusion process preserves the spatial information, directional information and color details of the PET and SPECT images. Assessment parameters such as Entropy (EN), Standard Deviation (SD), Mutual Information (MI) and Peak Signal to Noise Ratio (PSNR) are considered for comparison. The IHS-PCA method of fusion is performed in the spatial domain and preserves only the spatial resolution, with low contrast and low information content. In the proposed method, decomposition is performed in complex space using DTCWT, which strongly preserves directionality, and the selected principal components minimize redundancy and reduce storage space. The comparison of the fusion techniques with the assessment parameters is depicted in Table I. The proposed IHS-DTCWT-PCA integrated method gives the largest value of mutual information (MI), meaning that the fused image obtained by the proposed method preserves the original information from the two source images better than the IHS-PCA method. The entropy is also higher for the proposed method, indicating that its fused image has higher information content than that of the IHS-PCA technique. The PSNR of the proposed method is likewise higher, reflecting lower noise and greater information content in the fused image. The color information and contrast of the fused image, indicated by a high standard deviation (SD), are strong for the proposed method in the case of the MRI-PET combination. SPECT, however, is a low resolution modality that is more prone to artifacts and attenuation and highlights only the tumor area; when it is fused with MRI, only the tumor details are highlighted, with low contrast.
Hence the new fused image obtained using our technique includes both the spatial details of the MRI image and the color information of the PET/SPECT images simultaneously, which gives the best results.

CONCLUSION
The main objective of multimodal medical image fusion is to obtain high information content with high contrast, high spectral and spatial resolution, and low noise. We propose a novel technique for the fusion of PET/SPECT and MRI images. The PET/SPECT images provide color information with low spatial resolution, while MRI provides high spatial resolution without color information. Spatial domain fusion methods such as IHS-PCA lead to reduced contrast and spectral distortion. Hence we propose a method in which a transform domain technique, the DTCWT, is integrated with the IHS-PCA method so as to extract the tumor features at multiple scales and in multiple directions with high spectral and spatial resolution. The assessment parameters make clear that the proposed method outperforms with regard to tumor feature extraction, reduced storage, reduced computational complexity, human visualization and objective evaluation.