Article

An Atomic Force Acoustic Microscopy Image Fusion Method Based on Grayscale Inversion and Selection of Best-Fit Intensity

Molecular Biophysics Key Laboratory of Ministry of Education of China, Department of Biomedical Engineering, College of Life Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
*
Author to whom correspondence should be addressed.
Submission received: 16 October 2020 / Revised: 19 November 2020 / Accepted: 20 November 2020 / Published: 3 December 2020
(This article belongs to the Section Applied Biosciences and Bioengineering)

Abstract

Atomic force acoustic microscopy (AFAM) can provide the surface morphology and internal structures of samples simultaneously, with broad potential for non-destructive imaging of cells. As the outputs of AFAM, the morphology and acoustic images reflect different features of the cells. However, there are few studies on fusing these images. In this paper, a novel method is proposed to fuse these two types of images based on grayscale inversion and selection of best-fit intensity. First, grayscale inversion transforms the morphology image into a series of inverted images with different average intensities. Then, the max rule is applied to fuse the inverted images with the acoustic image, yielding a group of pre-fused images. Finally, a selector extracts and exports the expected image with the best-fit intensity from among the pre-fused images. The expected image preserves both the acoustic details of the cells and the gradient information of the background, which benefits the analysis of subcellular structure. Experimental results demonstrate that our method provides the clearest boundaries between the cells and the background and preserves most of the details from the morphology and acoustic images according to quantitative comparisons, including standard deviation, mutual information, the Xydeas and Petrovic metric, feature mutual information, and visual information fidelity fusion.

1. Introduction

High-resolution, non-destructive subcellular imaging instruments that make the activities of cells observable are in constant demand in biology. In the 1990s, the introduction of the atomic force microscope (AFM) [1] provided a high-resolution, non-destructive measurement tool at the micro- and nanoscale [2], a powerful platform on which biological samples, from single molecules to living cells, can be visualized and manipulated [3]. As a near-field microscope, AFM has high resolution in the near field, but it can hardly detect the inner structures of a sample [4].
The atomic force acoustic microscope (AFAM) is a nanoscale microscope that provides ultra-high-resolution images of a sample without destroying its intracellular structures [5]. It is developed by combining AFM with an ultrasound imaging module: an ultrasound transducer produces a single-frequency acoustic wave beneath the cell, and the vibration of the probe is detected by a spot optical tracking system. As the outputs of AFAM, the morphology and acoustic images provide different information. The morphology image, i.e., the AFM image, presents the topography of the cell by detecting the adhesion between the probe and the cell, while the acoustic image shows the phase of the acoustic wave at the same position, which reflects all contributions along the propagation path that shift the phase of the acoustic signal. In recent years, AFAM has been applied in medical and biological research, for example, the observation of liver cells [6]. AFAM is now also used to analyze small cells. Such cells are so thin that the phase shift caused by their thickness is slight; hence, the acoustic image mainly reflects the subcellular structure of the cell. Since the acoustic and morphology images provide completely different types of information, combining them is beneficial for studying the cells' structure.
Image fusion is an important research topic in many related areas, such as computer vision, remote sensing, robotics, and medical imaging [7]. It aims to combine multiple source images into a fused image that contains more information than any individual source image. Image fusion is widely used in multi-modal image processing, such as visible and infrared images, computed tomography and magnetic resonance images, and PET-CT images. Fusion strategies vary with the fusion task. The multi-scale transform (MST) is one of the most popular tools applied to image fusion and has been studied extensively. It divides the image into a low-frequency map, which shows a smoothed overall characteristic of the image, and a series of high-frequency maps, which contain the edge details. Commonly used MST methods are the Laplacian pyramid (LP) [8] and the nonsubsampled contourlet transform (NSCT) [9]. NSCT is an improved version of the contourlet transform with good fusion results; unfortunately, its computational efficiency is low. MST-based fusion strategies can be further developed by fusing the low-frequency and high-frequency maps in an effective way to acquire high fusion quality. Liu et al. [10] proposed an image fusion framework based on MSTs and sparse representation (SR) [11], which performs well for the fusion of multi-modality images but requires complex computation. In addition to MST-based methods, deep learning for image fusion has attracted much interest. In recent years, generative adversarial networks (GANs) [12] have been shown to provide more details than traditional convolutional neural networks (CNNs) in image generation. Ma et al. [13] introduced FusionGAN, an end-to-end fusion method that avoids the manually designed fusion models of traditional methods. However, the network's loss function is still designed manually, and the method only works well for visible and infrared image fusion [13].
Although various fusion methods show high performance in medical, multi-focus, and visible-infrared image processing, methods for fusing AFAM images need further research, because an observer has difficulty mentally fusing the different information from the morphology image and the acoustic image. The morphology and acoustic images provide different information about the cells, but the high intensity of the morphology image may cover the details of the acoustic image. We propose a new method that uses grayscale inversion to efficiently preserve both morphology and acoustic information. It highlights the position and shape of the cells and preserves the cells' acoustic details in a single image. First, grayscale inversion transforms the morphology image into a series of inverted images with different average intensities. Next, the max rule is applied to fuse each inverted image with the acoustic image to obtain a group of pre-fused images. Finally, a selector determines the expected image with the best-fit intensity from among the pre-fused images.

2. Proposed Method

In this section, we introduce our fusion method and the details needed to replicate the experiments.

2.1. Overview of the Fusion Method

Figure 1 shows that the proposed fusion method consists of three steps: inversion, fusion, and selection. First, the morphology image is inverted into a series of inverted images with different average intensities. The max rule is then used to fuse the inverted images with the acoustic image into a sequence of pre-fused images. Finally, a selector chooses the expected image from the pre-fused sequence as the final fused image.

2.1.1. Inversion

The first step of our proposed method is to invert the morphology image into a series of inverted images whose average intensities differ from each other. Our grayscale inversion is a simple linear transform, defined as follows:
$I_m^k(i,j) = V_{\max} - I_m(i,j) + k - 1,$
where $I_m$ is the original morphology image, $I_m^k$ is the $k$-th inverted morphology image, and $V_{\max}$ is the largest gray level of $I_m$. $k = 1, 2, \ldots, L$ indexes the collection of inverted morphology images, and $L$ is the maximum gray value of the acoustic image. We apply grayscale inversion to match the gray-level distributions of the morphology and acoustic images. Figure 2 shows the morphology, inverted, and acoustic images. The morphology image's intensity relates to the sample's height, so the bright area is the cell region and the dark area is the background. In the acoustic image, many high-intensity noise pixels surround the cell region, and the acoustic details inside the cell are darker than the same region in the morphology image. Grayscale inversion aims to produce a high-intensity background and a low-intensity cell region. The high-intensity background reduces the contrast of the noise and may even cover it, while the low-intensity cell region makes the acoustic details easier to present. The bright region, which is considered background, may exceed the bounds of the gray level. To present the cell's shape and gradient in the morphology image, the dark areas, which are considered the cell region, should take priority in the fusion. As a trade-off, we truncate the gray level to the bounds of the morphology image's gray level, i.e., for an 8-bit image, pixels whose intensity exceeds 255 are truncated to 255.
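As an illustration, the following minimal Python sketch implements the inversion step above for 8-bit images; the function name and the use of NumPy are our own choices, not part of the original implementation.

```python
import numpy as np

def invert_morphology(morph: np.ndarray, L: int) -> list:
    """Generate L inverted morphology images I_m^k = V_max - I_m + k - 1."""
    v_max = int(morph.max())                 # V_max: largest gray level of I_m
    inverted = []
    for k in range(1, L + 1):
        img = v_max - morph.astype(np.int32) + (k - 1)
        # Truncate to the 8-bit gray-level bounds, as described above.
        inverted.append(np.clip(img, 0, 255).astype(np.uint8))
    return inverted
```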

2.1.2. Max Rule

The fusion rule for the inverted and acoustic images is the max rule. At pixel $(i,j)$, it compares the intensities of the $k$-th inverted image and the acoustic image and selects the larger one as the output. It is defined as follows:
$I_f^k(i,j) = \max\left(I_m^k(i,j),\, I_a(i,j)\right),$
where $I_f^k$ is the $k$-th pre-fused image and $I_a$ is the acoustic image. The pre-fused image now combines information from both the morphology and acoustic images. In the background, because the inverted image has a higher average gray level, the pre-fused image is smooth and of high intensity. At the boundary of the cell, the gradient from the bright region to the dark region indicates the cell's shape. In the cell region, the acoustic details are preserved because they have higher intensity than the inverted image. As Figure 3 shows, as the value of $k$ increases, the pre-fused image provides more gradient information from the inverted morphology image but loses details of the acoustic image. Thus, the next step is to choose the image that preserves both morphology and acoustic information. We call this ideal image the best-fit intensity image.
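A corresponding sketch of the max rule, again with illustrative names, could look like this:

```python
import numpy as np

def max_rule_fuse(inverted_k: np.ndarray, acoustic: np.ndarray) -> np.ndarray:
    """Pixel-wise max rule: I_f^k(i,j) = max(I_m^k(i,j), I_a(i,j))."""
    return np.maximum(inverted_k, acoustic)

# One pre-fused image per inverted image:
# pre_fused = [max_rule_fuse(inv, acoustic) for inv in inverted]
```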

2.1.3. Selector

To choose the best-fit intensity image, a selector is designed to measure the information loss of the pre-fused images and select the one with the best-fit intensity as the final fused image.
Because the aim of AFAM fusion is to combine the information of the cell, the regions of the cells should be extracted, and the fusion problem can then be simplified to the fusion problem within the cell regions. To acquire the position and shape of the cell, a cell extraction algorithm based on Otsu's approach [14] was developed, with erosion and dilation steps used to reduce the influence of noise and smooth the boundary of the cell region. Our cell extraction method is a three-step process. The first step is detection: Otsu's threshold is used to segment the morphology image into $CEL_0$ and $BKG$ at gray level $t$, which is defined as follows:
$BKG = \{0, 1, 2, \ldots, t\},$
$CEL_0 = \{t + 1, t + 2, \ldots, L - 1\},$
where $t$ is the threshold separating the background and the cell. To clarify, Otsu's threshold is not an accurate segmentation method because it does not perform well on smooth boundaries; however, our method is not sensitive to the segmentation accuracy.
However, small objects that are not regions of interest in terms of content but fall in the same gray-level range degrade the fusion performance. We assume that the main part of the morphology image is the cell and that noise occupies small, separated regions. To clear these regions, the second step, erosion, is defined as follows:
$CEL_1 = (I_m - BKG) \ominus A,$
$L_A = \mathrm{round}\left(\left|CEL_0\right| \times (p_1 - 1)\right),$
where $A$ is a disk-shaped structuring element, $L_A$ is the size of $A$, and $p_1 = 1.2$ is the multiple coefficient that determines how much the original cell segmentation is narrowed. The function of $A$ is to clear the noise regions that share the intensity features of the cells, and a proper value of $p_1$ clears all the noise regions while still preserving the cell region. Under our hypothesis, the noise regions are removed during erosion, but the cell region is narrowed as well. Therefore, the third step, dilation, is used to recover the cell region. It is defined as follows:
$CEL = CEL_1 \oplus B,$
$L_B = \mathrm{round}\left(\left|CEL_1\right| \times (1 - p_2)\right),$
where $B$ is a disk-shaped structuring element, $L_B$ is the size of $B$, and $p_2$ is a multiple coefficient that determines the extension of $CEL_1$. The function of $B$ is to recover the cell region and smooth the boundary of the cell. The value of $p_2$ is determined by $p_1$ as follows:
$p_2 = 1 / p_1.$
Finally, we obtain $CEL$, which represents the cell region, and $BKG$, which represents the background. The relationship between the morphology image, $BKG$, and $CEL$ is defined as follows:
$I_m = CEL \cup BKG.$
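For readers who want to reproduce the cell extraction, the following sketch combines the three steps using scikit-image. The structuring-element sizing is an assumption (an equivalent-radius heuristic), since the extracted equations for $L_A$ and $L_B$ do not fully specify how $|CEL_0|$ and $|CEL_1|$ are measured; function and variable names are illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import disk, binary_erosion, binary_dilation

def extract_cell(morph: np.ndarray, p1: float = 1.2) -> np.ndarray:
    """Return a boolean mask CEL of the cell region; its complement is BKG."""
    # Step 1: detection with Otsu's threshold.
    t = threshold_otsu(morph)
    cel0 = morph > t
    # Step 2: erosion with disk A to remove small, separated noise regions.
    # Assumed sizing: radius proportional to the equivalent radius of CEL_0.
    r_a = max(1, int(round(np.sqrt(cel0.sum() / np.pi) * (p1 - 1.0))))
    cel1 = binary_erosion(cel0, disk(r_a))
    # Step 3: dilation with disk B to recover the narrowed cell region.
    p2 = 1.0 / p1
    r_b = max(1, int(round(np.sqrt(cel1.sum() / np.pi) * (1.0 - p2))))
    cel = binary_dilation(cel1, disk(r_b))
    return cel
```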
Next, to set the criterion for the best-fit intensity image among the pre-fused images, we define a multi-regional information loss (MRIL) to measure the information loss. It is a two-stage computation that combines the information loss inside and outside the cell. To define this criterion, the characteristics of the inverted morphology and acoustic images should be taken into consideration. As shown in Figure 2, the inverted morphology and acoustic images carry different information in different regions. In the cell region, because of grayscale inversion, the intensities of the inverted morphology image are weaker than those of the background and provide little information about the inner structures, whereas the acoustic image provides much richer details there. Thus, the fused image should preserve as many details of the acoustic image as possible. In the background, the inverted morphology image is used to show the cell's shape and position, which are not clear in the acoustic image. To measure the total information loss, MRIL is defined as follows:
$L_{total}^k = \frac{L_{CEL}^k + L_{BKG}^k}{2},$
where $L_{CEL}^k$ is the loss in the cell region and $L_{BKG}^k$ is the loss in the background. They are given by:
$L_{CEL}^k = \frac{\sum_{(i,j) \in CEL} \left| I_f^k(i,j) - I_{ac}(i,j) \right|}{\left| CEL \right|},$
$L_{BKG}^k = rSFe\left(I_f^k, I_m^k\right),$
where $I_f^k$ is the $k$-th pre-fused image, $I_{ac}$ is the acoustic image, and $rSFe(I_f^k, I_m^k)$ has a definition similar to the ratio of spatial-frequency error [15]. The difference is that our rSFe measures the spatial-frequency error between the pre-fused image and the inverted morphology image in the region $BKG$, instead of between the fused image and the source images. We choose rSFe to measure the loss in the background for two reasons. First, the spatial frequency remains the same regardless of the value of $k$. Second, rSFe is a suitable regularization that makes the background as bright as possible to cover the acoustic noise, and it is not sensitive to the segmentation accuracy of the cell. Given that the acoustic image is characterized by its pixel intensities, MRIL is sensitive to the pixels preserved from the acoustic image in the cell region.
According to the definition of information loss, the expected image is the one with the lowest $L_{total}^k$. Thus, the selected image is defined as follows:
$I_f = I_f^{k_0}, \quad k_0 = \arg\min_k L_{total}^k.$
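A compact sketch of the MRIL computation and the selector is shown below; it follows the equations as reconstructed above, the spatial-frequency error here is a simplified stand-in for the rSFe of [15], and all function and variable names are illustrative.

```python
import numpy as np

def spatial_frequency(img: np.ndarray) -> float:
    """Classic spatial frequency: sqrt(row frequency^2 + column frequency^2)."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))

def mril(pre_fused, inverted, acoustic, cel) -> float:
    """Multi-regional information loss: average of cell and background losses."""
    bkg = ~cel
    # L_CEL: mean absolute deviation from the acoustic image inside the cell.
    l_cel = np.mean(np.abs(pre_fused[cel].astype(float) - acoustic[cel].astype(float)))
    # L_BKG: relative spatial-frequency error w.r.t. the inverted image in BKG.
    sf_f = spatial_frequency(np.where(bkg, pre_fused, 0))
    sf_m = spatial_frequency(np.where(bkg, inverted, 0))
    l_bkg = abs(sf_f - sf_m) / (sf_m + 1e-12)
    return float((l_cel + l_bkg) / 2.0)

# Selector: keep the pre-fused image with the lowest MRIL.
# k0 = min(range(len(pre_fused_seq)),
#          key=lambda k: mril(pre_fused_seq[k], inverted_seq[k], acoustic, cel))
# fused = pre_fused_seq[k0]
```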

2.2. Experiment

In this section, we introduce the experiments of our method, including data acquisition and performance metrics; the results and discussion are presented in Section 3.

2.2.1. Data Acquisition

All the AFAM images used in the experiments were acquired by a CSPM5500, an open atomic force acoustic microscope from Guangzhou Benyuan Nano-Instruments Ltd., Guangzhou, China. Escherichia coli (E. coli) and Staphylococcus aureus (S. aureus) were the AFAM measurement samples; they were stored by the China Center for Type Culture Collection in Wuhan (Wuhan H22). As shown in Figure 4, 16 pairs of source images were employed to verify the effectiveness of the competing methods: 15 pairs of S. aureus images and 1 pair of E. coli images. Four fusion methods, including the Laplacian pyramid (LP) [8], the nonsubsampled contourlet transform (NSCT) [9], the fusion method based on the Laplacian pyramid and sparse representation (LP-SR) [10], and FusionGAN [13], were applied for comparison of fusion performance. The parameters of the competing methods were set according to their original papers, with our AFAM images as the source images. Meanwhile, since visible-infrared image fusion has a pattern similar to AFAM imaging, we used our AFAM image database to retrain FusionGAN to obtain better fusion performance on the AFAM fusion task.

2.2.2. Performance Metrics

Because a reference image does not exist, it is difficult to quantitatively evaluate the quality of a fused image. In recent years, many fusion metrics have been proposed to evaluate fusion performance, but none of them is universally accepted; therefore, it is necessary to apply several fusion metrics. To better evaluate the fusion performance in the cell region, we manually segmented the source images into cell and background, and all the evaluation metrics measure the fusion performance within the cell areas. Segmentation results are shown in Figure 5. For quantitative evaluation, we chose the following five metrics: standard deviation (SD) [16], mutual information (MI) [17], the Xydeas and Petrovic metric ($Q^{AB/F}$) [18], feature mutual information (FMI) [19], and visual information fidelity fusion (VIFF) [20]. Their definitions are given below, and a small code sketch for two of them follows the list.
  • SD measures the dispersion of the image intensities around their mean, reflecting the fused image's contrast. Mathematically, SD is defined as follows:
    $SD = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_F(i,j) - \mu\right)^2},$
    where $I_F$ is the fused image of size $M \times N$ and $\mu$ is its mean value.
  • MI measures the degree of dependency between two variables $U$ and $V$. It is defined as follows:
    $I_{UV}(U,V) = \sum_{v \in V}\sum_{u \in U} p(u,v)\,\log_2\frac{p(u,v)}{p(u)\,p(v)},$
    where $p(u,v)$ is the joint distribution and $p(u)$ and $p(v)$ are the marginal distributions. Considering two source images $A$ and $B$ and the fused image $F$, MI is defined as follows:
    $MI = I_{FA}(f,a) + I_{FB}(f,b).$
  • $Q^{AB/F}$ is an objective performance metric which measures the relative amount of edge information that is transferred from the source images to the fused image. It is defined as follows:
    $Q^{AB/F} = \frac{\sum_{i=1}^{N}\sum_{j=1}^{M}\left[Q^{AF}(i,j)\,w^{A}(i,j) + Q^{BF}(i,j)\,w^{B}(i,j)\right]}{\sum_{i=1}^{N}\sum_{j=1}^{M}\left[w^{A}(i,j) + w^{B}(i,j)\right]},$
    $Q^{AF}(i,j) = Q_g^{AF}(i,j)\,Q_o^{AF}(i,j),$
    $Q^{BF}(i,j) = Q_g^{BF}(i,j)\,Q_o^{BF}(i,j),$
    where $Q_g^{XF}(i,j)$ and $Q_o^{XF}(i,j)$ are the edge strength and orientation preservation values at pixel $(i,j)$, and $w^{A}(i,j)$ and $w^{B}(i,j)$ are the weights of $Q^{AF}(i,j)$ and $Q^{BF}(i,j)$.
  • FMI computes the amount of feature information transferred from the source images to the fused image; a gradient map of an image provides information about texture, edge strength, and contrast.
  • VIFF is a recently proposed metric that measures the visual information fidelity between the source images and the fused image. It evaluates the visual information of every block in each sub-band of the fused image based on the Gaussian scale mixture model, a distortion model, and a human visual system (HVS) model.
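As a rough illustration of how two of these metrics can be computed inside the cell mask, the following sketch implements SD and MI with NumPy; it is a simplified version for clarity, not the reference implementations of [16,17].

```python
import numpy as np

def sd_metric(fused: np.ndarray, mask: np.ndarray) -> float:
    """Standard deviation of the fused image restricted to the cell mask."""
    vals = fused[mask].astype(np.float64)
    return float(np.sqrt(np.mean((vals - vals.mean()) ** 2)))

def mutual_information(a: np.ndarray, b: np.ndarray, mask: np.ndarray,
                       bins: int = 256) -> float:
    """MI between two images over the masked pixels, from a joint histogram."""
    hist, _, _ = np.histogram2d(a[mask], b[mask], bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(u)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(v)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# MI for the fusion task: MI = I(F, A) + I(F, B)
# mi_total = mutual_information(fused, morph, cel) + mutual_information(fused, acoustic, cel)
```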

3. Results and Analysis

First, we compare the fusion quality of the different methods visually. In Figure 6, three typical image pairs are selected to give an intuitive view of the fusion performance. The first and second rows examine each method's ability to preserve the details of the acoustic image, while the third row requires the fusion method to distinguish two cells. The first two columns are the morphology and acoustic images, the last column presents the fusion result of our method, and the other columns are the results of the compared fusion methods. From the results, it is easy to conclude that our fused images are the best at preserving acoustic information: the inverted morphology part highlights the shape of the cell without covering the acoustic details. LP and LP-SR present similar fusion results; both acquire the boundary and acoustic information of the cells, but they lack the acoustic information inside the cell. The fusion result of NSCT shows that the edges of the inner structures are sharp and the shape of the cells is easy to distinguish, but the contrast of the inner structures is low. FusionGAN is good at preserving morphology information, but it only keeps the gradient information of the acoustic image; as a result, the internal structures of the cells remain low-contrast and hard to find. Our method's fusion result differs markedly from the others, which is the consequence of the grayscale inversion applied to the morphology image. The area outside the cells is not the region of interest, so the strong intensity of the inverted morphology image covers the acoustic noise, which is why the background region is bright. The cell region is considered to contain most of the acoustic information, and the low intensity of the morphology information there contrasts strongly with the acoustic information and highlights the details of the internal structures of the cells.
Moreover, the gradient from the bright background to the dark cell region forms the boundary. As a result, the results obtained by the proposed method have sharp edges, more details, enhanced contrast inside the internal structures, and clear cell boundaries. The analysis of the fused images' features indicates that applying grayscale inversion significantly highlights the cell and prevents bright morphology pixels from covering the cell structure.
To assess the fusion quality in the cell region, we further give quantitative comparisons of the five methods. We first manually segment the morphology image to obtain a weight map of the cell. Then, we apply the weight map to pre-process the fused image, the morphology image, and the acoustic image. Next, the five fusion metrics are used to evaluate the fusion quality of the pre-processed fused image. Table 1, Table 2, Table 3, Table 4 and Table 5 list the metrics for the 16 images; the 9th image is E. coli and the others are S. aureus. Table 6 shows the results of the t-tests between our proposed method and each compared method. According to Table 1, Table 2 and Table 3, our method has the largest MI, $Q^{AB/F}$, and FMI on almost all images. The largest MI implies that our method preserves the most information from the source images; this is because it is designed to preserve the intensities of the acoustic image in the cell region, which are much stronger than the morphology information. The largest $Q^{AB/F}$ demonstrates that our fused images contain rich edge information; since the acoustic image provides most of the edge information, we can also consider that our fused images retain the most acoustic information. The largest FMI likewise shows that our method preserves the most features, including texture, edge strength, and contrast. Although the advantages in VIFF and SD are not as large as in the above three metrics, they still have the highest averages among the compared methods, and the t-test results confirm the significance of our method. The VIFF results further show that our method agrees well with the human visual system; although the VIFF of NSCT is close to ours, the other metrics imply a clear gap between NSCT and the proposed method. The SD results show that our fused images have high contrast, though not decisively the best. According to Table 5, FusionGAN shows much better performance than the other methods on the 10th image, which leads to the low significance in the t-test shown in Table 6. Further analysis shows that the SD of our method depends strongly on the acoustic image: the low SD of the fused 10th image results from the low contrast of the 10th acoustic image. The fusion metrics above indicate that our method is good at preserving acoustic detail inside the cell, which is beneficial for analyzing the nanoscale and subcellular structure of the cell.
To compare the computational cost, we list the operation times for processing a pair of images in Table 7. Since FusionGAN runs in TensorFlow, it is not directly comparable with the other methods, which run in MATLAB. Our method is slower than LP-SR and LP, but it can still fuse a pair of images in less than one second.

4. Conclusions

In this paper, a novel fusion method for AFAM images was proposed based on grayscale inversion and best-fit intensity selection. Grayscale inversion changes the grayscale characteristics of the morphology image, and by inverting a single image into a series of inverted images and selecting the most suitable one as the output, the method avoids the trade-off between high-intensity morphology information and complex acoustic details. The image segmentation method based on Otsu's approach segments the cell and background regions, which greatly reduces the interference of the morphology image and improves segmentation precision. The best-fit intensity selection measures the total information loss of the pre-fused images and chooses the best one as the output, with MRIL used as the optimization criterion. Experiments comparing different fusion methods show that our method is the best at presenting the features of the source images and preserving information from the AFAM images. The quantitative comparisons also show that the contrast of our fused images depends on the source acoustic images, which reflects the fact that our method is designed to restore the acoustic image in the cell region. The results indicate that our method is good at highlighting the cell and revealing subcellular structure, which is beneficial for the analysis of the cell's structure. Currently, the quality of the acoustic image limits the fusion performance; in future work, we will focus on the fusion rules for the inverted and acoustic images and make our fusion model more robust and efficient.

Author Contributions

Conceptualization, X.L. and Z.C.; methodology, Z.C.; investigation, Z.C. and X.L.; validation, Z.C.; data curation, Z.C.; resources, X.L.; writing-original draft preparation, Z.C.; writing-review and editing, X.L. and M.D.; supervision, M.D.; project administration, M.D.; funding acquisition, M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (grant No. 81571754), and partly supported by the Major National Key Scientific Instrument and Equipment Development Project (grant No. 2013YQ160551) and the National Undergraduate Innovation and Entrepreneurship Research Practice Program (No. 201918YB05).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. O'Shea, S.J.; Welland, M.E. Atomic force microscope. Phys. Rev. Lett. 1992, 59, Unit 2C.2.
  2. Ikai, A.; Afrin, R.; Saito, M.; Watanabe-Nakayama, T. Atomic force microscope as a nano- and micrometer scale biological manipulator: A short review. Semin. Cell Dev. Biol. 2018, 73, 132–144.
  3. Dufrêne, Y.F.; Ando, T.; Garcia, R.; Alsteens, D.; Martinez-Martin, D.; Engel, A.; Gerber, C.; Müller, D.J. Imaging modes of atomic force microscopy for application in molecular and cell biology. Nat. Nanotechnol. 2017, 12, 295–307.
  4. Rabe, U. Atomic force acoustic microscopy. Nanosci. Technol. 2006, 15, 1506–1511.
  5. Wang, T.; Ma, C.; Hu, W. Visualizing subsurface defects in graphite by acoustic atomic force microscopy. Microsc. Res. Tech. 2017, 80, 66–74.
  6. Li, X.; Lu, A.; Deng, W.; Su, L.; Wang, J.; Ding, M. Noninvasive subcellular imaging using atomic force acoustic microscopy (AFAM). Cells 2019, 8, 314.
  7. Pure, A.A.; Gupta, N.; Shrivastava, M. An overview of different image fusion methods for medical applications. Int. J. Sci. Eng. Res. 2013, 4, 129–133.
  8. Burt, P.J.; Adelson, E.H. The Laplacian pyramid as a compact image code. IEEE Trans. Commun. 1983, 31, 532–540.
  9. Chen, X.W.; Ge, J.; Liu, J.G. Non-subsampled contourlet texture retrieval using four estimators. Appl. Mech. Mater. 2012, 263, 167–170.
  10. Liu, Y.; Liu, S.; Wang, Z. A general framework for image fusion based on multi-scale transform and sparse representation. Inf. Fusion 2015, 24, 147–164.
  11. Yang, B.; Li, S. Multifocus image fusion and restoration with sparse representation. IEEE Trans. Instrum. Meas. 2010, 59, 884–892.
  12. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.C.; Bengio, Y. Generative adversarial networks. arXiv 2014, arXiv:1406.2661.
  13. Ma, J.; Yu, W.; Liang, P.; Li, C.; Jiang, J. FusionGAN: A generative adversarial network for infrared and visible image fusion. Inf. Fusion 2019, 48, 11–26.
  14. Jong, G.; Hendrick, H.; Wang, Z.; Kurniadi, D.; Aripriharta, A.; Horng, G. Implementation of Otsu's method in vein locator devices. Int. J. Adv. Sci. Eng. Inf. Technol. 2018, 8, 743–748.
  15. Zheng, Y.; Essock, E.A.; Hansen, B.C.; Haun, A.M. A new metric based on extended spatial frequency and its application to DWT based fusion algorithms. Inf. Fusion 2007, 8, 177–192.
  16. Rao, Y.J. In-fibre Bragg grating sensors. Meas. Sci. Technol. 1997, 8, 355.
  17. Ma, J.; Ma, Y.; Li, C. Infrared and visible image fusion methods and applications: A survey. Inf. Fusion 2018, 45, 153–178.
  18. Xydeas, C.S.; Petrovic, V.S. Objective image fusion performance measure. Electron. Lett. 2000, 36, 308–309.
  19. Asha, C.S.; Lal, S.; Gurupur, V.P.; Saxena, P.P. Multi-modal medical image fusion with adaptive weighted combination of NSST bands using chaotic grey wolf optimization. IEEE Access 2019, 7, 40782–40796.
  20. Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level image fusion. Inf. Fusion 2017, 33, 100–112.
Figure 1. Flow chart of the proposed method.
Figure 2. (a) Morphology image, (b) inverted morphology image, and (c) acoustic image.
Figure 3. The pre-fused images with different k values: (a) k = 0, (b) k = 50, (c) k = 100, (d) k = 150, (e) k = 200, (f) k = 250.
Figure 4. Sixteen pairs of morphology and acoustic images.
Figure 5. Segmentation results of the compared images. The red circles are the regions of cells.
Figure 6. Fusion results on three typical pairs of morphology and acoustic images. (From left to right: morphology image, acoustic image, Laplacian pyramid (LP), nonsubsampled contourlet transform (NSCT), fusion method based on Laplacian pyramid and sparse representation (LP-SR), FusionGAN, proposed.)
Table 1. Mutual information (MI) between proposed and compared methods. The bold represents the best value.

Image  LP      NSCT    LP-SR   FusionGAN  Proposed
1      3.4138  3.5749  3.7945  4.4764     7.7349
2      3.8354  4.2508  4.1686  4.8191     8.0714
3      3.7562  4.0227  3.7419  4.7124     7.9157
4      4.3860  4.9246  4.6630  5.3442     8.8741
5      3.4103  3.8163  3.8791  4.3792     8.7135
6      4.0918  4.5772  4.3210  4.5940     7.8550
7      3.7621  4.2687  4.0084  4.7183     7.8393
8      3.3896  4.3825  4.3964  4.7479     8.1814
9      2.2528  2.3980  2.6728  3.6253     7.0375
10     3.1164  4.0883  3.1396  5.3285     6.3951
11     3.9045  4.0335  4.2035  4.4849     7.7655
12     3.4848  3.8969  3.7320  4.1230     8.0500
13     4.2784  4.5256  4.1659  4.5776     8.0357
14     4.8693  5.3385  4.7021  5.1016     8.8752
15     3.5955  3.8034  3.7454  3.9664     7.9100
16     3.4814  3.7239  3.9381  3.9120     8.3066
Table 2. Xydeas and Petrovic metric ($Q^{AB/F}$) between proposed and compared methods. The bold represents the best value.

Image  LP      NSCT    LP-SR   FusionGAN  Proposed
1      0.3402  0.4143  0.3211  0.1969     0.6621
2      0.3506  0.4801  0.3490  0.2113     0.7311
3      0.3383  0.4259  0.3068  0.2457     0.6585
4      0.3116  0.4388  0.2884  0.2585     0.6331
5      0.3978  0.5288  0.3846  0.2410     0.6962
6      0.3695  0.5073  0.3417  0.2488     0.7218
7      0.3437  0.4264  0.3379  0.2317     0.7033
8      0.3885  0.4846  0.3971  0.2116     0.7361
9      0.3552  0.3298  0.3597  0.1052     0.6111
10     0.3217  0.5089  0.3352  0.2033     0.6087
11     0.2937  0.4144  0.2804  0.1764     0.5886
12     0.3587  0.5032  0.3679  0.3092     0.6906
13     0.3007  0.4015  0.3057  0.2616     0.5988
14     0.3207  0.4383  0.3246  0.2190     0.5420
15     0.3995  0.5233  0.4140  0.2292     0.6239
16     0.4181  0.5515  0.4269  0.2641     0.7636
Table 3. Feature mutual information (FMI) between proposed and compared methods. The bold represents the best value.

Image  LP      NSCT    LP-SR   FusionGAN  Proposed
1      0.3058  0.3343  0.2972  0.2452     0.5037
2      0.2888  0.3258  0.2855  0.2525     0.5203
3      0.2962  0.3229  0.2851  0.2548     0.5122
4      0.3555  0.3821  0.3381  0.3038     0.5323
5      0.3501  0.3752  0.3451  0.2977     0.5412
6      0.3114  0.3447  0.3030  0.2646     0.5212
7      0.3227  0.3558  0.3145  0.2537     0.5402
8      0.2877  0.3254  0.2847  0.1620     0.5330
9      0.3176  0.3305  0.3172  0.2837     0.5121
10     0.2619  0.3228  0.2676  0.2571     0.5686
11     0.3115  0.3541  0.3107  0.1512     0.4802
12     0.3166  0.3559  0.3151  0.3400     0.4911
13     0.3195  0.3540  0.3175  0.3290     0.4784
14     0.3652  0.3996  0.3647  0.3530     0.5091
15     0.3175  0.3529  0.3175  0.3231     0.4554
16     0.3119  0.3421  0.3119  0.3115     0.4884
Table 4. The visual information fidelity fusion (VIFF) between proposed and compared methods. The bold represents the best value.

Image  LP      NSCT    LP-SR   FusionGAN  Proposed
1      0.6500  0.7718  0.6699  0.5156     0.9963
2      0.6620  0.9232  0.6685  0.6199     1.1875
3      0.6651  0.7690  0.6993  0.6375     1.1042
4      0.6088  0.7561  0.6281  0.5929     1.0091
5      0.8627  0.9899  0.9435  0.6675     1.1277
6      0.7850  0.9653  0.8214  0.6810     1.1944
7      0.6903  0.8971  0.6923  0.5714     1.2731
8      0.8110  1.0376  0.9095  0.6186     1.2405
9      0.6954  0.6459  0.6968  0.5432     0.8411
10     0.7083  1.1616  0.6935  1.2347     1.2102
11     0.6955  0.7798  0.7026  0.4981     0.9608
12     0.7036  0.8571  0.7122  0.7240     1.0765
13     0.7189  0.7677  0.7398  0.6632     0.9949
14     0.6750  0.7842  0.6979  0.5911     0.8309
15     0.7891  0.8699  0.7959  0.6262     1.0216
16     0.7700  0.9220  0.7949  0.7075     1.1852
Table 5. Standard deviation (SD) between proposed and compared methods. The bold represents the best value.

Image  LP       NSCT     LP-SR    FusionGAN  Proposed
1      36.9658  39.8320  39.9261  34.5990    40.1020
2      38.4776  47.6098  47.4358  43.2366    50.6317
3      37.7895  37.6134  41.4245  33.5233    42.3467
4      41.5586  46.5872  47.7077  41.2347    50.8664
5      31.9550  42.2352  37.7491  39.0611    48.8546
6      33.1649  39.8875  41.0251  33.8348    45.9419
7      28.9697  35.6636  33.4225  34.4395    41.0504
8      33.3158  41.9951  42.7390  46.4726    47.0966
9      37.9474  35.4992  31.9065  29.4996    34.1644
10     9.4739   18.4304  8.5953   41.7376    12.6174
11     40.2326  44.4862  42.3177  42.0241    38.2377
12     33.8317  39.4966  36.3585  41.0672    44.4249
13     38.0081  40.5067  38.7161  37.3651    42.8768
14     43.4355  49.6466  43.2625  41.2975    44.2562
15     36.2075  37.5342  39.7434  38.7902    45.0851
16     35.6061  39.3488  42.3115  43.2114    48.3688
Table 6. T-test between proposed and compared methods.

Method     MI             $Q^{AB/F}$     FMI            VIFF           SD
LP         1.38 × 10^-15  4.17 × 10^-14  8.07 × 10^-12  9.99 × 10^-9   1.83 × 10^-4
NSCT       9.14 × 10^-14  8.71 × 10^-10  3.35 × 10^-11  8.38 × 10^-8   5.46 × 10^-2
LP-SR      7.01 × 10^-17  2.64 × 10^-13  5.51 × 10^-12  4.07 × 10^-8   5.42 × 10^-4
FusionGAN  1.02 × 10^-11  3.25 × 10^-14  7.40 × 10^-10  4.45 × 10^-8   1.64 × 10^-1
Table 7. Operation time of the fusion methods.

Method    Time (s)
NSCT      1.114156
LP        0.005338
LP-SR     0.040919
Proposed  0.4031
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
