Article

Deep Learning-Based Layer Identification of 2D Nanomaterials

School of Computer Science and Technology, Changchun Normal University, Changchun 130032, China
* Authors to whom correspondence should be addressed.
Submission received: 22 September 2022 / Revised: 5 October 2022 / Accepted: 10 October 2022 / Published: 14 October 2022
(This article belongs to the Special Issue 2D Materials-Based Thin Films and Coatings)

Abstract
Two-dimensional (2D) nanomaterials exhibit unique properties due to their low dimensionality, giving them great potential for applications in biopharmaceuticals, aerospace, energy storage, mobile communications and other fields. Today, 2D nanomaterials are often prepared and exfoliated by a combination of mechanical and manual methods, which makes their production inefficient and prevents standardized, industrialized manufacturing. Recent breakthroughs in deep learning-based semantic segmentation have enabled the accurate identification and segmentation of the atomic layers of 2D nanomaterials under optical microscopy. In this study, we analyzed in detail sixteen semantic segmentation models that perform well on public datasets and applied them to the layer identification and segmentation of graphene and molybdenum disulfide. Furthermore, we improved the U2-Net model to obtain 2DU2-Net, which achieved the best overall performance: an accuracy of 99.03%, a Kappa coefficient of 95.72%, a Dice coefficient of 96.97%, and a mean intersection over union of 94.18%. It also performed well in terms of computation, number of parameters, inference speed and generalization ability. The results show that deep learning-based semantic segmentation methods can greatly improve efficiency and replace most manual operations, and that different types of semantic segmentation methods can be adapted to the different properties of 2D nanomaterials, thus promoting the research and application of 2D nanomaterials.

1. Introduction

Two-dimensional (2D) nanomaterials were introduced by K.S. Novoselov et al. [1], who successfully isolated monolayer graphene. Through continuous research, 2D nanomaterials have shown many excellent optical [2], electrical [3], thermal [4], magnetic [5] and mechanical [6] properties, which have attracted extensive attention from the scientific community. Today, the preparation of 2D nanomaterials has evolved from mechanical exfoliation to liquid-phase exfoliation, vapor-phase deposition, wet chemical synthesis and other methods [7]. At the same time, researchers have discovered more 2D nanomaterials, including the single-element materials silicene, germanene, stanene, borophene, tellurene and black phosphorus [8,9]; transition metal dichalcogenides [10], such as MoS2, WSe2, ReS2, PtSe2, NbSe2, etc.; main-group metal chalcogenides [11], such as GaS, InSe, SnS, SnS2, etc.; two-dimensional transition metal carbides/nitrides (MXenes) [12]; and other two-dimensional nanomaterials, such as h-BN [13] and Bi2O2Se [14]. After extensive research and accumulating evidence, the diverse types and properties of 2D nanomaterials have shown great potential for applications in biopharmaceuticals [15], aerospace [16], energy storage [17], integrated circuits [18], mobile communications [19], and other fields.
The physical, chemical, thermal, and electrical properties of 2D nanomaterials vary greatly with atomic layer thickness, making both single-layer and few-layer 2D nanomaterials highly valuable for research, and the optical properties of 2D nanomaterials with different layer numbers often differ markedly [2]. Traditionally, the number of layers in 2D nanomaterials is measured using optical microscopy (OM) [20], atomic force microscopy (AFM) [21], or optical methods such as Raman spectroscopy [22]. However, these methods are labor-intensive, inefficient and wasteful of resources. For many chemically unstable 2D nanomaterials, such as Bi2Sr2CaCu2O8+δ [23], which degrades in a short time, conventional methods require a large investment in personnel and equipment to succeed. Therefore, it is important to find a more general and efficient method for the layer identification of 2D nanomaterials.
With the rapid development of deep learning techniques, many deep learning-based semantic segmentation network models have emerged [24,25] that show excellent performance and powerful generalization ability and can handle most semantic segmentation tasks. OM images of 2D nanomaterials are rich in physical and chemical information, so it is advantageous to use deep learning-based semantic segmentation models to identify the number of atomic layers in such images. At the same time, many methods have emerged in materials science that explore material properties through deep learning [26], which has played a crucial role in advancing the discipline. To this end, we carefully selected 16 semantic segmentation models that perform well on public datasets for identifying and segmenting the atomic layers of 2D nanomaterials, and trained them on graphene and molybdenum disulfide (MoS2) OM image data with pixel-level labeling. We found that the U2-Net [27] model performs well in recognizing the layers of 2D nanomaterials and has the best overall performance in terms of computation, number of parameters, compatibility and training deployment. Therefore, building on these experiments, the network structure of U2-Net [27] was adapted through multi-scale connections and a pyramid pooling module to obtain a new model, denoted 2DU2-Net. Given the characteristics of nanomaterials, this model is better able to identify fine, discrete and edge regions of 2D nanomaterials.
The use of a semantic segmentation model based on deep learning to identify the number of layers of 2D nanomaterials can save a lot of human and material resources, which is of great importance for the research of 2D nanomaterials. The contributions of this paper are summarized as follows:
(1)
Sixteen different types of semantic segmentation models were used to analyze their specific effects on 2D nanomaterial OM images.
(2)
The U2-Net [27] model, based on an encoder–decoder architecture, was found to offer good performance and environmental adaptability without a backbone network, making it suitable for a variety of 2D nanomaterial detection applications.
(3)
We improved the model structure of U2-Net [27] by means of multiscale connectivity and pyramidal pooling to obtain a 2DU2-Net model that is more adaptable to two-dimensional nanomaterial layer identification and segmentation.

2. Related Work

In recent years, many 2D nanomaterial layer recognition methods have been proposed, and deep learning-based semantic segmentation methods have shown greater advantages than traditional methods.
In the early days, when deep learning could not be used effectively due to hardware limitations, machine learning was commonly used to tackle such problems, and the field of 2D nanomaterials was no exception. Dezhen Xue et al. [28] accelerated the search for new materials with target properties through a self-designed machine learning framework. Mathew J. Cherukara et al. [29] accurately predicted the physical, chemical and mechanical properties of nanomaterials through machine learning. Yuhao Li et al. [30] used a machine learning approach to assist optical microscopy in identifying two-dimensional nanomaterials. Yu Mao et al. [31] used machine learning to analyze information related to single-layer continuous films in MoS2 Raman spectra. Wenbo Sun et al. [32] used machine learning to assist in the design and prediction of high-performance organic photovoltaic materials. Ya Zhuo et al. [33] efficiently predicted inorganic phosphors using a machine learning approach. These machine learning approaches often require complex feature engineering on experimental data [34], which is labor-intensive and not conducive to efficient 2D nanomaterial layer studies.
With the continuous development of deep learning techniques, the use of deep learning-based semantic segmentation models to identify the layers of 2D nanomaterials has become increasingly mature. Bingnan Han et al. [35] identified and characterized 2D nanomaterials by designing a 2DMOINet model based on an encoder–decoder structure. Bin Wu et al. [36], on the other hand, identified 2D nanomaterials by improving the SegNet model [37]. Satoru Masubuchi et al. [38] used a Mask-RCNN-based neural network model [39] combined with optical microscopy to automatically search for 2D nanomaterials. Li Zhu et al. [40] used an artificial neural network (ANN) [41] to identify and characterize 2D nanomaterials and van der Waals heterostructures. Jaimyun Jung et al. [42] used a ResNet approach [43] for structural and mechanical analysis of 2D nanomaterials by applying super-resolution (SR) imaging to them. Yashar Kiarashinejad et al. [44] used a dimensionality-reduction-based deep learning approach to design electromagnetic nanostructures. Sicheng Wu et al. [45] accelerated the discovery of two-dimensional catalysts for hydrogen evolution reactions by combining a crystal graph convolutional neural network model (CGCNN) [46] with deep learning algorithms. These deep learning-based semantic segmentation models avoid the complex feature-extraction process of machine learning algorithms and have more outstanding capabilities in high-level abstract feature extraction. They can also be trained end-to-end, providing better adaptability and deployment capabilities.
Both machine learning algorithms and deep learning-based network models, such as 2DMOINet [35], SegNet [37], Mask-RCNN [39], ANN [41], ResNet [43], and CGCNN [46], adopt early network model design ideas. Recently, many network models with better performance on public datasets have emerged, and we carefully selected and classified 16 of the best: the U-Net [47] and U2-Net [27] models based on encoder–decoder structures; the PSPNet [48] and PFPNNet [49] models based on multi-scale and pyramid structures; the DeepLabV3 [50] and DeepLabV3+ [51] models based on dilated convolution; the DNLNet [52], DANNet [53], ISANet [54] and OCRNet [55] models based on attention mechanisms; the STDC-Seg [56] and BiseNetv2 [57] real-time semantic segmentation models; and other high-performing models, including FCN [58], HRNet [59], SFNet [60], and ANN [41]. These models have been integrated and refined in PaddleSeg [61] and outperform earlier deep learning-based semantic segmentation models in overall performance, generalization, and deployment, achieving advanced mean intersection over union (MIoU) [61] on public datasets such as Cityscapes [62].
With the development of deep learning, most problems previously solved by machine learning can now be solved by deep learning methods. Semantic segmentation techniques based on deep learning have made great progress in performance, generalization ability, and ease of deployment. We will continue to work on more advanced image segmentation algorithms for 2D nanomaterial layer recognition and segmentation.

3. Materials and Methods

In order to better identify and segment the 2D nanomaterials presented in OM images, the U2-Net [27] network model was restructured to better identify discrete, small and edge regions of 2D nanomaterials by combining contextual information. This section first describes the U2-Net [27] model and then the 2DU2-Net model obtained by adjusting the network structure and adding a pyramid pooling module. Finally, the workflow for the layer identification and segmentation of 2D nanomaterials is explained.

3.1. Network Module Design

With the emergence of the U-Net [47] and SegNet [37] network models, the encoder–decoder structure has become mainstream in semantic segmentation. Network models based on encoder–decoder structures are structurally stable, do not require a pre-trained backbone network, are highly adaptable to complex environments, and have been widely used in medical imaging [63]. U2-Net [27] uses a two-level nested U-shaped structure based on the encoder–decoder design; its top level is a large U-shaped structure consisting of 11 stages, as shown in Figure 1. Each stage is filled with a residual U-block (RSU) [27] modified from the residual block [43], as shown in Figure 2a,b and Figure 3a. Inspired by U-Net3+ [64], a multilayer concatenation operation was added on top of the RSU block to fuse multi-scale and contextual information, as shown in Figure 3b.
As can be seen in Figure 3a, in the original design each decoder layer simply fuses the information from the encoder layer symmetrical to it and the decoder layer below it. With the improved multi-layer connection shown in Figure 3b, each decoder layer, except the first and last, is connected to all encoder layers above the corresponding level. This facilitates the acquisition of the low-resolution feature-map information located in the bottom layers.
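The following is a minimal PyTorch sketch of this connection pattern; the module name, channel sizes and call signature are illustrative assumptions, not the authors' implementation (the paper itself builds on PaddleSeg [61]). Each decoder level resizes the encoder maps it receives to its own resolution, then fuses them by concatenation and convolution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFuse(nn.Module):
    """Fuse encoder feature maps of different resolutions at one decoder level."""
    def __init__(self, in_channels_list, out_channels):
        super().__init__()
        # one convolution fuses all concatenated branches
        self.conv = nn.Conv2d(sum(in_channels_list), out_channels, 3, padding=1)

    def forward(self, target_size, features):
        # resize every incoming encoder map to the decoder's own resolution
        resized = [F.interpolate(f, size=target_size, mode="bilinear",
                                 align_corners=False) for f in features]
        return self.conv(torch.cat(resized, dim=1))

# Usage: fuse three encoder maps (64/128/256 channels) at a 64x64 decoder level
fuse = MultiScaleFuse([64, 128, 256], 64)
feats = [torch.randn(1, 64, 128, 128), torch.randn(1, 128, 64, 64),
         torch.randn(1, 256, 32, 32)]
out = fuse((64, 64), feats)  # shape: (1, 64, 64, 64)
```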
Inspired by PSPNet [48], a pyramid pooling module was added to the encoder output of the first nested RSU block to fuse multi-scale and contextual information, as shown in Figure 4 and Figure 5.
As can be seen from Figure 4, the final encoder output of the RSU block is pooled to 1 × 1, 2 × 2, 3 × 3 and 6 × 6 resolutions using 2D adaptive max pooling and then up-sampled back to the size of the input feature map. Finally, feature fusion is achieved by concatenation and convolution operations. By inserting the pyramid pooling module, global information is better captured, and detailed segmentation capability can be improved even with a small number of samples.
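A hedged PyTorch sketch of the module as just described: adaptive max pooling to the four bin sizes, up-sampling back to the input resolution, then concatenation and a fusing convolution. The 1 × 1 branch-reduction convolutions and the channel widths are our assumptions (in the style of PSPNet [48]), not the authors' exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidMaxPool(nn.Module):
    def __init__(self, channels, bins=(1, 2, 3, 6)):
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveMaxPool2d(b) for b in bins)
        # assumed: 1x1 convs reduce each pooled branch before fusion
        self.reduce = nn.ModuleList(nn.Conv2d(channels, channels // 4, 1)
                                    for _ in bins)
        self.fuse = nn.Conv2d(channels + len(bins) * (channels // 4),
                              channels, 3, padding=1)

    def forward(self, x):
        size = x.shape[2:]
        # pool to each bin size, reduce channels, then up-sample back
        branches = [F.interpolate(r(p(x)), size=size, mode="bilinear",
                                  align_corners=False)
                    for p, r in zip(self.pools, self.reduce)]
        # concatenate the input with all branches and fuse by convolution
        return self.fuse(torch.cat([x] + branches, dim=1))

ppm = PyramidMaxPool(64)
y = ppm(torch.randn(1, 64, 32, 32))  # shape: (1, 64, 32, 32)
```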

3.2. Network Architecture Design

U2-Net [27] consists of three main components: (1) a six-stage encoder, (2) a five-stage decoder, and (3) a saliency map fusion module. In the encoder stages En_1, En_2, En_3, En_4, En_5, and En_6, we used RSU-7, RSU-6, RSU-5, RSU-4, RSU-4F, and RSU-4F, respectively, where the number denotes the height of the RSU block. The decoder stages De_1, De_2, De_3, De_4, and De_5 have structures similar to their symmetric encoder stages, as shown in Figure 1 for the original U2-Net [27] and Figure 5 for the 2DU2-Net model after adjusting the network structure.
Figure 5 shows the 2DU2-Net model obtained by adjusting the network structure. Following the design ideas of U-Net3+ [64] and PSPNet [48], multi-scale connections and pyramid pooling modules are used in the encoder and decoder stages. In the encoder stages En_1, En_2, En_3, and En_4 and the corresponding decoder stages, the purple blocks in the middle part are connected across multiple layers, and the output of the first purple block passes through the pyramid pooling module represented by the yellow block. In the En_5, En_6 and De_5 stages, only the pyramid pooling module represented by the yellow block is used, owing to the small size of the output feature map.

3.3. Loss Functions

The dataset used in the experiments is divided into three categories and labeled at the pixel level, where the single layer is red, the double layer is green, and the background is black, as shown in Figure 6. We therefore chose the cross-entropy loss function [65] to improve the segmentation accuracy across categories; its formula is as follows:
$$\mathrm{Loss} = -\frac{1}{N}\sum_{i}\sum_{c=1}^{M} y_{ic}\,\log\left(p_{ic}\right) \tag{1}$$
In Equation (1), p_ic is the predicted probability that sample i belongs to category c; y_ic is an indicator that takes the value 1 if the true category of sample i equals c and 0 otherwise; M is the number of label categories; and N is the total number of pixels.
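Equation (1) can be implemented directly. A minimal NumPy sketch, assuming one-hot labels y and predicted class probabilities p for N pixels and M classes (the experiments themselves rely on PaddleSeg's [61] loss implementation):

```python
import numpy as np

def cross_entropy_loss(y, p, eps=1e-12):
    # y: (N, M) one-hot ground truth; p: (N, M) predicted probabilities.
    # eps guards against log(0).
    return -np.mean(np.sum(y * np.log(p + eps), axis=1))

# Example: three pixels, three classes (background, single layer, double layer)
y = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
p = np.array([[0.9, 0.05, 0.05], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]])
print(cross_entropy_loss(y, p))  # = -(log 0.9 + log 0.7 + log 0.7) / 3 ≈ 0.273
```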

3.4. Overall Flow of the Experiment

In the data collection and processing phase, a deep learning-based object detection model [74] can be used to assist OM and AFM devices in identifying and detecting 2D nanomaterials. After acquiring the data, different types of data labels can be created with data labeling tools according to the type and characteristics of the 2D nanomaterials. For 2D nanomaterials for which data acquisition and training are difficult, the data can be expanded by generative adversarial networks [67], data augmentation [68], and semi-supervised [69] or weakly supervised [70] training. After collecting and processing the data, a suitable network model needs to be selected according to the properties of the 2D nanomaterials. For chemically stable 2D nanomaterials, a network model with a backbone network and a large number of convolutional layers can be chosen. For discrete, fragmented and low-contrast 2D nanomaterials, a network model based on an attention mechanism can be chosen [71]. For 2D nanomaterials with unstable chemistry and high demands on the experimental environment, a lightweight network model with real-time performance can be chosen [25]. In summary, for the wide variety of 2D nanomaterials with complex structures and different properties, different types of deep learning network models can be chosen. After training, the models need to be deployed; for different types of devices, suitable network models can be deployed through model compression [72], transfer learning [73], etc.

4. Results and Discussion

4.1. Data Sets

The data used in this paper were obtained from the open-source project by Yu Saito et al. [66]. The acquired open-source data were reorganized as needed for subsequent model training and improvement. According to Yu Saito et al. [66], graphene and MoS2 were mechanically exfoliated onto SiO2/Si substrates, graphene and MoS2 images were acquired by OM from different angles, and the thickness and number of layers were determined and labeled at the pixel level using AFM and comparison methods. In this paper, 68 images were collected, including 33 of MoS2 and 35 of graphene, as shown in Figure 6. The single layer is labeled in red, the double layer in green, and the background in black. To improve the learning accuracy of the network models and prevent overfitting, data augmentation was performed by randomly cropping, flipping, rotating and distorting the original images. Finally, the labeled dataset was randomly divided into training and testing sets in a ratio of 8:2, yielding 2000 training images and 500 test images.
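A hedged sketch of the augmentation and 8:2 split just described; the crop ratio, the exact transform set and the loading code are our assumptions, not the authors' pipeline:

```python
import random
from PIL import Image

def augment(image, label):
    """Randomly flip, rotate and crop an (image, label) pair, then resize to 512x512."""
    if random.random() < 0.5:                    # random horizontal flip
        image = image.transpose(Image.FLIP_LEFT_RIGHT)
        label = label.transpose(Image.FLIP_LEFT_RIGHT)
    angle = random.choice([0, 90, 180, 270])     # right-angle rotation keeps labels aligned
    image = image.rotate(angle, expand=True)
    label = label.rotate(angle, expand=True)     # default nearest resampling suits labels
    w, h = image.size                            # random crop to 80% of the original
    cw, ch = int(w * 0.8), int(h * 0.8)
    x, y = random.randint(0, w - cw), random.randint(0, h - ch)
    box = (x, y, x + cw, y + ch)
    image = image.crop(box).resize((512, 512), Image.BILINEAR)
    label = label.crop(box).resize((512, 512), Image.NEAREST)  # keep label colors exact
    return image, label

pairs = []  # fill with (image, label) PIL pairs generated by augment()
random.shuffle(pairs)
split = int(0.8 * len(pairs))                    # 8:2 split -> 2000 train / 500 test here
train_set, test_set = pairs[:split], pairs[split:]
```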

4.2. Evaluation Indicators

The variety of 2D nanomaterials, the complexity of their preparation, the difficulty of their preservation, and the differences between their properties make it essential to select a comprehensive and reliable network model. At the same time, a network model for practical applications should be evaluated not only on accuracy but also on robustness, scalability and resource requirements. To evaluate these realistic OM images and network models, six evaluation metrics were used [24,25,75,76,77]: (1) giga floating-point operations (GFOPs), (2) Params, (3) Accuracy, (4) the Kappa coefficient (Kappa), (5) the Dice coefficient (Dice), and (6) the mean intersection over union (MIoU). They are described as follows:
(1) GFOPs [25] are the number of computations required for model inference and can be used to measure the complexity of the model.
(2) Params [76] refers to how many parameters the model contains, which directly determines the size of the model and affects memory usage during model inference.
(3) Accuracy [24,25,75] is a metric used to evaluate classification models, i.e., the proportion of model predictions that are correct, and is given by the following formula:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{2}$$
In the confusion matrix, TP denotes true positives, TN true negatives, FP false positives, and FN false negatives.
(4) Kappa [77] is a metric used for consistency testing. Consistency refers to whether the model predictions agree with the actual classifications, so Kappa can be used to measure the effectiveness of the classification, with the following formula:
$$\mathrm{Kappa} = \frac{p_o - p_e}{1 - p_e} \tag{3}$$
where p_o is the sum of the numbers of correctly classified samples in each category divided by the total number of samples, i.e., the overall classification accuracy, and p_e is the sum over categories of the products of the actual and predicted sample counts, divided by the square of the total number of samples.
(5) Dice [24] is a set similarity measure used to calculate the similarity of two samples, and it is often used to evaluate the quality of segmentation algorithms. Its formula is as follows:
$$\mathrm{Dice} = \frac{2\,|A \cap B|}{|A| + |B|} \tag{4}$$
where |A ∩ B| is the number of elements in the intersection of A and B, and |A| and |B| denote the numbers of elements in A and B, respectively. The factor of 2 in the numerator accounts for the double counting of the elements common to A and B in the denominator.
(6) MIoU [24,25,75] is a standard measure for semantic segmentation that averages the ratio of intersection to union over all classes. Its formula is as follows:
$$\mathrm{MIoU} = \frac{1}{n_c}\sum_{i}\frac{n_{ii}}{\sum_{j} n_{ij} + \sum_{j} n_{ji} - n_{ii}} \tag{5}$$
where n_ij denotes the number of pixels that actually belong to category i but are predicted as category j, n_ii denotes the number of pixels of category i that are correctly predicted as category i, and n_c denotes the total number of categories.
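All four label-based metrics can be computed from a single confusion matrix C, where C[i][j] counts the pixels of true class i predicted as class j. A minimal NumPy sketch of Equations (2)–(5) for illustration (the reported numbers come from PaddleSeg's [61] evaluators; the example matrix is invented):

```python
import numpy as np

def metrics_from_confusion(C):
    C = np.asarray(C, dtype=float)
    total = C.sum()
    tp = np.diag(C)                                  # per-class correct pixels
    po = tp.sum() / total                            # Accuracy, Eq. (2)
    # chance agreement: sum of (row total x column total) / total^2
    pe = (C.sum(axis=0) * C.sum(axis=1)).sum() / total ** 2
    kappa = (po - pe) / (1 - pe)                     # Kappa, Eq. (3)
    dice = np.mean(2 * tp / (C.sum(axis=0) + C.sum(axis=1)))   # macro Dice, Eq. (4)
    miou = np.mean(tp / (C.sum(axis=0) + C.sum(axis=1) - tp))  # MIoU, Eq. (5)
    return po, kappa, dice, miou

# Example: 3 classes (background, single layer, double layer)
C = [[950, 20, 5],
     [15, 280, 10],
     [5, 10, 150]]
print(metrics_from_confusion(C))
```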

4.3. Network Training and Results Analysis

To test and analyze the segmentation performance of U2-Net [27], 2DU2-Net and 15 other network models on the dataset, we compared the U2-Net [27] and 2DU2-Net networks with the U-Net [47], PSPNet [48], PFPNNet [49], DeepLabV3 [50], DeepLabV3+ [51], DNLNet [52], DANNet [53], ISANet [54], OCRNet [55], STDC-Seg [56], BiseNetv2 [57], FCN [58], HRNet [59], SFNet [60] and ANN [41] network models, both quantitatively and qualitatively.

4.3.1. Training Setup

In the training process, each OM image was resized to 512 × 512. ResNet [43], HRNet [59] and STDC [56] were used as the backbone networks for the models that require one. During pre-training it was found that the loss converged after about 30 iterations, so the number of training iterations was set to 50; the data are shown in Figure 7. The initial learning rate was set to 0.01 and was decayed over the course of training. SGD was used as the optimizer, with a batch size of 16 and random initialization, and the whole experiment took about 100 h. A 64-bit Windows 11 operating system was used. The networks were built, trained and tested with PaddleSeg [61]. Details of the configuration are as follows: Anaconda3, PaddlePaddle 2.3.1, PaddleSeg 2.6.0, OpenCV 4.1.1, CUDA 10.2 and cuDNN 7.6.
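The text does not name the exact decay schedule; a common choice (and a usual PaddleSeg default) is polynomial decay of the learning rate. A minimal sketch, assuming power 0.9 and treating the 50 training "iterations" as 50 passes of 125 batches each (2000 images at batch size 16) — both assumptions are ours:

```python
# Polynomial learning-rate decay, a hedged guess at the unspecified schedule.
def poly_lr(base_lr, step, max_steps, power=0.9, end_lr=0.0):
    """Decay base_lr toward end_lr as training progresses."""
    return (base_lr - end_lr) * (1 - step / max_steps) ** power + end_lr

base_lr = 0.01           # initial learning rate from the paper
max_steps = 50 * 125     # assumed: 50 passes x (2000 images / batch size 16)
for step in (0, 1000, 3000, 6249):
    print(step, round(poly_lr(base_lr, step, max_steps), 5))
```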

4.3.2. Discussion and Analysis of Results

The performance metrics GFOPs, Params, MIoU, Accuracy, Kappa and Dice for U2-Net [27], 2DU2-Net and the 15 other network models are shown in Table 1. The GFOPs of U2-Net [27] were 51.32, its Params 4.36 M, MIoU 93.74%, Accuracy 98.85%, Kappa 95.38% and Dice 96.73%. U2-Net [27] has the lowest parameter count and one of the lowest computational costs, and its performance is very competitive with most models in terms of MIoU, Accuracy, Kappa and Dice. The 2DU2-Net model outperforms the U2-Net [27] model and outperforms most other models in terms of image detail and edge processing, as shown in Figure 8. Models using backbone networks tend to have large GFOPs and Params, especially models such as PSPNet [48] and DNLNet [52], which use ResNet [43] as the backbone. Lightweight networks such as BiseNetv2 [57] and STDC-Seg [56], together with U2-Net [27] and 2DU2-Net, stand out in efficiency, reaching a lightweight level in inference speed, GFOPs and Params.
Some of the prediction results of U2-Net [27], 2DU2-Net and the other network models are shown in Figure 8. From top to bottom, the first row shows the input image, the second row the labeled image, the third row the predicted image of 2DU2-Net, followed by the predicted images of the other network models, where the yellow boxes indicate the segmentation of detail regions, the blue boxes indicate misclassifications, and the purple boxes indicate unsegmented regions. After comparison with the labeled and input images, it was found that all network models could accurately extract the color and location of large exfoliated, high-contrast areas. 2DU2-Net had the best segmentation results in distinguishing detailed, scattered and edge regions of 2D nanomaterials, with finer contour lines than the other models and the most detailed segmentation of scattered regions, without segmentation errors. The segmentation of U2-Net [27] has shortcomings and errors at the edges but remains competitive with the other models in performance and results. In terms of segmentation refinement and correctness, the U-Net [47], PSPNet [48], PFPNNet [49], DNLNet [52], OCRNet [55], BiseNetv2 [57] and SFNet [60] models suffer from classification errors on very small details. In discrete regions with low contrast, the PSPNet [48], DeepLabV3+ [51], DNLNet [52], DANNet [53], ISANet [54], BiseNetv2 [57], HRNet [59], SFNet [60] and ANN [41] models leave some fine areas unsegmented. The DeepLabV3 [50], STDC-Seg [56] and FCN [58] models have minor shortcomings in edge detail compared with 2DU2-Net. Overall, the segmentation results of these models are far superior to those of earlier deep learning-based segmentation models, with 2DU2-Net outperforming the others in performance, practical results and detail. It can be seen that 2DU2-Net has a better ability to identify the number of layers in 2D nanomaterials.
The experimental results show that the 2DU2-Net and U2-Net [27] models, based on an encoder–decoder structure without a backbone network, have greater advantages in computation, number of parameters, model performance, inference speed, robustness and generalization ability. They are also effective in segmenting 2D nanomaterial images in practical tests and deployment. It is worth noting that U-shaped models are widely used in medical imaging [63,78], whose segmentation setting is similar to that of 2D nanomaterial OM images. At the same time, the other types of network models also show a clear improvement in segmentation results over machine learning-based and earlier neural network models, and their efficiency and refinement far exceed those of manual methods.

5. Conclusions

We carefully selected 16 deep learning-based semantic segmentation models that have recently been proposed and achieved good results on public datasets. These models were carefully tuned for optimal performance prior to experimentation, and we trained them on graphene and MoS2 datasets. After quantitative and qualitative analysis, it was found that these models achieved results well above the level of manual analysis, with the 2DU2-Net and U2-Net [27] models achieving the best overall performance. In the tests, the 2DU2-Net model achieved 99.03% Accuracy, 95.72% Kappa, 96.97% Dice and 94.18% MIoU. The 2DU2-Net and U2-Net [27] models also performed better than the other models in computation, number of parameters, inference speed and generalization ability. 2DU2-Net is designed to better segment 2D nanomaterials at edges and in discrete regions. It is based on the U2-Net [27] model and adjusts its network structure through multi-scale connections and pyramid pooling modules, which improve the segmentation performance over U2-Net [27] and give it a more outstanding capability to process detail than the other models.
The results show that deep learning-based semantic segmentation models are novel tools for the fast identification of layers in 2D nanomaterials, and that these trained models can efficiently identify 2D nanomaterials other than those used for training, with good generalization ability and high accuracy. Secondly, transfer learning and model optimization methods can be used to better distinguish single-layer, double-layer and multi-layer regions for different types of 2D nanomaterials. Finally, the inference process of a deep learning-based model can be adapted to various devices and can be run on a remote server.
Next, we will search for more semantic segmentation models and train them on a wider range of 2D nanomaterials. At the same time, we will develop a segmentation toolkit for 2D nanomaterials that can be applied to a wider range of 2D nanomaterials research. Finally, this study will help to optimize the research process of 2D nanomaterials and open up new avenues for layer identification in 2D nanomaterials.

Author Contributions

Conceptualization, Y.Z.; methodology, H.Z.; investigation, S.Z. and H.Z.; resources Y.Z. and G.L.; software, H.Z. and J.Z.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 61604019; the Joint Fund of the Science & Technology Department of Liaoning Province and the State Key Laboratory of Robotics, China, grant number 2020-KF-22-08; and the Changchun Normal University Graduate Research Innovation Project, grant number 2022-094.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data and materials used to prepare this manuscript are not publicly available.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Novoselov, K.S.; Geim, A.K.; Morozov, S.V.; Jiang, D.; Zhang, Y.; Dubonos, S.V.; Grigorieva, I.V.; Firsov, A.A. Electric Field Effect in Atomically Thin Carbon Films. Science 2004, 306, 666–669. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Li, X.; Han, W.P.; Wu, J.; Qiao, X.F.; Zhang, J.; Tan, P. Layer-Number Dependent Optical Properties of 2D Materials and Their Application for Thickness Determination. Adv. Funct. Mater. 2017, 27, 1604468. [Google Scholar] [CrossRef]
  3. Liu, Y.; Xiao, C.; Li, Z.; Xie, Y. Vacancy Engineering for Tuning Electron and Phonon Structures of Two-Dimensional Materials. Adv. Energy Mater. 2016, 6, 1600436. [Google Scholar] [CrossRef]
  4. Song, H.; Liu, J.; Liu, B.; Wu, J.; Wu, J.; Cheng, H.M.; Cheng, H.M.; Kang, F. Two-Dimensional Materials for Thermal Management Applications. Joule 2018, 2, 442–463. [Google Scholar] [CrossRef] [Green Version]
  5. Thiel, L.; Wang, Z.; Tschudin, M.A.; Rohner, D.; Gutiérrez-Lezama, I.; Ubrig, N.; Gibertini, M.; Giannini, E.; Morpurgo, A.F.; Maletinsky, P. Probing magnetism in 2D materials at the nanoscale with single-spin microscopy. Science 2019, 364, 973–976. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Ma, X.; Liu, L.; Zhang, Z.; Wei, Y. Bending Stiffness of Circular Multilayer van der Waals Material Sheets. J. Appl. Mech. 2022, 89, 031011. [Google Scholar] [CrossRef]
  7. Cai, X.; Luo, Y.; Liu, B.; Cheng, H.M. Preparation of 2D material dispersions and their applications. Chem. Soc. Rev. 2018, 47, 6224–6266. [Google Scholar] [CrossRef] [PubMed]
  8. Mannix, A.J.; Kiraly, B.; Hersam, M.C.; Guisinger, N.P. Synthesis and chemistry of elemental 2D materials. Nat. Rev. Chem. 2017, 1, 0014. [Google Scholar] [CrossRef]
  9. Xie, Z.; Zhang, B.; Ge, Y.; Zhu, Y.; Nie, G.; Song, Y.; Lim, C.K.; Zhang, H.; Prasad, P.N. Chemistry, Functionalization, and Applications of Recent Monoelemental Two-Dimensional Materials and Their Heterostructures. Chem. Rev. 2021, 122, 1127–1207. [Google Scholar] [CrossRef] [PubMed]
  10. Wang, Q.H.; Kalantar-zadeh, K.; Kis, A.; Coleman, J.N.; Strano, M.S. Electronics and optoelectronics of two-dimensional transition metal dichalcogenides. Nat. Nanotechnol. 2012, 7, 699–712. [Google Scholar] [CrossRef] [PubMed]
  11. Kooi, B.J.; Noheda, B. Ferroelectric chalcogenides—Materials at the edge. Science 2016, 353, 221–222. [Google Scholar] [CrossRef] [PubMed]
  12. VahidMohammadi, A.; Rosen, J.; Gogotsi, Y. The world of two-dimensional carbides and nitrides (MXenes). Science 2021, 372, eabf1581. [Google Scholar] [CrossRef] [PubMed]
  13. Weng, Q.; Wang, X.; Wang, X.; Bando, Y.; Golberg, D.V. Functionalized hexagonal boron nitride nanomaterials: Emerging properties and applications. Chem. Soc. Rev. 2016, 45, 3989–4012. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Wu, J.; Yuan, H.; Meng, M.; Chen, C.; Sun, Y.; Chen, Z.; Dang, W.; Tan, C.; Liu, Y.; Yin, J.; et al. High electron mobility and quantum oscillations in non-encapsulated ultrathin semiconducting Bi2O2Se. Nat. Nanotechnol. 2017, 12, 530–534. [Google Scholar] [CrossRef]
  15. Huang, H.; Feng, W.; Chen, Y. Two-dimensional biomaterials: Material science, biological effect and biomedical engineering applications. Chem. Soc. Rev. 2021, 50, 11381–11485. [Google Scholar] [CrossRef]
  16. Vogl, T.; Sripathy, K.; Sharma, A.; Reddy, P.; Sullivan, J.; Machacek, J.R.; Zhang, L.; Karouta, F.; Buchler, B.C.; Doherty, M.W.; et al. Radiation tolerance of two-dimensional material-based devices for space applications. Nat. Commun. 2019, 10, 1202. [Google Scholar] [CrossRef] [Green Version]
  17. Xiong, G.; He, P.; Wang, D.; Zhang, Q.; Chen, T.-f.; Fisher, T.S. Hierarchical Ni–Co Hydroxide Petals on Mechanically Robust Graphene Petal Foam for High-Energy Asymmetric Supercapacitors. Adv. Funct. Mater. 2016, 26, 5460–5470. [Google Scholar] [CrossRef]
  18. Das, S.; Sebastian, A.; Pop, E.; McClellan, C.J.; Franklin, A.D.; Grasser, T.; Knobloch, T.; Illarionov, Y.; Penumatcha, A.V.; Appenzeller, J.; et al. Transistors based on two-dimensional materials for future integrated circuits. Nat. Electron. 2021, 4, 786–799. [Google Scholar] [CrossRef]
  19. Cheng, Z.; Cao, R.; Wei, K.; Yao, Y.; Liu, X.; Kang, J.; Dong, J.; Shi, Z.; Zhang, H.; Zhang, X. 2D Materials Enabled Next-Generation Integrated Optoelectronics: From Fabrication to Applications. Adv. Sci. 2021, 8, 2003834. [Google Scholar] [CrossRef]
  20. Li, H.; Wu, J.; Huang, X.; Lu, G.; Yang, J.; Lu, X.; Xiong, Q.; Zhang, H. Rapid and reliable thickness identification of two-dimensional nanosheets using optical microscopy. ACS Nano 2013, 7, 10344–10353. [Google Scholar] [CrossRef]
  21. Resta, A.; Leoni, T.; Barth, C.; Ranguis, A.; Becker, C.; Bruhn, T.; Vogt, P.; Le Lay, G. Atomic Structures of Silicene Layers Grown on Ag(111): Scanning Tunneling Microscopy and Noncontact Atomic Force Microscopy Observations. Sci. Rep. 2013, 3, 2399. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Yin, P.; Lin, Q.; Duan, Y. Applications of Raman spectroscopy in two-dimensional materials. J. Innov. Opt. Health Sci. 2020, 13, 2030010. [Google Scholar] [CrossRef]
  23. Yu, Y.; Ma, L.; Cai, P.; Zhong, R.; Ye, C.; Shen, J.; Gu, G.; Chen, X.H.; Zhang, Y. High-temperature superconductivity in monolayer Bi2Sr2CaCu2O8+δ. Nature 2019, 575, 156–163. [Google Scholar] [CrossRef] [PubMed]
  24. Minaee, S.; Boykov, Y.; Porikli, F.M.; Plaza, A.J.; Kehtarnavaz, N.; Terzopoulos, D. Image Segmentation Using Deep Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 3523–3542. [Google Scholar] [CrossRef] [PubMed]
  25. Takos, G. A Survey on Deep Learning Methods for Semantic Image Segmentation in Real-Time. arXiv 2020, arXiv:abs/2009.12942. [Google Scholar]
  26. Choudhary, K.; DeCost, B.L.; Chen, C.; Jain, A.; Tavazza, F.; Cohn, R.; Park, C.W.; Choudhary, A.N.; Agrawal, A.; Billinge, S.J.L.; et al. Recent advances and applications of deep learning methods in materials science. npj Comput. Mater. 2022, 8, 59. [Google Scholar] [CrossRef]
  27. Qin, X.; Zhang, Z.V.; Huang, C.; Dehghan, M.; Zaiane, O.R.; Jägersand, M. U2-Net: Going Deeper with Nested U-Structure for Salient Object Detection. arXiv 2020, arXiv:abs/2005.09007. [Google Scholar] [CrossRef]
  28. Xue, D.; Balachandran, P.V.; Hogden, J.; Theiler, J.; Xue, D.; Lookman, T. Accelerated search for materials with targeted properties by adaptive design. Nat. Commun. 2016, 7, 11241. [Google Scholar] [CrossRef] [Green Version]
  29. Cherukara, M.J.; Narayanan, B.; Kinaci, A.; Sasikumar, K.; Gray, S.K.; Chan, M.K.Y.; Sankaranarayanan, S.K.R.S. Ab Initio-Based Bond Order Potential to Investigate Low Thermal Conductivity of Stanene Nanostructures. J. Phys. Chem. Lett. 2016, 7, 3752–3759. [Google Scholar] [CrossRef]
  30. Li, Y.-h.; Kong, Y.; Peng, J.; Yu, C.-b.; Li, Z.; Li, P.; Liu, Y.; Gao, C.; Wu, R. Rapid identification of two-dimensional materials via machine learning assisted optic microscopy. J. Mater. 2019, 5, 413–421. [Google Scholar] [CrossRef]
  31. Mao, Y.; Dong, N.; Wang, L.; Chen, X.; Wang, H.; Wang, Z.; Kislyakov, I.M.; Wang, J. Machine Learning Analysis of Raman Spectra of MoS2. Nanomaterials 2020, 10, 2223. [Google Scholar] [CrossRef] [PubMed]
  32. Sun, W.; Zheng, Y.; Yang, K.; Zhang, Q.; Shah, A.A.; Wu, Z.; Sun, Y.; Feng, L.; Chen, D.; Xiao, Z.; et al. Machine learning–assisted molecular design and efficiency prediction for high-performance organic photovoltaic materials. Sci. Adv. 2019, 5, eaay4275. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Zhuo, Y.; Mansouri Tehrani, A.; Oliynyk, A.O.; Duke, A.C.; Brgoch, J. Identifying an efficient, thermally robust inorganic phosphor host via machine learning. Nat. Commun. 2018, 9, 4377. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Wei, J.; Chu, X.; Sun, X.; Xu, K.; Deng, H.-X.; Chen, J.; Wei, Z.; Lei, M. Machine learning in materials science. InfoMat 2019, 1, 338–358. [Google Scholar] [CrossRef] [Green Version]
  35. Han, B.; Lin, Y.; Yang, Y.; Mao, N.; Li, W.; Wang, H.; Yasuda, K.; Wang, X.; Fatemi, V.; Zhou, L.; et al. Deep-Learning-Enabled Fast Optical Identification and Characterization of 2D Materials. Adv. Mater. 2020, 32, 2000953. [Google Scholar] [CrossRef]
  36. Wu, B.; Wang, L.; Gao, Z. A two-dimensional material recognition image algorithm based on deep learning. In Proceedings of the 2019 International Conference on Information Technology and Computer Application (ITCA), Guangzhou, China, 20–22 December 2019; pp. 247–252. [Google Scholar]
  37. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
  38. Masubuchi, S.; Watanabe, E.; Seo, Y.-S.; Okazaki, S.; Sasagawa, T.; Watanabe, K.; Taniguchi, T.; Machida, T. Deep-learning-based image segmentation integrated with optical microscopy for automatically searching for two-dimensional materials. Npj 2d Mater. Appl. 2020, 4, 3. [Google Scholar] [CrossRef] [Green Version]
  39. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R.B. Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 386–397. [Google Scholar] [CrossRef]
  40. Zhu, L.; Tang, J.; Li, B.; Hou, T.; Zhu, Y.; Zhou, J.; Wang, Z.; Zhu, X.; Yao, Z.; Cui, X.; et al. Artificial Neuron Networks Enabled Identification and Characterizations of 2D Materials and van der Waals Heterostructures. ACS Nano 2022, 16, 2721–2729. [Google Scholar] [CrossRef]
  41. Zhu, Z.; Xu, M.; Bai, S.; Huang, T.; Bai, X. Asymmetric Non-Local Neural Networks for Semantic Segmentation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 593–602. [Google Scholar]
  42. Jung, J.; Na, J.; Park, H.K.; Park, J.-M.; Kim, G.; Lee, S.; Kim, H.-S. Super-resolving material microstructure image via deep learning for microstructure characterization and mechanical behavior analysis. npj Comput. Mater. 2021, 7, 96. [Google Scholar] [CrossRef]
  43. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  44. Kiarashinejad, Y.; Abdollahramezani, S.; Adibi, A. Deep learning approach based on dimensionality reduction for designing electromagnetic nanostructures. npj Comput. Mater. 2020, 6, 96. [Google Scholar] [CrossRef]
  45. Wu, S.; Wang, Z.; Zhang, H.; Cai, J.; Li, J. Deep Learning Accelerates the Discovery of Two-Dimensional Catalysts for Hydrogen Evolution Reaction. Energy Environ. Mater. 2021. [Google Scholar] [CrossRef]
  46. Xie, T.; Grossman, J.C. Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Phys. Rev. Lett. 2018, 120, 145301. [Google Scholar] [CrossRef] [Green Version]
  47. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:abs/1505.04597. [Google Scholar]
  48. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6230–6239. [Google Scholar]
  49. Kim, S.-W.; Kook, H.-K.; Sun, J.-Y.; Kang, M.-C.; Ko, S. Parallel Feature Pyramid Network for Object Detection. In Proceedings of the ECCV, Munich, Germany, 8–14 September 2018. [Google Scholar]
  50. Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:abs/1706.05587. [Google Scholar]
  51. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the ECCV, Munich, Germany, 8–14 September 2018. [Google Scholar]
  52. Yin, M.; Yao, Z.; Cao, Y.; Li, X.; Zhang, Z.; Lin, S.; Hu, H. Disentangled Non-Local Neural Networks. arXiv 2020, arXiv:abs/2006.06668. [Google Scholar]
  53. Fu, J.; Liu, J.; Tian, H.; Fang, Z.; Lu, H. Dual Attention Network for Scene Segmentation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3141–3149. [Google Scholar]
  54. Huang, L.; Yuan, Y.; Guo, J.; Zhang, C.; Chen, X.; Wang, J. Interlaced Sparse Self-Attention for Semantic Segmentation. arXiv 2019, arXiv:abs/1907.12273. [Google Scholar]
  55. Yuan, Y.; Chen, X.; Wang, J. Object-Contextual Representations for Semantic Segmentation. arXiv 2020, arXiv:abs/1909.11065. [Google Scholar]
  56. Fan, M.; Lai, S.; Huang, J.; Wei, X.; Chai, Z.; Luo, J.; Wei, X. Rethinking BiSeNet For Real-time Semantic Segmentation. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 9711–9720. [Google Scholar]
  57. Yu, C.; Gao, C.; Wang, J.; Yu, G.; Shen, C.; Sang, N. BiSeNet V2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation. Int. J. Comput. Vis. 2021, 129, 3051–3068. [Google Scholar] [CrossRef]
  58. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef] [PubMed]
  59. Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep High-Resolution Representation Learning for Human Pose Estimation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 5686–5696. [Google Scholar]
  60. Lee, J.; Kim, D.; Ponce, J.; Ham, B. SFNet: Learning Object-Aware Semantic Correspondence. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 2273–2282. [Google Scholar]
  61. Liu, Y.; Chu, L.; Chen, G.; Wu, Z.; Chen, Z.; Lai, B.; Hao, Y. PaddleSeg: A High-Efficient Development Toolkit for Image Segmentation. arXiv 2021, arXiv:abs/2101.06175. [Google Scholar]
  62. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes Dataset for Semantic Urban Scene Understanding. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 3213–3223. [Google Scholar]
  63. Taghanaki, S.A.; Abhishek, K.; Cohen, J.P.; Cohen-Adad, J.; Hamarneh, G. Deep semantic segmentation of natural and medical images: A review. Artif. Intell. Rev. 2020, 54, 137–178. [Google Scholar] [CrossRef]
  64. Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.-W.; Wu, J. UNet 3+: A Full-Scale Connected UNet for Medical Image Segmentation. In Proceedings of the ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 1055–1059. [Google Scholar]
  65. Ma, J. Segmentation Loss Odyssey. arXiv 2020, arXiv:abs/2005.13449. [Google Scholar]
  66. Saito, Y.; Shin, K.; Terayama, K.; Desai, S.; Onga, M.; Nakagawa, Y.; Itahashi, Y.M.; Iwasa, Y.; Yamada, M.; Tsuda, K. Deep-learning-based quality filtering of mechanically exfoliated 2D crystals. NPJ Comput. Mater. 2019, 5, 124. [Google Scholar] [CrossRef] [Green Version]
  67. Gui, J.; Sun, Z.; Wen, Y.; Tao, D.; Ye, J. A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications. arXiv 2021, arXiv:abs/2001.06937. [Google Scholar] [CrossRef]
  68. Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  69. Liu, Y.; Jin, L.; Lai, S. Automatic labeling of large amounts of handwritten characters with gate-guided dynamic deep learning. Pattern Recognit. Lett. 2019, 119, 94–102. [Google Scholar] [CrossRef]
  70. Zhou, Z.-H. A brief introduction to weakly supervised learning. Natl. Sci. Rev. 2018, 5, 44–53. [Google Scholar] [CrossRef]
  71. Guo, M.-H.; Xu, T.; Liu, J.; Liu, Z.-N.; Jiang, P.-T.; Mu, T.-J.; Zhang, S.-H.; Martin, R.R.; Cheng, M.-M.; Hu, S. Attention Mechanisms in Computer Vision: A Survey. arXiv 2022, arXiv:abs/2111.07624. [Google Scholar] [CrossRef]
  72. Cheng, Y.; Wang, D.; Zhou, P.; Zhang, T. A Survey of Model Compression and Acceleration for Deep Neural Networks. arXiv 2017, arXiv:abs/1710.09282. [Google Scholar]
  73. Tan, C.; Sun, F.; Kong, T.; Zhang, W.; Yang, C.; Liu, C. A Survey on Deep Transfer Learning. arXiv 2018, arXiv:abs/1808.01974. [Google Scholar]
  74. Zaidi, S.S.A.; Ansari, M.S.; Aslam, A.; Kanwal, N.; Asghar, M.N.; Lee, B. A Survey of Modern Deep Learning based Object Detection Models. Digit. Signal Process. 2022, 126, 103514. [Google Scholar] [CrossRef]
  75. Lateef, F.; Ruichek, Y. Survey on semantic segmentation using deep learning techniques. Neurocomputing 2019, 338, 321–348. [Google Scholar] [CrossRef]
  76. Geiger, M.; Jacot, A.; Spigler, S.; Gabriel, F.; Sagun, L.; d’Ascoli, S.; Biroli, G.; Hongler, C.; Wyart, M. Scaling description of generalization with number of parameters in deep learning. J. Stat. Mech. Theory Exp. 2020, 2020, 023401. [Google Scholar] [CrossRef] [Green Version]
  77. McHugh, M.L. Interrater reliability: The kappa statistic. Biochem. Med. 2012, 22, 276–282. [Google Scholar] [CrossRef]
  78. Lei, T.; Wang, R.; Wan, Y.; Du, X.; Meng, H.; Nandi, A.K. Medical Image Segmentation Using Deep Learning: A Survey. IET Image Process. 2022, 16, 1243–1267. [Google Scholar]
Figure 1. U2-Net model structure.
Figure 2. Basic structure of the residual modules: (a) residual block; (b) residual U-block.
Figure 3. Internal structure of two residual U-blocks: (a) residual U-block; (b) multiscale connected residual U-block.
Figure 4. Pyramid pooling module of 2DU2-Net.
Figure 5. 2DU2-Net model structure.
Figure 6. Experimental images [66]: (a) original image; (b) label image (single layer marked in red, double layer in green, background in black).
Figure 7. Loss curve when the network is pre-trained.
Figure 8. Network model prediction results: (a) 2DU2-Net; (b) U2-Net; (c) U-Net; (d) PSPNet; (e) PFPNNet; (f) DeepLabV3; (g) DeepLabV3+; (h) DNLNet; (i) DANNet; (j) ISANet; (k) OCRNet; (l) STDC-Seg; (m) BiseNetv2; (n) FCN; (o) HRNet; (p) SFNet; (q) ANN.
Table 1. Network model test results.

| Models | Backbone Network | GFOPs | Params (M) | MIoU (%) | Accuracy (%) | Kappa (%) | Dice (%) |
|---|---|---|---|---|---|---|---|
| 2DU2-Net | - | 180.91 | 12.46 | 94.18 | 99.03 | 95.72 | 96.97 |
| U2-Net | - | 51.32 | 4.36 | 93.74 | 98.85 | 95.38 | 96.73 |
| U-Net | - | 124.46 | 51.14 | 92.76 | 98.82 | 94.67 | 96.19 |
| PSPNet | ResNet50 | 265.59 | 259.03 | 94.36 | 99.02 | 95.88 | 97.07 |
| PFPNNet | ResNet101 | 144.76 | 109.51 | 94.10 | 99.06 | 95.68 | 96.93 |
| DeepLabV3 | ResNet50 | 162.69 | 149.23 | 93.20 | 99.00 | 95.01 | 96.43 |
| DeepLabV3+ | ResNet50 | 114.15 | 102.20 | 94.97 | 99.10 | 96.35 | 97.40 |
| DNLNet | ResNet50 | 209.68 | 191.00 | 93.80 | 99.02 | 95.48 | 96.76 |
| DANNet | ResNet50 | 199.21 | 181.25 | 94.87 | 99.13 | 96.17 | 97.34 |
| ISANet | ResNet50 | 159.23 | 144.03 | 93.76 | 98.84 | 95.31 | 96.74 |
| OCRNet | HRNet18 | 52.98 | 64.20 | 94.75 | 99.03 | 96.09 | 97.28 |
| STDC-Seg | STDC | 18.45 | 31.60 | 93.07 | 98.88 | 94.87 | 96.37 |
| BiSeNetv2 | - | 8.06 | 8.88 | 91.45 | 98.68 | 93.67 | 95.46 |
| FCN | HRNet18 | 18.51 | 36.89 | 94.50 | 99.08 | 96.02 | 97.14 |
| HRNet | HRNet48 | 161.51 | 267.34 | 94.34 | 99.07 | 95.77 | 97.06 |
| SFNet | ResNet18 | 68.37 | 52.66 | 93.86 | 99.00 | 95.45 | 96.80 |
| ANN | ResNet50 | 204.35 | 185.56 | 94.30 | 99.10 | 95.81 | 97.04 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Citation: Zhang, Y.; Zhang, H.; Zhou, S.; Liu, G.; Zhu, J. Deep Learning-Based Layer Identification of 2D Nanomaterials. Coatings 2022, 12, 1551. https://0-doi-org.brum.beds.ac.uk/10.3390/coatings12101551