Article

Digital Holographic Interferometry without Phase Unwrapping by a Convolutional Neural Network for Concentration Measurements in Liquid Samples

Carlos Guerrero-Mendez, Tonatiuh Saucedo-Anaya, Ivan Moreno, Ma. Araiza-Esquivel, Carlos Olvera-Olvera and Daniela Lopez-Betancur
1 Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Campus Siglo XXI, Zacatecas C.P. 98160, Mexico
2 Unidad Académica de Ciencia y Tecnología de la Luz y la Materia, Universidad Autónoma de Zacatecas, Campus Siglo XXI, Zacatecas C.P. 98160, Mexico
* Authors to whom correspondence should be addressed.
Submission received: 4 March 2020 / Revised: 19 March 2020 / Accepted: 23 March 2020 / Published: 20 July 2020
(This article belongs to the Special Issue Incoherent Digital Holography)

Abstract
Convolutional neural networks (CNNs) and digital holographic interferometry (DHI) can be combined to improve calculation efficiency and to simplify the procedures of many DHI applications. In DHI, to measure concentration differences between liquid samples, two or more holograms are compared to find the phase differences among them, which are then used to estimate the concentration values. However, for liquid samples with large concentration differences, the phase differences are difficult to calculate with common phase unwrapping methods because they contain high spatial frequencies. In this research, a new method based on CNNs is proposed to skip the phase unwrapping process in DHI. For this, images acquired by Guerrero-Mendez et al. (Metrology and Measurement Systems 24, 19–26, 2017) were used to train the CNN, and a multiple linear regression algorithm was fitted to estimate the concentration values of liquid samples. In addition, new images were recorded to evaluate the performance of the proposed method. The proposed method reached an accuracy of 0.0731% and a precision of ±0.0645. The data demonstrated a high repeatability of 0.9986, with an operational range from 0.25 gL−1 to 1.5 gL−1. The proposed method was performed with liquid samples in a cylindrical glass tube.

1. Introduction

A liquid sample can be classified using physical properties, such as concentration, color, boiling temperature, and fusion point. In the case of concentration, this can be defined as the amount of solute mass in the total volume of a solution [1]. There are many methods and tools for the estimation of concentrations in liquid samples; however, most of them are invasive and destructive [2,3,4]. A technique that is able to perform measurements of concentration differences with high accuracy, in a non-invasive and non-destructive way, is digital holographic interferometry (DHI) [5].
DHI is a high-precision, non-contact, non-invasive, non-destructive, and full-field optical metrology technique [6,7]. DHI can measure, with very high sensitivity, variations in the physical properties of phase objects (e.g., a liquid sample in a glass container can be considered a phase object), based on the comparison of wavefronts recorded as holograms at different instants in time or states of an object [8,9]. The holograms are recorded by an image sensor and saved on a computer; then, a reconstruction process can be performed using numerical methods [10,11,12]. The phase difference extracted from the reconstructed object images is wrapped from −π to π. The accuracy of DHI depends on the accuracy with which the phase difference is estimated, and the estimated phase is usually noisy and wrapped [13]; recovering the continuous phase therefore requires a robust unwrapping algorithm [14,15,16,17]. In addition, phase unwrapping methods have a trade-off between computational cost and accuracy, i.e., a high-accuracy method requires more computational time [18].
The growth of new computer vision and open-source technologies can improve the trade-off between computational cost and accuracy in phase difference estimation. A promising technology is the convolutional neural network (CNN). CNNs are mathematical algorithms that mimic the functioning of the mammalian visual cortex through stacked operation blocks and several layers of neurons, and they can approximate any continuous function accurately [19]. CNNs have been applied to multiple tasks, including image classification, object detection, object tracking, and scene labeling [20,21,22,23,24,25,26].
In optical metrology specifically, CNNs have been applied for phase demodulation from a single fringe pattern in projection profilometry [27], for phase and amplitude reconstruction from a single hologram intensity pattern in holography [28], for estimating depth position without multiple diffraction calculations in digital holography [29], and for optical fringe pattern denoising in interferometry [30]. CNNs have also been applied in digital holographic interferometry, including new phase unwrapping methods; e.g., Spoorthi et al. [31] proposed a phase unwrapping method that takes the wrapped phase as input and the wrap-count as a semantic label, Zhang et al. [32] presented a phase unwrapping method based on a semantic segmentation algorithm, and Zhang et al. [33] generated the unwrapped phase by combining a denoised wrapped phase with a corrected integral multiple.
Therefore, in order to improve the calculation efficiency and simplify the procedures of phase difference estimation, the aim of this research was to apply open-source and artificial intelligence technologies to DHI in liquid samples. This research develops a new method that estimates concentrations without phase unwrapping, with high sensitivity, high accuracy, and low computational cost.

2. Materials and Methods

2.1. Experimental Setup

The optical system for concentration measurements was based on DHI. The principles and mathematical equations of DHI are well known in the literature [5,6]. In DHI, phase difference maps are obtained from the correlation between two holograms. The experimental setup used to record the holograms is shown in Figure 1. The optical system had a He-Ne laser LA1 (CrystaLaser, Reno, NV, USA) with a peak wavelength of λ = 543 nm and a maximum output power of 15 mW. The laser beam was divided into two beams by a beam splitter BS1. One beam (the object beam) was sent to lenses L1 and L2 to be expanded and collimated, respectively. Then, the object beam was scattered by a diffuser D1 and passed through a common glass tube that contained the liquid sample S1 to be analyzed.
The object beam passed through a rectangular aperture A1 and was collected by a positive lens L5. Then, it was sent to a cubic beam splitter BS2 placed in front of an 8-bit charge-coupled device (CCD) camera (Pixelink, Rochester, NY, USA). The liquid sample was located 25 cm from the acquisition camera. Meanwhile, the reference beam was reflected by mirrors M1 and M2, sent to lenses L3 and L4 to be expanded and collimated, and then directed to BS2, where it interfered with the object beam right in front of the CCD camera.
A small angle was introduced between the object and reference beams in this Mach–Zehnder configuration to achieve the off-axis holography geometry, producing a spatial carrier frequency $f_{CX}$ along the x direction of the sensor plane. The CCD was a monochromatic sensor with 1280 × 1024 pixels (1.3 MP) and an 8-bit dynamic range; the pixel size was 5.2 μm. The holograms were recorded continuously at 13 fps while the liquid sample passed through the glass tube at a rate of 12 mL/min, which allowed a large image dataset to be created by recording 13 different holograms per second. The liquid sample was continuously injected by a KDS 200 syringe infusion pump (KD Scientific Inc., Holliston, MA, USA).

2.2. Phase Difference Images

Seven liquid samples were created by mixing 1 liter of distilled water with various masses of NaCl. The NaCl masses used are shown in Table 1.
In order to measure the concentration difference between two liquid samples, two holograms were recorded at different moments or states. A hologram is obtained from a wavefront. The wavefront coming from a certain liquid sample is represented as:
$$U_1 = u_1(x, y)\, e^{i\phi_1(x, y)}$$
where $u_1(x, y)$ is the amplitude, $\phi_1$ is the phase of the wavefront, and $(x, y)$ are the rectangular coordinates of the recording sensor plane.
Then, a second hologram was obtained from a wavefront coming from another liquid sample, or after slightly modifying the concentration of the first sample. The new wavefront is represented as:
$$U_2 = u_2(x, y)\, e^{i\phi_2(x, y)}$$
where $\phi_2$ is the new phase that indicates a change in the optical path length, i.e., $\phi_2 = \phi_1 + \Delta\phi_{2-1}$.
The procedure continued with the calculation of the phase difference from the individual phase terms, $\Delta\phi_{2-1} = \phi_2 - \phi_1$. A phase term depends on the transverse distances and the refractive index of the liquid mixture inside the glass tube. The refractive index difference is related to the change of concentration (CON) and the temperature (T) between liquid samples. In the case of aqueous salt mixtures, the refractive index varies linearly with concentration (CON), with a proportionality constant of 1.71 × 10−3 at a temperature of 20 °C. Therefore, the concentration difference between two liquid samples can be described as [5]:
$$\Delta\phi_{2-1}(x, y) = k \left\{ d_i(x, y) \left[ 1.71 \times 10^{-3} \right] \left[ CON_2(x, y) - CON_1(x, y) \right] \right\}$$
where $k = 2\pi/\lambda$, $\lambda$ is the wavelength, $CON_1$ and $CON_2$ are the concentration values of the two liquid samples, and $d_i$ is the inner transversal distance of the glass tube.
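To make this relation concrete, the following sketch inverts Equation (3) to recover a concentration difference from an unwrapped phase difference. It is an illustration of the conventional route only, not the proposed method (which avoids unwrapping); the inner tube distance D_INNER is a hypothetical value, while the wavelength and the 1.71 × 10−3 constant are taken from the text.

```python
import numpy as np

# Constants from the text; D_INNER is a hypothetical inner tube distance.
WAVELENGTH = 543e-9            # He-Ne laser wavelength (m)
K = 2 * np.pi / WAVELENGTH     # wavenumber k = 2*pi/lambda
DN_DC = 1.71e-3                # refractive index change per unit concentration
D_INNER = 0.01                 # assumed inner transversal distance d_i (m)

def concentration_difference(delta_phi):
    """Invert Eq. (3): CON2 - CON1 from an unwrapped phase difference (rad)."""
    return delta_phi / (K * D_INNER * DN_DC)

# Example: map a 100 rad unwrapped phase difference to a concentration change.
print(concentration_difference(100.0))
```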
Therefore, the wrapped phase difference images were obtained by subtracting, from the hologram of each sample, the hologram of the reference sample containing only distilled water (0 g of NaCl), which required no special preparation. In this way, six classes with different concentration differences were created, as shown in Table 2. The wrapped phase difference images for each class are shown in Figure 2.

2.3. Proposed Method

A simple and highly modularized network architecture for image classification is ResNeXt [34]. ResNeXt improves on the earlier ResNet by introducing an additional dimension, known as cardinality: the number of parallel transformations aggregated in each building block. A high cardinality value is a more effective way of gaining accuracy in image classification; that is to say, ResNeXt is built by repeating building blocks that aggregate a set of transformations with the same topology, which allows for greater accuracy [35], and the model used in this research has a cardinality of 32. The ResNeXt architecture currently has the lowest Top-1 and Top-5 errors among the Torchvision package models [36]. The input size for ResNeXt is 224 × 224 RGB images. Therefore, the characteristics of the ResNeXt model, combined with transfer learning (TL) principles, are ideal for this research. The main principle of TL is that a CNN model previously trained for one task is reused and trained again to learn a new task. This is achieved by modifying the last fully connected layer according to the number of classes in the new task, and by fine-tuning, which retrains the whole model by unfreezing the convolutional base layers and allowing the weights and biases to be recalculated (updating all of the model parameters). Figure 3 shows the TL process in a CNN, where the last fully connected layer was changed from classifying general images to classifying the interferogram dataset collected in this research. A minimal sketch of this setup is given below.
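A minimal sketch of the TL configuration in PyTorch. The text specifies a cardinality of 32 but not the network depth, so the torchvision resnext50_32x4d variant is an assumption:

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # six concentration-difference classes (Table 2)

# Load ResNeXt-50 (32x4d, cardinality 32) pre-trained on ImageNet.
model = models.resnext50_32x4d(pretrained=True)

# Transfer learning: replace the last fully connected layer so the network
# classifies the six wrapped-phase classes instead of the 1000 ImageNet ones.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Fine-tuning: keep every layer unfrozen so all parameters are updated.
for param in model.parameters():
    param.requires_grad = True
```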
In this research, the ResNeXt model was used to extract implicit information from the dataset and turn it into a feature vector. This feature vector was used to fit a multiple linear regression (the Regressor) that predicts concentration measurements from wrapped phase images. The final layer of the ResNeXt model is the logits layer, which returns raw prediction values as a feature vector. The Regressor was fitted with these feature (logits) vectors, yielding an equation that relates the feature vectors obtained from the training dataset to the feature vector obtained from a new image. Figure 4 shows the generation of the multiple linear regression from the logits vectors produced by the trained CNN. In addition, Figure 5 shows the general operation of the proposed method, where an unknown hologram is used as the input image and its logits vector, obtained by the CNN, is passed to the fitted Regressor to estimate the concentration measurement. A sketch of this two-stage pipeline follows.
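The following sketch shows the two stages under stated assumptions: train_loader and new_loader are hypothetical PyTorch data loaders for the training and new images, and train_conc holds the known concentration differences of the training images.

```python
import numpy as np
import torch
from sklearn.linear_model import LinearRegression

def extract_logits(model, loader, device="cpu"):
    """Run the trained CNN and collect its raw logits vectors."""
    model.eval()
    feats = []
    with torch.no_grad():
        for images, _ in loader:
            logits = model(images.to(device))  # shape: (batch, 6)
            feats.append(logits.cpu().numpy())
    return np.concatenate(feats)

# train_loader yields the training images; train_conc holds their known
# concentration differences in gL^-1 (both names are assumptions).
X_train = extract_logits(model, train_loader)
regressor = LinearRegression().fit(X_train, train_conc)

# Prediction for new holograms: CNN logits -> fitted linear equation.
X_new = extract_logits(model, new_loader)
print(regressor.predict(X_new))
```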
In addition, CNNs have a hierarchical organization in layers, which increases their processing capacity. A CNN can learn both low- and high-level features, and the learned features are used to categorize the input image. A useful tool for understanding how a neural network categorizes input images is a saliency map. Saliency maps analyze the learned features and visualize what the CNN uses to categorize; in computer vision, they indicate the image regions that have the most impact on the final decision of the CNN. A gradient appears across the RGB output channels because the CNN processes three input channels, one for each RGB color. The backpropagation step then gives classification clues when the maximum gradient of the input image is calculated. In a saliency map, the dots in the max-gradient image are not noise; they indicate the pixels that contribute to the output classification. The highest dot density in the phase difference maps lies in the central region, which therefore contributes most strongly to the CNN image classification. Figure 6 shows some saliency maps taken at random from this research. The saliency maps show that the CNN model focuses on the fringe lines produced by the phase differences, and not on the noise generated by the high spatial frequencies. A sketch of the gradient-based saliency computation is given below.
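A minimal sketch of the gradient-based saliency computation described above, assuming model is the trained CNN and image is a preprocessed 3 × 224 × 224 tensor:

```python
import torch

def saliency_map(model, image):
    """Gradient of the top class score with respect to the input pixels."""
    model.eval()
    x = image.unsqueeze(0).requires_grad_(True)  # add a batch dimension
    score = model(x).max()   # score of the most likely class
    score.backward()         # backpropagate the score to the input image
    # Maximum over the three RGB gradient channels -> max-gradient map.
    return x.grad.abs().max(dim=1)[0].squeeze()
```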

2.4. Training Process

A total of 7200 images of six different classes were recorded. To create the training dataset (1000 images per class), 6000 images were taken at random; the remaining 1200 images were used to build the validation dataset (200 images per class). In addition, 1000 new images were recorded with intermediate concentration samples; these were used only to test the proposed method and were not used to train the CNN. The intermediate concentration samples were made with NaCl masses of 0.375 g, 0.625 g, 0.875 g, 1.125 g, and 1.375 g.
The image size registered by the CCD camera was 1280 × 1024 pixels; however, the images were center-cropped to 224 × 224 pixels to match the input layer of the CNN. A sketch of this preprocessing and of the dataset split is given below.
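A sketch of the preprocessing and dataset split, assuming the recorded images are stored in a hypothetical holograms/ folder with one subdirectory per class:

```python
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

# Center-crop the 1280 x 1024 CCD frames to the 224 x 224 CNN input size.
transform = transforms.Compose([
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# "holograms/" is a hypothetical folder with one subdirectory per class.
full_dataset = datasets.ImageFolder("holograms/", transform=transform)

# 6000 training images and 1200 validation images, drawn at random.
train_set, val_set = random_split(full_dataset, [6000, 1200])
train_loader = DataLoader(train_set, batch_size=40, shuffle=True)
val_loader = DataLoader(val_set, batch_size=40)
```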
The training process was developed and implemented in Google Colaboratory, a free cloud service for machine learning education that provides a virtual machine with an Nvidia Tesla K80 GPU (graphics processing unit) with 2496 CUDA cores. ResNeXt was taken from the PyTorch torchvision package. The CNN was trained for a total of 100 epochs with the stochastic gradient descent (SGD) optimizer and a momentum of 0.9; the epoch number was selected by analyzing the training loss in previous executions of the training process. The batch size was 40. The hyperparameters used in the experiment are listed in Table 3, and a sketch of the training loop follows.
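A sketch of the training loop with the Table 3 hyperparameters, assuming the model and train_loader from the previous sketches:

```python
import torch
import torch.nn as nn
from torch.optim import SGD

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Hyperparameters from Table 3: SGD, lr = 0.001, momentum = 0.9.
criterion = nn.CrossEntropyLoss()
optimizer = SGD(model.parameters(), lr=0.001, momentum=0.9)

for epoch in range(100):  # 100 epochs (Table 3)
    model.train()
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch + 1}: loss {running_loss / len(train_loader):.4f}")
```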

2.5. Performance Metrics of the Proposed Method

The performance metrics were carried out in two stages. The first stage evaluated the performance of the CNN as an image classifier, and the second stage evaluated the performance of the Regressor as a concentration estimator.

2.5.1. Performance Metrics of the CNN

The confusion matrix is a metric for evaluating the CNN as an image classifier. A confusion matrix is defined by four terms: true positives (TP, elements predicted to belong to a particular class that do belong to that class); true negatives (TN, elements predicted not to belong to a particular class that do not belong to that class); false positives (FP, elements predicted to belong to a particular class that do not belong to that class); and false negatives (FN, elements predicted not to belong to a particular class that do belong to that class).
The accuracy is defined as the percentage of the total number of predictions that were correctly classified, and is calculated as:
$$Accuracy = (TP + TN) / N$$
where N is the total number of elements to be classified.
The precision is the ability to correctly assign an element to the class it belongs to, and is defined as:
$$Precision = TP / (TP + FP)$$
The recall is the ability of the classifier to label all the positive cases, and is calculated as:
$$Recall = TP / (TP + FN)$$
The specificity is the ability of the classifier to label all the negative cases, and is defined as:
$$Specificity = TN / (TN + FP)$$
The F-score is the harmonic mean of precision and recall, and is calculated as:
$$F\text{-}score = (2 \times Precision \times Recall) / (Precision + Recall)$$
The receiver operating characteristic (ROC) curve is a graph of recall (true positive rate) versus 1 − specificity (false positive rate). This graph characterizes the ability of a CNN to identify positive cases as positive and negative cases as negative. The area under the ROC curve (AUC) is the probability that a randomly chosen pair of positive and negative cases is correctly classified. A sketch computing these classification metrics from a confusion matrix is given below.
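As a check on these definitions, the following sketch computes the one-vs-rest metrics from a confusion matrix; with the diagonal matrix reported in Section 3 (200 validation images per class), every metric evaluates to 1.0.

```python
import numpy as np

def per_class_metrics(cm):
    """One-vs-rest metrics from a confusion matrix cm, where cm[i, j]
    counts elements of true class i predicted as class j."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp   # predicted as the class but not in it
    fn = cm.sum(axis=1) - tp   # in the class but predicted otherwise
    tn = cm.sum() - tp - fp - fn
    accuracy = (tp + tn) / cm.sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f_score

# A perfect 6-class confusion matrix (200 images per class) is diagonal,
# so every metric evaluates to 1.0 for all six classes.
print(per_class_metrics(np.eye(6) * 200))
```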

2.5.2. Performance Metrics of the Regressor

The coefficient of determination ($R^2$) measures the quality of the model in replicating results. It is described as:
$$R^2 = 1 - \frac{\sum_{i=1}^{n} (y_{true,i} - y_{pred,i})^2}{\sum_{i=1}^{n} (y_{true,i} - y_{mean})^2}$$
where $y_{pred}$ is the predicted value, $y_{true}$ is the true value, and $y_{mean}$ is the average of the true values.
The mean absolute error (MAE) is the mean of the absolute differences between the true values and the predicted values. It is calculated as:
$$MAE = \frac{1}{n} \sum_{i=1}^{n} \left| y_{true,i} - y_{pred,i} \right|$$
where $n$ is the total number of data points.
The mean square error (MSE) is a statistical measure of the goodness of fit or reliability of the model according to the data. It is determined as:
$$MSE = \frac{1}{n} \sum_{i=1}^{n} (y_{true,i} - y_{pred,i})^2$$
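A quick sketch of these regression metrics using scikit-learn, with the per-class mean predictions from Table 6 as example data:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Per-class true and mean predicted concentration differences (Table 6, gL^-1).
y_true = np.array([0.25, 0.50, 0.75, 1.00, 1.25, 1.50])
y_pred = np.array([0.2503, 0.5031, 0.7568, 1.0002, 1.2406, 1.4942])

print("R^2:", r2_score(y_true, y_pred))
print("MAE:", mean_absolute_error(y_true, y_pred))
print("MSE:", mean_squared_error(y_true, y_pred))
```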

3. Experimental Results and Discussion

The performance metrics were evaluated with the validation dataset, which consisted of six classes with 200 images each. When the CNN was evaluated as an image classifier, a perfect classification was reached. The confusion matrix obtained is shown in Figure 7; it is a diagonal matrix whose main-diagonal elements reach the maximum classification percentage, i.e., the CNN correctly classified 100% of the images according to their classes.
Therefore, the performance metrics of the CNN as a classifier reached the maximum score in accuracy, precision, recall, specificity, F-score, and AUC. The values of the performance metrics are shown in Table 4, and the ROC curve in Figure 8.
The performance of the Regressor as a concentration estimator was also calculated, and the results are shown in Table 5. The Regressor presented a high coefficient of determination ($R^2$) of 0.9986, which indicates a high-quality model, and low MAE and MSE values, which indicate a good estimation capacity. Note that although the Regressor reached a high performance, the small residual errors confirm that the CNN was not overfitted (the CNN did not memorize the images).
The training accuracy and training loss curves are shown in Figure 9 and Figure 10, respectively. The CNN reached a high accuracy of 90% within its first epochs; however, the accuracy fluctuated until epoch 55, when it reached its maximum value.
The concentration values estimated by the proposed method over the whole validation dataset are shown in Table 6. The relative error between the true and predicted values, the standard deviation (STD), the mean absolute error (MAE), and the mean square error (MSE) per class are also listed in Table 6.
According to the validation dataset and the classes with which the CNN was trained, the proposed method presents a precision of ±0.0147 and an accuracy of 0.0043%.
In addition, five new classes with different phase differences were used to verify the proposed method. The concentration differences estimated by the proposed method are shown in Table 7, and the corresponding new holograms in Figure 11. These holograms were not used in the training process.
It should be noted that the errors in Table 6 are smaller than those obtained with the conventional DHI method [5]. The errors in Table 7 are slightly higher, but the precision could be improved by increasing the image dataset.
Therefore, the proposed method can measure concentration differences from 0.25 gL−1 to 1.5 gL−1, with a precision of ±0.0645 and an accuracy of 0.0731%, based on data and classes that were not used to train the CNN.

4. Conclusions

In this paper, a new method to skip the phase unwrapping process in DHI, based on CNNs, was proposed. A CNN and a Regressor were used to estimate the concentrations of liquid samples in a cylindrical glass tube, using images obtained with digital holographic interferometry. When DHI is used to measure concentration differences in liquid samples, large differences create phase difference maps with high-frequency fringes, and these fringes appear as noise. Consequently, the liquid samples would otherwise have to be divided into smaller concentration steps to create phase difference maps that can be correctly mapped by the sensor and unwrapped with common methods.
Using a CNN, the concentrations of the liquid samples were estimated from the wrapped phase difference maps, and the unwrapping process was omitted. The method directly estimates the concentration values associated with the phase differences, without the usual phase unwrapping, because the CNN was trained to quantify the wrapped phase images. In other words, the proposed method estimates the concentration values from an input image based on the spatial distribution of the wrapped phase, without using the conventional phase equations. Although the differences among the image samples of each class may be obvious to the eye, their classification and quantification are not, and the CNN must be trained in order to estimate sample concentrations different from those it was trained with.
Thus, the proposed method was able to estimate concentration values for unknown classes based on the central region of the input image, and the results showed high accuracy and precision. Once the CNN was trained and the Regressor was fitted, the proposed method calculated concentration values directly, even for classes not used in the training process, as long as the values were within the operational range. As further work, the operational range could be extended, and a different form of CNN-based phase unwrapping could be analyzed.

Author Contributions

Conceptualization, C.G.-M., T.S.-A. and D.L.-B.; methodology, C.G.-M. and D.L.-B.; software, C.G.-M.; validation, M.A.-E., and C.O.-O.; formal analysis, I.M.; investigation, C.G.-M. and D.L.-B.; resources, T.S.-A.; data curation, C.G.-M. and T.S.-A.; writing—original draft preparation, I.M., D.L.-B. and C.G.-M.; writing—review and editing, I.M., D.L.-B. and C.G.-M.; visualization, C.G.-M. and T.S.-A.; supervision, T.S.-A., M.A.-E. and I.M.; project administration, C.G.-M. and D.L.-B.; funding acquisition, T.S.-A. and C.O.-O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Vargaftik, N.B. Handbook of Physical Properties of Liquids and Gases: Pure Substances and Mixtures, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1975; ISBN 978-3-642-52506-3.
2. Henning, B.; Daur, P.-C.; Prange, S.; Dierks, K.; Hauptmann, P. In-line concentration measurement in complex liquids using ultrasonic sensors. Ultrasonics 2000, 38, 799–803.
3. Walker, D.A. A fluorescence technique for measurement of concentration in mixing liquids. J. Phys. E Sci. Instrum. 1987, 20, 217–224.
4. Perrier, F.; Aupiais, J.; Girault, F.; Przylibski, T.A.; Bouquerel, H. Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation. J. Environ. Radioact. 2016, 157, 52–59.
5. Guerrero-Méndez, C.; Saucedo-Anaya, T.; Araiza-Esquivel, M.; Balderas-Navarro, R.E.; López-Martínez, A.; Olvera-Olvera, C. Measurements of concentration differences between liquid mixtures using digital holographic interferometry. Metrol. Meas. Syst. 2017, 24, 19–26.
6. Guerrero-Mendez, C.; Anaya, T.S.; Araiza-Esquivel, M.; Balderas-Navarro, R.E.; Aranda-Espinoza, S.; López-Martínez, A.; Olvera-Olvera, C. Real-time measurement of the average temperature profiles in liquid cooling using digital holographic interferometry. Opt. Eng. 2016, 55, 121730.
7. Dancova, P.; Psota, P.; Vit, T. Measurement of a temperature field generated by a synthetic jet actuator using digital holographic interferometry. Actuators 2019, 8, 27.
8. Saucedo-A, T.; De la Torre-Ibarra, M.H.; Santoyo, F.M.; Moreno, I. Digital holographic interferometer using simultaneously three lasers and a single monochrome sensor for 3D displacement measurements. Opt. Express 2010, 18, 19867–19875.
9. Pedrini, G.; Osten, W.; Gusev, M.E. High-speed digital holographic interferometry for vibration measurement. Appl. Opt. 2006, 45, 3456.
10. Kreis, T. Handbook of Holographic Interferometry: Optical and Digital Methods, 1st ed.; John Wiley & Sons: Weinheim, Germany, 2006; Volume 1, ISBN 978-3-527-60492-0.
11. Toker, G.R. Holographic Interferometry: A Mach–Zehnder Approach; CRC Press: Boca Raton, FL, USA, 2017; ISBN 978-1-315-21678-2.
12. Wada, A.; Kato, M.; Ishii, Y. Multiple-wavelength digital holographic interferometry using tunable laser diodes. Appl. Opt. 2008, 47, 2053.
13. Gorthi, S.S.; Rastogi, P. Phase estimation in digital holographic interferometry using cubic-phase-function based method. J. Mod. Opt. 2010, 57, 595–600.
14. Goldstein, R.M.; Zebker, H.A.; Werner, C.L. Satellite radar interferometry: Two-dimensional phase unwrapping. Radio Sci. 1988, 23, 713–720.
15. Huang, Z.; Shih, A.J.; Ni, J. Phase unwrapping for large depth-of-field 3D laser holographic interferometry measurement of laterally discontinuous surfaces. Meas. Sci. Technol. 2006, 17, 3110–3119.
16. Cusack, R.; Huntley, J.M.; Goldrein, H.T. Improved noise-immune phase-unwrapping algorithm. Appl. Opt. 1995, 34, 781–789.
17. Stetson, K.A.; Wahid, J.; Gauthier, P. Noise-immune phase unwrapping by use of calculated wrap regions. Appl. Opt. 1997, 36, 4830–4838.
18. Yatabe, K.; Tanigawa, R.; Ishikawa, K.; Oikawa, Y. Time-directional filtering of wrapped phase for observing transient phenomena with parallel phase-shifting interferometry. Opt. Express 2018, 26, 13705.
19. Hajian, A.; Styles, P. Application of Soft Computing and Intelligent Methods in Geophysics; Springer: Berlin/Heidelberg, Germany, 2018.
20. Deng, J.; Li, J.; Feng, H.; Zeng, Z. Flexible depth segmentation method using phase-shifted wrapped phase sequences. Opt. Lasers Eng. 2019, 122, 284–293.
21. Ju, C.; Bibaut, A.; van der Laan, M.J. The relative performance of ensemble methods with deep convolutional neural networks for image classification. J. Appl. Stat. 2018, 45, 2800–2818.
22. Cao, X.; Yao, J.; Xu, Z.; Meng, D. Hyperspectral image classification with convolutional neural network and active learning. IEEE Trans. Geosci. Remote Sens. 2020, 1–13.
23. Souza, J.F.L.; Santos, M.D.; Magalhães, R.M.; Neto, E.M.; Oliveira, G.P.; Roque, W.L. Automatic classification of hydrocarbon "leads" in seismic images through artificial and convolutional neural networks. Comput. Geosci. 2019, 132, 23–32.
24. Lu, X.; Zhang, Y.; Yuan, Y.; Feng, Y. Gated and axis-concentrated localization network for remote sensing object detection. IEEE Trans. Geosci. Remote Sens. 2020, 58, 179–192.
25. Leung, H.K.; Chen, X.-Z.; Yu, C.-W.; Liang, H.-Y.; Wu, J.-Y.; Chen, Y.-L. A deep-learning-based vehicle detection approach for insufficient and nighttime illumination conditions. Appl. Sci. 2019, 9, 4769.
26. Ihsanto, E.; Ramli, K.; Sudiana, D.; Gunawan, T.S. An efficient algorithm for cardiac arrhythmia classification using ensemble of depthwise separable convolutional neural networks. Appl. Sci. 2020, 10, 483.
27. Feng, S.; Chen, Q.; Gu, G.; Tao, T.; Zhang, L.; Hu, Y.; Yin, W.; Zuo, C. Fringe pattern analysis using deep learning. Adv. Photonics 2019, 1, 025001.
28. Rivenson, Y.; Zhang, Y.; Günaydın, H.; Teng, D.; et al. Non-iterative holographic image reconstruction and phase retrieval using a deep convolutional neural network. In CLEO: Science and Innovations; Optical Society of America: San Jose, CA, USA, 2018; p. STh1J.3.
29. Shimobaba, T.; Kakue, T.; Ito, T. Convolutional neural network-based regression for depth prediction in digital holography. In Proceedings of the 2018 IEEE 27th International Symposium on Industrial Electronics (ISIE), Cairns, Australia, 13–15 June 2018; pp. 1323–1326.
30. Lin, B.; Fu, S.; Zhang, C.; Wang, F.; Li, Y. Optical fringe patterns filtering based on multi-stage convolution neural network. Opt. Lasers Eng. 2020, 126, 105853.
31. Spoorthi, G.E.; Gorthi, S.; Gorthi, R.K.S.S. PhaseNet: A deep convolutional neural network for two-dimensional phase unwrapping. IEEE Signal Process. Lett. 2019, 26, 54–58.
32. Zhang, T.; Jiang, S.; Zhao, Z.; Dixit, K.; Zhou, X.; Hou, J.; Zhang, Y.; Yan, C. Rapid and robust two-dimensional phase unwrapping via deep learning. Opt. Express 2019, 27, 23173–23185.
33. Zhang, J.; Tian, X.; Shao, J.; Luo, H.; Liang, R. Phase unwrapping in optical metrology via denoised and convolutional segmentation networks. Opt. Express 2019, 27, 14903–14912.
34. Xie, S.; Girshick, R.; Dollar, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5987–5995.
35. Xu, K.; Zhu, B.; Wang, D.; Peng, Y.; Wang, H.; Zhang, L.; Li, B. Meta learning based audio tagging. In Proceedings of the Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2018), Surrey, UK, 19–20 November 2018; pp. 19–20.
36. PyTorch. Torchvision Models. Available online: https://pytorch.org/docs/stable/torchvision/models.html (accessed on 25 January 2020).
Figure 1. Schematic diagram of the experimental setup of digital holographic interferometry (DHI). LA1: laser; BS1 and BS2: cubic 50:50 beam splitters; M1 and M2: mirrors; L1, L2, L3, L4, and L5: lenses; D1: diffuser; S1: liquid sample (cylindrical glass tube); A1: rectangular aperture; CCD: charge-coupled device; $f_{CX}$: the spatial carrier frequency along the x direction of the sensor plane.
Figure 2. Wrapped phase difference images, taken at random from the dataset of this research, for each class created by subtracting holograms.
Figure 3. Representation of the transfer learning (TL) process and the fine-tuning of the convolutional neural network (CNN). The last fully connected layer of the CNN was changed from classifying general objects to classifying the new dataset of wrapped phase images.
Figure 4. Generation of the multiple linear regression (Regressor) from the logits vectors obtained by the trained CNN.
Figure 5. General operation of the proposed method.
Figure 6. Saliency maps taken at random from the dataset of this research. From left to right: input image, gradients across the RGB channels, max gradients, and overlay. (a) Class 1; (b) Class 2; (c) Class 3.
Figure 7. The confusion matrix obtained.
Figure 8. The receiver operating characteristic (ROC) graph obtained. AUC: area under the ROC curve. Note: all color classes overlap on the horizontal line.
Figure 9. The training accuracy curve over 100 epochs.
Figure 10. The training loss curve over 100 epochs.
Figure 11. New classes of holograms used to evaluate the proposed method.
Table 1. Liquid samples used.

Sample      0     1     2     3     4     5     6
NaCl (g)    0.00  0.25  0.50  0.75  1.00  1.25  1.50
Table 2. Classes created from the correlations between each sample (1, 2, 3, 4, 5, and 6) and the reference sample (0).

Class               1     2     3     4     5     6
Difference (gL−1)   0.25  0.50  0.75  1.00  1.25  1.50
Table 3. Hyperparameters used in the training process. SGD: stochastic gradient descent.

Hyperparameter        Value
Algorithm optimizer   SGD
Number of epochs      100
Learning rate         0.001
Momentum              0.9
Batch size            40
Table 4. Performance metrics of the CNN as a classifier.

Metric                Value
Accuracy              1.0000
Precision             1.0000
Recall                1.0000
Specificity           1.0000
F-score               1.0000
Training time (min)   918.52
Table 5. Performance metrics of the Regressor. MAE: mean absolute error; MSE: mean square error.

Metric               Value
R²                   0.9986
MAE                  0.0125
MSE                  0.0002
Standard deviation   0.4238
Table 6. Concentrations estimated by the proposed method. STD: standard deviation.

                  Class 1   Class 2   Class 3   Class 4   Class 5   Class 6
True value        0.2500    0.5000    0.7500    1.0000    1.2500    1.5000
Predicted value   0.2503    0.5031    0.7568    1.0002    1.2406    1.4942
Error             0.0003    0.0031    0.0068    0.0002    0.0094    0.0058
STD               0.0184    0.0141    0.0108    0.0115    0.0133    0.0201
MAE               0.0148    0.0116    0.0101    0.0090    0.0131    0.0166
MSE               0.0003    0.0002    0.0002    0.0001    0.0002    0.0004
Table 7. Concentrations of the extra recorded holograms estimated by the proposed method.

                  Class 7   Class 8   Class 9   Class 10   Class 11
True value        0.3750    0.6250    0.8750    1.1250     1.3750
Predicted value   0.4647    0.7167    0.9134    1.0776     1.2766
Error (%)         0.0897    0.0917    0.0384    0.0474     0.0984
STD               0.1219    0.0330    0.0569    0.0361     0.0746
