
Dual-Output Mode Analysis of Multimode Laguerre-Gaussian Beams via Deep Learning

1
National Laboratory of Solid State Microstructures and Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China
2
College of Engineering and Applied Sciences, Nanjing University, Nanjing 210093, China
3
School of Physics, Nanjing University, Nanjing 210093, China
*
Authors to whom correspondence should be addressed.
Submission received: 3 May 2021 / Revised: 18 May 2021 / Accepted: 21 May 2021 / Published: 24 May 2021
(This article belongs to the Special Issue Recent Developments in Optical Communications)

Abstract

The Laguerre-Gaussian (LG) beam demonstrates great potential for optical communication owing to the orthogonality between its different eigenstates, and has attracted increasing research interest in recent years. Here, we propose a dual-output mode analysis method based on deep learning that can accurately obtain both the mode weights and the phase information of multimode LG beams. We reconstruct the LG beams from the predictions of the convolutional neural network: the correlation coefficients after reconstruction are above 0.9999, and the mean absolute errors (MAE) of the mode weights and phases are about $1.4 \times 10^{-3}$ and $2.9 \times 10^{-3}$, respectively. The model also maintains relatively accurate predictions for associated unknown datasets and noise-disturbed samples. In addition, the computation time for a single test sample is only 0.975 ms on average. These results show that our method generalizes well, is robust, and allows for nearly real-time modal analysis.

1. Introduction

With the rapid development of information technology in the Internet era, traditional optical communication techniques find it increasingly difficult to meet growing communication needs [1]. In 1992, Allen et al. discovered that a beam with a helical phase $e^{il\varphi}$ carries orbital angular momentum (OAM) [2]. The mutual orthogonality between the eigenstates of different OAM modes provides a new exploitable dimension for channel extension in optical communication [3,4,5], and research on applications of OAM in particle manipulation [6,7], quantum information [8,9], and optical communication [10,11,12] has therefore developed rapidly. Alongside these fruitful results, modal analysis is a necessary and important direction. In optical communication, modal analysis yields the amplitude and phase of the beams, which are important parameters for transmitting information. Researchers have proposed several methods for mode analysis of OAM beams, such as coherence measurements [13,14] and intensity recordings [15,16]. However, coherence measurements can only determine the mode amplitude and cannot recover the mode phase, while intensity-recording methods are time-consuming and difficult to run in real time. In recent years, the convolutional neural network (CNN) has been widely used in fields related to optical imaging [17,18,19,20], as well as in mode recognition [21,22] and demultiplexing [23,24,25] of OAM beams. Even in challenging propagation environments, such as atmospheric turbulence and underwater channels, CNNs have shown good accuracy [26,27,28,29]. However, most of these studies focus on identifying a single OAM mode or a combination of modes of multiple OAM beams; the phase information, which cannot be read directly from an optical intensity profile, has received less attention.
In this paper, we propose a dual-output convolutional neural network (Y-Net) based modal analysis method for multimode Laguerre-Gaussian (LG) beams [30], a common class of OAM-carrying beams. Our method not only outputs the weight of each mode from the optical intensity profile of the input beams, but also obtains the phase information simultaneously. We evaluate the method by optical field reconstruction and by prediction errors at different mode numbers and propagation distances, obtaining superior results that further demonstrate the advantages of the proposed scheme. Our approach has the potential to enable accurate, robust and fast real-time modal analysis of OAM beams.

2. Materials and Methods

In the cylindrical coordinate system, a single-mode LG beam with zero radial index can be represented as [31]:
$$u_l(r, \varphi, z) = \frac{A_{|l|}}{w(z)} \left[ \frac{\sqrt{2}\, r}{w(z)} \right]^{|l|} \exp\!\left[ -\frac{r^2}{w^2(z)} \right] L_0^{|l|}\!\left[ \frac{2 r^2}{w^2(z)} \right] \exp(-i l \varphi) \exp\!\left[ -\frac{i k r^2 z}{2 (z^2 + z_R^2)} \right] \exp\!\left[ i (|l| + 1) \arctan\frac{z}{z_R} \right]$$
where $l$ represents the topological charge, $r$ is the radial distance, $\varphi$ is the azimuth, $z$ is the propagation distance, $A_{|l|} = \sqrt{2/(\pi |l|!)}$, $w(z) = w_0 \left[ (z^2 + z_R^2)/z_R^2 \right]^{1/2}$ is the beam radius, $L_0^{|l|}$ is the generalized Laguerre polynomial, $z_R = k w_0^2/2$ is the Rayleigh length, and $k$ is the wavenumber.
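As a minimal numerical sketch (our illustration, not the authors' code), Equation (1) can be evaluated directly with NumPy using the paper's beam parameters (1064 nm wavelength, 15 mm waist). Since the radial index is zero, the Laguerre factor $L_0^{|l|}$ is identically 1 and drops out:

```python
import math
import numpy as np

def lg_mode(l, r, phi, z, w0=15e-3, wavelength=1064e-9):
    """Zero-radial-index LG mode of Eq. (1); L_0^{|l|} = 1 is omitted."""
    k = 2 * np.pi / wavelength
    zR = k * w0**2 / 2                           # Rayleigh length
    wz = w0 * np.sqrt(1 + (z / zR)**2)           # beam radius w(z)
    A = np.sqrt(2 / (np.pi * math.factorial(abs(l))))
    radial = (np.sqrt(2) * r / wz)**abs(l) * np.exp(-r**2 / wz**2)
    helical = np.exp(-1j * l * phi)              # vortex phase
    curvature = np.exp(-1j * k * r**2 * z / (2 * (z**2 + zR**2)))
    gouy = np.exp(1j * (abs(l) + 1) * np.arctan(z / zR))
    return (A / wz) * radial * helical * curvature * gouy

# sanity check: each mode carries unit power at z = 0
x = np.linspace(-60e-3, 60e-3, 400)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
power = (np.abs(lg_mode(1, r, phi, 0.0))**2).sum() * (x[1] - x[0])**2
```

On a sufficiently fine grid the numerically integrated power of each mode is close to 1, consistent with the normalization constant $A_{|l|}$.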
The superimposed optical field of LG beams with different l-quantum numbers, which are orthogonal to each other, can be expressed by the following equations:
$$U(r, \varphi, z) = \sum_{n=1}^{N} a_n e^{i \theta_n} u_{l_n}(r, \varphi, z)$$
where $N$ is the number of modes, $u_{l_n}(r, \varphi, z)$ is the $n$th LG eigenmode, and $a_n$ and $\theta_n$ are the amplitude and phase of each eigenmode, respectively. $a_n^2$ is the proportion of the $n$th eigenmode in the superimposed optical field, which we call the mode weight; the weights satisfy $\sum_{n=1}^{N} a_n^2 = 1$. The optical intensity profiles of multimode LG beams are shown in Figure 1.
Furthermore, the weights of the different modes can be expressed as the vector $[a_1^2, a_2^2, \ldots, a_N^2]$, while the phase is expressed slightly differently. Since the phases of the modes are relative, we define the first mode as the fundamental mode with a phase of zero; the remaining modes are described by their relative phases $[\theta_1, \theta_2, \ldots, \theta_{N-1}]$. Note that this relative-phase vector has one element fewer than the number of modes, and that its elements are linearly scaled from $[0, 2\pi]$ to $[0, 1]$. As the input we choose the optical field intensity profile of the multimode LG beams, defined as $I(x, y) = |U(r, \varphi, z)|^2$.
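The superposition of Equation (2) and the network input $I(x, y)$ can be formed as follows. This is again an illustrative sketch under the paper's parameters; the helper `lg0` is our own $z = 0$ evaluation of Equation (1), where only the amplitude profile and the vortex phase survive:

```python
import math
import numpy as np

def lg0(l, r, phi, w0=15e-3):
    """z = 0 slice of Eq. (1): amplitude profile times vortex phase."""
    A = np.sqrt(2 / (np.pi * math.factorial(abs(l))))
    return (A / w0) * (np.sqrt(2) * r / w0)**abs(l) \
        * np.exp(-r**2 / w0**2) * np.exp(-1j * l * phi)

def intensity(ls, weights, phases, r, phi):
    """Eq. (2) followed by I(x, y) = |U|^2; `weights` are the a_n^2."""
    a = np.sqrt(np.asarray(weights, dtype=float))
    U = sum(an * np.exp(1j * th) * lg0(l, r, phi)
            for an, th, l in zip(a, phases, ls))
    return np.abs(U)**2

# example: three modes with normalized weights; the first phase is the
# zero reference, the others are relative phases in [0, 2*pi)
x = np.linspace(-60e-3, 60e-3, 128)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
I = intensity([0, 1, 2], [0.5, 0.3, 0.2], [0.0, 1.0, 2.0], r, phi)
```

Because the eigenmodes are orthogonal and $\sum a_n^2 = 1$, the numerically integrated power of $I$ is close to 1.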
CNN is a typical deep learning method. In the 2012 ImageNet competition, Krizhevsky et al. proposed AlexNet, a CNN using ReLU as the activation function, which performed far better than competing algorithms and gained wide attention from researchers [32]. A general CNN consists of several layers with different functions connected in a fixed order; the output of each layer serves as the input features of the next, up to the final output layer, and each layer provides a number of trainable parameters. The core component of a CNN is the convolutional layer, which extracts features of the input image through convolution operations and maps them into higher-dimensional feature spaces.
Other layers also play important roles: the batch normalization (BN) layer normalizes the feature vectors output by the convolutional layer; the max-pooling layer slides a window over the input feature map and outputs the maximum value of each channel; and the fully connected (FC) layer maps the learned features to the sample label space.
In order to obtain the amplitude and phase information of the different modes in LG beams simultaneously, we design a dual-output convolutional neural network, shown in Figure 2. The convolutional part of the model consists of 4 blocks, connected to 2 fully connected layers and the output layer in a Y-shaped structure. Each branch of the dual-output structure consists of the 2nd–4th blocks, a fully connected layer and an output layer. Each block contains 3 convolutional layers, 3 batch normalization layers and 1 max-pooling layer; the convolution kernels in each block are of size 3 × 3 with stride 1 × 1.
This design has two advantages. First, under limited hardware computing power, one model that outputs both amplitude and phase takes less time than two separate models. Second, the two branches are joined through block1 in Figure 2 and share its output feature map; we believe such a dual-output structure retains the link between the amplitude and phase of the optical intensity profile.
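The Y-shaped idea, one shared trunk feeding two heads with Softmax and Sigmoid outputs, can be illustrated with a deliberately tiny fully connected stand-in. The real model uses the convolutional blocks of Figure 2, so every layer size below is a placeholder of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

class ToyYNet:
    """Minimal dense stand-in for the Y-shaped structure: one shared
    trunk (playing block1's role) feeds two heads that predict the N
    mode weights (Softmax) and the N-1 relative phases (Sigmoid)."""
    def __init__(self, n_in, n_hidden, n_modes):
        self.W0 = rng.normal(0, 0.1, (n_hidden, n_in))        # shared trunk
        self.Wa = rng.normal(0, 0.1, (n_modes, n_hidden))     # weight head
        self.Wp = rng.normal(0, 0.1, (n_modes - 1, n_hidden)) # phase head

    def forward(self, x):
        h = np.maximum(self.W0 @ x, 0)   # ReLU trunk features, shared
        return softmax(self.Wa @ h), sigmoid(self.Wp @ h)

net = ToyYNet(n_in=128 * 128, n_hidden=32, n_modes=9)
weights, phases = net.forward(rng.random(128 * 128))
```

The Softmax head automatically yields weights that sum to 1, matching $\sum a_n^2 = 1$, while the Sigmoid head confines each scaled phase to [0, 1].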
The optical field intensity profile is rendered as a 128 × 128 image. The mode proportions are drawn randomly and uniformly from (0, 1) and normalized, and the relative phase values are drawn randomly and uniformly from [0, 2π] and linearly scaled. The other parameters are set as follows: the wavelength of the LG beam is 1064 nm and the beam waist radius is 15 mm. We generate a total of 100,000 samples (one input image and two label vectors each) and divide them into training, validation and test sets in a 6:2:2 ratio. The model is trained on the training set and validated on the validation set, pausing during training to adjust model parameters where necessary, and is finally evaluated on the test set.
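The label generation described above can be sketched as follows (our illustration; the function name is ours):

```python
import numpy as np

def random_labels(n_modes, rng=None):
    """One sample's labels: normalized mode weights, and relative
    phases scaled linearly from [0, 2*pi] to [0, 1] (the first mode
    is the zero-phase reference, so there are n_modes - 1 phases)."""
    rng = rng or np.random.default_rng()
    a2 = rng.random(n_modes)
    a2 /= a2.sum()                         # mode weights sum to 1
    theta = rng.random(n_modes - 1) * 2 * np.pi
    return a2, theta / (2 * np.pi)         # scale phases to [0, 1]

weights_label, phases_label = random_labels(9, np.random.default_rng(0))
```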
The performance of the model is closely related to the hyper-parameter settings. We use a mini-batch size of 64 to speed up computation and the Adam optimizer with an initial learning rate of 0.01. Moreover, we train with a decaying learning rate: the rate is halved every 4 epochs for the first 20 epochs and every epoch thereafter. For the activation functions of the output layer, Softmax and Sigmoid [33] are used to predict the mode weights and relative phases, respectively. The loss function is based on the mean absolute error (MAE), with the two terms for the mode weights and relative phases weighted 1:1, i.e., $\mathrm{Loss} = \mathrm{Loss}_A + \mathrm{Loss}_P$; the final loss function is:
$$\mathrm{Loss} = \frac{1}{N} \sum_{n=1}^{N} \left| y_{A,n} - \hat{y}_{A,n} \right| + \frac{1}{N-1} \sum_{n=1}^{N-1} \left| y_{P,n} - \hat{y}_{P,n} \right|$$
where $N$ is the number of vector elements, $y_n$ is an element of the true label vector, and $\hat{y}_n$ is the corresponding element of the predicted label vector; the subscripts $A$ and $P$ denote the weight vector and the relative-phase vector, respectively.
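The loss of Equation (3) is straightforward to implement; a NumPy sketch (ours, not the authors' training code):

```python
import numpy as np

def dual_mae_loss(yA_true, yA_pred, yP_true, yP_pred):
    """Eq. (3): equally weighted MAE over the N mode weights (A) and
    the N-1 scaled relative phases (P)."""
    loss_A = np.mean(np.abs(np.asarray(yA_true) - np.asarray(yA_pred)))
    loss_P = np.mean(np.abs(np.asarray(yP_true) - np.asarray(yP_pred)))
    return loss_A + loss_P

# three-mode example: weight labels have 3 elements, phase labels 2
loss = dual_mae_loss([0.5, 0.3, 0.2], [0.4, 0.4, 0.2],
                     [0.2, 0.8], [0.2, 0.7])
```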
All training and testing in this work are performed on a GPU server with an RTX 2080 Ti graphics card. The loss function converged after 30 training epochs, with the whole process taking only about 25 min. The model processes the test set of 20,000 samples in 19.5 s, an average computation time of 0.975 ms per sample, demonstrating that our model allows fast, near-real-time modal analysis of multimode LG beams. The model complexity could be reduced by adjusting its hyperparameters and structure to further speed up computation, and parallel computing can provide additional resources to reduce computation time.

3. Results

When the weight and relative phase of each mode in a multimode LG beam are known, the input optical field intensity profile can easily be reconstructed, and the accuracy of the model's predictions can be visualized through the reconstructed image. We use the correlation coefficient to characterize the quality of the reconstruction [34], expressed as follows:
$$C = \left| \frac{\int \Delta I_m(\mathbf{r})\, \Delta I_r(\mathbf{r})\, d^2 r}{\sqrt{\int \Delta I_m(\mathbf{r})^2\, d^2 r \int \Delta I_r(\mathbf{r})^2\, d^2 r}} \right|$$
where $\Delta I_j(\mathbf{r}) = I_j(\mathbf{r}) - \bar{I}_j$ $(j = m, r)$ and $\bar{I}_j$ is the mean value of the input optical intensity $I_m$ or the reconstructed optical intensity $I_r$. The value of $C$ quantifies the similarity between the reconstructed image and the original image and is reported in Figure 3; ideally, when the reconstructed image is identical to the original image, $C$ attains its maximum value of 1. The residual image [35] is given by $\Delta I(x, y) = |I_m - I_r|$, the absolute value of the difference between the reconstructed and original images at each pixel.
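On discrete images the integrals of Equation (4) become pixel sums, so the correlation coefficient and residual map reduce to a few lines (our sketch):

```python
import numpy as np

def correlation_coefficient(I_m, I_r):
    """Eq. (4): normalized cross-correlation of the mean-subtracted
    measured and reconstructed intensity images."""
    dm = I_m - I_m.mean()
    dr = I_r - I_r.mean()
    return abs((dm * dr).sum()) / np.sqrt((dm**2).sum() * (dr**2).sum())

def residual(I_m, I_r):
    """Per-pixel residual |I_m - I_r|."""
    return np.abs(I_m - I_r)
```

Note that $C$ is invariant to overall scaling of either image: identical images, or images differing only by a positive scale factor, give $C = 1$.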
We use the dual-output CNN to predict the weights and relative phases of multimode LG beams composed of 9 superimposed modes (l = 0, 1, ⋯, 8), and display the image reconstructed according to Equation (2) alongside the original input image and the residual image for visual comparison; the results are shown in Figure 3. The correlation coefficient between the reconstructed and input images is above 0.999, while the intensity of the residual image is almost 0, confirming the feasibility of our scheme. Note in Figure 3b that the residual phase images contain several red points where the phase values approach 2π, indicating that the predicted phase at these points may differ from the true value by nearly a full period. Since the phase of an optical wave is periodic, a phase difference approaching 2π can be regarded as approaching 0; this ambiguity is unavoidable when only one optical field profile is used for modal analysis [36].
We investigate the effect of the number of modes on the CNN's modal analysis performance, evaluated with the MAE. The weight error and phase error are defined as $\Delta a^2 = \frac{1}{N} \sum \left| a_p^2 - a_t^2 \right|$ and $\Delta\theta = \frac{1}{N-1} \sum \left| \theta_p - \theta_t \right|$, where the subscripts $p$ and $t$ denote the predicted and true values, respectively. As shown in Figure 4, the mode weight error and phase error gradually increase with the number of modes, the phase error is always higher than the weight error, and the gap between them widens as the number of modes grows. A possible reason is that the optical intensity profile of multimode LG beams becomes progressively more complex as the number of modes increases, which makes feature extraction and characterization harder for the CNN and leads to a gradual increase in both errors. This can be mitigated by training on more [18] or higher-resolution [35] samples, or by other methods common in deep learning, such as pre-training and hyperparameter tuning.
Good generalizability is an important dimension when evaluating the performance of a CNN. In practical optical communication scenarios, LG beams with non-adjacent l-quantum numbers are multiplexed to avoid crosstalk between adjacent OAM modes during propagation. Our model is trained on samples with adjacent l-quantum numbers; samples with non-adjacent l are not included. To verify how the model performs on unknown samples, we generate two datasets with mode compositions l = 1, 3, 5, 7, 9 and l = 1, 5, 9, respectively, and test the CNN trained on the l = 1, 2, 3, 4, 5, 6, 7, 8, 9 dataset on these two new datasets. The results are shown in Figure 5: the predicted mode weights agree well with the actual values, indicating that the CNN generalizes well to associated unknown datasets, which can shift the burden from the device level to data processing. This demonstrates the practical value of the dual-output CNN-based approach for modal analysis.
Our model is trained on images at zero propagation distance, but it is also suitable for modal analysis of samples at non-zero propagation distances. We test the model on samples with different propagation distances and mode combinations; the results are shown in Figure 6. The weight error increases with propagation distance, but even for the most complex nine-mode multiplexed beams it is only $5.6 \times 10^{-3}$ after propagating 120 m, indicating that our model supports modal analysis of multimode LG beams over a useful range of distances. It is worth mentioning that the prediction accuracy of the CNN could be improved further by adding samples at non-zero propagation distances to the dataset.
The performance of neural networks can also be affected by noise. We test on a dataset containing random noise to investigate the robustness of the model. Each pixel value in the optical intensity profile image is multiplied by a factor $f = 1 + N(0, 1) \cdot \sigma$ to generate the noisy image dataset, where $N(0, 1)$ is the standard normal distribution and $\sigma$ is the noise intensity [18]. As shown in Figure 7a, the optical intensity profiles of the 9-mode superimposed LG beams after propagating 100 m become gradually blurred as $\sigma$ increases; a local region of the image is enlarged to show this change in more detail. As shown in Figure 7b, the prediction error and the slope of the curve increase with noise intensity, yet even at a noise intensity of 0.12 the weight error remains below $1.4 \times 10^{-2}$. Noise of this intensity is difficult to reach in real situations [18], so the results in Figure 7b confirm that our model has strong noise immunity.
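The multiplicative noise model stated above can be reproduced as follows (our sketch of the factor $f = 1 + N(0,1) \cdot \sigma$; the function name is ours):

```python
import numpy as np

def add_multiplicative_noise(I, sigma, rng=None):
    """Multiply each pixel by f = 1 + N(0,1) * sigma, the noise model
    used in the robustness test."""
    rng = rng or np.random.default_rng()
    f = 1 + rng.standard_normal(I.shape) * sigma
    return I * f

rng = np.random.default_rng(0)
I = np.ones((32, 32))
clean = add_multiplicative_noise(I, 0.0, rng)   # sigma = 0: unchanged
noisy = add_multiplicative_noise(I, 0.1, rng)   # sigma = 0.1
```

With zero-mean noise the expected pixel value is unchanged; only the pixel-to-pixel variance grows with $\sigma$, which is what blurs the profiles in Figure 7a.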

4. Conclusions

In summary, we propose a dual-output CNN modal analysis method that quickly and accurately predicts the mode weights and phase information of multimode LG beams simultaneously. The trained CNN processes a single input intensity image in less than 1 ms and maintains accurate predictions even on associated unknown datasets and noise-disturbed samples. The model's performance shows that our method is accurate, robust and fast, and can reduce the burden at the device level. In addition, our method may be applicable to the modal analysis of other OAM beams, such as Bessel beams, suggesting general value for the practical application of OAM beams in optical communications.

Author Contributions

Conceptualization, X.Y., Y.X., R.Z., R.L., X.H., X.F., and Y.C.; methodology, X.Y., Y.X., R.Z., R.L., Y.C., and J.Z.; software, X.Y., Y.X., and R.Z.; validation, X.Y., Y.X., and R.Z.; formal analysis, X.Y., Y.X., and R.Z.; investigation, X.Y., Y.X., and R.Z.; resources, C.Z., Y.Q., and Y.Z.; data curation, X.Y. and Y.X.; writing—original draft preparation, X.Y. and Y.X.; writing—review and editing, X.Y., Y.X., R.Z., C.Z., Y.Q., and Y.Z.; visualization, X.Y.; supervision, X.Y., Y.X., C.Z., Y.Q., and Y.Z.; project administration, X.Y., Y.X., R.Z., C.Z., Y.Q., and Y.Z.; funding acquisition, C.Z. and Y.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China (Grant No. 2017YFA0303700); National Natural Science Foundation of China (Grants No. 91950103, No. 11874214, No. 11774165, and No. 12004177); Priority Academic Program Development of Jiangsu Higher Education Institutions of China (PAPD).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they form part of ongoing research.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Service, R.F. Light Beams With a Twist Could Give a Turbo Boost to Fiber-Optic Cables. Science 2013, 340, 1513.
2. Allen, L.; Beijersbergen, M.W.; Spreeuw, R.J.C.; Woerdman, J.P. Orbital angular momentum of light and the transformation of Laguerre-Gaussian laser modes. Phys. Rev. A 1992, 45, 8185–8189.
3. Peng, J.; Zhang, L.; Zhang, K.; Ma, J. Channel capacity of OAM based FSO communication systems with partially coherent Bessel–Gaussian beams in anisotropic turbulence. Opt. Commun. 2018, 418, 32–36.
4. Du, J.; Wang, J. High-dimensional structured light coding/decoding for free-space optical communications free of obstructions. Opt. Lett. 2015, 40, 4827–4830.
5. Wang, A.; Zhu, L.; Chen, S.; Du, C.; Mo, Q.; Wang, J. Characterization of LDPC-coded orbital angular momentum modes transmission and multiplexing over a 50-km fiber. Opt. Express 2016, 24, 11716–11726.
6. Gu, B.; Hu, Y.; Zhang, X.; Li, M.; Zhu, Z.; Rui, G.; He, J.; Cui, Y. Angular momentum separation in focused fractional vector beams for optical manipulation. Opt. Express 2021, 29, 14705–14719.
7. Bobkova, V.; Stegemann, J.; Droop, R.; Otte, E.; Denz, C. Optical grinder: Sorting of trapped particles by orbital angular momentum. Opt. Express 2021, 29, 12967–12975.
8. Vallone, G.; D'Ambrosio, V.; Sponselli, A.; Slussarenko, S.; Marrucci, L.; Sciarrino, F.; Villoresi, P. Free-Space Quantum Key Distribution by Rotation-Invariant Twisted Photons. Phys. Rev. Lett. 2014, 113, 060503.
9. Cozzolino, D.; Bacco, D.; Da Lio, B.; Ingerslev, K.; Ding, Y.; Dalgaard, K.; Kristensen, P.; Galili, M.; Rottwitt, K.; Ramachandran, S.; et al. Orbital Angular Momentum States Enabling Fiber-based High-dimensional Quantum Communication. Phys. Rev. Appl. 2019, 11, 064058.
10. Bozinovic, N.; Yue, Y.; Ren, Y.; Tur, M.; Kristensen, P.; Huang, H.; Willner, A.E.; Ramachandran, S. Terabit-Scale Orbital Angular Momentum Mode Division Multiplexing in Fibers. Science 2013, 340, 1545–1548.
11. Willner, A.E.; Li, L.; Xie, G.; Ren, Y.; Huang, H.; Yue, Y.; Ahmed, N.; Willner, M.J.; Willner, A.J.; Yan, Y.; et al. Orbital-angular-momentum-based reconfigurable optical switching and routing. Photon. Res. 2016, 4, B5–B8.
12. Wang, A.; Zhu, L.; Wang, L.; Ai, J.; Chen, S.; Wang, J. Directly using 8.8-km conventional multi-mode fiber for 6-mode orbital angular momentum multiplexing transmission. Opt. Express 2018, 26, 10038–10047.
13. Turunen, J.; Tervonen, E.; Friberg, A.T. Coherence theoretic algorithm to determine the transverse-mode structure of lasers. Opt. Lett. 1989, 14, 627–629.
14. Tervonen, E.; Turunen, J.; Friberg, A.T. Transverse laser-mode structure determination from spatial coherence measurements: Experimental results. Appl. Phys. B 1989, 49, 409–414.
15. Cutolo, A.; Isernia, T.; Izzo, I.; Pierri, R.; Zeni, L. Transverse mode analysis of a laser beam by near- and far-field intensity measurements. Appl. Opt. 1995, 34, 7974–7978.
16. Xue, X.; Wei, H.; Kirk, A.G. Intensity-based modal decomposition of optical beams in terms of Hermite–Gaussian functions. J. Opt. Soc. Am. A 2000, 17, 1086–1091.
17. Wang, K.; Dou, J.; Kemao, Q.; Di, J.; Zhao, J. Y-Net: A one-to-two deep learning framework for digital holographic reconstruction. Opt. Lett. 2019, 44, 4765–4768.
18. Liu, A.; Lin, T.; Han, H.; Zhang, X.; Chen, Z.; Gan, F.; Lv, H.; Liu, X. Analyzing modal power in multi-mode waveguide via machine learning. Opt. Express 2018, 26, 22100–22109.
19. Wang, H.; Lyu, M.; Situ, G. eHoloNet: A learning-based end-to-end approach for in-line digital holographic reconstruction. Opt. Express 2018, 26, 22603–22614.
20. Lyu, M.; Wang, W.; Wang, H.; Wang, H.; Li, G.; Chen, N.; Situ, G. Deep-learning-based ghost imaging. Sci. Rep. 2017, 7, 17865.
21. Lohani, S.; Knutson, E.M.; O'Donnell, M.; Huver, S.D.; Glasser, R.T. On the use of deep neural networks in optical communications. Appl. Opt. 2018, 57, 4180–4190.
22. Liu, Z.; Yan, S.; Liu, H.; Chen, X. Superhigh-Resolution Recognition of Optical Vortex Modes Assisted by a Deep-Learning Method. Phys. Rev. Lett. 2019, 123, 183902.
23. Park, S.R.; Cattell, L.; Nichols, J.M.; Watnik, A.; Doster, T.; Rohde, G.K. De-multiplexing vortex modes in optical communications using transport-based pattern recognition. Opt. Express 2018, 26, 4004–4022.
24. Doster, T.; Watnik, A.T. Machine learning approach to OAM beam demultiplexing via convolutional neural networks. Appl. Opt. 2017, 56, 3386–3396.
25. Bekerman, A.; Froim, S.; Hadad, B.; Bahabad, A. Beam profiler network (BPNet): A deep learning approach to mode demultiplexing of Laguerre–Gaussian optical beams. Opt. Lett. 2019, 44, 3629–3632.
26. Zhao, Q.; Hao, S.; Wang, Y.; Wang, L.; Wan, X.; Xu, C. Mode detection of misaligned orbital angular momentum beams based on convolutional neural network. Appl. Opt. 2018, 57, 10152–10158.
27. Li, J.; Zhang, M.; Wang, D.; Wu, S.; Zhan, Y. Joint atmospheric turbulence detection and adaptive demodulation technique using the CNN for the OAM-FSO communication. Opt. Express 2018, 26, 10494–10508.
28. Liu, J.; Wang, P.; Zhang, X.; He, Y.; Zhou, X.; Ye, H.; Li, Y.; Xu, S.; Chen, S.; Fan, D. Deep learning based atmospheric turbulence compensation for orbital angular momentum beam distortion and communication. Opt. Express 2019, 27, 16671–16688.
29. Cui, X.; Yin, X.; Chang, H.; Liao, H.; Chen, X.; Xin, X.; Wang, Y. Experimental study of machine-learning-based orbital angular momentum shift keying decoders in optical underwater channels. Opt. Commun. 2019, 452, 116–123.
30. Courtial, J.; Padgett, M.J. Performance of a cylindrical lens mode converter for producing Laguerre–Gaussian laser modes. Opt. Commun. 1999, 159, 13–18.
31. Hall, D.G. Vector-beam solutions of Maxwell's wave equation. Opt. Lett. 1996, 21, 9–11.
32. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
33. Kwan, H.K. Simple sigmoid-like activation function suitable for digital hardware implementation. Electron. Lett. 1992, 28, 1379–1380.
34. Lee Rodgers, J.; Nicewander, W.A. Thirteen Ways to Look at the Correlation Coefficient. Am. Stat. 1988, 42, 59–66.
35. An, Y.; Huang, L.; Li, J.; Leng, J.; Yang, L.; Zhou, P. Learning to decompose the modes in few-mode fibers with deep convolutional neural network. Opt. Express 2019, 27, 10127–10137.
36. Brüning, R.; Gelszinnis, P.; Schulze, C.; Flamm, D.; Duparré, M. Comparative analysis of numerical methods for the mode analysis of laser beams. Appl. Opt. 2013, 52, 7769–7777.
Figure 1. Optical intensity profile of multimode LG beams. (a) l = 0, 1, 2; (b) l = 0, 1, 2, 3; (c) l = 0, 1, 2, 3, 4.
Figure 2. Illustration of dual output CNN.
Figure 3. Comparison of the reconstructed image with the original image for 9 modes of superimposed LG beams. (a) Image comparison of mode weights; (b) Comparison of mode phase maps.
Figure 4. The relationship between mode error and the change of mode number.
Figure 5. Comparison of the predicted and true values of the mode weights for the non-adjacent topological charge condition. (a) The cases of l = 1, 3, 5, 7, 9; (b) The case of l = 1, 5, 9.
Figure 6. The relation between mode weight error and distance.
Figure 7. The performance of CNN for noisy input profiles. (a) Input profiles under different noise intensities; (b) The relation between mode weight error and noise intensity.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Yuan, X.; Xu, Y.; Zhao, R.; Hong, X.; Lu, R.; Feng, X.; Chen, Y.; Zou, J.; Zhang, C.; Qin, Y.; et al. Dual-Output Mode Analysis of Multimode Laguerre-Gaussian Beams via Deep Learning. Optics 2021, 2, 87-95. https://0-doi-org.brum.beds.ac.uk/10.3390/opt2020009
