Article

Machine Learning Technique to Improve an Impedance Matching Characteristic of a Bent Monopole Antenna

1 Department of Electronics Engineering, Andong National University, 1375 Gyengdong-ro, Andong-si 36729, Gyeongsangbuk-do, Korea
2 Department of Electronic Engineering, Myongji University, 116 Myongji-ro, Cheoin-gu, Yongin-si 17058, Gyeonggi-do, Korea
3 Department of Data Science, Korea National University of Transportation, 157, Cheoldobangmulgwan-ro, Uiwang-si 16106, Gyeonggi-do, Korea
* Author to whom correspondence should be addressed.
Submission received: 29 September 2021 / Revised: 1 November 2021 / Accepted: 12 November 2021 / Published: 16 November 2021

Abstract

We designed a wire monopole antenna bent at three points by applying a machine learning technique to achieve a good impedance matching characteristic. After performing deep neural network (DNN)-based training, we validated our machine learning model by evaluating its mean squared error and R-squared score. Given a mean squared error of nearly zero and an R-squared score of nearly one, the performance predicted by the resulting machine learning model showed high accuracy compared with that obtained by numerical electromagnetic simulation. Finally, we interpreted the operating principle of the antennas with a good impedance matching characteristic by analyzing equivalent circuits corresponding to their structures. The work accomplished in this research demonstrates the possibility of using machine learning techniques in antenna design.

1. Introduction

The wire monopole antenna is one of the most popular antennas because it provides an omni-directional radiation pattern, high radiation efficiency, and easy fabrication [1,2,3]. In addition, the geometry of a wire monopole antenna can easily be modified in various ways to improve antenna performance: control of operating frequency bands by adding resonating elements [4,5], increase of antenna bandwidth by tapering the feeder [6], enhancement of radiation gain by equipping director and reflector wires [7,8], reduction of antenna size by folding the antenna body [9,10,11], achievement of favorable impedance matching by placing shorting wires or capacitive loads at the optimum location on the antenna body [9,12,13,14], and control of both radiation pattern and polarization by bending or folding the antenna arms [15,16,17].
Among the modified wire monopole antennas, bent wire antennas such as the inverted-L antenna perform well in terms of both size reduction and impedance matching, because the angles between wire elements (bending angles) and the lengths of the wire elements affect the antenna impedance and the current induced on the wire [3,9,17,18]. However, as is well known, reducing the antenna size simultaneously increases the quality (Q) factor, which diminishes the antenna bandwidth [19,20], since the antenna bandwidth is inversely proportional to the Q factor. In the case of an inverted-L antenna, the location of the bending point on the wire monopole significantly influences the Q factor because it determines the antenna size, i.e., the smallest sphere enclosing the antenna structure. In the results of [9], as the height of the inverted-L antenna (the distance between the bending point and the ground plane) decreases below 10% of an electrical wavelength (0.1 λ0) at the operating frequency (f0), the Q factor drastically increases. Thus, bent wire antennas such as the inverted-L antenna must be optimized with an effective methodology to achieve the required bandwidth within the restricted antenna size.
To optimize an antenna structure toward target performances, various optimization algorithms such as the genetic algorithm and the particle swarm algorithm have been employed [21,22]. The more complex the antenna structure, the more effective such optimization algorithms are in antenna design. However, the electromagnetic (EM) analysis or simulation required to evaluate the fitness of sample antennas during optimization is a burden to antenna designers because EM simulation requires considerable time and effort. To overcome this limitation, machine learning (ML) techniques can be one of the solutions because the resulting ML model approximately provides the proper antenna structure without EM simulation [23,24,25]. In [23], ML techniques such as the least absolute shrinkage and selection operator (lasso), artificial neural networks (ANNs), and k-nearest neighbors (kNN) provide an efficient framework to identify the optimal design of a T-shaped monopole antenna.
Machine learning [26] is a science that utilizes the capability of a computer to help humans deal with diverse problems without being explicitly programmed. Several large machine learning projects have gained wide recognition in recent years: Google Brain, which focused on pattern detection in images and videos [27]; AlexNet, which popularized the use of GPUs and convolutional neural networks in machine learning [28,29,30]; and DeepFace, which can recognize people with the same precision as a human [31]. Supervised and unsupervised learning are the most common machine learning methods. Supervised learning algorithms are trained using the available input and the corresponding label (the desired output). Training a model (the goal of a machine learning algorithm) is a process of comparing the actual label with the generated output and minimizing their difference. Classification, regression, and prediction are subcategories of supervised learning. In contrast, unsupervised learning uses input data without labels; the model itself is required to explore the data and find structure within it. Applications include customer segmentation, item recommendation, and the detection of anomalous observations in a dataset.
Inspired by the powerful applicability of machine learning techniques, we herein designed a bent monopole antenna having a good impedance matching characteristic using a deep neural network (DNN). Compared with the ANN model in [23], the proposed DNN uses multiple hidden layers to extract valuable features from the input [32]. The ML performance of the proposed DNN model is verified against lasso [23] and linear regression. The proposed DNN yields highly accurate results for the design parameters of bent monopole antennas. In the remainder of this paper, we describe the antenna geometry and the training data generation in Section 2. The ML technique, the training process, and the ML performance are discussed in Section 3.1. The validation of our ML model is explained in Section 3.2, where the impedance matching characteristics of optimal antenna models are compared with those of EM simulation and interpreted through their circuit models.

2. Antenna Geometry and Data Generation

This section introduces the overall structure and design parameters of a bent wire monopole antenna and then explains the acquisition of the training data using the Numerical Electromagnetic Code (NEC) simulation [1].

2.1. Antenna Geometry

The geometry of the proposed wire antenna, built on an infinite ground plane at z = 0, is shown in Figure 1, where a copper wire with a radius of 0.5 mm is bent in the x-z plane (y = 0). The antenna body, connected to a feed with a characteristic impedance (Z0) of 50 Ω, is divided into three sections determined by both the lengths of the wire sections (ln) and the bending angles (θn) at the bending points (Pn) (where n = 1, 2, and 3). Thus, the location of the (n + 1)th bending point (Pn+1) is derived from the nth bending point (Pn) and its design parameters ln and θn, as shown in Table 1.

2.2. Data Generation for Machine Learning

To obtain a machine learning model that predicts the antenna performance, training data pairing each antenna structure with its performance is required. Here, as the performance of interest, we considered the impedance matching of an antenna with an operating frequency of 1 GHz in the frequency range from 950 to 1050 MHz. We employed the bent wire because the antenna impedance can be effectively controlled by the multiple bent wire sections to achieve a good impedance matching characteristic [3,9,17,18]. We thus defined the cost function (Cost) as (1) to evaluate the impedance matching characteristic:
Cost = \sum_{m=1}^{M} \left| \Gamma(f_m) \right| = \sum_{m=1}^{M} \left| \frac{Z_{ant}(f_m) - Z_0}{Z_{ant}(f_m) + Z_0} \right|    (1)
where M, Γ(fm), Zant(fm), and Z0 are the number of frequency points in the frequency range of interest, the reflection coefficient at frequency fm, the antenna impedance at frequency fm, and the characteristic impedance of the feeder, respectively. Herein, we set M, Z0, and fm to 201, 50 Ω, and (950 + 0.5m) MHz, respectively.
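As a minimal sketch of Eq. (1), the Cost can be evaluated directly from a sweep of complex antenna impedances; the impedance arrays below are hypothetical stand-ins (a real sweep would come from the NEC simulation):

```python
import numpy as np

def cost(z_ant, z0=50.0):
    """Cost of Eq. (1): sum of |reflection coefficient| over the
    M = 201 frequency points, given complex antenna impedance samples."""
    gamma = (z_ant - z0) / (z_ant + z0)   # Γ(f_m) at each frequency point
    return np.abs(gamma).sum()

# Hypothetical impedance sweeps:
z_matched = np.full(201, 50 + 0j)    # perfectly matched at every point
z_open    = np.full(201, 1e12 + 0j)  # near-open circuit: full reflection

print(cost(z_matched))      # → 0.0 (best case)
print(round(cost(z_open)))  # → 201, i.e. Cost = M (worst case)
```

The two limiting cases reproduce the bounds discussed next: Cost = 0 for a perfect match and Cost = M for full reflection at every frequency point.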
Based on the definition in (1), the Cost becomes zero in the best case, in which the antenna is perfectly matched to a feeder of Z0 at every frequency of interest. In the worst case, in which no electromagnetic wave propagates from the feeder to the antenna (full reflection) at any frequency of interest, the Cost is expected to be M (= 1 × M) because full reflection makes the reflection coefficient one at each frequency. Thus, we can introduce the average fitness (AF), in percent, to express the quality of the impedance matching over the frequency range of interest, as defined in (2).
AF(\%) = \frac{1}{M} \sum_{m=1}^{M} \left( 1 - \left| \Gamma(f_m) \right| \right) \times 100    (2)
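Since Eq. (2) averages (1 − |Γ|) over the M frequency points, AF follows directly from the Cost as AF = (1 − Cost/M) × 100. A small sketch of this relation:

```python
def average_fitness(cost_value, M=201):
    """Average fitness of Eq. (2): 100% when Cost = 0 (perfect match),
    0% when Cost = M (full reflection at every frequency point)."""
    return (1.0 - cost_value / M) * 100.0

print(average_fitness(0))             # → 100.0
print(average_fitness(201))           # → 0.0
print(round(average_fitness(46), 1))  # → 77.1 (Cost ≈ 46 gives AF ≈ 77%)
```

The last line matches the ranked antennas discussed later, whose Costs below about 46 correspond to AFs above about 77%.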
We obtained training data of 729,000 samples using NEC simulation for the ML model, as shown in Table 2. Each sample pairs the design parameters l1, l2, l3, θ1, θ2, and θ3 with the evaluated Cost of the corresponding antenna.
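The sample count of 729,000 factors as 10³ × 9³, which is consistent with the denser search grid described later (20 points per length and 18 per angle, i.e., double these counts). A sketch of generating such a parameter grid, where the per-parameter sample values are placeholders (the actual ranges are given in Table 2):

```python
from itertools import product

# Hypothetical sample grids: 10 points per wire-section length and
# 9 points per bending angle; only the counts are inferred here,
# not the actual parameter ranges of Table 2.
lengths = [range(10)] * 3   # l1, l2, l3: 10 sample points each
angles  = [range(9)] * 3    # θ1, θ2, θ3: 9 sample points each

# Every combination of the six design parameters is one sample antenna.
samples = product(*lengths, *angles)
n = sum(1 for _ in samples)
print(n)   # → 729000 = 10³ × 9³ sample antennas
```

Each tuple from this grid would then be passed to the NEC simulation to evaluate its Cost, forming one training record.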

3. Machine Learning Techniques

In this approach, in order to determine the optimal antenna design, i.e., the set of design parameters with the lowest Cost, we employed a deep neural network (DNN) model to handle this regression task [33]. The DNN is a subfield of machine learning with algorithms inspired by the idea of simulating the human brain [34]. DNN models have recently gained increasing research interest due to their capability of overcoming the drawbacks of traditional algorithms dependent on hand-designed programs, and they are investigated in many different domains, such as speech recognition, computer vision, pattern recognition, and regression [35].
Figure 2 shows the structure of a DNN model, which consists of an input layer, hidden layers, and an output layer. The input layer contains all visible information about the object of interest (known as “data” in machine learning) and a hidden layer executes complicated mathematical functions to extract valuable features from the input layer. Based on that, the appropriate information is calculated for the output.
In addition, regression analysis is a statistical method for estimating the relationships between one dependent variable (the factor we want to understand or predict) and a series of independent variables (the factors that influence the dependent variable) [33]. Here, we propose a DNN regression model for the design of a bent wire monopole antenna. In this problem, the input variables are a set of six design parameters X = {l1, l2, l3, θ1, θ2, θ3} (described in Section 2), and the output of the regression task is the corresponding y = Cost as defined in (1). Therefore, the relation between the input and the output is formulated as:
y_i = F(aX_i + b),    (3)
where F represents the DNN regression model, and a and b are unknown parameters; this simple notation stands in for the full complexity of the model, whose parameters are determined during the training process. In the training phase, the mean squared error (MSE) is utilized as the loss function [36]. The MSE is computed as the average of the squared differences between the predicted and actual Cost values:
MSE = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2,    (4)
where n is the number of data points, and y_i and \hat{y}_i are the actual and predicted Cost values for the given design parameters X_i, respectively. Driving the MSE toward zero is key for training the model because an MSE approaching zero implies that the predicted Cost is substantially identical to the actual Cost computed by NEC simulation.
In addition, the R-squared score (R²) was chosen as a performance metric to evaluate the model by comparing the trained model's predictions with the observed data [37,38]. It is a statistical measure of how close the data are to the fitted regression line. The R-squared score is computed as:
R^2 = 1 - \frac{\text{Unexplained variation}}{\text{Total variation}} = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2},    (5)
where the unexplained variation is the sum of the squared differences between the actual and predicted Cost values, and the total variation is calculated in a similar way by replacing the predicted value with the mean of the actual values, \bar{y}.
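The two metrics above can be sketched in a few lines of numpy; the example values below are illustrative, not results from the paper:

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error between actual and predicted Cost values."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.mean((y - y_hat) ** 2)

def r_squared(y, y_hat):
    """R² = 1 − (unexplained variation) / (total variation)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)      # unexplained variation
    ss_tot = np.sum((y - y.mean()) ** 2)   # total variation
    return 1.0 - ss_res / ss_tot

# Illustrative Cost values (actual vs. predicted):
y_true = [60.0, 70.0, 80.0, 46.0]
y_pred = [60.5, 69.8, 80.2, 45.9]

print(round(mse(y_true, y_pred), 4))        # → 0.085
print(round(r_squared(y_true, y_pred), 4))  # → 0.9995
```

A near-zero MSE together with an R² near one, as in this toy example, is the pattern the trained DNN model exhibits on the testing set.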
We illustrate our machine learning process for predicting the antenna design as a block diagram in Figure 3. In this specific problem, the obtained data, composed of the antenna design parameters and the Cost value for each sample antenna, is used to train a machine learning model, i.e., the DNN. The model training process is discussed in detail in Section 3.1. The remainder of Section 3 presents the procedure for identifying the optimal design parameters based on a grid search over the trained machine learning model, as well as the performance verification.

3.1. Machine Learning Results

The data obtained in Section 2.2 was fed into the DNN model with the split: training set/validation set/testing set = 70%/15%/15%. The three sets play specific roles in the machine learning process: the samples in the training set are used to train the various models; the validation set is used to compare their performances and choose which one to employ; and the testing set is kept separate from the rest to verify whether the final predictions of the trained model are correct. If the predictions deviate substantially from the corresponding actual values, the training process needs to be restarted.
Table 3 shows the structure of the proposed DNN model, where the activation functions are the Rectified Linear Unit (ReLU) for the hidden layers and linear for the final layer to generate the numerical output [39,40]. All these parameters were tuned for our dataset. The proposed model was trained with the MSE loss function, optimized by Adam [41] with a learning rate of 0.001 and a batch size of 128. A set of sequential Dense layers, the basic layer that consumes the input and computes a new output by performing matrix-vector multiplications, is the main component of this network. Adding a dropout layer behind each Dense layer was considered due to its efficiency in preventing overfitting (a situation in which a model performs well on training data but fails on unseen data such as the testing set, which is undesirable for every type of model in machine learning). However, dropout regularization has been reported to perform less well in regression tasks than in classification and natural language processing tasks [42]. Therefore, based on our experiments, the dropout layer is not used in our model. The number of units in each Dense layer was adjusted by trying multiple combinations to find the best accuracy for our dataset. The detailed layer information is summarized in Table 3.
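The forward pass of such a network (ReLU hidden layers, linear output) can be sketched in plain numpy; the layer widths below are hypothetical placeholders, since the actual unit counts are listed in Table 3, and the random weights stand in for parameters that would be learned with Adam on the MSE loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def init_layer(n_in, n_out):
    # Small random weights; a trained model would obtain these via
    # Adam (learning rate 0.001, batch size 128) on the MSE loss.
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

# Hypothetical layer widths: 6 design parameters in, 1 Cost out.
widths = [6, 64, 64, 64, 1]
layers = [init_layer(a, b) for a, b in zip(widths[:-1], widths[1:])]

def predict(X):
    """Forward pass: ReLU on hidden Dense layers, linear output layer."""
    h = X
    for W, b in layers[:-1]:
        h = relu(h @ W + b)   # Dense layer = matrix-vector product + bias
    W, b = layers[-1]
    return h @ W + b          # linear activation for the regression output

# One batch of 128 design-parameter vectors:
X = rng.uniform(0, 1, (128, 6))
print(predict(X).shape)   # → (128, 1): one predicted Cost per sample
```

In practice this forward pass would be expressed with a deep learning framework rather than hand-rolled numpy; the sketch only mirrors the Dense/ReLU/linear structure described above.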
Figure 4 compares the actual and predicted values on the testing set to visualize the performance of the proposed method. Each dot in the figure is formed by the actual value as the x-coordinate and the corresponding predicted value as the y-coordinate. Therefore, the closer the two values are, the closer the resulting point (a blue dot) lies to the line y = x (the diagonal red line in the figure). Based on this figure, our prediction in the testing phase approaches the ideal, which confirms the efficiency of our DNN model.
Table 4 shows a comparison with two other machine learning techniques, lasso [43] and linear regression [44]. The MSE and R-squared score calculated on the testing set are used as the bases of comparison. As mentioned before, a smaller MSE and a higher R-squared score denote a better model; therefore, our DNN model, with an MSE of 0.0353 and an R-squared score of 0.999, is superior to both lasso and linear regression. In this regression problem, the proposed DNN outperforms the other methods.

3.2. Validation of Machine Learning Model

To find the optimal design parameters of the bent wire antenna based on the proposed DNN model, a grid search is applied to the parameters l1, l2, l3, θ1, θ2, and θ3, with the number of points for each parameter double that used for the training set while maintaining the range of each variable. This ensures that the search grid is denser than the training grid. In other words, the number of points for each wire-section length and each bending angle is 20 and 18, respectively. Therefore, the total number of samples generated by all possible combinations of parameters in this grid is 46,656,000.
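The grid size follows directly from the per-parameter counts, and the optimum is simply the grid point with the minimum predicted Cost; the short sketch below uses a stand-in cost array rather than the trained model:

```python
import numpy as np

# Denser search grid: 20 points per wire-section length and 18 per
# bending angle, i.e., double the counts used for the training data.
n_search = 20 ** 3 * 18 ** 3
print(n_search)   # → 46656000 candidate designs

# With a trained model, every candidate's Cost is predicted and the
# design with the minimum Cost is selected; hypothetical values here:
predicted_cost = np.array([72.4, 45.8, 60.1, 46.3])
best = int(np.argmin(predicted_cost))
print(best, predicted_cost[best])   # → 1 45.8
```

Evaluating all 46,656,000 candidates with the DNN is feasible precisely because each prediction replaces a full NEC simulation.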
All the samples in the grid are then put through our well-trained DNN model to obtain the corresponding Cost values. To validate our ML model, we extracted several antenna designs with Costs of around 60, 70, and 80 from the grid search and compared their Costs derived from the ML model with those calculated by NEC simulation. Table 5 presents the design parameters and the corresponding Costs from the ML model and NEC simulation. The comparison in Table 5 reveals that the relative errors between the Costs from the ML model and NEC simulation are smaller than 2%, which verifies the accuracy of our ML model.
After searching the entire grid for antenna models with high AF using our ML model, we tabulated the design parameters, AFs, and Costs of the resulting antennas in Table 6, together with the corresponding AFs and Costs derived by NEC simulation. Table 6 ranks the antennas in terms of AF/Cost over the entire grid. The ranked antennas in Table 6 achieve an AF above about 77%, which means their Costs are smaller than about 46. In addition, the AFs from our ML model agree favorably with those from NEC simulation: the maximum deviation between the two AFs for the same model is smaller than 1%. This agreement indicates that our ML model is well trained and capable of accurately evaluating the impedance matching characteristic of a wire monopole antenna.
The performed validation shows that the ML technique can be applied to antenna design from a practical standpoint. We thus expect several advantages when the proposed ML-based method is applied to antenna design. First, we can extract approximately optimal design parameters from the grid search using the ML model at the start of the design process. Second, we can easily identify the key design parameters by analyzing the change of an evaluation function with respect to the variation of each individual design parameter. Finally, we can extend the proposed ML-based method to the design of antennas with more complicated structures.
In Table 6, the resulting antenna models can be divided into two geometric types: monopole antennas with a top load, such as the inverted-L antenna (ranks 6 and 9 in Table 6), and monopole antennas with a middle load (ranks 1~5, 7~8, and 10 in Table 6). To further study the two types, we examined the radiation pattern and the impedance matching characteristics of the antennas with ranks 1 and 6 in Table 6 using NEC and a commercial EM simulator [45]. From the simulated results, we confirmed that both antennas have a radiation pattern similar to that of an ideal monopole antenna. In addition, we present the simulated reflection coefficients of both antennas as a function of frequency in Figure 5, where the configuration of each antenna is illustrated in the insets. Although there is a small discrepancy in the reflection coefficient owing to the different analysis methods (NEC uses the method of moments and Femtet uses the finite element method), Figure 5 implies that both types of antennas operate well with small reflection coefficients in the frequency range of interest from 950 to 1050 MHz. This result also validates that our ML model can predict the impedance matching characteristic of a monopole antenna formed by a bent wire.
Based on the geometry depicted in the insets of Figure 5, we represent the monopole antennas with top load and middle load in Figure 6 as possible equivalent circuits [46,47,48,49,50]. We expressed the vertical and horizontal wire sections as series and parallel RLC circuits, respectively. In addition, we assigned the parallel capacitance (Ch2) caused by the open end of the antenna arm to the possible equivalent circuit. The series capacitance (Cv) is negligible due to the current flowing along the vertical wire section. Thus, the parallel capacitances (Ch1 and Ch2) derived from the loads and the open end dominantly influence the impedance matching because they cancel out the series inductance (Lv1) produced by the vertical wire section. In the case of a monopole antenna with a middle load, we found that the basic operating principle is similar to that of a monopole antenna with a top load. Although the contribution of the second vertical wire section (the third wire section) to the antenna impedance is small, we think it may finely control the induced current affecting the antenna impedance.

4. Conclusions

We designed a bent monopole antenna with a good impedance matching characteristic using an ML technique. We utilized the collected data, composed of the antenna design parameters and the corresponding Cost for each sample antenna, to train the DNN machine learning model. During ML training, we checked the maturity of our ML model by evaluating the MSE and R-squared values. After confirming that our ML model had an MSE of about 0.035 and an R-squared value above 0.99, we validated the model by comparing the actual AF and Cost values with the predicted ones. The comparison showed favorable agreement, with errors smaller than 1%. In addition, we compared the AFs and Costs of the antennas with good matching performance with those calculated by NEC simulation; this comparison also showed high accuracy in the estimation of antenna performance. Finally, we interpreted the antennas with a good impedance matching characteristic from the perspective of their structures and possible equivalent circuits. We found that the antennas with good impedance matching performance are divided into two types, with a top load and a middle load structure, and we concluded that both types adequately utilize vertically and horizontally extended wire sections to achieve the good impedance matching characteristic. The work accomplished in this research demonstrates the possibility of using the ML technique in antenna design. From a practical point of view, we expect that investigating various factors affecting antenna performance, such as ground size and fabrication errors in the bending angles and wire-section lengths, would further advance this research.

Author Contributions

Conceptualization, J.C. and Y.-H.K.; methodology, J.C., T.H.A.P. and Y.-H.K.; software, T.H.A.P.; validation, J.C., T.H.A.P. and Y.-H.K.; formal analysis, J.C., T.H.A.P. and Y.-H.K.; investigation, J.C. and T.H.A.P.; writing—original draft preparation, J.C. and T.H.A.P.; writing—review and editing, J.C. and Y.-H.K.; visualization, J.C. and T.H.A.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Korea National University of Transportation in 2021 and in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2021R1I1A3050649).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Balanis, C.A. Antenna Theory: Analysis and Design, 4th ed.; John Wiley and Sons, Inc.: Hoboken, NJ, USA, 2016. [Google Scholar]
  2. Kraus, J.D.; Marhefka, R.J. Antennas for All Applications, 3rd ed.; McGraw-Hill: New York, NY, USA, 2002. [Google Scholar]
  3. Trainotti, V.; Figueroa, G. Vertically polarized dipoles and monopoles, directivity effective height and antenna factor. IEEE Trans. Antennas Propag. 2010, 56, 379–409. [Google Scholar] [CrossRef]
  4. Cho, C.; Park, I.; Choo, H. Broadband electrically small antenna using two electromagnetically coupled radiators. Microw. Opt. Technol. Lett. 2010, 52, 1369–1372. [Google Scholar] [CrossRef]
  5. Jung, J.H.; Park, I. Electromagnetically coupled small broadband monopole antenna. IEEE Antennas Wirel. Propag. Lett. 2003, 2, 349–351. [Google Scholar] [CrossRef]
  6. Manohar, M.; Kshetrimayum, R.S.; Gogoi, A.K. Printed monopole antenna with tapered feed line, feedregion and patch for super wideband applications. IET Microw Antennas Propag. 2014, 8, 39–45. [Google Scholar] [CrossRef] [Green Version]
  7. Shi, Z.; Zheng, R.; Ding, J.; Guo, C. A novel pattern-reconfigurable antenna using switched printed elements. IEEE Antennas Wirel. Propag. Lett. 2012, 11, 1100–1103. [Google Scholar]
  8. Juan, Y.; Che, W.; Yang, W.; Chen, Z.N. Compact pattern-reconfigurable monopole antenna using parasitic strips. IEEE Antennas Wirel. Propag. Lett. 2017, 16, 557–560. [Google Scholar] [CrossRef]
  9. Chen, Z.N. Note on impedance characteristics of L-shaped wire monopole antenna. Microw. Opt. Technol. Lett. 2000, 26, 22–23. [Google Scholar] [CrossRef]
  10. Chen, H.-D. Compact broadband microstrip-line-fed sleeve monopole antenna for DTV application and ground plane effect. IEEE Antennas Wirel. Propag. Lett. 2008, 7, 497–500. [Google Scholar] [CrossRef]
  11. Olaode, O.O.; Palmer, W.D.; Joines, W.T. Characterization of meander dipole antennas with a geometry-based, frequency-independent lumped element model. IEEE Antennas Wirel. Propag. Lett. 2012, 11, 346–349. [Google Scholar] [CrossRef]
  12. Trainotti, V.; Dorado, L.A. Short low- and medium-frequency antenna performance. IEEE Antennas Propag. Mag. 2005, 47, 66–90. [Google Scholar] [CrossRef]
  13. Hristov, H.D.; Carrasco, H.; Feick, R. Bent inverted-F antenna for WLAN units. Microw. Opt. Technol. Lett. 2008, 50, 1505–1509. [Google Scholar] [CrossRef]
  14. Foltz, H.D.; McLean, J.S.; Crook, G. Disk-loaded monopoles with parallel strip elements. IEEE Trans. Antennas Propag. 1998, 46, 1894–1896. [Google Scholar] [CrossRef]
  15. Hamid, M.A.K.; Boerner, W.M.; Shafai, L.; Towaij, S.J.; Alsip, W.P.; Wilson, G.J. Radiation characteristics of bent-wire antennas. IEEE Trans. Electromagn. Compat. 1970, EMC–12, 106–111. [Google Scholar] [CrossRef]
  16. Liu, X.; Christopoulos, C.; Thomas, D.W.P. Prediction of radiation losses and emission from a bent wire by a network model. IEEE Trans. Electromagn. Compat. 2006, 48, 476–484. [Google Scholar] [CrossRef]
  17. Choo, J.; Yoo, S.; Choo, H. Design of a ceiling-mounted reader antenna to maximize the readable volume coverage ratio for an indoor UHF RFID application. Microw. Opt. Technol. Lett. 2017, 59, 2136–2141. [Google Scholar] [CrossRef]
  18. Geyi, W.; Rao, Q.; Wang, D. Handset antenna design: Practice and theory. Progr. Electromagn. Res. 2008, 80, 123–160. [Google Scholar]
  19. McLean, J.S. A re-examination of the fundamental limits on the radiation Q of electrically small antennas. IEEE Trans. Antennas Propag. 1996, 44, 672–676. [Google Scholar] [CrossRef]
  20. Geyi, W. Physical limitations of antenna. IEEE Trans. Antennas Propag. 2003, 51, 2116–2123. [Google Scholar] [CrossRef] [Green Version]
  21. Choo, H.; Rogers, R.L.; Ling, H. Design of electrically small wire antennas using a Pareto genetic algorithm. IEEE Trans. Antennas Propag. 2005, 53, 1038–1046. [Google Scholar] [CrossRef]
  22. Robinson, J.; Rahmat-Samii, Y. Particle swarm optimization in electromagnetics. IEEE Trans. Antennas Propag. 2004, 52, 397–407. [Google Scholar] [CrossRef]
  23. Sharma, Y.; Zhang, H.H.; Xin, H. Machine learning techniques for optimizing design of double T-shaped monopole antenna. IEEE Trans. Antennas Propag. 2020, 68, 5658–5663. [Google Scholar] [CrossRef]
  24. Cui, L.; Zhang, Y.; Zhang, R.; Liu, Q.H. A modified efficient KNN method for antenna optimization and design. IEEE Trans. Antennas Propag. 2020, 68, 6858–6866. [Google Scholar] [CrossRef]
  25. Xiao, L.-Y.; Shao, W.; Jin, F.-L.; Wang, B.-Z. Multiparameter modeling with ANN for antenna design. IEEE Trans. Antennas Propag. 2018, 66, 3718–3723. [Google Scholar] [CrossRef]
  26. Mitchell, T. Machine Learning; McGraw Hill: New York, NY, USA, 1997; ISBN 0-07-042807-7. OCLC 36417892. [Google Scholar]
  27. Mallory, H.; Shaun, V.A.; Mao, G.F.; Wang, J. An Overview of Google Brain and Its Applications. In Proceedings of the 2018 International Conference on Big Data and Education, Honolulu, HI, USA, 9–11 March 2018; pp. 72–75, ACM ISBN 978-1-4503-6358-7. [Google Scholar] [CrossRef]
  28. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems 25 (NIPS’2012), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  29. What Is a GPU? Available online: https://www.intel.com/content/www/us/en/products/docs/processors/what-is-a-gpu.html (accessed on 29 September 2021).
  30. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–6. [Google Scholar]
  31. Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1701–1708. [Google Scholar]
  32. Delalleau, O.; Bengio, Y. Shallow vs. deep sum-product networks. In Proceedings of the Advances in Neural Information Processing Systems 24 (NIPS’2011), Granada, Spain, 12–14 December 2011; pp. 666–674. [Google Scholar]
  33. Sykes, A.O. An Introduction to Regression Analysis. Am. Stat. 1993, 61, 101. [Google Scholar]
  34. Haykin, S.S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 1999. [Google Scholar]
  35. Liu, W.; Wang, Z.; Liu, X.H.; Zeng, N.Y.; Liu, Y.R.; Alsaadi, F.E. A survey of deep neural network architectures and their applications. Neurocomputing 2017, 234, 11–26. [Google Scholar] [CrossRef]
  36. Sammut, C.; Webb, G.I. Encyclopedia of Machine Learning; Springer Science & Business Media: Boston, MA, USA, 2011. [Google Scholar]
  37. Chicco, D.; Warrens, M.J.; Jurman, G. The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE, and RMSE in regression analysis evaluation. PeerJ Comput. Sci. 2021. [Google Scholar] [CrossRef] [PubMed]
  38. Botchkarev, A. A new typology design of performance metrics to measure errors in machine learning regression algorithms. Interdiscip. J. Inf. Knowl. Manag. 2019, 14, 45–76. [Google Scholar] [CrossRef] [Green Version]
  39. Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
  40. Feng, J.L.; Lu, S.N. Performance analysis of various activation functions in artificial neural networks. J. Phys. Conf. Ser. 2019, 1237, 022030. [Google Scholar] [CrossRef]
  41. Kingma, D.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  42. Özgür, A.; Nar, F. Effect of Dropout layer on Classical Regression Problems. In Proceedings of the 2020 28th Signal Processing and Communications Applications Conference (SIU), Gaziantep, Turkey, 5–7 October 2020; pp. 1–4. [Google Scholar]
  43. Tibshirani, R. Regression Shrinkage and Selection via the lasso. J. R. Statist. Soc. B 1996, 58, 267–288. [Google Scholar] [CrossRef]
  44. Hocking, R.R. Developments in Linear Regression Methodology: 1959–1982. Technometrics 1983, 26, 219–230. [Google Scholar]
  45. Femtet. Available online: https://www.muratasoftware.com/en (accessed on 29 September 2021).
  46. Simpson, T.L. A wideband equivalent circuit electric dipoles. IEEE Trans. Antennas Propag. 2020, 68, 7636–7639. [Google Scholar] [CrossRef]
  47. Smith, G.S. Analysis of Hertz’s experimentum crucis on electromagnetic waves. IEEE Antennas Propag. Mag. 2016, 58, 96–108. [Google Scholar] [CrossRef]
  48. Simpson, T.L. Revisiting Heinrich Hertz’s 1888 laboratory. IEEE Antennas Propag. Mag. 2018, 60, 132–140. [Google Scholar] [CrossRef]
  49. Pozar, D.M. Microwave Engineering, 4th ed.; John Wiley and Sons, Inc.: Hoboken, NJ, USA, 2011. [Google Scholar]
  50. Gupta, K.G.; Garg, R.; Bahl, I.; Bhartia, P. Microstrip linEs and Slotlines, 2nd ed.; Artech House, Inc.: Norwood, MA, USA, 1996. [Google Scholar]
Figure 1. Geometry of the bent wire monopole antenna.
Figure 2. DNN structure.
Figure 3. The block diagram of the proposed method.
Figure 4. Testing set performance.
Figure 5. Impedance matching characteristics of the antennas with ranks 1 and 6 in Table 6.
Figure 6. Possible equivalent circuits corresponding to two types of antennas in Table 6: (a) possible equivalent circuit of the antennas with rank 6; (b) possible equivalent circuit of the antennas with rank 1.
Table 1. Locations of the feed point, the bending points, and the end point of the bent wire monopole antenna.
| Bending Point | x (m) | y (m) | z (m) |
|---|---|---|---|
| P0 (Feed point) | x0 = 0 | 0 | 0 |
| P1 | x1 = 0 | 0 | z1 = 0.007 |
| P2 | x2 = x1 + l1 cos θ1 | 0 | z2 = z1 + l1 sin θ1 |
| P3 | x3 = x2 + l2 cos θ2 | 0 | z3 = z2 + l2 sin θ2 |
| P4 (End point) | x4 = x3 + l3 cos θ3 | 0 | z4 = z3 + l3 sin θ3 |
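The coordinate recursion in Table 1 is straightforward to evaluate in code. The sketch below (the function name is ours) walks the wire from P1, adding each section's projection in the xz-plane; y remains zero throughout, as in Table 1.

```python
from math import cos, sin, radians

def bend_points(lengths, angles_deg, z1=0.007):
    """Coordinates of P1..P4 following the recursion in Table 1.

    lengths    -- section lengths l1, l2, l3 in metres
    angles_deg -- bending angles theta1..theta3 in degrees
    z1         -- height of P1 above the feed point (7 mm here)
    """
    points = [(0.0, 0.0, z1)]           # P1
    x, z = 0.0, z1
    for l, th in zip(lengths, angles_deg):
        x += l * cos(radians(th))       # x_{n+1} = x_n + l_n cos(theta_n)
        z += l * sin(radians(th))       # z_{n+1} = z_n + l_n sin(theta_n)
        points.append((x, 0.0, z))      # P2, P3, P4
    return points
```

As a sanity check, three 10 mm sections all bent at 90° reduce to a straight vertical wire whose end point P4 sits at x = 0, z = 0.037 m.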
Table 2. Conditions of design parameters and interesting frequency for training data acquisition.
| Parameter | Range | Number of Points |
|---|---|---|
| Frequency | 950–1050 MHz | 201 |
| Length of wire section (ln) | 0.03–0.18 λ0 (9–54 mm) | 10 |
| Bending angle (θn) at bending point Pn | 10–90° | 9 |
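If the training data are gathered over every combination of the grid points in Table 2 (a full-factorial sweep is our assumption; the table only lists per-parameter point counts), the dataset size follows directly:

```python
# Grid sizes taken from Table 2; combining them full-factorially
# is an assumption, not something the table states explicitly.
N_LENGTH_POINTS = 10   # per section length l1, l2, l3
N_ANGLE_POINTS = 9     # per bending angle theta1, theta2, theta3
N_FREQ_POINTS = 201    # 950-1050 MHz sweep

designs = N_LENGTH_POINTS ** 3 * N_ANGLE_POINTS ** 3   # candidate geometries
samples = designs * N_FREQ_POINTS                      # (geometry, frequency) pairs
```

Under that assumption the grid contains 729,000 candidate geometries, i.e. 146,529,000 (geometry, frequency) sample points.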
Table 3. The structure of the proposed DNN model.
| Layer | Layer Type | Number of Units |
|---|---|---|
| Input | Input | – |
| Dense 1 | Hidden | 512 |
| Dense 2 | Hidden | 256 |
| Dense 3 | Hidden | 128 |
| Dense 4 | Hidden | 64 |
| Dense 5 | Hidden | 32 |
| Dense 6 | Hidden | 16 |
| Dense 7 | Hidden | 8 |
| Dense 8 | Output | 1 |
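The layer widths in Table 3 translate into a plain feed-forward pass; the sketch below uses NumPy only. The input width of 7 (three lengths, three angles, one frequency), the ReLU hidden activations, and the linear regression output are our assumptions, since the table does not specify them.

```python
import numpy as np

# Hidden and output widths from Table 3; the input width (7) is assumed.
WIDTHS = [7, 512, 256, 128, 64, 32, 16, 8, 1]

def init_params(rng=np.random.default_rng(0)):
    """He-style random weights and zero biases for each dense layer."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(WIDTHS[:-1], WIDTHS[1:])]

def forward(x, params):
    """ReLU on the seven hidden layers, linear output for regression."""
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)   # ReLU activation on hidden layers
    return x
```

A batch of five 7-dimensional inputs produces a (5, 1) output, one predicted scalar per design point.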
Table 4. Comparison with other machine learning techniques.
| Method | Mean Squared Error | R-Squared Score |
|---|---|---|
| Lasso | 753.498 | 0.0533 |
| Linear regression | 521.797 | 0.3444 |
| Ours (DNN) | 0.0353 | 0.999 |
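Both columns of Table 4 follow from the standard definitions of the two metrics; a minimal sketch:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average of the squared residuals."""
    return float(np.mean((y_true - y_pred) ** 2))

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

A perfect prediction gives MSE = 0 and R² = 1, which is why values near 0 and 1, respectively, indicate an accurate model.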
Table 5. Design parameters of the ranked antenna models in terms of AF when searching antenna models on the entire grid.
| l1 (mm) | l2 (mm) | l3 (mm) | θ1 (°) | θ2 (°) | θ3 (°) | AF (%)/Cost (ML Model) | AF (%)/Cost (NEC Simulation) |
|---|---|---|---|---|---|---|---|
| 23.2 | 16.1 | 27.9 | 75.9 | 71.2 | 19.4 | 70.1/60.0 | 69.6/61.0 |
| 32.7 | 18.5 | 18.5 | 71.2 | 75.9 | 42.9 | 65.2/70.0 | 65.6/69.1 |
| 35.1 | 18.5 | 18.5 | 47.6 | 90.0 | 85.3 | 60.2/80.0 | 60.1/80.2 |
Table 6. Design parameters of the ranked antenna models in terms of AF when searching antenna models on the entire grid.
| Rank | l1 (mm) | l2 (mm) | l3 (mm) | θ1 (°) | θ2 (°) | θ3 (°) | AF (%)/Cost (ML Model) | AF (%)/Cost (NEC Simulation) |
|---|---|---|---|---|---|---|---|---|
| 1 | 49.3 | 9.0 | 9.0 | 90.0 | 14.7 | 90.0 | 78.76/42.70 | 78.58/43.05 |
| 2 | 49.3 | 9.0 | 9.0 | 90.0 | 10.0 | 85.3 | 78.71/42.79 | 78.49/43.24 |
| 3 | 49.3 | 9.0 | 9.0 | 90.0 | 14.7 | 85.3 | 78.70/42.80 | 78.46/43.30 |
| 4 | 49.3 | 9.0 | 9.0 | 90.0 | 19.4 | 90.0 | 78.66/42.89 | 78.49/43.24 |
| 5 | 49.3 | 9.0 | 9.0 | 90.0 | 10.0 | 80.6 | 78.66/42.89 | 78.39/43.44 |
| 6 | 32.7 | 23.2 | 11.4 | 90.0 | 90.0 | 10.0 | 78.63/42.95 | 77.90/44.42 |
| 7 | 49.3 | 9.0 | 9.0 | 90.0 | 19.4 | 85.3 | 78.63/42.96 | 78.33/43.56 |
| 8 | 49.3 | 9.0 | 9.0 | 90.0 | 14.7 | 80.6 | 78.62/42.98 | 78.32/43.58 |
| 9 | 35.1 | 20.8 | 11.4 | 90.0 | 90.0 | 10.0 | 78.61/42.99 | 77.97/44.28 |
| 10 | 49.3 | 9.0 | 9.0 | 90.0 | 24.1 | 90.0 | 78.61/43.00 | 78.30/43.62 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Choo, J.; Pho, T.H.A.; Kim, Y.-H. Machine Learning Technique to Improve an Impedance Matching Characteristic of a Bent Monopole Antenna. Appl. Sci. 2021, 11, 10829. https://0-doi-org.brum.beds.ac.uk/10.3390/app112210829

