Article

Sensor Fusion to Estimate the Depth and Width of the Weld Bead in Real Time in GMAW Processes

by Guillermo Alvarez Bestard 1,2,*, Renato Coral Sampaio 3, José A. R. Vargas 4 and Sadek C. Absi Alfaro 5

1 Postgraduate Program in Mechatronic Systems (PPMEC), Faculty of Technology, University of Brasilia, Campus Darcy Ribeiro, Brasilia 70910-900, Brazil
2 Department of Automatic Control, Institute of Cybernetics, Mathematics and Physics, Havana 10400, Cuba
3 Software Engineering Group, Faculty of Gama, University of Brasilia, Brasilia 72444-240, Brazil
4 Department of Electrical Engineering, Faculty of Technology, University of Brasilia, Brasilia 70910-900, Brazil
5 Department of Mechanical and Mechatronic Engineering, Faculty of Technology, University of Brasilia, Brasilia 70910-900, Brazil
* Author to whom correspondence should be addressed.
Submission received: 4 December 2017 / Revised: 19 February 2018 / Accepted: 5 March 2018 / Published: 23 March 2018
(This article belongs to the Section Physical Sensors)

Abstract: The arc welding process is widely used in industry, but its automatic control is limited by the difficulty of measuring the weld bead geometry and closing the control loop on the arc, which presents adverse environmental conditions. To address this problem, this work proposes a system to capture the welding variables and send stimuli to a conventional Gas Metal Arc Welding (GMAW) process with a constant-voltage power source, allowing weld bead geometry estimation under open-loop control. Dynamic models of weld bead depth and width estimators are implemented by fusing thermographic data, welding current and welding voltage in a multilayer perceptron neural network. The estimators were trained and validated off-line with data from three sources: a novel algorithm developed to extract features from the infrared image, a laser profilometer implemented to measure the external bead dimensions, and an image processing algorithm that measures depth from a longitudinal cut of the weld bead. These estimators are optimized for embedded devices and real-time processing and were implemented on a Field-Programmable Gate Array (FPGA) device. Experiments to collect data, train and validate the estimators are presented and discussed. The results show that the proposed method is useful in industrial and research environments.

1. Introduction

Arc welding is extensively used in industrial applications. Among all welding methods, the Gas Metal Arc Welding (GMAW) process is used in the automotive, aeronautic and naval industries. In the energy sector, it is applied in the manufacturing of thermoelectric, hydroelectric, nuclear and wind power plants. It is also used in the production of tools and instruments for the scientific, industrial, domestic and medical sectors. Other applications include the construction of civil and military structures and, more specifically, the manufacturing of pipes for transporting fluids such as gas, oil and water. Among GMAW's strengths are its versatility and the high-quality weld beads it yields.
In this process, the geometry of the weld bead is critical and commonly used for quality validation, but online measurement is challenging due to the extreme environmental conditions imposed by the welding arc [1]. These characteristics limit process automation, since the control loop cannot be closed with classical instrumentation and requires non-contact measuring techniques such as video or thermographic cameras, microphones, spectrometers, X-rays and ultrasound.
The weld bead geometry includes width, reinforcement and depth of penetration [2]. These parameters are governed by several process factors such as wire feed speed, welding speed, welding current, welding voltage and contact tip to work distance.
The sensing and analysis techniques used in welding processes are reviewed in [3]. Among all studied references, only 15% use dynamic process models, due to the difficulty of obtaining a continuous data set of weld bead depth. For many control systems, a static model cannot provide satisfactory performance. It should also be noted that vision, infrared thermography and sensor fusion techniques, combined with neural networks, image processing and statistical analysis, are the most commonly used in recent works.
Vision sensing techniques are a good solution to measure the weld bead width and reinforcement but are not suitable for measuring the weld bead depth. One common method uses a laser beam that draws one or more lines or dots on the weld bead while a camera captures the image created by them. A band-pass optical filter is used to isolate the lines emitted by the laser. This technique uses special lenses, image processing algorithms and triangulation techniques. The final result is a two-dimensional or three-dimensional profile of the weld bead or weld pool with geometric information [4,5,6,7,8]. This system is referred to as profilometer, profile sensor or laser scanner.
Infrared thermography provides a non-contact measurement of the molten weld pool infrared emission that is strongly correlated with the temperature. The infrared sensing of arc welding processes has been extensively investigated and results prove that the information provided by this measurement technique can help determinate the weld bead depth and width [9,10,11,12,13,14]. Several patents have been recognized in the United States of America to detect and control total penetration using infrared information [15,16,17].
Sensor fusion techniques are used to describe the static and dynamic behaviour of process variables [18,19,20,21]. These techniques are increasingly being used to estimate magnitudes that are impossible to measure directly. Sensor fusion is based on several other areas of knowledge, such as statistics and artificial intelligence, but it finds new applications in this field.
By combining different sensing technologies to take advantage of each one's strengths, it is possible to obtain better sensing results. For example, weld pool vision and infrared information can be combined to obtain the weld pool dimensions with more accuracy, or the weld bead width (from a vision system) can be combined with back-bead thermographic information (from a pyrometer) to estimate the bead penetration [22,23]. Nowadays, because of these and other advantages, sensor fusion systems have been studied to obtain effective weld pool and weld bead estimations [24,25,26,27,28,29,30,31].
Despite the advantages and success of these methods in scientific environments, their industrial application is rare or non-existent. Related works do not show the application of embedded devices to bead geometry control in arc welding processes, and an overview shows that the implementation of welding geometry control systems finds little support in the embedded devices area. Other welding-related works were found, such as weld pool volume control [32], image processing [33], wire feed control [34], defect detection [35,36,37] and arc signal monitoring [38,39]. This indicates that implementing sensor fusion algorithms in embedded devices to estimate the weld bead geometry can become a new field of application for these technologies.
This paper presents a system developed to stimulate a GMAW conventional process and collect values of the arc variables, infrared thermography and weld bead geometry. The GMAW process uses a constant voltage power source. The system allows real-time open-loop control of the process and stores experimental data on a computer. Image processing algorithms were developed to extract features from the thermographic data, measure the external weld bead dimensions and depth by making a longitudinal cut in the weld bead. This information is used to train a multilayer perceptron neural network, which is based on a sensor fusion algorithm to estimate the depth and width of the weld bead that can be used in real-time. The inputs of the neural network add previous measurements and estimated values to capture the process dynamics. The estimators are implemented, simulated and tested in a Field-Programmable Gate Array (FPGA) embedded system. The actual application area is automatic control, arc welding and sensor fusion research.
In this context, we can cite three main contributions of this work. The first is a novel modelling method that improves the model response by adding dynamic process information, rather than relying on the static models used in most works found in the literature. This is made possible by the macrographic image processing algorithm developed in this work, which collects the continuous data set of weld bead depth needed for the dynamic model.
The second main contribution is a novel method, also used in modelling, that obtains the thermographic features and supplies information about the amount and spatial distribution of energy in the workpiece. This method does not use the second derivative to calculate the thermographic width, reducing errors when multiple inflexion points are found. Also, the real thermographic curve is used for volume calculation instead of an ideal Gaussian curve. This approach improves model accuracy and simplifies the calculations, reducing them to sum operations; other works represent the thermographic profile with a Gaussian curve and need more complex equations.
The third main contribution is the floating-point FPGA implementation of the artificial neural network (ANN) equations, which eliminates the need for normalization steps by incorporating them into the weights of the ANN. Tests of the estimators on the FPGA device show a low response time, suitable for real-time use.

2. Materials and Methods

In this work, a data acquisition and open-loop control system was developed. The next sections describe this system and the analysis methods to measure the relevant data and estimate the depth and width of the weld bead.

2.1. Data Acquisition and Open-Loop Control System for GMAW Welding Process

This experimental system was developed to collect values of selected variables and stimulate the GMAW process, allowing an open-loop control and data storage in a computer.
The variables measured by the system are the welding voltage, welding current, infrared thermography and welding torch (or workpiece) position. The system sends several set points to the welding power source, such as the welding voltage and wire feed speed set points, and controls the welding speed. Nevertheless, the control is open-loop, because the system does not use the measurements to change the welding parameters; it follows a sequence previously defined by the system operator.
The data acquisition and open-loop control system has six main components: the welding power source, the welding table, the thermographic camera, the laser scanner, the data acquisition and control interface and the computer where data acquisition and stimulus sequence design software are executed [40]. The components, the data flow and a schematic side view are shown in Figure 1.
The welding power source used is the Inversal 450 [41]. The communication algorithm for remote control of the welding power source and data acquisition was developed, based on the RS232 protocol [42]. A state machine, implemented in the control interface, defines the operation parameters and obtains the arc variables measurements (welding voltage and welding current) and status of the welding power source. The operation sequence for controlling the welding power source and the data acquisition software was also developed [40]. A state machine is used to continuously repeat this sequence in the main loop of the interface.
The electro-mechanical system of the flat welding table is composed of a linear axis with 5 mm of linear travel per revolution and a stepper motor of 1.8° per step. The system is used to move the piece, keeping the welding torch fixed. It was developed by students of the Automation and Control Group (GRACO) of UnB. It has a stepper motor control circuit [43] with signals for modifying the step time (speed) and the direction. Other signals show the status of the system and protect against overload [44]. The structure supports 15 kg of load and up to 15 mm/s of speed. The support structure for the thermographic camera was developed in this work and is shown in Figure 1b.
The data acquisition and control interface was developed to synchronize the movement of the piece with the welding power source operation parameters and to obtain the arc variable measurements in real time [40]. Figure 2 shows the simplicity and small size of the interface (most of its complexity resides in the firmware), as well as the low cost of its components.
A microcontroller PIC18F2550 from Microchip Technology [45] (Arizona, USA, imported to Brazil by Ichip Tecnologia Ltda—ME) controls the operation. It first receives from the computer the start and end points of the weld, and the sequence of values to be sent to the welding source in each position of the piece. Then, after starting, the interface has an autonomous and independent operation and sends the measurements obtained in each position in real-time to the computer. The acquisition program (synchronized with the interface) obtains the thermographic data regardless of the clock or priorities of the operating system. The USB communication frames between the interface and the computer were created, based on ASCII characters. This allows the control of the welding power source, the welding table, the interface and the operation sequence of the process.
The thermographic camera ThermoVision A40 [46] from FLIR Systems (Oregon, USA, imported to Brazil by FLIR Systems Brazil) is used to obtain the temperature values of the molten weld puddle. It employs an uncooled microbolometer focal plane array semiconductor sensor; it has a temperature range between −40 °C and 2000 °C, a spectral range between 7.5 and 13 μm, a sampling frequency of 120 Hz (120 frames per second) and a maximum resolution of 16-bit monochrome and 8-bit colour. The infrared data are obtained through a Firewire interface [47] of the computer in matrix format, with the temperature of each pixel of the image. The camera is fixed at 50 mm from the molten weld pool at a 45° angle from the horizontal plane of the welding table (see Figure 1b,c).
The emissivity of the molten pool is not constant, so the temperature measurements using this method should be considered approximate. The accurate measuring of the absolute values of temperatures is not considered significant for the purposes of this work.
The data acquisition and stimulus sequence design software was developed using the libraries provided by the thermographic camera manufacturer [48]. The software Thermo Data Welding (TDW), developed in this work, collects and stores the data of the welding process and the operation of the whole system in text files. The data are collected using the communication links with the data acquisition interface and the thermographic camera (see Figure 1a). Figure 3 shows the primary form of TDW.
The TDW is divided into several modules oriented to specific functions: thermographic camera configuration and verification; welding power source configuration (inductance, pre-gas and post-gas times, gap wire-arc time, wire diameter, contact tip to work distance, type and thickness of the material, composition and flow rate of the shielding gas); adjustment of the initial position of the piece on the welding table; and creation of the welding sequence (start and end positions, stimuli to be sent to the welding power source and the sampling period). Three files are created in each experiment (see Figure 3, bottom) that store the system configuration, the stimulus sequence and the measurements collected.

2.2. Infrared Image Features Extraction

The thermographic matrix (supplied by the thermographic camera) is processed with an algorithm developed to obtain the following characteristics at each sampling time: thermographic peak (Tp), base plane (Tb), thermographic curve width (Tw), thermographic area (Ta) and thermographic volume (Tv). The thermographic image is taken with the weld pool area as the physical reference, but the welding arc and the electrode are also included. The algorithm aims to obtain information about the amount and spatial distribution of the energy supplied to the workpiece in order to estimate the depth and width of the weld bead. An example of the algorithm output for one sample is shown in Figure 4.
The data processing algorithm applies a moving average filter of 3 × 3 pixels to the infrared matrix. The thermographic peak or maximum value of the matrix is calculated. The base plane is calculated as the average of 10% of the values in the left and right edges (left and right side, see Figure 4b). The boundary plane is 10% above the base plane.
The sum of active pixels in the intersection plane between the thermographic surface and the boundary plane is the area. The sum of the thermographic values within the intersection plane is the thermographic volume (see Figure 4d).
In the image shown in Figure 4b, it is possible to observe several inflexion points at pixels 17 and 19. This characteristic is observed in many images of the thermographic profile and makes second-derivative analysis unreliable for determining the width of the thermographic profile. In this work, to prevent this problem, the thermographic width is taken as the maximum count of active pixels along the front axis (front side) of the intersection plane, as shown in Figure 4d.
This algorithm was developed using Matlab scripts but is very simple and can be implemented in an FPGA or microcontroller device.
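As an illustration, the feature-extraction steps above can be sketched as follows. This is a minimal sketch, not the authors' Matlab code: the function and variable names are assumed, and "the boundary plane is 10% above the base plane" is interpreted here as 1.1 times the base plane.

```python
import numpy as np

def infrared_features(T):
    """Features of one thermographic frame T (temperature matrix, rows x cols).

    Illustrative sketch of the algorithm in Section 2.2; names and the
    boundary-plane interpretation (1.1 * base plane) are assumptions.
    """
    rows, cols = T.shape
    # 3 x 3 moving-average filter (edge-replicated borders)
    pad = np.pad(T.astype(float), 1, mode="edge")
    F = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            F[r, c] = pad[r:r + 3, c:c + 3].mean()

    Tp = F.max()                                    # thermographic peak
    n = max(1, int(0.10 * cols))                    # 10% of columns per edge
    Tb = np.concatenate([F[:, :n], F[:, -n:]], axis=1).mean()  # base plane
    active = F > 1.10 * Tb                          # above the boundary plane
    Ta = int(active.sum())                          # thermographic area
    Tv = float(F[active].sum())                     # thermographic volume
    Tw = int(active.sum(axis=1).max()) if Ta else 0 # thermographic width
    return Tp, Tb, Ta, Tv, Tw
```

As the text notes, this reduces the volume and area computations to counts and sums over the active pixels, which maps naturally onto an FPGA or microcontroller.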

2.3. Laser Scanner and Macrographic Analysis to Obtain the Geometric Profile of the Weld Bead

The laser scanner and the macrographic analysis algorithm are hardware and software developed to obtain the profile of the weld bead after completion of the welding process. The laser scanner obtains the three-dimensional external geometry and the macrographic analysis algorithm obtains the weld bead depth profile.
The laser scanner (or profilometer) operation is based on a triangulation technique and image processing algorithms developed specifically for this system. It draws a line on the weld bead with a laser, and a low-resolution camera (600 × 400 pixel, 0.24 MP webcam) with USB communication captures the image. The camera captures only the laser line, because an optical filter matched to the laser emission wavelength blocks other light sources. Figure 5 shows this process.
To operate the scanner, the workpiece remains fixed on the welding table after the welding finishes. The control interface then moves the piece into the scanning position. Software developed using Matlab/GUI acquires the image from the camera and applies the image processing algorithm. The algorithm filters the image and completes the missing data, then rotates the image to find the baseline corresponding to the base metal surface and calculates the reinforcement and width of the bead. Figure 5b shows the image obtained by the camera for the current position in red and the calculated profile in white.
The system saves pixel resolution, position information, reinforcement, width and profile of the weld bead for each position in a text file. These data are useful for three-dimensional reconstruction and statistical analysis of the weld bead geometry. The system can scan a weld bead up to approximately 20 mm wide and 10 mm high. The calibration is done with an automatic procedure using pieces with known dimensions. A resolution of 0.035 mm of width and 0.05 mm of height has been verified with an error of less than 5% of the full scale. This is an acceptable low-cost solution for research purposes despite its slow performance and low image resolution.
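The dimension computation from one scanned cross-section can be sketched as below. This is a hedged illustration, not the authors' implementation: the function name and height threshold are assumptions; the 0.035 mm pixel width matches the scanner's verified resolution quoted above.

```python
import numpy as np

def bead_dimensions(profile_mm, px_width_mm=0.035, height_thresh_mm=0.05):
    """Reinforcement and width from one cross-section height profile.

    profile_mm: bead height above the base-metal surface (mm) per pixel
    column. The threshold equals the scanner's verified height resolution;
    all names are illustrative.
    """
    reinforcement = float(profile_mm.max())      # peak height above baseline
    above = profile_mm > height_thresh_mm        # columns belonging to the bead
    if above.any():
        idx = np.flatnonzero(above)
        width = (idx.max() - idx.min() + 1) * px_width_mm
    else:
        width = 0.0
    return reinforcement, width
```

Repeating this per scan position yields the width and reinforcement profiles stored in the text file described above.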
The macrographic analysis algorithm was developed using Matlab scripts and needs an image of the welding workpiece. The workpiece is cut in a longitudinal direction, that is, in the direction of the torch movement (see Figure 6a,b). If a misalignment of the cut tool with the weld bead is detected, it may be necessary to perform more than one cut-analysis operation. In these cases, the maximum value of each position is used.
The cut piece is polished and etched using Nital solution (usually 2.5%) to clearly show the weld bead penetration, as can be seen in Figure 6b. An image processing algorithm analyses the image of this side of the piece, then corrects misalignment and filters the image for the border detection procedure. The base-line (shown in red in Figure 6c) is detected. The thickness and length of the piece are known and are used to calculate the scale coefficients. The weld bead penetration limit zone is detected (shown in yellow in Figure 6c), and the difference between it and the base-line gives the weld bead depth profile. The profile data are filtered, and the missing values are filled with a linear interpolation method.
With the traditional transversal cut, it is not possible to obtain enough information for a dynamic model, because a continuous data set of penetration cannot be obtained. The macrographic analysis algorithm presented here provides such a continuous penetration data set for the dynamic model.
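The per-column depth extraction can be sketched as below. This is a simplified illustration with assumed names (no misalignment correction or border filtering), not the authors' Matlab scripts; it takes a binarized macrograph where True marks the etched weld metal.

```python
import numpy as np

def depth_profile(mask, plate_thickness_mm, baseline_row=None):
    """Per-column weld depth (mm) from a binarized longitudinal macrograph.

    mask: boolean image, True = weld metal, rows run from the workpiece
    surface downward. The scale comes from the known plate thickness, as
    described in the text. Names are illustrative.
    """
    rows, cols = mask.shape
    if baseline_row is None:
        # baseline: first image row containing weld metal (base-metal surface)
        baseline_row = int(np.argmax(mask.any(axis=1)))
    mm_per_px = plate_thickness_mm / rows        # scale from known thickness
    depth = np.full(cols, np.nan)
    for c in range(cols):
        idx = np.flatnonzero(mask[:, c])
        if idx.size:                             # penetration limit per column
            depth[c] = (idx.max() - baseline_row) * mm_per_px
    # fill missing columns by linear interpolation, as in the text
    ok = ~np.isnan(depth)
    depth[~ok] = np.interp(np.flatnonzero(~ok), np.flatnonzero(ok), depth[ok])
    return depth
```

The resulting continuous depth profile is what makes the dynamic model in Section 2.4 possible.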

2.4. Sensor Fusion to Estimate the Depth and Width of the Weld Bead

A cooperative, data-in/data-out and centralized sensor fusion [18] algorithm was developed, based on a Multilayer Perceptron (MLP) artificial neural network, to estimate two characteristics of the weld bead geometry. The inputs of the neural network include previous measurements and estimated values to capture the process dynamics.
The MLP is composed of an input layer, one or more hidden layers and an output layer. This work uses only one hidden layer and a single output as shown in Figure 7.
For better training performance, all inputs have to be normalized to the interval [−1, 1] by Equation (1). In this equation, $x_i$, $i = 1, \dots, n$, are the $n$ inputs and $x'_i$ the corresponding normalized values.
$$x'_i = 2\left(\frac{x_i - \min(x_i)}{\max(x_i) - \min(x_i)}\right) - 1 \qquad (1)$$
In mathematical terms, Equation (2) represents the input of a neuron in the hidden layer where wi,j is the weight of neuron j associated with the input i, and bj is the bias of neuron j. Equation (3) expresses the output of the hidden layer where f is the tansig activation function presented in Equation (4).
$$s_{1,j} = \left(\sum_{i=1}^{n} w_{i,j}\, x'_i\right) + b_j \qquad (2)$$

$$y'_j = f(s_{1,j}) = f\!\left[\left(\sum_{i=1}^{n} w_{i,j}\, x'_i\right) + b_j\right] \qquad (3)$$

$$f = \mathrm{tansig}(s) = \frac{2}{1 + e^{-2s}} - 1 \qquad (4)$$
Equation (5) represents the output layer of the neural network where y’j and vj are respectively the output and weight associated with neuron j of the hidden layer, and bout is the output bias.
$$y'_{out} = \left(\sum_{j=1}^{m} v_j\, y'_j\right) + b_{out} \qquad (5)$$
Finally, the output y’out has to be restored to the original interval (inverse normalization), which is done through Equation (6).
$$y_{out} = \left(\frac{y'_{out} + 1}{2}\right)\left[\max(y_{out}) - \min(y_{out})\right] + \min(y_{out}) \qquad (6)$$
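A minimal sketch of the full forward pass, Equations (1) to (6), is given below. This assumes a NumPy environment; the function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def tansig(s):
    # Equation (4): tansig(s) = 2 / (1 + exp(-2s)) - 1
    return 2.0 / (1.0 + np.exp(-2.0 * s)) - 1.0

def mlp_estimate(x, x_min, x_max, W, b, v, b_out, y_min, y_max):
    """One forward pass of the single-hidden-layer MLP.

    W is (n_inputs x n_hidden), b is (n_hidden,), v is (n_hidden,);
    x_min/x_max and y_min/y_max are the normalization ranges.
    """
    # Eq. (1): normalize the inputs to [-1, 1]
    xn = 2.0 * (x - x_min) / (x_max - x_min) - 1.0
    # Eqs. (2)-(3): hidden layer with tansig activation
    y_hidden = tansig(W.T @ xn + b)
    # Eq. (5): linear output layer
    y_out_n = v @ y_hidden + b_out
    # Eq. (6): inverse normalization back to engineering units
    return (y_out_n + 1.0) / 2.0 * (y_max - y_min) + y_min
```

For the estimators in this work, `x` would hold the eight inputs (thermographic features, current or voltage samples and the previous estimate) and the hidden layer would have twelve neurons.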
The same MLP structure is used to estimate both the weld bead depth ( D ^ ) and the weld bead width ( W ^ ) by switching one input between the welding current (i) and the welding voltage (u). A selection signal switches the parameters (weights), the inputs and the output of the network. A controller evaluates the two algorithms in sequence, updating the weights of the neural network for each case. Figure 8 shows the block diagram of the estimator.
The artificial neural network has eight neurons in the input layer, twelve in the hidden layer and one in the output layer. The size of the hidden layer was selected by testing different configurations, balancing computational cost (neuron count) against estimation error.
The input variables are the thermographic peak (Tp), the thermographic base (Tb), the thermographic area (Ta), the thermographic volume (Tv), the thermographic width (Tw), the measurement of the welding current or voltage (i or u) at the current sample (nT) and at the previous sample (nT − T), and the previous estimated value, D ^ (nT − T) or W ^ (nT − T). The symbol T is the sample time and n is the sample number. The activation function is the hyperbolic tangent sigmoid transfer function [49]. The network is trained with experimental measurements of the input and output parameters using the backpropagation algorithm.
The Selection signal allows switching between the estimation of the weld bead depth or the weld bead width. To estimate the weld bead depth, this algorithm uses the features of the infrared image and measurements of the welding current as indicators of the energy delivered (welding energy) and the last value of the weld bead depth. To estimate the weld bead width, the infrared features, the welding voltage and the last value of the weld bead width are used.
For the training and validation of the neural network, the input values of the thermographic and arc variables are obtained from the data acquisition system (see Section 2.1) and infrared feature extraction algorithm (see Section 2.2). The real values of the weld bead depth are obtained from the macrographic analysis. The real values of the weld bead width are obtained from the laser scanner (see Section 2.3).
These dynamic estimator models are valid only for the current process conditions and must be updated if conditions change, such as the electrode diameter and material, base material, gas type and flow rate, or welding current and voltage ranges.
Several programs were developed for data processing and data plotting. All the curves, the statistical analysis, the image processing and the neural network training and testing were made using Matlab scripts.
We must emphasize the importance of simplifying the estimation algorithm due to real-time requirements. Using the proposed features rather than the complete infrared image reduces the resources used and increases the speed of the estimation process. Sharing the same structure to estimate two parameters in sequential mode minimizes the hardware requirements but increases the processing time. If less processing time is required and hardware resources are not restricted, two identical neural networks can be used. The implementation and synthesis of the neural network can be done with the methodology and tools proposed in [50,51]. Ayala et al. show a floating-point hardware implementation of a neural network architecture that guarantees low latency.

2.5. FPGA Implementation of Weld Bead Depth and Width Estimators

The MLP implementation in hardware is based on the model presented in Section 2.4. The overall architecture consists of multiple neurons in parallel, which are synchronized and controlled by a Finite State Machine (FSM) as shown in Figure 9. All operations are carried out using customized variable width floating-point arithmetic and trigonometric libraries based on the IEEE 754 standard. These units are described in [52,53] and provide much better precision and a larger dynamic range suited for small and large real numbers compared to fixed-point or simple integer arithmetic.
The top-level neural network architecture begins its operation with the activation of the start signal. At this point, the FSM sends a start_neuron command to each neuron, which receives the network inputs and its respective weights (a) and bias (b). The neuron then outputs its value (y) and activates a ready_neuron signal. The FSM then activates a start_mult signal, which begins multiplying each neuron output by its respective weight (v). After receiving the ready_mult signal, the FSM executes its final states, which sum all weighted neuron outputs with the output bias (bout). Finally, the ready_nn signal is set and the network output (yout) becomes available.
The neuron module has a similar architecture and is depicted in Figure 10. Again, an FSM receives a start signal and begins the first stage of multiplying each input with its respective weight and accumulating the results. To simplify the architecture, the bias value (bj) is inserted in the same pipeline and multiplied by 1. After that, the resulting value enters the activation function as described in Equation (4). The exponential operation uses a CORDIC module described in [53] while the rest of the arithmetic operators use the same floating-point units described in [52].
To optimize FPGA resource usage, all sequential operations inside the neuron use shared floating-point units which reduces them to a total of two adders, one multiplier and one divider. Likewise, the multipliers used in the top-level architecture are also shared with the multipliers inside each neuron avoiding resource duplication.
The final step of the MLP hardware implementation was to address the input and output normalization steps described in Equations (1) and (6). To avoid adding more logic to the MLP architecture in hardware, a transform operation was applied to each weight and bias. To calculate the transforms, we began by substituting Equation (1) into Equation (2), obtaining Equation (7).
$$s_{1,j} = \left(\sum_{i=1}^{n} w_{i,j}\left(2\,\frac{x_i - \min(x_i)}{\max(x_i) - \min(x_i)} - 1\right)\right) + b_j \qquad (7)$$

$$s_{1,j} = \left(\sum_{i=1}^{n} w_{i,j}\left(\frac{2 x_i}{\max(x_i) - \min(x_i)} - \frac{2 \min(x_i)}{\max(x_i) - \min(x_i)} - 1\right)\right) + b_j \qquad (8)$$

$$s_{1,j} = \left(\sum_{i=1}^{n} \frac{2 x_i\, w_{i,j}}{\max(x_i) - \min(x_i)} - \frac{2 \min(x_i)\, w_{i,j}}{\max(x_i) - \min(x_i)} - w_{i,j}\right) + b_j \qquad (9)$$

$$s_{1,j} = \sum_{i=1}^{n} x_i\,\frac{2 w_{i,j}}{\max(x_i) - \min(x_i)} - \sum_{i=1}^{n} \left(\frac{2 \min(x_i)\, w_{i,j}}{\max(x_i) - \min(x_i)} + w_{i,j}\right) + b_j \qquad (10)$$
From Equation (10) it is clear that the neuron weights and bias can be recalculated to include the normalization as expressed in Equations (11) and (12).
$$w\_norm_{i,j} = \frac{2\, w_{i,j}}{\max(x_i) - \min(x_i)} \qquad (11)$$

$$bias\_norm_j = b_j - \sum_{i=1}^{n} \left(\frac{2 \min(x_i)\, w_{i,j}}{\max(x_i) - \min(x_i)} + w_{i,j}\right) \qquad (12)$$
An equivalent transform was calculated for the output inverse normalization where Equation (5) is substituted into Equation (6).
$$y_{out} = \left(\frac{\left(\sum_{j=1}^{m} v_j\, y'_j\right) + b_{out} + 1}{2}\right)\left(\max(y_{out}) - \min(y_{out})\right) + \min(y_{out}) \qquad (13)$$

$$y_{out} = \sum_{j=1}^{m} v_j\, y'_j\,\frac{\max(y_{out}) - \min(y_{out})}{2} + \frac{(b_{out} + 1)\left(\max(y_{out}) - \min(y_{out})\right)}{2} + \min(y_{out}) \qquad (14)$$
Once again, from Equation (14) it is possible to isolate the expressions that transform the weights and bias used to compute the inverse normalization.
$$v\_norm_j = \frac{\max(y_{out}) - \min(y_{out})}{2}\, v_j \qquad (15)$$

$$b\_norm_{out} = \frac{(b_{out} + 1)\left(\max(y_{out}) - \min(y_{out})\right)}{2} + \min(y_{out}) \qquad (16)$$
With the use of these transforms, there is no need to add any logic to the MLP implemented in hardware.
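The transforms can be checked numerically: folding the normalization into the weights and biases must give the same output as normalizing the inputs, running the plain MLP and denormalizing the output. A sketch with illustrative names, assuming a NumPy environment:

```python
import numpy as np

def tansig(s):
    # Equation (4)
    return 2.0 / (1.0 + np.exp(-2.0 * s)) - 1.0

def forward(x, W, b, v, b_out):
    # Plain MLP pass with no normalization logic (as executed on the FPGA)
    return v @ tansig(W.T @ x + b) + b_out

def fold_normalization(W, b, v, b_out, x_min, x_max, y_min, y_max):
    """Fold input/output normalization into the parameters,
    following Equations (11), (12), (15) and (16)."""
    rng_x = x_max - x_min
    W_norm = 2.0 * W / rng_x[:, None]                    # Eq. (11)
    b_norm = b - (2.0 * x_min / rng_x + 1.0) @ W         # Eq. (12)
    rng_y = y_max - y_min
    v_norm = v * rng_y / 2.0                             # Eq. (15)
    b_out_norm = (b_out + 1.0) * rng_y / 2.0 + y_min     # Eq. (16)
    return W_norm, b_norm, v_norm, b_out_norm
```

With the folded parameters, the raw inputs can be fed directly to the plain MLP pass, which is exactly why no extra normalization logic is needed in hardware.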

3. Results and Discussion

Two welding experiments were completed for modelling and validation of the weld bead depth and width. In the following sections, we show the details of the experiments, the measurements obtained, the modelling and validation of the geometry estimators and an FPGA simulation of the estimators.

3.1. Experimental Details

The welding experiments were performed on carbon steel pieces with dimensions of 85 × 50 × 6 mm, on which welds were laid using the bead-on-plate technique (welding on top). The surfaces of the pieces were cleaned to eliminate any dirt and oxides. The piece was welded in a horizontal position, with the torch at 90° from the horizontal plane and 12 mm of contact tip to work distance (CTWD). The shielding gas was composed of 98% Argon and 2% Oxygen, at 15 psi of pressure.
The welding electrode wire was selected mainly by matching the mechanical properties and physical characteristics of the base metal, the weld size and the existing electrode inventory. Steel wires with a diameter of 1 mm were used (Lincoln Electric Merit® S-6, product code CAJMERS6100S015W00, São Paulo, Brazil).
The measurements were obtained using the data acquisition and control system described in Section 2.1. The weld bead profile was obtained using the laser scanner and macrographic analysis described in Section 2.3. The welding power source was configured (through the data acquisition and control system) with a rate up and a rate down of 50, 2 s of pre-gas time, 1.5 s of post-gas time and 0.05 s of gap time. The sampling time was 20 ms. The dimensions of the thermographic image were 49 × 25 pixels.

3.1.1. Experiment 1: Process Response to the Step in the Welding Speed and Wire Feed Speed

This experiment was performed to investigate the behaviour of the process under welding speed and wire feed speed variations. The deposited volume per unit area was kept constant by holding the ratio between the welding speed and the wire feed speed fixed. The welding voltage was also kept constant. The step input sequence used in this experiment is shown in Table 1. The weld bead obtained is shown in Figure 11 and the three-dimensional weld bead profile is shown in Figure 12.
Figure 13 shows the unfiltered data and Figure 14 shows the filtered data. The data used for modelling are unfiltered but can be filtered in the modelling algorithm if necessary. In these figures, it is possible to notice that, although the wire feed speed changed, the bead width and the bead height remain stable because of the constant ratio between the wire feed speed and the welding speed and the constant welding voltage.
The correlation between the wire feed speed and the welding current is observed in the figures: when the wire feed speed increases, the welding current increases too. The relationship between the weld bead depth and the welding current is also clear, although the depth remains small, as observed in Figure 14.
The autocorrelation and cross-correlation analysis show high values, indicating an adequate sampling time and a delay of 11 sampling times between the weld bead depth and the stimulus.
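The delay estimate from the cross-correlation analysis can be reproduced with a simple normalized cross-correlation. A sketch, assuming stimulus and response are sampled at the same rate (the function name is illustrative):

```python
import numpy as np

def estimate_delay(stimulus, response, max_lag=30):
    """Estimate the dead time (in samples) between a stimulus and the
    measured response as the lag that maximizes the cross-correlation."""
    s = (np.asarray(stimulus, float) - np.mean(stimulus)) / np.std(stimulus)
    r = (np.asarray(response, float) - np.mean(response)) / np.std(response)
    # correlation of the stimulus with the response shifted by k samples
    xcorr = [np.mean(s[:len(s) - k] * r[k:]) for k in range(max_lag + 1)]
    return int(np.argmax(xcorr))
```

With the 20 ms sampling time used here, a lag of 11 samples corresponds to a dead time of 220 ms.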

3.1.2. Experiment 2: Process Response to the Step in the Welding Voltage

This experiment was performed to investigate the behaviour of the process with welding voltage variations. The wire feed speed and the welding speed were constant. The step input sequence used in this experiment is shown in Table 2. The weld bead obtained is shown in Figure 15 and the three-dimensional weld bead profile is shown in Figure 16.
Figure 17 shows the unfiltered data, while Figure 18 shows the filtered data. The correlation between the welding voltage and the thermographic parameters is observed in the figures: when the voltage increases, the thermographic width increases too and, in consequence, the area and volume increase.
All thermographic parameters were calculated from the base value, except the thermographic peak. When the base value changes, the width, area and volume are inversely affected. In this experiment, the thermographic peak value is stable, but the base value increases. The natural response would be a decrease in width, area and volume, but that did not happen; on the contrary, those values increased until the welding voltage returned to 19 V and then started decreasing. This behaviour indicates that width, area and volume do increase during the voltage step interval, but the change is less noticeable because the base value is also increasing.
The relationship between the bead width, the reinforcement and the voltage is also clear, but the applied stimuli do not disturb the process enough. The complex relationship between the welding parameters and the effect of disturbances is noticeable when the weld bead depth changes for no apparent reason, although a small variation of the welding current at the 55 mm position is observed.
The autocorrelation and cross-correlation analysis show high values, indicating an adequate sampling time and no dead time between the bead width and the stimulus.

3.2. Modelling and Validation of Estimators

The neural network proposed in Section 2.4 for estimating the depth and width of the weld bead was trained and validated using the experimental data described in Section 3.1. Each experimental dataset was divided into 70% for training, 15% for validation and 15% for testing. Experiment 1 was used to model the weld bead depth because of the relationship between this variable and the welding current, which in this case is modified by the wire feed speed. Experiment 2 was used to model the weld bead width because of the relationship between this variable and the welding voltage, which was modified in this experiment.
In addition, the complete data set from Experiment 2 was useful for testing the weld bead depth model in different process conditions. Similarly, the complete data set from Experiment 1 was useful for testing the weld bead width model.
The model must represent the dynamics of the welding process. Variations in the measured signal can be induced by measurement noise or by the process dynamics. In the first case, a filter can reduce the noise in the measurements and improve the modelling. In the second case, a filter eliminates important process information and reduces model accuracy. A mix of both cases can be found in the measurements, for which more complex filters may be necessary. In this work, several training trials were performed using filtered and unfiltered data (inputs and targets). The filter used was a moving average with window widths between 0 and 20 samples, which is simple to implement on an embedded device. The satisfactory results obtained in each case show that a more complex filter is not necessary.
Table 3 shows the results of the training and test processes. The quality of the results is measured using the Mean Squared Error (MSE) and the fit metric (R) between the model outputs and the targets.
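The two metrics can be computed in a few lines; here the fit metric R is taken to be the Pearson correlation coefficient between outputs and targets, which is an assumption consistent with the regression analysis in Section 3.2.1:

```python
import numpy as np

def mse(outputs, targets):
    """Mean Squared Error between model outputs and targets."""
    outputs, targets = np.asarray(outputs, float), np.asarray(targets, float)
    return float(np.mean((outputs - targets) ** 2))

def fit_r(outputs, targets):
    """Fit metric R, taken here as the Pearson correlation coefficient;
    R = 1 indicates an exact linear relationship."""
    return float(np.corrcoef(outputs, targets)[0, 1])
```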

3.2.1. Training Results

In the weld bead depth modelling, the best result was obtained without filtering. In the weld bead width modelling, the target values were filtered with a 10-element window.
The model obtained from the weld bead depth has a fit of 0.99844, a performance (MSE) of 7.6109 × 10−4 at epoch 6, a closed-loop performance of 6.4 × 10−3 and the network response is satisfactory.
The model of the weld bead width has a fit of 0.9957, a performance (MSE) of 31.008 × 10−4 at epoch 11, a closed-loop performance of 0.094 and the network response is satisfactory.
Figures 19a and 20a represent the regression plot between the model outputs and the targets (real weld bead depth or width). The dashed line represents the ideal result (outputs = targets) and the solid line represents the best fit linear regression line. The value of R is an indication of the relationship between the outputs and targets. If R = 1, this indicates that there is an exact linear relationship between outputs and targets. The regression plot for these models achieves an excellent value across the range of measurements.
The performance curves, shown in Figures 19b and 20b, show no indication of overfitting and define the best performance value at epochs 6 and 11 for depth and width, respectively. This performance is obtained in open loop, where the input $\widehat{DW}(kT-T)$ is substituted by the corresponding target value in the input data set. The performance values show that the estimation errors are very low.
The closed-loop performance is obtained with the feedback activated, such as when the model is used as an estimator in real time. It is natural that it is larger than the previous performance value because the model error is also fed back. For this model, the performance value predicts a good behaviour in a closed loop.
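The difference between the open-loop evaluation (measured value fed back) and the closed-loop evaluation (own estimate fed back) can be sketched with a generic one-step model; `model_step` is a stand-in for one MLP evaluation and is an assumption:

```python
import numpy as np

def evaluate_loops(model_step, u, targets):
    """Compare open-loop evaluation (measured value fed back) with
    closed-loop evaluation (the model's own estimate fed back), as when
    the model runs as a real-time estimator. Returns both MSE values."""
    open_err, closed_err = [], []
    y_hat = targets[0]                      # start feedback from a measurement
    for k in range(1, len(targets)):
        y_open = model_step(u[k], targets[k - 1])   # target fed back
        y_hat = model_step(u[k], y_hat)             # own estimate fed back
        open_err.append(y_open - targets[k])
        closed_err.append(y_hat - targets[k])
    return (float(np.mean(np.square(open_err))),
            float(np.mean(np.square(closed_err))))
```

Running a slightly biased model through this comparison illustrates why the closed-loop MSE is naturally larger: the model error is fed back and accumulates.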
In the response curves shown in Figure 21 and Figure 22, the model reproduces the behaviour of the process with high accuracy. The estimation error is very low (less than 0.05 mm for depth and 0.1 mm for width) along the whole curve.

3.2.2. Testing Results

A test of the weld bead depth model, with a different data set, is shown in Figure 23. The dataset was obtained in Experiment 2 with welding voltage variations (see Section 3.1.2). This model has a fit of 0.9901 and a performance of 8.076 × 10−4. The closed-loop performance is 0.244 and the estimator response is satisfactory with an estimation error of less than 0.1 mm.
The weld bead width model response, with a different data set, is shown in Figure 24. The dataset was obtained in Experiment 1 with welding speed and wire feed speed variations (see Section 3.1.1). The model has a fit of 0.852 and a performance of 0.128. The closed-loop performance was 0.67 and the estimator response was acceptable, with an estimation error of less than 0.3 mm along the curve.
Finally, it is necessary to emphasize that larger errors occur at the beginning and end of the weld bead. These intervals correspond to the opening and closing of the arc, which are very unstable moments of the process and do not follow the same dynamics, so they do not represent a threat to the quality of the model.

3.3. Simulations of Estimators in FPGA Device

The MLP described in Section 2.5 was synthesized using Intel® Quartus® Prime 15.1 (Altera Corp. from San Jose, CA, USA) for the target FPGA (Cyclone V-5CSEMA5F31C6N) kit DE1-SoC from Terasic Inc. (Hsinchu City, Taiwan). The architecture was then simulated using Mentor Graphics QuestaSim 10.6c from Mentor Graphics Corp. (Orlando, FL, USA). Since the floating-point (FP) units have configurable precision, experiments were conducted using FP units with 32, 27, 24 and 18 bits, all using an exponent width of 8 bits and variable sizes for the mantissa.
Simulation results for the weld bead depth estimator are shown in Figure 25, while results for the weld bead width estimator are presented in Figure 26. In both cases, the simulation output is compared to experimental measurements and shows satisfactory performance for all architectures except the one using 18-bit floating-point precision, as can be noticed from the noisy red dotted signal in Figure 26.
A more objective error measurement using the MSE is presented in Table 4 and confirms the lower accuracy of the 18-bit architecture for the weld bead width estimator.
Finally, FPGA synthesis results are presented in Table 5, where it can be observed that lower bit widths lead to less resource usage as well as faster performance. The chosen architecture was the one using 24 bits of precision, which can run at 130 MHz and compute the MLP output in 1.54 µs.
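The impact of shrinking the mantissa can be previewed in software before synthesis by rounding each value to a given number of mantissa bits. A rough emulation (it keeps the full float64 exponent range, unlike the real custom-precision FPGA units):

```python
import numpy as np

def quantize_mantissa(x, mantissa_bits):
    """Round values to a given number of mantissa bits, roughly emulating
    a custom-precision floating-point unit. Unlike the real FPGA units,
    the full float64 exponent range is kept."""
    m, e = np.frexp(np.asarray(x, dtype=np.float64))   # x = m * 2**e, 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return np.ldexp(np.round(m * scale) / scale, e)
```

Running the trained estimator's signals through this function at, say, 23, 16 and 10 mantissa bits and comparing the MSE against the full-precision output gives a first indication of the narrowest acceptable bit width.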

3.4. The Response of Autoregressive with Exogenous Inputs Linear Models

To compare our results with other methods, two autoregressive with exogenous inputs (ARX) linear models were developed using Matlab's System Identification Toolbox to estimate the weld bead geometry and compare their performance with the neural network models.
The inputs of the models are the same as in the neural network (Tp, Tb, Ta, Tv, Tw, i or u), but removing the delayed variables. The Multi-Input and Single-Output (MISO) model structure and the results of the identification process are shown in Table 6.
Similar to the procedure used for the neural network, the measurements obtained in Experiments 1 and 2 were used to identify and test the ARX models. The measurements and model responses are compared in Figure 27. The data set from Experiment 1 was used to identify the depth model (see Figure 27a) and test the width model (see Figure 27d). The data set from Experiment 2 was used to identify the width model (see Figure 27c) and test the depth model (see Figure 27b).
The performance of the ARX models is not satisfactory, as shown by the Fit and MSE parameters in Table 6 and the plotted curves in Figure 27. The response of the models with the training data is better than the response with the testing data, as shown by the MSE values in Table 6 and Figure 27b,d. In all cases, the models showed very low accuracy compared to the neural network. The tests show that these linear ARX models cannot satisfactorily represent the dynamics of the welding process with the selected inputs and structure.
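A MISO ARX model of this kind can be identified by ordinary least squares. A minimal sketch (the model orders and function name are illustrative, not those of Table 6):

```python
import numpy as np

def fit_arx(y, U, na=2, nb=2):
    """Identify a MISO ARX model by ordinary least squares:
    y(k) = sum_i a_i*y(k-i) + sum_j sum_i b_ji*u_j(k-i).
    U holds one column per exogenous input."""
    n = max(na, nb)
    rows, rhs = [], []
    for k in range(n, len(y)):
        # regressor: delayed outputs, then delayed samples of each input
        row = [y[k - i] for i in range(1, na + 1)]
        for j in range(U.shape[1]):
            row += [U[k - i, j] for i in range(1, nb + 1)]
        rows.append(row)
        rhs.append(y[k])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return theta   # [a_1..a_na, b_11..b_1nb, b_21..., ...]
```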

4. Conclusions

The development of data acquisition and control systems can be a complex task under arc welding conditions. Related works show the complexity of the arc welding process and the evolution of measurement techniques towards non-contact measurements and artificial intelligence algorithms. This study also reveals an open field of research on measuring the weld bead geometry in the electric arc welding process, specifically in the GMAW process.
Sensor fusion techniques have been developed for the estimation of depth and width of the weld bead using a new approach for data processing.
The estimators, based on the fusion of thermographic information and arc welding variables, were implemented using dynamic models and tested with satisfactory results. A novel method to obtain the thermographic features was developed using computationally simple calculations.
The models of depth and width of the weld bead are valid only in the working conditions of the identification experiments. In case of a change in operating conditions, the model must be updated.
A low-cost data acquisition and open-loop control system was developed based on a microcontroller device. It sends stimuli to the GMAW process to cause changes in the variables of interest in real time and captures the dynamics of their behaviour.
An image processing algorithm was developed to measure the depth of the weld bead over its entire length, providing the data needed to obtain the dynamic model of this variable.
A real-time FPGA implementation of the estimators based on an MLP neural network is shown, along with a novel method to calculate the weights and biases of the neural network while maintaining the same structure and performance. The architecture uses floating-point arithmetic and was experimentally optimized to reduce FPGA resource usage by varying the floating-point bit width. Experimental results obtained with the two dynamic models on the FPGA show high precision and adequate performance, computing each estimation in 1.54 µs.
The work offers a non-contact measuring tool for researchers and students and can be used in the industry by operators, technicians and unskilled labour.

Acknowledgments

This work has been supported by the University of Brasilia (UnB) and the Brazilian government research agencies CAPES and CNPq.

Author Contributions

G.A.B. was responsible for the bibliographic research; the development of the data acquisition and control system; the algorithms and software for feature extraction from the infrared images; the laser scanner and macrographic analysis used to obtain the geometric profile of the weld bead; designing and performing the experiments; and designing the sensor fusion algorithm and modelling and testing the estimators. R.C.S. and G.A.B. wrote the paper and were responsible for designing and implementing the MLP in the FPGA, while R.C.S. executed the synthesis, simulations and optimizations. J.A.R.V. worked on the ARX model and reviewed the paper. S.C.A.A. coordinated the scientific research, worked on data interpretation and reviewed the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Data acquisition and open loop control system: (a) Main components; (b) Flat welding table and instruments support; (c) Schematic side view.
Figure 1. Data acquisition and open loop control system: (a) Main components; (b) Flat welding table and instruments support; (c) Schematic side view.
Sensors 18 00962 g001
Figure 2. Data acquisition and control interface.
Figure 2. Data acquisition and control interface.
Sensors 18 00962 g002
Figure 3. Data acquisition and stimulus sequence design form and data files.
Figure 3. Data acquisition and stimulus sequence design form and data files.
Sensors 18 00962 g003
Figure 4. Graphical representation of the infrared image features extracted from a sample: (a) 3D curve of the thermographic values; (b) Front view of the curve with peak, boundary and base values; (c) Intersection between the boundary plane and the 3D curve; (d) Area, volume and width detected.
Figure 4. Graphical representation of the infrared image features extracted from a sample: (a) 3D curve of the thermographic values; (b) Front view of the curve with peak, boundary and base values; (c) Intersection between the boundary plane and the 3D curve; (d) Area, volume and width detected.
Sensors 18 00962 g004
Figure 5. Low-cost laser scanner: (a) Main components; (b) Software for data acquisition and image processing; (c) Details of the optical components and the laser line.
Figure 5. Low-cost laser scanner: (a) Main components; (b) Software for data acquisition and image processing; (c) Details of the optical components and the laser line.
Sensors 18 00962 g005
Figure 6. Macrographic analysis applied in a weld bead: (a) Top view of the weld bead; (b) Original image of the lateral view of the cut and polished piece; (c) Image processed by the algorithm that shows the depth profile of the bead (yellow line) and the surface of the workpiece (red line).
Figure 6. Macrographic analysis applied in a weld bead: (a) Top view of the weld bead; (b) Original image of the lateral view of the cut and polished piece; (c) Image processed by the algorithm that shows the depth profile of the bead (yellow line) and the surface of the workpiece (red line).
Sensors 18 00962 g006
Figure 7. Structure of multilayer perceptron neural network.
Figure 7. Structure of multilayer perceptron neural network.
Sensors 18 00962 g007
Figure 8. Estimator of depth and width of the weld bead that uses the thermographic features with welding current or voltage in cooperative mode.
Figure 8. Estimator of depth and width of the weld bead that uses the thermographic features with welding current or voltage in cooperative mode.
Sensors 18 00962 g008
Figure 9. Blocks diagram of neural network implementation in Field-Programmable Gate Array (FPGA) device using a Finite State Machine (FSM).
Figure 9. Blocks diagram of neural network implementation in Field-Programmable Gate Array (FPGA) device using a Finite State Machine (FSM).
Sensors 18 00962 g009
Figure 10. Block diagram of neuron implementation in FPGA device.
Figure 10. Block diagram of neuron implementation in FPGA device.
Sensors 18 00962 g010
Figure 11. Weld bead obtained in the experiment with variations in the welding speed and wire feed speed: (a) Top view of the weld bead; (b) Weld bead depth profile (yellow line) and the surface of the workpiece (red line) obtained by the macrographic analyses algorithm.
Figure 11. Weld bead obtained in the experiment with variations in the welding speed and wire feed speed: (a) Top view of the weld bead; (b) Weld bead depth profile (yellow line) and the surface of the workpiece (red line) obtained by the macrographic analyses algorithm.
Sensors 18 00962 g011
Figure 12. Three-dimensional reconstruction of the weld bead obtained in the experiment with variations in the welding speed and wire feed speed.
Figure 12. Three-dimensional reconstruction of the weld bead obtained in the experiment with variations in the welding speed and wire feed speed.
Sensors 18 00962 g012
Figure 13. Unfiltered experimental measurements obtained with the data acquisition and control system varying the welding speed and wire feed speed.
Figure 13. Unfiltered experimental measurements obtained with the data acquisition and control system varying the welding speed and wire feed speed.
Sensors 18 00962 g013
Figure 14. Filtered experimental measurements obtained with the data acquisition and control system varying the welding speed and wire feed speed.
Figure 14. Filtered experimental measurements obtained with the data acquisition and control system varying the welding speed and wire feed speed.
Sensors 18 00962 g014
Figure 15. Weld bead obtained in the experiment with variations in the welding voltage: (a) Top view of the weld bead; (b) Weld bead depth profile (yellow line) and the surface of the workpiece (red line) obtained by the macrographic analyses algorithm.
Figure 15. Weld bead obtained in the experiment with variations in the welding voltage: (a) Top view of the weld bead; (b) Weld bead depth profile (yellow line) and the surface of the workpiece (red line) obtained by the macrographic analyses algorithm.
Sensors 18 00962 g015
Figure 16. Three-dimensional reconstruction of the weld bead obtained in the experiment with variations in the welding voltage.
Figure 16. Three-dimensional reconstruction of the weld bead obtained in the experiment with variations in the welding voltage.
Sensors 18 00962 g016
Figure 17. Unfiltered experimental measurements obtained with the data acquisition and control system varying the welding voltage.
Figure 17. Unfiltered experimental measurements obtained with the data acquisition and control system varying the welding voltage.
Sensors 18 00962 g017
Figure 18. Filtered experimental measurements obtained with the data acquisition and control system varying the welding voltage.
Figure 18. Filtered experimental measurements obtained with the data acquisition and control system varying the welding voltage.
Sensors 18 00962 g018
Figure 19. Results in the training of the weld bead depth model: (a) Regression curve between the model output and the real depth (target); (b) Performance curve in training, validation and test.
Figure 20. Results in the training of the weld bead width model: (a) Regression curve between the model output and the real width (target); (b) Performance curve in training, validation and test.
Figure 21. Comparison between the model response (outputs) and the experimental measurements (targets), obtained in the training of the weld bead depth model.
Figure 22. Comparison between the model response (outputs) and the experimental measurements (targets), obtained in the training of the weld bead width model.
Figure 23. Comparison between the model response (outputs) and the experimental measurements (targets), obtained in the testing of the weld bead depth model with a different data set.
Figure 24. Comparison between the model response (outputs) and the experimental measurements (targets), obtained in the testing of the weld bead width model with a different data set.
Figure 25. Comparison between the weld bead depth experimental measurements and the hardware MLP implementation with various floating-point precisions.
Figure 26. Comparison between the weld bead width experimental measurements and the hardware MLP implementation with various floating-point precisions.
Figure 27. Comparison between the experimental measurements and the response of the ARX model.
Table 1. Stimulus sequence applied during the experiment with steps in the welding speed and wire feed speed.
| Position (mm) | Welding Speed (mm/s) | Welding Voltage (V) | Wire Feed Speed (m/min) | Arc Status |
|---|---|---|---|---|
| 5.0 | 6.0 | 20.0 | 3.0 | On |
| 30.0 | 8.0 | 20.0 | 4.0 | On |
| 60.0 | 6.0 | 20.0 | 3.0 | On |
| 85.0 | 6.0 | 20.0 | 3.0 | Off |
Table 2. Stimulus sequence applied during the experiment with steps in the welding voltage.
| Position (mm) | Welding Speed (mm/s) | Welding Voltage (V) | Wire Feed Speed (m/min) | Arc Status |
|---|---|---|---|---|
| 5.0 | 6.0 | 19.0 | 3.5 | On |
| 30.0 | 6.0 | 21.0 | 3.5 | On |
| 60.0 | 6.0 | 19.0 | 3.5 | On |
| 85.0 | 6.0 | 19.0 | 3.5 | Off |
Table 3. Training and test results of the estimators.
| Estimator Metric | Depth: Training (Exp. 1) | Depth: Test (Exp. 2) | Width: Training (Exp. 2) | Width: Test (Exp. 1) |
|---|---|---|---|---|
| Fit (R) | 0.9984 | 0.9901 | 0.9957 | 0.8516 |
| Epoch | 6 | - | 11 | - |
| Open loop MSE | 7.6109 × 10⁻⁴ | 8.076 × 10⁻⁴ | 31.008 × 10⁻⁴ | 0.128 |
| Closed loop MSE | 6.4 × 10⁻³ | 0.224 | 0.094 | 0.67 |
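The Fit (R) and MSE figures of merit used throughout these tables follow their standard definitions (Pearson correlation between estimator output and target, and mean squared error). A minimal sketch, with function names of our own choosing:

```python
import math

def mse(outputs, targets):
    """Mean squared error between estimator outputs and measured targets."""
    return sum((o - t) ** 2 for o, t in zip(outputs, targets)) / len(outputs)

def fit_r(outputs, targets):
    """Pearson correlation coefficient, the 'Fit (R)' figure of merit."""
    n = len(outputs)
    mo = sum(outputs) / n
    mt = sum(targets) / n
    cov = sum((o - mo) * (t - mt) for o, t in zip(outputs, targets))
    so = math.sqrt(sum((o - mo) ** 2 for o in outputs))
    st = math.sqrt(sum((t - mt) ** 2 for t in targets))
    return cov / (so * st)
```

An R close to 1 indicates that the estimator tracks the measured geometry almost linearly, which is why the width test case (R = 0.8516) stands out as the weakest fit.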
Table 4. Comparison between the accuracy of implementations with different floating-point precisions.
| MLP Floating-Point Precision | Weld Bead Depth Estimator MSE | Weld Bead Width Estimator MSE |
|---|---|---|
| 32 bits | 2.095 × 10⁻⁴ | 5.045 × 10⁻³ |
| 27 bits | 2.095 × 10⁻⁴ | 5.046 × 10⁻³ |
| 24 bits | 2.094 × 10⁻⁴ | 5.056 × 10⁻³ |
| 18 bits | 2.460 × 10⁻⁴ | 1.186 × 10⁻² |
Table 5. Summary of resources and performance of implementations with different floating-point precisions.
Table 5. Summary of resources and performance of implementations with different floating-point precisions.
Architecture Floating-Point PrecisionALMs Usage (Elements/%)DSPs Usage (Elements/%)Frequency (MHz)Time (us)
32 bits19,809/62%24/21%117.081.71
27 bits16,615/52%24/21%125.771.59
24 bits14,233/44%24/21%130.191.54
18 bits10,502/33%24/21%135.721.47
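The latencies in Table 5 are consistent with a fixed cycle count per inference: multiplying each clock frequency by the corresponding execution time gives roughly 200 clock cycles at every precision, so the reduced-precision variants gain speed only through the higher clock frequency they sustain. A quick check, with the values copied from the table:

```python
# Frequency (MHz) and per-inference time (µs) for each precision, from Table 5.
configs = {32: (117.08, 1.71), 27: (125.77, 1.59), 24: (130.19, 1.54), 18: (135.72, 1.47)}

for bits, (f_mhz, t_us) in configs.items():
    cycles = f_mhz * t_us  # MHz × µs = clock cycles per inference
    print(f"{bits}-bit: {cycles:.1f} cycles")
```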
Table 6. Parameters of autoregressive with exogenous inputs (ARX) linear models.
| Model Parameter | ARX Weld Bead Depth Model | ARX Weld Bead Width Model |
|---|---|---|
| Order of A (na) | 4 | 4 |
| Order of B + 1 (nb) | [4 4 4 4 4 8] | [4 4 4 4 4 8] |
| Input-output delay (nk) | [1 1 1 1 1 1] | [1 1 1 1 1 1] |
| Fit percent (Fit) | 79.87% | 59.09% |
| Mean squared error (MSE) in training | 0.0037 | 0.8029 |
| Mean squared error (MSE) in testing | 2.4464 | 6.4616 |
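For reference, the na, nb, and nk entries above parametrize the standard multi-input ARX difference equation; the six entries of nb and nk correspond to the six model inputs, here written generically as u₁…u₆ (symbol names follow the usual system-identification convention, not notation from this article):

```latex
A(q)\,y(t) = \sum_{i=1}^{6} B_i(q)\,u_i(t - nk_i) + e(t), \qquad
A(q) = 1 + a_1 q^{-1} + \dots + a_{n_a} q^{-n_a}, \qquad
B_i(q) = b_{i,1} + b_{i,2} q^{-1} + \dots + b_{i,n_{b,i}} q^{-(n_{b,i}-1)}
```

with $n_a = 4$, $n_b = [4\;4\;4\;4\;4\;8]$, and $nk_i = 1$ for both models, and $e(t)$ a white-noise disturbance.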

Share and Cite

MDPI and ACS Style

Bestard, G.A.; Sampaio, R.C.; Vargas, J.A.R.; Alfaro, S.C.A. Sensor Fusion to Estimate the Depth and Width of the Weld Bead in Real Time in GMAW Processes. Sensors 2018, 18, 962. https://0-doi-org.brum.beds.ac.uk/10.3390/s18040962
