Article

Application of the Deep CNN-Based Method in Industrial System for Wire Marking Identification

1 DTP Ltd., 66-002 Zielona Góra, Poland
2 Faculty of Transport, Warsaw University of Technology, 00-662 Warsaw, Poland
3 Faculty of Mechanical Engineering, University of Zielona Góra, 65-516 Zielona Góra, Poland
* Author to whom correspondence should be addressed.
Submission received: 23 May 2021 / Revised: 9 June 2021 / Accepted: 15 June 2021 / Published: 19 June 2021

Abstract:
Industry 4.0, a term coined by Wolfgang Wahlster in Germany, celebrates its 10th anniversary in 2021, and the digitalization of the production environment remains one of the hottest topics in computer science departments at universities and companies. Optimizing production processes and redefining production concepts are meaningful goals in light of the current industrial and research agendas. Both the optimization and the redefinition span numerous subtopics and technologies, and among the most significant are the newest findings and applications of artificial intelligence (AI): machine learning (ML) and deep convolutional neural networks (DCNNs). The authors invented a method and device that supports wiring assembly in the control cabinet production process, namely, the Wire Label Reader (WLR) industrial system. Implementing this device was a substantial technical challenge, requiring advanced IT technologies: ML, image recognition, and DCNNs. This paper provides an in-depth description of the underlying methodology of the device, its construction, and, above all, the industrial assembly processes in which it is deployed. It was important for the authors to validate the usability of the device within the mentioned production processes and to discuss both the advantages and the challenges connected with such assembly process development. The authors noted that in-depth studies of the effects of AI applications in the presented area are sparse. The paper presents the idea of the WLR device, the results of DCNN training (with a recognition rate of 99.7% despite challenging conditions), the device's implementation in the wire assembly production process, and its users' opinions. The authors analyzed how the WLR affects assembly process time and energy consumption and, accordingly, the advantages and challenges of the device. Among the most notable results of the WLR implementation in the assembly process is a significant reduction in process time, regardless of the number of characters printed on a wire.

1. Introduction

The authors of this paper considered the assembly process of an industrial enclosure, which yields a complete control cabinet at the end of the process. Conducted in the traditional way, with direct human reading and many manual operations, this process is time- and energy-consuming and prone to installation mistakes. It is therefore worth developing innovative devices that support the process and reduce reliance on the error-prone human factor. The authors assume that the production process of control cabinets is preceded by automatically supported production of wires and that the assembly of wires is enhanced with the use of a dedicated software system.

1.1. Shaping the Solutions for Wire Assembly Process of the Control Cabinets

Work dedicated to improving wire preprocessing, understood as preliminary preparation for correctly connecting wires inside an industrial enclosure, has been carried out in industry for years. To name just a few historically significant events: the insulation stripper and the wire separator for twisted wire pairs were developed and patented in 1974 by Folkenroth and Ullman [1]; the world's first electronically controlled, fully automatic crimping machine was developed in 1982 [2]; an apparatus for making a wire harness was patented by Hirano and Yamashita in 1989 [3]; and an apparatus and method for preparing wire ends were patented in 1997 by Lucenta et al. [4]. Additionally, the launch of the world's first fully automatic crimping machine equipped with a twister device was reported in 1999 [2].
Scientific papers have rarely addressed the automated processing of wires and wire harnesses; one of the few is the publication of Block and Gage, released in 1988 [5]. Improvement of the wiring production process and of the subprocess of identifying wire markings has been primarily of industrial interest in recent years. A significant device of this kind is the personal wiring assistant, which prepares a whole wire, but only one at a time, when required [6]. Nevertheless, users of this device noted that it worked too slowly. Although attempts to solve this problem were made, its development has been rather at a standstill over the years, presumably because of the delicacy of the material and the complex nature of the process, as Jan-Henry Schall reported in a personal communication during live training on innovations in the control cabinet production process in 2017 at the Rittal Innovation Center (Haiger, Germany). Another example of such a supporting device is the wire terminal, which prepares, in advance, all the wires necessary to complete the assembly process and packs them in assembly sequence into special rail-based wire “warehouses” that can accommodate up to 1300 wires [7]. One of the latest examples of improvement in this specific production process is the EPLAN Smart Wiring (ESW) system, a touchscreen application that guides an employee, step by step, through the whole wire assembly process of the control cabinet. For the most advanced and automated version of the production process, in which all the wires are preproduced to a unique, exact length, the ESW system provides a search window to identify which wire on the application list is the one held by an employee [8,9,10].

1.2. Wire Marking Identification with Deep Neural Network Applications

What raised the interest of the authors of the current paper was the development of the assembly process of control cabinets by speeding up its technological processes and operations. In the assembly process, it was observed that the wires are marked with a sequence of signs using raster fonts, i.e., pixelation composed of dots creating a symbol in a definite square sign matrix. Reading such wire markings, often on tiny wires with cross sections from 0.75 to 1.5 mm², can become exhausting for assembly personnel; it is therefore worth automating this step with methods connected to image recognition. This objective was achieved by inventing a method and constructing a prototype device, with software based on deep neural networks (DNNs), for identifying wire markings. Consequently, this paper's main aim is to present the developed solution and an evaluation of its operation. A consequence of this main objective is the analysis of particular research questions that raised the authors' interest after reviewing the literature; these research questions are presented in the final paragraph of Section 2.
In recent years, the use of multilayer DNNs has become the basis for describing processes with specialized descriptors used for decision making in various fields and areas of knowledge. This approach primarily allows one to describe the features of the analyzed process. Such mechanisms are much more effective than the traditionally used methods of descriptor generation [11]. As a result, they improve the accuracy, precision, and reliability of operation in expert software systems.
The use of multilayer neural networks with descriptors is related, among other things, to natural language processing (NLP), data exploration, and, above all, computer vision. When translating a real image to the level of computer vision, DNNs are used, and the field that detects or processes images with their help is called deep learning. Deep learning is characterized by the fact that the neural network has hidden layers between input and output: the shallowest layers learn details called low-level features, while the deeper layers generalize information from the shallow layers into so-called high-level features. Because a deep network contains many layers, feature extraction takes place hierarchically.
Rosenblatt [12] and later Fukushima [13] presented initial works on neural networks with a multilayer model. Many sources point to Jürgen Schmidhuber [14], who began working on them in the early 1990s, as the precursor of modern neural networks. One could even say that around the year 2000, all the pieces of the modern neural network puzzle were ready (sufficient equipment, data, and ideas), but it was not until around 2010 that they began to be used. Indeed, hardly anyone in 2010 considered neural networks a good idea; rather, they seemed a dead end for scientists. LeCun and Bengio [15] developed a structure and learning algorithm for a specialized multilayer network called the convolutional neural network (CNN), on the basis of which Hinton and Osindero [16] gave impetus to the development of this kind of artificial intelligence (AI). The authors of the latter publication showed that deep networks can be taught in a different way: the proposed method involved learning the network layer by layer, followed by supervised tutoring. They also demonstrated improved quality of multilayer network mappings compared to traditional, shallow, multilayer perceptron (MLP) networks. Multilayer networks began to be termed DNNs, and the methods of teaching such networks, together with the associated general issues, became known as deep learning. Nevertheless, the last breakthrough year for the topic presented herein was 2012, when the work of Le et al. was published [17], describing an innovative way of classifying dogs and cats, and when the CNN called AlexNet was presented by Krizhevsky et al. [18]. It is worth mentioning that the world's AI breakthrough was recognized in 2019 with the “IT Nobel Prize,” namely, the Turing Award for Geoffrey Hinton, Yann LeCun, and Yoshua Bengio [19].
There are many network variations that modify the basic CNN structure. Examples include the auto-encoder (AE), a multilayer, nonlinear generalization of a linear principal component analysis (PCA) network [20]; recursive networks of the long short-term memory (LSTM) type [21], which are an effective solution to the problem of backpropagation through time; and the restricted Boltzmann machine (RBM) used in deep belief networks (DBNs) [16].

1.3. Challenges of Wire Marking Identification

An industrial control cabinet includes a significant number of electrical devices and connections transferred through wires. The control cabinet's functions include control, electric signal transmission, and energy distribution; it could be compared to a heart, responsible for controlling and powering a whole machine, robot, or industrial shop-floor cell. Each technological line, production node, or automated logistic process requires an individually designed control cabinet. Modern production processes for these devices use a significant number of wires, which are marked with printed, alphanumeric markings in ink print technology. The problems with identifying the markings remain unsolved so far, and there are clear deficiencies when it comes to automatically recognizing wire markings; this research gap is demonstrated in the following section of this paper. Identification of particular wires in the assembly process of the control cabinet relies on the assembly crew reading the markings, which is a time-consuming process with a high risk of error due to the small sizes of the markings and their syntax; it is thus relatively costly. It is worth mentioning that the special font used for printing on wires was designed to cover the maximum wire space available to the printer nozzle from above while being large and clear enough for the assembly operator to read on those tiny wires. Due to the small font size, the reading process may cause eye fatigue, since manual wiring production may take a whole working day (8 h, one shift of work in the domestic situation of the paper's authors). The aim of the current research is the automation of wire marking identification, which will enable significant cost reductions in the manufacture of control cabinets through shorter assembly times, better prevention of assembly errors, and a higher degree of quality assurance integration. As mentioned in [22], a high-speed system for marking wires and cabling reduces the costs of preparing wire, which is still an up-to-date statement, enhancing product maintenance while providing convenient, sequenced labeling kits.
The abovementioned latest innovations, such as the wire terminal, the ESW wiring support system, and the invented Wire Label Reader (WLR), fit perfectly into the Industry 4.0 trend of production digitalization. It is worth mentioning that the Industry 4.0 concept, and the huge engineering effort behind it, celebrates its 10th anniversary this year, as recently reported by Zuehlke [23], one of the significant contributors in this area.
The further sections of the paper are as follows. Next is the literature review, with research questions that simultaneously reflect the research gaps. The third section concerns the research methodology and, additionally, the methodology applied in the WLR device in connection with image recognition. Consequently, the device for identifying wire markings, the challenges of wire image recognition, in particular the reading process for wire markings, and the methods applied are presented. The core section of the paper, Section 4, presents various contexts of the device analysis: the authors focus on the results of neural network training in the first subsection and then present tests on the duration of the assembly process of the control cabinets; opinions of the device's prototype users are also presented. Finally, the paper concludes by summing up the answers to the research questions, describing the research limitations and the future development of the device, and outlining potential future research agendas for the proposed system's application.

2. Literature Review

In recent decades, image recognition has largely been automated by applying analytical and numerical methods using algorithms as well as AI methods coupled with DNNs.
In the past, one solution to the problem of image recognition was the installation of markers on areas or objects of interest. These markers provided support, among other things, for tracking, positioning, and orientation of desired objects [24,25,26]. Even though markers are a suitable solution for laboratory requirements, and it is reasonable to use them in processes of an unchanging nature, they would not work in applications centered around the concept of Industry 4.0, which is oriented toward flexibility and frequent changes. For these types of applications, the additional task of permanently attaching markers to an environment that is subject to continuous change would be burdensome.
In order to eliminate physical markers, the latest methods of machine learning (ML) are used, namely, feature learning and deep learning as described by Le et al. [17] and as previously indicated. Deep learning is a ground-breaking approach to the use of AI [16,27]. Eliminating markers is still a current topic and there is no single, optimal solution in the literature regarding this issue [28,29,30]. It is expected that the use of AI will eliminate the need to place physical markers, which will lead to full, automatic recognition of the environment and selected elements.
Interest in the topic can be observed at the world's leading universities. Below, the authors of this paper highlight examples of doctoral dissertations from the Massachusetts Institute of Technology. In the work by Velez [31], the author presented models and algorithms used to search for objects in the real world. Chen [32] analyzed interactive object recognition from the level of a mobile device. Jaroensri [33] considered the use of synthetic data, such as artificially created images generated with numerical methods, to better prepare neural networks for solving problems related to computer image recognition. It is worth mentioning at this point that the solution presented in this paper, apart from real pictures, also used synthetically generated ones. Li [34] presented, among other things, advanced image recognition through deep learning. Florence [35] described deep visual learning designed for robot manipulation. In turn, Wu [36] focused on the science of seeing the physical world in the context of deep learning, which generally deals with problems that require a large amount of data for self-learning. The accelerated development of photovoltaics through ML was presented in the publication by Oviedo Perhavec [37]. Finally, Ma [38] presented ML in ocean applications, such as wave prediction, for advanced renewable energy control.
Several other doctoral theses on this topic were developed at the California Institute of Technology and at the Jagiellonian University. At the California Institute of Technology, Yang [39] worked on fast, adaptive, and extended digital image correlation using, among other things, methods for matching images in order to compare them, which is one of the elements of the algorithm presented in this article. At the Jagiellonian University, Jastrzębski [40] worked on the generalization and optimization trajectory of DNNs, presenting a new perspective on how stochastic gradient descent (SGD) controls optimization.
There are numerous advanced solutions that facilitate the search for definite items or the identification of the definite properties of images, such as face detection and the recognition method, patented by Irmatov et al. [41].
One of the first areas of image recognition was optical character recognition (OCR), a research area that addresses the application of neural networks to the recognition of signs and handwriting. Solutions connected to that research agenda are known from several patent descriptions [42,43,44,45,46,47,48].
Most current solutions in image recognition have been dominated by the application of CNNs, whose structure is adjusted, in the most natural way, to the analysis of two-dimensional data in the search for definite patterns. Examples of the application of neural networks to image recognition can be traced in the description of the application by Kim et al. [49] and in further publications. Jaderberg et al. [50] developed a system that enables the localization and recognition of characters in natural scene images (real-life images), which is acclaimed as one of the most challenging tasks in image-based sequence recognition; the authors applied a region proposal mechanism for detection and deep convolutional neural networks (DCNNs) for text recognition. In their paper, Shi et al. [51] also studied the problem of character/textual recognition in image scenes, proposing a novel neural network architecture that integrates feature extraction, sequence modeling, and transcription into a unified framework. The work of Gerber and Chung [52] can also be included in the thematic strand of recognizing natural scene images. The authors researched the detection of vehicle number plates by applying a multiple-CNN approach, with particular interest in applying the solution to mobile devices. Noting that the computing power of mobile devices is limited, they proposed a fast method for recognizing the abovementioned plates and expected it to be applied in the field of intelligent transportation systems.
In the case of Palka et al. [53], the authors developed an OCR system to recognize handwritten characters. The main goal of their research was to develop new principles in the field of processing handwritten text, with special emphasis on text with language specifics such as diacritics. The authors of [54] presented a method believed to be simpler than existing 2D LSTM models, in particular, an end-to-end trainable OCR system that combines a CNN for extracting features with an LSTM for sequence modeling. The authors applied the results to English and Arabic handwriting data and to English machine print data. As mentioned in [55], recognizing Arabic handwriting is considered one of the most challenging recognition research topics due to the italic nature of the handwriting and the particular similarities between the shapes of the various characters. The authors of [55] proposed a new architecture combining a CNN and a bidirectional LSTM (BLSTM) based on a character model with a connectionist temporal classification (CTC) decoder. The authors of [56] evaluated the performance of different deep learning networks, such as CNNs, the LSTM-based recurrent neural network (RNN), and convolutional LSTM, as applied to the recognition of Odia (Oriya) printed characters, comparing them in terms of error rate, accuracy, etc. According to Addis et al. [57], the Ethiopian script is no less challenging: it uses a large number of characters, many of which are visually similar in nature, which poses a challenge for OCR development. In [57], the authors applied BLSTM neural networks to recognize typed Ethiopian scripts; according to that study, LSTM networks achieve an average character error rate of 2.12% without using language modeling or any other postprocessing, indicating that the proposed result was very promising. LSTM, an artificial RNN architecture used in the field of deep learning, was applied as the network for speech recognition, handwriting, and polyphonic music on a sample of over 5400 runs [58].
The authors of [59] researched, compared, and analyzed character recognition with the use of three, currently applied deep-learning structures, namely, the AlexNet structure, the LeNet structure, and the authors’ own SPNet structure. The authors stated that OCR is still a challenge in the field of computer vision.
In [60], the authors focused on an empirical exploration of the use of character-level convolutional networks (ConvNets) for text classification. A large-scale database was created to show that character-level CNNs can achieve competitive results, compared to traditional tools for converting text into numerical vectors such as bags of words, N-grams, and TF-IDF variants and deep learning models, i.e., word-based ConvNets and recurrent neural networks.
The authors of [61] considered the encoder-based analysis of text processing. The authors presented the results connected to text encoders based on convolutional deep structured semantic models (C-DSSMs) or transformers, which showed high performance in many NLP tasks.
The encoder architecture that overcomes undetectable errors using a fine-grained character level was developed by Javaloy and García-Mateos [62]; moreover, the authors of the last paper considered a general-purpose encoder based on input and comparison with a causal feature extractor (CFE).
The authors of [63] addressed the limited lexicons applied in recognition research based on LSTM RNNs, providing a lexicon-driven decoding process based on lexicon verification, coupled with an original cascade architecture. The proposed approach achieved new state-of-the-art performance on the RIMES and IAM datasets and provided 90% accuracy on the RIMES dataset for a giant lexicon of 3 million words.
In the paper [64], two data expansion and normalization techniques were presented, namely, a novel profile normalization technique for both word and line images and an extension of existing text images using random perturbations on a regular grid. These techniques, combined with LSTM CNN, significantly reduce error rates when recognizing the handwriting in characters and words.
In the paper [65], the authors presented a system that combines sequence recognition methods with a new method for encoding input data using Bézier curves, which allows the authors to obtain faster recognition times in comparison to the previously developed system. The authors determined the optimal configuration of their models by applying a series of experiments and presenting the results based on public datasets.
This part of the literature review can be concluded as follows. The reviewed works considered the recognition of particular alphabets, the recognition of characters in an image scene, and the recognition of handwriting as applied to different languages, and they developed several of the abovementioned methods and tested solutions on large-scale databases. This research is not directly connected to this paper's subject matter; nevertheless, it is important as a knowledge source for the current research in the area of recognizing wire label markings. Consequently, an analysis of the literature connected to the recognition of characters printed on wires is essential. The authors decided to analyze publications connected to wire assembly in the context of wire labels in scientific databases, namely, Scopus. It was found that consideration of such a topic is very rare; the conclusions are outlined briefly in the following paragraphs.
Sprovieri [66] presented various techniques for applying a mark to a wire or cable, as used in wire harness assembly, assessing such labels as, on the one hand, economical, simple, and able to carry plenty of information while, on the other hand, labor-consuming to apply. Therefore, inkjets are applied to print such labels. The quality of such prints can be questioned and is discussed further on in this paper, in Section 3.2. Camillio [67] mentioned that properly labeled wires ensure correct installation and high performance in harnesses and electrical panels. Inkjet printing was also described by Webber [68] and by Mitchell et al. [69]. As one might suppose, inkjets are one option for printing labels on wires, cost-effective although time-consuming, whereas the laser-based method is characterized by a higher quality of prints. Apart from those two options, Gray and Falson [70] also mentioned the following: printed markers, hot-stamped logos, sleeve markers, full-circle polyvinyl chloride markers, nylon clip markers, and computer-printable sleeves. Mitchell et al. [69] noted that the use of an inkjet printer in the factory is worthwhile since it ensures that printing operations can be undertaken in relatively remote locations and can be integrated into larger computer-controlled systems without the need for frequent operator intervention. As previously mentioned by Tierney [71], no marking method exists that is suitable for all applications, since no two applications are the same: they may have limited space constraints, markings that mostly consist of one line of alphanumeric text, or similar code constructions, yet many alternatives in the sets of codes. In the rare group of scientific papers connected to the subject matter of this paper, it is worth bearing in mind [5], mentioned above, whose authors developed an automated approach on the wire shop floor, focusing on the manufacturing process in the following areas: engineering design input, wire marking, wire termination, and harness layup.
Interesting research featuring the recognition of wire marking on paper was presented in [72]. The author analyzed and examined wire mark patterns on the reverse side of printed sheets of paper, which were not directly connected to the topic of wire label marking but might be a potential future research agenda.
Much has changed in the industry since Markstein [73] and Emmerich [74] presented manufacturing techniques for wire harnessing. Programming machinery for wire harnessing that eliminated programming the machine manually, thus saving time and reducing errors, was mentioned in several papers, e.g., [75]; nevertheless, the automation of the processes mentioned has started to take hold in recent years [9].
Summarizing the literature review, it can be noted that the achievable publications considered in the article covered the following topics:
  • Elimination of physical markers by AI application [16,17,27,28,29,30];
  • Image recognition with real pictures and synthetically generated ones [33,34,41,49];
  • OCR as a special area of interest in the topic of image recognition [42,43,44,45,46,47,48,53,54,55,56,57,58,59];
  • Handwriting image recognition and other challenges connected to characters, lexicons, libraries [63,64];
  • Image recognition in natural scenes [50,51,52];
  • Sequence recognition methods [65];
  • Techniques to apply a mark on wire/cable, and discussion of their imperfections [66,67,68,69,72];
  • Manufacturing techniques for wire harnessing, including automatization [9,73,74,75].
All the abovementioned papers prove that character recognition is a well-developed research agenda, including in wire assembly technologies; however, most of the latter papers appear in trade journals, and the aspect directly connected to reading wire labeling is less common in the scientific literature. Therefore, this research gap is earnestly addressed by the current progress in the research.
Based on the literature review, certain devices that support control cabinet production and the wire assembly process are mentioned in the scientific literature and in trade journals as well. A few such devices are as follows: the insulation stripper and the wire separator for twisted wire pairs [1], the automatic crimping machine [2], the apparatus for preparing the wire end [4], the automatic crimping machine equipped with a twister device [2], the personal wiring assistant [6], the wire terminal as a wire-preparing device [7], and the employee-guiding touchscreen application ESW [8,9,10]. Nevertheless, the review of the literature shows a notable lack of consideration for the automatic reading of information printed on wires (wire markings). The authors of this paper addressed this challenge and present the recently developed device, the WLR. The WLR is an integral part of an industrial system for industrial control and electrical cabinet production. The hardware tests of this device raised some research questions worth answering. Research question 1 (RQ1): to what extent can the assembly process time be affected by using an automated WLR? Research question 2 (RQ2): what challenges does the WLR present, and what are the advantages of using it? Research question 3 (RQ3): to what extent does the use of the WLR device affect energy consumption? A specific research methodology was constructed around these research questions.

3. Materials and Methods

In order to find the answers to these research questions, the authors developed the following research methodology. To answer RQ1, the authors decided to determine the wire assembly process duration both by experiment and, additionally, by application of methods-time measurement (MTM); the results of the two approaches were then compared. The results of these computations allow RQ3 to be answered as well. In contrast, the answers to RQ2 were developed solely from the experience of specialists in the field covered by this article. The answers can be found in this paper's concluding section.
MTM is a predetermined motion time system (PMTS). MTM was released in 1948 and today exists in several variations, such as MTM-1 (first-generation PMTS), MTM-2 (second-generation PMTS), MTM-UAS (universal analyzing system), MTM-MEK, and MTM-SAM [76,77]. MTM is acclaimed as a standard for the design of human work, as it supports analyses of manual operations or tasks by assigning durations to the fundamental human motions, namely, reach, move, turn, grasp, position, disengage, and release [78]. MTM is applied in industry as a set of standardized times within which an employee should complete a given task. Typically, the MTM time unit is called the time measurement unit (TMU): 1 TMU is equal to 0.00001 h, 0.0006 min, or 0.036 s; nevertheless, some references use typically accepted time units without conversion to TMU (e.g., [79]). An interesting fact is that the authors of [80] undertook the development of MTM for measuring manufacturing and assembly processes automatically with the application of internet-of-things technology and an RFID antenna.
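To make the TMU arithmetic above concrete, the following minimal Python sketch converts an MTM motion breakdown into seconds; the motion names and TMU values are hypothetical illustrations, not measurements from this study.

```python
# A minimal sketch of the TMU-to-time conversion used in MTM analyses.
# The motion names and TMU values below are hypothetical examples.

TMU_IN_SECONDS = 0.036  # 1 TMU = 0.00001 h = 0.0006 min = 0.036 s

def tmu_to_seconds(tmu: float) -> float:
    """Convert a duration expressed in time measurement units to seconds."""
    return tmu * TMU_IN_SECONDS

# Hypothetical MTM breakdown of a single wire-handling task.
operations_tmu = {
    "reach": 12.9,
    "grasp": 2.0,
    "move": 14.2,
    "position": 21.8,
    "release": 2.0,
}

total_tmu = sum(operations_tmu.values())
print(f"Total: {total_tmu} TMU = {tmu_to_seconds(total_tmu):.3f} s")
```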
Meanwhile, the experiment is understood as “a study in which an intervention is deliberately introduced to observe its effects” [81] (p. 12), quoted in [82].
One of the aspects considered is research methodology and the other is the methodology applied in the WLR device. In this section, the authors focused on presenting the method for identifying wire markings and the device by which this method is implemented. Firstly, the device is presented, with the challenges of recognizing wire marking being mentioned and, in turn, leading to a detailed description of the method applied in the device.

3.1. The Device for Identifying Wire Markings

The method for identifying wire markings was developed to support the recognition of wire markings during the manual wiring process when assembling control cabinets. Nevertheless, it can also be applied to automated production environments and assembly systems. In view of the fact that the method is connected to visual recognition, it must ensure image recognition, especially the recognition of optical characters. It is worth mentioning here that images in digital image processing are typically represented as a two-dimensional or three-dimensional discrete signal [83]. An image can therefore be defined by the mathematical function f(x,y), where x and y are the two coordinates, one representing the horizontal dimension and the other the vertical dimension [83]. Certainly, a separate aspect is modeling and transforming these coordinates into a digital signal [84] (p. 11). The method is described in detail in Section 3.3.
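As a minimal illustration of the discrete representation f(x,y) mentioned above, the following NumPy sketch treats an image as a sampled function of two coordinates; the array shapes and values are illustrative assumptions.

```python
# A minimal sketch of the discrete image representation f(x, y); illustrative only.
import numpy as np

height, width = 4, 6
rgb_image = np.zeros((height, width, 3), dtype=np.uint8)  # 3D signal: one f(x, y) per RGB channel
gray_image = rgb_image.mean(axis=2)                       # 2D signal: a single intensity f(x, y)

y, x = 2, 3
print(gray_image[y, x])  # sampling f at coordinates (x, y); NumPy indexes as [row, column]
```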
As mentioned in the introductory section, a control cabinet results from applying the assembly process to an industrial enclosure. From the point of view of its logistics, this process consists of two main design steps: (1) the design of an electrical and wiring diagram that fulfills the specification and (2) the assembly of all the components together [9]. The assembly subprocess is divided into two subordinate steps, namely, (2a) attachment of components and (2b) wiring them according to the wiring diagram [85]. The details of this process were described in [9]; additionally, the WLR was mentioned there as an optional device.
The authors of this paper considered the process of industrial enclosure assembly, which results in a control cabinet. In this manner, the current paper is a continuation of the previous contribution of the authors, namely Szajna et al. [9,10]. The invention of the WLR is the focus of the current paper, together with the preparation and handling of wires. To briefly recall the whole control cabinet process flow, consisting of designing, preproduction, logistics, and production processes, the authors present it in Figure 1.
This paper focuses on two production steps, or subordinate processes, of the whole production process presented in Figure 1 and colored with an orange background: the preparation of wires and their assembly. The authors assume that the researched control cabinet production process is based on state-of-the-art wiring steps, meaning that it consists of the automatic preproduction of wires and the use of a wiring assembly software support system. Given that both subordinate processes are usually carried out manually, it is worth making efforts to achieve the most efficient and shortest possible preparation and assembly time for installing the components into the control cabinets; as stated in [38], the design of the wiring layout and the assembly operations are complicated and time-consuming. This premise was the guiding principle of the authors of the solution presented here.

3.1.1. Preparation of the Wires

The subordinate process of wire preproduction, with its three variants, is given in detail in Figure 2. The objects marked with a green background in Figure 2 specify the production steps that are of interest to this paper's authors; they are briefly described in the next paragraph.
The identification of wires requires the inclusion of a unique mark, or label, along each wire, specifying the connection between particular components of the control cabinet by linking the adequate end points of a wire. Different solutions have been created for printing such a label. Simply put, there are two ways of handling this task. The first assumes printing a label with a printer on a separate surface and then attaching it to the wire; usually, this is a registration sticker placed as a wrap-around marker, or rotating wire labels, flags, tags, sleeves, and print-on hook material, based on [86]. The disadvantage is that this is not a fully automated action. The second way of labeling assumes printing directly onto a wire's insulation, that is, the rubber or plastic vinyl cover [87]. The authors of this paper focused on the second way, considering the fact that it provides full automation of wire preproduction, which has the greatest influence on optimizing the overall process time.
The result of the fully automated preproduction of a unique wire, with exact length, marking, and end preparation with an end ferrule, is presented in Figure 3. Different companies design and deliver complete machines that perform the whole wire preproduction task and are thus able to choose the correct cross section and color from the wire roll, cut the wire to the appropriate length, strip it and crimp it if required, label it with a CAD-defined name, and, optionally, complete it as a bundle. Such prepared wires allow the operator to focus only on assembling them, as was presented in [9].
One of the important topics in recognizing characters printed on wire insulation is the size, quality, and type of font applied. The concept of universal design regarding printing on materials, especially related to font type and size, is based on particular guidelines. However, these are not specified in this paper, since they are not directly relevant to the problem under consideration; moreover, they may vary from one geographic area to another. Nevertheless, it is worth mentioning that the characters printed directly onto a wire (Figure 3) are usually written in a special font in order to be large and clear enough to be read by the operator, i.e., the wire-assembling employee. This can still be very challenging, especially when the wire cross section ranges from 0.5 mm² to 2.5 mm², considered the main range of wires used in a control cabinet [87]. The inscription is printed on a wire by a dot matrix printer [10]. Reading hundreds of such labels can be extremely challenging and cause significant visual strain on the operator. That is the second aspect of the paper, which leads to the optimization of overall process time and additional attention to the ergonomics of the working environment.

3.1.2. Wires Assembly

Wire assembly is the second subordinate process of the whole control cabinet production process presented in Figure 1. The traditional wire assembly subordinate process was based on paper documentation with an electric diagram and was conducted without any device or software support. This subordinate process is reminiscent of the manual order-picking process in logistics in that it consists of such operations as picking a wire, verifying the connection and the wire's characteristics in the technical documentation, and plugging the wire into the correct socket in an industrial enclosure, as mentioned in [9], which also presented a software tool for process support and indicated a potential further improvement: when a WLR is available, wires are marked with an alphanumeric label that identifies the connection between particular components of the built control cabinet. This improvement is the latest state of the art at a prototype level and is analyzed in this paper in comparison with the actual process (understood here as using the assembly support software system and typing the characters given on a label into a search window to find the allocation of a particular wire and its connection ends). Instead of manually reading labels and typing them into a search window, an automatic device for reading the marking on the wire is applied; after reading, it automatically opens the correct position on the support software list, with detailed information about the assembly of the particular wire. The wire assembly steps, with the usage of the reading prototype, are presented in Figure 4.
The automatic device applied to read wire markings, the WLR, resembles a black box and is presented in Figure 5, marked with the number (5). This figure presents the entire wiring production station, which, apart from the WLR, consists of (1) the industrial enclosure, which will form the control cabinet at the end of the process; (2) the assembly frame; (3) the ESW support system [88,89] (in [89], starting at 02:50 (mm:ss) of the cited video coverage); and (4) the wire holder, with preproduced wires (e.g., from the mentioned wire terminal machine).
The ESW system is used to present step-by-step wire routing. Application of this system consists of the following four steps [9]:
  • Select a wire to be assembled (typing the wire label or automatic reading by the reading prototype); the ESW displays the needed assembly information; the assembling employee considers the information and interprets the entire wiring 3D visualization in order to learn the appropriate routing;
  • Enlarge the visualization by zooming into the graphics in order to recognize the first connection point of the wire end;
  • Reduce the visualization in order to check the wiring route through the wire ducts;
  • Enlarge the visualization by zooming into the graphics in order to recognize the second connection point of the wire end.
With the aid of the ESW application, even less qualified employees or interns are able to carry out the production task.
It is worth mentioning that the WLR device is patent protected separately in Poland, PL000000421368 (Adaszyński et al. [90]), and as a European patent, EP000003460719 (Adaszyński et al. [91]). The patent applications were preceded by dedicated research in the following databases: Espacenet, Register Plus, USPTO PAIR, and Google Patents. A particularly active company in the studied area, mainly in the use of AR to monitor and manage industrial automation, including production lines, is the American enterprise Rockwell Automation Technologies, with a patent application package from 2015 and 2016, some of which became patents in 2019. Additionally of interest are submissions from Enertiv Inc., Honeywell, Tesla, and Wittur Holding GmbH and, interestingly, a bundle of five Siemens submissions, all derived from one original application, US2002046368, entitled “System for, and method of, situation-relevant assistance to interaction with the aid of augmented-reality technologies,” from 1999; all these patents expired in March 2019 [92]. It was necessary to analyze more deeply the Chinese application CN104820827, entitled “Method for recognizing punctiform characters on surfaces of cables,” which could have clashed with the potential proprietary reading prototype, but this turned out not to be the case [93].
The main interest from the point of view of this paper, in the case of the wiring station, is the WLR, which is a device designed for identifying and reading the wire marking, using the latest AI findings. Therefore, this device needs to be described in detail.
The device for identifying the wire markings comprises a camera, a microcomputer, a lighting system, and a display. The display is located on the upper wall of the device and equipped with a monitor inclined at an angle of 12° to 18° with respect to the horizontal plane of the device (the left side of Figure 6). Inside the device housing, certain additional equipment is located beyond the camera observation field (the right side of Figure 6). This equipment includes a microcomputer with a power supply, a signaling device, and a system to control peripheral devices such as the lighting set drive, the monitor, the signaling device, and sensors (including movement sensors providing energy-efficiency management when turning on the device). As can be observed in Figure 6, the wire to be identified is placed inside the device on a pad with the following predefined characteristics. In order to ensure the highest possible uniformity of the background, the pad has to be characterized by a bright, matt, smooth surface, preferably in the form of white foil (Figure 7). The white color provides the highest possible universal contrast in relation to any wire applied in control cabinets. Additionally, the matt characteristic eliminates reflections, and the smoothness of the surface eliminates texture effects and ensures uniformity of the background, increasing the effectiveness of the preparation and identification of printed labels. The lighting set in the device consists of LED lamps, whose placement proved crucial: preliminary tests showed that incorrect placement of the lighting set produced reflections causing white overexposure on some parts of the analyzed wire, in which case it was impossible to read or recognize the labeling. More challenges connected to image quality during the identification of wire markings are presented in the next subsection.
In its current form, the WLR device is powered from an external AC/DC power supply, which draws a maximum current of 0.5 A at 230 V from the power grid. On the DC (direct current) side, the power supply is capable of delivering a maximum of 2.5 A at 5.1 V, which corresponds to a maximum available power of 12.75 W. The current list of WLR components, with the power requirements for the current design, is as follows: the system baseboard (Raspberry Pi 3B+; in idle mode, the current consumption is 350 mA, corresponding to a power of 1.9 W, and under maximum load, the current draw is 980 mA, corresponding to 5.1 W); an LCD 7″ display (maximum current draw of 1 A at 5 V; power of 5 W); a camera (100 mA to 250 mA at 5 V; 0.5 W to 1.25 W); and an executive control system (60 mA at 5 V; about 0.3 W). The indicated parameters allowed the authors to determine the maximum power drawn from the power supply as 11.65 W.
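The power-budget arithmetic above can be reproduced with a short calculation; the sketch below simply sums the quoted per-component maxima and compares them with the supply capacity. The component labels are paraphrases of the list above.

```python
# A minimal sketch reproducing the WLR power-budget arithmetic quoted above;
# the per-component maximum power values are those given in the text.
components_max_power_w = {
    "Raspberry Pi 3B+ (max load)": 5.1,
    "LCD 7-inch display": 5.0,
    "camera (max)": 1.25,
    "executive control system": 0.3,
}

total_w = sum(components_max_power_w.values())
supply_capacity_w = 2.5 * 5.1  # DC side: 2.5 A at 5.1 V = 12.75 W

print(f"Component maximum: {total_w:.2f} W")                     # 11.65 W
print(f"Supply headroom:   {supply_capacity_w - total_w:.2f} W")  # 1.10 W
```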
The WLR device is dedicated to a specific production step in the manual wiring task in the control cabinet production process. The WLR device is currently in the prototype phase, fulfilling the following main requirements:
  • Recognition of the font, as readable and as clear as possible;
  • Recognition of all the printed characters, including those printed faultily;
  • Identification of wires of 1.5 mm2 cross section, covered with blue insulation and the marking printed in a white font;
  • Complex integration at the system level with the ESW wiring assembly support software system.
The methods applied for the purpose of these processes are based on advanced algorithms and AI technologies: DNNs and ML. Their application is described further in the paper. Nevertheless, in this section, it is worth mentioning that the authors conducted their research based on the development of the device and appropriate identification software, precisely applying recent methods in ML, feature learning, and deep learning [17], representing the mentioned breakthrough in AI [16,27].

3.2. Challenges of Wire Image Recognition

As previously mentioned, the WLR was developed in order to assure recognition of the font, as readable and clear as possible, and recognition of all the printed characters, including those printed faultily (Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12). These requirements are connected to automatic character recognition [94], also known as OCR, usually involving ML.
The aspects differentiating the area of interest of the current paper's authors from widely developed solutions for automatic character recognition are, first, the small sizes involved: wire cross sections range on average from 0.5 mm² to 2.5 mm² [10], with a prototype limitation of 1.5 mm². The small printable areas caused by the wire diameters necessitate small character matrices, for instance, 5 × 5 pixels, which significantly reduces the possibility of differentiating the appearance of printed characters. The other aspects are the curvature of the printing, the malformation of the wires, the bending and twirling of the wires during their manufacture and uncoiling from the roll, and marked printing defects. The curvature of the printing results in unfavorable phenomena connected with unequal light exposure, shadows, and reflections and in a decrease in the homogeneity of the view within single characters. Additionally, the bending and twirling of the wires results in the nonlinear positioning of the characters in a marking, which can be observed in Figure 8 to a limited extent, and very often in an incomplete view of all the characters from one point of observation. Moreover, the high-speed printing applied to wire marking in an industrial environment causes the deformation of signs in the form of the nonlinear transference of the character matrix and blurring of the ink (Figure 9), as well as a lack of some of the print points or printing failures, which, in the case of small matrices, cause huge differences in the pattern (Figure 10, details of which are presented in Figure 11 and Figure 12). All the aspects mentioned result in many challenges that do not appear in other spheres of automatic character recognition.
The process of both natural and artificial image recognition is composed of image acquisition; image processing, i.e., initial filtration, elimination of distortions, image compression, exposition of the primary features, etc.; image analysis, i.e., identification of the characteristic features of the image; and image recognition with its semantic interpretation [95] (p. 11). The standard visual system for image processing comprises the module for image introduction, the image display device, the permanent copy device, auxiliary storage, and the image processor [95] (p. 12). These aspects are described in detail in the next subsection, and a generic preprocessing sketch is given below.
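As a generic illustration of the acquisition, filtration, and feature-exposition stages listed above, the following OpenCV sketch shows one possible preprocessing chain; the file name and filter parameters are assumptions, and this is not the WLR's actual pipeline.

```python
# A minimal OpenCV sketch of generic image-processing stages:
# acquisition, initial filtration, and exposition of primary features.
import cv2

image = cv2.imread("wire.png")                      # image acquisition (hypothetical file)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)      # reduce to a single intensity channel
denoised = cv2.GaussianBlur(gray, (3, 3), 0)        # initial filtration
_, binary = cv2.threshold(denoised, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # expose primary features
cv2.imwrite("wire_preprocessed.png", binary)
```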

3.3. The Method for Identifying Wire Markings

The method applied for the purpose of reading wire labels in the control cabinet production process is based on advanced algorithms and AI technologies, namely, DNNs and ML. This method is presented in the current subsection.

3.3.1. The Reading Process for Wire Markings and the Method Applied

Automatic reading and identification of a wire's markings are characterized by high efficiency when optimal positioning of the wire mark in the camera observation field is ensured (specific to the prototype stage). Such circumstances are provided in the WLR by the wire positioning subsystem, which consists of a wire lead and a monitor. The wire lead's construction is profiled in a specific manner to ensure the possibility of double-sided insertion of a wire of up to 10 mm in diameter. The wire lead's axis is perpendicular to the camera observation axis and parallel to the panel. Before giving a detailed description of the method applied, it is worth describing the wire reading process.
The wire reading process starts with placing a particular wire into the wire lead in such a way that the wire marking is completely fixed in the wire lead and points directly towards the camera; the wire lead's axis must be perpendicular to the camera observation axis. When a wire is placed in the device, at least one sensor detects it, activates the signal denoting the wire's presence, and sends a signal via the peripheral devices control system to the microcomputer. As a consequence, a control signal is sent from the microcomputer to the camera, which immediately takes a picture of the wire under analysis and transmits a return signal to the microcomputer. This return signal contains a full image spectrum of the wire's marking. This spectrum is the subject of the application of the DCNN, one of the main methods applied in the solution described. In general, the microcomputer separates the image characteristics, analyzes them, and identifies the wire marking. In the first step of this identification, the first DCNN, trained to recognize empty spaces between characters not yet identified, signals their positions (Figure 13). Empty space identification starts with the microcomputer processing a wire's image and isolating a certain number of elementary component images of 32 by 32 pixels along the whole original image of the wire. These elementary images are treated as input signals for the neural network, which determines their consistency with empty spaces between the patterns of the characters. Consistency with an empty space is understood as the probability that an empty space occurs between a pair of consecutive characters. The signals for elementary images are analyzed from the left edge of the original image of the whole wire onward, with a computing step of one pixel. As a consequence of such analysis, the neural network trained to recognize empty spaces returns, at the output layer, a probability value close to one for the node representing an empty space between characters; when an empty space is not identified, the returned value is close to zero.
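A minimal sketch of this first step, assuming a Keras-style model object for the space-detection DCNN, could look as follows; the function name, the image-loading convention, and the assignment of output node 0 to "empty space" are illustrative assumptions.

```python
# A minimal sketch: slide a 32x32 window along the wire image with a
# one-pixel stride and query the space-detection DCNN at each position.
import numpy as np

def space_probabilities(wire_image: np.ndarray, space_net) -> np.ndarray:
    """wire_image: (32, W, 3) image strip; space_net: trained space-detection DCNN."""
    width = wire_image.shape[1]
    patches = np.stack([wire_image[:, x:x + 32, :]
                        for x in range(width - 32 + 1)])  # one-pixel computing step
    probs = space_net.predict(patches)                    # shape: (N, 2)
    return probs[:, 0]  # assumption: node 0 is the "empty space" output node
```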
When the empty spaces have been identified, the second identification step is run. This step concerns recognizing the characters in the isolated image of the wire using the second DCNN, adequately trained to recognize characters (Figure 14; the sample figure presents the signal in the case of a “zero” character occurrence). As with the first DCNN, this step also starts with the microcomputer processing a wire's image and isolating a certain number of elementary component images of 32 by 32 pixels along the original image of the whole wire. The subsequent operations in this procedure are comparable to the first step. As a result, if an elementary image is recognized as being consistent with the pattern of a particular character, a probability value close to one is returned at the output layer for the node representing this particular character, whereas values close to zero are returned for the other nodes, representing other characters. However, if an elementary image does not carry a signal that uniquely identifies a particular character, then values proportional to the adequacy of the signal with respect to each character are returned for more nodes in the output layer; the sum of the returned values across all output nodes for each component image is equal to one. When the analysis of all elementary images has been completed, the result signals are determined; they specify the probability of the occurrence of particular characters at each position along the analyzed wire, with positions computed in pixels (Figure 15). These signals are determined by multiplying the signals designated for particular output nodes of the character-recognition neural network by the signals, supplemented to one, designated for the same position by the node specifying an empty space in the empty-space-recognition network.
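The multiplication described above can be sketched as follows; the array shapes and names are assumptions for illustration, with the "supplemented to one" space signal expressed as (1 - space probability) at each position.

```python
# A minimal sketch of combining the two networks' outputs per window position.
import numpy as np

def combined_character_signal(char_probs: np.ndarray,
                              space_probs: np.ndarray) -> np.ndarray:
    """char_probs: (N, num_chars) from the character DCNN, per window position;
    space_probs: (N,) from the space-detection DCNN at the same positions."""
    # "Supplemented to one": multiply by the complement of the space probability.
    return char_probs * (1.0 - space_probs)[:, np.newaxis]
```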
Consequently, in the third step, the signals of empty spaces and of particular characters generated in the two previous steps are used to identify the compiled signal of a wire marking. The microcomputer uses signals stored in its operating memory that represent the set of markings permissible for a given group of wires, treated as reference signals, and analyzes the consistency of the signal sequence from the two previous steps with these reference signals. For this analysis, the unit uses the maximum likelihood method, i.e., the maximum likelihood estimator [96]; the analysis of consistency enables the correction of identified markings in the case of possibly misidentified particular characters in a wire’s marking, i.e., misinterpreted signals for specific positions generated in the previous steps. Certainly, the system is prepared to react to certain imperfections; therefore, should the identification of a wire marking fail, the microcomputer sends an additional control signal to the camera, which triggers another picture of the wire, and the full process described above is repeated. On the other hand, when the identification of a wire marking is successful, an appropriate identification signal is sent by the microcomputer to the signaling device via the peripheral devices control system. Consequently, the device emits a signal to denote the end of the wire marking identification process.
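A hedged sketch of this maximum likelihood matching is shown below; the alignment of the reference characters to pixel positions is simplified here (the character positions are assumed to be known from the empty-space step), since the device’s exact alignment procedure is not detailed in this paper:

```python
import numpy as np

# Illustrative alphabet; in the device, the character set is a
# configurable parameter of the recognition network's output layer.
ALPHABET = {ch: i for i, ch in
            enumerate("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ.:=- ")}

def most_likely_marking(char_signals, references, positions):
    """Return the permissible reference marking that maximizes the
    likelihood of the compiled character signals.

    char_signals -- array (P, C) of compiled per-position probabilities;
    references   -- permissible marking strings for the wire group;
    positions    -- pixel positions of consecutive character fields,
                    assumed here to be known from the empty-space step.
    """
    best, best_loglik = None, -np.inf
    for marking in references:
        if len(marking) != len(positions):
            continue  # this reference cannot match the detected layout
        # Log-likelihood = log of the product of per-position probabilities;
        # a small epsilon guards against log(0) for misread characters.
        loglik = sum(np.log(char_signals[pos, ALPHABET[ch]] + 1e-12)
                     for pos, ch in zip(positions, marking))
        if loglik > best_loglik:
            best, best_loglik = marking, loglik
    return best
```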
As mentioned in the above paragraph, the identification method is based on the use of DCNNs: one is applied in order to recognize the empty spaces between the characters, while the other is used in order to recognize characters occurring in the wire’s markings.
It is worth mentioning the straightforward structure shared by the DCNNs applied, which is as follows (a minimal code sketch of this architecture is given directly after the list):
  • An input layer noted as (32 × 32 × 3), which stands for the size of an elementary image of 32 by 32 pixels, namely, (image height) × (image width), and the three fundamental colors (the RGB image processing scale is applied);
  • The first convolutional layer, equipped with 32 filters of 3 × 3 pixels input size;
  • The first maximum pooling layer, sized 2 × 2 pixels;
  • The second convolutional layer, equipped with 32 filters of 3 × 3 pixels input size;
  • The second maximum pooling layer, sized 2 × 2 pixels;
  • A dense layer of 128 nodes;
  • An output layer, which differs between the two DCNNs: in the network allocated to searching for the empty spaces between characters, it consists of two nodes, namely, one for an empty space between the characters and one for a particular character, whereas in the network allocated to the recognition of characters, it consists of a number of nodes equal to the number of characters used for wire markings, which is a configurable parameter;
  • All activation functions in both DCNNs are of the rectified linear unit (ReLU) type.
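The sketch below renders this shared architecture in Keras-style Python; the framework is an assumption (the authors do not name the library used), and the softmax output activation is likewise assumed, since the output values are described as probabilities summing to one, while the hidden layers use ReLU as stated above:

```python
from tensorflow.keras import layers, models

def build_wlr_dcnn(num_output_nodes):
    """Build the straightforward DCNN structure listed above.

    num_output_nodes -- 2 for the empty-space network, or the number of
    characters used for wire markings for the recognition network.
    """
    return models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu",
                      input_shape=(32, 32, 3)),       # 32 x 32 RGB elementary image
        layers.MaxPooling2D((2, 2)),                  # first maximum pooling layer
        layers.Conv2D(32, (3, 3), activation="relu"), # second convolutional layer
        layers.MaxPooling2D((2, 2)),                  # second maximum pooling layer
        layers.Flatten(),
        layers.Dense(128, activation="relu"),         # dense layer of 128 nodes
        layers.Dense(num_output_nodes, activation="softmax"),
    ])
```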
The DCNNs of the WLR device need “training” before the device is actually put into service. This “training” process is implemented outside the device, on a computer whose main feature is high processing power; it is supervised by a person in attendance and consists of three fundamental phases as follows:
  • Human description of the markings on the training images;
  • Training the neural network in order to recognize empty spaces between the characters;
  • Training a neural network in order to recognize particular characters.
The training process of both networks utilizes the stochastic optimization algorithm called “adaptive moment estimation” (ADAM) [97].
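Continuing the architecture sketch given above, the training configuration could look as follows; only the ADAM optimizer is taken from the paper, while the loss function and the hyperparameters are illustrative assumptions:

```python
# Training configuration sketch; uses build_wlr_dcnn() from the sketch above.
space_net = build_wlr_dcnn(num_output_nodes=2)
space_net.compile(optimizer="adam",             # adaptive moment estimation
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
# x_train: (N, 32, 32, 3) elementary images; y_train: one-hot labels.
# space_net.fit(x_train, y_train, epochs=20, batch_size=64)
```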
During the training process, each character is described by its code and its position, counted in pixels from the left edge of the original image up to the beginning of the character field relating to that character; the size of the character field is 32 × 32 pixels. The isolated characters are shown on the wire marking image in Figure 16.
This process is carried out with a person in attendance, who enters the information about the positions and codes of the depicted characters for a given image. In the example illustrated in Figure 16, the image is associated with the following string of description pairs, where the first element in each pair of brackets is the position in pixels from the left edge of the original image and the second, after the comma, is the particular character: (190, =), (222, 0), (254, 0), (284, 2), (316, .), (347, S), (380, P), (412, 0), (443, 1), (475, –), (508, S), (540, F), (573, 1), (605, :), (636, 2), (669, 1), (701, empty space), (732, T), (764, L). Such input ensures that, during the training process of the neural networks, the particular characters, as well as the empty spaces between the characters, are recognized, i.e., that fragments of the photographed wire with no characters printed on the insulation are identified.
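A sketch of turning such description pairs into labeled 32 × 32 training crops might look as follows; the strip height of 32 pixels and the helper itself are simplifying assumptions for illustration (a plain hyphen and a space stand in for the dash and the empty space):

```python
import cv2  # OpenCV, the library also used for the synthetic data generator

# Description pairs for the image in Figure 16: (pixel offset, character).
ANNOTATION = [(190, "="), (222, "0"), (254, "0"), (284, "2"), (316, "."),
              (347, "S"), (380, "P"), (412, "0"), (443, "1"), (475, "-"),
              (508, "S"), (540, "F"), (573, "1"), (605, ":"), (636, "2"),
              (669, "1"), (701, " "), (732, "T"), (764, "L")]

def extract_character_fields(image_path, annotation):
    """Cut out the 32 x 32 character fields named by the annotation;
    the wire strip is assumed to be 32 pixels high."""
    image = cv2.imread(image_path)
    return [(image[:, x:x + 32, :], char) for x, char in annotation]
```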
The optimization of the parameters in particular layers of the DCNN used to recognize the spacing takes place by reading the original images of particular empty spaces and applying the ADAM algorithm. As can be seen in Figure 16, the appearance of the spacing between the characters, as well as of the characters themselves, varies considerably, mainly due to the variability of the characters appearing on the left and right sides of an empty space; the three examples of a “zero” character in Figure 16 illustrate this point. An additional factor of specimen variability is the quality and deformation of the print, as mentioned in Section 3.2. In the training process, the neural network parameters are optimized to ensure a generalization of the resulting spacing pattern between the characters in such a way that it represents the common features while flexibly modeling the differences between particular real images of the gaps between characters.
The neural network’s training can also be based on synthetically generated images [33]. An example of such an image is given in Figure 17.
Since there were not enough real training data, the authors of the device decided to generate a large number of images synthetically. These images were highly randomized in the following aspects: backgrounds and gradients, the positioning and orientation of a set of characters, wire lines, the fonts applied, and the particular wire marking “printed” on an artificial wire. All the synthetic images were generated with a custom program written in Python 3.6 using OpenCV 3.2.0. The training data, including both real-life and synthetic images, were augmented, before being fed to the neural networks, with “flips”, rotations of up to 25°, and shearing of up to 15°.
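The augmentation ranges quoted above could be reproduced, for instance, with Keras’ ImageDataGenerator; this tooling choice is an assumption, as the paper does not name the augmentation library used:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# The ranges match the text; the generator itself is an assumed tool.
augmenter = ImageDataGenerator(
    rotation_range=25,     # random rotations of up to 25 degrees
    shear_range=15,        # random shearing of up to 15 degrees
    horizontal_flip=True,  # the "flips"
)
# Typical use: batches = augmenter.flow(x_train, y_train, batch_size=64)
```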
When the training process is finalized, the result is converted into executable code for the device’s microcomputer. This code implements the automatic identification process of the wire’s markings.

4. Results of Neural Network Training; Application of the Prototype Device in the Wire Assembly Process and Users’ Opinions

A key activity in developing a product is evaluating its usability [98,99]; therefore, it was decided to describe analyses of three different aspects connected to the described solution. The first aspect is connected to the training of the neural networks, which is a significant part of the method and device described in the previous section. Secondly, the authors decided to analyze the duration of the operations and logistics processes that are significant from the point of view of applying the WLR in the wiring production process. Lastly, the post-test surveys are described in order to present the users’ opinions and levels of satisfaction.

4.1. Results of Neural Network Training

The training of the DCNNs was described from the methodological point of view in Section 3.3.1. In the current subsection, the authors describe the actual results obtained during the training. As previously mentioned, the training was conducted based on real-life photographs and synthetically developed images [33] of the markings/labels of the wires. In the training process, the neural network parameters were optimized. This optimization ensured a generalization of the pattern of the empty spaces between the characters in such a way that it represents the common features while flexibly modeling the differences between particular real images of the gaps between the characters.
For the purpose of the DCNN training phase, 10,000 examples in total were used. This quantity consisted of 1500 real-life photos, with the rest being generated synthetically. The real-life photographs represented 118 different wires, which were shot from different angles, in differentiated configurations, and from various perspectives; a minimum of 10 photos of each wire was included in the training phase. The synthetically obtained images of wires totaled 8500, representing 850 synthetic wires that were differentiated in similar aspects as the real-life objects, with their configurations and perspectives presented from different angles. The accuracy of recognition obtained was equal to 99.7%. This constitutes a considerable improvement over the algorithms applied earlier: the authors noted that the first version of the system, which used the k-nearest neighbors algorithm instead of DCNNs, reached a recognition accuracy of 85.3%.

4.2. Tests of the Duration of the Assembly Process for Control Cabinets

In the current research, the authors analyzed two subordinate processes of the complex process of the assembly of the control cabinets, namely, (1) the preparation of the wires and (2) the assembly of the wires itself (it should be noted that these tests and the presented computations relate to the process of assembling one wire, which the authors call the “assembly process”; however, it should be borne in mind that the full assembly process consists of around 300 wires, on average, per one complete control cabinet).
From the point of view of the process duration, the second subordinate process, the assembly itself, is of more interest to the authors. It was decided to analyze it from three different points of view. Firstly, the duration of the operations forming the assembly subordinate process with the use of preproduced wires of predefined lengths (e.g., 30/50/70/100 cm), as shown in the middle option presented in Figure 2, was measured based on the MTM method, with the assumption of not using the WLR. Secondly, the duration of the operations with the use of unique wires with all parameters ready (color, length, end preparation, label), as shown in the bottom option presented in Figure 2, was measured with a stopwatch and with the MTM method, again assuming the WLR was not used. Thirdly, the durations were measured with the WLR device applied throughout the system, and the results were obtained both with a stopwatch and with MTM. All the results are given in Table 1.
A discussion of how the values were obtained (especially in the case of MTM usage), of the validation of the methods, and a comparison of the obtained results are presented below.
Table 1 covers the assembly process without the WLR, in two variants (A/B), and the assembly with the WLR. The A-type process differs from the B-type in the length of the assembled wires. In the A-type process, the wires have predefined lengths (e.g., 30/50/70/100 cm) and predefined end preparation (e.g., a ferrule on one end and stripped insulation on the other), whereas in the B-type process, the wires are unique and have all parameters ready for a certain, single connection (color, length, end preparation, label). In the case of assembly with the WLR, the wires are also of the unique type. In Table 1, each of these assembly process types is defined by its fundamental operations. In the experiment-based process, these operations were applied directly as written. Meanwhile, when MTM was applied, the operations had to be divided into basic activities, with attention paid to their assignment to the records standardized in the MTM method. The types of basic activities coupled with each of the fundamental operations are therefore described below. Every time a particular basic activity is mentioned, the duration assigned to it, or the method of calculating it, is given, along with the reference from which the particular duration was taken.
Firstly, the fundamental operations of the A-type process, without the WLR, are listed (the basic activities, their durations, and a commentary, where necessary, are given after each colon; a short sketch of the underlying TMU-to-minutes conversion follows the list):
  • Look at an industrial enclosure: a glance, which takes 0.00600 min according to [79];
  • Click the ESW assembly support system: a click, which takes 5 TMU = 0.00300 min according to [100];
  • The ESW system opening the extended information with the 3D graphics of a selected wire on a wiring list: the value given in Table 1 was assumed;
  • Look at the displayed wire length: a look at a single word takes 5.05 TMU according to [78], whereby one word is assumed to correspond to three numerical characters printed on a wire (the numbers of characters are borrowed from the B-type process: 3/3 + 13/3 + 20/3 + 28/3 = 21.3 words); additionally, finger swiping along the wire while reading the words is included, as a convenience often used by employees reading a wire marking (2 TMU per word according to [78]);
  • Reach to the holder on the left, for a wire: a look at the holder takes 0.00600 min according to [79], a reach of a distance of 30 cm to the holder takes 10.8 TMU according to [78], and a move between wire strands takes 5.6 TMU according to [78];
  • Detach a wire with the closest predefined length from the holder: a look at the holder takes 0.00600 min according to [79], a straight grip of a wire takes 10.8 TMU according to [78], a reach of a distance of 30 cm (a horizontal move of the hand to choose a wire) to the holder takes 16.7 TMU according to [78], a reach of a distance of 50 cm above the holder with a wire in hand to get the wire out takes 10.8 TMU according to [78], and a move between wire strands takes 5.6 TMU according to [78];
  • Adjust the wire’s end preparation if needed (the predefined ends may not fit): a look at a wire takes 0.00600 min according to [79], a grasp of a wire takes 5.6 TMU according to [78], and a move of a wire takes 12.9 TMU according to [79];
  • Look at the ESW assembly support system to identify the wire routing: reading a route by eye tracking at the two wire ends and its middle (one such basic activity takes 0.118000 min according to [79]);
  • Mount the wire: a look at an industrial enclosure takes 0.00600 min according to [79], a move of a wire at a distance of 30 cm takes 25.8 TMU according to [78], a move of a wire takes 12.9 TMU according to [78], the installation of one wire’s end (releasing of a grasp takes 0.00300 min according to [79]), followed by the next look at an industrial enclosure, the next move of the wire, and the installation of the second wire’s end;
  • Hide the wire surplus in a wiring duct: precise allocation of a wire into the wiring duct (six actions were assumed: insertion at the two wire ends and twice in the middle, including two corrections between the middle and an end; each action takes 0.015000 min according to [79]), two moves of a wire between its middle and an end, which take 15.2 TMU according to [78], and four plugs of the wire into the wiring duct (at the two wire ends and twice in the middle), which take 5.6 TMU each according to [78].
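The MTM values above rely on the standard conversion 1 TMU = 0.0006 min (0.036 s), which the quoted figure of 5 TMU = 0.00300 min follows. A minimal sketch of the conversion and of summing the basic activities of one fundamental operation:

```python
def tmu_to_min(tmu):
    """Convert MTM time measurement units (TMU) to minutes."""
    return tmu * 0.0006  # 1 TMU = 0.0006 min = 0.036 s

assert abs(tmu_to_min(5) - 0.00300) < 1e-12  # the "click" figure above

# "Reach to the holder on the left, for a wire" (A-type list above):
# a look (0.00600 min), a 30 cm reach (10.8 TMU), a move between strands (5.6 TMU).
reach_to_holder = 0.00600 + tmu_to_min(10.8) + tmu_to_min(5.6)
print(f"{reach_to_holder:.5f} min")  # 0.01584 min
```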
Secondly, the fundamental operations of the B-type process, without the WLR, are listed (the basic activities, their durations, and a commentary, where necessary, are given after each colon):
  • Look at the industrial enclosure: a glance takes 0.00600 min according to [79];
  • Reach to the holder on the left, for a wire: a look at the holder takes 0.00600 min according to [79], and a reach of a distance of 30 cm to the holder takes 10.8 TMU according to [78];
  • Detach the wire from the holder: a look at the holder takes 0.00600 min according to [79], a straight grip of a wire takes 10.8 TMU according to [78], a reach of a distance of 50 cm above the holder with a wire in hand to get the wire out takes 10.8 TMU according to [78], and a move between wire strands takes 5.6 TMU according to [78];
  • Grab the wire with both hands: a grip of a wire takes 10.8 TMU according to [78];
  • Rotate the wire so that the label is visible: a rotation of a wire takes 10.8 TMU according to [78];
  • Type the label into the search window of the ESW assembly support system: a glance at a wire takes 0.00600 min according to [79], a glance at a keyboard takes 0.00600 min according to [80], a glance at the ESW system’s monitor takes 0.00600 min according to [79], and a click takes 5 TMU according to [100]; additionally, finger swiping along the wire while reading the characters is included, as a convenience often used by employees reading a wire marking (2 TMU per three characters according to [79]);
  • Click the result so that the extended information with the 3D graphics opens on the wiring list: a click takes 5 TMU according to [100];
  • Look at the ESW assembly support system to identify the wire routing: a look at a single notation takes 5.05 TMU according to [78];
  • Mount the wire: a look at an industrial enclosure takes 0.00600 min according to [79] and a move of a wire at a distance of 30 cm takes 25.8 TMU according to [79], followed by the installation of one wire’s end (releasing of a grasp takes 0.00300 min according to [79]), the next look at an industrial enclosure, the next move of the wire, and the installation of the second wire’s end.
Consequently, the fundamental operations of the assembly process with the use of the WLR are listed below (the basic activities, their durations, and a commentary, where necessary, are given after each colon):
  • Look at an industrial enclosure: a glance takes 0.00600 min according to [79];
  • Reach to the holder on the left, for a wire: a look at the holder takes 0.00600 min according to [79], a reach of a distance of 30 cm to the holder takes 10.8 TMU according to [78];
  • Detach a wire from the holder: a look at the holder takes 0.00600 min according to [79]; a straight grip of a wire takes 10.8 TMU according to [78]; a reach of a distance of 50 cm above the holder with a wire in hand to get out a wire takes 10.8 TMU according to [78]; a move between wire strands takes 5.6 TMU according to [78];
  • Grab a wire with both hands: a grip of a wire takes 10.8 TMU according to [78];
  • Rotate the wire so that the label is visible: a rotation of a wire takes 10.8 TMU according to [78];
  • Move a wire with both hands towards the WLR: precise, symmetrical positioning, which is assumed as 10.8 TMU according to [80];
  • Place a wire on the white background of the WLR: a move of a wire at a distance of 5 cm takes 5.2 TMU according to [78];
  • Wait for the acoustic signal that the picture for recognition was taken: this is assumed to be one second;
  • The ESW assembly support system opens the extended information with the 3D graphics of the matching wire on the wiring list (in the meantime, while the ESW determines the match, the hand with the wire is moved away from the reader): one millisecond is assumed for the extended information programming operations; the hand movement is irrelevant from the viewpoint of the current operation;
  • Look at the ESW assembly support system to identify the wire routing: a look at a single notation takes 5.05 TMU according to [78];
  • Mount the wire (no wire surplus): a look at an industrial enclosure takes 0.00600 min according to [79], a move of a wire at a distance of 30 cm takes 25.8 TMU according to [78], and the installation of one wire’s end (releasing of a grasp takes 0.00300 min according to [79]).
As far as the experiment-based results are concerned, the authors obtained the following assembly process times. When the WLR was not applied, wires with various numbers of characters printed on their labels were taken into consideration, namely, 3, 13, 20, and 28 characters per wire (these four lengths were selected to form a representative group; for better recognition, they are marked in gray in Table 1, with additional gray marking the time differences in the mid-section of this table). Each time, three or four experiments were conducted for a particular label length. For the 28-character-long label, the values were equal to 38, 43, and 48 s, which were averaged and converted into minutes in Table 1 as the value of 0.71667 min. For the 20-character-long label, the values were equal to 32, 33, and 33 s, which were averaged and converted into minutes in Table 1 as the value of 0.54444 min. For the 13-character-long label, which is treated as a medium-length set of characters, the values were equal to 22, 24, and 23 s, which were averaged and converted into minutes in Table 1 as the value of 0.38333 min, whereas for the 3-character-long label, which is treated as a short set of characters, the values were equal to 12, 10, 9, and 9 s, which were averaged and converted into minutes in Table 1 as the value of 0.17222 min.
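The averaging and unit conversion can be reproduced as follows (shown here for the 13-, 20-, and 28-character measurement sets quoted above):

```python
measurements_s = {28: [38, 43, 48], 20: [32, 33, 33], 13: [22, 24, 23]}

for n_chars, times in sorted(measurements_s.items(), reverse=True):
    mean_min = sum(times) / len(times) / 60  # mean in seconds -> minutes
    print(f"{n_chars} characters: {mean_min:.5f} min")
# 28 characters: 0.71667 min
# 20 characters: 0.54444 min
# 13 characters: 0.38333 min
```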
As mentioned above, the fundamental operations and their durations for all three assembly process variants are given in Table 1.
It turned out that the result for the A-type process is nearly the same as the one for the B-type process with 28 characters (both around 0.705 min). All the other results for the B-type demand less assembly time. This would mean that it is worth using the unique wires instead of the predefined ones. Nevertheless, these variants of the assembly process are not the core ones.
The fundamental operations of the B-type process were also measured experimentally. The experiment consisted of the same basic activities as in the case of the MTM application. Therefore, the comparison of the MTM-based and experiment-based results is treated as a validation of the MTM-method application. The absolute differences between the values obtained with the MTM-based method and the experiment-based method are equal to 1.000473325, 0.020724881, 0.000669117, and 0.015521093 s (given in the sequence of the numbers of characters in the wire label markings, as in Table 1), which makes them insignificant. These differences can also be observed in Figure 18. When the number of characters increased, the difference between the MTM-based and experiment-based process times increased as well. However, this is not alarming from the point of view of actual process realization, since the difference is approximately one second. According to a standalone computation, a difference of more than two seconds between the MTM-based trend and the experiment-based trend would only appear at about 190 characters, and such a number of characters is not applied in wire markings. Therefore, the authors claimed that the validation is satisfactory. Such validation is significant when the assembly process is enriched with the application of the WLR device.
As the validation results are satisfactory, it is possible to compare the B-type process, as the more representative one, with the assembly process in which the WLR is applied. As given in Table 1, the time of the assembly process with the use of the WLR is equal to 0.12866 min, based on the MTM computation. Meanwhile, the experiment-based results were measured in the range between 0.10000 and 0.15000 min (Figure 19). The experiment is thus inherently enriched by the randomized characteristics of the particular wires in the process and correctly ranges around the value obtained using the MTM standards. Furthermore, and this is a particularly valuable finding, the more characters printed on the wire to read, the greater the difference between the process time without the WLR and the process time with this device. The reduction in process duration with the use of the WLR ranges from 25% to 82%.
The obtained process times can also be considered in the context of the energy use of the WLR device. The power consumption of the WLR is 0.012 (kW) after rounding up the values (the power analysis is presented in Section 3.1.2). The authors assumed a cost of energy consumption equal to 0.2134 (EUR/kWh) (the average value for the EU, based on [101]). To present the energy consumption of the device, it is worth recalling that the full assembly process consists of around 300 wires, on average, per one complete control cabinet. Therefore, unlike the one-wire assembly considered in Table 1, the full control cabinet assembly time is given in Table 2. Consequently, the energy consumption of the WLR device and its cost are calculated for the full control cabinet as well. It should be emphasized at this point that, with such a very low power consumption of the WLR device (0.00900 (kWh) in the case of the experiment-based results), the abovementioned time benefits are significant. As the energy consumption may decrease even further (0.00772 (kWh) in the case of the MTM-based results), it is worth mentioning future approaches to optimizing this consumption.
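The per-cabinet energy figure can be reproduced as follows, using the MTM-based per-wire time from Table 1; the resulting cost per cabinet is simply the authors’ inputs combined here:

```python
POWER_KW = 0.012            # rounded WLR power draw (Section 3.1.2)
PRICE_EUR_PER_KWH = 0.2134  # EU average energy price [101]
WIRES_PER_CABINET = 300

cabinet_time_h = WIRES_PER_CABINET * 0.12866 / 60  # MTM-based, Table 1
energy_kwh = POWER_KW * cabinet_time_h
print(f"{energy_kwh:.5f} kWh")                      # 0.00772 kWh per cabinet
print(f"{energy_kwh * PRICE_EUR_PER_KWH:.5f} EUR")  # ~0.00165 EUR per cabinet
```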
This optimization can primarily concern components such as the display or the system baseboard. In the case of the display, this could involve, for example, choosing a device based on organic light-emitting diode (OLED) or active-matrix organic light-emitting diode (AMOLED) technology, or choosing an e-ink display, in which the maximum power required during a refresh is approximately 0.6 to 1.2 (W) (depending on the display model). Even removing the display from the system can be considered among the potential actions, due to the fact that the WLR device communicates with the ESW software, and therefore the image from the device’s camera (the so-called control view) can also be accessed via the Ethernet interface. Display removal is certain in the case of the future 360° version, described below.
In the case of the baseboard system, a newer Raspberry Pi 4 board or a board based on the NXP® i.MX8 processor can be applied instead of the Raspberry Pi 3B+. In the first case, higher computing performance can be achieved at the same energy consumption, while in the second case, i.e., the NXP® i.MX8, computing performance similar to that of the Raspberry Pi 4 would be available at similar energy consumption values.
It is worth mentioning that the proposed solution takes into account the latest technical innovations. Currently, NXP® is already working on a newer generation of i.MX9 processors, which are expected to be more energy optimized than the NXP i.MX8 series ([102], as of March 2021).

4.3. Simplified Post-Test Surveys on the Use of the WLR in the Actual Wiring Assembly Production Process

One of the prototypes of the WLR is held in the Rittal Innovation Center, which is dedicated to end-users who visit this place to test state-of-the-art market solutions and who can also study future prototypes. In April 2017, tests were performed, with some of the users taking part in a survey. The survey was simplified to a couple of short questions; one of them is significant from the point of view of the current research, namely, “What do you think the accuracy level of the WLR was? Please specify as a percentage”. Five of the answers given were of special interest to the research team, as follows:
  • Answer of person 1 (shop-floor assembly employee): 90%—apart from the quantitative value given, this person also described their point of view qualitatively, by mentioning, “[it] works very well but it sometimes happened that the recognition was wrong”;
  • Answer of person 2 (shop-floor manager): 95%—adding, “[it] works great; only once was it wrong”;
  • Answer of person 3 (software developer): 85%—adding, “The problem is that when the character sequences are similar to each other, but differ by just 1 character, then in such cases, unfortunately, it often fails. Such similar sequence happens not often though”;
  • Answer of person 4 (shop-floor assembly employee): 95%—adding, “Nearly always indicates the correct wire”;
  • Answer of person 5 (logistics employee): 100%—adding, “There were no inaccurate recognitions”.

5. Conclusions

The authors of this paper presented the recently developed device, the WLR, which enriches the processes included in control cabinet assembly. This paper’s main aim was to present the developed solution and the evaluation of its operation. The authors stated three research questions and set out to answer them.
The first question to be answered was RQ1, as follows: “to what extent can the assembly process time be affected by using an automated WLR?” When a user of any device or system is not assisted by a convenient support tool, the prolonged search for information can lead to frustration [103]. Based on the computing results presented in Table 1, and especially in relation to the simplified post-test survey presented at the end of the previous section, it can be claimed that the application of the WLR in the assembly process is considered very promising by the industry community. When users had to type a wire label into the ESW search window, they seemed tense and pressured. The WLR enabled the automation of this manual search action. In the prototype version of the whole assembly system, users seemed to be more relaxed and satisfied with their work; they knew what to do and how to do it, as the work was well and efficiently organized. This was influenced by the user-friendly device, which improves the process through automatically entered data and gives significant time savings in searching for and obtaining information. The use of the WLR ensures a process time reduction of 25% to 82%, depending on the number of characters to be read.
Moreover, it is worth mentioning that the algorithm of the prototype had not yet been optimized during the analysis period, and some experts assume that the operating time can be reduced to less than 3 or 4 s, in comparison to the 6 to 8 s mentioned in Table 1, which were the experiment-based results.
The RQ2 was as follows: “what challenges does the WLR present, and what are the advantages of using it?”
The significant time reduction of the mentioned 25% to 82% is the most remarkable advantage, as it dramatically automates and speeds up the process. It is worth mentioning that it was a nondestructive evaluation. The device also reduces eye fatigue and general operator tiredness in terms of this single reading operation. Integration at the system level with the ESW wiring assembly support software system is another benefit; it has been tested to work as simply as plug and play.
The RQ3 was as follows: “to what extent does the use of the WLR device affect energy consumption?” To answer this question, the basics of the energy consumption of the WLR device should be recalled. The WLR device’s prototype is characterized by very low energy consumption per control cabinet (0.00900 (kWh) in the case of the experiment-based results and 0.00772 (kWh) in the case of the MTM-based results). As the energy consumption may decrease even further, it is worth mentioning the future approaches to optimizing this consumption that were suggested in Section 4.2.
When the energy consumption was considered, the particular costs were elaborated. It is worth mentioning additional information on the financial aspects of the presented solution. The economic aspect is a very important part of any Industry 4.0 reasoning, which is why an executive summary has been created. The total cost of the prototype stage of the WLR device was around EUR 40,000. To reach the final stage of a complete, fast, tested, and certified device, covering the recognition of different wires and different fonts, the R&D investment has been estimated at EUR 650,000 (over an 18-month period). The main cost items in both stages are R&D work in software development (the operating and synchronization system, the labeled dataset merged with DCNN training, and the recognition algorithm) and hardware work involving the conception of an original, dedicated printed circuit board (PCB) and a specific LED (light-emitting diode) lighting system, the integration of the electronics, and the creation of a distinct component setup, together with manufacturing. Adding the two mentioned amounts, EUR 690,000 has been used for the first-phase return on investment (ROI) calculation. The authors conducted initial research consisting of interviews and surveys in various industrial companies in the German-speaking countries, namely, DACH (D—Germany, A—Austria, CH—Switzerland). It can be forecasted that at least 100 panel builders would buy the WLR in the first year of sales. The second outcome of this research concerns the business model and the ROI calculation. Since the WLR works in the plug-and-play mode with the ESW software, a subscription model could be bundled with the ESW under the ePulse cloud platform, making it extremely convenient for the user. The payment could be fixed at a monthly amount of EUR 325, or EUR 3500 yearly. The benefit of the saved time exceeds the WLR cost, not to mention the long-term benefits, such as the benefit of reducing the operator’s eye fatigue, which is hard to notice and count but extremely significant. The subscription model additionally provides a constant guarantee and automatic upgrades to the newest software version with the newest AI recognition database.
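A hedged sketch of this first-phase ROI arithmetic follows; the payback period is a combination of the figures quoted above, not a value stated by the authors:

```python
total_investment_eur = 40_000 + 650_000  # prototype stage + estimated R&D
yearly_fee_eur = 3_500                   # subscription per panel builder
forecast_buyers_year1 = 100              # forecast from the DACH research

yearly_revenue_eur = forecast_buyers_year1 * yearly_fee_eur
payback_years = total_investment_eur / yearly_revenue_eur
print(f"{payback_years:.2f} years")      # ~1.97 years at constant sales
```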
The new device does not cause any additional complexity at the work preparation stage. The automatic machines for the preproduction of exact wires are already broadly used, since they provide huge support in this aspect of the control cabinet production process.
The presented solution is characterized by some limitations as well. Some of them were described as challenges of wire image recognition and failures of recognition (Section 3.2). It was reported that the accuracy of the obtained recognition of the markings printed on the wires was equal to 99.7%. Extremely rarely, in 0.3% of cases, the WLR device was not able to recognize a wire’s label. Assuming that a regular control cabinet consists of 300 wires, this means that only about 1 wire out of this pool would not be recognized. Such an issue is directly indicated by the WLR so that the unrecognized wire is not assembled. Additionally, the WLR displays (together with the ESW system) a hint to perform manual reading by the operator or to put such a wire to the side until the rest are assembled.
In fact, the recognition level is very promising, but it is not 100% effective due to the defects and distortion of the print, resulting from the bending and twirling of the wires, the print tilting from the matrix, blurring and rubbing off, or the lack of print points. Such results were also confirmed by the various testers of the device, whose opinions were presented in Section 4.3. The available number of printing examples was quite large but still limited. A couple of thousand more samples would be necessary when turning the prototype into an actual product, both from the accuracy point of view and for the universality of the solution, covering different wire colors, cross sections, etc. It is worth emphasizing that the reading (recognition) issues are mainly generated on the printing process side, not the recognition side. Regardless, larger training data would presumably generate even better results. Moreover, the hardware used and the recognition algorithm can be optimized for faster operation, as mentioned before.
It is worth mentioning that the WLR device, apart from its unique purpose in the described processes, is also expected to create new value, which can be treated as an enrichment of knowledge, in the following aspects. Firstly, the technology was enriched with the original device, supported by the original software algorithm and a hardware setup with an original, dedicated PCB and a specific LED lighting system. Secondly, quantitative facts were provided as a benefit for the industry, and qualitative knowledge was enriched by observing different users’ reactions to the setup, without and with the use of the WLR, in order to spot advantages and issues in the work setup and the prototype itself. The aspect of the market and business model research is not treated by the authors as an enrichment of knowledge. With reference to the original software algorithm, it is worth underlining that the original algorithm created for the WLR device comprises (1) using the classical approach, including OpenCV edge detection; (2) performing data augmentation resulting in a unique dataset of 8500 images (a specifically programmed synthetic data generator for DCNN training, since the 1500 available real photos were not enough to provide satisfactory results); (3) DCNN training and adjusting of the weights; (4) applying a beam search algorithm; and (5) calculating the probabilities of potential single-element recognitions and summing the single probabilities to find the final answer using the maximum likelihood estimation method. The authors also proved that the DCNN is applicable in this example, since the existing methods used mainly the classical approach of template matching, and that the DCNN can work in real time on artificial dictionaries with artificial labels (in the existing methods, the correction of errors is taken from the English spelling dictionary).
It is worth mentioning the future development of the presented solution as well. Firstly, the focus is on the optimization of both the hardware components applied and the created software. This will allow the device to work even faster, as mentioned above. Optimization of the software does not mean additional DCNN training and improvement of the recognition ratio. The next major milestone cannot be treated simply as an improvement; it is connected with a significant enrichment of the software’s libraries, such as the addition of the possibility to read other fonts, other colors of both fonts and backgrounds (wire insulation), or different wire cross sections. The method and the software’s core remain the same, requiring only adjustments. The software architecture was planned in the first phases of the device’s planning and elaboration; therefore, the code does not have to be redesigned or rewritten. The software works based on image and pattern recognition; therefore, numerous new photos and synthetically generated images are required for the DCNN training (as mentioned before, a couple of thousand more label samples are needed to train the neural networks better before turning the presented prototype into a holistically functional product).
One of the most interesting research concepts is the possibility of providing the WLR device with the ability of 360° reading. This will allow further optimization of the algorithm; the user will be able to place the wire in the WLR in any way, instead of the precise, label-upward position that is required in the prototype. A transparent tube for wire insertion and a set of cameras or a set of mirrors were initially considered for inclusion in the device. An additional option, on top of the 360° reading, is to transform the current shape into a handheld device. This, however, is another major development.
If the 360° reading is not exciting enough to be included in the device, then the connection of character recognition with the AR glasses presented by the authors of [9] seems extremely interesting. Giving augmented reality (AR) glasses the ability to read the text on the wire would eliminate the need for a specific reading device. Initial tests with the Microsoft HoloLens v.1 were not satisfactory enough, and the engineers expressed doubt that the set goal could be achieved with this version of the AR glasses. Further tests of such a solution, using the HoloLens v.2, are planned in the near future. This topic presents even more promising possibilities for the future development of the described solution.
The authors would like to emphasize again that the presented industrial system for marking identification uses the newest AI techniques, described in this paper. It is worth mentioning that these methods produce very promising results in other fields of focus of the authors, including industry, energy, and human behavior. Interesting areas and applications include smart grid forecasting [104], attack detection in smart grids [105], mapping [106], various challenging feature recognition tasks [107,108], and the detection of defects in manufacturing [109].

Author Contributions

Conceptualization, methodology, literature analysis, preliminary analysis of equipment, prototype creation and elaboration, tests, post-test surveys, patent preparation, substantial text writing, editing and reviewing, original draft elaboration, graphic design, the concept for comparison study, and reviewing, linguistic analysis: A.S.; organizational work, literature analysis, substantial text writing, editing and reviewing, original draft, the concept for comparison study/conducting/benchmarking/statistical analysis, linguistic analysis, graphic design, correspondence with the reviewers: M.K.; prototype creation and elaboration, CNN building and testing, text revision: K.C.; organizational work, literature analysis/text writing, language proofreading: R.S. and W.W. All authors have read and agreed to the published version of the manuscript.

Funding

The research was financed by the universities and companies involved.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data supporting reported results can be found in the DTPoland company.

Acknowledgments

With this article, the authors would like to pay a special tribute to Marek Adaszyński, engaged in the project and patent, who sadly passed away. The authors would like to thank the DTPoland scientific team, engaged in the project: Janusz Szajna, Krzysztof Diks, Tomasz Kozlowski, and moreover, special thanks are aimed at the DTPoland engineering team: Piotr Charyna, Filip Chmielewski, Krzysztof Ciebiera, Joanna Cieniuch, Grzegorz Gajdzis, Anna Mroczkowska, Marcin Mucha, Sebastian Pawlak, Wojciech Regeńczuk, Eugeniusz Tswigun, Andrzej Warycha, Mariusz Życiak. Special thanks to Ernst Raue and Jacek Robak, who significantly influenced the Hannover Messe 2017 promotion. Very special acknowledgments are aimed at Jan-Henry Schall, Head of Rittal Innovation Center.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Folkenroth, E.; Ullman, R. Insulation Stripper and Wire Separator for Twisted Wire Pairs. U.S. Patent 3853156A, 10 December 1974. Available online: https://worldwide.espacenet.com/publicationDetails/biblio?DB=EPODOC&II=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=19741210&CC=US&NR=3853156A&KC=A (accessed on 18 November 2020).
  2. Komax Holding AG. History. 2020. Available online: https://www.komaxgroup.com/en/Group/About-Komax/History/ (accessed on 18 November 2020).
  3. Hirano, K.; Yamashita, H. Apparatus for Making a Wire Harness. U.S. Patent 5063656A, 12 November 1991. Available online: https://worldwide.espacenet.com/publicationDetails/biblio?II=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=19911112&CC=US&NR=5063656A&KC=A (accessed on 18 November 2020).
  4. Lucenta, R.W.; Pellegrino, T.P.; Stenstrom, E.; Wright, S.F.; Krause, H.G. Wire End Preparation Apparatus and Method. U.S. Patent 5896644A, 27 April 1997. Available online: https://worldwide.espacenet.com/publicationDetails/biblio?II=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=19990427&CC=US&NR=5896644A&KC=A (accessed on 18 November 2020).
  5. Block, M.D.; Gage, C.A. Automated processing of wire harnesses. In Proceedings of the International SAMPE Symposium and Exhibition, Covina, CA, USA, 7–10 March 1988; Carrillo, G., Newell, E.D., Brown, W.D., Phelan, P., Eds.; Society for the Advancement of Material and Process Engineering: Covina, CA, USA, 1988; Volume 2, pp. 289–299. [Google Scholar]
  6. Steinhauer. Personal Wiring Assistant. Available online: https://www.steinhauerna.com/personal-wiring-assistant.html (accessed on 19 November 2020).
  7. Rittal. Rittal at SPS IPC Drives: New Wire Terminal from Rittal Automation Systems. 2018. Available online: https://www.rittal.com/com-en/content/en/unternehmen/presse/pressemeldungen/pressemeldung_detail_68480.jsp (accessed on 19 November 2020).
  8. EPLAN. EPLAN Smart Wiring. Clever Software for Wiring for Panel Building. 2020. Available online: https://www.eplan-software.com/solutions/eplan-platform/eplan-smart-wiring/ (accessed on 19 November 2020).
  9. Szajna, A.; Stryjski, R.; Woźniak, W.; Chamier-Gliszczyński, N.; Kostrzewski, M. Assessment of Augmented Reality in Manual Wiring Production Process with Use of Mobile AR Glasses. Sensors 2020, 20, 4755. [Google Scholar] [CrossRef]
  10. Szajna, A.; Szajna, J.; Stryjski, R.; Sąsiadek, M.; Woźniak, W. The Application of Augmented Reality Technology in the Production Processes. In Intelligent Systems in Production Engineering and Maintenance. ISPEM 2018; Burduk, A., Chlebus, E., Nowakowski, T., Tubis, A., Eds.; Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2019; Volume 835, pp. 316–324. [Google Scholar] [CrossRef]
  11. Osowski, S. Głębokie sieci neuronowe i ich zastosowania w eksploracji danych (Deep neural networks and their applications in data mining). Przegląd Telekomun. Wiadomości Telekomun. 2018, 5. [Google Scholar] [CrossRef]
  12. Rosenblatt, F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms; Cornell Aeronautical Laboratory, Inc., Cornell University: Buffalo, NY, USA, 1962. [Google Scholar]
  13. Fukushima, K. Neocognitron—A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern. 1980, 36, 193–202. [Google Scholar] [CrossRef]
  14. Tappert, C.C. Who Is the Father of Deep Learning? In Proceedings of the 2019 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 5–7 December 2019; IEEE: New York, NY, USA, 2019; pp. 343–348. [Google Scholar] [CrossRef]
  15. LeCun, Y.; Bengio, Y. Convolutional networks for images, speech, and time-series. In The Handbook of Brain Theory and Neural Networks; Arbib, M.A., Ed.; MIT Press: Cambridge, MA, USA, 1998; pp. 255–258. [Google Scholar]
  16. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef] [PubMed]
  17. Le, Q.V.; Ranzato, M.A.; Monga, R.; Devin, M.; Chen, K.; Corrado, G.S.; Dean, J.; Ng, A.Y. Building High-level Features Using Large Scale Unsupervised Learning. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, UK, 26 June–1 July 2012; Langford, J., Pinea, J., Eds.; Omnipress: Madison, WI, USA, 2012. [Google Scholar]
  18. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. 2012. Available online: https://www.cs.toronto.edu/~kriz/imagenet_classification_with_deep_convolutional.pdf (accessed on 6 March 2021).
  19. Metz, C. Turing Award Won by 3 Pioneers in Artificial Intelligence. The New York Times. 27 March 2019. Available online: https://www.nytimes.com/2019/03/27/technology/turing-award-ai.html (accessed on 17 April 2021).
  20. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  21. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Anon. Wire making system aids test station production. Electron. Package Prod. 1979, 19, 125–126. [Google Scholar]
  23. Zuehlke, D. 10 years Industrie 4.0—Congratulations! LinkedIn Post. 1 April 2021. Available online: https://www.linkedin.com/pulse/10-years-industrie-40-congratulations-detlef-zuehlke/?trackingId=c4f08DxTRt7pIv1Ge0UUlw%3D%3D (accessed on 2 April 2021).
  24. Kato, H.; Billinghurst, M. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. In Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR’99), San Francisco, CA, USA, 20–21 October 1999; IEEE: New York, NY, USA, 1999; pp. 85–94. [Google Scholar] [CrossRef] [Green Version]
  25. Hirzer, M. Marker Detection for Augmented Reality Applications; Technical Report ICG–TR–08/05. Seminar/Project Image Analysis; Institute of Computer Graphics and Vision, Technische Universität Graz: Graz, Austria, 2008; Available online: https://www.tugraz.at/fileadmin/user_upload/Institute/ICG/Documents/lrs/pubs/hirzer_tr_2008.pdf (accessed on 24 April 2021).
  26. Katiyar, A.; Kalra, K.; Garg, C. Marker Based Augmented Reality. Adv. Comput. Sci. Inf. Technol. 2015, 2, 441–445. [Google Scholar]
  27. Bengio, Y.; LeCun, Y. Scaling learning algorithms towards AI. In Large-Scale Kernel Machines; Bottou, L., Chapelle, O., DeCoste, D., Weston, J., Eds.; MIT Press: Cambridge, MA, USA, 2007; pp. 1–41. [Google Scholar]
  28. Yoon, S.J.; Roh, K.S.; Hyung, S.Y.; Ahn, S.H. Markerless Augmented Reality System and Method Using Projective Invariant. US Patent 8791960, 29 July 2014. Available online: https://worldwide.espacenet.com/publicationDetails/biblio?II=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=20110421&CC=US&NR=2011090252A1&KC=A1 (accessed on 5 January 2021).
  29. Wang, J.; Shen, Y.; Yang, S. A practical marker-less image registration method for augmented reality oral and maxillofacial surgery. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 763–773. [Google Scholar] [CrossRef]
  30. Oufqir, Z.; El Abderrahmani, A.; Satori, K. From Marker to Markerless in Augmented Reality. In Embedded Systems and Artificial Intelligence. Advances in Intelligent Systems and Computing; Bhateja, V., Satapathy, S., Satori, H., Eds.; Springer Nature Switzerland AG: Cham, Switzerland, 2020; Volume 1076, pp. 599–612. [Google Scholar] [CrossRef]
  31. Velez, J.J. Robust Object Exploration and Detection. Ph.D. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA, 2015. Available online: https://dspace.mit.edu/handle/1721.1/97813 (accessed on 28 November 2020).
  32. Chen, Y.H.T. Interactive Object Recognition and Search Over Mobile Video. Ph.D. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA, 2017. Available online: https://dspace.mit.edu/handle/1721.1/111876 (accessed on 28 November 2020).
  33. Jaroensri, R. Learning to Solve Problems in Computer Vision with Synthetic Data. Ph.D. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA, 2019. Available online: https://dspace.mit.edu/handle/1721.1/122560 (accessed on 4 March 2021).
  34. Li, S. Computational Imaging through Deep Learning. Ph.D. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA, 2019. Available online: https://dspace.mit.edu/handle/1721.1/122070 (accessed on 28 November 2020).
  35. Florence, P.R. Dense Visual Learning for Robot Manipulation. Ph.D. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA, 2020. Available online: https://dspace.mit.edu/handle/1721.1/128398 (accessed on 29 November 2020).
  36. Wu, J. Learning to See the Physical World. Ph.D. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA, 2020. Available online: https://dspace.mit.edu/handle/1721.1/128332 (accessed on 29 November 2020).
  37. Perhavec, O.; Felipe, J. Accelerated Development of Photovoltaics by Physics-Informed Machine Learning. Ph.D. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA, 2020. Available online: https://dspace.mit.edu/handle/1721.1/127060 (accessed on 29 November 2020).
  38. Ma, Y. Machine Learning in Ocean Applications: Wave Prediction for Advanced Controls of Renewable Energy and Modeling Nonlinear Viscous Hydrodynamics. Ph.D. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA, 2020. Available online: https://dspace.mit.edu/handle/1721.1/127057 (accessed on 29 November 2020).
  39. Yang, X.; Liu, J.; Lv, N.; Xia, H. A review of cable layout design and assembly simulation in virtual environments. Virtual Real. Intell. Hardw. 2019, 1, 543–557. [Google Scholar] [CrossRef]
  40. Jastrzębski, S. Generalizacja i Trajektoria Optymalizacji Głębokich Sieci Neuronowych (Generalization and Trajectory Optimization of Deep Neural Networks). Ph.D. Thesis, Faculty of Mathematics and Information Technologies, Jagiellonian University, Kraków, Poland, 2019. Available online: https://ruj.uj.edu.pl/xmlui/handle/item/73272 (accessed on 16 December 2020). (In Polish).
  41. Irmatov, A.A.; Bazanov, P.V.; Buryak, D.Y.; Kuznetsov, V.D.; Mun, W.-J.; Yang, H.-K.; Lee, Y.-J. Method and System for Automated Face Detection and Recognition. U.S. Patent 9367730, 14 June 2016. Available online: https://scienceon.kisti.re.kr/srch/selectPORSrchPatent.do?cn=USP2016069367730 (accessed on 8 March 2021).
  42. Gaborski, R.R. Neural Network with Back Propagation Controlled through an Output Confidence Measure. U.S. Patent 5052043, 24 September 1991. Available online: https://patentimages.storage.googleapis.com/6e/50/9f/9b6a7978d1443f/US5052043.pdf (accessed on 8 March 2021).
  43. Loewenthal, K.H.; Bryant, S.M. Neural Network Optical Character Recognition System and Method for Classifying Characters in a Moving Web. U.S. Patent 5712922, 27 January 1998. Available online: https://patentimages.storage.googleapis.com/ff/f9/74/de006a80a8a332/US5712922.pdf (accessed on 8 March 2021).
  44. Diep, T.A.; Avi-Itzhak, H.I.; Garland, H.T. Training a Neural Network Using Centroid Dithering by Randomly Displacing a Template. U.S. Patent 5625707, 29 April 1997. Available online: https://patentimages.storage.googleapis.com/c2/a7/46/2c9e5d02a67c8e/US5625707.pdf (accessed on 8 March 2021).
  45. Gaborski, R.S.; Beato, L.J.; Barski, L.L.; Tan, H.-L.; Assad, A.M.; Dutton, D.L. Optical Character Recognition Neural Network System for Machine-Printed Characters. U.S. Patent 5048097, 10 September 1991. Available online: https://patentimages.storage.googleapis.com/ea/0a/2c/d2bee51ffd0ed5/US5048097.pdf (accessed on 8 March 2021).
  46. Shustorovich, A.; Thrasher, C.W. Neural Network Based Character Position Detector for Use in Optical Character Recognition. U.S. Patent 5542006, 30 July 1996. Available online: https://pdfpiw.uspto.gov/.piw?docid=05542006 (accessed on 8 March 2021).
  47. Oki, T. Neural Network for Character Recognition and Verification. U.S. Patent 5742702, 21 April 1998. Available online: https://patentimages.storage.googleapis.com/a5/2b/77/49a8f48b3759a5/US5742702.pdf (accessed on 8 March 2021).
  48. Takahashi, H. Neural Network Architecture for Recognition of Upright and Rotated Characters. U.S. Patent 6101270, 8 August 2000. Available online: https://patentimages.storage.googleapis.com/05/43/69/510c174e12e39c/US6101270.pdf (accessed on 8 March 2021).
  49. Kim, M.Y.; Rigazio, L.; Fujimura, R.; Tsukizawa, S.; Kozuka, K. Image Recognition Method. U.S. Patent 20170083796, 22 March 2017. Available online: https://scienceon.kisti.re.kr/srch/selectPORSrchPatent.do?cn=JPA2017030059207 (accessed on 8 March 2021).
  50. Jaderberg, M.; Simonyan, K.; Vedaldi, A.; Zisserman, A. Reading Text in the Wild with Convolutional Neural Networks. Int. J. Comput. Vis. 2016, 116, 1–20. [Google Scholar] [CrossRef] [Green Version]
  51. Shi, B.; Bai, X.; Yao, C. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2298–2304. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Gerber, C.; Chung, M. Number Plate Detection with a Multi-Convolutional Neural Network Approach with Optical Character Recognition for Mobile Devices. J. Inf. Process. Syst. 2016, 12, 100–108. [Google Scholar] [CrossRef] [Green Version]
  53. Palka, J.; Palka, J.; Navratil, M. OCR systems based on convolutional neocognitron network. Int. J. Math. Models Methods Appl. Sci. 2011, 7, 1257–1264. [Google Scholar]
  54. Rawls, S.; Cao, H.; Kumar, S.; Natarajan, P. Combining convolutional neural networks and LSTMs for segmentation-free OCR. In Proceedings of the 2017 14th IAPR International Conference on Document Analysis and Recognition, Kyoto, Japan, 9–15 November 2017; IEEE: New York, NY, USA, 2017; pp. 155–160. [Google Scholar] [CrossRef]
  55. Noubigh, Z.; Mezghani, A.; Kherallah, M. Contribution on Arabic handwriting recognition using deep neural network. In Advances in Intelligent Systems and Computing, Proceedings of the 19th International Conference on Hybrid Intelligent Systems (HIS 2019) and the 14th International Conference on Information Assurance and Security (IAS 2019), Bhopal, India, 10–12 December 2019; Abraham, A., Shandilya, S.K., Garcia-Hernandez, L., Varela, M.L., Eds.; Springer International Publishing: Cham, Switzerland, 2019; Volume 1179, pp. 123–133. [Google Scholar] [CrossRef]
56. Pattanayak, S.S.; Pradhan, S.K.; Malik, R.C. Performance evaluation of deep learning networks on printed Odia characters. J. Comput. Sci. 2020, 16, 1011–1018.
57. Addis, D.; Liu, C.-M.; Ta, V.-D. Printed Ethiopic Script Recognition by Using LSTM Networks. In Proceedings of the 2018 International Conference on System Science and Engineering (ICSSE 2018), New Taipei City, Taiwan, 28–30 June 2018; IEEE: New York, NY, USA, 2018.
58. Greff, K.; Srivastava, R.K.; Koutník, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A search space odyssey. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2222–2232.
59. Ko, D.-G.; Song, S.-H.; Kang, K.-M.; Han, S.-W. Convolutional Neural Networks for Character-level Classification. IEIE Trans. Smart Process. Comput. 2017, 6, 53–59.
60. Zhang, X.; Zhao, J.; LeCun, Y. Character-level Convolutional Networks for Text Classification. In Proceedings of the 28th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; Cowan, G., Germain, C., Guyon, I., Kégl, B., Rousseau, D., Eds.; Cornell University: Ithaca, NY, USA, 2015.
61. Zhu, J.Y.; Cui, Y.; Liu, Y.; Sun, H.; Li, X.; Pelger, M.; Yang, T.; Zhang, L.; Zhang, R.; Zhao, H. TextGNN: Improving Text Encoder via Graph Neural Network in Sponsored Search. In Proceedings of the Web Conference 2021 (WWW 21), Ljubljana, Slovenia, 19–23 April 2021; ACM: New York, NY, USA, 2021; pp. 1–10.
62. Javaloy, A.; García-Mateos, G. Text Normalization Using Encoder–Decoder Networks Based on the Causal Feature Extractor. Appl. Sci. 2020, 10, 4551.
63. Stuner, B.; Chatelain, C.; Paquet, T. Handwriting recognition using cohort of LSTM and lexicon verification with extremely large lexicon. Multimed. Tools Appl. 2020, 79, 34407–34427.
64. Wigington, C.; Stewart, S.; Davis, B.; Barrett, B.; Price, B.; Cohen, S. Data augmentation for recognition of handwritten words and lines using a CNN-LSTM network. In Proceedings of the 2017 14th IAPR International Conference on Document Analysis and Recognition, Kyoto, Japan, 9–15 November 2017; IEEE: New York, NY, USA, 2017; pp. 639–645.
65. Carbune, V.; Gonnet, P.; Deselaers, T.; Rowley, H.A.; Daryin, A.; Calvo, M.; Wang, L.-L.; Keysers, D.; Feuz, S.; Gervais, P. Fast multi-language LSTM-based online handwriting recognition. Int. J. Doc. Anal. Recognit. 2020, 23, 89–102.
  66. Sprovieri, J. Ink-jets for marking wire. Assembly 2019, 62. Available online: https://www.assemblymag.com/articles/94714-ink-jets-for-marking-wire (accessed on 17 March 2021).
  67. Camillio, J. Options for Wire Labeling. Assembly 2016, 59. Available online: https://www.assemblymag.com/articles/93182-options-for-wire-labeling (accessed on 17 March 2021).
68. Webber, P. Ink jets for wire marking. Assembly 2001, 44, 38-X3.
69. Mitchell, R.; Dalco, J.C., Jr.; Gemelli, D.J. Inkjet for wiremarking: Further improvements in a mature technology. Wire J. Int. 1998, 31, 84–89.
70. Gray, W.T.; Falson, R. Wire marking: A changing technology. Electronics 1983, 29, 55–57.
  71. Tierney, J. Options for marking wire and cable. Assembly 2017, 60. Available online: https://www.assemblymag.com/articles/93782-options-for-marking-wire-and-cable (accessed on 17 March 2021).
72. Antoine, C. Wire marking and its effect upon print-through perception of newsprint. Appita J. 2007, 60, 196–199.
73. Markstein, H.W. Wire routing techniques in harness fabrication. Electron. Package Prod. 1982, 22, 43–56.
74. Emmerich, H.H. Literaturverzeichnis. In Flexible Montage von Leitungssätzen mit Industrierobotern. IPA-IAO Forschung und Praxis (Berichte aus dem Fraunhofer-Institut für Produktionstechnik und Automatisierung (IPA), Stuttgart, Fraunhofer-Institut für Arbeitswirtschaft und Organisation (IAO), Stuttgart, und Institut für Industrielle Fertigung und Fabrikbetrieb der Universität Stuttgart); Springer: Berlin/Heidelberg, Germany, 1992; Volume 160, pp. 128–135.
  75. Doyon, P. Harnessing high-mix, low-volume. Assembly 2005, 48. Available online: https://www.assemblymag.com/articles/83900-harnessing-high-mix-low-volume (accessed on 17 March 2021).
76. Finsterbusch, T.; Petz, A.; Faber, M.; Härtel, J.; Kuhlang, P.; Schlick, C.M. A Comparative Empirical Evaluation of the Accuracy of the Novel Process Language MTM-Human Work Design. In Advances in Ergonomics of Manufacturing: Managing the Enterprise of the Future, Advances in Intelligent Systems and Computing; Schlick, C., Trzcieliński, S., Eds.; Springer: Cham, Switzerland, 2016; Volume 490.
77. Mrochen, M. MTM (Methods-Time-Measurement)—Droga do doskonałości (MTM (Methods-Time-Measurement)—The Way to Excellence). Przedsiębiorczość i Zarządzanie 2015, 16, 231–245. (In Polish)
  78. KFUP&M. King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia. 2019. Available online: https://faculty.kfupm.edu.sa/SE/atahir/SE%20323/Chapter-10-Predetermined-Motion-Time-Systems.ppt (accessed on 30 March 2021).
79. Fijałkowski, J. Transport Wewnętrzny w Systemach Logistycznych. Wybrane Zagadnienia (Internal Transport in Logistic Systems. Selected Issues); Oficyna Wydawnicza Politechniki Warszawskiej: Warszawa, Poland, 2000. (In Polish)
80. Fantoni, G.; Al-Zubaidi, S.Q.; Coli, E.; Mazzei, D. Automating the process of method-time-measurement. Int. J. Product. Perform. Manag. 2020, 70, 958–982.
81. Shadish, W.R.; Cook, T.D.; Campbell, D.T. Experimental and Quasi-Experimental Designs for Generalized Causal Inference; Wadsworth Cengage Learning: Belmont, CA, USA, 2002.
82. Mitchell, O. Experimental Research Design. In The Encyclopedia of Crime and Punishment, 1st ed.; Jennings, W.G., Ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2016.
83. Bovik, A. Handbook of Image and Video Processing, 2nd ed.; Academic Press: Cambridge, MA, USA, 2005.
  84. Zygarlicka, M. Wybrane Metody Przetwarzania Obrazów w Analizach Czasowo-Częstotliwościowych na Przykładzie Zakłóceń w Sieciach Elektroenergetycznych (Selected Methods of Image Processing in Time-Frequency Analyses on the Example of the Interferences in the Energy Networks). Ph.D. Thesis, Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Opole, Poland, 2011. Available online: https://www.dbc.wroc.pl/Content/13865/PDF/malgorzata_zygarlicka_pop..pdf (accessed on 25 April 2021). (In Polish).
  85. Hoske, M.T. Electrical Schematic Software Automates Wiring, Panel Design. Control Engineering. 1999. Available online: https://www.controleng.com/articles/electrical-schematic-software-automates-wiring-panel-design/ (accessed on 4 February 2021).
  86. Brady Worldwide, Inc. 2014. Available online: https://www.brady.co.uk/wire-cable-labels (accessed on 4 February 2021).
87. Johanson, M. The Complete Guide to Wiring: Current with 2014–2017 Electrical Codes, 6th ed.; Cool Springs Press: Minneapolis, MN, USA, 2014.
  88. EPLAN Software & Service GmbH & Co., KG. 2015. Available online: https://www.pressebox.de/inaktiv/eplan-software-service-gmbh-co-kg/Eplan-Experience-die-ersten-365-Tage/boxid/769262 (accessed on 9 June 2020).
  89. Rittal Germany, Rittal at the SPS IPC Drives 2015 in Nuremberg. 7 December 2015. Available online: https://youtu.be/T-Pu1dVp4cI (accessed on 18 June 2020).
  90. Adaszyński, M.; Ciebiera, K.; Diks, K.; Kozlowski, T.; Szajna, A.; Szajna, J.; Zubowicz, C.; Zyciak, M. The Device for Identifying Wire Markings and the Method for Identifying Wire Markings. EP Patent 3460719, 27 March 2019. Available online: https://worldwide.espacenet.com/publicationDetails/biblio?CC=EP&NR=3460719A1&KC=A1&date=&FT=D&locale=en_EP (accessed on 9 June 2020).
  91. Adaszyński, M.; Szajna, J.; Ciebiera, K.; Diks, K.; Kozłowski, T.; Szajna, A. Device for Identification of Lead Designations and Method for Identification of Lead Designations. PL Patent 421368, 22 October 2018. Available online: https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20181022&DB=&locale=en_EP&CC=PL&NR=421368A1&KC=A1&ND=1 (accessed on 9 June 2020).
  92. Mirkowski, J. Analiza Stanu Techniki w Zakresie Inteligentnego Monitorowania Linii Produkcyjnej z Wykorzystaniem AR/VR (State of the Art Analysis of Intelligent Production Line Monitoring Using AR/VR). Digital Technology Poland. 10 October 2019. Available online: https://www.dtpoland.com/wersja (accessed on 23 October 2019). (In Polish).
  93. Kozłowski, T. Stan wiedzy (The State of Knowledge). Digital Technology Poland. 31 March 2017. Available online: https://www.dtpoland.com/wersja (accessed on 31 March 2017). (In Polish).
94. Sadasivan, A.K.; Senthilkumar, T. Automatic Character Recognition in Complex Images. Procedia Eng. 2012, 30, 218–225.
95. Tadeusiewicz, R.; Korohoda, P. Computerised Image Analysis and Processing; Wydawnictwo Fundacji Postępu Telekomunikacji: Kraków, Poland, 1997.
96. Scholz, F.W. Maximum Likelihood Estimation. In Encyclopedia of Statistical Sciences; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2006.
  97. Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. In Proceedings of the ICLR 2015: International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015; pp. 1–15. Available online: https://arxiv.org/abs/1412.6980 (accessed on 13 April 2021).
98. Dünser, A.; Billinghurst, M. Evaluating Augmented Reality Systems. In Handbook of Augmented Reality; Furht, B., Ed.; Springer: New York, NY, USA, 2011; pp. 289–307.
99. Billinghurst, M.; Clark, A.; Lee, G. A Survey of Augmented Reality. Found. Trends Hum. Comput. Interact. 2015, 8, 73–272.
100. Projekt ABA; Deutsche MTM-Vereinigung e.V. Arbeitsgestaltung mit MTM-HWD. Das neue Bausteinsystem MTM-HWD (Work Design with MTM-HWD. The New Building-Block System MTM-HWD). 2017. Available online: http://www.projekt-aba.de/files/aba/layout/images/Dokumente%20Thementage/2017-09-20%20MTM-HWD.pdf (accessed on 13 April 2021). (In German)
  101. Eurostat. Electricity Price Statistics. Available online: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Electricity_price_statistics (accessed on 18 May 2021).
  102. Flaherty, N. NXP Shows First Details of Edge AI i.MX9 Processor. eeNewsEurope. Available online: https://www.eenewseurope.com/news/nxp-imx9-processor-edge-ai (accessed on 18 May 2021).
103. Feild, H.A.; Allan, J.; Jones, R. Predicting searcher frustration. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’10); Association for Computing Machinery: New York, NY, USA, 2010; pp. 34–41.
104. Kaur, D.; Islam, S.N.; Mahmud, M.A.; Dong, Z. Energy Forecasting in Smart Grid Systems: A Review of the State-of-the-art Techniques. arXiv 2020, arXiv:2011.12598.
105. Ozay, M.; Esnaola, I.; Yarman-Vural, F.; Kulkarni, S.; Poor, H. Machine Learning Methods for Attack Detection in the Smart Grid. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1773–1786.
106. Alhafni, B.; Guedes, S.F.; Ribeiro, L.C.; Park, J.; Lee, J. Mapping Areas using Computer Vision Algorithms and Drones. arXiv 2019, arXiv:1901.00211.
107. Zou, J.; Zhang, J.; Wang, L. Handwritten Chinese Character Recognition by Convolutional Neural Network and Similarity Ranking. arXiv 2019, arXiv:1908.11550.
108. Saez-Trigueros, D.; Meng, L.; Hartnett, M. Face Recognition: From Traditional to Deep Learning Methods. arXiv 2018, arXiv:1811.00116.
109. Yang, J.; Li, S.; Wang, Z.; Dong, H.; Wang, J.; Tang, S. Using Deep Learning to Detect Defects in Manufacturing: A Comprehensive Survey and Current Challenges. Materials 2020, 13, 5755.
Figure 1. The main elements of the control cabinet production process. The orange background shows the production steps that are the focus of this paper.
Figure 2. Subordinate process of preproduction of wires with three variants.
Figure 3. The result of the fully automated preproduction of a unique wire with the exact length, appropriate end ferrule, and marking (label).
Figure 4. Wire assembly production steps in detail, with the usage of the reading prototype; green background shows the production steps that are the focus of this paper.
Figure 5. The wiring production station: (1) industrial enclosure and components already in place with certain number of wires assembled; (2) an assembly frame; (3) wiring assembly support software system (EPLAN Smart Wiring); (4) wire holder with preproduced wires; (5) the WLR device.
Figure 6. The WLR, shown on the right side of the figure with the front panel of the chassis removed: a blue wire with white font markings is placed by an operator above the white background; LED lamps above both sides of the white background provide uniform exposure; a camera is located on top, directly over the wire.
Figure 7. Wire reading with the WLR: the wire is placed by the operator on the image scene; the display on top shows the action in real time, which helps the operator place the wire correctly.
Figure 8. The picture taken by the WLR: clear, readable text; dotted font visible.
Figure 9. The actual picture taken by the WLR: clear, readable text; dotted font not visible (blurred).
Figure 10. The actual picture taken by the WLR: faultily printed text; the figure was previously published in [10].
Figure 11. An enlargement of a single character (a digit) printed on a wire: left side, the faultily printed text with a digit that the AI-supported system is challenged to recognize; right side, the result of AI recognition, giving the relevant pattern simulation; the figure was previously published in [10].
Figure 12. An enlargement of a single character (a letter) printed on a wire: left side, the original, faultily printed text with a character that the AI-supported system is challenged to recognize; right side, the result of AI recognition, proposing the relevant patterns “A”, “R”, and “P”.
Figure 13. Sample of the recognition of empty spaces during identification of wire marking, using a DCNN.
Figure 14. Sample of “zero” character recognition during identification of wire marking by using a DCNN.
Figure 15. Recognition of the full spectrum of characters during identification of wire marking by using a DCNN.
Figure 16. The isolated characters on the wire marking image.
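Figures 13–16 illustrate the character-level classification step: each isolated crop is scored against the full character set, the gap between printed characters is treated as a class of its own (“empty space”, Figure 13), and for damaged prints such as the one in Figure 12 the network can rank several candidate readings (“A”, “R”, “P”) by their output probabilities. The following minimal PyTorch sketch illustrates this idea only; the layer sizes, input resolution, and class set are assumptions, not the authors’ actual architecture.

import string
import torch
import torch.nn as nn

# Assumed class set: digits, upper-case letters, plus an explicit
# "empty space" class for the gaps between characters (cf. Figure 13).
CLASSES = list(string.digits + string.ascii_uppercase) + ["<space>"]

class CharNet(nn.Module):
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 x 16 x 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 x 8 x 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)

    def forward(self, x):  # x: (batch, 1, 32, 32) grayscale character crops
        return self.classifier(self.features(x).flatten(1))

probs = CharNet()(torch.randn(1, 1, 32, 32)).softmax(dim=1)
top3 = probs.topk(3, dim=1)  # ranked candidate readings, e.g., "A", "R", "P" in Figure 12
print([CLASSES[i] for i in top3.indices[0]])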
Figure 17. Example of an image generated synthetically for CNN training. Source: Ciebiera K.
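Figure 17 shows an image generated synthetically for training. A minimal sketch of such a generator is given below; it assumes a simple pipeline of light text rendered on a dark wire-coloured background with random position jitter and Gaussian blur, and its font, colours, and noise parameters are illustrative stand-ins rather than the authors’ actual generator. Each call yields a labelled pair (image, text) for supervised training.

import random
from PIL import Image, ImageDraw, ImageFont, ImageFilter

def synthetic_wire_image(text, size=(400, 60)):
    # Dark grey "insulation" background with a white marking on top.
    img = Image.new("L", size, color=random.randint(20, 80))
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()        # stand-in for the printer's dotted font
    x = 10 + random.randint(-3, 3)         # jitter the label position
    y = 20 + random.randint(-3, 3)
    draw.text((x, y), text, fill=255, font=font)
    # Random blur mimics imperfect camera focus (cf. Figure 9).
    return img.filter(ImageFilter.GaussianBlur(random.uniform(0.3, 1.2)))

synthetic_wire_image("-X1:2.4").save("train_sample.png")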
Figure 18. Validation of MTM-based results with experiment-based results.
Figure 19. Comparison of the results obtained with and without the use of the WLR in the assembly process.
Table 1. Comparison of three alternatives in the wire assembly process: the two most popular present-day scenarios of the assembly process (A and B, without the WLR) and the approach with the WLR. In scenario B, the time of typing the label into the search window of the ESW system depends on the number of characters in the wire label marking (28, 20, 13, or 3), so four process-time columns are given; all process times are in minutes.

| Fundamental Operations in Assembly without the WLR (A) | Process Time (min) | Fundamental Operations in Assembly without the WLR (B) | 28 Char. | 20 Char. | 13 Char. | 3 Char. | Assembly with the WLR | Process Time (min) |
|---|---|---|---|---|---|---|---|---|
| Look at an industrial enclosure | 0.00600 | Look at an industrial enclosure | 0.00600 | 0.00600 | 0.00600 | 0.00600 | Look at an industrial enclosure | 0.00600 |
| Click the ESW assembly support system | 0.00300 | Reach to the holder on the left, for a wire | 0.01248 | 0.01248 | 0.01248 | 0.01248 | Reach to the holder on the left, for a wire | 0.01248 |
| The ESW system opens the extended information with the 3D graphics of a selected wire on a wiring list | 0.01667 | Detach a wire from the holder | 0.01896 | 0.01896 | 0.01896 | 0.01896 | Detach a wire from the holder | 0.01896 |
| Look at a displayed wire length | 0.09024 | Grab a wire with both hands | 0.00648 | 0.00648 | 0.00648 | 0.00648 | Grab a wire with both hands | 0.00648 |
| Reach to the holder on the left, for a wire | 0.01584 | Rotate a wire’s label to be visible | 0.00648 | 0.00648 | 0.00648 | 0.00648 | Rotate a wire so that the label faces upward | 0.00648 |
| Detach a wire with the closest predefined length from the holder | 0.02898 | Type a label into a search window of the ESW system | 0.59920 | 0.42800 | 0.27820 | 0.06420 | Move a wire with both hands towards the WLR | 0.00648 |
| Adjust a wire’s ends preparation if needed (the predefined ones may not fit) | 0.02010 | Click the result so that extended information with 3D graphics opens on the wiring list | 0.00300 | 0.00300 | 0.00300 | 0.00300 | Place a wire on the white background of the WLR | 0.00312 |
| Look at the ESW assembly support system to identify a wire routing | 0.35400 | Look at the ESW assembly support system to identify a wire routing | 0.00303 | 0.00303 | 0.00303 | 0.00303 | Wait for an acoustic signal that a picture for recognition was taken | 0.016667 |
| Mount a wire | 0.04896 | Mount the wire | 0.04896 | 0.04896 | 0.04896 | 0.04896 | The ESW assembly support system opens the extended info with the 3D graphics of a wire on a wiring list | 6×10⁻⁷ |
| Hide a wire surplus in a wiring duct | 0.12168 | (no wire surplus) | – | – | – | – | Move a hand with a wire from the WLR | 0 |
| – | – | – | – | – | – | – | Look at the ESW assembly support system to identify a wire routing | 0.00303 |
| – | – | – | – | – | – | – | Mount a wire (no wire surplus) | 0.04896 |
| MTM-based result: | 0.70547 | MTM-based result: | 0.70459 | 0.53339 | 0.38359 | 0.16959 | MTM-based result: | 0.12866 |
| Experiment-based result: | Not analyzed | Experiment-based result: | 0.71667 | 0.54444 | 0.38333 | 0.17222 | Experiment-based result: | 0.10000–0.15000 |
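The MTM-based rows of Table 1 quantify the reduction directly. The short computation below, with the values copied from the table, shows that the WLR-supported variant is roughly 24% faster than scenario B even for the shortest, 3-character markings, and over 80% faster for 28-character markings.

# Relative time saving of the WLR variant vs. scenario B (MTM-based, Table 1).
wlr_time = 0.12866                                                # min per wire with the WLR
scenario_b = {28: 0.70459, 20: 0.53339, 13: 0.38359, 3: 0.16959}  # min by label length
for chars, t in sorted(scenario_b.items()):
    saving = (1 - wlr_time / t) * 100
    print(f"{chars:>2} characters: {saving:4.1f}% faster with the WLR")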
Table 2. The energy consumption of the WLR during the full-wire assembly process (the WLR operating time, energy consumption, and energy cost for the assembly of one control cabinet).

| Assembly with the WLR | MTM-Based Result | Experiment-Based Result |
|---|---|---|
| Process time (h) | 0.64330 | 0.75000 |
| Energy consumption (kWh) | 0.00772 | 0.00900 |
| Cost of energy consumption (EUR) | 0.00165 | 0.00192 |
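Assuming that the rows of Table 2 are related by energy = power × time and cost = energy × unit price, the implied operating point of the device can be recovered as below; the resulting ~12 W average power draw and ~0.21 EUR/kWh tariff (consistent with the Eurostat electricity price statistics cited as [101]) are derived values, not figures stated explicitly in the table.

# Reconstructing the implied device power and electricity tariff from Table 2
# (MTM-based column; the experiment-based column gives nearly identical values).
time_h, energy_kwh, cost_eur = 0.64330, 0.00772, 0.00165
power_w = energy_kwh / time_h * 1000        # ~12.0 W average draw
tariff = cost_eur / energy_kwh              # ~0.214 EUR/kWh
print(f"implied power: {power_w:.1f} W, implied tariff: {tariff:.3f} EUR/kWh")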