Article

Sustainable Human–Robot Collaboration Based on Human Intention Classification

by
Chiuhsiang Joe Lin
* and
Rio Prasetyo Lukodono
Department of Industrial Management, National Taiwan University of Science and Technology, No. 43, Section 4, Keelung Rd., Da’an District, Taipei City 106, Taiwan
*
Author to whom correspondence should be addressed.
Sustainability 2021, 13(11), 5990; https://doi.org/10.3390/su13115990
Submission received: 31 March 2021 / Revised: 17 May 2021 / Accepted: 19 May 2021 / Published: 26 May 2021
(This article belongs to the Special Issue Human-Robot Interaction, Wellbeing, and Stress Management)

Abstract

Sustainable manufacturing helps ensure a product's economic value while reducing energy and resource consumption, improving the well-being of human workers and communities, and maintaining safety. Using robots is one way for manufacturers to strengthen their sustainable manufacturing practices. Nevertheless, there are limitations to directly replacing humans with robots due to work characteristics and practical conditions. Collaboration between robots and humans should accommodate human capabilities while reducing loads and ineffective human motions to prevent human fatigue and maximize overall performance. Moreover, early and fast communication between humans and machines is needed in human–robot collaboration so that the status of the human in the activity is known and immediate adjustments can be made for maximum performance. This study used a deep learning algorithm to classify the muscular signals of human motions with an accuracy of 88%. This indicates that the signal could be used as information for the robot to determine the human's intended motion during the initial stage of the entire motion. This approach can not only improve the communication and efficiency of human–robot collaboration but also reduce human fatigue through the early detection of human motion patterns. To enhance human well-being, it is suggested that human–robot collaboration assembly lines adopt similar technologies for a sustainable human–robot collaboration workplace.

1. Introduction

1.1. Human–Robot Collaboration Aspect for Sustainable Manufacturing

Sustainability is a primary objective for a company seeking to improve its manufacturing system. Support for this change is related to economic value and to minimizing the use of resources in product manufacturing. To measure the impact of operational initiatives, sustainable manufacturing has three indicators: economic, environmental, and social sustainability [1]. The economic indicator covers several aspects of implementing sustainable manufacturing; namely, innovation, tax responsibility, job creation, sales and profit achievement, infrastructure investment, anti-corruption measures, and contribution to the local economy [2]. Automation through industrial robotics plays a quantifiable role in enhancing the productivity of human workers, which may potentially have a disruptive effect on the economy [3]. However, the use of robots in manufacturing systems faces several obstacles, so increasing productivity often requires setting up collaboration scenarios between robots and humans.
High productivity is a primary objective of any company in establishing its production system. A production system generally has many elements related to humans, machines/robots, environments, and objects. Configuring this production system is still a challenge because of the dynamic characteristics of humans. In many companies, humans are still widely used in the assembly system because of their adaptability and cognitive processing ability [4]. A higher proportion of human involvement in the system increases its dynamic behavior due to uncertain circumstances. Humans tend to show high variability in behavior once they become exhausted during the production process [5]. Some analyses have noted that human error is affected by stress, repetition, fatigue, and conditions in the work environment [6]. Therefore, limitations exist when the human factor is considered in the assembly system.
Robots have many advantages in overcoming human inconsistency: for example, they fatigue less, make fewer errors, and are more consistent. However, the shift from humans to robots is not easy due to capital limitations. More sophisticated robots that can accomplish processes requiring higher accuracy with objects of small dimensions require more capital. Moreover, robots need to work in a static and structured environment because their movements and actions are programmed for specific conditions in such an environment. Robots also need to recognize the conditions of their surroundings related to obstacles, human motions, and object positions [7]. Due to these considerations, collaboration between humans and robots is a preferable option.
A precisely designed scenario is necessary to overcome the disadvantages of humans working with robots. In addition, collaboration with robots helps to reduce fatigue and supports human safety [8]. Such a setting is a solution for companies facing problems related to the human factor while still allocating affordable capital for robots. The collaboration system separates a job into detailed tasks and allocates them to humans and robots, which mitigates human fatigue. This is supported by the collaboration's main target, which is to give robots the ability to adapt and a greater capability for understanding human conditions [9]. This adaptive characteristic allows robots to identify the status of humans throughout the working process [10]. Additionally, for robots to understand the human status, we need to know the characteristics of each activity carried out by humans. A harmonious human–robot collaboration not only achieves the productivity aspect of sustainability but also increases the well-being of workers, an important indicator of social sustainability in manufacturing systems.
There are several challenges in human–robot task planning: providing low-cost and easy-to-use tools for the scenario to achieve an affordable production cost, finding an optimal combination of human and robot, increasing efficiency and productivity, and minimizing the stress and workload on the human [11]. Productivity can be achieved easily and rapidly by setting up a human–robot collaboration scenario in the workplace [12]. At the same time, the safety aspect is important in the collaboration because the human must be protected from accidents. Some approaches based on virtual commissioning, such as augmented reality, have been used to support safety in assembly tasks that require loading a 30 kg object in hybrid cells [13]. Supported by advances in sensing and communication technology, augmented reality has become one option for controlling the robot in the scenario [14]. Moreover, augmented reality can also be combined with other approaches, such as force sensors, to manually guide robots in the assembly task [15]. Another consideration is to use an electroencephalogram to understand how the human brain adapts to different tasks [16]. However, virtual commissioning and electroencephalography require the workplace conditions to be fully controlled and managed, which poses another challenge when developing the scenario.
Human–robot collaboration (HRC) is the study of collaborative processes in which human and robot agents work together to achieve shared goals. HRC is expected to become widely adopted by industry in the near future. Currently, research issues in HRC have mostly focused on safety concerns for human operators working with robots in the workplace. A review of approaches that provide smooth interaction between humans and robots shows that multimodal communication frameworks offer potential advantages, whether through visual guidance and imitation, voice commands, or haptic interaction [11]. Augmented reality glasses and smartwatches have also been used to build close communication and awareness between operators and robots in a fenceless environment by visualizing information [13,15]. Patterns in human brain signals have likewise been identified to determine the proper command to send to the robot [16]. Additionally, frameworks have been proposed for the easy and fast set-up of the human–robot collaboration workplace layout and for task generation based on certain criteria [12]. These results show that smooth and socially friendly collaboration between humans and robots is a promising opportunity to improve workplace productivity with sustainable advantages. Collaboration is an attractive solution in this optimization era because it still considers the human contribution in the workplace. Up until today, little attention has been given to the sustainability potential of the human–robot collaboration paradigm in manufacturing. It is therefore important to raise awareness of sustainable HRC research focusing on better human intention prediction and communication with collaborative robots. This paper describes a study that attempts to fulfill the goal of smooth collaboration by sensing human muscle signals during movement and using this information for a safer and more efficient robot work plan.

1.2. Motion Study for Human–Robot Collaboration

Humans perform many types of motions during assembly activities. Ensuring that robots understand all human motions in an activity is challenging due to the complex computational process. Another limitation is the difficulty of manually programming the whole-body motions of robots with numerous degrees of freedom [17]. Due to this limitation, the computation process must select which human motions to characterize and send to the robots for recognition. Analyzing the human motions most frequently performed during the assembly activity is useful for the recognition process because these motions are predictable. This condition implies a clear need for manufacturing scenarios with a synergy of skills from humans and robots working together in the same area to maintain sustainable working conditions and environment, health, and safety [18].
Work study is carried out to understand the characteristics of activities that have an impact on fatigue and can be allocated to robots. Methods–time measurement (MTM) can be used as the foundation for determining improvements in the operator's work activities so that the movements used are efficient. Factors such as efficiency, quality, and safety must also be considered in designing collaborations between humans and machines [19,20]. Motion analysis can improve productivity by observing the stop locations of motions and detailing the motions in machine–human work [21]. Complementing MTM analysis is Gilbreth's identification of 18 elemental movements of the human body, named Therbligs. Motion analysis based on Therbligs is efficient because it is a cost-effective tool [22]. Other benefits of Therbligs analysis are minimizing human error, achieving flexibility, and improving adaptability [23]. Therbligs can be applied broadly, not only in the service industry but also in manufacturing, and can be complemented with other methods, such as value-stream mapping and energy supply modelling [24,25]. The results of motion analysis can provide information about the human status and thus inform the robot.

1.3. Fundamental Aspects of Human–Robot Collaboration

The various methods for connecting humans and robots are based on the specific characteristics of each party. Considering the reality in industry, the concerns in the analysis relate to the human, the object (product), and the environment. Mostly, these concerns involve adjusting the robots to the human, object, and environmental conditions. Accordingly, much research has been conducted to adapt robots to the characteristics of the other parties (human, object, and environment). To accomplish this purpose, information on the characteristics, habits, or special features of the other parties is needed. An object in the production process can be depicted as the product assembled by the operator. The affordance of the object informs the system about the kind of activity the operator will carry out next [26]. This means that the object's condition follows the value-adding, sequential process in the workstation. It can be categorized as static and predictable, and is thus more controllable by the process. The environment comprises the conditions around the humans while they complete the production process. The environment can be instrumented to monitor changes in the conditions under which humans perform their activities and to generate data [27]. The environmental condition depends on the situational setting in the workplace and can easily be controlled by the company to support the production process. Throughout the collaboration, the environment needs to be kept in a structured condition to make it easier to distribute the sensors; as a result, the robots can perform the task steadily [7]. The remaining party is the human, whose inconsistency persists even in a good environment. This inconsistency relates to maintaining performance during tasks. Humans face difficulties in maintaining their performance in long-duration shifts. In many applications, humans must constantly repeat a task for 8 h, and sometimes longer if overtime is required. Thus, the company needs to address this problem by collaborating with robots to improve the consistency of human performance. With this purpose, one objective is to better understand human characteristics during task performance.
An understanding of human characteristics during task performance can be attained from biofeedback from the human body. The human body provides many forms of biofeedback, such as blood pressure, heart rate, skin conductance, and muscle activity [28]. Muscle activity analysis is well suited to depicting the human state for the kinds of activities used in the collaboration. Muscle feedback informs the system about how much humans use their muscles to carry out an activity, and it provides signal patterns and intensity information about activities, which benefits the recognition process [29]. One device that is useful for analyzing human muscle activity is the electromyograph. A recognition process using electromyography (EMG) can predict the human intention in the collaboration and use it as an indicator that can be sent to the robot. Based on this information, the robot can respond with a specific activity in the collaboration.

1.4. Electromyograph as a Communication Device

The first application of EMG in human–robot collaboration was controlling robotic arm functions to perform an activity, such as lifting, based on human hand activity [30]. Human hands have also been used to control a wrist exoskeleton based on flexion, extension, ulnar deviation, and radial deviation [31]. Another application of human–robot collaboration for lifting an object is based on the vertical direction (up, down, none) of human movement gestures [32]. Attaching an EMG sensor to human muscles during a specific activity provides information about its patterns. Information received from the EMG can be used to develop insights into the kinds of human motions. The numerous hand activities that can be analyzed using EMG include closing, extension, flexion, finger straightening, grasping, pinching, and various poses [24,33,34,35]. For analyzing hand motions, two to six sensor channels are attached to the human hand. More channels attached to the human muscles will generate more muscle signals and will likely increase the accuracy of classification. However, the data collection process for EMG-based analysis, which requires attaching sensors, should not disturb the human or cause discomfort during the activity. Attaching as few sensors as possible to the human body is highly advisable to avoid interference with the human activity.
Attaching sensors to the human body to achieve collaboration with robots is still challenging. In camera-based applications, however, predictions are usually based on an object's position rather than on the human intention detected from biosignals. For example, a camera can detect or capture gestures from the human hand to provide instructions or commands to the robot [36]. Another study on human gesture recognition for dynamic and static actions captured images with the Kinect camera to interact with the robot [37]. In contrast, using EMG for muscle detection collects muscle signals related to the kinds of motions performed by a human during an activity. The electromyograph is a device that can predict movements in a simple and very efficient way [38]. Another consideration is that using EMG can solve problems related to lighting limitations and camera occlusion [39]. Considering the precision of movements in a limited area, it is better to use sensors for recognizing human gestures than an optical-based approach [40]. Area limitations also make it difficult to place cameras for capturing human activities. This highlights one advantage of sensor-based human gesture detection in limited spaces or places with numerous objects in the collaboration area, which have a blocking effect. Using sensor-based methods such as EMG to detect human activities provides two important advantages: fast responses and a wide sensing area.
To support sustainable manufacturing in its innovation and social aspects, this paper proposes a human–robot collaboration (HRC) scenario using the electromyograph as a communication device. A further recommendation for better human–robot intention communication emphasizes sustainable manufacturing practices. One advantage of this study is that it demonstrates an HRC scenario which can improve the sustainable environment with higher productivity and maintain ergonomic work conditions by ensuring less fatigue, a better understanding of human intentions by the robot, and smooth collaboration. The EMG device built in this paper is low in cost, consisting of only a single-channel EMG sensor and an Arduino MKR1000 board. The rest of this paper describes the methodology for building a new scenario using motion analysis, characterizing each motion in the EMG signals, and classifying them into six basic human motions. Section 3 presents the results of the scenario based on motion study, signal processing, and classification as the foundation of the collaboration. The paper ends with the conclusion and possible future work that could be implemented in the scenario.

2. Materials and Methods

2.1. Human–Robot Collaboration Framework for Sustainable Manufacturing

A paradigm shift within modern automated manufacturing practices has allowed the dominant use of robots. However, this change can conflict with the sustainability principles adopted by manufacturing companies. Sustainable manufacturing focuses on human well-being, which entails various objectives and responsibilities in the social aspects of work [41]. In this respect, humans retain a dominant role in manufacturing due to their advantages, such as the flexibility of their cognitive processes [4]. Sustainable manufacturing practices conceivably engage with human empowerment in social aspects, and human workers play a pivotal role in manufacturing practices based on these advantages. The adjustment process is therefore significant in maintaining the sustainable application of a modern manufacturing system. Consequently, collaboration between humans and robots provides an effective way to achieve sustainable manufacturing practices. The framework in Figure 1 illustrates the proposed concept of this study, which is a collaborative scenario between humans and robots to support sustainable manufacturing practices. The proposed concept combines two approaches; namely, an ergonomic perspective and artificial intelligence. The ergonomic approach focuses on determining the activities that generate exhaustion in human work and that further support the utilization of robots. The artificial intelligence approach is then employed to realize the expected collaboration between humans and robots through the classification of human intentions. It is expected that such collaboration can achieve sustainable manufacturing practices by respecting social aspects, such as the human role. This approach is pursued to preserve job opportunities for human workers despite the paradigm shift towards automation. Accordingly, it is expected to mitigate the exhaustion caused by human activities through robot allocation and to achieve work performance efficiency. Since robots are involved to mitigate the exhaustion of human workers, collaboration between the two is achieved by applying artificial intelligence based on human muscle signals.

2.2. The Human–Robot Collaboration Task Scenario

The analysis of the basic motions of an assembly system is the first step of the proposed method for determining the tasks to be allocated to humans and robots. The MTM analysis procedure breaks an activity into basic motions and estimates the standard predetermined time for each motion [42]. Gilbreth used 18 types of motions in the MTM analysis for classifying human activities. Gilbreth classified the 18 basic movements into effective and ineffective motions based on the motion characteristics, as shown in Table 1 [43]. The 18 different motions were generated from the analysis of human body movements during the performance of various kinds of work [44]. The results of this basic motion analysis will provide a new standard operating procedure in human–robot collaboration scenarios. In evaluating which task is to be allocated to the robot, one can consider many factors collected from previous experiences related to the characteristics of motions performed by humans. Figure 2 depicts the flow for evaluating the tasks to determine which are suitable for collaboration with the robot. Other considerations for the allocation might include the task characteristics in the assembly process and the robot characteristics.

2.3. Method

This section describes the method used for analyzing human motion from the EMG signal and the method used as the human–robot interface. The human provides information about his/her intention in the collaboration and his/her hand activity to the robot, as shown by the EMG result. The EMG detects human muscle activity through the electrode patch attached to the human hand. Data augmentation is used to increase the range of the data without removing its characteristics. Afterward, the continuous wavelet transform is used to convert the signal from the time domain to the time–frequency domain and show the power distribution of each signal. Finally, the result of the feature extraction is used as the input for classification with a convolutional neural network (CNN), and the classification result directs the robot to perform specific actions. Figure 3 shows the scheme of the proposed method for analyzing the EMG data so that it may be used to inform the robot.
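As a reading aid, the pipeline in Figure 3 can be summarized in pseudocode. This is a minimal sketch assuming Python; the helper names are hypothetical placeholders for the steps detailed in Sections 2.3.1–2.3.4, not the authors' implementation.

```python
# Hypothetical outline of the pipeline in Figure 3 (placeholder names, not the authors' code).

def train_pipeline(raw_windows, labels):
    """Offline: augment the recorded 2-s EMG windows, convert them to scalograms, train the CNN."""
    windows, labels = augment_with_gaussian_noise(raw_windows, labels)    # Section 2.3.2
    scalograms = [continuous_wavelet_scalogram(w) for w in windows]       # Section 2.3.3
    return train_cnn(scalograms, labels)                                  # Section 2.3.4

def infer_intention(raw_window, cnn_model):
    """Online: one 2-s EMG window -> basic-motion label sent to the robot."""
    scalogram = continuous_wavelet_scalogram(raw_window)
    return cnn_model.predict(scalogram)   # e.g., "move", "hold", "idle"
```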

2.3.1. EMG Data Acquisition

A low-cost electromyograph, which combined a Groove EMG sensor and an Arduino MKR1000 for data acquisition, was used to collect the EMG data. The Groove EMG has two amplifiers and a high-pass filter. The electromyograph utilized the Arduino 12-bit resolution ADC. The two amplifiers were an instrumental amplifier and an operational amplifier with regulated gain. The high-pass filter removed the DC offset at 0.16 Hz. The sampling rate used in this device was 1000 Hz, and the EMG device captured muscle activity signals, as illustrated in Figure 4.
Humans dominantly use their hands and fingers to perform assembly processes, from which the locations for sensor placement were determined. Research was conducted to evaluate the human hand activity by considering the muscles in the forearm. One of the muscles with a connection to the human finger and wrist is the flexor digitorum profundus (FDP). Past researchers have analyzed finger and wrist manipulation from the FDP [45,46,47]. The FDP is one of the human forearm muscles which produces any kind of hand movement [48,49]. Figure 5 shows the position of the sensor placement for receiving signals from the FDP.
Eight volunteers (5 males and 3 females) participated in this analysis. The data collection process used a single-channel sensor to classify the human intention based on flexor digitorum muscle activity, as shown in Figure 5. The data were obtained from six types of basic motions performed by the participants (Figure 6). The participants were asked to perform each motion for 2 s and to repeat each motion 40 times to simulate the tasks performed in a repetitive assembly job. From this data acquisition, 1920 samples were collected, resulting from 8 participants × 6 basic motions × 40 replications.
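For illustration, host-side acquisition of such trials could look like the following sketch. It assumes Python with pyserial and firmware on the MKR1000 that streams one ADC reading per line over USB serial; the port name, baud rate, and streaming format are assumptions, not details taken from the paper.

```python
# Sketch of host-side acquisition, assuming the MKR1000 streams one 12-bit ADC
# reading per line over USB serial; port, baud rate, and framing are assumptions.
import numpy as np
import serial  # pyserial

FS = 1000          # sampling rate reported in Section 2.3.1 (Hz)
TRIAL_SECONDS = 2  # each basic motion was held for 2 s

def record_trial(port="/dev/ttyACM0", baud=115200):
    """Collect one 2-s EMG window (about 2000 samples) from the serial stream."""
    samples = []
    with serial.Serial(port, baud, timeout=1) as ser:
        while len(samples) < FS * TRIAL_SECONDS:
            line = ser.readline().strip()
            if line:
                samples.append(int(line))   # raw 0-4095 ADC counts
    return np.asarray(samples, dtype=np.float32)

# Dataset size in the study: 8 participants x 6 motions x 40 repetitions = 1920 windows.
```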

2.3.2. Data Pre-Processing with Augmentation

A convolutional neural network (CNN) is a deep neural network method used to process data with two-dimensional characteristics. The CNN has various layers that are generally categorized into feature learning and classification. The classification made by the CNN is based on the class probabilities from the fully connected layer. However, this method can perform poorly when the amount of data is limited: the CNN needs a large amount of input data to achieve good performance, which is considered one of its drawbacks [50]. For that reason, data augmentation becomes a solution in the preprocessing of the input images to increase the amount of training data for the CNN [51].
This experimental study employed offline data augmentation for the data received from the participants. Data augmentation is effective for classification with a limited number of biological signals [52]. It can increase the accuracy and optimize the results of any kind of feature extraction in the classification [53,54], and it has proven effective for prediction on small datasets [52,55]. One method of augmenting data is to add uniform random noise of up to 40%, which can increase accuracy by approximately 9% [56]. The current data were augmented using Gaussian noise, a very common approach in this application. Although there are other methods for augmenting data, Gaussian noise offers the benefit of increasing the data range without changing the local characteristics of the data [57]. Amplitude scaling of the whole time series, which multiplies all sample data by a certain value [52], was also used in this scenario with a 0.5% standard deviation. The purpose of using data augmentation in this analysis is to increase the number of samples. From the data acquisition, a total of 1920 samples were collected; after augmentation, 3096 samples were used as input for the model.
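A minimal sketch of these two augmentation operations, assuming Python with NumPy, is shown below; the additive-noise level is an illustrative assumption, while the 0.5% standard deviation for amplitude scaling follows the text.

```python
# Sketch of the two augmentation steps described above: additive Gaussian noise and
# whole-window amplitude scaling (0.5% standard deviation around 1.0). The additive
# noise level is an assumption chosen only for illustration.
import numpy as np

rng = np.random.default_rng(seed=0)

def add_gaussian_noise(window, noise_fraction=0.01):
    """Add zero-mean Gaussian noise scaled to a fraction of the window's std."""
    noise = rng.normal(0.0, noise_fraction * window.std(), size=window.shape)
    return window + noise

def amplify_window(window, scale_std=0.005):
    """Multiply the whole 2-s window by a random factor near 1 (0.5% std)."""
    return window * rng.normal(1.0, scale_std)

def augment_dataset(windows):
    """Return the original windows plus one augmented copy of each."""
    return list(windows) + [amplify_window(add_gaussian_noise(w)) for w in windows]
```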

2.3.3. Continuous Wavelet Data Feature Extraction

A signal generated by the human body is influenced by electrical activity, such as that from the brain [58]. Similar to brain signals, muscles also provide human biofeedback that can be analyzed through their electrical activity. EMG signals are non-stationary; their properties may vary over time [59]. Instantaneous changes in the signal pattern, which manifest as upward and downward spikes, are also present in electrical muscle activity. Many types of spectral analysis, such as the Fourier transform, short-time Fourier transform, and wavelet transform, are employed to analyze signal patterns [60,61,62].
Continuous wavelet transform is used to produce the scalogram, which is used for short- and long-time localization and for low- and high-frequency localization [63,64]. Using the scalogram with the Morse feature as an input for a convolutional neural network’s training process can improve the validation accuracy [65]. The scalogram shows the square magnitude of the continuous wavelet transform, which is defined in Equation (1) [66].
|W_x(b,a)|^2 = \left| |a|^{-1/2} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t-b}{a}\right) dt \right|^2
The scalogram provides information about the magnitudes of signals with different color plots between time and frequency [58]. Equation (2) is the continuous wavelet transform,
W_x(b,a) = \int_{-\infty}^{\infty} x(t)\, \psi_{b,a}^{*}(t)\, dt
where x(t) represents the original signal and ψ(t) is the mother wavelet function [66]. The variable a describes the scale factor for the function ψ(t), while b is a shifting factor for translating the function ψ(t). Muscle signals have characteristics of varying amplitude and frequency and discontinuous localization in the data over a period of time [67].
In this study, the continuous wavelet transform was used to convert the EMG signal of each basic motion into a scalogram. The Morse wavelet was adopted because it is useful for analyzing signals with time-varying amplitude and frequency characteristics. Another advantage of the Morse wavelet is that it helps localize discontinuities and extract features in the time and frequency domains [68]. The parameters for converting the signals were as follows: a sampling frequency of 1000 Hz, a signal length of 2048, and 48 voices per octave. The resulting scalogram for each basic motion signal is shown in Figure 7.
Figure 7 shows the different results for the six basic motions. Each panel shows the difference in magnitude of each motion, representing the original human muscle EMG data, plotted as frequency against time. The scalograms in Figure 7 show the unique sudden shifts in the EMG signal through the change in color. Additionally, the EMG signal, which is one-dimensional, is transformed into a two-dimensional signal matrix. Time–frequency analysis provides the distribution of the signal over frequency and time, and this is used as the input for deep learning with the CNN model. Using the scalogram, which depicts the signal energy of each motion, as input gives this study the potential to produce better classification performance.
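A sketch of this conversion, assuming Python with PyWavelets, is given below. PyWavelets does not provide the Morse wavelet used in the study (as available, for example, in MATLAB's cwt), so a complex Morlet wavelet and a log-spaced scale grid are substituted purely for illustration.

```python
# Illustrative scalogram generation with PyWavelets. A complex Morlet wavelet is
# substituted for the Morse wavelet used in the paper; the scale grid is an assumption.
import numpy as np
import pywt

FS = 1000                                 # sampling frequency (Hz), as in Section 2.3.3
SCALES = np.geomspace(2, 512, num=128)    # log-spaced scales (illustrative choice)

def scalogram(window):
    """Return |CWT|^2 of one EMG window as a 2-D (scale x time) array, cf. Equation (1)."""
    coeffs, _freqs = pywt.cwt(window, SCALES, "cmor1.5-1.0", sampling_period=1 / FS)
    return np.abs(coeffs) ** 2

# In practice, each scalogram is rendered as a 227 x 227 RGB image before being fed
# to the CNN described in Section 2.3.4.
```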

2.3.4. Classification with a Convolutional Neural Network

Many studies have utilized artificial neural networks (ANNs) to classify extracted features, especially when analyzing a single-channel sensor. Some research has noted that it is difficult to produce classifications from only one sensor channel [69]. When using machine learning to analyze a single-channel sensor, some researchers have therefore chosen to use a small number of classes [69,70,71].
An extension of the ANN, named deep learning, can reduce the learning cost. A convolutional neural network (CNN) is a deep learning algorithm that can process images as input and determine the characteristics used in the learning process for differentiating motions from one another. Here, the CNN was used to learn the characteristics of the motion signals produced by the human in the scenario. The CNN is a well-known architecture in deep learning and is used to classify images with remarkable results [72]. It consists of a step for the extraction of characteristics, followed by a typical feedforward neural network (NN), which performs the discrimination. Generally, the CNN architecture consists of three kinds of layers: a convolutional layer with a finite number of filters for the input data, a pooling layer to reduce the dimensions, and fully connected layers [73]. A CNN works with image input, which passes through many deep layers to find the best characteristics of the images from large-scale data [74]. Erözen employed a CNN to classify 6 hand gestures using STFT spectrograms as input images with 8 sensor channels and achieved 83.97% accuracy [75].
The AlexNet architecture was utilized to classify the scalograms of the EMG signals. The input image resolution in this analysis was set to 227 × 227. Figure 8 shows the AlexNet architecture, which comprises 5 convolutional layers consisting of convolutional filters, and 6 ReLU (rectified linear unit) activations with nonlinear characteristics. ReLU is used in deep learning to speed up convergence during the training process [76]. Three max-pooling layers are used for pooling.
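One way to instantiate such a classifier, assuming PyTorch/torchvision rather than the authors' own tooling, is sketched below; the optimizer and learning rate are illustrative choices, as the paper's training hyperparameters are not restated here.

```python
# Sketch of an AlexNet-based classifier for the six basic-motion scalograms,
# assuming PyTorch/torchvision; optimizer settings are illustrative only.
import torch
import torch.nn as nn
from torchvision.models import alexnet, AlexNet_Weights

model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1)   # pretrained convolutional layers
model.classifier[6] = nn.Linear(4096, 6)                 # replace the final layer: 6 classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

def train_step(images, labels):
    """One optimization step on a batch of 227x227 RGB scalogram images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```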

3. Results

3.1. The Product

The evaluation was performed in the assembly station of a GPU company. The GPU produced is shown in Figure 9. It had dimensions of 24 cm × 13.2 cm and weighed 1185 g. The process of assembly involved combining three components, as shown in Figure 9: a base, a main board, and a fan. Five activities were needed to assemble the three parts shown in Figure 10, as follows:
  • Workstation 1: Putting the rings into the main board;
  • Workstation 2: A screwing process to assemble the main board with the side part;
  • Workstation 3: A screwing process to assemble the GPU with the base;
  • Workstation 4: Gluing the fan and combining it with the GPU;
  • Workstation 5: A screwing process for connecting the GPU and the fan.

3.2. The Workstations

3.2.1. Workstation 1

At Workstation 1, the rings were inserted into the main board. The task was carried out by one operator without any tools. The operator inserted each ring by hand by selecting the area in which the seal ring was to be placed and applying some pressure to the main board. Figure 11 shows the operator inserting a ring into the main board. In total, six seal rings needed to be placed by the operator. The total time needed for this operation was 1525 TMU (54.9 s), and the output rate was two GPUs. Table 2 shows that nine basic motions were used to complete the task, with the hold position being dominant. A deeper analysis of these basic motions involved classifying them as effective or ineffective. Based on Table 2, 32% were effective motions, while the rest were ineffective. The hold motion dominated the ineffective motions of the operator, with a percentage of 22%.
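The times above are reported in both TMU and seconds; the standard MTM conversion of 1 TMU = 10^{-5} h = 0.036 s is consistent with the values given throughout this section, for example:

1525\ \text{TMU} \times 0.036\ \tfrac{\text{s}}{\text{TMU}} \approx 54.9\ \text{s}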
The object load presented the possibility of assigning the task to the robot. In contrast, the need for accuracy during the placement of the ring seals into the main board did not allow a robot to complete the task. Regarding the classification of human basic motions, it was possible to assign certain motions to a robot. The purpose here was to analyze the ineffective motions of the human and assign them to the robot. Since the proportion of hold was higher than the others, a scenario could be established for HRC based on this basic motion. Hold can also be applied to the robot with other operations, such as move, reach, and grasp.

3.2.2. Workstation 2

At Workstation 2, the main board was assembled with the side part of the GPU. This operation was performed by one operator using an automatic screwdriver. The assembly of the main board required four bolts. Figure 12 shows the operator applying a screw to assemble the main board with the side part of the GPU. The cycle time for this operation was 974.37 TMU (35 s) with one product output.
Table 3 shows that to finish this task, nine motions were needed. The proportion of ineffective motions was around 60%, and the hold position also made up the highest proportion of all motions performed by the operator. This position had a relatively high proportion compared to the others because the use of an automatic screwdriver required the operator to position some objects and tools. Despite the attachment to the main board, the object load here was still acceptable for a robot because the weight was less than 1 kg. Assigning this process to a robot would be possible because of the uniform conditions for screwing the bolt into the side part of the GPU. However, the position for attaching the part was on the side of the GPU. Hence, it would be difficult to set up the position due to complications in positioning the main board to support the screwing process (vertical position). Positioning the main board horizontally was possible, but it was difficult to change the automatic screwing position to the side position. For this reason, a human is better suited for the task. Regarding the motions performed by the operator to complete the task, the hold position could be assigned to the robot. The robot could hold the object while the operator completed the screwing process, and a chuck would be needed to establish a tight hold on the main board.

3.2.3. Workstation 3

At Workstation 3, the worker used an automatic screwdriver to assemble the main board with the base. Assembling this part required eight bolts, a cable attachment, and nine basic motions with 1326.6 TMU (47.76 s). Figure 13 shows the assembly process for the main board and side part of the GPU. The output of this process was one for each cycle.
The comparison of the effective and ineffective motions of this task in Table 4 with those at the other workstations did not show a significant difference; the proportions were approximately 39% and 61%, respectively. Across the whole task, the hold position was more dominant than the others, at 27%. In addition, the move activity had a higher percentage than the other types of motion, at 12%. Based on the main process activity, the screwing process used to assemble the object could potentially be taken away from the human. Because the activity assembles two parts in horizontal positions, it could be assigned to a machine/robot.

3.2.4. Workstation 4

At Workstation 4, the main board was glued together with the GPU fan by one operator. The main purpose of this task was to apply glue and tape to attach the main board to the GPU fan. Figure 14 depicts an operator performing the gluing process of the main board before attaching the GPU fan to it. Based on Table 5, the total time for this operation was 897.56 TMU (32.31 s). The operator used tools to apply glue to the object.
Of the total 897.56 TMU, 37% were effective motions and 63% were ineffective motions. The idle time was greater than at the other workstations because of the operator's tendency to use the right hand and the characteristics of the task. The position motion had the next highest proportion due to the operator's need to place an attachment on the object. The need for accuracy and pressure to attach the glue and tape makes this operation difficult to assign to a robot. Up to nine pieces of tape needed to be placed onto the main board.

3.2.5. Workstation 5

Finally, at Workstation 5, the screwing process was used to assemble the main board with the GPU fan. The operator used a screwdriver and six bolts to perform the assembly. Referring to Table 6, the cycle time needed to complete this task was 1269.7 TMU (45.71 s), with an output rate of one product for each cycle. Figure 15 depicts the assembly process for this workstation. For this task, ineffective motions dominated, comprising up to 80% of the total. Further analysis of the ineffective motions revealed that the hold position had the highest portion, 35%. As previously shown, this task could be assigned to a robot. It would then be possible to assign the hold position to the robot while the human completed the screwing process.
The evaluation of all of these workstations in building a new scenario of HRC showed the need to consider the kinds of activities performed by the operator. This section compares the five workstations and the basic motions used by operators at each. The results of this comparison present possibilities for developing new collaborations between humans and robots. Another benefit is the possibility of reducing the human load related to the ineffective motions, such as hold and idle. Minimizing holding by humans is one purpose of Therbligs, in addition to eliminating ineffectiveness and performing multiple motions in combination [43]. Figure 16 shows that two workstations had higher hold times (Workstations 3 and 5). Another consideration is that the idle time was higher at these two workstations than at the others. This indicates that Workstations 3 and 5 present the possibility of HRC, based on MTM analysis.

3.3. Implementing Human–Robot Collaboration

3.3.1. The EMG-Based Communication for Human–Robot Collaboration

To support the application of the motion study analysis, this study utilized a low-cost electromyograph built with only one sensor channel. One reason for choosing a single-channel sensor is that such a design minimizes operator discomfort and subjectively perceived interference, although, in theory, more channels of muscular information would provide a more accurate prediction of the motion intention. One study also found that low-cost EMG is useful for human biofeedback applications and clinical analysis [77]. The classification results for the EMG signals of the six basic motions are shown in Figure 17, which presents the training process for the classification, with a training accuracy of up to 100% and a validation accuracy of 88%. The corresponding confusion matrix is shown in Figure 18. Another study considered fewer classes (four gestures) while utilizing a single-channel EMG, and the accuracy reached around 90% [71].
The scenario favors the classes with high classification accuracy in the confusion matrix in Figure 18, such as move, hold, idle, and release. For the scenario in Figure 16, detecting the move motion from the classification directs the robot to move the GPU to the holding position for the screwing process. After the human finishes the screwing activity, the human's idle state directs the robot to move the GPU to the conveyor. These two basic motions were chosen because their accuracy is higher than that of the other classes, which minimizes errors arising from misclassification of the human motion intention. Figure 18 shows some misclassification between reach, grasp, and release because of the similar characteristics of the signal magnitudes relative to time and frequency within these three classes.
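The resulting decision logic can be summarized as a simple mapping from classified intention to robot action; in the sketch below (Python), the class names follow the text, while the robot-command methods are hypothetical placeholders rather than a real robot API.

```python
# Illustrative dispatch logic for the scenario: only the reliably classified
# intentions trigger robot actions; the robot methods are hypothetical placeholders.
TRIGGER_ACTIONS = {
    "move": "move_gpu_to_holding_position",   # human starts moving -> robot presents the GPU
    "idle": "move_gpu_to_conveyor",           # human finished screwing -> robot unloads
}

def dispatch(predicted_motion, robot):
    action = TRIGGER_ACTIONS.get(predicted_motion)
    if action is not None:
        getattr(robot, action)()   # e.g., robot.move_gpu_to_holding_position()
    # reach, grasp, and release are ignored here because of their lower accuracy
```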

3.3.2. Scenario Description

The previous analysis indicated the possibility of combining the tasks of Workstations 3 and 5 for HRC. Workstation 3 entails the assembly of the main board with the base, while Workstation 5 involves screwing the main board onto the GPU fan. The first step is identifying the time needed to complete the tasks at Workstations 3 and 5: a total of 94 s was needed for both. The next step is setting up the collaboration scenario with the robot. Here, three elements were used: a human, a robot, and an automatic screwdriver. The set-up scenario is shown in Figure 19, which illustrates that, at the first workstation, the human performs the loading and unloading of the main board and the base to the automatic screwdriver. While the human performs the loading and unloading, the robot grasps, moves, and holds the GPU fan in the collaboration area. After the automatic screwdriver finishes the process, the human attaches the main board to the GPU fan and proceeds to the Workstation 5 scenario (the screwing process). Finally, after the human finishes the screwing process, the robot moves the object to the conveyor.
To accomplish this scenario, the workplace was also re-laid out in the experiment. The re-layout serves to achieve simultaneous collaboration for each party and separates the workplace into three areas: a human area, a robot area, and a collaboration area. The collaboration area accommodates the robot when it is holding the GPU fan of a certain weight for a given amount of time while the human carries out the operation. Using motion analysis for human–robot collaboration, the unproductive characteristics of jobs carried out by a human can be minimized [78]. The move and hold motions performed by the human in the task are unproductive and cause fatigue; thus, hold and move should be assigned to the robot. The HRC treats the hold and some movements as shared tasks, meaning that the scenario uses a shared workspace for a shared task with physical interaction [79]. The sequence of motions performed by the human, robot, and machine in the collaboration is arranged in Figure 20, and Figure 21 shows the layout model of the collaboration between the human, the robot, and the screwdriver. Figure 19 shows the robot performing the move and hold motions. Removing the hold motion from the human will reduce the fatigue that might result from repetition. Consideration of the biomechanical load in effective motion will help the designer to balance operational performance and human safety [80].
This will impact the sustainability condition because the human workload and suitable physical effort are among the important factors supporting human well-being in the workplace [81]. It increases synergy among areas/departments and provides a healthy, meaningful, and pleasurable work atmosphere for developing sustainable work [82]. This is in line with the main purpose of developing sustainable manufacturing operations with safety protection [83]. Although this framework is limited in that it infers human intention only from human motion, it gives promising insights into sustainable development in the workplace to increase human well-being. Furthermore, the reduction in lead time resulting from the HRC scenario would meet sustainable manufacturing goals. Lead-time reduction would increase productivity, which is very important for maintaining sustainable working conditions and environments. The lead time is one of the key metrics in sustainable manufacturing, along with energy consumption, carbon emissions, and production costs [84].

4. Discussion

Human–robot collaboration (HRC) is the study of collaborative processes in which human and robot agents work together to achieve shared goals. There are numerous ways to build HRC; one is to develop it based on human motion data. This paper presents an approach based on motion analysis to evaluate the current scenario in the company and construct an HRC scheme that reduces the robot's uncertainty about human movements in the production line. As a robot maintains higher productivity over a prolonged duration of work than a human does [5], allocating different tasks to the human and the robot requires information on the human's capability in the task procedure, because his/her physiological condition may change [85].
Applying HRC in this scenario would provide an improvement over the current assembly condition. Several benefits of this proposed scenario would be as follows:
  • Combining Workstations 3 and 5 would reduce the total cycle time needed to finish the operations. The previous analysis showed the total cycle time for Workstations 3 and 5 to be 93.47 s, whereas the new HRC scenario would take 81.85 s, as shown in Figure 20. This reduction in time would yield higher throughput (quantified after this list).
  • Human fatigue would be reduced in this scenario because the move and hold motions were assigned to the robot. The human worker could focus on the screwing process, which adds value to the product (see Table 7).
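Using the cycle times listed above, the relative improvement can be worked out directly:

\frac{93.47 - 81.85}{93.47} \approx 12.4\%\ \text{cycle-time reduction}, \qquad \frac{93.47}{81.85} - 1 \approx 14.2\%\ \text{higher throughput}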
This work shows that human–robot intention communication is achievable for a more socially friendly and sustainable HRC workplace for the assembly process in manufacturing. Although an actual industrial assembly line requires higher accuracy under uncertain human dynamics, a low-cost single-channel device is promising for demonstrating better HRC through movement intention capture. This is in line with EMG being, from an ergonomics perspective, one of the devices with the widest and most successful application in the industrial environment without affecting the worker [86].
This scenario also supports sustainability, an important societal factor. In industrial robotics, societal factors focus on reducing the human load in work with heavy loads, high repetition, and unsanitary job conditions [87]. The proposed scenario supports the social aspect with respect to human rights in work and provides good conditions. Moreover, reducing the human load would also provide environmental and economic benefits by using resources efficiently and providing innovation in HRC. Combining robots with AI techniques in the scenario would help to increase effectiveness by considering the knowledge arrangement and practice [3]. The collaboration scenario with a focus on human intention would increase positive aspects, such as social connections, well-being, and interaction engagement [88]. Using a robot would support social relationships due to a high level of social interaction [89].
Social sustainability can be achieved by improving the workplace conditions and environment in the manufacturing industry [90]. However, given the dynamic characteristics of human–robot collaboration, human performance in the working area is influenced by environmental conditions and may have an impact on the safety aspect due to stressful conditions [91]. Focusing on human intention offers a solution to this safety issue. The human intention in the collaboration is understood by the robot by means of the muscle signals obtained from EMG, and the robot performs a particular action when the muscle signals represent the human intention, which supports the safety aspect of social sustainability. This is in line with the main purpose of developing sustainable manufacturing operations with safety protection [83]. Additionally, this scenario only considers EMG data to obtain muscular-level activity; the data cannot be traced back to any personal identity. The EMG data depict the characteristics of the motions and are not related to any personal identity from the social perspective. Furthermore, the reduction in lead time resulting from the HRC scenario would meet sustainable manufacturing goals. Lead-time reduction would increase productivity, which is very important for maintaining sustainable working conditions and environments. The lead time is one of the key metrics in sustainable manufacturing, along with energy consumption, carbon emissions, and production costs [84].
A future improvement of the method may include integration with other types of sensors, such as vision-based sensors, to cross-check the EMG classification. Another possibility is to add more EMG sensors to detect the human's muscle activity. This would be challenging because, with low-cost DAQ (data acquisition), there is a tradeoff with computational processing capability.

5. Conclusions

Building HRC is one option for supporting sustainable manufacturing given the advancement of technology. Ergonomic analysis also supports collaboration by using motion study to enhance worker well-being through reducing highly repetitive task loads. The combination of an ergonomic perspective and HRC is useful for the innovation and social aspects of sustainability. The ergonomic approach through motion study involves evaluating the company's current scenario and constructing a human–robot collaboration. Ineffective motions will impact human performance over an 8-h work shift; in this case, holding can be shared with a robot. Based on the evaluation of the left- and right-hand charts, the operations at Workstations 3 and 5 had the highest hold times. A scenario was built to combine these two workstations and assign the hold motion to the robot while the human screws in the bolts. The resulting percentage of human processing time is 72%. The robot reaches 90% utilization taking an object, holding it, and releasing it to the conveyor. The screwdriver is less utilized, at only 47.22%.
The low-cost EMG provides the possibility of using a device for communication between humans and robots. It could detect characteristics of the signal magnitudes in the frequency and time domains for basic human motions. Based on the classification results, communication between humans and robots is possible: the training accuracy was 100%, and the validation accuracy was 88%. From this information, the classes with high validation accuracy can be used as information for the robot about the human's status in the collaboration.
Finally, supported by the low-cost EMG, the proposed scenario developed from an ergonomic perspective would result in greater coordination between the human and the robot, which supports sustainable manufacturing practices. A further benefit of this result is the demonstration of HRC scenarios that can foster a healthy environment with higher efficiency, less human fatigue, a greater understanding of human intentions by robots, and seamless collaboration.

Author Contributions

Conceptualization, C.J.L.; methodology, C.J.L. and R.P.L.; software, R.P.L.; validation, R.P.L.; formal analysis, C.J.L. and R.P.L.; investigation, R.P.L.; resources, C.J.L. and R.P.L.; data curation, R.P.L.; writing—original draft preparation, R.P.L.; writing—review and editing, C.J.L.; visualization, R.P.L.; supervision, C.J.L.; funding acquisition, C.J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study, due to the fact that the data collection involves no greater than minimal risk to participants and the data collected do not contain identifiable information of any individual participant.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Acknowledgments

This research was carried out in the TUL-NTUST Smart Manufacturing Laboratory. The authors acknowledge Distinguished Professor Kung-Jeng Wang, head of the TUL-NTUST Smart Manufacturing Laboratory, for his support and advice while conducting the experiment.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Human–robot collaboration framework for sustainable manufacturing.
Figure 2. Basic motion-based framework for evaluating task suitability for collaboration.
Figure 3. Analysis of the EMG signal for human–robot collaboration.
Figure 4. The EMG device.
Figure 5. Sensor attachment on the human forearm.
Figure 6. Participant’s posture for the six basic motions: (a) reach; (b) grasp; (c) hold; (d) move; (e) release; (f) idle.
Figure 7. Scalograms for the six basic motions: (a) reach; (b) grasp; (c) hold; (d) move; (e) release; (f) idle.
Figure 8. The AlexNet structure.
Figure 9. The GPU product.
Figure 10. Components of the GPU: (a) main board; (b) base; (c) fan.
Figure 11. Inserting the seal rings at Workstation 1.
Figure 12. Assembling the main board with the side part of the GPU.
Figure 13. Assembly of the main board with the base of the GPU.
Figure 14. Gluing the main board and the GPU fan.
Figure 15. Screwing the main board and the GPU fan together.
Figure 16. Comparison of motions at the five workstations.
Figure 17. Training process for the EMG signals.
Figure 18. Confusion matrix for the six classes of basic human motions.
Figure 19. Setup scenario for human, robot, and machine collaboration.
Figure 20. Human–robot–machine chart.
Figure 21. Illustration of the human–robot collaboration scenario.
Table 1. Therbligs basic motion classification.

Effective | Ineffective
Reach | Hold
Move | Rest
Grasp | Position
Release load | Search
Use | Select
Assemble | Plan
Disassemble | Unavoidable delay
Pre-position | Avoidable delay
Inspect |
Table 2. Left- and right-hand chart for the seal ring insertion operation (seconds).

Type of Motion | Left | Right
Turn | 5.4 | 1.3
Reach | 1.3 | 4.8
Grasp | 1.0 | 5.5
Move | 4.7 | 8.7
Hold | 22.3 | 2.0
Position | 4.1 | 9.3
Release | 0.8 | 1.4
Disengage | 3.4 | 3.4
Reading | 9.5 | 9.5
Idle | 2.5 | 8.8
Total | 54.9 | 54.9
Table 3. Left- and right-hand chart for assembly of the main board with the side of the GPU (seconds).

Type of Motion | Left | Right
Turn | 0.3 | 0.3
Reach | 2.7 | 0.9
Grasp | 2.5 | 0.4
Move | 2.1 | 5.4
Hold | 16.3 | 4.5
Position | 2.4 | 8.4
Release | 1.0 | 0.8
Press | 0 | 12.0
Reading | 3.96 | 2.0
Idle | 3.8 | 0.3
Total | 35.1 | 35.1
Table 4. Left- and right-hand chart for assembly of the main board with the base of the GPU (seconds).

Type of Motion | Left | Right
Turn | 0 | 0.5
Reach | 2.5 | 1.7
Grasp | 3.5 | 0.6
Move | 4.7 | 6.6
Hold | 24.1 | 2.1
Position | 3.9 | 7.8
Release | 1.0 | 1.2
Press | 0.0 | 16.0
Pull | 0.0 | 6.0
Idle | 8.0 | 5.3
Total | 47.8 | 47.8
Table 5. Left- and right-hand chart for gluing the main board and the GPU fan (seconds).

Type of Motion | Left | Right
Turn | 1.2 | 0.9
Reach | 2.9 | 3.9
Grasp | 1.6 | 1.2
Move | 2.7 | 7.6
Hold | 10.8 | 1.3
Position | 1.7 | 11.1
Release | 0.6 | 1.9
Press | 0.0 | 2.0
Reading | 0.0 | 0.0
Idle | 10.8 | 2.4
Total | 32.3 | 32.3
Table 6. Left- and right-hand chart for screwing the main board and the GPU fan together (seconds).

Type of Motion | Left | Right
Turn | 1.5 | 2.3
Reach | 2.2 | 1.2
Grasp | 3.1 | 0.3
Move | 2.2 | 5.9
Hold | 14.7 | 18.0
Position | 4.3 | 9.7
Release | 0.7 | 0.7
Disengage | 0.0 | 0.0
Reading | 5.0 | 5.0
Idle | 12.0 | 2.6
Total | 45.7 | 45.7
Table 7. Comparison of types of activities performed by humans, robots, and machines.

Type | Human Left (s) | Human Right (s) | Human (%) | Robot (s) | Robot (%) | Screwdriver (s) | Screwdriver (%)
Process | 650.2 | 1758.7 | 71% | 1944.98 | 85.6% | 555.2 | 24.4%
Idle | 1609.68 | 515.2 | 20.8% | 328.9 | 14.5% | 1718.68 | 75.6%
Hold | 14 | 195.2 | 7.9% | | | |
Total | 2273.88 | 2469.1 | | 2273.88 | | 2273.88 |
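As a small arithmetic check on Table 7 (not part of the original article), the percentages appear to be each agent's process, idle, or hold time divided by that agent's own total recorded time, with the human percentages taken against the right-hand total; the short Python sketch below verifies this reading, and the remaining 0.1% gaps are consistent with rounding of the individual times.

```python
# Check of the Table 7 percentages under the stated assumption:
# robot and screwdriver percentages use the 2273.88 s cycle total,
# human percentages use the right-hand total of 2469.1 s.
ROBOT_TOTAL = 2273.88        # also the screwdriver total (seconds)
RIGHT_HAND_TOTAL = 2469.1    # human right-hand total (seconds)

checks = {
    "robot process (reported 85.6%)":       1944.98 / ROBOT_TOTAL,
    "robot idle (reported 14.5%)":          328.9 / ROBOT_TOTAL,
    "screwdriver process (reported 24.4%)": 555.2 / ROBOT_TOTAL,
    "screwdriver idle (reported 75.6%)":    1718.68 / ROBOT_TOTAL,
    "right hand process (reported 71%)":    1758.7 / RIGHT_HAND_TOTAL,
    "right hand idle (reported 20.8%)":     515.2 / RIGHT_HAND_TOTAL,
    "right hand hold (reported 7.9%)":      195.2 / RIGHT_HAND_TOTAL,
}

for label, value in checks.items():
    print(f"{label}: {value:.1%}")
```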
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
