Article

An Improved Hidden Markov Model for Monitoring the Process with Autocorrelated Observations

1 College of Economics and Management, Nanjing Forestry University, Nanjing 210037, China
2 Department of Industrial Engineering & Management, Shanghai Jiao Tong University, Shanghai 200240, China
* Author to whom correspondence should be addressed.
Submission received: 17 January 2022 / Revised: 13 February 2022 / Accepted: 16 February 2022 / Published: 24 February 2022
(This article belongs to the Special Issue Management of Energy and Manufacturing System)

Abstract

With the development of intelligent manufacturing, automated data acquisition techniques are widely used, and autocorrelation between data collected from production processes has become more common. Residual charts are a good approach to monitoring processes with autocorrelated data. An improved hidden Markov model (IHMM) for the prediction of autocorrelated observations and a new expectation maximization (EM) algorithm are proposed, and a residual chart based on the IHMM is employed to monitor the autocorrelated process. The numerical experiments show that, in general, IHMMs outperform both conventional hidden Markov models (HMMs) and autoregressive (AR) models in quality shift diagnosis, decreasing the cost of missed alarms. Moreover, the times taken by IHMMs for training and prediction are found to be much less than those of HMMs.

1. Introduction

Process monitoring plays an essential role in intelligent manufacturing [1,2]. Statistical process control (SPC) is a quality control technique that uses statistical methods to monitor and control processes. The aim of SPC is to ensure that the process runs efficiently, producing more products that meet specifications while reducing waste. Shewhart control charts are a key SPC tool used to determine whether a process is in control. If the observation value lies within the upper and lower control limits, the process is in control; otherwise, it is out of control. Shewhart charts often assume that the data collected from the process are independent. However, this assumption does not hold in a variety of processes. For example, consecutive measurements of chemical and pharmaceutical processes or product characteristics are often highly autocorrelated, and the chronological measurements of every characteristic on every unit in automated test and inspection procedures are often autocorrelated. Numerous studies have shown that conventional charts do not work well, giving too many false alarms, if the observations exhibit even a small amount of autocorrelation over time [3,4,5,6,7,8,9]. Clearly, better approaches are needed. In the following paragraphs, three common ways to address autocorrelation are introduced.
The first approach to solving an autocorrelation problem is simply to sample from the observation data stream at a lower frequency [10]. This seems to be an easy solution, although it has some shortcomings. Concretely speaking, it makes inefficient use of the available data. For example, if every tenth observation is sampled, approximately 90% of the information is discarded. In addition, since only every tenth datum is used, this approach may delay the discovery of a process shift.
The second approach is to re-estimate the real process variance, aiming to revise the upper and lower control limits. See, for example, [9,11,12,13,14,15,16,17,18,19].
The third approach is to use residual charts. The statistics plotted in these charts are residuals, calculated by subtracting predicted values from observed values. In the implementation of residual charts, the key step is choosing a reasonable prediction model to obtain predicted values. Autoregressive integrated moving average (ARIMA) models are most commonly used to model autocorrelated data. See, for example, [3,8,20,21,22,23,24,25,26]. In addition, multistage or multivariate autocorrelated processes are mostly studied using Hotelling T2 control charts, such as in [27,28,29,30]. Some exceptions include Pan et al., who proposed an overall run length (ORL) to replace T2 charts [31], and S. Yang and C. Yang, who used a residual chart and cause-selecting control chart [32].
With the rapid development of artificial intelligence, machine-learning methods are becoming increasingly popular in SPC for autocorrelated observations [33,34,35,36,37,38]. Most of the related literature focuses on neural network methods [39,40,41,42]. Some successful applications have also been reported, for example, the failure diagnosis of wind turbines [43] and the prediction of the remaining useful life (RUL) of bearings [44]. Another machine-learning approach proposed in SPC is the hidden Markov model (HMM). Lee et al. proposed a modified HMM, combined with a Hotelling multivariate control chart, to perform adaptive fault diagnosis [45]. HMMs whose training sets contain autocorrelated data have been employed to forecast observation values for residual charts in process monitoring [46]. However, although HMMs are expected to handle autocorrelated processes, the essence of the model itself is inconsistent with autocorrelation, because one key assumption in conventional HMMs is that the observations are independent of each other. Therefore, it is worth developing a modified HMM that accounts for observation autocorrelation in the model itself and examining whether it outperforms a traditional HMM.
Therefore, to monitor the autocorrelated process well, a residual chart based on an improved hidden Markov model (IHMM) that accounts for autocorrelated observations is developed. Due to the autocorrelation, the conventional expectation maximization (EM) algorithm for HMMs is not appropriate, so a new EM algorithm is developed. The Shewhart residual chart is employed for quality shift detection in conjunction with the IHMM. The residual is defined as the deviation between the value predicted by the IHMM and the current real observation. Through the residual chart, we can see whether the process is in control. Each time the chart raises an alarm, one run length is obtained; thus, the average run lengths (ARLs) can be calculated with sufficient samples. The ARL is used as the comparison index for the different models, including the IHMM, HMM and AR models.
The rest of this paper is organized as follows: Section 2 introduces the development of the IHMM model and its algorithm. In addition, the comparison of the prediction performances of different approaches is presented in Section 2. In Section 3, residual charts are introduced. In Section 4, numerical examples and interesting results are presented. The conclusions and possible areas for future research are given in Section 5.

2. Model Development

2.1. Hidden Markov Models

Denote a Markov chain with a finite state set $\{s_1, s_2, \ldots, s_N\}$ by $\{S_n, n = 1, 2, \ldots\}$. Let $a_{ij}$ be the probability that the Markov chain enters state $s_j$ from state $s_i$ ($1 \le i, j \le N$), and let $\pi_i = P\{S_1 = s_i\}$, $i = 1, 2, \ldots, N$, be the initial state probabilities. Denote a finite set of signals by $\zeta$, and suppose a signal from $\zeta$ is sent each time the Markov chain enters a state. Suppose that when the Markov chain enters state $s_j$, independently of previous states and signals, the signal sent is $o_r$ with probability $p(o_r \mid s_j)$, where $\sum_{o_r \in \zeta} p(o_r \mid s_j) = 1$. That is, if $O_n$ represents the $n$th signal, then
$$P\{O_1 = o_r \mid S_1 = s_j\} = p(o_r \mid s_j),$$
$$P\{O_n = o_r \mid S_1, O_1, \ldots, S_{n-1}, O_{n-1}, S_n = s_j\} = p(o_r \mid s_j).$$
Such a model, in which the signal sequence $O_1, O_2, \ldots$ can be observed while the underlying Markov chain state sequence $S_1, S_2, \ldots$ cannot, is called a hidden Markov model [47].
Normally, an HMM contains the following elements [48]:
  • A finite hidden state set $s = \{s_1, s_2, \ldots, s_N\}$, where $N$ is the number of hidden states.
  • A set of possible observation values (signals) $q = \{q_1, \ldots, q_K\}$, where $K$ is the number of possible observation values. Note that if the observations are continuous, $K$ is infinite.
  • An observation sequence $o = (o_1, \ldots, o_T)$, where $T$ is the number of observations and $o_t \in q$, $1 \le t \le T$.
  • A distribution of state transition probabilities, $A = \{a_{ij}\}$, where
    $$a_{ij} = P\{S_{t+1} = s_j \mid S_t = s_i\}, \quad 1 \le i, j \le N, \quad \sum_j a_{ij} = 1.$$
  • A distribution of initial state probabilities, $\pi = \{\pi_i\}$, where
    $$\pi_i = P\{S_1 = s_i\}, \quad i = 1, 2, \ldots, N, \quad \sum_i \pi_i = 1.$$
  • A conditional probability distribution of the observations given $S_t = s_i$, $B = \{b_i(q_k)\}$, where
    $$b_i(q_k) = P\{o_t = q_k \mid S_t = s_i\}, \quad 1 \le i \le N, \ 1 \le t \le T, \quad \sum_{q_k} b_i(q_k) = 1.$$
In general, an HMM is characterized by these three parameter groups, denoted by $\lambda = \{A, B, \pi\}$.
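As a concrete illustration of these three elements, the triple $\lambda = \{A, B, \pi\}$ for a small discrete HMM can be sketched as follows; the sizes and probability values are illustrative, not taken from the paper:

```python
import numpy as np

# A minimal container for the three HMM elements lambda = {A, B, pi}.
# All sizes and values below are illustrative, not from the paper.
N, K = 2, 3                      # N hidden states, K observation symbols
A = np.array([[0.7, 0.3],        # a_ij = P(S_{t+1}=s_j | S_t=s_i); each row sums to 1
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],   # b_i(q_k) = P(o_t=q_k | S_t=s_i); each row sums to 1
              [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])        # pi_i = P(S_1 = s_i); sums to 1

# Sanity checks that the stochastic constraints in the definition hold.
assert np.allclose(A.sum(axis=1), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
assert np.isclose(pi.sum(), 1.0)
```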

2.2. The Improved HMM with Autocorrelated Observations

2.2.1. The Selection for the Order of Autocorrelation

Different from traditional HMMs, in which current observations are assumed to be independent of previous states and observations, autocorrelated observations are considered here. The observations are assumed to follow a Gaussian distribution, and the current observation is required to depend not only on the current hidden state but also on the previous observations.
The autocorrelated data from production may be multi-order, and judging the exact order by engineering experience and professional knowledge alone is not cost-effective. Therefore, we seek a suitable order that keeps the implementation of the IHMM feasible, efficient and cost-effective.
The numerical experiment on the order selection for autocorrelated data in SPC application is designed as follows:
Step 1. Generate stationary autocorrelated data by a $d$th-order autoregressive model AR($d$), $d \ge 2$;
Step 2. Fit the data with AR($p$), $p = d, d-1, \ldots, 1$, models, respectively, and estimate the required parameters by least squares;
Step 3. Generate 2000 stationary data sequences, each of length 2000, under processes with a mean shift of magnitude $\delta$;
Step 4. Predict each sequence using the AR($p$), $p = d, d-1, \ldots, 1$, models, respectively, and calculate their residuals;
Step 5. Determine the central lines (CLs), upper control limits (UCLs) and lower control limits (LCLs) of the $p$ residual charts;
Step 6. Generate stationary autocorrelated data by the $d$th-order autoregressive model;
Step 7. Calculate the average run lengths (ARLs) of the $p$ residual charts.
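Steps 1 and 2 of the experiment can be sketched as follows; the AR(2) coefficients, series length and random seed are illustrative choices, and the least-squares fit is the standard regression formulation rather than any specific implementation from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (illustrative): generate a stationary AR(2) series.
# phi = (0.5, 0.3) satisfies the AR(2) stationarity conditions.
phi = np.array([0.5, 0.3])
T = 2000
x = np.zeros(T)
for t in range(2, T):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + rng.normal()

def fit_ar(x, p):
    """Step 2: least-squares estimate of AR(p) coefficients.
    Column k of the design matrix holds the lag-(k+1) values of x."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Fit with the true order and with an under-specified AR(1) model;
# the AR(2) fit should recover coefficients near (0.5, 0.3) up to sampling error.
coef_ar2 = fit_ar(x, 2)
coef_ar1 = fit_ar(x, 1)
```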
In this study, twelve AR(2) and eight AR(3) models are used to generate data. The control limit coefficient of the residual charts is 3. The shift magnitudes of the mean are set to 0, 1.5 and 3, respectively.
When data are generated by AR ( 2 ) models for an in-control process, the ARL of the residual charts is 371.3803 if the data are fitted by AR ( 2 ) models, and the ARL is 361.1688 if the data are fitted by AR ( 1 ) models. Thus, the probability of the type I error is increased by 2.83%. Similarly, when data are generated by AR ( 3 ) models, the ARL of the residual charts is 371.8250 if the data are fitted by AR ( 3 ) models, the ARL is 358.5401 if the data are fitted by AR ( 2 ) models, and the ARL is 345.8113 if the data are fitted by AR ( 1 ) models. Thus, the probabilities of the type I error are increased by 3.7% and 7.51%, respectively.
The ARLs of different situations are shown in Figure 1, in which the numbers on the x-axis represent the combinations of autocorrelation coefficients.
From Figure 1, it can be seen that, in general, AR(1) models outperform the other AR($d$), $d \ge 2$, models in shift detection, regardless of the autocorrelation order. Although AR(1) models lead to a slight increase in type I error, this seems insignificant compared with their good performance in detecting quality shifts. Therefore, only first-order autocorrelation is considered in this study. As a result, there is no need to judge the order of the autocorrelation, so the modeling cost can be saved.

2.2.2. The Development of the IHMM

According to the analysis in Section 2.2.1, the construction of the IHMM, in which the current observation is related to its previous observation, is shown in Figure 2.
In Figure 2, arrows pointing to the right indicate the correlation between neighboring states or observations. Intuitively, the IHMM reduces to a traditional HMM if there is no correlation between neighboring observations.
Let $o_i(t)$ be the observation value at time $t$ given state $s_i$; $o_i(t)$ can be fitted with the following function:
$$o_i(t) = \varsigma_i + c_i o_{t-1} + \varepsilon_i, \quad 1 \le i \le N, \ 1 \le t \le T,$$
where $c_i$ is the first-order autocorrelation coefficient, $\varsigma_i$ is the constant term and $\varepsilon_i$ is white noise following a normal distribution with mean zero and variance $\sigma_i^2$.
Let $\mu_i(t)$ be the mean of $o_i(t)$ given state $s_i$ and $x_t = (1, o_{t-1})$; then Equation (6) can be written as
$$\mu_i(t) = C_i x_t,$$
where $C_i$ is a $1 \times 2$ matrix consisting of $\varsigma_i$ and $c_i$.
Hence, the states form a Markov process, and the conditional observation given state $s_i$ follows a Gaussian distribution with mean $C_i x_t$ and variance $\sigma_i^2$, i.e.,
$$P\{S_{t+1} \mid S_1, \ldots, S_t, o_1, \ldots, o_t\} = p(S_{t+1} \mid S_t),$$
$$b_i(o_t \mid o_{t-1}) = P(o_t \mid o_{t-1}, S_t = s_i) = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left(-\frac{(o_t - C_i x_t)^2}{2\sigma_i^2}\right).$$
If the current observation is not related to its previous observations, $\mu_i(t)$ is the constant $\varsigma_i$, and the IHMM reduces to a traditional HMM.
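The emission density of Equation (9) can be written out directly; the parameter values in the usage example below are illustrative:

```python
import math

def ihmm_emission(o_t, o_prev, sigma_i, c_i, varsigma_i):
    """b_i(o_t | o_{t-1}) from Equation (9): a Gaussian density whose mean
    varsigma_i + c_i * o_{t-1} depends on the previous observation.
    All parameter values passed in below are illustrative."""
    mu = varsigma_i + c_i * o_prev
    return math.exp(-(o_t - mu) ** 2 / (2 * sigma_i ** 2)) / (math.sqrt(2 * math.pi) * sigma_i)

# With c_i = 0 the mean is the constant varsigma_i and the density reduces to
# the conventional HMM emission b_i(o_t); both calls below use mean 100.
p_auto = ihmm_emission(o_t=101.0, o_prev=100.0, sigma_i=5.0, c_i=0.6, varsigma_i=40.0)
p_plain = ihmm_emission(o_t=101.0, o_prev=100.0, sigma_i=5.0, c_i=0.0, varsigma_i=100.0)
```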

2.3. Parameter Estimation

The aim of using an IHMM is to forecast observation values. Firstly, we estimate the parameters to obtain an optimal λ ^ by maximizing the probability P ( o | λ ) .
The EM algorithm is a popular method for estimating the parameters of HMMs. However, since autocorrelation between observations is considered, the traditional algorithm cannot be used directly. We redefine the parameters as $\lambda = \{A, C, \sigma^2, \pi\}$, where $C = \{C_i\}$ and $\sigma^2 = \{\sigma_i^2\}$, with $C_i$ and $\sigma_i^2$ defined in Section 2.2.2. The flowchart of parameter estimation in the IHMM with the improved EM algorithm is shown in Figure 3.
Firstly, we introduce two probabilities: the forward probability $F_t(i)$ and the backward probability $B_t(i)$, defined as
$$F_t(i) = P(o_1, \ldots, o_t, S_t = s_i \mid \lambda),$$
$$B_t(i) = P(o_{t+1}, \ldots, o_T \mid S_t = s_i, \lambda).$$
The calculation of the two probabilities is similar to that for traditional HMMs; slightly differently, the initial values are defined by $F_{d+1}(i) = \pi_i b_i(o_{d+1} \mid o_1, \ldots, o_d)$ and $B_T(i) = 1$, respectively.
Based on the forward and backward probabilities, some intermediate probabilities are computed. Given $\lambda$ and $o$, denote by $\gamma_t(i)$ the probability that the process is in state $s_i$ at time $t$, and by $\xi_t(i, j)$ the probability that the process is in state $s_i$ at time $t$ and in state $s_j$ at time $t + 1$. These can be written as Equations (12) and (13); please refer to reference [48] for the details of the derivations.
$$\gamma_t(i) = \frac{F_t(i)\, B_t(i)}{\sum_{j} F_t(j)\, B_t(j)},$$
$$\xi_t(i, j) = \frac{F_t(i)\, a_{ij}\, b_j(o_{t+1} \mid o_t)\, B_{t+1}(j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} F_t(i)\, a_{ij}\, b_j(o_{t+1} \mid o_t)\, B_{t+1}(j)}.$$
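Under the simplifying assumption that the emission densities $b_i(o_t \mid o_{t-1})$ have already been evaluated into an array, the forward/backward recursions and Equation (12) can be sketched as follows (for brevity, the recursion here starts at the first observation, whereas the paper starts the forward recursion at $t = d + 1$):

```python
import numpy as np

def forward_backward(A, pi, emis):
    """Forward/backward recursions of Equations (10)-(11) and gamma of
    Equation (12). emis[t, i] stands in for b_i(o_t | o_{t-1}); the emission
    densities are assumed precomputed."""
    T, N = emis.shape
    F = np.zeros((T, N))
    B = np.ones((T, N))                       # B_T(i) = 1
    F[0] = pi * emis[0]                       # initial forward values
    for t in range(1, T):
        F[t] = (F[t - 1] @ A) * emis[t]       # F_t(j) = sum_i F_{t-1}(i) a_ij b_j
    for t in range(T - 2, -1, -1):
        B[t] = A @ (emis[t + 1] * B[t + 1])   # B_t(i) = sum_j a_ij b_j B_{t+1}(j)
    gamma = F * B / (F * B).sum(axis=1, keepdims=True)  # Equation (12)
    return F, B, gamma

# Toy example: 2 states, 3 time steps, made-up emission densities.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
pi = np.array([0.6, 0.4])
emis = np.array([[0.9, 0.2], [0.8, 0.3], [0.1, 0.7]])
F, B, gamma = forward_backward(A, pi, emis)
```

A useful consistency check is that $\sum_i F_t(i) B_t(i)$ equals the sequence likelihood for every $t$.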
Next, we develop an improved EM algorithm for the IHMM, step by step.
Step 1. Determine the log-likelihood function for the complete data.
The complete data are $(o, S) = (o_1, \ldots, o_T, s_1, \ldots, s_T)$, and the log-likelihood function is $\log P(o, s \mid \lambda)$.
Step 2. E-step: determine the Q functions.
$$Q_1(\lambda, \lambda^{(n)}) = E_s\!\left[\log P(o, s \mid \lambda) \mid o, \lambda^{(n)}\right] = \sum_s \log P(o, s \mid \lambda)\, P(o, s \mid \lambda^{(n)}),$$
where $\lambda$ is the parameter that maximizes the $Q_1$ function, while $\lambda^{(n)}$ is the current parameter value.
With $P(o, s \mid \lambda) = \pi_{s_1} b_{s_1}(o_1)\, a_{s_1 s_2} b_{s_2}(o_2) \cdots a_{s_{T-1} s_T} b_{s_T}(o_T)$, $Q_1(\lambda, \lambda^{(n)})$ can be written as
$$Q_1(\lambda, \lambda^{(n)}) = \sum_s \log \pi_{s_1}\, P(o, s \mid \lambda^{(n)}) + \sum_s \left(\sum_{t=1}^{T-1} \log a_{s_t s_{t+1}}\right) P(o, s \mid \lambda^{(n)}) + \sum_s \left(\sum_{t=1}^{T} \log b_{s_t}(o_t)\right) P(o, s \mid \lambda^{(n)}).$$
For the re-estimation of $b_i(q_k)$, one more auxiliary function, $Q_2$, is proposed by taking the conditional expectation of the log-likelihood of the observation sequence:
$$Q_2(\lambda, \lambda^{(n)}) = \sum_i \sum_{t=1}^{T} \gamma_t(i) \left( \ln \frac{1}{\sqrt{2\pi}} + \ln \frac{1}{\sigma_i} - \frac{(o_t - C_i x_t)^2}{2\sigma_i^2} \right).$$
Step 3. M-step: re-estimation.
Re-estimate the $\lambda$ that maximizes $Q_1(\lambda, \lambda^{(n)})$, that is,
$$\lambda^{(n+1)} = \arg\max_{\lambda} Q_1(\lambda, \lambda^{(n)}).$$
The state transition probabilities are derived as
$$a_{ij}^{(n+1)} = \frac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_t(i)}.$$
The initial state probabilities are derived as
$$\pi_i^{(n+1)} = \gamma_1(i).$$
By re-estimating the $\lambda$ that maximizes $Q_2(\lambda, \lambda^{(n)})$, the updates $c_i^{(n+1)}$, $\varsigma_i^{(n+1)}$ and $\sigma_i^{2\,(n+1)}$ can be written as
$$c_i^{(n+1)} = \frac{\sum_{t=1}^{T} \gamma_t(i)\, o_{t-1} \left( o_t - \dfrac{\sum_{t=1}^{T} \gamma_t(i)\, o_t}{\sum_{t=1}^{T} \gamma_t(i)} \right)}{\sum_{t=1}^{T} \gamma_t(i)\, o_{t-1} \left( o_{t-1} - \dfrac{\sum_{t=1}^{T} \gamma_t(i)\, o_{t-1}}{\sum_{t=1}^{T} \gamma_t(i)} \right)},$$
$$\varsigma_i^{(n+1)} = \frac{\sum_{t=1}^{T} \gamma_t(i)\, o_t - c_i^{(n+1)} \sum_{t=1}^{T} \gamma_t(i)\, o_{t-1}}{\sum_{t=1}^{T} \gamma_t(i)},$$
$$\sigma_i^{2\,(n+1)} = \frac{\sum_{t=1}^{T} \gamma_t(i) \left( o_t - c_i^{(n+1)} o_{t-1} - \varsigma_i^{(n+1)} \right)^2}{\sum_{t=1}^{T} \gamma_t(i)}.$$
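The emission updates of Equations (20)-(22) amount to a $\gamma$-weighted least-squares fit of $o_t$ on $o_{t-1}$ for each state. A sketch for a single state follows; the test data (autoregression with $c = 0.6$, $\varsigma = 40$, noise variance 4) and the uniform weights are illustrative:

```python
import numpy as np

def m_step_emission(o, gamma_i):
    """M-step updates of Equations (20)-(22) for one state i: a gamma-weighted
    least-squares fit of o_t = varsigma_i + c_i * o_{t-1} + eps.
    gamma_i[t] plays the role of gamma_t(i); the first entry is unused because
    o_0 has no predecessor in this sketch."""
    w = gamma_i[1:]
    y, x = o[1:], o[:-1]
    xbar = (w * x).sum() / w.sum()            # weighted mean of o_{t-1}
    ybar = (w * y).sum() / w.sum()            # weighted mean of o_t
    c = (w * x * (y - ybar)).sum() / (w * x * (x - xbar)).sum()   # Eq. (20)
    varsigma = ybar - c * xbar                                    # Eq. (21)
    sigma2 = (w * (y - c * x - varsigma) ** 2).sum() / w.sum()    # Eq. (22)
    return c, varsigma, sigma2

# Illustrative data: first-order autocorrelated series with known parameters.
rng = np.random.default_rng(1)
o = np.empty(500)
o[0] = 100.0
for t in range(1, 500):
    o[t] = 40.0 + 0.6 * o[t - 1] + rng.normal(0.0, 2.0)

gamma_i = np.ones(500)   # uniform weights reduce Eq. (20) to ordinary least squares
c, varsigma, sigma2 = m_step_emission(o, gamma_i)
```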
Repeat Step 2 and Step 3 until the log-likelihood function converges.

2.4. Prediction

Once the parameters are determined by the improved EM algorithm, the model can be employed to forecast the expected value of the next observation.
The conditional probability distribution of the observation $o_{T+1}$ can be derived as
$$P(o_{T+1} \mid o_T, \ldots, o_1, \lambda) = \sum_{i=1}^{N} P(o_{T+1} \mid S_T = s_i, o_T, \ldots, o_1, \lambda)\, P(S_T = s_i \mid o_T, \ldots, o_1, \lambda)$$
$$= \sum_{i=1}^{N} \sum_{j=1}^{N} P(o_{T+1} \mid S_{T+1} = s_j, o_T, \ldots, o_1, \lambda)\, P(S_{T+1} = s_j \mid S_T = s_i, \lambda)\, \gamma_T(i)$$
$$= \sum_{i=1}^{N} \sum_{j=1}^{N} \gamma_T(i)\, a_{ij}\, b_j(o_{T+1} \mid o_T, \ldots, o_{T+1-d}).$$
Therefore, the expectation of $o_{T+1}$ is computed by
$$\hat{o}_{T+1} = \int o_{T+1}\, P(o_{T+1} \mid o_T, \ldots, o_1, \lambda)\, \mathrm{d}o_{T+1}.$$
Given an observation sequence $o = (o_1, \ldots, o_T)$, $\hat{o}_t$ is predicted by
$$\hat{o}_t = \int o_t\, P(o_t \mid o_{t-1}, \lambda)\, \mathrm{d}o_t = \int o_t \sum_{i=1}^{N} \sum_{j=1}^{N} \gamma_{t-1}(i)\, a_{ij}\, b_j(o_t \mid o_{t-1})\, \mathrm{d}o_t, \quad t \ge d.$$
Thus, the predicted values of the observations can be denoted by $\hat{o} = (o_1, \hat{o}_2, \ldots, \hat{o}_T)$.
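Because each mixture component in Equation (24) is Gaussian with mean $\varsigma_j + c_j o_T$ in the first-order case, the integral in Equation (25) collapses to a weighted sum of component means. A sketch, with illustrative parameter values in the usage example:

```python
import numpy as np

def predict_next(gamma_T, A, c, varsigma, o_T):
    """Expected next observation from Equations (24)-(25): the weight of
    state j is sum_i gamma_T(i) a_ij, and each Gaussian component has mean
    varsigma_j + c_j * o_T, so the integral reduces to a weighted sum."""
    w = gamma_T @ A                       # P(S_{T+1} = s_j | o, lambda) for each j
    return (w * (varsigma + c * o_T)).sum()

# Illustrative 2-state example; both component means equal 100 here,
# so the prediction must also be 100 regardless of the weights.
gamma_T = np.array([0.5, 0.5])
A = np.array([[0.5, 0.5], [0.5, 0.5]])
c = np.array([0.6, 0.4])
varsigma = np.array([40.0, 60.0])
pred = predict_next(gamma_T, A, c, varsigma, o_T=100.0)
```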

2.5. Performance Comparison

The mean squared error (MSE), absolute mean error (AME) and mean absolute percentage error (MAPE) are common measures of fitting and prediction accuracy [49]. Both MSE and AME measure the average deviation between fitted values and original values, while MAPE measures the relative difference between them. In this study, we use MSE as the criterion to evaluate the models:
$$\mathrm{MSE} = \frac{1}{LT} \sum_{l=1}^{L} \sum_{t=1}^{T} (o_t - \hat{o}_t)^2,$$
where $L$ is the number of predicted samples and $T$ is the length of each sample.
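Equation (26) can be written directly; the arrays in the usage example are illustrative:

```python
import numpy as np

def mse(o, o_hat):
    """Equation (26): average squared deviation over L samples of length T.
    o and o_hat are (L, T) arrays of original and predicted values."""
    return np.mean((o - o_hat) ** 2)

# Illustrative usage: L = 2 samples, T = 2; deviations are 0, 1, 0, 2,
# so the squared deviations are 0, 1, 0, 4 and their mean is 1.25.
o = np.array([[1.0, 2.0], [3.0, 4.0]])
o_hat = np.array([[1.0, 1.0], [3.0, 2.0]])
```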
We assume that a variable from a production process follows a normal distribution with mean 100 and variance 25 when the process is in control, and that the observations are first-order autocorrelated with a correlation coefficient of 0.6. We use the IHMM, HMM and AR(1) methods to predict observation values, respectively. The MSEs for the three models are 15.1276, 16.1867 and 15.7861, respectively. Since these MSEs are very close, we conclude that the prediction performances of the three approaches are similar. The predicted results for an observation sequence of length 50 from the in-control process are shown in Figure 4, which confirms that the three models perform comparably. However, the times taken for prediction by the three models are quite different: for an observation sequence of length 50, the IHMM takes 14.6523 s, the HMM takes 93.6521 s, and AR(1) is almost instantaneous, under Windows 10 (Microsoft, Redmond, WA, USA) with an Intel(R) Core(TM) i7-7500U CPU (Santa Clara, CA, USA).
Then, we suppose the process has a shift magnitude of 3 and again use the three methods to predict observation values. The predicted results for an observation sequence of length 50 from the out-of-control process are shown in Figure 5, which reveals distinctly different performances. Judging from the distances between the lines of different colors, the MSE of AR(1) is much less than that of the IHMM, which in turn is much less than that of the HMM. The IHMM thus achieves only a medium-level prediction performance for the autocorrelated process. Interestingly, however, the corresponding residual charts have the best performance in detecting quality shifts, suggesting that residual charts integrating the IHMM can achieve a surprising effect. This result is verified in Section 4.

3. Statistical Process Control with Residual Chart

Residual control charts are an effective tool for online monitoring in the presence of autocorrelation. A residual chart, called the e chart, is developed in our study.
Residuals are obtained by subtracting the predicted values of the observations from the original values, that is, $e = o - \hat{o} = (e_1, \ldots, e_T)$. The control limits of the e chart are given by
$$UCL = \mu_e + k\sigma_e,$$
$$LCL = \mu_e - k\sigma_e,$$
where $UCL$ is the upper control limit, $LCL$ is the lower control limit, $k$ is the control limit coefficient (the number of standard deviations $\sigma_e$ from the center line), $\mu_e$ is the mean of $e$, and $\sigma_e$ is the standard deviation of $e$. $\mu_e$ and $\sigma_e$ can be obtained by simulation with sufficient samples.
If the value of $e_t$ falls between $LCL$ and $UCL$, the process is judged to be in control; otherwise, it is judged to be out of control.
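The e chart decision rule can be sketched as follows; the residuals and chart parameters in the usage example are illustrative (in practice $\mu_e$ and $\sigma_e$ would come from in-control simulations):

```python
import numpy as np

def e_chart_alarm(e, mu_e, sigma_e, k=3.0):
    """The e chart of Equations (27)-(28): flag each residual falling outside
    [mu_e - k*sigma_e, mu_e + k*sigma_e] as an out-of-control signal."""
    ucl = mu_e + k * sigma_e
    lcl = mu_e - k * sigma_e
    return (e > ucl) | (e < lcl)   # True -> out of control

# Illustrative residuals: only the third one exceeds the 3-sigma limits.
alarms = e_chart_alarm(np.array([0.1, -0.5, 4.2]), mu_e=0.0, sigma_e=1.0)
```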

4. Numerical Examples

We consider that the variable from a production process follows a normal distribution with mean 100 and variance 25 when the process is in control and that the observations are first-order autocorrelated. The correlation coefficient varies between −0.6 and 0.6 in increments of 0.3. Two shift magnitudes, 1.5 and 3, are considered. According to the definition of residual charts, the ARLs of in-control processes are 370 for all prediction methods, so we focus our discussion on the out-of-control processes. Through multiple experiments, we find it appropriate to set the number of states to 5 for both the IHMM and the HMM. The experimental results are shown in Figure 6 and Figure 7.
As shown in Figure 6 and Figure 7, when the correlation coefficient changes from positive to negative, the ARLs decrease dramatically, regardless of the approach used. Compared with positive correlations, the ARLs under negative correlations are relatively very small, and the ARLs obtained by the different models are very close to each other. Thus, the following discussion focuses on positive correlations.
As pointed out in Section 2.5, although the predictions of both the IHMM and the HMM are inferior to AR(1) models, the residual charts based on the former models perform much better than those based on the latter. As seen in Figure 6 and Figure 7, when the coefficients are larger than zero, the ARLs of the IHMMs are shorter than those of the HMMs, which in turn are shorter than those of the AR(1) models.
As correlation coefficients increase, the ARLs generally increase, regardless of the approach used, especially as the shift magnitude decreases.
Generally speaking, when detecting quality shifts, the three models rank with the IHMMs first, the HMMs second and the AR(1) models last. Moreover, as pointed out in Section 2, the times taken by IHMMs are much shorter than those by HMMs under the same running environment.

5. Conclusions

In this paper, an IHMM with autocorrelated observations and a new EM algorithm are proposed. Residual charts in conjunction with the IHMM are employed for detecting quality shifts. The results demonstrate that: (1) the IHMM outperforms the HMM and the AR(1) method under positive correlations; (2) the IHMM performs similarly to the HMM and AR(1) methods under negative correlations; (3) compared with positive correlations, the ARLs of the IHMM under negative correlations are relatively very small, as are those of the HMMs and AR(1) models; (4) the IHMM takes much less time than the HMM for both training and prediction, although still more than the AR(1) models.
Future research might focus on further experimental validations for the IHMM and its algorithm. The strict Gaussian distribution of observations could be extended to other probability distributions. Since multistage systems are commonplace in the manufacturing industry, it is worth extending this approach in this research direction.

Author Contributions

Conceptualization, Y.L. and Z.C.; methodology, Y.L. and Z.C.; software, Y.L.; validation, Y.L. and H.L.; formal analysis, Y.L. and H.L.; investigation, Y.L. and Y.Z.; resources, Y.L.; data curation, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L.; visualization, Y.L.; supervision, Y.L.; project administration, Y.L.; funding acquisition, Y.L. and Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 72171120, 71701098 and 72001138, and the Qing Lan Project of Jiangsu Province in China, grant number 2021.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ouyang, L.; Chen, J.; Park, C.; Ma, Y.; Jin, J. Bayesian closed-loop robust process design considering model uncertainty and data quality. IISE Trans. 2020, 52, 288–300.
  2. Ouyang, L.; Zheng, W.; Zhu, Y.; Zhou, X. An interval probability-based FMEA model for risk assessment: A real-world case. Qual. Reliab. Eng. Int. 2020, 36, 125–143.
  3. Montgomery, D.C.; Mastrangelo, C.M. Some Statistical Process Control Methods for Autocorrelated Data. J. Qual. Technol. 1991, 23, 179–204.
  4. Maragah, H.D.; Woodall, W.H. The Effect of Autocorrelation on the Retrospective X-chart. J. Stat. Comput. Simul. 1992, 40, 29–42.
  5. Runger, G.C. Assignable Causes and Autocorrelation: Control Charts for Observations or Residuals? J. Qual. Technol. 2002, 34, 165–170.
  6. Franco, B.C.; Celano, G.; Castagliola, P.; Costa, A.F.B.; Machado, M.A.G. A New Sampling Strategy for the Shewhart Control Chart Monitoring A Process with Wandering Mean. Int. J. Prod. Res. 2015, 53, 4231–4248.
  7. Kim, J.; Jeong, M.K.; Elsayed, E.A. Monitoring Multistage Processes with Autocorrelated Observations. Int. J. Prod. Res. 2017, 55, 2385–2396.
  8. Yang, H.H.; Huang, M.L.; Lai, C.M.; Jin, J.R. An Approach Combining Data Mining and Control Charts-Based Model for Fault Detection in Wind Turbines. Renew. Energy 2018, 115, 808–816.
  9. Li, Y.; Pan, E.; Xiao, Y. On autoregressive model selection for the exponentially weighted moving average control chart of residuals in monitoring the mean of autocorrelated processes. Qual. Reliab. Eng. Int. 2020, 36, 2351–2369.
  10. Montgomery, D.C. Statistical Quality Control: A Modern Introduction, 6th ed.; John Wiley & Sons, Inc.: New York, NY, USA, 2009.
  11. Vasilopoulos, A.V.; Stamboulis, A.P. Modification of Control Chart Limits in the Presence of Data Correlation. J. Qual. Technol. 1978, 20, 20–30.
  12. Wardell, D.G.; Moskowitz, H.; Plante, R.D. Control Charts in the Presence of Data Autocorrelation. Manag. Sci. 1992, 38, 1084–1105.
  13. Yashchin, E. Performance of CUSUM Control Schemes for Serially Correlated. Technometrics 1993, 35, 37–52.
  14. Schmid, W. On the Run Length of a Shewhart Chart for Correlated Data. Stat. Pap. 1995, 36, 111–130.
  15. Jiang, W.; Tsui, K.L.; Woodall, W.H. A New SPC Monitoring Method: The ARMA Chart. Technometrics 2000, 42, 399–410.
  16. Lu, C.W.; Reynolds, M.R., Jr. CUSUM Charts for Monitoring an Autocorrelated Process. J. Qual. Technol. 2001, 33, 316–334.
  17. Castagliola, P.; Tsung, F. Auto-correlated Statistical Process Control for Non-Normal Situations. Qual. Reliab. Eng. Int. 2005, 21, 131–161.
  18. Osei-Aning, R.; Abbasi, S.A.; Riaz, M. Optimization Design of the CUSUM and EWMA Charts for Autocorrelated Processes. Qual. Reliab. Eng. Int. 2017, 33, 1827–1841.
  19. Garza-Venegas, J.A.; Tercero-Gomez, V.G.; Ho, L.L.; Castagliola, P.; Celano, G. Effect of Autocorrelation Estimators on the Performance of the X̄ Control Chart. J. Stat. Comput. Simul. 2018, 88, 2612–2630.
  20. Ryan, T.P. Discussion of Some Statistical Process Control Methods for Autocorrelated Data by D.C. Montgomery and C.M. Mastrangelo. J. Qual. Technol. 1991, 23, 200–202.
  21. Wardell, D.G.; Moskowitz, H.; Plante, R.D. Run-length Distributions of Special-cause Control Charts for Correlated Process. Technometrics 1994, 36, 3–27.
  22. Mastrangelo, C.M.; Montgomery, D.C. SPC with Correlated Observations for the Chemical and Process Industries. Int. J. Reliab. Qual. Saf. Eng. 1995, 11, 79–89.
  23. Zhang, N.F. Detection Capability of Residual Control Chart for Stationary Process Data. J. Appl. Stat. 1997, 24, 363–380.
  24. Lu, C.W.; Reynolds, M.R., Jr. EWMA Control Charts for Monitoring the Mean of Autocorrelated Processes. J. Qual. Technol. 1999, 31, 166–188.
  25. Davoodi, M.; Niaki, S.T.A. Estimating the Step Change Time of the Location Parameter in Multistage Processes Using MLE. Qual. Reliab. Eng. Int. 2012, 28, 843–855.
  26. Perez-Rave, J.; Munoz-Giraldo, L.; Correa-Morales, J.C. Use of Control Charts with Regression Analysis for Autocorrelated Data in the Context of Logistic Financial Budgeting. Comput. Ind. Eng. 2017, 112, 71–83.
  27. Pan, X.; Jarrett, J. Using Vector Autoregressive Residuals to Monitor Multivariate Processes in the Presence of Serial Correlation. Int. J. Prod. Econ. 2007, 106, 204–216.
  28. Hwarng, H.B.; Wang, Y. Shift Detection and Source Identification in Multivariate Autocorrelated Processes. Int. J. Prod. Res. 2010, 48, 835–859.
  29. Vanhatalo, E.; Kulahci, M. The Effect of Autocorrelation on the Hotelling T-2 Control Chart. Qual. Reliab. Eng. Int. 2015, 31, 1779–1796.
  30. Leoni, R.C.; Machado, G.; Aparecida, M.; Costa, B.; Fernando, A. The T-2 Chart with Mixed Samples to Control Bivariate Autocorrelated Processes. Int. J. Prod. Res. 2016, 54, 3294–3310.
  31. Pan, J.N.; Li, C.I.; Wu, J.J. A New Approach to Detecting the Process Changes for Multistage Systems. Expert Syst. Appl. 2016, 62, 293–301.
  32. Yang, S.F.; Yang, C.M. An Approach to Controlling Two Dependent Process Steps with Autocorrelated Observations. Int. J. Adv. Manuf. Technol. 2006, 29, 170–177.
  33. Li, Y.; Zio, E.; Pan, E. An MEWMA-Based Segmental Multivariate Hidden Markov Model for Degradation Assessment and Prediction. Proc. Inst. Mech. Eng. Part O J. Risk Reliab. 2021, 235, 831–844.
  34. Xia, T.; Sun, B.; Chen, Z.; Pan, E.; Wang, H.; Xi, L. Opportunistic maintenance policy integrating leasing profit and capacity balancing in service-oriented manufacturing. Reliab. Eng. Syst. Saf. 2021, 205, 107233.
  35. Xia, T.; Dong, Y.; Pan, E.; Zheng, M.; Wang, H.; Xi, L. Fleet-level opportunistic maintenance for large-scale wind farms integrating real-time prognostic updating. Renew. Energy 2021, 163, 1444–1454.
  36. Xia, T.; Zhuo, P.; Xiao, L.; Du, S.; Wang, D.; Xi, L. Multi-stage Fault Diagnosis Framework for Rolling Bearing Based on OHF Elman AdaBoost-Bagging Algorithm. Neurocomputing 2021, 433, 237–251.
  37. Tang, D.; Yu, J.; Chen, X.; Makis, V. An optimal condition-based maintenance policy for a degrading system subject to the competing risks of soft and hard failure. Comput. Ind. Eng. 2015, 83, 100–110.
  38. Tang, D.; Makis, V.; Jafari, L.; Yu, J. Optimal maintenance policy and residual life estimation for a slowly degrading system subject to condition monitoring. Reliab. Eng. Syst. Saf. 2015, 134, 198–207.
  39. Chiu, C.C.; Chen, M.K.; Lee, K.M.S. Shifts Recognition in Correlated Process Data Using a Neural Network. Int. J. Syst. Sci. 2001, 32, 137–143.
  40. Arkat, J.; Niaki, S.T.A.; Abbasi, B. Artificial Neural Networks in Applying MCUSUM Residuals Charts for AR(1) Processes. Appl. Math. Comput. 2007, 189, 1889–1901.
  41. Pacella, M.; Semeraro, Q. Using Recurrent Neural Networks to Detect Changes in Autocorrelated Processes for Quality Monitoring. Comput. Ind. Eng. 2007, 52, 502–520. [Google Scholar] [CrossRef]
  42. Camargo, M.E.; Priesnitz, W.; Russo, S.L.; Dullius, A.I.D. Control Charts for Monitoring Autocorrelated Processes Based on Neural Networks Model. In Proceedings of the International Conference on Computers and Industrial Engineering, Troyes, France, 6–9 July 2009; pp. 1881–1884. [Google Scholar]
  43. Yang, H.H.; Huang, M.L.; Yang, S.W. Integrating Auto-Associative Neural Networks with Hotelling T-2 Control Charts for Wind Turbine Fault Detection. Energies 2015, 8, 12100–12115. [Google Scholar] [CrossRef] [Green Version]
  44. Rai, A.; Upadhyay, S.H. The Use of MD-CUMSUM and NARX Neural Network for Anticipating the Remaining Useful Life of Bearings. Measurement 2017, 111, 397–410. [Google Scholar] [CrossRef]
  45. Lee, S.; Li, L.; Ni, J. Online Degradation Assessment and Adaptive Fault Detection Using Modified Hidden Markov Model. J. Manuf. Sci. Eng.-Trans. ASME 2010, 132, 021010. [Google Scholar] [CrossRef]
  46. Alshraideh, H.; Runger, G. Process Monitoring Using Hidden Markov Models. Qual. Reliab. Eng. Int. 2014, 30, 1379–1387. [Google Scholar] [CrossRef]
  47. Ross, S.M. Introduction to Probability Models, 11th ed.; Posts & Telecom Press: Beijing, China, 2015. [Google Scholar]
  48. Chen, Z.; Xia, T.B.; Li, Y.; Pan, E.S. Degradation Modeling and Classification of Mixed Populations Using Segmental Continuous Hidden Markov Models. Qual. Reliab. Eng. Int. 2018, 34, 807–823. [Google Scholar] [CrossRef]
  49. Guo, X.J.; Liu, S.F.; Wu, L.F.; Gao, Y.B.; Yang, Y.J. A Multi-variable Grey Model with a Self-memory Component and Its Application on Engineering Prediction. Eng. Appl. Artif. Intell. 2015, 42, 82–93. [Google Scholar] [CrossRef]
Figure 1. The ARL results of the order selection experiment.
Figure 2. The construction of the IHMM.
Figure 3. The flowchart of the improved EM algorithm.
Figure 4. The predicted results of three models under an in-control process.
Figure 5. The predicted results of three models under an out-of-control process.
Figure 6. The ARLs of residual charts obtained by different models when the shift magnitude is 1.5.
Figure 7. The ARLs of residual charts obtained by different models when the shift magnitude is 3.

Li, Y.; Li, H.; Chen, Z.; Zhu, Y. An Improved Hidden Markov Model for Monitoring the Process with Autocorrelated Observations. Energies 2022, 15, 1685. https://0-doi-org.brum.beds.ac.uk/10.3390/en15051685