Article

Evaluation of Machinery Readiness Using Semi-Markov Processes

1
Motor Transport Institute, 03-301 Warsaw, Poland
2
Logistics and Management, Faculty of Security, Military University of Technology, 00-908 Warsaw, Poland
3
Department of Mechanics and Machine Building, University of Economics and Innovation, 20-209 Lublin, Poland
*
Author to whom correspondence should be addressed.
Submission received: 24 January 2020 / Revised: 20 February 2020 / Accepted: 20 February 2020 / Published: 24 February 2020
(This article belongs to the Special Issue Design and Management of Manufacturing Systems)

Abstract

This article uses Markov and semi-Markov models, which are among the most popular tools for estimating readiness and reliability. They allow both individual elements and entire systems (including production systems) to be evaluated as multi-state structures. Distinguishing states with varying degrees of technical readiness in complicated and complex objects (systems) makes it possible to determine their individual impact on the tasks performed, as well as on overall reliability. Applying a Markov process requires the process dwell times in the individual states to be exponentially distributed random variables and the Markov property, i.e., the independence of future states from past ones, to be satisfied. Ignoring these assumptions may lead to erroneous results, which the authors set out to show. The article compares the results of examining a process with non-parametric distributions with an analysis in which an exponential form was (groundlessly) assumed. Significantly different results were obtained. The aim was to draw attention to these inconsistencies and to the importance of a preliminary assessment of the data collected for examination. In addition, the readiness of a machine operating in the studied production company was diagnosed, which allowed its operational potential to be evaluated, especially in the context of solving process optimization problems.

1. Introduction

1.1. Background Introduction to the Study

Ensuring the desired availability of all machine tools in a production line is an important issue [1,2]. Availability stands for their ability to obtain and maintain the functional state necessary to deliver the required performance [3,4,5]. The technical readiness of machines is an important element of company diagnostics and should be estimated, as its evaluation helps shape the capacity of a production line. High machine tool reliability translates into no unnecessary downtime and, consequently, greater process efficiency. Machine tools must be technically sound, adequately controlled and supplied with the necessary materials, energy and information [6]. The availability of a machine tool is determined using a reliability model based on probability theory. In probability theory, the state of an object is defined as the result of one and only one event in a sequence of trials from a finite or countable set of pairwise mutually exclusive elementary events [7]. This makes it possible to use the tools of probability calculus and mathematical statistics to analyze technical systems. When machine tools are in operation, they transition stochastically from one state to another. As a result, transition probabilities are associated with all machine tools in a production line. Therefore, the Markov chain and its derivatives are often used to build reliability models. Selected articles in which Markov chain-based reliability models are used to study the availability of machine tools in a production line are described below.
The use of Markov processes and their generalization, semi-Markov processes, is popular. Their use is dictated by the multi-state nature of technical objects and by the assumption that assessing the individual functional states an object can be in is a more informative measure than the readiness of the object as a whole. However, the use of these models is subject to restrictions. First of all, it is necessary to fulfil the Markov property, which states that the probability of a future state is independent of the past states and depends only on the present state. Identifying a model without meeting this assumption may lead to false conclusions, as many authors suggest [8,9]. They point out that ignoring the examination of the Markov property results in incorrect analyses, e.g., Shi et al. [10], Zhang et al. [8], or Kozłowski et al. [11]. Therefore, it is necessary to examine the randomness of sequences of subsequent operational states, as is done by Yang et al. [12] or Komorowski and Raffa [13].
In addition, Markov models require the unconditional dwell times in the individual states, as well as the conditional durations of a state given that the next state is one of the remaining ones, to be exponentially distributed random variables [14,15]. Many authors point out that the proper matching of distributions affects the reliability of results [13,16]. They use the Markov model for exponential distributions [17,18] and the semi-Markov model for the remaining ones, e.g., Weibull [19] or Gamma [20]. Adopting an assumed form of distribution without a statistical examination of the collected sample may lead to wrong conclusions.

1.2. The Aim of the Study

The need to check the Markov property is discussed in more detail in the literature [10,11], while less attention is paid to the distributions of the variables studied. Therefore, this publication compares the results of a process examination according to the semi-Markov model, for variables with non-parametric distributions, with an analysis according to the Markov model in which an exponential form was (falsely) assumed. The differences in the results obtained clearly indicate that a preliminary test is necessary before choosing the right model. Failure to meet the assumptions leads to an inaccurate analysis of the process.
The aim of the article was also to evaluate the readiness of a production machine, which is an important element of the analyzed production process. The results obtained made it possible to determine the probabilities of transitions between the individual states distinguished in the production process, as well as to define the limit probabilities and the technical readiness coefficient. This makes it possible to assess whether the analyzed process functions in accordance with the schedule adopted in the company and to evaluate its production capability. The proposed models can also be used to simulate the production process, e.g., at the design phase.
The article consists of five sections. The first presents an analysis of the literature on the application of Markov models for studying the technical readiness of machine tools in a production line. Section 2 presents a mathematical formulation of the research problem. Section 3 describes the studied company and presents the data analysis in terms of the Markov property and the form of the distributions of the variables. Section 4 presents a case study containing the estimation of the Markov and semi-Markov model parameters, as well as a numerical example and detailed calculations according to the developed model. The article ends with conclusions describing the goals achieved and indicating the added value of the study.

2. Mathematical Modeling

Definition 1.
Let us consider a random process with a finite state space $S = \{1, \ldots, s\}$, $s < \infty$. Let $(\Omega, \mathcal{F}, P)$ be a probability space and $\{X(t): t \in T\}$ a stochastic process defined on $(\Omega, \mathcal{F}, P)$, taking values in the finite or countable set $S$. The process $\{X(t): t \in T\}$ is called a Markov process if for every $i, j, i_0, i_1, \ldots, i_{n-1} \in S$ and for every $t_0, t_1, \ldots, t_n, t_{n+1} \in T$ satisfying $t_0 < t_1 < \cdots < t_n < t_{n+1}$ the following condition is met:
$P(X(t_{n+1}) = j \mid X(t_n) = i,\ X(t_{n-1}) = i_{n-1}, \ldots, X(t_0) = i_0) = P(X(t_{n+1}) = j \mid X(t_n) = i).$  (1)
Assuming that $t_n = u$ and $t_{n+1} = \tau$, the conditional probability is
$P(X(\tau) = j \mid X(u) = i) = p_{ij}(u, \tau),$  (2)
for $i, j \in S$, where $p_{ij}(u, \tau)$ denotes the probability of transition from state $i$ at time $u$ to state $j$ at time $\tau$.
Assuming that $t_0, t_1, \ldots, t_{n-1}$ denote instants in the past, $t_n$ the present instant, and $t_{n+1}$ an instant in the future, the equation says that the future does not depend on the past when the present is known; thus the probability of a future state does not depend on the past states, but only on the present one. This property is called the Markov property, and a stochastic process that satisfies it is called a memoryless process. If the instants of time are discrete, $T = N_0 = \{0, 1, 2, \ldots\}$, we are dealing with a Markov chain, and when the process is realized in continuous time, $T = R_+ = [0, \infty)$, it is a continuous-time Markov process.
For a stochastic process $\{X(t): t > 0\}$ taking values in the finite or countable set $S$, with piecewise constant and right-continuous trajectories, let $\tau_0 = 0$ mark the start of the process and $\tau_1, \tau_2, \ldots$ denote the successive times of state changes. The random variable
$T_i = \tau_{n+1} - \tau_n \mid X(\tau_n) = i, \quad i \in S,$  (3)
denotes the waiting time in state $i$ when the successor state is unknown. From the Chapman–Kolmogorov equation it follows [21,22] that the process dwell times in the individual states are exponentially distributed random variables with parameter $\lambda_i > 0$:
$G_i(t) = P(T_i \le t) = P(\tau_{n+1} - \tau_n \le t \mid X(\tau_n) = i) = 1 - e^{-\lambda_i t}, \quad t \ge 0, \ i \in S,$  (4)
where $G_i$ is the cumulative distribution function of the random variable $T_i$ [23] when the successor state is unknown.
Semi-Markov processes are a generalization of Markov processes in which the dwell times in the individual states can have arbitrary distributions concentrated on the set $[0, \infty)$. Following [24,25], this article defines the semi-Markov process with a finite set of states starting from the Markov renewal process.
In the probability space $(\Omega, \mathcal{F}, P)$, random variables are defined for each $n \in N$:
$\xi_n: \Omega \to S,$  (5)
$\vartheta_n: \Omega \to R_+.$  (6)
A two-dimensional sequence of random variables $\{(\xi_n, \vartheta_n): n \in N\}$ is referred to as a Markov renewal process if for each $n \in N$, $i, j \in S$, $t \in R_+$:
$P\{\xi_{n+1} = j, \vartheta_{n+1} < t \mid \xi_n = i, \xi_{n-1}, \ldots, \xi_0, \vartheta_n, \ldots, \vartheta_0\} = P\{\xi_{n+1} = j, \vartheta_{n+1} < t \mid \xi_n = i\},$  (7)
and
$P\{\xi_0 = i, \vartheta_0 = 0\} = P\{\xi_0 = i\}.$  (8)
This definition shows that the Markov renewal process is a specific case of a two-dimensional Markov process whose transition probabilities depend solely on the value of the discrete coordinate. The Markov renewal process $\{(\xi_n, \vartheta_n): n \in N\}$ is called homogeneous if the probabilities
$P\{\xi_{n+1} = j, \vartheta_{n+1} < t \mid \xi_n = i\} = Q_{ij}(t)$  (9)
do not depend on $n$.
From the above definition it follows that for each pair $(i, j) \in S \times S$ the function $Q_{ij}(t)$ is [24,25]:
  • non-decreasing,
  • right-continuous,
  • $Q_{ij}(0) = 0$,
  • $Q_{ij}(t) \le 1$,
  • $\sum_{j \in S} \lim_{t \to \infty} Q_{ij}(t) = 1$.
The functional matrix
$Q(t) = [Q_{ij}(t)], \quad i, j \in S,$  (10)
is called the renewal kernel of the semi-Markov process and, together with the initial distribution
$p_i = P\{\xi_0 = i\}, \quad i \in S,$  (11)
characterizes the homogeneous Markov renewal process.
The semi-Markov process is defined on the basis of the homogeneous Markov renewal process $\{(\xi_n, \vartheta_n): n \in N\}$. Let:
$\tau_0 = \vartheta_0 = 0,$  (12)
$\tau_n = \vartheta_1 + \cdots + \vartheta_n,$  (13)
$\tau_\infty = \sup\{\tau_n: n \in N_0\}.$  (14)
The stochastic process $\{X(t): t \in R_+\}$, which takes a constant value on each interval $[\tau_n, \tau_{n+1})$, $n \in N$:
$X(t) = \xi_n \quad \text{for} \quad t \in [\tau_n, \tau_{n+1}),$  (15)
is called the semi-Markov process.
Markov and semi-Markov models are particularly often used to assess the readiness and reliability of technical facilities or their individual components [26,27,28]. Various systems, including production systems [29,30], are analyzed in terms of maintaining operability [31], production organization [32] and shaping demand [33]. This article analyzes a production system from the point of view of machine readiness to perform production tasks.

3. Data Handling

3.1. Description of the Company Studied

The subject of the research is a company manufacturing plastic garbage bags. Production is serial and runs in three 8-hour shifts. The roller welding machines, which weld and perforate finished rolls of polyethylene film, constitute a critical element of the whole process. Among all the machines in the production line, the efficiency of the roller welding machines is the lowest, they have the highest failure rate, and their downtimes lead to a substantial increase in costs, making them the bottleneck of the process. This is why they became the subject of the study. The analysis was carried out on the example of a selected model, marked with the symbol H2.
The analysis of the activities carried out when operating the roller welding machines allowed the states of the machine to be distinguished. They are presented in Table 1.
Among the distinguished states, those directly related to the production process should be singled out: S1, the manufacturing process, and S4, the maintenance activities necessary to keep the machine in good working order and to prepare it for the manufacturing process. Planned employee breaks resulting from the Labor Code (S5) also constitute a necessary element of the manufacturing process. The other states should be regarded as undesirable. These include stoppages in the manufacturing process due to a lack of orders (S3) or a lack of raw materials (S6).
The relationships between the individual states are shown in Figure 1.

3.2. Studying Markov’s Property

In the first stage, the memorylessness of the process was assessed. The χ² goodness-of-fit test was used, with the null hypothesis (at the significance level α = 0.05) that the chain studied satisfies the Markov property and the alternative hypothesis that it does not [11]. The test statistic was χ² = 228.7, with an associated p-value of 0.264, which means that there are no grounds to reject the null hypothesis that the chain satisfies the Markov property.
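For readers who wish to reproduce this step, the sketch below shows one common variant of such a χ² test in Python (NumPy/SciPy): observed second-order transition counts are compared with the counts expected if the next state depended only on the current state. The function name, the simple degrees-of-freedom count and the artificial example sequence are illustrative assumptions; this is not the exact procedure of [11].

import numpy as np
from scipy.stats import chi2

def markov_property_test(states, n_states):
    # Observed second-order transition counts n[i, j, k] for the triplets i -> j -> k.
    n = np.zeros((n_states, n_states, n_states))
    for a, b, c in zip(states, states[1:], states[2:]):
        n[a, b, c] += 1
    # First-order transition probabilities p[j, k], estimated from the same triplets.
    m = n.sum(axis=0)
    rows = m.sum(axis=1, keepdims=True)
    p_jk = np.divide(m, rows, out=np.zeros_like(m), where=rows > 0)
    # Chi-square statistic: compare observed i -> j -> k counts with the counts
    # expected under a first-order (memoryless) chain.
    stat, df = 0.0, 0
    for i in range(n_states):
        for j in range(n_states):
            n_ij = n[i, j, :].sum()
            if n_ij == 0:
                continue
            expected = n_ij * p_jk[j]
            mask = expected > 0
            stat += ((n[i, j, mask] - expected[mask]) ** 2 / expected[mask]).sum()
            df += int(mask.sum()) - 1   # rough df count; the exact value used in [11] may differ
    return stat, df, chi2.sf(stat, df)

# Artificial example: a random sequence of coded states 0..5 (standing for S1..S6).
rng = np.random.default_rng(0)
sequence = rng.integers(0, 6, size=2000)
print(markov_property_test(sequence, 6))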

3.3. Fit of Distributions

The next step was to assess the form of the distributions of the individual states. The considerations are presented here using the example of state S6; the same procedure was followed for the other states. The goodness of fit to the theoretical distributions considered most likely was verified on the basis of a Cullen and Frey graph, presented for state S6 in Figure 2.
For further analysis, the Weibull and Beta distributions were selected; their estimated parameters are presented in Table 2, while the goodness of fit of the empirical data to the individual distributions is shown in Figure 3.
For each of them, the AIC (Akaike information criterion) was calculated according to Formula (16), and on that basis the distribution with the better fit was selected:
$AIC = -2 \ln L + 2k,$  (16)
where $k$ is the number of parameters in the model and $L$ is the likelihood function.
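As an illustration of this selection step only, the sketch below fits the two candidate distributions to a synthetic sample (generated with the Weibull parameters reported in Table 2, since the raw dwell times are not published) and computes the AIC and the Kolmogorov–Smirnov statistic with SciPy. The rescaling of the data to (0, 1) for the Beta fit is an assumption about how Table 2 was obtained.

import numpy as np
from scipy import stats

def aic(log_likelihood, k):
    # Akaike information criterion, Equation (16): AIC = -2 ln L + 2 k.
    return -2.0 * log_likelihood + 2 * k

# Synthetic dwell times standing in for the S6 sample (Weibull parameters taken from Table 2).
sample = stats.weibull_min.rvs(1.66, scale=186.57, size=500, random_state=1)

# Candidate 1: two-parameter Weibull (location fixed at zero).
c, loc_w, scale_w = stats.weibull_min.fit(sample, floc=0)
ll_w = stats.weibull_min.logpdf(sample, c, loc_w, scale_w).sum()
ks_w = stats.kstest(sample, "weibull_min", args=(c, loc_w, scale_w))

# Candidate 2: Beta, which requires the data to be rescaled to the interval (0, 1).
x = sample / (sample.max() * 1.001)
a, b, loc_b, scale_b = stats.beta.fit(x, floc=0, fscale=1)
ll_b = stats.beta.logpdf(x, a, b, loc_b, scale_b).sum()

print("Weibull: AIC =", round(aic(ll_w, 2), 2), "KS D =", round(ks_w.statistic, 3), "p =", round(ks_w.pvalue, 3))
print("Beta:    AIC =", round(aic(ll_b, 2), 2))   # the lower AIC indicates the better fit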
The same calculations were made for the other states. The proposed distributions are presented in Table 3.
Not all of the distributions could be fitted with a parametric form. Moreover, none of the fitted distributions is exponential, which is a condition for using the Markov process [25]. The form of the distributions therefore allows the parameters to be estimated only with the semi-Markov model. In order to check whether meeting the condition on the form of the distributions affects the results, the subsequent part of the article compares the limit probabilities of the object's dwelling in the individual states according to the two models.

4. Case Study

4.1. Estimation of the Semi-Markov Model Parameters

First of all, based on the actual relationships between the states defined in Figure 1, the transition probability matrix was calculated. If $n_i$ denotes the number of times the process occupied state $s_i$ and $n_{ij}$ denotes the number of transitions from state $s_i$ to state $s_j$, then the estimator of the transition probability from state $s_i$ to state $s_j$ is determined from the formula:
$\hat{p}_{ij} = \frac{n_{ij}}{n_i}.$  (17)
The distribution of the probabilities of changes of the distinguished operating states (in one step), assuming that each arc of the graph representing the operation process (Equations (2) and (10)) corresponds to a probability value $p_{ij}$, is presented in Table 4.
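A minimal sketch of the estimator (17), assuming the operating history has been coded as a sequence of visited states (integers 0–5 standing for S1–S6, one entry per state change), is given below; the function name and the short example history are illustrative.

import numpy as np

def transition_matrix(states, n_states=6):
    # Estimate p_ij = n_ij / n_i (Equation (17)) from a coded sequence of visited states.
    counts = np.zeros((n_states, n_states))
    for i, j in zip(states, states[1:]):
        counts[i, j] += 1
    totals = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)

# Example with a short, made-up history:
print(transition_matrix([0, 3, 4, 0, 1, 3, 4, 5, 0, 2, 3, 4, 5, 0]))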
For the process studied, the following limits exist:
$\lim_{n \to \infty} p_{ij}(n) = \pi_j, \quad i, j = 1, 2, \ldots, 6, \ i \ne j,$  (18)
where $p_{ij}(n)$ denotes the probability of transition from state $S_i$ to state $S_j$ in $n$ steps.
Definition 2.
A probability distribution [25]
$\pi = [\pi_j: j \in S],$  (19)
satisfying the system of linear equations
$\sum_{i \in S} \pi_i \, p_{ij} = \pi_j, \quad j \in S,$  (20)
and
$\sum_{i \in S} \pi_i = 1,$  (21)
is said to be a stationary probability distribution of the Markov chain with transition matrix $P = [p_{ij}: i, j \in S]$. In matrix form, Equation (20) takes the following form:
$\Pi^T P = \Pi^T \iff (P^T - I) \cdot \Pi = 0.$  (22)
The stationary probabilities $\pi_j$ were calculated in accordance with Equation (22). For the 6-state model of the process studied, determining the stationary probabilities $\pi_j$ required solving the following matrix equation:
$[\pi_1\ \pi_2\ \pi_3\ \pi_4\ \pi_5\ \pi_6]^T \cdot \begin{bmatrix} 0 & p_{12} & p_{13} & p_{14} & 0 & 0 \\ p_{21} & 0 & p_{23} & p_{24} & 0 & 0 \\ p_{31} & 0 & 0 & p_{34} & 0 & 0 \\ p_{41} & 0 & 0 & 0 & p_{45} & 0 \\ p_{51} & p_{52} & p_{53} & 0 & 0 & p_{56} \\ p_{61} & 0 & p_{63} & 0 & 0 & 0 \end{bmatrix} = [\pi_1\ \pi_2\ \pi_3\ \pi_4\ \pi_5\ \pi_6]^T,$  (23)
with the normalization condition
$\pi_1 + \pi_2 + \pi_3 + \pi_4 + \pi_5 + \pi_6 = 1,$  (24)
which is equivalent to the following system of equations:
$\begin{cases} \pi_2 p_{21} + \pi_3 p_{31} + \pi_4 p_{41} + \pi_5 p_{51} + \pi_6 p_{61} = \pi_1 \\ \pi_1 p_{12} + \pi_5 p_{52} = \pi_2 \\ \pi_1 p_{13} + \pi_2 p_{23} + \pi_5 p_{53} + \pi_6 p_{63} = \pi_3 \\ \pi_1 p_{14} + \pi_2 p_{24} + \pi_3 p_{34} = \pi_4 \\ \pi_4 p_{45} = \pi_5 \\ \pi_5 p_{56} = \pi_6 \\ \pi_1 + \pi_2 + \pi_3 + \pi_4 + \pi_5 + \pi_6 = 1 \end{cases}$  (25)
After substituting the numerical values we get:
$\begin{cases} 0.029\,\pi_2 + 0.043\,\pi_3 + 0.003\,\pi_4 + 0.676\,\pi_5 + 0.982\,\pi_6 = \pi_1 \\ 0.141\,\pi_1 + 0.004\,\pi_5 = \pi_2 \\ 0.143\,\pi_1 + 0.117\,\pi_2 + 0.001\,\pi_5 + 0.018\,\pi_6 = \pi_3 \\ 0.716\,\pi_1 + 0.854\,\pi_2 + 0.957\,\pi_3 = \pi_4 \\ 0.997\,\pi_4 = \pi_5 \\ 0.319\,\pi_5 = \pi_6 \\ \pi_1 + \pi_2 + \pi_3 + \pi_4 + \pi_5 + \pi_6 = 1 \end{cases}$  (26)
The solution is the set of stationary probabilities presented in Table 5.
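The same stationary distribution can be obtained numerically; the sketch below solves Equations (23) and (24) with NumPy for the transition matrix of Table 4, replacing one (linearly dependent) balance equation with the normalization condition.

import numpy as np

# Transition probability matrix P from Table 4 (rows and columns ordered S1..S6).
P = np.array([
    [0.000, 0.141, 0.143, 0.716, 0.000, 0.000],
    [0.029, 0.000, 0.117, 0.854, 0.000, 0.000],
    [0.043, 0.000, 0.000, 0.957, 0.000, 0.000],
    [0.003, 0.000, 0.000, 0.000, 0.997, 0.000],
    [0.676, 0.004, 0.001, 0.000, 0.000, 0.319],
    [0.982, 0.000, 0.018, 0.000, 0.000, 0.000],
])

# Solve pi^T P = pi^T with sum(pi) = 1, i.e. (P^T - I) pi = 0 (Equation (22)).
A = P.T - np.eye(6)
A[-1, :] = 1.0          # replace one dependent balance equation with the normalization condition
b = np.zeros(6)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(np.round(pi, 3))  # expected to be close to Table 5: 0.276, 0.040, 0.046, 0.276, 0.275, 0.088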
The analysis of the stationary distribution (Table 5) showed that the highest limit probabilities concern the states related to standard activities resulting from the production process technology (S1, S4, S5), all above 27%. The probabilities of undesirable states such as failure (S2) or downtime (S3, S6) remain in the range of approximately 4–9%, which is a good result.
The calculated limit probabilities relate to the frequency of observations in the sample and do not take into account the durations of the individual states; therefore, the limit distribution of the semi-Markov process provides more meaningful diagnostics. It can be determined using the stationary distribution of the Markov chain and the expected durations of the process states [24,25]. The limit probabilities of the semi-Markov process are then expressed by the formula:
$P_j = \lim_{t \to \infty} P_j(t) = \frac{\pi_j E(T_j)}{\sum_{i \in S} \pi_i E(T_i)}.$  (27)
The solution requires the matrix of average conditional durations of the process states to be calculated from the sample:
$T = [\overline{T}_{ij}], \quad i, j = 1, 2, \ldots, 6.$  (28)
Its values are presented in Table 6.
Based on the transition probability matrix $P = [p_{ij}]$ (Table 4) and the matrix $T = [\overline{T}_{ij}]$ of average conditional durations of the process states (Table 6), the average unconditional durations of the process states $\overline{T}_j$ were determined according to the formula:
$\overline{T}_j = \sum_{k=1}^{6} p_{jk} \, \overline{T}_{jk}.$  (29)
For this purpose, the following system of equations was solved:
$\begin{cases} \overline{T}_1 = p_{12} \overline{T}_{12} + p_{13} \overline{T}_{13} + p_{14} \overline{T}_{14} \\ \overline{T}_2 = p_{21} \overline{T}_{21} + p_{23} \overline{T}_{23} + p_{24} \overline{T}_{24} \\ \overline{T}_3 = p_{31} \overline{T}_{31} + p_{34} \overline{T}_{34} \\ \overline{T}_4 = p_{41} \overline{T}_{41} + p_{45} \overline{T}_{45} \\ \overline{T}_5 = p_{51} \overline{T}_{51} + p_{52} \overline{T}_{52} + p_{53} \overline{T}_{53} + p_{56} \overline{T}_{56} \\ \overline{T}_6 = p_{61} \overline{T}_{61} + p_{63} \overline{T}_{63} \end{cases}$  (30)
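Equation (29) can also be evaluated directly as an element-wise product; the sketch below does so with NumPy for the values of Tables 4 and 6 (zeros mark transitions that do not occur in the model); it is an illustration, not the authors' code.

import numpy as np

# Transition probabilities (Table 4) and mean conditional durations in minutes (Table 6).
P = np.array([
    [0.000, 0.141, 0.143, 0.716, 0.000, 0.000],
    [0.029, 0.000, 0.117, 0.854, 0.000, 0.000],
    [0.043, 0.000, 0.000, 0.957, 0.000, 0.000],
    [0.003, 0.000, 0.000, 0.000, 0.997, 0.000],
    [0.676, 0.004, 0.001, 0.000, 0.000, 0.319],
    [0.982, 0.000, 0.018, 0.000, 0.000, 0.000],
])
T_cond = np.array([
    [0.00,    795.48, 637.63, 1082.14, 0.00,   0.00],
    [4800.00, 0.00,   256.67, 277.27,  0.00,   0.00],
    [4020.00, 0.00,   0.00,   508.60,  0.00,   0.00],
    [21.50,   0.00,   0.00,   0.00,    248.62, 0.00],
    [42.33,   40.00,  15.00,  0.00,    0.00,   40.36],
    [156.30,  0.00,   226.25, 0.00,    0.00,   0.00],
])

# Unconditional mean dwell times, Equation (29): T_j = sum_k p_jk * T_jk.
T_bar = (P * T_cond).sum(axis=1)
print(np.round(T_bar, 2))   # expected to be close to Table 7: 978.16, 406.02, 659.59, 247.96, 41.67, 157.56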
The obtained expected values of the unconditional dwell times $\overline{T}_j$ of the process X(t) in the individual operational states are presented in Table 7.
The calculated random variables $T_j$ have finite, positive expected values. This makes it possible to calculate, based on Formula (27), the limit probabilities $P_j$, which are presented in Table 8.
The probabilities $P_j$ determined in this way are limit probabilities describing the chance that, in the long run ($t \to \infty$), the system remains in a given operational state. This prognosis is more informative than the frequency of state occurrences alone. The highest value is achieved by state S1, i.e., operation (over 65%), and slightly less than 17% by state S4, which stems from the necessity to perform maintenance activities. The remaining limit values are satisfactorily small, which indicates the correct operation of the machine.
The technical readiness coefficient was also determined as the sum of the probabilities of the appropriate reliability states [34]. For the system under analysis, S1, S4 and S5 were considered fitness states, while S2, S3 and S6 were considered unfitness states. The readiness of the 6-state semi-Markov model can then be calculated as the sum of the limit probabilities of the respective states:
$K = \sum_{j \in \{1, 4, 5\}} P_j = P_1 + P_4 + P_5.$  (31)
This gives $K = 0.85$, which means that the machine is in a state of readiness for over 85% of the time, which is a very good result.
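A short sketch of Equations (27) and (31), using the stationary probabilities of Table 5 and the unconditional dwell times of Table 7, is given below; it approximately reproduces the values of Table 8 and the readiness coefficient.

import numpy as np

# Stationary probabilities of the Markov chain (Table 5)
pi = np.array([0.276, 0.040, 0.046, 0.276, 0.275, 0.088])
# Unconditional mean dwell times in minutes (Table 7)
T_bar = np.array([978.16, 406.02, 659.59, 247.96, 41.67, 157.56])

# Limit probabilities of the semi-Markov process, Equation (27)
P_lim = pi * T_bar / np.sum(pi * T_bar)
print(np.round(P_lim, 4))        # close to Table 8: 0.6579, 0.0396, 0.0740, 0.1667, 0.0279, 0.0337

# Technical readiness coefficient, Equation (31): fitness states S1, S4, S5
K = P_lim[[0, 3, 4]].sum()
print(round(K, 2))               # approx. 0.85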

4.2. Calculations According to the Markov Model

Markov processes involve exponential distributions, the most popular ones in reliability theory [11,35]. They are described by two parameters that fully define them. The first of these is the already calculated matrix of inter-state transition probabilities $p_{ij}$ (Table 4).
The second important parameter is the function describing the transitions of objects between states, called the transition intensity $\lambda_{ij}(t)$, which characterizes the rate of change of the transition probability $p_{ij}(t)$ [36]:
$\lambda_{ij}(t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t} \, p_{ij}(t, t + \Delta t), \quad i, j \in S, \ i \ne j.$  (32)
For homogeneous Markov processes, the transition intensity is constant and equal to the inverse of the expected duration $t_{ij}$ of state $S_i$ before a transition to $S_j$ [37]:
$\lambda_{ij} = \frac{1}{E(t_{ij})},$  (33)
where:
  • $\lambda_{ij}$ is the intensity of transitions from state $i$ to state $j$,
  • $E(t_{ij})$ is the expected value of the duration $t_{ij}$.
The diagonal intensities $\lambda_{ii} \le 0$ (for $i = j$) are defined so that they complement the sum of the transition intensities out of state $S_i$ to zero:
$\lambda_{ii} + \sum_{j \ne i} \lambda_{ij} = 0,$  (34)
thus
$\lambda_{ii} = -\sum_{j \ne i} \lambda_{ij}.$  (35)
The modules $|\lambda_{ii}| = -\lambda_{ii}$ are called the exit intensities from state $S_i$.
The elements $\lambda_{ij}$ of the transition intensity matrix $\Lambda$, calculated according to Formulas (33)–(35), are shown in Table 9.
Then, using relationship (36), the ergodic probabilities $p_j$ were calculated for the continuous-time Markov model:
$\Pi^T \Lambda = 0,$  (36)
where:
  • $\Pi^T = [p_j]^T = [p_1, \ldots, p_{n_s}]$ is the transposed vector of limit probabilities $p_j$,
  • $\Lambda$ is the transition intensity matrix,
  • $\sum_j p_j = 1$ is the normalization condition.
In this way, for the process studied, we obtain the following matrix Equation (37):
$[p_1\ p_2\ p_3\ p_4\ p_5\ p_6]^T \cdot \begin{bmatrix} \lambda_{11} & \lambda_{12} & \lambda_{13} & \lambda_{14} & 0 & 0 \\ \lambda_{21} & \lambda_{22} & \lambda_{23} & \lambda_{24} & 0 & 0 \\ \lambda_{31} & 0 & \lambda_{33} & \lambda_{34} & 0 & 0 \\ \lambda_{41} & 0 & 0 & \lambda_{44} & \lambda_{45} & 0 \\ \lambda_{51} & \lambda_{52} & \lambda_{53} & 0 & \lambda_{55} & \lambda_{56} \\ \lambda_{61} & 0 & \lambda_{63} & 0 & 0 & \lambda_{66} \end{bmatrix} = [0\ 0\ 0\ 0\ 0\ 0].$  (37)
Taking into account the normalization condition $\sum_{j=1}^{6} p_j = 1$, we obtain the limit probabilities $p_j$ of the system dwelling in states S1–S6, which are shown in Table 10.
The results obtained deviate from the values determined for the semi-Markov process and, disturbingly, indicate that the system studied tends primarily to remain in the downtime state (S3). The state in which production takes place (S1) comes only second, with a value over 35% lower than that calculated according to the semi-Markov process. A comparison of the remaining results is presented in Table 11. The largest difference concerns state S3 and exceeds 534%.
The technical readiness coefficient was also calculated based on Equation (31); for the Markov process it amounts to 45% ($K = 0.45$), almost half of the value determined according to the semi-Markov process.
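For comparison, the sketch below assembles the intensity matrix from the conditional durations of Table 6 according to Equations (33)–(35) and solves Equations (36)–(37) numerically. It is an illustration only; because the published durations are rounded, the result agrees with Table 10 only approximately (to within a few percent), but it shows the same dominance of the downtime state S3.

import numpy as np

# Mean conditional durations in minutes (Table 6); zeros mark impossible transitions.
T_cond = np.array([
    [0.00,    795.48, 637.63, 1082.14, 0.00,   0.00],
    [4800.00, 0.00,   256.67, 277.27,  0.00,   0.00],
    [4020.00, 0.00,   0.00,   508.60,  0.00,   0.00],
    [21.50,   0.00,   0.00,   0.00,    248.62, 0.00],
    [42.33,   40.00,  15.00,  0.00,    0.00,   40.36],
    [156.30,  0.00,   226.25, 0.00,    0.00,   0.00],
])

# Intensities lambda_ij = 1 / E(t_ij) for admissible transitions (Equation (33)),
# with the diagonal set to minus the row sum (Equations (34)-(35)).
Lam = np.zeros_like(T_cond)
Lam[T_cond > 0] = 1.0 / T_cond[T_cond > 0]
np.fill_diagonal(Lam, -Lam.sum(axis=1))

# Ergodic probabilities: p^T Lambda = 0 with sum(p) = 1 (Equations (36)-(37));
# one dependent balance equation is replaced by the normalization condition.
A = Lam.T.copy()
A[-1, :] = 1.0
b = np.zeros(6)
b[-1] = 1.0
p = np.linalg.solve(A, b)
print(np.round(p, 4))   # close to Table 10; the downtime state S3 and the operation state S1 dominate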

5. Conclusions

The study achieved two important goals. The first was the presentation of a method for evaluating the readiness of a selected machine tool in a production system. The analysis based on the Markov chain allowed the probabilities of inter-state transitions, which reflect the frequency of occurrence of the individual states, to be determined. The highest values were obtained for the relations S6–S4, S3–S4 and S4–S5. They indicate a high incidence of the unsuitability states S3 and S6 and the need to determine their causes and reduce their occurrence.
The limit values of the transition probabilities were also calculated. The analysis of the stationary distribution showed that the highest values concern the states related to activities resulting from the production process technology (S1, S4, S5), which is a good result.
However, a complete evaluation is only ensured by an analysis according to the semi-Markov process, which takes into account the average dwell times of the object in the individual operating states. The calculated limit probabilities, describing the behavior of the object as $t \to \infty$, were the highest for state S1, operation (over 65%), and state S4, service (almost 17%). The remaining limit values were found to be satisfactorily low, which means that the operation of the machine should be considered proper. The calculated technical readiness rate of 85% should also be viewed as positive.
Such an analysis not only provides information on the assessment of the current and expected functioning of the machine, but also reveals areas where modifications can be made in order to increase the level of availability and, as a result, ensure more efficient execution of production orders.
Another goal was to compare the results obtained under different assumptions concerning the forms of the distributions of the examined variables. In the literature this analysis is often omitted and the examined variable is simply assumed to have an exponential distribution. This allows Markov processes to be used, whose parameter estimation is simpler and described in more detail in the literature. As the study has shown, such an assumption may lead to different results and, in effect, to an incorrect assessment of the process or system studied. The intention of the authors was to show that omitting this important stage of the statistical analysis of the collected data and assuming the form of the distributions a priori does not guarantee the correctness of the obtained analyses.
In the presented study, the differences in the values of the calculated limit probabilities are large, reaching over 530%. The overall evaluation of system readiness is 46% lower in the case of the Markov process analysis.
However, the problem is not only the values of the calculated probabilities, but also the predicted main tendency of the system. According to the semi-Markov process, the system tends primarily to occupy state S1 (operation), which is a satisfactory result emphasizing the proper execution of tasks. The results according to the Markov process show that the system tends to occupy mainly state S3 (downtime), which would indicate mismanagement and system inactivity.
The goals set by the authors have been achieved, but it should be stressed that the results obtained concern only one selected machine. As part of further research, it is worth considering a comprehensive analysis of the entire production system using the method indicated in the article. This would provide complete information on its readiness, determine the level of impact of the individual elements (machines), and identify areas for improvement.

Author Contributions

Conceptualization, A.B. and L.G.; Formal analysis, A.B. and A.Ś.; Methodology, A.B.; Resources, M.G.; Writing—original draft, A.B. and A.Ś.; Writing—review & editing, A.B. and M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kozłowski, E.; Mazurkiewicz, D.; Żabiński, T.; Prucnal, S.; Sęp, J. Assessment model of cutting tool condition for real-time supervision system. Eksploat. Niezawodn. Maint. Reliab. 2019, 21, 679–685. [Google Scholar] [CrossRef]
  2. Kosicka, E.; Kozłowski, E.; Mazurkiewicz, D. The use of stationary tests for analysis of monitored residual processes. Eksploat. Niezawodn. Maint. Reliab. 2015, 17, 604–609. [Google Scholar] [CrossRef]
  3. Jasiulewicz-Kaczmarek, M.; Żywica, P. The concept of maintenance sustainability performance assessment by integrating balanced scorecard with non-additive fuzzy integral. Eksploat. Niezawodn. Maint. Reliab. 2018, 20, 650–661. [Google Scholar] [CrossRef]
  4. Antosz, K. Maintenance—Identification and analysis of the competency gap. Eksploat. Niezawodn. Maint. Reliab. 2018, 20, 484–494. [Google Scholar] [CrossRef]
  5. Kishawy, H.A.; Hegab, E.; Umer, U.; Mohany, A. Application of acoustic emissions in machining processes: Analysis and critical review. Int. J. Adv. Manuf. Technol. 2018, 98, 1391–1407. [Google Scholar] [CrossRef]
  6. Rymarczyk, T.; Kozłowski, E.; Kłosowski, G.; Niderla, K. Logistic Regression for Machine Learning in Process Tomography. Sensors 2019, 19, 3400. [Google Scholar] [CrossRef] [Green Version]
  7. Fisz, M. Rachunek Prawdopodobieństwa i Statystyka Matematyczna; PWN: Warszawa, Poland, 1967. [Google Scholar]
  8. Zhang, Y.; Zhang, Q.; Yu, R. Markov property of Markov chains and its test. In Proceedings of the International Conference on Machine Learning and Cybernetics, Qingdao, China, 11–14 July 2010; pp. 1864–1867. [Google Scholar] [CrossRef]
  9. Chen, B.; Hong, Y. Testing for the Markov property in time series. Econ. Theory 2012, 28, 130–178. [Google Scholar] [CrossRef]
  10. Shi, S.; Lin, N.; Zhang, Y.; Cheng, J.; Huang, C.; Liu, L.; Lu, B. Research on Markov property analysis of driving cycles and its application. Transp. Res. Part D Transp. Environ. 2016, 47, 171–181. [Google Scholar] [CrossRef]
  11. Kozłowski, E.; Borucka, A.; Świderski, A. Application of the logistic regression for determining transition probability matrix of operating states in the transport systems. Eksploat. Niezawodn. Maint. Reliab. 2020, 22, 192–200. [Google Scholar] [CrossRef]
  12. Yang, H.; Nair, V.N.; Chen, J.; Sudjianto, A. Assessing Markov property in multistate transition models with applications to credit risk modeling. Appl. Stoch. Models Bus. Ind. 2019, 35, 552–570. [Google Scholar] [CrossRef]
  13. Komorowski, M.; Raffa, J. Markov Models and Cost Effectiveness Analysis: Applications in Medical Research. In Secondary Analysis of Electronic Health Records; Springer: Cham, Switzerland, 2016; pp. 351–367. [Google Scholar]
  14. Gercbach, I.B.; Kordonski, C.B. Modele Niezawodnościowe Obiektów Technicznych; WNT: Warszawa, Poland, 1968. [Google Scholar]
  15. Gichman, I.I.; Skorochod, A.W. Wstęp Do Teorii Procesów Stochastycznych; PWN: Warszawa, Poland, 1968. [Google Scholar]
  16. Li, Y.; Dong, Y.; Zhang, H.; Zhao, H.; Shi, H.; Zhao, X. Spectrum Usage Prediction Based on High-order Markov Model for Cognitive Radio Networks. In Proceedings of the 10th IEEE International Conference on Computer and Information Technology, Bradford, UK, 29 June–1 July 2010; pp. 2784–2788. [Google Scholar] [CrossRef]
  17. Perman, M.; Senegacnik, A.; Tuma, M. Semi-Markov models with an application to power-plant reliability analysis. IEEE Trans. Reliab. 1997, 46, 526–532. [Google Scholar] [CrossRef]
  18. Lana, X.; Burgueño, A. Daily dry–wet behaviour in Catalonia (NE Spain) from the viewpoint of Markov chains. Int. J. Climatol. 1998, 18, 793–815. [Google Scholar] [CrossRef]
  19. Pang, W.; Forster, J.J.; Troutt, M.D. Estimation of Wind Speed Distribution Using Markov Chain Monte Carlo Techniques. J. Appl. Meteorol. 2010, 40, 1476–1484. [Google Scholar] [CrossRef]
  20. Love, C.E.; Zhang, Z.G.; Zitron, M.A.; Guo, R. A discrete semi-Markov decision model to determine the optimal repair/replacement policy under general repairs. Eur. J. Oper. Res. 2010, 125, 398–409. [Google Scholar] [CrossRef]
  21. Doob, J.L. Stochastic Processes; John Wiley & Sons/Chapman & Hall: New York, NY, USA, 1953. [Google Scholar]
  22. Iosifescu, M. Finite Markov Processes and Their Applications; John Wiley & Sons: Chichester, UK, 1980. [Google Scholar]
  23. Howard, R.A. Dynamic Probabilistic Systems, Volume II: Semi-Markov and Decision Processes; Wiley: New York, NY, USA, 1971. [Google Scholar]
  24. Jaźwiński, J.; Grabski, F. Niektóre Problemy Modelowania Systemów Transportowych; Biblioteka Problemów Eksploatacji: Warszawa, Poland, 2003. [Google Scholar]
  25. Grabski, F. Semi-Markov Processes: Applications in System Reliability and Maintenance; Elsevier: Gdynia, Poland, 2014. [Google Scholar]
  26. Geng, J.; Xu, S.; Niu, J.; Wei, K. Research on technical condition evaluation of equipments based on matter element theory and hidden Markov model. In Proceedings of the 4th Annual International Workshop on Materials Science and Engineering (IWMSE2018), Xi’an, China, 18–20 May 2018; Volume 381, p. 012134. [Google Scholar] [CrossRef] [Green Version]
  27. Yuriy, E.; Obzherin, M.; Siodorov, S. Application of hidden Markov models for analyzing the dynamics of technical systems. AIP Conf. Proc. 2019, 2188, 050019. [Google Scholar] [CrossRef]
  28. Iscioglu, F.; Kocak, A. Dynamic reliability analysis of a multi-state manufacturing system. Eksploat. Niezawodn. Maint. Reliab. 2019, 21, 451–459. [Google Scholar] [CrossRef]
  29. Liu, C.; Duan, H.; Chen, P.; Duan, L. Improve Production Efficiency and Predict Machine Tool Status using Markov Chain and Hidden Markov Model. In Proceedings of the 8th International Conference of Computer Science and Information Technology (CIST), Amman, Jordan, 26–28 May 2018; pp. 276–281. [Google Scholar] [CrossRef]
  30. Gola, A. Reliability analysis of reconfigurable manufacturing system structures using computer simulation methods. Eksploat. Niezawodn. Maint. Reliab. 2019, 21, 90–102. [Google Scholar] [CrossRef]
  31. Wang, C.H.; Sheu, S.H. Determining the optimal production-maintenance policy with inspection errors: Using a Markov chain. Comput. Oper. Res. 2003, 30, 1–17. [Google Scholar] [CrossRef]
  32. Dung, K.; Lei, J.; Chan, F.; Hui, J.; Zhang, F.; Wang, Y. Hidden Markov model-based autonomous manufacturing task orchestration in smart shop floors. Robot. Comput. Integr. Manuf. 2020, 61, 101845. [Google Scholar] [CrossRef]
  33. Xiang, L.; Guan, J.; Wu, S. Measuring the impact of final demand on global production system based on Markov process. Phys. A Stat. Mech. Appl. 2018, 502, 148–163. [Google Scholar] [CrossRef]
  34. Żurek, J.; Tomaszewska, J. Analiza system eksploatacji z punktu widzenia gotowości. Prace Nauk. Politech. Warsz. 2016, 114, 471–477. [Google Scholar]
  35. Pavlov, N.; Golev, A.; Rahney, A.; Kyurkchiev, N. A Note on the Generalized Inverted Exponential Software Reliability Model. Int. J. Adv. Res. Comput. Commun. Eng. 2018, 7, 484–487. [Google Scholar]
  36. Borucka, A.; Niewczas, A.; Hasilova, K. Forecasting the readiness of special vehicles using the semi-Markov model. Eksploat. Niezawodn. Maint. Reliab. 2019, 21, 662–669. [Google Scholar] [CrossRef]
  37. Filipowicz, B. Modele Stochastyczne w Badaniach Operacyjnych. Analiza i Synteza Systemów Obsługi i Sieci Kolejkowych; Wydawnictwa Naukowo-Techniczne: Warszawa, Poland, 1996. [Google Scholar]
Figure 1. Diagram of the inter-state transitions of the process studied.
Figure 2. Cullen and Frey graph for the state S6.
Figure 3. The assessment of the goodness of fit of the empirical data of state S6 to the Weibull and Beta distributions.
Table 1. Operating states under consideration.
S1: Operation
S2: Failure
S3: Downtime due to lack of orders
S4: Maintenance activities (cleaning, reorientation, knife adjustment, inspection, roll change, consultation, raw material preparation)
S5: Scheduled employee breaks
S6: Downtime due to lack of raw materials
Table 2. Estimated model parameters according to the Weibull and Beta distributions.
Weibull distribution: scale = 186.57 (std. error 9.53), shape = 1.66 (std. error 0.11); Kolmogorov–Smirnov statistic D = 0.07, p-value = 0.385; Akaike criterion = 1872.38.
Beta distribution: shape 1 = 1.93 (std. error 0.21), shape 2 = 9.76 (std. error 1.13); Kolmogorov–Smirnov statistic D = 0.06, p-value = 0.56; Akaike criterion = −297.29.
Table 3. Goodness of fit results for states S1–S6.
S1: non-parametric
S2: Beta, KS statistic D = 0.07, p-value = 0.793
S3: non-parametric
S4: Gamma, KS statistic D = 0.03, p-value = 0.726
S5: non-parametric
S6: Beta, KS statistic D = 0.07, p-value = 0.385
Table 4. The inter-state transition probability matrix p_ij.
p_ij    S1      S2      S3      S4      S5      S6
S1      0       0.141   0.143   0.716   0       0
S2      0.029   0       0.117   0.854   0       0
S3      0.043   0       0       0.957   0       0
S4      0.003   0       0       0       0.997   0
S5      0.676   0.004   0.001   0       0       0.319
S6      0.982   0       0.018   0       0       0
Table 5. Stationary probabilities of the Markov chain.
        S1      S2      S3      S4      S5      S6
π_j     0.276   0.040   0.046   0.276   0.275   0.088
Table 6. Average conditional durations T̄_ij of the semi-Markov process states (in minutes).
T̄_ij    S1      S2      S3      S4      S5      S6
S1      –       795.48  637.63  1082.14 –       –
S2      4800    –       256.67  277.27  –       –
S3      4020    –       –       508.6   –       –
S4      21.5    –       –       –       248.62  –
S5      42.33   40      15      –       –       40.36
S6      156.3   –       226.25  –       –       –
Table 7. Unconditional dwell times T̄_j (in minutes) for the 6-state model.
T̄_1 = 978.16, T̄_2 = 406.02, T̄_3 = 659.59, T̄_4 = 247.96, T̄_5 = 41.67, T̄_6 = 157.56.
Table 8. Values of the limit probabilities P_j of the 6-state model.
P_1 = 0.6579 (65.79%), P_2 = 0.0396 (3.96%), P_3 = 0.0740 (7.40%), P_4 = 0.1667 (16.67%), P_5 = 0.0279 (2.79%), P_6 = 0.0337 (3.37%).
Table 9. The transition intensity matrix of the process studied (λ_ij in 1/minute).
λ_ij    S1      S2      S3      S4      S5      S6
S1      −0.0037 0.0013  0.0016  0.0009  0       0
S2      0.0002  −0.0077 0.0039  0.0036  0       0
S3      0.0002  0       −0.0022 0.0020  0       0
S4      0.0465  0       0       −0.0505 0.0040  0
S5      0.0236  0.0250  0.0667  0       −0.1401 0.0248
S6      0.0064  0       0.0044  0       0       −0.0108
Table 10. Limit probabilities p_j in continuous time for the Markov process.
        S1      S2      S3      S4      S5      S6
p_j     0.4228  0.0742  0.4695  0.0306  0.0009  0.0020
p_j [%] 42.28   7.42    46.95   3.06    0.09    0.20
Table 11. Comparison of results for the Markov and semi-Markov models.
                        S1      S2      S3      S4      S5      S6
p_j, semi-Markov model  0.6579  0.0396  0.0740  0.1667  0.0279  0.0337
p_j, Markov model       0.4228  0.0742  0.4695  0.0306  0.0009  0.0020
difference [%]          −35.74  87.41   534.50  −81.65  −96.87  −94.05
