Article

An NHPP Software Reliability Model with S-Shaped Growth Curve Subject to Random Operating Environments and Optimal Release Time

1 Department of Computer Science and Statistics, Chosun University, 309 Pilmun-daero Dong-gu, Gwangju 61452, Korea
2 Department of Industrial and Systems Engineering, Rutgers University, 96 Frelinghuysen Road, Piscataway, NJ 08855-8018, USA
* Author to whom correspondence should be addressed.
Submission received: 26 October 2017 / Revised: 8 December 2017 / Accepted: 12 December 2017 / Published: 16 December 2017

Abstract

The failure of a computer system because of a software failure can lead to tremendous losses to society; therefore, software reliability is a critical issue in software development. As software has become more prevalent, software reliability has also become a major concern in software development. We need to predict fluctuations in software reliability and reduce the cost of software testing; therefore, a software development process that considers the release time, cost, reliability, and risk is indispensable. We thus need to develop a model that accurately predicts the defects in new software products. In this paper, we propose a new non-homogeneous Poisson process (NHPP) software reliability model with an S-shaped growth curve for use during the software development process, and relate it to a fault detection rate function when considering random operating environments. An explicit solution for the mean value function of the proposed model is presented. Examples based on two sets of failure data collected from software applications illustrate the goodness-of-fit of the proposed model alongside several existing NHPP models. The results show that the proposed model fits the data significantly more closely than the existing NHPP models. Finally, we propose a model for determining optimal release policies, in which the total software system cost is minimized for the given environment.

1. Introduction

‘Software’ is a generic term for a computer program and its associated documents. Software is divided into operating systems and application software. As new hardware is developed, its price decreases; thus, hardware is frequently upgraded at low cost, and software becomes the primary cost driver. The failure of a computer system because of a software failure can cause significant losses to society. Therefore, software reliability is a critical issue in software development. This problem requires finding a balance between meeting user requirements and minimizing testing costs. To reduce costs during the software testing stage, it is necessary to know, within the planning cycle, how software reliability and testing costs will evolve; thus, a software development process that considers the release time, cost, reliability, and risk is indispensable. In addition, it is necessary to develop a model to predict the defects in software products. To estimate reliability metrics, such as the number of residual faults, the failure rate, and the overall reliability of the software, various non-homogeneous Poisson process (NHPP) software reliability models have been developed using a fault intensity rate function and a mean value function within a controlled testing environment. The purpose of many NHPP software reliability models is to obtain an explicit formula for the mean value function, m(t), which is applied to the software testing data to make predictions on software failures and reliability in field environments [1]. A few researchers have evaluated a generalized software reliability model that captures the uncertainty of an environment and its effects on the software failure rate, and have developed an NHPP software reliability model that considers the uncertainty of the system fault detection rate per unit of time subject to the operating environment [2,3,4]. Inoue et al. [5] developed a bivariate software reliability growth model that considers the uncertainty of the change in the software failure-occurrence phenomenon at the change-point for improved accuracy. Okamura and Dohi [6] introduced a phase-type software reliability model and developed parameter estimation algorithms using grouped data. Song et al. [7,8] recently developed an NHPP software reliability model with a three-parameter fault detection rate, and applied a Weibull fault detection rate function during the software development process. They related the model to the error detection rate function by considering the uncertainty of the operating environment. In addition, Li and Pham [9] proposed a model accounting for the uncertainty of the operating environment under the condition that the fault content function is a linear function of the testing time, and that the fault detection rate is based on the testing coverage.
In this paper, we discuss a new NHPP software reliability model with an S-shaped growth curve applicable to the software development process and relate it to the fault detection rate function when considering random operating environments. We examine the goodness-of-fit of the proposed model and other existing NHPP models based on several sets of software failure data, and then determine the optimal release times that minimize the expected total software cost under given conditions. The explicit solution of the mean value function for the new NHPP software reliability model is derived in Section 2. Criteria for the model comparisons and the selection of the best model are discussed in Section 3. The optimal release policy is discussed in Section 4, and the results of the model analysis and the optimal release times are discussed in Section 5. Finally, Section 6 provides some concluding remarks.

2. A New NHPP Software Reliability Model

2.1. Non-Homogeneous Poisson Process

The software fault detection process has been formulated using a popular counting process. A counting process $\{N(t), t \ge 0\}$ is a non-homogeneous Poisson process (NHPP) with intensity function $\lambda(t)$ if it satisfies the following conditions:
(I) $N(0) = 0$;
(II) the process has independent increments;
(III) $\int_{t_1}^{t_2} \lambda(t)\,dt$, $(t_2 \ge t_1)$, is the expected number of failures in the interval $[t_1, t_2]$.
Assuming that software failures/defects conform to the NHPP conditions, $N(t)$ $(t \ge 0)$ represents the cumulative number of failures up to execution time $t$, and $m(t)$ is the mean value function. The mean value function $m(t)$ and the intensity function $\lambda(t)$ satisfy the following relationship:
$$m(t) = \int_0^t \lambda(s)\,ds, \qquad \frac{dm(t)}{dt} = \lambda(t).$$
$N(t)$ follows a Poisson distribution with mean value function $m(t)$, and can be expressed as:
$$\Pr\{N(t) = n\} = \frac{\{m(t)\}^n}{n!}\exp\{-m(t)\}, \quad n = 0, 1, 2, 3, \ldots$$
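As a concrete illustration of the counting-process definition above, the short Python sketch below evaluates $\Pr\{N(t)=n\}$ for an arbitrary mean value function. The GO-model form and the parameter values in the example are placeholders (they echo the DS1 estimates reported later in Table 5), not part of the definition itself.

```python
# Minimal sketch: the NHPP failure-count distribution Pr{N(t) = n}.
# Any mean value function m(t) can be plugged in; the GO model with
# a = 136.05, b = 0.138 (the DS1 estimates in Table 5) is used only
# as an illustrative placeholder.
import math

def go_mean(t, a=136.05, b=0.138):
    """GO-model mean value function m(t) = a(1 - exp(-b t))."""
    return a * (1.0 - math.exp(-b * t))

def prob_n_failures(n, t, mean_fn=go_mean):
    """Pr{N(t) = n} = m(t)^n / n! * exp(-m(t))."""
    m = mean_fn(t)
    return m ** n / math.factorial(n) * math.exp(-m)

if __name__ == "__main__":
    # Expected failures by t = 10 and the probability of observing exactly 100.
    print(go_mean(10.0), prob_n_failures(100, 10.0))
```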

2.2. General NHPP Software Reliability Model

Pham et al. [10] formalized the general framework for NHPP-based software reliability and provided analytical expressions for the mean value function $m(t)$ using differential equations. The mean value function $m(t)$ of the general NHPP software reliability model, with different values for $a(t)$ and $b(t)$, which reflect various assumptions about the software testing process, can be obtained with the initial condition $N(0) = 0$ from
$$\frac{dm(t)}{dt} = b(t)\,\big[a(t) - m(t)\big]. \qquad (1)$$
The general solution of (1) is
$$m(t) = e^{-B(t)}\left[m_0 + \int_{t_0}^{t} a(s)\,b(s)\,e^{B(s)}\,ds\right], \qquad (2)$$
where $B(t) = \int_{t_0}^{t} b(s)\,ds$, and $m(t_0) = m_0$ is the marginal condition of (2).
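For completeness, a standard integrating-factor argument (not spelled out in the paper) recovers (2) from (1):

```latex
% Sketch: derivation of (2) from (1) via an integrating factor.
\frac{dm(t)}{dt} + b(t)\,m(t) = a(t)\,b(t)
\;\Longrightarrow\;
\frac{d}{dt}\!\left[m(t)\,e^{B(t)}\right] = a(t)\,b(t)\,e^{B(t)},
\qquad B(t) = \int_{t_0}^{t} b(s)\,ds .
```

Integrating both sides from $t_0$ to $t$ and using $m(t_0) = m_0$ (so that $B(t_0) = 0$) gives $m(t)e^{B(t)} = m_0 + \int_{t_0}^{t} a(s)\,b(s)\,e^{B(s)}\,ds$, which is (2).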

2.3. New NHPP Software Reliability Model

Pham [3] formulated a generalized NHPP software reliability model that incorporated uncertainty in the operating environment as follows:
$$\frac{dm(t)}{dt} = \eta\,\big[b(t)\big]\,\big[N - m(t)\big], \qquad (3)$$
where $\eta$ is a random variable that represents the uncertainty of the system fault detection rate in the operating environment, with probability density function $g$; $b(t)$ is the fault detection rate function, which also represents the average failure rate caused by faults; $N$ is the expected number of faults that exist in the software before testing; and $m(t)$ is the expected number of errors detected by time $t$ (the mean value function).
Thus, a generalized mean value function $m(t)$, with the initial condition $m(0) = 0$, is given by
$$m(t) = \int_{\eta} N\left(1 - e^{-\eta \int_0^t b(x)\,dx}\right) dg(\eta). \qquad (4)$$
The mean value function [11] obtained from (4), when the random variable $\eta$ has a generalized probability density function $g$ with two parameters $\alpha \ge 0$ and $\beta \ge 0$, is given by
$$m(t) = N\left[1 - \left(\frac{\beta}{\beta + \int_0^t b(s)\,ds}\right)^{\alpha}\right], \qquad (5)$$
where b(t) is the fault detection rate per fault per unit of time.
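The step from (4) to (5) is not written out here; a brief sketch, assuming the common choice that $\eta$ follows a gamma distribution with shape $\alpha$ and rate $\beta$ (consistent with [3,11]), is:

```latex
% Sketch: gamma-distributed eta turns (4) into (5).
% Let B(t) = \int_0^t b(s)\,ds and g(\eta) = \beta^{\alpha}\eta^{\alpha-1}e^{-\beta\eta}/\Gamma(\alpha).
m(t) = N\int_0^{\infty}\!\left(1 - e^{-\eta B(t)}\right) g(\eta)\,d\eta
     = N\left[1 - \mathbb{E}\!\left(e^{-\eta B(t)}\right)\right]
     = N\left[1 - \left(\frac{\beta}{\beta + B(t)}\right)^{\alpha}\right],
```

using the gamma Laplace transform $\mathbb{E}\!\left(e^{-s\eta}\right) = \left(\beta/(\beta+s)\right)^{\alpha}$, which is exactly (5).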
We propose an NHPP software reliability model including the random operating environment using Equations (3)–(5) and the following assumptions [7,8]:
(a) The occurrence of a software failure follows a non-homogeneous Poisson process.
(b) Faults during execution can cause software failure.
(c) The software failure detection rate at any time depends on both the fault detection rate and the number of remaining faults in the software at that time.
(d) Debugging is performed to remove faults immediately when a software failure occurs.
(e) New faults may be introduced into the software system, regardless of whether other faults are removed or not.
(f) The fault detection rate $b(t)$ can be expressed by (6).
(g) The random operating environment is captured if the unit failure detection rate $b(t)$ is multiplied by a factor $\eta$ that represents the uncertainty of the system fault detection rate in the field.
In this paper, we consider the fault detection rate function $b(t)$ to be
$$b(t) = \frac{a^2 t}{1 + at}, \qquad a > 0. \qquad (6)$$
Noting that $\int_0^t b(s)\,ds = \int_0^t \frac{a^2 s}{1 + as}\,ds = at - \ln(1 + at)$, we obtain a new NHPP software reliability model with an S-shaped growth curve subject to random operating environments, whose mean value function $m(t)$ gives the expected number of software failures detected by time $t$, by substituting the function $b(t)$ above into (5):
$$m(t) = N\left[1 - \left(\frac{\beta}{\beta + at - \ln(1 + at)}\right)^{\alpha}\right]. \qquad (7)$$
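A small Python sketch of (7) is given below; the parameter values in the example are the DS1 least-squares estimates reported later in Table 5 and are included only for illustration.

```python
# Sketch: the proposed mean value function (7),
# m(t) = N * (1 - (beta / (beta + a*t - ln(1 + a*t)))**alpha).
import numpy as np

def m_new(t, a, alpha, beta, N):
    t = np.asarray(t, dtype=float)
    return N * (1.0 - (beta / (beta + a * t - np.log1p(a * t))) ** alpha)

# Illustrative evaluation with the DS1 estimates from Table 5
# (a = 0.277, alpha = 0.328, beta = 17.839, N = 228.909).
print(m_new([1, 5, 10, 25], a=0.277, alpha=0.328, beta=17.839, N=228.909))
```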

3. Criteria for Model Comparisons

Theoretically, once the analytical expression for the mean value function $m(t)$ is derived, the parameters in $m(t)$ can be estimated using parameter estimation methods such as maximum likelihood estimation (MLE) or least squares estimation (LSE); in practice, however, accurate estimates may not be obtained by MLE, particularly when the mean value function $m(t)$ is too complex. The model parameters in the mean value function $m(t)$ are therefore obtained using a MATLAB program based on the LSE method. Six common criteria, namely the mean squared error (MSE), Akaike's information criterion (AIC), the predictive ratio risk (PRR), the predictive power (PP), the sum of absolute errors (SAE), and $R^2$, are used for the goodness-of-fit estimation of the proposed model, and to compare the proposed model with the other existing models listed in Table 1. These criteria are described as follows.
The MSE is
$$\text{MSE} = \frac{\sum_{i=1}^{n}\left(\hat{m}(t_i) - y_i\right)^2}{n - m}.$$
AIC [12] is
$$\text{AIC} = -2\log L + 2m.$$
The PRR [13] is
$$\text{PRR} = \sum_{i=1}^{n}\left(\frac{\hat{m}(t_i) - y_i}{\hat{m}(t_i)}\right)^2.$$
The PP [13] is
$$\text{PP} = \sum_{i=1}^{n}\left(\frac{\hat{m}(t_i) - y_i}{y_i}\right)^2.$$
The SAE [8] is
$$\text{SAE} = \sum_{i=1}^{n}\left|\hat{m}(t_i) - y_i\right|.$$
The correlation index of the regression curve equation ($R^2$) [9] is
$$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(\hat{m}(t_i) - y_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}.$$
Here, $\hat{m}(t_i)$ is the estimated cumulative number of failures at $t_i$ for $i = 1, 2, \ldots, n$; $y_i$ is the total number of failures observed at time $t_i$; $n$ is the total number of observations in the dataset; and $m$ is the number of unknown parameters in the model.
The MSE measures the deviation of the model estimates from the actual data, adjusted for the total number of observations and the number of unknown parameters in the model. The AIC compares the capability of each model in terms of maximizing the likelihood function (L) while accounting for the degrees of freedom. The PRR measures the distance of the model estimates from the actual data relative to the model estimates, whereas the PP measures this distance relative to the actual data. The SAE measures the absolute distance between the model estimates and the actual data. For five of these criteria, i.e., MSE, AIC, PRR, PP, and SAE, smaller values indicate a closer fit relative to other models run on the same dataset; $R^2$, on the other hand, should be close to 1.
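A compact Python sketch of these criteria (not the authors' MATLAB program) is shown below; `m_hat` and `y` stand for the fitted and observed cumulative failure counts at the observation times, and the log-likelihood needed for the AIC must be supplied separately.

```python
# Sketch: the goodness-of-fit criteria of Section 3.
import numpy as np

def fit_criteria(m_hat, y, n_params, log_lik=None):
    """Return MSE, PRR, PP, SAE, R2 (and AIC when a log-likelihood is given)."""
    m_hat = np.asarray(m_hat, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    resid = m_hat - y
    criteria = {
        "MSE": np.sum(resid ** 2) / (n - n_params),
        "PRR": np.sum((resid / m_hat) ** 2),
        "PP":  np.sum((resid / y) ** 2),
        "SAE": np.sum(np.abs(resid)),
        "R2":  1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2),
    }
    if log_lik is not None:          # AIC = -2 log L + 2m
        criteria["AIC"] = -2.0 * log_lik + 2.0 * n_params
    return criteria
```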
We use (8) below to obtain the confidence interval [13] of the proposed NHPP software reliability model. The confidence interval is described as follows:
$$\hat{m}(t) \pm Z_{\alpha/2}\sqrt{\hat{m}(t)}, \qquad (8)$$
where $Z_{\alpha/2}$ is the $100(1 - \alpha)$ percentile of the standard normal distribution.
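In code, the band of (8) can be computed directly from the fitted mean value function; the sketch below uses SciPy's normal quantile, an implementation detail not specified in the paper.

```python
# Sketch: the confidence band of Equation (8), m_hat(t) +/- z * sqrt(m_hat(t)).
import numpy as np
from scipy.stats import norm

def confidence_band(m_hat, alpha=0.05):
    """Return (lower, upper) limits for the fitted values m_hat."""
    m_hat = np.asarray(m_hat, dtype=float)
    half = norm.ppf(1.0 - alpha / 2.0) * np.sqrt(m_hat)
    return m_hat - half, m_hat + half
```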
Table 1 summarizes the different mean value functions of the proposed new model and several existing NHPP models. Note that models 9 and 10 consider environmental uncertainty.

4. Optimal Software Release Policy

In this section, we discuss the use of the software reliability model under varying situations to determine the optimal software release time, T*, which minimizes the expected total software cost. Many studies have been conducted on the optimal software release time and its related problems [20,21,22,23,24]. The quality of the system normally depends on the testing efforts, such as the testing environment, times, tools, and methodologies. If testing is short, the cost of system testing is lower, but the consumers may face a higher risk, e.g., buying an unreliable system. This also entails higher costs in the operating environment, because it is much more expensive to detect and correct a failure during the operational phase than during the testing phase. In contrast, the longer the testing time, the more faults can be removed, which leads to a more reliable system; however, the testing costs will also increase. Therefore, it is very important to determine when to release the system based on the testing cost and reliability. Figure 1 shows the system development lifecycle considered in the following cost model: the testing phase before release time T, the testing environment period, the warranty period, and the operational life in the actual field environment, which is usually quite different from the testing environment [24].
The expected total software cost $C(T)$ [24] can be expressed as
$$C(T) = C_0 + C_1 T + C_2\, m(T)\,\mu_y + C_3\left(1 - R(x \mid T)\right) + C_4\left[m(T + T_w) - m(T)\right]\mu_w, \qquad (9)$$
where $C_0$ is the set-up cost of testing; $C_1 T$ is the cost of testing; $C_2\, m(T)\,\mu_y$ is the expected cost of removing all errors detected by time $T$ during the testing phase; $C_3\left(1 - R(x \mid T)\right)$ is the penalty cost owing to failures that occur after the system release time $T$; and $C_4\left[m(T + T_w) - m(T)\right]\mu_w$ is the expected cost of removing all errors detected during the warranty period $[T, T + T_w]$. The cost required to remove a fault during the operating period is higher than during the testing period, and the time needed is much longer.
Finally, we aim to find the optimal software release time, T*, that minimizes the expected total software cost in the given environment:
$$\text{Minimize } C(T).$$
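To make the structure of (9) concrete, the sketch below writes C(T) as a Python function. The paper does not restate the form of the conditional reliability here, so the usual NHPP expression R(x|T) = exp(-[m(T+x) - m(T)]) is assumed; everything else follows (9) directly.

```python
# Sketch of the expected total cost C(T) in Eq. (9); R(x|T) is assumed to be
# exp(-[m(T+x) - m(T)]), the standard NHPP reliability over (T, T+x].
import numpy as np

def total_cost(T, m, C0, C1, C2, C3, C4, Tw, x, mu_y, mu_w):
    setup_and_testing = C0 + C1 * T                   # C0 + C1*T
    debugging = C2 * m(T) * mu_y                      # faults removed by time T
    reliability = np.exp(-(m(T + x) - m(T)))          # assumed R(x|T)
    penalty = C3 * (1.0 - reliability)                # failures after release
    warranty = C4 * (m(T + Tw) - m(T)) * mu_w         # faults removed in [T, T+Tw]
    return setup_and_testing + debugging + penalty + warranty
```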

5. Numerical Examples

5.1. Data Information

Dataset #1 (DS1), presented in Table 2, was reported by Musa [25] based on software failure data from a real time command and control system (RTC&CS), and represents the failures that were observed during system testing (25 hours of CPU time). The number of test object instructions delivered for this system, which was developed by Bell Laboratories, was 21,700.
Dataset #2 (DS2), as shown in Table 3, is the second of three releases of software failure data collected from three different releases of a large medical record system (MRS) [26], consisting of 188 software components. Each component contains several files. Initially, the software consisted of 173 software components. All three releases added new functionality to the product. Between three and seven new components were added in each of the three releases, for a total of 15 new components. Many other components were modified during each of the three releases as a side effect of the added functionality. Detailed information of the dataset can be obtained in the report by Stringfellow and Andrews [26].
Dataset #3 (DS3), as shown in Table 4, is from one of four major releases of software products at Tandom Computers (TDC) [27]. A total of 100 failures were observed over the testing CPU hours. Detailed information on the dataset can be obtained in the report by Wood [27].

5.2. Model Analysis

Table 5, Table 6 and Table 7 summarize the results of the estimated parameters of all 10 models in Table 1 using the LSE technique and the values of the six common criteria: MSE, AIC, PRR, PP, SAE, and R 2 . We obtained the six common criteria at t = 1 ,   2 , , 25 from DS1 (Table 2), at t = 1 , 2 , , 17 from DS2 (Table 3), and at cumulative testing CPU hours from DS3 (Table 4). As can be seen in Table 5, when comparing all of the models, the MSE and AIC values are the lowest for the newly proposed model, and the PRR, PP, SAE, and R 2 values are the second best. The MSE and AIC values of the newly proposed model are 7.361, 114.982, respectively, which are significantly less than the values of the other models. In Table 6, when comparing all of the models, all criteria values for the newly proposed model are best. The MSE value of the newly proposed model is 60.623, which is significantly lower than the value of the other models. The AIC, PRR, PP, and SAE values of the newly proposed model are 151.156, 0.043, 0.041, and 98.705, respectively, which are also significantly lower than the other models. The value of R2 is 0.960 and is the closest to 1 for all of the models. In Table 7, when comparing all of the models, all the criteria values for the newly proposed model are best. The MSE value of the newly proposed model is 6.336, which is significantly lower than the value of the other models. The PRR, PP, and SAE values of the newly proposed model are 0.086, 0.066, and 36.250, respectively, which are also significantly lower than the other models. The value of R2 is 0.9940 and is the closest to 1 for all of the models.
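The parameter estimates in Table 5, Table 6 and Table 7 were obtained with a MATLAB LSE program; a rough SciPy equivalent is sketched below. The starting values and bounds are illustrative assumptions, and `t`/`y` stand for the observation times and cumulative failure counts of a dataset such as DS1 in Table 2.

```python
# Sketch: least-squares estimation of the proposed model's parameters,
# i.e., minimize sum_i (m(t_i) - y_i)^2 over (a, alpha, beta, N).
import numpy as np
from scipy.optimize import curve_fit

def m_new(t, a, alpha, beta, N):
    return N * (1.0 - (beta / (beta + a * t - np.log1p(a * t))) ** alpha)

def fit_lse(t, y, p0=(0.3, 0.3, 20.0, 230.0)):
    t, y = np.asarray(t, float), np.asarray(y, float)
    # N is bounded below by the observed total number of failures (assumption).
    bounds = ([1e-6, 1e-6, 1e-6, y.max()], [np.inf, np.inf, np.inf, np.inf])
    params, _ = curve_fit(m_new, t, y, p0=p0, bounds=bounds)
    return dict(zip(("a", "alpha", "beta", "N"), params))

# Example (DS1): fit_lse(np.arange(1, 26), cumulative_failures_from_table_2)
```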
Figure 2, Figure 3 and Figure 4 show the graphs of the mean value functions for all 10 models for DS1, DS2, and DS3, respectively. Figure 5, Figure 6 and Figure 7 show the graphs of the 95% confidence limits of the newly proposed model for DS1, DS2, and DS3. Table A1, Table A2 and Table A3 in Appendix A list the 95% confidence intervals of all 10 NHPP software reliability models for DS1, DS2, and DS3. In addition, the relative error value of the proposed software reliability model confirms its ability to provide more accurate predictions as it remains closer to zero when compared to the other models (Figure 8, Figure 9 and Figure 10).

5.3. Optimal Software Release Time

Factor η captures the effects of the field environmental factors on the system failure rate, as described in Section 2. System testing is commonly carried out in a controlled environment, where we can use a constant factor η equal to 1. The newly proposed model becomes a delayed S-shaped model when η = 1 in (7). Thus, we apply different mean value functions m(t) to the cost model C(T) of (9) when considering the three conditions described below. We apply the cost model to these three conditions using DS1 (Table 2). Using the LSE method, the parameters of the delayed S-shaped model and the newly proposed model are obtained as described in Section 5.2.
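The reduction to the delayed S-shaped form can be verified directly: fixing η = 1 (no environmental uncertainty) in (4) and using ∫₀ᵗ b(x)dx = at − ln(1 + at) gives

```latex
% Sketch: with eta fixed at 1, the proposed model collapses to the
% delayed S-shaped mean value function of Table 1 (with N, a playing
% the roles of a, b).
m(t) = N\left(1 - e^{-\int_0^t b(x)\,dx}\right)
     = N\left(1 - e^{-(at - \ln(1 + at))}\right)
     = N\left(1 - (1 + at)\,e^{-at}\right).
```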
(1) The expected total software cost with a controlled environmental factor (η = 1) is
$$C_1(T) = C_0 + C_1 T + C_2\, m(T)\,\mu_y + C_3\left(1 - R(x \mid T)\right) + C_4\left[m(T + T_w) - m(T)\right]\mu_w,$$
where
$$m(T) = a\left(1 - (1 + bT)e^{-bT}\right), \qquad m(T + T_w) = a\left(1 - \big(1 + b(T + T_w)\big)e^{-b(T + T_w)}\right).$$
(2) The expected total software cost with a random operating environmental factor (η = f(x)) is
$$C_2(T) = C_0 + C_1 T + C_2\, m(T)\,\mu_y + C_3\left(1 - R(x \mid T)\right) + C_4\left[m(T + T_w) - m(T)\right]\mu_w,$$
where
$$m(T) = N\left[1 - \left(\frac{\beta}{\beta + aT - \ln(1 + aT)}\right)^{\alpha}\right], \qquad m(T + T_w) = N\left[1 - \left(\frac{\beta}{\beta + a(T + T_w) - \ln\big(1 + a(T + T_w)\big)}\right)^{\alpha}\right].$$
(3) The expected total software cost between the testing environment (η = 1) and the field environment (η = f(x)) is
$$C_3(T) = C_0 + C_1 T + C_2\, m_1(T)\,\mu_y + C_3\left(1 - R(x \mid T)\right) + C_4\left[m_2(T + T_w) - m_1(T)\right]\mu_w,$$
where
$$m_1(T) = a\left(1 - (1 + bT)e^{-bT}\right), \qquad m_2(T + T_w) = N\left[1 - \left(\frac{\beta}{\beta + a(T + T_w) - \ln\big(1 + a(T + T_w)\big)}\right)^{\alpha}\right].$$
We consider the following coefficients in the cost model for the baseline case:
$$C_0 = 100,\; C_1 = 20,\; C_2 = 50,\; C_3 = 2000,\; C_4 = 400,\; T_w = 10,\; x = 20,\; \mu_y = 0.1,\; \mu_w = 0.2.$$
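The sketch below illustrates how the three conditions can be compared numerically by a simple grid search over T, using the DS1 estimates from Table 5 (delayed S-shaped: a = 124.665, b = 0.356; proposed model: a = 0.277, α = 0.328, β = 17.839, N = 228.909) and the baseline coefficients above. The form of R(x|T) and which mean value function enters it under the third condition are assumptions, so the numbers produced by this sketch are not expected to reproduce Table 8 exactly.

```python
# Sketch: grid-search comparison of C1(T), C2(T), C3(T) under the baseline
# coefficients.  R(x|T) = exp(-[m(T+x) - m(T)]) and its use of the testing-
# environment m(t) in condition 3 are assumptions, not the authors' stated choice.
import numpy as np

def m_dss(t, a=124.665, b=0.356):                    # delayed S-shaped (DS1, Table 5)
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

def m_prop(t, a=0.277, alpha=0.328, beta=17.839, N=228.909):  # proposed model (DS1)
    return N * (1.0 - (beta / (beta + a * t - np.log1p(a * t))) ** alpha)

C0, C1, C2, C3, C4 = 100.0, 20.0, 50.0, 2000.0, 400.0
Tw, x, mu_y, mu_w = 10.0, 20.0, 0.1, 0.2

def cost(T, m_test, m_field):
    R = np.exp(-(m_test(T + x) - m_test(T)))
    return (C0 + C1 * T + C2 * m_test(T) * mu_y
            + C3 * (1.0 - R) + C4 * (m_field(T + Tw) - m_test(T)) * mu_w)

T = np.linspace(1.0, 80.0, 7901)                     # candidate release times
for label, m_t, m_f in [("condition 1", m_dss, m_dss),
                        ("condition 2", m_prop, m_prop),
                        ("condition 3", m_dss, m_prop)]:
    c = cost(T, m_t, m_f)
    print(label, "T* =", round(float(T[np.argmin(c)]), 1),
          "min cost =", round(float(c.min()), 2))
```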
The results of the baseline case are listed in Table 8, and the expected total cost for the three conditions above is 1338.70, 2398.24, and 2263.33, respectively. For the second condition, the expected total cost and the optimal release time are high. The expected total cost is the lowest for the first condition, and the optimal release time is shortest for the third condition.
To study the impact of different coefficients on the expected total cost and the optimal release time, we vary some of the coefficients and then compare them with the baseline case. First, we evaluate the impact of the warranty period on the expected total cost by changing the value of the corresponding warranty time and comparing the optimal release times for each condition. Here, we change the values of Tw from 10 h to 2, 5, and 15 h, and the values of the other parameters remain unchanged. Regardless of the warranty period, the optimal release time for the third condition is the shortest, and the expected total cost for the first condition is the lowest overall. Figure 11 shows the graph of the expected total cost for the baseline case. Figure 12, Figure 13 and Figure 14 show the graphs of the expected total cost subject to the warranty period for the three conditions.
Next, we examine the impact of the cost coefficients, C1, C2, C3, and C4 on the expected total cost by changing their values and comparing the optimal release times. Without loss of generality, we change only the values of C2, C3, and C4, and keep the values of the other parameters C0 and C1 unchanged, because different values of C0 and C1 will certainly increase the expected total cost. When we change the values of C2 from 50 to 25 and 100, the optimal release time is only changed significantly for the second condition. As can be seen from Table 9, the optimal release time T* is 37.5 when the value of C2 is 25, and 29.1 when the value of C2 is 100. When we change the value of C3 from 2000 to 500 and 4000, the optimal release time is only changed significantly for the first condition. As Table 10 shows, the optimal release time T* is 16.5 when the value of C3 is 500, and 14.6 when the value of C3 is 4000. When we change the value of C4 from 400 to 200 and 1000, the optimal release time is changed for all of the conditions. As can be seen from Table 11, the optimal release time T* is 14.3 for the first condition when the value of C4 is 200, and 16.3 when the value of C4 is 1000. In addition, the optimal release time T* is 20.0 for the second condition when the value of C4 is 200, and 61.0 when the value of C4 is 1000. The optimal release time T* is 11.6 for the third condition when the value of C4 is 200, and 12.8 when the value of C4 is 1000. Thus, the second condition has a much greater variation in optimal release time than the other conditions. As a result, we can confirm that the cost model of the first condition does not reflect the influence of the operating environment, and that the cost model of the second condition does not reflect the influence of the test environment. Figure 15 shows the graph of the expected total cost according to the cost coefficient C2 in the 2nd condition. Figure 16, Figure 17 and Figure 18 show the graphs of the expected total cost according to cost coefficient C4 in the three conditions.

6. Conclusions

Existing well-known NHPP software reliability models have been developed for a testing environment. However, a testing environment differs from the actual operating environment, so we considered random operating environments. In this paper, we discussed a new NHPP software reliability model with an S-shaped growth curve that accounts for the randomness of the actual operating environment. Table 5, Table 6 and Table 7 summarize the estimated parameters of all ten models, obtained using the LSE technique, and six common criteria (MSE, AIC, PRR, PP, SAE, and R2) for the DS1, DS2, and DS3 datasets. As can be seen from Table 5, Table 6 and Table 7, the newly proposed model displays a better overall fit than all of the other models compared, particularly in the case of DS2. In addition, we provided optimal release policies for various environments to determine when the total software system cost is minimized. Using a cost model for a given environment is beneficial, as it provides a means of determining when to stop the software testing process. In this paper, faults are assumed to be removed immediately when a software failure is detected, and the correction process is assumed not to introduce new faults. Revisiting these assumptions is clearly worthwhile, and we hope to present new results on this aspect in the near future.

Acknowledgements

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2015R1D1A1A01060050). We are pleased to thank the Editor and the Referees for their useful suggestions.

Author Contributions

The three authors equally contributed to the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. 95% Confidence interval of all 10 models (DS1).
ModelTime Index123456789
GOMLCL9.32921.58632.81042.82351.67159.45266.27772.25377.479
m ^ ( t ) 17.53732.81446.12157.71367.81176.60784.26990.94496.758
UCL25.74544.04159.43172.60283.95093.761102.261109.635116.037
DSMLCL1.35211.19224.28937.79250.27161.12070.19177.57183.458
m ^ ( t ) 6.25319.94536.05851.91466.22078.48488.64496.861103.387
UCL11.15428.69847.82766.03582.17095.847107.097116.150123.316
ISMLCL9.32821.58432.80842.82051.66859.44966.27472.25077.476
m ^ ( t ) 17.53532.81146.11857.70967.80776.60384.26690.94196.755
UCL25.74344.03859.42872.59983.94793.758102.258109.631116.034
YIDMLCL14.26328.93640.44949.46656.63762.46867.33371.50675.185
m ^ ( t ) 23.83141.57354.98265.30573.43379.99885.45190.11194.209
UCL33.39954.21169.51581.14490.22897.528103.568108.717113.232
PNZMLCL14.19128.84040.36049.40056.59862.45567.34371.53475.227
m ^ ( t ) 23.74141.46054.87965.23073.38979.98485.46290.14394.255
UCL33.29154.08069.39981.05990.17997.513103.581108.752113.283
PZMLCL9.32921.58632.81042.82351.67159.45266.27772.25377.479
m ^ ( t ) 17.53732.81346.12157.71367.81176.60784.26990.94496.758
UCL25.74544.04159.43172.60283.95093.761102.261109.635116.037
DPMLCL55.30655.64956.22357.02858.06859.34460.85962.61464.611
m ^ ( t ) 71.92972.31672.96473.87475.04776.48578.18980.16282.403
UCL88.55188.98489.70690.72092.02693.62695.52097.710100.195
TCMLCL17.97430.40839.98147.85154.56160.41965.62170.30274.555
m ^ ( t ) 28.42343.30654.44363.46571.08677.69583.53588.76893.508
UCL38.87256.20468.90579.08087.61194.971101.449107.234112.461
3PFDMLCL12.01125.45836.75146.22754.25261.12367.06572.25176.816
m ^ ( t ) 20.99137.45250.70861.61170.73778.48785.15190.94296.021
UCL29.97049.44764.66576.99587.22195.851103.237109.633115.227
NewLCL18.35830.52639.95047.74554.42660.28365.50170.20874.492
m ^ ( t ) 28.89343.44454.40763.34470.93377.54283.40188.66393.438
UCL39.42856.36368.86478.94487.44094.801101.300107.118112.384
ModelTime Index1011121314151617
GOMLCL82.04586.03389.51492.55295.20197.51299.527101.284
m ^ ( t ) 101.823106.235110.078113.426116.342118.882121.095123.023
UCL121.600126.436130.641134.300137.483140.253142.663144.762
DSMLCL88.08391.67294.43196.53498.12799.326100.225100.895
m ^ ( t ) 108.498112.457115.494117.807119.557120.874121.861122.596
UCL128.914133.241136.557139.080140.988142.423143.497144.298
ISMLCL82.04386.03189.51292.55095.20097.51199.526101.283
m ^ ( t ) 101.820106.232110.076113.424116.340118.881121.094123.022
UCL121.597126.434130.639134.298137.481140.251142.662144.761
YIDMLCL78.51281.58884.48587.25789.93992.55895.13397.678
m ^ ( t ) 97.905101.316104.523107.586110.546113.433116.267119.064
UCL117.298121.044124.561127.916131.153134.307137.401140.451
PNZMLCL78.56281.64284.54087.31089.98792.60095.16897.703
m ^ ( t ) 97.960101.376104.584107.645110.600113.479116.305119.092
UCL117.359121.110124.628127.980131.212134.358137.442140.481
PZMLCL82.04586.03389.51492.55295.20197.51299.527101.284
m ^ ( t ) 101.823106.235110.078113.426116.342118.882121.095123.023
UCL121.600126.436130.641134.300137.483140.253142.663144.762
DPMLCL66.85569.34672.08775.08178.33181.83885.60689.636
m ^ ( t ) 84.91687.70190.75994.09397.704101.593105.762110.212
UCL102.977106.055109.432113.105117.078121.348125.918130.788
TCMLCL78.45382.05085.38788.50091.41694.15796.74499.191
m ^ ( t ) 97.840101.828105.521108.959112.174115.193118.038120.726
UCL117.227121.606125.654129.418132.933136.229139.332142.261
3PFDMLCL80.86384.47587.71890.64793.30395.72497.94099.974
m ^ ( t ) 100.512104.512108.096111.326114.253116.917119.352121.586
UCL120.162124.549128.473132.006135.203138.110140.764143.198
NewLCL78.42382.05185.41788.55591.49194.24896.84499.296
m ^ ( t ) 97.806101.829105.554109.019112.257115.293118.148120.841
UCL117.189121.607125.690129.484133.024136.338139.452142.387
ModelTime Index1819202122232425
GOMLCL102.8153104.15105.3133106.3271107.2106107.9804108.6512109.2357
m ^ ( t ) 124.7022126.165127.4392128.5491129.516130.3582131.0919131.731
UCL146.5892148.1799149.565150.7711151.8214152.736153.5326154.2263
DSMLCL101.3931101.7621102.0346102.2353102.3828102.4909102.57102.6278
m ^ ( t ) 123.1427123.5475123.8463124.0664124.2281124.3466124.4333124.4967
UCL144.8924145.3328145.658145.8975146.0734146.2023146.2967146.3656
ISMLCL102.8143104.1492105.3126106.3265107.21107.9799108.6508109.2353
m ^ ( t ) 124.7012126.164127.4383128.5484129.5154130.3577131.0914131.7306
UCL146.588148.1789149.5641150.7703151.8207152.7354153.5321154.2259
YIDMLCL100.2015102.7107105.2103107.7037110.1934112.6811115.1679117.6547
m ^ ( t ) 121.8354124.5875127.3263130.0555132.7779135.4956138.2097140.9215
UCL143.4693146.4644149.4423152.4073155.3625158.31161.2516164.1883
PNZMLCL100.2169102.7154105.2037107.6855110.1632112.6385115.1129117.587
m ^ ( t ) 121.8523124.5927127.3191130.0356132.7449135.4491138.1497140.8477
UCL143.4877146.47149.4345152.3856155.3266158.2597161.1866164.1084
PZMLCL102.8153104.15105.3133106.3271107.2106107.9804108.6512109.2357
m ^ ( t ) 124.7022126.165127.4392128.5491129.516130.3582131.0919131.731
UCL146.5892148.1799149.565150.7711151.8214152.736153.5326154.2263
DPMLCL93.9312898.49414103.3268108.4316113.8108119.4664125.4007131.6157
m ^ ( t ) 114.9445119.961125.2629130.8518136.7288142.8956149.3535156.1038
UCL135.9577141.4278147.199153.2719159.6469166.3248173.3063180.5919
TCMLCL101.5124103.7205105.8252107.8352109.7585111.6019113.3714115.0726
m ^ ( t ) 123.2736125.6944127.9996130.1994132.3026134.3169136.2493138.1057
UCL145.0349147.6682150.174152.5635154.8467157.032159.1271161.1389
3PFDMLCL101.8497103.5836105.1914106.6864108.0801109.3824110.6019111.7464
m ^ ( t ) 123.6436125.5443127.3057128.9424130.4673131.8914133.2244134.4748
UCL145.4374147.505149.4199151.1983152.8544154.4004155.8469157.2032
NewLCL101.6161103.8169105.9085107.8999109.799111.6129113.3476115.009
m ^ ( t ) 123.3873125.8128.0908130.2702132.3469134.3289136.2233138.0364
UCL145.1586147.783150.2732152.6404154.8947157.045159.099161.0638
Table A2. 95% Confidence interval of all 10 models (DS2).
ModelTime Index123456789
GOMLCL49.14788.100114.753132.788144.944153.123158.620162.312164.791
m ^ ( t ) 64.942108.518137.757157.376170.540179.373185.300189.276191.945
UCL80.737128.935160.761181.963196.135205.623211.980216.241219.099
DSMLCL29.75481.610119.319141.607153.581159.671162.660164.090164.762
m ^ ( t ) 42.537101.340142.735166.930179.867186.433189.651191.191191.914
UCL55.320121.071166.151192.253206.153213.194216.643218.291219.066
ISMLCL49.13888.084114.731132.764144.918153.095158.591162.282164.761
m ^ ( t ) 64.931108.500137.733157.349170.511179.343185.269189.245191.913
UCL80.725128.915160.736181.935196.104205.590211.946216.207219.065
YIDMLCL51.98990.820116.125132.604143.452150.736155.770159.386162.108
m ^ ( t ) 68.171111.517139.254157.176168.926176.797182.228186.125189.057
UCL84.354132.215162.383181.748194.400202.858208.686212.864216.007
PNZMLCL51.94690.780116.107132.609143.476150.772155.811159.427162.145
m ^ ( t ) 68.123111.474139.234157.181168.952176.835182.272186.169189.097
UCL84.300132.168162.361181.753194.428202.899208.733212.912216.049
PZMLCL49.06488.017114.677132.724144.892153.080158.585162.284164.769
m ^ ( t ) 64.848108.425137.674157.306170.483179.327185.263189.247191.921
UCL80.631128.834160.672181.888196.074205.573211.940216.210219.074
DPMLCL123.469124.167125.336126.981129.105131.715134.816138.411142.507
m ^ ( t ) 147.252148.012149.283151.071153.379156.212159.574163.470167.904
UCL171.036171.857173.230175.161177.652180.709184.333188.529193.301
TCMLCL60.74993.551114.901129.743140.446148.355154.305158.844162.346
m ^ ( t ) 78.066114.526137.919154.071165.673174.225180.648185.542189.313
UCL95.384135.501160.936178.399190.901200.096206.991212.239216.281
3PFDMLCL57.54993.474115.850130.807141.307148.941154.636158.970162.319
m ^ ( t ) 74.461114.441138.954155.226166.605174.858181.005185.677189.284
UCL91.374135.408162.058179.645191.903200.776207.374212.384216.249
NewLCL62.34693.042114.265129.443140.426148.466154.433158.930162.375
m ^ ( t ) 79.861113.965137.225153.745165.652174.345180.786185.634189.345
UCL97.377134.889160.184178.048190.878200.224207.138212.338216.315
ModelTime index10111213.14151617
GOMLCL166.455167.572168.321168.824169.162169.389169.541169.643
m ^ ( t ) 193.735194.937195.743196.284196.647196.890197.054197.163
UCL221.016222.302223.164223.743224.132224.392224.567224.684
DSMLCL165.073165.216165.280165.309165.322165.328165.331165.332
m ^ ( t ) 192.249192.402192.472192.503192.517192.523192.526192.527
UCL219.424219.588219.663219.696219.711219.718219.721219.722
ISMLCL166.425167.542168.291168.794169.131169.358169.510169.612
m ^ ( t ) 193.703194.904195.710196.251196.614196.857197.021197.130
UCL220.981222.267223.129223.708224.096224.357224.532224.649
YIDMLCL164.269166.076167.661169.107170.464171.767173.035174.281
m ^ ( t ) 191.383193.328195.033196.587198.047199.446200.809202.147
UCL218.498220.580222.405224.068225.629227.126228.583230.014
PNZMLCL164.298166.095167.669169.101170.445171.733172.986174.217
m ^ ( t ) 191.415193.349195.041196.581198.025199.410200.756202.078
UCL218.531220.602222.413224.061225.606227.087228.526229.940
PZMLCL166.437167.557168.309168.814169.152169.380169.532169.635
m ^ ( t ) 193.716194.921195.729196.272196.636196.881197.045197.155
UCL220.995222.285223.150223.731224.120224.382224.558224.675
DPMLCL147.109152.223157.853164.006170.687177.901185.654193.951
m ^ ( t ) 172.880178.401184.474191.101198.286206.034214.349223.235
UCL198.650204.580211.094218.195225.885234.167243.045252.519
TCMLCL165.073167.214168.906170.251171.327172.192172.889173.455
m ^ ( t ) 192.249194.552196.371197.818198.974199.903200.653201.260
UCL219.424221.890223.837225.384226.621227.614228.416229.065
3PFDMLCL164.940167.013168.668170.001171.083171.968172.697173.303
m ^ ( t ) 192.105194.335196.115197.548198.711199.662200.446201.097
UCL219.270221.658223.563225.096226.340227.357228.195228.891
NewLCL165.057167.175168.872170.249171.380172.319173.107173.773
m ^ ( t ) 192.231194.510196.335197.815199.031200.040200.886201.602
UCL219.405221.845223.797225.381226.682227.761228.666229.431
Table A3. 95% Confidence interval of all 10 models (DS3).
ModelTime Index51996814301893249030583625442252185823
GOMLCL3.6419.40715.37621.17628.27434.58940.46448.02254.80359.482
m ^ ( t ) 9.76717.63925.21832.31840.79148.19655.00063.66071.35976.641
UCL15.89225.87035.06043.46053.30961.80369.53579.29887.91693.799
DSMLCL −0.4093.0598.74415.54024.80233.36641.23350.79158.51463.256
m ^ ( t ) 2.9668.91016.77025.42236.67146.77055.88566.81175.55080.883
UCL6.34214.76024.79635.30548.54060.17470.53782.83292.58698.510
ISMLCL3.6419.40715.37621.17628.27334.58940.46448.02254.80259.482
m ^ ( t ) 9.76717.63925.21832.31840.79148.19655.00063.66071.35976.641
UCL15.89225.87035.06043.46053.30961.80369.53579.29887.91693.799
YIDMLCL3.6319.38415.33721.12228.20234.50340.36747.91654.69859.387
m ^ ( t ) 9.75117.60825.17132.25440.70748.09654.88863.53971.24176.534
UCL15.87125.83235.00443.38553.21361.68969.40979.16287.78493.680
PNZMLCL3.7019.50615.49321.29428.37334.65740.49347.99354.72459.378
m ^ ( t ) 9.85317.76725.36332.46040.90948.27555.03363.62771.27176.523
UCL16.00526.02835.23443.62753.44561.89369.57379.26187.81793.668
PZMLCL3.4589.13015.08520.93328.15034.60840.62748.35455.23859.940
m ^ ( t ) 9.49917.27724.85632.02440.64648.21855.18764.03971.85277.157
UCL15.53925.42434.62843.11653.14161.82869.74779.72388.46694.373
DPMLCL26.40726.67827.16127.87329.16630.82732.94336.75641.62846.096
m ^ ( t ) 38.58138.90339.47540.31841.84543.79946.27550.71356.34061.461
UCL50.75451.12851.78952.76354.52356.77059.60864.67171.05176.827
TCMLCL5.92911.85117.43622.63428.87134.40639.60546.44852.81657.386
m ^ ( t ) 12.99320.78727.76334.07541.49747.98254.00961.86369.11074.278
UCL20.05829.72338.09045.51654.12261.55968.41377.27985.40491.170
3PFDMLCL3.9909.96216.01621.80428.78934.94140.63147.93854.52259.105
m ^ ( t ) 10.27118.36126.01233.07641.40048.60555.19163.56471.04176.216
UCL16.55226.75936.00844.34854.01162.27069.75279.19087.56193.327
NewLCL5.98112.09617.80023.07629.37534.94640.16847.02753.40357.974
m ^ ( t ) 13.06621.09928.21034.60642.09148.61154.65862.52569.77474.941
UCL20.15130.10238.62146.13554.80762.27669.14878.02386.14691.909
ModelTime index65397083748778468205856489239282964110,000
GOMLCL64.53568.04970.48972.54374.49576.35078.11279.78681.37682.886
m ^ ( t ) 82.31886.25188.97791.26793.44195.50497.46199.319101.081102.754
UCL100.101104.454107.465109.992112.387114.658116.810118.851120.786122.621
DSMLCL67.77370.52672.24873.57674.73675.74876.62777.39178.05378.626
m ^ ( t ) 85.94389.01890.93992.41893.71094.83495.81296.66097.39598.031
UCL104.112107.510109.629111.260112.683113.921114.997115.930116.738117.437
ISMLCL64.53568.04970.48972.54374.49576.35078.11279.78681.37682.886
m ^ ( t ) 82.31886.25188.97791.26793.44195.50497.46199.319101.081102.754
UCL100.100104.454107.465109.992112.387114.658116.810118.851120.786122.621
YIDMLCL64.46067.99670.45772.53274.50876.38978.18179.88781.51183.059
m ^ ( t ) 82.23486.19288.94191.25593.45595.54797.53799.430101.231102.945
UCL100.007104.388107.425109.978112.403114.706116.894118.974120.951122.831
PNZMLCL64.41767.93670.38972.46274.44076.32878.13179.85481.50083.074
m ^ ( t ) 82.18686.12588.86591.17793.38095.48097.48399.394101.218102.962
UCL99.954104.314107.342109.892112.320114.631116.834118.934120.937122.850
PZMLCL64.95368.38770.74272.70474.54776.27977.90579.43080.86082.201
m ^ ( t ) 82.78686.62989.26091.44793.49995.42597.23198.924100.510101.995
UCL100.619104.872107.777110.189112.451114.571116.558118.418120.160121.789
DPMLCL52.28857.68162.08366.28770.77175.54180.60085.95491.60797.562
m ^ ( t ) 68.51174.61079.56684.28089.29294.604100.222106.148112.385118.937
UCL84.73491.54097.049102.274107.812113.668119.843126.341133.163140.312
TCMLCL62.52866.25768.93671.25573.51975.73077.89180.00382.06984.090
m ^ ( t ) 80.06584.24787.24389.83292.35594.81597.21699.560101.849104.086
UCL97.603102.237105.550108.408111.190113.900116.541119.116121.629124.082
3PFDMLCL64.11567.65170.13872.25774.29576.25678.14479.96381.71783.410
m ^ ( t ) 81.84785.80688.58690.94993.21895.39997.49799.515101.459103.333
UCL99.578103.961107.033109.641112.142114.543116.849119.067121.202123.257
NewLCL63.11566.84469.52271.84074.10476.31578.47680.59082.65784.680
m ^ ( t ) 80.72584.90387.89790.48493.00695.46597.866100.210102.500104.739
UCL98.334102.963106.272109.128111.907114.615117.255119.830122.343124.797

References

  1. Pham, T.; Pham, H. A generalized software reliability model with stochastic fault-detection rate. Ann. Oper. Res. 2017, 1–11. [Google Scholar] [CrossRef]
  2. Teng, X.; Pham, H. A new methodology for predicting software reliability in the random field environments. IEEE Trans. Reliab. 2006, 55, 458–468. [Google Scholar] [CrossRef]
  3. Pham, H. A new software reliability model with Vtub-Shaped fault detection rate and the uncertainty of operating environments. Optimization 2014, 63, 1481–1490. [Google Scholar] [CrossRef]
  4. Chang, I.H.; Pham, H.; Lee, S.W.; Song, K.Y. A testing-coverage software reliability model with the uncertainty of operation environments. Int. J. Syst. Sci. Oper. Logist. 2014, 1, 220–227. [Google Scholar]
  5. Inoue, S.; Ikeda, J.; Yamada, S. Bivariate change-point modeling for software reliability assessment with uncertainty of testing-environment factor. Ann. Oper. Res. 2016, 244, 209–220. [Google Scholar] [CrossRef]
  6. Okamura, H.; Dohi, T. Phase-type software reliability model: Parameter estimation algorithms with grouped data. Ann. Oper. Res. 2016, 244, 177–208. [Google Scholar] [CrossRef]
  7. Song, K.Y.; Chang, I.H.; Pham, H. A Three-parameter fault-detection software reliability model with the uncertainty of operating environments. J. Syst. Sci. Syst. Eng. 2017, 26, 121–132. [Google Scholar] [CrossRef]
  8. Song, K.Y.; Chang, I.H.; Pham, H. A software reliability model with a Weibull fault detection rate function subject to operating environments. Appl. Sci. 2017, 7, 983. [Google Scholar] [CrossRef]
  9. Li, Q.; Pham, H. NHPP software reliability model considering the uncertainty of operating environments with imperfect debugging and testing coverage. Appl. Math. Model. 2017, 51, 68–85. [Google Scholar] [CrossRef]
  10. Pham, H.; Nordmann, L.; Zhang, X. A general imperfect software debugging model with S-shaped fault detection rate. IEEE Trans. Reliab. 1999, 48, 169–175. [Google Scholar] [CrossRef]
  11. Pham, H. A generalized fault-detection software reliability model subject to random operating environments. Vietnam J. Comput. Sci. 2016, 3, 145–150. [Google Scholar] [CrossRef]
  12. Akaike, H. A new look at statistical model identification. IEEE Trans. Autom. Control 1974, 19, 716–719. [Google Scholar] [CrossRef]
  13. Pham, H. System Software Reliability; Springer: London, UK, 2006. [Google Scholar]
  14. Goel, A.L.; Okumoto, K. Time dependent error detection rate model for software reliability and other performance measures. IEEE Trans. Reliab. 1979, 28, 206–211. [Google Scholar] [CrossRef]
  15. Yamada, S.; Ohba, M.; Osaki, S. S-shaped reliability growth modeling for software fault detection. IEEE Trans. Reliab. 1983, 32, 475–484. [Google Scholar] [CrossRef]
  16. Ohba, M. Inflexion S-shaped software reliability growth models. In Stochastic Models in Reliability Theory; Osaki, S., Hatoyama, Y., Eds.; Springer: Berlin, Germany, 1984; pp. 144–162. [Google Scholar]
  17. Yamada, S.; Tokuno, K.; Osaki, S. Imperfect debugging models with fault introduction rate for software reliability assessment. Int. J. Syst. Sci. 1992, 23, 2241–2252. [Google Scholar] [CrossRef]
  18. Pham, H.; Zhang, X. An NHPP software reliability models and its comparison. Int. J. Reliab. Qual. Saf. Eng. 1997, 4, 269–282. [Google Scholar] [CrossRef]
  19. Pham, H. Software Reliability Models with Time Dependent Hazard Function Based on Bayesian Approach. Int. J. Autom. Comput. 2007, 4, 325–328. [Google Scholar] [CrossRef]
  20. Li, X.; Xie, M.; Ng, S.H. Sensitivity analysis of release time of software reliability models incorporating testing effort with multiple change-points. Appl. Math. Model. 2010, 34, 3560–3570. [Google Scholar] [CrossRef]
  21. Pham, H. Software reliability and cost models: Perspectives, comparison, and practice. Eur. J. Oper. Res. 2003, 149, 475–489. [Google Scholar] [CrossRef]
  22. Pham, H.; Zhang, X. NHPP software reliability and cost models with testing coverage. Eur. J. Oper. Res. 2003, 145, 443–454. [Google Scholar] [CrossRef]
  23. Kimura, M.; Toyota, T.; Yamada, S. Economic analysis of software release problems with warranty cost and reliability requirement. Reliab. Eng. Syst. Saf. 1999, 66, 49–55. [Google Scholar] [CrossRef]
  24. Sgarbossa, F.; Pham, H. A cost analysis of systems subject to random field environments and reliability. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2010, 40, 429–437. [Google Scholar] [CrossRef]
  25. Musa, J.D.; Iannino, A.; Okumoto, K. Software Reliability: Measurement, Prediction, and Application; McGraw-Hill: New York, NY, USA, 1987. [Google Scholar]
  26. Stringfellow, C.; Andrews, A.A. An empirical method for selecting software reliability growth models. Empir. Softw. Eng. 2002, 7, 319–343. [Google Scholar] [CrossRef]
  27. Wood, A. Predicting software reliability. IEEE Comput. Soc. 1996, 11, 69–77. [Google Scholar] [CrossRef]
Figure 1. System cost model infrastructure.
Figure 2. Mean value function of the ten models for DS1.
Figure 3. Mean value function of the ten models for DS2.
Figure 4. Mean value function of the ten models for DS3.
Figure 5. 95% confidence limits of the newly proposed model for DS1.
Figure 6. 95% confidence limits of the newly proposed model for DS2.
Figure 7. 95% confidence limits of the newly proposed model for DS3.
Figure 8. Relative error of the ten models for DS1.
Figure 9. Relative error of the ten models for DS2.
Figure 10. Relative error of the ten models for DS3.
Figure 11. Expected total cost for the baseline case.
Figure 12. Expected total cost subject to the warranty period for the 1st condition.
Figure 13. Expected total cost subject to the warranty period for the 2nd condition.
Figure 14. Expected total cost subject to the warranty period for the 3rd condition.
Figure 15. Expected total cost according to cost coefficient C2 for the 2nd condition.
Figure 16. Expected total cost according to cost coefficient C4 for the 1st condition.
Figure 17. Expected total cost according to cost coefficient C4 for the 2nd condition.
Figure 18. Expected total cost according to cost coefficient C4 for the 3rd condition.
Table 1. NHPP software reliability models.

No. | Model | m(t)
1 | GO Model [14] | $m(t) = a\left(1 - e^{-bt}\right)$
2 | Delayed S-shaped Model [15] | $m(t) = a\left(1 - (1 + bt)e^{-bt}\right)$
3 | Inflection S-shaped Model [16] | $m(t) = \dfrac{a\left(1 - e^{-bt}\right)}{1 + \beta e^{-bt}}$
4 | Yamada Imperfect Debugging Model [17] | $m(t) = a\left[1 - e^{-bt}\right]\left[1 - \dfrac{\alpha}{b}\right] + \alpha a t$
5 | PNZ Model [10] | $m(t) = \dfrac{a\left[1 - e^{-bt}\right]\left[1 - \frac{\alpha}{b}\right] + \alpha a t}{1 + \beta e^{-bt}}$
6 | PZ Model [18] | $m(t) = \dfrac{(c + a)\left[1 - e^{-bt}\right] - \frac{ab}{b - \alpha}\left(e^{-\alpha t} - e^{-bt}\right)}{1 + \beta e^{-bt}}$
7 | Dependent Parameter Model [19] | $m(t) = m_0\left(\dfrac{\gamma t + 1}{\gamma t_0 + 1}\right)e^{-\gamma(t - t_0)} + \alpha(\gamma t + 1)\left(\gamma t - 1 + (1 - \gamma t_0)e^{-\gamma(t - t_0)}\right)$
8 | Testing Coverage Model [4] | $m(t) = N\left[1 - \left(\dfrac{\beta}{\beta + (at)^b}\right)^{\alpha}\right]$
9 | Three-parameter Model [7] | $m(t) = N\left[1 - \dfrac{\beta}{\beta + \frac{a}{b}\ln\left(\frac{(1 + c)e^{bt}}{1 + ce^{bt}}\right)}\right]$
10 | Proposed New Model | $m(t) = N\left[1 - \left(\dfrac{\beta}{\beta + at - \ln(1 + at)}\right)^{\alpha}\right]$
Table 2. Dataset #1 (DS1): real time command and control system (RTC&CS) data set.

Hour Index | Failures | Cumulative Failures | Hour Index | Failures | Cumulative Failures
1 | 27 | 27 | 14 | 5 | 111
2 | 16 | 43 | 15 | 5 | 116
3 | 11 | 54 | 16 | 6 | 122
4 | 10 | 64 | 17 | 0 | 122
5 | 11 | 75 | 18 | 5 | 127
6 | 7 | 83 | 19 | 1 | 128
7 | 2 | 84 | 20 | 1 | 129
8 | 5 | 89 | 21 | 2 | 131
9 | 3 | 92 | 22 | 1 | 132
10 | 1 | 93 | 23 | 2 | 134
11 | 4 | 97 | 24 | 1 | 135
12 | 7 | 104 | 25 | 1 | 136
13 | 2 | 106 | - | - | -
Table 3. DS2: medical record system (MRS) data set.

Week Index | Failures | Cumulative Failures | Week Index | Failures | Cumulative Failures
1 | 90 | 90 | 10 | 0 | 190
2 | 17 | 107 | 11 | 2 | 192
3 | 19 | 126 | 12 | 0 | 192
4 | 19 | 145 | 13 | 0 | 192
5 | 26 | 171 | 14 | 0 | 192
6 | 17 | 188 | 15 | 11 | 203
7 | 1 | 189 | 16 | 0 | 203
8 | 1 | 190 | 17 | 1 | 204
9 | 0 | 190 | - | - | -
Table 4. DS3: Tandom Computers (TDC) data set.

Time Index (CPU hours) | Cumulative Failures | Time Index (CPU hours) | Cumulative Failures | Time Index (CPU hours) | Cumulative Failures
519 | 16 | 4422 | 58 | 8205 | 96
968 | 24 | 5218 | 69 | 8564 | 98
1430 | 27 | 5823 | 75 | 8923 | 99
1893 | 33 | 6539 | 81 | 9282 | 100
2490 | 41 | 7083 | 86 | 9641 | 100
3058 | 49 | 7487 | 90 | 10,000 | 100
3625 | 54 | 7846 | 93 | - | -
Table 5. Model parameter estimation and comparison criteria from the RTC&CS data set (DS1). Least-squares estimates (LSEs); mean squared error (MSE); Akaike's information criterion (AIC); predictive ratio risk (PRR); predictive power (PP); sum of absolute errors (SAE); correlation index of the regression curve equation ($R^2$).

Model | LSEs | MSE | AIC | PRR | PP | SAE | $R^2$
GOM | $\hat{a} = 136.050,\ \hat{b} = 0.138$ | 33.822 | 121.878 | 0.479 | 0.262 | 118.530 | 0.972
DSM | $\hat{a} = 124.665,\ \hat{b} = 0.356$ | 134.582 | 210.287 | 12.787 | 1.181 | 239.335 | 0.889
ISM | $\hat{a} = 136.050,\ \hat{b} = 0.138,\ \hat{\beta} = 0.0001$ | 35.363 | 123.878 | 0.479 | 0.262 | 118.532 | 0.972
YIDM | $\hat{a} = 81.252,\ \hat{b} = 0.340,\ \hat{\alpha} = 0.0333$ | 9.435 | 116.403 | 0.035 | 0.031 | 60.842 | 0.993
PNZM | $\hat{a} = 81.562,\ \hat{b} = 0.337,\ \hat{\alpha} = 0.033,\ \hat{\beta} = 0.00$ | 9.888 | 118.388 | 0.037 | 0.032 | 60.877 | 0.993
PZM | $\hat{a} = 0.01,\ \hat{b} = 0.138,\ \hat{\alpha} = 800.0,\ \hat{\beta} = 0.00,\ \hat{c} = 136.04$ | 38.895 | 127.878 | 0.479 | 0.262 | 118.530 | 0.972
DPM | $\hat{\alpha} = 28650,\ \hat{\gamma} = 0.003,\ t_0 = 0.00,\ m_0 = 71.8$ | 274.911 | 382.143 | 0.857 | 3.568 | 304.212 | 0.792
TCM | $\hat{a} = 0.000035,\ \hat{b} = 0.734,\ \hat{\alpha} = 0.29,\ \hat{\beta} = 0.002,\ \hat{N} = 427$ | 7.640 | 116.932 | 0.019 | 0.019 | 47.304 | 0.995
3PFDM | $\hat{a} = 1.696,\ \hat{b} = 0.001,\ \hat{c} = 6.808,\ \hat{\beta} = 1.574,\ \hat{N} = 173.030$ | 17.827 | 119.523 | 0.137 | 0.100 | 81.313 | 0.987
New Model | $\hat{a} = 0.277,\ \hat{\alpha} = 0.328,\ \hat{\beta} = 17.839,\ \hat{N} = 228.909$ | 7.361 | 114.982 | 0.022 | 0.022 | 47.869 | 0.994
Table 6. Model parameter estimation and comparison criteria from the MRS data set (DS2).

Model | LSEs | MSE | AIC | PRR | PP | SAE | $R^2$
GOM | $\hat{a} = 197.387,\ \hat{b} = 0.399$ | 80.678 | 184.331 | 0.170 | 0.101 | 104.403 | 0.939
DSM | $\hat{a} = 192.528,\ \hat{b} = 0.882$ | 232.628 | 331.857 | 1.291 | 0.333 | 142.544 | 0.823
ISM | $\hat{a} = 197.354,\ \hat{b} = 0.399,\ \hat{\beta} = 0.000001$ | 86.440 | 186.334 | 0.171 | 0.101 | 104.370 | 0.939
YIDM | $\hat{a} = 182.934,\ \hat{b} = 0.464,\ \hat{\alpha} = 0.0071$ | 78.837 | 157.825 | 0.128 | 0.087 | 100.617 | 0.944
PNZM | $\hat{a} = 183.124,\ \hat{b} = 0.463,\ \hat{\alpha} = 0.007,\ \hat{\beta} = 0.00$ | 84.902 | 159.873 | 0.128 | 0.087 | 100.608 | 0.944
PZM | $\hat{a} = 195.990,\ \hat{b} = 0.3987,\ \hat{\alpha} = 1000.00,\ \hat{\beta} = 0.00,\ \hat{c} = 1.390$ | 100.989 | 190.332 | 0.172 | 0.102 | 104.354 | 0.939
DPM | $\hat{\alpha} = 26124.0,\ \hat{\gamma} = 0.0044,\ t_0 = 0.00,\ m_0 = 147.00$ | 769.282 | 480.341 | 0.415 | 0.712 | 334.128 | 0.494
TCM | $\hat{a} = 0.053,\ \hat{b} = 0.774,\ \hat{\alpha} = 181.0,\ \hat{\beta} = 38.6,\ \hat{N} = 204.1$ | 72.283 | 158.933 | 0.052 | 0.048 | 103.196 | 0.956
3PFDM | $\hat{a} = 0.028,\ \hat{b} = 0.210,\ \hat{c} = 9.924,\ \hat{\beta} = 0.005,\ \hat{N} = 206.387$ | 81.090 | 163.797 | 0.073 | 0.061 | 106.341 | 0.951
New Model | $\hat{a} = 0.008,\ \hat{\alpha} = 0.275,\ \hat{\beta} = 0.001,\ \hat{N} = 207.873$ | 60.623 | 151.156 | 0.043 | 0.041 | 98.705 | 0.960
Table 7. Model parameter estimation and comparison criteria from the TDC data set (DS3).

Model | LSEs | MSE | AIC | PRR | PP | SAE | $R^2$
GOM | $\hat{a} = 133.835,\ \hat{b} = 0.000146$ | 8.620 | 86.136 | 0.556 | 0.242 | 42.166 | 0.991
DSM | $\hat{a} = 101.918,\ \hat{b} = 0.000507$ | 45.783 | 117.316 | 22.692 | 1.318 | 101.659 | 0.951
ISM | $\hat{a} = 133.835,\ \hat{b} = 0.000146,\ \hat{\beta} = 0.000001$ | 9.127 | 88.136 | 0.556 | 0.242 | 42.166 | 0.991
YIDM | $\hat{a} = 130.091,\ \hat{b} = 0.00015,\ \hat{\alpha} = 0.000003$ | 9.084 | 88.267 | 0.561 | 0.243 | 42.052 | 0.991
PNZM | $\hat{a} = 121.178,\ \hat{b} = 0.000163,\ \hat{\alpha} = 0.000009,\ \hat{\beta} = 0.00$ | 9.532 | 90.326 | 0.530 | 0.234 | 41.538 | 0.991
PZM | $\hat{a} = 122.259,\ \hat{b} = 0.0002,\ \hat{\alpha} = 9955.597,\ \hat{\beta} = 0.305,\ \hat{c} = 0.569$ | 11.491 | 92.020 | 0.643 | 0.268 | 44.848 | 0.990
DPM | $\hat{\alpha} = 123.193,\ \hat{\gamma} = 0.0001,\ t_0 = 0.0001,\ m_0 = 38.459$ | 156.480 | 212.867 | 0.917 | 2.879 | 196.360 | 0.851
TCM | $\hat{a} = 0.000013,\ \hat{b} = 0.78,\ \hat{\alpha} = 141.399,\ \hat{\beta} = 54.71,\ \hat{N} = 254.707$ | 7.090 | 90.758 | 0.091 | 0.068 | 37.880 | 0.9937
3PFDM | $\hat{a} = 0.016,\ \hat{b} = 0.07,\ \hat{c} = 0.00001,\ \hat{\beta} = 157.458,\ \hat{N} = 205.025$ | 9.410 | 92.360 | 0.420 | 0.200 | 39.909 | 0.992
New Model | $\hat{a} = 0.064,\ \hat{\alpha} = 0.731,\ \hat{\beta} = 2509.898,\ \hat{N} = 337.765$ | 6.336 | 88.885 | 0.086 | 0.066 | 36.250 | 0.9940
Table 8. Optimal release time T* subject to the warranty period.

Warranty Period | C1(T) | T* | C2(T) | T* | C3(T) | T*
Tw = 2 | 1173.41 | 14.2 | 1403.78 | 11.6 | 599.88 | 10.5
Tw = 5 | 1286.95 | 14.9 | 1928.63 | 22.8 | 1334.72 | 11.3
Tw = 10 (basic) | 1338.70 | 15.1 | 2398.24 | 34.7 | 2263.33 | 12.3
Tw = 15 | 1348.88 | 15.2 | 2702.33 | 42.7 | 2969.88 | 13.0
Table 9. Optimal release time T* according to cost coefficient C2.

Cost Coefficient C2 | C1(T) | T* | C2(T) | T* | C3(T) | T*
C2 = 25 | 1036.02 | 15.2 | 2013.25 | 37.5 | 1972.06 | 12.5
C2 = 50 (basic) | 1338.70 | 15.1 | 2398.24 | 34.7 | 2263.33 | 12.3
C2 = 100 | 1943.64 | 15.1 | 3141.20 | 29.1 | 2843.35 | 12.1
Table 10. Optimal release time T* according to cost coefficient C3.

Cost Coefficient C3 | C1(T) | T* | C2(T) | T* | C3(T) | T*
C3 = 500 | 1270.65 | 16.5 | 2398.24 | 34.7 | 2262.96 | 12.4
C3 = 2000 (basic) | 1338.70 | 15.1 | 2398.24 | 34.7 | 2263.33 | 12.3
C3 = 4000 | 1376.26 | 14.6 | 2398.24 | 34.7 | 2263.77 | 12.3
Table 11. Optimal release time T* according to cost coefficient C4.

Cost Coefficient C4 | C1(T) | T* | C2(T) | T* | C3(T) | T*
C4 = 200 | 1183.14 | 14.3 | 1859.29 | 20.0 | 1590.02 | 11.6
C4 = 400 (basic) | 1338.70 | 15.1 | 2398.24 | 34.7 | 2263.33 | 12.3
C4 = 1000 | 1680.99 | 16.3 | 3272.23 | 61.0 | 4253.45 | 12.8
