Article

A Testing Coverage Model Based on NHPP Software Reliability Considering the Software Operating Environment and the Sensitivity Analysis

1 Department of Industrial and Systems Engineering, Rutgers University, 96 Frelinghuysen Road, Piscataway, NJ 08855, USA
2 Department of Computer Science and Statistics, Chosun University, 309 Pilmun-daero Dong-gu, Gwangju 61452, Korea
* Author to whom correspondence should be addressed.
Submission received: 1 April 2019 / Revised: 12 May 2019 / Accepted: 17 May 2019 / Published: 20 May 2019
(This article belongs to the Section Mathematics and Computer Science)

Abstract
We have been attempting to evaluate software quality and improve its reliability, and research on software reliability models has been part of that effort. Software is now used in various fields and environments, so quantitative confidence standards must be provided when software is used. We therefore consider testing coverage together with the uncertainty, or randomness, of the operating environment. In this paper, we propose a new testing coverage model based on NHPP software reliability that accounts for the uncertainty of operating environments, and we provide a sensitivity analysis to study the impact of each parameter of the proposed model. We examine the goodness-of-fit of the new testing coverage model and of other existing NHPP models on two datasets. The comparative results for the goodness-of-fit show that the proposed model performs significantly better than the existing models. In addition, the results of the sensitivity analysis show how the parameters of the proposed model affect the mean value function.

1. Introduction

Software combines technologies such as artificial intelligence (AI), the Internet of Things (IoT), and big data, which are important elements of the fourth industrial revolution, to create new forms of value; it is therefore a very important factor in that revolution [1]. Software reliability is defined as the probability that software will run error-free for a given period of time. Developing the new technologies and theories necessary to improve it is difficult and complex, so the focus of software development is to improve the reliability and stability of software systems. In addition, unpredictable results occur during the software development process. Generally, the software development process consists of four stages: specification, design, coding, and testing [1]. Software faults are detected and corrected in the testing phase, the final stage of software development. Because the number of software faults and the intervals between faults have a significant impact on software stability, software failure prediction is an important area of study for software developers, enterprises, and research institutions. A software reliability model makes it easier to evaluate software reliability using the fault data collected in a test or live environment. Furthermore, a software reliability model can measure the number of software faults, the fault intervals, and the reliability, and the fault detection rate can be estimated and used for various predictions. Meanwhile, testing coverage is one of the most important unresolved issues in the software development process, and it matters both to software developers and to the customers of software products. Testing coverage is a measure that enables software developers to evaluate the quality of the tested software and determine how much additional effort is needed to improve its reliability [2,3]. Testing coverage can also provide customers with a quantitative confidence criterion when they plan to buy or use software products [4,5]. In addition, software is used in a variety of operating environments, so the uncertainty or randomness of the software operating environment must be considered.
In the past, various statistical models have been proposed for the purpose of evaluating software reliability. Models based on the non-homogeneous Poisson process (NHPP) have proved to be a very successful approach for practical software reliability [6]. On the basis of the NHPP, the mean value function gives the expected number of faults detected up to a certain point in time. Various NHPP software reliability models have been proposed to date. In the early days, most NHPP software reliability models were developed under the assumptions that faults detected in the testing phase were removed immediately with no debugging time delay, that no new faults were introduced, and that software systems used in field environments were the same as, or close to, those used in the development-testing environment [7,8,9,10]. In the mid-1990s, studies began on software reliability models consistent with a variety of software operating environments, driven by rapid changes in the industrial structure and environment. In the early 2000s, researchers began to explore new approaches, such as the application of calibration factors to software reliability models, to account for the uncertainty of the operating environment [11,12,13]. Recently, NHPP software reliability models considering the uncertainty of the software operating environment have been proposed [14,15,16,17,18,19]. In addition, many testing coverage functions based on different distributions have been proposed, and software reliability models based on different testing coverage functions have also been developed [2,20,21,22].
In this paper, we discuss a new testing coverage model based on NHPP software reliability with the uncertainty of operating environments and sensitivity analysis in order to study the impact of each parameter of the proposed model. We examine the goodness-of-fit of a new testing coverage model based on NHPP software reliability and other existing NHPP models based on two datasets. The explicit solution of the mean value function for a new testing coverage model is derived in Section 2. The various criteria for comparative model analysis are discussed in Section 3. Model analysis and results based on two actual datasets are discussed in Section 4. The impact of each parameter of the proposed model based on sensitivity analysis is discussed in Section 5. Finally, Section 6 presents the conclusions and remarks.

2. Testing Coverage Model Based on NHPP Software Reliability

In this study, the basic assumption is that the NHPP describes the failure phenomenon during the testing phase, and the counting process N(t) of the NHPP represents the cumulative number of failures up to execution time t:

$$\Pr\{N(t) = n\} = \frac{\{m(t)\}^{n}}{n!}\,e^{-m(t)}, \quad n = 0, 1, 2, \ldots, \; t \geq 0.$$

The mean value function m(t) is

$$m(t) = \int_{0}^{t} \lambda(s)\,ds,$$

where λ(s) is the intensity function.
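As a quick illustration of this counting-process definition, the following minimal Python sketch evaluates the Poisson probability mass for a given value of the mean value function (the value m_t = 3.5 is an arbitrary placeholder, not a fitted quantity):

```python
from math import exp, factorial

def nhpp_pmf(n, m_t):
    """Pr{N(t) = n} for an NHPP whose mean value function equals m_t at time t."""
    return m_t ** n * exp(-m_t) / factorial(n)

# If m(t) = 3.5 failures are expected by time t, the probability of
# observing exactly two failures by t is about 0.185.
print(nhpp_pmf(2, 3.5))
```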

2.1. A General Testing Coverage Model Based on NHPP Software Reliability

A general mean value function m(t) of the testing coverage models based on NHPP software reliability satisfies the following differential equation [2]:

$$\frac{d\,m(t)}{dt} = \frac{c'(t)}{1 - c(t)}\left[a(t) - m(t)\right], \qquad (1)$$

where a(t) is the total fault content of the software by time t, c(t) is the testing coverage function, i.e., the percentage of the code covered by time t, and c'(t) is the derivative of the testing coverage function.

Solving Equation (1) for given functions a(t) and c'(t)/(1 − c(t)) yields the following mean value function m(t) [2]:

$$m(t) = e^{-B(t)}\left[m_{0} + \int_{t_{0}}^{t} a(\tau)\,\frac{c'(\tau)}{1 - c(\tau)}\,e^{B(\tau)}\,d\tau\right], \qquad (2)$$

where $B(t) = \int_{t_{0}}^{t} \frac{c'(s)}{1 - c(s)}\,ds$ and m(t_0) = m_0 is the marginal condition of Equation (2), with t_0 representing the start time of the testing process.
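As a sanity check on Equation (2), the sketch below numerically integrates Equation (1) for the simple choices a(t) = a and c(t) = 1 − e^(−bt), so that c'(t)/(1 − c(t)) = b, with m_0 = 0 and t_0 = 0, and compares the result with the closed form a(1 − e^(−bt)), i.e., the GO model of Table 1. The constants a and b are arbitrary illustrative values:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 40.0, 0.2  # assumed fault content and coverage rate (illustrative only)

def rhs(t, m):
    # Equation (1): dm/dt = c'(t)/(1 - c(t)) * (a(t) - m(t));
    # for c(t) = 1 - exp(-b t), the ratio c'/(1 - c) is the constant b.
    return b * (a - m[0])

sol = solve_ivp(rhs, (0.0, 14.0), [0.0], dense_output=True, rtol=1e-8)
t = np.linspace(0.0, 14.0, 8)
closed_form = a * (1.0 - np.exp(-b * t))  # GO model (Table 1)
print(np.max(np.abs(sol.sol(t)[0] - closed_form)))  # agrees to solver tolerance
```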

2.2. A General Testing Coverage Model Based on NHPP Software Reliability with the Uncertainty of the Operating Environments

A general mean value function m(t) of the testing coverage models based on NHPP software reliability, considering the uncertainty of the operating environments, satisfies the following differential equation [14]:

$$\frac{d\,m(t)}{dt} = \eta\left[\frac{c'(t)}{1 - c(t)}\right]\left[a(t) - m(t)\right]. \qquad (3)$$

Treating η as a random variable with a probability density function that captures the uncertain operating environments, solving the differential equation of Equation (3) gives the mean value function m(t) in Equation (4) [14]:

$$m(t) = e^{-\eta B(t)}\left[m_{0} + \eta\int_{t_{0}}^{t} a(\tau)\,\frac{c'(\tau)}{1 - c(\tau)}\,e^{\eta B(\tau)}\,d\tau\right]. \qquad (4)$$

2.3. A New Testing Coverage Model Based on NHPP Software Reliability Considering the Uncertainty of the Operating Environments

In this paper, a new testing coverage model based on NHPP software reliability considering the uncertainty of the operating environments is presented. We retain Equations (3) and (4), which are assumptions of the existing testing coverage model, and add the following assumptions [3,14]:
The initial condition of the mean value function m(t) is m(0) = 0;
a(t) = N is the expected number of faults that exist in the software before testing;
η has a generalized probability density function g with two parameters α and β;
The fault detection rate is expressed by c'(t)/(1 − c(t)).
We can derive the mean value function m(t) based on these assumptions and the differential equations [16]:

$$m(t) = \int_{\eta} N\left(1 - e^{-\eta \int_{0}^{t} \frac{c'(s)}{1 - c(s)}\,ds}\right) dg(\eta),$$

$$m(t) = N\left(1 - \left(\frac{\beta}{\beta + \int_{0}^{t} \frac{c'(s)}{1 - c(s)}\,ds}\right)^{\alpha}\right), \quad \alpha \geq 0, \; \beta \geq 0. \qquad (5)$$
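To see how the second expression follows from the first, assume, as in [14,15], that g is the gamma density with shape parameter α and rate parameter β; the mixture integral then collapses via the Laplace transform of the gamma distribution:

```latex
% Assumed: g(\eta) = \frac{\beta^{\alpha}\eta^{\alpha-1}e^{-\beta\eta}}{\Gamma(\alpha)},
% and write B(t) = \int_0^t \frac{c'(s)}{1-c(s)}\,ds.
\begin{aligned}
m(t) &= \int_0^{\infty} N\left(1 - e^{-\eta B(t)}\right) g(\eta)\,d\eta
      = N\left(1 - \int_0^{\infty} e^{-\eta B(t)} g(\eta)\,d\eta\right) \\
     &= N\left(1 - \left(\frac{\beta}{\beta + B(t)}\right)^{\alpha}\right),
\end{aligned}
% where the last step uses the gamma Laplace transform
% \int_0^{\infty} e^{-s\eta} g(\eta)\,d\eta = \left(\frac{\beta}{\beta + s}\right)^{\alpha}.
```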
In this study, we consider the following testing coverage function c(t):

$$c(t) = 1 - (1 + dt)\,e^{-bt}, \quad b, d > 0,$$

where b is the failure detection rate and d represents the shape factor.
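For this choice of c(t), the integral appearing in Equation (5) has a closed form, since 1 − c(t) = (1 + dt)e^(−bt) and c'(s)/(1 − c(s)) is the derivative of −ln(1 − c(s)):

```latex
\int_0^{t} \frac{c'(s)}{1 - c(s)}\,ds
  = \Big[-\ln\bigl(1 - c(s)\bigr)\Big]_0^{t}
  = -\ln\!\bigl((1 + dt)e^{-bt}\bigr)
  = bt - \ln(1 + dt).
```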
Substituting the function c(t) above into Equation (5), we obtain a new mean value function m(t) for the testing coverage model based on NHPP software reliability with uncertainty in the operating environment, which gives the expected number of software failures detected by time t:

$$m(t) = N\left(1 - \left(\frac{\beta}{\beta + bt - \ln(1 + dt)}\right)^{\alpha}\right). \qquad (6)$$
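A minimal Python sketch of Equation (6); the parameter values are the dataset #1 estimates reported in Table 3 and are used here purely to show how the function is evaluated:

```python
import numpy as np

def m_proposed(t, N, alpha, beta, b, d):
    """Proposed mean value function: expected cumulative failures by time t."""
    B = b * t - np.log1p(d * t)  # integrated fault detection rate (Equation (5))
    return N * (1.0 - (beta / (beta + B)) ** alpha)

# Evaluate the fitted curve over the 14 test weeks of dataset #1.
t = np.arange(1, 15)
print(m_proposed(t, N=184.0433, alpha=0.3487, beta=193.7390, b=0.2627, d=0.2997))
```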

3. Various Criteria for Comparative Model Analysis

We estimate the parameters of the NHPP software reliability models in Table 1 using the least squares estimation (LSE) method. We derived the parameters of the mean value function m(t) using Matlab (2016a, MathWorks, Natick, MA, USA) and R (version 3.3.1) programs based on the LSE method.
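As an illustration of the LSE step, the sketch below fits Equation (6) to the dataset #1 failure counts of Table 2 using SciPy instead of Matlab or R. The starting values and bounds are our own assumptions; a different optimizer or starting point may yield different estimates:

```python
import numpy as np
from scipy.optimize import curve_fit

# Dataset #1 (Table 2): cumulative failures over 14 weeks.
t = np.arange(1, 15, dtype=float)
y = np.array([2, 13, 15, 19, 22, 23, 24, 26, 30, 30, 34, 35, 38, 38], dtype=float)

def m_proposed(t, N, alpha, beta, b, d):
    B = b * t - np.log1p(d * t)
    return N * (1.0 - (beta / (beta + B)) ** alpha)

# Least squares estimation; p0 is an illustrative starting guess, and the
# bounds keep all parameters positive as the model requires.
p0 = [50.0, 1.0, 1.0, 0.3, 0.3]
popt, _ = curve_fit(m_proposed, t, y, p0=p0, bounds=(1e-6, np.inf), maxfev=10000)
print(dict(zip(["N", "alpha", "beta", "b", "d"], popt)))
```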

3.1. Criteria for Model Comparison

The statistical significance of model comparisons can be confirmed through well-known criteria. We use twelve criteria to estimate the goodness-of-fit of all models and to compare the proposed model with the other models in Table 1.
The mean squared error (MSE) measures the average of the squares of the errors, that is, the average squared difference between the estimated values and the actual data. The root mean square error (RMSE) is a frequently used measure of the differences between the values predicted by a model or estimator and the values observed. Akaike's information criterion (AIC) is used to compare the capability of each model to maximize the likelihood function (L) while considering the degrees of freedom [23]. The variance measures the standard deviation of the prediction bias [19]. The root mean square prediction error (RMSPE) is a measure of the closeness with which the model predicts the observations [19]. The predictive ratio risk (PRR) measures the distance of the model estimates from the actual data relative to the model estimates [1]. The Theil statistic (TS) is the average deviation percentage over all periods with regard to the actual values [24]. The predictive power (PP) measures the distance of the model estimates from the actual data [1]. The sum of absolute errors (SAE) measures the absolute distance of the model from the data [16]. The mean absolute error (MAE) measures the deviation using the average absolute distance of the model from the data [24]. The R2 measures how successful the fit is in explaining the variation in the data [4]. The adjusted R2 (Adj R2) is a modification of R2 that accounts for the number of explanatory terms in a model relative to the number of data points [4]. The smaller the values of the first ten criteria (MSE, RMSE, AIC, variance, RMSPE, PRR, TS, PP, SAE, and MAE, ideally close to 0), the better the model fits; conversely, the larger the values of the remaining two criteria (R2 and Adj R2, ideally close to 1), the better the model fits. These criteria are defined in Table A1 in Appendix A.
In Table A1, m̂(t_i) is the estimated cumulative number of failures at t_i for i = 1, 2, …, n; y_i is the total number of failures observed at time t_i; n is the total number of observations; and m is the number of unknown parameters in the model.
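The criteria that depend only on the fitted means can be computed directly, as in the sketch below (AIC is omitted because it requires the maximized likelihood L rather than the fitted values alone; the Bias used by the variance and RMSPE is taken as the mean of m̂(t_i) − y_i, per Table A1):

```python
import numpy as np

def fit_criteria(y, m_hat, num_params):
    """Table A1 criteria computable from observed counts y and fitted means m_hat."""
    y, m_hat = np.asarray(y, dtype=float), np.asarray(m_hat, dtype=float)
    n = len(y)
    resid = m_hat - y
    bias = resid.mean()  # mean prediction bias
    mse = np.sum(resid ** 2) / (n - num_params)
    variance = np.sqrt(np.sum((y - m_hat - bias) ** 2) / (n - 1))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    return {
        "MSE": mse,
        "RMSE": np.sqrt(mse),
        "Variance": variance,
        "RMSPE": np.sqrt(variance ** 2 + bias ** 2),
        "PRR": np.sum((resid / m_hat) ** 2),
        "TS": 100.0 * np.sqrt(np.sum(resid ** 2) / np.sum(y ** 2)),
        "PP": np.sum((resid / y) ** 2),
        "SAE": np.sum(np.abs(resid)),
        "MAE": np.sum(np.abs(resid)) / (n - num_params),
        "R2": r2,
        "AdjR2": 1.0 - (1.0 - r2) * (n - 1) / (n - num_params - 1),
    }
```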

3.2. Distance of the Normalized Criteria

Pham [25] and Li and Pham [4] introduced the distance of the normalized criteria (NCD) method for ranking and selecting the best model from various software reliability models according to the characteristics of a set of criteria. We compare the performance of the software reliability models using the twelve criteria in Section 3.1. Because of the diversity of the criteria, it is difficult to judge the performance of a software reliability model from them individually, so a method that integrates them is needed. For 10 of the 12 criteria, the goodness-of-fit (similarity to the actual data) improves as the value approaches 0, while for the other 2 criteria, R2 and Adj R2, a value closer to 1 indicates a better goodness-of-fit. In addition, the criteria are not weighted, because each criterion has a different scale. We have adapted the NCD method to the characteristics of the criteria presented in Section 3.1.
The NCD value is defined as follows:

$$D_{k} = \sqrt{\sum_{j=1}^{d}\left(\frac{C_{kj}}{\sum_{i=1}^{s} C_{ij}}\right)^{2} + \sum_{j=1}^{f}\left(\frac{1 - Z_{kj}}{\sum_{i=1}^{s}\left(1 - Z_{ij}\right)}\right)^{2}}, \quad k = 1, 2, \ldots, s,$$

where s is the total number of models; C_ij denotes the value of the jth smaller-is-better criterion (MSE, RMSE, AIC, variance, RMSPE, PRR, TS, PP, SAE, and MAE) for the ith model, i = 1, 2, …, s, with d the total number of such criteria; and Z_ij denotes the value of the jth closer-to-1-is-better criterion (R2 and Adj R2) for the ith model, with f the total number of such criteria. Table 1 summarizes the mean value functions of the NHPP software reliability models.
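A small sketch of this computation; rows of the input arrays are models, and the split into smaller-is-better criteria C and closer-to-1 criteria Z follows Section 3.1:

```python
import numpy as np

def ncd_values(C, Z):
    """Distance of the normalized criteria for s models.

    C: shape (s, d) array of criteria where smaller is better
       (MSE, RMSE, AIC, variance, RMSPE, PRR, TS, PP, SAE, MAE).
    Z: shape (s, f) array of criteria where closer to 1 is better (R2, Adj R2).
    Returns the NCD value D_k for each of the s models.
    """
    C, Z = np.asarray(C, dtype=float), np.asarray(Z, dtype=float)
    term1 = (C / C.sum(axis=0)) ** 2  # normalize each criterion over the models
    term2 = ((1.0 - Z) / (1.0 - Z).sum(axis=0)) ** 2
    return np.sqrt(term1.sum(axis=1) + term2.sum(axis=1))

# The model with the smallest D_k ranks first, as in Tables 4 and 8.
```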

3.3. Confidence Interval

We use the following Equation (7) to obtain the confidence intervals [1] of the proposed model and of the existing NHPP software reliability models:

$$\hat{m}(t) \pm Z_{\alpha/2}\sqrt{\hat{m}(t)}, \qquad (7)$$

where Z_{α/2} is the 100(1 − α/2)th percentile of the standard normal distribution. This makes it possible to check whether the value of the mean value function falls within the confidence interval at each time point.
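A minimal sketch of Equation (7); flooring the lower limit at zero mirrors the truncated lower confidence limits visible in Table 5:

```python
import numpy as np
from scipy.stats import norm

def confidence_interval(m_hat, level=0.95):
    """Equation (7): m_hat(t) +/- z_{alpha/2} * sqrt(m_hat(t)), floored at zero."""
    z = norm.ppf(0.5 + level / 2.0)  # 1.96 for 95%, 2.576 for 99%
    m_hat = np.asarray(m_hat, dtype=float)
    half = z * np.sqrt(m_hat)
    return np.maximum(m_hat - half, 0.0), m_hat + half
```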

4. Data and the Results of Model Analysis

We estimate the parameters on two datasets, find the results of all criteria and the NCD value, and compare the results against each other.

4.1. Dataset 1

Dataset #1 was given by [6] and is provided in Table 2. In dataset #1, the week index ranges from 1 to 14 weeks, and there are 38 cumulative failures at 14 weeks. Detailed information is recorded in [6].
First, in Table 3, we obtained the parameter estimates for all twenty models and the values of all twelve criteria using t = 1, 2, …, 14 of the week index from dataset #1. As shown in Table 3, the proposed model achieves the best results on the twelve criteria when compared with the other models. In addition, as shown in Table 4, the proposed model achieves the best result when comparing the NCD values of the models. Looking at Table 3 in detail, the MSE, RMSE, AIC, variance, RMSPE, PRR, TS, PP, and SAE values for the proposed model are the lowest compared with all models, and the R2 and Adj R2 values for the proposed model are the largest. First, the MSE value of the proposed model is 2.4340, the lowest among the models compared; the next lowest, for the YID 1 model, is 2.9149. The RMSE value of the proposed model is 1.5601, the lowest among the models compared; the RMSE value of the YID 1 model is 1.7073. The AIC value of the proposed model is 60.2184, also the lowest among the models compared; the AIC value of the GO model is 62.8309. The variance value of the proposed model is 1.2993, the lowest among the models compared; the variance value of the YID 1 model is 1.5841. The RMSPE value of the proposed model is 1.2997, the lowest among the models compared; the RMSPE value of the YID 1 model is 1.5883. The PRR value of the proposed model is 0.0709, the lowest among the models compared; the PRR value of the Vtub model is 0.4394. The TS value of the proposed model is 4.6588, the lowest among the models compared; the TS value of the YID 1 model is 5.6364. The PP value of the proposed model is 0.0599, the lowest among the models compared; the PP value of the DS model is 0.3575. The SAE value of the proposed model is 13.8037, the lowest among the models compared; the SAE value of the PNZ model is 16.1109. The R2 and Adj R2 values of the proposed model are 0.9843 and 0.9744, respectively, the largest among the models compared; the R2 and Adj R2 values of the YID 1 model are 0.9770 and 0.9701, respectively. Although the MAE values of the YID 1 and YID 2 models (1.4727 and 1.4741) are slightly smaller than that of our proposed model (1.5337), the performance of our proposed model on all eleven remaining criteria is not only much better than that of the YID 1 and YID 2 models but also better than that of all the models, as shown in Table 3. Figure 1 shows a graph of the mean value functions for all models based on dataset #1. Figure 2 shows a graph of the relative error values of all models for dataset #1.
From Table 4, looking at the results for the NCD values, the proposed model achieves the lowest value among all models. The NCD value of the proposed model is 0.0940763, the lowest among all the models; the NCD values of the YID 1 and YID 2 models are 0.1245350 and 0.1254775, respectively. Figure 3a shows a 3D plot of the MSE value, model number, and NCD value, and Figure 3b shows a 3D plot of the R2 value, model number, and NCD value for dataset #1. Table 5 lists the 95% and 99% confidence intervals (lower confidence limit (LCL) and upper confidence limit (UCL)) for the proposed model for dataset #1, and Figure 4 shows a graph of these intervals. The relative error and confidence interval graphs confirm the accuracy of the fit and show whether the value of the mean value function falls within the confidence interval at each time point.

4.2. Dataset 2

Dataset #2 was provided by [34] and is shown in Table 6. In dataset #2, the time index uses cumulative system days, and there are 33 cumulative failures in 58,633 system days. Detailed information is recorded in [34].
In Table 7, we obtained the parameter estimates of all twenty models and the values of all twelve criteria using t = 1249, 4721, 8786, …, 58,633 from the cumulative system days of dataset #2. As shown in Table 7, the proposed model achieves the best results on the twelve criteria when compared with the other models. As shown in Table 8, the proposed model also achieves the best result when comparing the NCD values of the models.
Looking at Table 7 in detail, we can observe that the MSE, RMSE, variance, RMSPE, PRR, TS, PP, SAE, and MAE values for the proposed model are the lowest among all the models compared, and the R2 and Adj R2 values for the proposed model are the largest. First, the MSE value of the proposed model is 0.8790, the lowest among all the models compared; the MSE value of the YID 2 model is 0.9229. The RMSE value of the proposed model is 0.9376, the lowest among all the models compared; the RMSE value of the YID 2 model is 0.9607. The variance value of the proposed model is 0.7662, the lowest among all the models compared; the variance value of the CT model is 0.7878. The RMSPE value of the proposed model is 0.7664, the lowest among all the models compared; the RMSPE value of the CT model is 0.7889. The PRR value of the proposed model is 0.0149, the lowest among all the models compared; the PRR value of the CT model is 0.0191. The TS value of the proposed model is 2.8923, the lowest among all the models compared; the TS value of the CT model is 2.9636. The PP value of the proposed model is 0.0149, the lowest among all the models compared; the PP value of the CT model is 0.0184. The SAE value of the proposed model is 7.6843, the lowest among all the models compared; the SAE value of the CT model is 7.8468. The MAE value of the proposed model is 0.9605, the lowest among all the models compared; the MAE value of the CT model is 0.9809. The R2 and Adj R2 values of the proposed model are 0.9937 and 0.9891, respectively, the largest among all the models compared; the R2 and Adj R2 values of the CT model are 0.9933 and 0.9886, respectively. Although the AIC value of the YID 2 model is smaller than that of our proposed model, the performance of our proposed model on all the other eleven criteria is not only much better than that of the YID 2 model but also better than that of all the models, as shown in Table 7. Figure 5 shows a graph of the mean value functions for all the models based on dataset #2. Figure 6 shows a graph of the relative error values of all models for dataset #2.
Moreover, from Table 8, looking at the results for the NCD values, the proposed model achieves the lowest values among all the models compared. The NCD value of the proposed model is 0.0713073, which is the lowest among all the models compared, and the NCD values of the CT model and the YID 2 model are 0.0724187 and 0.0773360, respectively. Figure 7a shows a 3D plot of the MSE value, model number and NCD value, and Figure 7b shows a 3D plot of the R2 value, model number and NCD value for dataset #2. Table 9 lists the 95% and 99% confidence intervals of the proposed model for dataset #2. Figure 8 shows a graph of the 95% and 99% confidence intervals of the proposed model for dataset #2.

5. Sensitivity Analysis

In this study, we investigated the effect of each parameter of the proposed model on the mean value function by performing a sensitivity analysis. We conducted the sensitivity analysis by changing one of the parameters of the model while fixing all the others, examining how the estimated mean value function changes when the parameter estimates obtained from dataset #1 and dataset #2 are varied from −20% to +20%. Figure 9 shows the sensitivity analysis for the five parameters of the proposed model based on dataset #1; as can be seen, the cumulative number of detected failures changes with each estimated parameter. Figure 10 shows the sensitivity analysis for the five parameters of the proposed model based on dataset #2, and the same behavior is observed. Therefore, all the parameters are influential in the proposed model, as the sketch below illustrates.
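A sketch of this procedure; the baseline values are the dataset #1 estimates from Table 3, and the perturbation grid from −20% to +20% follows the text:

```python
import numpy as np

def m_proposed(t, N, alpha, beta, b, d):
    B = b * t - np.log1p(d * t)
    return N * (1.0 - (beta / (beta + B)) ** alpha)

# Baseline: dataset #1 estimates from Table 3.
base = dict(N=184.0433, alpha=0.3487, beta=193.7390, b=0.2627, d=0.2997)
t = np.arange(1, 15, dtype=float)

# Perturb one parameter at a time, holding all the others fixed.
for name in base:
    for pct in (-0.20, -0.10, 0.10, 0.20):
        params = dict(base, **{name: base[name] * (1.0 + pct)})
        print(f"{name} {pct:+.0%}: m(14) = {m_proposed(t, **params)[-1]:.4f}")
```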

6. Conclusions

Software is used in a variety of areas and environments, driven by the needs of different consumers, which is why we must develop high-quality software and improve its reliability. In general, however, software is developed in a controlled testing environment. We therefore considered the uncertainty, or randomness, of the software operating environment, together with testing coverage, which provides quantitative confidence criteria for software products. In this paper, we discussed a new testing coverage model based on NHPP software reliability with the uncertainty of operating environments and provided a sensitivity analysis of the impact of each parameter of the proposed model. We used twelve criteria and the NCD value to compare the goodness-of-fit of the proposed model with that of several existing NHPP software reliability models. The results of the model analysis show that the proposed model achieves significantly better goodness-of-fit than the other models. In addition, we investigated the impact of each parameter of the proposed model on the mean value function by performing a sensitivity analysis.

Author Contributions

Conceptualization, H.P.; software, K.Y.S.; formal analysis, K.Y.S.; data curation, K.Y.S. and I.H.C.; writing-original draft preparation, K.Y.S.; writing-review and editing, I.H.C. and H.P.; supervision, H.P.; funding acquisition, I.H.C. and K.Y.S.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1D1A1B07045734, NRF-2018R1A6A3A03011833).

Acknowledgments

This research was supported by the National Research Foundation of Korea.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. List of criteria for model comparisons.

No. | Criteria | Formula
1 | MSE | $\sum_{i=1}^{n}\left(\hat{m}(t_i) - y_i\right)^2 / (n - m)$
2 | RMSE | $\sqrt{\sum_{i=1}^{n}\left(\hat{m}(t_i) - y_i\right)^2 / (n - m)}$
3 | AIC | $-2\log L + 2m$
4 | Variance | $\sqrt{\sum_{i=1}^{n}\left(y_i - \hat{m}(t_i) - \mathrm{Bias}\right)^2 / (n - 1)}$
5 | RMSPE | $\sqrt{\mathrm{Variance}^2 + \mathrm{Bias}^2}$
6 | PRR | $\sum_{i=0}^{n}\left(\frac{\hat{m}(t_i) - y_i}{\hat{m}(t_i)}\right)^2$
7 | TS | $100\sqrt{\sum_{i=1}^{n}\left(y_i - \hat{m}(t_i)\right)^2 / \sum_{i=1}^{n} y_i^2}\,\%$
8 | PP | $\sum_{i=0}^{n}\left(\frac{\hat{m}(t_i) - y_i}{y_i}\right)^2$
9 | SAE | $\sum_{i=0}^{n}\left|\hat{m}(t_i) - y_i\right|$
10 | MAE | $\sum_{i=1}^{n}\left|\hat{m}(t_i) - y_i\right| / (n - m)$
11 | R2 | $1 - \sum_{i=0}^{n}\left(\hat{m}(t_i) - y_i\right)^2 / \sum_{i=0}^{n}\left(y_i - \bar{y}\right)^2$
12 | Adj R2 | $1 - \frac{(1 - R^2)(n - 1)}{n - m - 1}$

References

1. Pham, H. System Software Reliability; Springer: London, UK, 2006.
2. Pham, H.; Zhang, X. NHPP software reliability and cost models with testing coverage. Eur. J. Oper. Res. 2003, 145, 443–454.
3. Chang, I.H.; Pham, H.; Lee, S.W.; Song, K.Y. A testing-coverage software reliability model with the uncertainty of operation environments. Int. J. Syst. Sci.: Oper. Logist. 2014, 1, 220–227.
4. Li, Q.; Pham, H. A testing-coverage software reliability model considering fault removal efficiency and error generation. PLoS ONE 2017, 12, e0181524.
5. Li, Q.; Pham, H. NHPP software reliability model considering the uncertainty of operating environments with imperfect debugging and testing coverage. Appl. Math. Model. 2017, 51, 68–85.
6. Musa, J.D.; Iannino, A.; Okumoto, K. Software Reliability: Measurement, Prediction, Application; McGraw-Hill: New York, NY, USA, 1987.
7. Goel, A.L.; Okumoto, K. Time-dependent error-detection rate model for software reliability and other performance measures. IEEE Trans. Reliab. 1979, 28, 206–211.
8. Yamada, S.; Ohba, M.; Osaki, S. S-shaped reliability growth modeling for software fault detection. IEEE Trans. Reliab. 1983, 32, 475–484.
9. Ohba, M. Inflexion S-shaped software reliability growth models. In Stochastic Models in Reliability Theory; Osaki, S., Hatoyama, Y., Eds.; Springer: Berlin, Germany, 1984; pp. 144–162.
10. Yamada, S.; Ohtera, H.; Narihisa, H. Software reliability growth models with testing-effort. IEEE Trans. Reliab. 1986, 35, 19–23.
11. Yang, B.; Xie, M. A study of operational and testing reliability in software reliability analysis. Reliab. Eng. Syst. Saf. 2000, 70, 323–329.
12. Huang, C.Y.; Kuo, S.Y.; Lyu, M.R.; Lo, J.H. Quantitative software reliability modeling from testing to operation. In Proceedings of the 11th International Symposium on Software Reliability Engineering (ISSRE 2000), San Jose, CA, USA, 8–11 October 2000; pp. 72–82.
13. Zhang, X.; Jeske, D.; Pham, H. Calibrating software reliability models when the test environment does not match the user environment. Appl. Stoch. Models Bus. Ind. 2002, 18, 87–99.
14. Teng, X.; Pham, H. A new methodology for predicting software reliability in the random field environments. IEEE Trans. Reliab. 2006, 55, 458–468.
15. Song, K.Y.; Chang, I.H.; Pham, H. A three-parameter fault-detection software reliability model with the uncertainty of operating environments. J. Syst. Sci. Syst. Eng. 2017, 26, 121–132.
16. Song, K.Y.; Chang, I.H.; Pham, H. A software reliability model with a Weibull fault detection rate function subject to operating environments. Appl. Sci. 2017, 7, 983.
17. Song, K.Y.; Chang, I.H.; Pham, H. An NHPP software reliability model with S-shaped growth curve subject to random operating environments and optimal release time. Appl. Sci. 2017, 7, 1304.
18. Song, K.Y.; Chang, I.H.; Pham, H. Optimal release time and sensitivity analysis using a new NHPP software reliability model with probability of fault removal subject to operating environments. Appl. Sci. 2018, 8, 714.
19. Zhu, M.; Pham, H. A two-phase software reliability modeling involving with software fault dependency and imperfect fault removal. Comput. Lang. Syst. Struct. 2018, 53, 27–42.
20. Huang, C.Y.; Kuo, S.Y.; Lyu, M.R. An assessment of testing-effort dependent software reliability growth models. IEEE Trans. Reliab. 2007, 56, 198–211.
21. Shibata, K.; Rinsaka, K.; Dohi, T. Metrics-based software reliability models using non-homogeneous Poisson processes. In Proceedings of the 17th International Symposium on Software Reliability Engineering, Raleigh, NC, USA, 7–10 November 2006; pp. 52–61.
22. Malaiya, Y.K.; Li, M.N.; Bieman, J.M.; Karcich, R. Software reliability growth with test coverage. IEEE Trans. Reliab. 2002, 51, 420–426.
23. Akaike, H. A new look at statistical model identification. IEEE Trans. Autom. Control 1974, 19, 716–719.
24. Anjum, M.; Haque, M.A.; Ahmad, N. Analysis and ranking of software reliability models based on weighted criteria value. Int. J. Inf. Technol. Comput. Sci. 2013, 2, 1–14.
25. Pham, H. A new software reliability model with Vtub-shaped fault detection rate and the uncertainty of operating environments. Optimization 2014, 63, 1481–1490.
26. Yamada, S.; Tokuno, K.; Osaki, S. Imperfect debugging models with fault introduction rate for software reliability assessment. Int. J. Syst. Sci. 1992, 23, 2241–2252.
27. Hossain, S.A.; Dahiya, R.C. Estimating the parameters of a non-homogeneous Poisson-process model for software reliability. IEEE Trans. Reliab. 1993, 42, 604–612.
28. Pham, H.; Nordmann, L.; Zhang, X. A general imperfect software debugging model with S-shaped fault detection rate. IEEE Trans. Reliab. 1999, 48, 169–175.
29. Pham, H.; Zhang, X. An NHPP software reliability model and its comparison. Int. J. Reliab. Qual. Saf. Eng. 1997, 4, 269–282.
30. Zhang, X.M.; Teng, X.L.; Pham, H. Considering fault removal efficiency in software reliability assessment. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2003, 33, 114–120.
31. Pham, H. An imperfect-debugging fault-detection dependent-parameter software. Int. J. Autom. Comput. 2007, 4, 325–328.
32. Kapur, P.K.; Pham, H.; Anand, S.; Yadav, K. A unified approach for developing software reliability growth models in the presence of imperfect debugging and error generation. IEEE Trans. Reliab. 2011, 60, 331–340.
33. Roy, P.; Mahapatra, G.S.; Dey, K.N. An NHPP software reliability growth model with imperfect debugging and error generation. Int. J. Reliab. Qual. Saf. Eng. 2014, 21, 1–3.
34. Jeske, D.R.; Zhang, X. Some successful approaches to software reliability modeling in industry. J. Syst. Softw. 2005, 74, 85–99.
Figure 1. Mean value functions for all models in Table 1 for dataset #1.
Figure 2. Relative error value for all models in Table 1 for dataset #1.
Figure 3. 3D plots for dataset #1: (a) MSE value, model number, NCD value; (b) R2 value, model number, NCD value.
Figure 4. The 95% and 99% confidence intervals for the proposed model for dataset #1.
Figure 5. Mean value functions for all models in Table 1 for dataset #2.
Figure 6. Relative error value for all models in Table 1 for dataset #2.
Figure 7. 3D plots for dataset #2: (a) MSE value, model number, NCD value; (b) R2 value, model number, NCD value.
Figure 8. The 95% and 99% confidence intervals for the proposed model for dataset #2.
Figure 9. Sensitivity analysis of the parameters of the proposed model for dataset #1: (a) parameter b; (b) parameter α; (c) parameter β; (d) parameter N; (e) parameter d.
Figure 10. Sensitivity analysis of the parameters of the proposed model for dataset #2: (a) parameter b; (b) parameter α; (c) parameter β; (d) parameter N; (e) parameter d.
Table 1. NHPP software reliability models.

No. | Model | Mean value function m(t)
1 | Goel-Okumoto (GO) [7] | $m(t) = a\left(1 - e^{-bt}\right)$
2 | Yamada et al. (DS) [8] | $m(t) = a\left(1 - (1 + bt)e^{-bt}\right)$
3 | Ohba (IS) [9] | $m(t) = \frac{a\left(1 - e^{-bt}\right)}{1 + \beta e^{-bt}}$
4 | Yamada et al. (YE) [10] | $m(t) = a\left(1 - e^{-\gamma\alpha\left(1 - e^{-\beta t}\right)}\right)$
5 | Yamada et al. (YR) [10] | $m(t) = a\left(1 - e^{-\gamma\alpha\left(1 - e^{-\beta t^2/2}\right)}\right)$
6 | Yamada et al. (YID 1) [26] | $m(t) = \frac{ab}{\alpha + b}\left(e^{\alpha t} - e^{-bt}\right)$
7 | Yamada et al. (YID 2) [26] | $m(t) = a\left(1 - e^{-bt}\right)\left(1 - \frac{\alpha}{b}\right) + \alpha a t$
8 | Hossain-Dahiya (HDGO) [27] | $m(t) = \log\left[\left(e^{a} + c\right) / \left(e^{a e^{-bt}} + c\right)\right]$
9 | Pham et al. (PNZ) [28] | $m(t) = \frac{a\left(1 - e^{-bt}\right)\left(1 - \frac{\alpha}{b}\right) + \alpha a t}{1 + \beta e^{-bt}}$
10 | Pham-Zhang (PZ) [29] | $m(t) = \frac{1}{1 + \beta e^{-bt}}\left[(c + a)\left(1 - e^{-bt}\right) - \frac{ab}{b - \alpha}\left(e^{-\alpha t} - e^{-bt}\right)\right]$
11 | Zhang et al. (ZFR) [30] | $m(t) = \frac{a}{p - \beta}\left[1 - \left(\frac{(1 + \alpha)e^{-bt}}{1 + \alpha e^{-bt}}\right)^{\frac{c}{b}(p - \beta)}\right]$
12 | Teng-Pham (TP) [14] | $m(t) = \frac{a}{p - q}\left[1 - \left(\frac{\beta}{\beta + (p - q)\ln\left(\frac{c + e^{bt}}{c + 1}\right)}\right)^{\alpha}\right]$
13 | Pham (IFD) [1] | $m(t) = a\left(1 - e^{-bt}\right)\left(1 + (b + d)t + bdt^{2}\right)$
14 | Pham (PDP) [31] | $m(t) = m_{0}\left(\frac{\gamma t + 1}{\gamma t_{0} + 1}\right)e^{-\gamma(t - t_{0})} + \alpha(\gamma t + 1)\left(\gamma t - 1 + (1 - \gamma t_{0})e^{-\gamma(t - t_{0})}\right)$
15 | Kapur et al. (KSRGM) [32] | $m(t) = \frac{A}{1 - \alpha}\left[1 - \left(\left(1 + bt + \frac{b^{2}t^{2}}{2}\right)e^{-bt}\right)^{p(1 - \alpha)}\right]$
16 | Roy et al. (RMD) [33] | $m(t) = a\alpha\left(1 - e^{-bt}\right) - \frac{ab}{b - \beta}\left(e^{-\beta t} - e^{-bt}\right)$
17 | Chang et al. (CT) [3] | $m(t) = N\left[1 - \left(\frac{\beta}{\beta + (at)^{b}}\right)^{\alpha}\right]$
18 | Pham (Vtub) [25] | $m(t) = N\left[1 - \left(\frac{\beta}{\beta + a^{t^{b}} - 1}\right)^{\alpha}\right]$
19 | Song et al. (3PFD) [15] | $m(t) = N\left[1 - \frac{\beta}{\beta + \frac{a}{b}\ln\left(\frac{(1 + c)e^{bt}}{1 + ce^{bt}}\right)}\right]$
20 | Proposed model | $m(t) = N\left(1 - \left(\frac{\beta}{\beta + bt - \ln(1 + dt)}\right)^{\alpha}\right)$
Table 2. Failure data: dataset #1.

Week Index | Failures | Cumulative Failures
1 | 2 | 2
2 | 11 | 13
3 | 2 | 15
4 | 4 | 19
5 | 3 | 22
6 | 1 | 23
7 | 1 | 24
8 | 2 | 26
9 | 4 | 30
10 | 0 | 30
11 | 4 | 34
12 | 1 | 35
13 | 3 | 38
14 | 0 | 38
Table 3. Results of model parameter estimation and criteria from dataset #1.

No. | Model | Parameter Estimation | MSE | RMSE | AIC | Variance | RMSPE | PRR | TS | PP | SAE | MAE | R2 | Adj R2
1 | GO | a = 46.1404, b = 0.1182 | 3.6343 | 1.9064 | 62.8309 | 1.8327 | 1.8330 | 0.5283 | 6.5734 | 2.5743 | 20.4374 | 1.7031 | 0.9687 | 0.9630
2 | DS | a = 35.9507, b = 0.3988 | 9.1885 | 3.0312 | 68.6375 | 3.0081 | 3.0371 | 0.9541 | 10.4521 | 0.3575 | 32.6041 | 2.7170 | 0.9208 | 0.9064
3 | IS | a = 46.1403, b = 0.1182, β = 5.93 × 10^-9 | 3.9647 | 1.9912 | 64.8309 | 1.8327 | 1.8330 | 0.5283 | 6.5734 | 2.5743 | 20.4375 | 1.8580 | 0.9687 | 0.9593
4 | YE | a = 64.4549, α = 0.0183, β = 0.0574, γ = 80.1118 | 4.1162 | 2.0288 | 66.7083 | 1.7794 | 1.7794 | 0.5190 | 6.3861 | 2.8085 | 19.6361 | 1.9636 | 0.9704 | 0.9573
5 | YR | a = 38.1089, α = 1.6323, β = 0.0364, γ = 1.4194 | 16.0972 | 4.0121 | 82.0186 | 3.6707 | 3.7164 | 1.9186 | 12.6289 | 0.5463 | 39.5895 | 3.9590 | 0.8844 | 0.8331
6 | YID 1 | a = 22.4227, b = 0.3115, α = 0.0505 | 2.9149 | 1.7073 | 64.8745 | 1.5841 | 1.5883 | 0.5051 | 5.6364 | 4.3732 | 16.1998 | 1.4727 | 0.9770 | 0.9701
7 | YID 2 | a = 18.5582, b = 0.3808, α = 0.0952 | 2.9795 | 1.7261 | 64.7603 | 1.5991 | 1.6026 | 0.5092 | 5.6984 | 4.3985 | 16.2150 | 1.4741 | 0.9765 | 0.9694
8 | HDGO | a = 46.1403, b = 0.1182, c = 0.00022 | 3.9647 | 1.9912 | 64.8309 | 1.8327 | 1.8330 | 0.5283 | 6.5734 | 2.5743 | 20.4375 | 1.8580 | 0.9687 | 0.9593
9 | PNZ | a = 17.7782, b = 0.4256, α = 0.1008, β = 0.0836 | 3.2378 | 1.7994 | 66.6674 | 1.5908 | 1.5947 | 0.5069 | 5.6639 | 4.3641 | 16.1109 | 1.6111 | 0.9768 | 0.9664
10 | PZ | a = 4.33 × 10^-8, b = 0.1182, α = 0.03204, β = 6.35 × 10^-8, c = 46.14 | 4.8458 | 2.2013 | 68.8309 | 1.8327 | 1.8330 | 0.5283 | 6.5734 | 2.5742 | 20.4379 | 2.2709 | 0.9687 | 0.9491
11 | ZFR | a = 6.8991, b = 0.0132, α = 0.0095, β = 0.1242, c = 0.7981, p = 0.2738 | 5.4529 | 2.3351 | 70.8319 | 1.8327 | 1.8330 | 0.5282 | 6.5743 | 2.5753 | 20.4372 | 2.5546 | 0.9687 | 0.9418
12 | TP | a = 78.530, b = 0.3843, α = 0.06219, β = 0.2910, c = 3.19 × 10^-8, p = 0.8530, q = 0.6711 | 5.2511 | 2.2915 | 72.6973 | 1.6836 | 1.6843 | 0.5125 | 6.0348 | 3.5656 | 17.8269 | 2.5467 | 0.9736 | 0.9428
13 | IFD | a = 3.2868, b = 0.8524, d = 1.0 × 10^-11 | 13.0533 | 3.6129 | 67.8928 | 3.8336 | 3.9783 | 1.2118 | 11.9274 | 0.9925 | 36.9137 | 3.3558 | 0.8969 | 0.8660
14 | PDP | α = 1833.3580, γ = 0.0119, t0 = 28.7597, m0 = 143.1294 | 26.2542 | 5.1239 | 138.4862 | 4.4940 | 4.4941 | 0.9330 | 16.1283 | 42.1676 | 43.5907 | 4.3591 | 0.8115 | 0.7277
15 | KSRGM | A = 0.9468, b = 10.5138, α = 0.9774, p = 0.6679 | 5.3204 | 2.3066 | 65.7919 | 2.0616 | 2.0734 | 0.5176 | 7.2604 | 0.9784 | 23.2980 | 2.3298 | 0.9618 | 0.9448
16 | RMD | a = 473.90, b = 0.3889, α = 1.0380, β = 0.003857 | 3.3303 | 1.8249 | 66.7588 | 1.6042 | 1.6053 | 0.5115 | 5.7443 | 4.2796 | 16.7058 | 1.6706 | 0.9761 | 0.9655
17 | CT | a = 0.3318, b = 1.110, α = 0.03498, β = 0.9627, N = 584.00 | 4.0547 | 2.0136 | 68.0606 | 1.6777 | 1.6784 | 0.4961 | 6.0130 | 2.9906 | 18.1577 | 2.0175 | 0.9738 | 0.9574
18 | Vtub | a = 91.140, b = 0.5612, α = 0.04156, β = 21.950, N = 74.030 | 3.7733 | 1.9425 | 66.2648 | 1.6196 | 1.6206 | 0.4394 | 5.8006 | 2.1000 | 18.4484 | 2.0498 | 0.9756 | 0.9604
19 | 3PFD | a = 0.3353, b = 1.46 × 10^-7, β = 0.18198, N = 68.60676, c = 20.31988 | 4.3488 | 2.0854 | 68.6419 | 1.7359 | 1.7361 | 0.5136 | 6.2272 | 3.0687 | 18.8258 | 2.0918 | 0.9719 | 0.9543
20 | New Model | b = 0.2627, α = 0.3487, β = 193.7390, N = 184.0433, d = 0.2997 | 2.4340 | 1.5601 | 60.2184 | 1.2993 | 1.2997 | 0.0709 | 4.6588 | 0.0599 | 13.8037 | 1.5337 | 0.9843 | 0.9744
Table 4. Results of criteria and NCD values for comparison with dataset #1.

No. | Model | MSE | RMSE | AIC | Variance | RMSPE | PRR | TS | PP | SAE | MAE | R2 | Adj R2 | NCD Value | Rank
1 | GO | 3.6343 | 1.9064 | 62.8309 | 1.8327 | 1.8330 | 0.5283 | 6.5734 | 2.5743 | 20.4374 | 1.7031 | 0.9687 | 0.9630 | 0.1339749 | 8
2 | DS | 9.1885 | 3.0312 | 68.6375 | 3.0081 | 3.0371 | 0.9541 | 10.4521 | 0.3575 | 32.6041 | 2.7170 | 0.9208 | 0.9064 | 0.2307686 | 17
3 | IS | 3.9647 | 1.9912 | 64.8309 | 1.8327 | 1.8330 | 0.5283 | 6.5734 | 2.5743 | 20.4375 | 1.8580 | 0.9687 | 0.9593 | 0.1371307 | 10
4 | YE | 4.1162 | 2.0288 | 66.7083 | 1.7794 | 1.7794 | 0.5190 | 6.3861 | 2.8085 | 19.6361 | 1.9636 | 0.9704 | 0.9573 | 0.1372862 | 11
5 | YR | 16.0972 | 4.0121 | 82.0186 | 3.6707 | 3.7164 | 1.9186 | 12.6289 | 0.5463 | 39.5895 | 3.9590 | 0.8844 | 0.8331 | 0.3432228 | 19
6 | YID 1 | 2.9149 | 1.7073 | 64.8745 | 1.5841 | 1.5883 | 0.5051 | 5.6364 | 4.3732 | 16.1998 | 1.4727 | 0.9770 | 0.9701 | 0.1245350 | 2
7 | YID 2 | 2.9795 | 1.7261 | 64.7603 | 1.5991 | 1.6026 | 0.5092 | 5.6984 | 4.3985 | 16.2150 | 1.4741 | 0.9765 | 0.9694 | 0.1254775 | 3
8 | HDGO | 3.9647 | 1.9912 | 64.8309 | 1.8327 | 1.8330 | 0.5283 | 6.5734 | 2.5743 | 20.4375 | 1.8580 | 0.9687 | 0.9593 | 0.1371307 | 9
9 | PNZ | 3.2378 | 1.7994 | 66.6674 | 1.5908 | 1.5947 | 0.5069 | 5.6639 | 4.3641 | 16.1109 | 1.6111 | 0.9768 | 0.9664 | 0.1275318 | 5
10 | PZ | 4.8458 | 2.2013 | 68.8309 | 1.8327 | 1.8330 | 0.5283 | 6.5734 | 2.5742 | 20.4379 | 2.2709 | 0.9687 | 0.9491 | 0.1458965 | 13
11 | ZFR | 5.4529 | 2.3351 | 70.8319 | 1.8327 | 1.8330 | 0.5282 | 6.5743 | 2.5753 | 20.4372 | 2.5546 | 0.9687 | 0.9418 | 0.1522740 | 15
12 | TP | 5.2511 | 2.2915 | 72.6973 | 1.6836 | 1.6843 | 0.5125 | 6.0348 | 3.5656 | 17.8269 | 2.5467 | 0.9736 | 0.9428 | 0.1481766 | 14
13 | IFD | 13.0533 | 3.6129 | 67.8928 | 3.8336 | 3.9783 | 1.2118 | 11.9274 | 0.9925 | 36.9137 | 3.3558 | 0.8969 | 0.8660 | 0.2925481 | 18
14 | PDP | 26.2542 | 5.1239 | 138.4862 | 4.4940 | 4.4941 | 0.9330 | 16.1283 | 42.1676 | 43.5907 | 4.3591 | 0.8115 | 0.7277 | 0.6513642 | 20
15 | KSRGM | 5.3204 | 2.3066 | 65.7919 | 2.0616 | 2.0734 | 0.5176 | 7.2604 | 0.9784 | 23.2980 | 2.3298 | 0.9618 | 0.9448 | 0.1543406 | 16
16 | RMD | 3.3303 | 1.8249 | 66.7588 | 1.6042 | 1.6053 | 0.5115 | 5.7443 | 4.2796 | 16.7058 | 1.6706 | 0.9761 | 0.9655 | 0.1289773 | 6
17 | CT | 4.0547 | 2.0136 | 68.0606 | 1.6777 | 1.6784 | 0.4961 | 6.0130 | 2.9906 | 18.1577 | 2.0175 | 0.9738 | 0.9574 | 0.1337059 | 7
18 | Vtub | 3.7733 | 1.9425 | 66.2648 | 1.6196 | 1.6206 | 0.4394 | 5.8006 | 2.1000 | 18.4484 | 2.0498 | 0.9756 | 0.9604 | 0.1271222 | 4
19 | 3PFD | 4.3488 | 2.0854 | 68.6419 | 1.7359 | 1.7361 | 0.5136 | 6.2272 | 3.0687 | 18.8258 | 2.0918 | 0.9719 | 0.9543 | 0.1387252 | 12
20 | New Model | 2.4340 | 1.5601 | 60.2184 | 1.2993 | 1.2997 | 0.0709 | 4.6588 | 0.0599 | 13.8037 | 1.5337 | 0.9843 | 0.9744 | 0.0940763 | 1
Table 5. Results of the 95% and 99% confidence intervals for the proposed model for dataset #1.

Time Index | Data Value | 95% LCL | 95% UCL | 99% LCL | 99% UCL
1 | 2 | 0 | 5.0607 | 0 | 5.9683
2 | 13 | 4.3213 | 17.1714 | 2.3023 | 19.1903
3 | 15 | 7.4466 | 22.6539 | 5.0574 | 25.0431
4 | 19 | 10.0215 | 26.8531 | 7.3770 | 29.4975
5 | 22 | 12.2502 | 30.3391 | 9.4082 | 33.1811
6 | 23 | 14.2307 | 33.3503 | 11.2268 | 36.3542
7 | 24 | 16.0215 | 36.0165 | 12.8800 | 39.1580
8 | 26 | 17.6612 | 38.4182 | 14.4001 | 41.6794
9 | 30 | 19.1773 | 40.6094 | 15.8101 | 43.9767
10 | 30 | 20.5898 | 42.6285 | 17.1273 | 46.0910
11 | 34 | 21.9142 | 44.5036 | 18.3651 | 48.0527
12 | 35 | 23.1623 | 46.2565 | 19.5340 | 49.8848
13 | 38 | 24.3439 | 47.9040 | 20.6424 | 51.6055
14 | 38 | 25.4668 | 49.4595 | 21.6972 | 53.2291
Table 6. Failure data: dataset #2.

Time Index | Cumulative System Days | Failures | Cumulative Failures
1 | 1249 | 4 | 4
2 | 4721 | 6 | 10
3 | 8786 | 4 | 14
4 | 13,669 | 3 | 17
5 | 19,094 | 6 | 23
6 | 24,750 | 1 | 24
7 | 32,299 | 2 | 26
8 | 40,594 | 4 | 30
9 | 49,476 | 1 | 31
10 | 55,596 | 0 | 31
11 | 58,061 | 1 | 32
12 | 58,588 | 1 | 33
13 | 58,633 | 0 | 33
Table 7. Results of model parameter estimation and criteria from dataset #2.

No. | Model | Parameter Estimation | MSE | RMSE | AIC | Variance | RMSPE | PRR | TS | PP | SAE | MAE | R2 | Adj R2
1 | GO | a = 33.0367, b = 5.8 × 10^-5 | 1.6266 | 1.2754 | 49.4070 | 1.2770 | 1.2937 | 0.6251 | 4.6136 | 0.2420 | 12.9085 | 1.1735 | 0.9839 | 0.9806
2 | DS | a = 31.0119, b = 1.44 × 10^-4 | 7.3713 | 2.7150 | 72.2378 | 2.8666 | 2.9440 | 65.1831 | 9.8214 | 1.1664 | 25.6479 | 2.3316 | 0.9269 | 0.9122
3 | IS | a = 33.0368, b = 5.8 × 10^-5, β = 8.6 × 10^-8 | 1.7892 | 1.3376 | 51.4070 | 1.2770 | 1.2937 | 0.6251 | 4.6136 | 0.2420 | 12.9084 | 1.2908 | 0.9839 | 0.9785
4 | YE | a = 4438.167, α = 0.2021, β = 5.76 × 10^-5, γ = 0.03696 | 1.9812 | 1.4076 | 53.3655 | 1.2984 | 1.3219 | 0.6363 | 4.6057 | 0.2445 | 12.9425 | 1.4381 | 0.9839 | 0.9759
5 | YR | a = 32.7479, α = 2.3803, β = 3.87 × 10^-9, γ = 1.1396 | 12.8131 | 3.5795 | 91.6713 | 3.4310 | 3.5266 | 199.5366 | 11.7126 | 1.4270 | 31.6731 | 3.5192 | 0.8960 | 0.8440
6 | YID 1 | a = 23.8781, b = 9.83 × 10^-5, α = 6.45 × 10^-6 | 1.0219 | 1.0109 | 48.4929 | 0.9407 | 0.9461 | 0.2239 | 3.4867 | 0.1203 | 10.0450 | 1.0045 | 0.9908 | 0.9877
7 | YID 2 | a = 22.6444, b = 0.000106, α = 9.02 × 10^-6 | 0.9979 | 0.9990 | 48.4804 | 0.9221 | 0.9252 | 0.1979 | 3.4455 | 0.1107 | 9.8882 | 0.9888 | 0.9910 | 0.9880
8 | HDGO | a = 33.0367, b = 5.78 × 10^-5, c = 8.6 × 10^-8 | 1.7883 | 1.3373 | 51.3850 | 1.2904 | 1.3110 | 0.6357 | 4.6124 | 0.2444 | 12.9353 | 1.2935 | 0.9839 | 0.9785
9 | PNZ | a = 22.6389, b = 1.06 × 10^-4, α = 0.9 × 10^-5, β = 0.00096 | 1.1090 | 1.0531 | 50.4830 | 0.9279 | 0.9327 | 0.1993 | 3.4458 | 0.1111 | 9.9067 | 1.1007 | 0.9910 | 0.9865
10 | PZ | a = 29.0937, b = 0.00063, α = 4.98 × 10^-5, β = 0.0006, c = 4.7088 | 1.3039 | 1.1419 | 53.3626 | 0.9950 | 1.0134 | 0.1180 | 3.5226 | 0.0801 | 10.2318 | 1.2790 | 0.9906 | 0.9839
11 | ZFR | a = 33.9535, b = 0.01158, α = 0.01502, β = 0.01344, c = 5.6 × 10^-5, p = 1.0412 | 2.5591 | 1.5997 | 57.3622 | 1.3106 | 1.3368 | 0.6517 | 4.6163 | 0.2481 | 12.9732 | 1.8533 | 0.9838 | 0.9677
12 | TP | a = 228.3963, b = 0.000117, α = 0.00517, β = 0.03607, c = 0.1539, p = 0.1941, q = 0.0837 | 1.3344 | 1.1551 | 56.0182 | 0.8245 | 0.8269 | 0.0332 | 3.0861 | 0.0292 | 8.2787 | 1.3798 | 0.9928 | 0.9827
13 | IFD | a = 3.283, b = 0.000174, d = 2 × 10^-15 | 38.8383 | 6.2320 | 65.7427 | 7.2581 | 7.6766 | 23.6805 | 21.4949 | 1.8984 | 64.6863 | 6.4686 | 0.6497 | 0.5330
14 | PDP | α = 278.0266, γ = 5.8 × 10^-6, t0 = 4.9046, m0 = 15.3182 | 34.4540 | 5.8698 | 134.4592 | 5.1188 | 5.1297 | 1.0557 | 19.2064 | 8.5495 | 49.5436 | 5.5048 | 0.7203 | 0.5805
15 | KSRGM | A = 31.8755, b = 0.012, α = 0.0179, p = 0.005 | 3.2468 | 1.8019 | 55.0954 | 2.2388 | 2.4094 | 2.5919 | 5.8960 | 0.4886 | 16.0865 | 1.7874 | 0.9736 | 0.9605
16 | RMD | a = 168.8421, b = 5.8 × 10^-5, α = 0.1956, β = 0.1229 | 2.0411 | 1.4287 | 53.5239 | 1.3097 | 1.3312 | 0.7194 | 4.6747 | 0.2609 | 13.0707 | 1.4523 | 0.9834 | 0.9751
17 | CT | a = 0.0068, b = 0.8298, α = 0.7555, β = 58.3462, N = 53.3901 | 0.9229 | 0.9607 | 51.9769 | 0.7878 | 0.7889 | 0.0191 | 2.9636 | 0.0184 | 7.8468 | 0.9809 | 0.9933 | 0.9886
18 | Vtub | a = 1.0006, b = 0.5024, α = 9.3307, β = 2.9469, N = 85.1389 | 2.0950 | 1.4474 | 52.9672 | 1.2106 | 1.2193 | 0.1150 | 4.4652 | 0.1989 | 11.6921 | 1.4615 | 0.9849 | 0.9741
19 | 3PFD | a = 0.0059, b = 0.000001, β = 2.870, N = 41.540, c = 34.600 | 1.2405 | 1.1138 | 52.9005 | 0.9290 | 0.9349 | 0.2134 | 3.4360 | 0.1165 | 9.8563 | 1.2320 | 0.9910 | 0.9847
20 | New Model | b = 0.000108, α = 0.6413, β = 2.5635, N = 42.4984, d = 5.4 × 10^-5 | 0.8790 | 0.9376 | 52.0859 | 0.7662 | 0.7664 | 0.0149 | 2.8923 | 0.0149 | 7.6843 | 0.9605 | 0.9937 | 0.9891
Table 8. Results for criteria and NCD value for comparison with dataset #2.

No. | Model | MSE | RMSE | AIC | Variance | RMSPE | PRR | TS | PP | SAE | MAE | R2 | Adj R2 | NCD Value | Rank
1 | GO | 1.6266 | 1.2754 | 49.4070 | 1.2770 | 1.2937 | 0.6251 | 4.6136 | 0.2420 | 12.9085 | 1.1735 | 0.9839 | 0.9806 | 0.0980582 | 9
2 | DS | 7.3713 | 2.7150 | 72.2378 | 2.8666 | 2.9440 | 65.1831 | 9.8214 | 1.1664 | 25.6479 | 2.3316 | 0.9269 | 0.9122 | 0.3193186 | 17
3 | IS | 1.7892 | 1.3376 | 51.4070 | 1.2770 | 1.2937 | 0.6251 | 4.6136 | 0.2420 | 12.9084 | 1.2908 | 0.9839 | 0.9785 | 0.1007023 | 10
4 | YE | 1.9812 | 1.4076 | 53.3655 | 1.2984 | 1.3219 | 0.6363 | 4.6057 | 0.2445 | 12.9425 | 1.4381 | 0.9839 | 0.9759 | 0.1043514 | 13
5 | YR | 12.8131 | 3.5795 | 91.6713 | 3.4310 | 3.5266 | 199.5366 | 11.7126 | 1.4270 | 31.6731 | 3.5192 | 0.8960 | 0.8440 | 0.7414780 | 18
6 | YID 1 | 1.0219 | 1.0109 | 48.4929 | 0.9407 | 0.9461 | 0.2239 | 3.4867 | 0.1203 | 10.0450 | 1.0045 | 0.9908 | 0.9877 | 0.0783218 | 4
7 | YID 2 | 0.9979 | 0.9990 | 48.4804 | 0.9221 | 0.9252 | 0.1979 | 3.4455 | 0.1107 | 9.8882 | 0.9888 | 0.9910 | 0.9880 | 0.0773360 | 3
8 | HDGO | 1.7883 | 1.3373 | 51.3850 | 1.2904 | 1.3110 | 0.6357 | 4.6124 | 0.2444 | 12.9353 | 1.2935 | 0.9839 | 0.9785 | 0.1010407 | 11
9 | PNZ | 1.1090 | 1.0531 | 50.4830 | 0.9279 | 0.9327 | 0.1993 | 3.4458 | 0.1111 | 9.9067 | 1.1007 | 0.9910 | 0.9865 | 0.0800413 | 5
10 | PZ | 1.3039 | 1.1419 | 53.3626 | 0.9950 | 1.0134 | 0.1180 | 3.5226 | 0.0801 | 10.2318 | 1.2790 | 0.9906 | 0.9839 | 0.0858796 | 8
11 | ZFR | 2.5591 | 1.5997 | 57.3622 | 1.3106 | 1.3368 | 0.6517 | 4.6163 | 0.2481 | 12.9732 | 1.8533 | 0.9838 | 0.9677 | 0.1139585 | 15
12 | TP | 1.3344 | 1.1551 | 56.0182 | 0.8245 | 0.8269 | 0.0332 | 3.0861 | 0.0292 | 8.2787 | 1.3798 | 0.9928 | 0.9827 | 0.0824943 | 6
13 | IFD | 38.8383 | 6.2320 | 65.7427 | 7.2581 | 7.6766 | 23.6805 | 21.4949 | 1.8984 | 64.6863 | 6.4686 | 0.6497 | 0.5330 | 0.7418298 | 19
14 | PDP | 34.4540 | 5.8698 | 134.4592 | 5.1188 | 5.1297 | 1.0557 | 19.2064 | 8.5495 | 49.5436 | 5.5048 | 0.7203 | 0.5805 | 0.8201567 | 20
15 | KSRGM | 3.2468 | 1.8019 | 55.0954 | 2.2388 | 2.4094 | 2.5919 | 5.8960 | 0.4886 | 16.0865 | 1.7874 | 0.9736 | 0.9605 | 0.1470449 | 16
16 | RMD | 2.0411 | 1.4287 | 53.5239 | 1.3097 | 1.3312 | 0.7194 | 4.6747 | 0.2609 | 13.0707 | 1.4523 | 0.9834 | 0.9751 | 0.1056358 | 14
17 | CT | 0.9229 | 0.9607 | 51.9769 | 0.7878 | 0.7889 | 0.0191 | 2.9636 | 0.0184 | 7.8468 | 0.9809 | 0.9933 | 0.9886 | 0.0724187 | 2
18 | Vtub | 2.0950 | 1.4474 | 52.9672 | 1.2106 | 1.2193 | 0.1150 | 4.4652 | 0.1989 | 11.6921 | 1.4615 | 0.9849 | 0.9741 | 0.1013477 | 12
19 | 3PFD | 1.2405 | 1.1138 | 52.9005 | 0.9290 | 0.9349 | 0.2134 | 3.4360 | 0.1165 | 9.8563 | 1.2320 | 0.9910 | 0.9847 | 0.0831768 | 7
20 | New Model | 0.8790 | 0.9376 | 52.0859 | 0.7662 | 0.7664 | 0.0149 | 2.8923 | 0.0149 | 7.6843 | 0.9605 | 0.9937 | 0.9891 | 0.0713073 | 1
Table 9. Results of the 95% and 99% confidence intervals for the proposed model for dataset #2.

Time Index (Cumulative System Days) | Data Value | 95% LCL | 95% UCL | 99% LCL | 99% UCL
1249 | 4 | 0.1500 | 8.1220 | 0 | 9.3745
4721 | 10 | 3.5727 | 15.7600 | 1.6579 | 17.6748
8786 | 14 | 6.7572 | 21.4884 | 4.4427 | 23.8028
13,669 | 17 | 9.7821 | 26.4714 | 7.1600 | 29.0935
19,094 | 23 | 12.4116 | 30.5873 | 9.5560 | 33.4429
24,750 | 24 | 14.5640 | 33.8504 | 11.5339 | 36.8805
32,299 | 26 | 16.7775 | 37.1283 | 13.5802 | 40.3256
40,594 | 30 | 18.6099 | 39.7925 | 15.2819 | 43.1205
49,476 | 31 | 20.1012 | 41.9324 | 16.6713 | 45.3623
55,596 | 31 | 20.9271 | 43.1076 | 17.4423 | 46.5924
58,061 | 32 | 21.2232 | 43.5273 | 17.7190 | 47.0315
58,588 | 33 | 21.2841 | 43.6135 | 17.7759 | 47.1217
58,633 | 33 | 21.2892 | 43.6208 | 17.7807 | 47.1293
