
Software Reliability Model with Dependent Failures and SPRT

Da Hye Lee 1, In Hong Chang 1 and Hoang Pham 2

1 Department of Computer Science and Statistics, Chosun University, 309 Pilmun-daero, Dong-gu, Gwangju 61452, Korea
2 Department of Industrial and Systems Engineering, Rutgers University, 96 Frelinghuysen Road, Piscataway, NJ 08855-8018, USA
* Authors to whom correspondence should be addressed.
Submission received: 13 July 2020 / Revised: 30 July 2020 / Accepted: 13 August 2020 / Published: 14 August 2020

Abstract

Software reliability and quality are crucial in several fields. Related studies have focused on software reliability growth models (SRGMs). Herein, we propose a new SRGM that assumes interdependent software failures. We conduct experiments on real-world datasets to compare the goodness-of-fit of the proposed model with the results of previous nonhomogeneous Poisson process SRGMs using several evaluation criteria. In addition, we determine software reliability using Wald’s sequential probability ratio test (SPRT), which is more efficient than the classical hypothesis test (the latter requires substantially more data and time because the test is performed only after data collection is completed). The experimental results demonstrate the superiority of the proposed model and the effectiveness of the SPRT.

1. Introduction

Software performance and reliability are essential in various fields, such as the Internet of Things. If software reliability is not guaranteed, economic and human losses may occur. Hence, various studies have been conducted to improve software reliability and prevent loss.
Software reliability growth models (SRGMs) are tools that estimate the quality and reliability of software products. SRGMs not only provide information regarding software reliability for developers and consumers, but also help establish an optimal release policy.
Most SRGMs assume that the mean value function, m(t), is a nonhomogeneous Poisson process (NHPP). The m(t) of each model is a single function that reflects the failure intensity, failure detection, number of remaining failures, and the environment. In particular, assumptions regarding the test and development environments are essential; furthermore, the parameters and the form of m(t) depend on the assumptions regarding the environment.
Previous studies have considered the types of fault detection functions and the shape of m(t). The Goel–Okumoto (GO) model [1] is a basic stochastic NHPP SRGM that has inspired several researchers. Ohba et al. [2] proposed an inflection S-shaped curve model. Yamada [3] developed a similar delayed model. Pham et al. [4] reported a nondecreasing inflection S-shaped model with an error introduction rate as an exponential function; the model is known as the Pham–Zhang (PZ) model. Moreover, Pham et al. [5] proposed a generalized NHPP software model that reflected the testing coverage. Subsequently, researchers became interested in environmental factors; in particular, many studies have focused on uncertain operating or testing environments [6,7,8,9,10,11,12,13]. Recent research has also examined SRGMs that consider time-dependent fault detection [14,15,16,17,18,19], random operating environments [20,21], and multi-release software [22,23,24].
Recent studies have combined deep learning and machine learning with software reliability [25,26,27,28]. Wang et al. discussed an optimal release policy and the selection of the best software reliability model [28]. Lee et al. proposed an SRGM that considers the actual test time instead of the designed test time [29]. Minamino et al. [30] discussed software reliability and release policies that considered the change-points of data based on the theory of multiproperty utilities. Rani et al. [31] developed a hazard rate model that introduced an imperfect debugging parameter and a single change-point parameter.
Some studies propose statistical techniques to predict software reliability. Typically, the parameters of reliability models are estimated using least-squares estimation (LSE). Maximum likelihood estimation (MLE) and Bayesian methods have also been used [32,33]. However, software reliability models have complex structures and numerous parameters, which makes the MLE and Bayesian methods difficult to apply; hence, we used the LSE to estimate the parameters in this study.
We applied the sequential probability ratio test (SPRT), a statistical testing technique, to determine software reliability efficiently. Wald designed the SPRT for the development of military and naval equipment [34]. The main advantage of a sequential test is that it requires a shorter testing time than the classical test, which requires a fixed sample size. In addition, the SPRT can determine software reliability instantly whenever new faults occur; hence, reliability can be assessed at the change-points of the data. This procedure is described in Section 2.
Software reliability studies using Wald’s SPRT procedure have been conducted continually. Stieber was the first to apply the SPRT to an NHPP SRGM [35]. The method has since been applied to various SRGMs [36,37,38,39,40]. Furthermore, the authors of [38] performed the SPRT using order statistics.
Herein, we propose a new SRGM that assumes interdependent software failures. In Section 2, we discuss the efficiency of the SPRT. The proposed model is described in Section 3. In Section 4, we introduce the criteria and data used in the experiments. Subsequently, we discuss the results in Section 5. Finally, Section 6 concludes the paper.

2. Wald’s SPRT

Wald described the efficiency of the SPRT [34]. The probability ratio $p_1/p_0$ is used as the test statistic of the SPRT for testing the null hypothesis against an alternative hypothesis. The test continues (until the next decision) as long as the following condition is satisfied:
$$B < \frac{p_1}{p_0} < A, \quad (1)$$
where A and B are constants that determine the rejection and acceptance of the null hypothesis $H_0$, respectively. If $p_1/p_0 \ge A$, then $H_0$ is rejected. If $p_1/p_0 \le B$, then $H_0$ is accepted.
Moreover, A and B depend on α and β, as shown in Equations (2)–(5). Here, α and β are the type I and type II error probabilities, respectively; that is, α is the producer’s risk, whereas β is the consumer’s risk.
$$1 - \beta \ge A\alpha, \quad (2)$$
$$A \le \frac{1-\beta}{\alpha}, \quad (3)$$
$$\beta \le (1-\alpha)B, \quad (4)$$
$$B \ge \frac{\beta}{1-\alpha}. \quad (5)$$
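To make the decision rule concrete, the following minimal Python sketch (our own illustration, not code from the paper; the function name and interface are hypothetical) applies Wald’s approximate bounds from Equations (3) and (5) to an accumulated log probability ratio:

```python
import math

def sprt_decision(log_ratio, alpha=0.1, beta=0.1):
    """Classify one SPRT observation using Wald's approximate bounds.

    log_ratio: ln(p1/p0) accumulated so far.
    alpha, beta: type I and type II error probabilities.
    """
    A = (1 - beta) / alpha   # rejection bound, Equation (3)
    B = beta / (1 - alpha)   # acceptance bound, Equation (5)
    if log_ratio >= math.log(A):
        return "reject H0"
    if log_ratio <= math.log(B):
        return "accept H0"
    return "continue"
```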
The SPRT can be expressed visually. Figure 1 shows the method to assess software reliability using the SPRT.
In Figure 1, $N_L(t)$ and $N_U(t)$ denote the lower and upper limits of the region in which the software is considered reliable, and $N(t)$ is the cumulative number of failures at time t of the NHPP. If $N(t)$ falls within the reliable region, the software is judged reliable.
$$N_L(t) = at - b_1, \quad N_U(t) = at + b_2, \quad (6)$$
where $a$, $b_1$, and $b_2$ are expressed as:
$$a = \frac{\lambda_1 - \lambda_0}{\ln(\lambda_1/\lambda_0)}, \quad (7)$$
$$b_1 = \frac{\ln\left(\frac{1-\alpha}{\beta}\right)}{\ln(\lambda_1/\lambda_0)}, \quad b_2 = \frac{\ln\left(\frac{1-\beta}{\alpha}\right)}{\ln(\lambda_1/\lambda_0)}. \quad (8)$$
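A direct transcription of Equations (6)–(8) into Python may help fix ideas (a minimal sketch assuming given $\lambda_0$, $\lambda_1$, α, and β; the function name is ours):

```python
import math

def reliable_region(t, lam0, lam1, alpha=0.1, beta=0.1):
    """Return (N_L(t), N_U(t)) of the reliable region, Equations (6)-(8)."""
    log_q = math.log(lam1 / lam0)
    a = (lam1 - lam0) / log_q                  # Equation (7)
    b1 = math.log((1 - alpha) / beta) / log_q  # Equation (8)
    b2 = math.log((1 - beta) / alpha) / log_q
    return a * t - b1, a * t + b2              # Equation (6)
```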
Stieber applied the SPRT to an NHPP SRGM [35] with $\lambda_0 = \frac{\lambda \ln(q)}{q-1}$ and $\lambda_1 = \frac{q\lambda \ln(q)}{q-1}$, where $q = \lambda_1/\lambda_0$. A general NHPP SRGM is expressed as:
$$\Pr\{N(t) = n\} = \frac{[m(t)]^n}{n!}e^{-m(t)}, \quad n = 0, 1, 2, \ldots, \quad (9)$$
where m(t) indicates the expected number of failures detected by time t:
$$m(t) = \int_0^t \lambda(s)\,ds. \quad (10)$$
Stieber defined the probabilities $p_0$ and $p_1$ that form the SPRT statistic of Equation (1) for NHPP SRGMs as:
$$p_0 = \frac{e^{-m_0(t)}[m_0(t)]^{N(t)}}{N(t)!}, \quad p_1 = \frac{e^{-m_1(t)}[m_1(t)]^{N(t)}}{N(t)!}. \quad (11)$$
Using Equations (3), (5), and (11), we can rewrite Equation (1) as follows:
$$\frac{\ln\left(\frac{\beta}{1-\alpha}\right) + m_1(t) - m_0(t)}{\ln m_1(t) - \ln m_0(t)} < N(t) < \frac{\ln\left(\frac{1-\beta}{\alpha}\right) + m_1(t) - m_0(t)}{\ln m_1(t) - \ln m_0(t)}. \quad (12)$$
In Equation (12), the left-hand side plays the role of constant B and the right-hand side that of constant A, with $N(t)$ taking the place of the probability ratio $p_1/p_0$.
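The bounds of Equation (12) translate directly into code; the sketch below (our illustration, with a hypothetical function name) returns the continuation interval for $N(t)$:

```python
import math

def sprt_bounds(m0_t, m1_t, alpha=0.1, beta=0.1):
    """Continuation bounds on N(t) from Equation (12).

    m0_t, m1_t: mean value functions under H0 and H1 evaluated at time t.
    Testing continues while lower < N(t) < upper.
    """
    denom = math.log(m1_t) - math.log(m0_t)
    lower = (math.log(beta / (1 - alpha)) + m1_t - m0_t) / denom
    upper = (math.log((1 - beta) / alpha) + m1_t - m0_t) / denom
    return lower, upper
```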

3. New SRGM

A general SRGM that follows the NHPP, as shown in Equation (9), can be obtained by solving the differential equation in Equation (13), where the function b(t) varies with the model assumptions:
$$\frac{dm(t)}{dt} = b(t)\left[N - m(t)\right], \quad (13)$$
where b(t) is the fault detection rate function and N is the expected total number of faults. This formulation assumes independent failures.
However, the proposed model is different. For example, if a failure is caused by a syntax error that a developer cannot completely fix, subsequent faults are affected. Consequently, failures may depend on one another (Figure 2). Hence, we assume that failures are dependent. The SRGM based on this assumption can be obtained from the differential equation in Equation (14):
$$\frac{dm(t)}{dt} = b(t)\left[a(t) - m(t)\right]m(t), \quad (14)$$
where a(t) and b(t) are the total failure content rate function and the failure detection rate function, respectively. In the proposed model, a(t) and b(t) are expressed as:
$$a(t) = a, \quad b(t) = \frac{b}{1 + ce^{-bt}}. \quad (15)$$
Solving Equation (14) with the initial condition m(0) = h yields the m(t) of the proposed model:
$$m(t) = \frac{a}{1 + \left(\frac{a-h}{h}\right)\left(\frac{b+c}{c + be^{bt}}\right)^{a/b}}. \quad (16)$$
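Equation (16) transcribes directly into Python (a minimal sketch; the function name is ours):

```python
import numpy as np

def m_proposed(t, a, b, c, h):
    """Mean value function of the proposed model, Equation (16), with m(0) = h."""
    return a / (1 + ((a - h) / h) * ((b + c) / (c + b * np.exp(b * t))) ** (a / b))
```

At t = 0 the ratio (b + c)/(c + b e^{bt}) equals 1, so m(0) = a/(1 + (a − h)/h) = h, matching the stated initial condition.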
Table 1 summarizes the m(t) of the existing NHPP SRGMs and the proposed model.

4. Experiments

4.1. Criteria

In this section, we describe the eight criteria used to compare the NHPP SRGMs on two real-world datasets.
(1) The mean squared error (MSE) [42] measures the distance between the estimated values and the actual data, accounting for the number of observations and the number of model parameters. The MSE is defined as follows:
$$\text{MSE} = \frac{\sum_{i=1}^{n}(\hat{m}(t_i) - y_i)^2}{n - N}, \quad (17)$$
where n is the total number of observations, N is the number of model parameters, and $y_i$ is the cumulative number of failures observed at $t_i$ in the dataset.
(2) The predictive ratio risk (PRR) [42] indicates the distance of the model estimates from the actual data with respect to the model estimates. It is calculated as:
$$\text{PRR} = \sum_{i=1}^{n}\left(\frac{\hat{m}(t_i) - y_i}{\hat{m}(t_i)}\right)^2. \quad (18)$$
(3) The predictive power (PP) [43] measures the distance of the actual data from the estimates with respect to the actual data. It is defined as:
$$\text{PP} = \sum_{i=1}^{n}\left(\frac{\hat{m}(t_i) - y_i}{y_i}\right)^2. \quad (19)$$
(4) R-square ($R^2$) [44] is the correlation index of the regression curve equation. It is used to explain the fitting power of the SRGMs and is described as follows:
$$R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{m}(t_i))^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}. \quad (20)$$
(5) Akaike’s information criterion (AIC) [45] is based on the maximized likelihood function and can be regarded as an approximate distance from the true probability model:
$$\text{AIC} = -2\ln L + 2N, \quad (21)$$
where N is the number of model parameters (degrees of freedom). Here, L and ln L are written as follows:
$$L = \prod_{i=1}^{n} \frac{\left(m(t_i) - m(t_{i-1})\right)^{y_i - y_{i-1}}}{(y_i - y_{i-1})!}\, e^{-\left(m(t_i) - m(t_{i-1})\right)}, \quad (22)$$
$$\ln L = \sum_{i=1}^{n}\left\{(y_i - y_{i-1})\ln\left(m(t_i) - m(t_{i-1})\right) - \left(m(t_i) - m(t_{i-1})\right) - \ln\left((y_i - y_{i-1})!\right)\right\}. \quad (23)$$
(6) The sum of absolute errors (SAE) [11] measures the distance between the predicted number of failures and the observed data. It is defined as:
$$\text{SAE} = \sum_{i=1}^{n}\left|\hat{m}(t_i) - y_i\right|. \quad (24)$$
(7) The variation [46] is the standard deviation of the prediction bias. It is expressed as follows:
$$\text{Variation} = \sqrt{\frac{\sum_{i=1}^{n}\left((\hat{m}(t_i) - y_i) - \text{Bias}\right)^2}{n-1}}, \quad (25)$$
where the bias is expressed as:
$$\text{Bias} = \sum_{i=1}^{n}\frac{\hat{m}(t_i) - y_i}{n}. \quad (26)$$
(8) The root-mean-square prediction error (RMSPE) [46] estimates the closeness of the predicted values to the actual observations:
$$\text{RMSPE} = \sqrt{\text{Variation}^2 + \text{Bias}^2}. \quad (27)$$
The closer the value of $R^2$ is to 1, the better the goodness-of-fit to the dataset. For the other criteria, smaller values indicate a better fit.
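These criteria are straightforward to compute. The sketch below (our illustration, assuming NumPy and SciPy are available) implements Equations (17)–(27), evaluating ln(k!) via the log-gamma function for the AIC:

```python
import numpy as np
from scipy.special import gammaln

def fit_criteria(m_hat, y, n_params):
    """Goodness-of-fit criteria of Equations (17)-(20) and (24)-(27)."""
    m_hat, y = np.asarray(m_hat, float), np.asarray(y, float)
    n = len(y)
    resid = m_hat - y
    bias = resid.sum() / n                                      # Equation (26)
    variation = np.sqrt(((resid - bias) ** 2).sum() / (n - 1))  # Equation (25)
    return {
        "MSE": (resid ** 2).sum() / (n - n_params),             # Equation (17)
        "PRR": ((resid / m_hat) ** 2).sum(),                    # Equation (18)
        "PP": ((resid / y) ** 2).sum(),                         # Equation (19)
        "R2": 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum(),  # Equation (20)
        "SAE": np.abs(resid).sum(),                             # Equation (24)
        "Variation": variation,
        "RMSPE": np.hypot(variation, bias),                     # Equation (27)
    }

def aic(m_hat, y, n_params):
    """AIC = -2 lnL + 2N with lnL from Equation (23); assumes t_0 = 0, y_0 = 0
    and strictly positive increments m(t_i) - m(t_{i-1})."""
    dm = np.diff(np.concatenate([[0.0], np.asarray(m_hat, float)]))
    dy = np.diff(np.concatenate([[0.0], np.asarray(y, float)]))
    lnL = np.sum(dy * np.log(dm) - dm - gammaln(dy + 1))
    return -2 * lnL + 2 * n_params
```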

4.2. Datasets

Table 2 shows the two datasets used in the experiments. Dataset 1 was collected weekly from a telecommunication system that manages the radio access of wireless systems [42,47]; it is a test dataset corresponding to two releases of the software. For dataset 1, the cumulative failure counts are {1, 1, …, 26} at $t_i = 1, 2, \ldots, 21$, and the total number of cumulative failures is 26. Dataset 2 is one of three releases of weekly test data from a medical record system corresponding to 188 software tools [42,48]. For dataset 2, the cumulative failure counts are {90, 107, …, 204} at $t_i = 1, 2, \ldots, 17$.

5. Results

5.1. Results of Parameter Estimation and Goodness-of-Fit

Before comparing the fit and applying the SPRT, we estimated the parameters of all models listed in Table 1 using the LSE method. Table 3 summarizes the estimated parameters for both datasets.
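As an illustration of the LSE step (a sketch under the assumption that SciPy’s curve_fit, which minimizes the squared error, is an acceptable stand-in for the authors’ LSE procedure; the starting values are our guesses and may need tuning):

```python
import numpy as np
from scipy.optimize import curve_fit

# Dataset 1: weekly cumulative failures (Table 2)
t = np.arange(1, 22)
y = np.array([1, 1, 2, 3, 5, 5, 5, 8, 9, 11, 13, 15,
              19, 19, 22, 22, 23, 24, 24, 24, 26])

def m_proposed(t, a, b, c, h):
    # Equation (16)
    return a / (1 + ((a - h) / h) * ((b + c) / (c + b * np.exp(b * t))) ** (a / b))

params, _ = curve_fit(m_proposed, t, y, p0=[30.0, 0.05, 3.0, 1.0], maxfev=20000)
print(dict(zip("abch", params)))  # compare with the proposed-model row of Table 3
```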
Table 4 and Table 5 present the results of the abovementioned eight criteria for all SRGMs on each dataset. Moreover, Figure 3 and Figure 4 show the fitted mean value functions m(t).
As shown in Table 4, the proposed model achieved the best results for all criteria except the PP and AIC, for which it had the third and fifth smallest values, respectively. Overall, the proposed model outperformed the other models.
As shown in Table 5, the proposed model achieved the best results for most criteria (all except the AIC, variation, and RMSPE); its variation and RMSPE were the second smallest values. In general, the proposed model fitted dataset 2 better than the other models did. However, because m(t) converged for $t_i = \{14, 15, 16, 17\}$, so that the increments $m(t_i) - m(t_{i-1})$ were essentially zero, the AIC was calculated as not-a-number (NaN). If the AIC of the proposed model were calculated only for the times before convergence, $t_i = \{1, 2, \ldots, 13\}$, its value would be 83.51093, the smallest among all models.
Table 6 and Table 7 and Figure 5 and Figure 6 show the values predicted by the proposed model at each $t_i$, i.e., the expected cumulative numbers of failures. In Table 6, the predicted values at the future points t = {22, 23, 24, 25} are {25.1102, 25.19946, 25.2552, 25.28936}. In Table 7, the predicted values at the future points t = {18, 19, 20} are {194.766, 194.766, 194.766}.
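For instance, such predictions follow from evaluating the fitted mean value function beyond the last test week (reusing m_proposed from the sketch above with the rounded parameter estimates of Table 3, so the outputs will differ slightly from Table 6):

```python
import numpy as np

# Extrapolation beyond the last test week of dataset 1
t_future = np.array([22, 23, 24, 25])
print(m_proposed(t_future, a=25.338, b=0.032, c=3.260, h=1.115))
# Table 6 reports {25.1102, 25.19946, 25.2552, 25.28936} for these weeks
```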

5.2. Confidence Interval

The two-sided confidence limits [42] of NHPP SRGMs are defined as:
$$\hat{m}(t) \pm z_{\alpha/2}\sqrt{\hat{m}(t)}, \quad (28)$$
where $z_{\alpha/2}$ is the $100(1-\alpha/2)$th percentile of the standard normal distribution.
Table 8 and Figures 7 and 8 show the lower confidence limit (LC) and the upper confidence limit (UC) of the proposed model for datasets 1 and 2 at each $t_i$.
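A sketch of Equation (28) follows (our illustration; the lower limit is floored at zero, consistent with the early entries of Table 8):

```python
import numpy as np
from scipy.stats import norm

def confidence_interval(m_hat, alpha=0.05):
    """Two-sided limits m_hat +/- z_{alpha/2} * sqrt(m_hat), Equation (28)."""
    z = norm.ppf(1 - alpha / 2)
    m_hat = np.asarray(m_hat, float)
    half = z * np.sqrt(m_hat)
    return np.clip(m_hat - half, 0.0, None), m_hat + half  # (LC, UC)
```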

5.3. Results of the SPRT for Datasets

We determined the software reliability by applying the SPRT with the parameters of the proposed model. In particular, a sensitivity analysis showed that a and b were more sensitive than the other parameters of the proposed model; therefore, we focused on a and b. The related assumptions are listed in Table 9.
Subsequently, we set $a_0 = a - \delta$, $a_1 = a + \delta$ for Case 1 and $b_0 = b - \delta$, $b_1 = b + \delta$ for Case 2. Substituting $a_0$ and $a_1$ for a in the m(t) of the proposed model yields $m_0(t)$ and $m_1(t)$, respectively; likewise, $m_0(t)$ and $m_1(t)$ are obtained by substituting $b_0$ and $b_1$ for b. Substituting the resulting $m_0(t)$ and $m_1(t)$ into Equation (12) gives the bounds used in the SPRT procedure.
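Putting the pieces together, the following sketch (our illustration; it reuses m_proposed from the earlier sketches, with the Case 1 settings of Table 9 and the rounded Table 3 estimates, so the bounds will differ slightly from Table 10) runs one SPRT judgment per observation week:

```python
import math

def sprt_step(N_t, m0_t, m1_t, alpha=0.1, beta=0.1):
    """One SPRT judgment at time t via the bounds of Equation (12)."""
    denom = math.log(m1_t) - math.log(m0_t)
    lower = (math.log(beta / (1 - alpha)) + m1_t - m0_t) / denom
    upper = (math.log((1 - beta) / alpha) + m1_t - m0_t) / denom
    if N_t <= lower:
        return "accept"
    if N_t >= upper:
        return "reject"
    return "continue"

# Case 1 for dataset 1: H0 uses a0 = a - delta, H1 uses a1 = a + delta
a, b, c, h, delta = 25.338, 0.032, 3.260, 1.115, 0.9
for t, N_t in enumerate([1, 1, 2, 3, 5, 5, 5, 8, 9, 11, 13], start=1):
    m0 = m_proposed(t, a - delta, b, c, h)
    m1 = m_proposed(t, a + delta, b, c, h)
    print(t, N_t, sprt_step(N_t, m0, m1))
```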
Table 10 shows the SPRT results for parameter a for both datasets. For both datasets, N(t) remains within the continuation region at every observation time; hence, the software reliability cannot yet be determined and testing should continue.
Table 11 shows the SPRT results for parameter b for both datasets. For dataset 1, the null hypothesis was accepted at t = 7. For dataset 2, the software reliability cannot yet be determined, so testing should continue. Moreover, because m(t) converged for $t_i = \{14, 15, 16, 17\}$, the computed bounds diverged and the result was “Inf”.

6. Conclusions and Remarks

Herein, we proposed a new SRGM. General SRGMs assume independent failures; in the proposed model, by contrast, software failures depend on one another. We presented experiments on real-world datasets using eight evaluation criteria, in which the proposed model achieved the best overall goodness-of-fit on both datasets. In addition, we evaluated the software reliability using the SPRT, which is more efficient than the classical hypothesis test. As shown in Table 10, the test based on parameter a should be continued for both datasets because no decision was reached. As shown in Table 11, the test based on parameter b was accepted at t = 7 for dataset 1, whereas it should be continued for dataset 2 because no decision was reached.
In the future, experiments based on more recent data should be performed to further validate the superiority of the proposed model. In addition, after estimating the parameters of software reliability models using the MLE and Bayesian techniques, we plan to discuss software reliability by applying the SPRT.

Author Contributions

Conceptualization, H.P.; Funding acquisition, I.H.C.; Software, D.H.L.; Writing—original draft, D.H.L.; Writing—review & editing, I.H.C. and H.P. All three authors contributed equally to the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1D1A1B07045734, NRF-2019R1A6A3A13090204).

Acknowledgments

This research was supported by the National Research Foundation of Korea.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Goel, A.L.; Okumoto, K. Time dependent error detection rate model for software reliability and other performance measures. IEEE Trans. Reliab. 1979, 28, 206–211.
2. Ohba, M.; Yamada, S. S-shaped software reliability growth models. In Proceedings of the 4th International Conference on Reliability and Maintainability, Lannion, France, 21–25 May 1984; pp. 430–436.
3. Yamada, S.; Ohba, M.; Osaki, S. S-shaped software reliability growth models and their applications. IEEE Trans. Reliab. 1984, 33, 289–292.
4. Pham, H.; Zhang, X. An NHPP software reliability model and its comparison. Int. J. Reliab. Qual. Saf. Eng. 1997, 4, 269–282.
5. Pham, H.; Zhang, X. NHPP software reliability and cost models with testing coverage. Eur. J. Oper. Res. 2003, 145, 443–454.
6. Teng, X.; Pham, H. A new methodology for predicting software reliability in the random field environments. IEEE Trans. Reliab. 2006, 55, 458–468.
7. Pham, H. Loglog fault-detection rate and testing coverage software reliability models subject to random environments. Vietnam J. Comput. Sci. 2014, 1, 39–45.
8. Inoue, S.; Ikeda, J.; Yamada, S. Bivariate change-point modeling for software reliability assessment with uncertainty of testing-environment factor. Ann. Oper. Res. 2016, 244, 209–220.
9. Chang, I.H.; Pham, H.; Lee, S.W.; Song, K.Y. A testing-coverage software reliability model with the uncertainty of operation environments. Int. J. Syst. Sci. Oper. Logist. 2014, 1, 220–227.
10. Song, K.Y.; Chang, I.H.; Pham, H. A three-parameter fault-detection software reliability model with the uncertainty of operating environments. J. Syst. Sci. Syst. Eng. 2017, 26, 121–132.
11. Song, K.Y.; Chang, I.H.; Pham, H. An NHPP software reliability model with S-shaped growth curve subject to random operating environments and optimal release time. Appl. Sci. 2017, 7, 1304.
12. Song, K.Y.; Chang, I.H.; Pham, H. A software reliability model with a Weibull fault detection rate function subject to operating environments. Appl. Sci. 2017, 7, 983.
13. Song, K.Y.; Chang, I.H.; Pham, H. A testing coverage model based on NHPP software reliability considering the software operating environment and the sensitivity analysis. Mathematics 2019, 7, 450.
14. Pham, H. On estimating the number of deaths related to Covid-19. Mathematics 2020, 8, 655.
15. Chatterjee, S.; Maji, B.; Pham, H. A fuzzy rule-based generation algorithm in interval type-2 fuzzy logic system for fault prediction in the early phase of software development. J. Exp. Theor. Artif. Intell. 2019, 31, 369–391.
16. Pham, T.; Pham, H. A generalized software reliability model with stochastic fault-detection rate. Ann. Oper. Res. 2019, 277, 83–93.
17. Chatterjee, S.; Shukla, A.; Pham, H. Modeling and analysis of software fault detectability and removability with time variant fault exposure ratio, fault removal efficiency, and change point. Proc. Inst. Mech. Eng. O J. Risk Reliab. 2019, 233, 246–256.
18. Pham, H. A logistic fault-dependent detection software reliability model. J. UCS 2018, 24, 1717–1730.
19. Pavlov, N.; Iliev, A.; Rahnev, A.; Kyurkchiev, N. Some Software Reliability Models: Approximation and Modeling Aspects; LAP LAMBERT Academic Publishing: Saarbrucken, Germany, 2018.
20. Li, Q.; Pham, H. A generalized software reliability growth model with consideration of the uncertainty of operating environments. IEEE Access 2019, 7, 84253–84267.
21. Zhu, M.; Pham, H. A software reliability model incorporating martingale process with gamma-distributed environmental factors. Ann. Oper. Res. 2018, 1–22.
22. Sharma, M.; Pham, H.; Singh, V.B. Modeling and analysis of leftover issues and release time planning in multi-release open source software using entropy based measure. Comput. Syst. Sci. Eng. 2019, 34, 33–46.
23. Zhu, M.; Pham, H. A two-phase software reliability modeling involving with software fault dependency and imperfect fault removal. Comput. Lang. Syst. Struct. 2018, 53, 27–42.
24. Singh, V.B.; Sharma, M.; Pham, H. Entropy based software reliability analysis of multi-version open source software. IEEE Trans. Softw. Eng. 2017, 44, 1207–1223.
25. Caiuta, R.; Pozo, A.; Vergilio, S.R. Meta-learning based selection of software reliability models. Autom. Softw. Eng. 2017, 24, 575–602.
26. Tamura, Y.; Yamada, S. Software reliability model selection based on deep learning with application to the optimal release problem. J. Ind. Eng. Manag. Sci. 2019, 2019, 43–58.
27. Tamura, Y.; Matsumoto, M.; Yamada, S. Software reliability model selection based on deep learning. In Proceedings of the 2016 International Conference on Industrial Engineering, Management Science and Application (ICIMSA), Jeju, Korea, 23–26 May 2016; IEEE: New York, NY, USA, 2016; pp. 1–5.
28. Wang, J.; Zhang, C. Software reliability prediction using a deep learning model based on the RNN encoder-decoder. Reliab. Eng. Syst. Saf. 2018, 170, 73–82.
29. Lee, D.H.; Chang, I.H.; Pham, H.; Song, K.Y. A software reliability model considering the syntax error in uncertainty environment, optimal release time, and sensitivity analysis. Appl. Sci. 2018, 8, 1483.
30. Minamino, Y.; Inoue, S.; Yamada, S. Change-point-based software reliability modeling and its application for software development management. In Recent Advancements in Software Reliability Assurance; CRC Press: Boca Raton, FL, USA, 2019; pp. 59–92.
31. Rani, P.; Mahapatra, G.S. A single change point hazard rate software reliability model with imperfect debugging. In Proceedings of the 2019 IEEE International Systems Conference (SysCon), Orlando, FL, USA, 8–11 April 2019; IEEE: New York, NY, USA, 2019; pp. 1–7.
32. Zeephongsekul, P.; Jayasinghe, C.L.; Fiondella, L.; Nagaraju, V. Maximum-likelihood estimation of parameters of NHPP software reliability models using expectation conditional maximization algorithm. IEEE Trans. Reliab. 2016, 65, 1571–1583.
33. Cadini, F.; Gioletta, A. A Bayesian Monte Carlo-based algorithm for the estimation of small failure probabilities of systems affected by uncertainties. Reliab. Eng. Syst. Saf. 2016, 153, 15–27.
34. Wald, A. Sequential Analysis; John Wiley and Sons: New York, NY, USA, 1947.
35. Stieber, H.A. Statistical quality control: How to detect unreliable software components. In Proceedings of the Eighth International Symposium on Software Reliability Engineering, Albuquerque, NM, USA, 2–5 November 1997; IEEE: New York, NY, USA, 1997; pp. 8–12.
36. Prasad, R.S.K.; Rao, K.P.G.; Mohan, G.K. Software reliability using SPRT: Inflection S-shaped model. Int. J. Appl. Innov. Eng. Manag. 2013, 2, 349–355.
37. Gutta, S.; Ravi, S.P. Detection of reliable Pareto software using SPRT. Int. J. Comput. Sci. Issues 2014, 11, 130.
38. Kotha, S.K.; Prasad, R.S. Pareto type II software reliability growth model—An order statistics approach. Int. J. Comput. Sci. Trends Technol. 2014, 2, 49–54.
39. Smitha, C.H.; Prasad, R.S.; Kumar, R.K. Burr type III process model with SPRT for software reliability. Int. J. Innov. Res. Adv. Eng. 2014, 6, 2349–2763.
40. Chowdary, C.S.; Prasad, D.R.S.; Sobhana, K. Burr type III software reliability growth model. IOSR J. Comput. Eng. 2015, 17, 49–54.
41. Pham, H.; Nordmann, L.; Zhang, X. A general imperfect software debugging model with S-shaped fault detection rate. IEEE Trans. Reliab. 1999, 48, 169–175.
42. Pham, H. System Software Reliability; Springer: London, UK, 2006.
43. Pham, H. A new software reliability model with Vtub-shaped fault-detection rate and the uncertainty of operating environments. Optimization 2014, 63, 1481–1490.
44. Li, Q.; Pham, H. NHPP software reliability model considering the uncertainty of operating environments with imperfect debugging and testing coverage. Appl. Math. Model. 2017, 51, 68–85.
45. Akaike, H. A new look at statistical model identification. IEEE Trans. Autom. Control 1974, 19, 716–719.
46. Pillai, K.; Nair, V.S. A model for software development effort and cost estimation. IEEE Trans. Softw. Eng. 1997, 23, 485–497.
47. Zhang, X.; Jeske, D.R.; Pham, H. Calibrating software reliability models when the test environment does not match the user environment. Appl. Stoch. Models Bus. Ind. 2002, 18, 87–99.
48. Stringfellow, C.; Andrews, A.A. An empirical method for selecting software reliability growth models. Empir. Softw. Eng. 2002, 7, 319–343.
Figure 1. Reliable region in the SPRT.
Figure 2. Structure of the proposed model.
Figure 3. Mean value functions, m(t), of all models for dataset 1.
Figure 4. Mean value functions, m(t), of all models for dataset 2.
Figure 5. Predicted values by the proposed model for dataset 1.
Figure 6. Predicted values by the proposed model for dataset 2.
Figure 7. Confidence interval of the proposed model for dataset 1.
Figure 8. Confidence interval of the proposed model for dataset 2.
Table 1. Mean value functions of software reliability growth models (SRGMs).

No. | Model | m(t)
1 | Goel–Okumoto (GO) [1] | $m(t) = a(1 - e^{-bt})$
2 | Delayed S-shaped (DS) [3] | $m(t) = a(1 - (1 + bt)e^{-bt})$
3 | Inflection S-shaped (IS) [2] | $m(t) = \frac{a(1 - e^{-bt})}{1 + \beta e^{-bt}}$
4 | Yamada Imperfect Debugging (YID) [3] | $m(t) = a(1 - e^{-bt})\left(1 - \frac{\alpha}{b}\right) + \alpha a t$
5 | Pham–Nordmann–Zhang (PNZ) [41] | $m(t) = \frac{a(1 - e^{-bt})\left(1 - \frac{\alpha}{b}\right) + \alpha a t}{1 + \beta e^{-bt}}$
6 | Pham–Zhang (PZ) [4] | $m(t) = \frac{(c + a)(1 - e^{-bt}) - \frac{ab}{b - \alpha}(e^{-\alpha t} - e^{-bt})}{1 + \beta e^{-bt}}$
7 | Testing Coverage (TC) [9] | $m(t) = N\left[1 - \left(\frac{\beta}{\beta + (at)^b}\right)^{\alpha}\right]$
8 | New model | $m(t) = \frac{a}{1 + \left(\frac{a-h}{h}\right)\left(\frac{b+c}{c + be^{bt}}\right)^{a/b}}$
Table 2. Datasets.

Dataset 1 (Time | Failures | Cum. Failures):
1 | 1 | 1
2 | 0 | 1
3 | 1 | 2
4 | 1 | 3
5 | 2 | 5
6 | 0 | 5
7 | 0 | 5
8 | 3 | 8
9 | 1 | 9
10 | 2 | 11
11 | 2 | 13
12 | 2 | 15
13 | 4 | 19
14 | 0 | 19
15 | 3 | 22
16 | 0 | 22
17 | 1 | 23
18 | 1 | 24
19 | 0 | 24
20 | 0 | 24
21 | 2 | 26

Dataset 2 (Time | Failures | Cum. Failures):
1 | 90 | 90
2 | 17 | 107
3 | 19 | 126
4 | 19 | 145
5 | 26 | 171
6 | 17 | 188
7 | 1 | 189
8 | 1 | 190
9 | 0 | 190
10 | 0 | 190
11 | 2 | 192
12 | 0 | 192
13 | 0 | 192
14 | 0 | 192
15 | 11 | 203
16 | 0 | 203
17 | 1 | 204
Table 3. Estimated parameters for both datasets.

Model | Dataset 1 | Dataset 2
GO | â = 254045, b̂ = 0.000005 | â = 197.387, b̂ = 0.399
DS | â = 39.8212, b̂ = 0.11041 | â = 192.528, b̂ = 0.882
IS | â = 26.693, b̂ = 0.2919, β̂ = 21.71 | â = 197.354, b̂ = 0.399, β̂ = 0.000001
YID | â = 0.008, b̂ = 0.462, α̂ = 185.571 | â = 182.934, b̂ = 0.464, α̂ = 0.0071
PNZ | â = 26.686, b̂ = 0.292, α̂ = 0.00001, β̂ = 21.726 | â = 183.125, b̂ = 0.463, α̂ = 0.007, β̂ = 0.0001
PZ | â = 0.0001, b̂ = 0.298, α̂ = 2000, β̂ = 22.987, ĉ = 26.536 | â = 195.99, b̂ = 0.3987, α̂ = 1000, β̂ = 0, ĉ = 1.39
TC | â = 0.149, b̂ = 2.234, α̂ = 1961.82, β̂ = 8176.811, N̂ = 26.838 | â = 0.053, b̂ = 0.774, α̂ = 181, β̂ = 38.6, N̂ = 204.14
New | â = 25.338, b̂ = 0.032, ĉ = 3.260, ĥ = 1.115 | â = 194.766, b̂ = 0.304, ĉ = 304.566, ĥ = 135.464
Table 4. Comparison of all criteria for dataset 1.

Model | MSE | PRR | PP | R² | SAE | AIC | Variation | RMSPE
GO | 3.8516 | 1.3319 | 4.8508 | 0.9546 | 33.9339 | 65.3611 | 1.8323 | 1.9091
DS | 1.4938 | 12.0680 | 0.9676 | 0.9824 | 19.9967 | 63.9399 | 1.1908 | 1.1913
IS | 0.6744 | 2.8509 | 0.6561 | 0.9925 | 12.9465 | 64.1779 | 0.7727 | 0.7788
YID | 2.3842 | 5.7663 | 0.8579 | 0.9734 | 23.4627 | 67.1715 | 1.4649 | 1.4649
PNZ | 0.7141 | 2.8584 | 0.9698 | 0.9925 | 12.9502 | 66.1785 | 0.7728 | 0.7788
PZ | 0.7621 | 3.2272 | 0.7029 | 0.9924 | 13.1848 | 68.276 | 0.7722 | 0.78
TC | 1.0939 | 102.1599 | 1.7468 | 0.9891 | 16.0532 | 70.5594 | 0.9198 | 0.9348
New | 0.5503 | 0.4518 | 0.8020 | 0.9942 | 11.7685 | 66.8464 | 0.6839 | 0.6839
Table 5. Comparison of all criteria for dataset 2.

Model | MSE | PRR | PP | R² | SAE | AIC | Variation | RMSPE
GO | 80.6779 | 0.1705 | 0.1013 | 0.9388 | 104.4025 | 184.3314 | 0.6839 | 0.6839
DS | 232.6282 | 1.2915 | 0.3330 | 0.8234 | 142.5442 | 331.8567 | 8.6734 | 8.6955
IS | 86.4395 | 0.1706 | 0.1013 | 0.9388 | 104.3703 | 186.3337 | 14.6423 | 14.7605
YID | 78.8367 | 0.1276 | 0.0866 | 0.9442 | 100.6173 | 157.8252 | 8.6711 | 8.6953
PNZ | 84.9077 | 0.1281 | 0.0867 | 0.9442 | 100.6045 | 159.8744 | 8.2915 | 8.3047
PZ | 100.9894 | 0.1719 | 0.1017 | 0.9387 | 104.3539 | 190.3321 | 8.2915 | 8.3049
TC | 72.2812 | 0.0521 | 0.0479 | 0.9561 | 103.1593 | 158.9319 | 8.6767 | 8.7014
New | 26.8104 | 0.0096 | 0.0092 | 0.9824 | 63.9541 | NaN | 4.6673 | 4.6673
Table 6. Predicted values by the proposed model for dataset 1.

Time | m̂(t) | Time | m̂(t) | Time | m̂(t) | Time | m̂(t)
1 | 1.35543 | 8 | 7.423002 | 15 | 21.09665 | 22 | 25.1102
2 | 1.727669 | 9 | 9.21916 | 16 | 22.33409 | 23 | 25.19946
3 | 2.209491 | 10 | 11.24305 | 17 | 23.27083 | 24 | 25.2552
4 | 2.831189 | 11 | 13.411 | 18 | 23.95131 | 25 | 25.28936
5 | 3.628004 | 12 | 15.60229 | 19 | 24.42872 | |
6 | 4.637697 | 13 | 17.68339 | 20 | 24.75394 | |
7 | 5.895141 | 14 | 19.53903 | 21 | 24.96993 | |
Table 7. Predicted values by the proposed model for dataset 2.

Time | m̂(t) | Time | m̂(t) | Time | m̂(t) | Time | m̂(t)
1 | 90.76292 | 6 | 185.0622 | 11 | 194.766 | 16 | 194.766
2 | 105.7008 | 7 | 192.2739 | 12 | 194.766 | 17 | 194.766
3 | 125.2 | 8 | 194.3851 | 13 | 194.766 | 18 | 194.766
4 | 147.9813 | 9 | 194.7363 | 14 | 194.766 | 19 | 194.766
5 | 169.7534 | 10 | 194.765 | 15 | 194.766 | 20 | 194.766
Table 8. Confidence intervals for datasets 1 and 2.

Dataset 1 (Time | LC | UC):
1 | 0.0 | 3.637278
2 | 0.0 | 4.303861
3 | 0.0 | 5.122851
4 | 0.0 | 6.129052
5 | 0.0 | 7.361210
6 | 0.416853 | 8.858541
7 | 1.136366 | 10.65392
8 | 2.083043 | 12.76296
9 | 3.267999 | 15.17000
10 | 4.671164 | 17.81494
11 | 6.233412 | 20.58859
12 | 7.860481 | 23.34409
13 | 9.441424 | 25.92536
14 | 10.87541 | 28.20265
15 | 12.09433 | 30.09898
16 | 13.0715 | 31.59667
17 | 13.81599 | 32.72567
18 | 14.35923 | 33.54339
19 | 14.74152 | 34.11592
20 | 15.00246 | 34.50541
21 | 15.176 | 34.76385

Dataset 2 (Time | LC | UC):
1 | 72.09043 | 109.4354
2 | 85.55025 | 125.8514
3 | 103.2694 | 147.1306
4 | 124.1388 | 171.8238
5 | 144.2171 | 195.2896
6 | 158.3993 | 211.7251
7 | 165.0965 | 219.4513
8 | 167.0589 | 221.7113
9 | 167.3854 | 222.0871
10 | 167.4121 | 222.1180
11 | 167.4130 | 222.1190
12 | 167.4130 | 222.1190
13 | 167.4130 | 222.1190
14 | 167.4130 | 222.1190
15 | 167.4130 | 222.1190
16 | 167.4130 | 222.1190
17 | 167.4130 | 222.1190
Table 9. Cases for applying the sequential probability ratio test (SPRT).

Case | δ | α (Type I Error) | β (Type II Error)
Case 1 (for parameter a) | 0.9 | 0.1 | 0.1
Case 2 (for parameter b) | 0.03 | 0.1 | 0.1
Table 10. SPRT results for both datasets (Case 1, parameter a).

Dataset 1 (T | N(t) | Acceptance Region | Rejection Region | Result):
1 | 1 | −105.151 | 107.8621 | continue
2 | 1 | −55.3275 | 58.78291 | continue
3 | 2 | −36.7052 | 41.12448 | continue
4 | 3 | −26.7731 | 32.43642 | continue
5 | 5 | −20.4124 | 27.67021 | continue
6 | 5 | −15.8077 | 25.08616 | continue
7 | 5 | −12.1513 | 23.94599 | continue
8 | 8 | −9.04072 | 23.89200 | continue
9 | 9 | −6.28005 | 24.72266 | continue
10 | 11 | −3.80494 | 26.29212 | continue
11 | 13 | −1.64563 | 28.46133 | continue
12 | 15 | 0.107043 | 31.08023 | continue
13 | 19 | 1.344969 | 33.99174 | continue
14 | 19 | 1.991195 | 37.04500 | continue
15 | 22 | 2.038062 | 40.10500 | continue
16 | 22 | 1.560907 | 43.05311 | continue
17 | 23 | 0.703089 | 45.78459 | continue
18 | 24 | −0.36003 | 48.21173 | continue
19 | 24 | −1.46272 | 50.27382 | continue
20 | 24 | −2.4802 | 51.94671 | continue
21 | 26 | −3.34086 | 53.24403 | continue

Dataset 2 (T | N(t) | Acceptance Region | Rejection Region | Result):
1 | 90 | −314.127 | 495.6517 | continue
2 | 107 | −196.369 | 407.7699 | continue
3 | 126 | −116.805 | 367.2043 | continue
4 | 145 | −63.6092 | 359.5706 | continue
5 | 171 | −34.4734 | 373.9780 | continue
6 | 188 | −28.1345 | 398.2559 | continue
7 | 189 | −34.7014 | 419.2460 | continue
8 | 190 | −40.7869 | 429.5541 | continue
9 | 190 | −42.7149 | 432.1846 | continue
10 | 190 | −42.9682 | 432.4955 | continue
11 | 192 | −42.9805 | 432.5098 | continue
12 | 192 | −42.9807 | 432.5099 | continue
13 | 192 | −42.9807 | 432.5099 | continue
14 | 192 | −42.9807 | 432.5099 | continue
15 | 203 | −42.9807 | 432.5099 | continue
16 | 203 | −42.9807 | 432.5099 | continue
17 | 204 | −42.9807 | 432.5099 | continue
Table 11. SPRT results for both datasets (Case 2, parameter b).

Dataset 1 (T | N(t) | Acceptance Region | Rejection Region | Result):
1 | 1 | −3.554070 | 6.287044 | continue
2 | 1 | −0.636960 | 4.216873 | continue
3 | 2 | 0.776205 | 3.995391 | continue
4 | 3 | 1.977399 | 4.409981 | continue
5 | 5 | 3.205886 | 5.201452 | continue
6 | 5 | 4.434785 | 6.177075 | continue
7 | 5 | 5.509077 | 7.107788 | accept

Dataset 2 (T | N(t) | Acceptance Region | Rejection Region | Result):
1 | 90 | 11.45807 | 170.0957 | continue
2 | 107 | 71.55002 | 139.9607 | continue
3 | 126 | 103.6526 | 146.8394 | continue
4 | 145 | 129.8891 | 165.5602 | continue
5 | 171 | 149.0555 | 188.4585 | continue
6 | 188 | 152.8588 | 214.1708 | continue
7 | 189 | 120.5342 | 261.5581 | continue
8 | 190 | −56.62470 | 444.3444 | continue
9 | 190 | −1258.160 | 1647.398 | continue
10 | 190 | −14,878.80 | 15,268.35 | continue
11 | 192 | −325,116 | 325,505.2 | continue
12 | 192 | −1.8 × 10⁷ | 18,130,355 | continue
13 | 192 | −3.5 × 10⁹ | 3.47 × 10⁹ | continue
14 | 192 | −3.3 × 10¹² | 3.28 × 10¹² | continue
15 | 203 | −Inf | Inf | continue
16 | 203 | −Inf | Inf | continue
17 | 204 | −Inf | Inf | continue
