Article

A New Criterion for Model Selection

Department of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ 08854, USA
Submission received: 5 November 2019 / Revised: 2 December 2019 / Accepted: 5 December 2019 / Published: 10 December 2019
(This article belongs to the Special Issue Statistics and Modeling in Reliability Engineering)

Abstract

Selecting the best model from a set of candidates for a given data set is not an easy task. In this paper, we propose a new criterion that, in addition to minimizing the sum of squared errors, imposes a larger penalty when too many coefficients (or estimated parameters) are added to a model fitted from too small a sample in the presence of too much noise. We discuss several real applications that illustrate the proposed criterion and compare its results to those of existing criteria using a simulated data set and several real data sets, including advertising budget data, newly collected heart blood pressure health data, and software failure data.

1. Introduction

Model selection has become an important focus in recent years in statistical learning, machine learning, and big data analytics [1,2,3,4]. Several criteria for model selection are currently available in the literature. Many researchers [3,5,6,7,8,9,10,11] have studied the problem of variable selection in regression over the past three decades, and the problem now receives renewed attention due to growth in machine learning, data mining, and data science. The mean squared error (MSE), root mean squared error (RMSE), R2, adjusted R2, Akaike's information criterion (AIC), Bayesian information criterion (BIC), and AICc are among the common criteria used to measure model performance and select the best model from a set of candidates. Yet choosing an appropriate criterion on which to compare the many candidate models remains a difficult task for many analysts, since some criteria penalize the number of estimated parameters while others place more emphasis on the sample size of the given data.
In this paper, we propose a new criterion, PIC, that can be used to select the best model from a set of candidate models. The proposed PIC imposes a larger penalty when too many coefficients are added to the model from too small a sample in the presence of too much noise. We also briefly review several common existing criteria, including AIC, BIC, AICc, R2, adjusted R2, MSE, and RMSE. To illustrate the proposed criterion, we present results based on simulated data and several real applications, including advertising budget data and recently collected heart blood pressure health data sets.

2. Some Criteria for Model Comparisons

Suppose there are n observations on a response variable Y that relates to a set of independent variables: X1, X2, …, Xk−1 in the form of
Y = f(X1, X2, …, Xk−1)    (1)
The statistical significance of model comparisons can be determined based on existing goodness-of-fit criteria in the literature [12]. In this section, we first briefly review some existing criteria that are commonly used in model selection; we then introduce the new PIC for selecting the best model from a set of candidates.
The MSE measures the average squared deviation between the fitted values and the actual observations [13]. The RMSE is the square root of the variance of the residuals, i.e., the square root of the MSE. The coefficient of determination R2 measures the amount of variation accounted for by the fitted model. It is frequently used to compare models and assess which model provides the best fit to the data; however, R2 always increases with the model size. The adjusted R2 is a modification of R2 that takes into account the number of estimated parameters (or explanatory variables) in a model relative to the number of data points [14]. The adjusted R2 gives the percentage of variation explained by only those independent variables that actually affect the dependent variable.
The AIC was introduced by Akaike [5] and is calculated from the maximum value of the likelihood function for the model plus a penalty term based on the number of estimated parameters. Adding more parameters improves the goodness of fit but also increases the penalty.
The BIC was introduced by Schwarz [10]. The difference between BIC and AIC lies in the penalty term: in BIC the penalty on the number of parameters depends on the sample size n, while in AIC it does not. When the sample size is small, AIC is likely to select models that include many parameters. The second-order information criterion, AICc, takes sample size into account by increasing the relative penalty for model complexity with small data sets. As n grows large, AICc converges to AIC.
It should be noted that lower values of MSE, RMSE, AIC, BIC, and AICc indicate a better goodness of fit, whereas larger values of R2 and adjusted R2 indicate a better fit.
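The criteria above can all be computed directly from a model's residuals once a Gaussian likelihood is assumed. The following Python sketch is our own illustration (the function name and dictionary keys are ours, not from the paper); k counts all estimated coefficients, as in the text:

```python
import numpy as np

def gaussian_criteria(y, y_hat, k):
    """Common goodness-of-fit criteria for a linear model with k estimated
    parameters, using the Gaussian maximum-likelihood value. Formulas follow
    the summary in Table 1 of the paper."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = len(y)
    sse = float(np.sum((y - y_hat) ** 2))
    mse = sse / (n - k)                       # MSE = SSE / (n - k)
    rmse = np.sqrt(mse)
    sst = float(np.sum((y - y.mean()) ** 2))
    r2 = 1 - sse / sst
    adj_r2 = 1 - (n - 1) / (n - k) * (1 - r2)
    # Gaussian ML log-likelihood with sigma^2 = SSE / n
    log_l = -n / 2 * (np.log(2 * np.pi) + np.log(sse / n) + 1)
    aic = -2 * log_l + 2 * k
    bic = -2 * log_l + k * np.log(n)
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    return {"MSE": mse, "RMSE": rmse, "R2": r2, "AdjR2": adj_r2,
            "AIC": aic, "BIC": bic, "AICc": aicc}
```

Note that AICc differs from AIC by the extra term 2k(k + 1)/(n − k − 1), which vanishes as n grows, matching the convergence of AICc to AIC described above.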

New PIC

We now discuss a new criterion for selecting a model among several candidate models. Suppose there are n observations on a response variable Y and (k − 1) explanatory variables X1, X2, …, Xk−1. Let
yi be the ith response (dependent variable), i = 1, 2, …, n
ŷi be the fitted value of yi
ei be the ith residual, i.e., ei = yi − ŷi
From Equation (1), the sum of squared error can be defined as follows:
SSE = Σ_{i=1}^{n} e_i² = Σ_{i=1}^{n} (y_i − ŷ_i)²    (2)
In general, the adjusted R2 attaches a small penalty for adding more variables to the model. The difference between the adjusted R2 and R2 is usually quite small unless there are too many unknown coefficients in the model to be estimated from too small a sample in the presence of too much noise. In other words, the adjusted R2 penalizes the loss of degrees of freedom that results from adding independent variables to the model. Our motivation in this study is to propose a new criterion that addresses the above situation. Since the adjusted R2 corrects R2 for the sample size and the number of estimated coefficients, we can easily show that the function k·(1 − R²adj)/(1 − R²), or equivalently k·((n − 1)/(n − k)), yields a larger penalty for adding too many coefficients (or estimated parameters) to the model from too small a sample in the presence of too much noise, where n is the sample size and k is the number of estimated parameters.
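The equivalence stated above follows in one line from the definition of the adjusted R2:

```latex
R^2_{adj} = 1 - \frac{n-1}{n-k}\,(1 - R^2)
\;\Longrightarrow\;
\frac{1 - R^2_{adj}}{1 - R^2} = \frac{n-1}{n-k},
\qquad\text{hence}\qquad
k\,\frac{1 - R^2_{adj}}{1 - R^2} \;=\; k\,\frac{n-1}{n-k}.
```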
Based on the above, we propose a new criterion, PIC, for selecting the best model. The PIC value of the model is as follows:
PIC = SSE + k((n − 1)/(n − k))    (3)
where n is the number of observations in the model,
  k is the number of estimated parameters (the model has (k − 1) explanatory variables), and
  SSE is the sum of squared errors as given in Equation (2).
Table 1 presents a summary of the criteria for model selection used in this study. The best model among the candidates is the one that yields the smallest value of MSE, RMSE, AIC, BIC, AICc, or the new criterion given in Equation (3), or the largest value of R2 or adjusted R2.
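Given Equation (3), computing PIC for a candidate model requires only the residuals, the sample size n, and the number of estimated parameters k. A minimal sketch (the function name is ours):

```python
def pic(y, y_hat, k):
    """Pham Information Criterion, Equation (3): PIC = SSE + k*(n-1)/(n-k).
    Lower values indicate a better model."""
    n = len(y)
    sse = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    return sse + k * (n - 1) / (n - k)
```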

3. Experimental Validation

3.1. Numerical Examples

In this section, we illustrate the proposed criterion using data simulated from a multiple linear regression with three independent variables X1, X2, and X3, for a set of 100 observations (Case 1) and 20 observations (Case 2).
Case 1: 100 observations based on simulated data.
Table 2 presents the first 10 observations from the simulated data set of 100 observations generated from a multiple linear regression function. From Table 3 we can observe that, based on the new proposed criterion, the multiple regression model including all three independent variables provides the best fit. This result is consistent with all of the other criteria, such as MSE, AIC, AICc, BIC, RMSE, R2, and adjusted R2.
Case 2: 20 observations.
Table 4 presents a simulated data set consisting of 20 observations generated from a multiple linear regression function. From Table 5 we can observe that, based on the new criterion, the multiple regression model including all three independent variables provides the best fit among all seven candidate models. This result is also consistent with all of the other criteria, such as MSE, AIC, AICc, BIC, RMSE, R2, and adjusted R2.
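The seven candidate models in Tables 3 and 5 are the non-empty subsets of {X1, X2, X3}, each fitted with an intercept. The selection procedure can be sketched as follows, assuming ordinary least squares fits (this is our own illustration, not the authors' code):

```python
import itertools
import numpy as np

def best_subset_by_pic(X, y):
    """Fit OLS for every non-empty subset of the columns of X (7 candidate
    models when X has three predictors) and return the subset minimizing
    PIC = SSE + k*(n-1)/(n-k), where k counts the intercept plus the
    included predictors."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n, p = X.shape
    best = None
    for r in range(1, p + 1):
        for cols in itertools.combinations(range(p), r):
            A = np.column_stack([np.ones(n), X[:, cols]])  # intercept + subset
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            sse = float(np.sum((y - A @ beta) ** 2))
            k = A.shape[1]
            score = sse + k * (n - 1) / (n - k)
            if best is None or score < best[0]:
                best = (score, cols)
    return best  # (PIC value, tuple of selected column indices)
```

Because the penalty k(n − 1)/(n − k) grows with k, an irrelevant predictor that barely reduces SSE is rejected, which is exactly the behavior the criterion is designed to exhibit on small samples.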

3.2. Applications

In this section we demonstrate the proposed criterion on several real applications, including advertising products, heart blood pressure health, and software reliability analysis. Based on our preliminary study of the collected data, the multiple linear regression model assumption is appropriate for Applications 1 and 2 below.
Application 1: Advertising Budget.
In this study, we use the advertising budget data set [15] to illustrate the proposed criterion, where the sales of a particular product is the dependent variable of the multiple regression and the three media channels (TV, Radio, and Newspaper) are the independent variables. The advertising data set consists of the sales of a product in 200 different markets (200 rows), together with the advertising budgets for the product in each of those markets for three different media channels: TV, radio, and newspaper. The sales are in thousands of units and the budgets are in thousands of dollars. Table 6 shows the first few rows of the advertising budget data set.
We now discuss the results of the linear regression model using this advertising data. Figure 1 and Figure 2 present the data plot and the correlation coefficients between the pairs of variables of the advertising budget data, respectively. The pair of Sales and TV has the highest correlation, which implies that TV advertising has a direct positive effect on sales. The results also show a statistically significant positive effect of both TV and Radio advertising on sales. From Table 7, TV is the most significant medium among the three advertising channels and has the strongest impact on sales. The R2 is 0.8972, so 89.72% of the variability is explained by all three media channels. From Table 8, the values of R2 with all three variables and with just two variables (TV and Radio) in the model are the same. This implies that we can select the model with two variables (TV and Radio) in the regression. We can now examine the adjusted R2 measure: for the regression model with the TV and Radio variables, the adjusted R2 is 0.8962, while adding the third variable (Newspaper) reduces the adjusted R2 of the full model to 0.8956. Based on the new proposed criterion, the model with the two advertising media channels (TV and Radio) is the best model from the set of seven candidate models shown in Table 8. This result is consistent with all the other criteria, such as MSE, AIC, AICc, BIC, RMSE, and adjusted R2.
Application 2: Heart Blood Pressure Health Data.
Blood pressure (BP) is one of the main risk factors for cardiovascular diseases. BP is the force of blood pushing against the artery walls as it moves through the body [16]. Abnormal BP is a serious issue that can cause strokes, heart attacks, and kidney failure, so it is important to check blood pressure on a regular basis. The author has monitored an individual's blood pressure daily since January 2019 using a Microlife device. The blood pressure was measured each morning and evening within the same time interval, and the results of three measures were recorded each time, as shown in Table 9: systolic blood pressure ("systolic"), diastolic blood pressure ("diastolic"), and heart rate ("pulse"). The systolic BP is the pressure when the heart beats, while the heart muscle is contracting and pumping oxygen-rich blood into the blood vessels. The diastolic BP is the pressure on the blood vessels when the heart muscle relaxes; the diastolic pressure is always lower than the systolic pressure [17]. The pulse measures the heart rate by counting the number of beats per minute (BPM).
The newly collected heart blood pressure health data set consists of the measurements for this individual over 86 days, with 2 data points measured each day, for a total of 172 observations. The first few rows of the data set are shown in Table 9. For example, the first row of Table 9 can be read as follows: on a Thursday ("day" = 5) morning ("time" = 0), the systolic, diastolic, and pulse measurements were 154, 99, and 71, respectively. Similarly, on a Thursday afternoon (the second row of Table 9, "time" = 1), the measurements were 144, 94, and 75, respectively.
From Figure 3, the systolic BP and diastolic BP have the highest correlation. In this study, we decided not to include the Time variable (column 2 in Table 9) in the model analysis, since it may not necessarily reflect the health measurement. The analysis shows that the systolic blood pressure appears to be the most significant factor affecting the heart rate measure. The R2 is 0.09997, so approximately 10% of the variability is explained by all three variables (Day, Systolic, Diastolic), as shown in Table 10. Based on the new proposed criterion, the model with only the systolic blood pressure variable is the best model among the seven candidate models shown in Table 10. This result stands alone compared to all the other criteria, except BIC. In other words, the best model based on our proposed criterion retains only the systolic BP variable.
Application 3: Software Reliability Dataset #1.
In this example, we use the numerical results recently studied by Song et al. [12] to illustrate the new criterion, comparing it to some existing criteria on two real data sets from software reliability engineering. Table 11 shows the numerical results of 19 different software reliability models based on four existing criteria (MSE, AIC, R2, and adjusted R2) and the new criterion, called the Pham criterion, using dataset #1 [18]. In dataset #1, the week index ranges from 1 week to 21 weeks, and there are 38 cumulative failures at 14 weeks; detailed information is recorded in Musa et al. [18]. Model 6, as shown in Table 11, provides the best fit based on the MSE, R2, adjusted R2, and new criteria. However, Model 1 appears to be the best fit based on the AIC.
Application 4: Software Reliability Dataset #2.
Similarly, in this example we use the numerical results recently studied by Song et al. [12] to illustrate the new criterion on a real dataset #2 [19]. In dataset #2, the index uses cumulative system days, with failures observed over 58,633 system days; detailed information is recorded in [19]. Table 12 presents the numerical results of 19 different software reliability models based on four existing criteria (MSE, AIC, R2, and adjusted R2) and the new proposed criterion.
Based on dataset #2, Model 7 (see Table 12) provides the best fit based on the AIC and the new criterion, while Model 17 is the best fit based on the MSE, R2, and adjusted R2.

4. Conclusions

In this paper we proposed a new criterion, PIC, that can be used to select the best model from a set of candidate models. The proposed criterion imposes a larger penalty when too many coefficients (or estimated parameters) are added to the model from too small a sample in the presence of too much noise, where n is the sample size and k is the number of estimated parameters.
The paper illustrates the proposed criterion with several applications based on the advertising budget data, the newly collected heart blood pressure health data set, and software failure data. Given the number of estimated parameters k and the sample size n, it is straightforward to compute the new criterion value. Based on the simulated data and the real applications discussed in Section 3, PIC shows very attractive and accurate performance in selecting the best model from a set of candidates.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

Acronyms

SSE  sum of squared errors
MSE  mean squared error
RMSE  root mean squared error
R2  coefficient of determination
Adjusted R2  adjusted R-squared
AIC  Akaike's information criterion
BIC  Bayesian information criterion
AICc  second-order AIC
PIC  the new criterion in this paper (Pham Information Criterion)

References

  1. Burnham, K.P.; Anderson, D.R. Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, 2nd ed.; Springer: Berlin, Germany, 2002.
  2. Burnham, K.P.; Anderson, D.R. Multimodel inference: Understanding AIC and BIC in model selection. Sociol. Methods Res. 2004, 33, 261–304.
  3. Burnham, K.P.; Anderson, D.R.; Huyvaert, K.P. AIC model selection and multimodel inference in behavioral ecology: Some background, observations, and comparisons. Behav. Ecol. Sociobiol. 2011, 65, 23–35.
  4. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182.
  5. Akaike, H. Information theory and an extension of the maximum likelihood principle. In Proceedings of the Second International Symposium on Information Theory; Petrov, B.N., Caski, F., Eds.; Akademiai Kiado: Budapest, Hungary, 1973; pp. 267–281.
  6. Allen, D.M. Mean square error of prediction as a criterion for selecting variables. Technometrics 1971, 13, 469–475.
  7. Bozdogan, H. Akaike's information criterion and recent developments in information complexity. J. Math. Psychol. 2000, 44, 62–91.
  8. Chai, T.; Draxler, R.R. Root mean square error (RMSE) or mean absolute error (MAE)? Arguments against avoiding RMSE in the literature. Geosci. Model Dev. 2014, 7, 1247–1250.
  9. Schönbrodt, F.D.; Wagenmakers, E.-J. Bayes factor design analysis: Planning for compelling evidence. Psychon. Bull. Rev. 2017, 25, 128–142.
  10. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464.
  11. Wagenmakers, E.-J.; Farrell, S. AIC model selection using Akaike weights. Psychon. Bull. Rev. 2004, 11, 192–196.
  12. Song, K.Y.; Chang, I.-H.; Pham, H. A testing coverage model based on NHPP software reliability considering the software operating environment and the sensitivity analysis. Mathematics 2019, 7, 450.
  13. Pham, H. System Software Reliability; Springer: London, UK, 2006.
  14. Li, Q.; Pham, H. A testing-coverage software reliability model considering fault removal efficiency and error generation. PLoS ONE 2017, 12, e0181524.
  15. Advertising Budgets Data Set. Available online: https://www.kaggle.com/ishaanv/ISLR-Auto#Advertising.csv (accessed on 30 May 2019).
  16. Blood Pressure. Available online: https://www.webmd.com/hypertension-high-blood-pressure/guide/hypertension-home-monitoring#1 (accessed on 30 April 2019).
  17. Systolic. Available online: https://0-www-ncbi-nlm-nih-gov.brum.beds.ac.uk/books/NBK279251/ (accessed on 30 April 2019).
  18. Musa, J.D.; Iannino, K.; Okumoto, K. Software Reliability: Measurement, Prediction, Application; McGraw-Hill: New York, NY, USA, 2006.
  19. Daniel, R.J.; Zhang, X. Some successful approaches to software reliability modeling in industry. J. Syst. Softw. 2005, 74, 85–99.
Figure 1. The scatter plot of the advertising data with four variables.
Figure 2. The correlation coefficient between the variables.
Figure 3. The correlation coefficient between the variables.
Table 1. Some criteria for model selection.

No. | Criterion | Formula | Description
1 | MSE | MSE = Σ(yi − ŷi)²/(n − k) | Measures the deviation between the fitted values and the actual observations.
2 | RMSE | RMSE = √(Σ(yi − ŷi)²/(n − k)) | The square root of the MSE.
3 | R2 | R2 = 1 − Σ(yi − ŷi)²/Σ(yi − ȳ)² | Measures the amount of variation accounted for by the fitted model.
4 | Adjusted R2 | R2adj = 1 − ((n − 1)/(n − k))(1 − R2) | Applies a small penalty for adding more variables to the model.
5 | AIC | AIC = −2log(L) + 2k | Adding parameters improves the fit but also increases the penalty.
6 | BIC | BIC = −2log(L) + k·log(n) | The penalty on the number of parameters depends on the sample size n.
7 | AICc | AICc = −2log(L) + 2k + 2k(k + 1)/(n − k − 1) | Increases the relative penalty for model complexity with small data sets.
8 | PIC | PIC = SSE + k((n − 1)/(n − k)), where SSE = Σ(yi − ŷi)² | Imposes a larger penalty for adding too many coefficients to the model when the sample is too small.
Table 2. A simulated dataset of 100 observations from a multiple linear regression consisting of 3 independent variables.

X1 | X2 | X3 | Y
11 | 68.028164 | 0 | 21.46126
12 | 69.086446 | 0 | 23.28792
13 | 84.806730 | 1 | 18.71906
14 | 19.011313 | 1 | 18.50209
15 | 63.046323 | 1 | 22.37717
16 | 82.686964 | 1 | 24.19955
17 | 59.263664 | 1 | 20.64198
18 | 88.756598 | 0 | 29.50144
19 | 77.884304 | 1 | 24.49684
20 | 9.346073 | 0 | 27.15191
Table 3. Criteria values of independent variables based on 100 simulated data observations consisting of three independent variables X1, X2, and X3.

Criteria | X1, X2, X3 | X1, X2 | X1, X3 | X2, X3 | X1 | X2 | X3
MSE | 2.715904 | 8.844676 | 3.632836 | 214.9156 | 9.259849 | 213.7444 | 217.8576
AIC | 389.594 | 506.7012 | 417.7079 | 417.7079 | 510.3048 | 824.2031 | 826.1659
AICc | 390.0151 | 506.9512 | 417.9579 | 417.9579 | 510.4285 | 824.3268 | 826.2896
BIC | 400.0147 | 514.5167 | 425.5234 | 425.5234 | 515.5151 | 829.4134 | 831.3762
RMSE | 1.648 | 2.974 | 1.906 | 14.66 | 3.043 | 14.62 | 14.76
R2 | 0.9879 | 0.9601 | 0.9836 | 0.02904 | 0.9578 | 0.0251 | 0.005776
Adjusted R2 | 0.9875 | 0.9592 | 0.9833 | 0.00902 | 0.9573 | 0.01515 | −0.00437
PIC | 264.8518 | 860.9954 | 355.4469 | 20849.88 | 909.4856 | 20948.97 | 21352.07
Table 4. A simulated dataset of 20 observations from a multiple linear regression consisting of 3 independent variables.

X1 | X2 | X3 | Y
11 | 68.028164 | 0 | 28.05442
12 | 69.086446 | 0 | 25.44146
13 | 84.806730 | 1 | 18.73395
14 | 19.011313 | 1 | 18.00611
15 | 63.046323 | 1 | 24.69284
16 | 82.686964 | 0 | 28.57457
17 | 59.263664 | 1 | 22.69636
18 | 88.756598 | 0 | 29.59276
19 | 77.884304 | 0 | 29.19881
20 | 9.346073 | 1 | 21.72717
21 | 80.920814 | 1 | 25.78641
22 | 91.528869 | 1 | 26.44676
23 | 13.096270 | 1 | 23.20765
24 | 5.530196 | 0 | 25.37850
25 | 73.659765 | 0 | 32.86512
26 | 47.619990 | 0 | 29.91239
27 | 84.961929 | 0 | 35.45651
28 | 67.516106 | 0 | 34.46896
29 | 16.024371 | 0 | 31.97531
30 | 9.566489 | 0 | 33.54677
Table 5. Criteria values of independent variables based on 20 simulated data observations consisting of three independent variables X1, X2, and X3.

Criteria | X1, X2, X3 | X1, X2 | X1, X3 | X2, X3 | X1 | X2 | X3
MSE | 2.62764 | 8.934121 | 5.692996 | 9.909904 | 14.32623 | 23.55161 | 10.44582
AIC | 81.6276 | 105.3009 | 96.2897 | 107.3718 | 113.8972 | 123.8301 | 107.5755
AICc | 84.29427 | 106.8009 | 97.7897 | 108.8718 | 114.6031 | 124.536 | 108.2814
BIC | 85.61053 | 108.2881 | 99.2769 | 110.359 | 115.8887 | 125.8216 | 109.567
RMSE | 1.621 | 2.989 | 2.386 | 3.148 | 3.785 | 4.853 | 3.232
R2 | 0.9111 | 0.6792 | 0.7956 | 0.6442 | 0.4551 | 0.1046 | 0.6028
Adjusted R2 | 0.8945 | 0.6415 | 0.7715 | 0.6024 | 0.4248 | 0.05487 | 0.5807
PIC | 46.79226 | 155.233 | 100.1339 | 171.8213 | 259.9833 | 426.0401 | 190.1359
Table 6. Advertising budget data in 200 different markets.

TV | Radio | Newspaper | Sales
230.1 | 37.8 | 69.2 | 22.1
44.5 | 39.3 | 45.1 | 10.4
17.2 | 45.9 | 69.3 | 9.3
151.5 | 41.3 | 58.5 | 18.5
180.8 | 10.8 | 58.4 | 12.9
8.7 | 48.9 | 75.0 | 7.2
57.5 | 32.8 | 23.5 | 11.8
120.2 | 19.6 | 11.6 | 13.2
8.6 | 2.1 | 1.0 | 4.8
Table 7. The relative importance metrics of all three media channels (TV, Radio, Newspaper).

Relative importance metrics:
 | Lmg | Last | First | Pratt
TV | 0.65232505 | 0.6918832537 | 0.6143151 | 0.656553238
Radio | 0.32187149 | 0.3080966738 | 0.3333566 | 0.344548723
Newspaper | 0.02580346 | 0.0000200725 | 0.0523283 | −0.001101961
Table 8. Criteria values of independent variables (TV, Radio, Newspaper) of regression models (X1, X2, and X3 denote TV, Radio, and Newspaper, respectively).

Criteria | X1, X2, X3 | X1, X2 | X1, X3 | X2, X3 | X1 | X2 | X3
MSE | 2.8409 | 2.8270 | 9.7389 | 18.349 | 10.619 | 18.275 | 25.933
AIC | 782.36 | 780.39 | 1027.8 | 1154.5 | 1044.1 | 1152.7 | 1222.7
AICc | 782.49 | 780.46 | 1027.84 | 1154.53 | 1044.15 | 1152.74 | 1222.73
BIC | 795.55 | 790.29 | 1037.7 | 1164.4 | 1050.7 | 1159.3 | 1229.3
RMSE | 1.6855 | 1.6814 | 3.1207 | 4.2836 | 3.2587 | 4.2750 | 5.0925
R2 | 0.8972 | 0.8972 | 0.6458 | 0.3327 | 0.6119 | 0.3320 | 0.0521
Adjusted R2 | 0.8956 | 0.8962 | 0.6422 | 0.3259 | 0.6099 | 0.3287 | 0.0473
PIC | 5.7467 | 4.7118 | 6.1512 | 7.3141 | 5.2688 | 6.2850 | 7.1026
Table 9. Sample heart blood pressure health data set of an individual in 86-day interval.

Day | Time | Systolic | Diastolic | Pulse
5 | 0 | 154 | 99 | 71
5 | 1 | 144 | 94 | 75
6 | 0 | 139 | 93 | 73
6 | 1 | 128 | 76 | 85
7 | 0 | 129 | 73 | 78
7 | 1 | 125 | 65 | 74
1 | 0 | 129 | 80 | 70
1 | 1 | 130 | 83 | 72
2 | 0 | 144 | 83 | 74
2 | 1 | 124 | 87 | 84
3 | 0 | 120 | 77 | 73
3 | 1 | 124 | 70 | 80
Table 10. Criteria values of variables (day, systolic, diastolic) of regression models (X1, X2, and X3 denote day, systolic, and diastolic, respectively).

Criteria | X1, X2, X3 | X1, X2 | X1, X3 | X2, X3 | X1 | X2 | X3
MSE | 43.1175 | 43.7381 | 47.0101 | 43.5859 | 46.8450 | 44.1352 | 47.2311
AIC | 1141.463 | 1142.942 | 1155.351 | 1142.342 | 1153.76 | 1143.511 | 1155.172
BIC | 1154.053 | 1152.384 | 1164.793 | 1151.784 | 1160.055 | 1149.806 | 1161.467
RMSE | 6.5664 | 6.6135 | 6.8564 | 6.6020 | 6.8443 | 6.6434 | 6.8725
R2 | 0.09997 | 0.0816 | 0.01287 | 0.08477 | 0.0105 | 0.0678 | 0.00236
Adjusted R2 | 0.08389 | 0.0707 | 0.00119 | 0.07394 | 0.00469 | 0.0623 | −0.00351
PIC | 10.6378 | 9.6490 | 9.8919 | 9.6375 | 8.8561 | 8.6552 | 8.8843
Table 11. Results for criteria based on dataset #1 [12].

No. | Value k | Model | MSE | AIC | R2 | Adj R2 | PIC
1 | 2 | GO | 3.6343 | 62.8309 | 0.9687 | 0.9630 | 45.7783
2 | 2 | DS | 9.1885 | 68.6375 | 0.9208 | 0.9064 | 112.4287
3 | 3 | IS | 3.9647 | 64.8309 | 0.9687 | 0.9593 | 47.1572
4 | 4 | YE | 4.1162 | 66.7083 | 0.9704 | 0.9573 | 46.3621
5 | 4 | YR | 16.0972 | 82.0186 | 0.8844 | 0.8331 | 166.1721
6 | 3 | YID1 | 2.9149 | 64.8745 | 0.9770 | 0.9701 | 35.6094
7 | 3 | YID2 | 2.9795 | 64.7603 | 0.9765 | 0.9694 | 36.3199
8 | 3 | HDGO | 3.9647 | 64.8309 | 0.9687 | 0.9593 | 47.1572
9 | 4 | PNZ | 3.2378 | 66.6674 | 0.9768 | 0.9664 | 37.5782
10 | 5 | PZ | 4.8458 | 68.8309 | 0.9687 | 0.9491 | 50.8344
11 | 6 | ZFR | 5.4529 | 70.8319 | 0.9687 | 0.9418 | 53.3732
12 | 5 | TP | 5.2511 | 72.6973 | 0.9736 | 0.9428 | 54.4821
13 | 3 | IFD | 13.0533 | 67.8928 | 0.8969 | 0.8660 | 147.132
14 | 4 | PDP | 26.2542 | 138.4862 | 0.8115 | 0.7277 | 267.742
15 | 4 | KSRGM | 5.3204 | 65.7919 | 0.9618 | 0.9448 | 58.4041
16 | 4 | RMD | 3.3303 | 66.7588 | 0.9761 | 0.9655 | 38.5032
17 | 5 | CT | 4.0547 | 68.0606 | 0.9738 | 0.9574 | 43.7145
18 | 5 | Vtub | 3.7733 | 66.2648 | 0.9756 | 0.9604 | 41.1819
19 | 5 | 3PFD | 4.3488 | 68.6419 | 0.9719 | 0.9543 | 46.3614
Table 12. Results for criteria based on dataset #2 [12].

No. | Value k | Model | MSE | AIC | R2 | Adj R2 | PIC
1 | 2 | GO | 1.6266 | 49.4070 | 0.9839 | 0.9806 | 20.0744
2 | 2 | DS | 7.3713 | 72.2378 | 0.9269 | 0.9122 | 83.2661
3 | 3 | IS | 1.7892 | 51.4070 | 0.9839 | 0.9785 | 21.4920
4 | 4 | YE | 1.9812 | 53.3655 | 0.9839 | 0.9759 | 23.1641
5 | 4 | YR | 2.8131 | 91.6713 | 0.8960 | 0.8440 | 120.6512
6 | 3 | YID1 | 1.0219 | 48.4929 | 0.9908 | 0.9877 | 13.8190
7 | 3 | YID2 | 0.9979 | 48.4804 | 0.9910 | 0.9880 | 13.5790
8 | 3 | HDGO | 1.7883 | 51.3850 | 0.9839 | 0.9785 | 21.4830
9 | 4 | PNZ | 1.1090 | 50.4830 | 0.9910 | 0.9865 | 15.3143
10 | 5 | PZ | 1.3039 | 53.3626 | 0.9906 | 0.9839 | 17.9312
11 | 6 | ZFR | 2.5591 | 57.3622 | 0.9838 | 0.9677 | 28.4794
12 | 5 | TP | 1.3344 | 56.0182 | 0.9928 | 0.9827 | 18.1752
13 | 3 | IFD | 38.8383 | 65.7427 | 0.6497 | 0.5330 | 391.983
14 | 4 | PDP | 34.4540 | 134.4592 | 0.7203 | 0.5805 | 351.4193
15 | 4 | KSRGM | 3.2468 | 55.0954 | 0.9736 | 0.9605 | 34.5545
16 | 4 | RMD | 2.0411 | 53.5239 | 0.8934 | 0.9751 | 23.7032
17 | 5 | CT | 0.9229 | 51.9769 | 0.9933 | 0.9886 | 14.8832
18 | 5 | Vtub | 2.0950 | 52.9672 | 0.9849 | 0.9741 | 24.2600
19 | 5 | 3PFD | 1.2405 | 52.9005 | 0.9910 | 0.9847 | 17.4241

Pham, H. A New Criterion for Model Selection. Mathematics 2019, 7, 1215. https://0-doi-org.brum.beds.ac.uk/10.3390/math7121215