Article

A New Biased Estimator to Combat the Multicollinearity of the Gaussian Linear Regression Model

Issam Dawoud 1 and B. M. Golam Kibria 2
1 Department of Mathematics, Al-Aqsa University, Gaza 4051, Palestine
2 Department of Mathematics and Statistics, Florida International University, Miami, FL 33199, USA
* Author to whom correspondence should be addressed.
Submission received: 24 September 2020 / Revised: 27 October 2020 / Accepted: 3 November 2020 / Published: 6 November 2020

Abstract:
In a multiple linear regression model, the ordinary least squares estimator is inefficient when the multicollinearity problem exists. Many authors have proposed different estimators to overcome the multicollinearity problem for linear regression models. This paper introduces a new regression estimator, called the Dawoud–Kibria estimator, as an alternative to the ordinary least squares estimator. Theory and simulation results show that, under some conditions, this estimator performs better than other regression estimators according to the mean squared error criterion. Real-life datasets are used to illustrate the findings of the paper.

1. Introduction

Consider the following linear regression model:
$$ y = X\beta + \varepsilon, \qquad (1) $$
where $y$ is an $n \times 1$ vector of the dependent variable, $X$ is a known $n \times p$ full-rank matrix of explanatory variables, $\beta$ is a $p \times 1$ vector of unknown regression parameters, and $\varepsilon$ is an $n \times 1$ vector of disturbances with zero mean and variance–covariance matrix $\mathrm{Cov}(\varepsilon) = \sigma^2 I_n$, where $I_n$ is the identity matrix of order $n \times n$. The ordinary least squares (OLS) estimator of $\beta$ in (1) is defined by
$$ \hat{\beta} = S^{-1}X'y, $$
where $S = X'X$. Under the normality assumption on the disturbances, $\hat{\beta}$ follows a $N(\beta, \sigma^2 S^{-1})$ distribution.
In a multiple linear regression model, it is assumed that the explanatory variables are independent. However, in real-life situations, there may be strong or nearly strong linear relationships among the explanatory variables, which causes the problem of multicollinearity. In the presence of multicollinearity, it is difficult to estimate the unique effect of individual variables in the regression equation. Moreover, the OLS estimator becomes unstable or inefficient and may produce coefficients with the wrong sign (see Hoerl and Kennard [1]). To overcome these problems, many authors have introduced different kinds of one- and two-parameter estimators: to mention a few, Stein [2], Massy [3], Hoerl and Kennard [1], Mayer and Willke [4], Swindel [5], Liu [6], Akdeniz and Kaçiranlar [7], Ozkale and Kaçiranlar [8], Sakallıoglu and Kaçıranlar [9], Yang and Chang [10], Roozbeh [11], Akdeniz and Roozbeh [12], Lukman et al. [13,14], and, very recently, Kibria and Lukman [15], among others.
The objective of this paper is to introduce a new class of two-parameter estimators for the regression parameters when the explanatory variables are correlated, and then to compare the performance of the new estimator with the OLS estimator, the ordinary ridge regression (ORR) estimator, the Liu estimator, the Kibria–Lukman (KL) estimator, the two-parameter (TP) estimator proposed by Ozkale and Kaciranlar [8], and the new two-parameter (NTP) estimator proposed by Yang and Chang [10].

Some Alternative Biased Estimators and the Proposed Estimator

The canonical form of Equation (1) is as follows:
$$ y = Z\alpha + \varepsilon, \qquad (3) $$
where $Z = XP$ and $\alpha = P'\beta$. Here, $P$ is an orthogonal matrix such that $Z'Z = P'X'XP = \Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_p)$. The OLS estimator of $\alpha$ is as follows:
$$ \hat{\alpha} = \Lambda^{-1}Z'y, $$
and the mean squared error matrix (MSEM) of $\hat{\alpha}$ is given by
$$ \mathrm{MSEM}(\hat{\alpha}) = \sigma^2 \Lambda^{-1}. $$
The ORR of $\alpha$ [1] is given by
$$ \hat{\alpha}(k) = W(k)\hat{\alpha}, $$
where $W(k) = [I_p + k\Lambda^{-1}]^{-1}$, $k$ is the biasing parameter, and
$$ \mathrm{MSEM}(\hat{\alpha}(k)) = \sigma^2 W(k)\Lambda^{-1}W(k)' + (W(k) - I_p)\alpha\alpha'(W(k) - I_p)'. $$
The Liu estimator of $\alpha$ [6] is given by
$$ \hat{\alpha}(d) = F(d)\hat{\alpha}, $$
where $F(d) = [\Lambda + I_p]^{-1}[\Lambda + dI_p]$, $d$ is the biasing parameter of the Liu estimator, and
$$ \mathrm{MSEM}(\hat{\alpha}(d)) = \sigma^2 F(d)\Lambda^{-1}F(d)' + (1-d)^2(\Lambda + I_p)^{-1}\alpha\alpha'(\Lambda + I_p)^{-1}. $$
The KL estimator of $\alpha$ [15] is given by
$$ \hat{\alpha}_{KL} = W(k)M(k)\hat{\alpha}, \qquad (10) $$
where $M(k) = [I_p - k\Lambda^{-1}]$ and
$$ \mathrm{MSEM}(\hat{\alpha}_{KL}) = \sigma^2 W(k)M(k)\Lambda^{-1}M(k)'W(k)' + [W(k)M(k) - I_p]\alpha\alpha'[W(k)M(k) - I_p]'. $$
The two-parameter (TP) estimator of $\alpha$ (Ozkale and Kaçiranlar [8]) is given by
$$ \hat{\alpha}_{TP} = R\hat{\alpha}, $$
where $R = (\Lambda + kI_p)^{-1}(\Lambda + kdI_p)$, $k$ and $d$ are the biasing parameters, and
$$ \mathrm{MSEM}(\hat{\alpha}_{TP}) = \sigma^2 R\Lambda^{-1}R' + [R - I_p]\alpha\alpha'[R - I_p]'. $$
The new two-parameter (NTP) estimator of $\alpha$ (Yang and Chang [10]) is given by
$$ \hat{\alpha}_{NTP} = F(d)W(k)\hat{\alpha}, $$
$$ \mathrm{MSEM}(\hat{\alpha}_{NTP}) = \sigma^2 F(d)W(k)\Lambda^{-1}W(k)'F(d)' + [F(d)W(k) - I_p]\alpha\alpha'[F(d)W(k) - I_p]'. $$
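All of the above estimators act on the OLS canonical estimate $\hat{\alpha}$ through matrices that are diagonal in the canonical basis, so they are straightforward to compute once the eigenvalues $\lambda_i$ are available. The following is a minimal NumPy sketch of that computation; the function and variable names are ours, not from the paper, and the inputs are assumed to be the eigenvalues of $X'X$ and the canonical OLS estimate.

```python
import numpy as np

def competing_estimators(lam, alpha_hat, k, d):
    """Illustrative ORR, Liu, KL, TP and NTP estimates of alpha in canonical form.

    lam       : (p,) eigenvalues of X'X (the diagonal of Lambda)
    alpha_hat : (p,) OLS estimate in canonical coordinates
    k, d      : biasing parameters
    Because every matrix involved is diagonal in this basis, the estimators
    reduce to component-wise shrinkage of alpha_hat.
    """
    w = lam / (lam + k)              # W(k) = [I_p + k Lambda^{-1}]^{-1}
    f = (lam + d) / (lam + 1.0)      # F(d) = [Lambda + I_p]^{-1}[Lambda + d I_p]
    m = 1.0 - k / lam                # M(k) = I_p - k Lambda^{-1}
    r = (lam + k * d) / (lam + k)    # R    = (Lambda + k I_p)^{-1}(Lambda + k d I_p)
    return {
        "ORR": w * alpha_hat,
        "Liu": f * alpha_hat,
        "KL":  w * m * alpha_hat,
        "TP":  r * alpha_hat,
        "NTP": f * w * alpha_hat,
    }
```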
The proposed new class of two-parameter estimator of $\alpha$ is obtained by minimizing $(y - Z\alpha)'(y - Z\alpha)$ subject to $(\alpha + \hat{\alpha})'(\alpha + \hat{\alpha}) = c$, where $c$ is a constant, i.e., by minimizing
$$ (y - Z\alpha)'(y - Z\alpha) + k(1+d)[(\alpha + \hat{\alpha})'(\alpha + \hat{\alpha}) - c]. \qquad (16) $$
Here, $k$ and $1 + d$ together form the Lagrangian multiplier $k(1+d)$.
The solution of minimizing the objective function
$$ (y - Z\alpha)'(y - Z\alpha) + k[(\alpha + \hat{\alpha})'(\alpha + \hat{\alpha}) - c] $$
was obtained by Kibria and Lukman [15], yielding the KL estimator defined in Equation (10).
Now, the solution to (16) gives the proposed estimator as follows:
$$ \hat{\alpha}_{DK} = (Z'Z + k(1+d)I_p)^{-1}(Z'Z - k(1+d)I_p)\hat{\alpha} = W(k,d)M(k,d)\hat{\alpha}, $$
where $W(k,d) = [I_p + k(1+d)\Lambda^{-1}]^{-1}$ and $M(k,d) = [I_p - k(1+d)\Lambda^{-1}]$.
The proposed estimator will be called the Dawoud–Kibria (DK) estimator and is denoted by $\hat{\alpha}_{DK}$.
Moreover, the proposed DK estimator can also be obtained by augmenting $-\sqrt{k(1+d)}\,\hat{\alpha} = \sqrt{k(1+d)}\,\alpha + \varepsilon'$ to (3) and then applying OLS. The MSEM of the DK estimator is given by
$$ \mathrm{MSEM}(\hat{\alpha}_{DK}) = \sigma^2 W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)' + [W(k,d)M(k,d) - I_p]\alpha\alpha'[W(k,d)M(k,d) - I_p]'. $$
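For illustration, the DK estimator can be computed from the data through the canonical form. The sketch below is a minimal NumPy implementation of the estimator defined above, assuming $X$ has already been centered and standardized; the function and variable names are ours.

```python
import numpy as np

def dk_estimator(X, y, k, d):
    """Dawoud-Kibria estimate of beta (a sketch; assumes X is centered/standardized).

    Works in the canonical form: X'X = P Lambda P', Z = X P, alpha = P'beta,
    so that W(k,d) M(k,d) reduces to the diagonal factor (lam - c) / (lam + c)
    with c = k(1 + d).
    """
    lam, P = np.linalg.eigh(X.T @ X)          # eigenvalues and orthogonal matrix P
    Z = X @ P                                  # canonical regressors
    alpha_hat = (Z.T @ y) / lam                # OLS in canonical form: Lambda^{-1} Z'y
    c = k * (1.0 + d)
    alpha_dk = (lam - c) / (lam + c) * alpha_hat
    return P @ alpha_dk                        # back-transform to beta = P alpha
```

Setting d = 0 in this sketch reduces the shrinkage factor to (λᵢ − k)/(λᵢ + k), i.e., the KL estimator.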
The main differences between the KL estimator and the proposed DK estimator are as follows:
- The KL is a one-parameter estimator, while the proposed DK is a two-parameter estimator.
- The KL estimator is obtained from the objective function $(y - Z\alpha)'(y - Z\alpha) + k[(\alpha + \hat{\alpha})'(\alpha + \hat{\alpha}) - c]$, while the proposed DK estimator is obtained from a different objective function, namely $(y - Z\alpha)'(y - Z\alpha) + k(1+d)[(\alpha + \hat{\alpha})'(\alpha + \hat{\alpha}) - c]$.
- The KL estimator is a function of the shrinkage parameter $k$ only, while the proposed DK estimator is a function of both $k$ and $d$.
- Since the KL estimator has one parameter and the proposed DK estimator has two parameters, their MSEs are different.
- In the KL estimator, only the shrinkage parameter $k$ needs to be estimated, while in the proposed DK estimator, both $k$ and $d$ need to be estimated.
- The KL estimator is a special case of the proposed DK estimator when $d = 0$, so the proposed DK estimator is the more general estimator.
The following lemmas will be used to make some theoretical comparisons among estimators in the following section.
Lemma 1 [16]. 
Let $N > 0$ and $B > 0$ (or $B \geq 0$) be $n \times n$ matrices; then $N > B$ if and only if $\lambda_{\max}(BN^{-1}) < 1$, where $\lambda_{\max}(BN^{-1})$ is the maximum eigenvalue of the matrix $BN^{-1}$.
Lemma 2 [17]. 
Let $B$ be an $n \times n$ positive definite matrix, that is, $B > 0$, and let $\alpha$ be some vector; then $B - \alpha\alpha' > 0$ if and only if $\alpha'B^{-1}\alpha < 1$.
Lemma 3 [18]. 
Let $\hat{\alpha}_i = B_i y$, $i = 1, 2$, be two linear estimators of $\alpha$. Suppose that $D = \mathrm{Cov}(\hat{\alpha}_1) - \mathrm{Cov}(\hat{\alpha}_2) > 0$, where $\mathrm{Cov}(\hat{\alpha}_i)$, $i = 1, 2$, is the covariance matrix of $\hat{\alpha}_i$, and let $b_i = \mathrm{Bias}(\hat{\alpha}_i) = (B_i X - I)\alpha$, $i = 1, 2$. Consequently,
$$ \Delta(\hat{\alpha}_1, \hat{\alpha}_2) = \mathrm{MSEM}(\hat{\alpha}_1) - \mathrm{MSEM}(\hat{\alpha}_2) = \sigma^2 D + b_1 b_1' - b_2 b_2' > 0 $$
if and only if $b_2'[\sigma^2 D + b_1 b_1']^{-1} b_2 < 1$, where $\mathrm{MSEM}(\hat{\alpha}_i) = \mathrm{Cov}(\hat{\alpha}_i) + b_i b_i'$.
The rest of this article is organized as follows: In Section 2, we give the theoretical comparisons among the abovementioned estimators and derive the biasing parameters of the proposed DK estimator. A simulation study is conducted in Section 3. Two numerical examples are illustrated in Section 4. Finally, some concluding remarks are given in Section 5.

2. Comparison among the Estimators

2.1. Theoretical Comparisons between the Proposed DK Estimator and the OLS, ORR, Liu, KL, TP, and NTP Estimators

Theorem 1.
The proposed estimator $\hat{\alpha}_{DK}$ is superior to the estimator $\hat{\alpha}$ if and only if
$$ \alpha'[W(k,d)M(k,d) - I_p]'\big[\sigma^2(\Lambda^{-1} - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)')\big]^{-1}[W(k,d)M(k,d) - I_p]\alpha < 1. $$
Proof. 
The difference of the dispersion matrices is given by
$$ D(\hat{\alpha}) - D(\hat{\alpha}_{DK}) = \sigma^2(\Lambda^{-1} - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)') = \sigma^2 \, \mathrm{diag}\left\{ \frac{1}{\lambda_i} - \frac{(\lambda_i - k(1+d))^2}{\lambda_i(\lambda_i + k(1+d))^2} \right\}_{i=1}^{p}, $$
where
$$ \Lambda^{-1} - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)' $$
will be positive definite (pd) if and only if
$$ (\lambda_i + k(1+d))^2 - (\lambda_i - k(1+d))^2 > 0. $$
We observe that for $k > 0$ and $0 < d < 1$,
$$ (\lambda_i + k(1+d))^2 - (\lambda_i - k(1+d))^2 = 4k(1+d)\lambda_i > 0. $$
Consequently, $\Lambda^{-1} - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)'$ is positive definite. □
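The equivalence in Theorem 1 can be checked numerically. The snippet below uses made-up values of $\lambda_i$, $\alpha$, $\sigma^2$, $k$, and $d$ (illustrative only, not taken from the paper) and verifies that the bias condition and the positive definiteness of the MSEM difference agree, as Lemma 2 implies.

```python
import numpy as np

# Illustrative values only (not from the paper)
lam = np.array([44676.2, 5965.4, 810.0, 105.4])   # eigenvalues of X'X
alpha = np.array([1.5, 0.5, 0.1, -0.14])          # canonical coefficients
sigma2, k, d = 1.0, 0.1, 0.3
c = k * (1.0 + d)

# Dispersion (variance) difference: sigma^2 * (1/lam - (lam-c)^2 / (lam (lam+c)^2))
disp_diff = sigma2 * (1.0 / lam - (lam - c) ** 2 / (lam * (lam + c) ** 2))
print(np.all(disp_diff > 0))          # True: equals 4 sigma^2 k(1+d) / (lam + c)^2

# Bias of the DK estimator: (W(k,d) M(k,d) - I_p) alpha, diagonal in this basis
bias = ((lam - c) / (lam + c) - 1.0) * alpha

# Theorem 1 condition and the MSEM difference it characterizes
condition = bias @ np.diag(1.0 / disp_diff) @ bias          # quadratic form < 1 ?
mse_diff = np.diag(disp_diff) - np.outer(bias, bias)        # MSEM(OLS) - MSEM(DK)
print(condition < 1, np.all(np.linalg.eigvalsh(mse_diff) > 0))  # agree by Lemma 2
```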
Theorem 2.
When $\lambda_{\max}(HG^{-1}) < 1$, the proposed estimator $\hat{\alpha}_{DK}$ is superior to the estimator $\hat{\alpha}(k)$ if and only if
$$ \alpha'[W(k,d)M(k,d) - I_p]'\big[V_1 + (W(k) - I_p)\alpha\alpha'(W(k) - I_p)'\big]^{-1}[W(k,d)M(k,d) - I_p]\alpha < 1, $$
where
$$ V_1 = \sigma^2(W(k)\Lambda^{-1}W(k)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)'), $$
$$ H = W(k,d)\Lambda W(k,d)' + k^2(1+d)^2 W(k,d)\Lambda^{-1}W(k,d)', $$
$$ G = W(k)\Lambda W(k)' + 2k(1+d)W(k,d)W(k,d)'. $$
Proof. 
$$ V_1 = \sigma^2(W(k)\Lambda^{-1}W(k)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)') $$
$$ \phantom{V_1} = \sigma^2\big(W(k)\Lambda^{-1}W(k)' - W(k,d)(I_p - k(1+d)\Lambda^{-1})\Lambda^{-1}(I_p - k(1+d)\Lambda^{-1})'W(k,d)'\big) $$
$$ \phantom{V_1} = \sigma^2\Lambda^{-1}\big(W(k)\Lambda W(k)' + 2k(1+d)W(k,d)W(k,d)' - (W(k,d)\Lambda W(k,d)' + k^2(1+d)^2 W(k,d)\Lambda^{-1}W(k,d)')\big)\Lambda^{-1} $$
$$ \phantom{V_1} = \sigma^2\Lambda^{-1}(G - H)\Lambda^{-1}. $$
It is clear that for $k > 0$ and $0 < d < 1$, $G > 0$ and $H > 0$. It follows that $G - H > 0$ if and only if
$$ \lambda_{\max}(HG^{-1}) < 1, $$
where $\lambda_{\max}(HG^{-1})$ is the maximum eigenvalue of the matrix $HG^{-1}$. Consequently, $V_1$ is positive definite. □
Theorem 3.
The proposed estimator $\hat{\alpha}_{DK}$ is superior to the estimator $\hat{\alpha}(d)$ if and only if
$$ \alpha'[W(k,d)M(k,d) - I_p]'\big[V_2 + (1-d)^2(\Lambda + I_p)^{-1}\alpha\alpha'(\Lambda + I_p)^{-1}\big]^{-1}[W(k,d)M(k,d) - I_p]\alpha < 1, $$
where $V_2 = \sigma^2(F(d)\Lambda^{-1}F(d)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)')$.
Proof. 
Using the difference between the dispersion matrices,
$$ V_2 = \sigma^2(F(d)\Lambda^{-1}F(d)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)') = \sigma^2 \, \mathrm{diag}\left\{ \frac{(\lambda_i + d)^2}{\lambda_i(\lambda_i + 1)^2} - \frac{(\lambda_i - k(1+d))^2}{\lambda_i(\lambda_i + k(1+d))^2} \right\}_{i=1}^{p}, $$
where
$$ F(d)\Lambda^{-1}F(d)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)' $$
will be pd if and only if
$$ (\lambda_i + k(1+d))^2(\lambda_i + d)^2 - (\lambda_i - k(1+d))^2(\lambda_i + 1)^2 > 0, \quad \text{or} \quad (\lambda_i + k(1+d))(\lambda_i + d) - (\lambda_i - k(1+d))(\lambda_i + 1) > 0. $$
So, if $k > 0$ and $0 < d < 1$, $(\lambda_i + k(1+d))(\lambda_i + d) - (\lambda_i - k(1+d))(\lambda_i + 1) = k(1+d)(2\lambda_i + d + 1) + \lambda_i(d - 1) > 0$. Consequently,
$$ F(d)\Lambda^{-1}F(d)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)' $$
is positive definite. □
Theorem 4.
The proposed estimator $\hat{\alpha}_{DK}$ is superior to the estimator $\hat{\alpha}_{KL}$ if and only if
$$ \alpha'[W(k,d)M(k,d) - I_p]'\big[V_3 + [W(k)M(k) - I_p]\alpha\alpha'[W(k)M(k) - I_p]'\big]^{-1}[W(k,d)M(k,d) - I_p]\alpha < 1, $$
where $V_3 = \sigma^2(W(k)M(k)\Lambda^{-1}M(k)'W(k)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)')$.
Proof. 
Using the difference between the dispersion matrices,
$$ V_3 = \sigma^2(W(k)M(k)\Lambda^{-1}M(k)'W(k)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)') = \sigma^2 \, \mathrm{diag}\left\{ \frac{(\lambda_i - k)^2}{\lambda_i(\lambda_i + k)^2} - \frac{(\lambda_i - k(1+d))^2}{\lambda_i(\lambda_i + k(1+d))^2} \right\}_{i=1}^{p}, $$
where
$$ W(k)M(k)\Lambda^{-1}M(k)'W(k)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)' $$
will be pd if and only if
$$ (\lambda_i + k(1+d))^2(\lambda_i - k)^2 - (\lambda_i - k(1+d))^2(\lambda_i + k)^2 > 0, \quad \text{or} \quad (\lambda_i + k(1+d))(\lambda_i - k) - (\lambda_i - k(1+d))(\lambda_i + k) > 0. $$
Obviously, for $k > 0$ and $0 < d < 1$,
$$ (\lambda_i + k(1+d))(\lambda_i - k) - (\lambda_i - k(1+d))(\lambda_i + k) = 2kd\lambda_i > 0. $$
Consequently,
$$ W(k)M(k)\Lambda^{-1}M(k)'W(k)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)' $$
is positive definite. □
Theorem 5.
The proposed estimator $\hat{\alpha}_{DK}$ is superior to the estimator $\hat{\alpha}_{TP}$ if and only if
$$ \alpha'[W(k,d)M(k,d) - I_p]'\big[V_4 + (R - I_p)\alpha\alpha'(R - I_p)'\big]^{-1}[W(k,d)M(k,d) - I_p]\alpha < 1, $$
where $V_4 = \sigma^2(R\Lambda^{-1}R' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)')$.
Proof.  
$$ V_4 = \sigma^2(R\Lambda^{-1}R' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)') = \sigma^2 \, \mathrm{diag}\left\{ \frac{(\lambda_i + kd)^2}{\lambda_i(\lambda_i + k)^2} - \frac{(\lambda_i - k(1+d))^2}{\lambda_i(\lambda_i + k(1+d))^2} \right\}_{i=1}^{p}, $$
where
$$ R\Lambda^{-1}R' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)' $$
will be positive definite if and only if
$$ (\lambda_i + kd)^2(\lambda_i + k(1+d))^2 - (\lambda_i + k)^2(\lambda_i - k(1+d))^2 > 0, \quad \text{or} \quad (\lambda_i + kd)(\lambda_i + k(1+d)) - (\lambda_i + k)(\lambda_i - k(1+d)) > 0. $$
Clearly, for $k > 0$ and $0 < d < 1$, $(\lambda_i + kd)(\lambda_i + k(1+d)) - (\lambda_i + k)(\lambda_i - k(1+d)) = \lambda_i k(3d + 1) + k^2(1+d)^2 > 0$. Consequently, $R\Lambda^{-1}R' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)'$ is pd. □
Theorem 6.
The proposed estimator $\hat{\alpha}_{DK}$ is superior to the estimator $\hat{\alpha}_{NTP}$ if and only if
$$ \alpha'[W(k,d)M(k,d) - I_p]'\big[V_5 + (F(d)W(k) - I_p)\alpha\alpha'(F(d)W(k) - I_p)'\big]^{-1}[W(k,d)M(k,d) - I_p]\alpha < 1, $$
where $V_5 = \sigma^2(F(d)W(k)\Lambda^{-1}W(k)'F(d)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)')$.
Proof. 
$$ V_5 = \sigma^2(F(d)W(k)\Lambda^{-1}W(k)'F(d)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)') = \sigma^2 \, \mathrm{diag}\left\{ \frac{\lambda_i(\lambda_i + d)^2}{(\lambda_i + 1)^2(\lambda_i + k)^2} - \frac{(\lambda_i - k(1+d))^2}{\lambda_i(\lambda_i + k(1+d))^2} \right\}_{i=1}^{p}, $$
where
$$ F(d)W(k)\Lambda^{-1}W(k)'F(d)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)' $$
will be pd if and only if
$$ \lambda_i^2(\lambda_i + d)^2(\lambda_i + k(1+d))^2 - (\lambda_i + 1)^2(\lambda_i + k)^2(\lambda_i - k(1+d))^2 > 0, \quad \text{or} \quad \lambda_i(\lambda_i + d)(\lambda_i + k(1+d)) - (\lambda_i + 1)(\lambda_i + k)(\lambda_i - k(1+d)) > 0. $$
Clearly, for $k > 0$ and $0 < d < 1$,
$$ \lambda_i(\lambda_i + d)(\lambda_i + k(1+d)) - (\lambda_i + 1)(\lambda_i + k)(\lambda_i - k(1+d)) = \lambda_i^2(k(1+2d) + d - 1) + \lambda_i(kd(2 + d + k) + k^2) + k^2(1+d) > 0. $$
Consequently, $F(d)W(k)\Lambda^{-1}W(k)'F(d)' - W(k,d)M(k,d)\Lambda^{-1}M(k,d)'W(k,d)'$ is positive definite. □

2.2. Determination of the Parameters k and d

Since both biasing parameters k and d are unknown and need to be estimated from the observed data, we will give a short discussion on the estimation of the parameters in this subsection. The biasing parameter k in the ORR estimator and the biasing parameter d in the Liu estimator were derived by Hoerl and Kennard [1] and Liu [6], respectively. Different authors for different kinds of models have proposed different estimators of k and d : to mention a few, Hoerl et al. [19], Kibria [20], Kibria and Banik [21], Lukman and Ayinde [22], Mansson et al. [23], and Khalaf and Shukur [24], among others.
Now, we discuss the estimation of the optimal values of $k$ and $d$ for the proposed DK estimator. First, we assume that $d$ is fixed; then the optimal value of $k$ can be obtained by minimizing
$$ \mathrm{MSEM}(\hat{\alpha}_{DK}) = E\big((\hat{\alpha}_{DK} - \alpha)(\hat{\alpha}_{DK} - \alpha)'\big), $$
$$ m(k,d) = \mathrm{tr}\big(\mathrm{MSEM}(\hat{\alpha}_{DK})\big), $$
$$ m(k,d) = \sigma^2 \sum_{i=1}^{p} \frac{(\lambda_i - k(1+d))^2}{\lambda_i(\lambda_i + k(1+d))^2} + 4k^2(1+d)^2 \sum_{i=1}^{p} \frac{\alpha_i^2}{(\lambda_i + k(1+d))^2}. $$
Differentiating $m(k,d)$ with respect to $k$ and setting $\partial m(k,d)/\partial k = 0$, we obtain
$$ k = \frac{\sigma^2}{(1+d)\left(\sigma^2/\lambda_i + 2\alpha_i^2\right)}. \qquad (33) $$
Since the optimal value of $k$ in (33) depends on the unknown parameters $\sigma^2$ and $\alpha_i^2$, we replace them with their corresponding unbiased estimators. Consequently, we have
$$ \hat{k} = \frac{\hat{\sigma}^2}{(1+d)\left(\hat{\sigma}^2/\lambda_i + 2\hat{\alpha}_i^2\right)} $$
and
$$ \hat{k}_{\min(DK)} = \min\left\{ \frac{\hat{\sigma}^2}{(1+d)\left(\hat{\sigma}^2/\lambda_i + 2\hat{\alpha}_i^2\right)} \right\}_{i=1}^{p}. \qquad (35) $$
Furthermore, the optimal value of $d$ can be obtained by differentiating $m(k,d)$ with respect to $d$ for a fixed $k$ and setting $\partial m(k,d)/\partial d = 0$, which gives
$$ d = \frac{\sigma^2 \lambda_i}{m} - 1, $$
where $m = k(\sigma^2 + 2\lambda_i \alpha_i^2)$.
Replacing the unknown parameters by their estimators, the estimated optimal $d$ is
$$ \hat{d} = \frac{\hat{\sigma}^2 \lambda_i}{\hat{m}} - 1, $$
where $\hat{m} = \hat{k}(\hat{\sigma}^2 + 2\lambda_i \hat{\alpha}_i^2)$.
In addition,
$$ \hat{d}_{\min(DK)} = \min\left\{ \frac{\hat{\sigma}^2 \lambda_i}{\hat{k}_{\min(DK)}(\hat{\sigma}^2 + 2\lambda_i \hat{\alpha}_i^2)} - 1 \right\}_{i=1}^{p}. \qquad (38) $$
The parameters $k$ and $d$ in $\hat{\alpha}_{DK}$ are determined iteratively as follows (a code sketch of these steps is given after the list):
Step 1: Obtain an initial estimate of $d$ using $\hat{d} = \min(\hat{\sigma}^2/\hat{\alpha}_i^2)$.
Step 2: Obtain $\hat{k}_{\min(DK)}$ from (35) using $\hat{d}$ from Step 1.
Step 3: Estimate $\hat{d}_{\min(DK)}$ in (38) by using $\hat{k}_{\min(DK)}$ from Step 2.
Step 4: If $\hat{d}_{\min(DK)}$ is not between 0 and 1, use $\hat{d}_{\min(DK)} = \hat{d}$.
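The following is a minimal NumPy sketch of Steps 1–4, assuming the eigenvalues, the canonical OLS estimates, and the residual variance estimate are already available; the function and variable names are ours, and Step 1 follows our reading of the initial formula $\hat{d} = \min(\hat{\sigma}^2/\hat{\alpha}_i^2)$.

```python
import numpy as np

def dk_biasing_parameters(lam, alpha_hat, sigma2_hat):
    """Iterative choice of (k, d) for the DK estimator, following Steps 1-4 above."""
    # Step 1: initial estimate of d
    d = np.min(sigma2_hat / alpha_hat ** 2)
    # Step 2: k_min(DK) from Equation (35) with the initial d
    k = np.min(sigma2_hat / ((1.0 + d) * (sigma2_hat / lam + 2.0 * alpha_hat ** 2)))
    # Step 3: d_min(DK) from Equation (38) with k from Step 2
    d_new = np.min(sigma2_hat * lam / (k * (sigma2_hat + 2.0 * lam * alpha_hat ** 2)) - 1.0)
    # Step 4: keep the refined d only if it falls in (0, 1)
    if 0.0 < d_new < 1.0:
        d = d_new
    return k, d
```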
Additionally, Hoerl et al. [19] defined the biasing parameter $k$ for the ORR estimator as
$$ \hat{k} = \frac{p\hat{\sigma}^2}{\sum_{i=1}^{p}\hat{\alpha}_i^2}. $$
The biasing parameter $d$ given by Ozkale and Kaciranlar [8] and adopted for the Liu estimator is
$$ \hat{d} = \min\left( \frac{\hat{\sigma}^2}{\hat{\alpha}_i^2 + \hat{\sigma}^2/\lambda_i} \right). $$
Then, Kibria and Lukman [15] proposed the biasing parameter estimator for the KL estimator as
$$ \hat{k}_{\min} = \min\left( \frac{\hat{\sigma}^2}{2\hat{\alpha}_i^2 + \hat{\sigma}^2/\lambda_i} \right). $$
In addition, $\hat{k}_{\min}$ of the KL estimator is also obtained by setting $d = 0$ in the derived biasing parameter estimator $\hat{k}_{\min(DK)}$ for the proposed DK estimator.
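For comparison, the biasing parameters of the competing estimators can be computed in the same way. The sketch below implements the three formulas above (the Hoerl–Kennard–Baldwin $\hat{k}$ for ORR, the Ozkale–Kaciranlar $\hat{d}$ adopted for the Liu estimator, and the Kibria–Lukman $\hat{k}_{\min}$); the function and variable names are ours.

```python
import numpy as np

def classical_biasing_parameters(lam, alpha_hat, sigma2_hat):
    """Biasing parameters of the competing estimators (a sketch)."""
    k_orr = lam.size * sigma2_hat / np.sum(alpha_hat ** 2)                 # Hoerl et al. [19]
    d_liu = np.min(sigma2_hat / (alpha_hat ** 2 + sigma2_hat / lam))       # Ozkale and Kaciranlar [8]
    k_kl = np.min(sigma2_hat / (2.0 * alpha_hat ** 2 + sigma2_hat / lam))  # Kibria and Lukman [15]
    return k_orr, d_liu, k_kl
```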

3. Simulation Study

To support the theoretical comparisons of the estimators, a Monte Carlo simulation study is conducted in this section. The section contains (i) the simulation technique and (ii) a discussion of the results.

3.1. Simulation Technique

Following Gibbons [25] and Kibria [20], we generated the explanatory variables using the following equation:
$$ x_{ij} = (1 - \rho^2)^{1/2} z_{ij} + \rho z_{i,p+1}, \quad i = 1, 2, \ldots, n, \quad j = 1, 2, \ldots, p, $$
where $z_{ij}$ are independent standard normal pseudo-random numbers and $\rho$ represents the correlation between any two explanatory variables; it is taken here to be 0.90 and 0.99. We consider $p = 3$ in the simulation. These variables are standardized so that $X'X$ and $X'y$ are in correlation form. The $n$ observations of the dependent variable $y$ are generated by the following equation:
$$ y_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} + e_i, \quad i = 1, 2, \ldots, n, $$
where the $e_i$ are i.i.d. $N(0, \sigma^2)$. The values of $\beta$ are chosen such that $\beta'\beta = 1$ [26]. Since we aimed to compare the performance of the DK estimator with the OLS, ORR, Liu, KL, TP, and NTP estimators, we chose $k$ (0.3, 0.6, 0.9) between 0 and 1, as did Wichern and Churchill [27] and Kan et al. [28], where ORR gives better results, and $d$ (0.2, 0.5, 0.8). The simulation was replicated 1000 times for the sample sizes $n$ = 50 and 100 and $\sigma^2$ = 1, 25, and 100. We computed the mean squared error (MSE) of the estimators over the 1000 replications using the equation below:
$$ \mathrm{MSE}(\alpha^*) = \frac{1}{1000} \sum_{j=1}^{1000} (\alpha^*_{ij} - \alpha_i)'(\alpha^*_{ij} - \alpha_i), $$
where $\alpha^*_{ij}$ denotes the estimate in the $j$-th replication and $\alpha_i$ denotes the true parameter values. The estimated MSEs of the estimators are shown in Table 1, Table 2, Table 3 and Table 4.
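A condensed NumPy sketch of this design is given below for OLS and the proposed DK estimator only; the other estimators are added analogously. The standardization of $X$ to correlation form is omitted for brevity, and all names are ours, so the numbers it produces are only indicative of the procedure, not a reproduction of Tables 1–4.

```python
import numpy as np

def simulate_mse(n=50, p=3, rho=0.9, sigma=5.0, k=0.3, d=0.2, reps=1000, seed=0):
    """Monte Carlo MSE of OLS and DK under the Section 3.1 design (a sketch)."""
    rng = np.random.default_rng(seed)
    beta = np.ones(p) / np.sqrt(p)                      # beta'beta = 1
    sse = {"OLS": 0.0, "DK": 0.0}
    for _ in range(reps):
        z = rng.standard_normal((n, p + 1))
        X = np.sqrt(1.0 - rho ** 2) * z[:, :p] + rho * z[:, [p]]  # correlated regressors
        y = X @ beta + sigma * rng.standard_normal(n)
        lam, P = np.linalg.eigh(X.T @ X)
        alpha = P.T @ beta                              # true canonical coefficients
        alpha_hat = ((X @ P).T @ y) / lam               # OLS in canonical form
        c = k * (1.0 + d)
        alpha_dk = (lam - c) / (lam + c) * alpha_hat    # DK estimate
        sse["OLS"] += np.sum((alpha_hat - alpha) ** 2)
        sse["DK"] += np.sum((alpha_dk - alpha) ** 2)
    return {name: total / reps for name, total in sse.items()}

# Example call: print(simulate_mse(n=50, rho=0.99, sigma=10.0, k=0.9, d=0.8))
```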

3.2. Simulation Results Discussion

From Table 1, Table 2, Table 3 and Table 4, it appears that as σ and ρ increase, the estimated MSE values increase, while as n increases, the estimated MSE values decrease. As expected, when the multicollinearity problem exists, the OLS estimator gives the highest MSE values and performs the worst among all estimators. Additionally, the results show that the proposed DK estimator performs better than the rest of the estimators, followed by the NTP and KL estimators, most of the time under all conditions. The NTP estimator gives better results in terms of MSE when d and k are near zero. The proposed DK estimator always performs better than the KL estimator. The NTP estimator's performance lies between those of the KL and DK estimators most of the time, while the KL estimator's performance lies between those of the NTP and the proposed DK estimators some of the time. Thus, the simulation results are consistent with the theoretical results.
To see the effect of various parameters on MSE, we plotted MSE vs. the parameters in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6.
It appears from Figure 1 that as ρ increases, the MSE values of the estimators increase for σ = 10, n = 50, k = 0.9, and d = 0.8, and the proposed DK estimator has the smallest MSE value among all estimators.
Figure 2 shows that as n increases, the MSE values of the estimators decrease for σ = 10, ρ = 0.99, k = 0.9, and d = 0.8, and the proposed DK estimator has the smallest MSE value among all estimators.
Figure 3 shows the behavior of the estimators with respect to σ: as σ increases, the MSE values of the estimators increase for n = 100, ρ = 0.99, k = 0.9, and d = 0.8, as well as for other values of these factors.
Figure 4 shows the behavior of the estimators for different values of d when k = 0.3 . It is evident from Figure 4 that the proposed DK estimator gives the smallest MSE values when d is greater than 0.3, while the NTP estimator gives better results when d is less than 0.3 for n = 100, ρ = 0.99, σ = 10, and for other values of these factors.
Figure 5 shows the behavior of the estimators for different values of k when d = 0.5 , such that the proposed DK estimator gives the smallest MSE values among all other estimators for n = 100, ρ = 0.99, σ = 10, and for other values of these factors.
Figure 6 shows the behavior of the estimators for different values of k when d = 0.8 , such that the proposed DK estimator gives the smallest MSE values among all other estimators for n = 100, ρ = 0.99, σ = 10, and for other values of these factors.

4. Application

4.1. Portland Cement Data

We use the Portland cement data, originally presented by Woods et al. [29] to explain their theoretical results. The data have been analyzed by various researchers: to mention a few, Kaciranlar et al. [30], Li and Yang [31], Lukman et al. [13], and, recently, Kibria and Lukman [15], among others.
The regression model for these data is defined as
$$ y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \beta_3 x_{i3} + \beta_4 x_{i4} + \varepsilon_i. $$
For more details about these data, see Woods et al. [29].
The variance inflation factors are $\mathrm{VIF}_1 = 38.50$, $\mathrm{VIF}_2 = 254.42$, $\mathrm{VIF}_3 = 46.87$, and $\mathrm{VIF}_4 = 282.51$. The eigenvalues of $S$ are $\lambda_1 = 44676.206$, $\lambda_2 = 5965.422$, $\lambda_3 = 809.952$, and $\lambda_4 = 105.419$, and the condition number of $S$ is approximately 20.58. The VIFs, the eigenvalues, and the condition number all indicate that severe multicollinearity exists. The estimated parameters and the MSE values of the estimators are presented in Table 5. It appears from Table 5 that the proposed DK estimator performs the best among the mentioned estimators as it gives the smallest MSE value.
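The collinearity diagnostics reported here (VIFs, eigenvalues, and the condition number of $S$) can be reproduced from the design matrix alone. A minimal sketch follows, assuming $X$ holds the explanatory variables; the function name is ours, and the condition number is computed as $\sqrt{\lambda_{\max}/\lambda_{\min}}$, which matches the value of about 20.58 reported above.

```python
import numpy as np

def collinearity_diagnostics(X):
    """VIFs, eigenvalues of X'X and the condition number (a sketch)."""
    R = np.corrcoef(X, rowvar=False)          # correlation matrix of the regressors
    vif = np.diag(np.linalg.inv(R))           # VIF_j is the j-th diagonal element of R^{-1}
    lam = np.linalg.eigvalsh(X.T @ X)[::-1]   # eigenvalues of S = X'X, largest first
    cond = np.sqrt(lam[0] / lam[-1])          # condition number
    return vif, lam, cond
```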

4.2. Longley Data

The Longley data were originally used by Longley [32] and then by other authors (Yasin and Murat [33]; Lukman and Ayinde [22]). The regression model for these data is defined as
$$ y = \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_5 x_5 + \beta_6 x_6 + \varepsilon. $$
For more details about these data, see Longley [32].
The variance inflation factors are $\mathrm{VIF}_1 = 135.53$, $\mathrm{VIF}_2 = 1788.51$, $\mathrm{VIF}_3 = 33.62$, $\mathrm{VIF}_4 = 3.59$, $\mathrm{VIF}_5 = 399.15$, and $\mathrm{VIF}_6 = 758.98$. The eigenvalues of $S$ are as follows: $2.76779 \times 10^{12}$, 7,039,139,179, 11,608,993.96, 2,504,761.021, 1738.356, and 13.309, and the condition number of $S$ is approximately 456,070. The VIFs, the eigenvalues, and the condition number all indicate that severe multicollinearity exists. The estimated parameters and the MSE values of the estimators are presented in Table 6. It appears from Table 6 that the proposed DK estimator performs the best among the mentioned estimators as it gives the smallest MSE value.

5. Summary and Concluding Remarks

In this paper, we introduced a new class of two-parameter estimators, namely, the Dawoud–Kibria (DK) estimator, to solve the multicollinearity problem for linear regression models. We theoretically compared the proposed DK estimator with some existing estimators, namely, the ordinary least squares (OLS) estimator, the ordinary ridge regression (ORR) estimator, the Liu (1993) estimator, the new ridge-type estimator of Kibria and Lukman (KL; 2020), the two-parameter (TP) estimator of Ozkale and Kaciranlar (2007), and the new two-parameter (NTP) estimator of Yang and Chang (2010), and derived the biasing parameters k and d of the proposed DK estimator. A simulation study was conducted to compare the performance of the OLS, ORR, Liu, KL, TP, NTP, and proposed DK estimators. It is evident from the simulation results that the proposed DK estimator gives better results than the rest of the estimators under some conditions. Real-life datasets were analyzed to illustrate the findings of the paper. Hopefully, the paper will be useful for practitioners in various fields.

Author Contributions

I.D.: Conceptualization, methodology, original draft preparation. B.M.G.K.: Conceptualization, Results Discussion and Review and Editing. Both authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

Authors are grateful to three anonymous referees and the editor for their valuable comments and suggestions, which certainly improved the presentation and quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 1970, 12, 55–67. [Google Scholar] [CrossRef]
  2. Stein, C. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 1954–1955; University of California Press: Berkeley, CA, USA, 1956; Volume 1, pp. 197–206; MR0084922. [Google Scholar]
  3. Massy, W.F. Principal components regression in exploratory statistical research. J. Am. Stat. Assoc. 1965, 60, 234–256. [Google Scholar] [CrossRef]
  4. Mayer, L.S.; Willke, T.A. On biased estimation in linear models. Technometrics 1973, 15, 497–508. [Google Scholar] [CrossRef]
  5. Swindel, B.F. Good ridge estimators based on prior information. Commun. Stat. Theory Methods 1976, 5, 1065–1075. [Google Scholar] [CrossRef]
  6. Liu, K. A new class of biased estimate in linear regression. Commun. Stat. Theory Methods 1993, 22, 393–402. [Google Scholar]
  7. Akdeniz, F.; Kaçiranlar, S. On the almost unbiased generalized liu estimator and unbiased estimation of the bias and mse. Commun. Stat. Theory Methods 1995, 24, 1789–1797. [Google Scholar] [CrossRef]
  8. Ozkale, M.R.; Kaçiranlar, S. The restricted and unrestricted two-parameter estimators. Commun. Stat. Theory Methods 2007, 36, 2707–2725. [Google Scholar] [CrossRef]
  9. Sakallıoglu, S.; Kaçıranlar, S. A new biased estimator based on ridge estimation. Stat. Pap. 2008, 49, 669–689. [Google Scholar] [CrossRef]
  10. Yang, H.; Chang, X. A new two-parameter estimator in linear regression. Commun. Stat. Theory Methods 2010, 39, 923–934. [Google Scholar] [CrossRef]
  11. Roozbeh, M. Optimal QR-based estimation in partially linear regression models with correlated errors using GCV criterion. Comput. Stat. Data Anal. 2018, 117, 45–61. [Google Scholar] [CrossRef]
  12. Akdeniz, F.; Roozbeh, M. Generalized difference-based weighted mixed almost unbiased ridge estimator in partially linear models. Stat. Pap. 2019, 60, 1717–1739. [Google Scholar] [CrossRef]
  13. Lukman, A.F.; Ayinde, K.; Binuomote, S.; Clement, O.A. Modified ridge-type estimator to combat multicollinearity: Application to chemical data. J. Chemom. 2019, 33, e3125. [Google Scholar] [CrossRef]
  14. Lukman, A.F.; Ayinde, K.; Sek, S.K.; Adewuyi, E. A modified new two-parameter estimator in a linear regression model. Model. Eng. Simul. 2019, 2019, 6342702. [Google Scholar] [CrossRef]
  15. Kibria, B.M.G.; Lukman, A.F. A New Ridge-Type Estimator for the Linear Regression Model: Simulations and Applications. Scientifica 2020, 2020, 9758378. [Google Scholar] [CrossRef]
  16. Wang, S.G.; Wu, M.X.; Jia, Z.Z. Matrix Inequalities; Science Chinese Press: Beijing, China, 2006. [Google Scholar]
  17. Farebrother, R.W. Further results on the mean square error of ridge regression. J. R. Stat. Soc. B 1976, 38, 248–250. [Google Scholar] [CrossRef]
  18. Trenkler, G.; Toutenburg, H. Mean squared error matrix comparisons between biased estimators-an overview of recent results. Stat. Pap. 1990, 31, 165–179. [Google Scholar] [CrossRef]
  19. Hoerl, A.E.; Kannard, R.W.; Baldwin, K.F. Ridge regression: Some simulations. Commun. Stat. 1975, 4, 105–123. [Google Scholar] [CrossRef]
  20. Kibria, B.M.G. Performance of some new ridge regression estimators. Commun. Stat. Simul. Comput. 2003, 32, 419–435. [Google Scholar] [CrossRef]
  21. Kibria, B.M.G.; Banik, S. Some ridge regression estimators and their performances. J. Mod. Appl. Stat. Methods 2016, 15, 206–238. [Google Scholar] [CrossRef]
  22. Lukman, A.F.; Ayinde, K. Review and classifications of the ridge parameter estimation techniques. Hacet. J. Math. Stat. 2017, 46, 953–967. [Google Scholar] [CrossRef]
  23. Månsson, K.; Kibria, B.M.G.; Shukur, G. Performance of some weighted Liu estimators for logit regression model: An application to Swedish accident data. Commun. Stat. Theory Methods 2015, 44, 363–375. [Google Scholar] [CrossRef]
  24. Khalaf, G.; Shukur, G. Choosing ridge parameter for regression problems. Commun. Stat. Theory Methods 2005, 21, 2227–2246. [Google Scholar] [CrossRef]
  25. Gibbons, D.G. A simulation study of some ridge estimators. J. Am. Stat. Assoc. 1981, 76, 131–139. [Google Scholar] [CrossRef]
  26. Newhouse, J.P.; Oman, S.D. An evaluation of ridge estimators. In A Report Prepared for United States Air Force Project; RAND: Santa Monica, CA, USA, 1971. [Google Scholar]
  27. Wichern, D.W.; Churchill, G.A. A comparison of ridge estimators. Technometrics 1978, 20, 301–311. [Google Scholar] [CrossRef]
  28. Kan, B.; Alpu, O.; Yazıcı, B. Robust ridge and robust Liu estimator for regression based on the LTS estimator. J. Appl. Stat. 2013, 40, 644–655. [Google Scholar] [CrossRef]
  29. Woods, H.; Steinour, H.H.; Starke, H.R. Effect of composition of Portland cement on heat evolved during hardening. J. Ind. Eng. Chem. 1932, 24, 1207–1214. [Google Scholar] [CrossRef]
  30. Kaciranlar, S.; Sakallioglu, S.; Akdeniz, F.; Styan, G.P.H.; Werner, H.J. A new biased estimator in linear regression and a detailed analysis of the widely-analysed dataset on portland cement. Sankhya Indian J. Stat. B 1999, 61, 443–459. [Google Scholar]
  31. Li, Y.; Yang, H. A new Liu-type estimator in linear regression model. Stat. Pap. 2012, 53, 427–437. [Google Scholar] [CrossRef]
  32. Longley, J.W. An appraisal of least squares programs for electronic computer from the point of view of the user. J. Am. Stat. Assoc. 1967, 62, 819–841. [Google Scholar] [CrossRef]
  33. Yasin, A.; Murat, E. Influence Diagnostics in Two-Parameter Ridge Regression. J. Data Sci. 2016, 14, 33–52. [Google Scholar]
Figure 1. MSE values versus ρ values.
Figure 2. MSE values versus n values.
Figure 3. MSE values versus σ values.
Figure 4. MSE values versus d values when k = 0.3.
Figure 5. MSE values versus k values when d = 0.5.
Figure 6. MSE values versus k values when d = 0.8.
Table 1. Estimated MSE for ordinary least squares estimator (OLS), ordinary ridge regression (ORR), Liu, Kibria–Lukman (KL), two-parameter (TP), new two-parameter (NTP), and Dawoud–Kibria (DK).
ρ = 0.90, n = 50
k    d    σ     OLS      ORR      Liu      KL       TP       NTP      DK
0.3  0.2  1     0.2136   0.2005   0.1821   0.1879   0.2031   0.1711   0.1832
0.3  0.2  5     5.3394   5.0135   4.5507   4.6982   5.0778   4.2749   4.5799
0.3  0.2  10    21.357   20.054   18.203   18.793   20.311   17.099   18.319
0.3  0.5  1     0.2136   0.2005   0.1936   0.1879   0.2070   0.1818   0.1764
0.3  0.5  5     5.3394   5.0135   4.8388   4.6982   5.1751   4.5446   4.4080
0.3  0.5  10    21.357   20.054   19.355   18.793   20.700   18.178   17.632
0.3  0.8  1     0.2136   0.2005   0.2054   0.1879   0.2109   0.1929   0.1698
0.3  0.8  5     5.3394   5.0135   5.1361   4.6982   5.2734   4.8231   4.2427
0.3  0.8  10    21.357   20.054   20.544   18.793   21.093   19.292   16.970
0.6  0.2  1     0.2136   0.1887   0.1821   0.1655   0.1936   0.1611   0.1574
0.6  0.2  5     5.3394   4.7176   4.5507   4.1361   4.8388   4.0245   3.9308
0.6  0.2  10    21.357   18.870   18.203   16.544   19.355   16.098   15.723
0.6  0.5  1     0.2136   0.1887   0.1936   0.1655   0.2009   0.1712   0.1459
0.6  0.5  5     5.3394   4.7176   4.8388   4.1361   5.0235   4.2777   3.6422
0.6  0.5  10    21.357   18.870   19.355   16.544   20.094   17.110   14.568
0.6  0.8  1     0.2136   0.1887   0.2054   0.1655   0.2085   0.1816   0.1353
0.6  0.8  5     5.3394   4.7176   5.1361   4.1361   5.2118   4.5389   3.3748
0.6  0.8  10    21.357   18.870   20.544   16.544   20.847   18.155   13.498
0.9  0.2  1     0.2136   0.1780   0.1821   0.1459   0.1848   0.1521   0.1353
0.9  0.2  5     5.3394   4.4483   4.5507   3.6422   4.6197   3.7965   3.3748
0.9  0.2  10    21.357   17.793   18.203   14.568   18.479   15.186   13.498
0.9  0.5  1     0.2136   0.1780   0.1936   0.1459   0.1953   0.1615   0.1209
0.9  0.5  5     5.3394   4.4483   4.8388   3.6422   4.8832   4.0346   3.0101
0.9  0.5  10    21.357   17.793   19.355   14.568   19.533   16.138   12.039
0.9  0.8  1     0.2136   0.1780   0.2054   0.1459   0.2062   0.1713   0.1081
0.9  0.8  5     5.3394   4.4483   5.1361   3.6422   5.1544   4.2803   2.6846
0.9  0.8  10    21.357   17.793   20.544   14.568   20.617   17.121   10.736
Minimum mean squared error (MSE) value is bolded in each row.
Table 2. Estimated MSE for OLS, ORR, Liu, KL, TP, NTP, and DK.
ρ = 0.99, n = 50
k    d    σ     OLS      ORR      Liu      KL       TP       NTP      DK
0.3  0.2  1     1.9452   1.1258   1.0786   0.6261   1.5075   0.2548   0.5308
0.3  0.2  5     48.628   28.145   26.965   15.651   37.686   6.3689   13.268
0.3  0.2  10    194.51   112.58   107.86   62.607   150.74   25.475   53.074
0.3  0.5  1     1.9452   1.1258   1.5679   0.5308   1.7633   0.9083   0.1548
0.3  0.5  5     48.628   28.145   39.197   13.268   44.083   22.706   3.8693
0.3  0.5  10    194.51   112.58   156.79   53.074   176.33   90.826   15.477
0.3  0.8  1     1.9452   0.7349   0.6813   0.1072   0.9304   0.2612   0.0457
0.3  0.8  5     48.628   18.372   17.031   2.6782   23.258   6.5262   1.1386
0.3  0.8  10    194.51   73.489   68.124   10.712   93.034   26.105   4.5545
0.6  0.2  1     1.9452   0.7349   1.0786   0.1072   1.2672   0.4101   0.0109
0.6  0.2  5     48.628   18.372   26.965   2.6782   31.680   10.251   0.2678
0.6  0.2  10    194.51   73.489   107.86   10.712   126.72   41.006   1.0709
0.6  0.5  1     1.9452   0.7349   1.5679   0.1072   1.6565   0.5935   0.0178
0.6  0.5  5     48.628   18.372   39.197   2.6782   41.412   14.837   0.4391
0.6  0.5  10    194.51   73.489   156.79   10.712   165.65   59.348   1.7561
0.6  0.8  1     1.9452   0.5184   0.6813   0.0109   0.7302   0.1859   0.0108
0.6  0.8  5     48.628   12.958   17.031   0.2678   18.254   4.6442   0.2391
0.6  0.8  10    194.51   51.834   68.124   1.0709   73.017   18.576   1.0561
0.9  0.2  1     1.9452   0.5184   1.0786   0.0109   1.1169   0.2905   0.0108
0.9  0.2  5     48.628   12.958   26.965   0.2678   27.921   7.2590   0.2118
0.9  0.2  10    194.51   51.834   107.86   1.0709   111.68   29.036   1.0684
0.9  0.5  1     1.9452   0.5184   1.5679   0.0109   1.5863   0.4192   0.0107
0.9  0.5  5     48.628   12.958   39.197   0.2678   39.656   10.477   0.2540
0.9  0.5  10    194.51   51.834   156.79   1.0709   158.62   41.909   1.0611
0.9  0.8  1     1.9452   1.1258   1.0786   0.5308   1.5075   0.6261   0.2548
0.9  0.8  5     48.628   28.145   26.965   13.268   37.686   15.651   6.3689
0.9  0.8  10    194.51   112.58   107.86   53.074   150.74   62.607   25.475
Minimum MSE value is bolded in each row.
Table 3. Estimated MSE for OLS, ORR, Liu, KL, TP, NTP, and DK.
ρ = 0.90, n = 100
k    d    σ     OLS      ORR      Liu      KL       TP       NTP      DK
0.3  0.2  1     0.1064   0.1032   0.0982   0.1000   0.1038   0.0952   0.0987
0.3  0.2  5     2.6611   2.5793   2.4538   2.4989   2.5956   2.3787   2.4678
0.3  0.2  10    10.644   10.317   9.8149   9.9956   10.382   9.5147   9.8709
0.3  0.5  1     0.1064   0.1032   0.1012   0.1000   0.1048   0.0981   0.0969
0.3  0.5  5     2.6611   2.5793   2.5305   2.4989   2.6200   2.4529   2.4218
0.3  0.5  10    10.644   10.317   10.121   9.9956   10.480   9.8116   9.6869
0.3  0.8  1     0.1064   0.1032   0.1043   0.1000   0.1058   0.1011   0.0951
0.3  0.8  5     2.6611   2.5793   2.6084   2.4989   2.6446   2.5284   2.3767
0.3  0.8  10    10.644   10.317   10.433   9.9956   10.578   10.113   9.5065
0.6  0.2  1     0.1064   0.1001   0.0982   0.0939   0.1013   0.0923   0.0916
0.6  0.2  5     2.6611   2.5015   2.4538   2.3471   2.5330   2.3072   2.2891
0.6  0.2  10    10.644   10.005   9.8149   9.3882   10.131   9.2287   9.1561
0.6  0.5  1     0.1064   0.1001   0.1012   0.0939   0.1032   0.0952   0.0882
0.6  0.5  5     2.6611   2.5015   2.5305   2.3471   2.5806   2.3791   2.2048
0.6  0.5  10    10.644   10.005   10.121   9.3882   10.322   9.5162   8.8190
0.6  0.8  1     0.1064   0.1001   0.1043   0.0939   0.1052   0.0981   0.0850
0.6  0.8  5     2.6611   2.5015   2.6084   2.3471   2.6287   2.4521   2.1238
0.6  0.8  10    10.644   10.005   10.433   9.3882   10.514   9.8084   8.4947
0.9  0.2  1     0.1064   0.0971   0.0982   0.0882   0.0989   0.0896   0.0850
0.9  0.2  5     2.6611   2.4273   2.4538   2.2048   2.4731   2.2391   2.1238
0.9  0.2  10    10.644   9.7090   9.8149   8.8190   9.8924   8.9561   8.4947
0.9  0.5  1     0.1064   0.0971   0.1012   0.0882   0.1017   0.0924   0.0804
0.9  0.5  5     2.6611   2.4273   2.5305   2.2048   2.5428   2.3087   2.0079
0.9  0.5  10    10.644   9.7090   10.121   8.8190   10.171   9.2347   8.0312
0.9  0.8  1     0.1064   0.0971   0.1043   0.0882   0.1045   0.0952   0.0761
0.9  0.8  5     2.6611   2.4273   2.6084   2.2048   2.6134   2.3795   1.8985
0.9  0.8  10    10.644   9.7090   10.433   8.8190   10.453   9.5178   7.5934
Minimum MSE value is bolded in each row.
Table 4. Estimated MSE for OLS, ORR, Liu, KL, TP, NTP, and DK.
ρ = 0.99, n = 100
k    d    σ     OLS      ORR      Liu      KL       TP       NTP      DK
0.3  0.2  1     0.9913   0.7446   0.5288   0.5341   0.7911   0.3990   0.4714
0.3  0.2  5     24.782   18.615   13.220   13.353   19.776   9.9738   11.784
0.3  0.2  10    99.128   74.463   52.882   53.412   79.107   39.895   47.136
0.3  0.5  1     0.9913   0.7446   0.6850   0.5341   0.8634   0.5158   0.3900
0.3  0.5  5     24.782   18.615   17.125   13.353   21.586   12.894   9.7508
0.3  0.5  10    99.128   74.463   68.502   53.412   86.343   51.577   39.003
0.3  0.8  1     0.9913   0.7446   0.8619   0.5341   0.9391   0.6480   0.3218
0.3  0.8  5     24.782   18.615   21.547   13.353   23.476   16.199   8.0436
0.3  0.8  10    99.128   74.463   86.188   53.412   93.905   64.796   32.174
0.6  0.2  1     0.9913   0.5811   0.5288   0.2824   0.6542   0.3125   0.2162
0.6  0.2  5     24.782   14.526   13.220   7.0598   16.354   7.8110   5.4042
0.6  0.2  10    99.128   58.107   52.882   28.239   65.419   31.243   21.616
0.6  0.5  1     0.9913   0.5811   0.6850   0.2824   0.7722   0.4033   0.1419
0.6  0.5  5     24.782   14.526   17.125   7.0598   19.306   10.081   3.5462
0.6  0.5  10    99.128   58.107   68.502   28.239   77.223   40.326   14.184
0.6  0.8  1     0.9913   0.5811   0.8619   0.2824   0.9003   0.5060   0.0901
0.6  0.8  5     24.782   14.526   21.547   7.0598   22.508   12.649   2.2524
0.6  0.8  10    99.128   58.107   86.188   28.239   90.031   50.598   9.0095
0.9  0.2  1     0.9913   0.4668   0.5288   0.1419   0.5557   0.2518   0.0901
0.9  0.2  5     24.782   11.668   13.220   3.5462   13.892   6.2937   2.2524
0.9  0.2  10    99.128   46.674   52.882   14.184   55.568   25.174   9.0095
0.9  0.5  1     0.9913   0.4668   0.6850   0.1419   0.7041   0.3245   0.0422
0.9  0.5  5     24.782   11.668   17.125   3.5462   17.601   8.1116   1.0520
0.9  0.5  10    99.128   46.674   68.502   14.184   70.406   32.446   4.2074
0.9  0.8  1     0.9913   0.4668   0.8619   0.1419   0.8704   0.4067   0.0182
0.9  0.8  5     24.782   11.668   21.547   3.5462   21.760   10.166   0.4524
0.9  0.8  10    99.128   46.674   86.188   14.184   87.040   40.667   1.8092
Minimum MSE value is bolded in each row.
Table 5. The results of regression coefficients and the corresponding MSE values.
Coef.   α̂          α̂(k̂)       α̂(d̂)       α̂_KL(k̂_min)   α̂_TP(k̂,d̂)   α̂_NTP(k̂,d̂)   α̂_DK(k̂_min,d̂_min)
α0      62.405      8.5871      27.665      27.627         32.386        3.8295        27.588
α1      1.5511      2.1046 *    1.9008 *    1.9088 *       1.8598 *      2.1459 *      1.9092 *
α2      0.5101      1.0648 *    0.8699 *    0.8685 *       0.8196 *      1.1157 *      0.8689 *
α3      0.1019      0.6680 *    0.4619      0.4678         0.4177        0.7126 *      0.4682
α4      −0.1440     0.3995 *    0.2080      0.2072         0.1592        0.4488 *      0.2076
k       —           0.007676    —           0.000471       0.007676      0.007676      0.000471
d       —           —           0.442224    —              0.442224      0.442224      0.001536
MSE     4912.090    2989.820    2170.967    2170.9604      2222.682      3450.710      2170.9602
* Coefficient is significant at 0.05.
Table 6. The results of regression coefficients and the corresponding MSE values.
Coef.   α̂           α̂(k̂)       α̂(d̂)       α̂_KL(k̂_min)   α̂_TP(k̂,d̂)   α̂_NTP(k̂,d̂)   α̂_DK(k̂_min,d̂_min)
α1      −52.994      1.0931      −49.641     −5.0190        −7.7933       1.2529        −5.0188
α2      0.0711 *     0.0526 *    0.0704 *    0.0609 *       0.0556 *      0.0525 *      0.0609 *
α3      −0.4235      −0.6457 *   −0.4316     −0.5426        −0.6092 *     −0.6464 *     −0.5427
α4      −0.5726 *    −0.5611     −0.5745     −0.5985 *      −0.5630       −0.5610       −0.5984 *
α5      −0.4142      −0.2062     −0.4083     −0.3266        −0.2404       −0.2056       −0.3267
α6      48.418 *     37.119 *    48.046 *    42.918 *       38.976 *      37.085 *      42.918 *
k       —            262.88      —           9.5600         262.88        262.88        8.2110
d       —            —           0.1643      —              0.1643        0.1643        0.1643
MSE     17095        3190.6      15183       2915.1         2945.3        3204.1        2914.7
* Coefficient is significant at 0.05.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
