Article

Comparing Parameter Estimation of Random Coefficient Autoregressive Model by Frequentist Method

by
Autcha Araveeporn
Department of Statistics, Faculty of Science, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
Submission received: 20 November 2019 / Revised: 20 December 2019 / Accepted: 20 December 2019 / Published: 2 January 2020
(This article belongs to the Section Computational and Applied Mathematics)

Abstract
This paper compares two frequentist methods, least squares and maximum likelihood, for estimating the unknown parameters of the Random Coefficient Autoregressive (RCA) model. Frequentist methods draw conclusions from observed data by emphasizing the frequency or proportion of the data through the likelihood function. The least-squares method estimates the parameters by minimizing the sum of squared residuals, with the minimum found by setting the gradient to zero. The maximum likelihood method uses the observed data to estimate the parameters of a probability distribution by maximizing the likelihood function under the statistical model; the estimator is obtained by differentiating the likelihood function with respect to the parameters. The efficiency of the two methods is assessed by the average mean square error for simulated data and by the mean square error for actual data. For the simulation, data are generated from the first-order RCA model. The results show that the least-squares method performs better than maximum likelihood: its average mean square error is the minimum in all cases, indicating its superior performance. Finally, these methods are applied to actual data. The series of monthly averages of the Stock Exchange of Thailand (SET) index and the daily exchange rate of Baht/Dollar are used for estimation and forecasting based on the RCA model. The result again shows that the least-squares method outperforms the maximum likelihood method.

1. Introduction

Time series modeling has been applied in the fields of finance, business, and economics. Time series data commonly exhibit features such as trend, volatility, stationarity, nonstationarity, and random-walk behavior, since the data form a sequence taken at successive, equally spaced points in time. Modeling time series data helps estimate parameters and forecast future values.
A model widely used to fit stationary data is the Autoregressive Moving Average (ARMA) model, which provides a parsimonious description of a weakly stationary stochastic process. When the time series shows evidence of non-stationarity, the Autoregressive Integrated Moving Average (ARIMA) model can be applied to eliminate the non-stationarity. These models, however, can overspecify the model and require estimating an integration parameter. The Conditional Heteroscedastic Autoregressive Moving Average (CHARMA) model [1] is an alternative when volatility arises. A related model is the Random Coefficient Autoregressive (RCA) model studied by Nicholls and Quinn [2]. The RCA model concentrates on the past of the time series to determine the order and to obtain estimates of the unknown parameters as a volatility model.
For parameter estimation, there has been increasing interest in the unknown parameters of the RCA model. Nicholls and Quinn [3] obtained estimates of the unknown parameters by means of a two-stage regression procedure; the estimator was shown to be strongly consistent and to satisfy a central limit theorem. Hwang and Basawa [4] studied the generalized random coefficient autoregressive process, in which an error process is introduced; conditional least squares and weighted least squares estimators were derived to estimate the unknown parameters. Aue, Horvath, and Steinebach [5] proposed the quasi-maximum likelihood method to estimate the parameters of an RCA model of order 1, deriving strong consistency and asymptotic normality of the estimator.
Under the frequentist approach, parameters and hypotheses are viewed as unknown but fixed quantities, so there is no possibility of making probability statements about them [6]. The most popular frequentist methods used by statisticians for estimating parameters in many models are Least Squares (LS) and Maximum Likelihood (ML). The least-squares method minimizes the sum of squared residuals, with the unknown parameters obtained by differentiation. The maximum likelihood method is a common, flexible approach to point estimation: the likelihood function of the observed data is maximized, again by differentiation, as in the least-squares method.
The frequentist approach has been applied to the random coefficient model [7], for which two estimation methods for the coefficients of the explanatory variables were proposed, namely generalized least squares and a maximum likelihood estimator for the covariance matrix. The random coefficient model involves dependent and explanatory variables, whereas the random coefficient autoregressive model involves just one variable. This study considers frequentist estimation of the RCA model parameters based on the least-squares and maximum likelihood methods. Performance is measured by the Average Mean Square Error (AMSE) on simulated data and the Mean Square Error (MSE) on real data.

2. The Random Coefficient Autoregressive (RCA) Model

The Random Coefficient Autoregressive (RCA) model of order $p$ [2] is written as
$$x_t = \alpha + \sum_{i=1}^{p} \beta_{ti} x_{t-i} + \varepsilon_t, \quad t = 2, 3, \ldots, n. \tag{1}$$
For Equation (1), the following assumptions are made.
(i) $\{\varepsilon_t;\ t = 2, 3, \ldots, n\}$ is an independent sequence of random variables with mean zero and covariance matrix $G$.
(ii) $\alpha$ and $\underline{\beta}_t = (\beta_{t1}, \beta_{t2}, \ldots, \beta_{tp})$ are constant.
(iii) $\underline{\beta}_t = (\beta_{t1}, \beta_{t2}, \ldots, \beta_{tp})$ is an independent sequence with mean zero and covariance matrix $C$.
(iv) $\underline{\beta}_t = (\beta_{t1}, \beta_{t2}, \ldots, \beta_{tp})$ is also independent of $\{\varepsilon_t;\ t = 2, 3, \ldots, n\}$.
Wang and Ghosh [8] suggested
$$\underline{\beta}_t = \underline{\mu}_\beta + \Omega_\beta\, \underline{u}_t,$$
where $\alpha$ is a constant, and $\underline{\beta}_t = (\beta_{t1}, \beta_{t2}, \ldots, \beta_{tp})$ are independent random vectors with mean $\underline{\mu}_\beta = (\mu_{t1}, \mu_{t2}, \ldots, \mu_{tp})$ and covariance matrix $\Omega_\beta$.
In this paper, we focus on the simplest case, the first-order model RCA1:
$$x_t = \alpha + \beta_{t1} x_{t-1} + \varepsilon_t, \quad t = 2, 3, \ldots, n, \qquad \beta_{t1} = \mu_\beta + \sigma_\beta v_t,$$
where the $\beta_{t1}$'s are iid (independent and identically distributed) random variables with mean $\mu_\beta$ and variance $\sigma_\beta^2$, and the $\varepsilon_t$'s are iid random variables with mean zero and variance $\sigma_\varepsilon^2$. The RCA1 model can be rewritten as
$$x_t = \alpha + \beta_{t1} x_{t-1} + \varepsilon_t = \alpha + \mu_\beta x_{t-1} + u_t, \tag{2}$$
where $u_t = \sigma_\beta v_t x_{t-1} + \varepsilon_t$, and $v_t$ is a random variable with mean zero and unit variance, independent of $\varepsilon_t$.
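To make the data-generating process concrete, the following is a minimal sketch (not from the paper) of simulating an RCA1 series. The function name `simulate_rca1` is ours, and drawing $v_t$ and $\varepsilon_t$ from standard normal distributions is an illustrative assumption, consistent with the normal likelihood used in Section 3.2.

```python
import numpy as np

def simulate_rca1(n, alpha, mu_beta, sigma_eps2, sigma_beta2, x0=0.0, seed=None):
    """Simulate x_t = alpha + beta_t * x_{t-1} + eps_t with
    beta_t = mu_beta + sigma_beta * v_t (v_t, eps_t assumed standard normal)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        beta_t = mu_beta + np.sqrt(sigma_beta2) * rng.standard_normal()
        eps_t = np.sqrt(sigma_eps2) * rng.standard_normal()
        x[t] = alpha + beta_t * x[t - 1] + eps_t
    return x
```

Note that the noise $u_t = \sigma_\beta v_t x_{t-1} + \varepsilon_t$ is heteroscedastic: its conditional variance grows with $x_{t-1}^2$, which is what makes the RCA model useful for volatile series.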

3. Parameter Estimations

For parameter estimation of the RCA1 model, we present the least-squares method and the maximum likelihood method.

3.1. Least-Squares Method

The classical method for parameter estimation by model fitting is the least-squares method. Araveeporn [9] proposed least-squares criteria to estimate the parameters of the random coefficient dynamic regression model; the proposed estimator is asymptotically unbiased. Hsiao [7] proposed the random coefficient model consisting of the dependent variable $y_{it}$ for cross-section unit $i$ and the explanatory variables $x_{ikt}$. The model is
$$y_{it} = \sum_{k=1}^{K} (\beta_k + \delta_{ik} + \gamma_{kt})\, x_{ikt} + \varepsilon_{it}, \quad i = 1, \ldots, N;\ t = 1, \ldots, T. \tag{4}$$
Equation (4) can be written in matrix form as
$$\underline{Y} = \underline{X}\beta + X\underline{\delta} + \tilde{X}\underline{\gamma} + \tilde{\varepsilon} = \underline{X}\beta + \underline{e},$$
where $\underline{e} = X\underline{\delta} + \tilde{X}\underline{\gamma} + \tilde{\varepsilon}$ and $E(\underline{e}\,\underline{e}') = \Omega$. If the covariance matrix $\Omega$ is known, the best linear unbiased estimator of $\beta$ is the generalized least-squares estimator
$$b = (\underline{X}'\Omega^{-1}\underline{X})^{-1}\underline{X}'\Omega^{-1}\underline{Y}.$$
For the random coefficient autoregressive (RCA) model, we use the least-squares criterion to estimate the parameter $\theta = (\alpha, \mu_\beta)$ by minimizing the sum of squared residuals.
The RCA1 model in Equation (2) can be written in matrix form as the regression model
$$Y = X\tilde{\theta} + \tilde{\varepsilon},$$
where
$$Y = \begin{bmatrix} x_2 \\ \vdots \\ x_n \end{bmatrix}, \quad X = \begin{bmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_{n-1} \end{bmatrix}, \quad \tilde{\theta} = \begin{bmatrix} \alpha \\ \mu_\beta \end{bmatrix}, \quad \tilde{\varepsilon} = \begin{bmatrix} \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix}.$$
The estimator $\hat{\tilde{\theta}} = [\hat{\alpha}, \hat{\mu}_\beta]'$ is obtained as $\hat{\tilde{\theta}} = (X'X)^{-1}X'Y$ [10]. Then
$$\hat{\tilde{\theta}} = \begin{bmatrix} \hat{\alpha} \\ \hat{\mu}_\beta \end{bmatrix}
= \begin{bmatrix} n & \sum_{t=2}^{n} x_{t-1} \\ \sum_{t=2}^{n} x_{t-1} & \sum_{t=2}^{n} x_{t-1}^2 \end{bmatrix}^{-1}
\begin{bmatrix} \sum_{t=1}^{n} x_t \\ \sum_{t=2}^{n} x_{t-1} x_t \end{bmatrix}
= \frac{1}{n\sum_{t=2}^{n} x_{t-1}^2 - \left(\sum_{t=2}^{n} x_{t-1}\right)^2}
\begin{bmatrix} \sum_{t=2}^{n} x_{t-1}^2 & -\sum_{t=2}^{n} x_{t-1} \\ -\sum_{t=2}^{n} x_{t-1} & n \end{bmatrix}
\begin{bmatrix} \sum_{t=1}^{n} x_t \\ \sum_{t=2}^{n} x_{t-1} x_t \end{bmatrix}
= \begin{bmatrix} \dfrac{\sum_{t=2}^{n} x_{t-1}^2 \sum_{t=1}^{n} x_t - \sum_{t=2}^{n} x_{t-1} \sum_{t=2}^{n} x_{t-1} x_t}{n\sum_{t=2}^{n} x_{t-1}^2 - \left(\sum_{t=2}^{n} x_{t-1}\right)^2} \\[2ex]
\dfrac{n\sum_{t=2}^{n} x_{t-1} x_t - \sum_{t=2}^{n} x_{t-1} \sum_{t=1}^{n} x_t}{n\sum_{t=2}^{n} x_{t-1}^2 - \left(\sum_{t=2}^{n} x_{t-1}\right)^2} \end{bmatrix}.$$
The least-squares estimates $\hat{\alpha}_{LS}$ and $\hat{\mu}_{\beta,LS}$ are therefore
$$\hat{\alpha}_{LS} = \frac{\sum_{t=2}^{n} x_{t-1}^2 \sum_{t=1}^{n} x_t - \sum_{t=2}^{n} x_{t-1} \sum_{t=2}^{n} x_{t-1} x_t}{n\sum_{t=2}^{n} x_{t-1}^2 - \left(\sum_{t=2}^{n} x_{t-1}\right)^2}$$
and
$$\hat{\mu}_{\beta,LS} = \frac{n\sum_{t=2}^{n} x_{t-1} x_t - \sum_{t=2}^{n} x_{t-1} \sum_{t=1}^{n} x_t}{n\sum_{t=2}^{n} x_{t-1}^2 - \left(\sum_{t=2}^{n} x_{t-1}\right)^2}.$$
The fitted RCA1 model is then
$$\hat{x}_t = \hat{\alpha}_{LS} + \hat{\mu}_{\beta,LS}\, x_{t-1}, \quad t = 2, 3, \ldots, n.$$
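The closed-form least-squares estimates above can be sketched in Python as follows. This is an illustrative implementation (the helper name `rca1_least_squares` is ours, not from the paper); it uses the number of $(x_{t-1}, x_t)$ pairs as the sample count in the normal equations, which is equivalent to regressing $x_t$ on $x_{t-1}$ by ordinary least squares.

```python
import numpy as np

def rca1_least_squares(x):
    """Least-squares estimates (alpha_LS, mu_beta_LS) for the RCA1 model,
    obtained by regressing x_t on x_{t-1}."""
    x = np.asarray(x, dtype=float)
    y = x[1:]    # x_t,     t = 2, ..., n
    z = x[:-1]   # x_{t-1}
    k = len(y)   # number of lagged pairs
    denom = k * np.sum(z**2) - np.sum(z)**2
    mu_beta = (k * np.sum(z * y) - np.sum(z) * np.sum(y)) / denom
    alpha = (np.sum(z**2) * np.sum(y) - np.sum(z) * np.sum(z * y)) / denom
    return alpha, mu_beta
```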

3.2. Maximum Likelihood Method

The maximum likelihood method has been widely used in statistical inference to estimate the parameters of a distribution. Araveeporn [11] developed the maximum likelihood method to estimate the parameters of the random coefficient dynamic regression model. For the random coefficient model [7] in Equation (4), the maximum likelihood estimator is obtained by assuming that $\delta_{ik} \sim N(0, \Delta)$, $\gamma_{kt} \sim N(0, \Gamma)$, and $\varepsilon_{it} \sim N(0, \sigma^2)$. Then $\underline{e} = X\underline{\delta} + \tilde{X}\underline{\gamma} + \tilde{\varepsilon}$ is normally distributed, and the density of $\underline{Y}$ is
$$\frac{1}{(2\pi)^{NT/2} |\Omega|^{1/2}} \exp\left( -\frac{1}{2}(\underline{Y} - \underline{X}\beta)'\,\Omega^{-1}(\underline{Y} - \underline{X}\beta) \right).$$
Maximum likelihood estimation of $\beta$, $\Delta$, $\Gamma$, and $\sigma^2$ requires the solution of highly nonlinear equations.
For any set of observations $\{x_1, \ldots, x_n\}$ of the RCA model, let $L(\theta)$ be the likelihood function based on the joint probability density of the observed data, regarded as a function of the unknown parameters with the observed data held fixed.
Let $\gamma_{t-1}$ denote the information set generated by $\{x_1, \ldots, x_{t-1}\}$, and let $u_t = x_t - \alpha - \mu_\beta x_{t-1}$. It can be shown that
$$E(u_t \mid \gamma_{t-1}) = 0, \qquad E(u_t^2 \mid \gamma_{t-1}) = \operatorname{Var}(u_t \mid \gamma_{t-1}) = \sigma_\varepsilon^2 + \sigma_\beta^2 x_{t-1}^2.$$
The estimator of Nicholls and Quinn [3] is strongly consistent and satisfies a central limit theorem with a normal limit, so the likelihood function is formulated in terms of the normal distribution. The maximum likelihood method considers the likelihood function
$$L(\theta) = L(\theta \mid x_t, x_{t-1}) = \prod_{t=2}^{n} f(x_t \mid x_{t-1}) = \left(\frac{1}{2\pi}\right)^{n/2} \prod_{t=2}^{n} \left(\sigma_\varepsilon^2 + \sigma_\beta^2 x_{t-1}^2\right)^{-1/2} \exp\left\{ -\frac{1}{2} \sum_{t=2}^{n} \frac{(x_t - \alpha - \mu_\beta x_{t-1})^2}{\sigma_\varepsilon^2 + \sigma_\beta^2 x_{t-1}^2} \right\}. \tag{8}$$
From (8), construct a simplified likelihood function by setting $\sigma^2 = \sigma_\varepsilon^2 = \sigma_\beta^2$, so that it can be written as
$$L(\theta) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \prod_{t=2}^{n} \left(1 + x_{t-1}^2\right)^{-1/2} \exp\left\{ -\frac{1}{2\sigma^2} \sum_{t=2}^{n} \frac{(x_t - \alpha - \mu_\beta x_{t-1})^2}{1 + x_{t-1}^2} \right\}. \tag{9}$$
Taking the logarithm of (9) gives
$$\ln L(\theta) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2}\sum_{t=2}^{n}\ln(1 + x_{t-1}^2) - \frac{1}{2\sigma^2}\sum_{t=2}^{n}\frac{(x_t - \alpha - \mu_\beta x_{t-1})^2}{1 + x_{t-1}^2}. \tag{10}$$
Differentiating (10) with respect to the parameters $\alpha$ and $\mu_\beta$,
$$\frac{\partial}{\partial \alpha}\ln L(\theta) = \frac{1}{\sigma^2}\sum_{t=2}^{n}\frac{x_t - \alpha - \mu_\beta x_{t-1}}{1 + x_{t-1}^2},$$
$$\frac{\partial}{\partial \mu_\beta}\ln L(\theta) = \frac{1}{\sigma^2}\sum_{t=2}^{n}\frac{(x_t - \alpha - \mu_\beta x_{t-1})\,x_{t-1}}{1 + x_{t-1}^2}.$$
We now set
$$\frac{\partial}{\partial(\alpha, \mu_\beta)}\ln L(\theta) = 0.$$
We obtain $\hat{\alpha}$ from
$$\frac{\partial}{\partial \alpha}\ln L(\theta) = \sum_{t=2}^{n}\frac{x_t}{1 + x_{t-1}^2} - \alpha\sum_{t=2}^{n}\frac{1}{1 + x_{t-1}^2} - \mu_\beta\sum_{t=2}^{n}\frac{x_{t-1}}{1 + x_{t-1}^2} = 0,$$
so that
$$\hat{\alpha} = \frac{\sum_{t=2}^{n}\dfrac{x_t}{1 + x_{t-1}^2} - \mu_\beta\sum_{t=2}^{n}\dfrac{x_{t-1}}{1 + x_{t-1}^2}}{\sum_{t=2}^{n}\dfrac{1}{1 + x_{t-1}^2}}. \tag{11}$$
Similarly, we obtain $\hat{\mu}_\beta$ from
$$\frac{\partial}{\partial \mu_\beta}\ln L(\theta) = \sum_{t=2}^{n}\frac{x_t x_{t-1}}{1 + x_{t-1}^2} - \alpha\sum_{t=2}^{n}\frac{x_{t-1}}{1 + x_{t-1}^2} - \mu_\beta\sum_{t=2}^{n}\frac{x_{t-1}^2}{1 + x_{t-1}^2} = 0,$$
so that
$$\hat{\mu}_\beta = \frac{\sum_{t=2}^{n}\dfrac{x_t x_{t-1}}{1 + x_{t-1}^2} - \alpha\sum_{t=2}^{n}\dfrac{x_{t-1}}{1 + x_{t-1}^2}}{\sum_{t=2}^{n}\dfrac{x_{t-1}^2}{1 + x_{t-1}^2}}. \tag{12}$$
Equations (11) and (12) can be rewritten as
$$\hat{\alpha} = \frac{c_1 - \hat{\mu}_\beta c_2}{c_3} \quad \text{and} \quad \hat{\mu}_\beta = \frac{c_5 - \hat{\alpha} c_2}{c_4},$$
where
$$c_1 = \sum_{t=2}^{n}\frac{x_t}{1 + x_{t-1}^2}, \quad c_2 = \sum_{t=2}^{n}\frac{x_{t-1}}{1 + x_{t-1}^2}, \quad c_3 = \sum_{t=2}^{n}\frac{1}{1 + x_{t-1}^2}, \quad c_4 = \sum_{t=2}^{n}\frac{x_{t-1}^2}{1 + x_{t-1}^2}, \quad c_5 = \sum_{t=2}^{n}\frac{x_t x_{t-1}}{1 + x_{t-1}^2}.$$
Lastly, solving the two equations simultaneously yields the two estimators
$$\hat{\alpha}_{ML} = \frac{c_1 c_4 - c_2 c_5}{c_3 c_4 - c_2^2} \quad \text{and} \quad \hat{\mu}_{\beta,ML} = \frac{c_3 c_5 - c_1 c_2}{c_3 c_4 - c_2^2}.$$
The fitted values of the RCA1 model are then
$$\hat{x}_t = \hat{\alpha}_{ML} + \hat{\mu}_{\beta,ML}\, x_{t-1}, \quad t = 2, 3, \ldots, n.$$
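Using the quantities $c_1, \ldots, c_5$, the closed-form ML estimates amount to a weighted least-squares fit with weights $1/(1 + x_{t-1}^2)$. A hypothetical sketch (the helper name `rca1_mle` is ours):

```python
import numpy as np

def rca1_mle(x):
    """Closed-form ML estimates (alpha_ML, mu_beta_ML) for the RCA1 model under
    the simplification sigma^2 = sigma_eps^2 = sigma_beta^2, i.e. weighted
    least squares with weights w_t = 1 / (1 + x_{t-1}^2)."""
    x = np.asarray(x, dtype=float)
    y, z = x[1:], x[:-1]
    w = 1.0 / (1.0 + z**2)
    c1 = np.sum(y * w)        # sum x_t / (1 + x_{t-1}^2)
    c2 = np.sum(z * w)        # sum x_{t-1} / (1 + x_{t-1}^2)
    c3 = np.sum(w)            # sum 1 / (1 + x_{t-1}^2)
    c4 = np.sum(z**2 * w)     # sum x_{t-1}^2 / (1 + x_{t-1}^2)
    c5 = np.sum(y * z * w)    # sum x_t x_{t-1} / (1 + x_{t-1}^2)
    denom = c3 * c4 - c2**2
    alpha = (c1 * c4 - c2 * c5) / denom
    mu_beta = (c3 * c5 - c1 * c2) / denom
    return alpha, mu_beta
```

Down-weighting observations with large $|x_{t-1}|$ is natural here because the conditional variance of $u_t$ grows with $x_{t-1}^2$.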

4. Simulation Study

The objective of this study is to estimate the parameter $\theta = (\alpha, \mu_\beta)$ of the RCA1 model using the least-squares and maximum likelihood methods. The estimators are compared for sample sizes 100, 300, and 500. The Mean Square Error (MSE) is the mean of the squared differences between estimated and simulated values, computed for each replication as
$$MSE_j = \frac{\sum_{t=2}^{n}(x_t - \hat{x}_t)^2}{n - 1},$$
where $x_t$ denotes the simulated values, $\hat{x}_t$ denotes the estimated values, and $j$ indexes the replications. The simulation study is divided into two parts. In the first, we generate data $x_t,\ t = 1, 2, \ldots, n$ from the RCA1 model as
$$x_t = \alpha + \mu_\beta x_{t-1} + u_t, \quad u_t \sim N(0, \sigma_u^2), \quad \sigma_u^2 = \sigma_\varepsilon^2 + \sigma_\beta^2 x_{t-1}^2.$$
The parameters of the RCA1 model are set in four cases:
  • Case 1: $\alpha = 0.5$, $\mu_\beta = 0.5$, $\sigma_\varepsilon^2 = \sigma_\beta^2 = 0.5$;
  • Case 2: $\alpha = 0.5$, $\mu_\beta = 0.5$, $\sigma_\varepsilon^2 = \sigma_\beta^2 = 0.1$;
  • Case 3: $\alpha = 0.5$, $\mu_\beta = 1$, $\sigma_\varepsilon^2 = \sigma_\beta^2 = 0.01$;
  • Case 4: $\alpha = 0$, $\mu_\beta = 1$, $\sigma_\varepsilon^2 = \sigma_\beta^2 = 0.01$ (as in Tables 1 and 2).
Figure 1, Figure 2 and Figure 3 show the generated data for sample sizes 100, 300, and 500. It should be noted that Cases 1 and 2 tend to oscillate around their means, while Case 3 displays random-walk behavior.
In the second part, we obtain the estimator $\hat{\theta}_{LS} = (\hat{\alpha}_{LS}, \hat{\mu}_{\beta,LS})$ from the least-squares method and $\hat{\theta}_{ML} = (\hat{\alpha}_{ML}, \hat{\mu}_{\beta,ML})$ from the maximum likelihood method. The least-squares fitted values are
$$\hat{x}_t = \hat{\alpha}_{LS} + \hat{\mu}_{\beta,LS}\, x_{t-1}, \quad t = 2, 3, \ldots, n,$$
and the maximum likelihood fitted values are
$$\hat{x}_t = \hat{\alpha}_{ML} + \hat{\mu}_{\beta,ML}\, x_{t-1}, \quad t = 2, 3, \ldots, n.$$
Finally, we simulate 500 replications from the RCA1 model and obtain the estimates $\hat{\alpha}_j = (\hat{\alpha}_{LS,j}, \hat{\alpha}_{ML,j})$ and $\hat{\mu}_{\beta,j} = (\hat{\mu}_{\beta,LS,j}, \hat{\mu}_{\beta,ML,j})$, $j = 1, 2, \ldots, m$ $(m = 500)$. We also compute the Monte Carlo mean and standard deviation (sd) of each parameter for each sample size. The bias of each parameter is approximated by
$$\alpha_{bias} = \frac{1}{m}\sum_{j=1}^{m}(\hat{\alpha}_j - \alpha), \qquad \mu_{\beta,bias} = \frac{1}{m}\sum_{j=1}^{m}(\hat{\mu}_{\beta,j} - \mu_\beta).$$
A t-test is employed to determine whether the mean bias differs from zero; the hypotheses are $H_0: \mu_{\hat{\theta}} = 0$ and $H_1: \mu_{\hat{\theta}} \neq 0$, where $\hat{\theta} = (\hat{\alpha}, \hat{\mu}_\beta)$. To compare the effectiveness of the least-squares and maximum likelihood methods, we compute the average of the MSE (AMSE) as
$$AMSE = \frac{1}{m}\sum_{j=1}^{m} MSE_j.$$
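The Monte Carlo comparison described above can be sketched as follows; the function name and default parameter values are illustrative, not the paper's exact configuration. The data generation, LS estimator, and weighted (ML) estimator are inlined so the sketch is self-contained.

```python
import numpy as np

def amse_comparison(alpha=0.5, mu_beta=0.5, sigma2=0.1, n=300, m=200, seed=1):
    """Average in-sample MSE of one-step fitted values for the LS and
    ML (weighted LS) estimators of the RCA1 model over m replications."""
    rng = np.random.default_rng(seed)
    mse_ls, mse_ml = [], []
    for _ in range(m):
        # generate an RCA1 series (normal v_t and eps_t assumed)
        x = np.empty(n)
        x[0] = 0.0
        for t in range(1, n):
            beta_t = mu_beta + np.sqrt(sigma2) * rng.standard_normal()
            x[t] = alpha + beta_t * x[t - 1] + np.sqrt(sigma2) * rng.standard_normal()
        y, z = x[1:], x[:-1]
        k = len(y)
        # least-squares estimates
        d = k * np.sum(z**2) - np.sum(z)**2
        mu_ls = (k * np.sum(z * y) - np.sum(z) * np.sum(y)) / d
        a_ls = (np.sum(y) - mu_ls * np.sum(z)) / k
        # ML (weighted least-squares) estimates, weights 1/(1 + x_{t-1}^2)
        w = 1.0 / (1.0 + z**2)
        c1, c2, c3 = np.sum(y * w), np.sum(z * w), np.sum(w)
        c4, c5 = np.sum(z**2 * w), np.sum(y * z * w)
        den = c3 * c4 - c2**2
        a_ml = (c1 * c4 - c2 * c5) / den
        mu_ml = (c3 * c5 - c1 * c2) / den
        mse_ls.append(np.mean((y - (a_ls + mu_ls * z))**2))
        mse_ml.append(np.mean((y - (a_ml + mu_ml * z))**2))
    return np.mean(mse_ls), np.mean(mse_ml)
```

Note that with this in-sample criterion the LS fit can never have a larger MSE than the weighted fit, since LS minimizes exactly this unweighted residual sum; out-of-sample forecast MSE, as used for the real data, need not share that guarantee.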
The means, standard deviations (sd), biases, and AMSEs are shown in Table 1 and Table 2; the percentage difference between the least-squares and maximum likelihood methods is reported in Table 2.
From Table 1, the mean biases in Cases 1 and 3 are significantly different from zero for both parameters. However, the mean biases for $\alpha$ in Cases 2 and 4 are not significantly different from zero, indicating asymptotically unbiased estimates of $\alpha$. The histograms of the biases are presented in Figure 4, Figure 5, Figure 6 and Figure 7.
From Figure 4, Figure 5, Figure 6 and Figure 7, it is apparent that the relative biases are reduced with increasing sample size, and the distribution of the biases appears more nearly normal for large sample sizes.
From Table 2, fewer mean biases are significantly different from zero than in Table 1. In particular, Cases 1, 3, and 4 are not significantly different for $\alpha$ and $\mu_\beta$, providing asymptotically unbiased estimates. The histograms of the biases are presented in Figure 8, Figure 9, Figure 10 and Figure 11. The AMSEs of the least-squares method in Table 1 are smaller than those of the maximum likelihood method in Table 2, meaning that the least-squares method outperforms the maximum likelihood method. Because the AMSEs of the two methods differ only slightly, the percentage difference between them appears in the last row of each case.
From Figure 8, Figure 9, Figure 10 and Figure 11, it is apparent that the relative biases are reduced with increasing sample size, and the distribution of the biases appears more nearly normal for large sample sizes, except for $\mu_\beta$ in Case 4.

5. Application of Actual Data

In this section, we apply the RCA1 model using the least-squares and maximum likelihood methods developed above. First, the monthly average of the Stock Exchange of Thailand index, known as the SET index, an important index in Thailand, is considered as actual data. Trading of the SET index began on April 30, 1975. We estimated the parameters of the RCA1 model using data from 1975 to 2016 and then forecast future data from 2017 to 2018. These data, shown in Figure 12a, are obtained from [12].
Based on the estimated RCA1 model, the forecasts of the SET index are shown in Figure 12b: the SET index is plotted as the actual data, with the dashed line presenting the least-squares forecasts and the dotted line the maximum likelihood forecasts.
From Figure 12b, the forecasting values of the least-squares and maximum likelihood methods are both fairly close to the actual observed series, making them difficult to distinguish visually. The Mean Square Error (MSE), the mean of the squared differences between the forecast data $(\hat{x}_t)$ and the actual data $(x_t)$, is the criterion used to judge the performance of the two methods:
$$MSE = \frac{\sum_{t=2}^{n}(x_t - \hat{x}_t)^2}{n - 1}.$$
The MSE of the least-squares method is 2649.224, while that of the maximum likelihood method is 2676.102, favoring least squares.
For the second real data set, we analyze the daily exchange rate of Thai Baht per U.S. Dollar, 1426 records from 12 January 2013 to 12 December 2018. We use the least-squares and maximum likelihood estimates to forecast from 12 December 2018 to 12 January 2019. These data, shown in Figure 13a, are obtained from [13].
From Figure 13b, the forecasting values of the least-squares (LS) method are closer to the actual observed data than those of the maximum likelihood (ML) method. Accordingly, the MSE of LS, 0.0076, is smaller than that of ML, 0.0573.

6. Conclusions

The least-squares and maximum likelihood methods were studied for estimating the first-order RCA model, called the RCA1 model. Through a Monte Carlo simulation, we evaluated the performance of these methods, reporting the mean and standard deviation of the parameter estimates and the AMSEs for various data settings and sample sizes. In all cases, the least-squares method performs well in picking up the correct model, as its AMSE is the minimum. This indicates that the RCA1 model is driven by the past observed data more than by the distributional assumptions underlying the maximum likelihood method.
For the actual data, estimation performance was judged by the minimum mean square error. The least-squares method again outperforms maximum likelihood, consistent with the simulation results. For the RCA1 model, we therefore suggest the least-squares method for estimating parameters when stationary or non-stationary data are expected. These results are supported by Araveeporn [14], who derived the limiting distribution and a consistent estimator for the RCA1 model based on the least-squares method.
As future work, a Bayesian approach could be used to estimate the parameters of the RCA1 model by considering prior and posterior distributions.

Funding

This research received no external funding.

Acknowledgments

This research was supported by King Mongkut’s Institute of Technology Ladkrabang.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Tsay, R. Conditional Heteroscedastic Time Series Models. J. Am. Stat. Assoc. 1987, 82, 590–604. [Google Scholar] [CrossRef]
  2. Nicholls, D.F.; Quinn, B.G. Random Coefficient Autoregressive Models: An Introduction; Springer: New York, NY, USA, 1982; pp. 2–6. [Google Scholar]
  3. Nicholls, D.F.; Quinn, B.G. The Estimation of Random Coefficient Autoregressive Model. J. Time Ser. Anal. 1980, 1, 37–47. [Google Scholar] [CrossRef]
  4. Hwang, S.Y.; Basawa, I.V. Parameter Estimation for Generalized Random Coefficient Autoregressive Processes. J. Stat. Plan. Inference 1998, 68, 323–337. [Google Scholar] [CrossRef]
  5. Aue, A.; Horvath, L.; Steinebach, J. Estimation in Random Coefficient Autoregressive Models. J. Time Ser. Anal. 2006, 27, 61–67. [Google Scholar] [CrossRef]
  6. Wakefield, J. Bayesian and Frequentist Regression Methods; Springer: New York, NY, USA, 2013; pp. 27–28. [Google Scholar]
  7. Hsiao, C. Some Estimation Methods for a Random Coefficient Model. Econom. J. Econom. Soc. 1975, 43, 305–325. [Google Scholar] [CrossRef]
  8. Wang, D.; Ghosh, S.K. Bayesian Estimation and Unit Root Tests for Random Coefficient Autoregressive Models. Model Assist. Stat. Appl. 2008, 3, 281–295. [Google Scholar] [CrossRef]
  9. Araveeporn, A. The Least-Squares Criteria of the Random Coefficient Dynamic Regression Model. J. Stat. Theory Pract. 2012, 6, 315–333. [Google Scholar] [CrossRef]
  10. Freund, R.J.; Wilson, W.J. Regression Analysis Statistical Modeling of a Response Variable; Academic Press: London, UK, 1998; pp. 78–81. [Google Scholar]
  11. Araveeporn, A. The Maximum Likelihood of Random Coefficient Dynamic Regression Model. World Acad. Sci. Eng. Technol. 2013, 78, 1185–1190. [Google Scholar]
  12. The Stock Exchange of Thailand. Available online: http://www.set.or.th/th/market/market/statistics.html (accessed on 30 January 2019).
  13. The Exchange Rate of Baht/Dollar. Available online: http://www.bot.or.th/thai/exchangerate (accessed on 30 January 2019).
  14. Araveeporn, A. A Comparison of Two Least-Squared Random Coefficient Autoregressive Models: With and without Autoregressive Errors. Int. J. Adv. Stat. Probab. 2013, 1, 151–162. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The time series plot of Cases 1–4 for generated data (100 sample sizes).
Figure 2. The time series plot of Case 1–4 for generated data (300 sample sizes).
Figure 3. The time series plot of Case 1–4 for generated data (500 sample sizes).
Figure 4. Histogram of estimated parameter α and μ β with the least-squares method in Case 1.
Figure 5. Histogram of estimated parameter α and μ β with the least-squares method in Case 2.
Figure 6. Histogram of estimated parameter α and μ β with the least-squares method in Case 3.
Figure 7. Histogram of estimated parameter α and μ β with the least-squares method in Case 4.
Figure 8. Histogram of estimated parameter α and μ β with maximum likelihood method in Case 1.
Figure 9. Histogram of estimated parameter α and μ β with maximum likelihood method in Case 2.
Figure 10. Histogram of estimated parameter α and μ β with maximum likelihood method in Case 3.
Figure 11. Histogram of estimated parameter α and μ β with maximum likelihood method in Case 4.
Figure 12. The time series plot for the Stock Exchange of Thailand (SET) index and the plot of SET index for forecasting of least squares (LS) vs maximum likelihood methods.
Figure 13. The time series plot for the exchange rate of Baht/Dollar, and the plot of the exchange rate of Baht/Dollar for forecasting of least squares (LS) vs. maximum likelihood methods.
Table 1. The mean (sd), bias, and the Average Mean Square Error (AMSE) of parameter estimation in the least-squares method.

| Case | Parameter | n = 100 | n = 300 | n = 500 |
|---|---|---|---|---|
| Case 1 ($\sigma_\varepsilon^2 = \sigma_\beta^2 = 0.5$) | $\alpha = 0.5$ | 0.5617 (0.1788) | 0.5634 (0.1208) | 0.5265 (0.1308) |
| | $\alpha_{bias}$ | −0.0617 * | −0.0036 * | −0.0265 * |
| | $\mu_\beta = 0.5$ | 0.4176 (0.1563) | 0.4490 (0.1236) | 0.4648 (0.1060) |
| | $\mu_{\beta,bias}$ | 0.0823 * | 0.0509 * | 0.0351 * |
| | AMSE | 3.0500 | 2.8682 | 2.8415 |
| Case 2 ($\sigma_\varepsilon^2 = \sigma_\beta^2 = 0.1$) | $\alpha = 0.5$ | 0.5118 (0.0881) | 0.5036 (0.0559) | 0.5033 (0.0440) |
| | $\alpha_{bias}$ | −0.0118 * | −0.0036 | 0.0351 |
| | $\mu_\beta = 0.5$ | 0.4812 (0.0998) | 0.4923 (0.0631) | 0.4947 (0.0497) |
| | $\mu_{\beta,bias}$ | 0.0187 * | 0.0076 * | 0.0052 * |
| | AMSE | 0.2270 | 0.2278 | 0.2283 |
| Case 3 ($\sigma_\varepsilon^2 = \sigma_\beta^2 = 0.01$) | $\alpha = 0.5$ | 0.8988 (0.4075) | 1.4964 (0.8428) | 1.8823 (1.4514) |
| | $\alpha_{bias}$ | −0.3988 * | −0.9964 * | −1.3823 * |
| | $\mu_\beta = 1$ | 0.9814 (0.0227) | 0.9823 (0.0130) | 0.9846 (0.0108) |
| | $\mu_{\beta,bias}$ | 0.1852 * | 0.0176 * | 0.0153 * |
| | AMSE | 11.1952 | 130.6887 | 492.6607 |
| Case 4 ($\sigma_\varepsilon^2 = \sigma_\beta^2 = 0.01$) | $\alpha = 0$ | 0.0012 (0.0408) | −0.0011 (0.0292) | −0.0003 (0.0335) |
| | $\alpha_{bias}$ | −0.0012 | 0.0011 | 0.0003 |
| | $\mu_\beta = 1$ | 0.9424 (0.0464) | 0.9766 (0.0172) | 0.9833 (0.0116) |
| | $\mu_{\beta,bias}$ | 0.0575 * | 0.0233 * | 0.0166 * |
| | AMSE | 0.0174 | 0.0389 | 0.0959 |

* indicates significance at the 5% level.
Table 2. The mean (sd), bias, AMSE, and percentage of differences of parameter estimation in the maximum likelihood method.

| Case | Parameter | n = 100 | n = 300 | n = 500 |
|---|---|---|---|---|
| Case 1 ($\sigma_\varepsilon^2 = \sigma_\beta^2 = 0.5$) | $\alpha = 0.5$ | 0.5178 (0.1124) | 0.5047 (0.0615) | 0.5043 (0.0476) |
| | $\alpha_{bias}$ | −0.0178 * | 0.0509 * | −0.0043 * |
| | $\mu_\beta = 0.5$ | 0.4867 (0.1194) | 0.4925 (0.0641) | 0.4957 (0.0508) |
| | $\mu_{\beta,bias}$ | 0.0132 * | −0.0047 | 0.00425 |
| | AMSE | 3.2655 | 2.9876 | 2.9310 |
| | Percentage of difference | 7.0650 | 4.1628 | 3.1497 |
| Case 2 ($\sigma_\varepsilon^2 = \sigma_\beta^2 = 0.1$) | $\alpha = 0.5$ | 0.5180 (0.0838) | 0.5059 (0.0473) | 0.5038 (0.0367) |
| | $\alpha_{bias}$ | −0.0180 * | −0.0059 * | −0.0038 * |
| | $\mu_\beta = 0.5$ | 0.4825 (0.0925) | 0.4925 (0.0516) | 0.4958 (0.0404) |
| | $\mu_{\beta,bias}$ | 0.0174 * | 0.0074 * | 0.0041 * |
| | AMSE | 0.2281 | 0.2283 | 0.2286 |
| | Percentage of difference | 0.4845 | 0.2194 | 0.1314 |
| Case 3 ($\sigma_\varepsilon^2 = \sigma_\beta^2 = 0.01$) | $\alpha = 0.5$ | 0.5087 (0.0571) | 0.5112 (0.0548) | 0.5085 (0.0558) |
| | $\alpha_{bias}$ | −0.0087 * | −0.1123 * | −0.0085 * |
| | $\mu_\beta = 1$ | 1.0000 (0.0115) | 0.9994 (0.0059) | 0.9998 (0.0048) |
| | $\mu_{\beta,bias}$ | −0.0000 | 0.0005 | 0.0001 |
| | AMSE | 11.5103 | 132.899 | 501.1316 |
| | Percentage of difference | 2.8140 | 1.6912 | 1.7180 |
| Case 4 ($\sigma_\varepsilon^2 = \sigma_\beta^2 = 0.01$) | $\alpha = 0$ | 0.0008 (0.0362) | −0.0003 (0.0197) | 0.0002 (0.0155) |
| | $\alpha_{bias}$ | −0.0008 | 0.0003 | −0.0002 |
| | $\mu_\beta = 1$ | 0.9473 (0.0481) | 0.9825 (0.0177) | 0.9891 (0.0120) |
| | $\mu_{\beta,bias}$ | 0.0526 * | 0.0174 * | 0.0108 * |
| | AMSE | 0.0175 | 0.0392 | 0.0968 |
| | Percentage of difference | 0.5474 | 0.7712 | 0.9384 |

* indicates significance at the 5% level.

Araveeporn, A. Comparing Parameter Estimation of Random Coefficient Autoregressive Model by Frequentist Method. Mathematics 2020, 8, 62. https://0-doi-org.brum.beds.ac.uk/10.3390/math8010062
