# Normality Testing of High-Dimensional Data Based on Principle Component and Jarque–Bera Statistics

School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, China

School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, China

Author to whom correspondence should be addressed.

Received: 28 January 2021 / Revised: 10 March 2021 / Accepted: 12 March 2021 / Published: 17 March 2021

(This article belongs to the Section Computational Statistics)

The testing of high-dimensional normality is an important issue that has been intensively studied in the literature. It depends on the variance–covariance matrix of the sample, and numerous methods have been proposed to reduce its complexity. Principal component analysis (PCA) has been widely used in high dimensions, since it can project high-dimensional data into a lower-dimensional orthogonal space. The normality of the reduced data can then be evaluated by the Jarque–Bera (JB) statistic in each principal direction. We propose a combined test statistic, the summation of the one-way JB statistics built upon the independence of the principal directions, to test the multivariate normality of data in high dimensions. The performance of the proposed method is illustrated by the empirical power on simulated normal and non-normal data. Two real data examples show the validity of our proposed method.

Normality plays an important role in statistical analysis, and numerous methods for normality testing have been presented in the literature. Koziol [1] and Slate [2] used properties of the normal distribution function to assess multivariate normality. Reference [3] checked normality using a class of goodness-of-fit tests, and this kind of method was also discussed in [4,5]. Various statistics have also been used in recent years, such as the Cramér–von Mises (CM) statistic [5], skewness and kurtosis [6], sample entropy [7], Shapiro–Wilk's W statistic [8] and the Kolmogorov–Smirnov (KS) statistic (see also [9,10,11]).

It should be noted that many studies of the aforementioned statistics concern univariate normality, while the practical research we concentrate on involves multivariate normality. Therefore, a generalization is needed to extend the conclusions from the univariate to the multivariate case; this is common practice in multivariate normality testing when useful univariate statistics are adopted. Projection methods such as principal component analysis (PCA) can be exploited for this purpose, as described in [8,12]. PCA can project a high-dimensional dataset onto several independent lower-dimensional directions; the statistical tests in each direction can then be combined into an overall test for multivariate normality, using the fact that, for independent variables, the joint probability distribution is the product of all the marginal probability distributions. With the help of these orthogonal projections, the dimension can be reduced and the computation becomes more efficient.

In this paper, the Jarque–Bera statistic, a combination of skewness and kurtosis, is investigated to test the normality in each principal direction, instead of the two separate statistics used in [8]. A new statistic, $J{B}_{sum}$, is then constructed to test high-dimensional normality. The performance of the proposed method and its empirical power are illustrated on high-dimensional simulated data.

This paper is organized as follows. Section 2 reviews the theory of principal component analysis and gives the methodology of statistical inference for multivariate normality. In Section 3, simulated examples of normal and non-normal data are used to illustrate the efficiency of our proposed method. Two real examples are then investigated in Section 4 to verify the method's effectiveness.

For observed data $\mathbf{X}={\left({x}_{ij}\right)}_{n\times p}$ with sample size n and dimension p, principal component analysis reduces the dimension of the p-variate random vector $\mathbf{X}$ through linear combinations, searching for the linear combinations with the largest spread among the observed values of $\mathbf{X}$, that is, the largest variances. Specifically, it searches for the orthogonal directions ${\omega}_{i}\ (i=1,2,\cdots ,p)$ which satisfy

$$\omega =\arg \max_{\omega}\mathrm{Var}\left(\mathbf{X}\omega \right)=\arg \max_{\omega}{\omega}^{T}\mathrm{Var}\left(\mathbf{X}\right)\omega ,\quad \mathrm{s.t.}\ {\omega}^{T}\omega =1.$$

Denote by $\mathsf{\Sigma}$ the covariance matrix of $\mathbf{X}$; the eigenvalues ${\lambda}_{i}$ and principal components ${\omega}_{i}\ (i=1,2,\cdots ,p)$ can then be obtained by the spectral decomposition of $\mathsf{\Sigma}$. The observed data can be projected onto the lower-dimensional space spanned by $\{{\omega}_{1},{\omega}_{2},\cdots ,{\omega}_{p}\}$ via ${\mathbf{z}}_{i}=\mathbf{X}{\omega}_{i}$, which gives the projected data matrix $\mathbf{z}$.
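As a concrete illustration, the projection step can be sketched in NumPy as follows; the function name `principal_directions` is our own, not from the paper:

```python
import numpy as np

def principal_directions(X):
    """Eigenvalues and orthonormal principal directions of the sample
    covariance of X (rows = observations), sorted by decreasing variance,
    together with the projected scores z_i = X w_i (column i of Z)."""
    Xc = X - X.mean(axis=0)                   # center each variable
    Sigma = np.cov(Xc, rowvar=False)          # p x p sample covariance
    eigvals, eigvecs = np.linalg.eigh(Sigma)  # spectral decomposition of Sigma
    order = np.argsort(eigvals)[::-1]         # largest eigenvalue first
    return eigvals[order], eigvecs[:, order], Xc @ eigvecs[:, order]

rng = np.random.default_rng(0)
lam, W, Z = principal_directions(rng.normal(size=(200, 5)))
```

The returned directions are orthonormal, so the projected columns of `Z` are uncorrelated in the sample.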

For each ${\mathbf{z}}_{i}$, the skewness and kurtosis can be calculated by

$${S}_{k}\left({\mathbf{z}}_{i}\right)=\frac{\frac{1}{n}{\sum}_{j=1}^{n}{\left({z}_{ij}-{\overline{z}}_{i}\right)}^{3}}{{\left(\frac{1}{n}{\sum}_{j=1}^{n}{\left({z}_{ij}-{\overline{z}}_{i}\right)}^{2}\right)}^{3/2}},$$

$${K}_{u}\left({\mathbf{z}}_{i}\right)=\frac{\frac{1}{n}{\sum}_{j=1}^{n}{\left({z}_{ij}-{\overline{z}}_{i}\right)}^{4}}{{\left(\frac{1}{n}{\sum}_{j=1}^{n}{\left({z}_{ij}-{\overline{z}}_{i}\right)}^{2}\right)}^{2}},$$

where ${\overline{z}}_{i}$ stands for the sample mean of ${\mathbf{z}}_{i}$. Then, the univariate JB statistic is given by

$$JB\left({\mathbf{z}}_{i}\right)=\frac{n}{6}\left({S}_{k}^{2}\left({\mathbf{z}}_{i}\right)+\frac{{\left({K}_{u}\left({\mathbf{z}}_{i}\right)-3\right)}^{2}}{4}\right).$$
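A direct NumPy translation of these moment formulas (our own helper, for illustration) is:

```python
import numpy as np

def jb_statistic(z):
    """Univariate Jarque-Bera statistic with the 1/n moment estimators
    used in the text: JB = n/6 * (Sk^2 + (Ku - 3)^2 / 4)."""
    n = z.size
    d = z - z.mean()                    # deviations from the sample mean
    m2 = np.mean(d**2)                  # second central moment
    sk = np.mean(d**3) / m2**1.5        # skewness S_k
    ku = np.mean(d**4) / m2**2          # kurtosis K_u
    return n / 6.0 * (sk**2 + (ku - 3.0)**2 / 4.0)

rng = np.random.default_rng(1)
jb_normal = jb_statistic(rng.normal(size=1000))       # small: Sk near 0, Ku near 3
jb_skewed = jb_statistic(rng.exponential(size=1000))  # large: Sk near 2, Ku near 9
```

A Gaussian sample gives a small statistic, while a strongly skewed sample gives a very large one.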

To test the normality of high-dimensional data $\mathbf{z}=({\mathbf{z}}_{1},{\mathbf{z}}_{2},\cdots ,{\mathbf{z}}_{r})$, define

$$J{B}_{sum}\left(\mathbf{z}\right)=\sum _{i=1}^{r}JB\left({\mathbf{z}}_{i}\right),$$

where r stands for the number of principal components ultimately selected, which satisfies

$$\sum _{i=1}^{r}{\lambda}_{i}/\sum _{i=1}^{p}{\lambda}_{i}\ge 1-s.$$
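Putting the two displays together, a sketch of $J{B}_{sum}$ with the cumulative-variance rule for selecting r could look like the following; the function and parameter names are ours, and $s=0.05$ is a placeholder value:

```python
import numpy as np

def jb_sum(X, s=0.05):
    """JB_sum over the first r principal directions, where r is the
    smallest integer with cumulative eigenvalue ratio >= 1 - s."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    lam, W = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(lam)[::-1]
    lam, W = lam[order], W[:, order]
    ratio = np.cumsum(lam) / lam.sum()
    r = int(np.searchsorted(ratio, 1.0 - s) + 1)  # smallest r with ratio >= 1 - s
    Z = Xc @ W[:, :r]                             # projected data, r columns
    total = 0.0
    for i in range(r):
        d = Z[:, i] - Z[:, i].mean()
        m2 = np.mean(d**2)
        sk = np.mean(d**3) / m2**1.5
        ku = np.mean(d**4) / m2**2
        total += n / 6.0 * (sk**2 + (ku - 3.0)**2 / 4.0)
    return total, r

rng = np.random.default_rng(2)
stat, r = jb_sum(rng.normal(size=(150, 10)), s=0.05)
```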

Consider the hypotheses:

$${\mathrm{H}}_{0}:\ \text{the data are normally distributed}\quad \text{vs.}\quad {\mathrm{H}}_{1}:\ \text{the data are not normally distributed}.$$

Under the null hypothesis ${H}_{0}$, the JB statistic is asymptotically ${\chi}^{2}\left(2\right)$ distributed [13], and therefore $J{B}_{sum}$ is asymptotically ${\chi}^{2}\left(2r\right)$ distributed. For a given significance level $\alpha $, the critical region is

$$R\left(Z\right)=\left\{Z\ \middle|\ J{B}_{sum}\left(Z\right)>{\chi}_{\alpha}^{2}\left(2r\right)\right\}.$$

Based on $J{B}_{sum}$, an exact critical region $R\left(X\right)$ can thus be deduced, and the test can be implemented using these critical regions.
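Because $2r$ is an even number of degrees of freedom, the ${\chi}^{2}\left(2r\right)$ tail probability has a closed form, so the decision rule can be sketched without a statistics library (the helper names are ours):

```python
import math

def chi2_sf_even(x, df):
    """P(X > x) for X ~ chi-square(df) with df even:
    exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!."""
    t = x / 2.0
    return math.exp(-t) * sum(t**k / math.factorial(k) for k in range(df // 2))

def reject_h0(jb_sum_value, r, alpha=0.05):
    """Reject H0 iff JB_sum lies in the critical region, i.e. exceeds the
    chi2(2r) upper-alpha point; equivalently, iff its p-value < alpha."""
    return chi2_sf_even(jb_sum_value, 2 * r) < alpha
```

For $r=1$, the upper 5% point of ${\chi}^{2}\left(2\right)$ is about 5.99, so a statistic of 6.1 is rejected at $\alpha =0.05$ while 5.9 is not.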

Evaluating the performance of the proposed PC-type Jarque–Bera test depends on (1) whether the orthogonal axes are chosen according to the cumulative proportion; and (2) whether the hypothesis is rejected or accepted. In terms of the well-known power function, we have

$$Power=\left\{\begin{array}{cc}\alpha & \mathrm{under}\phantom{\rule{3.33333pt}{0ex}}{\mathrm{H}}_{0},\\ 1-\left[s+(1-s)\beta \right]=(1-s)(1-\beta )& \mathrm{under}\phantom{\rule{3.33333pt}{0ex}}{\mathrm{H}}_{1},\end{array}\right.$$

where $\alpha $ is the probability of a Type-I error and $\beta $ is the probability of a Type-II error. Therefore, the power under ${\mathrm{H}}_{1}$ is a non-increasing function of the parameter s; equivalently, it is non-decreasing in the retained variance proportion $1-s$.

To evaluate the performance of the aforementioned testing, some simulation experiments are carried out in this section.

A series of normally distributed datasets was investigated for different data dimensions p and sample sizes n. Let the $n\times p$ simulated data matrix satisfy ${\mathbf{X}}_{n\times p}\sim N\left(\mu ,\mathsf{\Sigma}\right)$, where $\mu =\mathbf{0}$. Two kinds of covariance matrix are considered:

- (I)
- $\mathsf{\Sigma}={\left({\rho}^{{I}_{(\left|i-j\right|\ne 0)}}\right)}_{p\times p}$, i.e., unit variances with constant off-diagonal correlation $\rho $;
- (II)
- $\mathsf{\Sigma}={\left(0.5{\rho}^{{I}_{(\left|i-j\right|\ne 0)}}+0.5{\rho}^{|i-j|}\right)}_{p\times p}$, an equal mixture of the matrix in (I) and an AR(1)-type matrix.
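The two covariance settings can be sketched in NumPy as follows; the function names and the value $\rho =0.3$ are our own illustrative choices:

```python
import numpy as np

def sigma_case_1(p, rho):
    """Case I: entries rho^{I(|i-j| != 0)} -- ones on the diagonal,
    constant correlation rho everywhere else."""
    return np.full((p, p), rho) + (1.0 - rho) * np.eye(p)

def sigma_case_2(p, rho):
    """Case II: 0.5 * (Case I matrix) + 0.5 * AR(1)-type matrix rho^|i-j|."""
    idx = np.arange(p)
    ar1 = rho ** np.abs(idx[:, None] - idx[None, :])
    return 0.5 * sigma_case_1(p, rho) + 0.5 * ar1

rng = np.random.default_rng(3)
p, rho = 30, 0.3
X = rng.multivariate_normal(np.zeros(p), sigma_case_2(p, rho), size=100)
```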

Define

$$\mathrm{Empirical}\phantom{\rule{4pt}{0ex}}\mathrm{power}={\mathit{n}}_{\mathit{false}}/{\mathit{n}}_{\mathit{normal}},$$

where ${n}_{false}$ is the number of rejected samples and ${n}_{normal}$ is the number of samples that obey the normal distribution. Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 separately describe the empirical power of the PC-type JB test $J{B}_{sum}$ compared with the ${S}_{k}$-type statistics ${\chi}_{sk}^{2}$, ${S}_{kmax}$ [14], the ${K}_{u}$-type statistics ${\chi}_{ku}^{2}$, ${K}_{umax}$ [14], Mardia's method ${Z}_{M1}^{*}$ [15], Srivastava's method ${Z}_{S1}^{*}$ [16], Kazuyuki's methods $MJ{B}_{m}^{*}$, $MJ{B}_{s}^{*}$ [16] and $mJBM$ [17] in these two cases, at significance levels $\alpha $ = 0.01, 0.05 and 0.10, respectively.
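A compact Monte Carlo sketch of this empirical power under normal data follows; all names are ours, and only 200 replications are used here for speed, versus 2000 in the paper:

```python
import math
import numpy as np

def jb_sum_test(X, s=0.05, alpha=0.05):
    """True iff H0 (normality) is rejected by the JB_sum test."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    lam, W = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(lam)[::-1]
    lam, W = lam[order], W[:, order]
    r = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 1.0 - s) + 1)
    Z = Xc @ W[:, :r]
    jb = 0.0
    for i in range(r):
        d = Z[:, i] - Z[:, i].mean()
        m2 = np.mean(d**2)
        sk = np.mean(d**3) / m2**1.5
        ku = np.mean(d**4) / m2**2
        jb += n / 6.0 * (sk**2 + (ku - 3.0)**2 / 4.0)
    # closed-form chi2(2r) survival function (even degrees of freedom)
    t = jb / 2.0
    pval = math.exp(-t) * sum(t**k / math.factorial(k) for k in range(r))
    return pval < alpha

rng = np.random.default_rng(4)
reps = 200
rejections = sum(jb_sum_test(rng.normal(size=(100, 5))) for _ in range(reps))
empirical_power = rejections / reps  # should sit near alpha for normal data
```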

From the tables above, we can conclude that, for normal data, the empirical power of $J{B}_{sum}$ is small and stable whether $p/n$ is large or small. Although the empirical power of each statistic converges to the given significance level as n increases, the performance of ${S}_{kmax}$ (especially when $\alpha $ = 0.1), ${K}_{umax}$, ${Z}_{M1}^{*}$, $MJ{B}_{m}^{*}$ and $ZNT$ deteriorates as p increases. Moreover, ${Z}_{S1}^{*}$, $MJ{B}_{s}^{*}$ and $mJBM$ are inapplicable when $p>n$, whereas $J{B}_{sum}$ still works well. In all six tables, the numbers in bold represent the empirical power closest to the significance level among the eleven statistics in each situation.

In this part, non-normal datasets are simulated to evaluate the performance of the proposed method in terms of empirical power. Define

$$\mathrm{Empirical}\phantom{\rule{4pt}{0ex}}\mathrm{power}={\mathit{n}}_{\mathit{true}}/{\mathit{n}}_{\mathit{non}-\mathit{normal}},$$

where ${n}_{true}$ is the number of correctly rejected samples and ${n}_{non-normal}$ is the number of samples that do not obey the normal distribution. The performance is evaluated in the following three settings:

- (III)
- $Shifted\phantom{\rule{4pt}{0ex}}{\chi}^{2}\left(1\right)$: every variable in ${\mathbf{X}}_{n\times p}$ is centralized and independently and identically distributed as ${\chi}^{2}\left(1\right)$.
- (IV)
- $Shifted\phantom{\rule{4pt}{0ex}}exp\left(1\right)$: every variable in ${\mathbf{X}}_{n\times p}$ is centralized and independently and identically distributed as $exp\left(1\right)$.
- (V)
- $N(0,1)+{\chi}^{2}\left(2\right)$: the first $\left[p/2\right]$ variables in ${\mathbf{X}}_{n\times p}$ follow the $N\left(0,1\right)$ distribution, while the last $p-\left[p/2\right]$ variables are independently and identically distributed as ${\chi}^{2}\left(2\right)$, where $\left[p/2\right]$ stands for the integer part of $p/2$.
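The three settings can be generated as follows; n, p and the seed are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 30
half = p // 2  # [p/2], the integer part of p/2

# Case III: shifted chi-square(1) -- centralized by subtracting the mean 1
X3 = rng.chisquare(df=1, size=(n, p)) - 1.0
# Case IV: shifted exp(1) -- centralized by subtracting the mean 1
X4 = rng.exponential(scale=1.0, size=(n, p)) - 1.0
# Case V: first [p/2] columns N(0,1), remaining p - [p/2] columns chi-square(2)
X5 = np.hstack([rng.normal(size=(n, half)),
                rng.chisquare(df=2, size=(n, p - half))])
```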

The performance of $J{B}_{sum}$ compared with the ${S}_{k}$-type statistics ${\chi}_{sk}^{2}$, ${S}_{kmax}$ [14], the ${K}_{u}$-type statistics ${\chi}_{ku}^{2}$, ${K}_{umax}$ [14], Mardia's statistics ${Z}_{M1}^{*}$, ${Z}_{M2}^{*}$ [15], Srivastava's statistics ${Z}_{S1}^{*}$, ${Z}_{S2}^{*}$ [16], Kazuyuki's statistic $mJBM$ [17] and Rie's statistic $ZNT$ [18] is illustrated in Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5. Since $J{B}_{sum}$, ${\chi}_{sk}^{2}$ and ${\chi}_{ku}^{2}$ are based on sums of ${\chi}^{2}$ statistics, we call them $sum$-type; ${S}_{kmax}$ and ${K}_{umax}$ come from maxima of ${\chi}^{2}$ statistics, and we thus call them $max$-type.

All of these methods are studied on 2000 simulated datasets. Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5 compare the empirical power for different dimensions p and various sample sizes n.

- (1)
- Figure 1 indicates that in the case of $p=5$, ${Z}_{M1}^{*}$’s performance is best in all three cases. Though ${Z}_{M2}^{*}$ performs well in Case I and Case II, it is not as good in Case III. Comparatively, ${Z}_{S1}^{*}$, ${\chi}_{sk}^{2}$ and $J{B}_{sum}$ perform similarly well and better than ${\chi}_{ku}^{2}$ and ${K}_{umax}$.
- (2)
- In the case of $p=30$, as in Figure 2, although ${Z}_{M1}^{*}$ and ${Z}_{M2}^{*}$ perform better than $J{B}_{sum}$ in Case II, they do not maintain stable results like $J{B}_{sum}$ in Case III. In fact, $J{B}_{sum}$’s performance is generally better than the other methods mentioned here among all three cases.
- (3)
- In Figure 3, where $p=50$, $J{B}_{sum}$’s performance is best among others except ${Z}_{M1}^{*}$ and ${Z}_{M2}^{*}$. As in Figure 2, ${Z}_{M1}^{*}$ and ${Z}_{M2}^{*}$ are unstable in Case III when p is close to n. This phenomenon can also be seen in $mJBM$. Combining the information shown in Figure 2, we can see that ${Z}_{M1}^{*}$, ${Z}_{M2}^{*}$, and $mJBM$ are not as stable as $J{B}_{sum}$.
- (4)
- With the increase in dimension, as seen in Figure 4, ${Z}_{M1}^{*}$ and ${Z}_{M2}^{*}$ no longer perform as well as before, and $mJBM$ is still not stable enough when n is close to p. Although ${K}_{umax}$’s performance is better than $J{B}_{sum}$’s at first, it is surpassed by the latter when $n>100$.
- (5)
- In Figure 5, where $p=100$, the power of ${K}_{umax}$ is initially higher than that of $J{B}_{sum}$ but is eventually surpassed by it. Except for ${K}_{umax}$, $J{B}_{sum}$'s performance is the best.

From the observations above, we may conclude that $J{B}_{sum}$ performs well compared to the other statistics: its empirical power is relatively higher and the corresponding simulation results are more stable. Thus, it can be used to test the non-normality of low- or high-dimensional data effectively.

In this section, we investigate two real examples to illustrate the performance of our proposed method compared with the aforementioned existing methods.

The SPECTF heart dataset [19] provides data on cardiac single-photon emission computed tomography (SPECT) images, where each patient is classified into two categories: normal and abnormal. The data contain 267 instances, each belonging to one patient, along with 44 continuous feature patterns summarized from the original SPECT images. The remaining attribute is a binary variable indicating the diagnosis of each patient, with 0 for normal and 1 for abnormal.

For this dataset, we evaluate the normality of the whole dataset and of each class within it. The testing p-value of each method mentioned above is shown in Table 7. The highest p-value of each relatively normal dataset and the lowest p-value of each relatively non-normal dataset are in bold.

Let ${S}_{0}$ denote the whole dataset, and let ${S}_{1}$ and ${S}_{2}$ denote the abnormal class dataset and the normal class dataset, respectively. We calculate the p-values of our PC-type statistic, the ${S}_{k}$-type and ${K}_{u}$-type statistics, and the other methods mentioned in [16,17] for these three datasets. Since all ten statistics' p-values for ${S}_{0}$ and ${S}_{1}$ are very close to 0, which indicates a non-normal distribution of the whole dataset and of the abnormal class, we do not describe them in detail here.

We can see from Table 7 that ${S}_{2}$'s corresponding p-values differ somewhat from those of the former two sets, in that the p-values of ${\chi}_{sk}^{2}$, ${Z}_{M1}^{*}$ and $MJ{B}_{m}^{*}$ depart from 0. These relatively high p-values motivate a detailed survey of the normality of the SPECTF heart data's normal class, by selecting different subsets of variables with various degrees of normality.

In this normal category, we extract some variables and construct a new dataset ${S}_{3}$ based on several experiments. The variables included in ${S}_{3}$ are ${X}_{2},{X}_{4},{X}_{6},{X}_{7}$, ${X}_{9}\sim{X}_{12}$, ${X}_{14}\sim{X}_{21}$, ${X}_{23}\sim{X}_{28}$, ${X}_{31}\sim{X}_{34}$, and ${X}_{37}\sim{X}_{43}$. We then compute the p-values for this dataset; the results are shown in Table 7. All normality testing methods return a relatively high p-value, which supports the multivariate normality of set ${S}_{3}$. For comparison, we construct another two datasets, ${S}_{4}$ and ${S}_{5}$, which consist of mixtures of verified normal variables and non-normal variables. Specifically, ${S}_{4}$ contains the variables ${X}_{3},{X}_{5}$, ${X}_{6}\sim{X}_{8}$, ${X}_{11}\sim{X}_{14}$, ${X}_{17},{X}_{21},{X}_{22}$, ${X}_{27}\sim{X}_{32}$, ${X}_{35},{X}_{36},{X}_{38},{X}_{40},{X}_{43}$, and ${X}_{44}$, while ${S}_{5}$ contains the variables ${X}_{3}\sim{X}_{8}$, ${X}_{13},{X}_{15},{X}_{22},{X}_{29},{X}_{30},{X}_{35},{X}_{36},{X}_{42}$, and ${X}_{44}$. Table 7 shows the results for these two sets: the p-values of the ten methods are no longer as high as before, meaning that our method performs well in distinguishing normal from non-normal data.

In this part, we analyze the normality of the body data investigated in [14] to show the consistency of our method with existing methods and previous conclusions. This dataset contains 100 human individuals, each with 12 measurements of the human body (see [14] for details). As before, the p-values of the PC-type statistic and of the ${S}_{k}$-type, ${K}_{u}$-type and Kazuyuki's statistics are computed.

Let ${B}_{0}$ denote the whole dataset; its multivariate normality can be investigated via the resulting p-values of each method shown in Table 8. Since all the p-values approach 0, we may conclude that this dataset is non-normal. Following the discussion in [14], we also investigate six further datasets to show the validity of our proposed method and to compare it with the other methods. For convenience, we denote ${B}_{1}=\left({X}_{1},{X}_{3},{X}_{8},{X}_{10},{X}_{12}\right)$, ${B}_{2}=\left({X}_{1},{X}_{3},{X}_{8},{X}_{10}\right)$, ${B}_{3}=\left({X}_{1},{X}_{8},{X}_{10},{X}_{12}\right)$, ${B}_{4}=\left({X}_{3},{X}_{8},{X}_{10},{X}_{12}\right)$, ${B}_{5}=\left({X}_{4},{X}_{5},{X}_{6},{X}_{11}\right)$, and ${B}_{6}=\left({X}_{2},{X}_{4},{X}_{6},{X}_{11}\right)$. From Table 8, we can conclude that the normality testing results of our proposed PC-type statistic $J{B}_{sum}$ are nearly the same as those of the ${S}_{k}$-type statistics, ${K}_{u}$-type statistics, and Kazuyuki's methods. Since ${B}_{1}$, ${B}_{2}$, ${B}_{3}$, and ${B}_{4}$ have a multivariate normal distribution, whereas ${B}_{5}$ and ${B}_{6}$ have a non-normal distribution [14], our method is closer to the truth in the sense of relatively higher p-values in the multivariate normal situations and lower p-values in the non-normal situations. As in Table 7, the highest p-value of each normal dataset and the lowest p-value of each non-normal dataset are in bold.

This phenomenon indicates that our proposed PC-type statistic $J{B}_{sum}$ constitutes an effective way of testing normality both in normal data and non-normal data, with more stable testing results.

The purpose of this paper is to use a JB-type testing method to test high-dimensional normality. The proposed statistic $J{B}_{sum}$ generalizes the JB statistic to test normality after the dimension reduction performed by PCA.

Through simulated experiments, we find that, in both low and high dimensions, $J{B}_{sum}$ performs well in testing normal and non-normal data and it is more stable than many other compared methods. Therefore, it can be used to test normality effectively.

The two real examples also show that our proposed method offers superior stability when testing the normality of real datasets, as well as a tendency to detect the true normality from the perspective of p-values.

Data curation, Y.S.; Methodology, X.Z.; Project administration, X.Z.; Software, Y.S.; Supervision, X.Z. All authors have read and agreed to the published version of the manuscript.

This research was funded by National Natural Science Foundation of China (No. 11971214, 81960309), sponsored by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, Ministry of Education of China, and supported by Cooperation Project of Chunhui Plan of the Ministry of Education of China 2018.


The study used publicly available data from the UC Irvine Machine Learning Repository, https://archive.ics.uci.edu/ml/datasets/SPECTF+Heart.

The authors would also like to thank the Editor-in-Chief and the referees for their suggestions to improve the paper.

The authors declare no conflict of interest.

- Koziol, J.A. On assessing multivariate normality. J. R. Stat. Soc. **1983**, 45, 358–361.
- Slate, E.H. Assessing multivariate nonnormality using univariate distributions. Biometrika **1999**, 86, 191–202.
- Romeu, J.L.; Ozturk, A. A comparative study of goodness-of-fit tests for multivariate normality. J. Multivar. Anal. **1993**, 46, 309–334.
- Székely, G.J.; Rizzo, M.L. A new test for multivariate normality. J. Multivar. Anal. **2005**, 93, 58–80.
- Chiu, S.N.; Liu, K.I. Generalized Cramér–von Mises goodness-of-fit tests for multivariate distributions. Comput. Stat. Data Anal. **2009**, 53, 3817–3834.
- Small, N.J.H. Marginal skewness and kurtosis in testing multivariate normality. J. R. Stat. Soc. Ser. C (Appl. Stat.) **1980**, 29, 85–87.
- Zhu, L.-X.; Wong, H.L.; Fang, K.-T. A test for multivariate normality based on sample entropy and projection pursuit. J. Stat. Plan. Inference **1995**, 45, 373–385.
- Liang, J.; Tang, M.-L.; Chan, P.S. A generalized Shapiro–Wilk W statistic for testing high-dimensional normality. Comput. Stat. Data Anal. **2009**, 53, 3883–3891.
- Doornik, J.A.; Hansen, H. An omnibus test for univariate and multivariate normality. Oxf. Bull. Econ. Stat. **2008**, 70, 927–939.
- Horswell, R.L.; Looney, S.W. A comparison of tests for multivariate normality that are based on measures of multivariate skewness and kurtosis. J. Stat. Comput. Simul. **1992**, 42, 21–38.
- Tenreiro, C. An affine invariant multiple test procedure for assessing multivariate normality. Comput. Stat. Data Anal. **2011**, 55, 1980–1992.
- Liang, J.; Li, R.; Fang, H.; Fang, K.-T. Testing multinormality based on low-dimensional projection. J. Stat. Plan. Inference **2000**, 86, 129–141.
- Jönsson, K. A robust test for multivariate normality. Econ. Lett. **2011**, 113, 199–201.
- Liang, J.; Tang, M.-L.; Zhao, X. Testing high-dimensional normality based on classical skewness and kurtosis with a possible small sample size. Commun. Stat. Theory Methods **2019**, 48, 5719–5732.
- Mardia, K.V. Applications of some measures of multivariate skewness and kurtosis in testing normality and robustness studies. Sankhyā Indian J. Stat. Ser. B **1974**, 36, 115–128.
- Kazuyuki, K.; Naoya, O.; Takashi, S. On Jarque–Bera tests for assessing multivariate normality. J. Stat. Adv. Theory Appl. **2008**, 1, 207–220.
- Kazuyuki, K.; Masashi, H.; Tatjana, P. Modified Jarque–Bera type tests for multivariate normality in a high-dimensional framework. J. Stat. Theory Pract. **2014**, 8, 382–399.
- Rie, E.; Zofia, H.; Ayako, H.; Takashi, S. Multivariate normality test using normalizing transformation for Mardia's multivariate kurtosis. Commun. Stat. Simul. Comput. **2020**, 49, 684–698.
- Dua, D.; Graff, C. UCI Machine Learning Repository; School of Information and Computer Science, University of California: Irvine, CA, USA, 2017.

| | ${\chi}_{sk}^{2}$ | ${\chi}_{ku}^{2}$ | ${S}_{kmax}$ | ${K}_{umax}$ | ${Z}_{M1}^{*}$ | ${Z}_{S1}^{*}$ | $MJ{B}_{m}^{*}$ | $MJ{B}_{s}^{*}$ | $mJBM$ | $ZNT$ | $J{B}_{sum}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| $p=5$ | | | | | | | | | | | |
| $n=25$ | 0.0065 | 0.0090 | 0.0080 | 0.0140 | 0.0195 | 0.0195 | 0.0310 | 0.0345 | 0.0110 | 0.0005 | 0.0215 |
| $n=50$ | 0.0110 | 0.0190 | 0.0105 | 0.0245 | 0.0195 | 0.0170 | 0.0280 | 0.0295 | 0.0115 | 0.0025 | 0.0250 |
| $n=100$ | 0.0160 | 0.0245 | 0.0145 | 0.0295 | 0.0205 | 0.0185 | 0.0290 | 0.0265 | 0.0150 | 0.0105 | 0.0245 |
| $n=200$ | 0.0115 | 0.0215 | 0.0135 | 0.0270 | 0.0130 | 0.0180 | 0.0140 | 0.0225 | 0.0110 | 0.0040 | 0.0275 |
| $n=500$ | 0.0110 | 0.0185 | 0.0115 | 0.0215 | 0.0110 | 0.0070 | 0.0125 | 0.0095 | 0.0090 | 0.0070 | 0.0255 |
| $p=30$ | | | | | | | | | | | |
| $n=25$ | 0.0100 | 0.0125 | 0.0150 | 0.0290 | 0.1880 | 0.0035 | 0.1880 | 0.0560 | - | 0.3030 | 0.0215 |
| $n=50$ | 0.0225 | 0.0210 | 0.0230 | 0.0565 | 0.0005 | 0.0235 | 0.0025 | 0.0300 | 0.0300 | 0.0000 | 0.0265 |
| $n=100$ | 0.0295 | 0.0290 | 0.0230 | 0.0730 | 0.0195 | 0.0130 | 0.0220 | 0.0200 | 0.0195 | 0.0005 | 0.0335 |
| $n=200$ | 0.0295 | 0.0345 | 0.0160 | 0.0740 | 0.0175 | 0.0160 | 0.0180 | 0.0160 | 0.0125 | 0.0030 | 0.0265 |
| $n=500$ | 0.0300 | 0.0255 | 0.0130 | 0.0555 | 0.0115 | 0.0140 | 0.0120 | 0.0155 | 0.0140 | 0.0115 | 0.0250 |
| $p=50$ | | | | | | | | | | | |
| $n=25$ | 0.0150 | 0.0065 | 0.0185 | 0.0335 | 0.1890 | - | 0.1890 | - | - | 0.3000 | 0.0190 |
| $n=50$ | 0.0275 | 0.0255 | 0.0260 | 0.0660 | 0.1215 | 0.0145 | 0.1215 | 0.0145 | 0.0720 | 0.1435 | 0.0255 |
| $n=100$ | 0.0415 | 0.0215 | 0.0235 | 0.0870 | 0.0050 | 0.0165 | 0.0050 | 0.0220 | 0.0165 | 0.0000 | 0.0340 |
| $n=200$ | 0.0460 | 0.0270 | 0.0155 | 0.0910 | 0.0265 | 0.0120 | 0.0270 | 0.0145 | 0.0150 | 0.0010 | 0.0295 |
| $n=500$ | 0.0415 | 0.0265 | 0.0135 | 0.0655 | 0.0180 | 0.0125 | 0.0195 | 0.0150 | 0.0145 | 0.0060 | 0.0210 |
| $p=100$ | | | | | | | | | | | |
| $n=25$ | 0.0215 | 0.0115 | 0.0195 | 0.0320 | 0.1840 | - | 0.1840 | - | - | 0.3075 | 0.0135 |
| $n=50$ | 0.0495 | 0.0170 | 0.0275 | 0.0695 | 0.1260 | - | 0.1260 | - | - | 0.2055 | 0.0210 |
| $n=100$ | 0.0530 | 0.0305 | 0.0295 | 0.1045 | 0.0875 | 0.0140 | 0.0875 | 0.0145 | 0.0470 | 0.1070 | 0.0285 |
| $n=200$ | 0.0630 | 0.0315 | 0.0270 | 0.1110 | 0.0080 | 0.0110 | 0.0080 | 0.0125 | 0.0150 | 0.0000 | 0.0305 |
| $n=500$ | 0.0650 | 0.0290 | 0.0190 | 0.1010 | 0.0225 | 0.0160 | 0.0225 | 0.0165 | 0.0140 | 0.0030 | 0.0160 |
| $p=200$ | | | | | | | | | | | |
| $n=25$ | 0.0340 | 0.0105 | 0.0305 | 0.0360 | 0.1730 | - | 0.1730 | - | - | 0.3055 | 0.0145 |
| $n=50$ | 0.0615 | 0.0200 | 0.0500 | 0.0845 | 0.1540 | - | 0.1540 | - | - | 0.2295 | 0.0225 |
| $n=100$ | 0.0835 | 0.0285 | 0.0400 | 0.1315 | 0.1235 | - | 0.1235 | - | - | 0.1815 | 0.0310 |
| $n=200$ | 0.0820 | 0.0250 | 0.0260 | 0.1470 | 0.0835 | 0.0110 | 0.0835 | 0.0115 | 0.0360 | 0.0915 | 0.0205 |
| $n=500$ | 0.0995 | 0.0185 | 0.0210 | 0.1265 | 0.0090 | 0.0145 | 0.0090 | 0.0145 | 0.0115 | 0.0000 | 0.0165 |

| | ${\chi}_{sk}^{2}$ | ${\chi}_{ku}^{2}$ | ${S}_{kmax}$ | ${K}_{umax}$ | ${Z}_{M1}^{*}$ | ${Z}_{S1}^{*}$ | $MJ{B}_{m}^{*}$ | $MJ{B}_{s}^{*}$ | $mJBM$ | $ZNT$ | $J{B}_{sum}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| $p=5$ | | | | | | | | | | | |
| $n=25$ | 0.0320 | 0.0185 | 0.0380 | 0.0255 | 0.0605 | 0.0710 | 0.0695 | 0.0835 | 0.0515 | 0.0205 | 0.0345 |
| $n=50$ | 0.0415 | 0.0355 | 0.0390 | 0.0490 | 0.0735 | 0.0630 | 0.0815 | 0.0665 | 0.0485 | 0.0330 | 0.0500 |
| $n=100$ | 0.0500 | 0.0515 | 0.0510 | 0.0660 | 0.0675 | 0.0620 | 0.0765 | 0.0655 | 0.0575 | 0.0510 | 0.0530 |
| $n=200$ | 0.0485 | 0.0485 | 0.0450 | 0.0580 | 0.0560 | 0.0585 | 0.0630 | 0.0640 | 0.0520 | 0.0465 | 0.0680 |
| $n=500$ | 0.0570 | 0.0475 | 0.0530 | 0.0555 | 0.0540 | 0.0510 | 0.0590 | 0.0505 | 0.0525 | 0.0485 | 0.0690 |
| $p=30$ | | | | | | | | | | | |
| $n=25$ | 0.0255 | 0.0265 | 0.0460 | 0.0495 | 0.1900 | 0.0260 | 0.1900 | 0.3175 | - | 0.3050 | 0.0345 |
| $n=50$ | 0.0465 | 0.0395 | 0.0650 | 0.0960 | 0.0195 | 0.0755 | 0.0215 | 0.0815 | 0.0940 | 0.0000 | 0.0570 |
| $n=100$ | 0.0595 | 0.0595 | 0.0660 | 0.1205 | 0.0670 | 0.0650 | 0.0685 | 0.0625 | 0.0695 | 0.0105 | 0.0700 |
| $n=200$ | 0.0745 | 0.0645 | 0.0615 | 0.1300 | 0.0755 | 0.0530 | 0.0760 | 0.0585 | 0.0660 | 0.0295 | 0.0715 |
| $n=500$ | 0.0745 | 0.0625 | 0.0610 | 0.1080 | 0.0550 | 0.0645 | 0.0560 | 0.0675 | 0.0565 | 0.0425 | 0.0680 |
| $p=50$ | | | | | | | | | | | |
| $n=25$ | 0.0295 | 0.0205 | 0.0500 | 0.0580 | 0.1900 | - | 0.1900 | - | - | 0.3000 | 0.0340 |
| $n=50$ | 0.0595 | 0.0425 | 0.0730 | 0.1120 | 0.1280 | 0.0525 | 0.1280 | 0.0465 | 0.1685 | 0.1505 | 0.0495 |
| $n=100$ | 0.0745 | 0.0550 | 0.0740 | 0.1340 | 0.0350 | 0.0645 | 0.0375 | 0.0680 | 0.0675 | 0.0005 | 0.0630 |
| $n=200$ | 0.0860 | 0.0585 | 0.0630 | 0.1545 | 0.0655 | 0.0465 | 0.0660 | 0.0470 | 0.0565 | 0.0130 | 0.0660 |
| $n=500$ | 0.0935 | 0.0700 | 0.0565 | 0.1315 | 0.0595 | 0.0515 | 0.0595 | 0.0495 | 0.0625 | 0.0330 | 0.0590 |
| $p=100$ | | | | | | | | | | | |
| $n=25$ | 0.0370 | 0.0180 | 0.0580 | 0.0565 | 0.1840 | - | 0.1840 | - | - | 0.3075 | 0.0225 |
| $n=50$ | 0.0785 | 0.0360 | 0.0820 | 0.1280 | 0.1265 | - | 0.1265 | - | - | 0.2060 | 0.0420 |
| $n=100$ | 0.0915 | 0.0575 | 0.0770 | 0.1815 | 0.0940 | 0.0510 | 0.0940 | 0.0500 | 0.1150 | 0.1090 | 0.0560 |
| $n=200$ | 0.1065 | 0.0615 | 0.0760 | 0.1940 | 0.0355 | 0.0595 | 0.0355 | 0.0615 | 0.0640 | 0.0000 | 0.0640 |
| $n=500$ | 0.1095 | 0.0655 | 0.0645 | 0.1850 | 0.0820 | 0.0635 | 0.0820 | 0.0665 | 0.0610 | 0.0230 | 0.0615 |
| $p=200$ | | | | | | | | | | | |
| $n=25$ | 0.0430 | 0.0150 | 0.0760 | 0.0675 | 0.1735 | - | 0.1735 | - | - | 0.3055 | 0.0240 |
| $n=50$ | 0.0875 | 0.0355 | 0.1145 | 0.1425 | 0.1540 | - | 0.1540 | - | - | 0.2295 | 0.0395 |
| $n=100$ | 0.1130 | 0.0530 | 0.1045 | 0.2300 | 0.1235 | - | 0.1235 | - | - | 0.1815 | 0.0660 |
| $n=200$ | 0.1235 | 0.0575 | 0.0790 | 0.2405 | 0.0855 | 0.0485 | 0.0855 | 0.0485 | 0.1025 | 0.0940 | 0.0540 |
| $n=500$ | 0.1475 | 0.0570 | 0.0695 | 0.2315 | 0.0505 | 0.0665 | 0.0505 | 0.0655 | 0.0565 | 0.0000 | 0.0585 |

| | ${\chi}_{sk}^{2}$ | ${\chi}_{ku}^{2}$ | ${S}_{kmax}$ | ${K}_{umax}$ | ${Z}_{M1}^{*}$ | ${Z}_{S1}^{*}$ | $MJ{B}_{m}^{*}$ | $MJ{B}_{s}^{*}$ | $mJBM$ | $ZNT$ | $J{B}_{sum}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| $p=5$ | | | | | | | | | | | |
| $n=25$ | 0.0575 | 0.0285 | 0.0660 | 0.0380 | 0.1090 | 0.1150 | 0.1140 | 0.1115 | 0.0950 | 0.0635 | 0.0455 |
| $n=50$ | 0.0710 | 0.0560 | 0.0745 | 0.0645 | 0.1250 | 0.1055 | 0.1305 | 0.1060 | 0.0945 | 0.0765 | 0.0725 |
| $n=100$ | 0.0900 | 0.0755 | 0.0940 | 0.0880 | 0.1205 | 0.1110 | 0.1160 | 0.1010 | 0.1005 | 0.0970 | 0.0865 |
| $n=200$ | 0.0900 | 0.0795 | 0.0810 | 0.0895 | 0.1080 | 0.1135 | 0.1085 | 0.1080 | 0.0920 | 0.0985 | 0.0985 |
| $n=500$ | 0.1035 | 0.0855 | 0.1135 | 0.0915 | 0.1110 | 0.1015 | 0.1075 | 0.0950 | 0.1070 | 0.0900 | 0.1110 |
| $p=30$ | | | | | | | | | | | |
| $n=25$ | 0.0385 | 0.0350 | 0.0780 | 0.0670 | 0.1905 | 0.0425 | 0.1905 | 0.5345 | - | 0.3050 | 0.0450 |
| $n=50$ | 0.0740 | 0.0565 | 0.1030 | 0.1210 | 0.0540 | 0.1300 | 0.0570 | 0.1320 | 0.1465 | 0.0000 | 0.0805 |
| $n=100$ | 0.0920 | 0.0870 | 0.1140 | 0.1580 | 0.1060 | 0.1115 | 0.1070 | 0.1170 | 0.1150 | 0.0345 | 0.1020 |
| $n=200$ | 0.1140 | 0.0950 | 0.1210 | 0.1710 | 0.1390 | 0.1030 | 0.1400 | 0.1045 | 0.1085 | 0.0715 | 0.1085 |
| $n=500$ | 0.1180 | 0.1020 | 0.0975 | 0.1535 | 0.1030 | 0.1105 | 0.1040 | 0.1165 | 0.1135 | 0.0870 | 0.1130 |
| $p=50$ | | | | | | | | | | | |
| $n=25$ | 0.0430 | 0.0295 | 0.0870 | 0.0775 | 0.1905 | - | 0.1905 | - | - | 0.3000 | 0.0470 |
| $n=50$ | 0.0880 | 0.0610 | 0.1195 | 0.1420 | 0.1340 | 0.0980 | 0.1340 | 0.0945 | 0.2460 | 0.1540 | 0.0725 |
| $n=100$ | 0.1085 | 0.0805 | 0.1205 | 0.1900 | 0.0850 | 0.1170 | 0.0850 | 0.1140 | 0.1200 | 0.0030 | 0.0900 |
| $n=200$ | 0.1225 | 0.0935 | 0.1105 | 0.1995 | 0.1175 | 0.0960 | 0.1170 | 0.0990 | 0.1055 | 0.0370 | 0.1000 |
| $n=500$ | 0.1370 | 0.1095 | 0.1100 | 0.1830 | 0.1130 | 0.1035 | 0.1130 | 0.1030 | 0.1045 | 0.0815 | 0.1035 |
| $p=100$ | | | | | | | | | | | |
| $n=25$ | 0.0445 | 0.0275 | 0.0955 | 0.0735 | 0.1840 | - | 0.1840 | - | - | 0.3075 | 0.0410 |
| $n=50$ | 0.0990 | 0.0485 | 0.1375 | 0.1600 | 0.1270 | - | 0.1270 | - | - | 0.2060 | 0.0625 |
| $n=100$ | 0.1195 | 0.0825 | 0.1395 | 0.2345 | 0.0960 | 0.1050 | 0.0960 | 0.0985 | 0.1915 | 0.1105 | 0.0845 |
| $n=200$ | 0.1360 | 0.0905 | 0.1260 | 0.2520 | 0.0715 | 0.1085 | 0.0720 | 0.1110 | 0.1230 | 0.0005 | 0.0945 |
| $n=500$ | 0.1470 | 0.1045 | 0.1125 | 0.2470 | 0.1375 | 0.1160 | 0.1375 | 0.1185 | 0.1075 | 0.0550 | 0.1030 |
| $p=200$ | | | | | | | | | | | |
| $n=25$ | 0.0520 | 0.0225 | 0.1205 | 0.0870 | 0.1735 | - | 0.1735 | - | - | 0.3055 | 0.0330 |
| $n=50$ | 0.1020 | 0.0510 | 0.1585 | 0.1765 | 0.1540 | - | 0.1540 | - | - | 0.2295 | 0.0580 |
| $n=100$ | 0.1430 | 0.0755 | 0.1620 | 0.2770 | 0.1235 | - | 0.1235 | - | - | 0.1815 | 0.0905 |
| $n=200$ | 0.1530 | 0.0875 | 0.1365 | 0.3110 | 0.0875 | 0.0905 | 0.0875 | 0.0910 | 0.1690 | 0.0945 | 0.0905 |
| $n=500$ | 0.1825 | 0.1055 | 0.1145 | 0.2965 | 0.1095 | 0.1205 | 0.1095 | 0.1190 | 0.1140 | 0.0010 | 0.0960 |

 | ${\mathit{\chi}}_{\mathit{sk}}^{2}$ | ${\mathit{\chi}}_{\mathit{ku}}^{2}$ | ${\mathit{S}}_{\mathit{kmax}}$ | ${\mathit{K}}_{\mathit{umax}}$ | ${\mathit{Z}}_{\mathit{M}1}^{*}$ | ${\mathit{Z}}_{\mathit{S}1}^{*}$ | ${\mathit{MJB}}_{\mathit{m}}^{*}$ | ${\mathit{MJB}}_{\mathit{s}}^{*}$ | $\mathit{mJBM}$ | $\mathit{ZNT}$ | ${\mathit{JB}}_{\mathit{sum}}$ |
---|---|---|---|---|---|---|---|---|---|---|---|
$p=5$ | |||||||||||
$n=25$ | 0.0045 | 0.0125 | 0.0090 | 0.0140 | 0.0245 | 0.0255 | 0.0385 | 0.0410 | 0.0150 | 0.0005 | 0.0230 |
$n=50$ | 0.0115 | 0.0180 | 0.0130 | 0.0280 | 0.0190 | 0.0115 | 0.0275 | 0.0180 | 0.0145 | 0.0050 | 0.0255 |
$n=100$ | 0.0130 | 0.0215 | 0.0185 | 0.0295 | 0.0195 | 0.0130 | 0.0250 | 0.0190 | 0.0160 | 0.0055 | 0.0260 |
$n=200$ | 0.0140 | 0.0225 | 0.0140 | 0.0310 | 0.0150 | 0.0135 | 0.0220 | 0.0215 | 0.0115 | 0.0115 | 0.0230 |
$n=500$ | 0.0120 | 0.0205 | 0.0135 | 0.0230 | 0.0095 | 0.0130 | 0.0110 | 0.0160 | 0.0080 | 0.0090 | 0.0205 |
$p=30$ | |||||||||||
$n=25$ | 0.0025 | 0.0120 | 0.0150 | 0.0305 | 0.1985 | 0.0050 | 0.1985 | 0.0595 | - | 0.3480 | 0.0180 |
$n=50$ | 0.0065 | 0.0205 | 0.0215 | 0.0525 | 0.0015 | 0.0185 | 0.0020 | 0.0255 | 0.0280 | 0.0000 | 0.0310 |
$n=100$ | 0.0090 | 0.0260 | 0.0170 | 0.0700 | 0.0235 | 0.0150 | 0.0250 | 0.0150 | 0.0180 | 0.0015 | 0.0260 |
$n=200$ | 0.0105 | 0.0295 | 0.0200 | 0.0655 | 0.0225 | 0.0110 | 0.0230 | 0.0150 | 0.0120 | 0.0035 | 0.0245 |
$n=500$ | 0.0085 | 0.0195 | 0.0120 | 0.0495 | 0.0140 | 0.0115 | 0.0140 | 0.0125 | 0.0125 | 0.0075 | 0.0205 |
$p=50$ | |||||||||||
$n=25$ | 0.0010 | 0.0075 | 0.0185 | 0.0235 | 0.2605 | - | 0.2605 | - | - | 0.4230 | 0.0150 |
$n=50$ | 0.0055 | 0.0240 | 0.0250 | 0.0770 | 0.1075 | 0.0180 | 0.1075 | 0.0165 | 0.0725 | 0.1615 | 0.0245 |
$n=100$ | 0.0100 | 0.0250 | 0.0175 | 0.0915 | 0.0065 | 0.0170 | 0.0070 | 0.0165 | 0.0185 | 0.0000 | 0.0260 |
$n=200$ | 0.0090 | 0.0365 | 0.0210 | 0.0910 | 0.0155 | 0.0125 | 0.0155 | 0.0130 | 0.0105 | 0.0015 | 0.0275 |
$n=500$ | 0.0135 | 0.0205 | 0.0145 | 0.0670 | 0.0240 | 0.0115 | 0.0240 | 0.0115 | 0.0105 | 0.0100 | 0.0180 |
$p=100$ | |||||||||||
$n=25$ | 0.0005 | 0.0070 | 0.0250 | 0.0360 | 0.2600 | - | 0.2600 | - | - | 0.4315 | 0.0145 |
$n=50$ | 0.0050 | 0.0225 | 0.0405 | 0.0850 | 0.1970 | - | 0.1970 | - | - | 0.3345 | 0.0265 |
$n=100$ | 0.0035 | 0.0260 | 0.0275 | 0.1195 | 0.0845 | 0.0130 | 0.0845 | 0.0125 | 0.0510 | 0.1310 | 0.0295 |
$n=200$ | 0.0110 | 0.0265 | 0.0255 | 0.1295 | 0.0075 | 0.0105 | 0.0080 | 0.0115 | 0.0130 | 0.0000 | 0.0260 |
$n=500$ | 0.0180 | 0.0210 | 0.0135 | 0.0885 | 0.0250 | 0.0075 | 0.0250 | 0.0080 | 0.0090 | 0.0040 | 0.0210 |
$p=200$ | |||||||||||
$n=25$ | 0.0005 | 0.0080 | 0.0360 | 0.0395 | 0.3050 | - | 0.3050 | - | - | 0.4800 | 0.0110 |
$n=50$ | 0.0040 | 0.0195 | 0.0475 | 0.1120 | 0.2500 | - | 0.2500 | - | - | 0.4035 | 0.0265 |
$n=100$ | 0.0085 | 0.0225 | 0.0405 | 0.1365 | 0.1475 | - | 0.1475 | - | - | 0.2465 | 0.0245 |
$n=200$ | 0.0115 | 0.0230 | 0.0295 | 0.1520 | 0.0650 | 0.0100 | 0.0650 | 0.0105 | 0.0285 | 0.0960 | 0.0250 |
$n=500$ | 0.0125 | 0.0195 | 0.0160 | 0.1310 | 0.0100 | 0.0120 | 0.0100 | 0.0120 | 0.0120 | 0.0005 | 0.0195 |

 | ${\mathit{\chi}}_{\mathit{sk}}^{2}$ | ${\mathit{\chi}}_{\mathit{ku}}^{2}$ | ${\mathit{S}}_{\mathit{kmax}}$ | ${\mathit{K}}_{\mathit{umax}}$ | ${\mathit{Z}}_{\mathit{M}1}^{*}$ | ${\mathit{Z}}_{\mathit{S}1}^{*}$ | ${\mathit{MJB}}_{\mathit{m}}^{*}$ | ${\mathit{MJB}}_{\mathit{s}}^{*}$ | $\mathit{mJBM}$ | $\mathit{ZNT}$ | ${\mathit{JB}}_{\mathit{sum}}$ |
---|---|---|---|---|---|---|---|---|---|---|---|
$p=5$ | |||||||||||
$n=25$ | 0.0205 | 0.0200 | 0.0280 | 0.0250 | 0.0735 | 0.0695 | 0.0795 | 0.0825 | 0.0685 | 0.0260 | 0.0390 |
$n=50$ | 0.0375 | 0.0410 | 0.0450 | 0.0510 | 0.0620 | 0.0420 | 0.0700 | 0.0495 | 0.0495 | 0.0320 | 0.0505 |
$n=100$ | 0.0555 | 0.0445 | 0.0550 | 0.0515 | 0.0715 | 0.0480 | 0.0770 | 0.0515 | 0.0475 | 0.0370 | 0.0585 |
$n=200$ | 0.0525 | 0.0530 | 0.0530 | 0.0665 | 0.0580 | 0.0560 | 0.0660 | 0.0605 | 0.0595 | 0.0420 | 0.0565 |
$n=500$ | 0.0535 | 0.0500 | 0.0515 | 0.0625 | 0.0515 | 0.0565 | 0.0595 | 0.0530 | 0.0530 | 0.0460 | 0.0675 |
$p=30$ | |||||||||||
$n=25$ | 0.0105 | 0.0240 | 0.0520 | 0.0515 | 0.2010 | 0.0230 | 0.2010 | 0.3135 | - | 0.3500 | 0.0300 |
$n=50$ | 0.0225 | 0.0360 | 0.0625 | 0.0855 | 0.0170 | 0.0580 | 0.0175 | 0.0670 | 0.0895 | 0.0000 | 0.0535 |
$n=100$ | 0.0385 | 0.0515 | 0.0520 | 0.1285 | 0.0670 | 0.0610 | 0.0680 | 0.0605 | 0.0650 | 0.0145 | 0.0580 |
$n=200$ | 0.0430 | 0.0655 | 0.0720 | 0.1250 | 0.0715 | 0.0520 | 0.0725 | 0.0590 | 0.0545 | 0.0345 | 0.0670 |
$n=500$ | 0.0530 | 0.0615 | 0.0480 | 0.1035 | 0.0685 | 0.0570 | 0.0680 | 0.0590 | 0.0565 | 0.0460 | 0.0640 |
$p=50$ | |||||||||||
$n=25$ | 0.0055 | 0.0140 | 0.0535 | 0.0470 | 0.2610 | - | 0.2610 | - | - | 0.4240 | 0.0295 |
$n=50$ | 0.0250 | 0.0450 | 0.0800 | 0.1260 | 0.1130 | 0.0655 | 0.1130 | 0.0620 | 0.1685 | 0.1690 | 0.0470 |
$n=100$ | 0.0370 | 0.0535 | 0.0710 | 0.1445 | 0.0475 | 0.0710 | 0.0480 | 0.0715 | 0.0680 | 0.0005 | 0.0570 |
$n=200$ | 0.0470 | 0.0715 | 0.0685 | 0.1720 | 0.0645 | 0.0580 | 0.0645 | 0.0655 | 0.0615 | 0.0135 | 0.0615 |
$n=500$ | 0.0510 | 0.0625 | 0.0640 | 0.1405 | 0.0705 | 0.0585 | 0.0705 | 0.0575 | 0.0560 | 0.0420 | 0.0710 |
$p=100$ | |||||||||||
$n=25$ | 0.0025 | 0.0155 | 0.0660 | 0.0725 | 0.2600 | - | 0.2600 | - | - | 0.4315 | 0.0305 |
$n=50$ | 0.0170 | 0.0395 | 0.0915 | 0.1430 | 0.1975 | - | 0.1975 | - | - | 0.3345 | 0.0490 |
$n=100$ | 0.0290 | 0.0510 | 0.0885 | 0.1935 | 0.0900 | 0.0540 | 0.0900 | 0.0525 | 0.1370 | 0.1355 | 0.0605 |
$n=200$ | 0.0430 | 0.0590 | 0.0795 | 0.2110 | 0.0395 | 0.0545 | 0.0395 | 0.0545 | 0.0525 | 0.0000 | 0.0665 |
$n=500$ | 0.0515 | 0.0545 | 0.0610 | 0.1755 | 0.0755 | 0.0535 | 0.0760 | 0.0525 | 0.0475 | 0.0210 | 0.0665 |
$p=200$ | |||||||||||
$n=25$ | 0.0020 | 0.0150 | 0.0790 | 0.0670 | 0.3050 | - | 0.3050 | - | - | 0.4805 | 0.0225 |
$n=50$ | 0.0125 | 0.0405 | 0.1205 | 0.1685 | 0.2510 | - | 0.2510 | - | - | 0.4040 | 0.0515 |
$n=100$ | 0.0225 | 0.0485 | 0.1025 | 0.2340 | 0.1475 | - | 0.1475 | - | - | 0.2465 | 0.0560 |
$n=200$ | 0.0430 | 0.0520 | 0.0870 | 0.2660 | 0.0685 | 0.0520 | 0.0685 | 0.0510 | 0.0970 | 0.0980 | 0.0545 |
$n=500$ | 0.0530 | 0.0490 | 0.0650 | 0.2460 | 0.0570 | 0.0510 | 0.0570 | 0.0500 | 0.0625 | 0.0010 | 0.0615 |

 | ${\mathit{\chi}}_{\mathit{sk}}^{2}$ | ${\mathit{\chi}}_{\mathit{ku}}^{2}$ | ${\mathit{S}}_{\mathit{kmax}}$ | ${\mathit{K}}_{\mathit{umax}}$ | ${\mathit{Z}}_{\mathit{M}1}^{*}$ | ${\mathit{Z}}_{\mathit{S}1}^{*}$ | ${\mathit{MJB}}_{\mathit{m}}^{*}$ | ${\mathit{MJB}}_{\mathit{s}}^{*}$ | $\mathit{mJBM}$ | $\mathit{ZNT}$ | ${\mathit{JB}}_{\mathit{sum}}$ |
---|---|---|---|---|---|---|---|---|---|---|---|
$p=5$ | |||||||||||
$n=25$ | 0.0380 | 0.0260 | 0.0515 | 0.0330 | 0.1195 | 0.1210 | 0.1230 | 0.1200 | 0.1160 | 0.0665 | 0.0490 |
$n=50$ | 0.0650 | 0.0550 | 0.0795 | 0.0690 | 0.1070 | 0.0940 | 0.1090 | 0.0855 | 0.0915 | 0.0780 | 0.0705 |
$n=100$ | 0.0940 | 0.0710 | 0.1065 | 0.0770 | 0.1215 | 0.0990 | 0.1215 | 0.0885 | 0.0900 | 0.0925 | 0.0860 |
$n=200$ | 0.0930 | 0.0870 | 0.1065 | 0.0940 | 0.1070 | 0.1020 | 0.1060 | 0.0965 | 0.1010 | 0.0970 | 0.0925 |
$n=500$ | 0.1000 | 0.0945 | 0.1015 | 0.1005 | 0.1010 | 0.1090 | 0.1040 | 0.1035 | 0.0995 | 0.0915 | 0.1075 |
$p=30$ | |||||||||||
$n=25$ | 0.0190 | 0.0320 | 0.0875 | 0.0740 | 0.2015 | 0.0410 | 0.2015 | 0.5340 | - | 0.3525 | 0.0430 |
$n=50$ | 0.0465 | 0.0545 | 0.0995 | 0.1165 | 0.0470 | 0.1165 | 0.0495 | 0.1145 | 0.1420 | 0.0000 | 0.0735 |
$n=100$ | 0.0715 | 0.0775 | 0.1055 | 0.1685 | 0.1115 | 0.1070 | 0.1110 | 0.1050 | 0.1090 | 0.0400 | 0.0920 |
$n=200$ | 0.0840 | 0.1040 | 0.1175 | 0.1695 | 0.1300 | 0.1015 | 0.1290 | 0.1050 | 0.1085 | 0.0740 | 0.1090 |
$n=500$ | 0.0985 | 0.1030 | 0.1090 | 0.1450 | 0.1235 | 0.1085 | 0.1240 | 0.1050 | 0.1140 | 0.0870 | 0.1150 |
$p=50$ | |||||||||||
$n=25$ | 0.0085 | 0.0210 | 0.0880 | 0.0650 | 0.2620 | - | 0.2620 | - | - | 0.4265 | 0.0435 |
$n=50$ | 0.0400 | 0.0635 | 0.1235 | 0.1580 | 0.1175 | 0.1105 | 0.1175 | 0.1095 | 0.2365 | 0.1720 | 0.0710 |
$n=100$ | 0.0655 | 0.0735 | 0.1100 | 0.1875 | 0.0875 | 0.1210 | 0.0885 | 0.1230 | 0.1190 | 0.0030 | 0.0810 |
$n=200$ | 0.0815 | 0.1025 | 0.1190 | 0.2275 | 0.1185 | 0.1175 | 0.1190 | 0.1195 | 0.1060 | 0.0400 | 0.0925 |
$n=500$ | 0.1070 | 0.0980 | 0.1245 | 0.1870 | 0.1210 | 0.1095 | 0.1210 | 0.1135 | 0.1070 | 0.0775 | 0.1130 |
$p=100$ | |||||||||||
$n=25$ | 0.0050 | 0.0245 | 0.1050 | 0.0895 | 0.2600 | - | 0.2600 | - | - | 0.4315 | 0.0400 |
$n=50$ | 0.0280 | 0.0550 | 0.1370 | 0.1800 | 0.1975 | - | 0.1975 | - | - | 0.3345 | 0.0650 |
$n=100$ | 0.0575 | 0.0745 | 0.1430 | 0.2445 | 0.0930 | 0.0970 | 0.0930 | 0.0960 | 0.2085 | 0.1385 | 0.0865 |
$n=200$ | 0.0760 | 0.0980 | 0.1455 | 0.2780 | 0.0740 | 0.1145 | 0.0740 | 0.1150 | 0.1025 | 0.0005 | 0.1020 |
$n=500$ | 0.0935 | 0.0935 | 0.1135 | 0.2485 | 0.1295 | 0.0990 | 0.1295 | 0.0965 | 0.0875 | 0.0515 | 0.1025 |
$p=200$ | |||||||||||
$n=25$ | 0.0025 | 0.0210 | 0.1335 | 0.0880 | 0.3050 | - | 0.3050 | - | - | 0.4810 | 0.0340 |
$n=50$ | 0.0180 | 0.0580 | 0.1875 | 0.2175 | 0.2510 | - | 0.2510 | - | - | 0.4040 | 0.0740 |
$n=100$ | 0.0385 | 0.0665 | 0.1630 | 0.2910 | 0.1475 | - | 0.1475 | - | - | 0.2465 | 0.0785 |
$n=200$ | 0.0700 | 0.0835 | 0.1510 | 0.3390 | 0.0710 | 0.0990 | 0.0710 | 0.0965 | 0.1585 | 0.0985 | 0.0830 |
$n=500$ | 0.0945 | 0.0890 | 0.1140 | 0.3205 | 0.1055 | 0.1055 | 0.1055 | 0.1035 | 0.1080 | 0.0030 | 0.1025 |

Data Set | ${\mathit{\chi}}_{\mathit{sk}}^{2}$ | ${\mathit{\chi}}_{\mathit{ku}}^{2}$ | ${\mathit{S}}_{\mathit{kmax}}$ | ${\mathit{K}}_{\mathit{umax}}$ | ${\mathit{Z}}_{\mathit{M}1}^{*}$ | ${\mathit{Z}}_{\mathit{S}1}^{*}$ | ${\mathit{MJB}}_{\mathit{m}}^{*}$ | ${\mathit{MJB}}_{\mathit{s}}^{*}$ | $\mathit{mJBM}$ | ${\mathit{JB}}_{\mathit{sum}}$ |
---|---|---|---|---|---|---|---|---|---|---|
${S}_{2}$ | 0.2076 | 0.0141 | 0.0713 | 0.0000 | 0.4533 | 0.0329 | 0.4553 | 0.0241 | 0.1367 | 0.0138 |
${S}_{3}$ | 0.5345 | 0.5318 | 0.6518 | 0.4207 | 0.0087 | 0.2109 | 0.0066 | 0.1935 | 0.4567 | 0.5560 |
${S}_{4}$ | 0.1956 | 0.1201 | 0.3231 | 0.0728 | 0.0000 | 0.0244 | 0.0000 | 0.0050 | 0.0212 | 0.0780 |
${S}_{5}$ | 0.0096 | 0.0045 | 0.0056 | 0.0038 | 0.0000 | 0.0415 | 0.0000 | 0.0111 | 0.0464 | 0.0003 |

Data Set | ${\mathit{\chi}}_{\mathit{sk}}^{2}$ | ${\mathit{\chi}}_{\mathit{ku}}^{2}$ | ${\mathit{S}}_{\mathit{kmax}}$ | ${\mathit{K}}_{\mathit{umax}}$ | ${\mathit{Z}}_{\mathit{M}1}^{*}$ | ${\mathit{Z}}_{\mathit{S}1}^{*}$ | ${\mathit{MJB}}_{\mathit{m}}^{*}$ | ${\mathit{MJB}}_{\mathit{s}}^{*}$ | $\mathit{mJBM}$ | ${\mathit{JB}}_{\mathit{sum}}$ |
---|---|---|---|---|---|---|---|---|---|---|
${B}_{0}$ | 0.0005 | 0.0046 | 0.0007 | 0.0007 | 0.0018 | 0.0051 | 0.0014 | 0.0037 | 0.0253 | 0.0000 |
${B}_{1}$ | 0.6148 | 0.7214 | 0.5606 | 0.6502 | 0.5602 | 0.5568 | 0.5632 | 0.6345 | 0.9584 | 0.7879 |
${B}_{2}$ | 0.3568 | 0.5468 | 0.3083 | 0.4704 | 0.1893 | 0.2897 | 0.2303 | 0.3771 | 0.8128 | 0.5087 |
${B}_{3}$ | 0.6069 | 0.4335 | 0.5813 | 0.5116 | 0.3277 | 0.5817 | 0.3309 | 0.6405 | 0.8588 | 0.6097 |
${B}_{4}$ | 0.6447 | 0.4297 | 0.5759 | 0.5776 | 0.7257 | 0.5863 | 0.5275 | 0.5285 | 0.6280 | 0.6694 |
${B}_{5}$ | 0.0109 | 0.0628 | 0.0338 | 0.0422 | 0.0028 | 0.0163 | 0.0014 | 0.0099 | 0.0405 | 0.0048 |
${B}_{6}$ | 0.0538 | 0.2003 | 0.1183 | 0.2662 | 0.1124 | 0.0290 | 0.1252 | 0.0221 | 0.0777 | 0.0533 |
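The ${\mathit{JB}}_{\mathit{sum}}$ column in the tables above is the proposed combined statistic: project the data onto its principal directions, compute the one-way Jarque–Bera statistic along each direction, and sum. The sketch below is an illustrative implementation of that idea, not the authors' code; the centering convention, population-normalized moments, and SVD-based computation of the principal directions are assumptions.

```python
import numpy as np

def jb_sum(X):
    """Sketch of the JB_sum idea: sum of one-way Jarque-Bera
    statistics over the principal directions of X (n samples x p variables)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)                    # center each variable
    # Principal directions: right singular vectors of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt.T                         # projections onto each direction
    total = 0.0
    for j in range(scores.shape[1]):
        y = scores[:, j]
        s = y.std()                            # population-normalized std (ddof=0)
        skew = np.mean((y - y.mean()) ** 3) / s ** 3
        kurt = np.mean((y - y.mean()) ** 4) / s ** 4
        # Classical one-way JB statistic in this direction
        total += n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
    return total

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # multivariate normal sample
print(jb_sum(X))                               # moderate value under normality
```

Under the null the summands are asymptotically independent ${\chi}_{2}^{2}$ variables, so the sum can be referred to a ${\chi}_{2p}^{2}$ distribution; heavy-tailed or skewed alternatives inflate the per-direction skewness and kurtosis terms and hence the sum.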

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).