Article

On a Vector-Valued Measure of Multivariate Skewness

Dipartimento di Economia, Società e Politica, Università degli Studi di Urbino “Carlo Bo”, Via Saffi 42, 61029 Urbino, Italy
Submission received: 26 August 2021 / Revised: 17 September 2021 / Accepted: 22 September 2021 / Published: 29 September 2021
(This article belongs to the Special Issue Symmetry and Asymmetry in Multivariate Statistics and Data Science)

Abstract

The canonical skewness vector is an analytically simple function of the third-order, standardized moments of a random vector. Statistical applications of this skewness measure include semiparametric modeling, independent component analysis, model-based clustering, and multivariate normality testing. This paper investigates some properties of the canonical skewness vector with respect to representations, transformations, and norm. In particular, the paper shows its connections with tensor contraction, scalar measures of multivariate kurtosis and Mardia’s skewness, the best-known scalar measure of multivariate skewness. A simulation study empirically compares the powers of tests for multivariate normality based on the squared norm of the canonical skewness vector and on Mardia’s skewness. An example with financial data illustrates the statistical applications of the canonical skewness vector.

1. Introduction

Let x be a p-dimensional random vector with mean μ , nonsingular covariance matrix Σ and finite third-order moments. Ref. [1] introduced the vector-valued measure of multivariate skewness as follows:
$\gamma_{1V} = E\left[\left(\mathbf{z}^{\top}\mathbf{z}\right)\mathbf{z}\right],$
where z = Σ^{−1/2}(x − μ) is the standardization of x and Σ^{−1/2} is the positive definite symmetric square root of the concentration matrix Σ^{−1}, that is, the inverse of Σ:
$\Sigma^{-1/2} \succ 0,\quad \left(\Sigma^{-1/2}\right)^{\top} = \Sigma^{-1/2},\quad \Sigma^{-1/2}\,\Sigma^{-1/2} = \Sigma^{-1},\quad \Sigma^{-1/2}\,\Sigma\,\Sigma^{-1/2} = I_p.$
The i-th element of γ1V is
$E\left[\left(\mathbf{z}^{\top}\mathbf{z}\right)Z_i\right] = E\left(Z_1^{2}Z_i\right) + \ldots + E\left(Z_p^{2}Z_i\right),$
where Z k is the k-th component of z . We shall refer to γ 1 V as the canonical skewness vector to distinguish it from less known vector-valued measures of multivariate skewness [2,3]. In the following, all vectors are regarded as column vectors.
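The sample counterpart of the canonical skewness vector is straightforward to compute. The following sketch (illustrative numpy code, not taken from the paper; the function name is ours) standardizes the data with the symmetric inverse square root of the sample covariance and averages the vectors (z_i'z_i)z_i. In the univariate case, it reduces to the ordinary third standardized sample moment.

```python
import numpy as np

def canonical_skewness_vector(X):
    """Sample canonical skewness vector: g1V = (1/n) sum_i (z_i' z_i) z_i,
    where z_i = S^{-1/2}(x_i - xbar) and S^{-1/2} is the symmetric positive
    definite square root of the inverse sample covariance (1/n convention)."""
    X = np.asarray(X, dtype=float)            # shape (n, p)
    n, p = X.shape
    d = X - X.mean(axis=0)
    S = d.T @ d / n                           # sample covariance with 1/n
    w, V = np.linalg.eigh(S)                  # spectral decomposition of S
    S_inv_half = V @ np.diag(w ** -0.5) @ V.T # symmetric square root of S^{-1}
    Z = d @ S_inv_half                        # rows are standardized observations
    return ((Z ** 2).sum(axis=1)[:, None] * Z).mean(axis=0)

# Univariate check: for p = 1 the vector reduces to the sample skewness g1.
rng = np.random.default_rng(0)
x = rng.exponential(size=200)                 # a skewed univariate sample
g1 = np.mean(((x - x.mean()) / x.std()) ** 3) # third standardized sample moment
g1V = canonical_skewness_vector(x.reshape(-1, 1))
assert np.isclose(g1V[0], g1)
```

The agreement in the univariate case is exact, since both quantities are the same function of the standardized data.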
The intuition behind γ 1 V might be better appreciated by considering some special but relevant cases. In the univariate case, that is, when the only component of x is the random variable X, the skewness vector coincides with the skewness of X, that is, its third standardized moment, as follows:
$\gamma_{1V}\left(\mathbf{x}\right) = \gamma_1\left(X\right) = \frac{E\left[\left(X-\mu\right)^{3}\right]}{\sigma^{3}},$
where μ and σ are the expected value and the standard deviation, respectively, of X. The canonical skewness vector is a null vector when x is centrally symmetric, that is, if x − μ and μ − x are identically distributed [4]. In the bivariate case, the skewness vector admits the simpler representation
$\gamma_{1V}\begin{pmatrix} X_1\\ X_2 \end{pmatrix} = \begin{pmatrix} E\left(Z_1^{3}\right) + E\left(Z_1 Z_2^{2}\right)\\ E\left(Z_1^{2}Z_2\right) + E\left(Z_2^{3}\right) \end{pmatrix}.$
The canonical skewness vector γ 1 V appears in several areas of multivariate statistical analysis. In model-based clustering, the projection of the standardized data onto the direction of the sample counterpart of γ 1 V is used to estimate the best discriminant projection [5]. In independent component analysis (ICA), γ 1 V is the product of an orthogonal matrix and the vector whose i-th element is the skewness of the i-th independent component ([6]). In the semiparametric model posed by [7], γ 1 V is instrumental in identifying the parameter of the model. Within an invariant coordinate selection (ICS) approach, the vector γ 1 V might be regarded as the standardized difference between two appropriately chosen random vectors ([8]).
Unfortunately, the canonical skewness vector might be a null vector even if the underlying distribution is skewed with finite third moments and a positive definite covariance matrix. For example, let the density function of the random vector x = (X₁, X₂, X₃)ᵀ be f(x₁, x₂, x₃; θ) = 2ϕ(x₁)ϕ(x₂)ϕ(x₃)Φ(θx₁x₂x₃), where ϕ(·) is the standard normal probability density function, Φ(·) is the standard normal cumulative distribution function, and θ is a non-null real value. Then, x is a standard random vector, and its only non-null third moment is E(X₁X₂X₃). The canonical skewness vector is then the three-dimensional null vector.
The partial skewness β 1 P x of x is just the squared norm of its canonical skewness vector:
$\beta_{1P}\left(\mathbf{x}\right) = \left\|\gamma_{1V}\left(\mathbf{x}\right)\right\|^{2} = \gamma_{1V}^{\top}\left(\mathbf{x}\right)\gamma_{1V}\left(\mathbf{x}\right) = \sum_{i=1}^{p}\sum_{h=1}^{p}\sum_{k=1}^{p} E\left(Z_h^{2}Z_i\right)E\left(Z_k^{2}Z_i\right).$
It was independently proposed by several authors (see, for example, refs. [1,8,9]) as a scalar measure of multivariate skewness; its name reflects the fact that it does not depend on the cross-product moments E(Z_iZ_jZ_k) when i, j, and k differ from each other [6]. Partial skewness is nonnegative, equals zero if x is centrally symmetric, and is invariant with respect to affine, nonsingular transformations. Its statistical applications include multivariate normality testing [10] and multivariate analysis of variance [9].
Ref. [11] proposed to measure the skewness of x by the following expectation:
$\beta_{1T}\left(\mathbf{x}\right) = E\left[\left(\mathbf{z}^{\top}\mathbf{w}\right)^{3}\right],$
where z and w are identically distributed and mutually independent. In the following, we shall refer to β 1 T as the total skewness since it depends on all third-order standardized moments of x . Just like partial skewness, total skewness is nonnegative, equals zero if the underlying distribution is centrally symmetric, and is invariant with respect to nonsingular affine transformations. Moreover, just like partial skewness, total skewness is used in multivariate normality testing [11] and in multivariate analysis of variance [9]. However, β 1 T is by far more popular than β 1 P , to the point of being a default measure of multivariate skewness, as remarked by [3].
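As a quick numerical illustration (an assumed setup, not from the paper), the sample versions of both scalar measures can be obtained from the Gram matrix of the standardized observations: the total skewness is the average of the cubed pairwise inner products, and the partial skewness is the squared norm of the sample canonical skewness vector.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.gamma(shape=2.0, scale=1.0, size=(150, 3))  # a skewed trivariate sample

n, p = X.shape
d = X - X.mean(axis=0)
S = d.T @ d / n
w, V = np.linalg.eigh(S)
Z = d @ (V @ np.diag(w ** -0.5) @ V.T)              # standardized observations

G = Z @ Z.T                                         # pairwise inner products z_i' z_j
b1T = (G ** 3).mean()                               # sample total skewness (Mardia)
g1V = ((Z ** 2).sum(axis=1)[:, None] * Z).mean(axis=0)
b1P = g1V @ g1V                                     # sample partial skewness

# Both statistics are nonnegative; moreover b1P <= p * b1T by Cauchy-Schwarz.
assert b1T >= 0 and 0 <= b1P <= p * b1T + 1e-9
```

The nonnegativity of the sample total skewness follows because it equals the squared Frobenius norm of the sample third standardized cumulant.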
The total skewness is always positive when some third-order cumulants of the underlying distribution are non-null and the covariance matrix is positive definite. This feature constitutes a major advantage of total skewness over partial skewness. For example, the total skewness and the partial skewness of the random vector x = (X₁, X₂, X₃)ᵀ with probability density function f(x₁, x₂, x₃; θ) = 2ϕ(x₁)ϕ(x₂)ϕ(x₃)Φ(θx₁x₂x₃) are β1T(x) = 6E²(X₁X₂X₃) and β1P(x) = 0. As remarked before, the sample counterparts of both skewness measures were used to test normality, but they are better suited for testing symmetry, since there are many multivariate distributions which are centrally symmetric without being normal, such as the multivariate Student t distribution.
There are many more measures of multivariate skewness (see, for example, ref. [12]). Similarly, there are notions of multivariate symmetry other than central symmetry, such as weak symmetry, sign symmetry and elliptical symmetry. A detailed investigation of notions of multivariate symmetry and measures of multivariate skewness falls outside the scope of the present paper. We refer the interested reader to [4], where the previous literature on both topics is thoroughly reviewed.
This paper contributes to the literature on the canonical skewness vector both with empirical and theoretical results. The theorems in Section 2 provide some alternative representations for γ 1 V , β 1 P and β 1 T , together with some skewness–kurtosis inequalities and some insights into the behavior of these skewness measures under some well-known transformations. The simulation studies in Section 3 compare the performance of partial skewness and total skewness for testing multivariate symmetry. Section 4 uses financial data to illustrate the statistical applications of the canonical skewness vector and partial skewness. Section 5 contains some concluding remarks and hints for future research. The proofs of the theorems are relegated to Appendix A.

2. Theory

This section contains several theoretical results related to the canonical skewness vector. Theorems 1 and 2 represent the canonical skewness vector by means of the star product of matrices and the vectorization operator. Theorems 3 and 4 investigate the behavior of the canonical skewness vector under linear transformations and convolution. Theorems 5 and 6 focus on partial skewness and total skewness by providing alternative representations and by establishing skewness–kurtosis inequalities.
The third moment matrix of a p-dimensional random vector x with finite third-order moments is as follows:
$M_{3,\mathbf{x}} = E\left(\mathbf{x}\otimes\mathbf{x}^{\top}\otimes\mathbf{x}\right).$
The matrix A ⊗ B denotes the Kronecker product of the matrices A = (a_ij) ∈ ℝ^{p×q} and B = (b_ij) ∈ ℝ^{h×k}, that is, the block matrix whose (i, j)-th block is the matrix a_ij B. The third cumulant of x, that is, the third moment of x − μ, is
$K_{3,\mathbf{x}} = M_{3,\mathbf{x}-\mu} = E\left[\left(\mathbf{x}-\mu\right)\otimes\left(\mathbf{x}-\mu\right)^{\top}\otimes\left(\mathbf{x}-\mu\right)\right].$
The third standardized moment of x, which coincides with its third standardized cumulant, is the third moment (cumulant) of z:
$K_{3,\mathbf{z}} = M_{3,\mathbf{z}} = E\left(\mathbf{z}\otimes\mathbf{z}^{\top}\otimes\mathbf{z}\right).$
The matrices M₃,x, K₃,x and K₃,z are the matricized versions of the third moment tensor
$M_{3,\mathbf{x}} = \left\{E\left(X_i X_j X_h\right)\right\},$
the third cumulant tensor
$K_{3,\mathbf{x}} = \left\{E\left[\left(X_i-\mu_i\right)\left(X_j-\mu_j\right)\left(X_h-\mu_h\right)\right]\right\}$
and the third standardized moment tensor
$M_{3,\mathbf{z}} = \left\{E\left(Z_i Z_j Z_h\right)\right\},$
that is, the third-order arrays containing all the third moments of x, x − μ, and z. Third-order tensors provide a natural tool for representing the skewness of a random vector [1]. In particular, the canonical skewness vector is just the tensor contraction of K₃,z and I_p. Its i-th component is
$\sum_{j=1}^{p}\sum_{h=1}^{p} E\left(Z_i Z_j Z_h\right)\delta_{jh},\quad\text{where}\quad \delta_{jh} = \begin{cases} 1 & j = h\\ 0 & j \neq h. \end{cases}$
Some tensor contractions might be represented by means of the star product of matrices, as defined by [13]. The star product of the a × b matrix M = (m_ij) and the ac × bd block matrix N = (N_ij), with N_ij ∈ ℝ^{c×d}, is the c × d matrix
$M \ast N = \sum_{i=1}^{a}\sum_{j=1}^{b} m_{ij}\,N_{ij},$
that is, the linear combination of the blocks of N where the coefficients are the elements of M . The following theorem shows that the canonical skewness vector is the star product of an identity matrix and the third standardized moment.
Theorem 1.
Let z be the standardization of a p -dimensional random vector x with a positive definite covariance matrix and finite third-order moments. Then, the canonical skewness vector γ 1 V of x is the star product of the p × p identity matrix I p and the third standardized cumulant matrix K 3 , z :
$\gamma_{1V} = I_p \ast K_{3,\mathbf{z}}.$
We illustrate the above theorem with the bivariate random vector X 1 , X 2 , whose standardization is Z 1 , Z 2 . The star product of the 2 × 2 identity matrix and the third standardized moment of X 1 , X 2 is as follows:
$\begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \ast \begin{pmatrix} E\left(Z_1^{3}\right) & E\left(Z_1^{2}Z_2\right)\\ E\left(Z_1^{2}Z_2\right) & E\left(Z_1Z_2^{2}\right)\\ E\left(Z_1^{2}Z_2\right) & E\left(Z_1Z_2^{2}\right)\\ E\left(Z_1Z_2^{2}\right) & E\left(Z_2^{3}\right) \end{pmatrix} = 1\cdot\begin{pmatrix} E\left(Z_1^{3}\right)\\ E\left(Z_1^{2}Z_2\right) \end{pmatrix} + 0\cdot\begin{pmatrix} E\left(Z_1^{2}Z_2\right)\\ E\left(Z_1Z_2^{2}\right) \end{pmatrix} + 0\cdot\begin{pmatrix} E\left(Z_1^{2}Z_2\right)\\ E\left(Z_1Z_2^{2}\right) \end{pmatrix} + 1\cdot\begin{pmatrix} E\left(Z_1Z_2^{2}\right)\\ E\left(Z_2^{3}\right) \end{pmatrix} = \begin{pmatrix} E\left(Z_1^{3}\right) + E\left(Z_1Z_2^{2}\right)\\ E\left(Z_1^{2}Z_2\right) + E\left(Z_2^{3}\right) \end{pmatrix},$
which coincides with the canonical skewness vector of X 1 , X 2 , as defined in the Introduction.
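Theorem 1 can be checked numerically on the empirical distribution of a sample. The sketch below (illustrative code; the helper name `star` is ours) implements the star product for block matrices, builds the sample third standardized cumulant matrix as a block column, and compares the star product with the direct definition of the skewness vector.

```python
import numpy as np

def star(M, N):
    """Star product of an (a x b) matrix M and an (a*c x b*d) block matrix N:
    the linear combination of the c x d blocks of N with coefficients m_ij."""
    a, b = M.shape
    c, d = N.shape[0] // a, N.shape[1] // b
    out = np.zeros((c, d))
    for i in range(a):
        for j in range(b):
            out += M[i, j] * N[i * c:(i + 1) * c, j * d:(j + 1) * d]
    return out

rng = np.random.default_rng(2)
X = rng.gamma(2.0, size=(200, 2))                  # a skewed bivariate sample
n, p = X.shape
d = X - X.mean(axis=0)
w, V = np.linalg.eigh(d.T @ d / n)
Z = d @ (V @ np.diag(w ** -0.5) @ V.T)             # standardized observations

# Sample third standardized cumulant K3z = E(z (x) zz'), a p^2 x p block column.
K3z = np.mean([np.kron(z.reshape(p, 1), np.outer(z, z)) for z in Z], axis=0)

g1V_star = star(np.eye(p), K3z).ravel()            # Theorem 1: I_p * K3z
g1V_direct = ((Z ** 2).sum(axis=1)[:, None] * Z).mean(axis=0)
assert np.allclose(g1V_star, g1V_direct)
```

The equality is exact (up to floating-point error), since both sides are the same contraction of the sample third standardized moments.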
A location vector l(x) and a scatter matrix S(x) of a p-dimensional random vector x are a p-dimensional vector and a symmetric, positive definite p × p matrix satisfying
$l\left(A\mathbf{x}+\mathbf{b}\right) = A\,l\left(\mathbf{x}\right)+\mathbf{b} \quad\text{and}\quad S\left(A\mathbf{x}+\mathbf{b}\right) = A\,S\left(\mathbf{x}\right)A^{\top}$
for any p × p matrix A of full rank and for any p-dimensional real vector b (see, for example, ref. [14]). The mean μ and
$\mu_{3,\mathbf{x}} = E\left[\left(\mathbf{x}-\mu\right)^{\top}\Sigma^{-1}\left(\mathbf{x}-\mu\right)\mathbf{x}\right]$
are examples of location vectors [8]. The covariance and
$Q\left(\mathbf{x}\right) = E\left[\left(\mathbf{x}-\mu\right)^{\top}\Sigma^{-1}\left(\mathbf{x}-\mu\right)\mathbf{x}\mathbf{x}^{\top}\right]$
are examples of scatter matrices [8]. Let l₁(x), l₂(x) and S(x) be two location vectors and a scatter matrix of a p-dimensional random vector x. Ref. [14] suggests using
$a^{2}\left(\mathbf{x}\right) = \left[l_1\left(\mathbf{x}\right)-l_2\left(\mathbf{x}\right)\right]^{\top} S^{-1}\left(\mathbf{x}\right)\left[l_1\left(\mathbf{x}\right)-l_2\left(\mathbf{x}\right)\right]$
as a scalar measure of multivariate skewness, where S^{−1}(x) is the inverse of S(x). It measures the skewness in the direction of the vector-valued measure of multivariate skewness
$a\left(\mathbf{x}\right) = S^{-1/2}\left(\mathbf{x}\right)\left[l_1\left(\mathbf{x}\right)-l_2\left(\mathbf{x}\right)\right],$
where S^{−1/2}(x) is the positive definite symmetric square root of S^{−1}(x). The following theorem represents the canonical skewness vector and the partial skewness as a(x) and a²(x), where
$l_1\left(\mathbf{x}\right) = \mu_{3,\mathbf{x}},\quad l_2\left(\mathbf{x}\right) = p\,\mu \quad\text{and}\quad S\left(\mathbf{x}\right) = \Sigma.$
The theorem also represents the canonical skewness vector by means of simple matrix functions acting on the third cumulant and on the covariance matrix.
Theorem 2.
Let x be a p-dimensional random vector with a positive definite covariance matrix and finite third-order moments. Additionally, let μ, Σ^{−1}, and K₃,x be the mean vector, the concentration matrix and the third cumulant of x. Then, the canonical skewness vector of x is
$\gamma_{1V} = \Sigma^{-1/2}K_{3,\mathbf{x}}^{\top}\,\mathrm{vec}\left(\Sigma^{-1}\right) = \Sigma^{-1/2}\left\{E\left[\left(\mathbf{x}-\mu\right)^{\top}\Sigma^{-1}\left(\mathbf{x}-\mu\right)\mathbf{x}\right] - p\,\mu\right\},$
where Σ^{−1/2} is the positive definite symmetric square root of Σ^{−1}.
The canonical skewness vector is location invariant: γ1V(x) = γ1V(x + v) for any p-dimensional real vector v. On the other hand, the canonical skewness vector is neither invariant nor equivariant with respect to linear transformations: γ1V(Ax) may differ from both γ1V(x) and Aγ1V(x), where A is a nonsingular p × p real matrix. Ref. [5] conjectures that γ1V(Ax) = Uγ1V(x) for some orthogonal p × p matrix U. The following theorem makes this statement more precise.
Theorem 3.
Let γ1V(x) be the canonical skewness vector of the p-dimensional random vector x. Additionally, let γ1V(Ax + b) be the canonical skewness vector of Ax + b, where A is a nonsingular p × p matrix and b is a p-dimensional real vector. Finally, let U = Σ^{−1/2}A^{−1}Ω^{1/2}, where Σ^{−1/2} and Ω^{1/2} are the positive definite symmetric square roots of the concentration matrix of x and of the covariance matrix of Ax + b. Then, U is orthogonal, coincides with Aᵀ when A itself is orthogonal, and γ1V(x) = Uγ1V(Ax + b).
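Theorem 3 holds exactly for the empirical distribution of any sample, which gives a direct numerical check. In the sketch below (illustrative code; the data-generating choices and the matrix A are arbitrary), the sample skewness vectors of the original and transformed data are related by the orthogonal matrix U.

```python
import numpy as np

def inv_sqrt(S):
    """Symmetric positive definite square root of the inverse of S."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

def g1V(X):
    """Sample canonical skewness vector of the rows of X (1/n covariance)."""
    n, p = X.shape
    d = X - X.mean(axis=0)
    Z = d @ inv_sqrt(d.T @ d / n)
    return ((Z ** 2).sum(axis=1)[:, None] * Z).mean(axis=0)

rng = np.random.default_rng(3)
X = rng.gamma(2.0, size=(300, 3))            # a skewed trivariate sample
A = rng.normal(size=(3, 3))                  # a (generically) nonsingular matrix
b = rng.normal(size=3)
Y = X @ A.T + b                              # the transformed sample Ax + b

Sigma = np.cov(X.T, bias=True)               # covariance of x (1/n convention)
Omega = np.cov(Y.T, bias=True)               # covariance of Ax + b
w, V = np.linalg.eigh(Omega)
Omega_half = V @ np.diag(w ** 0.5) @ V.T     # pd symmetric square root of Omega
U = inv_sqrt(Sigma) @ np.linalg.inv(A) @ Omega_half

assert np.allclose(U @ U.T, np.eye(3))       # U is orthogonal
assert np.allclose(g1V(X), U @ g1V(Y))       # Theorem 3 on the empirical law
```

Both assertions hold up to floating-point error, because the empirical law of Y is exactly an affine transformation of the empirical law of X.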
Let X₁, ..., Xₙ be independent and identically distributed random variables whose third standardized cumulant is γ₁. It is well known that the third standardized cumulant of the sum X₁ + ... + Xₙ is γ₁/√n. The following theorem generalizes this statement to multivariate random vectors.
Theorem 4.
Let x 1 , ..., x n be n independent and identically distributed p -dimensional random vectors with positive definite covariance matrices and finite third-order moments. Additionally, let γ 1 V x i , β 1 P x i and β 1 T x i be the canonical skewness vector, the partial skewness and the total skewness of x i , for i = 1 , ..., n . Finally, let γ 1 V s n , β 1 P s n and β 1 T s n be the canonical skewness vector, the partial skewness and the total skewness of s n = x 1 + . . . + x n . Then, the following identities hold true:
$\gamma_{1V}\left(\mathbf{s}_n\right) = \frac{\gamma_{1V}\left(\mathbf{x}_i\right)}{\sqrt{n}},\quad \beta_{1P}\left(\mathbf{s}_n\right) = \frac{\beta_{1P}\left(\mathbf{x}_i\right)}{n} \quad\text{and}\quad \beta_{1T}\left(\mathbf{s}_n\right) = \frac{\beta_{1T}\left(\mathbf{x}_i\right)}{n}.$
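Theorem 4 can be verified exactly on a finitely supported distribution, where all moments are computable by enumeration. In the sketch below (the support points and probabilities are arbitrary illustrative choices), the law of s₂ = x₁ + x₂ for two independent copies is obtained by discrete convolution.

```python
import numpy as np
from itertools import product

def inv_sqrt(S):
    """Symmetric positive definite square root of the inverse of S."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

def skewness_vector(points, probs):
    """Population canonical skewness vector of a finitely supported law."""
    points, probs = np.asarray(points, float), np.asarray(probs, float)
    mu = probs @ points
    d = points - mu
    Sigma = d.T @ (d * probs[:, None])            # exact covariance matrix
    Z = d @ inv_sqrt(Sigma)                       # standardized support points
    return probs @ ((Z ** 2).sum(axis=1)[:, None] * Z)

# A small asymmetric bivariate law with nonsingular covariance.
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.]])
pr = np.array([0.4, 0.3, 0.2, 0.1])

# Exact law of s2 = x1 + x2: all pairwise sums with product probabilities.
pts2 = np.array([a + b for a, b in product(pts, pts)])
pr2 = np.array([u * v for u, v in product(pr, pr)])

g1, g2 = skewness_vector(pts, pr), skewness_vector(pts2, pr2)
assert np.allclose(g2, g1 / np.sqrt(2))           # gamma1V(s_n) = gamma1V(x)/sqrt(n)
assert np.isclose(g2 @ g2, (g1 @ g1) / 2)         # beta1P(s_n) = beta1P(x)/n
```

Since every moment is computed by exact enumeration, the scaling laws of the theorem hold up to floating-point error, with no Monte Carlo noise.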
The total skewness and the partial skewness are the squared norms of the third standardized moment matrix and of the canonical skewness vector, respectively. Therefore, both skewness measures are functions of the third standardized moments. The following theorem shows that they can also be represented as functions of the second and third central moments, by means of simple matrix operations acting on the covariance and on the third cumulant.
Theorem 5.
Let x be a p-dimensional random vector with finite third-order moments and positive definite covariance matrix. Additionally, let K 3 , x and Σ 1 be the third cumulant and the concentration matrix of x . Then, the total skewness and the partial skewness of x are as follows:
$\beta_{1T} = \mathrm{tr}\left[\Sigma^{-1}K_{3,\mathbf{x}}^{\top}\left(\Sigma^{-1}\otimes\Sigma^{-1}\right)K_{3,\mathbf{x}}\right] \quad\text{and}\quad \beta_{1P} = \mathrm{vec}^{\top}\left(\Sigma^{-1}\right)K_{3,\mathbf{x}}\,\Sigma^{-1}K_{3,\mathbf{x}}^{\top}\,\mathrm{vec}\left(\Sigma^{-1}\right).$
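Both representations in Theorem 5 can be checked against the direct definitions on the empirical moments of a sample. The sketch below (illustrative code) builds the third cumulant matrix K₃ observation by observation and compares the matrix formulas with the standardized-data computations; the two sides agree exactly because both are functions of the same empirical law.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.gamma(2.0, size=(200, 3))            # a skewed trivariate sample
n, p = X.shape
d = X - X.mean(axis=0)
Sigma = d.T @ d / n
Sigma_inv = np.linalg.inv(Sigma)
w, V = np.linalg.eigh(Sigma)
Z = d @ (V @ np.diag(w ** -0.5) @ V.T)       # standardized observations

# Third cumulant matrix K3 = E[(x-mu)(x-mu)' (x) (x-mu)], a p^2 x p matrix.
K3 = np.mean([np.kron(np.outer(u, u), u.reshape(p, 1)) for u in d], axis=0)
vecS = Sigma_inv.ravel()                     # vec(Sigma^{-1}); symmetric matrix

# Partial skewness: squared norm of g1V versus the vec-based formula.
g1V = ((Z ** 2).sum(axis=1)[:, None] * Z).mean(axis=0)
b1P_direct = g1V @ g1V
b1P_matrix = vecS @ K3 @ Sigma_inv @ K3.T @ vecS

# Total skewness: pairwise cubed inner products versus the trace formula.
G = Z @ Z.T
b1T_direct = (G ** 3).mean()
b1T_matrix = np.trace(Sigma_inv @ K3.T @ np.kron(Sigma_inv, Sigma_inv) @ K3)

assert np.isclose(b1P_direct, b1P_matrix)
assert np.isclose(b1T_direct, b1T_matrix)
```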
The fourth-order standardized moment of a p-dimensional random vector x with finite fourth-order moments and positive definite covariance is as follows:
$M_{4,\mathbf{z}} = E\left(\mathbf{z}\otimes\mathbf{z}^{\top}\otimes\mathbf{z}\otimes\mathbf{z}^{\top}\right),$
where z is the standardization of x. Refs. [11,15] defined the partial kurtosis (also known as Mardia's kurtosis) and the total kurtosis (also known as Koziol's kurtosis) as the trace and the squared norm of M₄,z:
$\beta_{2P}\left(\mathbf{x}\right) = \mathrm{tr}\left(M_{4,\mathbf{z}}\right) \quad\text{and}\quad \beta_{2T}\left(\mathbf{x}\right) = \left\|M_{4,\mathbf{z}}\right\|^{2}.$
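The two kurtosis measures have simple sample counterparts: Mardia's kurtosis is the average of the squared norms (z_i'z_i)², and Koziol's kurtosis is the average of the fourth powers of the pairwise inner products. The bounds asserted at the end of the sketch below, β₂P ≥ p² and p²·β₂T ≥ β₂P², are standard consequences of Jensen's and the Cauchy–Schwarz inequalities and hold for any distribution (this illustrative check is ours, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_t(df=8, size=(250, 4))      # a heavy-tailed sample, p = 4
n, p = X.shape
d = X - X.mean(axis=0)
w, V = np.linalg.eigh(d.T @ d / n)
Z = d @ (V @ np.diag(w ** -0.5) @ V.T)       # standardized observations

r2 = (Z ** 2).sum(axis=1)                    # squared norms z_i' z_i
b2P = (r2 ** 2).mean()                       # Mardia's kurtosis: tr(M4z)
G = Z @ Z.T
b2T = (G ** 4).mean()                        # Koziol's kurtosis: ||M4z||^2

assert b2P >= p ** 2 - 1e-9                  # Jensen: E[(z'z)^2] >= (E[z'z])^2 = p^2
assert b2T * p ** 2 >= b2P ** 2 - 1e-6       # Cauchy-Schwarz: tr(M)^2 <= p^2 ||M||^2
```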
Ref. [1] as well as [16] provided some skewness–kurtosis inequalities involving partial skewness. The following theorem contributes to the literature on inequalities between multivariate measures of skewness and kurtosis.
Theorem 6.
Let x be a p -dimensional random vector with a positive definite covariance matrix and finite fourth-order moments. Additionally, let β 1 T , β 1 P , β 2 T , and β 2 P be the total skewness, the partial skewness, the total kurtosis and the partial kurtosis of x . Then, the following inequalities hold true:
$\beta_{1T} \leq \frac{p\left(\beta_{2T}-p^{2}\right)}{2} \quad\text{and}\quad \beta_{1P} \leq \frac{p\left(\beta_{2P}^{2}-p^{4}\right)}{2}.$
As a corollary, the inequality 2γ₁² ≤ β₂² holds true for a random variable X whose third and fourth standardized moments are γ₁ and β₂.

3. Simulations

In this section, we use simulations to compare the powers of symmetry tests based on the partial and the total skewness. To this end, we simulated from the two-component normal mixture with proportional covariances and from the multivariate skew-normal distribution. For both models, testing symmetry is a crucial issue, and ordinary likelihood-based methods are somewhat problematic. For each choice of the parameters, variables and units, we computed the percentage of samples rejected at the 0.05 level by the testing procedures proposed by [1] and by [11]. We refer to the former and to the latter testing procedures as "Mori" and "Mardia", respectively. The results given by the simulations clearly hint that "Mori" is a strong competitor for "Mardia".
We first simulated 10,000 samples of 100 observations from the mixture π₁·N_p(0_p, I_p) + (1 − π₁)·N_p(1_p, c·I_p), for π₁ = 0.1, 0.2, 0.3, 0.4, p = 5, 10, 15, 20 and c = 0.5, 1, 2. The symbols 0_p, 1_p and I_p denote the p-dimensional vector of zeros, the p-dimensional vector of ones and the p × p identity matrix, respectively. The two-component normal mixture with proportional covariances describes the worst case from a multivariate outlier detection perspective and leads to an unbounded likelihood function (see, for example, ref. [5]). The scatter plot in Figure 1 depicts 10,000 outcomes from 0.2·N₂(0₂, I₂) + 0.8·N₂(2.5·1₂, I₂) and exemplifies the skewness of normal mixtures.
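A minimal sketch of the data-generating process and of the two test statistics used in this study is given below (illustrative code; the rejection step, which requires the null critical values of the "Mori" and "Mardia" procedures, is omitted).

```python
import numpy as np

def skewness_statistics(X):
    """Sample partial (Mori) and total (Mardia) skewness of the rows of X."""
    n, p = X.shape
    d = X - X.mean(axis=0)
    w, V = np.linalg.eigh(d.T @ d / n)
    Z = d @ (V @ np.diag(w ** -0.5) @ V.T)         # standardized observations
    g1V = ((Z ** 2).sum(axis=1)[:, None] * Z).mean(axis=0)
    G = Z @ Z.T
    return g1V @ g1V, (G ** 3).mean()              # (b1P, b1T)

def mixture_sample(rng, n, p, pi1, c):
    """n draws from pi1*N_p(0, I) + (1 - pi1)*N_p(1, c*I)."""
    comp = rng.random(n) < pi1                     # component indicators
    return np.where(comp[:, None],
                    rng.normal(size=(n, p)),
                    1.0 + np.sqrt(c) * rng.normal(size=(n, p)))

rng = np.random.default_rng(11)
X = mixture_sample(rng, n=100, p=5, pi1=0.2, c=0.5)
b1P, b1T = skewness_statistics(X)
assert 0 <= b1P <= 5 * b1T + 1e-9                  # b1P <= p * b1T always holds
```

Repeating the last three lines 10,000 times for each parameter combination and comparing the statistics with their null critical values reproduces the structure of the simulation study.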
The results of this simulation study are reported in Table 1 and may be summarized as follows. When less weight is placed on the more dispersed component, "Mori" always outperforms "Mardia", although both tests show a very high power. When the components are equally dispersed, "Mardia" always outperforms "Mori", although only slightly. Both tests show a very low power, which tends to decrease with the absolute difference between the components' weights. When more weight is placed on the more dispersed component, "Mori" always outperforms "Mardia", except when the weight of the less dispersed component is 0.1. The powers of "Mardia" and "Mori" increase as |2π₁ − 1| increases, that is, as the absolute difference between the mixture weights π₁ and 1 − π₁ increases.
We also simulated 10,000 samples of sizes n = 100, 200, 500 from the multivariate skew-normal density function 2ϕ_p(x; 0_p, I_p)Φ(α·1_pᵀx), where α = 1, 2, 3, 4 and ϕ_p(x; 0_p, I_p) is the probability density function of a p-dimensional standard normal distribution, with p = 5, 10, 15, and 20. The information matrix of the multivariate skew-normal distribution is singular when α = 0, thus preventing the use of standard likelihood-based methods when testing symmetry. Table 2 reports the results of this simulation study, which somewhat differ from those of the previous one. Firstly, "Mardia" nearly always outperforms "Mori", although only slightly. Secondly, the performances of both "Mardia" and "Mori" improve with the number of units and with the parameter α, but deteriorate with the number of variables. Thirdly, the difference between the performances of "Mardia" and "Mori" tends to decrease as the number of units increases. The scatter plot in Figure 2 depicts 10,000 outcomes from 2ϕ₂(x₁, x₂; 0₂, I₂)Φ(2.5x₁ + 2.5x₂) and exemplifies the skewness of skew-normal distributions.

4. Example

This section uses the financial data in [17] to illustrate a statistical application of the canonical skewness vector and the partial skewness. The dataset, which we shall refer to as ALL, includes the percentage logarithmic daily returns recorded in the French, Dutch and Spanish financial markets from 25 June 2003 to 23 June 2008. Let GOOD be the subset of ALL, which includes the returns in ALL following a positive return of the U.S. financial market, that is, good news from the leading world financial market. Similarly, let BAD be the subset of ALL, which includes all the returns in ALL following a negative return of the U.S. financial market, that is, bad news from the leading world financial market. The three datasets are instrumental in investigating the behavior of financial markets in the presence of either good or bad news [18,19].
Let X i be the i-th univariate observation in a sample of size n. The sample mean, the sample variance and the sample skewness are as follows:
$\bar{X} = \frac{1}{n}\sum_{i=1}^{n}X_i,\quad S^{2} = \frac{1}{n}\sum_{i=1}^{n}\left(X_i-\bar{X}\right)^{2} \quad\text{and}\quad g_1 = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{X_i-\bar{X}}{S}\right)^{3}.$
The sample skewnesses of the French, Dutch and Spanish data in ALL are negative: −0.349, −0.236 and −0.461. The sample skewnesses of the French, Dutch and Spanish data in BAD are negative as well: −0.942, −0.734 and −1.283. On the other hand, the sample skewnesses of the French, Dutch and Spanish data in GOOD are positive: 0.578, 0.585 and 0.774. The absolute skewnesses of the three markets in ALL are smaller than the corresponding absolute skewnesses in GOOD, which in turn are smaller than the corresponding absolute skewnesses in BAD. All these features are well-known stylized facts of financial markets and are modeled by the SGARCH model in [18].
Let the vector x_i be the i-th multivariate observation in a sample of size n. The sample mean, the sample covariance matrix, the sample canonical skewness vector and the sample partial skewness are
$\bar{\mathbf{x}} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i,\quad S = \frac{1}{n}\sum_{i=1}^{n}\left(\mathbf{x}_i-\bar{\mathbf{x}}\right)\left(\mathbf{x}_i-\bar{\mathbf{x}}\right)^{\top},\quad \mathbf{z}_i = S^{-1/2}\left(\mathbf{x}_i-\bar{\mathbf{x}}\right),$
$g_{1V} = \frac{1}{n}\sum_{i=1}^{n}\left(\mathbf{z}_i^{\top}\mathbf{z}_i\right)\mathbf{z}_i \quad\text{and}\quad b_{1P} = \left\|g_{1V}\right\|^{2} = \frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\left(\mathbf{z}_i^{\top}\mathbf{z}_i\right)\left(\mathbf{z}_j^{\top}\mathbf{z}_j\right)\left(\mathbf{z}_i^{\top}\mathbf{z}_j\right).$
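The double-sum form of b₁P above can be verified to match the squared norm of g₁V on any sample. The sketch below (illustrative numpy code with arbitrary skewed data, not the MSCI returns) computes both sides from the Gram matrix of the standardized observations.

```python
import numpy as np

rng = np.random.default_rng(13)
X = rng.lognormal(size=(120, 3))             # a skewed trivariate sample
n, p = X.shape
d = X - X.mean(axis=0)
S = d.T @ d / n                              # sample covariance (1/n convention)
w, V = np.linalg.eigh(S)
Z = d @ (V @ np.diag(w ** -0.5) @ V.T)       # rows are z_i = S^{-1/2}(x_i - xbar)

g1V = ((Z ** 2).sum(axis=1)[:, None] * Z).mean(axis=0)

# Double-sum form: (1/n^2) sum_ij (z_i'z_i)(z_j'z_j)(z_i'z_j).
r2 = (Z ** 2).sum(axis=1)                    # z_i' z_i
G = Z @ Z.T                                  # z_i' z_j
b1P_double = (np.outer(r2, r2) * G).sum() / n ** 2
assert np.isclose(g1V @ g1V, b1P_double)
```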
In the ALL, GOOD and BAD datasets, x_i is the trivariate vector whose elements are the returns of the French, Dutch and Spanish financial markets recorded during the i-th day. The canonical skewness vectors of ALL, GOOD and BAD are negative, positive and negative, respectively:
$g_{1V}\left(ALL\right) = \begin{pmatrix} -0.359\\ -0.111\\ -0.341 \end{pmatrix},\quad g_{1V}\left(GOOD\right) = \begin{pmatrix} 0.275\\ 0.362\\ 0.633 \end{pmatrix},\quad g_{1V}\left(BAD\right) = \begin{pmatrix} -0.771\\ -0.295\\ -1.008 \end{pmatrix}.$
Their squared norms, that is, the partial skewnesses of ALL, GOOD and BAD, are
$b_{1P}\left(ALL\right) = 0.258,\quad b_{1P}\left(GOOD\right) = 0.607,\quad b_{1P}\left(BAD\right) = 1.697.$
The partial skewness of ALL is smaller than the partial skewness of GOOD, which in turn is smaller than the partial skewness of BAD. Therefore, for the data at hand, the canonical skewness vector and the partial skewness nicely generalize the univariate features of financial returns to the multivariate case.

5. Conclusions

This paper investigated some theoretical and empirical properties of the canonical skewness vector. The theoretical contribution is threefold. Theorems 1 and 2 pertain to the representation of the canonical skewness vector, either with the star product or with some location vectors. Theorems 3 and 4 deal with transformations, which may be either linear transformations or convolutions. Theorems 5 and 6 highlight some connections between the partial skewness and the total skewness, with respect to representations and inequalities. The empirical contribution is twofold. Firstly, the results of the simulation studies hint that partial skewness might outperform total skewness as a test statistic for symmetry. Secondly, the canonical skewness vector, when applied to the financial data in Section 4, extends a well-known feature of univariate financial returns to the multivariate case.
Regrettably, the literature on the analytical forms and properties of the canonical skewness vector under well-known parametric assumptions is quite sparse. For example, to the best of our knowledge, there does not exist any theoretical result regarding the canonical skewness vector of skew–elliptical distributions [20], which are very useful in modeling linear functions of order statistics (see, for example, ref. [21]). We believe that the problem might be conveniently addressed within the smaller class of scale mixtures of skew–normal distributions by jointly using the results on their moments [22,23,24] and those in this paper (see, for example, Theorem 5). Additionally, the projection of a random vector onto the direction of its canonical skewness vector has a simple parametric interpretation when the underlying distribution is a location mixture of two normal distributions with proportional covariance matrices [5]. It would be interesting to know whether this result might be somewhat extended to scale mixtures of skew–normal distributions.
For the dataset in Section 4, the canonical skewness vector and the partial skewness generalize to the multivariate case the tendency of univariate financial returns to be more skewed in the presence of bad news. This empirical result encourages addressing the following research questions. Firstly, do the results for the dataset in Section 4 hold for other financial markets and for other time periods? Secondly, could the canonical skewness vector be meaningfully used to measure the effect of news about the COVID-19 pandemic on financial markets? Thirdly, does the canonical skewness vector or the partial skewness have a simple analytical form under the multivariate SGARCH model? Fourthly, is the partial skewness of several small, capitalized financial markets smaller than the partial skewness of the same number of larger financial markets? We hope to address these interesting research questions in future works.

Funding

This research received no external funding.

Data Availability Statement

The dataset used in this paper is available from Morgan Stanley Capital International (https://www.msci.com/, accessed on 1 September 2021).

Acknowledgments

The author would like to thank three anonymous reviewers whose insightful comments greatly helped to improve the quality of this paper.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Proof of Theorem 1.
By definition, the third standardized cumulant of x is as follows:
$K_{3,\mathbf{z}} = E\left(\mathbf{z}\otimes\mathbf{z}^{\top}\otimes\mathbf{z}\right),$
which might be represented as follows:
$K_{3,\mathbf{z}} = E\left(\mathbf{z}\otimes\mathbf{z}\mathbf{z}^{\top}\right)$
by applying the following identities:
$\mathbf{a}\otimes\mathbf{b}^{\top} = \mathbf{a}\mathbf{b}^{\top} = \mathbf{b}^{\top}\otimes\mathbf{a},$
where a and b are any two vectors ([25], page 199). As a direct consequence, the third standardized cumulant might be represented as a block column vector:
$K_{3,\mathbf{z}} = \begin{pmatrix} K_1\\ \vdots\\ K_p \end{pmatrix},\quad\text{where}\quad K_i = E\left(Z_i\,\mathbf{z}\mathbf{z}^{\top}\right),\quad i \in \{1, \ldots, p\}.$
Let k_ij be the j-th column of the matrix K_i:
$\mathbf{k}_{ij} = E\left(Z_i Z_j\,\mathbf{z}\right),\quad i, j \in \{1, \ldots, p\}.$
By definition, the star product of the p × p identity matrix I p and the third standardized cumulant K 3 , z is the sum of the vectors k i j multiplied by the elements of I p with the same indices, that is, the following:
$I_p \ast K_{3,\mathbf{z}} = \sum_{i=1}^{p}\sum_{j=1}^{p}\mathbf{k}_{ij}\,\delta_{ij},$
where δ_ij is the element of I_p belonging to its i-th row and to its j-th column:
$\delta_{ij} = \begin{cases} 1 & i = j\\ 0 & i \neq j. \end{cases}$
The definitions of k_ij and δ_ij, together with the linear properties of the expectation, yield
$I_p \ast K_{3,\mathbf{z}} = \sum_{i=1}^{p}\sum_{j=1}^{p}E\left(Z_i Z_j\,\mathbf{z}\right)\delta_{ij} = \sum_{i=1}^{p}E\left(Z_i^{2}\,\mathbf{z}\right) = E\left[\left(\sum_{i=1}^{p}Z_i^{2}\right)\mathbf{z}\right].$
The last expectation coincides with the definition of the canonical skewness vector. The proof is then complete. □
Proof of Theorem 2.
By definition, the canonical skewness vector of x is
$\gamma_{1V} = E\left[\left(\mathbf{z}^{\top}\mathbf{z}\right)\mathbf{z}\right],\quad\text{where}\quad \mathbf{z} = \Sigma^{-1/2}\left(\mathbf{x}-\mu\right).$
The replacement of z with Σ^{−1/2}(x − μ) in the definition of γ1V yields
$\gamma_{1V} = E\left[\left(\mathbf{x}-\mu\right)^{\top}\Sigma^{-1}\left(\mathbf{x}-\mu\right)\Sigma^{-1/2}\left(\mathbf{x}-\mu\right)\right].$
Applying the linear properties of the expectation yields
$\gamma_{1V} = E\left[\left(\mathbf{z}^{\top}\mathbf{z}\right)\mathbf{z}\right] = \Sigma^{-1/2}\left\{E\left[\left(\mathbf{z}^{\top}\mathbf{z}\right)\mathbf{x}\right] - \mu\,E\left(\mathbf{z}^{\top}\mathbf{z}\right)\right\} = \Sigma^{-1/2}\left\{E\left[\left(\mathbf{x}-\mu\right)^{\top}\Sigma^{-1}\left(\mathbf{x}-\mu\right)\mathbf{x}\right] - p\,\mu\right\},$
where the last equality takes into account that z is a random vector having a zero mean vector and a covariance matrix equal to the identity matrix I_p, so that E(zᵀz) = tr(I_p) = p.
We now use two properties of the vectorization and Kronecker operators: (1) a ⊗ bᵀ = abᵀ = bᵀ ⊗ a; (2) vec(ABC) = (Cᵀ ⊗ A)vec(B), where A ∈ ℝ^{p×q}, B ∈ ℝ^{q×r} and C ∈ ℝ^{r×s}. Applying these properties, we obtain
$K_{3,\mathbf{x}} = E\left[\left(\mathbf{x}-\mu\right)\otimes\left(\mathbf{x}-\mu\right)^{\top}\otimes\left(\mathbf{x}-\mu\right)\right] = E\left[\left(\mathbf{x}-\mu\right)\left(\mathbf{x}-\mu\right)^{\top}\otimes\left(\mathbf{x}-\mu\right)\right]$
and $K_{3,\mathbf{x}}^{\top} = E\left[\left(\mathbf{x}-\mu\right)\otimes\left(\mathbf{x}-\mu\right)^{\top}\otimes\left(\mathbf{x}-\mu\right)^{\top}\right]$, from which we obtain
$K_{3,\mathbf{x}}^{\top}\,\mathrm{vec}\left(\Sigma^{-1}\right) = E\left[\left(\mathbf{x}-\mu\right)\otimes\left(\mathbf{x}-\mu\right)^{\top}\otimes\left(\mathbf{x}-\mu\right)^{\top}\right]\mathrm{vec}\left(\Sigma^{-1}\right) = E\left[\left(\mathbf{x}-\mu\right)^{\top}\Sigma^{-1}\left(\mathbf{x}-\mu\right)\left(\mathbf{x}-\mu\right)\right] = E\left[\left(\mathbf{x}-\mu\right)^{\top}\Sigma^{-1}\left(\mathbf{x}-\mu\right)\mathbf{x}\right] - E\left[\left(\mathbf{x}-\mu\right)^{\top}\Sigma^{-1}\left(\mathbf{x}-\mu\right)\right]\mu = E\left[\left(\mathbf{x}-\mu\right)^{\top}\Sigma^{-1}\left(\mathbf{x}-\mu\right)\mathbf{x}\right] - p\,\mu,$
from which the following holds true:
$\Sigma^{-1/2}K_{3,\mathbf{x}}^{\top}\,\mathrm{vec}\left(\Sigma^{-1}\right) = \Sigma^{-1/2}\left\{E\left[\left(\mathbf{x}-\mu\right)^{\top}\Sigma^{-1}\left(\mathbf{x}-\mu\right)\mathbf{x}\right] - p\,\mu\right\} = \gamma_{1V},$
as we aimed to prove. □
Proof of Theorem 3.
Let Σ, Σ^{−1} and Σ^{−1/2} be the covariance matrix of x, the concentration matrix of x and the positive definite symmetric square root of Σ^{−1}. Additionally, let Ω, Ω^{−1}, Ω^{1/2} and Ω^{−1/2} be the covariance matrix of Ax + b, the concentration matrix of Ax + b and the positive definite symmetric square roots of Ω and Ω^{−1}. Let us also denote by A^{−⊤} the transpose of the matrix A^{−1}.
We first prove that U = Σ^{−1/2}A^{−1}Ω^{1/2} is orthogonal:
$UU^{\top} = \left(\Sigma^{-1/2}A^{-1}\Omega^{1/2}\right)\left(\Sigma^{-1/2}A^{-1}\Omega^{1/2}\right)^{\top} = \Sigma^{-1/2}A^{-1}\Omega^{1/2}\,\Omega^{1/2}A^{-\top}\Sigma^{-1/2} = \Sigma^{-1/2}A^{-1}\Omega A^{-\top}\Sigma^{-1/2}.$
The definitions of Ω and Σ^{−1/2} imply the following identities:
$\Omega = A\,\Sigma A^{\top} \quad\text{and}\quad \Sigma^{-1/2}\,\Sigma\,\Sigma^{-1/2} = I_p,$
so that the product of U and its transpose is an identity matrix:
$UU^{\top} = \Sigma^{-1/2}A^{-1}\Omega A^{-\top}\Sigma^{-1/2} = \Sigma^{-1/2}A^{-1}A\,\Sigma A^{\top}A^{-\top}\Sigma^{-1/2} = \Sigma^{-1/2}\,\Sigma\,\Sigma^{-1/2} = I_p.$
Therefore U is an orthogonal matrix; this concludes the first part of the proof.
We assume without loss of generality that x is centered at the origin and that b is a null vector: E(x) = b = 0_p. The following identities hold true when A is an orthogonal matrix:
$A^{-1} = A^{\top},\quad \Omega^{-1/2} = A\,\Sigma^{-1/2}A^{\top}.$
The canonical skewness vector of Ax is then
$\gamma_{1V}\left(A\mathbf{x}\right) = E\left[\left(\mathbf{x}^{\top}\Sigma^{-1}\mathbf{x}\right)\Omega^{-1/2}A\mathbf{x}\right] = E\left[\left(\mathbf{x}^{\top}\Sigma^{-1}\mathbf{x}\right)A\Sigma^{-1/2}A^{\top}A\mathbf{x}\right].$
Applying now the linear properties of the expected value completes the second part of the proof:
$\gamma_{1V}\left(A\mathbf{x}\right) = E\left[\left(\mathbf{x}^{\top}\Sigma^{-1}\mathbf{x}\right)A\Sigma^{-1/2}\mathbf{x}\right] = A\,E\left[\left(\mathbf{x}^{\top}\Sigma^{-1}\mathbf{x}\right)\Sigma^{-1/2}\mathbf{x}\right] = A\,\gamma_{1V}\left(\mathbf{x}\right).$
By definition, the canonical skewness vectors of x and Ax are
$\gamma_{1V}\left(\mathbf{x}\right) = E\left[\left(\mathbf{x}^{\top}\Sigma^{-1}\mathbf{x}\right)\Sigma^{-1/2}\mathbf{x}\right] \quad\text{and}\quad \gamma_{1V}\left(A\mathbf{x}\right) = E\left[\left(\mathbf{x}^{\top}A^{\top}\Omega^{-1}A\mathbf{x}\right)\Omega^{-1/2}A\mathbf{x}\right].$
By the ordinary properties of matrix inverses and the definition of Ω, we have
$\Omega^{-1} = A^{-\top}\Sigma^{-1}A^{-1}.$
The canonical skewness vector of Ax is then
$\gamma_{1V}\left(A\mathbf{x}\right) = E\left[\left(\mathbf{x}^{\top}\Sigma^{-1}\mathbf{x}\right)\Omega^{-1/2}A\mathbf{x}\right].$
Apply now the linear properties of the expected value and recall the definition of U to obtain
$\gamma_{1V}\left(\mathbf{x}\right) = \Sigma^{-1/2}A^{-1}\Omega^{1/2}\,\gamma_{1V}\left(A\mathbf{x}\right) = U\,\gamma_{1V}\left(A\mathbf{x}\right).$
 □
Proof of Theorem 4.
The canonical skewness vector, the partial skewness, and the total skewness are location invariant. Therefore, we can assume without loss of generality that the means of x₁, ..., xₙ and of their sum sₙ coincide with the p-dimensional null vector:
$E\left(\mathbf{x}_i\right) = E\left(\mathbf{s}_n\right) = \mathbf{0}_p.$
Let Σ, Σ^{−1} and Σ^{−1/2} be the covariance matrix of x_i, the concentration matrix of x_i and the positive definite symmetric square root of Σ^{−1}. Then, the covariance matrix of sₙ, the concentration matrix of sₙ and its positive definite symmetric square root are
$n\,\Sigma,\quad \frac{\Sigma^{-1}}{n} \quad\text{and}\quad \frac{\Sigma^{-1/2}}{\sqrt{n}}.$
By definition, the canonical skewness vectors of x_i and sₙ are
$\gamma_{1V}\left(\mathbf{x}_i\right) = E\left[\left(\mathbf{x}_i^{\top}\Sigma^{-1}\mathbf{x}_i\right)\Sigma^{-1/2}\mathbf{x}_i\right] \quad\text{and}\quad \gamma_{1V}\left(\mathbf{s}_n\right) = E\left[\left(\mathbf{s}_n^{\top}\frac{\Sigma^{-1}}{n}\mathbf{s}_n\right)\frac{\Sigma^{-1/2}}{\sqrt{n}}\mathbf{s}_n\right].$
The definitions of s n and the linear properties of the expectation yield the following identities:
γ 1 V s n = E s n 1 n Σ 1 s n 1 n Σ 1 / 2 s n = 1 n 3 Σ 1 / 2 E s n Σ 1 s n s n = 1 n 3 Σ 1 / 2 n i = 1 E s n Σ 1 s n x i = 1 n 3 Σ 1 / 2 n i = 1 E n j = 1 n h = 1 x j Σ 1 x h x i .
By assumption, the random vectors $x_1, \ldots, x_n$ are mutually independent and centered. Therefore, the mean of $\left(x_j^\top \Sigma^{-1} x_h\right) x_i$ is the $p$-dimensional null vector if at least one of the three indices differs from the other two:
\[
E\left[\left(x_j^\top \Sigma^{-1} x_h\right) x_i\right] = 0_p \quad\text{if either } i \neq j,\ i \neq h \text{ or } j \neq h.
\]
The above identity and the linear properties of the expectation yield the following:
\[
\gamma_{1,V}(s_n) = \frac{\Sigma^{-1/2}}{\sqrt{n^3}} \sum_{i=1}^{n} E\left[\left(x_i^\top \Sigma^{-1} x_i\right) x_i\right] = \frac{1}{\sqrt{n^3}} \sum_{i=1}^{n} E\left[\left(x_i^\top \Sigma^{-1} x_i\right) \Sigma^{-1/2} x_i\right].
\]
The random vectors $x_1, \ldots, x_n$ are identically distributed, and their canonical skewness vector is $\gamma_{1,V}(x_i)$. Therefore, we can complete the first part of the proof:
\[
\gamma_{1,V}(s_n) = \frac{n}{\sqrt{n^3}}\, \gamma_{1,V}(x_i) = \frac{\gamma_{1,V}(x_i)}{\sqrt{n}}.
\]
By definition, the partial skewness of a random vector is just the squared norm of its canonical skewness vector, so we can complete the second part of the proof:
\[
\beta_{1,P}(s_n) = \left\|\gamma_{1,V}(s_n)\right\|^2 = \gamma_{1,V}(s_n)^\top \gamma_{1,V}(s_n) = \frac{\gamma_{1,V}(x_i)^\top \gamma_{1,V}(x_i)}{n} = \frac{\beta_{1,P}(x_i)}{n}.
\]
Let $y_1, \ldots, y_n$ be $n$ random vectors independent of each other and independent of $x_1, \ldots, x_n$. Additionally, let $y_i$ and $x_j$ be identically distributed, for $i, j = 1, \ldots, n$. Finally, let the following hold:
\[
t_n = \sum_{i=1}^{n} y_i \quad\text{and}\quad b_{ij} = x_i^\top \Sigma^{-1} y_j.
\]
Then, the total skewness of $s_n$ is as follows:
\[
\beta_{1,T}(s_n) = E\left[\left(s_n^\top \frac{\Sigma^{-1}}{n}\, t_n\right)^3\right] = \frac{1}{n^3}\, E\left[\left(s_n^\top \Sigma^{-1} t_n\right)^3\right].
\]
By recalling the definitions of $s_n$, $t_n$ and $b_{ij}$, we obtain the following:
\[
\beta_{1,T}(s_n) = \frac{1}{n^3}\, E\left[\left(\sum_{i,j} b_{ij}\right)^3\right] = \frac{1}{n^3}\, E\left[\sum_{i,j} b_{ij}^3 + 3 \sum_{(i,j) \neq (h,k)} b_{ij}^2\, b_{hk} + 6 \sum b_{ij}\, b_{hk}\, b_{lm}\right],
\]
where the last sum runs over unordered triples of pairwise distinct index pairs.
By assumption, the random vectors $y_1, \ldots, y_n, x_1, \ldots, x_n$ are mutually independent and centered. In every cross product, at least one of these vectors appears exactly once, so conditioning on the remaining vectors shows that the mean of the product is zero; in particular,
\[
E\left[b_{ij}^2\, b_{hk}\right] = 0 \quad\text{if either } i \neq h \text{ or } j \neq k.
\]
The total skewness of $s_n$ is then as follows:
\[
\beta_{1,T}(s_n) = \frac{1}{n^3}\, E\left[\sum_{i,j} b_{ij}^3\right] = \frac{1}{n^3} \sum_{i,j} E\left[b_{ij}^3\right].
\]
By assumption, the random vectors $y_i$ and $x_j$ are independent and identically distributed for $i, j = 1, \ldots, n$, so that the $n^2$ expectations $E\left[b_{ij}^3\right]$ coincide:
\[
\beta_{1,T}(s_n) = \frac{1}{n^3} \sum_{i,j} E\left[b_{ij}^3\right] = \frac{n^2}{n^3}\, E\left[b_{11}^3\right] = \frac{E\left[b_{11}^3\right]}{n}.
\]
We conclude the proof by recalling the definition of the total skewness of $x_i$:
\[
\beta_{1,T}(s_n) = \frac{\beta_{1,T}(x_i)}{n}.
\]
 □
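The scaling laws of Theorem 4 can be checked exactly on a small discrete distribution, since the distribution of $s_2 = x_1 + x_2$ for two independent copies is again discrete. The sketch below (not part of the original paper) uses arbitrarily chosen atoms and probabilities as an illustrative assumption:

```python
import numpy as np

def gamma1v(pts, prob):
    """Canonical skewness vector of a centered discrete distribution,
    computed exactly by enumerating its atoms."""
    S = np.einsum('k,ki,kj->ij', prob, pts, pts)           # covariance
    w, V = np.linalg.eigh(S)
    S_inv_half = V @ np.diag(w ** -0.5) @ V.T              # S^{-1/2}
    quad = np.einsum('ki,ij,kj->k', pts, np.linalg.inv(S), pts)
    return np.einsum('k,k,ki->i', prob, quad, pts @ S_inv_half)

# Illustrative discrete distribution.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
prob = np.array([0.4, 0.3, 0.2, 0.1])
pts = pts - prob @ pts                                     # center at the origin

# Exact distribution of s_2 = x_1 + x_2 for two independent copies: 16 atoms.
pts2 = (pts[:, None, :] + pts[None, :, :]).reshape(-1, 2)
prob2 = np.outer(prob, prob).ravel()

g1, g2 = gamma1v(pts, prob), gamma1v(pts2, prob2)
assert np.allclose(g2, g1 / np.sqrt(2))    # gamma_1V(s_n) = gamma_1V(x_i)/sqrt(n)
assert np.isclose(g2 @ g2, (g1 @ g1) / 2)  # beta_1P(s_n) = beta_1P(x_i)/n
```

Enumeration avoids Monte Carlo error entirely, so the theorem's identities hold here up to floating-point rounding.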
Proof of Theorem 5.
Let $z = \Sigma^{-1/2}(x - \mu)$ be the standardization of $x$, where $\mu$ is the mean of $x$ and $\Sigma^{-1/2}$ is the symmetric, positive definite square root of $\Sigma^{-1}$:
\[
\Sigma^{-1/2}\, \Sigma^{-1/2} = \Sigma^{-1}, \qquad \left(\Sigma^{-1/2}\right)^\top = \Sigma^{-1/2}, \qquad \Sigma^{-1/2} > 0.
\]
By definition, the third standardized cumulant of $x$ is the third cumulant of $z$, that is, the following:
\[
K_{3,z} = E\left(z \otimes z^\top \otimes z^\top\right).
\]
The total skewness of $x$ might be represented as the squared Euclidean norm of $K_{3,z}$ [26]:
\[
\beta_{1,T} = \left\| K_{3,z} \right\|^2 = \mathrm{tr}\left(K_{3,z}\, K_{3,z}^\top\right).
\]
The third standardized cumulant of $x$ might be represented by means of $\Sigma^{-1/2}$ and $K_{3,x}$:
\[
K_{3,z} = \Sigma^{-1/2}\, K_{3,x} \left(\Sigma^{-1/2} \otimes \Sigma^{-1/2}\right).
\]
The two identities above imply the following one:
\[
\beta_{1,T} = \mathrm{tr}\left[\Sigma^{-1/2}\, K_{3,x} \left(\Sigma^{-1/2} \otimes \Sigma^{-1/2}\right)\left(\Sigma^{-1/2} \otimes \Sigma^{-1/2}\right) K_{3,x}^\top\, \Sigma^{-1/2}\right].
\]
The trace of a product of conformable matrices is invariant under cyclic permutations of its factors, so that the following holds:
\[
\beta_{1,T} = \mathrm{tr}\left[\Sigma^{-1/2}\, \Sigma^{-1/2}\, K_{3,x} \left(\Sigma^{-1/2} \otimes \Sigma^{-1/2}\right)\left(\Sigma^{-1/2} \otimes \Sigma^{-1/2}\right) K_{3,x}^\top\right].
\]
By the mixed-product property of the Kronecker product and the ordinary matrix product, if the matrices $A$, $B$, $C$ and $D$ are of appropriate sizes, then the following holds:
\[
\left(A \otimes B\right)\left(C \otimes D\right) = AC \otimes BD
\]
(see, for example, ref. [25], page 194). By applying this property to the above identity, we obtain the following:
\[
\beta_{1,T} = \mathrm{tr}\left[\Sigma^{-1/2}\, \Sigma^{-1/2}\, K_{3,x} \left(\Sigma^{-1/2}\, \Sigma^{-1/2} \otimes \Sigma^{-1/2}\, \Sigma^{-1/2}\right) K_{3,x}^\top\right].
\]
By definition, the product of $\Sigma^{-1/2}$ with itself coincides with the concentration matrix $\Sigma^{-1}$, so that the following holds:
\[
\beta_{1,T} = \mathrm{tr}\left[\Sigma^{-1}\, K_{3,x} \left(\Sigma^{-1} \otimes \Sigma^{-1}\right) K_{3,x}^\top\right]
\]
and the first part of the proof is complete.
We now prove the second part of the theorem. As a straightforward implication of Theorem 2, we have the following:
\[
\beta_{1,P} = \gamma_{1,V}^\top\, \gamma_{1,V} = \mathrm{vec}^\top\left(\Sigma^{-1}\right) K_{3,x}^\top\, \Sigma^{-1/2}\, \Sigma^{-1/2}\, K_{3,x}\, \mathrm{vec}\left(\Sigma^{-1}\right) = \mathrm{vec}^\top\left(\Sigma^{-1}\right) K_{3,x}^\top\, \Sigma^{-1}\, K_{3,x}\, \mathrm{vec}\left(\Sigma^{-1}\right),
\]
as we aimed to prove. □
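Both representations of Theorem 5 can be compared with the defining expectations on a discrete distribution, where every moment is an exact finite sum. In the sketch below (not part of the original paper), $K_{3,x} = E(x \otimes x^\top \otimes x^\top)$ is stored as a $p \times p^2$ array; the atoms and probabilities are illustrative assumptions:

```python
import numpy as np

# Illustrative centered discrete distribution with exact moments.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
prob = np.array([0.4, 0.3, 0.2, 0.1])
pts = pts - prob @ pts

S = np.einsum('k,ki,kj->ij', prob, pts, pts)               # covariance
S_inv = np.linalg.inv(S)
w, V = np.linalg.eigh(S)
S_inv_half = V @ np.diag(w ** -0.5) @ V.T                  # symmetric PD S^{-1/2}

# Third moment K_{3,x} = E(x kron x' kron x'), arranged as a p x p^2 matrix.
K3 = np.einsum('k,ki,kj->ij', prob, pts,
               np.array([np.kron(v, v) for v in pts]))

# Total skewness: definition E[(x' S^{-1} y)^3], with y an independent
# copy of x, versus the trace representation of Theorem 5.
B = pts @ S_inv @ pts.T                                    # B[k, l] = x_k' S^{-1} x_l
b1T_def = np.sum(np.outer(prob, prob) * B ** 3)
b1T_tr = np.trace(S_inv @ K3 @ np.kron(S_inv, S_inv) @ K3.T)
assert np.isclose(b1T_def, b1T_tr)

# Partial skewness: squared norm of the canonical skewness vector
# versus the vec representation of Theorem 5.
quad = np.einsum('ki,ij,kj->k', pts, S_inv, pts)
g1 = np.einsum('k,k,ki->i', prob, quad, pts @ S_inv_half)
v = S_inv.ravel()                                          # vec(S^{-1}) (symmetric)
assert np.isclose(g1 @ g1, v @ K3.T @ S_inv @ K3 @ v)
```

NumPy's `kron` uses the same ordering as the Kronecker products in the proof, so both identities hold exactly up to rounding.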
Proof of Theorem 6.
Without loss of generality, we can assume that $x = \left(X_1, \ldots, X_p\right)^\top$ is a standard random vector:
\[
E\left(X_i\right) = 0, \qquad E\left(X_i^2\right) = 1, \qquad E\left(X_i X_j\right) = 0, \qquad \text{for } i, j = 1, \ldots, p \text{ and } i \neq j.
\]
Let $y = \left(Y_1, \ldots, Y_p\right)^\top$ be a random vector independent of $x$ and with its same distribution. The first and second moments of $x^\top y$ are as follows:
\[
E\left(x^\top y\right) = E\left(\sum_{i=1}^{p} X_i Y_i\right) = \sum_{i=1}^{p} E\left(X_i Y_i\right) = \sum_{i=1}^{p} E\left(X_i\right) E\left(Y_i\right) = 0
\]
and
\[
E\left[\left(x^\top y\right)^2\right] = E\left[\left(\sum_{i=1}^{p} X_i Y_i\right)^2\right] = \sum_{i=1}^{p} \sum_{j=1}^{p} E\left(X_i Y_i X_j Y_j\right) = \sum_{i=1}^{p} E\left(X_i^2\right) E\left(Y_i^2\right) = p.
\]
The random variable $\left(x^\top y\right)^2 - x^\top y$ has mean $p - 0 = p$, and its variance is nonnegative, thus implying the following inequality:
\[
E\left\{\left[\left(x^\top y\right)^2 - x^\top y\right]^2\right\} \geq p^2.
\]
By expanding the square and using linear properties of the expected value, we obtain the following:
\[
E\left[\left(x^\top y\right)^4\right] - 2\, E\left[\left(x^\top y\right)^3\right] + E\left[\left(x^\top y\right)^2\right] \geq p^2.
\]
The second, third and fourth moments of $x^\top y$ are $p$, $\beta_{1,T}$ and $\beta_{2,T}$, respectively:
\[
\beta_{2,T} - 2\beta_{1,T} + p \geq p^2, \qquad\text{so that}\qquad \beta_{1,T} \leq \frac{p - p^2 + \beta_{2,T}}{2}.
\]
We now prove the second part of the theorem. The expectation of $x^\top x$ is as follows:
\[
E\left(x^\top x\right) = E\left(\sum_{i=1}^{p} X_i^2\right) = \sum_{i=1}^{p} E\left(X_i^2\right) = p.
\]
The random vectors $x$ and $y$ are independent and identically distributed by assumption, so that the expectation of $x^\top x \cdot y^\top y - x^\top y$ is as follows:
\[
E\left(x^\top x \cdot y^\top y - x^\top y\right) = p^2.
\]
The variance of the random variable $x^\top x \cdot y^\top y - x^\top y$ is nonnegative, thus implying the following inequality:
\[
E\left[\left(x^\top x \cdot y^\top y - x^\top y\right)^2\right] \geq p^4.
\]
Expansion of the square and the linear properties of the expected value yield the following:
\[
E\left[\left(x^\top x\right)^2 \left(y^\top y\right)^2\right] - 2\, E\left[\left(x^\top y\right)\left(x^\top x\right)\left(y^\top y\right)\right] + E\left[\left(x^\top y\right)^2\right] \geq p^4.
\]
The partial skewness and the partial kurtosis of $x$ may be represented as follows (Ref. [27], page 82; ref. [26]):
\[
\beta_{1,P} = E\left[\left(x^\top y\right)\left(x^\top x\right)\left(y^\top y\right)\right] \quad\text{and}\quad \beta_{2,P} = E\left[\left(x^\top x\right)^2\right],
\]
Since $E\left[\left(x^\top x\right)^2 \left(y^\top y\right)^2\right] = \beta_{2,P}^2$ by independence and $E\left[\left(x^\top y\right)^2\right] = p$, the above inequality leads to the following one:
\[
\beta_{2,P}^2 - 2\beta_{1,P} + p \geq p^4, \qquad\text{which is equivalent to}\qquad \beta_{1,P} \leq \frac{p - p^4 + \beta_{2,P}^2}{2}.
\]
 □
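The two bounds of Theorem 6 hold for any distribution with finite fourth moments. As an illustrative check (not taken from the paper), they can be evaluated exactly on a discrete distribution by enumerating pairs of independent copies; the atoms and probabilities below are arbitrary assumptions:

```python
import numpy as np

# Illustrative centered 2-D discrete distribution.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
prob = np.array([0.4, 0.3, 0.2, 0.1])
pts = pts - prob @ pts
p = pts.shape[1]

S = np.einsum('k,ki,kj->ij', prob, pts, pts)
S_inv = np.linalg.inv(S)
B = pts @ S_inv @ pts.T                      # B[k, l] = x_k' S^{-1} x_l
P2 = np.outer(prob, prob)                    # joint weights of independent copies
quad = np.diag(B)                            # x' S^{-1} x at each atom

b1T = np.sum(P2 * B ** 3)                    # total skewness E[(x'S^{-1}y)^3]
b2T = np.sum(P2 * B ** 4)                    # total kurtosis E[(x'S^{-1}y)^4]
b1P = np.sum(P2 * B * np.outer(quad, quad))  # E[(x'S^{-1}y)(x'S^{-1}x)(y'S^{-1}y)]
b2P = prob @ quad ** 2                       # partial kurtosis E[(x'S^{-1}x)^2]

assert b1T <= (p - p ** 2 + b2T) / 2         # first bound of Theorem 6
assert b1P <= (p - p ** 4 + b2P ** 2) / 2    # second bound of Theorem 6
```

Working with $\Sigma^{-1}$ on the centered atoms is equivalent to standardizing the distribution first, which is the normalization assumed in the proof.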

References

1. Mòri, T.; Rohatgi, V.; Székely, G. On multivariate skewness and kurtosis. Theory Probab. Its Appl. 1993, 38, 547–551.
2. Balakrishnan, N.; Brito, M.; Quiroz, A. A Vectorial Notion of Skewness and Its Use in Testing for Multivariate Symmetry. Commun. Stat.-Theory Methods 2007, 36, 1757–1767.
3. Kollo, T. Multivariate skewness and kurtosis measures with an application in ICA. J. Multivar. Anal. 2008, 99, 2328–2338.
4. Serfling, R.J. Multivariate symmetry and asymmetry. In Encyclopedia of Statistical Sciences, 2nd ed.; Kotz, S., Read, C.B., Balakrishnan, N., Vidakovic, B., Eds.; Wiley: New York, NY, USA, 2006.
5. Loperfido, N. Vector-Valued Skewness for Model-Based Clustering. Stat. Probab. Lett. 2015, 99, 230–237.
6. Loperfido, N. Singular Value Decomposition of the Third Multivariate Moment. Linear Algebra Its Appl. 2015, 473, 202–216.
7. Nordhausen, K.; Oja, H.; Ollila, E. Multivariate models and the first four moments. In Nonparametric Statistics and Mixture Models—A Festschrift for Thomas P. Hettmansperger; Hunter, D., Richards, D., Rosenberger, J.L., Eds.; World Scientific Publishing Co Pte Ltd.: Singapore, 2011.
8. Ilmonen, P.; Oja, H.; Serfling, R. On Invariant Coordinate System (ICS) Functionals. Int. Stat. Rev. 2012, 80, 93–110.
9. Davis, A. On the Effects of Moderate Multivariate Nonnormality on Wilks's Likelihood Ratio Criterion. Biometrika 1980, 67, 419–427.
10. Henze, N. Limit laws for multivariate skewness in the sense of Mòri, Rohatgi and Székely. Stat. Probab. Lett. 1997, 33, 299–307.
11. Mardia, K. Measures of multivariate skewness and kurtosis with applications. Biometrika 1970, 57, 519–530.
12. Malkovich, J.; Afifi, A. On Tests for Multivariate Normality. J. Am. Stat. Assoc. 1973, 68, 176–179.
13. MacRae, E. Matrix derivatives with an application to an adaptive linear decision problem. Ann. Stat. 1974, 2, 337–346.
14. Kankainen, A.; Taskinen, S.; Oja, H. Tests of multinormality based on location vectors and scatter matrices. Stat. Methods Appl. 2007, 16, 357–379.
15. Koziol, J. A note on measures of multivariate kurtosis. Biom. J. 1989, 31, 619–624.
16. Ogasawara, H. Extensions of Pearson's inequality between skewness and kurtosis to multivariate cases. Stat. Probab. Lett. 2017, 30, 12–16.
17. De Luca, G.; Loperfido, N. Modelling Multivariate Skewness in Financial Returns: A SGARCH Approach. Eur. J. Financ. 2015, 21, 1113–1131.
18. De Luca, G.; Loperfido, N. A Skew-in-Mean GARCH Model for Financial Returns. In Skew-Elliptical Distributions and Their Applications: A Journey Beyond Normality; Genton, M.G., Ed.; Chapman & Hall, CRC: Boca Raton, FL, USA, 2004; pp. 205–222.
19. De Luca, G.; Genton, M.; Loperfido, N. A Multivariate Skew-Garch Model. In Advances in Econometrics: Econometric Analysis of Economic and Financial Time Series, Part A (Special Volume in Honor of Robert Engle and Clive Granger, the 2003 Winners of the Nobel Prize in Economics); Terrell, D., Ed.; Elsevier: Oxford, UK, 2006; Volume 20, pp. 33–56.
20. Branco, M.; Dey, D. A general class of skew-elliptical distributions. J. Multivar. Anal. 2001, 79, 99–113.
21. Loperfido, N. A Note on Skew-Elliptical Distributions and Linear Functions of Order Statistics. Stat. Probab. Lett. 2008, 78, 3184–3186.
22. Kim, H.M. Moments of variogram estimator for a generalized skew t distribution. J. Korean Stat. Soc. 2005, 34, 109–123.
23. Kim, H.M. A note on scale mixtures of skew-normal distributions. Stat. Probab. Lett. 2008, 78, 1694–1701.
24. Kim, H.M. Corrigendum to "A note on scale mixtures of skew normal distribution" [Statist. Probab. Lett. 78 (2008) 1694–1701]. Stat. Probab. Lett. 2013, 83, 1937.
25. Rao, C.; Rao, M. Matrix Algebra and Its Applications to Statistics and Econometrics; World Scientific Co. Pte. Ltd.: Singapore, 1998.
26. Kollo, T.; Srivastava, M. Estimation and testing of parameters in multivariate Laplace distribution. Commun. Stat.-Theory Methods 2005, 33, 2363–2687.
27. Kotz, S.; Balakrishnan, N.; Johnson, N. Continuous Multivariate Distributions, Volume 1: Models and Applications, 2nd ed.; Wiley: New York, NY, USA, 2000.
Figure 1. Scatterplot of 10,000 outcomes from $0.2 \cdot N_2\left(0_2, I_2\right) + 0.8 \cdot N_2\left(2.5 \cdot 1_2, I_2\right)$.

Figure 2. Scatterplot of 10,000 outcomes from $2\, \phi_2\left(x_1, x_2; 0_2, I_2\right) \Phi\left(2.5 x_1 + 2.5 x_2\right)$.
Table 1. For samples generated from normal mixture distributions, the columns contain the percentages of samples rejected by the test based on Mardia's skewness (header: Mardia) and by the test based on partial skewness (header: Mori) at the 0.05 level, for $\pi_1 = 0.1, 0.2, 0.3, 0.4$; $p = 5, 10, 15, 20$; and $c = 0.5, 1, 2$.

 p   π1      c = 0.5          c = 1            c = 2
           Mardia   Mori    Mardia   Mori    Mardia   Mori
 5   0.1    82.29  87.81     10.82   9.82      5.62   4.81
 5   0.2    86.94  97.32      7.47   7.30     10.19  10.31
 5   0.3    78.89  96.74      5.41   4.68     18.55  25.88
 5   0.4    66.24  92.45      3.45   3.08     31.33  48.33
10   0.1    93.90  97.50     10.83   8.64      6.33   4.08
10   0.2    94.88  99.83      6.11   5.23     15.23  19.06
10   0.3    92.37  99.92      3.79   3.11     32.18  52.37
10   0.4    85.82  99.68      2.63   2.10     53.60  82.97
15   0.1    95.94  98.56      7.12   4.86      6.19   3.97
15   0.2    98.29  99.92      3.81   2.52     20.69  29.58
15   0.3    97.53  99.98      2.45   1.45     47.77  75.36
15   0.4    95.74  99.98      1.66   0.89     72.81  95.62
20   0.1    96.13  98.31      4.10   2.08      5.67   3.76
20   0.2    99.08  99.96      1.88   0.91     26.45  38.11
20   0.3    99.28  99.99      1.17   0.47     61.38  84.68
20   0.4    98.61  99.99      0.78   0.33     86.13  98.41
Table 2. For samples generated from skew-normal distributions, the columns contain the percentages of samples rejected by the test based on Mardia's skewness (header: Mardia) and by the test based on partial skewness (header: Mori) at the 0.05 level, for $\alpha = 1, 2, 3, 4$; $p = 5, 10, 15, 20$; and $n = 100, 200, 500$.

 p   α       n = 100          n = 200          n = 500
           Mardia   Mori    Mardia   Mori    Mardia   Mori
 5   1      12.58  11.59     24.05  24.08     67.51  62.43
 5   2      26.67  24.97     62.61  58.06     99.55  97.52
 5   3      33.46  32.24     75.08  68.85     99.96  99.17
 5   4      35.99  34.40     78.95  72.82     99.96  99.48
10   1       6.92   5.81     15.57  15.73     50.83  49.70
10   2      10.48   9.15     28.48  28.06     84.04  77.13
10   3      11.33  10.10     31.83  31.83     88.89  82.61
10   4      11.82  10.20     32.90  32.66     91.34  85.02
15   1       4.17   2.45     10.06   8.56     33.29  34.03
15   2       4.88   3.04     13.55  13.05     51.70  51.34
15   3       4.94   3.19     15.52  14.23     57.02  55.55
15   4       5.38   3.22     15.81  15.01     58.27  56.17
20   1       2.19   0.75      6.02   5.32     21.27  23.43
20   2       2.52   1.00      7.69   6.78     30.64  33.01
20   3       2.36   1.02      7.82   7.12     32.63  34.13
20   4       2.25   0.97      8.39   7.02     33.89  35.27
Loperfido, N. On a Vector-Valued Measure of Multivariate Skewness. Symmetry 2021, 13, 1817. https://0-doi-org.brum.beds.ac.uk/10.3390/sym13101817