Article

An Introduction to Space–Time Exterior Calculus

Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08018 Barcelona, Spain
* Author to whom correspondence should be addressed.
Submission received: 21 May 2019 / Revised: 17 June 2019 / Accepted: 18 June 2019 / Published: 21 June 2019
(This article belongs to the Special Issue Advanced Mathematical Methods: Theory and Applications)

Abstract: The basic concepts of exterior calculus for space–time multivectors are presented: interior and exterior products, interior and exterior derivatives, oriented integrals over hypersurfaces, circulation and flux of multivector fields. Two Stokes theorems relating the exterior and interior derivatives with circulation and flux, respectively, are derived. As an application, it is shown how the exterior-calculus space–time formulation of the electromagnetic Maxwell equations and Lorentz force recovers the standard vector-calculus formulations, in both differential and integral forms.

1. Introduction

Vector calculus has, since its introduction by J. W. Gibbs [1] and Heaviside, been the tool of choice to represent many physical phenomena. In mechanics, hydrodynamics and electromagnetism, quantities such as forces, velocities and currents are modeled as vector fields in space, while flux, circulation, divergence or curl describe operations on the vector fields themselves.
With relativity theory, it was observed that space and time are not independent but just coordinates in space–time [2] (pp. 111–120). Tensors like the Faraday tensor in electromagnetism were quickly adopted as a natural representation of fields in space–time [3] (pp. 135–144). In parallel, mathematicians such as Cartan generalized the fundamental theorems of vector calculus, i.e., Gauss, Green, and Stokes, by means of differential forms [4]. Later on, differential forms were used in Hamiltonian mechanics, e.g., to calculate trajectories as vector field integrals [5] (pp. 194–198).
A third extension of vector calculus is given by geometric and Clifford algebras [6], where vectors are replaced by multivectors and operations such as the cross and the dot products are subsumed in the geometric product. However, the absence of an explicit formula for the geometric product hinders its widespread use. An alternative would have been the exterior algebra developed by Grassmann, which has nevertheless received little attention in the literature [7]. An early work in this direction was Sommerfeld's presentation of electromagnetism in terms of six-vectors [8].
We present a generalization of vector calculus to exterior algebra and calculus. The basic notions of space–time exterior algebra, introduced in Section 2, are extended to exterior calculus in Section 3 and applied to rederive the equations of electromagnetism in Section 4. In contrast to geometric algebra, our interior and exterior products admit explicit formulations, thereby merging the simplicity and intuitiveness of standard vector calculus with the power of tensors and differential forms.

2. Exterior Algebra

Vector calculus is constructed around the vector space $\mathbb{R}^3$, where every point is represented by three spatial coordinates. In relativity theory the underlying vector space is $\mathbb{R}^{1+3}$ and time is treated as a coordinate on the same footing as the three spatial dimensions. We build our theory in space–time with $k$ time dimensions and $n$ space dimensions. The number of space–time dimensions is thus $k+n$ and we may refer to a $(k,n)$- or $(k+n)$-space–time, $\mathbb{R}^{k+n}$. We adopt the convention that the first $k$ indices, i.e., $i = 0,\ldots,k-1$, correspond to time components and the indices $i = k,\ldots,k+n-1$ represent space components; both $k$ and $n$ are non-negative integers. A point or position in this space–time is denoted by $\mathbf{x}$, with components $\{x_i\}_{i=0}^{k+n-1}$ in the canonical basis $\{\mathbf{e}_i\}_{i=0}^{k+n-1}$, that is
$$\mathbf{x} = \sum_{i=0}^{k+n-1} x_i \mathbf{e}_i.$$
Given two arbitrary canonical basis vectors $\mathbf{e}_i$ and $\mathbf{e}_j$, their dot product in space–time is
$$\mathbf{e}_i \cdot \mathbf{e}_j = \begin{cases} -1, & i = j,\; 0 \le i \le k-1, \\ +1, & i = j,\; k \le i \le k+n-1, \\ 0, & i \ne j. \end{cases}$$
For convenience, we define the symbol $\Delta_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j$ as the metric diagonal tensor in Minkowski space–time [2] (pp. 118–120), such that time unit vectors $\mathbf{e}_i$ have negative norm $\Delta_{ii} = -1$, whereas space unit vectors $\mathbf{e}_i$ have positive norm $\Delta_{ii} = +1$. The dot product of two vectors $\mathbf{x}$ and $\mathbf{y}$ is the extension by linearity of the product in Equation (2), namely
$$\mathbf{x}\cdot\mathbf{y} = \sum_{i=0}^{k+n-1} x_i y_i \Delta_{ii} = -\sum_{i=0}^{k-1} x_i y_i + \sum_{i=k}^{k+n-1} x_i y_i.$$
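The signed dot product of Equation (3) is easy to mechanize. The following Python sketch (the function name is ours, not the paper's) computes it for any $(k,n)$ signature:

```python
def minkowski_dot(x, y, k):
    """Dot product of Equation (3): each of the first k (time) components
    contributes with a minus sign, the remaining n (space) components
    with a plus sign."""
    return sum((-1 if i < k else 1) * xi * yi
               for i, (xi, yi) in enumerate(zip(x, y)))
```

For instance, in a $(1,3)$ space–time the dot product of a time unit vector with itself comes out as $-1$.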

2.1. Grade, Multivectors, and Exterior Product

In addition to the $(k+n)$-dimensional vector space $\mathbb{R}^{k+n}$ with canonical basis vectors $\mathbf{e}_i$, there exist other natural vector spaces indexed by ordered lists $I = (i_1,\ldots,i_m)$ of $m$ non-identical space and time indices for every $m = 0,\ldots,k+n$. As there are $\binom{k+n}{m}$ such lists, the dimension of this vector space is $\binom{k+n}{m}$. We shall refer to $m$ as grade and to these vectors as multivectors, or grade-$m$ vectors if we wish to be more specific. A general multivector can be written as
$$\mathbf{v} = \sum_I v_I \mathbf{e}_I,$$
where the summation extends to all possible ordered lists with $m$ indices. If $m = 0$, the list is empty and the corresponding vector space is $\mathbb{R}$. The direct sum of these vector spaces for all $m$ is a larger vector space of dimension $\sum_{m=0}^{k+n} \binom{k+n}{m} = 2^{k+n}$, the exterior algebra. In tensor algebra, multivectors correspond to antisymmetric tensors of rank $m$. In this paper, we study vector fields $\mathbf{v}(\mathbf{x})$, namely multivector-valued functions $\mathbf{v}$ varying over the space–time position $\mathbf{x}$.
The basis vectors for any grade $m$ may be constructed from the canonical basis vectors $\mathbf{e}_i$ by means of the exterior product (also known as wedge product), an operation denoted by $\wedge$ [9] (p. 2). We identify the vector $\mathbf{e}_I$ for the ordered list $I = (i_1, i_2, \ldots, i_m)$ with the exterior product of $\mathbf{e}_{i_1}, \mathbf{e}_{i_2}, \ldots, \mathbf{e}_{i_m}$:
$$\mathbf{e}_I = \mathbf{e}_{i_1} \wedge \mathbf{e}_{i_2} \wedge \cdots \wedge \mathbf{e}_{i_m}.$$
In general, we may compute the exterior product as follows. Let two basis vectors $\mathbf{e}_I$ and $\mathbf{e}_J$ have grades $m = |I|$ and $m' = |J|$, where $|I|$ and $|J|$ are the lengths of the respective index lists. Let $(I,J) = (i_1,\ldots,i_m,j_1,\ldots,j_{m'})$ denote the concatenation of $I$ and $J$, let $\sigma(I,J)$ denote the signature of the permutation sorting the elements of this concatenated list of $m+m'$ indices, and let $\varepsilon(I,J)$ denote the resulting sorted list, which we also denote by $I+J$. Then, the exterior product of $\mathbf{e}_I$ and $\mathbf{e}_J$ is defined as
$$\mathbf{e}_I \wedge \mathbf{e}_J = \sigma(I,J)\,\mathbf{e}_{\varepsilon(I,J)}.$$
The exterior product of vectors $\mathbf{v}$ and $\mathbf{w}$ is the bilinear extension of the product in Equation (6),
$$\mathbf{v}\wedge\mathbf{w} = \sum_{I,J} v_I w_J\,\mathbf{e}_I \wedge \mathbf{e}_J.$$
Since permutations with repeated indices have zero signature, the exterior product is zero if $m+m' > k+n$ or, more generally, if the two index lists have at least one index in common. Therefore, the exterior product is either zero or a vector of grade $m+m'$. Further, the exterior product is a skew-commutative operation, as we can also write Equation (6) as $\mathbf{e}_I \wedge \mathbf{e}_J = (-1)^{|I||J|}\,\mathbf{e}_J \wedge \mathbf{e}_I$.
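Equation (6) reduces the exterior product of two basis multivectors to sorting a concatenated index list while tracking the parity of the sort. A small Python sketch (names ours) makes this concrete:

```python
def signature(seq):
    """Signature of the permutation sorting seq: 0 if seq has repeated
    entries, otherwise +1/-1 according to the parity of the swaps."""
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    sign = 1
    for i in range(len(seq)):                 # bubble sort, counting swaps
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def wedge(I, J):
    """e_I ^ e_J = sigma(I, J) e_{eps(I, J)}, Equation (6).
    I and J are tuples of indices; returns (sign, sorted index tuple)."""
    s = signature(I + J)
    return (s, tuple(sorted(I + J))) if s else (0, ())
```

For example, `wedge((1,), (0,))` returns `(-1, (0, 1))`, and any repeated index yields zero, in agreement with the skew-commutativity rule $(-1)^{|I||J|}$.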
At this point, we define the dot product $\cdot$ for arbitrary grade-$m$ basis vectors $\mathbf{e}_I$ and $\mathbf{e}_J$ as
$$\mathbf{e}_I \cdot \mathbf{e}_J = \Delta_{I,J} = \Delta_{i_1 j_1}\Delta_{i_2 j_2}\cdots\Delta_{i_m j_m},$$
where $I$ and $J$ are the ordered lists $I = (i_1, i_2, \ldots, i_m)$ and $J = (j_1, j_2, \ldots, j_m)$. As before, we extend this operation to arbitrary grade-$m$ vectors by linearity.
Finally, we define the complement of a multivector. For a unit vector $\mathbf{e}_I$ with grade $m$, its Grassmann or Hodge complement [10] (pp. 361–364), denoted by $\mathbf{e}_I^{\mathcal{H}}$, is the unit $(k+n-m)$-vector
$$\mathbf{e}_I^{\mathcal{H}} = \Delta_{I,I}\,\sigma(I,I^c)\,\mathbf{e}_{I^c},$$
where $I^c$ is the complement of the list $I$, namely the ordered sequence of indices not included in $I$. As before, $\sigma(I,I^c)$ is the signature of the permutation sorting the elements of the concatenated list $(I,I^c)$ containing all space–time indices. In other words, $\mathbf{e}_{I^c}$ is the basis vector of grade $k+n-m$ whose indices are in the complement of $I$. In addition, we define the inverse complement transformation as
$$\mathbf{e}_I^{\mathcal{H}^{-1}} = \Delta_{I^c,I^c}\,\sigma(I^c,I)\,\mathbf{e}_{I^c}.$$
We extend the complement and its inverse to general vectors in the space–time algebra by linearity.
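Equations (9) and (10) are likewise mechanical. The sketch below (names ours) computes the Hodge complement and its inverse of a basis multivector in a $(k,n)$ space–time; by construction, applying Equation (10) after Equation (9) returns the original basis vector with sign $+1$:

```python
def signature(seq):
    """Signature of the permutation sorting seq; 0 on repeats."""
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    sign = 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def metric(I, k):
    """Delta_{I,I}: a factor -1 for every time index (index < k) in I."""
    d = 1
    for i in I:
        d *= -1 if i < k else 1
    return d

def hodge(I, k, n):
    """e_I^H = Delta_{I,I} sigma(I, I^c) e_{I^c}, Equation (9)."""
    Ic = tuple(i for i in range(k + n) if i not in I)
    return metric(I, k) * signature(tuple(I) + Ic), Ic

def hodge_inv(I, k, n):
    """e_I^{H^-1} = Delta_{I^c,I^c} sigma(I^c, I) e_{I^c}, Equation (10)."""
    Ic = tuple(i for i in range(k + n) if i not in I)
    return metric(Ic, k) * signature(Ic + tuple(I)), Ic
```

For example, in $\mathbb{R}^3$ one gets $\mathbf{e}_0^{\mathcal{H}} = \mathbf{e}_{12}$, while in $(1,3)$ space–time the metric factor flips the sign: $\mathbf{e}_0^{\mathcal{H}} = -\mathbf{e}_{123}$.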

2.2. Interior Products

While the exterior product of two multivectors is an operation that outputs a multivector whose grade is the addition of the input grades, the dot product takes two multivectors of identical grade and subtracts their grades, yielding a zero-grade multivector, i.e., a scalar. We say that the exterior product raises the grade while the dot product lowers the grade. In this section, we define the left and right interior products of two multivectors as operations that lower the grade and output a multivector whose grade is the difference of the input multivector grades.
As always, we start by defining the operation for the canonical basis vectors. Let e I and e J be two basis vectors of respective grades | I | and | J | . The left interior product, denoted by ⨼, is defined as
$$\mathbf{e}_I \,⨼\, \mathbf{e}_J = \Delta_{I,I}\,\sigma\!\left(\varepsilon(I,J^c)^c, I\right)\mathbf{e}_{\varepsilon(I,J^c)^c}.$$
If $I$ is not a subset of $J$, that is, when there are elements in $I$ not present in $J$, e.g., for $|I| > |J|$, the signature of the permutation sorting the concatenated list $\left(\varepsilon(I,J^c)^c, I\right)$ is zero, as there are repeated indices in the list to be sorted, and the left interior product is zero. Otherwise, if $I$ is a subset of $J$, the permutation rearranges the indices in $J$ in such a way that the last $|I|$ positions coincide with $I$, and $\varepsilon(I,J^c)^c$ represents the first $|J|-|I|$ elements in the rearranged sequence, that is, $\varepsilon(I,J^c)^c = J\setminus I$.
The right interior product, denoted by ⨽, of two basis vectors e I and e J is defined as
$$\mathbf{e}_I \,⨽\, \mathbf{e}_J = \Delta_{J,J}\,\sigma\!\left(J, \varepsilon(I^c,J)^c\right)\mathbf{e}_{\varepsilon(I^c,J)^c}.$$
As with the left interior product, if $J$ is a subset of $I$, then $\varepsilon(I^c,J)^c = I\setminus J$ and the permutation rearranges the indices in $I$ so that the first $|J|$ positions coincide with $J$; otherwise, the right interior product is zero.
In general, we have that $\mathbf{e}_I \,⨼\, \mathbf{e}_J = \mathbf{e}_J \,⨽\, \mathbf{e}_I\,(-1)^{|I|(|J|-|I|)}$, as verified in Appendix A.1. We note that these interior products are not commutative, unless either $|J|-|I|$ or $|I|$ is an even number, e.g., when $|I| = |J|$, in which case both interior products coincide with the dot product of the two vectors. The interior products may therefore be seen as generalizations of the dot product.
As with the dot and the exterior products, the value of the interior products does not depend on the choice of basis and we may thus compute the left interior product of two vectors v and w as
$$\mathbf{v}\,⨼\,\mathbf{w} = \sum_{I,J} v_I w_J\,\mathbf{e}_I \,⨼\, \mathbf{e}_J,$$
and a similar expression holds for the right interior product $\mathbf{v}\,⨽\,\mathbf{w}$. Both are grade-lowering operations, as the left (resp. right) interior product is either zero or a multivector of grade $m'-m$ (resp. $m-m'$).
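Equations (11) and (12) act on basis multivectors by deleting a sublist of indices while tracking a sorting signature and a metric factor. The following sketch (names ours) implements both products and can be used to spot-check the commutation factor $(-1)^{|I|(|J|-|I|)}$ numerically:

```python
def signature(seq):
    """Signature of the permutation sorting seq; 0 on repeats."""
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    sign = 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def metric(I, k):
    """Delta_{I,I}: a factor -1 for every time index (index < k) in I."""
    d = 1
    for i in I:
        d *= -1 if i < k else 1
    return d

def left_interior(I, J, k):
    """e_I left-contracted with e_J, Equation (11): zero unless I is a
    subset of J; otherwise a signed basis vector on J \\ I."""
    if not set(I) <= set(J):
        return 0, ()
    rest = tuple(j for j in J if j not in I)          # J \ I
    return metric(I, k) * signature(rest + tuple(I)), rest

def right_interior(I, J, k):
    """e_I right-contracted with e_J, Equation (12): zero unless J is a
    subset of I; otherwise a signed basis vector on I \\ J."""
    if not set(J) <= set(I):
        return 0, ()
    rest = tuple(i for i in I if i not in J)          # I \ J
    return metric(J, k) * signature(tuple(J) + rest), rest
```

For example, in Minkowski $(1,3)$ space–time `left_interior((0,), (0, 1), 1)` gives $+\mathbf{e}_1$: the minus sign from the metric and the minus sign from the sorting cancel.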
The interior products are not independent operations from the exterior product, as they can be expressed in terms of the latter, the Hodge complement and its inverse (proved in Appendix A.2):
$$\mathbf{e}_I \,⨼\, \mathbf{e}_J = \left(\mathbf{e}_I \wedge \mathbf{e}_J^{\mathcal{H}}\right)^{\mathcal{H}^{-1}},$$
$$\mathbf{e}_I \,⨽\, \mathbf{e}_J = \left(\mathbf{e}_I^{\mathcal{H}^{-1}} \wedge \mathbf{e}_J\right)^{\mathcal{H}}.$$
If $\mathbf{u}$ and $\mathbf{v}$ are 1-vectors and $\mathbf{w}$ is an $r$-vector, then we have the following expression
$$\mathbf{u}\,⨼\,(\mathbf{v}\wedge\mathbf{w}) = (-1)^r(\mathbf{u}\cdot\mathbf{v})\,\mathbf{w} + \mathbf{v}\wedge(\mathbf{u}\,⨼\,\mathbf{w}),$$
as proved in Appendix A.3. This expression can be seen as a generalization of the vectorial expression
$$\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = (\mathbf{a}\cdot\mathbf{c})\,\mathbf{b} - (\mathbf{a}\cdot\mathbf{b})\,\mathbf{c}$$
in the vector space $\mathbb{R}^3$, i.e., a $k=0$, $n=3$ space–time. This fact builds on the realization that the cross product of two vectors $\mathbf{v}$ and $\mathbf{w}$ can be expressed in the following alternative ways
$$\mathbf{v}\times\mathbf{w} = (\mathbf{v}\wedge\mathbf{w})^{\mathcal{H}^{-1}} = \mathbf{v}\,⨼\,\mathbf{w}^{\mathcal{H}^{-1}} = \mathbf{v}\,⨼\,\mathbf{w}^{\mathcal{H}}.$$
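In a $(0,3)$ space–time, the first equality of Equation (18) can be checked component by component. The sketch below (names ours) computes $\mathbf{v}\wedge\mathbf{w}$ as a bivector and then applies the inverse Hodge complement to land back on a vector, recovering the familiar cross product:

```python
def signature(seq):
    """Signature of the permutation sorting seq; 0 on repeats."""
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    sign = 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def cross_via_wedge(v, w):
    """v x w = (v ^ w)^{H^-1} in R^3 (k = 0, n = 3), Equation (18).
    v and w are 3-tuples of components along e_0, e_1, e_2."""
    # bivector coefficients of v ^ w on the blades e_{ij}, i < j
    biv = {}
    for i in range(3):
        for j in range(3):
            if i != j:
                key = tuple(sorted((i, j)))
                biv[key] = biv.get(key, 0) + signature((i, j)) * v[i] * w[j]
    # inverse Hodge: e_{ij}^{H^-1} = sigma((c, i, j)) e_c, c the missing index
    out = [0, 0, 0]
    for (i, j), coeff in biv.items():
        c = 3 - i - j
        out[c] = signature((c, i, j)) * coeff
    return tuple(out)
```

For instance, `cross_via_wedge((1, 0, 0), (0, 1, 0))` returns `(0, 0, 1)`, as expected of $\mathbf{e}_0\times\mathbf{e}_1 = \mathbf{e}_2$.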
Whenever it holds that $I \subseteq J$, the interior and exterior products are related by the following:
$$(\mathbf{e}_I \,⨼\, \mathbf{e}_J)\wedge\mathbf{e}_I = \Delta_{I,I}\,\mathbf{e}_J,$$
$$\mathbf{e}_I\wedge(\mathbf{e}_J \,⨽\, \mathbf{e}_I) = \Delta_{I,I}\,\mathbf{e}_J.$$
Having introduced the basic notions of space–time exterior algebra, the next section focuses on operations with elements in the exterior algebra, namely integrals and derivatives of vector fields.

3. Integrals and Derivatives of Vector Fields: Circulation and Flux

3.1. Oriented Integrals

Integrals are, together with derivatives, the fundamental mathematical objects of calculus. For example, operations on vector fields in the exterior algebra, such as the flux and the circulation, are expressed in terms of integrals over high-dimensional geometric objects. The integral of a grade-$m$ vector field $\mathbf{v}$ over a hypersurface $\mathcal{V}^m$ of the same dimension, denoted as
$$\int_{\mathcal{V}^m} \mathrm{d}^m\mathbf{x}\cdot\mathbf{v},$$
is the limit of the Riemann sums for the dot product $\mathrm{d}^m\mathbf{x}\cdot\mathbf{v}$ over points in the hypersurface, where $\mathrm{d}^m\mathbf{x}$ is an $m$-dimensional infinitesimal vector element. For any $\ell = 0,\ldots,k+n$, the infinitesimal vector element $\mathrm{d}^\ell\mathbf{x}$ is given by the sum of all possible differentials for $\ell$-dimensional hypersurfaces in a $(k,n)$ space–time, and is represented in the canonical basis as
$$\mathrm{d}^\ell\mathbf{x} = \sum_{I=(i_1,\ldots,i_\ell)} \mathrm{d}x_I\,\mathbf{e}_I,$$
where for a given list $I = (i_1,\ldots,i_\ell)$ each differential is given by $\mathrm{d}x_I = \mathrm{d}x_{i_1}\cdots\mathrm{d}x_{i_\ell}$.
As in traditional calculus, the integral in Equation (21) exhibits coordinate invariance, while the integrand $\mathrm{d}^m\mathbf{x}\cdot\mathbf{v}$ is regarded as an oriented object. Orientation is well defined for integrals along a curve from one point to another, or integrals over a surface oriented in the direction of the normal to the surface. Switching the end points of the curve, or taking the opposite direction of the normal, would induce a change of sign in the line and surface integrals. In our generalization of vector calculus, a positive orientation is implicit in the ordering of the canonical basis. The skew-symmetry property of the exterior product in Equation (6) may introduce sign changes to compensate for a change of orientation after changes of coordinates such as permutations of the space–time components.
For a given hypersurface $\mathcal{V}^m$, a convenient transformation for solving the integral in Equation (21) is one such that, at a given point $\mathbf{x}$ in the hypersurface, the infinitesimal vector element $\mathrm{d}^m\mathbf{x}$ has one component that is tangent to the hypersurface at that point. Let $\mathbf{e}_{\parallel}$ be a unit grade-$m$ vector parallel to $\mathcal{V}^m$ at point $\mathbf{x}$, and let $\mathbf{e}'_0,\ldots,\mathbf{e}'_{k+n-1}$ form an orthonormal basis of $\mathbb{R}^{k+n}$ such that $\mathbf{e}_{\parallel} = \mathbf{e}'_{k+n-m}\wedge\cdots\wedge\mathbf{e}'_{k+n-1}$ for the given point $\mathbf{x}$ in $\mathcal{V}^m$. This change of coordinates from the canonical basis to the new basis is described by a unitary matrix $U$, dependent on $\mathbf{x}$, that satisfies
$$\mathbf{e}'_0\wedge\cdots\wedge\mathbf{e}'_{k+n-1} = \det(U)\,\mathbf{e}_0\wedge\cdots\wedge\mathbf{e}_{k+n-1}.$$
Being a unitary matrix, the determinant of $U$ is $\pm 1$. Assuming an orientation-preserving change of coordinates, that is, $\det(U) = 1$, the infinitesimal vector element in Equation (22) for $\ell = m$ can be expressed as
$$\mathrm{d}^m\mathbf{x} = \mathrm{d}x_{\parallel}\,\mathbf{e}_{\parallel} + \sum_{I=(i_1,\ldots,i_m):\, I\cap\perp\neq\emptyset} \mathrm{d}x'_I\,\mathbf{e}'_I,$$
where $\perp = \{0,\ldots,k+n-m-1\}$ is the set of indices for the unit vectors in the new basis orthogonal to $\mathcal{V}^m$. Since all elements in the summation in Equation (24) have at least one differential element lying outside the integration hypersurface, their integrals vanish and therefore
$$\int_{\mathcal{V}^m} \mathrm{d}^m\mathbf{x} = \int_{\mathcal{V}^m} \mathrm{d}x_{\parallel}\,\mathbf{e}_{\parallel}.$$
In analogy to $\mathbf{e}_{\parallel}$, a multivector of grade $m$, we define a unit grade-$(k+n-m)$ vector $\mathbf{e}_{\perp}$ normal to $\mathcal{V}^m$ at point $\mathbf{x}$ such that $\mathbf{e}_{\perp}\wedge\mathbf{e}_{\parallel} = \mathbf{e}'_0\wedge\cdots\wedge\mathbf{e}'_{k+n-1}$. From Equation (10), we see that one such normal multivector with the correct orientation is
$$\mathbf{e}_{\perp} = \frac{\mathbf{e}_{\parallel}^{\mathcal{H}^{-1}}}{\mathbf{e}_{\parallel}^{\mathcal{H}^{-1}}\cdot\mathbf{e}_{\parallel}^{\mathcal{H}^{-1}}}.$$
For the common spaces considered in vector calculus, $\mathbb{R}^2$ and $\mathbb{R}^3$, and according to Equation (23), orientation-preserving changes of coordinates must respectively satisfy $\mathbf{e}_{\perp}\wedge\mathbf{e}_{\parallel} = \mathbf{e}_0\wedge\mathbf{e}_1$ and $\mathbf{e}_{\perp}\wedge\mathbf{e}_{\parallel} = \mathbf{e}_0\wedge\mathbf{e}_1\wedge\mathbf{e}_2$, where $\mathbf{e}_{\perp}$ is the basis element normal to $\mathcal{V}^m$. These two equalities turn out to describe the counterclockwise (resp. right-hand rule) orientation when $\mathbf{e}_{\perp}$ conventionally points outside an integration path for $\mathbb{R}^2$ (resp. a surface for $\mathbb{R}^3$) [5] (pp. 184–185).
Building on the concepts and operations of circulation and flux in vector calculus, the right and left interior products lead to general definitions of the circulation and flux of multivector fields in exterior algebra along and across hypersurfaces of an arbitrary number of dimensions.

3.2. Circulation and Flux of Multivector Fields

Definition 1.
The circulation of a vector field $\mathbf{v}(\mathbf{x})$ of grade $m$ along an $\ell$-dimensional hypersurface $\mathcal{V}^\ell$, denoted by $\mathcal{C}(\mathbf{v},\mathcal{V}^\ell)$, is given by
$$\mathcal{C}(\mathbf{v},\mathcal{V}^\ell) = \int_{\mathcal{V}^\ell} \mathrm{d}^\ell\mathbf{x}\,⨽\,\mathbf{v}.$$
Expressing the vector field in the canonical basis and using the definition of $\mathrm{d}^\ell\mathbf{x}$ in Equation (22), the circulation can be specified in some cases of interest. For $\ell = m$, the circulation reads
$$\int_{\mathcal{V}^m} \mathrm{d}^m\mathbf{x}\cdot\mathbf{v} = \sum_{I=(i_1,\ldots,i_m)} \Delta_{I,I} \int_{\mathcal{V}^m} \mathrm{d}x_I\,v_I.$$
For instance, for $\ell = m = 1$ and $\mathbb{R}^n$, this formula recovers the definition of the circulation of a vector field along a closed path with the appropriate orientation.
Alternatively, using Equation (25), we note that $\mathbf{v}$ is integrated along the direction of $\mathbf{e}_{\parallel}$, tangential to the hypersurface, in an orientation-preserving change of coordinates, that is,
$$\int_{\mathcal{V}^m} \mathrm{d}^m\mathbf{x}\,⨽\,\mathbf{v} = \int_{\mathcal{V}^m} \mathrm{d}x_{\parallel}\,\mathbf{e}_{\parallel}\,⨽\,\mathbf{v}.$$
Intuitively, the circulation in Equation (27) measures the alignment of an $m$-vector field $\mathbf{v}$ with respect to $\mathcal{V}^\ell$ for any $\ell$ and $m$, with the circulation being an $(\ell-m)$-vector if $m \le \ell$ and zero otherwise.
Definition 2.
The flux of a vector field $\mathbf{v}(\mathbf{x})$ of grade $m$ across an $\ell$-dimensional hypersurface $\mathcal{V}^\ell$, denoted by $\mathcal{F}(\mathbf{v},\mathcal{V}^\ell)$, is given by
$$\mathcal{F}(\mathbf{v},\mathcal{V}^\ell) = \int_{\mathcal{V}^\ell} \mathrm{d}^\ell\mathbf{x}^{\mathcal{H}^{-1}}\,⨼\,\mathbf{v}.$$
Expressing both $\mathbf{v}$ and $\mathrm{d}^\ell\mathbf{x}$ in the canonical basis, and using the inverse Hodge operation in Equation (10), the flux in the special case of $\ell = k+n-m$ can be written as
$$\int_{\mathcal{V}^\ell} \mathrm{d}^\ell\mathbf{x}^{\mathcal{H}^{-1}}\cdot\mathbf{v} = \sum_{I=(i_1,\ldots,i_m)} \sigma(I,I^c) \int_{\mathcal{V}^\ell} \mathrm{d}x_{I^c}\,v_I.$$
As an example in $\mathbb{R}^3$, the flux of a vector field $\mathbf{v}$ through a surface $\mathcal{V}^2$ reads
$$\int_{\mathcal{V}^2} \mathrm{d}^2\mathbf{x}^{\mathcal{H}^{-1}}\cdot\mathbf{v} = \int_{\mathcal{V}^2} \sum_{I,\,i\notin I} \mathrm{d}x_I\,\sigma(i,I)\,\mathbf{e}_i\cdot\mathbf{v}.$$
The right-hand side of Equation (32) is a conventional surface integral, upon the identification of $\sum_{I,\,i\notin I} \mathrm{d}x_I\,\sigma(i,I)\,\mathbf{e}_i$ as an infinitesimal surface element $\mathrm{d}\mathbf{S}$.
Alternatively, using the analogue of Equation (25) for the differential vector element $\mathrm{d}^\ell\mathbf{x}^{\mathcal{H}^{-1}}$, the equivalent of Equation (29) for the flux is
$$\int_{\mathcal{V}^\ell} \mathrm{d}^\ell\mathbf{x}^{\mathcal{H}^{-1}}\,⨼\,\mathbf{v} = \int_{\mathcal{V}^\ell} \mathrm{d}x_{\parallel}\,\mathbf{e}_{\parallel}^{\mathcal{H}^{-1}}\,⨼\,\mathbf{v}.$$
This equation implies that $\mathbf{v}$ is integrated along a normal component to the hypersurface, since $\mathbf{e}_{\parallel}^{\mathcal{H}^{-1}}$ is a multivector of grade $k+n-\ell$ orthogonal to $\mathcal{V}^\ell$. Intuitively, the flux in Equation (30) measures the magnitude of the multivector field crossing the hypersurface. In general, the flux is a vector of grade $(\ell+m-n-k)$ if $\ell \ge k+n-m$ and zero otherwise. For instance, if $\ell = k+n$, the flux of $\mathbf{v}$ over a $(k+n)$-dimensional hypersurface $\mathcal{V}^{k+n}$ gives the integral of $\mathbf{v}$ over $\mathcal{V}^{k+n}$, an extension of the volume integral to $\mathbb{R}^{k+n}$,
$$\int_{\mathcal{V}^{k+n}} \mathrm{d}^{k+n}\mathbf{x}^{\mathcal{H}^{-1}}\,⨼\,\mathbf{v} = \int_{\mathcal{V}^{k+n}} \mathrm{d}x_{i_1,\ldots,i_{k+n}}\,\mathbf{v},$$
where we used the relation $1^{\mathcal{H}} = \mathbf{e}_{i_1,\ldots,i_{k+n}}$, implying that $\mathrm{d}^{k+n}\mathbf{x}^{\mathcal{H}^{-1}} = \mathrm{d}x_{i_1,\ldots,i_{k+n}}$, and that $1\,⨼\,\mathbf{v} = \mathbf{v}$.

3.3. Exterior and Interior Derivatives

In vector calculus, extensive use is made of the nabla operator $\nabla$, a vector operator that takes partial space derivatives. For instance, operations such as gradient, divergence or curl are expressed in terms of this operator. In our case, we need the generalization to $(k,n)$ space–time of the differential vector operator $\boldsymbol{\partial}$, defined as $\boldsymbol{\partial} = (-\partial_0,\ldots,-\partial_{k-1},\partial_k,\ldots,\partial_{k+n-1})$, that is,
$$\boldsymbol{\partial} = \sum_{i=0}^{k+n-1} \Delta_{ii}\,\mathbf{e}_i\,\partial_i.$$
For a given vector field $\mathbf{v}$ of grade $m$, we define the exterior derivative of $\mathbf{v}$ as $\boldsymbol{\partial}\wedge\mathbf{v}$, namely
$$\boldsymbol{\partial}\wedge\mathbf{v} = \sum_{i=0}^{k+n-1}\sum_I \Delta_{ii}\,\partial_i v_I\,\sigma(i,I)\,\mathbf{e}_{\varepsilon(i,I)}.$$
The grade of the exterior derivative of v is m + 1 , unless m = k + n , in which case the exterior derivative is zero, as can be deduced from the fact that all signatures are zero.
In addition, we define the interior derivative of $\mathbf{v}$ as $\boldsymbol{\partial}\,⨼\,\mathbf{v}$, namely
$$\boldsymbol{\partial}\,⨼\,\mathbf{v} = \sum_{i,I:\,i\in I} \partial_i v_I\,\sigma(I\setminus i, i)\,\mathbf{e}_{I\setminus i}.$$
The grade of the interior derivative of $\mathbf{v}$ is $m-1$, unless $m = 0$, in which case the interior derivative is zero, as implied by the fact that the grade of $\boldsymbol{\partial}$ is larger than the grade of $\mathbf{v}$. Using Equation (16) with $\mathbf{u} = \boldsymbol{\partial}$ and assuming that $\mathbf{v}$ and $\mathbf{w}$ are 1-vectors, we obtain a generalization of Leibniz's product rule,
$$\boldsymbol{\partial}\,⨼\,(\mathbf{v}\wedge\mathbf{w}) = \mathbf{v}\,(\boldsymbol{\partial}\cdot\mathbf{w}) - (\boldsymbol{\partial}\cdot\mathbf{v})\,\mathbf{w}.$$
The formulas for the exterior and interior derivatives allow us to express some common operations in vector calculus. For a scalar function $\phi$, its gradient is given by its exterior derivative, $\nabla\phi = \boldsymbol{\partial}\wedge\phi$, while for a vector field $\mathbf{v}$, its divergence $\nabla\cdot\mathbf{v}$ is given by its interior derivative, $\nabla\cdot\mathbf{v} = \boldsymbol{\partial}\,⨼\,\mathbf{v}$. From Equation (16) we further observe that for a scalar function $\phi$ we recover the relation
$$\boldsymbol{\partial}\,⨼\,(\boldsymbol{\partial}\wedge\phi) = (\boldsymbol{\partial}\cdot\boldsymbol{\partial})\,\phi.$$
In addition, for a vector field $\mathbf{v}$ in $\mathbb{R}^3$, taking into account Equation (18), the curl can be variously expressed as
$$\nabla\times\mathbf{v} = (\boldsymbol{\partial}\wedge\mathbf{v})^{\mathcal{H}^{-1}} = \boldsymbol{\partial}\,⨼\,\mathbf{v}^{\mathcal{H}^{-1}} = \boldsymbol{\partial}\,⨼\,\mathbf{v}^{\mathcal{H}}.$$
This formula allows us to write the curl of a vector field × v in terms of the exterior and interior products and the Hodge complement, while generalizing both the cross product and the curl to grade-m vector fields in space–time algebras with different dimensions. Moreover, from Equation (16) we can recover for r = 1 the well-known formula for the curl of the curl of a vector,
$$\nabla\times(\nabla\times\mathbf{v}) = \nabla(\nabla\cdot\mathbf{v}) - \nabla^2\mathbf{v}.$$
It is easy to verify that the exterior derivative of an exterior derivative is zero, as is the interior derivative of an interior derivative; that is, for any vector field $\mathbf{v}$, we have
$$\boldsymbol{\partial}\wedge(\boldsymbol{\partial}\wedge\mathbf{v}) = 0,$$
$$\boldsymbol{\partial}\,⨼\,(\boldsymbol{\partial}\,⨼\,\mathbf{v}) = 0.$$
In regard to the vector space R 3 , and using Equation (18), these expressions imply the well-known facts that the curl of the gradient and the divergence of the curl are zero:
$$\nabla\times(\nabla\phi) = \left(\boldsymbol{\partial}\wedge(\boldsymbol{\partial}\wedge\phi)\right)^{\mathcal{H}^{-1}} = 0,$$
$$\nabla\cdot(\nabla\times\mathbf{v}) = \boldsymbol{\partial}\,⨼\,(\boldsymbol{\partial}\,⨼\,\mathbf{v}^{\mathcal{H}}) = 0.$$
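The nilpotency of the exterior derivative stated above can be verified exactly on fields with polynomial components, since partial derivatives of polynomials can be computed without numerical error. The sketch below (the representation and names are ours) implements the exterior derivative for such fields and checks that applying it twice annihilates a scalar field; the cancellation is precisely the symmetry of mixed partials against the antisymmetry of $\sigma$:

```python
def signature(seq):
    """Signature of the permutation sorting seq; 0 on repeats."""
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    sign = 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def pdiff(poly, i):
    """Exact partial derivative of a polynomial {exponent tuple: coeff}."""
    out = {}
    for exps, c in poly.items():
        if exps[i]:
            e = list(exps)
            e[i] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * exps[i]
    return out

def ext_deriv(field, k, dim):
    """Exterior derivative of a multivector field {blade tuple: polynomial},
    with the metric factor Delta_ii = -1 on time indices (index < k)."""
    out = {}
    for I, poly in field.items():
        for i in range(dim):
            s = signature((i,) + I)
            if s == 0:                    # i already in I: the term vanishes
                continue
            sgn = (-1 if i < k else 1) * s
            blade = tuple(sorted((i,) + I))
            acc = out.setdefault(blade, {})
            for exps, c in pdiff(poly, i).items():
                acc[exps] = acc.get(exps, 0) + sgn * c
    clean = {}
    for blade, poly in out.items():
        poly = {e: c for e, c in poly.items() if c}
        if poly:
            clean[blade] = poly
    return clean
```

For the scalar field $\phi = x_0^2 x_1 + x_2 x_3$ in $(1,3)$ space–time, `ext_deriv(ext_deriv(phi, 1, 4), 1, 4)` comes out empty, confirming on this example that the exterior derivative of an exterior derivative vanishes.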

3.4. Stokes Theorem for the Circulation

In vector calculus in $\mathbb{R}^3$, the Kelvin–Stokes theorem for the circulation of a vector field $\mathbf{v}$ of grade 1 along the boundary $\partial\mathcal{V}^2$ of a two-dimensional surface $\mathcal{V}^2$ relates its value to that of the surface integral of the curl of the vector field over the surface itself. In the notation used in the previous section, the surface integral is the flux of the curl of the vector field across the surface, and this theorem reads
$$\int_{\partial\mathcal{V}^2} \mathrm{d}\mathbf{x}\cdot\mathbf{v} = \int_{\mathcal{V}^2} \mathrm{d}^2\mathbf{x}^{\mathcal{H}^{-1}}\cdot(\nabla\times\mathbf{v}).$$
Taking into account the identity $\nabla\times\mathbf{v} = (\boldsymbol{\partial}\wedge\mathbf{v})^{\mathcal{H}^{-1}}$ in Equation (40), we rewrite the right-hand side of Equation (46) as
$$\int_{\mathcal{V}^2} \mathrm{d}^2\mathbf{x}^{\mathcal{H}^{-1}}\cdot(\nabla\times\mathbf{v}) = \int_{\mathcal{V}^2} \mathrm{d}^2\mathbf{x}^{\mathcal{H}^{-1}}\cdot(\boldsymbol{\partial}\wedge\mathbf{v})^{\mathcal{H}^{-1}} = \int_{\mathcal{V}^2} \mathrm{d}^2\mathbf{x}\cdot(\boldsymbol{\partial}\wedge\mathbf{v}),$$
where we used that $\mathbf{u}\cdot\mathbf{w} = \mathbf{u}^{\mathcal{H}^{-1}}\cdot\mathbf{w}^{\mathcal{H}^{-1}} = \mathbf{u}^{\mathcal{H}}\cdot\mathbf{w}^{\mathcal{H}}$ for vectors $\mathbf{u}$, $\mathbf{w}$. The flux of the curl of the vector field across a surface is thus also the circulation of the exterior derivative of the vector field along that surface.
The generalized Stokes theorem for differential forms [4] (p. 80) allows us to extend the Kelvin-Stokes theorem to multivectors of any grade m as we do in the following theorem.
Theorem 1.
The circulation of a grade-$m$ vector field $\mathbf{v}$ along the boundary $\partial\mathcal{V}^\ell$ of an $\ell$-dimensional hypersurface $\mathcal{V}^\ell$ is equal to the circulation of the exterior derivative of $\mathbf{v}$ along $\mathcal{V}^\ell$:
$$\mathcal{C}(\mathbf{v},\partial\mathcal{V}^\ell) = \mathcal{C}(\boldsymbol{\partial}\wedge\mathbf{v},\mathcal{V}^\ell).$$
As hinted at above, the role of the vector curl in the right-hand side of Equation (46) is played by the exterior derivative in this generalized theorem.
Proof. 
We start by stating the generalized Stokes theorem for differential forms [4] (p. 80),
$$\int_{\partial\mathcal{V}} \omega = \int_{\mathcal{V}} \mathrm{d}\omega,$$
where $\omega$ is a differential form and $\mathrm{d}\omega$ its exterior derivative, represented by the operator
$$\mathrm{d} = \sum_j \mathrm{d}x_j\,\partial_j.$$
Expressing the circulations in Equation (48) by means of the integrals in Equation (27), we obtain
$$\int_{\partial\mathcal{V}^\ell} \mathrm{d}^{\ell-1}\mathbf{x}\,⨽\,\mathbf{v} = \int_{\mathcal{V}^\ell} \mathrm{d}^\ell\mathbf{x}\,⨽\,(\boldsymbol{\partial}\wedge\mathbf{v}).$$
In the integral on the left-hand side of Equation (51), the integrand is a differential form $\omega = \mathrm{d}^{\ell-1}\mathbf{x}\,⨽\,\mathbf{v}$. After expanding the interior product using the definitions of $\mathrm{d}^{\ell-1}\mathbf{x}$ and $\mathbf{v}$, we obtain
$$\omega = \Bigl(\sum_{|J|=\ell-1} \mathrm{d}x_J\,\mathbf{e}_J\Bigr) ⨽ \Bigl(\sum_{|I|=m} v_I\,\mathbf{e}_I\Bigr) = \sum_{\substack{|J|=\ell-1,\,|I|=m:\\ I\subseteq J}} \Delta_{I,I}\,\sigma\!\left(I,\varepsilon(I,J^c)^c\right) v_I\,\mathrm{d}x_J\,\mathbf{e}_{\varepsilon(I,J^c)^c}.$$
Then, computing the exterior derivative of this form with Equation (50) gives
$$\mathrm{d}\omega = \sum_{\substack{|J|=\ell-1,\,|I|=m:\\ I\subseteq J}}\,\sum_{j\notin J} \Delta_{I,I}\,\sigma\!\left(I,\varepsilon(I,J^c)^c\right)\partial_j v_I\,\sigma(j,J)\,\mathrm{d}x_{\varepsilon(j,J)}\,\mathbf{e}_{\varepsilon(I,J^c)^c}.$$
We next write down the integrand on the right-hand side of Equation (51), $\mathrm{d}^\ell\mathbf{x}\,⨽\,(\boldsymbol{\partial}\wedge\mathbf{v})$, that is,
$$\begin{aligned}
\mathrm{d}^\ell\mathbf{x}\,⨽\,(\boldsymbol{\partial}\wedge\mathbf{v}) &= \Bigl(\sum_{|K|=\ell} \mathrm{d}x_K\,\mathbf{e}_K\Bigr) ⨽ \Bigl(\sum_{|I|=m}\sum_{j\notin I} \Delta_{jj}\,\partial_j v_I\,\sigma(j,I)\,\mathbf{e}_{\varepsilon(j,I)}\Bigr) \\
&= \sum_{\substack{|K|=\ell,\,|I|=m:\\ \varepsilon(j,I)\subseteq K}}\sum_{j\notin I} \Delta_{jj}\,\partial_j v_I\,\mathrm{d}x_K\,\sigma(j,I)\,\Delta_{\varepsilon(j,I),\varepsilon(j,I)}\,\sigma\!\left(\varepsilon(j,I),\varepsilon(K^c,\varepsilon(j,I))^c\right)\mathbf{e}_{\varepsilon(K^c,\varepsilon(j,I))^c} \\
&= \sum_{\substack{|K|=\ell,\,|I|=m:\\ \varepsilon(j,I)\subseteq K}}\sum_{j\notin I} \Delta_{I,I}\,\partial_j v_I\,\mathrm{d}x_K\,\sigma(j,I)\,\sigma\!\left(\varepsilon(j,I),\varepsilon(K^c,\varepsilon(j,I))^c\right)\mathbf{e}_{\varepsilon(K^c,\varepsilon(j,I))^c},
\end{aligned}$$
and verify that it coincides with the exterior derivative in Equation (53). As the set of $m$ indices $I$ is included in the sets $J$ or $K$ in Equations (53) and (54), we may write $K = \varepsilon(J,j)$ for some $j \notin J$. Then, we obtain the following chain of equalities for the basis elements in Equations (53) and (54):
$$\mathbf{e}_{\varepsilon(K^c,\varepsilon(j,I))^c} = \mathbf{e}_{\varepsilon(K^c\cup\{j\}\cup I)^c} = \mathbf{e}_{\varepsilon(J^c\setminus\{j\}\cup\{j\}\cup I)^c} = \mathbf{e}_{\varepsilon(J^c\cup I)^c} = \mathbf{e}_{\varepsilon(I,J^c)^c}.$$
Therefore, and using that $\varepsilon(J^c\setminus\{j\},\varepsilon(j,I))^c = J\setminus I$, we can write Equation (54) as
$$\mathrm{d}^\ell\mathbf{x}\,⨽\,(\boldsymbol{\partial}\wedge\mathbf{v}) = \sum_{\substack{|J|=\ell-1,\,|I|=m,\,j\notin I:\\ \varepsilon(j,I)\subseteq J\cup\{j\}}} \Delta_{I,I}\,\partial_j v_I\,\mathrm{d}x_{\varepsilon(J,j)}\,\sigma(j,I)\,\sigma\!\left(\varepsilon(j,I),J\setminus I\right)\mathbf{e}_{\varepsilon(I,J^c)^c}.$$
Comparing Equation (56) with Equation (53), the expressions coincide if this identity holds:
$$\sigma(j,J)\,\sigma(I,J\setminus I) = \sigma(j,I)\,\sigma(j+I,J\setminus I).$$
To prove Equation (57), we exploit the fact that the $\sigma$ are permutation signatures and that the signature of a composition of permutations is the product of the respective signatures. We proceed with the help of a visual aid in Figure 1, which depicts the identity between two different ways of sorting the concatenated list $(j, I, J\setminus I)$. On the left column, we first sort the list $(I, J\setminus I)$ to obtain $J$ and then sort the list $(j,J)$. On the right column, we first sort the list $(j,I)$ and then the list $(j+I, J\setminus I)$. This proves Equation (57) and the theorem.
Finally, we note that, had we defined the circulation with the left interior product, we would have obtained an incompatible relation in Equation (57), which could not be solved. □
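The signature identity in Equation (57) can also be checked exhaustively by computer for small index sets, which is a useful sanity test of the signs in the proof. The sketch below (names ours) enumerates every $J$, every $I\subseteq J$ and every $j\notin J$ within a five-element index set:

```python
from itertools import combinations

def signature(seq):
    """Signature of the permutation sorting seq; 0 on repeats."""
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0
    sign = 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign

def check_identity_57(dim):
    """Verify sigma(j,J) sigma(I, J\\I) == sigma(j,I) sigma(j+I, J\\I)
    for all sorted J in {0..dim-1}, I subset of J, and j not in J."""
    for lj in range(dim + 1):
        for J in combinations(range(dim), lj):
            for li in range(lj + 1):
                for I in combinations(J, li):
                    rest = tuple(x for x in J if x not in I)   # J \ I
                    for j in range(dim):
                        if j in J:
                            continue
                        jI = tuple(sorted((j,) + I))           # j + I
                        lhs = signature((j,) + J) * signature(I + rest)
                        rhs = signature((j,) + I) * signature(jI + rest)
                        if lhs != rhs:
                            return False
    return True
```

Running `check_identity_57(5)` covers all such configurations of five indices and returns `True`.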

3.5. Stokes Theorem for the Flux

In vector calculus in $\mathbb{R}^3$, the Gauss theorem relates the volume integral of the divergence of a vector field $\mathbf{v}$ over a region $\mathcal{V}^3$ to the surface integral of the vector field over the region boundary $\partial\mathcal{V}^3$. In the notation used in previous sections, and taking into account that both the surface integral and the volume integral can be expressed as fluxes for $\mathbb{R}^3$, this theorem reads
$$\int_{\partial\mathcal{V}^3} \mathrm{d}^2\mathbf{x}^{\mathcal{H}^{-1}}\cdot\mathbf{v} = \int_{\mathcal{V}^3} \mathrm{d}^3\mathbf{x}^{\mathcal{H}^{-1}}\,(\nabla\cdot\mathbf{v}).$$
Making use of the identity $\nabla\cdot\mathbf{v} = \boldsymbol{\partial}\,⨼\,\mathbf{v}$, we can rewrite the right-hand side of Equation (58) as
$$\int_{\mathcal{V}^3} \mathrm{d}^3\mathbf{x}^{\mathcal{H}^{-1}}\,(\nabla\cdot\mathbf{v}) = \int_{\mathcal{V}^3} \mathrm{d}^3\mathbf{x}^{\mathcal{H}^{-1}}\,⨼\,(\boldsymbol{\partial}\,⨼\,\mathbf{v}).$$
In other words, the Gauss theorem relates the flux of the interior derivative of a vector field $\mathbf{v}$ across a region $\mathcal{V}^3$ to the flux of the vector field itself across the region boundary $\partial\mathcal{V}^3$.
The generalized Stokes theorem for differential forms allows us to extend the Gauss theorem to multivectors of any grade m as we do in the following theorem.
Theorem 2.
The flux of a grade-$m$ vector field $\mathbf{v}$ across the boundary $\partial\mathcal{V}^\ell$ of an $\ell$-dimensional hypersurface $\mathcal{V}^\ell$ is equal to the flux of the interior derivative of $\mathbf{v}$ across $\mathcal{V}^\ell$:
$$\mathcal{F}(\mathbf{v},\partial\mathcal{V}^\ell) = \mathcal{F}(\boldsymbol{\partial}\,⨼\,\mathbf{v},\mathcal{V}^\ell).$$
Proof. 
Expressing the fluxes in Equation (60) by means of the integrals in Equation (30), we obtain
$$\int_{\partial\mathcal{V}^\ell} \mathrm{d}^{\ell-1}\mathbf{x}^{\mathcal{H}^{-1}}\,⨼\,\mathbf{v} = \int_{\mathcal{V}^\ell} \mathrm{d}^\ell\mathbf{x}^{\mathcal{H}^{-1}}\,⨼\,(\boldsymbol{\partial}\,⨼\,\mathbf{v}).$$
As in the proof of Theorem 1, we apply the Stokes theorem for differential forms in Equation (49) upon identifying $\omega$ with $\mathrm{d}^{\ell-1}\mathbf{x}^{\mathcal{H}^{-1}}\,⨼\,\mathbf{v}$ and $\mathrm{d}\omega$ with $\mathrm{d}^\ell\mathbf{x}^{\mathcal{H}^{-1}}\,⨼\,(\boldsymbol{\partial}\,⨼\,\mathbf{v})$. First, for $\omega$, we get
$$\begin{aligned}
\Bigl(\sum_{|J|=\ell-1} \mathrm{d}x_J\,\Delta_{J^c,J^c}\,\sigma(J^c,J)\,\mathbf{e}_{J^c}\Bigr) ⨼ \Bigl(\sum_{|I|=m} v_I\,\mathbf{e}_I\Bigr) &= \sum_{\substack{|J|=\ell-1,\,|I|=m:\\ J^c\subseteq I}} v_I\,\mathrm{d}x_J\,\sigma(J^c,J)\,\sigma\!\left(\varepsilon(J^c,I^c)^c,J^c\right)\mathbf{e}_{\varepsilon(J^c,I^c)^c} \\
&= \sum_{\substack{|J|=\ell-1,\,|I|=m:\\ J^c\subseteq I}} v_I\,\mathrm{d}x_J\,\sigma(J^c,J)\,\sigma(I\setminus J^c,J^c)\,\mathbf{e}_{I\setminus J^c}.
\end{aligned}$$
Now, taking the exterior derivative of Equation (62), we obtain
$$\mathrm{d}\omega = \sum_{\substack{|J|=\ell-1,\,|I|=m:\\ J^c\subseteq I}}\,\sum_{j\notin J} \partial_j v_I\,\mathrm{d}x_{\varepsilon(j,J)}\,\sigma(j,J)\,\sigma(J^c,J)\,\sigma(I\setminus J^c,J^c)\,\mathbf{e}_{I\setminus J^c}.$$
This quantity should be equal to $\mathrm{d}^\ell\mathbf{x}^{\mathcal{H}^{-1}}\,⨼\,(\boldsymbol{\partial}\,⨼\,\mathbf{v})$ on the right-hand side of Equation (61), which we expand as
$$\begin{aligned}
\mathrm{d}^\ell\mathbf{x}^{\mathcal{H}^{-1}}\,⨼\,(\boldsymbol{\partial}\,⨼\,\mathbf{v}) &= \Bigl(\sum_{|K|=\ell} \mathrm{d}x_K\,\Delta_{K^c,K^c}\,\sigma(K^c,K)\,\mathbf{e}_{K^c}\Bigr) ⨼ \Bigl(\sum_{I:\,j\in I} \partial_j v_I\,\sigma(I\setminus j,j)\,\mathbf{e}_{I\setminus j}\Bigr) \\
&= \sum_{\substack{|K|=\ell,\,|I|=m:\\ K^c\subseteq I\setminus j}}\sum_{j\in I} \partial_j v_I\,\mathrm{d}x_K\,\sigma(K^c,K)\,\sigma(I\setminus j,j)\,\sigma(I\setminus j\setminus K^c,K^c)\,\mathbf{e}_{I\setminus j\setminus K^c}.
\end{aligned}$$
We first consider the sets in the summations in the alternative expressions for $\mathrm{d}\omega$, Equations (63) and (64). Since $J^c$ contains $j$ and is a subset of $I$, while $K^c$ does not contain $j$ and is also a subset of $I$ (with $j\in I$), we can assert that $K = J\cup\{j\}$, so that the conditions in the summations are equivalent. The basis elements coincide, and so do the differentials and derivatives, and it remains to verify the identity
$$\sigma(j,J)\,\sigma(J^c,J)\,\sigma(I\setminus J^c,J^c) = \sigma(K^c,K)\,\sigma(I\setminus j,j)\,\sigma(I\setminus j\setminus K^c,K^c).$$
With the definition $L = I\setminus J^c$, and expressed in terms of $j$, $J$, and $L$, this condition gives
$$\sigma(j,J)\,\sigma(J^c,J)\,\sigma(L,J^c) = \sigma(J^c\setminus j,J+j)\,\sigma(J^c\setminus j+L,j)\,\sigma(L,J^c\setminus j).$$
Multiplying both sides of the equation by $\sigma(J^c,J)$, $\sigma(J^c\setminus j,J+j)$ and $\sigma(J^c\setminus j,j)$, and taking into account that the square of a signature is $+1$, we obtain
$$\sigma(J^c\setminus j,j)\,\sigma(L,J^c)\,\sigma(j,J)\,\sigma(J^c\setminus j,J+j) = \sigma(L,J^c\setminus j)\,\sigma(J^c\setminus j+L,j)\,\sigma(J^c\setminus j,j)\,\sigma(J^c,J).$$
We start by simplifying Equation (67) by noting that
$$\sigma(J^c\setminus j,j)\,\sigma(L,J^c) = \sigma(L,J^c\setminus j)\,\sigma(J^c\setminus j+L,j),$$
with the help of the visual aid in Figure 2. The permutations on the left column first merge $(J^c\setminus j)$ with $j$ and then the resulting $J^c$ with $L$. Similarly, on the right column, we start with $L$, $(J^c\setminus j)$ and $\{j\}$, then concatenate $(L,J^c\setminus j)$ and then add $j$, getting the same result as the left column.
Therefore, we have reduced Equation (67) to the simpler form
$$\sigma(j,J)\,\sigma(J^c\setminus j,J+j) = \sigma(J^c\setminus j,j)\,\sigma(J^c,J),$$
which we prove with the aid depicted in Figure 3. On the left column, $j$ and $J$ are first merged, and then the concatenation $(J^c\setminus j,J+j)$ gives the sorted $\varepsilon(J^c,J)$. On the right column, after sorting $(J^c\setminus j)$ with $j$, merging it with $J$ leads to the same final sequence. □

4. An Application to Electromagnetism in 1 + 3 Dimensions

In this section, we show how to recover the standard form of the Maxwell equations and the Lorentz force in $1+3$ dimensions from a formulation with exterior calculus involving an electromagnetic bivector field $\mathbf{F}$ and a four-dimensional current density vector $\mathbf{J}$. In the appropriate units, the bivector field $\mathbf{F}$ can be decomposed as $\mathbf{F} = \mathbf{F}_E + \mathbf{F}_B$, where $\mathbf{F}_E$ contains the time–space components of the electric field $\mathbf{E}$ and $\mathbf{F}_B$ contains the space–space components of the magnetic field $\mathbf{B}$. Similarly, the current density depends on the charge density $\rho$ and the spatial current density $\mathbf{j}$. More specifically,
$$\mathbf{J} = \rho\,\mathbf{e}_0 + \mathbf{j},$$
$$\mathbf{F} = \mathbf{F}_E + \mathbf{F}_B = \mathbf{e}_0\wedge\mathbf{E} + \mathbf{B}^{\mathcal{H}}.$$
Here the Hodge complement acts only on the space components, and $\mathbf{B}^{\mathcal{H}} = \mathbf{B}^{\mathcal{H}^{-1}}$. The bivector field $\mathbf{F}$ is closely related to the Faraday tensor, a rank-2 antisymmetric tensor.
Maxwell equations, in their differential form, constrain the divergence of the electric and the magnetic field, Equations (72) and (73), respectively, and the curl of E and B , namely Equations (74) and (75) [11] (p. 4-1).
$$\nabla\cdot\mathbf{E} = \rho,$$
$$\nabla\cdot\mathbf{B} = 0,$$
$$\nabla\times\mathbf{E} = -\partial_0\mathbf{B},$$
$$\nabla\times\mathbf{B} = \partial_0\mathbf{E} + \mathbf{j}.$$
We refer to Equations (73) and (74) as the homogeneous Maxwell equations, and to Equations (72) and (75) as the inhomogeneous Maxwell equations, as the latter include the sources given by the charge and current densities. In exterior-calculus notation, both pairs of equations can be combined into simple multivector equations,
$$\boldsymbol{\partial}\wedge\mathbf{F} = 0,$$
$$\boldsymbol{\partial}\,⨼\,\mathbf{F} = \mathbf{J},$$
where $\boldsymbol{\partial}$ is the differential operator $\boldsymbol{\partial} = -\partial_0\mathbf{e}_0 + \boldsymbol{\nabla}$ for $k = 1$ and $n = 3$. As a consistency check, note that the wedge product raises the grade of $\mathbf{F}$, and the zero in Equation (76) is the zero trivector; also, as the left interior product lowers the grade of $\mathbf{F}$, both sides of Equation (77) are space–time vectors.
Next to Maxwell equations, the Lorentz force density f characterizes, after integrating over the appropriate region, the force exerted by the electromagnetic field upon a system of charges described by the charge and current densities ρ and j [11] (pp. 13-1–13-3),
f = ρ E + j × B. (78)
In relativistic form, the Lorentz force density becomes a four-dimensional vector f [2] (pp. 153–157). The time component of this vector is j · E, the power dissipated per unit volume or, after integrating over the appropriate region, the rate of work being done on the charges by the fields. In exterior-calculus notation, the Lorentz force density vector can be computed as a left interior product, namely
f = J ⌋ F. (79)

4.1. Equivalence of the Lorentz Force Density

In this section, we prove that Equation (79) indeed recovers the relativistic Lorentz force density by verifying that its components in vector calculus and in exterior calculus coincide. From the definitions of J and F, and using the distributive property of the interior product, we get
f = (ρ e₀ + j) ⌋ (F_E + F_B) = ρ e₀ ⌋ F_E + ρ e₀ ⌋ F_B + j ⌋ F_E + j ⌋ F_B = ρ e₀ ⌋ (e₀ ∧ E) + ρ e₀ ⌋ B^H + j ⌋ (e₀ ∧ E) + j ⌋ B^{H⁻¹}. (80)
Some straightforward calculations give e₀ ⌋ (e₀ ∧ E) = E, e₀ ⌋ B^H = 0, and j ⌋ (e₀ ∧ E) = e₀ (j · E). In addition, the formula for the left interior product in Equation (18) gives j ⌋ B^{H⁻¹} = (j ∧ B)^{H⁻¹} = j × B, where the cross product is only defined in three dimensions. With these calculations, we obtain
f = ρ E + e₀ (j · E) + j × B, (81)
namely, a time component j · E and a spatial component equal to the Lorentz force density ρ E + j × B.
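The key step j ⌋ B^{H⁻¹} = j × B can be reproduced by direct computation. The sketch below is a hypothetical mini-implementation (the function names are ours) of the spatial Hodge complement and the left interior product with the sign conventions of Section 2, where the spatial metric is Δᵢᵢ = +1.

```python
def sign(seq):
    """(-1)**(number of inversions) of seq."""
    inv = sum(1 for a in range(len(seq)) for b in range(a + 1, len(seq))
              if seq[a] > seq[b])
    return (-1) ** inv

def hodge_inv_3d(v):
    """Spatial inverse Hodge complement of a vector: e_i -> sigma(i^c, i) e_{i^c}."""
    out = {}
    for i, coef in zip((1, 2, 3), v):
        ic = tuple(k for k in (1, 2, 3) if k != i)
        out[ic] = coef * sign(ic + (i,))
    return out

def left_interior(u, biv):
    """Left interior product with e_i interior e_J = sigma(J\\i, i) e_{J\\i}."""
    out = {1: 0.0, 2: 0.0, 3: 0.0}
    for J, coef in biv.items():
        for i, ui in zip((1, 2, 3), u):
            if i in J:
                rest = tuple(k for k in J if k != i)
                out[rest[0]] += ui * coef * sign(rest + (i,))
    return (out[1], out[2], out[3])

j = (1.0, 2.0, 3.0)
B = (4.0, 5.0, 6.0)
cross = (j[1]*B[2] - j[2]*B[1], j[2]*B[0] - j[0]*B[2], j[0]*B[1] - j[1]*B[0])
assert left_interior(j, hodge_inv_3d(B)) == cross
print(left_interior(j, hodge_inv_3d(B)))  # prints (-3.0, 6.0, -3.0)
```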

4.2. Equivalence of the Differential Form of Maxwell Equations

In this section, we prove that Equation (76) indeed recovers the homogeneous Maxwell equations and that Equation (77) recovers the inhomogeneous Maxwell equations.
First, we observe that the exterior derivative ∂ ∧ F is a trivector with four components, while the homogeneous Maxwell equations are a scalar equation, Equation (73), and a vector equation, Equation (74). We shall verify that the scalar equation is given by the trivector component e₁₂₃ of ∂ ∧ F, while the vector equation is given by the trivector components e₀₁₂, e₀₁₃, and e₀₂₃ of the exterior derivative.
We evaluate the exterior derivative ∂ ∧ F using the decomposition of F in Equation (71),
∂ ∧ F = −∂₀ e₀ ∧ (e₀ ∧ E) − ∂₀ e₀ ∧ B^H + ∇ ∧ (e₀ ∧ E) + ∇ ∧ B^H = −∂₀ e₀ ∧ B^H − e₀ ∧ (∇ ∧ E) + ∇ ∧ B^H = −e₀ ∧ (∂₀ B^H + ∇ ∧ E) + ∇ ∧ B^H, (82)
where we used that e₀ ∧ e₀ = 0 and that ∇ ∧ e₀ = −e₀ ∧ ∇ in the second step of Equation (82). Taking advantage of Equation (40), we have the equality ∇ ∧ E = (∇ × E)^H, while ∇ ∧ B^H = (∇ · B)^H, and
∂ ∧ F = −e₀ ∧ (∂₀ B^H + (∇ × E)^H) + (∇ · B)^H. (83)
Indeed, the first summand vanishes when ∂₀ B^H + (∇ × E)^H = 0 or, taking the inverse Hodge complement, when Equation (74) holds. In terms of components, the spatial Hodge complement in this equation transforms a spatial vector into a bivector with components e₁₂, e₁₃, and e₂₃ only; after taking the exterior product with e₀, we obtain the trivector components e₀₁₂, e₀₁₃, and e₀₂₃, recovering the homogeneous Maxwell equation in Equation (74). Similarly, the second term vanishes for (∇ · B)^H = 0. In terms of components, the spatial Hodge complement directly transforms a scalar into a trivector with the unique component e₁₂₃, recovering the homogeneous Maxwell equation in Equation (73).
We move on to the inhomogeneous Maxwell equations and compute the interior derivative ∂ ⌋ F,
∂ ⌋ F = (−∂₀ e₀ + ∇) ⌋ (F_E + F_B) = −∂₀ E − ∂₀ e₀ ⌋ B^H + e₀ (∇ · E) + ∇ ⌋ B^H = −∂₀ E + e₀ (∇ · E) + ∇ ⌋ B^H, (84)
since e₀ ⌋ B^H = 0. The interior derivative ∂ ⌋ F is a space–time vector with four components, while the inhomogeneous Maxwell equations are a scalar equation, Equation (72), and a spatial vector equation, Equation (75).
We can verify that the scalar equation is given by the e₀ component of ∂ ⌋ F, while the spatial vector equation is given by the spatial components e₁, e₂, and e₃ of ∂ ⌋ F. Indeed, if we match this expression with the current density vector J, the time component e₀ of ∂ ⌋ F gives Equation (72). Selecting the space components of ∂ ⌋ F, the differential equation is
−∂₀ E + ∇ ⌋ B^H = j, (85)
which, using the relation ∇ ⌋ B^H = ∇ × B, can be written as Equation (75).
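The component bookkeeping of Equation (84) can be tested by treating the partial derivatives ∂₀ and ∂ᵢ as formal coefficients a₀ and aₛ. The following sketch (our own illustration, under the stated assumptions Δ₀₀ = −1 and Δᵢᵢ = +1 for space) applies the left interior product of the stand-in operator −a₀e₀ + aₛ to the bivector F and checks that the e₀ component reproduces the divergence term of Equation (72) while the spatial components reproduce Equation (75).

```python
def sign(seq):
    inv = sum(1 for a in range(len(seq)) for b in range(a + 1, len(seq))
              if seq[a] > seq[b])
    return (-1) ** inv

DELTA = {0: -1, 1: 1, 2: 1, 3: 1}  # assumed metric: time negative, space positive

def interior(vec, biv):
    """Left interior product of a 1-vector with a bivector:
    e_i interior e_J = Delta_ii * sigma(J\\i, i) * e_{J\\i}."""
    out = {i: 0.0 for i in range(4)}
    for J, coef in biv.items():
        for i, vi in vec.items():
            if i in J:
                rest = tuple(k for k in J if k != i)
                out[rest[0]] += DELTA[i] * vi * coef * sign(rest + (i,))
    return out

E, B = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
# F = e0 ^ E + B^H (spatial Hodge complement of B)
F = {(0, 1): E[0], (0, 2): E[1], (0, 3): E[2],
     (2, 3): B[0], (1, 3): -B[1], (1, 2): B[2]}

a0, a_s = 0.5, (0.7, -0.2, 0.9)                # stand-ins for d/dt and nabla
d = {0: -a0, 1: a_s[0], 2: a_s[1], 3: a_s[2]}  # the operator -a0 e0 + a_s
res = interior(d, F)

div_E = sum(ai * Ei for ai, Ei in zip(a_s, E))
cross = (a_s[1]*B[2] - a_s[2]*B[1], a_s[2]*B[0] - a_s[0]*B[2], a_s[0]*B[1] - a_s[1]*B[0])
assert abs(res[0] - div_E) < 1e-12                            # e0 component: "div E"
for k in range(3):
    assert abs(res[k + 1] - (-a0 * E[k] + cross[k])) < 1e-12  # space: -a0 E + "curl B"
print("interior derivative reproduces the structure of Equations (72) and (75)")
```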

4.3. Equivalence of the Integral Form of Maxwell Equations

After studying the exterior-calculus differential formulation of Maxwell equations, we recover the standard integral formulation. Applying the Stokes Theorem 1 to Equation (76), we find that the circulation of the bivector field F along the boundary of any three-dimensional space–time volume V₃ is zero:
∫_{∂V₃} d²x · F = ∫_{V₃} d³x · (∂ ∧ F) = 0. (86)
At this point, Equation (86) is a scalar equation and we obtain the pair of homogeneous Maxwell equations by considering two different hypersurfaces V 3 .
First, let the domain V₃ = V contain only spatial coordinates. There are no tangential components to ∂V with time indices, so the contribution of F_E to the circulation of F over ∂V in Equation (86) is zero, i.e.,
∫_{∂V} d²x · F = ∫_{∂V} d²x · F_B. (87)
Using that u · w = u^{H⁻¹} · w^{H⁻¹} for any two multivectors u, w of the same grade, and therefore d²x · F_B = (d²x)^{H⁻¹} · F_B^{H⁻¹}, together with the definition F_B = B^H, the integral on the right-hand side of Equation (87) becomes
∫_{∂V} d²x · F_B = ∫_{∂V} (d²x)^{H⁻¹} · B = ∫_{∂V} dS · B, (88)
where we used Equation (32) to write the last surface integral. Substituting Equation (88) back into Equation (86) gives the Gauss law for the magnetic field [11] (pp. 1-5–1-9).
Let now V₃ be a time-space domain (t₀, t₁) × S, where S is a two-dimensional spatial surface. With no real loss of generality, we assume that S lies on the e₁e₂ plane. The boundary ∂V₃ is the union of the sets (t₀, t₁) × ∂S, {t₀} × S and {t₁} × S. For the first set, we choose e_⊥ as the vector normal to ∂S pointing outwards on the plane defined by S, and e_∥ = e₀ ∧ e_{∂S}, where e_{∂S} is a vector tangent to ∂S with a counterclockwise orientation, so that e_⊥ ∧ e_∥ = e₀₁₂. Further, since e_∥ is a time-space bivector, the contribution of F_B to the circulation of F over this first set in Equation (86) is zero, and
∫_{(t₀,t₁)×∂S} d²x · F = ∫_{(t₀,t₁)×∂S} d²x · F_E. (89)
Writing the differential vector as d²x = dt dx e₀ₓ, with e₀ₓ = e₀ ∧ eₓ, parameterizing the line integral over the boundary ∂S by the variable x with unit tangent vector eₓ, and using that e₀ₓ · F_E = −eₓ · E and therefore dx e₀ₓ · F_E = −dx · E, the integral on the right-hand side of Equation (89) becomes
−∫_{t₀}^{t₁} dt ∫_{∂S} dx · E. (90)
For the second and third sets, the normal vectors to the integration surface pointing outwards are e_⊥ = −e₀ and e_⊥ = e₀, respectively. Since e_∥ is a space-space bivector in both cases, the contribution of F_E to the circulation is zero. We express the circulations of F_B as fluxes of B and surface integrals as done in Equation (88). Using these observations, the integral for the circulation of F over these two sets in Equation (86) is given by
∫_{{t₀}×S} d²x · F + ∫_{{t₁}×S} d²x · F = ∫_S dS · B(t₀) − ∫_S dS · B(t₁). (91)
Combining Equations (90) and (91) in Equation (86), we recover the integral over time of the so-called Faraday law [11] (pp. 17-1–17-2). Equivalently, taking the time derivative recovers the usual Faraday law, namely
∫_{∂S} dx · E + ∂ₜ ∫_S dS · B = 0. (92)
In regard to the inhomogeneous Maxwell equations, applying the Stokes Theorem 2 to Equation (77), we find that the flux of the bivector field F across the boundary of any three-dimensional space–time volume V₃ is equal to the flux of the current density J across that same volume:
∫_{∂V₃} (d²x)^{H⁻¹} · F = ∫_{V₃} (d³x)^{H⁻¹} · (∂ ⌋ F) = ∫_{V₃} (d³x)^{H⁻¹} · J. (93)
As with the homogeneous Maxwell equations, the scalar Equation (93) yields the inhomogeneous Maxwell equations by considering two different hypersurfaces V 3 .
First, let the integration domain V₃ be a spatial volume V. Since there are no normal components to ∂V with space indices only, the contribution of F_B to the flux is zero, so that Equation (93) becomes
∫_{∂V} (d²x)^{H⁻¹} · F_E = ∫_V (d³x)^{H⁻¹} · J. (94)
From the definition of the inverse Hodge complement in Equation (10), we write the differential vectors
(d²x)^{H⁻¹} = Σ_{I, i∉I} d²x_I σ(0i, I) e₀ᵢ (95)
(d³x)^{H⁻¹} = dV e₀. (96)
Plugging these expressions into Equation (94), using the definitions of F_E and J, and computing the dot products on both sides of the equality, we obtain that Equation (94) simplifies to
∫_{∂V} Σ_{I, i∉I} d²x_I σ(0i, I) e₀ᵢ · Σⱼ Eⱼ e₀ⱼ = ∫_V dV e₀ · (ρ e₀ + j) (97)
∫_{∂V} Σ_{I, i∉I} d²x_I σ(i, I) Eᵢ = ∫_V dV ρ (98)
∫_{∂V} dS · E = ∫_V dV ρ. (99)
In Equation (98) we used that σ(0i, I) = σ(i, I), and in Equation (99) we used that Σ_{I, i∉I} d²x_I σ(i, I) Eᵢ = (d²x)^{H⁻¹} · E, where the Hodge complement is now taken over space; the result is a surface integral with positive orientation as in Equation (32). We have recovered in Equation (99) the Gauss law for the electric field [11] (pp. 4-7–4-9).
For V₃ = (t₀, t₁) × S, where S is a two-dimensional surface lying on the e₁e₂ plane, the boundary ∂V₃ is the union of the sets (t₀, t₁) × ∂S, {t₀} × S and {t₁} × S. For the first set, since (d²x)^{H⁻¹} has no time components, the contribution of F_E to this set is zero, that is,
∫_{(t₀,t₁)×∂S} (d²x)^{H⁻¹} · F = ∫_{(t₀,t₁)×∂S} (d²x)^{H⁻¹} · F_B. (100)
As in the homogeneous case, we choose e_⊥ as the vector normal to ∂S pointing outwards on the plane defined by S, and e_∥ = e₀ ∧ e_{∂S}, where e_{∂S} is a vector tangent to ∂S with a counterclockwise orientation, such that e_⊥ ∧ e_∥ = e₀₁₂ introduces a change of sign. Expressing (d²x)^{H⁻¹} and F_B in the canonical basis, defining I = (0, i) so that I^c contains only space indices, and using that e_{I^c} · e_{i^c} = 1 and σ(I^c, I) = σ(I^c, 0, i) = σ(0, I^c, i) = σ(I^c, i), we obtain that Equation (100) simplifies to
∫_{(t₀,t₁)×∂S} (d²x)^{H⁻¹} · F = ∫_{(t₀,t₁)×∂S} Σ_I d²x_I σ(I^c, I) e_{I^c} · Σᵢ Bᵢ e_{i^c} σ(i^c, i) = ∫_{t₀}^{t₁} dt ∫_{∂S} dx Bₓ = ∫_{t₀}^{t₁} dt ∫_{∂S} dx · B. (101)
For the second and third sets, we respectively choose e_⊥ = −e₀ and e_⊥ = e₀ pointing outside V₃, implying that the contribution of F_B is zero for these sets, as the inverse Hodge complement of e_∥ is a time-space bivector. Expressing (d²x)^{H⁻¹} as in Equation (95) and using similar steps as in Equations (97)–(99), the left-hand side of Equation (93) over these two sets is given by
∫_{{t₀}×S} (d²x)^{H⁻¹} · F + ∫_{{t₁}×S} (d²x)^{H⁻¹} · F = ∫_S dS · E(t₀) − ∫_S dS · E(t₁). (102)
Finally, for the right-hand side of Equation (93), we choose e_⊥ as the vector normal to V₃ pointing outside. Since e_∥ = e₀₁₂ implies that e_⊥ ∧ e_∥ = e₀₁₂₃, we obtain that
∫_{V₃} (d³x)^{H⁻¹} · J = ∫_{t₀}^{t₁} dt ∫_S dS · j. (103)
We have thus recovered the integral form of the Ampère–Maxwell equation [11] (pp. 18-1–18-4), integrated over the time interval (t₀, t₁), by combining Equations (101)–(103) into Equation (93), that is,
∫_{t₀}^{t₁} dt ∫_{∂S} dx · B = ∫_{t₀}^{t₁} dt ∫_S dS · j + ∫_S dS · E(t₁) − ∫_S dS · E(t₀). (104)

5. Summary

In this paper, we aimed at showing how exterior calculus provides a tool merging the simplicity and intuitiveness of standard vector calculus with the power of tensors and differential forms. Set in the context of a general space–time algebra with multiple space and time components, we provided the basic concepts of exterior algebra and calculus, such as multivectors, the wedge and interior products, with a distinction between left and right interior products, the Hodge complement, and the exterior and interior derivatives. While a space–time with multiple time coordinates leads to several issues from the physical point of view [12], we did not deal with these problems, as this paper focuses on the mathematical constructions. We also defined oriented integrals, two important examples being the flux and circulation of grade-m vector fields, namely integrals over a hypersurface of the normal and tangential components of the field, respectively. These operations extend the standard circulation of a vector field as a line integral and the flux of a vector field as a surface integral in three dimensions to any number of dimensions and any vector grade.
Armed with these tools, we proved two exterior-calculus Stokes theorems, one for the circulation and one for the flux, which generalize the Kelvin–Stokes, Gauss and Green theorems. We saw, for instance, that the flux of the curl of a vector field in three dimensions across a surface is also given by the circulation of the vector field along the boundary of that surface. In exterior calculus, these Stokes theorems hold for any number of dimensions and any vector grade, and are simply expressed in terms of the exterior derivative for the circulation and the interior derivative for the flux.
As an application of our tools, we showed how to recover the classical laws of electromagnetism, namely Maxwell equations and the Lorentz force, from an exterior-calculus formalism in relativistic space–time with one temporal and three spatial dimensions. The electromagnetic field is described by a bivector field with six components, closely related to Faraday's antisymmetric tensor, containing both the electric and magnetic fields. The differential form of Maxwell equations relates the exterior derivative of the bivector field to the zero trivector, and the interior derivative of the field to the current density vector. In integral form, these equations correspond to the statements that the circulation of the bivector field along the boundary of any three-dimensional space–time volume is zero, and that the flux of the bivector field across the boundary of any three-dimensional space–time volume is equal to the flux of the current density across the same volume.

Author Contributions

Conceptualization, A.M.; Methodology, I.C., J.F.-S., A.M.; Investigation, I.C.; Writing—Original Draft Preparation, I.C.; Writing—Review & Editing, I.C., J.F.-S., A.M.

Acknowledgments

This work has been funded in part by the Spanish Ministry of Economy and Competitiveness under grants TEC2016-78434-C3-1-R and BES-2017-081360.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs of Product Identities

In this appendix, we verify the relations about interior products introduced in Section 2.

Appendix A.1. Relation between Left and Right Interior Products

We now prove the formula
e_I ⌋ e_J = (e_J ⌊ e_I) (−1)^{|I|(|J|−|I|)}, (A1)
relating the left and right interior products. For two lists I and J, we have
e_I ⌋ e_J = Δ_{I,I} σ(J\I, I) e_{J\I}, (A2)
e_J ⌊ e_I = Δ_{I,I} σ(I, J\I) e_{J\I}, (A3)
where we assumed that I ⊆ J with no loss of generality and used that ε(I, J^c)^c = J\I in this case. The only difference between the two expressions lies in the signatures, which are related by setting A = J\I and B = I in the following lemma.
Lemma A1.
Given two arbitrary lists A and B, of lengths |A| and |B| respectively, the permutations sorting the concatenated lists (A, B) and (B, A) satisfy the formula
σ(A, B) = σ(B, A) (−1)^{|A||B|}. (A4)
Proof. 
Given a list A, let Ā be the reversed list, namely the list where the order of all the elements is reversed. Counting the number of position jumps needed to reverse the list, we obtain the signature of this reversing operation as
σ_r(A) = σ_r(Ā) = (−1)^{(|A|−1) + (|A|−2) + ⋯ + 1} = (−1)^{|A|(|A|−1)/2}. (A5)
The proof is based on the identity between two different ways of rearranging the concatenated list ( A , B ) into the ordered list ε ( A , B ) , as depicted in Figure A1.
First, in the left column of Figure A1, we depict how a single permutation with signature σ(A, B) orders the list (A, B). In the right column of Figure A1, we depict how a different series of permutations achieves the same result. We start by reversing the concatenated list (A, B) into (B̄, Ā), an operation with signature σ_r(B̄, Ā). Then, we separately reverse the sublists B̄ and Ā, operations with respective signatures σ_r(B̄) and σ_r(Ā). A final permutation with signature σ(B, A) orders the list (B, A) into ε(A, B). Since the signature of a composition of permutations is the product of the signatures, we obtain that
σ(A, B) = σ_r(B̄, Ā) σ_r(Ā) σ_r(B̄) σ(B, A). (A6)
Using Equation (A5) for every σ_r in Equation (A6) and carrying out some simplifications yields Equation (A4). □
Figure A1. Visual aid for the relation between σ(A, B) and σ(B, A).
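Both Equation (A5) and Lemma A1 are easy to confirm by exhaustive computation. The Python sketch below (our own check, not part of the appendix) counts inversions to obtain signatures, and verifies the reversal formula and Equation (A4) over a small index universe.

```python
from itertools import combinations

def sign(seq):
    """(-1)**(number of inversions) of seq."""
    inv = sum(1 for a in range(len(seq)) for b in range(a + 1, len(seq))
              if seq[a] > seq[b])
    return (-1) ** inv

# Reversal signature, Equation (A5): sigma_r(A) = (-1)^(|A|(|A|-1)/2)
for n in range(7):
    assert sign(tuple(range(n))[::-1]) == (-1) ** (n * (n - 1) // 2)

# Lemma A1: sigma(A, B) = sigma(B, A) * (-1)^(|A||B|) for disjoint sorted lists
universe = (0, 1, 2, 3, 4)
for r in range(len(universe) + 1):
    for A in combinations(universe, r):
        rest = tuple(i for i in universe if i not in A)
        for s in range(len(rest) + 1):
            for B in combinations(rest, s):
                assert sign(A + B) == sign(B + A) * (-1) ** (len(A) * len(B))
print("Equation (A5) and Lemma A1 verified")
```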

Appendix A.2. Relation between Interior and Exterior Products

We start with the expression for the left interior product in Equation (14). From Equations (9) and (10), we compute
(e_I ∧ e_J^H)^{H⁻¹} = Δ_{J,J} σ(J, J^c) (e_I ∧ e_{J^c})^{H⁻¹} = Δ_{J,J} Δ_{ε(I,J^c)^c, ε(I,J^c)^c} σ(J, J^c) σ(I, J^c) σ(ε(I,J^c)^c, ε(I,J^c)) e_{ε(I,J^c)^c}, (A7)
and since Δ_{ε(I,J^c)^c, ε(I,J^c)^c} = Δ_{J\I, J\I}, we can conclude that Δ_{J,J} Δ_{ε(I,J^c)^c, ε(I,J^c)^c} = Δ_{I,I}. If we now compare the result with Equation (11), we need just verify the identity
σ(ε(I,J^c)^c, I) = σ(J, J^c) σ(I, J^c) σ(ε(I,J^c)^c, ε(I,J^c)), (A8)
or equivalently
σ(ε(I,J^c)^c, I) σ(J, J^c) = σ(ε(I,J^c)^c, ε(I,J^c)) σ(I, J^c). (A9)
The left-hand side of Equation (A9) corresponds to taking the lists ε(I,J^c)^c = J\I, I and J^c, in this order, first merging and sorting J\I with I, and then merging and sorting the resulting list J with J^c, as shown in the left column of Figure A2. On the right-hand side, we start with the same three lists, but we first merge and sort I with J^c, and then obtain the whole sorted list by merging and sorting the result with J\I, as represented in the right column of Figure A2. Thus, starting from the three lists and rearranging them in different ways, we reach the same final ordered list; since the signatures of the left-hand side and the right-hand side are therefore equal, Equation (A9) is proved. As a consequence, Equation (14) is verified.
Figure A2. Visual aid for the permutations in Equation (A9).
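Equation (A9) can likewise be verified exhaustively. The following sketch (an illustrative check with hypothetical helper names) runs over all pairs I ⊆ J of subsets of {0, 1, 2, 3}, using that ε(I, J^c)^c = J\I.

```python
from itertools import combinations

def sign(seq):
    """(-1)**(number of inversions) of seq."""
    inv = sum(1 for a in range(len(seq)) for b in range(a + 1, len(seq))
              if seq[a] > seq[b])
    return (-1) ** inv

universe = (0, 1, 2, 3)
for rj in range(len(universe) + 1):
    for J in combinations(universe, rj):
        Jc = tuple(i for i in universe if i not in J)
        for ri in range(len(J) + 1):
            for I in combinations(J, ri):
                J_minus_I = tuple(i for i in J if i not in I)
                eps = tuple(sorted(I + Jc))   # the merged sorted list epsilon(I, J^c)
                lhs = sign(J_minus_I + I) * sign(J + Jc)
                rhs = sign(J_minus_I + eps) * sign(I + Jc)
                assert lhs == rhs
print("Equation (A9) verified")
```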
Afterwards, we prove the formula for the right interior product in Equation (15). Using Equations (9) and (10), we write
(e_I^{H⁻¹} ∧ e_J)^H = Δ_{I^c,I^c} σ(I^c, I) (e_{I^c} ∧ e_J)^H = Δ_{I^c,I^c} σ(I^c, I) σ(I^c, J) e_{ε(I^c,J)}^H = Δ_{I^c,I^c} σ(I^c, I) σ(I^c, J) Δ_{ε(I^c,J), ε(I^c,J)} σ(ε(I^c,J), ε(I^c,J)^c) e_{ε(I^c,J)^c}. (A10)
Using that Δ_{ε(I^c,J), ε(I^c,J)} Δ_{I^c,I^c} = Δ_{J,J}, in order to prove the validity of Equation (15) we need to prove the relation
σ(J, ε(I^c,J)^c) = σ(I^c, I) σ(I^c, J) σ(ε(I^c,J), ε(I^c,J)^c). (A11)
We can prove it by applying Lemma A1 to obtain the expression in Equation (A8), or by following the same procedure as before, paying attention to the difference that now the list J is included in I.

Appendix A.3. Triple Mixed Product

Given two 1-vectors u and v and an r-vector w, we prove the relation
u ⌋ (v ∧ w) = (−1)^r (u · v) w + v ∧ (u ⌋ w). (A12)
Proof. 
We start by evaluating the left-hand side u ⌋ (v ∧ w) explicitly, separating the terms with i = j from those with i ≠ j, namely
u ⌋ (v ∧ w) = Σ_{i,j,I: j∉I, i∈I} Δᵢᵢ uᵢ vⱼ w_I σ(j, I) σ(I+j\i, i) e_{I+j\i} + Σ_{i,I: i∉I} Δᵢᵢ uᵢ vᵢ w_I σ(i, I) σ(I, i) e_I; (A13)
then, using σ(i, I) σ(I, i) = (−1)^r for i ∉ I, and adding and removing a term (−1)^r Σ_{i,I: i∈I} Δᵢᵢ uᵢ vᵢ w_I e_I, we get
u ⌋ (v ∧ w) = Σ_{i,j,I: i∈I, j∉I\i} Δᵢᵢ uᵢ vⱼ w_I σ(j, I) σ(I+j\i, i) e_{I+j\i} + (−1)^r Σ_{i,I} Δᵢᵢ uᵢ vᵢ w_I e_I. (A14)
More concretely, the first summand in Equation (A14) is obtained as
Σ_{i,j,I: j∉I, i∈I} Δᵢᵢ uᵢ vⱼ w_I σ(j, I) σ(I+j\i, i) e_{I+j\i} − (−1)^r Σ_{i,I: i∈I} Δᵢᵢ uᵢ vᵢ w_I e_I = Σ_{i,j,I: j∉I, i∈I} Δᵢᵢ uᵢ vⱼ w_I σ(j, I) σ(I+j\i, i) e_{I+j\i} + Σ_{i,j,I: j=i, i∈I} Δᵢᵢ uᵢ vⱼ w_I σ(j, I) σ(I+j\i, i) e_{I+j\i} = Σ_{i,j,I: i∈I, j∉I\i} Δᵢᵢ uᵢ vⱼ w_I σ(j, I) σ(I+j\i, i) e_{I+j\i}. (A15)
Similarly, we evaluate the right-hand side v ∧ (u ⌋ w) as
v ∧ (u ⌋ w) = Σ_{i,j,I: i∈I, j∉I\i} Δᵢᵢ uᵢ vⱼ w_I σ(I\i, i) σ(j, I\i) e_{I+j\i}. (A16)
Comparing Equations (A15) and (A16), it remains to prove the equality
σ(j, I) σ(I+j\i, i) = σ(I\i, i) σ(j, I\i). (A17)
We rewrite Equation (A17), multiplying both sides by σ(j, I) σ(j, I\i), so that we obtain
σ(j, I\i) σ(I+j\i, i) = σ(I\i, i) σ(j, I), (A18)
which we verify with the help of Figure A3. In the left column, we first merge j with I\i and then the resulting list with i. In the right column, the permutations first join I\i and i, and the resulting I is then merged with j, yielding the same result on both sides of the relation.
Thus, we can write
u ⌋ (v ∧ w) = Σ_{i,j,I: i∈I, j∉I\i} Δᵢᵢ uᵢ vⱼ w_I σ(I\i, i) σ(j, I\i) e_{I+j\i} + (−1)^r Σᵢ Δᵢᵢ uᵢ vᵢ Σ_I w_I e_I, (A19)
where we identify the term (−1)^r (u · v) w, and finally conclude
u ⌋ (v ∧ w) − v ∧ (u ⌋ w) = (−1)^r (u · v) w, (A20)
which proves our initial formula. □
Figure A3. Visual aid for the identity σ(j, I\i) σ(I+j\i, i) = σ(I\i, i) σ(j, I).
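The triple mixed product identity can also be confirmed numerically with a small exterior-algebra implementation. The sketch below is our own code: the interior product follows Equation (A2), the wedge product uses e_i ∧ e_J = σ(i, J) e_{ε(i,J)}, and the metric Δ₀₀ = −1, Δᵢᵢ = +1 is an assumption matching Section 4. It checks Equation (A12) for 1-vectors u, v and an arbitrary bivector w (r = 2).

```python
from itertools import combinations

def sign(seq):
    inv = sum(1 for a in range(len(seq)) for b in range(a + 1, len(seq))
              if seq[a] > seq[b])
    return (-1) ** inv

DELTA = {0: -1, 1: 1, 2: 1, 3: 1}  # assumed metric: one time, three space dimensions

def wedge_vec(v, w):
    """Exterior product of a 1-vector v with an r-vector w (dicts of index tuples)."""
    out = {}
    for J, wJ in w.items():
        for i, vi in v.items():
            if i not in J:
                K = tuple(sorted(J + (i,)))
                out[K] = out.get(K, 0.0) + vi * wJ * sign((i,) + J)
    return out

def interior_vec(u, w):
    """Left interior product of a 1-vector u with an r-vector w, Equation (A2)."""
    out = {}
    for J, wJ in w.items():
        for i, ui in u.items():
            if i in J:
                rest = tuple(k for k in J if k != i)
                out[rest] = out.get(rest, 0.0) + DELTA[i] * ui * wJ * sign(rest + (i,))
    return out

def dot(u, v):
    return sum(DELTA[i] * u[i] * v[i] for i in u)

u = {0: 1.0, 1: -2.0, 2: 0.5, 3: 3.0}
v = {0: 0.3, 1: 1.0, 2: -1.0, 3: 2.0}
r = 2
w = {J: c for J, c in zip(combinations(range(4), r), (1.0, -1.0, 2.0, 0.7, -0.4, 1.5))}

lhs = interior_vec(u, wedge_vec(v, w))
rhs = wedge_vec(v, interior_vec(u, w))
uv = dot(u, v)
for J in set(lhs) | set(rhs) | set(w):
    total = (-1) ** r * uv * w.get(J, 0.0) + rhs.get(J, 0.0)
    assert abs(lhs.get(J, 0.0) - total) < 1e-9
print("Equation (A12) verified for r = 2")
```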

References

  1. Gibbs, J.W.; Wilson, E.B. Vector Analysis; Yale University Press: New Haven, CT, USA, 1929. [Google Scholar]
  2. Lorentz, H.A.; Einstein, A.; Minkowski, H.; Weyl, H. The Principle of Relativity: A Collection of Original Memoirs on the Special and General Theory of Relativity; Dover Publications: Mineola, NY, USA, 1923. [Google Scholar]
  3. Ricci, M.M.G.; Levi-Civita, T. Méthodes de calcul différentiel absolu et leurs applications. Math. Ann. 1900, 54, 125–201. [Google Scholar] [CrossRef]
  4. Cartan, E. Les Systemes Differentiels Exterieurs Et Leurs Applications Geometriques; Hermann & Cie: Paris, France, 1945. [Google Scholar]
  5. Arnold, V.I.; Weinstein, A.; Vogtmann, K. Mathematical Methods of Classical Mechanics; Springer: Berlin, Germany, 1989. [Google Scholar]
  6. Clifford, W.K. Mathematical Papers; Macmillan: London, UK, 1882. [Google Scholar]
  7. Grassmann, H. Extension Theory (History of Mathematics, 19); American Mathematical Society: Providence, RI, USA; London Mathematical Society: London, UK, 2000. [Google Scholar]
  8. Sommerfeld, A. Zur Relativitätstheorie. I. Vierdimensionale Vektoralgebra. Ann. Phys. 1910, 337, 749–776. [Google Scholar] [CrossRef]
  9. Winitzki, S. Linear Algebra via Exterior Products; Free Software Foundation: Boston, MA, USA, 2010. [Google Scholar]
  10. Frankel, T. The Geometry of Physics, 3rd ed.; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  11. Feynman, R.P.; Leighton, R.B.; Sands, M. The Feynman Lectures on Physics; Addison–Wesley: Boston, MA, USA, 1964; Volume 2. [Google Scholar]
  12. Velev, M. Relativistic mechanics in multiple time dimensions. Phys. Essays 2012, 25, 403–438. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Visual aid for the identity among permutations in Equation (57).
Figure 2. Visual aid for the identity σ(J^c\j, j) σ(L, J^c) = σ(L, J^c\j) σ(J^c\j+L, j).
Figure 3. Visual aid for the identity σ(j, J) σ(J^c\j, J+j) = σ(J^c\j, j) σ(J^c, J).

Share and Cite

MDPI and ACS Style

Colombaro, I.; Font-Segura, J.; Martinez, A. An Introduction to Space–Time Exterior Calculus. Mathematics 2019, 7, 564. https://0-doi-org.brum.beds.ac.uk/10.3390/math7060564
