Correction published on 22 May 2023, see Mathematics 2023, 11(10), 2395.
Article

Algorithmic Aspects of Simulation of Magnetic Field Generation by Thermal Convection in a Plane Layer of Fluid

by Daniil Tolmachev 1, Roman Chertovskih 2,* and Vladislav Zheligovsky 1
1 Institute of Earthquake Prediction Theory and Mathematical Geophysics, Russian Academy of Sciences, 84/32 Profsoyuznaya St., 117997 Moscow, Russia
2 Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
* Author to whom correspondence should be addressed.
Submission received: 6 December 2022 / Revised: 16 January 2023 / Accepted: 21 January 2023 / Published: 5 February 2023 / Corrected: 22 May 2023
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing II)

Abstract:
We present an algorithm for the numerical solution of the equations of magnetohydrodynamics describing the convective dynamo in a plane horizontal layer rotating about an arbitrary axis under geophysically sound boundary conditions. While in many respects we pursue the general approach followed by other authors, our main focus is on the accuracy of simulations, especially in the small scales. We employ the Galerkin method. For the space discretisation of the toroidal and poloidal potentials of the flow (satisfying the no-slip conditions on the horizontal boundaries) and of the magnetic field (for which the boundary conditions mimic the presence of a dielectric over the fluid layer and an electrically conducting bottom boundary), as well as of the deviation of temperature from the steady-state linear profile, we use products of linear combinations (each involving two to five terms) of Chebyshev polynomials in the vertical Cartesian space variable and Fourier harmonics in the horizontal variables. For the chosen coefficients in the linear combinations, the products satisfy the respective boundary conditions and constitute non-orthogonal bases in the weighted Lebesgue space. Determining the coefficients in the expansion of a given function in such a basis (for instance, for computing the time derivatives of these coefficients) requires solving linear systems of equations for band matrices. Several algorithms for determining the coefficients, exploiting algebraically exact relations, have been developed, and their efficiency and accuracy have been numerically investigated for exponentially decaying solutions (encountered when simulating convective regimes that are spatially resolved sufficiently well). For the boundary conditions satisfied by the toroidal component of the flow, our algorithm outperforms the shuttle method, but the latter proves superior when solving the problem for the conditions characterising the poloidal component.
While the conditions for the magnetic field on the horizontal boundaries are quite specific, our algorithms for the no-slip boundary conditions are general-purpose and can be applied to other boundary-value problems in which the unknown function must vanish on the boundary.

1. Introduction

We present an algorithm for the numerical study of the fully nonlinear processes of magnetic field generation by thermal convection in a plane layer of electrically conducting fluid rotating about an inclined axis. This is required, for instance, for investigating magnetic reversals and their precursors, which is at present a hot topic in geophysics. The dynamo model must be compatible with its intended application in such studies. The equations describing the geodynamo have two essential properties: they have a quadratic nonlinearity, and they are symmetric with respect to reversing the orientation of the magnetic field [1]. Many other specific features of the natural Earth's dynamo are of lesser importance and can be disregarded in order to reduce the numerical complexity of computations. For instance, we neglect the processes of sedimentation of the heavy fraction, crystallisation and other phase transitions, as well as chemical reactions in the melt, the complex character of the rheology of the substance at high pressures and temperatures, and the small-scale geometric irregularity of the core–mantle boundary. The choice of the geometry of the fluid volume is again a compromise between the numerical efficiency of the algorithm and its subsequent application to solving geophysical problems. We consider a rectangular fluid cell that is supposed to represent a small segment of the molten outer core of the Earth, which has the form of a spherical layer. Computing in a fluid parallelepiped is numerically more efficient than in a part of a spherical layer, and for a relatively small cell we expect the neglected sphericity not to affect the flow and dynamo behaviour significantly. Since the cell can reside at an arbitrary latitude, the axis of rotation can have an arbitrary direction.
The assumed boundary conditions emulate the conditions under which the Earth's dynamo operates: there is a largely dielectric mantle over the outer core and a predominantly iron solid inner core, which is a good electric conductor. Consequently, we assume that there is a dielectric over the upper horizontal boundary of the fluid layer, the lower horizontal boundary is electrically conducting, and the flow satisfies the no-slip boundary conditions (see Figure 1, which illustrates the physical setup and the coordinate system). All the unknown fields (the flow velocity, magnetic field and temperature) are supposed to be periodic in the horizontal Cartesian variables. While convective dynamos with rotation about the vertical axis have been studied in many papers (see, e.g., [2]), rotation about an inclined axis has not attracted the attention of many investigators, Refs. [3,4,5] being notable exceptions.
We know from the analysis of the equations of magnetohydrodynamics [6] that the behaviour of the small-scale component of the solution (small spatial scales are meant here) is consequential: it can provoke the hypothetical break-ups of the spatial smoothness and analyticity of solutions, and it is also likely to carry the signature of developing magnetic reversals. We therefore pay special attention to the accuracy of the numerical solution throughout the entire spatial spectrum. We apply a spatial discretisation of the partial differential equations similar to that employed in [3,4,7]; in particular, Chebyshev polynomials were used therein to approximate the behaviour of the unknown fields in the vertical (i.e., perpendicular to the layer) direction. Chebyshev polynomials are widely used in numerical analysis [8,9]. Methods relying on them for computing solenoidal flows satisfying the no-slip boundary conditions were developed, e.g., for a horizontal layer of fluid in [9,10,11,12] and in the cylindrical geometry in [13]. Among recent applications, let us mention the use of Chebyshev polynomials in [14] for the expansion of marginally stable modes in the quasilinear reduction of the equations of Rayleigh–Bénard convection, and in [15] for a numerical study of the complex dynamics of convection in an inclined fluid layer, where a trajectory in phase space follows a sequence of connections between unstable invariant states before advancing towards a stable attracting state. Employing Chebyshev polynomials for discretising the fields in the vertical (or radial) direction can be advantageous for two reasons. First, the standard pseudospectral methods can then be employed together with the fast Chebyshev transform, which is essentially the usual fast cosine transform.
Second, zeroes of these polynomials increasingly accumulate near the ends of the interval [−1, 1], suggesting that they are well suited for representing functions that feature a complex behaviour near the end points, e.g., when boundary layers emerge in convection for low Ekman numbers. This clustering of the roots of Chebyshev polynomials proved useful for exploring the steady two-dimensional amagnetic convection in a plane layer at high Rayleigh numbers by Chebyshev collocations [16,17] and for tackling the magnetic dynamo problem [18] for a uniform shear flow between a solid plate sliding over another one, both having anisotropic electrical conductivity. (It can also be exploited directly: for instance, finite differences in the radial direction were used in [19] with the nodes located at the roots of Chebyshev polynomials.) When applying the Galerkin method, one faces a dilemma: either the orthogonality of the basis must be sacrificed (if the numerical complexity of the algorithm is to be kept minimal), or the basis functions do not satisfy the boundary conditions. For instance, the so-called tau method [12] (used, e.g., in [20] to perform the linear stability analysis of convection in an infinite rotating tilted cavity), which is a particular case of the Petrov–Galerkin method, follows the latter path and can suffer from numerical instability [21]. We apply the Galerkin method because it guarantees the minimisation of the discrepancy due to the spatial discretisation at each time step. Let us note that for the Navier–Stokes equation the issue of the boundary conditions is intricate. For instance, the flow being solenoidal, the no-slip boundary condition (three scalar conditions) on a horizontal boundary implies a fourth independent scalar condition for the normal derivative of the vertical component of the flow, ∂v_3/∂x_3 = 0, on this boundary.
This additional condition serves, in fact, as an implicit boundary condition for the pressure; while it is difficult to elicit the latter in an explicit form, the strategy of solving the Poisson equation for the pressure when solving hydrodynamic boundary-value problems was explored by many authors (see [22] and references therein). Our approach to the treatment of the boundary conditions is original. We split the flow and magnetic field into the toroidal, poloidal and mean-field components, identify the (distinct) conditions on the horizontal boundaries for the potentials of these components, and choose the coefficients of linear combinations of Chebyshev polynomials in the vertical variable (each involving two to five terms) so that the combinations satisfy these conditions. The sought potentials are expanded into finite series of products of these linear combinations and Fourier harmonics in the horizontal variables. Expansion in basis functions that satisfy the required boundary conditions is a hallmark of the Galerkin method. The linear combinations constitute a non-orthogonal basis in the weighted Lebesgue space (the weight required for the orthogonality of the Chebyshev polynomials is assumed). As a result, for determining the time derivatives of the coefficients in the expansion of the fields, we need to solve linear systems of equations for band matrices. The matrices emerging when tackling the problems for the toroidal parts of the flow and magnetic field, as well as for the poloidal part of the flow, have special properties: for instance, the sum of the entries is zero in all rows except for the upper two and the lower two. We propose specialised (sub)algorithms for determining the coefficients in the expansion of a given function in the basis of these polynomials, in which such properties of the problem are taken into account.
These algorithms are based on exact formulae and are therefore algebraically equivalent, but in applications with finite-precision arithmetic the equivalence is lost. We have analysed the efficiency and accuracy of the proposed algorithms and identified the algorithms yielding, in the statistical sense, the most accurate results for solutions that have an exponentially decaying energy spectrum, as happens when a sufficiently high spatial resolution is used. The algorithms are applicable to a wide range of linear problems arising when Chebyshev polynomials are used for the spatial discretisation of a solution to a boundary-value problem for a partial differential equation. Notably, the choice of algorithms is very sensitive to the problem to be solved: we find that while our original algorithm is optimal for the no-slip boundaries, a similar algorithm is inferior to the shuttle method when applied to an analogous problem involving a five-diagonal band matrix that arises for somewhat different boundary conditions. To offset the influence of the stiffness of the Fourier–Galerkin system of equations, which we integrate numerically, we apply the third-order Runge–Kutta scheme with exponential time differencing [23] for integration in time, which has proved useful in similar computations [24,25,26]. The statement of the problem is presented in the next section. Discretisation of the flow and magnetic field, and the algorithms for computing their time derivatives, are discussed in Section 3 and Section 4, respectively. The algorithms considered in Section 3.1 for the toroidal component of the flow are general-purpose; they are applicable to all functions vanishing at the boundary and are also employed, in particular, for the deviation of the temperature from the linear profile. Our concluding remarks are summarised in the last section.
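To give the unfamiliar reader a feeling for exponential time differencing, the sketch below (our own illustration, not the third-order scheme of [23]) applies the simplest first-order variant to a scalar model problem u' = λu + N(t): the stiff linear part is propagated by the exact factor e^{λh}, and only the forcing is approximated.

```python
import math

def etd1(lam, forcing, u0, t_end, steps):
    """First-order exponential time differencing for u' = lam*u + N(t).

    The stiff linear term lam*u is integrated exactly via exp(lam*h);
    the forcing N is frozen at the beginning of each step.
    """
    h = t_end / steps
    e = math.exp(lam * h)
    phi = (e - 1.0) / lam      # = lam^{-1} (exp(lam*h) - 1)
    u, t = u0, 0.0
    for _ in range(steps):
        u = e * u + phi * forcing(t)
        t += h
    return u

# Manufactured stiff problem with exact solution u(t) = cos(t):
# u' = lam*(u - cos(t)) - sin(t), so N(t) = -lam*cos(t) - sin(t).
lam = -50.0
u_num = etd1(lam, lambda t: -lam * math.cos(t) - math.sin(t), 1.0, 1.0, 1000)
```

Despite the stiffness of the linear term, the step remains stable and accurate because the exponential factor treats it exactly; only the smooth forcing contributes to the error.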

2. The Governing Equations

Equations governing the geodynamo in the Earth’s outer core were discussed in detail in [27]. We focus on the nonlinear magnetic dynamo powered by the Rayleigh–Bénard convection in a plane layer of rotating fluid heated from below, the axis of rotation being inclined at an arbitrary constant angle. The Boussinesq approximation is considered: the buoyancy is assumed to depend linearly on temperature, density variation is neglected and the flow is deemed incompressible. Electrically insulating horizontal boundaries of the layer are supposed to be held at constant temperatures; the flow satisfies the no-slip conditions at the boundary and periodicity in horizontal directions is assumed. In contrast with [3,4,27], we neglect neither the time derivative of the flow, nor the advective term in the momentum equation. In the Cartesian coordinate system, co-rotating with the fluid layer, the governing equations for the nondimensionalised quantities are as follows: the Navier–Stokes equation
$$\frac{\partial\mathbf v}{\partial t}=\mathbf v\times(\nabla\times\mathbf v)+P\,\nabla^2\mathbf v-\nabla p+\sigma\theta\,\mathbf e_3+\tau\,\mathbf v\times\mathbf q-\mathbf b\times(\nabla\times\mathbf b)\tag{1a}$$
(the last three terms in the r.h.s. of (1a) express the Archimedes buoyancy, Coriolis and Lorentz forces, respectively), the magnetic induction equation
$$\frac{\partial\mathbf b}{\partial t}=\nabla\times(\mathbf v\times\mathbf b)+\eta\,\nabla^2\mathbf b\tag{1b}$$
and the heat transfer equation
$$\frac{\partial\theta}{\partial t}=-\mathbf v\cdot(\nabla\theta-\mathbf e_3)+\nabla^2\theta.\tag{1c}$$
The flow and magnetic field are solenoidal:
$$\nabla\cdot\mathbf v=\nabla\cdot\mathbf b=0.\tag{1d}$$
Here v denotes the flow velocity, t time, p the modified pressure, b the magnetic field, and θ the difference between the temperature T of the fluid and the linear temperature profile (if the horizontal boundaries x_3 = −1 and x_3 = 1 are kept at the constant temperatures T_{−1} and T_1, respectively, then θ = T − (T_1 + T_{−1} + (T_1 − T_{−1})x_3)/2); the unit vector q is aligned with the axis of rotation of the fluid layer, and e_n are the unit vectors of the Cartesian coordinate system x = (x_1, x_2, x_3), e_3 being vertical. Thermal convection is often characterised by the Rayleigh number, Ra, reflecting the magnitude of the buoyancy force; the Prandtl number, P, the ratio of kinematic viscosity to thermal diffusivity; the magnetic Prandtl number, P_m, the ratio of kinematic viscosity to magnetic diffusivity; and the Taylor number, Ta, measuring the speed of rotation. We have denoted σ = P Ra; τ/P = √Ta is proportional to the angular speed of rotation of the fluid; and η = P/P_m. We assume no-slip conditions for the flow velocity at the electrically insulating upper and perfectly electrically conducting lower horizontal boundaries. The magnetic field in the insulator over the upper boundary is a gradient of a harmonic function, and the field is continuous at the boundary:
$$\mathbf v|_{x_3=\pm1}=\mathbf 0,\tag{2a}$$
$$\mathbf b=\nabla h,\qquad \nabla^2h=0\ \ \text{for }x_3\ge1,\qquad h\to0\ \ \text{for }x_3\to\infty,\tag{2b}$$
$$\left.\frac{\partial b_1}{\partial x_3}\right|_{x_3=-1}=\left.\frac{\partial b_2}{\partial x_3}\right|_{x_3=-1}=b_3|_{x_3=-1}=0,\tag{2c}$$
and by construction
$$\theta|_{x_3=\pm1}=0.\tag{2d}$$
The fields have periods L_i = 2π/α_i in the horizontal directions: for all integers n_1 and n_2,
$$\mathbf v(\mathbf x)=\mathbf v(x_1+n_1L_1,\,x_2+n_2L_2,\,x_3),\qquad \mathbf b(\mathbf x)=\mathbf b(x_1+n_1L_1,\,x_2+n_2L_2,\,x_3),\qquad \theta(\mathbf x)=\theta(x_1+n_1L_1,\,x_2+n_2L_2,\,x_3).$$
The equations are solved numerically by pseudospectral methods [28]. However, care must be taken to satisfy the boundary conditions (2).

3. Discretisation of the Flow

Let us consider the implications of the boundary conditions (2a) for the flow. Solenoidality of v ( x , t ) (1d) and the no-slip conditions for the horizontal components of the flow imply that in addition to (2a), the condition
$$\left.\frac{\partial v_3}{\partial x_3}\right|_{x_3=\pm1}=0\tag{3}$$
holds (which is a boundary condition for the pressure in an implicit form). The flow can be decomposed into the sum of the toroidal, T v ( x , t ) , poloidal, P v ( x , t ) , and mean-field, M v ( x , t ) , (understood here in the sense of averaging over the horizontal variables x 1 and x 2 ) components (see, e.g., [29]):
$$\mathbf v(\mathbf x,t)=\mathbf T_v(\mathbf x,t)+\mathbf P_v(\mathbf x,t)+\mathbf M_v(x_3,t),$$
$$\mathbf T_v(\mathbf x,t)=\nabla\times\bigl(T_v(\mathbf x,t)\,\mathbf e_3\bigr),$$
$$\mathbf P_v(\mathbf x,t)=\nabla\times\nabla\times\bigl(P_v(\mathbf x,t)\,\mathbf e_3\bigr),$$
$$\mathbf M_v(x_3,t)=\bigl\langle(v_1(\mathbf x,t),\,v_2(\mathbf x,t),\,0)\bigr\rangle_{\rm h},$$
where
$$\langle f\rangle_{\rm h}=(L_1L_2)^{-1}\int_0^{L_2}\!\!\int_0^{L_1}f(\mathbf x)\,dx_1\,dx_2.$$
In view of the boundary conditions (2a) for the flow, the potentials T v and P v , and the mean-field component satisfy the boundary conditions
$$T_v|_{x_3=\pm1}=0,\tag{6a}$$
$$P_v|_{x_3=\pm1}=\left.\frac{\partial P_v}{\partial x_3}\right|_{x_3=\pm1}=0,\tag{6b}$$
$$\mathbf M_v|_{x_3=\pm1}=\mathbf 0.\tag{6c}$$
Adding an arbitrary function of x 3 (and time) to the toroidal or poloidal potential does not alter the respective components of a vector field; thus, we assume
$$\langle T_v\rangle_{\rm h}=\langle P_v\rangle_{\rm h}=0.$$
The equation for the toroidal potential is obtained by taking the vertical component of the curl of (1a):
$$\nabla^2_{\rm h}\frac{\partial T_v}{\partial t}=P\,\nabla^2\nabla^2_{\rm h}T_v+F^v_t,\qquad F^v_t=-\tau(\mathbf q\cdot\nabla)v_3-\mathbf e_3\cdot\Bigl(\nabla\times\bigl(\mathbf v\times(\nabla\times\mathbf v)-\mathbf b\times(\nabla\times\mathbf b)\bigr)\Bigr),\tag{7}$$
where ∇_h^2 = ∂²/∂x_1² + ∂²/∂x_2² is the Laplacian in the horizontal variables. The inverse operator (∇_h^2)^{−1} acts in the subspace of fields whose horizontal mean ⟨·⟩_h vanishes. Equation (7) is equivalent to the parabolic equation
$$\frac{\partial T_v}{\partial t}=P\,\nabla^2T_v+(\nabla^2_{\rm h})^{-1}F^v_t\tag{8}$$
(the mean over the horizontal variables of the vertical component of the curl of a field periodic in the horizontal directions is zero; therefore, applying the inverse of ∇_h^2 is legitimate here).
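In the Fourier representation in the horizontal variables, applying (∇_h^2)^{−1} amounts to dividing the coefficient of each harmonic by −(α_1²n_1² + α_2²n_2²), the mean mode being absent. A minimal sketch (the function name and the dictionary-based storage of coefficients are our own illustrative choices):

```python
def inv_horizontal_laplacian(coeffs, alpha1, alpha2):
    """Apply the inverse horizontal Laplacian to a field given by its
    Fourier coefficients.

    coeffs maps (n1, n2) -> coefficient of exp(i(alpha1*n1*x1 + alpha2*n2*x2)).
    The mean mode (0, 0) must vanish, as required in the text.
    """
    out = {}
    for (n1, n2), c in coeffs.items():
        if (n1, n2) == (0, 0):
            if c != 0:
                raise ValueError("mean mode must vanish")
            continue
        # The harmonic is an eigenfunction of nabla_h^2 with eigenvalue
        # -(alpha1^2 n1^2 + alpha2^2 n2^2), so inversion is a division.
        out[(n1, n2)] = -c / (alpha1**2 * n1**2 + alpha2**2 * n2**2)
    return out

out = inv_horizontal_laplacian({(1, 0): 2.0, (2, 3): 1.5}, 1.0, 0.5)
```

Applying the forward operator (multiplication by the eigenvalue) to the result recovers the original coefficients, which is an easy self-check.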
Following the Galerkin method, we expand the potentials in series of functions satisfying the respective boundary conditions and consider ordinary differential equations in time for the coefficients of the expansions; these equations express the orthogonality of the resultant discrepancies in the partial differential equations to certain test functions. Both the full set of functions used for expanding the approximate solutions and the full set of test functions should constitute bases in the suitable functional spaces. The number of test functions employed for the orthogonal projection when constructing an approximate solution should be equal to the dimension of the functional subspace where the approximate solution is sought. The traditional Galerkin method, in which the same functional subspaces are employed for approximating the solution and for the orthogonal projection, is usually advantageous (e.g., one can derive energy estimates useful for controlling the convergence of the approximate solutions to the precise ones on increasing the dimension of the subspace). (In the Petrov–Galerkin method, distinct functional subspaces are used for expanding solutions and for the projection [10].) By contrast, it was proposed in [7] to use the collocation method (this can be regarded as "orthogonalisation" to δ-functions, see [28]) to expand solutions in Chebyshev polynomials and to employ two higher-degree Chebyshev polynomials to satisfy the boundary conditions. This procedure is reminiscent of the tau method due to Lanczos [30] (see also [31]). However, the coefficients of the correcting Chebyshev polynomials then depend on the coefficients of the low-degree Chebyshev polynomials in such a way that, in principle, the former coefficients can remain large instead of tending to zero on increasing the number of polynomials used for approximation.
We follow the traditional Galerkin method and use the polynomials (10) for both expanding and projecting. Expansion in the vertical coordinate, x_3, is not straightforward. In view of (6a), it may seem natural to expand the toroidal potential in a sine Fourier series. However, this basis is not particularly suitable for resolving the boundary layers emerging at low Ekman numbers. Moreover, every zero of the sine is also a zero of its second derivative; if solutions to the system of equations of Section 2 do not feature this property at the horizontal boundaries (and it is unlikely that they do), then the sine series are guaranteed to converge poorly. We therefore resort to the next simplest basis in the Lebesgue space, consisting of the Chebyshev polynomials T_n(x_3) = cos(n arccos x_3) for n ≥ 0 and −1 ≤ x_3 ≤ 1 [32]. They are orthogonal on this interval with the weight (1 − x_3²)^{−1/2}, and we always carry out orthogonalisation in x_3 with this weight function. The relations
$$T_n(x_3)=x_3^n,\qquad \frac{dT_n}{dx_3}(x_3)=x_3^{n+1}n^2,\qquad \frac{d^2T_n}{dx_3^2}(x_3)=x_3^n\,(n^4-n^2)/3\qquad\text{for }x_3=\pm1$$
(see, e.g., [32]) are useful in treating the boundary conditions. They imply that a linear combination of Chebyshev polynomials vanishes at the end points as long as the sum of the coefficients of the even-degree polynomials vanishes, as well as the sum of the coefficients of the odd-degree polynomials. We expand functions satisfying the conditions (6a) in the series of the polynomials
$$\mathcal T_n(x_3)=T_{n+2}(x_3)-T_n(x_3),\qquad n\ge0,\tag{10}$$
as recommended in [33] because of the relative simplicity of the matrices resulting from discretisation of the second- and fourth-order differential operators.
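The endpoint values of T_n and of its first two derivatives, as well as the vanishing of the combinations T_{n+2}(x_3) − T_n(x_3) at x_3 = ±1, are straightforward to verify numerically. A small self-check (our own illustration) builds the monomial coefficients of T_n from the recurrence T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x):

```python
def cheb_coeffs(n):
    """Monomial coefficients of T_n via T_{n+1} = 2x*T_n - T_{n-1}."""
    a, b = [1.0], [0.0, 1.0]          # T_0 and T_1
    if n == 0:
        return a
    for _ in range(n - 1):
        c = [0.0] + [2.0 * x for x in b]              # 2x * T_n
        pad = a + [0.0] * (len(c) - len(a))
        c = [ci - ai for ci, ai in zip(c, pad)]       # minus T_{n-1}
        a, b = b, c
    return b

def poly_eval(p, x):
    return sum(c * x**k for k, c in enumerate(p))

def poly_diff(p):
    return [k * c for k, c in enumerate(p)][1:]

p5, p3 = cheb_coeffs(5), cheb_coeffs(3)
```

For instance, for n = 5 one recovers T_5(1) = 1, T_5'(1) = 25 and T_5''(1) = (5⁴ − 5²)/3 = 200, while T_5 − T_3 vanishes at both end points.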
The Gram–Schmidt process yields orthogonal polynomials
$$\widetilde T_n(x_3)=T_{n+2}(x_3)-T_n(x_3)+\beta_n\widetilde T_{n-2}(x_3),\qquad n\ge0,$$
$$\beta_0=\beta_1=0,\qquad \beta_n=\pi/\bigl(4\|\widetilde T_{n-2}\|^2\bigr),\quad n\ge2,$$
$$\|\widetilde T_0\|^2=3\pi/4,\qquad \|\widetilde T_1\|^2=\pi/2,\qquad \|\widetilde T_n\|^2=\pi/2-\bigl(\pi/(4\|\widetilde T_{n-2}\|)\bigr)^2,\quad n\ge2.$$
When the polynomials (10) are used, computation of the time derivatives of their coefficients requires solving a system of linear equations with a band matrix; nevertheless, this approach is computationally more efficient than the use of the orthogonal polynomials T̃_n.

3.1. Algorithms for Determining the Coefficients of a Linear Combination of 𝒯_n(x_3)

Consequently, we approximate the toroidal potential by finite series
$$T_v(\mathbf x,t)=\sum_{|n_1|\le N_1,\;|n_2|\le N_2,\;0\le n\le N_3}t^v_{\mathbf n}(t)\,e^{i(\alpha_1n_1x_1+\alpha_2n_2x_2)}\,\mathcal T_n(x_3),$$
where n = ( n 1 , n 2 , n ) . The equation for the mean-field component of the flow is derived by averaging over horizontal variables of the horizontal component of (1a). This component is also expanded in polynomials (10),
$$\mathbf M_v(x_3,t)=\sum_{0\le n\le N_3}M^v_n(t)\,\mathcal T_n(x_3),$$
and equations for the coefficients in the expansion are also obtained by orthogonal projection on these polynomials. In view of the boundary conditions (2d), following the same approach for discretisation of temperature we expand
$$\theta(\mathbf x,t)=\sum_{|n_1|\le N_1,\;|n_2|\le N_2,\;0\le n\le N_3}\theta_{\mathbf n}(t)\,e^{i(\alpha_1n_1x_1+\alpha_2n_2x_2)}\,\mathcal T_n(x_3).$$
The vector d of the time derivatives of the coefficients of the expansion is a solution to a linear system of equations of the form G d = r. Here r is the vector (of length M = N − 2) of the dot products of the r.h.s. of (8) with 𝒯_m, and G is the Gram matrix of the polynomials 𝒯_m, both G and r being divided by L_1 L_2 π/2. The matrix has the following non-zero entries: G_{1,1} = 3 and G_{k,k} = 2 for all k ≥ 2 on the diagonal, and G_{k,k+2} = G_{k+2,k} = −1 on the two off-diagonals. The linear system of equations can be solved by seven algorithms.
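For concreteness, the band matrix G can be assembled as follows (a sketch in our own notation, with the off-diagonal entries taken as −1 as we reconstruct them); the row sums vanish except in the two upper and two lower rows, the property noted in the Introduction:

```python
def gram_matrix(M):
    """Gram matrix of the polynomials calT_n = T_{n+2} - T_n, scaled as in
    the text: G[0][0] = 3, G[k][k] = 2 for k >= 1, G[k][k+2] = G[k+2][k] = -1."""
    G = [[0.0] * M for _ in range(M)]
    for k in range(M):
        G[k][k] = 3.0 if k == 0 else 2.0
        if k + 2 < M:
            G[k][k + 2] = G[k + 2][k] = -1.0
    return G

G = gram_matrix(8)
row_sums = [sum(row) for row in G]   # zero except in the first two and last two rows
```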
Algorithms 1 and 1′ are the standard shuttle (also known as Thomas) algorithms for a three-diagonal matrix [34], applied in the direct and the reverse directions.
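Since the non-zero entries of G couple only indices of equal parity, the system G d = r splits into two independent tridiagonal systems, to which the shuttle (Thomas) elimination applies directly. A sketch in this spirit (the splitting into parities and all names are ours):

```python
def thomas(lower, diag, upper, rhs):
    """Shuttle (Thomas) elimination for a tridiagonal system.

    lower/upper have length n-1, diag and rhs have length n.
    """
    n = len(diag)
    cp, rp = [0.0] * n, [0.0] * n
    cp[0] = upper[0] / diag[0] if n > 1 else 0.0
    rp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        den = diag[i] - lower[i - 1] * cp[i - 1]
        cp[i] = upper[i] / den if i < n - 1 else 0.0
        rp[i] = (rhs[i] - lower[i - 1] * rp[i - 1]) / den
    x = [0.0] * n
    x[-1] = rp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = rp[i] - cp[i] * x[i + 1]
    return x

def gram_rhs(d):
    """r = G d for the band matrix G described in the text."""
    M = len(d)
    return [(3.0 if k == 0 else 2.0) * d[k]
            - (d[k - 2] if k >= 2 else 0.0)
            - (d[k + 2] if k + 2 < M else 0.0) for k in range(M)]

def solve_gram(r):
    """Solve G d = r: the system decouples into odd- and even-index
    tridiagonal subsystems."""
    M = len(r)
    d = [0.0] * M
    for parity in (0, 1):
        idx = list(range(parity, M, 2))
        n = len(idx)
        diag = [3.0 if i == 0 else 2.0 for i in idx]
        off = [-1.0] * (n - 1)
        for i, val in zip(idx, thomas(off, diag, off, [r[i] for i in idx])):
            d[i] = val
    return d

d_true = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
d_rec = solve_gram(gram_rhs(d_true))
```

Building r from a known d and recovering d provides a simple round-trip check of the elimination.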
Algorithms 2 and 2′. We observe that all the equations except for the first two and the last two involve the coefficients −1, 2, −1, which are also encountered in the simplest finite-difference approximation of the second derivative. Consequently, the l.h.s. of these equations do not change when the solution d_n is altered by any linear function of n, i.e., the "intermediate" equations are invariant under the transformation d_n → d_n + β_1 n + β_2, where β_1 and β_2 are arbitrary constants. Following this observation, we set, in the spirit of the shuttle algorithm,
$$b_1=b_2=0,\qquad b_3=-r_1,\qquad b_4=-r_2,\qquad b_n=2b_{n-2}-b_{n-4}-r_{n-2}\quad\text{for }5\le n\le N,$$
and find (N is assumed to be even)
$$d_1=-\frac{b_{N-1}}{N-1},\qquad d_n=n\,d_1+b_n\quad\text{for odd }n,\;3\le n\le M-1;\qquad d_2=-\frac{b_N}{N/2},\qquad d_n=\frac n2\,d_2+b_n\quad\text{for even }n,\;4\le n\le M.$$
This is algorithm 2. While here all the unknown coefficients d_n are expressed in terms of d_1 and d_2, in algorithm 2′ d_{M−1} and d_M play the role of such "basic" variables: setting
$$b_M=-r_M,\qquad b_{M-1}=-r_{M-1},\qquad b_{M-2}=-2r_M-r_{M-2},\qquad b_{M-3}=-2r_{M-1}-r_{M-3},\tag{12a}$$
$$b_n=2b_{n+2}-b_{n+4}-r_n\quad\text{for }1\le n\le M-4,\tag{12b}$$
we obtain
$$d_{M-1}=-\frac{b_1+b_3}{N-1},\qquad d_n=\frac{N-n-1}{2}\,d_{M-1}+b_{n+2}\quad\text{for odd }n,\;1\le n\le M-3,\tag{12c}$$
$$d_M=-\frac{b_2}{N/2},\qquad d_n=\frac{N-n}{2}\,d_M+b_{n+2}\quad\text{for even }n,\;2\le n\le M-2.\tag{12d}$$
Both versions of this algorithm have the numerical complexity O(N); by contrast, the use of the orthogonal polynomials T̃_n results in the O(N²) complexity of computations, which is prohibitively large. Given that the Chebyshev series coefficients r_n tend to zero for large n, the recurrence relation (12b), in which the computation of b_n proceeds from values smaller in magnitude to larger ones, while all d_n involve products of the large numbers N − n with the small factors d_{M−1} and d_M, is less affected by round-off errors. Thus, algorithm 2′ (12) apparently suits our purposes better than algorithm 2.
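Our reading of algorithm 2′ (with the signs as we reconstruct them) can be transcribed as follows; the code is an illustrative sketch of ours, with the 1-based indexing of the text kept by padding index 0:

```python
def gram_rhs(d):
    """r = G d for the band matrix G described in the text (0-based)."""
    M = len(d)
    return [(3.0 if k == 0 else 2.0) * d[k]
            - (d[k - 2] if k >= 2 else 0.0)
            - (d[k + 2] if k + 2 < M else 0.0) for k in range(M)]

def solve_gram_alg2p(r):
    """Algorithm 2': express all unknowns through the basic variables
    d_{M-1} and d_M, computing b_n by the backward recurrence (12b)."""
    M = len(r)
    N = M + 2
    r = [0.0] + list(r)                # r[1..M]
    b = [0.0] * (N + 1)
    b[M] = -r[M]
    b[M - 1] = -r[M - 1]
    b[M - 2] = -2.0 * r[M] - r[M - 2]
    b[M - 3] = -2.0 * r[M - 1] - r[M - 3]
    for n in range(M - 4, 0, -1):
        b[n] = 2.0 * b[n + 2] - b[n + 4] - r[n]
    d = [0.0] * (M + 1)
    d[M - 1] = -(b[1] + b[3]) / (N - 1)
    d[M] = -b[2] / (N / 2)
    for n in range(1, M - 2, 2):       # odd n up to M-3
        d[n] = (N - n - 1) / 2 * d[M - 1] + b[n + 2]
    for n in range(2, M - 1, 2):       # even n up to M-2
        d[n] = (N - n) / 2 * d[M] + b[n + 2]
    return d[1:]

d_true = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
d_rec = solve_gram_alg2p(gram_rhs(d_true))
```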
The analogy between the l.h.s. of the system at hand and the simplest finite-difference approximation of second-order derivatives of a fictitious function is exploited more closely in the derivation of algorithms 3 and 3′. This gives an opportunity to find explicit expressions for d_k.
In terms of the variables z_k = d_{k+2} − d_k for 1 ≤ k ≤ M − 2 ("the first-order derivatives" approximated by the Euler scheme for the unit mesh size), the system of equations takes the form
$$2d_1-z_1=r_1,\tag{13a}$$
$$d_2-z_2=r_2,\tag{13b}$$
$$z_{k-2}-z_k=r_k\quad\text{for }3\le k\le M-2,\tag{13c}$$
$$d_{M-1}+z_{M-3}=r_{M-1},\tag{13d}$$
$$d_M+z_{M-2}=r_M.\tag{13e}$$
We will now “twice integrate” the fictitious function. The system (13) splits into separate systems for even-index and odd-index variables. In the odd case, we find sequentially from (13a), (13c) and (13d) (“the first integration”)
$$z_1=2d_1-r_1,\quad z_3=2d_1-r_1-r_3,\quad z_5=2d_1-r_1-r_3-r_5,\quad\dots,\quad z_{2k-1}=2d_1-r_1-r_3-\dots-r_{2k-1},\quad\dots,\quad -d_{M-1}=2d_1-r_1-r_3-\dots-r_{M-1}.$$
Summing up all these M / 2 relations (“the second integration”) yields
$$-d_1=z_1+\dots+z_{M-3}-d_{M-1}=M\,d_1-\frac M2\,r_1-\Bigl(\frac M2-1\Bigr)r_3-\dots-1\cdot r_{M-1},$$
whereby
$$d_1=\frac{1}{M+1}\sum_{j=1}^{M/2}\Bigl(\frac N2-j\Bigr)r_{2j-1}.\tag{14}$$
Summing up the relations from the first up to the kth one now yields
$$d_{2k-1}=(2k-1)\,d_1-\sum_{j=1}^{k-1}(k-j)\,r_{2j-1}\quad\text{for }1<k\le M/2.\tag{15}$$
In the same way, we find from the system for the even-index unknowns
$$d_2=\frac2N\sum_{j=1}^{M/2}\Bigl(\frac N2-j\Bigr)r_{2j},\qquad d_{2k}=k\,d_2-\sum_{j=1}^{k-1}(k-j)\,r_{2j}\quad\text{for }1<k\le M/2.\tag{16}$$
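The closed-form expressions of algorithm 3 admit a direct transcription (an illustrative sketch of ours, again with 1-based indices padded):

```python
def gram_rhs(d):
    """r = G d for the band matrix G described in the text (0-based)."""
    M = len(d)
    return [(3.0 if k == 0 else 2.0) * d[k]
            - (d[k - 2] if k >= 2 else 0.0)
            - (d[k + 2] if k + 2 < M else 0.0) for k in range(M)]

def solve_gram_alg3(r):
    """Algorithm 3: explicit 'double integration' formulas for d_k."""
    M = len(r)
    N = M + 2
    r = [0.0] + list(r)
    d = [0.0] * (M + 1)
    half = M // 2
    d[1] = sum((N / 2 - j) * r[2 * j - 1] for j in range(1, half + 1)) / (M + 1)
    d[2] = 2.0 / N * sum((N / 2 - j) * r[2 * j] for j in range(1, half + 1))
    for k in range(2, half + 1):
        d[2 * k - 1] = (2 * k - 1) * d[1] - sum((k - j) * r[2 * j - 1]
                                                for j in range(1, k))
        d[2 * k] = k * d[2] - sum((k - j) * r[2 * j] for j in range(1, k))
    return d[1:]

d_true = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
d_rec = solve_gram_alg3(gram_rhs(d_true))
```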
In algorithm 3′, the variables d_{M−1} and d_M play the role of the basic variables d_1 and d_2 of algorithm 3, resulting in the relations
$$d_{M-1}=\frac{1}{M+1}\sum_{j=1}^{M/2}(2j-1)\,r_{2j-1},\qquad d_{M-1-2k}=(k+1)\,d_{M-1}+\sum_{j=N/2-k}^{M/2}\Bigl(\frac M2-k-j\Bigr)r_{2j-1}\quad\text{for }1\le k\le M/2-1,$$
$$d_M=\frac2N\sum_{j=1}^{M/2}j\,r_{2j},\qquad d_{M-2k}=(k+1)\,d_M+\sum_{j=N/2-k}^{M/2}\Bigl(\frac M2-k-j\Bigr)r_{2j}\quad\text{for }1\le k\le M/2-1.$$
Rearranging sums in (15) and (16), we obtain alternative equivalent expressions
$$d_{2k+1}=\frac{M-2k}{M}\Bigl(d_1+\sum_{j=2}^{k}(j-1)\,r_{2j-1}\Bigr)+\frac kM\sum_{j=k+1}^{M/2}(N-2j)\,r_{2j-1},$$
$$d_{2k+2}=\frac{M-2k}{M}\Bigl(d_2+\sum_{j=2}^{k}(j-1)\,r_{2j}\Bigr)+\frac kM\sum_{j=k+1}^{M/2}(N-2j)\,r_{2j}.$$
Algorithm 4 exploits them instead of (15) and (16).
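The rearranged expressions of algorithm 4 can be sketched in the same illustrative style (ours); the closing formulas for d_1 and d_2 are those of algorithm 3:

```python
def gram_rhs(d):
    """r = G d for the band matrix G described in the text (0-based)."""
    M = len(d)
    return [(3.0 if k == 0 else 2.0) * d[k]
            - (d[k - 2] if k >= 2 else 0.0)
            - (d[k + 2] if k + 2 < M else 0.0) for k in range(M)]

def solve_gram_alg4(r):
    """Algorithm 4: rearranged sums equivalent to algorithms 3/3'."""
    M = len(r)
    N = M + 2
    r = [0.0] + list(r)
    d = [0.0] * (M + 1)
    half = M // 2
    d[1] = sum((N / 2 - j) * r[2 * j - 1] for j in range(1, half + 1)) / (M + 1)
    d[2] = 2.0 / N * sum((N / 2 - j) * r[2 * j] for j in range(1, half + 1))
    for k in range(1, half):
        head = d[1] + sum((j - 1) * r[2 * j - 1] for j in range(2, k + 1))
        tail = sum((N - 2 * j) * r[2 * j - 1] for j in range(k + 1, half + 1))
        d[2 * k + 1] = (M - 2 * k) / M * head + k / M * tail
        head = d[2] + sum((j - 1) * r[2 * j] for j in range(2, k + 1))
        tail = sum((N - 2 * j) * r[2 * j] for j in range(k + 1, half + 1))
        d[2 * k + 2] = (M - 2 * k) / M * head + k / M * tail
    return d[1:]

d_true = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
d_rec = solve_gram_alg4(gram_rhs(d_true))
```

The round-trip check below confirms that the rearranged expressions reproduce the same solution as the direct formulas, up to round-off.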
In order to assess the performance of the seven algorithms, we have conducted preliminary numerical experiments, tracking their efficiency (execution times) and the quality of the output (numerical errors originating from rounding-off in the course of computations). Regarding a set of numbers d_n = p_n e^{−γ_n n/M}, where p_n and γ_n are pseudorandom numbers uniformly distributed in the intervals [−1, 1] and [5, 8], respectively, as a sample test solution to the problem G d = r, we compute the respective r.h.s., r, and analyse the discrepancies q_n = d̃_n − d_n for the approximate solutions d̃_n obtained by each algorithm. This procedure has been performed for M = 126 (mimicking a poor resolution of the convective dynamo problem discretisation), M = 510 (an intermediate resolution) and M = 2046 (a high resolution) for 10⁶ sets of d_n, comprising 3 × 10⁶ test problems in total. The results are summarised in Table 1 and Table 2, and Figure 2 and Figure 3.
For each resolution parameter (the number M of the sought coefficients in the expansion at hand) and each algorithm, Table 1 presents the ranges (i.e., the maximum and minimum values) of the error norms for 10⁶ approximate solutions to the test problems. Two norms are considered: the maximum norm ‖q‖_max = max_n |q_n| and the so-called "energy" norm ‖q‖_en = (Σ_n |q_n|²)^{1/2}.
Table 2 illustrates the execution times for each algorithm. Since each individual computation of a sample solution is very fast, the individual durations of computations by the same algorithm for the same M vary significantly from run to run. To reduce the influence of this noise, we have employed the following procedure: for a sample problem, we measure the CPU time required for 1000 (identical) applications of the algorithm under consideration; we make 1000 measurements of this time (for test problems for the same resolution parameter M solved by the same algorithm); and the average of the obtained numbers is reported in Table 2. These mean CPU execution times are reproduced to at least 3 significant digits for all the 21 test cases used for measuring the times; thus, the influence of the noise in these values is reduced as desired. In principle, the execution times of algorithm 1 should not differ from those of algorithm 1′; similarly for the algorithm pairs 2 and 2′, and 3 and 3′. The observed variation of these values is related to the details of the application of the compiler optimisation techniques (in this experiment we have used the maximum optimisation). For algorithms 3 and 3′, the variation also reflects the difference in the programming of the computation of the sums involved, aimed at enhancing the accuracy.
Figure 2 illustrates the error distributions among the approximate solutions d̃_n to the sample test problems. Plots for two test problems for each of the three considered values of M are shown. The problem splits into two independent subproblems for the odd- and even-index variables d_k; errors for the odd-index coefficients are shown by continuous lines and for the even-index coefficients by dashed lines. Graph colours indicate by which algorithms the curves have been obtained. We observe that algorithms 2 and 3 yield blatantly high errors; algorithms 2 and 2′ are also imperfect in that they produce higher errors for the higher-index unknowns d̃_n. The errors display a wild spiky behaviour for the low-index coefficients but, except for algorithm 2′, this behaviour becomes more ordered roughly for n > M/6.
To assess statistically how poor or good the considered algorithms are, we present in Figure 3 the distributions of error norms measured in units of the smallest error norm for the given test problem. More precisely, the plots have been constructed by the following procedure: a set of 10⁶ test problems with solutions, pseudorandomly generated as discussed above, has been solved by each of the seven algorithms. For the pth problem and the kth algorithm, the error norms q(p;k) have been determined. We denote the smallest of these seven norms by ω(p) = min_k q(p;k) and define the quantities N_m(k) as the number of cases (among the considered 10⁶ test problems) in which the error norm obtained by algorithm k, relative to the smallest norm ω(p), falls into the mth bin, or, in other words, the ratio q(p;k)/ω(p) satisfies the inequality m − 1 < q(p;k)/ω(p) ≤ m. In particular, N₁(k) is the number of test problems for which algorithm k delivers the smallest norm over all the seven algorithms, i.e., ω(p) = q(p;k). These computations have been performed for both norms, the maximum norm q_max and the energy norm q_en.
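The construction of the histograms N_m(k) amounts to binning the ratios q(p;k)/ω(p); a minimal sketch (in Python, with a hypothetical table q of error norms):

```python
import math

def bin_relative_errors(q):
    """q[p][k]: error norm of algorithm k on test problem p.

    Returns N, where N[k][m] counts the problems for which the ratio
    q(p;k)/omega(p), omega(p) = min_k q(p;k), falls into the m-th bin,
    i.e. m - 1 < q(p;k)/omega(p) <= m.
    """
    n_algorithms = len(q[0])
    N = [{} for _ in range(n_algorithms)]
    for row in q:
        omega = min(row)                    # smallest of the error norms
        for k, norm in enumerate(row):
            m = max(1, math.ceil(norm / omega))
            N[k][m] = N[k].get(m, 0) + 1
    return N
```

N[k].get(1, 0) is then the number of problems on which algorithm k delivers the smallest of all the error norms.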
For each algorithm and both error norms, in Figure 3 we show the quantities N_m(k) versus the bin number m. The data points for six algorithms are joined into plots up to the smallest m such that N_m(k) = 0 (the use of the logarithmic scale for N_m(k) forces us to break the plot at this point). The outlier values N_m(k) > 0 for larger m are shown as disjoint points; all these values are small. The data distribution for algorithm 2 has a different nature, more resembling a cloud; we therefore render the data as two plots (for the odd- and even-index subproblems) only for the small-dimensional problem (M = 126) for N_m(3) < 100. The outliers N_m(k) > 0 for m > 5 × 10³ for M = 126, and for m > 5 × 10⁴ for M = 510 and 2046, are ignored in Figure 3. There are 211 and 1529 such outliers for the odd- and even-index test problems, respectively, for algorithm 3 for N = 128, 18 for the even-index problems for M = 510, and 2265 and 8831 for the odd- and even-index test problems for M = 2046. These statistics demonstrate that the larger the number of unknown coefficients M, the more the output of algorithm 2 is affected by the numerical noise.
The numerical results lead us to the following conclusions:
i.
Except for algorithm 2, in solving the 3 × 10⁶ test problems, for any M and for both error norms, each algorithm delivers approximate solutions d̃_n of any possible relative accuracy, from the best to the worst: i.e., for each algorithm, solutions to some test problems obtained by this algorithm have the smallest, the largest, or any intermediate error norm among the seven error norms obtained for the problem.
ii.
Algorithm 2 yields solutions of blatantly poor accuracy. For M = 126, its output still includes solutions occupying any possible place between the best-accuracy and worst-accuracy solutions. However, for M = 126, it yields the most accurate solution for just three test problems in terms of the maximum error norm, and for just one if the energy error norm is used; these counts grow, via 46 and 15 second-worst-accuracy solutions, to 999,934 and 999,975 least accurate ones. By contrast, for M = 510 and 2046, all its outputs are the least accurate solutions for both error norms, except for a single case of a second-worst-accuracy solution for the maximum norm.
iii.
The error norms (see Table 1) are compatible with the standard “double” (in Fortran speak; 64-bit words) computer precision (the 10⁻¹⁵ “machine epsilon”). The poor performance of algorithm 2, and of the second-worst (accuracy-wise) algorithm 3, stems from the fact that the expressions used in their formulation involve products of d₁ and d₂ with numbers proportional to the indices of the unknown coefficients, which increase up to the large value M.
iv.
We can regard the accuracy results from a different angle. In the worst-case scenario, the norm of a discrepancy in the r.h.s. of the equation Gd = r is amplified by the condition number of the matrix G, which is defined as the product of the norms of the matrices G and G⁻¹. The former norm is obviously of order 1; to estimate the latter, we use (14) and (15) for the r.h.s. where all r_k = 1, and find ‖G⁻¹‖ ≥ ‖d‖_en/‖r‖_en = O(M²). Hence, the condition number of G is (at least) of order M², which is compatible with the accuracy results for algorithm 2 (see the maximum errors in Table 1). We observe that numerical errors are amplified by an algorithm-specific effective condition number [35], which can be much smaller than the worst-case theoretical condition number, provided that a specialised algorithm takes into account particular properties of the problem.
v.
For all the three values of M used, the smallest-error approximate solutions d̃_n are most frequently provided by algorithm 2′, for either error norm: 315,174 and 236,707 times out of 10⁶ for M = 126, 325,841 and 236,544 for M = 510, and 333,062 and 239,009 for M = 2046 (the first number in a pair refers to the maximum error norm and the second one to the energy norm). Accuracy-wise, the shuttle algorithms 1 and 1′ are mutually close and not significantly inferior to algorithm 2′.
vi.
The quantities N_m(k) have their maxima at m = 2 (see Figure 3), i.e., for each algorithm and each M, the error norms fall into the bin 1 < q(p;k)/ω(p) ≤ 2 with the highest probability.
vii.
Solutions d̃_n obtained by all algorithms except algorithms 2 and 3 have significantly larger errors q_n for small n (say, for n < M/6) than for larger n (see Figure 2). By contrast, algorithms 2 and 3 yield maximum errors for intermediate and high n. In addition, the errors in solutions computed by algorithm 3 oscillate wildly, which is not typical of the behaviour of the errors generated by the other six algorithms. Consequently, algorithms 2 and 3 yield approximate solutions in which d̃_n for intermediate and high n have exceptionally high relative errors.
viii.
Algorithms 2 and 2′ are significantly faster than the other five algorithms (see Table 2). Their execution CPU times are mutually close, and both differ markedly from the execution times registered for the other algorithms.
Properties iii and vii render algorithms 2 and 3 inapplicable. None of the seven algorithms is “perfect”, but the compromise between the highest efficiency and accuracy reveals the optimality of algorithm 2′.

3.2. Algorithms for Determining the Coefficients of a Linear Combination of T̃_n(x₃)

The equation for the poloidal potential is obtained by applying the curl twice to (1a) and taking the vertical component of the result:
$$\nabla_h^2\,\frac{\partial\nabla^2P^v}{\partial t}=P\,\nabla_h^2(\nabla^2)^2P^v+F_p^v,\qquad F_p^v=\sigma\nabla_h^2\theta+\mathbf{e}_3\cdot\Bigl(\tau(\mathbf{q}\cdot\nabla)\nabla\times\mathbf{v}-(\nabla\times)^2\bigl(\mathbf{v}\times(\nabla\times\mathbf{v})-\mathbf{b}\times(\nabla\times\mathbf{b})\bigr)\Bigr).$$
The above equation is equivalent to the parabolic equation
$$\nabla^2\frac{\partial P^v}{\partial t}=P(\nabla^2)^2P^v+(\nabla_h^2)^{-1}F_p^v.$$
To apply the inverse Laplacian $(\nabla^2)^{-1}$ is now tempting but unfeasible, since the conditions (6b) do not imply suitable boundary conditions for $\nabla^2P^v$. By (9), the polynomials
$$\widetilde{T}_n(x_3)=(n+1)\bigl(T_{n+4}(x_3)-T_{n+2}(x_3)\bigr)-(n+3)\bigl(T_{n+2}(x_3)-T_n(x_3)\bigr),\qquad n\ge 0,$$
satisfy (6b), and the poloidal potentials can be approximated by the finite series
$$P^v(\mathbf{x},t)=\sum_{|n_1|\le N,\;|n_2|\le N,\;0\le n\le N-5}p^v_{\mathbf{n}}(t)\,e^{i(\alpha_1n_1x_1+\alpha_2n_2x_2)}\,\widetilde{T}_n(x_3),$$
where $\mathbf{n}=(n_1,n_2,n)$.
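One can check directly that the combination T̃_n = (n+1)(T_{n+4} − T_{n+2}) − (n+3)(T_{n+2} − T_n) and its first derivative vanish at x₃ = ±1, i.e., that the basis satisfies the no-slip conditions (6b). A short verification sketch (our own illustration, not part of the simulation code):

```python
def chebyshev_with_derivative(n_max, x):
    """T_k(x) and T_k'(x) for k = 0..n_max via the three-term recurrence."""
    T, dT = [1.0, x], [0.0, 1.0]
    for k in range(1, n_max):
        T.append(2.0 * x * T[k] - T[k - 1])
        dT.append(2.0 * T[k] + 2.0 * x * dT[k] - dT[k - 1])
    return T, dT

def tilde_T(n, x):
    """Value and slope of (n+1)(T_{n+4}-T_{n+2}) - (n+3)(T_{n+2}-T_n)."""
    T, dT = chebyshev_with_derivative(n + 4, x)
    value = (n + 1) * (T[n + 4] - T[n + 2]) - (n + 3) * (T[n + 2] - T[n])
    slope = (n + 1) * (dT[n + 4] - dT[n + 2]) - (n + 3) * (dT[n + 2] - dT[n])
    return value, slope

# no-slip conditions: value and slope vanish on both horizontal boundaries
for n in range(12):
    for x in (-1.0, 1.0):
        value, slope = tilde_T(n, x)
        assert abs(value) < 1e-9 and abs(slope) < 1e-9
```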
Substituting the series into (17), we obtain an equation whose l.h.s. is of the form
$$\sum_{|n_1|\le N,\;|n_2|\le N,\;0\le n\le N-1}\frac{dp^v_{\mathbf{n}}}{dt}\,e^{i(\alpha_1n_1x_1+\alpha_2n_2x_2)}\,\tilde{p}_n(x_3),$$
where $\tilde{p}_n$ is a polynomial of degree $n+4$. As usual, we apply the Fourier transform in the horizontal directions to deduce the equation for each pair of indices $n_1$ and $n_2$, and orthogonally project $\tilde{p}_n$ on the polynomials $\widetilde{T}_m(x_3)$ with the weight $(1-x_3^2)^{-1/2}$. Computing $dp^v_{\mathbf{n}}/dt$ requires inverting the matrices acting on the columnar vectors consisting of the time derivatives $dp^v_{\mathbf{n}}/dt$ for $0\le n\le N-1$ and $n_1$, $n_2$ fixed. As in the case of the toroidal potential of the flow, these are band matrices. To see this, we integrate twice by parts the integral $\int_{-1}^{1}(d^2\widetilde{T}_n/dx_3^2)\,(1-x_3^2)^{-1/2}\,\widetilde{T}_m\,dx_3$ encountered in the projection of the l.h.s. of (17) (noting that the test polynomial $\widetilde{T}_m$ satisfies the boundary conditions (6b)) and apply the relation
$$\frac{d^2}{dx_3^2}\Bigl((1-x_3^2)^{-1/2}\,\widetilde{T}_m\Bigr)=4(m+1)(m+2)(m+3)\,(1-x_3^2)^{-1/2}\,T_{m+2}.$$
It can be proven by using the identities
$$2(x_3^2-1)\frac{dT_m}{dx_3}=m\,(T_{m+1}-T_{m-1}),\qquad \frac{d}{dx_3}\Bigl((1-x_3^2)^{1/2}\frac{dT_m}{dx_3}\Bigr)=-m^2(1-x_3^2)^{-1/2}\,T_m.$$
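Both identities are straightforward to confirm numerically; in the sketch below the first is checked exactly and the second via a central finite difference (our own verification, not part of the simulation code):

```python
import math

def chebyshev_with_derivative(n_max, x):
    """T_k(x) and T_k'(x) for k = 0..n_max via the three-term recurrence."""
    T, dT = [1.0, x], [0.0, 1.0]
    for k in range(1, n_max):
        T.append(2.0 * x * T[k] - T[k - 1])
        dT.append(2.0 * T[k] + 2.0 * x * dT[k] - dT[k - 1])
    return T, dT

def check_identities(m, x, h=1e-5):
    T, dT = chebyshev_with_derivative(m + 1, x)
    # identity 1: 2 (x^2 - 1) T_m'(x) = m (T_{m+1}(x) - T_{m-1}(x))
    lhs = 2.0 * (x * x - 1.0) * dT[m]
    rhs = m * (T[m + 1] - T[m - 1])
    assert abs(lhs - rhs) < 1e-9

    # identity 2: d/dx [ sqrt(1-x^2) T_m'(x) ] = -m^2 (1-x^2)^(-1/2) T_m(x)
    def f(y):
        Ty, dTy = chebyshev_with_derivative(m + 1, y)
        return math.sqrt(1.0 - y * y) * dTy[m]
    derivative = (f(x + h) - f(x - h)) / (2.0 * h)
    target = -m * m * T[m] / math.sqrt(1.0 - x * x)
    assert abs(derivative - target) < 1e-4 * max(1.0, abs(target))

for m in range(1, 8):
    for x in (-0.7, -0.2, 0.3, 0.6):
        check_identities(m, x)
```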
Relation (19) is also useful for computing
$$\int_{-1}^{1}(\nabla^2)^2\bigl(e^{i(\alpha_1n_1x_1+\alpha_2n_2x_2)}\widetilde{T}_n\bigr)\,(1-x_3^2)^{-1/2}\,\widetilde{T}_m\,dx_3=e^{i(\alpha_1n_1x_1+\alpha_2n_2x_2)}\biggl[\frac{\pi\Gamma^4}{2}\Bigl((m+3)(m-3)\,\delta_{n+4\,m}-2\bigl((m+3)m+(m+2)(m-1)\bigr)\delta_{n+2\,m}+\bigl((m+1)^2+(2m+4)^2+(m+3)^2(1+\delta_{0m})\bigr)\delta_{n\,m}-2\bigl((m+1)(m+4)+(m+2)(m+5)\bigr)\delta_{n-2\,m}+(m+1)(m+7)\,\delta_{n-4\,m}\Bigr)+4(m+1)(m+2)(m+3)\Bigl(-\pi\Gamma^2\bigl((m-1)\delta_{n+2\,m}-2(m+2)\delta_{n\,m}+(m+5)\delta_{n-2\,m}\bigr)+\int_{-1}^{1}\frac{d^2\widetilde{T}_n}{dx_3^2}\,(1-x_3^2)^{-1/2}\,T_{m+2}\,dx_3\Bigr)\biggr],$$
where $\Gamma=(\alpha_1^2n_1^2+\alpha_2^2n_2^2)^{1/2}$ and $\delta_{nm}$ is the Kronecker symbol. Differentiation in $x_3$ can now be performed ([9], see also [36]) using the recurrence relation
$$\frac{d}{dx_3}T_{k+1}=\frac{k+1}{k-1}\,\frac{d}{dx_3}T_{k-1}+2(k+1)\,T_k.$$
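This recurrence underlies the standard backward sweep for differentiating a Chebyshev series: if p = Σ_k a_k T_k, the coefficients b_k of p′ satisfy b_{k−1} = b_{k+1} + 2k a_k, with b₀ halved at the end. A minimal sketch:

```python
def chebyshev_differentiate(a):
    """Coefficients b of p'(x), where p(x) = sum_k a[k] T_k(x).

    Backward recurrence b_{k-1} = b_{k+1} + 2 k a_k with b_N = b_{N+1} = 0;
    the k = 0 coefficient is finally halved because <T_0, T_0> differs by
    a factor of two from <T_k, T_k>, k > 0.
    """
    N = len(a) - 1
    b = [0.0] * (N + 2)
    for k in range(N, 0, -1):
        b[k - 1] = b[k + 1] + 2.0 * k * a[k]
    b[0] *= 0.5
    return b[:max(N, 1)]

# T_3' = 12x^2 - 3 = 3 T_0 + 6 T_2,  and  T_4' = 8 T_1 + 8 T_3
assert chebyshev_differentiate([0.0, 0.0, 0.0, 1.0]) == [3.0, 0.0, 6.0]
assert chebyshev_differentiate([0.0, 0.0, 0.0, 0.0, 1.0]) == [0.0, 8.0, 0.0, 8.0]
```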
Thus, the vector d of the time derivatives of the expansion coefficients is a solution to the linear system of equations of the form
$$G\,\mathbf{d}=\mathbf{r},$$
where r is the vector (of length M = N − 4) of the dot products of the r.h.s. of (17) with $\widetilde{T}_m$; both G and r are divided by $\pi L_1L_2\Gamma^2/2$. The matrix has non-zero entries on the diagonal,
$$G_{k,k}=-6k^2-12k-8-9\,\delta_{1k}-2\Gamma_k(k+1),$$
and on four subdiagonals:
$$G_{k,k-4}=-(k-4)(k+2),\qquad G_{k,k-2}=4k^2-8+\Gamma_k(k-2),$$
$$G_{k,k+2}=4(k+2)^2-8+\Gamma_k(k+4),\qquad G_{k,k+4}=-k(k+6),$$
where
$$\Gamma_k=4k(k+1)(k+2)/\Gamma^2.$$
Algorithm 1 is the standard shuttle algorithm for a pentadiagonal matrix [37]. We note that the odd-index subproblem is separated from the even-index one and use the ansatz
$$d_n=A_nd_{n+2}+B_nd_{n+4}+C_n.$$
The first four equations imply
$$A_1=-\frac{G_{1,3}}{G_{1,1}},\qquad B_1=-\frac{G_{1,5}}{G_{1,1}},\qquad C_1=\frac{r_1}{G_{1,1}},$$
$$A_2=-\frac{G_{2,4}}{G_{2,2}},\qquad B_2=-\frac{G_{2,6}}{G_{2,2}},\qquad C_2=\frac{r_2}{G_{2,2}},$$
$$A_3=-\frac{B_1G_{3,1}+G_{3,5}}{A_1G_{3,1}+G_{3,3}},\qquad B_3=-\frac{G_{3,7}}{A_1G_{3,1}+G_{3,3}},\qquad C_3=\frac{r_3-G_{3,1}C_1}{A_1G_{3,1}+G_{3,3}},$$
$$A_4=-\frac{B_2G_{4,2}+G_{4,6}}{A_2G_{4,2}+G_{4,4}},\qquad B_4=-\frac{G_{4,8}}{A_2G_{4,2}+G_{4,4}},\qquad C_4=\frac{r_4-G_{4,2}C_2}{A_2G_{4,2}+G_{4,4}}.$$
The M − 4 equations for 5 ≤ n ≤ M yield the recurrence relations
$$A_n=-\frac{(G_{n,n-4}A_{n-4}+G_{n,n-2})B_{n-2}+G_{n,n+2}}{(G_{n,n-4}A_{n-4}+G_{n,n-2})A_{n-2}+G_{n,n-4}B_{n-4}+G_{n,n}},$$
$$B_n=-\frac{G_{n,n+4}}{(G_{n,n-4}A_{n-4}+G_{n,n-2})A_{n-2}+G_{n,n-4}B_{n-4}+G_{n,n}},$$
$$C_n=\frac{r_n-(G_{n,n-4}A_{n-4}+G_{n,n-2})C_{n-2}-G_{n,n-4}C_{n-4}}{(G_{n,n-4}A_{n-4}+G_{n,n-2})A_{n-2}+G_{n,n-4}B_{n-4}+G_{n,n}},$$
where it is assumed that G_{n,m} = 0 for m > M. This yields d_n = C_n for n = M − 1 and M (note that A_n = B_n = 0 for these n), and we use the recurrence relation (21) to compute sequentially all the unknowns d_n for decreasing n.
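The shuttle just described can be condensed as follows (an illustrative Python version, not the authors' Fortran implementation; with the convention G_{n,m} = 0 outside 1 ≤ m ≤ M, the formulas for the first four equations become the n ≤ 4 cases of the general recurrences; the synthetic test system below is our own):

```python
def shuttle_pentadiagonal(G, r, M):
    """Solve G d = r for a matrix with nonzero entries G[(n, m)],
    m = n, n +/- 2, n +/- 4 (1-based indices), by the shuttle ansatz
    d_n = A_n d_{n+2} + B_n d_{n+4} + C_n."""
    g = lambda n, m: G.get((n, m), 0.0)
    A, B, C = {}, {}, {}
    a = lambda j: A.get(j, 0.0)
    b = lambda j: B.get(j, 0.0)
    c = lambda j: C.get(j, 0.0)
    for n in range(1, M + 1):                      # direct run
        e = g(n, n - 4) * a(n - 4) + g(n, n - 2)   # coefficient of d_{n-2}
        den = e * a(n - 2) + g(n, n - 4) * b(n - 4) + g(n, n)
        A[n] = -(e * b(n - 2) + g(n, n + 2)) / den
        B[n] = -g(n, n + 4) / den
        C[n] = (r[n] - e * c(n - 2) - g(n, n - 4) * c(n - 4)) / den
    d = {}
    for n in range(M, 0, -1):                      # reverse run
        d[n] = A[n] * d.get(n + 2, 0.0) + B[n] * d.get(n + 4, 0.0) + C[n]
    return d

# a synthetic diagonally dominant pentadiagonal system with a known solution
M = 12
G = {}
for k in range(1, M + 1):
    G[(k, k)] = 20.0 + k
    for off in (-4, -2, 2, 4):
        if 1 <= k + off <= M:
            G[(k, k + off)] = 1.0 if abs(off) == 2 else 0.5
exact = {k: 0.1 * k + 1.0 for k in range(1, M + 1)}
r = {k: sum(G.get((k, m), 0.0) * exact[m] for m in range(1, M + 1))
     for k in range(1, M + 1)}
d = shuttle_pentadiagonal(G, r, M)
assert max(abs(d[k] - exact[k]) for k in exact) < 1e-10
```

Since all offsets are even, the odd- and even-index subproblems decouple automatically, as in the text.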
Algorithm 1′ is the same shuttle procedure applied to the system of equations with the numbering of the equations and variables reversed.
Algorithms 2 and 2′ are modifications of the shuttle algorithm based on the observation that the sum of the coefficients in equation number k is zero for 4 < k < M − 3. Hence, in the new variables d′_k = d_k − d_{k−2}, k ≥ 3, the system reduces to a problem with a matrix G′ involving three subdiagonals for 4 < k < M − 3:
$$G'_{k,k-2}=(k-1)^2-9,\qquad G'_{k,k}=-3k^2-2k-\Gamma_k(k-2),$$
$$G'_{k,k+2}=3k^2+10k+8+\Gamma_k(k+4),\qquad G'_{k,k+4}=-k(k+6).$$
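Both the zero-sum property and the entries of the reduced matrix can be cross-checked numerically: away from the boundary rows, each row of G sums to zero, and each entry of G′ is a cumulative column sum of entries of G (a verification sketch; Γ² is an arbitrary positive parameter):

```python
def penta_row(k, Gamma2):
    """Nonzero entries of row k of G (offsets -4, -2, 0, +2, +4), k > 4."""
    Gk = 4.0 * k * (k + 1) * (k + 2) / Gamma2
    return {
        -4: -(k - 4.0) * (k + 2.0),
        -2: 4.0 * k * k - 8.0 + Gk * (k - 2.0),
         0: -6.0 * k * k - 12.0 * k - 8.0 - 2.0 * Gk * (k + 1.0),
         2: 4.0 * (k + 2.0) ** 2 - 8.0 + Gk * (k + 4.0),
         4: -k * (k + 6.0),
    }

def reduced_row(k, Gamma2):
    """Entries of G' in the variables d'_j = d_j - d_{j-2}, 4 < k < M-3."""
    Gk = 4.0 * k * (k + 1) * (k + 2) / Gamma2
    return {
        -2: (k - 1.0) ** 2 - 9.0,
         0: -3.0 * k * k - 2.0 * k - Gk * (k - 2.0),
         2: 3.0 * k * k + 10.0 * k + 8.0 + Gk * (k + 4.0),
         4: -k * (k + 6.0),
    }

for k in range(5, 30):
    for Gamma2 in (0.25, 1.0, 9.0):
        row = penta_row(k, Gamma2)
        assert abs(sum(row.values())) < 1e-6      # the row sums to zero
        red = reduced_row(k, Gamma2)
        for off in (-2, 0, 2, 4):
            # G'_{k,k+off} = sum of G_{k,k+j} over j = off, off+2, ...
            s = sum(row[j] for j in range(off, 6, 2))
            assert abs(red[off] - s) < 1e-6
```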
The problem splits into two independent subproblems for the odd- and even-index variables d′_k. We formulate algorithm 2 for solving the subproblem for the even-index variables; it is straightforward to reformulate it for the odd-index subproblem. We introduce M′ + 1 = M/2 + 1 variables z₁ = d₂, z_k = d_{2k} − d_{2k−2} for 2 ≤ k ≤ M′, and z_{M′+1} = d_{2M′}. They satisfy M′ original equations and the gauge equation
$$z_{M'+1}-\sum_{k=1}^{M'}z_k=0.$$
The first equation is equivalent to z₁ = A₁z₂ + B₁z₃ + C₁. The equations for 2 ≤ k ≤ M′ take the form
$$G'_{2k,2k-2}\,z_{k-1}+G'_{2k,2k}\,z_k+G'_{2k,2k+2}\,z_{k+1}+G'_{2k,2k+4}\,z_{k+2}=r_{2k},$$
whereby
$$z_k=A_kz_{k+1}+B_kz_{k+2}+C_k,$$
$$A_k=-\frac{G'_{2k,2k+2}+B_{k-1}G'_{2k,2k-2}}{G'_{2k,2k}+A_{k-1}G'_{2k,2k-2}},\qquad B_k=-\frac{G'_{2k,2k+4}}{G'_{2k,2k}+A_{k-1}G'_{2k,2k-2}},\qquad C_k=\frac{r_{2k}-C_{k-1}G'_{2k,2k-2}}{G'_{2k,2k}+A_{k-1}G'_{2k,2k-2}},$$
which defines the first, “direct run of the shuttle”. Here, z_{M′+2} = 0. For k = M′, the relation (23) reduces to just
$$z_k=D_kz_{M'+1}+E_k,$$
where D_{M′} = A_{M′} and E_{M′} = C_{M′}. We can now use (23) for k = M′ − 1, ..., 1 to obtain the coefficients in (24) recursively,
$$D_k=A_kD_{k+1}+B_kD_{k+2},\qquad E_k=A_kE_{k+1}+B_kE_{k+2}+C_k$$
(the second, “reverse run of the shuttle”). Simultaneously, we compute the coefficients F_k and R_k in the partial sums
$$z_{M'+1}-\sum_{n=k}^{M'}z_n=F_kz_{M'+1}+R_k.$$
By (22), we then find z_{M′+1} = −R₁/F₁. Now, the third “run of the shuttle” yields
$$d_{2n}=\sum_{k=1}^{n}z_k=\Bigl(z_{M'+1}-\sum_{k=n+1}^{M'}z_k\Bigr)-\Bigl(z_{M'+1}-\sum_{k=1}^{M'}z_k\Bigr)=F_{n+1}z_{M'+1}+R_{n+1}.$$
This algorithm is apparently more involved than the standard shuttle algorithm. However, since it works with the differences z_k = d_{2k} − d_{2k−2}, it may be advantageous in yielding more accurate values of d_{2k}.
Algorithm 2′ amounts to the same computational procedure applied to the system of equations with the numbering of the equations and variables reversed. Thus, we obtain relations similar to (25), in which all d_{2n} are expressed in terms of z₁ instead of z_{M′+1}. Normally, the coefficients d_k tend to zero, whereby z₁ is a priori much larger than z_{M′+1}; for a given precision of computations (in our case, the standard real*8 “double” precision of the floating-point arithmetic), this imposes a tighter constraint on the number of correct digits after the decimal point. Consequently, we may expect algorithm 2′ to yield less accurate values of d_k than algorithm 2.
We have investigated the performance of the four algorithms following the same approach as for the algorithms for determining the coefficients of linear combinations of the polynomials T̄_n: we have synthesised 10⁶ pseudorandom sample test solutions d_n by the same procedure, outlined in Section 3.1, as for the polynomials T̄_n, and analysed the errors q_n = d̃_n − d_n for the approximate solutions obtained by each of the four algorithms for M = 124, 508 and 2044 terms in linear combinations of T̃_n.
For each resolution parameter M and each algorithm, Table 3 shows the ranges (i.e., the maximum and minimum values) of the maximum and energy norms of the approximate solution errors. Table 4 illustrates the execution times for each algorithm, measured by the same procedure (see Section 3.1) as for the polynomials T̄_n. The obtained mean execution times are again accurate to at least three significant digits. We expect the execution times of algorithm 1 to coincide with those of algorithm 1′, and similarly for algorithms 2 and 2′. However, the times presented in Table 4 for algorithms 1 and 2 are slightly smaller than those for algorithms 1′ and 2′, respectively. This difference is insignificant; it merely reflects the way we have programmed the latter algorithms: in our implementation, the order of the equations and unknown variables is reversed at run time, which could be avoided by a more careful programming of the algorithms using index reversion.
Figure 4 illustrates the error distributions among the unknowns for solutions to six sample test problems (plots for two problems for each of the considered M = 124, 508 and 2044 are shown). Figure 5 presents the distributions of error norms measured by the same procedure, discussed in Section 3.1, as for the polynomials T̄_n. For the pth problem and the kth algorithm, we find the error norms q(p;k) and the smallest norm for this problem, ω(p) = min_k q(p;k). We again denote by N_m(k) the number of cases, among the 10⁶ test problems, in which the error norm produced by algorithm k, measured in units of ω(p), falls into the mth bin, i.e., m satisfies the inequality m − 1 < q(p;k)/ω(p) ≤ m. For each algorithm and both error norms (the maximum norm and the energy norm), Figure 5 shows the histogram N_m(k) versus the bin number m. The data points are joined up to the smallest m such that N_m(k) = 0, and the outlier values N_m(k) > 0 (always small) for larger m are shown as disjoint points; unlike in Figure 3, all outliers are shown.
The results reveal the following properties of the algorithms:
i.
None of the four algorithms is “perfect”: for all considered M and for both error norms, in solving the 3 × 10⁶ test problems each algorithm delivers solutions of any possible relative accuracy, from the best to the worst, in the sense that solutions to some test problems obtained by any algorithm have the smallest, the largest, or any intermediate error norm among the four error norms produced by the four algorithms for the problem.
ii.
The quantities N_m(k) have their maxima at m = 2 (see Figure 5), i.e., for each algorithm the error norms fall into the second bin, 1 < q(p;k)/ω(p) ≤ 2, with the highest probability.
iii.
For any M and for both error norms, algorithm 1′ delivers the best-accuracy solutions (it has the maximum N₁(k) over k) more often than the other three. For M = 124 and 508, the least accurate solutions are obtained most frequently by algorithm 2′, and for M = 2044 by algorithm 2.
iv.
The errors q_n for intermediate and high n (say, for n > M/6) in approximate solutions computed by algorithm 2′ are significantly larger than in solutions obtained by any of the other three algorithms.
v.
The execution CPU times of algorithms 1 and 1′ are slightly larger than those of algorithms 2 and 2′.
Property iv implies that algorithm 2′ yields approximate solutions in which d̃_n for intermediate and high n are polluted by high relative errors, rendering this algorithm inapplicable. The smallest execution time and the highest frequency of yielding the smallest errors distinguish algorithm 1′, but algorithms 2 and 2′ follow closely.

4. Discretisation of the Magnetic Field

Here we consider the implications of the boundary conditions (2b) for the magnetic field. Since the magnetic field is solenoidal (1d), it can be decomposed into the sum of the toroidal, $\mathbf{T}^b(\mathbf{x},t)$, poloidal, $\mathbf{P}^b(\mathbf{x},t)$, and mean-field, $\mathbf{M}^b(x_3,t)$, components:
$$\mathbf{b}(\mathbf{x},t)=\mathbf{T}^b(\mathbf{x},t)+\mathbf{P}^b(\mathbf{x},t)+\mathbf{M}^b(x_3,t),$$
$$\mathbf{T}^b(\mathbf{x},t)=\nabla\times\bigl(T^b(\mathbf{x},t)\,\mathbf{e}_3\bigr),$$
$$\mathbf{P}^b(\mathbf{x},t)=\nabla\times\nabla\times\bigl(P^b(\mathbf{x},t)\,\mathbf{e}_3\bigr),$$
$$\mathbf{M}^b(x_3,t)=\bigl\langle(b_1(\mathbf{x},t),\,b_2(\mathbf{x},t),\,0)\bigr\rangle_h,$$
such that
$$\langle T^b\rangle_h=\langle P^b\rangle_h=0.$$
In view of the boundary conditions (2b) and (2c) for the magnetic field, the potentials and the mean-field component satisfy the boundary conditions
$$T^b\big|_{x_3=1}=\frac{\partial T^b}{\partial x_3}\Big|_{x_3=-1}=0,$$
$$\nabla P^b\big|_{x_3=1}=\nabla h\big|_{x_3=1},$$
$$\frac{\partial^2P^b}{\partial x_3^2}\Big|_{x_3=-1}=P^b\big|_{x_3=-1}=0,$$
$$\mathbf{M}^b\big|_{x_3=1}=\frac{\partial\mathbf{M}^b}{\partial x_3}\Big|_{x_3=-1}=0.$$
The poloidal potential is determined from the vertical component of (1b), and the toroidal one from the vertical component of the curl of (1b),
$$\frac{\partial T^b}{\partial t}=\eta\nabla^2T^b+(\nabla_h^2)^{-1}F_t^b,\qquad F_t^b=-\mathbf{e}_3\cdot(\nabla\times)^2(\mathbf{v}\times\mathbf{b}),$$
and the mean-field component of the magnetic field from the equation derived by averaging over horizontal variables of the horizontal component of (1b).
The harmonic function h referred to in (27b) is defined by (2b). The boundary condition (27b) can be further detailed for individual harmonics in the horizontal variables for the poloidal potential expanded as the Fourier series
$$P^b(\mathbf{x},t)=\sum_{|n_1|\le N,\;|n_2|\le N}P^b_{n_1,n_2}(x_3,t)\,e^{i(\alpha_1n_1x_1+\alpha_2n_2x_2)},\qquad \nabla P^b=\sum_{|n_1|\le N,\;|n_2|\le N}\Bigl(i\alpha_1n_1P^b_{n_1,n_2},\;i\alpha_2n_2P^b_{n_1,n_2},\;\frac{\partial P^b_{n_1,n_2}}{\partial x_3}\Bigr)\,e^{i(\alpha_1n_1x_1+\alpha_2n_2x_2)}.$$
At the upper boundary x₃ = 1, the horizontal components of the gradient of $P^b_{n_1,n_2}$ coincide with the gradient of the harmonic function
$$h(\mathbf{x},t)=P^b_{n_1,n_2}(1,t)\,e^{i(\alpha_1n_1x_1+\alpha_2n_2x_2)-(\alpha_1^2n_1^2+\alpha_2^2n_2^2)^{1/2}(x_3-1)},$$
and therefore (27b) is satisfied as long as
$$\frac{\partial P^b_{n_1,n_2}}{\partial x_3}\Big|_{x_3=1}=-(\alpha_1^2n_1^2+\alpha_2^2n_2^2)^{1/2}\,P^b_{n_1,n_2}\Big|_{x_3=1}.$$
In view of (9), the boundary condition (27c) is satisfied by the polynomials
$$\Pi_n=(n+1)(2n+1)\,T_{n-1}+(4n^2+2)\,T_n+(n-1)(2n-1)\,T_{n+1}\qquad\text{for }n\ge 0,$$
and both conditions, (27c) and (29), are satisfied by the polynomials
$$\breve{T}_n(x_3)=\Bigl(\bigl(4(n+1)^4-2(n+1)^2+1+\Gamma(4(n+1)^2+2)\bigr)\Pi_n-\bigl(4n^4-2n^2+1+\Gamma(4n^2+2)\bigr)\Pi_{n+1}\Bigr)\Big/(2n+1)$$
$$=\kappa_{n+1}T_{n-1}+(\kappa_n+\lambda_n)\,T_n-(\kappa_{n+1}-\lambda_n)\,T_{n+1}-\kappa_nT_{n+2}$$
for n ≥ 1, where
$$\kappa_n=n\bigl(4n^4-2n^2+1+\Gamma(4n^2+2)\bigr),\qquad \lambda_n=16n(n+1)\bigl(n(n+1)+1\bigr).$$
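These boundary properties are easy to verify numerically: Π_n and its second derivative vanish at x₃ = −1, and T̆_n additionally satisfies the mixed condition (29) at x₃ = 1. A verification sketch (our own illustration):

```python
def cheb_eval_012(coeffs, x):
    """p(x), p'(x), p''(x) for p = sum_k coeffs[k] T_k (coeffs: dict)."""
    K = max(coeffs)
    T, d1, d2 = [1.0, x], [0.0, 1.0], [0.0, 0.0]
    for k in range(1, K):
        T.append(2.0 * x * T[k] - T[k - 1])
        d1.append(2.0 * T[k] + 2.0 * x * d1[k] - d1[k - 1])
        d2.append(4.0 * d1[k] + 2.0 * x * d2[k] - d2[k - 1])
    p = sum(c * T[k] for k, c in coeffs.items())
    dp = sum(c * d1[k] for k, c in coeffs.items())
    ddp = sum(c * d2[k] for k, c in coeffs.items())
    return p, dp, ddp

def Pi(n):
    """Chebyshev coefficients of Pi_n."""
    return {n - 1: (n + 1) * (2 * n + 1), n: 4 * n * n + 2,
            n + 1: (n - 1) * (2 * n - 1)}

def breve_T(n, Gamma):
    """Chebyshev coefficients of the combination satisfying (27c), (29)."""
    kappa = lambda m: m * (4 * m**4 - 2 * m**2 + 1 + Gamma * (4 * m**2 + 2))
    lam = 16 * n * (n + 1) * (n * (n + 1) + 1)
    return {n - 1: kappa(n + 1), n: kappa(n) + lam,
            n + 1: -(kappa(n + 1) - lam), n + 2: -kappa(n)}

for n in range(1, 9):
    p, _, ddp = cheb_eval_012(Pi(n), -1.0)
    assert abs(p) < 1e-6 and abs(ddp) < 1e-6       # conditions at x_3 = -1
    for Gamma in (0.5, 1.5):
        c = breve_T(n, Gamma)
        p, _, ddp = cheb_eval_012(c, -1.0)
        assert abs(p) < 1e-6 and abs(ddp) < 1e-6
        p1, dp1, _ = cheb_eval_012(c, 1.0)
        # mixed (Robin) condition at x_3 = 1: p' = -Gamma p
        assert abs(dp1 + Gamma * p1) < 1e-6 * max(1.0, abs(p1))
```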
Thus, $P^b_{n_1,n_2}(x_3,t)$ can be expanded as
$$P^b(\mathbf{x},t)=\sum_{|n_1|\le N,\;|n_2|\le N,\;1\le n\le N-2}p^b_{\mathbf{n}}(t)\,e^{i(\alpha_1n_1x_1+\alpha_2n_2x_2)}\,\breve{T}_n(x_3).$$
The vector d of the time derivatives of the coefficients of the expansion is a solution to a linear system of equations of the form Gd = r. Here, G is the heptadiagonal Gram matrix of the basic functions $\breve{T}_m$, and r is the vector (of length M = N − 3) of the dot products of the r.h.s. with $\breve{T}_m$. Unlike in the two linear problems considered above, there is no separation into odd- and even-index subproblems.
The shuttle process can again be applied for solving this problem. We use the ansatz
$$d_n=A_nd_{n+1}+B_nd_{n+2}+C_nd_{n+3}+D_n.$$
The first equation is equivalent to
$$A_1=-\frac{G_{1,2}}{G_{1,1}},\qquad B_1=-\frac{G_{1,3}}{G_{1,1}},\qquad C_1=-\frac{G_{1,4}}{G_{1,1}},\qquad D_1=\frac{r_1}{G_{1,1}}.$$
The second equation implies
$$A_2=-\frac{G_{2,1}B_1+G_{2,3}}{A'_2},\qquad B_2=-\frac{G_{2,1}C_1+G_{2,4}}{A'_2},\qquad C_2=-\frac{G_{2,5}}{A'_2},\qquad D_2=\frac{r_2-G_{2,1}D_1}{A'_2},$$
where
$$A'_2=G_{2,1}A_1+G_{2,2}.$$
From the third equation,
$$A_3=-\frac{A'_3B_2+G_{3,1}C_1+G_{3,4}}{A''_3},\qquad B_3=-\frac{A'_3C_2+G_{3,5}}{A''_3},\qquad C_3=-\frac{G_{3,6}}{A''_3},\qquad D_3=\frac{r_3-G_{3,1}D_1-A'_3D_2}{A''_3},$$
where
$$A'_3=G_{3,1}A_1+G_{3,2},\qquad A''_3=A'_3A_2+G_{3,1}B_1+G_{3,3}.$$
For n ≥ 4, it is convenient to introduce the additional variables
$$A'_n=G_{n,n-3}A_{n-3}+G_{n,n-2},\qquad A''_n=A'_nA_{n-2}+G_{n,n-3}B_{n-3}+G_{n,n-1},$$
$$A'''_n=A''_nA_{n-1}+A'_nB_{n-2}+G_{n,n-3}C_{n-3}+G_{n,n}.$$
For 4 ≤ n ≤ M − 3, in the nth equation we make consecutive substitutions of d_{n−3}, d_{n−2} and d_{n−1} using the above ansatz. This yields the recurrence relations
$$A_n=-\frac{A''_nB_{n-1}+A'_nC_{n-2}+G_{n,n+1}}{A'''_n},\qquad B_n=-\frac{A''_nC_{n-1}+G_{n,n+2}}{A'''_n},$$
$$C_n=-\frac{G_{n,n+3}}{A'''_n},\qquad D_n=\frac{r_n-G_{n,n-3}D_{n-3}-A'_nD_{n-2}-A''_nD_{n-1}}{A'''_n}.$$
Finally, the remaining equations for M − 2 ≤ n ≤ M yield a system of three equations in $d_{M-2}$, $d_{M-1}$ and $d_M$. The equations are
$$\bigl(A''_nA_{n-1}+A'_nB_{n-2}+G_{n,n-3}C_{n-3}+G_{n,n}\bigr)d_{M-2}+\bigl(A''_nB_{n-1}+A'_nC_{n-2}+G_{n,n+1}\bigr)d_{M-1}+\bigl(A''_nC_{n-1}+G_{n,n+2}\bigr)d_M=r_{M-2}-A''_nD_{n-1}-A'_nD_{n-2}-G_{n,n-3}D_{n-3}$$
for n = M − 2;
$$\bigl(A'_nA_{n-2}+G_{n,n-3}B_{n-3}+G_{n,n-1}\bigr)d_{M-2}+\bigl(A'_nB_{n-2}+G_{n,n-3}C_{n-3}+G_{n,n}\bigr)d_{M-1}+\bigl(A'_nC_{n-2}+G_{n,n+1}\bigr)d_M=r_{M-1}-A'_nD_{n-2}-G_{n,n-3}D_{n-3}$$
for n = M − 1;
$$\bigl(G_{n,n-3}A_{n-3}+G_{n,n-2}\bigr)d_{M-2}+\bigl(G_{n,n-3}B_{n-3}+G_{n,n-1}\bigr)d_{M-1}+\bigl(G_{n,n-3}C_{n-3}+G_{n,n}\bigr)d_M=r_M-G_{n,n-3}D_{n-3}$$
for n = M.
The boundary conditions (27a) and (27d) for the toroidal potential and the mean-field component of the magnetic field are satisfied by the polynomials
$$\widehat{T}_n(x_3)=\mu_nT_{n-1}-4n\,T_n-\mu_{n-1}T_{n+1},\qquad\text{where }\mu_n=2n(n+1)+1,$$
and hence we use the expansions
$$T^b(\mathbf{x},t)=\sum_{|n_1|\le N,\;|n_2|\le N,\;1\le n\le N-2}t^b_{\mathbf{n}}(t)\,e^{i(\alpha_1n_1x_1+\alpha_2n_2x_2)}\,\widehat{T}_n(x_3),$$
$$\mathbf{M}^b(x_3,t)=\sum_{0\le n\le N}\mathbf{M}^b_n(t)\,\widehat{T}_n(x_3).$$
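That T̂_n indeed vanishes at x₃ = 1 while its derivative vanishes at x₃ = −1, i.e., that it satisfies (27a) and (27d), can be verified directly (a quick sketch):

```python
def chebyshev_with_derivative(n_max, x):
    """T_k(x) and T_k'(x) for k = 0..n_max via the three-term recurrence."""
    T, dT = [1.0, x], [0.0, 1.0]
    for k in range(1, n_max):
        T.append(2.0 * x * T[k] - T[k - 1])
        dT.append(2.0 * T[k] + 2.0 * x * dT[k] - dT[k - 1])
    return T, dT

def hat_T(n, x):
    """Value and slope of mu_n T_{n-1} - 4n T_n - mu_{n-1} T_{n+1}."""
    mu = lambda m: 2 * m * (m + 1) + 1
    T, dT = chebyshev_with_derivative(n + 1, x)
    value = mu(n) * T[n - 1] - 4 * n * T[n] - mu(n - 1) * T[n + 1]
    slope = mu(n) * dT[n - 1] - 4 * n * dT[n] - mu(n - 1) * dT[n + 1]
    return value, slope

for n in range(1, 11):
    v_top, _ = hat_T(n, 1.0)     # the potential vanishes on the upper boundary
    _, s_bot = hat_T(n, -1.0)    # its x_3-derivative vanishes on the lower one
    assert abs(v_top) < 1e-9 and abs(s_bot) < 1e-9
```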
The coefficients in the expansion of a function in this basis can be determined by solving a linear system of equations with a pentadiagonal Gram matrix G, whose non-zero entries are as follows:
$$G_{k,k-2}=-\mu_{k-3}\mu_k,\qquad G_{k,k-1}=-4(k-1)\mu_k+4k\mu_{k-2},\qquad G_{k,k}=\mu_k^2(1+\delta_{1k})+16k^2+\mu_{k-1}^2,$$
$$G_{k,k+1}=-4k\mu_{k+1}+4(k+1)\mu_{k-1},\qquad G_{k,k+2}=-\mu_{k-1}\mu_{k+2}.$$
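These entries can be cross-checked against Gauss–Chebyshev quadrature of the weighted inner products ⟨T̂_k, T̂_m⟩, assuming, as in the preceding sections, that the common factor π/2 has been taken out of the entries (a verification sketch):

```python
import math

def hat_T_coeffs(n):
    """Chebyshev coefficients of mu_n T_{n-1} - 4n T_n - mu_{n-1} T_{n+1}."""
    mu = lambda m: 2 * m * (m + 1) + 1
    return {n - 1: mu(n), n: -4 * n, n + 1: -mu(n - 1)}

def cheb_val(coeffs, x):
    K = max(coeffs)
    T = [1.0, x]
    for k in range(1, K):
        T.append(2.0 * x * T[k] - T[k - 1])
    return sum(c * T[k] for k, c in coeffs.items())

def inner_product(c1, c2, K=64):
    """<p, q> with weight (1-x^2)^(-1/2); exact for polynomial degree < 2K."""
    s = 0.0
    for j in range(1, K + 1):
        x = math.cos((2 * j - 1) * math.pi / (2 * K))
        s += cheb_val(c1, x) * cheb_val(c2, x)
    return math.pi * s / K

mu = lambda m: 2 * m * (m + 1) + 1
half_pi = math.pi / 2
for k in range(3, 9):
    Tk = hat_T_coeffs(k)
    diag = mu(k) ** 2 * (1 + (1 if k == 1 else 0)) + 16 * k * k + mu(k - 1) ** 2
    assert abs(inner_product(Tk, Tk) - half_pi * diag) < 1e-6 * diag
    up1 = -4 * k * mu(k + 1) + 4 * (k + 1) * mu(k - 1)
    assert abs(inner_product(Tk, hat_T_coeffs(k + 1)) - half_pi * up1) < 1e-6 * abs(up1)
    up2 = -mu(k - 1) * mu(k + 2)
    assert abs(inner_product(Tk, hat_T_coeffs(k + 2)) - half_pi * up2) < 1e-6 * abs(up2)
```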
Apparently, the system has no special properties enabling us to develop specialized methods for solving it numerically; it can be solved by the shuttle method (see Algorithm 1 in Section 3.2).

5. Conclusions

We have presented an original algorithm for the numerical solution of the equations of the convective dynamo in a plane horizontal layer rotating about an inclined axis under geophysically sound boundary conditions. In many respects, our approach does not differ significantly from the general approach followed by other authors. Focusing mainly on the accuracy of computations, especially in the small scales, we have discussed the numerical techniques that we have designed with this purpose in mind.
The Galerkin method is applied for computing the toroidal and poloidal components of the physical vector fields and their mean components. We exploit bases of functions that are products of linear combinations (T̄_n (10), T̃_n (18), T̆_n (30), and T̂_n (32)) of Chebyshev polynomials in the vertical coordinate and Fourier harmonics in the horizontal coordinates. The basic functions involving the polynomials T̄_n are used for the spatial discretisation of the unknown functions that take zero values on the horizontal boundaries x₃ = ±1, namely, in the problem under discussion, the toroidal potential of the flow and its mean fields, and the deviation of temperature from the steady-state linear profile. The poloidal potential of the flow, which, for the no-slip boundary conditions, must vanish on the horizontal boundaries together with its first derivative in the vertical direction, is expanded in a series of the functions involving the polynomial factors T̃_n. The functions involving T̆_n are used for discretising the poloidal potential of the magnetic field in the presence of a dielectric over the fluid layer and an electric conductor below it; while the no-slip boundary conditions for a solenoidal field are often considered in various problems, the latter boundary conditions are rather special. Finally, the toroidal potential of the magnetic field, which, for the boundary conditions considered here, is zero on the upper fluid boundary and whose first derivative in x₃ vanishes on the lower boundary, is expanded in a series of the basic functions involving the polynomials T̂_n.
The basic functions satisfy the boundary conditions, but they are non-orthogonal. Specialised algorithms for determining the coefficients in the expansion of an arbitrary function in a series of the basic functions involving the polynomials T̄_n and T̃_n have been proposed and analysed. They are general-purpose and are applicable whenever the same boundary conditions are imposed. We find that, when the spatial resolution is sufficiently high, our original algorithm 2′ is optimal from the point of view of accuracy and efficiency when the T̄_n are used, and algorithm 1′ (the reverse shuttle algorithm) is optimal for the T̃_n.

Author Contributions

D.T. designed algorithm 4 for finding coefficients in expansions in T̄_n, developed the codes and performed the computations. R.C. wrote auxiliary codes for post-processing and analysis of the results. V.Z. conceived the project, designed algorithms 2, 2′, 3 and 3′ for finding coefficients in expansions in T̄_n and algorithms 2 and 2′ for expansions in T̃_n, prepared the figures, and wrote the paper. All authors were involved in the discussion of the results and the planning of the implementation of the project, and provided critical feedback about the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The project was financed by the grant № 22-17-00114 of the Russian Science Foundation, https://rscf.ru/project/22-17-00114/.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Podvigina, O.M. A route to magnetic field reversals: An example of an ABC-forced non-linear dynamo. Geophys. Astrophys. Fluid Dyn. 2003, 97, 149–174. [Google Scholar] [CrossRef]
  2. Chertovskih, R.; Gama, S.M.A.; Podvigina, O.; Zheligovsky, V. Dependence of magnetic field generation by thermal convection on the rotation rate: A case study. Physica D 2010, 239, 1188–1209. [Google Scholar] [CrossRef]
  3. Jones, C.A.; Roberts, P.H. Convection-driven dynamos in a rotating plane layer. J. Fluid Mech. 2000, 404, 311–343. [Google Scholar] [CrossRef]
  4. Rotvig, J.; Jones, C.A. Rotating convection-driven dynamos at low Ekman number. Phys. Rev. E 2002, 66, 056308. [Google Scholar] [CrossRef]
  5. Eltayeb, I.A.; Rahman, M.M. Model III: Benard convection in the presence of horizontal magnetic field and rotation. Phys. Earth Planet. Inter. 2013, 221, 38–59. [Google Scholar] [CrossRef]
  6. Zheligovsky, V. Space analyticity and bounds for derivatives of solutions to the evolutionary equations of diffusive magnetohydrodynamics. Mathematics 2021, 9, 1789. [Google Scholar] [CrossRef]
  7. Stellmach, S.; Hansen, U. An efficient spectral method for the simulation of dynamos in Cartesian geometry and its implementation on massively parallel computers. Geochem. Geophys. Geosyst. 2008, 9, Q05003. [Google Scholar] [CrossRef]
  8. Fox, L.; Parker, I.B. Chebyshev Polynomials in Numerical Analysis; Oxford University Press: London, UK, 1968. [Google Scholar]
  9. Canuto, C.; Hussaini, M.Y.; Quarteroni, A.; Zang, T.A. Spectral Methods in Fluid Dynamics; Springer: Berlin/Heidelberg, Germany, 1988. [Google Scholar] [CrossRef]
  10. Peyret, R. Spectral Methods for Incompressible Viscous Flow; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar] [CrossRef]
  11. Kleiser, L.; Schumann, U. Treatment of incompressibility and boundary conditions in 3-D numerical spectral simulations of plane channel flows. In Proceedings of the Third GAMM—Conference on Numerical Methods in Fluid Mechanics, Cologne, Germany, 10–12 October 1979; Hirschel, E.H., Ed.; Notes on Numerical Fluid Mechanics; Vieweg+Teubner Verlag: Wiesbaden, Germany, 1980; Volume 2, pp. 165–173. [Google Scholar]
  12. Werne, J. Incompressibility and no-slip boundaries in the Chebyshev-Tau approximation: Correction to Kleiser and Schumann’s influence-matrix solution. J. Comput. Phys. 1995, 120, 260–265. [Google Scholar] [CrossRef]
  13. Meseguer, Á.; Trefethen, L.N. Linearized pipe flow to Reynolds number 10⁷. J. Comput. Phys. 2003, 186, 178–197. [Google Scholar] [CrossRef]
  14. O’Connor, L.; Lecoanet, D.; Anders, E.H. Marginally stable thermal equilibria of Rayleigh–Bénard convection. Phys. Rev. Fluids 2021, 6, 093501. [Google Scholar] [CrossRef]
  15. Reetz, F.; Schneider, T. Invariant states in inclined layer convection. Part 1. Temporal transitions along dynamical connections between invariant states. J. Fluid Mech. 2020, 898, A22. [Google Scholar] [CrossRef]
  16. Wen, B.; Goluskin, D.; LeDuc, M.; Chini, G.P.; Doering, C.R. Steady Rayleigh–Bénard convection between stress-free boundaries. J. Fluid Mech. 2020, 905, R4. [Google Scholar] [CrossRef]
  17. Wen, B.; Goluskin, D.; Doering, C.R. Steady Rayleigh–Bénard convection between no-slip boundaries. J. Fluid Mech. 2022, 933, R4. [Google Scholar] [CrossRef]
  18. Alboussière, T.; Drif, K.; Plunian, F. Dynamo action in sliding plates of anisotropic electrical conductivity. Phys. Rev. E 2020, 101, 033107. [Google Scholar] [CrossRef]
  19. Sreenivasan, B.; Gopinath, V. Confinement of rotating convection by a laterally varying magnetic field. J. Fluid Mech. 2017, 822, 590–616. [Google Scholar] [CrossRef]
  20. Perez-Espejel, D.; Avila, R. Linear stability analysis of the natural convection in inclined rotating parallel plates. Phys. Lett. A 2019, 383, 859–866. [Google Scholar] [CrossRef]
  21. Gottlieb, D.; Orszag, S.A. Numerical Analysis of Spectral Methods: Theory and Applications; SIAM-CBMS: Philadelphia, USA, 1977. [Google Scholar]
  22. Tuckerman, L.S. Divergence-free velocity fields in nonperiodic geometries. J. Comput. Phys. 1989, 80, 403–441. [Google Scholar] [CrossRef]
  23. Cox, S.M.; Matthews, P.C. Exponential time differencing for stiff systems. J. Comput. Phys. 2002, 176, 430–455. [Google Scholar] [CrossRef]
  24. Chertovskih, R.; Rempel, E.L.; Chimanski, E.V. Magnetic field generation by intermittent convection. Phys. Lett. A 2017, 381, 3300–3306. [Google Scholar] [CrossRef]
  25. Chertovskih, R.; Chimanski, E.V.; Rempel, E.L. Route to hyperchaos in Rayleigh–Bénard convection. Europhys. Lett. 2015, 112, 14001. [Google Scholar] [CrossRef]
  26. Jeyabalan, S.R.; Chertovskih, R.; Gama, S.; Zheligovsky, V. Nonlinear large-scale perturbations of steady thermal convective dynamo regimes in a plane layer of electrically conducting fluid rotating about the vertical axis. Mathematics 2022, 10, 2957. [Google Scholar] [CrossRef]
  27. Braginsky, S.I.; Roberts, P.H. Equations governing convection in Earth’s core and the geodynamo. Geophys. Astrophys. Fluid Dynamics 1995, 79, 1–97. [Google Scholar] [CrossRef]
  28. Boyd, J.P. Chebyshev and Fourier Spectral Methods; Dover: Downers Grove, IL, USA, 2001. [Google Scholar]
  29. Schmitt, B.J.; von Wahl, W. Decomposition of solenoidal fields into poloidal fields, toroidal fields and the mean flow. Applications to the Boussinesq–equations. In The Navier–Stokes Equations II—Theory and Numerical Methods. Proceedings, Oberwolfach 1991; Lecture Notes in Math; Heywood, J.G., Masuda, K., Rautmann, R., Solonnikov, S.A., Eds.; Springer: Berlin, Germany, 1992; Volume 1530, pp. 291–305. [Google Scholar] [CrossRef]
  30. Lanczos, C. Applied Analysis; Prentice-Hall: Englewood Cliffs, NJ, USA, 1956. [Google Scholar]
  31. Madabhushi, R.K.; Balachandar, S.; Vanka, S.P. A divergence-free Chebyshev collocation procedure for incompressible flows with two non-periodic directions. J. Comput. Phys. 1993, 105, 199–206. [Google Scholar] [CrossRef]
  32. Rivlin, T.J. Chebyshev Polynomials: From Approximation Theory to Algebra and Number Theory; Wiley-Interscience: Hoboken, NJ, USA, 1974. [Google Scholar]
  33. Shen, J. Efficient spectral-Galerkin method II. Direct solvers of second and fourth order equations by using Chebyshev polynomials. SIAM J. Sci. Comput. 1995, 16, 74–87. [Google Scholar] [CrossRef]
  34. Godunov, S.K.; Ryabenkii, V.S. Difference Schemes. An Introduction to the Underlying Theory, 1st ed.; Elsevier: Amsterdam, The Netherlands, 1987. [Google Scholar]
  35. Li, Z.-C.; Chien, C.-S.; Huang, H.-T. Effective condition number for finite difference method. J. Comp. Appl. Math. 2007, 198, 208–235. [Google Scholar] [CrossRef]
  36. Mason, J.C.; Handscomb, D.C. Chebyshev Polynomials; CRC Press: Boca Raton, FL, USA, 2003. [Google Scholar] [CrossRef]
  37. Il’in, V.P.; Karpov, V.V.; Maslennikov, A.M. Numerical Methods for Solving Structural Mechanics Problems. A Handbook; Vyshaishaya Shkola: Minsk, Russia, 1990. [Google Scholar]
Figure 1. An illustrative sketch of the physical process: a dielectric lies over the upper horizontal boundary (blue) of the fluid layer (white), the lower horizontal boundary (gray) is electrically conducting, and the flow satisfies the no-slip conditions at both boundaries. The layer rotates about an axis whose arbitrary direction is shown by the unit vector q (magenta); the Cartesian coordinate system (orange) co-rotates with the fluid.
Figure 2. Errors (vertical axes) of approximate solutions obtained for sample test problems by algorithms 1 (red), 1′ (green), 2 (blue), 2′ (yellow), 3 (cyan), 3′ (magenta) and 4 (black) for determining the coefficients of a linear combination of T_n(x_3) vs. the coefficient index number (horizontal axes). The panels show the plots for six randomly chosen sample runs for the three values of the resolution parameter M. Continuous lines: odd-index coefficients; dashed lines: even-index coefficients. To fit the panels, the error values have been divided by 10³ for algorithm 2 and by 10² for algorithm 3.
Figure 3. Distribution of error norms of approximate solutions obtained for 10⁶ test problems by algorithms 1 (red), 1′ (green), 2 (blue), 2′ (yellow), 3 (cyan), 3′ (magenta) and 4 (black) for determining the coefficients of a linear combination of T_n(x_3). Horizontal axis: the error norms m relative to the smallest error norm obtained for the test problem by the seven algorithms; vertical axis: the number of cases, N_m^(k), for the k-th algorithm and each error norm. Dashed lines, saturated colours: the maximum error norm; continuous lines, diluted colours: the energy error norm. The panels show the plots for the three values of the resolution parameter M.
Figure 4. Errors (vertical axes) of approximate solutions obtained for sample test problems by algorithms 1 (red), 1′ (green), 2 (blue) and 2′ (black) for determining the coefficients of a linear combination of T_n(x_3) vs. the coefficient index number (horizontal axes). The panels show the plots for six randomly chosen sample runs for the three values of the resolution parameter M. Continuous lines: odd-index coefficients; dashed lines: even-index coefficients.
Figure 5. Distribution of error norms of approximate solutions obtained for 10⁶ sample test problems by algorithms 1 (red), 1′ (green), 2 (blue) and 2′ (black) for determining the coefficients of a linear combination of T_n(x_3). Horizontal axis: the error norms m relative to the smallest error norm obtained for the test problem by the four algorithms; vertical axis: the number of cases, N_m^(k), for the k-th algorithm and each error norm. Dashed lines, saturated colours: the maximum error norm; thin continuous lines, diluted colours: the energy error norm. The panels show the plots for the three values of the resolution parameter M.
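The relative-error distributions of the kind plotted in Figures 3 and 5 can be assembled as in the following sketch. The array shapes, bin edges and random stand-in data are our illustrative assumptions; in the actual experiments the per-problem norms come from the 10⁶ random test problems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in error norms: rows = test problems, columns = algorithms.
# In the experiments these are the maximum (or energy) error norms of the
# competing algorithms on each random test problem.
errors = rng.lognormal(mean=-35.0, sigma=1.0, size=(1000, 4))

# Relative error norm m: each algorithm's norm divided by the smallest
# norm obtained for that test problem by any of the algorithms.
relative = errors / errors.min(axis=1, keepdims=True)

# N_m^(k): for the k-th algorithm, the number of test problems falling
# into each bin of the relative error norm m.
bins = np.geomspace(1.0, relative.max(), 50)
counts = [np.histogram(relative[:, k], bins=bins)[0] for k in range(4)]

# The best algorithm on a given problem has relative norm exactly 1.
assert np.all(relative.min(axis=1) == 1.0)
```

Plotting `counts[k]` against the bin centres for each algorithm k reproduces the type of comparison shown in the figures.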
Table 1. Minimum and maximum error norms detected among solutions to the 10⁶ test problems by the seven algorithms for determination of coefficients of a linear combination of the polynomials T_n.

| M | Algorithm | q_max, min. error | q_max, max. error | q_en, min. error | q_en, max. error |
|---|---|---|---|---|---|
| 126 | 1 | 3.38×10⁻¹⁷ | 3.26×10⁻¹⁵ | 1.29×10⁻¹⁶ | 1.73×10⁻¹⁴ |
| | 1′ | 3.21×10⁻¹⁷ | 3.44×10⁻¹⁵ | 1.16×10⁻¹⁶ | 1.70×10⁻¹⁴ |
| | 2 | 5.97×10⁻¹⁶ | 1.19×10⁻¹² | 3.17×10⁻¹⁵ | 6.53×10⁻¹² |
| | 2′ | 3.12×10⁻¹⁷ | 3.44×10⁻¹⁵ | 1.48×10⁻¹⁶ | 1.72×10⁻¹⁴ |
| | 3 | 2.22×10⁻¹⁶ | 3.82×10⁻¹⁴ | 6.06×10⁻¹⁶ | 1.01×10⁻¹³ |
| | 3′ | 1.11×10⁻¹⁶ | 3.61×10⁻¹⁵ | 3.22×10⁻¹⁶ | 1.62×10⁻¹⁴ |
| | 4 | 8.33×10⁻¹⁷ | 3.77×10⁻¹⁵ | 3.34×10⁻¹⁶ | 1.83×10⁻¹⁴ |
| 510 | 1 | 2.78×10⁻¹⁴ | 2.20×10⁻¹⁴ | 1.75×10⁻¹⁵ | 2.35×10⁻¹³ |
| | 1′ | 2.43×10⁻¹⁴ | 2.28×10⁻¹⁴ | 1.69×10⁻¹⁵ | 2.25×10⁻¹³ |
| | 2 | 7.75×10⁻¹¹ | 3.47×10⁻¹¹ | 8.98×10⁻¹⁴ | 3.63×10⁻¹⁰ |
| | 2′ | 2.78×10⁻¹⁴ | 2.03×10⁻¹⁴ | 1.43×10⁻¹⁵ | 2.14×10⁻¹³ |
| | 3 | 1.42×10⁻¹³ | 1.59×10⁻¹³ | 8.70×10⁻¹⁵ | 7.48×10⁻¹³ |
| | 3′ | 1.11×10⁻¹⁴ | 2.30×10⁻¹⁴ | 5.52×10⁻¹⁵ | 2.28×10⁻¹³ |
| | 4 | 1.19×10⁻¹⁴ | 2.36×10⁻¹⁴ | 5.65×10⁻¹⁵ | 2.24×10⁻¹³ |
| 2046 | 1 | 2.22×10⁻¹⁵ | 1.64×10⁻¹³ | 2.74×10⁻¹⁴ | 3.50×10⁻¹² |
| | 1′ | 2.35×10⁻¹⁵ | 1.75×10⁻¹³ | 2.63×10⁻¹⁴ | 3.57×10⁻¹² |
| | 2 | 2.34×10⁻¹³ | 1.16×10⁻⁹ | 5.23×10⁻¹² | 2.39×10⁻⁸ |
| | 2′ | 2.61×10⁻¹⁵ | 1.72×10⁻¹³ | 2.64×10⁻¹⁴ | 3.32×10⁻¹² |
| | 3 | 1.43×10⁻¹⁴ | 6.29×10⁻¹³ | 1.53×10⁻¹³ | 6.11×10⁻¹² |
| | 3′ | 9.24×10⁻¹⁵ | 1.76×10⁻¹³ | 8.31×10⁻¹⁴ | 3.89×10⁻¹² |
| | 4 | 9.71×10⁻¹⁵ | 1.77×10⁻¹³ | 9.37×10⁻¹⁴ | 3.51×10⁻¹² |
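For reference, the two error norms reported in Table 1 can be evaluated as in the following sketch. We read q_max as the maximum norm and q_en as the energy (Euclidean) norm of the coefficient error; the function name and the toy data are illustrative assumptions, and the precise weighting used in the paper may differ.

```python
import numpy as np

def error_norms(computed, exact):
    """Maximum and energy error norms of a vector of Chebyshev
    coefficients (our reading of q_max and q_en in Table 1)."""
    err = np.asarray(computed) - np.asarray(exact)
    q_max = np.max(np.abs(err))        # maximum (uniform) norm
    q_en = np.sqrt(np.sum(err ** 2))   # energy (Euclidean) norm
    return q_max, q_en

# Toy usage: exponentially decaying coefficients, as encountered for
# spatially well-resolved convective regimes.
n = np.arange(128)
exact = np.exp(-0.1 * n)
computed = exact + 1e-16 * np.cos(n)   # simulated round-off error
q_max, q_en = error_norms(computed, exact)
assert 0 < q_max < 1e-15 and q_en < 1e-14
```

Note that q_en ≥ q_max always holds for a coefficient vector, since the Euclidean norm dominates the uniform norm.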
Table 2. Mean execution times (seconds) of 1000 applications of the seven algorithms for computing coefficients of a linear combination of T_n.

| M | Alg. 1 | Alg. 1′ | Alg. 2 | Alg. 2′ | Alg. 3 | Alg. 3′ | Alg. 4 |
|---|---|---|---|---|---|---|---|
| 126 | 1.32×10⁻³ | 1.39×10⁻³ | 8.94×10⁻⁴ | 9.98×10⁻⁴ | 1.45×10⁻³ | 1.34×10⁻³ | 1.90×10⁻³ |
| 510 | 5.42×10⁻³ | 5.36×10⁻³ | 3.81×10⁻³ | 3.83×10⁻³ | 5.68×10⁻³ | 5.26×10⁻³ | 7.97×10⁻³ |
| 2046 | 2.171×10⁻² | 2.162×10⁻² | 1.506×10⁻² | 1.542×10⁻² | 2.262×10⁻² | 2.109×10⁻² | 3.197×10⁻² |
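Mean execution times of the kind reported in Table 2 can be measured as in the following sketch. The timed kernel here is a tridiagonal "shuttle" (Thomas) solver, included only as an illustrative stand-in for the band solvers actually compared; the matrix entries and the number of repetitions are made up for the example.

```python
import timeit
import numpy as np

def thomas(a, b, c, d):
    """Shuttle (Thomas) method for a tridiagonal system:
    a = sub-diagonal, b = main diagonal, c = super-diagonal, d = rhs."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonally dominant test system of size M (one of the resolutions above).
M = 510
a = np.full(M, 1.0)
b = np.full(M, 4.0)
c = np.full(M, 1.0)
d = np.random.default_rng(1).standard_normal(M)

runs = 100  # Table 2 averages over 1000 applications
mean_time = timeit.timeit(lambda: thomas(a, b, c, d), number=runs) / runs
print(f"mean time per solve: {mean_time:.3e} s")
```

Averaging over many repetitions, as done for the tables, smooths out timer resolution and scheduling noise in the per-call estimate.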
Table 3. Minimum and maximum error norms detected among solutions to 10⁶ test problems (20a) by the four algorithms.

| M | Algorithm | q_max, min. error | q_max, max. error | q_en, min. error | q_en, max. error |
|---|---|---|---|---|---|
| 124 | 1 | 4.51×10⁻¹⁷ | 3.22×10⁻¹⁵ | 1.10×10⁻¹⁶ | 8.47×10⁻¹⁵ |
| | 1′ | 5.55×10⁻¹⁷ | 2.83×10⁻¹⁵ | 1.49×10⁻¹⁶ | 7.83×10⁻¹⁵ |
| | 2 | 5.55×10⁻¹⁷ | 3.55×10⁻¹⁵ | 1.42×10⁻¹⁶ | 1.02×10⁻¹⁴ |
| | 2′ | 8.33×10⁻¹⁷ | 4.22×10⁻¹⁵ | 2.68×10⁻¹⁶ | 1.54×10⁻¹⁴ |
| 508 | 1 | 4.30×10⁻¹⁶ | 1.95×10⁻¹⁴ | 1.81×10⁻¹⁵ | 1.12×10⁻¹³ |
| | 1′ | 2.78×10⁻¹⁶ | 2.03×10⁻¹⁴ | 1.98×10⁻¹⁵ | 1.10×10⁻¹³ |
| | 2 | 3.61×10⁻¹⁶ | 2.21×10⁻¹⁴ | 2.19×10⁻¹⁵ | 1.29×10⁻¹³ |
| | 2′ | 5.00×10⁻¹⁶ | 2.07×10⁻¹⁴ | 2.47×10⁻¹⁵ | 1.27×10⁻¹³ |
| 2044 | 1 | 2.78×10⁻¹⁵ | 1.35×10⁻¹³ | 2.53×10⁻¹⁴ | 1.69×10⁻¹² |
| | 1′ | 2.53×10⁻¹⁵ | 1.33×10⁻¹³ | 2.95×10⁻¹⁴ | 1.62×10⁻¹² |
| | 2 | 2.89×10⁻¹⁵ | 2.55×10⁻¹³ | 3.61×10⁻¹⁴ | 2.10×10⁻¹² |
| | 2′ | 3.16×10⁻¹⁵ | 1.53×10⁻¹³ | 3.32×10⁻¹⁴ | 1.88×10⁻¹² |
Table 4. Mean execution times (seconds) of 1000 applications of the four algorithms for computing coefficients of a linear combination of T_n.

| M | Alg. 1 | Alg. 1′ | Alg. 2 | Alg. 2′ |
|---|---|---|---|---|
| 124 | 1.41×10⁻³ | 1.51×10⁻³ | 1.43×10⁻³ | 1.51×10⁻³ |
| 508 | 6.04×10⁻³ | 6.30×10⁻³ | 5.88×10⁻³ | 6.18×10⁻³ |
| 2044 | 2.506×10⁻² | 2.632×10⁻² | 2.464×10⁻² | 2.595×10⁻² |
Tolmachev, D.; Chertovskih, R.; Zheligovsky, V. Algorithmic Aspects of Simulation of Magnetic Field Generation by Thermal Convection in a Plane Layer of Fluid. Mathematics 2023, 11, 808. https://0-doi-org.brum.beds.ac.uk/10.3390/math11040808
