Article

Stochastic Galerkin Reduced Basis Methods for Parametrized Linear Convection–Diffusion–Reaction Equations

Department of Mathematics, Technical University of Darmstadt, Dolivostraße 15, 64293 Darmstadt, Germany
* Author to whom correspondence should be addressed.
Submission received: 27 May 2021 / Revised: 11 July 2021 / Accepted: 19 July 2021 / Published: 22 July 2021
(This article belongs to the Special Issue Reduced Order Models for Computational Fluid Dynamics)

Abstract:
We consider the estimation of parameter-dependent statistics of functional outputs of steady-state convection–diffusion–reaction equations with parametrized random and deterministic inputs in the framework of linear elliptic partial differential equations. For a given value of the deterministic parameter, a stochastic Galerkin finite element (SGFE) method can estimate the statistical moments of interest of a linear output at the cost of solving a single, large, block-structured linear system of equations. We propose a stochastic Galerkin reduced basis (SGRB) method as a means to lower the computational burden when statistical outputs are required for a large number of deterministic parameter queries. Our working assumption is that we have access to the computational resources necessary to set up such a reduced-order model for a spatial-stochastic weak formulation of the parameter-dependent model equations. In this scenario, the complexity of evaluating the SGRB model for a new value of the deterministic parameter only depends on the reduced dimension. To derive an SGRB model, we project the spatial-stochastic weak solution of a parameter-dependent SGFE model onto a reduced basis generated by a proper orthogonal decomposition (POD) of snapshots of SGFE solutions at representative values of the parameter. We propose residual-corrected estimates of the parameter-dependent expectation and variance of linear functional outputs and provide respective computable error bounds. We test the SGRB method numerically for a convection–diffusion–reaction problem, choosing the convective velocity as a deterministic parameter and the parametrized reactivity or diffusivity field as a random input. Compared to a standard reduced basis model embedded in a Monte Carlo sampling procedure, the SGRB model requires a similar number of reduced basis functions to meet a given tolerance requirement. 
However, only a single run of the SGRB model suffices to estimate a statistical output for a new deterministic parameter value, while the standard reduced basis model must be solved for each Monte Carlo sample.

1. Introduction

Convection–diffusion–reaction equations appear in many fields of science and engineering when modelling flow phenomena. They describe the behavior of a physical quantity of interest in a considered domain under the influence of diffusive and convective effects when production (reaction) is also present. These types of equations are, for example, used to model combustion and chemotaxis, and also appear in the context of the incompressible Navier–Stokes equations when solving the Oseen system or vorticity formulations. An overview of different applications can, for example, be found in [1]. We investigate convection–diffusion–reaction equations in the abstract framework of linear elliptic partial differential equations and revisit the concrete equations in the Results (Section 3). We consider boundary-value problems subject to a finite number of random and deterministic input parameters. Our goal is to compute the parameter-dependent expected value and variance of a functional output of interest. In this context, a reduced basis model provides a computationally inexpensive map between the deterministic input parameters and the corresponding output statistics. Moreover, it provides a computable a posteriori bound for the error between the reduced basis statistical estimate and a corresponding high-fidelity estimate. Reduced basis methods for linear elliptic boundary value problems with affinely parametrized deterministic data are well-understood [2,3,4]. We consider two approaches to include stochastic parameters:
  • The Monte Carlo reduced basis (MCRB) method: The underlying equations are formulated weakly regarding the physical space, that is, the problem depends on both the deterministic and stochastic parameters. Monte Carlo sampling is used to estimate the parameter-dependent expected value and variance of a functional output of interest. An MCRB method for linear elliptic problems with error bounds for the expectation and variance of a linear functional output is derived in [5]. Improved error bounds are provided by [6]. Further advances are the introduction of a weighted error estimator [7] and the embedding in a multi-level procedure [8]. MCRB methods have also been applied to parabolic problems [9], saddle point problems [10], Bayesian inverse problems [11,12,13], and the assessment of rare events [14].
  • Stochastic Galerkin reduced basis (SGRB) method: The underlying equations are formulated weakly regarding the spatial and stochastic dimensions, so that the problem depends on the deterministic parameters only. Parameter-dependent estimates of the expected value and variance of a functional output are obtained by direct integration of the reduced solution. The principle of SGRB methods is introduced in [15] for stochastic time-dependent incompressible Navier–Stokes problems, formulated weakly regarding the spatial and stochastic dimensions, with time acting as a parameter. Applications to linear dynamical systems are studied in [16,17]. SGRB methods can be related to space-time reduced basis methods [18,19], which rely on a weak formulation with respect to space and time. The idea of using SGRB methods to estimate parameter-dependent expected values is discussed in ([20], Section 8.2.1).
The main contributions of the paper can be split into three parts: First, we derive a novel residual-corrected parameter-dependent estimate of the variance of a functional output of interest of a linear elliptic partial differential equation for a Monte Carlo reduced basis approach by carefully combining techniques from [5,6]. In contrast to some other available variance estimators, such as the one described in [21], our estimator converges quadratically in terms of residual norms, see also [6]. Second, we adopt ideas from [22,23] to derive a new computable residual-corrected estimate for the variance of a functional output of interest for an SGRB method in the same setting. Estimates for the variance of functional outputs without residual correction are, for example, considered in [24]. Finally, we show that the SGRB method can yield an online speedup with respect to the MCRB method in the order of magnitude of the number of Monte Carlo samples for two numerical examples.
The creation of a stochastic Galerkin reduced basis model requires an underlying stochastic Galerkin finite element (SGFE) model or an equivalent high-fidelity Galerkin approximation. At least a few snapshots of the SGFE solution for different values of the deterministic parameters are needed to provide a suitable reduced basis. Moreover, evaluations of the SGFE linear and bilinear forms are necessary to derive the respective reduced-order Galerkin model and error bounds. We assume that the resources necessary for these computations are available in the setup phase. We briefly discuss the associated costs in Section 3.3 but refer to [25] for a more detailed analysis of the costs of stochastic Galerkin methods. We point out that the presented SGRB method is not a tool to reduce the computational burden associated with a single solution of an SGFE model, as is the objective in, for example, [26,27] using proper generalized decomposition, in [28,29] using a rational Krylov method and a low-rank tensor approximation, respectively, and in [30,31] using problem-tailored preconditioned iterative solvers. Instead, the SGRB approach targets the situation where a certain number of SGFE simulations are feasible in an expensive pre-processing step to create a reduced-order model which can be evaluated cheaply for any given deterministic parameter. As an extreme scenario, one could imagine having supercomputer resources available in the setup phase, whereas the reduced-order model shall be evaluated on a microcontroller in real time. Therefore, SGRB models can be particularly useful in settings where statistical estimates are required for many values of the deterministic parameters, like in robust optimal control or the real-time exploration of parameter-dependent statistics.
Compared to Monte Carlo reduced basis methods, stochastic Galerkin reduced basis methods can substantially decrease the computational cost of estimating the expectation and variance for a given value of the deterministic parameters. The reason is that MCRB methods require sampling the reduced-order solution, which may lead to a large number of reduced-order simulations for a single query of the deterministic parameters. The same issue arises when a stochastic collocation method is applied instead of Monte Carlo [32]. SGRB methods overcome this drawback by evaluating the stochastic integrals in the offline stage, that is, during the setup of the reduced-order model. As a result, the cost of solving an SGRB model is similar to the cost of a single solution of a comparable RB model within an MC loop. At the same time, the SGRB model directly delivers a statistical estimate without sampling. Therefore, one can expect a speed-up factor in the order of magnitude of the number of MC samples.

2. Materials and Methods

2.1. Monte Carlo Reduced Basis Method

We introduce a complete probability space $(\Theta, \mathcal{F}, P)$ consisting of a set $\Theta$ of elementary events, a $\sigma$-algebra $\mathcal{F}$ on $\Theta$, and a probability measure $P$ on $\mathcal{F}$. For $k = 1, \dots, K$ with $K \in \mathbb{N}$, we define independent random variables $\xi_k \colon \Theta \to \Xi_k$, where $\Xi_k \subseteq \mathbb{R}$ is the image of $\xi_k$. We introduce respective probability distributions $P_{\xi_k}$ and probability densities $p_{\xi_k} \colon \Xi_k \to \mathbb{R}_+$, so that $P_{\xi_k}(B) = \int_B p_{\xi_k}(y)\,dy = P(\xi_k^{-1}(B))$ for all $B$ in the Borel $\sigma$-algebra of $\Xi_k$. We collect the random variables in a random vector $\xi \colon \Theta \to \Xi$, where $\xi = (\xi_1, \dots, \xi_K)^T$ and $\Xi = \Xi_1 \times \dots \times \Xi_K$, with joint distribution $P_\xi$ and density $p_\xi \colon \Xi \to \mathbb{R}_+$. We denote the expectation of any $P_\xi$-measurable function $g \colon \Xi \to \mathbb{R}$ by $E[g] = \int_\Xi g(y)\,dP_\xi(y) = \int_\Xi g(y)\, p_\xi(y)\,dy$. We define the variance $V[g] = E[(g - E[g])^2]$ for any $g \in L_\xi^2(\Xi)$, where $L_\xi^2(\Xi) := \{v \colon \Xi \to \mathbb{R} \mid \int_\Xi v(y)^2\, p_\xi(y)\,dy < \infty\}$.
We introduce a deterministic parameter $\mu \in \mathcal{P}$, where $\mathcal{P} \subset \mathbb{R}^P$ with $P \in \mathbb{N}$ is the parameter domain. The final statistical outputs are scalar-valued, $\mu$-dependent functions representing approximations to the expectation and variance of a linear functional of a PDE solution.
We let $\{\xi^1, \dots, \xi^{N_\xi}\}$ be a set of independent copies of the random vector $\xi$. For some $g \in L_\xi^2(\Xi)$, we define the Monte Carlo estimators
$$\hat{E}[g] := \frac{1}{N_\xi} \sum_{n=1}^{N_\xi} g(\xi^n), \qquad \underline{E}[g] := \frac{1}{N_\xi - 1} \sum_{n=1}^{N_\xi} g(\xi^n), \qquad \hat{V}[g] := \underline{E}[g^2] - \underline{E}[g]\, \hat{E}[g], \tag{1}$$
for which $E[\hat{E}[g]] = E[g]$ and $E[\hat{V}[g]] = V[g]$ hold, that is, the estimators $\hat{E}[g]$ and $\hat{V}[g]$ are unbiased. They are thus estimators of the true expectation and variance of $g$ and converge to the true values almost surely as $N_\xi \to \infty$, see ([33], Section 4.3). These different estimators also appear in the derivation of the output bounds, as explained in Section 2.1.3. We let $\Xi_{N_\xi} := \{y^1, \dots, y^{N_\xi}\}$ be a realization of $\{\xi^1, \dots, \xi^{N_\xi}\}$. A realization of a Monte Carlo estimate is obtained after substituting $\xi^n$ by $y^n$ in (1), assuming that $g(y)$ is computable for any $y \in \Xi_{N_\xi}$. In our approach, we view $N_\xi$ as a discretization parameter and fix $\Xi_{N_\xi}$ before we build the reduced basis model.
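As a minimal illustration (a sketch, not code from the paper; the function name `mc_estimators` is ours), the three estimators in (1) can be evaluated from a vector of output samples as follows:

```python
import numpy as np

def mc_estimators(g_samples):
    # Monte Carlo estimators of Eq. (1):
    #   E_hat[g] = (1/N) * sum g(xi_n)        (unbiased mean estimator)
    #   E_bar[g] = (1/(N-1)) * sum g(xi_n)
    #   V_hat[g] = E_bar[g^2] - E_bar[g] * E_hat[g]
    g = np.asarray(g_samples, dtype=float)
    N = g.size
    E_hat = g.sum() / N
    E_bar = g.sum() / (N - 1)
    V_hat = (g ** 2).sum() / (N - 1) - E_bar * E_hat
    return E_hat, E_bar, V_hat
```

Note that `V_hat` coincides with the standard unbiased sample variance, i.e. `np.var(g, ddof=1)`.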
We focus on the case where g depends on its argument via a discretized PDE problem. In the following, we provide a full-order model (Section 2.1.1) and a reduced-order model (Section 2.1.2) to approximate the solution of the PDE for a given realization of the deterministic and random input parameters. The computation of linear outputs and the corresponding statistics is described in Section 2.1.3, together with the respective error bounds. For the separation of the computation into an expensive offline phase and an inexpensive online phase, we refer to [5,6].

2.1.1. Monte Carlo Finite Element (MCFE) Model

We use a stochastic strong form of a parametrized PDE problem with random data to formulate an MCFE model. Samples of the solution of the PDE problem are characterized by a separable Hilbert space $X$ with inner product $(\cdot,\cdot)_X$ and norm $\|\cdot\|_X$. We introduce a parametrized bilinear form $a(\cdot,\cdot;y,\mu) \colon X \times X \to \mathbb{R}$ and a parametrized linear form $f(\cdot;y,\mu) \colon X \to \mathbb{R}$. This allows a stochastic strong formulation of a linear elliptic PDE problem:
Problem 1
(MCFE model). For given $(y,\mu) \in \Xi_{N_\xi} \times \mathcal{P}$, find
$$u(y,\mu) \in X: \quad a(u(y,\mu), v; y, \mu) = f(v; y, \mu) \qquad \forall v \in X. \tag{2}$$
We assume that $a(\cdot,\cdot;y,\mu)$ is $(y,\mu)$-uniformly bounded and coercive on $X$ and that $f(\cdot;y,\mu)$ is $(y,\mu)$-uniformly bounded on $X$. Then, Problem 1 has a unique solution for any given $(y,\mu) \in \Xi_{N_\xi} \times \mathcal{P}$ according to the Lax–Milgram lemma.
It is a usual premise in reduced basis methods that the discretization space of the underlying full-order model is large enough to capture the solution with sufficient precision, see [4]. The assessment of the error of the full-order solution with respect to the infinite-dimensional exact solution is delegated to the choice of the full-order discretization. In this spirit, we assume $X$ to be a sufficiently well-resolving finite element space with $M_{\mathrm{FE}}$ degrees of freedom. Similarly, we assume $\Xi_{N_\xi}$ to be a large enough sample set so that the error associated with the MC sampling is sufficiently small. Consequently, the errors associated with the MC sampling and the FE discretization are not represented in our error estimates.

2.1.2. Monte Carlo Reduced Basis Model

Let $X_R \subset X$ be an $R$-dimensional subspace. An example is given in Section 2.3.1. A reduced-order model of Problem 1 is:
Problem 2
(MCRB model). For given $(y,\mu) \in \Xi_{N_\xi} \times \mathcal{P}$, find
$$u_R(y,\mu) \in X_R: \quad a(u_R(y,\mu), v; y, \mu) = f(v; y, \mu) \qquad \forall v \in X_R.$$
The unique solvability of Problem 2 is a direct consequence of Problem 1 being well-posed and $X_R$ being a subspace of $X$.

2.1.3. Output Statistics and Error Estimates

We derive residual-corrected RB approximations of MCFE estimates of the expectation and variance of linear outputs of the parametrized PDE problem. We provide error bounds that converge quadratically in terms of residual norms. In particular, we transfer the dual-based error bounds of [6], which consider the true expectation and variance of RB outputs, to the setting of [5], which considers MC approximations of the expectation and variance. This requires an additional dual problem, as well as a careful handling of the different MC discretizations of the expected value, namely $\hat{E}[\,\cdot\,]$ and $\underline{E}[\,\cdot\,]$, according to (1). Throughout this section, we assume the same dependency on the deterministic and stochastic parameters as in Section 2.1.1 and Section 2.1.2, but often omit an explicit notation of the parameter dependence for clarity.
We introduce a parametrized linear form $l(\cdot; y, \mu) \colon X \to \mathbb{R}$, assumed to be $(y,\mu)$-uniformly bounded on $X$. This linear form is the quantity of interest for the MCRB model. We complement Problems 1 and 2 with auxiliary sets of dual problems to allow for residual-corrected output computations by carefully combining techniques presented in [5,6]. These auxiliary problems are essential for the estimates of the output statistics below. For brevity, we provide the definitions and problems all at once as they fit in the same structural framework:
Definition 1.
Subspaces $X_R^{(1)}, \dots, X_R^{(4)} \subset X$ and linear forms $l^{(1)}, \dots, l^{(4)}$ are given by
$$\begin{alignedat}{2}
&X_R^{(1)} \subset X, \quad && l^{(1)}(\cdot) := l(\cdot), \\
&X_R^{(2)} \subset X, \quad && l^{(2)}(\cdot) := 2\big(l(u_R) - r(u_R^{(1)})\big)\, l(\cdot), \\
&X_R^{(3)} \subset X, \quad && l^{(3)}(\cdot) := \hat{E}\big[l(u_R) - r(u_R^{(1)})\big]\, l(\cdot), \\
&X_R^{(4)} \subset X, \quad && l^{(4)}(\cdot) := \underline{E}\big[l(u_R) - r(u_R^{(1)})\big]\, l(\cdot).
\end{alignedat}$$
Problem 3
(dual MCFE models). For given $(y,\mu) \in \Xi_{N_\xi} \times \mathcal{P}$, find
$$u^{(i)} \in X: \quad a(v, u^{(i)}) = -l^{(i)}(v) \qquad \forall v \in X, \quad i = 1, \dots, 4.$$
Problem 4
(dual MCRB models). For given $(y,\mu) \in \Xi_{N_\xi} \times \mathcal{P}$, find
$$u_R^{(i)} \in X_R^{(i)}: \quad a(v, u_R^{(i)}) = -l^{(i)}(v) \qquad \forall v \in X_R^{(i)}, \quad i = 1, \dots, 4.$$
Definition 2.
A primal residual $r$ and dual residuals $r^{(1)}, \dots, r^{(4)}$ are given by
$$r(\cdot) := f(\cdot) - a(u_R, \cdot), \tag{3}$$
$$r^{(i)}(\cdot) := -l^{(i)}(\cdot) - a(\cdot, u_R^{(i)}), \quad i = 1, \dots, 4. \tag{4}$$
The following error bounds require the coercivity factor
$$\alpha(y,\mu) := \inf_{v \in X \setminus \{0\}} \frac{a(v,v;y,\mu)}{\|v\|_X^2} \qquad \forall (y,\mu) \in \Xi_{N_\xi} \times \mathcal{P}, \tag{5}$$
and the dual space $X'$ of $X$, with norm
$$\|F\|_{X'} := \sup_{v \in X \setminus \{0\}} \frac{|F(v)|}{\|v\|_X} \qquad \forall F \in X'. \tag{6}$$
For efficiency, an offline/online decomposition of the dual norms of the functional is possible, and the coercivity factor can be replaced by a strictly positive lower bound [5,6].
First, we provide a bound for the error of the RB solution to Problem 2 with respect to the FE solution to Problem 1, point-wise in Ξ N ξ × P , see ([6], Proposition 3.1):
Theorem 1
(solution bound). For given $(y,\mu) \in \Xi_{N_\xi} \times \mathcal{P}$,
$$\|u - u_R\|_X \le \frac{\|r\|_{X'}}{\alpha}. \tag{7}$$
Proof. 
We define $e := u - u_R$ and derive
$$\alpha \|e\|_X^2 \overset{(5)}{\le} a(e,e) \overset{(2)}{=} f(e) - a(u_R, e) \overset{(3)}{=} r(e) \overset{(6)}{\le} \|r\|_{X'} \|e\|_X.$$
Dividing by $\alpha \|e\|_X$ gives the result.    □
We approximate a parameter-dependent linear output $l(u(y,\mu); y, \mu)$ point-wise with a residual-corrected reduced-order approximation, see ([6], Theorem 3.6):
Theorem 2
(output bound). For given $(y,\mu) \in \Xi_{N_\xi} \times \mathcal{P}$,
$$\big| l(u) - l(u_R) + r(u_R^{(1)}) \big| \le \frac{\|r\|_{X'}\, \|r^{(1)}\|_{X'}}{\alpha}. \tag{8}$$
Proof. 
We define $e := u - u_R$ and reformulate
$$\big| l(u) - l(u_R) + r(u_R^{(1)}) \big| \overset{(3)}{=} \big| l(e) + f(u_R^{(1)}) - a(u_R, u_R^{(1)}) \big| \overset{(2)}{=} \big| l(e) + a(e, u_R^{(1)}) \big| \overset{(4)}{=} \big| r^{(1)}(e) \big| \overset{(6)}{\le} \|r^{(1)}\|_{X'} \|e\|_X \overset{(7)}{\le} \frac{\|r^{(1)}\|_{X'}\, \|r\|_{X'}}{\alpha}.$$
   □
We approximate the MCFE estimate $\hat{E}[l(u(\cdot,\mu); \cdot, \mu)]$ of the parameter-dependent expected linear output as follows, see ([6], Corollary 4.2):
Theorem 3
(expected output bound). For given $\mu \in \mathcal{P}$,
$$\Big| \hat{E}[l(u)] - \hat{E}[l(u_R)] + \hat{E}[r(u_R^{(1)})] \Big| \le \hat{E}\!\left[ \frac{\|r\|_{X'}\, \|r^{(1)}\|_{X'}}{\alpha} \right].$$
Proof. 
By Jensen’s inequality,
$$\Big| \hat{E}[l(u)] - \hat{E}[l(u_R)] + \hat{E}[r(u_R^{(1)})] \Big| \le \hat{E}\Big[ \big| l(u) - l(u_R) + r(u_R^{(1)}) \big| \Big] \overset{(8)}{\le} \hat{E}\!\left[ \frac{\|r\|_{X'}\, \|r^{(1)}\|_{X'}}{\alpha} \right].$$
   □
Finally, we approximate the MCFE estimate $\hat{V}[l(u(\cdot,\mu); \cdot, \mu)]$ of the parameter-dependent variance of the linear output, see ([6], Theorem 4.5):
Theorem 4
(output variance bound). For given $\mu \in \mathcal{P}$,
$$\begin{aligned}
&\Big| \hat{V}[l(u)] - \hat{V}[l(u_R)] + \hat{V}[r(u_R^{(1)})] + \underline{E}[r(u_R^{(2)})] - \underline{E}[r(u_R^{(3)})] - \hat{E}[r(u_R^{(4)})] \Big| \\
&\quad \le \underline{E}\!\left[ \frac{\|r\|_{X'}^2\, \|r^{(1)}\|_{X'}^2}{\alpha^2} \right] + \hat{E}\!\left[ \frac{\|r\|_{X'}\, \|r^{(1)}\|_{X'}}{\alpha} \right] \underline{E}\!\left[ \frac{\|r\|_{X'}\, \|r^{(1)}\|_{X'}}{\alpha} \right] + \underline{E}\!\left[ \Big\| r^{(2)} - r^{(3)} - \frac{N_\xi - 1}{N_\xi}\, r^{(4)} \Big\|_{X'} \frac{\|r\|_{X'}}{\alpha} \right].
\end{aligned}$$
Proof. 
By (1) and with $e := u - u_R$, the definitions in (2), (3) and (4) yield the identities
$$2\big(l(u_R) - r(u_R^{(1)})\big)\, l(e) + r(u_R^{(2)}) = -r^{(2)}(e), \quad \hat{E}\big[l(u_R) - r(u_R^{(1)})\big]\, l(e) + r(u_R^{(3)}) = -r^{(3)}(e), \quad \underline{E}\big[l(u_R) - r(u_R^{(1)})\big]\, l(e) + r(u_R^{(4)}) = -r^{(4)}(e).$$
Expanding the left-hand side of the claim by means of (1) and regrouping with these identities, we estimate
$$\begin{aligned}
&\Big| \underline{E}[l(u)^2] - \underline{E}[l(u)]\,\hat{E}[l(u)] - \underline{E}[l(u_R)^2] + \underline{E}[l(u_R)]\,\hat{E}[l(u_R)] + \underline{E}[r(u_R^{(1)})^2] - \underline{E}[r(u_R^{(1)})]\,\hat{E}[r(u_R^{(1)})] \\
&\qquad + \underline{E}[r(u_R^{(2)})] - \underline{E}[r(u_R^{(3)})] - \hat{E}[r(u_R^{(4)})] \Big| \\
&\quad \le \Big| \underline{E}\big[ \big(l(u) - l(u_R) + r(u_R^{(1)})\big)^2 \big] \Big| + \Big| \underline{E}\big[ l(u) - l(u_R) + r(u_R^{(1)}) \big] \Big|\, \Big| \hat{E}\big[ l(u) - l(u_R) + r(u_R^{(1)}) \big] \Big| + \Big| \underline{E}[r^{(2)}(e)] - \underline{E}[r^{(3)}(e)] - \hat{E}[r^{(4)}(e)] \Big|.
\end{aligned}$$
By (8), the first term is bounded by $\underline{E}\big[\|r\|_{X'}^2 \|r^{(1)}\|_{X'}^2 / \alpha^2\big]$ and the second by $\hat{E}\big[\|r\|_{X'} \|r^{(1)}\|_{X'} / \alpha\big]\, \underline{E}\big[\|r\|_{X'} \|r^{(1)}\|_{X'} / \alpha\big]$, whereas, using $\hat{E}[\,\cdot\,] = \frac{N_\xi - 1}{N_\xi}\, \underline{E}[\,\cdot\,]$,
$$\Big| \underline{E}[r^{(2)}(e)] - \underline{E}[r^{(3)}(e)] - \hat{E}[r^{(4)}(e)] \Big| \overset{(6),(7)}{\le} \underline{E}\!\left[ \Big\| r^{(2)} - r^{(3)} - \frac{N_\xi - 1}{N_\xi}\, r^{(4)} \Big\|_{X'} \frac{\|r\|_{X'}}{\alpha} \right].$$
   □

2.2. Stochastic Galerkin Reduced Basis Method

In the following, we replace the Monte Carlo sampling by a stochastic Galerkin procedure, see [33]. One can also use stochastic collocation methods in this context, see [32]. The benefit of the stochastic Galerkin method is, however, that no additional sampling is necessary in the online phase. We provide a full-order model (Section 2.2.1) and a reduced-order model (Section 2.2.2) to approximate the stochastic solution of the PDE problem for a given value of the deterministic input parameters. The computation of statistics of linear outputs is described in Section 2.2.3, together with the respective error bounds. The computation can be separated into an expensive offline phase and an inexpensive online phase by standard means [3].

2.2.1. Stochastic Galerkin Finite Element Model

We introduce a stochastic Galerkin discretization space $S \subset L_\xi^2(\Xi)$. An example is given in Section 3.2. We define the product space $\bar{X} := S \otimes X$, which is a Hilbert space with inner product $(\cdot,\cdot)_{\bar{X}} := (\cdot,\cdot)_{L_\xi^2(\Xi, X)}$ and norm $\|\cdot\|_{\bar{X}} := \|\cdot\|_{L_\xi^2(\Xi, X)}$ in terms of the Bochner-type space $L_\xi^2(\Xi, X) := \{v \colon \Xi \to X \mid \int_\Xi \|v(y)\|_X^2\, p_\xi(y)\,dy < \infty\}$.
We derive a spatial-stochastic weak formulation by taking the expectation of (2). Defining $\bar{a}(w, v; \mu) := E[a(w, v; \cdot, \mu)]$ and $\bar{f}(v; \mu) := E[f(v; \cdot, \mu)]$ provides:
Problem 5
(SGFE model). For given $\mu \in \mathcal{P}$, find
$$\bar{u}(\mu) \in \bar{X}: \quad \bar{a}(\bar{u}(\mu), v; \mu) = \bar{f}(v; \mu) \qquad \forall v \in \bar{X}. \tag{9}$$
As a consequence of the coercivity and boundedness properties associated with Problem 1, the bilinear form $\bar{a}(\cdot,\cdot;\mu)$ is $\mu$-uniformly bounded and coercive on $\bar{X}$ and the linear form $\bar{f}(\cdot;\mu)$ is $\mu$-uniformly bounded on $\bar{X}$. Therefore, Problem 5 has a unique solution for any given $\mu \in \mathcal{P}$ according to the Lax–Milgram lemma.
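To make the block structure of the SGFE system concrete: for an affine random input, the system matrix is a sum of Kronecker products of stochastic moment matrices and deterministic FE matrices, so one deterministic parameter query amounts to one large linear solve. The following toy sketch is ours, not the paper's implementation; all matrices, sizes, and the load vector are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
M_FE, M_SG = 5, 3                     # toy spatial / stochastic dimensions

A0 = 2.0 * np.eye(M_FE)               # coercive mean-field FE matrix (stand-in)
A1 = 0.1 * np.eye(M_FE)               # FE matrix of the random contribution (stand-in)
G0 = np.eye(M_SG)                     # Gramian of an orthonormal stochastic basis
G1 = rng.standard_normal((M_SG, M_SG))
G1 = 0.05 * (G1 + G1.T)               # small symmetric stochastic moment matrix

# Block-structured SG matrix of size (M_SG*M_FE) x (M_SG*M_FE):
A_bar = np.kron(G0, A0) + np.kron(G1, A1)
# Load acting on the stochastic mean mode only:
f_bar = np.kron(np.eye(M_SG)[:, 0], np.ones(M_FE))
u_bar = np.linalg.solve(A_bar, f_bar)  # one solve per deterministic parameter
```

The dominant diagonal contribution keeps the toy matrix positive definite, mirroring the coercivity assumption above.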

2.2.2. Stochastic Galerkin Reduced Basis Model

We introduce an $R$-dimensional reduced space $\bar{X}_R \subset \bar{X}$. Suitable reduced spaces are provided in Section 2.3.2. A reduced form of Problem 5 is given as follows:
Problem 6
(SGRB model). For given $\mu \in \mathcal{P}$, find
$$\bar{u}_R(\mu) \in \bar{X}_R: \quad \bar{a}(\bar{u}_R(\mu), v; \mu) = \bar{f}(v; \mu) \qquad \forall v \in \bar{X}_R.$$
The subspace property $\bar{X}_R \subset \bar{X}$ and the well-posedness of Problem 5 imply that Problem 6 has a unique solution.

2.2.3. Output Statistics and Error Estimates

We derive SGRB approximations of the expectation and variance of linear outputs together with error bounds with respect to the corresponding SGFE approximations. The variance can be interpreted in terms of quadratic outputs. We follow the ideas of [22,23] to derive the respective error bounds.
We introduce a linear form $\bar{l}(v; \mu) := E[l(v; \cdot, \mu)]$, which is $\mu$-uniformly bounded on $\bar{X}$. This linear form is the quantity of interest in the SGRB model. We complement the primal problem of Section 2.2.1 with corresponding dual problems:
Problem 7
(dual SGFE models). For given $\mu \in \mathcal{P}$, find
$$\begin{alignedat}{3}
&\bar{u}^{(1)} \in \bar{X}: \quad && \bar{a}(v, \bar{u}^{(1)}) = -\bar{l}(v) \quad && \forall v \in \bar{X}, \\
&\bar{u}^{(2)} \in \bar{X}: \quad && \bar{a}(v, \bar{u}^{(2)}) = -E\big[l(\bar{u} + \bar{u}_R)\, l(v)\big] \quad && \forall v \in \bar{X}, \\
&\bar{u}^{(3)} \in \bar{X}: \quad && \bar{a}(v, \bar{u}^{(3)}) = -\big(\bar{l}(\bar{u}) + \bar{l}(\bar{u}_R) - 2\bar{r}(\bar{u}_R^{(1)})\big)\, \bar{l}(v) \quad && \forall v \in \bar{X}.
\end{alignedat}$$
Letting $\bar{X}_R^{(1)}, \bar{X}_R^{(2)}, \bar{X}_R^{(3)} \subset \bar{X}$ be $R$-dimensional subspaces, we introduce the following set of reduced dual equations:
Problem 8
(dual SGRB models). For given $\mu \in \mathcal{P}$, find
$$\begin{alignedat}{3}
&\bar{u}_R^{(1)} \in \bar{X}_R^{(1)}: \quad && \bar{a}(v, \bar{u}_R^{(1)}) = -\bar{l}(v) \quad && \forall v \in \bar{X}_R^{(1)}, \\
&\bar{u}_R^{(2)} \in \bar{X}_R^{(2)}: \quad && \bar{a}(v, \bar{u}_R^{(2)}) = -2E\big[l(\bar{u}_R)\, l(v)\big] \quad && \forall v \in \bar{X}_R^{(2)}, \\
&\bar{u}_R^{(3)} \in \bar{X}_R^{(3)}: \quad && \bar{a}(v, \bar{u}_R^{(3)}) = -2\big(\bar{l}(\bar{u}_R) - \bar{r}(\bar{u}_R^{(1)})\big)\, \bar{l}(v) \quad && \forall v \in \bar{X}_R^{(3)}.
\end{alignedat}$$
The error bounds will be provided in terms of dual norms of residuals:
Definition 3.
Based on Problems 5, 6, 7 and 8, we define the primal residual
$$\bar{r}(\cdot) := \bar{f}(\cdot) - \bar{a}(\bar{u}_R, \cdot) \tag{10}$$
and the dual residuals
$$\bar{r}^{(1)}(\cdot) := -\bar{l}(\cdot) - \bar{a}(\cdot, \bar{u}_R^{(1)}), \tag{11}$$
$$\bar{r}^{(2)}(\cdot) := -2E\big[l(\bar{u}_R)\, l(\cdot)\big] - \bar{a}(\cdot, \bar{u}_R^{(2)}), \tag{12}$$
$$\bar{r}^{(3)}(\cdot) := -2\big(\bar{l}(\bar{u}_R) - \bar{r}(\bar{u}_R^{(1)})\big)\, \bar{l}(\cdot) - \bar{a}(\cdot, \bar{u}_R^{(3)}). \tag{13}$$
We define, for any $\mu \in \mathcal{P}$, the coercivity factor
$$\bar{\alpha}(\mu) := \inf_{v \in \bar{X} \setminus \{0\}} \frac{\bar{a}(v,v;\mu)}{\|v\|_{\bar{X}}^2}$$
and the continuity factor
$$\bar{\gamma}^{(2)}(\mu) := \sup_{w, v \in \bar{X} \setminus \{0\}} \frac{E\big[l(w; \cdot, \mu)\, l(v; \cdot, \mu)\big]}{\|w\|_{\bar{X}}\, \|v\|_{\bar{X}}}. \tag{14}$$
It is possible to replace these factors by efficiently computable lower and upper bounds, respectively [22]. We introduce the dual space $\bar{X}'$ of $\bar{X}$ with norm
$$\|F\|_{\bar{X}'} := \sup_{v \in \bar{X} \setminus \{0\}} \frac{|F(v)|}{\|v\|_{\bar{X}}} \qquad \forall F \in \bar{X}'. \tag{15}$$
We can derive the following bound for the error of the reduced-order approximation of the solution:
Theorem 5
(solution bound). For given $\mu \in \mathcal{P}$,
$$\|\bar{u} - \bar{u}_R\|_{\bar{X}} \le \frac{\|\bar{r}\|_{\bar{X}'}}{\bar{\alpha}}. \tag{16}$$
Proof. 
Analogous to the proof of Theorem 1.    □
In view of the definition of the weak linear form l ¯ , we obtain the following bounds for the expected value and variance of the output:
Theorem 6
(expected output bound). For given $\mu \in \mathcal{P}$,
$$\Big| E[l(\bar{u})] - \bar{l}(\bar{u}_R) + \bar{r}(\bar{u}_R^{(1)}) \Big| \le \frac{\|\bar{r}\|_{\bar{X}'}\, \|\bar{r}^{(1)}\|_{\bar{X}'}}{\bar{\alpha}}. \tag{17}$$
Proof. 
Analogous to the proof of Theorem 2.    □
Theorem 7
(output variance bound). For given $\mu \in \mathcal{P}$,
$$\Big| V[l(\bar{u})] - V[l(\bar{u}_R)] - \bar{r}(\bar{u}_R^{(1)})^2 + \bar{r}(\bar{u}_R^{(2)}) - \bar{r}(\bar{u}_R^{(3)}) \Big| \le \bar{\gamma}^{(2)} \frac{\|\bar{r}\|_{\bar{X}'}^2}{\bar{\alpha}^2} + \frac{\|\bar{r}\|_{\bar{X}'}^2\, \|\bar{r}^{(1)}\|_{\bar{X}'}^2}{\bar{\alpha}^2} + \|\bar{r}^{(2)} - \bar{r}^{(3)}\|_{\bar{X}'} \frac{\|\bar{r}\|_{\bar{X}'}}{\bar{\alpha}}. \tag{18}$$
Proof. 
Setting $e := \bar{u} - \bar{u}_R$, we rewrite
$$\bar{r}(\bar{u}_R^{(2)}) \overset{(9),(10)}{=} \bar{a}(e, \bar{u}_R^{(2)}) \overset{(12)}{=} -2E\big[l(\bar{u}_R)\, l(e)\big] - \bar{r}^{(2)}(e), \tag{19}$$
$$\bar{r}(\bar{u}_R^{(3)}) \overset{(9),(10)}{=} \bar{a}(e, \bar{u}_R^{(3)}) \overset{(13)}{=} -2\big(\bar{l}(\bar{u}_R) - \bar{r}(\bar{u}_R^{(1)})\big)\, \bar{l}(e) - \bar{r}^{(3)}(e). \tag{20}$$
After expressing the variances in terms of expectations, the left-hand side of (18) becomes
$$\Big| \underbrace{E[l(\bar{u})^2] - E[l(\bar{u}_R)^2] + \bar{r}(\bar{u}_R^{(2)})}_{=\, E[l(e)^2] - \bar{r}^{(2)}(e) \text{ by } (19)} \; \underbrace{-\, \bar{l}(\bar{u})^2 + \bar{l}(\bar{u}_R)^2 - \bar{r}(\bar{u}_R^{(1)})^2 - \bar{r}(\bar{u}_R^{(3)})}_{=\, \bar{r}^{(3)}(e) - (\bar{l}(e) + \bar{r}(\bar{u}_R^{(1)}))^2 \text{ by } (20)} \Big| \le \big| E[l(e)^2] \big| + \big( \bar{l}(e) + \bar{r}(\bar{u}_R^{(1)}) \big)^2 + \big| \bar{r}^{(2)}(e) - \bar{r}^{(3)}(e) \big|.$$
The final result follows from the bounds
$$\big| E[l(e)^2] \big| \overset{(14)}{\le} \bar{\gamma}^{(2)} \|e\|_{\bar{X}}^2 \overset{(16)}{\le} \bar{\gamma}^{(2)} \frac{\|\bar{r}\|_{\bar{X}'}^2}{\bar{\alpha}^2}, \qquad \big( \bar{l}(e) + \bar{r}(\bar{u}_R^{(1)}) \big)^2 \overset{(17)}{\le} \frac{\|\bar{r}\|_{\bar{X}'}^2\, \|\bar{r}^{(1)}\|_{\bar{X}'}^2}{\bar{\alpha}^2},$$
$$\big| \bar{r}^{(2)}(e) - \bar{r}^{(3)}(e) \big| \overset{(15)}{\le} \|\bar{r}^{(2)} - \bar{r}^{(3)}\|_{\bar{X}'}\, \|e\|_{\bar{X}} \overset{(16)}{\le} \|\bar{r}^{(2)} - \bar{r}^{(3)}\|_{\bar{X}'} \frac{\|\bar{r}\|_{\bar{X}'}}{\bar{\alpha}}.$$
   □
A different estimate for the expectation and variance of a functional output of interest for an SGRB method without residual correction can be found in [24].

2.3. Reduced Spaces

We introduce candidate reduced spaces X R and X ¯ R to be used in Problems 2 and 6, respectively. For simplicity, we focus on spaces generated by snapshot-based proper orthogonal decomposition (POD), but the theory of Section 2.1 and Section 2.2 does not depend on this choice. For instance, the availability of computable error bounds also allows the use of greedy snapshot sampling [5,6,34].
The procedures described in this section can also be applied to create the dual reduced spaces encountered in Problems  4 and 8, by applying the POD to snapshots of the corresponding discretized dual solutions. The creation of the dual reduced spaces must follow a certain sequence because some of the discretized dual problems contain reduced primal and dual solutions on their right-hand sides. For instance, creating X ¯ ( 3 ) R from samples of u ¯ ( 3 ) requires the availability of X ¯ R and X ¯ ( 1 ) R due to the right-hand side of the discretized dual problem that defines u ¯ ( 3 ) , see Problem  7.
We motivate the POD spaces by corresponding continuous minimization problems. We discretize these minimization problems using quadrature ([4], Section 6.4 and 6.5). The discrete minimization problems can be solved using a weighted singular value decomposition of a snapshot matrix, based on [35]. Algorithm 1 provides a definition of the POD algorithm in terms of linear algebra, assuming N snapshot vectors of length M. The algorithm is formulated in a way that allows for general snapshot weighting and the maximum possible number of output vectors. Actual implementations can benefit from using a simpler (diagonal) snapshot weighting matrix and assuming a small number of output vectors. Section 2.3.1 and Section 2.3.2 describe how to generate the input to the algorithm in order to compute POD basis vectors from available FE or SGFE snapshots.
Algorithm 1 Proper orthogonal decomposition.
  • Input: Snapshot matrix $U = (U_1, \dots, U_N) \in \mathbb{R}^{M \times N}$; symmetric positive definite weighting matrices $S \in \mathbb{R}^{M \times M}$ and $W \in \mathbb{R}^{N \times N}$.
  • Output: POD basis matrix $\Phi = (\Phi_1, \dots, \Phi_M) \in \mathbb{R}^{M \times M}$.
 1: Compute the Cholesky factor $\tilde{S}$ such that $S = \tilde{S}^T \tilde{S}$.
 2: Compute the Cholesky factor $\tilde{W}$ such that $W = \tilde{W}^T \tilde{W}$.
 3: Compute the singular value decomposition $\tilde{\Phi} \Sigma \tilde{V}^T$ of $\tilde{S} U \tilde{W}^T$.
 4: Solve $\tilde{S} \Phi = \tilde{\Phi}$ for $\Phi$.
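A compact NumPy realization of Algorithm 1 might look as follows (an illustrative sketch; for brevity it uses the economy-size SVD, i.e. it returns only the leading min(M, N) basis vectors, which is the simplification for small output counts mentioned above):

```python
import numpy as np

def pod(U, S, W):
    # Algorithm 1 via weighted SVD; U: M x N snapshot matrix,
    # S, W: symmetric positive definite weighting matrices.
    S_t = np.linalg.cholesky(S).T            # step 1: S = S_t^T S_t (upper factor)
    W_t = np.linalg.cholesky(W).T            # step 2: W = W_t^T W_t
    # step 3: economy-size SVD of the weighted snapshot matrix
    Phi_t, sigma, _ = np.linalg.svd(S_t @ U @ W_t.T, full_matrices=False)
    Phi = np.linalg.solve(S_t, Phi_t)        # step 4: S_t Phi = Phi_t
    return Phi, sigma
```

The columns of `Phi` are orthonormal in the S-inner product, i.e. `Phi.T @ S @ Phi` is the identity, and `sigma` holds the POD singular values in descending order.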

2.3.1. Spatial POD

We provide a POD of snapshots of the solution $u$ of Problem 1, resulting in a spatial POD space $X_R = \operatorname{span}(\varphi_1, \dots, \varphi_R)$ for $R \le M_{\mathrm{FE}}$. One can define a POD basis as a set of functions which solve the continuous minimization problems
$$\min_{\varphi_1, \dots, \varphi_R \in X} \int_{\mathcal{P}} \int_{\Xi} \Big\| u(y,\mu) - \sum_{r=1}^{R} \big(u(y,\mu), \varphi_r\big)_X\, \varphi_r \Big\|_X^2 \, dP_\xi(y)\, d\mu, \qquad (\varphi_i, \varphi_j)_X = \delta_{ij}, \tag{23}$$
for $R = 1, \dots, M_{\mathrm{FE}}$. We approximate the double integral using Monte Carlo quadrature.
Concerning the Monte Carlo quadrature of the $\Xi$-integral on the one hand, we already know that the reduced-order model will only be evaluated at the random parameter points $y^1, \dots, y^{N_\xi}$, because the Monte Carlo discretization of the final stochastic estimates is fixed from the beginning. We use exactly these points for the discretization of the POD minimization problem, too, because with this choice, our reduced basis will be optimal in a mean-square sense with respect to approximating the finite element solution at $y^1, \dots, y^{N_\xi}$.
Concerning the Monte Carlo quadrature of the $\mathcal{P}$-integral on the other hand, our model should be able to estimate the output statistics reasonably well at any point in $\mathcal{P}$. Having no further information about how the reduced-order model will ultimately be used, we discretize the deterministic parameter domain using a training set $\mathcal{P}_{N_\mu^{\mathrm{train}}} = \{\mu^1, \dots, \mu^{N_\mu^{\mathrm{train}}}\}$, with $\mu^1, \dots, \mu^{N_\mu^{\mathrm{train}}}$ distributed independently and uniformly over $\mathcal{P}$. When testing the performance of the resulting reduced-order model, we use a different set of points in the parameter domain in order to verify the robustness of the model with respect to the deterministic parameter.
The Monte Carlo quadrature of the double integral in (23) finally results in a set of discretized POD minimization problems
$$\min_{\varphi_1, \dots, \varphi_R \in X} \frac{1}{N_\xi N_\mu^{\mathrm{train}}} \sum_{i=1}^{N_\xi} \sum_{j=1}^{N_\mu^{\mathrm{train}}} \Big\| u(y^i, \mu^j) - \sum_{r=1}^{R} \big(u(y^i, \mu^j), \varphi_r\big)_X\, \varphi_r \Big\|_X^2, \qquad (\varphi_i, \varphi_j)_X = \delta_{ij}, \tag{24}$$
for $R = 1, \dots, M_{\mathrm{FE}}$. For the POD computation in terms of Algorithm 1, we set $N = N_\xi N_\mu^{\mathrm{train}}$ and $M = M_{\mathrm{FE}}$ and let the columns $U_n \in \mathbb{R}^{M}$, $n = 1, \dots, N$, be the coefficient vectors corresponding to the expansions of the snapshots $u(y^i, \mu^j) \in X$, $i = 1, \dots, N_\xi$, $j = 1, \dots, N_\mu^{\mathrm{train}}$, in a basis of $X$. By substituting the finite element basis expansions into (24), we find
$$S = M_X, \qquad U = (U_1, \dots, U_N), \qquad W = \frac{1}{N} I_N,$$
where $M_X$ denotes the mass matrix corresponding to $X$ and $I_N$ denotes the $N \times N$ identity matrix. The output of Algorithm 1 is a POD basis matrix $\Phi = (\Phi_1, \dots, \Phi_{M_{\mathrm{FE}}}) \in \mathbb{R}^{M_{\mathrm{FE}} \times M_{\mathrm{FE}}}$. The $i$-th POD basis function $\varphi_i$ can be obtained from the $i$-th POD basis vector $\Phi_i$ via an expansion in the available basis of $X$, using the elements of $\Phi_i$ as expansion coefficients. Finally, an $R$-dimensional POD reduced space is given by $X_R = \operatorname{span}(\varphi_1, \dots, \varphi_R)$ for any $R = 1, \dots, M_{\mathrm{FE}}$, and the trivial space $X_0 \subset X$ contains only the zero function.

2.3.2. Spatial-Stochastic POD

We introduce a reduced basis space that can be used to derive a stochastic Galerkin reduced basis method. It employs a simultaneous reduction of the spatial and stochastic degrees of freedom of a stochastic Galerkin finite element discretization.
Spatial-stochastic POD reduced basis functions can be defined as solutions to a set of P -continuous POD minimization problems
\[
\min_{\bar\varphi_1,\dots,\bar\varphi_R \in \bar X} \; \int_{P} \Big\| \bar u(\mu) - \sum_{r=1}^{R} \big( \bar u(\mu), \bar\varphi_r \big)_{\bar X} \, \bar\varphi_r \Big\|_{\bar X}^2 \, d\mu, \qquad (\bar\varphi_i, \bar\varphi_j)_{\bar X} = \delta_{ij},
\]
for R = 1 , , M FE M SG . A Monte Carlo quadrature of the P -integral raises the issue of choosing the sample points. Using the same training set P N μ train = { μ 1 , , μ N μ train } as in Section 2.3.1 leads to discrete POD minimization problems
\[
\min_{\bar\varphi_1,\dots,\bar\varphi_R \in \bar X} \; \frac{1}{N_\mu^{\mathrm{train}}} \sum_{n=1}^{N_\mu^{\mathrm{train}}} \Big\| \bar u(\mu_n) - \sum_{r=1}^{R} \big( \bar u(\mu_n), \bar\varphi_r \big)_{\bar X} \, \bar\varphi_r \Big\|_{\bar X}^2, \qquad (\bar\varphi_i, \bar\varphi_j)_{\bar X} = \delta_{ij}
\]
for R = 1 , , M FE M SG . Regarding the POD computation in terms of Algorithm 1, we set N = N μ train and M = M FE M SG and denote the stochastic Galerkin finite element coefficient vector of u ¯ ( μ n ) by U n . By substituting the stochastic Galerkin finite element basis expansions into (25), we obtain
\[
S = M_S \otimes M_X, \qquad U = \big( U_1, \dots, U_{N_\mu^{\mathrm{train}}} \big), \qquad W = \tfrac{1}{N_\mu^{\mathrm{train}}} I_{N_\mu^{\mathrm{train}}},
\]
where I_{N_μ^train} is the N_μ^train × N_μ^train identity matrix and M_S is the mass matrix containing the mutual inner products of the basis functions of the stochastic space S. In view of Algorithm 1, the i-th POD basis function φ̄_i can be obtained from the i-th POD basis vector Φ_i via an expansion in the available basis of X̄, using the elements of Φ_i as expansion coefficients. Finally, an R-dimensional POD reduced space is given by X̄_R = span(φ̄_1, …, φ̄_R) for any R = 1, …, M_FE M_SG, and the trivial space X̄_0 ⊂ X̄ contains only the zero function.
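The only structural difference from the spatial POD of Section 2.3.1 is the Kronecker inner-product matrix S = M_S ⊗ M_X. The following NumPy sketch, with toy dimensions and hypothetical SPD stand-ins for the two mass matrices, shows this structure together with the vec identity that allows applying S without ever forming the Kronecker product explicitly:

```python
import numpy as np

# Inner-product matrix for the spatial-stochastic space: if M_S and M_X are
# the stochastic and spatial mass matrices, the SGFE coefficient vectors
# live in R^(M_SG * M_FE) with inner product induced by kron(M_S, M_X).
M_SG, M_FE = 4, 6                      # toy dimensions (real ones are much larger)
rng = np.random.default_rng(1)
A = rng.standard_normal((M_SG, M_SG)); M_S = A @ A.T + M_SG * np.eye(M_SG)
B = rng.standard_normal((M_FE, M_FE)); M_X = B @ B.T + M_FE * np.eye(M_FE)
S_bar = np.kron(M_S, M_X)              # inner-product matrix of the product space

# Kronecker identity: (M_S (x) M_X) vec(V) = vec(M_X @ V @ M_S.T),
# so a real implementation never assembles S_bar explicitly.
V = rng.standard_normal((M_FE, M_SG))            # coefficient "matrix" of one snapshot
lhs = S_bar @ V.reshape(-1, order="F")           # vec stacks columns
rhs = (M_X @ V @ M_S.T).reshape(-1, order="F")
assert np.allclose(lhs, rhs)
```

Exploiting this identity keeps the cost of the inner products in the POD computation proportional to M_FE M_SG rather than (M_FE M_SG)².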

3. Results

We assess the provided error bounds and compare the accuracy of the MCRB and SGRB models in terms of computing the expectation and variance of a linear output for two different convection–diffusion–reaction problems.

3.1. Convection–Diffusion–Reaction Model

Let y = (y_1, …, y_K)^T ∈ Ξ ⊂ ℝ^K denote the value of a sample of a random parameter vector, μ = (μ_1, μ_2)^T ∈ P ⊂ ℝ² the value of a deterministic parameter vector and x = (x_1, x_2)^T ∈ Ω ⊂ ℝ² the coordinate in the computational domain Ω. We model the random input by a second-order random field with expected value τ_0. We use two different covariances for the two test cases. For the first one, we use a separable exponential covariance c_1(x) = σ² exp(−|x_1|/L − |x_2|/L), where σ is the standard deviation, L is the correlation length and x denotes the spatial lag, see [36]. For the second one, we use a Gaussian covariance c_2(x) = σ² exp(−‖x‖²/L²), see [33]. We approximate the random field using a truncated Karhunen–Loève expansion τ(x; y) = τ_0 + σ_τ Σ_{k=1}^K √λ_k τ_k(x) y_k, where λ_k denote the eigenvalues of the corresponding eigenvalue problem, ordered decreasingly by magnitude, and τ_k(x) denote the respective eigenfunctions. The covariance function c_1(x) allows for an analytical solution of the eigenvalue problem [36]. The eigenvalue problem for the covariance c_2(x) is solved numerically using finite elements, see, for example, [37]. We assume that the parameters of the Karhunen–Loève expansion originate from independent uniformly distributed random variables, as is done frequently in the uncertainty quantification literature, see, for example, [31,33]. The truncation index K can be interpreted as a modeling parameter because it enters the definition of the bilinear form. The convection–diffusion–reaction equations then take the following form.
Problem 9
(spatial strong form). For any ( y , μ ) Ξ × P , find u ( · ; y , μ ) : Ω R such that
\[
\mu \cdot \nabla u(x; y, \mu) - \nabla \cdot \big( \varepsilon(x; y) \nabla u(x; y, \mu) \big) - \kappa(x; y) \, u(x; y, \mu) = 1, \quad x \in \Omega, \qquad u(x; y, \mu) = 0, \quad x \in \partial\Omega .
\]
The deterministic parameter vector μ ∈ P ⊂ ℝ² can be interpreted as a spatially uniform convective velocity. The random parameter vector y ∈ Ξ ⊂ ℝ^K enters either via a parametrized random reactivity (test case 1) or diffusivity (test case 2). For the first example, the reactivity is given by κ(x; y) = κ_0 + σ_κ Σ_{k=1}^K √λ_k κ_k(x) y_k as described above and the diffusivity is constant, that is, ε(x; y) = ε_0. For the second example, the reactivity is constant, that is, κ(x; y) = κ_0, and the diffusivity is given by ε(x; y) = ε_0 + σ_ε Σ_{k=1}^K √λ_k ε_k(x) y_k. The concrete instances of the example problems are determined by the model parameters given in Table 1. The output of the example problems is given by l(u(y, μ); y, μ), where
\[
l(v; y, \mu) = \int_0^{1/2} \int_0^{1/2} v(x) \, dx_1 \, dx_2 .
\]
In order to express the example problems in terms of the spatial weak form of Problem 1, we set
\[
a(w, v; y, \mu) = a_0(w, v) + \sum_{k=1}^{K} y_k \, a_{y_k}(w, v) + \sum_{p=1}^{2} \mu_p \, a_{\mu_p}(w, v),
\]
with
\[
\begin{aligned}
a_0(w, v) &= \varepsilon_0 \int_\Omega \nabla w(x) \cdot \nabla v(x) \, dx - \kappa_0 \int_\Omega w(x) \, v(x) \, dx, \\
a_{y_k}(w, v) &= -\sigma_\kappa \sqrt{\lambda_k} \int_\Omega \kappa_k(x) \, w(x) \, v(x) \, dx \quad \text{(test case 1), or} \\
a_{y_k}(w, v) &= \sigma_\varepsilon \sqrt{\lambda_k} \int_\Omega \varepsilon_k(x) \, \nabla w(x) \cdot \nabla v(x) \, dx \quad \text{(test case 2),} \qquad k = 1, \dots, K, \\
a_{\mu_p}(w, v) &= \int_\Omega \frac{\partial w(x)}{\partial x_p} \, v(x) \, dx, \qquad p = 1, 2,
\end{aligned}
\]
and
\[
f(v; y, \mu) = \int_\Omega v(x) \, dx .
\]
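Because all parameters enter the bilinear form affinely, the discretized system matrix for any (y, μ) is a linear combination of precomputed parameter-independent matrices; this is what enables the offline–online splitting of reduced basis methods. A minimal sketch, using hypothetical toy matrices in place of actual finite element assembly:

```python
import numpy as np

def assemble_system(A0, A_y, A_mu, y, mu):
    """Affine assembly of the parametrized system matrix
    A(y, mu) = A0 + sum_k y_k A_yk + sum_p mu_p A_mup.
    A0, A_y[k], A_mu[p] are precomputed parameter-independent matrices."""
    A = A0.copy()
    for yk, Ak in zip(y, A_y):
        A += yk * Ak
    for mup, Ap in zip(mu, A_mu):
        A += mup * Ap
    return A

# toy data: K = 5 random and 2 deterministic components, 4 spatial dofs
rng = np.random.default_rng(3)
A0 = 10.0 * np.eye(4)
A_y = [0.1 * rng.standard_normal((4, 4)) for _ in range(5)]
A_mu = [0.1 * rng.standard_normal((4, 4)) for _ in range(2)]
y = rng.uniform(-np.sqrt(3), np.sqrt(3), 5)    # random parameter sample
mu = rng.uniform(-200, 200, 2)                 # deterministic parameter value
A = assemble_system(A0, A_y, A_mu, y, mu)
```

In the reduced basis context, the same affine combination is performed with the small projected matrices, so the online cost is independent of the finite element dimension.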
A spatial weak form of Problem 9 is provided in terms of the standard infinite-dimensional Sobolev space H 0 1 ( Ω ) , as follows:
Problem 10
(spatial weak form). For given ( y , μ ) Ξ × P , find
\[
u(y, \mu) \in H_0^1(\Omega): \quad a\big( u(y, \mu), v; y, \mu \big) = f(v; y, \mu) \qquad \forall \, v \in H_0^1(\Omega).
\]
By taking the expectation and using the notation of Section 2.2.1, a spatial-stochastic weak form is given by:
Problem 11
(spatial-stochastic weak form). For given μ P , find
\[
\bar u(\mu) \in L_\xi^2\big( \Xi, H_0^1(\Omega) \big): \quad \bar a\big( \bar u(\mu), v; \mu \big) = \bar f(v; \mu) \qquad \forall \, v \in L_\xi^2\big( \Xi, H_0^1(\Omega) \big).
\]
The bilinear forms of Problems 10 and 11 are (y, μ)- and μ-uniformly bounded and coercive, respectively, because the convective velocity μ is constant and hence divergence-free, the random parameters y are bounded, and the problems have homogeneous Dirichlet boundary conditions. In this setting, the convective parts of the bilinear forms are skew-symmetric, so their contributions to the coercivity estimates vanish, see, for example, ([38], Section 6.2). The example problems thus fit into the abstract settings of Section 2.1 and Section 2.2 and are well-posed.
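As a rough illustration of the Karhunen–Loève construction used for the random inputs above, the following NumPy sketch approximates the leading eigenpairs of the Gaussian covariance on a uniform point grid. This is a simplified Nyström-type stand-in for the finite element eigensolver of [37]; the grid size, function names and parameter values are illustrative only.

```python
import numpy as np

def kl_modes_gaussian(points, sigma, L, K):
    """Grid-based approximation of the leading K Karhunen-Loeve eigenpairs
    for the Gaussian covariance c(x, x') = sigma^2 exp(-||x - x'||^2 / L^2)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    C = sigma**2 * np.exp(-d2 / L**2)
    w = 1.0 / len(points)                      # uniform quadrature weight
    lam_all, vec = np.linalg.eigh(w * C)       # discrete covariance eigenproblem
    idx = np.argsort(lam_all)[::-1][:K]        # K largest eigenvalues first
    return lam_all[idx], vec[:, idx] / np.sqrt(w)   # modes with unit discrete L2 norm

def kl_field(tau0, sigma_tau, lam, modes, y):
    """Truncated KL expansion tau(x; y) = tau0 + sigma_tau * sum_k sqrt(lam_k) tau_k(x) y_k."""
    return tau0 + sigma_tau * modes @ (np.sqrt(lam) * y)

# toy grid on the spatial domain (-0.5, 0.5)^2
g = np.linspace(-0.45, 0.45, 10)
pts = np.array([(a, b) for a in g for b in g])
lam, modes = kl_modes_gaussian(pts, sigma=1.0, L=1.0, K=5)
y = np.random.default_rng(2).uniform(-np.sqrt(3), np.sqrt(3), 5)  # one parameter sample
tau = kl_field(1.0, 0.2, lam, modes, y)        # field values at the grid points
```

The rapidly decaying eigenvalues returned here reflect the smoothness of the Gaussian covariance, which is why a small truncation index K already captures most of the field's variance.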

3.2. Discretization

The MCFE and SGFE discretizations (Problems 1 and 5) provide necessary links between the infinite-dimensional test problems (Problems 10 and 11) and the respective reduced-order models (Problems 2 and 6). We compute the coercivity and continuity constants necessary for the output bounds by solving eigenvalue problems associated with the discrete counterparts of the bilinear forms in Problems 10 and 11. These steps can be replaced by more efficient procedures, as pointed out in Section 2.1. In the following, we describe the computational details of the MCFE and SGFE discretizations of the test problems. Table 2 lists our choice of the relevant discretization parameters.

3.2.1. Finite Element Method

We derive an instance of the stochastic strong finite element problem by replacing H 0 1 ( Ω ) in Problem 10 with a finite-dimensional subspace. In particular, we employ the space X H 0 1 ( Ω ) formed by continuous piecewise linear finite elements corresponding to a regular graded simplicial triangulation of Ω , characterized by the number M FE of finite element degrees of freedom. The triangulation of the computational domain Ω is shown in Figure 1. We estimate the spatial discretization error using simulations on a finer reference triangulation as a substitute for the exact solution.

3.2.2. Monte Carlo Method

We provide estimates of the expectation and variance by discretizing the respective stochastic integrals using Monte Carlo quadrature in the sense of Section 2.1.1. To this end, we generate random samples y 1 , , y N ξ Ξ according to the distribution P ξ with a standard pseudorandom number generator. A reference simulation with a higher number of samples delivers an estimate of the sampling error. An exemplary solution of Problem 10 approximated by an MCFE method is shown in the right plot of Figure 1.
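The resulting Monte Carlo estimators of the output statistics are the usual sample mean and unbiased sample variance. A short sketch, using a hypothetical closed-form stand-in for the PDE solve and output evaluation:

```python
import numpy as np

def mc_output_statistics(outputs):
    """Plain Monte Carlo estimates of E[l] and V[l] from output samples
    l(u(y_n, mu); y_n, mu), n = 1, ..., N_xi."""
    outputs = np.asarray(outputs)
    return outputs.mean(), outputs.var(ddof=1)   # ddof=1: unbiased sample variance

# samples y_n drawn uniformly on [-sqrt(3), sqrt(3)]^K, matching the KL parameters
rng = np.random.default_rng(4)
y_samples = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(1024, 5))

# stand-in for solving Problem 10 and evaluating the output l(u(y, mu); y, mu);
# a real code would perform one finite element solve per sample
fake_output = lambda y: 1.0 + 0.1 * y.sum() + 0.01 * (y**2).sum()
mean, var = mc_output_statistics([fake_output(y) for y in y_samples])
```

The sampling error of both estimators decays like N_ξ^(−1/2), which is why the variance estimate in particular may require many more samples than the expectation to reach a given tolerance.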

3.2.3. Stochastic Galerkin Method

Stochastic Galerkin methods estimate the expectation and variance by directly evaluating the respective stochastic integrals, given a stochastic Galerkin solution based on a finite-dimensional subspace of L ξ 2 ( Ξ ) . In general, a stochastic Galerkin finite element method applied to a linear elliptic problem with a random elliptic coefficient leads to a large, block-structured system of linear algebraic equations. In our case, however, the underlying random variables y 1 , , y K are independent and enter the bilinear form linearly, see (27). Under these conditions, it is possible to find a double-orthogonal polynomial basis which decouples the blocks in the system matrix [39,40]. The resulting block-diagonal system of equations can be solved efficiently due to the relatively small bandwidth and the ability to treat the blocks in parallel. To define a suitable double-orthogonal basis, we start with K spaces of possibly different dimensions, spanned by univariate Legendre polynomials over the interval [−√3, √3]. We normalize the polynomials regarding the underlying uniform distribution and rotate the bases such that they consist of double-orthogonal univariate polynomials, as described in [39]. Finally, a tensor product of these univariate double-orthogonal polynomial bases forms a basis of an M SG -dimensional subspace S ⊂ L ξ 2 ( Ξ ) . In our experiments, we use the same polynomial degree d in all directions. We assess the error associated with the choice of d by comparing with a reference solution using a higher degree.
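The rotation to a double-orthogonal basis can be sketched as follows: in the orthonormal Legendre basis, multiplication by y is represented by a symmetric tridiagonal (Jacobi) matrix, and diagonalizing it yields the rotation of [39]. The recurrence coefficients below assume the unit-variance uniform density on [−√3, √3]; the degree and matrix sizes are illustrative.

```python
import numpy as np

d = 3                                   # polynomial degree, d + 1 basis functions
# Jacobi matrix of "multiplication by y" in the orthonormal Legendre basis for
# the uniform density on [-sqrt(3), sqrt(3)]: three-term recurrence
# y L_n = b_{n+1} L_{n+1} + b_n L_{n-1} with b_n = sqrt(3) n / sqrt(4 n^2 - 1).
n = np.arange(1, d + 1)
b = np.sqrt(3.0) * n / np.sqrt((2 * n - 1) * (2 * n + 1))
Y = np.diag(b, 1) + np.diag(b, -1)

# Diagonalizing Y rotates the basis into double-orthogonal polynomials:
# the rotated basis stays orthonormal AND diagonalizes multiplication by y,
# which is what decouples the blocks of the stochastic Galerkin system matrix.
theta, Q = np.linalg.eigh(Y)
assert np.allclose(Q.T @ Q, np.eye(d + 1))        # rotation: still orthonormal
assert np.allclose(Q.T @ Y @ Q, np.diag(theta))   # y-weighted inner products diagonal
```

The eigenvalues θ are the (scaled) Gauss–Legendre nodes, so the double-orthogonal basis functions coincide with Lagrange interpolation polynomials at these quadrature points.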

3.2.4. Reduced Basis

The considered reduced-order models rely on POD spaces generated from snapshots of the underlying discretized solutions. We choose N μ train = 64 training samples in the deterministic parameter domain. The reduced spaces are then constructed as specified in Section 2.3.

3.3. Computational Costs

We analyze the computational costs associated with the MCRB and SGRB methods. We focus on the offline costs, that is, the costs necessary to set up the reduced models and estimators. The costs of the online phase depend on the dimensions of the reduced spaces, which are not known a priori. The computational costs of the online phase for the numerical examples considered in Section 3 are briefly analyzed in Section 4.
The offline phase consists of several steps which are necessary to construct the reduced-order models and estimators. In the following, we list the computations that dominate the overall costs. At first, all coercivity constants necessary for the estimators in Section 2.1.3 and Section 2.2.3 are computed. As a second step, we construct the reduced spaces for the primal and dual Problems 2, 4, 6 and 8 based on the POD Algorithm 1. This must be done five times for the MCRB method (one primal problem and four dual problems) and four times for the SGRB method (one primal problem and three dual problems). Constructing a reduced space consists of the snapshot computations and the singular value decomposition of the weighted snapshot matrix. The systems of equations to compute the snapshots for the SGRB method are decoupled with respect to the SG basis because of the double-orthogonal polynomials. An overview of the listed computations is given in Table 3 for the MCRB method and in Table 4 for the SGRB method. We only state the problems that must be solved and their dimensions as we use the same algorithms for both methods. For the MCRB approach, the coercivity constant is computed N ξ times, that is, for every MC sample of the random parameters, see Section 2.1.3. Using a strictly positive lower bound instead reduces the costs to the computation of one single eigenvalue problem.
One can see that the difference in the offline costs primarily results from the difference between the number of MC samples N ξ and the dimension of the SG basis M SG . Consequently, the SGRB method will be competitive in scenarios where M SG , that is, the number of random parameters and the polynomial degree, is small. For problems with a large number of random parameters, M SG can become orders of magnitude larger than N ξ .

3.4. Numerical Results

Figure 2 presents the parameter-dependent output statistics obtained with the default parameter-dependent SGFE models. The top row shows the results for test case 1 and the bottom row shows the results for test case 2. The crosses in Figure 2 correspond to the snapshot training parameter values provided by the pseudorandom number generator. Additionally, Figure 2 shows the test parameter values that are used to assess the model.
The reduced basis estimates of the output statistics together with the respective error bounds are provided by Theorems 3 and 4 for the MCRB method, and Theorems 6 and 7 for the SGRB method. First, we validate the error bounds for a single, randomly chosen value of the deterministic parameter, marked by a square in Figure 2. The convergence with respect to the number of reduced basis functions R is presented in Figure 3 and Figure 4. The error components of the underlying discretized solutions are provided as references. Looking at the discretization errors only, we see that the number of MC samples is sufficient to approximate the expectation, but is actually too small to balance the FE error in the case of the variance. The SG error, on the other hand, is smaller than the FE error in all cases, which provides evidence that the stochastic Galerkin discretization of the stochastic domain is sufficiently fine. Concerning the reduced basis models, we observe that R ≈ 16 reduced basis functions are sufficient to obtain reduced-order estimates that are on a par with the full-order estimates in all considered cases. The plots suggest that all error bounds converge at the same rates as the respective errors. This is useful because it implies that the efficiency of the error bounds does not deteriorate significantly when the number of reduced basis functions is increased.
We assess the convergence globally over P in order to confirm that the point-wise observations in the deterministic parameter space provided by Figure 3 and Figure 4 are not a lucky coincidence. To this end, we employ an L 2 ( P ) -norm, approximated using Monte Carlo quadrature with N μ test = 64 samples, shown as circles in Figure 2. The convergence results are presented in Figure 5 and Figure 6. Since we have averaged over the parameter space, the plots appear less random than those in Figure 3 and Figure 4. The convergence of the estimates and the corresponding bounds agrees quite well. Moreover, the MCRB and SGRB methods perform similarly in terms of accuracy per number of basis functions.
In Figure 3, Figure 4, Figure 5 and Figure 6, it appears that the SGRB error bound overestimates the actual error more severely (by about four orders of magnitude) than the MCRB error bound (about two orders of magnitude). A closer inspection of the individual components of the error estimate reveals that, for larger R, the lower-order term involving the continuity factor is responsible for the major portion of the error estimate. In particular, for R = 64 at the parameter point corresponding to Figure 3, the terms on the right-hand side of Theorem 7 amount to approximately 4.2 · 10^−12, 1.2 · 10^−19 and 5.2 · 10^−14, respectively.

4. Discussion

We have investigated the use of MCRB and SGRB methods to estimate parameter-dependent statistics of functional outputs of interest of elliptic PDEs with parametrized random and deterministic input data. Our analysis is restricted to linear elliptic variational problems and cannot be generalized to nonlinear or non-elliptic equations in a straightforward way. It further relies on the fact that all parameters enter the equations affinely. For more complex, non-affine input representations, additional approximation steps, such as empirical interpolation techniques, are necessary for the analysis and methods to be applicable. Furthermore, reduced-order methods, as considered in this work, are only reasonable surrogates for the high-fidelity models when the parameter-dependent output of interest lives on a low-dimensional manifold. This is the case for our numerical examples, as can be seen by the fast decay of the error contributions.
We have observed that the SGRB method can deliver estimates of the expectation and variance of linear outputs with an accuracy similar to the MCRB method for the convection–diffusion–reaction model problems. Additionally, the SGRB error bounds regarding the expected value were very close to the respective MCRB bounds in our experiments. Concerning the variance, the presented SGRB bounds overestimate the error more severely than the available MCRB bounds, which leaves room for future improvement of the SGRB variance bound. Nevertheless, the MCRB and SGRB variance bounds both converge at the same rate with respect to the number of reduced basis functions. This behavior is reflected by the theory, which predicts the same order of convergence in terms of dual norms of residuals.
The MCRB statistical output estimates and error bounds require a Monte Carlo sampling of the reduced quantities point-wise in the random parameter domain. In our convection–diffusion–reaction model problems, 1024 samples were sufficient to balance the finite element error for the expectation, but an accurate prediction of the variance would require even more samples. The SGRB estimates and bounds, on the other hand, are obtained by an exact integration of the corresponding reduced basis expansions in the setup phase of the reduced-order model and thus do not rely on Monte Carlo sampling. As a consequence, the primal and dual SGRB problems need to be solved only once for each new deterministic parameter. This benefit comes at the cost of a more expensive offline phase. In our tests, the SGRB and MCRB methods achieved a similar reduction of degrees of freedom for a given error tolerance. Consequently, the possible online speedup of SGRB methods over MCRB methods is on the order of the number of Monte Carlo samples. This is particularly attractive in scenarios where evaluating the reduced-order model must be as inexpensive as possible, while the offline costs are not a primary concern.

Author Contributions

Conceptualization, S.U., C.M. and J.L.; methodology, S.U., C.M. and J.L.; software, S.U.; validation, S.U. and C.M.; formal analysis, S.U., C.M. and J.L.; investigation, S.U., C.M. and J.L.; resources, J.L.; data curation, S.U. and C.M.; writing—original draft preparation, S.U.; writing—review and editing, C.M. and J.L.; visualization, S.U. and C.M.; supervision, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Graduate School CE within the Centre for Computational Engineering at Technical University of Darmstadt and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the collaborative research centre TRR154 “Mathematical modeling, simulation and optimisation using the example of gas networks” (Project-ID 239904186, TRR154/2-2018, TP B01).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Morton, K.W. Numerical Solution of Convection-Diffusion Problems; Chapman & Hall: London, UK, 1996.
2. Hesthaven, J.; Rozza, G.; Stamm, B. Certified Reduced Basis Methods for Parametrized Partial Differential Equations; Springer: Cham, Switzerland, 2016.
3. Prud’homme, C.; Rovas, D.V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A.T.; Turinici, G. Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods. J. Fluid. Eng. Trans. ASME 2002, 124, 70–80.
4. Quarteroni, A.; Manzoni, A.; Negri, F. Reduced Basis Methods for Partial Differential Equations; Springer: Cham, Switzerland, 2016.
5. Boyaval, S.; Bris, C.; Maday, Y.; Nguyen, N.; Patera, A. A reduced basis approach for variational problems with stochastic parameters: Application to heat conduction with variable Robin coefficient. Comput. Methods Appl. Mech. Eng. 2009, 198, 3187–3206.
6. Haasdonk, B.; Urban, K.; Wieland, B. Reduced basis methods for parametrized partial differential equations with stochastic influences using the Karhunen–Loève expansion. SIAM/ASA J. Uncertain. Quantif. 2013, 1, 79–105.
7. Chen, P.; Quarteroni, A.; Rozza, G. A weighted reduced basis method for elliptic partial differential equations with random input data. SIAM J. Numer. Anal. 2013, 51, 3163–3185.
8. Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J. An empirical interpolation and model-variance reduction method for computing statistical outputs of parametrized stochastic partial differential equations. SIAM/ASA J. Uncertain. Quantif. 2016, 4, 244–265.
9. Spannring, C.; Ullmann, S.; Lang, J. A weighted reduced basis method for parabolic PDEs with random data. In Recent Advances in Computational Engineering; Schäfer, M., Behr, M., Mehl, M., Wohlmuth, B., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 145–161.
10. Newsum, C.; Powell, C. Efficient reduced basis methods for saddle point problems with applications in groundwater flow. SIAM/ASA J. Uncertain. Quantif. 2017, 5, 1248–1278.
11. Boyaval, S. A fast Monte–Carlo method with a reduced basis of control variates applied to uncertainty propagation and Bayesian estimation. Comput. Methods Appl. Mech. Eng. 2012, 241-244, 190–205.
12. Chen, P.; Schwab, C. Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations. J. Comp. Phys. 2016, 316, 470–503.
13. Manzoni, A.; Pagani, S.; Lassila, T. Accurate solution of Bayesian inverse uncertainty quantification problems combining reduced basis methods and reduction error models. SIAM/ASA J. Uncertain. Quantif. 2016, 4, 380–412.
14. Chen, P.; Quarteroni, A. Accurate and efficient evaluation of failure probability for partial different equations with random input data. Comput. Methods Appl. Mech. Eng. 2013, 267, 233–260.
15. Venturi, D.; Wan, X.; Karniadakis, G.E. Stochastic low-dimensional modelling of a random laminar wake past a circular cylinder. J. Fluid. Mech. 2008, 606, 339–367.
16. Freitas, F.; Pulch, R.; Rommes, J. Fast and accurate model reduction for spectral methods in uncertainty quantification. Int. J. Uncertain. Quantif. 2016, 6, 271–286.
17. Pulch, R.; ter Maten, E. Stochastic Galerkin methods and model order reduction for linear dynamical systems. Int. J. Uncertain. Quantif. 2015, 5, 255–273.
18. Glas, S.; Mayerhofer, A.; Urban, K. Two ways to treat time in reduced basis methods. In Model Reduction of Parametrized Systems; Modeling, Simulation and Applications; Benner, P., Ohlberger, M., Patera, A., Rozza, G., Urban, K., Eds.; Springer International Publishing: Cham, Switzerland, 2017; Volume 17, Chapter 1; pp. 1–16.
19. Urban, K.; Patera, A. A new error bound for reduced basis approximation of parabolic partial differential equations. CR Math. 2012, 350, 203–207.
20. Wieland, B. Reduced Basis Methods for Partial Differential Equations with Stochastic Influences. Ph.D. Thesis, Universität Ulm, Ulm, Germany, 2013.
21. Vidal-Codina, F.; Nguyen, N.; Giles, M.; Peraire, J. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations. J. Comput. Phys. 2015, 297, 700–720.
22. Huynh, D.B.P.; Peraire, J.; Patera, A.T.; Liu, G.R. Real-time reliable prediction of linear-elastic mode-I stress intensity factors for failure analysis. In Proceedings of the Singapore MIT Alliance Conference, Singapore, 7 July 2006.
23. Sen, S. Reduced Basis Approximation and a Posteriori Error Estimation for Non-Coercive Elliptic Problems: Application to Acoustics. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2007.
24. Jiang, J.; Chen, Y.; Narayan, A. A goal-oriented reduced basis methods-accelerated generalized polynomial chaos algorithm. SIAM/ASA J. Uncertain. Quantif. 2016, 4, 1398–1420.
25. Dexter, N.C.; Webster, C.G.; Zhang, G. Explicit cost bounds of stochastic Galerkin approximations for parameterized PDEs with random coefficients. Comput. Math. Appl. 2016, 71, 2231–2256.
26. Nouy, A. A generalized spectral decomposition technique to solve a class of linear stochastic partial differential equations. Comput. Methods Appl. Mech. Eng. 2007, 4521–4537.
27. Tamellini, L.; Maître, O.L.; Nouy, A. Model reduction based on proper generalized decomposition for the stochastic steady incompressible Navier-Stokes equations. SIAM J. Sci. Comput. 2014, 36, A1089–A1117.
28. Powell, C.E.; Silvester, D.; Simoncini, V. An efficient reduced basis solver for stochastic Galerkin matrix equations. SIAM J. Sci. Comput. 2017, A141–A163.
29. Lee, K.; Elman, H.C. A preconditioned low-rank projection method with a rank-reduction scheme for stochastic partial differential equations. SIAM J. Sci. Comput. 2017, 39, S828–S850.
30. Powell, C.E.; Elman, H.C. Block-diagonal preconditioning for spectral stochastic finite-element systems. IMA J. Numer. Anal. 2009, 29, 350–375.
31. Müller, C.; Ullmann, S.; Lang, J. A Bramble–Pasciak conjugate gradient method for discrete Stokes equations with random viscosity. SIAM/ASA J. Uncertain. Quantif. 2019, 7, 787–805.
32. Elman, H.C.; Liao, Q. Reduced basis collocation methods for partial differential equations with random coefficients. SIAM/ASA J. Uncertain. Quantif. 2013, 1, 192–217.
33. Powell, C.; Lord, G.; Shardlow, T. An Introduction to Computational Stochastic PDEs, 1st ed.; Cambridge University Press: Cambridge, UK, 2014.
34. Ngoc Cuong, N.; Veroy, K.; Patera, A.T. Certified real-time solution of parametrized partial differential equations. In Handbook of Materials Modeling: Methods; Yip, S., Ed.; Springer: Dordrecht, The Netherlands, 2005; pp. 1529–1564.
35. Kunisch, K.; Volkwein, S. Control of the Burgers equation by a reduced-order approach using proper orthogonal decomposition. J. Optim. Theory App. 1999, 102, 345–371.
36. Ghanem, R.G.; Spanos, P.D. Stochastic Finite Elements: A Spectral Approach; Springer: New York, NY, USA, 1991.
37. LeMaître, O.P.; Knio, O.M. Spectral Methods for Uncertainty Quantification—With Applications to Computational Fluid Dynamics; Springer: New York, NY, USA, 2010.
38. Elman, H.; Silvester, D.; Wathen, A. Finite Elements and Fast Iterative Solvers—With Applications in Incompressible Fluid Dynamics, 2nd ed.; Oxford University Press: Oxford, UK, 2014.
39. Babuška, I.; Tempone, R.; Zouraris, G.E. Galerkin finite element approximations of stochastic elliptic partial differential equations. SIAM J. Numer. Anal. 2004, 42, 800–825.
40. Frauenfelder, P.; Schwab, C.; Todor, R.A. Finite elements for elliptic problems with stochastic coefficients. Comput. Methods Appl. Mech. Eng. 2005, 194, 205–228.
Figure 1. Computational domain with triangulation (left) and exemplary mean value of the solution of Problem 9 with a randomly chosen convective velocity, approximated by an MCFE approach (right).
Figure 2. Parameter-dependent output expectation E [ l ( u ( · , μ ) ; · , μ ) ] and variance V [ l ( u ( · , μ ) ; · , μ ) ] for the functional l given by (26). The top row shows the results for test case 1 and the bottom row shows the results for test case 2. Crosses mark the snapshot parameter values. The square marks the parameter value corresponding to Figure 3 and Figure 4. Circles mark the parameter values used to obtain Figure 5 and Figure 6.
Figure 3. Log–log plots of the errors in the approximation of the expectation (top row) and variance (bottom row) of a linear functional with an MCRB method (left column) and an SGRB method (right column) depending on the dimension R of the reduced spaces, for a random point in the deterministic parameter domain, for the first test case. Respective error bounds and approximate FE/SG/MC discretization errors.
Figure 4. Log–log plots of the errors in the approximation of the expectation (top row) and variance (bottom row) of a linear functional with an MCRB method (left column) and an SGRB method (right column) depending on the dimension R of the reduced spaces, for a random point in the deterministic parameter domain, for the second test case. Respective error bounds and approximate FE/SG/MC discretization errors.
Figure 5. Log–log plots of the errors in the approximation of the expectation (top row) and variance (bottom row) of a linear functional with an MCRB method (left column) and an SGRB method (right column) depending on the dimension R of the reduced spaces, measured in terms of an approximate L 2 ( P ) -norm, for the first test case. Respective error bounds and approximate FE/SG/MC discretization errors. It is a coincidence that the approximate FE and MC errors of the estimated expectation are very close in this outcome of the random experiment.
Figure 6. Log–log plots of the errors in the approximation of the expectation (top row) and variance (bottom row) of a linear functional with an MCRB method (left column) and an SGRB method (right column) depending on the dimension R of the reduced spaces, measured in terms of an approximate L 2 ( P ) -norm, for the second test case. Respective error bounds and approximate FE/SG/MC discretization errors.
Table 1. Model parameters of the test problems.
| Symbol | Value | Description |
|--------|-------|-------------|
| κ_0 | 1000 | expected value of reactivity |
| ε_0 | 1 | expected value of diffusivity |
| σ_κ | 200 | standard deviation of reactivity |
| σ_ε | 0.2 | standard deviation of diffusivity |
| L | 1 | correlation length |
| K | 5 | Karhunen–Loève truncation index |
| Ω | (−0.5, 0.5)² | spatial domain with boundary ∂Ω |
| P | [−200, 200]² | deterministic parameter domain |
| Ξ | [−3, 3]^K | random parameter domain |
Table 2. Discretization parameters of the test problems. The default values are used in the snapshot simulations. The reference values are used to assess the accuracy of the snapshot discretizations.
| Symbol | Default | Reference | Description |
|--------|---------|-----------|-------------|
| M_FE | 225 | 961 | number of FE degrees of freedom |
| N_ξ | 1024 | 16384 | number of MC samples of y ∈ Ξ |
| d | 2 | 3 | degree of SG polynomials |
| M_SG | 243 | 1024 | number of SG degrees of freedom: (d + 1)^K |
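As a sanity check on the last row of Table 2, the tensor-product count M_SG = (d + 1)^K can be reproduced in a few lines (the helper name `sg_dofs` is ours, introduced only for illustration):

```python
K = 5  # Karhunen-Loeve truncation index (Table 1)

def sg_dofs(d, K):
    """Degrees of freedom of a tensor-product stochastic Galerkin
    polynomial space of degree d in each of K random variables."""
    return (d + 1) ** K

print(sg_dofs(2, K))  # 243  (default discretization)
print(sg_dofs(3, K))  # 1024 (reference discretization)
```

This confirms the default and reference values of M_SG in the table.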
Table 3. Primary computational costs of the offline phase of the MCRB approach.
| # | Dimension | Problem to Solve | Output |
|---|-----------|------------------|--------|
| N_ξ | M_FE × M_FE | eigenvalue problem | coercivity constant |
| 5 N_ξ · N_μ,train | M_FE × M_FE | system of equations | snapshots |
| 5 | M_FE × (N_ξ · N_μ,train) | singular value decomposition | reduced basis |
Table 4. Primary computational costs of the offline phase of the SGRB approach.
| # | Dimension | Problem to Solve | Output |
|---|-----------|------------------|--------|
| 1 | M_FE·M_SG × M_FE·M_SG | eigenvalue problem | coercivity constant |
| 4 M_SG · N_μ,train | M_FE × M_FE | system of equations | snapshots |
| 4 | M_FE·M_SG × N_μ,train | singular value decomposition | reduced basis |
Ullmann, S.; Müller, C.; Lang, J. Stochastic Galerkin Reduced Basis Methods for Parametrized Linear Convection–Diffusion–Reaction Equations. Fluids 2021, 6, 263. https://0-doi-org.brum.beds.ac.uk/10.3390/fluids6080263