Article

First-Order Comprehensive Adjoint Method for Computing Operator-Valued Response Sensitivities to Imprecisely Known Parameters, Internal Interfaces and Boundaries of Coupled Nonlinear Systems: I. Mathematical Framework

by
Dan Gabriel Cacuci
Department of Mechanical Engineering, University of South Carolina, 300 Main Street, Columbia, SC 29208, USA
Submission received: 1 June 2020 / Revised: 16 July 2020 / Accepted: 31 August 2020 / Published: 8 September 2020

Abstract

This work presents the first-order comprehensive adjoint sensitivity analysis methodology (1st-CASAM) for efficiently computing the first-order sensitivities (i.e., functional derivatives) of operator-valued responses (i.e., model results) of general models of coupled nonlinear physical systems characterized by imprecisely known and/or uncertain parameters, external boundaries, and internal interfaces between the coupled systems. The explicit mathematical formalism developed within the 1st-CASAM for computing the first-order sensitivities of operator-valued responses to uncertain internal interfaces and external boundaries in the models’ phase–space enables this methodology to generalize all of the previously published methodologies for computing first-order response sensitivities. The computational resources needed for using forward versus adjoint operators in conjunction with spectral versus collocation methods for computing the response sensitivities are analyzed in detail. By enabling the exact computation of operator-valued response sensitivities to internal interfaces and to external boundary parameters and conditions, the 1st-CASAM presented in this work makes it possible, inter alia, to quantify the effects of manufacturing tolerances on operator-valued responses of physical and engineering systems.

1. Introduction

The aim of sensitivity analysis is to compute the sensitivities (i.e., functional derivatives) of responses (i.e., results of interest) of a computational model with respect to the respective model’s parameters. Statistical and "brute force" methods can yield approximate values for such sensitivities, while forward and adjoint methods yield mathematically exact expressions for sensitivities, which can therefore be computed to machine accuracy. It is beyond the scope of this work to review these methods and their numerous applications, but the interested reader may wish to consult the books [1,2,3] and references therein. The specific aim of this work is to generalize the forward/adjoint sensitivity analysis methodology conceived by Cacuci [4,5] for operator-valued (as opposed to scalar-valued) model responses, to enable the explicit computation of operator-valued response sensitivities to uncertain phase–space locations of boundaries and interfaces in coupled nonlinear subsystems. Knowledge of such sensitivities is crucial in practice for quantifying the effects of manufacturing tolerances when actually constructing any physical system, from benchmark experiments to industrial-size installations. It will be shown that response sensitivities to the imprecisely known phase–space locations of domain boundaries and interfaces can arise both from the definition of the system’s response and from the equations, interfaces and boundary conditions defining the model and its imprecisely known domain of definition. This work is structured as follows: Section 2 presents the general mathematical framework for computing exactly (in parameter space) and efficiently the sensitivities of a generic operator-valued response to the physical system’s imprecisely known parameters, internal interfaces and external boundaries. This mathematical framework is called the "first-order comprehensive adjoint sensitivity analysis methodology" (1st-CASAM), where the qualifier "comprehensive" indicates that all possible uncertain model parameters, including those characterizing the phase–space locations of internal and external boundaries, are explicitly taken into consideration. The total sensitivity of the operator-valued response is represented using its spectral expansion and, alternatively, using its collocation/pseudo-spectral expansion. The relative advantages and disadvantages of these representations are discussed, including the use of mixed spectral/collocation expansions of the sensitivities of the operator-valued response. Section 3 offers concluding remarks.
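To make the contrast above concrete, the following minimal Python sketch (not part of the original paper) compares a "brute-force" central-difference estimate of a sensitivity with the exact derivative for a hypothetical two-parameter response; the quadratic response used here is purely illustrative.

```python
import numpy as np

# Hypothetical scalar response of two parameters, standing in for a model result.
def response(alpha):
    a1, a2 = alpha
    return a1**2 * a2

alpha0 = np.array([2.0, 3.0])                                   # nominal parameter values
exact = np.array([2.0 * alpha0[0] * alpha0[1], alpha0[0]**2])   # exact dR/dalpha

# "Brute-force" central differences: two response evaluations per parameter,
# accurate only up to the truncation/round-off error of the chosen step size.
h = 1e-5
approx = np.zeros(2)
for i in range(2):
    d_alpha = np.zeros(2)
    d_alpha[i] = h
    approx[i] = (response(alpha0 + d_alpha) - response(alpha0 - d_alpha)) / (2.0 * h)

print("exact:", exact, "  finite-difference:", approx)
```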

2. Mathematical Framework of the 1st-CASAM for Operator-Valued Responses of Coupled Systems Comprising Imprecisely Known Parameters, Interfaces and Boundaries

The system considered in this work comprises two nonlinear subsystems which are coupled to one another across a common internal interface (boundary) in phase–space; these will be called "Subsystem I" and "Subsystem II", respectively. The first subsystem is represented mathematically as follows:
$$\mathbf{N}^{(I)}[\mathbf{u}(\mathbf{x});\boldsymbol{\alpha}] = \mathbf{Q}^{(I)}(\boldsymbol{\alpha};\mathbf{x}), \quad \mathbf{x}\in\Omega_x(\boldsymbol{\alpha}) \qquad (1)$$
Bold letters will be used in this work to denote matrices and vectors. Unless explicitly stated otherwise, the vectors in this work are considered to be column vectors. The second subsystem is represented mathematically as follows:
$$\mathbf{N}^{(II)}[\mathbf{v}(\mathbf{y});\boldsymbol{\alpha}] = \mathbf{Q}^{(II)}(\boldsymbol{\alpha};\mathbf{y}), \quad \mathbf{y}\in\Omega_y(\boldsymbol{\alpha}) \qquad (2)$$
If differential operators appear in Equations (1) and (2), a corresponding set of boundary and/or initial/final conditions must also be given; these conditions can be represented in operator form as follows:
$$\mathbf{B}[\mathbf{u}(\mathbf{x}),\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y}] = \mathbf{0}, \quad \mathbf{x}\in\partial\Omega_x(\boldsymbol{\alpha}),\; \mathbf{y}\in\partial\Omega_y(\boldsymbol{\alpha}) \qquad (3)$$
The quantities appearing in Equations (1)–(3) are defined as follows:
• $\boldsymbol{\alpha} \equiv (\alpha_1, \ldots, \alpha_{Z_\alpha})^\dagger \in \mathbb{R}^{Z_\alpha}$ denotes a column vector having $Z_\alpha$ scalar-valued components representing all of the imprecisely known internal and boundary parameters of the physical systems, including the imprecisely known parameters that characterize the interface and boundary conditions. Some of these parameters are common to both physical systems, e.g., the parameters that characterize common interfaces. These scalar parameters are considered to be subject to both random and systematic uncertainties, as is usually the case in practical applications. In order to use such parameters in practical computations, which is the scope of the methodology presented in this work, they are considered to be either "uncertain" or "imprecisely known". "Uncertain" parameters are usually considered to follow a probability distribution having a known "mean value" and a known "standard deviation". On the other hand, the actual values of "imprecisely known" parameters are unknown. To enable the use of such parameters in computations, "expert opinion" is invoked to assign each such imprecisely known parameter a "nominal value" (which plays the role of a "mean value") and a "range of variation" (which plays the role of a standard deviation). For practical computations, the actual origin of the parameter’s nominal (or mean) value and of its assigned standard deviation is immaterial, which is why the qualifiers "uncertain" and "imprecisely known" are often used interchangeably. In this work, the superscript "zero" will be used to denote the known nominal or mean values of various quantities. In particular, the vector of nominal and/or mean parameter values will be denoted as $\boldsymbol{\alpha}^0 \equiv (\alpha_1^0, \ldots, \alpha_{Z_\alpha}^0)^\dagger$. The symbol "$\equiv$" will be used to denote "is defined as" or "is by definition equal to", and transposition will be indicated by a dagger ($\dagger$) superscript.
• $\mathbf{x} \equiv (x_1, \ldots, x_{Z_x})^\dagger \in \mathbb{R}^{Z_x}$ denotes the phase–space position vector, of dimension $Z_x$, of independent variables for the system defined in Equation (1). The vector of independent variables $\mathbf{x}$ is defined on a phase–space domain denoted as $\Omega_x(\boldsymbol{\alpha}) \equiv \{ a_i(\boldsymbol{\alpha}) \le x_i \le b_i(\boldsymbol{\alpha});\ i = 1, \ldots, Z_x \}$, and is therefore considered to depend on the uncertain parameters $\boldsymbol{\alpha}$. The lower-valued imprecisely known boundary point of the independent variable $x_i$ is denoted as $a_i(\boldsymbol{\alpha})$, while the upper-valued imprecisely known boundary point is denoted as $b_i(\boldsymbol{\alpha})$. For physical systems modeled by diffusion theory, for example, the "vacuum boundary condition" requires that the particle flux vanish at the "extrapolated boundary" of the spatial domain facing the vacuum; the "extrapolated boundary" depends on the imprecisely known geometrical dimensions of the system’s domain in space and also on the system’s microscopic transport cross sections and atomic number densities. The boundary $\partial\Omega_x(\boldsymbol{\alpha}) \equiv \{ \mathbf{a}(\boldsymbol{\alpha}) \cup \mathbf{b}(\boldsymbol{\alpha}) \}$ of the domain $\Omega_x(\boldsymbol{\alpha})$ comprises all of the endpoints $\mathbf{a}(\boldsymbol{\alpha}) \equiv [a_1(\boldsymbol{\alpha}), \ldots, a_{Z_x}(\boldsymbol{\alpha})]^\dagger$ and $\mathbf{b}(\boldsymbol{\alpha}) \equiv [b_1(\boldsymbol{\alpha}), \ldots, b_{Z_x}(\boldsymbol{\alpha})]^\dagger$ of the intervals on which the respective components of $\mathbf{x}$ are defined. It may happen that some components $a_i(\boldsymbol{\alpha})$ and/or $b_j(\boldsymbol{\alpha})$ are infinite, in which case they would not depend on any imprecisely known parameters.
• $\mathbf{u}(\mathbf{x}) \equiv [u_1(\mathbf{x}), \ldots, u_{Z_u}(\mathbf{x})]^\dagger$ denotes a $Z_u$-dimensional column vector whose components represent the first system’s dependent variables (also called "state functions"). The vector-valued function $\mathbf{u}(\mathbf{x})$ is considered to be the unique nontrivial solution of the physical problem described by Equations (1) and (3).
• $\mathbf{N}^{(I)}[\mathbf{u}(\mathbf{x});\boldsymbol{\alpha}] \equiv [N_1^{(I)}(\mathbf{u};\boldsymbol{\alpha}), \ldots, N_i^{(I)}(\mathbf{u};\boldsymbol{\alpha}), \ldots, N_{Z_u}^{(I)}(\mathbf{u};\boldsymbol{\alpha})]^\dagger$, $i = 1, \ldots, Z_u$, denotes a column vector of dimension $Z_u$ whose components are operators that act nonlinearly on $\mathbf{u}(\mathbf{x})$ and $\boldsymbol{\alpha}$.
• $\mathbf{Q}^{(I)}(\boldsymbol{\alpha};\mathbf{x}) \equiv [Q_1^{(I)}(\boldsymbol{\alpha};\mathbf{x}), \ldots, Q_{Z_u}^{(I)}(\boldsymbol{\alpha};\mathbf{x})]^\dagger$ denotes a $Z_u$-dimensional column vector whose elements represent inhomogeneous source terms that depend either linearly or nonlinearly on $\boldsymbol{\alpha}$. The components of $\mathbf{Q}^{(I)}(\boldsymbol{\alpha};\mathbf{x})$ may involve operators (rather than just finite-dimensional functions) and distributions acting on $\boldsymbol{\alpha}$ and $\mathbf{x}$.
• $\mathbf{y} \equiv (y_1, \ldots, y_{Z_y})^\dagger \in \mathbb{R}^{Z_y}$ denotes the $Z_y$-dimensional phase–space position vector of independent variables for the physical system defined in Equation (2). The vector of independent variables $\mathbf{y}$ is defined on a phase–space domain denoted as $\Omega_y(\boldsymbol{\alpha})$, which is defined as follows: $\Omega_y(\boldsymbol{\alpha}) \equiv \{ c_j(\boldsymbol{\alpha}) \le y_j \le d_j(\boldsymbol{\alpha});\ j = 1, \ldots, Z_y \}$. The lower-valued imprecisely known boundary point of the independent variable $y_j$ is denoted as $c_j(\boldsymbol{\alpha})$, while the upper-valued imprecisely known boundary point of $y_j$ is denoted as $d_j(\boldsymbol{\alpha})$. Some or all of the points $c_j(\boldsymbol{\alpha})$ may coincide with the points $b_j(\boldsymbol{\alpha})$. Additionally, some components of $\mathbf{y}$ may coincide with some components of $\mathbf{x}$, in which case the respective lower and upper boundary points for the coinciding independent variables would also coincide correspondingly. The boundary $\partial\Omega_y(\boldsymbol{\alpha}) \equiv \{ \mathbf{c}(\boldsymbol{\alpha}) \cup \mathbf{d}(\boldsymbol{\alpha}) \}$ of the domain $\Omega_y(\boldsymbol{\alpha})$ comprises all of the endpoints $\mathbf{c}(\boldsymbol{\alpha}) \equiv [c_1(\boldsymbol{\alpha}), \ldots, c_{Z_y}(\boldsymbol{\alpha})]^\dagger$ and $\mathbf{d}(\boldsymbol{\alpha}) \equiv [d_1(\boldsymbol{\alpha}), \ldots, d_{Z_y}(\boldsymbol{\alpha})]^\dagger$ of the intervals on which the respective components of $\mathbf{y}$ are defined.
• $\mathbf{v}(\mathbf{y}) \equiv [v_1(\mathbf{y}), \ldots, v_{Z_v}(\mathbf{y})]^\dagger$ denotes a $Z_v$-dimensional column vector whose components represent the second system’s dependent variables (also called "state functions"). The vector-valued function $\mathbf{v}(\mathbf{y})$ is considered to be the unique nontrivial solution of the physical problem described by Equations (2) and (3).
• $\mathbf{N}^{(II)}[\mathbf{v}(\mathbf{y});\boldsymbol{\alpha}] \equiv [N_1^{(II)}(\mathbf{v};\boldsymbol{\alpha}), \ldots, N_i^{(II)}(\mathbf{v};\boldsymbol{\alpha}), \ldots, N_{Z_v}^{(II)}(\mathbf{v};\boldsymbol{\alpha})]^\dagger$, $i = 1, \ldots, Z_v$, denotes a column vector of dimension $Z_v$ whose components are operators acting nonlinearly on $\mathbf{v}(\mathbf{y})$ and $\boldsymbol{\alpha}$.
• $\mathbf{Q}^{(II)}(\boldsymbol{\alpha};\mathbf{y}) \equiv [Q_1^{(II)}(\boldsymbol{\alpha};\mathbf{y}), \ldots, Q_{Z_v}^{(II)}(\boldsymbol{\alpha};\mathbf{y})]^\dagger$ denotes a $Z_v$-dimensional column vector whose elements represent inhomogeneous source terms that depend either linearly or nonlinearly on $\boldsymbol{\alpha}$. The components of $\mathbf{Q}^{(II)}(\boldsymbol{\alpha};\mathbf{y})$ may involve operators and distributions acting on $\boldsymbol{\alpha}$ and $\mathbf{y}$.
• The vector-valued operator $\mathbf{B}[\mathbf{u}(\mathbf{x}),\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y}]$ comprises all of the boundary, interface, and initial/final conditions for the coupled physical systems. If the boundary, interface and/or initial/final conditions are inhomogeneous, which is most often the case, then $\mathbf{B}[\mathbf{0},\mathbf{0};\boldsymbol{\alpha};\mathbf{x},\mathbf{y}] \ne \mathbf{0}$.
• Since $\mathbf{Q}^{(I)}(\boldsymbol{\alpha};\mathbf{x})$ and $\mathbf{Q}^{(II)}(\boldsymbol{\alpha};\mathbf{y})$ may involve operators and distributions acting on $\boldsymbol{\alpha}$, $\mathbf{x}$ and $\mathbf{y}$, all of the equalities in this work, including Equations (1)–(3), are considered to hold in the weak ("distributional") sense.
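For readers who implement such models, the quantities defined above might be organized along the following lines. This is a minimal, hypothetical Python sketch (the class and attribute names are illustrative, not part of the paper), showing imprecisely known parameters with nominal values and assigned standard deviations, together with domain endpoints $a_i(\boldsymbol{\alpha})$, $b_i(\boldsymbol{\alpha})$ that depend on the parameter vector.

```python
from dataclasses import dataclass
from typing import Callable, Sequence
import numpy as np

@dataclass
class Parameters:
    nominal: np.ndarray      # alpha^0, the nominal/mean parameter values
    std_dev: np.ndarray      # assigned standard deviations ("ranges of variation")

@dataclass
class Domain:
    # Each endpoint is a function of the full parameter vector alpha.
    lower: Sequence[Callable[[np.ndarray], float]]   # a_i(alpha) (or c_j(alpha))
    upper: Sequence[Callable[[np.ndarray], float]]   # b_i(alpha) (or d_j(alpha))

    def bounds(self, alpha: np.ndarray):
        return [(lo(alpha), up(alpha)) for lo, up in zip(self.lower, self.upper)]

# Example: a one-dimensional slab whose extrapolated boundary depends on alpha[1].
params = Parameters(nominal=np.array([1.0, 0.05]), std_dev=np.array([0.02, 0.005]))
omega_x = Domain(lower=[lambda a: 0.0], upper=[lambda a: a[0] + 2.0 * a[1]])
print(omega_x.bounds(params.nominal))   # domain evaluated at the nominal parameters
```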
The nominal (or "base-case") solutions of Equations (1)–(3), denoted as $\mathbf{u}^0(\mathbf{x})$ and $\mathbf{v}^0(\mathbf{y})$, are obtained by solving these equations at the nominal parameter values $\boldsymbol{\alpha}^0$, i.e.,
$$\mathbf{N}^{(I)}[\mathbf{u}^0(\mathbf{x});\boldsymbol{\alpha}^0] = \mathbf{Q}^{(I)}(\boldsymbol{\alpha}^0;\mathbf{x}), \quad \mathbf{x}\in\Omega_x(\boldsymbol{\alpha}^0) \qquad (4)$$
$$\mathbf{N}^{(II)}[\mathbf{v}^0(\mathbf{y});\boldsymbol{\alpha}^0] = \mathbf{Q}^{(II)}(\boldsymbol{\alpha}^0;\mathbf{y}), \quad \mathbf{y}\in\Omega_y(\boldsymbol{\alpha}^0) \qquad (5)$$
$$\mathbf{B}[\mathbf{u}^0(\mathbf{x}),\mathbf{v}^0(\mathbf{y});\boldsymbol{\alpha}^0;\mathbf{x},\mathbf{y}] = \mathbf{0}, \quad \mathbf{x}\in\partial\Omega_x(\boldsymbol{\alpha}^0),\; \mathbf{y}\in\partial\Omega_y(\boldsymbol{\alpha}^0) \qquad (6)$$
The response considered in this work is a generic nonlinear function-valued operator, denoted as follows:
$$R[\mathbf{u}(\mathbf{x}),\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y}] \qquad (7)$$
The nominal value of the response, denoted as $R^0 \equiv R[\mathbf{u}^0(\mathbf{x}),\mathbf{v}^0(\mathbf{y});\boldsymbol{\alpha}^0;\mathbf{x},\mathbf{y}]$, is determined by computing the response at the nominal values $\boldsymbol{\alpha}^0$, $\mathbf{u}^0(\mathbf{x})$ and $\mathbf{v}^0(\mathbf{y})$. The true values of the imprecisely known model, interface and boundary parameters may differ from their nominal (average, or "base-case") values by variations denoted as $\delta\boldsymbol{\alpha} \equiv (\delta\alpha_1, \ldots, \delta\alpha_{Z_\alpha})^\dagger$, where $\delta\alpha_i \equiv \alpha_i - \alpha_i^0$, $i = 1, \ldots, Z_\alpha$. In turn, the parameter variations $\delta\boldsymbol{\alpha}$ will cause variations $\delta\mathbf{u}(\mathbf{x}) \equiv [\delta u_1(\mathbf{x}), \ldots, \delta u_{Z_u}(\mathbf{x})]^\dagger$ and $\delta\mathbf{v}(\mathbf{y}) \equiv [\delta v_1(\mathbf{y}), \ldots, \delta v_{Z_v}(\mathbf{y})]^\dagger$ in the state functions, and all of these variations will cause variations in the response $R[\mathbf{u}(\mathbf{x}),\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y}]$ around the nominal response value $R^0$. Sensitivity analysis aims at computing the functional derivatives (called "sensitivities") of the response with respect to the imprecisely known parameters $\boldsymbol{\alpha}$. Subsequently, these sensitivities can be used for a variety of purposes, including quantifying the uncertainties induced in responses by the uncertainties in the model and boundary parameters, and combining the uncertainties in computed responses with uncertainties in measured responses ("data assimilation") to obtain more accurate predictions of responses and/or parameters ("model calibration", "predictive modeling", etc.).
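As an illustration of the first of these uses, the standard first-order ("sandwich") propagation of parameter uncertainties through the sensitivities can be sketched as follows; the sensitivity values and covariance matrix below are illustrative placeholders, not results from this work.

```python
import numpy as np

# First-order ("sandwich") propagation of parameter uncertainties to a scalar
# response: var(R) ~= S C_alpha S^T, with S the row vector of sensitivities
# dR/dalpha_i evaluated at alpha^0.  Numbers are illustrative only.
S = np.array([[0.8, -1.5, 0.2]])                     # sensitivities dR/dalpha_i
C_alpha = np.diag(np.array([0.03, 0.01, 0.05])**2)   # parameter covariance matrix

var_R = (S @ C_alpha @ S.T).item()
print("approximate response standard deviation:", np.sqrt(var_R))
```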
As has been shown by Cacuci [1], the most general definition of the first-order total sensitivity of an operator-valued model response to parameter variations is provided by the first-order Gâteaux variation (G-variation) of the response under consideration. To determine the first G-variation of the response $R[\mathbf{u}(\mathbf{x}),\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y}]$, it is convenient to regard the functions appearing in the argument of the response as the components of a vector $\mathbf{e} \equiv [\mathbf{u}(\mathbf{x}),\mathbf{v}(\mathbf{y});\boldsymbol{\alpha}]$, which represents an arbitrary "point" in the combined phase–space of the state functions and parameters. The point which corresponds to the nominal values of the state functions and parameters in this phase–space is denoted as $\mathbf{e}^0 \equiv [\mathbf{u}^0(\mathbf{x}),\mathbf{v}^0(\mathbf{y});\boldsymbol{\alpha}^0]$. Analogously, it is convenient to consider the variations in the model’s state functions and parameters to be the components of a "vector of variations", $\delta\mathbf{e}$, defined as follows: $\delta\mathbf{e} \equiv [\delta\mathbf{u}(\mathbf{x}),\delta\mathbf{v}(\mathbf{y});\delta\boldsymbol{\alpha}]$. The first-order Gâteaux (G-) variation of the response $R(\mathbf{e})$, which will be denoted as $\delta R(\mathbf{e}^0;\delta\mathbf{e})$, for arbitrary variations $\delta\mathbf{e}$ in the model parameters and state functions in a neighborhood $(\mathbf{e}^0 + \varepsilon\,\delta\mathbf{e})$ around $\mathbf{e}^0$, is obtained, by definition, as follows:
$$\delta R(\mathbf{e}^0;\delta\mathbf{e}) \equiv \left\{\frac{d}{d\varepsilon}\, R\left[\mathbf{u}^0(\mathbf{x}) + \varepsilon\,\delta\mathbf{u}(\mathbf{x}),\, \mathbf{v}^0(\mathbf{y}) + \varepsilon\,\delta\mathbf{v}(\mathbf{y});\, \boldsymbol{\alpha}^0 + \varepsilon\,\delta\boldsymbol{\alpha};\, \mathbf{x},\mathbf{y}\right]\right\}_{\varepsilon=0} \qquad (8)$$
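The defining limit in Equation (8) can be checked numerically for a simple case by differencing along the direction $\delta\mathbf{e}$; the toy scalar response below is a hypothetical illustration, not one of the models treated in this work.

```python
import numpy as np

# Toy scalar response R(u, alpha) = u^2 * alpha, used only to illustrate the
# directional (Gateaux) derivative defined in Equation (8).
def R(u, alpha):
    return u**2 * alpha

u0, alpha0 = 2.0, 3.0          # nominal "state" and parameter
du, dalpha = 0.7, -0.4         # an arbitrary direction of variation (delta e)

# Exact G-variation: d/d(eps) R(u0 + eps*du, alpha0 + eps*dalpha) at eps = 0
exact = 2.0 * u0 * alpha0 * du + u0**2 * dalpha

# Central-difference approximation of the same limit
eps = 1e-6
approx = (R(u0 + eps*du, alpha0 + eps*dalpha)
          - R(u0 - eps*du, alpha0 - eps*dalpha)) / (2.0 * eps)
print(exact, approx)
```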
The unknown variations $\delta\mathbf{u}(\mathbf{x})$ and $\delta\mathbf{v}(\mathbf{y})$ in the state functions are related to the variations $\delta\boldsymbol{\alpha}$ through the equations obtained by applying the definition of the G-differential to the equations underlying the coupled nonlinear systems, i.e., Equations (1)–(3), which yields the following relations:
$$\left\{\frac{d}{d\varepsilon}\,\mathbf{N}^{(I)}\left[\mathbf{u}^0(\mathbf{x}) + \varepsilon\,\delta\mathbf{u}(\mathbf{x});\boldsymbol{\alpha}^0 + \varepsilon\,\delta\boldsymbol{\alpha}\right]\right\}_{\varepsilon=0} = \left\{\frac{d}{d\varepsilon}\,\mathbf{Q}^{(I)}(\boldsymbol{\alpha}^0 + \varepsilon\,\delta\boldsymbol{\alpha};\mathbf{x})\right\}_{\varepsilon=0}, \quad \mathbf{x}\in\Omega_x(\boldsymbol{\alpha}^0), \qquad (9)$$
$$\left\{\frac{d}{d\varepsilon}\,\mathbf{N}^{(II)}\left[\mathbf{v}^0(\mathbf{y}) + \varepsilon\,\delta\mathbf{v}(\mathbf{y});\boldsymbol{\alpha}^0 + \varepsilon\,\delta\boldsymbol{\alpha}\right]\right\}_{\varepsilon=0} = \left\{\frac{d}{d\varepsilon}\,\mathbf{Q}^{(II)}(\boldsymbol{\alpha}^0 + \varepsilon\,\delta\boldsymbol{\alpha};\mathbf{y})\right\}_{\varepsilon=0}, \quad \mathbf{y}\in\Omega_y(\boldsymbol{\alpha}^0), \qquad (10)$$
$$\left\{\frac{d}{d\varepsilon}\,\mathbf{B}\left[\mathbf{u}^0(\mathbf{x}) + \varepsilon\,\delta\mathbf{u}(\mathbf{x}),\, \mathbf{v}^0(\mathbf{y}) + \varepsilon\,\delta\mathbf{v}(\mathbf{y});\, \boldsymbol{\alpha}^0 + \varepsilon\,\delta\boldsymbol{\alpha};\, \mathbf{x},\mathbf{y}\right]\right\}_{\varepsilon=0} = \mathbf{0}, \quad \mathbf{x}\in\partial\Omega_x(\boldsymbol{\alpha}^0),\; \mathbf{y}\in\partial\Omega_y(\boldsymbol{\alpha}^0). \qquad (11)$$
Performing the differentiations with respect to $\varepsilon$ in Equations (9)–(11) and setting $\varepsilon = 0$ in the resulting expressions yields the following system of equations:
$$\delta\mathbf{N}^{(I)}\left[\mathbf{u}^0(\mathbf{x}),\boldsymbol{\alpha}^0;\delta\mathbf{u}(\mathbf{x}),\delta\boldsymbol{\alpha}\right] = \delta\mathbf{Q}^{(I)}(\boldsymbol{\alpha}^0;\delta\boldsymbol{\alpha}), \quad \mathbf{x}\in\Omega_x(\boldsymbol{\alpha}^0), \qquad (12)$$
$$\delta\mathbf{N}^{(II)}\left[\mathbf{v}^0(\mathbf{y}),\boldsymbol{\alpha}^0;\delta\mathbf{v}(\mathbf{y}),\delta\boldsymbol{\alpha}\right] = \delta\mathbf{Q}^{(II)}(\boldsymbol{\alpha}^0;\delta\boldsymbol{\alpha}), \quad \mathbf{y}\in\Omega_y(\boldsymbol{\alpha}^0), \qquad (13)$$
$$\delta\mathbf{B}(\mathbf{e}^0;\delta\mathbf{e}) = \mathbf{0}, \quad \mathbf{x}\in\partial\Omega_x(\boldsymbol{\alpha}^0),\; \mathbf{y}\in\partial\Omega_y(\boldsymbol{\alpha}^0). \qquad (14)$$
The system of equations comprising Equations (12)–(14) is called the "First-Level Forward Sensitivity System" (1st-LFSS) and could be solved to obtain the variations $\delta\mathbf{u}(\mathbf{x})$ and $\delta\mathbf{v}(\mathbf{y})$ in the state functions in terms of the parameter variations $\delta\boldsymbol{\alpha}$, which appear as sources in the 1st-LFSS equations. Subsequently, the variations $\delta\mathbf{u}(\mathbf{x})$ and $\delta\mathbf{v}(\mathbf{y})$ thus obtained could be used to compute the total sensitivity $\delta R(\mathbf{e}^0;\delta\mathbf{e})$ defined in Equation (8).
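In a finite-dimensional setting, the 1st-LFSS route amounts to one linearized solve per parameter variation, as in the following sketch; the two-equation nonlinear "model" is a hypothetical stand-in for the subsystems above.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical 2x2 nonlinear residual F(u; alpha) = N(u; alpha) - Q(alpha) = 0,
# illustrating the forward-sensitivity (1st-LFSS) route: one linear solve per
# parameter variation delta_alpha.
def F(u, alpha):
    return np.array([alpha[0]*u[0] + u[0]*u[1] - alpha[1],
                     u[1]**2 + alpha[1]*u[0] - 2.0])

alpha0 = np.array([1.5, 1.0])
u0 = fsolve(lambda u: F(u, alpha0), x0=np.ones(2))      # nominal solution u^0

def dF_du(u, alpha):       # Jacobian of F with respect to u (analytical)
    return np.array([[alpha[0] + u[1], u[0]],
                     [alpha[1],        2.0 * u[1]]])

def dF_dalpha(u, alpha):   # Jacobian of F with respect to alpha (analytical)
    return np.array([[u[0], -1.0],
                     [0.0,   u[0]]])

# 1st-LFSS analogue: (dF/du) delta_u = -(dF/dalpha) delta_alpha, once per variation
for i in range(2):
    d_alpha = np.zeros(2)
    d_alpha[i] = 1e-3
    delta_u = np.linalg.solve(dF_du(u0, alpha0), -dF_dalpha(u0, alpha0) @ d_alpha)
    print(f"delta_u induced by varying alpha[{i}]:", delta_u)
```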
The existence of the G-variations of the operators underlying the 1st-LFSS and of the total sensitivity $\delta R(\mathbf{e}^0;\delta\mathbf{e})$ does not guarantee their numerical computability. Numerical methods most often require that $\delta R(\mathbf{e}^0;\delta\mathbf{e})$ and the operators underlying the 1st-LFSS be linear in the variations $\delta\mathbf{e}$ in a neighborhood $(\mathbf{e}^0 + \varepsilon\,\delta\mathbf{e})$ around $\mathbf{e}^0$. The necessary and sufficient conditions for the G-differential $\delta W(\mathbf{e}^0;\delta\mathbf{e})$ of a nonlinear operator $W(\mathbf{e})$ to be linear in $\delta\mathbf{e}$ in a neighborhood $(\mathbf{e}^0 + \varepsilon\,\delta\mathbf{e})$ around $\mathbf{e}^0$, and thus admit partial and total G-derivatives, are as follows [6]:
(i)
$W(\mathbf{e})$ satisfies a weak Lipschitz condition at $\mathbf{e}^0$:
$$\left\| W(\mathbf{e}^0 + \varepsilon\,\mathbf{h};\mathbf{x}) - W(\mathbf{e}^0;\mathbf{x}) \right\| \le k \left\| \varepsilon\,\mathbf{e}^0 \right\|, \quad k < \infty; \qquad (15)$$
(ii)
for two arbitrary vectors of variations $\delta\mathbf{e}_1$ and $\delta\mathbf{e}_2$, the operator $W(\mathbf{e})$ satisfies the following relation:
$$W(\mathbf{e}^0 + \varepsilon\,\delta\mathbf{e}_1 + \varepsilon\,\delta\mathbf{e}_2) - W(\mathbf{e}^0 + \varepsilon\,\delta\mathbf{e}_1) - W(\mathbf{e}^0 + \varepsilon\,\delta\mathbf{e}_2) + W(\mathbf{e}^0) = o(\varepsilon). \qquad (16)$$
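Condition (ii) can be verified numerically for a smooth operator: the combination on the left side of Equation (16), divided by $\varepsilon$, should tend to zero as $\varepsilon \to 0$. The scalar operator used in the sketch below is a hypothetical example.

```python
import numpy as np

# Numerical check of the condition in Equation (16) for a smooth scalar
# operator W(e) (hypothetical example): the combination should behave as o(eps).
def W(e):
    return np.sin(e[0]) * e[1]**2

e0 = np.array([0.3, 1.2])
de1 = np.array([0.5, -0.2])
de2 = np.array([-0.1, 0.4])

for eps in [1e-1, 1e-2, 1e-3]:
    combo = (W(e0 + eps*de1 + eps*de2) - W(e0 + eps*de1)
             - W(e0 + eps*de2) + W(e0))
    print(f"eps = {eps:.0e}:  combination/eps = {combo/eps:.3e}")  # tends to zero
```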
It will henceforth be assumed that the operators $\mathbf{N}^{(I)}$, $\mathbf{N}^{(II)}$, $\mathbf{B}$, $\mathbf{Q}^{(I)}$, $\mathbf{Q}^{(II)}$ and $R$ satisfy the conditions indicated in Equations (15) and (16). Hence, Equations (12)–(14) can be written in the following form:
$$\left\{\frac{\partial\mathbf{N}^{(I)}(\mathbf{u};\boldsymbol{\alpha})}{\partial\mathbf{u}}\right\}_{(\mathbf{u}^0;\boldsymbol{\alpha}^0)}\delta\mathbf{u}(\mathbf{x}) = \left\{\mathbf{Q}_1^{(1)}(\mathbf{u};\boldsymbol{\alpha};\delta\boldsymbol{\alpha})\right\}_{(\mathbf{u}^0;\boldsymbol{\alpha}^0)}, \quad \mathbf{x}\in\Omega_x(\boldsymbol{\alpha}^0), \qquad (17)$$
$$\left\{\frac{\partial\mathbf{N}^{(II)}(\mathbf{v};\boldsymbol{\alpha})}{\partial\mathbf{v}}\right\}_{(\mathbf{v}^0,\boldsymbol{\alpha}^0)}\delta\mathbf{v}(\mathbf{y}) = \left\{\mathbf{Q}_2^{(1)}(\mathbf{v};\boldsymbol{\alpha};\delta\boldsymbol{\alpha})\right\}_{(\mathbf{v}^0,\boldsymbol{\alpha}^0)}, \quad \mathbf{y}\in\Omega_y(\boldsymbol{\alpha}^0), \qquad (18)$$
$$\left\{\frac{\partial\mathbf{B}[\mathbf{u}(\mathbf{x}),\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y}]}{\partial\mathbf{u}}\right\}_{(\mathbf{e}^0)}\delta\mathbf{u}(\mathbf{x}) + \left\{\frac{\partial\mathbf{B}[\mathbf{u}(\mathbf{x}),\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y}]}{\partial\mathbf{v}}\right\}_{(\mathbf{e}^0)}\delta\mathbf{v}(\mathbf{y}) + \left\{\frac{\partial\mathbf{B}[\mathbf{u}(\mathbf{x}),\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y}]}{\partial\boldsymbol{\alpha}}\right\}_{(\mathbf{e}^0)}\delta\boldsymbol{\alpha} = \mathbf{0}, \quad \mathbf{x}\in\partial\Omega_x(\boldsymbol{\alpha}^0),\; \mathbf{y}\in\partial\Omega_y(\boldsymbol{\alpha}^0), \qquad (19)$$
where
$$\left\{\mathbf{Q}_1^{(1)}(\mathbf{u};\boldsymbol{\alpha};\delta\boldsymbol{\alpha})\right\}_{(\mathbf{u}^0;\boldsymbol{\alpha}^0)} \equiv \left\{\frac{\partial\left[\mathbf{Q}^{(I)}(\boldsymbol{\alpha};\mathbf{x}) - \mathbf{N}^{(I)}[\mathbf{u}(\mathbf{x});\boldsymbol{\alpha}]\right]}{\partial\boldsymbol{\alpha}}\right\}_{(\mathbf{u}^0;\boldsymbol{\alpha}^0)}\delta\boldsymbol{\alpha}, \qquad (20)$$
$$\left\{\mathbf{Q}_2^{(1)}(\mathbf{v};\boldsymbol{\alpha};\delta\boldsymbol{\alpha})\right\}_{(\mathbf{v}^0,\boldsymbol{\alpha}^0)} \equiv \left\{\frac{\partial\left[\mathbf{Q}^{(II)}(\boldsymbol{\alpha};\mathbf{y}) - \mathbf{N}^{(II)}[\mathbf{v}(\mathbf{y});\boldsymbol{\alpha}]\right]}{\partial\boldsymbol{\alpha}}\right\}_{(\mathbf{v}^0,\boldsymbol{\alpha}^0)}\delta\boldsymbol{\alpha}. \qquad (21)$$
The partial G-derivatives $\partial\mathbf{N}^{(I)}[\mathbf{u}(\mathbf{x});\boldsymbol{\alpha}]/\partial\mathbf{u}$, $\partial\mathbf{N}^{(II)}[\mathbf{v}(\mathbf{y});\boldsymbol{\alpha}]/\partial\mathbf{v}$, $\partial\mathbf{B}[\mathbf{u}(\mathbf{x}),\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y}]/\partial\mathbf{u}$, $\partial\mathbf{B}[\mathbf{u}(\mathbf{x}),\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y}]/\partial\mathbf{v}$, $\partial\mathbf{N}^{(I)}[\mathbf{u}(\mathbf{x});\boldsymbol{\alpha}]/\partial\boldsymbol{\alpha}$, $\partial\mathbf{N}^{(II)}[\mathbf{v}(\mathbf{y});\boldsymbol{\alpha}]/\partial\boldsymbol{\alpha}$, $\partial\mathbf{Q}^{(I)}(\boldsymbol{\alpha};\mathbf{x})/\partial\boldsymbol{\alpha}$, $\partial\mathbf{Q}^{(II)}(\boldsymbol{\alpha};\mathbf{y})/\partial\boldsymbol{\alpha}$ and $\partial\mathbf{B}[\mathbf{u}(\mathbf{x}),\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y}]/\partial\boldsymbol{\alpha}$, which appear in Equations (17)–(21), are matrices of corresponding dimensions. When the G-variation $\delta R(\mathbf{e}^0;\delta\mathbf{e})$ is linear in $\delta\mathbf{e}$, it is called the G-differential of $R(\mathbf{e})$ and is usually denoted as $DR(\mathbf{e}^0;\delta\mathbf{e})$. Furthermore, the result of the differentiations indicated on the right side of the definition provided in Equation (8) can be written as follows:
$$DR(\mathbf{e}^0;\delta\mathbf{e}) = \{DR(\mathbf{e}^0;\delta\boldsymbol{\alpha})\}_{dir} + \{DR(\mathbf{e}^0;\delta\mathbf{u},\delta\mathbf{v})\}_{ind}, \qquad (22)$$
where the so-called “direct-effect” term is defined as follows:
$$\{DR(\mathbf{e}^0;\delta\boldsymbol{\alpha})\}_{dir} \equiv \left\{\frac{\partial R}{\partial\boldsymbol{\alpha}}\right\}_{(\mathbf{e}^0)}\delta\boldsymbol{\alpha} = \sum_{i=1}^{Z_\alpha}\left\{\frac{\partial R}{\partial\alpha_i}\right\}_{(\mathbf{e}^0)}\delta\alpha_i, \qquad (23)$$
while the so-called “indirect-effect” term is defined as follows:
$$\{DR(\mathbf{e}^0;\delta\mathbf{u},\delta\mathbf{v})\}_{ind} \equiv \left\{\frac{\partial R}{\partial\mathbf{u}}\right\}_{(\mathbf{e}^0)}\delta\mathbf{u}(\mathbf{x}) + \left\{\frac{\partial R}{\partial\mathbf{v}}\right\}_{(\mathbf{e}^0)}\delta\mathbf{v}(\mathbf{y}). \qquad (24)$$
In Equations (23) and (24), the vectors $\partial R/\partial\mathbf{u}$, $\partial R/\partial\mathbf{v}$ and $\partial R/\partial\boldsymbol{\alpha}$ comprise, as components, the first-order partial G-derivatives computed at the phase–space point $\mathbf{e}^0$. The G-differential $DR(\mathbf{e}^0;\delta\mathbf{e})$ is an operator defined on the same domain as $R(\mathbf{e})$ and has the same range as $R(\mathbf{e})$. The G-differential $DR(\mathbf{e}^0;\delta\mathbf{e})$ satisfies the relation $R(\mathbf{e}^0 + \varepsilon\,\delta\mathbf{e}) - R(\mathbf{e}^0) = \varepsilon\,DR(\mathbf{e}^0;\delta\mathbf{e}) + \Delta(\varepsilon\,\delta\mathbf{e})$, with $\lim_{\varepsilon\to 0}\left[\Delta(\varepsilon\,\delta\mathbf{e})\right]/\varepsilon = 0$.
The "direct-effect" term $\{DR(\mathbf{e}^0;\delta\boldsymbol{\alpha})\}_{dir}$ depends only on the parameter variations $\delta\boldsymbol{\alpha}$, so it can be computed immediately, since it does not depend on the variations $\delta\mathbf{u}$ and $\delta\mathbf{v}$. On the other hand, the "indirect-effect" term $\{DR(\mathbf{e}^0;\delta\mathbf{u},\delta\mathbf{v})\}_{ind}$ depends indirectly on the parameter variations $\delta\boldsymbol{\alpha}$ through the as yet unknown variations $\delta\mathbf{u}(\mathbf{x})$ and $\delta\mathbf{v}(\mathbf{y})$ in the state functions, and these variations can be determined only by solving the 1st-LFSS repeatedly, for every possible parameter variation $\delta\alpha_i$, $i = 1, \ldots, Z_\alpha$. The need for these prohibitively expensive computations can be circumvented by extending the concepts underlying the "Adjoint Sensitivity Analysis Methodology" (ASAM) conceived by Cacuci [1] to construct a "First-Level Adjoint Sensitivity System" (1st-LASS), the solution of which will be independent of the variations $\delta\boldsymbol{\alpha}$, $\delta\mathbf{u}(\mathbf{x})$ and $\delta\mathbf{v}(\mathbf{y})$. Subsequently, the solution of the 1st-LASS will be used to compute the indirect-effect term $\{DR(\mathbf{e}^0;\delta\mathbf{u},\delta\mathbf{v})\}_{ind}$ by constructing an equivalent expression for this term which does not involve the unknown variations $\delta\mathbf{u}(\mathbf{x})$ and $\delta\mathbf{v}(\mathbf{y})$.
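The computational advantage alluded to here can already be seen in a finite-dimensional linear analogue: for a functional response $R = \mathbf{c}^\dagger\mathbf{u}$ subject to $\mathbf{A}(\boldsymbol{\alpha})\mathbf{u} = \mathbf{Q}(\boldsymbol{\alpha})$, a single adjoint solve yields all parameter sensitivities. The small system in the sketch below is a hypothetical illustration of this principle, not the 1st-LASS constructed in Section 2.1.

```python
import numpy as np

# Finite-dimensional analogue of the adjoint idea: response R = c . u with
# A(alpha) u = Q(alpha).  One adjoint solve replaces one forward solve per parameter.
def A(alpha):
    return np.array([[2.0 + alpha[0], 1.0],
                     [1.0,            3.0 + alpha[1]]])

def Q(alpha):
    return np.array([1.0, alpha[0] * alpha[1]])

c = np.array([1.0, 2.0])            # defines the response functional R = c . u
alpha0 = np.array([0.5, 0.2])
u0 = np.linalg.solve(A(alpha0), Q(alpha0))

# Single adjoint solve: A^T psi = c
psi = np.linalg.solve(A(alpha0).T, c)

# Sensitivities dR/dalpha_i = psi . (dQ/dalpha_i - dA/dalpha_i u0)
dA = [np.array([[1.0, 0.0], [0.0, 0.0]]), np.array([[0.0, 0.0], [0.0, 1.0]])]
dQ = [np.array([0.0, alpha0[1]]), np.array([0.0, alpha0[0]])]
for i in range(2):
    dR = psi @ (dQ[i] - dA[i] @ u0)
    print(f"dR/dalpha[{i}] =", dR)
```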

2.1. Spectral Representation of the System Response’s Indirect-Effect Term

Since the indirect-effect term $\{DR(\mathbf{e}^0;\delta\mathbf{u},\delta\mathbf{v})\}_{ind}$ is defined on the same domain $\Omega_x(\boldsymbol{\alpha}^0)\times\Omega_y(\boldsymbol{\alpha}^0)$ as $R(\mathbf{e}^0)$ and has the same range as $R(\mathbf{e}^0)$, it follows that it can be represented in the following form:
$$\{DR(\mathbf{e}^0;\delta\mathbf{u},\delta\mathbf{v})\}_{ind} = \sum_{m_1=0}^{\infty}\cdots\sum_{m_{Z_x}=0}^{\infty}\;\sum_{n_1=0}^{\infty}\cdots\sum_{n_{Z_y}=0}^{\infty}\left\{F_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{u};\mathbf{v};\boldsymbol{\alpha};\delta\mathbf{u}) + G_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{u};\mathbf{v};\boldsymbol{\alpha};\delta\mathbf{v})\right\}_{(\mathbf{e}^0)} \times P_{m_1}(x_1)\cdots P_{m_{Z_x}}(x_{Z_x})\,O_{n_1}(y_1)\cdots O_{n_{Z_y}}(y_{Z_y}), \qquad (25)$$
where
$$\left\{F_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{u};\mathbf{v};\boldsymbol{\alpha};\delta\mathbf{u})\right\}_{(\mathbf{e}^0)} \equiv \int_{a_1(\boldsymbol{\alpha}^0)}^{b_1(\boldsymbol{\alpha}^0)}dx_1\cdots\int_{a_{Z_x}(\boldsymbol{\alpha}^0)}^{b_{Z_x}(\boldsymbol{\alpha}^0)}dx_{Z_x}\int_{c_1(\boldsymbol{\alpha}^0)}^{d_1(\boldsymbol{\alpha}^0)}dy_1\cdots\int_{c_{Z_y}(\boldsymbol{\alpha}^0)}^{d_{Z_y}(\boldsymbol{\alpha}^0)}dy_{Z_y}\left\{\frac{\partial R[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x};\mathbf{y}]}{\partial\mathbf{u}}\,\delta\mathbf{u}\right\}_{(\mathbf{e}^0)} \times P_{m_1}(x_1)\cdots P_{m_{Z_x}}(x_{Z_x})\,O_{n_1}(y_1)\cdots O_{n_{Z_y}}(y_{Z_y}), \qquad (26)$$
and
$$\left\{G_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{u};\mathbf{v};\boldsymbol{\alpha};\delta\mathbf{v})\right\}_{(\mathbf{e}^0)} \equiv \int_{a_1(\boldsymbol{\alpha}^0)}^{b_1(\boldsymbol{\alpha}^0)}dx_1\cdots\int_{a_{Z_x}(\boldsymbol{\alpha}^0)}^{b_{Z_x}(\boldsymbol{\alpha}^0)}dx_{Z_x}\int_{c_1(\boldsymbol{\alpha}^0)}^{d_1(\boldsymbol{\alpha}^0)}dy_1\cdots\int_{c_{Z_y}(\boldsymbol{\alpha}^0)}^{d_{Z_y}(\boldsymbol{\alpha}^0)}dy_{Z_y}\left\{\frac{\partial R[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x};\mathbf{y}]}{\partial\mathbf{v}}\,\delta\mathbf{v}\right\}_{(\mathbf{e}^0)} \times P_{m_1}(x_1)\cdots P_{m_{Z_x}}(x_{Z_x})\,O_{n_1}(y_1)\cdots O_{n_{Z_y}}(y_{Z_y}). \qquad (27)$$
The following designations have been used in Equations (26) and (27): (i) the quantities $P_{m_i}(x_i)$, $i = 1, \ldots, Z_x$, denote the spectral basis functions (e.g., orthogonal polynomials, Fourier exponential/trigonometric functions) corresponding to the domain $\Omega_x$; (ii) the quantities $O_{n_i}(y_i)$, $i = 1, \ldots, Z_y$, denote the spectral basis functions corresponding to the domain $\Omega_y$; and (iii) the quantities $\{F_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{u};\mathbf{v};\boldsymbol{\alpha};\delta\mathbf{u})\}_{(\mathbf{e}^0)}$ and $\{G_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{u};\mathbf{v};\boldsymbol{\alpha};\delta\mathbf{v})\}_{(\mathbf{e}^0)}$ denote the corresponding generalized spectral (Fourier) coefficients.
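As a one-dimensional illustration of such a spectral representation, the sketch below computes Fourier–Legendre coefficients of a smooth function by Gauss–Legendre quadrature and reconstructs the function from the truncated expansion; the test function and truncation order are arbitrary choices, not quantities from this work.

```python
import numpy as np
from numpy.polynomial import legendre

# 1-D illustration of a spectral representation: expand f(x) on [-1, 1] in
# Legendre polynomials P_m(x); coefficients are computed by Gauss-Legendre quadrature.
f = lambda x: np.exp(x) * np.sin(2.0 * x)      # arbitrary smooth test function
M = 10                                         # truncation order (user-chosen accuracy)

nodes, weights = legendre.leggauss(32)         # quadrature nodes/weights on [-1, 1]
coeffs = np.zeros(M + 1)
for m in range(M + 1):
    Pm = legendre.Legendre.basis(m)(nodes)
    # Fourier-Legendre coefficient: (2m+1)/2 * integral of f * P_m over [-1, 1]
    coeffs[m] = 0.5 * (2*m + 1) * np.sum(weights * f(nodes) * Pm)

x = np.linspace(-1.0, 1.0, 5)
reconstruction = legendre.Legendre(coeffs)(x)  # truncated spectral expansion
print(np.max(np.abs(reconstruction - f(x))))   # small truncation error
```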
The appearance of the "difficult to compute" variations $\delta\mathbf{u}$ and $\delta\mathbf{v}$ in the functionals defined in Equations (26) and (27), respectively, can be eliminated by expressing the right sides of Equations (26) and (27) in terms of adjoint functions that will be obtained by implementing the following sequence of steps:
• Introduce a Hilbert space pertaining to the domain $\Omega_x(\boldsymbol{\alpha}^0)\times\Omega_y(\boldsymbol{\alpha}^0)$, denoted as $H$, comprising square-integrable vector-valued elements of the form $\mathbf{f}^{(\alpha)}(\mathbf{x},\mathbf{y}) \equiv [\mathbf{g}^{(\alpha)}(\mathbf{x},\mathbf{y}),\mathbf{h}^{(\alpha)}(\mathbf{x},\mathbf{y})]^\dagger \in H$ and $\mathbf{f}^{(\beta)}(\mathbf{x},\mathbf{y}) \equiv [\mathbf{g}^{(\beta)}(\mathbf{x},\mathbf{y}),\mathbf{h}^{(\beta)}(\mathbf{x},\mathbf{y})]^\dagger \in H$, where $\mathbf{g}^{(\alpha)}(\mathbf{x},\mathbf{y}) \equiv [g_1^{(\alpha)}(\mathbf{x},\mathbf{y}), \ldots, g_{Z_u}^{(\alpha)}(\mathbf{x},\mathbf{y})]^\dagger$, $\mathbf{g}^{(\beta)}(\mathbf{x},\mathbf{y}) \equiv [g_1^{(\beta)}(\mathbf{x},\mathbf{y}), \ldots, g_{Z_u}^{(\beta)}(\mathbf{x},\mathbf{y})]^\dagger$, $\mathbf{h}^{(\alpha)}(\mathbf{x},\mathbf{y}) \equiv [h_1^{(\alpha)}(\mathbf{x},\mathbf{y}), \ldots, h_{Z_v}^{(\alpha)}(\mathbf{x},\mathbf{y})]^\dagger$, and $\mathbf{h}^{(\beta)}(\mathbf{x},\mathbf{y}) \equiv [h_1^{(\beta)}(\mathbf{x},\mathbf{y}), \ldots, h_{Z_v}^{(\beta)}(\mathbf{x},\mathbf{y})]^\dagger$.
• Define the inner product, denoted as $\langle \mathbf{f}^{(\alpha)}(\mathbf{x},\mathbf{y}), \mathbf{f}^{(\beta)}(\mathbf{x},\mathbf{y}) \rangle$, between two elements of $H$, as follows:
$$\left\langle \mathbf{f}^{(\alpha)}(\mathbf{x},\mathbf{y}), \mathbf{f}^{(\beta)}(\mathbf{x},\mathbf{y}) \right\rangle \equiv \int_{a_1(\boldsymbol{\alpha}^0)}^{b_1(\boldsymbol{\alpha}^0)}\cdots\int_{a_{Z_x}(\boldsymbol{\alpha}^0)}^{b_{Z_x}(\boldsymbol{\alpha}^0)}\int_{c_1(\boldsymbol{\alpha}^0)}^{d_1(\boldsymbol{\alpha}^0)}\cdots\int_{c_{Z_y}(\boldsymbol{\alpha}^0)}^{d_{Z_y}(\boldsymbol{\alpha}^0)}\left[\mathbf{g}^{(\alpha)}(\mathbf{x},\mathbf{y})\cdot\mathbf{g}^{(\beta)}(\mathbf{x},\mathbf{y}) + \mathbf{h}^{(\alpha)}(\mathbf{x},\mathbf{y})\cdot\mathbf{h}^{(\beta)}(\mathbf{x},\mathbf{y})\right]d\mathbf{x}\,d\mathbf{y}, \qquad (28)$$
    where
$$\mathbf{g}^{(\alpha)}(\mathbf{x},\mathbf{y})\cdot\mathbf{g}^{(\beta)}(\mathbf{x},\mathbf{y}) \equiv \sum_{n=1}^{Z_u} g_n^{(\alpha)}(\mathbf{x},\mathbf{y})\,g_n^{(\beta)}(\mathbf{x},\mathbf{y}), \qquad (29)$$
    and
$$\mathbf{h}^{(\alpha)}(\mathbf{x},\mathbf{y})\cdot\mathbf{h}^{(\beta)}(\mathbf{x},\mathbf{y}) \equiv \sum_{n=1}^{Z_v} h_n^{(\alpha)}(\mathbf{x},\mathbf{y})\,h_n^{(\beta)}(\mathbf{x},\mathbf{y}). \qquad (30)$$
• Recast Equations (17) and (18) in the following matrix form:
$$\left\{\begin{bmatrix} \dfrac{\partial\mathbf{N}^{(I)}(\mathbf{u};\boldsymbol{\alpha})}{\partial\mathbf{u}} & \mathbf{0} \\ \mathbf{0} & \dfrac{\partial\mathbf{N}^{(II)}(\mathbf{v};\boldsymbol{\alpha})}{\partial\mathbf{v}} \end{bmatrix}\right\}_{(\mathbf{e}^0)}\begin{pmatrix} \delta\mathbf{u}(\mathbf{x}) \\ \delta\mathbf{v}(\mathbf{y}) \end{pmatrix} = \left\{\begin{pmatrix} \mathbf{Q}_1^{(1)}(\mathbf{u};\boldsymbol{\alpha};\delta\boldsymbol{\alpha}) \\ \mathbf{Q}_2^{(1)}(\mathbf{v};\boldsymbol{\alpha};\delta\boldsymbol{\alpha}) \end{pmatrix}\right\}_{(\mathbf{e}^0)}. \qquad (31)$$
• Use the definition provided in Equation (28) to form the inner product of Equation (31) with a square-integrable vector $\boldsymbol{\psi}^{(1)}(\mathbf{x},\mathbf{y}) \equiv [\boldsymbol{\psi}^{(I)}(\mathbf{x},\mathbf{y}),\boldsymbol{\psi}^{(II)}(\mathbf{x},\mathbf{y})]^\dagger \in H$ to obtain the following relation:
$$\left\langle\begin{pmatrix} \boldsymbol{\psi}^{(I)}(\mathbf{x},\mathbf{y}) \\ \boldsymbol{\psi}^{(II)}(\mathbf{x},\mathbf{y}) \end{pmatrix}, \left\{\begin{bmatrix} \dfrac{\partial\mathbf{N}^{(I)}(\mathbf{u};\boldsymbol{\alpha})}{\partial\mathbf{u}} & \mathbf{0} \\ \mathbf{0} & \dfrac{\partial\mathbf{N}^{(II)}(\mathbf{v};\boldsymbol{\alpha})}{\partial\mathbf{v}} \end{bmatrix}\right\}_{(\mathbf{e}^0)}\begin{pmatrix} \delta\mathbf{u}(\mathbf{x}) \\ \delta\mathbf{v}(\mathbf{y}) \end{pmatrix}\right\rangle = \left\langle\begin{pmatrix} \boldsymbol{\psi}^{(I)}(\mathbf{x},\mathbf{y}) \\ \boldsymbol{\psi}^{(II)}(\mathbf{x},\mathbf{y}) \end{pmatrix}, \left\{\begin{pmatrix} \mathbf{Q}_1^{(1)}(\mathbf{u};\boldsymbol{\alpha};\delta\boldsymbol{\alpha}) \\ \mathbf{Q}_2^{(1)}(\mathbf{v};\boldsymbol{\alpha};\delta\boldsymbol{\alpha}) \end{pmatrix}\right\}_{(\mathbf{e}^0)}\right\rangle. \qquad (32)$$
• Using the definition of the adjoint operator in the Hilbert space $H$, recast the left side of Equation (32) as follows:
$$\left\langle\begin{pmatrix} \boldsymbol{\psi}^{(I)}(\mathbf{x},\mathbf{y}) \\ \boldsymbol{\psi}^{(II)}(\mathbf{x},\mathbf{y}) \end{pmatrix}, \left\{\begin{bmatrix} \dfrac{\partial\mathbf{N}^{(I)}(\mathbf{u};\boldsymbol{\alpha})}{\partial\mathbf{u}} & \mathbf{0} \\ \mathbf{0} & \dfrac{\partial\mathbf{N}^{(II)}(\mathbf{v};\boldsymbol{\alpha})}{\partial\mathbf{v}} \end{bmatrix}\right\}_{(\mathbf{e}^0)}\begin{pmatrix} \delta\mathbf{u}(\mathbf{x}) \\ \delta\mathbf{v}(\mathbf{y}) \end{pmatrix}\right\rangle = \left\langle\begin{pmatrix} \delta\mathbf{u}(\mathbf{x}) \\ \delta\mathbf{v}(\mathbf{y}) \end{pmatrix}, \left\{\begin{bmatrix} \mathbf{A}^*(\mathbf{u};\boldsymbol{\alpha}) & \mathbf{0} \\ \mathbf{0} & \mathbf{B}^*(\mathbf{v};\boldsymbol{\alpha}) \end{bmatrix}\right\}_{(\mathbf{e}^0)}\begin{pmatrix} \boldsymbol{\psi}^{(I)}(\mathbf{x},\mathbf{y}) \\ \boldsymbol{\psi}^{(II)}(\mathbf{x},\mathbf{y}) \end{pmatrix}\right\rangle + \left\{BC^{(1)}\left[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\psi}^{(I)}(\mathbf{x},\mathbf{y}),\boldsymbol{\psi}^{(II)}(\mathbf{x},\mathbf{y});\delta\mathbf{u}(\mathbf{x}),\delta\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y};\delta\boldsymbol{\alpha}\right]_{\partial\Omega_x\cup\partial\Omega_y}\right\}_{(\mathbf{e}^0)}, \qquad (33)$$
where the operator $\mathbf{A}^*(\mathbf{u};\boldsymbol{\alpha})$ denotes the formal adjoint of $\partial\mathbf{N}^{(I)}(\mathbf{u};\boldsymbol{\alpha})/\partial\mathbf{u}$, the operator $\mathbf{B}^*(\mathbf{v};\boldsymbol{\alpha})$ denotes the formal adjoint of $\partial\mathbf{N}^{(II)}(\mathbf{v};\boldsymbol{\alpha})/\partial\mathbf{v}$, and where $\{BC^{(1)}[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\psi}^{(I)}(\mathbf{x},\mathbf{y}),\boldsymbol{\psi}^{(II)}(\mathbf{x},\mathbf{y});\delta\mathbf{u}(\mathbf{x}),\delta\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y};\delta\boldsymbol{\alpha}]_{\partial\Omega_x\cup\partial\Omega_y}\}_{(\mathbf{e}^0)}$ denotes the bilinear concomitant evaluated on the boundary $\partial\Omega_x(\boldsymbol{\alpha}^0)\cup\partial\Omega_y(\boldsymbol{\alpha}^0)$. The superscript "(1)" which appears in the notation of the bilinear concomitant $BC^{(1)}$ indicates that this quantity arises in conjunction with the construction of the "First-Level Adjoint Sensitivity System" (1st-LASS).
  • Replace the left-side of Equation (33) with the right-side of Equation (32) to obtain the following relation:
$$\left\langle\begin{pmatrix} \delta\mathbf{u}(\mathbf{x}) \\ \delta\mathbf{v}(\mathbf{y}) \end{pmatrix}, \left\{\begin{bmatrix} \mathbf{A}^*(\mathbf{u};\boldsymbol{\alpha}) & \mathbf{0} \\ \mathbf{0} & \mathbf{B}^*(\mathbf{v};\boldsymbol{\alpha}) \end{bmatrix}\right\}_{(\mathbf{e}^0)}\begin{pmatrix} \boldsymbol{\psi}^{(I)}(\mathbf{x},\mathbf{y}) \\ \boldsymbol{\psi}^{(II)}(\mathbf{x},\mathbf{y}) \end{pmatrix}\right\rangle = \left\langle\begin{pmatrix} \boldsymbol{\psi}^{(I)}(\mathbf{x},\mathbf{y}) \\ \boldsymbol{\psi}^{(II)}(\mathbf{x},\mathbf{y}) \end{pmatrix}, \left\{\begin{pmatrix} \mathbf{Q}_1^{(1)}(\mathbf{u};\boldsymbol{\alpha};\delta\boldsymbol{\alpha}) \\ \mathbf{Q}_2^{(1)}(\mathbf{v};\boldsymbol{\alpha};\delta\boldsymbol{\alpha}) \end{pmatrix}\right\}_{(\mathbf{e}^0)}\right\rangle - \left\{BC^{(1)}\left[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\psi}^{(I)}(\mathbf{x},\mathbf{y}),\boldsymbol{\psi}^{(II)}(\mathbf{x},\mathbf{y});\delta\mathbf{u}(\mathbf{x}),\delta\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y};\delta\boldsymbol{\alpha}\right]_{\partial\Omega_x\cup\partial\Omega_y}\right\}_{(\mathbf{e}^0)}. \qquad (34)$$
• Require the left side of Equation (34) to represent the spectral coefficients appearing in the representation of the indirect-effect term in Equation (25), i.e., the functionals defined in Equations (26) and (27); this requirement can be fulfilled by requiring the as yet undetermined (adjoint) functions $\boldsymbol{\psi}^{(I)}(\mathbf{x},\mathbf{y})$ and $\boldsymbol{\psi}^{(II)}(\mathbf{x},\mathbf{y})$ to satisfy the following equations:
$$\left\{\mathbf{A}^*(\mathbf{u};\boldsymbol{\alpha})\right\}_{(\mathbf{e}^0)}\boldsymbol{\psi}^{(I)}(\mathbf{x},\mathbf{y}) = \left\{\frac{\partial R[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x};\mathbf{y}]}{\partial\mathbf{u}}\right\}_{(\mathbf{e}^0)} P_{m_1}(x_1)\cdots P_{m_{Z_x}}(x_{Z_x})\,O_{n_1}(y_1)\cdots O_{n_{Z_y}}(y_{Z_y}), \qquad (35)$$
$$\left\{\mathbf{B}^*(\mathbf{v};\boldsymbol{\alpha})\right\}_{(\mathbf{e}^0)}\boldsymbol{\psi}^{(II)}(\mathbf{x},\mathbf{y}) = \left\{\frac{\partial R[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x};\mathbf{y}]}{\partial\mathbf{v}}\right\}_{(\mathbf{e}^0)} P_{m_1}(x_1)\cdots P_{m_{Z_x}}(x_{Z_x})\,O_{n_1}(y_1)\cdots O_{n_{Z_y}}(y_{Z_y}). \qquad (36)$$
• Since the source terms on the right sides of Equations (35) and (36) depend on the indices of the spectral basis functions, it follows that the adjoint functions $\boldsymbol{\psi}^{(I)}(\mathbf{x},\mathbf{y})$ and $\boldsymbol{\psi}^{(II)}(\mathbf{x},\mathbf{y})$ also depend on the respective indices; this dependence will henceforth be displayed explicitly by writing $\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y})$ and $\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y})$, respectively.
• The boundary, interface, and initial/final conditions for the functions $\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y})$ and $\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y})$ are now determined by imposing the following requirements:
    (a)
    Implement the boundary, interface and initial/final conditions given in Equation (19) into the bilinear concomitant in Equation (34).
    (b)
Eliminate the remaining unknown boundary, interface and initial/final values of the functions $\delta\mathbf{u}(\mathbf{x})$ and $\delta\mathbf{v}(\mathbf{y})$ from the expression of the bilinear concomitant in Equation (34) by selecting boundary, interface and initial/final conditions for the adjoint functions $\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y})$ and $\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y})$; the selected conditions must be independent of the unknown values of $\delta\mathbf{u}(\mathbf{x})$, $\delta\mathbf{v}(\mathbf{y})$ and $\delta\boldsymbol{\alpha}$, while ensuring that Equations (35) and (36) remain well posed. The boundary conditions thus chosen for the adjoint functions can be represented in operator form as follows:
$$\left\{\mathbf{B}_A^{(1)}\left[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y}),\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y}\right]\right\}_{(\mathbf{e}^0)} = \mathbf{0}, \quad \mathbf{x}\in\partial\Omega_x(\boldsymbol{\alpha}^0),\; \mathbf{y}\in\partial\Omega_y(\boldsymbol{\alpha}^0), \qquad (37)$$
where the subscript "A" indicates "adjoint" and the superscript "(1)" indicates that these boundary conditions arise in conjunction with the construction of the 1st-LASS. The selection of the boundary conditions for the adjoint functions $\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y})$ and $\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y})$ represented by Equation (37) eliminates the appearance of any unknown values of the variations $\delta\mathbf{u}(\mathbf{x})$ and $\delta\mathbf{v}(\mathbf{y})$ in the bilinear concomitant in Equation (34), reducing it to a residual quantity that contains boundary terms involving only known values of $\delta\boldsymbol{\alpha}$, $\mathbf{u}(\mathbf{x})$, $\mathbf{v}(\mathbf{y})$, $\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y})$, $\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y})$ and $\boldsymbol{\alpha}$. This residual bilinear concomitant will be denoted as $\{RC^{(1)}[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y}),\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y};\delta\boldsymbol{\alpha}]_{\partial\Omega_x\cup\partial\Omega_y}\}_{(\mathbf{e}^0)}$.
In general, this residual bilinear concomitant does not automatically vanish, although it may do so in particular instances. In principle, it could be forced to vanish, if necessary, by considering extensions, in the operator sense, of the linear operators $\mathbf{A}^*(\mathbf{u};\boldsymbol{\alpha})$ and/or $\mathbf{B}^*(\mathbf{v};\boldsymbol{\alpha})$, but such extensions seldom need to be used in practice.
• Using Equations (34)–(36) in conjunction with Equations (26) and (27) in Equation (25) yields the following expression for the indirect-effect term $\{DR(\mathbf{e}^0;\delta\mathbf{u},\delta\mathbf{v})\}_{ind}$:
$$\{DR(\mathbf{e}^0;\delta\mathbf{u},\delta\mathbf{v})\}_{ind} = \sum_{m_1=0}^{\infty}\cdots\sum_{m_{Z_x}=0}^{\infty}\;\sum_{n_1=0}^{\infty}\cdots\sum_{n_{Z_y}=0}^{\infty}\left\{\left\langle\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y}),\mathbf{Q}_1^{(1)}(\mathbf{u};\boldsymbol{\alpha};\delta\boldsymbol{\alpha})\right\rangle + \left\langle\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y}),\mathbf{Q}_2^{(1)}(\mathbf{v};\boldsymbol{\alpha};\delta\boldsymbol{\alpha})\right\rangle - RC^{(1)}\left[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y}),\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y});\boldsymbol{\alpha};\mathbf{x},\mathbf{y};\delta\boldsymbol{\alpha}\right]_{\partial\Omega_x\cup\partial\Omega_y}\right\}_{(\mathbf{e}^0)} \times P_{m_1}(x_1)\cdots P_{m_{Z_x}}(x_{Z_x})\,O_{n_1}(y_1)\cdots O_{n_{Z_y}}(y_{Z_y}) \equiv \left\{DR\left(\mathbf{e}^0;\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}},\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}};\delta\boldsymbol{\alpha}\right)\right\}_{ind}. \qquad (38)$$
As the expression in Equation (38) indicates, the desired elimination of the unknown variations $\delta\mathbf{u}$ and $\delta\mathbf{v}$ from $\{DR(\mathbf{e}^0;\delta\mathbf{u},\delta\mathbf{v})\}_{ind}$ has been accomplished by replacing them with the adjoint functions $\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}$ and $\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}$, which do not depend on any parameter variations; this fact has been underscored by indicating explicitly that the indirect-effect term can now be written in the form $\{DR(\mathbf{e}^0;\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}},\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}};\delta\boldsymbol{\alpha})\}_{ind}$.
When first introduced in Equation (32), it was not known that the adjoint functions would ultimately depend on the indices $m_1, \ldots, m_{Z_x}$ and $n_1, \ldots, n_{Z_y}$; this fact became apparent only after the right sides (i.e., sources) of Equations (35) and (36) had been constructed. To emphasize this fact, these equations are rewritten below:
$$\left\{\mathbf{A}^*(\mathbf{u};\boldsymbol{\alpha})\right\}_{(\mathbf{e}^0)}\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y}) = \left\{\frac{\partial R[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x};\mathbf{y}]}{\partial\mathbf{u}}\right\}_{(\mathbf{e}^0)} \times P_{m_1}(x_1)\cdots P_{m_{Z_x}}(x_{Z_x})\,O_{n_1}(y_1)\cdots O_{n_{Z_y}}(y_{Z_y}), \qquad (39)$$
$$\left\{\mathbf{B}^*(\mathbf{v};\boldsymbol{\alpha})\right\}_{(\mathbf{e}^0)}\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}(\mathbf{x},\mathbf{y}) = \left\{\frac{\partial R[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x};\mathbf{y}]}{\partial\mathbf{v}}\right\}_{(\mathbf{e}^0)} \times P_{m_1}(x_1)\cdots P_{m_{Z_x}}(x_{Z_x})\,O_{n_1}(y_1)\cdots O_{n_{Z_y}}(y_{Z_y}). \qquad (40)$$
The system of Equations (39) and (40), together with the adjoint boundary/initial conditions represented by Equation (37), will be called the "First-Level Adjoint Sensitivity System" (1st-LASS). The 1st-LASS is independent of the parameter variations $\delta\boldsymbol{\alpha}$ but depends on the indices $m_1, \ldots, m_{Z_x}$ and $n_1, \ldots, n_{Z_y}$. In principle, therefore, the 1st-LASS needs to be solved as many times as there are nonzero spectral basis functions, which act as sources on the right sides of the equations underlying the 1st-LASS. It is therefore very important to represent the indirect-effect term $\{DR(\mathbf{e}^0;\delta\mathbf{u},\delta\mathbf{v})\}_{ind}$ defined in Equation (25) using as few basis functions as possible, within an accuracy criterion set a priori by the user. Once the adjoint functions $\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}$ and $\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}}$ are available, they can be used in Equation (38) to compute the indirect-effect term $\{DR(\mathbf{e}^0;\boldsymbol{\psi}^{(I)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}},\boldsymbol{\psi}^{(II)}_{m_1\ldots m_{Z_x} n_1\ldots n_{Z_y}};\delta\boldsymbol{\alpha})\}_{ind}$ exactly and efficiently, using quadrature formulas, which are many orders of magnitude faster to evaluate than solving the operator (differential, integral) equations that underlie the 1st-LFSS.
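The resulting workflow (one adjoint solve per retained spectral basis function, followed by quadrature) is sketched schematically below for a one-dimensional, finite-dimensional surrogate; the adjoint operator, response gradient, source derivatives and quadrature weights used here are hypothetical placeholders for the quantities that the 1st-LASS and Equation (38) would supply for an actual model.

```python
import numpy as np
from numpy.polynomial import legendre

# Schematic 1st-CASAM workflow for an operator-valued response discretized on a
# 1-D grid: one adjoint solve per retained Legendre basis function P_m, then
# sensitivities assembled by quadrature, in the spirit of Equation (38).
# All operators below are hypothetical finite-dimensional placeholders.
n_grid, n_params, M = 40, 3, 6
x = np.linspace(-1.0, 1.0, n_grid)

A_star = np.diag(2.0 + x) + 0.1 * np.eye(n_grid, k=1)            # placeholder adjoint operator A*
dRdu = np.cos(np.pi * x)                                          # placeholder for dR/du on the grid
dQda = np.random.default_rng(0).normal(size=(n_grid, n_params))   # placeholder dQ^(1)/dalpha
w = np.full(n_grid, 2.0 / n_grid)                                 # crude quadrature weights

sens = np.zeros((M + 1, n_params))    # sensitivity of each spectral coefficient
for m in range(M + 1):
    Pm = legendre.Legendre.basis(m)(x)
    psi_m = np.linalg.solve(A_star, dRdu * Pm)        # one adjoint solve per index m
    # quadrature analogue of <psi_m, dQ^(1)/dalpha_i>
    sens[m, :] = (w * psi_m) @ dQda

print(sens.shape)   # (number of retained basis functions) x (number of parameters)
```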
In practice, orthogonal polynomials will often be selected to serve as basis functions for the spectral (Fourier) representations of the responses of interest. As is well known, orthogonal polynomials possess many recurrence relations, which can be exploited to reduce massively the number of computations that actually require solving the 1st-LASS.
In the particular case when the response is a scalar-valued functional of the system’s dependent variables, the expansion in Equation (25) reduces to a single term, so that the summations in the expression of the indirect-effect term $\{DR(\mathbf{e}^0;\delta\mathbf{u},\delta\mathbf{v})\}_{ind}$ in Equation (38) also reduce to a single term.

2.2. Pseudo-Spectral Representation of the System Response’s Indirect-Effect Term

Alternatively, Lagrange interpolation, see e.g., [7], can be used to express the indirect-effect term defined in Equation (24) approximately as follows:
$$\{DR(\mathbf{e}^0;\delta\mathbf{u},\delta\mathbf{v})\}_{ind} \cong \sum_{i,j=0}^{N} R_{ij}(\mathbf{x}_i,\mathbf{y}_j)\,CF_{ij}(\mathbf{x},\mathbf{y}), \qquad (41)$$
where the quantities $CF_{ij}(\mathbf{x},\mathbf{y})$ represent the "cardinal functions", where $\mathbf{x}_i$ and $\mathbf{y}_j$ denote the collocation (or interpolation) points, and where
$$R_{ij}(\mathbf{x}_i,\mathbf{y}_j) = \int_{a_1(\boldsymbol{\alpha}^0)}^{b_1(\boldsymbol{\alpha}^0)}\cdots\int_{a_{Z_x}(\boldsymbol{\alpha}^0)}^{b_{Z_x}(\boldsymbol{\alpha}^0)}\int_{c_1(\boldsymbol{\alpha}^0)}^{d_1(\boldsymbol{\alpha}^0)}\cdots\int_{c_{Z_y}(\boldsymbol{\alpha}^0)}^{d_{Z_y}(\boldsymbol{\alpha}^0)}\left[\left(\frac{\partial R}{\partial\mathbf{u}}\right)_{(\mathbf{e}^0)}\delta\mathbf{u}(\mathbf{x}) + \left(\frac{\partial R}{\partial\mathbf{v}}\right)_{(\mathbf{e}^0)}\delta\mathbf{v}(\mathbf{y})\right]\delta(\mathbf{x}-\mathbf{x}_i)\,\delta(\mathbf{y}-\mathbf{y}_j)\,d\mathbf{x}\,d\mathbf{y}. \qquad (42)$$
The cardinal functions $CF_{ij}(\mathbf{x},\mathbf{y})$ are also called [3] the "fundamental polynomials for pointwise interpolation", the "elements of the cardinal basis", the "Lagrange basis", or the "shape functions". Depending on the domains of definition $\mathbf{x}\in\Omega_x(\boldsymbol{\alpha}^0)$, $\mathbf{y}\in\Omega_y(\boldsymbol{\alpha}^0)$ and on the choices of weight functions, particularly important choices for constructing the cardinal functions are the Chebyshev, Legendre, Gegenbauer, Hermite and Laguerre polynomials, and Whittaker’s "sinc" function. In several dimensions, it is most efficient to use a tensor-product basis, i.e., basis functions that are products of one-dimensional basis functions. Particularly efficient computational procedures can be constructed when both the basis functions and the grid are tensor products of one-dimensional functions and grids, respectively. Using trigonometric functions, Chebyshev polynomials, or rational Chebyshev functions as basis functions enables the use of the Fast Fourier Transform, which further enhances computational efficiency.
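A minimal sketch of such cardinal (Lagrange) functions on Chebyshev collocation points is given below; the interpolated function is an arbitrary test choice.

```python
import numpy as np

# Lagrange cardinal functions CF_i(x) on Chebyshev-Lobatto points: CF_i(x_k) = delta_ik,
# so any function is interpolated as sum_i f(x_i) * CF_i(x).  Illustration only.
N = 12
k = np.arange(N + 1)
x_nodes = np.cos(np.pi * k / N)                 # Chebyshev-Lobatto collocation points

def cardinal(i, x):
    """Lagrange cardinal function attached to node x_i, evaluated at x."""
    terms = [(x - x_nodes[j]) / (x_nodes[i] - x_nodes[j])
             for j in range(N + 1) if j != i]
    return np.prod(terms, axis=0)

f = lambda x: np.exp(x) * np.sin(2.0 * x)       # arbitrary smooth test function
x_eval = np.linspace(-1.0, 1.0, 7)
interp = sum(f(x_nodes[i]) * cardinal(i, x_eval) for i in range(N + 1))
print(np.max(np.abs(interp - f(x_eval))))       # small interpolation error on [-1, 1]
```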
Following established practice [3], "collocation points" and "interpolation points" will be used as synonyms in this work, as will be the terms "collocation" and "pseudospectral" when referring to the fact that interpolatory methods will be used to determine the yet unknown indirect-effect term $\{DR(\mathbf{e}^0;\delta\mathbf{u},\delta\mathbf{v})\}_{ind}$ by expressing it in terms of adjoint functions specifically developed for each of the collocation/interpolation points. The reason that "collocation" methods are alternatively labeled "pseudospectral" is that the optimum choice of the interpolation points makes collocation methods identical with the Galerkin method if the inner products are evaluated by "Gaussian integration". It is important to note that neither the cardinal functions $CF_{ij}(\mathbf{x},\mathbf{y})$ nor the collocation points $\mathbf{x}_i$ and $\mathbf{y}_j$ are subject to model parameter uncertainties.
The functionals $R_{ij}(\mathbf{x}_i,\mathbf{y}_j)$ defined in Equation (42) can be evaluated by using adjoint functions that are the solutions of a 1st-LASS constructed by following the same conceptual steps as those leading to Equations (39) and (40) and to the adjoint boundary conditions defined by Equation (37). Omitting the intermediate steps, the final result is as follows:
$$R_{ij}(\mathbf{x}_i,\mathbf{y}_j) = \left\{\left\langle\boldsymbol{\psi}^{(A)}(\mathbf{x},\mathbf{y};\mathbf{x}_i,\mathbf{y}_j),\mathbf{Q}_1^{(1)}(\mathbf{u};\boldsymbol{\alpha};\delta\boldsymbol{\alpha})\right\rangle + \left\langle\boldsymbol{\psi}^{(B)}(\mathbf{x},\mathbf{y};\mathbf{x}_i,\mathbf{y}_j),\mathbf{Q}_2^{(1)}(\mathbf{v};\boldsymbol{\alpha};\delta\boldsymbol{\alpha})\right\rangle - \hat{C}^{(1)}\left[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\psi}^{(A)}(\mathbf{x},\mathbf{y};\mathbf{x}_i,\mathbf{y}_j),\boldsymbol{\psi}^{(B)}(\mathbf{x},\mathbf{y};\mathbf{x}_i,\mathbf{y}_j);\boldsymbol{\alpha};\mathbf{x},\mathbf{y};\delta\boldsymbol{\alpha}\right]_{\partial\Omega_x\cup\partial\Omega_y}\right\}_{(\mathbf{e}^0)}, \qquad (43)$$
where the adjoint functions $\boldsymbol{\psi}^{(A)}(\mathbf{x},\mathbf{y};\mathbf{x}_i,\mathbf{y}_j)$ and $\boldsymbol{\psi}^{(B)}(\mathbf{x},\mathbf{y};\mathbf{x}_i,\mathbf{y}_j)$ are the solutions of the following 1st-LASS:
$$\left\{\mathbf{A}^*(\mathbf{u};\boldsymbol{\alpha})\right\}_{(\mathbf{e}^0)}\boldsymbol{\psi}^{(A)}(\mathbf{x},\mathbf{y};\mathbf{x}_i,\mathbf{y}_j) = \left\{\frac{\partial R[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x};\mathbf{y}]}{\partial\mathbf{u}}\right\}_{(\mathbf{e}^0)}\delta(\mathbf{x}-\mathbf{x}_i)\,\delta(\mathbf{y}-\mathbf{y}_j), \qquad (44)$$
$$\left\{\mathbf{B}^*(\mathbf{v};\boldsymbol{\alpha})\right\}_{(\mathbf{e}^0)}\boldsymbol{\psi}^{(B)}(\mathbf{x},\mathbf{y};\mathbf{x}_i,\mathbf{y}_j) = \left\{\frac{\partial R[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\alpha};\mathbf{x};\mathbf{y}]}{\partial\mathbf{v}}\right\}_{(\mathbf{e}^0)}\delta(\mathbf{x}-\mathbf{x}_i)\,\delta(\mathbf{y}-\mathbf{y}_j), \qquad (45)$$
$$\left\{\mathbf{C}_A^{(1)}\left[\mathbf{u}(\mathbf{x});\mathbf{v}(\mathbf{y});\boldsymbol{\psi}^{(A)}(\mathbf{x},\mathbf{y};\mathbf{x}_i,\mathbf{y}_j),\boldsymbol{\psi}^{(B)}(\mathbf{x},\mathbf{y};\mathbf{x}_i,\mathbf{y}_j);\boldsymbol{\alpha};\mathbf{x},\mathbf{y}\right]\right\}_{(\mathbf{e}^0)} = \mathbf{0}, \quad \mathbf{x}\in\partial\Omega_x(\boldsymbol{\alpha}^0),\; \mathbf{y}\in\partial\Omega_y(\boldsymbol{\alpha}^0). \qquad (46)$$
It is evident from Equations (44)–(46) that the 1st-LASS must be solved anew for each of the collocation/interpolation points considered in the expansion of the indirect-effect term shown in Equation (41). The choice between the spectral expansion shown in Equation (25) and the collocation/interpolation (pseudo-spectral) expansion shown in Equation (41) depends on the specific problem under consideration; for comparable accuracy in the computation of the response sensitivities, however, the collocation/interpolation pseudo-spectral expansion of Equation (41) is often more efficient computationally than the full spectral expansion.
The practical implementation of the mathematical methodology underlying the 1st-CASAM is illustrated in Figure 1 and Figure 2. The derivation of the 1st-LFSS is illustrated in Figure 1. The path on the left side of Figure 1 depicts the derivation of the (non-discretized) 1st-LFSS starting from the differential equations underlying the original nonlinear system. On the other hand, the path on the right-side of Figure 1 depicts the derivation of the discretized 1st-LFSS starting from the discretized form of the original nonlinear equations. If this path is followed, it must be ensured that the discretized 1st-LFSS is consistent with the differential form of the 1st-LFSS in the limit of vanishing size of the discretization interval considered for the independent variables.
The derivation of the 1st-LASS is illustrated in Figure 2. The path on the left side of Figure 2 depicts the derivation of the (non-discretized) 1st-LASS starting from the differential form of the 1st-LFSS. On the other hand, the path on the right side of Figure 2 depicts the derivation of the discretized 1st-LASS starting from the discretized 1st-LFSS. If this path is chosen, the consistency of the discretized 1st-LASS with the differential form of the 1st-LFSS must again be ensured.

3. Concluding Remarks

This work has presented the First-Order Comprehensive Adjoint Sensitivity Analysis Methodology (1st-CASAM) for computing efficiently the exact first-order sensitivities (i.e., functional derivatives) of operator-valued responses (i.e., model results) of general models of coupled nonlinear physical systems characterized by imprecisely known parameters, internal interfaces between the coupled systems, and external boundaries. When the model response is a scalar-valued functional of the system’s dependent variables (i.e., state functions), its total sensitivity is also a functional of the variations in the model’s state functions. Being a functional of these variations, the total response sensitivity naturally defines an inner product in terms of which it can be expressed uniquely, by virtue of the well-known Riesz representation theorem (which ensures that every continuous linear functional defined on a Hilbert space can be represented uniquely as an inner product). The existence of such a natural inner product induced by a functional response enables the construction of an appropriate adjoint sensitivity system, the solution of which (i.e., the respective adjoint sensitivity functions) can always be used to compute, exactly and most efficiently, the sensitivities of a functional response to the model’s scalar parameters. When the response is a functional of the state variables, a single adjoint computation (i.e., a single solution of the adjoint sensitivity system) suffices for subsequently computing exactly all of the model’s response sensitivities to all of the model’s scalar parameters. The adjoint sensitivity system has the same dimensions as the original system, but it is always linear in the adjoint state functions; this is in contradistinction to the original system, which is usually nonlinear in its state functions. Solving the original forward system and the adjoint sensitivity system involves large-scale computations, since these systems invariably require the inversion of large matrices stemming from differential, difference, integral, and/or algebraic equations. Since the adjoint sensitivity analysis methodology requires solving the adjoint sensitivity system just once, this methodology is computationally the most advantageous to use in practice for large-scale systems involving many parameters.
On the other hand, the total sensitivity (to model parameters and state functions) of a model response which is a function-valued (as opposed to a scalar-valued) operator of the model’s state functions does not provide a natural inner product for the model/system under consideration. Without an inner product, it is not possible to construct an adjoint sensitivity system whose solution could subsequently be used for computing the response sensitivities to the model’s parameters. Therefore, an inner product must first be constructed, which enables expressing the operator-valued total response sensitivity to the variations in the state functions in terms of functionals of the system’s dependent variables (state functions). The requisite inner product can be constructed by representing the total sensitivity of the operator-valued response to the system’s state functions in terms of scalar-valued responses (functionals) using (i) spectral expansions, (ii) collocation/pseudo-spectral expansions, or (iii) combined spectral/collocation expansions. The coefficients in any of these expansions are functionals that can be represented in terms of an inner product. In turn, this inner product enables the construction of an adjoint sensitivity system, the solution of which can subsequently be used to compute exactly and efficiently the sensitivities of these coefficients to the model’s parameters. A different source term for the adjoint sensitivity system arises for each spectral coefficient or for each collocation point. Altogether, therefore, as many adjoint computations are needed as there are spectral coefficients and/or collocation points in the phase–space of independent variables. Thus, for operator-valued responses, the fundamental issue is to establish the number of collocation points and/or the number of Fourier coefficients needed to represent the response within an a priori established accuracy in the phase–space of independent variables. Subsequently, for each Fourier coefficient and/or at each collocation point, the 1st-CASAM provides the exact sensitivities in the parameter space in the computationally most efficient manner. By enabling the exact computation of operator-valued response sensitivities to internal interfaces and to external boundary parameters and conditions, the 1st-CASAM presented in this work makes it possible, inter alia, to quantify the effects of manufacturing tolerances on the responses of physical and engineering systems.
An accompanying work [8] will present the application of the 1st-CASAM developed in this work to a benchmark problem [9] that models coupled heat conduction and convection in a physical system comprising an electrically heated rod surrounded by a coolant, which simulates the geometry of a nuclear reactor. In particular, this benchmark was used [9] to verify the numerical results produced by the FLUENT Adjoint Solver [10].

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Cacuci, D.G. Sensitivity and Uncertainty Analysis: Theory; Chapman & Hall/CRC: Boca Raton, NJ, USA, 2003; Volume 1.
  2. Cacuci, D.G.; Ionescu-Bujor, M.; Navon, M.I. Sensitivity and Uncertainty Analysis: Applications to Large Scale Systems; Chapman & Hall/CRC: Boca Raton, NJ, USA, 2005; Volume 2.
  3. Cacuci, D.G. The Second-Order Adjoint Sensitivity Analysis Methodology; Taylor & Francis/CRC Press: Boca Raton, NJ, USA, 2018.
  4. Cacuci, D.G. Sensitivity Theory for Nonlinear Systems: I. Nonlinear Functional Analysis Approach. J. Math. Phys. 1981, 22, 2794–2802.
  5. Cacuci, D.G. Sensitivity Theory for Nonlinear Systems: II. Extensions to Additional Classes of Responses. J. Math. Phys. 1981, 22, 2803–2812.
  6. Rall, L.B. (Ed.) Nonlinear Functional Analysis and Applications; Academic Press: New York, NY, USA, 1971.
  7. Boyd, J.P. Chebyshev and Fourier Spectral Methods, 2nd ed.; Dover Publications Inc.: Mineola, NY, USA, 2000.
  8. Cacuci, D.G. Adjoint Method for Computing Operator-Valued Response Sensitivities to Imprecisely Known Parameters, Internal Interfaces and Boundaries of Coupled Nonlinear Systems: II. Application to a Nuclear Heat Removal Benchmark. J. Nucl. Eng. 2020, 1, 18–45.
  9. Cacuci, D.G.; Fang, R.; Ilic, M.; Badea, M.C. A Heat Conduction and Convection Analytical Benchmark for Adjoint Solution Verification of CFD Codes Used in Reactor Design. Nucl. Sci. Eng. 2015, 182, 452–480.
  10. ANSYS® Academic Research, Release 16.0. FLUENT Adjoint Solver; ANSYS, Inc.: Pittsburgh, PA, USA, 2015.
Figure 1. Implementation of the computational path for solving numerically the First-Level Forward Sensitivity System (1st-LFSS) to compute response sensitivities using forward sensitivity state functions.
Figure 2. Implementation of the computational path for solving numerically the First-Level Adjoint Sensitivity System (1st-LASS) to compute response sensitivities using adjoint sensitivity state functions.
