Article

Averaged Optimization and Finite-Time Thermodynamics

Ailamazyan Program Systems Institute of Russian Academy of Sciences, Petra Pervogo st., 4a, Veskovo, Yaroslavl oblast 152021, Russia
* Authors to whom correspondence should be addressed.
Submission received: 1 July 2020 / Revised: 15 August 2020 / Accepted: 18 August 2020 / Published: 20 August 2020
(This article belongs to the Special Issue Finite-Time Thermodynamics)

Abstract

The paper considers typical extremum problems that contain mean values of control variables or some functions of these variables. Relationships between such problems and cyclic modes of dynamical systems are explained and optimality conditions for these modes are found. The paper shows how these problems are linked to the field of finite-time thermodynamics.

1. Introduction

Averaging plays a very important role in optimization problems applied to engineering. Here are a few examples:
1.
Assume that there is a bound on the source flow $g_s$ and that this constraint lowers the attainable optimal value of the production rate. If we introduce a buffer (container) fed by the source flow, we can raise the feed flow to the actual process above this bound without violating the constraint: the actual feed flow oscillates between values greater than and less than $g_s$, and only its mean value is bounded. Using this approach, we can replace the strict constraint on $g_s$ by an averaged one. If we use such a buffer to store the product flow, we can maximize not this flow itself but its mean value (Figure 1).
Let us assume that the relationship between the production rate g and the consumption q has the form presented in Figure 2.
With the help of buffers, this relationship can be improved on the interval from 0 to $q_1$: the process operates with consumption $q_1$ during some fraction of the time and with zero consumption during the remaining time. The resulting relationship between average production and average consumption is represented by the dashed line in Figure 2. This is the way the pumps of water towers operate; a numerical sketch of this time-sharing gain is given after this list.
2.
Controls can often take only discrete values; for example, a light switch is either on or off. It may happen that none of these discrete values satisfies the constraints of the original problem. If there are devices that smooth out oscillations of the control variables, the optimal mode can correspond to a switching strategy that maintains given average values of the flows. This kind of switching is the basis of electronic light dimmers.
3.
In a heat engine, the working fluid periodically makes contact with the hot and cold sources, and the properties of these contacts must be chosen such that the average properties of the working fluid satisfy the constraints of the optimal cycle problem.
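The gain from buffering in the first example is easy to check numerically. The sketch below is ours, not from the paper: it assumes a hypothetical convex production curve $g(q) = q^2$ on $[0, q_1]$ and compares static operation at the mean consumption with time-sharing between $q = 0$ and $q = q_1$.

```python
import numpy as np

# Hypothetical convex production curve on [0, q1]; the chord from (0, g(0))
# to (q1, g(q1)) lies above the curve, so time-sharing improves the output.
q1 = 1.0
g = lambda q: q**2                      # production rate vs. consumption

q_mean = 0.4                            # prescribed average consumption
gamma = q_mean / q1                     # fraction of time spent at q = q1

static_output = g(q_mean)                               # constant operation
cyclic_output = gamma * g(q1) + (1 - gamma) * g(0.0)    # switching operation
print(static_output, cyclic_output)     # 0.16 < 0.40: the averaged mode wins
```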
Averaged problems arise in finite-time thermodynamics for two main reasons:
1.
Many processes are periodic and their constraints must be satisfied on average per cycle.
2.
Interactions of thermodynamic systems are characterized by the values of extensive variables X (volume, amount of substance, internal energy, entropy), while the flows of mass and energy arising in these interactions depend on intensive variables y (temperature, pressure, molar fraction). The rate of change of the extensive variables is determined by these flows and therefore depends only on y. This means that the governing equations for thermodynamic interactions have the form
$$\frac{dX}{dt} = F(y). \qquad (1)$$
The right-hand side of (1) does not contain X, which means that the increase in the extensive variables during a given amount of time depends only on the mean value of F; it does not depend on the order in which the intensive variables take their values, provided the mean value of F remains the same. Equations such as (1) are called Lyapunov-type. They allow us to formulate the problem of optimal control for thermodynamic systems in averaged form.
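A one-line numerical check of this property (our illustration, with an arbitrary flow law F): two schedules of y(t) that take the same values in opposite order produce the same increase in X.

```python
import numpy as np

F = lambda y: y**3 - y                  # arbitrary flow law; no X on the RHS

t = np.linspace(0.0, 1.0, 100001)       # uniform grid on one time interval
y_a = np.where(t < 0.5, 0.2, 1.5)       # low value first, then high
y_b = np.where(t < 0.5, 1.5, 0.2)       # same values, opposite order

dX_a = F(y_a).mean() * (t[-1] - t[0])   # increase of X = mean(F) * duration
dX_b = F(y_b).mean() * (t[-1] - t[0])
print(dX_a, dX_b)                       # equal: only the mean of F matters
```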
Below we will consider some of these problems. The last section contains applications of methods developed in the paper to finite-time thermodynamics.
We consider dynamical systems characterized by a finite number of variables.
By a steady-state mode of a system we mean a mode such that, for every variable $y_\nu(t)$ characterizing the system, there exists a period $T_\nu$ such that the average value of $y_\nu(t)$ over this period is constant in time. Formally, this can be written as [1]
$$\frac{1}{T_\nu}\int_{t-T_\nu}^{t} y_\nu(\tau)\,d\tau = \bar{y}_\nu. \qquad (2)$$
Clearly, static modes, under which the $y_\nu(t)$ are constant for all $\nu$, satisfy this definition.
Another, more general, subclass of steady-state modes is formed by modes for which there exists a period T that is a multiple of all the periods $T_\nu$. Such modes are called cyclic.
There are also steady modes for which no common period T exists for all variables $y_\nu(t)$; this corresponds to the case where the ratio of at least two periods $T_\nu$ and $T_\mu$ is irrational. Such modes are called quasi-cyclic steady-state modes.
If the system is affected by external factors represented by stationary random processes and the mean values of the variables characterizing the system tend to some limits as the period T of averaging increases, then the steady mode is said to be stochastic.
A switch to a non-static steady-state mode may be caused either by the absence of a static mode admitted by the operating conditions of the system or by the fact that the efficiency of the system in a static mode is lower than in other modes.
Consider some examples.
  • The human organism in a steady mode in the absence of external perturbations is characterized by a constant temperature, a constant composition of arterial blood, etc. However, some quantities, such as the blood pressure and the lung volume, change periodically. This is related to the "structure" of the respiration and circulation organs.
  • A system consisting of a pump connected with a tank (e.g., a water tower) and consumers operates so that, even if the liquid consumption $\bar{G}$ is constant, the pump is sometimes completely switched off (and the liquid does not flow into the tank) and sometimes switched on with a delivery higher than $\bar{G}$, the average delivery being $\bar{G}$. If the dependence of the pump delivery g on the power expenditure S is described by a strictly convex function, then the average pump delivery is higher than that in the static mode at the same average power expenditure.
    In the rest of the paper, we mainly consider cyclic steady-state modes, among which two limit classes are distinguished. The first class includes modes in which each of the periods T ν significantly exceeds the time of relaxation processes in the system. Moreover, each of the static steady states is assumed to be stable. In this case, we can neglect the dynamics of the system and assume that under variation of the mode variables, the state variables change in accordance with the static characteristics. Such modes are said to be quasi-static.
    The second class is formed by sliding steady-state modes, in which all or some of the control variables vary with frequency so high that, due to the inertia of the object, the state variables remain virtually constant, and their values depend only on the averaged influence of the control variables.
    Although static modes are a special case of cyclic modes, below by cyclic modes we mean modes under which at least one variable of the process changes periodically in time. A cyclic mode is said to be efficient if the passage to this mode improves the efficiency of the process in comparison with the static mode.
  • Cyclic modes are typical of systems with no admissible static modes. Often a system has no static modes if the set V of admissible values of the variables is non-convex; e.g., this set may include only discrete values. This is so, for example, in a heat engine in which the working fluid contacts a heat source whose temperature can take only two values, $T_+$ (a hot source) and $T_-$ (a cold source), and the average power over a cycle is required to be maximal under certain constraints.
Cyclic processes may be organized not only in time; variables may also depend on a spatial coordinate. In this case, the parameters of the system are constant in each section of the apparatus and vary periodically from section to section.
When passing from a static mode to a cyclic mode, one needs to replace the objective function by its mean value over the cycle and to replace all or some constraints imposed at each moment of time by averaged constraints. Thus, this passage involves an operation of averaging. Before passing to a cyclic mode, one must answer the following questions:
  • Does there exist a cyclic mode satisfying the constraints of the problem?
  • Is the transition from the optimal static mode to the cyclic mode efficient?
  • What is the gain in the optimality criterion from this passage?
  • What are the optimal forms of variation of the control and state variables, optimality conditions, computational algorithms?
It is desirable to answer the first three questions without solving the fourth problem, which is rather difficult in most cases.
Usually the problem of choosing an optimal static mode of an apparatus reduces to the problem of finding the extremal value of the objective function under certain equality and inequality constraints on the variables, i.e., to a non-linear programming problem. The transition to cyclic modes extends the set of possible solutions and, depending on a particular setting, leads either to an averaged non-linear programming problem or to a variational control problem. If the problem has an optimal static mode, then we refer to this problem as the initial problem.
In the rest of the paper we consider various methods for constructing problems with larger sets of admissible solutions as compared with the initial problem; we show that there are relationships between such extended problems, which allows one to estimate solutions and values of some of them by solving others.

2. Averaged Optimization Problems and Their Optimality Conditions

In this section, we consider various methods for introducing averaging into a non-linear programming (NLP) problem and obtain optimality conditions for averaged problems. To obtain these conditions, we use a trick based on reducing any averaged problem to a canonical form and deriving necessary optimality conditions for a particular problem from those for a general problem.

2.1. Averaging of Functions Included in the Formulation of an Optimization Problem

Consider an initial NLP problem [2] in the form
$$f_0(x) \to \max,\qquad f_i(x) = 0,\ i=\overline{1,m},\qquad x \in V_x. \qquad (3)$$
On the set V x , we define a probability measure p ( x ) such that
$$\int_{V_x} p(x)\,dx = 1,\qquad p(x) \ge 0. \qquad (4)$$
The average value of the function f(x) over the interval $[0,\tau]$ can then be calculated as
$$\overline{f(x)} = \frac{1}{\tau}\int_0^\tau f(x(t))\,dt = \int_{V_x} f(x)\,p(x)\,dx. \qquad (5)$$
Let us assume that x varies with time, so that one solves problem (3) maximizing the mean value of $f_0$ rather than the value of this function itself, and requires the functions $f_i$ to vanish only on average. We then arrive at a problem of the form
$$\overline{f_0(x)} \to \max,\qquad \overline{f_i(x)} = 0,\ i=\overline{1,m}. \qquad (6)$$
The sought solution of problem (6) is a measure p(x) on $V_x$ rather than a vector x. The variable x is called randomized, and p(x) is called a generalized solution. Following A.D. Ioffe and V.M. Tikhomirov [3], we call the value of the objective functional at the optimal solution the value of the problem.
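The identity (5) is easy to verify for a switching schedule. In the sketch below (ours; $f_0$ and the switching data are arbitrary), x(t) takes the value $x_a$ on a fraction $\gamma$ of the cycle and $x_b$ on the rest, so $p(x) = \gamma\,\delta(x-x_a) + (1-\gamma)\,\delta(x-x_b)$:

```python
import numpy as np

f0 = lambda x: np.sin(3.0 * x)          # illustrative objective function

tau, x_a, x_b, gamma = 1.0, 0.5, 2.0, 0.3
t = np.linspace(0.0, tau, 100001)
x_t = np.where(t < gamma * tau, x_a, x_b)   # two-valued switching schedule

time_avg = f0(x_t).mean()                              # left-hand side of (5)
measure_avg = gamma * f0(x_a) + (1 - gamma) * f0(x_b)  # right-hand side of (5)
print(time_avg, measure_avg)            # the two averages coincide
```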

2.2. Convex Hulls—Carathéodory’s Theorem

The notion of convexity is very important for optimization problems.
1. The convex hull of a set V is the minimal convex set $\mathrm{Co}\,V$ such that $V \subseteq \mathrm{Co}\,V$.
2. The set of points lying on or below the graph of a function is called its hypograph. The convex hull $\mathrm{Co}\,f$ of a function f is the upper boundary of the convex hull of its hypograph.
3. Alternatively, the convex hull of a function f is the minimal convex function defined on the convex hull of the domain of f. For every $\tilde{x}$ from the domain of f, $\mathrm{Co}\,f(\tilde{x}) \ge f(\tilde{x})$.
Carathéodory's theorem is a fundamental result of convex analysis and geometry. It states that every point of the convex hull of a set $V \subset R^n$ can be represented as a weighted arithmetic mean (convex combination) of points of V, and the number of points needed is at most n + 1. A beautiful exposition of this theorem is given in [4].
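For a function of one variable (n = 1), the theorem says that any ordinate of the convex hull is a mixture of at most two ordinates of the function itself. A brute-force sketch (ours, with an arbitrary non-concave test function):

```python
import numpy as np

f = lambda x: np.sin(3.0 * x)           # non-concave test function
xs = np.linspace(-2.0, 2.0, 201)        # grid over the compact set V_x

def hull_ordinate(x_tilde):
    """Co f(x_tilde) as the best mixture of n + 1 = 2 base points."""
    best = f(x_tilde)
    for xa in xs:
        for xb in xs[xs > xa]:
            if xa <= x_tilde <= xb:
                g = (xb - x_tilde) / (xb - xa)        # weight of xa
                best = max(best, g * f(xa) + (1 - g) * f(xb))
    return best

print(f(0.0), hull_ordinate(0.0))       # Co f(0) ~ 1 > f(0) = 0
```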

2.3. Optimal Distribution in An Averaged NLP Problem

Let us take some $x^0 \in V_x$. If $p(x) = \delta(x - x^0)$, then problem (6) coincides with the initial problem. If the set of admissible solutions of a problem includes the set of admissible solutions of the initial NLP problem, and the optimality criteria of the two problems coincide on the set of admissible solutions of the NLP problem, then the former problem is called an extension of the NLP problem.
First, consider the special form of problem (6) with $f_i(x) = x_i$:
$$\overline{f_0(x)} \to \max,\qquad \bar{x}_i = 0,\ i=\overline{1,n}. \qquad (7)$$
The value of the problem (7) is equal to the ordinate of the convex hull of the function f 0 ( x ) on the set V x at the point x = 0 . According to Carathéodory’s theorem, constructing any ordinate of the convex hull of a function of n variables requires averaging at most n + 1 ordinates of the function f 0 ( x ) ; therefore, we can rewrite problem (7) in the form
$$\sum_{\nu=0}^{n}\gamma_\nu f_0(x^\nu) \to \max,\qquad \sum_{\nu=0}^{n}\gamma_\nu x_i^\nu = 0,\ i=\overline{1,n},\qquad \gamma_\nu \ge 0,\quad \sum_{\nu=0}^{n}\gamma_\nu = 1. \qquad (8)$$
Let us return to problem (6) and reduce it to a simpler computation of the ordinate of a convex hull. Note that problem (6) can be solved in two stages. At the first stage, we find the maximum of the function $f_0(x)$ subject to the constraint f(x) = C, where C takes all values for which the level surface f(x) = C intersects $V_x$. The problem
$$f_0(x) \to \max,\qquad f_i(x) = C_i,\ i=\overline{1,m},\qquad x \in V_x \qquad (9)$$
is a non-linear programming problem. Solving (9), we obtain a set of conditionally optimal solutions x * ( C ) and the corresponding values of the reachability function f 0 * ( C ) = f 0 ( x * ( C ) ) of the non-linear programming problem.
The following assertion holds: The optimal distribution p * ( x ) in problem (6) is concentrated at the points x * ( C ) . In other words, one needs to average only over conditionally optimal values of f 0 .

2.4. Necessary Conditions of Optimality—Kuhn-Tucker Theorem

The Kuhn–Tucker theorem generalizes the method of Lagrange multipliers to problems with inequality constraints:
$$f_0(x) \to \max_x,\qquad f_i(x) = 0,\quad \varphi_j(x) \ge 0,\qquad i=\overline{1,k},\ j=\overline{k+1,m}, \qquad (10)$$
where all functions are smooth.
The theorem states that there exists a non-zero vector of Lagrange multipliers with components $\lambda_i$, $\mu_j \ge 0$ such that the Lagrange function
$$L = \lambda_0 f_0(x) + \sum_i\lambda_i f_i(x) + \sum_j\mu_j\varphi_j(x) = R(\lambda,x) + \sum_j\mu_j\varphi_j(x) \qquad (11)$$
is stationary at the optimal solution of problem (10). The multiplier $\lambda_0$ equals either zero or one; in the former case, the solution is called degenerate.
It follows from this theorem that when $\varphi_j(x) = x_j$, we have the inequality $\partial R/\partial x_j \le 0$ at the optimal solution. A more detailed exposition can be found in [5].

2.5. Reduction to an Ordinary NLP Problem

The above considerations allow us to formulate the second stage in solving problem (6). This stage is the maximization of the average value of the function f 0 * ( C ) with the constraint that the vector C has zero mean, i.e.,
$$\overline{f_0^*(C)} \to \max,\qquad \bar{C}_i = 0,\ i=\overline{1,m},\qquad C \in V_C. \qquad (12)$$
This problem is similar to problem (7). Its value, and hence the value of problem (6), is equal to the ordinate of the convex hull of the reachability function $f_0^*(C)$ at C = 0:
$$\sup_{x\in D}\overline{f_0(x)} = \sup\left\{\overline{f_0^*(C)}\ :\ \bar{C} = 0,\ C \in V_C\right\}. \qquad (13)$$
Since the vector C is m-dimensional, the number of base points $C^\nu$ in problem (12) is at most m + 1. Thus, the distribution p(C) in problem (12) can be sought in the form
$$p(C) = \sum_{\nu=0}^{m}\gamma_\nu\,\delta(C - C^\nu). \qquad (14)$$
Since each of the base values C ν corresponds to a conditionally optimal solution x * ( C ν ) , the optimal distribution p ( x ) is also concentrated at no more than m + 1 points:
$$p(x) = \sum_{\nu=0}^{m}\gamma_\nu\,\delta(x - x^\nu). \qquad (15)$$
Substituting the distribution (15) into the expressions for f 0 ( x ) ¯ and f i ( x ) ¯ , we reduce problem (6) to the form
$$I = \sum_{\nu=0}^{m}\gamma_\nu f_0(x^\nu) \to \max,\qquad \sum_{\nu=0}^{m}\gamma_\nu f_i(x^\nu) = 0,\ i=\overline{1,m},\qquad x^\nu \in V_x,\quad \gamma_\nu \ge 0,\quad \sum_{\nu=0}^{m}\gamma_\nu = 1. \qquad (16)$$
Thus, we have reduced the problem to an ordinary NLP problem whose variables are the base values $x^\nu$ of the vector x and the weight factors $\gamma_\nu$.
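In this form the averaged problem can be handed to any NLP solver. A minimal sketch (ours) for m = 1, with $f_0(x) = \sin 3x$, the single averaged constraint $\bar{x} = 0$, and $V_x = [-2, 2]$; a local solver is used, so multistart may be needed for the global maximum:

```python
import numpy as np
from scipy.optimize import minimize

f0 = lambda x: np.sin(3.0 * x)          # objective to be averaged
f1 = lambda x: x                        # one averaged constraint, m = 1

def neg_avg(z):                         # z = (x0, x1, gamma): m + 1 = 2 points
    x0, x1, g = z
    return -(g * f0(x0) + (1 - g) * f0(x1))

cons = [{'type': 'eq',                  # gamma*f1(x0) + (1-gamma)*f1(x1) = 0
         'fun': lambda z: z[2] * f1(z[0]) + (1 - z[2]) * f1(z[1])}]
res = minimize(neg_avg, x0=[-1.0, 1.0, 0.5],
               bounds=[(-2, 2), (-2, 2), (0, 1)], constraints=cons)
print(res.x, -res.fun)                  # base points, weight, problem value
```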

2.6. Relationship between Averaged NLP Problem and the Lagrangian Function of the NLP Problem without Averaging

The Lagrangian function
$$\bar{R} = \sum_{\nu=0}^{m}\gamma_\nu f_0(x^\nu) + \sum_{i=1}^{m}\lambda_i\sum_{\nu=0}^{m}\gamma_\nu f_i(x^\nu) + \Lambda\left(1 - \sum_{\nu=0}^{m}\gamma_\nu\right) = \sum_{\nu=0}^{m}\gamma_\nu\left(f_0(x^\nu) + \sum_{i=1}^{m}\lambda_i f_i(x^\nu) - \Lambda\right) + \Lambda \qquad (17)$$
of problem (16) is related to the Lagrangian function
$$R = f_0(x) + \sum_{i=1}^{m}\lambda_i f_i(x) \qquad (18)$$
of the initial NLP problem by
$$\bar{R} = \sum_{\nu=0}^{m}\gamma_\nu\big(R(x^\nu,\lambda) - \Lambda\big) + \Lambda. \qquad (19)$$
Since $\sum_\nu\gamma_\nu = 1$, the Lagrangian function of the averaged problem equals the average value of the Lagrangian function of the initial problem over all base values $x^\nu$. Some of the weight factors $\gamma_\nu$ may vanish; then the number of base points is less than m + 1.
Let us find conditions that must hold for those x ν that have non-zero weights in (19). For this purpose, we apply the Kuhn–Tucker theorem and write the optimality conditions for problem (16) with respect to the variables γ ν :
$$\frac{\partial\bar{R}}{\partial\gamma_\nu}\,\delta\gamma_\nu \le 0. \qquad (20)$$
Since the $\gamma_\nu$ are bounded only from below ($\gamma_\nu \ge 0$), the admissible variations at a boundary point $\gamma_\nu = 0$ satisfy $\delta\gamma_\nu \ge 0$; therefore,
$$\frac{\partial\bar{R}}{\partial\gamma_\nu} = R(x^\nu) - \Lambda \le 0, \qquad (21)$$
or $R(x^\nu) \le \Lambda$. If $\gamma_\nu^* > 0$, then $\delta\gamma_\nu$ may be of any sign, and inequality (21) turns into the equality
$$R(x^\nu) = \Lambda. \qquad (22)$$
Thus, for all x ν involved in the averaged problem with non-zero weights, the Lagrangian function R of the initial non-linear programming problem attains an absolute maximum. Of course, this maximum is the same for all x ν .
The requirements that the function R must take the same value at all points $x^{\nu*}$ and that this value must be maximal give equations for the unknowns. Thus, applying the Kuhn–Tucker theorem to problem (6), we obtain a vector of Lagrange multipliers $\lambda$ for which the function $\bar{R}$ attains an absolute maximum with respect to the variables $x^\nu \in V_x$ and $\gamma_\nu \in V_\gamma$ at an element of the set D of admissible solutions to problem (6), and these multipliers satisfy the condition
$$\bar{R}(\lambda^*,\gamma_\nu^*,x^{\nu*}) = \inf_{\lambda\in V_\lambda}\sup_{\gamma_\nu,\,x^\nu}\bar{R}(\lambda,\gamma_\nu,x^\nu) = \inf_{\lambda\in V_\lambda}\sup_{x\in V_x}R(\lambda,x). \qquad (23)$$
Thus, when the reachability function $f_0^*(C)$ coincides with its convex hull at C = 0, the transition to the averaged problem is not efficient (the values of the NLP problem and problem (6) coincide). By virtue of (23), we can look for the value of the averaged problem in the form $\inf_{\lambda\in V_\lambda}\sup_{x\in V_x}R(\lambda,x)$. If the extended problem is inefficient, then we say that it is equivalent to the initial problem.
In the general case, the dimension of the vector of unknown variables and the computational complexity of problem (6) are much greater than those for the NLP problem. However, in many cases, we are interested not in the solution but in the value of the averaged problem, which shows the gain obtained by transition to the averaged setting. Some methods for estimating the value of problem (6) from above and below were proposed in [6].
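When only the value is of interest, (23) suggests computing $\inf_\lambda\sup_x R(\lambda,x)$ directly. A grid-based sketch (ours, reusing the toy data of the previous code block); for this example the estimate is tight:

```python
import numpy as np
from scipy.optimize import minimize_scalar

f0 = lambda x: np.sin(3.0 * x)
f1 = lambda x: x
xs = np.linspace(-2.0, 2.0, 4001)       # dense grid over V_x

def sup_R(lam):                         # sup_x R(lambda, x) on the grid
    return np.max(f0(xs) + lam * f1(xs))

res = minimize_scalar(sup_R, bounds=(-10.0, 10.0), method='bounded')
print(res.fun)                          # estimate of the averaged-problem value
```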

2.7. Other Forms of Averaged Extensions of the NLP Problem

Problem (6) is not the only possible extension of the NLP problem by averaging. The optimality criteria, relations, and constraints in real-life problems often include the mean values of the variables x rather than the variables themselves. For example, the performance of a distillation column is characterized by the mean, not the current, composition of the output flows, because these flows are accumulated in containers or apparatuses at the exit of the column (or attached to it). Below, we describe several possible modifications of the averaged extension [7].
  • Problem of maximizing a function of the mean value of the argument. When D is the set of admissible solutions of the initial NLP problem, i.e., D is defined by the condition f(x) = 0, and $\bar{x}$ is the mean value of the vector x on the set D, we have:
    $$f_0(\bar{x}) \to \sup_{p(x)},\qquad x \in D. \qquad (24)$$
    Since the set of values x ¯ satisfying this condition is the convex hull of D, problem (24) is equivalent to the NLP problem on the convex hull of D:
    $$f_0(x) \to \sup,\qquad x \in \mathrm{Co}\,D. \qquad (25)$$
  • Problem of maximizing the mean value of a function under constraints imposed on the mean value of the argument:
    $$\overline{f_0(x)} \to \sup,\qquad f_i(\bar{x}) = 0,\ i=\overline{1,m} \qquad (26)$$
    or, in more detail,
    $$\int_{V_x}f_0(x)\,p(x)\,dx \to \sup_{p(x)},\qquad f_i\!\left(\int_{V_x}x\,p(x)\,dx\right) = 0,\ i=\overline{1,m}. \qquad (27)$$
  • Problem of maximizing a function of the mean value of x under averaged constraints:
    $$f_0(\bar{x}) \to \sup,\qquad \overline{f_i(x)} = 0,\ i=\overline{1,m}. \qquad (28)$$
Each of the above problems is an extension of the non-linear programming problem, and the solutions of these problems are distributions p ( x ) .
Averaged problems with two types of variables. An NLP problem can be extended only with respect to some components of the solution rather than with respect to the whole solution. In practice, this situation occurs when the problem is solved repeatedly and some components (we denote them by x) can vary from one solution to another, while the remaining components must be chosen only once and then fixed. We denote the latter group of variables by y. For example, x may be the operating conditions of the process (such as flow, pressure, temperature, etc.) and y may be the design parameters of an apparatus.
If we denote
$$\overline{f(y,x)}^{\,x} = \int_{V_x}f(y,x)\,p(x)\,dx, \qquad (29)$$
a problem in which averaging is performed over only part of variables has the form
$$\overline{f_0(y,x)}^{\,x} \to \sup,\qquad \overline{f_i(y,x)}^{\,x} = 0,\ i=\overline{1,m}. \qquad (30)$$
One needs to find both the vector y and the distribution p(x) in (30).
For each fixed y, this problem coincides with the usual setting of problem (6). If we separate the randomized variables $x \in E^r$ and the deterministic variables $y \in E^s$ in the Lagrangian function R of the initial NLP problem, then we can write the optimality conditions with respect to x by analogy with problem (6) in the form (see (23))
$$R(\lambda,\gamma_\nu^*,y,x^{\nu*}) = \sup_{x\in V_x}R(\lambda,y,x),\qquad \nu=\overline{0,m}. \qquad (31)$$
In this case, if we denote the admissible set of problem (30) by $\overline{D_x(y)}^{\,x}$, then for each $y \in V_y$ there exist $\lambda(y)$ such that
$$\inf_\lambda\sup_{x\in V_x}R(\lambda,y,x) = \sup_{x\in\overline{D_x(y)}^{\,x}}\overline{f_0(y,x)}. \qquad (32)$$
The Lagrangian function attains an absolute maximum at the base values of x.
At the same time, for a fixed function p ( x ) , problem (30) becomes a usual non-linear programming problem with respect to the variables y. The Kuhn–Tucker conditions hold for this problem, which include in this case the complementary slackness conditions as well as the requirement that the function R ( λ , γ ν , y , x ν ) be stationary with respect to y, which in turn, leads to the equations
$$\frac{\partial}{\partial y_j}\sum_{\nu=0}^{m}\gamma_\nu R(\lambda,y,x^\nu) = 0,\qquad j=\overline{1,s}, \qquad (33)$$
where R is the Lagrangian function of the initial NLP problem.
Averaged problems with two types of variables are in a sense close to optimal control problems, and optimality conditions for such problems are close to the Pontryagin maximum principle.

2.8. The Algorithm for Obtaining Optimality Conditions in Averaged Problems

By an averaged problem of static optimization we mean any NLP problem in which either functions or variables are averaged with respect to all or part of the variables.
As shown above, the settings of averaged problems are very diverse. The reason for this is that a problem may contain both the mean values of functions and functions of the mean values of variables. Moreover, averaging may involve only part of the variables. Under these conditions, it is inexpedient to study each possible setting of an averaged problem. It is significantly more convenient to obtain optimality conditions for some canonical form of an averaged problem and apply them to each particular problem after having reduced the latter to this canonical form [8].
Before obtaining optimality conditions, we must answer the following two questions:
  • Is the optimal distribution, which is one of the components of the solution of an averaged problem, always concentrated at finitely many base points?
  • If the answer to the previous question is "yes", then what is the limit number of these points?
The necessary optimality conditions given below yield an affirmative answer to the first question and allow one to determine the limit number of base points.
Let y denote the vector of deterministic variables, and let x be the vector of randomized variables. For the former, we must find an optimal value, and for the latter, an optimal measure. The canonical form of the averaged problem is
$$F_0\big(\overline{f(x,y)},\,y,\,\bar{x}\big) \to \max \qquad (34)$$
under the constraints
$$F_\nu\big(\overline{f(x,y)},\,\varphi(x,y),\,\bar{x}\big) = 0,\ \nu=\overline{1,r};\qquad F_\nu\big(\overline{f(x,y)},\,\varphi(x,y),\,\bar{x}\big) \ge 0,\ \nu=\overline{r+1,m}. \qquad (35)$$
Here the bar over the symbol of a function denotes averaging over the set V x of randomized variables x, which is assumed to be compact.
Suppose that the vector x has dimension k and the vector function f has dimension n. The function F is assumed to be continuously differentiable with respect to all its variables, and f and φ are continuous in x and continuously differentiable in y.
In [8], one of the authors (A.T.) proved that the optimal measure p * ( x ) on the set of randomized variables is concentrated at no more than L + 1 base points, where L = n + k . Thus,
$$p^*(x) = \sum_{l=0}^{L}\gamma_l\,\delta(x - x^l),\qquad \gamma_l \ge 0,\quad \sum_{l=0}^{L}\gamma_l = 1. \qquad (36)$$
Therefore, for the optimal solution, we have
$$\overline{f^*(x,y)} = \sum_{l=0}^{L}\gamma_l f(x^l,y),\qquad \bar{x} = \sum_{l=0}^{L}\gamma_l x^l, \qquad (37)$$
and constraints (35) take the form
$$F_\nu\big(\bar{f},\varphi(x^l,y),\bar{x}\big) = 0,\ \nu=\overline{1,r};\qquad F_\nu\big(\bar{f},\varphi(x^l,y),\bar{x}\big) \ge 0,\ \nu=\overline{r+1,m} \qquad (38)$$
for all values of $x^l$.
These expressions turn problem (34), (35) into an ordinary NLP problem with respect to γ l , y and x l . The Kuhn–Tucker conditions reduce to the following: the Lagrangian function
$$R = F_0(\bar{f},y,\bar{x}) + \sum_{\nu=1}^{m}\lambda_\nu F_\nu\big(\bar{f},\varphi(x^l,y),\bar{x}\big) \qquad (39)$$
of this problem is stationary with respect to x l and y and is unimprovable with respect to γ l (we assume the solution is non-degenerate, so λ 0 = 1 ). To write down the optimality conditions, we introduce the notation
$$a_j = \frac{\partial R}{\partial\bar{f}_j},\qquad \beta_i = \frac{\partial R}{\partial\bar{x}_i},\qquad r_{\mu l} = \frac{\partial R}{\partial\varphi_\mu(x^l,y)}. \qquad (40)$$
Using this notation, we can write the condition that R is unimprovable with respect to γ l as follows: the expression
$$C(x) = \sum_j a_j f_j(x,y^*) + \sum_i\beta_i x_i \qquad (41)$$
attains its maximum with respect to x V x at the points x l , so that
$$x^{l*} = \arg\max_x C(x),\qquad l=\overline{1,L}; \qquad (42)$$
the condition that R is stationary with respect to y has the form
$$\frac{\partial}{\partial y_j}\left[\sum_j a_j\,\overline{f_j(x,y)} + F_0(\bar{f},\bar{x},y) + \sum_{\mu,l}r_{\mu l}\,\varphi_\mu(x^l,y)\right] = 0. \qquad (43)$$
The maximality of C ( x ) , together with equations (42), constraints (35), and the complementary slackness conditions
$$\sum_{\nu=r+1}^{m}\lambda_\nu F_\nu(\bar{f}^*,\varphi^*,\bar{x}^*) = 0,\qquad \lambda_\nu \ge 0,\ \nu=\overline{r+1,m} \qquad (44)$$
allows one to find a solution $\gamma_l^*$, $y^*$, $x^{l*}$.
When formulating a specific averaged problem, one
  • writes the conditions of the problem in the canonical form (34), (35);
  • separates the randomized and deterministic variables;
  • calculates the total number L of averagings, which is equal to the sum of the dimensions of the vector of randomized variables and of the vector of functions to be averaged;
  • constructs the functions R and C and substitutes them into expressions (42)–(44).
For example, in problem (26), we have
$$F_0 = \overline{f_0(x)},\qquad F_\nu = f_\nu(\bar{x}),\ \nu=\overline{1,m}. \qquad (45)$$
The number L equals k, and
$$R = \lambda_0\,\overline{f_0(x)} + \sum_{\nu=1}^{m}\lambda_\nu f_\nu(\bar{x}). \qquad (46)$$
In (42), we have $a_0 = \lambda_0 = 1$, $a_\nu = 0$ for $\nu > 0$, and
$$\beta_i = \sum_{\nu=1}^{m}\lambda_\nu\left.\frac{\partial f_\nu(\bar{x})}{\partial\bar{x}_i}\right|_{\bar{x}},\qquad i=\overline{1,k}. \qquad (47)$$
At the base points x l , the number of which does not exceed k + 1 , the expression
$$C(x) = f_0(x) + \sum_{i=1}^{k}x_i\sum_{\nu=1}^{m}\lambda_\nu\left.\frac{\partial f_\nu(\bar{x})}{\partial\bar{x}_i}\right|_{x^*} \qquad (48)$$
attains its maximum, and conditions (35) hold, which have the form
$$f_\nu\left(\sum_{l=0}^{k}\gamma_l x^l\right) = 0,\qquad \nu=\overline{1,m}. \qquad (49)$$

3. Non-Stationary Problems of Averaged Optimization

Consider an extremal problem of the form
$$\bar{f}_0 = \frac{1}{\tau}\int_0^\tau f_0\big(J(t),u(t)\big)\,dt \to \max_u \qquad (50)$$
subject to the constraints
$$\bar{f}_\nu = \frac{1}{\tau}\int_0^\tau f_\nu\big(J(t),u(t)\big)\,dt = 0,\qquad \nu=\overline{1,n}, \qquad (51)$$
where the functions $f_\nu: R^{k_1}\times R^{k_2}\to R$, $\nu=\overline{0,n}$, are continuous in J and u, $u \in V_u \subset R^{k_1}$ is a measurable function, the set $V_u$ is compact, and $J(t) \in V_J \subset R^{k_2}$ is a given measurable function of time. With J(t) we can associate a probability measure (distribution) p(J). If J(t) takes a value $J_k$ on a part of the interval $(0,\tau)$ of relative length $\alpha_k$, then p(J) contains a term of the form $\alpha_k\,\delta(J-J_k)$. The length of the interval $(0,\tau)$ may tend to infinity, and J(t) may be a stationary random process with distribution p(J).
The distribution p ( J ) can be written in the form
$$p(J) = \bar{p}(J) + \sum_k\alpha_k\,\delta(J-J_k). \qquad (52)$$
For problem (50), (51), let $\alpha\tau$ be the length of the part of $(0,\tau)$ on which J(t) takes one of the constant values $J_k$; here $\alpha = \sum_k\alpha_k$. We refer to $\alpha\tau$ as the total constancy interval of J(t). The remaining part $(1-\alpha)\tau$ is called the interval of variation of the parameter J.
Theorem 1.
Let $u^*(t)$ be an optimal solution; then there exists a non-zero vector $\lambda = (\lambda_0,\dots,\lambda_n)$ with $\lambda_0 \in \{0,1\}$ such that
  • on the interval of variation of the parameter J ( t )
    $$u^*(J,\lambda) = \arg\max_{u\in V_u}\sum_{\nu=0}^{n}\lambda_\nu f_\nu(J,u); \qquad (53)$$
  • on the total constancy interval of J ( t ) , the optimal solution switches between at most n + 1 base values u j , and each of these values satisfies the condition
    $$u^j = \arg\max_{u\in V_u}\sum_k\alpha_k\sum_{\nu=0}^{n}\lambda_\nu f_\nu(J_k,u),\qquad j=\overline{0,n}; \qquad (54)$$
  • the portions γ j of the constancy interval α τ on which u * ( t ) takes the respective values u j satisfy the conditions
    $$\int_{V_J}\bar{p}(J)\,f_\nu\big(J,u^*(J)\big)\,dJ + \sum_{j=0}^{n}\gamma_j\sum_k\alpha_k f_\nu(J_k,u^j) = 0,\ \nu=\overline{1,n};\qquad \sum_{j=0}^{n}\gamma_j = 1,\ \gamma_j \ge 0; \qquad (55)$$
  • the vector of multipliers λ ν , ν = 1 , n ¯ , is determined by the conditions
    $$\lambda^* = \arg\min_\lambda\left[\int_{V_J}\bar{p}(J)\sum_{\nu=0}^{n}\lambda_\nu f_\nu\big(J,u^*(J,\lambda)\big)\,dJ + \sum_{j=0}^{n}\gamma_j\sum_{\nu=0}^{n}\lambda_\nu\sum_k\alpha_k f_\nu\big(J_k,u^j(\lambda)\big)\right]. \qquad (56)$$
Thus, on the constancy intervals, the optimal solution of a problem with non-stationary parameters coincides with the solution of an averaged mathematical programming problem, and on the interval of variation of the parameter, it varies as the solution of a problem with integral constraints. This theorem was proved in [9].
Example 1.
Consider the problem of maximizing the average power $\bar{p}$ of a heat engine in which the working fluid contacts a source of variable temperature $T_0(t)$. This problem has the form
$$\bar{p} = \frac{1}{\tau}\int_0^\tau q\big(T_0(t),T(t)\big)\,dt \to \max_T \qquad (57)$$
subject to the constraint
$$\bar{\sigma} = \frac{1}{\tau}\int_0^\tau\frac{q\big(T_0(t),T(t)\big)}{T(t)}\,dt = 0. \qquad (58)$$
Here T(t) is the temperature of the working substance, q is the heat flux from the source to the working fluid, and $\bar{\sigma}$ is the mean rate of variation of the entropy of the working substance. A substantiation of the setting (57), (58) can be found in [10,11,12]. The optimality conditions (53) imply the following relation on the interval of variation of $T_0(t)$:
$$\frac{1}{T^2}\,\frac{q(T_0,T)}{\partial q(T_0,T)/\partial T} - \frac{1}{T} = \mathrm{const}. \qquad (59)$$
In particular, for the Newtonian heat transfer law $q(T_0,T) = \beta(T_0 - T)$, (59) implies
$$T^*(T_0) = m\sqrt{T_0}, \qquad (60)$$
where m is the constant equal to the mean value of the square root of the source temperature.
For example, suppose that $T_0(t)$ has a uniform distribution (for a regular function $T_0(t)$, this means that the source temperature depends linearly on time), and let $T_{02}$ and $T_{01}$ be the maximal and minimal source temperatures, respectively. Then
$$T^*(T_0) = \frac{2\left(T_{02}^{3/2}-T_{01}^{3/2}\right)}{3\,(T_{02}-T_{01})}\,\sqrt{T_0}. \qquad (61)$$
The maximum power is given by
$$\bar{p}_{\max} = \beta\left[\frac{T_{02}+T_{01}}{2} - \frac{4}{9}\,\frac{\left(T_{02}^{3/2}-T_{01}^{3/2}\right)^2}{\left(T_{02}-T_{01}\right)^2}\right]. \qquad (62)$$
Thus, a heat engine with one source may have non-zero power if the variance of the source temperature is positive.
For some laws q ( T 0 , T ) , the optimal temperature T * ( t ) may switch between two base values on intervals of constancy of the parameter T 0 .
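Formulas (60)–(62) are easy to verify numerically. The sketch below (ours) sweeps a uniformly distributed source temperature, checks that the entropy constraint (58) holds for $T^* = m\sqrt{T_0}$, and compares the averaged power with (62); the values of $\beta$, $T_{01}$, $T_{02}$ are arbitrary illustrations.

```python
import numpy as np

beta, T01, T02 = 1.0, 300.0, 400.0
T0 = np.linspace(T01, T02, 200001)      # uniform distribution of T0

m = np.sqrt(T0).mean()                  # mean square root of T0, cf. (60)
T_star = m * np.sqrt(T0)                # optimal working-fluid temperature

sigma = (beta * (T0 - T_star) / T_star).mean()   # entropy constraint (58)
p_num = (beta * (T0 - T_star)).mean()            # averaged power

p_formula = beta * ((T02 + T01) / 2
                    - (4.0 / 9.0) * (T02**1.5 - T01**1.5)**2 / (T02 - T01)**2)
print(sigma, p_num, p_formula)          # sigma ~ 0; the two powers agree
```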

4. Estimation of the Performance of Cyclic Modes

Suppose that the dynamics of a system is characterized by the differential equations
$$\dot{x}_\nu = f_\nu(x,u,a),\qquad \nu=\overline{1,m}, \qquad (63)$$
whose right-hand sides do not explicitly depend on t. Here, as in the preceding sections, x denotes the state variables, u the control variables, and a the parameters to be optimized. As a rule, boundary conditions are not fixed for equations (63); instead, the state variables are required to vary periodically:
$$x_\nu(\tau) = x_\nu(0)\quad\Longleftrightarrow\quad\int_0^\tau f_\nu(x,u,a)\,dt = 0,\qquad \nu=\overline{1,m}. \qquad (64)$$
The performance averaged over the cycle plays the role of the optimality criterion for such a cyclic process and can be written in the form
$$I = \frac{1}{\tau}\int_0^\tau f_0(x,u,a)\,dt \to \max. \qquad (65)$$
The duration τ of each cycle is one of the components of the vector a; in the general case, it is not fixed. The parameters and controls are subject to constraints a V a and u V u ; in addition to the integral constraints (64), which follow from the periodicity of the process, the problem usually contains integral constraints determined by given mean rates of consumption of some resources (resource constraints):
$$J_j = \int_0^\tau\varphi_j(x,u,a)\,dt = 0,\qquad j=\overline{1,r}. \qquad (66)$$
It is assumed that each of the functions determining the problem is continuous in all its variables and is continuously differentiable with respect to x and a.
Optimality conditions. Optimality conditions for problem (63)–(66) can be obtained by using the maximum principle [6]. Namely, if an optimal solution x * , a * , u * exists and is non-degenerate, then there exist a non-zero vector λ and a differentiable vector function ψ ( t ) such that the function
$$R = \frac{1}{\tau}f_0 + \sum_\nu\big[\dot{\psi}_\nu x_\nu + (\psi_\nu+\lambda_\nu)f_\nu\big] + \sum_j\lambda_j\varphi_j \qquad (67)$$
is stationary with respect to x and attains a maximum with respect to u, and the integral S of this function is locally unimprovable with respect to a. Thus,
$$\frac{\partial R}{\partial x_i} = 0\quad\Longrightarrow\quad\dot{\psi}_i = -\frac{\partial}{\partial x_i}\left[\frac{1}{\tau}f_0 + \sum_\nu(\psi_\nu+\lambda_\nu)f_\nu + \sum_j\lambda_j\varphi_j\right]. \qquad (68)$$
Since the values x ν ( τ ) and x ν ( 0 ) are not fixed, it follows that ψ ν ( τ ) and ψ ν ( 0 ) vanish. Introducing the notation ψ ˜ ν = ψ ν + λ ν and taking into account the equality ψ ˜ ˙ ν = ψ ˙ ν , we can rewrite condition (68) in the form
$$\dot{\tilde{\psi}}_i = -\frac{\partial}{\partial x_i}\left[\frac{1}{\tau}f_0 + \sum_\nu\tilde{\psi}_\nu f_\nu + \sum_j\lambda_j\varphi_j\right] = -\frac{\partial H}{\partial x_i}. \qquad (69)$$
For these equations, since ψ ( 0 ) and ψ ( τ ) vanish, the costate variables satisfy the periodicity conditions
$$\tilde{\psi}_\nu(0) = \tilde{\psi}_\nu(\tau)\quad\Longleftrightarrow\quad\int_0^\tau\frac{\partial H}{\partial x_\nu}\,dt = 0,\qquad \nu=\overline{1,m}. \qquad (70)$$
The conditions of maximality of R with respect to u have the form
$$u^*(t) = \arg\max_{u\in V_u}\left[\frac{f_0}{\tau} + \sum_\nu\tilde{\psi}_\nu f_\nu + \sum_j\lambda_j\varphi_j\right]. \qquad (71)$$
Finally, the optimality conditions with respect to each component a k of the vector a, including the duration τ of the cycle, yield the inequalities
$$\frac{\partial S}{\partial a_k}\,\delta a_k \le 0,\qquad k = 1, 2, \dots \qquad (72)$$
Here δ a is the cone of variations of the vector a that are admissible with respect to the inclusion a V a .
Note that the phase trajectory corresponding to an optimal cyclic process has no self-intersections [13].

5. Estimation of the Efficiency of Transition to a Cyclic Process

5.1. Conditions of Equivalence and Efficiency of a Cyclic Extension

The optimal cyclic mode problem (63)–(66) (we refer to it as Problem C) is an extension of a non-linear programming problem. Indeed, imposing the additional constraints x = const and u = const on the solution of this problem, we obtain the following optimal static mode problem (Problem S):
$$I_S = f_0(x,u,a) \to \max,\qquad f_\nu(x,u,a) = 0,\quad \varphi_j(x,u,a) = 0,\qquad u\in V_u,\ a\in V_a,\quad \nu=\overline{1,m},\ j=\overline{1,r}. \qquad (73)$$
Since the set of admissible solutions of problem (63)–(66) is larger than that of Problem S, it follows that
$$I_S^* \le I_C^*, \qquad (74)$$
where $I_C^*$ denotes the value of the optimal cyclic mode problem.
One of the problems in designing cyclic processes consists of distinguishing a class of problems for which inequality (74) turns into an equality, i.e., the cyclic extension is equivalent to the static problem. An important role in solving this problem is played by the Lagrangian function of Problem S,
$$R_S = f_0(x,u,a) + \sum_\nu\lambda_\nu f_\nu(x,u,a) + \sum_j\xi_j\varphi_j(x,u,a). \qquad (75)$$
To determine whether a cyclic process is equivalent to a static one or efficient without solving problem (63)–(66), we form averaged problems, which are in turn extensions for Problem S or C or for both. Comparing the values of these problems with I C * , we find conditions for the equivalence of a cyclic extension.
  • An upper bound for $I_C^*$ and sufficient conditions for the equivalence of a cyclic extension. Let us enlarge the set of admissible solutions of Problem C by removing the differential equations (63). We obtain Problem $\bar{S}$, which we call an estimating problem:
    $$I_{\bar{S}} = \overline{f_0(x,u,a)}^{\,x,u} \to \max,\qquad \overline{f_\nu(x,u,a)}^{\,x,u} = 0,\quad \overline{\varphi_j(x,u,a)}^{\,x,u} = 0,\qquad \nu=\overline{1,m},\ j=\overline{1,r},\quad u\in V_u,\ a\in V_a. \qquad (76)$$
    Clearly,
    $$I_{\bar{S}}^* \ge I_C^*, \qquad (77)$$
    and Problem $\bar{S}$ is an averaged extension of Problem S with the variables x and u and the parameters a. The roles of the variables x and u in the conditions of Problem $\bar{S}$ are similar, so we unite them and denote y = (x, u). In shorthand notation, this problem has the form
    $$I_{\bar{S}} = \overline{f_0(y,a)}^{\,y} \to \max,\qquad \overline{f_\nu(y,a)}^{\,y} = 0,\quad \overline{\varphi_j(y,a)}^{\,y} = 0,\qquad \nu=\overline{1,m},\ j=\overline{1,r}. \qquad (78)$$
    The value of problem (78) as an extension of the optimal static mode problem can be expressed in terms of the function R S as
    $$I_{\bar{S}}^* = \inf_{\lambda,\xi}\sup_y R_S(y,a^*,\lambda,\xi). \qquad (79)$$
    For determining the vector of parameters, we have the condition
    $$\left.\frac{\partial}{\partial a}\,\overline{R_S(y,a,\lambda,\xi)}^{\,y}\right|_{a=a^*} = 0. \qquad (80)$$
    If a * lies inside V a , then condition (80) reduces to the condition of stationarity of R S with respect to a.
    If the value $I_{\bar{S}}^*$ given by (79) equals $I_S^*$ (i.e., Problem $\bar{S}$ has a unique base solution), then inequalities (74) and (77) imply $I_C^* = I_S^*$; i.e., the static mode cannot be improved by passing to a cyclic mode. If $I_{\bar{S}}^* > I_S^*$, then the difference $\Delta_{\bar{S}}$ between these values gives an upper bound for the possible gain from the passage to a cyclic mode.
  • A lower bound for $I_C^*$. Quasi-static and sliding modes. Consider the case when x(t) and u(t) vary so slowly that the time derivatives of x(t) can be neglected. Then the relations between x and u are given, as in the static case, by f(x(t), u(t), a) = 0 for all t. The corresponding modes are said to be quasi-static. The problem of an optimal choice of x(t) and u(t) under the quasi-static conditions (Problem QS) has the form
    $$I_{QS} = \frac{1}{\tau}\int_0^\tau f_0(x,u,a)\,dt \to \max,\qquad f(x,u,a) = 0,\quad \int_0^\tau\varphi(x,u,a)\,dt = 0,\qquad u\in V_u,\ a\in V_a, \qquad (81)$$
    or, in shorthand notation,
    $$I_{QS} = \overline{f_0(y,a)}^{\,y} \to \max,\qquad \overline{\varphi(y,a)}^{\,y} = 0,\qquad y\in V_y,\ a\in V_a. \qquad (82)$$
    Here y = ( x , u ) , and the set V y is determined by the conditions u V u , a V a , and f ( x , u , a ) = 0 .
    Since any solution of Problem QS is admissible for Problem C, it follows that
    $$I_{QS}^* \le I_C^*.$$
    At the same time, the value I Q S * of Problem QS, being the value of an averaged problem, is given by the expression
    $$I_{QS}^* = \inf_\xi\sup_y\left\{f_0(y,a^*) + \xi\,\varphi(y,a^*)\ :\ f(y,a^*) = 0,\ u\in V_u\right\}.$$
    Here a * is the optimal value of a subject to the constraint
    $$\left.\frac{\partial}{\partial a}\left[\overline{f_0(y,a)}^{\,y} + \big\langle\xi,\overline{\varphi(y,a)}^{\,y}\big\rangle + \sum_{i=0}^{r}\lambda_i f(y^i,a)\right]\right|_{a^*}\delta a \le 0, \qquad (84)$$
    in which δ a is the set of variations allowed by the inclusion a V a .
    We choose the Lagrange multipliers λ i in (84) so that f ( y i , a ) = 0 for any base value y i of the vector y. The number of base values of y is determined by the dimension r of the vector function φ ; thus, the problem takes the form
    $$\bar{f}_0 = \sum_{i=0}^{r}\gamma_i f_0(y^i,a),\qquad \bar{\varphi} = \sum_{i=0}^{r}\gamma_i\varphi(y^i,a),\qquad \sum_{i=0}^{r}\gamma_i = 1,\ \gamma_i \ge 0. \qquad (85)$$
    Consider the case when the control vector in the steady state of the system changes with a frequency so high that the state vector x remains virtually constant. Such a mode is called a sliding steady mode. The optimization problem for such a mode is formulated as
    $$I_{SL} = \overline{f_0(b,u)}^{\,u} \to \sup,\qquad \overline{f(b,u)}^{\,u} = 0,\ u\in V_u,\qquad \overline{\varphi(b,u)}^{\,u} = 0,\ b\in V_b. \qquad (86)$$
    This problem is known as Problem SL. In (86), b denotes the vector formed by x and a. This mode is the limit case of the cyclic mode, so we have
    $$I_{SL}^* \le I_C^*.$$
    Problem (86) is an averaged extension of Problem S with two types of variables; its value is given by
    $$I_{SL}^* = \min_{\lambda,\xi}\max_u R(u,b^*,\lambda,\xi) = \min_{\lambda,\xi}\max_{u\in V_u}\big[f_0(u,b^*) + \lambda f(u,b^*) + \xi\varphi(u,b^*)\big],$$
    where b * satisfies the condition
    $$\left.\frac{\partial}{\partial b}\,\overline{R(u,b,\lambda,\xi)}^{\,u}\right|_{b^*}\delta b \le 0.$$
    The number of base values of the vector function u in Problem SL is at most m + r + 1 .
    A necessary condition for the efficiency of the transition to a cyclic mode can be stated in terms of I QS * and I SL * . Consider the quantity
    $$I_K = \max\left\{I_{QS}^*,\ I_{SL}^*\right\}.$$
    If I K is greater than I S * , then the passage to a cyclic mode is efficient, and the difference
    $$\Delta_K = I_K - I_S^*$$
    provides a lower bound for the efficiency.

5.2. The Frequency Criterion for the Efficiency of the Passage to a Cyclic Mode

Suppose that an optimal static mode x 0 , u 0 in Problem S is known. As above, it is required to determine whether the cyclic extension of Problem S is efficient. In [14], a frequency criterion for the efficiency of a cyclic mode was proposed. This criterion is based on the analysis of the increment in the optimality criterion I as compared to its maximal static value I 0 for small harmonic oscillations of the control about u 0 .
Let λ 0 and μ 0 be the values of Lagrange multipliers λ and μ corresponding to the optimal static mode in the Lagrangian function
$$R = f_0(x,u) + \sum_i\lambda_i f_i(x,u) + \sum_j\mu_j\varphi_j(x,u)$$
for Problem S.
In a neighborhood of the optimal static mode and the corresponding Lagrange multipliers, we calculate the first and second derivatives of the functions that determine the problem with respect to x and u (if x and u are vectors, then these derivatives are matrices):
$$A = \frac{\partial f}{\partial x},\quad B = \frac{\partial f}{\partial u},\quad P = \frac{\partial^2 R}{\partial x^2},\quad Q = \frac{\partial^2 R}{\partial x\,\partial u},\quad H = \frac{\partial^2 R}{\partial u^2},\quad K = \frac{\partial\varphi}{\partial x},\quad M = \frac{\partial\varphi}{\partial u}.$$
In a neighborhood of the optimal static mode, the increment of the functional I under small variations $\delta x(t)$ and $\delta u(t)$ is given by
$$\Delta I = \frac{1}{2T}\int_0^T\big(\delta x'\,P\,\delta x + \delta x'\,Q\,\delta u + \delta u'\,Q'\,\delta x + \delta u'\,H\,\delta u\big)\,dt.$$
The transition to a cyclic mode is efficient if there is a variation δ u such that the quantity Δ I is positive under the linearized constraints (63), i.e.,
$$\delta\dot{x} = A\,\delta x + B\,\delta u,\qquad \delta x(T) = \delta x(0). \qquad (89)$$
To get rid of these constraints, we consider only harmonic variations, i.e., those of the form
$$\delta u(t) = \sum_{\nu=-\infty}^{\infty}u_\nu\,e^{i\nu\frac{2\pi}{T}t}.$$
Applying the Fourier transform to the linear differential constraints (89), we obtain
$$\delta x(i\omega) = (i\omega E - A)^{-1}B\,\delta u(i\omega) = W(i\omega)\,\delta u(i\omega).$$
Here E is the identity matrix ( E = 1 for scalar x). It is assumed that the matrix A has no eigenvalues with zero real part; otherwise, small deviations δ u ( t ) may correspond to large deviations δ x ( t ) , and the linearization may be incorrect.
Let us express the quantity Δ I by Parseval’s identity in the frequency domain, replacing δ x ( i ω ) by its expression in terms of δ u . The increment in the criterion under harmonic oscillations of the control with frequencies that are multiples of 2 π / T takes the form
$$\Delta I = \frac{1}{2}\int_{-\infty}^{\infty}\delta u'(-i\omega)\,A(\omega)\,\delta u(i\omega)\,d\omega.$$
Here A ( ω ) is defined by the matrices P, Q, and H and the relation between δ u and δ x ; it is easy to show that
$$A(\omega) = W'(-i\omega)\,P\,W(i\omega) + Q'\,W(i\omega) + W'(-i\omega)\,Q + H,$$
where the prime denotes transposition.
For the scalar problem, we have
$$A(\omega) = P\,|W(i\omega)|^2 + 2Q\,\operatorname{Re}W(i\omega) + H.$$
If the matrix A ( ω ) for some ω is such that the integrand in the expression for Δ I is positive for at least one vector δ u , then the static mode can be improved and the passage to a cyclic mode is efficient.
For the scalar problem, we have
$$\Delta I = \frac{1}{2}\int_{-\infty}^{\infty}|\delta u(i\omega)|^2\,A(\omega)\,d\omega,$$
and the static mode improves if A ( ω ) is positive for some ω .
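The scalar test is a one-liner to scan. In the sketch below (ours, with hypothetical values of A, B, P, Q, H; the frequency weight is named Pi to avoid a clash with the dynamics coefficient A), the maximum of A(ω) over ω is positive, so a harmonic control variation improves the static mode:

```python
import numpy as np

A, B = -1.0, 1.0                 # linearized dynamics: d(dx)/dt = A dx + B du
P, Q, H = 4.0, 0.0, -1.0         # second derivatives of R at the static optimum

omega = np.linspace(0.01, 20.0, 2000)
W = B / (1j * omega - A)                        # transfer function W(i w)
Pi = P * np.abs(W)**2 + 2 * Q * W.real + H      # the scalar weight A(w)

print(Pi.max() > 0)              # True: the cyclic extension is efficient here
```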

5.3. Lyapunov Problems

For an important class of problems, the inequality (77) turns into an equality. In these problems, the functions f 0 , f, and φ in relations (63)–(66) depend only on u and a, so that
$$\dot{x} = f(u,a). \qquad (90)$$
Such equations are called Lyapunov-type equations, and the corresponding problems are known as Lyapunov problems. If we discard equations (63), which have the form (90), in Problem C, thereby passing to the estimating Problem $\bar{S}$, then we can find its solution $u^*(t)$, $a^*$. Substituting this solution into equation (90), we determine an optimal trajectory. Clearly, in this problem, $I_{\bar{S}}^* = I_C^*$; $u^*(t)$ takes at most m + r + 1 base values, and the function $x^*(t)$ is a polygonal line with at most m + r (internal) vertices.
Problems that include, in addition to Lyapunov-type equations, equations of the form
$$\dot{x}_\nu = f_\nu(u,a)\,F_\nu(x_\nu) \qquad (91)$$
can also be reduced to Lyapunov problems. Indeed, such equations are reduced to the form (90) by the change of variables
$$y_\nu(x_\nu) = \int\frac{dx_\nu}{F_\nu(x_\nu)},$$
so that $\dot{y}_\nu = f_\nu(u,a)$. The optimal solution $y_\nu(t)$ is piecewise linear, and $x_\nu(t)$ can be recovered from it by solving the equation
$$\int\frac{dx_\nu}{F_\nu(x_\nu)} = y_\nu(t).$$

6. Average Optimization in Finite-Time Thermodynamics

Finite-time thermodynamics provides some of the most important applications of averaged optimization techniques, for the following reasons:
  • Problems of optimal thermodynamic cycles.
    There is a very important kind of thermodynamic system: intermediary systems. These systems contact different subsystems (reservoirs) alternately while producing power, thus lowering the irreversibility that would arise from a continuous contact of the above-mentioned subsystems. The main example is the heat engine, where the working fluid alternately contacts two sources of different temperatures.
    One of the most essential problems in finite-time thermodynamics is the problem of maximum average power of heat engines, when the average rate of the heat flow from the hot source is given.
    Similar problems also arise in absorption–desorption systems, where the working fluid contacts multi-component mixtures, picking one component out of one source and releasing it at another.
    In reverse cycles, the working fluid obtains energy from an exterior system. Upon contact with the source that loses energy or matter in the regular cycle, the working fluid enriches it with the corresponding resource.
    In all of these problems, the working fluid restores its state at the beginning of every cycle. One needs to average all of the variables determining the process.
  • Relations between intensive and extensive variables are Lyapunov-type equations. Thermodynamic variables are divided into two classes: intensive (temperature, pressure, chemical potential, …) and extensive (volume, internal energy, entropy, amount of substance, …). The flow rates of transport processes between subsystems depend only on the intensive variables, and these flows determine, in turn, the rates of change of the extensive variables. This means that the equations describing the change of state of a thermodynamic system have the form [10,11,12]:
    $$\frac{dZ_j}{dt} = F_j(u_i,u_j).$$
    Here i and j are the indices of the contacting subsystems, u is the vector of intensive variables, and Z is the vector of extensive variables. Equations of this type were called Lyapunov-type earlier in this paper. Their right-hand sides do not depend on Z, so the increase of Z is determined by the average value of the function F. As shown above, the limiting capabilities of systems governed by Lyapunov-type equations can be obtained by the techniques of averaged optimization.

7. Example: Averaged Optimization of a Heat Engine

7.1. Maximum Average Power Output

We will assume a heat engine operating between sources of constant temperatures $T_+$ and $T_-$ [15]. If we denote the temperature of the source currently in contact with the working fluid by $T_n$ and the temperature of the working fluid itself by T, the average power output per cycle is
$$\bar{p} = \overline{q(T_n,T)}. \qquad (92)$$
Now we can formulate the averaged optimization problem, using the fact that the average entropy change of the working fluid per cycle is zero:
$$\overline{q(T_n,T)} \to \max_T,\qquad \overline{\frac{q(T_n,T)}{T}} = 0,\qquad T_n \in \{T_+,\,T_-\},\quad T > 0. \qquad (93)$$
This is a problem of the form (6). Using the algorithm described earlier (see (42)–(44)), we find that the number of base points is two. This means that the Lagrange function
$$R = q(T_n,T) + \lambda\,\frac{q(T_n,T)}{T} = q(T_n,T)\left(1 + \frac{\lambda}{T}\right) \qquad (94)$$
has two maxima, so
$$T_1 = \arg\max_T\,q(T_+,T)\left(1+\frac{\lambda}{T}\right),\qquad T_2 = \arg\max_T\,q(T_-,T)\left(1+\frac{\lambda}{T}\right). \qquad (95)$$
Both maxima are global and therefore must be equal [16]. This means that the Lagrange multiplier is the solution of
$$q(T_+,T_1)\left(1+\frac{\lambda}{T_1}\right) = q(T_-,T_2)\left(1+\frac{\lambda}{T_2}\right). \qquad (96)$$
When the heat transfer law is linear
$$q(T_+,T) = \alpha_+(T_+ - T), \qquad (97)$$
$$q(T_-,T) = \alpha_-(T_- - T), \qquad (98)$$
the solution of equations (94)–(96), with the notation $\alpha = \dfrac{\alpha_+\alpha_-}{\left(\sqrt{\alpha_+}+\sqrt{\alpha_-}\right)^2}$, leads to
$$p_{\max} = \alpha\left(\sqrt{T_+}-\sqrt{T_-}\right)^2. \qquad (99)$$
The relationship between entropy production and heat flows is shown in Figure 3. It is clear from this figure that the point of maximum power output lies on the convex hull of the original curves, so it is attainable only when an averaged (switching) control is used.
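The closed form (99) can be cross-checked by solving the averaged problem (93) directly as the finite-dimensional program of Section 2.5, with two base points $T_1$, $T_2$ and a weight $\gamma$. The sketch below is ours ($\alpha_+$, $\alpha_-$, $T_+$, $T_-$ are arbitrary illustrative values; a local solver is used, so the starting point matters):

```python
import numpy as np
from scipy.optimize import minimize

a_p, a_m, Tp, Tm = 1.0, 2.0, 400.0, 300.0    # alpha+, alpha-, T+, T-

def neg_power(z):                            # z = (T1, T2, gamma)
    T1, T2, g = z
    return -(g * a_p * (Tp - T1) + (1 - g) * a_m * (Tm - T2))

cons = [{'type': 'eq',                       # zero mean entropy change, (93)
         'fun': lambda z: z[2] * a_p * (Tp - z[0]) / z[0]
                        + (1 - z[2]) * a_m * (Tm - z[1]) / z[1]}]
res = minimize(neg_power, x0=[360.0, 330.0, 0.5],
               bounds=[(1.0, None), (1.0, None), (0.0, 1.0)], constraints=cons)

alpha = a_p * a_m / (np.sqrt(a_p) + np.sqrt(a_m))**2
print(-res.fun, alpha * (np.sqrt(Tp) - np.sqrt(Tm))**2)  # both ~ 2.46
```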

7.2. Maximum Efficiency

When the power output $p_0$ is given, the problem of maximum efficiency is equivalent to the problem of minimum entropy production in the system. Using again the fact that the average entropy change of the working fluid over a cycle is zero, we obtain the problem:
$$\sigma = \overline{\frac{q(T_n,T)}{T_n}} \to \max_T,\qquad \overline{q(T_n,T)} = p_0,\qquad \overline{\frac{q(T_n,T)}{T}} = 0,\qquad T_n\in\{T_+,\,T_-\},\quad T > 0. \qquad (100)$$
One may notice that this problem admits three base points in general, because there are three averaging operations in (100). This happens when the entropy production as a function of the heat flux is not convex. We do not consider this case here, because this function is convex for most heat transfer laws.
Another possibility corresponds to two base points. In this case, we have the following equations for T 1 and T 2 :
$$T_1 = \arg\max_T\left[q(T_+,T)\left(\frac{1}{T_+}+\lambda+\frac{\mu}{T}\right)-\lambda p_0\right],\qquad T_2 = \arg\max_T\left[q(T_-,T)\left(\frac{1}{T_-}+\lambda+\frac{\mu}{T}\right)-\lambda p_0\right]. \qquad (101)$$
These maxima must be equal, which leads to:
$$q(T_+,T_1)\left(\frac{1}{T_+}+\lambda+\frac{\mu}{T_1}\right) = q(T_-,T_2)\left(\frac{1}{T_-}+\lambda+\frac{\mu}{T_2}\right). \qquad (102)$$
The averaged constraints must also be satisfied:
$$\gamma\,q(T_+,T_1) + (1-\gamma)\,q(T_-,T_2) = p_0,\qquad \gamma\,\frac{q(T_+,T_1)}{T_1} + (1-\gamma)\,\frac{q(T_-,T_2)}{T_2} = 0. \qquad (103)$$
Equations (101)–(103) allow one to find the values of $T_1$, $T_2$, $\lambda$, $\mu$, and $\gamma$.
For the linear heat transfer law we have the following value of maximum efficiency:
$$\eta_{\max}(p) = \frac{1}{2}\left(\frac{p}{\alpha T_+}+\eta_c\right) \pm \sqrt{\frac{1}{4}\left(\frac{p}{\alpha T_+}+\eta_c\right)^2 - \frac{p}{\alpha T_+}}. \qquad (104)$$
When $p \to 0$, the value of (104) approaches $\eta_c$ (the Carnot efficiency), and when $p = p_{\max}$ given by (99), we have
$$\eta_{\max}(p_{\max}) = 1 - \sqrt{\frac{T_-}{T_+}} = 1 - \sqrt{1-\eta_c}, \qquad (105)$$
which is the well-known result of Novikov [17], Chambadal [18], and Curzon and Ahlborn [19]. Important results for other heat transfer laws and different processes are presented in [20,21,22].
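A quick numerical check of (104) and its limits (our sketch, using the same illustrative data as above; the radicand is clipped at zero to absorb round-off at $p = p_{\max}$):

```python
import numpy as np

a_p, a_m, Tp, Tm = 1.0, 2.0, 400.0, 300.0
alpha = a_p * a_m / (np.sqrt(a_p) + np.sqrt(a_m))**2    # effective alpha
eta_c = 1.0 - Tm / Tp                                   # Carnot efficiency
p_max = alpha * (np.sqrt(Tp) - np.sqrt(Tm))**2          # maximum power (99)

def eta_max(p):
    """Formula (104), '+' branch."""
    X = p / (alpha * Tp)
    return 0.5 * (X + eta_c) + np.sqrt(np.maximum(0.25 * (X + eta_c)**2 - X, 0.0))

print(eta_max(1e-9), eta_c)                      # p -> 0 recovers Carnot
print(eta_max(p_max), 1 - np.sqrt(1 - eta_c))    # p = p_max recovers (105)
```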

8. Results

We obtained general necessary conditions of optimality for averaged optimization problems. These conditions can be written down using the algorithmic procedure given in the paper, which makes them applicable to problems of arbitrary structure. We also showed how these techniques apply to problems of finite-time thermodynamics, leading to new results in the field.

Author Contributions

Conceptualization, A.T. and I.S.; methodology, A.T.; validation, I.S.; writing–original draft preparation, A.T.; writing–review and editing, I.S.; supervision, A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tsirlin, A.M. Problems and methods of averaged optimization. Proc. Steklov Inst. Math. 2008, 261, 270–286.
  2. Kaplinski, A.M.; Propoi, A.I. Stochastic approach to nonlinear programming problems. Automat. Remote Control 1970, 31, 448–459.
  3. Ioffe, A.; Tikhomirov, V. Theory of Extremal Problems; Elsevier North-Holland: New York, NY, USA, 1979; p. 459.
  4. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1996; p. 472.
  5. Boltyanski, V.G.; Martini, H.; Soltan, V. Geometric Methods and Optimization Problems; Springer: New York, NY, USA, 1999; p. 432.
  6. Tsirlin, A. Averaged Optimization Methods and Their Applications (Metody usrednjonnoj optimizatsii i ikh prilozhenija); Fizmatlit: Moscow, Russia, 1997. (In Russian)
  7. Tsirlin, A. Optimal Cycles and Cyclic Modes (Optimal'nye tsikly i tsiklicheskie rezhimy); Energoatomizdat: Moscow, Russia, 1985. (In Russian)
  8. Tsirlin, A.M. Conditions for optimality of solutions to average problems in mathematical programming. Sov. Phys. Dokl. 1992, 37, 117–119.
  9. Tsirlin, A.M. The optimal conditions for averaged problems with time-dependent parameters. Dokl. Math. 2000, 62, 297–299.
  10. Rozonoer, L.I.; Tsirlin, A.M. Optimal control of thermodynamic processes. I. Automat. Remote Control 1983, 44, 55–62.
  11. Rozonoer, L.I.; Tsirlin, A.M. Optimal control of thermodynamic processes. II. Automat. Remote Control 1983, 44, 209–220.
  12. Rozonoer, L.I.; Tsirlin, A.M. Optimal control of thermodynamic processes. III. Automat. Remote Control 1983, 44, 314–326.
  13. Zevin, A.A. Optimal control of periodic processes. Automat. Remote Control 1980, 41, 304–308.
  14. Guardabassi, G.; Locatelli, A.; Rinaldi, S. Periodic optimization of continuous systems. In Proceedings of the International Conference on Cybernetics and Society, Washington, DC, USA, 9–12 October 1972; pp. 261–263.
  15. Tsirlin, A. Minimum Dissipation Processes in Irreversible Thermodynamics (Protsessy minimalnoj dissipatsii v neobratimoj termodinamike); Lan: Saint-Petersburg, Russia, 2020; p. 400. (In Russian)
  16. Tsirlin, A. Optimization Methods in Irreversible Thermodynamics and Microeconomics (Metody optimizatsii v neobratimoj termodinamike i mikroekonomike); Fizmatlit: Moscow, Russia, 2003. (In Russian)
  17. Novikov, I.I. The efficiency of atomic power stations (a review). J. Nucl. Energy 1958, 7, 125–128.
  18. Chambadal, P. Atomic Power Stations (Les centrales nucleaires); Colin: Paris, France, 1957; p. 188. (In French)
  19. Curzon, F.L.; Ahlborn, B. Efficiency of a Carnot engine at maximum power output. Am. J. Phys. 1975, 43, 22–24.
  20. Berry, R.; Kazakov, V.; Sieniutycz, S.; Szwast, Z.; Tsirlin, A. Thermodynamic Optimization of Finite-Time Processes; Wiley: Chichester, UK, 1999.
  21. Boehme, B.; Sofieva, Y.N.; Tsirlin, A.M. On the characteristic of steady state for some types of dynamic plants. Automat. Remote Control 1979, 40, 5–11.
  22. Kuznetsov, A.G.; Rudenko, A.V.; Tsirlin, A.M. Optimal control in thermodynamic systems with sources of finite capacity. Automat. Remote Control 1985, 46, 20–32.
Figure 1. Flowsheet of a simple process with averaging of both source and product flows.
Figure 2. Relationship between production and consumption. Effect of averaging.
Figure 3. Relationship between entropy production and heat flows and its convex hull. Here $q_h$ and $q_c$ are the heat exchange rates upon contact with the hot and cold reservoirs, respectively, and $\sigma_h$, $\sigma_c$ are the corresponding entropy production rates. The optimal solution is attained when $\sigma_c = \sigma_1$, $\sigma_h = \sigma_2$, $q_1 = q_c(\sigma_1)$, $q_2 = q_h(\sigma_2)$, and $p_{\max} = q_1 + q_2$.
