Article

Optimization of Constrained Stochastic Linear-Quadratic Control on an Infinite Horizon: A Direct-Comparison Based Approach

1 Department of Automation, Shanghai Jiaotong University, Shanghai 200240, China
2 School of Economics and Management, Fuzhou University, Fuzhou 350108, China
* Author to whom correspondence should be addressed.
Submission received: 20 January 2020 / Revised: 15 February 2020 / Accepted: 19 February 2020 / Published: 24 February 2020

Abstract

In this paper we study the optimization of the discrete-time stochastic linear-quadratic (LQ) control problem with conic control constraints on an infinite horizon, considering multiplicative noises. Stochastic control systems can be formulated as Markov decision problems (MDPs) with continuous state spaces, and therefore we can apply the direct-comparison based optimization approach to solve the problem. We first derive the performance difference formula for the LQ problem by utilizing the state separation property of the system structure. Based on this, we derive the optimality conditions and the stationary optimal feedback control. In doing so, we establish a general framework for infinite horizon stochastic control problems. The direct-comparison based approach is applicable to both linear and nonlinear systems. Our work provides a new perspective on LQ control problems; based on this approach, learning-based algorithms can be developed without identifying all of the system parameters.

1. Introduction

In this paper we study the discrete-time stochastic linear-quadratic (LQ) optimal control problem with conic control constraints and multiplicative noises on an infinite horizon. There exist in the literature various studies on the estimation and control of systems with multiplicative noise [1,2]. As for the LQ type of stochastic optimal control problems with multiplicative noise, investigations have focused on LQ formulations with indefinite penalty matrices on the control and state variables for both continuous-time and discrete-time models (see, e.g., [3,4]).
In an LQ optimal control problem, the system dynamics are linear in both the state and control variables, and the cost function is quadratic in these two variables [5]. One important attractive quality of the LQ type of optimal control model is its explicit control policy, which can be derived by solving the corresponding Riccati equation. Due to this elegant structure, the LQ problem has long been a central topic in optimal control research. Since the fundamental work on deterministic LQ problems by Kalman [6], there have been a great number of studies on it; see [5,7,8]. In the past few years, stochastic LQ problems have drawn more and more attention, due to promising applications in different fields, including dynamic portfolio management, financial derivative pricing, population models, and nuclear heat transfer problems; see [9,10,11].
This paper is motivated by two recent developments: LQ optimal control and Markov decision problems (MDPs). First, the constrained LQ problem is significant in both theory and applications. Due to the constraints on state and control variables, it is hard to obtain an explicit control policy by solving the Riccati equation [5]. Recently, there have been studies on constrained LQ optimal control problems, such as [12,13,14]. Meanwhile, in real applications, practical limits such as risk or economic regulations force us to take constraints on the control variables into consideration. For LQ control problems with a positivity constraint on the control, some works [15,16] propose optimality conditions and numerical methods to characterize the optimal control policy. In this paper, motivated by such applications, we model these limits as conic control constraints.
Work by Cao [17] and Puterman [18] demonstrates that stochastic control problems can be viewed as Markov decision problems. Therefore, the constrained stochastic LQ control problem can be formulated as an MDP, as in [19]. A direct-comparison based approach (or relative optimization), which originated in the area of discrete event systems, has been developed in the past years for the optimization of MDPs [17].
With this approach, optimization is based on comparing the performance measures of the system under any two policies. It is intuitively clear and can provide new insights, leading to new results for many problems, such as [20,21,22,23,24,25,26]. This approach is well suited to performance optimization problems, leading to results such as the property of under-selectivity in time-nonhomogeneous Markov processes [24]. In this paper, we show that the special features of the constrained stochastic LQ optimal control problem make it solvable by the direct-comparison based approach, leading to some new insights for the problem.
In our work, we consider the stochastic LQ control problem through an MDP formulation on an infinite horizon. Using the direct-comparison based approach [17], we first derive the performance potentials for the LQ problem by utilizing the state separation property of the system structure. Based on this, we derive the optimality conditions and the stationary optimal feedback control. We show that the optimal control policy is a piecewise affine function of the state variable. In real applications, the proposed methodology can be used in many fields, such as system risk contagion [26] and power grid systems [27].
Our work provides a new perspective on LQ control problems. Compared with the existing literature, such as [5,13], we further consider multiplicative noises, and we establish a general framework for studying infinite horizon stochastic control problems. With the direct-comparison based approach, which is applicable to both linear and nonlinear systems, we obtain further results for performance optimization problems, and the results can be extended easily. In addition, without identifying all the system parameters, this approach can be implemented on-line, and learning-based algorithms can be developed.
The paper is organized as follows. Section 2 introduces an MDP formulation of the constrained stochastic LQ problem with multiplicative noises; some preliminary knowledge on MDP and the state separation property is also provided. In Section 3, we derive the performance difference formula, which is the foundation of the performance optimization; based on it, the Poisson equation and Dynkin’s formula can be obtained. Then we derive the optimality condition and the optimal policy through the performance difference formula. In Section 4, we illustrate the results by numerical examples. Finally, we conclude the paper in Section 5.

2. Problem Formulation

2.1. Problem with Infinite Time Horizon

In this section, we study the infinite horizon discrete-time stochastic LQ optimal control problem, in which conic control constraints are also considered; see [5,14]. For simplicity, we consider a one-dimensional dynamic system with a multiplicative noise described by
x_{l+1} = A x_l + B u_l(x_l) + [A x_l + B u_l(x_l)] ξ_l,
for time l = 0, 1, …. Denoting by R (R_+) the set of real (nonnegative real) numbers, in this system A ∈ R and B ∈ R^{1×m} are deterministic; x_l ∈ R is the state, with x_0 given; and u_l ∈ R^m is a feedback control law at time l. For each l, ξ_l denotes an independent identically distributed one-dimensional multiplicative noise following a normal distribution with mean 0 and variance σ^2, σ > 0; ξ_l and ξ_k are independent for every l ≠ k.
Now, we consider the conic control constraint sets (cf. [5])
C_l := { u_l | u_l ∈ F_l, H u_l ∈ R_+^n },
for l = 0, 1, …, where H ∈ R^{n×m} is a deterministic matrix and F_l is the filtration of the information available at time l. Each C_l ⊆ R^m is a closed cone; i.e., α u_l ∈ C_l whenever u_l ∈ C_l and α ≥ 0, and u_l + v_l ∈ C_l whenever u_l, v_l ∈ C_l.
The goal of optimization is to minimize the total reward performance measure in a quadratic form:
(P_A)   min_{{u_l}_{l=0}^∞}   η^u = lim_{L→∞} E[ Σ_{l=0}^{L−1} ( Q x_l^2 + u_l' R u_l ) | x_0 ]
        s.t.   {x_l, u_l} satisfies (1) and (2) for l = 0, 1, …,
where Q ∈ R_+ and R ∈ R_+^{m×m} are deterministic. Here we denote the transpose operation by a prime in the superscript, such as u_l'. {u_l} denotes the control sequence {u_0, u_1, …}. We also assume that the limit in (3) exists.
Therefore, the performance function of (3) is
f^u(x) = Q x^2 + u' R u.
In this paper, we will show that the direct-comparison based approach leads to new results for the total rewards problem [7], and that the results can be easily extended.
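To make the formulation concrete, the following Python sketch simulates the dynamics (1) under a simple feasible feedback law and estimates the total reward (3) by Monte Carlo over a truncated horizon. All numerical values (A, B, Q, R, σ, and the gain K) are illustrative choices of ours, and we take H = I so that the cone reduces to the nonnegative orthant; none of these values are taken from the examples in Section 4.

import numpy as np

# Monte Carlo illustration of the dynamics (1) and the total reward (3).
# All numerical values below (A, B, Q, R, sigma, K) are illustrative and
# chosen so that u(x) = K*max(x, 0) >= 0 is feasible for the cone with H = I.
rng = np.random.default_rng(0)

A, sigma = 0.6, 0.5                          # scalar state, noise std (sigma^2 = 0.25)
B = np.array([-0.3, 0.2, 0.1])               # B in R^{1 x m}, m = 3
Q = 1.0
R = np.diag([1.0, 2.0, 2.0])
K = np.array([0.5, 0.0, 0.0])                # feedback gain of the illustrative policy

def policy(x):
    """A simple feasible feedback law: u(x) = K * max(x, 0) >= 0."""
    return np.outer(np.maximum(x, 0.0), K)   # shape (n_paths, m)

def estimate_total_reward(x0, L=300, n_paths=5000):
    """Estimate eta^u(x0) in (3) by truncating the horizon at L steps."""
    x = np.full(n_paths, float(x0))
    cost = np.zeros(n_paths)
    for _ in range(L):
        u = policy(x)
        cost += Q * x**2 + np.einsum('ij,jk,ik->i', u, R, u)      # stage cost Q x^2 + u' R u
        drift = A * x + u @ B                                      # A x + B u
        x = drift * (1.0 + rng.normal(0.0, sigma, size=n_paths))   # dynamics (1)
    return cost.mean()

print(estimate_total_reward(x0=1.0))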

2.2. MDPs with Continuous State Spaces

For a stationary control law u l = u ( x ) , at time l = 0 , 1 , , the constraint (2) can be written as
C := { u | u ∈ R^m, H u ∈ R_+^n }.
Then the above stochastic control problem can be viewed as an MDP with a continuous state space. More precisely, u(x) plays a role similar to that of actions in MDPs, and the control law u corresponds to a policy.
Consider a discrete-time Markov chain X := {x_l}_{l=0}^∞ with a continuous state space on R. The transition probability can be described by a transition operator P as
(P h)(x) := ∫_R h(y) P(dy|x),
where P(dy|x) is the transition probability function, with x, y ∈ R, and h(y) is any measurable function on R. Since the ξ_l are independent Gaussian noises, given the current state x_l = x and under the stationary control u(x), y = x_{l+1} follows a normal distribution with mean μ_y = A x + B u(x) and variance σ_y^2 = [A x + B u(x)]^2 σ^2. Then the transition function of this system is as follows,
P^u(dy|x) = (1 / (√(2π) σ_y)) exp{ −(y − μ_y)^2 / (2 σ_y^2) } dy.
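As a quick sanity check of this transition function, the following sketch compares samples of x_{l+1} generated from the dynamics (1) with the normal distribution N(μ_y, σ_y^2) above; the values of A, B, u, σ, and the state x are illustrative assumptions of ours.

import numpy as np
from scipy import stats

# Sanity check of P^u(dy|x): samples of x_{l+1} from the dynamics should
# follow N(mu_y, sigma_y^2) with mu_y = A x + B u and sigma_y = |A x + B u|*sigma.
# All numerical values here are illustrative.
rng = np.random.default_rng(1)

A, sigma, x = 0.6, 0.5, 2.0
B = np.array([-0.3, 0.2, 0.1])
u = np.array([0.4, 0.0, 0.0])                # a fixed feasible control at state x

mu_y = A * x + B @ u
sigma_y = abs(mu_y) * sigma

samples = mu_y * (1.0 + rng.normal(0.0, sigma, size=100_000))   # x_{l+1} from (1)
print(stats.kstest(samples, stats.norm(loc=mu_y, scale=sigma_y).cdf))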
Let ℬ be the σ-field of R containing all the (Lebesgue) measurable sets. For any set B ∈ ℬ, we can define the identity transition function I(B|x): I(B|x) = 1 if x ∈ B, and I(B|x) = 0 otherwise. For any function h and x ∈ R, we have (I h)(x) = h(x).
The product of two transition functions P_1(B|x) and P_2(B|x) is defined as the transition function (P_1 P_2)(B|x):
(P_1 P_2)(B|x) := ∫_R P_2(B|y) P_1(dy|x),
where x, y ∈ R and B ∈ ℬ.
For any transition function P, we can define the kth power, k = 0, 1, …, as P^0 = I, P^1 = P, and P^k = P P^{k−1}, k = 2, …. Suppose that the Markov chain X is time-homogeneous with transition function P(B|x), x ∈ R, B ∈ ℬ. Then the k-step transition probability functions, denoted by P^{(k)}(B|x), k = 1, 2, …, are given by the 1-step transition function P^{(1)}(B|x) = P(B|x) and
P^{(k)}(B|x) := ∫_R P(dy|x) P^{(k−1)}(B|y),   k ≥ 2.
For any function h ( x ) , we have
(P^{(k)} h)(x) = ∫_R h(y) P^{(k)}(dy|x) = (P (P^{(k−1)} h))(x).
That is, as an operator, we have P^{(k)} = P(P^{(k−1)}). Recursively, we can prove that P^{(k)} = P^k.
Suppose that a Markov chain X with a continuous state space on R has a steady-state distribution π satisfying π = π P. Define the function e(x) = 1 for all x ∈ R. We denote by g the performance potential, a function which satisfies the Poisson equation (cf. [17])
(I − P) g(x) + η(x) = f(x),
where I and P are two transition functions, and η(x) = (π f) e(x) = η e(x). Then if g is a solution to (7), so is g + c e, with any constant c. We define
g_K := { I + Σ_{k=1}^{K} (P^k − e π) } f,
and assume that the limit g(x) := lim_{K→∞} g_K(x) exists for x ∈ R. Then we have the following lemma,
Lemma 1
(Solution to Poisson Equations [17]). For any transition function P and performance function f ( x ) , if
lim_{k→∞} P^k f = (e π) f = η e,   lim_{K→∞} g_K = g,   and   lim_{K→∞} P g_K = P g,
hold for every x ∈ R, then
g = { I + Σ_{k=1}^{∞} (P^k − e π) } f,
is a solution to the Poisson Equation (7).
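For intuition, the same construction can be carried out for a finite-state chain, where the transition operators reduce to matrices. The following sketch, using an arbitrary three-state chain of our own choosing, computes the truncated series g_K and verifies that its limit solves the Poisson Equation (7).

import numpy as np

# Finite-state analogue of Lemma 1: for a stochastic matrix P, steady-state
# distribution pi, and cost vector f, the truncated series
# g_K = (I + sum_{k=1}^K (P^k - e pi)) f converges to a solution g of
# (I - P) g + eta e = f.  The chain below is an arbitrary illustrative example.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
f = np.array([1.0, 4.0, 2.0])

w, v = np.linalg.eig(P.T)                        # steady-state: left eigenvector for eigenvalue 1
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()
eta = pi @ f

g = f.copy()                                     # the "I" term of the series
Pk = np.eye(3)
for _ in range(500):                             # truncated series g_K
    Pk = Pk @ P
    g += (Pk - np.outer(np.ones(3), pi)) @ f

print(np.allclose((np.eye(3) - P) @ g + eta * np.ones(3), f))   # Poisson equation (7)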

2.3. State Separation Property

In order to derive the explicit solution of the stochastic LQ control problem with conic constraints, Reference [14] gives the following lemma for the state separation property of the LQ problem,
Lemma 2
(State Separation [14]). In the system (1), for any x ∈ R, the optimal solution for problem (3) at time l is a piecewise linear feedback policy
u*(x_l) =  K̂* x_l,   if x_l ≥ 0,
          −K̄* x_l,   if x_l < 0,
for l = 0, 1, …, where 𝒦 := { K ∈ R^m | H K ∈ R_+^n } is associated with the control constraint sets C_l; K̂*, K̄* ∈ 𝒦 are the optimal solutions of two corresponding auxiliary optimization problems, and the superscript “*” corresponds to the optimal control.
Based on (10) in Lemma 2, the stationary control can be written as u(x) = K̂ x 1_{x≥0} − K̄ x 1_{x<0}, where 1_B is an indicator function, such that 1_B = 1 if the condition B holds true and 1_B = 0 otherwise; and K̂, K̄ ∈ 𝒦. Applying this control, the system dynamics (1) becomes
x_{l+1} = Ĉ x_l 1_{x_l≥0} + C̄ x_l 1_{x_l<0} + [Ĉ x_l 1_{x_l≥0} + C̄ x_l 1_{x_l<0}] ξ_l,
for l = 0 , 1 , , where
Ĉ = A + B K̂,   C̄ = A − B K̄.
Moreover, the performance measure (3) becomes
η^u(x) = lim_{L→∞} E[ Σ_{l=0}^{L−1} ( Ŵ x_l^2 1_{x_l≥0} + W̄ x_l^2 1_{x_l<0} ) | x_0 = x ],
where Ŵ = Q + K̂^T R K̂ and W̄ = Q + K̄^T R K̄. Therefore, the performance function (4) becomes
f(x) = Ŵ x^2 1_{x≥0} + W̄ x^2 1_{x<0}.
It is easy to verify that Ŵ and W̄ are positive semi-definite. We assume that this one-dimensional system is stable, so that the spectral radii of Ĉ and C̄ are less than 1, i.e., C_max = max(|Ĉ|, |C̄|) < 1. In the next section, we derive the performance potentials for the LQ problem, which are the foundation of the performance optimization. Based on them, the Poisson equation and Dynkin’s formula can be derived. The direct-comparison based approach provides a new perspective on this problem, and the results can be extended easily.
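The closed-loop behaviour under such a piecewise linear policy can be checked numerically. The sketch below, with illustrative parameter values and gains of our own choosing, evaluates Ĉ and C̄, checks the second-moment stability condition max(Ĉ^2, C̄^2) < 1/(1 + σ^2) used in the next section, and simulates sample paths of (11).

import numpy as np

# Closed-loop dynamics (11) under u(x) = K_hat*x for x >= 0 and u(x) = -K_bar*x
# for x < 0, together with a second-moment stability check.  All values are
# illustrative.
rng = np.random.default_rng(2)

A, sigma = 0.6, 0.5
B = np.array([-0.3, 0.2, 0.1])
K_hat = np.array([0.5, 0.0, 0.0])
K_bar = np.array([0.5, 0.0, 0.0])

C_hat = A + B @ K_hat                            # C_hat = A + B K_hat
C_bar = A - B @ K_bar                            # C_bar = A - B K_bar
print(max(C_hat**2, C_bar**2) < 1.0 / (1.0 + sigma**2))   # stability condition

# sample paths of (11): x_{l+1} = (C_hat x 1_{x>=0} + C_bar x 1_{x<0}) (1 + xi_l)
x = np.full(50, 10.0)
for _ in range(60):
    drift = np.where(x >= 0, C_hat * x, C_bar * x)
    x = drift * (1.0 + rng.normal(0.0, sigma, size=x.size))
print(np.mean(x**2))                             # E[x_l^2] decays toward 0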

3. Performance Optimization

In this section, utilizing the state separation property, we derive the performance difference formula, which compares the performance measures of any two policies, and then derive the optimality condition and the optimal policy with the direct-comparison based approach.

3.1. Performance Difference Formula

We denote Ŵ_0 = Ŵ and W̄_0 = W̄. Then we have the performance function as f(x) = Ŵ_0 x^2 1_{x≥0} + W̄_0 x^2 1_{x<0}. With the initial condition x_0 = x, by (5), (6), (11), and (13), the performance operator is
(P f)(x) = Ŵ_1 x^2 1_{x≥0} + W̄_1 x^2 1_{x<0},
where
Ŵ_1 = (a_1 Ŵ_0 + a_2 W̄_0) Ĉ^2,   W̄_1 = (a_1 Ŵ_0 + a_2 W̄_0) C̄^2,
and
a_1 = σ ϕ(1/σ) + (1 + σ^2) Φ(1/σ),
a_2 = −σ ϕ(1/σ) + (1 + σ^2) Φ(−1/σ),
with ϕ(·) and Φ(·) denoting the probability density function and the cumulative distribution function of the standard normal distribution, respectively. We can verify that a_1 and a_2 are both nonnegative constants, with a_1 + a_2 = 1 + σ^2.
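These constants can be checked numerically. The sketch below uses the interpretation, which is an assumption on our part, that a_1 and a_2 are the two parts of E[(1 + ξ_l)^2] corresponding to 1 + ξ_l ≥ 0 and 1 + ξ_l < 0, respectively; this reading is consistent with the identity a_1 + a_2 = 1 + σ^2 stated above.

import numpy as np
from scipy.stats import norm

# Numerical check of a_1 and a_2.  Assumption: a_1 = E[(1+xi)^2; 1+xi >= 0] and
# a_2 = E[(1+xi)^2; 1+xi < 0] for xi ~ N(0, sigma^2), consistent with
# a_1 + a_2 = 1 + sigma^2.
rng = np.random.default_rng(3)
sigma = 0.5

a1 = sigma * norm.pdf(1 / sigma) + (1 + sigma**2) * norm.cdf(1 / sigma)
a2 = -sigma * norm.pdf(1 / sigma) + (1 + sigma**2) * norm.cdf(-1 / sigma)
print(np.isclose(a1 + a2, 1 + sigma**2))          # identity from the text

xi = rng.normal(0.0, sigma, size=2_000_000)       # Monte Carlo counterparts
a1_mc = np.mean((1 + xi)**2 * (1 + xi >= 0))
a2_mc = np.mean((1 + xi)**2 * (1 + xi < 0))
print(a1, a1_mc, a2, a2_mc)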
As P 2 f = P ( P f ) , continuing this process, we obtain
(P^k f)(x) = Ŵ_k x^2 1_{x≥0} + W̄_k x^2 1_{x<0},
where
Ŵ_k = (a_1 Ŵ_{k−1} + a_2 W̄_{k−1}) Ĉ^2,   W̄_k = (a_1 Ŵ_{k−1} + a_2 W̄_{k−1}) C̄^2.
We set W_0^* = max(Ŵ_0, W̄_0).
In order to ensure the stability of the system, Reference [14] gives some assumptions. Here we assume max(Ĉ^2, C̄^2) < 1/(1 + σ^2) ≤ 1. Then we have
Ŵ_k ≤ (1 + σ^2)^k (Ĉ^2)^k W_0^*,   W̄_k ≤ (1 + σ^2)^k (C̄^2)^k W_0^*.
Therefore, we have
lim_{k→+∞} Ŵ_k = lim_{k→+∞} W̄_k = 0.
We denote Ĝ_k := Σ_{i=0}^{k} Ŵ_i and Ḡ_k := Σ_{i=0}^{k} W̄_i. Based on the above claims, Ĝ_k and Ḡ_k converge as k → +∞. Thus we denote
Ĝ := lim_{K→+∞} Ĝ_K = Σ_{k=0}^{+∞} Ŵ_k,   Ḡ := lim_{K→+∞} Ḡ_K = Σ_{k=0}^{+∞} W̄_k.
Based on the definition of total rewards (3), we have
η(x) = Ĝ x^2 1_{x≥0} + Ḡ x^2 1_{x<0}.
By (17) and (18), we have
lim_{k→+∞} (P^k f)(x) = 0.
Thus we have proved that the closed-loop system (11) is L_2-asymptotically stable, i.e., lim_{l→∞} E[(x_l)^2] = 0. Therefore, the total reward η(x) exists and is a piecewise quadratic function with positive semi-definite Ĝ and Ḡ.
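The coefficients Ĝ and Ḡ can be computed directly by iterating the recursion for Ŵ_k and W̄_k and accumulating the partial sums, as in the following sketch; the parameter values and gains are illustrative choices of ours.

import numpy as np
from scipy.stats import norm

# Policy evaluation by iterating the recursion for W_hat_k, W_bar_k and
# accumulating G_hat, G_bar.  Parameter values and gains are illustrative.
A, sigma, Q = 0.6, 0.5, 1.0
B = np.array([-0.3, 0.2, 0.1])
R = np.diag([1.0, 2.0, 2.0])
K_hat = np.array([0.5, 0.0, 0.0])
K_bar = np.array([0.5, 0.0, 0.0])

a1 = sigma * norm.pdf(1 / sigma) + (1 + sigma**2) * norm.cdf(1 / sigma)
a2 = (1 + sigma**2) - a1                          # uses a_1 + a_2 = 1 + sigma^2

C_hat, C_bar = A + B @ K_hat, A - B @ K_bar
W_hat = Q + K_hat @ R @ K_hat                     # W_hat_0
W_bar = Q + K_bar @ R @ K_bar                     # W_bar_0
G_hat, G_bar = W_hat, W_bar

assert max(C_hat**2, C_bar**2) < 1 / (1 + sigma**2)   # stability condition
for _ in range(200):                              # W_k -> 0 geometrically
    W_hat, W_bar = (a1*W_hat + a2*W_bar) * C_hat**2, (a1*W_hat + a2*W_bar) * C_bar**2
    G_hat += W_hat
    G_bar += W_bar

x0 = 10.0
eta = G_hat * x0**2 if x0 >= 0 else G_bar * x0**2     # eta(x) = G_hat x^2 or G_bar x^2
print(G_hat, G_bar, eta)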
Now, we define the discrete version of the generator 𝒜: for any function h(x), x ∈ R,
𝒜 h(x) := (P h)(x) − h(x).
Taking h ( x ) as η ( x ) , and by the definition of η ( x ) in (3), we have the Poisson equation as follows,
𝒜 η(x) + f(x) = 0.
By (5) and (20), we obtain the discrete version of Dynkin’s formula as
E{ Σ_{k=0}^{K−1} [𝒜 h(x_k)] | x_0 } = E{ h(x_K) | x_0 } − h(x_0),
and if the limit as K → ∞ exists, then
E{ Σ_{k=0}^{∞} [𝒜 h(x_k)] | x_0 } = lim_{K→∞} E{ h(x_K) | x_0 } − h(x_0).
Now, we consider two policies u, u′ ∈ U_0, resulting in two independent Markov chains X and X′ on the same state space R, with P, f, η, 𝒜, E and P′, f′, η′, 𝒜′, E′, respectively. Let x_0 = x_0′. Applying Dynkin’s Formula (22) to X′ with h(x) = η(x) yields
E′{ Σ_{k=0}^{K−1} [𝒜′ η(x_k′)] | x_0′ } = E′{ η(x_K′) | x_0′ } − η(x_0′).
Note that η′(x_0′) = lim_{K→∞} Σ_{k=0}^{K−1} E′{ f′(x_k′) | x_0′ }, and lim_{K→∞} E′{ η(x_K′) | x_0′ } = 0 due to the asymptotic stability. Then by (23), we obtain the performance difference formula:
η′(x_0) − η(x_0) = lim_{K→∞} Σ_{k=0}^{K−1} E′{ (𝒜′ η + f′)(x_k′) | x_0 }.

3.2. Optimal Policy

Based on the performance difference Formula (24), we have the following optimality condition.
Theorem 1
(Optimality Condition). A policy u * in C is optimal if, and only if,
𝒜^u η^{u*} + f^u ≥ 0 = 𝒜^{u*} η^{u*} + f^{u*},   ∀ u ∈ C.
From (25), the optimality equation is:
min_{u ∈ C} { 𝒜^u η^{u*} + f^u } = 0.
Proof. 
First, the “if” part follows from the performance difference Formula (24) and the Poisson Equation (21).
Next, we prove the “only if” part. Let u* be an optimal policy. We need to prove that (25) holds. Suppose that this is not true. Then there must exist a policy, denoted by u′, such that (25) does not hold; that is, there must be at least one state, denoted by y, such that
P^{u*} η^{u*}(y) + f^{u*}(y) > P^{u′} η^{u*}(y) + f^{u′}(y).
Then we can create a policy ũ by setting ũ = u′ when x = y, and ũ = u* when x ≠ y. We have η^{u*} > η^{ũ}. This contradicts the fact that u* is an optimal policy.  □
Based on the optimality condition, the optimal control u* can be obtained by developing policy iteration algorithms. Roughly speaking, we start with any policy u_0. At the kth step, k = 0, 1, …, given a piecewise linear policy u_k(x) = K̂ x 1_{x≥0} − K̄ x 1_{x<0}, where K̂, K̄ ∈ 𝒦, we want to find a better policy by (26). We consider any policy u(x). Setting h(x) = η^{u_k}(x) = Ĝ x^2 1_{x≥0} + Ḡ x^2 1_{x<0}, by (5), (12), and (14), we have
(P^u η^{u_k})(x) = (a_1 Ĝ + a_2 Ḡ)(A + B K̂)^2 x^2 1_{x≥0} + (a_1 Ĝ + a_2 Ḡ)(A − B K̄)^2 x^2 1_{x<0},
where a 1 and a 2 satisfy Equations (15) and (16), respectively.
Then, from (4) and (27), we have
u_{k+1}(x) = arg min_{u ∈ C} [ (P^u η^{u_k})(x) + f^u(x) ] = K̂_{k+1} x 1_{x≥0} − K̄_{k+1} x 1_{x<0},
with
K̂_{k+1} = arg min_{K ∈ 𝒦} [ a_1 Ĉ^2 Ĝ + a_2 Ĉ^2 Ḡ + Q + K^T R K ],
K̄_{k+1} = arg min_{K ∈ 𝒦} [ a_1 C̄^2 Ĝ + a_2 C̄^2 Ḡ + Q + K^T R K ],
where Ĉ = A + B K and C̄ = A − B K.
It can be seen that if the policy u_k(x) is a piecewise linear control, then we can find an improved policy u_{k+1}(x) which is also piecewise linear. Moreover, if K̂_{k+1} = K̂ and K̄_{k+1} = K̄, i.e., u_{k+1} = u_k, then the iteration stops. The policy u_k satisfies the optimality condition (26) in Theorem 1 and is therefore an optimal control.
Therefore, we can obtain the optimal policy as follows,
u*(x) = K̂* x 1_{x≥0} − K̄* x 1_{x<0},
where
K̂* = arg min_{K ∈ 𝒦} [ a_1 Ĉ^2 Ĝ* + a_2 Ĉ^2 Ḡ* + Q + K^T R K ],
K̄* = arg min_{K ∈ 𝒦} [ a_1 C̄^2 Ĝ* + a_2 C̄^2 Ḡ* + Q + K^T R K ].
Moreover,
Ĝ* = min_{K ∈ 𝒦} { a_1 Ĉ^2 Ĝ* + a_2 Ĉ^2 Ḡ* + Q + K^T R K },
Ḡ* = min_{K ∈ 𝒦} { a_1 C̄^2 Ĝ* + a_2 C̄^2 Ḡ* + Q + K^T R K }.
The original problem (3) is transformed into two auxiliary optimization problems, (29) and (30). Under the optimal control u* in (28), the closed-loop system (11) is L_2-asymptotically stable. From (19), with the initial condition x_0 = x, we know that the optimal total reward performance is
η*(x) = Ĝ* x^2 1_{x≥0} + Ḡ* x^2 1_{x<0},
where G ^ * and G ¯ * satisfy (31) and (32), respectively.
Policy iteration can also be implemented on-line: the performance potential can be learned along a sample path without knowing all the transition probabilities. In on-line algorithms, the computational cost of policy evaluation is O(n), where n is the length of the sample path. Additionally, Reference [14] also provides some algorithms for calculating the optimal policy.
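For completeness, the following sketch gives an off-line, model-based version of the policy iteration described above. The evaluation step solves the two linear fixed-point equations for Ĝ and Ḡ implied by the series construction in Section 3.1 (equivalently, one could iterate the recursion for Ŵ_k and W̄_k), and the improvement step solves the two auxiliary minimizations over 𝒦 with a generic constrained optimizer. As an illustrative assumption we take H = I, so that 𝒦 is the nonnegative orthant; all numerical values are our own.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Policy iteration sketch for the constrained LQ problem.  Illustrative setting:
# H = I, i.e. the cone is {K : K >= 0}; all numerical values are illustrative.
A, sigma, Q = 0.6, 0.5, 1.0
B = np.array([-0.3, 0.2, 0.1])
R = np.diag([1.0, 2.0, 2.0])
m = B.size

a1 = sigma * norm.pdf(1 / sigma) + (1 + sigma**2) * norm.cdf(1 / sigma)
a2 = (1 + sigma**2) - a1

def evaluate(K_hat, K_bar):
    """Policy evaluation: solve the two linear equations for (G_hat, G_bar)."""
    C_hat, C_bar = A + B @ K_hat, A - B @ K_bar
    W = np.array([Q + K_hat @ R @ K_hat, Q + K_bar @ R @ K_bar])
    M = np.array([[1 - a1 * C_hat**2, -a2 * C_hat**2],
                  [-a1 * C_bar**2, 1 - a2 * C_bar**2]])
    return np.linalg.solve(M, W)

def improve(G_hat, G_bar):
    """Policy improvement: minimize the two auxiliary objectives over K >= 0."""
    w = a1 * G_hat + a2 * G_bar
    def branch(sign):
        obj = lambda K: w * (A + sign * (B @ K))**2 + Q + K @ R @ K
        return minimize(obj, x0=np.zeros(m), bounds=[(0.0, None)] * m).x
    return branch(+1.0), branch(-1.0)             # K_hat uses A + B K, K_bar uses A - B K

K_hat, K_bar = np.zeros(m), np.zeros(m)           # start from the zero policy
for _ in range(30):
    G_hat, G_bar = evaluate(K_hat, K_bar)
    K_hat_new, K_bar_new = improve(G_hat, G_bar)
    if np.allclose(K_hat_new, K_hat) and np.allclose(K_bar_new, K_bar):
        break                                     # u_{k+1} = u_k: iteration stops
    K_hat, K_bar = K_hat_new, K_bar_new

print(K_hat, K_bar, G_hat, G_bar)

On-line, model-free variants would replace the evaluation step by learning Ĝ and Ḡ from a sample path, as discussed above.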

4. Simulation Examples

In this section, we use two numerical examples to illustrate the optimal policy for the constrained LQ control problem (3).
Example 1.
We consider a stochastic LQ system with x_0 = 10, m = 3, A = 0.8, and B = (0.35, 0.18, 0.25). The cost matrices are
R = [ 1.2  0.6  0.4
      0.6  1.8  0.2
      0.4  0.2  2.4 ],   and Q = 1.2.
For time l = 0, 1, …, the variance of the 0-mean i.i.d. Gaussian noise ξ_l is σ^2 = 0.25. We consider the conic constraint u ≥ 0. By applying Theorem 1, the stationary optimal control is u_l*(x_l) = K̂* x_l 1_{x_l≥0} − K̄* x_l 1_{x_l<0}, for l = 0, 1, …, where K̂* = (0.574, 0, 0), K̄* = (0, 0.250, 0.270), Ĝ* = 2.773 and Ḡ* = 3.473. Furthermore, the optimal reward performance is η*(x_0) = Ĝ* x_0^2 1_{x_0≥0} + Ḡ* x_0^2 1_{x_0<0} = 623.987.
Figure 1a plots the outputs Ĝ* and Ḡ* with respect to the iteration number K; Figure 1b plots the state trajectories of 50 samples obtained by setting x_0 = 10 and implementing the stationary optimal control u*. It can be observed that x_l* converges to 0 after time l = 20, and the closed-loop system is asymptotically stable.
Example 2.
In the second case, we assume x_0 = 10, and A and B follow an identical discrete distribution with five cases. We assume A ∈ {0.7, 0.6, 0.9, 1, 1.1}, and
B ∈ { (0.18, 0.05, 0.140), (0.03, 0.12, 0.03), (0.05, 0.05, 0.05), (0.01, 0.05, 0.01), (0.05, 0.01, 0.06) },
each of which has the same probability 0.2. The cost matrices are
R = [ 1.5  0.6  0.4
      0.6  1.5  0.2
      0.4  0.2  2.5 ],   and Q = 1.5.
For time l = 0, 1, …, the variance of the 0-mean i.i.d. Gaussian noise ξ_l is σ^2 = 0.25. We consider the conic constraint u ≥ 0. By applying Theorem 1, the stationary optimal control is u_l*(x_l) = K̂* x_l 1_{x_l≥0} − K̄* x_l 1_{x_l<0}, for l = 0, 1, …, where K̂* and K̄* are identified as follows: K̂* = (0.259, 0.100, 0.130), K̄* = (0.100, 0.500, 0.100), Ĝ* = 4.111 and Ḡ* = 3.859. Furthermore, the optimal reward performance is η*(x_0) = Ĝ* x_0^2 1_{x_0≥0} + Ḡ* x_0^2 1_{x_0<0} = 489.23.
Figure 2a plots the outputs Ĝ* and Ḡ* with respect to the iteration number K; Figure 2b plots the state trajectories of 50 samples obtained by setting x_0 = 10 and implementing the stationary optimal control u*. It can be observed that x_l* converges to 0 after time l = 35, and the closed-loop system is asymptotically stable.

5. Conclusions

In this paper, we apply the direct-comparison based optimization approach to study the reward optimization of the discrete-time stochastic linear-quadratic control problem with conic constraints on an infinite horizon. We derive the performance difference formula by utilizing the state separation property of the system structure. Based on this, the optimality condition and the stationary optimal feedback control can be obtained. The direct-comparison based approach is applicable to both linear and nonlinear systems. Through the LQ optimization problem, we establish a general framework for studying infinite horizon control problems with total rewards. We verify that the proposed approach can solve such LQ problems, and we illustrate our results with two simulation examples.
The results can easily be extended to the cases of non-Gaussian noises and average rewards. Most significantly, our methodology can deal with a very general class of linear constraints on state and control variables, which includes cone constraints, positivity and negativity constraints, and state-dependent upper and lower bound constraints as special cases. In addition to problems with an infinite control horizon, our results also apply to problems with a finite horizon. Moreover, without identifying all the system structure parameters, this approach can be implemented on-line, and learning-based algorithms can be developed.
Finally, this work focuses on the discrete-time stochastic LQ control problem. Our next step is to investigate the continuous-time case. As the constrained LQ problem has a wide range of applications, we hope to apply our approach in more areas, such as dynamic portfolio management, security optimization of cyber-physical systems, and financial derivative pricing, in our future research.

Author Contributions

Conceptualization, R.X. and X.Y.; methodology, R.X.; validation, R.X., X.Y. and W.W.; formal analysis, X.Y.; data curation, X.Y.; writing–original draft preparation, R.X. and X.Y.; writing–review and editing, R.X., X.Y. and W.W.; supervision, W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 61573244 “Stochastic control optimization of uncertain systems based on offset with multiplicative noises and its applications in the financial optimization” and 61521063 “Control theory and techniques: design, control and optimization of network systems”.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MDP   Markov Decision Process
LQ    Linear-Quadratic

References

1. Basin, M.; Perez, J.; Skliar, M. Optimal filtering for polynomial system states with polynomial multiplicative noise. Int. J. Robust Nonlinear Control 2006, 16, 303–314.
2. Gershon, E.; Shaked, U. Static H2 and H∞ output-feedback of discrete-time LTI systems with state multiplicative noise. Syst. Control Lett. 2006, 55, 232–239.
3. Lim, A.B.E.; Zhou, X.Y. Stochastic optimal LQR control with integral quadratic constraints and indefinite control weights. IEEE Trans. Autom. Control 1999, 44, 1359–1369.
4. Zhu, J. On stochastic Riccati equations for the stochastic LQR problem. Syst. Control Lett. 2005, 44, 119–124.
5. Hu, Y.; Zhou, X.Y. Constrained stochastic LQ control with random coefficients, and application to portfolio selection. SIAM J. Control Optim. 2005, 44, 444–446.
6. Kalman, R.E. Contributions to the theory of optimal control. Bol. Soc. Mat. Mex. 1960, 5, 102–119.
7. Anderson, B.D.; Moore, J.B. Optimal Control: Linear Quadratic Methods; Courier Corporation: North Chelmsford, MA, USA, 2007; pp. 167–189.
8. Yong, J. Linear-quadratic optimal control problems for mean-field stochastic differential equations. SIAM J. Control Optim. 2013, 51, 2809–2838.
9. Gao, J.J.; Li, D.; Cui, X.Y.; Wang, S.Y. Time cardinality constrained mean-variance dynamic portfolio selection and market timing: A stochastic control approach. Automatica 2015, 54, 91–99.
10. Costa, O.L.V.; Fragoso, M.D.; Marques, R.P. Discrete-Time Markov Jump Linear Systems; Springer: Berlin, Germany, 2007; pp. 291–317.
11. Primbs, J.A.; Sung, C.H. Stochastic receding horizon control of constrained linear systems with state and control multiplicative noise. IEEE Trans. Autom. Control 2009, 54, 221–230.
12. Dong, Y.C. Constrained LQ problem with a random jump and application to portfolio selection. Chin. Ann. Math. 2019, 39, 829–848.
13. Gao, J.J.; Li, D. Cardinality constrained linear quadratic optimal control. IEEE Trans. Autom. Control 2011, 56, 1936–1941.
14. Wu, W.P.; Gao, J.J.; Li, D.; Shi, Y. Explicit solution for constrained stochastic linear-quadratic control with multiplicative noise. IEEE Trans. Autom. Control 2019, 64, 1999–2012.
15. Campbell, S.L. On positive controllers and linear quadratic optimal control problems. Int. J. Control 1982, 36, 885–888.
16. Heemels, W.P.; Eijndhoven, S.V.; Stoorvogel, A.A. Linear quadratic regulator problem with positive controls. Int. J. Control 1998, 70, 551–578.
17. Cao, X.R. Stochastic Learning and Optimization: A Sensitivity-Based Approach; Springer: New York, NY, USA, 2007.
18. Puterman, M.L. Markov Decision Processes: Discrete Stochastic Dynamic Programming; Wiley: New York, NY, USA, 1994.
19. Chen, R.C. Constrained stochastic control with probabilistic criteria and search optimization. In Proceedings of the 43rd IEEE Conference on Decision and Control (CDC), Nassau, Bahamas, 14–17 December 2004.
20. Zhang, K.J.; Xu, Y.K.; Chen, X.; Cao, X.R. Policy iteration based feedback control. Automatica 2008, 44, 1055–1061.
21. Cao, X.R. Stochastic feedback control with one-dimensional degenerate diffusions and nonsmooth value functions. IEEE Trans. Autom. Control 2018, 62, 6136–6151.
22. Cao, X.R.; Wan, X.W. Sensitivity analysis of nonlinear behavior with distorted probability. Math. Financ. 2017, 27, 115–150.
23. Xia, L. Mean-variance optimization of discrete time discounted Markov decision processes. Automatica 2018, 88, 76–82.
24. Cao, X.R. Optimality conditions for long-run average rewards with underselectivity and nonsmooth features. IEEE Trans. Autom. Control 2017, 62, 4318–4332.
25. Xue, R.B.; Ye, X.S.; Cao, X.R. Optimization of stock trading with additional information by Limit Order Book. Automatica 2019, submitted.
26. Ye, X.S.; Xue, R.B.; Gao, J.J.; Cao, X.R. Optimization in curbing risk contagion among financial institutes. Automatica 2018, 94, 214–220.
27. Jia, Q.S.; Yang, Y.; Xia, L.; Guan, X.H. A tutorial on event-based optimization with application in energy Internet. J. Control Theory Appl. 2018, 35, 32–40.
Figure 1. The simulation results of Example 1.
Figure 2. The simulation results of Example 2.
