Article

Optimality Conditions and Duality for a Class of Generalized Convex Interval-Valued Optimization Problems

1 College of Science, Hohai University, Nanjing 210098, China
2 School of Mathematics and Statistics, Hubei Normal University, Huangshi 435002, China
3 Department of Applied Mathematics, University Politehnica of Bucharest, 060042 Bucharest, Romania
* Author to whom correspondence should be addressed.
Submission received: 25 October 2021 / Revised: 17 November 2021 / Accepted: 20 November 2021 / Published: 22 November 2021
(This article belongs to the Special Issue Variational Problems and Applications)

Abstract: This paper is devoted to deriving optimality conditions and duality theorems for interval-valued optimization problems based on the gH-symmetric derivative. Further, the concepts of symmetric pseudo-convexity and symmetric quasi-convexity for interval-valued functions are proposed to extend the above optimality conditions. Examples are also presented to illustrate the corresponding results.

1. Introduction

Due to the complexity of the environment and the inherent ambiguity of human cognition, the data in real-world optimization problems are usually uncertain. Moreover, we cannot ignore the fact that small uncertainties in the data may render the usual optimal solutions completely meaningless from a practical viewpoint. Therefore, much attention has been paid to uncertain optimization problems; see [1,2,3,4].
There are various approaches to tackling optimization problems with uncertainty, such as stochastic processes [5], fuzzy set theory [6] and interval analysis [7]. Among them, interval analysis expresses an uncertain variable as a real interval or an interval-valued function (IVF), and it has been applied to many fields, such as inexact linear programming [8], data envelopment analysis [9], optimal control [10], goal programming [11], minimax regret solutions [12] and multi-period portfolio selection [13]. Many works on interval-valued optimization problems (IVOPs) are now available (see [14,15]).
In classical optimization theory, the derivative is the most frequently used tool. It plays an important role in the study of optimality conditions and duality theorems for constrained optimization problems. To date, various notions of derivatives for IVFs have been proposed; see [16,17,18,19,20,21,22,23]. One well-known concept is the H-derivative defined in [16]; however, it is rather restrictive. In 2009, Stefanini and Bede presented the gH-derivative [23] to overcome the disadvantages of the H-derivative. Furthermore, in [24], Guo et al. proposed the gH-symmetric derivative, which is more general than the gH-derivative. These derivatives of IVFs have been widely used in optimization. For instance, Wu [25] considered the Karush-Kuhn-Tucker (KKT) conditions for nonlinear IVOPs using the H-derivative. In [26,27], Wolfe type dual problems of IVOPs were investigated. Later, more general KKT optimality conditions were proposed by Chalco-Cano et al. [28,29] based on the gH-derivative. In addition, Jayswal et al. [30] extended optimality conditions and duality theorems for IVOPs under generalized convexity. Antczak [31] studied optimality conditions and duality results for nonsmooth vector optimization problems with multiple interval-valued objective functions; see also [32]. In 2019, Ghosh et al. [33] extended the KKT conditions for constrained IVOPs. In addition, Van and Dinh [34] investigated duality results for interval-valued pseudoconvex optimization problems with equilibrium constraints.
Since optimality conditions and duality for IVOPs have been studied extensively in recent years, in this paper we continue to develop results on optimality conditions and Wolfe duality for IVOPs on the basis of the gH-symmetric derivative. In addition, we introduce more appropriate concepts of symmetric pseudo-convexity and symmetric quasi-convexity in order to weaken the convexity hypothesis.
The remainder of the paper is organized as follows: In Section 2, we give preliminaries and recall some main concepts. In Section 3, we propose the directional gH-symmetric derivative and more appropriate concepts of generalized convexity. Section 4 establishes the KKT necessary optimality conditions, and Section 5 presents the Wolfe type duality theorems. In Section 6, we apply the generalized convexity notions to extend these results. Our results are properly wider than those in [28,29,30].

2. Preliminaries

Theorem 1
([35]). Suppose that $f : M \to \mathbb{R}$ is symmetrically differentiable on $M$ and $N$ is an open convex subset of $M$. Then $f$ is convex on $N$ if and only if
$$ f(t) - f(t^*) \geq f^s(t^*)^T (t - t^*), \quad \text{for all } t, t^* \in N. $$
Theorem 2
([36]). Let $A$ be an $m \times n$ real matrix and let $c \in \mathbb{R}^n$ be a column vector. Then the implication
$$ At \leq 0 \;\Longrightarrow\; c^T t \leq 0 $$
holds for all $t \in \mathbb{R}^n$ if and only if
$$ \exists\, u \geq 0 : \ u^T A = c^T, $$
where $u \in \mathbb{R}^m$.
Let $\mathcal{I}$ be the set of all bounded and closed intervals in $\mathbb{R}$, i.e.,
$$ \mathcal{I} = \{ a = [\underline{a}, \overline{a}] \mid \underline{a}, \overline{a} \in \mathbb{R} \ \text{and} \ \underline{a} \leq \overline{a} \}. $$
For $a = [\underline{a}, \overline{a}]$, $b = [\underline{b}, \overline{b}]$, $c = [\underline{c}, \overline{c}] \in \mathcal{I}$ and $k \in \mathbb{R}$, we have
$$ a + b = [\underline{a}, \overline{a}] + [\underline{b}, \overline{b}] = [\underline{a} + \underline{b}, \overline{a} + \overline{b}], $$
$$ k \cdot a = k \cdot [\underline{a}, \overline{a}] = \begin{cases} [k\underline{a}, k\overline{a}], & \text{if } k > 0; \\ [k\overline{a}, k\underline{a}], & \text{if } k \leq 0. \end{cases} $$
In [23], Stefanini and Bede presented the gH-difference:
$$ a \ominus_g b = c \;\Longleftrightarrow\; a = b + c \ \text{ or } \ b = a + (-1)c. $$
In addition, this difference between two intervals always exists, i.e.,
$$ a \ominus_g b = \big[ \min\{\underline{a} - \underline{b}, \overline{a} - \overline{b}\}, \ \max\{\underline{a} - \underline{b}, \overline{a} - \overline{b}\} \big]. $$
Furthermore, the partial order relation "$\preceq_{LU}$" on $\mathcal{I}$ is defined as follows:
$$ [\underline{a}, \overline{a}] \preceq_{LU} [\underline{b}, \overline{b}] \;\Longleftrightarrow\; \underline{a} \leq \underline{b} \ \text{and} \ \overline{a} \leq \overline{b}, $$
$$ [\underline{a}, \overline{a}] \prec_{LU} [\underline{b}, \overline{b}] \;\Longleftrightarrow\; [\underline{a}, \overline{a}] \preceq_{LU} [\underline{b}, \overline{b}] \ \text{and} \ [\underline{a}, \overline{a}] \neq [\underline{b}, \overline{b}]. $$
The intervals $a$ and $b$ are said to be comparable if and only if $a \preceq_{LU} b$ or $a \succeq_{LU} b$.
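The interval arithmetic and the order relation recalled above are straightforward to implement numerically. The following Python sketch is ours and is not part of the original paper; the class and method names are chosen only for illustration. It mirrors the addition, scalar multiplication, gH-difference and LU comparison defined in this section.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float  # lower endpoint (underline a)
    hi: float  # upper endpoint (overline a)

    def __post_init__(self):
        assert self.lo <= self.hi, "an interval needs lo <= hi"

    def __add__(self, other):
        # [a_, a^] + [b_, b^] = [a_ + b_, a^ + b^]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, k):
        # k*[a_, a^] = [k a_, k a^] if k > 0, and [k a^, k a_] if k <= 0
        return Interval(k * self.lo, k * self.hi) if k > 0 else Interval(k * self.hi, k * self.lo)

    def gh_minus(self, other):
        # a (gH-minus) b = [min{a_-b_, a^-b^}, max{a_-b_, a^-b^}]; it always exists
        d_lo, d_hi = self.lo - other.lo, self.hi - other.hi
        return Interval(min(d_lo, d_hi), max(d_lo, d_hi))

    def leq_LU(self, other):
        # [a_, a^] LU-precedes [b_, b^] iff a_ <= b_ and a^ <= b^
        return self.lo <= other.lo and self.hi <= other.hi

# quick check: gH-difference and comparability of two sample intervals
a, b = Interval(1.0, 4.0), Interval(0.0, 5.0)
print(a.gh_minus(b))             # Interval(lo=-1.0, hi=1.0)
print(a.leq_LU(b), b.leq_LU(a))  # False False -> a and b are not comparable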
Let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space, and let $T \subseteq \mathbb{R}^n$ be an open set. We call a function $F : T \to \mathcal{I}$ an IVF, i.e., $F(t)$ is a closed interval in $\mathbb{R}$ for every $t \in T$. The IVF $F$ can also be written as $F = [\underline{F}, \overline{F}]$, where $\underline{F}$ and $\overline{F}$ are real-valued functions with $\underline{F} \leq \overline{F}$ on $T$. Moreover, $\underline{F}$ and $\overline{F}$ are called the endpoint functions of $F$.
Definition 1
([24]). Let $F : T \to \mathcal{I}$. Then $F$ is said to be gH-symmetrically differentiable at $t_0 \in T$ if there exists $F^s(t_0) \in \mathcal{I}$ such that
$$ \lim_{\|h\| \to 0} \frac{F(t_0 + h) \ominus_g F(t_0 - h)}{2\|h\|} = F^s(t_0). $$
Definition 2
([24]). Let $F : T \to \mathcal{I}$ and $t^0 \in T$. If the IVF $\varphi(t_i) = F(t_1^0, \ldots, t_{i-1}^0, t_i, t_{i+1}^0, \ldots, t_n^0)$ is gH-symmetrically differentiable at $t_i^0$, then we say that $F$ has the $i$th partial gH-symmetric derivative $\big( \tfrac{\partial^s F}{\partial t_i} \big)_g (t^0)$ at $t^0$ and
$$ \left( \frac{\partial^s F}{\partial t_i} \right)_g (t^0) = \varphi^s(t_i^0). $$
Definition 3
([24]). Let $F : T \to \mathcal{I}$ be an IVF, and let $\partial_{t_i}^s F$ stand for the partial gH-symmetric derivative with respect to the $i$th variable $t_i$. If $\partial_{t_i}^s F(t_0)$ $(i = 1, \ldots, n)$ exist on some neighborhood of $t_0$ and are continuous at $t_0$, then $F$ is said to be gH-symmetrically differentiable at $t_0 \in T$. Moreover, we denote by
$$ \nabla^s F(t_0) = \big( \partial_{t_1}^s F(t_0), \ldots, \partial_{t_n}^s F(t_0) \big) $$
the symmetric gradient of $F$ at $t_0$.
Theorem 3
([24]). Let the IVF $F : T \to \mathcal{I}$ be continuous in $(t_0 - \delta, t_0 + \delta)$ for some $\delta > 0$. Then $F$ is gH-symmetrically differentiable at $t_0 \in T$ if and only if $\underline{F}$ and $\overline{F}$ are symmetrically differentiable at $t_0$.
Definition 4
([28]). Let $F = [\underline{F}, \overline{F}]$ be an IVF defined on $T$. We say that $F$ is LU-convex at $t^*$ if
$$ F(\theta t^* + (1 - \theta) t) \preceq_{LU} \theta F(t^*) + (1 - \theta) F(t) $$
for every $\theta \in [0, 1]$ and $t \in T$.
Now, we introduce the following IVOP:
$$ \min F(t) \quad \text{subject to} \quad g_i(t) \leq 0, \ i = 1, \ldots, m, \tag{5} $$
where $F : M \to \mathcal{I}$, $g_i : M \to \mathbb{R}$ $(i = 1, \ldots, m)$, and $M \subseteq \mathbb{R}^n$ is an open and convex set. Let
$$ X = \{ t \in \mathbb{R}^n : t \in M \ \text{and} \ g_i(t) \leq 0, \ i = 1, \ldots, m \} $$
be the collection of feasible points of Problem (5), and let the set of objective values of the primal Problem (5) be denoted by
$$ O_P(F, X) = \{ F(t) : t \in X \}. $$
Moreover, we recall the definition of a non-dominated solution to Problem (5):
Definition 5
([27]). Let $t^*$ be a feasible solution of Problem (5), i.e., $t^* \in X$. Then $t^*$ is said to be a non-dominated solution of Problem (5) if there exists no $t \in X \setminus \{t^*\}$ such that $F(t) \prec_{LU} F(t^*)$.
The KKT sufficient optimality conditions of Problem (5) have been obtained in [24]:
Theorem 4
([24], Sufficient optimality condition). Assume that $F : M \to \mathcal{I}$ is LU-convex and gH-symmetrically differentiable at $t^*$, and that $g_i : M \to \mathbb{R}$ $(i = 1, \ldots, m)$ are convex and symmetrically differentiable at $t^*$. If there exist (Lagrange) multipliers $0 \leq \mu_i \in \mathbb{R}$, $i = 1, \ldots, m$, such that
$$ \nabla^s \underline{F}(t^*) + \nabla^s \overline{F}(t^*) + \sum_{i=1}^m \mu_i \nabla^s g_i(t^*) = 0; \qquad \sum_{i=1}^m \mu_i g_i(t^*) = 0, \quad \text{where } \mu = (\mu_1, \ldots, \mu_m)^T, \tag{7} $$
then t * is a non-dominated solution to Problem (5).
Example 1.
Consider the IVOP as below:
$$ \min F(t) \quad \text{subject to} \quad g_1(t) \leq 0, \ g_2(t) \leq 0, \tag{8} $$
where
$$ F(t) = \begin{cases} [\,4t^2 + 2t - 3, \ 3t^2 + 3t\,], & \text{if } t \in (-1, 0); \\ [\,3t - 3, \ 3t\,], & \text{if } t \in [0, 1), \end{cases} $$
and
$$ g_1(t) = -t; \qquad g_2(t) = t - 1. $$
By a simple calculation, $F$ is LU-convex and gH-symmetrically differentiable at $t = 0$, with
$$ \nabla^s F(0) = \left[ \tfrac{5}{2}, 3 \right], \quad g_1^s(0) = -1, \quad \text{and} \quad g_2^s(0) = 1. $$
Condition (7) in Theorem 4 is satisfied at $t = 0$ with $\mu_1 = \tfrac{11}{2}$ and $\mu_2 = 0$.
On the other hand, it can be easily verified that t = 0 is a non-dominated solution of Problem (8). Hence, Theorem 4 is verified.
Note that $F$ is not gH-differentiable at $t = 0$; hence, the sufficient conditions in [24] are properly wider than those in [28].
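As an informal numerical illustration of Example 1 (our own check, not taken from the paper), one can approximate the symmetric derivatives of the endpoint functions of F at t = 0 by symmetric difference quotients and substitute them into condition (7); the helper names below are ours.

# A rough numerical check of Example 1:
# approximate the symmetric derivatives of the endpoint functions of F at 0
# with symmetric difference quotients and plug them into condition (7).
def F_lower(t):   # endpoint function F_ of Example 1
    return 4*t**2 + 2*t - 3 if -1 < t < 0 else 3*t - 3

def F_upper(t):   # endpoint function F^ of Example 1
    return 3*t**2 + 3*t if -1 < t < 0 else 3*t

def sym_diff(f, t, h=1e-6):
    # symmetric difference quotient (f(t+h) - f(t-h)) / (2h)
    return (f(t + h) - f(t - h)) / (2*h)

dFl, dFu = sym_diff(F_lower, 0.0), sym_diff(F_upper, 0.0)
print(round(dFl, 4), round(dFu, 4))          # ~2.5 and ~3.0, i.e. the symmetric gradient of F at 0 is [5/2, 3]

# condition (7): dF_(0) + dF^(0) + mu1*g1^s(0) + mu2*g2^s(0) = 0
mu1, mu2 = 11/2, 0.0
g1_s, g2_s = -1.0, 1.0                       # g1(t) = -t, g2(t) = t - 1
print(abs(dFl + dFu + mu1*g1_s + mu2*g2_s) < 1e-4)   # True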

3. Generalized Convexity of gH-Symmetrically Differentiable IVFs

The LU-convexity assumption in [28] may be restrictive. For example, the IVF
$$ F(t) = \begin{cases} [\,t, 2t\,], & \text{if } t \geq 0; \\ [\,2t, t\,], & \text{if } t < 0, \end{cases} $$
is not LU-convex at $t = 0$. Inspired by this, we introduce the directional gH-symmetric derivative and concepts of generalized convexity for IVFs, which will be used in the following sections.
Definition 6.
Let $F : T \to \mathcal{I}$ be an IVF and $h \in \mathbb{R}^n$. Then $F$ is called directional gH-symmetrically differentiable at $t_0$ in the direction $h$ if there exists $D^s F(t_0 : h) \in \mathcal{I}$ such that
$$ D^s F(t_0 : h) = \lim_{\alpha \to 0^+} \frac{F(t_0 + \alpha h) \ominus_g F(t_0 - \alpha h)}{2\alpha}. $$
If $t = (t_1, \ldots, t_n)^T$ and $e_i = (0, \ldots, \underset{(i)}{1}, \ldots, 0)$, then $D^s F(t : e_i)$ is the partial gH-symmetric derivative of $F$ with respect to $t_i$ at $t$.
Theorem 5.
If $F : T \to \mathcal{I}$ is gH-symmetrically differentiable at $t \in T$ and $h \in \mathbb{R}^n$, then the directional gH-symmetric derivative of $F$ at $t$ in the direction $h$ exists and
$$ D^s F(t : h) = F^s(t)^T h. $$
Proof. 
Since, by hypothesis, $F$ is gH-symmetrically differentiable at $t$, there exists $F^s(t) \in \mathcal{I}$ such that
$$ \lim_{\alpha \|h\| \to 0} \frac{F(t + \alpha h) \ominus_g F(t - \alpha h)}{2\alpha \|h\|} = F^s(t). $$
Then, we have
$$ \lim_{\alpha \to 0} D\!\left( \frac{F(t + \alpha h) \ominus_g F(t - \alpha h)}{2\alpha}, \ F^s(t)\, h \right) = 0, $$
i.e.,
$$ D^s F(t : h) = F^s(t)\, h. $$
Thus, we complete the proof. □
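As a rough numerical illustration of Theorem 5 (ours, not from the paper), consider the sample IVF $F(t_1, t_2) = [t_1 + t_2 - 1, \ t_1 + 2t_2]$ on a region with $t_2 > -1$, so that the endpoints stay ordered; the directional quotient of Definition 6 can then be compared with the interval obtained from the symmetric gradient. The function names below are ours.

def F(t1, t2):
    return (t1 + t2 - 1.0, t1 + 2.0*t2)      # (lower, upper) endpoints

def gh_minus(a, b):
    # gH-difference of two intervals given as (lower, upper) pairs
    d_lo, d_hi = a[0] - b[0], a[1] - b[1]
    return (min(d_lo, d_hi), max(d_lo, d_hi))

def directional_sym_derivative(F, t, h, alpha=1e-6):
    # D^s F(t : h) ~ (F(t + alpha*h) gH-minus F(t - alpha*h)) / (2*alpha)
    plus = F(t[0] + alpha*h[0], t[1] + alpha*h[1])
    minus = F(t[0] - alpha*h[0], t[1] - alpha*h[1])
    d = gh_minus(plus, minus)
    return (d[0] / (2*alpha), d[1] / (2*alpha))

t, h = (0.5, 0.25), (1.0, -2.0)
print(directional_sym_derivative(F, t, h))   # approximately (-3.0, -1.0)

# symmetric gradient: partial wrt t1 is [1, 1], partial wrt t2 is [1, 2];
# F^s(t)^T h = 1*h1 + [1, 2]*h2 = [1 - 4, 1 - 2] = [-3, -1], matching the output above.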
Definition 7.
The IVF $F : T \to \mathcal{I}$ is called symmetric pseudo-convex (SP-convex) at $t_0 \in T$ if $F$ is gH-symmetrically differentiable at $t_0$ and
$$ F^s(t_0)(t - t_0) \succeq_{LU} 0 \quad \text{implies} \quad F(t) \succeq_{LU} F(t_0), $$
for all $t \in T$.
$F$ is said to be symmetric pseudo-concave (SP-concave) at $t_0$ if $-F$ is SP-convex at $t_0$.
Definition 8.
The IVF $F : T \to \mathcal{I}$ is called symmetric quasi-convex (SQ-convex) at $t_0 \in T$ if $F$ is gH-symmetrically differentiable at $t_0$ and
$$ F(t) \preceq_{LU} F(t_0) \quad \text{implies} \quad F^s(t_0)(t - t_0) \preceq_{LU} 0, $$
for all $t \in T$.
$F$ is said to be symmetric quasi-concave (SQ-concave) at $t_0$ if $-F$ is SQ-convex at $t_0$.
Remark 1.
When $\underline{F} = \overline{F}$, i.e., $F$ degenerates to a real-valued function, the concepts of SQ-convexity and SP-convexity reduce to the s-quasiconvexity and s-pseudoconvexity of [35].

4. KKT Necessary Conditions

Necessary optimality conditions are an important part of optimization theory because they can be used to exclude feasible solutions that are not optimal, i.e., they narrow down the candidates for solving the problem. From this point of view, using the gH-symmetric derivative, we establish a KKT necessary optimality condition which is more general than those in [28,29].
In order to obtain the necessary condition for Problem (5), we shall use Slater's constraint qualification [37]:
$$ \exists\, t_0 \in X \ \text{such that} \ g_i(t_0) < 0, \ i = 1, \ldots, m. \tag{10} $$
Theorem 6
(Necessary optimality condition). Assume that $F : M \to \mathcal{I}$ is LU-convex and gH-symmetrically differentiable, and that $g_i : M \to \mathbb{R}$ $(i = 1, \ldots, m)$ are symmetrically differentiable and convex on $M$. Suppose $H = \{ i : g_i(t^*) = 0 \}$. If $t^*$ is a non-dominated solution to Problem (5) and the following conditions are satisfied:
(A1)
For every $i \in H$ and for all $y \in \mathbb{R}^n$, there exist positive real numbers $\xi_i$ such that, whenever $0 < \xi < \xi_i$ and $\nabla^s g_i(t^*)^T y < 0$, we have
$$ \nabla^s g_i(t^* + \xi y)^T y < 0; $$
(A2)
The set $X$ satisfies Slater's constraint qualification. For $i \in H$ and for all $h \in \mathbb{R}^n$, $D^+ \underline{F}(t^* : h) \leq 0$ implies that $D^s \underline{F}(t^* : h) \leq 0$, or $D^+ \overline{F}(t^* : h) \leq 0$ implies that $D^s \overline{F}(t^* : h) \leq 0$;
where $D^+ \underline{F}$ and $D^- \underline{F}$ ($D^+ \overline{F}$ and $D^- \overline{F}$) are the right-sided and left-sided directional derivatives of $\underline{F}$ ($\overline{F}$). Then, there exists $\mu^* \in \mathbb{R}_+^m$ such that condition (7) in Theorem 4 holds.
Proof. 
Suppose the above conditions are satisfied. Assume that there exists $w \in \mathbb{R}^n$ such that
$$ w^T \nabla^s g_i(t^*) \leq 0 \quad \text{and} \quad w^T \nabla^s \underline{F}(t^*) < 0, \ \ w^T \nabla^s \overline{F}(t^*) < 0, \qquad (i \in H). \tag{11} $$
Since $X$ satisfies Slater's constraint qualification, by (10) there exists $t_0 \in X$ such that $g_i(t_0) < 0$ $(i = 1, \ldots, m)$. Then we have
$$ g_i(t_0) - g_i(t^*) < 0, \quad (i \in H). $$
Combining Theorem 1 and the convexity of $g_i$, we have
$$ \nabla^s g_i(t^*)(t_0 - t^*) < 0, \quad (i \in H). $$
By inequality (11), we get
$$ \nabla^s g_i(t^*)\big[ w + \rho (t_0 - t^*) \big] < 0, \quad (i \in H) $$
for all $\rho > 0$. By hypothesis (A1), there exists $\xi_i > 0$ such that
$$ g_i\big( t^* + \xi [ w + \rho (t_0 - t^*) ] \big) < 0, \quad (i \in H) $$
for $0 < \xi < \xi_i$. Therefore, we have $t^* + \xi [ w + \rho (t_0 - t^*) ] \in X$.
Since $t^*$ is a non-dominated solution to Problem (5), there exists no feasible solution $t$ such that $F(t) \prec_{LU} F(t^*)$, i.e.,
$$ \underline{F}\big( t^* + \xi [ w + \rho (t_0 - t^*) ] \big) \geq \underline{F}(t^*), \quad \text{or} \quad \overline{F}\big( t^* + \xi [ w + \rho (t_0 - t^*) ] \big) \geq \overline{F}(t^*). $$
By hypothesis (A2), we have
$$ [ w + \rho (t_0 - t^*) ]^T \nabla^s \underline{F}(t^*) \geq 0, \quad \text{or} \quad [ w + \rho (t_0 - t^*) ]^T \nabla^s \overline{F}(t^*) \geq 0, $$
for all $\rho > 0$. Letting $\rho \to 0^+$, we obtain
$$ w^T \nabla^s \underline{F}(t^*) \geq 0, \quad \text{or} \quad w^T \nabla^s \overline{F}(t^*) \geq 0, $$
which contradicts inequality (11).
Thus, inequality (11) has no solution. By Theorem 2, there exist $0 \leq \mu_i^* \in \mathbb{R}$ such that
$$ \nabla^s \underline{F}(t^*) + \nabla^s \overline{F}(t^*) + \sum_{i=1}^m \mu_i^* \nabla^s g_i(t^*) = 0. $$
For $i \notin H$, let $\mu_i^* = 0$; then we have $\sum_{i=1}^m \mu_i^* g_i(t^*) = 0$. The proof is complete. □
Example 2.
Continuing from Example 1, note that $g_1(0) = 0$ and $g_1^s(t) \equiv -1$. Moreover, $M$ satisfies Slater's condition. For $h \in \mathbb{R}$ we have:
$$ D^+ \underline{F}(0 : h) = \lim_{\alpha \to 0^+} \frac{\underline{F}(0 + \alpha h) - \underline{F}(0)}{\alpha} = \begin{cases} 3h, & h > 0; \\ 2h, & h \leq 0, \end{cases} $$
$$ D^- \underline{F}(0 : h) = \lim_{\alpha \to 0^-} \frac{\underline{F}(0 + \alpha h) - \underline{F}(0)}{\alpha} = 3h. $$
Obviously, $D^+ \underline{F}(t^* : h) \leq 0$ implies that
$$ D^+ \underline{F}(t^* : h) + D^- \underline{F}(t^* : h) \leq 0. $$
Thus, the conditions in Theorem 6 hold at t = 0 .
On the other hand, we have:
$$ \nabla^s \underline{F}(0) + \nabla^s \overline{F}(0) + \sum_{i=1}^{2} \mu_i^* \nabla^s g_i(0) = \frac{5}{2} + 3 + \mu_1 \cdot (-1) + \mu_2 \cdot 1 = 0 $$
when $\mu_1 = \frac{11}{2}$, $\mu_2 = 0$. Hence, Theorem 6 is verified.
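The one-sided quotients appearing in hypothesis (A2) can also be checked numerically for Example 2. The following sketch is ours (function names are illustrative only); it compares the right-sided and symmetric difference quotients of the lower endpoint function at t* = 0 for a few directions h, and in each direction where the right-sided value is nonpositive, so is the symmetric one.

def F_lower(t):
    return 4*t**2 + 2*t - 3 if -1 < t < 0 else 3*t - 3   # F_ from Example 1

def right_quot(f, t, h, a=1e-6):
    return (f(t + a*h) - f(t)) / a            # approximates D+ f(t : h)

def sym_quot(f, t, h, a=1e-6):
    return (f(t + a*h) - f(t - a*h)) / (2*a)  # approximates D^s f(t : h)

for h in (1.0, -1.0, -0.3):
    dp, ds = right_quot(F_lower, 0.0, h), sym_quot(F_lower, 0.0, h)
    print(h, round(dp, 4), round(ds, 4))
# h =  1.0: D+ ~ 3h =  3.0,  D^s ~ 2.5h =  2.5
# h = -1.0: D+ ~ 2h = -2.0,  D^s ~ 2.5h = -2.5   (both nonpositive)
# h = -0.3: D+ ~ -0.6,       D^s ~ -0.75         (both nonpositive)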

5. Wolfe Type Duality

In this section, we consider the Wolfe dual Problem (14) of Problem (5) as follows:
$$ \max \ F(t) + \sum_{i=1}^m \mu_i g_i(t) \quad \text{subject to} \quad \nabla^s \underline{F}(t) + \nabla^s \overline{F}(t) + \sum_{i=1}^m \mu_i \nabla^s g_i(t) = 0, \quad \mu = (\mu_1, \ldots, \mu_m) \geq 0. \tag{14} $$
For convenience, we write:
$$ L(t, \mu) = F(t) + \sum_{i=1}^m \mu_i g_i(t). $$
We denote by
$$ Y = \Big\{ (t, \mu) \in \mathbb{R}^n \times \mathbb{R}_+^m : \nabla^s \underline{F}(t) + \nabla^s \overline{F}(t) + \sum_{i=1}^m \mu_i \nabla^s g_i(t) = 0 \Big\} $$
the feasible set of dual Problem (14) and
$$ O_D(L, Y) = \{ L(t, \mu) : (t, \mu) \in Y \} $$
the set of all objective values of Problem (14).
Definition 9.
Let $(t^*, \mu^*)$ be a feasible solution to Problem (14), i.e., $(t^*, \mu^*) \in Y$. Then $(t^*, \mu^*)$ is said to be a non-dominated solution to Problem (14) if there is no $(t, \mu) \in Y$ such that $L(t^*, \mu^*) \prec_{LU} L(t, \mu)$.
Next, we discuss the solvability for Wolfe primal and dual problems.
Lemma 1.
Assume that $F : M \to \mathcal{I}$ is LU-convex and gH-symmetrically differentiable, and that $g_i : M \to \mathbb{R}$ $(i = 1, \ldots, m)$ are symmetrically differentiable and convex on $M$. Furthermore, let $H = \{ i : g_i(t^*) = 0 \}$. If $\hat{t}$ and $(t, \mu)$ are feasible solutions to Problems (5) and (14), respectively, then the following statements hold true:
(B1)
If $\underline{F}(t) \geq \underline{F}(\hat{t})$, then $\overline{F}(\hat{t}) \geq \overline{L}(t, \mu)$;
(B2)
If $\overline{F}(t) \geq \overline{F}(\hat{t})$, then $\underline{F}(\hat{t}) \geq \underline{L}(t, \mu)$.
Moreover, the statements still hold true under strict inequality.
Proof. 
Suppose $\hat{t}$ and $(t, \mu)$ are feasible solutions to Problems (5) and (14), respectively. Since $F$ is LU-convex, we have:
$$ \overline{F}(\hat{t}) \geq \overline{F}(t) + \nabla^s \overline{F}(t)(\hat{t} - t) = \overline{F}(t) - \nabla^s \underline{F}(t)(\hat{t} - t) - \sum_{i=1}^m \mu_i \nabla^s g_i(t)(\hat{t} - t) \geq \overline{F}(t) + \underline{F}(t) - \underline{F}(\hat{t}) + \sum_{i=1}^m \mu_i \big[ g_i(t) - g_i(\hat{t}) \big]. $$
If $\underline{F}(t) - \underline{F}(\hat{t}) \geq 0$, it follows that
$$ \overline{F}(\hat{t}) \geq \overline{F}(t) + \sum_{i=1}^m \mu_i g_i(t) = \overline{L}(t, \mu). $$
Thus, statement (B1) holds true. On the other hand, if $\underline{F}(t) - \underline{F}(\hat{t}) > 0$, then
$$ \overline{F}(\hat{t}) > \overline{F}(t) + \sum_{i=1}^m \mu_i g_i(t) = \overline{L}(t, \mu). $$
The other statements can be proved by similar arguments. □
Lemma 2.
Under the same assumptions as in Lemma 1, if $\hat{t}$ and $(t, \mu)$ are feasible solutions to Problems (5) and (14), respectively, then the following statements hold true:
(C1)
If $\overline{F}(t) \leq \overline{F}(\hat{t})$, then $\overline{F}(\hat{t}) \geq \overline{L}(t, \mu)$;
(C2)
If $\underline{F}(t) \leq \underline{F}(\hat{t})$, then $\underline{F}(\hat{t}) \geq \underline{L}(t, \mu)$.
Moreover, the statements still hold true under strict inequality.
Proof. 
Suppose $\overline{F}(t) \leq \overline{F}(\hat{t})$; then we have:
$$ \begin{aligned} \overline{F}(\hat{t}) - \overline{L}(t, \mu) &= \overline{F}(\hat{t}) - \overline{F}(t) - \sum_{i=1}^m \mu_i g_i(t) \\ &\geq \overline{F}^s(t)(\hat{t} - t) + \Big[ -\sum_{i=1}^m \mu_i g_i(\hat{t}) + \sum_{i=1}^m \mu_i g_i(\hat{t}) - \sum_{i=1}^m \mu_i g_i(t) \Big] \\ &\geq \overline{F}^s(t)(\hat{t} - t) + \Big[ -\sum_{i=1}^m \mu_i g_i(\hat{t}) + \sum_{i=1}^m \mu_i g_i^s(t)(\hat{t} - t) \Big] \\ &= \Big[ \overline{F}^s(t) + \sum_{i=1}^m \mu_i g_i^s(t) \Big](\hat{t} - t) - \sum_{i=1}^m \mu_i g_i(\hat{t}) \\ &= -\underline{F}^s(t)(\hat{t} - t) - \sum_{i=1}^m \mu_i g_i(\hat{t}) \\ &\geq \underline{F}(t) - \underline{F}(\hat{t}) - \sum_{i=1}^m \mu_i g_i(\hat{t}) = \underline{F}(t) - \underline{L}(\hat{t}, \mu) \geq 0. \end{aligned} $$
Thus, statement (C1) holds true. On the other hand, if $\overline{F}(t) < \overline{F}(\hat{t})$, then
$$ \overline{F}(\hat{t}) > \overline{L}(t, \mu). $$
The proof of (C2) is similar to (C1), so we omit it. □
Theorem 7.
(Weak duality). Under the same assumptions as in Lemma 1, if $\hat{t}$ and $(t, \mu)$ are feasible solutions to Problems (5) and (14), respectively, then the following statements hold true:
(D1)
If $F(t)$ and $F(\hat{t})$ are comparable, then $F(\hat{t}) \succeq_{LU} L(t, \mu)$.
(D2)
If $F(t)$ and $F(\hat{t})$ are not comparable, then $\underline{F}(\hat{t}) > \underline{L}(t, \mu)$ or $\overline{F}(\hat{t}) > \overline{L}(t, \mu)$.
Proof. 
If $F(t)$ and $F(\hat{t})$ are comparable, then by Lemmas 1 and 2 we obtain statement (D1). If $F(t)$ and $F(\hat{t})$ are not comparable, then we have
$$ \underline{F}(\hat{t}) < \underline{F}(t), \quad \text{or} \quad \overline{F}(\hat{t}) < \overline{F}(t). $$
By Lemmas 1 and 2, we obtain that
$$ \underline{F}(\hat{t}) > \underline{L}(t, \mu), \quad \text{or} \quad \overline{F}(\hat{t}) > \overline{L}(t, \mu). $$
The proof is complete. □
Example 3.
Consider the optimization problem in Example 1. The corresponding Wolfe dual problem is:
$$ \max \ F(t) + \mu_1 g_1(t) + \mu_2 g_2(t) \quad \text{subject to} \quad \nabla^s \underline{F}(t) + \nabla^s \overline{F}(t) + \mu_1 \nabla^s g_1(t) + \mu_2 \nabla^s g_2(t) = 0, \quad \mu = (\mu_1, \mu_2) \geq 0. \tag{18} $$
Clearly, $\hat{t} = 0$ is a feasible solution of Problem (8), with objective value $[-3, 0]$. Moreover, $(t, \mu_1, \mu_2) = \big(-\tfrac{1}{2}, 0, 2\big)$ is a feasible solution to Problem (18), with objective value $\big[-6, -\tfrac{15}{4}\big]$.
We observe that
$$ F(0) \succeq_{LU} L\big(-\tfrac{1}{2}, 0, 2\big). $$
Hence, Theorem 7 is verified.
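A small numerical check of Example 3 (our own illustration, not from the paper) confirms the dual feasibility of (t, mu1, mu2) = (-1/2, 0, 2) and the weak duality relation against the primal feasible point t̂ = 0; the helper names below are ours.

def F(t):   # the IVF of Example 1 as a pair (lower, upper)
    return (4*t**2 + 2*t - 3, 3*t**2 + 3*t) if -1 < t < 0 else (3*t - 3, 3*t)

g1 = lambda t: -t
g2 = lambda t: t - 1.0

def sym(f, t, h=1e-6):   # symmetric difference quotient
    return (f(t + h) - f(t - h)) / (2*h)

t_dual, mu1, mu2 = -0.5, 0.0, 2.0
Fl = lambda t: F(t)[0]
Fu = lambda t: F(t)[1]

# dual feasibility: dF_(t) + dF^(t) + mu1*g1^s(t) + mu2*g2^s(t) = 0
stationarity = sym(Fl, t_dual) + sym(Fu, t_dual) + mu1*sym(g1, t_dual) + mu2*sym(g2, t_dual)
print(abs(stationarity) < 1e-6)               # True: -2 + 0 + 0 + 2 = 0

# weak duality: L(t, mu) LU-precedes F(t_hat) with t_hat = 0
L = (Fl(t_dual) + mu1*g1(t_dual) + mu2*g2(t_dual),
     Fu(t_dual) + mu1*g1(t_dual) + mu2*g2(t_dual))
print(L)                                      # (-6.0, -3.75), i.e. [-6, -15/4]
print(L[0] <= Fl(0.0) and L[1] <= Fu(0.0))    # True: [-6, -15/4] LU-precedes [-3, 0]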
Theorem 8.
(Solvability). Under the same assumptions as in Lemma 1, if $(t^*, \mu^*) \in Y$ and $L(t^*, \mu^*) \in O_P(F, X)$, then $(t^*, \mu^*)$ solves Problem (14).
Proof. 
Suppose $(t^*, \mu^*)$ is not a non-dominated solution to Problem (14); then there exists $(t, \mu) \in Y$ such that
$$ L(t^*, \mu^*) \prec_{LU} L(t, \mu). $$
Since $L(t^*, \mu^*) \in O_P(F, X)$, there exists $\hat{t} \in X$ such that
$$ F(\hat{t}) = L(t^*, \mu^*) \prec_{LU} L(t, \mu). \tag{20} $$
According to Theorem 7, if $F(t)$ and $F(\hat{t})$ are comparable, then we have
$$ F(\hat{t}) \succeq_{LU} L(t, \mu). $$
If $F(t)$ and $F(\hat{t})$ are not comparable, then
$$ \underline{F}(\hat{t}) > \underline{L}(t, \mu), \quad \text{or} \quad \overline{F}(\hat{t}) > \overline{L}(t, \mu). $$
Both cases contradict (20). Thus, we complete the proof. □
Theorem 9.
(Solvability). Under the same assumptions as in Lemma 1, if $\hat{t} \in X$ is a feasible solution to Problem (5) and $F(\hat{t}) \in O_D(L, Y)$, then $\hat{t}$ solves Problem (5).
Proof. 
The proof is similar to Theorem 8, so we omit it. □
Corollary 1.
Under the same assumptions as in Lemma 1, if $\hat{t}$ and $(t^*, \mu^*)$ are feasible solutions to Problems (5) and (14), respectively, and if, moreover, $F(\hat{t}) = L(t^*, \mu^*)$, then $\hat{t}$ solves Problem (5) and $(t^*, \mu^*)$ solves Problem (14).
Proof. 
The proof follows from Theorems 8 and 9. □
Theorem 10.
(Strong duality). Under the same assumptions as in Lemma 1, if $F$ and $g_i$ $(i = 1, \ldots, m)$ satisfy conditions (A1) and (A2) at $t^*$, then there exists $\mu^* \in \mathbb{R}_+^m$ such that $(t^*, \mu^*)$ is a solution of Problem (14) and
$$ L(t^*, \mu^*) = F(t^*). $$
Proof. 
By Theorem 6, there exists $\mu^* \in \mathbb{R}_+^m$ such that
$$ \nabla^s \underline{F}(t^*) + \nabla^s \overline{F}(t^*) + \sum_{i=1}^m \mu_i^* \nabla^s g_i(t^*) = 0 $$
and $\sum_{i=1}^m \mu_i^* g_i(t^*) = 0$. It can be shown that $L(t^*, \mu^*) \in O_D(L, Y)$ and
$$ L(t^*, \mu^*) = F(t^*). $$
By Corollary 1, $(t^*, \mu^*)$ is a solution to Problem (14). The proof is complete. □
Example 4.
Continuing from Example 2, a direct calculation shows that the non-dominated solution to Problem (18) is $\big(0, \tfrac{11}{2}, 0\big)$, with objective value $[-3, 0]$; meanwhile, $t = 0$ is also a non-dominated solution to Problem (8), with objective value $[-3, 0]$. Then we have
$$ L\big(0, \tfrac{11}{2}, 0\big) = F(0). $$
On the other hand, the IVF $F$ in Example 2 satisfies conditions (A1) and (A2), which verifies Theorem 10.

6. Optimality Conditions with Generalized Convexity

In this section, we use the concepts of SP-convexity and SQ-convexity, which are less restrictive than LU-convexity, to obtain some generalized optimality theorems for Problem (5).
Theorem 11.
(Sufficient condition). Suppose $F$ is SP-convex and $g_i$ is s-quasiconvex at $t^*$ for $i \in H$. If $t^* \in X$ and condition (7) in Theorem 4 holds for some $\mu^* \in \mathbb{R}_+^m$, then $t^*$ is a non-dominated solution to Problem (5).
Proof. 
Assume that condition (7) in Theorem 4 holds for some $\mu^* \geq 0$. We have $\sum_{i=1}^m \mu_i^* g_i(t^*) = 0$, where $\mu_i^* = 0$ for $i \notin H$. Since $g_i(t) \leq g_i(t^*)$ and $g_i$ is s-quasiconvex at $t^*$ for $i \in H$, we obtain $g_i^s(t^*)(t - t^*) \leq 0$. Thus,
$$ \sum_{i=1}^m \mu_i^* g_i^s(t^*)(t - t^*) \leq 0 \quad \text{for all } t \in X, $$
which implies
$$ \nabla^s \big( \underline{F}(t^*) + \overline{F}(t^*) \big)(t - t^*) \geq 0 \quad \text{for all } t \in X. $$
Thanks to the SP-convexity of $F$, we have
$$ \underline{F}(t) + \overline{F}(t) \geq \underline{F}(t^*) + \overline{F}(t^*) \quad \text{for all } t \in X. \tag{22} $$
Then $t^*$ is an optimal solution for the real-valued objective function $\underline{F} + \overline{F}$ subject to the same constraints as in Problem (5). Suppose $t^*$ is not a non-dominated solution of Problem (5); then there exists $t \in X$ such that
$$ F(t) \prec_{LU} F(t^*), $$
which contradicts Equation (22). The proof is complete. □
Example 5.
Consider the following optimization problem:
$$ \min F(t) \quad \text{subject to} \quad g_1(t) \leq 0, \ g_2(t) \leq 0, \tag{23} $$
where
$$ F(t) = \begin{cases} [\,t^3 + t, \ 2t^3 + t\,], & \text{if } t \geq 0; \\ [\,2t, \ 1.5t\,], & \text{if } t < 0, \end{cases} $$
and $g_1(t) = -t$, $g_2(t) = t - 1$.
We observe that $F$ is not gH-differentiable at $t = 0$ and that $F$ is not LU-convex at $t = 0$, since
$$ F(0) \npreceq_{LU} \tfrac{2}{3} F\big(\tfrac{1}{4}\big) + \tfrac{1}{3} F\big(-\tfrac{1}{2}\big). $$
However, $F$ is SP-convex at $t = 0$ and $g_i$ is s-quasiconvex at $t = 0$ for $i \in H$. Furthermore, $F$ is gH-symmetrically differentiable at $t = 0$ with
$$ F^s(0) = \left[ \tfrac{5}{4}, \tfrac{3}{2} \right]. $$
Moreover, we have
$$ \nabla^s \underline{F}(0) + \nabla^s \overline{F}(0) + \sum_{i=1}^{2} \mu_i \nabla^s g_i(0) = 0; \qquad \sum_{i=1}^{2} \mu_i g_i(0) = 0, \quad \text{where } \mu = \Big( \tfrac{11}{4}, 0 \Big)^T. $$
On the other hand, t = 0 is a non-dominated solution to Problem (23), which verifies Theorem 11.
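The claims of Example 5 can likewise be checked numerically. The sketch below is ours (helper names are illustrative); it tests the convex-combination inequality that fails at t = 0, approximates F^s(0), and substitutes mu = (11/4, 0) into condition (7).

def F(t):   # IVF of Example 5 as a pair (lower, upper)
    return (t**3 + t, 2*t**3 + t) if t >= 0 else (2*t, 1.5*t)

def combo(w1, I1, w2, I2):
    # w1*I1 + w2*I2 for nonnegative weights w1, w2
    return (w1*I1[0] + w2*I2[0], w1*I1[1] + w2*I2[1])

lhs = F(2/3 * 0.25 + 1/3 * (-0.5))            # = F(0) = (0, 0)
rhs = combo(2/3, F(0.25), 1/3, F(-0.5))       # approximately (-0.15625, -0.0625)
print(lhs[0] <= rhs[0] and lhs[1] <= rhs[1])  # False -> F is not LU-convex at 0

def sym(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2*h)

Fl, Fu = (lambda t: F(t)[0]), (lambda t: F(t)[1])
dl, du = sym(Fl, 0.0), sym(Fu, 0.0)
print(round(min(dl, du), 4), round(max(dl, du), 4))   # 1.25 1.5 -> F^s(0) = [5/4, 3/2]

mu1, mu2 = 11/4, 0.0                           # g1(t) = -t, g2(t) = t - 1
print(abs(dl + du + mu1*(-1.0) + mu2*1.0) < 1e-6)     # True: condition (7) holds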
Theorem 12.
(Necessary condition). Suppose $F$ is SQ-concave at $t^*$ and $g_i$ is s-pseudoconcave at $t^*$ for $i \in H$. If $t^*$ is a non-dominated solution to Problem (5) and $g_i$ is lower semicontinuous on $M$ for all $i \notin H$, then $(t^*, \mu^*)$ satisfies condition (7) in Theorem 4 for some $\mu^* \geq 0$.
Proof. 
Let $X_1 = \{ t \in X : g_i(t) < 0 \ \text{for all} \ i \notin H \}$. The set $X_1$ is relatively open, since $g_i$ is lower semicontinuous on $M$ for each $i \notin H$. Since $t^* \in X_1$, there is some $\alpha_0 > 0$ such that, for any $y \in \mathbb{R}^n$, $t^* + \alpha y \in X_1$ whenever $0 < \alpha < \alpha_0$.
Suppose $0 < \alpha < \alpha_0$ and, for $i \in H$, $g_i^s(t^*)^T y \leq 0$; then $g_i^s(t^*)^T (\alpha y) \leq 0$ for $i \in H$. According to the s-pseudoconcavity of $g_i$ at $t^*$, we have
$$ g_i(t^* + \alpha y) \leq g_i(t^*). $$
Since $t^*$ solves Problem (5), we have $F(t^*) \preceq_{LU} F(t^* + \alpha y)$. The SQ-concavity of $F$ at $t^*$ implies that
$$ \big( \nabla^s \underline{F}(t^*) + \nabla^s \overline{F}(t^*) \big)(\alpha y) \geq 0. $$
Thus, the system
$$ g_i^s(t^*)^T y \leq 0, \qquad \big( \nabla^s \underline{F}(t^*) + \nabla^s \overline{F}(t^*) \big) y < 0 $$
has no solution $y \in \mathbb{R}^n$. Hence, by Farkas' lemma, there exist $\mu_i^* \geq 0$ such that
$$ \nabla^s \underline{F}(t^*) + \nabla^s \overline{F}(t^*) + \sum_{i=1}^m \mu_i^* \nabla^s g_i(t^*) = 0. \qquad \Box $$
Example 6.
Note that in Example 5, $t = 0$ is a non-dominated solution, $F$ is SQ-concave at $t = 0$, $g_1(t) = -t$ is s-pseudoconcave at $t = 0$, and $g_2(t) = t - 1$ is lower semicontinuous on $\mathbb{R}$.
On the other hand, for $\mu = \big( \tfrac{11}{4}, 0 \big)$, condition (7) is satisfied at $t = 0$, which verifies Theorem 12.
Theorem 13.
(Weak duality). Suppose that, for each $\mu$ such that $(t, \mu) \in Y$, the function $L(\cdot, \mu)$ is SP-convex on $X$. Then, for all $\hat{t} \in X$ and $(t, \mu) \in Y$, we have $L(t, \mu) \preceq_{LU} F(\hat{t})$.
Proof. 
Consider $\hat{t} \in X$ and $(t, \mu) \in Y$. Then we have $L_t^s(t, \mu) = 0$. Since $L(\cdot, \mu)$ is SP-convex on $X$, we obtain $L(\hat{t}, \mu) \succeq_{LU} L(t, \mu)$. Therefore,
$$ F(\hat{t}) + \sum_{i=1}^m \mu_i g_i(\hat{t}) \succeq_{LU} L(t, \mu). $$
Since $\mu_i g_i(\hat{t}) \leq 0$ for every $i$, it follows that $F(\hat{t}) \succeq_{LU} L(t, \mu)$.
The proof is complete. □
Example 7.
Continuing the problem of Example 5, $\hat{t} = 0$ is a feasible solution to Problem (23), with objective value $F(0) = [0, 0]$.
Moreover, $(t, \mu) = (1, 11, 0)$ is a feasible solution to the Wolfe dual of Problem (23), with objective value $[-9, -8]$. Furthermore, we have
$$ F(0) \succeq_{LU} L(1, 11, 0), $$
which verifies Theorem 13.
Theorem 14.
(Strong duality). Suppose $F$, $g_i$ $(i = 1, \ldots, m)$ and $t^*$ satisfy the conditions of Theorem 12. Furthermore, suppose that, for each $\mu$ such that $(t, \mu) \in Y$, $L(\cdot, \mu)$ is SP-convex on $X$. Then there exists $\mu^* \geq 0$ such that $(t^*, \mu^*)$ solves Problem (14) and $L(t^*, \mu^*) = F(t^*)$.
Proof. 
The proof is similar to the proof of Theorem 10. □
Example 8.
Continuing from Example 5, the non-dominated solution to the Wolfe dual of Problem (23) is $\big(0, \tfrac{11}{4}, 0\big)$, with objective value $L\big(0, \tfrac{11}{4}, 0\big) = [0, 0]$,
while $t = 0$ is also a non-dominated solution of Problem (23), with objective value $F(0) = [0, 0]$. Then we have
$$ L\big(0, \tfrac{11}{4}, 0\big) = F(0). $$
On the other hand, the IVF F in Example 5 satisfies the conditions of Theorem 14, which verifies Theorem 14.

7. Conclusions

The IVOP is an interesting topic with many real-world applications, and its nondifferentiable counterpart is of interest as well. In this work, we investigated gH-symmetrically differentiable IVOPs and obtained KKT conditions and duality theorems which are properly wider than those in [28]. Additionally, more appropriate concepts of generalized convexity were introduced to extend the optimality conditions in [24]. A natural development of the results presented in this paper, to be investigated in future work, is the study of saddle-point optimality criteria for the considered class of IVOPs.

Author Contributions

Funding acquisition, G.Y., W.L. and D.Z.; writing—original draft, Y.G.; writing—review and editing, G.Y., W.L., D.Z. and S.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (2018YFC1508100), Natural Science Foundation of Jiangsu Province (BK20180500), Key Projects of Educational Commission of Hubei Province of China (D20192501), and Philosophy and Social Sciences of Educational Commission of Hubei Province of China (20Y109).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work has been supported by the National Key Research and Development Program of China (2018YFC1508100), Natural Science Foundation of Jiangsu Province (BK20180500), Key Projects of Educational Commission of Hubei Province of China (D20192501), and Philosophy and Social Sciences of Educational Commission of Hubei Province of China (20Y109).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Abel, A.B. Optimal investment under uncertainty. Am. Econ. Rev. 1983, 73, 228–233.
2. Chuong, T.D. Robust Optimality and Duality in Multiobjective Optimization Problems under Data Uncertainty. SIAM J. Optim. 2020, 30, 1501–1526.
3. Mehdi, D.; Hamid, M.A.; Perrin, F. Robustness and optimality of linear quadratic controller for uncertain systems. Automatica 1996, 32, 1081–1083.
4. Engau, A.; Sigler, D. Pareto solutions in multicriteria optimization under uncertainty. Eur. J. Oper. Res. 2020, 281, 357–368.
5. Fu, Y.; Xiao, H.; Lee, L.H.; Huang, M. Stochastic optimization using grey wolf optimization with optimal computing budget allocation. Appl. Soft Comput. 2021, 103, 107154.
6. Zhang, S.; Chen, M.; Zhang, W.; Zhuang, X. Fuzzy optimization model for electric vehicle routing problem with time windows and recharging stations. Expert Syst. Appl. 2020, 145, 113123.
7. Steuer, R.E. Algorithms for linear programming problems with interval objective function coefficients. Math. Oper. Res. 1981, 6, 333–348.
8. Charnes, A.; Granot, F.; Phillips, F. An algorithm for solving interval linear programming problems. Oper. Res. 1977, 25, 688–695.
9. Despotis, D.K.; Smirlis, Y.G. Data envelopment analysis with imprecise data. Eur. J. Oper. Res. 2002, 140, 24–36.
10. Treanţǎ, S. Efficiency in uncertain variational control problems. Neural Comput. Appl. 2021, 33, 5719–5732.
11. Inuiguchi, M.; Kume, Y. Goal programming problems with interval coefficients and target intervals. Eur. J. Oper. Res. 1991, 52, 345–360.
12. Li, Y.P.; Huang, G.H.; Chen, X. An interval-valued minimax-regret analysis approach for the identification of optimal greenhouse-gas abatement strategies under uncertainty. Energy Policy 2011, 39, 4313–4324.
13. Lai, K.K.; Wang, S.Y.; Xu, J.P.; Zhu, S.S.; Fang, Y. A class of linear interval programming problems and its application to portfolio selection. IEEE Trans. Fuzzy Syst. 2002, 10, 698–704.
14. Urli, B.; Nadeau, R. An interactive method to multiobjective linear programming problems with interval coefficients. INFOR Inf. Syst. Oper. Res. 1992, 30, 127–137.
15. Oliveira, C.; Antunes, C.H. Multiple objective linear programming models with interval coefficients-an illustrated overview. Eur. J. Oper. Res. 2007, 181, 1434–1463.
16. Hukuhara, M. Intégration des applications mesurables dont la valeur est un compact convexe. Funkcial. Ekvac. 1967, 10, 205–223.
17. Markov, S. Calculus for interval functions of a real variable. Computing 1979, 22, 325–337.
18. Stefanini, L. A generalization of Hukuhara difference and division for interval and fuzzy arithmetic. Fuzzy Sets Syst. 2010, 161, 1564–1584.
19. Malinowski, M.T. Interval differential equations with a second type Hukuhara derivative. Appl. Math. Lett. 2011, 24, 2118–2123.
20. Chalco-Cano, Y.; Román-Flores, H.; Jiménez-Gamero, M.D. Generalized derivative and π-derivative for set-valued functions. Inform. Sci. 2011, 181, 2177–2188.
21. Malinowski, M.T. Interval Cauchy problem with a second type Hukuhara derivative. Inform. Sci. 2012, 213, 94–105.
22. Chalco-Cano, Y.; Maqui-Huamán, G.G.; Silva, G.N.; Jiménez-Gamero, M.D. Algebra of generalized Hukuhara differentiable interval-valued functions: Review and new properties. Fuzzy Sets Syst. 2019, 375, 53–69.
23. Stefanini, L.; Bede, B. Generalized Hukuhara differentiability of interval-valued functions and interval differential equations. Nonlinear Anal. Theory Methods Appl. 2009, 71, 1311–1328.
24. Guo, Y.; Ye, G.; Zhao, D.; Liu, W. gH-symmetrically derivative of interval-valued functions and applications in interval-valued optimization. Symmetry 2019, 11, 1203.
25. Wu, H.C. The Karush-Kuhn-Tucker optimality conditions in an optimization problem with interval-valued objective function. Eur. J. Oper. Res. 2007, 176, 46–59.
26. Wu, H.C. On interval-valued nonlinear programming problems. J. Math. Anal. Appl. 2008, 338, 299–316.
27. Wu, H.C. Wolfe duality for interval-valued optimization. J. Optim. Theory Appl. 2008, 138, 497.
28. Chalco-Cano, Y.; Lodwick, W.A.; Rufian-Lizana, A. Optimality conditions of type KKT for optimization problem with interval-valued objective function via generalized derivative. Fuzzy Optim. Decis. Mak. 2013, 12, 305–322.
29. Osuna-Gómez, R.; Chalco-Cano, Y.; Hernández-Jiménez, B.; Ruiz-Garzón, G. Optimality conditions for generalized differentiable interval-valued functions. Inf. Sci. 2015, 321, 136–146.
30. Jayswal, A.; Stancu-Minasian, I.; Ahmad, I. On sufficiency and duality for a class of interval-valued programming problems. Appl. Math. Comput. 2011, 218, 4119–4127.
31. Antczak, T. Optimality conditions and duality results for nonsmooth vector optimization problems with the multiple interval-valued objective function. Acta Math. Sci. 2017, 37, 1133–1150.
32. Dar, B.A.; Jayswal, A.; Singh, D. Optimality, duality and saddle point analysis for interval-valued nondifferentiable multiobjective fractional programming problems. Optimization 2021, 70, 1275–1305.
33. Ghosh, D.; Singh, A.; Shukla, K.K.; Manchanda, K. Extended Karush-Kuhn-Tucker condition for constrained interval optimization problems and its application in support vector machines. Inf. Sci. 2019, 504, 276–292.
34. Van, S.T.; Dinh, D.H. Duality results for interval-valued pseudoconvex optimization problem with equilibrium constraints with applications. Comput. Appl. Math. 2020, 39, 127.
35. Minch, R.A. Applications of symmetric derivatives in mathematical programming. Math. Program. 1971, 1, 307–320.
36. Farkas, J. Theorie der einfachen Ungleichungen. J. Reine Angew. Math. 1902, 124, 1–27.
37. Slater, M. Lagrange Multipliers Revisited: A Contribution to Nonlinear Programming; Cowles Commission Discussion Paper No. 80, Mathematics 403; Cowles Foundation for Research in Economics: New Haven, CT, USA, 1950.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Guo, Y.; Ye, G.; Liu, W.; Zhao, D.; Treanţǎ, S. Optimality Conditions and Duality for a Class of Generalized Convex Interval-Valued Optimization Problems. Mathematics 2021, 9, 2979. https://0-doi-org.brum.beds.ac.uk/10.3390/math9222979
