Efficient Solutions of Interval Programming Problems with Inexact Parameters and Second Order Cone Constraints

1 Department of Mathematics, Faculty of Mathematical Sciences and Computer, Shahid Chamran University of Ahvaz, 6135743136 Ahvaz, Iran
2 Faculty of Mathematical Sciences, Sharif University of Technology, 11365-11155 Tehran, Iran
* Authors to whom correspondence should be addressed.
Submission received: 1 October 2018 / Revised: 5 November 2018 / Accepted: 13 November 2018 / Published: 20 November 2018

Abstract

In this article, a methodology is developed to solve interval and fractional interval programming problems with second order cone constraints by converting them into non-interval form, where both the objective function and the constraints are interval valued functions. We investigate the parametric and non-parametric forms of the interval valued functions along with their convexity properties. Two approaches are developed to obtain efficient and properly efficient solutions. Furthermore, the efficient (Pareto optimal) solutions of fractional and non-fractional programming problems over $\mathbb{R}^n_+\setminus\{0\}$ are also discussed. The main idea of the present article is to introduce a new concept of efficiency, called the efficient space, induced by the lower and upper bounds of the respective intervals of the objective function, which is illustrated in several figures. Finally, some numerical examples are worked through to illustrate the methodology and affirm the validity of the obtained results.

1. Introduction

We consider solving fractional interval programming problems with second order cone constraints, where both the objective function and the constraints are interval valued functions. There are several approaches in the literature to solve such problems. Nonlinear interval optimization problems have been studied in several directions by many researchers during the past few decades [1,2,3,4]. Most of the considered models were quadratic programming problems with interval parameters. Bhurjee and Panda [1] developed a methodology for interval valued convex quadratic programming problems and characterized how a solution of a general interval optimization problem can exist.
In the past few decades, fractional programming problems have also attracted the interest of many researchers. These problems have applications in the real physical world, such as finance, production planning, electronics, etc. Fractional programming is used for modelling real life problems involving one or more objectives such as actual cost/standard cost, inventory/sales and profit/cost. There are different algorithms to determine solutions of particular fractional programming problems. For example, Charnes and Cooper [5] converted a linear fractional program (LFP) to a linear program (LP) by a variable transformation technique. Tantawy [6] proposed an iterative method based on a conjugate gradient projection method. Dinkelbach [7] considered the same objective over a convex feasible set and solved the problem using a sequence of nonlinear convex programming problems.
On the other hand, SOCP (second order cone programming) problems are convex by construction. Problems such as linear programs, convex quadratic programs and quadratically constrained convex quadratic programs can easily be converted to SOCP problems; for several other types of problems not falling into these three categories, see [8,9].
Lobo et al. [9] discussed several applications of SOCP in engineering. Nesterov and Nemirovski [10] and Lobo et al. [9,11] showed that several kinds of problems can be formulated as SOCP problems, such as filter design, truss design, grasping force optimization in robotics, etc. In a pioneering work, Nesterov and Nemirovski [10] applied the concept of a self-concordant barrier to SOCP problems and obtained an iteration complexity of $O(\sqrt{m})$ for problems with $m$ second order cone inequalities.
Nesterov and Todd [12,13] were the first to investigate primal-dual interior point methods for SOCP problems, casting their results as optimization over self-scaled cones, which contain the class of second order cones as a special case. Alizadeh and Goldfarb [8] considered and overviewed a large class of SOCP problems. They showed that many optimization problems, such as linear programming (LP), quadratic programming (QP), quadratically constrained quadratic programming (QCQP) and other types of optimization problems, can be rewritten as SOCP problems, and they demonstrated how to convert different types of constraints into SOC inequalities. Furthermore, they described an algebraic foundation of SOCs and showed how robust least squares and robust linear programming problems can be converted to SOCPs. The authors of [8] also discussed duality and complementary slackness for SOCP, with notions of primal and dual non-degeneracy and strict complementarity, along with the logarithmic barrier function and primal-dual path following interior point methods (IPMs) for SOCPs.
Kim and Kojima [14] showed that semi-definite programming (SDP) and SOCP relaxations provide exact optimal solutions for a class of non-convex quadratic optimization problems. Moreover, SDP problems can in fact be formulated as SOCP problems and solved as such. SOCP formulations have a number of advantages: adding an SOC constraint sometimes permits negative decision variables, which usually does not occur in LP problems unless the variables are free in sign, and often yields a much better solution, even though the dimension and convexity remain the same.
In our work here, we establish two results concerning efficient and properly efficient solutions of interval programming problems with a second order cone constraint. The remainder of our work is organized as follows. In Section 2, the definitions and notations are provided. The interval valued functions in parametric and non-parametric forms, along with their convexity properties, are discussed in Section 3. In Section 4, we explain the existence of solutions for interval valued optimization problems and establish certain results concerning the efficient and properly efficient solutions of interval problems involving SOC constraints. We also investigate efficient solutions of interval fractional and non-fractional programming problems over $\mathbb{R}^n_+\setminus\{0\}$. In Section 5, some numerical examples are worked through to verify the results on efficient and properly efficient solutions in the MATLAB software environment. We conclude in Section 6.

2. Definitions and Notations

Let $I(\mathbb{R})$ denote the class of all closed real intervals. A closed interval is written as $M=[\underline{m},\overline{m}]$, where $\underline{m}$ and $\overline{m}$ are, respectively, the lower and upper bounds of $M$. For closed intervals $M$, $N$ and $k\in\mathbb{R}$, we have:
(i) $M+N=\{m+n : m\in M,\ n\in N\}=[\underline{m}+\underline{n},\ \overline{m}+\overline{n}]$,
(ii) $-M=\{-m : m\in M\}=[-\overline{m},\ -\underline{m}]$,
(iii) $kM=\{km : m\in M\}$.
Definition 1.
If $M=[\underline{m},\overline{m}]$ and $N=[\underline{n},\overline{n}]$ are bounded, real valued intervals, then the multiplication of $M$ and $N$ is defined to be

$MN=\left[\min\{\underline{m}\underline{n},\underline{m}\overline{n},\overline{m}\underline{n},\overline{m}\overline{n}\},\ \max\{\underline{m}\underline{n},\underline{m}\overline{n},\overline{m}\underline{n},\overline{m}\overline{n}\}\right].$
Let $F(x)=F(x_1,\ldots,x_n)$ be a closed interval in $\mathbb{R}$ for each $x\in\mathbb{R}^n$. The interval valued function $F$ may be represented as $F(x)=[\underline{F}(x),\overline{F}(x)]$, where $\underline{F}$ and $\overline{F}$ are real valued functions defined on $\mathbb{R}^n$ satisfying $\underline{F}(x)\le\overline{F}(x)$ for every $x\in\mathbb{R}^n$. We say that the interval valued function $F$ is differentiable at $x_0\in\mathbb{R}^n$ if and only if the real valued functions $\underline{F}$ and $\overline{F}$ are differentiable at $x_0$. For more details, see [2].
Let $M=[\underline{m},\overline{m}]$ and $N=[\underline{n},\overline{n}]$ be two closed intervals in $\mathbb{R}$, and let the relation ⪯ be a partial ordering on $I(\mathbb{R})$. We write $M\preceq N$ if and only if $\underline{m}\le\underline{n}$ and $\overline{m}\le\overline{n}$. We also write $M\prec N$ if and only if $M\preceq N$ and $M\neq N$, meaning that $M$ is inferior to $N$, or $N$ is superior to $M$.
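These interval operations and the partial order ⪯ translate directly into code. The following minimal Python class is an illustrative sketch (not part of the paper's method) implementing the arithmetic above, Definition 1, and the ordering:

```python
class Interval:
    """Closed interval [lo, hi] with the arithmetic and ordering of Section 2."""

    def __init__(self, lo, hi):
        assert lo <= hi, "lower bound must not exceed upper bound"
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # M + N = [m_lo + n_lo, m_hi + n_hi]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __neg__(self):
        # -M = {-m : m in M} = [-m_hi, -m_lo]
        return Interval(-self.hi, -self.lo)

    def __mul__(self, other):
        # Definition 1: span of the four endpoint products
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def scale(self, k):
        # kM = {k*m : m in M}; endpoints swap when k < 0
        return Interval(min(k * self.lo, k * self.hi),
                        max(k * self.lo, k * self.hi))

    def preceq(self, other):
        # M ⪯ N  iff  m_lo <= n_lo and m_hi <= n_hi
        return self.lo <= other.lo and self.hi <= other.hi

    def prec(self, other):
        # M ≺ N  iff  M ⪯ N and M != N
        return self.preceq(other) and (self.lo, self.hi) != (other.lo, other.hi)
```

For example, $[-2,3]\cdot[1,4]=[-8,12]$, since the four endpoint products are $-2,-8,3,12$.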
A second order cone is defined as follows:
$Q_n=\{x=(x_1;\bar{x})\in\mathbb{R}^n : x_1\ge\|\bar{x}\|\},$

where $\|\cdot\|$ is the standard Euclidean norm and $n$ is the dimension of $Q_n$; the subscript $n$ is usually dropped. We refer to the inequality $x\succeq_Q 0$ as the second order cone inequality.
For the cone $Q$, let

$\mathrm{bd}\,Q=\{x\in Q : x_1=\|\bar{x}\|\ \text{and}\ x\neq 0\}$

denote the boundary of $Q$ without the origin $0$. In addition, let

$\mathrm{int}\,Q=\{x\in Q : x_1>\|\bar{x}\|\}$

denote the interior of $Q$.
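The membership tests above can be sketched in a few lines of Python; the helper names here are illustrative only:

```python
import math

def in_soc(x):
    """True iff x = (x1; xbar) lies in the second order cone: x1 >= ||xbar||."""
    x1, xbar = x[0], x[1:]
    return x1 >= math.hypot(*xbar)

def on_boundary(x, tol=1e-12):
    """True iff x is in bd Q (origin excluded): x1 == ||xbar|| and x != 0."""
    x1, xbar = x[0], x[1:]
    return abs(x1 - math.hypot(*xbar)) <= tol and any(abs(v) > tol for v in x)

def in_interior(x):
    """True iff x is in int Q: x1 > ||xbar||."""
    x1, xbar = x[0], x[1:]
    return x1 > math.hypot(*xbar)
```

For instance, $(5;3,4)$ lies on the boundary of $Q_3$ since $5=\|(3,4)\|$, while $(6;3,4)$ lies in the interior.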
We now present an overview of the SOCP problem. A standard form of an SOCP problem is given by

$(P_1):\ \min\ c_1^T x_1+\cdots+c_m^T x_m \quad \text{s.t.}\quad A_1 x_1+\cdots+A_m x_m=b,\quad x_i\in Q^{n_i},\ i=1,\ldots,m,$

with its dual being

$(D_1):\ \max\ b^T y \quad \text{s.t.}\quad A_i^T y+z_i=c_i,\quad z_i\in Q^{n_i},\quad i=1,\ldots,m,$

where $A_i\in\mathbb{R}^{r\times n_i}$, $b\in\mathbb{R}^r$, $m$ is the number of blocks, $n=\sum_{i=1}^m n_i$ is the dimension of the problem, $c_i,z_i,x_i\in\mathbb{R}^{n_i}$ and $y\in\mathbb{R}^r$ [8].
We make the following assumptions regarding the primal-dual pair ( P 1 ) and ( D 1 ) [8].
Assumption 1.
The matrix A = ( A 1 , , A m ) has r linearly independent rows.
Assumption 2.
Both the primal and the dual problems are strictly feasible; that is, there exists a primal feasible $x=(x_1;\ldots;x_m)$ with $x_i\succ_Q 0$ for $i=1,\ldots,m$, and there exist dual feasible $y$ and $z=(z_1;\ldots;z_m)$ with $z_i\succ_Q 0$ for $i=1,\ldots,m$.
Remark 1.
If problem ( P 1 ) has only one second order cone constraint, then the standard SOCP problem can be written as
$(P_2):\ \min\ c^T x \quad \text{s.t.}\quad Ax=b,\quad x\in Q^n,$
with its corresponding dual as
$(D_2):\ \max\ b^T y \quad \text{s.t.}\quad A^T y+z=c,\quad z\in Q^n.$
Software packages that can be applied to SOCP and mixed SOCP problems have developed rapidly over time. SeDuMi [11] is a widely available package based on the Nesterov–Todd method.

3. Interval Valued Function

The definition of an interval function in terms of functions of one or more intervals is given in [2,3,4]. Moore [2] and Walster and Hansen [3] defined an interval function as a function of one or more interval arguments onto an interval. Wu [4] considered the interval valued function $F:\mathbb{R}^n\to I(\mathbb{R})$ as $F(x)=[\underline{F}(x),\overline{F}(x)]$, where $\underline{F},\overline{F}:\mathbb{R}^n\to\mathbb{R}$ and $\underline{F}(x)\le\overline{F}(x)$ for all $x\in\mathbb{R}^n$.
These functions may be defined on one or more interval arguments, or may be interval extensions of real valued functions. The interval valued function in parametric form, introduced in [4], is as follows. For $m(t)\in M_\nu^k$, let $f_{m(t)}:\mathbb{R}^n\to\mathbb{R}$. Then, for a given interval vector $M_\nu^k$, an interval valued function $F_{M_\nu^k}:\mathbb{R}^n\to I(\mathbb{R})$ is defined as

$F_{M_\nu^k}(x)=\{f_{m(t)}(x)\ |\ f_{m(t)}:\mathbb{R}^n\to\mathbb{R},\ m(t)\in M_\nu^k\}.$

For every fixed $x$, if $f_{m(t)}(x)$ is continuous in $t$, then $\min_{t\in[0,1]^k}f_{m(t)}(x)$ and $\max_{t\in[0,1]^k}f_{m(t)}(x)$ exist. Then,

$F_{M_\nu^k}(x)=\left[\min_{t\in[0,1]^k}f_{m(t)}(x),\ \max_{t\in[0,1]^k}f_{m(t)}(x)\right].$

In particular, if $f_{m(t)}(x)$ is linear in $t$, then these minimum and maximum exist. If $f_{m(t)}(x)$ is monotonically increasing in $t$, then $F_{M_\nu^k}(x)=[f_{m(0)}(x),f_{m(1)}(x)]$.

3.1. Interval Valued Convex Function

Convexity of an interval valued function is an important property that guarantees the existence of a solution of an interval optimization problem.
Definition 2.
[4] An interval valued function $F_{M_\nu^k}:\mathbb{R}^n\to I(\mathbb{R})$ is said to be convex with respect to ⪯ on a convex set $N\subseteq\mathbb{R}^n$ if, for every $x_1,x_2\in N$ and $0\le\lambda\le1$, we have

$F_{M_\nu^k}(\lambda x_1+(1-\lambda)x_2)\ \preceq\ \lambda F_{M_\nu^k}(x_1)+(1-\lambda)F_{M_\nu^k}(x_2).$
Remark 2.
From Definition 2, one may observe that $F_{M_\nu^k}$ being convex with respect to ⪯ means that

$f_{m(t)}(\lambda x_1+(1-\lambda)x_2)\ \le\ \lambda f_{m(t)}(x_1)+(1-\lambda)f_{m(t)}(x_2)$

for all $t\in[0,1]^k$; that is, $F_{M_\nu^k}(x)$ is convex with respect to ⪯ if and only if $f_{m(t)}(x)$ is a convex function on $N$ for every $t$.

3.2. Interval Valued Function in the Parametric Form

Let $\circledast\in\{+,-,\cdot,/\}$ denote a binary operation on the set of real numbers. The binary operation $\circledast$ between two intervals $M=[\underline{m},\overline{m}]$ and $N=[\underline{n},\overline{n}]$ in $I(\mathbb{R})$, denoted by $M\circledast N$, is the set $\{m\circledast n\ |\ m\in M,\ n\in N\}$. In the case of division $M/N$, it is required that $0\notin N$. An interval may be expressed in parametric form in several disciplines: any point in $M$ may be written as $m(t)=\underline{m}+t(\overline{m}-\underline{m})$, $t\in[0,1]$. Throughout our work, we consider the specific parametric representation of an interval $M=[\underline{m},\overline{m}]=\{m(t)\ |\ t\in[0,1]\}$. The algebraic operations over classical intervals can be represented in terms of the lower and upper bounds of the intervals [1]. The parametric form of interval operations can be represented as follows:

$M\circledast N=\{m(t_1)\circledast n(t_2)\ |\ t_1,t_2\in[0,1]\}.$

In addition, $M_\nu^k\in(I(\mathbb{R}))^k$, $M_\nu^k=(M_1,M_2,\ldots,M_k)^T$, can be expressed in terms of parameters as

$M_\nu^k=\{m(t)\ |\ m(t)=(m_1(t_1),m_2(t_2),\ldots,m_k(t_k))^T\},$

where $m_j(t_j)\in M_j$, $m_j(0)=m_j^l$, $m_j(1)=m_j^u$, $t=(t_1,t_2,\ldots,t_k)^T$ and $0\le t_j\le1$.
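The parametric representation $m(t)=\underline{m}+t(\overline{m}-\underline{m})$ can be sketched as follows; the helper names are illustrative only:

```python
def param(lo, hi, t):
    """Point m(t) = lo + t*(hi - lo) of the interval [lo, hi], for t in [0, 1]."""
    return lo + t * (hi - lo)

def interval_vector_point(bounds, ts):
    """m(t) = (m_1(t_1), ..., m_k(t_k)) for an interval vector given as endpoint pairs."""
    return [param(lo, hi, t) for (lo, hi), t in zip(bounds, ts)]
```

With bounds [(-10, -6), (2, 3)] (the data of Example 3 later in the paper), $t_j=0$ recovers the lower endpoints and $t_j=1$ the upper ones.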

4. Existence of Solutions

In this section, we consider the interval optimization problem as follows:
$(P_3):\ \min\ F_{M_\nu^k}(x)\quad \text{s.t.}\quad G_j^{N_\nu^{m_j}}(x)\preceq B_j,\ j=1,2,\ldots,p,$

where $B_j\in I(\mathbb{R})$, and the interval valued functions $F_{M_\nu^k},G_j^{N_\nu^{m_j}}:\mathbb{R}^n\to I(\mathbb{R})$ are the sets $F_{M_\nu^k}(x)=\{f_{m(t)}(x)\ |\ f_{m(t)}:\mathbb{R}^n\to\mathbb{R},\ m(t)\in M_\nu^k\}$ and $G_j^{N_\nu^{m_j}}(x)=\{g_j^{n(t_j)}(x)\ |\ g_j^{n(t_j)}:\mathbb{R}^n\to\mathbb{R},\ n(t_j)\in N_\nu^{m_j}\}$.
The partial ordering ⪯ is discussed in Section 2, and the feasible space of $(P_3)$ is expressed as the set

$\chi=\{x\in\mathbb{R}^n\ |\ G_j^{N_\nu^{m_j}}(x)\preceq B_j,\ j=1,2,\ldots,p\}.$
Definition 3.
$x^*\in\chi$ is said to be an efficient solution of $(P_3)$ if there does not exist any $x\in\chi$ with $f_{m(t)}(x)\le f_{m(t)}(x^*)$ for all $t\in[0,1]^k$ and $F_{M_\nu^k}(x)\neq F_{M_\nu^k}(x^*)$ [4].
Definition 4.
$x^*\in\chi$ is said to be a properly efficient solution of $(P_3)$ if $x^*$ is an efficient solution and there exists a real number $\mu>0$ such that, for each $t\in[0,1]^k$ and each $x\in\chi$ with $f_{m(t)}(x)<f_{m(t)}(x^*)$, there exists at least one $t'\in[0,1]^k$, $t'\neq t$, with $f_{m(t')}(x)>f_{m(t')}(x^*)$ and

$\dfrac{f_{m(t)}(x^*)-f_{m(t)}(x)}{f_{m(t')}(x)-f_{m(t')}(x^*)}\ \le\ \mu.$
Consider the following optimization problem with respect to a weight function $\omega:[0,1]^k\to\mathbb{R}_+$:

$(P_4):\ \min_{x\in\chi}\ \int_0^1\int_0^1\cdots\int_0^1 \omega(t)\,f_{m(t)}(x)\,dt_1\,dt_2\cdots dt_k,$

where $\omega(t)=\omega(t_1,t_2,\ldots,t_k)$. Here, $t_1,t_2,\ldots,t_k$ are mutually independent and each $t_i$ varies from 0 to 1. Thus, $\int_0^1\cdots\int_0^1 \omega(t)\,f_{m(t)}(x)\,dt_1\cdots dt_k$ is a function of $x$ only, say $h(x)$. Hence, $(P_4)$, written as $\min_{x\in\chi}h(x)$, is a general nonlinear programming problem free from interval uncertainty and can be solved by a standard nonlinear programming technique. The following theorem establishes the relationship between solutions of the transformed problem $(P_4)$ and the original problem $(P_3)$ [4].
Theorem 1.
If x χ is an optimal solution of ( P 4 ) , then x is a properly efficient solution of ( P 3 ) .
Proof. 
See [4].  ☐
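The scalarization behind $(P_4)$ can be sketched numerically: for a given $x$, approximate the weighted integral $h(x)$ by a tensor-product midpoint rule over $[0,1]^k$. The functions `f` and `w` below are hypothetical stand-ins, not the paper's data:

```python
import itertools

def h(x, f, w, k, n=40):
    """Approximate h(x) = integral over [0,1]^k of w(t) * f_t(x) dt
    by a tensor-product midpoint rule with n nodes per axis."""
    pts = [(i + 0.5) / n for i in range(n)]  # midpoints of n equal subintervals
    total = 0.0
    for t in itertools.product(pts, repeat=k):
        total += w(t) * f(t, x)
    return total / n ** k

# Hypothetical stand-ins: w ≡ 1 and an objective linear in t, for which the
# midpoint rule is exact: h(x) = 1.5*x1 + 0.5*x2.
w = lambda t: 1.0
f = lambda t, x: (1 + t[0]) * x[0] + t[1] * x[1]
```

Once $h$ is available, $(P_4)$ is an ordinary smooth minimization of $h$ over $\chi$ and can be handed to any NLP solver.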

4.1. Alternative Method for Solving an Interval Problem with an SOC Constraint

Here, we consider a problem by applying the order relation “⪯” for the constraints as follows:
$(P_5):\ \min\ F(x)=[\underline{F}(x),\overline{F}(x)]\quad \text{s.t.}\quad G_i(x)\preceq[b_i^L,b_i^U],\ i=1,2,\ldots,m,\quad x\in Q^n,$

where the $G_i(x)=[\underline{G_i}(x),\overline{G_i}(x)]$ are interval valued constraints. A point $x=(x_1,\ldots,x_n)$ is a feasible solution of problem $(P_5)$ if $G_i(x)\preceq[b_i^L,b_i^U]$ for $i=1,\ldots,m$, or equivalently, $\underline{G_i}(x)\le b_i^L$ and $\overline{G_i}(x)\le b_i^U$ for $i=1,2,\ldots,m$. Then, the auxiliary interval valued optimization problem $(P_5)$ can be rewritten as follows:

$(P_6):\ \min\ F(x)=[\underline{F}(x),\overline{F}(x)]\quad \text{s.t.}\quad \underline{G_i}(x)\le b_i^L,\ \overline{G_i}(x)\le b_i^U,\ i=1,2,\ldots,m,\quad x\in Q^n.$
It is obvious that the feasible regions of problems $(P_5)$ and $(P_6)$ are the same and, since their objective functions are also the same, the two problems have the same solutions. The interval nature of problem $(P_6)$ gives rise to a very important notion called the efficient space, a new concept from the optimization point of view.
Therefore, the interval-valued optimization problem ( P 6 ) is easily converted to a common form as below:
$(P_7):\ \min\ F(x)\quad \text{s.t.}\quad g_i(x)\le0,\ h_i(x)\le0,\ i=1,2,\ldots,m,\quad x\in Q^n,$

where $F:\mathbb{R}^n\to I(\mathbb{R})$ is an interval valued function, and $g_i:\mathbb{R}^n\to\mathbb{R}$ and $h_i:\mathbb{R}^n\to\mathbb{R}$, $i=1,\ldots,m$, are real valued functions. Let $M=[\underline{m},\overline{m}]$ and $N=[\underline{n},\overline{n}]$ be two closed intervals in $\mathbb{R}$. We write $M\preceq N$ if and only if $\underline{m}\le\underline{n}$ and $\overline{m}\le\overline{n}$, and we write $M\prec N$ iff $M\preceq N$ and $M\neq N$. Equivalently, $M\prec N$ iff

$\underline{m}<\underline{n},\ \overline{m}\le\overline{n}\quad\text{or}\quad \underline{m}\le\underline{n},\ \overline{m}<\overline{n}\quad\text{or}\quad \underline{m}<\underline{n},\ \overline{m}<\overline{n}. \qquad (1)$
We need to interpret the meaning of minimization in $(P_7)$. Since ⪯ is a partial ordering, not a total ordering, on $I(\mathbb{R})$, we may follow a solution concept (efficient solution) similar to that used in multi-objective programming to interpret minimization in the primal problem $(P_7)$. For the minimization problem $(P_7)$, we say that a feasible solution $\bar{x}$ is better than (dominates) a feasible solution $x^*$ if $F(\bar{x})\prec F(x^*)$. Therefore, we propose the following definition.
Definition 5.
Let $x^*$ be a feasible solution of the primal problem $(P_7)$. We say that $x^*$ is an efficient solution of $(P_7)$ if there exists no $\bar{x}\in X$ such that $F(\bar{x})\prec F(x^*)$. In this case, $F(x^*)$ is called an efficient objective value of $F$.
We denote the set of all efficient objective values of problem $(P_7)$ by $\mathrm{Min}(F,X)$. More precisely, we write

$\mathrm{Min}(F,X)=\{F(x^*)\ |\ x^*\ \text{is an efficient solution of}\ (P_7)\}.$

A real number $m$ can be represented as the interval $[m,m]$. For a closed interval $M=[\underline{m},\overline{m}]$, by $M+m$ we mean $M+[m,m]=[\underline{m}+m,\overline{m}+m]$.
Now, consider the following optimization problem:
$(P_8):\ \min\ f(x)=\underline{F}(x)+\overline{F}(x)\quad \text{s.t.}\quad g_i(x)\le0,\ h_i(x)\le0,\ i=1,2,\ldots,m,\quad x\in Q^n.$
Obviously, if $x^*$ is an optimal solution of problem $(P_8)$, then $x^*$ is a nondominated solution of problem $(P_7)$; see [4]. We now focus on the two results given as Theorems 2 and 3 below, by which the optimal solutions of problems $(P_9)$ and $(P_{10})$ are indeed efficient solutions of problem $(P_7)$:
$(P_9):\ \min\ f(x)=\underline{F}(x)\,\overline{F}(x)\quad \text{s.t.}\quad g_i(x)\le0,\ h_i(x)\le0,\ i=1,2,\ldots,m,\quad \underline{F}(x)>0,\quad x\in Q^n,$

$(P_{10}):\ \min\ f(x)=\alpha\underline{F}(x)+\beta\overline{F}(x)\quad \text{s.t.}\quad g_i(x)\le0,\ h_i(x)\le0,\ i=1,2,\ldots,m,\quad x\in Q^n,$
where α and β are positive scalars.
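The three scalarizations $(P_8)$–$(P_{10})$ differ only in how the bound functions are combined. The following sketch builds each scalar objective from user-supplied bound functions `F_lo` and `F_hi` (hypothetical stand-ins), with $(P_9)$ read as the product $\underline{F}\,\overline{F}$, consistent with the proof of Theorem 2:

```python
def p8_objective(F_lo, F_hi):
    """(P8): f(x) = F_lo(x) + F_hi(x)."""
    return lambda x: F_lo(x) + F_hi(x)

def p9_objective(F_lo, F_hi):
    """(P9): f(x) = F_lo(x) * F_hi(x); meaningful only where F_lo(x) > 0."""
    return lambda x: F_lo(x) * F_hi(x)

def p10_objective(F_lo, F_hi, alpha, beta):
    """(P10): f(x) = alpha*F_lo(x) + beta*F_hi(x), with alpha, beta > 0."""
    assert alpha > 0 and beta > 0
    return lambda x: alpha * F_lo(x) + beta * F_hi(x)
```

Each returned function can then be handed, together with the constraints $g_i(x)\le0$, $h_i(x)\le0$ and $x\in Q^n$, to any smooth NLP solver (the paper uses MATLAB's fmincon).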
Theorem 2.
If x is an optimal solution of problem ( P 9 ) , then x is an efficient solution of problem ( P 7 ) .
Proof. 
We see that problems $(P_9)$ and $(P_7)$ have the same feasible region. Suppose that $x^*$ is not an efficient solution. Then, there exists a feasible solution $\bar{x}$ such that $F(\bar{x})\prec F(x^*)$. By (1), this means that

$\underline{F}(\bar{x})<\underline{F}(x^*),\ \overline{F}(\bar{x})\le\overline{F}(x^*),\quad\text{or}\quad \underline{F}(\bar{x})\le\underline{F}(x^*),\ \overline{F}(\bar{x})<\overline{F}(x^*),\quad\text{or}\quad \underline{F}(\bar{x})<\underline{F}(x^*),\ \overline{F}(\bar{x})<\overline{F}(x^*).$

Since $\underline{F}>0$ on the feasible region of $(P_9)$ (and hence $\overline{F}>0$ as well), in each case $f(\bar{x})<f(x^*)$, which contradicts the fact that $x^*$ is an optimal solution of problem $(P_9)$. This completes the proof. ☐
Theorem 3.
If x is an optimal solution of problem ( P 10 ) , then x is an efficient solution of problem ( P 7 ) .
Proof. 
We see that problems $(P_{10})$ and $(P_7)$ have the same feasible region. Suppose that $x^*$ is not an efficient solution. Then, there exists a feasible solution $\bar{x}$ such that $F(\bar{x})\prec F(x^*)$. By (1), this means that

$\underline{F}(\bar{x})<\underline{F}(x^*),\ \overline{F}(\bar{x})\le\overline{F}(x^*),\quad\text{or}\quad \underline{F}(\bar{x})\le\underline{F}(x^*),\ \overline{F}(\bar{x})<\overline{F}(x^*),\quad\text{or}\quad \underline{F}(\bar{x})<\underline{F}(x^*),\ \overline{F}(\bar{x})<\overline{F}(x^*).$

Multiplying by $\alpha>0$ and $\beta>0$, we then have

$\alpha\underline{F}(\bar{x})<\alpha\underline{F}(x^*),\ \beta\overline{F}(\bar{x})\le\beta\overline{F}(x^*),\quad\text{or}\quad \alpha\underline{F}(\bar{x})\le\alpha\underline{F}(x^*),\ \beta\overline{F}(\bar{x})<\beta\overline{F}(x^*),\quad\text{or}\quad \alpha\underline{F}(\bar{x})<\alpha\underline{F}(x^*),\ \beta\overline{F}(\bar{x})<\beta\overline{F}(x^*).$

Adding the two inequalities in each case shows $f(\bar{x})<f(x^*)$, which contradicts the fact that $x^*$ is an optimal solution of problem $(P_{10})$. This completes the proof. ☐

4.2. Interval Valued Convex Linear Programming Problem with SOC Constraint

An interval valued optimization problem $(P_3)$ is said to be an interval valued convex programming problem if $F_{M_\nu^k}$ and $G_j^{N_\nu^{m_j}}$ are convex functions with respect to ⪯.
If $(P_3)$ is an interval valued convex programming problem, then $(P_4)$ is a convex programming problem.
A general interval linear programming problem (P3) has the following form:
$(P_{11}):\ \min\ M_\nu^k x\quad \text{s.t.}\quad A^m x\preceq B_\nu^p,\quad x\in Q^n,$

where $M_\nu^k\in(I(\mathbb{R}))^k$, $B_\nu^p\in(I(\mathbb{R}))^p$, $A^m=(A_{ij})_{p\times n}$ is an interval valued matrix with $A_{ij}=[a_{ij}^l,a_{ij}^u]$, and $M_\nu^k x=\sum_{j=1}^k M_j x_j$ is the product of a real vector $x\in\mathbb{R}^k$ and an interval vector $M_\nu^k\in(I(\mathbb{R}))^k$.

5. Numerical Results

In this section, we consider three examples of various dimensions to illustrate the obtained results. To solve the problems arising from both theorems, we use the fmincon routine of MATLAB. Notations are given in Table 1, and the results are summarized in Table 2, Table 3 and Table 4 and the corresponding diagrams. We generate problems of different dimensions and report the required CPU times. All computations were performed with MATLAB R2015a (8.5) on a laptop with an Intel(R) Core i3 2.53 GHz CPU and 5.00 GB of RAM.
We present computational results on Examples 1 and 2 to compare the outcomes of Theorems 2 and 3. Different diagrams and tables illustrate the advantages of Theorems 2 and 3, showing that any solution of problem $(P_9)$ or $(P_{10})$ is an efficient solution of problem $(P_7)$. In addition, the efficient space for different pairs $(\alpha,\beta)$ is shown, with the nonzero elements of $\alpha$ generated randomly in the interval $(0,1)$ and the elements of $\beta$ taken in $(0,n)$ with step length 1.
Example 1.
Consider the interval programming problem with SOC constraint as follows:
$(P_{12}):\ \min\ \left[\dfrac{7x_1+x_2}{3x_1+4x_2+36},\ \dfrac{7x_1+x_2}{3x_1+4x_2+12}\right]\quad \text{s.t.}\quad x_1+x_2\le7,\quad 4x_1-9x_2\le3,\quad x_1+2x_2\ge1.5,\quad x\in Q_2.$
Example 2.
Consider the interval programming problem with SOC constraint as follows:
$(P_{13}):\ \min\ \left[\dfrac{x_1+3x_2+1.5x_3+3.5}{x_1+x_2+2x_3+1},\ \dfrac{2x_1+7x_2+2.5x_3+4}{0.5x_1+0.75x_2+(7/8)x_3+0.5}\right]\quad \text{s.t.}\quad x_1+2x_2-x_3\le6,\quad 2x_1+3x_2+x_3\le8,\quad x_1+x_2+x_3\le13,\quad x\in Q_3.$
Example 3.
Consider the interval programming problem with SOC constraint as follows:
$(P_{14}):\ \min\ [-10,-6]x_1+[2,3]x_2\quad \text{s.t.}\quad [1,2]x_1+3x_2\preceq[1,10],\quad [2,8]x_1+[4,6]x_2\preceq[4,6],\quad x\in Q_2.$

By our described methodology, we obtain a properly efficient solution. Here, $t=(t_1,t_2)^T$, $t_j\in[0,1]$, $f_{m(t)}(x)=(-10+4t_1)x_1+(2+t_2)x_2$ and $\chi=\{(x_1,x_2)\ |\ 2x_1+3x_2\le1,\ 8x_1+6x_2\le4\}$.
For a weight function $\omega(t)$, the corresponding problem $(P_4)$ becomes:

$(P_{15}):\ \min_{x\in\chi}\ \int_0^1\int_0^1 \omega(t)\left[(-10+4t_1)x_1+(2+t_2)x_2\right]dt_1\,dt_2.$
This problem is an SOCP problem and can be solved by an interior point method.
If $\omega(t)=2t_1$, then the properly efficient solution is $x^*=(0.2,0.2)$, the optimal interval is $[-1.6,-0.6]$, and the efficient solution obtained by Theorem 2 is also $x^*=(0.2,0.2)$.
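These reported values admit a quick sanity check. Assuming the parametric objective $f_{m(t)}(x)=(-10+4t_1)x_1+(2+t_2)x_2$ above, the following sketch evaluates the interval objective at $x^*=(0.2,0.2)$ and verifies feasibility:

```python
# Evaluating Example 3 at the reported solution x* = (0.2, 0.2).
x = (0.2, 0.2)

# Endpoints of the interval objective: t = (0, 0) and t = (1, 1).
F_lo = (-10 + 4 * 0) * x[0] + (2 + 0) * x[1]   # lower bound, approx. -1.6
F_hi = (-10 + 4 * 1) * x[0] + (2 + 1) * x[1]   # upper bound, approx. -0.6

# Feasibility in chi (small slack for floating point) and in the cone Q_2.
assert 2 * x[0] + 3 * x[1] <= 1 + 1e-12
assert 8 * x[0] + 6 * x[1] <= 4 + 1e-12
assert x[0] >= abs(x[1])   # x1 >= ||x_bar|| in Q_2
```

The computed endpoints recover the optimal interval $[-1.6,-0.6]$ stated above.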
Table 2 shows the objective function values obtained using Theorems 2 and 3.
The results for various values of $n$ are summarized in Table 3 and Table 4. We observe that the CPU times for problems with SOC constraints are lower than those for problems without SOC constraints.
Efficient spaces for Example 3 with the SOC constraint for different values of $n$ are given in Figure 1, Figure 2 and Figure 3, and without the SOC constraint in Figure 4, Figure 5 and Figure 6; the efficient space is a new concept in the efficiency literature.

6. Conclusions

We have investigated an important class of SOCP problems, namely interval and interval fractional programming problems with second order cone constraints. To solve such problems, we established two results concerning the efficient and properly efficient solutions of second order cone constrained interval programming problems. Furthermore, a new notion of efficiency, called the efficient space, was proposed, arising from the interval form of the objective function; the corresponding results were summarized in Table 3 and Table 4 and illustrated in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 with efficient spaces related to the upper and lower bound properties of the interval problem. To illustrate the performance of our methodology, a few numerical examples were worked through. The numerical results showed that the CPU times needed for solving problems with second order cone constraints are less than those for problems without second order cone constraints, which is an important practical advantage.

Author Contributions

A.S., M.S. and N.M.A. contributed to the design and implementation of the research, to the analysis of the results and to the writing of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors are thankful to the Shahid Chamran University of Ahvaz and Sharif University of Technology for making this joint work possible and Shahid Chamran University of Ahvaz for the financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bhurjee, A.K.; Panda, G. Efficient solution of interval optimization problem. Math. Meth. Oper. Res. 2012, 76, 273–288.
2. Moore, R.E. Interval Analysis; Prentice-Hall: Upper Saddle River, NJ, USA, 1966.
3. Walster, G.W.; Hansen, E. Global Optimization Using Interval Analysis; Marcel Dekker: New York, NY, USA, 2004.
4. Wu, H. On interval-valued nonlinear programming problems. J. Math. Anal. Appl. 2008, 338, 299–316.
5. Charnes, A.; Cooper, W.W. Programming with linear fractional functionals. Nav. Res. Logist. Q. 1962, 9, 181–186.
6. Tantawy, S.F. A new procedure for solving linear fractional programming problems. Math. Comput. Model. 2008, 48, 969–973.
7. Dinkelbach, W. On nonlinear fractional programming. Manag. Sci. 1967, 13, 492–498.
8. Alizadeh, F.; Goldfarb, D. Second-order cone programming. Math. Program. Ser. B 2003, 95, 3–51.
9. Lobo, M.S.; Vandenberghe, L.; Boyd, S.; Lebret, H. Applications of second-order cone programming. Linear Algebra Appl. 1998, 284, 193–228.
10. Nesterov, Y.; Nemirovski, A. Interior-Point Polynomial Algorithms in Convex Programming; SIAM: Philadelphia, PA, USA, 1994.
11. Lobo, M.S.; Vandenberghe, L.; Boyd, S. SOCP: Software for Second-Order Cone Programming; Information Systems Laboratory, Stanford University: Stanford, CA, USA, 1997.
12. Nesterov, Y.E.; Todd, M.J. Primal-dual interior-point methods for self-scaled cones. SIAM J. Optim. 1998, 8, 324–364.
13. Nesterov, Y.E.; Todd, M.J. Self-scaled barriers and interior-point methods for convex programming. Math. Oper. Res. 1997, 22, 1–42.
14. Kim, S.; Kojima, M. Exact solutions of some nonconvex quadratic optimization problems via SDP and SOCP relaxations. Comput. Optim. Appl. 2003, 26, 143–154.
Figure 1. Efficient space for Example 3 with SOC constraint using n = 20.
Figure 2. Efficient space for Example 3 with SOC constraint using n = 100.
Figure 3. Efficient space for Example 3 with SOC constraint using n = 1000.
Figure 4. Efficient space for Example 3 without SOC constraint using n = 20.
Figure 5. Efficient space for Example 3 without SOC constraint using n = 100.
Figure 6. Efficient space for Example 3 without SOC constraint using n = 1000.
Table 1. Notations.

n           Number of pairs (α, β) used in Theorem 3
Exwsoc1     Example 1 without the SOC constraint
Exsoc1      Example 1 with the SOC constraint
Exwsoc2     Example 2 without the SOC constraint
Exsoc2      Example 2 with the SOC constraint
CPU         CPU time in seconds
CPU.ratio   Ratio of the CPU time without the SOC constraint to that with the SOC constraint
Table 2. Objective function values using Theorems 2 and 3.

             Applying Theorem 2    Applying Theorem 3
Example 1    -12                   -0.003973
Example 2    4.045417              1.31
Table 3. CPU times corresponding to Example 1.

n       CPU Time for Exwsoc1    CPU Time for Exsoc1    CPU.ratio
10            0.975                   0.764            1.2762
50            4.013                   2.760            1.454
100           7.353                   5.320            1.3821
150          11.079                   8.137            1.36155
200          14.674                  10.543            1.392
250          18.435                  12.890            1.43018
300          21.828                  15.439            1.413822
350          25.269                  18.388            1.37421
400          28.601                  20.714            1.38076
450          32.044                  22.860            1.40174
500          36.385                  25.750            1.413
600          42.514                  30.776            1.38140
700          49.899                  35.852            1.39180
800          56.713                  40.976            1.38405
900          64.219                  45.873            1.39993
1000         71.645                  51.256            1.398
Table 4. CPU times corresponding to Example 2.

n       CPU Time for Exwsoc2    CPU Time for Exsoc2    CPU.ratio
10            1.621                   1.225            1.32327
50            5.076                   4.814            1.05442
100           9.608                   9.154            1.0496
150          14.424                  13.018            1.10800
200          18.695                  17.185            1.02804
250          22.747                  21.714            1.04752
300          26.842                  25.866            1.03773
350          32.512                  31.570            1.02983
400          36.399                  33.455            1.0880
450          40.249                  38.412            1.04782
500          45.471                  42.775            1.06303
600          54.623                  52.015            1.05014
700          62.414                  59.653            1.04628
800          71.676                  68.400            1.04790
900          79.980                  79.956            1.00030
1000         90.243                  82.616            1.09231

Sadeghi, A.; Saraj, M.; Mahdavi Amiri, N. Efficient Solutions of Interval Programming Problems with Inexact Parameters and Second Order Cone Constraints. Mathematics 2018, 6, 270. https://0-doi-org.brum.beds.ac.uk/10.3390/math6110270
