Article

A Global Optimization Algorithm for Solving Linearly Constrained Quadratic Fractional Problems

College of Science, Zhejiang University of Technology, Hangzhou 310023, China
* Author to whom correspondence should be addressed.
Submission received: 18 September 2021 / Revised: 8 November 2021 / Accepted: 17 November 2021 / Published: 22 November 2021

Abstract:

This paper first proposes a new, enhanced second order cone programming (SOCP) relaxation, based on the simultaneous matrix diagonalization technique, for the linearly constrained quadratic fractional programming problem. The problem has wide applications in statistics, economics and signal processing, so a fast and effective algorithm is needed. The enhanced SOCP relaxation improves both the relaxation quality and the computational efficiency of the classical SOCP relaxation. Moreover, although its bound is weaker than that of the copositive relaxation, it is much cheaper to compute. We then present a global algorithm based on the branch and bound framework. Extensive numerical experiments show that the branch and bound algorithm based on the enhanced SOCP relaxation globally solves the problem in less computing time than the copositive relaxation approach.

1. Introduction

The quadratic fractional programming problem refers to $\min_{x \in X} f_1(x)/f_2(x)$, where $f_1(x)$ and $f_2(x)$ are quadratic functions and $X$ is the feasible region. It has many applications in electrical engineering [1], finance, production planning [2], and communications over wireless channels [3], etc. Many strategies have been developed for this important problem. One classical approach is the Dinkelbach method [4], which reduces the fractional program to a sequence of parametric subproblems of the form $\min_{x \in X} f_1(x) - \lambda f_2(x)$. For example, Salahi et al. [5] studied the problem of minimizing the ratio of two indefinite quadratic functions subject to a strictly convex quadratic constraint. Zhang et al. [6] proposed a Celis–Dennis–Tapia based approach to quadratic fractional programming problems with two quadratic constraints. Gotoh et al. [3,7] solved general quadratic fractional problems by combining the Dinkelbach iterative algorithm with the branch and bound algorithm. Moreover, metaheuristic approaches that combine machine learning and swarm intelligence have been able to solve the problem globally [8,9]. In recent years, the semidefinite programming (SDP) relaxation and the copositive relaxation have become popular for quadratic fractional programming problems. Some special cases of the quadratic fractional programming problem can be reformulated into an exact SDP relaxation and solved in polynomial time. Beck et al. [10] showed that minimizing the ratio of indefinite quadratic functions over an ellipsoid admits an exact SDP reformulation under some technical conditions. Xia [11] improved their results by removing the technical conditions. Nguyen et al. [12] analysed quadratic fractional problems over a two-sided quadratic constraint in three cases and showed that each of them admits an exact SDP relaxation. Moreover, Preisig [13] used the idea of copositivity to deal with standard quadratic fractional functions. Amaral et al. [14] proposed a copositive relaxation for nonconvex min–max fractional quadratic problems under quadratic constraints and showed that the lower bound provided by the copositive relaxation could speed up a well-known solver in obtaining the optimal value.
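As a concrete illustration of the Dinkelbach method mentioned above, the sketch below runs the iteration $\lambda_{k+1} = f_1(x_k)/f_2(x_k)$ on a toy one-dimensional ratio; the grid search is only a stand-in for a proper parametric-subproblem solver, and the example functions are illustrative, not from the paper.

```python
import numpy as np

def dinkelbach(f1, f2, xs, tol=1e-10, max_iter=100):
    """Dinkelbach iterations for min f1(x)/f2(x) over the finite grid xs.

    The inner parametric subproblem min_x f1(x) - lam*f2(x) is solved by
    exhaustive search over the grid -- a stand-in for a real solver.
    Assumes f2 > 0 on the feasible set.
    """
    x = xs[0]
    lam = f1(x) / f2(x)                    # ratio at an arbitrary feasible point
    for _ in range(max_iter):
        vals = f1(xs) - lam * f2(xs)       # parametric objective F(lam)
        x = xs[np.argmin(vals)]
        if abs(f1(x) - lam * f2(x)) < tol:  # F(lam) = 0  <=>  lam is the optimal ratio
            break
        lam = f1(x) / f2(x)                # update the ratio
    return x, lam

# toy example: minimize (x^2 + 1) / (x + 2) over [0, 3]
xs = np.linspace(0.0, 3.0, 300001)
x_star, lam_star = dinkelbach(lambda x: x**2 + 1, lambda x: x + 2, xs)
```

The true minimizer of this toy ratio is $x = \sqrt{5} - 2$ with value $2\sqrt{5} - 4$, which the iteration recovers up to the grid resolution.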
In this paper, we consider the quadratic fractional programming problem with linear constraints:
$$\min\; f(x) = \frac{x^{\mathsf T} Q x + 2 q^{\mathsf T} x + c}{\|x\|^2 + 1}, \quad \text{s.t.}\; A x = a,\; x \ge 0. \tag{1}$$
The above problem was proposed by Amaral et al. [15]. Interesting applications of (1) include the standard quadratic fractional problem and the symmetric eigenvalue complementarity problem. Here, $Q \in \mathbb{R}^{n \times n}$ is a symmetric matrix, $q \in \mathbb{R}^n$, $a \in \mathbb{R}^m$, $c \in \mathbb{R}$ and $A \in \mathbb{R}^{m \times n}$. Without loss of generality, we assume that $A$ has full row rank. When $Q$ is positive semidefinite, (1) becomes a total least squares problem and is thus widely used in a variety of disciplines such as statistics, economics and signal processing [16]. In this paper, the feasible region $F = \{x \in \mathbb{R}^n : Ax = a,\; x \ge 0\}$ is supposed to be nonempty and compact, i.e., $F \neq \emptyset$ and $\ker A \cap \mathbb{R}^n_+ = \{0\}$. The nonconvexity of the objective function is what makes this problem challenging. Amaral et al. [15] proposed a copositive (CP) relaxation for the problem. Although they showed that the CP relaxation admits a good lower bound, with small relative gaps to the optimal value, its computational cost is high, as shown in their numerical results [15]: the CPU time exceeded 50 s to solve the CP relaxation when the dimension of the variables was 79. Thus, it is critical to design a convex relaxation that remains efficient for large-scale problems while retaining the strength of the relaxation. In this paper, we instead design an enhanced second order cone programming (SOCP) relaxation for (1). We first reformulate the primal problem into a quadratic program with one quadratic equality constraint and linear constraints. We then derive an enhanced SOCP relaxation by exploiting the simultaneous matrix diagonalization tool. We compare the enhanced SOCP relaxation with the classical SOCP relaxation, and extensive numerical experiments verify that the enhanced SOCP relaxation is superior in both bound quality and computational cost. In particular, the advantage grows as the number of negative eigenvalues of $Q$ increases.
Then we design a branch and bound algorithm based on the enhanced SOCP relaxation to find a global optimal solution. Numerical experiments show that although the lower bound provided by the enhanced SOCP relaxation is weaker than that of the CP relaxation, it is much cheaper to compute. Consequently, the enhanced SOCP relaxation-based branch and bound algorithm obtains the optimal solution in far less time than the CP relaxation approach when the dimension of the variables exceeds 100.
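To fix notation before the derivations, the following minimal sketch evaluates the objective of (1) and checks the feasibility conditions $Ax = a$, $x \ge 0$; the data below are made-up illustrative values, not instances from the paper.

```python
import numpy as np

def objective(Q, q, c, x):
    """Objective of (1): (x'Qx + 2q'x + c) / (||x||^2 + 1)."""
    return (x @ Q @ x + 2 * q @ x + c) / (x @ x + 1)

def is_feasible(A, a, x, tol=1e-9):
    """Feasibility for (1): Ax = a and x >= 0 (up to tolerance)."""
    return bool(np.all(x >= -tol) and np.linalg.norm(A @ x - a) <= tol)

# tiny illustrative data (not from the paper)
Q = np.array([[1.0, 0.0], [0.0, -2.0]])   # indefinite, so f is nonconvex
q = np.array([0.5, -0.5]); c = 1.0
A = np.array([[1.0, 1.0]]); a = np.array([1.0])
x = np.array([0.25, 0.75])                # lies on Ax = a with x >= 0
val = objective(Q, q, c, x)
```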
The following notation is adopted throughout the paper. Given a real symmetric matrix $X$, $X \succeq 0$ means $X$ is positive semidefinite. $I$ denotes an identity matrix. For $n \times n$ real matrices $A = (A_{ij})$ and $B = (B_{ij})$, $A \bullet B = \operatorname{trace}(A^{\mathsf T} B) = \sum_{i,j=1}^{n} A_{ij} B_{ij}$. $\lfloor a \rfloor$ denotes $a \in \mathbb{R}$ rounded down to the nearest integer. Given a vector $b \in \mathbb{R}^n$, $\operatorname{diag}(b)$ is the $n \times n$ diagonal matrix with diagonal elements equal to $b$.
The paper is organized as follows. In Section 2, we recast the problem into a quadratic program with one quadratic equality constraint and linear constraints and then present an enhanced SOCP relaxation. Section 3 describes a branch and bound algorithm. Section 4 provides numerical experiments verifying that the enhanced SOCP relaxation-based branch-and-bound method effectively solves the problem. Conclusions are given in Section 5.

2. A Reformulation of (1) and an Enhanced SOCP Relaxation

Some constrained quadratic fractional problems are equivalent to quadratically constrained quadratic programming problems [13,15]. Following this idea, in this section we first equivalently reformulate (1) into a quadratically constrained quadratic programming problem and then design an enhanced SOCP relaxation.
For convenience, let
$$\bar{A} = \begin{pmatrix} -a & A \end{pmatrix}, \quad \bar{Q} = \begin{pmatrix} c & q^{\mathsf T} \\ q & Q \end{pmatrix}, \quad \bar{P} = \begin{pmatrix} 1 & 0^{\mathsf T} \\ 0 & I \end{pmatrix}.$$
Then (1) is equivalent to the following homogeneous quadratic fractional program with linear constraints:
$$\min\; \frac{z^{\mathsf T} \bar{Q} z}{z^{\mathsf T} \bar{P} z}, \quad \text{s.t.}\; z \ge 0,\; z_1 = 1,\; \bar{A} z = 0. \tag{2}$$
If we define $y = z / \sqrt{z^{\mathsf T} \bar{P} z}$, then (2) is recast into:
$$\min\; y^{\mathsf T} \bar{Q} y, \quad \text{s.t.}\; y \ge 0,\; \bar{A} y = 0,\; y^{\mathsf T} \bar{P} y = 1. \tag{3}$$
Lemma 1. 
(2) is equivalent to (3).
Proof. 
If $z$ is a feasible solution of (2), let $y = z / \sqrt{z^{\mathsf T} \bar{P} z}$. It is easy to verify that $y$ is a feasible solution of (3) and $y^{\mathsf T} \bar{Q} y = \frac{z^{\mathsf T} \bar{Q} z}{z^{\mathsf T} \bar{P} z}$. Hence, the optimal value of (3) is no more than that of (2). Conversely, if $y$ is a feasible solution of (3), then $y^{\mathsf T} \bar{P} y = 1$ implies that $y \neq 0$. Write $y = [y_1; y_2]$ with $y_1 \in \mathbb{R}$ and $y_2 \in \mathbb{R}^n$. If $y_1 = 0$, then $A y_2 = 0$ and $y_2 \ge 0$; hence $y_2 \in \ker A \cap \mathbb{R}^n_+ = \{0\}$, which contradicts $y \neq 0$. Therefore, $y_1 > 0$. Let $z = y / y_1$; it is easy to verify that $z$ is a feasible solution of (2) and $\frac{z^{\mathsf T} \bar{Q} z}{z^{\mathsf T} \bar{P} z} = y^{\mathsf T} \bar{Q} y$. Hence, the optimal value of (3) is no less than that of (2). Consequently, (2) is equivalent to (3).  □
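The normalization used in the proof can be checked numerically. The sketch below builds $\bar{Q}$ and $\bar{P}$ from random data and verifies that $f(x)$, the homogenized ratio at $z = (1; x)$, and $y^{\mathsf T} \bar{Q} y$ all coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2   # random symmetric Q
q = rng.standard_normal(n); c = rng.standard_normal()
x = rng.random(n)                                    # any x >= 0 works for this identity

Qbar = np.block([[np.atleast_2d(c), q[None, :]],
                 [q[:, None],       Q]])
Pbar = np.eye(n + 1)                                 # z'Pbar z = z1^2 + ||x||^2

z = np.concatenate(([1.0], x))                       # homogenized point, z1 = 1
f = (x @ Q @ x + 2 * q @ x + c) / (x @ x + 1)        # objective of (1)
y = z / np.sqrt(z @ Pbar @ z)                        # normalization of Lemma 1

assert np.isclose(z @ Qbar @ z / (z @ Pbar @ z), f)  # ratio form (2)
assert np.isclose(y @ Qbar @ y, f)                   # objective of (3)
assert np.isclose(y @ Pbar @ y, 1.0)                 # sphere constraint of (3)
```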
Lemma 1 leads directly to the following proposition.
Proposition 1.
(1) is equivalent to (3).
Therefore, in order to solve (1), we solve (3) instead. Since (3) contains the equality constraint $\bar{A} y = 0$, we first reduce the variable dimension from $n+1$ to $n-m+1$ by a method similar to that in [15]. Let $S = (s_1, \ldots, s_{n-m+1}) \in \mathbb{R}^{(n+1) \times (n-m+1)}$ be a matrix whose columns form an orthonormal basis of $\ker \bar{A}$; then any $y$ satisfying $\bar{A} y = 0$ can be written as $y = S w$ with $w \in \mathbb{R}^{n-m+1}$. Let $\hat{Q} = S^{\mathsf T} \bar{Q} S$ and $\hat{P} = S^{\mathsf T} \bar{P} S = S^{\mathsf T} S = I$; then (3) turns into:
$$\min\; w^{\mathsf T} \hat{Q} w, \quad \text{s.t.}\; S w \ge 0,\; w^{\mathsf T} w = 1. \tag{4}$$
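A basis $S$ of $\ker \bar{A}$ can be computed from an SVD. The sketch below assumes the sign convention $\bar{A} = (-a \;\; A)$ for the homogenized constraint matrix (our reading of the definition above) and checks the properties $S^{\mathsf T} S = I$ and $\bar{A} S = 0$:

```python
import numpy as np

def nullspace_basis(Abar, tol=1e-12):
    """Orthonormal basis of ker(Abar), returned as the columns of S (via SVD)."""
    _, sing, Vt = np.linalg.svd(Abar)
    rank = int(np.sum(sing > tol * sing.max()))
    return Vt[rank:].T          # right singular vectors for zero singular values

rng = np.random.default_rng(1)
m, n = 2, 5
A = rng.standard_normal((m, n))
a = A @ rng.random(n)                    # a = A x for some x >= 0, so F is nonempty
Abar = np.hstack([-a[:, None], A])       # homogenized constraints: Abar z = 0 <=> Ax = a z1

S = nullspace_basis(Abar)                # shape (n+1) x (n-m+1) when A has full row rank
```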
(4) is a nonconvex quadratic program with one spherical constraint and linear constraints; in general, it cannot be solved in polynomial time. Amaral et al. [15] proposed a CP relaxation for (1):
$$\min\; \hat{Q} \bullet W, \quad \text{s.t.}\; I \bullet W = 1,\; S W S^{\mathsf T} \ge 0,\; W \succeq 0. \tag{5}$$
They showed that the CP relaxation provides a good lower bound, and numerical experiments verified that it results in small relative gaps to the optimal value. However, its computational cost is high, so it is not effective when the dimension of the variables grows. In contrast, the classical SOCP relaxation is much cheaper to solve, but its bound is weaker [17]. To balance bound quality and computational cost, we design an enhanced SOCP relaxation that both improves the lower bound and reduces the computing time compared to the classical SOCP relaxation.
We first briefly introduce the classical SOCP relaxation. Decompose $\hat{Q} = \sum_{i=1}^{n-m+1} \sigma_i \eta_i \eta_i^{\mathsf T}$ by eigenvalue decomposition, where the $\sigma_i$ are eigenvalues with corresponding eigenvectors $\eta_i$, $i = 1, \ldots, n-m+1$, and let $s$ denote the number of negative eigenvalues of $\hat{Q}$. Lower and upper bounds on $\eta_i^{\mathsf T} w$ are obtained by solving $f_i = \min \{\eta_i^{\mathsf T} w : Sw \ge 0,\; w^{\mathsf T} w \le 1\}$ and $g_i = \max \{\eta_i^{\mathsf T} w : Sw \ge 0,\; w^{\mathsf T} w \le 1\}$ for $i = 1, \ldots, s$; lower and upper bounds on $w_i$ are obtained by solving $b_i = \min \{w_i : Sw \ge 0,\; w^{\mathsf T} w \le 1\}$ and $d_i = \max \{w_i : Sw \ge 0,\; w^{\mathsf T} w \le 1\}$ for $i = 1, \ldots, n-m+1$. The classical SOCP relaxation is then [17]:
$$
\begin{aligned}
\min\;& \sum_{i=1}^{s} \sigma_i \tau_i + \sum_{i=s+1}^{n-m+1} \sigma_i (\eta_i^{\mathsf T} w)^2\\
\text{s.t.}\;& Sw \ge 0,\; \sum_{i=1}^{n-m+1} \gamma_i = 1,\\
& w_i^2 \le \gamma_i,\; \gamma_i \le (b_i + d_i)\, w_i - b_i d_i,\; i = 1, \ldots, n-m+1,\\
& (\eta_i^{\mathsf T} w)^2 \le \tau_i,\; \tau_i \le (f_i + g_i)\, \eta_i^{\mathsf T} w - f_i g_i,\; i = 1, \ldots, s.
\end{aligned} \tag{6}
$$
In what follows, we design a new SOCP relaxation by employing the simultaneous matrix diagonalization technique. The simultaneous matrix diagonalization-based convex relaxation was first proposed by Ben-Tal et al. [18] for quadratic programs with a single quadratic constraint, under the condition that the quadratic forms are simultaneously diagonalizable. Zhou et al. then used the technique to solve various problems, including convex quadratic programs with linear complementarity constraints [19], the generalized trust-region problem [20], and convex quadratically constrained nonconvex quadratic programs [21]. All of this research suggests that convex relaxations employing simultaneous matrix diagonalization for special classes of quadratically constrained quadratic programs can yield better lower bounds or lower computational cost.
It is obvious that $I$ and $\hat{Q}$ are simultaneously diagonalizable, i.e., there exists a nonsingular matrix $V$ such that $V^{\mathsf T} I V$ and $V^{\mathsf T} \hat{Q} V$ are both diagonal. In fact, let $\hat{Q} = V \Lambda V^{\mathsf T}$ be an eigenvalue decomposition, where $\Lambda = \operatorname{diag}(\sigma_1, \ldots, \sigma_{n-m+1})$ and $V = (\eta_1, \ldots, \eta_{n-m+1})$; then $V^{\mathsf T} V = I$ and $V^{\mathsf T} \hat{Q} V = \Lambda$. Substituting $w = V \xi$, (4) becomes:
$$\min\; \xi^{\mathsf T} \Lambda \xi, \quad \text{s.t.}\; S V \xi \ge 0,\; \xi^{\mathsf T} \xi = 1. \tag{7}$$
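The change of variables behind (7) can be verified numerically. The sketch below checks that the eigendecomposition diagonalizes the quadratic objective while preserving the sphere constraint:

```python
import numpy as np

rng = np.random.default_rng(2)
k = 6                                    # stands for n - m + 1
Qhat = rng.standard_normal((k, k)); Qhat = (Qhat + Qhat.T) / 2

sigma, V = np.linalg.eigh(Qhat)          # Qhat = V diag(sigma) V',  V'V = I
xi = rng.standard_normal(k)
xi /= np.linalg.norm(xi)                 # a point on the unit sphere
w = V @ xi                               # change of variables w = V xi

assert np.allclose(V.T @ V, np.eye(k))                   # V is orthogonal
assert np.isclose(w @ Qhat @ w, np.sum(sigma * xi**2))   # objective diagonalizes
assert np.isclose(w @ w, xi @ xi)                        # sphere constraint preserved
```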
The lower and upper bounds of $\xi_i$ are obtained by solving $l_i = \min \{\xi_i : S V \xi \ge 0,\; \xi^{\mathsf T} \xi \le 1\}$ and $u_i = \max \{\xi_i : S V \xi \ge 0,\; \xi^{\mathsf T} \xi \le 1\}$ for $i = 1, \ldots, n-m+1$, respectively. Relaxing $\xi_i^2 = t_i$ into $\xi_i^2 \le t_i$ for $i = 1, \ldots, n-m+1$ yields the new SOCP relaxation:
$$
\begin{aligned}
\min\;& \sum_{i=1}^{n-m+1} \sigma_i t_i\\
\text{s.t.}\;& S V \xi \ge 0,\; \sum_{i=1}^{n-m+1} t_i = 1,\\
& \xi_i^2 \le t_i,\; t_i \le (l_i + u_i)\, \xi_i - l_i u_i,\; i = 1, \ldots, n-m+1.
\end{aligned} \tag{8}
$$
We observe that (6) has $s$ more convex quadratic constraints and $s$ more linear constraints than (8); hence, (6) is more expensive to solve. Moreover, the auxiliary variables $\gamma_i$ and $\tau_i$ in (6) are only bounded above by linear constraints, whereas the auxiliary variables $t_i$ in (8) are not only bounded above by linear constraints but also appear in the objective function: minimizing the objective also prevents $t_i$ from growing whenever $\sigma_i > 0$.
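The linear constraint $t_i \le (l_i + u_i)\xi_i - l_i u_i$ in (8) is the secant of $\xi_i^2$ over $[l_i, u_i]$. The sketch below verifies the standard fact that its worst-case gap above $\xi_i^2$ on the box is $(u_i - l_i)^2/4$, attained at the midpoint:

```python
import numpy as np

def secant_gap(l, u, xi):
    """Gap between the secant over-estimator (l+u)xi - lu and xi^2 on [l, u]."""
    return (l + u) * xi - l * u - xi**2

l, u = -0.5, 1.5
xi = np.linspace(l, u, 100001)
gap = secant_gap(l, u, xi)

assert np.all(gap >= -1e-12)            # the secant lies above xi^2 on [l, u]
# worst case is (u - l)^2 / 4, attained at the midpoint (l + u) / 2
```

This is exactly why bisecting a box in a branch and bound scheme shrinks the relaxation gap: halving $u_i - l_i$ quarters the worst-case overestimate.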
To verify that (8) indeed improves upon the classical SOCP relaxation (6), we test both relaxations on randomly generated instances. Five instances are generated for each given problem size, and the average lower bounds and average computing times in seconds are reported. The generation of the random instances is described in Section 4. In Figure 1, we let $n = 10, 50, 100, 150, 200$, $m = \lfloor n/4 \rfloor$ and $r = \lfloor n/2 \rfloor$, where $r$ denotes the number of negative eigenvalues of the objective matrix $Q$. To compare the two relaxations as $r$ varies, we set $n = 200$, $m = 50$ and $r = 50, 100, 150, 200$; the results are shown in Figure 2.
Figure 1 shows that (8) obtains a better lower bound in less computing time than (6), and the time advantage grows as the dimension increases.
Figure 2 shows that the bound quality and computing time of (8) change very little as the number of negative eigenvalues of $Q$ varies. In contrast, the bound of (6) deteriorates and its computing time increases as the number of negative eigenvalues of $Q$ grows. Hence, the advantages of (8) in both bound quality and computing time are magnified as the number of negative eigenvalues of $Q$ increases.

3. An Enhanced SOCP Relaxation Based Branch-and-Bound Algorithm

The branch and bound algorithm is widely used for globally solving constrained fractional programming problems [22]. In this section we present an enhanced SOCP relaxation-based branch-and-bound scheme for (7), detailed in Algorithm 1. The design framework has four steps:
(1) Initialization. Compute the initial bounds $l_i^0 = \min \{\xi_i : S V \xi \ge 0,\; \xi^{\mathsf T} \xi \le 1\}$ and $u_i^0 = \max \{\xi_i : S V \xi \ge 0,\; \xi^{\mathsf T} \xi \le 1\}$ for $i = 1, \ldots, n-m+1$. Solve (8) over $[l^0, u^0]$ to obtain its optimal value $lb^0$ and optimal solution $(\xi^0, t^0)$.
(2) The node selection strategy. The algorithm employs the classical “best-first” selection strategy, i.e., the one with the lowest bound among the live subproblems is selected.
(3) The variable selection strategy and branching rule. Let $(\xi^k, t^k)$ be the solution of (8) at the current node $k$ over the box $[l^k, u^k]$. Choose $j^* = \arg\max_{j \in \{1, \ldots, n-m+1\}} \big(t_j^k - (\xi_j^k)^2\big)$. The box $[l^k, u^k]$ is then split into two sub-boxes $[l^a, u^a]$ and $[l^b, u^b]$ with $l_j^a = l_j^k$, $u_j^a = u_j^k$ for $j \neq j^*$ and $u_{j^*}^a = \frac{l_{j^*}^k + u_{j^*}^k}{2}$; and $u_j^b = u_j^k$, $l_j^b = l_j^k$ for $j \neq j^*$ and $l_{j^*}^b = u_{j^*}^a$. Thus, two new subproblems are generated over the two new sub-boxes $[l^a, u^a]$ and $[l^b, u^b]$, respectively.
(4) Lower bound and upper bound. As described by the branching rule, every enumeration node corresponds to a box $[l, u]$. The lower bound $lb$ and the point $(\xi, t)$ for each node are obtained by solving (8) over the corresponding $[l, u]$. Moreover, if $\xi \neq 0$, then $\bar{\xi} = \xi / \sqrt{\xi^{\mathsf T} \xi}$ is a feasible solution of (7) and $U = \bar{\xi}^{\mathsf T} \Lambda \bar{\xi}$ is an upper bound.
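The best-first selection of step (2) and the bisection rule of step (3) can be sketched with a heap-based node pool; the bounds below are made-up numbers for illustration only.

```python
import heapq

def split_box(l, u, j):
    """Bisect the box [l, u] along coordinate j (branching rule of step (3))."""
    mid = (l[j] + u[j]) / 2
    u_left = list(u);  u_left[j] = mid    # [l^a, u^a]: upper bound at j* -> midpoint
    l_right = list(l); l_right[j] = mid   # [l^b, u^b]: lower bound at j* -> midpoint
    return (list(l), u_left), (l_right, list(u))

# node pool keyed by lower bound; heapq pops the smallest key first ("best-first")
pool = []
heapq.heappush(pool, (-0.9, ([-1.0, -1.0], [1.0, 1.0])))
heapq.heappush(pool, (-1.2, ([-1.0, -1.0], [0.0, 1.0])))

lb_k, (l_k, u_k) = heapq.heappop(pool)    # the node with the lowest bound comes out
left, right = split_box(l_k, u_k, j=0)
```

In a full implementation, each child box would be passed back to the SOCP relaxation (8) to obtain its own bound before reinsertion into the pool.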
The following theorem proves the convergence of Algorithm 1.
Theorem 1. 
If the node $\{\xi^k, t^k, lb^k, l^k, u^k\}$ selected from $D$ in Line 10 of Algorithm 1 satisfies $lb^k = \min \{lb^j : lb^j \in D\}$ and $i^* = \arg\max_{i \in \{1, \ldots, n-m+1\}} \big(t_i^k - (\xi_i^k)^2\big)$, then for any $\epsilon > 0$ there exists a $\delta > 0$ such that Algorithm 1 terminates in Line 13 whenever $u_{i^*}^k - l_{i^*}^k \le \delta$.
Proof. 
Since $\xi_i^k \in [l_i^k, u_i^k]$ and $t_i^k \le (l_i^k + u_i^k)\, \xi_i^k - l_i^k u_i^k$, we have
$$t_i^k - (\xi_i^k)^2 \le t_{i^*}^k - (\xi_{i^*}^k)^2 \le \frac{(u_{i^*}^k - l_{i^*}^k)^2}{4} \le \frac{\delta^2}{4}$$
and
$$1 - (\xi^k)^{\mathsf T} \xi^k = \sum_{i=1}^{n-m+1} t_i^k - \sum_{i=1}^{n-m+1} (\xi_i^k)^2 \le (n-m+1)\big(t_{i^*}^k - (\xi_{i^*}^k)^2\big) \le \frac{(n-m+1)\,\delta^2}{4}.$$
Let $\hat{\sigma} = \max_i |\sigma_i|$ and set $\delta = \min\!\left\{ \sqrt{\frac{2\epsilon}{\hat{\sigma}(n-m+1)}},\; \sqrt{\frac{2}{n-m+1}} \right\}$; then $\frac{(n-m+1)\delta^2}{4} \le \frac{1}{2}$, and hence $(\xi^k)^{\mathsf T} \xi^k \ge \frac{1}{2}$. Let $\bar{\xi}^k = \xi^k / \sqrt{(\xi^k)^{\mathsf T} \xi^k}$; then $\bar{\xi}^k$ is a feasible solution of (7). For any $\epsilon > 0$, we have
$$
\begin{aligned}
\left| \sum_{i=1}^{n-m+1} \sigma_i (\bar{\xi}_i^k)^2 - \sum_{i=1}^{n-m+1} \sigma_i t_i^k \right|
&\le \left| \sum_{i=1}^{n-m+1} \sigma_i (\bar{\xi}_i^k)^2 - \sum_{i=1}^{n-m+1} \sigma_i (\xi_i^k)^2 \right| + \left| \sum_{i=1}^{n-m+1} \sigma_i (\xi_i^k)^2 - \sum_{i=1}^{n-m+1} \sigma_i t_i^k \right| \\
&\le \hat{\sigma}\,\big(1 - (\xi^k)^{\mathsf T} \xi^k\big) + \hat{\sigma} \left( \sum_{i=1}^{n-m+1} t_i^k - \sum_{i=1}^{n-m+1} (\xi_i^k)^2 \right) \\
&\le \hat{\sigma}\,\frac{(n-m+1)\delta^2}{4} + \hat{\sigma}\,\frac{(n-m+1)\delta^2}{4} = \hat{\sigma}\,\frac{(n-m+1)\delta^2}{2} \le \epsilon.
\end{aligned}
$$
Since the upper bound satisfies $U^* \le \sum_i \sigma_i (\bar{\xi}_i^k)^2$ and $lb^k = \sum_i \sigma_i t_i^k$, this gives $U^* - lb^k \le \epsilon$. Therefore, Algorithm 1 terminates in Line 13.  □
Algorithm 1 A Branch-and-Bound Algorithm for Solving (1).
Require: An instance of (1) and a given error tolerance $\epsilon > 0$. Set $k = 1$ and $U^* = +\infty$.
1:  Solve (8) with $[l^0, u^0]$ for its optimal value $lb^0$ and optimal solution $(\xi^0, t^0)$.
2:  if $(\xi^0)^{\mathsf T} \xi^0 > 0$ then
3:      $\xi^* = \xi^0 / \sqrt{(\xi^0)^{\mathsf T} \xi^0}$ and $U^* = (\xi^*)^{\mathsf T} \Lambda \xi^*$.
4:  end if
5:  Construct a set $D$ and insert $\{\xi^0, t^0, lb^0, l^0, u^0\}$ into it.
6:  loop
7:      if $D = \emptyset$ then
8:          return $(\xi^*, U^*)$ and terminate.
9:      end if
10:     Choose a node from $D$, denoted $\{\xi^k, t^k, lb^k, l^k, u^k\}$, such that $lb^k = \min \{lb^i : lb^i \in D\}$, and remove it from $D$.
11:     if $U^* - lb^k \le \epsilon$ then
12:         return $(\xi^*, U^*)$ and terminate.
13:     end if
14:     Choose $j^* = \arg\max_{j \in \{1, \ldots, n-m+1\}} \big(t_j^k - (\xi_j^k)^2\big)$.
15:     Construct the box $[l^a, u^a]$ by setting $l_j^a = l_j^k$, $u_j^a = u_j^k$ for $j \neq j^*$ and $u_{j^*}^a = \frac{l_{j^*}^k + u_{j^*}^k}{2}$; construct the box $[l^b, u^b]$ by setting $u_j^b = u_j^k$, $l_j^b = l_j^k$ for $j \neq j^*$ and $l_{j^*}^b = u_{j^*}^a$.
16:     Set $k \leftarrow k + 1$.
17:     if (8) over $[l^a, u^a]$ is feasible then
18:         Solve (8) over $[l^a, u^a]$ for its optimal value $lb^a$ and optimal solution $(\xi^a, t^a)$.
19:         if $(\xi^a)^{\mathsf T} \xi^a > 0$ then
20:             $\bar{\xi}^a = \xi^a / \sqrt{(\xi^a)^{\mathsf T} \xi^a}$ and $U^a = (\bar{\xi}^a)^{\mathsf T} \Lambda \bar{\xi}^a$.
21:         end if
22:         if $U^a < U^*$ then
23:             $U^* = U^a$ and $\xi^* = \bar{\xi}^a$.
24:         end if
25:         if $U^* - lb^a > \epsilon$ then
26:             insert $\{\xi^a, t^a, lb^a, l^a, u^a\}$ into $D$.
27:         end if
28:     end if
29:     if (8) over $[l^b, u^b]$ is feasible then
30:         Solve (8) over $[l^b, u^b]$ for its optimal value $lb^b$ and optimal solution $(\xi^b, t^b)$.
31:         if $(\xi^b)^{\mathsf T} \xi^b > 0$ then
32:             $\bar{\xi}^b = \xi^b / \sqrt{(\xi^b)^{\mathsf T} \xi^b}$ and $U^b = (\bar{\xi}^b)^{\mathsf T} \Lambda \bar{\xi}^b$.
33:         end if
34:         if $U^b < U^*$ then
35:             $U^* = U^b$ and $\xi^* = \bar{\xi}^b$.
36:         end if
37:         if $U^* - lb^b > \epsilon$ then
38:             insert $\{\xi^b, t^b, lb^b, l^b, u^b\}$ into $D$.
39:         end if
40:     end if
41: end loop

4. Numerical Experiments

In this section, we report encouraging numerical experience on randomly generated instances using Algorithm 1, and compare the results with the lower bound provided by the CP relaxation.
All algorithms are implemented in MATLAB R2013b (MathWorks Inc., Natick, MA, USA) on a Windows 7 PC with a 2.50 GHz Intel dual-core CPU. (8) is solved by the CPLEX solver (IBM Corp., Armonk, NY, USA), and the CP relaxation (5) is solved by SeDuMi [23] through the CVX interface. The error tolerance is set to $\epsilon = 10^{-4}$. The instances are generated as follows [15]: $Q = R T R^{\mathsf T}$, where $T = \operatorname{diag}(T_1, \ldots, T_n)$ with $T_i \sim U[-1, 0]$ for $i = 1, \ldots, r$ and $T_i \sim U[0, 1]$ for $i = r+1, \ldots, n$, and $R = W_1 W_2 W_3$ with Householder reflectors $W_j = I - 2\frac{w_j w_j^{\mathsf T}}{\|w_j\|^2}$ for $j = 1, 2, 3$, where each element $w_{jk} \sim U[-1, 1]$; $q_i \sim U[-1, 1]$ for $i = 1, \ldots, n$; the $m \times n$ matrix $A$ has $A(1,:) \sim U[0, 5]$ and $A(i,:) \sim U[-5, 5]$ for $i = 2, \ldots, m$; finally, a point $x$ is randomly generated in $\{x \in \mathbb{R}^n_+ : e^{\mathsf T} x = 1\}$ and we set $a = A x$, which guarantees that $F$ is nonempty. Five instances are generated for each given problem size. The following three tables report the experimental results, using the notation:
LB_SOCP—Value of the initial lower bound obtained by the SOCP relaxation (8).
Opv—Optimal value obtained by Algorithm 1 within the given error tolerance.
Nodes—Number of nodes explored by Algorithm 1 to obtain Opv.
Time1—CPU time in seconds for Algorithm 1 to obtain Opv.
LB_CP—Value of the lower bound obtained by the CP relaxation (5).
Time2—CPU time in seconds to obtain LB_CP.
“-”—The solver fails to solve the instance within 10,000 s.
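The instance-generation recipe above can be sketched as follows; note that the signs of the first $r$ diagonal entries of $T$ are our reading of the (garbled) recipe, since they must be negative for $r$ to count the negative eigenvalues of $Q$.

```python
import numpy as np

def generate_instance(n, m, r, rng):
    """Random instance of (1) following the recipe above.

    The U[-1, 0] range for the first r entries of T is an assumption: it makes
    Q have exactly r negative eigenvalues, as r is defined in the text.
    """
    T = np.concatenate([rng.uniform(-1, 0, r), rng.uniform(0, 1, n - r)])
    R = np.eye(n)
    for _ in range(3):                                # product of 3 Householder reflectors
        w = rng.uniform(-1, 1, n)
        R = R @ (np.eye(n) - 2 * np.outer(w, w) / (w @ w))
    Q = R @ np.diag(T) @ R.T                          # orthogonal similarity: eigenvalues = T
    q = rng.uniform(-1, 1, n)
    A = np.vstack([rng.uniform(0, 5, (1, n)),
                   rng.uniform(-5, 5, (m - 1, n))])
    x = rng.random(n); x /= x.sum()                   # random point in the simplex
    a = A @ x                                         # guarantees Ax = a is feasible
    return Q, q, A, a

rng = np.random.default_rng(3)
Q, q, A, a = generate_instance(n=10, m=3, r=5, rng=rng)
eigs = np.linalg.eigvalsh(Q)
```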
Table 1, Table 2 and Table 3 show that although the copositive relaxation can offer a better lower bound, or even the optimal value, for (1), its computational cost is much higher. In particular, when $n - m + 1$ approaches 100, none of the randomly generated instances can be solved by (5) within 10,000 s. In contrast, (8) provides a reasonable lower bound in reasonable computing time. Although its lower bound is weaker than that of the CP relaxation, the computing time of Algorithm 1 is far less than that of solving (5) for the various $m$ and $r$, and the advantage grows as $n$ increases.
To give an intuitive overview of the results in Table 1, Table 2 and Table 3, we additionally list the following metric comparisons of computing time between the proposed algorithm and the CP relaxation in Figure 3 and Figure 4.
Figure 3 and Figure 4 illustrate that the proposed algorithm achieves a better mean and standard deviation of computing time when the dimension $n$ is greater than 100, and the superiority is enlarged as $n$ increases.
Figure 5 shows the convergence speed of the proposed algorithm. The lower bound improves fastest in the first 5 to 10 iterations, after which it settles at a relatively steady value.

5. Conclusions

We showed that the enhanced SOCP relaxation employing the simultaneous matrix diagonalization technique yields a better lower bound at lower computational cost than the classical SOCP relaxation. Although its lower bound is not as tight as that of the copositive relaxation, it is much cheaper to compute. Numerical experiments imply that the enhanced SOCP relaxation is better suited for use within the branch and bound algorithm to obtain the global optimal solution. For future research, we may focus on designing simultaneous diagonalization-based SOCP relaxations for quadratically constrained quadratic fractional problems.

Author Contributions

Conceptualization, Z.X. and J.Z.; methodology, Z.X. and J.Z.; software, Z.X.; validation, J.Z.; writing—original draft preparation, J.Z.; writing—review and editing, Z.X.; funding acquisition, Z.X. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Xu’s research has been supported by the National Natural Science Foundation of China (Grants #11704336), Zhou’s research has been supported by the National Natural Science Foundation of China (Grant #11701512).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lai, H.C.; Huang, T.Y. Optimality conditions for nondifferentiable minimax fractional programming with complex variables. J. Math. Anal. Appl. 2009, 359, 229–239. [Google Scholar] [CrossRef] [Green Version]
  2. Stancu-Minasian, I.M. Fractional Programming: Theory, Methods and Applications, 1st ed.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1997; pp. 6–33. [Google Scholar]
  3. Cai, H.; Wang, Y.; Yi, T. An approach for minimizing a quadratically constrained fractional quadratic problem with application to the communications over wireless channels. Optim. Method Softw. 2014, 29, 310–320. [Google Scholar] [CrossRef]
  4. Dinkelbach, W. On nonlinear fractional programming. Manag. Sci. 1967, 13, 492–498. [Google Scholar] [CrossRef]
  5. Salahi, M.; Fallahi, S. On the quadratic fractional optimization with a strictly convex quadratic constraint. Kybernetika 2015, 51, 293–308. [Google Scholar] [CrossRef] [Green Version]
  6. Zhang, A.; Hayashi, S. Celis-Dennis-Tapia based approach to quadratic fractional programming problems with two quadratic constraints. Numer. Algebra Control Optim. 2011, 1, 83–98. [Google Scholar] [CrossRef]
  7. Gotoh, J.Y.; Konno, H. Maximization of the ratio of two convex quadratic functions over a polytope. Comput. Optim. Appl. 2001, 20, 43–60. [Google Scholar] [CrossRef]
  8. Bezdan, T.; Stoean, C.; Namany, A.A.; Bacanin, N.; Rashid, A.T.; Zivkovic, M.; Venkatachalam, K. Hybrid Fruit-Fly Optimization Algorithm with K-Means for Text Document Clustering. Mathematics 2021, 9, 1929. [Google Scholar] [CrossRef]
  9. Dong, G.; Liu, C.; Liu, D.; Mao, X. Adaptive multi-level search for global optimization: An integrated swarm intelligence-metamodelling technique. Appl. Sci. 2021, 11, 2277. [Google Scholar] [CrossRef]
  10. Beck, A.; Teboulle, M. A convex optimization approach for minimizing the ratio of indefinite quadratic functions over an ellipsoid. Math. Program. Ser. A 2009, 118, 13–35. [Google Scholar] [CrossRef]
  11. Xia, Y. On minimizing the ratio of quadratic functions over an ellipsoid. Optimization 2015, 64, 1097–1106. [Google Scholar] [CrossRef]
  12. Nguyen, V.B.; Sheu, R.L.; Xia, Y. An SDP approach for quadratic fractional problems with a two-sided quadratic constraint. Optim. Methods Softw. 2016, 31, 701–719. [Google Scholar] [CrossRef]
  13. Preisig, J.C. Copositivity and the minimization of quadratic functions with nonnegativity and quadratic equality constraints. SIAM J. Control Optim. 1996, 34, 1135–1150. [Google Scholar] [CrossRef]
  14. Amaral, P.; Bomze, I.M.; Júdice, J. Nonconvex min-max fractional quadratic problems under quadratic constraints: Copositive relaxations. J. Glob. Optim. 2019, 75, 227–245. [Google Scholar] [CrossRef] [Green Version]
  15. Amaral, P.; Bomze, I.M.; Júdice, J. Copositivity and constrained fractional quadratic problems. Math. Program. 2014, 146, 325–350. [Google Scholar] [CrossRef] [Green Version]
  16. Sadeghi, A.; Saraj, M.; Amiri, N.M. Solving a fractional program with second order cone constraint. Iran. J. Math. Sci. Inform. 2019, 14, 33–42. [Google Scholar]
  17. Kim, S.; Kojima, M. Second order cone programming relaxation of nonconvex quadratic optimization problems. Optim. Methods Softw. 2001, 15, 201–224. [Google Scholar] [CrossRef]
  18. Ben-Tal, A.; Den Hertog, D. Hidden conic quadratic representation of some nonconvex quadratic optimization problems. Math. Program. 2014, 143, 1–29. [Google Scholar] [CrossRef]
  19. Zhou, J.; Xu, Z. A simultaneous diagonalization based SOCP relaxation for convex quadratic programs with linear complementarity constraints. Optim. Lett. 2017, 13, 1615–1630. [Google Scholar] [CrossRef]
  20. Zhou, J.; Lu, C.; Tian, Y.; Tang, X. A SOCP relaxation based branch-and-bound method for generalized trust-region subproblem. J. Ind. Manag. Optim. 2021, 17, 151–168. [Google Scholar] [CrossRef] [Green Version]
  21. Zhou, J.; Chen, S.; Yu, S.; Tian, Y. A simultaneous diagonalization based quadratic convex reformulation for nonconvex quadratically constrained quadratic program. Optimization 2020, in press. [Google Scholar] [CrossRef]
  22. Liu, X.; Gao, Y.; Zhang, B.; Tian, F. A new global optimization algorithm for a class of linear fractional programming. Mathematics 2019, 7, 867. [Google Scholar] [CrossRef] [Green Version]
  23. Sturm, J.F. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 1999, 11, 625–653. [Google Scholar] [CrossRef]
Figure 1. Comparisons of lower bounds and computing time between (6) and (8) with varying dimensions of variables.
Figure 2. Comparisons of lower bounds and computing time between (6) and (8) with varying numbers of negative eigenvalues of Q.
Figure 3. Mean value of computing times with different ( n , m , r ) sets.
Figure 4. Standard deviation of computing times with different ( n , m , r ) sets.
Figure 5. Convergence speed graphs.
Table 1. Performance comparisons of the enhanced SOCP relaxation (SOCP_BB) and the CP relaxation (CP) with $m = n/2$ and $r = n/2$.

(n, m, r) | LB_SOCP | Opv | Nodes | Time1 | LB_CP | Time2
(10, 5, 5) | -0.4693 | -0.4293 | 13 | 0.6629 | -0.4293 | 0.1494
(10, 5, 5) | -0.8524 | -0.8456 | 7 | 0.4113 | -0.8456 | 0.1662
(10, 5, 5) | -1.2073 | -1.2021 | 6 | 0.3530 | -1.2021 | 0.1501
(10, 5, 5) | -0.9140 | -0.9072 | 6 | 0.3618 | -0.9072 | 0.1524
(10, 5, 5) | -0.7263 | -0.6881 | 9 | 0.5096 | -0.6881 | 0.1816
(50, 25, 25) | -0.7163 | -0.6067 | 143 | 17.3670 | -0.6067 | 4.5898
(50, 25, 25) | -0.7214 | -0.5607 | 98 | 14.3159 | -0.5607 | 3.6403
(50, 25, 25) | -0.7798 | -0.6902 | 18 | 3.1592 | -0.6902 | 3.7478
(50, 25, 25) | -1.0089 | -0.9433 | 25 | 4.8718 | -0.9433 | 5.0656
(50, 25, 25) | -1.2317 | -1.2083 | 5 | 1.7368 | -1.2083 | 5.4888
(100, 50, 50) | -1.0565 | -0.9001 | 13 | 7.9829 | -0.9001 | 250.9022
(100, 50, 50) | -0.9554 | -0.7997 | 30 | 14.5958 | -0.7997 | 194.6950
(100, 50, 50) | -1.0330 | -0.8659 | 42 | 18.6046 | -0.8659 | 170.7346
(100, 50, 50) | -1.1277 | -0.9654 | 92 | 31.8563 | -0.9654 | 203.0717
(100, 50, 50) | -0.8737 | -0.6899 | 134 | 40.4807 | -0.6899 | 214.5695
(150, 75, 75) | -1.0552 | -0.7857 | 120 | 75.8879 | -0.7857 | 2.7118 × 10^3
(150, 75, 75) | -1.1371 | -0.9424 | 14 | 18.4429 | -0.9424 | 2.9627 × 10^3
(150, 75, 75) | -1.1903 | -1.0224 | 9 | 17.4320 | -1.0224 | 2.0537 × 10^3
(150, 75, 75) | -1.0528 | -0.8065 | 94 | 60.9915 | -0.8065 | 2.4712 × 10^3
(150, 75, 75) | -1.2053 | -0.9654 | 38 | 32.4540 | -0.9654 | 2.3078 × 10^3
(200, 100, 100) | -1.2726 | -0.9486 | 179 | 187.0444 | - | -
(200, 100, 100) | -1.2393 | -0.9090 | 91 | 109.7111 | - | -
(200, 100, 100) | -1.1200 | -0.8023 | 499 | 487.9172 | - | -
(200, 100, 100) | -1.3677 | -1.0641 | 65 | 80.6285 | - | -
(200, 100, 100) | -1.1613 | -0.8508 | 185 | 192.5417 | - | -
Table 2. Performance comparisons of the enhanced SOCP relaxation (SOCP_BB) and the CP relaxation (CP) with $m = \lfloor n/4 \rfloor$ and $r = n/2$.

(n, m, r) | LB_SOCP | Opv | Nodes | Time1 | LB_CP | Time2
(10, 2, 5) | -0.9896 | -0.9603 | 5 | 0.6060 | -0.9603 | 1.5734
(10, 2, 5) | -0.8272 | -0.7923 | 5 | 0.4432 | -0.7923 | 0.1278
(10, 2, 5) | -0.8851 | -0.8439 | 5 | 0.4632 | -0.8439 | 0.1386
(10, 2, 5) | -1.3820 | -1.3796 | 2 | 0.3577 | -1.3796 | 0.1269
(10, 2, 5) | -1.0857 | -1.0510 | 5 | 0.5859 | -1.0510 | 0.1302
(50, 12, 25) | -1.7140 | -1.6777 | 5 | 3.1569 | -1.6777 | 17.7567
(50, 12, 25) | -1.7958 | -1.7291 | 6 | 2.9847 | -1.7291 | 14.3435
(50, 12, 25) | -1.4860 | -1.3408 | 6 | 2.8456 | -1.3408 | 17.1463
(50, 12, 25) | -1.8775 | -1.7996 | 6 | 2.8528 | -1.7996 | 15.5760
(50, 12, 25) | -1.6074 | -1.5496 | 5 | 2.7021 | -1.5496 | 16.3399
(100, 25, 50) | -1.5728 | -1.3367 | 26 | 19.3778 | -1.3367 | 0.9316 × 10^3
(100, 25, 50) | -1.8812 | -1.7798 | 8 | 11.3498 | -1.7798 | 1.0162 × 10^3
(100, 25, 50) | -1.7331 | -1.5541 | 8 | 11.3676 | -1.5541 | 1.0123 × 10^3
(100, 25, 50) | -2.3014 | -2.2407 | 7 | 12.3799 | -2.2407 | 1.0646 × 10^3
(100, 25, 50) | -1.7297 | -1.5881 | 8 | 11.4687 | -1.5881 | 0.8403 × 10^3
(150, 37, 75) | -2.2469 | -2.1016 | 8 | 32.4534 | - | -
(150, 37, 75) | -2.0106 | -1.7806 | 8 | 32.4796 | - | -
(150, 37, 75) | -2.0413 | -1.8458 | 7 | 32.4477 | - | -
(150, 37, 75) | -1.8265 | -1.5324 | 41 | 67.4079 | - | -
(150, 37, 75) | -1.7998 | -1.5960 | 8 | 36.5955 | - | -
(200, 50, 100) | -1.9059 | -1.5500 | 127 | 282.0235 | - | -
(200, 50, 100) | -2.0762 | -1.7748 | 8 | 57.7022 | - | -
(200, 50, 100) | -1.9991 | -1.6971 | 8 | 55.1303 | - | -
(200, 50, 100) | -1.7675 | -1.2039 | 81 | 202.9095 | - | -
(200, 50, 100) | -2.0341 | -1.7518 | 9 | 63.0475 | - | -
Table 3. Performance comparisons of the enhanced SOCP relaxation (SOCP_BB) and the CP relaxation (CP) with $m = n/2$ and $r = \lfloor n/4 \rfloor$.

(n, m, r) | LB_SOCP | Opv | Nodes | Time1 | LB_CP | Time2
(10, 5, 2) | -0.5666 | -0.5666 | 1 | 0.4499 | -0.5666 | 1.2285
(10, 5, 2) | -0.3186 | -0.2993 | 11 | 0.6461 | -0.2993 | 0.1261
(10, 5, 2) | -0.6271 | -0.6271 | 1 | 0.2275 | -0.6271 | 0.1175
(10, 5, 2) | -0.4529 | -0.4509 | 5 | 0.3733 | -0.4509 | 0.1193
(10, 5, 2) | -1.3076 | -1.3076 | 1 | 0.1667 | -1.3076 | 0.1448
(50, 25, 12) | -0.9542 | -0.9241 | 5 | 1.7985 | -0.9241 | 4.0500
(50, 25, 12) | -0.7892 | -0.7576 | 10 | 2.3463 | -0.7576 | 3.4063
(50, 25, 12) | -1.1374 | -1.0829 | 10 | 2.3780 | -1.0829 | 4.0707
(50, 25, 12) | -0.9922 | -0.9727 | 5 | 1.8784 | -0.9727 | 4.0418
(50, 25, 12) | -1.2224 | -1.2045 | 5 | 1.8484 | -1.2045 | 3.3520
(100, 50, 25) | -0.8275 | -0.7157 | 37 | 15.6906 | -0.7157 | 189.7557
(100, 50, 25) | -0.9672 | -0.8460 | 7 | 7.6640 | -0.8460 | 210.9579
(100, 50, 25) | -0.8787 | -0.7226 | 19 | 9.3551 | -0.7226 | 182.1559
(100, 50, 25) | -0.6688 | -0.4729 | 86 | 27.5411 | -0.4729 | 203.2400
(100, 50, 25) | -1.1576 | -1.0850 | 5 | 5.4911 | -1.0850 | 182.8251
(150, 75, 37) | -0.9557 | -0.7539 | 21 | 22.3072 | -0.7539 | 2.2003 × 10^3
(150, 75, 37) | -1.0578 | -0.8889 | 9 | 15.3286 | -0.8889 | 2.6835 × 10^3
(150, 75, 37) | -1.0882 | -0.8889 | 15 | 19.1659 | -0.8889 | 2.1471 × 10^3
(150, 75, 37) | -0.9347 | -0.7677 | 17 | 20.8678 | -0.7677 | 3.4121 × 10^3
(150, 75, 37) | -0.9359 | -0.7475 | 24 | 22.9717 | -0.7475 | 2.1899 × 10^3
(200, 100, 50) | -0.9704 | -0.7321 | 45 | 67.2220 | - | -
(200, 100, 50) | -1.3138 | -1.1077 | 5 | 26.8615 | - | -
(200, 100, 50) | -0.9420 | -0.6978 | 158 | 180.3040 | - | -
(200, 100, 50) | -0.9998 | -0.7981 | 40 | 62.2748 | - | -
(200, 100, 50) | -0.9535 | -0.6954 | 264 | 298.8807 | - | -
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Xu, Z.; Zhou, J. A Global Optimization Algorithm for Solving Linearly Constrained Quadratic Fractional Problems. Mathematics 2021, 9, 2981. https://0-doi-org.brum.beds.ac.uk/10.3390/math9222981

