# The Auxiliary Problem Principle with Self-Adaptive Penalty Parameter for Multi-Area Economic Dispatch Problem

Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education, Southeast University, Nanjing, Jiangsu 210096, China

Author to whom correspondence should be addressed.

Academic Editor: Hans Kellerer

Received: 28 January 2015 / Revised: 10 April 2015 / Accepted: 13 April 2015 / Published: 22 April 2015

The auxiliary problem principle is a powerful tool for solving the multi-area economic dispatch problem. One of its main drawbacks is that convergence performance depends on the selection of the penalty parameter. In this paper, we propose a self-adaptive strategy that adjusts the penalty parameter based on iterative information; the proposed approach is verified on two test systems. The corresponding simulation results demonstrate that the proposed self-adaptive auxiliary problem principle iterative scheme is robust with respect to the selection of the penalty parameter and achieves a better convergence rate than the traditional auxiliary problem principle method.

The aim of the economic dispatch (ED) problem in power systems is to determine the allocation of real power outputs for the generating units economically while satisfying the corresponding physical and operational constraints [1]. With the growing scale of power systems, the traditional centralized ED algorithm tends to suffer from problems such as the curse of dimensionality and network communication bottlenecks. Furthermore, modern power systems are composed of numerous sub-networks interconnected by tie-lines; each sub-network has its own energy management system (EMS), and most data in a sub-network is treated as confidential and used internally. Thus, it is necessary to devise a distributed algorithm in which each sub-network exchanges only its boundary information with its neighboring sub-networks, rather than exchanging all data with every sub-network in the power system.

Wei Yan proposed a decomposition-coordination interior point method (DIPM) to tackle the multi-area optimal reactive power flow problem [2]. The proposed DIPM is in fact a parallel distributed algorithm. However, DIPM requires a coordinator server that communicates with all sub-systems to exchange the required information, and this large volume of data interchange can lead to communication bottlenecks. The approximate Newton directions method has been applied to the multi-area optimal power flow problem [3,4]; its upside is that it allows the EMS in each sub-network to operate its system independently while obtaining an optimal coordinated yet decentralized solution, but its downside is that it relies on a strict condition that is difficult to satisfy in practice.

The auxiliary problem principle (APP) was originally introduced by Cohen in 1980 [5] and has been extensively applied in the power systems field, for example to daily generation scheduling optimization [6], unit commitment [7] and distributed optimal power flow [8,9]. However, the APP method has a major drawback: it is very sensitive to the value of the penalty parameter, i.e., the convergence performance of APP is poor if the penalty parameter is selected improperly. Moreover, the suitable penalty parameter varies from system to system, so it is impossible to obtain a generic penalty parameter in advance. In this paper, we propose a self-adaptive APP for solving the multi-area economic dispatch (MAED) problem; the key idea is that the penalty parameter is allowed to vary per iteration according to the iteration information, yielding better convergence performance.

The rest of this paper is organized as follows. In section 2, we formulate the multi-area economic dispatch problem. Then, section 3 describes the traditional auxiliary problem principle. After that, we propose a self-adaptive APP iterative scheme in section 4, in which the penalty parameter c is adjusted based on the iteration information. In section 5, two test systems are used to verify the effectiveness of the proposed method, and we conclude in section 6.

The aim of MAED is to minimize the total production cost while satisfying various physical and operational constraints [10]. In this section, we employ a two-area economic dispatch formulation to illustrate the MAED problem.

The objective function F, which represents total generator fuel cost of two areas, can be written as
where P_{mn} is the real power output of the nth generating unit in area m; f_{mn}(P_{mn}) is the fuel cost function of the nth generating unit in area m, usually expressed as a quadratic function when the valve-point effect is neglected, with cost coefficients a_{mn}, b_{mn} and c_{mn}; and N_{m} is the number of generating units in area m. The objective function is minimized subject to the following physical constraints [11,12].

$$\begin{array}{l}\mathrm{min}\text{}F={\displaystyle {\sum}_{m=1}^{2}{\displaystyle {\sum}_{n=1}^{{N}_{m}}{f}_{mn}\left({P}_{mn}\right)}}\\ \text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}={\displaystyle {\sum}_{m=1}^{2}{\displaystyle {\sum}_{n=1}^{{N}_{m}}\left({a}_{mn}{P}_{mn}^{2}+{b}_{mn}{P}_{mn}+{c}_{mn}\right)}}\end{array}$$
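As a concrete illustration, the quadratic objective in Equation (1) can be evaluated directly. The coefficient and output values in the following sketch are made up for demonstration only; they are not the test-system data from [21] or [22].

```python
# Illustration of the objective in Equation (1): total fuel cost of two
# areas, each unit modeled by a quadratic a*P^2 + b*P + c.
# All numbers below are hypothetical, for demonstration only.

def total_fuel_cost(outputs, coeffs):
    """outputs[m][n]: real power of unit n in area m (MW);
    coeffs[m][n]: (a, b, c) cost coefficients of that unit."""
    return sum(
        a * p * p + b * p + c
        for area_p, area_c in zip(outputs, coeffs)
        for p, (a, b, c) in zip(area_p, area_c)
    )

# Two areas with two units each (illustrative numbers).
coeffs = [[(0.01, 2.0, 10.0), (0.012, 1.8, 12.0)],
          [(0.008, 2.2, 9.0), (0.011, 2.1, 11.0)]]
outputs = [[100.0, 120.0], [90.0, 110.0]]
print(total_fuel_cost(outputs, coeffs))  # total cost of all four units
```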

1) Real power balance constraints without considering transmission loss
where P_{mD} is the total demand in area m, and P_{L} is the tie-line real power transfer between area 1 and area 2. P_{L} is positive when power flows from area 1 to area 2 and negative when power flows from area 2 to area 1.

$${\displaystyle {\sum}_{n=1}^{{N}_{1}}{P}_{1n}}={P}_{1D}+{P}_{L}$$

$${\displaystyle {\sum}_{n=1}^{{N}_{2}}{P}_{2n}}={P}_{2D}-{P}_{L}$$

2) Tie-line capacity constraints
where P_{L,max} is the maximum tie-line power flow between area 1 and area 2.

$$\left|{P}_{L}\right|<{P}_{L,\mathrm{max}}$$

3) Real power generation capacity constraints
where P_{mn,min} and P_{mn,max} are the minimum and maximum generation limits of the nth generating unit in area m.

$${P}_{mn,\mathrm{min}}<{P}_{mn}<{P}_{mn,\mathrm{max}}$$
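The constraints (2)-(5) can be collected into a simple feasibility check. The following sketch assumes a two-area system; the function name, data layout and tolerance are illustrative, not from the paper.

```python
# A minimal feasibility check for constraints (2)-(5) of the two-area
# model. Names and the numeric tolerance are illustrative assumptions.

def feasible(p1, p2, p_l, demand1, demand2, pl_max, limits1, limits2,
             tol=1e-6):
    """p1/p2: unit outputs per area; p_l: tie-line flow (area 1 -> area 2);
    limits*: (min, max) per unit. Returns True iff all constraints hold."""
    balance1 = abs(sum(p1) - (demand1 + p_l)) <= tol   # Eq. (2)
    balance2 = abs(sum(p2) - (demand2 - p_l)) <= tol   # Eq. (3)
    tie_ok = abs(p_l) < pl_max                         # Eq. (4)
    gen_ok = all(lo < p < hi for p, (lo, hi) in
                 zip(p1 + p2, limits1 + limits2))      # Eq. (5)
    return balance1 and balance2 and tie_ok and gen_ok

print(feasible([300.0, 300.0], [100.0, 100.0], 40.0,
               560.0, 240.0, 200.0,
               [(50, 400), (50, 400)], [(20, 300), (20, 300)]))
```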

Kim proposed a coarse-grained distributed optimal power flow method in [13]. Inspired by this idea, we add a “dummy bus X” between area 1 and area 2; the corresponding variable for this dummy bus is the tie-line power flow P_{L}. By duplicating the dummy bus as X1 and X2, we obtain two separated systems as shown in Fig. 1. Meanwhile, P_{L} is duplicated as P_{1L} and P_{2L}, i.e., a new consistency constraint ${P}_{1L}-{P}_{2L}=0$ is introduced. The equivalent form of the traditional MAED problem can then be expressed as follows.

$$\begin{array}{ll}\mathrm{min} & F={\displaystyle {\sum}_{m=1}^{2}{\displaystyle {\sum}_{n=1}^{{N}_{m}}{f}_{mn}\left({P}_{mn}\right)}}\\ subject\text{\hspace{0.17em}}to: & {\displaystyle {\sum}_{n=1}^{{N}_{1}}{P}_{1n}={P}_{1D}+{P}_{1L}}\\ & {\displaystyle {\sum}_{n=1}^{{N}_{2}}{P}_{2n}={P}_{2D}-{P}_{2L}}\\ & {P}_{mn,\mathrm{min}}<{P}_{mn}<{P}_{mn,\mathrm{max}}\\ & \left|{P}_{1L}\right|<{P}_{L,\mathrm{max}}\\ & \left|{P}_{2L}\right|<{P}_{L,\mathrm{max}}\\ & {P}_{1L}-{P}_{2L}=0\end{array}$$

For convenience, the compact form of problem (6) is expressed as follows.
where $f\left({x}_{1}\right)$ and $g\left({x}_{2}\right)$ are the total fuel costs in area 1 and area 2, respectively; ${x}_{1}={\left({P}_{1,1},{P}_{1,2},\cdots ,{P}_{1,{N}_{1}},{P}_{1L}\right)}^{T}$ and ${x}_{2}={\left({P}_{2,1},{P}_{2,2},\cdots ,{P}_{2,{N}_{2}},{P}_{2L}\right)}^{T}$ are the vectors of the corresponding control variables in area 1 and area 2; ${\mathrm{\Omega}}_{1}$ and ${\mathrm{\Omega}}_{2}$ are feasible regions representing the operational constraints on x_{1} and x_{2}; and $\mathrm{\Theta}\left({x}_{1},{x}_{2}\right)$ is the newly introduced consistency constraint, where A and B are given matrices.

$$\begin{array}{l}\mathrm{min}\text{}f\left({x}_{1}\right)+g\left({x}_{2}\right)\\ subject\text{\hspace{0.17em}}to:\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{x}_{1}\in {\mathrm{\Omega}}_{1},\text{\hspace{0.17em}}{x}_{2}\in {\mathrm{\Omega}}_{2}\\ \text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}\mathrm{\Theta}\left({x}_{1},{x}_{2}\right)\equiv A{x}_{1}+B{x}_{2}=0\end{array}$$

As shown in (7), adjacent areas are coupled through the consistency constraint $\mathrm{\Theta}\left({x}_{1},{x}_{2}\right)$. In this section, using the augmented Lagrangian relaxation method and the auxiliary problem principle, we discuss how to divide the original problem (7) into independent sub-problems with the consistency constraint decoupled.

In this section, we first employ the augmented Lagrangian relaxation method to deal with the consistency constraint $\mathrm{\Theta}\left({x}_{1},{x}_{2}\right)$. Problem (7) can then be rewritten as follows.
where $\lambda $ is the Lagrangian multiplier for $\mathrm{\Theta}\left({x}_{1},{x}_{2}\right)$; c>0 is the given penalty factor; and $\Vert x\Vert =\sqrt{{x}^{T}x}$ denotes the Euclidean norm of vector x.

$$\begin{array}{l}\mathrm{min}\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}f\left({x}_{1}\right)+g\left({x}_{2}\right)-{\lambda}^{T}\left(A{x}_{1}+B{x}_{2}\right)+\frac{c}{2}{\Vert A{x}_{1}+B{x}_{2}\Vert}^{2}\\ subject\text{\hspace{0.17em}}to:\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{x}_{1}\in {\mathrm{\Omega}}_{1}\\ \text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{x}_{2}\in {\mathrm{\Omega}}_{2}\end{array}$$

In addition, problem (8) is equivalent to solving a saddle-point problem via the following iterative scheme:

$\left({x}_{1}^{k+1},{x}_{2}^{k+1}\right)$ is obtained by solving

$$\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}\left({x}_{1}^{k+1},{x}_{2}^{k+1}\right)=\mathrm{arg}\text{\hspace{0.17em}}\mathrm{min}\left\{f\left({x}_{1}\right)+g\left({x}_{2}\right)-{\lambda}^{T,k}\left(A{x}_{1}+B{x}_{2}\right)+\frac{c}{2}{\Vert A{x}_{1}+B{x}_{2}\Vert}^{2}|{x}_{1}\in {\mathrm{\Omega}}_{1},{x}_{2}\in {\mathrm{\Omega}}_{2}\right\}$$

then, ${\lambda}^{k+1}$ is obtained by updating

$${\lambda}^{k+1}={\lambda}^{k}-c\left(A{x}_{1}^{k+1}+B{x}_{2}^{k+1}\right)$$

In this section, we use the auxiliary problem principle to cope with the non-separable quadratic term $\frac{c}{2}{\Vert A{x}_{1}+B{x}_{2}\Vert}^{2}$. The APP iterative scheme for solving problem (8) can be expressed as follows [14,15].

Step 1. Compute

$$\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{x}_{1}^{k+1}=\mathrm{arg}\text{\hspace{0.17em}}\mathrm{min}\left\{f\left({x}_{1}\right)-{\lambda}^{T,k}A{x}_{1}+\frac{c}{2}{\Vert A{x}_{1}+B{x}_{2}^{k}\Vert}^{2}+\frac{\beta -c}{2}{\Vert A{x}_{1}-A{x}_{1}^{k}\Vert}^{2}|{x}_{1}\in {\mathrm{\Omega}}_{1}\right\}$$

Step 2. Compute

$$\text{\hspace{0.17em}\hspace{0.17em}}{x}_{2}^{k+1}\text{\hspace{0.17em}}=\mathrm{arg}\text{\hspace{0.17em}}\mathrm{min}\left\{g\left({x}_{2}\right)-{\lambda}^{T,k}B{x}_{2}+\frac{c}{2}{\Vert A{x}_{1}^{k}+B{x}_{2}\Vert}^{2}+\frac{\beta -c}{2}{\Vert B{x}_{2}-B{x}_{2}^{k}\Vert}^{2}|{x}_{2}\in {\mathrm{\Omega}}_{2}\right\}$$

Step 3. Update the Lagrange multiplier

$${\lambda}^{k+1}={\lambda}^{k}-\epsilon \left(A{x}_{1}^{k+1}+B{x}_{2}^{k+1}\right)$$

Step 4. Check the stop criterion. If
then stop; otherwise, set k = k + 1 and go to Step 1.

$$\mathrm{max}\left(\Vert {\lambda}^{k}-{\lambda}^{k+1}\Vert ,\Vert A{x}_{1}^{k}-A{x}_{1}^{k+1}\Vert ,\Vert B{x}_{2}^{k}-B{x}_{2}^{k+1}\Vert \right)<\eta $$

where β is a given APP parameter and ε denotes the step size. A sufficient condition for convergence of the APP iterative scheme is given as follows [13].

$$\epsilon <2c<\beta $$

Furthermore, application experience has shown that, for a given penalty parameter c, the selection of the parameters β and ε has a significant influence on the convergence performance of the APP iterative scheme. Ren pointed out that the APP iterative scheme achieves its best convergence performance when $\beta \to 2c$ and $\epsilon =c$ [16]. Moreover, Kim noted that the convergence of APP was reliable with the choice [13]:

$$\epsilon \text{=}\frac{\beta}{2}\text{=}c$$

As a result, for the best convergence performance, we choose APP parameters satisfying $\epsilon \text{=}c\text{=}\beta /2$ in this paper.
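To make Steps 1-4 concrete, the following sketch applies the APP iteration with β = 2c and ε = c to a scalar toy problem; the quadratic functions, box limits and coupling x1 − x2 = 0 are illustrative stand-ins, not the MAED subproblems. Since each subproblem here is a one-dimensional convex quadratic, solving it in closed form and clipping to the box is an exact projection.

```python
# A scalar toy instance of the APP scheme in Steps 1-4, with beta = 2c
# and eps = c as in Equation (16). The problem solved is
#   min (x1-3)^2 + (x2-1)^2  s.t.  x1 - x2 = 0   (A = 1, B = -1),
# whose solution is x1 = x2 = 2. All numbers are illustrative.

def app_solve(c=1.0, eta=1e-8, kmax=1000, box=(0.0, 10.0)):
    x1 = x2 = lam = 0.0
    clip = lambda v: min(max(v, box[0]), box[1])
    for _ in range(kmax):
        # Step 1: x1-subproblem; note beta - c = c since beta = 2c.
        x1_new = clip((6.0 + lam + c * (x1 + x2)) / (2.0 + 2.0 * c))
        # Step 2: x2-subproblem, solved in parallel using the old x1.
        x2_new = clip((2.0 - lam + c * (x1 + x2)) / (2.0 + 2.0 * c))
        # Step 3: multiplier update with step size eps = c.
        lam_new = lam - c * (x1_new - x2_new)
        # Step 4: stop criterion analogous to Equation (15).
        if max(abs(lam_new - lam), abs(x1_new - x1),
               abs(x2_new - x2)) < eta:
            x1, x2, lam = x1_new, x2_new, lam_new
            break
        x1, x2, lam = x1_new, x2_new, lam_new
    return x1, x2, lam

x1, x2, lam = app_solve()
print(round(x1, 4), round(x2, 4), round(lam, 4))  # both areas agree near x = 2
```

Note that Steps 1 and 2 depend only on the previous iterate, so in a real MAED setting each area could solve its subproblem independently and exchange only the boundary quantities.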

Although the intrinsic relationship between the parameters β, ε and c has been discussed in section 3.2, obtaining proper parameters for APP remains a challenge. Application experience has shown that the convergence results depend on the selection of the penalty parameter c; hence, it is important to estimate a proper penalty parameter. Inspired by Bingsheng He’s modified alternating direction method [17], we propose a self-adaptive APP that makes progress in the choice of the penalty parameter.

Bingsheng He proposed a modified alternating direction method to deal with problem (8). More precisely, this method adjusts the penalty parameter c based on the iteration information [17].

Now we give the details of how to adjust the penalty parameter c for better convergence performance. In fact, solving problem (8) is equivalent to finding a zero point of e(w^{k}):
where
${P}_{\mathrm{\Omega}}(\cdot )$ is the projection onto Ω, and $\nabla f(\cdot )$ and $\nabla g(\cdot )$ denote the gradients of $f(\cdot )$ and $g(\cdot )$, respectively.

$$\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}e\left({w}^{k}\right)=\left[\begin{array}{l}{e}_{{x}_{1}}\left({w}^{k}\right)\hfill \\ {e}_{{x}_{2}}\left({w}^{k}\right)\hfill \\ {e}_{\lambda}\left({w}^{k}\right)\hfill \end{array}\right]=\left[\begin{array}{l}{x}_{1}^{k}-{P}_{\mathrm{\Omega}1}\left\{{x}_{1}^{k}-\left[\nabla f\left({x}_{1}^{k}\right)-{A}^{T}{\lambda}^{k}\right]\right\}\hfill \\ {x}_{2}^{k}-{P}_{\mathrm{\Omega}2}\left\{{x}_{2}^{k}-\left[\nabla g\left({x}_{2}^{k}\right)-{B}^{T}{\lambda}^{k}\right]\right\}\hfill \\ A{x}_{1}^{k}+B{x}_{2}^{k}\hfill \end{array}\right]$$

$$w=\left(\begin{array}{l}{x}_{1}\hfill \\ {x}_{2}\hfill \\ \lambda \hfill \end{array}\right),\text{\hspace{0.17em}\hspace{0.17em}}W={\mathrm{\Omega}}_{1}\times {\mathrm{\Omega}}_{2}\times {R}^{r},\text{\hspace{0.17em}}{w}^{k}\in W$$

Equation (17) suggests how to adjust the penalty parameter c. For the sake of balance, we should adjust c such that $\Vert {e}_{{x}_{1}}\left({w}^{k}\right)\Vert \approx \Vert {e}_{{x}_{2}}\left({w}^{k}\right)\Vert \approx \Vert {e}_{\lambda}\left({w}^{k}\right)\Vert $, but this is always difficult to achieve. Fortunately, if we use the alternating direction method to deal with problem (8), we find that [17]:

$${e}_{{x}_{2}}\left({w}^{k}\right)={x}_{2}^{k}-{P}_{\mathrm{\Omega}2}\left\{{x}_{2}^{k}-\left[\nabla g\left({x}_{2}^{k}\right)-{B}^{T}{\lambda}^{k}\right]\right\}=0$$

Now we only need to adjust the penalty parameter c such that $\Vert {e}_{{x}_{1}}\left({w}^{k}\right)\Vert \approx \Vert {e}_{\lambda}\left({w}^{k}\right)\Vert $, which is relatively easy to achieve. More precisely, for an iterate ${w}^{k}$, if $\Vert {e}_{{x}_{1}}\left({w}^{k}\right)\Vert \ll \Vert {e}_{\lambda}\left({w}^{k}\right)\Vert $, we should increase the penalty parameter c; by contrast, if $\Vert {e}_{{x}_{1}}\left({w}^{k}\right)\Vert \gg \Vert {e}_{\lambda}\left({w}^{k}\right)\Vert $, we should decrease it.

Inspired by Bingsheng He’s idea, we employ this notion of balance to adjust the penalty parameter c during APP’s iteration. However, it is difficult to balance $\Vert {e}_{{x}_{1}}\left({w}^{k}\right)\Vert $, $\Vert {e}_{{x}_{2}}\left({w}^{k}\right)\Vert $ and $\Vert {e}_{\lambda}\left({w}^{k}\right)\Vert $ directly, so we first give Lemma 1, which states a basic convergence property of the APP iterative scheme and is useful for the subsequent analysis of how to adjust the penalty parameter.

Let the sequence {w^{k}} be generated by the APP iterative scheme and let w^{*} be the solution of problem (8). Then we have
where
The M-norm ${\Vert x\Vert}_{M}$ is defined as $\sqrt{{x}^{T}Mx}$.

$$\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{\Vert {w}^{k}-{w}^{*}\Vert}_{M}^{2}-{\Vert {w}^{k+1}-{w}^{*}\Vert}_{M}^{2}\ge {\Vert {w}^{k}-{w}^{k+1}\Vert}_{M}^{2}$$

$$\text{\hspace{0.17em}}w=\left(\begin{array}{l}{x}_{1}\hfill \\ {x}_{2}\hfill \\ \lambda \hfill \end{array}\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}\hspace{0.17em}}M=\text{\hspace{0.17em}}\left(\begin{array}{lll}\left(\beta -c\right){A}^{T}A\hfill & -c{A}^{T}B\hfill & 0\hfill \\ -c{B}^{T}A\hfill & \left(\beta -c\right){B}^{T}B\hfill & 0\hfill \\ 0\hfill & 0\hfill & \frac{1}{c}{I}_{m}\hfill \end{array}\right)$$

Solving optimization problems (11) and (12) is equivalent to finding $\left({x}_{1}^{k+1},{x}_{2}^{k+1}\right)$ satisfying [18]
and, using ${\lambda}^{k+1}={\lambda}^{k}-c\left(A{x}_{1}^{k+1}+B{x}_{2}^{k+1}\right)$, we get

$$\begin{array}{l}f\left({x}_{1}\right)-f\left({x}_{1}^{k+1}\right)+{\left({x}_{1}-{x}_{1}^{k+1}\right)}^{T}\\ \left\{\left(\beta -c\right){A}^{T}\left(A{x}_{1}^{k+1}-A{x}_{1}^{k}\right)+c{A}^{T}\left(A{x}_{1}^{k+1}+B{x}_{2}^{k}\right)-{A}^{T}{\lambda}^{k}\right\}\ge 0,\text{\hspace{0.17em}}\forall {x}_{1}\in {\mathrm{\Omega}}_{1}\end{array}$$

$$\begin{array}{l}g\left({x}_{2}\right)-g\left({x}_{2}^{k+1}\right)+{\left({x}_{2}-{x}_{2}^{k+1}\right)}^{T}\\ \left\{\left(\beta -c\right){B}^{T}\left(B{x}_{2}^{k+1}-B{x}_{2}^{k}\right)+c{B}^{T}\left(A{x}_{1}^{k}+B{x}_{2}^{k+1}\right)-{B}^{T}{\lambda}^{k}\right\}\ge 0,\text{\hspace{0.17em}}\forall {x}_{2}\in {\mathrm{\Omega}}_{2}\end{array}$$

$$\begin{array}{l}f\left({x}_{1}\right)+g\left({x}_{2}\right)-f\left({x}_{1}^{k+1}\right)-g\left({x}_{2}^{k+1}\right)+\\ {\left(w-{w}^{k+1}\right)}^{T}\left\{F\left({w}^{k+1}\right)+M\left({w}^{k+1}-{w}^{k}\right)\right\}\ge 0,\text{\hspace{0.17em}}\forall w\in W\end{array}$$

Setting w=w^{*} in (24), we get,
where

$$\begin{array}{l}{\left({w}^{*}-{w}^{k+1}\right)}^{T}M\left({w}^{k+1}-{w}^{k}\right)\\ \ge f\left({x}_{1}^{k+1}\right)+g\left({x}_{2}^{k+1}\right)-f\left({x}_{1}^{*}\right)-g\left({x}_{2}^{*}\right)+{\left({w}^{k+1}-{w}^{*}\right)}^{T}F\left({w}^{k+1}\right)\end{array}$$

$$F\left(w\right)=\left(\begin{array}{l}-{A}^{T}\lambda \hfill \\ -{B}^{T}\lambda \hfill \\ A{x}_{1}+B{x}_{2}\hfill \end{array}\right)\text{\hspace{0.17em}}$$

Using the concept of variational inequality [18], solving optimization problem (8) is equivalent to solving w^{*} which satisfies

$$f\left({x}_{1}\right)+g\left({x}_{2}\right)-f\left({x}_{1}^{*}\right)-g\left({x}_{2}^{*}\right)+{\left(w-{w}^{*}\right)}^{T}F\left({w}^{*}\right)\ge 0,\text{\hspace{0.17em}}\forall w\in W$$

In fact, F is a monotone operator [19], so we have

$${\left({w}^{k+1}-{w}^{*}\right)}^{T}F\left({w}^{k+1}\right)\ge {\left({w}^{k+1}-{w}^{*}\right)}^{T}F\left({w}^{*}\right)$$

Combining (25), (27) and (28), we get

$$\begin{array}{l}{\left({w}^{*}-{w}^{k+1}\right)}^{T}M\left({w}^{k+1}-{w}^{k}\right)\ge 0\\ \Rightarrow {\left({w}^{*}-{w}^{k}+{w}^{k}-{w}^{k+1}\right)}^{T}M\left({w}^{k+1}-{w}^{k}\right)\ge 0\\ \Rightarrow {\left({w}^{*}-{w}^{k}\right)}^{T}M\left({w}^{k+1}-{w}^{k}\right)\ge {\left({w}^{k}-{w}^{k+1}\right)}^{T}M\left({w}^{k}-{w}^{k+1}\right)\end{array}$$

Using (29), we get
This completes the proof of Lemma 1.

$$\begin{array}{l}\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{\Vert {w}^{k}-{w}^{*}\Vert}_{M}^{2}-{\Vert {w}^{k+1}-{w}^{*}\Vert}_{M}^{2}\\ ={\Vert {w}^{k}-{w}^{*}\Vert}_{M}^{2}-{\Vert {w}^{k}-{w}^{*}-({w}^{k}-{w}^{k+1})\Vert}_{M}^{2}\\ =2{\left({w}^{*}-{w}^{k}\right)}^{T}M\left({w}^{k+1}-{w}^{k}\right)-{\Vert {w}^{k}-{w}^{k+1}\Vert}_{M}^{2}\\ \ge 2{\Vert {w}^{k}-{w}^{k+1}\Vert}_{M}^{2}-{\Vert {w}^{k}-{w}^{k+1}\Vert}_{M}^{2}\\ ={\Vert {w}^{k}-{w}^{k+1}\Vert}_{M}^{2}\end{array}$$

Furthermore, (20) can be rewritten as follows.
where

$$\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{\Vert {v}^{k}-{v}^{*}\Vert}_{N}^{2}-{\Vert {v}^{k+1}-{v}^{*}\Vert}_{N}^{2}\ge {\Vert {v}^{k}-{v}^{k+1}\Vert}_{N}^{2}$$

$$v=\left(\begin{array}{l}A{x}_{1}\hfill \\ B{x}_{2}\hfill \\ \lambda \hfill \end{array}\right),\text{\hspace{0.17em}}N=\left(\begin{array}{lll}\left(\beta -c\right)I\hfill & -cI\hfill & 0\hfill \\ -cI\hfill & \left(\beta -c\right)I\hfill & 0\hfill \\ 0\hfill & 0\hfill & \frac{1}{c}{I}_{m}\hfill \end{array}\right)$$
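As a numerical sanity check on the weighting matrix N in Equation (32), the following sketch (assuming scalar coupled variables, so each block is 1×1) confirms that the eigenvalues of N are β − 2c, 1/c and β; hence N is positive definite precisely when β > 2c and c > 0, matching the sufficient condition ε < 2c < β.

```python
# Eigenvalues of the weighting matrix N from Equation (32), assuming
# I = 1 (scalar coupled variables) for illustration.
import numpy as np

def N_matrix(beta, c):
    return np.array([[beta - c, -c,       0.0],
                     [-c,       beta - c, 0.0],
                     [0.0,      0.0,      1.0 / c]])

eigs = np.linalg.eigvalsh(N_matrix(beta=2.5, c=1.0))
print(np.round(np.sort(eigs), 6))  # beta - 2c, 1/c, beta
```

With the paper's choice β = 2c, the smallest eigenvalue β − 2c vanishes, so N is only positive semidefinite; the nonnegative term on the right of (31) then involves only the combinations that survive in Equation (34).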

It is clear from (31) that the sequence {v^{k}} is Fejér monotone, so we get [20]

$$\underset{k\to \infty}{\mathrm{lim}}\Vert A{x}_{1}^{k+1}-A{x}_{1}^{k}\Vert =0,\text{\hspace{0.17em}\hspace{0.17em}}\underset{k\to \infty}{\mathrm{lim}}\Vert B{x}_{2}^{k+1}-B{x}_{2}^{k}\Vert =0,\text{\hspace{0.17em}\hspace{0.17em}}\underset{k\to \infty}{\mathrm{lim}}\Vert {\lambda}^{k+1}-{\lambda}^{k}\Vert =0$$

Interestingly, (33) is consistent with APP’s stop criterion, so if ${\Vert {w}^{k}-{w}^{k+1}\Vert}_{M}^{2}\to 0$, then ${w}^{k+1}$ can be regarded as ${w}^{*}$. Hence, the magnitude of ${\Vert {w}^{k}-{w}^{k+1}\Vert}_{M}^{2}$ measures the error between ${w}^{k}$ and ${w}^{*}$.

According to β=2c which is described in (16), we get,

$${\Vert {w}^{k}-{w}^{k+1}\Vert}_{M}^{2}={\Vert \left(A{x}_{1}^{k+1}-A{x}_{1}^{k}\right)-\left(B{x}_{2}^{k+1}-B{x}_{2}^{k}\right)\Vert}_{c}^{2}+{\Vert {\lambda}^{k+1}-{\lambda}^{k}\Vert}_{\frac{1}{c}}^{2}$$

Now, for the sake of balance, we only need to adjust the penalty parameter c such that ${\Vert \left(A{x}_{1}^{k+1}-A{x}_{1}^{k}\right)-\left(B{x}_{2}^{k+1}-B{x}_{2}^{k}\right)\Vert}_{c}\approx {\Vert {\lambda}^{k+1}-{\lambda}^{k}\Vert}_{\frac{1}{c}}$.

Problem (7) is the two-area economic dispatch model, and it can easily be extended to an N-area system. Denoting the consistency constraint between area i and area j by $\mathrm{\Theta}\left({x}_{i},{x}_{j}\right)\equiv {A}_{ij}{x}_{i}+{B}_{ij}{x}_{j}=0$, the corresponding Lagrangian multiplier and penalty parameter for $\mathrm{\Theta}\left({x}_{i},{x}_{j}\right)$ are defined as ${\lambda}_{ij}$ and c_{ij}. Considering the most complex case, in which each area exchanges active power with all the other areas, a general formulation of the N-area MAED model can be expressed as follows:

$$\begin{array}{l}\mathrm{min}\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{\displaystyle \sum _{i=1}^{n}{f}_{i}\left({x}_{i}\right)+{\displaystyle \sum _{i=1}^{n}{\displaystyle \sum _{j=i+1}^{n}\left(-{\lambda}_{ij}^{T}\left({A}_{ij}{x}_{i}+{B}_{ij}{x}_{j}\right)+\frac{{c}_{ij}}{2}{\Vert {A}_{ij}{x}_{i}+{B}_{ij}{x}_{j}\Vert}^{2}\right)}}}\\ subject\text{\hspace{0.17em}}to:\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{x}_{i}\in {\mathrm{\Omega}}_{i}\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}i=1,2,\cdots ,n\end{array}$$

To solve problem (35), the corresponding APP iterative scheme can be expressed as follows: $\left({x}_{1}^{k+1},{x}_{2}^{k+1},\cdots ,{x}_{n}^{k+1}\right)$ is obtained by solving
and ${\lambda}_{ij}^{k+1}$ is then obtained by the update

$$\begin{array}{l}\left({x}_{1}^{k+1},{x}_{2}^{k+1},\cdots ,{x}_{n}^{k+1}\right)=\mathrm{arg}\text{\hspace{0.17em}}\mathrm{min}\{{\displaystyle \sum _{i=1}^{n}{f}_{i}\left({x}_{i}\right)}+{\displaystyle \sum _{i=1}^{n}{\displaystyle \sum _{j=i+1}^{n}(-{\lambda}_{ij}^{k,T}\left({A}_{ij}{x}_{i}+{B}_{ij}{x}_{j}\right)+\frac{{c}_{ij}}{2}\left({\Vert {A}_{ij}{x}_{i}^{k}+{B}_{ij}{x}_{j}\Vert}^{2}+{\Vert {A}_{ij}{x}_{i}+{B}_{ij}{x}_{j}^{k}\Vert}^{2}\right)}}\\ \text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}+\frac{{\beta}_{ij}-{c}_{ij}}{2}\left({\Vert {A}_{ij}{x}_{i}-{A}_{ij}{x}_{i}^{k}\Vert}^{2}+{\Vert {B}_{ij}{x}_{j}-{B}_{ij}{x}_{j}^{k}\Vert}^{2}\right))|{x}_{i}\in {\mathrm{\Omega}}_{i},\text{\hspace{0.17em}}i=1,2,\cdots ,n\}\end{array}$$

$${\lambda}_{ij}^{k+1}={\lambda}_{ij}^{k}-{c}_{ij}\left({A}_{ij}{x}_{i}^{k+1}+{B}_{ij}{x}_{j}^{k+1}\right),\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}i,j=1,2,\cdots ,n\text{\hspace{0.17em}}and\text{\hspace{0.17em}}j>i$$
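The pairwise multiplier update (37) can be sketched as a loop over all area pairs (i, j) with j > i. The following illustration assumes scalar coupled quantities per pair; the dictionary data layout and all numbers are hypothetical.

```python
# A sketch of the multiplier update (37) over all area pairs (i, j),
# j > i, assuming scalar coupled quantities per pair. Data layout
# and numeric values are illustrative assumptions.

def update_multipliers(lam, c, Ax, Bx):
    """lam[(i,j)]: multiplier; c[(i,j)]: penalty; Ax[(i,j)], Bx[(i,j)]:
    A_ij x_i^{k+1} and B_ij x_j^{k+1}. Returns the updated multipliers."""
    return {pair: lam[pair] - c[pair] * (Ax[pair] + Bx[pair])
            for pair in lam}

lam = {(1, 2): 0.0, (1, 3): 1.0, (2, 3): -0.5}
c = {(1, 2): 2.0, (1, 3): 1.0, (2, 3): 0.5}
Ax = {(1, 2): 0.3, (1, 3): 0.2, (2, 3): 0.1}
Bx = {(1, 2): -0.3, (1, 3): -0.1, (2, 3): 0.1}
print(update_multipliers(lam, c, Ax, Bx))
```

Each multiplier update needs only the boundary quantities of the two areas it couples, so areas i and j can perform it by exchanging data with each other alone.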

Following the proof of Lemma 1 and (31), applying the concept of variational inequality to (36) and using (37), it is easy to obtain
where

$$\text{\hspace{0.17em}\hspace{0.17em}}{\displaystyle \sum _{i=1}^{n}{\displaystyle \sum _{j=i+1}^{n}{\Vert {v}_{ij}^{k}-{v}_{ij}^{*}\Vert}_{{N}_{ij}}^{2}}}\text{\hspace{0.17em}}-{\displaystyle \sum _{i=1}^{n}{\displaystyle \sum _{j=i+1}^{n}{\Vert {v}_{ij}^{k+1}-{v}_{ij}^{*}\Vert}_{{N}_{ij}}^{2}}}\ge {\displaystyle \sum _{i=1}^{n}{\displaystyle \sum _{j=i+1}^{n}{\Vert {v}_{ij}^{k}-{v}_{ij}^{k+1}\Vert}_{{N}_{ij}}^{2}}}$$

$$\text{\hspace{0.17em}}{v}_{ij}=\left(\begin{array}{l}{A}_{ij}{x}_{i}\hfill \\ {B}_{ij}{x}_{j}\hfill \\ {\lambda}_{ij}\hfill \end{array}\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}\hspace{0.17em}}{N}_{ij}=\text{\hspace{0.17em}}\left(\begin{array}{lll}\left({\beta}_{ij}-{c}_{ij}\right)I\hfill & -{c}_{ij}I\hfill & 0\hfill \\ -{c}_{ij}I\hfill & \left({\beta}_{ij}-{c}_{ij}\right)I\hfill & 0\hfill \\ 0\hfill & 0\hfill & \frac{1}{{c}_{ij}}I\hfill \end{array}\right),\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}i,j=1,2,\cdots ,n\text{\hspace{0.17em}}and\text{\hspace{0.17em}}j>i$$

For the sake of balance, we only need to adjust the penalty parameters such that

$${\displaystyle \sum _{i=1}^{n}{\displaystyle \sum _{j=i+1}^{n}{\Vert \left({A}_{ij}{x}_{i}^{k+1}-{A}_{ij}{x}_{i}^{k}\right)-\left({B}_{ij}{x}_{j}^{k+1}-{B}_{ij}{x}_{j}^{k}\right)\Vert}_{{c}_{ij}}^{2}}}\approx {\displaystyle \sum _{i=1}^{n}{\displaystyle \sum _{j=i+1}^{n}{\Vert {\lambda}_{ij}^{k+1}-{\lambda}_{ij}^{k}\Vert}_{\frac{1}{{c}_{ij}}}^{2}}}$$

To achieve the balance in (40) exactly, we would have to create a coordinator server that communicates with all sub-systems to exchange the required information, which is impractical. In fact, the balance in (40) is achieved provided that

$${\Vert \left({A}_{ij}{x}_{i}^{k+1}-{A}_{ij}{x}_{i}^{k}\right)-\left({B}_{ij}{x}_{j}^{k+1}-{B}_{ij}{x}_{j}^{k}\right)\Vert}_{{c}_{ij}}^{2}\approx {\Vert {\lambda}_{ij}^{k+1}-{\lambda}_{ij}^{k}\Vert}_{\frac{1}{{c}_{ij}}}^{2},\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}i,j=1,2,\cdots ,n\text{\hspace{0.17em}}and\text{\hspace{0.17em}}j>i$$

The corresponding self-adaptive strategy for adjusting penalty parameter can be expressed as follows.
where

$${c}_{ij}=\left\{\begin{array}{ll}0.5\frac{{c}_{ij}}{{r}_{ij}^{k+1}}\hfill & if\text{\hspace{0.17em}}{r}_{ij}^{k+1}>10\hfill \\ 2{c}_{ij}\hfill & if\text{\hspace{0.17em}}{r}_{ij}^{k+1}<0.1\hfill \\ {c}_{ij}\hfill & otherwise\hfill \end{array}\right.$$

$${r}_{ij}^{k+1}=\frac{{\Vert \left({A}_{ij}{x}_{i}^{k+1}-{A}_{ij}{x}_{i}^{k}\right)-\left({B}_{ij}{x}_{j}^{k+1}-{B}_{ij}{x}_{j}^{k}\right)\Vert}_{{c}_{ij}}}{{\Vert {\lambda}_{ij}^{k+1}-{\lambda}_{ij}^{k}\Vert}_{\frac{1}{{c}_{ij}}}},\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}i,j=1,2,\cdots ,n\text{\hspace{0.17em}}and\text{\hspace{0.17em}}j>i$$
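The update rule (42)-(43) for a single area pair can be sketched as follows. Scalar coupled quantities are assumed; the weighted norms $\Vert x\Vert_{c}=\sqrt{c}\left|x\right|$ and $\Vert x\Vert_{1/c}=\left|x\right|/\sqrt{c}$ follow the M-norm definition used in Lemma 1, and the small guard `eps` against division by zero is an implementation assumption, not part of the paper.

```python
# A sketch of the self-adaptive rule in Equations (42)-(43) for one
# area pair (i, j), assuming scalar coupled quantities.
import math

def update_penalty(c, dAx, dBx, dlam, lo=0.1, hi=10.0, eps=1e-12):
    """dAx = A_ij(x_i^{k+1} - x_i^k), dBx = B_ij(x_j^{k+1} - x_j^k),
    dlam = lambda_ij^{k+1} - lambda_ij^k. Returns the new penalty c_ij."""
    num = math.sqrt(c) * abs(dAx - dBx)   # ||.||_c weighted norm
    den = abs(dlam) / math.sqrt(c)        # ||.||_{1/c} weighted norm
    r = num / max(den, eps)               # ratio r_ij^{k+1}, Eq. (43)
    if r > hi:        # primal change dominates -> shrink c, Eq. (42)
        return 0.5 * c / r
    if r < lo:        # dual change dominates -> enlarge c
        return 2.0 * c
    return c          # balanced: keep c unchanged

print(update_penalty(1.0, dAx=20.0, dBx=-20.0, dlam=1.0))  # r = 40, so c shrinks
```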

It is clear that the proposed SAPP uses only existing information to adjust the penalty parameter; no extra information is needed. In addition, in comparison with the optimization problem (36), the computational cost of (42) and (43) is slight and can be neglected. Therefore, the proposed SAPP scheme is implementable in practical applications.

In this section, we employ two test systems to verify the effectiveness of the proposed self-adaptive APP iterative scheme (SAPP), comparing it with the traditional APP iterative scheme. Throughout this paper, the stop criterion is set to $\eta ={10}^{-4}$, the initial Lagrangian multipliers are set to ${\lambda}_{ij}=0$, the maximum number of iterations kmax is set to 100, and the relationship between the APP parameters β_{ij}, ε_{ij} and c_{ij} is set to c_{ij}=ε_{ij}=β_{ij}/2.

This system is composed of four generating units, neglecting the valve-point effect and transmission loss; all generating unit data is taken directly from [21]. The total load demand for test system 1 is 800 MW. The generating units are divided into two areas: area 1 includes two generating units and accounts for 70% of the total load demand, while area 2 consists of two generating units and makes up the remaining 30%. The maximum power flow between area 1 and area 2 is 200 MW.

This system consists of ten generators; the valve-point effect and transmission loss are again neglected. The total load demand is set to 2700 MW. Test system 2 is divided into three areas as shown in Figure 2: area 1 consists of four generating units, area 2 of three, and area 3 of four. The load demand in area 1 accounts for 50% of the total load demand; areas 2 and 3 each account for 25%. The generating unit data with three different fuel options is taken from [22]; in this paper, fuel option 1 is used as the fuel cost. In addition, there are three tie-lines in this system, and the limit for each tie-line is set to 100 MW.
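For reference, the partition of test system 2 can be written down as plain data; the names below are illustrative, with the values taken directly from the description above.

```python
# Test system 2 layout (values from the text).
TOTAL_LOAD = 2700.0  # MW
AREAS = {
    1: {"units": 4, "load_share": 0.50},
    2: {"units": 3, "load_share": 0.25},
    3: {"units": 4, "load_share": 0.25},
}
TIE_LINE_LIMIT = 100.0  # MW, the same for all three tie-lines

# Per-area load demand in MW.
area_loads = {a: TOTAL_LOAD * d["load_share"] for a, d in AREAS.items()}
```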

Table 1 and Table 2 report the convergence results for test system 1 and test system 2, respectively. The first column gives the initial value of the penalty parameter. The second and third columns give the number of iterations each scheme needs to satisfy the stopping criterion. The fourth column shows the total cost, i.e., the objective function of the MAED problem. The simulation results demonstrate that APP is sensitive to the initial penalty parameter: if the penalty parameter is chosen too large or too small, APP needs many more iterations to reach the optimum, or fails to converge. By contrast, the proposed SAPP converges stably over a wide range of initial penalty parameters and exhibits a better convergence rate.

The auxiliary problem principle is an attractive approach to the multi-area economic dispatch problem and has been applied in many other areas of power systems. However, its convergence performance is significantly influenced by the selection of the penalty parameter. In this paper, we propose a self-adaptive APP iterative scheme that allows the penalty parameter to increase or decrease according to the iterative information. Simulation results illustrate that the proposed SAPP is superior to the traditional APP in terms of convergence stability.

**Table 1.** Convergence results for test system 1.

| Penalty parameter | Iterations (APP) | Iterations (SAPP) | Total cost ($/h) |
|---|---|---|---|
| 10^{2} | No convergence within 100 iterations | 6 | 7.7549×10^{3} |
| 10^{1} | No convergence within 100 iterations | 6 | |
| 10^{0} | No convergence within 100 iterations | 6 | |
| 10^{-1} | No convergence within 100 iterations | 6 | |
| 10^{-2} | 15 | 7 | |
| 10^{-3} | 45 | 10 | |
| 10^{-4} | No convergence within 100 iterations | 14 | |
| 10^{-5} | No convergence within 100 iterations | 17 | |
| 10^{-6} | No convergence within 100 iterations | 21 | |

**Table 2.** Convergence results for test system 2.

| Penalty parameter | Iterations (APP) | Iterations (SAPP) | Total cost ($/h) |
|---|---|---|---|
| 10^{2} | No convergence within 100 iterations | 46 | 718.0707 |
| 10^{1} | No convergence within 100 iterations | 31 | |
| 10^{0} | No convergence within 100 iterations | 27 | |
| 10^{-1} | No convergence within 100 iterations | 27 | |
| 10^{-2} | No convergence within 100 iterations | 11 | |
| 10^{-3} | 31 | 23 | |
| 10^{-4} | No convergence within 100 iterations | 23 | |
| 10^{-5} | No convergence within 100 iterations | 31 | |
| 10^{-6} | No convergence within 100 iterations | 28 | |

The authors would like to thank the Department of Automation of Southeast University for providing the necessary facilities and encouragement to carry out this work. This work is supported by the State Grid Corporation of China, Major Projects on Planning and Operation Control of Large Scale Grid (SGCC-MPLG022-2012).

This research was carried out in collaboration between the two authors. Shumin Fei defined the research theme. Yaming Ren designed the algorithm, carried out the experiments and analyzed the data. Both authors have read and approved the final manuscript.

The authors declare no conflict of interest.

1. Basu, M. Artificial bee colony optimization for multi-area economic dispatch. Int. J. Electr. Power Energy Syst. **2013**, 49, 181–187.
2. Yan, W.; Wen, L.; Li, W.; Chung, C.Y.; Wong, K.P. Decomposition-coordination interior point method and its application to multi-area optimal reactive power flow. Int. J. Electr. Power Energy Syst. **2011**, 33, 55–60.
3. Conejo, A.J.; Nogales, F.J.; Prieto, F.J. A decomposition procedure based on approximate Newton directions. Math. Program. **2002**, 93, 495–515.
4. Nogales, F.J. A decomposition methodology applied to the multi-area optimal power flow problem. Ann. Oper. Res. **2003**, 120, 99–116.
5. Cohen, G. Auxiliary problem principle and decomposition of optimization problems. J. Optim. Theory Appl. **1980**, 32, 277–305.
6. Batut, J.; Renaud, A. Daily generation scheduling optimization with transmission constraints: A new class of algorithms. IEEE Trans. Power Syst. **1992**, 7, 982–989.
7. Beltran, C.; Heredia, F.J. Unit commitment by augmented Lagrangian relaxation: Testing two decomposition approaches. J. Optim. Theory Appl. **2002**, 112, 295–314.
8. Liu, K.; Li, Y.; Sheng, W. The decomposition and computation method for distributed optimal power flow based on message passing interface (MPI). Int. J. Electr. Power Energy Syst. **2011**, 33, 1185–1193.
9. Chung, K.H.; Kim, B.H.; Hur, D. Distributed implementation of generation scheduling algorithm on interconnected power systems. Energy Convers. Manag. **2011**, 52, 3457–3464.
10. Somasundaram, P.; Swaroopan, N.M.J. Fuzzified particle swarm optimization algorithm for multi-area security constrained economic dispatch. Electr. Power Compon. Syst. **2011**, 39, 979–990.
11. Ramesh, V.; Jayabarathi, T.; Asthana, S.; Mital, S.; Basu, S. Combined hybrid differential particle swarm optimization approach for economic dispatch problems. Electr. Power Compon. Syst. **2010**, 38, 545–557.
12. Fadil, S.; Yazici, A.; Urazel, B. A security-constrained economic power dispatch technique using modified subgradient algorithm based on feasible values and pseudo scaling factor for a power system area including limited energy supply thermal units. Electr. Power Compon. Syst. **2011**, 39, 1748–1768.
13. Kim, B.H.; Baldick, R. Coarse-grained distributed optimal power flow. IEEE Trans. Power Syst. **1997**, 12, 932–939.
14. Losi, A. On the application of the auxiliary problem principle. J. Optim. Theory Appl. **2003**, 117, 377–396.
15. Langenberg, N. Interior point methods for equilibrium problems. Comput. Optim. Appl. **2012**, 53, 453–483.
16. Ren, Y.; Tian, Y.; Wei, H. Parameter evaluation of auxiliary problem principle for large-scale multi-area economic dispatch. Int. Trans. Electr. Energy Syst. **2014**, 24, 1782–1790.
17. He, B.S.; Yang, H.; Wang, S.L. Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities. J. Optim. Theory Appl. **2000**, 106, 337–356.
18. Yuan, X. An improved proximal alternating direction method for monotone variational inequalities with separable structure. Comput. Optim. Appl. **2011**, 49, 17–29.
19. He, B.S.; Shen, Y. On the convergence rate of customized proximal point algorithm for convex optimization and saddle-point problem. Sci. Sin. Math. **2012**, 42, 515–525.
20. He, B.; Yuan, X. Convergence analysis of primal-dual algorithms for a saddle-point problem: From contraction perspective. SIAM J. Imaging Sci. **2012**, 5, 119–149.
21. Chen, C.L.; Chen, N.M. Direct search method for solving economic dispatch problem considering transmission capacity constraints. IEEE Trans. Power Syst. **2001**, 16, 764–769.
22. Manoharan, P.S.; Kannan, P.S.; Baskar, S.; Willjuice Iruthayarajan, M. Evolutionary algorithm solution and KKT based optimality verification to multi-area economic dispatch. Int. J. Electr. Power Energy Syst. **2009**, 31, 365–373.

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).