Article

An Optimal ADMM for Unilateral Obstacle Problems

1 School of Mathematical Sciences, Chongqing Normal University, Chongqing 401331, China
2 CISDI Information Technology Co., Ltd., Chongqing 401120, China
3 School of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
* Author to whom correspondence should be addressed.
Submission received: 13 May 2024 / Revised: 8 June 2024 / Accepted: 13 June 2024 / Published: 19 June 2024
(This article belongs to the Special Issue Variational Inequality and Mathematical Analysis)

Abstract: We propose a new alternating direction method of multipliers (ADMM) with an optimal penalty parameter for the unilateral obstacle problem. We first discretize the problem with the five-point finite difference scheme. We then introduce an auxiliary unknown to form an augmented Lagrangian and apply an ADMM to the corresponding saddle-point problem; eliminating the primal and auxiliary unknowns yields a pure dual algorithm. We analyze the convergence of the proposed method and present a simple strategy for selecting the optimal parameter from the largest and smallest eigenvalues of the iterative matrix. Several numerical experiments confirm the theoretical findings of this study.
MSC:
90C33; 65N06; 65K15; 65F50

1. Introduction

As is well known, many variants of the alternating direction method of multipliers (ADMM) have been successfully applied to different models, such as constrained convex optimization problems [1,2,3,4,5], obstacle problems [6,7], contact problems [8,9,10], and nonlinear complementarity problems [11,12]. The penalty parameter plays an important role in this method, so its choice is crucial for the performance of the ADMM. However, the optimal parameter is not easily obtained in practical applications. Therefore, a good strategy for selecting the optimal parameter is very important for the ADMM [10,13].
Recently, optimal parameter selections for the ADMM have been considered for convex optimization problems with linear equality constraints [14,15,16] and inequality constraints [14,17,18]. In addition, an automatic parameter selection for the ADMM was designed for the unilateral contact problem [8]. That method derives a pure dual form of the ADMM by eliminating the auxiliary and primal unknowns and obtains the optimal parameter from the smallest and largest eigenvalues of the iterative matrix. Unfortunately, the optimal parameter remains difficult to obtain in practice because of the complexity and high cost of the required computations.
The unilateral obstacle problem has many applications, such as lubrication phenomena, fluid flow in porous media, and option pricing [19,20]. In this paper, we propose a new ADMM [7,21] with an optimal parameter for the unilateral obstacle problem [22,23,24,25,26,27,28,29,30,31], based on the extreme eigenvalues of the matrix arising from the finite difference method (FDM) [6,22,26,32]. The main advantage of this method is the simplicity of selecting the optimal parameter [8,33,34,35,36,37]. Moreover, the method saves computational time for large-scale problems because the iterative matrix remains constant throughout the iterations.
The rest of this paper is structured as follows. In Section 2, we present some basic results for the unilateral obstacle problem and develop the new ADMM. In Section 3, we prove the convergence of the method, and in Section 4, we present an optimal parameter selection scheme in detail. In Section 5, some numerical results are presented to show the performance of the method. Conclusions and perspectives are given in Section 6.

2. Unilateral Obstacle Problems and ADMM Algorithm in Finite Dimension

For simplicity, we consider a square domain $\Omega \subset \mathbb{R}^2$ with boundary $\Gamma = \partial\Omega$. For given functions $f(x)$ and $g(x)$ and an obstacle $\psi(x)$, the unilateral obstacle problem can be described as follows: find $u(x)$ such that
$$
\begin{cases}
-\Delta u(x) \ge f(x), & x \in \Omega, \\
u(x) \ge \psi(x), & x \in \Omega, \\
\big(u(x) - \psi(x)\big)\big({-\Delta} u(x) - f(x)\big) = 0, & x \in \Omega, \\
u(x) = g(x), & x \in \Gamma,
\end{cases}
\tag{1}
$$
where $\Delta$ is the Laplace operator. Then, obstacle problem (1) admits a unique solution $u$ [13].
To approximate problem (1) using the FDM, we divide $\Omega$ by a regular mesh consisting of boundary points on $\partial G$ and $n^2$ interior points in $G$. We then obtain a linear complementarity problem (LCP):
$$
\begin{cases}
-\Delta_h u_h(x) \ge f(x), & x \in G, \\
u_h(x) \ge \psi(x), & x \in G, \\
\big(u_h(x) - \psi(x)\big)\big({-\Delta_h} u_h(x) - f(x)\big) = 0, & x \in G, \\
u_h(x) = g(x), & x \in \partial G,
\end{cases}
$$
where $u_h(x)$ is an approximate solution to $u(x)$ at the grid points $x$, and $\Delta_h$ is a difference operator that approximates $\Delta$.
In particular, we use the following five-point difference scheme in $\Omega$ to discretize the first inequality of (1):
$$
\frac{4u_{ij} - u_{i-1,j} - u_{i+1,j} - u_{i,j-1} - u_{i,j+1}}{h^2} \ge f_{i,j}, \quad i, j = 1, 2, \ldots, n, \tag{2}
$$
where $h$ is the mesh width. Substituting the known values of $u_{ij}$ and $f_{ij}$ at the boundary and interior grid points, respectively, into (2), the LCP reads as follows:
$$
u \ge \psi, \quad Au + b \ge 0, \quad (u - \psi)^T (Au + b) = 0, \tag{3}
$$
where $u$ is the vector of the $N$ ($N = n^2$) unknown interior mesh values, $b$ is the given data vector, and $A \in \mathbb{R}^{N \times N}$ is expressed as
$$
A = \begin{pmatrix}
B + 2I & -I & & \\
-I & B + 2I & -I & \\
& \ddots & \ddots & \ddots \\
& & -I & B + 2I
\end{pmatrix},
$$
where $B = \mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{n \times n}$, and $I \in \mathbb{R}^{n \times n}$ is the identity matrix. It is clear that $A$ is diagonally dominant. Moreover, the matrix $A$ has the following special features, which the code sketch below also verifies:
(a) $A$ is symmetric and positive-definite.
(b) $A$ has $n$ diagonal blocks of the form $B + 2I$ and subdiagonal blocks $-I$.
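For readers who wish to experiment, the following is a minimal sketch (ours, not from the paper; the function name build_A is our choice) that assembles $A$ in Python/SciPy using the Kronecker-product form anticipated in Section 4 and checks features (a) and (b):

```python
import numpy as np
import scipy.sparse as sp

def build_A(n):
    """Assemble the N x N (N = n^2) matrix A of LCP (3): block tridiagonal
    with diagonal blocks B + 2I and off-diagonal blocks -I, where
    B = tridiag(-1, 2, -1); equivalently, A = B (x) I + I (x) B."""
    I = sp.identity(n, format="csr")
    B = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
                 offsets=[-1, 0, 1], format="csr")
    return (sp.kron(B, I) + sp.kron(I, B)).tocsr()

A = build_A(4)
assert (A != A.T).nnz == 0                           # (a) symmetric ...
assert np.all(np.linalg.eigvalsh(A.toarray()) > 0)   # ... and positive-definite
```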
To design efficient algorithms using these features for LCP (3), some iterative algorithms have been presented [6,22,26]. Moreover, LCP (3) is equivalent to the following constrained minimization problem: find $u \ge \psi$ such that
$$
F(u) \le F(v), \quad \forall v \ge \psi, \tag{4}
$$
where $F(v) = \frac{1}{2} v^T A v + v^T b$. Consequently, we can solve optimization problem (4) and obtain the solution of LCP (3).
To achieve the solution of problem (4) using the ADMM algorithm [7,21], we introduce an indicator function $I$ as follows:
$$
I(q) = \begin{cases} 0 & \text{if } q \ge \psi, \\ +\infty & \text{otherwise}. \end{cases}
$$
We introduce an auxiliary unknown $p$ and consider the following constrained minimization problem:
$$
F(u) + I(p) \le F(v) + I(q), \quad \forall \{v, q\} \in \mathbb{R}^N \times \mathbb{R}^N, \quad \text{subject to } p - u = 0. \tag{5}
$$
Then, (4) is equivalent to problem (5), and we have the following preliminary result [6,21].
 Lemma 1. 
$u \in \mathbb{R}^N$ is the solution of (4) if and only if there exists $p \ge \psi$ such that $\{u, p\}$ is the solution of (5).
To solve problem (5), we present the saddle-point problem
$$
L_\rho(v, q, \lambda) \ge L_\rho(u, p, \lambda) \ge L_\rho(u, p, \mu), \quad \forall \{v, q, \mu\} \in \mathbb{R}^N \times \mathbb{R}^N \times \mathbb{R}^N, \tag{6}
$$
with the augmented Lagrangian $L_\rho$ defined by
$$
L_\rho(u, p, \lambda) = F(u) + I(p) + \langle \lambda, p - u \rangle + \frac{\rho}{2} \| p - u \|^2, \tag{7}
$$
and the penalty parameter $\rho > 0$. Let $\langle \cdot, \cdot \rangle$ denote the inner product and $\| \cdot \|$ the associated norm. We obtain the following results for problems (5) and (6) [13,21].
 Lemma 2. 
Let $\{u, p, \lambda\} \in \mathbb{R}^N \times \mathbb{R}^N \times \mathbb{R}_+^N$ be the solution of (6). Then, $u$ is the solution of (4), and $p = u$; moreover, $\{u, \lambda\}$ is the unique solution of the following problem:
$$
A u + b = \lambda, \tag{8}
$$
$$
\langle \lambda, u - \psi \rangle = 0. \tag{9}
$$
 Proof. 
From the second inequality in (6) and the definition (7), it follows that
$$
\langle \lambda - \mu, p - u \rangle \ge 0, \quad \forall \mu \in \mathbb{R}^N,
$$
which (taking $\mu = \lambda + (p - u)$) leads to
$$
p = u.
$$
From the first inequality in (6), we have
$$
L_\rho(v, q, \lambda) \ge L_\rho(u, p, \lambda) = F(u), \quad \forall (v, q) \in \mathbb{R}^N \times \mathbb{R}^N.
$$
Taking $q = v$, it follows that
$$
F(u) \le L_\rho(v, v, \lambda) = F(v), \quad \forall v \ge \psi.
$$
Hence, $u$ is the solution of (4). Taking $q = p$ and minimizing the left-hand side over $v$, it follows that
$$
A u = -b + \lambda - \rho (u - p).
$$
Since $p = u$, we obtain (8). Next, we take $v = u$ in the first inequality of (6) and have (after replacing $q$ by $p + t(q - p)$ and letting $t \to 0^+$)
$$
\langle \lambda, q - p \rangle \ge 0, \quad \forall q \ge \psi.
$$
It follows from $p = u$ that
$$
\langle \lambda, q - u \rangle \ge 0, \quad \forall q \ge \psi.
$$
Then, we take $q = \psi$ and $q = 2u - \psi$, and we have
$$
\langle \lambda, u - \psi \rangle \le 0, \quad \langle \lambda, u - \psi \rangle \ge 0,
$$
which proves (9).    □
For given $p^0, \lambda^0 \in \mathbb{R}_+^N$, the ADMM algorithm can now be described as the following three-step pattern:
$$
u^{k+1} = \arg\min_{u \in \mathbb{R}^N} L_\rho(u, p^k, \lambda^k), \tag{10}
$$
$$
p^{k+1} = \arg\min_{p \in \mathbb{R}^N} L_\rho(u^{k+1}, p, \lambda^k), \tag{11}
$$
$$
\lambda^{k+1} = \lambda^k + \rho (p^{k+1} - u^{k+1}), \tag{12}
$$
until some stopping criterion is satisfied. The minimization (10) results in an equilibrium problem (a linear system) that admits a unique solution, while the minimization (11) is solved explicitly by a componentwise projection onto $\{p \ge \psi\}$. Let $(x)_+ = \max(0, x)$ componentwise for $x \in \mathbb{R}^N$. Then, we describe Algorithm 1 as follows [8].
Algorithm 1 ADMM
Step 0. $\lambda^0 \in \mathbb{R}^N$, $p^0 \in \mathbb{R}^N$ and $\rho > 0$ are given; set $k := 0$.
Step 1. Find $u^{k+1} \in \mathbb{R}^N$ by solving
$$
(A + \rho I) u^{k+1} = \rho p^k + \lambda^k - b. \tag{13}
$$
Step 2. Find $p^{k+1} \in \mathbb{R}^N$ by computing
$$
p^{k+1} = u^{k+1} - \frac{1}{\rho} \left[ \lambda^k - \left( \lambda^k - \rho (u^{k+1} - \psi) \right)_+ \right]. \tag{14}
$$
Step 3. Find $\lambda^{k+1} \in \mathbb{R}^N$ by computing
$$
\lambda^{k+1} = \lambda^k - \rho (u^{k+1} - p^{k+1}). \tag{15}
$$
If the stopping criterion is satisfied, then stop; else, go to Step 1 with $k := k + 1$.
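As a concrete illustration, a NumPy/SciPy transcription of Algorithm 1 follows (a sketch under our conventions; the paper's experiments used Matlab, and the function name, starting values, and tolerance below are our choices). Since $A + \rho I$ is constant, it is factorized once, and each Step 1 then costs only two triangular solves:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def admm_obstacle(A, b, psi, rho, tol=1e-5, max_iter=10000):
    """Algorithm 1 for the LCP  u >= psi, Au + b >= 0, (u - psi)^T (Au + b) = 0."""
    N = A.shape[0]
    lam = np.zeros(N)
    p = psi.copy()                     # feasible start p^0 >= psi (our choice)
    u = np.zeros(N)
    solve = splu(sp.csc_matrix(A + rho * sp.identity(N))).solve
    for k in range(max_iter):
        u_new = solve(rho * p + lam - b)                                       # Step 1, (13)
        p = u_new - (lam - np.maximum(lam - rho * (u_new - psi), 0.0)) / rho   # Step 2, (14)
        lam = lam - rho * (u_new - p)                                          # Step 3, (15)
        if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_new):
            return u_new, lam, k + 1
        u = u_new
    return u, lam, max_iter
```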
Letting $\lambda_-^k = \min\left(0, \lambda_+^{k-1} - \rho (u^k - \psi)\right)$ and $(x)_- = \min(0, x)$ componentwise for $x \in \mathbb{R}^N$, Algorithm 1 can be rewritten without the auxiliary unknowns $p^k$ and $p^{k+1}$ as Algorithm 2 [9].
Algorithm 2 ADMM without auxiliary unknown
Step 0. $\lambda_+^0, \lambda_-^0 \in \mathbb{R}^N$ and $\rho > 0$ are given; set $k := 0$.
Step 1. Find $u^{k+1} \in \mathbb{R}^N$ by solving
$$
(A + \rho I) u^{k+1} = \lambda_+^k - \lambda_-^k + \rho \psi - b. \tag{16}
$$
Step 2. Find $\lambda_+^{k+1}, \lambda_-^{k+1} \in \mathbb{R}^N$ by computing
$$
\lambda_+^{k+1} = \left( \lambda_+^k - \rho (u^{k+1} - \psi) \right)_+, \tag{17}
$$
$$
\lambda_-^{k+1} = \left( \lambda_+^k - \rho (u^{k+1} - \psi) \right)_-, \tag{18}
$$
and go to Step 1 with $k := k + 1$.
To obtain the pure dual algorithm for the unilateral obstacle problem, from (16) we obtain
$$
u^{k+1} = A_\rho^{-1} \left( \lambda_+^k - \lambda_-^k + \rho \psi - b \right), \tag{19}
$$
where
$$
A_\rho = A + \rho I. \tag{20}
$$
Eliminating the vector $u^{k+1}$ in (17) and (18) using (19), we obtain Algorithm 3.
Algorithm 3 Dual ADMM
Step 0. $\lambda_+^0, \lambda_-^0 \in \mathbb{R}^N$ and $\rho > 0$ are given; set $a = A_\rho^{-1} (A \psi + b)$ and $k := 0$.
Step 1. Compute $\lambda_+^{k+1}$ and $\lambda_-^{k+1}$ as
$$
\lambda_+^{k+1} = \left( (I - \rho A_\rho^{-1}) \lambda_+^k + \rho A_\rho^{-1} \lambda_-^k + \rho a \right)_+, \tag{21}
$$
$$
\lambda_-^{k+1} = \left( (I - \rho A_\rho^{-1}) \lambda_+^k + \rho A_\rho^{-1} \lambda_-^k + \rho a \right)_-, \tag{22}
$$
and repeat Step 1 with $k := k + 1$.
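The dual iteration is equally short in code. The sketch below follows our reading of (19)–(22), in which the shift vector collects the data and obstacle terms as $a = A_\rho^{-1}(A\psi + b)$ (this reduces to $A_\rho^{-1} b$ when $\psi = 0$); treat that form, and the names below, as our assumptions:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def dual_admm(A, b, psi, rho, tol=1e-5, max_iter=10000):
    """Algorithm 3: iterate only on the multipliers lambda_+ and lambda_-."""
    N = A.shape[0]
    solve = splu(sp.csc_matrix(A + rho * sp.identity(N))).solve  # action of A_rho^{-1}
    a = solve(A @ psi + b)
    lam_p = np.zeros(N)
    lam_m = np.zeros(N)
    u = np.zeros(N)
    for k in range(max_iter):
        # w = (I - rho A_rho^{-1}) lam_p + rho A_rho^{-1} lam_m + rho a, cf. (21)-(22)
        w = lam_p - rho * solve(lam_p - lam_m) + rho * a
        lam_p, lam_m = np.maximum(w, 0.0), np.minimum(w, 0.0)
        u_new = solve(lam_p - lam_m + rho * psi - b)   # recover the primal iterate via (19)
        if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_new):
            return u_new, k + 1
        u = u_new
    return u, max_iter
```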

3. Proof of Convergence

In this section, we prove the convergence of Algorithm 3. From $\min(0, x) = -\max(0, -x)$, it follows that iteration (22) can be rewritten as
$$
-\lambda_-^{k+1} = \left( -(I - \rho A_\rho^{-1}) \lambda_+^k - \rho A_\rho^{-1} \lambda_-^k - \rho a \right)_+. \tag{23}
$$
Since $\| x_+ \| \le \| x \|$ for any $x \in \mathbb{R}^N$, from (21) and (23), we obtain
$$
\| \lambda_+^{k+1} \| \le \left\| (I - \rho A_\rho^{-1}) \lambda_+^k + \rho A_\rho^{-1} \lambda_-^k + \rho a \right\|, \tag{24}
$$
$$
\| \lambda_-^{k+1} \| \le \left\| -(I - \rho A_\rho^{-1}) \lambda_+^k - \rho A_\rho^{-1} \lambda_-^k - \rho a \right\|. \tag{25}
$$
Adding (24) and (25) and using the triangle inequality, we have
$$
\| \lambda_+^{k+1} \| + \| \lambda_-^{k+1} \| \le \left\| (I - \rho A_\rho^{-1}) \lambda_+^k + \rho A_\rho^{-1} \lambda_-^k \right\| + \left\| -(I - \rho A_\rho^{-1}) \lambda_+^k - \rho A_\rho^{-1} \lambda_-^k \right\| + 2 \rho \| a \|. \tag{26}
$$
Let us set
$$
\lambda_\pm^k = \begin{pmatrix} \lambda_+^k \\ \lambda_-^k \end{pmatrix} \quad \text{and} \quad M = \begin{pmatrix} I - \rho A_\rho^{-1} & \rho A_\rho^{-1} \\ -(I - \rho A_\rho^{-1}) & -\rho A_\rho^{-1} \end{pmatrix},
$$
and define the norm $\| \lambda_\pm^k \| = \| \lambda_+^k \| + \| \lambda_-^k \|$. Then, from (26), we obtain
$$
\| \lambda_\pm^{k+1} \| \le \| M \lambda_\pm^k \| + 2 \rho \| a \|.
$$
Therefore, the spectral radius of M determines the convergence of Algorithm 3. To prove the convergence, the following lemma will be needed [8].
 Lemma 3. 
$M$ has the same non-zero eigenvalues as $I - 2 \rho A_\rho^{-1}$.
 Proof. 
Let $M_1 = I - \rho A_\rho^{-1}$ and $M_2 = \rho A_\rho^{-1}$. We have
$$
M = \begin{pmatrix} M_1 & M_2 \\ -M_1 & -M_2 \end{pmatrix}.
$$
We need to prove that $M$ has the same non-zero eigenvalues as $M_1 - M_2 = I - 2 \rho A_\rho^{-1}$. Suppose that $\alpha$ is an eigenvalue of $M$ and $\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix}$ ($\xi_1, \xi_2 \in \mathbb{R}^N$) is its eigenvector. It follows that
$$
M_1 \xi_1 + M_2 \xi_2 = \alpha \xi_1, \tag{27}
$$
$$
-M_1 \xi_1 - M_2 \xi_2 = \alpha \xi_2. \tag{28}
$$
Adding (27) and (28), we obtain $\alpha (\xi_1 + \xi_2) = 0$. For non-zero $\alpha$, we then have $\xi_1 = -\xi_2$, and it follows from (27) and (28) that
$$
(M_1 - M_2) \xi_1 = \alpha \xi_1, \tag{29}
$$
$$
(M_1 - M_2) \xi_2 = \alpha \xi_2. \tag{30}
$$
This proves that every non-zero eigenvalue $\alpha$ of $M$ is an eigenvalue of $M_1 - M_2$; conversely, if $(M_1 - M_2) \xi = \alpha \xi$ with $\alpha \ne 0$, then $\begin{pmatrix} \xi \\ -\xi \end{pmatrix}$ is an eigenvector of $M$ for $\alpha$.    □
From Lemma 3 above, we have the following convergence result.
 Theorem 1. 
ADMM Algorithm 3 converges for any $\rho > 0$. Let $\lambda_M$ and $\lambda_m$ be the largest and smallest eigenvalues of $A$, respectively. Then, the optimal penalty parameter is given by
$$
\rho^* = \sqrt{\lambda_M \lambda_m}. \tag{31}
$$
 Proof. 
To study the eigenvalues of $I - 2 \rho A_\rho^{-1}$, we consider the following sequence:
$$
\mu^{k+1} = (I - 2 \rho A_\rho^{-1}) \mu^k. \tag{32}
$$
Since
$$
A_\rho^{-1} = (A + \rho I)^{-1} = (I + \rho A^{-1})^{-1} A^{-1}, \tag{33}
$$
from (32) and (33), we have
$$
\mu^{k+1} = \left( I - 2 \rho A^{-1} (I + \rho A^{-1})^{-1} \right) \mu^k. \tag{34}
$$
From Lemma 3 and the convergence of the sequence $\{\mu^k\}$, we can obtain the convergence of the sequences $\{\lambda_+^k, \lambda_-^k\}$ and $\{u^k\}$. To obtain the eigenvalues of the iterative matrix $G = I - 2 \rho A^{-1} (I + \rho A^{-1})^{-1}$ from the eigenvalues of $A^{-1}$, let $\lambda_i(A^{-1})$ ($i = 1, \ldots, N$) be the eigenvalues of $A^{-1}$ sorted in ascending order. Then, the eigenvalues of $G$ satisfy
$$
\lambda_i(G) = 1 - \frac{2 \rho \lambda_i(A^{-1})}{1 + \rho \lambda_i(A^{-1})} = \frac{1 - \rho \lambda_i(A^{-1})}{1 + \rho \lambda_i(A^{-1})}. \tag{35}
$$
Because all eigenvalues $\lambda_i(A^{-1})$ and the penalty parameter $\rho$ are positive, from (35), we obtain
$$
-1 < \lambda_i(G) < 1, \quad i = 1, 2, \ldots, N. \tag{36}
$$
Hence, we have proved the convergence of Algorithm 3. Moreover, from (35), it follows that the optimal choice of $\rho$ balances the extreme eigenvalues of $G$:
$$
\frac{1 - \rho \lambda_m(A^{-1})}{1 + \rho \lambda_m(A^{-1})} = -\frac{1 - \rho \lambda_M(A^{-1})}{1 + \rho \lambda_M(A^{-1})},
$$
so that the optimal parameter is given by
$$
\rho^* = \frac{1}{\sqrt{\lambda_M(A^{-1}) \lambda_m(A^{-1})}} = \sqrt{\lambda_M(A) \lambda_m(A)}. \tag{37}
$$
This completes the proof.    □

4. Optimal Penalty Parameter Approximation

In this section, we use Kronecker products to compute the largest and smallest eigenvalues of $A$. For the sake of simplicity, we use the discretization with a regular grid in the square. As the tridiagonal matrix $B = \mathrm{tridiag}(-1, 2, -1)$ plays a central role in our method, we first consider its eigenvalues and eigenvectors [33].
 Lemma 4. 
The eigenvalues of $B$ are given by $\lambda_i = 2 \left( 1 - \cos \frac{i \pi}{n+1} \right)$, $i = 1, 2, \ldots, n$, and the corresponding eigenvectors are $z_i(k) = \sqrt{\frac{2}{n+1}} \sin \frac{i k \pi}{n+1}$, $i, k = 1, 2, \ldots, n$. Let $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$, and let $Z$ be the orthogonal matrix whose columns are the eigenvectors. Then, $B = Z \Lambda Z^T$.

It follows that the largest eigenvalue of $B$ is
$$
\lambda_M = \lambda_n \approx 4,
$$
and the smallest eigenvalue of $B$ is
$$
\lambda_m = \lambda_1 \approx \frac{\pi^2}{(n+1)^2}.
$$
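A quick numerical check of Lemma 4's eigenvalue formula (a sketch; the size n = 8 is arbitrary):

```python
import numpy as np

n = 8
B = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiag(-1, 2, -1)
i = np.arange(1, n + 1)
lam = 2.0 * (1.0 - np.cos(i * np.pi / (n + 1)))          # eigenvalues from Lemma 4
assert np.allclose(np.linalg.eigvalsh(B), lam)           # eigvalsh returns ascending order
```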
As the matrix $A$ can be rewritten in terms of Kronecker products,
$$
A = B \otimes I + I \otimes B = \begin{pmatrix}
2I & -I & & \\
-I & 2I & -I & \\
& \ddots & \ddots & \ddots \\
& & -I & 2I
\end{pmatrix} + \begin{pmatrix}
B & & & \\
& B & & \\
& & \ddots & \\
& & & B
\end{pmatrix},
$$
we refer to the following lemma to compute the eigenvalues of the matrix $A$ [33].
 Lemma 5. 
Let $B = Z \Lambda Z^T$ be the eigendecomposition of $B$, where $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$ and $Z = [z_1, z_2, \ldots, z_n]$ is the orthogonal matrix whose columns are the eigenvectors. Then, the eigendecomposition of $A = B \otimes I + I \otimes B$ is
$$
B \otimes I + I \otimes B = (Z \otimes Z) (\Lambda \otimes I + I \otimes \Lambda) (Z \otimes Z)^T.
$$
Here, $\Lambda \otimes I + I \otimes \Lambda$ is a diagonal matrix whose $((i-1)n + j)$th diagonal entry is the eigenvalue $\lambda_{i,j} = \lambda_i + \lambda_j$ formed from the $(i, j)$th pair of eigenvalues of $B$, and $Z \otimes Z$ is an orthogonal matrix whose $((i-1)n + j)$th column is the corresponding eigenvector $z_i \otimes z_j$.
It follows from Lemma 5 that the largest eigenvalue of $A$ is
$$
\lambda_M = \lambda_n + \lambda_n = 4 \left( 1 - \cos \frac{n \pi}{n+1} \right) \approx 8,
$$
and the smallest eigenvalue of $A$ is
$$
\lambda_m = \lambda_1 + \lambda_1 = 4 \left( 1 - \cos \frac{\pi}{n+1} \right) = 4 \left( 1 - \left( 1 - 2 \sin^2 \frac{\pi}{2(n+1)} \right) \right) \approx \frac{2 \pi^2}{(n+1)^2}.
$$
Therefore, from the eigenvalues of $A$, we obtain the following approximation of the optimal penalty parameter:
$$
\rho^* = \sqrt{\lambda_m(A) \lambda_M(A)} = 4 \sqrt{\left( 1 - \cos \frac{n \pi}{n+1} \right) \left( 1 - \cos \frac{\pi}{n+1} \right)} \approx \frac{4 \pi}{n+1}. \tag{39}
$$
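The optimal parameter is therefore trivial to evaluate. A short sketch comparing the closed form (39) with its asymptotic value $4\pi/(n+1)$ (the function name rho_star is our choice):

```python
import numpy as np

def rho_star(n):
    """Optimal penalty parameter (39) for an n x n interior grid."""
    lam_M = 4.0 * (1.0 - np.cos(n * np.pi / (n + 1)))   # largest eigenvalue of A
    lam_m = 4.0 * (1.0 - np.cos(np.pi / (n + 1)))       # smallest eigenvalue of A
    return np.sqrt(lam_M * lam_m)

for n in (9, 19, 39, 79, 159):
    print(n, rho_star(n), 4.0 * np.pi / (n + 1))        # exact vs. asymptotic
```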

5. Results and Discussion

In this section, we present some preliminary numerical results to assess the performance of the ADMM (Algorithm 1). Note that the semi-smooth Newton method (SSNM) is superlinearly convergent and equivalent to the primal-dual active set algorithm; this approach is widely used for unilateral obstacle problems [10,24,25]. We set the index set $N_h = \{1, 2, \ldots, N\}$ and describe Algorithm 4 as follows.
Algorithm 4 SSNM
Step 0. Given $\lambda^0 \in \mathbb{R}^N$, $u^0 \in \mathbb{R}^N$, and $\rho > 0$, set $k := 0$.
Step 1. Set $\mathcal{A}_k = \left\{ i : \lambda_i^k - \rho \sum_{j=1}^N A_{ij} u_j^k > 0 \right\}$.
Step 2. Find $u^{k+1} \in \mathbb{R}^N$ and $\lambda^{k+1} \in \mathbb{R}^N$ such that
$$
A u^{k+1} + b = \lambda^{k+1}, \tag{40}
$$
where $\lambda_i^{k+1} = \lambda_i^k - \rho \sum_{j=1}^N A_{ij} u_j^{k+1}$ for $i \in \mathcal{A}_k$, and $\lambda_i^{k+1} = 0$ for $i \in N_h \setminus \mathcal{A}_k$.
Step 3. If $\mathcal{A}_{k+1} = \mathcal{A}_k$, then stop; else, return to Step 1 with $k := k + 1$.
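For comparison, here is a sketch of a standard primal–dual active-set step for LCP (3): on the active set we enforce $u_i = \psi_i$, elsewhere $\lambda_i = (Au + b)_i = 0$, with the active set predicted from $\lambda^k - \rho(u^k - \psi)$. This is our rendering of the usual semi-smooth Newton formulation and may differ in detail from Algorithm 4:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def ssnm(A, b, psi, rho=1e4, max_iter=100):
    """Primal-dual active-set sketch for u >= psi, Au + b >= 0, (u - psi)^T (Au + b) = 0."""
    N = A.shape[0]
    I = sp.identity(N, format="csr")
    act = np.ones(N, dtype=bool)              # initial guess: all constraints active
    for k in range(max_iter):
        D = sp.diags(act.astype(float), format="csr")
        M = D + (I - D) @ A                   # identity rows on act, rows of A elsewhere
        u = spsolve(M.tocsc(), np.where(act, psi, -b))
        lam = np.where(act, A @ u + b, 0.0)
        new_act = lam - rho * (u - psi) > 0   # predict the next active set
        if np.array_equal(new_act, act):      # Step 3: active set unchanged -> stop
            return u, lam, k + 1
        act = new_act
    return u, lam, max_iter
```

Note that the system matrix M changes with the active set, which is exactly the cost the constant-matrix ADMM avoids.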
It is well known that the active set $\mathcal{A}_k$ changes during the iterative process of the SSNM, so the matrix of linear system (40) varies at each iteration. In contrast, the iterative matrix of our proposed ADMM is constant throughout the iterations, saving time on matrix computations. This advantage is particularly noticeable for large systems, where $N$ is large.
For the SSNM, the convergence speed also depends on the penalty parameter $\rho$, and the change is very moderate when $\rho$ is large enough; for the examples that we tested, the number of iterations no longer changed for $\rho \ge 10^4$. In all tests, we took $\rho = 10^4$ for the SSNM and $\rho^*$ from (39) for the ADMM. All computations were performed with Matlab R2016b on a Lenovo notebook with a 13th Gen Intel(R) Core(TM) i7-13700H CPU at 2.40 GHz, and the stopping criterion was $\| u^{k+1} - u^k \|_2 \le 10^{-5} \| u^{k+1} \|_2$. Here, we present several numerical examples to show the efficiency of the ADMM and compare the results with those of the SSNM.

5.1. Example 1

We first considered unilateral obstacle problem (1) on the unit square $\Omega = (0, 1) \times (0, 1)$ with $f(x) = -50$ and $\psi(x) = -0.5$. We chose the parameter $\rho = \frac{4 \pi}{81}$ and applied the proposed algorithm to this problem with step size $h = \frac{1}{80}$. We show the numerical solution $u_h$ and the contact zone $u = \psi$ in Figure 1 and Figure 2, respectively. Our numerical results appear to be consistent with those in [22]. Let Iterations and CPU (s) represent the number of iterations and the CPU time in seconds, respectively. We compare the number of iterations and the CPU time with those of the SSNM in Table 1. One can see that the ADMM requires less CPU time than the SSNM [8], because the ADMM keeps a constant matrix while the SSNM requires a matrix factorization in every iteration.
To further investigate the convergence of the proposed ADMM, we chose different values of $\rho$, and Figure 3 shows the number of iterations for different step sizes $h$. The curves in Figure 3 show that the penalty parameter $\rho$ can heavily affect the convergence rate of the ADMM. We can easily use (39) to compute all optimal parameters $\rho^* \approx \frac{4 \pi}{11}, \frac{4 \pi}{21}, \frac{4 \pi}{41}, \frac{4 \pi}{81}, \frac{4 \pi}{161}$ for $h = \frac{1}{10}, \frac{1}{20}, \frac{1}{40}, \frac{1}{80}, \frac{1}{160}$ ($N = (\frac{1}{h} - 1)^2$), respectively. These numerical results also demonstrate the accuracy and effectiveness of the proposed ADMM.
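Putting the pieces together, a hypothetical driver for Example 1, assuming the build_A, rho_star, and admm_obstacle sketches above, our sign reading of the data ($f = -50$, $\psi = -0.5$), and homogeneous boundary values $g = 0$ (which the example does not state):

```python
import numpy as np

n = 79                                   # interior grid, h = 1/(n + 1) = 1/80
h = 1.0 / (n + 1)
A = build_A(n)                           # five-point matrix from Section 2
b = -h**2 * (-50.0) * np.ones(n * n)     # data vector for f = -50 with g = 0 (assumed)
psi = -0.5 * np.ones(n * n)              # constant obstacle
u, lam, iters = admm_obstacle(A, b, psi, rho_star(n))
print(iters)                             # Table 1 reports 30 ADMM iterations at h = 1/80
```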

5.2. Example 2

Let $\Omega = (-1.5, 1.5) \times (-1.5, 1.5)$. We computed obstacle problem (1) with $f(x) = -2$ and $\psi(x) = 0$. For this example, the analytical solution is
$$
u(x) = \begin{cases} \dfrac{r^2 - 1}{2} - \ln r & \text{if } r \ge 1, \\[4pt] 0 & \text{if } r < 1, \end{cases}
$$
where $x = (x_1, x_2) \in \mathbb{R}^2$ and $r = \sqrt{x_1^2 + x_2^2}$ [24,25,28,29,31].
In this example, we considered the same parameters as in Example 1. Figure 4 and Figure 5, respectively, plot the numerical solution $u_h$ and the contact zone with $h = \frac{3}{80}$. We observe that our numerical solution is in good agreement with the analytical solution.
Table 2 presents the number of iterations and the CPU time of both algorithms for different step sizes $h$. Figure 6 shows the convergence speed versus the parameter $\rho$ for different step sizes $h$. We observe that the proposed ADMM again requires less CPU time than the SSNM and that the numerically optimal parameters are in good agreement with the $\rho^*$ obtained using (39).

5.3. Example 3

Let $\Omega = (-1, 1) \times (-1, 1)$. We computed obstacle problem (1) with $\psi(x) = 0$ and
$$
f(x) = \begin{cases} -8 (2 r^2 - r_0^2) & \text{if } r > r_0, \\ -8 r_0^2 (1 - r^2 + r_0^2) & \text{otherwise}. \end{cases}
$$
For any $0 < r_0 < 1$, the analytical solution of this example is
$$
u(x) = \left( \max\{ r^2 - r_0^2, 0 \} \right)^2.
$$
As in [23,27], we took $r_0 = 0.7$ for the computation. Figure 7 and Figure 8, respectively, plot the numerical solution $u_h$ and the contact zone with $h = \frac{1}{40}$. Table 3 displays the number of iterations and the CPU time. Figure 9 shows the convergence speed versus the parameter $\rho$ and the step size $h$. It can be seen that the proposed ADMM is very effective.

5.4. Example 4

Finally, we considered unilateral obstacle problem (1) in $\Omega = (-1.5, 1.5) \times (-1.5, 1.5)$ with $f = 0$ and
$$
\psi(x) = \begin{cases} \sqrt{1 - r^2} & \text{if } r \le 1, \\ -1 & \text{if } r > 1. \end{cases}
$$
The exact solution for this example is
$$
u(x) = \begin{cases} \sqrt{1 - r^2} & \text{if } r \le r_0, \\[4pt] -\dfrac{r_0^2 \ln(r/2)}{\sqrt{1 - r_0^2}} & \text{otherwise}, \end{cases}
$$
where $r_0 = 0.6979651482$ [24,25,28,29,31].
Figure 10 and Figure 11 plot the numerical solution $u_h$ and the contact zone, respectively. Table 4 displays the number of iterations and the CPU time. Figure 12 shows the impact of the parameter $\rho$ on the convergence of the ADMM. It can be seen that the numerical results are also in good agreement with our theoretical analysis.

6. Conclusions

In this paper, we proposed an ADMM with an optimal penalty parameter for unilateral obstacle problems. To solve the LCP arising from the FDM discretization of free boundary problems, we used an optimal penalty parameter selection to improve the performance of the ADMM. The numerical results demonstrate that the proposed ADMM is feasible and efficient. In the future, we will apply this method to bilateral obstacle problems.

Author Contributions

Conceptualization, methodology, Writing—review & editing, S.Z.; funding acquisition, project administration, X.C.; software, validation, G.X.; formal analysis, Writing—review, R.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Chongqing Natural Science Foundation of China (Grant no. cstc2020jcyj-msxmX0066) and the Scientific and Technological Research Project of the Chongqing Municipal Education Commission of China (Grant Nos. KJZD-K202100503 and KJZD-K202300506).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

Xiyong Cui was employed by CISDI Information Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. CISDI Information Technology Co., Ltd. had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Dolgopolik, M.V. The alternating direction method of multipliers for finding the distance between ellipsoids. Appl. Math. Comput. 2021, 409, 126387. [Google Scholar] [CrossRef]
  2. Gonçalves, M.L.N. On the pointwise iteration-complexity of a dynamic regularized ADMM with over-relaxation stepsize. Appl. Math. Comput. 2018, 336, 315–325. [Google Scholar] [CrossRef]
  3. Luenberger, D.G.; Ye, Y.Y. Linear and Nonlinear Programming; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  4. Shen, Y.; Xu, M.H. On the O(1/t) convergence rate of Ye-Yuan’s modified alternating direction method of multipliers. Appl. Math. Comput. 2014, 226, 367–373. [Google Scholar] [CrossRef]
  5. Yu, S.T.; Peng, J.J.; Tang, Z.G.; Peng, Z.Y. Iterative methods to solve the constrained Sylvester equation. AIMS Math. 2023, 8, 21531–21553. [Google Scholar] [CrossRef]
  6. Zhang, J.J.; Zhang, J.L.; Ye, W.Z. An inexact alternating direction method of multipliers for the solution of linear complementarity problems arising from free boundary problems. Numer. Algor. 2018, 78, 895–910. [Google Scholar] [CrossRef]
  7. Zhang, S.G.; Guo, N.X. Uzawa block relaxation method for free boundary problem with unilateral obstacle. Int. J. Comput. Math. 2021, 98, 671–689. [Google Scholar] [CrossRef]
  8. Chorfi, A.; Koko, J. Alternating direction method of multiplier for the unilateral contact problem with an automatic penalty parameter selection. Appl. Math. Model. 2020, 78, 706–723. [Google Scholar] [CrossRef]
  9. Essoufi, E.H.; Koko, J.; Zafrar, A. Alternating direction method of multiplier for a unilateral contact problem in electro-elastostatics. Comput. Math. Appl. 2017, 78, 1789–1802. [Google Scholar] [CrossRef]
  10. Koko, J. Uzawa block relaxation method for the unilateral contact problem. J. Comput. Appl. Math. 2011, 235, 2343–2356. [Google Scholar] [CrossRef]
  11. He, J.W.; Lei, C.C.; Shi, C.Y.; Vong, S.W. An inexact alternating direction method of multipliers for a kind of nonlinear complementarity problems. Numer. Algebra Control Optim. 2021, 11, 353–362. [Google Scholar] [CrossRef]
  12. He, J.W.; Zheng, H.; Vong, S. Improved inexact alternating direction methods for a class of nonlinear complementarity problems. East Asian J. Appl. Math. 2022, 12, 125–144. [Google Scholar] [CrossRef]
  13. Glowinski, R. Numerical Methods for Nonlinear Variational Problems; Springer: Berlin/Heidelberg, Germany, 2008; pp. 166–194. [Google Scholar]
  14. Ghadimi, E.; Teixeira, A.; Shames, I.; Johansson, M. Optimal parameter selection for the alternating direction method of multipliers (ADMM): Quadratic problems. IEEE Trans. Automat. Control 2015, 60, 644–658. [Google Scholar] [CrossRef]
  15. Teixeira, A.; Ghadimi, E.; Shames, I.; Sandberg, H.; Johansson, M. The ADMM algorithm for distributed quadratic problems: Parameter selection and constraint preconditioning. IEEE Trans. Signal Process 2016, 64, 290–305. [Google Scholar] [CrossRef]
  16. Zhang, R.Y.; White, J.K. GMRES-accelerated ADMM for quadratic objectives. SIAM J. Optim. 2018, 28, 3025–3056. [Google Scholar] [CrossRef]
  17. Mavromatis, C.; Foti, M.; Vavalis, M. Auto-tuned weighted-Penalty parameter ADMM for distributed optimal power flow. IEEE Trans. Power Syst. 2021, 36, 970–978. [Google Scholar] [CrossRef]
  18. You, Z.; Zhang, H.S. A prediction-correction ADMM for multistage stochastic variational inequalities. J. Optimiz. Theory Appl. 2023, 199, 693–731. [Google Scholar] [CrossRef]
  19. Khandelwal, R.; Porwal, K.; Singla, R. Supremum-norm a posteriori error control of quadratic discontinuous Galerkin methods for the obstacle problem. Comput. Math. Appl. 2023, 137, 147–171. [Google Scholar] [CrossRef]
  20. Gaddam, S.; Gudi, T.; Porwal, K. Two new approaches for solving elliptic obstacle problems using discontinuous Galerkin methods. BIT 2022, 62, 89–124. [Google Scholar] [CrossRef]
  21. Glowinski, R. Variational Methods for the Numerical Solution of Nonlinear Elliptic Problems; SIAM: Philadelphia, PA, USA, 2015; pp. 157–200. [Google Scholar]
  22. Xue, L.; Cheng, X.L. An algorithm for solving the obstacle problems. Comput. Math. Appl. 2004, 48, 1651–1657. [Google Scholar] [CrossRef]
  23. Cicuttin, M.; Ern, A.; Gudi, T. Hybrid high-order methods for the elliptic obstacle problem. J. Sci. Comput. 2020, 83, 8. [Google Scholar] [CrossRef]
  24. De Dios, B.A.; Gudi, T.; Porwal, K. Pointwise a posteriori error analysis of a discontinuous Galerkin method for the elliptic obstacle problem. IMA J. Numer. Anal. 2022, 43, 2377–2412. [Google Scholar] [CrossRef]
  25. Khandelwal, R.; Porwal, K. Pointwise a posteriori error analysis of quadratic finite element method for the elliptic obstacle problem. J. Comput. Appl. Math. 2022, 412, 114364. [Google Scholar] [CrossRef]
  26. Lin, Y.; Cryer, C.W. An alternating direction implicit algorithm for the solution of linear complementarity problems arising from free boundary problems. Appl. Math. Optim. 1985, 13, 1–17. [Google Scholar] [CrossRef]
  27. Nochetto, R.H.; Siebert, K.G.; Veeser, A. Pointwise a posteriori error control for elliptic obstacle problems. Numer. Math. 2003, 95, 163–195. [Google Scholar] [CrossRef]
  28. Xu, C.; Shi, D.Y. Superconvergence analysis of low order nonconforming finite element methods for variational inequality problem with displacement obstacle. Appl. Math. Comput. 2019, 348, 1–11. [Google Scholar] [CrossRef]
  29. Wang, F.; Eichholz, J.; Han, W.M. A two level algorithm for an obstacle problem. Appl. Math. Comput. 2018, 330, 65–76. [Google Scholar] [CrossRef]
  30. Weiss, A.; Wohlmuth, B.I. A posteriori error estimator for obstacle problems. SIAM. J. Sci. Comput. 2010, 32, 2627–2658. [Google Scholar] [CrossRef]
  31. Zhao, M.; Wu, H.; Xiong, C. Error analysis of HDG approximations for elliptic variational inequality: Obstacle problem. Numer. Algor. 2019, 81, 445–463. [Google Scholar] [CrossRef]
  32. Cao, F.J.; Yuan, D.F. A class of HOC finite difference method for elliptic interface problems with imperfect contact. AIMS Math. 2022, 8, 5789–5815. [Google Scholar] [CrossRef]
  33. Demmel, J.W. Applied Numerical Linear Algebra; SIAM: Philadelphia, PA, USA, 1997; pp. 267–278. [Google Scholar]
  34. He, B.S.; Liao, L.Z.; Wang, S.L. Self-adaptive operator splitting methods for monotone variational inequalities. Numer. Math. 2003, 94, 715–737. [Google Scholar] [CrossRef]
  35. Zhang, S.G. Two projection methods for the solution of Signorini problems. Appl. Math. Comput. 2018, 326, 75–86. [Google Scholar] [CrossRef]
  36. Zhang, S.G.; Li, X.L. A self-adaptive projection method for contact problems with the BEM. Appl. Math. Model. 2018, 55, 145–159. [Google Scholar] [CrossRef]
  37. Zhang, S.G.; Li, X.L.; Ran, R.S. Self-adaptive projection and boundary element methods for contact problems with Tresca friction. Commun. Nonlinear Sci. Numer. Simulat. 2019, 68, 72–85. [Google Scholar] [CrossRef]
Figure 1. The numerical solution (example 1).
Figure 2. The contact zone (example 1).
Figure 3. Number of iterations (example 1).
Figure 4. The numerical solution (example 2).
Figure 5. The contact zone (example 2).
Figure 6. Number of iterations (example 2).
Figure 7. The numerical solution (example 3).
Figure 8. The contact zone (example 3).
Figure 9. Number of iterations (example 3).
Figure 10. The numerical solution (example 4).
Figure 11. The contact zone (example 4).
Figure 12. Number of iterations (example 4).
Table 1. Performances of ADMM with $\rho^*$ and SSNM with $\rho = 10^4$ (example 1).

h        ADMM (Algorithm 1)         SSNM (Algorithm 4)
         Iterations   CPU (s)       Iterations   CPU (s)
1/10     20           0.0242        7            0.0245
1/20     16           0.0260        8            0.0331
1/40     22           0.0497        9            0.0723
1/80     30           0.3891        9            0.7741
1/160    55           3.6584        9            8.9213
Table 2. Performances of ADMM with $\rho^*$ and SSNM with $\rho = 10^4$ (example 2).

h        ADMM (Algorithm 1)         SSNM (Algorithm 4)
         Iterations   CPU (s)       Iterations   CPU (s)
3/10     23           0.0248        6            0.0311
3/20     18           0.0270        8            0.0333
3/40     23           0.0485        11           0.0869
3/80     38           0.3730        13           1.0514
3/160    63           3.4886        14           12.5551
Table 3. Performances of ADMM with $\rho^*$ and SSNM with $\rho = 10^4$ (example 3).

h        ADMM (Algorithm 1)         SSNM (Algorithm 4)
         Iterations   CPU (s)       Iterations   CPU (s)
1/5      27           0.0365        6            0.0386
1/10     32           0.0426        8            0.0431
1/20     53           0.0857        11           0.0976
1/40     79           0.6882        17           1.3378
1/80     116          4.7102        9            24.5476
Table 4. Performances of ADMM with $\rho^*$ and SSNM with $\rho = 10^4$ (example 4).

h        ADMM (Algorithm 1)         SSNM (Algorithm 4)
         Iterations   CPU (s)       Iterations   CPU (s)
3/10     21           0.0285        5            0.0292
3/20     26           0.0326        5            0.0336
3/40     43           0.0691        8            0.0768
3/80     75           0.6283        12           1.0372
3/160    133          5.4149        21           18.8991