Article

Combining Nyström Methods for a Fast Solution of Fredholm Integral Equations of the Second Kind

by
Domenico Mezzanotte
1,
Donatella Occorsio
1,2,* and
Maria Grazia Russo
1
1
Department of Mathematics, Computer Science and Economics, University of Basilicata, Viale dell’Ateneo Lucano 10, 85100 Potenza, Italy
2
C.N.R. National Research Council of Italy, IAC Institute for Applied Computing “Mauro Picone”, Via P. Castellino 111, 80131 Napoli, Italy
*
Author to whom correspondence should be addressed.
Submission received: 1 October 2021 / Revised: 12 October 2021 / Accepted: 16 October 2021 / Published: 20 October 2021
(This article belongs to the Special Issue Orthogonal Polynomials and Special Functions)

Abstract

In this paper, we propose a suitable combination of two different Nyström methods, both using the zeros of the same sequence of Jacobi polynomials, in order to approximate the solution of Fredholm integral equations on [−1, 1]. The proposed procedure is cheaper than a Nyström scheme based on only one of the two methods. Moreover, we can successfully manage functions with possible algebraic singularities at the endpoints and kernels with different pathologies. The error of the method is comparable with the error of best polynomial approximation in suitable spaces of functions equipped with the weighted uniform norm. The convergence and the stability of the method are proved, and some numerical tests confirming the theoretical estimates are given.

1. Introduction

Consider the following Fredholm Integral Equation (FIE) of the second kind:
$$f(y) = g(y) + \mu \int_{-1}^{1} f(x)\, k(x,y)\, \rho(x)\, dx, \quad y \in (-1,1), \qquad (1)$$
where ρ is a Jacobi weight, g and k are known functions defined on (−1, 1) and (−1, 1)², respectively, μ is a nonzero real parameter, and f is the unknown. The kernel k is also allowed to be weakly singular along the diagonal y = x, or it can show other pathologies, such as a highly oscillating behaviour or a "nearly singular" factor. The nature of the kernel, together with the presence of the Jacobi weight inside the integral, implies that the solution f can have a singular behaviour at the endpoints of the interval (see for instance [1,2]); therefore, the natural choice is to study Equation (1) in suitable spaces of weighted functions.
A large number of papers on numerical methods for FIEs is available in the literature, and in the last two decades considerable attention has been devoted, in the case under consideration, to the so-called "global approximation methods". They are essentially based on polynomial approximation and use zeros of orthogonal polynomials (see for instance [3,4] and the references therein). There are also examples of global approximation methods based on equispaced points [5], which are especially convenient when the data are available in discrete form, but are limited to the unweighted case (see [5,6]). Global methods, more or less, behave as the best polynomial approximation of the solution in suitable spaces of weighted functions; consequently, this approximation strategy performs very well for very smooth functions. On the other hand, these methods can converge slowly if the functions are not smooth or if the kernel has pathologies as described above.
Recently, in [4], a new method based on the collocation approach with the so-called Extended Interpolation was proposed in order to reduce the computational effort in cases where the solution is not very smooth [7]. Moreover, the method delays the computation of the zeros of high-degree polynomials, which becomes progressively unstable as the degree increases.
Following a similar idea, we propose here a Mixed Nyström scheme based on product quadrature rules of the "extended" type, i.e., based on the zeros of the polynomial p_{m+1}(w) p_m(w), where {p_n(w)}_n denotes the sequence orthonormal with respect to a suitable fixed Jacobi weight w [8]. Using a Nyström scheme instead of a collocation one brings several benefits. First of all, we will use here only one sequence of orthonormal polynomials, while in the collocation method in [4] two different sequences ({p_n(w)}_n and {p_n(φ²w)}_n, where φ(x) = √(1−x²)) were required to obtain optimal Lebesgue constants for the interpolating operators. Secondly, due to its nature, the Nyström strategy with a fixed kernel provides a faster convergence than the collocation approach if the right-hand side g in (1) is not very smooth. Third, a Nyström method based on a product rule allows treating kernel functions having different pathologies.
The idea of the proposed scheme is the following. Consider two sequences of Nyström interpolants {f_m}_m and {f̃_{2m+1}}_m: the first is based on the product rule using the zeros of p_m(w), and the second on the extended product rule using the zeros of p_{m+1}(w) p_m(w). Each step of the procedure consists in solving, for a fixed m, the first Nyström method and then using the coefficients defining the corresponding Nyström interpolant f_m to save about one half of the computation of the coefficients of the Extended Nyström interpolant f̃_{2m+1}. In other words, we assume that the two interpolants "coincide" at the zeros of p_m(w). Under this assumption, one solves only a linear system of order m + 1, instead of one of order 2m + 1, to obtain an approximant comparable with f̃_{2m+1} from the convergence point of view.
The outline of the paper is the following. Section 2 contains preliminary notations and a collection of tools needed to introduce the main results stated in Section 3. There, we present the extended Nyström method and, based on it, the combined algorithm that allows us to solve Equation (1) faster. In Section 4, we provide some computational details for the effective construction of the linear systems. Section 5 concerns the numerical tests, while Section 6 contains the proofs.

2. Notation and Preliminary Results

Throughout the paper, we use C to denote a positive constant, which may take different values at different occurrences, and we write C ≠ C(n, f, …) to mean that C > 0 is independent of n, f, ….

2.1. Function Spaces

Let u be the Jacobi weight defined as follows:
$$u(x) = v^{\gamma,\delta}(x) := (1-x)^{\gamma}(1+x)^{\delta}, \quad x \in (-1,1), \quad \gamma, \delta \geq 0.$$
We denote by C_u the Banach space of the locally continuous functions f on (−1, 1) such that the following limit conditions are satisfied:
$$\lim_{x \to 1^-} f(x)\, u(x) = 0, \ \text{if } \gamma > 0, \qquad \text{and} \qquad \lim_{x \to -1^+} f(x)\, u(x) = 0, \ \text{if } \delta > 0. \qquad (2)$$
C_u is equipped with the following norm:
$$\|f\|_{C_u} := \|f u\|_\infty = \max_{x \in [-1,1]} |f(x)|\, u(x).$$
The limit conditions (2) are necessary in order to assure the following (see for instance [9]):
$$\lim_{m \to \infty} E_m(f)_u = 0, \quad \forall f \in C_u,$$
where, denoting by ℙ_m the space of all algebraic polynomials of degree at most m,
$$E_m(f)_u := \inf_{P \in \mathbb{P}_m} \|f - P\|_{C_u}$$
is the error of best polynomial approximation of f ∈ C_u.
For smoother functions, we consider the following Sobolev-type subspaces of C_u of order r ∈ ℕ:
$$W_r(u) = \left\{ f \in C_u : f^{(r-1)} \in AC((-1,1)), \ \|f^{(r)} \varphi^r u\|_\infty < \infty \right\}, \quad r \in \mathbb{N},$$
where AC denotes the space of all functions that are absolutely continuous on every closed subset of (−1, 1), and φ(x) := √(1−x²). W_r(u) is equipped with the following norm:
$$\|f\|_{W_r(u)} := \|f\|_{C_u} + \|f^{(r)} \varphi^r u\|_\infty.$$
Finally, by L log⁺ L we denote the set of all measurable functions f defined on (−1, 1) such that
$$\int_{-1}^{1} |f(x)| \left(1 + \log^+ |f(x)|\right) dx < \infty, \qquad \log^+ f(x) = \log\left(\max\{1, f(x)\}\right).$$
For any bivariate function k ( x , y ) , we will write k y (or k x ) in order to regard k as the univariate function in the only variable x (or y).

2.2. Solvability of Equation (1) in C_u

Let us set the following:
$$(Kf)(y) = \mu \int_{-1}^{1} f(x)\, k(x,y)\, \rho(x)\, dx, \qquad \rho = v^{\sigma,\tau}, \quad \sigma, \tau > -1. \qquad (3)$$
Equation (1) can be rewritten in the following form:
( I K ) f = g ,
where I denotes the identity operator.
In order to provide sufficient conditions assuring the compactness of the operator K : C_u → C_u, we need to recall the following definition. For any f ∈ C_u and 0 < r ∈ ℕ, the following modulus of smoothness was defined in [10]:
$$\Omega_\varphi^r(f,t)_u = \sup_{0 < h \leq t} \left\| \left(\Delta_{h\varphi}^r f\right) u \right\|_{I_{hr}},$$
where
$$I_{hr} = \left[-1 + 4h^2r^2,\ 1 - 4h^2r^2\right]$$
and
$$\Delta_{h\varphi}^r f(x) = \sum_{i=0}^{r} (-1)^i \binom{r}{i} f\left(x + (r - 2i)\,\frac{h\varphi(x)}{2}\right).$$
For any f ∈ W_r(u), the modulus Ω_φ^r(f,t)_u can be estimated by means of the following inequality (see for instance [11], p. 314):
$$\Omega_\varphi^r(f,t)_u \leq \mathcal{C} \sup_{0 < h \leq t} h^r \left\| f^{(r)} \varphi^r u \right\|_{I_{hr}}, \qquad \mathcal{C} \neq \mathcal{C}(f,t).$$
We are now able to state a theorem that guarantees the solvability of Equation (1) in the space C_u; its proof is given in Section 6.
Theorem 1.
Under the following assumptions, with 0 < s < r and C ≠ C(f):
$$\sup_{|y|\le 1} \left\| \frac{k_y\, \rho}{u} \right\|_{L^1([-1,1])} < \infty, \qquad \sup_{t>0} \frac{\Omega_\varphi^r(Kf,t)_u}{t^s} \le \mathcal{C}\, \|f\|_{C_u}, \qquad (4)$$
the operator K : C_u → C_u is compact. Therefore, if ker(I − K) = {0}, then for any g ∈ C_u, Equation (1) admits a unique solution in C_u.
Remark 1.
We observe that (4) is satisfied also when the kernel k(x,y) in (3) is weakly singular. For instance, k(x,y) = |x − y|^μ, −1 < μ < 0, fulfils the assumption with s = 1 + μ (see [11], Lemma 4.1, p. 322, and [3], pp. 3–4).

2.3. Product Integration Rules

Denote by {p_m(w)}_{m∈ℕ} the system of orthonormal polynomials with respect to the Jacobi weight w = v^{α,β}, α, β > −1, where
$$p_m(w,x) = \gamma_m(w)\, x^m + \text{terms of lower degree}, \qquad \gamma_m(w) > 0.$$
Let {x_k := x_{m,k}(w) : k = 1, …, m} be the zeros of p_m(w), and let
$$\lambda_{m,k} := \lambda_{m,k}(w) = \left( \sum_{i=0}^{m-1} p_i^2(w, x_k) \right)^{-1}, \quad k = 1, \ldots, m,$$
be the Christoffel numbers with respect to w.
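As an illustration, the nodes and Christoffel numbers can be generated by the Golub–Welsch algorithm, i.e., from the eigendecomposition of the symmetric tridiagonal Jacobi matrix of the three-term recurrence. The following Python sketch (ours, not from the paper) fixes the Chebyshev weight w = v^{−1/2,−1/2}, whose recurrence coefficients are known in closed form:

```python
import numpy as np

def golub_welsch_chebyshev(m):
    """Gauss nodes and Christoffel numbers for the Jacobi weight w = v^{-1/2,-1/2}
    (Chebyshev, first kind): the nodes are the eigenvalues of the Jacobi matrix,
    the Christoffel numbers are mu_0 times the squared first components of the
    normalized eigenvectors."""
    # three-term recurrence coefficients for the Chebyshev weight
    diag = np.zeros(m)                 # a_k = 0 by symmetry of the weight
    off = np.full(m - 1, 0.5)          # b_k = 1/2 for k >= 2 ...
    if m > 1:
        off[0] = np.sqrt(0.5)          # ... except b_1 = 1/sqrt(2)
    J = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    nodes, V = np.linalg.eigh(J)
    mu0 = np.pi                        # integral of the weight over [-1, 1]
    lam = mu0 * V[0, :] ** 2           # Christoffel numbers lambda_{m,k}
    return nodes, lam

nodes, lam = golub_welsch_chebyshev(6)
# closed forms for this weight: x_k = cos((2k-1)pi/(2m)), lambda_{m,k} = pi/m
exact = np.cos((2 * np.arange(6, 0, -1) - 1) * np.pi / 12)
assert np.allclose(nodes, exact) and np.allclose(lam, np.pi / 6)
```

For a general Jacobi weight the recurrence coefficients change, but the eigenvalue/first-component structure of the computation is the same.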
For the integral
$$I(f,y) = \int_{-1}^{1} f(x)\, k(x,y)\, \rho(x)\, dx,$$
consider the following product integration rule:
$$I(f,y) = \sum_{k=1}^{m} C_k(y)\, f(x_k) + e_m^I(f,y) =: I_m(f,y) + e_m^I(f,y), \qquad (5)$$
where
$$C_k(y) = \lambda_{m,k} \sum_{i=0}^{m-1} p_i(w, x_k)\, M_i(y), \qquad M_i(y) = \int_{-1}^{1} p_i(w,x)\, k(x,y)\, \rho(x)\, dx, \quad i = 0, 1, \ldots, m-1. \qquad (6)$$
According to a consolidated terminology, we will refer to the product integration rule in (5) as the Ordinary Product Rule, only to distinguish it from the extended product integration rule introduced below. Moreover, we recall that the {M_i(y)}_{i∈ℕ} are known as Modified Moments [12] (see, e.g., [13]).
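To make the construction concrete, the following sketch (our illustration, not the paper's code) assembles the coefficients C_k(y) from the Modified Moments in the unweighted Legendre case w = ρ = v^{0,0}, with a smooth model kernel k(x,y) = cos(xy); the moments are computed here by a fine Gauss–Legendre rule rather than by recurrence. Since the rule integrates the weighted interpolant exactly, it is exact whenever f is a polynomial of degree at most m − 1:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Legendre case: w = rho = v^{0,0}; orthonormal p_i = sqrt((2i+1)/2) * P_i.
m = 4
nodes, gw = L.leggauss(m)            # Gauss-Legendre nodes x_k and weights;
                                     # the weights are the Christoffel numbers

def p(i, x):
    c = np.zeros(i + 1); c[i] = 1.0
    return np.sqrt((2 * i + 1) / 2) * L.legval(x, c)

def kernel(x, y):                    # smooth model kernel (illustration only)
    return np.cos(x * y)

# modified moments M_i(y) = int p_i(x) k(x,y) dx, here by a 60-point Gauss rule
fx, fw = L.leggauss(60)
def moment(i, y):
    return np.sum(fw * p(i, fx) * kernel(fx, y))

def C(k, y):                         # product-rule coefficients C_k(y) in (6)
    return gw[k] * sum(p(i, nodes[k]) * moment(i, y) for i in range(m))

f = lambda x: x ** 3                 # degree <= m-1, so the rule is exact
y = 0.3
approx = sum(C(k, y) * f(nodes[k]) for k in range(m))
exact = np.sum(fw * f(fx) * kernel(fx, y))
assert abs(approx - exact) < 1e-12
```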
With respect to the stability and convergence of the previous rule, the following result, useful for our aim, can be deduced from [9], p. 348 (see also [14]).
Theorem 2.
Under the following assumptions:
$$\sup_{|y|\le 1} \left\| \frac{k_y\,\rho}{u} \right\|_{L\log^+L} < \infty, \qquad \sup_{|y|\le 1} \left\| \frac{k_y\,\rho}{w\varphi} \right\|_{L^1([-1,1])} < \infty, \qquad \frac{w\varphi}{u} \in L^1([-1,1]),$$
for any f ∈ C_u, we obtain the following bounds:
$$\sup_m \sup_{|y|\le 1} |I_m(f,y)| \le \mathcal{C}\, \|f\|_{C_u}$$
and
$$\sup_{|y|\le 1} |e_m^I(f,y)| \le \mathcal{C}\, E_{m-1}(f)_u,$$
with C ≠ C(m, f).
In addition to the previous well-known product rule, we recall the following Extended Product Rule (see [8]), based on the zeros of p_m(w) p_{m+1}(w). Denoting by {y_k := x_{m+1,k}(w) : k = 1, …, m+1} the zeros of p_{m+1}(w), the extended formula is as follows:
$$I(f,y) = \sum_{k=1}^{m} A_k(y)\, f(x_k) + \sum_{k=1}^{m+1} B_k(y)\, f(y_k) + e_{2m+1}^{\Sigma}(f,y) =: \Sigma_{2m+1}(f,y) + e_{2m+1}^{\Sigma}(f,y), \qquad (8)$$
where
$$A_k(y) = \frac{\lambda_{m,k}}{p_{m+1}(w,x_k)} \sum_{i=0}^{m-1} p_i(w,x_k)\, M_i^{(m+1)}(y), \qquad (9)$$
$$B_k(y) = \frac{\lambda_{m+1,k}}{p_m(w,y_k)} \sum_{i=0}^{m} p_i(w,y_k)\, M_i^{(m)}(y), \qquad (10)$$
and
$$M_i^{(h)}(y) = \int_{-1}^{1} p_i(w,x)\, p_h(w,x)\, k(x,y)\, \rho(x)\, dx, \qquad h \in \{m, m+1\},$$
are known as the Generalized Modified Moments (GMMs).
With respect to the stability and convergence of the extended quadrature rule (8), we recall the following.
Theorem 3
([8], Theorem 3.2). Under the following assumptions:
$$\sup_{|y|\le 1} \left\| \frac{k_y\,\rho}{w\varphi} \right\|_{L\log^+L} < \infty, \qquad \sup_{|y|\le 1} \left\| \frac{k_y\,\rho}{u} \right\|_{L^1([-1,1])} < \infty, \qquad \frac{w}{u} \in L^\infty([-1,1]),$$
for any f ∈ C_u, we obtain the following bounds:
$$\sup_m \sup_{|y|\le 1} |\Sigma_{2m+1}(f,y)| \le \mathcal{C}\, \|f\|_{C_u} \qquad (12)$$
and
$$\sup_{|y|\le 1} |e_{2m+1}^{\Sigma}(f,y)| \le \mathcal{C}\, E_{2m}(f)_u,$$
with C ≠ C(m, f).

2.4. A Nyström Method

In order to approximate the solution of (1), we recall the following weighted Nyström method based on the product quadrature rule (5). Introducing the sequence {(K_m f)(y)}_m, where
$$(K_m f)(y) = \mu\, I_m(f,y),$$
we proceed to solve in C_u the following finite dimensional equation in the unknown f_m:
$$(I - K_m) f_m = g, \quad m = 1, 2, \ldots. \qquad (14)$$
By multiplying both sides of the previous equation by the weight function u and collocating at the nodes {x_i}_{i=1}^m, we obtain the following linear system of order m in the unknowns {c_i := f_m(x_i) u(x_i)}_{i=1}^m:
$$\sum_{k=1}^{m} \left[ \delta_{ik} - \mu\, \frac{u(x_i)}{u(x_k)}\, C_k(x_i) \right] c_k = (g u)(x_i), \quad i = 1, \ldots, m, \qquad (15)$$
where { C k ( y ) } k is defined in (6).
By setting
$$\mathbf{c}_m = [c_1, \ldots, c_m]^T, \qquad \mathbf{r}_m = [r_1, \ldots, r_m]^T, \ \text{with } r_i = (gu)(x_i),$$
$$D_m(i,k) = \delta_{ik} - \mu\, \frac{u(x_i)}{u(x_k)}\, C_k(x_i),$$
the system (15) can be rewritten in the following matrix form:
$$D_m\, \mathbf{c}_m = \mathbf{r}_m. \qquad (16)$$
Details on the matrix D m will be given in Section 4.
Once the solution {c_i^*}_{i=1}^m of the system (16) has been determined, the Ordinary Nyström interpolant takes the form:
$$(f_m u)(y) = (gu)(y) + \mu\, u(y) \sum_{k=1}^{m} \frac{c_k^*}{u(x_k)}\, C_k(y). \qquad (17)$$
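The whole pipeline — coefficients, collocated system, interpolant — can be sketched on a toy equation with a degenerate kernel k(x,y) = xy, μ = 1, g(y) = y and w = ρ = v^{0,0}, u ≡ 1, whose exact solution is f(y) = 3y (an illustrative choice of ours, not an example from the paper):

```python
import numpy as np
from numpy.polynomial import legendre as L

m = 6
x, lam = L.leggauss(m)               # nodes x_k and Christoffel numbers

def p(i, t):                         # orthonormal Legendre polynomials
    c = np.zeros(i + 1); c[i] = 1.0
    return np.sqrt((2 * i + 1) / 2) * L.legval(t, c)

fx, fw = L.leggauss(40)              # fine rule for the modified moments
def C(k, y):                         # product-rule coefficients C_k(y) in (6)
    return lam[k] * sum(p(i, x[k]) * np.sum(fw * p(i, fx) * fx * y)
                        for i in range(m))

# Nystrom system (I - K_m) c = r, collocated at the quadrature nodes
D = np.eye(m) - np.array([[C(k, x[i]) for k in range(m)] for i in range(m)])
c = np.linalg.solve(D, x)            # r_i = g(x_i) = x_i since u = 1

# Nystrom interpolant f_m(y) = g(y) + sum_k c_k^* C_k(y)
y = 0.37
f_m = y + sum(c[k] * C(k, y) for k in range(m))
assert abs(f_m - 3 * y) < 1e-10      # exact solution is f(y) = 3y
```

The degenerate kernel is integrated exactly by the product rule here, so the Nyström interpolant reproduces the exact solution up to rounding.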
Concerning the convergence of this method, the following theorem can be obtained by using weighted arguments in [15]:
Theorem 4.
Under the assumptions of Theorems 1 and 2, for any g ∈ W_r(u), the finite dimensional Equation (14) admits a unique solution f_m^* ∈ C_u such that
$$\|(f^* - f_m^*)\, u\|_\infty \le \mathcal{C}\, \frac{\|f\|_{W_r(u)}}{m^r}, \qquad (18)$$
with r ≥ 1 and C ≠ C(m, f).
In what follows, we will refer to this Nyström method as the Ordinary Nyström Method (ONM).

3. Main Results

3.1. The Extended Nyström Method

Now, we introduce a Nyström method based on the extended product quadrature rule (8), calling it the Extended Nyström Method (ENM). Proceeding in analogy with the ONM, we begin by constructing the sequence {(K̃_{2m+1} f)(y)}_m:
$$(\tilde{K}_{2m+1} f)(y) = \mu\, \Sigma_{2m+1}(f,y).$$
Then, we solve in C_u the following extended finite dimensional equation in the unknown f̃_{2m+1}:
$$(I - \tilde{K}_{2m+1})\, \tilde{f}_{2m+1} = g, \quad m = 1, 2, \ldots. \qquad (19)$$
By multiplying both sides of (19) by the weight function u and collocating at the quadrature nodes {x_i}_{i=1}^m ∪ {y_i}_{i=1}^{m+1}, we obtain the following linear system:
$$a_i - \mu\, u(x_i) \left[ \sum_{k=1}^{m} \frac{a_k}{u(x_k)}\, A_k(x_i) + \sum_{k=1}^{m+1} \frac{b_k}{u(y_k)}\, B_k(x_i) \right] = r_i, \quad i = 1, \ldots, m,$$
$$b_i - \mu\, u(y_i) \left[ \sum_{k=1}^{m} \frac{a_k}{u(x_k)}\, A_k(y_i) + \sum_{k=1}^{m+1} \frac{b_k}{u(y_k)}\, B_k(y_i) \right] = s_i, \quad i = 1, \ldots, m+1,$$
where
$$a_i = (\tilde{f}_{2m+1}\, u)(x_i), \quad r_i = (gu)(x_i), \quad i = 1, \ldots, m,$$
$$b_i = (\tilde{f}_{2m+1}\, u)(y_i), \quad s_i = (gu)(y_i), \quad i = 1, \ldots, m+1.$$
The coefficients {A_k(y)}_{k=1}^m and {B_k(y)}_{k=1}^{m+1} are defined in (9) and (10). The linear system of order 2m + 1 can be rewritten in the following more convenient block-matrix form:
$$\begin{bmatrix} D_{1,1} & D_{1,2} \\ D_{2,1} & D_{2,2} \end{bmatrix} \begin{bmatrix} \mathbf{a}_m \\ \mathbf{b}_{m+1} \end{bmatrix} = \begin{bmatrix} \mathbf{r}_m \\ \mathbf{s}_{m+1} \end{bmatrix}, \qquad (21)$$
with D_{1,1} ∈ ℝ^{m×m}, D_{1,2} ∈ ℝ^{m×(m+1)}, D_{2,1} ∈ ℝ^{(m+1)×m} and D_{2,2} ∈ ℝ^{(m+1)×(m+1)}. Details on the effective construction of the system will be provided in Section 4.
Denoting by [a_m^{*T}, b_{m+1}^{*T}]^T the solution vector of (21), the extended Nyström interpolant takes the following form:
$$(\tilde{f}_{2m+1}\, u)(y) = (gu)(y) + \mu\, u(y) \left[ \sum_{k=1}^{m} \frac{a_k^*}{u(x_k)}\, A_k(y) + \sum_{k=1}^{m+1} \frac{b_k^*}{u(y_k)}\, B_k(y) \right]. \qquad (22)$$
With respect to the convergence, we are able to prove the following:
Theorem 5.
Under the assumptions of Theorems 1 and 3, for any g ∈ W_r(u), the finite dimensional Equation (19) admits a unique solution f̃_{2m+1} ∈ C_u such that the following error estimate holds:
$$\|(f - \tilde{f}_{2m+1})\, u\|_\infty \le \mathcal{C}\, \frac{\|f\|_{W_r(u)}}{(2m)^r}, \qquad \mathcal{C} \ne \mathcal{C}(m,f). \qquad (23)$$

3.2. The Mixed Nyström Method

We observe that, under suitable assumptions, both sequences { f m } m and { f ˜ 2 m + 1 } m uniformly converge to the solution f of (1). Thus, it makes sense to consider a mixed scheme that combines the two methods previously introduced.
Therefore, the Mixed Nyström Method (MNM) consists of two steps:
  • For a given m, solve the linear system of order m:
$$D_m\, \mathbf{c}_m = \mathbf{r}_m \qquad (24)$$
    and construct the Nyström interpolant f_m by means of its solution c_m^*; in other words,
$$(f_m u)(y) = (gu)(y) + \mu\, u(y) \sum_{k=1}^{m} \frac{c_k^*}{u(x_k)}\, C_k(y).$$
  • By assuming a_m = c_m^* in the linear system (21), we obtain
$$\begin{bmatrix} D_{1,1} & D_{1,2} \\ D_{2,1} & D_{2,2} \end{bmatrix} \begin{bmatrix} \mathbf{c}_m^* \\ \tilde{\mathbf{b}}_{m+1} \end{bmatrix} = \begin{bmatrix} \mathbf{r}_m \\ \mathbf{s}_{m+1} \end{bmatrix},$$
    from which we deduce the reduced system of order m + 1 in the only unknown b̃_{m+1}:
$$D_{2,2}\, \tilde{\mathbf{b}}_{m+1} = \mathbf{s}_{m+1} - D_{2,1}\, \mathbf{c}_m^*. \qquad (25)$$
    Denoting by b̃_{m+1}^* its solution, we construct the interpolant f̂_{2m+1} ≈ f̃_{2m+1}:
$$(\hat{f}_{2m+1}\, u)(y) = (gu)(y) + \mu\, u(y) \left[ \sum_{k=1}^{m} \frac{c_k^*}{u(x_k)}\, A_k(y) + \sum_{k=1}^{m+1} \frac{\tilde{b}_k^*}{u(y_k)}\, B_k(y) \right]. \qquad (26)$$
  • Restart the procedure, determining f_{4m} and f̂_{8m+1}, and so on.
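The reduction step can be illustrated on a generic block system: if the first block of unknowns is frozen at its value from the full solve, the reduced system returns exactly the remaining block. The matrices below are random stand-ins (our illustration), not actual Nyström matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
# well-conditioned random blocks standing in for D_{1,1}, ..., D_{2,2}
D11 = np.eye(m) + 0.1 * rng.standard_normal((m, m))
D12 = 0.1 * rng.standard_normal((m, m + 1))
D21 = 0.1 * rng.standard_normal((m + 1, m))
D22 = np.eye(m + 1) + 0.1 * rng.standard_normal((m + 1, m + 1))
D = np.block([[D11, D12], [D21, D22]])
rhs = rng.standard_normal(2 * m + 1)

full = np.linalg.solve(D, rhs)               # extended method: order 2m+1
a_star, b_star = full[:m], full[m:]

# mixed method: freeze the first block (the working assumption a_m = c_m^*)
# and solve only the reduced (m+1) x (m+1) system
b_tilde = np.linalg.solve(D22, rhs[m:] - D21 @ a_star)
assert np.allclose(b_tilde, b_star)
```

In the actual method the frozen block is c_m^*, which only approximates a_m^*; Theorem 6 quantifies the resulting perturbation of b̃_{m+1}.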
Iterating couples of steps of the types (24)–(26) produces the following mixed sequence of Nyström interpolants:
$$\bar{f}_n(x) = \begin{cases} f_{2^n}(x), & n = 2, 4, \ldots, \\ \hat{f}_{2^n+1}(x), & n = 3, 5, \ldots. \end{cases} \qquad (27)$$
Denoting by ‖A‖_∞ = max_{1≤i≤n} Σ_{j=1}^n |a_{ij}| the infinity norm of a matrix A ∈ ℝ^{n×n}, the uniform convergence of {f̄_n}_n to the solution f ∈ C_u of (1) is stated in the following:
Theorem 6.
Under the assumptions of Theorems 1, 4 and 5, and supposing that the matrix D_{2,2} in (25) is invertible with sup_n ‖D_{2,2}^{−1}‖_∞ < ∞, for any g ∈ W_r(u) the sequence {f̄_n}_n uniformly converges to f ∈ C_u, and the following error estimate holds:
$$\|(f - \bar{f}_n)\, u\|_\infty \le \mathcal{C}\, \frac{\|f\|_{W_r(u)}}{(2^n)^r}, \qquad \mathcal{C} \ne \mathcal{C}(n,f). \qquad (28)$$
Remark 2.
By comparing (23) with (28), we see that the sequences obtained by the extended and the mixed Nyström methods uniformly converge to f ∈ C_u with the same rate of convergence.
By implementing the Mixed Nyström Method, we gain several advantages, chiefly the reduction in the sizes of the involved linear systems.
More precisely, at each step of the mixed scheme, setting m = 2^n, we solve two systems of order m and m + 1. The resulting error is comparable with that obtained by solving the two systems of order m and 2m + 1 required by the Ordinary Nyström Method.
Therefore, the computational cost of the global procedure is strongly reduced. Indeed, if we compute the solutions of the linear systems by Gaussian Elimination, we save about 77.8% of the long operations and 33.2% of the function evaluations.
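These percentages can be checked asymptotically under a simple accounting (ours): Gaussian elimination on a system of order n costs about (2/3)n³ long operations, and each system of order n requires n evaluations of the data at the quadrature nodes:

```python
m = 10 ** 6                    # large m to approach the asymptotic ratios

# long operations: systems of order m and m+1 (mixed) versus m and 2m+1
# (ordinary), each costing ~ (2/3) n^3 by Gaussian elimination
ops = lambda n: 2 * n ** 3 / 3
saved_ops = 1 - (ops(m) + ops(m + 1)) / (ops(m) + ops(2 * m + 1))
assert abs(saved_ops - 7 / 9) < 1e-5       # 7/9 ~ 77.8%

# evaluations at the nodes: m + (m+1) versus m + (2m+1)
saved_evals = 1 - (2 * m + 1) / (3 * m + 1)
assert abs(saved_evals - 1 / 3) < 1e-6     # 1/3 ~ 33.3%
```

The dominant saving comes from replacing the order-(2m+1) solve by an order-(m+1) one, since the cubic cost of elimination amplifies the size reduction.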
Furthermore, the difficulties in the evaluation of the modified moments for large degrees are delayed, as is the instability in the computation of the zeros of high-degree Jacobi polynomials by the Golub–Welsch algorithm.

4. Computational Details

Given two integers h, k, h < k, in this section we use the short notation h : k to denote the set {h, h+1, …, k}. Denoting by I_m the identity matrix of order m, the matrix of the linear system (16) takes the following form:
$$D_m := I_m - K_m,$$
with
$$K_m := \mu\, U_m\, M_m\, P_m\, \Lambda_m\, U_m^{-1},$$
$$U_m = \mathrm{diag}\left(u(x_1), \ldots, u(x_m)\right), \qquad \Lambda_m = \mathrm{diag}\left(\lambda_{m,1}, \ldots, \lambda_{m,m}\right),$$
$$M_m(i,j) = M_j(x_i), \ i = 1:m, \ j = 0:m-1, \qquad P_m(j,k) = p_j(w, x_k), \ j = 0:m-1, \ k = 1:m.$$
It is well known that the system (16) and the finite dimensional Equation (14) are equivalent (see for instance ([16], Theorem 12.7, p. 202)).
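As a sanity check (ours, with random stand-ins for the moment, polynomial, and weight data), the factorized form of K_m reproduces the entrywise definition D_m(i,k) = δ_{ik} − μ (u(x_i)/u(x_k)) C_k(x_i):

```python
import numpy as np

rng = np.random.default_rng(1)
m, mu = 5, 0.7
u = 0.5 + rng.random(m)              # stand-in values u(x_i) > 0
M = rng.standard_normal((m, m))      # M(i,j) = M_j(x_i)
P = rng.standard_normal((m, m))      # P(j,k) = p_j(w, x_k)
lam = rng.random(m)                  # Christoffel numbers lambda_{m,k}

# factorized form D_m = I_m - mu * U_m M_m P_m Lambda_m U_m^{-1}
U, Lam = np.diag(u), np.diag(lam)
D_fact = np.eye(m) - mu * U @ M @ P @ Lam @ np.diag(1 / u)

# entrywise form via C_k(y) = lambda_k sum_j p_j(x_k) M_j(y), at y = x_i
C = lambda k, i: lam[k] * sum(P[j, k] * M[i, j] for j in range(m))
D_entry = np.array([[(i == k) - mu * u[i] / u[k] * C(k, i)
                     for k in range(m)] for i in range(m)])
assert np.allclose(D_fact, D_entry)
```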
Regarding the block matrix of the system (21), in terms of the previously introduced matrices we have the following:
$$D_{1,1} := I_m - \mu\, U_m\, M_{1,1}\, P_m\, \Lambda_m\, Q_m^{-1}\, U_m^{-1} \in \mathbb{R}^{m \times m},$$
$$D_{1,2} := -\mu\, U_m\, M_{1,2}\, P_{m+1}\, \Lambda_{m+1}\, R_{m+1}^{-1}\, U_{m+1}^{-1} \in \mathbb{R}^{m \times (m+1)},$$
$$D_{2,1} := -\mu\, U_{m+1}\, M_{2,1}\, P_m\, \Lambda_m\, Q_m^{-1}\, U_m^{-1} \in \mathbb{R}^{(m+1) \times m},$$
$$D_{2,2} := I_{m+1} - \mu\, U_{m+1}\, M_{2,2}\, P_{m+1}\, \Lambda_{m+1}\, R_{m+1}^{-1}\, U_{m+1}^{-1} \in \mathbb{R}^{(m+1) \times (m+1)}, \qquad (29)$$
where
$$Q_m = \mathrm{diag}\left(p_{m+1}(w,x_1), \ldots, p_{m+1}(w,x_m)\right), \qquad R_{m+1} = \mathrm{diag}\left(p_m(w,y_1), \ldots, p_m(w,y_{m+1})\right),$$
$$U_{m+1} = \mathrm{diag}\left(u(y_1), \ldots, u(y_{m+1})\right), \qquad \Lambda_{m+1} = \mathrm{diag}\left(\lambda_{m+1,1}, \ldots, \lambda_{m+1,m+1}\right),$$
and the matrices M_{1,1}, M_{1,2}, M_{2,1} and M_{2,2} are as follows:
$$M_{1,1}(i,j) = M_j^{(m+1)}(x_i), \ i = 1:m, \ j = 0:m-1, \qquad M_{1,1} \in \mathbb{R}^{m \times m},$$
$$M_{1,2}(i,j) = M_j^{(m)}(x_i), \ i = 1:m, \ j = 0:m, \qquad M_{1,2} \in \mathbb{R}^{m \times (m+1)},$$
$$M_{2,1}(i,j) = M_j^{(m+1)}(y_i), \ i = 1:m+1, \ j = 0:m-1, \qquad M_{2,1} \in \mathbb{R}^{(m+1) \times m},$$
$$M_{2,2}(i,j) = M_j^{(m)}(y_i), \ i = 1:m+1, \ j = 0:m, \qquad M_{2,2} \in \mathbb{R}^{(m+1) \times (m+1)}.$$
Remark 3.
The entries of the matrices in (29) require the computation of the GMMs. As usual, the ordinary Modified Moments (MMs), which depend on the specific kernel under consideration, are often derived by suitable recurrence relations (see, e.g., [13]). In [8], a general scheme for deriving GMMs starting from MMs was proposed. Alternatively, for very smooth kernels, Gaussian rules can also be used. In any case, the global algorithm can be organized so that the matrices in (29), which require the most expensive computational effort, are computed only once for a given couple (m, m+1).

5. Numerical Experiments

Now we propose some tests showing the numerical results obtained by approximating the solution of equations of the type (1) by the mixed sequence {f̄_n}_n in (27). We will compare the results with those attained by the corresponding ordinary sequence used in the standard method and, in Example 2, also with those achieved by the mixed collocation method proposed in [4]. Indeed, in this test the kernel is moderately smooth, and the convergence conditions of both methods are satisfied.
We have selected right-hand sides g possessing different regularities and kernels k presenting some kinds of drawback, such as a simultaneously highly oscillating behaviour with a "nearly" singular fixed point, or being weakly singular. In each test, we report both the common weight w used in the construction of the quadrature formulae and the weight u defining the space to which f belongs.
For an effective comparison between the ordinary and the extended sequences on the same number of nodes, we have considered the following sequences:
$$\check{f}_n(x) = \begin{cases} f_{2^n}(x), & n = 2, 4, \ldots, \\ f_{2^n+1}(x), & n = 3, 5, \ldots, \end{cases} \qquad \bar{f}_n(x) = \begin{cases} f_{2^n}(x), & n = 2, 4, \ldots, \\ \hat{f}_{2^n+1}(x), & n = 3, 5, \ldots. \end{cases}$$
Since the solution f is unknown, we take as exact the values attained by the approximating function f_N, for sufficiently large N. We remark that N = 1024 turns out to be a suitable choice for the functions under consideration. In the tables, for increasing n, we report the weighted maximum errors attained by f̌_n and f̄_n on the set {z_i}_{i=1,…,M}, M = 1000, of equally spaced points of (−1, 1), namely
$$E_n^{one} := \max_{1 \le i \le M} |(u f_N)(z_i) - (u \check{f}_n)(z_i)|,$$
$$E_n^{mix} := \max_{1 \le i \le M} |(u f_N)(z_i) - (u \bar{f}_n)(z_i)|.$$
In each table, the first and third columns contain the size of the ordinary system (o.l.s.) and its condition number cond^{one} related to the ordinary sequence. In the fourth and sixth columns, the sizes of the couple of linear systems of the mixed scheme (m.l.s.) and the condition number cond^{mix} of the "reduced" system (25) are reported. All the condition numbers have been computed with respect to the infinity norm.
Finally, in order to contain a possible moderate loss of accuracy in computing the GMMs, we have carried out their construction using the software Wolfram Mathematica 12.1 in quadruple precision. All the other computations have been performed in double-machine precision (eps ≈ 2.220446049250313 × 10⁻¹⁶).
Example 1.
Let us consider the following equation:
$$f(y) - \frac{1}{100\pi} \int_{-1}^{1} f(x)\, \frac{(1-x^2)^{\frac15}}{\sqrt{|x-y|}}\, dx = |y|^{\frac{10}{3}},$$
$$u = v^{0.25,0.25}, \qquad w = v^{0.5,0.5}, \qquad \rho = v^{0.2,0.2}.$$
In this case g ∈ W_3(u), and according to Theorem 6 (which holds since all the assumptions are satisfied) the errors are O(m⁻³); the numerical results reported in Table 1 are even better. All the linear systems are well conditioned, the ordinary condition numbers being slightly smaller than the mixed ones. The weighted absolute errors by ONM and MNM are displayed in Figure 1.
Example 2.
Let us consider the following equation:
$$f(y) - \frac{1}{3} \int_{-1}^{1} f(x)\, |x-y|^{e}\, (1-x^2)^{\frac34}\, dx = |y|^{\frac52},$$
$$u = v^{0.7,0.7}, \qquad w = \rho = v^{0.75,0.75}.$$
In Table 2 and Table 3, we report the results achieved by the mixed and ordinary Nyström methods together with those obtained by the mixed and ordinary collocation methods in [4]. Indeed, the assumptions assuring stability and convergence are satisfied for all the methods; hence, the comparison makes sense.
We denote by Ē_n^{one} and Ē_n^{mix} the weighted maximum errors attained by the Ordinary Collocation Method (OCM) and the Mixed Collocation Method (MCM) in [4], respectively, on the same set {z_i}_{i=1,…,M}, M = 1000, of equally spaced points of (−1, 1).
The results show that both Nyström methods behave better than the collocation ones, which is quite common in cases such as the one under consideration. Indeed, even though the solution f ∈ W_2(u) (since g ∈ W_2(u)), the rate of convergence of the collocation approach depends on the approximation of both the integral operator and the right-hand side. On the contrary, the order of convergence of the Nyström method depends essentially on the smoothness of the kernel. This is one of the reasons why, in such cases, the Nyström approach produces better results than the collocation one, as anticipated in the Introduction.
Example 3.
Let us consider the following equation:
$$f(y) - \frac{1}{2\pi} \int_{-1}^{1} f(x)\, \cos(250x)\, \sqrt{1-x^2}\, dx = (1-y)^{\frac52} \cos y,$$
$$u = v^{0.1,0.1}, \qquad w = \rho = v^{0.5,0.5}.$$
In this test, the kernel k(x,y) = cos(250x) presents a fast oscillating behaviour; its graph is reported in Figure 2. Hence, the product formula allows us to overcome the drawbacks deriving from the use of the Gauss–Jacobi rule. About the rate of convergence, since g ∈ W_5(u), we expect errors O(m⁻⁵). We have reported the values attained by ONM and MNM at three different points of the interval (−1, 1) (Table 4) and the maximum errors on the entire interval (Table 5). In all cases, the theoretical estimates are attained. Moreover, for both methods the condition numbers of the linear systems are comparable.
Example 4.
Let us consider the following equation:
$$f(y) - \frac{1}{20\pi} \int_{-1}^{1} f(x)\, \frac{(1-x^2)^{3/10}}{\left(x^2 + \frac52\right)^{5/4}}\, dx = \left|y - \frac12\right|^{\frac92},$$
$$u = v^{0.2,0.2}, \qquad w = \rho = v^{0.3,0.3}.$$
In this case, ρ and u satisfy the assumptions for the convergence of the Nyström methods, while they do not satisfy those of the collocation methods [4]. We recall that the convergence of both collocation methods requires smoother kernels and more restrictive assumptions on the weights. About the rate of convergence, since g ∈ W_4(u), we expect errors O(m⁻⁴). Moreover, we have chosen this test to propose a comparison with the Nyström method obtained by approximating the coefficients {C_k(y)}_{k=1}^m in (5) by Gaussian rules. We will refer to this procedure as the Ordinary Nyström method by Gaussian rule (shortly, ONG). We point out that the nature of the kernel k makes this comparison possible, since the computation of the coefficients by the Gauss–Jacobi rule can be performed. Thus, in Table 6, in addition to the results by the mixed and ordinary Nyström methods, the last two columns report the maximum weighted errors attained by the ONG at the same set of nodes {z_i}_{i=1,…,M}, M = 1000, and the condition numbers of the corresponding linear systems, denoted by E_n^{ONG} and cond^{ONG}, respectively. The results by ONM and MNM are slightly better than the expected accuracy, and the condition numbers of the mixed linear systems are a little lower than their ordinary counterparts. With the ONG method, as we can observe, the errors stagnate.
Example 5.
Let us consider the following equation:
$$f(y) - \frac{1}{27} \int_{-1}^{1} f(x)\, \frac{\sin(50x)}{\left(x^2 + 50^{-2}\right)^{\frac{11}{10}}}\, dx = y \sin y,$$
$$u = v^{0,0}, \qquad w = \rho = v^{0,0}.$$
This test deals with a kernel that is the product of a high-frequency periodic function and a "nearly" singular function. Such kernels, treated in the bidimensional case in [17], appear for instance in problems of propagation in uniform waveguides with non-perfect conductors [18]. The graph of the kernel is given in Figure 3. Since g ∈ W_r(u) for every r ≥ 1, a very fast convergence is expected, and this is confirmed by the numerical results reported in Table 7. The mixed condition numbers are significantly smaller than the ordinary ones. The graph of the weighted solution f u is provided in Figure 4.

6. Proofs

In order to prove Theorem 1, we recall the following well-known inequality ([9], p. 171).
Proposition 1
(Weak Jackson Inequality). Let f ∈ C_u and ∫₀¹ Ω_φ^r(f,t)_u / t dt < ∞, with 1 ≤ r ∈ ℕ. Then, the following inequality holds:
$$E_m(f)_u \le \mathcal{C} \int_0^{\frac{1}{m}} \frac{\Omega_\varphi^r(f,t)_u}{t}\, dt,$$
where m > r and C ≠ C(m, f).
Proof of Theorem 1.
Observing that
$$|(Kf)(y)|\, u(y) \le |\mu|\, u(y) \int_{-1}^{1} |f(x)|\, |k(x,y)|\, \rho(x)\, dx \le |\mu|\, \|f\|_{C_u}\, u(y) \int_{-1}^{1} |k(x,y)|\, \frac{\rho(x)}{u(x)}\, dx,$$
the boundedness of the operator K follows from the first assumption in (4).
A well-known result (see [19], 2.5.1, p. 44) states that the bounded operator K is compact if and only if lim_{m→∞} sup_{‖f‖_{C_u}=1} E_m(Kf)_u = 0. Then, by using the weak Jackson inequality and (4), we obtain the following bound:
$$E_m(Kf)_u \le \mathcal{C} \int_0^{\frac{1}{m}} \frac{\Omega_\varphi^r(Kf,t)_u}{t}\, dt \le \mathcal{C}\, \frac{\|f\|_{C_u}}{m^s}, \qquad \mathcal{C} \ne \mathcal{C}(m,f).$$
Thus, the theorem follows. □
To prove Theorem 5, we recall the following well-known result (see, for instance ([15], Theorem 4.1.1, p. 106)).
Theorem 7.
Let (X, ‖·‖) be a Banach space. Assume K : X → X to be a bounded compact operator and {K_m : X → X}_{m∈ℕ} to be a sequence of bounded operators with lim_m ‖Kf − K_m f‖ = 0 for all f ∈ X. Consider the following operator equations:
$$(I - K) f = g, \qquad (32)$$
$$(I - K_m) f_m = g, \qquad (33)$$
where I is the identity operator in X and g ∈ X. If lim_m ‖(K − K_m) K_m‖ = 0, i.e., the sequence {K_m}_m is collectively compact, then, for all sufficiently large m, (I − K_m)⁻¹ exists and is uniformly bounded:
$$\left\| (I - K_m)^{-1} \right\| \le \frac{1 + \left\| (I - K)^{-1} \right\| \left\| K_m \right\|}{1 - \left\| (I - K)^{-1} \right\| \left\| (K - K_m) K_m \right\|}.$$
Moreover, denoting by f^* ∈ X and f_m^* ∈ X the unique solutions of (32) and (33), respectively, the following holds:
$$\|f^* - f_m^*\| \le \mathcal{C}\, \|(K - K_m) f^*\|, \qquad \mathcal{C} \ne \mathcal{C}(m, f^*, g).$$
Proof of Theorem 5.
The invertibility of I − K follows under the assumptions of Theorem 1, while the uniform boundedness of {K̃_{2m+1} f}_m and the limit condition
$$\lim_{m \to \infty} \|(K - \tilde{K}_{2m+1}) f\|_{C_u} = 0, \quad \forall f \in C_u, \qquad (34)$$
hold under the assumptions of Theorem 3.
It remains to prove that the sequence {K̃_{2m+1}}_m is collectively compact. This is equivalent (see [15], p. 114, and [19], p. 44) to proving that lim_{N→∞} sup_m sup_{‖f‖_{C_u}=1} E_N(K̃_{2m+1} f)_u = 0. Hence, noting that, for any P ∈ ℙ_N,
$$E_N(\tilde{K}_{2m+1} f)_u \le \|\tilde{K}_{2m+1} f - P\|_{C_u} \le \|(K - \tilde{K}_{2m+1}) f\|_{C_u} + \|Kf - P\|_{C_u},$$
the thesis is easily proved by using (34) and the uniform boundedness of the operator K. □
Proof of Theorem 6.
In order to prove (28), we first define the sequence
$$\bar{F}_n(x) = \begin{cases} f_{2^n}(x), & n = 2, 4, \ldots, \\ \tilde{f}_{2^n+1}(x), & n = 3, 5, \ldots, \end{cases}$$
obtained by combining two sub-sequences of those defined by the Nyström methods ONM in (17) and ENM in (22). As proved in Theorems 4 and 5, under the assumptions of Theorems 1, 4 and 5, these sequences both converge to the unique solution of the integral Equation (1). Therefore, all the sub-sequences converge to the same limit function $f$, with the same speed of convergence. Consequently, we can conclude that, with $m = 2^n$,
$$\|(f - \bar{F}_n)u\|_\infty \le \mathcal{C}\,\frac{\|f\|_{W_r(u)}}{m^r}, \qquad \mathcal{C} \neq \mathcal{C}(m, f).$$
Hence, to obtain (28), it remains to estimate the distance
$$\|(\tilde{f}_{2m+1} - \hat{f}_{2m+1})u\|_\infty.$$
From the definitions of the two polynomial sequences, we can write
$$\tilde{f}_{2m+1}(y) - \hat{f}_{2m+1}(y) = \mu\, u(y) \left[ \sum_{k=1}^{m} \frac{a_k^* - c_k^*}{u(x_k)}\,A_k(y) + \sum_{k=1}^{m+1} \frac{b_k^* - \tilde{b}_k^*}{u(y_k)}\,B_k(y) \right].$$
By the definitions of $\mathbf{a}_m^*$ and $\mathbf{c}_m^*$, using estimates (18) and (23), we immediately obtain
$$|a_k^* - c_k^*| = |\tilde{f}_{2m+1}(x_k)u(x_k) - f_m(x_k)u(x_k)| \le |\tilde{f}_{2m+1}(x_k)u(x_k) - f(x_k)u(x_k)| + |f(x_k)u(x_k) - f_m(x_k)u(x_k)| \le \frac{\mathcal{C}}{m^r}\,\|f\|_{W_r(u)}, \qquad k = 1, \ldots, m.$$
Consequently,
$$\|\mathbf{a}_m^* - \mathbf{c}_m^*\|_\infty \le \frac{\mathcal{C}}{m^r}\,\|f\|_{W_r(u)}, \qquad \mathcal{C} \neq \mathcal{C}(m, f),$$
where $\|\mathbf{d}\|_\infty = \max_k |d_k|$, $\mathbf{d} = [d_1, d_2, \ldots, d_m]^T$, denotes the infinity norm in $\mathbb{R}^m$.
Now, we remark that, by (21) and (25) and under the assumption that $D_{2,2}$ is invertible, the following identity holds true:
$$D_{2,2}\,(\tilde{\mathbf{b}}_{m+1}^* - \mathbf{b}_{m+1}^*) = D_{2,1}\,(\mathbf{a}_m^* - \mathbf{c}_m^*).$$
Therefore, we have
$$\|\tilde{\mathbf{b}}_{m+1}^* - \mathbf{b}_{m+1}^*\|_\infty \le \|D_{2,2}^{-1}\|_\infty\,\|D_{2,1}\|_\infty\,\|\mathbf{a}_m^* - \mathbf{c}_m^*\|_\infty.$$
If we denote by $D_{2m+1}$ the matrix of coefficients in (21), we note that $D_{2,1}$ is a submatrix of it. By standard arguments (see, for instance, [15]), it is possible to show that $\|D_{2m+1}\|_\infty \le \|I - \tilde{K}_{2m+1}\|_{C_u \to C_u}$, and the operator norm on the right-hand side is uniformly bounded with respect to $m$, since the sequence $\{\tilde{K}_{2m+1}\}_m$ is bounded by virtue of (12). Therefore, since we are assuming that $\sup_m \|D_{2,2}^{-1}\|_\infty < \infty$, we can conclude that
$$\|\tilde{\mathbf{b}}_{m+1}^* - \mathbf{b}_{m+1}^*\|_\infty \le \mathcal{C}\,\|\mathbf{a}_m^* - \mathbf{c}_m^*\|_\infty.$$
Hence, by (36), we obtain
$$\|(\tilde{f}_{2m+1} - \hat{f}_{2m+1})u\|_\infty \le \mathcal{C}\,\|\mathbf{a}_m^* - \mathbf{c}_m^*\|_\infty\,\|\tilde{K}_{2m+1}\|_{C_u \to C_u},$$
and (28) follows by (38) and estimate (12). □
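The convergence mechanism of Theorem 7 can be illustrated numerically. The following is a minimal sketch, not the method of this paper: it uses a plain Gauss–Legendre Nyström discretization (instead of the Jacobi-based product rules studied here), with a hypothetical smooth kernel $k(x,y) = e^{xy}$, $\mu = 1/2$, $\rho \equiv 1$, and a manufactured right-hand side so that the exact solution is known. The rapid error decay with $m$ reflects the estimate $\|f^* - f_m^*\| \le \mathcal{C}\,\|(K - K_m)f^*\|$.

```python
import numpy as np

# Illustrative Nystrom discretization of (I - K) f = g on [-1, 1], where
# (K f)(y) = mu * int_{-1}^{1} k(x, y) f(x) dx.  A plain Gauss-Legendre rule
# replaces the paper's Jacobi-based product rules; the kernel, mu, and the
# manufactured solution below are hypothetical choices for this demo only.

mu = 0.5
kernel = lambda x, y: np.exp(x * y)          # smooth kernel (assumption)
f_exact = np.cos                             # manufactured exact solution

# Right-hand side g = (I - K) f_exact, with K f_exact computed by a fine
# reference quadrature, so that f_exact really solves the equation.
xr, wr = np.polynomial.legendre.leggauss(200)
g = lambda y: f_exact(y) - mu * np.array(
    [np.sum(wr * kernel(xr, t) * f_exact(xr)) for t in np.atleast_1d(y)])

def nystrom_solve(m):
    """Solve (I - K_m) f_m = g collocated at the m Gauss-Legendre nodes."""
    x, w = np.polynomial.legendre.leggauss(m)
    # A[i, j] = delta_ij - mu * w_j * k(x_j, x_i)
    A = np.eye(m) - mu * w[None, :] * kernel(x[None, :], x[:, None])
    return x, np.linalg.solve(A, g(x))

for m in (4, 8, 16):
    x, fm = nystrom_solve(m)
    print(m, np.max(np.abs(fm - f_exact(x))))   # errors decay rapidly with m
```

Since kernel and solution are analytic here, the Gauss rule converges exponentially; for the weighted, less regular setting of the paper, the decay is governed by the $m^{-r}$ rates of the theorems above.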

7. Conclusions

In this paper, we have proposed a global Nyström method combining ordinary and extended product integration rules, both based on Jacobi zeros. By the nature of the method, we can handle FIEs whose kernels present several kinds of pathological behaviour, since the coefficients of the rules are computed exactly via recurrence relations. The method employs two discrete sequences, namely the ordinary and the extended ones, suitably mixed so as to strongly reduce the computational effort required by the ordinary Nyström method.

With respect to the mixed collocation method in [4], the advantages can be summarised as follows: we can treat FIEs with less regular kernels and under wider assumptions, obtaining a better rate of convergence. These improvements have been shown by means of numerical tests. In particular, Example 2 shows that the mixed Nyström method performs better than the mixed collocation method in [4], while Example 4 shows that its assumptions are wider than those of the latter. Both methods reduce the sizes of the involved linear systems but require the computation of Modified and Generalized Modified Moments. In any case, once the kernel k and the order m are given, the algorithm can be organized by pre-computing the matrix of the system. Moreover, once the Modified Moments are given, the Generalized Modified Moments can always be deduced by a suitable recurrence relation (see, e.g., [8]). Therefore, the global process has general applicability and only requires the convergence assumptions to be satisfied. The Modified Moments themselves can be computed through recurrence relations (see, e.g., [13]); when these relations are unstable, they can be accurately computed by suitable numerical methods. For instance, in the case of highly oscillating or nearly singular kernels, this approach has been successfully applied by implementing "dilation" techniques [20,21].

This computational cost is a well-known limitation of classical Nyström methods based on product integration rules: they are more expensive because the rule coefficients, which absorb the possibly pathological kernel, have to be computed "exactly". On the other hand, this extra effort is amply repaid by the better performance with respect to other, cheaper procedures. Finally, establishing whether the convergence conditions are also necessary is still an open problem and will be the subject of further investigation.
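As an illustration of the product-rule idea recalled above, the following sketch builds the rule coefficients from exactly computed modified moments. It is a simplified stand-in for the paper's rules: the Legendre weight replaces a general Jacobi one, and the oscillatory kernel $k(x) = \cos(\omega x)$ is a hypothetical choice whose Legendre moments are known in closed form, $\int_{-1}^{1} P_n(x)\cos(\omega x)\,dx = 2(-1)^{n/2} j_n(\omega)$ for even $n$ (zero for odd $n$), with $j_n$ the spherical Bessel function computed by forward recurrence. Only the smooth factor $f$ is interpolated, so the rule remains accurate with few nodes even for $\omega = 250$, as in Example 3.

```python
import numpy as np

# Sketch of a product integration rule for I(f) = int_{-1}^{1} f(x) cos(w x) dx.
# The oscillatory factor is absorbed into the coefficients via the modified
# moments mom_n = int_{-1}^{1} P_n(x) cos(w x) dx = 2 (-1)^{n/2} j_n(w) (even n;
# odd moments vanish).  Legendre weight and k(x) = cos(w x) are illustrative
# assumptions, not the general Jacobi setting of the paper.

def spherical_j(nmax, x):
    """j_0, ..., j_nmax by forward recurrence (stable here since nmax << x)."""
    j = np.empty(nmax + 1)
    j[0] = np.sin(x) / x
    if nmax >= 1:
        j[1] = np.sin(x) / x**2 - np.cos(x) / x
    for n in range(1, nmax):
        j[n + 1] = (2 * n + 1) / x * j[n] - j[n - 1]
    return j

def product_rule(f, w, m):
    x, lam = np.polynomial.legendre.leggauss(m)   # Gauss-Legendre nodes/weights
    n = np.arange(m)
    mom = np.where(n % 2 == 0,
                   2.0 * (-1.0) ** (n // 2) * spherical_j(m - 1, w), 0.0)
    # P[n, k] = P_n(x_k) by the three-term Legendre recurrence
    P = np.ones((m, m))
    if m > 1:
        P[1] = x
    for i in range(1, m - 1):
        P[i + 1] = ((2 * i + 1) * x * P[i] - i * P[i - 1]) / (i + 1)
    # Coefficients A_k = lam_k * sum_n (2n+1)/2 * P_n(x_k) * mom_n,
    # i.e., the exact integral of the Lagrange interpolant of f times cos(w x).
    A = lam * (((2 * n[:, None] + 1) / 2) * P * mom[:, None]).sum(axis=0)
    return A @ f(x)

w = 250.0
f = lambda x: 1.0 / (1.0 + x**2)
xr, wr = np.polynomial.legendre.leggauss(2000)    # fine reference quadrature
ref = np.sum(wr * f(xr) * np.cos(w * xr))
print(abs(product_rule(f, w, 40) - ref))          # small error with only 40 nodes
```

A plain 40-point Gauss rule cannot resolve roughly 80 oscillation periods, while the product rule only needs to interpolate the smooth factor $f$; this is precisely the advantage that makes the "exact" computation of the coefficients worthwhile.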

Author Contributions

All authors equally contributed to the paper. Conceptualization, D.M., D.O. and M.G.R.; methodology, D.M., D.O. and M.G.R.; software, D.M., D.O. and M.G.R.; validation, D.M., D.O. and M.G.R.; analysis, D.M., D.O. and M.G.R.; investigation, D.M., D.O. and M.G.R.; resources, D.M., D.O. and M.G.R.; data curation, D.M., D.O. and M.G.R.; writing—original draft preparation, writing—review and editing, D.M., D.O. and M.G.R.; visualization, D.M., D.O. and M.G.R.; supervision D.M., D.O. and M.G.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by University of Basilicata (local funds) and by GNCS Project 2020 “Approssimazione multivariata ed equazioni funzionali per la modellistica numerica”.

Acknowledgments

The authors thank the anonymous referees for their suggestions and remarks, which allowed us to improve the paper. The research was accomplished within the “Research ITalian network on Approximation” (RITA). All authors are members of the INdAM-GNCS Research Group. The second and third authors are members of the TAA-UMI Research Group.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fermo, L.; Russo, M.G. Numerical methods for Fredholm integral equations with singular right-hand sides. Adv. Comput. Math. 2010, 33, 305–330.
2. Vainikko, G.; Pedas, A. The properties of solutions of weakly singular integral equations. J. Aust. Math. Soc. 1981, 22, 419–430.
3. De Bonis, M.C.; Mastroianni, G. Projection methods and condition numbers in uniform norm for Fredholm and Cauchy singular integral equations. SIAM J. Numer. Anal. 2006, 44, 1351–1374.
4. Occorsio, D.; Russo, M.G. A mixed collocation scheme for solving second kind Fredholm integral equations in [−1,1]. Electron. Trans. Numer. Anal. 2021, 54, 443–459.
5. Occorsio, D.; Russo, M.G. Nyström Methods for Fredholm Integral Equations Using Equispaced Points. Filomat 2014, 28, 49–63.
6. Occorsio, D.; Russo, M.G.; Themistoclakis, W. Some numerical applications of generalized Bernstein Operators. Constr. Math. Anal. 2021, 4, 186–214.
7. Criscuolo, G.; Mastroianni, G.; Occorsio, D. Convergence of extended Lagrange interpolation. Math. Comput. 1990, 55, 197–212.
8. Occorsio, D.; Russo, M.G. A mixed scheme of product integration rules in (−1,1). Appl. Numer. Math. 2020, 149, 113–123.
9. Mastroianni, G.; Milovanović, G.V. Interpolation Processes—Basic Theory and Applications; Springer Monographs in Mathematics; Springer: Berlin, Germany, 2008.
10. Ditzian, Z.; Totik, V. Moduli of Smoothness; Springer Series in Computational Mathematics; Springer: New York, NY, USA, 1987; Volume 9.
11. Mastroianni, G.; Russo, M.G.; Themistoclakis, W. Numerical Methods for Cauchy Singular Integral Equations in Spaces of Weighted Continuous Functions. Oper. Theory Adv. Appl. 2005, 160, 311–336.
12. Gautschi, W. On the Construction of Gaussian Quadrature Rules from Modified Moments. Math. Comput. 1970, 24, 245–260.
13. Piessens, R.; Branders, M. Numerical Solution of Integral Equations of Mathematical Physics Using Chebyshev Polynomials. J. Comput. Phys. 1976, 21, 178–196.
14. Nevai, P. Mean Convergence of Lagrange Interpolation III. Trans. Am. Math. Soc. 1984, 282, 669–698.
15. Atkinson, K. The Numerical Solution of Integral Equations of the Second Kind; Cambridge Monographs on Applied and Computational Mathematics; Cambridge University Press: Cambridge, UK, 1997.
16. Kress, R. Linear Integral Equations, 2nd ed.; Applied Mathematical Sciences; Springer: New York, NY, USA, 1999; Volume 82.
17. Occorsio, D.; Serafini, G. Cubature formulae for nearly singular and highly oscillating integrals. Calcolo 2018, 55, 33.
18. Dobbelaere, D.; Rogier, H.; Zutter, D.D. Accurate 2.5-D boundary element method for conductive media. Radio Sci. 2014, 49, 389–399.
19. Timan, A.F. Theory of Approximation of Functions of a Real Variable; Dover Publications: New York, NY, USA, 1994.
20. De Bonis, M.C.; Pastore, P. A quadrature formula for integrals of highly oscillatory functions. Rend. Circ. Mat. Palermo 2010, 2 (Suppl. 82), 279–303.
21. Fermo, L.; Russo, M.G.; Serafini, G. Numerical treatment of the generalized Love integral equation. Numer. Algorithms 2021, 86, 1769–1789.
Figure 1. Example 1. (a) Errors by Ordinary Nyström Method. (b) Errors by Mixed Nyström Method.
Figure 2. Example 3: graph of $k(x) = \cos(250x)$.
Figure 3. Example 5: graph of $k(x) = \dfrac{\sin(50x)}{(x^2 + 50^2)^{11/10}}$.
Figure 4. Example 5: graph of the weighted solution $fu$.
Table 1. Example 1.

| Size o.l.s. | E_n^one | cond^one | Size m.l.s. | E_n^mix | cond^mix |
|---|---|---|---|---|---|
| 4 | 5.3 × 10−4 | 1.01 | (4,5) | 6.4 × 10−5 | 1.04 |
| 9 | 2.6 × 10−5 | 1.02 | | | |
| 16 | 2.7 × 10−6 | 1.02 | (16,17) | 1.0 × 10−8 | 1.14 |
| 33 | 2.4 × 10−8 | 1.02 | | | |
| 64 | 2.3 × 10−9 | 1.03 | (64,65) | 2.8 × 10−10 | 1.59 |
| 129 | 3.3 × 10−10 | 1.03 | | | |
| 256 | 6.8 × 10−11 | 1.03 | (256,257) | 5.3 × 10−12 | 4.40 |
| 513 | 1.2 × 10−11 | 1.03 | | | |
Table 2. Example 2: Ordinary and Mixed Nyström methods.

| Size o.l.s. | E_n^one | cond^one | Size m.l.s. | E_n^mix | cond^mix |
|---|---|---|---|---|---|
| 4 | 1.6 × 10−4 | 1.99 | (4,5) | 8.7 × 10−5 | 2.60 |
| 9 | 2.9 × 10−5 | 2.07 | | | |
| 16 | 4.1 × 10−6 | 2.13 | (16,17) | 1.7 × 10−7 | 3.03 |
| 33 | 4.6 × 10−7 | 2.13 | | | |
| 64 | 4.0 × 10−8 | 2.13 | (64,65) | 1.8 × 10−9 | 3.10 |
| 129 | 4.3 × 10−9 | 2.14 | | | |
| 256 | 3.3 × 10−10 | 2.14 | (256,257) | 1.5 × 10−11 | 3.11 |
| 513 | 3.8 × 10−11 | 2.14 | | | |
Table 3. Example 2: Ordinary and Mixed Collocation methods [4].

| Size o.l.s. | Ē_n^one | cond^one | Size m.l.s. | Ē_n^mix | cond^mix |
|---|---|---|---|---|---|
| 5 | 1.1 × 10−2 | 1.99 | (5,4) | 1.2 × 10−2 | 1.31 |
| 9 | 3.0 × 10−3 | 2.07 | | | |
| 17 | 7.2 × 10−4 | 2.13 | (17,16) | 1.5 × 10−3 | 1.45 |
| 33 | 1.5 × 10−4 | 2.13 | | | |
| 65 | 3.0 × 10−5 | 2.13 | (65,64) | 2.2 × 10−5 | 1.52 |
| 129 | 5.7 × 10−6 | 2.14 | | | |
| 257 | 9.9 × 10−7 | 2.14 | (257,256) | 8.8 × 10−8 | 1.52 |
| 513 | 1.7 × 10−7 | 2.14 | | | |
Table 4. Example 3: numerical values of the weighted solution attained by ONM and MNM.

x = 0.8

| m | $(\check{f}_n u)(x)$ | $(\bar{f}_n u)(x)$ |
|---|---|---|
| 4 | 2.733837532248313 | 2.733837532248313 |
| 9 | 2.733857147494405 | 2.733857116533352 |
| 16 | 2.733857151078503 | 2.733857151078503 |
| 33 | 2.733857151493599 | 2.733857151494889 |
| 64 | 2.733857151490818 | 2.733857151490818 |
| 129 | 2.733857151490921 | 2.733857151490937 |
| 256 | 2.733857151490914 | 2.733857151490914 |
| 513 | 2.733857151490910 | 2.733857151490910 |

x = 0

| m | $(\check{f}_n u)(x)$ | $(\bar{f}_n u)(x)$ |
|---|---|---|
| 4 | 9.973699234411486 × 10−1 | 9.973699234411486 × 10−1 |
| 9 | 9.973916652404244 × 10−1 | 9.973916609227826 × 10−1 |
| 16 | 9.973916692130872 × 10−1 | 9.973916692130872 × 10−1 |
| 33 | 9.973916696731837 × 10−1 | 9.973916696740153 × 10−1 |
| 64 | 9.973916696702131 × 10−1 | 9.973916696702131 × 10−1 |
| 129 | 9.973916696702052 × 10−1 | 9.973916696702041 × 10−1 |
| 256 | 9.973916696702078 × 10−1 | 9.973916696702078 × 10−1 |
| 513 | 9.973916696702031 × 10−1 | 9.973916696702035 × 10−1 |

x = −0.5

| m | $(\check{f}_n u)(x)$ | $(\bar{f}_n u)(x)$ |
|---|---|---|
| 4 | 1.502019304836581 × 10−1 | 1.502019304836581 × 10−1 |
| 9 | 1.502230544539280 × 10−1 | 1.502230511114784 × 10−1 |
| 16 | 1.502230583137010 × 10−1 | 1.502230583137010 × 10−1 |
| 33 | 1.502230587607230 × 10−1 | 1.502230587564985 × 10−1 |
| 64 | 1.502230587578369 × 10−1 | 1.502230587578369 × 10−1 |
| 129 | 1.502230587578292 × 10−1 | 1.502230587578573 × 10−1 |
| 256 | 1.502230587578317 × 10−1 | 1.502230587578317 × 10−1 |
| 513 | 1.502230587578272 × 10−1 | 1.502230587578276 × 10−1 |
Table 5. Example 3.

| Size o.l.s. | E_n^one | cond^one | Size m.l.s. | E_n^mix | cond^mix |
|---|---|---|---|---|---|
| 4 | 2.2 × 10−5 | 1.00 | (4,5) | 3.9 × 10−8 | 1.00 |
| 9 | 4.4 × 10−9 | 1.01 | | | |
| 16 | 4.6 × 10−10 | 1.02 | (16,17) | 2.1 × 10−12 | 1.01 |
| 33 | 3.0 × 10−12 | 1.02 | | | |
| 64 | 9.8 × 10−14 | 1.03 | (64,65) | 3.0 × 10−14 | 1.02 |
| 129 | 1.8 × 10−14 | 1.04 | | | |
| 256 | 4.4 × 10−15 | 1.35 | (256,257) | eps | 1.31 |
| 513 | 4.5 × 10−16 | 1.36 | | | |
Table 6. Example 4.

| Size o.l.s. | E_n^one | cond^one | Size m.l.s. | E_n^mix | cond^mix | E_n^ONG | cond^ONG |
|---|---|---|---|---|---|---|---|
| 4 | 4.4 × 10−2 | 2.40 | (4,5) | 1.5 × 10−5 | 1.70 | 3.6 × 10−2 | 1.50 |
| 9 | 3.5 × 10−6 | 2.24 | | | | 9.7 × 10−3 | 2.36 |
| 16 | 2.7 × 10−7 | 2.25 | (16,17) | 7.6 × 10−9 | 1.21 | 1.5 × 10−2 | 2.19 |
| 33 | 5.6 × 10−10 | 2.28 | | | | 1.5 × 10−2 | 2.23 |
| 64 | 1.4 × 10−12 | 2.29 | (64,65) | 1.5 × 10−13 | 1.21 | 1.5 × 10−2 | 2.24 |
| 129 | 3.0 × 10−13 | 2.30 | | | | 1.5 × 10−2 | 2.25 |
| 256 | 2.7 × 10−13 | 2.31 | (256,257) | 5.7 × 10−15 | 1.22 | 1.5 × 10−2 | 2.26 |
| 513 | 1.3 × 10−14 | 2.32 | | | | 1.5 × 10−2 | 2.26 |
Table 7. Example 5.

| Size o.l.s. | E_n^one | cond^one | Size m.l.s. | E_n^mix | cond^mix |
|---|---|---|---|---|---|
| 4 | 3.9 × 10−14 | 1.78 | (4,5) | 5.5 × 10−14 | 1.88 |
| 9 | 3.3 × 10−14 | 3.32 | | | |
| 16 | 2.0 × 10−15 | 5.78 | (16,17) | eps | 2.15 |
| 33 | eps | 40.9 | | | |
Mezzanotte, D.; Occorsio, D.; Russo, M.G. Combining Nyström Methods for a Fast Solution of Fredholm Integral Equations of the Second Kind. Mathematics 2021, 9, 2652. https://0-doi-org.brum.beds.ac.uk/10.3390/math9212652