Article

Approximation of Finite Hilbert and Hadamard Transforms by Using Equally Spaced Nodes

by Frank Filbir 1,2, Donatella Occorsio 3,4,* and Woula Themistoclakis 4

1 Department of Scientific Computing, Helmholtz Zentrum München German Research Center for Environmental Health, Ingolstädter Landstrasse 1, 85764 Neuherberg, Germany
2 Applied Numerical Analysis, Fakultät für Mathematik, Technische Universität München, Boltzmannstrasse 3, 85748 Garching bei München, Germany
3 Department of Mathematics, Computer Science and Economics, University of Basilicata, viale dell’Ateneo Lucano 10, 85100 Potenza, Italy
4 C.N.R. National Research Council of Italy, IAC Institute for Applied Computing “Mauro Picone”, via P. Castellino 111, 80131 Napoli, Italy
* Author to whom correspondence should be addressed.
Submission received: 6 March 2020 / Revised: 31 March 2020 / Accepted: 1 April 2020 / Published: 7 April 2020
(This article belongs to the Special Issue Multivariate Approximation for solving ODE and PDE)

Abstract: In the present paper, we propose a numerical method for the simultaneous approximation of the finite Hilbert and Hadamard transforms of a given function f, assuming that only the samples of f at equidistant points are known. As reference interval we consider $[-1,1]$, and as approximation tool we use iterated Boolean sums of Bernstein polynomials, also known as generalized Bernstein polynomials. Pointwise estimates of the errors are proved, and some numerical tests are given to show the performance of the procedures and the theoretical results.

1. Introduction

The Hilbert transform in its original form is an integral transform (understood as a Cauchy principal value) given by
$$H(f,t) = \int_{-\infty}^{+\infty}\frac{f(x)}{x-t}\,dx,\qquad t\in\mathbb{R}. \tag{1}$$
Alongside this form there are different variants defining Hilbert transforms on a finite interval, on the torus, or on discrete groups. The objects of our study are the finite Hilbert transform and its derivative, namely the Hadamard transform, both defined on the finite (standard) interval $[-1,1]$. They are given by
$$H(f,t) = \int_{-1}^{1}\frac{f(x)}{x-t}\,dx,\qquad H^1(f,t) = \int_{-1}^{1}\frac{f(x)}{(x-t)^2}\,dx,\qquad -1<t<1, \tag{2}$$
where the first integral has to be understood in the sense of the Cauchy principal value and the second one as a Hadamard finite-part integral.
The interest in integrals of this kind is due to their wide use in formulating boundary-value problems in many areas of mathematical physics (potential theory, fracture mechanics, aerodynamics, elasticity, etc.) in terms of singular integral equations on $[-1,1]$ involving such integrals (see e.g., [1,2,3,4,5] and the references therein).
In fact, the Hilbert transform in its aforementioned form (1) and all its relatives appear in various fields of mathematical analysis, signal processing, physics, and other areas of science. Among them are partial differential equations, optics (X-ray crystallography, electron-atom scattering), electrodynamics and quantum mechanics (Kramers–Kronig relation), and signal processing (phase retrieval, transfer functions of linear systems, spectral factorization). We will not go into details here but instead refer to the comprehensive two-volume treatise by F. King [6] on many aspects of the Hilbert transform and its various variants. Due to its outstanding relevance, it is of great importance to possess procedures which allow the numerical computation of the Hilbert transform with a high degree of accuracy. This problem was studied by many authors for all the different variants of the Hilbert transform and under different assumptions. We limit the citation here to only a few interesting papers [7,8,9].
Our focus in the present paper lies on the numerical computation of the finite Hilbert and Hadamard transforms (2). There is an extensive literature on numerical methods for these transforms; we only mention here [5,10] and the references therein. Many of these methods produce a high degree of approximation, especially when the smoothness of f increases (see e.g., [11,12,13,14,15]). Since they are based on Gaussian quadrature rules and their modified versions or on product rules, they require the values of f to be given at the zeros of Jacobi polynomials, which often is not the case. For example, in many applications the measurements of f are produced by devices which sample the function at equidistant knots. Other procedures which pay attention to this fact and which are frequently used in applications involve composite quadrature rules on equally spaced points. However, quadrature rules of this type suffer from a low degree of approximation or show saturation phenomena. Hence there is a need to establish a new approach which combines the advantages of both of the aforementioned methods.
To move towards this goal, we propose some quadrature rules obtained by means of the sequence $\{B_{m,s}f\}_m$ of the so-called generalized Bernstein polynomials, defined as iterated Boolean sums of the classical Bernstein polynomials $\{B_m f\}_m$ ([16,17,18]). These formulas are based on equally spaced knots in the interval $[-1,1]$, and their convergence order increases with increasing smoothness of the function, in contrast to various popular rules based on piecewise polynomial approximation. Moreover, there is numerical evidence showing that the speed of convergence of the formula increases for higher values of the parameter s and fixed m (see [19], Remark 4.1).
Concerning the numerical computation of the Hilbert transform $H(f)$, we revisit the method introduced in [20] from both the theoretical and the computational point of view. Indeed, according to a more recent result obtained in [21], we estimate the quadrature error in terms of the more refined k-th $\varphi$-modulus of Ditzian and Totik, instead of the ordinary modulus of smoothness. As a consequence, we get error estimates in Sobolev and Hölder–Zygmund spaces, and we are able to state the maximum rate of convergence for functions in such spaces. The second improvement of the method in [20] regards the computation of the quadrature weights, which is performed in a more stable way. It is based on a recurrence relation which does not require the transformation to the canonical basis $\{1,x,\ldots,x^m\}$, but preserves the fundamental Bernstein polynomials $\{p_{m,k}(x)\}_{k=0}^m$.
As regards the Hadamard transform $H^1(f,t)$, before introducing the numerical procedure for its computation, we prove that $H^1(f,t)$ presents algebraic singularities at the endpoints of the interval whenever the derivative of the density function f satisfies a Dini-type condition. Subsequently, we introduce a quadrature rule for approximating $H^1(f,t)$, also based on the polynomials $B_{m,s}f$ and useful for the simultaneous approximation of $H(f,t)$ and $H^1(f,t)$, since the samples of the function f at the equidistant nodes employed in the computation of $H(f,t)$ are reused to approximate $H^1(f,t)$ too. The convergence of such a quadrature rule is proved by using the simultaneous approximation property of the generalized Bernstein polynomials and, similarly to the Hilbert transform case, we state weighted pointwise estimates for the quadrature error.
It turns out that the additional integer parameter s introduced by the $B_{m,s}f$ can be suitably chosen to accelerate the convergence of the quadrature rules for both the transforms H and $H^1$. Moreover, the coefficients of both quadrature rules are given in a simple, compact vector form and can be efficiently computed by recurrence.
The outline of the paper is as follows. Section 2 contains some notation and preliminary results concerning the generalized Bernstein polynomials and the Hilbert and Hadamard transforms. The quadrature rules with the corresponding pointwise error estimates can be found in Section 3 where details about the recurrence relations for their coefficients are also given. Section 4 contains additional numerical details for computing the quadrature weights and some numerical tests which show the performance of our procedures and confirm the theoretical estimates. Finally, in Section 5 the proofs are given.

2. Notation and Preliminary Results

In the sequel C will denote a generic positive constant which may differ at different occurrences, and we write $C \neq C(a,b,\ldots)$ to indicate that C is independent of $a,b,\ldots$. Moreover, if $A,B>0$ depend on some parameters, the notation $A\sim B$ means that there are fixed constants $C_1,C_2>0$ (independent of the parameters in A, B) such that $C_1 A \le B \le C_2 A$. For $m\in\mathbb{N}$, we set $\mathbb{N}_0^m=\{0,1,2,\ldots,m\}$ and denote by $\mathbb{P}_m$ the space of all algebraic polynomials of degree at most m. $C^m$ will denote the space of functions with m continuous derivatives on $[-1,1]$, and $C^0$ is the space of continuous functions on $[-1,1]$ equipped with the uniform norm $\|f\| := \max_{x\in[-1,1]}|f(x)|$. In $C^0$, setting $\varphi(x):=\sqrt{1-x^2}$, the following $\varphi$-modulus of smoothness of Ditzian and Totik can be defined ([22], Theorem 2.1.2)
$$\omega_\varphi^r(f,t) = \sup_{0<h\le t}\left\|\Delta_{h\varphi}^r f\right\|,\qquad r\in\mathbb{N},$$
where
$$\Delta_{h\varphi(x)}^r f(x) = \sum_{k=0}^{r}(-1)^k\binom{r}{k}\, f\!\left(x+(r-2k)\,\frac{h\,\varphi(x)}{2}\right).$$
We recall that ([22], Theorem 2.1.1)
$$\omega_\varphi^r(f,t) \sim K_{r,\varphi}(f,t^r) := \inf\left\{\|f-g\| + t^r\|g^{(r)}\varphi^r\| \;:\; g^{(r-1)}\in AC\right\}, \tag{3}$$
where AC denotes the space of absolutely continuous functions on $[-1,1]$.
By means of this modulus of smoothness, we define the subspace $DT\subset C^0$ of all functions satisfying a Dini-type condition, namely
$$DT = \left\{ f\in C^0 \;:\; \int_0^1 \frac{\omega_\varphi(f,u)}{u}\,du < \infty \right\},$$
where we set $\omega_\varphi(f,u)=\omega_\varphi^1(f,u)$.
Moreover, for $\lambda>0$ the Hölder–Zygmund space of order $\lambda$ is defined by
$$Z_\lambda = \left\{ f\in C^0 \;:\; \sup_{t>0}\frac{\omega_\varphi^r(f,t)}{t^\lambda} < \infty,\ r>\lambda \right\},$$
and equipped with the norm
$$\|f\|_{Z_\lambda} = \|f\| + \sup_{t>0}\frac{\omega_\varphi^r(f,t)}{t^\lambda},\qquad r>\lambda.$$
The space $Z_\lambda$ constitutes a particular case of the Besov-type spaces studied in [23], where it has been proved that ([23], Theorem 2.1)
$$\|f\|_{Z_\lambda} \sim \sup_{n\ge 0}(n+1)^\lambda E_n(f),\qquad E_n(f) := \inf_{P\in\mathbb{P}_n}\|f-P\|. \tag{6}$$
This norm equivalence ensures that the previous definitions are indeed independent of the chosen integer $r>\lambda$. Moreover, by (6) we get an interesting characterization of the continuous functions $f\in Z_\lambda$ in terms of the rate of convergence to zero of the errors of best uniform polynomial approximation of f, which in turn is closely related to the smoothness of f (see e.g., Corollary 8.2.2 and Theorem 6.2.2 of [22]). More precisely, for any continuous function f and any $\lambda>0$ we have
$$f\in Z_\lambda \iff E_n(f) = \mathcal{O}(n^{-\lambda}) \iff \omega_\varphi^r(f,t) = \mathcal{O}(t^\lambda),\qquad r>\lambda. \tag{7}$$
Furthermore, by the previous definitions, for any $r>\lambda>0$ we get
$$\omega_\varphi^r(f,t) \le C\, t^\lambda\, \|f\|_{Z_\lambda},\qquad \forall f\in Z_\lambda,\quad C\neq C(f,t). \tag{8}$$
In the case that $\lambda=k\in\mathbb{N}$, by virtue of (3), the space $Z_\lambda$ is equivalent to the Sobolev space
$$W_k = \left\{ f \;:\; f^{(k-1)}\in AC,\ \|f^{(k)}\varphi^k\| < \infty \right\},\qquad k\in\mathbb{N},$$
equipped with the norm $\|f\|_{W_k} = \|f\| + \|f^{(k)}\varphi^k\|$, and we recall that ([22], Theorem 4.2.1)
$$\omega_\varphi^r(f,t) = \mathcal{O}(t^r) \iff f\in W_r,\qquad r\in\mathbb{N}, \tag{9}$$
$$\omega_\varphi^r(f,t) = o(t^r) \iff f\in\mathbb{P}_{r-1},\qquad r\in\mathbb{N}. \tag{10}$$
Finally, since we are going to use a result of [24] based also on the ordinary moduli of smoothness (cf. Theorem 2), we conclude the subsection by recalling their definition and some properties.
Setting
$$\Delta_h f(x) := f\!\left(x+\frac{h}{2}\right) - f\!\left(x-\frac{h}{2}\right),\qquad \Delta_h^r := \Delta_h(\Delta_h^{r-1}),\quad r>1,$$
the ordinary r-th modulus of smoothness of f is defined as
$$\omega^r(f,t) := \sup_{0<h\le t}\left\|\Delta_h^r f\right\|,\qquad r\in\mathbb{N}.$$
It is related to the $\varphi$-modulus by
$$\omega_\varphi^r(f,t) \le C\,\omega^r(f,t),\qquad C\neq C(f,t).$$
Moreover, setting
$$\widetilde{W}_k := \left\{ f \;:\; f^{(k-1)}\in AC,\ \|f^{(k)}\| < \infty \right\},\qquad k\in\mathbb{N},$$
we have the following analogues of (9) and (10) (see e.g., [22], p. 40)
$$\omega^r(f,t) = \mathcal{O}(t^r) \iff f\in\widetilde{W}_r\subset W_r,\qquad r\in\mathbb{N}, \tag{11}$$
$$\omega^r(f,t) = o(t^r) \iff f\in\mathbb{P}_{r-1},\qquad r\in\mathbb{N}.$$

2.1. Generalized Bernstein Polynomials in $[-1,1]$

For any $f\in C^0$ the m-th Bernstein polynomial $B_m f$ is defined as
$$B_m f(x) := \sum_{k=0}^{m} f(t_k)\, p_{m,k}(x),\qquad t_k := \frac{2k}{m}-1,\quad x\in[-1,1],$$
where
$$p_{m,k}(x) := \frac{1}{2^m}\binom{m}{k}(1+x)^k(1-x)^{m-k},\qquad k=0,1,\ldots,m, \tag{14}$$
are the so-called fundamental Bernstein polynomials. They satisfy the following recurrence relation
$$p_{m,k}(x) = \frac{(1-x)}{2}\, p_{m-1,k}(x) + \frac{(1+x)}{2}\, p_{m-1,k-1}(x),\qquad m=1,2,\ldots, \tag{15}$$
with $p_{0,0}(x)=1$ and $p_{m,k}(x)=0$ if $k<0$ or $k>m$.
The computation of $B_m f(x)$ can be efficiently performed by the de Casteljau algorithm (see e.g., [25]).
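As an illustration, the triangular recurrence (15) can be turned directly into an evaluation scheme. The following Python sketch (function names are ours, not from the paper) evaluates the fundamental polynomials and $B_m f$ at a point; it is not the in-place de Casteljau scheme, but it performs the same order of operations.

```python
def bernstein_basis(m, x):
    """Fundamental Bernstein polynomials p_{m,k}(x), k = 0..m, on [-1, 1],
    built with the recurrence (15):
    p_{m,k} = (1-x)/2 * p_{m-1,k} + (1+x)/2 * p_{m-1,k-1}."""
    p = [1.0]  # p_{0,0}(x) = 1
    for n in range(1, m + 1):
        p = [(1 - x) / 2 * (p[k] if k < n else 0.0)
             + (1 + x) / 2 * (p[k - 1] if k >= 1 else 0.0)
             for k in range(n + 1)]
    return p

def bernstein(f, m, x):
    """m-th Bernstein polynomial B_m f(x), sampling f at t_k = 2k/m - 1."""
    return sum(f(2 * k / m - 1) * p for k, p in enumerate(bernstein_basis(m, x)))
```

Two quick sanity checks: the basis is a partition of unity, and $B_m$ reproduces linear functions exactly.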
Based on the polynomials $B_m f$, the generalized Bernstein polynomials $B_{m,s}f$ were introduced separately in [16,17,18]. They are defined as the following Boolean sums
$$B_{m,s}f := f - (I - B_m)^s f,\qquad s\in\mathbb{N},\qquad B_{m,1}f = B_m f.$$
Note that $B_{m,s}f\in\mathbb{P}_m$ and it can be expressed as
$$B_{m,s}f(x) = \sum_{j=0}^{m} p_{m,j}^{(s)}(x)\, f(t_j),\qquad t_j := \frac{2j}{m}-1,\quad x\in[-1,1],$$
where
$$p_{m,j}^{(s)}(x) = \sum_{i=1}^{s}\binom{s}{i}(-1)^{i-1}\, B_m^{i-1} p_{m,j}(x),\qquad B_m^i = B_m(B_m^{i-1}),\quad i=1,\ldots,s.$$
An estimate of the error $R_{m,s}f := f - B_{m,s}f$ in uniform norm is given by the following theorem.
Theorem 1.
[21] Let $s\in\mathbb{N}$ be fixed. Then for all $m\in\mathbb{N}$ and any $f\in C^0$ we have
$$\|f - B_{m,s}f\| \le C\left[\omega_\varphi^{2s}\!\left(f,\frac{1}{\sqrt m}\right) + \frac{\|f\|}{m^s}\right],\qquad C\neq C(m,f).$$
Moreover, for any $0<\lambda\le 2s$ we obtain
$$\|f - B_{m,s}f\| = \mathcal{O}\!\left(m^{-\lambda/2}\right),\quad m\to\infty \iff \omega_\varphi^{2s}(f,t) = \mathcal{O}(t^\lambda),$$
and the o-saturation class is characterized by
$$\|f - B_{m,s}f\| = o(m^{-s}) \iff f\ \text{is a linear function}.$$
Remark 1.
Note that, unlike the basic Bernstein operator $B_m$, the Boolean sums $B_{m,s}$ may accelerate the speed of convergence as the smoothness of f increases. In particular, taking into account (7)–(9), from Theorem 1 we deduce
$$\|f - B_{m,s}f\| \le C\,\frac{\|f\|_{Z_\lambda}}{m^{\lambda/2}},\qquad \forall f\in Z_\lambda\ \text{with}\ 0<\lambda\le 2s,\quad C\neq C(m,f).$$
Concerning the simultaneous approximation of the derivatives of f by means of the sequence $\{B_{m,s}f\}_m$, the following estimate holds.
Theorem 2
([24], Corollary 1.6). Let $s\in\mathbb{N}$ be fixed. Then for all $m,k\in\mathbb{N}$ and any $f\in C^k$ we have
$$\|(f-B_{m,s}f)^{(k)}\| \le C\cdot\begin{cases}\omega_\varphi^{2s}\!\left(f',\frac{1}{\sqrt m}\right)+\omega^{s}\!\left(f',\frac{1}{\sqrt m}\right)+\omega\!\left(f',\frac{1}{m^{s}}\right), & k=1,\\[1ex]\omega_\varphi^{2s}\!\left(f^{(k)},\frac{1}{\sqrt m}\right)+\omega^{s}\!\left(f^{(k)},\frac{1}{\sqrt m}\right)+\dfrac{\|f^{(k)}\|}{m^{s}}, & k\ge 2,\end{cases}$$
where $\omega:=\omega^1$ and $C\neq C(m,f)$.
Remark 2.
From Theorem 2 and (9), (11) we deduce the following maximum rate of convergence
$$\|f^{(k)} - (B_{m,s}f)^{(k)}\| = \mathcal{O}\!\left(\frac{1}{m^{s}}\right),\quad m\to\infty,\qquad \forall f\in C^k\cap\widetilde{W}_{2s+k},\quad k\in\mathbb{N}.$$
Finally, we give some details on the computation of $B_{m,s}f$ and its first derivative.
Observe that a more convenient representation of the fundamental polynomials $\{p_{m,i}^{(s)}\}_{i=0}^{m}$ is given by [26] (see also [27])
$$\mathbf{p}_m^{(s)}(x) = \mathbf{p}_m(x)\, C_{m,s},\qquad x\in[-1,1], \tag{20}$$
where we set
$$\mathbf{p}_m^{(s)}(x) := \left[p_{m,0}^{(s)}(x),\, p_{m,1}^{(s)}(x),\, \ldots,\, p_{m,m}^{(s)}(x)\right],\qquad \mathbf{p}_m(x) := \left[p_{m,0}(x),\, \ldots,\, p_{m,m}(x)\right],$$
and $C_{m,s}\in\mathbb{R}^{(m+1)\times(m+1)}$ is the change-of-basis matrix given by
$$C_{m,s} = I + (I-A) + \cdots + (I-A)^{s-1},\qquad C_{m,1} = I, \tag{21}$$
where I denotes the identity matrix and A is the matrix with entries
$$A := (A_{i,j}),\qquad A_{i,j} := p_{m,j}(t_i),\qquad i,j\in\mathbb{N}_0^m. \tag{22}$$
Let $c_{i,j}^{(m,s)}$ be the entry $(i,j)$ of $C_{m,s}$; then, in view of (20), we get
$$p_{m,j}^{(s)}(x) = \sum_{i=0}^{m} p_{m,i}(x)\, c_{i,j}^{(m,s)},\qquad x\in[-1,1],$$
and consequently
$$B_{m,s}f(x) = \sum_{i=0}^{m}\sum_{j=0}^{m} c_{i,j}^{(m,s)}\, f(t_j)\, p_{m,i}(x).$$
In matrix-vector notation this reads as
$$B_{m,s}f(x) = \mathbf{p}_m(x)\, C_{m,s}\, \mathbf{f}, \tag{25}$$
with
$$\mathbf{f} := [f(t_0), f(t_1), \ldots, f(t_m)]^T.$$
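The construction above can be sketched in a few lines of Python (all names are ours): build A from (22), accumulate the Neumann-type sum (21), and evaluate the matrix-vector form (25). The small tests below exploit the facts that $B_{m,s}$ reproduces linear functions and that, for $f(x)=x^2$, $B_m f(x) = x^2+(1-x^2)/m$, so the $s=2$ iterate has error $(1-x^2)/m^2$.

```python
def bernstein_basis(m, x):
    # fundamental Bernstein polynomials on [-1, 1] via the recurrence (15)
    p = [1.0]
    for n in range(1, m + 1):
        p = [(1 - x) / 2 * (p[k] if k < n else 0.0)
             + (1 + x) / 2 * (p[k - 1] if k >= 1 else 0.0)
             for k in range(n + 1)]
    return p

def change_of_basis(m, s):
    """C_{m,s} = I + (I-A) + ... + (I-A)^{s-1}, with A_{i,j} = p_{m,j}(t_i) as in (22)."""
    n = m + 1
    A = [bernstein_basis(m, 2 * i / m - 1) for i in range(n)]  # row i holds p_{m,j}(t_i)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    IA = [[I[i][j] - A[i][j] for j in range(n)] for i in range(n)]
    C = [row[:] for row in I]   # running sum, starts at (I-A)^0 = I
    P = [row[:] for row in I]   # running power (I-A)^k
    for _ in range(1, s):
        P = [[sum(P[i][k] * IA[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
        C = [[C[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return C

def generalized_bernstein(f, m, s, x):
    """B_{m,s} f(x) = p_m(x) C_{m,s} f, the matrix-vector form (25)."""
    n = m + 1
    C = change_of_basis(m, s)
    fv = [f(2 * j / m - 1) for j in range(n)]
    p = bernstein_basis(m, x)
    return sum(p[i] * C[i][j] * fv[j] for i in range(n) for j in range(n))
```

For dense matrices of this size a library routine would normally be preferred; the pure-Python loops are kept only to mirror the formulas one-to-one.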
As regards the derivative of the Bernstein polynomial $B_{m,s}f$, we obtain from (25) the following useful representation
$$(B_{m,s}f)'(x) = \mathbf{p}_m'(x)\, C_{m,s}\, \mathbf{f}, \tag{26}$$
where
$$\mathbf{p}_m'(x) := \left[p_{m,0}'(x),\, \ldots,\, p_{m,m}'(x)\right].$$
Finally, concerning the entries of the vector $\mathbf{p}_m'(x)$, i.e., the derivatives of the fundamental Bernstein polynomials at $x\in[-1,1]$, starting from the definition (14), easy computations yield the expression
$$p_{m,k}'(x) = \frac{m}{2}\left[p_{m-1,k-1}(x) - p_{m-1,k}(x)\right],\qquad k=0,\ldots,m,$$
with the usual convention $p_{m,j}(x) = 0$ if $j\notin\mathbb{N}_0^m$.
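The derivative formula above is easy to check numerically. A minimal sketch (names ours), validated against a central finite difference:

```python
def bernstein_basis(m, x):
    # recurrence (15) for p_{m,k}(x) on [-1, 1]
    p = [1.0]
    for n in range(1, m + 1):
        p = [(1 - x) / 2 * (p[k] if k < n else 0.0)
             + (1 + x) / 2 * (p[k - 1] if k >= 1 else 0.0)
             for k in range(n + 1)]
    return p

def bernstein_basis_deriv(m, x):
    """p'_{m,k}(x) = (m/2) [p_{m-1,k-1}(x) - p_{m-1,k}(x)],
    with p_{m-1,j} = 0 for j outside 0..m-1."""
    q = bernstein_basis(m - 1, x)
    return [m / 2 * ((q[k - 1] if k >= 1 else 0.0) - (q[k] if k <= m - 1 else 0.0))
            for k in range(m + 1)]
```

Since the $p_{m,k}$ form a partition of unity, the entries of $\mathbf{p}_m'(x)$ must sum to zero, which is another cheap consistency check.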

2.2. Hilbert and Hadamard Transforms

First, we recall that the finite Hilbert transform $H(f,t)$ is defined by
$$H(f,t) = \int_{-1}^{1}\frac{f(x)}{x-t}\,dx = \lim_{\epsilon\to 0^+}\left[\int_{-1}^{t-\epsilon}\frac{f(x)}{x-t}\,dx + \int_{t+\epsilon}^{1}\frac{f(x)}{x-t}\,dx\right].$$
The following theorem provides a sufficient condition for the existence of $H(f,t)$ in $(-1,1)$ when the density function f satisfies a Dini-type condition. It also shows the behavior of $H(f,t)$ as t approaches the endpoints of the interval $[-1,1]$.
Theorem 3
([28], Theorem 2.1). For any $f\in DT$ and $|t|<1$, we have
$$\left(\log\frac{e}{1-t^2}\right)^{-1}|H(f,t)| \le C\left[\|f\| + \int_0^1\frac{\omega_\varphi(f,u)}{u}\,du\right],\qquad C\neq C(f,t).$$
Consider now $H^1(f,t)$, which is the finite part of the divergent integral in the Hadamard sense (see for instance [5,10,14]), i.e., defined for $|t|<1$ as (cf. [5], Equation (1.3))
$$H^1(f,t) = \int_{-1}^{1}\frac{f(x)}{(x-t)^2}\,dx = \lim_{\varepsilon\to 0^+}\left[\int_{-1}^{t-\varepsilon}\frac{f(x)}{(x-t)^2}\,dx + \int_{t+\varepsilon}^{1}\frac{f(x)}{(x-t)^2}\,dx - \frac{2f(t)}{\varepsilon}\right]. \tag{29}$$
An alternative definition interprets $H^1(f,t)$ as the first derivative of $H(f)$ at t, i.e.,
$$H^1(f,t) = \frac{d}{dt}\int_{-1}^{1}\frac{f(x)}{x-t}\,dx,\qquad |t|<1, \tag{30}$$
(30) and (29) being equivalent when f is a Hölder continuous function (see [5]).
By the following theorem, we state that for all functions f with $f'\in DT$, $H^1(f,t)$ exists finite for any $|t|<1$, while it diverges algebraically at the endpoints of the interval $[-1,1]$.
Theorem 4.
Let the function $f\in C^1$ be such that $f'\in DT$. Then, for any $-1<t<1$, we have
$$\varphi^2(t)\,|H^1(f,t)| \le C\left[\|f\| + \int_0^1\frac{\omega_\varphi(f',\tau)}{\tau}\,d\tau\right],\qquad C\neq C(f,t).$$

3. The Quadrature Rules

3.1. On the Computation of $H(f,t)$

The numerical method for computing $H(f,t)$ is based on the following proposition.
Proposition 1.
For any $f\in DT$ and for any $|t|<1$, we have
$$H(f,t) = \int_{-1}^{1}\frac{f(x)-f(t)}{x-t}\,dx + f(t)\log\frac{1-t}{1+t}. \tag{32}$$
In view of (32), we mainly must approximate the function
$$F(f,t) := \int_{-1}^{1}\frac{f(x)-f(t)}{x-t}\,dx,\qquad -1<t<1.$$
For any given $s\in\mathbb{N}$, by means of the polynomial sequence $\{B_{m,s}f\}_m$, we define the following approximation of $F(f,t)$
$$F_{m,s}(f,t) := \int_{-1}^{1}\frac{B_{m,s}f(x) - B_{m,s}f(t)}{x-t}\,dx \tag{34}$$
and let
$$\Phi_{m,s}(f,t) := F(f,t) - F_{m,s}(f,t),\qquad -1<t<1.$$
Note that
$$F_{m,s}(f,t) = \sum_{j=0}^{m} f(t_j)\int_{-1}^{1}\frac{p_{m,j}^{(s)}(x) - p_{m,j}^{(s)}(t)}{x-t}\,dx =: \sum_{j=0}^{m} f(t_j)\, D_{m,j}^{(s)}(t), \tag{36}$$
and, taking into account the relation in (20) between the bases $\{p_{m,i}(x)\}_{i\in\mathbb{N}_0^m}$ and $\{p_{m,i}^{(s)}(x)\}_{i\in\mathbb{N}_0^m}$, we have
$$D_{m,j}^{(s)}(t) = \sum_{i=0}^{m} c_{i,j}^{(m,s)}\int_{-1}^{1}\frac{p_{m,i}(x) - p_{m,i}(t)}{x-t}\,dx =: \sum_{i=0}^{m} c_{i,j}^{(m,s)}\, q_{m,i}(t). \tag{37}$$
Concerning the computation of $\{q_{m,i}(t)\}_{i\in\mathbb{N}_0^m}$, we can prove the following.
Proposition 2.
For the sequence $\{q_{m,i}(t)\}$ the following recurrence relation holds
$$\begin{aligned} &q_{0,0}(t) = 0,\qquad q_{1,0}(t) = -1,\qquad q_{1,1}(t) = 1,\\ &q_{m,0}(t) = \frac{(1-t)}{2}\, q_{m-1,0}(t) - \frac{1}{m},\\ &q_{m,k}(t) = \frac{(1-t)}{2}\, q_{m-1,k}(t) + \frac{(1+t)}{2}\, q_{m-1,k-1}(t),\qquad 1\le k\le m-1,\\ &q_{m,m}(t) = \frac{(1+t)}{2}\, q_{m-1,m-1}(t) + \frac{1}{m}. \end{aligned} \tag{38}$$
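The recurrence of Proposition 2 costs only $\mathcal{O}(m^2)$ operations per point t. A short Python sketch (names ours); the exact values used in the checks, $q_{1,0}=-1$, $q_{1,1}=1$ and $q_{2}(t)=[-1+t/2,\,-t,\,1+t/2]$, follow by integrating the definition of $q_{m,i}$ by hand.

```python
def q_vector(m, t):
    """q_{m,i}(t) = integral over [-1,1] of (p_{m,i}(x) - p_{m,i}(t))/(x - t) dx,
    computed with the recurrence relation of Proposition 2."""
    q = [0.0]                                   # q_{0,0}(t) = 0
    for n in range(1, m + 1):
        nxt = [(1 - t) / 2 * q[0] - 1.0 / n]    # k = 0
        nxt += [(1 - t) / 2 * q[k] + (1 + t) / 2 * q[k - 1] for k in range(1, n)]
        nxt += [(1 + t) / 2 * q[n - 1] + 1.0 / n]   # k = n
        q = nxt
    return q
```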
Setting
$$\mathbf{q}_m(t) = [q_{m,0}(t),\, q_{m,1}(t),\, \ldots,\, q_{m,m}(t)],$$
the quadrature rule (34) takes the form
$$F_{m,s}(f,t) = \mathbf{q}_m(t)\, C_{m,s}\, \mathbf{f}. \tag{39}$$
This formula can be directly applied to approximate $H(f,t)$ in the form given in (32); i.e., supposing $f(t)$ is known, we can approximate
$$H(f,t) = F(f,t) + f(t)\log\frac{1-t}{1+t} \approx F_{m,s}(f,t) + f(t)\log\frac{1-t}{1+t}.$$
In the case that only the samples $f(t_j)$, with $t_j\neq t$, are given, we propose to approximate $H(f,t)$ by
$$H_{m,s}(f,t) := F_{m,s}(f,t) + B_{m,s}f(t)\log\frac{1-t}{1+t}. \tag{40}$$
Using matrix-vector notation as in (39) and (25), we arrive at
$$H_{m,s}(f,t) = \left[\mathbf{q}_m(t) + \log\frac{1-t}{1+t}\,\mathbf{p}_m(t)\right] C_{m,s}\, \mathbf{f}. \tag{41}$$
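Putting the pieces together, the following end-to-end Python sketch implements the first variant (exact $f(t)$ available); all function names are ours, and the pure-Python matrix loops only mirror the formulas. For $f\equiv 1$ and $f(x)=x$ the rule is exact, since $B_{m,s}$ reproduces them: $H(1,t)=\log\frac{1-t}{1+t}$ and $H(x,t)=2+t\log\frac{1-t}{1+t}$.

```python
import math

def bernstein_basis(m, x):
    # fundamental Bernstein polynomials on [-1, 1] via the recurrence (15)
    p = [1.0]
    for n in range(1, m + 1):
        p = [(1 - x) / 2 * (p[k] if k < n else 0.0)
             + (1 + x) / 2 * (p[k - 1] if k >= 1 else 0.0)
             for k in range(n + 1)]
    return p

def q_vector(m, t):
    # recurrence of Proposition 2 for q_{m,i}(t)
    q = [0.0]
    for n in range(1, m + 1):
        nxt = [(1 - t) / 2 * q[0] - 1.0 / n]
        nxt += [(1 - t) / 2 * q[k] + (1 + t) / 2 * q[k - 1] for k in range(1, n)]
        nxt += [(1 + t) / 2 * q[n - 1] + 1.0 / n]
        q = nxt
    return q

def change_of_basis(m, s):
    # C_{m,s} = I + (I-A) + ... + (I-A)^{s-1}, with A_{i,j} = p_{m,j}(t_i)
    n = m + 1
    A = [bernstein_basis(m, 2 * i / m - 1) for i in range(n)]
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    IA = [[I[i][j] - A[i][j] for j in range(n)] for i in range(n)]
    C = [row[:] for row in I]
    P = [row[:] for row in I]
    for _ in range(1, s):
        P = [[sum(P[i][k] * IA[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
        C = [[C[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return C

def hilbert_quadrature(f, m, s, t):
    """F_{m,s}(f,t) + log((1-t)/(1+t)) f(t), with F_{m,s} = q_m(t) C_{m,s} f as in (39)."""
    n = m + 1
    C = change_of_basis(m, s)
    fv = [f(2 * j / m - 1) for j in range(n)]
    q = q_vector(m, t)
    w = [sum(q[i] * C[i][j] for i in range(n)) for j in range(n)]  # row vector q_m(t) C_{m,s}
    return sum(w[j] * fv[j] for j in range(n)) + math.log((1 - t) / (1 + t)) * f(t)
```

For a non-polynomial f, the result can be cross-checked against a composite rule applied to the regularized integrand $(f(x)-f(t))/(x-t)$, which is smooth.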
The quadrature error can then be expressed as
$$E_{m,s}(f,t) := H(f,t) - H_{m,s}(f,t) = \Phi_{m,s}(f,t) + \log\frac{1-t}{1+t}\left[f(t) - B_{m,s}f(t)\right].$$
Concerning the convergence of both the previous quadrature rules $F_{m,s}$ and $H_{m,s}$, the following theorem estimates the associated errors $\Phi_{m,s}$ and $E_{m,s}$, respectively.
Theorem 5.
Let $-1<t<1$. Then for any $f\in DT$, we have
$$\left(\log\frac{e}{1-t^2}\right)^{-1}|E_{m,s}(f,t)| \le C\log m\left[\omega_\varphi^{2s}\!\left(f,\frac{1}{\sqrt m}\right) + \frac{\|f\|}{m^s}\right] + C\int_0^{\frac 1m}\frac{\omega_\varphi^r(f,u)}{u}\,du,$$
with $r<m$ and $C\neq C(m,f,t)$.
The same estimate continues to hold for $\Phi_{m,s}(f,t)$, which also satisfies
$$|\Phi_{m,s}(f,t)| \le C\left[\omega_\varphi^{2s}\!\left(f',\frac{1}{\sqrt m}\right) + \omega^{s}\!\left(f',\frac{1}{\sqrt m}\right) + \omega\!\left(f',\frac{1}{m^{s}}\right)\right],\qquad \forall f\in C^1,$$
with $C\neq C(m,f,t)$.
In the case of smoother functions, from the previous estimates and (7), (8), (9) and (11), we easily get
Corollary 1.
Let $-1<t<1$. Then for all $f\in Z_\lambda$, with $0<\lambda\le 2s$, we have
$$|E_{m,s}(f,t)| \le C\log\frac{e}{1-t^2}\;\|f\|_{Z_\lambda}\;\frac{\log m}{m^{\lambda/2}},\qquad C\neq C(m,f,t),$$
and the same holds for $|\Phi_{m,s}(f,t)|$. Moreover, for all $f\in C^{k+1}$, with $1\le k\le 2s$, we have
$$|\Phi_{m,s}(f,t)| \le \frac{C}{m^{k/2}},\qquad C\neq C(m,t).$$
In conclusion, we remark that in proving Theorem 5 we also stated the following relations between the quadrature errors and the approximation errors of the generalized Bernstein polynomials
$$|E_{m,s}(f,t)| \le C\log\frac{e}{1-t^2}\left[\log m\,\|f-B_{m,s}f\| + \int_0^{\frac 1m}\frac{\omega_\varphi^r(f,u)}{u}\,du\right],\qquad \forall f\in DT,$$
$$|\Phi_{m,s}(f,t)| \le C\,\|(f-B_{m,s}f)'\|,\qquad \forall f\in C^1.$$

3.2. On the Computation of $H^1(f,t)$

We are going to use the following proposition.
Proposition 3.
For any $f\in C^1$ such that $f'\in DT$ and for all $|t|<1$, we have
$$H^1(f,t) = \int_{-1}^{1}\frac{f(x)-f(t)-f'(t)(x-t)}{(x-t)^2}\,dx + f'(t)\log\frac{1-t}{1+t} - \frac{2f(t)}{1-t^2}. \tag{44}$$
Let
$$F^1(f,t) := \int_{-1}^{1}\frac{f(x)-f(t)-f'(t)(x-t)}{(x-t)^2}\,dx,\qquad -1<t<1.$$
Supposing both $f(t)$ and $f'(t)$ are known, we can get the exact value of the non-integral part at the right-hand side of (44). In this case, the numerical computation of $H^1(f,t)$ can be performed by the following quadrature rule
$$F^1(f,t) = F_{m,s}^1(f,t) + \Phi_{m,s}^1(f,t),$$
where
$$F_{m,s}^1(f,t) := \int_{-1}^{1}\frac{B_{m,s}f(x) - B_{m,s}f(t) - (B_{m,s}f)'(t)(x-t)}{(x-t)^2}\,dx = \frac{d}{dt}F_{m,s}(f,t). \tag{46}$$
Using (36), (37) and (46), we get
$$F_{m,s}^1(f,t) = \sum_{j=0}^{m} f(t_j)\,\frac{d}{dt}D_{m,j}^{(s)}(t),\qquad \frac{d}{dt}D_{m,j}^{(s)}(t) = \sum_{i=0}^{m} c_{i,j}^{(m,s)}\, d_{m,i}(t),\qquad d_{m,i}(t) := q_{m,i}'(t),$$
where the polynomials $d_{m,i}(t)$, $i=0,\ldots,m$, can be computed recursively, according to
where the polynomials d m , i ( t ) , i = 0 , , m , can be computed recursively, according to
Proposition 4.
The sequence $d_{m,i}(t)$, $i=0,\ldots,m$, satisfies the following recurrence relation
$$\begin{aligned} &d_{1,0}(t) = 0,\qquad d_{1,1}(t) = 0,\\ &d_{m,0}(t) = \frac{(1-t)}{2}\, d_{m-1,0}(t) - \frac{1}{2}\, q_{m-1,0}(t),\\ &d_{m,k}(t) = \frac{(1-t)}{2}\, d_{m-1,k}(t) - \frac{1}{2}\, q_{m-1,k}(t) + \frac{(1+t)}{2}\, d_{m-1,k-1}(t) + \frac{1}{2}\, q_{m-1,k-1}(t),\qquad 1\le k\le m-1,\\ &d_{m,m}(t) = \frac{(1+t)}{2}\, d_{m-1,m-1}(t) + \frac{1}{2}\, q_{m-1,m-1}(t). \end{aligned}$$
The previous recurrence relation can be easily deduced from Proposition 2.
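Since the recurrence for $d_{m,i}$ reuses the $q_{m-1,k}$ values, both sequences are naturally advanced together. A Python sketch (names ours); the closed forms $q_2(t)=[-1+t/2,\,-t,\,1+t/2]$, hence $d_2(t)=[1/2,\,-1,\,1/2]$, are obtained by hand from the definitions, and a finite difference of $q_m$ gives an independent check that $d_m = q_m'$.

```python
def q_and_d(m, t):
    """Advance the recurrences of Propositions 2 and 4 together,
    returning q_m(t) and d_m(t) = q'_m(t), for m >= 1."""
    q, d = [-1.0, 1.0], [0.0, 0.0]              # level m = 1
    for n in range(2, m + 1):
        qn = [(1 - t) / 2 * q[0] - 1.0 / n]
        qn += [(1 - t) / 2 * q[k] + (1 + t) / 2 * q[k - 1] for k in range(1, n)]
        qn += [(1 + t) / 2 * q[n - 1] + 1.0 / n]
        dn = [(1 - t) / 2 * d[0] - q[0] / 2]
        dn += [(1 - t) / 2 * d[k] - q[k] / 2 + (1 + t) / 2 * d[k - 1] + q[k - 1] / 2
               for k in range(1, n)]
        dn += [(1 + t) / 2 * d[n - 1] + q[n - 1] / 2]
        q, d = qn, dn
    return q, d
```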
Let
$$\mathbf{d}_m(t) = \left[d_{m,0}(t),\, d_{m,1}(t),\, \ldots,\, d_{m,m}(t)\right];$$
then the quadrature rule (46) takes the following form
$$F_{m,s}^1(f,t) = \mathbf{d}_m(t)\, C_{m,s}\, \mathbf{f}. \tag{49}$$
In the case that only the vector $\mathbf{f}$ is known, we have to approximate also the non-integral part in (44), and we propose the following quadrature rule
$$H^1(f,t) = H_{m,s}^1(f,t) + E_{m,s}^1(f,t),$$
where $E_{m,s}^1(f,t)$ denotes the error and
$$H_{m,s}^1(f,t) := F_{m,s}^1(f,t) + (B_{m,s}f)'(t)\log\frac{1-t}{1+t} - \frac{2}{1-t^2}\, B_{m,s}f(t). \tag{51}$$
By (49), (25) and (26), the rule in vector form is given by
$$H_{m,s}^1(f,t) = \left[\mathbf{d}_m(t) + \log\frac{1-t}{1+t}\,\mathbf{p}_m'(t) - \frac{2}{1-t^2}\,\mathbf{p}_m(t)\right] C_{m,s}\, \mathbf{f}. \tag{52}$$
We point out that both the rules (49) and (52) are based on the same data vector $\mathbf{f}$ used in the rules (39) and (41). Hence our method allows the simultaneous approximation of the Hilbert transform $H(f,t)$ and of its first derivative $H^1(f,t)$, for $|t|<1$, by using the same samples of the function f.
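For the variant with exact $f(t)$ and $f'(t)$ in the non-integral part of (44), a hedged end-to-end Python sketch follows (all names ours). It is exact for $f\equiv 1$ and $f(x)=x$, since $H^1(1,t)=-2/(1-t^2)$ and $H^1(x,t)=\log\frac{1-t}{1+t}-2t/(1-t^2)$ and the integral part vanishes for these densities.

```python
import math

def bernstein_basis(m, x):
    # fundamental Bernstein polynomials on [-1, 1] via the recurrence (15)
    p = [1.0]
    for n in range(1, m + 1):
        p = [(1 - x) / 2 * (p[k] if k < n else 0.0)
             + (1 + x) / 2 * (p[k - 1] if k >= 1 else 0.0)
             for k in range(n + 1)]
    return p

def change_of_basis(m, s):
    # C_{m,s} = I + (I-A) + ... + (I-A)^{s-1}, with A_{i,j} = p_{m,j}(t_i)
    n = m + 1
    A = [bernstein_basis(m, 2 * i / m - 1) for i in range(n)]
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    IA = [[I[i][j] - A[i][j] for j in range(n)] for i in range(n)]
    C = [row[:] for row in I]
    P = [row[:] for row in I]
    for _ in range(1, s):
        P = [[sum(P[i][k] * IA[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
        C = [[C[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return C

def q_and_d(m, t):
    # joint recurrences of Propositions 2 and 4: q_m(t) and d_m(t) = q'_m(t)
    q, d = [-1.0, 1.0], [0.0, 0.0]
    for n in range(2, m + 1):
        qn = [(1 - t) / 2 * q[0] - 1.0 / n]
        qn += [(1 - t) / 2 * q[k] + (1 + t) / 2 * q[k - 1] for k in range(1, n)]
        qn += [(1 + t) / 2 * q[n - 1] + 1.0 / n]
        dn = [(1 - t) / 2 * d[0] - q[0] / 2]
        dn += [(1 - t) / 2 * d[k] - q[k] / 2 + (1 + t) / 2 * d[k - 1] + q[k - 1] / 2
               for k in range(1, n)]
        dn += [(1 + t) / 2 * d[n - 1] + q[n - 1] / 2]
        q, d = qn, dn
    return q, d

def hadamard_quadrature(f, df, m, s, t):
    """F^1_{m,s}(f,t) = d_m(t) C_{m,s} f as in (49), plus the exact
    non-integral part of (44) built from f(t) and f'(t) = df(t)."""
    n = m + 1
    C = change_of_basis(m, s)
    fv = [f(2 * j / m - 1) for j in range(n)]
    _, d = q_and_d(m, t)
    w = [sum(d[i] * C[i][j] for i in range(n)) for j in range(n)]
    F1 = sum(w[j] * fv[j] for j in range(n))
    return F1 + df(t) * math.log((1 - t) / (1 + t)) - 2 * f(t) / (1 - t * t)
```

As with the Hilbert rule, a non-polynomial density can be cross-checked against a composite rule applied to the smooth regularized integrand of (44).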
Concerning the convergence of the quadrature rules $F_{m,s}^1$ and $H_{m,s}^1$, the following theorem estimates the associated errors $\Phi_{m,s}^1$ and $E_{m,s}^1$ by means of the approximation errors of f and $f'$ by generalized Bernstein polynomials.
Theorem 6.
Let $-1<t<1$. Then for any $f\in C^1$ such that $f'\in DT$, we have
$$(1-t^2)\,|E_{m,s}^1(f,t)| \le C\left[\|f-B_{m,s}f\| + \log m\,\|(f-B_{m,s}f)'\| + \int_0^{\frac 1m}\frac{\omega_\varphi^r(f',u)}{u}\,du\right],$$
with $r<m$ and $C\neq C(m,f,t)$.
The same estimate can also be applied to $\Phi_{m,s}^1(f,t)$, which for functions in $C^2$ also satisfies
$$|\Phi_{m,s}^1(f,t)| \le C\,\|(f-B_{m,s}f)''\|,\qquad \forall f\in C^2,$$
with $C\neq C(m,f,t)$.
Thanks to this theorem, by Theorems 1 and 2 we can easily get estimates of the quadrature errors $E_{m,s}^1$ and $\Phi_{m,s}^1$ based on several moduli of smoothness of f and $f'$. For brevity, we omit the details and only state the following result, which easily follows by using (9) and (11) in the estimates of Theorems 1 and 2, which in turn are used in Theorem 6.
Corollary 2.
Let $-1<t<1$ and $s\in\mathbb{N}$. For all functions $f\in C^{k+1}$, with $1\le k\le 2s$, and for sufficiently large $m\in\mathbb{N}$, we have
$$(1-t^2)\,|E_{m,s}^1(f,t)| \le C\,\frac{\log m}{m^{k/2}},\qquad C\neq C(m,t).$$
The same estimate holds for $\Phi_{m,s}^1(f,t)$, which also satisfies
$$|\Phi_{m,s}^1(f,t)| \le \frac{C}{m^{k/2}},\qquad C\neq C(m,t),\qquad \forall f\in C^{k+2},\quad 1\le k\le 2s.$$

4. Numerical Details and Some Experiments

First, we recall some details given in [19] about the computation of the matrix $C_{m,s}$ in (21). We start from the matrix A defined in (22). It is constructed row by row by making use of the triangular scheme in (15), so that each row requires $m^2$ long operations. On the other hand, since A is centrosymmetric, i.e., $A = JAJ$, where J is the counter-identity matrix of order m+1 ($J_{i,j} = \delta_{i,m-j}$, $i,j\in\mathbb{N}_0^m$, $\delta_{h,k}$ being the Kronecker delta), it is enough to compute only the first $(m+1)/2$ or $(m+2)/2$ rows, according as m is odd or even, respectively. Therefore, the construction of A requires about $m^3/2$ long operations. Furthermore, since the product of two centrosymmetric matrices can be performed in about $m^3/4$ long operations [29], the matrix $C_{m,s}$ in (21) can be constructed in about $(s-2)\,m^3/4$ long operations, instead of $(s-2)\,m^3$, i.e., with a saving of about 75%. A more significant reduction is achieved when the parameter $s = 2^p$, $p\in\mathbb{N}$, $p\ge 1$. Indeed, by using ([30], (14))
$$C_{m,2^p} = C_{m,2^{p-1}} + (I-A)^{2^{p-1}}\, C_{m,2^{p-1}}, \tag{55}$$
the matrix $C_{m,s}$ can be determined by $2(\log_2 s - 1)$ products of centrosymmetric matrices, therefore requiring about $\frac{m^3}{2}(\log_2 s - 1)$ long operations. For instance, for $s=256$, if we use Equation (21), the 255 required products of centrosymmetric matrices take about $255\,m^3/4 \approx 63.7\,m^3$ long operations. On the contrary, if we use (55), then only about $3.5\,m^3$ long operations are needed.
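The doubling identity (55) is a purely algebraic statement, so it can be verified against the direct sum (21) for any matrix A. The following Python sketch (names ours, and without the centrosymmetry savings, which only cut the constant of each product) does exactly that for the Bernstein matrix A with a small m.

```python
def bernstein_basis(m, x):
    # fundamental Bernstein polynomials on [-1, 1] via the recurrence (15)
    p = [1.0]
    for n in range(1, m + 1):
        p = [(1 - x) / 2 * (p[k] if k < n else 0.0)
             + (1 + x) / 2 * (p[k - 1] if k >= 1 else 0.0)
             for k in range(n + 1)]
    return p

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def c_direct(A, s):
    # C_{m,s} = I + (I-A) + ... + (I-A)^{s-1}, i.e. Equation (21): s - 1 products
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    IA = [[I[i][j] - A[i][j] for j in range(n)] for i in range(n)]
    C, P = [r[:] for r in I], [r[:] for r in I]
    for _ in range(1, s):
        P = matmul(P, IA)
        C = [[C[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return C

def c_doubling(A, p):
    # C_{m,2^p} via (55): two products per doubling step, so O(log s) products
    n = len(A)
    C = [[float(i == j) for j in range(n)] for i in range(n)]      # C_{m,1} = I
    P = [[C[i][j] - A[i][j] for j in range(n)] for i in range(n)]  # (I-A)^{2^0}
    for _ in range(p):
        PC = matmul(P, C)
        C = [[C[i][j] + PC[i][j] for j in range(n)] for i in range(n)]
        P = matmul(P, P)   # square the power for the next doubling step
    return C
```

The test also confirms the centrosymmetry $A = JAJ$ exploited in the operation counts above.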
We now propose some numerical tests obtained by approximating $H(f,t)$ and $H^1(f,t)$ by means of the quadrature rules $\{F_{m,s}(f,t)\}_m$ and $\{F_{m,s}^1(f,t)\}_m$, respectively; namely, for a given $t\in(-1,1)$, we compute
$$H(f,t) \approx F_{m,s}(f,t) + f(t)\log\frac{1-t}{1+t},\qquad H^1(f,t) \approx F_{m,s}^1(f,t) + f'(t)\log\frac{1-t}{1+t} - \frac{2f(t)}{1-t^2}.$$
For any choice of m we consider different values of s. In the tables we report the approximate values of the integrals. All the computations have been performed in double-machine precision ($eps \approx 2.22044\times 10^{-16}$).
Example 1.
$$H(f,t) = \int_{-1}^{1}\frac{\sin x}{x-t}\,dx,\qquad H^1(f,t) = \int_{-1}^{1}\frac{\sin x}{(x-t)^2}\,dx,\qquad t=0.1.$$
Here $f\in C^\infty$ and, as we can see, the performance of the quadrature rules improves keeping m fixed and increasing the value of s. An empty cell means that there is no improvement in the computation. In particular, as Table 1 and Table 2 show, machine precision is attained for m=128 and s=16, as well as for m=64 and s=32.
Example 2.
$$H(f,t) = \int_{-1}^{1}\frac{|x-0.5|^{15/2}}{x-t}\,dx,\qquad H^1(f,t) = \int_{-1}^{1}\frac{|x-0.5|^{15/2}}{(x-t)^2}\,dx,\qquad t=0.3.$$
In this case $f\in Z_{15/2}$ and, as the results in Table 3 and Table 4 show, the numerical errors agree with the theoretical estimates.
Example 3.
$$H(f,t) = \int_{-1}^{1}\frac{\exp(x)\sin(x)}{1+x^2}\,\frac{dx}{x-t},\qquad t=0.7.$$
Here $f\in C^\infty$. In this test (see Table 5), we show the performance of the quadrature rule when m is fixed and s increases, highlighting that an improvement is obtained, but apparently only up to a certain threshold. This behavior will be the subject of future investigations.

5. Proofs

The following three lemmas will be useful in the sequel.
Lemma 1.
Let $f\in DT$ and $P_m\in\mathbb{P}_m$, $m\ge 2$. Then
$$\int_0^{\frac 1m}\frac{\omega_\varphi(f-P_m,t)}{t}\,dt \le C\left[\|f-P_m\| + \int_0^{\frac 1m}\frac{\omega_\varphi^r(f,t)}{t}\,dt\right],$$
where $r\in\mathbb{N}$ with $r<m$ and $0<C\neq C(m,f)$.
Proof.
Taking into account that $\omega_\varphi(f,t)$ is a non-decreasing function of t, we have
$$\int_0^{\frac 1m}\frac{\omega_\varphi(f-P_m,t)}{t}\,dt = \sum_{j=m}^{\infty}\int_{\frac{1}{j+1}}^{\frac 1j}\frac{\omega_\varphi(f-P_m,t)}{t}\,dt \le C\sum_{j=m}^{\infty}\frac{\omega_\varphi\!\left(f-P_m,\frac 1j\right)}{j}.$$
Then, by applying the following Stechkin-type inequality ([22], Theorem 7.2.4)
$$\omega_\varphi(f,t) \le C\, t\sum_{i=0}^{\lfloor 1/t\rfloor} E_i(f),\qquad 0<C\neq C(f,t),$$
we get
$$\int_0^{\frac 1m}\frac{\omega_\varphi(f-P_m,t)}{t}\,dt \le C\sum_{j=m}^{\infty}\frac{1}{j^2}\sum_{i=0}^{j} E_i(f-P_m) = C\sum_{j=m}^{\infty}\frac{1}{j^2}\left[\sum_{i=0}^{m-1} E_i(f-P_m) + \sum_{i=m}^{j} E_i(f-P_m)\right] \le C\,\|f-P_m\|\sum_{j=m}^{\infty}\frac{m}{j^2} + C\sum_{j=m}^{\infty}\frac{1}{j^2}\sum_{i=m}^{j} E_i(f),$$
where we used $E_i(f-P_m)=E_i(f)$ for $i\ge m$. Taking into account that $\sum_{j=n}^{\infty}\frac{1}{j^2} \le \frac{C}{n}$ holds for all $n\in\mathbb{N}$, with $C\neq C(n)$, we obtain
$$\int_0^{\frac 1m}\frac{\omega_\varphi(f-P_m,t)}{t}\,dt \le C\,\|f-P_m\| + C\sum_{i=m}^{\infty} E_i(f)\sum_{j=i}^{\infty}\frac{1}{j^2} \le C\,\|f-P_m\| + C\sum_{i=m}^{\infty}\frac{E_i(f)}{i}.$$
Finally, by applying the Jackson-type inequality ([22], Theorem 7.2.1) (see also [31], Section 2.5.2)
$$E_m(f) \le C\,\omega_\varphi^r\!\left(f,\frac 1m\right),\qquad r<m,\quad C\neq C(m,f),$$
and recalling that ([22], (4.1.3))
$$\omega_\varphi(g,\alpha t) \le C\,\alpha\,\omega_\varphi(g,t),\qquad \alpha\ge 1,\quad C\neq C(g,t,\alpha),$$
we deduce
$$\sum_{i=m}^{\infty}\frac{E_i(f)}{i} \le C\sum_{i=m}^{\infty}\frac{1}{i}\,\omega_\varphi^r\!\left(f,\frac 1i\right) = C\sum_{i=m}^{\infty}\omega_\varphi^r\!\left(f,\frac 1i\right)(i-1)\int_{\frac 1i}^{\frac 1{i-1}}du \le C\sum_{i=m}^{\infty}\int_{\frac 1i}^{\frac 1{i-1}}\frac{\omega_\varphi^r(f,u)}{u}\,du = C\int_0^{\frac 1{m-1}}\frac{\omega_\varphi^r(f,u)}{u}\,du = C\int_0^{\frac 1m}\frac{\omega_\varphi^r\!\left(f,\frac{m}{m-1}t\right)}{t}\,dt \le C\int_0^{\frac 1m}\frac{\omega_\varphi^r(f,t)}{t}\,dt,$$
which completes the proof. □
Lemma 2.
For any $-1<t\le -\frac 12$ and for any f such that $f'\in DT$, we have
$$\left|\int_{-1}^{2t+1}\frac{f(x)}{(x-t)^2}\,dx\right| \le C\left[\int_0^1\frac{\omega_\varphi(f',\sigma)}{\sigma}\,d\sigma + \frac{\|f\|}{1+t}\right],$$
where the integral is understood in the Hadamard finite-part sense and $C\neq C(f,t)$.
Proof.
Since $\int_{-1}^{2t+1}\frac{dx}{x-t} = 0$, we can write
$$\int_{-1}^{2t+1}\frac{f(x)}{(x-t)^2}\,dx = \int_{-1}^{2t+1}\frac{f(x)-f(t)-f'(t)(x-t)}{(x-t)^2}\,dx + f(t)\int_{-1}^{2t+1}\frac{dx}{(x-t)^2} =: A_1(t) + A_2(t),$$
where the integral of $1/(x-t)^2$ is again understood in the finite-part sense.
Concerning $A_1$, by reasoning as in the proof of Proposition 3, $f'\in DT$ implies that
$$A_1(t) = \int_{-1}^{2t+1}\frac{f(x)-f(t)-f'(t)(x-t)}{(x-t)^2}\,dx$$
is an absolutely convergent integral, and using
$$f(x)-f(t)-f'(t)(x-t) = \int_t^x \left[f'(\tau)-f'(t)\right]d\tau,$$
we obtain the form
$$A_1(t) = \int_{-1}^{t}\int_x^t \left[f'(t)-f'(\tau)\right]d\tau\,\frac{dx}{(x-t)^2} + \int_t^{2t+1}\int_t^x \left[f'(\tau)-f'(t)\right]d\tau\,\frac{dx}{(x-t)^2}.$$
Hence, changing the variables $x = t-\frac{\sigma}{2}\sqrt{1-t^2}$, $\tau = t-\frac{h}{2}\sqrt{1-t^2}$ in the first summand and $x = t+\frac{\sigma}{2}\sqrt{1-t^2}$, $\tau = t+\frac{h}{2}\sqrt{1-t^2}$ in the second one, we get
$$A_1(t) = \int_0^{2\sqrt{\frac{1+t}{1-t}}}\int_0^{\sigma}\left[f'\!\left(t+\frac h2\sqrt{1-t^2}\right) - f'\!\left(t-\frac h2\sqrt{1-t^2}\right)\right]dh\,\frac{d\sigma}{\sigma^2} = \int_0^{2\sqrt{\frac{1+t}{1-t}}}\int_0^{\sigma}\Delta_{h\varphi(t)}f'(t)\,dh\,\frac{d\sigma}{\sigma^2}.$$
Consequently, for any $-1<t\le -\frac 12$ we obtain
$$|A_1(t)| \le \int_0^{2\sqrt{\frac{1+t}{1-t}}}\int_0^\sigma \left\|\Delta_{h\varphi}f'\right\|dh\,\frac{d\sigma}{\sigma^2} \le \int_0^{2\sqrt{\frac{1+t}{1-t}}}\sup_{h\le\sigma}\left\|\Delta_{h\varphi}f'\right\|\frac{d\sigma}{\sigma} = \int_0^{2\sqrt{\frac{1+t}{1-t}}}\frac{\omega_\varphi(f',\sigma)}{\sigma}\,d\sigma \le \int_0^{\frac{2}{\sqrt 3}}\frac{\omega_\varphi(f',\sigma)}{\sigma}\,d\sigma,$$
and, using the property $\omega_\varphi(g,\alpha t)\le C\alpha\,\omega_\varphi(g,t)$ for $\alpha\ge 1$, we conclude that
$$|A_1(t)| \le \int_0^{\frac{2}{\sqrt 3}}\frac{\omega_\varphi(f',\sigma)}{\sigma}\,d\sigma = \int_0^1 \omega_\varphi\!\left(f',\frac{2}{\sqrt 3}\,u\right)\frac{du}{u} \le C\int_0^1\frac{\omega_\varphi(f',u)}{u}\,du.$$
Finally, since
$$\int_{-1}^{2t+1}\frac{dx}{(x-t)^2} = \lim_{\varepsilon\to0^+}\left[\int_{-1}^{t-\varepsilon}\frac{dx}{(x-t)^2} + \int_{t+\varepsilon}^{2t+1}\frac{dx}{(x-t)^2} - \frac{2}{\varepsilon}\right] = -\frac{2}{1+t},$$
we have
$$|A_2(t)| = \frac{2}{1+t}\,|f(t)| \le \frac{2\|f\|}{1+t},$$
and the statement follows by collecting this last inequality, the bound for $A_1(t)$, and the decomposition above. □
Similarly, we can prove the following
Lemma 3.
For any $\frac 12\le t<1$ and for any f such that $f'\in DT$, we have
$$\left|\int_{2t-1}^{1}\frac{f(x)}{(x-t)^2}\,dx\right| \le C\left[\int_0^1\frac{\omega_\varphi(f',\sigma)}{\sigma}\,d\sigma + \frac{\|f\|}{1-t}\right],$$
where the integral is understood in the finite-part sense and $C\neq C(f,t)$.
Proof of Theorem 4.
Assume first that $-1 < t \le -\frac12$. In this case, $\varphi^2(t) \sim (1+t)$ and we have
$$\varphi^2(t)\left|H^1(f,t)\right| \le (1+t)\left[\left| =\!\!\!\int_{-1}^{2t+1}\frac{f(x)}{(x-t)^2}\,dx \right| + \int_{2t+1}^{1}\frac{|f(x)|}{(x-t)^2}\,dx\right].$$
Since
$$(1+t)\int_{2t+1}^{1}\frac{|f(x)|}{(x-t)^2}\,dx \le C\,\|f\|_\infty,$$
the statement follows from Lemma 2 for any $-1 < t \le -\frac12$.
Assume now $\frac12 \le t < 1$, so that $\varphi^2(t) \sim (1-t)$. By using the decomposition
$$\varphi^2(t)\left|H^1(f,t)\right| \le (1-t)\left[\int_{-1}^{2t-1}\frac{|f(x)|}{(x-t)^2}\,dx + \left| =\!\!\!\int_{2t-1}^{1}\frac{f(x)}{(x-t)^2}\,dx \right|\right],$$
and taking into account that
$$(1-t)\int_{-1}^{2t-1}\frac{|f(x)|}{(x-t)^2}\,dx \le C\,\|f\|_\infty,$$
the statement follows from Lemma 3 for any $\frac12 \le t < 1$.
Finally, suppose $|t| < \frac12$ and fix $\frac14 < a < \frac12$. In this case, $\varphi(t) \sim 1$ and we consider the following decomposition
$$\varphi^2(t)\left|H^1(f,t)\right| \le C\left[\int_{|x-t|\ge a}\frac{|f(x)|}{(x-t)^2}\,dx + \left| =\!\!\!\int_{t-a}^{t+a}\frac{f(x)-f(t)-f'(t)(x-t)}{(x-t)^2}\,dx \right| + |f(t)|\left| =\!\!\!\int_{t-a}^{t+a}\frac{dx}{(x-t)^2} \right|\right]. \qquad (62)$$
For the first term at the right-hand side of (62) we get
$$\int_{|x-t|\ge a}\frac{|f(x)|}{(x-t)^2}\,dx \le C\,\|f\|_\infty.$$
Concerning the second addendum of (62), we proceed analogously to the estimate of $A_1(t)$ in Lemma 2. More precisely, by using $f' \in DT$ and (58), we obtain
$$=\!\!\!\int_{t-a}^{t+a}\frac{f(x)-f(t)-f'(t)(x-t)}{(x-t)^2}\,dx = \int_{t-a}^{t+a}\frac{f(x)-f(t)-f'(t)(x-t)}{(x-t)^2}\,dx = \left(\int_{t-a}^{t}+\int_{t}^{t+a}\right)\left(\int_t^x\left[f'(\tau)-f'(t)\right]d\tau\right)\frac{dx}{(x-t)^2},$$
and by changing the variables $x = t \pm \frac{\sigma}{2}\varphi(t)$ and $\tau = t \pm \frac{h}{2}\varphi(t)$, we get
$$\left| =\!\!\!\int_{t-a}^{t+a}\frac{f(x)-f(t)-f'(t)(x-t)}{(x-t)^2}\,dx \right| \le \int_0^{\frac{2a}{\varphi(t)}}\left(\int_0^\sigma\left|\Delta_{h\varphi(t)}f'(t)\right|dh\right)\frac{d\sigma}{\sigma^2} \le C\int_0^1\frac{\omega_\varphi(f',u)}{u}\,du.$$
Finally, as regards the third term at the right-hand side of (62), since $=\!\!\!\int_{t-a}^{t+a}\frac{dx}{(x-t)^2} = -\frac{2}{a}$, we have
$$|f(t)|\left| =\!\!\!\int_{t-a}^{t+a}\frac{dx}{(x-t)^2} \right| \le \frac{2}{a}\,\|f\|_\infty,$$
and the theorem is completely proven. ☐
Proof of Proposition 1.
We start from the standard decomposition
$$H(f,t) = \int_{-1}^{1}\frac{f(x)-f(t)}{x-t}\,dx + f(t)\,H(1,t), \qquad (63)$$
and taking into account
$$H(1,t) := \int_{-1}^{1}\frac{dx}{x-t} = \log\frac{1-t}{1+t},$$
we must prove that the principal value integral in (63) is indeed an improper integral. To this aim, let us first prove that
$$\int_t^1\frac{f(x)-f(t)}{x-t}\,dx = \lim_{\varepsilon\to0^+}\int_{t+\varepsilon}^{1}\frac{f(x)-f(t)}{x-t}\,dx < \infty. \qquad (64)$$
Please note that, for any $\varepsilon > 0$,
$$\int_{t+\varepsilon}^{1}\frac{f(x)-f(t)}{x-t}\,dx = \int_\varepsilon^{1-t}\frac{f(u+t)-f(t)}{u}\,du.$$
Moreover, for any $g \in AC$, we note that
$$|f(u+t)-f(t)| \le |f(u+t)-g(u+t)| + |f(t)-g(t)| + |g(u+t)-g(t)| \le 2\|f-g\|_\infty + \int_t^{u+t}|g'(\sigma)|\,d\sigma \le 2\|f-g\|_\infty + \|g'\varphi\|_\infty\int_t^{u+t}\frac{d\sigma}{\varphi(\sigma)} = 2\|f-g\|_\infty + u\,\|g'\varphi\|_\infty\,\frac{\arcsin(u+t)-\arcsin(t)}{u}.$$
On the other hand, recalling that
$$\arcsin y = y + \frac{y^3}{6} + \frac{3}{40}y^5 + \frac{5}{112}y^7 + \frac{35}{1152}y^9 + \cdots, \qquad |y| < 1,$$
we easily get
$$\frac{\arcsin(u+t)-\arcsin(t)}{u} \le C, \qquad C \neq C(t,u), \quad |t| < 1, \; 0 < u \le 2,$$
and therefore, the previous estimate and (3) yield
$$|f(u+t)-f(t)| \le C\inf_{g\in AC}\left\{\|f-g\|_\infty + u\,\|g'\varphi\|_\infty\right\} = C\,K_{1,\varphi}(f,u) \le C\,\omega_\varphi(f,u).$$
Hence, for all $|t| < 1$ it follows that
$$\lim_{\varepsilon\to0^+}\int_\varepsilon^{1-t}\frac{|f(u+t)-f(t)|}{u}\,du \le C\lim_{\varepsilon\to0^+}\int_\varepsilon^{2}\frac{\omega_\varphi(f,u)}{u}\,du, \qquad C \neq C(t),$$
i.e., under the assumption $f \in DT$, (64) holds.
Proceeding similarly, we can prove that
$$\int_{-1}^{t}\frac{f(x)-f(t)}{x-t}\,dx = \lim_{\varepsilon\to0^+}\int_\varepsilon^{1+t}\frac{f(t)-f(t-u)}{u}\,du < \infty,$$
and the statement follows. ☐
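The closed form $H(1,t)=\log\frac{1-t}{1+t}$ used in this decomposition is easy to confirm numerically: for the kernel $1/(x-t)$ the two $1/\varepsilon$ contributions of the punctured integrals cancel exactly, so no limit needs to be taken. A small sketch of ours (hypothetical choice $t=0.3$):

```python
import math

def simpson(f, a, b, n=200_000):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

t, eps = 0.3, 1e-2                       # hypothetical choices
k = lambda x: 1.0 / (x - t)
# principal value: integral over [-1, t-eps] plus integral over [t+eps, 1]
pv = simpson(k, -1.0, t - eps) + simpson(k, t + eps, 1.0)
print(pv, math.log((1 - t) / (1 + t)))  # both ≈ -0.619039
```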
Proof of Proposition 2.
For $1 \le k \le m-1$, by using the recurrence relation (15) and taking into account that $\int_{-1}^{1}p_{m,h}(x)\,dx = \frac{2}{m+1}$ holds for any $0 \le h \le m$, we get
$$q_{m,k}(t) = \frac12\int_{-1}^{1}\frac{(1-x)\,p_{m-1,k}(x)-(1-t)\,p_{m-1,k}(t)}{x-t}\,dx + \frac12\int_{-1}^{1}\frac{(1+x)\,p_{m-1,k-1}(x)-(1+t)\,p_{m-1,k-1}(t)}{x-t}\,dx$$
$$= \frac12\left[q_{m-1,k}(t) - \int_{-1}^{1}\frac{x\,p_{m-1,k}(x)-t\,p_{m-1,k}(t)}{x-t}\,dx\right] + \frac12\left[q_{m-1,k-1}(t) + \int_{-1}^{1}\frac{x\,p_{m-1,k-1}(x)-t\,p_{m-1,k-1}(t)}{x-t}\,dx\right]$$
$$= \frac12\left[q_{m-1,k}(t) - \frac{2}{m} - t\,q_{m-1,k}(t)\right] + \frac12\left[q_{m-1,k-1}(t) + \frac{2}{m} + t\,q_{m-1,k-1}(t)\right] = \frac{1-t}{2}\,q_{m-1,k}(t) + \frac{1+t}{2}\,q_{m-1,k-1}(t).$$
For $k = 0$, we have
$$q_{m,0}(t) = \frac12\int_{-1}^{1}\frac{(1-x)\,p_{m-1,0}(x)-(1-t)\,p_{m-1,0}(t)}{x-t}\,dx = \frac12\left[q_{m-1,0}(t) - \frac{2}{m} - t\,q_{m-1,0}(t)\right] = \frac{1-t}{2}\,q_{m-1,0}(t) - \frac{1}{m}.$$
For k = m we proceed analogously. ☐
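The recurrence can be checked against closed forms for small degrees. In the sketch below we take $q_{m,k}(t)=H(p_{m,k},t)$ with the Bernstein basis on $[-1,1]$, so that $q_{0,0}(t)=\log\frac{1-t}{1+t}$; this normalization is our assumption for the check, not a statement of the paper. Direct computation of the principal value integrals gives $q_{1,0}(t)=\frac{1-t}{2}L-1$ and $q_{2,1}(t)=\frac{1-t^2}{2}L-t$, with $L=\log\frac{1-t}{1+t}$.

```python
import math

def q_row(m, t):
    """Return [q_{m,0}(t), ..., q_{m,m}(t)] via the proved recurrence."""
    q = [math.log((1 - t) / (1 + t))]   # q_{0,0}(t) = H(1, t), assumed base case
    for mm in range(1, m + 1):
        new = [0.0] * (mm + 1)
        new[0] = (1 - t) / 2 * q[0] - 1 / mm           # boundary k = 0
        new[mm] = (1 + t) / 2 * q[mm - 1] + 1 / mm     # boundary k = m (symmetric)
        for k in range(1, mm):                          # interior indices
            new[k] = (1 - t) / 2 * q[k] + (1 + t) / 2 * q[k - 1]
        q = new
    return q

t = 0.25                                  # hypothetical test point
L = math.log((1 - t) / (1 + t))
q1, q2 = q_row(1, t), q_row(2, t)
print(q1[0] - ((1 - t) / 2 * L - 1))      # ≈ 0
print(q2[1] - ((1 - t * t) / 2 * L - t))  # ≈ 0
```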
Proof of Theorem 5.
Setting $R_{m,s}f = f - B_{m,s}f$, we have
$$E_{m,s}(f,t) = H(R_{m,s}f,t) \qquad \text{and} \qquad \Phi_{m,s}(f,t) = F(R_{m,s}f,t).$$
Applying Theorem 3, $E_{m,s}(f,t)$ can be estimated as follows:
$$|E_{m,s}(f,t)| \le C\left[\log\frac{e}{1-t^2}\,\|R_{m,s}f\|_\infty + \int_0^1\frac{\omega_\varphi(R_{m,s}f,u)}{u}\,du\right], \qquad C \neq C(m,f,t),$$
and by Theorem 1 we further obtain
$$\|R_{m,s}f\|_\infty \le C\left[\omega_\varphi^{2s}\!\left(f,\frac{1}{\sqrt{m}}\right) + \frac{\|f\|_\infty}{m^s}\right], \qquad C \neq C(m,f).$$
Moreover, by Lemma 1 we get
$$\int_0^1\frac{\omega_\varphi(R_{m,s}f,u)}{u}\,du = \int_0^{\frac1m}\frac{\omega_\varphi(R_{m,s}f,u)}{u}\,du + \int_{\frac1m}^{1}\frac{\omega_\varphi(R_{m,s}f,u)}{u}\,du \le C\left[\int_0^{\frac1m}\frac{\omega_\varphi^r(f,u)}{u}\,du + \|R_{m,s}f\|_\infty\log m\right],$$
and (42) follows from this last estimate, (65) and (66).
Regarding the quadrature error $\Phi_{m,s}(f,t)$, we observe that
$$\Phi_{m,s}(f,t) = F(R_{m,s}f,t) = H(R_{m,s}f,t) - \log\frac{1-t}{1+t}\,R_{m,s}f(t),$$
which leads to
$$\log^{-1}\frac{e}{1-t^2}\,|\Phi_{m,s}(f,t)| \le \log^{-1}\frac{e}{1-t^2}\,|H(R_{m,s}f,t)| + C\,|R_{m,s}f(t)| \le C\left[\log^{-1}\frac{e}{1-t^2}\,|E_{m,s}(f,t)| + \|R_{m,s}f\|_\infty\right].$$
Hence, in the case that $f \in DT$, the estimate (42) holds for $\Phi_{m,s}(f,t)$ as well.
Finally, if $f \in C^1$, then by applying the mean value theorem we get
$$|\Phi_{m,s}(f,t)| = \left|\int_{-1}^{1}\frac{R_{m,s}f(x)-R_{m,s}f(t)}{x-t}\,dx\right| \le C\,\|(f-B_{m,s}f)'\|_\infty,$$
and (43) follows from Theorem 2. ☐
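Putting the pieces together, the rule $H(f,t)\approx H(B_{m,s}f,t)$ can be sketched numerically. Everything below is a reconstruction under stated assumptions, not the authors' code: we assume equally spaced nodes $x_i=2i/m-1$, the iterated Boolean sum $B_{m,s}=I-(I-B_m)^s$ of the Bernstein operator, and $q_{m,k}(t)=H(p_{m,k},t)$ computed by the recurrence of Proposition 2 from $q_{0,0}(t)=\log\frac{1-t}{1+t}$.

```python
import math

def bernstein(m, k, x):
    # Bernstein basis on [-1, 1]: p_{m,k}(x) = C(m,k) u^k (1-u)^{m-k}, u = (1+x)/2
    u = (1 + x) / 2
    return math.comb(m, k) * u**k * (1 - u)**(m - k)

def hilbert_bms(f, m, s, t):
    """Approximate PV int_{-1}^{1} f(x)/(x-t) dx by H(B_{m,s} f, t)."""
    nodes = [2 * i / m - 1 for i in range(m + 1)]
    M = [[bernstein(m, k, xi) for k in range(m + 1)] for xi in nodes]
    # Bernstein coefficients of B_{m,s} f: c = sum_{j=0}^{s-1} (I - M)^j f(nodes)
    v = [f(x) for x in nodes]
    c = v[:]
    for _ in range(s - 1):
        Mv = [sum(M[i][k] * v[k] for k in range(m + 1)) for i in range(m + 1)]
        v = [vi - mvi for vi, mvi in zip(v, Mv)]
        c = [ci + vi for ci, vi in zip(c, v)]
    # q_{m,k}(t) = H(p_{m,k}, t) via the recurrence, starting from q_{0,0} = H(1, t)
    q = [math.log((1 - t) / (1 + t))]
    for mm in range(1, m + 1):
        q = ([(1 - t) / 2 * q[0] - 1 / mm]
             + [(1 - t) / 2 * q[k] + (1 + t) / 2 * q[k - 1] for k in range(1, mm)]
             + [(1 + t) / 2 * q[mm - 1] + 1 / mm])
    return sum(ck * qk for ck, qk in zip(c, q))

# Example 1a of the paper: f(x) = sin x, t = 0.1
print(hilbert_bms(math.sin, 64, 16, 0.1))  # ≈ 1.86885558912878 (cf. Table 1)
```

Only the samples of $f$ at the equispaced nodes are used, which is the point of the whole construction.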
Proof of Proposition 3.
We start from the standard decomposition
$$H^1(f,t) = =\!\!\!\int_{-1}^{1}\frac{f(x)-f(t)-f'(t)(x-t)}{(x-t)^2}\,dx + =\!\!\!\int_{-1}^{1}\frac{f(t)+f'(t)(x-t)}{(x-t)^2}\,dx,$$
and recalling the definitions
$$=\!\!\!\int_{-1}^{1}\frac{g(x)}{(x-t)^2}\,dx = \lim_{\varepsilon\to0^+}\left[\int_{-1}^{t-\varepsilon}\frac{g(x)}{(x-t)^2}\,dx + \int_{t+\varepsilon}^{1}\frac{g(x)}{(x-t)^2}\,dx - \frac{2g(t)}{\varepsilon}\right], \qquad \int_{-1}^{1}\frac{g(x)}{x-t}\,dx = \lim_{\varepsilon\to0^+}\left[\int_{-1}^{t-\varepsilon}\frac{g(x)}{x-t}\,dx + \int_{t+\varepsilon}^{1}\frac{g(x)}{x-t}\,dx\right],$$
we note that
$$=\!\!\!\int_{-1}^{1}\frac{dx}{(x-t)^2} = -\frac{2}{1-t^2}, \qquad =\!\!\!\int_{-1}^{1}\frac{x-t}{(x-t)^2}\,dx = \int_{-1}^{1}\frac{dx}{x-t} = \log\frac{1-t}{1+t}.$$
Moreover, taking into account that
$$f(x)-f(t) = f'(\xi_{x,t})\,(x-t), \qquad \min\{x,t\} < \xi_{x,t} < \max\{x,t\},$$
we have
$$=\!\!\!\int_{-1}^{1}\frac{f(x)-f(t)-f'(t)(x-t)}{(x-t)^2}\,dx = \int_{-1}^{1}\frac{f'(\xi_{x,t})-f'(t)}{x-t}\,dx.$$
Hence, to complete the proof, we have to prove that this last principal value integral is indeed an improper integral if $f' \in DT$.
We are going to prove that
$$\int_t^1\frac{f'(\xi_{x,t})-f'(t)}{x-t}\,dx = \lim_{\varepsilon\to0^+}\int_{t+\varepsilon}^{1}\frac{f'(\xi_{x,t})-f'(t)}{x-t}\,dx < \infty, \qquad (68)$$
the proof of
$$\int_{-1}^{t}\frac{f'(\xi_{x,t})-f'(t)}{x-t}\,dx = \lim_{\varepsilon\to0^+}\int_{-1}^{t-\varepsilon}\frac{f'(\xi_{x,t})-f'(t)}{x-t}\,dx < \infty$$
being analogous.
Setting $\xi_{x,t} = (x-t)\theta + t$, with $0 < \theta < 1$ (depending on $x$ and $t$), for any $\varepsilon > 0$ we have
$$\int_{t+\varepsilon}^{1}\frac{f'(\xi_{x,t})-f'(t)}{x-t}\,dx = \int_{t+\varepsilon}^{1}\frac{f'((x-t)\theta+t)-f'(t)}{x-t}\,dx = \int_\varepsilon^{1-t}\frac{f'(u\theta+t)-f'(t)}{u}\,du.$$
On the other hand, for any $g \in AC$, $|t| < 1$, $0 < \theta < 1$ and $0 < u \le 2$, similarly to the proof of Proposition 1, we have
$$|f'(u\theta+t)-f'(t)| \le |f'(u\theta+t)-g(u\theta+t)| + |f'(t)-g(t)| + |g(u\theta+t)-g(t)| \le 2\|f'-g\|_\infty + \int_t^{u\theta+t}|g'(\sigma)|\,d\sigma \le 2\|f'-g\|_\infty + u\,\|g'\varphi\|_\infty\,\frac{\arcsin(u\theta+t)-\arcsin(t)}{u} \le C\left[\|f'-g\|_\infty + u\,\|g'\varphi\|_\infty\right], \qquad C \neq C(g,u,\theta,t).$$
Hence, by means of (3), we get
$$\lim_{\varepsilon\to0^+}\int_\varepsilon^{1-t}\frac{|f'(u\theta+t)-f'(t)|}{u}\,du \le C\lim_{\varepsilon\to0^+}\int_\varepsilon^{2}\frac{\omega_\varphi(f',u)}{u}\,du,$$
and under the assumption $f' \in DT$, (68) follows. ☐
Proof of Theorem 6.
We start from
$$E^1_{m,s}(f,t) = H^1(R_{m,s}f,t), \qquad R_{m,s}f(t) = f(t)-B_{m,s}f(t).$$
By Theorem 4, we have
$$(1-t^2)\,|H^1(R_{m,s}f,t)| \le C\left[\|R_{m,s}f\|_\infty + \int_0^1\frac{\omega_\varphi((R_{m,s}f)',\tau)}{\tau}\,d\tau\right].$$
Since
$$\int_0^1\frac{\omega_\varphi((R_{m,s}f)',\tau)}{\tau}\,d\tau = \left(\int_0^{\frac1m}+\int_{\frac1m}^{1}\right)\frac{\omega_\varphi((R_{m,s}f)',\tau)}{\tau}\,d\tau \le \int_0^{\frac1m}\frac{\omega_\varphi((R_{m,s}f)',\tau)}{\tau}\,d\tau + 2\,\|(R_{m,s}f)'\|_\infty\int_{\frac1m}^{1}\frac{d\tau}{\tau} = \int_0^{\frac1m}\frac{\omega_\varphi((R_{m,s}f)',\tau)}{\tau}\,d\tau + 2\,\|(R_{m,s}f)'\|_\infty\log m,$$
by Lemma 1 we get
$$(1-t^2)\,|H^1(R_{m,s}f,t)| \le C\left[\|R_{m,s}f\|_\infty + \|(R_{m,s}f)'\|_\infty\log m + \int_0^{\frac1m}\frac{\omega_\varphi^r(f',\tau)}{\tau}\,d\tau\right],$$
i.e., (53) holds.
The same estimate (53) also holds for $\Phi^1_{m,s}$, since by (44) we have
$$\Phi^1_{m,s}(f,t) = H^1(R_{m,s}f,t) - \log\frac{1-t}{1+t}\,(R_{m,s}f)'(t) + \frac{2}{1-t^2}\,R_{m,s}f(t),$$
and we note that
$$(1-t^2)\left|\log\frac{1-t}{1+t}\right||(R_{m,s}f)'(t)| \le C\,\|(R_{m,s}f)'\|_\infty, \quad C \neq C(t,f,m), \qquad (1-t^2)\,\frac{2}{1-t^2}\,|R_{m,s}f(t)| \le 2\,\|R_{m,s}f\|_\infty.$$
Finally, (54) follows from the Lagrange form of the Taylor remainder, namely
$$g(x) = g(t) + g'(t)(x-t) + \frac{g''(\xi)\,(x-t)^2}{2}, \qquad \min\{x,t\} \le \xi \le \max\{x,t\},$$
which, for $g = R_{m,s}f$, yields
$$|\Phi^1_{m,s}(f,t)| = |F^1(R_{m,s}f,t)| \le \int_{-1}^{1}\frac{|R_{m,s}f(x)-R_{m,s}f(t)-(R_{m,s}f)'(t)(x-t)|}{(x-t)^2}\,dx \le \|(R_{m,s}f)''\|_\infty.$$
 ☐
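The companion Hadamard rule can be sketched the same way: since $H^1(g,t)=\frac{d}{dt}H(g,t)$ for polynomials, differentiating the recurrence of Proposition 2 gives a recurrence for $q'_{m,k}(t)=H^1(p_{m,k},t)$, starting from $q'_{0,0}(t)=-\frac{2}{1-t^2}$. The sketch below is our reconstruction (same assumptions as before: equispaced nodes, $B_{m,s}=I-(I-B_m)^s$, $q_{m,k}(t)=H(p_{m,k},t)$); it reproduces the magnitude reported in Table 2 for Example 1b.

```python
import math

def hadamard_bms(f, m, s, t):
    """Approximate the finite-part integral of f(x)/(x-t)^2 by H^1(B_{m,s} f, t)."""
    nodes = [2 * i / m - 1 for i in range(m + 1)]
    M = [[math.comb(m, k) * ((1 + x) / 2)**k * ((1 - x) / 2)**(m - k)
          for k in range(m + 1)] for x in nodes]
    v = [f(x) for x in nodes]   # Bernstein coefficients of B_{m,s} f:
    c = v[:]                    # c = sum_{j=0}^{s-1} (I - M)^j f(nodes)
    for _ in range(s - 1):
        Mv = [sum(row[k] * v[k] for k in range(m + 1)) for row in M]
        v = [a - b for a, b in zip(v, Mv)]
        c = [a + b for a, b in zip(c, v)]
    # q = H(p_{m,k}, t) and dq = its t-derivative = H^1(p_{m,k}, t)
    q, dq = [math.log((1 - t) / (1 + t))], [-2 / (1 - t**2)]
    a, b = (1 - t) / 2, (1 + t) / 2
    for mm in range(1, m + 1):
        qn = ([a * q[0] - 1 / mm]
              + [a * q[k] + b * q[k - 1] for k in range(1, mm)]
              + [b * q[mm - 1] + 1 / mm])
        dqn = ([a * dq[0] - q[0] / 2]
               + [a * dq[k] - q[k] / 2 + b * dq[k - 1] + q[k - 1] / 2
                  for k in range(1, mm)]
               + [b * dq[mm - 1] + q[mm - 1] / 2])
        q, dq = qn, dqn
    return sum(ck * dk for ck, dk in zip(c, dq))

val = hadamard_bms(math.sin, 64, 16, 0.1)   # Example 1b: f = sin, t = 0.1
print(abs(val))                              # ≈ 0.46685700178498 (cf. Table 2)
```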

Author Contributions

All authors equally contributed to the paper. Conceptualization, F.F., D.O. and W.T.; methodology, F.F., D.O. and W.T.; software, F.F., D.O. and W.T.; validation, F.F., D.O. and W.T.; analysis, F.F., D.O. and W.T.; investigation, F.F., D.O. and W.T.; resources, F.F., D.O. and W.T.; data curation, F.F., D.O. and W.T.; writing–original draft preparation, writing–review and editing, F.F., D.O. and W.T.; visualization, F.F., D.O. and W.T.; supervision F.F., D.O. and W.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by INdAM - GNCS Project 2019 “Discretizzazione di misure, approssimazione di operatori integrali ed applicazioni”. The research of the first author was partially supported by the Helmholtz Association under the project Ptychography4.0.

Acknowledgments

The authors thank the anonymous referees for their suggestions and remarks, which allowed us to improve the paper. This research has been accomplished within the RITA “Research ITalian network on Approximation”.

Conflicts of Interest

The authors declare no conflict of interest.

Table 1. Example 1a: $\int_{-1}^{1}\frac{\sin x}{x-0.1}\,dx$.

| m | s = 8 | s = 16 | s = 32 | s = 64 |
|---|---|---|---|---|
| 8 | 1.868 | 1.8688 | 1.86885 | 1.86885 |
| 16 | 1.8688 | 1.868855 | 1.86885558 | 1.868855589 |
| 32 | 1.868855 | 1.868855589 | 1.868855589128 | 1.86885558912878 |
| 64 | 1.868855589 | 1.8688555891287 | 1.86885558912878 | |
| 128 | 1.86885558912 | 1.86885558912878 | | |
| 256 | 1.8688555891287 | | | |
Table 2. Example 1b: $=\!\!\!\int_{-1}^{1}\frac{\sin x}{(x-0.1)^2}\,dx$.

| m | s = 8 | s = 16 | s = 32 | s = 64 |
|---|---|---|---|---|
| 8 | 0.466 | 0.4668 | 0.4668 | 0.46685 |
| 16 | 0.4668 | 0.46685 | 0.466857 | 0.466857 |
| 32 | 0.46685 | 0.466857 | 0.46685700178 | 0.46685700178498 |
| 64 | 0.466857 | 0.466857001784 | 0.46685700178498 | |
| 128 | 0.4668570017 | 0.4668570017849 | | |
| 256 | 0.466857001784 | 0.46685700178498 | | |
Table 3. Example 2a: $\int_{-1}^{1}\frac{|x-0.5|^{15/2}}{x-0.3}\,dx$. Exact value 3.29987610310676.

| m | s = 8 | s = 16 | s = 32 | s = 64 |
|---|---|---|---|---|
| 16 | 3 | 3.298 | 3.299 | 3.2998 |
| 32 | 3.299 | 3.29987 | 3.29987 | 3.299876 |
| 64 | 3.299876 | 3.299876 | 3.299876 | 3.299876 |
| 128 | 3.29987610 | 3.2998761 | 3.29987610 | 3.29987610 |
| 256 | 3.29987610 | 3.299876103 | 3.299876103 | 3.2998761031 |
| 512 | 3.2998761031 | 3.2998761031 | 3.29987610310 | 3.2998761031 |
| 1024 | 3.299876103106 | 3.299876103106 | 3.2998761031066 | 3.2998761031067 |
Table 4. Example 2b: $=\!\!\!\int_{-1}^{1}\frac{|x-0.5|^{15/2}}{(x-0.3)^2}\,dx$.

| m | s = 8 | s = 16 | s = 32 | s = 64 |
|---|---|---|---|---|
| 32 | 3.0 | 3.03 | 3.038 | 3.0383 |
| 64 | 3.038 | 3.03838 | 3.03838 | 3.038388 |
| 128 | 3.03838 | 3.038388 | 3.038388 | 3.0383888 |
| 256 | 3.0383888 | 3.0383888 | 3.03838883 | 3.03838883 |
| 512 | 3.03838883 | 3.03838883 | 3.038388835 | 3.03838883525 |
| 1024 | 3.038388835 | 3.03838883528 | 3.03838883525 | 3.03838883525 |
Table 5. Example 3: $\int_{-1}^{1}\frac{e^{x}\sin(x)}{1+x^{2}}\,\frac{dx}{x+0.7}$.

| | m = 8 | m = 16 | m = 32 | m = 64 | m = 128 | m = 512 | m = 1024 | m = 2048 |
|---|---|---|---|---|---|---|---|---|
| s = 4 | 2.03 | 2.00 | 2.00 | 2.0067 | 2.00674 | 2.006741 | 2.006741 | 2.00674110 |
| s = 8 | 2.023 | 2.006 | 2.006 | 2.0067 | 2.0067412 | 2.00674121 | 2.00674121192 | 2.0067412119231 |
| s = 16 | 2.004 | 2.006 | 2.00674 | 2.006741 | 2.00674121 | 2.00674121192 | 2.0067412119231 | 2.00674121192318 |
| s = 32 | 2.000 | 2.006 | 2.00674 | 2.006741 | 2.0067412119 | 2.006741211923 | 2.0067412119231 | 2.00674121192318 |
| s = 64 | 2.002 | 2.006 | 2.00674 | 2.00674121 | 2.0067412119 | 2.006741211923 | 2.0067412119231 | 2.0067412119231 |
| s = 128 | 2.006 | 2.006 | 2.00674 | 2.00674121 | 2.006741211923 | 2.006741211923 | 2.0067412119231 | 2.0067412119231 |
| s = 256 | 2.008 | 2.0067 | 2.00674 | 2.006741211 | 2.006741211923 | 2.006741211923 | 2.006741211923 | 2.00674121192318 |
| s = 512 | 2.010 | 2.0067 | 2.0067412 | 2.006741211 | 2.006741211923 | 2.006741211923 | 2.006741211923 | 2.006741211923 |
| s = 1024 | 2.010 | 2.0067 | 2.0067412 | 2.0067412119 | 2.006741211923 | 2.006741211923 | 2.006741211923 | 2.006741211923 |
| s = 2048 | 2.011 | 2.0067 | 2.0067412 | 2.0067412119 | 2.006741211923 | 2.006741211923 | 2.006741211923 | 2.006741211923 |
| s = 4096 | 2.011 | 2.0067 | 2.0067412 | 2.0067412119 | 2.006741211923 | 2.006741211923 | 2.006741211923 | 2.0067412119231 |

Filbir, F.; Occorsio, D.; Themistoclakis, W. Approximation of Finite Hilbert and Hadamard Transforms by Using Equally Spaced Nodes. Mathematics 2020, 8, 542. https://0-doi-org.brum.beds.ac.uk/10.3390/math8040542
