Article

Analogues of the Laplace Transform and Z-Transform with Piecewise Linear Kernels

Marianito R. Rodrigo * and Mandy Li
School of Mathematics and Applied Statistics, University of Wollongong, Wollongong, NSW 2522, Australia
* Author to whom correspondence should be addressed.
Submission received: 13 July 2021 / Revised: 8 September 2021 / Accepted: 8 September 2021 / Published: 13 September 2021
(This article belongs to the Section Mathematical Sciences)

Abstract
Two new transforms with piecewise linear kernels are introduced. These transforms are analogues of the classical Laplace transform and Z-transform. Properties of these transforms are investigated and applications to ordinary differential equations and integral equations are provided. This article is ideal for study as a foundational project in an undergraduate course in differential and/or integral equations.

1. Introduction

An integral transform maps a function from its original function space into another function space via integration. In many cases, properties of the original function might be more easily characterised in the transformed space rather than in the original space. Integral transforms arise in many areas of mathematics, e.g., differential and integral equations, probability, number theory and computer science (see for instance [1,2,3,4,5,6,7] and the comprehensive references therein). The reader is also referred to the recent article [8] that introduced and studied a broad class of integral transforms and which includes many well-known integral transforms as special cases. A discrete transform is one where the input function is a sequence and the integral is typically replaced by a summation [6,9].
Two of the most well-known and also widely used transforms are the Laplace transform [3,6,10]
$$\mathcal{L}\{f(t);s\} = \int_0^\infty e^{-st} f(t)\,dt$$
and the Z-transform [6,9,11]
$$\mathcal{Z}\{f(j);z\} = \sum_{j=0}^{\infty} z^{-j} f(j).$$
The former is a continuous transform since $t \in \mathbb{R}_+ = (0,\infty)$, while the latter is a discrete transform since $j \in \mathbb{N}_0 = \mathbb{N} \cup \{0\}$. The kernel $K_s(t) = e^{-st}$ of the Laplace transform is positive and decreasing for $s, t \in \mathbb{R}_+$, and satisfies
$$K_s(0) = 1, \qquad \lim_{t \to \infty} K_s(t) = 0 \quad \text{for } s \in \mathbb{R}_+. \tag{1}$$
Similarly, the kernel $K_z(j) = z^{-j}$ of the Z-transform is positive and decreasing for $z > 1$ and $j \in \mathbb{N}_0$, and satisfies
$$K_z(0) = 1, \qquad \lim_{j \to \infty} K_z(j) = 0 \quad \text{for } z > 1. \tag{2}$$
The Laplace transform and Z-transform have many nice properties, hence they are ubiquitous in many areas of applied mathematics and engineering [6,12,13]. Note that here for simplicity we assume that s and z are positive real numbers although they can also be considered in the complex plane.
Suppose that we replace the above two kernels by similarly behaved but piecewise linear functions. More precisely, let us define two new transforms
$$\mathcal{R}_c\{f(t);s\} = \int_0^\infty K_s(t)\, f(t)\,dt, \qquad K_s(t) = \max\!\Bigl(1 - \frac{t}{s},\, 0\Bigr) \tag{3}$$
and
$$\mathcal{R}_d\{f(j);z\} = \sum_{j=0}^{\infty} K_z(j)\, f(j), \qquad K_z(j) = \max\!\Bigl(1 - \frac{j}{z},\, 0\Bigr), \tag{4}$$
provided the improper integral and series converge. More specific assumptions on the appropriate function spaces will be given later. It is easy to see that $K_s$ and $K_z$ are nonnegative, nonincreasing and satisfy (1) and (2), respectively. Moreover, $\mathcal{R}_c$ and $\mathcal{R}_d$ are linear operators. The goal of this article is to investigate some properties and applications of these analogues of the Laplace transform and Z-transform with piecewise linear kernels. We consider the continuous transform $\mathcal{R}_c$ in Section 2, while Section 3 is devoted to the discrete transform $\mathcal{R}_d$. A brief discussion is given in Section 4. The results in this article can serve as a springboard for further investigation of other continuous and discrete transforms, and are particularly suitable as a foundational project in a first course in differential and/or integral equations.
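Although not part of the analysis in this article, both transforms are straightforward to evaluate numerically, which is convenient for checking the formulas derived below. The following Python sketch is purely illustrative: the function names Rc and Rd and the test functions are our own choices. Since the kernels vanish for $t > s$ and $j > z$, the integral and the series are effectively finite.

```python
# A minimal numerical sketch of the R_c- and R_d-transforms (illustrative, not from the article).
import numpy as np
from scipy.integrate import quad

def Rc(f, s):
    """R_c{f(t); s}: integral over (0, s) of (1 - t/s) f(t) dt, since the kernel vanishes for t > s."""
    val, _ = quad(lambda t: (1.0 - t / s) * f(t), 0.0, s)
    return val

def Rd(f, z):
    """R_d{f(j); z}: sum over j = 0, ..., floor(z) of (1 - j/z) f(j)."""
    return sum((1.0 - j / z) * f(j) for j in range(int(np.floor(z)) + 1))

if __name__ == "__main__":
    s = 2.5
    # Closed forms that follow from Example 1 below: R_c{1; s} = s/2 and R_c{t; s} = s^2/6.
    print(np.isclose(Rc(lambda t: 1.0, s), s / 2))
    print(np.isclose(Rc(lambda t: t, s), s**2 / 6))
    # Linearity check for R_d with two arbitrary test sequences.
    z, f1, f2 = 7.3, (lambda j: j), (lambda j: 2.0**(-j))
    print(np.isclose(Rd(lambda j: 3 * f1(j) - 2 * f2(j), z), 3 * Rd(f1, z) - 2 * Rd(f2, z)))
```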

2. A Continuous Transform with a Piecewise Linear Kernel

2.1. Properties of the Continuous Transform

By breaking up the interval of integration, we see that (3) can be simplified to
$$\mathcal{R}_c\{f(t);s\} = \int_0^s \Bigl(1 - \frac{t}{s}\Bigr) f(t)\,dt. \tag{5}$$
Let $\bar{\mathbb{R}}_+ = [0,\infty)$. If $f \in C(\bar{\mathbb{R}}_+)$, then $\mathcal{R}_c\{f(t);s\}$ exists for all $s \in \mathbb{R}_+$. In fact, the mapping $s \mapsto \mathcal{R}_c\{f(t);s\}$ belongs to $C(\mathbb{R}_+)$. Hence we may write $\mathcal{R}_c : U \to V$, where $U = C(\bar{\mathbb{R}}_+)$ and $V = C(\mathbb{R}_+)$.
Remark 1.
Note, however, that the assumption $f \in U = C(\bar{\mathbb{R}}_+)$ is a sufficient, but not a necessary, condition for the $\mathcal{R}_c$-transform to exist. For example, if $f \in C(\mathbb{R}_+)$ but $f(t)$ becomes unbounded as $t$ tends to zero from the right (e.g., $f(t) = 1/\sqrt{t}$), then the $\mathcal{R}_c$-transform may still exist; see Example 1, where it is also pointed out that a similar situation arises for the Laplace transform. As will be seen in the following propositions, depending on the transform property that one wishes to prove, stricter conditions on $U$ may need to be imposed (e.g., $U = C^n(\bar{\mathbb{R}}_+) \subset C(\bar{\mathbb{R}}_+)$, where $n \in \mathbb{N}$).
The next result shows how $\mathcal{R}_c$ transforms derivatives. We omit the proof since the result follows from straightforward integration by parts. Let $n \in \mathbb{N}$ be the order of the derivative. For comparison, recall that
$$\mathcal{L}\{f^{(n)}(t);s\} = s^n\, \mathcal{L}\{f(t);s\} - \sum_{j=1}^{n} s^{n-j} f^{(j-1)}(0), \qquad n \geq 1.$$
Proposition 1.
If $f \in C^n(\bar{\mathbb{R}}_+)$, then
$$\mathcal{R}_c\{f'(t);s\} = -f(0) + \frac{1}{s}\int_0^s f(t)\,dt, \qquad \mathcal{R}_c\{f^{(n)}(t);s\} = -f^{(n-1)}(0) + \frac{1}{s}\bigl[f^{(n-2)}(s) - f^{(n-2)}(0)\bigr], \quad n \geq 2.$$
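As a quick sanity check (not part of the original text), Proposition 1 can be verified numerically for a concrete smooth function; the test choice $f(t) = e^{2t}$ below is an assumption made purely for illustration.

```python
# Numerical spot check of Proposition 1 with the illustrative test function f(t) = e^{2t}.
import numpy as np
from scipy.integrate import quad

def Rc(f, s):
    return quad(lambda t: (1 - t / s) * f(t), 0, s)[0]

s = 1.7
f, df, d2f = (lambda t: np.exp(2 * t)), (lambda t: 2 * np.exp(2 * t)), (lambda t: 4 * np.exp(2 * t))

# First formula: R_c{f'(t); s} = -f(0) + (1/s) * integral_0^s f(t) dt
lhs1, rhs1 = Rc(df, s), -f(0) + quad(f, 0, s)[0] / s
# Second formula with n = 2: R_c{f''(t); s} = -f'(0) + (1/s) [f(s) - f(0)]
lhs2, rhs2 = Rc(d2f, s), -df(0) + (f(s) - f(0)) / s
print(np.isclose(lhs1, rhs1), np.isclose(lhs2, rhs2))
```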
Now, we look at how $\mathcal{R}_c$ transforms integrals. Comparing with the Laplace transform, it is known that
$$\mathcal{L}\Bigl\{\int_0^t f(u)\,du;\, s\Bigr\} = \frac{1}{s}\, \mathcal{L}\{f(t);s\}.$$
Proposition 2.
If $f \in C(\bar{\mathbb{R}}_+)$, then
$$\mathcal{R}_c\Bigl\{\int_0^t f(u)\,du;\, s\Bigr\} = \frac{s}{2}\, \mathcal{R}_c\{f(t);s\} - \frac{1}{2}\, \mathcal{R}_c\{t f(t);s\}.$$
Proof. 
For notational convenience define
$$g(t) = \int_0^t f(u)\,du;$$
hence $g$ is uniformly continuous on $[0,t]$ and differentiable on $(0,t)$. From integration by parts we obtain
$$\mathcal{R}_c\Bigl\{\int_0^t f(u)\,du;\, s\Bigr\} = \int_0^s \Bigl(1 - \frac{t}{s}\Bigr) g(t)\,dt = \frac{s}{2}\int_0^s f(u)\,du - \int_0^s \Bigl(1 - \frac{t}{2s}\Bigr) t f(t)\,dt.$$
Observing (5), we can rewrite the above equation as
$$\begin{aligned}
\mathcal{R}_c\Bigl\{\int_0^t f(u)\,du;\, s\Bigr\} &= \frac{s}{2}\int_0^s \Bigl(1 - \frac{t}{s} + \frac{t}{s}\Bigr) f(t)\,dt - \frac{1}{2}\int_0^s \Bigl(2 - \frac{t}{s}\Bigr) t f(t)\,dt \\
&= \frac{s}{2}\int_0^s \Bigl(1 - \frac{t}{s}\Bigr) f(t)\,dt + \frac{1}{2}\int_0^s t f(t)\,dt - \frac{1}{2}\int_0^s \Bigl(1 - \frac{t}{s}\Bigr) t f(t)\,dt - \frac{1}{2}\int_0^s t f(t)\,dt \\
&= \frac{s}{2}\, \mathcal{R}_c\{f(t);s\} - \frac{1}{2}\, \mathcal{R}_c\{t f(t);s\},
\end{aligned}$$
which proves the assertion. □
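The identity in Proposition 2 can likewise be checked numerically. The short sketch below is ours; the test choice $f(t) = \cos t$, for which $\int_0^t f(u)\,du = \sin t$, is an assumption made for illustration only.

```python
# Numerical spot check of Proposition 2 with the illustrative test function f(t) = cos(t).
import numpy as np
from scipy.integrate import quad

def Rc(f, s):
    return quad(lambda t: (1 - t / s) * f(t), 0, s)[0]

s = 2.0
f = np.cos
g = np.sin                                   # g(t) = integral_0^t cos(u) du = sin(t)
lhs = Rc(g, s)                               # R_c{ integral_0^t f(u) du ; s }
rhs = s / 2 * Rc(f, s) - 0.5 * Rc(lambda t: t * f(t), s)
print(np.isclose(lhs, rhs))
```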
The next proposition shows that $\mathcal{R}_c$ has a scaling property. An analogous scaling property for the Laplace transform is
$$\mathcal{L}\{f(at);s\} = \frac{1}{a}\, \mathcal{L}\Bigl\{f(t);\frac{s}{a}\Bigr\}, \qquad a > 0.$$
Proposition 3.
Let $a > 0$ and $f \in C(\bar{\mathbb{R}}_+)$. Then
$$\mathcal{R}_c\{f(at);s\} = \frac{1}{a}\, \mathcal{R}_c\{f(t); as\}.$$
Proof. 
The result follows from making the substitution $u = at$ and using (5). □
Let $F(s) = \mathcal{L}\{f(t);s\}$. Then the inverse Laplace transform is given by
$$f(t) = \mathcal{L}^{-1}\{F(s);t\} = \frac{1}{2\pi i} \lim_{T \to \infty} \int_{c - iT}^{c + iT} e^{st} F(s)\,ds,$$
where $c$ is a real number such that the contour path of integration is in the region of convergence of $F(s)$. We now derive the inverse $\mathcal{R}_c$-transform. Unlike the inverse Laplace transform, the inverse $\mathcal{R}_c$-transform involves not contour integration but differentiation of the transformed function.
Proposition 4.
Suppose that $f \in C(\bar{\mathbb{R}}_+)$. If $F(s) = \mathcal{R}_c\{f(t);s\}$ is twice continuously differentiable with respect to $s$, then
$$f(t) = \mathcal{R}_c^{-1}\{F(s);t\} = \lim_{s \to t} \frac{1}{s} \frac{d}{ds}\bigl[s^2 F'(s)\bigr].$$
Proof. 
Let $g(s,t) = (1 - t/s)\, f(t)$, so that $(\partial g/\partial s)(s,t) = t f(t)/s^2$ is continuous. We have from (5) and the Leibniz Integral Rule that
$$F'(s) = \frac{d}{ds} \int_0^s \Bigl(1 - \frac{t}{s}\Bigr) f(t)\,dt = \frac{d}{ds} \int_0^s g(s,t)\,dt = \frac{1}{s^2} \int_0^s t f(t)\,dt$$
or
$$s^2 F'(s) = \int_0^s t f(t)\,dt.$$
The Fundamental Theorem of Calculus implies that
$$\frac{d}{ds}\bigl[s^2 F'(s)\bigr] = s f(s) \quad \text{or} \quad f(s) = \frac{1}{s} \frac{d}{ds}\bigl[s^2 F'(s)\bigr].$$
Hence we see that
$$f(t) = \lim_{s \to t} \frac{1}{s} \frac{d}{ds}\bigl[s^2 F'(s)\bigr]. \qquad \square$$
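The inversion formula of Proposition 4 lends itself to symbolic verification. The following SymPy sketch is our own illustration; the test function $f(t) = e^{-t}$ is an arbitrary assumption.

```python
# Symbolic sketch of Proposition 4 using SymPy (the test function f(t) = e^{-t} is an illustrative choice).
import sympy as sp

t, s, u = sp.symbols("t s u", positive=True)
f = sp.exp(-u)                                   # f written in the integration variable u
F = sp.integrate((1 - u / s) * f, (u, 0, s))     # F(s) = R_c{f; s}, using (5)
recovered = (sp.diff(s**2 * sp.diff(F, s), s) / s).subs(s, t)
print(sp.simplify(recovered - sp.exp(-t)))       # expected output: 0
```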
The Laplace convolution of two functions $f : \mathbb{R}_+ \to \mathbb{R}$ and $g : \mathbb{R}_+ \to \mathbb{R}$ is defined as
$$(f * g)(t) = \int_0^t f(t - u)\, g(u)\,du. \tag{6}$$
It can be shown that the convolution operator is commutative, and the convolution property gives
$$\mathcal{L}\{(f * g)(t);s\} = F(s)\, G(s) \quad \text{or} \quad (f * g)(t) = \mathcal{L}^{-1}\{F(s)\, G(s);t\},$$
where $F(s) = \mathcal{L}\{f(t);s\}$ and $G(s) = \mathcal{L}\{g(t);s\}$.
We wish to define a convolution operator $\circledast$ associated with the $\mathcal{R}_c$-transform such that the convolution property holds, namely
$$\mathcal{R}_c\{(f \circledast g)(t);s\} = F(s)\, G(s) \quad \text{or} \quad (f \circledast g)(t) = \mathcal{R}_c^{-1}\{F(s)\, G(s);t\},$$
where now $F(s) = \mathcal{R}_c\{f(t);s\}$ and $G(s) = \mathcal{R}_c\{g(t);s\}$. But from Proposition 4 we conclude that
$$(f \circledast g)(t) = \mathcal{R}_c^{-1}\{F(s)\, G(s);t\} = \lim_{s \to t} \frac{1}{s} \frac{d}{ds}\bigl[s^2 (FG)'(s)\bigr]. \tag{7}$$
Thus we take (7) as the definition of the $\mathcal{R}_c$-convolution so that the convolution property necessarily holds.
The $\mathcal{R}_c$-convolution $f \circledast g$ defined in (7) is expressed in terms of the $\mathcal{R}_c$-transforms of $f$ and $g$. However, if we look at (6), we note that the Laplace convolution is not given in terms of the Laplace transforms of $f$ and $g$. Hence in the next result we derive an alternative formula for the $\mathcal{R}_c$-convolution.
Proposition 5.
The $\mathcal{R}_c$-convolution (7) of $f$ and $g$ is formally equivalent to
$$(f \circledast g)(t) = \int_0^t \Bigl(1 - \frac{u}{t}\Bigr)\bigl[f(t) g(u) + f(u) g(t)\bigr]\,du + \frac{2}{t^3}\int_0^t u f(u)\,du \int_0^t u\, g(u)\,du.$$
Proof. 
Let
$$F(s) = \mathcal{R}_c\{f(t);s\} = \int_0^s \Bigl(1 - \frac{u}{s}\Bigr) f(u)\,du, \qquad G(s) = \mathcal{R}_c\{g(t);s\} = \int_0^s \Bigl(1 - \frac{v}{s}\Bigr) g(v)\,dv.$$
Then we can compute
$$(FG)'(s) = \frac{1}{s^2}\int_0^s u f(u)\,du \cdot \int_0^s \Bigl(1 - \frac{v}{s}\Bigr) g(v)\,dv + \int_0^s \Bigl(1 - \frac{u}{s}\Bigr) f(u)\,du \cdot \frac{1}{s^2}\int_0^s v\, g(v)\,dv,$$
$$s^2 (FG)'(s) = \int_0^s u f(u)\,du \cdot \int_0^s \Bigl(1 - \frac{v}{s}\Bigr) g(v)\,dv + \int_0^s \Bigl(1 - \frac{u}{s}\Bigr) f(u)\,du \cdot \int_0^s v\, g(v)\,dv,$$
$$\begin{aligned}
\frac{d}{ds}\bigl[s^2 (FG)'(s)\bigr] &= s f(s) \int_0^s \Bigl(1 - \frac{v}{s}\Bigr) g(v)\,dv + \int_0^s u f(u)\,du \cdot \frac{1}{s^2}\int_0^s v\, g(v)\,dv \\
&\quad + \frac{1}{s^2}\int_0^s u f(u)\,du \cdot \int_0^s v\, g(v)\,dv + \int_0^s \Bigl(1 - \frac{u}{s}\Bigr) f(u)\,du \cdot s g(s).
\end{aligned}$$
Thus (7) gives
$$(f \circledast g)(t) = \int_0^t \Bigl(1 - \frac{u}{t}\Bigr)\bigl[f(t) g(u) + f(u) g(t)\bigr]\,du + \frac{2}{t^3}\int_0^t u f(u)\,du \int_0^t u\, g(u)\,du.$$
Note that the $\mathcal{R}_c$-convolution operator is also commutative. □
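The formula of Proposition 5 can be checked against the convolution property that motivated definition (7). The Python sketch below is our own; the test functions $f(t) = e^{-t}$ and $g(t) = t$ are arbitrary assumptions, and the check is that $\mathcal{R}_c\{(f \circledast g)(t);s\} = F(s)\,G(s)$ numerically.

```python
# Numerical sketch: the R_c-convolution of Proposition 5 satisfies R_c{(f*g)(t); s} = F(s) G(s).
# The test functions f(t) = e^{-t} and g(t) = t are illustrative choices.
import numpy as np
from scipy.integrate import quad

def Rc(f, s):
    return quad(lambda t: (1 - t / s) * f(t), 0, s)[0]

f = lambda t: np.exp(-t)
g = lambda t: t

def conv(t):
    # First term of Proposition 5
    term1 = quad(lambda u: (1 - u / t) * (f(t) * g(u) + f(u) * g(t)), 0, t)[0]
    # Second term of Proposition 5
    term2 = (2 / t**3) * quad(lambda u: u * f(u), 0, t)[0] * quad(lambda u: u * g(u), 0, t)[0]
    return term1 + term2

s = 1.8
print(np.isclose(Rc(conv, s), Rc(f, s) * Rc(g, s)))
```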

2.2. Examples

Example 1.
Let $p \geq 0$. Then
$$\mathcal{R}_c\{t^p;s\} = \int_0^s \Bigl(1 - \frac{t}{s}\Bigr) t^p\,dt = \frac{s^{p+1}}{(p+1)(p+2)}.$$
As a special case, when $p = 0$, we get $\mathcal{R}_c\{1;s\} = s/2$. Note that if $-1 < p < 0$, then the function $t \mapsto t^p$ does not belong to $C(\bar{\mathbb{R}}_+)$ but $\mathcal{R}_c\{t^p;s\}$ may still exist as an improper Riemann integral. This can be seen from
$$\lim_{\epsilon \to 0^+} \int_\epsilon^s \Bigl(1 - \frac{t}{s}\Bigr) t^p\,dt = \lim_{\epsilon \to 0^+} \Biggl[\frac{s^{p+1}}{p+1} - \frac{s^{p+2}}{s(p+2)} - \frac{\epsilon^{p+1}}{p+1} + \frac{\epsilon^{p+2}}{s(p+2)}\Biggr] = \frac{s^{p+1}}{(p+1)(p+2)}$$
and therefore
$$\mathcal{R}_c\{t^p;s\} = \frac{s^{p+1}}{(p+1)(p+2)}, \qquad p > -1. \tag{8}$$
Of course, to be able to apply the results of the previous section for this case, we have to assume that $p \geq 0$.
It is known that
$$\lim_{s \to \infty} \mathcal{L}\{f(t);s\} = \lim_{s \to \infty} \int_0^\infty e^{-st} f(t)\,dt = 0$$
in general. Furthermore,
$$\mathcal{L}\{t^p;s\} = \frac{\Gamma(p+1)}{s^{p+1}}, \qquad p > -1,$$
where $\Gamma$ is the Euler gamma function. In contrast, this example shows that
$$\lim_{s \to \infty} \mathcal{R}_c\{t^p;s\} = \infty, \qquad p > -1.$$
Observe that $\mathcal{L}\{t^p;s\}$ will still exist as an improper Riemann integral when $-1 < p < 0$, just as for $\mathcal{R}_c\{t^p;s\}$.
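A short numerical check of Example 1 (our own illustration, not from the article) confirms formula (8), including a value of $p$ in $(-1,0)$ for which the integral is improper.

```python
# Numerical sketch of Example 1; p = -0.5 exercises the improper but convergent case.
import numpy as np
from scipy.integrate import quad

def Rc_power(p, s):
    return quad(lambda t: (1 - t / s) * t**p, 0, s)[0]

s = 3.0
for p in (0.0, 1.0, 2.5, -0.5):
    closed_form = s**(p + 1) / ((p + 1) * (p + 2))
    print(p, np.isclose(Rc_power(p, s), closed_form))
```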
Example 2.
If $a \neq 0$, then
$$\mathcal{R}_c\{e^{at};s\} = \int_0^s \Bigl(1 - \frac{t}{s}\Bigr) e^{at}\,dt = \frac{1}{a^2 s}\bigl(e^{as} - as - 1\bigr). \tag{9}$$
By comparison,
$$\mathcal{L}\{e^{at};s\} = \frac{1}{s - a}, \qquad s > a.$$
Replacing $a$ by $ia$ in (9) gives
$$\mathcal{R}_c\{e^{iat};s\} = -\frac{1}{a^2 s}\bigl(e^{ias} - ias - 1\bigr).$$
Thus, equating real and imaginary parts, we obtain
$$\mathcal{R}_c\{\cos(at);s\} = \frac{1}{a^2 s}\bigl[1 - \cos(as)\bigr], \qquad \mathcal{R}_c\{\sin(at);s\} = \frac{1}{a^2 s}\bigl[as - \sin(as)\bigr].$$
Recalling that $\cosh(x) = \cos(ix)$ and $\sinh(x) = -i\sin(ix)$ for $x \in \mathbb{R}$, we deduce that
$$\mathcal{R}_c\{\cosh(at);s\} = \frac{1}{a^2 s}\bigl[\cosh(as) - 1\bigr], \qquad \mathcal{R}_c\{\sinh(at);s\} = \frac{1}{a^2 s}\bigl[\sinh(as) - as\bigr].$$
The above formulas can also be compared with their Laplace transform counterparts.
Example 3.
If $F(s) = \mathcal{R}_c\{f(t);s\} = \sin(s)$, then $f(t) = \mathcal{R}_c^{-1}\{F(s);t\} = 2\cos(t) - t\sin(t)$ from Proposition 4.
Example 4.
Let $f(t) = 1$ and $g(t) = e^t$. Then, using (8) and (9), we see that
$$F(s) = \mathcal{R}_c\{f(t);s\} = \frac{s}{2}, \qquad G(s) = \mathcal{R}_c\{g(t);s\} = \frac{1}{s}\bigl(e^s - s - 1\bigr).$$
Straightforward calculations give
$$(FG)'(s) = \frac{1}{2}\bigl(e^s - 1\bigr), \qquad \frac{d}{ds}\bigl[s^2 (FG)'(s)\bigr] = \frac{1}{2}\, s^2 e^s + s e^s - s.$$
Therefore (7) yields
$$(f \circledast g)(t) = \lim_{s \to t} \frac{1}{s} \frac{d}{ds}\bigl[s^2 (FG)'(s)\bigr] = \frac{1}{2}\, t e^t + e^t - 1.$$
Example 5.
Integral transforms are usually used to solve linear ordinary differential equations (ODEs). This example shows that the $\mathcal{R}_c$-transform can be used to solve some nonlinear ODEs. Let $f : \mathbb{R}_+ \to \mathbb{R}$ be a given continuous function of $t$, and consider
$$y y'' + (y')^2 = f(t), \tag{11}$$
where $y = y(t)$ is to be determined. With a slight abuse of notation, if $f : \mathbb{R}_+ \to \mathbb{R}$ and $g : \mathbb{R}_+ \to \mathbb{R}$ are any continuously differentiable functions of $t$, then
$$\mathcal{R}_c\{f(t) g'(t);s\} = \int_0^s \Bigl(1 - \frac{t}{s}\Bigr) f(t)\, g'(t)\,dt = -f(0)\, g(0) + \frac{1}{s}\int_0^s f(t)\, g(t)\,dt - \mathcal{R}_c\{f'(t)\, g(t);s\} \tag{12}$$
from integration by parts. Choosing $f(t) = y(t)$ and $g(t) = y'(t)$, we see from (12) that
$$\mathcal{R}_c\{y(t)\, y''(t);s\} = -y(0)\, y'(0) + \frac{1}{s}\int_0^s y(t)\, y'(t)\,dt - \mathcal{R}_c\{y'(t)\, y'(t);s\},$$
which gives
$$\mathcal{R}_c\{y(t)\, y''(t);s\} + \mathcal{R}_c\{[y'(t)]^2;s\} = -y(0)\, y'(0) + \frac{1}{2s}\bigl[y(s)^2 - y(0)^2\bigr].$$
Taking the $\mathcal{R}_c$-transform of (11), we obtain
$$-y(0)\, y'(0) + \frac{1}{2s}\bigl[y(s)^2 - y(0)^2\bigr] = \mathcal{R}_c\{y(t)\, y''(t);s\} + \mathcal{R}_c\{[y'(t)]^2;s\} = F(s),$$
where $F(s) = \mathcal{R}_c\{f(t);s\}$. Solving for $y(s)^2$ yields
$$y(s)^2 = 2 s F(s) + 2 s\, y(0)\, y'(0) + y(0)^2.$$
Replacing $s$ by $t$,
$$y(t)^2 = 2 t F(t) + 2 t\, y(0)\, y'(0) + y(0)^2 = 2 t \int_0^t \Bigl(1 - \frac{u}{t}\Bigr) f(u)\,du + 2 t\, y(0)\, y'(0) + y(0)^2 = 2\int_0^t (t - u) f(u)\,du + 2 t\, y(0)\, y'(0) + y(0)^2.$$
Thus the solution of the nonlinear ODE (11) using the $\mathcal{R}_c$-transform is
$$y(t) = \pm\Bigl[2\int_0^t (t - u) f(u)\,du + 2 t\, y(0)\, y'(0) + y(0)^2\Bigr]^{1/2},$$
where the initial conditions $y(0)$ and $y'(0)$ are assumed to be given. Alternatively, (11) can be solved by rewriting it as $(y y')' = f(t)$ and possibly making a change of variables to reduce the order.
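The closed-form solution of Example 5 can be compared with a direct numerical integration of (11). In the sketch below (our own illustration), the data $f(t) = 1 + t$, $y(0) = 1$ and $y'(0) = 0$ are arbitrary choices; for this $f$, $\int_0^t (t-u) f(u)\,du = t^2/2 + t^3/6$.

```python
# Numerical sketch of Example 5 with the illustrative data f(t) = 1 + t, y(0) = 1, y'(0) = 0.
import numpy as np
from scipy.integrate import solve_ivp

f = lambda t: 1.0 + t

def rhs(t, w):
    y, v = w                               # w = (y, y')
    return [v, (f(t) - v**2) / y]          # rearranged from y y'' + (y')^2 = f(t)

sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], dense_output=True, rtol=1e-10, atol=1e-10)

ts = np.linspace(0.0, 2.0, 5)
y_closed = np.sqrt(2 * (ts**2 / 2 + ts**3 / 6) + 1.0)   # the formula above with the chosen data
print(np.allclose(sol.sol(ts)[0], y_closed, atol=1e-7))
```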
Example 6.
Let us solve the integral equation
$$y(t) = -\int_0^t y(u)\,du - \frac{t}{2}\, y(t) + 1 \tag{13}$$
for $y = y(t)$. Suppose that we take $f(t) = -1$ and $g(t) = y(t)$ in Proposition 5. We obtain
$$(f \circledast g)(t) = -\int_0^t \Bigl(1 - \frac{u}{t}\Bigr)\bigl[y(u) + y(t)\bigr]\,du - \frac{2}{t^3}\int_0^t u\,du \int_0^t u\, y(u)\,du = -\frac{1}{t}\int_0^t (t - u)\, y(u)\,du - y(t)\Bigl(t - \frac{t}{2}\Bigr) - \frac{1}{t}\int_0^t u\, y(u)\,du = -\int_0^t y(u)\,du - \frac{t}{2}\, y(t).$$
Therefore the integral equation (13) can be expressed as
$$y(t) = (f \circledast g)(t) + 1.$$
Taking the $\mathcal{R}_c$-transform, applying the $\mathcal{R}_c$-convolution property and using (8), we have
$$Y(s) = -\frac{s}{2}\, Y(s) + \frac{s}{2} \quad \text{or} \quad Y(s) = \frac{s}{2 + s},$$
where $Y(s) = \mathcal{R}_c\{y(t);s\} = \mathcal{R}_c\{g(t);s\}$. Then
$$Y'(s) = \frac{2}{(2 + s)^2}, \qquad s^2 Y'(s) = \frac{2 s^2}{(2 + s)^2}, \qquad \frac{d}{ds}\bigl[s^2 Y'(s)\bigr] = \frac{8 s}{(2 + s)^3}.$$
Hence (7) gives
$$y(t) = \lim_{s \to t} \frac{1}{s} \frac{d}{ds}\bigl[s^2 Y'(s)\bigr] = \frac{8}{(2 + t)^3} \tag{14}$$
as the solution of the integral equation (13). For comparison, we observe that (13) can be converted to an ODE by differentiating with respect to $t$. Thus
$$y'(t) = -y(t) - \frac{1}{2}\, y(t) - \frac{t}{2}\, y'(t) \quad \text{or} \quad y'(t) = -\frac{3}{2 + t}\, y(t).$$
This is a nonautonomous linear first-order ODE, which can be solved using the Laplace transform only if we rewrite it as $2 y'(t) + t y'(t) = -3 y(t)$. Even then, the term $\mathcal{L}\{t y'(t);s\}$ involves the derivative of the Laplace transform of $y$ and therefore produces another ODE in Laplace transform space, which is of the same degree of difficulty as the original problem. However, using the method of integrating factors, the general solution can be expressed as
$$y(t) = \frac{c}{(2 + t)^3},$$
where $c$ is an arbitrary constant. Note also from (13) that $y(0) = 1$; thus $c = 8$ and we recover the solution (14) obtained using the $\mathcal{R}_c$-transform.
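As an additional check (ours, not in the original text), the solution (14) can be substituted back into the integral equation (13) numerically:

```python
# Numerical sketch: y(t) = 8/(2 + t)^3 satisfies y(t) = -int_0^t y(u) du - (t/2) y(t) + 1.
import numpy as np
from scipy.integrate import quad

y = lambda t: 8.0 / (2.0 + t)**3

for t in (0.5, 1.0, 3.0):
    rhs = -quad(y, 0, t)[0] - t / 2 * y(t) + 1.0
    print(np.isclose(y(t), rhs))
```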
Example 7.
More generally, let us solve the linear integral equation
$$y(t) = (f \circledast y)(t) + g(t) \tag{15}$$
for $y = y(t)$, where $f : \bar{\mathbb{R}}_+ \to \mathbb{R}$ and $g : \bar{\mathbb{R}}_+ \to \mathbb{R}$ are arbitrary but given continuous functions. Assume further that $f(t) \leq 0$ for $t \geq 0$. It follows that
$$F(s) = \mathcal{R}_c\{f(t);s\} = \int_0^s \Bigl(1 - \frac{t}{s}\Bigr) f(t)\,dt \leq 0 < 1.$$
Note that in general the integral equation (15) cannot be converted to a linear ODE by differentiation with respect to $t$, unlike in the previous example. Expanding the $\mathcal{R}_c$-convolution operator in (15), we have
$$(f \circledast y)(t) = \int_0^t \Bigl(1 - \frac{u}{t}\Bigr)\bigl[f(t) y(u) + f(u) y(t)\bigr]\,du + \frac{2}{t^3}\int_0^t u f(u)\,du \int_0^t u\, y(u)\,du.$$
An application of the $\mathcal{R}_c$-convolution property to (15) gives
$$Y(s) = F(s)\, Y(s) + G(s) \quad \text{or} \quad Y(s) = \frac{G(s)}{1 - F(s)},$$
where $Y(s) = \mathcal{R}_c\{y(t);s\}$, $F(s) = \mathcal{R}_c\{f(t);s\}$ and $G(s) = \mathcal{R}_c\{g(t);s\}$. If we define
$$H(s) = \frac{1}{1 - F(s)} = \bigl[1 - F(s)\bigr]^{-1},$$
then
$$H'(s) = \bigl[1 - F(s)\bigr]^{-2} F'(s), \qquad s^2 H'(s) = s^2 \bigl[1 - F(s)\bigr]^{-2} F'(s)$$
and
$$\frac{d}{ds}\bigl[s^2 H'(s)\bigr] = \frac{s}{\bigl[1 - F(s)\bigr]^2}\Bigl\{2 F'(s) + \frac{2 s\,[F'(s)]^2}{1 - F(s)} + s F''(s)\Bigr\}.$$
Therefore from (7) we deduce that
$$h(t) = \mathcal{R}_c^{-1}\{H(s);t\} = \frac{1}{\bigl[1 - F(t)\bigr]^2}\Bigl\{2 F'(t) + \frac{2 t\,[F'(t)]^2}{1 - F(t)} + t F''(t)\Bigr\}.$$
Finally, $y$ can be expressed as an $\mathcal{R}_c$-convolution, i.e.,
$$y(t) = \mathcal{R}_c^{-1}\Bigl\{\frac{G(s)}{1 - F(s)};\, t\Bigr\} = \mathcal{R}_c^{-1}\{H(s)\, G(s);t\} = (h \circledast g)(t).$$
The function $g$ is given, while $F$ (and therefore $h$) is obtained from the given function $f$ by taking its $\mathcal{R}_c$-transform and replacing $s$ by $t$.
Example 8.
Consider the nonlinear integral equation
$$y(t) = (y \circledast y)(t) + g(t), \tag{16}$$
where
$$(y \circledast y)(t) = 2\int_0^t \Bigl(1 - \frac{u}{t}\Bigr) y(t)\, y(u)\,du + \frac{2}{t^3}\Bigl[\int_0^t u\, y(u)\,du\Bigr]^2$$
from Proposition 5 and $g : \bar{\mathbb{R}}_+ \to \mathbb{R}$ is a given continuous function such that $g(t) \leq 0$ for $t \geq 0$. This implies that
$$G(s) = \mathcal{R}_c\{g(t);s\} = \int_0^s \Bigl(1 - \frac{t}{s}\Bigr) g(t)\,dt \leq 0.$$
As before, we want to determine $y = y(t)$.
Taking the $\mathcal{R}_c$-transform of (16) and invoking the convolution property, we get
$$Y(s) = Y(s)^2 + G(s) \quad \text{or} \quad Y(s) = \frac{1}{2}\Bigl[1 \pm \sqrt{1 - 4 G(s)}\Bigr],$$
where $Y(s) = \mathcal{R}_c\{y(t);s\}$ and $G(s) = \mathcal{R}_c\{g(t);s\}$. Proposition 4 implies that
$$y(t) = \lim_{s \to t} \frac{1}{s} \frac{d}{ds}\bigl[s^2 Y'(s)\bigr] = \lim_{s \to t} \frac{1}{s} \frac{d}{ds}\Bigl[\mp \frac{s^2 G'(s)}{\sqrt{1 - 4 G(s)}}\Bigr].$$
An auxiliary condition is needed to determine the correct sign above.

3. A Discrete Transform with a Piecewise Linear Kernel

3.1. Properties of the Discrete Transform

Throughout this section we assume that $z > 0$. If $\lfloor z \rfloor$ denotes the greatest integer less than or equal to $z$, then $\lfloor z \rfloor \leq z < \lfloor z \rfloor + 1$. For all $j \geq \lfloor z \rfloor + 1$ we see that $j \geq \lfloor z \rfloor + 1 > z$ and $1 - j/z < 0$; hence (4) reduces to
$$\mathcal{R}_d\{f(j);z\} = \sum_{j=0}^{\lfloor z \rfloor} \Bigl(1 - \frac{j}{z}\Bigr) f(j). \tag{17}$$
Let $U$ be the collection of all real-valued sequences defined on $\mathbb{N}_0$. As (17) is a finite sum, we see that $\mathcal{R}_d\{f(j);z\} \in \mathbb{R}$ is always defined for any $z > 0$. Because of the presence of $\lfloor z \rfloor$, we see that the $\mathcal{R}_d$-transform is not necessarily continuous in $z$. Let $V$ be the collection of all real-valued functions defined on $\mathbb{R}_+$. Then $\mathcal{R}_d : U \to V$.
Remark 2.
We shall see later that the image of $\mathcal{R}_d$ is in fact a proper subset of $V$.
The Z-transform satisfies the backward shift and forward shift properties
$$\mathcal{Z}\{f(j-k);z\} = z^{-k}\, \mathcal{Z}\{f(j);z\}, \qquad \mathcal{Z}\{f(j+k);z\} = z^{k}\, \mathcal{Z}\{f(j);z\} - z^{k} \sum_{j=0}^{k-1} z^{-j} f(j),$$
respectively, where $k \in \mathbb{N}$. Similar to the Z-transform, the $\mathcal{R}_d$-transform also has backward shift and forward shift properties, although they are not as simple since the kernel of the Z-transform satisfies a semigroup property while the kernel of the $\mathcal{R}_d$-transform does not. In the following computations we will assume that $f(j) = 0$ if $j < 0$.
Proposition 6.
For $k \in \mathbb{N}$ and $z \geq k$ the $\mathcal{R}_d$-transform has the backward shift property
$$\mathcal{R}_d\{f(j-k);z\} = \mathcal{R}_d\{f(j);z\} - \frac{k}{z} \sum_{j=0}^{\lfloor z \rfloor} f(j) - \sum_{j=\lfloor z \rfloor - k + 1}^{\lfloor z \rfloor} \Bigl(1 - \frac{j + k}{z}\Bigr) f(j).$$
Proof. 
We see from (17) that
$$\mathcal{R}_d\{f(j-k);z\} = \sum_{j=0}^{\lfloor z \rfloor} \Bigl(1 - \frac{j}{z}\Bigr) f(j-k) = \sum_{j=k}^{\lfloor z \rfloor} \Bigl(1 - \frac{j}{z}\Bigr) f(j-k)$$
since $f(j-k) = 0$ for all $j < k$. Introduce the new index $n = j - k$ to give
$$\mathcal{R}_d\{f(j-k);z\} = \sum_{n=0}^{\lfloor z \rfloor - k} \Bigl(1 - \frac{n + k}{z}\Bigr) f(n).$$
Therefore
$$\begin{aligned}
\mathcal{R}_d\{f(j-k);z\} &= \sum_{n=0}^{\lfloor z \rfloor} \Bigl(1 - \frac{n + k}{z}\Bigr) f(n) - \sum_{n=\lfloor z \rfloor - k + 1}^{\lfloor z \rfloor} \Bigl(1 - \frac{n + k}{z}\Bigr) f(n) \\
&= \sum_{n=0}^{\lfloor z \rfloor} \Bigl(1 - \frac{n}{z}\Bigr) f(n) - \frac{k}{z} \sum_{n=0}^{\lfloor z \rfloor} f(n) - \sum_{n=\lfloor z \rfloor - k + 1}^{\lfloor z \rfloor} \Bigl(1 - \frac{n + k}{z}\Bigr) f(n) \\
&= \mathcal{R}_d\{f(j);z\} - \frac{k}{z} \sum_{j=0}^{\lfloor z \rfloor} f(j) - \sum_{j=\lfloor z \rfloor - k + 1}^{\lfloor z \rfloor} \Bigl(1 - \frac{j + k}{z}\Bigr) f(j)
\end{aligned}$$
and the conclusion follows. □
Remark 3.
From Proposition 6 we deduce the $\mathcal{R}_d$-transform
$$\mathcal{R}_d\{f(j) - f(j-k);z\} = \frac{k}{z} \sum_{j=0}^{\lfloor z \rfloor} f(j) + \sum_{j=\lfloor z \rfloor - k + 1}^{\lfloor z \rfloor} \Bigl(1 - \frac{j + k}{z}\Bigr) f(j) \tag{18}$$
of the backward difference $f(j) - f(j-k)$. In particular, when $k = 1$, this gives
$$\mathcal{R}_d\{f(j) - f(j-1);z\} = \frac{1}{z} \sum_{j=0}^{\lfloor z \rfloor} f(j) + \Bigl(1 - \frac{\lfloor z \rfloor + 1}{z}\Bigr) f(\lfloor z \rfloor) = \frac{1}{z} \sum_{j=0}^{\lfloor z \rfloor - 1} f(j) + \Bigl(1 - \frac{\lfloor z \rfloor}{z}\Bigr) f(\lfloor z \rfloor).$$
Proposition 7.
For $k \in \mathbb{N}$ and $z \geq k$ the $\mathcal{R}_d$-transform has the forward shift property
$$\mathcal{R}_d\{f(j+k);z\} = \mathcal{R}_d\{f(j);z\} + \frac{k}{z} \sum_{j=0}^{\lfloor z \rfloor} f(j) - \sum_{j=0}^{k-1} \Bigl(1 - \frac{j - k}{z}\Bigr) f(j) + \sum_{j=\lfloor z \rfloor + 1}^{\lfloor z \rfloor + k} \Bigl(1 - \frac{j - k}{z}\Bigr) f(j).$$
Proof. 
Equation (17) yields
$$\mathcal{R}_d\{f(j+k);z\} = \sum_{j=0}^{\lfloor z \rfloor} \Bigl(1 - \frac{j}{z}\Bigr) f(j+k).$$
Set $n = j + k$, so that
$$\mathcal{R}_d\{f(j+k);z\} = \sum_{n=k}^{\lfloor z \rfloor + k} \Bigl(1 - \frac{n - k}{z}\Bigr) f(n) = \sum_{n=k}^{\lfloor z \rfloor} \Bigl(1 - \frac{n - k}{z}\Bigr) f(n) + \sum_{n=\lfloor z \rfloor + 1}^{\lfloor z \rfloor + k} \Bigl(1 - \frac{n - k}{z}\Bigr) f(n).$$
Rearranging gives
$$\begin{aligned}
\mathcal{R}_d\{f(j+k);z\} &= \sum_{n=0}^{\lfloor z \rfloor} \Bigl(1 - \frac{n - k}{z}\Bigr) f(n) - \sum_{n=0}^{k-1} \Bigl(1 - \frac{n - k}{z}\Bigr) f(n) + \sum_{n=\lfloor z \rfloor + 1}^{\lfloor z \rfloor + k} \Bigl(1 - \frac{n - k}{z}\Bigr) f(n) \\
&= \sum_{n=0}^{\lfloor z \rfloor} \Bigl(1 - \frac{n}{z}\Bigr) f(n) + \frac{k}{z} \sum_{n=0}^{\lfloor z \rfloor} f(n) - \sum_{n=0}^{k-1} \Bigl(1 - \frac{n - k}{z}\Bigr) f(n) + \sum_{n=\lfloor z \rfloor + 1}^{\lfloor z \rfloor + k} \Bigl(1 - \frac{n - k}{z}\Bigr) f(n) \\
&= \mathcal{R}_d\{f(j);z\} + \frac{k}{z} \sum_{j=0}^{\lfloor z \rfloor} f(j) - \sum_{j=0}^{k-1} \Bigl(1 - \frac{j - k}{z}\Bigr) f(j) + \sum_{j=\lfloor z \rfloor + 1}^{\lfloor z \rfloor + k} \Bigl(1 - \frac{j - k}{z}\Bigr) f(j).
\end{aligned}$$
This completes the proof. □
Remark 4.
Using Proposition 7, we obtain the $\mathcal{R}_d$-transform
$$\mathcal{R}_d\{f(j+k) - f(j);z\} = \frac{k}{z} \sum_{j=0}^{\lfloor z \rfloor} f(j) - \sum_{j=0}^{k-1} \Bigl(1 - \frac{j - k}{z}\Bigr) f(j) + \sum_{j=\lfloor z \rfloor + 1}^{\lfloor z \rfloor + k} \Bigl(1 - \frac{j - k}{z}\Bigr) f(j) \tag{19}$$
of the forward difference $f(j+k) - f(j)$. A special case is $k = 1$, giving
$$\mathcal{R}_d\{f(j+1) - f(j);z\} = \frac{1}{z} \sum_{j=0}^{\lfloor z \rfloor} f(j) - \Bigl(1 + \frac{1}{z}\Bigr) f(0) + \Bigl(1 - \frac{\lfloor z \rfloor}{z}\Bigr) f(\lfloor z \rfloor + 1).$$
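The $k = 1$ difference formulas in Remarks 3 and 4 can be verified numerically; the short sketch below is our own, and the test sequence $f(j) = (j+1)^2$ is an arbitrary assumption.

```python
# Numerical spot check of the k = 1 formulas in Remarks 3 and 4 (illustrative test sequence).
import numpy as np

def Rd(f, z):
    m = int(np.floor(z))
    return sum((1 - j / z) * f(j) for j in range(m + 1))

f = lambda j: (j + 1)**2
z = 6.4
m = int(np.floor(z))

# Backward difference (Remark 3), with f(j) = 0 understood for j < 0
lhs_b = Rd(lambda j: f(j) - (f(j - 1) if j >= 1 else 0.0), z)
rhs_b = sum(f(j) for j in range(m)) / z + (1 - m / z) * f(m)
# Forward difference (Remark 4)
lhs_f = Rd(lambda j: f(j + 1) - f(j), z)
rhs_f = sum(f(j) for j in range(m + 1)) / z - (1 + 1 / z) * f(0) + (1 - m / z) * f(m + 1)
print(np.isclose(lhs_b, rhs_b), np.isclose(lhs_f, rhs_f))
```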
Next, we derive a formula for the inverse $\mathcal{R}_d$-transform. Before we state and prove the result, let us first deduce a pattern and then generalise. Recall from (17) that
$$F(z) = \mathcal{R}_d\{f(j);z\} = \sum_{j=0}^{\lfloor z \rfloor} f(j) - \frac{1}{z} \sum_{j=0}^{\lfloor z \rfloor} j f(j) = a(\lfloor z \rfloor) - \frac{1}{z}\, b(\lfloor z \rfloor) \tag{20}$$
for some sequences $a : \mathbb{N}_0 \to \mathbb{R}$ and $b : \mathbb{N}_0 \to \mathbb{R}$. Therefore, if $F(z)$ is the $\mathcal{R}_d$-transform of some sequence $f : \mathbb{N}_0 \to \mathbb{R}$, then necessarily it has to be of the form $F(z) = a(\lfloor z \rfloor) - b(\lfloor z \rfloor)/z$. This explains the statement in Remark 2. Here we assume that $F(z)$ is known, so $a(\lfloor z \rfloor)$ and $b(\lfloor z \rfloor)$ are also known, and we want to recover $f : \mathbb{N}_0 \to \mathbb{R}$.
Suppose that $0 < z < 1$. From (20) we get
$$F(z) = \sum_{j=0}^{0} f(j) - \frac{1}{z} \sum_{j=0}^{0} j f(j) = f(0) = a(0) - \frac{1}{z}\, b(0),$$
or $a(0) = f(0)$ and $b(0) = 0$. It follows that $f(0) = a(0)$. Moreover, $b(0) = 0$ always holds if $F(z)$ is an $\mathcal{R}_d$-transform.
Now suppose that $1 \leq z < 2$. Then (20) gives
$$F(z) = \sum_{j=0}^{1} f(j) - \frac{1}{z} \sum_{j=0}^{1} j f(j) = f(0) + f(1) - \frac{1}{z}\, f(1) = a(1) - \frac{1}{z}\, b(1),$$
so that $a(1) = f(0) + f(1)$ and $b(1) = f(1)$. Hence $f(1) = b(1) = a(1) - a(0)$.
Take $2 \leq z < 3$ in (20); hence
$$F(z) = \sum_{j=0}^{2} f(j) - \frac{1}{z} \sum_{j=0}^{2} j f(j) = f(0) + f(1) + f(2) - \frac{1}{z}\bigl[f(1) + 2 f(2)\bigr] = a(2) - \frac{1}{z}\, b(2).$$
We deduce that $a(2) = f(0) + f(1) + f(2)$ and $b(2) = f(1) + 2 f(2)$. Therefore $f(2) = a(2) - a(1) = \bigl[b(2) - b(1)\bigr]/2$.
The pattern is now apparent, i.e.,
$$f(j) = a(j) - a(j-1) = \frac{1}{j}\bigl[b(j) - b(j-1)\bigr], \quad j \geq 1, \qquad f(0) = a(0), \tag{21}$$
provided $b(0) = 0$. Furthermore, if $F(z)$ is an $\mathcal{R}_d$-transform, not only must (20) hold but $a : \mathbb{N}_0 \to \mathbb{R}$ and $b : \mathbb{N}_0 \to \mathbb{R}$ must be such that (21) is true.
We are now ready to state and prove the following result.
Proposition 8.
Let
$$F(z) = \mathcal{R}_d\{f(j);z\} = a(\lfloor z \rfloor) - \frac{1}{z}\, b(\lfloor z \rfloor)$$
be the $\mathcal{R}_d$-transform of some sequence $f : \mathbb{N}_0 \to \mathbb{R}$, where $a : \mathbb{N}_0 \to \mathbb{R}$ is arbitrary and $b : \mathbb{N}_0 \to \mathbb{R}$ satisfies
$$b(j) = \sum_{k=0}^{j-1} (k+1)\bigl[a(k+1) - a(k)\bigr], \quad j \geq 1, \qquad b(0) = 0. \tag{22}$$
Then $f(j) = \mathcal{R}_d^{-1}\{F(z);j\}$ is given by
$$f(j) = a(j) - a(j-1) = \frac{1}{j}\bigl[b(j) - b(j-1)\bigr], \quad j \geq 1, \qquad f(0) = a(0). \tag{23}$$
Proof. 
For all $j \geq 1$ we have
$$b(j) - b(j-1) = \sum_{k=0}^{j-1} (k+1)\bigl[a(k+1) - a(k)\bigr] - \sum_{k=0}^{j-2} (k+1)\bigl[a(k+1) - a(k)\bigr] = j\bigl[a(j) - a(j-1)\bigr].$$
We need to show that $\mathcal{R}_d\{f(j);z\} = a(\lfloor z \rfloor) - b(\lfloor z \rfloor)/z = F(z)$, where $f : \mathbb{N}_0 \to \mathbb{R}$ is defined in (23).
Recalling (17) and (23), we get
$$\mathcal{R}_d\{f(j);z\} = \sum_{j=0}^{\lfloor z \rfloor} \Bigl(1 - \frac{j}{z}\Bigr)\bigl[a(j) - a(j-1)\bigr],$$
which is precisely the $\mathcal{R}_d$-transform of the backward difference $a(j) - a(j-1)$. With the aid of (18), we get
$$\mathcal{R}_d\{f(j);z\} = \frac{1}{z} \sum_{j=0}^{\lfloor z \rfloor - 1} a(j) + \Bigl(1 - \frac{\lfloor z \rfloor}{z}\Bigr) a(\lfloor z \rfloor).$$
So $\mathcal{R}_d\{f(j);z\} = a(\lfloor z \rfloor) - b(\lfloor z \rfloor)/z$ if and only if
$$\lfloor z \rfloor\, a(\lfloor z \rfloor) - \sum_{j=0}^{\lfloor z \rfloor - 1} a(j) = b(\lfloor z \rfloor). \tag{24}$$
We can express (22) as
$$b(j) = \sum_{k=0}^{j-1} (k+1)\, a(k+1) - \sum_{k=0}^{j-1} k\, a(k) - \sum_{k=0}^{j-1} a(k) = \sum_{k=1}^{j} k\, a(k) - \sum_{k=1}^{j-1} k\, a(k) - \sum_{k=0}^{j-1} a(k) = j\, a(j) - \sum_{k=0}^{j-1} a(k). \tag{25}$$
Taking $j = \lfloor z \rfloor$ establishes (24). This proves that $\mathcal{R}_d\{f(j);z\} = a(\lfloor z \rfloor) - b(\lfloor z \rfloor)/z = F(z)$. □
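Proposition 8 translates directly into a simple numerical round trip: build $a$ and $b$ from a sequence as in (20), then recover the sequence via (23). The sketch below is our own illustration on a random finite test sequence.

```python
# Sketch of Proposition 8: recover a sequence from the data a and b in F(z) = a(floor(z)) - b(floor(z))/z.
import numpy as np

rng = np.random.default_rng(0)
f = rng.integers(-5, 5, size=10).astype(float)         # an arbitrary finite test sequence

# From (20): a(m) = sum_{j<=m} f(j) and b(m) = sum_{j<=m} j f(j)
a = np.cumsum(f)
b = np.cumsum(np.arange(len(f)) * f)

# Recover f using (23): f(0) = a(0), f(j) = a(j) - a(j-1) = [b(j) - b(j-1)]/j for j >= 1
f_from_a = np.concatenate(([a[0]], np.diff(a)))
f_from_b = np.concatenate(([a[0]], np.diff(b) / np.arange(1, len(f))))
print(np.allclose(f, f_from_a), np.allclose(f, f_from_b))
```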

3.2. Examples

Example 9.
If $f(j) = 1$, then
$$\mathcal{R}_d\{1;z\} = \sum_{j=0}^{\lfloor z \rfloor} \Bigl(1 - \frac{j}{z}\Bigr) = \lfloor z \rfloor + 1 - \frac{\lfloor z \rfloor\,(\lfloor z \rfloor + 1)}{2z}, \qquad z > 0.$$
By comparison with the Z-transform, there holds
$$\mathcal{Z}\{1;z\} = \frac{1}{1 - z^{-1}}, \qquad z > 1.$$
Example 10.
Let $f(j) = j^n$, where $n \in \mathbb{N}$. Then
$$\mathcal{R}_d\{j^n;z\} = \sum_{j=0}^{\lfloor z \rfloor} \Bigl(1 - \frac{j}{z}\Bigr) j^n = \sum_{j=0}^{\lfloor z \rfloor} j^n - \frac{1}{z} \sum_{j=0}^{\lfloor z \rfloor} j^{n+1} = \sum_{j=1}^{\lfloor z \rfloor} j^n - \frac{1}{z} \sum_{j=1}^{\lfloor z \rfloor} j^{n+1}.$$
Recall that
$$\sum_{j=1}^{\lfloor z \rfloor} j^n = \sum_{k=0}^{n} \frac{B_k}{k!}\, n^{\underline{k-1}}\, \lfloor z \rfloor^{\,n+1-k},$$
where the $B_k$ are the Bernoulli numbers and $n^{\underline{k-1}}$ is the falling factorial [4]. It follows that
$$\mathcal{R}_d\{j^n;z\} = \sum_{k=0}^{n} \frac{B_k}{k!}\, n^{\underline{k-1}}\, \lfloor z \rfloor^{\,n+1-k} - \frac{1}{z} \sum_{k=0}^{n+1} \frac{B_k}{k!}\, (n+1)^{\underline{k-1}}\, \lfloor z \rfloor^{\,n+2-k}. \tag{26}$$
We remark that an analogous formula to (26) for $\mathcal{Z}\{j^n;z\}$ for arbitrary $n$ is not available.
Example 11.
Let $a \neq 1$ and $f(j) = a^j$. Then
$$\mathcal{R}_d\{a^j;z\} = \sum_{j=0}^{\lfloor z \rfloor} \Bigl(1 - \frac{j}{z}\Bigr) a^j = \sum_{j=0}^{\lfloor z \rfloor} a^j - \frac{1}{z} \sum_{j=0}^{\lfloor z \rfloor} j a^j = \frac{a^{\lfloor z \rfloor + 1} - 1}{a - 1} - \frac{1}{z}\,\frac{\lfloor z \rfloor\, a^{\lfloor z \rfloor + 2} - (\lfloor z \rfloor + 1)\, a^{\lfloor z \rfloor + 1} + a}{(a - 1)^2}, \qquad z > 0.$$
By comparison with the Z-transform,
$$\mathcal{Z}\{a^j;z\} = \frac{1}{1 - a z^{-1}}, \qquad z > |a|.$$
In particular, if $\omega \in \mathbb{R} \setminus \{0\}$, then
$$\mathcal{R}_d\{e^{\omega j};z\} = \frac{e^{\omega(\lfloor z \rfloor + 1)} - 1}{e^{\omega} - 1} - \frac{1}{z}\,\frac{\lfloor z \rfloor\, e^{\omega(\lfloor z \rfloor + 2)} - (\lfloor z \rfloor + 1)\, e^{\omega(\lfloor z \rfloor + 1)} + e^{\omega}}{(e^{\omega} - 1)^2}.$$
Expressions for $\mathcal{R}_d\{\cosh(\omega j);z\}$ and $\mathcal{R}_d\{\sinh(\omega j);z\}$ can then be derived using
$$\cosh(\omega j) = \tfrac{1}{2}\bigl(e^{\omega j} + e^{-\omega j}\bigr), \qquad \sinh(\omega j) = \tfrac{1}{2}\bigl(e^{\omega j} - e^{-\omega j}\bigr),$$
respectively, and the linearity of the operator $\mathcal{R}_d$. Similarly, $\mathcal{R}_d\{e^{i\omega j};z\}$ can be formally derived to obtain $\mathcal{R}_d\{\cos(\omega j);z\}$ and $\mathcal{R}_d\{\sin(\omega j);z\}$. Alternatively, the definition of the $\mathcal{R}_d$-transform can be applied directly to the real-valued sequences $\cos(\omega j)$ and $\sin(\omega j)$. Note, however, that the resulting expressions are not as simple as their Z-transform counterparts.
Example 12.
Suppose that $a(j) = j$ for all $j \in \mathbb{N}_0$. Substituting into (25), we obtain
$$b(j) = j^2 - \sum_{k=0}^{j-1} k = j^2 - \sum_{k=1}^{j-1} k = \frac{1}{2}\, j(j+1).$$
In other words, we want to find $f : \mathbb{N}_0 \to \mathbb{R}$ such that
$$\mathcal{R}_d\{f(j);z\} = F(z) = \lfloor z \rfloor - \frac{\lfloor z \rfloor\,(\lfloor z \rfloor + 1)}{2z}.$$
Note that
$$b(j) - b(j-1) = \frac{1}{2}\, j(j+1) - \frac{1}{2}\,(j-1)\, j = j = j\bigl[a(j) - a(j-1)\bigr], \qquad j \geq 1.$$
Then (23) gives $f(j) = j - (j-1) = 1$ for $j \geq 1$ and $f(0) = a(0) = 0$. An alternative representation is
$$f(j) = 1 - \delta_{j0}, \qquad j \geq 0,$$
where $\delta$ is the usual Kronecker delta. Let us verify that $\mathcal{R}_d\{f(j);z\} = F(z)$. Indeed, we see that
$$\mathcal{R}_d\{f(j);z\} - F(z) = \sum_{j=0}^{\lfloor z \rfloor} f(j) - \frac{1}{z} \sum_{j=0}^{\lfloor z \rfloor} j f(j) - a(\lfloor z \rfloor) + \frac{1}{z}\, b(\lfloor z \rfloor) = \sum_{j=1}^{\lfloor z \rfloor} 1 - \frac{1}{z} \sum_{j=1}^{\lfloor z \rfloor} j - \lfloor z \rfloor + \frac{\lfloor z \rfloor\,(\lfloor z \rfloor + 1)}{2z} = 0.$$

4. Discussion

In this article, we introduced two new transforms $\mathcal{R}_c$ and $\mathcal{R}_d$ whose kernels are piecewise linear analogues of the kernels of the Laplace transform and the Z-transform, respectively. We gave several examples and derived some properties of these two transforms.
For the continuous case, we showed that the inverse $\mathcal{R}_c$-transform has a form that is relatively straightforward to calculate. We defined an $\mathcal{R}_c$-convolution operator with the aid of the inverse $\mathcal{R}_c$-transform, and then derived an alternative formula for it. We showed through examples that the $\mathcal{R}_c$-transform can be used to solve certain (linear and nonlinear) ODEs and integral equations. As is to be expected, since the transform of a derivative takes a different form, the $\mathcal{R}_c$-transform is not the tool of choice for linear ODEs with constant coefficients; the Laplace transform is more efficient there.
For the discrete case, we derived formulas for the backward shift and forward shift properties of the $\mathcal{R}_d$-transform. Since the kernel of the Z-transform has the semigroup property $K_z(j+k) = K_z(j)\, K_z(k)$ for $j, k \in \mathbb{N}_0$, which does not hold for the $\mathcal{R}_d$-transform kernel, the formulas for the $\mathcal{R}_d$-transform tend to be more complicated than those for the Z-transform. Moreover, an $\mathcal{R}_d$-transform is necessarily of the form $a(\lfloor z \rfloor) - b(\lfloor z \rfloor)/z$ for some sequences $a : \mathbb{N}_0 \to \mathbb{R}$ and $b : \mathbb{N}_0 \to \mathbb{R}$ related through (23). This imposes a restriction on the functions whose inverse $\mathcal{R}_d$-transforms exist.
Finding further interesting applications for the new transforms introduced in this article is still an open problem. One possible direction is in model fitting. For example, suppose that $f(j)$ represents the population of some species at time $j$ and is assumed to be modelled by the discrete logistic equation
$$f(j+1) = f(j) + r f(j)\Bigl(1 - \frac{f(j)}{K}\Bigr), \quad j \geq 0, \qquad f(0) \text{ given}, \tag{27}$$
where $r > 0$ is the intrinsic growth rate and $K > 0$ is the carrying capacity. Taking the $\mathcal{R}_d$-transform of (27) and using (19) yields
$$\frac{1}{z} \sum_{j=0}^{\lfloor z \rfloor} f(j) - \Bigl(1 + \frac{1}{z}\Bigr) f(0) + \Bigl(1 - \frac{\lfloor z \rfloor}{z}\Bigr) f(\lfloor z \rfloor + 1) = r F(z) - \frac{r}{K}\, G(z), \tag{28}$$
where
$$F(z) = \mathcal{R}_d\{f(j);z\} = \sum_{j=0}^{\lfloor z \rfloor} \Bigl(1 - \frac{j}{z}\Bigr) f(j), \qquad G(z) = \mathcal{R}_d\{f(j)^2;z\} = \sum_{j=0}^{\lfloor z \rfloor} \Bigl(1 - \frac{j}{z}\Bigr) f(j)^2.$$
We can interpret $F(z)$ and $G(z)$ as weighted averages of the population sizes $f(0), f(1), \ldots, f(\lfloor z \rfloor)$, with more weight placed on the earlier observations (small $j$). Now suppose that $n + 1$ observations $f(0), f(1), \ldots, f(n)$ can be taken and we need to estimate the parameters $r$ and $K$. Take two values of $z$, say $z_1 = n/2$ and $z_2 = n$, and calculate $F(z_1)$, $G(z_1)$, $F(z_2)$ and $G(z_2)$. Using (28), we may set up the system
$$\begin{aligned}
r F(z_1) - \frac{r}{K}\, G(z_1) &= \frac{1}{z_1} \sum_{j=0}^{\lfloor z_1 \rfloor} f(j) - \Bigl(1 + \frac{1}{z_1}\Bigr) f(0) + \Bigl(1 - \frac{\lfloor z_1 \rfloor}{z_1}\Bigr) f(\lfloor z_1 \rfloor + 1), \\
r F(z_2) - \frac{r}{K}\, G(z_2) &= \frac{1}{z_2} \sum_{j=0}^{\lfloor z_2 \rfloor} f(j) - \Bigl(1 + \frac{1}{z_2}\Bigr) f(0) + \Bigl(1 - \frac{\lfloor z_2 \rfloor}{z_2}\Bigr) f(\lfloor z_2 \rfloor + 1).
\end{aligned} \tag{29}$$
The algebraic system (29) is linear in $r$ and $r/K$. Hence explicit analytical formulas for $r$ and $K$ can be derived (e.g., using Cramer's Rule) in terms of the measured population sizes $f(0), f(1), \ldots, f(n)$. As (27) is nonlinear and its analytical solution is unknown, parameter estimation techniques for $r$ and $K$, such as those based on least squares, are not straightforward to implement. The parameter estimation technique outlined above can be viewed as a discrete version of the integration-based techniques introduced in [14,15].
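To illustrate the procedure, the following Python sketch applies it to synthetic data generated from (27); the choices $n = 20$, $r = 0.4$, $K = 100$, $f(0) = 10$ and $z_1 = n/2$, $z_2 = n$ are our own assumptions and are not taken from the article.

```python
# Sketch of the R_d-based estimation of r and K from synthetic discrete logistic data (illustrative values).
import numpy as np

r_true, K_true, n = 0.4, 100.0, 20
f = np.empty(n + 2)
f[0] = 10.0
for j in range(n + 1):
    f[j + 1] = f[j] + r_true * f[j] * (1 - f[j] / K_true)   # the discrete logistic recursion (27)

def Rd(seq, z):
    m = int(np.floor(z))
    return sum((1 - j / z) * seq[j] for j in range(m + 1))

def forward_diff(seq, z):
    # Left-hand side of (28): R_d-transform of f(j+1) - f(j), via the k = 1 case of (19)
    m = int(np.floor(z))
    return sum(seq[: m + 1]) / z - (1 + 1 / z) * seq[0] + (1 - m / z) * seq[m + 1]

rows, rhs = [], []
for z in (n / 2, float(n)):
    rows.append([Rd(f, z), -Rd(f**2, z)])    # coefficients of the unknowns r and r/K
    rhs.append(forward_diff(f, z))

r_est, r_over_K = np.linalg.solve(np.array(rows), np.array(rhs))
print(r_est, r_est / r_over_K)               # recovers r = 0.4 and K = 100 up to rounding
```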

Author Contributions

Conceptualization, M.R.R.; methodology, M.R.R.; validation, M.R.R. and M.L.; formal analysis, M.R.R. and M.L.; investigation, M.R.R. and M.L.; resources, M.R.R.; writing–original draft preparation, M.R.R.; writing–review and editing, M.R.R. and M.L.; project administration, M.R.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bateman, H.; Erdélyi, A. Tables of Integral Transforms, 1st ed.; Bateman Manuscript Project II; McGraw-Hill: New York, NY, USA, 1954.
  2. Bosch, P.; Carmenate García, H.J.; Rodríguez, J.M.; Sigarreta, J.M. On the generalized Laplace transform. Symmetry 2021, 13, 669.
  3. Davies, B. Integral Transforms and Their Applications, 3rd ed.; Springer: New York, NY, USA, 2002.
  4. Graham, R.; Knuth, D.E.; Patashnik, O. Concrete Mathematics: A Foundation for Computer Science; Addison-Wesley: Reading, UK, 1989.
  5. Knuth, D. The Art of Computer Programming; Addison-Wesley: Reading, UK, 2011; Volumes 1–4.
  6. Poularikas, A.D. (Ed.) The Transforms and Applications Handbook; CRC Press: Boca Raton, FL, USA, 2000.
  7. Rodrigo, M.R. Laplace and Z transforms of linear dynamical systems and conic sections. Z. Angew. Math. Phys. (ZAMP) 2016, 67, 57.
  8. Futcher, T.; Rodrigo, M.R. A general class of integral transforms and an expression for their convolution formulas. Integral Transform. Spec. Funct. 2021.
  9. Elaydi, S. An Introduction to Difference Equations, 3rd ed.; Springer Science: New York, NY, USA, 2005.
  10. Spiegel, M.R. Schaum's Outline of Theory and Problems of Laplace Transforms; McGraw-Hill: New York, NY, USA, 1965.
  11. Horváth, I.; Mészáros, A.; Telek, M. Numerical inverse transformation methods for Z-transform. Mathematics 2020, 8, 556.
  12. Golebiowski, M.; Golebiowski, L.; Smolen, A.; Mazur, D. Direct consideration of eddy current losses in laminated magnetic cores in finite element method (FEM) calculations using the Laplace transform. Energies 2020, 13, 1174.
  13. Karnas, G. Computation of lightning current from electric field based on Laplace transform and deconvolution method. Energies 2021, 14, 4201.
  14. Holder, A.B.; Rodrigo, M.R. An integration-based method for estimating parameters in a system of differential equations. Appl. Math. Comput. 2013, 219, 9700–9708.
  15. Zulkarnaen, D.; Rodrigo, M.R. Modelling human carrying capacity as a function of food availability. ANZIAM J. 2020, 62, 318–333.

