Article

A Spectral Method Approach to Quadratic Normal Volatility Diffusions

Fakulteta za Računalništvo in Informatiko, Univerza v Ljubljani, Večna pot, 1000 Ljubljana, Slovenia
Submission received: 30 June 2023 / Revised: 16 July 2023 / Accepted: 20 July 2023 / Published: 25 July 2023
(This article belongs to the Special Issue Stochastic Differential Equations: Theory, Methods, and Applications)

Abstract
This paper is concerned with the quadratic volatility family of driftless stochastic differential equations (SDEs), also known in the literature as Quadratic Normal Volatility (QNV) models, which have found applications primarily in mathematical finance but can also model the dynamics of stochastic processes in other fields such as mathematical biology and physics. These SDE models are characterized by a quadratic volatility term and can be reduced to one of four distinct possibilities depending on the roots of the quadratic volatility term and the position of the initial value. We find explicit weak solutions for each case by a combination of Itô calculus and Fourier analysis, which can be described as a 'spectral method'. Furthermore, for all cases we also express the weak solutions as fairly simple functions of Brownian motion, which allows for efficient one-step Monte Carlo evaluation of functionals of the solution $X_t$ of the form $\mathbb{E}(f(X_t))$, as well as of more general functionals of the solution. The method used to compute the solutions may also be of independent interest in other fields where SDEs play a fundamental role.

1. Introduction

QNV models have been studied, in financial mathematics in particular, for more than three decades. Several authors have suggested that a number of financial variables (such as foreign exchange rates, equity prices, or forward interest rates, see for instance [1,2,3]) can be modelled by SDEs of the type
$dX_t = \sigma(X_t)\,dW_t$
where the volatility term σ ( x ) is a quadratic function of x and W t is standard Brownian motion. The condition in financial mathematics that there should exist a ’risk-neutral’ probability measure with regard to which asset process X t is a (local) martingale is trivially met by such equations since any solution to (1) is a local martingale by definition of the Itô integral, hence the interest in driftless SDEs as models for underlying variables of financial instruments. Of course, in the financial literature the focus is on the pricing of various derivatives (such as European call and put options), which are basically more or less complex functionals of the underlying process X t , and not necessarily the dynamics of X t itself.
We suggest that dynamics and probabilistic properties of SDEs with quadratic volatility terms, driftless or not, can be interesting in themselves and can serve as viable models for stochastic processes not only in financial mathematics.
As a simple example in evolutionary dynamics, consider for instance a gene that has lost its evolutionary function but is also not detrimental, meaning that no positive or negative evolutionary pressure is applied to the frequency of that gene in a population. What could be a plausible large-population, continuous-time model for the frequency of such a gene over time? Reasonable assumptions for an equation that models the frequency $X_t \in [0,1]$ of such a gene would be that changes in the value of $X_t$ are (1) purely random and that (2) during a short period of time $X_t$ is just as likely to increase by a given amount as it is to decrease by that same amount. Put another way, the expected net change in $X_t$ after a while should be zero (which agrees with the notion of a martingale). Also, the amount of change in frequency during a short time interval should be (3) proportional both to the number of individuals that possess the gene and to the number that do not (although for this assumption to be reasonable a more specific description of the setup would be required). Then, the simplest possible SDE that fulfils these assumptions would be
$dX_t = a\,X_t(1-X_t)\,dW_t, \qquad X_0 = x_0 \in [0,1]$
for some constant a, which is an example of Equation (1).
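As a quick illustration (not part of the original paper), here is a minimal Euler–Maruyama simulation sketch of this gene-frequency SDE; the parameter values are illustrative choices.

```python
# Euler-Maruyama paths of dX = a X (1 - X) dW; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
a, x0, T, n_steps, n_paths = 1.0, 0.4, 5.0, 2000, 5
dt = T / n_steps

X = np.full(n_paths, x0)
for _ in range(n_steps):
    X = X + a * X * (1 - X) * np.sqrt(dt) * rng.standard_normal(n_paths)
    X = np.clip(X, 0.0, 1.0)    # keep the simulated frequency inside [0, 1]
print(X)                        # by time T most paths are close to either 0 or 1
```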
In mathematical physics, there are also situations where processes of the type (1) or related SDEs can arise naturally. We feel that giving a sufficiently detailed example here somewhat exceeds the intended scope of this paper, but we are currently working on a separate paper where our results here can be applied to just such a case.
From a purely theoretical viewpoint, QNV models are also interesting as a perfect example of the difference between a true martingale and a strict local martingale, and of how much more care is needed when dealing with the latter.
The paper is organized as follows: in Section 2, we give a short summary of the existing literature on QNV models and the additional insight we hope to present here. The list of references here is not meant to be exhaustive. The intention is to provide some context for our paper, highlight the original results and aspects of our contribution, and contrast them to the results in the existing literature. In Section 3, we define the four classes of QNV models in terms of four ‘dimensionless’ SDEs, and summarize some qualitative probabilistic properties of our diffusions, which follow from general SDE theory and are used to justify the subsequent analysis. In Section 4, we describe the general procedure used to obtain our results and list the eigenfunctions of the diffusion ‘generators’ for these equations along with the transformations of the equations we will be using. In Section 5, we obtain expressions for the weak solutions for each case, along with alternative expressions relating the solutions to (possibly reflected) Brownian motion. Concise forms of the weak solutions to all four cases are then listed in Section 6, which also contains some comments and interpretations of the results. In Section 7, we present the results of numerical simulations, which demonstrate the accuracy of the obtained results and the efficiency of the proposed Monte Carlo schemes for evaluation of expectations. Finally, in Section 8, we present our conclusions and outline a few possible avenues for further research on this or related topics.

2. Our Results in Context with Existing QNV Model Literature

Initially, QNV models were approached mainly by Markov process and partial differential equation means (since SDEs can in principle be associated to Markov process diffusion generators), leading to heat equation problems for specific prices of financial derivatives of QNV-modelled assets (i.e., functionals of solutions to (1)). This approach, modelled on the original Black–Scholes formulae derivation, is outlined in [4], and carried out for specific root configurations and option prices for instance in [5,6].
However, as subsequently explained in [7], this approach has a fundamental flaw: for most root configurations, solutions to (1) are local martingales that are not true martingales. In practice this means that the naive correspondence between such an SDE and its Markov process diffusion generator breaks down, which in turn leads to incorrect pricing formulas derived by these methods. A more careful probabilistic treatment was therefore needed, which accounts for the strict local martingale nature of (1) (for most root configurations). This was carried out in [7], leading to correct pricing formulas for several choices of option prices and 'non-trivial' root configurations.
Even more was achieved in [8] regarding a purely probabilistic (or more specifically, martingale theory) treatment of QNV models, where a characterization of QNV models was obtained as the only local martingales that can be expressed as certain transformations of stopped Brownian motion; this also leads to pricing formulas for very general functionals of the underlying QNV model (see Theorem 1 and Corollary 1 in [8]). The comprehensive nature of the results in [8] probably explains the apparent lack of more recent significant theoretical results regarding the topic.
In this paper, we offer a more analytically minded alternative approach for dealing with the strict local martingale issue, which leads to a somewhat different but similarly complete probabilistic description of QNV models to that in [8].
Roughly speaking, in both [7,8], the problem of taking expectations of local martingales was handled by localisation, i.e., stopping times and limiting arguments, a standard and natural technique given the definition of a local martingale, and in [8] the QNV models were associated to stopped Brownian motion via a set of martingale measure changes using (generalisations of) the Girsanov theorem.
Here, the basic idea is to identify sufficiently large families of bounded functions of the solution X t to (1) that also happen to be local martingales, relying on the fundamental fact that a bounded local martingale is a true martingale. This leads to eigenfunction problems via the Itô formula, and it turns out that taking expectations of these families of true martingales actually produces variants of Fourier transforms of the transition probability measures from which weak solutions to (1) can be obtained.
The general idea and, to a degree, also the analysis is similar for all root configurations, but the properties of the solutions as well as the details of the analysis vary significantly. For instance, for the case of two real roots with initial values located between the roots, the transition measure of a transformed process can be expressed as a weighted average of two moving Gaussian measures, while for the case of two complex roots, the transition kernel can be expressed in terms of classical Jacobi theta functions and related to reflected Brownian motion.
We believe that the treatment presented here, which is more oriented towards classical analysis, has some merits. For one, our approach seems conceptually comparatively straightforward and yields fairly simple explicit expressions for the transition densities in the Markov process picture of QNV models. These transition densities are all expressed using elementary functions (with the exception of theta functions in one case, which are nevertheless well known and have efficient numerical implementations in common software), which makes them useful for analytical as well as numerical computations of expectations for quite large classes of functions of paths. This contrasts with some results in the literature where expectations of specific functions are expressed in terms of convergent function series, which generally converge quite slowly (see [7]).
Compared to the results in [8], where expectations in QNV models are expressed in terms of martingale changes of measure that depend on the terminal value of stopped Brownian motion, a theoretical novelty here lies in the observation that expectations can also be expressed in terms of martingale measure changes of reflected Brownian motion. These expressions for expectations of the form $\mathbb{E}(f(X_t))$ lead to similarly general expressions for path-dependent functionals as those in [8] (see Section 6) and also to efficient Monte Carlo numerical methods for evaluating functionals where an analytical expression may not be available. Perhaps not too surprisingly, under close inspection it turns out that the specific measure change processes obtained in each case happen to be basically identical to those in [8], with the main difference being, as mentioned, that here the Brownian motion is reflected on the boundary as opposed to stopped. A surprising additional feature of several of these measure changes is that the martingale that performs the measure change is not strictly positive, in contrast to the usual Girsanov-type measure changes.
Thus, we believe these results complement those in [8] nicely and contribute to better understanding of QNV models, especially since they are obtained from a largely independent perspective.
Interestingly, these same measure changes were also identified, in a somewhat heuristic fashion, in [9] from yet another point of view, namely Doob's h-transform (although certain assertions made in [9] are incorrect).
A minor novel result is also the observation that in the imaginary root configuration the solution to (1) has a simple limit distribution, which seems to have gone unnoticed in the literature. Although this appears counterintuitive at first (indeed, there are claims in the literature that this diffusion even explodes in finite time), and it is not at all obvious even from the results in [8], this fact becomes immediately evident here.

3. Preliminaries

We are seeking closed form expressions for weak solutions to Equation (1) with
$\sigma(x) = (ax+b)^2 + c$
The nature of the solution $X_t$ clearly depends on the sign of $c$ and, in the case of two real roots, also on the position of the initial value $X_0 = x$.
By linearly rescaling $X_t$ and $t$, it is straightforward to see that any Equation (1) with a quadratic volatility term $\sigma$ can be reduced to one of the following four cases (a sketch of this reduction, for the imaginary-roots case, is given right after the list).
  • Two real roots
    $dX_t = (1 - X_t^2)\,dW_t$
    with two choices for the initial condition $X_0 = x$:
    $X_0 = x \in (-1,1)$, or
    $X_0 = x \in (1,\infty)$
    The case $x < -1$ is essentially the same as $x > 1$ by the symmetry $X \mapsto -X$ and $W \mapsto -W$.
  • Imaginary roots
    $dX_t = (1 + X_t^2)\,dW_t, \qquad X_0 = x \in \mathbb{R}$
    In many ways, this is the example with the most interesting analysis.
  • Double roots
    $dX_t = X_t^2\,dW_t, \qquad X_0 = x \in (0,\infty)$
    Again, the case $x < 0$ is the same by symmetry.
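To make the reduction concrete, here is a brief sketch (not spelled out in the text) for the case $c > 0$, i.e., two complex roots; the real-root and double-root cases are handled analogously.

```latex
% Reduction sketch, assuming c > 0:
\[
  dX_t = \bigl((aX_t+b)^2 + c\bigr)\,dW_t, \qquad U_t := \frac{aX_t+b}{\sqrt{c}}
  \;\Longrightarrow\; dU_t = \frac{a}{\sqrt{c}}\,dX_t = a\sqrt{c}\,\bigl(1+U_t^2\bigr)\,dW_t .
\]
\[
  Z_s := U_{s/(a^2 c)}, \qquad \widetilde{W}_s := a\sqrt{c}\,W_{s/(a^2 c)}
  \;\Longrightarrow\; dZ_s = \bigl(1+Z_s^2\bigr)\,d\widetilde{W}_s ,
\]
% where W~ is again a standard Brownian motion (its quadratic variation is s),
% so Z solves the canonical imaginary-roots equation above.
```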
Our reference for properties of the general Equation (1) is [10], specifically Section 5.5. These results are of fundamental importance in the following since they not only guarantee the existence and uniqueness of the solutions to our equations, but also determine their precise domains and asymptotic properties.
Theorem 5.4 in [10] shows that a driftless SDE has a non-exploding weak solution for any initial distribution, at least for continuous functions $\sigma(x)$. Using Theorem 5.7 in [10], we can also directly establish uniqueness in law for the solutions to all our Equations (3), (6) and (7). Furthermore, Theorem 5.9 in [10] gives sufficient conditions for pathwise uniqueness and strong uniqueness. These conditions can also be verified to hold for all our equations.
In the following, we will therefore rely on the fact that continuous global non-exploding unique weak and strong solutions exist for all the equations we will be dealing with.
Instead of dealing with Equations (3), (6) and (7) directly, we will actually prefer to work with transformed versions of these equations.
By the Itô formula, we have for $\varphi \in C^2$
$d\varphi(X_t) = \sigma(X_t)\varphi'(X_t)\,dW_t + \tfrac{1}{2}\sigma^2(X_t)\varphi''(X_t)\,dt$
The transformations we have in mind are those that reduce the local martingale term above to Brownian motion (at the expense of introducing drift). To be explicit, we will be working with equations for Y t = g ( X t ) where
$g(x) = \displaystyle\int \frac{1}{\sigma(x)}\,dx$
We will refer to such a function as a natural rescaling of X t , although it should be noted that in standard terminology X t would be regarded as the natural scale of the process Y t since X t is driftless.
Of course, the fact that the processes (3), (6) and (7) do not explode as processes on $\mathbb{R}$ does not mean they could not explode when viewed as processes on subintervals $I \subset \mathbb{R}$. For instance, it is not immediately obvious that the solution to (7) does not reach the value 0 in finite time.
In order to determine the asymptotic behaviour of our processes X t and Y t , we apply Feller’s test for explosions as described in [10], Section 5.5.C.
By carrying out the calculations, case by case, required for the application of Proposition 5.22 and Theorem 5.29 in [10], we can verify that
  • The solution to (3) with initial value (4) remains on the interval $(-1,1)$ for all $t < \infty$ a.s., and has an a.s. limit $\lim_{t\to\infty} X_t = Z$, where $Z$ is a random variable with distribution $P(Z=1) = 1 - P(Z=-1) = (x+1)/2$. (The value of $P(Z=1)$ also follows directly from the fact that $X_t$ is a bounded martingale, so that $\mathbb{E}(Z) = \mathbb{E}(X_0) = x$.)
  • The solution to (3) with initial value (5) remains on the interval $(1,\infty)$ for all $t < \infty$ a.s. and has an a.s. limit $\lim_{t\to\infty} X_t = 1$.
  • The solution to (6) remains on $(-\infty,\infty)$, i.e., $X_t$ has no explosions (which we already know).
  • The solution to (7) remains on the interval $(0,\infty)$ for all $t < \infty$ a.s. and has an a.s. limit $\lim_{t\to\infty} X_t = 0$.
Another thing worth mentioning is that we will be referring to the differential operator implicit in the bounded variation term of (8)
$L_X = \tfrac{1}{2}\sigma^2(x)\,\dfrac{\partial^2}{\partial x^2}$
as the ‘generator’ of $X_t$ and likewise for the operators found in the equations for $Y_t$. However, we will not be interpreting $L_X$ as an infinitesimal generator of a diffusion defined by
$L_X\varphi = \displaystyle\lim_{t\to 0}\dot P_t\varphi$
for a Markov semigroup P t unless or until we have a specific semigroup P t in mind. We will be viewing such ‘generators’ primarily as linear operators acting on C 2 functions.

4. Eigenfunction Analysis and Rescaling

A known property of our equations is that their solutions are local martingales, but not martingales (except for the bounded interval case (3) with initial value (4)), as can be verified using the results from [11]. This clearly complicates taking expectations of the solutions or functions of the solutions.
As mentioned, here, the general idea is to use eigenfunctions of the generators associated to our SDEs to construct sufficiently large families of bounded local martingales (and hence true martingales), which then almost directly produce Fourier transforms, or Fourier series, of the transition densities of the natural rescalings of our processes simply by the defining property of martingales.
The results of this eigenfunction analysis are listed below. The eigenfunctions can be found using standard techniques for solving second order ordinary differential equations (our reference is [12]), but the fact that the listed functions are indeed eigenfunctions can of course be verified directly.
For each of our equations for X t , we list the generators L X from (10) together with their eigenfunctions and the natural rescalings of the equations for the convenient choice of primitive function in (9).
Since we are imposing no boundary conditions, we obtain a two-dimensional eigenspace of $L_X$ with eigenvalue $\lambda(w) = \lambda(-w)$ for each $w \in \mathbb{C}$ (see (11), (20) and (25)). The forms of these eigenfunctions in each case are chosen so that we obtain a real-valued eigenfunction for $w \in \mathbb{R}$.
For the rescaled processes Y t we also list their generators and eigenfunctions for the corresponding eigenvalues, which are the same as for the generators of X t .
An important thing to note are the supports of the random variables $X_t$ and $Y_t$ (for $t < \infty$). For $X_t$, the fact that our equations have no explosions in finite time with regard to the supports listed below was verified using Feller's test in the previous section. For $Y_t$, the support for each case can be obtained by considering the image of the specific scale mapping (9), or independently using Feller's test.
  • For Equation (3) with initial value (4), we have
generator of $X$: $\quad L_X\varphi = \tfrac{1}{2}(1-x^2)^2\varphi''$
support of $X_t$: $\quad (-1,1)$
eigenvalues: $\quad \lambda(w) = \tfrac{w^2-1}{2}$
eigenspace of $L_X$: $\quad \psi_X(x,w) = \sqrt{1-x^2}\,e^{\pm w\,\operatorname{atanh}(x)}$
natural rescaling: $\quad g(x) = \tfrac{1}{2}\log\tfrac{1+x}{1-x} = \operatorname{atanh}(x), \qquad Y_t = \operatorname{atanh}(X_t), \qquad X_t = \tanh(Y_t)$
equation for $Y$: $\quad dY_t = dW_t + \tanh(Y_t)\,dt$
support of $Y_t$: $\quad (-\infty,\infty)$
generator of $Y$: $\quad L_Y\varphi = \tfrac{1}{2}\varphi'' + \tanh(y)\varphi'$
eigenspace of $L_Y$: $\quad \psi_Y(y,w) = \dfrac{e^{\pm wy}}{\cosh(y)}$
  • For Equation (3) and initial value (5), we have
generator of $X$: $\quad L_X\varphi = \tfrac{1}{2}(1-x^2)^2\varphi''$
support of $X_t$: $\quad (1,\infty)$
eigenvalues: $\quad \lambda(w) = \tfrac{w^2-1}{2}$
eigenspace of $L_X$: $\quad \psi_X(x,w) = \sqrt{x^2-1}\,e^{\pm w\,\operatorname{acotanh}(x)}$
natural rescaling: $\quad g(x) = \tfrac{1}{2}\log\tfrac{x+1}{x-1} = \operatorname{acotanh}(x), \qquad Y_t = \operatorname{acotanh}(X_t), \qquad X_t = \operatorname{cotanh}(Y_t)$
equation for $Y$: $\quad dY_t = dW_t + \operatorname{cotanh}(Y_t)\,dt$
support of $Y_t$: $\quad (0,\infty)$
generator of $Y$: $\quad L_Y\varphi = \tfrac{1}{2}\varphi'' + \operatorname{cotanh}(y)\varphi'$
eigenspace of $L_Y$: $\quad \psi_Y(y,w) = \dfrac{e^{\pm wy}}{\sinh(y)}$
  • For Equation (6), we have
generator of $X$: $\quad L_X\varphi = \tfrac{1}{2}(1+x^2)^2\varphi''$
support of $X_t$: $\quad (-\infty,\infty)$
eigenvalues: $\quad \lambda(w) = \tfrac{1+w^2}{2}$
eigenspace of $L_X$: $\quad \psi_X(x,w) = \sqrt{1+x^2}\,e^{\pm w\arctan(x)}$
natural rescaling: $\quad g(x) = \arctan(x), \qquad Y_t = \arctan(X_t), \qquad X_t = \tan(Y_t)$
equation for $Y$: $\quad dY_t = dW_t - \tan(Y_t)\,dt$
support of $Y_t$: $\quad (-\pi/2,\pi/2)$
generator of $Y$: $\quad L_Y\varphi = \tfrac{1}{2}\varphi'' - \tan(y)\varphi'$
eigenspace of $L_Y$: $\quad \psi_Y(y,w) = \dfrac{e^{\pm wy}}{\cos(y)}$
  • For Equation (7), we have
generator of $X$: $\quad L_X\varphi = \tfrac{1}{2}x^4\varphi''$
support of $X_t$: $\quad (0,\infty)$
eigenvalues: $\quad \lambda(w) = \tfrac{w^2}{2}$
eigenspace of $L_X$: $\quad \psi_X(x,w) = x\,e^{\pm w/x}$
natural rescaling: $\quad g(x) = \tfrac{1}{x}, \qquad Y_t = \tfrac{1}{X_t}, \qquad X_t = \tfrac{1}{Y_t}$
equation for $Y$: $\quad dY_t = dW_t + \tfrac{1}{Y_t}\,dt$
support of $Y_t$: $\quad (0,\infty)$
generator of $Y$: $\quad L_Y\varphi = \tfrac{1}{2}\varphi'' + \tfrac{1}{y}\varphi'$
eigenspace of $L_Y$: $\quad \psi_Y(y,w) = \dfrac{e^{\pm wy}}{y}$
For each of our Equations (3) with (4) and (5), (6) and (7), the corresponding eigenfunctions $\psi_X$ and $\psi_Y$ listed above, and a fixed value of $w \in \mathbb{C}$, applying the Itô formula shows that
$L_t^w := e^{-\lambda(w)t}\,\psi_X(X_t,w) = e^{-\lambda(w)t}\,\psi_Y(Y_t,w)$
is (in general) a complex-valued local martingale. In general, the eigenfunctions ψ X and ψ Y listed above, defined on their respective supports for X t and Y t , are unbounded. But it is possible to find linear subspaces of functions spanned by these eigenfunctions that are bounded. The corresponding subspaces spanned by the local martingales L t w in (30) will then contain only bounded martingales. As we show, taking expectations of these martingales will yield enough information about the processes X t and Y t to be able to explicitly determine their probability measures.
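As a small aside (not part of the original paper), the eigenfunction property underlying this construction is easy to verify symbolically; the following sketch checks it for the imaginary-roots case, using the formulas listed above.

```python
# Symbolic check that psi_X(x, w) = sqrt(1 + x^2) * exp(w * arctan(x)) satisfies
# L_X psi = (1 + w^2)/2 * psi for the generator L_X = (1/2) (1 + x^2)^2 d^2/dx^2.
import sympy as sp

x, w = sp.symbols('x w')
psi = sp.sqrt(1 + x**2) * sp.exp(w * sp.atan(x))
L_psi = sp.Rational(1, 2) * (1 + x**2)**2 * sp.diff(psi, x, 2)
print(sp.simplify(L_psi - (1 + w**2) / 2 * psi))   # should print 0
```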
At this point, our analysis of the equations diverges, as we need to consider each case separately.

5. Weak Solutions

5.1. Two Real Roots, Bounded Interval ( 1 , 1 )

The first case we examine is the simplest, Equation (3) with initial value (4). This is also the only case where the solution is a true martingale. For illustration, a few sample paths are simulated in Figure 1.
For the functions $\psi_Y$ from (15), we notice that for $\operatorname{Re}(w) = 0$ and $y \in \mathbb{R}$ we have
$|\psi_Y(y,w)| = \left|\dfrac{e^{wy}}{\cosh(y)}\right| \le 1$
and the same goes for $|\psi_X(x,w)|$ for $x \in [-1,1]$. Hence, the local martingales from (30)
$L_t^{ui} = e^{-\lambda(ui)t}\,\psi_Y(Y_t,ui) = e^{\frac{1+u^2}{2}t}\,\dfrac{e^{uiY_t}}{\cosh(Y_t)}$
are bounded martingales for all $u \in \mathbb{R}$. By taking expectations, we then establish that
$\mathbb{E}\!\left(\dfrac{e^{uiY_t}}{\cosh(Y_t)}\,\Big|\,Y_0 = x\right) = e^{-\frac{1+u^2}{2}t}\,\dfrac{e^{uix}}{\cosh(x)}$
or
$\displaystyle\int_{-\infty}^{\infty}\dfrac{e^{uiy}}{\cosh(y)}\,P(Y_t \in dy\,|\,Y_0 = x) = e^{-\frac{1+u^2}{2}t}\,\dfrac{e^{uix}}{\cosh(x)}$
which means we have found the Fourier transform of the expression under the integral sign. Without too much work we can invert the right-hand side of (31). To arrive at the exact form stated below requires an amount of algebraic manipulation, but the details are omitted since they are all elementary.
Proposition 1.
Let $Y_t$ be the solution to Equation (13) with initial condition $Y_0 = x \in \mathbb{R}$. Then,
$P(Y_t \in dy\,|\,Y_0 = x) = \dfrac{1}{\sqrt{2\pi t}}\,\dfrac{\cosh(y)\,e^{-t/2}}{\cosh(x)}\,e^{-\frac{(y-x)^2}{2t}}\,dy$
An interesting way of rewriting (32) is
$P(Y_t \in dy\,|\,Y_0 = x) = \dfrac{1}{\sqrt{2\pi t}}\left(\dfrac{1}{1+e^{-2x}}\,e^{-\frac{(y-x-t)^2}{2t}} + \dfrac{1}{1+e^{2x}}\,e^{-\frac{(y-x+t)^2}{2t}}\right)dy$
This shows the probability measure of $Y_t$ is actually a weighted average of diffusing Gaussian measures moving in opposite directions with speed 1, and also produces a neat expression for the weak solution to (13).
Corollary 1.
Let $Y_t$ be the solution to (13) and let $\tilde W_t$ be a Brownian motion with $\tilde W_0 = x$. For Wiener integrable functions $f$, we have
$\mathbb{E}(f(Y_t)\,|\,Y_0 = x) = \dfrac{1}{1+e^{-2x}}\,\mathbb{E}(f(\tilde W_t + t)) + \dfrac{1}{1+e^{2x}}\,\mathbb{E}(f(\tilde W_t - t))$
Another way of interpreting (32) is as a measure change of Brownian motion.
Corollary 2.
Let $Y_t$ and $\tilde W_t$ be as before. Define the martingale
$M_t = \dfrac{\cosh(\tilde W_t)\,e^{-t/2}}{\cosh(x)}$
with $M_0 = 1$. For Wiener integrable functions $f$, we have
$\mathbb{E}(f(Y_t)\,|\,Y_0 = x) = \mathbb{E}(M_t\,f(\tilde W_t))$
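As a quick numerical illustration (not part of the original paper), the following sketch estimates $\mathbb{E}(f(Y_t)\,|\,Y_0=x)$ from a single Gaussian draw per sample, once via the mixture form of Corollary 1 and once via the measure change of Corollary 2; the function $f$ and the parameters are illustrative choices.

```python
# One-step Monte Carlo for the bounded two-root case, two equivalent ways.
import numpy as np

rng = np.random.default_rng(0)
x, t, n = 0.3, 0.6, 10**6
f = np.tanh                                   # any Wiener integrable f will do

W = x + np.sqrt(t) * rng.standard_normal(n)   # samples of W~_t with W~_0 = x

# Corollary 1: weighted average of two drifting Gaussians
p = 1.0 / (1.0 + np.exp(-2 * x))
est_mixture = p * f(W + t).mean() + (1 - p) * f(W - t).mean()

# Corollary 2: weight each sample by the martingale M_t
M = np.cosh(W) * np.exp(-t / 2) / np.cosh(x)
est_weighted = (M * f(W)).mean()

print(est_mixture, est_weighted)   # the two estimates agree up to Monte Carlo error
```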

5.2. Two Real Roots, Half-Line ( 1 , )

Here, we turn to the two-root case (3) with initial value to the right of the roots. It is well known that the solution is a supermartingale. Some simulations are shown in Figure 2.
An important difference compared to the finite interval case is that the functions (19) have singularities at 0 for any $w \in \mathbb{C}$. But this is only a minor difficulty because there are plenty of linear combinations of functions from (19) that are bounded and continuous on $[0,\infty)$. For $u \in \mathbb{R}$, we define
$\phi_Y(y,u) = \dfrac{1}{2i}\big(\psi_Y(y,ui) - \psi_Y(y,-ui)\big) = \dfrac{\sin(uy)}{\sinh(y)}$
and confirm that $\phi_Y(y,u)$ has a limit when $y \to 0$ for any $u \in \mathbb{R}$ and is still an eigenfunction of (18) with $\lambda(ui) = \lambda(-ui) = -\frac{1+u^2}{2}$. The Itô formula then shows that
$M_t^u = e^{-\lambda(ui)t}\,\phi_Y(Y_t,u) = e^{\frac{1+u^2}{2}t}\,\dfrac{\sin(uY_t)}{\sinh(Y_t)}$
are bounded martingales for all $u \in \mathbb{R}$, which allows us to take expectations at any time $t < \infty$. Since we know the distribution of $Y_t$ is supported on $(0,\infty)$, this yields the transition densities via the sine Fourier transform.
Proposition 2.
Let $Y_t$ be the solution to (17) with $Y_0 = x \in (0,\infty)$. Then,
$P(Y_t \in dy\,|\,Y_0 = x) = \dfrac{1}{\sqrt{2\pi t}}\,\dfrac{\sinh(y)\,e^{-t/2}}{\sinh(x)}\left(e^{-\frac{(y-x)^2}{2t}} - e^{-\frac{(y+x)^2}{2t}}\right)dy$
for $y \ge 0$ (and $= 0$ for $y < 0$).
Proof. 
Taking expectations conditional on $Y_0 = x$ of the bounded martingale (34), we obtain
$\mathbb{E}\!\left(\dfrac{\sin(uY_t)}{\sinh(Y_t)}\,\Big|\,Y_0 = x\right) = e^{-\frac{1+u^2}{2}t}\,\dfrac{\sin(ux)}{\sinh(x)}$
or
$\displaystyle\int_0^{\infty}\dfrac{\sin(uy)}{\sinh(y)}\,P(Y_t \in dy\,|\,Y_0 = x) = e^{-\frac{1+u^2}{2}t}\,\dfrac{\sin(ux)}{\sinh(x)}$
Computing the inverse sine Fourier transform of the right-hand side (as a function of u), we obtain the expression (35) for the density of Y t conditional on Y 0 = x . □
By elementary algebraic manipulation of (35), it is also possible to express the weak solution to (17) in terms of Brownian motion with drift reflected at the origin.
Corollary 3.
Let $Y_t$ be the solution to (17) and $\tilde W_t$ be a Brownian motion with $\tilde W_0 = x > 0$. For Wiener integrable functions $f$ on $(0,\infty)$, we have
$\mathbb{E}(f(Y_t)\,|\,Y_0 = x) = \dfrac{1}{1-e^{-2x}}\,\mathbb{E}(f(|\tilde W_t + t|)) - \dfrac{1}{e^{2x}-1}\,\mathbb{E}(f(|\tilde W_t - t|))$
Proof. 
We denote the weights above by $u_1(x) = 1/(1-e^{-2x})$ and $u_2(x) = 1/(e^{2x}-1)$ (note, by the way, that $u_1(x) - u_2(x) = 1$). With a little reworking of (35), we can write
$\mathbb{E}(f(Y_t)\,|\,Y_0 = x) = \dfrac{1}{\sqrt{2\pi t}}\left(u_1(x)\displaystyle\int_0^{\infty} f(y)\,e^{-\frac{(y-x-t)^2}{2t}}dy - u_2(x)\int_0^{\infty} f(y)\,e^{-\frac{(y-x+t)^2}{2t}}dy + u_1(x)\int_0^{\infty} f(y)\,e^{-\frac{(y+x+t)^2}{2t}}dy - u_2(x)\int_0^{\infty} f(y)\,e^{-\frac{(y+x-t)^2}{2t}}dy\right)$
Clearly, we need $f$ to be defined only on $(0,\infty)$. But we can extend the domain of $f$ to $(-\infty,\infty)$ by defining $f(-y) = f(y)$ and substitute $y \mapsto -y$ in the third and fourth integrals above to rewrite the expression as
$= \dfrac{1}{\sqrt{2\pi t}}\left(u_1(x)\displaystyle\int_{-\infty}^{\infty} f(y)\,e^{-\frac{(y-x-t)^2}{2t}}dy - u_2(x)\int_{-\infty}^{\infty} f(y)\,e^{-\frac{(y-x+t)^2}{2t}}dy\right)$
which is exactly what one obtains while computing the right-hand side of (36). □
Yet another way of interpreting the weak solution to (17) given by (35) is as a martingale measure change of Brownian motion reflected at the origin, albeit a rather odd kind of measure change.
Corollary 4.
Let $Y_t$ and $\tilde W_t$ be as above. Define the martingale
$M_t = \dfrac{\sinh(\tilde W_t)\,e^{-t/2}}{\sinh(x)}$
with $M_0 = 1$. Then, for Wiener integrable functions $f$ on $(0,\infty)$, we have
$\mathbb{E}(f(Y_t)\,|\,Y_0 = x) = \mathbb{E}(M_t\,f(|\tilde W_t|))$
Proof. 
Similarly to the proof of the previous Corollary, we write out the expression for $\mathbb{E}(f(Y_t)\,|\,Y_0 = x)$ according to (35), define $f(-y) = f(y)$, substitute $y \mapsto -y$ in the second integral and remember that $\sinh(-y) = -\sinh(y)$ to find that the result is the same as computing the right-hand side of (37). □
The ‘measure change’ expressed in (37) is of course strange considering the martingale $M_t$ attains negative values. Other examples of curious ‘negative measure changes’ akin to (37) will be found in the next subsections. We attempt a heuristic explanation of such phenomena in Section 6.
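As a small numerical illustration (not part of the original paper), the following sketch checks the signed ‘measure change’ of Corollary 4 against the mixture representation of Corollary 3; the function $f$ and the parameters are illustrative choices.

```python
# One-step Monte Carlo for the half-line two-root case, two equivalent ways.
import numpy as np

rng = np.random.default_rng(1)
x, t, n = 0.8, 0.5, 10**6
f = lambda y: np.exp(-y)                      # a bounded function on (0, infinity)

W = x + np.sqrt(t) * rng.standard_normal(n)   # samples of W~_t with W~_0 = x

# Corollary 4: signed martingale weight applied to reflected Brownian motion
M = np.sinh(W) * np.exp(-t / 2) / np.sinh(x)
est_signed = (M * f(np.abs(W))).mean()

# Corollary 3: weighted difference of reflected drifting Brownian motions
u1, u2 = 1 / (1 - np.exp(-2 * x)), 1 / (np.exp(2 * x) - 1)
est_mixture = u1 * f(np.abs(W + t)).mean() - u2 * f(np.abs(W - t)).mean()

print(est_signed, est_mixture)   # the two estimates agree up to Monte Carlo error
```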

5.3. Imaginary Roots

The most mathematically interesting case is (6). For illustration, some sample path simulations are shown in Figure 3.
Here, we find only countably many linear combinations of the eigenfunctions in (24) that are bounded on $(-\frac{\pi}{2},\frac{\pi}{2})$, but this will suffice.
For each $n \in \mathbb{N}$, we can define
$\phi_{2n-1}(x) = \dfrac{1}{2}\big(\psi_Y(x,(2n-1)i) + \psi_Y(x,-(2n-1)i)\big) = \dfrac{\cos((2n-1)x)}{\cos(x)}, \qquad \phi_{2n}(x) = \dfrac{1}{2i}\big(\psi_Y(x,2ni) - \psi_Y(x,-2ni)\big) = \dfrac{\sin(2nx)}{\cos(x)}$
and confirm these functions have well defined limits when $x \to \pm\frac{\pi}{2}$ and can thus be considered $C([-\frac{\pi}{2},\frac{\pi}{2}],\mathbb{R})$ functions. Note also that since $\lambda(w) = \lambda(-w)$, the functions (38) remain eigenfunctions of $L_Y$.
Clearly, these functions are related to the usual Fourier basis. In particular, the odd functions ϕ 2 n 1 ( x ) are a rescaled variant of the Dirichlet kernel. They are not quite the same as the usual basis though, and it seems necessary to prove that (38) actually provide a worthwhile basis of functions. This is probably most easily accomplished by appealing to the Stone–Weierstrass theorem.
Proposition 3.
The linear hull of $\{\phi_k(x)\}_{k=1}^{\infty}$ from (38) is dense in $C([-\frac{\pi}{2},\frac{\pi}{2}],\mathbb{R})$ with regard to the topology of uniform convergence.
Proof. 
Using basic trigonometric identities one finds that
$\cos(2nx) = \dfrac{1}{2}\big(\phi_{2n-1}(x) + \phi_{2n+1}(x)\big), \qquad \sin((2n+1)x) = \dfrac{1}{2}\big(\phi_{2n}(x) + \phi_{2n+2}(x)\big)$
The usual Fourier basis on $[-\frac{\pi}{2},\frac{\pi}{2}]$ consists of $\cos(2nx)$ and $\sin(2nx)$, and the difference here is that the sine functions are shifted. Clearly then, (38) and (39) span the same space (in the sense that any finite linear combination of the functions in (38) can be expressed as a finite linear combination of (39) and vice versa), so it suffices to prove the claim for (39).
According to the Stone–Weierstrass theorem, we need to see that the functions (39) form a unital algebra (for pointwise products) that separates points.
The former can be verified using basic trigonometric identities. Indeed, one can see this algebra is generated by $\{1, \sin(x)\}$.
The function $\sin(x)$ is a one-to-one map on $[-\frac{\pi}{2},\frac{\pi}{2}]$, so it separates any distinct pair $x, y \in [-\frac{\pi}{2},\frac{\pi}{2}]$, including the edge points. So, in contrast to the usual Fourier basis, there is even no need to identify the endpoints $-\frac{\pi}{2}$ and $\frac{\pi}{2}$. □
We now view (23) as an operator acting on the $C^2([-\frac{\pi}{2},\frac{\pi}{2}],\mathbb{R})$ subspace of an $L^2_\rho([-\frac{\pi}{2},\frac{\pi}{2}],\mathbb{R})$ space for an appropriate weighted inner product
$\langle f, g\rangle_\rho = \displaystyle\int_{-\pi/2}^{\pi/2} f(x)g(x)\rho(x)\,dx$
The weight $\rho$ by which (23) becomes a symmetric operator (and hence with orthogonal eigenspaces) on the $L^2_\rho$ space is determined by
$\rho(x) = A\,e^{-2\int \tan(x)\,dx} = A\cos^2(x)$
for any non-zero constant $A$ (see [12]). We choose the constant $A$ to make $\rho$ a probability density, which means
$\rho(x) = \dfrac{2}{\pi}\cos^2(x)$
With the choice (40) for the weight, the eigenfunctions (38) define an orthonormal basis for a Hilbert space $H_\rho \subseteq L^2_\rho([-\frac{\pi}{2},\frac{\pi}{2}],\mathbb{R})$, meaning no additional normalizing constants are required.
Note that the Hilbert space $H_\rho$ spanned by (38) contains at least $C([-\frac{\pi}{2},\frac{\pi}{2}],\mathbb{R})$ by Proposition 3, since uniform convergence implies convergence in the $L^2_\rho$ norm.
Since indicator functions of Borel subsets of $[-\frac{\pi}{2},\frac{\pi}{2}]$ can be approximated by $C([-\frac{\pi}{2},\frac{\pi}{2}],\mathbb{R})$ functions, and $L^2_\rho([-\frac{\pi}{2},\frac{\pi}{2}],\mathbb{R})$ functions can be approximated by linear combinations of indicators, we can assert that we actually have equality of Hilbert spaces
$H_\rho = L^2_\rho\big([-\tfrac{\pi}{2},\tfrac{\pi}{2}],\mathbb{R}\big)$
The point of using the eigenfunctions (38) is of course that the local martingales
$L_t^k = e^{\frac{k^2-1}{2}t}\,\phi_k(Y_t)$
are bounded martingales and we can thus take expectations to obtain
$\mathbb{E}(\phi_k(Y_t)\,|\,Y_0 = x) = e^{\frac{1-k^2}{2}t}\,\phi_k(x)$
This means the operator $T_t$ defined by
$T_t f(x) = \mathbb{E}(f(Y_t)\,|\,Y_0 = x)$
acts on the $H_\rho$ space spanned by (38) as a contraction along the diagonal:
$T_t\phi_k = e^{\frac{1-k^2}{2}t}\,\phi_k$
This enables us to compute the expectation E ( f ( Y t ) | Y 0 = x ) for any H ρ function and also to express the conditional probability density of Y t .
In the following, we use auxiliary theta functions as defined in [13].
$\vartheta_3(u,q) = \displaystyle\sum_{n=-\infty}^{\infty} q^{n^2}\exp(2niu) = 1 + 2\sum_{n=1}^{\infty} q^{n^2}\cos(2nu), \qquad \vartheta_4(u,q) = \sum_{n=-\infty}^{\infty} (-1)^n q^{n^2}\exp(2niu) = 1 + 2\sum_{n=1}^{\infty} (-1)^n q^{n^2}\cos(2nu)$
Proposition 4.
Let
$f \in H_\rho = L^2_\rho\big([-\tfrac{\pi}{2},\tfrac{\pi}{2}],\mathbb{R}\big)$
where the weight $\rho$ is defined by (40), and let $Y_t$ be the solution to (22) with $Y_0 = x \in (-\frac{\pi}{2},\frac{\pi}{2})$. Then, we have
$\mathbb{E}(f(Y_t)\,|\,Y_0 = x) = \displaystyle\int_{-\pi/2}^{\pi/2} f(y)\rho(y)\sum_{k=1}^{\infty} e^{\frac{1-k^2}{2}t}\phi_k(x)\phi_k(y)\,dy = \int_{-\pi/2}^{\pi/2} f(y)\rho(y)\left(1 + \sum_{k=2}^{\infty} e^{\frac{1-k^2}{2}t}\phi_k(x)\phi_k(y)\right)dy$
where the functions $\phi_k$ are defined by (38).
The Markov kernel above can be expressed as
$P(Y_t \in dy\,|\,Y_0 = x) = \dfrac{1}{2\pi}\,\dfrac{\cos(y)\,e^{t/2}}{\cos(x)}\left(\vartheta_3\!\left(\dfrac{y-x}{2},e^{-t/2}\right) - \vartheta_4\!\left(\dfrac{y+x}{2},e^{-t/2}\right)\right)dy$
for $y \in [-\frac{\pi}{2},\frac{\pi}{2}]$.
Proof. 
By assumption, we have
$f = \displaystyle\sum_{k=1}^{\infty} \langle f,\phi_k\rangle_\rho\,\phi_k$
Applying the operator (42) and noting (43), which is justified by the fact that (41) are bounded martingales, we have
$\mathbb{E}(f(Y_t)\,|\,Y_0 = x) = T_t\left(\displaystyle\sum_{k=1}^{\infty} \langle f,\phi_k\rangle_\rho\,\phi_k\right)(x) = \sum_{k=1}^{\infty} e^{\frac{1-k^2}{2}t}\,\langle f,\phi_k\rangle_\rho\,\phi_k(x) = \sum_{k=1}^{\infty} e^{\frac{1-k^2}{2}t}\left(\int_{-\pi/2}^{\pi/2} f(y)\phi_k(y)\rho(y)\,dy\right)\phi_k(x)$
We can interchange the sum and integral due to $L^2$ convergence, which gives (45). Obtaining (46) is just a matter of manipulating the expression for the integration kernel. Here are the details:
$\rho(y)\displaystyle\sum_{k=1}^{\infty} e^{\frac{1-k^2}{2}t}\phi_k(x)\phi_k(y) = \dfrac{2}{\pi}\cos^2(y)\left(\sum_{n=1}^{\infty} e^{\frac{1-(2n-1)^2}{2}t}\,\dfrac{\cos((2n-1)x)}{\cos(x)}\,\dfrac{\cos((2n-1)y)}{\cos(y)} + \sum_{n=1}^{\infty} e^{\frac{1-(2n)^2}{2}t}\,\dfrac{\sin(2nx)}{\cos(x)}\,\dfrac{\sin(2ny)}{\cos(y)}\right)$
$= \dfrac{1}{\pi}\,\dfrac{\cos(y)\,e^{t/2}}{\cos(x)}\left(\sum_{n=1}^{\infty} e^{-\frac{(2n-1)^2}{2}t}\big(\cos((2n-1)(y-x)) + \cos((2n-1)(y+x))\big) + \sum_{n=1}^{\infty} e^{-\frac{(2n)^2}{2}t}\big(\cos(2n(y-x)) - \cos(2n(y+x))\big)\right)$
$= \dfrac{1}{\pi}\,\dfrac{\cos(y)\,e^{t/2}}{\cos(x)}\sum_{k=1}^{\infty} e^{-\frac{k^2}{2}t}\big(\cos(k(y-x)) - (-1)^k\cos(k(y+x))\big) = \dfrac{1}{2\pi}\,\dfrac{\cos(y)\,e^{t/2}}{\cos(x)}\left(\vartheta_3\!\left(\dfrac{y-x}{2},e^{-t/2}\right) - \vartheta_4\!\left(\dfrac{y+x}{2},e^{-t/2}\right)\right)$ □
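As a quick numerical sanity check (not part of the original paper), the theta-function kernel above can be evaluated directly; the following sketch uses mpmath's Jacobi theta functions (which follow the same convention as the series above) and verifies that the density integrates to one. The values of $x$ and $t$ are illustrative.

```python
# Evaluate the transition density of Proposition 4 and check it integrates to 1.
from mpmath import jtheta, exp, cos, quad, pi, mpf

def density(y, x, t):
    q = exp(-t / 2)                               # nome of the theta functions
    pref = cos(y) * exp(t / 2) / (2 * pi * cos(x))
    return pref * (jtheta(3, (y - x) / 2, q) - jtheta(4, (y + x) / 2, q))

x, t = mpf("0.4"), mpf("0.7")
print(quad(lambda y: density(y, x, t), [-pi / 2, pi / 2]))   # close to 1
```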
Of course, wherever there is one expression involving theta functions, there are bound to be many more. We use the Poisson summation formula to derive only one such representation, which can then be related directly to reflected Brownian motion.
Proposition 5.
Let $\tilde W_t$ be a Brownian motion with $\tilde W_0 = x$ and $Y_t$ the solution to (22) with $Y_0 = x$, where $x \in (-\frac{\pi}{2},\frac{\pi}{2})$.
Denote the functions
$m(y,x,t) = \dfrac{\cos(y)\,e^{t/2}}{\cos(x)}$
and
$r(y) = \displaystyle\sum_{k=-\infty}^{\infty}\Big((y-2k\pi)\cdot 1_{\{[-\pi/2,\pi/2]+2k\pi\}}(y) + ((2k+1)\pi - y)\cdot 1_{\{[\pi/2,3\pi/2]+2k\pi\}}(y)\Big) = \arcsin(\sin(y))$
This implies
$M_t = m(\tilde W_t, x, t)$
is a martingale with $M_0 = 1$ and that
$R_t = r(\tilde W_t)$
represents Brownian motion reflected at the edges of $[-\pi/2,\pi/2]$.
For $f \in L^2_\rho([-\frac{\pi}{2},\frac{\pi}{2}],\mathbb{R})$ we have
$\mathbb{E}(f(Y_t)\,|\,Y_0 = x) = \mathbb{E}(M_t\,f(R_t))$
Proof. 
We proceed from the expression for the integration kernel (47) to write E ( f ( Y t ) | Y 0 = x ) as
$= \dfrac{1}{\pi}\displaystyle\int_{-\pi/2}^{\pi/2} f(y)\,m(y,x,t)\sum_{k=1}^{\infty} e^{-\frac{k^2}{2}t}\big(\cos(k(y-x)) - (-1)^k\cos(k(x+y))\big)dy = \dfrac{1}{2\pi}\int_{-\pi/2}^{\pi/2} f(y)\,m(y,x,t)\sum_{k=-\infty}^{\infty} e^{-\frac{k^2}{2}t}\big(\exp(k(y-x)i) - (-1)^k\exp(k(x+y)i)\big)dy$
since the term for $k = 0$ in the sum above equals zero. We apply Poisson summation to each of the two sums:
$\displaystyle\sum_{k=-\infty}^{\infty} g(k) = \sum_{k=-\infty}^{\infty} e^{-\frac{k^2 t}{2}}\exp(k(y-x)i), \qquad \sum_{k=-\infty}^{\infty} h(k) = \sum_{k=-\infty}^{\infty} e^{-\frac{k^2 t}{2}}(-1)^k\exp(k(y+x)i) = \sum_{k=-\infty}^{\infty} e^{-\frac{k^2 t}{2}}\exp(k(y+x+\pi)i)$
Computing the Fourier transforms, we obtain
$\hat g(w) = \displaystyle\int_{-\infty}^{\infty} g(u)\,e^{-2\pi i w u}\,du = \sqrt{\dfrac{2\pi}{t}}\exp\!\left(-\dfrac{(y-x-2\pi w)^2}{2t}\right)$
and similarly
$\hat h(w) = \sqrt{\dfrac{2\pi}{t}}\exp\!\left(-\dfrac{(y+x+\pi-2\pi w)^2}{2t}\right)$
Then (49) can be expressed as
$= \dfrac{1}{\sqrt{2\pi t}}\displaystyle\int_{-\pi/2}^{\pi/2} f(y)\,m(y,x,t)\sum_{k=-\infty}^{\infty}\left(\exp\!\left(-\dfrac{(x-y+2\pi k)^2}{2t}\right) - \exp\!\left(-\dfrac{(x+y+\pi+2k\pi)^2}{2t}\right)\right)dy$
Noting that $m(y+\pi,x,t) = -m(y,x,t)$, we can write
$= \dfrac{1}{\sqrt{2\pi t}}\displaystyle\sum_{k=-\infty}^{\infty}\left(\int_{-\pi/2}^{\pi/2} f(y)\,m(y,x,t)\,e^{-\frac{(y-x+2\pi k)^2}{2t}}dy - \int_{-\pi/2}^{\pi/2} f(y)\,m(y,x,t)\,e^{-\frac{(y-x-\pi+2k\pi)^2}{2t}}dy\right)$
$= \dfrac{1}{\sqrt{2\pi t}}\displaystyle\sum_{k=-\infty}^{\infty}\left(\int_{-\pi/2}^{\pi/2} f(y)\,m(y,x,t)\,e^{-\frac{(y-x+2\pi k)^2}{2t}}dy + \int_{\pi/2}^{3\pi/2} f(y-\pi)\,m(y,x,t)\,e^{-\frac{(y-x+2k\pi)^2}{2t}}dy\right)$
by substituting $y+\pi \mapsto y$ in the second integral. We expand the domain of $f$ to a periodic function with period $\pi$, note that $m(y,x,t)$ is even with period $2\pi$ and substitute $y \mapsto -y$ in the second integral above to obtain
$= \dfrac{1}{\sqrt{2\pi t}}\displaystyle\sum_{k=-\infty}^{\infty}\left(\int_{-\pi/2}^{\pi/2} f(y)\,m(y,x,t)\,e^{-\frac{(y-x+2\pi k)^2}{2t}}dy + \int_{\pi/2}^{3\pi/2} f(y)\,m(y,x,t)\,e^{-\frac{(y-x+2k\pi)^2}{2t}}dy\right)$
$= \dfrac{1}{\sqrt{2\pi t}}\left(\displaystyle\sum_{k}\int_{\{[-\pi/2,\pi/2]+2k\pi\}} f(y)\,m(y,x,t)\,e^{-\frac{(y-x)^2}{2t}}dy + \sum_{k}\int_{\{[\pi/2,3\pi/2]+2k\pi\}} f(y)\,m(y,x,t)\,e^{-\frac{(y-x)^2}{2t}}dy\right)$
Specializing to $f(y) = 1_A(y)$ for a Borel subset $A \subseteq [-\frac{\pi}{2},\frac{\pi}{2}]$ (so that we have $1_A(y) = 1_A(-y)$), we see that the last expression is what one could write in order to compute
$\mathbb{E}\big(1_A(R_t)\,m(\tilde W_t,x,t)\,\big|\,\tilde W_0 = x\big)$
with $R_t$ being the Brownian motion $\tilde W_t$ reflected at $\pm\pi/2$. □
From (46) (or more obviously from (45)) we can see that our process Y t even has an elementary limit distribution, at least in a weak sense.
Corollary 5.
For $f$ and $Y_t$ as before, we have
$\displaystyle\lim_{t\to\infty}\mathbb{E}(f(Y_t)\,|\,Y_0 = x) = \int_{-\pi/2}^{\pi/2} f(y)\rho(y)\,dy = \dfrac{2}{\pi}\int_{-\pi/2}^{\pi/2} f(y)\cos^2(y)\,dy$
This implies the limit distribution is independent of the initial value $Y_0 = x$.
The results for $Y_t$ from (22) of course carry over to the solutions of our original Equation (6) via the map $X_t = \tan(Y_t)$ (recalling (21)). In particular, it follows that $X_t$ also has a limit distribution.
Corollary 6.
The solution $X_t$ to Equation (6) has a limit distribution in the sense that for $f$ such that $f \circ \tan \in L^2_\rho([-\frac{\pi}{2},\frac{\pi}{2}],\mathbb{R})$,
$\displaystyle\lim_{t\to\infty}\mathbb{E}(f(X_t)\,|\,X_0 = x) = \dfrac{2}{\pi}\int_{-\infty}^{\infty}\dfrac{f(y)}{(1+y^2)^2}\,dy$
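As a rough numerical illustration (not part of the original paper), the following sketch estimates $\mathbb{E}(f(Y_t)\,|\,Y_0=x)$ via the reflected-Brownian-motion representation of Proposition 5 and, for a moderately large $t$, compares it with the limit value from Corollary 5. The function $f$ and the parameters are illustrative.

```python
# Signed measure change of reflected Brownian motion for the imaginary-roots case.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(2)
x, t, n = 0.4, 4.0, 10**6
f = lambda y: np.sin(y)**2

W = x + np.sqrt(t) * rng.standard_normal(n)    # W~_t with W~_0 = x
R = np.arcsin(np.sin(W))                       # Brownian motion reflected at +-pi/2
M = np.cos(W) * np.exp(t / 2) / np.cos(x)      # signed martingale weight
estimate = (M * f(R)).mean()

limit, _ = quad(lambda y: (2 / np.pi) * f(y) * np.cos(y)**2, -np.pi / 2, np.pi / 2)
print(estimate, limit)    # for large t the estimate approaches the limit value
```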

5.4. Double Roots

The remaining case is the double roots Equation (7). Like the case (3) with initial value to the right of the roots, it is well-known that the solution is a supermartingale. Some sample path simulations are shown in Figure 4.
The functions (29) are not bounded when $y \to 0$ (which corresponds to $x \to \infty$ in (26)), but we can form linear combinations that are bounded:
$\phi_Y(y,u) = \dfrac{1}{2i}\big(\psi_Y(y,ui) - \psi_Y(y,-ui)\big) = \dfrac{\sin(uy)}{y}$
We then have that
$M_t^u = e^{-\lambda(ui)t}\,\phi_Y(Y_t,u) = e^{\frac{u^2}{2}t}\,\dfrac{\sin(uY_t)}{Y_t}$
is a bounded martingale. Taking expectations produces the sine Fourier transform of the transition density.
Proposition 6.
Let $Y_t$ be the solution to (28) with initial condition $Y_0 = x > 0$. Then,
$P(Y_t \in dy\,|\,Y_0 = x) = \dfrac{1}{\sqrt{2\pi t}}\,\dfrac{y}{x}\left(e^{-\frac{(y-x)^2}{2t}} - e^{-\frac{(y+x)^2}{2t}}\right)dy$
for $y \ge 0$.
Proof.
Since (50) is a martingale, we can take expectations at any fixed time $t < \infty$ conditional on $Y_0 = x > 0$ to obtain
$\mathbb{E}\!\left(\dfrac{\sin(uY_t)}{Y_t}\,\Big|\,Y_0 = x\right) = e^{-\frac{u^2}{2}t}\,\dfrac{\sin(ux)}{x}$
Considering the distribution of $Y_t$ is supported on $(0,\infty)$, this can be rewritten as
$\displaystyle\int_0^{\infty}\dfrac{\sin(uy)}{y}\,P(Y_t \in dy\,|\,Y_0 = x) = e^{-\frac{u^2}{2}t}\,\dfrac{\sin(ux)}{x}$
By computing the inverse sine Fourier transform of the right-hand side and after some elementary manipulations, we can express the transition density in the form (51). □
Again, a little reworking of the expression for E ( f ( Y t ) ) using (51) yields another instance of a ‘negative’ measure change.
Corollary 7.
Let $Y_t$ and $Y_0 = x > 0$ be as above. Let $\tilde W_t$ be a Brownian motion with $\tilde W_0 = x$ and define
$M_t = \dfrac{\tilde W_t}{x}$
which is clearly a martingale with $M_0 = 1$. Then, for Wiener integrable functions $f$ on $(0,\infty)$, we have
$\mathbb{E}(f(Y_t)\,|\,Y_0 = x) = \mathbb{E}(M_t\,f(|\tilde W_t|))$
Proof. 
Using (51), we can write
$\mathbb{E}(f(Y_t)\,|\,Y_0 = x) = \dfrac{1}{\sqrt{2\pi t}}\left(\displaystyle\int_0^{\infty} f(y)\,\dfrac{y}{x}\,e^{-\frac{(y-x)^2}{2t}}dy - \int_0^{\infty} f(y)\,\dfrac{y}{x}\,e^{-\frac{(y+x)^2}{2t}}dy\right)$
By extending the definition of $f$ to negative values by $f(-y) = f(y)$ and substituting $y \mapsto -y$ in the second integral, we can rewrite this as
$\mathbb{E}(f(Y_t)\,|\,Y_0 = x) = \dfrac{1}{\sqrt{2\pi t}}\displaystyle\int_{-\infty}^{\infty} f(y)\,\dfrac{y}{x}\,e^{-\frac{(y-x)^2}{2t}}dy$
which is the same as computing the right-hand side of (52). □
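As a small numerical illustration (not part of the original paper), the following sketch checks Corollary 7 against direct integration of the density in Proposition 6; the function $f$ and the parameters are illustrative.

```python
# One-step Monte Carlo with the signed weight W~_t / x versus direct integration.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(3)
x, t, n = 0.5, 1.0, 10**6
f = lambda y: np.exp(-y**2)

W = x + np.sqrt(t) * rng.standard_normal(n)      # W~_t with W~_0 = x
est_mc = ((W / x) * f(np.abs(W))).mean()         # Corollary 7

density = lambda y: (y / (x * np.sqrt(2 * np.pi * t))) * (
    np.exp(-((y - x) ** 2) / (2 * t)) - np.exp(-((y + x) ** 2) / (2 * t)))
est_int, _ = quad(lambda y: f(y) * density(y), 0, np.inf)   # Proposition 6

print(est_mc, est_int)   # the two values agree up to Monte Carlo error
```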

6. Summary of Results and Some Remarks

All the expressions for $Y_t$ obviously carry over to $X_t$. We summarize the results obtained from Formulas (33), (37), (48) and (52) by the substitutions (12), (16), (21) and (27). Explicit expressions for the transition kernels can be found by substitution in the integrals.
  • For the solution to (3) with initial value (4) and a function $f$ on $(-1,1)$ such that $f \circ \tanh$ is Wiener integrable, we have
    $\mathbb{E}(f(X_t)\,|\,X_0 = x) = \mathbb{E}\big(M_t\,f(\tanh(\tilde W_t))\,\big|\,\tilde W_0 = \operatorname{atanh}(x)\big)$
    where
    $M_t = \dfrac{\cosh(\tilde W_t)\,e^{-t/2}}{\cosh(\tilde W_0)}$
  • For the solution to (3) with initial value (5) and a function $f$ on $(1,\infty)$ such that $f \circ \operatorname{cotanh}$ is Wiener integrable, we have
    $\mathbb{E}(f(X_t)\,|\,X_0 = x) = \mathbb{E}\big(M_t\,f(\operatorname{cotanh}(|\tilde W_t|))\,\big|\,\tilde W_0 = \operatorname{acotanh}(x)\big)$
    where
    $M_t = \dfrac{\sinh(\tilde W_t)\,e^{-t/2}}{\sinh(\tilde W_0)}$
  • For the solution to (6) and $f$ on $\mathbb{R}$ such that $f \circ \tan \in L^2_\rho([-\frac{\pi}{2},\frac{\pi}{2}],\mathbb{R})$ (with $\rho$ defined by (40)), we have
    $\mathbb{E}(f(X_t)\,|\,X_0 = x) = \mathbb{E}\big(M_t\,f(\tan(R_t))\,\big|\,\tilde W_0 = \arctan(x)\big)$
    where
    $M_t = \dfrac{\cos(\tilde W_t)\,e^{t/2}}{\cos(\tilde W_0)}$
    and $R_t = \arcsin(\sin(\tilde W_t))$ is Brownian motion reflected at the edges of $[-\pi/2,\pi/2]$.
  • For the solution to (7) and a function $f$ on $(0,\infty)$ such that $x f(1/x)$ is Wiener integrable, we have
    $\mathbb{E}(f(X_t)\,|\,X_0 = x) = \mathbb{E}\big(M_t\,f(1/|\tilde W_t|)\,\big|\,\tilde W_0 = 1/x\big)$
    where
    $M_t = \dfrac{\tilde W_t}{\tilde W_0}$
Perhaps the most interesting aspect of the computed weak solutions is their interpretation as martingale measure changes, expressed in (37), (48) and (52).
The martingales $M_t$ from each of these formulas all have the property $M_0 = 1$, and the results certainly look like usual martingale measure changes of a reflected Brownian motion $R_t$ (reflected at the origin in the cases (37) and (52), or at $\pm\frac{\pi}{2}$ in the case of (48)). But from a probabilistic viewpoint, this should be nonsense, since the martingales $M_t$ take negative values with positive probability.
However, while the reflected Brownian motion $R_t$ and the associated martingale $M_t$ are both $\sigma(\tilde W_t)$-measurable, $M_t$ is not $\sigma(R_t)$-measurable: for fixed $t$ and $\omega \in \Omega$, the weight $M_t(\omega)$ assigned to the value $R_t(\omega)$ can be either positive or negative depending on the trajectory of $\tilde W_t(\omega)$. It all still works out because the negative weights carry less probability than the positive weights, even for a fixed value of $R_t$.
So, although (37), (52) and (48) cannot be regarded as changes of measure in the usual sense, they suggest it might be possible to broaden the concept of a change of measure to include the possibility of weights attaining negative values (or even complex values) without compromising fundamental probabilistic properties of the resulting semigroup, as long as the weights are not simply functions of the process undergoing the measure change.
In our specific cases, the expressions for these measure changes were obtained by elementary means via simple algebraic manipulations of the transition kernels, as can be carried out to obtain special cases of the Girsanov transform for example, but it might be interesting to see if one could develop a more general framework of possibly negative martingale measure changes.
The fact that functionals of the type $\mathbb{E}(f(X_t)\,|\,X_0 = x)$ can be expressed using these measure changes also implies that expressions for more general functionals of paths can be obtained in the same way and for the same reasons as for familiar Girsanov-type measure changes. For instance, for a functional of the path $\{X_s\}_{s\in[0,t]}$ of the type
$\displaystyle\int_0^t g(X_s)\,ds$
where $g$ is a Riemann integrable function, one can approximate this integral with a finite sum and express the expectation of this sum as
$\mathbb{E}\Big(\displaystyle\sum_i g(X_{t_i})\Delta t_i\Big) = \sum_i \mathbb{E}(g(X_{t_i}))\Delta t_i = \sum_i \mathbb{E}(M_{t_i}\,g(\tilde W_{t_i}))\Delta t_i = \sum_i \mathbb{E}\big(\mathbb{E}(M_t\,|\,\mathcal{F}_{t_i})\,g(\tilde W_{t_i})\big)\Delta t_i = \sum_i \mathbb{E}(M_t\,g(\tilde W_{t_i}))\Delta t_i = \mathbb{E}\Big(M_t\sum_i g(\tilde W_{t_i})\Delta t_i\Big)$
which implies $\mathbb{E}\big(\int_0^t g(X_s)\,ds\big) = \mathbb{E}\big(M_t\int_0^t g(\tilde W_s)\,ds\big)$ when these expectations exist. The point is that, in order to write expectations for functionals of the paths $\{X_s\}_{s\in[0,t]}$ using a measure change, the assumption that $M_t$ is non-negative is not needed; only that $M_t$ is a martingale is required.
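A rough numerical sketch (not part of the original paper) of this path-functional identity, for the double-root case: the time integral of $g(1/|\tilde W_s|)$ along a simulated Brownian path is weighted by the terminal martingale value $M_t = \tilde W_t/\tilde W_0$. The function $g$ and the parameters are illustrative.

```python
# E( int_0^t g(X_s) ds ) for dX = X^2 dW, estimated via weighted Brownian paths.
import numpy as np

rng = np.random.default_rng(4)
x, t, n_paths, n_steps = 2.0, 1.0, 20_000, 200
dt = t / n_steps
g = lambda z: 1.0 / (1.0 + z**2)            # bounded g, so the expectations exist

W0 = 1.0 / x
dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
W = W0 + np.cumsum(dW, axis=1)              # W~ on the time grid (excluding time 0)

M_t = W[:, -1] / W0                         # terminal martingale weight
path_integrals = g(1.0 / np.abs(W)).sum(axis=1) * dt
print((M_t * path_integrals).mean())        # estimate of E( int_0^t g(X_s) ds )
```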
It seems worth mentioning that a heuristic derivation of the ‘measure changes’ (37), (52) and (48), or at least something consistent with these formulas, is also described in [9] and uses Doob's h-transform. We give only a brief description.
With the semigroup of Brownian motion P t we associate the generator
$L = \displaystyle\lim_{t\to 0}\dfrac{d}{dt}P_t = \dfrac{1}{2}\dfrac{\partial^2}{\partial x^2}$
as usual. Take $h$ to be an eigenfunction of $L$
$Lh = \lambda h$
and define a transform of $P_t$ by
$P_t^h f(x) = \dfrac{e^{-\lambda t}}{h(x)}\,\mathbb{E}\big(h(W_t)f(W_t)\,\big|\,W_0 = x\big)$
We can then explicitly compute the expression for the transformed generator for a given eigenfunction $h$
$L^h f = \displaystyle\lim_{t\to 0}\dfrac{d}{dt}P_t^h f = \lim_{t\to 0}\dfrac{d}{dt}\left(\dfrac{e^{-\lambda t}}{h}\,P_t(hf)\right)$
For instance, the choice $h(x) = \cosh(x)$ yields
$L^h = \dfrac{1}{2}\dfrac{\partial^2}{\partial x^2} + \tanh(x)\dfrac{\partial}{\partial x}$
which is the generator for $Y_t$ from (14). This gives a link between the generator of Brownian motion $L$ and (14), from which we can deduce the expression for the weak solution to (13) as the measure change (33).
On the other hand, the choice $h(x) = \cos(x)$ gives
$L^h = \dfrac{1}{2}\dfrac{\partial^2}{\partial x^2} - \tan(x)\dfrac{\partial}{\partial x}$
which is the generator for $Y_t$ from (23). Again, this gives us a direct link between the generator of Brownian motion and the generator (23) via the ‘change of measure’
$\dfrac{dQ}{dP} = M_t = \dfrac{\cos(W_t)\,e^{t/2}}{\cos(x)}$
One objection of principle can be raised here, since $M_t$ takes negative values, which is strange for a probability measure change (although, as mentioned, this can actually be meaningful). A more practical problem in this case is that although we obtain the correct generator $L^h$ associated with (23), without knowing its domain this does not uniquely determine the process it generates. Namely, the generator of Brownian motion $L$ also happens to be the generator of reflected Brownian motion (and also of $W_t \bmod 1$, for instance). Equation (48) reveals that (23) is actually related to reflected Brownian motion. But when working with transformations only on the level of generators, it is not necessarily immediately clear what the underlying Markov semigroups are.
Another point that becomes apparent due to Proposition 4 is that the SDE (6) actually has a limit distribution (Corollary 6). This can even be explained intuitively if we recall that any local martingale $X_t$ can be expressed as a time change of some Brownian motion $\tilde W_t$, i.e., $X_t = \tilde W_{\tau_t}$, where the time change process $\tau_t$ is related to the quadratic variation $\langle X\rangle_t$. In our case, the time increment can be expressed as $d\tau_t = d\langle X\rangle_t = (1+X_t^2)^2\,dt$. This means $X_t = \tilde W_{\tau_t}$ moves at an accelerated rate the further it is from 0. But since Brownian motion is recurrent, this means it will also return to 0 more quickly and will thus spend more time in its vicinity, where it moves more slowly. The fact that it has a limit distribution (unlike standard Brownian motion) is simply due to the fact that it spends much of the time near 0.

7. Results of Numerical Simulations

In Figure 5, Figure 6, Figure 7 and Figure 8, we present some graphs, which are the results of numerical simulations for each of our four SDEs (two graphs for each of the four equations).
The blue lines in the graphs show the theoretical probability densities of the variable X t at the specified times t and initial values X 0 , which are computed according to the formulas in Section 5.
The histograms show the results of numerical simulations of sample paths of $X_t$ taken at the same specified times and initial values (examples of such simulated sample paths are shown in the illustrations in Section 5). Each histogram is the result of $n = 10^4$ simulations. These histograms match the theoretical values to around the expected 1% error for a sample size of $n = 10^4$. This clearly demonstrates the accuracy of our results.
The red dots represent the Monte-Carlo evaluation of E ( f ( X t ) ) for a range of indicator functions f ( x ) = 1 [ x i , x i + 1 ] ( x ) of given small intervals [ x i , x i + 1 ] using the Formulas (53), (54), (55) and (56) from Section 6. For each red dot in each of the graphs, we were able to take a sample size of n = 10 8 , since instead of simulating the entire sample path X t for each simulation (as is the case with the histograms), it is enough to choose just one normal random number that simulates the value of W ˜ t , compute M t f ( W ˜ t ) according to the Formulas (53)–(56), and take the average of these values to approximate the expected value E ( M t f ( W ˜ t ) ) . All the dots agree with the theoretical values up to an error, which is expected to be 0.01 % for a sample size of n = 10 8 . This additionally confirms the correctness of the results and also demonstrates the efficiency of the proposed Monte Carlo schemes for evaluation of E ( f ( X t ) ) without the need to simulate the trajectories of X t .
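To make the comparison concrete, here is a rough sketch (not part of the original paper, and not the code used for the figures) contrasting a full-path Euler–Maruyama estimate of $\mathbb{E}(f(X_t))$ with the one-step weighted estimate of Formula (53) for the bounded two-root case; $f$, $X_0$ and $t$ are illustrative choices.

```python
# Full-path simulation versus one-step weighted Monte Carlo for dX = (1 - X^2) dW.
import numpy as np

rng = np.random.default_rng(5)
x0, t, f = 0.3, 0.6, lambda x: x**2
n_paths, n_steps = 100_000, 500
dt = t / n_steps

# Euler-Maruyama paths (values clipped to stay inside (-1, 1))
X = np.full(n_paths, x0)
for _ in range(n_steps):
    X += (1 - X**2) * np.sqrt(dt) * rng.standard_normal(n_paths)
    X = np.clip(X, -1 + 1e-12, 1 - 1e-12)
est_path = f(X).mean()

# One-step weighted estimate (Formula (53)): a single Gaussian draw per sample
W0 = np.arctanh(x0)
W = W0 + np.sqrt(t) * rng.standard_normal(10**7)
M = np.cosh(W) * np.exp(-t / 2) / np.cosh(W0)
est_onestep = (M * f(np.tanh(W))).mean()

print(est_path, est_onestep)   # the two estimates agree up to Monte Carlo error
```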

8. Conclusions

We have shown that the described method provides a successful and rigorous way of treating QNV models that apparently also clarifies some confusing aspects and results in the existing literature.
From a theoretical viewpoint, this ’spectral method’, i.e., the blending of Itô calculus, classic ODE eigensystem analysis and Fourier analysis to solve an SDE, is of course far from completely original. Indeed, Fourier and related transforms are of fundamental importance throughout probability in general, including Itô calculus. These connections, however, do not appear to have been used much in this specific way for SDEs, most likely due to the fact that there are other ways of treating SDEs, which are often based on more general and sophisticated theory. And although a more sophisticated theory clearly may offer advantages, it may also be an advantage if it is possible to keep things simple. The more traditional martingale theory treatment avoids technical and practical issues of convergence and integrability sometimes present in the treatment of SDEs by PDE or Markov process methods, by using localisation techniques. Here, we avoid the pitfalls at a basic level by exploiting the direct link between SDEs and their ’generators’ (without referring to Markov semigroups) provided by the Itô formula, and then by choosing an expedient complete eigensystem of bounded functions.
This works well for QNV models due to their special analytic properties, which are discussed in the literature (for instance in [9]), and is not as widely applicable as the mentioned traditional and general martingale theory methods. But this method should be applicable to at least Sturm–Liouville-type SDEs, which include a wide range of classic diffusions (like Bessel, hypergeometric, Jacobi diffusions and so on). Such classic SDEs have of course been widely studied by various means and a ’spectral method’ may not necessarily produce significant new results, but we believe it can be valuable, particularly in instances where the strict local martingale or semimartingale nature of the SDEs must be accounted for and can produce exact quantitative results where applicable, as in the case of QNV models.
Another theoretical aspect that may be worth exploring is the question of whether a general framework for non-strictly positive martingale measure changes, which spontaneously appeared in our treatment, can be developed, as touched upon in Section 6.
From a more application-oriented viewpoint, we have shown that the derived expectation formulas provide efficient numerical and Monte Carlo methods for evaluating expectations, which can be applied, for instance, to numerically efficient pricing of arbitrary derivatives based on QNV-modelled assets. These results can be useful in other fields where a stochastic process of interest can be modelled by, or transformed to, a QNV-type SDE. We believe that QNV models and related SDEs may find more applications, in particular if the fact that these SDEs are essentially solved were more widely known outside mathematical finance.
We also think that similarly efficient exact and numerical ways of computing quantities associated to other SDEs could be provided by such a method where it is applicable.

Funding

This research received no external funding.

Data Availability Statement

No new data were generated during the study.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Blacher, G. A new approach for designing and calibrating stochastic volatility models for optimal delta-vega hedging of exotics. In Proceedings of the ICBI Global Derivatives, Conference Presentation, Juan-Les-Pins, France, 11 May 2001.
  2. Ingersoll, J. Valuing foreign exchange rate derivatives with a bounded exchange process. Rev. Deriv. Res. 1997, 1, 159–181.
  3. Zuhlsdorff, C. Extended LIBOR Market Models with Affine and Quadratic Volatility; Working Paper; University of Bonn: Bonn, Germany, 2002. Available online: http://xxx.lanl.gov/abs/ftp://web.bgse.uni-bonn.de/pub/RePEc/bon/bonedp/bgse6_2002.pdf (accessed on 19 June 2016).
  4. Albanese, C.; Campolieti, G.; Carr, P.P.; Lipton, A. Black-Scholes goes hypergeometric. RISK Mag. 2001, 14, 99–103.
  5. Lipton, A. The vol smile problem. RISK Mag. 2002, 15, 61–65.
  6. Zuhlsdorff, C. The pricing of derivatives on assets with quadratic volatility. Appl. Math. Financ. 2001, 8, 235–262.
  7. Andersen, L. Option pricing with quadratic volatility: A revisit. Financ. Stochastics 2011, 15, 191–219.
  8. Carr, P.; Fisher, T.; Ruf, J. Why Are Quadratic Normal Volatility Models Analytically Tractable? SIAM J. Financ. Math. 2013, 4, 185–202.
  9. Albanese, C.; Kuznetsov, A. Transformations of Markov processes and classification scheme for solvable driftless diffusions. arXiv 2007, arXiv:0710.1596.
  10. Karatzas, I.; Shreve, S.E. Brownian Motion and Stochastic Calculus; Springer: Berlin/Heidelberg, Germany, 1988.
  11. Kotani, S. On a condition that one-dimensional diffusion processes are martingales. In Memoriam Paul-André Meyer: Séminaire de Probabilités XXXIX, Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2006; Volume 1874, pp. 149–156.
  12. Zakrajšek, E. Analiza III; Društvo Matematikov, Fizikov in Astronomov Slovenije: Ljubljana, Slovenia, 1998.
  13. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions; Dover Publications, Inc.: New York, NY, USA, 1972.
Figure 1. Five simulated sample paths of Equation (2) with $X_0 = 0.4$. Like any bounded martingale, $X_t$ has an a.s. limit $X_\infty$, which in this case is a Bernoulli variable with $P(X_\infty = 1) = 0.4$. The transition probability density of the natural rescaling $Y_t$ of $X_t$ is given in Proposition 1.
Figure 2. A few simulated sample paths of Equation (3) with initial value (5), $X_0 = 5$. For this case the solution $X_t$ is a supermartingale bounded from below by 1. Somewhat like geometric Brownian motion, the process has an a.s. limit $\lim_{t\to\infty} X_t = 1$. The transition density of the natural rescaling $Y_t$ is computed in Proposition 2.
Figure 3. A few simulated sample paths of Equation (6) with $X_0 = 0$. The solution $X_t$ is a strict local martingale which moves ‘fast’ when away from 0 and ‘slower’ when near 0. The transition density of the natural rescaling $Y_t$ is given in Proposition 4. Somewhat counterintuitively, it turns out that $X_t$ has a limit distribution, which is given in Corollary 6. A few remarks that explain why this actually makes sense are given in Section 6.
Figure 4. Some simulated sample paths of Equation (7) with $X_0 = 2$. Like the case (3) with (5), the solution $X_t$ is a supermartingale bounded from below. Also similarly, the process has an a.s. limit $\lim_{t\to\infty} X_t = 0$, although it is less in a hurry to reach the limit. The transition density of the natural rescaling $Y_t$ is computed in Proposition 6.
Figure 5. The results for Equation (3) and (4) with initial value X 0 = 0.3 for t = 0.2 and t = 0.6.
Figure 6. The results for Equation (3) and (5) with initial value X 0 = 1.6 for t = 0.01 and t = 0.4.
Figure 7. The results for Equation (6) with initial value X 0 = 2 for t = 0.02 and t = 0.7.
Figure 8. The results for Equation (7) with initial value X 0 = 2 for t = 0.01 and t = 1.