Article

Monte Carlo Methods and the Koksma-Hlawka Inequality

Sergey Ermakov and Svetlana Leora *
The Faculty of Mathematics and Mechanics, St. Petersburg State University, 199034 St. Petersburg, Russia
* Author to whom correspondence should be addressed.
Submission received: 30 June 2019 / Revised: 5 August 2019 / Accepted: 7 August 2019 / Published: 9 August 2019
(This article belongs to the Special Issue Stochastic Processes: Theory and Applications)

Abstract

The solution of a wide class of applied problems can be represented as an integral over the trajectories of a random process. The process is usually modeled with the Monte Carlo method, and the integral is estimated as the average value of a certain function on the trajectories of this process. Solving such a problem with acceptable accuracy usually requires modeling a very large number of trajectories; therefore, the development of methods to improve the accuracy of such algorithms is extremely important. The paper discusses modifications of the Monte Carlo method that use some classical results of the theory of cubature formulas (quasi-random methods). A new approach to the derivation of the well-known Koksma-Hlawka inequality is pointed out. It is shown that for high dimensions of the integral (s > 5), an asymptotic decrease of the error comparable to the asymptotic behavior of the Monte Carlo method can be achieved only for a very large number of nodes N. It is shown that a special criterion (the average order of the error decrease) can serve as a correct characteristic of the error decrease. Using this criterion, it is possible to analyze the error for reasonable values of N and to compare various quasi-random sequences. Several numerical examples are given. The obtained results make it possible to formulate recommendations on the correct use of quasi-random numbers when calculating integrals over the trajectories of random processes.

1. Introduction

Let $\xi(t,\omega)$ be a random process, $t_1 < t_2 < \cdots < t_n$ given time moments, and $F_n(t_1, x_1, \ldots, t_n, x_n) = \mathbf{P}(\xi(t_1,\omega) \le x_1; \ldots; \xi(t_n,\omega) \le x_n)$ the finite-dimensional distributions of the process. The Monte Carlo method of modeling a process $\xi(t,\omega)$ usually consists of modeling its finite-dimensional distributions. An extensive literature is devoted to algorithms for such modeling (see, for example, [1] and the sources cited there). The methods developed in these works allow us to express realizations of the random vector $\Xi = (\xi(t_1,\omega), \ldots, \xi(t_n,\omega))$ through a certain set of realizations $(\alpha_1, \ldots, \alpha_M)$ of a random variable uniformly distributed on $[0,1]$, where $M \ge n$ and, generally speaking, $M$ can be random.
Ultimately, the problem solution $\Phi$ is represented as the expectation of a certain function $\Psi$ of a random vector $\Xi$ for selected finite values of $n$ and $M$. That is, $\Phi = \mathbf{E}\,\Psi(\Xi)$, and the computational process consists in calculating $N$ independent realizations $\Xi_j$ of the vector $\Xi$ and estimating $\Phi$ by the arithmetic mean $\mathbf{E}\,\Psi(\Xi) \approx \frac{1}{N}\sum_{j=1}^{N} \Psi(\Xi_j)$. Note that the value $M = M_j$ may depend on the number $j$ of the realization. The maximum value $\bar{M} = \max_j M_j$ is called the constructive dimension of the algorithm. Finally, recall that $\Xi_j$ is expressed in terms of realizations of uniformly distributed numbers, i.e., $\Phi$ is an integral over the $\bar{M}$-dimensional unit hypercube of the following form
$\Phi = \mathbf{E}\,\Psi(\Xi(\alpha_1, \ldots, \alpha_{\bar{M}})) = \int_0^1 d\alpha_1 \cdots \int_0^1 \Psi(\Xi(\alpha_1, \ldots, \alpha_{\bar{M}}))\, d\alpha_{\bar{M}}.$
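To make the construction concrete, the following sketch (Python with NumPy and SciPy assumed; the Wiener process and the particular functional $\Psi$ are chosen purely for illustration and are not taken from the paper) builds each trajectory from uniform numbers and averages $\Psi$ over $N$ independent trajectories.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def psi_on_path(uniforms, t):
    """Build one trajectory of a Wiener process on the grid t from uniform
    numbers (via the inverse normal CDF) and evaluate a path functional
    Psi -- here the time average of xi(t)^2."""
    dt = np.diff(t, prepend=0.0)
    increments = norm.ppf(uniforms) * np.sqrt(dt)  # Gaussian increments
    path = np.cumsum(increments)                   # xi(t_1), ..., xi(t_n)
    return np.mean(path ** 2)

n, N = 64, 10_000                   # time points per path, number of paths
t = np.linspace(1.0 / n, 1.0, n)    # t_1 < t_2 < ... < t_n

# Here the constructive dimension is M = n uniforms per trajectory.
estimates = [psi_on_path(rng.random(n), t) for _ in range(N)]
print("MC estimate of E Psi(Xi):", np.mean(estimates))  # should be near 0.5
```

For this hypothetical example the exact expectation is approximately $\int_0^1 t\,dt = 1/2$, which gives a simple check of the estimate.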
Further, it is convenient to change notation and speak about calculating the integral $J = \int_{D_s} f(X)\,dX$ over the $s$-dimensional unit hypercube, where $X = (x_1, \ldots, x_s)$ and $D_s = \{X : 0 \le x_i \le 1;\ i = 1, \ldots, s\}$. The problem of calculating this integral by the Monte Carlo method is well known. If it is estimated by the sum $J \approx \frac{1}{N}\sum_{j=1}^{N} f(\alpha_1^j, \ldots, \alpha_s^j)$, where the $\alpha_i^j$ are independent realizations of a random variable uniformly distributed on $[0,1]$ and $f$ is a square-integrable function, then for the error one can construct a confidence interval of width $O(N^{-1/2})$. At the same time, a number of articles [2,3,4], based on number-theoretic considerations, pointed out sequences of $s$-dimensional vectors $Y_1, \ldots, Y_N$ for which the error $J - \frac{1}{N}\sum_{j=1}^{N} f(Y_j)$ decreases as $\ln^s N / N$ for functions that have a first partial derivative with respect to each variable. This result is obviously almost $N$ times better than that of the Monte Carlo method. Sequences $Y_1, \ldots, Y_N$ possessing the property mentioned above are called quasi-random. An extensive literature is devoted to their properties and applications (see, for example, [5] and the bibliography given there).
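The contrast between the two types of node sequences can be illustrated with a short sketch, assuming Python with NumPy and SciPy's scipy.stats.qmc module; the integrand below is a hypothetical test function with exact integral 1, not one considered in the paper.

```python
import numpy as np
from scipy.stats import qmc  # SciPy's quasi-Monte Carlo module (SciPy >= 1.7)

s, N = 5, 2 ** 13
rng = np.random.default_rng(1)

def f(X):
    # Illustrative integrand with exact integral 1 over the unit cube.
    return np.prod(2.0 * X, axis=1)

X_mc = rng.random((N, s))                          # pseudo-random nodes
X_qmc = qmc.Halton(d=s, scramble=False).random(N)  # quasi-random nodes

print("MC  error:", abs(f(X_mc).mean() - 1.0))
print("QMC error:", abs(f(X_qmc).mean() - 1.0))
```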
As a rule, the authors consider that quasi-random sequences are significantly better than the pseudo-random sequences used in the Monte Carlo method. The legitimacy of such comparisons is studied in detail below.

2. Koksma-Hlawka Inequality and Random Quadrature Formulas

One of the well-known approaches to constructing the sum
$K_N f = \sum_{j=1}^{N} A_j f(X_j),$  (1)
where the $A_j$ are constants and $X_j = (x_1^{(j)}, \ldots, x_s^{(j)}) \in D \subset \mathbf{R}^s$, which is used to calculate the integral $\int_D f(X)\,dX$, is as follows. It is assumed that $f \in F$, a linear normed space of functions; the error of the integration formula
$R_N f = \int_D f(X)\,dX - K_N f$  (2)
is considered as a functional on this space, and the parameters of formula (1), $A_j$ and $X_j$, $j = 1, \ldots, N$, are chosen so as to minimize the norm of this functional [6].
This task is usually very difficult. It suffices to note that an explicit expression for the norm of the above-mentioned functional can be obtained only in a few particular cases. One of these cases is $A_j = 1/N$, $j = 1, \ldots, N$, with $D$ the unit hypercube $D_s = \{X : 0 \le x_l \le 1,\ l = 1, \ldots, s\}$ and $F$ the space of functions of bounded variation in the Hardy-Krause sense. In this case one can use the Koksma-Hlawka inequality [7,8]
$|R_N[f]| \le V(f) \cdot D^*(X_1, \ldots, X_N),$  (3)
where $V(f)$ is the variation mentioned above, and $D^*(X_1, \ldots, X_N)$ is the norm of the error functional, which is called the discrepancy in the non-Russian literature and the deviation in the Russian literature; it is also sometimes called the star discrepancy. In the number-theoretic sense, this quantity characterizes the uniformity of the distribution of the sequence and equals
$\sup_X \left| S(X) - \frac{1}{N} A(X_1, \ldots, X_N) \right|,$  (4)
where $S(X)$ is the volume of the multidimensional box $\Delta(X) = \{Y : y_l \le x_l,\ l = 1, \ldots, s\}$, and $A(X_1, \ldots, X_N)$ is the number of points of the sequence belonging to $\Delta(X)$.
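For small $N$ and $s$, the star discrepancy can be approximated directly from this definition. The sketch below (Python with NumPy/SciPy assumed; the helper name star_discrepancy is ours) checks anchored boxes whose upper corners run over the sample coordinates and 1.

```python
import itertools
import numpy as np
from scipy.stats import qmc

def star_discrepancy(points):
    """Approximate D*(X_1, ..., X_N) by checking boxes anchored at the
    origin whose upper corners are built from the sample coordinates and 1.
    Intended for small N and s only: the cost grows combinatorially."""
    N, s = points.shape
    candidates = [np.unique(np.append(points[:, l], 1.0)) for l in range(s)]
    worst = 0.0
    for corner in itertools.product(*candidates):
        corner = np.asarray(corner)
        volume = np.prod(corner)                               # S(X)
        inside_open = np.all(points < corner, axis=1).sum()    # open box count
        inside_closed = np.all(points <= corner, axis=1).sum()
        worst = max(worst,
                    abs(volume - inside_open / N),
                    abs(volume - inside_closed / N))
    return worst

pts = qmc.Sobol(d=2, scramble=False).random(64)
print("approximate D* of 64 Sobol points in D_2:", star_discrepancy(pts))
```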
A similar explanation concerning the quantity $V(f)$ can be found in Appendix A of this article. For our purposes, it suffices to note that for $N \to \infty$ an upper bound for the asymptotic behavior of $D^*(X_1, \ldots, X_N)$ is known [9], namely
$D^*(X_1, \ldots, X_N) = O\!\left(\frac{\ln^s N}{N}\right).$  (5)
The authors of [2,3,4] give algorithms for constructing sequences $X_1, \ldots, X_N$ for which equality is achieved in (5). These sequences are called quasi-random, owing to the formal similarity of the algorithms of quasi-random numerical integration with the Monte Carlo method of calculating multiple integrals. As we have already said, the authors of numerous articles devoted to the study and use of quasi-random numbers usually note that the Monte Carlo method has an asymptotic error decrease of order $O(N^{-1/2})$, while from (5) one can conclude that quasi-random methods provide an error decrease of order $O(N^{-1})$, or more precisely $O(N^{-1+\varepsilon})$ for an arbitrarily small $\varepsilon > 0$.
The proof of Inequality (3) given in the literature is rather long and complex. As we have already noted, it can easily be obtained by means of functional analysis; for $s = 2$ we show this in Appendix A, and the general case merely requires more cumbersome definitions. It can also be noted that, using the theory of cubature formulas [1], one can obtain analogs of Inequality (3) for Sobolev classes and many other classes of functions and specify sequences for these classes with a faster order of error decrease. However, the corresponding computational algorithms are usually significantly more complicated.
Other important applications of classical computational mathematics arise when estimating the error of quasi-random methods. The construction of a confidence interval in the Monte Carlo method automatically gives an estimate of the error, whereas for the quasi-Monte Carlo method in its pure form there is only Inequality (3), which is of little use when solving a specific problem. This difficulty can be overcome by randomizing the quasi-random points.
Suppose $Y_1, \ldots, Y_N$ are quasi-random vectors of dimension $s$, and $\beta_1, \ldots, \beta_m$ are realizations of random vectors of the same dimension uniformly distributed in $D_s$. The sum $S_{m,N}(f) = \frac{1}{m}\sum_{l=1}^{m}\frac{1}{N}\sum_{k=1}^{N} f(\{Y_k + \beta_l\})$ is an unbiased estimate of the integral:
$\mathbf{E}\, S_{m,N}(f) = \int_{D_s} f(X)\,dX.$
The curly brackets denote the operation of taking the fractional part, applied to each component of the vector. The error of this randomized sum can be estimated using the central limit theorem. In addition, the following statement is easily verified:
$J \approx \frac{1}{N}\sum_{k=1}^{N} f(\{Y_k + \beta\}),$
where $\beta$ is a random vector uniformly distributed in $D_s$, defines a random quadrature formula with one free node. The theory of such formulas is presented in detail in [1]; with its help, one can establish for which class of functions the formula is exact. A number of papers consider other methods for randomizing quasi-random numbers, known as scrambling methods.
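A minimal sketch of this randomization (a random shift modulo 1), assuming Python with SciPy's qmc module; the integrand is again the illustrative test function with exact integral 1, and the confidence half-width is based on the central limit theorem applied to the $m$ shifted estimates.

```python
import numpy as np
from scipy.stats import qmc

s, N, m = 10, 2 ** 12, 16
rng = np.random.default_rng(2)

def f(X):
    # Same illustrative integrand as before: exact integral 1.
    return np.prod(2.0 * X, axis=1)

Y = qmc.Sobol(d=s, scramble=False).random(N)   # quasi-random nodes Y_1..Y_N
shifts = rng.random((m, s))                    # beta_1, ..., beta_m

per_shift = np.array([f((Y + beta) % 1.0).mean() for beta in shifts])
estimate = per_shift.mean()                     # S_{m,N}(f)
std_err = per_shift.std(ddof=1) / np.sqrt(m)    # CLT-based error estimate
print(f"RQMC estimate: {estimate:.6f} +/- {1.96 * std_err:.2e}")
```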

3. Numerical Error Estimation Experiment: Monte Carlo and Quasi-Monte Carlo

In this paper we show that references to Inequality (3) when evaluating a computational algorithm are not entirely correct, at least for $s > 5$, and we suggest some correct approaches to determining the asymptotic error decrease of quasi-random methods. It can be noted at once that in the case of the Monte Carlo (MC) method we are speaking about the width of a confidence interval, and the asymptotic behavior of the central limit theorem is already visible for small $N$ ($N > 5$, for example). The situation is different for expression (5). The factor $\ln^s N$ plays a significant role already for $s \ge 4$. The asymptotics of order $O(N^{-1+\varepsilon})$ is reached as $N \to \infty$; however, as follows from Table 1, the rates of error decrease of the two methods become approximately equal only at $N \approx 10^{12}$ for $s = 5$, at $N \approx 10^{40}$ for $s = 10$, and at $N \approx 10^{65}$ for $s = 15$.
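Thresholds of this kind are easy to examine directly. The sketch below (Python assumed) uses bisection to locate where the bound $\ln^s N / N$ actually falls below the Monte Carlo rate $N^{-1/2}$; its output can be set against the entries of Table 1.

```python
import numpy as np

def crossover_log10N(s, lo=1.0, hi=200.0):
    """Bisection for the point where ln(N)**s / N equals N**(-1/2),
    i.e. g(x) = s*ln(x*ln(10)) - x*ln(10)/2 = 0 with N = 10**x."""
    g = lambda x: s * np.log(x * np.log(10.0)) - 0.5 * x * np.log(10.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Note: this finds the exact crossing of the two rates; the text quotes the
# values of N at which they become merely comparable.
for s in (5, 10, 15):
    print(f"s = {s:2d}: ln^s N / N drops below N^(-1/2) near "
          f"N = 10^{crossover_log10N(s):.0f}")
```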
The data given in Table 1 clearly confirm that Inequality (5) is unsuitable for demonstrating the advantages of the quasi-Monte Carlo (QMC) method when calculating integrals of high multiplicity, and the situation worsens as the multiplicity of the integral increases. Many authors (for example, [10]) confirm that the actual error decrease for various values of $N$ and $s > 5$ does not obey Inequality (5), but behaves like $N^{-1+\varepsilon}$ with $0 \le \varepsilon \le 1/2$. Thus, one should speak about the quality of a particular quasi-random sequence only for reasonable values of $N$, for which the asymptotic behavior indicated by (3) has not yet set in, and a reasonable quality criterion has to be introduced from empirical considerations. First, we discuss the behavior of the integration error in some numerical examples. Let $f(X)$ be defined and integrable on the $s$-dimensional unit cube $D_s$, and let the integral $\int_{D_s} f(X)\,dX$ be calculated using the cubature formula (1).
Consider the integrals:
$I_1 = \int_{D_s} \frac{2^s e^s}{(e-1)^s} \prod_{i=1}^{s} x_i\, e^{-x_i^2}\, dX, \qquad I_2 = \int_{D_s} \left(\frac{8}{\pi}\right)^{s} \prod_{i=1}^{s} \bigl(x_i(1 - x_i)\bigr)^{1/2}\, dX,$
$I_3 = \int_{D_s} \prod_{i=1}^{s} \frac{|4x_i - 2| + i}{i + 1}\, dX, \qquad I_4 = \int_{D_s} \left| \sum_{i=1}^{s} \left(x_i - \frac{1}{2}\right)^2 - \frac{1}{4} \right|^{-1} dX.$
The exact values of the integrals $I_1$, $I_2$, $I_3$ are known; they are equal to 1 for all $s$. Table 2 shows the absolute error of the QMC method, calculated for $N = 10^6$ with the quasi-random Sobol (ErrS) and Halton (ErrH) sequences.
The calculations lead to the following conclusions. The error values for the considered sequences differ by no more than one order of magnitude. The well-known estimate $\ln^s N / N$ cannot serve as a reasonable estimate of the error in this case (see the last row of Table 2). As already noted, the practical application of quasi-Monte Carlo for large $s$ is limited. Moreover, QMC calculations of integrals whose exact value is unknown do not allow us to compare the accuracy of the various quasi-random sequences used.
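A comparison in the spirit of Table 2 can be sketched as follows (Python with SciPy's qmc module assumed). The functions below implement the integrands $I_1$–$I_3$ as written above; the sample size is deliberately smaller than the $N = 10^6$ used in the table, so the printed errors are only indicative.

```python
import numpy as np
from scipy.stats import qmc

def f1(X):
    s = X.shape[1]
    return (2.0 * np.e / (np.e - 1.0)) ** s * np.prod(X * np.exp(-X ** 2), axis=1)

def f2(X):
    s = X.shape[1]
    return (8.0 / np.pi) ** s * np.prod(np.sqrt(X * (1.0 - X)), axis=1)

def f3(X):
    i = np.arange(1, X.shape[1] + 1)
    return np.prod((np.abs(4.0 * X - 2.0) + i) / (i + 1.0), axis=1)

s, N = 10, 2 ** 16        # smaller than the N = 10^6 used in Table 2
sobol = qmc.Sobol(d=s, scramble=False).random(N)
halton = qmc.Halton(d=s, scramble=False).random(N)

for name, f in (("I1", f1), ("I2", f2), ("I3", f3)):
    print(name,
          " ErrS =", abs(f(sobol).mean() - 1.0),
          " ErrH =", abs(f(halton).mean() - 1.0))
```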

4. The Criterion of Decreasing Residual

We show that the use of a randomized quasi-Monte Carlo (RQMC) procedure allows us to estimate the error of the quasi-Monte Carlo method effectively. We consider a new criterion that allows us to judge the error decrease as a function of $N$ and provides an opportunity to compare the quality of various quasi-random integration methods.
As a quality criterion, we will use the average order of the error decrease.
Consider the interval $[N_1, N_2]$ of numbers of realizations for some randomized quasi-random quadrature formula. Denote by $R_f(N)$ the error for a given number of realizations $N \in [N_1, N_2]$, and approximate $R_f(N)$ in the least-squares sense by the function $y = a + b \cdot N^{-\alpha}$. The resulting value of $\alpha$ is called the average order of the error decrease. In the case when the exact value of the integral is unknown, we use the error estimate obtained by randomization.
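The fitting step itself is straightforward. A minimal sketch is given below (Python with SciPy assumed), applied to synthetic error data with a known decay order rather than to the RQMC errors of this paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def average_order(N_values, errors):
    """Least-squares fit of |R_f(N)| by a + b * N**(-alpha); the fitted
    alpha is the average order of the error decrease."""
    model = lambda N, a, b, alpha: a + b * N ** (-alpha)
    (a, b, alpha), _ = curve_fit(model, N_values, errors,
                                 p0=(0.0, 1.0, 0.5), maxfev=10000)
    return alpha

# Synthetic illustration: errors that decay with a known order 0.8.
rng = np.random.default_rng(3)
N = np.linspace(1_000, 100_000, 200)
errors = 5.0 * N ** (-0.8) * (1.0 + 0.02 * rng.standard_normal(N.size))
print("estimated average order:", average_order(N, errors))  # close to 0.8
```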
Let us estimate the average order of the error decrease of the numerical integration for the integrals $I_4$ and $I_1$; the exact value of $I_4$ is not known. The calculations were carried out for $n = 10^5$ (the number of nodes) and $M = 10$ (the number of randomizations) for the Sobol and Halton quasi-random sequences, so that the total number of nodes is $N = 10^6$. The value of the random error $R_f(N)$ is approximated on the following intervals: $\Delta r_1 = [99{,}700;\ 99{,}800]$, $\Delta r_2 = [99{,}800;\ 99{,}900]$, $\Delta r_3 = [99{,}900;\ 100{,}000]$. The results of the calculations are given in Table 3 and Table 4, where $\delta$ denotes the resulting average order of the error decrease.
The calculations carried out show that the estimate of $\alpha$ is relatively stable with respect to changes of $N_1$ and $N_2$.

5. Conclusions

Error analysis of numerical methods plays a major role in the choice of algorithms for solving a given problem. The main goals of this article are to point out the incorrect use of the Koksma-Hlawka inequality when comparing the asymptotic error behavior of the Monte Carlo and quasi-Monte Carlo methods for calculating integrals, and to propose a new quality criterion for algorithms that calculate integrals of high multiplicity.
This will make it possible, in particular, to choose the ratio between the number of random and quasi-random components of the nodes used in quadrature formulas when their number (the constructive dimension) is very large. The results obtained are confirmed by numerical examples and, in some cases, make it possible to judge the comparative quality of various quasi-random sequences.
The results obtained can also be useful in solving other problems (for example, optimization problems); however, this requires separate studies that are beyond the scope of this work.

Author Contributions

Conceptualization, S.E.; methodology, S.E. and S.L.; software, S.L.; validation, S.E. and S.L.; formal analysis, S.E.; investigation, S.E. and S.L.; writing—original draft preparation, S.E.; writing—review and editing, S.E. and S.L.; supervision, S.E.; project administration, S.E.

Funding

This research was supported by the Russian Foundation for Basic Research under grant number 17-01-00267-a.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

For the reader's convenience, a brief proof of the Koksma-Hlawka inequality in the two-dimensional case is given here.
Let $f(x,y)$ have integrable derivatives $\partial f(x,1)/\partial x$, $\partial f(1,y)/\partial y$, and $\partial^2 f(x,y)/\partial x\,\partial y$ in the unit square $D_2 = \{0 \le x \le 1;\ 0 \le y \le 1\}$. Then we have
$f(x,y) = f(1,1) - \int_x^1 f_x(u,1)\,du - \int_y^1 f_y(1,v)\,dv + \int_x^1\!\int_y^1 f_{xy}(u,v)\,du\,dv.$
Elementary transformations then allow us to obtain from
$R_N[f] = \int_{D_2} f(x,y)\,dx\,dy - \frac{1}{N}\sum_{j=1}^{N} f(x_j, y_j)$
the expression
$R_N[f] = \int_0^1 f_x(u,1)\left[-u + \frac{1}{N}\sum_{j=1}^{N}\Theta(x_j - u)\right]du + \int_0^1 f_y(1,v)\left[-v + \frac{1}{N}\sum_{j=1}^{N}\Theta(y_j - v)\right]dv + \int_0^1\!\int_0^1 f_{xy}(u,v)\left[uv - \frac{1}{N}\sum_{j=1}^{N}\Theta(x_j - u)\,\Theta(y_j - v)\right]du\,dv,$
where
$\Theta(z) = \begin{cases} 1, & z < 0, \\ 0, & z > 0, \\ 1/2, & z = 0. \end{cases}$
If we denote
$V(f) = \int_0^1 |f_x(u,1)|\,du + \int_0^1 |f_y(1,v)|\,dv + \int_0^1\!\int_0^1 |f_{xy}(u,v)|\,du\,dv,$
then we have
$|R_N[f]| \le V(f)\cdot\max\bigl(K_1(x_1,\ldots,x_N),\ K_1(y_1,\ldots,y_N),\ K_2(x_1,y_1;\ldots;x_N,y_N)\bigr),$
where
$K_1(x_1,\ldots,x_N) = \sup_u \left| u - \frac{1}{N}\sum_{j=1}^{N}\Theta(x_j - u) \right|,$
$K_1(y_1,\ldots,y_N) = \sup_v \left| v - \frac{1}{N}\sum_{j=1}^{N}\Theta(y_j - v) \right|,$
$K_2(x_1,y_1;\ldots;x_N,y_N) = \sup_{u,v} \left| uv - \frac{1}{N}\sum_{j=1}^{N}\Theta(x_j - u)\,\Theta(y_j - v) \right|.$
However, it is easy to see that
$K_1(x_1,\ldots,x_N) \le K_2(x_1,y_1;\ldots;x_N,y_N), \qquad K_1(y_1,\ldots,y_N) \le K_2(x_1,y_1;\ldots;x_N,y_N),$
so that $|R_N[f]| \le V(f)\cdot K_2(x_1,y_1;\ldots;x_N,y_N)$, and $K_2(x_1,y_1;\ldots;x_N,y_N)$ is exactly the discrepancy of the sequence of points.
In the general case, $D^*(X_1,\ldots,X_N)$ and $V(f)$ are defined analogously; in particular,
$D^*(X_1,\ldots,X_N) = \sup_{x_1,\ldots,x_s}\left| \prod_{l=1}^{s} x_l - \frac{1}{N} A(X_1,\ldots,X_N) \right|,$
where $\prod_{l=1}^{s} x_l$ is the volume of the hyperparallelepiped $\Gamma = \{Y : 0 \le y_l \le x_l,\ l = 1,\ldots,s\}$ and $A(X_1,\ldots,X_N)$ is the number of points of the sequence $X_1,\ldots,X_N$ belonging to this parallelepiped.
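As a numerical illustration of Inequality (3) in the two-dimensional case, the following sketch (Python with SciPy assumed; the test function $f(x,y) = xy$ is ours, not taken from the paper) compares the actual integration error of a Sobol point set with the bound $V(f)\cdot D^*$.

```python
import numpy as np
from scipy.stats import qmc

# Test function f(x, y) = x * y on the unit square: the exact integral is
# 1/4 and, by the definition of V(f) above, V(f) = 1 + 1 + 1 = 3.
pts = qmc.Sobol(d=2, scramble=False).random(128)
x, y = pts[:, 0], pts[:, 1]

actual_error = abs(np.mean(x * y) - 0.25)          # |R_N[f]|

# Brute-force star discrepancy for s = 2: corners from sample coords and 1.
xs = np.append(np.unique(x), 1.0)
ys = np.append(np.unique(y), 1.0)
d_star = 0.0
for u in xs:
    for v in ys:
        vol = u * v
        open_count = np.sum((x < u) & (y < v)) / len(x)
        closed_count = np.sum((x <= u) & (y <= v)) / len(x)
        d_star = max(d_star, abs(vol - open_count), abs(vol - closed_count))

print("|R_N[f]|  =", actual_error)
print("V(f) * D* =", 3.0 * d_star)   # the right-hand side of Inequality (3)
```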

References

  1. Ermakov, S.M. Monte Carlo Method and Related Questions; Fizmatlit: Moscow, Russia, 1975. (In Russian)
  2. Halton, J.H. On the Efficiency of Certain Quasi-Random Sequences of Points in Evaluating Multi-Dimensional Integrals. Numer. Math. 1960, 2, 84–90.
  3. Sobol, I.M. Multidimensional Quadrature Formulas and Haar Functions; Nauka: Moscow, Russia, 1969. (In Russian)
  4. Korobov, N.M. Number-Theoretic Methods in Approximate Analysis; MZNMO: Moscow, Russia, 2004. (In Russian)
  5. Lemieux, C. Monte Carlo and Quasi-Monte Carlo Sampling; Springer: New York, NY, USA, 2009.
  6. Sobolev, S.L. Introduction to the Theory of Cubature Formulas; Nauka: Moscow, Russia, 1974. (In Russian)
  7. Koksma, J.F. Een algemeene stelling uit de theorie der gelijkmatige verdeeling modulo 1. Mathematica B (Zutphen) 1942, 11, 7–11.
  8. Hlawka, E. Discrepancy and Riemann Integration. In Studies in Pure Mathematics; Academic Press: London, UK, 1971; pp. 21–129.
  9. Kuipers, L.; Niederreiter, H. Uniform Distribution of Sequences; John Wiley and Sons: New York, NY, USA, 1974.
  10. Levitan, Y.L.; Markovich, N.I.; Rozin, S.R.; Sobol, I.M. On quasi-random sequences for numerical calculations. Zh. Vychisl. Mat. Mat. Fiz. 1988, 28, 755–759.
Table 1. Asymptotic error decrease.

           |           s = 5           |           s = 10           |           s = 15
N          | 10^6    10^12    10^14    | 10^6    10^18    10^40     | 10^6    10^30    10^65
ln^s N / N | 0.5     10^-5    10^-8    | 10^5    10^-2    10^-21    | 10^11   10^-3    10^-33
1/√N       | 10^-3   10^-6    10^-8    | 10^-3   10^-9    10^-20    | 10^-3   10^-15   10^-33
Table 2. The absolute error of the quasi-Monte Carlo (QMC) method.

           |         I_1          |         I_2          |         I_3
s          | 10     15     20     | 10     15     20     | 10     15     20
ErrS       | 10^-4  10^-4  10^-3  | 10^-6  10^-4  10^-4  | 10^-7  10^-6  10^-6
ErrH       | 10^-4  10^-3  10^-3  | 10^-6  10^-4  10^-4  | 10^-7  10^-7  10^-7
ln^s N / N | 10^5   10^11  10^16  | 10^5   10^11  10^16  | 10^5   10^11  10^16
Table 3. The average decreasing order of the error α = α(Δ) for the integral I_4. Randomized quasi-Monte Carlo (RQMC).

             |           s = 15           |           s = 20
Method       | Δr1    Δr2    Δr3    δ     | Δr1    Δr2    Δr3    δ
RQMC, Sobol  | 0.84   0.85   0.87   0.85  | 0.95   1.0    1.0    0.98
RQMC, Halton | 0.70   0.64   0.61   0.65  | 0.85   0.92   0.98   0.92
Table 4. The average decreasing order of the error α = α(Δ) for the integral I_1.

             |           s = 15           |           s = 20
Method       | Δr1    Δr2    Δr3    δ     | Δr1    Δr2    Δr3    δ
RQMC, Sobol  | 0.59   0.61   0.61   0.61  | 0.65   0.63   0.52   0.47
RQMC, Halton | 0.61   0.61   0.60   0.61  | 0.53   0.52   0.52   0.60
