
Article

Iterative Methods for Computing Vibrational Spectra

Chemistry Department, Queen’s University, Kingston, ON K7L 3N6, Canada
Received: 21 December 2017 / Revised: 10 January 2018 / Accepted: 11 January 2018 / Published: 16 January 2018

Abstract

I review some computational methods for calculating vibrational spectra. They all use iterative eigensolvers to compute eigenvalues of a Hamiltonian matrix by evaluating matrix-vector products (MVPs). A direct-product basis can be used for molecules with five or fewer atoms; MVPs are evaluated efficiently by exploiting the structure of the basis and the structure of a direct product quadrature grid. I outline three methods that can be used for molecules with more than five atoms. The first uses contracted basis functions and an intermediate (F) matrix. The second uses Smolyak quadrature and a pruned basis. The third uses a tensor rank reduction scheme.

1. Introduction

Effective numerical methods for solving the time-independent Schrödinger equation to compute vibrational spectra of polyatomic molecules have been developed over the last thirty years [1,2,3,4,5]. They are important when approximations, often based on perturbation theory, are not accurate enough. Almost all methods begin by choosing a basis in which to represent both the wavefunctions and the Hamiltonian and then solve a linear algebra problem. These two basic tasks are not independent: a basis with structure favours iterative linear algebra methods (vide infra). Computing vibrational spectra is useful because it helps experimentalists to assign and interpret measured spectra.
In this article, I present a subjective review of several methods for solving the time-independent Schrödinger equation to calculate vibrational spectra [4,6,7]. It is possible to generalize the methods I describe so that they can also be used to compute ro-vibrational spectra [8,9,10,11,12,13,14]. All of the methods presented here obtain solutions to the Schrödinger equation from a space built by evaluating matrix-vector products (MVPs) and are called iterative methods [15]. I shall ignore Multimode-type (MM) methods [5,16,17,18,19,20], which work when the potential energy surface (PES) is a sum of terms that depend on subsets of the coordinates [17,21] (denoted an MM representation) and work quite well for semi-rigid molecules for which normal coordinates are appropriate. Although widely used, I shall also ignore multiconfiguration time-dependent Hartree (MCTDH) methods [22,23]. They can be used with a block power method [24], an “improved relaxation” method [25,26,27], or a block Lanczos method [28] to compute accurate vibrational energy levels. However, improved relaxation, the most popular MCTDH approach for calculating spectra, converges poorly if the density of states is high and therefore cannot be used to compute a large number of levels of a large molecule [29].
When using iterative methods, it is better not to calculate a Hamiltonian matrix. Many calculations are done with a basis so large that it would not be possible to store the Hamiltonian matrix in memory. Iterative methods require the evaluation of MVPs. How is it possible to compute MVPs without building a matrix representing the Hamiltonian? I shall first outline ideas that make it possible to use a product basis to evaluate MVPs without a Hamiltonian matrix. They exploit the structure of the basis, the quadrature grid, and the kinetic energy operator (KEO). For molecules with more than five atoms, vectors representing wavefunctions and the vector representing the PES on the quadrature grid [4,30,31] are so large that they require too much memory. In the rest of this article, I therefore review methods that obviate the need to store large vectors. The first method (Section 4) uses a contracted basis. To make the contracted basis method useful, it is essential that it be possible to evaluate MVPs in the contracted basis without transforming to a huge product grid. The second method (Section 5) uses a pruned basis and a pruned grid. Pruning significantly reduces the size of the largest vectors one must store. The third method (Section 6) builds a basis from MVPs by using tensor rank reduction.

2. Direct Product Basis Sets

When there are D vibrational coordinates, a direct product basis function is
$\Phi_{n_1, n_2, \ldots, n_D} = \phi_{n_1}(q_1)\, \phi_{n_2}(q_2) \ldots \phi_{n_D}(q_D),$
where the indices $\{n_k\}$ are independent and $n_c = 0, 1, \ldots, n_c^{\max}$. $\phi_{n_c}(q_c)$ is a 1D basis function for coordinate $c$. If $n_c^{\max} = n\ \forall\, c$, then the direct product basis set has $n^D$ functions. The univariate functions are often $\phi_k(x) = h_k^{-1/2}\, [w(x)]^{1/2}\, p_k(z)$, where $z$ is a function of $x$, $p_k(z)$ is a classical orthogonal polynomial, $w(x)$ is the corresponding weight function, and $h_k$ is a normalization factor. Such a basis is usually called a variational basis representation (VBR) [4,30].
Although there are problems for which a VBR basis is best, it is sometimes advantageous to use a discrete variable representation (DVR) basis [4,30,31,32]. In 1D, a standard DVR basis is a set of orthogonal but localized functions that spans the same space as a set of orthogonal delocalized functions, $\phi_k(x)$. The 1D DVR Hamiltonian matrix eigenvalue problem is
$\mathbf{T}^{\mathsf{T}} (\mathbf{K} + \mathbf{V}^{\mathrm{FBR}})\, \mathbf{T} \mathbf{U} = \mathbf{U} \mathbf{E},$
where $\mathbf{K}$ is an exact kinetic matrix in a basis of $\phi_n(q)$ (VBR) functions and $\mathbf{V}^{\mathrm{FBR}}$ is either a product or a quadrature approximation for the exact potential matrix [4]. One way to obtain the transformation matrix $\mathbf{T}$ is to diagonalize the matrix representing $x$ in the VBR,
$\mathbf{x} \mathbf{T} = \mathbf{T} \mathbf{X},$
where $\mathbf{x}$ is the matrix representing $x$ in the $\phi_n(q)$ basis and $\mathbf{X}$ is a diagonal matrix whose nonzero values are its eigenvalues [33]. Equation (2) can be written
$(\mathbf{T}^{\mathsf{T}} \mathbf{K} \mathbf{T} + \mathbf{V}^{\mathrm{diag}})\, \mathbf{U} = \mathbf{U} \mathbf{E},$
where $\mathbf{V}^{\mathrm{diag}}$ is a diagonal matrix whose diagonal elements are values of the potential at the quadrature (DVR) points. A potential optimised DVR (PO-DVR) [34,35] is made from 1D basis functions that are solutions of 1D Schrödinger equations.
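To make the DVR construction concrete, here is a minimal sketch in Python/NumPy; the function name and basis size are my own choices, and a harmonic-oscillator (Hermite) VBR with $\hbar = m = \omega = 1$ is assumed. Diagonalizing the tridiagonal position matrix, $\mathbf{x}\mathbf{T} = \mathbf{T}\mathbf{X}$, yields the DVR points and the transformation matrix:

```python
import numpy as np

def hermite_dvr(n):
    """Build an n-point DVR from a harmonic-oscillator (Hermite) VBR.

    In this basis the position matrix is tridiagonal,
    x[k, k+1] = x[k+1, k] = sqrt((k+1)/2); diagonalizing it
    (x T = T X) gives the DVR points (eigenvalues of x) and the
    VBR-to-DVR transformation matrix T (its eigenvectors).
    """
    off = np.sqrt(np.arange(1, n) / 2.0)
    xmat = np.diag(off, 1) + np.diag(off, -1)
    points, T = np.linalg.eigh(xmat)
    return points, T

pts, T = hermite_dvr(10)
# By the Golub-Welsch connection, these DVR points coincide with the
# Gauss-Hermite quadrature nodes for the weight exp(-x^2).
gh_nodes, _ = np.polynomial.hermite.hermgauss(10)
print(np.allclose(pts, gh_nodes))  # True
```

In a DVR calculation, the diagonal potential matrix is then simply the PES evaluated at `pts`.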

3. Using a Direct Product Basis Set to Solve the Schrödinger Equation

Direct product DVR and VBR bases are popular [4,36,37,38,39,40]. Although direct product bases are huge, they can be used by exploiting the structure of the basis to efficiently evaluate the MVPs required by iterative methods. The Lanczos and filter diagonalisation methods are popular iterative methods for solving the time-independent Schrödinger equation [40,41,42,43,44,45,46,47]. For a molecule with as many as five atoms, they make it possible, even with a direct product basis, to solve the vibrational Schrödinger equation with a general PES. The key ideas have been reviewed several times [4,5,36,37]. They are all based on doing sums sequentially [40,48].
In a direct product DVR, potential MVPs are trivial because the potential matrix is diagonal. When the KEO is a sum of products (SOP), with $g$ terms each with $D$ factors,
$\hat{K} = \sum_{l=1}^{g} \prod_{k=1}^{D} \hat{h}^{(k,l)}(q_k),$
then kinetic MVPs can be efficiently evaluated by doing sums sequentially,
$\sum_{l=1}^{g} \sum_{n_1} h^{(1,l)}_{n_1', n_1} \sum_{n_2} h^{(2,l)}_{n_2', n_2} \ldots \sum_{n_D} h^{(D,l)}_{n_D', n_D}\, u_{n_1, n_2, \ldots, n_D} = u'_{n_1', n_2', \ldots, n_D'},$
where $h^{(k,l)}_{n_k', n_k}$ is an element of the $n \times n$ matrix representation of the factor $\hat{h}^{(k,l)}(q_k)$. Matrix elements of the full KEO are never computed.
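The sequential summation for a SOP KEO can be sketched in Python/NumPy for a toy three-coordinate example (the sizes and random 1D matrices are illustrative stand-ins): each 1D factor is applied along its own axis of the coefficient array, and the result is checked against the explicitly assembled direct-product matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n, g = 3, 4, 2                      # toy sizes: 3 coordinates, 4 basis functions each, 2 KEO terms
h = rng.standard_normal((g, D, n, n))  # h[l, k]: stand-in matrix of the 1D factor h^(k,l)
u = rng.standard_normal((n,) * D)      # input vector stored as a D-dimensional array

# Sequential sums: apply each 1D factor matrix along its own axis;
# cost per term is D small matrix-tensor products, never an n^D x n^D product
up = np.zeros_like(u)
for l in range(g):
    v = u
    for k in range(D):
        # contract index n_k of v with h^(k,l); moveaxis restores the axis order
        v = np.moveaxis(np.tensordot(h[l, k], v, axes=([1], [k])), 0, k)
    up += v

# Check against the explicitly assembled direct-product KEO matrix
K = np.zeros((n ** D, n ** D))
for l in range(g):
    M = h[l, 0]
    for k in range(1, D):
        M = np.kron(M, h[l, k])
    K += M
print(np.allclose(up.ravel(), K @ u.ravel()))  # True
```

The sequential evaluation touches only $n \times n$ matrices, which is the source of the favourable $n^{D+1}$ scaling discussed later.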
If there are important singularities in the KEO, then a VBR basis is better than a DVR basis [49]. At an important singularity, the KEO is singular and vibrational wavefunctions have significant amplitude. In general, singularities occur whenever one coordinate takes a limiting value and another is undefined [50]. In a VBR basis, it is possible to evaluate the potential MVP by doing sums sequentially [48,49]. This enables one to avoid calculating potential matrix elements, which would require computing many-dimensional integrals. Consider a 2D example. The matrix–vector product is
$\sum_{n_1} \sum_{n_2} V_{n_1' n_2', n_1 n_2}\, u_{n_1, n_2} = u'_{n_1', n_2'},$
where
$V_{n_1' n_2', n_1 n_2} = \int dq_1\, dq_2\, \phi_{n_1'}(q_1)\, \phi_{n_2'}(q_2)\, V(q_1, q_2)\, \phi_{n_1}(q_1)\, \phi_{n_2}(q_2).$
In terms of $\mathbf{T}$ matrices (see Equation (3)),
$V_{n_1' n_2', n_1 n_2} \approx \sum_{\alpha} \sum_{\beta} (\mathbf{T})_{n_1', \alpha} (\mathbf{T})_{n_2', \beta}\, V\big((q_1)_\alpha, (q_2)_\beta\big)\, (\mathbf{T}^{\dagger})_{\alpha, n_1} (\mathbf{T}^{\dagger})_{\beta, n_2}.$
The matrix–vector product can be written,
$\sum_{n_1} \sum_{n_2} \sum_{\alpha} \sum_{\beta} (\mathbf{T})_{n_1', \alpha} (\mathbf{T})_{n_2', \beta}\, V\big((q_1)_\alpha, (q_2)_\beta\big)\, (\mathbf{T}^{\dagger})_{\alpha, n_1} (\mathbf{T}^{\dagger})_{\beta, n_2}\, u_{n_1, n_2} = u'_{n_1', n_2'}$
and evaluated by doing sums sequentially,
$\sum_{\alpha} (\mathbf{T})_{n_1', \alpha} \sum_{\beta} (\mathbf{T})_{n_2', \beta}\, V\big((q_1)_\alpha, (q_2)_\beta\big) \sum_{n_1} (\mathbf{T}^{\dagger})_{\alpha, n_1} \sum_{n_2} (\mathbf{T}^{\dagger})_{\beta, n_2}\, u_{n_1, n_2} = u'_{n_1', n_2'}.$
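This 2D sequential evaluation (basis to grid, pointwise multiplication by the potential, grid back to basis) can be sketched in Python/NumPy; the toy potential and basis size are my own, a Hermite VBR is assumed, and $\mathbf{T}$ is real so $\mathbf{T}^{\dagger} = \mathbf{T}^{\mathsf{T}}$:

```python
import numpy as np

n = 8
off = np.sqrt(np.arange(1, n) / 2.0)
pts, T = np.linalg.eigh(np.diag(off, 1) + np.diag(off, -1))  # DVR points and T of Eq. (3)
# toy PES evaluated on the 2D direct product grid
V = 0.5 * (pts[:, None]**2 + pts[None, :]**2) + 0.1 * pts[:, None] * pts[None, :]**2
u = np.random.default_rng(1).standard_normal((n, n))

# Sequential sums: basis -> grid, pointwise multiply, grid -> basis
grid = T.T @ u @ T          # sums over n1 and n2 (T is real, so T-dagger = T.T)
up = T @ (V * grid) @ T.T   # multiply by V on the grid, then sums over beta and alpha

# Check against the quadrature-approximated potential matrix applied directly
Vmat = np.einsum('ia,jb,ab,ka,lb->ijkl', T, T, V, T, T).reshape(n * n, n * n)
print(np.allclose(up.ravel(), Vmat @ u.ravel()))  # True
```

No potential matrix elements (2D integrals) are ever formed; only the potential values on the grid are needed.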
If there is an important singularity, good basis functions are necessarily nondirect-product functions: products of a function of the coordinate that is undefined and a function of the coordinate that takes a limiting value, coupled through a shared index.
The ideas of this section make it possible to compute vibrational spectra without computing and storing a Hamiltonian matrix. However, for molecules with more than five atoms, the memory cost of storing vectors in a direct product basis is prohibitive. For molecules with more than five atoms, it is necessary to introduce other ideas to reduce the memory cost of calculations.

4. Using a DVR to Make a Contracted Basis

To include information about coupling in the basis functions, it is common to use basis functions that are products of factors that depend on more than one coordinate. I shall call the multi-dimensional factors contracted basis functions. It is important to devise good algorithms for evaluating MVPs in a contracted basis. Contracted bases are necessarily more complicated, i.e., they have less structure, and it is structure that is exploited to evaluate MVPs efficiently. An important advantage of contracted bases is the reduced spectral range of the contracted-basis Hamiltonian matrix. Reducing the spectral range decreases the number of MVPs required to compute eigenvalues.
For molecules with more than three atoms, it is best to use contracted functions obtained by diagonalizing matrices that represent the Hamiltonian with one or more coordinates fixed. The basis functions are direct products of functions of different coordinates or groups of coordinates [6,51,52,53].

Evaluating Matrix-Vector Products without Storing a Vector as Large as the Direct Product DVR

The most obvious way to evaluate MVPs is to transform from the contracted basis to a primitive basis, in which the contracted functions are determined. Computing matrix elements of the potential in the primitive basis requires storing the potential on a large (direct product) grid of points. Because the grid is huge, this is impractical for molecules with more than five atoms. An alternative is to store an intermediate matrix [6,54]. To explain how this is done, consider a ($J = 0$) Hamiltonian in polyspherical coordinates [55,56,57]
$H = T^{\mathrm{ben}}(\theta, r) + T^{\mathrm{str}}(r) + V(\theta, r)$
with
$T^{\mathrm{ben}}(\theta, r) = \sum_i B_i(r)\, T_b^{(i)}(\theta), \qquad T^{\mathrm{str}}(r) = \sum_i -\frac{1}{2\mu_i} \frac{\partial^2}{\partial r_i^2}.$
$\theta$ represents all of the bend coordinates and $r$ represents all of the stretch coordinates. The functions $B_i(r)$ and the operators $T_b^{(i)}(\theta)$ are known [55,56,58]. One constructs contracted bend functions from a Hamiltonian obtained by fixing the stretch coordinates at some reference geometry and contracted stretch functions from a Hamiltonian obtained by fixing all the bend coordinates at reference values. Products of the contracted bend functions and contracted stretch functions are the final basis functions.
The reduced-dimension Hamiltonian for the bend contraction is,
$H^{(b)} = T^{\mathrm{ben}}(\theta, r_e) + V(\theta, r_e).$
Its wavefunctions are denoted by
$X_b(\theta) = \sum_l C_l^b\, f_l(\theta)$
and the energies by $E_b$. The $f_l$ are primitive bend basis functions ($l$ is a composite index) and the number of retained bend wavefunctions is denoted by $n_b$. Similarly, the reduced-dimension Hamiltonian for the stretch contraction is,
$H^{(s)} = T^{\mathrm{str}}(r) + V(\theta_e, r),$
with the wavefunctions denoted by,
$Y_s(r) = \sum_\alpha D_\alpha^s\, g_\alpha(r)$
and the energies by $E_s$. The $g_\alpha$ are primitive DVR stretch basis functions ($\alpha$ is a composite index representing a multidimensional DVR function) and the number of retained stretch wavefunctions is denoted by $n_s$. $\theta_e$ and $r_e$ represent reference (often equilibrium) values of all the bend coordinates and all the stretch coordinates. The final basis is a product of the retained stretch and bend eigenfunctions
$| b s \rangle = | X_b \rangle | Y_s \rangle.$
The full Hamiltonian is
$H = H^{(b)} + H^{(s)} + \Delta T + \Delta V,$
where
$\Delta V(\theta, r) = V(\theta, r) - V(\theta, r_e) - V(\theta_e, r)$
and
$\Delta T = \sum_i \Delta B_i(r)\, T_b^{(i)}(\theta)$
with
$\Delta B_i(r) = B_i(r) - B_i(r_e).$
In the contracted basis, MVPs for $\Delta T$ and $H^{(b)} + H^{(s)}$ are easy [6].
If one uses a finite basis representation (FBR) primitive bend basis and a DVR primitive stretch basis, a matrix element of $\Delta V$ in the product contracted basis is,
$\langle b' s' | \Delta V(\theta, r) | b s \rangle = \sum_{l' l \alpha} D_\alpha^{s'} C_{l'}^{b'}\, \langle l' | \Delta V(\theta, r_\alpha) | l \rangle\, C_l^b D_\alpha^s.$
This may be re-written
$\sum_\alpha F_{b'b,\alpha}\, D_\alpha^{s'} D_\alpha^s,$
where I have introduced an F matrix [6] defined by,
$F_{b'b,\alpha} = \langle b' | \Delta V(\theta, r_\alpha) | b \rangle = \sum_{l' l} C_{l'}^{b'} C_l^b\, \langle l' | \Delta V(\theta, r_\alpha) | l \rangle.$
The integral $\langle l' | \Delta V(\theta, r_\alpha) | l \rangle$ is computed with quadrature. The $F_{b'b,\alpha}$ elements are calculated (in parallel) and stored before matrix–vector products are evaluated. The $\Delta V$ matrix–vector product
$u'_{b's'} = \sum_{bs} \langle b' s' | \Delta V | b s \rangle\, u_{bs},$
is done as follows:
$u^{(1)}_{b\alpha} = \sum_s D_\alpha^s\, u_{bs}, \qquad u^{(2)}_{b'\alpha} = \sum_b F_{b'b\alpha}\, u^{(1)}_{b\alpha}, \qquad u'_{b's'} = \sum_\alpha D_\alpha^{s'}\, u^{(2)}_{b'\alpha}.$
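A sketch of this three-step evaluation in Python/NumPy, with random stand-ins for the $F$ and $D$ arrays (the sizes $n_b$, $n_s$ and the number of stretch DVR points are illustrative), checked against the directly assembled $\Delta V$ matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
nb, ns, na = 6, 5, 12                  # toy sizes: bend functions, stretch functions, stretch DVR points
F = rng.standard_normal((nb, nb, na))  # stand-in for F[b', b, alpha], precomputed and stored
Dm = rng.standard_normal((na, ns))     # stand-in for D[alpha, s], stretch eigenvector coefficients
u = rng.standard_normal((nb, ns))

# Three sequential steps: the primitive product grid is never needed
u1 = u @ Dm.T                          # u1[b, alpha]  = sum_s     D[alpha, s]     u[b, s]
u2 = np.einsum('pba,ba->pa', F, u1)    # u2[b', alpha] = sum_b     F[b', b, alpha] u1[b, alpha]
up = u2 @ Dm                           # u'[b', s']    = sum_alpha D[alpha, s']    u2[b', alpha]

# Check against the explicit matrix <b's'|dV|bs> = sum_alpha F[b',b,alpha] D[alpha,s'] D[alpha,s]
M = np.einsum('pba,at,as->ptbs', F, Dm, Dm).reshape(nb * ns, nb * ns)
print(np.allclose(up.ravel(), M @ u.ravel()))  # True
```

Only the $n_b \times n_b \times n_\alpha$ array $F$ is stored, which is what makes the contracted-basis MVP affordable.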
The idea of reducing the memory cost of contracted-basis calculations by storing a matrix representation of $\Delta V$ was used in [54], where only the bend basis was contracted. The full power of the method is realized only when both stretch and bend bases are contracted [6,59,60,61,62,63,64]. Recently, similar ideas were used for Cl$^-$–H$_2$O [65]. Yu has studied several molecules using similar ideas [66,67,68,69]. As in [54], he contracts only the bend part.

5. Using Pruning to Reduce Both Basis and Grid Size

In this section, I present an alternative method for computing vibrational spectra of molecules with more than five atoms. In contrast to the idea of using contracted basis functions, it uses univariate basis functions, but uses only selected products, i.e., the basis of Equation (1) is pruned by removing functions that are deemed unimportant. This makes it possible to obviate the need to store large vectors.
It seems clear that one should discard basis functions that are not necessary. Many authors have implemented basis pruning strategies [16,18,19,53,62,63,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86]. Pruning has the obvious advantages of decreasing the size of the vectors one must store and the spectral range of the Hamiltonian matrix; however, if one uses an iterative method, it complicates the evaluation of MVPs. A pruned basis necessarily has less structure than a direct product basis. In this section, I shall discuss how to evaluate MVPs when the pruning strategy retains some product structure.
Pruning is more efficient when used with an FBR rather than a DVR [40]. The simplest VBR pruning condition is $n_1 + \ldots + n_D \le b$. The pruned basis is much smaller than the direct product basis. If $n_c = 0, 1, \ldots, b$ for $c = 1, \ldots, D$ and $b = 14$, then the size of the direct product basis is ∼6 $\times 10^{11}$ for $D = 10$; ∼4 $\times 10^{17}$ for $D = 15$; and ∼3 $\times 10^{23}$ for $D = 20$. On the other hand, if basis functions with $n_1 + \ldots + n_D > b = 14$ are discarded, the basis grows much more slowly with $D$: ∼2.0 $\times 10^{6}$ for $D = 10$; ∼7.7 $\times 10^{7}$ for $D = 15$; and ∼1.4 $\times 10^{9}$ for $D = 20$. MVPs for the KEO in a pruned basis are straightforward. MVPs for the potential are only straightforward if a direct product quadrature grid is used [83], but in that case one needs to store a potential vector about as large as the direct product vectors one avoids by pruning the basis, and the most important advantage of pruning is therefore lost. It is possible to find a nondirect product quadrature scheme that uses fewer points and to evaluate potential MVPs by doing sums sequentially. The ideas will be explained with the $n_1 + \ldots + n_D \le b$ pruning condition, but better pruning conditions will be briefly discussed at the end of this section.
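The basis-size numbers quoted above are easy to reproduce: the number of multi-indices with $n_1 + \ldots + n_D \le b$ is the stars-and-bars count $\binom{D+b}{b}$. A quick check in Python:

```python
from math import comb

def pruned_size(D, b):
    """Number of multi-indices (n_1, ..., n_D), n_c >= 0, with
    n_1 + ... + n_D <= b.  By stars and bars this is C(D + b, b)."""
    return comb(D + b, b)

b = 14
for D in (10, 15, 20):
    direct = (b + 1) ** D  # direct product basis: n_c = 0, ..., b for every c
    print(D, f"{direct:.1e}", f"{pruned_size(D, b):.1e}")
```

For $D = 10$ this prints a direct-product size of ∼6 × 10¹¹ against a pruned size of ∼2.0 × 10⁶, matching the figures in the text.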
The direct product quadrature is, in a sense, too good: it is so accurate that many matrix elements involving basis functions removed by the pruning are also exact. A nondirect product Smolyak quadrature is better. It has far fewer points but retains enough structure to allow efficient MVPs. For details, see [7,87,88]. To make a Smolyak quadrature, one needs a family of 1D quadrature rules for each coordinate. Quadratures are labelled by $i_c$, $i_c = 1, 2, \ldots, i_c^{\max}$. The number of points in quadrature rule $i_c$ is $m_c(i_c)$, where $m_c(i_c)$ is a non-decreasing function of $i_c$. To evaluate MVPs efficiently, one needs to use points for which all points in rule $i_c - 1$ are also in rule $i_c$. Such points are called nested. For a 6D example, the standard way to write a Smolyak quadrature approximation to the integral of $f(q_1, \ldots, q_6)\, \big({}^1w(q_1)\, {}^2w(q_2)\, {}^3w(q_3)\, {}^4w(q_4)\, {}^5w(q_5)\, {}^6w(q_6)\big)$ is
$\sum_{i_1 + \ldots + i_6 \le H} C^{smol}_{i_1, \ldots, i_6} \sum_{k_1}^{m_1(i_1)} \sum_{k_2}^{m_2(i_2)} \sum_{k_3}^{m_3(i_3)} \sum_{k_4}^{m_4(i_4)} \sum_{k_5}^{m_5(i_5)} \sum_{k_6}^{m_6(i_6)} {}^{i_1}w_{k_1}\, {}^{i_2}w_{k_2}\, {}^{i_3}w_{k_3}\, {}^{i_4}w_{k_4}\, {}^{i_5}w_{k_5}\, {}^{i_6}w_{k_6}\, f(q_{1k_1}, q_{2k_2}, q_{3k_3}, q_{4k_4}, q_{5k_5}, q_{6k_6}),$
where $q_{ck_c}$ is a point in the quadrature labelled by $i_c$, ${}^{i_c}w_{k_c}$ is the corresponding weight, and the 1D quadratures are designed to approximate
$\int dq_c\, {}^c w(q_c)\, f(z_c(q_c)).$
$C^{smol}_{i_1, \ldots, i_6}$ are coefficients; see [7]. $H$ is increased until convergence is achieved. The union of the grids for which $i_1 + \ldots + i_6 \le H$ is satisfied is called the Smolyak grid. The size of the Smolyak grid is orders of magnitude smaller than the direct product grid size.
It would be costly to use Equation (27) in the evaluation of MVPs because it would be necessary to evaluate the sum over $i_1 + \ldots + i_6 \le H$ for each MVP. When the 1D quadrature rules are nested, one can [7] replace Equation (27) with
$= \sum_{k_1}^{N_1} \sum_{k_2}^{N_2} \sum_{k_3}^{N_3} \sum_{k_4}^{N_4} \sum_{k_5}^{N_5} \sum_{k_6}^{N_6} w(k_1, k_2, k_3, k_4, k_5, k_6)\, f(q_{1k_1}, q_{2k_2}, q_{3k_3}, q_{4k_4}, q_{5k_5}, q_{6k_6}),$
where
$w(k_1, \ldots, k_6) = \sum_{i_1 + \ldots + i_6 \le H} C^{smol}_{i_1, \ldots, i_6}\, {}^{i_1}w_{k_1} \ldots {}^{i_6}w_{k_6},$
are “super weights” that are pre-computed [89]. $N_c$ is a maximum number of points for coordinate $c$ [7]. $N_c$ depends on $k_{c'}$ if $c > c'$, and $N_1$ does not depend on $k_1, \ldots, k_D$. Using the super weights, it is possible to evaluate a potential MVP by doing sums sequentially,
$u'(n_6', n_5', n_4', n_3', n_2', n_1') = \sum_{k_1=1}^{N_1} T_{n_1' k_1} \sum_{k_2=1}^{N_2} T_{n_2' k_2} \sum_{k_3=1}^{N_3} T_{n_3' k_3} \sum_{k_4=1}^{N_4} T_{n_4' k_4} \sum_{k_5=1}^{N_5} T_{n_5' k_5} \sum_{k_6=1}^{N_6} T_{n_6' k_6}\, w(k_1, k_2, k_3, k_4, k_5, k_6)\, V(q_{1k_1}, q_{2k_2}, q_{3k_3}, q_{4k_4}, q_{5k_5}, q_{6k_6}) \sum_{n_6=0}^{n_6^{\max}} T_{n_6 k_6} \sum_{n_5=0}^{n_5^{\max}} T_{n_5 k_5} \sum_{n_4=0}^{n_4^{\max}} T_{n_4 k_4} \sum_{n_3=0}^{n_3^{\max}} T_{n_3 k_3} \sum_{n_2=0}^{n_2^{\max}} T_{n_2 k_2} \sum_{n_1=0}^{n_1^{\max}} T_{n_1 k_1}\, u(n_6, n_5, n_4, n_3, n_2, n_1),$
where $T_{n k} = h_n^{-1/2}\, p_n(z(q_k))$. $n_c^{\max}$ depends on $n_{c'}$ if $c < c'$.
To use Equation (31), one first sums over $n_1$ to compute an intermediate vector $y^1_{k_1, n_2, n_3, n_4, n_5, n_6}$ and then sums over $n_2$ to compute an intermediate vector whose components are $y^2_{k_1, k_2, n_3, n_4, n_5, n_6}$, etc. At each step, the $n_c$ and $k_c$ indices are constrained among themselves. Everything is explained in detail in [90]. The Smolyak grid is a sum of smaller direct product grids and therefore has structure that makes it possible to evaluate MVPs by doing sums sequentially. It is also possible to do sums sequentially for any pruning condition of the form $g_1(n_1) + \ldots + g_D(n_D) \le b$ [87]. Sometimes, $g_c(n_c) = \alpha_c n_c$ with $\alpha_c = \omega_c / \omega_{\mathrm{lowest}} + 0.5$ is a good choice. Often this choice can be improved. In general, basis functions with many non-zero indices for coordinates with large frequencies are unimportant. They can be pushed out of the basis by using $g_c(n_c) > \alpha_c n_c$. In general, it is important to include in the basis product functions for which there are several non-zero indices for coordinates with small frequencies. Such functions are preferentially included in the basis by using $g_c(n_c) < \alpha_c n_c$. These are general guidelines which we have found useful, but they are not specific [87,88].
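To illustrate a weighted pruning condition of the form $g_c(n_c) = \alpha_c n_c$, the sketch below (Python; the $\alpha_c$ values are hypothetical frequency ratios, not taken from any real molecule) enumerates the retained multi-indices recursively. With all $\alpha_c = 1$ it recovers the $\binom{D+b}{b}$ count of the simple condition; larger $\alpha_c$ for stiff coordinates shrink the basis:

```python
from math import comb

def pruned_indices(alphas, b):
    """Yield all multi-indices (n_1, ..., n_D) with sum_c alphas[c] * n_c <= b,
    i.e., the pruning condition g_c(n_c) = alpha_c * n_c."""
    def rec(c, budget, prefix):
        if c == len(alphas):
            yield tuple(prefix)
            return
        n = 0
        while alphas[c] * n <= budget:
            yield from rec(c + 1, budget - alphas[c] * n, prefix + [n])
            n += 1
    yield from rec(0, b, [])

D, b = 6, 8
uniform = sum(1 for _ in pruned_indices([1.0] * D, b))
print(uniform == comb(D + b, b))  # True: stars-and-bars count for the simple condition
# hypothetical weights: the last three coordinates have larger frequencies
weighted = sum(1 for _ in pruned_indices([1.0, 1.0, 1.0, 2.1, 2.1, 3.4], b))
print(weighted < uniform)         # True: high-frequency excitations are pushed out
```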

6. Using Rank Reduction to Avoid Storing Full Dimensional Vectors

Contraction (Section 4) and pruning (Section 5) enable one to avoid storing vectors with as many elements as the direct product basis. For molecules with more than five atoms, this is essential. For example, if $D = 12$ and $n = 10$, then ∼8000 GB of memory is needed to store a single vector with $n^D$ components. In this section, I describe another approach for avoiding vectors with $n^D$ components. It does use a direct product basis, but exploits the advantages of a SOP PES [91,92,93]. The key idea is that in some cases the $n^D$ coefficients used to represent a function can be computed from a much smaller set of numbers. For example, a product of functions of a single coordinate, $\phi_1(q_1)\, \phi_2(q_2) \ldots \phi_D(q_D)$, can be represented as
$\sum_{i_1=1}^{n} f^{(1)}_{i_1} \theta^1_{i_1}(q_1) \sum_{i_2=1}^{n} f^{(2)}_{i_2} \theta^2_{i_2}(q_2) \ldots \sum_{i_D=1}^{n} f^{(D)}_{i_D} \theta^D_{i_D}(q_D)$
and it is only necessary to store $Dn$ numbers.
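This storage saving is easy to demonstrate (Python/NumPy; the sizes are illustrative): the full coefficient tensor of a product function has $n^D$ entries, but is reconstructed here from its $Dn$ stored numbers by chained outer products:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(3)
D, n = 6, 4
factors = [rng.standard_normal(n) for _ in range(D)]  # f^(j), j = 1..D: only D*n numbers stored

# Full coefficient tensor F_{i1...iD} = f^(1)_{i1} * ... * f^(D)_{iD}
F = reduce(np.multiply.outer, factors)
print(F.size)                        # 4096 = n^D entries
print(sum(f.size for f in factors))  # 24 = D*n numbers actually stored
```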
We have developed computational methods that solve the Schrödinger equation by projecting into a basis of functions that are sums of products. Although the functions in this basis are SOPs, the basis is not a direct product basis. A single basis function with $R$ terms is determined by only $RDn$ numbers. If $D$ is large, this is much less than $n^D$. The key idea is to use basis functions that are sums of products of optimised factors. It works if the Hamiltonian is itself a SOP. The SOP basis functions are represented in a primitive product basis made from 1D functions $\theta^j_{i_j}(q_j)$ with $i_j = 1, \ldots, n_j$ for each coordinate $q_j$. Any function can be expanded in this basis as
$\Psi(q_1, \ldots, q_D) \simeq \sum_{i_1=1}^{n_1} \ldots \sum_{i_D=1}^{n_D} F_{i_1 i_2 \ldots i_D} \prod_{j=1}^{D} \theta^j_{i_j}(q_j).$
The goal is to avoid explicitly introducing $F_{i_1 i_2 \ldots i_D}$. This is possible if $\Psi(q_1, \ldots, q_D)$ is a SOP. In that case,
$F_{i_1 i_2 \ldots i_D} = \sum_{\ell=1}^{R} \prod_{j=1}^{D} f^{(\ell, j)}_{i_j},$
where $f^{(\ell, j)}$ is a one-dimensional vector associated with the $\ell$-th term and coordinate $j$. The SOP format for multidimensional functions is known as the canonical polyadic (CP) decomposition for tensors [94,95,96].
Basis functions are made by applying the Hamiltonian. We have used a shifted block power method [91]. It is imperative that every basis vector be in the form of Equation (33). This is only the case if the Hamiltonian is of the form,
$H(q_1, \ldots, q_D) = \sum_{k=1}^{T} \prod_{j=1}^{D} h^j_k(q_j),$
where $h^j_k$ is a one-dimensional operator acting in a Hilbert space associated with coordinate $q_j$. A PES can be forced into SOP form by using, for example, potfit [5,97], multigrid potfit [98], or neural network methods [99,100,101].
When $H$ is applied to a vector $F$ to obtain a new vector $F'$, the number of terms in the vector increases. If there are $T$ terms in $H$, the rank (number of terms) of $F'$ is a factor of $T$ larger than the rank of $F$. All vectors have the form
$F_{i_1 i_2 \ldots i_D} = \sum_{\ell=1}^{R} s_\ell \prod_{j=1}^{D} \tilde{f}^{(\ell, j)}_{i_j} \quad \mathrm{with} \quad \sum_{i_j}^{n_j} \big| \tilde{f}^{(\ell, j)}_{i_j} \big|^2 = 1,$
where, for each term ($\ell$) and each coordinate ($j$), $\tilde{f}^{(\ell, j)}_{i_j}$ is a normalized 1D vector, $s_\ell$ is a normalization coefficient, and $n_j$ is the number of basis functions for coordinate $j$. $H$ can be applied to $F$ by evaluating 1D matrix–vector products,
$(HF)_{i_1' \ldots i_D'} = \sum_{i_1, i_2, \ldots, i_D} \sum_{k=1}^{T} \prod_{j'=1}^{D} (h^{j'}_k)_{i_{j'}' i_{j'}} \sum_{\ell=1}^{R} s_\ell \prod_{j=1}^{D} \tilde{f}^{(\ell, j)}_{i_j}$
$= \sum_{k=1}^{T} \sum_{\ell=1}^{R} s_\ell \prod_{j=1}^{D} \sum_{i_j} (h^j_k)_{i_j' i_j}\, \tilde{f}^{(\ell, j)}_{i_j},$
where $(h^j_k)_{i_j' i_j} = \langle \theta^j_{i_j'} | h^j_k | \theta^j_{i_j} \rangle$.
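The application of a SOP Hamiltonian to a CP-format vector can be sketched as follows (Python/NumPy; the random 1D matrices and CP factors are illustrative, and the $s_\ell$ normalization factors are absorbed into the factors for simplicity). Each term of $H$ acts through 1D matrix–vector products on the factors, the rank grows by a factor of $T$, and the result is verified against dense tensors:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(4)
D, n, T, R = 3, 4, 2, 3  # toy sizes: 3 coordinates, rank-3 vector, 2-term SOP operator
h = rng.standard_normal((T, D, n, n))                # stand-ins for the 1D matrices (h_k^j)
f = [rng.standard_normal((R, n)) for _ in range(D)]  # CP factors: one (R x n) array per coordinate

# Apply H in CP format: only 1D matrix-vector products; the new rank is T*R
new_f = [np.concatenate([fj @ h[k, j].T for k in range(T)], axis=0)
         for j, fj in enumerate(f)]

def cp_to_dense(factors):
    """Reconstruct the full tensor sum_l prod_j factors[j][l]."""
    return sum(reduce(np.multiply.outer, [fj[l] for fj in factors])
               for l in range(factors[0].shape[0]))

# Dense check: (sum_k kron of the h_k^j) applied to the flattened tensor
Hmat = sum(reduce(np.kron, [h[k, j] for j in range(D)]) for k in range(T))
print(np.allclose(cp_to_dense(new_f).ravel(), Hmat @ cp_to_dense(f).ravel()))  # True
```

The dense check is only feasible for toy sizes; the point of the CP format is that `new_f` is built without ever forming the $n^D$ tensor.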
To avoid having vectors with many terms, we must reduce the rank. To do this, we replace $F^{\mathrm{old}}_{i_1 i_2 \ldots i_D}$,
$F^{\mathrm{old}}_{i_1 i_2 \ldots i_D} = \sum_{\ell=1}^{R^{\mathrm{old}}} s_\ell \prod_{j=1}^{D} {}^{\mathrm{old}}\tilde{f}^{(\ell, j)}_{i_j} \;\Longrightarrow\; F^{\mathrm{new}}_{i_1 i_2 \ldots i_D} = \sum_{\ell=1}^{R^{\mathrm{new}}} s_\ell \prod_{j=1}^{D} {}^{\mathrm{new}}\tilde{f}^{(\ell, j)}_{i_j},$
where $R^{\mathrm{new}} < R^{\mathrm{old}}$, and choose the ${}^{\mathrm{new}}\tilde{f}^{(\ell, j)}_{i_j}$ to minimize $\| F^{\mathrm{new}} - F^{\mathrm{old}} \|$. We use the same $R^{\mathrm{new}}$ for all reductions. An alternating least squares (ALS) algorithm described in [96] is used to carry out the reduction.
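A minimal ALS sketch in Python/NumPy, in the spirit of this reduction step but not the exact algorithm of [96] (all function names and toy tensors are my own): given CP factors of rank $R^{\mathrm{old}}$, it fits rank-$R^{\mathrm{new}}$ factors by alternately solving exact least-squares problems for one factor at a time, so the misfit $\| F^{\mathrm{new}} - F^{\mathrm{old}} \|$ never increases.

```python
import numpy as np
from functools import reduce

def cp_to_dense(B):
    """Reconstruct the full tensor sum_l prod_j B[j][l]."""
    return sum(reduce(np.multiply.outer, [Bj[l] for Bj in B])
               for l in range(B[0].shape[0]))

def als_reduce(B, R_new, sweeps=50, seed=0):
    """Fit rank-R_new CP factors A to given rank-R_old CP factors B.

    Updating factor j solves the normal equations V A_j = W B_j, where
    V = Hadamard_{k != j} (A_k A_k^T)  is R_new x R_new and
    W = Hadamard_{k != j} (A_k B_k^T)  is R_new x R_old;
    each update is an exact least-squares step, so the misfit is non-increasing.
    """
    rng = np.random.default_rng(seed)
    D, R_old = len(B), B[0].shape[0]
    A = [rng.standard_normal((R_new, Bj.shape[1])) for Bj in B]
    for _ in range(sweeps):
        for j in range(D):
            V = np.ones((R_new, R_new))
            W = np.ones((R_new, R_old))
            for k in range(D):
                if k != j:
                    V *= A[k] @ A[k].T
                    W *= A[k] @ B[k].T
            A[j] = np.linalg.lstsq(V, W @ B[j], rcond=None)[0]
    return A

# Toy check: a true rank-1 tensor stored as two redundant CP terms, reduced to rank 1
rng = np.random.default_rng(1)
B = []
for _ in range(3):
    v = rng.standard_normal(5)
    B.append(np.vstack([v, v]))  # two identical terms: redundant rank 2
A = als_reduce(B, R_new=1)
err = np.linalg.norm(cp_to_dense(A) - cp_to_dense(B))
print(err < 1e-8 * np.linalg.norm(cp_to_dense(B)))  # True
```

For this easy redundant-rank example, ALS recovers the underlying rank-1 tensor essentially exactly; in real calculations the best achievable error depends on the chosen $R^{\mathrm{new}}$.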
In [91], it was demonstrated that these ideas work for a 20D Hamiltonian of coupled oscillators. A rank of only 20 (i.e., 20 terms in each of the basis functions) was sufficient to converge about 40 states of a 20D Hamiltonian. Related ideas were used successfully for molecules with as many as 10 atoms [91,92,93].

7. Conclusions

Iterative eigensolvers make it possible to calculate vibrational spectra without storing a Hamiltonian matrix. The Lanczos algorithm, filter diagonalization, and the restarted Lanczos and Arnoldi methods available in ARPACK [102] are common iterative eigensolvers.
It is easiest to use an iterative eigensolver when the Hamiltonian is a SOP. A Taylor series potential is a SOP. In many cases, the vibrational KEO is a SOP. In normal coordinates, it is only a SOP if one expands elements of the effective moment of inertia tensor [103] or sets them to zero (approximates the KEO). If the PES is not a SOP, it can be massaged into SOP form [101,104]. It is harder to use an iterative eigensolver when the Hamiltonian is not a SOP. In this case, quadrature (or collocation) is used and the Hamiltonian matrix is usually not sparse. In a product basis, the cost of Hamiltonian MVPs scales as $n^{D+1}$ [40,48,49]. This favourable scaling is obtained by exploiting structure. Product basis/product grid methods are methods of first resort for molecules with four or five atoms. They also work extremely well for Van der Waals molecules when only the intermonomer coordinates are treated explicitly [8,11,12,105,106].
In this review article, I describe three methods that obviate the need to store vectors with as many components as the product basis. The first method uses basis functions that are eigenfunctions of a Hamiltonian obtained by setting a subset of the coordinates equal to reference values. In Section 4, a method is described for evaluating MVPs in a product contracted basis. It does not require storing vectors as large as the primitive product basis set. The key idea is to store an intermediate matrix, called the F matrix. The second method uses basis functions that are products of univariate functions. Functions deemed unimportant are removed from a direct-product basis by imposing a pruning condition. The pruning condition is chosen so that the pruned basis has structure. When used with a general PES, it is necessary to use the pruned basis in conjunction with a nondirect product quadrature grid that satisfies two requirements. It must have fewer points than the product quadrature and it must have sufficient structure to be able to evaluate MVPs by doing sums sequentially. A Smolyak quadrature satisfies both requirements. However, to use it, the quadrature must be written not as a sum over quadrature levels, as is usually the case, but as a sum over points (i.e., not Equation (27) but Equation (30)). The third method uses SOP basis functions. It works only if the Hamiltonian is a SOP. The basis functions are determined by reducing the rank of the vectors obtained from MVPs. With rank reduction methods, it is possible to compute vibrational spectra for molecules with more than a dozen atoms [107].

Acknowledgments

The research described in this paper was supported by the Canadian Natural Sciences and Engineering Research Council. Many excellent students and postdocs made important contributions to the development and implementation of the ideas described here. I am especially grateful to Gustavo Avila, Matthew Bramley, Arnaud Leclerc, and Xiao-Gang Wang.

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Schinke, R. Photodissociation Dynamics; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
2. Tannor, D.J. Introduction to Quantum Mechanics: A Time-Dependent Perspective; University Science Books: Sausalito, CA, USA, 2007. [Google Scholar]
3. Kosloff, R. Time-dependent quantum-mechanical methods for molecular dynamics. J. Phys. Chem. 1988, 92, 2087–2100. [Google Scholar] [CrossRef]
4. Light, J.C.; Carrington, T., Jr. Discrete-variable representations and their utilization. Adv. Chem. Phys. 2000, 114, 263–310. [Google Scholar]
5. Bowman, J.M.; Carrington, T.; Meyer, H.-D. Variational quantum approaches for computing vibrational energies of polyatomic molecules. Mol. Phys. 2008, 106, 2145–2182. [Google Scholar] [CrossRef]
6. Wang, X.-G.; Carrington, T. New ideas for using contracted basis functions with a Lanczos eigensolver for computing vibrational spectra of molecules with four or more atoms. J. Chem. Phys. 2002, 117, 6923–6934. [Google Scholar] [CrossRef]
7. Avila, G.; Carrington, T., Jr. Nonproduct quadrature grids for solving the vibrational Schrödinger equation. J. Chem. Phys. 2009, 131, 174103. [Google Scholar] [CrossRef] [PubMed]
8. Dawes, R.; Wang, X.-G.; Jasper, A.; Carrington, T. Nitrous oxide dimer: a new potential energy surface and ro-vibrational spectrum of the non-polar isomer. J. Chem. Phys. 2010, 133, 134304:1–134304:14. [Google Scholar] [CrossRef] [PubMed]
9. Sarkar, P.; Poulin, N.; Carrington, T., Jr. Calculating rovibrational energy levels of a triatomic molecule with a simple Lanczos method. J. Chem. Phys. 1999, 110, 10269–10274. [Google Scholar] [CrossRef]
10. Szidarovszky, T.; Fabri, C.; Csaszar, A.G. The role of axis embedding on rigid rotor decomposition analysis of variational rovibrational wave functions. J. Chem. Phys. 2012, 136, 174112. [Google Scholar] [CrossRef] [PubMed]
11. Leforestier, C.; Braly, L.B.; Liu, K.; Elroy, M.J.; Saykally, R.J. Fully coupled six-dimensional calculations of the water dimer vibration-rotation-tunneling states with a split Wigner pseudo spectral approach. J. Chem. Phys. 1997, 106, 8527. [Google Scholar] [CrossRef]
12. Brown, J.; Wang, X.-G.; Dawes, R.; Carrington, T. Computational study of the rovibrational spectrum of (OCS)2. J. Chem. Phys. 2012, 136, 134306:1–134306:12. [Google Scholar] [CrossRef] [PubMed]
13. Wang, X.-G.; Carrington, T. Computing ro-vibrational levels of methane with internal vibrational coordinates and an Eckart frame. J. Chem. Phys. 2013, 138, 104106:1–104106:20. [Google Scholar] [CrossRef] [PubMed]
14. Wang, X.-G.; Carrington, T.; McKellar, A.R.W. Theoretical and experimental study of the rovibrational spectrum of He2-CO. J. Phys. Chem. A 2009, 113, 13331–13341, (Robert Field Festschrift). [Google Scholar] [CrossRef] [PubMed]
15. Golub, G.H.; van Loan, C.F. Matrix Computations, 4th ed.; JHU Press: Baltimore, MD, USA, 2013. [Google Scholar]
16. Carter, S.; Bowman, J.M.; Handy, N.C. Extensions and tests of “multimode”: A code to obtain accurate vibration/rotation energies of many-mode molecules. Theor. Chim. Acta 1998, 100, 191–198. [Google Scholar] [CrossRef]
17. Carter, S.; Culik, S.J.; Bowman, J.M. Vibrational self-consistent field method for many-mode systems: A new approach and application to the vibrations of CO adsorbed on Cu (100). J. Chem. Phys. 1997, 107, 10458–10469. [Google Scholar] [CrossRef]
18. Benoit, D.M. Fast vibrational self-consistent field calculations through a reduced mode–mode coupling scheme. J. Chem. Phys. 2004, 120, 562–573. [Google Scholar] [CrossRef] [PubMed]
19. Meier, P.; Neff, M.; Rauhut, G. Accurate Vibrational Frequencies of Borane and Its Isotopologues. J. Chem. Theory Comput. 2011, 7, 148–152. [Google Scholar] [CrossRef] [PubMed]
20. Rauhut, G. Configuration selection as a route towards efficient vibrational configuration interaction calculations. J. Chem. Phys. 2007, 127, 184109. [Google Scholar] [CrossRef] [PubMed]
21. Alis, O.; Rabitz, H. General Foundations of High Dimensional Model Representations. J. Math. Chem. 1999, 25, 197–233. [Google Scholar]
22. Beck, M.H.; Jäckle, A.; Worth, G.A.; Meyer, H.D. The multiconfiguration time-dependent Hartree (MCTDH) method: A highly efficient algorithm for propagating wavepackets. Phys. Rep. 2000, 324, 1–105. [Google Scholar] [CrossRef]
23. Manthe, U.; Meyer, H.-D.; Cederbaum, L. Wave-packet dynamics within the multiconfiguration Hartree framework: General aspects and application to NOCl. J. Chem. Phys. 1992, 97, 3199–3213. [Google Scholar] [CrossRef]
24. Manthe, U. The state averaged multiconfigurational time-dependent Hartree approach: Vibrational state and reaction rate calculations. J. Chem. Phys. 2008, 128, 064108. [Google Scholar] [CrossRef] [PubMed]
25. Meyer, H.-D.; le Quéré, F.; Léonard, C.; Gatti, F. Calculation and selective population of vibrational levels with the Multiconfiguration Time-Dependent Hartree (MCTDH) algorithm. Chem. Phys. 2006, 329, 179–192. [Google Scholar] [CrossRef]
26. Richter, F.; Gatti, F.; Léonard, C.; le Quéré, F.; Meyer, H.-D. Time-dependent wave packet study on trans-cis isomerization of HONO driven by an external field. J. Chem. Phys. 2007, 127, 164315. [Google Scholar] [CrossRef] [PubMed]
27. Doriol, L.J.; Gatti, F.; Iung, C.; Meyer, H.-D. Computation of vibrational energy levels and eigenstates of fluoroform using the multiconfiguration time-dependent Hartree method. J. Chem. Phys. 2008, 129, 224109. [Google Scholar] [CrossRef] [PubMed]
28. Wodraszka, R.; Manthe, U. Iterative Diagonalization in the Multiconfigurational Time-Dependent Hartree Approach: Ro-vibrational Eigenstates. J. Phys. Chem. A 2013, 117, 7246–7255. [Google Scholar] [CrossRef] [PubMed]
29. Vendrell, O.; Gatti, F.; Meyer, H.D. Full dimensional (15-dimensional) quantum-dynamical simulation of the protonated water dimer. II. Infrared spectrum and vibrational dynamics. J. Chem. Phys. 2007, 127, 184303. [Google Scholar] [CrossRef] [PubMed]
30. Light, J.C.; Hamilton, I.P.; Lill, J.V. Generalized discrete variable approximation in quantum mechanics. J. Chem. Phys. 1985, 82, 1400–1409. [Google Scholar] [CrossRef]
31. Bačić, Z.; Light, J.C. Theoretical Methods for Rovibrational States of Floppy Molecules. Annu. Rev. Phys. Chem. 1989, 40, 469–498. [Google Scholar] [CrossRef]
32. Wei, H.; Carrington, T. Discrete variable representations of complicated kinetic energy operators. J. Chem. Phys. 1994, 101, 1343–1360. [Google Scholar] [CrossRef]
33. Harris, D.O.; Engerholm, G.G.; Gwinn, W.D. Calculation of Matrix Elements for One-Dimensional Quantum-Mechanical Problems and the Application to Anharmonic Oscillators. J. Chem. Phys. 1965, 43, 1515–1517. [Google Scholar] [CrossRef]
34. Echave, J.; Clary, D.C. Potential optimized discrete variable representation. Chem. Phys. Lett. 1992, 190, 225–230. [Google Scholar]
35. Wei, H.; Carrington, T. The discrete variable representation for a triatomic Hamiltonian in bond length-bond angle coordinates. J. Chem. Phys. 1992, 97, 3029–3037. [Google Scholar] [CrossRef]
36. Carrington, T. Methods for calculating vibrational energy levels. Can. J. Chem. 2004, 82, 900–914. [Google Scholar] [CrossRef]
37. Császár, A.G.; Fábri, C.; Szidarovszky, T.; Mátyus, E.; Furtenbacher, T.; Czakó, G. The fourth age of quantum chemistry: Molecules in motion. Phys. Chem. Chem. Phys. 2012, 14, 1085–1106. [Google Scholar] [CrossRef] [PubMed]
38. Mátyus, E.; Czakó, G.; Sutcliffe, B.T.; Császár, A.G. Vibrational energy levels with arbitrary potentials using the Eckart-Watson Hamiltonians and the discrete variable representation. J. Chem. Phys. 2007, 127, 084102. [Google Scholar] [CrossRef] [PubMed]
39. Yu, H.; Muckerman, J.T. A General Variational Algorithm to Calculate Vibrational Energy Levels of Tetraatomic Molecules. J. Mol. Spectrosc. 2002, 214, 11–20. [Google Scholar] [CrossRef]
40. Bramley, M.J.; Carrington, T. A general discrete variable method to calculate vibrational energy levels of three-and four-atom molecules. J. Chem. Phys. 1993, 99, 8519–8541. [Google Scholar] [CrossRef]
41. Wall, M.R.; Neuhauser, D. Extraction, through filter-diagonalization, of general quantum eigenvalues or classical normal mode frequencies from a small number of residues or a short-time segment of a signal. I. Theory and application to a quantum-dynamics model. J. Chem. Phys. 1995, 102, 8011–8022. [Google Scholar] [CrossRef]
42. Mandelshtam, V.A.; Taylor, H.S. Harmonic inversion of time signals and its applications. J. Chem. Phys. 1997, 107, 6756–6769. [Google Scholar] [CrossRef]
43. Iung, C.; Leforestier, C. Direct calculation of overtones: Application to the CD3H molecule. J. Chem. Phys. 1995, 102, 8453–8461. [Google Scholar] [CrossRef]
44. Le Quéré, F.; Leforestier, C. Quantum exact three-dimensional study of the photodissociation of the ozone molecule. J. Chem. Phys. 1990, 92, 247–253. [Google Scholar] [CrossRef]
45. McNichols, A.; Carrington, T. Vibrational energy levels of formaldehyde calculated from an internal coordinate Hamiltonian using the Lanczos algorithm. Chem. Phys. Lett. 1993, 202, 464–470. [Google Scholar] [CrossRef]
46. Huang, S.-W.; Carrington, T. A comparison of filter diagonalisation methods with the Lanczos method for calculating vibrational energy levels. Chem. Phys. Lett. 1999, 312, 311–318. [Google Scholar] [CrossRef]
47. Lee, S.; Chung, J.S.; Felker, P.M.; Cacheiro, J.L.; Fernández, B.; Bondo Pedersen, T.; Koch, H. Computational and experimental investigation of intermolecular states and forces in the benzene–helium van der Waals complex. J. Chem. Phys. 2003, 119, 12956. [Google Scholar] [CrossRef]
48. Manthe, U.; Köppel, H. New method for calculating wave packet dynamics: Strongly coupled surfaces and the adiabatic basis. J. Chem. Phys. 1990, 93, 345–356. [Google Scholar] [CrossRef]
49. Bramley, M.J.; Tromp, J.W.; Carrington, T., Jr.; Corey, G.C. Efficient calculation of highly excited vibrational energy levels of floppy molecules: The band origins of H3+ up to 35,000 cm−1. J. Chem. Phys. 1994, 100, 6175–6194. [Google Scholar] [CrossRef]
50. Sutcliffe, B.T. Coordinate Systems and Transformations. In Handbook of Molecular Physics and Quantum Chemistry; Wilson, S., Ed.; Wiley: Chichester, UK, 2003; Volume 1, Part 6, Chapter 31; pp. 485–500. [Google Scholar]
51. Carter, S.; Handy, N.C. A variational method for the determination of the vibrational (J = 0) energy levels of acetylene, using a Hamiltonian in internal coordinates. Comput. Phys. Commun. 1988, 51, 49–58. [Google Scholar] [CrossRef]
52. Bramley, M.J.; Handy, N.C. Efficient calculation of rovibrational eigenstates of sequentially bonded four-atom molecules. J. Chem. Phys. 1993, 98, 1378. [Google Scholar] [CrossRef]
53. Yu, H.-G. An exact variational method to calculate vibrational energies of five atom molecules beyond the normal mode approach. J. Chem. Phys. 2002, 117, 2030. [Google Scholar] [CrossRef]
54. Bramley, M.J.; Carrington, T., Jr. Calculation of triatomic vibrational eigenstates: Product or contracted basis sets, Lanczos or conventional eigensolvers? What is the most efficient combination? J. Chem. Phys. 1994, 101, 8494. [Google Scholar] [CrossRef]
55. Gatti, F.; Iung, C.; Menou, M.; Justum, Y.; Nauts, A.; Chapuisat, X. Vector parametrization of the N-atom problem in quantum mechanics. I. Jacobi vectors. J. Chem. Phys. 1998, 108, 8804. [Google Scholar] [CrossRef]
56. Mladenović, M. Rovibrational Hamiltonians for general polyatomic molecules in spherical polar parametrization. I. Orthogonal representations. J. Chem. Phys. 2000, 112, 1070–1081. [Google Scholar] [CrossRef]
57. Chapuisat, X.; Belafhal, A.; Nauts, A. N-body quantum-mechanical Hamiltonians: Extrapotential terms. J. Mol. Spectrosc. 1991, 149, 274–304. [Google Scholar] [CrossRef]
58. Gatti, F.; Iung, C. Exact and constrained kinetic energy operators for polyatomic molecules: The polyspherical approach. Phys. Rep. 2009, 484, 1–69. [Google Scholar] [CrossRef]
59. Wang, X.-G.; Carrington, T., Jr. A contracted basis-Lanczos calculation of vibrational levels of methane: Solving the Schrödinger equation in nine dimensions. J. Chem. Phys. 2003, 119, 101–117. [Google Scholar] [CrossRef]
60. Tremblay, J.C.; Carrington, T., Jr. Calculating vibrational energies and wave functions of vinylidene using a contracted basis with a locally reorthogonalized coupled two-term Lanczos eigensolver. J. Chem. Phys. 2006, 125, 094311. [Google Scholar] [CrossRef] [PubMed]
61. Wang, X.-G.; Carrington, T., Jr. Vibrational energy levels of CH5+. J. Chem. Phys. 2008, 129, 234102. [Google Scholar] [CrossRef] [PubMed]
62. Lee, H.-S.; Light, J.C. Molecular vibrations: Iterative solution with energy selected bases. J. Chem. Phys. 2003, 118, 3458. [Google Scholar] [CrossRef]
63. Lee, H.-S.; Light, J.C. Iterative solutions with energy selected bases for highly excited vibrations of tetra-atomic molecules. J. Chem. Phys. 2004, 120, 4626. [Google Scholar] [CrossRef] [PubMed]
64. Wang, X.-G.; Carrington, T., Jr. Contracted basis Lanczos methods for computing numerically exact rovibrational levels of methane. J. Chem. Phys. 2004, 121, 2937–2954. [Google Scholar] [CrossRef] [PubMed]
65. Wang, X.-G.; Carrington, T. Using monomer vibrational wavefunctions as contracted basis functions to compute rovibrational levels of an H2O-atom complex in full dimensionality. J. Chem. Phys. 2017, 146, 104105:1–104105:15. [Google Scholar] [CrossRef] [PubMed]
66. Yu, H.-G. Two-layer Lanczos iteration approach to molecular spectroscopic calculation. J. Chem. Phys. 2002, 117, 8190–8196. [Google Scholar] [CrossRef]
67. Yu, H.-G. Full-dimensional quantum calculations of vibrational spectra of six-atom molecules. I. Theory and numerical results. J. Chem. Phys. 2004, 120, 2270–2284. [Google Scholar] [CrossRef] [PubMed]
68. Yu, H.-G. Converged quantum dynamics calculations of vibrational energies of CH4 and CH3D using an ab initio potential. J. Chem. Phys. 2004, 121, 6334. [Google Scholar] [CrossRef] [PubMed]
69. Yu, H.-G. A rigorous full-dimensional quantum dynamics calculation of the vibrational energies of H3O2−. J. Chem. Phys. 2006, 125, 204306. [Google Scholar] [CrossRef] [PubMed]
70. Yurchenko, S.N.; Thiel, W.; Jensen, P. Theoretical ROVibrational Energies (TROVE): A robust numerical approach to the calculation of rovibrational energies for polyatomic molecules. J. Mol. Spectrosc. 2007, 245, 126–140. [Google Scholar] [CrossRef]
71. Dawes, R.; Carrington, T. How to choose 1-D basis functions so that a very efficient multidimensional basis may be extracted from a direct product of the 1-D functions: Energy levels of coupled systems with as many as 16 coordinates. J. Chem. Phys. 2005, 122, 134101:1–134101:14. [Google Scholar] [CrossRef] [PubMed]
72. Colbert, D.T.; Miller, W.H. A novel discrete variable representation for quantum mechanical reactive scattering via the S-matrix Kohn method. J. Chem. Phys. 1992, 96, 1982. [Google Scholar] [CrossRef]
73. Shimshovitz, A.; Tannor, D.J. Phase-Space Approach to Solving the Time-Independent Schrödinger Equation. Phys. Rev. Lett. 2012, 109, 070402. [Google Scholar] [CrossRef] [PubMed]
74. Carter, S.; Handy, N.C. The variational method for the calculation of ro-vibrational energy levels. Comput. Phys. Rep. 1986, 5, 117–171. [Google Scholar] [CrossRef]
75. Halonen, L.; Noid, D.W.; Child, M.S. Local mode predictions for excited stretching vibrational states of HCCD and H12C13CH. J. Chem. Phys. 1983, 78, 2803. [Google Scholar] [CrossRef]
76. Halonen, L.; Child, M.S. Local mode theory for C3v molecules: CH3D, CHD3, SiH3D, and SiHD3. J. Chem. Phys. 1983, 79, 4355. [Google Scholar] [CrossRef]
77. Maynard, A.; Wyatt, R.E.; Iung, C. A quantum dynamical study of CH overtones in fluoroform. II. Eigenstate analysis of the vCH=1 and vCH=2 regions. J. Chem. Phys. 1997, 106, 9483. [Google Scholar] [CrossRef]
78. Maynard, A.T.; Wyatt, R.E.; Iung, C. A quantum dynamical study of CH overtones in fluoroform. I. A nine-dimensional ab initio surface, vibrational spectra and dynamics. J. Chem. Phys. 1995, 103, 8372. [Google Scholar] [CrossRef]
79. Iung, C.; Leforestier, C.; Wyatt, R.E. Wave operator and artificial intelligence contraction algorithms in quantum dynamics: Application to CD3H and C6H6. J. Chem. Phys. 1993, 98, 6722. [Google Scholar] [CrossRef]
80. Poirier, B. Using wavelets to extend quantum dynamics calculations to ten or more degrees of freedom. J. Theor. Comput. Chem. 2003, 2, 65. [Google Scholar] [CrossRef]
81. Poirier, B.; Salam, A. Quantum dynamics calculations using symmetrized, orthogonal Weyl-Heisenberg wavelets with a phase space truncation scheme. III. Representations and calculations. J. Chem. Phys. 2004, 121, 1704–1724. [Google Scholar] [CrossRef] [PubMed]
82. Poirier, B.; Salam, A. Quantum dynamics calculations using symmetrized, orthogonal Weyl-Heisenberg wavelets with a phase space truncation scheme. II. Construction and optimization. J. Chem. Phys. 2004, 121, 1690–1703. [Google Scholar] [CrossRef] [PubMed]
83. Wang, X.-G.; Carrington, T. The utility of constraining basis function indices when using the Lanczos algorithm to calculate vibrational energy levels. J. Phys. Chem. A 2001, 105, 2575–2581. [Google Scholar] [CrossRef]
84. Halverson, T.; Poirier, B. One Million Quantum States of Benzene. J. Phys. Chem. A 2015, 119, 12417–12433. [Google Scholar] [CrossRef] [PubMed]
85. Brown, J.; Carrington, T. Using an expanding nondirect product harmonic basis with an iterative eigensolver to compute vibrational energy levels with as many as seven atoms. J. Chem. Phys. 2016, 145, 144104:1–144104:10. [Google Scholar] [CrossRef] [PubMed]
86. Brown, J.; Carrington, T. Assessing the utility of phase-space-localized basis functions: Exploiting direct product structure and a new basis function selection procedure. J. Chem. Phys. 2016, 144, 244115:1–244115:10. [Google Scholar] [CrossRef] [PubMed]
87. Avila, G.; Carrington, T. Solving the vibrational Schrödinger equation using bases pruned to include strongly coupled functions and compatible quadratures. J. Chem. Phys. 2012, 137, 174108. [Google Scholar] [CrossRef] [PubMed]
88. Avila, G.; Carrington, T., Jr. Pruned bases that are compatible with iterative eigensolvers and general potentials: New results for CH3CN. Chem. Phys. 2017, 482, 3–8. [Google Scholar] [CrossRef]
89. Petras, K. Fast calculation of coefficients in the Smolyak algorithm. Numer. Algorithms 2001, 26, 93–109. [Google Scholar] [CrossRef]
90. Avila, G.; Carrington, T., Jr. Using nonproduct quadrature grids to solve the vibrational Schrödinger equation in 12D. J. Chem. Phys. 2011, 134, 054126. [Google Scholar] [CrossRef] [PubMed]
91. Leclerc, A.; Carrington, T. Calculating vibrational spectra with sum of product basis functions without storing full-dimensional vectors or matrices. J. Chem. Phys. 2014, 140, 174111:1–174111:13. [Google Scholar] [CrossRef] [PubMed]
92. Thomas, P.S.; Carrington, T., Jr. Using nested contractions and a hierarchical tensor format to compute vibrational spectra of molecules with seven atoms. J. Phys. Chem. A 2015, 119, 13074–13091. [Google Scholar] [CrossRef] [PubMed]
93. Thomas, P.S.; Carrington, T., Jr. An intertwined method for making low-rank, sum-of-product basis functions that makes it possible to compute vibrational spectra of molecules with more than 10 atoms. J. Chem. Phys. 2017, 146, 204110:1–204110:15. [Google Scholar] [CrossRef] [PubMed]
94. Zhang, T.; Golub, G.H. Rank-One Approximation to High Order Tensors. SIAM J. Matrix Anal. Appl. 2001, 23, 534–550. [Google Scholar] [CrossRef]
95. Beylkin, G.; Mohlenkamp, M.J. Numerical operator calculus in higher dimensions. Proc. Natl. Acad. Sci. USA 2002, 99, 10246–10251. [Google Scholar] [CrossRef] [PubMed]
96. Beylkin, G.; Mohlenkamp, M.J. Algorithms for Numerical Analysis in High Dimensions. SIAM J. Sci. Comput. 2005, 26, 2133–2159. [Google Scholar] [CrossRef]
97. Meyer, H.D.; Gatti, F.; Worth, G.A. (Eds.) Multidimensional Quantum Dynamics: MCTDH Theory and Applications; Wiley-VCH: Weinheim, Germany, 2009. [Google Scholar]
98. Pelaez, D.; Meyer, H.-D. The multigrid POTFIT (MGPF) method: Grid representations of potentials for quantum dynamics of large systems. J. Chem. Phys. 2013, 138, 014108. [Google Scholar] [CrossRef] [PubMed]
99. Manzhos, S.; Carrington, T. Using neural networks to represent potential surfaces as sums of products. J. Chem. Phys. 2006, 125, 194105. [Google Scholar] [CrossRef] [PubMed]
100. Manzhos, S.; Carrington, T. Using redundant coordinates to represent potential energy surfaces with lower-dimensional functions. J. Chem. Phys. 2007, 127, 014103. [Google Scholar] [CrossRef] [PubMed]
101. Manzhos, S.; Carrington, T. Using neural networks, optimized coordinates, and high-dimensional model representations to obtain a vinyl bromide potential surface. J. Chem. Phys. 2008, 129, 224104. [Google Scholar] [CrossRef] [PubMed]
102. Lehoucq, R.B.; Sorensen, D.C.; Yang, C. ARPACK Users’ Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods; SIAM: Philadelphia, PA, USA, 1998. Available online: http://www.caam.rice.edu/software/ARPACK (accessed on 21 December 2017). [Google Scholar]
103. Watson, J.K.G. Simplification of the molecular vibration-rotation hamiltonian. Mol. Phys. 1968, 15, 479–490. [Google Scholar] [CrossRef]
104. Jäckle, A.; Meyer, H.-D. Product representation of potential energy surfaces. J. Chem. Phys. 1996, 104, 7974. [Google Scholar] [CrossRef]
105. Chen, H.; Light, J.C. Vibrations of the carbon dioxide dimer. J. Chem. Phys. 2000, 112, 5070–5080. [Google Scholar] [CrossRef]
106. Li, H.; Roy, P.-N.; Le Roy, R.J. Analytic Morse/long-range potential energy surfaces and predicted infrared spectra for CO2-H2. J. Chem. Phys. 2010, 132, 214309. [Google Scholar] [CrossRef] [PubMed]
107. Thomas, P.S.; Carrington, T. Unpublished work, 2018. [Google Scholar]