Article

A New Hybrid Inversion Method for 2D Nuclear Magnetic Resonance Combining TSVD and Tikhonov Regularization

1 Department of Mathematics, University of Bologna, 40126 Bologna, Italy
2 Department of Civil, Chemical, Environmental, and Materials Engineering, University of Bologna, 40126 Bologna, Italy
* Author to whom correspondence should be addressed.
Submission received: 28 December 2020 / Revised: 21 January 2021 / Accepted: 25 January 2021 / Published: 28 January 2021
(This article belongs to the Special Issue Inverse Problems and Imaging)

Abstract:
This paper is concerned with the reconstruction of relaxation time distributions in Nuclear Magnetic Resonance (NMR) relaxometry. This is a large-scale, ill-posed inverse problem with many potential applications in biology, medicine, chemistry, and other disciplines. However, the large amount of data, the consequently long inversion times, and the high sensitivity of the solution to the value of the regularization parameter still represent a major obstacle to the applicability of NMR relaxometry. We present a method for two-dimensional data inversion (2DNMR) which combines Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization in order to speed up the inversion and reduce the sensitivity to the value of the regularization parameter. The Discrete Picard Condition is used to jointly select the SVD truncation and Tikhonov regularization parameters. We evaluate the performance of the proposed method on both simulated and real NMR measurements.

Graphical Abstract

1. Introduction

Nuclear Magnetic Resonance (NMR) relaxometry has become an important tool to study the molecular structure and properties of materials. A typical NMR experiment measures the relaxation process by which the nuclear system returns to its equilibrium state after the application of a short magnetic pulse parameterized by a predefined flip angle. The relaxation process is described by longitudinal and transversal dynamics, characterized by distributions of longitudinal ($T_1$) and transversal ($T_2$) relaxation times [1]. The computation of the relaxation time distribution requires the numerical solution of a Fredholm integral equation with separable Laplace-type kernels. In particular, we focus on the inversion of 2D NMR relaxation data, acquired using a conventional Inversion-Recovery (IR) experiment detected by a Carr-Purcell-Meiboom-Gill (CPMG) pulse train [2]. Then, the evolution time $t_1$ in IR and the evolution time $t_2$ in CPMG are two independent variables, and the 2D NMR relaxation data $S(t_1,t_2)$ can be expressed as:

$$S(t_1,t_2) = \int_0^{\infty}\!\!\int_0^{\infty} k_1(t_1,T_1)\, k_2(t_2,T_2)\, F(T_1,T_2)\, dT_1\, dT_2 + e(t_1,t_2) \tag{1}$$

where the unknown $F(T_1,T_2)$ is the distribution of $T_1$ and $T_2$ relaxation times, $e(t_1,t_2)$ represents Gaussian additive noise, and $k_1(t_1,T_1)$, $k_2(t_2,T_2)$ are Laplace-type kernels given by:

$$k_1(t_1,T_1) = 1 - 2\exp(-t_1/T_1), \qquad k_2(t_2,T_2) = \exp(-t_2/T_2) \tag{2}$$
whose singular values quickly decay to zero. Since the unknown function $F$ corresponds to the distribution of the values of the relaxation times $T_1$ and $T_2$, we can assume $F(T_1,T_2) \ge 0$ for all $T_1$ and $T_2$. Experimental data are usually collected at discrete times; therefore, by considering $M_1 \times M_2$ samples of the times $t_1$ and $t_2$, and $N_1 \times N_2$ samples of the relaxation times $T_1$ and $T_2$, problem (1) is discretized as:

$$Kf + e = s, \qquad K = K_2 \otimes K_1 \tag{3}$$

where $K_1 \in \mathbb{R}^{M_1 \times N_1}$, $K_2 \in \mathbb{R}^{M_2 \times N_2}$ are the discretized exponential kernels, $s \in \mathbb{R}^M$, $M = M_1 \cdot M_2$, is the vector of the measured noisy data (which can have negative values), $f \in \mathbb{R}^N$, $N = N_1 \cdot N_2$, is the vector reordering of the unknown distribution, and $e \in \mathbb{R}^M$ is the vector of the discretized noise.
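To make the discretization concrete, the following sketch builds the model (3) on toy grids (all sizes, time grids, and the test distribution are our illustrative assumptions, far smaller than realistic acquisitions) and verifies that the Kronecker-structured matrix $K$ never needs to be formed to evaluate the forward model:

```python
import numpy as np

# Sketch of the discretized model (3): s = K f + e with K = K2 (x) K1.
# All sizes, grids and the toy distribution F are illustrative assumptions.
M1, M2, N1, N2 = 30, 200, 20, 20
t1 = np.logspace(-3, np.log10(3.0), M1)        # IR evolution times
t2 = np.linspace(1e-3, 1.0, M2)                # CPMG echo times
T1 = np.logspace(-3, 1, N1)                    # relaxation-time grids
T2 = np.logspace(-3, 1, N2)

K1 = 1.0 - 2.0 * np.exp(-np.outer(t1, 1.0 / T1))   # IR kernel k1(t1, T1)
K2 = np.exp(-np.outer(t2, 1.0 / T2))               # CPMG kernel k2(t2, T2)

# Toy non-negative distribution and noiseless data S = K1 F K2^T.
F = np.zeros((N1, N2))
F[10, 12], F[5, 5] = 1.0, 0.5
S = K1 @ F @ K2.T

# K = K2 (x) K1 never needs to be formed: K vec(F) = vec(K1 F K2^T),
# where vec() stacks columns (column-major order, as in Matlab).
f = F.flatten(order="F")
assert np.allclose(np.kron(K2, K1) @ f, S.flatten(order="F"))
```

The last assertion is exactly why the separable structure matters: the $600 \times 400$ Kronecker matrix is replaced by two small matrix products.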
The inversion of (3) is severely ill-conditioned, and several direct and iterative methods have been extensively discussed in both the NMR [3,4,5] and mathematical [6,7,8] literature. Direct regularization includes Truncated Singular Value Decomposition (TSVD) and Tikhonov methods. TSVD replaces $K$ by a matrix of reduced rank, thus avoiding noise amplification in the inversion procedure. This is usually obtained by removing the singular values smaller than a given tolerance $\tau > 0$. Tikhonov-like regularization substitutes (3) with a more stable optimization problem whose objective function includes a data-fitting term and a regularization term imposing some a priori knowledge about the solution. In NMR, Tikhonov-like regularization is usually employed. It requires the solution of a non-negative least squares (NNLS) problem of the form
$$\min_{f \ge 0} \|Kf - s\|^2 + \alpha \|f\|^2 \tag{4}$$
where $\alpha$ is the regularization parameter. In the sequel, $\|\cdot\|$ will denote the Euclidean norm of a vector or the Frobenius norm of a matrix. It is well known that $\alpha$ controls the smoothness of $f$ and makes the inversion less ill-conditioned but, if wrongly chosen, it may bias the result. For this reason, many parameter selection techniques have been proposed, such as Generalized Cross Validation (GCV), the L-curve and U-curve methods, and methods based on the Discrepancy Principle (DP) (see [6] and references therein). In NMR, the Butler, Reed, and Dawson (BRD) method [9] or the S-curve method [10] is usually applied for the selection of the regularization parameter.
Another critical issue when solving NMR problems is that the amount of data collected for one measurement can be very large, with typical sizes $M_1 = 30$–$150$ and $M_2 = 1000$–$50{,}000$. If the grid used for the estimated distribution has $100 \times 100$ elements, then the matrix $K$ can have up to $5 \cdot 10^9$ elements and the computational cost of the data inversion may be extremely high.
The separable kernel structure of $K$ has been widely exploited in the NMR literature to perform data compression using independent SVDs of $K_1$ and $K_2$ and to reduce the computational complexity. The method proposed in [4], in the sequel referred to as the Venkataramanan–Song–Hürlimann (VSH) algorithm, is considered a reference method for NMR data inversion and is widely used in both academia and industry. Given the TSVDs of $K_1$ and $K_2$:
$$K_{k_i,i} = \bar{U}_{k_i,i} \bar{\Sigma}_{k_i,i} \bar{V}_{k_i,i}^T, \qquad i = 1, 2 \tag{5}$$
where $k_i$ is the number of retained singular values, the VSH method approximates the original problem (4) with the better conditioned one

$$\min_{f \ge 0} \|K_{k_1,k_2} f - s\|^2 + \alpha \|f\|^2 \tag{6}$$

where $K_{k_1,k_2} = K_{k_2,2} \otimes K_{k_1,1}$. The data $s$ is then projected onto the column subspace of $K_{k_1,k_2}$ in order to obtain an NNLS problem with compressed data and a kernel of significantly smaller dimensions. A method adapted from the BRD algorithm is then used to solve the optimization problem.
Other methods have been proposed in the literature for the inversion of 2DNMR data. The algorithm of Chouzenoux et al. [11] uses a primal-dual optimization strategy coupled with an iterative minimization in order to jointly account for the non-negativity constraint in (4) and introduce a regularization term. A preconditioning strategy is used to reduce the computational cost of the algorithm. In [12], an algorithm for 2DNMR inversion from limited data is presented. This algorithm uses compressive sensing-like techniques to fill in missing measurements, and the resulting regularized minimization problem is solved using the method of [4]. The Discrete Picard Condition (DPC) [13] is used to choose the truncation parameters. The 2DUPEN algorithm [14] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. Without any a priori information about the noise norm, 2DUPEN automatically computes locally adapted regularization parameters and the distribution of the unknown NMR parameters by using variable smoothing. In [15], an improvement of 2DUPEN (I2DUPEN), obtained by applying data windowing and SVD filtering, is presented. The SVD filtering is applied to reduce the problem size, as in [4].
Since the matrices $K_1$ and $K_2$ typically have small sizes, it is possible to compute the exact SVD of $K$ by exploiting its Kronecker product structure. In fact, if $K = U \Sigma V^T$ is the SVD of $K$, then

$$U = \bar{U}_2 \otimes \bar{U}_1, \qquad \Sigma = \bar{\Sigma}_2 \otimes \bar{\Sigma}_1, \qquad V = \bar{V}_2 \otimes \bar{V}_1. \tag{7}$$
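The factorization (7) is easy to check numerically; in the sketch below, small random matrices merely stand in for $K_1$ and $K_2$ (an assumption for brevity, not NMR kernels):

```python
import numpy as np

# Check of (7): the SVD factors of K = K2 (x) K1 are the Kronecker products
# of the factors of K1 and K2, up to reordering of the singular values.
rng = np.random.default_rng(1)
K1 = rng.standard_normal((6, 4))
K2 = rng.standard_normal((5, 3))

U1, s1, V1t = np.linalg.svd(K1, full_matrices=False)
U2, s2, V2t = np.linalg.svd(K2, full_matrices=False)

K = np.kron(K2, K1)

# The Kronecker factors reproduce K exactly ...
K_rebuilt = np.kron(U2, U1) @ np.diag(np.kron(s2, s1)) @ np.kron(V2t, V1t)
assert np.allclose(K_rebuilt, K)

# ... and the products sigma_i(K2)*sigma_j(K1), sorted decreasingly,
# are exactly the singular values of K.
assert np.allclose(np.sort(np.kron(s2, s1))[::-1],
                   np.linalg.svd(K, compute_uv=False))
```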
However, the TSVD of $K$ cannot be computed as a Kronecker product. For this reason, different approximations of the TSVD can be found in the literature, such as the matrix $K_{k_1,k_2}$ of the VSH method or the randomized SVD (RSVD) [16,17], where randomized algorithms are used to approximate the dominant singular values of $K$. In this work, we show that it is possible to compute the exact TSVD of $K$ efficiently, avoiding approximations and suitably using properties of the Kronecker product structure. Let $K_k = U_k \Sigma_k V_k^T$ be the TSVD of $K$, where $k$ is the number of retained singular values; we propose to solve the following Tikhonov-like problem:

$$\min_{f \ge 0} \|K_k f - s\|^2 + \alpha \|f\|^2. \tag{8}$$
Moreover, we propose and test an automatic rule, based on the DPC, for the selection of both the TSVD truncation index and the Tikhonov parameter. Finally, we analyze the filtering properties of our method compared to the TSVD, Tikhonov and VSH methods. Our approach can therefore be considered a hybrid inversion method that combines TSVD and Tikhonov regularization: Tikhonov regularization prevents discontinuities and artificial peaks, while TSVD acts as a preconditioning strategy and reduces the computational cost. Other approaches combining randomized TSVD and Tikhonov regularization do exist in the literature [16,17]. Such techniques apply randomized algorithms to reduce large-scale problems to much smaller ones and find a regularized solution by applying some regularization method to the small-scale problems in combination with some parameter choice rule. The key point of these randomized algorithms is that the SVD of the original linear operator $K$ is never computed.
Concerning the solution of (8), in the present paper we apply the Newton Projection (NP) method [18], where the Conjugate Gradient (CG) method is used to solve the inner linear systems. This technique guarantees the desired high accuracy and has been successfully applied in the NMR context [14,15]. Gradient projection methods with acceleration techniques, such as those discussed in [19], required more computational effort to solve (8) with the required accuracy. However, the typical sparse structure of the solution, represented by nonzero peaks over flat regions, could be taken into account in solving (8), for example by adding $L_1$ penalties [20].
Summarizing, the contribution of this paper is twofold. First, the paper shows that relaxation time distributions of improved quality can be obtained by using $K_k$ instead of the separate TSVDs of $K_1$ and $K_2$, without a significant increase in the computational complexity. In fact, the computational cost of our approach is slightly greater than that of VSH and considerably smaller than that of RSVD. Second, the paper describes an efficient method for jointly selecting the SVD truncation index $k$ and the Tikhonov regularization parameter $\alpha$ and for solving the NMR data inversion problem.
The remainder of this paper is organized as follows. In Section 2 we introduce our method and, in Section 3, we analyze its filtering properties. In Section 4, we provide implementation details and discuss numerical results. Some conclusions are reported in Section 5. Finally, the VSH method is described in Appendix C.

2. The Proposed Hybrid Algorithm

In this section we illustrate the details of our approach and formalize the proposed hybrid method as Algorithm 1. We first propose to approximate problem (4) with

$$\min_{f \ge 0} \|K_k f - s\|^2 + \alpha \|f\|^2 \tag{9}$$

where $K_k = U_k \Sigma_k V_k^T$ is the TSVD of $K$. By projecting the data vector $s$ onto the column subspace of $K_k$ and by neglecting constant terms, we obtain the equivalent formulation:

$$\min_{f \ge 0} \|U_k U_k^T K_k f - U_k U_k^T s\|^2 + \alpha \|f\|^2 \tag{10}$$

which can be written as:

$$\min_{f \ge 0} \|\Sigma_k V_k^T f - U_k^T s\|^2 + \alpha \|f\|^2 \tag{11}$$

with compressed data $U_k^T s \in \mathbb{R}^k$ and kernel $\Sigma_k V_k^T$. We observe that the solution of (11) lies in a subspace of the column space of $V_k$. In the following paragraphs we develop the solution steps for problem (11) and the rule for choosing the TSVD truncation index $k$ and the Tikhonov parameter $\alpha$. Finally, we illustrate the filtering properties of our new method and compare it to the Tikhonov, TSVD and VSH methods.

2.1. Solution of the Minimization Problem

We use the Newton Projection (NP) method [18] to solve the constrained minimization problem (11); for a detailed description of NP we refer the reader to Appendix B. We apply the Conjugate Gradient (CG) method to solve the inner linear systems. For this purpose, it is necessary to perform several matrix-vector products involving the matrices $U_k$ and $V_k$. Although $U_k$ and $V_k$ are parts of the matrices $U$ and $V$, they do not inherit the Kronecker product structure. However, we can show that these matrix-vector products can be performed efficiently by exploiting the structure of $U$ and $V$.
Given a vector $x \in \mathbb{R}^M$, let $\mathrm{zero}(x,k)$ be the vector obtained by zeroing the last $M-k$ components of $x$:

$$\mathrm{zero}(x,k) = (x_1, x_2, \ldots, x_k, 0, \ldots, 0)^T. \tag{12}$$

Using the Kronecker product property

$$(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^T) \tag{13}$$

and Equation (7), we have

$$U_k^T s = \mathrm{zero}(U^T s, k) = \mathrm{zero}(\mathrm{vec}(\bar{U}_1^T S \bar{U}_2), k) \tag{14}$$

and

$$V_k^T f = \mathrm{zero}(V^T f, k) = \mathrm{zero}(\mathrm{vec}(\bar{V}_1^T F \bar{V}_2), k) \tag{15}$$

where $S \in \mathbb{R}^{M_1 \times M_2}$ is the matrix of the measured data such that $s = \mathrm{vec}(S)$ and $F \in \mathbb{R}^{N_1 \times N_2}$ represents the computed distribution such that $f = \mathrm{vec}(F)$. Thus, matrix-vector products involving $U_k$ and $V_k$ can be efficiently performed by using the Kronecker product structure of $U$ and $V$ and by setting to zero the last components of the resulting vector.
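In numpy, for example, these products can be sketched as follows (small random sizes are an assumption for brevity; the permutation that sorts the singular values decreasingly is made explicit here, whereas the text absorbs it into the column ordering of $U$):

```python
import numpy as np

# Compute Uk^T s through the Kronecker structure, as in (14): form
# vec(U1^T S U2), reorder so the singular values decrease, keep the
# first k entries. The full U = U2 (x) U1 is built here only to check
# the result; in practice it is never formed.
rng = np.random.default_rng(2)
K1 = rng.standard_normal((6, 4))
K2 = rng.standard_normal((5, 3))
U1, s1, _ = np.linalg.svd(K1, full_matrices=False)
U2, s2, _ = np.linalg.svd(K2, full_matrices=False)

S = rng.standard_normal((6, 5))            # data matrix, s = vec(S)
s = S.flatten(order="F")

sigma = np.kron(s2, s1)                    # singular values, Kronecker order
perm = np.argsort(sigma)[::-1]             # decreasing order
k = 7

Uk_t_s = (U1.T @ S @ U2).flatten(order="F")[perm][:k]

U_sorted = np.kron(U2, U1)[:, perm]        # explicit check only
assert np.allclose(U_sorted[:, :k].T @ s, Uk_t_s)
```

The product $V_k^T f$ in (15) is computed analogously from $\bar{V}_1^T F \bar{V}_2$.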

2.2. Choice of the Parameters k and α

The quality of the restored distribution $f$ strongly depends on the values of both the truncation parameter $k$ and the Tikhonov parameter $\alpha$. We propose to choose both values by using the Discrete Picard Condition (DPC) [6]. For ill-conditioned inverse problems, this condition guarantees that a good solution is obtained by keeping, in the SVD expansion of the solution $f = \sum_i \frac{u_i^T s}{\sigma_i} v_i$, only the coefficients $u_i^T s / \sigma_i$ such that the Fourier coefficients $|u_i^T s|$ decrease on average faster than the singular values $\sigma_i$. A truncation parameter $k$ satisfying the DPC can be selected by visual inspection of the so-called Picard plot (i.e., a plot of the quantities $|u_i^T s|$ and $\sigma_i$ versus $i$). Alternatively, an index $k$ satisfying the DPC can be selected by automatic techniques such as those described in [21,22].
Once the value for $k$ has been fixed by using the DPC, the value for $\alpha$ is set as

$$\alpha = \sigma_k^2 \tag{16}$$

since, as explained in [6], the value $\alpha = \sigma_i^2$ represents the breakpoint at which the $i$th filter factor changes nature for the Tikhonov method. This choice is therefore motivated by the low-pass filtering properties of both the TSVD and Tikhonov methods.
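One possible automatic version of this selection is sketched below; the moving-average detection of where the Picard ratios stop decreasing is our own illustrative heuristic, not the specific technique of [21,22]:

```python
import numpy as np

def choose_k_and_alpha(U, sigma, s, window=5):
    """Pick the truncation index k where the smoothed Picard ratios
    |u_i^T s| / sigma_i stop decreasing, then set alpha = sigma_k^2.
    The smoothing heuristic is illustrative, not the paper's rule."""
    ratios = np.abs(U.T @ s) / sigma
    # moving average to suppress oscillations in the raw ratios
    smooth = np.convolve(ratios, np.ones(window) / window, mode="valid")
    # truncate at the first index where the smoothed ratio starts to grow
    growing = np.where(np.diff(smooth) > 0)[0]
    k = int(growing[0]) + window if growing.size else len(sigma)
    return k, sigma[k - 1] ** 2
```

On a synthetic Picard plot whose coefficients decay faster than the singular values down to a noise floor, this returns a cutoff close to the level where the noise floor takes over.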
Summarizing the previous considerations, we outline the steps of the proposed hybrid method in Algorithm 1.

Algorithm 1: Hybrid algorithm
1. Compute the SVDs of $K_1$ and $K_2$.
2. Compute $\Sigma = \bar{\Sigma}_2 \otimes \bar{\Sigma}_1$.
3. Choose $k$ by using the DPC.
4. Set $\alpha = \sigma_k^2$.
5. Apply the Newton Projection method to solve the constrained problem
$$\min_{f \ge 0} \|\Sigma_k V_k^T f - U_k^T s\|^2 + \alpha \|f\|^2.$$
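Putting the steps together, a compact sketch of Algorithm 1 on a small synthetic problem follows. For brevity, a projected gradient iteration replaces the Newton Projection solver of step 5, the truncation level $k$ is passed in directly rather than chosen by the DPC, and $V_k$ is formed explicitly (affordable only at demo sizes); all three simplifications are ours.

```python
import numpy as np

def hybrid_inversion(K1, K2, S, k, max_iter=5000):
    # Steps 1-2: SVDs of the small kernels, sorted Kronecker singular values.
    U1, s1, V1t = np.linalg.svd(K1, full_matrices=False)
    U2, s2, V2t = np.linalg.svd(K2, full_matrices=False)
    sigma = np.kron(s2, s1)
    perm = np.argsort(sigma)[::-1]
    sigma_k = sigma[perm][:k]

    # Step 4: alpha = sigma_k^2 (step 3, the DPC choice of k, is skipped here).
    alpha = sigma_k[-1] ** 2

    # Compressed data Uk^T s and kernel Sigma_k Vk^T of problem (11).
    b = (U1.T @ S @ U2).flatten(order="F")[perm][:k]
    V = np.kron(V2t.T, V1t.T)[:, perm][:, :k]     # explicit Vk: demo sizes only
    A = sigma_k[:, None] * V.T

    # Step 5 (simplified): projected gradient instead of Newton Projection.
    f = np.zeros(V.shape[0])
    L = 2.0 * (sigma_k[0] ** 2 + alpha)           # gradient Lipschitz constant
    for _ in range(max_iter):
        grad = 2.0 * (A.T @ (A @ f - b) + alpha * f)
        f = np.maximum(f - grad / L, 0.0)         # project onto f >= 0
    return f.reshape(K1.shape[1], K2.shape[1], order="F")
```

On toy kernels built as in (2), the returned distribution is non-negative and its data misfit is a small fraction of the data norm.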

3. Analysis of the Filtering Properties

In this section we prove that our hybrid algorithm acts as a low-pass filtering method, similarly to the TSVD and Tikhonov methods, and we compare its filtering properties to those of VSH. Let us first report some basic properties of the solution of the NNLS problem:

$$\min_{f \ge 0} \|Kf - s\|^2. \tag{17}$$

If $f$ is a solution of (17), then it satisfies [23]:

$$[K^T K f - K^T s]_i = 0, \quad \text{for all } i \text{ such that } f_i > 0. \tag{18}$$

Having defined the active set of $f$ by

$$\mathcal{A} = \{ i \mid f_i = 0 \}, \tag{19}$$

we can define the diagonal matrix $D$ as

$$[D]_{ii} = \begin{cases} 1, & i \notin \mathcal{A}; \\ 0, & i \in \mathcal{A}. \end{cases} \tag{20}$$

Then, using the relation $Df = f$, we obtain from (18)

$$D K^T K D f - D K^T s = 0 \tag{21}$$

which are the normal equations of the following LS problem

$$\min_{f} \|KDf - s\|^2. \tag{22}$$
The following result characterizes the minimum norm solution of (22).

Theorem 1.
Let $K \in \mathbb{R}^{M \times N}$, $s \in \mathbb{R}^M$ and let $D$ be the diagonal matrix defined in (20), where $\mathcal{A}$ is the active set of a local solution of (17). Then, the minimum norm solution of the least squares problem (22) has the following expression:

$$f = \sum_{i=1}^{N} \frac{u_i^T s}{\sigma_i} D v_i. \tag{23}$$
To prove this result, we first need the following lemma, whose proof is given in Appendix A for the reader's convenience.

Lemma 1.
Let $K = U \Sigma V^T$ be the SVD of $K \in \mathbb{R}^{M \times N}$ and let $D \in \mathbb{R}^{N \times N}$ be the diagonal matrix defined in (20). Then, the pseudo-inverse of $U \Sigma V^T D \in \mathbb{R}^{M \times N}$ is given by

$$(U \Sigma V^T D)^{\dagger} = D V \Sigma^{\dagger} U^T. \tag{24}$$
We are now in a position to prove Theorem 1.

Proof.
(Theorem 1) Using the pseudo-inverse notation, we can write the minimum norm solution of the LS problem (22) as:

$$f = (KD)^{\dagger} s = (U \Sigma V^T D)^{\dagger} s$$

and, using (24), we have:

$$f = D V \Sigma^{\dagger} U^T s,$$

hence

$$f = D \sum_{i=1}^{N} \frac{u_i^T s}{\sigma_i} v_i = \sum_{i=1}^{N} \frac{u_i^T s}{\sigma_i} D v_i. \qquad \square$$

Theorem 1 shows that the solution of the constrained problem (17) can be written in terms of the SVD of the matrix $K$ as follows:

$$f = \sum_{i=1}^{N} \frac{u_i^T s}{\sigma_i} \hat{v}_i \quad \text{where} \quad \hat{v}_i = D v_i.$$
Obviously, the vectors $\hat{v}_i$ may be linearly dependent if $f$ lies in a subspace of $\mathbb{R}^N$. It is well known that the TSVD and Tikhonov methods both compute a filtered solution $f^{filt}$ of problem (22), with different filter functions $\phi_i(\cdot)$ [6]. Using the result of Theorem 1, we can express the nonnegatively constrained TSVD solution as

$$f^{TSVD} = \sum_{i=1}^{N} \phi_i^{TSVD}(k)\, \frac{u_i^T s}{\sigma_i}\, \hat{v}_i, \qquad \hat{v}_i = D v_i \tag{25}$$

where $D$ is the diagonal matrix (20) and the filter factors are

$$\phi_i^{TSVD}(k) = \begin{cases} 1, & \text{if } i \le k; \\ 0, & \text{otherwise}; \end{cases} \qquad \text{with } k \in \{1, \ldots, N\}. \tag{26}$$

Analogously, the nonnegative Tikhonov solution is given by

$$f^{TIKH} = \sum_{i=1}^{N} \phi_i^{TIKH}(\alpha)\, \frac{u_i^T s}{\sigma_i}\, \hat{v}_i, \qquad \hat{v}_i = D^{TIKH} v_i \tag{27}$$

where $D^{TIKH}$ is the diagonal matrix (20) defined with respect to the active set $\mathcal{A}$ of a local solution of (4) and the filter factors are

$$\phi_i^{TIKH}(\alpha) = \frac{\sigma_i^2}{\sigma_i^2 + \alpha}, \qquad \text{with } \alpha \in \mathbb{R}^+. \tag{28}$$
Equations (25) and (27) define a low-pass filter with vectors $\hat{v}_i$. The solution provided by the hybrid method is still a filtered solution, whose filter factors depend on two parameters; i.e.,

$$f^{HYBRID} = \sum_{i=1}^{N} \phi_i^{HYBRID}(\alpha, k)\, \frac{u_i^T s}{\sigma_i}\, \hat{v}_i, \qquad \hat{v}_i = D^{HYBRID} v_i \tag{29}$$

where $D^{HYBRID}$ is the diagonal matrix (20) defined with respect to the active set $\mathcal{A}$ of a local solution of (11) and the filter factors are

$$\phi_i^{HYBRID}(\alpha, k) = \begin{cases} \dfrac{\sigma_i^2}{\sigma_i^2 + \alpha}, & \text{if } i \le k; \\ 0, & \text{otherwise}; \end{cases} \tag{30}$$

with $k \in \{1, \ldots, N\}$ and $\alpha \in \mathbb{R}^+$. By properly choosing the parameters $k$ and $\alpha$, the filter factors for low-frequency components can be set close to one, while the filter factors for high-frequency components can be set close to zero. Figure 1 depicts the behaviour of the filter factors obtained for the value $\alpha = 2.5 \cdot 10^5$ versus the singular values $\sigma_i$. The $\sigma_i$ plotted on the abscissa are the singular values of the matrix $K$ of the experiment with simulated NMR data (see Section 4.3). We can observe that for singular values $\sigma_i$ larger than $\sqrt{\alpha} = 500$ the filter factors $\phi_i^{HYBRID}$ behave as $\phi_i^{TIKH}$ (black dash-dotted line), while for smaller values they behave as $\phi_i^{TSVD}$ (red dashed line).
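The three filter families can be compared directly on a synthetic, decreasing sequence of singular values (the spectrum and the parameters below are illustrative assumptions, not those of Figure 1):

```python
import numpy as np

# TSVD, Tikhonov and hybrid filter factors of Section 3, evaluated on an
# assumed synthetic spectrum of decreasing singular values.
sigma = np.logspace(4, -4, 100)            # decreasing singular values
k = 60                                     # illustrative truncation index
alpha = sigma[k - 1] ** 2                  # alpha = sigma_k^2, as in Section 2.2

idx = np.arange(1, sigma.size + 1)
phi_tsvd = (idx <= k).astype(float)                # sharp cutoff
phi_tikh = sigma**2 / (sigma**2 + alpha)           # smooth roll-off
phi_hybrid = np.where(idx <= k, phi_tikh, 0.0)     # Tikhonov above, zero below

assert np.allclose(phi_hybrid[:k], phi_tikh[:k])   # Tikhonov-like up to k
assert not phi_hybrid[k:].any()                    # TSVD-like beyond k
```

For singular values well above $\sqrt{\alpha}$ the hybrid factors are close to one, reproducing the behaviour described for Figure 1.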
We observe that the VSH solution can also be expressed in terms of the SVD of $K$ (see the algorithm details in Appendix C). We define the index subset $I(k_1,k_2)$ containing the indices of the singular values of $K$ which correspond to singular values of $K_{k_2,2} \otimes K_{k_1,1}$:

$$I(k_1,k_2) = \{ i \mid \sigma_i \in \mathrm{diag}(\bar{\Sigma}_{k_2,2} \otimes \bar{\Sigma}_{k_1,1}) \}. \tag{31}$$

Thus, we have:

$$f^{VSH} = \sum_{i \in I(k_1,k_2)} \phi_i^{VSH}(\alpha, k_1, k_2)\, \frac{u_i^T s}{\sigma_i}\, \hat{v}_i, \qquad \hat{v}_i = D^{VSH} v_i \tag{32}$$

where $D^{VSH}$ is the diagonal matrix (20) defined with respect to the active set $\mathcal{A}$ of a local solution of (6) and the filter factors are

$$\phi_i^{VSH}(\alpha, k_1, k_2) = \begin{cases} \dfrac{\sigma_i^2}{\sigma_i^2 + \alpha}, & i \in I(k_1,k_2); \\ 0, & \text{otherwise}; \end{cases} \tag{33}$$

with $k_1 \in \{1, \ldots, N_1\}$, $k_2 \in \{1, \ldots, N_2\}$ and $\alpha \in \mathbb{R}^+$. However, it is almost impossible to determine values of the truncation parameters $k_1$ and $k_2$ such that the vectors $\hat{v}_i$, $i \in I(k_1,k_2)$, correspond only to low-frequency components carrying meaningful information about the unknown solution. An explanatory example is discussed in Section 4. For this reason, the VSH method cannot be considered a low-pass filtering method.

4. Numerical Results

In this section, we present results obtained from numerical tests with both simulated and real 2DNMR measurements. We compare the proposed hybrid method with the VSH method, which is a reference method for 2DNMR data inversion. Moreover, in the case of real data, the UPEN method is considered as a benchmark. The methods have been implemented in Matlab R2018b on Windows 10 (64-bit edition), running on an Intel Core i5 3470 CPU at 3.2 GHz.

The relative error, computed as $\|F_{ex} - F\| / \|F_{ex}\|$, is used to measure the effectiveness of the compared methods, while the execution time in seconds is used to evaluate their efficiency (here, $F_{ex}$ and $F$ denote the exact and restored distributions, respectively). The reported execution times are means over ten runs. Since both methods are iterative, we give below some details about the implemented stopping criteria and parameters.
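For reference, the error metric used throughout this section is simply (a minimal sketch):

```python
import numpy as np

def relative_error(F_ex, F):
    """Relative error ||F_ex - F|| / ||F_ex|| in the Frobenius norm,
    with F_ex the exact and F the restored distribution."""
    return np.linalg.norm(F_ex - F) / np.linalg.norm(F_ex)
```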

4.1. Hybrid Method

The NP method is used for the solution of problem (11); it is described in detail in Appendix B. The constant distribution with all values equal to one is chosen as the initial iterate. The NP iterations are stopped on the basis of the relative decrease of the objective function $F(f)$ of (11); i.e., the method is arrested when an iterate $f^{(k)}$ has been determined such that

$$\frac{|F(f^{(k)}) - F(f^{(k-1)})|}{F(f^{(k)})} < \mathrm{Tol}_{NP}.$$
The inner linear system for the search direction computation is solved by the CG method with relative tolerance $\mathrm{Tol}_{CG}$. The values $\mathrm{Tol}_{NP} = 10^{-3}$ and $\mathrm{Tol}_{CG} = 10^{-7}$ have been fixed in all the numerical experiments. A maximum of $\mathrm{Kmax}_{NP} = 500$ and $\mathrm{Kmax}_{CG} = 10^4$ iterations have been allowed for NP and CG, respectively.
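The stopping rule above can be wrapped in a small driver; `objective` and `step` are placeholders for the NP objective of (11) and one NP update, which we do not reproduce here.

```python
TOL_NP, KMAX_NP = 1e-3, 500    # tolerances quoted above

def iterate_until_converged(objective, step, f0):
    """Run `step` until the relative decrease of `objective` falls
    below TOL_NP, or KMAX_NP iterations are reached."""
    f_old, F_old = f0, objective(f0)
    for _ in range(KMAX_NP):
        f_new = step(f_old)
        F_new = objective(f_new)
        if abs(F_new - F_old) / abs(F_new) < TOL_NP:
            return f_new
        f_old, F_old = f_new, F_new
    return f_old
```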

4.2. VSH Method

The VSH method consists of three main steps. In the first step, the data are compressed using the SVDs of the kernels; in the second step, the constrained optimization problem is transformed into an unconstrained one in a lower dimensional space, by using a method adapted from the BRD algorithm. In the third step, the optimal value of $\alpha$ is selected by iteratively finding a solution whose fit error is similar to the known noise variance. The VSH method is described in detail in Appendix C. Here we just report the values of the parameters required by our Matlab implementation of the VSH method.

Each iteration of the VSH method needs, for a fixed value of $\alpha$, the solution of a reduced-size optimization problem obtained from (6) by projecting the data $s$ onto the column subspace of $K_{k_1,k_2}$. The Newton method is used for the solution of the subproblems; its iterations are stopped on the basis of the relative decrease of the objective function. The values $\mathrm{Tol}_{N} = 10^{-3}$ and $\mathrm{Kmax}_{N} = 500$ have been fixed. The values $\mathrm{Tol}_{CG} = 10^{-7}$ and $\mathrm{Kmax}_{CG} = 10^4$ have been set for the CG method solving the inner linear systems. The outer iterations of VSH are terminated when two sufficiently close approximations of the unknown distribution have been determined, or after a maximum of 100 iterations; the relative tolerance value $\mathrm{Tol}_{VSH} = 0.1$ has been set. As the initial choice for the regularization parameter, the value $\alpha_0 = 1$ has been chosen.

4.3. Simulated NMR Data

The simulated distribution $F(T_1,T_2)$, shown in Figure 2, is a mixture of three Gaussian functions located at $(T_1,T_2)$ coordinates $(81.86, 12.84)$, $(8.59, 6.66)$ and $(433.27, 59.4)$ ms, with standard deviations $(0.1, 0.1)$, $(0.1, 0.25)$ and $(0.25, 0.1)$ ms. We considered the Laplace-type kernels $K_1$ and $K_2$ of (2), sampled $t_1$ logarithmically between 0.001 and 3, and $t_2$ linearly between 0.001 and 1, with $N_1 = 100$, $N_2 = 100$. The 2D data were obtained according to the 2D observation model (3) with $M_1 = 2048$, $M_2 = 128$. White Gaussian noise was added to obtain a signal-to-noise ratio (SNR) of 23 dB. We remark that this setting corresponds to a realistic $T_1$–$T_2$ measurement.

4.3.1. Models Comparison

We first compare the proposed hybrid inversion model (9) with the VSH model (6) and with the classic Tikhonov model (4). Figure 3 depicts the singular values of $K$, $K_1$ and $K_2$. Clearly, the singular values of $K$ are obtained by reordering, in non-increasing order, the diagonal elements of $\Sigma = \bar{\Sigma}_2 \otimes \bar{\Sigma}_1$, and they are different from the diagonal elements of $\bar{\Sigma}_{k_2,2} \otimes \bar{\Sigma}_{k_1,1}$. That is, some unwanted small singular values of $K$ may be included in $K_{k_2,2} \otimes K_{k_1,1}$, while some large singular values may not be considered. Figure 4 shows the singular values of $K_k$ obtained for $\tau = 1$ (blue line) and the singular values of $K_{k_2,2} \otimes K_{k_1,1}$ for $\tau_1 = \tau_2 = 0.5$ (blue dotted line), $\tau_1 = \tau_2 = 1$ (black dashed line) and $\tau_1 = \tau_2 = 5$ (red dash-dotted line). Observe that the singular values of $K_k$ are always different from those of $K_{k_2,2} \otimes K_{k_1,1}$. Considering the threshold value $\tau = \tau_1 = \tau_2 = 1$, the number $k = 82$ of singular values of $K_k$ is larger than that of $K_{k_1,k_2}$, given by $k_1 \times k_2 = 8 \times 6$. Comparing, in Figure 4, the blue line and the black dashed one (obtained for $\tau_1 = \tau_2 = 1$), we observe that the singular values of $K_{k_1,k_2}$ include a few elements smaller than $\tau$, and miss some larger terms that should be included. Moreover, if $\tau_1 = \tau_2 = 0.5$, the number of singular values of $K_{k_1,k_2}$ is $k_1 \times k_2 = 9 \times 7$, which is closer to $k = 82$, but there are many singular values smaller than $\tau$. Finally, in the case $\tau_1 = \tau_2 = 5$, $K_{k_1,k_2}$ has only a few large singular values ($k_1 = 5$, $k_2 = 4$) and many relevant solution coefficients are missing. The plots in Figure 4 indicate that, when considering problem (6), it is highly probable to include in the solution components dominated by noise (if $\tau$ is slightly too small) or to discard components dominated by the contributions from the exact right-hand side (if $\tau$ is too large).

A visual inspection of the Picard plot (Figure 5) indicates, for the hybrid method, the choice $\tau = 1$ for the truncation tolerance, giving the value $k = 82$ of the truncation parameter. The previous considerations suggest choosing the values $\tau_1 = \tau_2 = 1$ for the VSH method.
Once the projection subspaces for the VSH and hybrid methods are defined, we solve the optimization problems (6) and (11) by applying the same numerical solver (the Newton Projection method, NP) while varying the regularization parameter $\alpha$. In this way we investigate how the different subspace selections affect the solutions computed by models (6) and (11) for different values of $\alpha$. Moreover, we apply the Tikhonov method (4), in which no subspace projection is applied. We use NP to solve all the different optimization problems, since we aim at comparing the three data inversion models independently of the numerical method used for their solution.

For each model and for ten values of $\alpha$, logarithmically distributed between $10^{-4}$ and $10^2$, Table 1 reports the relative error values (columns labeled Err), the number of NP iterations (columns labeled It), and the time in seconds required for solving the three inversion models (columns labeled Time). The numerical results of Table 1 are summarized in Figure 6, where the relative error behaviour (left) and the computational time in seconds (right) are shown versus the regularization parameter values. Figure 7 shows the distributions estimated by the three models for $\alpha = 1$ (left column), $\alpha = 10^{-2}$ (middle column) and $\alpha = 10^{-4}$ (right column). Figures 8 and 9 report the corresponding projections along the $T_1$ and $T_2$ dimensions. Figures 7–9 show the importance of a proper choice of the subspace where the computed solution is represented. Indeed, the distribution computed by our method lies in a subspace of the column space of $V_k$, while the distribution determined by the VSH method belongs to a subspace of the space spanned by the column vectors of $V_{k_2,2} \otimes V_{k_1,1}$.

The results of Table 1 and the plots of Figures 6 and 7 show that the hybrid model (9) is less sensitive to small values of $\alpha$ than the Tikhonov one, because the matrix $K_k$ is better conditioned than $K$. When the value of $\alpha$ is properly chosen, the quality of the distributions given by these two models is comparable, but the solution of (9) requires less computational effort. The VSH model is also less sensitive to small values of $\alpha$, because $K_{k_1,k_2}$ is better conditioned than $K$. Its solution has the lowest computational cost, but the computed distributions exhibit artifacts due to the subspace where they are represented (see Figures 7–9).

4.3.2. Methods Comparison

We now compare the hybrid and VSH algorithms on the simulated NMR data. Table 2 reports the relative error values and the required time in seconds for several values of the truncation parameter $\tau$ (here, $\tau_1 = \tau_2 = \tau$). Moreover, the table shows the computed value of the regularization parameter $\alpha$, the number of performed Newton (or Newton Projection) and CG iterations (columns labeled It N and It CG, respectively) and, only for VSH, the number of outer iterations (column labeled It). The numerical results show that the VSH method has the lowest computational complexity, while the hybrid method gives the most accurate distributions. The execution time of the hybrid method is very low, although VSH is less time consuming. Figures 10 and 11 depict the best distributions estimated by the hybrid and VSH methods, i.e., those obtained with $\tau = 0.05$ and $\tau_1 = \tau_2 = 0.1$, respectively. By visually comparing Figures 10 and 11, we observe some spurious small oscillations in the VSH distribution, both near the boundary and in the flat region, while the distribution computed by the hybrid method is less affected by such artefacts.

4.4. Real NMR data

In this section we compare the hybrid and VSH methods using real NMR measurements obtained from the analysis of an egg yolk sample. The sample was prepared in the NMR Laboratory at the Department of Physics and Astronomy of the University of Bologna, by filling a 10 mm external diameter glass NMR tube with 6 mm of egg yolk. The tube was sealed with Parafilm and measured immediately. NMR measurements were performed at 25 °C with a homebuilt relaxometer based on a PC-NMR portable NMR console (Stelar, Mede, Italy) and a 0.47 T JEOL electromagnet. All relaxation curves were acquired using phase-cycling procedures. The $\pi/2$ pulse width was 3.8 µs, and the relaxation delay (RD) was set to a value greater than 4 times the maximum $T_1$ of the sample; in all experiments RD was equal to 3.5 s. For the 2D measurements, the longitudinal-transverse relaxation curve ($T_1$-$T_2$) was acquired by an IR-CPMG pulse sequence (RD-$\pi_x$-$T_I$-$(\pi/2)_x$-TE/2-[$\pi_y$-TE/2-echo acquisition-TE/2]$_{NE}$). The $T_1$ relaxation signal was acquired with 128 inversion times ($T_I$) chosen in geometrical progression from 1 ms up to 2.8 s, with $N_E = 1024$ (number of acquired echoes, echo time $TE = 500$ µs) on each CPMG, and the number of scans equal to 4.
The data acquisition step produces a structured ASCII file (in the STELAR data format) including the M_1 · M_2 relaxation data s in (3), where M_1 = 128 and M_2 = 1024, and the vectors t_1 ∈ R^{M_1} (TI inversion times) and t_2 ∈ R^{M_2} (CPMG echo times). The file is freely available upon email request to the authors. For the data inversion, in order to approximately preserve the ratio between M_1 and M_2, we choose the values N_1 = 64 and N_2 = 73 and compute the vectors T_1, T_2 in geometric progression within intervals defined by the minimum and maximum values of t_1 and t_2, respectively. Finally, using (2), we compute the matrices K_1 and K_2.
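As an illustration, the grids and kernel matrices described above can be built with a few lines of NumPy. This is a sketch under stated assumptions: the paper's Equation (2) is not reproduced here, so we assume the common IR and CPMG kernel forms k_1(t, T_1) = 1 − 2 exp(−t/T_1) and k_2(t, T_2) = exp(−t/T_2); all variable names are ours.

```python
import numpy as np

# Acquisition sizes from the text.
M1, M2, N1, N2 = 128, 1024, 64, 73

t1 = np.geomspace(1e-3, 2.8, M1)     # TI inversion times (s), geometric progression
t2 = 500e-6 * np.arange(1, M2 + 1)   # CPMG echo times (s), TE = 500 us

# Relaxation-time grids in geometric progression between the extremes of t1, t2.
T1 = np.geomspace(t1.min(), t1.max(), N1)
T2 = np.geomspace(t2.min(), t2.max(), N2)

# Assumed kernel forms for IR (longitudinal) and CPMG (transverse) relaxation;
# check these against the paper's Equation (2).
K1 = 1.0 - 2.0 * np.exp(-np.outer(t1, 1.0 / T1))   # M1 x N1
K2 = np.exp(-np.outer(t2, 1.0 / T2))               # M2 x N2
```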
We use the relaxation time distribution restored by the 2DUPEN method [14], shown in Figure 12 (top line), as the benchmark distribution, since the UPEN method uses multiparameter regularization and is known to provide accurate results [3]. Obviously, 2DUPEN requires more computational effort, since it involves the estimation of one regularization parameter for each pixel of the distribution.
By visual inspection of the Picard plot (Figure 13), the value τ = 1 has been chosen for the Hybrid method; the same value is fixed for the VSH method. Figure 13 shows the singular values of K, K_1 and K_2. For the VSH method, we report the results obtained at the first iteration since, in this case, they worsen as the iteration number increases. Table 3 reports the T_2-T_1 coordinates (in ms) where a peak is located, its height in a.u. (arbitrary units) and the required computational time in seconds. Finally, Figure 12 and Figure 14 illustrate the distribution maps, the restored surfaces and the projections along the T_1 and T_2 dimensions; the results of the Hybrid method are more similar to those obtained by 2DUPEN, while the distribution provided by the VSH method seems to exhibit larger boundary artifacts.
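A Picard plot like the one in Figure 13 is inexpensive to produce once the SVD is available. The sketch below uses a small synthetic kernel and data as a stand-in for the measured signal (all names and sizes are illustrative, not the egg yolk data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Small synthetic exponential kernel and noisy data (illustrative only).
K = np.exp(-np.outer(np.linspace(0.05, 3.0, 60), 1.0 / np.geomspace(0.05, 2.0, 40)))
f_true = np.maximum(rng.standard_normal(40), 0.0)
s = K @ f_true + 1e-8 * rng.standard_normal(60)

U, sigma, Vt = np.linalg.svd(K, full_matrices=False)
fourier = np.abs(U.T @ s)     # Fourier coefficients |u_i^T s|
picard = fourier / sigma      # solution coefficients |u_i^T s| / sigma_i

# The truncation level tau is read off where |u_i^T s| flattens at the noise
# floor while sigma_i keeps decaying, i.e., where picard starts to blow up.
```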

5. Conclusions

In this paper, we have presented a hybrid approach to the inversion of 2DNMR measurements. This approach combines the main features of Tikhonov and TSVD regularization in order to jointly select a suitable approximation subspace for the restored distribution and reduce the computational cost of the inversion. The Picard condition is used to select, at the same time, the Tikhonov regularization parameter and the SVD truncation parameter. The numerical results show that the proposed hybrid method is effective, efficient and robust. In our future work, we intend to integrate the presented hybrid approach into the 2DUPEN method.

Author Contributions

Conceptualization, G.L. and F.Z.; methodology, G.L. and F.Z.; software, G.L.; validation, G.L., F.Z. and V.B.; formal analysis, G.L.; investigation, G.L. and F.Z.; resources, V.B.; data curation, V.B.; writing—original draft preparation, G.L., F.Z. and V.B.; writing—review and editing, F.Z. and V.B. All authors have read and agreed to the published version of the manuscript.

Funding

GNCS-INdAM.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

This work was partially supported by Gruppo Nazionale per il Calcolo Scientifico—Istituto Nazionale di Alta Matematica (GNCS-INdAM). We thank Leonardo Brizi of the Department of Physics and Astronomy (DIFA), University of Bologna.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof. 
(Lemma 1). The thesis is obtained if we verify that A† = D V Σ† U^T ∈ R^{N×M} meets the four Moore-Penrose conditions, which define the pseudo-inverse A† of A as the unique matrix satisfying
  • A A† A = A;
  • A† A A† = A†;
  • (A A†)^H = A A†;
  • (A† A)^H = A† A.
By substituting U Σ V^T D for A and D V Σ† U^T for A† in the first condition, we have
U Σ V^T D (D V Σ† U^T) U Σ V^T D = U Σ V^T (D D) V (Σ† (U^T U) Σ) V^T D = U Σ V^T D
since Σ† Σ = I_N, U^T U = I_M, V V^T = I_N and D D = D. Analogously, we can prove the second condition:
D V Σ† U^T (U Σ V^T D) D V Σ† U^T = D V Σ† U^T.
We have that
(A A†)^H = (U Σ V^T D D V Σ† U^T)^H = U Σ V^T D D V Σ† U^T = A A†.
The last condition follows analogously. □
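The four Moore-Penrose conditions are easy to verify numerically. The sketch below checks them for NumPy's pseudo-inverse on a random real matrix (where the conjugate transpose reduces to the transpose); it is an illustration, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
Ap = np.linalg.pinv(A)   # Moore-Penrose pseudo-inverse

ok = (np.allclose(A @ Ap @ A, A)           # A A+ A = A
      and np.allclose(Ap @ A @ Ap, Ap)     # A+ A A+ = A+
      and np.allclose((A @ Ap).T, A @ Ap)  # (A A+)^H = A A+
      and np.allclose((Ap @ A).T, Ap @ A)) # (A+ A)^H = A+ A
```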

Appendix B. The Newton Projection Method

Let us now denote by Q^(k)(f) the objective function of (11):
Q^(k)(f) = ‖Σ_k V_k^T f − U_k^T s‖² + α ‖f‖²
and by A(f) the set of indices defined as [24]
A(f) = { i | 0 ≤ f_i ≤ ε and (∇Q^(k))_i > 0 },  ε = min{ ε̄, ‖f − [f − ∇Q^(k)]_+‖ }
where ε̄ is a small positive parameter and [·]_+ denotes the projection onto the positive orthant. Finally, let E and F denote the diagonal matrices [24] such that
{E(f)}_ii = 1 if i ∉ A(f), 0 if i ∈ A(f);  F(f) = I − E(f).
The NP method for the minimization of Q^(k)(f) under nonnegativity constraints is formally stated in Algorithm A1.
Algorithm A1: NP (Newton-Projection) algorithm
1: choose f ( 0 ) and set k = 0
2: for k = 1 , 2 , do
3: compute the index subset A ( k ) and the matrices E ( k ) and F ( k ) ;
4: determine the search direction d^(k) by solving, with the CG method, the linear system
(E^(k) ∇²Q^(k)(f^(k)) E^(k) + F^(k)) d = −∇Q^(k)(f^(k));
5: determine a step-length α ( k ) satisfying the Armijo rule along the projection arc [23];
6: compute f ( k + 1 ) = [ f ( k ) + α ( k ) d ( k ) ] + ;
7: end for
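Algorithm A1 can be sketched in a few lines of NumPy. This is a simplified, illustrative version, not the authors' implementation: a dense direct solve replaces the CG inner iteration, and plain backtracking replaces the Armijo rule along the projection arc; it assumes the quadratic objective Q(f) = ‖K f − s‖² + α‖f‖².

```python
import numpy as np

def newton_projection(K, s, alpha, f0, eps_bar=1e-10, iters=30):
    """Simplified sketch of Algorithm A1 for min_{f>=0} ||K f - s||^2 + alpha ||f||^2.
    A direct solve stands in for CG, backtracking for the projection-arc Armijo rule."""
    Q = lambda x: np.sum((K @ x - s) ** 2) + alpha * np.sum(x ** 2)
    H = 2.0 * (K.T @ K) + 2.0 * alpha * np.eye(K.shape[1])   # constant Hessian
    f = np.maximum(f0, 0.0)
    for _ in range(iters):
        g = 2.0 * (K.T @ (K @ f - s)) + 2.0 * alpha * f      # gradient of Q
        eps = min(eps_bar, np.linalg.norm(f - np.maximum(f - g, 0.0)))
        active = (f <= eps) & (g > 0.0)                      # index set A(f)
        E = np.diag((~active).astype(float))                 # free variables
        F = np.eye(len(f)) - E
        d = np.linalg.solve(E @ H @ E + F, -g)               # search direction
        step = 1.0
        while Q(np.maximum(f + step * d, 0.0)) > Q(f) and step > 1e-12:
            step *= 0.5
        f = np.maximum(f + step * d, 0.0)
    return f

# Tiny check with K = I: the solution is f_i = max(0, s_i) / (1 + alpha).
f = newton_projection(np.eye(3), np.array([1.0, -1.0, 2.0]), 0.1, np.ones(3))
```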

Appendix C. The Venkataramanan-Song-Hürlimann Algorithm

Let K_i = U̅_i Σ̅_i V̅_i^T be the SVD of K_i, i = 1, 2, and let k_i be the number of retained singular values; for example, k_i can be the index of the last singular value greater than a given tolerance τ_i, i = 1, 2. Given the TSVD matrices K_{k1,1} = U̅_{k1,1} Σ̅_{k1,1} V̅_{k1,1}^T and K_{k2,2} = U̅_{k2,2} Σ̅_{k2,2} V̅_{k2,2}^T of K_1 and K_2, and defined their Kronecker product K_{k1,k2}:
K_{k1,k2} = K_{k2,2} ⊗ K_{k1,1}
problem (4) can be approximated as
min_{f ≥ 0} ‖K_{k1,k2} f − s‖² + α ‖f‖²
or equivalently
min_{F ≥ 0} ‖K_{k1,1} F K_{k2,2}^T − S‖² + α ‖F‖².
The first step of the VSH algorithm consists of projecting the data s onto the column subspace of K_{k1,k2}:
P_{1,2}(s) = (U̅_{k2,2} ⊗ U̅_{k1,1}) (U̅_{k2,2} ⊗ U̅_{k1,1})^T s
that is,
P_{1,2}(S) = U̅_{k1,1} U̅_{k1,1}^T S U̅_{k2,2} U̅_{k2,2}^T.
Hence, by neglecting constant terms, problem (A6) can now be equivalently rewritten as
min_{F ≥ 0} ‖U̅_{k1,1} U̅_{k1,1}^T K_{k1,1} F K_{k2,2}^T U̅_{k2,2} U̅_{k2,2}^T − P_{1,2}(S)‖² + α ‖F‖²
which becomes
min_{F ≥ 0} ‖(Σ̅_{k1,1} V̅_{k1,1}^T) F (Σ̅_{k2,2} V̅_{k2,2}^T)^T − U̅_{k1,1}^T S U̅_{k2,2}‖² + α ‖F‖²
or, equivalently
min_{f ≥ 0} ‖K̃ f − s̃‖² + α ‖f‖²
where
K̃ = (Σ̅_{k2,2} V̅_{k2,2}^T) ⊗ (Σ̅_{k1,1} V̅_{k1,1}^T)  and  s̃ = (U̅_{k2,2}^T ⊗ U̅_{k1,1}^T) s.
This results in compressed data s ˜ and kernels Σ ¯ k 1 , 1 V ¯ k 1 , 1 T and Σ ¯ k 2 , 2 V ¯ k 2 , 2 T of significantly smaller dimensions, thus avoiding large memory requirements and reducing computational effort.
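Assuming the shapes implied by the text (K_1 ∈ R^{M1×N1}, K_2 ∈ R^{M2×N2}, data matrix S ∈ R^{M1×M2}), the compression step can be sketched as follows; names and the random test data are illustrative:

```python
import numpy as np

def compress(K1, K2, S, tau):
    """VSH step 1: TSVD compression of the separable kernels and the data.
    Returns the compressed kernels Sigma_i V_i^T and the data U1^T S U2."""
    U1, s1, V1t = np.linalg.svd(K1, full_matrices=False)
    U2, s2, V2t = np.linalg.svd(K2, full_matrices=False)
    k1 = max(1, int(np.sum(s1 > tau)))   # singular values above the tolerance
    k2 = max(1, int(np.sum(s2 > tau)))
    K1c = s1[:k1, None] * V1t[:k1]       # k1 x N1
    K2c = s2[:k2, None] * V2t[:k2]       # k2 x N2
    Sc = U1[:, :k1].T @ S @ U2[:, :k2]   # k1 x k2 compressed data
    return K1c, K2c, Sc

# With all singular values kept (tau = 0), the compressed model K1c F K2c^T
# reproduces the compressed data exactly for noise-free S = K1 F K2^T.
rng = np.random.default_rng(2)
K1 = rng.standard_normal((20, 10))
K2 = rng.standard_normal((25, 12))
F = rng.random((10, 12))
S = K1 @ F @ K2.T
K1c, K2c, Sc = compress(K1, K2, S, tau=0.0)
```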
The second step of VSH consists of solving the optimization problem (A9) for a specific value of α . In [4], this is done by using a method adapted from the Butler-Reeds-Dawson (BRD) algorithm [9] which maps the constrained optimization problem (A9) onto an unconstrained one. Let us define a vector c implicitly from f by
f = max(0, K̃^T c).
Hence, problem (A9) can be reformulated as the unconstrained minimization problem
min_c ½ c^T (G(c) + α I) c − c^T s̃,
where
G(c) = K̃ diag( H(K̃_1^T c), H(K̃_2^T c), …, H(K̃_{k1 k2}^T c) ) K̃^T
H(x) is the Heaviside function and I is the identity matrix. The Newton method is then used for solving problem (A12). The Conjugate Gradient (CG) method is employed for the solution of the inner linear system.
Once (A9) has been solved for a specific α , the BRD method is used for updating the value of α as
α_new = √(k_1 k_2) / ‖c‖.
This procedure is sketched in Algorithm A2.
Algorithm A2: VSH (Venkataramanan–Song–Hürlimann) algorithm
1: choose k 1 and k 2
2: compute the TSVD of K 1 and K 2
3: compute the kernel K ˜ and the projected data s ˜ as in (A10)
4: choose α ( 0 )
5: for j = 0, 1, 2, … do
6: compute with the Newton method a solution c^(j) to the unconstrained problem
min_c ½ c^T (G(c) + α^(j) I) c − c^T s̃
7: compute f^(j) = max(0, K̃^T c^(j))
8: update α^(j+1) = √(k_1 k_2) / ‖c^(j)‖
9: end for
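Algorithm A2 can be sketched compactly in NumPy. This is an illustrative version under stated assumptions: a damped (semismooth) Newton step with a dense solve replaces the Newton-CG combination described above, and all names and tolerances are ours:

```python
import numpy as np

def vsh(Kt, st, alpha0=1.0, outer=8, inner=30):
    """Sketch of Algorithm A2 (BRD iteration). Kt is the compressed kernel
    (k1*k2 rows), st the compressed data; returns f >= 0 and the final alpha."""
    m = Kt.shape[0]
    phi = lambda c, a: (0.5 * np.sum(np.maximum(0.0, Kt.T @ c) ** 2)
                        + 0.5 * a * (c @ c) - c @ st)
    alpha, c = alpha0, np.zeros(m)
    for _ in range(outer):
        for _ in range(inner):                        # damped Newton on (A12)
            f = np.maximum(0.0, Kt.T @ c)
            grad = Kt @ f + alpha * c - st
            if np.linalg.norm(grad) < 1e-12 * (1.0 + np.linalg.norm(st)):
                break
            D = (Kt.T @ c > 0.0).astype(float)        # Heaviside H(Kt^T c)
            H = (Kt * D) @ Kt.T + alpha * np.eye(m)   # G(c) + alpha I
            d = np.linalg.solve(H, -grad)
            t = 1.0                                   # backtracking damping
            while (phi(c + t * d, alpha) > phi(c, alpha) + 1e-4 * t * (grad @ d)
                   and t > 1e-12):
                t *= 0.5
            c = c + t * d
        alpha = np.sqrt(m) / max(np.linalg.norm(c), 1e-300)  # BRD update (A13)
    return np.maximum(0.0, Kt.T @ c), alpha

# Small synthetic compressed problem (illustrative sizes only).
rng = np.random.default_rng(3)
Kt = rng.standard_normal((6, 15))
st = Kt @ rng.random(15)
f, alpha = vsh(Kt, st)
```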

References

  1. Ernst, R.; Bodenhausen, G.; Wokaun, A. Principles of Nuclear Magnetic Resonance in One and Two Dimensions, 2nd ed.; International Series of Monographs on Chemistry; Oxford University Press: Oxford, NY, USA, 1997.
  2. Blümich, B. Essential NMR; Springer: Berlin, Germany, 2005.
  3. Borgia, G.; Brown, R.; Fantazzini, P. Uniform-Penalty Inversion of Multiexponential Decay Data. J. Magn. Reson. 1998, 132, 65–77.
  4. Venkataramanan, L.; Song, Y.; Hürlimann, M. Solving Fredholm integrals of the first kind with tensor product structure in 2 and 2.5 dimensions. IEEE Trans. Signal Process. 2002, 50, 1017–1026.
  5. Kroeker, R.; Henkelman, R. Analysis of biological NMR relaxation data with continuous distributions of relaxation times. J. Magn. Reson. 1986, 69, 218.
  6. Hansen, P. Rank-Deficient and Discrete Ill-Posed Problems; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1998.
  7. Engl, H.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Mathematics and Its Applications; Springer: Dordrecht, The Netherlands, 2000.
  8. Eldén, L. Algorithms for the regularization of ill-conditioned least squares problems. BIT Numer. Math. 1977, 17, 134–145.
  9. Butler, J.; Reeds, J.; Dawson, S. Estimating solutions of the first kind integral equations with nonnegative constraints and optimal smoothing. SIAM J. Numer. Anal. 1981, 18, 381.
  10. Fordham, E.; Sezginer, A.; Hall, L. Imaging multiexponential relaxation in the (y, log T1) plane, with application to clay filtration in rock cores. J. Magn. Reson. Ser. A 1995, 113, 139–150.
  11. Chouzenoux, E.; Moussaoui, S.; Idier, J.; Mariette, F. Primal-Dual Interior Point Optimization for a Regularized Reconstruction of NMR Relaxation Time Distributions. In Proceedings of the 38th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2013), Vancouver, BC, Canada, 26–31 May 2013; pp. 8747–8750.
  12. Cloninger, A.; Czaja, W.; Bai, R.; Basser, P. Solving 2D Fredholm Integral from Incomplete Measurements Using Compressive Sensing. SIAM J. Imaging Sci. 2014, 7, 1775–1798.
  13. Hansen, P. The discrete Picard condition for discrete ill-posed problems. BIT Numer. Math. 1990, 30, 658–672.
  14. Bortolotti, V.; Brown, R.; Fantazzini, P.; Landi, G.; Zama, F. Uniform Penalty inversion of two-dimensional NMR relaxation data. Inverse Probl. 2016, 33, 015003.
  15. Bortolotti, V.; Brown, R.; Fantazzini, P.; Landi, G.; Zama, F. I2DUPEN: Improved 2DUPEN algorithm for inversion of two-dimensional NMR data. Microporous Mesoporous Mater. 2017, 269, 195–198.
  16. Xiang, H.; Zou, J. Regularization with randomized SVD for large-scale discrete inverse problems. Inverse Probl. 2013, 29, 085008.
  17. Xiang, H.; Zou, J. Randomized algorithms for large-scale inverse problems with general Tikhonov regularizations. Inverse Probl. 2015, 31, 085008.
  18. Bertsekas, D.P. Projected Newton method for optimization with simple constraints. SIAM J. Control Optim. 1982, 20, 221–246.
  19. Bonettini, S.; Landi, G.; Loli Piccolomini, E.; Zanni, L. Scaling techniques for gradient projection-type methods in astronomical image deblurring. Int. J. Comput. Math. 2013, 90, 9–29.
  20. Bortolotti, V.; Landi, G.; Zama, F. 2DNMR data inversion using locally adapted multi-penalty regularization. arXiv 2020, arXiv:2007.01268.
  21. Levin, E.; Meltzer, A.Y. Estimation of the Regularization Parameter in Linear Discrete Ill-Posed Problems Using the Picard Parameter. SIAM J. Sci. Comput. 2017, 39, A2741–A2762.
  22. Landi, G.; Piccolomini, E.L.; Tomba, I. A stopping criterion for iterative regularization methods. Appl. Numer. Math. 2016, 106, 53–68.
  23. Bertsekas, D. Nonlinear Programming, 2nd ed.; Athena Scientific: Nashua, NH, USA, 1999.
  24. Vogel, C.R. Computational Methods for Inverse Problems; SIAM: Philadelphia, PA, USA, 2002.
Figure 1. The Hybrid (blue line), TSVD (red dashed line) and Tikhonov (black dashdotted line) filter factors ϕ_i versus σ_i for α = 2.5·10⁻⁵.
Figure 2. Top: simulated T 1 T 2 map and surface distribution. Bottom: projections along the T 1 and T 2 dimensions.
Figure 3. The singular values of K (blue line), K_1 (red dashed line) and K_2 (black dashdotted line). For easier visualization, the singular values of K_1 and K_2 are plotted versus the integers 100·1, 100·2, …, 100·100.
Figure 4. Singular values of K k obtained for τ = 0.1 (blue line) and the singular values of K k 1 , 1 K k 2 , 2 for τ 1 = τ 2 = 0.5 (blue dotted line), τ 1 = τ 2 = 1 (black dashed line) and for τ 1 = τ 2 = 5 (red dashdotted line).
Figure 5. The Picard plot for the simulated NMR data. Singular values σ i (black dashed line), Fourier coefficients u i T s (blue continuous line) and solution coefficients u i T s / σ i (red circles) for i = 1 , , 500 .
Figure 6. Relative error behaviour (left) and the computational time in seconds (right) versus the regularization parameter values.
Figure 7. Estimated distributions from the Hybrid (top row), Tikhonov (middle row) and VSH (bottom row) models for α = 1 (left column), α = 10 2 (middle column) and α = 10 4 (right column).
Figure 8. Projections along the T 1 dimension from the Hybrid (top row), Tikhonov (middle row) and VSH (bottom row) models for α = 1 (left column), α = 10 2 (middle column) and α = 10 4 (right column). The red dashed and blue continuous lines respectively represent the exact projections and the computed ones.
Figure 9. Projections along the T 2 dimension from the Hybrid (top row), Tikhonov (middle row) and VSH (bottom row) models for α = 1 (left column), α = 10 2 (middle column) and α = 10 4 (right column). The red dashed and blue continuous lines respectively represent the exact projections and the computed ones.
Figure 10. Hybrid method: restored T 1 T 2 distribution and projections along the T1 and T2 dimensions ( τ = 0.05 ).
Figure 11. VSH method: restored T 1 T 2 distribution and projections along the T1 and T2 dimensions ( τ = 0.1 ).
Figure 12. From top to bottom: UPEN, Hybrid and VSH restored T 1 T 2 maps (left) and surface distributions (right).
Figure 13. Left: singular values (black dashed line) of K and solution coefficients (red circles). Only the first 300 singular values are depicted. Right: Singular values of K 1 (red dashed line) and K 2 (black dashdotted line). The horizontal line corresponds to the value τ = 1 .
Figure 14. From top to bottom: UPEN, Hybrid and VSH projections along the T1 (left) and T2 (right) dimensions.
Table 1. Numerical results for the models comparison on simulated NMR data. Results obtained by applying the NP method to (6), (4) and (11) with τ = τ_1 = τ_2 = 1.
          Hybrid              Tikh                VSH
α         Err    Time   It   Err    Time   It    Err    Time  It
100       0.536  0.90   25   0.536  0.65   20    0.542  0.73  52
21.54     0.452  1.29   31   0.451  1.11   25    0.463  0.52  33
4.6416    0.390  1.74   31   0.389  1.56   26    0.413  0.65  33
1         0.356  0.99   15   0.336  1.79   26    0.372  0.70  36
0.2154    0.300  1.57   18   0.284  1.29   15    0.338  0.97  39
0.0464    0.272  5.39   47   0.240  3.75   33    0.318  0.73  27
0.01      0.257  3.87   31   0.215  7.43   53    0.316  0.58  23
0.0022    0.255  5.55   43   0.243  11.41  68    0.314  0.35  15
0.0005    0.252  4.73   32   0.261  9.94   48    0.315  0.51  20
0.0001    0.245  2.94   25   0.304  24.91  100   0.320  0.45  18
Table 2. Numerical results for the methods comparison on simulated NMR data.
τ       Method   α          Err        Time   It   It N   It CG
0.001   Hybrid   1.73·10⁻⁶  2.35·10⁻¹  0.68   /    18     1700
        VSH      1.06·10⁻¹  3.08·10⁻¹  1.60   50   198    14,800
0.005   Hybrid   6.52·10⁻⁵  2.34·10⁻¹  0.75   /    21     2000
        VSH      9.34·10⁻²  3.85·10⁻¹  1.50   50   198    14,800
0.01    Hybrid   1.66·10⁻⁴  2.32·10⁻¹  0.62   /    18     1700
        VSH      9.34·10⁻²  3.85·10⁻¹  1.50   50   198    14,800
0.05    Hybrid   6.80·10⁻³  2.44·10⁻¹  0.45   /    13     2200
        VSH      8.52·10⁻²  3.15·10⁻¹  0.65   28   112    8400
0.1     Hybrid   1.35·10⁻²  2.41·10⁻¹  0.80   /    23     2200
        VSH      7.96·10⁻²  3.02·10⁻¹  0.11   4    20     1600
0.5     Hybrid   6.15·10⁻¹  3.23·10⁻¹  0.95   /    21     2000
        VSH      3.93·10⁻²  3.12·10⁻¹  0.19   6    35     2900
1       Hybrid   1.17·10⁰   3.23·10⁻¹  0.79   /    23     2200
        VSH      5.07·10⁻²  3.43·10⁻¹  0.25   8    53     4500
5       Hybrid   5.51·10⁺¹  5.08·10⁻¹  1.36   /    43     3779
        VSH      9.26·10⁻²  6.80·10⁻¹  0.06   2    27     1008
10      Hybrid   1.16·10⁺²  5.08·10⁻¹  0.53   /    20     1496
        VSH      5.99·10⁻²  7.03·10⁻¹  0.10   6    50     1153
Table 3. Location and height of the peak in the restored distribution and required execution time in seconds.
Method   T_2     T_1     Peak Height   Time
Hybrid   19.85   51.90   269.52        11.06
VSH      19.85   43.70   264.48        1.24
UPEN     19.85   51.90   544.23        71.93

Landi, G.; Zama, F.; Bortolotti, V. A New Hybrid Inversion Method for 2D Nuclear Magnetic Resonance Combining TSVD and Tikhonov Regularization. J. Imaging 2021, 7, 18. https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020018
