Article

From Bilinear Regression to Inductive Matrix Completion: A Quasi-Bayesian Analysis

Department of Mathematical Sciences, Norwegian University of Science and Technology, 7034 Trondheim, Norway
Submission received: 27 January 2023 / Revised: 8 February 2023 / Accepted: 9 February 2023 / Published: 11 February 2023
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)

Abstract

In this paper, we study the problem of bilinear regression, a type of statistical modeling that deals with multiple variables and multiple responses. One of the main difficulties that arise in this problem is the presence of missing data in the response matrix, a problem known as inductive matrix completion. To address these issues, we propose a novel approach that combines elements of Bayesian statistics with a quasi-likelihood method. Our proposed method starts by addressing the problem of bilinear regression using a quasi-Bayesian approach. The quasi-likelihood method that we employ in this step allows us to handle the complex relationships between the variables in a more robust way. Next, we adapt our approach to the context of inductive matrix completion. We make use of a low-rankness assumption and leverage the powerful PAC-Bayes bound technique to provide statistical properties for our proposed estimators and for the quasi-posteriors. To compute the estimators, we propose a Langevin Monte Carlo method to obtain approximate solutions to the problem of inductive matrix completion in a computationally efficient manner. To demonstrate the effectiveness of our proposed methods, we conduct a series of numerical studies. These studies allow us to evaluate the performance of our estimators under different conditions and provide a clear illustration of the strengths and limitations of our approach.

1. Introduction

In this paper, we investigate the bilinear regression model, a statistical method that assumes a linear relationship between a set of multiple response variables and two sets of covariates. This model, also known as the growth curve model or generalized multivariate analysis model, is commonly used for analyzing longitudinal data, as shown in previous studies such as [1,2,3,4,5,6]. However, these studies only cover the scenario in which the response matrix is fully observed.
Recently, the bilinear regression model with incomplete response has been introduced and studied as the so-called inductive matrix completion, which is a generalization of the matrix completion problem [7,8]. This problem has attracted significant attention in various fields, such as drug repositioning [9], collaborative filtering [10], and genomics [7]. Inductive matrix completion is a challenging problem that arises when some of the entries in the response matrix are missing, which makes it difficult to infer the underlying relationship between the variables.
In this work, we explore the problem of bilinear regression and inductive matrix completion under a low-rank constraint on the coefficient matrix. Most existing approaches for these problems are frequentist methods, such as maximum likelihood estimation [2] or penalized optimization [7]. These methods are effective in providing point estimates for the parameters of the model but lack the ability to provide a full probabilistic characterization of the uncertainty. Recently, Bayesian approaches have been considered for these problems. For example, the paper [11] proposed a Bayesian approach for bilinear regression, and a Bayesian method was proposed for inductive matrix completion in the work [9]. However, unlike frequentist approaches, the statistical properties of the Bayesian approach for these models have not been fully explored yet.
The aim of this paper is to address an existing gap in the understanding of the bilinear regression and inductive matrix completion problems. To achieve this goal, we propose a novel approach that combines elements of Bayesian statistics with a quasi-likelihood method. Specifically, we start by addressing the problem of bilinear regression using a quasi-Bayesian approach, where a quasi-likelihood is employed. We then generalize this approach to the problem of inductive matrix completion. To ensure that our method is adaptive to the rank of the coefficient matrix, we use a spectral scaled Student prior distribution, which allows us to prove that the posterior mean satisfies a tight oracle inequality. This result demonstrates that our method is able to accurately estimate the parameters of the model, even when the rank of the coefficient matrix is unknown. Additionally, we also prove the contraction properties of the posteriors, which further enhances the performance of our method.
The proposed method in this paper, the quasi-Bayesian approach, is an extension of the traditional Bayesian approach and is becoming increasingly popular in statistics and machine learning as a technique for generalized Bayesian inference, as noted in studies such as [12,13,14]. This approach allows for more flexibility in the modeling assumptions by replacing the likelihood function with a more general notion of risk or quasi-likelihood.
To provide theoretical guarantees for our proposed quasi-posteriors, we make use of the PAC-Bayesian technique [15,16,17]. This technique provides bounds on the generalization error of a learned estimator, and has been widely used in the literature as described in recent reviews and introductions, such as [18,19]. The PAC-Bayes bounds have been successfully applied in the context of matrix estimation problems as shown in studies such as [20,21,22,23]. Interestingly, by using the PAC-Bayesian technique for inductive matrix completion, we do not need to make any assumptions about the distribution of the missing entries in the response matrix. This is in contrast with previous works on matrix completion, such as [24,25,26,27,28,29], which typically require assumptions about the missing data. This makes our method more versatile and applicable to a wider range of problems.
The proposed method in this paper makes use of a spectral scaled Student prior, which is a specific choice of prior distribution. This choice is motivated by recent works in which it has been shown to lead to optimal rates in a variety of problems, including high-dimensional regression [30], image denoising [31] and reduced rank regression [23]. Although this prior is not conjugate to our problems, it allows for the convenient implementation of gradient-based sampling methods, which makes it computationally efficient.
To compute the proposed estimators and sample from the quasi-posterior, we employ a Langevin Monte Carlo (LMC) method. This method is a widely used algorithm for approximating complex distributions and allows for efficient computation of the proposed estimators. The LMC method allows us to obtain approximate solutions to the problem of bilinear regression and inductive matrix completion in a computationally efficient manner.
Furthermore, we use numerical studies to demonstrate the effectiveness of our proposed methods. These studies enable us to evaluate the performance of our estimators in various scenarios and provide insight into the capabilities and limitations of our approach. We also compare our method with the ordinary least squares method; this comparison demonstrates the advantage of our method over the traditional approach in certain scenarios, such as when the response matrix contains missing data. Overall, the numerical studies serve as a valuable tool for assessing the effectiveness of the proposed method and give a clear illustration of its strengths and weaknesses, as well as its improvement over the traditional method.
The remainder of this paper is organized as follows: In Section 2, we present the problem of bilinear regression, and introduce the low-rank promoting prior distribution that we use to address this problem. In Section 3, we extend our approach to the problem of inductive matrix completion. In Section 4, we discuss the Langevin Monte Carlo method used for the computation of the estimators, and present numerical studies to demonstrate the effectiveness of our proposed methods. Finally, we conclude our work and provide a summary of our findings in Section 5. The technical proofs are provided in Appendix A for the interested readers.
Notation 1.
Let $\mathbb{R}^{n_1\times n_2}$ denote the set of $n_1\times n_2$ matrices with real entries. For $A\in\mathbb{R}^{n_1\times n_2}$, let $A^\top\in\mathbb{R}^{n_2\times n_1}$ denote its transpose. For any $A\in\mathbb{R}^{n_1\times n_2}$ and $I=(i,j)\in\{1,\dots,n_1\}\times\{1,\dots,n_2\}$, we write $A_I=A(i,j)=A_{i,j}$ for the entry in the $i$-th row and $j$-th column of $A$. The matrix in $\mathbb{R}^{n_1\times n_2}$ with all entries equal to 0 is denoted by $\mathbf{0}_{n_1\times n_2}$. For a matrix $B\in\mathbb{R}^{n_1\times n_1}$, $\mathrm{Tr}(B)$ denotes its trace, and the identity matrix in $\mathbb{R}^{n_1\times n_1}$ is denoted by $I_{n_1}$. For $A\in\mathbb{R}^{n_1\times n_2}$, we define its sup-norm $\|A\|_\infty=\max_{i,j}|A_{i,j}|$; its Frobenius norm $\|A\|_F$ is defined by $\|A\|_F^2=\mathrm{Tr}(A^\top A)=\sum_{i,j}A_{i,j}^2$; and $\mathrm{rank}(A)$ denotes its rank.

2. Bilinear Regression

2.1. Model

Let $Y\in\mathbb{R}^{n\times q}$ consist of $n$ independent response vectors, let $X\in\mathbb{R}^{n\times p}$ be a given between-individuals design matrix, and let $Z\in\mathbb{R}^{k\times q}$ be a known within-individuals design matrix. Consider the following bilinear regression model:
$$ Y = XM^*Z + E, \qquad (1) $$
where $M^*\in\mathbb{R}^{p\times k}$ is the unknown parameter matrix. The random noise matrix $E$ is assumed to have zero mean, $\mathbb{E}(E)=0$. The main assumption here is the low-rank restriction on the model parameter, $\mathrm{rank}(M^*)<\min(p,k)$.
The model presented in Equation (1) is a bilinear regression model, which can be seen as a generalization of the reduced rank regression problem. In the case where k = q and Z = I q , the model simplifies to the traditional reduced rank regression problem, which has been well-studied in the literature, such as [32,33]. However, in this paper, we consider the more general case where the matrix Z contains additional explanatory variables.
The low-rank assumption in this model can be interpreted as indicating the presence of a latent process that affects the response data, not only through the “between-individuals” structure of the model but also through the “within-individuals” structure. This model is often referred to as the growth curve model or the generalized multivariate analysis model (GMANOVA) and has been studied in depth in the literature, such as [2].
Assumption 1.
There is a known constant $C<+\infty$ such that $\|XM^*Z\|_\infty\le C$.
From Assumption 1, it is not sensible to return predictions $XMZ$ with entries outside the interval $[-C,C]$. However, for computational reasons, it is extremely convenient to employ an unbounded prior for $M$. Therefore, we propose to use an unbounded distribution for $M$ but to use, as a predictor, a truncated version of $XMZ$ rather than $XMZ$ itself. For a matrix $A$, let
$$ \Pi_C(A) = \arg\min_{\|B\|_\infty\le C} \|A-B\|_F $$
be the orthogonal projection of $A$ onto the set of matrices with entries bounded by $C$ in absolute value. Note that $\Pi_C(A)$ is simply obtained by replacing the entries of $A$ larger than $C$ by $C$, and the entries smaller than $-C$ by $-C$.
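As an illustration, the projection can be computed entrywise by clipping; the following is a minimal R sketch (the function name is ours):

```r
# Entrywise projection onto matrices with entries in [-C, C]:
# entries larger than C are replaced by C, entries smaller than -C by -C.
Pi_C <- function(A, C) pmin(pmax(A, -C), C)
```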
For a matrix $M\in\mathbb{R}^{p\times k}$, we denote by $r(M)$ the empirical risk of $M$,
$$ r(M) = \frac{1}{nq}\,\big\|Y - \Pi_C(XMZ)\big\|_F^2, $$
and its expectation is denoted by
$$ R(M) = \mathbb{E}\,[r(M)] = \mathbb{E}\Big[\big(Y_{11} - (\Pi_C(XMZ))_{11}\big)^2\Big]. $$
The focus of our work in this paper is on the predictive aspects of the model, that is, a matrix $M$ predicts almost as well as $M^*$ if $R(M)-R(M^*)$ is small. Under the assumption that $E_{ij}$ has a finite variance, the Pythagorean theorem yields
$$ R(M) - R(M^*) = \frac{1}{nq}\,\big\|\Pi_C(XMZ) - XM^*Z\big\|_F^2 \qquad (2) $$
for any $M$, which means that our results can also be interpreted in terms of the Frobenius norm.
Let $\pi$ be a prior distribution on $\mathbb{R}^{p\times k}$ (see Section 2.2). For any $\lambda>0$, we define the quasi-posterior
$$ \hat\rho_\lambda(dM) \propto \exp\big(-\lambda\, r(M)\big)\,\pi(dM). $$
It is worth noting that, for the specific choice $\lambda = nq/(2\sigma^2)$, the quasi-posterior coincides with the posterior obtained when the noise terms $E_{ij}$ are assumed to be Gaussian with mean 0 and variance $\sigma^2$. However, our theoretical results hold under a more general class of noise distributions. It is known that a small enough $\lambda$ is sufficient when the model is misspecified [14]. Additionally, even in the case of Gaussian noise, in high-dimensional settings, a value of $\lambda$ smaller than $nq/(2\sigma^2)$ leads to better adaptation properties [30,34]. The precise choice of $\lambda$ in our method is discussed below.
We consider the following truncated posterior mean of $XMZ$:
$$ \widehat{XMZ}_\lambda = \int \Pi_C(XMZ)\,\hat\rho_\lambda(dM). \qquad (3) $$
It is worth noting, from the simulation experiments, that for reasonable values of $C$ the Monte Carlo algorithm never samples matrices $M$ such that $\Pi_C(XMZ)\neq XMZ$. In other words, the boundedness constraint has very little impact in practice and is mainly needed for the technical proofs. If one is interested in an estimator of $M^*$ rather than of $XM^*Z$, and if $X^\top X$ and $ZZ^\top$ are invertible, one can consider the estimator $\hat M_\lambda = (X^\top X)^{-1}X^\top\,\widehat{XMZ}_\lambda\,Z^\top(ZZ^\top)^{-1}$ and note that $X\hat M_\lambda Z = \widehat{XMZ}_\lambda$. This estimator can be used whenever the inverses of $X^\top X$ and $ZZ^\top$ are computationally feasible.
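For illustration, a minimal R sketch of this back-transformation (assuming `XMZ_hat` stores the estimate $\widehat{XMZ}_\lambda$ and that the required inverses exist) is:

```r
# Recover an estimate of M* from the fitted X M Z, assuming X'X and Z Z' are invertible.
# X: n x p, Z: k x q, XMZ_hat: n x q; the result M_hat is p x k.
M_hat <- solve(crossprod(X)) %*% t(X) %*% XMZ_hat %*% t(Z) %*% solve(tcrossprod(Z))
```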
The quasi-posterior distribution investigated in this paper is often referred to as the “Gibbs posterior” in the PAC-Bayes approach. This terminology is used in the literature such as [17,18,19,35,36]. Additionally, the estimator M ^ λ is sometimes referred to as the Gibbs estimator or the exponentially weighted aggregate (EWA) in the literature, such as [34,37,38].

2.2. Prior Specification

We consider, in this paper, the following spectral scaled Student prior distribution, with parameter $\tau>0$:
$$ \pi(M) \propto \det\big(\tau^2 I_k + M^\top M\big)^{-(p+k+2)/2}. $$
This prior distribution is designed to promote low-rankness by placing more probability mass on matrices with small singular values, which leads to approximately low-rank solutions. It has a similar form to the priors used in related works such as [39,40], and it is known to lead to good performance in different problems, such as high-dimensional regression and image denoising. It can be verified that
$$ \pi(M) \propto \prod_{j=1}^{k}\big(\tau^2 + s_j(M)^2\big)^{-(p+k+2)/2}, $$
where $s_j(M)$ denotes the $j$th largest singular value of $M$. The above expression is a scaled Student-type density evaluated at $s_j(M)$, which can be seen as inducing approximate sparsity on the $s_j(M)$ [30]. The log-sum penalty $\sum_{j=1}^{k}\log\big(\tau^2+s_j(M)^2\big)$ used by [39,40] is also known to enforce approximate sparsity on the singular values of $M$. This means that, under this prior, most of the $s_j(M)$ are close to 0, which implies that $M$ is well approximated by a low-rank matrix. Therefore, the prior promotes the low-rankness of $M$.
As previously stated, this prior is not conjugate for the problems at hand. However, it is particularly convenient for implementing gradient-based sampling algorithms, such as the Langevin Monte Carlo method discussed in more detail in Section 4, because the gradient of the log-posterior can be computed efficiently.
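To make this concrete, a minimal R sketch of the log-prior and its gradient, following the form written above (the function names are ours), is:

```r
# Spectral scaled Student prior: log pi(M) = -((p+k+2)/2) * log det(tau^2 I_k + M'M) + const.
log_prior <- function(M, tau) {
  p <- nrow(M); k <- ncol(M)
  S <- diag(tau^2, k) + crossprod(M)           # tau^2 I_k + M'M
  -((p + k + 2) / 2) * as.numeric(determinant(S, logarithm = TRUE)$modulus)
}
# Gradient of the log-prior with respect to M: -(p+k+2) * M %*% (tau^2 I_k + M'M)^{-1}.
grad_log_prior <- function(M, tau) {
  p <- nrow(M); k <- ncol(M)
  S <- diag(tau^2, k) + crossprod(M)
  -(p + k + 2) * M %*% solve(S)
}
```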

2.3. Theoretical Results

We impose the following sub-exponential assumption on the noise.
Assumption 2.
The entries $E_{i,j}$ of $E$ are independent. There exist two known constants $\sigma>0$ and $\xi>0$ such that, for all integers $k\ge 2$,
$$ \mathbb{E}\big(|E_{i,j}|^k\big) \le \frac{\sigma^2\,k!\,\xi^{k-2}}{2}. $$
Let us put
$$ \mathcal{C}_1 = 8\big(\sigma^2+C^2\big), \qquad \mathcal{C}_2 = 64\,C\max(\xi,C), \qquad \tau^* = \sqrt{\frac{\mathcal{C}_1(k+p)}{nkq\,\|X\|_F^2\,\|Z\|_F^2}}. $$
The statistical properties of the mean estimator are given in the following theorem, which provides a non-asymptotic analysis of our estimator.
Theorem 1.
Let Assumptions 1 and 2 be satisfied. Fix the parameter $\tau=\tau^*$ in the prior. Fix $\delta>0$ and define $\lambda^* := nq\min\big(1/(2\mathcal{C}_2),\ \delta/[\mathcal{C}_1(1+\delta)]\big)$. Then, for any $\varepsilon\in(0,1)$, we have, with probability at least $1-\varepsilon$ on the sample,
$$ \big\|\widehat{XMZ}_{\lambda^*} - XM^*Z\big\|_F^2 \le \inf_{0\le r\le\min(p,k)}\ \inf_{\substack{\bar M\in\mathbb{R}^{p\times k}\\ \mathrm{rank}(\bar M)\le r}} \bigg\{ (1+\delta)\,\big\|X\bar M Z - XM^*Z\big\|_F^2 + \frac{\mathcal{C}_1(1+\delta)^2}{\delta}\bigg[ 4r(k+p+2)\log\bigg(1+\frac{\|X\|_F\|Z\|_F\|\bar M\|_F}{\sqrt{\mathcal{C}_1}}\sqrt{\frac{nkq}{r(k+p)}}\bigg) + k+p+2\log\frac{2}{\varepsilon}\bigg]\bigg\}. $$
The choice $\lambda=\lambda^*$ is obtained by optimizing an upper bound on the risk $R$ (as shown in the proof of this theorem). However, this choice is not necessarily the best in practice, even though it gives a good estimate of the order of magnitude for $\lambda$. To ensure optimal performance, the user can use cross-validation to adjust the temperature parameter. Additionally, it is worth noting that $\mathrm{rank}(\bar M)\neq 0$ is not required in the above formula: if $\mathrm{rank}(\bar M)=0$, then $\bar M=0$ and we interpret $0\log(1+0/0)$ as 0.
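For concreteness, the constants and theoretical tuning parameters can be computed directly from their definitions; a minimal R sketch with assumed values for $\sigma$, $\xi$, $C$ and the dimensions is:

```r
# Compute C1, C2, tau* and lambda* of Theorem 1 (all numeric values below are assumptions).
sigma <- 1; xi <- 1; C <- 10             # assumed noise constants and sup-norm bound
n <- 100; q <- 10; p <- 10; k <- 20      # assumed dimensions
delta <- 1
X <- matrix(rnorm(n * p), n, p); Z <- matrix(rnorm(k * q), k, q)   # example designs
C1 <- 8 * (sigma^2 + C^2)
C2 <- 64 * C * max(xi, C)
tau_star    <- sqrt(C1 * (k + p) / (n * k * q * sum(X^2) * sum(Z^2)))
lambda_star <- n * q * min(1 / (2 * C2), delta / (C1 * (1 + delta)))
```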
The proof of this theorem is based on the PAC-Bayes theory, and it is provided in the Appendix A. By taking M ¯ = M * , we can obtain an upper bound on the infimum, leading to the following result.
Corollary 1.
Under the same assumptions and with the same $\tau$, $\lambda^*$ as in Theorem 1, let $r^*=\mathrm{rank}(M^*)$. Then, for any $\varepsilon\in(0,1)$, we have, with probability at least $1-\varepsilon$ on the sample,
$$ \big\|\widehat{XMZ}_{\lambda^*} - XM^*Z\big\|_F^2 \le \frac{\mathcal{C}_1(1+\delta)^2}{\delta}\bigg[ 4r^*(k+p+2)\log\bigg(1+\frac{\|X\|_F\|Z\|_F\|M^*\|_F}{\sqrt{\mathcal{C}_1}}\sqrt{\frac{nkq}{r^*(k+p)}}\bigg) + k+p+2\log\frac{2}{\varepsilon}\bigg]. $$
Theorem 1 provides an understanding of the statistical properties of the posterior mean. However, it is also important to understand the contraction properties of the quasi-posterior distribution. In the following theorem, we aim to provide a result that demonstrates this aspect of the proposed method.
Theorem 2.
Under the assumptions of Theorem 1, let $\varepsilon_n$ be any sequence in $(0,1)$ such that $\varepsilon_n\to 0$ as $n\to\infty$. Define
$$ \mathcal{M}_n = \bigg\{M\in\mathbb{R}^{p\times k}:\ \big\|\Pi_C(XMZ) - XM^*Z\big\|_F^2 \le \inf_{0\le r\le\min(p,k)}\ \inf_{\substack{\bar M\in\mathbb{R}^{p\times k}\\ \mathrm{rank}(\bar M)\le r}}\bigg\{(1+\delta)\big\|X\bar MZ-XM^*Z\big\|_F^2 + \frac{\mathcal{C}_1(1+\delta)^2}{\delta}\bigg[4r(k+p+2)\log\bigg(1+\frac{\|X\|_F\|Z\|_F\|\bar M\|_F}{\sqrt{\mathcal{C}_1}}\sqrt{\frac{nkq}{r(k+p)}}\bigg)+k+p+2\log\frac{2}{\varepsilon_n}\bigg]\bigg\}\bigg\}. $$
Then,
$$ \mathbb{E}\,\mathbb{P}_{M\sim\hat\rho_{\lambda^*}}\big(M\in\mathcal{M}_n\big) \ge 1-\varepsilon_n \xrightarrow[n\to\infty]{} 1. $$
The proof of this theorem is provided in Appendix A.

3. Inductive Matrix Completion

3.1. Model and Method

In the context of inductive matrix completion, given two side-information matrices $X$ and $Z$, we assume that only a random subset $\Omega$ of the entries of $Y$ in model (1) is observed. More precisely, we observe $m$ independent and identically distributed random pairs $(I_1,Y_1),\dots,(I_m,Y_m)$ given by
$$ Y_i = (XM^*Z)_{I_i} + E_i, \qquad i=1,\dots,m, $$
where $M^*\in\mathbb{R}^{p\times k}$ is the unknown parameter matrix, expected to be low-rank, and the number of observations satisfies $m< nq$. The noise variables $E_i$ are assumed to be independent with $\mathbb{E}(E_i)=0$. The indices $I_i$ are independent and identical copies of a random variable $I$ with distribution $\Pi$ on the set $\{1,\dots,n\}\times\{1,\dots,q\}$; we write $\Pi_{x,y}:=\Pi(I=(x,y))$.
Our goal in this paper is to investigate the problem of bilinear regression and also address the case where the response matrix contains missing data, a problem known as inductive matrix completion. In particular, when p = n , k = q and X = I n , Z = I q are the identity matrices, the problem reduces to the traditional matrix completion problem, which has been well studied in the literature [26]. Similarly, when k = q and Z = I q is the identity matrix, the problem becomes the reduced rank regression problem with incomplete response, which has also been studied in recent works, such as [23,41]. However, in the context of inductive matrix completion, we focus on the more general case where X and Z contain additional explanatory variables; in other words, we consider the side information from the n users and the q items in our model [8].
It has been acknowledged that there are two different ways to model the observed values of Y, either by including or excluding the possibility of observing the same entry multiple times. Previous studies have examined both of these methods, such as the examination of matrix completion without replacement in [25] and with replacement in [26]. Both methods have practical uses and use similar techniques for estimation. This particular study focuses on the scenario where the variables I i are independently and identically distributed, meaning that it is possible to observe the same entry multiple times. Additionally, it is important to note that, according to the findings presented in Section 6 of [42], the results of this study can also be applied to the scenario of sampling without replacement, as long as the sampling is performed uniformly and there is no observation noise.
We now adapt the quasi-Bayesian approach for bilinear regression from Section 2 to the context of inductive matrix completion. For a probability distribution $P$ on $\{1,\dots,n_1\}\times\{1,\dots,n_2\}$, we generalize the Frobenius norm by $\|A\|_{F,P}^2 = \sum_{i,j} P[(i,j)]\,A_{i,j}^2$; note that when $P$ is the uniform distribution, $\|A\|_{F,P}^2 = \|A\|_F^2/(n_1 n_2)$.
For a matrix $M\in\mathbb{R}^{p\times k}$, we denote the empirical risk of $M$, $r(M)$, and its expected risk, $R(M)$, respectively as
$$ r(M) = \frac{1}{m}\sum_{i=1}^m \big(Y_i - (\Pi_C(XMZ))_{I_i}\big)^2, $$
$$ R(M) = \mathbb{E}[r(M)] = \mathbb{E}\Big[\big(Y_1 - (\Pi_C(XMZ))_{I_1}\big)^2\Big]. $$
As in Section 2, we focus on the predictive aspects of the model, that is, a matrix $M$ predicts almost as well as $M^*$ if $R(M)-R(M^*)$ is small. Under the assumption that $E_i$ has a finite variance, the Pythagorean theorem yields
$$ R(M) - R(M^*) = \big\|\Pi_C(XMZ) - XM^*Z\big\|_{F,\Pi}^2 \qquad (6) $$
for any $M$, which means that our results can also be interpreted in terms of estimation of $M^*$ with respect to a generalized Frobenius norm.
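For illustration, the empirical risk over the observed entries can be computed as in the following R sketch, where `obs` is an m x 2 matrix of observed (row, column) indices and `y` the corresponding observed responses (these names are ours):

```r
# Empirical risk r(M) = (1/m) * sum_i ( y_i - Pi_C(X M Z)[obs_i] )^2 over observed entries.
risk_imc <- function(M, X, Z, obs, y, C) {
  fitted <- pmin(pmax(X %*% M %*% Z, -C), C)   # Pi_C(X M Z)
  mean((y - fitted[obs])^2)                    # matrix-indexing by the m x 2 index matrix
}
```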
Here, the prior $\pi$ is the low-rank-inducing prior specified in Section 2.2 above. For any $\lambda>0$, we define the quasi-posterior
$$ \hat\rho_\lambda(dM) \propto \exp\big(-\lambda\, r(M)\big)\,\pi(dM). $$
Our choice of $\lambda$ is specified below.
The truncated posterior mean of $XMZ$ is given by
$$ \widehat{XMZ}_\lambda = \int \Pi_C(XMZ)\,\hat\rho_\lambda(dM). $$
As in the context of bilinear regression, and for the same technical reasons, this truncation has very little impact in practice for reasonable values of $C$.

3.2. Theoretical Results

In this section, we derive the statistical properties of the posterior ρ ^ λ and the mean estimator X M Z ^ λ for the context of inductive matrix completion. Let us first state our assumptions on this model.
Assumption 3.
The noise variables $E_1,\dots,E_m$ are independent of $I_1,\dots,I_m$. There exist two known constants $\sigma>0$ and $\xi>0$ such that, for all integers $k\ge 2$,
$$ \mathbb{E}\big(|E_i|^k\big) \le \frac{\sigma^2\,k!\,\xi^{k-2}}{2}. $$
Assumptions 1 and 3 are both standard; they have been used in [41] for theoretical analysis of reduced rank regression and in [26] for trace regression and matrix completion.
Let us put
$$ \mathcal{C}_1 = 8\big(\sigma^2+C^2\big), \qquad \mathcal{C}_2 = 64\,C\max(\xi,C), \qquad \tau^* = \sqrt{\frac{\mathcal{C}_1(k+p)}{mkp\,\|X\|_F^2\,\|Z\|_F^2}}. $$
Theorem 3.
Let Assumptions 1 and 3 be satisfied. Fix the parameter $\tau=\tau^*$ in the prior. Fix $\delta>0$ and define $\lambda^* := m\min\big(1/(2\mathcal{C}_2),\ \delta/[\mathcal{C}_1(1+\delta)]\big)$. Then, for any $\varepsilon\in(0,1)$, we have, with probability at least $1-\varepsilon$ on the sample,
$$ \big\|\widehat{XMZ}_{\lambda^*} - XM^*Z\big\|_{F,\Pi}^2 \le \inf_{0\le r\le\min(p,k)}\ \inf_{\substack{\bar M\in\mathbb{R}^{p\times k}\\ \mathrm{rank}(\bar M)\le r}} \bigg\{ (1+\delta)\,\big\|X\bar M Z - XM^*Z\big\|_{F,\Pi}^2 + \frac{\mathcal{C}_1(1+\delta)^2}{\delta\,m}\bigg[ 4r(k+p+2)\log\bigg(1+\frac{\|X\|_F\|Z\|_F\|\bar M\|_F}{\sqrt{\mathcal{C}_1}}\sqrt{\frac{mkp}{r(k+p)}}\bigg) + k+p+2\log\frac{2}{\varepsilon}\bigg]\bigg\}. $$
Similar to the context of bilinear regression, the choices $\lambda=\lambda^*$, $\tau=\tau^*$ come from the optimization of an upper bound on the risk $R$ (see the proof of this theorem). Therefore, these choices are not necessarily the best in practice, even though they give a good order of magnitude for tuning these parameters. The user could use cross-validation to tune them in practice. Note again that $\mathrm{rank}(\bar M)\neq 0$ is not required in the above formula: if $\mathrm{rank}(\bar M)=0$, then $\bar M=0$ and we interpret $0\log(1+0/0)$ as 0. The proof of this theorem is provided in Appendix A. In particular, we can upper bound the infimum over $\bar M$ by taking $\bar M=M^*$, which leads to the following result.
Corollary 2.
Under the assumptions of Theorem 3, let $r^*=\mathrm{rank}(M^*)$. Put
$$ \mathcal{R}_{\delta,m,p,k,r^*,\varepsilon} := \frac{\mathcal{C}_1(1+\delta)^2}{\delta\,m}\bigg[ 4r^*(k+p+2)\log\bigg(1+\frac{\|X\|_F\|Z\|_F\|M^*\|_F}{\sqrt{\mathcal{C}_1}}\sqrt{\frac{mkp}{r^*(k+p)}}\bigg) + k+p+2\log\frac{2}{\varepsilon}\bigg]; $$
then
$$ \big\|\widehat{XMZ}_{\lambda^*} - XM^*Z\big\|_{F,\Pi}^2 \le \mathcal{R}_{\delta,m,p,k,r^*,\varepsilon}, $$
and, in particular, if the sampling distribution $\Pi$ is uniform,
$$ \big\|\widehat{XMZ}_{\lambda^*} - XM^*Z\big\|_F^2 \le nq\,\mathcal{R}_{\delta,m,p,k,r^*,\varepsilon}. $$
Remark 1.
Up to a logarithmic term, our error rate $r^*(k+p)/m$ is similar to the best known rate to date, derived in [8].
While Theorem 3 is about the finite sample convergence rate of the posterior mean, it is actually possible to prove that the quasi-posterior ρ ^ λ contracts around M * at the same rate.
Theorem 4.
Under the same assumptions as in Theorem 3, and with the same definitions of $\tau$ and $\lambda^*$, let $\varepsilon_m$ be any sequence in $(0,1)$ such that $\varepsilon_m\to 0$ as $m\to\infty$. Define
$$ \Omega_m = \bigg\{M\in\mathbb{R}^{p\times k}:\ \big\|\Pi_C(XMZ) - XM^*Z\big\|_{F,\Pi}^2 \le \inf_{1\le r\le\min(p,k)}\ \inf_{\substack{\bar M\in\mathbb{R}^{p\times k}\\ \mathrm{rank}(\bar M)\le r}}\bigg[(1+\delta)\big\|X\bar MZ-XM^*Z\big\|_{F,\Pi}^2 + \frac{\mathcal{C}_1(1+\delta)^2}{\delta\,m}\bigg(4r(k+p+2)\log\bigg(1+\frac{\|X\|_F\|Z\|_F\|\bar M\|_F}{\sqrt{\mathcal{C}_1}}\sqrt{\frac{mkp}{r(k+p)}}\bigg)+k+p+2\log\frac{2}{\varepsilon_m}\bigg)\bigg]\bigg\}. $$
Then,
$$ \mathbb{E}\,\mathbb{P}_{M\sim\hat\rho_{\lambda^*}}\big(M\in\Omega_m\big) \ge 1-\varepsilon_m \xrightarrow[m\to\infty]{} 1. $$
The proof of this theorem is provided in Appendix A.

4. Numerical Studies

4.1. Langevin Monte Carlo Implementation

In this section, we propose to sample from the (quasi-)posteriors of Section 2 and Section 3 by a suitable version of the Langevin Monte Carlo (LMC) algorithm, a gradient-based sampling method. We use a constant step-size unadjusted LMC algorithm; see [43] for more details. The algorithm is defined by an initial matrix $M_0$ and the recursion
$$ M_{k+1} = M_k + h\,\nabla\log\hat\rho_\lambda(M_k) + \sqrt{2h}\,N_k, \qquad k=0,1,\dots \qquad (8) $$
where $h>0$ is the step size, $\hat\rho_\lambda$ is the (quasi-)posterior, and $N_0,N_1,\dots$ are independent random matrices with independent standard Gaussian entries. A pseudo-code for LMC is given in Algorithm A1. For small values of the step size $h$, the output $\hat M$ of Algorithm A1 is very close to the integral (3) of interest. However, if $h$ is not small enough, the iterates can explode [44]. In such cases, one option is to include a Metropolis–Hastings correction in the algorithm. Another possible choice is to take a smaller $h$ and restart the algorithm; although this slows the algorithm down, we keep some control over its execution time. The Metropolis–Hastings approach, on the other hand, guarantees convergence to the desired distribution, but the algorithm is slowed down considerably by the additional acceptance/rejection step at each iteration.
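For concreteness, the following is a minimal R sketch of the LMC recursion (8) for the bilinear regression quasi-posterior with the prior of Section 2.2; the function names are ours, and the truncation $\Pi_C$ is ignored in the gradient since, as noted above, it rarely binds in practice.

```r
# Unadjusted Langevin Monte Carlo sketch for the bilinear regression quasi-posterior.
# Gradient of log rho_hat: (2*lambda/(n*q)) * X'(Y - XMZ)Z'  -  (p+k+2) * M (tau^2 I_k + M'M)^{-1}
lmc <- function(Y, X, Z, lambda, tau, h, n_iter, burnin, M0) {
  p <- ncol(X); k <- nrow(Z)
  grad_log_post <- function(M) {
    resid <- Y - X %*% M %*% Z
    (2 * lambda / length(Y)) * t(X) %*% resid %*% t(Z) -
      (p + k + 2) * M %*% solve(diag(tau^2, k) + crossprod(M))
  }
  M <- M0; M_mean <- 0 * M0
  for (t in 1:n_iter) {
    M <- M + h * grad_log_post(M) + sqrt(2 * h) * matrix(rnorm(p * k), p, k)
    if (t > burnin) M_mean <- M_mean + M / (n_iter - burnin)   # posterior-mean estimate
  }
  M_mean
}
```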
Next, we describe a Metropolis–Hastings correction to the LMC algorithm. It guarantees convergence to the (quasi-)posterior, and it also provides a useful way of choosing $h$. More precisely, we use the update rule in (8) as a proposal for a new candidate,
$$ \widetilde M_{k+1} = M_k + h\,\nabla\log\hat\rho_\lambda(M_k) + \sqrt{2h}\,N_k, \qquad k=0,1,\dots \qquad (9) $$
Note that $\widetilde M_{k+1}$ is normally distributed with mean $M_k + h\nabla\log\hat\rho_\lambda(M_k)$ and covariance matrix equal to $2h$ times the identity. This proposal is then accepted or rejected according to the Metropolis–Hastings rule: it is accepted with probability
$$ A_{\mathrm{MALA}} := \min\left\{1,\ \frac{\hat\rho_\lambda(\widetilde M_{k+1})\,q(M_k\mid\widetilde M_{k+1})}{\hat\rho_\lambda(M_k)\,q(\widetilde M_{k+1}\mid M_k)}\right\}, \qquad (10) $$
where
$$ q(x'\mid x) \propto \exp\Big(-\frac{1}{4h}\,\big\|x' - x - h\,\nabla\log\hat\rho_\lambda(x)\big\|_F^2\Big) $$
is the transition probability density from $x$ to $x'$. The details of the Metropolis-adjusted Langevin algorithm (MALA) are presented in Algorithm A2. Compared to random-walk Metropolis–Hastings, MALA usually proposes moves into regions of higher probability, which are then more likely to be accepted.
We note that the step-size h for MALA is chosen such that the acceptance rate is approximately 0.5 following [45], while the step-size for LMC in the same setting should be smaller than the one for MALA [46].
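A corresponding R sketch of the Metropolis-adjusted version (Algorithm A2), reusing a log quasi-posterior `log_post` and its gradient `grad_log_post` passed as functions (names are ours), is:

```r
# MALA sketch: propose with the Langevin drift, then accept/reject with the MH ratio (10).
mala <- function(log_post, grad_log_post, h, n_iter, burnin, M0) {
  log_q <- function(to, from) {            # log transition density q(to | from), up to a constant
    -sum((to - from - h * grad_log_post(from))^2) / (4 * h)
  }
  M <- M0; M_mean <- 0 * M0
  for (t in 1:n_iter) {
    prop <- M + h * grad_log_post(M) + sqrt(2 * h) * matrix(rnorm(length(M)), nrow(M), ncol(M))
    log_accept <- log_post(prop) - log_post(M) + log_q(M, prop) - log_q(prop, M)
    if (log(runif(1)) < log_accept) M <- prop     # accept the proposal, otherwise keep M
    if (t > burnin) M_mean <- M_mean + M / (n_iter - burnin)
  }
  M_mean
}
```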

4.2. Simulation Studies for Bilinear Regression

We perform some numerical studies on simulated data to assess the performance of our proposed algorithms. All simulations were conducted using the R statistical software [47].
For fixed data dimensions $q=10$ and $k=20$, we vary the sample size $n\in\{100, 1000\}$ to examine its effect, while the dimension of the coefficient matrix is varied through $p\in\{10, 100\}$. The entries of the design matrices $X$ and $Z$ are independently simulated from the standard Gaussian distribution $\mathcal{N}(0,1)$. Then, given a matrix $M^*$, we simulate the response matrix $Y$ from model (1), where the entries of the noise matrix $E$ are independently sampled from $\mathcal{N}(0,1)$. We consider the following setups for the true coefficient matrix (a minimal data-generation sketch in R is given after the list):
  • Model I: The true coefficient matrix $M^*$ is a rank-2 matrix generated as $M^* = B_1B_2^\top$, where $B_1\in\mathbb{R}^{p\times 2}$, $B_2\in\mathbb{R}^{k\times 2}$, and all entries of $B_1$ and $B_2$ are independently sampled from $\mathcal{N}(0,1)$.
  • Model II: An approximately low-rank setup is studied. This series of simulations is similar to Model I, except that the true coefficient matrix is no longer of rank 2 but can be well approximated by a rank-2 matrix:
    $$ M^* = 2\, B_1B_2^\top + U, $$
    where $U$ is a matrix whose entries are independently sampled from $\mathcal{N}(0, 0.1)$.
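The following R sketch generates one dataset under Model I (with the Model II variant shown as a comment); the noise variance 0.1 in Model II is taken to be the second argument of $\mathcal{N}(0, 0.1)$, which is an assumption on our part:

```r
# Simulate one replication: designs, a rank-2 coefficient matrix and the response from model (1).
set.seed(1)
n <- 100; p <- 10; k <- 20; q <- 10
X  <- matrix(rnorm(n * p), n, p)
Z  <- matrix(rnorm(k * q), k, q)
B1 <- matrix(rnorm(p * 2), p, 2)
B2 <- matrix(rnorm(k * 2), k, 2)
M_star <- B1 %*% t(B2)                                   # Model I: exact rank 2
# M_star <- 2 * B1 %*% t(B2) + matrix(rnorm(p * k, sd = sqrt(0.1)), p, k)   # Model II
Y <- X %*% M_star %*% Z + matrix(rnorm(n * q), n, q)
```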
We compare our approaches, denoted by LMC and MALA, against the (generalized) ordinary least squares estimator [2], denoted by OLS. The OLS estimator is defined as
$$ \hat M_{\mathrm{OLS}} = (X^\top X)^{+}X^\top\, Y\, Z^\top(ZZ^\top)^{+}, $$
where $A^{+}$ denotes the Moore–Penrose inverse of a matrix $A$. We fix $\lambda = nq$ and $\tau = 1$; the LMC and MALA methods are initialized at the OLS estimator and run for 10,000 iterations, with the first 1000 steps discarded as burn-in.
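A minimal R sketch of this generalized OLS estimator, using Moore–Penrose inverses via MASS::ginv, is:

```r
library(MASS)  # for ginv(), the Moore-Penrose pseudo-inverse
# Generalized ordinary least squares: M_OLS = (X'X)^+ X' Y Z' (Z Z')^+ .
M_ols <- ginv(crossprod(X)) %*% t(X) %*% Y %*% t(Z) %*% ginv(tcrossprod(Z))
```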
The evaluations are performed using the mean squared estimation error (Est) and the normalized (relative) mean squared error (Nmse),
$$ \mathrm{Est} := \big\|\hat M - M^*\big\|_F^2/(pk), \qquad \mathrm{Nmse} := \big\|\hat M - M^*\big\|_F^2/\big\|M^*\big\|_F^2, $$
and the prediction error (Pred),
$$ \mathrm{Pred} := \big\|X(\hat M - M^*)Z\big\|_F^2/(nq), $$
where $\hat M$ denotes the LMC, MALA, or OLS estimator. We report the averages and standard deviations of these errors over 100 data replications.
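The three criteria can be computed as in the following short R sketch (the function name is ours):

```r
# Estimation, normalized and prediction errors for an estimator M_hat of M_star.
errors <- function(M_hat, M_star, X, Z) {
  diff <- M_hat - M_star
  c(Est  = sum(diff^2) / (nrow(M_star) * ncol(M_star)),
    Nmse = sum(diff^2) / sum(M_star^2),
    Pred = sum((X %*% diff %*% Z)^2) / (nrow(X) * ncol(Z)))
}
```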
The results of our study are presented in Table 1 and Table 2. From the tables, it can be observed that our proposed methods perform similarly to the OLS method. However, the estimation method obtained from the MALA algorithm often results in smaller prediction errors, particularly in high-dimensional settings. This advantage is even more pronounced when the method is applied in the context of inductive matrix completion, as discussed in the next subsection.

4.3. Simulation Studies for Inductive Matrix Completion

The simulation settings for inductive matrix completion are similar to the settings for bilinear regression, Section 4.2. However, after obtaining the response matrix Y, we remove uniformly at random κ = 10 % and κ = 30 % of the entries of Y. Here, κ denotes the missing rate. We denote the response matrix with missing entries by Y miss .
In the context of inductive matrix completion, we only observe the response matrix with missing entries, $Y_{\mathrm{miss}}$, and thus cannot construct the OLS estimator as in the bilinear regression case. For this purpose, we first impute the missing entries of $Y_{\mathrm{miss}}$ using the R package softImpute [48], where the rank of $M^*$ is supplied as the target rank for the imputation. We denote the resulting imputed matrix by $Y_{\mathrm{imp}}$.
We compare our approaches, denoted by LMC and MALA, against the (imputed, generalized) ordinary least squares estimator, denoted by OLS_imp. It is defined as
$$ \hat M_{\mathrm{OLS\_imp}} = (X^\top X)^{+}X^\top\, Y_{\mathrm{imp}}\, Z^\top(ZZ^\top)^{+}, $$
where $A^{+}$ denotes the Moore–Penrose inverse of a matrix $A$. The LMC and MALA methods are initialized at the OLS_imp estimator and run for 10,000 iterations, with the first 1000 steps discarded as burn-in.
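A minimal R sketch of this imputation step and the resulting OLS_imp initializer, assuming `Y_miss` contains NA at the unobserved positions and `r` is the rank supplied to softImpute, is:

```r
library(softImpute)
library(MASS)
# Impute the missing entries of Y_miss at the specified rank, then form the generalized OLS estimator.
fit   <- softImpute(Y_miss, rank.max = r, lambda = 0)
Y_imp <- complete(Y_miss, fit)                              # fill in the NA entries
M_ols_imp <- ginv(crossprod(X)) %*% t(X) %*% Y_imp %*% t(Z) %*% ginv(tcrossprod(Z))
```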
As previously discussed in Section 4.2, we present the averages and the standard deviation of the mean squared estimation error (Est), the normalized (relative) mean square error (Nmse), and the prediction error (Pred) over 100 data replications in our results.
The results are detailed in Table 3 and Table 4. It is evident from these tables that the results obtained from our MALA method surpass those of the other methods in terms of prediction error in most of the settings considered. This advantage becomes more pronounced as the missing rate in the response matrix increases. Additionally, it is worth noting that our MALA method is robust and performs well in the approximate low-rank setting (model II), while the OLS and LMC methods do not.

5. Discussion and Conclusions

In this paper, we focus on the problem of bilinear regression and its extension, the problem of inductive matrix completion, where the response matrix contains missing data. We propose a novel approach that combines elements of Bayesian statistics with a quasi-likelihood method. Our proposed method first addresses the problem of bilinear regression using a quasi-Bayesian approach and then adapts this approach to the problem of inductive matrix completion. By making use of a low-rankness assumption and leveraging the powerful PAC-Bayes bound technique, we provide statistical properties for our proposed estimators and for the quasi-posteriors.
Our proposed method includes an efficient gradient-based sampling algorithm that is designed to sample from the (quasi) posterior distribution. This algorithm allows for the approximate computation of mean estimators. These methods, referred to as LMC and MALA, were tested in various simulation studies and were found to perform well when compared to the ordinary least squared method. The ability to accurately sample from the (quasi) posterior distribution and compute mean estimators makes these methods a valuable tool for data analysis and modeling.
There are still some unresolved issues that require further investigation. One of these is the presence of missing data in the covariate matrices X and Z, which can have a significant impact on the analysis and may lead to biased results. Another area that needs further exploration is the assumption of independent and identically distributed data; in some cases, this assumption may not hold, and alternative models that allow for a dispersion matrix may be needed. These are potential topics for future research to address and to further our understanding of these issues.

Funding

TTM is supported by the Norwegian Research Council grant number 309960 through the Centre for Geophysical Forecasting at NTNU.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The R codes used in the numerical experiments are available at: https://github.com/tienmt/blr_imc (accessed on 10 February 2023).

Acknowledgments

The author would like to thank the anonymous referees for their useful comments.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Appendix: Proofs

The main technique for our proofs is the oracle-type PAC-Bayes bounds, in the spirit of [35]. We start with a few preliminary lemmas.

Appendix A.1. Preliminary Lemmas

First, we state a version of Bernstein’s inequality from Proposition 2.9 page 24 in [49].
Lemma A1
(Bernstein's inequality). Let $U_1,\dots,U_n$ be independent real-valued random variables. Assume that there are two constants $v$ and $w$ such that $\sum_{i=1}^n\mathbb{E}[U_i^2]\le v$ and, for all integers $k\ge 3$, $\sum_{i=1}^n\mathbb{E}\big[(U_i)_+^k\big]\le \frac{v\,k!\,w^{k-2}}{2}$. Then, for any $\zeta\in(0,1/w)$,
$$ \mathbb{E}\exp\bigg[\zeta\sum_{i=1}^n\big(U_i-\mathbb{E}(U_i)\big)\bigg] \le \exp\bigg(\frac{v\,\zeta^2}{2(1-w\zeta)}\bigg). \qquad \text{(A1)} $$
Another basic tool to derive the PAC-Bayes bounds is Donsker and Varadhan's variational inequality; see Lemma 1.1.3 in Catoni [17] for a proof (among others). From now on, for any $\Theta\subset\mathbb{R}^{n_1\times n_2}$, we let $\mathcal{P}(\Theta)$ denote the set of all probability distributions on $\Theta$ equipped with the Borel $\sigma$-algebra. For $(\mu,\nu)\in\mathcal{P}(\Theta)^2$, the Kullback–Leibler divergence is defined by $\mathcal{K}(\nu,\mu)=\int\log\big(\frac{d\nu}{d\mu}(\theta)\big)\,\nu(d\theta)$ if $\nu$ admits a density $\frac{d\nu}{d\mu}$ with respect to $\mu$, and $\mathcal{K}(\nu,\mu)=+\infty$ otherwise.
Lemma A2
(Donsker and Varadhan's variational formula). Let $\mu\in\mathcal{P}(\Theta)$. For any measurable, bounded function $h:\Theta\to\mathbb{R}$, we have
$$ \log\int e^{h(\theta)}\mu(d\theta) = \sup_{\rho\in\mathcal{P}(\Theta)}\bigg[\int h(\theta)\rho(d\theta) - \mathcal{K}(\rho,\mu)\bigg]. $$
Moreover, the supremum with respect to $\rho$ on the right-hand side is attained by the Gibbs measure $\mu_h$, defined by its density with respect to $\mu$,
$$ \frac{d\mu_h}{d\mu}(\theta) = \frac{e^{h(\theta)}}{\int e^{h(\vartheta)}\mu(d\vartheta)}. $$
These two lemmas are the only tools we need to prove Theorems 1 and 2. Their proofs are quite similar, with a few differences. For the sake of simplicity, we will state the common parts of the proofs as a separate result in Lemma A3. Note that the proof of this lemma will use Lemmas A1 and A2.
Lemma A3.
Under Assumptions 1 and 2, put
$$ \alpha = \lambda - \frac{\lambda^2\,\mathcal{C}_1}{2nq\big(1-\frac{\mathcal{C}_2\lambda}{nq}\big)} \quad\text{and}\quad \beta = \lambda + \frac{\lambda^2\,\mathcal{C}_1}{2nq\big(1-\frac{\mathcal{C}_2\lambda}{nq}\big)}. \qquad \text{(A2)} $$
Then, for any $\varepsilon\in(0,1)$ and any $\lambda\in(0,\,nq/\mathcal{C}_2)$,
$$ \mathbb{E}\int\exp\bigg[\alpha\big(R(M)-R(M^*)\big) - \lambda\big(r(M)-r(M^*)\big) - \log\frac{d\hat\rho_\lambda}{d\pi}(M) - \log\frac{2}{\varepsilon}\bigg]\,\hat\rho_\lambda(dM) \le \frac{\varepsilon}{2} \qquad \text{(A3)} $$
and
$$ \mathbb{E}\sup_{\rho\in\mathcal{P}(\mathbb{R}^{p\times k})}\exp\bigg[-\beta\Big(\int R\,d\rho - R(M^*)\Big) + \lambda\Big(\int r\,d\rho - r(M^*)\Big) - \mathcal{K}(\rho,\pi) - \log\frac{2}{\varepsilon}\bigg] \le \frac{\varepsilon}{2}. \qquad \text{(A4)} $$
Proof of Lemma A3.
We prove the first inequality, (A3), as follows. Fix any $M$ with $\|XMZ\|_\infty\le C$ and put
$$ T_{ij} = \big(Y_{ij} - (XM^*Z)_{ij}\big)^2 - \big(Y_{ij} - (\Pi_C(XMZ))_{ij}\big)^2. $$
Note that the random variables $T_{ij}$, $i=1,\dots,n$, $j=1,\dots,q$, are independent by construction. We have
$$ \begin{aligned} \sum_{i=1}^n\sum_{j=1}^q\mathbb{E}[T_{ij}^2] &= \sum_{i,j}\mathbb{E}\Big[\big(2Y_{ij}-(XM^*Z)_{ij}-(\Pi_C(XMZ))_{ij}\big)^2\big((XM^*Z)_{ij}-(\Pi_C(XMZ))_{ij}\big)^2\Big] \\ &= \sum_{i,j}\mathbb{E}\Big[\big(2E_{ij}+(XM^*Z)_{ij}-(\Pi_C(XMZ))_{ij}\big)^2\big((XM^*Z)_{ij}-(\Pi_C(XMZ))_{ij}\big)^2\Big] \\ &\le \sum_{i,j}\mathbb{E}\Big[8\big(E_{ij}^2+C^2\big)\Big]\big((XM^*Z)_{ij}-(\Pi_C(XMZ))_{ij}\big)^2 \\ &\le 8\big(\sigma^2+C^2\big)\sum_{i,j}\big((XM^*Z)_{ij}-(\Pi_C(XMZ))_{ij}\big)^2 \\ &= 8nq\big(\sigma^2+C^2\big)\big(R(M)-R(M^*)\big) = nq\,\mathcal{C}_1\big(R(M)-R(M^*)\big) =: v(M,M^*). \end{aligned} $$
Next, we have, for any integer $k\ge 3$,
$$ \begin{aligned} \sum_{i=1}^n\sum_{j=1}^q\mathbb{E}\big[(T_{ij})_+^k\big] &\le \sum_{i,j}\mathbb{E}\Big[\big|2Y_{ij}-(XM^*Z)_{ij}-(\Pi_C(XMZ))_{ij}\big|^k\,\big|(XM^*Z)_{ij}-(\Pi_C(XMZ))_{ij}\big|^k\Big] \\ &\le \sum_{i,j}\mathbb{E}\Big[2^{k-1}\big(|2E_{ij}|^k+(2C)^k\big)\,\big|(XM^*Z)_{ij}-(\Pi_C(XMZ))_{ij}\big|^k\Big] \\ &\le \sum_{i,j}\mathbb{E}\Big[2^{2k-1}\big(|E_{ij}|^k+C^k\big)\Big]\,(2C)^{k-2}\big((XM^*Z)_{ij}-(\Pi_C(XMZ))_{ij}\big)^2 \\ &\le 2^{2k-1}\big(\sigma^2\,k!\,\xi^{k-2}+C^k\big)(2C)^{k-2}\,\frac{v(M,M^*)}{8(\sigma^2+C^2)} \\ &\le 2^{3k-6}\,\frac{\sigma^2\xi^{k-2}+C^k}{\sigma^2+C^2}\,C^{k-2}\,k!\,v(M,M^*) \\ &\le 2^{3k-5}\big(\xi^{k-2}+C^{k-2}\big)\,C^{k-2}\,k!\,v(M,M^*) \\ &\le 2^{3k-4}\max(\xi,C)^{k-2}\,C^{k-2}\,k!\,v(M,M^*) = \big[2^3\max(\xi,C)\,C\big]^{k-2}\,2^2\,k!\,v(M,M^*), \end{aligned} $$
and we use the fact that, for any $k\ge 3$, $2^2\le 2^{3(k-2)}/2$, to obtain
$$ \sum_{i=1}^n\sum_{j=1}^q\mathbb{E}\big[(T_{ij})_+^k\big] \le \big[2^6\max(\xi,C)\,C\big]^{k-2}\,k!\,\frac{v(M,M^*)}{2} = \frac{v(M,M^*)\,k!\,\mathcal{C}_2^{k-2}}{2}. $$
Thus, we can apply Lemma A1 with the $U_i$'s given by the $T_{ij}$'s, $v := v(M,M^*)$, $w := \mathcal{C}_2$ and $\zeta := \lambda/(nq)$. We obtain, for any $\lambda\in(0,\,nq/w)=(0,\,nq/\mathcal{C}_2)$,
$$ \mathbb{E}\exp\Big[\lambda\big(R(M)-R(M^*)-r(M)+r(M^*)\big)\Big] \le \exp\bigg(\frac{v\,\lambda^2}{2(nq)^2\big(1-\frac{w\lambda}{nq}\big)}\bigg) = \exp\bigg(\frac{\mathcal{C}_1\big(R(M)-R(M^*)\big)\lambda^2}{2nq\big(1-\frac{\mathcal{C}_2\lambda}{nq}\big)}\bigg). $$
Rearranging terms, and using the definition of $\alpha$ in (A2),
$$ \mathbb{E}\exp\Big[\alpha\big(R(M)-R(M^*)\big) - \lambda\big(r(M)-r(M^*)\big)\Big] \le 1. $$
Multiplying both sides by $\varepsilon/2$ and then integrating with respect to the probability distribution $\pi(\cdot)$, we obtain
$$ \int\mathbb{E}\exp\Big[\alpha\big(R(M)-R(M^*)\big) - \lambda\big(r(M)-r(M^*)\big) - \log\frac{2}{\varepsilon}\Big]\pi(dM) \le \frac{\varepsilon}{2}. $$
Next, Fubini's theorem gives
$$ \mathbb{E}\int\exp\Big[\alpha\big(R(M)-R(M^*)\big) - \lambda\big(r(M)-r(M^*)\big) - \log\frac{2}{\varepsilon}\Big]\pi(dM) \le \frac{\varepsilon}{2}, $$
and we note that, for any measurable function $h$,
$$ \int\exp[h(M)]\,\pi(dM) = \int\exp\Big[h(M) - \log\frac{d\hat\rho_\lambda}{d\pi}(M)\Big]\hat\rho_\lambda(dM), $$
to obtain (A3).
Let us now prove (A4). Here again, we start with an application of Lemma A1, but this time with the $U_i$'s given by the $-T_{ij}$'s (we keep $v := v(M,M^*)$, $w := \mathcal{C}_2$ and $\zeta := \lambda/(nq)$). We obtain, for any $\lambda\in(0,\,nq/\mathcal{C}_2)$,
$$ \mathbb{E}\exp\Big[\lambda\big(r(M)-r(M^*)-R(M)+R(M^*)\big)\Big] \le \exp\bigg(\frac{\mathcal{C}_1\big(R(M)-R(M^*)\big)\lambda^2}{2nq\big(1-\frac{\mathcal{C}_2\lambda}{nq}\big)}\bigg). $$
Rearranging terms, using the definition of $\beta$ in (A2) and multiplying both sides by $\varepsilon/2$, we obtain
$$ \mathbb{E}\exp\Big[-\beta\big(R(M)-R(M^*)\big) + \lambda\big(r(M)-r(M^*)\big) - \log\frac{2}{\varepsilon}\Big] \le \frac{\varepsilon}{2}. $$
We integrate with respect to $\pi$ and use Fubini's theorem to obtain
$$ \mathbb{E}\int\exp\Big[-\beta\big(R(M)-R(M^*)\big) + \lambda\big(r(M)-r(M^*)\big) - \log\frac{2}{\varepsilon}\Big]\pi(dM) \le \frac{\varepsilon}{2}. $$
Here, we use a different argument from the proof of the first inequality: applying Lemma A2 to the integral directly gives (A4).    □
Finally, in both proofs, we will often use distributions $\rho\in\mathcal{P}(\mathbb{R}^{p\times k})$ defined as translations of the prior $\pi$. We introduce the following notation.
Definition A1.
For any matrix $\bar M\in\mathbb{R}^{p\times k}$, we define $\rho_{\bar M}\in\mathcal{P}(\mathbb{R}^{p\times k})$ by
$$ \rho_{\bar M}(M) = \pi(\bar M - M). $$
The following technical lemmas from [31] will be useful in the proofs.
Lemma A4
(Lemma 1 in [31]). We have $\int \|M\|_F^2\,\pi(dM) \le pk\tau^2$.
Lemma A5
(Lemma 2 in [31]). For any $\bar M\in\mathbb{R}^{p\times k}$, we have
$$ \mathcal{K}(\rho_{\bar M},\pi) \le 2\,\mathrm{rank}(\bar M)\,(k+p+2)\,\log\bigg(1+\frac{\|\bar M\|_F}{\tau\sqrt{2\,\mathrm{rank}(\bar M)}}\bigg), $$
with the convention $0\log(1+0/0)=0$.

Appendix A.2. Proof of Theorem 1

Proof of Theorem 1.
An application of Jensen's inequality to inequality (A3) yields
$$ \mathbb{E}\exp\bigg[\alpha\Big(\int R\,d\hat\rho_\lambda - R(M^*)\Big) - \lambda\Big(\int r\,d\hat\rho_\lambda - r(M^*)\Big) - \mathcal{K}(\hat\rho_\lambda,\pi) - \log\frac{2}{\varepsilon}\bigg] \le \frac{\varepsilon}{2}. $$
Using the standard Chernoff trick to turn an exponential moment inequality into a deviation inequality, namely $\exp(x)\ge\mathbf{1}_{\mathbb{R}_+}(x)$, we obtain
$$ \mathbb{P}\bigg[\alpha\Big(\int R\,d\hat\rho_\lambda - R(M^*)\Big) - \lambda\Big(\int r\,d\hat\rho_\lambda - r(M^*)\Big) - \mathcal{K}(\hat\rho_\lambda,\pi) - \log\frac{2}{\varepsilon} \ge 0\bigg] \le \frac{\varepsilon}{2}. $$
Using (2), we have
$$ \int R\,d\hat\rho_\lambda - R(M^*) = \frac{1}{nq}\int\big\|\Pi_C(XMZ)-XM^*Z\big\|_F^2\,\hat\rho_\lambda(dM) \ge \frac{1}{nq}\Big\|\int\Pi_C(XMZ)\,\hat\rho_\lambda(dM) - XM^*Z\Big\|_F^2 = \frac{1}{nq}\big\|\widehat{XMZ}_\lambda - XM^*Z\big\|_F^2, $$
where we used Jensen's inequality in the second step and the definition of $\widehat{XMZ}_\lambda$ in the last step. Plugging this into our probability bound (A5) and dividing both sides by $\alpha$, we obtain
$$ \mathbb{P}\bigg[\frac{1}{nq}\big\|\widehat{XMZ}_\lambda - XM^*Z\big\|_F^2 \le \frac{\lambda}{\alpha}\Big(\int r\,d\hat\rho_\lambda - r(M^*) + \frac{\mathcal{K}(\hat\rho_\lambda,\pi)+\log\frac{2}{\varepsilon}}{\lambda}\Big)\bigg] \ge 1-\frac{\varepsilon}{2}, $$
under the additional condition that $\lambda$ is such that $\alpha>0$, which we assume from now on (note that this condition is satisfied by $\lambda^*$). Using Lemma A2, we can rewrite this as
$$ \mathbb{P}\bigg[\frac{1}{nq}\big\|\widehat{XMZ}_\lambda - XM^*Z\big\|_F^2 \le \frac{\lambda}{\alpha}\inf_{\rho\in\mathcal{P}(\mathbb{R}^{p\times k})}\Big(\int r\,d\rho - r(M^*) + \frac{\mathcal{K}(\rho,\pi)+\log\frac{2}{\varepsilon}}{\lambda}\Big)\bigg] \ge 1-\frac{\varepsilon}{2}. $$
We now consider the consequences of the second inequality in Lemma A3, that is, (A4). With Chernoff's trick and after rearranging terms, we obtain
$$ \mathbb{P}\bigg[\forall\rho\in\mathcal{P}(\mathbb{R}^{p\times k}):\ \int r\,d\rho - r(M^*) \le \frac{\beta}{\lambda}\Big(\int R\,d\rho - R(M^*)\Big) + \frac{\mathcal{K}(\rho,\pi)+\log\frac{2}{\varepsilon}}{\lambda}\bigg] \ge 1-\frac{\varepsilon}{2}, $$
which we can rewrite as: for all $\rho\in\mathcal{P}(\mathbb{R}^{p\times k})$ simultaneously, with probability at least $1-\frac{\varepsilon}{2}$,
$$ \int r\,d\rho - r(M^*) \le \frac{\beta}{\lambda}\,\frac{1}{nq}\int\big\|\Pi_C(XMZ)-XM^*Z\big\|_F^2\,\rho(dM) + \frac{\mathcal{K}(\rho,\pi)+\log\frac{2}{\varepsilon}}{\lambda}. $$
Combining (A7) and (A6) with a union bound argument gives the following bound, with probability at least $1-\varepsilon$:
$$ \frac{1}{nq}\big\|\widehat{XMZ}_\lambda - XM^*Z\big\|_F^2 \le \inf_{\rho\in\mathcal{P}(\mathbb{R}^{p\times k})}\frac{1}{\alpha}\bigg[\beta\,\frac{1}{nq}\int\big\|\Pi_C(XMZ)-XM^*Z\big\|_F^2\,\rho(dM) + 2\Big(\mathcal{K}(\rho,\pi)+\log\frac{2}{\varepsilon}\Big)\bigg]. $$
Noting that, for any $(i,j)$, $(XM^*Z)_{i,j}\in[-C,C]$ implies
$$ \big|(\Pi_C(XMZ))_{i,j} - (XM^*Z)_{i,j}\big| \le \big|(XMZ)_{i,j} - (XM^*Z)_{i,j}\big|, $$
we thus obtain
$$ \frac{1}{nq}\big\|\widehat{XMZ}_\lambda - XM^*Z\big\|_F^2 \le \inf_{\rho\in\mathcal{P}(\mathbb{R}^{p\times k})}\frac{1}{\alpha}\bigg[\beta\,\frac{1}{nq}\int\big\|XMZ-XM^*Z\big\|_F^2\,\rho(dM) + 2\Big(\mathcal{K}(\rho,\pi)+\log\frac{2}{\varepsilon}\Big)\bigg]. $$
The end of the proof consists in making the right-hand side of this inequality more explicit. In order to do so, we restrict the infimum above to the distributions given by Definition A1:
$$ \mathbb{P}\bigg\{\frac{1}{nq}\big\|\widehat{XMZ}_\lambda - XM^*Z\big\|_F^2 \le \inf_{\bar M\in\mathbb{R}^{p\times k}}\frac{1}{\alpha}\bigg[\beta\,\frac{1}{nq}\int\big\|XMZ-XM^*Z\big\|_F^2\,\rho_{\bar M}(dM) + 2\Big(\mathcal{K}(\rho_{\bar M},\pi)+\log\frac{2}{\varepsilon}\Big)\bigg]\bigg\} \ge 1-\varepsilon. $$
We see immediately that Dalalyan's lemmas will be extremely useful for that. First, Lemma A5 provides an upper bound on $\mathcal{K}(\rho_{\bar M},\pi)$. Moreover,
$$ \int\big\|XMZ-XM^*Z\big\|_F^2\,\rho_{\bar M}(dM) = \int\big\|X\bar MZ-XM^*Z - XMZ\big\|_F^2\,\pi(dM) = \big\|X\bar MZ-XM^*Z\big\|_F^2 - 2\sum_{i,j}\big(X\bar MZ-XM^*Z\big)_{i,j}\int(XMZ)_{i,j}\,\pi(dM) + \int\big\|XMZ\big\|_F^2\,\pi(dM). $$
The second term on the right-hand side is zero because $\pi$ is centered, and thus
$$ \int\big\|XMZ-XM^*Z\big\|_F^2\,\rho_{\bar M}(dM) \le \big\|X\bar MZ-XM^*Z\big\|_F^2 + \int\big\|XMZ\big\|_F^2\,\pi(dM) \le \big\|X\bar MZ-XM^*Z\big\|_F^2 + \|X\|_F^2\|Z\|_F^2\int\|M\|_F^2\,\pi(dM) \le \big\|X\bar MZ-XM^*Z\big\|_F^2 + \|X\|_F^2\|Z\|_F^2\,pk\tau^2, $$
where we used elementary properties of the Frobenius norm, and Lemma A4 in the last step. We can now plug this (and Lemma A5) back into (A8) to obtain
$$ \mathbb{P}\bigg\{\frac{1}{nq}\big\|\widehat{XMZ}_\lambda - XM^*Z\big\|_F^2 \le \inf_{\bar M\in\mathbb{R}^{p\times k}}\bigg[\frac{\beta}{\alpha}\,\frac{1}{nq}\big\|X\bar MZ-XM^*Z\big\|_F^2 + \frac{\beta}{\alpha}\,\frac{1}{nq}\,\|X\|_F^2\|Z\|_F^2\,pk\tau^2 + \frac{1}{\alpha}\Big(4\,\mathrm{rank}(\bar M)(k+p+2)\log\Big(1+\frac{\|\bar M\|_F}{\tau\sqrt{2\,\mathrm{rank}(\bar M)}}\Big) + 2\log\frac{2}{\varepsilon}\Big)\bigg]\bigg\} \ge 1-\varepsilon. $$
We now make the constants explicit. First, if $\lambda\le nq/(2\mathcal{C}_2)$, then $2nq(1-\mathcal{C}_2\lambda/nq)\ge nq$ and thus
$$ \frac{\beta}{\alpha} = \frac{1+\frac{\lambda\,\mathcal{C}_1}{2nq(1-\mathcal{C}_2\lambda/nq)}}{1-\frac{\lambda\,\mathcal{C}_1}{2nq(1-\mathcal{C}_2\lambda/nq)}} \le \frac{1+\frac{\lambda\,\mathcal{C}_1}{nq}}{1-\frac{\lambda\,\mathcal{C}_1}{nq}}. $$
Then, $\lambda\le\frac{nq\,\delta}{\mathcal{C}_1(1+\delta)}$ leads to $\frac{\beta}{\alpha}\le 1+\delta$.
Note that $\lambda^* = nq\min\big(1/(2\mathcal{C}_2),\ \delta/[\mathcal{C}_1(1+\delta)]\big)$ satisfies these two conditions, so from now on, $\lambda=\lambda^*$. We also use the following:
$$ \frac{1}{\alpha} = \frac{1}{\lambda^*\Big(1-\frac{\lambda^*\mathcal{C}_1}{2nq(1-\mathcal{C}_2\lambda^*/nq)}\Big)} \le \frac{\beta}{\lambda^*\,\alpha} \le \frac{1+\delta}{nq\,\min\big(1/(2\mathcal{C}_2),\ \delta/[\mathcal{C}_1(1+\delta)]\big)} \le \frac{\mathcal{C}_1(1+\delta)^2}{nq\,\delta}. $$
So far, the bound is
$$ \mathbb{P}\bigg\{\frac{1}{nq}\big\|\widehat{XMZ}_{\lambda^*} - XM^*Z\big\|_F^2 \le \inf_{\bar M\in\mathbb{R}^{p\times k}}\bigg[\frac{1+\delta}{nq}\big\|X\bar MZ-XM^*Z\big\|_F^2 + \frac{1+\delta}{nq}\,\|X\|_F^2\|Z\|_F^2\,pk\tau^2 + \frac{\mathcal{C}_1(1+\delta)^2}{nq\,\delta}\Big(4\,\mathrm{rank}(\bar M)(k+p+2)\log\Big(1+\frac{\|\bar M\|_F}{\tau\sqrt{2\,\mathrm{rank}(\bar M)}}\Big) + 2\log\frac{2}{\varepsilon}\Big)\bigg]\bigg\} \ge 1-\varepsilon. $$
In particular, with probability at least $1-\varepsilon$, the choice $\tau^2 = \mathcal{C}_1(k+p)/\big(nkq\,\|X\|_F^2\|Z\|_F^2\big)$ gives
$$ \frac{1}{nq}\big\|\widehat{XMZ}_{\lambda^*} - XM^*Z\big\|_F^2 \le \inf_{\bar M\in\mathbb{R}^{p\times k}}\bigg[\frac{1+\delta}{nq}\big\|X\bar MZ-XM^*Z\big\|_F^2 + \frac{\mathcal{C}_1(1+\delta)(k+p)}{nq} + \frac{\mathcal{C}_1(1+\delta)^2}{nq\,\delta}\Big(4\,\mathrm{rank}(\bar M)(k+p+2)\log\Big(1+\frac{\|X\|_F\|Z\|_F\|\bar M\|_F}{\sqrt{\mathcal{C}_1}}\sqrt{\frac{nkq}{(k+p)\,\mathrm{rank}(\bar M)}}\Big) + 2\log\frac{2}{\varepsilon}\Big)\bigg]. $$
   □

Appendix A.3. Proof of Theorem 2

Proof of Theorem 2.
We again start with an application of Lemma A3, and focus on (A3), applied with $\varepsilon:=\varepsilon_n$, that is,
$$ \mathbb{E}\int\exp\bigg[\alpha\big(R(M)-R(M^*)\big) - \lambda\big(r(M)-r(M^*)\big) - \log\frac{d\hat\rho_\lambda}{d\pi}(M) - \log\frac{2}{\varepsilon_n}\bigg]\hat\rho_\lambda(dM) \le \frac{\varepsilon_n}{2}. $$
Using Chernoff's trick, this gives
$$ \mathbb{E}\,\mathbb{P}_{M\sim\hat\rho_\lambda}(M\in\mathcal{A}_n) \ge 1-\frac{\varepsilon_n}{2}, $$
where
$$ \mathcal{A}_n = \bigg\{M:\ \alpha\big(R(M)-R(M^*)\big) \le \lambda\big(r(M)-r(M^*)\big) + \log\frac{d\hat\rho_\lambda}{d\pi}(M) + \log\frac{2}{\varepsilon_n}\bigg\}. $$
Using the definition of $\hat\rho_\lambda$, for $M\in\mathcal{A}_n$ we have
$$ \alpha\big(R(M)-R(M^*)\big) \le \lambda\big(r(M)-r(M^*)\big) + \log\frac{d\hat\rho_\lambda}{d\pi}(M) + \log\frac{2}{\varepsilon_n} = -\log\int\exp\big(-\lambda r(M')\big)\pi(dM') - \lambda r(M^*) + \log\frac{2}{\varepsilon_n} = \lambda\Big(\int r\,d\hat\rho_\lambda - r(M^*)\Big) + \mathcal{K}(\hat\rho_\lambda,\pi) + \log\frac{2}{\varepsilon_n} = \inf_{\rho}\bigg[\lambda\Big(\int r\,d\rho - r(M^*)\Big) + \mathcal{K}(\rho,\pi)\bigg] + \log\frac{2}{\varepsilon_n}. $$
Now, let us define the event
$$ \mathcal{B}_n = \bigg\{\forall\rho:\ -\beta\Big(\int R\,d\rho - R(M^*)\Big) + \lambda\Big(\int r\,d\rho - r(M^*)\Big) \le \mathcal{K}(\rho,\pi) + \log\frac{2}{\varepsilon_n}\bigg\}. $$
Using (A4), we have
$$ \mathbb{E}\,\mathbf{1}_{\mathcal{B}_n} \ge 1-\frac{\varepsilon_n}{2}. $$
We will now prove that, if $\lambda$ is such that $\alpha>0$,
$$ \mathbb{E}\,\mathbb{P}_{M\sim\hat\rho_\lambda}(M\in\mathcal{M}_n) \ge \mathbb{E}\big[\mathbb{P}_{M\sim\hat\rho_\lambda}(M\in\mathcal{A}_n)\,\mathbf{1}_{\mathcal{B}_n}\big], $$
which, together with
$$ \mathbb{E}\big[\mathbb{P}_{M\sim\hat\rho_\lambda}(M\in\mathcal{A}_n)\,\mathbf{1}_{\mathcal{B}_n}\big] = \mathbb{E}\big[\big(1-\mathbb{P}_{M\sim\hat\rho_\lambda}(M\notin\mathcal{A}_n)\big)\big(1-\mathbf{1}_{\mathcal{B}_n^c}\big)\big] \ge \mathbb{E}\big[1-\mathbb{P}_{M\sim\hat\rho_\lambda}(M\notin\mathcal{A}_n)-\mathbf{1}_{\mathcal{B}_n^c}\big] \ge 1-\varepsilon_n, $$
will give
$$ \mathbb{E}\,\mathbb{P}_{M\sim\hat\rho_\lambda}(M\in\mathcal{M}_n) \ge 1-\varepsilon_n. $$
In order to do so, assume that we are on the event $\mathcal{B}_n$, and let $M\in\mathcal{A}_n$. Then,
$$ \alpha\big(R(M)-R(M^*)\big) \le \inf_{\rho}\bigg[\lambda\Big(\int r\,d\rho - r(M^*)\Big) + \mathcal{K}(\rho,\pi)\bigg] + \log\frac{2}{\varepsilon_n} \le \inf_{\rho}\bigg[\beta\Big(\int R\,d\rho - R(M^*)\Big) + 2\,\mathcal{K}(\rho,\pi)\bigg] + 2\log\frac{2}{\varepsilon_n}, $$
that is,
$$ R(M)-R(M^*) \le \inf_{\rho\in\mathcal{P}(\mathbb{R}^{p\times k})}\frac{1}{\alpha}\bigg[\beta\Big(\int R\,d\rho - R(M^*)\Big) + 2\Big(\mathcal{K}(\rho,\pi)+\log\frac{2}{\varepsilon_n}\Big)\bigg], $$
or, rewriting it in terms of norms,
$$ \frac{1}{nq}\big\|\Pi_C(XMZ)-XM^*Z\big\|_F^2 \le \inf_{\bar M\in\mathbb{R}^{p\times k}}\frac{1}{\alpha}\bigg[\beta\,\frac{1}{nq}\int\big\|XM'Z-XM^*Z\big\|_F^2\,\rho_{\bar M}(dM') + 2\Big(\mathcal{K}(\rho_{\bar M},\pi)+\log\frac{2}{\varepsilon_n}\Big)\bigg]. $$
We upper-bound the right-hand side exactly as in the proof of Theorem 1, which gives M M n .   □

Appendix A.4. Proof of Theorem 3

Lemma A6.
Under Assumptions 1 and 3, put
$$ \alpha = \lambda - \frac{\lambda^2\,\mathcal{C}_1}{2m\big(1-\frac{\mathcal{C}_2\lambda}{m}\big)} \quad\text{and}\quad \beta = \lambda + \frac{\lambda^2\,\mathcal{C}_1}{2m\big(1-\frac{\mathcal{C}_2\lambda}{m}\big)}. \qquad \text{(A9)} $$
Then, for any $\varepsilon\in(0,1)$ and any $\lambda\in(0,\,m/\mathcal{C}_2)$,
$$ \mathbb{E}\int\exp\bigg[\alpha\big(R(M)-R(M^*)\big) - \lambda\big(r(M)-r(M^*)\big) - \log\frac{d\hat\rho_\lambda}{d\pi}(M) - \log\frac{2}{\varepsilon}\bigg]\hat\rho_\lambda(dM) \le \frac{\varepsilon}{2} \qquad \text{(A10)} $$
and
$$ \mathbb{E}\sup_{\rho\in\mathcal{P}(\mathbb{R}^{p\times k})}\exp\bigg[-\beta\Big(\int R\,d\rho - R(M^*)\Big) + \lambda\Big(\int r\,d\rho - r(M^*)\Big) - \mathcal{K}(\rho,\pi) - \log\frac{2}{\varepsilon}\bigg] \le \frac{\varepsilon}{2}. \qquad \text{(A11)} $$
Proof of Lemma A6.
The inequality (A10) is proved in the same way as the corresponding inequality in the proof of Lemma A3; that is, we apply Lemma A1 to the independent random variables
$$ V_i = \big(Y_i - (XM^*Z)_{I_i}\big)^2 - \big(Y_i - (\Pi_C(XMZ))_{I_i}\big)^2, \qquad i=1,\dots,m. $$
The inequality (A11) is proved similarly to the corresponding step in the proof of Lemma A3, this time applying Lemma A1 to the random variables $-V_i$, $i=1,\dots,m$.    □
Proof of Theorem 3.
We proceed as in the proof of Theorem 1 up to (A6), noting that, using (6), we have
$$ \int R\,d\hat\rho_\lambda - R(M^*) = \int\big\|\Pi_C(XMZ)-XM^*Z\big\|_{F,\Pi}^2\,\hat\rho_\lambda(dM) \ge \Big\|\int\Pi_C(XMZ)\,\hat\rho_\lambda(dM) - XM^*Z\Big\|_{F,\Pi}^2 = \big\|\widehat{XMZ}_\lambda - XM^*Z\big\|_{F,\Pi}^2, $$
and thus we obtain
$$ \mathbb{P}\bigg[\big\|\widehat{XMZ}_\lambda - XM^*Z\big\|_{F,\Pi}^2 \le \frac{\lambda}{\alpha}\inf_{\rho\in\mathcal{P}(\mathbb{R}^{p\times k})}\Big(\int r\,d\rho - r(M^*) + \frac{\mathcal{K}(\rho,\pi)+\log\frac{2}{\varepsilon}}{\lambda}\Big)\bigg] \ge 1-\frac{\varepsilon}{2}. $$
We now consider the consequences of inequality (A11) in Lemma A6. With Chernoff's trick and after rearranging terms, we obtain, for all $\rho\in\mathcal{P}(\mathbb{R}^{p\times k})$ simultaneously, with probability at least $1-\frac{\varepsilon}{2}$,
$$ \int r\,d\rho - r(M^*) \le \frac{\beta}{\lambda}\int\big\|\Pi_C(XMZ)-XM^*Z\big\|_{F,\Pi}^2\,\rho(dM) + \frac{\mathcal{K}(\rho,\pi)+\log\frac{2}{\varepsilon}}{\lambda}. $$
Combining (A13) and (A12) with a union bound argument, and noting that, for any $(i,j)$, $(XM^*Z)_{i,j}\in[-C,C]$ implies $\big|(\Pi_C(XMZ))_{i,j}-(XM^*Z)_{i,j}\big|\le\big|(XMZ)_{i,j}-(XM^*Z)_{i,j}\big|$, we thus obtain
$$ \mathbb{P}\bigg\{\big\|\widehat{XMZ}_\lambda - XM^*Z\big\|_{F,\Pi}^2 \le \inf_{\rho\in\mathcal{P}(\mathbb{R}^{p\times k})}\frac{1}{\alpha}\bigg[\beta\int\big\|XMZ-XM^*Z\big\|_{F,\Pi}^2\,\rho(dM) + 2\Big(\mathcal{K}(\rho,\pi)+\log\frac{2}{\varepsilon}\Big)\bigg]\bigg\} \ge 1-\varepsilon. $$
We now make the right-hand side of this inequality more explicit. In order to do so, we restrict the infimum above to the distributions given by Definition A1:
$$ \mathbb{P}\bigg\{\big\|\widehat{XMZ}_\lambda - XM^*Z\big\|_{F,\Pi}^2 \le \inf_{\bar M\in\mathbb{R}^{p\times k}}\frac{1}{\alpha}\bigg[\beta\int\big\|XMZ-XM^*Z\big\|_{F,\Pi}^2\,\rho_{\bar M}(dM) + 2\Big(\mathcal{K}(\rho_{\bar M},\pi)+\log\frac{2}{\varepsilon}\Big)\bigg]\bigg\} \ge 1-\varepsilon. $$
Dalalyan's lemmas are again extremely useful here. First, Lemma A5 provides an upper bound on $\mathcal{K}(\rho_{\bar M},\pi)$.
Moreover,
$$ \int\big\|XMZ-XM^*Z\big\|_{F,\Pi}^2\,\rho_{\bar M}(dM) = \int\big\|X\bar MZ-XM^*Z - XMZ\big\|_{F,\Pi}^2\,\pi(dM) = \big\|X\bar MZ-XM^*Z\big\|_{F,\Pi}^2 - 2\sum_{i,j}\Pi_{i,j}\big(X\bar MZ-XM^*Z\big)_{i,j}\int(XMZ)_{i,j}\,\pi(dM) + \int\big\|XMZ\big\|_{F,\Pi}^2\,\pi(dM). $$
The second term on the above right-hand side is zero because $\pi$ is centered, and thus
$$ \int\big\|XMZ-XM^*Z\big\|_{F,\Pi}^2\,\rho_{\bar M}(dM) \le \big\|X\bar MZ-XM^*Z\big\|_{F,\Pi}^2 + \int\big\|XMZ\big\|_{F,\Pi}^2\,\pi(dM) \le \big\|X\bar MZ-XM^*Z\big\|_{F,\Pi}^2 + \int\big\|XMZ\big\|_F^2\,\pi(dM) \le \big\|X\bar MZ-XM^*Z\big\|_{F,\Pi}^2 + \|X\|_F^2\|Z\|_F^2\,pk\tau^2, $$
where we used elementary properties of the Frobenius norm, and Lemma A4 in the last step. We can now plug this (and Lemma A5) back into (A14) to obtain
$$ \mathbb{P}\bigg\{\big\|\widehat{XMZ}_\lambda - XM^*Z\big\|_{F,\Pi}^2 \le \inf_{\bar M\in\mathbb{R}^{p\times k}}\bigg[\frac{\beta}{\alpha}\big\|X\bar MZ-XM^*Z\big\|_{F,\Pi}^2 + \frac{\beta}{\alpha}\|X\|_F^2\|Z\|_F^2\,pk\tau^2 + \frac{1}{\alpha}\Big(4\,\mathrm{rank}(\bar M)(k+p+2)\log\Big(1+\frac{\|\bar M\|_F}{\tau\sqrt{2\,\mathrm{rank}(\bar M)}}\Big) + 2\log\frac{2}{\varepsilon}\Big)\bigg]\bigg\} \ge 1-\varepsilon. $$
We now make the constants explicit. First, if $\lambda\le m/(2\mathcal{C}_2)$, then $2m(1-\mathcal{C}_2\lambda/m)\ge m$ and thus
$$ \frac{\beta}{\alpha} = \frac{1+\frac{\lambda\,\mathcal{C}_1}{2m(1-\mathcal{C}_2\lambda/m)}}{1-\frac{\lambda\,\mathcal{C}_1}{2m(1-\mathcal{C}_2\lambda/m)}} \le \frac{1+\frac{\lambda\,\mathcal{C}_1}{m}}{1-\frac{\lambda\,\mathcal{C}_1}{m}}. $$
Then, $\lambda\le\frac{m\,\delta}{\mathcal{C}_1(1+\delta)}$ leads to $\frac{\beta}{\alpha}\le 1+\delta$.
Note that $\lambda^* = m\min\big(1/(2\mathcal{C}_2),\ \delta/[\mathcal{C}_1(1+\delta)]\big)$ satisfies these two conditions, so from now on, $\lambda=\lambda^*$. We also use the following:
$$ \frac{1}{\alpha} = \frac{1}{\lambda^*\Big(1-\frac{\lambda^*\mathcal{C}_1}{2m(1-\mathcal{C}_2\lambda^*/m)}\Big)} \le \frac{\beta}{\lambda^*\,\alpha} \le \frac{1+\delta}{m\,\min\big(1/(2\mathcal{C}_2),\ \delta/[\mathcal{C}_1(1+\delta)]\big)} \le \frac{\mathcal{C}_1(1+\delta)^2}{m\,\delta}. $$
So far, the bound is
$$ \mathbb{P}\bigg\{\big\|\widehat{XMZ}_{\lambda^*} - XM^*Z\big\|_{F,\Pi}^2 \le \inf_{\bar M\in\mathbb{R}^{p\times k}}\bigg[(1+\delta)\big\|X\bar MZ-XM^*Z\big\|_{F,\Pi}^2 + (1+\delta)\|X\|_F^2\|Z\|_F^2\,pk\tau^2 + \frac{\mathcal{C}_1(1+\delta)^2}{m\,\delta}\Big(4\,\mathrm{rank}(\bar M)(k+p+2)\log\Big(1+\frac{\|\bar M\|_F}{\tau\sqrt{2\,\mathrm{rank}(\bar M)}}\Big) + 2\log\frac{2}{\varepsilon}\Big)\bigg]\bigg\} \ge 1-\varepsilon. $$
In particular, with probability at least $1-\varepsilon$, the choice $\tau^2 = \mathcal{C}_1(k+p)/\big(mkp\,\|X\|_F^2\|Z\|_F^2\big)$ gives
$$ \big\|\widehat{XMZ}_{\lambda^*} - XM^*Z\big\|_{F,\Pi}^2 \le \inf_{\bar M\in\mathbb{R}^{p\times k}}\bigg[(1+\delta)\big\|X\bar MZ-XM^*Z\big\|_{F,\Pi}^2 + \frac{\mathcal{C}_1(1+\delta)(k+p)}{m} + \frac{\mathcal{C}_1(1+\delta)^2}{m\,\delta}\Big(4\,\mathrm{rank}(\bar M)(k+p+2)\log\Big(1+\frac{\|X\|_F\|Z\|_F\|\bar M\|_F}{\sqrt{\mathcal{C}_1}}\sqrt{\frac{mkp}{(k+p)\,\mathrm{rank}(\bar M)}}\Big) + 2\log\frac{2}{\varepsilon}\Big)\bigg]. $$
   □

Appendix A.5. Proof of Theorem 4

Proof. 
The proof proceeds in exactly the same way as the proof of Theorem 2 in Appendix A.3, using Lemma A6 in place of Lemma A3.
   □

Appendix B. Comments on Algorithm Implementation

For the case of inductive matrix completion, the logarithm of the density of the quasi-posterior is, up to an additive constant,
$$ \log\hat\rho_\lambda(M) = -\frac{\lambda}{m}\sum_{i=1}^m\big(Y_i - (\Pi_C(XMZ))_{I_i}\big)^2 - \frac{p+k+2}{2}\log\det\big(\tau^2 I_k + M^\top M\big). $$
Let us now differentiate this expression with respect to $M$. Note that the term $(Y_i - (\Pi_C(XMZ))_{I_i})^2$ does not depend on $M$ locally if $(XMZ)_{I_i}\notin[-C,C]$; in this case, its differential with respect to $M$ is $\mathbf{0}_{p\times k}$. Otherwise, $(Y_i - (\Pi_C(XMZ))_{I_i})^2 = (Y_i - (XMZ)_{I_i})^2$. In order to differentiate the term $(XMZ)_{I_i}$, let us write $I_i = (a_i,b_i)$ for its row and column indices. Then $\nabla_M (XMZ)_{I_i} = D^{(i)}$, where the matrix $D^{(i)}\in\mathbb{R}^{p\times k}$ has entries $D^{(i)}_{x,y} = X_{a_i,x}\,Z_{y,b_i}$, i.e., $D^{(i)}$ is the outer product of the $a_i$-th row of $X$ and the $b_i$-th column of $Z$. Then
$$ \nabla\log\hat\rho_\lambda(M) = \frac{2\lambda}{m}\sum_{i=1}^m D^{(i)}\,\big(Y_i - (XMZ)_{I_i}\big)\,\mathbf{1}\{|(XMZ)_{I_i}| < C\} \;-\; (p+k+2)\,M\big(\tau^2 I_k + M^\top M\big)^{-1}. $$
The above calculation still requires a matrix inversion at each iteration; in large dimensions, this may be expensive and can slow down the algorithm. We can, however, replace this matrix inversion by an accurate approximation obtained through convex optimization. It is noted that the matrix $B := (\tau^2 I_k + M^\top M)^{-1}M^\top$ is the solution of the convex optimization problem $\min_B \|I_p - MB\|_F^2 + \tau^2\|B\|_F^2$, and the prior part of the gradient above equals $-(p+k+2)\,B^\top$. The solution of this optimization problem can be conveniently obtained using the package 'glmnet' [50] (with the family option 'mgaussian'). This avoids performing a matrix inversion or other costly computations. We note, however, that the LMC algorithm is then run with approximate gradient evaluations; a theoretical assessment of this approach can be found in [51].
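A minimal R sketch of this gradient computation for the inductive matrix completion quasi-posterior, with `obs` an m x 2 matrix of observed indices and `y` the observed responses (these names are our own), is:

```r
# Gradient of the log quasi-posterior for inductive matrix completion.
# Data part: (2*lambda/m) * sum_i D_i * (y_i - (XMZ)_{I_i}) * 1{|(XMZ)_{I_i}| < C},
#            with D_i the outer product of the a_i-th row of X and the b_i-th column of Z.
# Prior part: -(p+k+2) * M %*% (tau^2 I_k + M'M)^{-1}.
grad_log_post_imc <- function(M, X, Z, obs, y, lambda, tau, C) {
  p <- ncol(X); k <- nrow(Z); m <- length(y)
  fitted <- (X %*% M %*% Z)[obs]                 # (X M Z) at the observed positions
  w <- (y - fitted) * (abs(fitted) < C)          # residuals, zeroed where the clipping is active
  G <- matrix(0, p, k)
  for (i in 1:m) {                               # accumulate the weighted outer products
    G <- G + X[obs[i, 1], ] %o% Z[, obs[i, 2]] * w[i]
  }
  (2 * lambda / m) * G - (p + k + 2) * M %*% solve(diag(tau^2, k) + crossprod(M))
}
```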
Algorithm A1 LMC
  • Input: The data.
  • Parameters: Positive real numbers $\lambda$, $\tau$, $h$, $T$.
  • Output: The matrix $\hat M$.
  • Initialize: $M_0$; $\hat M = \mathbf{0}_{p\times k}$.
  • for $k = 1$ to $T$ do
  •     Sample $M_k$ from (8);
  •      $\hat M \leftarrow \hat M + M_k/T$
  • end for
Algorithm A2 MALA
  • Input: The data.
  • Parameters: Positive real numbers $\lambda$, $\tau$, $h$, $T$.
  • Output: The matrix $\hat M$.
  • Initialize: $M_0$; $\hat M = \mathbf{0}_{p\times k}$.
  • for $k = 1$ to $T$ do
  •     Sample $\widetilde M_k$ from (9).
  •     Set $M_k = \widetilde M_k$ with probability $A_{\mathrm{MALA}}$ from (10); otherwise $M_k = M_{k-1}$.
  •      $\hat M \leftarrow \hat M + M_k/T$.
  • end for

References

  1. Rosen, D.V. Bilinear Regression with Rank Restrictions on the Mean and Dispersion Matrix. In Methodology and Applications of Statistics; Springer: Berlin/Heidelberg, Germany, 2021; pp. 193–211.
  2. Von Rosen, D. Bilinear Regression Analysis: An Introduction; Lecture Notes in Statistics; Springer: Berlin/Heidelberg, Germany, 2018; Volume 220.
  3. Potthoff, R.F.; Roy, S. A generalized multivariate analysis of variance model useful especially for growth curve problems. Biometrika 1964, 51, 313–326.
  4. Woolson, R.F.; Leeper, J.D. Growth curve analysis of complete and incomplete longitudinal data. Commun. Stat.-Theory Methods 1980, 9, 1491–1513.
  5. Kshirsagar, A.; Smith, W. Growth Curves; CRC Press: Boca Raton, FL, USA, 1995; Volume 145.
  6. Jana, S. Inference for Generalized Multivariate Analysis of Variance (GMANOVA) Models and High-Dimensional Extensions. Ph.D. Thesis, McMaster University, Hamilton, ON, Canada, 2017. Available online: http://hdl.handle.net/11375/22043 (accessed on 26 January 2023).
  7. Natarajan, N.; Dhillon, I.S. Inductive matrix completion for predicting gene–disease associations. Bioinformatics 2014, 30, i60–i68.
  8. Zilber, P.; Nadler, B. Inductive Matrix Completion: No Bad Local Minima and a Fast Algorithm. In Proceedings of the 39th ICML, Baltimore, MD, USA, 17–23 July 2022; PMLR; Volume 162, pp. 27671–27692.
  9. Zhang, W.; Xu, H.; Li, X.; Gao, Q.; Wang, L. DRIMC: An improved drug repositioning approach using Bayesian inductive matrix completion. Bioinformatics 2020, 36, 2839–2847.
  10. Hsieh, C.J.; Natarajan, N.; Dhillon, I. PU learning for matrix completion. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 7–9 July 2015; pp. 2445–2453.
  11. Jana, S.; Balakrishnan, N.; Hamid, J.S. Bayesian growth curve model useful for high-dimensional longitudinal data. J. Appl. Stat. 2019, 46, 814–834.
  12. Knoblauch, J.; Jewson, J.; Damoulas, T. An Optimization-centric View on Bayes’ Rule: Reviewing and Generalizing Variational Inference. J. Mach. Learn. Res. 2022, 23, 1–109.
  13. Bissiri, P.G.; Holmes, C.C.; Walker, S.G. A general framework for updating belief distributions. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2016, 78, 1103–1130.
  14. Grünwald, P.; Van Ommen, T. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it. Bayesian Anal. 2017, 12, 1069–1103.
  15. McAllester, D. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, Madison, WI, USA, 24–26 July 1998; ACM: New York, NY, USA, 1998; pp. 230–234.
  16. Shawe-Taylor, J.; Williamson, R. A PAC analysis of a Bayes estimator. In Proceedings of the Tenth Annual Conference on Computational Learning Theory, Nashville, TN, USA, 6–9 July 1997; ACM: New York, NY, USA, 1997; pp. 2–9.
  17. Catoni, O. PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning; IMS Lecture Notes—Monograph Series 56; Institute of Mathematical Statistics: Beachwood, OH, USA, 2007; p. xii+163.
  18. Guedj, B. A primer on PAC-Bayesian learning. arXiv 2019, arXiv:1901.05353.
  19. Alquier, P. User-friendly introduction to PAC-Bayes bounds. arXiv 2021, arXiv:2110.11216.
  20. Mai, T.T.; Alquier, P. A Bayesian approach for noisy matrix completion: Optimal rate under general sampling distribution. Electron. J. Stat. 2015, 9, 823–841.
  21. Cottet, V.; Alquier, P. 1-Bit matrix completion: PAC-Bayesian analysis of a variational approximation. Mach. Learn. 2018, 107, 579–603.
  22. Mai, T.T.; Alquier, P. Pseudo-Bayesian quantum tomography with rank-adaptation. J. Stat. Plan. Inference 2017, 184, 62–76.
  23. Mai, T.T.; Alquier, P. Optimal quasi-Bayesian reduced rank regression with incomplete response. arXiv 2022, arXiv:2206.08619.
  24. Jain, P.; Dhillon, I.S. Provable inductive matrix completion. arXiv 2013, arXiv:1306.0626.
  25. Candès, E.J.; Plan, Y. Matrix completion with noise. Proc. IEEE 2010, 98, 925–936.
  26. Koltchinskii, V.; Lounici, K.; Tsybakov, A.B. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Stat. 2011, 39, 2302–2329.
  27. Foygel, R.; Shamir, O.; Srebro, N.; Salakhutdinov, R. Learning with the weighted trace-norm under arbitrary sampling distributions. In Proceedings of the Advances in Neural Information Processing Systems, Granada, Spain, 12–15 December 2011; pp. 2133–2141.
  28. Klopp, O. Noisy low-rank matrix completion with general sampling distribution. Bernoulli 2014, 20, 282–303.
  29. Negahban, S.; Wainwright, M.J. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. J. Mach. Learn. Res. 2012, 13, 1665–1697.
  30. Dalalyan, A.S.; Tsybakov, A.B. Sparse regression learning by aggregation and Langevin Monte-Carlo. J. Comput. Syst. Sci. 2012, 78, 1423–1443.
  31. Dalalyan, A.S. Exponential weights in multivariate regression and a low-rankness favoring prior. Annales de l’Institut Henri Poincaré Probabilités et Statistiques 2020, 56, 1465–1483.
  32. Anderson, T.W. Estimating linear restrictions on regression coefficients for multivariate normal distributions. Ann. Math. Stat. 1951, 22, 327–351.
  33. Izenman, A.J. Modern Multivariate Statistical Techniques: Regression, Classification, and Manifold Learning; Springer: New York, NY, USA, 2008.
  34. Dalalyan, A.; Tsybakov, A.B. Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity. Mach. Learn. 2008, 72, 39–61.
  35. Catoni, O. Statistical learning theory and stochastic optimization. In Saint-Flour Summer School on Probability Theory 2001; Picard, J., Ed.; Lecture Notes in Mathematics; Springer: Berlin, Germany, 2004; Volume 1851, p. viii+272.
  36. Alquier, P.; Ridgway, J.; Chopin, N. On the properties of variational approximations of Gibbs posteriors. J. Mach. Learn. Res. 2016, 17, 8374–8414.
  37. Rigollet, P.; Tsybakov, A.B. Sparse estimation by exponential weighting. Stat. Sci. 2012, 27, 558–575.
  38. Dalalyan, A.S.; Grappin, E.; Paris, Q. On the exponentially weighted aggregate with the Laplace prior. Ann. Stat. 2018, 46, 2452–2478.
  39. Candes, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905.
  40. Yang, L.; Fang, J.; Duan, H.; Li, H.; Zeng, B. Fast low-rank Bayesian matrix completion with hierarchical Gaussian prior models. IEEE Trans. Signal Process. 2018, 66, 2804–2817.
  41. Luo, C.; Liang, J.; Li, G.; Wang, F.; Zhang, C.; Dey, D.K.; Chen, K. Leveraging mixed and incomplete outcomes via reduced-rank modeling. J. Multivar. Anal. 2018, 167, 378–394.
  42. Hoeffding, W. Probability Inequalities for Sums of Bounded Random Variables. J. Am. Stat. Assoc. 1963, 58, 13–30.
  43. Durmus, A.; Moulines, E. High-dimensional Bayesian inference via the unadjusted Langevin algorithm. Bernoulli 2019, 25, 2854–2882.
  44. Roberts, G.O.; Stramer, O. Langevin diffusions and Metropolis-Hastings algorithms. Methodol. Comput. Appl. Probab. 2002, 4, 337–357.
  45. Roberts, G.O.; Rosenthal, J.S. Optimal scaling of discrete approximations to Langevin diffusions. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 1998, 60, 255–268.
  46. Dalalyan, A.S. Theoretical guarantees for approximate sampling from smooth and log-concave densities. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2017, 79, 651–676.
  47. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2022.
  48. Hastie, T.; Mazumder, R. softImpute: Matrix Completion via Iterative Soft-Thresholded SVD; R package version 1.4-1; 2021. Available online: https://cran.r-project.org/package=softImpute (accessed on 26 January 2023).
  49. Massart, P. Concentration Inequalities and Model Selection; Lecture Notes in Mathematics; Springer: Berlin, Germany, 2007; Volume 1896, p. xiv+337.
  50. Friedman, J.; Hastie, T.; Tibshirani, R. Regularization Paths for Generalized Linear Models via Coordinate Descent. J. Stat. Softw. 2010, 33, 1–22.
  51. Dalalyan, A.S.; Karagulyan, A. User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient. Stoch. Process. Their Appl. 2019, 129, 5278–5311.
Table 1. Simulation results on simulated data in Model I in bilinear regression, with fixed q = 10, k = 20, for different methods, with their standard error in parentheses over 100 replications. (Est: average of estimation errors; Pred: average of prediction errors; Nmse: average of normalized estimation errors).
Setting | Errors | LMC | MALA | OLS
n = 100, p = 10 | Est | 1.0053 (0.5480) | 1.0342 (0.5559) | 1.0052 (0.5478)
n = 100, p = 10 | Pred | 0.1138 (0.0171) | 0.0985 (0.0151) | 0.1014 (0.0154)
n = 100, p = 10 | Nmse | 0.4931 (0.1178) | 0.5100 (0.1207) | 0.4930 (0.1178)
n = 100, p = 100 | Est | 1.3544 (0.5867) | 1.3384 (0.5836) | 1.3544 (0.5867)
n = 100, p = 100 | Pred | 1.0066 (0.0430) | 0.8761 (0.0756) | 1.0030 (0.0424)
n = 100, p = 100 | Nmse | 0.7049 (0.2944) | 0.6963 (0.2927) | 0.7049 (0.2944)
n = 1000, p = 10 | Est | 1.0776 (0.5671) | 1.0900 (0.5670) | 1.0776 (0.5671)
n = 1000, p = 10 | Pred | 0.0099 (0.0013) | 0.0099 (0.0013) | 0.0099 (0.0013)
n = 1000, p = 10 | Nmse | 0.5185 (0.1198) | 0.5264 (0.1219) | 0.5185 (0.1198)
n = 1000, p = 100 | Est | 0.9662 (0.3240) | 0.9688 (0.3244) | 0.9662 (0.3240)
n = 1000, p = 100 | Pred | 0.0999 (0.0051) | 0.0989 (0.0049) | 0.0998 (0.0051)
n = 1000, p = 100 | Nmse | 0.4961 (0.1183) | 0.4976 (0.1191) | 0.4961 (0.1183)
Table 2. Simulation results on simulated data in Model II (approximate low-rank) in bilinear regression, with fixed q = 10, k = 20, for different methods, with their standard error in parentheses over 100 replications. (Est: average of estimation errors; Pred: average of prediction errors; Nmse: average of normalized estimation errors).
Setting | Errors | LMC | MALA | OLS
n = 100, p = 10 | Est | 4.0731 (1.828) | 4.0989 (1.821) | 4.0731 (1.828)
n = 100, p = 10 | Pred | 0.1090 (0.0160) | 0.0969 (0.0140) | 0.0987 (0.0145)
n = 100, p = 10 | Nmse | 0.5119 (0.1226) | 0.5162 (0.1241) | 0.5118 (0.1226)
n = 100, p = 100 | Est | 4.6047 (1.812) | 4.6038 (1.813) | 4.6047 (1.812)
n = 100, p = 100 | Pred | 1.0062 (0.0462) | 1.0597 (0.0495) | 1.0006 (0.0469)
n = 100, p = 100 | Nmse | 0.5801 (0.1942) | 0.5800 (0.1941) | 0.5801 (0.1942)
n = 1000, p = 10 | Est | 3.6733 (1.606) | 3.6884 (1.606) | 3.6733 (1.606)
n = 1000, p = 10 | Pred | 0.0098 (0.0015) | 0.0098 (0.0015) | 0.0098 (0.0015)
n = 1000, p = 10 | Nmse | 0.4812 (0.1271) | 0.4835 (0.1260) | 0.4813 (0.1271)
n = 1000, p = 100 | Est | 3.9972 (1.375) | 3.9986 (1.376) | 3.9972 (1.375)
n = 1000, p = 100 | Pred | 0.1000 (0.0043) | 0.1032 (0.0057) | 0.0999 (0.0043)
n = 1000, p = 100 | Nmse | 0.5013 (0.1061) | 0.5014 (0.1063) | 0.5013 (0.1062)
Table 3. Simulation results on simulated data in Model I in inductive matrix completion, with fixed q = 10, k = 20, for different methods, with their standard error in parentheses over 100 replications. (κ is the missing rate; Est: average of estimation errors; Pred: average of prediction errors; Nmse: average of normalized estimation errors).
Setting | Errors | LMC | MALA | OLS_imp
n = 100, κ = 10%, p = 10 | Est | 1.0559 (0.5060) | 1.0803 (0.5122) | 1.0559 (0.5060)
n = 100, κ = 10%, p = 10 | Pred | 0.1028 (0.0193) | 0.1082 (0.0143) | 0.1020 (0.0197)
n = 100, κ = 10%, p = 10 | Nmse | 0.4986 (0.1116) | 0.5139 (0.1197) | 0.4986 (0.1116)
n = 100, κ = 10%, p = 100 | Est | 1.4008 (0.8555) | 1.3987 (0.8542) | 1.4009 (0.8555)
n = 100, κ = 10%, p = 100 | Pred | 1.2250 (0.4568) | 1.4468 (0.4137) | 1.2252 (0.4570)
n = 100, κ = 10%, p = 100 | Nmse | 0.7148 (0.3591) | 0.7136 (0.3581) | 0.7148 (0.3591)
n = 100, κ = 30%, p = 10 | Est | 1.0432 (0.4963) | 1.0917 (0.5085) | 1.0432 (0.4963)
n = 100, κ = 30%, p = 10 | Pred | 0.2402 (0.2705) | 0.1447 (0.0204) | 0.2446 (0.2780)
n = 100, κ = 30%, p = 10 | Nmse | 0.5242 (0.1257) | 0.5538 (0.1335) | 0.5242 (0.1257)
n = 100, κ = 30%, p = 100 | Est | 1.6242 (0.8179) | 1.6224 (0.8169) | 1.6242 (0.8179)
n = 100, κ = 30%, p = 100 | Pred | 9.8879 (14.11) | 10.807 (13.84) | 9.8901 (14.11)
n = 100, κ = 30%, p = 100 | Nmse | 0.7993 (0.3340) | 0.7985 (0.3334) | 0.7993 (0.3340)
n = 1000, κ = 10%, p = 10 | Est | 0.9810 (0.4532) | 0.9882 (0.4478) | 0.9810 (0.4532)
n = 1000, κ = 10%, p = 10 | Pred | 0.0114 (0.0033) | 0.0112 (0.0015) | 0.0114 (0.0033)
n = 1000, κ = 10%, p = 10 | Nmse | 0.4933 (0.1076) | 0.4984 (0.1075) | 0.4933 (0.1076)
n = 1000, κ = 10%, p = 100 | Est | 1.0063 (0.3465) | 1.0088 (0.3471) | 1.0063 (0.3465)
n = 1000, κ = 10%, p = 100 | Pred | 0.1902 (0.1758) | 0.1116 (0.0049) | 0.1902 (0.1759)
n = 1000, κ = 10%, p = 100 | Nmse | 0.5069 (0.1049) | 0.5082 (0.1050) | 0.5069 (0.1049)
n = 1000, κ = 30%, p = 10 | Est | 1.0110 (0.4886) | 1.0223 (0.4872) | 1.0110 (0.4886)
n = 1000, κ = 30%, p = 10 | Pred | 0.0539 (0.0599) | 0.0141 (0.0019) | 0.0540 (0.0599)
n = 1000, κ = 30%, p = 10 | Nmse | 0.5129 (0.1030) | 0.5206 (0.1043) | 0.5129 (0.1030)
n = 1000, κ = 30%, p = 100 | Est | 1.0291 (0.3567) | 1.0312 (0.3555) | 1.0291 (0.3567)
n = 1000, κ = 30%, p = 100 | Pred | 1.7529 (1.914) | 0.1475 (0.0078) | 1.7530 (1.913)
n = 1000, κ = 30%, p = 100 | Nmse | 0.5054 (0.1055) | 0.5067 (0.1053) | 0.5054 (0.1055)
Table 4. Simulation results on simulated data in Model II (approximate low-rank) in inductive matrix completion, with fixed q = 10, k = 20, for different methods, with their standard error in parentheses over 100 replications. (κ is the missing rate; Est: average of estimation errors; Pred: average of prediction errors; Nmse: average of normalized estimation errors).
Setting | Errors | LMC | MALA | OLS_imp
n = 100, κ = 10%, p = 10 | Est | 3.8319 (1.691) | 3.8749 (1.719) | 3.8319 (1.690)
n = 100, κ = 10%, p = 10 | Pred | 0.1604 (0.1271) | 0.1092 (0.0153) | 0.1598 (0.1322)
n = 100, κ = 10%, p = 10 | Nmse | 0.5116 (0.1154) | 0.5169 (0.1147) | 0.5116 (0.1155)
n = 100, κ = 10%, p = 100 | Est | 5.9500 (2.834) | 5.9452 (2.835) | 5.9500 (2.834)
n = 100, κ = 10%, p = 100 | Pred | 4.7640 (5.272) | 4.6964 (5.515) | 4.7658 (5.275)
n = 100, κ = 10%, p = 100 | Nmse | 0.7313 (0.3454) | 0.7307 (0.3455) | 0.7313 (0.3454)
n = 100, κ = 30%, p = 10 | Est | 4.1838 (1.850) | 4.2535 (1.859) | 4.1839 (1.850)
n = 100, κ = 30%, p = 10 | Pred | 0.7221 (0.7562) | 0.1498 (0.0183) | 0.7371 (0.7741)
n = 100, κ = 30%, p = 10 | Nmse | 0.5182 (0.1128) | 0.5283 (0.1147) | 0.5182 (0.1128)
n = 100, κ = 30%, p = 100 | Est | 7.1589 (4.084) | 7.1558 (4.083) | 7.1589 (4.084)
n = 100, κ = 30%, p = 100 | Pred | 39.899 (52.40) | 40.233 (51.76) | 39.908 (52.41)
n = 100, κ = 30%, p = 100 | Nmse | 0.8998 (0.3821) | 0.8994 (0.3820) | 0.8998 (0.3821)
n = 1000, κ = 10%, p = 10 | Est | 3.9618 (1.678) | 3.9788 (1.677) | 3.9618 (1.678)
n = 1000, κ = 10%, p = 10 | Pred | 0.0409 (0.0269) | 0.0110 (0.0015) | 0.0409 (0.0269)
n = 1000, κ = 10%, p = 10 | Nmse | 0.4968 (0.1196) | 0.4989 (0.1195) | 0.4968 (0.1196)
n = 1000, κ = 10%, p = 100 | Est | 4.1153 (1.295) | 4.1163 (1.294) | 4.1153 (1.295)
n = 1000, κ = 10%, p = 100 | Pred | 1.0250 (0.9988) | 0.1135 (0.0051) | 1.0250 (0.9988)
n = 1000, κ = 10%, p = 100 | Nmse | 0.5060 (0.1096) | 0.5062 (0.1096) | 0.5060 (0.1096)
n = 1000, κ = 30%, p = 10 | Est | 4.1647 (1.990) | 4.1836 (1.995) | 4.1647 (1.990)
n = 1000, κ = 30%, p = 10 | Pred | 0.4615 (0.3497) | 0.0141 (0.0017) | 0.4616 (0.3498)
n = 1000, κ = 30%, p = 10 | Nmse | 0.4905 (0.1157) | 0.4933 (0.1171) | 0.4905 (0.1157)
n = 1000, κ = 30%, p = 100 | Est | 4.0578 (1.400) | 4.0565 (1.397) | 4.0578 (1.400)
n = 1000, κ = 30%, p = 100 | Pred | 8.5608 (6.419) | 0.1538 (0.0069) | 8.5609 (6.419)
n = 1000, κ = 30%, p = 100 | Nmse | 0.4944 (0.1184) | 0.4943 (0.1180) | 0.4944 (0.1184)