Article

On the Number of Independent Pieces of Information in a Functional Linear Model with a Scalar Response

by
Eduardo L. Montoya
Department of Mathematics, California State University, Bakersfield, CA 93311, USA
Submission received: 18 September 2020 / Revised: 29 October 2020 / Accepted: 3 November 2020 / Published: 5 November 2020

Abstract

In a functional linear model (FLM) with scalar response, the parameter curve quantifies the relationship between a functional explanatory variable and a scalar response. While these models can be ill-posed, a penalized regression spline approach may be used to obtain an estimate of the parameter curve. The penalized regression spline estimate will be dependent on the value of a smoothing parameter. However, the ability to obtain a reasonable parameter curve estimate is reliant on how much information is present in the covariate functions for estimating the parameter curve. We propose to quantify the information present in the covariate functions to estimate the parameter curve. In addition, we examine the influence of this information on the stability of the parameter curve estimator and on the performance of smoothing parameter selection methods in a FLM with a scalar response.

1. Introduction

Functional data analysis (FDA) continues to be an active and growing area of research as measurements from continuous processes become increasingly prevalent in many fields. Such measurements constitute functional data because they can be viewed as samples of curves. Recent key references in FDA include Hsing and Eubank [1] and Kokoszka and Reimherr [2]. Consider the following linear model in which the response is a scalar but the explanatory variable is a function:
y_i = α + ∫_T x_i(t) β(t) dt + ε_i,        (1)
for i = 1, 2, …, n. The response y_i is a scalar value, x_i(t) is a function, α is the intercept, β(t) is a parameter curve, and ε_i is uncorrelated random noise with zero mean and constant variance σ². Model (1) is also referred to as a scalar-on-function regression model. Here, we assume T = [0, 1], but note that any closed continuous domain T in ℝ can be transformed to [0, 1]. Further, we assume that the covariate functions are known.
The objective for model (1) is to estimate the smooth parameter curve, β(t). The parameter curve quantifies the relationship between the scalar response and a functional explanatory variable in the presence of uncertainty. A review of some common approaches for estimating β(t) is provided in [3]; these approaches depend on some variant of a tuning parameter (smoothing parameter, number of knots, bandwidth, number of retained principal components, etc.), where the size of the tuning parameter controls the trade-off between the goodness of fit and the smoothness of the parameter curve estimate. Regarding inference for model (1), some recently proposed methodologies include a goodness-of-fit test [4] and a test of linearity in a FLM with a scalar response [5]. Tekbudak et al. [6] provided a comparison of these and other recent testing procedures for linearity in a FLM with a scalar response. For influence diagnostics, Cook’s distance [7] and Peña’s distance [8] have been extended to a FLM with a scalar response [9].
In this paper, our aim is to quantify the amount of information present in the covariate functions for estimating the parameter curve when β(t) is identifiable, and to assess the influence of this information on the numerical stability of the parameter curve estimator. In addition, we review the performance of different smoothing parameter selection methods based on the amount of information present in the covariate functions for estimating β(t). To our knowledge, no study has explicitly focused on this aspect of model (1). Here, the concept of numerical stability (referred to simply as stability hereafter) is tied to the idea that the parameter curve estimate will not change substantially when the set of covariate functions is slightly altered. In Section 2, we define a measure, denoted ζ_x(t), to quantify the amount of information present in the covariate functions for estimating β(t) when the parameter curve is identifiable. There are various computational methods to estimate the smooth parameter curve, β(t), in model (1); our approach is to estimate β(t) using penalized regression spline estimation. Penalized regression spline estimation of β(t) and common smoothing parameter selection methods are discussed in Section 3. Section 4 proposes measures to assess the performance of smoothing parameter selection methods and the stability of a parameter curve estimator. A simulation study that assesses the relationship between ζ_x(t) and the stability of the parameter curve estimator, and that examines the performance of different smoothing parameter selection methods under varying values of ζ_x(t), is given in Section 5. Section 6 provides a real data application of ζ_x(t). We conclude with a discussion in Section 7.

2. Number of Independent Pieces of Information in a FLM

Model (1) can be ill-posed to varying degrees. An ill-posed problem refers to one for which no solution exists, the solution is not unique, or the solution is unstable [10]. Cardot et al. [11] provided theoretical conditions for the existence and uniqueness of a solution to (1), in which the solution lies in the space spanned by the eigenfunctions of the covariance operator of the functional covariate, and the model space is a separable Hilbert space of square-integrable functions defined on [0, 1]. In practice, it is generally assumed that the theoretical conditions for identifiability are satisfied when estimating β(t) in model (1). In scalar-on-image regression models, Happ et al. [12] studied the impact of structural assumptions on the parameter image, such as smoothness and sparsity, on the model estimates, as well as measures to assess to what degree those assumptions are satisfied.
Our focus is assessing how much information is present in the covariate functions to estimate the parameter curve in model (1) when the parameter curve is identifiable. To our knowledge, no prior studies have given consideration to this aspect of model (1) and its influence on the stability of the parameter curve estimator. Given the different areas of application of model (1), assessing this relationship is essential as the stability of the parameter curve estimate would influence the reliability of model uncertainty estimates. We aim to study this aspect of model (1) by proposing to quantify the information present in the covariate functions to estimate the parameter curve. The idea underlying this work is motivated by the work of Wahba [13].
Wahba [13] proposed the idea of the number of independent pieces of information to gauge whether one can obtain a reasonable solution in a type of general smoothing spline model (GSSM). In that context, one takes the eigendecomposition of the inner products of the representers of the bounded linear functionals, scales the eigenvalues by the reciprocal of the variance of the error component in the model, and counts the scaled eigenvalues that are greater than one; this count is taken to be the number of independent pieces of information. If the number of independent pieces of information in a GSSM is large, then a solution to the GSSM is recoverable. However, no explicit criterion was given to quantify how many pieces are required. Motivated by this, we define a measure for the number of independent pieces of information in the covariate functions for estimating β(t) in model (1). Let v_1, …, v_n be the eigenvalues from the eigendecomposition of ∫_0^1 x(t) x(t)^T dt, where x(t) = (x_1(t), …, x_n(t))^T. We define the number of independent pieces of information in the covariate functions for estimating β(t) as
ζ_x(t) = Σ_{i=1}^{n} I(v_i / σ² > 1),        (2)
where σ² is the variance of the error component in model (1). Measure (2) may be estimated by plugging in an estimate of σ². An estimate of σ² is provided in Section 3.
As an illustration of (2), Figure 1 contains four different sets of covariate functions that vary in their number of independent pieces of information. These covariate functions are used in the simulation study of Section 5. Figures in this study were produced using the R packages ggplot2 [14] and cowplot [15]. If less information is present in the covariate functions for estimating β(t), as quantified by (2), then one would anticipate that a reasonable or stable parameter curve estimator is less attainable. To assess whether the size of ζ_x(t) indicates the degree of stability of a solution to model (1), we propose a stability measure for a parameter curve estimator in Section 4. We address the relationship between ζ_x(t) and the stability of the estimator in Section 5.
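For concreteness, the following R sketch computes measure (2) when the covariate curves are observed on a common, equally spaced grid over [0, 1]. The matrix X (curves in rows) and the error-variance estimate sigma2 are assumptions of the sketch rather than objects defined in the paper.

```r
# A sketch of measure (2): X holds the n covariate curves as rows, evaluated on an
# equally spaced grid over [0, 1]; sigma2 is an estimate of the error variance in (1).
zeta_xt <- function(X, sigma2) {
  m  <- ncol(X)                    # number of grid points
  dt <- 1 / (m - 1)                # grid spacing on [0, 1]
  G  <- X %*% t(X) * dt            # n x n matrix approximating  int_0^1 x(t) x(t)^T dt
  v  <- eigen(G, symmetric = TRUE, only.values = TRUE)$values
  sum(v / sigma2 > 1)              # number of scaled eigenvalues exceeding one
}
```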

3. Penalized Regression Spline Estimate of β(t)

Here, we review penalized regression spline estimation of the parameter curve β ( t ) . Assuming that the parameter curve is smooth in the sense that it lies in a Sobolev space of order 4, we may seek an estimate of β ( t ) by minimizing
Σ_{i=1}^{n} [ y_i - α - ∫_0^1 x_i(t) β(t) dt ]² + λ ∫_0^1 [β''(t)]² dt,        (3)
where the term λ ∫_0^1 [β''(t)]² dt penalizes curvature in the estimate. To ease notation, from here forward we assume the intercept α = 0, but a non-zero α value is easily incorporated into the computational approaches discussed.
We use the method of regularized basis functions [16], with a B-spline basis of order 4, to minimize fitting criterion (3) with respect to the parameter curve β(t). With this approach, each covariate function x_i(t) and the parameter curve β(t) are represented using linear combinations of B-spline basis functions. Let x_i(t) be represented as Σ_{j=1}^{J_x} k_{i,j} N_j(t) for i = 1, …, n, where N_j(t) denotes the jth B-spline basis function of order 4. Similarly, we represent β(t) as Σ_{j=1}^{J_β} c_j N_j(t). Now, let Σ_{j=1}^{J_x} k_{i,j} N_j(t) = K[i,] N_x(t), where N_x(t) = [N_1(t), …, N_{J_x}(t)]^T and K[i,] denotes the ith row of the matrix K with elements K[i,j] = k_{i,j}. Similarly, let Σ_{j=1}^{J_β} c_j N_j(t) = N_β(t)^T c, where c = (c_1, …, c_{J_β})^T. In this basis representation framework, penalized least squares criterion (3) is re-expressed as finding c to minimize
(y - M c)^T (y - M c) + λ c^T R c,        (4)
where y = (y_1, …, y_n)^T, M = K ∫_0^1 N_x(t) N_β(t)^T dt, and R = ∫_0^1 N_β''(t) N_β''(t)^T dt. For a given λ, an estimate of β(t) is obtained by minimizing (4) with respect to c via
β̂_λ(t) = N_β(t)^T ĉ_λ = N_β(t)^T (M^T M + λ R)^{-1} M^T y,
where the subscript λ signifies the dependence of the solution on the value of the smoothing parameter.
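A minimal R sketch of this estimation step is given below, using the fda package. The response vector y, the matrix X of discretized covariate curves (rows), the basis sizes, and the value of lambda are illustrative assumptions of the sketch.

```r
library(fda)

# Sketch of the penalized regression spline estimate of beta(t) for a fixed lambda.
# Assumed inputs: y (responses) and X (discretized covariate curves in rows, observed
# on an equally spaced grid over [0, 1]).
tgrid  <- seq(0, 1, length.out = ncol(X))
xbasis <- create.bspline.basis(c(0, 1), nbasis = 50, norder = 4)   # basis for the x_i(t)
bbasis <- create.bspline.basis(c(0, 1), nbasis = 50, norder = 4)   # basis for beta(t)

# Coefficient matrix K of the covariate curves (curves smoothed onto the B-spline basis)
K <- t(smooth.basis(tgrid, t(X), xbasis)$fd$coefs)

# M = K int N_x(t) N_beta(t)^T dt and the roughness penalty R built from second derivatives
M <- K %*% inprod(xbasis, bbasis)
R <- inprod(bbasis, bbasis, Lfdobj1 = int2Lfd(2), Lfdobj2 = int2Lfd(2))

lambda  <- 1e-4                                  # illustrative smoothing parameter value
chat    <- solve(t(M) %*% M + lambda * R, t(M) %*% y)
betahat <- fd(coef = chat, basisobj = bbasis)    # beta-hat_lambda as a functional data object
```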
Various data-driven approaches have been proposed to select the smoothing parameter λ in (3). These methods include Akaike’s information criterion, Akaike’s information criterion corrected, cross-validation, the generalized cross-validation criterion, the L-curve criterion, restricted maximum likelihood, the Schwarz information criterion, and others. Since the size of λ controls the size of the penalty in (3), most of these data-driven methods consist of two parts: one that measures the goodness of fit of the model and another that quantifies the complexity of the parameter curve estimate. Thus, these methods attempt to achieve an optimal balance between how well the model fits the data and the smoothness of the parameter curve estimate (see [4,9,17,18,19] and others cited therein for examples of recent studies that have used one or more of these criteria to select the smoothing parameter in models of type (1)).
In our study, we restrict our discussion to the following commonly used data-driven criteria: Akaike’s information criterion corrected (AICc), cross-validation (CV), the Schwarz information criterion (SIC), and the generalized cross-validation (GCV) criterion. The smoothing parameter selection methods used in our study are by no means exhaustive, nor are they meant to be. Rather, our intent is to explore whether the amount of information present in the covariate functions for estimating β(t) may affect the performance of a given smoothing parameter selection method. For each criterion, the value of the smoothing parameter λ that minimizes the criterion is assumed to be a reasonable value for λ. Each criterion discussed here depends on the residual sum of squares defined by
RSS_λ = Σ_{i=1}^{n} [ y_i - ∫_0^1 x_i(t) β̂_λ(t) dt ]²
and the smoother matrix, S_λ = M (M^T M + λ R)^{-1} M^T. With CV, for each λ, we obtain an estimate of β(t) based on minimizing
Σ_{j=1, j≠i}^{n} [ y_j - ∫_0^1 x_j(t) β(t) dt ]² + λ ∫_0^1 [β''(t)]² dt.
Denote the minimizing solution by β ^ λ ( i ) ( t ) , where ( i ) symbolizes an estimate based on all observations except for the ith case. Since there are n cases that one can delete for a given λ , a cross-validation score is defined as
CV(λ) = n^{-1} Σ_{i=1}^{n} [ y_i - ∫_0^1 x_i(t) β̂_λ^{(i)}(t) dt ]².
A computationally friendly form of CV [16] is defined as
CV(λ) = (1/n) Σ_{i=1}^{n} [ (y_i - ∫_0^1 x_i(t) β̂_λ(t) dt) / (1 - S_{ii}) ]²,
where S i i denotes the ith diagonal element of S λ .
The GCV criterion [20] replaces the diagonal elements of the smoother matrix in the CV formula by tr(S_λ)/n to obtain
GCV(λ) = (RSS_λ / n) / (1 - γ df_λ / n)²,        (5)
where df_λ = tr(S_λ) is referred to as the effective degrees of freedom [21]. RSS_λ and df_λ are commonly used to estimate σ² via σ̂_λ² = RSS_λ / (n - df_λ). An estimate of σ² may then be used to estimate ζ_x(t) via
ζ̂_x(t) = Σ_{i=1}^{n} I(v_i / σ̂² > 1).        (6)
The term γ in (5) represents an inflation of the effective degrees of freedom (EDF) for γ > 1. Inflation of the EDF is used to guard against GCV selecting a smoothing parameter that over-fits the data in non-parametric models [22]. Some simulation studies suggest that a value of 1.4 is reasonable in non-parametric models [22,23]. The SIC criterion [24] may be expressed as
SIC(λ) = ln(n^{-1} RSS_λ) + df_λ log(n) / n.
The AICc criterion proposed by Hurvich et al. [25] penalizes more complex estimates of β ( t ) than does the S I C for smaller sample sizes, and it may be defined as
AICc(λ) = ln(n^{-1} RSS_λ) + 2(df_λ + 1) / (n - df_λ - 2).
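The criteria above can be evaluated over a grid of candidate smoothing parameters. The following R sketch does so using the quantities M, R, and y from the previous sketch; the λ grid and the helper name select_lambda are illustrative, and setting gamma = 1.4 gives the inflated-EDF version of GCV.

```r
# Sketch: evaluate CV, GCV, SIC, and AICc over a grid of lambda values and return the
# minimizing lambda for each criterion.
select_lambda <- function(M, R, y, loglam = seq(-8, 2, by = 0.25), gamma = 1) {
  n <- length(y)
  crit <- t(sapply(loglam, function(ll) {
    lambda <- 10^ll
    S   <- M %*% solve(t(M) %*% M + lambda * R, t(M))    # smoother matrix S_lambda
    fit <- S %*% y
    rss <- sum((y - fit)^2)
    df  <- sum(diag(S))
    c(CV   = mean(((y - fit) / (1 - diag(S)))^2),         # computationally friendly CV
      GCV  = (rss / n) / (1 - gamma * df / n)^2,
      SIC  = log(rss / n) + df * log(n) / n,
      AICc = log(rss / n) + 2 * (df + 1) / (n - df - 2))
  }))
  10^loglam[apply(crit, 2, which.min)]                    # minimizers, one per criterion
}
```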

4. Quantifying Stability of β̂(t) and the Performance of Smoothing Parameter Selection Methods

An ideal value for the smoothing parameter, call it λ*, may be considered one that minimizes the integrated squared error,
ISE(β, β̂_{λ*}) = ∫_0^1 [β(t) - β̂_{λ*}(t)]² dt.        (7)
To assess the performance of the smoothing parameter selection methods discussed in Section 3, we use the median of the measure
K(β, β̂) = ∫_0^1 [β(t) - β̂(t)]² dt / ∫_0^1 [β(t) - β̂_{λ*}(t)]² dt.        (8)
Note that any penalized regression spline estimate of the parameter curve will be dependent on the chosen value of the smoothing parameter, but the notation of this dependence is suppressed for better readability. Relative to β ( t ) , the estimator β ^ ( t ) is considered better the closer the median of (8) is to 1. The further away the median of (8) is from 1, the poorer is the performance of the smoothing parameter selection method. Measure (8) was motivated by a measure proposed by Lee [26] to compare smoothing parameter selection methods in smoothing splines.
To assess the stability of a parameter curve estimator, we propose a leave-one-out measure motivated by the DFFITS [27] statistic. Specifically, we use the median of a leave-one-out integrated squared error measure,
ISE^{(n)}(β, β̂_{λ*}^{(i)}) = (1/n) Σ_{i=1}^{n} ∫_0^1 [β(t) - β̂_{λ*}^{(i)}(t)]² / σ̂²^{(i)} dt,        (9)
where β̂_{λ*}^{(i)}(t) and σ̂²^{(i)} represent estimators of β(t) and σ², respectively, based on all observations except for the ith case, and where λ* minimizes criterion (7). A large value of the median of (9) reflects a less stable estimator. If ζ_x(t) is to be considered an appropriate measure of the information in the covariate functions for estimating β(t), then large values of ζ_x(t) would be associated with small values of the median of (9), in the sense that slightly altering the set of covariate functions does not lead to large changes in the parameter curve estimate. Similarly, smaller values of ζ_x(t) would be associated with larger values of the median of (9). This is evaluated with a simulation study in the next section.
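A sketch of how the leave-one-out measure (9) might be computed for one dataset is given below. Here fit_beta() is a hypothetical helper, not a function from the paper, that refits the model without the ith case at λ* and returns the estimate on a grid together with its error-variance estimate; beta_true is the true curve evaluated on the same grid.

```r
# Sketch of the leave-one-out stability measure (9) for a single data set.
# fit_beta(X, y, lambda) is assumed to return list(beta = estimate on tgrid, sigma2 = error
# variance estimate) -- a hypothetical helper built from the earlier estimation sketch.
ise_loo <- function(X, y, beta_true, lambda_star, tgrid, fit_beta) {
  dt <- 1 / (length(tgrid) - 1)
  ise_i <- sapply(seq_len(nrow(X)), function(i) {
    f <- fit_beta(X[-i, , drop = FALSE], y[-i], lambda_star)
    sum((beta_true - f$beta)^2) * dt / f$sigma2   # int (beta - beta-hat^(i))^2 / sigma2^(i) dt
  })
  mean(ise_i)
}
```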

5. A Simulation Study

Using a simulation study, we examine the stability of a parameter curve estimator and the performance of the smoothing parameter selection methods at varying levels of ζ x ( t ) . The sampling distributions required to derive analytical formulas for the median of (8) and (9) are unattainable due to the dependence of the parameter curve estimator on the smoothing parameter. Therefore, a simulation estimate of the median of (8) is obtained by
SMK = Median{ K_g(β, β̂) }.        (10)
The subscript g denotes the gth simulated dataset. Similarly, we estimate the median of (9) via
SMISE^{(n)} = Median{ ISE_g^{(n)}(β, β̂_{λ*}) }.        (11)
To obtain (10) and (11), simulated datasets are generated under various settings by the model,
y_i = ∫_0^1 x_i(t) β(t) dt + ε_i,        (12)
for i = 1, …, n, where the ε_i are assumed to be independent and identically distributed normal random variables with mean 0 and standard deviation σ. Overall, under the four different sets of covariate functions, g = 2500 simulated datasets were generated for each combination of n ∈ {25, 50, 100}, x_i(t) ∈ {x_i1(t), x_i2(t), x_i3(t), x_i4(t)}, and β ∈ {β_1(t), β_2(t), β_3(t)}, with σ chosen for each setting to ensure that κ ∈ {0.10, 0.20}, where κ refers to the signal-to-noise ratio. As defined by Febrero-Bande et al. [9], κ = σ/ψ, where ψ denotes the standard deviation of ∫_0^1 x_i(t) β(t) dt for i = 1, …, n. The different sets of covariate functions used in this study were first produced on a discretized scale of 50 equally spaced values. To ensure sufficient flexibility in their functional representations, the parameter curve and the sets of covariate functions are represented as functions using the approach described in Section 3, with J_β = 50 and J_x = 50, respectively.
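As a sketch, responses at a target signal-to-noise ratio κ can be generated as follows, assuming X holds the discretized covariate curves (rows) on the grid tgrid over [0, 1] and beta_true is the true parameter curve evaluated on that grid; these object names are illustrative.

```r
# Sketch: generate responses from model (12) at a target signal-to-noise ratio kappa.
dt     <- 1 / (length(tgrid) - 1)
signal <- as.vector(X %*% beta_true) * dt        # approximates int x_i(t) beta(t) dt
kappa  <- 0.10
sigma  <- kappa * sd(signal)                     # kappa = sigma / psi  =>  sigma = kappa * psi
y      <- signal + rnorm(nrow(X), mean = 0, sd = sigma)
```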
The simulations are performed using R [28], along with the extension and usage of code from the R package fda [29]. In addition, the R packages dplyr [30] and tidyr [31] were used for data management. The parameter curves in this study (shown in Figure 2) are defined as
β_1(t) = (1/3) Beta[20,5](t) + (1/3) Beta[12,12](t) + (1/3) Beta[7,30](t),
β_2(t) = (7/10) Beta[3,10](t) + (3/10) Beta[7,2](t),
β_3(t) = (1/10) Beta[2,5](t) + (9/10) Beta[5,2](t),
where
Beta[q,p](t) = [Γ(q + p) / (Γ(q) Γ(p))] t^{q-1} (1 - t)^{p-1}   for t ∈ [0, 1],
and they were chosen to assess the performance of (10) and (11) at varying levels of complexity of the parameter curve in terms of their approximate curvature.
The different sets of covariate functions were generated under different conditions to ensure varying levels of ζ_x(t). The first set of covariate curves (the x_i1(t)) is produced by generating realizations from a normal random variable with mean 0 and standard deviation η that are randomly shifted about the y-axis by η, where η ~ Unif(1, 30). We denote this first set of covariate curves as Covariate Set 1. Covariate Set 2 (the x_i2(t)) is created by generating realizations from a Gaussian process having an exponential covariogram with variance parameter 10 and scale parameter 0.4. Covariate Set 3 (the x_i3(t)) is produced by generating realizations from a mixture of beta random variables randomly shifted about the y-axis. Covariate Set 4 (the x_i4(t)) is obtained by generating realizations from a Brownian motion process with variance parameter 2.7. As an illustration, the four different sets of covariate functions used in this study are shown in Figure 1 for a sample of size n = 25 at a specified signal-to-noise ratio.
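Two of these generating mechanisms might be coded as sketched below. The grid size, sample size, and the assumption that the exponential covariogram has the form variance · exp(-distance/scale) are illustrative choices.

```r
# Sketch of two of the covariate-generating mechanisms, producing curves on a grid of
# 50 equally spaced points over [0, 1].
m     <- 50
tgrid <- seq(0, 1, length.out = m)

# Covariate Set 2: Gaussian process with exponential covariogram (variance 10, scale 0.4)
Sigma <- 10 * exp(-as.matrix(dist(tgrid)) / 0.4)
X2    <- MASS::mvrnorm(n = 25, mu = rep(0, m), Sigma = Sigma)

# Covariate Set 4: Brownian motion with variance parameter 2.7 (cumulative Gaussian increments)
X4 <- t(replicate(25, cumsum(rnorm(m, mean = 0, sd = sqrt(2.7 / m)))))
```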
Table 1, Table 2 and Table 3 provide the SMK when the parameter curves are β 1 ( t ) , β 2 ( t ) , and β 3 ( t ) , respectively, under each simulation setting and smoothing parameter selection method. The results are presented on the log scale to better depict the differences. Note that a small value of the S M K does not reflect whether a reasonable estimator of β ( t ) was obtained but rather reflects the performance of the given smoothing parameter selection method relative to criterion (7). Some of the differences between the smallest and the next smallest values of the SMK across the smoothing parameter selection methods for a given simulation scenario may not appear large. To better distinguish between the smallest and the next smallest values of the SMK, pairwise comparisons of the SMKs in a given simulation setting were performed using Mood’s median test with the Benjamini and Hochberg [32] correction as implemented in the R package RVAideMemoire [33] at significance level 0.05 . Here, we consider the performance of a smoothing parameter selection method to be favorable or best if it obtains the lowest S M K or an S M K that is not significantly different from the lowest S M K . We now summarize the favorable smoothing parameter selection method(s) that were most common across different simulation settings.
Under Covariate Set 1, SIC consistently obtained the lowest SMK across all simulation settings. Under Covariate Set 2 at n = 25, GCV_γ=1.4, AICc, and SIC consistently performed best under the different simulation settings. However, SIC tended to perform best across all simulation settings at n = 50. At n = 100 and the lower κ, AICc generally performed best across the three parameter curves. At the higher κ, SIC was the better smoothing parameter selection method. When the covariate functions assume the form of Covariate Set 3, AICc and GCV_γ=1.4 tended to perform just as well as or better than the other smoothing parameter selection methods across all settings. Under Covariate Set 4 with n = 25, AICc was consistently among the better methods for all settings. For n = 50, GCV and AICc were favorable under parameter curves β_1(t) and β_2(t), whereas SIC was favorable under β_3(t). When n = 100, AICc performed best or just as well as the other methods.
Covariate Sets 1 and 2 tended to have a lower ζ x ( t ) in our study for a given sample size and κ , whereas Covariate Sets 3 and 4 had a higher ζ x ( t ) . While a perfect one-to-one relationship does not appear evident between ζ x ( t ) and the performance of a smoothing parameter selection method, S I C tended to perform more favorably when ζ x ( t ) ranged between two and nine across the three different parameter curves. For the higher values of ζ x ( t ) , A I C c tended to perform just as well or better than the other methods more often than not. In addition, note that G C V γ = 1.4 obtained a lower S M K than G C V for almost all simulation settings.
Table 4 shows the S M I S E ( n ) under each simulation setting. The results are presented on the log scale to better depict the differences. The results show similar patterns or trends under each κ . For a given parameter curve and sample size, the S M I S E ( n ) decreases as ζ x ( t ) increases. Thus, the more information present in the covariate functions for estimating the parameter curve, the more stable the parameter curve estimator. We also note that, for a given covariate set, an increase in the sample size corresponds to a decrease in the S M I S E ( n ) and to a non-decreasing ζ x ( t ) . This reassures that ζ x ( t ) may be viewed as a measure of the amount of information present in the covariate functions in the sense that under a given covariate set, more observed data tends to increase ζ x ( t ) to varying degrees and provide a smaller S M I S E ( n ) . However, a large sample size does not imply a large ζ x ( t ) , as illustrated by Covariate Set 1. Similarly, a small sample size does not imply a low ζ x ( t ) , such as under Covariate Set 4. The differences in results under each κ are in part due to the scaling involved in (9) by σ ^ 2 ( i ) , where σ ^ 2 ( i ) would tend to be higher under κ = 0.20 .
Recall that for smoothing parameter selection under a covariate set with a low ζ x ( t ) , S I C performed best or just as well as the other smoothing parameter selection methods. For covariate sets with a higher ζ x ( t ) , A I C c was generally among the better smoothing parameter selection methods. To visualize the impact of ζ x ( t ) on the stability of the parameter curve estimate, Figure 3 shows the resulting approximate expected value of the parameter curve estimator (computed across all parameter curve estimates in the simulation study) plus or minus two times the approximate standard deviation of the parameter curve estimator when using the preferred smoothing parameter selection method suggested by Table 1, Table 2 and Table 3. For brevity, we only present the results for n = 25 . The parameter curve estimator, under Covariate Set 1, showed much higher variability than under the other covariate sets. Further, the covariate set with a higher ζ x ( t ) (Covariate Set 4) reflected the lowest variability. Similar behavior was exhibited at n = 50 and n = 100 . This behavior is consistent with the behavior of the S M I S E ( n ) for a given covariate set and sample size. This reflects that a low ζ x ( t ) is associated with a less stable solution, which in turn may substantially increase the variability of a parameter curve estimator, where such variability would not be reflected in an observed confidence interval for the parameter curve.
In practice, ζ_x(t) will need to be estimated due to its dependence on σ². An estimate of this measure, ζ̂_x(t), is provided in (6) using the estimate of σ² given in Section 3. The estimate of σ² will be dependent on the chosen value of the smoothing parameter. To better understand the impact of the chosen value of the smoothing parameter on (6), Table 5, Table 6 and Table 7 provide the average value of (6), computed as the average over all simulated datasets, when the parameter curves are β_1(t), β_2(t), and β_3(t), respectively, under each simulation setting and smoothing parameter selection method. On average, (6) provides a reasonable estimate of ζ_x(t) in the simulation settings considered.

6. A Real Data Illustration

In this section, we illustrate the use of ζ_x(t) with two real datasets: the gasoline dataset described by Kokoszka and Reimherr [2] and used by Reiss and Ogden [18], and the streamflow and precipitation data described by Masselot et al. [34]. Model (1) was applied to both datasets to model the relationship between a functional covariate and a scalar response. The gasoline data consist of near-infrared reflectance spectra of 60 gasoline samples (measured in 2-nm intervals from 900 to 1700 nm), as well as the octane number for each gasoline sample. These data are available in the R package refund [35]. The aim of modeling these data using model (1) is to determine the association between the octane rating (response variable) and the near-infrared reflectance spectra curves (covariate curves). We represent the discretized near-infrared reflectance spectra measurements and the parameter curve as functions using the methods described in Section 3, with J_x = 50 and J_β = 50. The top left panel of Figure 4 contains the near-infrared reflectance spectra curves, x_i(ω) for i = 1, …, 60. The estimated number of independent pieces of information in these covariate functions for estimating the parameter curve is 5. Since ζ̂_x(t) = 5, we use the SIC for smoothing parameter selection because it performed just as well as or better than the other methods in our simulation study when ζ_x(t) was small. The upper right panel shows the parameter curve estimate along with 95% point-wise confidence intervals for β(ω) when using the SIC for smoothing parameter selection. Note that the estimated parameter curve has a positive effect in the intervals (950, 1125) and (1325, 1475), implying that higher values of near-infrared reflectance spectra are associated with higher octane levels in these intervals. Lower values of near-infrared reflectance spectra are associated with lower octane levels in the intervals (1175, 1325) and (1525, 1650). For a given λ, approximate point-wise confidence intervals may be constructed using the variance of the parameter curve estimator, σ̂² N_β(t)^T T T^T N_β(t), where T = (M^T M + λ R)^{-1} M^T [16]. Since ζ_x(t) is low, our simulation study suggests that the variability of the parameter curve estimator is greater than what is reflected by the confidence intervals.
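Assuming the zeta_xt() sketch from Section 2 and an error-variance estimate from the fitted model, the computation for the gasoline data might look like the following; the value assigned to sigma2_hat is only a placeholder, not the estimate used in the paper.

```r
# Sketch of the gasoline illustration, reusing zeta_xt() from the earlier sketch.
library(refund)
data(gasoline)
y <- gasoline$octane
X <- as.matrix(gasoline$NIR)      # 60 near-infrared reflectance spectra (rows)

sigma2_hat <- 0.1                 # placeholder; in practice use RSS_lambda / (n - df_lambda)
zeta_xt(X, sigma2 = sigma2_hat)   # the paper reports zeta-hat = 5 for these data
```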
We briefly summarize the streamflow and precipitation data, referring to Masselot et al. [34] for further information on the study and the corresponding data. The data consist of yearly observations of the sum of daily streamflow values from 1 July to 31 October, and yearly precipitation time series from 1 June to 31 October, for the years 1981–2012. These data were measured in areas of the Dartmouth River located in a region of the province of Quebec, Canada. In this study, investigators were interested in estimating and forecasting yearly total streamflow (scalar response) using the corresponding yearly precipitation profile (functional covariate). The precipitation time series and the parameter curve are represented as functions using the methods described in Section 3, with J_x = 153 and J_β = 22. However, since precipitation measurements are non-negative, we constrain the functional representation of the precipitation measurements to be non-negative by imposing non-negativity constraints on the B-spline coefficients. The bottom left panel of Figure 4 contains the precipitation curves, x_i(t), one per year from 1981 to 2012, covering the daily time domain from June to October. For these data, ζ̂_x(t) = 29. We use the AICc since it performed just as well as or better than the other methods in our simulation study when ζ_x(t) was large. The lower right panel shows the parameter curve estimate along with 95% point-wise confidence intervals for β(t) when using the AICc for smoothing parameter selection. The estimated parameter curve shows that the effect of precipitation on total streamflow is negative in June, as well as in October. Due to the size of ζ̂_x(t), our simulation study suggests that this parameter curve estimate is more stable than the one estimated for the gasoline data.

7. Discussion

We present a measure, ζ_x(t), for a FLM with a scalar response to determine how much information is present in the covariate curves for estimating the parameter curve β(t) when the parameter curve is identifiable. To estimate the parameter curve in model (1), penalized regression spline estimation is used, and we summarize several commonly used methods for selecting the smoothing parameter. To assess the stability of the parameter curve estimator under varying levels of ζ_x(t), we examine the SMISE^{(n)} of the parameter curve estimator. The results show that the greater ζ_x(t) is, the more stable the parameter curve estimator, in that it produces a smaller SMISE^{(n)} than when ζ_x(t) is smaller. Further, we assess the impact of ζ_x(t) on smoothing parameter selection, and, while a one-to-one relationship between ζ_x(t) and the performance of a smoothing parameter selection method is not clear, SIC tends to perform just as well as or better than the other methods when ζ_x(t) is small, whereas AICc tends to perform just as well as or better than the other methods when ζ_x(t) is large.
Overall, our simulation study showed that the size of ζ_x(t) impacts both the stability of a parameter curve estimator and the performance of the smoothing parameter selection methods. Future work will study whether these results hold under alternative parameter curve estimation procedures. An interesting direction for future work is to investigate whether shape constraints on the parameter curve could serve as a remedial measure for improving the stability of the parameter curve estimator, particularly when ζ_x(t) is low. Scenarios in which shape constraints are imposed on the parameter curve do arise in practice in a FLM with a scalar response (see [36,37] for some recent examples). Identifying problematic data in functional regression models remains a critical and ongoing challenge. We hope this study encourages others to consider approximating ζ_x(t) when applying model (1) so that the amount of information present in the covariate curves for estimating the parameter curve can be gauged. This, in turn, may provide guidance in choosing a smoothing parameter selection method, as well as in considering the stability of the parameter curve estimate.

Funding

This research was supported in part by NSF grant HRD-1547784.

Acknowledgments

The reviewers of this manuscript are thanked for their helpful comments. The author expresses his gratitude to Pierre Masselot for sharing the streamflow and precipitation data.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Hsing, T.; Eubank, R. Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators; John Wiley & Sons: Chichester, UK, 2015. [Google Scholar]
  2. Kokoszka, P.; Reimherr, M. Introduction to Functional Data Analysis, 1st ed.; Chapman and Hall: Boca Raton, FL, USA, 2017. [Google Scholar]
  3. Reiss, P.T.; Goldsmith, J.; Shang, H.L.; Ogden, R.T. Methods for Scalar-on-Function Regression. Int. Stat. Rev. 2017, 85, 228–249. [Google Scholar] [CrossRef] [PubMed]
  4. García-Portugués, E.; González-Manteiga, W.; Febrero-Bande, M. A goodness-of-fit test for the functional linear model with scalar response. J. Comput. Graph. Stat. 2014, 23, 761–778. [Google Scholar] [CrossRef] [Green Version]
  5. McLean, M.W.; Hooker, G.; Ruppert, D. Restricted likelihood ratio tests for linearity in scalar-on-function regression. Stat. Comput. 2015, 25, 997–1008. [Google Scholar] [CrossRef] [Green Version]
  6. Tekbudak, M.Y.; Alfaro-Córdoba, M.; Maity, A.; Staicu, A.M. A comparison of testing methods in scalar-on-function regression. arXiv 2017, arXiv:1710.05729. [Google Scholar] [CrossRef] [Green Version]
  7. Cook, R.D. Detection of influential observation in linear regression. Technometrics 1977, 19, 15–18. [Google Scholar]
  8. Peña, D. A new statistic for influence in linear regression. Technometrics 2005, 47, 1–12. [Google Scholar] [CrossRef] [Green Version]
  9. Febrero-Bande, M.; Galeano, P.; González-Manteiga, W. Measures of influence for the functional linear model with scalar response. J. Multivar. Anal. 2010, 101, 327–339. [Google Scholar] [CrossRef] [Green Version]
  10. Groetsch, C.W. Inverse Problems in the Mathematical Sciences; Springer: Brunswick, GA, USA, 1993; Volume 52. [Google Scholar]
  11. Cardot, H.; Ferraty, F.; Sarda, P. Spline estimators for the functional linear model. Stat. Sin. 2003, 13, 571–591. [Google Scholar]
  12. Happ, C.; Greven, S.; Schmid, V.J. The impact of model assumptions in scalar-on-image regression. Stat. Med. 2018, 37, 4298–4317. [Google Scholar] [CrossRef] [Green Version]
  13. Wahba, G. Ill Posed Problems: Numerical and Statistical Methods for Mildly, Moderately and Severely Ill Posed Problems with Noisy Data; Technical Report; Department of Statistics, University of Wisconsin: Madison, WI, USA, 1980. [Google Scholar]
  14. Wickham, H. ggplot2: Create Elegant Data Visualisations Using the Grammar of Graphics; R Package Version 3.3.1; 2019. [Google Scholar]
  15. Wilke, C.O. Cowplot: Streamlined Plot Theme and Plot Annotations for ‘ggplot2’; R Package Version 0.9.4; 2019. [Google Scholar]
  16. Ramsay, J.O.; Silverman, B.W. Functional Data Analysis, 2nd ed.; Springer: New York, NY, USA, 2005. [Google Scholar]
  17. Crambes, C.; Kneip, A.; Sarda, P. Smoothing splines estimators for functional linear regression. Ann. Stat. 2009, 37, 35–72. [Google Scholar] [CrossRef]
  18. Reiss, P.T.; Ogden, R.T. Smoothing parameter selection for a class of semiparametric linear models. J. R. Stat. Soc. Ser. B 2009, 71, 505–523. [Google Scholar]
  19. Shin, H.; Lee, S. An RKHS approach to robust functional linear regression. Stat. Sin. 2016, 26, 255–272. [Google Scholar] [CrossRef]
  20. Craven, P.; Wahba, G. Smoothing Noisy Data with Spline Functions: Estimating the Correct Degree of Smoothing by the Method of Generalized Cross-Validation. Numer. Math. 1979, 31, 377–403. [Google Scholar] [CrossRef]
  21. Buja, A.; Hastie, T.; Tibshirani, R. Linear smoothers and additive models. Ann. Stat. 1989, 17, 453–510. [Google Scholar] [CrossRef]
  22. Cummins, D.J.; Filloon, T.G.; Nychka, D. Confidence intervals for nonparametric curve estimates: Toward more uniform pointwise coverage. J. Am. Stat. Assoc. 2001, 96, 233–246. [Google Scholar] [CrossRef]
  23. Kim, Y.J.; Gu, C. Smoothing spline Gaussian regression: More scalable computation via efficient approximation. J. R. Stat. Soc. Ser. B 2004, 66, 337–356. [Google Scholar] [CrossRef] [Green Version]
  24. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464. [Google Scholar] [CrossRef]
  25. Hurvich, C.M.; Simonoff, J.S.; Tsai, C.L. Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion. J. R. Stat. Soc. Ser. B 1998, 60, 271–293. [Google Scholar] [CrossRef]
  26. Lee, T.C. Smoothing parameter selection for smoothing splines: A simulation study. Comput. Stat. Data Anal. 2003, 42, 139–148. [Google Scholar] [CrossRef]
  27. Belsley, D.A.; Kuh, E.; Welsch, R.E. Regression Diagnostics: Identifying Influential Data and Sources of Collinearity; John Wiley & Sons: New York, NY, USA, 2005; Volume 571. [Google Scholar]
  28. R Development Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2018. [Google Scholar]
  29. Ramsay, J.O.; Wickham, H.; Graves, S.; Hooker, G. Fda: Functional Data Analysis; R Package Version 2.4.7; 2017. [Google Scholar]
  30. Wickham, H.; François, R.; Henry, L.; Müller, K. Dplyr: A Grammar of Data Manipulation; R Package Version 0.7.8; 2018. [Google Scholar]
  31. Wickham, H.; Henry, L. Tidyr: Easily Tidy Data with ‘Spread()’ and ‘Gather()’ Functions; R Package Version 0.8.2; 2018. [Google Scholar]
  32. Benjamini, Y.; Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B 1995, 57, 289–300. [Google Scholar] [CrossRef]
  33. Hervé, M. RVAideMemoire: Testing and Plotting Procedures for Biostatistics; R Package Version 0.9-73; 2019. [Google Scholar]
  34. Masselot, P.; Dabo-Niang, S.; Chebana, F.; Ouarda, T.B. Streamflow forecasting using functional regression. J. Hydrol. 2016, 538, 754–766. [Google Scholar]
  35. Goldsmith, J.; Scheipl, F.; Huang, L.; Wrobel, J.; Gellar, J.; Harezlak, J.; McLean, M.W.; Swihart, B.; Xiao, L.; Crainiceanu, C.; et al. Refund: Regression with Functional Data; R Package Version 0.1-17; 2018. [Google Scholar]
  36. Schipper, M.; Taylor, J.M.; Lin, X. Generalized monotonic functional mixed models with application to modelling normal tissue complications. J. R. Stat. Soc. Ser. C 2008, 57, 149–163. [Google Scholar] [CrossRef]
  37. Montoya, E.L.; Meiring, W. On the relative efficiency of a monotone parameter curve estimator in a functional nonlinear model. Stat. Comput. 2013, 23, 425–436. [Google Scholar]
Figure 1. Four different sets of covariate functions of size n = 25 and their respective number of independent pieces of information when the signal-to-noise ratio is 0.10 in model (1).
Figure 2. The parameter curves β 1 ( t ) (left), β 2 ( t ) (middle), and β 3 ( t ) (right) used in the simulation study.
Figure 3. In each panel, the solid line represents the true parameter curve. The orange shaded region corresponds to the approximate expected value plus or minus two times the approximate standard deviation of the parameter curve estimator.
Figure 4. (top left) The near-infrared reflectance spectra curves; (top right) the parameter curve estimate displaying the association between the octane rating and the near-infrared reflectance spectra curves, together with 95% point-wise confidence intervals; (bottom) analogous graphs for the streamflow and precipitation data.
Table 1. The SMK under each smoothing parameter selection criterion for each simulation setting under parameter curve β_1(t). A + next to an SMK for a given sample size indicates the lowest SMK across the different smoothing parameter selection methods. A * next to an SMK for a given sample size indicates that the SMK was not significantly different from the lowest SMK.
                             n     ζ_x(t)   CV        GCV       GCV_γ=1.4   AICc      SIC
Covariate Set 1   κ = 0.10   25    3        0.193     0.180     0.169 *     0.171 *   0.168 +
                             50    3        0.193     0.182     0.177 *     0.180     0.173 +
                             100   3        0.184 *   0.182 *   0.178 +     0.181 *   0.178 +
                  κ = 0.20   25    2        0.190     0.183     0.139 *     0.151 *   0.138 +
                             50    2        0.184     0.173     0.149 *     0.164     0.138 +
                             100   3        0.174     0.174     0.168 *     0.173     0.158 +
Covariate Set 2   κ = 0.10   25    6        0.372     0.363     0.279 +     0.297 *   0.289 *
                             50    9        0.478 *   0.478 *   0.450 +     0.459 *   0.454 *
                             100   14       0.536 *   0.529 *   0.559 *     0.509 +   0.719
                  κ = 0.20   25    3        0.165     0.146     0.086 +     0.099 *   0.088 *
                             50    5        0.276     0.276     0.136       0.220     0.095 +
                             100   8        0.449     0.453     0.373       0.445     0.261 +
Covariate Set 3   κ = 0.10   25    8        0.717     0.611 *   0.697       0.604 +   0.711
                             50    10       0.412     0.400     0.357 *     0.352 +   0.412
                             100   10       0.310     0.309     0.230 +     0.269     0.265
                  κ = 0.20   25    7        0.671     0.665     0.597 *     0.580 +   0.628 *
                             50    8        0.566 *   0.538 *   0.553 *     0.515 +   0.639
                             100   9        0.446 *   0.446 *   0.409 +     0.422 *   0.617
Covariate Set 4   κ = 0.10   25    14       0.604 *   0.601 *   0.590 *     0.553 +   0.654
                             50    17       0.483     0.455 *   0.460       0.411 +   0.599
                             100   21       0.350     0.348     0.293 +     0.305 *   0.441
                  κ = 0.20   25    10       0.539     0.521     0.417 *     0.410 +   0.455 *
                             50    12       0.581 *   0.591 *   0.555 *     0.537 +   0.596 *
                             100   16       0.550 *   0.549 *   0.538 *     0.523 +   0.739
Table 2. Analogous to Table 1 for parameter curve β 2 ( t ) .
                             n     ζ_x(t)   CV        GCV       GCV_γ=1.4   AICc      SIC
Covariate Set 1   κ = 0.10   25    3        0.059     0.055     0.047 +     0.049 *   0.047 +
                             50    3        0.279     0.260     0.079       0.191     0.065 +
                             100   3        0.621     0.619     0.494       0.604     0.289 +
                  κ = 0.20   25    2        0.052     0.050     0.040 *     0.043 *   0.039 +
                             50    3        0.070     0.063     0.055 *     0.059     0.051 +
                             100   3        0.298     0.281     0.080       0.248     0.063 +
Covariate Set 2   κ = 0.10   25    6        0.289     0.307     0.215 +     0.239 *   0.222 *
                             50    9        0.397     0.398     0.332 *     0.367     0.298 +
                             100   14       0.567 *   0.560 *   0.633       0.542 +   0.827
                  κ = 0.20   25    3        0.133     0.115     0.048 +     0.057 *   0.049 *
                             50    5        0.188     0.176     0.078       0.146     0.051 +
                             100   8        0.387     0.382     0.297       0.363     0.222 +
Covariate Set 3   κ = 0.10   25    8        0.488 *   0.495 *   0.498 *     0.464 +   0.515 *
                             50    10       0.416     0.427     0.383 *     0.363 +   0.451
                             100   10       0.413 *   0.417 *   0.392 *     0.378 +   0.521
                  κ = 0.20   25    7        0.407     0.452     0.360 +     0.361 *   0.386 *
                             50    8        0.446     0.418 *   0.411 *     0.382 +   0.512
                             100   9        0.482 *   0.487 *   0.429 +     0.439 *   0.633
Covariate Set 4   κ = 0.10   25    16       0.425 +   0.435 *   0.478       0.426 *   0.496
                             50    20       0.349 *   0.344 *   0.390       0.324 +   0.496
                             100   23       0.394 *   0.407 *   0.472       0.377 +   0.731
                  κ = 0.20   25    12       0.477     0.481     0.449 *     0.426 +   0.476
                             50    16       0.440 *   0.425 *   0.513       0.413 +   0.645
                             100   17       0.481 *   0.471 *   0.519       0.429 +   0.874
Table 3. Analogous to Table 1 for parameter curve β 3 ( t ) .
                             n     ζ_x(t)   CV        GCV       GCV_γ=1.4   AICc      SIC
Covariate Set 1   κ = 0.10   25    3        0.095     0.091     0.081 +     0.085 *   0.081 +
                             50    3        0.088     0.086     0.077 *     0.082     0.072 +
                             100   3        0.086     0.087     0.082 *     0.086     0.075 +
                  κ = 0.20   25    2        0.093     0.089     0.069 +     0.075 *   0.069 +
                             50    3        0.094     0.092     0.078       0.087     0.068 +
                             100   3        0.093     0.092     0.084       0.091     0.075 +
Covariate Set 2   κ = 0.10   25    6        0.077     0.072     0.060 +     0.063 *   0.061 *
                             50    9        0.091     0.088     0.071 *     0.077     0.068 +
                             100   14       0.316     0.321     0.273       0.302     0.245 +
                  κ = 0.20   25    3        0.075     0.069     0.046 +     0.052 *   0.046 +
                             50    5        0.072     0.071     0.055 *     0.064     0.052 +
                             100   8        0.100     0.097     0.080 *     0.092     0.073 +
Covariate Set 3   κ = 0.10   25    8        0.358     0.347     0.308 *     0.307 +   0.323 *
                             50    10       0.332     0.321 *   0.332       0.296 +   0.386
                             100   10       0.335 *   0.340 *   0.301 +     0.311 *   0.409
                  κ = 0.20   25    7        0.160     0.156     0.082 +     0.092 *   0.095 *
                             50    8        0.302     0.293     0.246 *     0.261     0.235 +
                             100   9        0.353     0.343 *   0.325 *     0.322 +   0.376
Covariate Set 4   κ = 0.10   25    15       0.538 *   0.504 +   0.538 *     0.520 *   0.550 *
                             50    19       0.583 *   0.555 +   0.772       0.612 *   0.869
                             100   23       0.377 *   0.389 *   0.560       0.360 +   1.158
                  κ = 0.20   25    11       0.192     0.189     0.081 +     0.088 *   0.098 *
                             50    14       0.334     0.335     0.213       0.261     0.175 +
                             100   17       0.465 *   0.466 *   0.450 *     0.450 *   0.432 +
Table 4. The S M I S E ( n ) under each simulation setting. The size of ζ x ( t ) is shown below each respective S M I S E ( n ) in parentheses.
           β_1(t)                            β_2(t)                            β_3(t)
n          x_1(t)   x_2(t)   x_3(t)  x_4(t)  x_1(t)   x_2(t)   x_3(t)  x_4(t)  x_1(t)   x_2(t)   x_3(t)  x_4(t)
κ = 0.10
25         10.44    5.13     4.83    2.49    11.12    5.76     5.56    3.51    11.32    5.56     5.91    3.62
           (3)      (6)      (8)     (14)    (3)      (6)      (8)     (16)    (3)      (6)      (8)     (15)
50         9.66     4.41     3.07    1.28    10.27    5.02     3.94    2.21    10.61    5.01     4.67    2.54
           (3)      (9)      (10)    (17)    (3)      (9)      (10)    (20)    (3)      (9)      (10)    (19)
100        9.31     3.94     2.67    0.61    9.18     4.44     3.29    1.51    10.46    4.85     4.47    1.85
           (3)      (14)     (10)    (21)    (3)      (14)     (10)    (23)    (3)      (14)     (10)    (23)
κ = 0.20
25         8.95     4.01     3.86    1.74    9.96     4.69     4.48    3.02    9.97     4.34     4.28    2.79
           (2)      (3)      (7)     (10)    (2)      (3)      (7)     (12)    (2)      (3)      (7)     (11)
50         8.36     3.49     2.65    0.67    9.28     4.00     3.29    1.71    9.11     3.88     3.57    1.82
           (2)      (5)      (8)     (12)    (3)      (5)      (8)     (16)    (3)      (5)      (8)     (14)
100        8.06     3.17     1.99    0.18    8.61     3.74     2.79    1.07    8.93     3.74     3.38    1.27
           (3)      (8)      (9)     (16)    (3)      (8)      (9)     (17)    (3)      (8)      (9)     (17)
Table 5. The average value of (6) computed across all simulated datasets for each simulation setting under parameter curve β 1 ( t ) .
                             n     ζ_x(t)   CV       GCV      GCV_γ=1.4   AICc     SIC
Covariate Set 1   κ = 0.10   25    3        2.71     2.71     2.71        2.71     2.71
                             50    3        3.00     3.00     3.00        3.00     3.00
                             100   3        3.00     3.00     3.00        3.00     3.00
                  κ = 0.20   25    2        2.00     2.00     2.00        2.00     2.00
                             50    2        2.49     2.50     2.49        2.49     2.49
                             100   3        3.00     3.00     3.00        3.00     3.00
Covariate Set 2   κ = 0.10   25    6        6.16     6.19     6.05        6.08     6.08
                             50    9        9.21     9.22     9.14        9.19     9.10
                             100   14       14.18    14.18    14.14       14.17    14.09
                  κ = 0.20   25    3        3.76     3.77     3.72        3.73     3.72
                             50    5        5.33     5.33     5.30        5.32     5.29
                             100   8        7.87     7.88     7.86        7.87     7.84
Covariate Set 3   κ = 0.10   25    8        7.92     7.99     7.82        7.85     7.90
                             50    10       9.78     9.78     9.74        9.76     9.72
                             100   10       10.32    10.32    10.30       10.31    10.27
                  κ = 0.20   25    7        6.80     6.82     6.76        6.77     6.78
                             50    8        7.74     7.75     7.69        7.72     7.65
                             100   9        9.00     9.00     9.00        9.00     9.00
Covariate Set 4   κ = 0.10   25    14       14.84    14.87    14.58       14.61    14.76
                             50    17       17.43    17.43    17.30       17.36    17.25
                             100   21       21.19    21.19    21.14       21.17    21.11
                  κ = 0.20   25    10       10.32    10.35    10.07       10.10    10.19
                             50    12       12.99    13.00    12.87       12.93    12.80
                             100   16       15.99    15.99    15.96       15.98    15.92
Table 6. Analogous to Table 5 for parameter curve β 2 ( t ) .
                             n     ζ_x(t)   CV       GCV      GCV_γ=1.4   AICc     SIC
Covariate Set 1   κ = 0.10   25    3        2.96     2.96     2.96        2.96     2.96
                             50    3        3.00     3.00     3.00        3.00     3.00
                             100   3        3.00     3.00     3.00        3.00     3.00
                  κ = 0.20   25    2        2.01     2.01     2.01        2.01     2.01
                             50    3        2.99     2.99     2.99        2.99     2.99
                             100   3        3.00     3.00     3.00        3.00     3.00
Covariate Set 2   κ = 0.10   25    6        6.18     6.17     6.06        6.08     6.08
                             50    9        8.96     8.96     8.88        8.92     8.84
                             100   14       14.01    14.01    13.96       14.00    13.88
                  κ = 0.20   25    3        3.74     3.75     3.69        3.70     3.70
                             50    5        5.17     5.16     5.12        5.14     5.10
                             100   8        7.69     7.69     7.67        7.69     7.65
Covariate Set 3   κ = 0.10   25    8        8.08     8.16     8.01        8.03     8.09
                             50    10       9.81     9.81     9.77        9.79     9.75
                             100   10       10.34    10.34    10.32       10.33    10.29
                  κ = 0.20   25    7        6.85     6.87     6.83        6.84     6.84
                             50    8        7.81     7.82     7.75        7.79     7.69
                             100   9        9.00     9.00     9.00        9.00     9.00
Covariate Set 4   κ = 0.10   25    16       16.62    16.65    16.28       16.31    16.64
                             50    20       20.33    20.34    20.13       20.21    20.08
                             100   23       23.35    23.35    23.30       23.32    23.25
                  κ = 0.20   25    12       12.68    12.71    12.37       12.41    12.55
                             50    16       15.48    15.49    15.32       15.40    15.24
                             100   17       17.68    17.68    17.58       17.64    17.48
Table 7. Analogous to Table 5 for parameter curve β 3 ( t ) .
                             n     ζ_x(t)   CV       GCV      GCV_γ=1.4   AICc     SIC
Covariate Set 1   κ = 0.10   25    3        2.93     2.93     2.93        2.93     2.93
                             50    3        3.00     3.00     3.00        3.00     3.00
                             100   3        3.00     3.00     3.00        3.00     3.00
                  κ = 0.20   25    2        2.00     2.00     2.00        2.00     2.00
                             50    3        2.90     2.90     2.90        2.90     2.90
                             100   3        3.00     3.00     3.00        3.00     3.00
Covariate Set 2   κ = 0.10   25    6        6.11     6.12     6.02        6.04     6.04
                             50    9        9.08     9.08     9.02        9.05     9.00
                             100   14       14.05    14.05    14.00       14.04    13.97
                  κ = 0.20   25    3        3.77     3.77     3.72        3.73     3.73
                             50    5        5.22     5.22     5.18        5.20     5.17
                             100   8        7.75     7.75     7.73        7.75     7.72
Covariate Set 3   κ = 0.10   25    8        7.87     7.94     7.77        7.79     7.84
                             50    10       9.77     9.78     9.71        9.75     9.65
                             100   10       10.32    10.32    10.29       10.31    10.25
                  κ = 0.20   25    7        6.82     6.83     6.79        6.80     6.80
                             50    8        7.77     7.78     7.70        7.74     7.66
                             100   9        9.00     9.00     9.00        9.00     9.00
Covariate Set 4   κ = 0.10   25    15       15.71    15.75    15.38       15.41    15.59
                             50    19       18.98    19.00    18.73       18.86    18.63
                             100   23       22.59    22.60    22.47       22.56    22.26
                  κ = 0.20   25    11       11.70    11.73    11.45       11.48    11.56
                             50    14       14.36    14.36    14.26       14.30    14.22
                             100   17       16.94    16.94    16.91       16.93    16.88