Article

Estimation of Linear Regression with the Dimensional Analysis Method

by Luis Pérez-Domínguez 1,†,‡, Harish Garg 2,*,‡, David Luviano-Cruz 1,‡ and Jorge Luis García Alcaraz 1,‡
1 Departamento de Ingeniería Industrial y Manufactura, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez 32315, Mexico
2 School of Mathematics, Thapar Institute of Engineering & Technology, Deemed University, Patiala 147004, Punjab, India
* Author to whom correspondence should be addressed.
† Member in Grupo de Investigación en Software (GIS); Member of Canadian Operational Research Society (CORS); Member of Society for Industrial and Applied Mathematics.
‡ These authors contributed equally to this work.
Submission received: 31 March 2022 / Revised: 30 April 2022 / Accepted: 9 May 2022 / Published: 12 May 2022
(This article belongs to the Special Issue Probability-Based Fuzzy Sets: Extensions and Applications)

Abstract: Dimensional Analysis (DA) is a mathematical method that manipulates the data to be analyzed in a homogenized manner. Likewise, linear regression is a potent method for analyzing data in diverse fields, and data visualization has gained attention in trend studies. Linear regression is also an important tool for building predictive models and detecting patterns in data. However, handling the uncertainty associated with the data transformation remains an open problem. In this sense, this work presents a new contribution that combines linear regression with Dimensional Analysis (DA) to address instability and error issues. In addition, our method provides a second contribution by incorporating the attitude of the decision makers involved in the study. The experimentation shows that DA can handle the regression problem under the complex situations that may arise in an investigation. A real-life case study is used to demonstrate our proposal.
MSC:
68T10; 68T30; 62J86; 68T37; 68V30; 60A86; 62F07; 62H30

1. Introduction

The linear regression (LR) model is utilized in numerous areas of application [1]: for example, engineering, economics, ecology, the social sciences, and medicine, among many others. Hence, linear regression is a potent and flexible technique for addressing regression issues, and the LR model is an extensive topic of considerable interest for researchers [2,3,4].
Moreover, according to [5,6,7], inventories play a key role because they represent around 60% of the operational cost of organizations. Generally, inventories are classified as follows: (1) raw material, (2) work in process, (3) spare parts, and (4) finished goods. In addition, the interest in inventories is driven by the cost-management strategies of organizations [8,9,10,11]. From an operations research perspective, inventory management can achieve considerable cost reductions. Therefore, an accurate forecasting method is imperative in order to determine quantities with minimal error [12]. Accordingly, the literature reports the use of several methods for inventory forecasting: for example, linear regression and time series [13,14], neural networks [11,15], machine learning [16], and the Bayesian principal component regression model [17], among others [10,18].
Additionally, dimensional analysis (DA) attracts extensive interest in engineering [19,20]. DA has the potential to model and to simplify the scale and dimension of the variables involved [21]. Reported applications include neural networks [22], matrix manipulations [23], the physical sciences [24], modern dimensional analysis methods [25,26], and statistical theories for dimensional analysis [27].
Therefore, dimensional analysis (DA) is a proficient tool that captures the interrelationship of the data or arguments under analysis. Much of the research on DA is related to statistical settings [28,29,30,31]. For instance, Dovi et al. [32] reported improvements in the statistical accuracy of dimensional analysis correlations. However, that work does not consider the attitude of the decision makers.
The literature reveals substantial interest in dealing with the following crucial gaps:
  • The instability of the prediction method is a vital element [33,34];
  • The median squared difference between the observed and fitted responses must be minimized [35];
  • The efficacy of the algorithm must be verified, with the greatest robustness and accuracy in forecast results [12,36,37,38,39,40,41].
Based on the aforementioned considerations, the concrete contributions of this paper are the following:
  • We formulate a method to attack the drawbacks related to efficiency, instability, and error.
  • This study explores the significant application of the linear regression model under Dimensional Analysis.
  • The novelty of the current study also lies in considering the grade of importance of the decision makers or experts involved in solving the problem.
  • Finally, the proposal includes an application of DA to linear regression to deal with an inventory forecast problem.
The remainder of this paper is structured as follows: Section 2 presents basic concepts. The research methodology is described in Section 3. A numerical case study is presented in Section 4. Section 5 provides the discussions of findings. Finally, the conclusion is given in Section 6.

2. Basic Concepts

This section introduces the basic concepts used in this document.

2.1. Linear Regression

The linear regression (LR) method is used to approximate a dependent variable in terms of the values or changes of other variables studied in a linear form.
Definition 1.
Let $x_i$ and $y_i$ be two variables within a random and continuous distribution, and assume a numerical data set mapped by $(x_i, y_i)$ for $i = 1, 2, \ldots, n$, where $x_i \in U^n$ and $y_i \in U^n$. A reasonable form of relation between the response variable $Y$ and the regressor $X$ is the linear relationship. In this manner, the model can be represented as follows, where $\ddot{Y}_l$ of dimension $n$ is a mapping in the space $\mathbb{R}^n \to \mathbb{R}$, captured by a vector of analysis related to the metrics involved, and $\Upsilon_\omega \in \mathbb{R}^P$ is built from $\chi$ and $\Gamma$. The simple linear regression is depicted by means of Equation (1):

$$ \ddot{Y}_l = \chi + \Gamma X_l \quad (1) $$

where $\chi$ stands for the value of $\ddot{Y}_l$ when $X_l = 0$, also called the intercept, and $\Gamma$ is the change in $\ddot{Y}_l$ per unit change in $X_l$, called the slope of the line.

2.2. Dimensional Analysis

The mathematical expression of DA is depicted by Equation (2) as follows:

$$ DA = \sum_{i=1}^{n} \left( \frac{x_i}{S_i} \right)^{\tau_i} \quad (2) $$

where:
$x_i$ is the numerical argument, for $i = 1, 2, \ldots, n$;
$S_i$ is the best numerical datum taken from the data set under analysis;
$\tau_i$ is the grade of attitude or weight of the decision maker (expert) within a vector.
In this sense, the parameter $\tau_j$, for $j = 1, 2, \ldots, m$, satisfies $\tau \in [0, 1]$.
Definition 2.
The mean of $x_i$ can now be introduced based on Equation (3). Consider $x_i$, a numerical data set of $\mathbb{R}^n \to \mathbb{R}$ that maintains random conditions and a continuous distribution:

$$ \mathrm{mean}_{DA} = \sum_{i=1}^{n} \left( \frac{x_i}{S_i} \right)^{\tau_j} \quad (3) $$

The solution in terms of the original arguments is obtained by expanding the terms $\left( \frac{x_i}{S_i} \right)^{\tau_j}$:

$$ \mathrm{mean}_{DA} = \sum_{i=1}^{n} \left( \frac{x_i}{S_i} \right)^{\tau_j} = \left( \frac{x_1}{S_1} \right)^{\tau_1} + \left( \frac{x_2}{S_2} \right)^{\tau_2} + \cdots + \left( \frac{x_n}{S_n} \right)^{\tau_n}, \quad (4) $$

then:

$$ \mathrm{mean}_{DA} = \sum_{i=1}^{n} \left( \frac{x_i}{S_i} \right)^{\tau_j}. \quad (5) $$
Example 1.
Assume a set $a_1 = \{2, 3, 6, 7\}$ that receives a weight vector $\tau = \{0.25, 0.25, 0.25, 0.25\}$, and for which $S_i = 4.5$ has been determined. Then, applying Equation (5), we obtain the $\mathrm{mean}_{DA}$ result for this example:

$$ \mathrm{mean}_{DA} = \left( \frac{2}{4.5} \right)^{0.25} + \left( \frac{3}{4.5} \right)^{0.25} + \left( \frac{6}{4.5} \right)^{0.25} + \left( \frac{7}{4.5} \right)^{0.25} = 3.911. $$
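As a quick check, the following Python sketch reproduces Example 1; it assumes the summation form of Equation (5) (the aggregation of the weighted ratio terms), which yields the reported value of 3.911. The function name mean_da is illustrative.

    def mean_da(x, s, tau):
        # DA-based mean: aggregate the terms (x_i / S_i)^tau_i over the data set.
        return sum((xi / s) ** ti for xi, ti in zip(x, tau))

    a1 = [2, 3, 6, 7]                # numerical arguments
    tau = [0.25, 0.25, 0.25, 0.25]   # decision maker's weight vector
    s = 4.5                          # best (ideal) datum chosen for the set
    print(round(mean_da(a1, s, tau), 3))   # -> 3.911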

3. Main Results

In this section, we present the generalized linear regression method under the Dimensional Analysis environment.

3.1. Generalized Linear Regression Method under Dimensional Analysis Environment

This section presents a generalized linear regression method under the Dimensional Analysis environment, called GLR-DA:

$$ \ddot{Y} = \gamma + \beta_{DA}\, x. \quad (6) $$

Let $A$ be the mean of $x$, and let $x_s$ depict the ideal value of $x$ under dimensional analysis. Then:

$$ A = \sum_{i=1}^{n} \left( \frac{x_i}{x_s} \right)^{\tau_j}. \quad (7) $$

Similarly, $B$ represents the mean of $y$, and $y_s$ depicts the ideal value of $y$ under dimensional analysis. Then:

$$ B = \sum_{i=1}^{n} \left( \frac{y_i}{y_s} \right)^{\tau_j}. \quad (8) $$

Including the error term, the model becomes:

$$ \ddot{Y} = \gamma + \beta_{DA}\, x + D_i, \quad (9) $$

where $\ddot{Y}$ is the vector of responses; $\gamma$ is the intercept; $\beta_{DA}$ is the weight (slope) coefficient corresponding to $x$; $x$ is the full-rank matrix of non-random regressors; and $D_i$ is a normally distributed random error with expected value 0.
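For illustration, the sketch below evaluates the DA aggregates A and B of Equations (7) and (8) for a small data set; the ideal values x_s and y_s and the weight vector tau are analyst choices, and the specific values used here are assumptions made only for the example.

    def da_aggregate(values, ideal, tau):
        # Equations (7)/(8): aggregate the terms (value_i / ideal)^tau_j.
        return sum((v / ideal) ** t for v, t in zip(values, tau))

    x = [1, 2, 3, 4, 5, 6]
    y = [32, 40, 40, 45, 64, 71]
    tau = [1 / len(x)] * len(x)      # equal expert weights, an illustrative default

    A = da_aggregate(x, ideal=max(x), tau=tau)   # ideal value of x taken here as its maximum
    B = da_aggregate(y, ideal=max(y), tau=tau)   # ideal value of y taken here as its maximum
    print(A, B)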

3.2. Compute the Mean and Variance Estimators

The estimated values of the parameters $\gamma$ and $\beta_{DA}$ in the regression line (9) are found by the method of least squares, giving:

$$ \gamma = \frac{\sum_{z=1}^{Z} y_z - \beta_{DA} \sum_{z=1}^{Z} x_z}{n} = \bar{y} - \beta_{DA}\, \bar{x} \quad (10) $$

$$ \beta_{DA} = \frac{\sum_{z=1}^{Z} (x_z - A)(y_z - B)}{\sum_{z=1}^{Z} (x_z - A)^2} \quad (11) $$

In addition to the assumption that the error $D$ in the model is a random variable with mean $\lambda = 0$ and constant variance $\theta^2$, we also suppose that $D_1, D_2, \ldots, D_n$ are independent from one run of the experiment to another. Under these random conditions, the mean $\lambda$ presented in Equation (12) and the variance $\theta^2$ in Equation (14) can be obtained as follows. Figure 1 depicts the random conditions for the error $D$.
Thus, the mathematical expectation is:

$$ E(\lambda\, \lambda_x) = E(\lambda)\, E(\lambda_x) = 0 \quad (12) $$

The covariance $\hat{S}_{xy}$ is depicted as follows:

$$ \hat{S}_{xy} = \frac{1}{Z} \sum_{z=1}^{Z} (x_z - A)(y_z - B) \quad (13) $$

where:

$$ A = \frac{1}{Z} \sum_{z=1}^{Z} x_z, \quad \text{and} \quad B = \frac{1}{Z} \sum_{z=1}^{Z} y_z. $$

To determine the related bias information, it is imperative to obtain the variance. In this sense, it is possible to carry out a modelization of the error, called $\xi$:

$$ E(S^2) = \theta^2 = \frac{1}{n-2} \sum_{z=1}^{Z} (x_z - A)(y_z - B) \quad (14) $$
Thus, the difference between the observations $x_i$ and the estimated value $A = \frac{1}{Z} \sum_{z=1}^{Z} x_z$ is given as:

$$ \xi_t = \sum_{z=1}^{Z} (x_z - A)^2, \quad (15) $$

the difference between the observations $y_i$ and the estimated value $B = \frac{1}{Z} \sum_{z=1}^{Z} y_z$ is given as:

$$ \xi_s = \sum_{z=1}^{Z} (y_z - B)^2, \quad (16) $$

and the aggregation error of $x_i$ and $y_i$, respectively, converges in Equation (17):

$$ \hat{S} = \sum_{z=1}^{Z} (x_z - A)(y_z - B). \quad (17) $$

In addition, the sum of the squares of the errors can be presented as follows:

$$ SSE = \sum_{z=1}^{Z} \xi^2 = \sum_{z=1}^{Z} \left( y_z - \gamma - \beta_{DA} x_z \right)^2 \quad (18) $$
Then, assuming that $\beta_{DA} = \hat{S} / \xi_t$, the solution of Equation (18), using Equations (14)–(16), can be obtained as follows:

$$ SSE = \sum_{z=1}^{Z} \left[ (y_z - B) - \beta_{DA}(x_z - A) \right]^2 $$
$$ = \sum_{z=1}^{Z} (y_z - B)^2 - 2 \beta_{DA} \sum_{z=1}^{Z} (y_z - B)(x_z - A) + \beta_{DA}^2 \sum_{z=1}^{Z} (x_z - A)^2 $$
$$ = \xi_s - 2 \beta_{DA} \hat{S} + \beta_{DA}^2 \xi_t $$
$$ = \xi_s - 2 \frac{\hat{S}}{\xi_t} \hat{S} + \left( \frac{\hat{S}}{\xi_t} \right)^2 \xi_t $$
$$ = \xi_s - 2 \frac{\hat{S}^2}{\xi_t} + \frac{\hat{S}^2}{\xi_t} $$

Finally:

$$ SSE = \xi_s - \frac{\hat{S}^2}{\xi_t} = \xi_s - \frac{\hat{S}}{\xi_t} \hat{S}. $$

Then, the sum of the squares of the errors (SSE) is depicted as follows:

$$ SSE = \xi_s - \beta_{DA} \hat{S} \quad (19) $$

Additionally, an unbiased estimator of the variance, the mean square error (MSE), is:

$$ S^2 = \frac{\xi_s - \beta_{DA} \hat{S}}{n-2} = \frac{SSE}{n-2} = MSE \quad (20) $$
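To make the estimation procedure concrete, the sketch below applies Equations (10), (11), (19), and (20), with A and B taken as the ordinary sample means of Equation (13), to the x, y data of Table 1. Under that assumption it recovers a line with gamma ≈ 19.1 and beta ≈ 8.77, which coincides with the "Case M" (conventional regression) column of Table 3; reproducing the GLR-DA predictions additionally requires the DA aggregates of Equations (7) and (8), whose ideal values and weights are choices of the analyst.

    x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
    y = [32, 40, 40, 45, 64, 71, 89, 94, 96, 96, 121, 125]
    n = len(x)

    A = sum(x) / n                                             # Equation (13), mean of x
    B = sum(y) / n                                             # Equation (13), mean of y
    S_hat = sum((xz - A) * (yz - B) for xz, yz in zip(x, y))   # Equation (17)
    xi_t = sum((xz - A) ** 2 for xz in x)                      # Equation (15)
    xi_s = sum((yz - B) ** 2 for yz in y)                      # Equation (16)

    beta = S_hat / xi_t              # Equation (11), slope
    gamma = B - beta * A             # Equation (10), intercept
    sse = xi_s - beta * S_hat        # Equation (19)
    mse = sse / (n - 2)              # Equation (20)
    print(gamma, beta, sse, mse)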

4. Numerical Example

In this section, two numerical cases are presented to demonstrate our proposal.

4.1. Numerical Example from a Real Case Study

A real case is considered in which a manufacturing company needs to establish a forecast related to inventory handling. The company faces problems with the forecast accuracy of raw material, so the data set used in this study corresponds to an inventory prediction problem, and the proposed GLR-DA is applied to estimate the inventory conditions. The data considered are depicted in Table 1.
The approach requires a pairwise comparison of the variables x and y according to the model of Equation (5). Table 2 depicts the results obtained for each parameter, A and B.
The results obtained by applying our proposed method are depicted in Table 3.
Hence, the MSE is obtained using Equation (20):

$$ MSE = \frac{644.97}{10} = 64.44 $$
The Pearson correlation [42] is presented in Table 4, where a strong correlation between the information for each variable is observed.
Finally, it is important to note that the correlation coefficient is around 98.27%, a strong value confirming that the forecast obtained with our proposal is proficient and robust.
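As a cross-check on Table 4, the short sketch below computes the Pearson correlation between the observed values of Table 1 and the period index; because both the GLR-DA and the conventional predictions are linear in x, their correlation with the real data equals corr(x, y) ≈ 0.98275, in line with the 98.27% reported above. The use of numpy here is an implementation choice.

    import numpy as np

    x = np.arange(1, 13)                                              # periods from Table 1
    y = np.array([32, 40, 40, 45, 64, 71, 89, 94, 96, 96, 121, 125])  # observed inventory values

    # Any prediction of the form gamma + beta * x has the same Pearson
    # correlation with y as x itself does.
    r = np.corrcoef(x, y)[0, 1]
    print(round(r, 5))   # -> 0.98275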

4.2. Numerical Example 2

The data for this experiment were taken from [43] and correspond to a sales estimation study. The data are presented in Table 5; for convenience, x represents the weeks and y describes the sales.
According to the results presented in Table 6, it can be seen that our proposal has the potential to handle sales estimation problems.
In addition, the MSE is obtained using Equation (20):

$$ MSE = \frac{176.40}{10} = 17.64 $$

5. Validations

In this section, statistical tests such as the normal probability test, correlation, means, standard deviations, confidence intervals, and Tukey's test are used to evaluate the proposed methodology, as follows.
First, the normal probability test is performed to appraise the data distribution. It can be observed in Figure 2 that the normal test results are consistent.
Next, a cross-validation of the correlation of the information is presented in Figure 3. The chart shows the correlation of the data analyzed with the proposed GLR-DA and with the conventional linear regression method.
In addition, the box plot is presented in Figure 4. The data analyzed show small variations with respect to their means.
A statistical summary report is depicted in Table 7. It can be considered that the means and standard deviations are similar.
Then, a confidence interval analysis was carried out to validate the information, and the results are depicted in Table 8. It can be observed that no significant difference exists between the means analyzed.
In addition, Figure 5 shows the confidence intervals, with the tendency remaining within the limits.
Hence, the information was grouped using Tukey's method [44]. The test was conducted at a 95% confidence level, and the results are presented in Table 9. It can be seen that the means are not significantly different.
Tukey simultaneous tests for differences of means were also performed, using an individual confidence level of 98.04%. According to the comparison of treatment means by Tukey's multiple comparison method, the findings confirm the consistency of our proposal, as shown in Table 10.
From the information presented in Figure 6, it can be observed that the means have similar performance. In this manner, we can determine that they yield the same results.
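For readers who wish to repeat the grouping analysis, a minimal sketch of Tukey's pairwise comparison is given below using statsmodels; the library choice, and the use of the real data and the two prediction columns of Table 3 as the three treatments, are assumptions about how the test of Tables 9 and 10 can be reproduced, not a record of the authors' exact software.

    import numpy as np
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    real = [32, 40, 40, 45, 64, 71, 89, 94, 96, 96, 121, 125]
    glr_da = [20.50, 30.61, 40.71, 50.81, 60.92, 71.02,
              81.12, 91.23, 101.33, 111.43, 121.54, 131.64]
    conventional = [27.9, 36.6, 45.4, 54.2, 62.9, 71.7,
                    80.5, 89.2, 98.0, 106.8, 115.5, 124.3]

    values = np.concatenate([real, glr_da, conventional])
    groups = ["Real"] * 12 + ["GLR-DA"] * 12 + ["Conventional"] * 12

    # Tukey's HSD at a 95% family-wise confidence level (cf. Tables 9 and 10).
    print(pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05))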

6. Discussion and Conclusions

Linear regression using dimensional analysis is an alternative way to obtain forecasts. The findings show that the proposed methodology can potentially handle regression situations. Likewise, we carried out different validations to confirm the consistency and stability of the results. In particular, the comparison of means using Tukey's test makes a significant contribution to confirming the proficient results of our proposal. According to the different statistical tests performed, we can confirm the effectiveness of our proposal, and the findings are reproducible. Therefore, based on these results, we confirm that our proposal is capable of dealing with efficiency, instability, and minimal error in the inventory forecast setting.
The literature reviewed indicates that linear regression continues to be an important topic of investigation for academics. Nowadays, advances in technology and growing complexity demand accurate forecasts, while companies face the challenge of handling information in a sophisticated manner. Future research can be directed at implementing fuzzy extensions and a software environment. In addition, it would be interesting to perform comparisons in other fields, for example, health, economics, and agriculture.

Author Contributions

Conceptualization, L.P.-D. and D.L.-C.; methodology, L.P.-D. and H.G.; validation, L.P.-D., D.L.-C. and J.L.G.A.; formal analysis, L.P.-D., D.L.-C. and H.G.; investigation, L.P.-D., D.L.-C. and H.G.; resources, L.P.-D., D.L.-C. and J.L.G.A.; data curation, L.P.-D., D.L.-C. and J.L.G.A.; writing—original draft preparation, L.P.-D., D.L.-C. and H.G.; writing—review and editing, L.P.-D., D.L.-C. and J.L.G.A.; visualization, L.P.-D., D.L.-C. and J.L.G.A.; supervision, L.P.-D., D.L.-C. and J.L.G.A.; funding acquisition, L.P.-D., D.L.-C. and H.G. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by National Council of Science and Technology (CONACYT).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors would like to thank the constructive comments of the reviewers which improved the quality of the paper. The authors are very grateful to the Universidad Autónoma de Ciudad Juárez. The author (Harish Garg) is grateful to DST-FIST grant SR/FST/MS-1/2017/13 for providing technical support.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DA      Dimensional Analysis
LR      Linear regression
SSE     Sum of the squares of the errors
GLR-DA  Generalized linear regression method under the Dimensional Analysis environment
MSE     Mean square error

References

  1. Hothorn, T.; Bretz, F.; Westfall, P. Simultaneous inference in general parametric models. Biom J. J. Math. Methods Biosci. 2008, 50, 346–363. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Liu, M.; Hu, S.; Ge, Y.; Heuvelink, G.B.; Ren, Z.; Huang, X. Using multiple linear regression and random forests to identify spatial poverty determinants in rural China. Spat. Stat. 2020, 42, 100461. [Google Scholar] [CrossRef]
  3. Cook, J.R.; Stefanski, L.A. Simulation-extrapolation estimation in parametric measurement error models. J. Am. Stat. Assoc. 1994, 89, 1314–1328. [Google Scholar] [CrossRef]
  4. Park, J.Y.; Phillips, P.C. Statistical inference in regressions with integrated processes: Part 2. Econom. Theory 1989, 5, 95–131. [Google Scholar] [CrossRef] [Green Version]
  5. Johnston, F.; Boylan, J.E.; Shale, E.A. An examination of the size of orders from customers, their characterisation and the implications for inventory control of slow moving items. J. Oper. Res. Soc. 2003, 54, 833–837. [Google Scholar] [CrossRef]
  6. Babai, M.; Chen, H.; Syntetos, A.; Lengu, D. A compound-Poisson Bayesian approach for spare parts inventory forecasting. Int. J. Prod. Econ. 2021, 232, 107954. [Google Scholar] [CrossRef]
  7. Mansur, A.; Kuncoro, T. Product inventory predictions at small medium enterprise using market basket analysis approach-neural networks. Procedia Econ. Financ. 2012, 4, 312–320. [Google Scholar] [CrossRef] [Green Version]
  8. Dekker, R.; Bloemhof, J.; Mallidis, I. Operations Research for green logistics–An overview of aspects, issues, contributions and challenges. Eur. J. Oper. Res. 2012, 219, 671–679. [Google Scholar] [CrossRef] [Green Version]
  9. Bakker, M.; Riezebos, J.; Teunter, R.H. Review of inventory systems with deterioration since 2001. Eur. J. Oper. Res. 2012, 221, 275–284. [Google Scholar] [CrossRef]
  10. Saha, E.; Ray, P.K. Modelling and analysis of inventory management systems in healthcare: A review and reflections. Comput. Ind. Eng. 2019, 137, 106051. [Google Scholar] [CrossRef]
  11. van Steenbergen, R.; Mes, M. Forecasting demand profiles of new products. Decis. Support Syst. 2020, 139, 113401. [Google Scholar] [CrossRef]
  12. Kourentzes, N.; Trapero, J.R.; Barrow, D.K. Optimising forecasting models for inventory planning. Int. J. Prod. Econ. 2020, 225, 107597. [Google Scholar] [CrossRef] [Green Version]
  13. Wilson, B.T.; Knight, J.F.; McRoberts, R.E. Harmonic regression of Landsat time series for modeling attributes from national forest inventory data. ISPRS J. Photogramm. Remote Sens. 2018, 137, 29–46. [Google Scholar] [CrossRef]
  14. Seifbarghy, M.; Amiri, M.; Heydari, M. Linear and nonlinear estimation of the cost function of a two-echelon inventory system. Sci. Iran. 2013, 20, 801–810. [Google Scholar]
  15. Ryu, S.; Noh, J.; Kim, H. Deep neural network based demand side short term load forecasting. Energies 2017, 10, 3. [Google Scholar] [CrossRef]
  16. Dalla Corte, A.P.; Souza, D.V.; Rex, F.E.; Sanquetta, C.R.; Mohan, M.; Silva, C.A.; Zambrano, A.M.A.; Prata, G.; de Almeida, D.R.A.; Trautenmüller, J.W.; et al. Forest inventory with high-density UAV-Lidar: Machine learning approaches for predicting individual tree attributes. Comput. Electron. Agric. 2020, 179, 105815. [Google Scholar] [CrossRef]
  17. Junttila, V.; Laine, M. Bayesian principal component regression model with spatial effects for forest inventory variables under small field sample size. Remote Sens. Environ. 2017, 192, 45–57. [Google Scholar] [CrossRef]
  18. Ulrich, M.; Jahnke, H.; Langrock, R.; Pesch, R.; Senge, R. Distributional regression for demand forecasting in e-grocery. Eur. J. Oper. Res. 2021, 294, 831–842. [Google Scholar] [CrossRef] [Green Version]
  19. Georgi, H. Generalized dimensional analysis. Phys. Lett. 1993, 298, 187–189. [Google Scholar] [CrossRef] [Green Version]
  20. Butterfield, R. Dimensional analysis for geotechnical engineers. Geotechnique 1999, 49, 357–366. [Google Scholar] [CrossRef]
  21. Cheng, Y.T.; Cheng, C.M. Scaling, dimensional analysis, and indentation measurements. Mater. Sci. Eng. R Rep. 2004, 44, 91–149. [Google Scholar] [CrossRef]
  22. Bellamine, F.; Elkamel, A. Model order reduction using neural network principal component analysis and generalized dimensional analysis. Eng. Comput. 2008, 25, 443–463. [Google Scholar] [CrossRef]
  23. Moran, M.; Marshek, K. Some matrix aspects of generalized dimensional analysis. J. Eng. Math. 1972, 6, 291–303. [Google Scholar] [CrossRef]
  24. Longo, S. Principles and Applications of Dimensional Analysis and Similarity; Springer Nature Switzerland AG: Cham, Switzerland, 2022. [Google Scholar]
  25. Szava, I.R.; Sova, D.; Peter, D.; Elesztos, P.; Szava, I.; Vlase, S. Experimental Validation of Model Heat Transfer in Rectangular Hole Beams Using Modern Dimensional Analysis. Mathematics 2022, 10, 409. [Google Scholar] [CrossRef]
  26. Szirtes, T. Applied Dimensional Analysis and Modeling; Butterworth-Heinemann: Oxford, UK, 2007. [Google Scholar]
  27. Shen, W.; Lin, D.K. Statistical theories for dimensional analysis. Stat. Sin. 2019, 29, 527–550. [Google Scholar] [CrossRef]
  28. Shen, W.; Davis, T.; Lin, D.K.; Nachtsheim, C.J. Dimensional analysis and its applications in statistics. J. Qual. Technol. 2014, 46, 185–198. [Google Scholar] [CrossRef]
  29. Albrecht, M.C.; Nachtsheim, C.J.; Albrecht, T.A.; Cook, R.D. Experimental design for engineering dimensional analysis. Technometrics 2013, 55, 257–270. [Google Scholar] [CrossRef]
  30. Bridgman, P.W. Dimensional Analysis; Yale University Press: New Haven, CT, USA, 1922. [Google Scholar]
  31. Gibbings, J.C. Dimensional Analysis; Springer Nature & Business Media: New York, NY, USA, 2011. [Google Scholar]
  32. Dovi, V.; Reverberi, A.; Maga, L.; De Marchi, G. Improving the statistical accuracy of dimensional analysis correlations for precise coefficient estimation and optimal design of experiments. Int. Commun. Heat Mass Transf. 1991, 18, 581–590. [Google Scholar] [CrossRef]
  33. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef] [Green Version]
  34. Kohli, S.; Godwin, G.T.; Urolagin, S. Sales Prediction Using Linear and KNN Regression. In Advances in Machine Learning and Computational Intelligence; Springer: Singapore, 2021; pp. 321–329. [Google Scholar]
  35. Gentleman, J.F. New developments in statistical computing. Am. Stat. 1986, 40, 228–237. [Google Scholar] [CrossRef]
  36. Liang, K.Y.; Zeger, S.L. Longitudinal data analysis using generalized linear models. Biometrika 1986, 73, 13–22. [Google Scholar] [CrossRef]
  37. Prion, S.K.; Haerling, K.A. Making Sense of Methods and Measurements: Simple Linear Regression. Clin. Simul. Nurs. 2020, 48, 94–95. [Google Scholar] [CrossRef]
  38. Kuhn, M.; Johnson, K. Applied Predictive Modeling; Springer: New York, NY, USA, 2013; Volume 26. [Google Scholar]
  39. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 31, 210–227. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Cheng, J.; Ai, M. Optimal designs for panel data linear regressions. Stat. Probab. Lett. 2020, 163, 108769. [Google Scholar] [CrossRef]
  41. Huseyin Tunc, B.G. A column generation based heuristic algorithm for piecewise linear regression. Expert Syst. Appl. 2021, 171, 114539. [Google Scholar] [CrossRef]
  42. Fieller, E.C.; Hartley, H.O.; Pearson, E.S. Tests for rank correlation coefficients. I. Biometrika 1957, 44, 470–481. [Google Scholar] [CrossRef]
  43. Conrad, S. Sales data and the estimation of demand. J. Oper. Res. Soc. 1976, 27, 123–127. [Google Scholar] [CrossRef]
  44. Tukey, J.W. Comparing individual means in the analysis of variance. Biometrics 1949, 5, 99–114. [Google Scholar] [CrossRef]
Figure 1. Normal Distribution.
Figure 2. Normal Test.
Figure 3. Correlation Chart about the comparisons.
Figure 4. Box Plot Chart.
Figure 5. Confidence interval.
Figure 6. Tukey simultaneous 95% confidence intervals.
Table 1. Numerical data.
x    y      x    y
1    32     7    89
2    40     8    94
3    40     9    96
4    45     10   96
5    64     11   121
6    71     12   125
Table 2. Estimation of the parameters A and B.
A       B
0.11    5.63142487
0.14    4.45475331
0.19    3.63920858
0.21    3.34356494
0.26    3.56587087
0.22    3.42892914
0.19    3.55394256
0.35    3.4168655
0.35    3.25595148
0.41    3.0892899
0.43    3.30630209
0.46    3.21766878
mean 3.32    mean 43.903772
Table 3. Details and error study.
Real      Prediction   Case M   ξ_s     |ξ_s|   ξ_s²
32.00     20.50        27.9     11.5    11.5    132.2
40.00     30.61        36.6     9.4     9.4     88.2
40.00     40.71        45.4     −0.7    0.7     0.5
45.00     50.81        54.2     −5.8    5.8     33.8
64.00     60.92        62.9     3.1     3.1     9.5
71.00     71.02        71.7     −0.0    0.0     0.0
89.00     81.12        80.5     7.9     7.9     62.0
94.00     91.23        89.2     2.8     2.8     7.7
96.00     101.33       98.0     −5.3    5.3     28.4
96.00     111.43       106.8    −15.4   15.4    238.2
121.00    121.54       115.5    −0.5    0.5     0.3
125.00    131.64       124.3    −6.6    6.6     44.1
Total                           0.14    69.11   644.97
Average                         0.01    5.76    53.75
Table 4. Correlation matrix.
                          Real Data    Prediction with GLR-DA    Conventional Regression
Real Data                 1.0000000    0.9827522                 0.9827528
Prediction with GLR-DA    0.9827522    1.0000000                 1.0000000
Conventional regression   0.9827528    1.0000000                 1.0000000
Table 5. Sales data by weeks.
x     y
1     9
2     8
3     10
4     10
5     10
6     8
7     7
8     10
9     8
10    10
11    10
12    9
13    9
Table 6. Details of the estimations.
Real     Prediction   ξ_s     |ξ_s|   ξ_s²
9.00     2.34         6.7     6.7     44.4
8.00     3.22         4.8     4.8     22.9
10.00    4.10         5.9     5.9     34.8
10.00    4.98         5.0     5.0     25.2
10.00    5.86         4.1     4.1     17.1
8.00     6.74         1.3     1.3     1.6
7.00     7.62         −0.6    0.6     0.4
10.00    8.50         1.5     1.5     2.2
8.00     9.38         −1.4    1.4     1.9
10.00    10.27        −0.3    0.3     0.1
10.00    11.15        −1.1    1.1     1.3
9.00     12.03        −3.0    3.0     9.2
9.00     12.91        −3.9    3.9     15.3
Total                 18.90   39.62   176.40
Average               1.90    2.98    13.43
Table 7. Summary analysis.
Factors                   Adj. Total Mean   Adj. Total StDev   Item-Adj Corr.   Cronbach's Alpha
Real Data                 152.16            68.03              0.9828           0.995
Prediction with GLR-DA    152.17            63.49              0.9956           0.9912
Conventional regression   152.16            68.29              0.9962           0.9874
Table 8. Confidence Interval test.
Factor                    N    Mean     StDev    95% CI
Real Data                 12   76.08    32.16    (56.43, 95.74)
Prediction with GLR-DA    12   76.1     36.4     (56.4, 95.7)
Conventional regression   12   76.08    31.61    (56.43, 95.74)
Table 9. Tukey Pairwise Comparisons.
Factor                    N    Mean     Grouping
Conventional regression   12   76.08    A
Real Data                 12   76.08    A
Prediction with GLR-DA    12   76.1     A
Table 10. Tukey Simultaneous Tests.
Difference of Levels        Difference of Means   SE of Difference   95% CI            T-Value   Adjusted p-Value
Prediction − Real Data      0                     13.7               (−33.5, 33.5)     0         1
Conventional − Real Data    0                     13.7               (−33.5, 33.5)     0         1
Conventional − Prediction   0                     13.7               (−33.5, 33.5)     0         1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
