Article

A Framework for Prediction of Household Energy Consumption Using Feed Forward Back Propagation Neural Network

1 Department of Computer Engineering, Jeju National University, Jeju City 63243, Korea
2 College of Computer Science, King Khalid University, Abha 61421, Saudi Arabia
3 Department of Mathematics, Kohat University of Science & Technology (KUST), Kohat 26000, Pakistan
4 Department of Computer Engineering, University of Kuala Lumpur (UniKl-MIIT), Kuala Lumpur 50250, Malaysia
* Author to whom correspondence should be addressed.
Submission received: 25 January 2019 / Revised: 20 March 2019 / Accepted: 26 March 2019 / Published: 1 April 2019

Abstract

Energy is a costly and scarce resource, and demand for it is increasing daily. Globally, residential buildings account for a significant share of total energy consumption, i.e., 30–40%. An effective energy prediction system is therefore highly desirable for efficient energy production and utilization. In this paper, we propose a methodology to predict short-term energy consumption in a residential building. The proposed methodology consists of four layers, namely data acquisition, preprocessing, prediction, and performance evaluation. For the experimental analysis, real data collected from four multi-storey buildings situated in Seoul, South Korea, have been used. The collected data are provided as input to the data acquisition layer. In the preprocessing layer, several data cleaning and preprocessing schemes are applied to the input data to remove abnormalities. Preprocessing further consists of two processes, namely the computation of statistical moments (mean, variance, skewness, and kurtosis) and data normalization. In the prediction layer, the feed forward back propagation neural network (FFBPNN) is applied to the normalized data and to the data with statistical moments. In the performance evaluation layer, the mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean squared error (RMSE) are used to measure the performance of the proposed approach. The average MAE, MAPE, and RMSE values for data with statistical moments are 4.3266, 11.9617, and 5.4625, respectively. These values are lower than those obtained for simple data and normalized data, which indicates that the FFBPNN performs better on data with statistical moments than on simple or normalized data.

1. Introduction

Technological advancement in the Internet of Things (IoT) has opened the doors to smart buildings, which are gaining popularity throughout the world [1]. Smart buildings allow electronic devices to be operated remotely using mobile apps and to respond to certain situations, such as the presence of an unknown person in the home or visitors arriving in the owner's absence [2]. The devices in smart buildings consume more energy than those in ordinary buildings due to the installation of sensors and other equipment [3]. Energy production requires substantial financial resources, so energy should be produced according to the requirements of the building or its customers. To reduce energy wastage, prediction of future energy consumption is required. For energy consumption prediction, different approaches have been adopted using machine learning techniques, prediction algorithms, and neural network-based methods [4]. Energy consumption prediction is also considered the first step toward the optimization of energy consumption in smart buildings. Energy prediction is widely used in the power sector to predict the demand for the next day, week, month, or year so that the desired amount of energy can be produced. The same concept is now being applied in smart buildings to save energy costs. Researchers are trying to predict the energy consumption of the next hour, day, and month in smart homes so that energy can be saved and users know well in advance about future consumption [5,6,7]. The residential sector consumes more energy than the commercial sector; hence, a system is needed to save as much energy as possible and overcome the issue of higher energy demand in the future [8]. More energy saving can be achieved with real-time monitoring of energy consumption in smart homes [9]. The structure of a building, the materials used in its construction, the ecological conditions of the surroundings, the amount of direct sunlight on the building, the behavior of the residents, the types of equipment used by the residents, and so forth directly influence energy consumption [8]. In most buildings, a significant portion of energy consumption is for heating and cooling; in hot areas, the energy consumption of air conditioners is higher.
Techniques used by authors have mostly predicted the energy consumption of individual appliances, total hourly consumption, and then the daily consumption of a complete smart home. Such predictions are helpful for the energy management systems of smart homes [10]. Precise prediction of energy consumption is difficult due to different factors such as weather, geographical area, occupancy level, and so forth [11]. It has been observed that neural network-based approaches have performed better than other techniques due to their strong learning capability. Deep learning methodologies also provide more accurate results than conventional neural networks.
The objective of this paper is to predict the energy consumption in residential buildings using a feed forward back propagation neural network (FFBPNN) on original data, data with statistical moments, and normalized data. The data used in this paper are real data collected from a residential building located in Seoul, Republic of Korea. We have selected the FFBPNN for energy consumption prediction because it is a common and effective prediction method [5,6,7]. Data with statistical moments have been used because they improve the performance of an artificial neural network, and normalization has been applied to reduce complexity and improve the prediction accuracy of the FFBPNN. The abbreviations used in this work are described in Appendix A, Table A1 for the ease of readers.
The rest of the paper is organized as follows: the related work is discussed in detail in Section 2, the proposed methodology is described in Section 3, the implementation, results, and discussion are presented in Section 4, and the paper is concluded in Section 5.

2. Related Work

The literature regarding energy consumption prediction has been reviewed and reported here. The approaches are mostly based on prediction algorithms, machine learning techniques, and neural network-based techniques.
Wahid et al. [5] proposed an hourly energy consumption prediction method using a multi-layer perceptron. The performance of the method was measured using the mean absolute error (MAE), mean squared error (MSE), and root mean squared error (RMSE). The purpose of hourly energy consumption prediction was to help energy suppliers produce energy according to need. Outliers and missing values were handled with a window-based averaging mechanism. The mean, variance, skewness, and kurtosis were extracted from the hourly data for the energy prediction.
Wahid et al. [6] proposed a daily energy consumption prediction method using machine learning techniques. A multi-layer perceptron, random forest, and logistic regression were used for the energy consumption prediction. The prediction accuracy of the three classifiers was compared, and the results of logistic regression were better than those of the other two methods.
Fayaz and Kim [7] used a deep extreme learning machine for the prediction of energy consumption in smart homes. They also used an adaptive neuro-fuzzy inference system and an artificial neural network for comparison. The performance of the deep extreme learning machine was better than that of the other two algorithms. They used a trial-and-error method for the selection of hidden layers and activation functions.
Li et al. [11] used an optimized artificial neural network (ANN) for hourly energy consumption prediction in smart buildings. The ANN structure, threshold values, and weights were adjusted using an improved particle swarm optimization (iPSO) algorithm.
Biswas et al. [12] addressed the nonlinearity of building energy data using Levenberg–Marquardt and feedforward neural network approaches. The model was tested on data from the TxAIRE research house and showed promising results compared with previous models in terms of energy consumption prediction.
Hu [13] proposed a neural network-based grey forecasting method for energy consumption optimization in buildings. The traditional grey model GM(1,1) employs the least squares method to obtain its values, but it has issues related to the coefficient and control variables. The author resolved this dependency problem of the coefficient and control variables using the proposed novel neural network-based grey model NNGM(1,1) for accurate energy consumption prediction.
Basu et al. [14] proposed an energy management system for the smart building environment using a three-layered architecture, i.e., an anticipative layer, a reactive layer, and a local layer. The anticipative layer was used for the prediction of power consumption, price, and weather. The local layer was used to control the status of appliances for the user comfort index, and the reactive layer was used by the supplier to provide the exact amount of power to the appliances.
Li et al. [15] used a deep extreme learning approach for accurate energy consumption prediction in a smart building. Feature extraction was carried out using stacked autoencoders (SAEs), and the energy consumption was predicted using an extreme learning machine (ELM). The results of the model were compared with a back propagation neural network (BPNN), support vector regression (SVR), a generalized radial basis function neural network (GRBFNN), and multiple linear regression (MLR).
Bâra and Oprea [16] proposed a neural network-based method for energy consumption prediction. The technique focused on householders' profiles so that energy suppliers could understand consumer behavior. Rahman et al. [17] proposed a recurrent neural network model for medium- and short-term energy consumption prediction; in terms of aggregated energy consumption prediction, the results of a multi-layered perceptron model were better than those of the proposed model. Kuo and Huang [18] proposed a deep neural network algorithm for short-term load forecasting, which achieved better results than other available short-term load forecasting algorithms.
Despite the use of artificial neural networks, further improvement in prediction accuracy is possible. The proposed methodology tackles this issue to further improve neural network-based models.

3. Proposed Methodology

The proposed methodology for energy consumption prediction consists of four layers, namely the data acquisition layer, the pre-processing layer, the prediction layer, and the performance evaluation layer. The proposed methodology is shown in Figure 1 and explained in detail in the following subsections.

3.1. Data Layer

The building energy consumption data for this work are acquired in the data acquisition layer. For data collection, sensors have been used to capture contextual information such as environmental circumstances, temperature, humidity, user occupancy, and so forth. For user occupancy detection, several passive infra-red (PIR) sensors can be utilized to obtain information in binary (0/1) form, i.e., occupied or not occupied. The data were collected from the designated residential buildings in Seoul, Republic of Korea, from January 2010 to December 2010. Floor-wise information is available in this collection from smart meters, which were installed on each floor of the chosen buildings. A conceptual view of hourly energy consumption data for two days is shown in Figure 2. As residential buildings are very busy during noon and night times, the energy consumption is higher in these periods, which also shows a direct correlation between energy consumption and occupancy in the dataset.
The data have been collected from a residential building having 33 floors (394 feet tall) in Seoul. The building structure has been built using reinforced concrete, the same as other buildings in Korea. The main source of energy in the building is the electricity provided by the nearest grid station. The electricity in the building is mostly used for cooking, heating, washing, entertainment, drying, lighting, and so forth. For floor heating, a separate gas-based system is installed in the building. The collected data only consider the electricity consumption per floor of the designated building. Figure 3 illustrates the data collection process in the residential building.

3.2. Pre-Processing Layer

In this layer, the data have been preprocessed in order to make them smooth for further processing. Different smoothing filters can be used for this purpose, such as moving average, loess, lowess, Rloess, Rlowess, Savitzky–Golay, and so forth. In this study, we have used the moving average method, a widely used data smoothing filter [7]. Equation (1) is the mathematical representation of the moving average filter.
y_i = \frac{1}{M} \sum_{j=0}^{M-1} x_{i+j}   (1)
In this equation, x[] is the input series, y[] is the output series, and M is the number of points used in the moving average. In the pre-processing layer, we first calculated the statistical moments and concatenated them with the original data. The dataset comprises four parameters as inputs, namely hours of the day (P1), days of the week (P2), weeks of the month (P3), and month (P4). The statistical moments, namely mean, variance, skewness, and kurtosis [7], can be calculated using Equations (2)–(5).
\mu = \frac{1}{n} \sum_{i=1}^{n} x_i   (2)
\sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2   (3)
S = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{x_i - \mu}{\sigma} \right)^3   (4)
K = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{x_i - \mu}{\sigma} \right)^4   (5)
where \mu, \sigma^2, S, and K represent the mean, variance, skewness, and kurtosis, respectively, and \sigma is the standard deviation. P_i represents the values of hours of the day (P1), days of the week (P2), weeks of the month (P3), and month (P4), where i = 1, 2, 3, and 4. For trial and test purposes, we have also normalized the data using Equation (6).
x_{\mathrm{new}} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}   (6)
where x_{\mathrm{new}} represents the normalized output value, x indicates the current value, x_{\min} represents the minimum value in the set, and x_{\max} indicates the maximum value [19].
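To make the pre-processing concrete, the following sketch (illustrative Python with NumPy, not the authors' MATLAB code; the window length and sample values are assumptions) applies the moving average filter of Equation (1), computes the four statistical moments of Equations (2)–(5), and performs the min-max normalization of Equation (6) on an hourly consumption series.

```python
import numpy as np

def moving_average(x, M=3):
    """Smooth a series with an M-point moving average, Eq. (1)."""
    x = np.asarray(x, dtype=float)
    return np.convolve(x, np.ones(M) / M, mode="valid")

def statistical_moments(x):
    """Mean, variance, skewness, and kurtosis, Eqs. (2)-(5)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    var = ((x - mu) ** 2).mean()
    sigma = np.sqrt(var)
    skew = (((x - mu) / sigma) ** 3).mean()
    kurt = (((x - mu) / sigma) ** 4).mean()
    return mu, var, skew, kurt

def min_max_normalize(x):
    """Scale values into [0, 1], Eq. (6)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# Example: one day of hourly consumption (kWh); values are assumed for illustration.
hourly = np.array([2.1, 1.9, 1.8, 1.7, 1.9, 2.4, 3.1, 3.5, 3.0, 2.6,
                   2.8, 3.3, 3.4, 3.0, 2.7, 2.9, 3.6, 4.2, 4.5, 4.1,
                   3.8, 3.2, 2.7, 2.3])
smoothed = moving_average(hourly, M=3)
mu, var, skew, kurt = statistical_moments(smoothed)
normalized = min_max_normalize(smoothed)
print(mu, var, skew, kurt)
```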

3.3. Prediction Layer

The FFBPNN is one of the most widely used ANN models for regression. Nowadays, researchers apply ANNs to analyze different types of regression problems in various situations. The ANN model applied in the proposed work is the FFBPNN, as shown in Figure 4 and Figure 5 for the original and normalized data and for the data with statistical moments, respectively. The inputs to the neural network are the simple data, the normalized data, and the data with statistical moments. For the simple and normalized data, four parameters are used as inputs, namely hour of the day (P1), days of the week (P2), weeks of the month (P3), and month (P4); for the data with statistical moments, the arithmetic mean of the data (P5), standard deviation of the data (P6), skewness of the data (P7), and kurtosis (P8) are added, as shown in the following figures. The ANN model combined with the error back-propagation algorithm (FFBPNN) is a very popular artificial neural network model for estimation and prediction [20]. It usually has three layers, namely the input layer, hidden layer, and output layer, though more than one hidden layer can be specified, and a hidden layer can have a bias node.
The detailed mathematical formulations of the artificial neural network are given below, taken from Reference [21]. The hidden layer node values can be calculated using Equation (7).
v_j = \left[ 1 + \exp\left( -\sum_{i=1}^{I} x_i w_{ij} \right) \right]^{-1}   (7)
where v_j represents node j in the hidden layer, x_i represents node i in the input layer, and w_{ij} represents the weight between nodes i and j. The output layer node value can be calculated using Equation (8).
y = \left[ 1 + \exp\left( -\sum_{j=1}^{J} v_j w_j \right) \right]^{-1}   (8)
where y represents the output layer node (in this research, we have used only one output node; multiple nodes can be used). The error E between the observed and computed data can be calculated using Equation (9).
E = 0.5\,(d - y)^2   (9)
where d represents the observed data. The error propagated back from the output layer and the hidden layer is given in Equations (10) and (11), respectively.
\delta_y = (d - y)\, y\, (1 - y)   (10)
\delta_{v_j} = v_j (1 - v_j)\, \delta_y\, w_j, \quad j = 1, \ldots, J   (11)
The weight adjustments between the hidden and output layers and between the input and hidden layers can be carried out using Equations (12) and (13), respectively:
\Delta w_j = \alpha\, \delta_y\, v_j, \quad j = 1, \ldots, J   (12)
\Delta w_{ij} = \alpha\, \delta_{v_j}\, x_i, \quad i = 1, \ldots, I; \; j = 1, \ldots, J   (13)
where \alpha represents the learning rate. Additionally, momentum can be incorporated using Equations (14) and (15):
\Delta w_j(n) = \alpha\, \delta_y\, v_j + \beta\, \Delta w_j(n-1), \quad j = 1, \ldots, J   (14)
\Delta w_{ij}(n) = \alpha\, \delta_{v_j}\, x_i + \beta\, \Delta w_{ij}(n-1), \quad i = 1, \ldots, I; \; j = 1, \ldots, J   (15)
where n indicates the iteration of error back-propagation and \beta represents the momentum constant. The momentum method accelerates the training process in flat regions of the error surface and avoids fluctuations in the weights.
There are different types of activation functions, such as linear, tan-sigmoid, logarithmic sigmoid, sigmoid, and so forth, that can be used in the different layers of an ANN. In the proposed work, we have used the tan-sigmoid function in the hidden layer and the linear function in the output layer. The tan-sigmoid function was selected for the hidden layer because its performance is generally considered better than that of other activation functions for this type of problem. Similarly, we have used the linear function in the output layer because this is a regression problem. The linear and tan-sigmoid functions are represented mathematically in Equations (16) and (17), respectively [22].
\chi(x) = x   (16)
\Phi(x) = \frac{2}{1 + e^{-2x}} - 1   (17)
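A minimal sketch of such a network is shown below (illustrative Python/NumPy, not the authors' MATLAB implementation; the layer sizes, learning rate, and momentum constant are assumptions): one hidden layer with a tan-sigmoid activation, a single linear output node, and gradient descent with momentum in the spirit of Equations (14) and (15).

```python
import numpy as np

class FFBPNN:
    """Feed forward network trained with error back-propagation and momentum."""

    def __init__(self, n_in, n_hidden, lr=0.01, beta=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))   # input -> hidden
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, 1))      # hidden -> output
        self.b2 = np.zeros(1)
        self.lr, self.beta = lr, beta
        self.vW1 = np.zeros_like(self.W1)   # momentum terms
        self.vW2 = np.zeros_like(self.W2)

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)   # tan-sigmoid hidden layer, Eq. (17)
        return self.h @ self.W2 + self.b2         # linear output layer, Eq. (16)

    def train_step(self, X, d):
        y = self.forward(X)
        err = y - d.reshape(-1, 1)                     # gradient of 0.5 * (d - y)^2, Eq. (9)
        dW2 = self.h.T @ err / len(X)                  # hidden -> output gradient
        dh = (err @ self.W2.T) * (1.0 - self.h ** 2)   # back-propagate through tanh
        dW1 = X.T @ dh / len(X)                        # input -> hidden gradient
        # Weight updates with momentum, in the spirit of Eqs. (14)-(15)
        self.vW2 = -self.lr * dW2 + self.beta * self.vW2
        self.vW1 = -self.lr * dW1 + self.beta * self.vW1
        self.W2 += self.vW2
        self.W1 += self.vW1
        self.b2 -= self.lr * err.mean(axis=0)
        self.b1 -= self.lr * dh.mean(axis=0)
        return float(0.5 * np.mean((d.reshape(-1, 1) - y) ** 2))

# Usage sketch: X holds the input parameters P1..P8 as columns, d the hourly consumption.
# net = FFBPNN(n_in=8, n_hidden=20)
# for epoch in range(1000):
#     loss = net.train_step(X_train, d_train)
# predictions = net.forward(X_test)
```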

3.4. Performance Evaluation Layer

The performance of the model has been measured using the root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) [22]. These metrics are commonly used in the literature to assess regression accuracy. The mathematical representations of RMSE, MAE, and MAPE are given in Equations (18)–(20), respectively.
\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (A_i - P_i)^2}   (18)
\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left| A_i - P_i \right|   (19)
\mathrm{MAPE} = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{A_i - P_i}{A_i} \right| \times 100   (20)
where N denotes the total number of observations, A_i denotes the actual value, and P_i represents the predicted value.
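For reference, the three metrics can be computed directly from the actual and predicted series, as in the short sketch below (illustrative Python; the input arrays are placeholders).

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean square error, Eq. (18)."""
    return float(np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2)))

def mae(actual, predicted):
    """Mean absolute error, Eq. (19)."""
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(predicted))))

def mape(actual, predicted):
    """Mean absolute percentage error, Eq. (20)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

# Usage sketch with placeholder arrays a (actual) and p (predicted):
# print(mae(a, p), mape(a, p), rmse(a, p))
```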

4. Implementation and Results

4.1. Implementation Setup

All implementations of the proposed approach have been carried out using MATLAB R2010a version 7.10.0.499 on an Intel Core i5 system running the Windows 7 operating system. Different types of experiments have been carried out to find the best prediction configuration. Input features usually play a vital role in the performance of any machine learning algorithm; hence, in this research, hour of the day (P1), days of the week (P2), weeks of the month (P3), month (P4), arithmetic mean of the data (P5), standard deviation of the data (P6), skewness of the data (P7), and kurtosis (P8) have been given as inputs to the feed forward back propagation neural network. The FFBPNN is a supervised machine learning algorithm; hence, it is necessary to divide the data into training and testing sets. As energy consumption prediction has been carried out for different periods, the data have been divided using different training and testing ratios. Equations (21) and (22) are used to divide the data into training and testing sets.
T_s = D_k \times 24   (21)
T_r = Y_D - T_s   (22)
where T_s, T_r, and Y_D represent the testing data, the training data, and the yearly data (the complete data set), respectively, and D_k represents the number of days of hourly data used for testing. In the proposed work, the k values are 1, 2, 5, and 7 for one day, two days, five days, and one week, respectively.
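As a concrete illustration of Equations (21) and (22), the snippet below (illustrative Python; it assumes the yearly data are stored as one hourly value per element and that the test horizon is taken from the end of the series) separates the last k days for testing and keeps the remainder for training.

```python
import numpy as np

def split_train_test(yearly_hourly, k):
    """Split hourly data: the last k days (k * 24 hours) for testing, the rest for training.

    Implements Ts = D_k * 24 and Tr = YD - Ts from Eqs. (21)-(22).
    """
    data = np.asarray(yearly_hourly)
    ts = k * 24                     # number of test samples
    return data[:-ts], data[-ts:]   # (training data, testing data)

# Example: a full year of hourly readings (8760 values), k = 7 for a one-week test set.
# train, test = split_train_test(yearly_data, k=7)
```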
Different combinations of neurons in the hidden layer with the input and output layers have been tried, and the best-suited number of neurons in the hidden layer (10) has been selected, as shown in Figure 6, which is taken from the ANN toolbox available in MATLAB [23].
Another set of experiments has been performed with the normalized data, using 16 neurons in the hidden layer, four neurons in the input layer, and one neuron in the output layer, as shown in Figure 7, which is taken from the ANN toolbox in MATLAB [23]. Similarly, we tried different numbers of neurons in the hidden layer and found this combination appropriate in combination with the input and output layers.
Third, we performed experiments by combining the statistical moments with the original data. Twenty neurons were specified in the hidden layer, with eight neurons in the input layer and one neuron in the output layer. In order to find the best number of neurons in the hidden layer in combination with the input and output layers, we tried different numbers of neurons in the hidden layer and found this combination to be appropriate, as shown in Figure 8, which is taken from the ANN toolbox in MATLAB [23].
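The trial-and-error selection of the hidden layer size can be reproduced with a simple search such as the one below (illustrative Python using scikit-learn's MLPRegressor as a stand-in for the MATLAB ANN toolbox; the candidate sizes and hyperparameters are assumptions, not the authors' settings).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def select_hidden_size(X_train, y_train, X_val, y_val, candidates=(5, 10, 16, 20, 30)):
    """Try several hidden layer sizes and keep the one with the lowest validation RMSE."""
    best_size, best_rmse = None, np.inf
    for n in candidates:
        model = MLPRegressor(hidden_layer_sizes=(n,), activation="tanh",
                             solver="sgd", momentum=0.9, learning_rate_init=0.01,
                             max_iter=2000, random_state=0)
        model.fit(X_train, y_train)
        rmse = mean_squared_error(y_val, model.predict(X_val)) ** 0.5
        if rmse < best_rmse:
            best_size, best_rmse = n, rmse
    return best_size, best_rmse

# Usage sketch with the P1..P8 input matrix and hourly consumption targets:
# size, score = select_hidden_size(X_tr, y_tr, X_val, y_val)
```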

4.2. Results

In the proposed work, we have used three types of data, namely simple data, normalized data, and data with statistical moments. The results shown in Figure 9, Figure 10, Figure 11 and Figure 12 are for one day, two days, five days, and one-week hourly energy consumption prediction with the ANN applied to simple data. In these figures, the green lines represent the actual power consumption, and the deep orange lines represent the predicted energy consumption.
Table 1 presents the statistical summary of the results of the FFBPNN for one day (1D), two days (2D), five days (5D), and one week (W) hourly energy consumption prediction on simple data. The values of these statistical measures are high, which indicates that the performance of the FFBPNN on data with no preprocessing is not impressive. In order to improve the performance, further preprocessing of the data is required.
The recorded energy consumption prediction results for one day, two days, five days, and one week are shown in Figure 13, Figure 14, Figure 15 and Figure 16, respectively, for the ANN applied to normalized data.
Table 2 presents the statistical summary of the FFBPNN results for one day (1D), two days (2D), five days (5D), and one week (W) hourly energy consumption prediction on normalized data. The MAE, MAPE, and RMSE values for normalized data are lower than those for simple data, which illustrates that the FFBPNN performs better on normalized data than on simple data.
In this work, we have also applied the ANN on data with statistical moments (SMD) for one day, two days, five days, and one-week energy consumption prediction in residential buildings. The results shown in Figure 17, Figure 18, Figure 19 and Figure 20 are for one day, two days, five days and one-week hourly energy consumption prediction respectively.
The statistical measure results for one day (1D), two days (2D), five days (5D), and one week (W) hourly energy consumption prediction using the ANN on SMD are presented in Table 3. The values of the statistical measures are lower than those for simple data and normalized data. Hence, the performance of the FFBPNN on data with statistical moments is considerably better than on simple data and normalized data.
In Table 4, we have calculated the average MAE, MAPE, and RMSE values from the above tables for simple data, normalized data, and statistical moments data. These averages are calculated for each type of data in order to measure the overall performance of the ANN on each of them. The results indicate that the performance of the ANN on data with statistical moments is better than on simple and normalized data.

5. Conclusion and Future Work

Energy consumption prediction and modeling have always been challenging tasks for researchers and scientists, mainly due to noise and randomness in the data. To tackle this challenge, a robust and flexible model has been proposed in this paper for the prediction of energy consumption in residential buildings. The prediction has been carried out with the FFBPNN using simple data, normalized data, and data with statistical moments for various numbers of days. Machine learning techniques have shown good accuracy and opened an opportunity for prediction systems to be implemented in real smart building environments. The design of the model has been kept flexible to accommodate modifications and the implementation of different machine learning algorithms. We have applied the FFBPNN to simple data, normalized data, and data with statistical moments, and the results indicate that the FFBPNN performs best on data with statistical moments. The performance of the algorithms has been measured using different statistical measures (MAE, RMSE, MAPE), and these values confirm that the performance of the feed forward neural network on data with statistical moments is better than on simple and normalized data. It has also been observed that the performance of the neural network was better with larger data than with smaller data. However, there is still a need to test the model on more data and compare the results with other algorithms, which will be the subject of future work.

Author Contributions

M.F. conceived the idea for the paper, designed and performed the experiments, and wrote the paper. H.S. and A.M.A. assisted in paper revision and supervised the work. W.K.M. helped in the paper design. A.S.S. helped in the writing, revision, proofreading, and editing of the paper.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Description of abbreviations/notations used in the paper.
Notation | Description
FFBPNN | Feed Forward Back Propagation Neural Network
MAE | Mean Absolute Error
MAPE | Mean Absolute Percentage Error
RMSE | Root Mean Square Error
SD | Simple Data
ND | Normalized Data
SMD | Statistical Moments Data
YD | One Year Data
Tr | Training Data
Ts | Testing Data
D | Day

References

  1. i-Scoop. Smart Homes Automation. Available online: https://www.i-scoop.eu/smart-home-home-automation/ (accessed on 20 November 2018).
  2. Gartner. Gartner Survey Shows Connected Home Solutions Adoption Remains Limited to Earlyadopters. 2017. Available online: https://www.gartner.com/en/newsroom/press-releases/2017-03-06-gartner-survey-shows-connected-home-solutions-adoption-remains-limited-to-e.arly-adopters (accessed on 25 March 2019).
  3. Johnson Controls. 2017 Energy Efficiency Indicator Survey. 2017. Available online: https://www.johnsoncontrols.com/media-center/news/press-releases/2017/10/12/-/media/d23ec7c884d34719b0ec5b00d3a8abe2.ashx (accessed on 25 March 2019).
  4. Shah, A.S.; Nasir, H.; Fayaz, M.; Lajis, A.; Shah, A. A review on energy consumption optimization techniques in iot based smart building environments. Information 2019, 10, 108. [Google Scholar] [CrossRef]
  5. Wahid, F.; Ghazali, R.; Fayaz, M.; Shah, A.S. Statistical features based approach (sfba) for hourly energy consumption prediction using neural network. Int. J. Inf. Technol. Comput. Sci. 2017, 9, 23–30. [Google Scholar] [CrossRef]
  6. Wahid, F.; Ghazali, R.; Fayaz, M.; Shah, A.S. A simple and easy approach for home appliances energy consumption prediction in residential buildings using machine learning techniques. JAEBS 2017, 7, 108–119. [Google Scholar]
  7. Fayaz, M.; Kim, D. A prediction methodology of energy consumption based on deep extreme learning machine and comparative analysis in residential buildings. Electronics 2018, 7, 222. [Google Scholar] [CrossRef]
  8. ANSI/ASHRAE Standard 55-2010. Thermal Environmental Conditions for Human Occupancy; American Society of Heating, Refrigerating, and Air Conditioning Engineers Inc.: Atlanta, GA, USA, 2010. [Google Scholar]
  9. Stinson, J.; Willis, A.; Williamson, J.B.; Currie, J.; Smith, R.S. Visualising energy use for smart homes and informed users. Energy Procedia 2015, 78, 579–584. [Google Scholar] [CrossRef]
  10. Ha, D.L.; Ploix, S.; Zamai, E.; Jacomino, M. Realtimes dynamic optimization for demand-side load management. IJMSEM 2008, 3, 243–252. [Google Scholar] [CrossRef]
  11. Li, K.; Hu, C.; Liu, G.; Xue, W. Building's electricity consumption prediction using optimized artificial neural networks and principal component analysis. Energy Build. 2015, 108, 106–113. [Google Scholar] [CrossRef]
  12. Biswas, M.A.R.; Robinson, M.D.; Fumo, N. Prediction of residential building energy consumption: A neural network approach. Energy 2016, 117, 84–92. [Google Scholar] [CrossRef]
  13. Hu, Y.-C. Electricity consumption prediction using a neural-network-based grey forecasting approach. J. Oper. Res. Soc. 2017, 68, 1259–1264. [Google Scholar] [CrossRef]
  14. Basu, K.; Hawarah, L.; Arghira, N.; Joumaa, H.; Ploix, S. A prediction system for home appliance usage. Energy Build. 2013, 67, 668–679. [Google Scholar] [CrossRef]
  15. Li, C.; Ding, Z.; Zhao, D.; Yi, J.; Zhang, G. Building energy consumption prediction: An extreme deep learning approach. Energies 2017, 10, 1525. [Google Scholar] [CrossRef]
  16. Bâra, A.; Oprea, S.V. Electricity consumption and generation forecasting with artificial neural networks. In Advanced Applications for Artificial Neural Networks; IntechOpen: London, UK, 2017. [Google Scholar]
  17. Rahman, A.; Srikumar, V.; Smith, A.D. Predicting electricity consumption for commercial and residential buildings using deep recurrent neural networks. Appl. Energy 2018, 212, 372–385. [Google Scholar] [CrossRef]
  18. Kuo, P.-H.; Huang, C.-J. A high precision artificial neural networks model for short-term energy load forecasting. Energies 2018, 11, 213. [Google Scholar] [CrossRef]
  19. González, P.A.; Zamarreno, J.M. Prediction of hourly energy consumption in buildings based on a feedback artificial neural network. Energy Build. 2005, 37, 595–601. [Google Scholar] [CrossRef]
  20. Gibbs, M.S.; Morgan, N.; Maier, H.R.; Dandy, G.C.; Nixon, J.B.; Holmes, M. Investigation into the relationship between chlorine decay and water distribution parameters using data driven methods. Math. Comput. Model. 2006, 44, 485–498. [Google Scholar] [CrossRef]
  21. Geem, Z.W.; Roper, W.E. Energy demand estimation of South Korea using artificial neural network. Energy Policy 2009, 37, 4049–4054. [Google Scholar] [CrossRef]
  22. Wahid, F.; Kim, D.H. Short-term energy consumption prediction in korean residential buildings using optimized multi-layer perceptron. Kuwait J. Sci. 2017, 44. Available online: https://journalskuwait.org/kjs/index.php/KJS/article/view/1473 (accessed on 25 March 2019).
  23. MATLAB Version 8; (R2013a); The Mathworks Inc.: Natick, MA, USA, 2013.
Figure 1. Proposed methodology.
Figure 2. Visualization of two days' hourly energy consumed data collected from Building-IV.
Figure 3. Data collection.
Figure 4. Structure of model M1 for four inputs.
Figure 5. Structure of model M2 for eight inputs.
Figure 6. Artificial neural network (ANN) configuration applied on original data.
Figure 7. ANN configuration applied to normalized data.
Figure 8. ANN configuration applied to data with statistical moments.
Figure 9. One day hourly energy consumption prediction using feed forward back propagation neural network (FFBPNN) on simple data.
Figure 10. Two days of hourly energy consumption prediction using FFBPNN on simple data.
Figure 11. Five days of hourly energy consumption prediction using FFBPNN on simple data.
Figure 12. One week hourly energy consumption prediction using FFBPNN on simple data.
Figure 13. One day hourly energy consumption prediction using FFBPNN on normalized data.
Figure 14. Two days of hourly energy consumption prediction using FFBPNN on normalized data.
Figure 15. Five days of hourly energy consumption prediction using FFBPNN on normalized data.
Figure 16. One-week hourly energy consumption prediction using FFBPNN on normalized data.
Figure 17. One day hourly energy consumption prediction using FFBPNN on statistical moments data.
Figure 18. Two days hourly energy consumption prediction using FFBPNN on statistical moments data.
Figure 19. Five days hourly energy consumption prediction using FFBPNN on statistical moments data.
Figure 20. One week hourly energy consumption prediction using FFBPNN on statistical moments data.
Table 1. Statistical measure results of the ANN on simple data (SD).
Statistical Measures | MAE | MAPE | RMSE
ANN on SD (1D) | 1.6609 | 4.5503 | 1.9046
ANN on SD (2D) | 1.9649 | 5.2604 | 2.4105
ANN on SD (5D) | 2.3261 | 6.3572 | 2.8343
ANN on SD (W) | 2.1069 | 5.9065 | 2.6226
MAE: mean absolute error, MAPE: mean absolute percentage error, RMSE: root mean square error, SD: simple data.
Table 2. Statistical measure results of the ANN on normalized data (ND).
Statistical Measures | MAE | MAPE | RMSE
ANN on ND (1D) | 1.6832 | 4.6264 | 2.2857
ANN on ND (2D) | 1.3383 | 3.5847 | 1.7186
ANN on ND (5D) | 2.2114 | 6.2251 | 2.7352
ANN on ND (W) | 1.9545 | 5.2577 | 2.3558
MAE: mean absolute error, MAPE: mean absolute percentage error, RMSE: root mean square error, ND: normalized data.
Table 3. Statistical measure results of the ANN on statistical moments data (SMD).
Statistical Measures | MAE | MAPE | RMSE
ANN on SMD (1D) | 1.0321 | 2.9039 | 1.212
ANN on SMD (2D) | 0.9575 | 2.516 | 1.1388
ANN on SMD (5D) | 1.6105 | 4.4772 | 2.1394
ANN on SMD (W) | 0.7265 | 2.0646 | 0.9723
MAE: mean absolute error, MAPE: mean absolute percentage error, RMSE: root mean square error, SMD: statistical moments data.
Table 4. Average values of MAE, MAPE, and RMSE from the above three tables.
Statistical Measures | MAE | MAPE | RMSE
ANN on SD | 8.0588 | 22.0744 | 9.772
ANN on ND | 7.1874 | 19.6939 | 9.0953
ANN on SMD | 4.3266 | 11.9617 | 5.4625
MAE: mean absolute error, MAPE: mean absolute percentage error, RMSE: root mean square error, SD: simple data, ND: normalized data, SMD: statistical moments data.
