
Noise Reduction for Nonlinear Nonstationary Time Series Data using Averaging Intrinsic Mode Function

1 Centre for Bio-Inspired Technology, Imperial College London, South Kensington, London SW7 2AZ, UK
2 Ubon Ratchathani Rajabhat University, 2 Ratchathani Road, Ubon Ratchathani 34000, Thailand
* Author to whom correspondence should be addressed.
Algorithms 2013, 6(3), 407-429; https://0-doi-org.brum.beds.ac.uk/10.3390/a6030407
Received: 28 May 2013 / Revised: 13 June 2013 / Accepted: 4 July 2013 / Published: 19 July 2013

Abstract

A novel noise-filtering algorithm based on the averaging Intrinsic Mode Function (aIMF), a derivative of Empirical Mode Decomposition (EMD), is proposed to remove white Gaussian noise from foreign currency exchange rates, which are nonlinear nonstationary time series signals. Noise patterns with different amplitudes and frequencies were randomly mixed into the five exchange rates. Several filters, namely the Extended Kalman Filter (EKF), Wavelet Transform (WT), Particle Filter (PF) and the averaging Intrinsic Mode Function (aIMF) algorithm, were compared in terms of filtering and smoothing performance. The aIMF algorithm achieved the highest noise reduction among these filters.
Keywords: empirical mode decomposition; Intrinsic Mode Function; Wavelet Transform; noise reduction; exchange rates

1. Introduction

The Kalman Filter (KF) was conceptualised for use in linear systems. For nonlinear filtering, the Extended KF (EKF) requires Jacobian mappings, which can be computationally intensive if the measurement vector is high-dimensional, so limited computer processing power may be a problem [1,2]. In brief, an EKF can estimate a highly non-stationary data series, provided a known state-space model is incorporated along with the EKF [3].
Sanjeev et al. explained that the Particle Filter (PF) has received much attention in various fields over the past decades [4]. The basic principle of PF is to use a set of weighted samples, also known as particles, to approximate the posterior probability of a time-varying signal of interest, given related observations. PFs generalize traditional KFs and can be applied to nonlinear and non-Gaussian state-space models. Similar to the EKF, the PF algorithm is a two-stage approach, i.e., prediction and correction, and is a technique for implementing recursive Bayesian filters by Monte Carlo sampling. In summary, PF represents the posterior density using a set of random particles with associated weights, and requires a large number of particles. The most common applications of PFs are in image processing and segmentation, model selection, tracking and navigation, channel estimation, blind equalization, positioning in wireless networks, biochemistry, economics, finance, geosciences, immunology, materials science, pharmacology, and toxicology. Theoretically, the advantage of PF over EKF lies in the representation of nonlinear functions, because optimal estimation also uses nonlinear non-Gaussian state-space models [5,6,7].
In 1996, Huang and Okine [8] developed an a posteriori algorithm for analysing nonlinear and nonstationary datasets using EMD, known as the Hilbert-Huang transform (HHT). It has been applied in many areas, such as bioinformatics [6,7], signal processing [5,8,9], geophysics [10,11], and finance [8,9,12]. It was claimed that the HHT model could replace the narrow-band filter technique proposed by Hilbert-Gabor, dating from the era when Fourier analysis was the vital tool for signal detection. EMD is based on the local time scale of the signal, which is decomposed using a sifting process [13,14]. The principle of the Wavelet Transform (WT) is that an input signal is split into various small waves, which are compact, i.e., functions of finite length. In recent years, a few studies have combined EMD and WT to denoise nonlinear, nonstationary signals such as electrocardiography (ECG) signals; however, there is no definitive solution for determining the stopping criterion of the EMD process. A comparison of WT and HHT shows that both produce similar time and frequency distributions using amplitude as the common axis, but the analytical results for WT exhibit a number of deficiencies, such as harmonics and small spikes on the frequency scale. Thus, WT may not be suitable for analysing and capturing large volumes of data [15].
This paper presents a novel digital filter termed "aIMF", a derivative of EMD. The aIMF algorithm is used to reduce noise in nonlinear nonstationary time series datasets (exchange rates). To verify its performance, we compare the proposed aIMF digital filter with WT, EKF and PF; the results show that the aIMF algorithm outperforms these filters. Section 2 presents theoretical considerations of EMD, aIMF, WT, EKF and PF. The simulation and results, including a robustness test, are presented in Section 3, and the discussion and conclusions are in Section 4.

2. Theoretical Considerations of the Digital Filter

Following Section 1, this section presents theoretical considerations of the existing algorithms used for filtering nonlinear nonstationary time series data, namely WT, EKF and PF, together with our proposed aIMF, which is a derivative of EMD.

2.1. EMD Algorithm

The key part of the HHT algorithm is EMD, because any complex dataset can be decomposed into a finite number of components, each of which admits a well-behaved Hilbert transform. Because the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes [8]. During signal processing, high-frequency noise in the input data may be treated as different simple intrinsic mode oscillations [8]. In signal-processing terms, the EMD algorithm is viewed as an a posteriori method based on adaptive characteristic-scale separation, which is useful when the input signal oscillation is nonlinear and/or nonstationary. EMD does not rely on sampling over a fixed time slot in the time series, because the local mean of a signal is defined by its envelopes without resorting to any predefined time scale. Technically, EMD employs a sifting process and a cubic-spline technique for smoothing and filtering a signal [8]; the cubic-spline interpolation is applied as a two-sided filter, which improves the confidence interval of the dataset distribution.
HHT is designed to decompose nonlinear and nonstationary signals, especially those with high volatility and fast frequency changes. The signal is adaptively decomposed into components and, after the decomposition of each step, the transformed signal is known as an Intrinsic Mode Function (IMF), which includes the different time scales intrinsic to the signal. IMF is a mono-component and can be transformed into an instantaneous frequency. IMF signals have also been defined as a function with a zero mean and with as many zero crossings as maxima or minima. The HHT process is divided into two parts: EMD and Hilbert Spectral Analysis (HSA). Figure 1 shows the EMD mechanism using a sifting process. It starts by identifying the maxima and minima points, extracting them, interpolating the maxima and minima envelopes, and computing the means of the local envelope.
Figure 1. Flowchart of Empirical Mode Decomposition (EMD) in the Hilbert-Huang transform (HHT).
A nonlinear and nonstationary time series dataset is denoted as xi(t). An IMF ci(t) is defined under the conditions that: (i) the numbers of extrema (maxima plus minima) and of zero crossings in the entire data series must either be equal or differ by at most one; and (ii) at any point, the mean of the envelope defined by the local maxima and the envelope defined by the local minima must be zero [10]. These conditions can be written as

$n_{U_i}(t) + n_{L_i}(t) = n_{Z_i}(t) \quad \text{or} \quad n_{U_i}(t) + n_{L_i}(t) = n_{Z_i}(t) \pm 1$

and

$\frac{U_i(t) + L_i(t)}{2} = 0$
where nUi(t), nLi(t) and nZi(t) are the numbers of maxima (upper peaks), minima (lower peaks) and zero crossings of the signal, respectively. The EMD algorithm can be represented as follows.
Step 1: Spline the datasets xk(t) by interpolating the upper and lower extrema Ui(t) and Li(t), which gives

$f_{\mathrm{int}}\{U_k(t), x_k(t)\} = U_1^{\mathrm{int}}(t), U_2^{\mathrm{int}}(t), U_3^{\mathrm{int}}(t), \ldots, U_n^{\mathrm{int}}(t)$

and

$f_{\mathrm{int}}\{L_k(t), x_k(t)\} = L_1^{\mathrm{int}}(t), L_2^{\mathrm{int}}(t), L_3^{\mathrm{int}}(t), \ldots, L_n^{\mathrm{int}}(t)$
The algorithm used for parabolic interpolation can be described as follows.
(1) When constructing the upper and lower envelopes, calculate the coefficients of the parabola ax2 + bx + c through x(k − 1), x(k), x(k + 1);
(2) If the second-degree coefficient, a, equals zero, x(k) is certainly not an extremum, so the sliding window moves on to the next discrete value of x(k);
(3) If the first-degree coefficient, b, equals zero, x(k) is an extremum, either a maximum or a minimum depending on the sign of a. We then calculate the top of this parabola by introducing $t_{\mathrm{top}} = -\frac{b}{2a}$; the same applies to the bottom of the downward parabolic segments;
(4) Repeat (2) and (3), stopping after x(k + n) has been processed.
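The parabolic-interpolation steps (1)-(4) can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it assumes unit sample spacing, and the helper name `parabolic_extrema` is ours.

```python
import numpy as np

def parabolic_extrema(x):
    """Slide a 3-point window over x and locate extrema by fitting a
    parabola a*t^2 + b*t + c through (x[k-1], x[k], x[k+1])."""
    maxima, minima = [], []
    for k in range(1, len(x) - 1):
        a = 0.5 * (x[k - 1] - 2.0 * x[k] + x[k + 1])  # second-degree coefficient
        b = 0.5 * (x[k + 1] - x[k - 1])               # first-degree coefficient
        if a == 0.0:
            continue                    # flat parabola: x[k] is not an extremum
        t_top = -b / (2.0 * a)          # vertex of the parabola, relative to k
        if abs(t_top) <= 1.0:           # vertex falls inside the window
            (maxima if a < 0 else minima).append(k + t_top)
    return maxima, minima

t = np.linspace(0.0, 2.0 * np.pi, 50)
mx, mn = parabolic_extrema(np.sin(t))   # one peak near pi/2, one trough near 3*pi/2
```

The refined vertex positions are sub-sample accurate, which is why the sifting process uses them rather than the raw sample indices.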
Step 2: Average the interpolated maxima and minima in order of the time series, which is represented as

$m_i(t) = \frac{U_1^{\mathrm{int}} + L_1^{\mathrm{int}}}{2}, \frac{U_2^{\mathrm{int}} + L_2^{\mathrm{int}}}{2}, \frac{U_3^{\mathrm{int}} + L_3^{\mathrm{int}}}{2}, \ldots, \frac{U_n^{\mathrm{int}} + L_n^{\mathrm{int}}}{2}$

Step 3: Subtract the average of the local extrema (maxima and minima) mi(t) from xk(t) in order of the time series; the new decomposed signal is

$h_i(t) = x_k(t) - m_i(t)$
Step 4: Repeat steps 1 through 3 k times until $h_{1k}(t)$ equals c1(t). Using Equation (5), where $h_{1(k-1)}(t) - m_{1k}(t) = h_{1k}(t)$, we designate c1(t) as the first IMF.
Step 5: Find the other IMFs by calculating the first residual, which is given by

$r_1(t) = x_k(t) - c_1(t)$

We derive $c_2(t), c_3(t), c_4(t), \ldots, c_n(t)$ by treating $r_1(t)$ as a new dataset in the next loop, which completes after obtaining cn(t). This procedure is represented by

$r_2(t) = r_1(t) - c_2(t)$
$\vdots$
$r_n(t) = r_{n-1}(t) - c_n(t)$
IMFs are narrow-band, zero-mean signals, and the signal is decomposed into k IMFs by EMD, so each successive IMF occupies a lower frequency region of the time-frequency domain than the preceding one. EMD can act as a dyadic filter bank for noise [16], and the decomposition can be expressed as

$x(t) = \sum_{i=1}^{n} c_i(t) + r_n(t)$
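The sifting procedure of Steps 1-5 can be sketched in Python. This is a simplified illustration, not the paper's implementation: it uses SciPy's cubic splines for the envelopes, discrete extrema rather than parabolic vertices, and a fixed sift count in place of a formal stopping criterion (all assumptions on our part).

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x):
    """One sifting pass (Steps 1-3): spline the maxima and minima
    envelopes and subtract their local mean from the signal."""
    t = np.arange(len(x))
    up = argrelextrema(x, np.greater)[0]
    lo = argrelextrema(x, np.less)[0]
    if len(up) < 4 or len(lo) < 4:
        return None                       # too few extrema: x is a residual
    upper = CubicSpline(up, x[up])(t)     # U_int(t)
    lower = CubicSpline(lo, x[lo])(t)     # L_int(t)
    m = 0.5 * (upper + lower)             # mean envelope m(t)
    return x - m                          # h(t) = x(t) - m(t)

def emd(x, n_imfs=4, n_sifts=8):
    """Steps 4-5: extract IMFs c_i(t) with residuals r_i = r_{i-1} - c_i."""
    imfs, r = [], x.astype(float).copy()
    for _ in range(n_imfs):
        h = sift_once(r)
        if h is None:
            break
        for _ in range(n_sifts - 1):      # fixed sift count as the stop rule
            h_new = sift_once(h)
            if h_new is None:
                break
            h = h_new
        imfs.append(h)                    # c_i(t)
        r = r - h                         # r_i(t) = r_{i-1}(t) - c_i(t)
    return imfs, r

t = np.linspace(0.0, 1.0, 512)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imfs, residual = emd(x)
# by construction, x(t) equals the sum of the IMFs plus the final residual
```

Because each residual is defined by subtraction, the reconstruction identity above holds exactly regardless of how well the sifting separates the components.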
Since HHT also comprises another function, HSA, it can be used as a tool for the time-frequency analysis of nonlinear and nonstationary datasets, presenting the results as a time-frequency-energy distribution [12]. For a comprehensive explanation of the Hilbert transform, refer to Cohen [16]. Given a real signal x(t), the analytic signal z(t) is defined as

z(t) = x(t) + iy(t)

where y(t) is the Hilbert transform of x(t), defined using the Cauchy principal value (p.v.) such that

$y(t) = H[x(t)] = \mathrm{p.v.} \frac{1}{\pi} \int_{-\infty}^{+\infty} \frac{x(\tau)}{t - \tau}\, d\tau$
We can define the amplitude A(t) and phase ψ(t) of the complex signal as follows [8]

$A(t) = |z(t)| = \sqrt{x^2(t) + y^2(t)}$

$\psi(t) = \arg z(t) = \arctan \frac{y(t)}{x(t)}$

where the time-varying phase derivative, or instantaneous frequency, is $\omega(t) = \frac{d\psi(t)}{dt}$.
From Equation (9), the original data can be rearranged as

$x(t) = \mathrm{Re} \sum_{i=1}^{n} A_i(t) \exp\!\left(j \int \omega_i(t)\, dt\right)$

where Re{·} denotes the real part of the complex quantity.
Using Equation (14) and performing the Hilbert transform on each IMF cn(t), the analytical data can be expressed in terms of the Hilbert amplitude and instantaneous frequency as

$\bar{x}(t) = \sum_{i=1}^{n} R_i(t) \exp\!\left[j \int \omega_i(t)\, dt\right]$
As the HHT model comprises two main parts, EMD and HSA, the EMD process decomposes the original signals into IMFs using cubic splines and the sifting technique, while HSA calculates the instantaneous frequency using the Hilbert transform and analyses the entire time-varying instantaneous spectrum. Note that the Hilbert spectrum is inspected only after the EMD process has finished.
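HSA's amplitude, phase and instantaneous frequency can be computed with SciPy's FFT-based Hilbert transform. A brief sketch on a mono-component test tone; the 50 Hz signal and the sampling rate are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                # assumed sampling rate, in Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
c = np.sin(2 * np.pi * 50 * t)             # a mono-component, IMF-like signal

z = hilbert(c)                             # analytic signal z(t) = x(t) + i y(t)
A = np.abs(z)                              # instantaneous amplitude A(t)
psi = np.unwrap(np.angle(z))               # instantaneous phase psi(t)
f_inst = np.diff(psi) * fs / (2 * np.pi)   # omega(t)/(2*pi), in Hz
```

For a pure tone the amplitude envelope is flat and the instantaneous frequency sits at 50 Hz; for a real IMF both become time-varying curves.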

2.2. Creating the aIMF Algorithm

Kaslovsky and Meyer [17] explained that a meaningful instantaneous frequency (IF) decomposed by the EMD algorithm must be nearly monochromatic, a condition that is not guaranteed by the algorithm and clearly fails to hold when the signal is corrupted by noise. Several reports demonstrated that EMD performance is sensitive to, and produces a large quantity of, noise [12,18]. Moreover, the accuracy of HHT analysis suffers from several mathematical and numerical effects that require further study, mainly because the signals that emerge from the EMD algorithm are not shift-invariant in time (stationary) and tend to be narrow-band [18]. To reduce noise, one solution was to normalise the analytic signal prior to the Hilbert transform [8,19]. Many studies have sought to improve the HHT algorithm, among them: adoption of the wavelet packet transform as a pre-process to decompose the signal into a set of narrow-band signals prior to the application of EMD [20,21]; and ensemble empirical mode decomposition (EEMD), which sifts an ensemble of white-noise-added signals and treats their windowed-average mean as the final true result. One EEMD publication used the added noise to provide a stopping criterion for the EMD process [9]. Kaiser [22] noted that Fourier spectral analysis and its derivatives such as WT are constrained to a limited time window width obtained by sliding one or more predetermined windows along the time axis. Moreover, there is a trade-off between the window width and the frequency resolution, a phenomenon governed by the Heisenberg uncertainty principle [21]. In WT, the window width must be preselected; it is known as the primary or mother wavelet, a prototype that provides less flexibility when handling datasets whose mean and variance are highly volatile.
It is imperative to restate the two conditions of the EMD process defined in Section 2.1. With more iterations, the residual rn(t) becomes either over-distorted or a monochromatic function from which no further IMF can be decomposed [11]. Another study confirmed that the real complex quantity of all the decomposed IMFs, i.e., IMF1 to IMF7, in the amplitude-frequency domain shifts toward the lower region [19].
To reduce the noise produced in the EMD process, we propose to extract the noise spread across the whole spectrum by averaging all IMFs and subtracting the average from the original signal, which is represented by

$c_a(t) = \frac{1}{n} \sum_{i=1}^{n} c_i(t)$

Next, we subtract the averaged IMF of Equation (16), ca(t), from the original nonlinear, nonstationary time series dataset xn(t). Thus, a new digital filter is created, which is given by

$\mathrm{aIMF}(t) = y_n(t) = x_i(t) - c_a(t)$
where yn(t) is a function of the aIMF algorithm.
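Equations (16) and (17) reduce to a few lines of numpy. The sketch below uses synthetic stand-in IMFs purely for illustration; in practice the c_i(t) come from the EMD sifting process, and the drift and disturbance here are invented, not exchange-rate data.

```python
import numpy as np

def aimf_filter(x, imfs):
    """Equations (16)-(17): average all IMFs c_i(t) and subtract
    the average c_a(t) from the original signal x(t)."""
    c_a = np.mean(imfs, axis=0)            # c_a(t) = (1/n) * sum_i c_i(t)
    return x - c_a                         # y(t) = x(t) - c_a(t)

# stand-in data: a slow drift plus an oscillatory disturbance
t = np.linspace(0.0, 1.0, 256)
trend = 1.3 + 0.2 * t                      # hypothetical exchange-rate drift
noise = 0.1 * np.sin(2 * np.pi * 60 * t)
x = trend + noise

# stand-in IMFs; in practice these come from the EMD sifting process
imfs = np.array([noise, 0.5 * noise, np.zeros_like(t)])
y = aimf_filter(x, imfs)
```

Because the high-frequency disturbance dominates the IMF average, subtracting it pulls the filtered series closer to the underlying trend.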
In Figure 2, the aIMF process starts by inputting the nonlinear, nonstationary time series data, the Euro-US dollar (EUR-USD) exchange rates, into the EMD process. The next block uses the results of the EMD process, and the aIMF algorithm proper begins in the third block, where all of the IMFs are averaged and the average is subtracted from the original datasets.
Figure 2. Block diagram of the proposed averaging Intrinsic Mode Function (aIMF) algorithm.
Referring to Figure 2, the proposed aIMF algorithm has the advantage of providing a stopping criterion once the two conditions of the EMD process defined in Section 2.1 are satisfied: with more iterations, the residual rn(t) becomes either over-distorted or a monochromatic function from which no further IMF can be decomposed.

2.3. Theoretical Considerations of the WT Algorithm

Helong et al. [23] applied the WT of a signal f(x), defined earlier in [21] as

$WT(a, b; x, \psi) = |a|^{-\frac{1}{2}} \int_{-\infty}^{+\infty} f(x)\, \overline{\psi\!\left(\frac{x - b}{a}\right)}\, dx$
where WT(a, b) represents the calculated coefficients, a and b are the scale and translation parameters, respectively, ψ(x) is the transforming function (mother wavelet), and the bar over ψ indicates its complex conjugate. A wavelet is classified as an adaptive algorithm, used in many fields such as astronomy, acoustics, nuclear engineering, signal processing and the prediction of earthquakes, by solving partial optimised equations and reducing random noise [15]. A factorization of f at different resolution levels is defined by
$f = \sum_{i=-\infty}^{+\infty} c_{J,i}\, \phi_{J,i} + \sum_{j=-\infty}^{J-1} \sum_{i=-\infty}^{+\infty} d_{j,i}\, \psi_{j,i}$

where cJ,i represents the information in the signal on the coarsest level J, ϕJ,i is the scaling function, and dj,i represents the details (wavelet coefficients) at the different scales necessary to reconstruct the function at the finest scale 0, at which the wavelet and scaling functions are compactly supported.
The next step is to introduce the thresholding technique used to remove noise from each local set of dj,i, which are normally affected at different levels of the scale j. The soft-thresholding rule is given by Li [24]

$\gamma = \begin{cases} d_i - \lambda, & d_i \geq \lambda \\ 0, & -\lambda < d_i < \lambda \\ d_i + \lambda, & d_i \leq -\lambda \end{cases}$
where γ is the indicator and λ is the thresholding value. To find λ, we factorize f to obtain $\hat{f}$, which approximates f. The error (risk) between f and its approximation $\hat{f}$ is given by

$R(\hat{f}, f) = \frac{1}{N} \sum_{i=0}^{N-1} \| \hat{f}_i - f_i \|_2^2$

In terms of the wavelet coefficients, under Parseval's identity [24] the risk in Equation (20) can be expressed as

$R(\hat{f}, f) \approx \| \hat{d} - d \|_2^2 = \sum_j \sum_k (\hat{d}_{j,k} - d_{j,k})^2$

Applying Equations (19) and (21) at any resolution level j = 1, 2, ..., J, we use Stein's principle [25] to minimize the risk in Equation (21). The thresholding value λ is finally obtained as

$\lambda = \sqrt{\frac{2}{N} \sum_j \sum_k (\hat{d}_{j,k} - d_{j,k})^2}$
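The soft-thresholding rule applied to the detail coefficients can be written compactly in numpy; a minimal sketch (the function name is ours, and the sample coefficients are invented).

```python
import numpy as np

def soft_threshold(d, lam):
    """gamma(d): shrink detail coefficients whose magnitude exceeds
    lambda toward zero, and zero out those with |d| < lambda."""
    return np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

d = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
g = soft_threshold(d, 0.5)                 # shrinks the large coefficients by 0.5
```

Coefficients inside the band ±λ, which are assumed to be mostly noise, are removed entirely, while larger coefficients are shrunk rather than kept unchanged, which avoids discontinuities in the reconstruction.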

2.4. Theoretical Considerations of the EKF Algorithm

For a nonlinear system, the KF can be enhanced by using a Taylor series to expand the state Equation (23) and output Equation (24) around a nominal state; this is known as a linearized KF. A summary of the linearized KF algorithm is

xk+1 = f(xk, uk) + wk
yk = h(xk) + vk

where xk+1 is the state of the system at the next time step of the original dataset xk; k is the time index; uk is the driving function, which may be a control signal or distribution function; wk is process noise, independent and identically distributed (i.i.d.) N(0,Q), where Q is the covariance matrix of the state; vk is measurement noise, i.i.d. N(0,R), where R is the covariance matrix of the measurement; yk is the measured output; and f(·) and h(·) are the state and output equations, respectively. In this case the state and output functions are nonlinear. Thus, Equations (27) and (28) are nominal states that are known (predicted) ahead of time, represented by
$\bar{x}_{k+1} = f(\bar{x}_k, u_k)$

$\bar{y}_k = h(\bar{x}_k)$

where $\bar{x}_{k+1}$ is the nominal state of the system and $\bar{y}_k$ is the nominal measured output.
During each step, we compute the partial-derivative (Jacobian) matrices Ak and Ck with respect to xk, evaluated at the nominal state, obtaining

$A_k = \left. \frac{\partial f}{\partial x} \right|_{\bar{x}_k}$

$C_k = \left. \frac{\partial h}{\partial x} \right|_{\bar{x}_k}$
where Ak and Ck are matrices. Next, we define Δyk as the difference between the actual measurement yk and the nominal measurement $\bar{y}_k$, which is given by

$\Delta y_k = y_k - \bar{y}_k = y_k - h(\bar{x}_k)$
In this state, the following linearized KF equations can be executed:

$K_k = A_k P_k C_k^T (C_k P_k C_k^T + R)^{-1}$

$\Delta\hat{x}_{k+1} = A_k \Delta\hat{x}_k + K_k (\Delta y_k - C_k \Delta\hat{x}_k)$

$P_{k+1} = (A_k - K_k C_k) P_k A_k^T + Q$

$\hat{x}_{k+1} = \bar{x}_{k+1} + \Delta\hat{x}_{k+1}$

where Kk is the Kalman gain and Pk is the covariance of the estimation error.
In the linearized KF there is a limitation: the nominal state $\bar{x}$ must be set prior to execution. The EKF therefore assumes that $\bar{x}$ equals $\hat{x}$, bootstrapping the estimate into the state equation. Thus, Equations (27) and (28) are subsequently changed to

$A_k = \left. \frac{\partial f}{\partial x} \right|_{\hat{x}_k}$

$C_k = \left. \frac{\partial h}{\partial x} \right|_{\hat{x}_k}$

In Equation (31), where $\bar{x}$ equals $\hat{x}$ so that $\Delta\hat{x}_k = 0$, Equation (31) can be rearranged to produce

$\Delta\hat{x}_{k+1} = K_k \Delta y_k = K_k [y_k - h(\hat{x}_k)]$

Substituting Equations (34) and (35) into Equation (36), we obtain Equation (37). As a result, the EKF can be represented by applying Equations (31), (32) and (37), which gives

$\hat{x}_{k+1} = f(\hat{x}_k, u_k) + K_k [y_k - h(\hat{x}_k)]$
In the initial state, where k = 0, $\hat{x}$ is predetermined using its mean, and Qk and Rk are set accordingly. In execution mode, the measurement update (output state) adjusts the projected estimate based on the actual measurement at that time. Note that the EKF applies Equations (30), (31) and (37) to update the prediction mode as the state changes periodically, and the Kalman gain Kk determines how the observer responds to the difference between its estimated output and the noisy measurement. The EKF is summarised in the system block diagram shown in Figure 3.
Figure 3. System block diagram of Extended Kalman Filter (EKF).
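A minimal scalar EKF illustrates the recursion. Note this is the standard textbook predict/correct form rather than the paper's exact derivation, and the model f, h and noise levels below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# scalar nonlinear model (assumed for illustration)
f = lambda x: 0.9 * x + 0.2 * np.sin(x)    # state equation f(x_k)
h = lambda x: x + 0.05 * x ** 2            # output equation h(x_k)
df = lambda x: 0.9 + 0.2 * np.cos(x)       # A_k: derivative of f
dh = lambda x: 1.0 + 0.1 * x               # C_k: derivative of h
Q, R = 0.01, 0.25

# simulate the true state and noisy measurements
N = 200
x_true = np.zeros(N)
y = np.zeros(N)
x_true[0] = 1.0
y[0] = h(x_true[0]) + rng.normal(0, np.sqrt(R))
for k in range(N - 1):
    x_true[k + 1] = f(x_true[k]) + rng.normal(0, np.sqrt(Q))
    y[k + 1] = h(x_true[k + 1]) + rng.normal(0, np.sqrt(R))

# scalar EKF: linearize around the current estimate at every step
x_hat = np.zeros(N)
P = 1.0
for k in range(N - 1):
    # prediction through the nonlinear state equation
    x_pred = f(x_hat[k])
    A = df(x_hat[k])
    P = A * P * A + Q
    # correction with the Kalman gain K_k
    C = dh(x_pred)
    K = P * C / (C * P * C + R)
    x_hat[k + 1] = x_pred + K * (y[k + 1] - h(x_pred))
    P = (1.0 - K * C) * P

mse_filter = np.mean((x_hat - x_true) ** 2)
mse_raw = np.mean((y - x_true) ** 2)       # raw measurement error, for reference
```

The filtered estimate tracks the true state much more closely than the raw measurements, which is the behaviour the comparison in Section 3 quantifies.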

2.5. Theoretical Considerations of the PF Algorithm

The theoretical considerations of the PF model start from a hidden Markov model (HMM) in which the states xk are observed indirectly through the outputs yk. A simple model can be specified as

$x_1 \sim \mu(x_1)$

$x_k \mid x_{k-1} \sim f(x_k \mid x_{k-1})$ in the state equation (transition) for k > 1, and

$y_k \mid x_k \sim h(y_k \mid x_k)$ in the measured state (observation)

where all states are homogeneous and the transition and observation probabilities are independent of time; see Figure 4.
Figure 4. HMM showing the transition and observation states.
The goal is to estimate xk, which corresponds to the original datasets xi, given all observations up to that point (y1:n). Equivalently, we need to find the posterior distribution p(x1:n|y1:n). Using Bayes' rule, we end up with the following two steps:
(i) Update step:

$p(x_k \mid y_{1:k}) = \frac{h(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})}{p(y_k \mid y_{1:k-1})}$

(ii) Prediction step:

$p(x_k \mid y_{1:k-1}) = \int f(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_{1:k-1})\, dx_{k-1}$
These distributions are often intractable in closed form, especially for nonlinear and non-Gaussian models. The solution is to approximate them using a large number of samples (particles). To address Equations (41) and (42), one could attempt to approximate the intractable integrals directly; the alternative is importance sampling, or sequential importance sampling (SIS). The drawback of SIS, however, is that it is guaranteed to fail as t increases, because the importance-weight distribution becomes more and more skewed, especially when sampling high-dimensional spaces [26].
Thus, we propose a bootstrap PF, an iterative method of Bayesian inference for a dynamic state space. The algorithm of the bootstrap PF model is as follows:
(1) Assume $p(x_{k-1} \mid y_{k-1})$ is the posterior probability distribution at k − 1, where the transition state (state equation) is $p(x_k \mid x_{k-1})$;
(2) Resample $p(x_k \mid x_{k-1})$, the prior probability distribution at k − 1, using the bootstrap algorithm;
(3) Find a weight by Monte Carlo integration using $p(x_{k-1} \mid y_{k-1})$ and $p(x_k \mid x_{k-1})$ to obtain $\int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_{k-1})\, dx_{k-1} \approx \sum_i w_{k-1}^i\, p(x_k \mid x_{k-1}^i)$;
(4) From (3), use the weighted particle set to estimate the particles at k − 1 and obtain $\hat{S}_{k-1} = \{(\hat{x}_{k-1}^i, \hat{w}_{k-1}^i);\ i = 1, 2, \ldots, N\}$;
(5) Update the likelihood function $p(y_k \mid x_k)$ with $\hat{x}_{k-1}^i$ and $\hat{w}_{k-1}^i$;
(6) From (5), use the newly updated likelihood to update $p(x_k \mid y_k)$, the posterior probability distribution at k, using a weight normalised from time to time; the weight can be either the average of all the weights or the last weight only;
(7) Finally, we obtain $p(x_k \mid y_k)$, the posterior probability distribution at k.
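Steps (1)-(7) can be sketched as a bootstrap PF on a toy scalar model. The model, noise levels and particle count are assumptions made for illustration, and resampling is performed at every step, which is the simplest bootstrap variant.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy scalar state-space model (assumed for illustration)
f = lambda x: 0.8 * x + np.sin(x)          # transition mean
h = lambda x: x                            # observation mean
Q, R = 0.1, 0.5

# simulate ground truth and noisy observations
N = 100
x_true = np.zeros(N)
y = np.zeros(N)
for k in range(1, N):
    x_true[k] = f(x_true[k - 1]) + rng.normal(0, np.sqrt(Q))
    y[k] = h(x_true[k]) + rng.normal(0, np.sqrt(R))

# bootstrap particle filter, steps (1)-(7)
M = 1000
particles = rng.normal(0.0, 1.0, M)        # draw from the prior mu(x_1)
x_est = np.zeros(N)
for k in range(1, N):
    # propagate each particle through the state equation p(x_k | x_{k-1})
    particles = f(particles) + rng.normal(0, np.sqrt(Q), M)
    # weight each particle by the likelihood p(y_k | x_k), then normalise
    w = np.exp(-0.5 * (y[k] - h(particles)) ** 2 / R)
    w /= w.sum()
    # bootstrap resampling with the normalised weights
    particles = rng.choice(particles, size=M, p=w)
    # posterior estimate at k: mean of the resampled particles
    x_est[k] = particles.mean()

mse_pf = np.mean((x_est[1:] - x_true[1:]) ** 2)
mse_meas = np.mean((y[1:] - x_true[1:]) ** 2)
```

Resampling at every step discards low-weight particles before the weights degenerate, which is precisely the fix for the SIS skewness problem noted above.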

3. Simulation and Results

The aim of this section is to compare the performance of all the digital filters considered and to recommend the best-fitting filter. We used R to simulate the original datasets with a suite of noise added. After completing the simulations, we applied a variety of loss estimators, i.e., Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), R2, AIC, BIC and the Accuracy count, which is the sum of the upward and downward movements of all the underlying local signals after they have transited the reversion points. The flowchart of the simulation and results is shown in Figure 5.
Figure 5. Flowchart shows simulation and results of aIMF, WT, EKF and PF.
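The loss estimators can be sketched in Python. The "Accuracy count" below is implemented as directional agreement between the actual and filtered series, which is one reading of the paper's terse definition (an assumption on our part); AIC and BIC are omitted, since they depend on the fitted model's likelihood. The sample series are invented.

```python
import numpy as np

def loss_estimators(actual, predicted):
    """MSE, MAE, MAPE (%) and a directional Accuracy count (%):
    the share of steps where both series move in the same direction."""
    e = actual - predicted
    mse = np.mean(e ** 2)
    mae = np.mean(np.abs(e))
    mape = 100.0 * np.mean(np.abs(e / actual))
    same_dir = np.sign(np.diff(actual)) == np.sign(np.diff(predicted))
    acc = 100.0 * np.mean(same_dir)
    return mse, mae, mape, acc

a = np.array([1.30, 1.31, 1.29, 1.32, 1.33])   # hypothetical actual rates
p = np.array([1.29, 1.32, 1.33, 1.31, 1.34])   # hypothetical filtered output
mse, mae, mape, acc = loss_estimators(a, p)
```

Level errors (MSE, MAE, MAPE) and directional accuracy measure different things: a filter can track levels closely while missing turning points, which is why both appear in the tables below.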

3.1. Adding White-Gaussian Noise

We created six sets of sine waves, with i.i.d. N(0, σ2) randomness, at frequencies of 1, 5 and 10 Hz and amplitudes of 10% and 20% of the original datasets, distributed randomly over 100% of the data points. These parameters were applied to the exchange-rate data points xk(t) in the time series. The distribution of this added noise is similar to white Gaussian noise, as shown in Figure 6.
Figure 6. Six sets of sine wave as frequencies in Hz of 1, 5 and 10 with the amplitude of 10% and 20% of exchange rates.
For ease of presentation, the x-axis in Figure 6 shows just 100 of the total 2322 data points in the time series, while the y-axis represents the amplitude. We introduced a variety of testing methods, namely (a) Anderson-Darling, Lilliefors (Kolmogorov-Smirnov) and Pearson chi-square tests for nonlinearity; and (b) Augmented Dickey-Fuller and Elliott-Rothenberg-Stock tests for nonstationarity. Referring to Equation (23), xk represented the EUR-USD exchange-rate datasets, whose nonlinear and nonstationary characteristics were verified by all the testing methods, and wk represented the six sets of sine waves created above. The equation used to add white Gaussian noise to the original datasets can be rearranged as follows
$x_{kg}(t) = x_k(t) + \varepsilon_{kg}(t)$

where xkg(t) is the original signal with noise, xk(t) is the original dataset and εkg(t) is the added noise, i.i.d. N(0,1), at the various frequencies of 1, 5 and 10 Hz with 100% random distribution over xk(t). To verify that Equation (43) is nonlinear, we tested xkg(t) with the different noise parameters. All the p-values generated by the testing methods mentioned earlier were less than 0.05, implying that xkg(t) was nonlinear and nonstationary. xkg(t) was then used as the input signal to estimate the performance of the following algorithms: (i) aIMF, (ii) WT, (iii) EKF and (iv) PF.
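Equation (43) and the six noise sets of Section 3.1 can be sketched as follows. The sampling rate and the synthetic stand-in series are assumptions, since the paper does not state them; only the frequencies, amplitude fractions and series length come from the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_sine_noise(x, freq_hz, amp_frac, fs=100.0, coverage=1.0):
    """Equation (43): add a sine-wave disturbance of amplitude
    amp_frac * |x(t)| to a randomly chosen fraction of the samples."""
    t = np.arange(len(x)) / fs             # fs is an assumed sampling rate
    eps = amp_frac * np.abs(x) * np.sin(2 * np.pi * freq_hz * t)
    mask = rng.random(len(x)) < coverage   # coverage=1.0 -> 100% distribution
    return x + eps * mask

# synthetic EUR-USD-like series standing in for the 2322 data points
x = 1.3 + 0.01 * np.cumsum(rng.normal(size=2322))

# six noise sets: 1, 5 and 10 Hz at 10% and 20% amplitude
noisy_sets = [add_sine_noise(x, f, a) for f in (1, 5, 10) for a in (0.10, 0.20)]
```

Lowering `coverage` to 0.05 reproduces the 5% random-distribution variant used in the robustness check of Section 3.2.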

3.2. Simulation and Performance Measurements of the aIMF Algorithm

The objective of this section is to measure the performance of the aIMF algorithm against the original datasets, i.e., the EUR-USD exchange rates. To test the proposed aIMF algorithm when filtering the original signals with added white Gaussian noise, we applied a variety of loss estimators, i.e., MSE, MAE, MAPE, R2 and Accuracy count.
First, we tested the five exchange rates, i.e., EUR-USD, with normality and unit-root tests and found that they are nonlinear and nonstationary time series. Next, we simulated the aIMF algorithm in R with embedded C++ scripts, following the logic in Section 2.2. Figure 7 shows the new signal filtered by the aIMF algorithm.
Figure 7. Plots of the original datasets, original datasets + noise, and the aIMF algorithm.
Having simulated the aIMF algorithm with all the noise parameters mentioned at the beginning of Section 3.1, the results are of the same magnitude and direction. To simplify the presentation, the x-axis shows the data points of the whole time series, and two y-axes represent the original datasets, i.e., the EUR-USD exchange rates (Y2), and the original datasets + noise (Y1), of which only the plot for 1 Hz noise at 10% amplitude with 100% random distribution is shown. Analysing the plots, we found that the aIMF curve resided within the curve of "the original datasets + noise", and the three plots trend in the same direction. In terms of estimation, Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 indicate that the loss estimators MSE, MAE and MAPE are relatively low, ranging over 0.00296-0.00596, 0.04838-0.06618 and 4.08815-5.59678, respectively.
R2 is high, ranging from 0.8533 to 0.9228, whereas AIC and BIC reach as low as −6305.47 and −6974.09, respectively. The Accuracy count showed 57.69%-79.66%. As a result, the performance of the aIMF is acceptable.
We continued the simulations using 5% random distribution of the added noise instead of 100%. The results showed that the sparser the distributed random noise, the better the performance of the aIMF algorithm. For example, at 5% random distribution of the added noise at 1 Hz with 10% amplitude, the loss estimators MSE, MAE, MAPE, R2, AIC, BIC and Accuracy count were 0.00014, 0.01019, 0.88981, 0.99820, −15723.93, −15706.68 and 95.86%, respectively, outperforming the simulations with 100% random distribution.
Table 1. Performance measurements of original datasets using noise distribution at 100% with 1 Hz and 10% amplitude of the original signals.

| EUR-USD | MSE | MAE | MAPE | R2 | AIC | BIC | Accuracy count (%) |
| aIMF | 0.00399 | 0.05636 | 4.77093 | 0.8962 | −6305.47 | −6288.22 | 77.34 |
| WT | 0.00506 | 0.05402 | 5.54677 | 0.7231 | −6877.81 | −7857.63 | 53.82 |
| EKF | 0.00548 | 0.06432 | 5.44475 | 0.8701 | −5783.21 | −5765.96 | 47.13 |
| PF | 4.52658 | 1.65756 | 140.015 | 0.0062 | −1060.50 | −1043.25 | 50.32 |
Table 2. Performance measurements of original datasets using noise distribution at 100% with 1 Hz and 20% amplitude of the original signals.

| EUR-USD | MSE | MAE | MAPE | R2 | AIC | BIC | Accuracy count (%) |
| aIMF | 0.00296 | 0.04838 | 4.08815 | 0.9228 | −6991.34 | −6974.09 | 79.66 |
| WT | 0.05439 | 0.05311 | 5.79178 | 0.6759 | −6397.83 | −6804.50 | 54.23 |
| EKF | 0.02043 | 0.12765 | 10.7989 | 0.6385 | −3407.66 | −3390.41 | 48.51 |
| PF | 4.43079 | 1.64779 | 139.747 | 0.0067 | −1061.73 | −1044.48 | 49.76 |
Table 3. Performance measurements of original datasets using noise distribution at 100% with 5 Hz and 10% amplitude of the original signals.

| EUR-USD | MSE | MAE | MAPE | R2 | AIC | BIC | Accuracy count (%) |
| aIMF | 0.00522 | 0.06303 | 5.32770 | 0.8760 | −5891.33 | −5874.08 | 74.92 |
| WT | 0.00599 | 0.05636 | 4.77151 | 0.8962 | −6303.39 | −6286.15 | 55.51 |
| EKF | 0.00503 | 0.06159 | 5.22073 | 0.8807 | −5980.49 | −5963.24 | 47.65 |
| PF | 4.47322 | 1.64897 | 139.488 | 0.0096 | −1067.39 | −1050.14 | 50.24 |
Table 4. Performance measurements of original datasets using noise distribution at 100% with 5 Hz and 20% amplitude of the original signals.

| EUR-USD | MSE | MAE | MAPE | R2 | AIC | BIC | Accuracy count (%) |
| aIMF | 0.00596 | 0.06415 | 5.42792 | 0.8534 | −5501.99 | −5484.74 | 64.41 |
| WT | 0.00571 | 0.07660 | 4.78754 | 0.8761 | −6890.19 | −6897.21 | 56.13 |
| EKF | 0.01870 | 0.12247 | 10.3683 | 0.6637 | −3575.62 | −3558.38 | 48.00 |
| PF | 4.50770 | 1.64377 | 138.983 | 0.0074 | −1063.27 | −1046.02 | 50.58 |
Table 5. Performance measurements of original datasets using noise distribution at 100% with 10 Hz and 10% amplitude of the original signals.

| EUR-USD | MSE | MAE | MAPE | R2 | AIC | BIC | Accuracy count (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| aIMF | 0.00527 | 0.06444 | 5.44396 | 0.8710 | −5799.61 | −5782.36 | 67.26 |
| WT | 0.00508 | 0.06890 | 5.79883 | 0.8948 | −6273.51 | −6745.29 | 56.82 |
| EKF | 0.00406 | 0.05546 | 4.69540 | 0.9017 | −6429.89 | −6412.64 | 51.66 |
| PF | 4.47322 | 1.64897 | 139.487 | 0.0096 | −1067.39 | −1050.14 | 50.24 |
Table 6. Performance measurements of original datasets using noise distribution at 100% with 10 Hz and 20% amplitude of the original signals.

| EUR-USD | MSE | MAE | MAPE | R2 | AIC | BIC | Accuracy count (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| aIMF | 0.00580 | 0.06618 | 5.59678 | 0.8533 | −5501.09 | −5483.84 | 57.69 |
| WT | 0.00521 | 0.06003 | 4.57416 | 0.6941 | −6843.56 | −6899.78 | 53.34 |
| EKF | 0.01490 | 0.11046 | 9.34639 | 0.7140 | −3951.51 | −3934.26 | 51.62 |
| PF | 4.61609 | 1.67868 | 141.868 | 0.0049 | −1057.44 | −1040.19 | 50.45 |

3.3. Simulation and Performance Measurements of the WT Algorithm

The objective of this section is to measure the performance of the WT algorithm against the original datasets, i.e., the EUR-USD exchange rates, using the same simulation model and loss estimators employed in Section 3.2.
Similar to Figure 7, Figure 8 shows three plots: the original datasets (Y2), the “original datasets + noise” (Y1) and WT (Y1). Analysing the plots, we found that the WT curve resided within the curve of the “original datasets + noise”, except for a spike between data points 1112 and 1213 on the x-axis; the trends of the three plots were in the same direction. In terms of estimation, Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 indicate that the loss estimators MSE, MAE and MAPE are relatively low, ranging over 0.00506–0.00599, 0.05402–0.07660 and 4.77151–5.79883, respectively. The R2 is below standard, ranging over 0.85340–0.8948, whereas AIC and BIC are relatively high, up to −6890.19 and −6286.15, respectively, and the Accuracy count ranged over 53.82%–56.13%. As a result, we rated the performance of the WT algorithm as relatively low and hardly acceptable.
Figure 8. Plots of the original datasets, original datasets + noise, and the Wavelet Transform (WT) algorithm.
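As a point of comparison for the WT results above, a wavelet shrinkage filter can be sketched in a few lines. This toy version uses a Haar mother wavelet, VisuShrink-style universal soft thresholding and a power-of-two input length, all of which are our assumptions — the paper does not state which mother wavelet or threshold rule it employed.

```python
import math

def haar_dwt(x):
    # One level of the Haar DWT: approximation and detail coefficients.
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    # Inverse of one Haar level: interleave the reconstructed pairs.
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def soft(v, t):
    # Soft thresholding: shrink towards zero by t.
    return math.copysign(max(abs(v) - t, 0.0), v)

def wt_denoise(x, levels=3):
    """Multi-level Haar transform with universal soft thresholding
    t = sigma * sqrt(2 ln n); sigma is estimated robustly from the
    finest detail band (MAD / 0.6745).  Assumes len(x) is a power of 2."""
    details, a = [], list(x)
    for _ in range(levels):
        if len(a) < 2:
            break
        a, d = haar_dwt(a)
        details.append(d)
    finest = sorted(abs(v) for v in details[0])
    sigma = finest[len(finest) // 2] / 0.6745
    t = sigma * math.sqrt(2 * math.log(len(x)))
    details = [[soft(v, t) for v in d] for d in details]
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a
```

The threshold grows with the estimated noise level, so noisier detail bands are shrunk more aggressively; the fixed mother wavelet is exactly the inflexibility discussed in Section 3.6.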

3.4. Simulation and Performance Measurements of the EKF Algorithm

The objective of this section is to measure the performance of the EKF algorithm against the original datasets, i.e., the EUR-USD exchange rates, using the same methods employed in Section 3.2.
Similar to Figure 7, Figure 9 shows three plots: the original datasets (Y2), the “original datasets + noise” (Y1) and EKF (Y1). Analysing the plots, we found that the EKF curve resided within the curve of the “original datasets + noise”, except for a spike at the beginning of the x-axis; the trends of the three plots were in the same direction. In terms of estimation, Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 indicate that the loss estimators MSE, MAE and MAPE are relatively low, ranging over 0.00406–0.01870, 0.05546–0.12765 and 4.69540–10.7989, respectively. The R2 ranges over 0.6637–0.9017, whereas AIC and BIC are high, up to −6429.89 and −6412.64, respectively, and the Accuracy count ranged over 47.13%–51.66%. As a result, the performance of the EKF algorithm is unacceptable.
Figure 9. Plots of the original datasets, original datasets + noise, and the EKF algorithm.
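To make the EKF's linearisation step concrete, the following is a minimal scalar sketch: a random-walk state observed through a mildly nonlinear measurement h(x) = x + 0.1 sin x, with the Jacobian of h evaluated at each prediction. The measurement model and the noise variances q and r are illustrative assumptions only; the paper's EKF relies on its own state-space model for the exchange-rate series.

```python
import math

def ekf_scalar(zs, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    """Scalar EKF for x_k = x_{k-1} + w_k (variance q) observed as
    z_k = h(x_k) + v_k (variance r), with h(x) = x + 0.1*sin(x)."""
    x, p, est = x0, p0, []
    for z in zs:
        # Predict: a random walk leaves the state estimate unchanged.
        p = p + q
        # Update: linearise h around the prediction (Jacobian H).
        h = x + 0.1 * math.sin(x)
        H = 1.0 + 0.1 * math.cos(x)
        s = H * p * H + r          # innovation variance
        k = p * H / s              # Kalman gain
        x = x + k * (z - h)
        p = (1.0 - k * H) * p
        est.append(x)
    return est
```

Fed a constant measurement of 1.0, the filter converges to the root of h(x) = 1 (about 0.92), which illustrates both the linearisation and the gain shrinking as the covariance settles.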

3.5. Simulation and Performance Measurements of the PF Algorithm

The objective of this section is to measure the performance of the PF algorithm against the original datasets, i.e., the EUR-USD exchange rates, using the same methods employed in Section 3.2.
Similar to Figure 7, Figure 10 shows three plots: the original datasets (Y2), the “original datasets + noise” (Y1) and PF (Y1). Analysing the plots, we found that the PF curve agreed with neither the curve of the “original datasets + noise” nor the original datasets, inasmuch as the fluctuations of the three plots were not the same. In terms of the loss estimations produced across the varieties of noise parameters, Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 indicate that the loss estimators MSE, MAE and MAPE are relatively high, ranging over 4.43079–4.52658, 1.64377–1.67868 and 138.983–141.868, respectively. The R2 is unacceptable, ranging over 0.0049–0.0096, whereas AIC and BIC are relatively poor, at about −1067.39 and −1046.02, respectively, and the Accuracy count ranged over 49.76%–50.58%. As a result, the performance of the PF algorithm is also unacceptable.
Figure 10. Plots of the original datasets, original datasets + noise, and the Particle Filter (PF) algorithm.
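For contrast, a bootstrap (SIR) particle filter for a scalar random walk observed in Gaussian noise can be sketched as below. The particle count, noise levels and model are illustrative assumptions rather than the paper's PF configuration; the resampling step is what counters the weight degeneracy that plagues plain sequential importance sampling.

```python
import math
import random

def particle_filter(zs, n=500, q=0.05, r=0.1, seed=1):
    """Bootstrap particle filter: propagate, weight by the Gaussian
    measurement likelihood, estimate, then resample."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]   # prior particles
    est = []
    for z in zs:
        # Propagate particles through the random-walk transition.
        xs = [x + rng.gauss(0.0, q) for x in xs]
        # Weight by the Gaussian measurement likelihood N(z; x, r^2).
        ws = [math.exp(-0.5 * ((z - x) / r) ** 2) for x in xs]
        total = sum(ws) or 1.0
        ws = [w / total for w in ws]
        est.append(sum(w * x for w, x in zip(ws, xs)))
        # Multinomial resampling to fight weight degeneracy.
        xs = rng.choices(xs, weights=ws, k=n)
    return est
```

With too few particles relative to the spread of the likelihood, most weights collapse to zero, which is one plausible mechanism behind the poor PF scores in the tables above.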

3.6. Discussion

Based on the simulations of the aIMF, WT, EKF and PF algorithms, we found that aIMF performed best, followed in order by WT, EKF and PF. Theoretically, the successful application of EMD rests on the noise being unbiased, which is a far less restrictive constraint than the scenario of non-zero-mean noise. As mentioned, the characteristics of the IMFs move towards the normal distribution after several iterations; see Figure 11. Thus, subtracting the averaged IMF from the original signal, giving aIMF, inherently reduces the noise. However, the proposed aIMF algorithm, which uses cubic spline interpolation, does not attempt to preserve the edges of the datasets/signals. This is because our target is to reduce noise that may be associated with the upper and lower boundaries of the curves in the time-series domain; in this particular case, preserving the edges/curves would unavoidably keep the noise mixed into the signals. This differs from images, where the noise distribution is a random walk and noise reduction can be achieved while edges are preserved [27]. Referring to Section 3.1, we manually added a variety of noises to the datasets in separate simulations and then showed that the noise was removed significantly, as displayed in Figure 7 and Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6. To further verify the aIMF algorithm's performance, we ran it on the original datasets without adding extra noise. The results measured by MSE, MAE, MAPE, R2 and Accuracy count were 8.20211E−05, 0.00719, 0.57085, 0.9980 and 99.95%, respectively. Note that the original datasets, the EUR-USD exchange rates, already contained a certain level of noise rather than pure signal. In real applications, exchange rates normally fluctuate before the close of trading hours because of retailers and speculators who want to manipulate the price.
These manipulations are always executed with low trading volumes but still move the price at the end of trading hours; in the financial community, such trades are deemed noise. Finally, the figures from these last loss estimators were similar to the results in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6. Hence, it is safe to conclude that the proposed aIMF algorithm can remove the unwanted signals.
Figure 11. Graphs (a)–(c) show plots of IMF1, IMF5 and IMF7, of which their local extrema of higher order IMF moved toward the normal distribution.
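The averaging-and-subtraction described above can be mirrored in a compact sketch. Note the simplifications: the envelopes here are piecewise-linear rather than the cubic splines the paper uses, and the sifting count and number of IMFs are fixed assumptions, so this reproduces the structure of the aIMF idea rather than its exact output.

```python
def local_extrema(x):
    # Indices of interior local maxima and minima.
    mx = [i for i in range(1, len(x) - 1) if x[i] > x[i-1] and x[i] > x[i+1]]
    mn = [i for i in range(1, len(x) - 1) if x[i] < x[i-1] and x[i] < x[i+1]]
    return mx, mn

def envelope(idx, x):
    # Piecewise-linear envelope through the extrema (the paper splines
    # them cubically; linear interpolation keeps this sketch short).
    n = len(x)
    pts = [(0, x[idx[0]])] + [(i, x[i]) for i in idx] + [(n - 1, x[idx[-1]])]
    out, j = [], 0
    for t in range(n):
        while t > pts[j + 1][0]:
            j += 1
        (t0, v0), (t1, v1) = pts[j], pts[j + 1]
        out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out

def sift(x, n_sift=8):
    # Repeatedly subtract the mean envelope to extract one IMF.
    h = list(x)
    for _ in range(n_sift):
        mx, mn = local_extrema(h)
        if len(mx) < 2 or len(mn) < 2:
            break
        upper, lower = envelope(mx, h), envelope(mn, h)
        h = [v - (u + l) / 2 for v, u, l in zip(h, upper, lower)]
    return h

def aimf_denoise(x, n_imfs=4):
    # Extract IMFs, average them, and subtract the average from the signal.
    resid, imfs = list(x), []
    for _ in range(n_imfs):
        imf = sift(resid)
        imfs.append(imf)
        resid = [r - v for r, v in zip(resid, imf)]
        mx, mn = local_extrema(resid)
        if len(mx) < 2 or len(mn) < 2:
            break
    avg = [sum(v) / len(imfs) for v in zip(*imfs)]
    return [xi - ai for xi, ai in zip(x, avg)]
```

Because the average of the early IMFs concentrates the oscillatory (noise-dominated) content, subtracting it leaves a smoother estimate of the underlying trend, at the cost of the envelope-splining time noted below.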
On another front, the simulation results from the WT algorithm seemed unacceptable for the reason that Fourier spectral analysis and its derivatives, such as WT, are limited by a finite time-window width as a predetermined window slides along the time axis [28]. Moreover, there is a trade-off between the time-window width and the frequency resolution, a phenomenon governed by the Heisenberg uncertainty principle [21]. In this particular case, the WT's window must be preselected; this primary or mother wavelet is a prototype that provides little flexibility when handling datasets whose mean and variance are highly volatile. In the case of the PF simulation, the drawback of sequential importance sampling (SIS) is that it is prone to fail as time increases because the importance weights become more and more skewed, especially when sampling high-dimensional spaces [26]; moreover, we found that the number of particles (data points) was not adequate for the Monte Carlo simulation. Finally, the drawback of the aIMF algorithm is that it requires a long time to spline the local extrema.

3.7. Robustness Test

Robustness testing is a quality assurance methodology focused on verifying the consistent accuracy of software. In this section, we test the aIMF, WT, EKF and PF algorithms as noise reduction models using inputs other than the EUR-USD exchange rates: the EUR-JPY, EUR-CHF and EUR-GBP exchange rates, each with added noise created from a 1 Hz sine wave at 10% amplitude of the original datasets with a 10% random distribution. We then used the loss estimators MSE, MAE, MAPE, R2, AIC, BIC and Accuracy count to measure the prediction performance of the proposed aIMF, WT, EKF and PF algorithms. Having simulated all the loss estimators indicated in Table 7, Table 8 and Table 9 under the same conditions used for the EUR-USD input, the results shared the same trend with few deviations from each other. This confirms that the aIMF algorithm performed significantly better when filtering nonlinear nonstationary time series, i.e., the EUR-JPY, EUR-CHF and EUR-GBP exchange rates, followed by the WT and EKF algorithms. Moreover, we rejected using the PF algorithm to reduce noise in nonlinear nonstationary time series data. Additionally, we found that the EKF and WT algorithms must be optimised in the areas of resampling and of building the mother wavelet, respectively.
Table 7. Performance measurements of original dataset, EUR-JPY, using noise distribution at 10% with 10 Hz and 20% amplitude of the original signals.

| EUR-JPY | MSE | MAE | MAPE | R2 | AIC | BIC | Accuracy count (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| aIMF | 0.00424 | 0.057854 | 8.29746 | 0.5219 | −6782.23 | −6764.98 | 63.85 |
| WT | 0.00626 | 0.05676 | 8.67169 | 0.5514 | −6839.68 | −682.42 | 53.46 |
| EKF | 0.00516 | 0.063797 | 9.15559 | 0.4046 | −6272.94 | −6255.69 | 49.50 |
| PF | 3.37133 | 1.42611 | 204.696 | 0.0020 | −5074.33 | −5057.08 | 50.19 |
Table 8. Performance measurements of original dataset, EUR-CHF, using noise distribution at 10% with 10 Hz and 20% amplitude of the original signals.

| EUR-CHF | MSE | MAE | MAPE | R2 | AIC | BIC | Accuracy count (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| aIMF | 0.03772 | 0.145457 | 0.420782 | 0.9983 | −1086.41 | −1069.16 | 88.96 |
| WT | 0.08124 | 0.174548 | 10.53478 | 0.5905 | −6852.24 | −6699.54 | 53.20 |
| EKF | 0.37038 | 0.176222 | 0.519026 | 0.5588 | −11843.6 | −11860.9 | 45.71 |
| PF | 1218.503 | 4.52886 | 101.8893 | 0.4421 | −12388.2 | −12405.5 | 50.84 |
Table 9. Performance measurements of original dataset, EUR-GBP, using noise distribution at 10% with 10 Hz and 20% amplitude of the original signals.

| EUR-GBP | MSE | MAE | MAPE | R2 | AIC | BIC | Accuracy count (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| aIMF | 0.00424 | 0.057854 | 8.306478 | 0.5191 | −6802.20 | −6784.98 | 64.92 |
| WT | 0.00826 | 0.065946 | 9.319482 | 0.6435 | −6699.65 | −6382.98 | 51.08 |
| EKF | 0.00517 | 0.063851 | 9.171988 | 0.4055 | −6309.97 | −6292.68 | 49.20 |
| PF | 3.33789 | 1.435658 | 206.7395 | 0.0006 | −5104.53 | −5087.27 | 48.68 |
The following configurations were used to perform all the simulations:
(i)
Intel(R) Xeon(R) server with 2 × 2.4 GHz E5620 CPUs, 3.99 GB RAM and a 64-bit Microsoft Windows Operating System is configured as the main processor.
(ii)
Sony VAIO L2412M1EB desktop with an Intel Core i5, 2.5 GHz, 8 GB RAM, and a 64-bit Microsoft Windows Operating System is used as the front-end connection to the data terminal from Bloomberg via web access using a Citrix client.
(iii)
Application programs were written as R scripts, with some amendments to suit the requirements.
The simulation results showed no bugs in the software scripts, with an average execution time of 3 s for all the Ordinary Least Squares (OLS) models.

4. Conclusion

Noise reduction for nonlinear nonstationary time series is challenging, since the models require a large amount of computational power and more complicated logic than conventional filters. This paper proposed a new filter, the aIMF algorithm, and demonstrated its accuracy and robustness compared with the WT, EKF and PF algorithms. In this study, we found that PF is not suitable for this kind of work, possibly because the number of particles and weights used for sampling was insufficient, even though more than 2000 data points were used; additionally, the EKF and WT algorithms should be optimised. Future work to enhance the efficiency of the aIMF algorithm lies in (i) optimising the cubic spline algorithm to better suit inputs that sometimes persist near their mean, and (ii) investigating the possibility of designing a DSP chip for the aIMF algorithm.

Acknowledgments

This work was fully inspired by collaborations with the Department of Electrical and Electronic Engineering and the Centre for Bio-Inspired Technology, Imperial College London. Additionally, we thank Wong Fan Voon and Janpen Jantratornkul for their editing.

References

  1. Huttunen, J.M.J.; Lehikoinen, A.; Hämäläinen, J.; Kaipio, J. Importance sampling approach for the nonstationary approximation error method. Inverse Probl. 2002, 26, 1–16. [Google Scholar]
  2. Brown, R.G.; Hwang, P.Y.C. Introduction to Random Signals and Applied Kalman Filtering, 3rd ed.; John Wiley & Sons, Inc.: New Jersey, NJ, USA, 1992; pp. 128–140. [Google Scholar]
  3. Calabrese, A.; Paninski, L. Kalman filter mixture model for spike sorting of nonstationary data. J. Neurosci. Methods 2011, 196, 159–169. [Google Scholar] [CrossRef] [PubMed]
  4. Sanjeev Arulampalam, M.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–187. [Google Scholar] [CrossRef]
  5. Caron, F.; Davy, M.; Duflos, E.; Vanheeghe, P. Particle filtering for multi-sensor data fusion with switching observation models: Applications to land vehicle positioning. IEEE Trans. Signal Process. 2007, 55, 2703–2719. [Google Scholar] [CrossRef]
  6. Gustafsson, F.; Gunnarsson, F.; Bergman, N.; Forssel, U. Particle filters for positioning, navigation and tracking. IEEE Trans. Signal Process. 2002, 50, 425–437. [Google Scholar] [CrossRef]
  7. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005; pp. 39–117. [Google Scholar]
  8. Huang, N.E.; Attoh-Okine, N.O. The Hilbert Transform in Engineering; CRC Press, Taylor & Francis Group: Florida, FL, USA, 2005. [Google Scholar]
  9. Wu, Z.; Huang, N.E. Ensemble empirical mode decomposition: A noise-assisted data analysis method. Adv. Adapt. Data Anal. 2009, 1, 1–41. [Google Scholar] [CrossRef]
  10. Huang, W.; Shen, Z.; Huang, N.E.; Fung, Y.C. Nonlinear indicial response of complex nonstationary oscillations as pulmonary hypertension responding to step hypoxia. Proc. Natl. Acad. Sci. USA 1996, 96, 1834–1839. [Google Scholar] [CrossRef]
  11. Huang, N.E.; Shen, Z.; Long, S.R. The empirical mode decomposition and the Hilbert spectrum for nonlinear and nonstationary time series analysis. Proc. R. Soc. Lond. A 1998, 454, 903–995. [Google Scholar] [CrossRef]
  12. Huang, N.E.; Shen, S.S.P. Hilbert-Huang Transform and Its Applications; World Scientific Publishing Company: Singapore, 2005; pp. 2–26. [Google Scholar]
  13. Wang, L.; Koblinsky, C.; Howden, S.; Huang, N.E. Interannual variability in the South China Sea from expendable bathythermograph data. J. Geophys. Res. 1990, 104, 509–523. [Google Scholar] [CrossRef]
  14. Datig, M.; Schlurmann, T. Performance and limitations of the Hilbert-Huang Transformation (HHT) with an application to irregular water waves. Ocean Eng. 2004, 31, 1783–1834. [Google Scholar] [CrossRef]
  15. Guhathakurta, K.; Mukherjee, I.; Roy, A. Empirical mode decomposition analysis of two different financial time series and their comparison. Chaos Solitons Fractals 2008, 37, 1214–1227. [Google Scholar] [CrossRef]
  16. Cohen, L. Frequency analysis. IEEE Trans. Signal Process. 1995, 55, 2703–2719. [Google Scholar]
  17. Kaslovsky, D.N.; Meyer, F.G. Noise corruption of empirical mode decomposition and its effect on instantaneous frequency. Adv. Adapt. Data Anal. 2010, 2, 373–396. [Google Scholar] [CrossRef]
  18. Flandrin, P.; Rilling, G.; Goncalves, P. Empirical mode decomposition as a filter Bank. IEEE Signal Process. Lett. 2004, 11, 112–114. [Google Scholar] [CrossRef]
  19. Peng, Z.K.; Tse, P.W.; Chu, F.L. An improved Hilbert-Huang transform and its application in vibration signal analysis. J. Sound Vib. 2005, 286, 187–205. [Google Scholar] [CrossRef]
  20. Gilks, W.R.; Richardson, S.; Spiegelhalter, D.J. Markov Chain Monte Carlo in Practice; Chapman & Hall/CRC: Florida, FL, USA, 1996; p. 1. [Google Scholar]
  21. Heisenberg, W. Physical Principles of the Quantum Theory; The University of Chicago Press: Chicago, IL, USA, 1930. [Google Scholar]
  22. Kaiser, G. A Friendly Guide to Wavelets; Birkhäuser: Boston, MA, USA, 1994; pp. 44–45. [Google Scholar]
  23. Helong, L.; Xiaoyan, D.; Hongliang, D. Structural damage detection using the combination method of EMD and wavelet analysis. Mech. Syst. Signal Process. 2007, 21, 298–306. [Google Scholar]
  24. Li, L. On the block thresholding wavelet estimators with censored data. J. Multivar. Anal. 2008, 9, 1518–1543. [Google Scholar] [CrossRef]
  25. Stein, C. Estimation of the mean of a multivariate normal distribution. Annu. Stat. 1981, 9, 1135–1151. [Google Scholar] [CrossRef]
  26. Dang, D.S.; Weifeng, T.; Feng, Q. EMD- and LWT-based stochastic noise eliminating method for fiber optic gyro. Measurement 2010, 44, 2190–2193. [Google Scholar] [CrossRef]
  27. Palma, R.; Vmoraru, C.; Woo, J. Pahikkala, Parseval Equality, version 8; 2012. Available online: http://planetmath.org/?op=getobj;from=objects;id=4717 (accessed on 12 December 2012).
  28. Randhawa, S.; Li, J.J.S. Adaptive Order Spline Interpolation for Edge-Preserving Colour Filter Array Demosaicking. In Proceeding of the 2011 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Noosa, QLD, Australia, 6–8 December 2011; pp. 666–671. [CrossRef]