Article

Currents Analysis of a Brushless Motor with Inverter Faults—Part I: Parameters of Entropy Functions and Open-Circuit Faults Detection

1 Ecole Supérieure des Techniques Aéronautiques et de Construction Automobile, ESTACA’Lab Paris-Saclay, 12 Avenue Paul Delouvrier—RD10, 78180 Montigny-le-Bretonneux, France
2 ESTACA Campus Ouest, Rue Georges Charpak—BP 76121, 53009 Laval, France
* Author to whom correspondence should be addressed.
Submission received: 4 May 2023 / Revised: 25 May 2023 / Accepted: 28 May 2023 / Published: 31 May 2023
(This article belongs to the Special Issue Linear Motors and Direct-Drive Technology)

Abstract

In the field of signal processing, it is interesting to explore signal irregularities. Indeed, entropy approaches are efficient at quantifying the complexity of a time series; their ability to analyze and provide information related to signal complexity justifies their growing interest. Unfortunately, many entropies exist, each requiring parameter values to be set, such as the data length N, the embedding dimension m, the time lag τ, the tolerance r and the scale s for the entropy calculation. Our aim is to determine a methodology to choose the suitable entropy and the suitable parameter values. Therefore, this paper focuses on the effects of their variation. For illustration purposes, a brushless motor with a three-phase inverter is investigated, first for single faults and then for multiple permanent open-circuit faults. Starting from the brushless inverter under healthy and faulty conditions, the various possible switching faults are discussed. The occurrence of faults in an inverter leads to atypical characteristics of the phase currents, which can increase the complexity of the brushless motor's response. Thus, the performance of many entropies and multiscale entropies is discussed to evaluate the complexity of the phase currents. Herein, we introduce a mathematical model to help select the appropriate entropy functions, with proper parameter values, for detecting open-circuit faults. Moreover, this mathematical model enables the selection of the usual entropies and multiscale entropies (bubble, phase, slope and conditional entropy) that can best detect faults, for up to four faulty switches. Simulations are then carried out to select the entropy functions best able to differentiate healthy from open-circuit faulty conditions of the inverter.

1. Introduction

One of the most powerful tools to assess the dynamical characteristics of time series is entropy. Entropy is used in several kinds of applications: to account for vibrations of rotary machines [1] (electric machines), to detect battery faults [2] (short-circuit and open-circuit faults), to reveal important information about seismically active zones [3] (electroseismic time series), to measure financial risks [4] (economic sciences), to categorize softwood species under uniform and gradual cross-sectional structures [5] (biology) and to categorize benign and malignant tissues of different subjects [5] (biomedical).
Various entropy measures have been established over the past two decades. Pincus [6] proposed the approximate entropy ApEn, which calculates the complexity of data and measures the frequency of similar patterns of data in a time series. However, ApEn also has some disadvantages: due to self-matching, the bias of ApEn is important for small time series and depends on the entropy parameters. To avoid self-matching, Richman [6] defined the sample entropy SampEn. Since the introduction of ApEn [6], other entropies have been proposed, such as Kolmogorov entropy K2En, conditional entropy CondEn, dispersion entropy DispEn, cosine similarity entropy CoSiEn, bubble entropy BubbEn, fuzzy entropy FuzzEn, increment entropy IncrEn, phase entropy PhasEn, slope entropy SlopEn, entropy of entropy EnofEn, attention entropy AttEn and several other multiscale entropies.
Entropy is now widely applied to analyze signals in various fields. A combination of wavelet transformation and entropy has been proposed and applied to power grid fault detection [7]. The wavelet transform is commonly used to extract characteristic quantities and to analyze transient signals, while entropy is ideal for the measurement of uncertainty. The approximate entropy features of a multiwavelet transform [8], combined with an artificial neural network, recognize transmission line faults. The multi-level wavelet Shannon entropy was proposed to locate single-sensor faults [9]. Guan [10] developed a precise diagnosis method for structural faults of rotating machinery based on a combination of empirical mode decomposition, SampEn and a deep belief network. Entropy measures [11] are used in machine fault diagnosis. In [12], the variational mode decomposition energy entropy of each phase current cycle is calculated to accurately diagnose the arc fault; the noise component is removed according to the permutation entropy. The authors of [13] proposed a method to diagnose multi-circuit faults of three-phase motors. This method only needs to collect the phase currents to diagnose multi-circuit points accurately; it improves the independence of the diagnosis. Based on signal feature extraction, a combination of the empirical mode decomposition entropy and index energy methods is adopted in [14] to extract the draft tube's dynamic feature information for a water turbine. Open-circuit fault diagnosis of a multilevel inverter [15] uses a fast fault-detection algorithm based on two sample techniques and a fault-localization algorithm using the entropy of wavelet packets as a feature. The authors in [16] presented a fast feature-extraction technique, including wavelet packet decomposition and the entropy of wavelet packets, for fault detection and classification of IGBT-based converters.
Open-circuit fault diagnosis methods can be divided into voltage-based methods and current-based methods, according to different fault characteristics. Voltage-based methods [17,18] can be implemented with external hardware or modeled. Recently, current-type methods based on current waveform analysis have attracted much attention [19,20,21].
An effective open-circuit fault diagnosis using the phase current performance of a brushless motor or inverters is shown in [22]. Other practical current-based diagnostic algorithms are addressed in [19,23,24]: they identify the reference current errors and the average absolute value of the currents. Then, the average value of the current error and the average absolute value of the motor phase current are used to form the diagnostic variable. A fast approach based on the amplitude of the d-q axis reference currents is proposed by [21]. The development of intelligent algorithms, such as fuzzy logic [25], sliding mode observers [26], neural networks [27], machine learning [28], an optimized support vector machine method [29] and wavelet transform [30], allows faulty switches to be detected and identified.
A mathematical model of healthy and faulty conditions is developed by [31]: it detects an open-circuit fault in interleaved boost converters using the Filippov method. The stable range of the load variation is extended using an original fault-tolerant strategy based on this model. In [32], one or at most two open-circuit faults are detected by entropy functions. Seven entropies are investigated, but only the sample and fuzzy entropies are able to differentiate healthy from open-circuit faulty conditions of the AC-DC-AC converter considered in [32].
We now propose a fault-detection method for a brushless motor with a three-phase inverter. The occurrence of faults in an inverter leads to atypical characteristics of the phase currents, specific to the drive circuit. Usual and multiscale entropies are then used to detect multiple open-circuit faults. In this paper, we broaden the spectrum of investigation to 52 entropies, to evaluate their ability to differentiate healthy states from open-circuit faulty conditions. This is why we herein introduce a mathematical model to select the appropriate entropy functions, with an appropriate parameter combination, for open-circuit fault detection. The entropy calculation has several parameters, such as the data length N, embedding dimension m, time lag τ, tolerance r and scale s. However, the dependence of the entropy effectiveness on the choice of parameters used for the phase current analysis has not yet been investigated for a brushless motor. Moreover, using this mathematical model, we are able to identify the usual entropies and multiscale entropies (bubble, phase, slope and conditional entropy) that best detect faults, for up to four switches. Our goal herein is to be able to select the appropriate entropy.
The paper is organized as follows. The usual entropies and multiscale entropies are introduced in Section 2. Section 3 and Section 4 present the brushless motor and the dataset of output currents we used, under the healthy state and with one, two, three and four open-circuit faults. Then, Section 5 illustrates the evaluation of the different entropies under variation of the data length, embedding dimension, time lag, tolerance and scale. We end with a Conclusion in Section 7.

2. Entropy Methods

  • Sample entropy SampEn and approximate entropy ApEn are the most commonly used measures for analyzing time series. For a time series $\{x_i\}_{i=1}^N$ with a given embedding dimension m, tolerance r and time lag $\tau$, the embedding vector $x_i^m = [x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau}]$ is constructed. Two vectors $x_i^m$ and $x_j^m$ are considered close to each other when their Chebyshev distance satisfies:
    $$\mathrm{ChebDist}_{i,j}^m = \max_{k=1,\ldots,m} \left| x_i^m[k] - x_j^m[k] \right| \le r.$$
    The number of such vectors is denoted $P_i^m(r)$. This number is used to calculate the local probability of occurrence of similar patterns:
    $$B_i^m(r) = \frac{1}{N-m+1} P_i^m(r).$$
    The global probability of occurrence of similar patterns, with tolerance r, is:
    $$B^m(r) = \frac{1}{N-m+1} \sum_{i=1}^{N-m+1} B_i^m(r).$$
    For m + 1:
    $$B^{m+1}(r) = \frac{1}{N-m} \sum_{i=1}^{N-m} B_i^{m+1}(r).$$
    The approximate entropy is:
    $$\mathrm{ApEn}(m, \tau, r, N) = \ln \frac{B^m(r)}{B^{m+1}(r)}.$$
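As an illustration, the equations above can be sketched in Python. This is a minimal, unoptimized implementation of the B^m / B^{m+1} formulation; the function name `apen` and the default tolerance r = 0.2·std(x) are our assumptions, not from the paper:

```python
import numpy as np

def apen(x, m=2, tau=1, r=None):
    """ApEn(m, tau, r, N) following the B^m / B^{m+1} formulation above."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)          # common heuristic: 20% of the signal's std
    N = len(x)

    def global_prob(mm):
        n = N - (mm - 1) * tau        # number of embedding vectors
        emb = np.array([x[i:i + (mm - 1) * tau + 1:tau] for i in range(n)])
        # Chebyshev distance between every pair of vectors (self-matches included)
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        local = np.mean(dist <= r, axis=1)   # B_i^mm(r)
        return np.mean(local)                # B^mm(r)

    return np.log(global_prob(m) / global_prob(m + 1))
```

A regular signal (e.g., a sinusoid) yields a lower ApEn than white noise of the same length, which is the behavior exploited later for fault detection.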
  • Kolmogorov entropy [33]—K2En is defined from the probability of a trajectory crossing a region of the phase space: suppose that there is an attractor in phase space and that the trajectory $\{x_i\}_{i=1}^N$ is in the basin of attraction. K2En defines the probability distribution of each trajectory, calculated from the state space, and computes the limit of the Shannon entropy. The state of the system is measured at intervals of time $\tau$. The time series $\{x_i\}_{i=1}^N$ is divided into a finite partition $\alpha = \{C_1, C_2, \ldots, C_k\}$, according to $C_k = [x(i\tau), x((i+1)\tau), \ldots, x((i+k-1)\tau)]$. The Shannon entropy of such a partition is given by:
    $$K(\tau, k) = -\sum_{C \in \alpha} p(C) \log p(C).$$
    K2En is then defined by:
    $$K2En = \sup_{\alpha\ \text{finite partition}} \lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} \left( K_{n+1}(\tau, k) - K_n(\tau, k) \right).$$
  • Conditional entropy [34]—CondEn quantifies the variation of information necessary to specify a new state in a one-dimensional incremented phase space. Small Shannon entropy values are obtained when a pattern appears several times. CondEn uses the normalization:
    $$x(i) = \frac{X(i) - av[X]}{std[X]},$$
    where $av[X]$ is the series' mean and $std[X]$ is its standard deviation. From the normalized series, the vector $x_L(i) = [x(i), x(i-1), \ldots, x(i-L+1)]$ of L consecutive samples is constructed in the L-dimensional phase space. From the variation of the Shannon entropy of $x_L(i)$, CondEn is obtained as:
    $$CondEn(L) = -\sum_{L} p_L \log p_L + \sum_{L-1} p_{L-1} \log p_{L-1}.$$
  • Dispersion entropy [35,36]—DispEn focuses on the class sequence that maps the elements of the time series into positive integers. According to the mapping rule of dispersion entropy, the same dispersion pattern can result from multiple forms of sample vectors. The time series $\{x_i\}_{i=1}^N$ is mapped with the normal cumulative distribution function to the normalized series $y_j^m = [y_j, y_{j+\tau}, \ldots, y_{j+(m-1)\tau}]$:
    $$y_i = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{x_i} \exp\left( -\frac{(s-\mu)^2}{2\sigma^2} \right) ds,$$
    where $y_i \in (0, 1)$. The phase space is restructured into c classes as $z_i^c = \mathrm{round}(c \cdot y_i + 0.5)$ and $z_j^{m,c} = [z_j^c, z_{j+\tau}^c, \ldots, z_{j+(m-1)\tau}^c]$. Each $z_j^{m,c}$ corresponds to a dispersion pattern $\upsilon$. The frequency of $\upsilon$ can be deduced as:
    $$p(\upsilon) = \frac{\mathrm{Number}\{ j \mid j \le N - (m-1)\tau,\ z_j^{m,c} = \upsilon \}}{N - (m-1)\tau},$$
    where the numerator is the number of embedding vectors $z_j^{m,c}$ mapped to the dispersion pattern $\upsilon$. Dispersion entropy is then defined according to information entropy theory:
    $$DispEn(m, c, \tau) = -\sum_{\upsilon=1}^{c^m} p(\upsilon) \log p(\upsilon).$$
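The mapping-and-counting steps above can be sketched as follows (a minimal illustration; the function name `dispen` and its defaults are ours, and the normal-CDF mapping uses the standard library's `erf`):

```python
import numpy as np
from math import erf, sqrt

def dispen(x, m=2, c=3, tau=1):
    """DispEn(m, c, tau): map samples to c classes via the normal CDF, count patterns."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    # y_i in (0, 1): normal cumulative distribution function of each sample
    y = np.array([0.5 * (1.0 + erf((xi - mu) / (sigma * sqrt(2.0)))) for xi in x])
    # z_i^c in {1, ..., c}: round(c * y + 0.5), clipped to the valid class range
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    n = len(x) - (m - 1) * tau
    patterns = np.array([z[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / n
    return float(-np.sum(p * np.log(p)))
```

By construction the result is bounded by log(c^m), reached when all dispersion patterns are equally likely.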
  • Cosine similarity entropy [37]—CoSiEn evaluates the angle between two embedding vectors instead of the Chebyshev distance. The global probability of occurrence of similar patterns, computed from the local probabilities, is used to estimate the entropy. The angular distance for all pairwise embedding vectors is:
    $$AngDist_{i,j}^m = \frac{1}{\pi} \cos^{-1}\left( \frac{x_i^m \cdot x_j^m}{|x_i^m| \, |x_j^m|} \right), \quad i \ne j,$$
    where $x_i^m = [x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau}]$ is the embedding vector of $\{x_i\}_{i=1}^N$. Counting the pairs with $AngDist_{i,j}^m \le r$ yields the number of similar patterns $P_i^m(r)$. The local and global probabilities of occurrence are:
    $$B_i^m(r) = \frac{1}{N-m-1} P_i^m(r) \quad \text{and} \quad B^m(r) = \frac{1}{N-m} \sum_{i=1}^{N-m} B_i^m(r).$$
    Finally, cosine similarity entropy is defined by:
    $$CoSiEn(m, \tau, r, N) = -B^m(r) \log_2 B^m(r) - \left(1 - B^m(r)\right) \log_2 \left(1 - B^m(r)\right).$$
  • Bubble entropy [38,39]—BubbEn reduces the significance of the parameters employed to obtain an estimated entropy. Based on permutation entropy, the BubbEn vectors are ranked in the embedding space. The bubble sort algorithm is used for the ordering procedure, counting the number of swaps performed for each vector. More coarse-grained distributions are created, and the entropy of this distribution is computed. BubbEn reduces the dependence on input parameters (such as N and m) by counting the number of sample swaps necessary to achieve the ordered subsequences instead of counting order patterns. BubbEn embeds a given time series $\{x_i\}_{i=1}^N$ into an m-dimensional space, producing a series of $N - m + 1$ vectors $X_1, X_2, \ldots, X_{N-m+1}$, where $X_i = (x_i, x_{i+1}, \ldots, x_{i+m-1})$. The number of swaps required for sorting is counted for each vector $X_i$. The probability $p_i$ of requiring i swaps is used to evaluate the Rényi entropy:
    $$B_2^m(x) = -\log \sum_{i=0}^{m(m-1)/2} p_i^2.$$
    Increasing the embedding dimension m by one, the procedure is repeated to obtain a new entropy value $B_2^{m+1}$. Finally, BubbEn is obtained similarly to ApEn:
    $$BubbEn(x, m, N) = \frac{B_2^{m+1} - B_2^m}{\log \frac{m+1}{m-1}}.$$
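The swap-counting procedure above can be sketched directly (an illustrative, unoptimized implementation; function names are ours):

```python
import numpy as np

def bubble_swaps(v):
    """Number of swaps bubble sort needs to order the vector v."""
    v = list(v)
    swaps = 0
    for i in range(len(v)):
        for j in range(len(v) - 1 - i):
            if v[j] > v[j + 1]:
                v[j], v[j + 1] = v[j + 1], v[j]
                swaps += 1
    return swaps

def renyi2(x, m):
    """B_2^m(x): Renyi-2 entropy of the swap-count distribution."""
    n = len(x) - m + 1
    swaps = [bubble_swaps(x[i:i + m]) for i in range(n)]
    counts = np.bincount(swaps, minlength=m * (m - 1) // 2 + 1)
    p = counts / n
    return -np.log(np.sum(p ** 2))

def bubben(x, m=4):
    return (renyi2(x, m + 1) - renyi2(x, m)) / np.log((m + 1) / (m - 1))
```

A monotone series needs zero swaps in every window, so its BubbEn is zero, while an irregular series yields a positive value.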
  • Fuzzy entropy [40,41]—FuzzEn employs fuzzy membership functions such as triangular, trapezoidal, bell-shaped, Z-shaped, Gaussian, constant-Gaussian and exponential functions. FuzzEn depends less on N and follows the same steps as the SampEn approach. First, the zero-mean embedding vectors (centered using their own means) are constructed as $q_i^m = x_i^m - \mu_i^m$, where:
    $$x_i^m = [x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau}] \quad \text{and} \quad \mu_i^m = \frac{1}{m} \sum_{k=1}^{m} x_i^m[k].$$
    FuzzEn calculates the fuzzy similarity $Sim(r, \eta)$:
    $$Sim(r, \eta) = e^{-\left(ChebDist_{i,j}^m\right)^\eta / r},$$
    obtained from a fuzzy membership function, where $\eta$ is the order of the Gaussian function. The Chebyshev distance is:
    $$ChebDist_{i,j}^m = \max_{k=1,\ldots,m} \left| q_i^m[k] - q_j^m[k] \right|, \quad i \ne j.$$
    As in the SampEn approach, the local and global probabilities of occurrence are computed, yielding the fuzzy entropy:
    $$FuzzEn(m, \tau, r, N) = \ln \frac{B^m(r)}{B^{m+1}(r)}.$$
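The Gaussian-membership variant above can be sketched as follows (a minimal illustration with τ = 1; the averaging of similarities over all pairs stands in for the local/global probability steps, and the defaults are our assumptions):

```python
import numpy as np

def fuzzen(x, m=2, r=None, eta=2):
    """FuzzEn(m, tau=1, r, N) with the Gaussian membership Sim(r, eta)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    N = len(x)

    def mean_similarity(mm):
        emb = np.array([x[i:i + mm] for i in range(N - m)])
        emb -= emb.mean(axis=1, keepdims=True)      # zero-mean vectors q_i^m
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        sim = np.exp(-(dist ** eta) / r)            # fuzzy similarity Sim(r, eta)
        np.fill_diagonal(sim, 0.0)                  # exclude self-matches (i != j)
        n = len(emb)
        return sim.sum() / (n * (n - 1))

    return np.log(mean_similarity(m) / mean_similarity(m + 1))
```

Because the membership is graded rather than binary, the estimate varies smoothly with r, which is one reason FuzzEn behaves well on short records.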
  • Increment entropy [42]: the IncrEn approach (similar to permutation entropy) encodes the time series in the form of symbol sequences. For a time series $\{x_i\}_{i=1}^N$, an increment series $v(i) = x(i+1) - x(i)$, $1 \le i \le N-1$, is constructed and then divided into vectors of length m, $V(l) = [v(l), \ldots, v(l+m-1)]$, $1 \le l \le N - m$. Each element in each vector is mapped to a word consisting of the sign $s_k = \mathrm{sgn}(v(j))$ and the size $q_k$, which is:
    $$q_k = \min\left( q, \left\lfloor \frac{|v(j)| \cdot q}{std(v)} \right\rfloor \right).$$
    The sign indicates the direction of the volatility between the corresponding neighboring elements in the original time series. The pattern vector w is the combination of all corresponding $(s_k, q_k)$ pairs. The relative frequency of each word $w_n$ is defined as $P(w_n) = Q(w_n) / (N - m)$, where $Q(w_n)$ is the total number of instances of the nth word. Finally, IncrEn is defined as:
    $$IncrEn = -\frac{1}{m-1} \sum_{n=1}^{(2q+1)^m} P(w_n) \log P(w_n).$$
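A compact sketch of this encoding (illustrative only; the handling of a zero-variance increment series is our assumption):

```python
import numpy as np
from collections import Counter

def incren(x, m=2, q=4):
    """IncrEn: encode increments as (sign, quantized magnitude) words."""
    v = np.diff(np.asarray(x, dtype=float))
    s = np.sign(v).astype(int)
    sd = v.std()
    if sd > 0:
        mag = np.minimum(q, np.floor(np.abs(v) * q / sd)).astype(int)
    else:
        mag = np.zeros_like(s)                # constant increments, single word
    n_words = len(v) - m + 1
    words = [tuple(zip(s[l:l + m], mag[l:l + m])) for l in range(n_words)]
    p = np.array(list(Counter(words).values())) / n_words
    return float(-np.sum(p * np.log(p)) / (m - 1))
```

A linear ramp produces a single repeated word and therefore zero entropy, while noisy data spreads probability over many words.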
  • Phase entropy [43]—PhasEn quantifies the distribution of the time series $\{x_i\}$ in a two-dimensional phase space. First, the time-delayed series Y[n] and X[n] are calculated as follows:
    $$Y[n] = x[n+2] - x[n+1],$$
    $$X[n] = x[n+1] - x[n].$$
    The second-order difference plot of x is constructed as a scatter plot of Y[n] against X[n]. The slope angle $\theta[n]$ of each point (X[n], Y[n]) is measured from the origin (0, 0). The plot is split into k sectors, k serving as a coarse-graining parameter. For each sector i, the sector slope angle $S_\theta[i]$ is the sum of the slope angles of its points:
    $$S_\theta[i] = \sum_{n=1}^{N_i} \theta[n],$$
    where i = 1, 2, …, k and $N_i$ is the number of points in the ith sector. The probability distribution p(i) of the sectors is:
    $$p(i) = \frac{S_\theta[i]}{\sum_{j=1}^{k} S_\theta[j]}.$$
    The estimation of the Shannon entropy of the probability distribution p(i) leads to PhasEn, computed as:
    $$PhasEn = -\frac{1}{\log(k)} \sum_{i=1}^{k} p(i) \log p(i).$$
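The second-order difference plot and sector accumulation can be sketched as follows (an illustrative implementation; angles are folded into [0, 2π) and the defaults are ours):

```python
import numpy as np

def phasen(x, k=8):
    """PhasEn from the second-order difference plot, with k angular sectors."""
    x = np.asarray(x, dtype=float)
    X = x[1:-1] - x[:-2]                            # X[n] = x[n+1] - x[n]
    Y = x[2:] - x[1:-1]                             # Y[n] = x[n+2] - x[n+1]
    theta = np.mod(np.arctan2(Y, X), 2 * np.pi)     # slope angle of each point
    sector = np.minimum((theta * k / (2 * np.pi)).astype(int), k - 1)
    S = np.array([theta[sector == i].sum() for i in range(k)])  # S_theta[i]
    p = S / S.sum()                                 # sector probabilities
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)) / np.log(k))
```

Thanks to the 1/log(k) normalization, the result lies in [0, 1], with values near 1 for points spread evenly over the sectors.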
  • Slope entropy [44]—SlopEn includes amplitude information in a symbolic representation of the input time series $\{x_i\}_{i=1}^N$. Each subsequence of length m drawn from $\{x_i\}_{i=1}^N$ is transformed into a subsequence of length m − 1 of the differences $x_i - x_{i-1}$. To find the corresponding symbols, thresholds are applied to these differences: SlopEn uses the symbols 0, 1 and 2, with positive and negative versions of the last two. Each symbol covers a range of slopes for the segment joining two consecutive samples of the input data. The frequency of each symbolic pattern found is then mapped into a value using a Shannon entropy approach, applied with the factor corresponding to the number of slope patterns found.
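The symbolization above can be sketched as follows; the two thresholds `gamma` and `delta` (separating steep, gentle and flat slopes) are assumed values typical of the slope entropy literature, not taken from the paper:

```python
import numpy as np

def slopen(x, m=3, gamma=1.0, delta=1e-3):
    """SlopEn sketch: symbolize consecutive differences, then Shannon entropy
    of the length-(m-1) symbolic patterns. gamma/delta thresholds are assumed."""
    d = np.diff(np.asarray(x, dtype=float))
    # symbols: +2 (steep up), +1 (up), 0 (flat), -1 (down), -2 (steep down)
    sym = np.select([d > gamma, d > delta, d >= -delta, d > -gamma],
                    [2, 1, 0, -1], default=-2)
    n = len(sym) - (m - 1) + 1
    patterns = np.array([sym[i:i + m - 1] for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / n
    return float(-np.sum(p * np.log(p)))
```

A gentle ramp maps every difference to the same symbol, giving zero entropy; irregular data activates several symbols and patterns.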
  • Entropy of entropy [45]—EnofEn: the time series $\{x_i\}_{i=1}^N$ is divided into consecutive non-overlapping windows $w_j^\tau$ of length $\tau$: $w_j^\tau = \{x_{(j-1)\tau+1}, \ldots, x_{(j-1)\tau+\tau}\}$. The probability $p_j^k$ that a sample $x_i$ over $w_j^\tau$ occurs in state k is:
    $$p_j^k = \frac{\text{total number of } x_i \text{ over } w_j^\tau \text{ in state } k}{\tau}.$$
    Shannon entropy is first used to characterize the system state inside each window:
    $$y_j^\tau = -\sum_{k} p_j^k \log p_j^k.$$
    In the second step, the probability $p_l$ that a value $y_j^\tau$ occurs in level l is:
    $$p_l = \frac{\text{total number of } y_j^\tau \text{ in level } l}{N/\tau}.$$
    Shannon entropy is then used a second time, instead of the sample entropy, to characterize the degree of the state change:
    $$EnofEn(\tau) = -\sum_{l} p_l \log p_l.$$
  • Attention entropy [46]—AttEn: traditional entropy methods focus on the frequency distribution of all the observations in a time series, while attention entropy only uses the key patterns. Instead of counting the frequency of all observations, it analyzes the frequency distribution of the intervals between the key patterns in a time series; the final step is the Shannon entropy of these intervals. The advantages of attention entropy are that it needs no parameter tuning, is robust to the time-series length and requires only linear time to compute.
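The key-pattern idea can be illustrated with a simplified sketch that uses local maxima only (the full attention entropy method also combines other key-pattern types, e.g., local minima and mixed pairs):

```python
import numpy as np

def atten_peaks(x):
    """Attention-entropy-style sketch: Shannon entropy of the intervals
    between local maxima (the full method also uses other key patterns)."""
    x = np.asarray(x, dtype=float)
    # strict local maxima serve as the key patterns
    peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    intervals = np.diff(peaks)
    if intervals.size == 0:
        return 0.0
    _, counts = np.unique(intervals, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))
```

A strictly periodic signal has identical peak-to-peak intervals, hence zero interval entropy, while irregular signals spread the interval distribution.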
  • Multiscale entropy [5,47,48]—MSEn extends entropy to multiple time scales by calculating the entropy values for each coarse-grained time series. The multiple time scales are constructed from the original time series $\{x_1, x_2, \ldots, x_N\}$ of length N by averaging the data points within non-overlapping windows of increasing length. The coarse-grained time series $\{y^{(s)}\}$ is:
    $$y_j^{(s)} = \frac{1}{s} \sum_{i=(j-1)s+1}^{js} x_i, \quad 1 \le j \le \lfloor N/s \rfloor.$$
    MSEn is:
    $$MSEn(m, r, s) = -\ln \frac{A_s^m(r)}{B_s^m(r)},$$
    where $A_s^m(r)$ and $B_s^m(r)$ represent the probabilities that two sequences match for m + 1 points and for m points, respectively, calculated from the coarse-grained time series at the scale factor s. The accuracy of the multiscale entropy estimation decreases, and the estimate is often undefined, as the data length becomes shorter with an increase in scale s. This is true in the case of SampEn, which is sensitive to the parameters (data length N, embedding dimension m, time lag $\tau$, tolerance r) of short signals. To avoid this, many variants of the traditional multiscale entropy method, such as composite multiscale entropy [49,50] and refined multiscale entropy [51,52], have been proposed. In the classical multiscale entropy method, there is only one coarse-grained time series, derived from a non-overlapping coarse-graining procedure at scale s. In the composite multiscale entropy method, however, there are s coarse-grained time series, whose sliding windows overlap. The mean of the entropy values over all coarse-grained time series is defined as the composite multiscale entropy value at the scale s, improving the multiscale entropy accuracy. At a scale factor s, cMSEn is defined as:
    $$cMSEn(m, r, s) = -\frac{1}{s} \sum_{k=1}^{s} \ln \frac{n_{k,s}^{m+1}}{n_{k,s}^{m}},$$
    where $n_{k,s}^m$ is the total number of m-dimensional matched vector pairs, calculated from the kth coarse-grained time series at scale factor s.
    The refined multiscale entropy [51,52], based on the multiscale entropy approach, applies different entropies as a function of the time scale in order to perform a multiscale irregularity assessment. rMSEn prevents the influence of the reduced variance on the complexity evaluation and removes the fast temporal scales. Thus, an rMSEn method improves the coarse-graining process.
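The classical coarse-graining plus SampEn pipeline can be sketched as follows (a minimal illustration; note that, unlike the traditional method, this sketch recomputes the tolerance r from each coarse-grained series when r is left unset):

```python
import numpy as np

def coarse_grain(x, s):
    """Non-overlapping coarse-graining y_j^(s) at scale s."""
    x = np.asarray(x, dtype=float)
    n = len(x) // s
    return x[:n * s].reshape(n, s).mean(axis=1)

def sampen(x, m=2, r=None):
    """SampEn(m, r) = -ln(A/B), self-matches excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    N = len(x)

    def matches(mm):
        emb = np.array([x[i:i + mm] for i in range(N - m)])
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return (np.sum(dist <= r) - len(emb)) / 2   # unordered pairs, no self-match

    return -np.log(matches(m + 1) / matches(m))

def msen(x, m=2, r=None, scales=(1, 2, 4)):
    """MSEn: SampEn of each coarse-grained series."""
    return [sampen(coarse_grain(x, s), m, r) for s in scales]
```

The shrinking length of the coarse-grained series (N/s samples at scale s) is exactly why the estimate degrades at large scales, motivating the composite and refined variants described above.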

3. System Description

Many industrial applications require precise regulation of the speed of the drive motors. A brushless motor operates under various speed and load conditions, and knowledge of some physical parameters (speed, torque, current) is essential for proper speed regulation. Figure 1 shows a system implementation for brushless motor control as a Permanent Magnet Synchronous Machine in Matlab/Simulink. A three-phase inverter is used to feed the motor phases, injecting currents into the coils to create the necessary magnetic fields for the three phases. The three-phase inverter is modeled as a universal bridge in Matlab, with three arms and MOSFET/diode pairs as power electronic devices (T_i and B_i, i = a, b, c), controlled by pulse width modulation.
A simplified model of the stator consists of three coils arranged in the a, b and c directions. To ensure the brushless motor's movement, the a, b and c stator windings are powered according to the rotor's position. The rotor magnetic field position is detected by three Hall sensors (placed every 120°), which provide the corresponding winding excitation through the commutation logic circuit. Table 1 summarizes the main specifications of this brushless machine.
In permanent magnet synchronous motors, a physical phenomenon can appear: electromagnetic torque oscillations. These oscillations are named the cogging effect and are taken into consideration by [53,54]. The cogging phenomenon is the interaction of the magnetic field produced by the permanent-magnet rotor with the stator teeth. This interaction can be reduced by physically modifying the internal structure of the rotor and stator. Another way to reduce the phenomenon is a control technique introducing this knowledge directly into the controller design, as in [53,54]. Cogging torque is a significant problem for high-precision applications where position control is required. In order to simplify the analysis, the cogging phenomenon is neglected here.
The brushless motor design and the analysis of various control techniques are discussed in [55], where double closed loops (speed and current) are considered. The current loop is used to improve the dynamic performance of the controlled system. Wu [56] used only one speed control loop, as did [57]. Nga [58] assumed that the current control loop is ideal, meaning that the transfer function of the closed current control loop is equal to 1. In the cascaded control structure, the inner loops are designed to achieve a fast response, and the outer loop is designed to achieve optimum regulation and stability.
The inner loop also keeps the torque output below a safe limit. Moreover, the controller should be developed in such a manner that it produces less torque ripple. Torque ripple arises from motor control through inefficient commutation strategies and internal gate control schemes. Ideally, the torque is constant (ripple-free) due to the in-phase back electromotive force and quasi-square-wave stator current. In this paper, we consider the dynamics of the current control loop to be much faster than those of the speed control loop, in order to decouple both dynamics. Under this condition, the reference value of the inner loop, which is the output of the outer controller, can be considered nearly constant (a simple current-limit closed control loop). To achieve the regulation objective, we are interested in the steady-state phase currents and not in their dynamic performance. This paper proposes a simple control structure using only the speed control loop.
The outer loop controls the speed of the motor. The speed feedback comes from the Hall sensor positions. The three-phase control technique for the brushless motor uses a proportional integral (PI) controller. The controller receives the error signal and generates the control signal to regulate the speed response (referred to the target speed). The PI controller adjusts the duty cycle of the PWM pulses to maintain the desired speed. The proportional and integral gains of the controller are given in Table 1. The inner loop synchronizes the inverter gate states of the brushless motor with the stator winding excitation, as in Table 2. With this table, the commutation logic can easily control the commutation. The fault gate circuit is implemented by a gain assigned to each MOSFET: the gain is zero for an open-circuit fault and one otherwise.
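The gate-gain fault injection described above can be sketched as follows. The gate patterns here are purely illustrative placeholders, not the actual commutation sequence of Table 2:

```python
import numpy as np

# Illustrative gate-signal samples: columns = [Ta, Ba, Tb, Bb, Tc, Bc],
# one row per commutation step (not the actual Table 2 sequence).
gates = np.array([[1, 0, 0, 1, 0, 0],
                  [1, 0, 0, 0, 0, 1],
                  [0, 0, 1, 0, 0, 1],
                  [0, 1, 1, 0, 0, 0]])

# Fault gains: 1 = healthy switch, 0 = permanent open-circuit.
# Here we inject an open-circuit fault on Ta (first column).
fault_gain = np.array([0, 1, 1, 1, 1, 1])

faulty_gates = gates * fault_gain
```

Multiplying elementwise by the gain vector forces the faulty switch's gate permanently to zero while leaving the other switches unchanged, which is exactly the behavior of a permanent open-circuit fault in the simulation.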
Both single-switch and multi-switch open-circuit faults are classified and studied.
One open-circuit fault may occur in a single switch: T_a or B_a of the first phase a, T_b or B_b of the second phase b, T_c or B_c of the third phase c.
An open-circuit phase fault can be detected in T_a and B_a, T_b and B_b, or T_c and B_c.
If two upper MOSFET faults are detected, the two open-circuit faults can be T_a and T_b, T_a and T_c, or T_b and T_c. The two open-circuit faults B_a and B_b, B_a and B_c, or B_b and B_c are the symmetrical faults of the lower arms.
The cases of two open-circuit faults on the upper and lower arms are T_a and B_b, T_a and B_c, T_b and B_a, T_b and B_c, T_c and B_a, and T_c and B_b.
The brushless motor is still running even in three-fault cases: T_a, B_a, T_b; T_a, B_a, B_b; T_a, B_a, T_c; T_a, B_a, B_c; T_b, B_b, T_a; T_b, B_b, B_a; T_b, B_b, T_c; T_b, B_b, B_c; T_c, B_c, T_a; T_c, B_c, B_a; T_c, B_c, T_b; T_c, B_c, B_b; T_a, B_b, T_c; T_a, B_b, B_c; T_a, T_b, B_c; B_a, B_b, T_c; B_a, T_b, B_c; B_a, T_b, T_c.
If the upper and lower arms are affected by multiple open-circuits, the open-circuit faults can be: T_a, B_a, T_b, B_c; T_a, B_a, B_b, T_c; T_a, T_b, B_b, B_c; B_a, T_b, B_b, T_c; T_a, B_b, T_c, B_c; B_a, T_b, T_c, B_c.
With no loss of generality, this work focuses on the open-circuit fault on the first switch T_a of the first phase a. Two open-switch faults are also considered: on the second switch B_a of the first phase and on the first switch T_b of the second phase; then, two open-circuit faults on the first phase, T_a and B_a, followed by the case T_b, T_c. The cases of multiple open-circuits are: B_a, B_b, T_c; then B_a, T_b, B_b; and finally B_a, T_b, B_b, T_c.

4. Datasets

Each inverter phase has two arms, i.e., the upper arm and the lower arm; the phase currents are denoted i_a, i_b and i_c. The three-phase currents of the brushless motor are recorded as one-dimensional time series.
Let us observe the current of phase a under normal operating conditions (i.e., without any fault on the switches T_a, B_a, T_b, B_b, T_c or B_c of the inverter). Figure 2a shows this time-series data, sampled with sampling time T = 5 μs and composed of N = 6000 samples. The currents of phases b and c are similar to phase a's current. Under healthy conditions, Table 3 (first line) shows the zero average of the phase currents.
Then, an open-circuit fault occurs in phase a on the T_a switch. The output currents of phases a, b and c corresponding to an open-circuit fault are shown in Figure 2b and Figure 3a,b. When an open-circuit fault occurs in phase a, the positive phase current gets distorted and that phase's average current becomes negative; it is positive for the two others. The DC component of the phase a output current can be observed in Table 3 (line 2). The current amplitudes of phases b and c change; their means change too. Similarly, when an open-circuit fault occurs in phase a on switch B_a, the negative phase current gets distorted and the average current of that phase becomes positive, while it is negative for the two others.
Considering a phase fault in switches T a and B a , the phase current waveforms are illustrated in Figure 4a,b and Figure 5a. Consequently to these faults, the mean of the phase current i a has a very low amplitude. The currents of the other phases recover the alternating waveforms. Line 8 of Table 3 presents the current means.
For instance, when two upper open-circuit faults simultaneously occur in T a and T b , the currents in the upper half-bridges are only able to flow in T c . Figure 5b and Figure 6a,b show the abnormal distortions of the currents of phases a, b and c, which differ from normal operating conditions. During this process, the open-circuit faults degrade the system’s performances, but do not cause a shutdown.
If two open-circuit faults occur in T a and B b , the phase a current remains positive and the phase b current remains negative, as shown in Figure 7a,b. The other phase current ( i c ) is also affected with unbalance during these fault conditions, as shown in Figure 8a.
Considering three faults in T a , B a , T b , the phase a current is near zero and the phase b current during the positive cycle is eliminated as shown in Figure 8b and Figure 9a. Consequently, the phase c current only remains during the positive cycles, as shown in Figure 9b.
Similarly, the effects of B_a, T_b and B_c faults on the phase currents are easy to determine. When the lower switch B_a is faulty, the current i_a flows only through T_a and has a positive mean (Figure 10a). With the T_b fault, the positive cycle of the phase current i_b vanishes, as shown in Figure 10b. Figure 11a shows i_c: when the open-circuit fault of B_c occurs, the current i_c has a positive waveform.
In the case of multiple open-circuit faults in several switches (B_a, T_b, B_b, T_c), the phase current waveforms are severely affected, as shown in Figure 11b and Figure 12a,b: the phase b current is near zero because of the phase fault, while the phase a and phase c currents have a positive and a negative mean, respectively.
The cases are divided into eight categories: no fault; 6 single-switch faults; 3 double-switch faults in the same bridge arm; 3 double upper-switch faults and 3 double lower-switch faults; 6 double faults in crossed half-bridges; 12 triple-switch faults with a phase failure; 6 triple-switch faults in different bridge arms; and, finally, 6 quadruple faults with a phase failure. For a typical three-phase inverter, there are thus 45 possible open-circuit faults, as shown in Table 3. For each case, the means of the phase currents are calculated and reported in Table 3.
For the first fault case (T_a), the signs of the phase current means (i_a, i_b and i_c) are negative, positive and positive. Positive, negative and negative values are found, respectively, on lines 15 (B_b and B_c faults), 17 (T_a and B_c faults) and 36 (T_a, B_b and B_c faults). If a phase is faulty, for example phase b, line 9 of Table 3 presents negative, zero and positive mean values. However, negative, zero and positive mean values also occur in other fault cases: line 27 (T_b, B_b and T_a faults), line 30 (T_b, B_b and B_c faults) and line 43 (T_a, T_b, B_b and B_c faults). For the last example, with negative, negative and positive mean current values, there are four fault cases: line 7 (B_c fault), line 11 (T_a and T_b faults), line 18 (T_a and B_c faults) and line 37 (T_a, T_b and B_c faults).
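The sign pattern of the three phase-current means can serve as a first, coarse fault signature. A minimal sketch (not the authors' code; the function name and the zero threshold `tol` are illustrative assumptions) classifies each mean as negative, near zero or positive:

```python
import numpy as np

def mean_sign_signature(i_a, i_b, i_c, tol=0.05):
    """Classify each phase-current mean as -1 (negative), 0 (near zero)
    or +1 (positive), using a threshold relative to the largest mean."""
    means = np.array([np.mean(i_a), np.mean(i_b), np.mean(i_c)])
    scale = np.max(np.abs(means))
    if scale == 0:
        return (0, 0, 0)
    signs = np.where(np.abs(means) < tol * scale, 0, np.sign(means))
    return tuple(int(s) for s in signs)

# Synthetic currents mimicking a T_a open-circuit fault: the positive
# half-cycles of phase a are suppressed, shifting its mean negative.
t = np.linspace(0, 6 * 2 * np.pi, 6000)
i_a = np.minimum(np.sin(t), 0.0)            # positive cycles removed
i_b = np.sin(t - 2 * np.pi / 3) + 0.2       # illustrative positive offset
i_c = np.sin(t + 2 * np.pi / 3) + 0.2
print(mean_sign_signature(i_a, i_b, i_c))   # → (-1, 1, 1)
```

As the Table 3 examples show, several distinct fault cases share the same sign pattern, which is why the mean-based signature alone cannot isolate the faulty switches.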

5. Selection of Entropy Functions

In this part, entropy is employed to characterize the complexity of the signals in the open-circuit case, i.e., the healthy and faulty waveforms of Figure 2a–Figure 12b. The phase currents are directly used as information, and a fault is detected based on the average current. When a fault occurs in the inverter, the current waveforms change. The MSEn, cMSEn and rMSEn algorithms can be applied to the SampEn, K2En, CondEn, DispEn, CoSiEn, BubbEn, ApEn, FuzzEn, IncrEn, PhasEn, SlopEn, EnofEn and AttEn approaches, giving 52 entropy functions (13 single-scale entropies and their three multiscale variants) to evaluate the signal complexity. For ease of comparison, the entropy of phases a, b and c when one open-circuit fault occurs on T_a is divided by the entropy of phase a under healthy conditions. Similarly, the entropies of phases a, b and c when multiple open-circuit faults occur on T_i and B_i (i = a, b, c) are divided by the entropy of phase a under healthy conditions.

5.1. One Open-Circuit Fault on T_a on Phase a

This study investigates the efficiency of different entropies with the following parameters: data length N = 6000 samples, embedding dimension m = 2, time delay τ = 1, tolerance r = 0.2 and scale s = 3. The entropies of the 6000 samples are shown in Figure 13. The BubbEn entropy of the phase a samples (in red), where the open-circuit fault occurs, has a larger value than the entropy of phases b and c (in black); they are clearly separated. Conversely, the entropy of phase a is lower than the entropy of phases b and c for SampEn, K2En, DispEn, ApEn, SlopEn and AttEn. The separation of the three phases a, b and c is shown in Figure 13: phases b and c have entropies very close to each other and different from that of phase a. Each of these entropies is able to detect the faulty phase. The largest difference between the entropy of phase a and the entropies of phases b and c in Figure 13 is given by BubbEn.
Many values represented in Figure 13 are an average of two, three or four entropies. The relevant values of SampEn, MSSampEn, cMSSampEn and rMSSampEn are averaged to give a mean entropy of phases a, b and c. In the same way, for the slope entropy, the same entropy value is obtained with the SlopEn, MSSlopEn, cMSSlopEn and rMSSlopEn functions. Likewise, for the dispersion entropy, the same value is obtained with DispEn, MSDispEn, cMSDispEn and rMSDispEn. K2En, MSK2En, cMSK2En and rMSK2En give similar entropy values. For the bubble entropy, a pertinent value is obtained only with rMSBubbEn: unfortunately, BubbEn, MSBubbEn and cMSBubbEn do not distinguish the open-circuit fault on phase a from phases b and c. Figure 13 presents the approximate entropy using only ApEn; the other values of MSApEn, cMSApEn and rMSApEn do not distinguish the open-circuit fault on phase a. A relevant value of the attention entropy is obtained with rMSAttEn. For the other entropies, such as CondEn, CoSiEn, FuzzEn, IncrEn, PhasEn and EnofEn, the distance between the faulty phase a and phases b or c is nearly zero: the open-circuit fault is not detected.
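The sample entropy used throughout this comparison can be sketched as follows. This is a common simplified implementation, not the authors' exact code: the tolerance is taken as r times the standard deviation of the series (matching the r = 0.2 convention above), and matching uses the Chebyshev distance between delay-embedded vectors:

```python
import numpy as np

def sample_entropy(x, m=2, tau=1, r=0.2):
    """SampEn(m, r, tau) of a 1-D series; r is a fraction of the
    series' standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(dim):
        # Delay-embedded vectors of length `dim` with lag `tau`.
        n = len(x) - (dim - 1) * tau
        emb = np.array([x[i : i + dim * tau : tau] for i in range(n)])
        # Count pairs (i < j) whose Chebyshev distance is within tol.
        count = 0
        for i in range(n - 1):
            d = np.max(np.abs(emb[i + 1 :] - emb[i]), axis=1)
            count += np.sum(d <= tol)
        return count

    b = count_matches(m)       # template matches of length m
    a = count_matches(m + 1)   # template matches of length m + 1
    if a == 0 or b == 0:
        return float("inf")    # undefined when no template matches
    return -np.log(a / b)
```

A regular waveform (e.g., a clean sinusoid) yields a small SampEn, while a noisy, irregular waveform yields a large one, which is the property exploited for distinguishing healthy and distorted phase currents.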
The optimal entropy should be searched for among all possible combinations, according to the following rules:
$$\max_{j=1,\dots,52} \mathrm{distance}\left( \mathrm{entropy}_{\mathrm{phase}\,a} - \mathrm{entropy}_{\mathrm{phase}\,b} \right) \tag{36}$$
and
$$\max_{j=1,\dots,52} \mathrm{distance}\left( \mathrm{entropy}_{\mathrm{phase}\,a} - \mathrm{entropy}_{\mathrm{phase}\,c} \right) \tag{37}$$
The objective is to maximize the distance between the entropy of phase a, where the open-circuit fault occurs, and the entropies of phases b and c. For a typical brushless motor, the 52 entropy functions are evaluated using Equations (36) and (37), according to the principle shown in Table 4. When an entropy is able to detect the faulty phase, it is denoted by '✓'. Otherwise, if the open-circuit fault is not detected (the distance between the faulty phase a and phases b or c is nearly zero), it is denoted by '✘'. The neutral mark '-' is employed if the distance between phase a and phases b or c is not zero, but not large enough to detect the open-circuit fault. However, the distance is only an approximate measure on the characteristic plot.
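The selection rule of Equations (36) and (37) can be sketched as a small search over candidate entropy functions. The entropy values below are made-up illustrations (normalized by the healthy phase a entropy, as in the text), and the function name is hypothetical:

```python
def best_entropy(entropy_values):
    """Pick the entropy maximizing the minimum distance between the
    faulty-phase entropy and the two healthy-phase entropies.

    entropy_values maps a function name to (e_a, e_b, e_c); phase a is
    assumed faulty here (single T_a fault)."""
    def score(vals):
        e_a, e_b, e_c = vals
        # Both |e_a - e_b| and |e_a - e_c| must be large (Eqs. (36)-(37)).
        return min(abs(e_a - e_b), abs(e_a - e_c))

    return max(entropy_values, key=lambda name: score(entropy_values[name]))

# Illustrative (made-up) normalized entropies for three candidates:
candidates = {
    "BubbEn": (1.8, 1.0, 1.05),   # faulty phase clearly separated
    "SampEn": (0.7, 1.0, 0.98),
    "CondEn": (1.0, 1.0, 1.01),   # fault not detected
}
print(best_entropy(candidates))   # → BubbEn
```

Taking the minimum of the two distances enforces that the faulty phase is separated from *both* healthy phases, which mirrors requiring Equations (36) and (37) simultaneously.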

5.2. Two Open-Circuit Faults on B_a—Phase a and on T_b—Phase b

The embedding dimension m, data length N, time delay τ and tolerance r remain unchanged. Figure 14 shows the performance of several entropies with two open-circuit faults: on B_a (phase a) and on T_b (phase b). The entropies of the faulty phases a and b are in red; the entropy of phase c is in black.
The optimal entropy should be searched for among all possible combinations, according to the following rules:
$$\max_{j=1,\dots,52} \mathrm{distance}\left( \mathrm{entropy}_{\mathrm{phase}\,a} - \mathrm{entropy}_{\mathrm{phase}\,c} \right) \tag{38}$$
and
$$\max_{j=1,\dots,52} \mathrm{distance}\left( \mathrm{entropy}_{\mathrm{phase}\,b} - \mathrm{entropy}_{\mathrm{phase}\,c} \right) \tag{39}$$
The objective is to maximize the distance between the entropy of phase a and that of phase c, as well as the distance between the entropy of phase b and that of phase c. Relevant results are obtained with rMSSampEn, CoSiEn, FuzzEn, EnofEn and rMSAttEn. For the following explanations, SlopEn and rMSBubbEn are also represented, even if they do not distinguish the open-circuits on phases a and b as well.

5.3. Two Open-Circuit Faults on T_a and on B_a—Phase a

The entropy parameters are unchanged. We investigate a phase fault: on T_a and on B_a (phase a). Figure 15 shows the investigation of the different entropies: the entropy of the open-circuit phase a is in red, and the entropies of phases b and c are in black. The largest distances between the phase a entropy and those of phases b and c are obtained with SampEn, ApEn and rMSBubbEn. The entropies are selected using Equations (36) and (37). The values of SampEn and ApEn result from the particular shape of the phase current i_a: as shown in Figure 4a, this current has a regular waveform with a very small amplitude.

5.4. Two Open-Circuit Faults on T_b—Phase b and on T_c—Phase c

The two open-circuit faults considered in this subsection are on T_b (phase b) and on T_c (phase c). Figure 16 shows the different entropies: the entropy of phase a is in black, and the entropies of phases b and c are in red. The largest distance between the phase a entropy and those of phases b and c is obtained with rMSBubbEn. The entropies are selected using Equations (36) and (37). For the following explanations, SlopEn is also represented, even if it does not distinguish the open-circuits on phases b and c from phase a as well.

5.5. Three Open-Circuit Faults on B_a—Phase a, T_b—Phase b and T_c—Phase c

Three open-circuit faults occur on B_a (phase a), T_b (phase b) and T_c (phase c). Figure 17 shows the entropies of phases a, b and c, all with open-circuit faults, in red. The optimal entropy should be searched for among all possible combinations, according to the following rules:
$$\min_{j=1,\dots,52} \mathrm{distance}\left( \mathrm{entropy}_{\mathrm{phase}\,a} - \mathrm{entropy}_{\mathrm{phase}\,c} \right) \tag{40}$$
and
$$\min_{j=1,\dots,52} \mathrm{distance}\left( \mathrm{entropy}_{\mathrm{phase}\,b} - \mathrm{entropy}_{\mathrm{phase}\,c} \right) \tag{41}$$
According to Equations (40) and (41), Figure 17 presents BubbEn and IncrEn. The entropies of phases a, b and c are very close.

5.6. Three Open-Circuit Faults on B_a—Phase a, T_b and B_b—Phase b

Three open-circuit faults occur on B_a (phase a) and on T_b and B_b (phase b). As shown in Figure 18, the entropies of phases a and b, with open-circuit faults, are represented in red, and the entropy of phase c is in black. The optimal entropy should be searched for among all possible combinations, according to Equations (38) and (39). This time, only PhasEn is able to detect the phases where the open-circuit faults occur. For example, this is not the case with SlopEn: the phase a entropy is too close to the phase c entropy.

5.7. Four Open-Circuit Faults on B_a—Phase a, T_b and B_b—Phase b and T_c—Phase c

Four open-circuit faults occur on B_a (phase a), on T_b and B_b (phase b) and on T_c (phase c). Figure 19 shows the entropies of phases a, b and c, all with open-circuit faults, in red, according to Equations (40) and (41). Once again, IncrEn presents very good results.

6. Optimization of Parameters L, m, r, τ and s

The parameters, namely data length L, embedding dimension m, time lag τ and tolerance r, are discussed in the next subsections. The calculated entropy values depend on parameters such as the embedding dimension m and the tolerance r. The scale s may also affect the performance of our fault detection method. Finding an optimal set is a major challenge. The parameter optimization is carried out by maximizing the distance:
$$\max_{j=1,\dots,52} \mathrm{distance}\left( \mathrm{entropy}(L,m,r,\tau,s)_{\mathrm{phase}\,x} - \mathrm{entropy}(L,m,r,\tau,s)_{\mathrm{phase}\,y} \right) \tag{42}$$
and
$$\max_{j=1,\dots,52} \mathrm{distance}\left( \mathrm{entropy}(L,m,r,\tau,s)_{\mathrm{phase}\,z} - \mathrm{entropy}(L,m,r,\tau,s)_{\mathrm{phase}\,w} \right) \tag{43}$$
in the cases of single or double open-circuit faults and of three open-circuit faults (two faults on the same phase). For three and four open-circuit faults spread over the three phases, the parameter optimization is carried out by minimizing the distance:
$$\min_{j=1,\dots,52} \mathrm{distance}\left( \mathrm{entropy}(L,m,r,\tau,s)_{\mathrm{phase}\,x} - \mathrm{entropy}(L,m,r,\tau,s)_{\mathrm{phase}\,y} \right) \tag{44}$$
and
$$\min_{j=1,\dots,52} \mathrm{distance}\left( \mathrm{entropy}(L,m,r,\tau,s)_{\mathrm{phase}\,z} - \mathrm{entropy}(L,m,r,\tau,s)_{\mathrm{phase}\,w} \right) \tag{45}$$
All the x, y, z and w cases are presented in Table 3. There are five main parameters for the entropy methods: length L, embedding dimension m, tolerance r, time delay τ and scale s. The optimal combination of L, m, r, τ and s should be searched for. In order to assess the impact of parameter variations on the entropy, we present DispEn, rMSBubbEn and rMSAttEn with one open-circuit fault on T_a (Figure 13). The values of CoSiEn and EnofEn are evaluated when two open-circuit faults occur on B_a and T_b (Figure 14). rMSSampEn, K2En, MSApEn, FuzzEn and SlopEn are studied considering the phase fault, as in Figure 15. Then, we examine CondEn with two other open-circuit faults, on T_b and T_c, as in Figure 16. If there are three open-circuit faults on B_a, T_b and T_c, IncrEn is evaluated (Figure 17). The last entropy, PhasEn, is considered with three open-circuit faults on B_a, T_b and B_b, as in Figure 18.
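The optimization of Equations (42)–(45) can be sketched as an exhaustive grid search over the five parameters. The function signature and the toy stand-in "entropy" below are assumptions for illustration, not the authors' implementation:

```python
import itertools
import numpy as np

def grid_search(entropy_fn, phases, faulty, healthy, grid):
    """Exhaustive search over (L, m, r, tau, s) maximizing the distance
    between a faulty-phase entropy and a healthy-phase entropy.

    entropy_fn(signal, L, m, r, tau, s) -> float stands for any of the
    entropy functions discussed in the text; `phases` maps labels to
    current samples."""
    best, best_params = -np.inf, None
    for L, m, r, tau, s in itertools.product(
        grid["L"], grid["m"], grid["r"], grid["tau"], grid["s"]
    ):
        e_f = entropy_fn(phases[faulty][:L], L, m, r, tau, s)
        e_h = entropy_fn(phases[healthy][:L], L, m, r, tau, s)
        dist = abs(e_f - e_h)
        if dist > best:
            best, best_params = dist, (L, m, r, tau, s)
    return best_params, best

# Toy check with a stand-in "entropy": the variance of the series
# decimated by the scale s (constant vs. sinusoidal signal).
phases = {"a": np.ones(6000), "b": np.sin(np.linspace(0, 12 * np.pi, 6000))}
toy = lambda x, L, m, r, tau, s: float(np.var(x[::s]))
params, dist = grid_search(toy, phases, "a", "b",
                           {"L": [2000, 6000], "m": [2], "r": [0.2],
                            "tau": [1], "s": [1, 2]})
```

For the three- and four-fault cases of Equations (44) and (45), the same loop applies with the comparison reversed (keeping the parameters that *minimize* the distance).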

6.1. Varied Data Length (L)

Figure 20 shows the analysis for the data length L. The parameters used are: m = 2, τ = 1, r = 0.2 and s = 2.
The data length varies from 1000 to 6000 samples: L_1 = 1000 points; L_2 = 2000 samples, i.e., approximately two periods of the signal; L_3 = 3000 points; L_4 = 4000 samples, representing four signal periods; L_5 = 5000 points; and, finally, L_6 = 6000 samples, i.e., six signal periods, as in Figure 2b, Figure 3b, Figure 4b, Figure 5b, Figure 6b, Figure 7b, Figure 8b, Figure 9b, Figure 10b, Figure 11b and Figure 12b.
rMSSampEn, K2En, CondEn, DispEn, CoSiEn, rMSBubbEn, MSApEn, FuzzEn, PhasEn, SlopEn, EnofEn and rMSAttEn are each evaluated with a specific open-circuit fault. The distance between the healthy phase (represented by a red curve) and the open-circuit phases (represented by a black curve) is then maximal, except in one case: the distance between the three red curves is minimal for IncrEn, evaluated with three open-circuit faults on B_a, T_b and T_c.
Figure 20a shows rMSSampEn as a function of the data length: it increases over [L_1, L_2] (1000 to 2000 samples), decreases over [L_2, L_3] (2000 to 3000 samples) and increases again over [L_3, L_4] (3000 to 4000 samples); it then slowly decreases to a constant value at L_6. K2En and rMSBubbEn are unchanged as L increases, keeping a constant entropy value, as in Figure 20b,f. In Figure 20c, CondEn increases, decreases and then increases slowly, keeping a constant distance between the entropy curves. The DispEn of the healthy phase gradually decreases when the data length increases, as shown in Figure 20d. For DispEn, it is appropriate to choose L_1 because the entropy values are length independent. To ensure a large difference between the entropies of phases a and b and the entropy of phase c (Figure 20e), it is appropriate to choose L_6 for CoSiEn.
In Figure 20g, MSApEn increases slowly over [L_1, L_6] (1000 to 6000 samples). With regard to the entropy shape, a large data length ensures a maximum distance between the healthy and faulty phases. Figure 20h,k show FuzzEn and SlopEn: after an insignificant variation, these entropies are nearly constant when L is in the range [L_5, L_6]. A suitable data length for IncrEn is L_3: the distance between the three entropies is minimal. The maximal distance between the healthy phase (red curve) and the open-circuit phases (black curves) of PhasEn is obtained for L_6, as in Figure 20j. The results of EnofEn and rMSAttEn are shown in Figure 20l,m. Even if these entropies vary (increase or decrease), the distance between the healthy and faulty phases is constant; it seems better to choose L_5 as the data length.

6.2. Varied Embedding Dimension (m)

Let us change m from 2 to 8 to study the effect of m on these approaches (rMSSampEn, K2En, CondEn, DispEn, CoSiEn, rMSBubbEn, MSApEn, FuzzEn, IncrEn and SlopEn), as in Figure 21a–j. The functions PhasEn, EnofEn and AttEn do not have an embedding dimension m.
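All the embedding-based entropies above share the same first step: building delay vectors from the series, which is exactly where m and τ enter. A minimal sketch (the function name is an assumption):

```python
import numpy as np

def delay_embed(x, m, tau):
    """Return the (N - (m-1)*tau) x m matrix of delay vectors
    [x_i, x_{i+tau}, ..., x_{i+(m-1)tau}] used by embedding-based
    entropies such as SampEn, ApEn and K2En."""
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau
    return np.stack([x[i : i + n] for i in range(0, m * tau, tau)], axis=1)

# Each row is one delay vector: [0 2 4], [1 3 5], ..., [5 7 9].
print(delay_embed(np.arange(10), m=3, tau=2))
```

Increasing m lengthens each template (fewer, more specific matches), which is why several of the entropies below decrease as m grows.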
The results of SampEn, rMSBubbEn and ApEn are shown in Figure 21a,f,g. These entropies decrease rapidly, and a large difference between the phase a entropy and the phase b and c entropies is ensured for m = 2. K2En, CondEn, FuzzEn and CoSiEn are constant when m is in the range [2, 8], as in Figure 21b,c,e,h. In Figure 21d, DispEn gradually decreases when the embedding dimension m increases, keeping a constant difference between the entropy of phase a and the entropies of phases b and c.
Figure 21i shows the IncrEn entropies of phases a, b and c (in red), which are very close to each other; m = 2 is chosen in order to minimize the distance between them. For the last entropy, SlopEn (Figure 21j), the entropies of the healthy phases b and c decrease when m increases, while the entropy of the open-circuit phase a increases. It is appropriate to choose m = 2 for SlopEn.

6.3. Varied Time Lag ( τ )

Here, we examine the influence of another parameter, the time lag τ, which varies from 1 to 7. Having already illustrated the influence of the data length L and the embedding dimension m, let us now examine the performance of rMSSampEn, K2En, CondEn, DispEn, CoSiEn, rMSBubbEn, MSApEn, FuzzEn, IncrEn, PhasEn, SlopEn and EnofEn as the time lag τ varies. The function AttEn does not require a time lag τ. The data length, embedding dimension, scale and tolerance were fixed at N = 6000, m = 2, s = 2 and r = 0.2 in the following analysis.
Figure 22a,f,g show the impact of different values of τ on rMSSampEn, rMSBubbEn and rMSApEn: a steep decrease of these entropies for phase a and a nearly constant value for phases b and c can be observed as τ increases. The largest difference between the curves is obtained for the smallest value, τ = 1. We find that the difference between the CondEn, DispEn, CoSiEn, PhasEn and EnofEn of the healthy phase and those of the open-circuit phase is nearly constant, suggesting that these entropies are insensitive to τ, as plotted in Figure 22c–e,j,l.
In Figure 22b, rMSK2En as a function of τ decreases at the beginning of the interval, τ = [1, 2], followed by a slow increase for τ = [2, 4] and ending with an abrupt increase of the open-circuit phase entropy. The figure suggests that τ = 7 suits the calculation of rMSK2En well. The FuzzEn entropy of phases a and b is shown in Figure 22h: only larger time lags give a relevant entropy. For τ = 1, FuzzEn is 1.1; it exceeds 1.7 for τ = 7.
Figure 22i shows the IncrEn entropies of phases a, b and c, which are very close to each other; τ = 1 is chosen in order to minimize the distance between them. SlopEn presents a peak for τ = 3: the difference between the SlopEn of phase a and that of phase b is then maximal. However, as the time lag increases further, the difference between the black and red curves becomes smaller. Only a lower time lag (τ = 3) gives a relevant entropy, as in Figure 22k.

6.4. Varied Tolerance (r)

The analysis of the tolerance r, varying from 0.2 to 0.7, was only performed on rMSSampEn, rMSK2En, CoSiEn, rMSApEn and FuzzEn. The data length, time lag, scale and embedding dimension are N = 6000, τ = 2, s = 2 and m = 2.
Figure 23a,c present the impact of several r values on rMSSampEn and rMSApEn, respectively. The increase of r results in a monotonic increase of the rMSSampEn and rMSApEn of the faulty phase a (Figure 23a), except for the constant values over r = [0.4, 0.5]. The rMSSampEn and rMSApEn of the healthy phase b are both nearly constant. The largest difference between the two curves is obtained for a large r; the figures suggest that r = 0.7 is suitable for the calculation of rMSSampEn and rMSApEn. Figure 23b shows rMSK2En: the entropy of the healthy phases b and c increases when the tolerance r increases, while the entropy of the open-circuit phase a decreases. It is appropriate to choose r = 0.2 for rMSK2En. For the last entropy, FuzzEn, the difference between the phase a and phase b values is nearly constant, as plotted in Figure 23d: FuzzEn is valid for any value of r.

6.5. Varied Scale (s)

Figure 24a–m illustrate the performance of all the entropies: rMSSampEn, K2En, CondEn, DispEn, CoSiEn, rMSBubbEn, rMSApEn, FuzzEn, IncrEn, PhasEn, SlopEn, EnofEn and rMSAttEn. To investigate the effects of the scale s on these entropies, we used: L = 6000 points, m = 2, τ = 1 and r = 0.2.
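The scale s enters through coarse-graining of the series before the entropy is computed. A minimal sketch of the standard (MSE) and composite (cMSE) coarse-graining steps under their usual definitions; the refined variant (rMSE) additionally low-pass filters the series before decimation and is omitted here:

```python
import numpy as np

def coarse_grain(x, s):
    """Standard MSE coarse-graining: means of non-overlapping
    windows of length s."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // s) * s            # drop the incomplete last window
    return x[:n].reshape(-1, s).mean(axis=1)

def composite_coarse_grains(x, s):
    """The s shifted coarse-grained series whose entropies are
    averaged by composite MSE (cMSE)."""
    x = np.asarray(x, dtype=float)
    return [coarse_grain(x[k:], s) for k in range(s)]
```

For example, `coarse_grain(np.arange(6), 2)` yields `[0.5, 2.5, 4.5]`. Since coarse-graining divides the effective length by s, large scales combined with short data lengths leave few samples, which partly explains the degraded separations observed at high s below.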
Figure 24a shows rMSSampEn as a function of the scale. The entropy of the open-circuit phase a decreases for s = [1, 2], is nearly constant in the range s = [3, 7], increases in the range s = [7, 9] and finally decreases for s = [9, 10]. In the meantime, the entropy of the healthy phase is nearly constant over the whole range of s. The scale s = 2 is appropriate. As for rMSSampEn, rMSK2En is nearly constant in the healthy case. After a very slow variation, the rMSK2En of the open-circuit phase a increases in the range s = [5, 9], decreasing at the last scale. To ensure a large difference between the phase a entropy and the phase b and c entropies, it is appropriate to choose s = 9 for rMSK2En, as in Figure 24b.
In Figure 24c–e,h,j, CondEn, DispEn, CoSiEn, FuzzEn and PhasEn are represented. The differences between the healthy phase entropy and the open-circuit phase entropy are nearly constant over the range s = [2, 10].
The results of rMSBubbEn are shown in Figure 24f. The entropy of phase a decreases gradually as s increases, while the entropy of phase b undergoes only slight variations. The first scale, s = 2, gives the largest distance between the entropies of phases a and b. The same result is obtained for rMSApEn, as in Figure 24g: at the end of the s interval, the two curves merge and the open-circuit fault on phase a can no longer be detected. Only a lower scale (s = 2) gives a relevant entropy. The scale s = 4 or 5 gives the smallest distance between the faulty phases a, b and c for IncrEn, as shown in Figure 24i.
The cMSSlopEn entropy of phases a and b is shown in Figure 24k: only lower scale entropies show a relevant significance. For s = 2, cMSSlopEn is 1, exceeding 2.4 for s = 4. Furthermore, the scale analysis reveals additional entropy information not previously observed at scale s = 2. cMSSlopEn clearly presents two peaks, for s = 4 and s = 6: the difference between the cMSSlopEn of phase a and that of phase b is maximal for s = 4. However, the difference between the black and red curves becomes smaller as the scale increases.
The results of MSEnofEn are shown in Figure 24l. Even if these entropies vary (increase or decrease), the distance between the healthy and faulty phases is constant. It seems better to choose the scale s = 2.
Figure 24m shows that rMSAttEn oscillates repeatedly, the phase a entropy being interlaced with the phase b and c entropies. Whenever the two curves (the phase a entropy and the other phases' entropy) merge, the phase a open-circuit fault cannot be detected. Only a lower scale (s = 2) and middle scales (s = 5, 6 or 7) give relevant entropies.

6.6. New Setting of Parameters

The parameters are now set to: N = 2000, m = 2, τ = 1, r = 0.7, s = 2 for rMSSampEn; N = 2000, m = 2, τ = 7, r = 0.2, s = 9 for K2En; N = 2000, m = 2, τ = 1, s = 2 for CondEn; N = 5000, m = 8, τ = 7, s = 9 for DispEn; N = 5000, m = 2, τ = 1, r = 0.2, s = 10 for CoSiEn; N = 2000, m = 2, τ = 1, s = 2 for rMSBubbEn; N = 6000, m = 2, τ = 1, r = 0.7, s = 2 for rMSApEn; N = 2000, m = 2, τ = 7, r = 0.2, s = 2 for FuzzEn; N = 3000, m = 2, τ = 1, s = 2 for IncrEn; N = 2000, τ = 1 and s = 2 for PhasEn; N = 2000, m = 2, τ = 3, s = 4 for SlopEn; N = 5000, τ = 1 and s = 4 for EnofEn; and N = 3000 and s = 6 for rMSAttEn. These new parameter settings increase the distance between the faulty and non-faulty phases or, in the IncrEn case, decrease this distance. Some entropy functions will be applied to distinguish the healthy state from an open-circuit faulty state and for fault classification, considering these new parameter settings.
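One convenient way to organize these final settings is a single lookup table, so that each entropy function can be called with its own tuned parameters. The dictionary below simply transcribes the values of this subsection (the key names and structure are hypothetical):

```python
# Final per-entropy parameter settings of Section 6.6. Entries without a
# key (e.g. no "r" for CondEn) correspond to entropies that do not use
# that parameter.
SETTINGS = {
    "rMSSampEn": dict(N=2000, m=2, tau=1, r=0.7, s=2),
    "K2En":      dict(N=2000, m=2, tau=7, r=0.2, s=9),
    "CondEn":    dict(N=2000, m=2, tau=1, s=2),
    "DispEn":    dict(N=5000, m=8, tau=7, s=9),
    "CoSiEn":    dict(N=5000, m=2, tau=1, r=0.2, s=10),
    "rMSBubbEn": dict(N=2000, m=2, tau=1, s=2),
    "rMSApEn":   dict(N=6000, m=2, tau=1, r=0.7, s=2),
    "FuzzEn":    dict(N=2000, m=2, tau=7, r=0.2, s=2),
    "IncrEn":    dict(N=3000, m=2, tau=1, s=2),
    "PhasEn":    dict(N=2000, tau=1, s=2),
    "SlopEn":    dict(N=2000, m=2, tau=3, s=4),
    "EnOfEn":    dict(N=5000, tau=1, s=4),
    "rMSAttEn":  dict(N=3000, s=6),
}
```

A fault-classification stage can then iterate over `SETTINGS.items()` and evaluate each entropy with its own parameters, instead of one global parameter set.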

7. Conclusions

In this paper, we provide a systematic overview of many known entropy measures, highlighting their applicability to inverter fault detection. Several usual entropies (sample entropy, Kolmogorov entropy, conditional entropy, dispersion entropy, cosine similarity entropy, bubble entropy, approximate entropy, fuzzy entropy, increment entropy, phase entropy, slope entropy, entropy of entropy and attention entropy) and their multiscale versions (including refined and composite multiscale entropies) are proposed to quantify the complexity of the brushless motor currents. Their roles in fault detection are summarized by the entropy distance between a healthy phase and an open-circuit faulty phase. Moreover, this paper reveals the great ability of some entropies to distinguish between a healthy and an open-circuit faulty phase. Finally, the simulation results show that these entropies are able to detect and locate the arms of the bridge with one, two, three or even four open-circuit faults.

Author Contributions

Formulation: C.M.; Problem solving: C.M.; Contribution to the numerical computation and results: C.M., S.R., B.L.G. and J.P.; Writing of the manuscript: C.M.; Discussion: S.C.; Revision: C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

  37. Theerasak, C.; Mandic, D. Cosine Similarity Entropy: Self-Correlation-Based Complexity Analysis of Dynamical Systems. Entropy 2017, 19, 652. [Google Scholar]
  38. Cuesta-Frau, D.; Vargas, B. Permutation Entropy and Bubble Entropy: Possible interactions and synergies between order and sorting relations. Math. Biosci. Eng. 2019, 17, 1637–1658. [Google Scholar] [CrossRef]
  39. Manis, G.; Bodini, M.; Rivolta, M.W.; Sassi, R. A Two-Steps-Ahead Estimator for Bubble Entropy. Entropy 2021, 23, 761. [Google Scholar] [CrossRef]
  40. Chen, W.; Wang, Z.; Xie, H.; Yu, W. Characterization of surface EMG signal based on fuzzy entropy, Neural Systems and Rehabilitation Engineering. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 266–272. [Google Scholar] [CrossRef]
  41. Azami, H.; Li, P.; Arnold, S.; Escuder, J.; Humeau-Heurtier, A. Fuzzy Entropy Metrics for the Analysis of Biomedical Signals: Assessment and Comparison. IEEE Access 2019, 7, 104833–104847. [Google Scholar] [CrossRef]
  42. Liu, X.; Wang, X.; Zhou, X.; Jiang, A. Appropriate use of the increment entropy for electrophysiological time series. Comput. Biol. Med. 2018, 95, 13–23. [Google Scholar] [CrossRef]
  43. Reyes-Lagos, J.; Pliego-Carrillo, A.C.; Ledesma-Ramírez, C.I.; Peña-Castillo, M.A.; García-González, M.T.; Pacheco-López, G.; Echeverría, J.C. Phase Entropy Analysis of Electrohysterographic Data at the Third Trimester of Human Pregnancy and Active Parturition. Entropy 2020, 22, 798. [Google Scholar] [CrossRef]
  44. Cuesta-Frau, D. Slope Entropy: A New Time Series Complexity Estimator Based on Both Symbolic Patterns and Amplitude Information. Entropy 2019, 21, 1167. [Google Scholar] [CrossRef]
  45. Chang, F.; Wei, S.-Y.; Huang, H.P.; Hsu, L.; Chi, S.; Peng, C.K. Entropy of Entropy: Measurement of Dynamical Complexity for Biological Systems. Entropy 2017, 19, 550. [Google Scholar]
  46. Yang, J.; Choudhary, G.; Rahardja, S. Classification of Interbeat Interval Time-series Using Attention Entropy. IEEE Trans. Affect. Comput. 2020, 14, 321–330. [Google Scholar] [CrossRef]
  47. Liu, T.; Cui, L.; Zhang, J.; Zhang, C. Research on fault diagnosis of planetary gearbox based on Variable Multi-Scale Morphological Filtering and improved Symbol Dynamic Entropy. Int. J. Adv. Manuf. Technol. 2022, 124, 3947–3961. [Google Scholar] [CrossRef]
  48. Silva, L.; Duque, J.; Felipe, J.; Murta, L.; Humeau-Heurtier, A. Twodimensional multiscale entropy analysis: Applications to image texture evaluation. Signal Process. 2018, 147, 224–232. [Google Scholar] [CrossRef]
  49. Wu, S.; Wu, C.; Lin, S.; Wang, C.; Lee, K. Time series analysis using composite multiscale entropy. Entropy 2013, 15, 1069–1084. [Google Scholar] [CrossRef]
  50. Humeau-Heurtier, A. The Multiscale Entropy Algorithm and Its Variants: A Review. Entropy 2015, 17, 3110–3123. [Google Scholar] [CrossRef]
  51. Valencia, J.; Porta, A.; Vallverdu, M.; Claria, F.; Baranovski, R.; Orlowska-Baranovska, E.; Caminal, P. Refined multiscale entropy: Application to 24-h Holter records of heart period variability in hearthy and aortic stenosis subjects. IEEE Trans. Biomed. Eng. 2009, 56, 2202–2213. [Google Scholar] [CrossRef]
  52. Wu, S.; Wu, C.; Lin, S.; Lee, K.; Peng, C. Analysis of complex time series using refined composite multiscale entropy. Phys. Lett. A 2014, 378, 1369–1374. [Google Scholar] [CrossRef]
  53. Dini, P.; Saponara, S. Model-Based Design of an Improved Electric Drive Controller for High-Precision Applications Based on Feedback Linearization Technique. Electronics 2021, 10, 2954. [Google Scholar] [CrossRef]
  54. Dini, P.; Saponara, S. Design of an observer-based architecture and non-linear control algorithm for cogging torque reduction in synchronous motors. Energies 2020, 13, 2077. [Google Scholar] [CrossRef]
  55. Mohanraj, D.; Aruldavid, R.; Verma, R.; Sathyasekar, K.; Barnawi, A.B.; Chokkalingam, B.; Mihet-Popa, L. A Review of BLDC Motor: State of Art, Advanced Control Techniques, and Applications. IEEE Access 2017, 2, 40. [Google Scholar] [CrossRef]
  56. Wu, H.; Wen, M.-Y.; Wong, C.-C. Speed control of BLDC motors using hall effect sensors based on DSP. In Proceedings of the IEEE International Conference on System Science and Engineering, Puli, Taiwan, 7–9 July 2016. [Google Scholar]
  57. Akin, B.; Bhardwaj, M.; Warriner, J. Trapezoidal Control of BLDC Motors Using Hall Effect Sensors. Tex. Instrum.—SPRAB4 2011, 2954, 34. [Google Scholar]
  58. Nga, N.T.T.N.; Chi, N.T.P.; Quang, N.H. Study on Controlling Brushless DC Motor in Current Control Loop Using DC-Link Current. Am. J. Eng. Res. 2016, 7, 522–528. [Google Scholar]
Figure 1. Power circuit structure of a brushless motor.
Figure 2. (a) Current i_a in the no-fault case; (b) current i_a during an open-circuit fault on T_a.
Figure 3. (a) Current i_b during an open-circuit fault on T_a; (b) current i_c during an open-circuit fault on T_a.
Figure 4. (a) Current i_a during open-circuit faults on T_a and B_a; (b) current i_b during open-circuit faults on T_a and B_a.
Figure 5. (a) Current i_c during open-circuit faults on T_a and B_a; (b) current i_a during open-circuit faults on T_a and T_b.
Figure 6. (a) Current i_b during open-circuit faults on T_a and T_b; (b) current i_c during open-circuit faults on T_a and T_b.
Figure 7. (a) Current i_a during open-circuit faults on T_a and B_b; (b) current i_b during open-circuit faults on T_a and B_b.
Figure 8. (a) Current i_c during open-circuit faults on T_a and B_b; (b) current i_a during open-circuit faults on T_a, B_a and T_b.
Figure 9. (a) Current i_b during open-circuit faults on T_a, B_a and T_b; (b) current i_c during open-circuit faults on T_a, B_a and T_b.
Figure 10. (a) Current i_a during open-circuit faults on B_a, T_b and B_c; (b) current i_b during open-circuit faults on B_a, T_b and B_c.
Figure 11. (a) Current i_c during open-circuit faults on B_a, T_b and B_c; (b) current i_a during open-circuit faults on B_a, T_b, B_b and T_c.
Figure 12. (a) Current i_b during open-circuit faults on B_a, T_b, B_b and T_c; (b) current i_c during open-circuit faults on B_a, T_b, B_b and T_c.
Figure 13. Entropy evaluation using SampEn (mean of four sample entropies), K2En (mean of four Kolmogorov entropies), DispEn (mean of four dispersion entropies), rMSBubbEn, ApEn, SlopEn (mean of four slope entropies) and rMSAttEn for one open-circuit fault on T_a: entropy of the faulty phase a in red, entropies of phases b and c in black.
Figure 14. rMSSampEn, CoSiEn, FuzzEn, EnofEn, ApEn, rMSSlopEn and rMSAttEn for open-circuit faults on B_a and T_b: entropies of the faulty phases a and b in red, entropy of phase c in black.
Figure 15. SampEn, K2En, ApEn, FuzzEn and SlopEn for two open-circuit faults on T_a and B_a: entropy of the faulty phase a in red, entropies of phases b and c in black.
Figure 16. SampEn, K2En, CondEn, DispEn, rMSBubbEn and SlopEn for two open-circuit faults on T_b and T_c: entropy of phase a in black, entropies of the faulty phases b and c in red.
Figure 17. BubbEn and IncrEn with three open-circuit faults on B_a, T_b and T_c: entropies of phases a, b and c in red.
Figure 18. PhasEn and SlopEn for three open-circuit faults on B_a, T_b and B_b: entropies of the faulty phases a and b in red, entropy of phase c in black.
Figure 19. CondEn, IncrEn and PhasEn for four open-circuit faults on B_a, T_b, B_b and T_c: entropies of phases a, b and c in red.
Figure 20. Entropies computed with (a) rMSSampEn, (b) K2En, (c) CondEn, (d) DispEn, (e) CoSiEn, (f) rMSBubbEn, (g) MSApEn, (h) FuzzEn, (i) IncrEn, (j) PhasEn, (k) SlopEn, (l) EnofEn and (m) rMSAttEn as a function of the data length N: healthy phase in black, open-circuit phase in red.
Figure 21. Entropies computed with (a) rMSSampEn, (b) K2En, (c) CondEn, (d) DispEn, (e) CoSiEn, (f) rMSBubbEn, (g) rMSApEn, (h) FuzzEn, (i) IncrEn and (j) SlopEn as a function of the embedding dimension m: healthy phase in black, open-circuit phase in red.
Figure 22. Entropies computed with (a) rMSSampEn, (b) rMSK2En, (c) CondEn, (d) DispEn, (e) CoSiEn, (f) rMSBubbEn, (g) rMSApEn, (h) FuzzEn, (i) IncrEn, (j) PhasEn, (k) SlopEn and (l) EnofEn as a function of the time lag τ: healthy phase in black, open-circuit phase in red.
Figure 23. Entropies computed with (a) rMSSampEn, (b) rMSK2En, (c) rMSApEn and (d) FuzzEn as a function of the tolerance r: healthy phase in black, open-circuit phase in red.
Figure 24. Entropies computed with (a) rMSSampEn, (b) rMSK2En, (c) MSCondEn, (d) MSDispEn, (e) cMSCoSiEn, (f) rMSBubbEn, (g) rMSApEn, (h) MSFuzzEn, (i) MSIncrEn, (j) MSPhasEn, (k) cMSSlopEn, (l) MSEnofEn and (m) rMSAttEn as a function of the scale s: healthy phase in black, open-circuit phase in red.
Table 1. Mean specifications of the brushless machine.
Specification | Parameter | Value
Brushless motor | Stator phase resistance | 2.8 Ω
Brushless motor | Stator phase inductance | 8.5 · 10⁻³ H
Brushless motor | Flux linkage | 0.175
Brushless motor | Inertia | 0.8 · 10⁻³ kg·m²
Brushless motor | Viscous damping | 0.001 N·m·s
Brushless motor | Pole pairs | 4
Brushless motor | Rotor flux position | 90°
Speed controller | Proportional gain | 0.015
Speed controller | Integral gain | 16
Speed controller | Minimum output | −500
Speed controller | Maximum output | 500
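For simulation scripts, the drive parameters of Table 1 can be gathered into a single configuration object. The sketch below is a hypothetical grouping (the class and field names are ours, the flux-linkage unit is assumed to be Wb, and the inertia entry "0.8−3" is read as 0.8 · 10⁻³ kg·m²):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BrushlessMotorParams:
    """Brushless drive parameters transcribed from Table 1 (units as listed)."""
    stator_resistance_ohm: float = 2.8
    stator_inductance_h: float = 8.5e-3
    flux_linkage_wb: float = 0.175   # unit assumed (Wb); Table 1 gives no unit
    inertia_kg_m2: float = 0.8e-3    # assuming "0.8-3" in Table 1 means 0.8e-3
    viscous_damping_nms: float = 0.001
    pole_pairs: int = 4
    kp_speed: float = 0.015          # speed-loop proportional gain
    ki_speed: float = 16.0           # speed-loop integral gain
    output_limit: float = 500.0      # speed-controller output saturation (+/-)

params = BrushlessMotorParams()
```

A frozen dataclass keeps the nominal values immutable, so a faulty-case simulation cannot silently overwrite the healthy baseline.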
Table 2. Truth table of Hall effect sensors and gate state of the brushless motor.
Hall 1 | Hall 2 | Hall 3 | T_a | B_a | T_b | B_b | T_c | B_c
1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1
0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0
0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0
0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1
1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0
1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0
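The truth table of Table 2 maps directly onto a lookup structure. The following Python rendering is an illustrative sketch (the function name and the fault-injection argument are hypothetical additions of ours); forcing a switch open is how the permanent open-circuit faults studied here can be injected in simulation:

```python
# Commutation lookup from Table 2: each Hall-sensor state (Hall 1, Hall 2,
# Hall 3) maps to the gate commands (Ta, Ba, Tb, Bb, Tc, Bc).
HALL_TO_GATES = {
    (1, 0, 1): (1, 0, 0, 0, 0, 1),
    (0, 0, 1): (0, 1, 0, 0, 1, 0),
    (0, 1, 1): (0, 0, 0, 1, 1, 0),
    (0, 1, 0): (0, 0, 1, 0, 0, 1),
    (1, 1, 0): (0, 1, 1, 0, 0, 0),
    (1, 0, 0): (1, 0, 0, 1, 0, 0),
}

GATE_NAMES = ("Ta", "Ba", "Tb", "Bb", "Tc", "Bc")

def gate_signals(hall_state, open_faults=frozenset()):
    """Return gate commands for a Hall state; faulted switches are held open."""
    gates = HALL_TO_GATES[hall_state]
    return {name: (0 if name in open_faults else g)
            for name, g in zip(GATE_NAMES, gates)}
```

For example, `gate_signals((1, 0, 1), open_faults={"Ta"})` reproduces the single open-circuit fault on the top switch of phase a.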
Table 3. Current means of phases a, b and c with no fault and with one, two, three and four open-circuit switch faults, for a load torque of 3 N·m and a reference speed of 3000 rpm.
No. | Open-circuit fault | i_a current mean | i_b current mean | i_c current mean | x | y | z | w
1. | No fault | 0.00033 | 0.00001 | −0.00034 | | | |
2. | T_a | −1.4164 | 0.2167 | 1.1994 | a | b | a | c
3. | T_b | 1.2017 | −1.4161 | 0.2144 | a | b | b | c
4. | T_c | 0.2885 | 1.1260 | −1.4145 | a | c | b | c
5. | B_a | 1.4158 | −0.2652 | −1.1506 | a | b | a | c
6. | B_b | −1.1234 | 1.2713 | −0.1479 | a | b | b | c
7. | B_c | −0.2702 | −1.1422 | 1.4125 | a | c | b | c
8. | T_a, B_a | 0.0006 | 0.2708 | −0.2714 | a | b | a | c
9. | T_b, B_b | −0.2761 | 0.0007 | 0.2754 | a | b | b | c
10. | T_c, B_c | −0.173 | 0.1757 | −0.0027 | a | c | b | c
11. | T_a, T_b | −0.8987 | −1.6516 | 2.5503 | a | c | b | c
12. | T_b, T_c | 2.465 | −0.8106 | −1.6545 | a | b | a | c
13. | T_a, T_c | −1.6474 | 2.5753 | −0.9278 | a | b | b | c
14. | B_a, B_b | 0.9219 | 1.6456 | −2.5675 | a | c | b | c
15. | B_b, B_c | −2.4387 | 0.9258 | 1.5129 | a | b | a | c
16. | B_a, B_c | 1.6466 | −2.5087 | 0.8621 | a | b | b | c
17. | T_a, B_b | −1.7775 | 1.3916 | 0.3858 | a | c | b | c
18. | T_a, B_c | −1.3516 | −0.4263 | 1.7779 | a | b | b | c
19. | T_b, B_a | 1.9577 | −1.5853 | −0.3724 | a | c | b | c
20. | T_b, B_c | 0.5589 | −1.8893 | 1.3304 | a | b | a | c
21. | T_c, B_a | 1.5627 | 0.5331 | −2.0958 | a | b | b | c
22. | T_c, B_b | −0.6330 | 2.0175 | −1.3846 | a | b | a | c
23. | T_a, B_a, T_b | 0.0004 | −2.6371 | 2.6367 | a | c | b | c
24. | T_a, B_a, B_b | 0.0044 | 2.9920 | −2.9964 | a | c | b | c
25. | T_a, B_a, T_c | 0.0025 | 2.4690 | −2.4715 | a | b | b | c
26. | T_a, B_a, B_c | −0.0032 | −2.8272 | 2.8304 | a | b | b | c
27. | T_b, B_b, T_a | −2.9702 | 0.0025 | 2.9678 | a | c | b | c
28. | T_b, B_b, B_a | 2.3538 | 0.0007 | −2.3546 | a | c | b | c
29. | T_b, B_b, T_c | 2.9561 | −0.0022 | −2.9538 | a | b | a | c
30. | T_b, B_b, B_c | −2.3683 | −0.0019 | 2.3702 | a | b | a | c
31. | T_c, B_c, T_a | −2.5577 | 2.5564 | 0.0014 | a | b | b | c
32. | T_c, B_c, B_a | 2.2754 | −2.2745 | −0.0009 | a | b | b | c
33. | T_c, B_c, T_b | 2.6707 | −2.6692 | −0.0015 | a | b | a | c
34. | T_c, B_c, B_b | −2.9748 | 2.9760 | −0.0012 | a | b | a | c
35. | T_a, B_b, T_c | −1.6465 | 2.4144 | −0.7679 | a | b | a | c
36. | T_a, B_b, B_c | −2.5268 | 0.9357 | 1.5911 | a | b | a | c
37. | T_a, T_b, B_c | −0.7666 | −1.6462 | 2.4128 | a | b | a | c
38. | B_a, B_b, T_c | 0.9229 | 1.6451 | −2.5680 | a | b | a | c
39. | B_a, T_b, B_c | 1.6455 | −2.5318 | 0.8864 | a | b | a | c
40. | B_a, T_b, T_c | 2.3986 | −0.7597 | −1.6389 | a | b | a | c
41. | T_a, B_a, T_b, B_c | 0.0016 | −2.6055 | 2.6038 | a | b | a | c
42. | T_a, B_a, B_b, T_c | 0.0041 | 3.0170 | −3.0211 | a | b | a | c
43. | T_a, T_b, B_b, B_c | −2.8549 | 0.0023 | 2.8526 | a | b | a | c
44. | B_a, T_b, B_b, T_c | 2.6972 | −0.0011 | −2.6961 | a | b | a | c
45. | T_a, B_b, T_c, B_c | −2.4336 | 2.4316 | 0.002 | a | b | a | c
46. | B_a, T_b, T_c, B_c | 2.3253 | −2.3255 | 0.0001 | a | b | a | c
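Table 3 shows that a healthy drive has near-zero mean phase currents (row 1), while an open switch skews the mean of each affected phase well away from zero, with the three means still summing to roughly zero (isolated neutral). A minimal mean-based flag can be sketched as follows; the threshold value is illustrative, not one calibrated in the paper:

```python
import numpy as np

def flag_faulty_phases(ia, ib, ic, threshold=0.1):
    """Flag each phase whose mean current deviates from zero.

    Mirrors the signature visible in Table 3: a healthy phase averages to
    ~0 A over a window, an open-circuit phase does not. `threshold` (in A)
    is an illustrative choice, not a value from the paper.
    """
    means = [np.mean(ia), np.mean(ib), np.mean(ic)]
    return [bool(abs(m) > threshold) for m in means]
```

With a sinusoidal healthy current and a half-wave-clipped faulty one (conduction blocked on one polarity), only the faulty phase is flagged.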
Table 4. Fault-detection capability of several entropies with one, two, three and four open-circuit switch faults, for a load torque of 3 N·m and a reference speed of 3000 rpm.
Entropies | T_a | B_a, T_b | T_a, B_a, T_b | T_c, B_a, B_b | T_c, B_a, T_b, B_b | B_a, T_b, B_b, T_c
SampEn
MSSampEn
cMSSampEn
rMSSampEn
K2En
MSK2En
cMSK2En
rMSK2En
CondEn ---
MSCondEn --
cMSCondEn --
rMSCondEn ---
DispEn -
MSDispEn -
cMSDispEn -
rMSDispEn -
CoSiEn -
MSCoSiEn --
cMSCoSiEn -
rMSCoSiEn -
BubbEn -
MSBubbEn -
cMSBubbEn -
rMSBubbEn --
ApEn --
MSApEn
cMSApEn
rMSApEn
FuzzEn -
MSFuzzEn
cMSFuzzEn
rMSFuzzEn
IncrEn
MSIncrEn
cMSIncrEn
rMSIncrEn ---
PhasEn --
MSPhasEn --
cMSPhasEn --
rMSPhasEn --
SlopEn
MSSlopEn
cMSSlopEn
rMSSlopEn
EnofEn -
MSEnEn -
cMSEnEn -
rMSEn
AttEn
MSAttEn
cMSAttEn
rMSAttEn --
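The single-scale sample entropy rows of Table 4 can be illustrated with a minimal implementation that exposes the parameters the paper varies: data length N, embedding dimension m, tolerance r and time lag τ. This is a generic SampEn sketch under the common convention that r is a fraction of the signal's standard deviation, not the authors' implementation:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2, tau=1):
    """Plain (single-scale) sample entropy estimate of a time series.

    m is the embedding dimension, r the tolerance as a fraction of the
    signal's standard deviation (a common convention, assumed here) and
    tau the time lag used to sub-sample the series.
    """
    x = np.asarray(x, dtype=float)[::tau]
    n = len(x)
    tol = r * np.std(x)

    def match_count(mm):
        # Count template pairs whose Chebyshev distance is within tol.
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= tol))
        return count

    b, a = match_count(m), match_count(m + 1)
    return float("inf") if a == 0 or b == 0 else -np.log(a / b)
```

An irregular signal (e.g. white noise) scores markedly higher than a regular one (e.g. a sinusoid); that contrast between healthy and faulted phase currents is what the detection tables exploit.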
Morel, C.; Rivero, S.; Le Gueux, B.; Portal, J.; Chahba, S. Currents Analysis of a Brushless Motor with Inverter Faults—Part I: Parameters of Entropy Functions and Open-Circuit Faults Detection. Actuators 2023, 12, 228. https://0-doi-org.brum.beds.ac.uk/10.3390/act12060228