Article

Extreme Multiclass Classification Criteria

Anna Choromanska and Ish Kumar Jain
NYU Tandon School of Engineering, Department of Electrical and Computer Engineering, 5 MetroTech Center, Brooklyn, NY 11201, USA
* Author to whom correspondence should be addressed.
Submission received: 2 February 2019 / Revised: 5 March 2019 / Accepted: 8 March 2019 / Published: 12 March 2019
(This article belongs to the Special Issue Machine Learning for Computational Science and Engineering)

Abstract
We analyze the theoretical properties of the recently proposed objective function for efficient online construction and training of multiclass classification trees in the settings where the label space is very large. We show the important properties of this objective and provide a complete proof that maximizing it simultaneously encourages balanced trees and improves the purity of the class distributions at subsequent levels in the tree. We further explore its connection to the three well-known entropy-based decision tree criteria, i.e., Shannon entropy, Gini-entropy and its modified variant, for which efficient optimization strategies are largely unknown in the extreme multiclass setting. We show theoretically that this objective can be viewed as a surrogate function for all of these entropy criteria and that maximizing it indirectly optimizes them as well. We derive boosting guarantees and obtain a closed-form expression for the number of iterations needed to reduce the considered entropy criteria below an arbitrary threshold. The obtained theorem relies on a weak hypothesis assumption that directly depends on the considered objective function. Finally, we prove that optimizing the objective directly reduces the multi-class classification error of the decision tree.

1. Introduction

This paper focuses on the multiclass classification setting where the number of classes is very large. The recent widespread development of data-acquisition web services and devices has made large multiclass data sets commonplace. Straightforward extensions of binary approaches to the multiclass setting, such as the one-against-all approach [1], which for each data point computes a score for each class and returns the class with the maximum score, often do not work in the presence of strict computational constraints, as their running time typically scales linearly with the number of labels k. On the other hand, the most computationally efficient approaches to multiclass classification achieve O(log k) train/test running time [2]. This running time can naturally be achieved by hierarchical classifiers that build a hierarchy over the labels.
This paper considers a hierarchical multiclass decision tree structure, where each node of the tree contains a binary classifier h from some hypothesis class H that sends an example reaching that node to either the left ( h(x) ≤ 0 ) or the right ( h(x) > 0 ) child node, depending on the sign of h(x) (each node has its own splitting hypothesis). A test example descends from the root to a leaf of the tree guided by the classifiers lying on its path, and is labeled with the most frequent label amongst the training examples reaching that leaf. The tree is constructed and trained in a top-down fashion, where splitting the data in every node of the tree is done by maximizing the following objective function recently introduced in the literature [3] (along with the algorithm, called LOMtree, that optimizes it in an online fashion; we refer the reader to the referenced paper for the algorithm's details):
J(h) := 2 Σ_{i=1}^k | π_i P(h(x) > 0) − P(h(x) > 0, i) | = 2 Σ_{i=1}^k π_i | P(h(x) > 0) − P(h(x) > 0 | i) |,    (1)
where x ∈ X ⊆ R^d are the data points (each with a label from the set {1, 2, ..., k}), π_i denotes the proportion of label i amongst the examples reaching a node, and the probabilities P(h(x) > 0) and P(h(x) > 0 | i) denote the fraction of examples reaching the node for which h(x) > 0, marginally and conditionally on class i, respectively. The objective measures the dependence between the split and the class distribution. Note that it satisfies J(h) ∈ [0, 1] and, as implied by its form, maximizing it encourages the fraction of examples of class i going to the right to be substantially different from the background fraction, for each class i. Thus, for a balanced split (i.e., P(h(x) > 0) = 0.5), the examples of class i are encouraged to be sent exclusively to the left ( P(h(x) > 0 | i) = 0 ) or to the right ( P(h(x) > 0 | i) = 1 ), refining the purity of the class distributions at subsequent levels of the tree. The LOMtree algorithm maximizes this objective over hypotheses h ∈ H in an online fashion with stochastic gradient descent (SGD) and obtains good-quality multiclass tree predictors with logarithmic train and test running times. Despite that, this objective and its properties (including its relation to the more standard entropy criteria) remain largely not understood. This paper provides an exhaustive analysis of it.
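For concreteness, the value of J(h) for a fixed hypothesis can be estimated by plugging empirical frequencies computed over the examples reaching a node into Equation (1). The sketch below is purely illustrative (the function name, the NumPy-based implementation, and the calling convention are our own choices, not part of the LOMtree code):

```python
import numpy as np

def objective_value(h, X, y, k):
    """Empirical estimate of J(h) = 2 * sum_i pi_i * |P(h(x)>0) - P(h(x)>0 | i)|
    over the examples (X, y) reaching a node; h maps a feature vector to a real score."""
    y = np.asarray(y)
    scores = np.array([h(x) for x in X])
    right = scores > 0                      # indicator of being sent to the right child
    p_right = right.mean()                  # P(h(x) > 0)
    value = 0.0
    for i in range(k):
        mask = (y == i)
        if not mask.any():
            continue                        # an absent class contributes pi_i = 0
        pi_i = mask.mean()                  # proportion of class i among examples at the node
        p_right_i = right[mask].mean()      # P(h(x) > 0 | i)
        value += pi_i * abs(p_right - p_right_i)
    return 2.0 * value                      # J(h) lies in [0, 1]
```

For a split that sends half of the probability mass entirely left and the other half entirely right, this estimate approaches 1, in line with Lemma 1 below.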
Our contributions are the following:
  • We provide an extensive theoretical analysis of the properties of the considered objective and prove that maximizing this objective in any tree node simultaneously encourages balanced partition of the data in that node and improves the purity of the class distributions at its children nodes.
  • We show a formal relation of this objective to some more standard entropy-based objectives, i.e., Shannon entropy, Gini-entropy and its modified variant, for which online optimization schemes in the context of multiclass classification are largely unknown. In particular we show that i) the improvement in the value of entropy resulting from performing the node split is lower-bounded by an expression that increases with the value of the objective and thus ii) the considered objective can be used as a surrogate function for indirectly optimizing any of the three considered entropy-based criteria.
  • We present three boosting theorems for each of the three entropy criteria, which provide the number of iterations needed to reduce each of them below an arbitrary threshold. Their weak hypothesis assumptions rely on the considered objective function.
  • We establish the error bound that relates maximizing the objective function with reducing the multi-class classification error.
  • Finally, in Appendix A we establish an empirical connection between the multiclass classification error and the entropy criteria and show that the Gini-entropy most closely resembles the behavior of the test error in practice.
The main theoretical analysis of this paper is cast in the boosting framework [4] and relies on the assumption that the objective function can be weakly optimized in the internal nodes of the tree. This weak advantage is amplified in the tree, leading to hierarchies achieving any desired level of entropy (either Shannon entropy, Gini-entropy, or its modified variant). Our work adds new theoretical results to the theory of multiclass boosting, which is largely not understood from the theoretical perspective [5] (we refer the reader to [5] for a comprehensive review of the theory of multiclass boosting).
The paper is organized as follows: related literature is discussed in Section 2, the theoretical properties of the objective J ( h ) are shown in Section 3, the main theoretical results are presented in Section 4, and finally the mathematical properties of the entropy criteria and the proofs of the main theoretical results are provided in Section 5. Conclusions (Section 6) end the paper. Appendix A contains basic numerical experiments (Appendix A.1) and additional proofs (Appendix A.2).

2. Related Work

The extreme multiclass classification problem has been addressed in the literature in different ways. We discuss them here, putting emphasis on techniques that build hierarchical predictors, as these are the most relevant to this paper. Only a few authors [2,3,6,7,8] simultaneously address logarithmic time training and testing. The methods they propose are either hard to apply in practical problems [7] or use fixed tree structures [6,8]. Furthermore, an alternative approach based on a random tree structure was shown to potentially lead to considerable underperformance [3,9]. At the same time, for massive data sets making multiple passes through the data is computationally costly, which justifies the need for developing online approaches, where the algorithm streams over a potentially infinitely large data set (online approaches are also plausible for non-stationary problems). It is unclear how to optimize standard decision tree objectives, such as the Shannon or Gini-entropy, in this setting (an early attempt for Shannon entropy was recently proposed [2]). A work prior to this paper [3] introduces an objective function which enjoys certain advantages over entropy criteria; in particular, it can be easily and efficiently optimized online. The authors, however, present an incomplete theoretical analysis and leave a number of open questions, which this paper aims to address. Older algorithms for incremental decision tree learning [10,11,12] split a node according to the outcome of a split-test based on the values of selected attributes of the data examples reaching that node. These approaches differ from the one in this paper, where the node split is performed according to the value of a learned (e.g., with SGD) hypothesis computed on the entire attribute vector of the data examples reaching that node.
Other tree-based approaches include conditional probability trees [13] and clustering methods [9,14,15] ([9] was later improved in [16]), but their training time can be linear in the label complexity. The remaining techniques for multiclass classification include sparse output coding [17], variants of error correcting output codes [18], variants of iterative least-squares [19], and a method based on guess-averse loss functions [20].
Finally, note that the conditional density estimation problem is also challenging in the large-class setting and in this respect remains parallel to the extreme multiclass classification problem [21]. In the context of conditional density estimation, there have also been works that use tree-structured models to accelerate the computation of the likelihood and its gradients [8,22,23,24]. They typically rely on heuristics such as ontologies [8], Huffman coding [24], and various other mechanisms.

3. Theoretical Properties of the Objective Function

In this section we describe the objective function introduced in Equation (1) and provide its theoretical properties. The proofs are deferred to the Appendix. We first define the notions of purity and balancedness of a node split.
Definition 1 (Purity and balancedness).
The hypothesis h ∈ H induces a pure split if α := Σ_{i=1}^k π_i min( P(h(x) > 0 | i), P(h(x) < 0 | i) ) ≤ δ, where δ ∈ [0, 0.5), and α is called the purity factor.
The hypothesis h ∈ H induces a balanced split if β := P(h(x) > 0) ∈ [c, 1 − c], where c ∈ (0, 0.5], and β is called the balancing factor.
A partition is perfectly pure if α = 0 (examples of the same class are sent exclusively to the left or to the right). A partition is called perfectly balanced if β = 0.5 (equal number of examples are sent to the left and to the right). The notions of balancedness and purity are conveniently illustrated in Figure 1, where it is shown that the purity criterion helps to refine the choice of the splitting hypothesis from among well-balanced candidates.
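The two factors of Definition 1 are straightforward to compute from the per-class routing probabilities. The helper below is a minimal illustrative sketch (the names are ours); the same quantities reappear when discussing Lemmas 2 and 3.

```python
import numpy as np

def purity_and_balance(pi, p_right_given_class):
    """pi: class proportions at the node (summing to 1);
    p_right_given_class: P(h(x) > 0 | i) for each class i.
    Returns the purity factor alpha and the balancing factor beta of Definition 1."""
    pi = np.asarray(pi, dtype=float)
    p = np.asarray(p_right_given_class, dtype=float)
    beta = float(np.dot(pi, p))                         # P(h(x) > 0)
    alpha = float(np.dot(pi, np.minimum(p, 1.0 - p)))   # purity factor
    return alpha, beta

# A perfectly pure and balanced split of four equally likely classes:
print(purity_and_balance([0.25] * 4, [0.0, 0.0, 1.0, 1.0]))   # -> (0.0, 0.5)
```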
Next, we show the first theoretical property of the objective function J ( h ) that characterizes its behavior at the optimum ( J ( h ) = 1 ).
Lemma 1.
The hypothesis h ∈ H induces a perfectly pure and balanced partition if and only if J(h) = 1.
For some data sets however there exist no hypotheses producing perfectly pure and balanced splits. We next show that increasing the value of the objective leads to more balanced splits.
Lemma 2.
For any hypothesis h and any distribution over data examples, the balancing factor β satisfies β ∈ [ 0.5(1 − √(1 − J(h))), 0.5(1 + √(1 − J(h))) ].
We refer to the interval to which β belongs as the β-interval. Thus, the larger (closer to 1) the value of J(h) is, the narrower the β-interval is, leading to more balanced splits at the extremes of this interval (β closer to 0.5).
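For instance, a hypothesis with J(h) = 0.96 has √(1 − J(h)) = 0.2, so Lemma 2 confines its balancing factor to β ∈ [0.4, 0.6], whereas J(h) = 0.9999 forces β ∈ [0.495, 0.505].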
This result combined with the next lemma implies that, at the extremes of the β interval, the value of the upper-bound on the purity factor decreases as the value of J ( h ) increases (since J ( h ) gets closer to 1 and the balancing factor β gets closer to 0.5 at the extremes of the β interval). The recovered splits therefore have better purity ( α closer to 0).
Lemma 3.
(Lemma 1 in [3]). For any hypothesis h and any distribution over data examples, the purity factor α and the balancing factor β satisfy α ≤ min( (2 − J(h)) / (4β(1 − β)), 0.5 ).
Note that the equality condition in Lemma 3 is achieved when P(h(x) > 0 | i) = P(h(x) < 0 | i) = 0.5 for every class i (and thus α = 0.5, β = 0.5, and J(h) = 0).
We have thus shown that maximizing the objective in Equation (1) in each tree node simultaneously encourages trees that are balanced and whose class-distribution purity gradually improves when moving from the root towards the leaves. Lemmas 2 and 3 are illustrated in Figure 2.
In the next section we show that the objective J(h) is related to the more standard entropy-based decision tree objectives and that maximizing it leads to a reduction of these criteria. We consider three different entropy criteria in this paper. The theoretical analysis relies on the boosting framework and depends on a weak learning assumption. The three entropy-based criteria lead to three different theoretical statements, in which we bound the number of splits required to reduce the value of the criterion below a given level. The bounds we obtain, and their dependence on the number of classes k, critically depend on the strong concavity properties of the considered entropy-based objectives.

4. Main Theoretical Results

4.1. Notation

We first introduce notation. Let T denote the tree under consideration. π_{l,i} denotes the probability that a randomly chosen data point x drawn from P, where P is a fixed target distribution over X, has label i given that x reaches node l (note that Σ_{i=1}^k π_{l,i} = 1), t denotes the number of internal tree nodes, L_t denotes the set of all tree leaves at time t, and w_l is the weight of leaf l, defined as the probability that a randomly chosen x drawn from P reaches leaf l (note that Σ_{l∈L_t} w_l = 1). We study a tree construction algorithm that recursively finds the leaf node with the highest weight and splits it into two children. Consider the tree constructed over t steps, where in each step we take one leaf node and split it (thus the number of splits equals the number of internal nodes of the tree); t = 1 corresponds to splitting the root, so at this step the tree consists of one internal node (the root) and its two children (leaves). We measure the quality of the tree at any given time t with three different entropy criteria:
  • Shannon entropy G^e_t:
    G^e_t = Σ_{l ∈ L_t} w_l Σ_{i=1}^k π_{l,i} ln(1/π_{l,i})
  • Gini-entropy G^g_t:
    G^g_t = Σ_{l ∈ L_t} w_l Σ_{i=1}^k π_{l,i} (1 − π_{l,i})
  • Modified Gini-entropy G^m_t:
    G^m_t = Σ_{l ∈ L_t} w_l Σ_{i=1}^k √( π_{l,i} (C − π_{l,i}) ),
    where C is a constant such that C > 2.
These criteria are natural extensions of the criteria used in the context of binary classification [25] to the multiclass setting (note that there is more than one way of extending the entropy-based criteria of [25] to the multiclass setting; e.g., the modified Gini-entropy could as well be defined as G^m_t = Σ_{l ∈ L_t} w_l √( Σ_{i=1}^k π_{l,i}(C − π_{l,i}) ), where C ∈ [1, 2]. This and other extensions will be investigated in future work). We will next present the main results of this paper, followed by their proofs. We begin by introducing the weak hypothesis assumption.
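Before stating it, the following minimal sketch (our own illustrative code, with leaves represented simply as pairs of a weight and a class distribution) shows how the three criteria can be computed for a given set of leaves:

```python
import numpy as np

def entropy_criteria(leaves, C=3.0):
    """leaves: list of (w_l, pi_l) pairs, where the weights w_l sum to 1 and
    pi_l is the class distribution at leaf l.  Returns (G^e_t, G^g_t, G^m_t)."""
    g_e = g_g = g_m = 0.0
    for w, pi in leaves:
        pi = np.asarray(pi, dtype=float)
        nz = pi[pi > 0]                                    # convention: 0 * ln(1/0) = 0
        g_e += w * float(np.sum(nz * np.log(1.0 / nz)))    # Shannon entropy
        g_g += w * float(np.sum(pi * (1.0 - pi)))          # Gini-entropy
        g_m += w * float(np.sum(np.sqrt(pi * (C - pi))))   # modified Gini-entropy, C > 2
    return g_e, g_g, g_m

# A single leaf holding a uniform distribution over k = 4 classes:
print(entropy_criteria([(1.0, [0.25] * 4)]))   # ~ (ln 4, 1 - 1/4, sqrt(4*C - 1)), cf. Lemmas 5-7
```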

4.2. Theorems

Definition 2 (Weak Hypothesis Assumption).
Let m denote any internal node of the tree T, and let β_m = P(h_m(x) > 0) and P_{m,i} = P(h_m(x) > 0 | i). Furthermore, let γ ∈ R_+ be such that, for all m, γ ∈ (0, min(β_m, 1 − β_m)]. We say that the weak hypothesis assumption is satisfied when, for any distribution P over X, at each node m of the tree T there exists a hypothesis h_m ∈ H such that J(h_m)/2 = Σ_{i=1}^k π_{m,i} | P_{m,i} − β_m | ≥ γ.
The weak hypothesis assumption says that in every node of the tree we are able to recover a hypothesis from H which corresponds to the value of the objective that is above 0 (thus the corresponding split is “weakly” pure and “weakly” balanced).
Consider next any time t, let n be the heaviest leaf at time t (the leaf we split), and for brevity denote its weight w_n by w. Similarly, let h denote the regressor at node n (shorthand for h_n). We denote the difference between the contribution of node n to the value of the entropy-based objectives at times t and t + 1 as
Δ^e_t := G^e_t − G^e_{t+1};   Δ^g_t := G^g_t − G^g_{t+1};   Δ^m_t := G^m_t − G^m_{t+1}.
Then the following lemma holds (the proof is provided in Section 5):
Lemma 4.
Under the Weak Hypothesis Assumption, the change in entropy occurring due to the node split can be bounded as
Δ^e_t ≥ w J(h)^2 / (8(1 − γ)^2);   Δ^g_t ≥ w J(h)^2 / (4k(1 − γ)^2);   Δ^m_t ≥ ((C − 2)^2 / C^3) · w J(h)^2 / (4k(1 − γ)^2).
Clearly, maximizing the objective J(h) improves the entropy reduction. The considered objective can therefore be viewed as a surrogate function for indirectly optimizing any of the three considered entropy-based criteria, for which efficient online optimization strategies are largely unknown but highly desired in the multiclass classification setting. To be more specific, the standard packages for binary classification trees, such as CART [26] and C4.5 [27], require running a brute-force search over all possible partitions at every node of the tree to find the one that most improves the entropy-based criterion of interest [25]. This is prohibitive in the case of the extreme multiclass problem. The objective J(h), however, can be optimized efficiently with SGD instead.
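To make this concrete, the fragment below sketches one simple online heuristic that is consistent with maximizing Equation (1): each example is regressed towards the side on which its class is already over-represented relative to the marginal, so that the conditional fractions P(h(x) > 0 | i) are pushed away from P(h(x) > 0). This is only an illustration of the idea and not the actual LOMtree update rule of [3]; the linear regressor, the squared loss, the learning rate, and all names are our own choices.

```python
import numpy as np

def train_node_split(stream, d, k, lr=0.1, n_steps=100_000):
    """stream yields (x, y) pairs with x in R^d and y in {0, ..., k-1}.
    Returns a weight vector w defining the node hypothesis h(x) = <w, x>."""
    w = np.zeros(d)
    mean_right = 0.5                        # running estimate of P(h(x) > 0)
    mean_right_class = np.full(k, 0.5)      # running estimates of P(h(x) > 0 | i)
    decay = 0.001
    for _, (x, y) in zip(range(n_steps), stream):
        go_right = float(np.dot(w, x) > 0)
        mean_right += decay * (go_right - mean_right)
        mean_right_class[y] += decay * (go_right - mean_right_class[y])
        # push class y further towards the side it already favors relative to the marginal
        target = 1.0 if mean_right_class[y] >= mean_right else -1.0
        w -= lr * (np.dot(w, x) - target) * x     # squared-loss SGD step
    return w
```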
We next state the three boosting theoretical results captured in Theorems 1–3. They guarantee that the top-down decision tree algorithm which optimizes J ( h ) in each node will amplify the weak advantage, captured in the weak learning assumption, to build a tree achieving any desired level of entropy (either Shannon entropy, Gini-entropy or its modified variant).
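The overall top-down procedure analyzed below can be summarized by the following skeleton (a sketch under the assumption that a split_leaf routine, such as the one sketched above, is available; the priority-queue bookkeeping and all names are ours):

```python
import heapq

def grow_tree(root_leaf, weight, split_leaf, n_splits):
    """Repeatedly split the heaviest leaf, as in the construction analyzed in Section 4.
    weight(leaf) returns the probability mass reaching the leaf;
    split_leaf(leaf) trains a node hypothesis and returns the two child leaves."""
    heap = [(-weight(root_leaf), 0, root_leaf)]    # max-heap via negated weights
    counter = 1                                    # tie-breaker so leaves are never compared
    internal_nodes = []
    for _ in range(n_splits):
        _, _, leaf = heapq.heappop(heap)           # heaviest leaf at time t
        left, right = split_leaf(leaf)             # one split = one new internal node
        internal_nodes.append(leaf)
        for child in (left, right):
            heapq.heappush(heap, (-weight(child), counter, child))
            counter += 1
    return internal_nodes, [entry[2] for entry in heap]    # internal nodes and current leaves
```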
Theorem 1.
Under the Weak Hypothesis Assumption, for any α ∈ [0, 2 ln k], to obtain G^e_t ≤ α it suffices to make t ≥ ( 2 ln k / α )^{ 4(1 − γ)^2 ln k / (γ^2 log_2 e) } splits.
Theorem 2.
Under the Weak Hypothesis Assumption, for any α ∈ [0, 2(1 − 1/k)], to obtain G^g_t ≤ α it suffices to make t ≥ ( 2(1 − 1/k) / α )^{ 2(1 − γ)^2 (k − 1) / (γ^2 log_2 e) } splits.
Theorem 3.
Under the Weak Hypothesis Assumption, for any α ∈ [√(C − 1), 2√(kC − 1)], to obtain G^m_t ≤ α it suffices to make t ≥ ( 2√(kC − 1) / α )^{ 2(1 − γ)^2 C^3 k √(kC − 1) / (γ^2 (C − 2)^2 log_2 e) } splits.
Finally, we provide the error guarantee in Theorem 4. Let y(x) be a fixed target function with domain X, which assigns each data point x to its label, and let P be a fixed target distribution over X. Together, y and P induce a distribution on labeled pairs (x, y(x)). Let t(x) be the label assigned to data point x by the tree. We denote by ε(T) the error of tree T, i.e., ε(T) := P_{x∼P}( t(x) ≠ y(x) ) = E_{x∼P}[ Σ_{i=1}^k 1( t(x) = i, y(x) ≠ i ) ].
Theorem 4.
Under the Weak Hypothesis Assumption, for any α ∈ [0, 1], to obtain ε(T) ≤ α it suffices to make t ≥ ( 2 ln k · log_2 e / α )^{ 4(1 − γ)^2 ln k / (γ^2 log_2 e) } splits.
Remark 1.
The main theorems show how fast the entropy criteria or the multi-class classification error drop as the tree grows and performs node splits. These statements therefore provide a platform for comparing different entropy criteria and answer two questions: (1) for fixed α, γ, C, and k, which criterion is reduced the most with each split? and (2) can the multi-class error match the convergence speed of the best entropy criterion? It can be noted that the Shannon entropy has the most advantageous dependence on the label complexity, since its bound scales only logarithmically with k, and thus achieves the fastest convergence. Simultaneously, the multi-class classification error matches this advantageous convergence rate and also scales favorably (logarithmically) with k. Finally, even though the weak hypothesis assumption requires only a slightly favorable γ, i.e., γ > 0, in practice when constructing the tree one can optimize J in every node, which effectively pushes γ to be as high as possible. In that case γ becomes a well-behaved constant in the above theorems, ideally equal to 1/2, and does not negatively affect the split count.
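As a quick numerical reading of the theorems (using the bounds exactly as stated above, which are worst-case and therefore extremely loose), the exponents governing the required number of splits can be compared across the three criteria; only their growth with k matters for the comparison made in Remark 1.

```python
import math

def split_exponents(k, gamma, C=3.0):
    """Exponents of the split-count bounds in Theorems 1-3 (Shannon, Gini, modified Gini)."""
    base = (1.0 - gamma) ** 2 / (gamma ** 2 * math.log2(math.e))
    shannon = 4.0 * base * math.log(k)
    gini = 2.0 * base * (k - 1)
    modified = 2.0 * base * (C ** 3 / (C - 2) ** 2) * k * math.sqrt(k * C - 1)
    return shannon, gini, modified

for k in (10, 1000):
    print(k, split_exponents(k, gamma=0.4))   # the Shannon exponent grows like ln k, the others like k
```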
We next discuss in detail the mathematical properties of the entropy-based criteria, which are needed to prove the above theorems.

5. Proofs

5.1. Properties of the Entropy-Based Criteria

Each of the presented entropy-based criteria has a number of useful properties that we give next, along with their proofs. We first give bounds on the values of the entropy-based functions. As before, let w be the weight of the heaviest leaf in the tree at time t.

5.1.1. Bounds on the Entropy-Based Criteria

Lemma 5.
The Shannon entropy function G^e_t at time t is bounded as 0 ≤ G^e_t ≤ (t + 1) w ln k.
Lemma 6.
The Gini-entropy function G^g_t at time t is bounded as 0 ≤ G^g_t ≤ (t + 1) w (1 − 1/k).
Lemma 7.
The modified Gini-entropy function G^m_t at time t is bounded as √(C − 1) ≤ G^m_t ≤ (t + 1) w √(kC − 1).
The upper-bounds in Lemmas 5–7 are tight; the equalities hold in the special case when ∀i ∈ {1, ..., k}, ∀l ∈ L_t: π_{l,i} = 1/k, e.g., when each internal node of the tree produces a perfectly pure and balanced split.

5.1.2. Strong Concavity Properties of the Entropy-Based Criteria

So far we have been focusing on the time step t. Recall that n is the heaviest leaf at time t and that its weight w_n is denoted by w for brevity. Consider splitting this leaf into two children n_0 and n_1. For ease of notation, let w_0 = w_{n_0} and w_1 = w_{n_1}, β = P(h_n(x) > 0) and P_i = P(h_n(x) > 0 | i), and furthermore let π_i and h be shorthands for π_{n,i} and h_n, respectively. Recall that β = Σ_{i=1}^k π_i P_i and Σ_{i=1}^k π_i = 1. Notice that w_0 = w(1 − β) and w_1 = wβ. Let π be the k-element vector with i-th entry equal to π_i. Finally, let G̃^e(π) = Σ_{i=1}^k π_i ln(1/π_i), G̃^g(π) = Σ_{i=1}^k π_i(1 − π_i), and G̃^m(π) = Σ_{i=1}^k √(π_i(C − π_i)). Before the split, the contribution of node n to G^e_t, G^g_t, and G^m_t was, respectively, w G̃^e(π), w G̃^g(π), and w G̃^m(π). Note that π_{n_0,i} = π_i(1 − P_i)/(1 − β) and π_{n_1,i} = π_i P_i / β are the probabilities that a randomly chosen x drawn from P has label i given that x reaches node n_0 or n_1, respectively. For brevity, let π_{n_0,i} and π_{n_1,i} be denoted by π_{0,i} and π_{1,i}. Let π_0 be the k-element vector with i-th entry equal to π_{0,i}, and let π_1 be the k-element vector with i-th entry equal to π_{1,i}. Notice that π = (1 − β)π_0 + βπ_1. After the split, the contribution of the same, now internal, node n changes to, respectively, w((1 − β)G̃^e(π_0) + βG̃^e(π_1)), w((1 − β)G̃^g(π_0) + βG̃^g(π_1)), and w((1 − β)G̃^m(π_0) + βG̃^m(π_1)). We can thus compute the difference between the contribution of node n to the value of the entropy-based objectives at times t and t + 1 as
Δ^e_t = G^e_t − G^e_{t+1} = w( G̃^e(π) − (1 − β) G̃^e(π_0) − β G̃^e(π_1) ),    (2)
Δ^g_t = G^g_t − G^g_{t+1} = w( G̃^g(π) − (1 − β) G̃^g(π_0) − β G̃^g(π_1) ),    (3)
Δ^m_t = G^m_t − G^m_{t+1} = w( G̃^m(π) − (1 − β) G̃^m(π_0) − β G̃^m(π_1) ).    (4)
The next three lemmas, Lemmas 8–10, describe the strong concavity properties of the Shannon entropy, the Gini-entropy, and the modified Gini-entropy, which can be used to lower-bound Δ^e_t, Δ^g_t, and Δ^m_t (Equations (2)–(4) correspond to the gap in Jensen's inequality applied to a strongly concave function).
Lemma 8.
The Shannon entropy function G̃^e is strongly concave with respect to the l_1-norm with modulus 1, and thus the following holds: G̃^e(π) − (1 − β) G̃^e(π_0) − β G̃^e(π_1) ≥ (1/2) β(1 − β) ‖π_0 − π_1‖_1^2.
Lemma 9.
The Gini-entropy function G̃^g is strongly concave with respect to the l_2-norm with modulus 2, and thus the following holds: G̃^g(π) − (1 − β) G̃^g(π_0) − β G̃^g(π_1) ≥ β(1 − β) ‖π_0 − π_1‖_2^2.
Lemma 10.
The modified Gini-entropy function G̃^m is strongly concave with respect to the l_2-norm with modulus 2(C − 2)^2/C^3, and thus the following holds: G̃^m(π) − (1 − β) G̃^m(π_0) − β G̃^m(π_1) ≥ ((C − 2)^2/C^3) β(1 − β) ‖π_0 − π_1‖_2^2.
Figure 3 illustrates different entropy criteria normalized to the [ 0 , 1 ] interval.
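The Jensen-gap inequalities of Lemmas 8–10 are easy to check numerically for a random split; the sketch below (illustrative code, not taken from the paper) draws a random node distribution together with random per-class routing probabilities and confirms that each gap dominates the corresponding lower bound.

```python
import numpy as np

rng = np.random.default_rng(0)
k, C = 6, 3.0
pi = rng.dirichlet(np.ones(k))            # class distribution at the node
P = rng.uniform(0.05, 0.95, size=k)       # P_i = P(h_n(x) > 0 | i)
beta = float(np.dot(pi, P))               # P(h_n(x) > 0)
pi0 = pi * (1.0 - P) / (1.0 - beta)       # class distribution at the left child
pi1 = pi * P / beta                       # class distribution at the right child

def gap(G):
    return G(pi) - (1.0 - beta) * G(pi0) - beta * G(pi1)

G_e = lambda p: float(np.sum(p * np.log(1.0 / p)))       # Shannon
G_g = lambda p: float(np.sum(p * (1.0 - p)))             # Gini
G_m = lambda p: float(np.sum(np.sqrt(p * (C - p))))      # modified Gini

l1_sq = float(np.sum(np.abs(pi0 - pi1))) ** 2
l2_sq = float(np.sum((pi0 - pi1) ** 2))
bb, eps = beta * (1.0 - beta), 1e-9                      # eps: floating-point tolerance
assert gap(G_e) >= 0.5 * bb * l1_sq - eps                # Lemma 8
assert gap(G_g) >= bb * l2_sq - eps                      # Lemma 9 (holds with equality)
assert gap(G_m) >= (C - 2) ** 2 / C ** 3 * bb * l2_sq - eps   # Lemma 10
```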

5.2. Proof of Lemma 4 and Theorems 1–3

We finally proceed to proving all three boosting theorems, Theorems 1–3. Lemma 4 is a by-product of these proofs.
Proof. 
For the Shannon entropy it follows from Equation (2), Lemmas 5 and 8 that
Δ^e_t ≥ (1/2) w β(1 − β) ‖π_0 − π_1‖_1^2 = (1/2) (w / (β(1 − β))) ( Σ_{i=1}^k π_i |P_i − β| )^2 = w J(h)^2 / (8β(1 − β)) ≥ J(h)^2 G^e_t / (8β(1 − β)(t + 1) ln k) ≥ γ^2 G^e_t / (2(1 − γ)^2 (t + 1) ln k),    (5)
where the last inequality comes from the facts that 1 − γ ≥ β ≥ γ (see the definition of γ in the weak hypothesis assumption) and J(h) ≥ 2γ (the weak hypothesis assumption). For the Gini-entropy criterion, notice that from Equation (3) and Lemmas 6, 9, and A4 it follows that
Δ^g_t ≥ w β(1 − β) ‖π_0 − π_1‖_2^2 ≥ (1/k) w β(1 − β) ‖π_0 − π_1‖_1^2 ≥ γ^2 G^g_t / ((1 − γ)^2 (t + 1)(k − 1)),
where the last inequality is obtained similarly as the last inequality in Equation (5). And finally for the modified Gini-entropy it follows from Equation (4), Lemmas 7, 10, and A4 that
Δ^m_t ≥ w ((C − 2)^2 / C^3) β(1 − β) ‖π_0 − π_1‖_2^2 ≥ (1/k) w ((C − 2)^2 / C^3) β(1 − β) ‖π_0 − π_1‖_1^2 ≥ γ^2 (C − 2)^2 G^m_t / (C^3 (1 − γ)^2 (t + 1) k √(kC − 1)),
where the last inequality is obtained as before.
Clearly the larger the objective J ( h ) is at time t, the larger the entropy reduction ends up being. Let
η_e = 2√2 γ / ((1 − γ)√(ln k)),   η_g = 4γ / ((1 − γ)√(k − 1)),   η_m = (4γ / (1 − γ)) √( (C − 2)^2 / (C^3 k √(kC − 1)) ).
For simplicity of notation, assume Δ_t corresponds to either Δ^e_t, Δ^g_t, or Δ^m_t, and G_t stands for G^e_t, G^g_t, or G^m_t. Thus Δ_t ≥ η^2 G_t / (16(t + 1)), and we obtain
G_{t+1} = G_t − Δ_t ≤ G_t − η^2 G_t / (16(t + 1)) = G_t ( 1 − η^2 / (16(t + 1)) ).
One can now compute, from this recurrence inequality, the minimum number of splits required to reduce G_t below a given threshold α. Assume log_2(t + 1) ∈ Z_+.
G_{t+1} ≤ G_t ( 1 − η^2/(16(t + 1)) ) ≤ G_1 ( 1 − η^2/(16·2) )( 1 − η^2/(16·3) ) ⋯ ( 1 − η^2/(16(t + 1)) ) = G_1 ( 1 − η^2/(16·2) ) Π_{t′=3}^{4} ( 1 − η^2/(16 t′) ) ⋯ Π_{t′=2^r/2+1}^{2^r} ( 1 − η^2/(16 t′) ) ⋯ Π_{t′=2^{log_2(t+1)}/2+1}^{2^{log_2(t+1)}} ( 1 − η^2/(16 t′) ),
where t′ indexes the product terms and r ∈ {2, 3, ..., log_2(t + 1)}. Recall that
Π_{t′=2^r/2+1}^{2^r} ( 1 − η^2/(16 t′) ) ≤ Π_{t′=2^r/2+1}^{2^r} ( 1 − η^2/(16·2^r) ) = ( 1 − η^2/(16·2^r) )^{2^r/2} ≤ e^{−η^2/32},
where the last step follows from Lemma A5. Also note that, by the same lemma, 1 − η^2/(16·2) ≤ e^{−η^2/32}. Thus,
G_{t+1} ≤ G_1 e^{−η^2 log_2(t+1)/32}.
Therefore, to reduce G_{t+1} ≤ α (where the values of α are as in Theorems 1–3) it suffices to make t + 1 splits such that log_2(t + 1) ≥ (32/η^2) ln(G_1/α). Since log_2(t + 1) = ln(t + 1) · log_2(e), where e = exp(1), we obtain
ln(t + 1) ≥ (32 / (η^2 log_2 e)) ln(G_1/α)   ⟺   t + 1 ≥ (G_1/α)^{32/(η^2 log_2 e)}.
Recall that, by Lemmas 5–7 respectively, G^e_1 ≤ 2 ln k, G^g_1 ≤ 2(1 − 1/k), and G^m_1 ≤ 2√(kC − 1). We consider the worst-case setting (giving the largest possible number of splits) and thus assume G^e_1 = 2 ln k, G^g_1 = 2(1 − 1/k), and G^m_1 = 2√(kC − 1). Combining this with the final bound above yields the statements of the main theorems. □
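The decay established in the proof can also be observed directly by iterating the recurrence. The sketch below (illustrative code) simulates the worst case G_{t+1} = G_t(1 − η^2/(16(t+1))) and checks it against the envelope G_1 e^{−η^2 log_2(t+1)/32} derived above.

```python
import math

def simulate(G1, eta, n_splits):
    G = G1
    for t in range(1, n_splits + 1):
        G *= 1.0 - eta ** 2 / (16.0 * (t + 1))                       # worst-case recurrence
        envelope = G1 * math.exp(-eta ** 2 * math.log2(t + 1) / 32.0)
        assert G <= envelope + 1e-12                                 # the bound derived above
    return G

# Shannon-entropy case for k = 1000 and gamma = 0.4:
k, gamma = 1000, 0.4
eta_e = 2.0 * math.sqrt(2.0) * gamma / ((1.0 - gamma) * math.sqrt(math.log(k)))
print(simulate(2.0 * math.log(k), eta_e, n_splits=10_000))
```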

5.3. Proof of Theorem 4

We next proceed to directly proving the error bound. Recall that π_{l,i} is the probability that the data point x has label i given that x reached leaf l, i.e., π_{l,i} = P( y(x) = i | x reached l ). Let the label assigned to a leaf be its majority label; thus the leaf is assigned label i if and only if ∀z ∈ {1, 2, ..., k} \ {i}: π_{l,i} ≥ π_{l,z}. Therefore we can write
ε(T) = P( t(x) ≠ y(x) ) = Σ_{l ∈ L_t} w_l P( t(x) ≠ y(x) | x reached l ).
Let i_l be the majority label in leaf l, i.e., ∀z ∈ {1, 2, ..., k} \ {i_l}: π_{l,i_l} ≥ π_{l,z}. We can continue as follows:
ε(T) = Σ_{l ∈ L_t} w_l P( y(x) ≠ i_l | x reached l ) = Σ_{l ∈ L_t} w_l (1 − π_{l,i_l}) = Σ_{l ∈ L_t} w_l ( 1 − max(π_{l,1}, π_{l,2}, ..., π_{l,k}) ).
Consider again the Shannon entropy G^e_t of the leaves of tree T, which can equivalently be written in base-2 logarithms as
G^e_t = Σ_{l ∈ L_t} w_l Σ_{i=1}^k π_{l,i} ln(1/π_{l,i}) = (1/log_2 e) Σ_{l ∈ L_t} w_l Σ_{i=1}^k π_{l,i} log_2(1/π_{l,i}).
Note that
G^e_t = (1/log_2 e) Σ_{l ∈ L_t} w_l Σ_{i=1}^k π_{l,i} log_2(1/π_{l,i}) ≥ (1/log_2 e) Σ_{l ∈ L_t} w_l Σ_{i ≠ i_l} π_{l,i} log_2(1/π_{l,i}) ≥ (1/log_2 e) Σ_{l ∈ L_t} w_l Σ_{i ≠ i_l} π_{l,i} = (1/log_2 e) Σ_{l ∈ L_t} w_l ( 1 − max(π_{l,1}, π_{l,2}, ..., π_{l,k}) ) = (1/log_2 e) ε(T),
where the last inequality comes from the fact that ∀i ≠ i_l: π_{l,i} ≤ 0.5, and thus 1/π_{l,i} ∈ [2, +∞) and consequently log_2(1/π_{l,i}) ∈ [1, +∞). Hence ε(T) ≤ G^e_t log_2 e, and applying Theorem 1 with α/log_2 e in place of α yields the statement of Theorem 4. □
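The inequality just established is easy to confirm numerically (illustrative code): for any assignment of leaf weights and leaf class distributions, the majority-vote error never exceeds log_2(e) times the Shannon criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n_leaves = 8, 5
w = rng.dirichlet(np.ones(n_leaves))                 # leaf weights w_l
Pi = rng.dirichlet(np.ones(k), size=n_leaves)        # class distribution pi_l per leaf
error = float(np.sum(w * (1.0 - Pi.max(axis=1))))    # majority-vote error eps(T)
shannon = float(np.sum(w * np.sum(Pi * np.log(1.0 / Pi), axis=1)))   # G^e_t
assert error <= shannon * np.log2(np.e)              # eps(T) <= G^e_t * log2(e)
```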

6. Conclusions

This paper aims at introducing theoretical tools, encapsulated in the boosting framework, that enable the comparison of different multi-class classification objective functions. The theory of multi-class boosting is largely not understood [5]. We provide an exhaustive theoretical analysis of the objective function underlying the recently proposed LOMtree algorithm for extreme multi-class classification and explore the connection of this objective to entropy-based criteria. We show that optimizing this objective simultaneously optimizes the Shannon entropy, the Gini-entropy and its modified variant, as well as the multi-class classification error. We expect that the discussed tools can be used to obtain theoretical guarantees in the multi-label [28,29,30] and memory-constrained settings (we will explore this research direction in the future). We also consider extensions to different variants of the multi-class classification problem [31,32] and to multi-output learning tasks [33,34]. We thus plan to build a unified theoretical framework for understanding extreme classification trees.

Author Contributions

A.C. derived the theoretical results and performed the empirical evaluation. I.K.J. worked on improving the write-up of the paper and on checking mathematical correctness.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Extreme Multiclass Classification Criteria

Appendix A.1. Numerical Experiments

We run the LOMtree algorithm, which is implemented in the open source learning system Vowpal Wabbit [35], on four benchmark multiclass data sets: Mnist (10 classes, downloaded from http://yann.lecun.com/exdb/mnist/), Isolet (26 classes, downloaded from http://www.cs.huji.ac.il/~shais/datasets/ClassificationDatasets.html), Sector (105 classes, downloaded from http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html), and Aloi (1000 classes, downloaded from http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html). The data sets were divided into training (90%) and testing (10%), where 10% of the training data was used as a validation set. The regressors in the tree nodes are linear and were trained by SGD [36] with 20 epochs and a learning rate chosen from the set {0.25, 0.5, 0.75, 1, 2, 4, 8}. We investigated different swap resistances chosen from the set {4, 8, 16, 32, 64, 128, 256}. We selected the learning rate and the swap resistance as the ones minimizing the validation error; the number of splits in all experiments was set to 10 k.
Figure A1 shows the Shannon entropy, Gini-entropy, and modified Gini-entropy (all normalized to the interval [0, 1]), as well as the multiclass classification error, computed on the test data set as a function of the number of splits. The behavior of the Shannon entropy and the Gini-entropy matches the theoretical findings. The modified Gini-entropy, however, drops the fastest with the number of splits, which suggests that tighter bounds could perhaps be proved in this case (for the binary case a tighter analysis was shown in [25], but it is highly non-trivial to generalize it to the multiclass setting). Furthermore, it can be observed that the behavior of the error closely mimics the behavior of the Gini-entropy. The Gini-entropy in all cases well-approximates the upper bound on the error.
Figure A1. Functions G^e_t, G^g_t, and G^m_t, and the test error, all normalized to the interval [0, 1], versus the number of splits. Figure is recommended to be read in color.

Appendix A.2. Additional Proofs

Proof of Lemma 1.
The proof that if h induces a perfectly pure and balanced partition then J(h) = 1 was given in [3] (Lemma 2) and is very basic. We focus here on the remaining part of the statement, which is harder to show, and prove that if J(h) = 1 then h induces a perfectly pure and balanced partition.
Without loss of generality, assume each π_i ∈ (0, 1). Recall that β = P(h(x) > 0), and let P_i = P(h(x) > 0 | i). Also recall that β = Σ_{i=1}^k π_i P_i. Thus J(h) = 2 Σ_{i=1}^k π_i | Σ_{j=1}^k π_j P_j − P_i |. The objective is certainly maximized at the extremes of the interval [0, 1], where each P_i is either 0 or 1 (also note that at the maximum, where J(h) = 1, it cannot be that all P_i's are 0 or all P_i's are 1). The function J(h) is differentiable at these extremes (J(h) is non-differentiable only when Σ_{j=1}^k π_j P_j = P_i, but at the considered extremes the left-hand side of this equality is in (0, 1), whereas the right-hand side is either 0 or 1). We then write
J(h) = 2 Σ_{i ∈ P} π_i ( Σ_{j=1}^k π_j P_j − P_i ) + 2 Σ_{i ∈ N} π_i ( P_i − Σ_{j=1}^k π_j P_j ),
where P = { i : Σ_{j=1}^k π_j P_j ≥ P_i } and N = { i : Σ_{j=1}^k π_j P_j < P_i }. Also let P_+ = { i : Σ_{j=1}^k π_j P_j > P_i } (clearly Σ_{i ∈ P_+} π_i < 1 and Σ_{i ∈ N} π_i < 1 at the extremes of the interval [0, 1] where J(h) is maximized). We can then compute the derivative of J(h) with respect to P_r, for r ∈ {1, 2, ..., k}, everywhere where the function is differentiable, as follows:
∂J/∂P_r = 2π_r ( Σ_{i ∈ P_+} π_i − 1 )  if r ∈ P_+,   and   ∂J/∂P_r = 2π_r ( 1 − Σ_{i ∈ N} π_i )  if r ∈ N,
and note that at the extremes of the interval [0, 1] where J(h) is maximized, ∂J/∂P_r ≠ 0, since Σ_{i ∈ P_+} π_i < 1, Σ_{i ∈ N} π_i < 1, and each π_i ∈ (0, 1). Since J(h) is convex, and since in particular the derivative of J(h) with respect to any P_r cannot be 0 at the extremes of the interval [0, 1] where J(h) is maximized, it follows that J(h) can only be maximized (J(h) = 1) at the extremes of the [0, 1] interval. Thus we have already proved that if J(h) = 1 then h induces a perfectly pure partition. We are left with showing that if J(h) = 1 then h also induces a perfectly balanced partition. We prove it by contradiction. Assume β ≠ 0.5. Denote I_0 = { i : P(h(x) > 0 | i) = 0 } and I_1 = { i : P(h(x) > 0 | i) = 1 }. Recall that β = Σ_{i=1}^k π_i P_i = Σ_{i ∈ I_0} π_i · 0 + Σ_{i ∈ I_1} π_i · 1 = Σ_{i ∈ I_1} π_i. Thus,
J(h) = 1 = 2 Σ_{i ∈ I_0} π_i β + 2 Σ_{i ∈ I_1} π_i (1 − β) = 2β Σ_{i ∈ I_0} π_i + 2(1 − β) Σ_{i ∈ I_1} π_i = 2β( 1 − Σ_{i ∈ I_1} π_i ) + 2(1 − β) Σ_{i ∈ I_1} π_i = 2β(1 − β) + 2(1 − β)β = −4β^2 + 4β < 1,
where the last inequality comes from the fact that the quadratic form −4β^2 + 4β equals 1 only when β = 0.5 and is otherwise smaller than 1. We thus obtain a contradiction, which ends the proof. □
Proof of Lemma 2.
We use the following notation: β = P(h(x) > 0) and P_i = P(h(x) > 0 | i). Also let P = { i : β ≥ P_i } and N = { i : β < P_i }. Recall that β = Σ_{i ∈ P∪N} π_i P_i and Σ_{i ∈ P∪N} π_i = 1. We split the proof into two cases.
  • Let Σ_{i ∈ P} π_i ≤ 1 − β. Then
    J(h) = 2 Σ_{i=1}^k π_i |β − P_i| = 2 Σ_{i ∈ P} π_i (β − P_i) + 2 Σ_{i ∈ N} π_i (P_i − β) = 2 Σ_{i ∈ P} π_i β − 2 Σ_{i ∈ P} π_i P_i + 2( β − Σ_{i ∈ P} π_i P_i ) − 2β( 1 − Σ_{i ∈ P} π_i ) = 4β Σ_{i ∈ P} π_i − 4 Σ_{i ∈ P} π_i P_i ≤ 4β Σ_{i ∈ P} π_i ≤ 4β(1 − β).
    Thus −4β^2 + 4β − J(h) ≥ 0 which, when solved for β, yields the lemma.
  • Let Σ_{i ∈ P} π_i ≥ 1 − β (thus Σ_{i ∈ N} π_i ≤ β). Note that J(h) can be written as
    J(h) = 2 Σ_{i=1}^k π_i | P(h(x) ≤ 0) − P(h(x) ≤ 0 | i) |,
    since P(h(x) ≤ 0) = 1 − P(h(x) > 0) and P(h(x) ≤ 0 | i) = 1 − P(h(x) > 0 | i). Let β′ = P(h(x) ≤ 0) = 1 − β, and P′_i = P(h(x) ≤ 0 | i) = 1 − P_i. Note that P = { i : β ≥ P_i } = { i : β′ ≤ P′_i } and N = { i : β < P_i } = { i : β′ > P′_i }. Also note that β′ = Σ_{i ∈ P∪N} π_i P′_i. Thus
    J(h) = 2 Σ_{i=1}^k π_i |β′ − P′_i| = 2 Σ_{i ∈ P} π_i (P′_i − β′) + 2 Σ_{i ∈ N} π_i (β′ − P′_i) = 2( β′ − Σ_{i ∈ N} π_i P′_i ) − 2β′( 1 − Σ_{i ∈ N} π_i ) + 2 Σ_{i ∈ N} π_i β′ − 2 Σ_{i ∈ N} π_i P′_i = 4β′ Σ_{i ∈ N} π_i − 4 Σ_{i ∈ N} π_i P′_i ≤ 4β′ Σ_{i ∈ N} π_i = 4(1 − β) Σ_{i ∈ N} π_i ≤ 4β(1 − β).
    Thus, as before, we obtain −4β^2 + 4β − J(h) ≥ 0 which, when solved for β, yields the lemma. □
Proof of Lemma 5.
The lower-bound follows from the fact that the entropy of each leaf, Σ_{i=1}^k π_{l,i} ln(1/π_{l,i}), is non-negative. We next prove the upper-bound.
G^e_t = Σ_{l ∈ L_t} w_l Σ_{i=1}^k π_{l,i} ln(1/π_{l,i}) ≤ Σ_{l ∈ L_t} w_l ln k ≤ w ln k Σ_{l ∈ L_t} 1 = (t + 1) w ln k,
where the first inequality comes from the fact that uniform distribution maximizes the entropy, and the last equality comes from the fact that a tree with t internal nodes has t + 1 leaves (also recall that w is the weight of the heaviest node in the tree at time t which is what we will also use in the next lemmas). □
Before proceeding to the actual proof of Lemma 6 we first introduce the helpful result captured in Lemma A1 and Corollary A1.
Lemma A1 (The inequality between Euclidean and arithmetic mean).
Let x_1, ..., x_k be a set of non-negative numbers. Then the Euclidean mean upper-bounds the arithmetic mean as follows: √( Σ_{i=1}^k x_i^2 / k ) ≥ ( Σ_{i=1}^k x_i ) / k.
Corollary A1.
Let {x_1, ..., x_k} be non-negative. Then Σ_{i=1}^k x_i^2 ≥ (1/k)( Σ_{i=1}^k x_i )^2.
Proof. 
By Lemma A1 we have √( Σ_{i=1}^k x_i^2 / k ) ≥ ( Σ_{i=1}^k x_i ) / k, which implies Σ_{i=1}^k x_i^2 ≥ (1/k)( Σ_{i=1}^k x_i )^2. □
Proof of Lemma 6.
The lower-bound is straightforward since all π l , i ’s are non-negative. The upper-bound can be shown as follows (the last inequality results from Corollary A1):
G^g_t = Σ_{l ∈ L_t} w_l Σ_{i=1}^k π_{l,i}(1 − π_{l,i}) ≤ w Σ_{l ∈ L_t} Σ_{i=1}^k (π_{l,i} − π_{l,i}^2) = w Σ_{l ∈ L_t} ( 1 − Σ_{i=1}^k π_{l,i}^2 ) ≤ w Σ_{l ∈ L_t} ( 1 − (1/k)( Σ_{i=1}^k π_{l,i} )^2 ) = w Σ_{l ∈ L_t} ( 1 − 1/k ) = (t + 1) w ( 1 − 1/k ).
 □
Proof of Lemma 7.
The lower-bound can be shown as follows. Recall that the function Σ_{i=1}^k √( π_{l,i}(C − π_{l,i}) ) is concave and is therefore minimized at the extremes of the [0, 1] interval, i.e., where each π_{l,i} is either 0 or 1. Let I_0 = { i : π_{l,i} = 0 } and I_1 = { i : π_{l,i} = 1 }. Thus Σ_{i=1}^k √( π_{l,i}(C − π_{l,i}) ) = Σ_{i ∈ I_1} √(C − 1) ≥ √(C − 1). Combining this result with the fact that Σ_{l ∈ L_t} w_l = 1 gives the lower-bound. We next prove the upper-bound. Recall that Lemma A1 implies that ( Σ_{i=1}^k √( π_{l,i}(C − π_{l,i}) ) ) / k ≤ √( ( Σ_{i=1}^k π_{l,i}(C − π_{l,i}) ) / k ), thus
G^m_t = Σ_{l ∈ L_t} w_l Σ_{i=1}^k √( π_{l,i}(C − π_{l,i}) ) ≤ Σ_{l ∈ L_t} w_l √( k Σ_{i=1}^k π_{l,i}(C − π_{l,i}) ) = Σ_{l ∈ L_t} w_l √( kC − k^2 Σ_{i=1}^k (1/k) π_{l,i}^2 ).
By Jensen's inequality, Σ_{i=1}^k (1/k) π_{l,i}^2 ≥ ( Σ_{i=1}^k (1/k) π_{l,i} )^2 = 1/k^2. Thus
G^m_t ≤ Σ_{l ∈ L_t} w_l √(kC − 1) ≤ (t + 1) w √(kC − 1).
 □
Proof of Lemma 8.
Lemma 8 is proven in [37] (Example 2.5). □
Lemma A2
(Lemma 14 in [38]). If the function Φ(π) is twice differentiable, then a sufficient condition for strong concavity of Φ is that, for all π and x, ⟨∇^2 Φ(π) x, x⟩ ≤ −σ ‖x‖^2, where ∇^2 Φ(π) is the Hessian matrix of Φ at π, and σ > 0 is the strong concavity modulus.
Proof of Lemma 9.
Note that ⟨∇^2 G̃^g(π) x, x⟩ = −2 ‖x‖_2^2, and apply Lemma A2. □
Lemma A3 
(Remark 2.2.4 in [39]). The sum of strongly concave functions on R^n with modulus σ is strongly concave with the same modulus.
Proof of Lemma 10.
Consider the functions g(π_i) = √( f(π_i) ), where f(π_i) = π_i(C − π_i), C > 2, and π_i ∈ [0, 1]. Also let h(x) = √x, where x ∈ [0, C^2/4]. It is easy to see, using Lemma A2, that the function f is strongly concave with respect to the l_2-norm with modulus 2, thus
f(θπ_i + (1 − θ)π′_i) ≥ θ f(π_i) + (1 − θ) f(π′_i) + θ(1 − θ)(π_i − π′_i)^2,    (A1)
where π_i, π′_i ∈ [0, 1] and θ ∈ [0, 1]. Also note that h is strongly concave with modulus 2/C^3 on its domain [0, C^2/4] (the second derivative of h is h″(x) = −(1/4) x^{−3/2} ≤ −2/C^3 there). The strong concavity of h implies that
√( θ x_1 + (1 − θ) x_2 ) ≥ θ √(x_1) + (1 − θ) √(x_2) + (1/C^3) θ(1 − θ)(x_1 − x_2)^2,    (A2)
where x_1, x_2 ∈ [0, C^2/4]. Let x_1 = f(π_i) and x_2 = f(π′_i). Then we obtain
√( θ f(π_i) + (1 − θ) f(π′_i) ) ≥ θ √(f(π_i)) + (1 − θ) √(f(π′_i)) + (1/C^3) θ(1 − θ)( f(π_i) − f(π′_i) )^2.
Note that
√( f(θπ_i + (1 − θ)π′_i) ) ≥ √( f(θπ_i + (1 − θ)π′_i) − θ(1 − θ)(π_i − π′_i)^2 ) ≥ √( θ f(π_i) + (1 − θ) f(π′_i) ) ≥ θ √(f(π_i)) + (1 − θ) √(f(π′_i)) + (1/C^3) θ(1 − θ)( f(π_i) − f(π′_i) )^2,
where the second inequality results from Equation (A1) and the last (third) inequality results from Equation (A2). Finally, note that the first derivative of f is f′(π_i) = C − 2π_i ∈ [C − 2, C]. Thus
| f(π_i) − f(π′_i) | ≥ |π_i − π′_i| (C − 2)
( f(π_i) − f(π′_i) )^2 ≥ (C − 2)^2 (π_i − π′_i)^2,
and combining this with the previous statement yields
√( f(θπ_i + (1 − θ)π′_i) ) ≥ θ √(f(π_i)) + (1 − θ) √(f(π′_i)) + ((C − 2)^2/C^3) θ(1 − θ)(π_i − π′_i)^2,
thus g(π_i) is strongly concave with modulus 2(C − 2)^2/C^3. By Lemma A3, G̃^m(π) is also strongly concave with the same modulus. □
The next two lemmas are fundamental and are used in the proof of Lemma 4 and in the boosting theorems. The first relates the l_1-norm and the l_2-norm, and the second is a simple property of the exponential function.
Lemma A4.
Let x ∈ R^k. Then ‖x‖_1 ≤ √k ‖x‖_2.
Lemma A5.
For x ≥ 1 the following holds: (1 − 1/x)^x ≤ 1/e.

References

  1. Rifkin, R.; Klautau, A. In Defense of One-Vs-All Classification. J. Mach. Learn. Res. 2004, 5, 101–141. [Google Scholar]
  2. Daume, H.; Karampatziakis, N.; Langford, J.; Mineiro, P. Logarithmic Time One-Against-Some. arXiv, 2016; arXiv:1606.04988. [Google Scholar]
  3. Choromanska, A.; Langford, J. Logarithmic Time Online Multiclass prediction. In Neural Information Processing Systems 2015; Neural Information Processing Systems Foundation, Inc.: Vancouver, BC, Canada, 2015. [Google Scholar]
  4. Schapire, R.E.; Freund, Y. Boosting: Foundations and Algorithms; The MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
  5. Mukherjee, I.; Schapire, R.E. A theory of multiclass boosting. J. Mach. Learn. Res. 2013, 14, 437–497. [Google Scholar]
  6. Beygelzimer, A.; Langford, J.; Ravikumar, P.D. Error-Correcting Tournaments. In Algorithmic Learning Theory; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  7. Takimoto, E.; Maruoka, A. Top-down Decision Tree Learning As Information Based Boosting. Theor. Comput. Sci. 2003, 292, 447–464. [Google Scholar] [CrossRef]
  8. Morin, F.; Bengio, Y. Hierarchical probabilistic neural network language model. Aistats 2005, 5, 246–252. [Google Scholar]
  9. Bengio, S.; Weston, J.; Grangier, D. Label Embedding Trees for Large Multi-Class Tasks. In Advances in Neural Information Processing Systems 23 (NIPS 2010); NIPS: Vancouver, BC, Canada, 2010. [Google Scholar]
  10. Utgoff, P.E. Incremental Induction of Decision Trees. Mach. Learn. 1989, 4, 161–186. [Google Scholar] [CrossRef] [Green Version]
  11. Domingos, P.; Hulten, G. Mining High-speed Data Streams; KDD: Boston, MA, USA, 2000. [Google Scholar]
  12. Gama, J.; Rocha, R.; Medas, P. Accurate Decision Trees for Mining High-speed Data Streams. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 24–27 August 2003. [Google Scholar]
  13. Beygelzimer, A.; Langford, J.; Lifshits, Y.; Sorkin, G.B.; Strehl, A.L. Conditional Probability Tree Estimation Analysis and Algorithms. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, Montreal, QC, Canada, 18–21 June 2009. [Google Scholar]
  14. Madzarov, G.; Gjorgjevikj, D.; Chorbev, I. A Multi-class SVM Classifier Utilizing Binary Decision Tree. Informatica 2009, 33, 225–233. [Google Scholar]
  15. Weston, J.; Makadia, A.; Yee, H. Label Partitioning For Sublinear Ranking. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013. [Google Scholar]
  16. Deng, J.; Satheesh, S.; Berg, A.C.; Fei-Fei, L. Fast and Balanced: Efficient Label Tree Learning for Large Scale Object Recognition. In Advances in Neural Information Processing Systems 24 (NIPS 2011); NIPS: Vancouver, BC, Canada, 2011. [Google Scholar]
  17. Zhao, B.; Xing, E.P. Sparse Output Coding for Large-Scale Visual Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 23–28 June 2013. [Google Scholar]
  18. Hsu, D.; Kakade, S.; Langford, J.; Zhang, T. Multi-Label Prediction via Compressed Sensing. In Advances in Neural Information Processing Systems 22 (NIPS 2009); NIPS: Vancouver, BC, Canada, 2009. [Google Scholar]
  19. Agarwal, A.; Kakade, S.M.; Karampatziakis, N.; Song, L.; Valiant, G. Least Squares Revisited: Scalable Approaches for Multi-class Prediction. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014), Beijing, China, 21–26 June 2014. [Google Scholar]
  20. Beijbom, O.; Saberian, M.; Kriegman, D.; Vasconcelos, N. Guess-Averse Loss Functions For Cost-Sensitive Multiclass Boosting. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014), Beijing, China, 21–26 June 2014. [Google Scholar]
  21. Jernite, Y.; Choromanska, A.; Sontag, D. Simultaneous Learning of Trees and Representations for Extreme Classification and Density Estimation. arXiv, 2017; arXiv:1610.04658. [Google Scholar]
  22. Mnih, A.; Hinton, G.E. A Scalable Hierarchical Distributed Language Model. In Advances in Neural Information Processing Systems 21 (NIPS 2008); NIPS: Vancouver, BC, Canada, 2009. [Google Scholar]
  23. Djuric, N.; Wu, H.; Radosavljevic, V.; Grbovic, M.; Bhamidipati, N. Hierarchical Neural Language Models for Joint Representation of Streaming Documents and their Content. In Proceedings of the 24th International Conference on World Wide Web, Florence, Italy, 18–22 May 2015. [Google Scholar]
  24. Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.S.; Dean, J. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26 (NIPS 2013); NIPS: Vancouver, BC, Canada, 2013. [Google Scholar]
  25. Kearns, M.; Mansour, Y. On the Boosting Ability of Top-Down Decision Tree Learning Algorithms. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing (STOC ’96), Philadelphia, PA, USA, 22–24 May 1996. reprinted in J. Comput. Syst. Sci. 1999, 58, 109–128. [Google Scholar] [CrossRef]
  26. Breiman, L. Classification Regression Trees; Routledge: Abingdon, UK, 2017. [Google Scholar]
  27. Quinlan, J.R. C4.5: Programs for Machine Learning; Elsevier: Amsterdam, The Netherlands, 2014. [Google Scholar]
  28. Liu, W.; Tsang, I.W. Making decision trees feasible in ultrahigh feature and label dimensions. J. Mach. Learn. Res. 2017, 18, 2814–2849. [Google Scholar]
  29. Muñoz, E.; Nováček, V.; Vandenbussche, P.Y. Facilitating prediction of adverse drug reactions by using knowledge graphs and multi-label learning models. Brief. Bioinform. 2017, 20, 190–202. [Google Scholar] [CrossRef] [PubMed]
  30. Charte, F.; Rivera, A.J.; del Jesus, M.J.; Herrera, F. REMEDIAL-HwR: Tackling multilabel imbalance through label decoupling and data resampling hybridization. Neurocomputing 2019, 326, 110–122. [Google Scholar] [CrossRef]
  31. Koster, C.H.; Seutter, M.; Beney, J. Multi-classification of patent applications with Winnow. In International Andrei Ershov Memorial Conference on Perspectives of System Informatics; Springer: Berlin/Heidelberg, Germany, 2003; pp. 546–555. [Google Scholar]
  32. Liu, W.; Tsang, I.W.; Müller, K.R. An easy-to-hard learning paradigm for multiple classes and multiple labels. J. Mach. Learn. Res. 2017, 18, 3300–3337. [Google Scholar]
  33. Liu, W.; Xu, D.; Tsang, I.W.; Zhang, W. Metric learning for multi-output tasks. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 408–422. [Google Scholar] [CrossRef] [PubMed]
  34. Petersen, N.C.; Rodrigues, F.; Pereira, F.C. Multi-output bus travel time prediction with convolutional LSTM neural network. Expert Syst. Appl. 2019, 120, 426–435. [Google Scholar] [CrossRef]
  35. Langford, J.; Li, L.; Strehl, A. Vowpal Wabbit (Fast Learning). 2007. Available online: http://hunch.net/~vw (accessed on 2 February 2019).
  36. Bottou, L. Online Algorithms and Stochastic Approximations. In Online Learning and Neural Networks; Cambridge University Press: New York, NY, USA, 1998. [Google Scholar]
  37. Shalev-Shwartz, S. Online Learning and Online Convex Optimization. Found. Trends Mach. Learn. 2012, 4, 107–194. [Google Scholar] [CrossRef]
  38. Shalev-Shwartz, S. Online Learning: Theory, Algorithms, and Applications. Ph.D. Thesis, The Hebrew University of Jerusalem, Jerusalem, Israel, 2007. [Google Scholar]
  39. Zhukovskiy, V. Lyapunov Functions in Differential Games; Stability and Control: Theory, Methods and Applications; Taylor & Francis: London, UK, 2003. [Google Scholar]
Figure 1. Red partition: highly balanced split but impure (the partition cuts through the black and green classes). Green partition: highly balanced and highly pure split. Figure should be read in color.
Figure 2. Left: The blue curve captures the behavior of the upper-bound on the balancing factor as a function of J(h), the red curve captures the behavior of the lower-bound on the balancing factor as a function of J(h), and the green intervals correspond to the intervals where the balancing factor lies for different values of J(h). Right: The red line captures the behavior of the upper-bound on the purity factor as a function of J(h) when the balancing factor is fixed to 1/2. Figure should be read in color.
Figure 3. Functions G^e_*(π_1) = G̃^e(π_1)/ln 2 = ( π_1 ln(1/π_1) + (1 − π_1) ln(1/(1 − π_1)) )/ln 2, G^g_*(π_1) = 2 G̃^g(π_1) = 4π_1(1 − π_1), and G^m_*(π_1) = ( G̃^m(π_1) − √(C − 1) )/( √(2C − 1) − √(C − 1) ) = ( √(π_1(C − π_1)) + √((1 − π_1)(C − 1 + π_1)) − √(C − 1) )/( √(2C − 1) − √(C − 1) ) (the functions G̃^e(π_1), G̃^g(π_1), and G̃^m(π_1) were re-scaled to have values in [0, 1]) as functions of π_1. Figure should be read in color.
