
A Geometric Interpretation of Stochastic Gradient Descent Using Diffusion Metrics

Rita Fioresi 1, Pratik Chaudhari 2 and Stefano Soatto 3
1 Dipartimento di Matematica, piazza Porta San Donato 5, University of Bologna, 40126 Bologna, Italy
2 Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA
3 Computer Science Department, University of California, Los Angeles, CA 90095, USA
* Author to whom correspondence should be addressed.
Submission received: 30 November 2019 / Revised: 8 January 2020 / Accepted: 10 January 2020 / Published: 15 January 2020
(This article belongs to the Special Issue The Information Bottleneck in Deep Learning)

Abstract

This paper is a step towards developing a geometric understanding of a popular algorithm for training deep neural networks named stochastic gradient descent (SGD). We build upon a recent result which observed that the noise in SGD while training typical networks is highly non-isotropic. That motivated a deterministic model in which the trajectories of our dynamical systems are described via geodesics of a family of metrics arising from a certain diffusion matrix; namely, the covariance of the stochastic gradients in SGD. Our model is analogous to models in general relativity: the role of the electromagnetic field in the latter is played by the gradient of the loss function of a deep network in the former.

1. Introduction

Deep neural networks are high-dimensional machine learning models that have demonstrated impressive performance on a number of challenging tasks in computer vision, natural language processing and reinforcement learning [1,2]. These models are typically trained to minimize the misprediction error on large amounts of human-annotated data. A large number of diverse models with varying properties are prevalent in these application domains. Despite this diversity, stochastic gradient descent (SGD) is the gold standard for training deep neural networks. It has been shown to obtain good generalization performance, i.e., to train a model that performs well on new data, across a wide range of applications. In spite of this popularity and efficacy, a precise understanding of SGD for deep learning remains elusive.
This paper develops a geometric understanding of stochastic gradient descent. We build upon the work of [3], wherein the authors model the dynamics of SGD as a stochastic differential equation with state-dependent Gaussian noise. We interpret the covariance of this noise, called the diffusion matrix henceforth, as a metric on the parameter space. Our result is a deterministic equation, Equation (10), that can be compared to the SGD dynamics, Equation (1), near equilibrium points. We write the diffusion matrix D(x) in the form of Equation (3) to show how it fundamentally captures the anisotropy of the dynamical system underlying SGD. This clarifies how D(x) is one of the key factors that differentiates the steady-state solutions of SGD from those of ordinary gradient descent (GD) (see the comparison in [3,4,5]). Using the diffusion matrix, we then define a family of metrics on the parameter space that we call diffusion metrics. We then take Einstein's equation for the geodesics on a Riemannian manifold describing the motion of a particle subject to gravitational and electromagnetic fields. We replace the electromagnetic force by the ordinary gradient of the loss, while gravity is taken into account via the diffusion metric itself. Under some mild hypotheses on the architecture of the neural network, we show that the geodesics of this equation describe the evolution of a dynamical system driven not by Euclidean gradient descent but by relativistic gradient descent (RGD) with respect to the family of diffusion metrics.
We thus obtain Equation (10), which is in the same vein as natural gradient descent [6], but whose significance is much deeper in the context of SGD, since it stems from the anisotropy of the gradients with respect to the various parameters. This anisotropy encodes the difference between the dynamics of GD and those of SGD. We also compare our result with the ones in [3] and show them to be perfectly compatible. Finally, for the reader's convenience, Appendix A provides a quick review of some key facts of Riemannian geometry.

2. Continuous-Time SGD and the Diffusion Matrix

Stochastic gradient descent performs an update of the weights $x$ of a neural network, replacing the ordinary gradient of the loss function $f = \frac{1}{N}\sum_{i=1}^{N} f_i$ with $\nabla_B f$:
$$dx = -\nabla_B f\,dt, \qquad \nabla_B f = \frac{1}{|B|}\sum_{i \in B} \nabla f_i, \tag{1}$$
where $dx$ represents the continuous version of the weight update at step $j$: $x_{j+1} = x_j - \eta\,\nabla_B f(x_j)$, with the learning rate $\eta$ incorporated into the expression of $\nabla_B f$, and $B$ is the mini-batch. In the expression of the loss function $f = \frac{1}{N}\sum_{i=1}^{N} f_i$, $f_i$ is the loss relative to the $i$-th element of our dataset $\Sigma$ of size $|\Sigma| = N$. We assume that the weights belong to a compact subset $\Omega \subset \mathbb{R}^d$ and that the $f_i$'s satisfy suitable regularity conditions (see [3], Section 2, for more details).
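As a concrete illustration, the discrete update behind Equation (1) can be sketched in a few lines of Python; the quadratic per-sample loss and all names below are placeholders chosen for the example, not part of the paper's setup.

```python
import numpy as np

# Toy per-sample loss f_i(x) = 0.5 * ||x - z_i||^2, so its gradient is x - z_i.
# This loss is an assumption made only to keep the sketch self-contained.
def grad_f_i(x, z_i):
    return x - z_i

def sgd_step(x, data, batch_idx, lr=0.1):
    # nabla_B f = (1/|B|) * sum_{i in B} nabla f_i(x)
    grad_B = np.mean([grad_f_i(x, data[i]) for i in batch_idx], axis=0)
    # Discrete update x_{j+1} = x_j - eta * nabla_B f(x_j)
    return x - lr * grad_B

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))                        # dataset Sigma with N = 100, d = 5
x = np.zeros(5)
batch = rng.choice(len(data), size=10, replace=False)   # mini-batch B
x = sgd_step(x, data, batch)
```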
We define the diffusion matrix as the product of the size of the mini-batch $|B|$ and the variance of $\nabla_B f$, viewed as a random variable through $\phi : \Sigma \to \mathbb{R}^d$, $\phi(z_i) = \nabla f_i$:
$$D(x) = \mathbb{E}\left[\left(\phi - \mathbb{E}[\phi]\right)\left(\phi - \mathbb{E}[\phi]\right)^t\right] \tag{2}$$
Notice that $D(x) \in \mathbb{R}^{d \times d}$ and does not depend on the size of the mini-batch; it only depends on the weights $x$, the loss function $f$ and the dataset $\Sigma$. With a direct calculation one can show that:
$$D = \frac{1}{N}\sum_k (\nabla f_k)(\nabla f_k)^t - (\nabla f)(\nabla f)^t = \frac{1}{N^2}\left(\langle \partial_r \hat f, \partial_s \hat f \rangle\right)_{r,s}, \tag{3}$$
where:
$$\hat f = \left(f_1 - f_2,\; f_1 - f_3,\; \ldots,\; f_{N-1} - f_N\right) \in \mathbb{R}^{N(N-1)/2}$$
and $\langle\,\cdot\,,\,\cdot\,\rangle$ is the Euclidean scalar product. In fact:
$$D_{rs} = \frac{1}{N}\sum_{k=1}^{N} \partial_r f_k\,\partial_s f_k - \frac{1}{N^2}\sum_{i,j=1}^{N} \partial_r f_i\,\partial_s f_j = \frac{1}{N^2}\Big[ N\left(\partial_r f_1\,\partial_s f_1 + \cdots + \partial_r f_N\,\partial_s f_N\right) - \left(\partial_r f_1\,\partial_s f_1 + \partial_r f_1\,\partial_s f_2 + \cdots + \partial_r f_N\,\partial_s f_N\right)\Big] = \frac{1}{N^2}\Big[ (N-1)\,\partial_r f_1\,\partial_s f_1 - \partial_r f_1\,\partial_s f_2 - \cdots - \partial_r f_1\,\partial_s f_N - \partial_r f_2\,\partial_s f_1 + (N-1)\,\partial_r f_2\,\partial_s f_2 - \cdots - \partial_r f_2\,\partial_s f_N - \cdots - \partial_r f_N\,\partial_s f_1 - \partial_r f_N\,\partial_s f_2 - \cdots + (N-1)\,\partial_r f_N\,\partial_s f_N \Big]$$
which gives:
$$D_{rs} = \frac{1}{N^2}\Big[ (\partial_r f_1 - \partial_r f_2)(\partial_s f_1 - \partial_s f_2) + (\partial_r f_1 - \partial_r f_3)(\partial_s f_1 - \partial_s f_3) + \cdots + (\partial_r f_1 - \partial_r f_N)(\partial_s f_1 - \partial_s f_N) + (\partial_r f_2 - \partial_r f_3)(\partial_s f_2 - \partial_s f_3) + \cdots + (\partial_r f_{N-1} - \partial_r f_N)(\partial_s f_{N-1} - \partial_s f_N) \Big] = \frac{1}{N^2}\,\langle \partial_r \hat f, \partial_s \hat f \rangle.$$
The diffusion matrix effectively measures the anisotropy of our data: $D = 0$ if and only if $\partial_r f_i = \partial_r f_j$ for all $r = 1, \ldots, d$ and $i, j = 1, \ldots, N$. In other words, the diffusion matrix measures how the loss of each datum depends, at first order, on the weights in a different way than the loss of each of the other data points. It therefore tells us how much we should expect the SGD dynamics to differ from the GD dynamics. Notice that the expression in Equation (3) immediately gives a bound on the rank of $D$; namely, $\operatorname{rk}(D) \le N - 1$.
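The two expressions for $D$ in Equation (3) can be checked numerically; the following sketch uses randomly generated per-sample gradients (an assumption made purely for illustration) and also verifies the rank bound.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
N, d = 6, 10                              # N data points, d weights (here d > N)
G = rng.normal(size=(N, d))               # row k plays the role of grad f_k(x)

# First form of Equation (3): covariance of the per-sample gradients.
mean_grad = G.mean(axis=0)                # grad f(x)
D1 = G.T @ G / N - np.outer(mean_grad, mean_grad)

# Second form: (1/N^2) <d_r f_hat, d_s f_hat>, with f_hat the vector of pairwise differences.
G_hat = np.array([G[i] - G[j] for i, j in combinations(range(N), 2)])   # N(N-1)/2 rows
D2 = G_hat.T @ G_hat / N**2

assert np.allclose(D1, D2)                        # both expressions agree
assert np.linalg.matrix_rank(D1) <= N - 1         # rank bound rk(D) <= N - 1
```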
Table 1 suggests that for many architectures currently in use, the diffusion matrix has low rank and is hence singular; this follows simply by comparing the size $d$ of $D$ with its rank, which is bounded by $N - 1$. This fact turns out to be very important in the construction of the diffusion metrics, as we will see below.

3. Diffusion Metrics and General Relativity

The evolution of a dynamical system in general relativity takes place along the geodesics determined by the metric imposed on Minkowski space by the presence of gravitational masses. The equation for such geodesics, once Einstein's equation is solved, is:
$$\frac{d^2 x^\mu}{dt^2} + \Gamma^\mu_{\rho\sigma}\,\frac{dx^\rho}{dt}\frac{dx^\sigma}{dt} = \frac{q}{m}\,F^\mu_{\ \nu}\,\frac{dx^\nu}{dt}, \tag{4}$$
where $\Gamma^\mu_{\rho\sigma}$ are the Christoffel symbols of the Levi-Civita connection:
$$\Gamma^w_{uv} = \frac{1}{2}\,g^{wz}\left(\partial_u g_{vz} - \partial_z g_{uv} + \partial_v g_{uz}\right)$$
and $\frac{q}{m}F^\mu_{\ \nu}$ is a term accounting for an external force; e.g., one coming from an electromagnetic field.
If we take the time derivative of the differential equation underlying the ordinary (i.e., non-stochastic) gradient descent:
$$\frac{d^2 x^\mu}{dt^2} = -\frac{d}{dt}\,\partial^\mu f$$
and compare it with Equation (4), we observe that $-\frac{d}{dt}\partial^\mu f$ effectively replaces the force term $\frac{q}{m}F^\mu_{\ \nu}\frac{dx^\nu}{dt}$. Hence, the geodesic Equation (4) models the ordinary GD equation if we take a constant metric and replace the force term with the gradient of the loss; furthermore, this corresponds to the condition $D = 0$ in the SGD dynamics of Equation (1).
This suggests that one may define a metric depending on the diffusion matrix; this metric should become constant when $D = 0$. As a side remark, notice that since $D$ is singular in many important practical applications (see Table 1), it is not reasonable to use it to define the metric itself. On the other hand, since $D$ measures the anisotropy of the weight space, it is reasonable to employ it to perturb the Euclidean metric. The stochastic nature of the dynamical system governed by SGD is thus replaced by a perturbation of the dynamics of ordinary gradient descent. As an analogy, in the weak field approximation of general relativity (see [7]), the presence of (small) masses in space generates gravity; this motivates our (small) deformation of the Euclidean metric using the diffusion matrix.
At each point $x \in \Omega$, we define a metric, called the diffusion metric, as
$$g(x) = \mathrm{id} + \varepsilon(x)\,D(x) \tag{6}$$
with $\varepsilon(x) < 1/M_x$, where $M_x = \max\{\lambda_k : \lambda_k \text{ is an eigenvalue of } D(x)\}$. This ensures that $g$ is non-singular at each $x \in \Omega$. We have thus defined a family of metrics that depends on the real parameter $\varepsilon$. We expect this model to approximate the solution of the Fokker-Planck equation when the parameter $\beta^{-1}$, whose interpretation is related to the temperature, is very small (see [3] for the notation and more details).
Notice that our heuristic hypothesis on $\varepsilon$ allows us to make the so-called weak field approximation (see [7]):
$$g^{-1} = \mathrm{id} - \varepsilon\,D(x).$$
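A small numerical check of this construction, with a randomly generated diffusion matrix standing in for $D(x)$ (an assumption made only for illustration), shows that choosing $\varepsilon$ below the inverse of the largest eigenvalue keeps $g$ non-singular and makes the weak field inverse accurate up to $O(\varepsilon^2)$ terms:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 6, 10
G = rng.normal(size=(N, d))                                   # stand-in per-sample gradients
D = G.T @ G / N - np.outer(G.mean(axis=0), G.mean(axis=0))    # diffusion matrix as in Equation (3)

# Diffusion metric g = id + eps * D, with eps < 1 / (largest eigenvalue of D).
lam_max = np.linalg.eigvalsh(D).max()
eps = 0.1 / lam_max
g = np.eye(d) + eps * D

# Weak field approximation: g^{-1} is close to id - eps * D, up to O(eps^2).
g_inv_exact = np.linalg.inv(g)
g_inv_weak = np.eye(d) - eps * D
print(np.abs(g_inv_exact - g_inv_weak).max())                 # small, of order (eps * ||D||)^2
```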
Hence, we have the following expression for the Christoffel symbols (in this approximation we discard terms of order $O(\varepsilon^2)$):
$$\Gamma^w_{uv} = \frac{1}{2}\sum_z \left(\delta^{wz} - \varepsilon\,d^{wz}\right)\varepsilon\left(\partial_u d_{vz} - \partial_z d_{uv} + \partial_v d_{uz}\right) = \frac{\varepsilon}{2}\left(\partial_u d_{vw} - \partial_w d_{uv} + \partial_v d_{uw}\right),$$
where $d_{ij}$ are the coefficients of $D$ and $\delta^{wz}$ is the Kronecker delta.
Let us now compute the Christoffel symbols and then substitute them into the geodesic equation given by Equation (4):
$$\Gamma^k_{ij} = \frac{\varepsilon}{2N^2}\Big[\langle \partial_i\partial_j \hat f, \partial_k \hat f\rangle + \langle \partial_j \hat f, \partial_i\partial_k \hat f\rangle - \langle \partial_i\partial_k \hat f, \partial_j \hat f\rangle - \langle \partial_i \hat f, \partial_k\partial_j \hat f\rangle + \langle \partial_j\partial_k \hat f, \partial_i \hat f\rangle + \langle \partial_k \hat f, \partial_i\partial_j \hat f\rangle\Big] = \frac{\varepsilon}{N^2}\,\langle \partial_i\partial_j \hat f, \partial_k \hat f\rangle.$$
Let us substitute into Equation (4) (now writing the sums explicitly):
$$\frac{d^2 x^k}{dt^2} + \frac{\varepsilon}{N^2}\sum_{i,j}\langle \partial_i\partial_j \hat f, \partial_k \hat f\rangle\,\frac{dx^i}{dt}\frac{dx^j}{dt} = -\frac{d}{dt}\,\partial_k f. \tag{8}$$
Let us concentrate on the expression:
$$\frac{\varepsilon}{N^2}\sum_{i,j}\langle \partial_i\partial_j \hat f, \partial_k \hat f\rangle\,\frac{dx^i}{dt}\frac{dx^j}{dt} = \frac{\varepsilon}{N^2}\sum_{i,j,\alpha}\partial_i\partial_j \hat f_\alpha\;\partial_k \hat f_\alpha\;\frac{dx^i}{dt}\frac{dx^j}{dt} = \frac{\varepsilon}{N^2}\sum_\alpha \frac{d^2 \hat f_\alpha}{dt^2}\;\partial_k \hat f_\alpha.$$
Now, we take the integral in $dt$ (integrating by parts twice):
$$\frac{\varepsilon}{N^2}\int\sum_\alpha \frac{d^2 \hat f_\alpha}{dt^2}\,\partial_k \hat f_\alpha\,dt = \frac{\varepsilon}{N^2}\sum_\alpha\left[\frac{d\hat f_\alpha}{dt}\,\partial_k \hat f_\alpha - \int\frac{d\hat f_\alpha}{dt}\,\frac{d}{dt}\partial_k \hat f_\alpha\,dt\right] = \frac{\varepsilon}{N^2}\sum_\alpha\left[\frac{d\hat f_\alpha}{dt}\,\partial_k \hat f_\alpha - \hat f_\alpha\,\frac{d}{dt}\partial_k \hat f_\alpha + \int \hat f_\alpha\,\frac{d^2}{dt^2}\partial_k \hat f_\alpha\,dt\right].$$
Notice that, in many practical applications, we have:
$$\frac{d^2}{dt^2}\,\partial_k \hat f_\alpha = 0,$$
because $\partial_i\partial_j\partial_k \hat f = 0$.
Hence
$$\frac{\varepsilon}{N^2}\int\sum_\alpha \frac{d^2 \hat f_\alpha}{dt^2}\,\partial_k \hat f_\alpha\,dt = \frac{\varepsilon}{N^2}\sum_\alpha\left[\left\langle \nabla\hat f_\alpha, \frac{dx}{dt}\right\rangle\partial_k \hat f_\alpha - \hat f_\alpha\,\frac{d}{dt}\partial_k \hat f_\alpha\right] = \varepsilon\left\langle d_k, \frac{dx}{dt}\right\rangle - \frac{\varepsilon}{N^2}\sum_\alpha \hat f_\alpha\,\frac{d}{dt}\partial_k \hat f_\alpha,$$
where $d_k$ denotes the $k$-th row of the diffusion matrix $D$. We now substitute the expression obtained into Equation (8), taking the integral in $dt$:
$$\frac{dx^k}{dt} + \varepsilon\left\langle d_k, \frac{dx}{dt}\right\rangle - \frac{\varepsilon}{N^2}\sum_\alpha \hat f_\alpha\,\frac{d}{dt}\partial_k \hat f_\alpha = -\partial_k f.$$
We may assume $\varepsilon/N^2$ to be very small, as motivated by Equation (1); hence, we discard the last term. Writing the equation in vector form, we have:
$$\frac{dx}{dt} + \varepsilon\,D\,\frac{dx}{dt} = -\nabla f.$$
By the weak field approximation, $(I + \varepsilon D)^{-1} \simeq (I - \varepsilon D)$, we can write:
$$\frac{dx}{dt} = -(I - \varepsilon D)\,\nabla f = -\nabla_D f, \tag{10}$$
where $\nabla_D f$ is the gradient computed according to the diffusion metric of Equation (6).
We can summarize our result as follows: the SGD equation, Equation (1), can be replaced, provided the approximation of Equation (9) holds, by the deterministic equation, Equation (10), in which the dynamical system evolves with respect to the gradient computed according to the diffusion metric of Equation (6).
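As an illustration, a discretised version of Equation (10) can be written as an update rule; the toy quadratic per-sample loss, the choice of $\varepsilon$ as a fraction of the inverse largest eigenvalue, and the step size below are assumptions made only for this sketch.

```python
import numpy as np

def diffusion_matrix(per_sample_grads):
    # Equation (3): covariance of the per-sample gradients.
    N = per_sample_grads.shape[0]
    mean_grad = per_sample_grads.mean(axis=0)
    return per_sample_grads.T @ per_sample_grads / N - np.outer(mean_grad, mean_grad)

def rgd_step(x, per_sample_grads, lr=0.1, eps_scale=0.1):
    # Discretised Equation (10): x <- x - lr * (I - eps * D) grad f,
    # i.e. a gradient step taken in the diffusion metric (weak field approximation).
    D = diffusion_matrix(per_sample_grads)
    lam_max = np.linalg.eigvalsh(D).max()
    eps = eps_scale / lam_max if lam_max > 0 else 0.0
    grad_f = per_sample_grads.mean(axis=0)
    return x - lr * (grad_f - eps * (D @ grad_f))

# Toy per-sample losses f_i(x) = 0.5 * ||x - z_i||^2, so grad f_i(x) = x - z_i.
rng = np.random.default_rng(3)
data = rng.normal(size=(20, 4))
x = np.zeros(4)
for _ in range(100):
    x = rgd_step(x, x - data)          # per-sample gradients stacked row by row
print(x, data.mean(axis=0))            # x approaches the minimiser of f
```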
We now want to compare our result, Equation (10), with Section 1 of [3], in order to understand how the steady-state solutions of Equation (10) compare with the SGD steady-state solutions described by Equation (1). In [3], the authors regard SGD as minimizing a function $\Phi$ instead of our loss $f$. Let us focus on (8) in [3], where the relation between $f$ and $\Phi$ is discussed. In our case, we take $\nabla\Phi = \nabla_D f$, so that Equation (11) becomes
$$-\nabla f(x) + \tilde D(x)\,\nabla\Phi(x) = 0,$$
where $\tilde D(x) = I + \varepsilon D(x)$ is the diffusion metric. If the term $D(x)$ appearing in (8) in [3] is spelled out as our $\tilde D(x)$, we can write that equation as:
$$j(x) = -\nabla f(x) + \tilde D(x)\,\nabla\Phi(x) - \beta^{-1}\,\nabla\cdot\tilde D(x).$$
Notice that, according to our approximation Equation (9), $\nabla\cdot\tilde D(x) = 0$. Hence, Equation (12) (that is, (8) in [3]) is perfectly compatible with our treatment; furthermore, assumption 4 in [3] is fully justified by the fact that $j(x) = 0$.

4. Conclusions

The general relativity (GR) model helps to provide a deterministic approach to the evolution of the dynamical system described by SGD, through the use of the diffusion metric, which accounts for the anisotropy of the system. Our results are compatible with [3]; moreover, they reduce to GD for an isotropic system. We plan to explore deep neural networks by mixing our GR model with energy models in a forthcoming paper (see also [9]). This will provide new geometric insight into the theory.

Author Contributions

Conceptualization, R.F. and P.C.; Investigation, R.F. and P.C.; Methodology, R.F.; Supervision, S.S.; Writing and original draft, R.F.; Writing and review editing, P.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by MSCA EU project GHAIA grant number 777822.

Acknowledgments

The authors wish to thank A. Achille and F. Faglioni for many illuminating discussions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Riemannian Geometry

We collect a few well-known facts of Riemannian geometry and invite the reader to consult [10] for more details.
In Riemannian geometry, we define a metric $g$ on a smooth manifold $M$ as a smooth assignment $p \mapsto g_p$ which gives, for each $p \in M$, a (non-degenerate) scalar product on $T_p(M)$, the tangent space of $M$ at $p$. Usually, this scalar product is assumed to be positive definite; however, for general relativity it is necessary to drop this assumption, so that we speak of a pseudometric, because our main example is the Minkowski metric. To ease the terminology, we say "metric" to include this more general setting as well.
Once a metric is given, we say that $M$ is a Riemannian manifold. In local coordinates $x_1, \ldots, x_n$, we express the metric using 1-forms:
$$g = \sum_{i,j} g_{ij}\,dx_i\,dx_j,$$
where
$$g_{ij}\big|_p := g_p\!\left(\frac{\partial}{\partial x_i}\bigg|_p,\;\frac{\partial}{\partial x_j}\bigg|_p\right) \quad\text{and}\quad \left\{\frac{\partial}{\partial x_1}\bigg|_p, \ldots, \frac{\partial}{\partial x_n}\bigg|_p\right\}$$
is a basis of the tangent space $T_pM$.
For example, $\mathbb{R}^n$, identified with its tangent space at every point, has a canonical or standard metric given by:
$$g_p^{\mathrm{can}} : T_p\mathbb{R}^n \times T_p\mathbb{R}^n \to \mathbb{R}, \qquad \left(\sum_i a_i\,\frac{\partial}{\partial x_i},\;\sum_j b_j\,\frac{\partial}{\partial x_j}\right) \mapsto \sum_i a_i b_i.$$
Here, $g_{ij}^{\mathrm{can}} = \delta_{ij}$.
Usually, we drop the ∑ symbol, following Einstein’s convention.
An affine connection ∇ on a smooth manifold $M$ is a bilinear map $(X, Y) \mapsto \nabla_X Y$, associating with a pair of vector fields $X, Y$ on $M$ another vector field $\nabla_X Y$, satisfying:
  • $\nabla_{fX} Y = f\,\nabla_X Y$ for all functions $f$ on $M$;
  • $\nabla_X (fY) = df(X)\,Y + f\,\nabla_X Y$.
Once this definition is given, it is possible to define ∇ on tensors of every order.
On a Riemannian manifold we have a unique affine connection, the Levi-Civita connection ∇, which is torsion-free and preserves the metric; i.e., $\nabla g = 0$. In local coordinates, the components of the connection are called the Christoffel symbols:
$$\nabla_{\frac{\partial}{\partial x_i}}\,\frac{\partial}{\partial x_j} = \Gamma^k_{ij}\,\frac{\partial}{\partial x_k}.$$
By the above-mentioned uniqueness, the $\Gamma^k_{ij}$'s are expressed in terms of the metric components $g_{ij}$:
$$\Gamma^l_{jk} = \frac{1}{2}\,g^{lr}\left(\partial_k g_{rj} + \partial_j g_{rk} - \partial_r g_{jk}\right),$$
where, as usual, $g^{ij}$ are the coefficients of the dual metric tensor; i.e., the entries of the inverse of the matrix $(g_{kl})$. Torsion-freeness is equivalent to the symmetry
$$\Gamma^l_{jk} = \Gamma^l_{kj}.$$
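The formula above can be evaluated symbolically; the following sketch (using the round metric of the 2-sphere purely as an illustrative example, not something taken from the paper) computes a couple of Christoffel symbols from a given metric:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # round metric on the 2-sphere
g_inv = g.inv()

def christoffel(l, j, k):
    # Gamma^l_{jk} = (1/2) g^{lr} (d_k g_{rj} + d_j g_{rk} - d_r g_{jk}), summed over r
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[l, r] *
        (sp.diff(g[r, j], coords[k]) + sp.diff(g[r, k], coords[j]) - sp.diff(g[j, k], coords[r]))
        for r in range(len(coords))))

print(christoffel(0, 1, 1))   # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
print(christoffel(1, 0, 1))   # Gamma^phi_{theta phi} =  cos(theta)/sin(theta)
```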
In $\mathbb{R}^n$, the gradient of a scalar function $f$ is the vector field characterized by the property $\operatorname{grad}(f)\cdot v = D_v f$ (we shall denote it by $\nabla f$ so that no confusion arises). In other words, its scalar product (in the Euclidean metric) with a tangent vector $v$ gives the directional derivative of $f$ along $v$. When $M$ is a Riemannian manifold with metric $g$, the gradient of a function $f$ on $M$ is defined in the same way, except that the scalar product is now given by $g$. So, in local coordinates, we have:
$$\nabla_g f = \frac{\partial f}{\partial x_i}\,g^{ij}\,\frac{\partial}{\partial x_j}.$$
Notice that when $g_{ij} = \delta_{ij}$, we retrieve the usual definition in $\mathbb{R}^n$.
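In coordinates, this simply means applying the inverse metric to the vector of partial derivatives; a minimal numerical sketch, with an arbitrary diagonal metric chosen as an assumption for the example:

```python
import numpy as np

def riemannian_gradient(partials, g):
    # Components of grad_g f: g^{ij} d_j f, i.e. the inverse metric applied to the differential.
    return np.linalg.solve(g, partials)

partials = np.array([1.0, 2.0, 3.0])           # partial derivatives of f at a point
assert np.allclose(riemannian_gradient(partials, np.eye(3)), partials)   # g = id gives the usual gradient

g = np.diag([1.0, 4.0, 9.0])                   # an example non-Euclidean metric
print(riemannian_gradient(partials, g))        # [1.0, 0.5, 0.333...]
```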
We end our short summary of the key concepts with the notion of a geodesic.
A geodesic γ on a smooth manifold M with an affine connection ∇ is a curve defined by the following equation:
$$\nabla_{\dot\gamma}\,\dot\gamma = 0.$$
Geometrically, this expresses the fact that the parallel transport, given by ∇, along the curve γ preserves any tangent vector to the curve. In local coordinates this becomes:
$$\frac{d^2\gamma^\lambda}{dt^2} + \Gamma^\lambda_{\mu\nu}\,\frac{d\gamma^\mu}{dt}\,\frac{d\gamma^\nu}{dt} = 0.$$
Notice that when the metric is constant, we have the familiar equation:
$$\frac{d^2\gamma^\lambda}{dt^2} = 0;$$
that is, the geodesics are straight lines.

References

  1. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436.
  2. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
  3. Chaudhari, P.; Soatto, S. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. arXiv 2017, arXiv:1710.11029.
  4. Chaudhari, P.; Soatto, S. On the energy landscape of deep networks. arXiv 2015, arXiv:1511.06485.
  5. Chaudhari, P.; Choromanska, A.; Soatto, S.; LeCun, Y.; Baldassi, C.; Borgs, C.; Chayes, J.; Sagun, L.; Zecchina, R. Entropy-SGD: Biasing gradient descent into wide valleys. arXiv 2016, arXiv:1611.01838.
  6. Amari, S. Natural Gradient Works Efficiently in Learning. Neural Comput. 1998, 10, 251–276.
  7. Adler, R.; Bazin, M.; Schiffer, M. Introduction to General Relativity; McGraw-Hill: New York, NY, USA, 1965.
  8. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv 2016, arXiv:1608.06993.
  9. Achille, A.; Soatto, S. On the emergence of invariance and disentangling in deep representations. arXiv 2017, arXiv:1706.01350.
  10. Petersen, P. Riemannian Geometry; Graduate Texts in Mathematics; Springer: Cham, Switzerland, 1998.
Table 1. Values for N and d for various architectures on CIFAR and SVHN datasets (see [8]).
Architecture        d = |Weights|    N = |Data|, CIFAR    N = |Data|, SVHN
ResNet              1.7 M            60 K                 600 K
Wide ResNet         11 M             60 K                 600 K
DenseNet (k = 12)   1 M              60 K                 600 K
DenseNet (k = 24)   27.2 M           60 K                 600 K

