Review

Differential entropy and time

Institute of Physics, University of Zielona Góra, ul. Szafrana 4a, 65-516 Zielona Góra, Poland
Submission received: 22 August 2005 / Accepted: 17 October 2005 / Published: 18 October 2005

Abstract:
We give a detailed analysis of the Gibbs-type entropy notion and its dynamical behavior in the case of time-dependent continuous probability distributions of varied origins, related to classical and quantum systems. The purpose-dependent usage of conditional Kullback-Leibler and Gibbs (Shannon) entropies is explained in the case of non-equilibrium Smoluchowski processes. The very different temporal behaviors of the Gibbs and Kullback entropies are confronted. A specific conceptual niche is addressed, where quantum von Neumann, classical Kullback-Leibler and Gibbs entropies can be consistently introduced as information measures for the same physical system. If the dynamics of probability densities is driven by the Schrödinger picture wave-packet evolution, Gibbs-type and related Fisher information functionals appear to quantify nontrivial power transfer processes in the mean. This observation is found to extend to classical dissipative processes and supports the view that the Shannon entropy dynamics provides an insight into physically relevant non-equilibrium phenomena which are inaccessible in terms of the Kullback-Leibler entropy and typically ignored in the literature.

1. Introduction

Among numerous manifestations of the concept of entropy in physics and mathematics, the information-theory based entropy methods have been devised to investigate the large time behavior of solutions of various (mostly dissipative) partial differential equations. Shannon, Kullback and von Neumann entropies are typical information theory tools, designed to quantify the information content and possibly information loss for various classical and quantum systems in a specified state.
For quantum systems the von Neumann entropy vanishes on pure states, hence one presumes to have complete information about the state of the system. On the other hand, for pure states the Gibbs-type (Shannon, i.e. differential) entropy gives access to another information-theory level, associated with a probability distribution inferred from an L2(Rn) wave packet.
Although Shannon or Kullback entropies are interpreted as information measures, it is quite natural to think of entropy as a measure of uncertainty. In view of the profound role played by the Shannon entropy in the formulation of entropic indeterminacy relations [1,2], the term information is, in the present paper, mostly used in the technical sense, meaning the inverse of uncertainty.
In physics, the notion of entropy is typically regarded as a measure of the degree of randomness and the tendency (trends) of physical systems to become less and less organized. Throughout the paper we shall attribute a more concrete meaning to the term organization, both in the classical and quantum contexts. Namely, we shall pay special attention to quantifying, in terms of suitable entropy functionals, the degree of the probability distribution (de)localization, and the dynamical behavior of this specific - localization uncertainty - feature of a physical system.

1.1 Notions of entropy

Notions of entropy, information and uncertainty are intertwined and cannot be sharply differentiated. While entropy and uncertainty are - to some extent synonymous - measures of ignorance (lack of information, uncertainty), the complementary notion of information basically quantifies the ability of observers to make reliable predictions about the system, [4,6,7]: the more aware one is about chances of a concrete outcome, the lower is the uncertainty of this outcome.
Normally, the growth of uncertainty is identified with an increase of the entropy which in turn is interpreted as an information loss. Consult e.g. standard formulations of the celebrated Boltzmann H-theorem.
Following Ref. [3] let us recall that entropy - be it thermodynamical (Gibbs-Boltzmann), dynamical, von Neumann, Wehrl, Shannon, Renyi, Tsallis or any other conceivable candidate - has an exceptional status among physical quantities. As a derived quantity it does not show up in any fundamental equation of motion. Generically, there is no a priori preferred notion of entropy (perhaps, except for the thermodynamical Clausius case) in physical applications and its specific choice appears to be purpose-dependent.
As an obvious remnant of the standard thermodynamical reasoning, one expects entropy to be a state function of the system (thermodynamical notions of equilibrium or near-equilibrium are implicit). This state connotation is a source of ambiguities, since inequivalent notions of the system state are used in the description of physical systems, be they classical, thermodynamical or quantum. Not to mention the rather specialized meaning of state employed in standard information theory, [4,9,7].
A primitive information-theory system is simply a bit whose two admissible states are binary digits 1 and 0. Its quantum equivalent is a qubit whose admissible states are vectors in a two-dimensional Hilbert space, hence an infinity of pure states of a two-level quantum system.
The information theory framework, if extended to more complicated systems, employs a plethora of notions of state [4,7]. As very special cases we may mention a phase-space point as the determinative of the state of a classical dynamical system, or the macroscopic notion of a thermodynamical state in its classical and quantum versions, [3].
A symbolic mathematical representation of quantum states in terms of wave vectors and/or density operators is expected to provide an experimentally verifiable ”information” about the system. To obtain a catalogue of the corresponding statistical predictions, an a priori choice of suitable observables (and thus measurement procedures) is necessary. Then, a casual interpretation of entropy as a measure of one’s uncertainty about measurable properties of a system in a prescribed quantum state may acquire an unambiguous meaning.
When adopting the state notion to the Hilbert space language of quantum theory, we realize that normalized wave functions and density operators, which are traditionally supposed to determine the quantum state, allow one to extend the notion of entropy to certain functionals of the state of the quantum system. The von Neumann entropy
S(ρ̂) = −Tr(ρ̂ ln ρ̂)        (1)
of a quantum state (e.g. the density operator ρ̂), though often infinite, is typically related to the degree of departure from purity (e.g. the ”mixedness” level) of the state and is particularly useful while quantifying measurements performed upon finite quantum systems.
For a given density operator ρ̂, the von Neumann entropy is commonly accepted as a reliable measure of the information content (about the departure from purity) to be experimentally extracted from a quantum system in a given state. Only under very specific circumstances, like e.g. in an optimal ”quantum experiment” [11,12] which refers to the diagonal density operator (with pi, 1 ≤ i ≤ N, being its eigenvalues), can the information gain be described in terms of both von Neumann’s and the standard Shannon measure of information:
S(ρ̂) = −∑i pi ln pi        (2)
Since von Neumann entropy is invariant under unitary transformations, the result exhibits an invariance under the change of the Hilbert space basis and the conservation in time for a closed system (when there is no information/energy exchange with the environment). Thus, Schrödinger dynamics has no impact on the von Neumann encoding of information, see e.g. also [13,14] for a related discussion.
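As a side illustration (not part of the original argument), the unitary invariance of the von Neumann entropy is easy to verify numerically; the minimal sketch below uses an arbitrarily chosen mixed state and a random unitary, and assumes nothing beyond Eq. (1):

    import numpy as np

    rng = np.random.default_rng(0)

    def von_neumann_entropy(rho):
        # S(rho) = -Tr(rho ln rho), computed from the eigenvalues of rho
        lam = np.linalg.eigvalsh(rho)
        lam = lam[lam > 1e-12]              # convention: 0 ln 0 = 0
        return float(-np.sum(lam * np.log(lam)))

    # an illustrative mixed state: a convex combination of orthogonal projectors
    p = 0.3
    rho = np.diag([p, 1.0 - p, 0.0]).astype(complex)

    # a random unitary U from the QR decomposition of a complex matrix
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    U, _ = np.linalg.qr(A)

    print(von_neumann_entropy(rho))                    # about 0.611
    print(von_neumann_entropy(U @ rho @ U.conj().T))   # the same value: basis independence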
Pure states have vanishing von Neumann entropy (S(ρ̂) = 0 ”for the pure states and only for them”, [3]) and are normally considered as irrelevant from the quantum information theory perspective, since ”one has complete information” [3] about such states. One may even say that a pure state is an unjustified over-idealization, since otherwise it would constitute e.g. a completely measured state of a system in an infinite Hilbert space, [15]. A colloquial interpretation of this situation is: since the wave function provides a complete description of a quantum system, surely we have no uncertainty about this quantum system, [16].
Note that as a side comment we find in Ref. [15] a minor excuse: this idealization, often employed for position-momentum degrees of freedom, is usually an adequate approximation, to be read as an answer to an objection of Ref. [17]: ”although continuous observables such as the position are familiar enough, they are really unphysical idealizations”, c.f. in this connection [18] for an alternative view.
On the other hand, the classic Shannon entropy is known to be a natural measure of the amount of uncertainty related to measurements for pairs of observables, discrete and continuous on an equal footing, when a quantum system actually is in a pure state. Hence, a properly posed question reveals obvious uncertainties where at the first glance we have no uncertainty. The related entropic uncertainty relations for finite and infinite quantum systems have received due attention in the literature, in addition to direct investigations of the configuration space entropic uncertainty/information measure of L2(Rn) wave packets, [15], [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42].
The notions of Shannon and von Neumann entropy commonly used in the literature, although coinciding in some cases, definitely refer to different categories of predictions and information measures for physical systems. In contrast to the exclusively quantum concept of von Neumann entropy, the Shannon entropy - quite apart from its purely classical provenance - appears to capture a number of properties of quantum systems which cannot be detected nor described by means of the von Neumann entropy.
Obviously, there is no use for the Shannon entropy if one is interested in verifying, for mixed quantum states, how strongly mixed a given state actually is. On the other hand, the von Neumann entropy appears to be useless in the analysis of L2(R) wave packets and their dynamical manifestations (time-dependent analysis), which are currently within the reach of experimental techniques, [43,44]. It is enough to invoke pure quantum states in L2(Rn) and standard position-momentum observables which, quite apart from a hasty criticism [17], still stand for a valid canonical quantization cornerstone of quantum theory, [18].
Those somewhat underestimated facts seem to underlie statements about an inadequacy of Shannon entropy in the quantum context, [11], while an equally valid statement is that the von Neumann entropy happens to be inadequate. The solution of the dilemma lies in specifying the purpose, see also [12].

1.2 Differential entropy

We are primarily interested in the information content of pure quantum states in L2(Rn), and thus pursue the following (albeit scope-limited, c.f. [43,44] for experimental justifications) view: an isolated system is represented in quantum mechanics by a state vector that conveys statistical predictions for possible measurement outcomes.
Consequently, it is the state vector which we regard as an information (alternatively, predictions and uncertainty) resource and therefore questions like, [45]: how much information in the state vector or information about what, may be considered meaningful. Let us emphasize that we do not attempt to define an information content of a physical system as a whole, but rather we wish to set appropriate measures of uncertainty and information for concrete pure states of a quantum system.
A particular quantum state should not be misinterpreted to provide a complete description of the corresponding physical system itself, [46]. In fact, when we declare any Schrödinger’s ψ as the state of a quantum system, we effectively make a statement about the probabilities of obtaining certain results upon measurement of suitable observables, hence we refer to a definite experimental setup. Therefore, to change or influence this state is not quite the same as changing or influencing the system.
Our still vague notion of information does not refer to qubits, since we shall basically operate in an infinite dimensional Hilbert space. This does not prohibit a consistent use of information-theory concepts, since the analytic information content of a quantum state vector, even when reduced to a properly handled plane wave, is not merely an abstraction and can be dug out in realistic experiments, including those specific to time-dependent quantum mechanics, [43,44]. On the way one may verify the compliance with quantum theory of a number of well defined properties of the quantum system for which: the only features known before an experiment is performed are probabilities of various events to occur, [11].
In the case of a quantum mechanical position probability density, its analytic form is assumed to arise in conjunction with solutions of the Schrödinger equation. Then, we need to generalize the original Shannon’s entropy for a discrete set of probabilities to the entropy of a continuous distribution with the density distribution function [4], which is also named the differential entropy, [5,6].
Most of our further discussion will be set in a specific context of quantum position-momentum information/uncertainty measures, where the classical form of Shannon differential entropy [4] has been used for years in the formulation of entropic versions of Heisenberg-type indeterminacy relations, [19,20,21,25].
The entropic form of indeterminacy relations, enters the stage through the Fourier analysis of L2(Rn) wave packets, in conjunction with the Born statistical interpretation, hence with ψ-induced probability measures in position and momentum space, [19,20]. The experimental connotations pertaining to the notion of uncertainty or indeterminacy are rather obvious, although they do not quite fit to the current quantum information idea of a ”useful” quantum measurement, [11].
Given the probability density ρ(x) on Rn, we define the differential entropy [5,6,48] as follows:
S(ρ) ≐ −∫Rn ρ(x) ln ρ(x) dx        (3)
One may consider a subset Γ ⊂ Rn to be the support of ρ instead of Rn; this is guaranteed by the convention that the integrand in Eq. (3) vanishes wherever ρ does. Note a minor but crucial notational difference between ρ̂ and ρ.
Let us stress that, modulo minor exceptions, throughout the paper we carefully avoid dimensional quantities (all relevant dimensional constants, like the Planck constant ħ, are scaled away), since otherwise the above differential entropy definition may be dimensionally defective and have no physical meaning. An extensive list of differential entropy values for various probability densities can be found in Ref. [5].
Since our paper is supposed to be concerned with physical applications, in Section II we shall analyze the issue of how the differential entropy definition depends on the units used. The related difficulty, often overlooked in the literature, refers to literally taking the logarithm of a dimensional argument, see e.g. [9,10].
In the quantum mechanical context, we shall invoke either position S(ρ) or momentum S(ρ̃) information entropies, with no recourse to the classical entropy given in terms of classical phase-space distributions f(q, p) or their Wigner/Husimi analogues (Wehrl entropy), [3,28]. The notion of entropic uncertainty relations, [2,21,22,25], explicitly relies on the differential entropy input. Namely, an arithmetic sum of (presumed to be finite) momentum and position information entropies for any normalized L2(Rn) wave packet ψ(x) is bounded from below:
S(ρ) + S(ρ̃) ≥ n(1 + ln π)        (4)
where n stands for the configuration space (respectively momentum space) dimension, [21]. This feature is worth emphasizing, since neither S(ρ) nor S(ρ̃) on its own is bounded from below or from above. Nonetheless, both take finite values in physically relevant situations and their sum is always positive.

1.3 Temporal behavior-preliminaries

Since a normalized wave function ψ represents a pure state of a quantum system whose dynamics is governed by the Schrödinger equation, only for stationary states is the differential entropy S(ρ) certainly a conserved quantity. In general, the Schrödinger picture evolution of ψ(x, t), and thus of ρ(x, t) = |ψ(x, t)|², may give rise to a nontrivial dynamics of the information entropy associated with the wave packet ψ(x, t).
Let us point out that most of the ”entropic” research pertains to time-independent situations, like in case of stationary solutions of the Schrödinger equation. Notable exceptions are Refs. [27,33,34]. On general non-quantum grounds an information (differential entropy) dynamics is addressed in Refs. [6,47] and [59,60,61,62,63,64,65,66], see also [67,68,69,71,70].
The differential entropy, for a number of reasons [4,6], is said not to quantify the absolute ”amount of information carried by the state of the system” (Shannon’s uncertainty), unless carefully interpreted. Up to measure-preserving coordinate transformations, however, the latter objection does not apply, and this feature gave some impetus to numerically assisted comparative studies of the Shannon information content of different pure states of a given quantum system.
Results range from simple atoms to molecules, nuclei, aggregates of particles, many-body Bose and Fermi systems, and Bose-Einstein condensates, see e.g. Refs. [29,30,31,32,33,34,35,36,37,38,39,40,41,42]. In these cases, Shannon’s differential entropy appears to be a fully adequate measure of the localization degree (which in turn is interpreted as both the uncertainty measure and the information content) of the involved wave packets.
A difference of two information entropies (evaluated with respect to the same coordinate system), S(ρ) − S(ρ′), is known to quantify an absolute change in the information content when passing from one state of a given system to another. All potential problems with dimensional units would disappear in this case, [4,9]. Alternatively, to this end one may invoke the familiar notion of the relative Kullback entropy − ∫Γ ρ(ln ρ − ln ρ′) dx, [4,6], provided ρ′ is strictly positive.
Cogent recommendations towards the use of the Shannon information measure, plainly against the Kullback option, can be found in Ref. [72]. We shall come to this point later. For arguments just to the opposite see e.g. [73] and also [92,93].
In the present paper, we predominantly invoke the differential entropy. In Section IV we shall describe a number of limitations upon the use of the Kullback entropy. If both entropies can be safely employed for the same physical model (like e.g. for the diffusion type dynamics with asymptotic invariant densities), we establish direct links between the Shannon and Kullback entropy dynamics.
In the context of the ”information dynamics” S ≐ S(t) induced by the time development of probability densities, [6,56], it is the difference S(t) − S(t′) between the (presumed to be finite) information entropy values for the time-dependent state of the same physical system, considered at times t′ < t, which properly captures the net uncertainty/information change in the respective time interval [t′, t]. Let us mention that the very same strategy underlies the Kolmogorov-Sinai entropy notion for classical dynamical systems, [60,61,48].
In particular, the rate in time of the information entropy, dS/dt, is a well defined quantity characterizing the temporal changes (none, gain or loss) in the information content of a given L2(Rn) normalized wave packet ψ(x, t) (strictly speaking, of the related probability density). We indicate that, at variance with standard thermodynamical intuitions, the quantum mechanical information (differential) entropy need not be a monotonic function of time. In the course of its evolution it may oscillate, increase or decrease with the flow of time, instead of merely increasing with time or staying constant, as customarily expected. This holds regardless of the intrinsic time-reversal property of the quantum dynamics.
To conform with the information theory lore, we need to address an information entropy balance in the course of time, since for an isolated quantum system there is no analog of a thermal reservoir, capable of decreasing (removing) or increasing (adding) the information entropy of the particular state in which the system actually is. Since there is no quantifiable energy exchange with the environment, the actual purpose and ”fate” of the differential entropy need to be investigated.
The entropic method we follow in the present paper extends to any formalism operating with general time-dependent spatial probability densities, [47,48,50], even if set out of the explicit thermodynamic context (e.g. the phase space formulation of statistical mechanics).
Information entropy and its intrinsic dynamics, like e.g. the information flow and the information entropy production rate, quantify properties of general reversible and/or irreversible dynamical systems. Normally, the microscopic dynamics of such systems is expected to follow well defined trajectories (deterministic paths of a dynamical system or sample paths of a stochastic process), and those may be thought to induce a corresponding dynamics for statistical ensembles of trajectories. It is seldom possible to have sharp knowledge of the initial data x0 ∈ X for the trajectory dynamics taking place in the phase space X of the system. This imprecision extends to the terminal data (x0 → xt after time t > 0) as well.
Therefore, even if one knows exact dynamical rules governing the behavior of individual trajectories in time, it is basically impossible to tell more about the system than: if its initial state can be found in a subset A ⊂ X with a probability prob(x0 ∈ A), then after time t one can identify the terminal state of the system xt ∈ X in a subset B ⊂ X with a probability prob(xt ∈ B). An evolution of the derived probability densities may eventually be obtained as a solution of an appropriate partial differential transport equation, [47,48,59,63].
In the present paper we take a more general view and we bypass a concept of the underlying trajectory dynamics by emphasizing the role of transport equations and their density solutions. Under such premises, we can safely address the dynamics of uncertainty/information generated by the Schrödinger picture quantum evolution of wave packets in closed (no system - reservoir/environment coupling) quantum mechanical systems.
Remark 1: 
Keeping in touch with the quantum mechanical tradition, let us recall that at least two different ”trajectory pictures” can be related to the very same mathematical model based on the Schrödinger wave packet dynamics: deterministic Bohmian paths [49] and random paths of (basically singular) diffusion-type processes, [50,51,52]. Additionally, under suitable restrictions (free motion, harmonic attraction) classical deterministic phase-space paths are supported by the positive Wigner distribution function associated with ψ(x, t) and its spatial marginal distribution. However, none of the above derived trajectory ”pictures” deserves the status of an underlying physical ”reality” for quantum phenomena, although each of them may serve as an adequate pictorial description of the wave-packet dynamics.
Remark 2: 
In view of Born’s statistical interpretation postulate, the Schrödinger picture dynamics sets a well defined transport problem for a probability density ρ(x, t) ≐ |ψ(x, t)|². Therefore, one is tempted to resolve such dynamics in terms of (Markovian) diffusion-type processes and their sample paths, see e.g. [50,51,52] and [53,54]. A direct interpretation in terms of random ”trajectories” of a Markovian diffusion-type process is here in principle possible under a number of mathematical restrictions, but is non-unique and not necessarily global in time. The nontrivial boundary data, like the presence of wave function nodes, create additional problems, although the nodes are known to be never reached by the pertinent processes. The main source of difficulty lies in guaranteeing the existence of a process per se, i.e. of a well defined transition probability density function solving a suitable parabolic partial differential equation (Fokker-Planck or Kramers).
By adopting milder conditions upon the drift fields (instead of too restrictive growth restrictions, one may simply admit smooth functions) it is possible to construct well defined, albeit non-unique, diffusion-type processes. They are consistent with the time development of a given probability density, see Chap. 3 of Ref. [68] and [52].

1.4 Outline of the paper

The paper is structured as follows. We begin by recalling the standard lore of the Shannon information theory to attribute an unambiguous meaning to two principal notions, this of information and that of uncertainty. To this end various notions of state of a model system are invoked and suitable information measures are discussed.
Next we turn to the coarse-graining issue and set a connection between the Shannon entropy of a discrete probability measure and the differential entropy of a related (through a suitable limiting procedure) continuous probability density. On the way, we analyze the dimensional units impact on the entropy definition. We discuss various entropic inequalities for both differential and coarse-grained entropies of quantum mechanical densities.
In Section III, the localization degree of probability densities is analyzed by means of so-called entropy powers and of the Fisher information measure. We infer two chain inequalities, Eqs. (51) and (52), which imply that typically the differential entropy is a well behaved quantity, bounded both from below and from above. The formalism is general enough to include quantum mechanical densities as merely a special case.
In Section IV we set a conceptual framework for time-dependent problems. Since classical dynamical, stochastic and quantum systems (in their pure states) in general give rise to time-dependent probability densities and information entropies, we resolve the exemplary density dynamics in terms of Smoluchowski diffusion processes, albeit with no explicit random path (e.g. random variable) input.
The entropy and Fisher information evolution equations are established. Close links of the differential and conditional Kullback entropies are established for Smoluchowski diffusion processes, when asymptotic invariant densities enter the scene. We discuss a compliance of the induced continual power release in the course of the diffusion process with the mean energy conservation law, Eqs (104) and (108).
In section V we analyze differential entropy dynamics and time evolution of the Fisher localization measure in quantum theory and next exemplify the general formalism for simple analytically tractable cases. The emergent continual power transfer effect has been analyzed in connection with the finite energy constraint for the mean energy of quantum motion, Eqs. (114) and (117).
Although uncertainty dynamics scenarios of sections IV and V are fundamentally different, nonetheless the respective methodologies appear to have an overlap, when restricted to steady states which support invariant densities for (reversible) stationary diffusion-type processes.

2. Differential entropy: uncertainty versus information

2.1 Prerequisites

The original definition of Shannon entropy conveys a dual meaning of both uncertainty and information measure. It is useful to interpret those features in a complementary (albeit colloquial) way: the less is the uncertainty of the system or its state, the larger (and more valuable) is the information we acquire as a result of the measurement (observation) upon the system, and in reverse.
We know that a result of an observation of any random phenomenon cannot be predicted a priori (i.e. before the observation), hence it is natural to quantify the uncertainty of this phenomenon. Let us consider μ = (μ1, ..., μN) as a probability measure on N distinct (discrete) events Aj, 1 ≤ j ≤ N, pertaining to a model system. Assume that ∑j μj = 1 and that μj = prob(Aj) stands for the probability of the event Aj to occur in the game of chance with N possible outcomes.
Let us call − log μj an uncertainty function of the event Aj. Interestingly, we can coin here the name of the (”missing”) information function, if we wish to interpret what can be learned via direct observation of the event Aj: the less probable is that event, the more valuable (larger) is the information we would retrieve through its registration.
Then, the expression
S(μ) ≐ −∑j=1N μj log μj        (5)
stands for the measure of the mean uncertainty of the possible outcome of the game, [6], and at the same time quantifies the mean information which is accessible from an experiment (i.e. actually playing the game). For a while the base of the logarithm is taken equal to 2, but we recall that log2 b · ln 2 = ln b, with ln 2 ≃ 0.6931 and the base e ≃ 2.71828.
Thus, if we identify the event values A1, ..., AN with labels for particular discrete ”states” of the system, we may interpret Eq. (5) as a measure of uncertainty of the ”state” of the system, before this particular ”state” is chosen out of the set of all admissible ones. This well conforms with the standard meaning attributed to the Shannon entropy: it is a measure of the degree of ignorance concerning which possibility (event Aj) may hold true in the set {A1, A2, ..., AN} with a given a priori probability distribution {μ1, ..., μN}.
Notice that:
0 ≤ S(μ) ≤ log N        (6)
ranges from certainty (one entry whose probability equals 1, and thus no information is missing) to maximum uncertainty, when the uniform distribution μj = 1/N for all 1 ≤ j ≤ N occurs. In the latter situation, all events (or measurement outcomes) are equiprobable and log N sets the maximum for the measure of the ”missing information”.
By looking at all intermediate levels of randomness allowed by the inequalities Eq. (6), we realize that the lower the Shannon entropy, the less information about the ”states” of the system we are missing, i.e. the more information about the system we have.
If the Shannon entropy increases, we actually lose information about the system. Consequently, the difference between two uncertainty measures can be interpreted as an information gain or loss.

2.2 Events, states, microstates and macrostates

The Boltzmann formula
S = kB ln W        (7)
sets a link of entropy of the (thermodynamical) system with the probability P = 1/W that an appropriate ”statistical microstate” can occur. Here, W stands for a number of all possible (equiprobable) microstates that imply the prescribed macroscopic (e.g. thermodynamical) behavior corresponding to a fixed value of S .
It is instructive to recall that if P is a probability of an event i.e. of a particular microstate, then − ln P (actually, with log2 instead of ln) may be interpreted [8] as ”a measure of information produced when one message is chosen from the set, all choices being equally likely” (”message” to be identified with a ”microstate”). Another interpretation of − ln P is that of a degree of uncertainty in the trial experiment, [7].
As a pedestrian illustration let us invoke the classic example of a molecular gas in a box which is divided into two halves denoted ”1” and ”2”. We allow each molecule to be in one of two elementary states: A1 if the molecule can be found in half ”1” of the box, and A2 if it is placed in the other half ”2”.
Let us consider a particular n-th macrostate of a molecular gas comprising a total of G molecules in a box, with n molecules in the state A1 and Gn molecules in the state A2.
The total number of ways in which G molecules can be distributed between the two halves of the box in this prescribed macrostate, i.e. the number W = W(n) of distinct equiprobable microstates, clearly is W(n) = G!/[n!(G − n)!]. Here, P(n) = 1/W(n) is the probability with which any of the microstates may occur in a system bound to ”live” in the given macrostate. The maximum of W(n), and thus of kB ln W(n), corresponds to n = G − n, i.e. to n = G/2.
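For readers who prefer to see the counting explicitly, a small numerical sketch (with an illustrative value of G, not taken from the text) confirms that W(n) peaks at n = G/2 and that ln W(n) is approximated, up to sub-leading logarithmic corrections, by G times the two-state Shannon entropy of the occupation frequencies, anticipating Eq. (8) below:

    import math

    G = 100                                  # illustrative number of molecules

    def W(n):
        # number of microstates with n molecules in half "1" and G - n in half "2"
        return math.comb(G, n)

    def two_state_entropy(p):
        # -p ln p - (1 - p) ln(1 - p)
        return 0.0 if p in (0.0, 1.0) else -p * math.log(p) - (1 - p) * math.log(1 - p)

    print(max(range(G + 1), key=W))          # 50, i.e. n = G/2 maximizes W(n)

    for n in (10, 30, 50):
        print(n, round(math.log(W(n)), 2), round(G * two_state_entropy(n / G), 2))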
To get a better insight into the information-uncertainty intertwine, let us consider an ensemble of finite systems which are allowed to appear in any of N > 0 distinct elementary states. The meaning of ”state” is left unspecified, although an ”alphabet” letter may be invoked for convenience.
Let us pick up randomly a large sample composed of G ≫ 1 single systems, each one in a certain (randomly assigned) state. We record frequencies n1/Gp1, ..., nN/GpN with which the elementary states of the type 1, ..., N do actually occur. This sample is a substitute for a ”message” or a ”statistical microstate” in the previous discussion.
Next, we identify the number of all possible samples of that fixed size G which would show up the very same statistics p1, ..., pN of elementary states. We interpret those samples to display the same ”macroscopic behavior”.
It was a major discovery due to Boltzmann, see e.g. [4], that the number W of relevant ”microscopic states” can be approximately read out from each single sample and is directly related to the introduced a priori probability measure μ1, ..., μN, with the identification pi ↔ μi for all 1 ≤ i ≤ N, by the Shannon formula:
ln W ≃ −G ∑i=1N pi ln pi = G · S(μ)        (8)
On the basis of this formula, we can consistently introduce S ( μ ) as the mean information per each (i-th) elementary state of the N-state system, as encoded in a given sample whose size G ≫ 1 is sufficiently large, [9].
To exemplify previous considerations, let us consider N = 2. It is instructive to compare the uncertainty level (alternatively - information content) of S ( μ ) for the two-state system, if we take 2 as the logarithm base instead of e. Then, Eq. (8) would refer to the binary encoding of the message (string) with G entries.
We find that μ1 = 0.1 and μ2 = 0.9 yield S ( μ ) = 0.469. Analogously 0.2 and 0.8 imply 0.7219, while 0.3 and 0.7 give 0.8813. Next, 0.4 and 0.6 imply 0.971, and we reach an obvious maximum S = 1 for μ1 = μ2 = 0.5. An instructive example of the ”dog-flea” model workings with G = 50 fleas jumping back and forth between their ”states of residence” on a dog ”1” or dog ”2”, can be found in Ref. [55]. Albeit, in a number of specific cases, an evolution of the Gibbs entropy may show up some surprises if the ”entropy growth dogma” is uncritically accepted, see e.g. examples in [55,56] and the discussion of Refs. [57,58].
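The values quoted above are reproduced by a few lines of Python (base-2 logarithms, as assumed in the text):

    import math

    def shannon_bits(probs):
        # Shannon entropy -sum p log2(p) of a discrete distribution, in bits
        return -sum(p * math.log2(p) for p in probs if p > 0)

    for p in (0.1, 0.2, 0.3, 0.4, 0.5):
        print(p, round(shannon_bits((p, 1 - p)), 4))
    # 0.469, 0.7219, 0.8813, 0.971, 1.0 -- the sequence quoted in the text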
By pursuing the Shannon’s communication theory track, [4], we can as well identify states of the model system with ”messages” (strings) of an arbitrary length G > 0 which are entirely composed by means of the prescribed N ”alphabet” entries (e.g. events or alphabet letters Aj with the previous probability measure μ). Then, Eq. (8) may be interpreted as a measure of information per alphabet letter, obtained after a particular message (string state of the model system) has been received or measured, c.f. our discussion preceding Eq. (8). In this case, the Shannon entropy interpolates between a maximal information (one certain event) and a minimal information (uniform distribution), cf. Eq. (6).
Remark 3: 
Any string containing G = 10,000 symbols which are randomly sampled from among N = 27 equiprobable alphabet letters, [9], stands for a concrete microstate. In view of μj = 1/27, a corresponding macrostate is described via Eqs. (5) and (8) in terms of the number S(μ) = −log2(1/27) ≃ 4.76. Accordingly, log2 W = G · S(μ) ≃ 47,600, where W is the number of admissible microstates.

2.3 Shannon entropy and differential entropy

2.3.1 Bernoulli scheme and normal distribution

Let us consider again a two-state system where A1 appears with a probability μ1 = p while A2 with a probability μ2 = 1 − p. A probability with which A1 would have appeared exactly n times, in the series of G repetitions of the two-state experiment, is given by the Bernoulli formula:
Pn = G!/[n!(G − n)!] · p^n (1 − p)^(G−n)        (9)
where, in view of the Newton formula for the binomial (p + q)^G, after setting q = 1 − p we arrive at ∑n=0G Pn = 1.
Since the number n of successes in the Bernoulli scheme is restricted only by inequalities 0 ≤ nG, what we have actually defined is a probability measure μ = {P0, P1, ..., PG} for G distinct random events denoted B0, B1, ..., BG. Accordingly, we can introduce a random variable B and say that it has the Bernoulli distribution, if B takes values n = 0, 1, ..., G with the Bernoulli probabilities Pn of Eq. (9) for all n. A random event Bn is interpreted as ”taking the value n in the Bernoulli scheme”.
Let us denote P(B = k) ≐ P(Bk) = Pk. We know that P(B < n) = ∑k<n Pk. The mean value E(B) of B reads E(B) = ∑n=0G n Pn = Gp. The variance E([B − E(B)]²) of B equals Gp(1 − p).
The local de Moivre-Laplace theorem tells us that for large values of G the binomial distribution can be approximated by the normal (Gauss) one:
Pn ≃ [2πGp(1 − p)]^(−1/2) exp[−(n − Gp)²/(2Gp(1 − p))]        (10)
At this point we shall take an inspiration from Ref. [74] and relate the Bernoulli ”success” probabilities with probabilities of locating a particle in an arbitrary interval on the line R. Namely, let us first consider an interval of length L: [0, L] ⊂ R. With G ≫ 1, we define an interval grating unit r = L/G and next redefine Eq. (10) to arrive at a probability per bin of length r ≪ 1:
ρ(xn) ≐ Pn/r ≃ (2πσ²)^(−1/2) exp[−(xn − x0)²/(2σ²)]        (11)
with: xn = nr, x0 = Gpr and σ2 = Gr2p(1 − p). Obviously, ρ(xn) is not a probability on its own, while r · ρ(xn) = Pn is a probability to find a particle in the n-th interval of length r out of the admitted number G = L/r of bins.
For convenience let us specify p = 1/2, which implies x0 = rG/2 and σ = (r/2)√G. We recall that almost all of the probability ”mass” of the Gauss distribution is contained in the interval −3σ < x − x0 < +3σ about the mean value x0. Indeed, we have prob(|x − x0| < 2σ) = 0.954 while prob(|x − x0| < 3σ) = 0.997.
The lower bound 100 < n ≤ G justifies the usage of simplified versions of the standard Stirling formula, n! ≃ (2πn)^(1/2) (n/e)^n, in view of the above probability ”mass” estimates. Therefore, we can safely replace the Bernoulli probability measure by its (still discrete) Gaussian approximation Eq. (10) and next pass to Eq. (11) and its obvious continuous generalization.
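The quality of the replacement of the exact Bernoulli weights of Eq. (9) by the Gaussian values of Eq. (10) is easy to inspect numerically; the parameters in the sketch below are illustrative and not taken from the text:

    import math

    G, p = 1000, 0.5                          # illustrative Bernoulli parameters
    mean, var = G * p, G * p * (1 - p)

    def bernoulli(n):
        # exact Bernoulli probability P_n of Eq. (9)
        return math.comb(G, n) * p**n * (1 - p)**(G - n)

    def gauss(n):
        # de Moivre-Laplace (Gaussian) approximation of P_n, Eq. (10)
        return math.exp(-(n - mean)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    for n in (470, 500, 530):
        print(n, bernoulli(n), gauss(n))      # close agreement near the mean n = G*p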
Remark 4: 
By taking a concrete input of L = 1 and G = 10⁴ we get the grating (spacing, resolution) unit r = 10⁻⁴. Then, x0 = 1/2 while σ = (1/2) · 10⁻². It is thus a localization interval [1/2 − 3σ, 1/2 + 3σ] of length 6σ = 3 · 10⁻², to be compared with L = 1. By setting G = 10⁶ we would get 6σ = 3 · 10⁻³.
For future reference, let us stress that generally we expect r ≪ σ which implies a sharp distinction between the grating (resolution) unit r and the localization properties of the Gauss function expressed through its half-width σ.

2.3.2 Coarse-graining

For a given probability density function on R we can adopt the coarse-graining procedure, [47], giving account of an imprecision with which a spatial position x can be measured or estimated. Thus, if compared with the previous Bernoulli → Gauss argument, we shall proceed in reverse from density functions to approximating them by piece-wise constant, histogram-type discontinuous functions.
We need to partition the configuration space R into a family of disjoint subsets (intervals) {Bk} such that ∪k BkR and BiBj = Ø for ij. We denote μ(Bk) ≐ μk the length of the k-th interval, where μ stands for the Lebesgue measure on R.
A probability that a Gaussian random variable with the density ρ takes its value x in an interval Bk equals prob(Bk) ≐ pk = ∫Bk ρ(x)dx. An average of the density ρ over Bk we denote < ρ >k = pk/μk where μk = ∫Bkdx.
The probability density ρ coarse grained with respect to the partition {Bk} reads:
ρB(x) ≐ ∑k ⟨ρ⟩k 1k(x)        (12)
where 1k (x) is an indicator (characteristic) function of the set Bk , which is equal 1 for xBk and vanishes otherwise. Since ∫ 1k (x)dx = μk it is clear that
∫R ρB(x) dx = ∑k ⟨ρ⟩k μk = ∑k pk = 1        (13)
where an interchange of the summation with integration is presumed to be allowed.
By invoking the arguments of the previous subsection, we choose a grating unit μk = r ≪ 1 for all k and notice that pk = r⟨ρ⟩k ≃ ρ(xk) · r for a certain xk ∈ Bk.
In view of the twice ”triple half-width” spatial localization property of the Gauss function, we can safely assert that an interval L ∼ 6σ about x0 may be used in the coarse graining procedure, instead of the full configuration space R. Effectively, we arrive at a finite partition on L with the resolution L/G = r and then we can safely invoke the definition of pkPk = r · ρ(xk), in conformity with Eq. (11).
For a coarse grained probability density we introduce a coarse grained Shannon entropy whose relationship to the original differential entropy is of major interest. We have:
S(ρB) ≐ −∑k pk ln pk        (14)
with a standard interpretation of the mean information per bin of length r. Here, if a partition (grating) unit r is small, one arrives at an approximate formula (we admit | ln r| ≫ 1):
S(ρB) ≃ −ln r − ∫ ρ(x) ln ρ(x) dx = −ln r + S(ρ)        (15)
with the obvious proviso that S(ρB) ≥ 0; hence, in view of the resulting requirement S(ρ) ≥ ln r, a proper balance between σ and the chosen grating level r needs to be maintained.
Remark 5: 
It is instructive to validate the above approximation for the choice of r = 10⁻⁶, hence ln r = −6 ln 10 ≃ −13.82. We have S(ρ) = (1/2) ln(2πeσ²) ≃ 1.42 + ln σ. By setting σ = (1/2) · 10⁻³ we realize that S(ρ) ≃ 1.42 − ln 2 − 3 ln 10 ≃ −6.18, hence roughly half of the allowed lower bound ln r in absolute value. The approximate value of the coarse grained entropy is here S(ρB) ≃ 7.64 and stands for the mean information per partition bin in the ”string” composed of G bins.
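The numbers of Remark 5 can be checked by a direct coarse-graining of the Gaussian density, with the same (illustrative) σ and r as above:

    import numpy as np

    sigma, x0 = 0.5e-3, 0.5        # Gaussian half-width and mean, as in Remark 5
    r = 1.0e-6                     # grating (resolution) unit

    # bins of width r covering the +/- 6 sigma localization interval about x0
    centers = np.arange(x0 - 6 * sigma, x0 + 6 * sigma, r) + r / 2
    rho = np.exp(-(centers - x0)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

    p_k = r * rho                  # probability per bin, p_k ~ r * rho(x_k)
    p_k = p_k[p_k > 0]

    S_coarse = -np.sum(p_k * np.log(p_k))               # coarse grained entropy, Eq. (14)
    S_diff = 0.5 * np.log(2 * np.pi * np.e * sigma**2)  # differential entropy of the Gaussian
    print(S_coarse, -np.log(r) + S_diff)                # both close to 7.6, as in Eq. (15)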
In view of Eq. (15), as long as we keep in memory the strictly positive grating unit r, there is a well defined ”regularization” procedure (add −ln r to S(ρ)) which relates the coarse grained entropy with a given differential entropy. In a number of cases it is computationally simpler to evaluate the differential entropy, and then to extract the otherwise computationally intractable coarse grained entropy.
Notice that one cannot allow a naive zero grating limit in Eq. (15), although r may be arbitrarily small. Indeed, for e.g. the Gaussian densities, differential entropy takes finite values and this would suggest that the coarse grained entropy might be arbitrarily large. This obstacle does not arise if one proceeds with some care. One should remember that we infer the coarse grained entropy from the differential entropy exactly at the price of introducing the resolution unit r. The smaller is r, the better is an approximation of the differential entropy by the second term on the right-hand-side of Eq. (15), but − ln r needs to remain as a finite entry in Eq. (15).
We have the inequalities 0 ≤ S(ρB) ≤ ln G, where L = G · r. They extend to all approximately equal entries in Eq. (15). Since −ln r = −ln L + ln G, we arrive at new inequalities:
ln r ≤ −∑k r ρ(xk) ln ρ(xk) ≤ ln L        (16)
where −∑k r ρ(xk) ln ρ(xk) → −∫ ρ ln ρ dx with r → 0 and possibly L → ∞. A conclusion is that the differential entropy is unbounded both from below and from above. In particular, S(ρ) may take arbitrarily low negative values, in plain contrast to its coarse grained version S(ρB) which is always nonnegative.
We can be more detailed in connection with the approximations employed in Eqs. (16) and (15). Actually, the right-hand-side of Eq. (15) sets a lower bound for the coarse-grained entropy S(ρB). Let us recall that the value of the convex function x ln x at the mean value of its argument 〈x〉 does not exceed the mean value 〈x ln x〉 of the function itself. Then, in our notation which follows Eq. (12), we can directly employ an averaging over Bk:
⟨ρ⟩k ln⟨ρ⟩k ≤ (1/μk) ∫Bk ρ(x) ln ρ(x) dx        (17)
Taking the minus sign, executing summations with respect to k (convergence of the series being presumed) and using Eqs. (15) and (16) we get:
S(ρB) ≥ −ln r + S(ρ)        (18)
as a complement to Eq. (17), see e.g. also [27,22].
Equations (16) and (18) allow one, with suitable reservations, to extend the standard information/uncertainty measure meaning from coarse-grained entropies to differential entropies per se. Namely, the difference of two coarse grained entropies, corresponding to the same partition but to different (coarse grained) densities, may be adequately approximated by the difference of the corresponding differential entropies:
S(ρB) − S(ρ′B) ≃ S(ρ) − S(ρ′)        (19)
provided they take finite values, [6,27].

2.3.3 Coarse-graining exemplified: exponential density

An exponential density on a positive half-line R+ is known to maximize a differential entropy among all R+ density functions with the first moment fixed at 1/λ > 0. The density has the form: ρ(x) = λ exp(−λx) for x ≥ 0 and vanishes for x < 0. Its variance is 1/λ2. The differential entropy of the exponential density reads S ( ρ ) = 1 − ln λ.
In the notation of the previous subsection, let us coarse-grain this density at the particular value λ = 1 and then evaluate the corresponding entropy as follows. We choose r ≪ 1 and pk ≃ ρ(xk) · r with xk = kr, where k is a natural number. One can directly verify that for small r we can write r ≃ 1 − exp(−r) and thence consider pk ≃ [1 − exp(−r)] exp(−kr), such that ∑k=0∞ pk = 1, with a well known quantum mechanical connotation.
Namely, let us set r = hν/kBT; notice that for the first time in the present paper we explicitly invoke dimensional units, in terms of which the dimensionless constant r is defined. We readily arrive at the probability of the k-th oscillator mode in a thermal bath at the temperature T.
Our assumption of r ≪ 1 corresponds to the low frequency oscillator problem with hν ≪ kBT, see e.g. [9]. Clearly, we have for the mean number of modes
⟨k⟩ = ∑k=0∞ k pk = [exp(r) − 1]^(−1)        (20)
which implies the familiar Planck formula
⟨E⟩ = hν ⟨k⟩ = hν/[exp(hν/kBT) − 1]        (21)
For the variance we get
⟨k²⟩ − ⟨k⟩² = exp(r)/[exp(r) − 1]²        (22)
The standard Shannon entropy of the discrete probability distribution pk, kN reads
S = −∑k pk ln pk = r/[exp(r) − 1] − ln[1 − exp(−r)] ≃ 1 − ln r        (23)
so that in view of S(ρB) ≃ −ln r + 1 and S(ρ) = 1 for ρ(x) = exp(−x), we clearly have S(ρB) ≃ −ln r + S(ρ), as expected.
Let us point out that our sophistication above was quite redundant. In fact, we could have skipped all the above reasoning and taken Eq. (15) as the starting point to evaluate S(ρB) for the coarse grained exponential density, at the assumed resolution r = hν/kBT ≪ 1, with the obvious result
S(ρB) ≃ 1 − ln r = 1 + ln(kBT/hν)        (24)
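A short numerical check of the above identifications, with an illustrative value of r = hν/kBT ≪ 1: for the geometric weights pk = [1 − exp(−r)] exp(−kr) one recovers the mean of Eq. (20) and the entropy of Eq. (23):

    import numpy as np

    r = 1.0e-3                                  # plays the role of h*nu/(k_B*T) << 1
    k = np.arange(200000)                       # enough terms for the tail to be negligible
    p = (1 - np.exp(-r)) * np.exp(-k * r)       # geometric (Planck oscillator) weights

    print(np.sum(k * p), 1 / (np.exp(r) - 1))        # mean number of modes, Eq. (20): ~ 999.5
    print(-np.sum(p * np.log(p)), 1 - np.log(r))     # entropy vs 1 - ln r, Eq. (23): ~ 7.91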
Remark 6: 
An analogous procedure can be readily adopted to analyze phenomenological histograms of energy spectra for complex systems, with an obvious extension of its range of validity to spectral properties of the classically chaotic case. In particular, semiclassical spectral series of various quantum systems admit an approximation of spacing histograms by continuous distributions on the positive half-line R+. Although a full fledged analysis is quite complicated, one may invoke quite useful, albeit approximate, formulas for adjacent level spacing distributions. The previously mentioned exponential density corresponds to the so-called Poisson spectral series. In the family of radial densities of the form ρ(x) ∝ x^(N−1) exp(−x²), x ≥ 0 (the normalization constant involves the Euler gamma function Γ), where N ∈ ℕ, [79], the particular cases N = 2, 3, 5 correspond to the generic level spacing distributions, based on the exploitation of the Wigner surmise. The respective histograms plus their continuous density interpolations are often reproduced in ”quantum chaos” papers, see for example [80].

2.3.4 Spatial coarse graining in quantum mechanics

The coarse grained entropy attributes the ”mean information per bin of length r” to systems described by continuous probability densities and their differential entropies. Effectively one has a tool which allows one to accompany the coarse grained density histogram (of pk in the k-th bin on R) by the related histogram of uncertainties −ln pk, c.f. Section II.A where an uncertainty function has been introduced.
The archetypal example of a position measurement in quantum mechanics presumes that position is measured in bins corresponding to the resolution of the measurement apparatus. This means that the continuous spectrum of the position observable is partitioned into a countable set of intervals (bins) whose maximum length we regard as a ”resolution unit”. For an interval Bk ⊂ R we may denote by pk the probability of finding the outcome of a position measurement to have a value in Bk. We are free to set the bin size arbitrarily, especially if computer assisted procedures are employed, [78].
Following [17] one may take the view that the most natural measure of the uncertainty in the result of a position measurement or preparation of specific localization features of a state vector (amenable to an analysis via spectral properties of the position operator) should be the information entropy coming from Eqs. (12) and (13): S(ρB) ≐ −∑k pk ln pk with a direct quantum input pk ≐ ∫Bk |ψ(x)|² dx, where ψ ∈ L2(R) is normalized. This viewpoint is validated by current experimental techniques in the domain of matter wave interferometry, [44,43], and the associated numerical experimentation where various histograms are generated, [78].
The formula Eq. (15) gives meaning to the intertwine of the differential and coarse grained entropies in the quantum mechanical context. When an analytic form of the entropy is within reach, the coarse graining is straightforward. One should realize that most of the results known to date have been obtained numerically, hence with an implicit coarse-graining, although they were interpreted in terms of the differential entropy, see e.g. [28,29,30,31,32,33,34,35,36,37].
In connection with an entropic inequality Eq. (4) let us point out [2] that it is a generic property of normalized L2(Rn) wave functions that, by means of the Fourier transformation, they give rise to two interrelated densities (presently we refer to L2(R)): ρ = |ψ|² and ρ̃ = |ψ̃|², where
ψ̃(p) = (2π)^(−1/2) ∫ exp(−ipx) ψ(x) dx        (25)
is the Fourier transform of ψ(x). The inequality (4) for the corresponding (finite) differential entropies follows, here with n = 1.
By choosing resolutions r ≪ 1 and r̃ ≪ 1 we can introduce the respective coarse grained entropies, each fulfilling an inequality Eq. (18). Combining these inequalities with Eq. (4), we get the prototype entropic inequalities for coarse grained entropies:
S(ρB) + S(ρ̃B̃) ≥ (1 + ln π) − ln(r r̃)        (26)
with the corresponding resolutions r and r̃.
By referring to Eq. (15) we realize that the knowledge of S(ρB) completely determines S(ρ̃B̃) at the presumed resolution levels:
S(ρ̃B̃) ≥ (1 + ln π) − ln(r r̃) − S(ρB)        (27)
and in reverse. This in turn implies that in all computer generated position-momentum differential entropy inequalities, where the coarse graining is implicit, the knowledge of position entropy and of the resolution levels provide sufficient data to deduce the combined position-momentum outcomes, see also [14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37].
In standard units (with ħ reintroduced), the previous discussion pertains to quantum mechanical position - momentum entropic uncertainty relations. In the notation of Refs. [26] and [22] we have:
Sx + Sp ≥ 1 + ln π − ln(δx δp / h)        (28)
for measurement entropies with position and momentum ”abstract measuring device” resolutions δx and δp respectively, such that δx · δp ≪ h.
Note that to reintroduce ħ we must explain how the differential entropy definition is modified if we pass to dimensional units, see e.g. the next subsection. Let us also point out that one should not confuse the above resolution units r, r̃ and δx, δp with the standard mean square deviation values ∆X and ∆P which are present in the canonical indeterminacy relation ∆X · ∆P ≥ ħ/2.
Let us set ħ = 1 (natural units system). If, following the usual conventions, we define the squared standard deviation (i.e. the variance) of an observable A in a pure state ψ as (∆A)² = (ψ, [A − 〈A〉]²ψ) with 〈A〉 = (ψ, Aψ), then for the position X and momentum P operators we have the following version of the entropic uncertainty relation (here expressed through so-called entropy powers, see e.g. [2]):
∆X · ∆P ≥ (1/2πe) exp[S(ρ) + S(ρ̃)] ≥ 1/2        (29)
which is an alternative for Eq. (4); n = 1 being implicit.
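For a Gaussian wave packet the bound of Eq. (4) (and hence the entropy-power form of Eq. (29)) is saturated, which provides a simple consistency check; the width parameter in the sketch below is an arbitrary illustrative choice and ħ = 1 is assumed:

    import numpy as np

    sigma = 0.37                    # illustrative position half-width of a Gaussian packet

    # |psi|^2 is Gaussian with standard deviation sigma; its Fourier (momentum)
    # density is Gaussian with standard deviation 1/(2*sigma)
    S_x = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
    S_p = 0.5 * np.log(2 * np.pi * np.e / (4 * sigma**2))

    print(S_x + S_p, 1 + np.log(np.pi))             # Eq. (4) is saturated: both ~ 2.1447
    print(np.exp(S_x + S_p) / (2 * np.pi * np.e))   # entropy power product of Eq. (29): 0.5
    print(sigma * (1 / (2 * sigma)))                # Delta X * Delta P = 1/2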

2.4 Impact of dimensional units

Let us come back to an issue of reintroducing physical units in Eq. (28). In fact, if x and p stand for one-dimensional phase space labels and f (x, p) is a normalized phase-space density, ∫ f(x,p)dxdp = 1, then the related dimensionless differential entropy is defined as follows, [10,40]:
Sh ≐ −∫ f(x, p) ln[h f(x, p)] dx dp        (30)
where h = 2πħ is the Planck constant. If ρ(x) = |ψ(x)|², where ψ ∈ L2(R), then ρ̃h(p) = |(Fhψ)(p)|² is defined in terms of the dimensional Fourier transform:
(Fhψ)(p) ≐ (2πħ)^(−1/2) ∫ exp(−ipx/ħ) ψ(x) dx        (31)
Let us consider the joint density
f(x, p) ≐ ρ(x) · ρ̃h(p)        (32)
and evaluate the differential entropy Sh for this density. Remembering that ∫ ρ(x) dx = 1 = ∫ ρ̃h(p) dp, we have formally, [40] Eq. (9):
Sh = −∫ ρ ln ρ dx − ∫ ρ̃h ln ρ̃h dp − ln h ≐ Sx + Sp − ln h        (33)
to be compared with Eq. (28). The formal use of the logarithm properties before executing the integrations in Eq. (30) has left us with the previously mentioned issue (Section 1.2) of ”literally taking the logarithm of a dimensional argument”, i.e. that of ln h.
We recall that Sh is a dimensionless quantity, while if x has dimensions of length, then the probability density has dimensions of inverse length, and analogously in connection with momentum dimensions.
Let us denote xrδx and p Entropy 07 00253 i034δp where labels r and Entropy 07 00253 i034 are dimensionless, while δx and δp stand for respective position and momentum dimensional (hitherto - resolution) units. Then:
−∫ ρ ln ρ dx − ln(δx) ≐ −∫ ρ ln(δx ρ) dx        (34)
is a dimensionless quantity. Analogously
−∫ ρ̃h ln ρ̃h dp − ln(δp) ≐ −∫ ρ̃h ln(δp ρ̃h) dp        (35)
is dimensionless. The first left-hand-side terms in the two equations above we recognize as Sx and Sp, respectively.
Hence, formally we get a manifestly dimensionless decomposition
Sh = S^x_{δx} + S^p_{δp} + ln(δx δp / h)        (36)
instead of the previous one, Eq. (33). The last identity Eq. (36) gives an unambiguous meaning to the preceding formal manipulations with dimensional quantities.
As a byproduct of our discussion, we have resolved the case of the spatially interpreted real axis, when x has dimensions of length, c.f. also [10]: S^x_{δx} ≐ −∫ ρ ln(δx ρ) dx is the pertinent dimensionless differential entropy definition for spatial probability densities.
Example 1: 
Let us go through an explicit example involving the Gauss density
ρ(x) = (2πσ²)^(−1/2) exp[−x²/(2σ²)]        (37)
where σ is the standard deviation (its square stands for the variance). There holds S(ρ) = (1/2) ln(2πeσ²), which is a dimensionless outcome. If we pass to x with dimensions of length, then inevitably σ must have dimensions of length. It is instructive to check that in this dimensional case we have a correct dimensionless result:
S^x_{δx} = −∫ ρ ln(δx ρ) dx = (1/2) ln(2πeσ²/δx²)        (38)
to be compared with Eq. (34). Then we realize that S^x_{δx} vanishes if σ/δx = (2πe)^(−1/2), hence at the dimensional value of the standard deviation σ = (2πe)^(−1/2) δx, compare e.g. [10].
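The dimensional bookkeeping of Example 1 is easily verified numerically; δx below is an arbitrary illustrative resolution unit:

    import numpy as np

    delta_x = 1.0e-3                                 # resolution unit, e.g. in metres
    sigma = delta_x / np.sqrt(2 * np.pi * np.e)      # the special value quoted above

    # dimensionless entropy of Eq. (38): (1/2) ln(2 pi e sigma^2 / delta_x^2)
    print(0.5 * np.log(2 * np.pi * np.e * sigma**2 / delta_x**2))   # 0.0: it vanishes here

    # for any other sigma only the dimensionless ratio sigma/delta_x matters
    for ratio in (0.1, 1.0, 10.0):
        print(ratio, 0.5 * np.log(2 * np.pi * np.e * ratio**2))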
Example 2: 
Let us invoke the simplest (naive) text-book version of the Boltzmann H-theorem, valid in the case of a rarified gas (of particles of mass m), without external forces, close to its thermal equilibrium, under an assumption of space homogeneity, [57,58]. If the probability density function f(v) is a solution of the corresponding Boltzmann kinetic equation, then the Boltzmann H-functional (which is simply the negative of the differential entropy) H(t) = ∫ f(v) ln f(v) dv does not increase: (d/dt)H(t) ≤ 0. In the present case we know that there exists an invariant (asymptotic) density f∗(v), which in the one-dimensional case has the form f∗(v) = (m/2πkBT)^(1/2) exp[−m(v − v0)²/2kBT]. H(t) is known to be time-independent only if f ≡ f∗. We can straightforwardly, albeit formally, evaluate H at f∗: H = ∫ f∗ ln f∗ dv = −(1/2) ln(2πekBT/m), and we become faced with an apparent dimensional difficulty, [9], since the argument of the logarithm is not dimensionless. For sure, a consistent integration outcome for H(t) should involve a dimensionless argument kBT/m[v]² instead of kBT/m, provided [v] stands for any unit of velocity; examples are [v] = 1 m/s (here m stands for the SI length unit, and not for the mass parameter) or 10⁻⁵ m/s. To this end, in conformity with our previous discussion, it suffices to redefine H as follows, [9,10]:
H_{[v]}(t) ≐ ∫ f(v) ln([v] f(v)) dv        (39)
Multiplying f by [v] we arrive at the dimensionless argument of the logarithm in the above and cure the dimensional obstacle.
Remark 7: 
Let us also mention an effect of the scaling transformation upon the differential entropy, [4]. We denote
ραβ = β ρ[β(x − α)]
where α > 0, β > 0. The respective Shannon entropy reads:
S ( ρ α , β ) ≐ −∫ ραβ ln ραβ dx = S ( ρ ) − ln β
In case of the Gaussian ρ, Eq. (37), we get S ( α , β ) = (1/2) ln(2πeσ2/β2). An obvious interpretation is that the β-scaling transformation of ρ(x − α) broadens this density if β < 1 and shrinks it when β > 1. Clearly, S ( α , β ) takes the value 0 at σ = (2πe)−1/2β, in analogy with our previous dimensional considerations. If the argument of ρ is assumed to have dimensions, then the scaling transformation with a dimensional β may be interpreted as a method to restore the dimensionless differential entropy value.

3 Localization: differential entropy and Fisher information

We recall that among all one-dimensional distribution functions ρ(x) with a finite mean, subject to the constraint that the standard deviation is fixed at σ, it is the Gauss function with standard deviation σ which sets the maximum of the differential entropy, [4]. For the record, let us add that if only the mean is given for probability density functions on R, then there is no maximum entropy distribution in their set.
Let us consider the Gaussian probability density on the real line R as a reference density function: Entropy 07 00253 i051 in conformity with Eq. (11), but without any restriction on the value of xR.
The differential entropy of the Gauss density has a simple analytic form, independent of the mean value x0 and maximizes an inequality:
S ( ρ ) ≤ (1/2) ln(2πeσ2)
This imposes a useful bound upon the entropy power:
(2πe)−1/2 exp[ S ( ρ ) ] ≤ σ
with an obvious bearing on the spatial localization of the density ρ, hence the spatial (un)certainty of position measurements. We can say that, with probability about 0.997, there is nothing to be found (measured) beyond the interval of length 6σ which is centered about the mean value x0 of the Gaussian density ρ.
Knowing that for arbitrary density functions the differential entropy Eq. (3) is unbounded from below and from above, we realize that in the subset of all densities with a finite mean and a fixed variance σ2, we actually have an upper bound set by Eq. (42). However, in contrast to coarse-grained entropies, which are always nonnegative, even for a relatively large mean deviation Entropy 07 00253 i054 the differential entropy S ( ρ ) is negative.
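The maximum entropy property behind Eq. (42) can also be probed numerically. The sketch below (illustrative, with standard textbook parameterizations of the Laplace and uniform densities) compares the differential entropies of three densities sharing the same variance σ2; only the Gaussian attains the bound.

```python
import numpy as np

def entropy(rho, dx):
    mask = rho > 0
    return -np.sum(rho[mask] * np.log(rho[mask])) * dx

sigma = 1.3
x = np.linspace(-30 * sigma, 30 * sigma, 600001)
dx = x[1] - x[0]

gauss = np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
b = sigma / np.sqrt(2.0)                               # Laplace density with variance sigma^2
laplace = np.exp(-np.abs(x) / b) / (2 * b)
a = sigma * np.sqrt(3.0)                               # uniform density on [-a, a] with variance sigma^2
uniform = np.where(np.abs(x) <= a, 1.0 / (2 * a), 0.0)

for name, rho in [("gauss", gauss), ("laplace", laplace), ("uniform", uniform)]:
    print(name, entropy(rho, dx))
print("upper bound, Eq. (42):", 0.5 * np.log(2 * np.pi * np.e * sigma ** 2))
```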
Therefore, quite apart from the previously discussed direct information theory links, c.f. Eqs. (15), (18) and (19), the major role of the differential entropy is to be a measure of localization in the ”state space” (actually, configuration space) of the system, [75,76,77].
Let us consider a one-parameter family of probability densities ρα(x) on R whose first moment (the mean) and second moment (effectively, the variance) are finite. The parameter-dependence is here not completely arbitrary and we assume standard regularity properties that allow one to differentiate various functions of ρα with respect to the parameter α under the sign of an (improper) integral, [82].
Namely, let us denote ∫ xρα(x)dx = f(α) and demand ∫ x2ραdx < ∞. We also demand that, as a function of x ∈ R, the modulus of the partial derivative ∂ρα/∂α is bounded by a function G(x) which together with xG(x) is integrable on R. This implies, [82], the existence of ∂f/∂α and an important inequality:
Entropy 07 00253 i055
directly resulting from
Entropy 07 00253 i056
via the standard Schwarz inequality. An equality appears in Eq. (44) if ρα(x) is the Gauss function with mean value α.
At this point let us assume that the mean value of ρα actually equals α, and let us fix at σ2 the value 〈(x − α)2〉 = 〈x2〉 − α2 of the variance (the squared standard deviation from the mean value) of the probability density ρα. The previous inequality Eq. (44) now takes the familiar form:
∫ (1/ρα)(∂ρα/∂α)2 dx ≥ 1/σ2
where the integral on the left-hand side is the so-called Fisher information of ρα, known to appear in various problems of statistical estimation theory, as well as an ingredient of a number of information-theoretic inequalities, [23,24,75,82,83]. In view of Entropy 07 00253 i058 we realize that the Fisher information is a more sensitive indicator of the wave packet localization than the entropy power, Eq. (43).
Let us define ρα(x) ≐ ρ(x − α). Then, the Fisher information Entropy 07 00253 i059 no longer depends on the mean value α and can be readily transformed to the conspicuously quantum mechanical form (up to a factor D2 with D = ħ/2m):
Entropy 07 00253 i060
where u ≐ ∇ln ρ is named an osmotic velocity field, [50,53], and an average 〈Q〉 = ∫ ρ · Qdx is carried out with respect to the function
Entropy 07 00253 i061
As a consequence of Eq. (46), we have −〈Q〉 ≥ 1/2σ2 for all relevant probability densities with any finite mean and variance fixed at σ2.
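The Cramér-Rao type bound Eq. (46) (equivalently −〈Q〉 ≥ 1/2σ2) is easy to verify numerically for translation families ρ(x − α). A minimal sketch follows; the two test densities are illustrative choices, not those of the paper.

```python
import numpy as np

def fisher(rho, dx):
    # F = int (rho')^2 / rho dx for the translation family rho(x - alpha)
    drho = np.gradient(rho, dx)
    mask = rho > 1e-300
    return np.sum(drho[mask] ** 2 / rho[mask]) * dx

def variance(x, rho, dx):
    mean = np.sum(x * rho) * dx
    return np.sum((x - mean) ** 2 * rho) * dx

x = np.linspace(-40.0, 40.0, 800001)
dx = x[1] - x[0]

sigma = 1.7
gauss = np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
# a smooth bimodal mixture of two unit-variance Gaussians, also with finite mean and variance
mix = 0.5 * (np.exp(-(x - 3) ** 2 / 2) + np.exp(-(x + 3) ** 2 / 2)) / np.sqrt(2 * np.pi)

for name, rho in [("gauss", gauss), ("mixture", mix)]:
    F, var = fisher(rho, dx), variance(x, rho, dx)
    print(name, F, 1.0 / var)   # F >= 1/sigma^2; the Gaussian saturates the bound
```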
When multiplied by D2, the above expression for Q(x) notoriously appears in the hydrodynamical formalism of quantum mechanics as the so-called de Broglie-Bohm quantum potential (D = ħ/2m). It appears as well in the corresponding formalism for diffusion-type processes, including the standard Brownian motion (then, D = kBT/mβ), see e.g. [53,54,84].
An important inequality, valid under the assumption ρα(x) = ρ(x − α), has been proved in [23], see also [24,85]:
F ( ρ ) ≥ (2πe) exp[−2 S ( ρ ) ]
It tells us that the lower bound for the Fisher information is in fact given a sharper form by means of the (squared) inverse entropy power. Our two information measures appear to be correlated.
Remark 8: 
Let us point out that the Fisher information F ( ρ ) may blow up to infinity under a number of circumstances, [83]: when ρ approaches the Dirac delta behavior, if ρ vanishes over some interval in R or is discontinuous. We observe that F > 0 because it may vanish only when ρ is constant everywhere on R, hence when ρ is not a probability density.
Remark 9: 
The values of F ( ρ α ) and S ( ρ α ) are α-independent if we consider ρα(x) = ρ(x − α). This reflects the translational invariance of the Fisher and Shannon information measures, [86]. The scaling transformation ρα,β = β ρ[β(x − α)], where α > 0, β > 0, transforms Eq. (43) to the form (2πe)−1/2 exp[ S ( ρ α , β ) ] ≤ σ/β.
Under an additional decomposition/factorization ansatz (of the quantum mechanical L2(Rn) provenance) that ρ(x) ≐ |ψ|2(x), where a real or complex function Entropy 07 00253 i063 is a normalized element of L2(R), another important inequality holds true, [2,23]:
Entropy 07 00253 i064
provided the Fisher information takes finite values. Here, Entropy 07 00253 i065 is the variance of the ”quantum mechanical momentum canonically conjugate to the position observable”, up to (skipped) dimensional factors. In the above, we have exploited the Fourier transform Entropy 07 00253 i066 of ψ to arrive at Entropy 07 00253 i067 of Eq. (4), whose variance the above Entropy 07 00253 i065 actually is.
In view of the two previous inequalities (49), (50) we find that not only the Fisher information, but also the entropy power is bounded from below and above. Namely, we have:
Entropy 07 00253 i068
which implies Entropy 07 00253 i069 and furthermore
Entropy 07 00253 i070
as a complement to Eq. (43). The most important outcome of Eq. (52) is that the differential entropy S ( ρ ) may typically be expected to be a well-behaved quantity, with both its lower and upper bounds finite.
We find it rather interesting that the Heisenberg indeterminacy relationship Eq. (29), which is normally interpreted to set a lower bound on the experimentally accessible phase-space data (e.g. volume), according to Eq. (52) ultimately gives rise to lower and upper bounds upon the configurational (spatial) information measure and thence upon the uncertainty (information) measure.
To our knowledge, the inequalities Eqs. (51) and (52), although implicit in various information theory papers, see especially [23] and [2], have hitherto never been explicitly spelled out.

4 Asymptotic approach towards equilibrium: Smoluchowski processes

4.1 Random walk

Let us consider a classic example of a one-dimensional random walk where a particle is assumed to be displaced along R1 with probability 1/2 forth and back, each step being of a unit length, [74]. If one begins from the origin 0, after G steps a particle can be found at any of the points −G, −G + 1, ... − 1, 0, 1, ..., G. The probability that after G displacements a particle can be found at the point g ∈ [−G, G] is given by the Bernoulli distribution:
Entropy 07 00253 i071
where Entropy 07 00253 i072
We are interested in the asymptotic formula, valid for large G and g ≪ G. (Note that even for a relatively small value of G = 20, and |g| ≤ 16, the accuracy level is satisfactory.) There holds:
P(G, g) ≃ (2/πG)1/2 exp(−g2/2G)
and, accordingly
Entropy 07 00253 i074
We assume r ∼ 10−6 m to be a grating unit (i.e. the minimal step length for the walk). Let r ≪ ∆x ≪ Gr (a size ∆x ∼ 10−4 m is quite satisfactory). For large G and |g| ≪ G, we denote x ≐ g · r and ask for the probability ρG(x)∆x that a particle can be found in the interval [x, x + ∆x] after G displacements. The result is [74]:
ρG(x)∆x ≃ (2πGr2)−1/2 exp(−x2/2Gr2) ∆x
and by assuming that a particle suffers k displacements per unit time, we can give Eq. (56) the familiar form of the heat kernel:
ρ(x, t) = (4πDt)−1/2 exp(−x2/4Dt)
with the diffusion coefficient D = kr2/2. It is the fundamental solution of the heat equation ∂tρ = D∆ρ, which is the Fokker-Planck equation for the Wiener process.
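The passage from the Bernoulli walk to the heat kernel is readily visualized by simulation. The sketch below (with purely illustrative values of r, k and t) compares a histogram of simulated walk endpoints with ρ(x, t) of Eq. (57).

```python
import numpy as np

rng = np.random.default_rng(0)
r, k = 1e-6, 1e4              # grating unit [m] and displacements per unit time [1/s] (illustrative)
D = k * r ** 2 / 2            # diffusion coefficient entering Eq. (57)
t = 2.0                       # elapsed time [s]
G = int(k * t)                # number of displacements

# endpoint of a Bernoulli walk: g = (#right - #left) steps, x = g * r
n_right = rng.binomial(G, 0.5, size=500000)
x_end = (2 * n_right - G) * r

hist, edges = np.histogram(x_end, bins=61, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
kernel = np.exp(-centers ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)   # heat kernel rho(x, t)
print(np.max(np.abs(hist - kernel)) / kernel.max())   # small relative deviation for large G
```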
The differential entropy of the above time-dependent density reads:
S ( t ) = (1/2) ln(4πeDt)
and its time evolution clearly displays the localization uncertainty growth. By means of the formula Eq. (19) we can quantify the differential entropy dynamics for all solutions of the heat equation.
Since the heat kernel determines the transition probability density for the Wiener process (free Brownian motion in R), by setting x → x − x′ and t → t − t′ ≥ 0, we can replace the previous ρ(x, t) of Eq. (57) by p(x − x′, t − t′). This transition density allows us to deduce any given solution ρ(x, t) of the heat equation from its past data, according to: ρ(x, t) = ∫ p(x − x′, t − t′)ρ(x′, t′)dx′. In particular, we can consider the process starting at t′ = 0 with any initial density ρ0(x).
Let ρυ denote a convolution of a probability density ρ with a Gaussian probability density having variance υ. The transition density of the Wiener process generates such a convolution for ρ0, with υ = σ2 ≐ 2Dt. Then, de Bruijn identity, [23,75], d S ( ρ υ ) / d υ = ( 1 / 2 ) F ( ρ υ ) , directly yields the information entropy time rate for S ( ρ ) = S ( t ) :
d S ( t ) /dt = D F ( ρ ) = D ∫ [(∇ρ)2/ρ] dx > 0
The Fisher information F ( ρ ) is the α = 0 version of the general definition given in Eqs. (46) and (47). The derivation of Eq. (59) amounts to differentiating an υ-dependent integrand under the sign of an improper integral, [82,83].
The monotonic growth of S ( t ) is paralleled by the linear in time growth of σ2(t) and the decay of F , and hence quantifies the uncertainty (disorder) increase related to the ”flattening” down of ρ, see also [83,86].
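For the heat kernel, the de Bruijn identity and the rate formula Eq. (59) can also be checked by brute force, since dS/dt, D F(ρ) and 1/(2t) must all coincide. A minimal numerical sketch (illustrative D and t) follows.

```python
import numpy as np

D = 0.5   # illustrative diffusion coefficient

def heat_kernel(x, t):
    return np.exp(-x ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

def entropy(rho, dx):
    mask = rho > 0
    return -np.sum(rho[mask] * np.log(rho[mask])) * dx

def fisher(rho, dx):
    drho = np.gradient(rho, dx)
    mask = rho > 1e-300
    return np.sum(drho[mask] ** 2 / rho[mask]) * dx

x = np.linspace(-60.0, 60.0, 400001)
dx, t, dt = x[1] - x[0], 3.0, 1e-4

dS_dt = (entropy(heat_kernel(x, t + dt), dx) - entropy(heat_kernel(x, t - dt), dx)) / (2 * dt)
print(dS_dt, D * fisher(heat_kernel(x, t), dx), 1.0 / (2 * t))   # all three numbers agree
```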

4.2 Kullback entropy versus differential entropy

We emphasize that in the present paper we have deliberately avoided the use of the relative Kullback-Leibler entropy, [6,47,81]. This entropy notion is often invoked to tell ”how far from each other” two probability densities are. The Kullback entropy is particularly useful if one investigates an approach of the system toward (or its deviation from) equilibrium, this being normally represented by a stationary density function, [48,88]. In this context, it is employed to investigate a major issue of the dynamical origins of the increasing entropy, see [47,48,92]. Consult also both the standard motivations and the apparent problems encountered in connection with the celebrated Boltzmann H-theorem, [57,58] and [67].
However, the reliability of the Kullback entropy may be questioned in case of general parameter-dependent densities. In particular, this entropy fails to quantify properly certain features of a non-stationary dynamics of probability densities, specifically if we wish to ”compare” a given density function with itself at different stages (instants) of its time evolution.
Let us consider a one parameter family of Gaussian densities ρα = ρ(x − α), with the mean α ∈ R and the standard deviation fixed at σ. These densities are not differentiated by the information (differential) entropy and share its very same value Entropy 07 00253 i079 independent of α.
If we admit σ to be another free parameter, a two-parameter family of Gaussian densities ραρα,σ(x) appears. Such densities, corresponding to different values of σ and σ′ do admit an ”absolute comparison” in terms of the Shannon entropy, in accordance with Eq. (19):
Entropy 07 00253 i080
By denoting Entropy 07 00253 i081 and σ′ ≐ σ(t′) we make the non-stationary (heat kernel) density amenable to the ”absolute comparison” formula at different time instants t′ > t > 0: (σ′/σ) = Entropy 07 00253 i082
In the above we have ”compared” differential entropies of quite akin, albeit different, probability densities. Among many inequivalent ways to evaluate the ”divergence” between probability distributions, the relative (Kullback) entropy is typically used to quantify such divergence from the a priori prescribed reference density, [48,88].
We define the Kullback entropy Entropy 07 00253 i083 for a one-parameter family of probability densities ρθ, so that the ”distance” between any two densities in this family can be directly evaluated. Let ρθ′ stand for the prescribed (reference) probability density. We have, [6,81,88]:
K ( θ , θ ′ ) ≐ ∫ ρθ(x) ln[ρθ(x)/ρθ′(x)] dx
which, in view of the concavity of the function f(w) = −w ln w, is non-negative.
Let us indicate that the negative of Entropy 07 00253 i085, named the conditional entropy [6], is predominantly used in the literature [6,48,92] because of its affinity (regarded as a formal generalization) to the differential entropy. Then, e.g., one investigates an approach of the conditional entropy towards its maximum (usually achieved at the value zero), when a running density is bound to have a unique stationary asymptotic, [92].
If we take θ′ ≐ θ + ∆θ with ∆θ ≪ 1, the following approximate formula holds true under a number of standard assumptions, [81]:
Entropy 07 00253 i086
where F θ denotes the Fisher information measure, previously defined by Eq. (46). With this proviso, we can evaluate the Kullback distance within a two-parameter (α, σ) family of Gaussian densities, by taking θ ≐ α.
Passing to α′ = α + ∆α at a fixed value of σ we arrive at:
Entropy 07 00253 i087
For the record, we note that the respective Shannon entropies do coincide: Entropy 07 00253 i088.
Analogously, we can proceed with respect to the label σ at α fixed:
Entropy 07 00253 i089
when, irrespective of α:
Entropy 07 00253 i090
By choosing θ ≐ σ2 at fixed α, we get (now the variance σ2 is modified by its increment ∆(σ2)):
Entropy 07 00253 i091
while
Entropy 07 00253 i092
which, upon the identifications σ2 = 2Dt and ∆(σ2) = 2D∆t, sets an obvious connection with the differential entropy S ( t ) and thence with the time derivative Entropy 07 00253 i093 of the heat kernel differential entropy, Eq. (58), and the de Bruijn identity.
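The quadratic (Fisher) approximation Eq. (62) is easy to test for the θ ≐ σ2 case. The sketch below uses the standard closed form of the Kullback entropy for two zero-mean Gaussians (an auxiliary textbook fact, not quoted in the paper); as expected, the agreement improves as ∆t → 0.

```python
import numpy as np

def kl_gauss(var1, var2):
    # Kullback entropy of a zero-mean Gaussian with variance var1 relative to one with variance var2
    return 0.5 * (np.log(var2 / var1) + var1 / var2 - 1.0)

D, t = 0.5, 1.0
var = 2 * D * t                                # sigma^2 = 2Dt
for dt in [0.5, 0.1, 0.01]:
    dvar = 2 * D * dt                          # Delta(sigma^2) = 2D*Delta(t)
    exact = kl_gauss(var, var + dvar)
    fisher_var = 1.0 / (2 * var ** 2)          # Fisher information w.r.t. theta = sigma^2 for a Gaussian
    quadratic = 0.5 * fisher_var * dvar ** 2   # (1/2) F_theta (Delta theta)^2, Eq. (62)
    print(dt, exact, quadratic)
```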
Our previous observations are a special case of more general reasoning. Namely, if we consider a two-parameter θ ≐ (θ1, θ2) family of densities, then instead of Eq. (62) we would have arrived at
Entropy 07 00253 i094
where i, j = 1, 2 and the Fisher information matrix F i j has the form
Entropy 07 00253 i095
In case of Gaussian densities, labelled by independent parameters θ1 = α and θ2 = σ (alternatively θ2 = σ2), the Fisher matrix is diagonal and defined in terms of previous entries F α and F σ (or F σ 2 ).
It is useful to note (c.f. also [92]) that in self-explanatory notation, for two θ and θ′ Gaussian densities there holds:
Entropy 07 00253 i096
The first entry in Eq. (70) coincides with the ”absolute comparison formula” for Shannon entropies, Eq. (60). However for |θ′−θ| ≪ 1, hence in the regime of interest for us, the second term dominates the first one.
Indeed, let us set α′ = α and consider σ2 = 2Dt, ∆(σ2) = 2D∆t. Then Entropy 07 00253 i097 while Entropy 07 00253 i098. Although for finite increments ∆t we have
Entropy 07 00253 i099
the time derivative notion Entropy 07 00253 i100 can be defined exclusively for the differential entropy, and is meaningless in terms of the Kullback ”distance”.
Let us mention that no such obstacles arise in the standard cautious use of the relative Kullback entropy H c . Indeed, normally one of the involved densities stands for the stationary reference one ρθ′(x) = ρ(x), while another evolves in time ρθ(x) ≐ ρ(x, t), tR+, thence Entropy 07 00253 i101, see e.g. [48,92].

4.3 Entropy dynamics in the Smoluchowski process

We consider spatial Markov diffusion processes in R with a diffusion coefficient (constant or time-dependent) D and admit them to drive space-time inhomogeneous probability densities ρ = ρ(x, t). In the previous section we have addressed the special case of the free Brownian motion, characterized by the current velocity (field) v ≐ v(x, t) = −u(x, t), with the osmotic velocity u ≐ D∇ln ρ(x, t), and by the diffusion current j ≐ v · ρ which obeys the continuity equation ∂tρ = −∇j, this in turn being equivalent to the heat equation.
It is instructive to notice that the gradient of a potential-type function Q = Q(x, t), c.f. Eq. (48), entirely composed in terms of u:
Entropy 07 00253 i102
almost trivially appears (i.e. merely as a consequence of the heat equation, [53,94]) in the hydrodynamical (momentum) conservation law appropriate for the free Brownian motion:
∂tv + (v · ∇)v = −∇Q.
A straightforward generalization refers to a diffusive dynamics of a mass m particle in the external field of force, here taken to be conservative: F = F(x) = −∇V. The associated Smoluchowski diffusion process with a forward drift Entropy 07 00253 i103 is analyzed in terms of the Fokker-Planck equation for the spatial probability density ρ(x, t), [88,89,90,91]:
∂tρ = D∆ρ − ∇(b · ρ)
with the initial data ρ0(x) = ρ(x, 0).
Note that if things are specialized to the standard Brownian motion in an external force field, we know a priori (due to the Einstein fluctuation-dissipation relationship, [74]) that Entropy 07 00253 i104, where β is interpreted as the friction (damping) parameter, T is the temperature of the bath, and kB is the Boltzmann constant.
We assume, modulo restrictions upon drift function [52,68], to resolve the Smoluchowski dynamics in terms of (possibly non-unique) Markovian diffusion-type processes. Then, the following compatibility equations follow in the form of hydrodynamical conservation laws for the diffusion process, [53,94]:
∂tρ + ∇ · (vρ) = 0
(∂t + v · ∇)v = ∇(Ω − Q)
where, not to confuse this notion with the previous force field potential V , we denote by Ω(x) the so-called volume potential for the process:
Entropy 07 00253 i105
where the functional form of Q is given by Eq. (72). Obviously the free Brownian law, Eq. (73), comes out as the special case.
In the above (we use a short-hand notation vv(x, t)):
v ≐ b − u = b − D∇ln ρ
defines the current velocity of Brownian particles in the external force field. This formula allows us to transform the continuity equation into the Fokker-Planck equation and back.
With a solution ρ(x, t) of the Fokker-Planck equation, we associate a differential (information) entropy S ( t ) = − ∫ ρ ln ρ dx which is typically not a conserved quantity, [28,29,30,31,32,33,34,35,36,37,38,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59]. The rate of change in time of S ( t ) readily follows.
Boundary restrictions, demanding that ρ together with the relevant currents vanish at spatial infinities (or at finite spatial volume boundaries), yield the rate equation:
Entropy 07 00253 i107
to be compared with the previous, b = 0 case, Eq. (73).
Anticipating further discussion, let us stress that even in case of plainly irreversible diffusive dynamics, it is by no means obvious whether the differential entropy should grow, decay (diminish) or display a mixed behavior. It is often tacitly assumed that one should ”typically” have Entropy 07 00253 i108, which is not true, [67,87].
We can rewrite Eq. (79) in a number of equivalent forms, like e.g. (note that 〈u2〉 = −D〈∇ · u〉)
Entropy 07 00253 i109
and specifically, [67,68], as the major entropy balance equation:
Entropy 07 00253 i110
where 〈·〉 denotes the mean value with respect to ρ.
This balance equation is extremely persuasive, since b = F/(mβ) and j = vρ combine into a characteristic ”power release” expression:
Entropy 07 00253 i111
Like in the case of the not necessarily positive Entropy 07 00253 i100, the ”power release” expression Entropy 07 00253 i112 may be positive, which represents the power removal to the environment, as well as negative, which corresponds to the power absorption from the environment.
In the formal thermodynamical lore, we deal here with the time rate at which the mechanical work per unit of mass may possibly be dissipated (removed to the reservoir) in the form of heat, in the course of the Smoluchowski diffusion process: Entropy 07 00253 i113 with T being the temperature of the bath. When there are no external forces, we have b = 0, and then the differential entropy time rate formula for the free Brownian motion Eq. (59) reappears.
On the other hand, the positive terms in Eq. (81) and Eq. (59) represent the rate at which information entropy is put (pumped) into the diffusing system by the thermally active environment, thus causing a disorder/uncertainty growth. This particular ”entropy production” rate may possibly be counterbalanced (to this end we need external forces) by the heat removal due to dissipation, according to:
Entropy 07 00253 i114
where Entropy 07 00253 i112 is defined in Eq. (82) while Entropy 07 00253 i115.
Remark 10: 
In Refs. [68,67,69] a measure-theoretic and probabilistic justification was given to an interpretation of (1/D)〈v2〉 as the entropy production rate of the (originally - stationary) diffusion process with the current velocity v. We would like to point out that traditionally, [61,70,71], the statistical mechanical notion of an entropy production refers to the excess entropy that is pumped out of the system. An alternative statement tells about the entropy production by the physical system into the thermostat. In the present discussion, an increase of the information entropy of the Smoluchowski process definitely occurs due to the thermal environment: the differential entropy is being generated (produced) in the physical system by its environment.
Of particular interest is the case of constant information entropy Entropy 07 00253 i100 = 0, which amounts to the existence of steady states. In the simplest case, when the diffusion current vanishes, we encounter the primitive realization of the state of equilibrium with an invariant density ρ∗. Then, b = u = D∇ln ρ∗ and we readily arrive at the classic equilibrium identity for the Smoluchowski process:
−(1/kBT)∇V = ∇ln ρ∗
which determines the functional form of the invariant density in case of a given conservative force field, [68,88]. There is an ample discussion in Ref. [68] of how these properties match with time reversal of the stationary diffusion process and the vanishing of the entropy production (in our lore, see e.g. [61,60]) rate ( Entropy 07 00253 i100)in.
Coming back to the general discussion, let us define the so-called thermodynamic force Fth ≐ v/D associated with the Smoluchowski diffusion and introduce its corresponding time-dependent potential function Ψ(x, t):
kBT Fth = FkBT∇ ln ρ ≐ −∇Ψ.
Notice that v = −(1/mβ)∇Ψ. In the absence of external forces (free Brownian motion), we obviously get Fth = −∇ln ρ = −(1/D)u.
The mean value of the potential
Ψ = V + kBTln ρ
of the thermodynamic force associates with the diffusion process an obvious analogue of the Helmholtz free energy:
Entropy 07 00253 i116
where the dimensional version Entropy 07 00253 i117 of information entropy has been introduced (actually, it is a direct configuration-space analog of the Gibbs entropy). The expectation value of the mechanical force potential 〈V〉 plays here the role of (mean) internal energy, [67,71].
By assuming that ρV v vanishes at integration volume boundaries (or infinity), we easily get the time rate of Helmholtz free energy at a constant temperature T:
Entropy 07 00253 i118
By employing Eq. (83), we readily arrive at
Entropy 07 00253 i119
which either identically vanishes (equilibrium) or remains negative.
Thus, the Helmholtz free energy either remains constant in time or decreases as a function of time at the rate set by the information entropy ”production” Entropy 07 00253 i120. One may expect that actually 〈Ψ〉(t) drops down to a finite minimum as t → ∞.
However, this feature is a little bit deceiving. One should be aware that a finite minimum may as well not exist, which is the case e.g. for the free Brownian motion. Multiple minima need to be excluded as well.
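The free-energy balance of Eq. (89) can be verified explicitly for the linear (Ornstein-Uhlenbeck) drift b = −γx discussed in Section 4.5, upon adopting the illustrative conventions mβ = 1 and kBT = D (so that V = γx2/2 and Ψ = V + D ln ρ). The sketch below tabulates 〈Ψ〉(t) and checks that its time derivative equals −〈v2〉, i.e. that 〈Ψ〉 is non-increasing.

```python
import numpy as np

# Ornstein-Uhlenbeck drift b = -gamma*x, with the illustrative conventions
# m*beta = 1 and k_B*T = D, so that V = gamma*x^2/2 and Psi = V + D*ln(rho).
D, gamma, alpha0, var0 = 0.5, 1.0, 1.5, 0.05     # var0 = initial variance

t = np.linspace(0.0, 6.0, 6001)
alpha = alpha0 * np.exp(-gamma * t)                                                # mean value
var = var0 * np.exp(-2 * gamma * t) + (D / gamma) * (1 - np.exp(-2 * gamma * t))   # variance

S = 0.5 * np.log(2 * np.pi * np.e * var)                            # differential entropy
Psi_mean = gamma * (var + alpha ** 2) / 2 - D * S                   # <Psi> = <V> - k_B*T*S
v2_mean = gamma ** 2 * alpha ** 2 + var * (D / var - gamma) ** 2    # <v^2> for the Gaussian density

lhs = np.gradient(Psi_mean, t)     # d<Psi>/dt
rhs = -v2_mean                     # -k_B*T*(dS/dt)_in = -<v^2> in these units
print(np.max(np.abs(lhs - rhs)))   # ~0 up to finite-difference error; <Psi>(t) is non-increasing
```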

4.4 Kullback entropy versus Shannon entropy in the Smoluchowski process

In the presence of external forces the property Eq. (89) may consistently quantify an asymptotic approach towards a minimum corresponding to an invariant (presumed to be unique) probability density of the process. Indeed, by invoking Eq. (84) we realize that
Entropy 07 00253 i121
where Z = ∫ exp(−V(x)/kBT)dx, sets the minimum of 〈Ψ〉(t) at 〈Ψ〉∗ = −kBT ln Z.
Let us take the above ρ(x) as a reference density with respect to which the divergence of ρ(x, t) is evaluated in the course of the pertinent Smoluchowski process. This divergence is well quantified by the conditional Kullback entropy Entropy 07 00253 i122. Let us notice that
Entropy 07 00253 i123
Consequently, in view of Eqs. (88) and (83), we get
Entropy 07 00253 i124
so that Entropy 07 00253 i125. The approach of 〈Ψ〉(t) towards its minimum proceeds at the very same rate as that of Entropy 07 00253 i122 towards its maximum.
In contrast to Entropy 07 00253 i126, which is non-negative, we have no growth guarantee for the differential entropy Entropy 07 00253 i100, whose sign is unspecified. Nonetheless, the balance between the time rate of entropy production/removal and the power release into or out of the environment is definitely correct. We have Entropy 07 00253 i127
The relationship between two different forms of entropy, differential (Shannon) and conditional (Kullback), and their dynamics is thereby set.
An exhaustive discussion of the temporal approach to equilibrium of the Gibbs and conditional entropies can be found in Refs. [92,93]. Both invertible deterministic and non-invertible stochastic systems were addressed.
Typically, the conditional entropy either remains constant or monotonically increases to its maximum at zero. This stays in conformity with expectations motivated by the Boltzmann H-theorem and the second law of thermodynamics.
To the contrary, the Gibbs entropy displays a different behavior, even under the very same approach-to-equilibrium circumstances: it may monotonically increase or decrease, and may as well display an oscillatory pattern. Below we shall demonstrate that this potentially strange behavior of the differential entropy gives an insight into nontrivial power transfer processes in the mean, whose assessment would not be possible in terms of the conditional entropy.

4.5 One-dimensional Ornstein-Uhlenbeck process

It is quite illuminating to exemplify previous considerations by a detailed presentation of the standard one-dimensional Ornstein-Uhlenbeck process. We denote b(x) = −γx with γ > 0.
If an initial density is chosen in the Gaussian form, with the mean value α0 and variance Entropy 07 00253 i128, the Fokker-Planck evolution Eq. (74) preserves the Gaussian form of ρ(x, t), while modifying the mean value according to α(t) = α0 exp(−γt) and the variance according to
σ2(t) = σ2(0) exp(−2γt) + (D/γ)[1 − exp(−2γt)]
Accordingly, since the unique invariant density has the form ρ∗ = (γ/2πD)1/2 exp(−γx2/2D), we obtain, [92]:
Entropy 07 00253 i130
while in view of our previous considerations, we have S ( t ) = (1/2) ln[2πeσ2(t)] and F = 1 / σ 2 ( t ) .
Therefore
Entropy 07 00253 i131
We observe that if Entropy 07 00253 i128 > D/γ, then Entropy 07 00253 i100 < 0, while Entropy 07 00253 i128 < D/γ implies Entropy 07 00253 i100 > 0. In both cases the behavior of the differential entropy is monotonic, though its growth or decay critically relies on the choice of Entropy 07 00253 i128. Irrespective of Entropy 07 00253 i128, the asymptotic value of S ( t ) as t → ∞ reads (1/2) ln[2πe(D/γ)].
The differential entropy evolution is anti-correlated with that of the localization, since
Entropy 07 00253 i132
For all Entropy 07 00253 i128 the asymptotic value of F reads γ/D.
We have here a direct control of the behavior of the ”power release” expression Entropy 07 00253 i133. Since
Entropy 07 00253 i134
in case of Entropy 07 00253 i100 < 0 we encounter a continual power supply Entropy 07 00253 i112 > 0 by the thermal environment.
In case of Entropy 07 00253 i100 > 0 the situation is more complicated. For example, if α0 = 0, we can easily check that Entropy 07 00253 i112 < 0, i.e. we have the power drainage from the environment for all tR+. More generally, the sign of Entropy 07 00253 i112 is negative for Entropy 07 00253 i188 < 2(Dγ Entropy 07 00253 i128)/γ. If the latter inequality is reversed, the sign of Entropy 07 00253 i112 is not uniquely specified and suffers a change at a suitable time instant tchange( Entropy 07 00253 i188, Entropy 07 00253 i128).
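All the regimes just described follow from the closed form of σ2(t) alone; a minimal numerical sketch (illustrative D, γ and initial variances) is given below.

```python
import numpy as np

D, gamma = 0.5, 1.0   # illustrative values

def variance(t, var0):
    # closed-form Ornstein-Uhlenbeck variance evolution
    return var0 * np.exp(-2 * gamma * t) + (D / gamma) * (1 - np.exp(-2 * gamma * t))

t = np.linspace(0.0, 5.0, 501)
for var0 in [0.1 * D / gamma, D / gamma, 10 * D / gamma]:
    var = variance(t, var0)
    S = 0.5 * np.log(2 * np.pi * np.e * var)     # differential entropy
    F = 1.0 / var                                # Fisher (localization) measure
    dS_dt = np.gradient(S, t)
    print(var0, S[0], S[-1], dS_dt[0])           # growth for var0 < D/gamma, decay for var0 > D/gamma
print("asymptotics:", 0.5 * np.log(2 * np.pi * np.e * D / gamma), gamma / D)
```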

4.6 Mean energy and the dynamics of Fisher information

By considering (−ρ)(x, t) and s(x, t), such that v = ∇s, as canonically conjugate fields, we can invoke the variational calculus. Namely, one may derive the continuity (and thus Fokker-Planck) equation together with the Hamilton-Jacobi type equation (whose gradient implies the hydrodynamical conservation law Eq. (76)):
Entropy 07 00253 i135
by means of the extremal (least, with fixed end-point variations) action principle involving the (mean) Lagrangian:
Entropy 07 00253 i136
The related Hamiltonian (which is the mean energy of the diffusion process per unit of mass) reads
Entropy 07 00253 i137
i. e.
Entropy 07 00253 i138
We can evaluate an expectation value of Eq. (98) which implies an identity Entropy 07 00253 i139. By invoking Eq. (86), with the time-independent V, we arrive at
Entropy 07 00253 i140
whose expectation value Entropy 07 00253 i141, in view of vρ = 0 at the integration volume boundaries, identically vanishes. Since v = −(1/mβ)∇Ψ, we define
s(x, t) ≐ −(1/mβ)Ψ(x, t) ⇒ 〈∂ts〉 = 0
so that Entropy 07 00253 i142 identically.
We have thus arrived at the following interplay between the mean energy and the information entropy ”production” rate:
Entropy 07 00253 i143
generally valid for Smoluchowski processes with non-vanishing diffusion currents.
By recalling the notion of the Fisher information Eq. (47) and setting Entropy 07 00253 i144, we can rewrite the above formula as follows:
Entropy 07 00253 i145
where F /2 = −〈Q〉 > 0 holds true for probability densities with finite mean and variance.
We may evaluate directly the uncertainty dynamics of the Smoluchowski process, by recalling that the Fisher information F /2 is the localization measure, which for probability densities with finite mean value and variance σ2 is bounded from below by 1/σ2, see e.g. Section 3.
Namely, by exploiting the hydrodynamical conservation laws Eq. (76) for the Smoluchowski process we get:
∂t(ρv2) = −∇ · (ρv3) − 2ρv · ∇(Q − Ω).
We assume to have secured conditions allowing to take a derivative under an indefinite integral, and take for granted that ρv3 vanishes at the integration volume boundaries. This implies the following expression for the time derivative of 〈v2〉:
Entropy 07 00253 i146
Proceeding in the same vein, in view of Ω̇ = 0, we find that
Entropy 07 00253 i147
and so the equation of motion for F follows:
Entropy 07 00253 i148
Since we have ∇Q = ∇P/ρ where P = D2ρ∆ ln ρ, the previous equation takes the form Ḟ = − ∫ ρ v · ∇Q dx = − ∫ v · ∇P dx, which is an analog of the familiar expression for the power release (dE/dt = F · v, with F = −∇V) in classical mechanics; this is to be compared with our previous discussion of the ”heat dissipation” term Eq. (82).
Remark 11: 
As our previous example of the Ornstein-Uhlenbeck process in one dimension indicates, there is nothing obvious to say about the growth or decay of the various quantities involved. In this particular case, we have e.g. 〈v2〉(t) = (D/2) Entropy 07 00253 i126 = t(γ2 Entropy 07 00253 i188/D) exp(−2γt), hence an asymptotic value 0, while 〈u2〉(t) = (D/2) F (t) → γ/D. Accordingly, we have 〈〉(t) → − γ/2D.

5 Differential entropy dynamics in quantum theory

5.1 Balance equations

In the discussion of Smoluchowski diffusions, our major reference point was the conventional Fokker-Planck equation (74) for a probability density supporting a Markovian diffusion process. The (time-independent) drift function b was assumed to be known a priori (e.g. the conservative external forces were established on phenomenological or model construction grounds), while the initial and/or boundary data for the probability density of the process could be chosen (to a high degree) arbitrarily.
Under such ”normal” circumstances, the hydrodynamical conservation laws (76) come out as a direct consequence of the Fokker-Planck equation. Also, the functional expression for Ω of the form (77) is basically known to arise if one attempts to replace an elliptic diffusion operator by a Hermitian (and possibly self-adjoint) one, [53,68,88].
We shall depart from the standard Brownian motion setting to more general Markovian diffusion-type processes which, while still respecting the Fokker-Planck equation, admit general time-dependent forward drifts. In fact, we invoke at this point a well defined stochastic counterpart of the Schrödinger picture quantum dynamics of wave packets, [50,51,52,53,54,68,85], where the notion of differential entropy and its dynamics finds a proper place. The dynamics of quantal probability densities is here resolved in terms of diffusion-type processes.
Let us assume to have chosen an arbitrary continuous (it is useful if bounded from below) function Entropy 07 00253 i149 with dimensions of energy. We consider the Schrödinger equation (set D = ħ/2m) in the form
Entropy 07 00253 i150
The Madelung decomposition ψ = ρ1/2 exp(is), with the phase function s = s(x, t) defining v = ∇s, is known to imply two coupled equations: the standard continuity equation ∂tρ = −∇ · (vρ) and the Hamilton-Jacobi-type equation
Entropy 07 00253 i151
where Entropy 07 00253 i152 and the functional form of Q coincides with this introduced previously in Eq. (72). Notice a ”minor” sign change in Eq. (110) in comparison with Eq. (98).
Those two equations form a coupled system, whose solutions describe a Markovian diffusion-type process: the probability density is propagated by a Fokker-Planck dynamics of the form Eq. (74) with the drift b = v + u, where u = D∇ln ρ is an osmotic velocity field.
We can mimic the calculus of variations steps of the previous section, so arriving at the Hamiltonian (actually, the mean energy of the quantum motion per unit of mass):
Entropy 07 00253 i153
to be compared with Eq. (100). There holds
Entropy 07 00253 i154
Of particular interest (due to its relative simplicity) is the case of time-independent V , when
Entropy 07 00253 i155
is known to be a conserved finite quantity, which is not necessarily positive. Since generally Entropy 07 00253 i156, we deal here with so-called finite energy diffusion-type processes, [51,52]. The corresponding Fokker-Planck equation propagates a probability density |ψ|2 = ρ, whose differential entropy S may quite nontrivially evolve in time.
Keeping intact the previous derivation procedures for ( Entropy 07 00253 i100)in (while assuming the validity of mathematical restrictions upon the behavior of integrands), we encounter the information entropy balance equations in their general form disclosed in Eqs. (81)-(83). The related differential entropy ”production” rate reads:
Entropy 07 00253 i157
We recall that Entropy 07 00253 i158 which implies Entropy 07 00253 i159. Therefore, the localization measure F has a definite upper bound: the pertinent wave packet cannot be localized too sharply.
We notice that the localization (Fisher) measure
Entropy 07 00253 i160
in general evolves in time. Here E is a constant and Ω̇ = 0.
By invoking the hydrodynamical conservation laws, we find out that the dynamics of Fisher information follows an equation:
Entropy 07 00253 i161
and that there holds
Entropy 07 00253 i162
which is to be compared (notice the opposite sign of the right-hand expression) with the result we have obtained for Smoluchowski processes.
Obviously, now we have Entropy 07 00253 i163, with the same functional form for P as before. We interpret Ḟ as the measure of power transfer in the course of which the (de)localization ”feeds” the diffusion current and in reverse. Here, we encounter a negative feedback between the localization and the proper energy of motion, which keeps intact the overall mean energy H = E of the quantum motion. See e.g. also [53].
In case of v = 0, we have Entropy 07 00253 i164 and no entropy ”production” nor dynamics of uncertainty. There holds Entropy 07 00253 i100 = 0 and we deal with time-reversible stationary diffusion processes and their invariant probability densities ρ(x), [52,68].
Remark 12: 
Let us indicate that the phase function s(x, t) shows certain (remnant) features of the Helmholtz potential Ψ and of 〈Ψ〉. This behavior is not unexpected, since e.g. the ground state densities (and other invariant densities of stationary states) are directly related to the time-reversible stationary diffusion-type processes of Refs. [52,68]. We have −〈∂ts〉 = E . In view of v = ∇s and the assumed vanishing of sρv at the integration volume boundaries, we get:
Entropy 07 00253 i165
The previously mentioned case of no entropy ”production” refers to v = 0 and thus s = s0 − E · t. We recall that the corresponding derivation of Eq. (89) has been carried out for v = −(1/mβ)∇Ψ, with Entropy 07 00253 i141 = 0. Hence, as close as possible a link with the present discussion is obtained if we re-define s into sΨ ≐ −s. Then we have
Entropy 07 00253 i166
For stationary quantum states, when v = 0 identically, we get Entropy 07 00253 i167, in contrast to the standard Fokker-Planck case of Entropy 07 00253 i168.
Interestingly enough, we can write the generalized Hamilton-Jacobi equation, when specified to the v = 0 regime, with respect to sΨ. Indeed, there holds ∂tsΨ = Ω − Q, in close affinity with Eq. (98) in the same regime.

5.2 Differential entropy dynamics exemplified

5.2.1 Free evolution

Let us consider the probability density in one space dimension:
Entropy 07 00253 i169
and the phase function
Entropy 07 00253 i170
which determine a free wave packet solution of equations (109) and (110), i.e. obtained for the case of V ≡ 0 with the initial data ψ(x,0) = (πα2)−1/4 exp(−x2/2α2).
We have:
Entropy 07 00253 i171
and the Fokker-Planck equation with the forward drift b(x, t) is solved by the above ρ.
In the present case, the differential entropy reads:
Entropy 07 00253 i172
where 〈X2〉 ≐ ∫ x2ρdx = (α4 + 4D2t2)/2α2. Its time rate Entropy 07 00253 i173 equals:
Entropy 07 00253 i174
for t ≥ 0. Its large time asymptotic is D/t.
Furthermore, we have
Entropy 07 00253 i175
with the obvious large time asymptotic value 2D2/α2: the differential entropy production remains untamed for all times.
Due to 〈u2〉 = (2D2α2)/(α4 + 4D2t2) there holds
Entropy 07 00253 i176
Accordingly, the quantum mechanical analog of the entropy (rather than heat) ”dissipation” term −D · Q in the quantum case reads
Entropy 07 00253 i177
and while taking negative values for t < α2/2D, it turns out to be positive for larger times. Formally speaking, after a short entropy ”dissipation” period we pass to the entropy ”absorption” regime which in view of its D/t asymptotic, for large times definitely dominates D( Entropy 07 00253 i100)in 2D2/α2.
Those differential entropy balance features do parallel a continual growth of the mean kinetic energy (1/2)〈v2〉 from an initial value 0 towards its asymptotic value D2/α2 = E . Note that the negative feedback is here displayed by the behavior of 〈u2〉 which drops down from the initial value 2D2/α2 towards 0. It is also instructive to notice that in the present case F (t) = D2/〈X2〉(t).
We can readily check that Entropy 07 00253 i178.
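The entropy balance of the spreading free packet can be tabulated directly from the closed formulas quoted above. The sketch below (D and α values illustrative) uses the identity 〈v2〉 + 〈u2〉 = 2D2/α2, consistent with the initial and asymptotic values given in the text.

```python
import numpy as np

D, alpha = 0.5, 1.0   # illustrative values
t = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 20.0])

X2 = (alpha ** 4 + 4 * D ** 2 * t ** 2) / (2 * alpha ** 2)           # <X^2>(t)
S = 0.5 * np.log(2 * np.pi * np.e * X2)                              # differential entropy
u2 = 2 * D ** 2 * alpha ** 2 / (alpha ** 4 + 4 * D ** 2 * t ** 2)    # osmotic contribution <u^2>
v2 = 2 * D ** 2 / alpha ** 2 - u2                                    # from <v^2> + <u^2> = 2*D^2/alpha^2

print(S)                                  # monotonic growth of the localization uncertainty
print(0.5 * v2, D ** 2 / alpha ** 2)      # (1/2)<v^2> rises from 0 towards its asymptotic value D^2/alpha^2
```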

5.2.2 Steady state

We choose the probability density in the form:
Entropy 07 00253 i179
where the classical harmonic dynamics with particle mass m and frequency ω is involved such that q(t) = q0 cos(ωt) + (p0/) sin(ωt) and p(t) = p0 cos(ωt) − mωq0 sin(ωt).
One can easily verify that Eqs. (109) and (110) hold true identically once we set V = (1/2)mω2x2 and consider:
s(x, t) = (1/2m)[xp(t) − (1/2)p(t)q(t) − mDωt].
A forward drift takes the form:
Entropy 07 00253 i180
and the above ρ solves the corresponding Fokker-Planck equation.
The differential entropy is a constant equal to S = (1/2) ln(2πeD/ω). Although trivially d S /dt = 0, all previous arguments can be verified.
For example, we have v = ∇s = p(t)/2m and therefore an oscillating entropy ”production” term D( Entropy 07 00253 i100)in = p2(t)/4m2, which is balanced by an oscillating ”dissipative” counter-term to yield Entropy 07 00253 i100 = 0.
Suitable expressions for 〈s〉 and 〈ts〉 easily follow.
Concerning the Fisher measure, we have obviously F = ω/D which is a constant of motion.

5.2.3 Squeezed state

Let us consider [33] the squeezed wave function of the harmonic oscillator. We adopt the re-scaled units ħ = ω = m = 1, hence also D = 1/2. The solution of the Schrödinger equation i∂tψ = −(1/2)∆ψ + (x2/2)ψ with the initial data ψ(x, 0) = (γ2π)−1/4 exp(−x2/2γ2) and γ ∈ (0, ∞), is defined in terms of the probability density:
Entropy 07 00253 i181
where
Entropy 07 00253 i182
and the phase function
Entropy 07 00253 i183
Now, the differential entropy S = (1/2) ln[2πeσ2(t)] displays a periodic behavior in time, whose level of complexity depends on the particular value of the squeezing parameter γ. The previously mentioned negative feedback is here manifested through (counter)oscillations of the localization, in conformity with the dynamics of σ2(t) and the corresponding oscillating dynamics of the Fisher measure F = 1/σ2(t).
See e.g. also [33] for a pictorial analysis and an instructive computer assisted discussion of the Schrödinger cat state (superposition of the harmonic oscillator coherent states with the same amplitude but with opposite phases), with the time evolution of the corresponding differential entropy.
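The oscillating entropy of the squeezed packet is easy to tabulate once σ2(t) is known explicitly. The sketch below uses the textbook variance σ2(t) = [γ2 cos2 t + γ−2 sin2 t]/2, consistent with the initial value σ2(0) = γ2/2; this closed form is quoted here as an assumption, since it is not spelled out in the text above.

```python
import numpy as np

gamma = 0.5   # squeezing parameter (illustrative); hbar = omega = m = 1, hence D = 1/2

def sigma2(t):
    # assumed textbook variance of the squeezed packet with psi(x,0) ~ exp(-x^2 / 2 gamma^2)
    return 0.5 * (gamma ** 2 * np.cos(t) ** 2 + np.sin(t) ** 2 / gamma ** 2)

t = np.linspace(0.0, 2 * np.pi, 9)
S = 0.5 * np.log(2 * np.pi * np.e * sigma2(t))   # oscillates with period pi
F = 1.0 / sigma2(t)                              # counter-oscillating localization (Fisher) measure
print(np.round(S, 3))
print(np.round(F, 3))
```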

5.2.4 Stationary states

In contrast to generic applications of the standard Fokker-Planck equation, where one takes for granted that there is a unique positive stationary probability density, the situation looks otherwise if we admit the Schrödinger equation as a primary dynamical rule for the evolution of (inferred) probability densities. For a chosen potential, all available stationary quantum states may serve the purpose, since then we have nonnegative (zeroes are now admitted) ρ(x), and v(x) = 0 identically (we stay in one spatial dimension).
The standard harmonic oscillator may serve as an instructive example. One may e.g. consult Fig. 3 in [30] to check the behavior of both position and momentum differential entropies, and their sum, depending on the energy eigenvalue. All these stationary state values grow monotonically with n = 1, 2, ..., 60, [30], and follow the same pattern in the asymptotic regime n ∼ 500, [33].
For convenience we shall refer to the Schrödinger eigenvalue problem with scaled away physical units. We consider (compare e.g. Eq. (109) with D → 1/2)
Entropy 07 00253 i184
In terms of a suitable Hamilton-Jacobi type equation we can address the same problem by seeking solutions of an equation
n + 1/2 = Ω − Q
with respect to Entropy 07 00253 i185, provided we set Ω = x2/2, define u = ∇ln Entropy 07 00253 i185 and demand that Q = u2/2 + (1/2)∇ · u.
For the harmonic oscillator problem, we can refer to standard textbooks. For each value of n we recover a corresponding unique stationary density: Entropy 07 00253 i186 with n = 0, 1, 2, .... We have:
Entropy 07 00253 i187
where Hn(x) stands for the n-th Hermite polynomial: H0 = 1, H1 = 2x, H2 = 2(2x2 − 1), H3 = 4x(2x2 − 3), and so on.
We immediately infer e.g. b0 = −x → Q = x2/2 − 1/2, next b1 = (1/x) − x → Q = x2/2 − 3/2, and b2 = [4x/(2x2 − 1)] − x → Q = x2/2 − 5/2, plus b3 = [(1/x) + 4x/(2x2 − 3)] − x → Q = x2/2 − 7/2, to be continued for n > 3. Therefore Eq. (135) is here a trivial identity.
Obviously, except for the ground state which is strictly positive, all remaining stationary states are nonnegative.
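The identity n + 1/2 = Ω − Q, together with the listed drifts bn, can be confirmed symbolically for the Hermite densities; a short sympy sketch follows (the construction of u and Q mirrors the definitions given above).

```python
import sympy as sp

x = sp.symbols('x', real=True)

for n in range(5):
    Hn = sp.hermite(n, x)
    rho = Hn ** 2 * sp.exp(-x ** 2)        # unnormalized stationary density rho_n
    u = sp.diff(sp.log(rho), x) / 2        # u = d/dx ln rho_n^(1/2); here b_n = u
    Q = u ** 2 / 2 + sp.diff(u, x) / 2
    Omega = x ** 2 / 2
    print(n, sp.simplify(Omega - Q), sp.simplify(u))   # Omega - Q = n + 1/2; u reproduces b_n
```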
An open problem, first generally addressed in [95], see also [96], is to implement a continuous dynamical process for which any of the induced stationary densities may serve as an invariant asymptotic one. An obvious (Ornstein-Uhlenbeck) solution is known for the ground state density. Its ample discussion (albeit without mentioning the quantum connotation) has been given in Section 4.

6 Outlook

One may raise the issue of what the entropy functionals are good for. In non-equilibrium statistical mechanics of gases one invokes them to solve concrete physical problems: for example, to address the second law of thermodynamics and the related Boltzmann H-theorem from a probabilistic point of view.
We are strongly motivated by a number of reported problems with the rigorous formulation of effects of noise on entropy evolution and a justification of the entropy growth paradigm for model systems, [13,57,58,89,92,97] and a long list of mathematically oriented papers on the large time asymptotic of stochastic systems, [98,99,100,101,102]. Therefore, the major issue addressed in the present paper is that of quantifying the dynamics of probability densities in terms of proper entropy functionals. The formalism was designed to encompass standard diffusion processes of non-equilibrium statistical physics and the Schrödinger picture implemented dynamics of probability densities related to pure quantum states in L2(R). In the latter case an approach towards equilibrium has not been expected to occur at all.
To this end the behavior of Shannon and Kullback-Leibler entropies in time has been investigated in classical and quantum mechanical contexts. The utility of a particular form of entropy for a given dynamical model appears to be basically purpose dependent. The use of the Kullback-Leibler entropy encounters limitations, not shared by the differential entropy, when the dynamical process is rapid or when one is interested in its short-time features.
On the contrary, it is the conditional Kullback-Leibler entropy which is often invoked in rigorous formulations of the Boltzmann H-theorem under almost-equilibrium conditions and its analogues for stochastic systems. The large time asymptotics of solutions of Fokker-Planck equations, if analyzed in terms of this entropy, gives reliable results.
However, our analysis of Smoluchowski diffusion processes and of the exemplary Ornstein-Uhlenbeck process demonstrates that a deeper insight into the underlying non-equilibrium physical phenomena (the inherent power transfer processes) is available only in terms of the Shannon entropy and its time rate of change. This insight is inaccessible in terms of the Kullback-Leibler entropy.
The differential entropy need not increase, even in case of plainly irreversible dynamics. The monotonic growth in time of the conditional Kullback entropy (when applicable) should not necessarily be related to the ”dynamical origins of the increasing entropy”, [47,93]. We would rather say that the conditional entropy is well suited to stay in correspondence with the lore of the second law of thermodynamics, since by construction its time behavior is monotonic, if one quantifies an asymptotic approach towards a stationary density.
In case of Smoluchowski processes, the time rate of the conditional Kullback entropy was found to coincide with the corresponding differential (Shannon) entropy ”production” rate. The differential entropy itself need not grow and may as well change its dynamical regime from growth to decay and in reverse, even with the entropy ”production” involved.
Balance equations for the differential entropy and the Fisher information measure involve a nontrivial power transfer. In case of Smoluchowski processes this power release can be easily attributed to the entropy removal from the system or the entropy absorption from (drainage of) the thermostat. In the quantum mechanical regime, the inherent power transfer is related to metamorphoses of various forms of mean energy among themselves and does not need the notion of a thermostat external to the system.
Apart from the above observations, we have provided a comprehensive review of varied appearances of the differential entropy in the existing literature on both classical and quantum dynamical systems. As a byproduct of the general discussion, we have described its specific quantum manifestations, in the (pure quantum states) regime where the traditional von Neumann entropy is of not much use.

Acknowledgments

The paper has been supported by the Polish Ministry of Scientific Research and Information Technology under the (solicited) grant No PBZ-MIN-008/P03/2003. I would like to thank Professor Robert Alicki for help in quest for some hardly accessible references.

Dedication

This paper is dedicated to Professor Rafael Sorkin on the occasion of his 60th birthday, with friendly admiration.

References

  1. Alicki, R.; Fannes, M. Quantum Dynamical Systems; Oxford University Press: Oxford, 2001. [Google Scholar]
  2. Ohya, M.; Petz, D. Quantum Entropy and Its use; Springer-Verlag: Berlin, 1993. [Google Scholar]
  3. Wehrl, A. General properties of entropy. Rev. Mod. Phys. 1978, 50, 221–260. [Google Scholar] [CrossRef]
  4. Shannon, C.E. A mathematical theory of communication. Bell Syst. Techn. J. 1948, 27, 379–423, 623–656. [Google Scholar] [CrossRef]
  5. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: NY, 1991. [Google Scholar]
  6. Sobczyk, K. Information Dynamics: Premises, Challenges and Results. Mechanical Systems and Signal Processing 2001, 15, 475–498. [Google Scholar] [CrossRef]
  7. Yaglom, A.M.; Yaglom, I.M. Probability and Information; D. Reidel: Dordrecht, 1983. [Google Scholar]
  8. Hartley, R.V.L. Transmission of information. Bell Syst. Techn. J. 1928, 7, 535–563. [Google Scholar] [CrossRef]
  9. Brillouin, L. Science and Information Theory; Academic Press: NY, 1962. [Google Scholar]
  10. Ingarden, R.S.; Kossakowski, A.; Ohya, M. Information Dynamics and Open Systems; Kluwer: Dordrecht, 1997. [Google Scholar]
  11. Brukner, Ĉ.; Zeilinger, A. Conceptual inadequacy of the Shannon information in quantum measurements. Phys. Rev. 2002, A 63, 022113. [Google Scholar] [CrossRef]
  12. Mana, P.G.L. Consistency of the Shannon entropy in quantum experiments. Phys. Rev. 2004, A 69, 062108. [Google Scholar] [CrossRef]
  13. Jaynes, E.T. Information theory and statistical mechanics.II. Phys. Rev. 1957, 108, 171–190. [Google Scholar] [CrossRef]
  14. Stotland, A.; et al. The information entropy of quantum mechanical states. Europhys. Lett. 2004, 67, 700–706. [Google Scholar] [CrossRef]
  15. Partovi, M.H. Entropic formulation of uncertainty for quantum measurements. Phys. Rev. Lett. 1983, 50, 1883–1885. [Google Scholar] [CrossRef]
  16. Adami, C. Physics of information. 2004; arXiv:quant-ph/040505. [Google Scholar]
  17. Deutsch, D. Uncertainty in quantum measurement. Phys. Rev. Lett. 1983, 50, 631–633. [Google Scholar] [CrossRef]
  18. Garbaczewski, P.; Karwowski, W. Impenetrable barrriers and canonical quantization. Am. J. Phys. 2004, 72, 924–933. [Google Scholar] [CrossRef]
  19. Hirschman, I.I. A note on entropy. Am. J. Math. 1957, 79, 152–156. [Google Scholar] [CrossRef]
  20. Beckner, W. Inequalities in Fourier analysis. Ann. Math. 1975, 102, 159–182. [Google Scholar] [CrossRef]
  21. Bia-lynicki-Birula, I.; Mycielski, J. Uncertainty Relations for Information Entropy in Wave Mechanics. Commun. Math. Phys. 1975, 44, 129–132. [Google Scholar] [CrossRef]
  22. Bia-lynicki-Birula, I.; Madajczyk, J. Entropic uncertainty relations for angular distributions. Phys. Lett. 1985, A 108, 384–386. [Google Scholar] [CrossRef]
  23. Stam, A.J. Some inequalities satisfied by the quantities of information of Fisher and Shannon. Inf. and Control 1959, 2, 101–112. [Google Scholar] [CrossRef]
  24. Dembo, A.; Cover, T. Information theoretic inequalities. IEEE Trans. Inf. Th. 1991, 37, 1501–1518. [Google Scholar] [CrossRef]
  25. Maasen, H.; Uffink, J.B.M. Generalized Entropic Uncertainty Relations. Phys. Rev. Lett. 1988, 60, 1103–1106. [Google Scholar] [CrossRef] [PubMed]
  26. Blankenbecler, R.; Partovi, M.H. Uncertainty, entropy, and the statistical mechanics of microscopic systems. Phys. Rev. Lett. 1985, 54, 373–376. [Google Scholar] [CrossRef] [PubMed]
  27. Sa´nchez-Ruiz, J. Asymptotic formula for the quantum entropy of position in energy eigenstates. Phys. Lett. 1997, A 226, 7–13. [Google Scholar]
  28. Halliwell, J.J. Quantum-mechanical histories and the uncertainty principle: Information-theoretic inequalities. Phys. Rev. 1993, D 48, 2739–2752. [Google Scholar] [CrossRef] [Green Version]
  29. Gadre, S.R.; et al. Some novel characteristics of atomic information entropies. Phys. Rev. 1985, A 32, 2602–2606. [Google Scholar] [CrossRef]
  30. Yan˜ez, R.J.; Van Assche, W.; Dehesa, J.S. Position and information entropies of the D-dimensional harmonic oscillator and hydrogen atom. Phys. Rev. 1994, A 50, 3065–3079. [Google Scholar]
  31. Yan˜ez, R.J.; et al. Entropic integrals of hyperspherical harmonics and spatial entropy of D-dimensional central potentials. J. Math. Phys. 1999, 40, 5675–5686. [Google Scholar]
  32. Buyarov, V.; et al. Computation of the entropy of polynomials orthogonal on an interval. SIAM J. Sci. Comp. to appear (2004), also math.NA/0310238. [CrossRef]
  33. Majernik, V.; Opatrny´, T. Entropic uncertainty relations for a quantum oscillator. J. Phys. A: Math. Gen. 1996, 29, 2187–2197. [Google Scholar] [CrossRef]
  34. Majernik, V.; Richterek, L. Entropic uncertainty relations for the infinite well. J. Phys. A: Math. Gen. 1997, 30, L49–L54. [Google Scholar] [CrossRef]
  35. Massen, S.E.; Panos, C.P. Universal property of the information entropy in atoms, nuclei and atomic clusters. Phys. Lett. 1998, A 246, 530–532. [Google Scholar] [CrossRef]
  36. Massen, S.E.; et al. Universal property of information entropy in fermionic and bosonic systems. Phys. Lett. 2002, A 299, 131–135. [Google Scholar] [CrossRef]
  37. Massen, S.E. Application of information entropy to nuclei. Phys. Rev. 2003, C 67, 014314. [Google Scholar] [CrossRef]
  38. Coffey, M.W. Asymtotic relation for the quantum entropy of momentum in energy eigenstates. Phuys. Lett. 2004, A 324, 446–449. [Google Scholar] [CrossRef]
  39. Coffey, M.W. Semiclassical position entropy for hydrogen-like atoms. J. Phys. A: Math. Gen. 2003, 36, 7441–7448. [Google Scholar] [CrossRef]
  40. Dunkel, J.; Trigger, S.A. Time-dependent entropy of simple quantum model systems. Phys. Rev. 2005, A 71, 052102. [Google Scholar] [CrossRef] [Green Version]
  41. Santhanam, M.S. Entropic uncertainty relations for the ground state of a coupled sysytem. Phys. Rev. 2004, A 69, 042301. [Google Scholar] [CrossRef]
  42. Balian, R. Random matrices and information theory. Nuovo Cim. 1968, B 57, 183–103. [Google Scholar] [CrossRef]
  43. Werner, S.A.; Rauch, H. Neutron interferometry: Lessons in Experimental Quantum Physics; Oxford University Press: Oxford, 2000. [Google Scholar]
  44. Zeilinger, A.; et al. Single- and double-slit diffraction of neutrons. Rev. Mod. Phys. 1988, 60, 1067–1073. [Google Scholar] [CrossRef]
  45. Caves, C.M.; Fuchs, C. Quantum information: how much information in a state vector? Ann. Israel Phys. Soc. 1996, 12, 226–237. [Google Scholar]
  46. Newton, R.G. What is a state in quantum mechanics? Am. J. Phys. 2004, 72, 348–350. [Google Scholar] [CrossRef]
  47. Mackey, M.C. The dynamic origin of increasing entropy. Rev. Mod. Phys. 1989, 61, 981–1015. [Google Scholar] [CrossRef]
  48. Lasota, A.; Mackey, M.C. Chaos, Fractals and Noise; Springer-Verlag: Berlin, 1994. [Google Scholar]
  49. Berndl, K.; et al. On the global existence of Bohmian mechanics. Commun. Math. Phys. 1995, 173, 647–673. [Google Scholar] [CrossRef]
  50. Nelson, E. Dynamical Theories of the Brownian Motion; Princeton University Press: Princeton, 1967. [Google Scholar]
  51. Carlen, E. Conservative diffusions. Commun. Math. Phys. 1984, 94, 293–315. [Google Scholar] [CrossRef]
  52. Eberle, A. Uniqueness and Non-uniqueness of Semigroups Generated by Singular Diffusion Operators; LNM vol. 1718, Springer-Verlag: Berlin, 2000. [Google Scholar]
  53. Garbaczewski, P. Perturbations of noise: Origins of isothermal flows. Phys. Rev. E 1999, 59, 1498–1511. [Google Scholar] [CrossRef]
  54. Garbaczewski, P.; Olkiewicz, R. Feynman-Kac kernels in Markovian representations of the Schrödinger interpolating dynamics. J. Math. Phys. 1996, 37, 732–751. [Google Scholar] [CrossRef]
  55. Ambegaokar, V.; Clerk, A. Entropy and time. Am. J. Phys. 1999, 67, 1068–1073. [Google Scholar] [CrossRef] [Green Version]
56. Trębicki, J.; Sobczyk, K. Maximum entropy principle and non-stationary distributions of stochastic systems. Probab. Eng. Mechanics 1996, 11, 169–178. [Google Scholar]
  57. Huang, K. Statistical Mechanics; Wiley: New York, 1987. [Google Scholar]
  58. Cercignani, C. Theory and Application of the Boltzmann Equation; Scottish Academic Press: Edinburgh, 1975. [Google Scholar]
  59. Daems, D.; Nicolis, G. Entropy production and phase space volume contraction. Phys. Rev. E 1999, 59, 4000–4006. [Google Scholar] [CrossRef]
  60. Dorfman, J.R. An Introduction to Chaos in Nonequilibrium Statistical Physics; Cambridge Univ. Press: Cambridge, 1999. [Google Scholar]
  61. Gaspard, P. Chaos, Scattering and Statistical Mechanics; Cambridge Univ. Press: Cambridge, 1998. [Google Scholar]
  62. Deco, G.; et al. Determining the information flow of dynamical systems from continuous probability distributions. Phys. Rev. Lett. 1997, 78, 2345–2348. [Google Scholar] [CrossRef]
63. Bologna, M.; et al. Trajectory versus probability density entropy. Phys. Rev. 2001, E 64, 016223. [Google Scholar] [CrossRef] [PubMed]
64. Bag, B.C.; et al. Noise properties of stochastic processes and entropy production. Phys. Rev. 2001, E 64, 026110. [Google Scholar] [CrossRef] [PubMed]
  65. Bag, B.C. Upper bound for the time derivative of entropy for nonequilibrium stochastic processes. Phys. Rev. 2002, E 65, 046118. [Google Scholar] [CrossRef] [PubMed]
  66. Hatano, T.; Sasa, S. Steady-State Thermodynamics of Langevin Systems. Phys. Rev. Lett. 2001, 86, 3463–3466. [Google Scholar] [CrossRef] [PubMed]
  67. Qian, H. Mesoscopic nonequilibrium thermodynamics of single macromolecules and dynamic entropy-energy compensation. Phys. Rev. 2001, E 65, 016102. [Google Scholar] [CrossRef] [PubMed]
  68. Jiang, D.-Q.; Qian, M.; Qian, M.-P. Mathematical theory of nonequilibrium steady states; LNM vol. 1833, Springer-Verlag: Berlin, 2004. [Google Scholar]
  69. Qian, H.; Qian, M.; Tang, X. Thermodynamics of the general diffusion process: time-reversibility and entropy production. J. Stat. Phys. 2002, 107, 1129–1141. [Google Scholar] [CrossRef]
  70. Ruelle, D. Positivity of entropy production in nonequilibrium statistical mechanics. J. Stat. Phys. 1996, 85, 1–23. [Google Scholar] [CrossRef]
  71. Munakata, T.; Igarashi, A.; Shiotani, T. Entropy and entropy production in simple stochastic models. Phys. Rev. 1998, E 57, 1403–1409. [Google Scholar]
  72. Tribus, M.; Rossi, R. On the Kullback information measure as a basis for information theory: Comments on a proposal by Hobson and Chang. J. Stat. Phys. 1973, 9, 331–338. [Google Scholar] [CrossRef]
  73. Smith, J.D.H. Some observations on the concepts of information-theoretic entropy and randomness. Entropy 2001, 3, 1–11. [Google Scholar] [CrossRef]
  74. Chandrasekhar, S. Stochastic problems in physics and astronomy. Rev. Mod. Phys. 1943, 15, 1–89. [Google Scholar] [CrossRef]
75. Hall, M.J.W. Universal geometric approach to uncertainty, entropy and information. Phys. Rev. 1999, A 59, 2602–2615. [Google Scholar] [CrossRef]
76. Pipek, J.; Varga, I. Universal classification scheme for the spatial-localization properties of one-particle states in finite d-dimensional systems. Phys. Rev. 1992, A 46, 3148–3163. [Google Scholar] [CrossRef] [PubMed]
  77. Varga, I.; Pipek, J. Rényi entropies characterizing the shape and the extension of the phase-space representation of quantum wave functions in disordered systems. Phys. Rev. 2003, E 68, 026202. [Google Scholar] [CrossRef] [PubMed]
  78. McClendon, M.; Rabitz, H. Numerical simulations in stochastic mechanics. Phys. Rev. 1988, A 37, 3479–3492. [Google Scholar] [CrossRef]
  79. Garbaczewski, P. Signatures of randomness in quantum spectra. Acta Phys. Pol. 2002, A 33, 1001–1024. [Google Scholar]
  80. Hu, B.; et al. Quantum chaos of a kicked particle in an infinite potential well. Phys. Rev. Lett. 1999, 82, 4224–4227. [Google Scholar] [CrossRef]
81. Kullback, S. Information Theory and Statistics; Wiley: New York, 1959. [Google Scholar]
82. Cramér, H. Mathematical Methods of Statistics; Princeton University Press: Princeton, 1946. [Google Scholar]
  83. Hall, M.J.W. Exact uncertainty relations. Phys. Rev. 2001, A 64, 052103. [Google Scholar] [CrossRef]
  84. Garbaczewski, P. Stochastic models of exotic transport. Physica 2000, A 285, 187–198. [Google Scholar] [CrossRef]
  85. Carlen, E.A. Superadditivity of Fisher’s information and logarithmic Sobolev inequalities. J. Funct. Anal. 1991, 101, 194–211. [Google Scholar] [CrossRef] [Green Version]
86. Frieden, B.R.; Soffer, B.H. Lagrangians of physics and the game of Fisher-information transfer. Phys. Rev. 1995, E 52, 2274–2286. [Google Scholar] [CrossRef]
  87. Catalan, R.G.; Garay, J.; Lopez-Ruiz, R. Features of the extension of a statistical measure of complexity to continuous systems. Phys. Rev. 2002, E 66, 011102. [Google Scholar] [CrossRef] [PubMed]
  88. Risken, H. The Fokker-Planck Equation; Springer-Verlag: Berlin, 1989. [Google Scholar]
  89. Hasegawa, H. Thermodynamic properties of non-equilibrium states subject to Fokker-Planck equations. Progr. Theor. Phys. 1977, 57, 1523–1537. [Google Scholar] [CrossRef]
90. Vilar, J.M.G.; Rubi, J.M. Thermodynamics "beyond" local equilibrium. Proc. Natl. Acad. Sci. USA 2001, 98, 11081–11084. [Google Scholar] [CrossRef] [PubMed]
  91. Kurchan, J. Fluctuation theorem for stochastic dynamics. J. Phys. A: Math. Gen. 1998, 31, 3719–3729. [Google Scholar] [CrossRef]
92. Mackey, M.C.; Tyran-Kamińska, M. Effects of noise on entropy evolution. 2005; arXiv.org preprint cond-mat/0501092. [Google Scholar]
93. Mackey, M.C.; Tyran-Kamińska, M. Temporal behavior of the conditional and Gibbs entropies. 2005; arXiv.org preprint cond-mat/0509649. [Google Scholar]
  94. Czopnik, R.; Garbaczewski, P. Frictionless Random Dynamics: Hydrodynamical Formalism. Physica 2003, A 317, 449–471. [Google Scholar] [CrossRef]
95. Fortet, R. Résolution d'un système d'équations de M. Schrödinger. J. Math. Pures Appl. 1940, 9, 83. [Google Scholar]
  96. Blanchard, Ph.; Garbaczewski, P. Non-negative Feynman-Kac kernels in Schrödinger’s interpolation problem. J. Math. Phys. 1997, 38, 1–15. [Google Scholar] [CrossRef]
  97. Jaynes, E.T. Violations of Boltzmann’s H Theorem in Real Gases. Phys. Rev. 1971, A 4, 747–750. [Google Scholar] [CrossRef]
  98. Voigt, J. Stochastic operators, Information and Entropy. Commun. Math. Phys. 1981, 81, 31–38. [Google Scholar] [CrossRef]
99. Voigt, J. The H-Theorem for Boltzmann type equations. J. Reine Angew. Math. 1981, 326, 198–213. [Google Scholar]
  100. Toscani, G. Kinetic approach to the asymptotic behaviour of the solution to diffusion equation. Rend. di Matematica 1996, Serie VII 16, 329–346. [Google Scholar]
  101. Bobylev, A.V.; Toscani, G. On the generalization of the Boltzmann H-theorem for a spatially homogeneous Maxwell gas. J. Math. Phys. 1992, 33, 2578–2586. [Google Scholar] [CrossRef]
  102. Arnold, A.; et al. On convex Sobolev inequalities and the rate of convergence to equilibrium for Fokker-Planck type equations. Comm. Partial Diff. Equations 2001, 26, 43–100. [Google Scholar] [CrossRef]
