## 1. Introduction

L²(Rⁿ) wave packet.

#### 1.1 Notions of entropy

(with p_i, 1 ≤ i ≤ N, being its eigenvalues), the information gain can be described in terms of both the von Neumann and the standard Shannon measures of information:

L²(Rⁿ) wave packets, [15], [19–42].

L²(R) wave packets and their dynamical manifestations (time-dependent analysis), which are currently within the reach of experimental techniques, [43,44]. It is enough to invoke pure quantum states in L²(Rⁿ) and standard position–momentum observables which, quite apart from a hasty criticism [17], still stand as a valid canonical quantization cornerstone of quantum theory, [18].

#### 1.2 Differential entropy

L²(Rⁿ), and thus pursue the following (albeit scope-limited, cf. [43,44] for experimental justifications) view: an isolated system is represented in quantum mechanics by a state vector that conveys statistical predictions for possible measurement outcomes.

L²(Rⁿ) wave packets, in conjunction with the Born statistical interpretation, hence with ψ-induced probability measures in position and momentum space, [19,20]. The experimental connotations pertaining to the notion of uncertainty or indeterminacy are rather obvious, although they do not quite fit the current quantum information idea of a "useful" quantum measurement, [11].

Rⁿ, we define the differential entropy [5,6,48] as follows:

Rⁿ to be the support of ρ instead of R; this is guaranteed by the convention that the integrand in Eq. (3) vanishes wherever ρ does. Note a minor but crucial notational difference between $\hat{\rho}$ and ρ.

L²(Rⁿ) wave packet ψ(x), is bounded from below:

#### 1.3 Temporal behavior: preliminaries

|ψ(x,t)|² = ρ(x,t) may give rise to a nontrivial dynamics of the information entropy associated with the wave packet ψ(x,t).

∫_Γ ρ (ln ρ − ln ρ′) dx, [4,6], provided ρ′ is strictly positive.

L²(Rⁿ) normalized wave packet ψ(x,t) (strictly speaking, of the related probability density). We indicate that, at variance with standard thermodynamical intuitions, the quantum mechanical information (differential) entropy need not be a monotonic function of time. In the course of its evolution it may oscillate, increase or decrease with the flow of time, instead of merely increasing or staying constant, as customarily expected. This holds regardless of the intrinsic time-reversal property of the quantum dynamics.

x_0 ∈ X for the trajectory dynamics taking place in a phase space X of the system. This imprecision extends to the terminal data (x_0 → x_t after time t > 0) as well.

x_0 ∈ A), then after time t one can identify the terminal state of the system x_t ∈ X in a subset B ⊂ X with a probability prob(x_t ∈ B). An evolution of the derived probability densities may eventually be obtained as a solution of an appropriate partial differential transport equation, [47,48,59,63].

**Remark 1:**

**Remark 2:**

|ψ|². Therefore, one is tempted to resolve such dynamics in terms of (Markovian) diffusion-type processes and their sample paths, see e.g. [50,51,52] and [53,54]. A direct interpretation in terms of random "trajectories" of a Markovian diffusion-type process is here in principle possible under a number of mathematical restrictions, but it is non-unique and not necessarily global in time. The nontrivial boundary data, like the presence of wave function nodes, create additional problems, although the nodes are known to be never reached by the pertinent processes. The main source of difficulty lies in guaranteeing the existence of a process per se, i.e. of a well-defined transition probability density function solving a suitable parabolic partial differential equation (Fokker–Planck or Kramers).

#### 1.4 Outline of the paper

## 2. Differential entropy: uncertainty versus information

#### 2.1 Prerequisites

(μ_1, ..., μ_N) as a probability measure on N distinct (discrete) events A_j, 1 ≤ j ≤ N, pertaining to a model system. Assume that μ_j = prob(A_j) stands for the probability of the event A_j to occur in a game of chance with N possible outcomes.

−ln μ_j an uncertainty function of the event A_j. Interestingly, we can coin here the name of the ("missing") information function, if we wish to interpret what can be learned via direct observation of the event A_j: the less probable that event is, the more valuable (larger) is the information we would retrieve through its registration.

A_1, ..., A_N with labels for particular discrete "states" of the system, we may interpret Eq. (5) as a measure of uncertainty of the "state" of the system, before this particular "state" is chosen out of the set of all admissible ones. This conforms well with the standard meaning attributed to the Shannon entropy: it is a measure of the degree of ignorance concerning which possibility (event A_j) may hold true in the set {A_1, A_2, ..., A_N} with a given a priori probability distribution {μ_1, ..., μ_N}.

μ_j = 1/N for all 1 ≤ j ≤ N occurs. In the latter situation, all events (or measurement outcomes) are equiprobable, and log N sets the maximum for a measure of the "missing information".

#### 2.2 Events, states, microstates and macrostates

(log_2 instead of ln) may be interpreted [8] as "a measure of information produced when one message is chosen from the set, all choices being equally likely" ("message" to be identified with a "microstate"). Another interpretation of −ln P is that of a degree of uncertainty in the trial experiment, [7].

A_1 if a molecule can be found in the "1" half-box, and A_2 if it is placed in the other half "2".

n molecules in the state A_1 and G − n molecules in the state A_2.

k_B ln W(n) corresponds to N_1 = N_2 = n.

n_1/G ≐ p_1, ..., n_N/G ≐ p_N with which the elementary states of the type 1, ..., N do actually occur. This sample is a substitute for a "message" or a "statistical microstate" in the previous discussion.

p_1, ..., p_N of elementary states. We interpret those samples to display the same "macroscopic behavior".

μ_1, ..., μ_N, with the identification p_i ≐ μ_i for all 1 ≤ i ≤ N, by the Shannon formula:

μ_1 = 0.1 and μ_2 = 0.9 yield $\mathcal{S}(\mu )$ = 0.469. Analogously, 0.2 and 0.8 imply 0.7219, while 0.3 and 0.7 give 0.8813. Next, 0.4 and 0.6 imply 0.971, and we reach the obvious maximum $\mathcal{S}$ = 1 for μ_1 = μ_2 = 0.5. An instructive example of the "dog–flea" model at work, with G = 50 fleas jumping back and forth between their "states of residence" on dog "1" or dog "2", can be found in Ref. [55]. Albeit, in a number of specific cases, the evolution of the Gibbs entropy may show some surprises if the "entropy growth dogma" is uncritically accepted; see e.g. the examples in [55,56] and the discussion in Refs. [57,58].
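The binary Shannon entropy values quoted above are straightforward to recompute; a minimal Python sketch (base-2 logarithms, as in the text) follows:

```python
from math import log2

def shannon_entropy(probs):
    # -sum p*log2(p), with the convention 0*log 0 = 0
    return -sum(p * log2(p) for p in probs if p > 0)

# two-state measures (mu_1, mu_2) considered in the text
for p in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(p, round(shannon_entropy((p, 1 - p)), 4))
```

The printed values reproduce the sequence 0.469, 0.7219, 0.8813, 0.971, 1.0.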

A_j with the previous probability measure μ). Then, Eq. (8) may be interpreted as a measure of information per alphabet letter, obtained after a particular message (string ≡ state of the model system) has been received or measured, cf. our discussion preceding Eq. (8). In this case, the Shannon entropy interpolates between maximal information (one certain event) and minimal information (uniform distribution), cf. Eq. (6).

**Remark 3:**

−log_2(1/27) ≃ 4.76. Accordingly, log_2 W = G · $\mathcal{S}(\mu )$ ≃ 47.600, where W is the number of admissible microstates.

#### 2.3 Shannon entropy and differential entropy

#### 2.3.1 Bernoulli scheme and normal distribution

A_1 appears with a probability μ_1 = p, while A_2 with a probability μ_2 = 1 − p. The probability that A_1 would have appeared exactly n times in a series of G repetitions of the two-state experiment is given by the Bernoulli formula:

(p + q)^G; after setting q = 1 − p we arrive at

{P_0, P_1, ..., P_G} for G + 1 distinct random events denoted B_0, B_1, ..., B_G. Accordingly, we can introduce a random variable B and say that it has the Bernoulli distribution if B takes the values n = 0, 1, ..., G with the Bernoulli probabilities P_n of Eq. (9) for all n. A random event B_n is interpreted as "taking the value n in the Bernoulli scheme".

prob(B_k) = P_k. We know that P(B < n) = ∑_{k<n} P_k. The mean value E(B) of B reads E(B) = Gp, while the variance E([B − E(B)]²) of B equals Gp(1 − p).

x_n = nr, x_0 = Gpr and σ² = Gr²p(1 − p). Obviously, ρ(x_n) is not a probability on its own, while r · ρ(x_n) = P_n is the probability to find a particle in the n-th interval of length r, out of the admitted number G = L/r of bins.
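The approximation behind this statement — that near the mean the Gaussian weight r · ρ(x_n) reproduces the exact Bernoulli probability P_n — can be probed numerically. The parameters G, p, r below are illustrative choices, not values from the text:

```python
from math import comb, exp, pi, sqrt

G, p, r = 1000, 0.5, 0.01              # illustrative: G trials, bin width r
q = 1 - p
x0, var = G * p * r, G * r**2 * p * q  # mean Gpr and variance G r^2 p(1-p)

def P(n):
    # exact Bernoulli weight
    return comb(G, n) * p**n * q**(G - n)

def rho(x):
    # Gaussian density with the matching mean and variance
    return exp(-(x - x0)**2 / (2 * var)) / sqrt(2 * pi * var)

for n in (480, 500, 520):
    print(n, P(n), r * rho(n * r))     # the two columns nearly coincide
```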

x_0 = rG/2 and σ = (r/2)√G. We recall that almost all of the probability "mass" of the Gauss distribution is contained in the interval −3σ < x − x_0 < +3σ about the mean value x_0. Indeed, we have prob(|x − x_0| < 2σ) = 0.954, while prob(|x − x_0| < 3σ) = 0.997.
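These two numbers are just values of the Gaussian error function (the 3σ mass is 0.9973); a quick check:

```python
from math import erf, sqrt

def mass_within(k):
    # probability that a Gaussian variable falls within k standard deviations
    return erf(k / sqrt(2))

print(round(mass_within(2), 3), round(mass_within(3), 4))
```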

**Remark 4:**

By setting G = 10⁴ we get the grating (spacing, resolution) unit r = 10⁻⁴. Then, x_0 = 1/2 while σ = (1/2) · 10⁻². It is thus a localization interval [1/2 − 3σ, 1/2 + 3σ] of length 6σ = 3 · 10⁻², to be compared with L = 1. By setting G = 10⁶ we would get 6σ = 3 · 10⁻³.

#### 2.3.2 Coarse-graining

{B_k} such that ∪_k B_k ⊆ R and B_i ∩ B_j = Ø for i ≠ j. We denote by μ(B_k) ≐ μ_k the length of the k-th interval, where μ stands for the Lebesgue measure on R.

B_k equals prob(B_k) ≐ p_k = ∫_{B_k} ρ(x) dx. The average of the density ρ over B_k we denote 〈ρ〉_k = p_k/μ_k, where μ_k = ∫_{B_k} dx.

{B_k} reads:

1_k(x) is the indicator (characteristic) function of the set B_k, which equals 1 for x ∈ B_k and vanishes otherwise. Since ∫ 1_k(x) dx = μ_k, it is clear that

μ_k = r ≪ 1 for all k, and notice that 〈ρ〉_k · r = p_k ≃ ρ(x_k) · r for certain x_k ∈ B_k.

x_0 may be used in the coarse-graining procedure, instead of the full configuration space R. Effectively, we arrive at a finite partition of L with the resolution L/G = r, and then we can safely invoke the definition p_k ≐ P_k = r · ρ(x_k), in conformity with Eq. (11).

$\mathcal{S}({\rho}_{B})$ ≥ 0 and hence, in view of $\mathcal{S}(\rho )$ ≥ ln r, we need to maintain a proper balance between σ and the chosen grating level r.

**Remark 5:**

r = 10⁻⁶, hence ln r = −6 ln 10 ∼ −13.82. We have $\mathcal{S}(\rho )$ = (1/2) ln(2πeσ²) ∼ 1.42 + ln σ. By setting σ = (1/2) · 10⁻³ we realize that $\mathcal{S}(\rho )$ ∼ 1.42 − ln 2 − 3 ln 10 ∼ −6.18, hence comfortably above the allowed lower bound ln r. The approximate value of the coarse-grained entropy is here $\mathcal{S}$(ρ_B) = $\mathcal{S}(\rho )$ − ln r ∼ 7.63, and it stands for the mean information per partition bin in the "string" composed of G bins.
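A sketch checking the entropy balance $\mathcal{S}({\rho}_{B})$ = $\mathcal{S}(\rho )$ − ln r ≥ 0 for the parameters of Remark 5 (r = 10⁻⁶, σ = (1/2) · 10⁻³):

```python
from math import log, pi, e

r = 1e-6          # grating unit
sigma = 0.5e-3    # standard deviation of the Gaussian density

S = 0.5 * log(2 * pi * e * sigma**2)   # differential entropy of the Gaussian
S_B = S - log(r)                       # coarse-grained entropy

print(round(S, 3), round(S_B, 3))      # S stays above ln r, so S_B >= 0
```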

−∑_k r ρ(x_k) ln ρ(x_k) ⇒ −∫ ρ ln ρ dx with r → 0 and possibly L → ∞. A conclusion is that the differential entropy is unbounded both from below and from above. In particular, $\mathcal{S}(\rho )$ may take arbitrarily low negative values, in plain contrast to its coarse-grained version $\mathcal{S}({\rho}_{B})$, which is always nonnegative.

_{k}:

#### 2.3.3 Coarse-graining exemplified: exponential density

R⁺ is known to maximize the differential entropy among all R⁺ density functions with the first moment fixed at 1/λ > 0. The density has the form ρ(x) = λ exp(−λx) for x ≥ 0, and it vanishes for x < 0. Its variance is 1/λ². The differential entropy of the exponential density reads $\mathcal{S}(\rho )$ = 1 − ln λ.
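The closed form $\mathcal{S}(\rho )$ = 1 − ln λ is easy to confirm by direct numerical integration of −∫ ρ ln ρ dx (λ below is an arbitrary illustrative choice):

```python
from math import exp, log

lam = 2.0
dx, x = 1e-4, 0.5e-4          # midpoint Riemann sum on [0, 40/lam]
S_num = 0.0
while x < 40.0 / lam:
    rho = lam * exp(-lam * x)
    S_num -= rho * log(rho) * dx
    x += dx

print(S_num, 1 - log(lam))    # the two values agree closely
```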

p_k ≃ ρ(x_k) · r with x_k = kr, where k is a natural number. One can directly verify that for small r we can write r ≃ 1 − exp(−r), and thence consider p_k ≃ [1 − exp(−r)] exp(−kr), such that ∑_k p_k = 1, with the well-known quantum mechanical connotation.

hν/k_B T; notice that for the first time in the present paper we explicitly invoke dimensional units, in terms of which the dimensionless constant r is defined. We readily arrive at the probability of the k-th excitation of the oscillator mode ν in a thermal bath at the temperature T.

p_k, k ∈ N, reads

hν/k_B T ≪ 1, with the obvious result

**Remark 6:**

R⁺. Although a full-fledged analysis is quite complicated, one may invoke quite useful, albeit approximate, formulas for adjacent level spacing distributions. The previously mentioned exponential density corresponds to the so-called Poisson spectral series. In the family of radial densities of the indicated form, where Γ is the Euler gamma function, [79], the particular cases N = 2, 3, 5 correspond to the generic level spacing distributions based on the exploitation of the Wigner surmise. The respective histograms, plus their continuous density interpolations, are often reproduced in "quantum chaos" papers; see for example [80].

#### 2.3.4 Spatial coarse graining in quantum mechanics

p_k in the k-th bin on R) by the related histogram of uncertainties −ln p_k, cf. Section 2.1, where the uncertainty function has been introduced.

B_k ⊂ R we may denote by p_k the probability of finding the outcome of a position measurement to have a value in B_k. We are free to set the bin size arbitrarily, especially if computer-assisted procedures are employed, [78].

−∑_k p_k ln p_k with the direct quantum input p_k ≐ ∫_{B_k} |ψ(x)|² dx, where ψ ∈ L²(R) is normalized. This viewpoint is validated by current experimental techniques in the domain of matter wave interferometry, [43,44], and by the associated numerical experimentation where various histograms are generated, [78].

L²(Rⁿ) wave functions that, by means of the Fourier transformation, give rise to two interrelated densities (presently we refer to L²(R)): ρ = |ψ|² and $\tilde{\rho}={|\tilde{\psi}|}^{2}$, where

(∆A)² = (ψ, [A − 〈A〉]²ψ) with 〈A〉 = (ψ, Aψ); then for the position X and momentum P operators we have the following version of the entropic uncertainty relation (here expressed through so-called entropy powers, see e.g. [2]):

#### 2.4 Impact of dimensional units

ρ(x) = |ψ|²(x), where ψ ∈ L²(R); then ${\tilde{\rho}}_{h}(p)={|{\mathcal{F}}_{h}(\psi )|}^{2}(p)$ is defined in terms of the dimensional Fourier transform:

S^x and S^p, respectively.

**Example 1:**

(2πe)^{−1/2}, hence at the dimensional value of the standard deviation σ = (2πe)^{−1/2} δx; compare e.g. [10].

**Example 2:**

f_∗(v) = (m/2πk_B T)^{1/2} exp[−m(v − v_0)²/2k_B T]. H(t) is known to be time-independent only if f ≐ f_∗(v). We can straightforwardly, albeit formally, evaluate H_∗ = ∫ f_∗ ln f_∗ dv = −(1/2) ln(2πe k_B T/m), and we become faced with an apparent dimensional difficulty, [9], since the argument of the logarithm is not dimensionless. For sure, a consistent integration outcome for H(t) should involve a dimensionless argument k_B T/m[v]² instead of k_B T/m, provided [v] stands for any unit of velocity; examples are [v] = 1 m/s (here m stands for the SI length unit, and not for a mass parameter) or 10⁻⁵ m/s. To this end, in conformity with our previous discussion, it suffices to redefine H_∗ as follows, [9,10]:

divided by [v], we arrive at the dimensionless argument of the logarithm in the above and cure the dimensional obstacle.

**Remark 7:**

ρ_{αβ} = β ρ[β(x − α)]

(2πe)^{−1/2}β, in analogy with our previous dimensional considerations. If the argument of ρ is assumed to have dimensions, then the scaling transformation with a dimensional β may be interpreted as a method to restore a dimensionless differential entropy value.

## 3. Localization: differential entropy and Fisher information

x_0 and maximizes the inequality:

x_0 of the Gaussian density ρ.

σ², we actually have an upper bound set by Eq. (42). However, in contrast to the coarse-grained entropies, which are always nonnegative, even for a relatively large mean deviation the differential entropy $\mathcal{S}(\rho )$ may be negative.

ρ_α(x) on R whose first (mean) and second moments (effectively, the variance) are finite. The parameter dependence is here not completely arbitrary, and we assume standard regularity properties that allow one to differentiate various functions of ρ_α with respect to the parameter α under the sign of an (improper) integral, [82].

∫ x ρ_α(x) dx = f(α) and ∫ x² ρ_α dx < ∞. We demand that, as a function of x ∈ R, the modulus of the partial derivative ∂ρ_α/∂α is bounded by a function G(x) which, together with xG(x), is integrable on R. This implies, [82], the existence of ∂f/∂α and an important inequality:

ρ_α(x) is the Gauss function with mean value α.

ρ_α actually equals α, and we fix at σ² the value 〈(x − α)²〉 = 〈x²〉 − α² of the variance (the square of the standard deviation from the mean value) of the probability density ρ_α. The previous inequality, Eq. (44), now takes the familiar form:

ρ_α, known to appear in various problems of statistical estimation theory, as well as an ingredient of a number of information-theoretic inequalities, [23,24,75,82,83]. In view of $\mathcal{F}$ ≥ 1/σ², we realize that the Fisher information is a more sensitive indicator of the wave packet localization than the entropy power, Eq. (43).

ρ_α(x) ≐ ρ(x − α). Then the Fisher information is no longer dependent on the mean value α, and it can be readily transformed into the conspicuously quantum mechanical form (up to a factor D² with D = ħ/2m):

$\mathcal{F}$ ≥ 1/σ² holds for all relevant probability densities with any finite mean and with the variance fixed at σ².

|ψ|², the above expression for Q(x) notoriously appears in the hydrodynamical formalism of quantum mechanics as the so-called de Broglie–Bohm quantum potential (D = ħ/2m). It appears as well in the corresponding formalism for diffusion-type processes, including the standard Brownian motion (then D = k_B T/mβ), see e.g. [53,54,84].

ρ_α(x) = ρ(x − α), has been proved in [23]; see also [24,85]:

**Remark 8:**

**Remark 9:**

ρ_α(x) = ρ(x − α). This reflects the translational invariance of the Fisher and Shannon information measures, [86]. The scaling transformation ρ_{α,β} = β ρ[β(x − α)], where α > 0, β > 0, transforms Eq. (43) to the form (2πe)^{−1/2} exp[$\mathcal{S}({\rho}_{\alpha ,\beta})$] ≤ σ/β.

L²(Rⁿ) provenance) that ρ(x) ≐ |ψ|²(x), where the real or complex function ψ is a normalized element of L²(R), another important inequality holds true, [2,23]:

## 4. Asymptotic approach towards equilibrium: Smoluchowski processes

#### 4.1 Random walk

R¹ with probability 1/2 forth and back, each step being of unit length, [74]. If one begins from the origin 0, after G steps a particle can be found at any of the points −G, −G + 1, ..., −1, 0, 1, ..., G. The probability that after G displacements a particle can be found at the point g ∈ [−G, G] is given by the Bernoulli distribution:

r ∼ 10⁻⁶ m, to be a grating unit (i.e. the minimal step length for a walk). Let r ≪ ∆x ≪ Gr (a size ∆x ∼ 10⁻⁴ m is quite satisfactory). For large G and |g| ≪ G, we denote x ≐ g · r and ask for the probability ρ_G(x)∆x that a particle can be found in the interval [x, x + ∆x] after G displacements. The result is [74]:

r²/2. It is the fundamental solution of the heat equation ∂_t ρ = D∆ρ, which is the Fokker–Planck equation for the Wiener process.

_{0}(x).

ρ_υ denote a convolution of a probability density ρ with a Gaussian probability density having variance υ. The transition density of the Wiener process generates such a convolution for ρ_0, with υ = σ² ≐ 2Dt. Then the de Bruijn identity, [23,75], $d\mathcal{S}({\rho}_{\upsilon})/d\upsilon =(1/2)\mathcal{F}({\rho}_{\upsilon})$, directly yields the information entropy time rate for $\mathcal{S}(\rho )=\mathcal{S}(t)$:
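The de Bruijn identity can be checked directly on the heat kernel itself, where $\mathcal{S}$(υ) = (1/2) ln(2πeυ) and $\mathcal{F}$(υ) = 1/υ:

```python
from math import log, pi, e

def S(upsilon):
    # differential entropy of a Gaussian with variance upsilon
    return 0.5 * log(2 * pi * e * upsilon)

upsilon, d = 0.3, 1e-6
dS = (S(upsilon + d) - S(upsilon - d)) / (2 * d)   # numerical dS/d(upsilon)
half_fisher = 0.5 / upsilon                        # (1/2) F(rho_upsilon)
print(dS, half_fisher)
```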

#### 4.2 Kullback entropy versus differential entropy

ρ_α = ρ(x − α), with the mean α ∈ R and the standard deviation fixed at σ. These densities are not differentiated by the information (differential) entropy and share its very same value, independently of α.

ρ_α → ρ_{α,σ}(x) appears. Such densities, corresponding to different values σ and σ′, do admit an "absolute comparison" in terms of the Shannon entropy, in accordance with Eq. (19):

ρ_θ, so that the "distance" between any two densities in this family can be directly evaluated. Let ρ_{θ′} stand for the prescribed (reference) probability density. We have, [6,81,88]:

σ² at α fixed, we get (now the variance σ² is modified by its increment ∆(σ²)):

σ² = 2Dt and ∆(σ²) = 2D∆t, sets an obvious connection with the differential $(\Delta\mathcal{S})(t)$, and thence with the time derivative of the heat kernel differential entropy, Eq. (58), and the de Bruijn identity.

(θ_1, θ_2) family of densities, then instead of Eq. (62) we would have arrived at

θ_1 = α and θ_2 = σ (alternatively θ_2 = σ²), the Fisher matrix is diagonal and defined in terms of the previous entries ${\mathcal{F}}_{\alpha}$ and ${\mathcal{F}}_{\sigma}$ (or ${\mathcal{F}}_{{\sigma}^{2}}$).

σ² = 2Dt, ∆(σ²) = 2D∆t. For finite increments ∆t we then have

ρ_{θ′}(x) = ρ_∗(x), while the other evolves in time, ρ_θ(x) ≐ ρ(x, t), t ∈ R⁺; see e.g. [48,92].

#### 4.3 Entropy dynamics in the Smoluchowski process

∂_t ρ = −∇j, this in turn being equivalent to the heat equation.

∂_t v + (v · ∇)v = −∇Q.

∂_t ρ = D∆ρ − ∇(b · ρ)

ρ_0(x) = ρ(x, 0).

k_B being the Boltzmann constant.

∂_t ρ + ∇(vρ) = 0

(∂_t + v · ∇)v = ∇(Ω − Q)

(〈u²〉 = −D〈∇ · u〉)

**Remark 10:**

〈v²〉 as the entropy production rate of the (originally stationary) diffusion process with the current velocity v. We would like to point out that, traditionally, [61,70,71], the statistical mechanical notion of entropy production refers to the excess entropy that is pumped out of the system. An alternative statement tells of the entropy produced by the physical system into the thermostat. In the present discussion, an increase of the information entropy of the Smoluchowski process definitely occurs due to the thermal environment: the differential entropy is being generated (produced) in the physical system by its environment.

−(1/k_B T)∇V = ∇ln ρ_{in}.

F_{th} ≐ v/D associated with the Smoluchowski diffusion, and introduce its corresponding time-dependent potential function Ψ(x, t):

k_B T F_{th} = F − k_B T ∇ln ρ ≐ −∇Ψ.

F_{th} = −∇ln ρ = −(1/D)u.

k_B T ln ρ

#### 4.4 Kullback entropy versus Shannon entropy in the Smoluchowski process

Z = ∫ exp(−V/k_B T) dx sets the minimum of 〈Ψ〉(t) at 〈Ψ〉_∗ = Ψ_∗ = −k_B T ln Z.

ρ_∗(x) as a reference density with respect to which the divergence of ρ(x, t) is evaluated in the course of the pertinent Smoluchowski process. This divergence is well quantified by the conditional Kullback entropy. Let us notice that

#### 4.5 One-dimensional Ornstein-Uhlenbeck process

α_0 and variance σ_0². The Fokker–Planck evolution, Eq. (74), preserves the Gaussian form of ρ(x, t), while modifying the mean value α(t) = α_0 exp(−γt) and the variance according to

ρ_∗ = (γ/2πD)^{1/2} exp(−γx²/2D), we obtain, [92]:

$\mathcal{S}(t)$ = (1/2) ln[2πeσ²(t)] and $\mathcal{F}=1/{\sigma}^{2}(t)$.
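A sketch of the relaxation, assuming the standard Ornstein–Uhlenbeck variance law σ²(t) = (D/γ)(1 − e^{−2γt}) + σ_0² e^{−2γt}; the constants D, γ, σ_0² below are illustrative:

```python
from math import exp, log, pi, e

D, gamma, sigma0_sq = 1.0, 0.5, 0.1   # illustrative constants

def var(t):
    # Ornstein-Uhlenbeck variance interpolating sigma_0^2 -> D/gamma
    return (D / gamma) * (1 - exp(-2 * gamma * t)) + sigma0_sq * exp(-2 * gamma * t)

def S(t):
    # differential entropy of the Gaussian density at time t
    return 0.5 * log(2 * pi * e * var(t))

S_eq = 0.5 * log(2 * pi * e * D / gamma)   # entropy of the invariant density
print(S(0.0), S(10.0), S_eq)               # S(t) approaches S_eq
print(1 / var(10.0), gamma / D)            # Fisher info approaches gamma/D
```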

α_0 = 0, we can easily check that the entropy time rate stays negative, i.e. we have the power drainage from the environment, for all t ∈ R⁺. More generally, the sign remains negative as long as the initial variance stays below a threshold set by D and γ. If the latter inequality is reversed, the sign is not uniquely specified and suffers a change at a suitable time instant t_{change}.

#### 4.6 Mean energy and the dynamics of Fisher information

〈∂_t s〉 = 0

which is bounded from below by 1/σ²; see e.g. Section 3.

∂_t(ρv²) = −∇ · (ρv³) − 2ρv · ∇(Q − Ω).

ρv³ vanishes at the integration volume boundaries. This implies the following expression for the time derivative of 〈v²〉:

D²ρ ∆ln ρ, the previous equation takes the form $\stackrel{.}{\mathcal{F}}$ = −∫ ρv ∇Q dx = −∫ v ∇P dx, which is an analog of the familiar expression for the power release (dE/dt = F · v, with F = −∇V) in classical mechanics; this is to be compared with our previous discussion of the "heat dissipation" term, Eq. (82).

**Remark 11:**

〈v²〉(t) = t(γ²/D) exp(−2γt), hence an asymptotic value 0, while 〈u²〉(t) = (D/2)$\mathcal{F}$(t) → γ/D. Accordingly, we have 〈Ω〉(t) → −γ/2D.

## 5. Differential entropy dynamics in quantum theory

#### 5.1 Balance equations

ψ = ρ^{1/2} exp(is), with the phase function s = s(x, t) defining v = ∇s, is known to imply two coupled equations: the standard continuity equation ∂_t ρ = −∇(vρ) and the Hamilton–Jacobi-type equation

|ψ|² = ρ, whose differential entropy $\mathcal{S}$ may quite nontrivially evolve in time.

(while assuming the validity of mathematical restrictions upon the behavior of the integrands), we encounter the information entropy balance equations in their general form, disclosed in Eqs. (81)–(83). The related differential entropy "production" rate reads:

**Remark 12:**

〈∂_t s〉 = $\mathcal{E}$. In view of v = ∇s and the assumed vanishing of sρv at the integration volume boundaries, we get:

s_0 − $\mathcal{E}$ · t. We recall that the corresponding derivation of Eq. (89) has been carried out for v = −(1/mβ)∇Ψ. Hence, as close a link with the present discussion as possible is obtained if we redefine s into s_Ψ ≐ s. Then we have

s_Ψ. Indeed, there holds ∂_t s_Ψ = Ω − Q, in close affinity with Eq. (98) in the same regime.

#### 5.2 Differential entropy dynamics exemplified

#### 5.2.1 Free evolution

ψ(x, 0) = (πα²)^{−1/4} exp(−x²/2α²).

〈X²〉 ≐ ∫ x²ρ dx = (α⁴ + 4D²t²)/2α². Its time rate equals:

2D²/α²: the differential entropy production remains untamed for all times.

〈u²〉 = (2D²α²)/(α⁴ + 4D²t²), there holds

α²/2D, it turns out to be positive for larger times. Formally speaking, after a short entropy "dissipation" period we pass to the entropy "absorption" regime which, in view of its D/t asymptotic, for large times definitely dominates D(·)_{in} ∼ 2D²/α².

〈v²〉 from an initial value 0 towards its asymptotic value D²/α² = $\mathcal{E}$. Note that the negative feedback is here displayed by the behavior of 〈u²〉, which drops down from the initial value 2D²/α² towards 0. It is also instructive to notice that in the present case $\mathcal{F}$(t) = D²/〈X²〉(t).
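For the freely evolving packet the two localization measures are exactly reciprocal, 〈u²〉(t) · 〈X²〉(t) = D² at all times, which is just the relation $\mathcal{F}$(t) = D²/〈X²〉(t). A quick numerical confirmation (the constants D, α are illustrative):

```python
D, alpha = 0.5, 1.3   # illustrative constants

def X2(t):
    # <X^2>(t) = (alpha^4 + 4 D^2 t^2) / (2 alpha^2)
    return (alpha**4 + 4 * D**2 * t**2) / (2 * alpha**2)

def u2(t):
    # <u^2>(t) = 2 D^2 alpha^2 / (alpha^4 + 4 D^2 t^2)
    return 2 * D**2 * alpha**2 / (alpha**4 + 4 * D**2 * t**2)

for t in (0.0, 1.0, 5.0):
    print(t, X2(t) * u2(t))   # equals D^2 for every t
```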

#### 5.2.2 Steady state

q(t) = q_0 cos(ωt) + (p_0/mω) sin(ωt) and p(t) = p_0 cos(ωt) − mωq_0 sin(ωt).

ω²x² and consider:

D(·)_{in} = p²(t)/4m², which is balanced by an oscillating "dissipative" counter-term.

〈∂_t s〉 easily follow.

#### 5.2.3 Squeezed state

i∂_t ψ = −(1/2)∆ψ + (x²/2)ψ, with the initial data ψ(x, 0) = (γ²π)^{−1/4} exp(−x²/2γ²) and γ ∈ (0, ∞), is defined in terms of the probability density:

$\mathcal{S}(t)$ = (1/2) ln[2πeσ²(t)] displays a periodic behavior in time, whose level of complexity depends on the particular value of the squeezing parameter γ. The previously mentioned negative feedback is here manifested through (counter)oscillations of the localization, in conformity with the dynamics of σ²(t) and the corresponding oscillating dynamics of the Fisher measure $\mathcal{F}$ = 1/σ²(t).

#### 5.2.4 Stationary states

ρ_∗(x), and v(x) = 0 identically (we stay in one spatial dimension).

x²/2, define u = ∇ln ρ^{1/2} and demand that Q = u²/2 + (1/2)∇ · u.

H_n(x) stands for the n-th Hermite polynomial: H_0 = 1, H_1 = 2x, H_2 = 2(2x² − 1), H_3 = 4x(2x² − 3), and so on.

b_0 = −x → Q = x²/2 − 1/2; next, b_1 = (1/x) − x → Q = x²/2 − 3/2, and b_2 = [4x/(2x² − 1)] − x → Q = x²/2 − 5/2, plus b_3 = [(1/x) + 4x/(2x² − 3)] − x → Q = x²/2 − 7/2; this is to be continued for n > 3. Therefore, Eq. (135) is here a trivial identity.
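The listed drifts are just b_n = H_n′/H_n − x (i.e. ∇ln ρ_n^{1/2} for ρ_n ∝ H_n² e^{−x²}), and each yields Q = x²/2 − (2n + 1)/2. A numerical sketch (finite-difference derivatives; the sample point 1.5 is chosen away from the Hermite zeros):

```python
def hermite(n, x):
    # physicists' Hermite polynomials via H_{k+1} = 2x H_k - 2k H_{k-1}
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def b(n, x, h=1e-6):
    # drift b_n = H_n'/H_n - x
    dH = (hermite(n, x + h) - hermite(n, x - h)) / (2 * h)
    return dH / hermite(n, x) - x

def Q(n, x, h=1e-4):
    # Q = b^2/2 + b'/2; should equal x^2/2 - (2n + 1)/2
    db = (b(n, x + h) - b(n, x - h)) / (2 * h)
    return b(n, x)**2 / 2 + db / 2

for n in range(4):
    print(n, Q(n, 1.5), 1.5**2 / 2 - (2 * n + 1) / 2)
```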

## 6. Outlook

L²(R). In the latter case, an approach towards equilibrium has not been expected to occur at all.

## Acknowledgments

## Dedication

## References

- Alicki, R.; Fannes, M. Quantum Dynamical Systems; Oxford University Press: Oxford, 2001. [Google Scholar]
- Ohya, M.; Petz, D. Quantum Entropy and Its use; Springer-Verlag: Berlin, 1993. [Google Scholar]
- Wehrl, A. General properties of entropy. Rev. Mod. Phys.
**1978**, 50, 221–260. [Google Scholar] [CrossRef] - Shannon, C.E. A mathematical theory of communication. Bell Syst. Techn. J.
**1948**, 27, 379–423, 623–656. [Google Scholar] [CrossRef] - Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: NY, 1991. [Google Scholar]
- Sobczyk, K. Information Dynamics: Premises, Challenges and Results. Mechanical Systems and Signal Processing
**2001**, 15, 475–498. [Google Scholar] [CrossRef] - Yaglom, A.M.; Yaglom, I.M. Probability and Information; D. Reidel: Dordrecht, 1983. [Google Scholar]
- Hartley, R.V.L. Transmission of information. Bell Syst. Techn. J.
**1928**, 7, 535–563. [Google Scholar] [CrossRef] - Brillouin, L. Science and Information Theory; Academic Press: NY, 1962. [Google Scholar]
- Ingarden, R.S.; Kossakowski, A.; Ohya, M. Information Dynamics and Open Systems; Kluwer: Dordrecht, 1997. [Google Scholar]
- Brukner, Ĉ.; Zeilinger, A. Conceptual inadequacy of the Shannon information in quantum measurements. Phys. Rev.
**2002**, A 63, 022113. [Google Scholar] [CrossRef] - Mana, P.G.L. Consistency of the Shannon entropy in quantum experiments. Phys. Rev.
**2004**, A 69, 062108. [Google Scholar] [CrossRef] - Jaynes, E.T. Information theory and statistical mechanics.II. Phys. Rev.
**1957**, 108, 171–190. [Google Scholar] [CrossRef] - Stotland, A.; et al. The information entropy of quantum mechanical states. Europhys. Lett.
**2004**, 67, 700–706. [Google Scholar] [CrossRef] - Partovi, M.H. Entropic formulation of uncertainty for quantum measurements. Phys. Rev. Lett.
**1983**, 50, 1883–1885. [Google Scholar] [CrossRef] - Adami, C. Physics of information. 2004; arXiv:quant-ph/040505. [Google Scholar]
- Deutsch, D. Uncertainty in quantum measurement. Phys. Rev. Lett.
**1983**, 50, 631–633. [Google Scholar] [CrossRef] - Garbaczewski, P.; Karwowski, W. Impenetrable barrriers and canonical quantization. Am. J. Phys.
**2004**, 72, 924–933. [Google Scholar] [CrossRef] - Hirschman, I.I. A note on entropy. Am. J. Math.
**1957**, 79, 152–156. [Google Scholar] [CrossRef] - Beckner, W. Inequalities in Fourier analysis. Ann. Math.
**1975**, 102, 159–182. [Google Scholar] [CrossRef] - Bia-lynicki-Birula, I.; Mycielski, J. Uncertainty Relations for Information Entropy in Wave Mechanics. Commun. Math. Phys.
**1975**, 44, 129–132. [Google Scholar] [CrossRef] - Bia-lynicki-Birula, I.; Madajczyk, J. Entropic uncertainty relations for angular distributions. Phys. Lett.
**1985**, A 108, 384–386. [Google Scholar] [CrossRef] - Stam, A.J. Some inequalities satisfied by the quantities of information of Fisher and Shannon. Inf. and Control
**1959**, 2, 101–112. [Google Scholar] [CrossRef] - Dembo, A.; Cover, T. Information theoretic inequalities. IEEE Trans. Inf. Th.
**1991**, 37, 1501–1518. [Google Scholar] [CrossRef] - Maasen, H.; Uffink, J.B.M. Generalized Entropic Uncertainty Relations. Phys. Rev. Lett.
**1988**, 60, 1103–1106. [Google Scholar] [CrossRef] [PubMed] - Blankenbecler, R.; Partovi, M.H. Uncertainty, entropy, and the statistical mechanics of microscopic systems. Phys. Rev. Lett.
**1985**, 54, 373–376. [Google Scholar] [CrossRef] [PubMed] - Sa´nchez-Ruiz, J. Asymptotic formula for the quantum entropy of position in energy eigenstates. Phys. Lett.
**1997**, A 226, 7–13. [Google Scholar] - Halliwell, J.J. Quantum-mechanical histories and the uncertainty principle: Information-theoretic inequalities. Phys. Rev.
**1993**, D 48, 2739–2752. [Google Scholar] [CrossRef] - Gadre, S.R.; et al. Some novel characteristics of atomic information entropies. Phys. Rev.
**1985**, A 32, 2602–2606. [Google Scholar] [CrossRef] - Yan˜ez, R.J.; Van Assche, W.; Dehesa, J.S. Position and information entropies of the D-dimensional harmonic oscillator and hydrogen atom. Phys. Rev.
**1994**, A 50, 3065–3079. [Google Scholar] - Yan˜ez, R.J.; et al. Entropic integrals of hyperspherical harmonics and spatial entropy of D-dimensional central potentials. J. Math. Phys.
**1999**, 40, 5675–5686. [Google Scholar] - Buyarov, V.; et al. Computation of the entropy of polynomials orthogonal on an interval. SIAM J. Sci. Comp. to appear (2004), also math.NA/0310238. [CrossRef]
- Majernik, V.; Opatrný, T. Entropic uncertainty relations for a quantum oscillator. J. Phys. A: Math. Gen. **1996**, 29, 2187–2197.
- Majernik, V.; Richterek, L. Entropic uncertainty relations for the infinite well. J. Phys. A: Math. Gen. **1997**, 30, L49–L54.
- Massen, S.E.; Panos, C.P. Universal property of the information entropy in atoms, nuclei and atomic clusters. Phys. Lett. **1998**, A 246, 530–532.
- Massen, S.E.; et al. Universal property of information entropy in fermionic and bosonic systems. Phys. Lett. **2002**, A 299, 131–135.
- Massen, S.E. Application of information entropy to nuclei. Phys. Rev. **2003**, C 67, 014314.
- Coffey, M.W. Asymptotic relation for the quantum entropy of momentum in energy eigenstates. Phys. Lett. **2004**, A 324, 446–449.
- Coffey, M.W. Semiclassical position entropy for hydrogen-like atoms. J. Phys. A: Math. Gen. **2003**, 36, 7441–7448.
- Dunkel, J.; Trigger, S.A. Time-dependent entropy of simple quantum model systems. Phys. Rev. **2005**, A 71, 052102.
- Santhanam, M.S. Entropic uncertainty relations for the ground state of a coupled system. Phys. Rev. **2004**, A 69, 042301.
- Balian, R. Random matrices and information theory. Nuovo Cim. **1968**, B 57, 183–193.
- Werner, S.A.; Rauch, H. Neutron Interferometry: Lessons in Experimental Quantum Physics; Oxford University Press: Oxford, 2000.
- Zeilinger, A.; et al. Single- and double-slit diffraction of neutrons. Rev. Mod. Phys. **1988**, 60, 1067–1073.
- Caves, C.M.; Fuchs, C. Quantum information: how much information in a state vector? Ann. Israel Phys. Soc. **1996**, 12, 226–237.
- Newton, R.G. What is a state in quantum mechanics? Am. J. Phys. **2004**, 72, 348–350.
- Mackey, M.C. The dynamic origin of increasing entropy. Rev. Mod. Phys. **1989**, 61, 981–1015.
- Lasota, A.; Mackey, M.C. Chaos, Fractals and Noise; Springer-Verlag: Berlin, 1994.
- Berndl, K.; et al. On the global existence of Bohmian mechanics. Commun. Math. Phys. **1995**, 173, 647–673.
- Nelson, E. Dynamical Theories of the Brownian Motion; Princeton University Press: Princeton, 1967.
- Carlen, E. Conservative diffusions. Commun. Math. Phys. **1984**, 94, 293–315.
- Eberle, A. Uniqueness and Non-uniqueness of Semigroups Generated by Singular Diffusion Operators; LNM vol. 1718; Springer-Verlag: Berlin, 2000.
- Garbaczewski, P. Perturbations of noise: Origins of isothermal flows. Phys. Rev. E **1999**, 59, 1498–1511.
- Garbaczewski, P.; Olkiewicz, R. Feynman-Kac kernels in Markovian representations of the Schrödinger interpolating dynamics. J. Math. Phys. **1996**, 37, 732–751.
- Ambegaokar, V.; Clerk, A. Entropy and time. Am. J. Phys. **1999**, 67, 1068–1073.
- Trębicki, J.; Sobczyk, K. Maximum entropy principle and non-stationary distributions of stochastic systems. Probab. Eng. Mechanics **1996**, 11, 169–178.
- Huang, K. Statistical Mechanics; Wiley: New York, 1987.
- Cercignani, C. Theory and Application of the Boltzmann Equation; Scottish Academic Press: Edinburgh, 1975.
- Daems, D.; Nicolis, G. Entropy production and phase space volume contraction. Phys. Rev. E **1999**, 59, 4000–4006.
- Dorfman, J.R. An Introduction to Chaos in Nonequilibrium Statistical Physics; Cambridge Univ. Press: Cambridge, 1999.
- Gaspard, P. Chaos, Scattering and Statistical Mechanics; Cambridge Univ. Press: Cambridge, 1998.
- Deco, G.; et al. Determining the information flow of dynamical systems from continuous probability distributions. Phys. Rev. Lett. **1997**, 78, 2345–2348.
- Bologna, M.; et al. Trajectory versus probability density entropy. Phys. Rev. E **2001**, 64, 016223.
- Bag, B.C.; et al. Noise properties of stochastic processes and entropy production. Phys. Rev. **2001**, E 64, 026110.
- Bag, B.C. Upper bound for the time derivative of entropy for nonequilibrium stochastic processes. Phys. Rev. **2002**, E 65, 046118.
- Hatano, T.; Sasa, S. Steady-State Thermodynamics of Langevin Systems. Phys. Rev. Lett. **2001**, 86, 3463–3466.
- Qian, H. Mesoscopic nonequilibrium thermodynamics of single macromolecules and dynamic entropy-energy compensation. Phys. Rev. **2001**, E 65, 016102.
- Jiang, D.-Q.; Qian, M.; Qian, M.-P. Mathematical Theory of Nonequilibrium Steady States; LNM vol. 1833; Springer-Verlag: Berlin, 2004.
- Qian, H.; Qian, M.; Tang, X. Thermodynamics of the general diffusion process: time-reversibility and entropy production. J. Stat. Phys. **2002**, 107, 1129–1141.
- Ruelle, D. Positivity of entropy production in nonequilibrium statistical mechanics. J. Stat. Phys. **1996**, 85, 1–23.
- Munakata, T.; Igarashi, A.; Shiotani, T. Entropy and entropy production in simple stochastic models. Phys. Rev. **1998**, E 57, 1403–1409.
- Tribus, M.; Rossi, R. On the Kullback information measure as a basis for information theory: Comments on a proposal by Hobson and Chang. J. Stat. Phys. **1973**, 9, 331–338.
- Smith, J.D.H. Some observations on the concepts of information-theoretic entropy and randomness. Entropy **2001**, 3, 1–11.
- Chandrasekhar, S. Stochastic problems in physics and astronomy. Rev. Mod. Phys. **1943**, 15, 1–89.
- Hall, M.J.W. Universal geometric approach to uncertainty, entropy and information. Phys. Rev. **1999**, A 59, 2602–2615.
- Pipek, J.; Varga, I. Universal classification scheme for the spatial-localization properties of one-particle states in finite d-dimensional systems. Phys. Rev. A **1992**, 46, 3148–3163.
- Varga, I.; Pipek, J. Rényi entropies characterizing the shape and the extension of the phase-space representation of quantum wave functions in disordered systems. Phys. Rev. **2003**, E 68, 026202.
- McClendon, M.; Rabitz, H. Numerical simulations in stochastic mechanics. Phys. Rev. **1988**, A 37, 3479–3492.
- Garbaczewski, P. Signatures of randomness in quantum spectra. Acta Phys. Pol. **2002**, B 33, 1001–1024.
- Hu, B.; et al. Quantum chaos of a kicked particle in an infinite potential well. Phys. Rev. Lett. **1999**, 82, 4224–4227.
- Kullback, S. Information Theory and Statistics; Wiley: New York, 1959.
- Cramér, H. Mathematical Methods of Statistics; Princeton University Press: Princeton, 1946.
- Hall, M.J.W. Exact uncertainty relations. Phys. Rev. **2001**, A 64, 052103.
- Garbaczewski, P. Stochastic models of exotic transport. Physica **2000**, A 285, 187–198.
- Carlen, E.A. Superadditivity of Fisher’s information and logarithmic Sobolev inequalities. J. Funct. Anal. **1991**, 101, 194–211.
- Frieden, B.R.; Sofer, B.H. Lagrangians of physics and the game of Fisher-information transfer. Phys. Rev. **1995**, E 52, 2274–2286.
- Catalan, R.G.; Garay, J.; Lopez-Ruiz, R. Features of the extension of a statistical measure of complexity to continuous systems. Phys. Rev. **2002**, E 66, 011102.
- Risken, H. The Fokker-Planck Equation; Springer-Verlag: Berlin, 1989.
- Hasegawa, H. Thermodynamic properties of non-equilibrium states subject to Fokker-Planck equations. Progr. Theor. Phys. **1977**, 57, 1523–1537.
- Vilar, J.M.G.; Rubi, J.M. Thermodynamics “beyond” local equilibrium. Proc. Nat. Acad. Sci. (NY) **2001**, 98, 11081–11084.
- Kurchan, J. Fluctuation theorem for stochastic dynamics. J. Phys. A: Math. Gen. **1998**, 31, 3719–3729.
- Mackey, M.C.; Tyran-Kamińska, M. Effects of noise on entropy evolution. 2005; arXiv.org preprint cond-mat/0501092.
- Mackey, M.C.; Tyran-Kamińska, M. Temporal behavior of the conditional and Gibbs entropies. 2005; arXiv.org preprint cond-mat/0509649.
- Czopnik, R.; Garbaczewski, P. Frictionless Random Dynamics: Hydrodynamical Formalism. Physica **2003**, A 317, 449–471.
- Fortet, R. Résolution d’un système d’équations de M. Schrödinger. J. Math. Pures Appl. **1940**, 9, 83.
- Blanchard, Ph.; Garbaczewski, P. Non-negative Feynman-Kac kernels in Schrödinger’s interpolation problem. J. Math. Phys. **1997**, 38, 1–15.
- Jaynes, E.T. Violations of Boltzmann’s H Theorem in Real Gases. Phys. Rev. **1971**, A 4, 747–750.
- Voigt, J. Stochastic operators, Information and Entropy. Commun. Math. Phys. **1981**, 81, 31–38.
- Voigt, J. The H-Theorem for Boltzmann type equations. J. Reine Angew. Math. **1981**, 326, 198–213.
- Toscani, G. Kinetic approach to the asymptotic behaviour of the solution to diffusion equation. Rend. di Matematica **1996**, Serie VII 16, 329–346.
- Bobylev, A.V.; Toscani, G. On the generalization of the Boltzmann H-theorem for a spatially homogeneous Maxwell gas. J. Math. Phys. **1992**, 33, 2578–2586.
- Arnold, A.; et al. On convex Sobolev inequalities and the rate of convergence to equilibrium for Fokker-Planck type equations. Comm. Partial Diff. Equations **2001**, 26, 43–100.

© 2005 by MDPI (http://www.mdpi.org). Reproduction for noncommercial purposes permitted.