Article

The Second Entropy: A Variational Principle for Time-dependent Systems

School of Chemistry F11, University of Sydney, NSW 2006 Australia
Submission received: 3 May 2008 / Accepted: 4 September 2008 / Published: 25 September 2008

Abstract

The fundamental optimization principle for non-equilibrium thermodynamics is given. The second entropy is introduced as the quantity that is maximised to determine the optimum state of a non-equilibrium system. In contrast, the principles of maximum or minimum dissipation, which have previously been proposed by Onsager, Prigogine, and others as the variational principle for such systems, are shown to be incapable of fulfilling that rôle.

1. Introduction

The second entropy is a new type of entropy that has been introduced by the author for nonequilibrium systems. The concept has emerged from a theory for non-equilibrium thermodynamics and statistical mechanics that has been developed over the last few years. For reviews see Refs [1, 2]. Amongst other things, the theory has led for the first time to the generalization of the Boltzmann distribution to non-equilibrium systems, and to the development of a Monte Carlo computer algorithm for such time-dependent systems [3]. In this paper the theory is briefly reviewed. New results are presented for the distinction between ‘non-equilibrium’ and ‘not in equilibrium’ systems (Sec. 2), for the variational principle for non-equilibrium systems (Sec. 3), and for the physical interpretation of the fluctuation formulation of the second law (Sec. 4). The discussion in Sec. 3 of several other non-equilibrium variational principles that have previously been advocated, including the Principles of Minimum and of Maximum Dissipation, is particularly detailed.

2. First Entropy

Entropy was originally introduced by Clausius in the context of the second law of thermodynamics, which may be stated as
The entropy increases during spontaneous changes in the structure of the total system. (1)
Later Boltzmann identified entropy as the logarithm of the number of molecular configurations in a structural macrostate. In formal mathematical terms this is
$$ S^{(1)}(x|E) = k_B \ln \int_E d\Gamma\, \delta\!\left(x - \hat{x}(\Gamma)\right), \qquad (2) $$
where x is the value of the macrostate, kB is Boltzmann’s constant, Γ is a point in phase space of the system, all of which are equally weighted, and the integral is restricted to the energy surface E.
The second law of thermodynamics provides a variational principle for determining the equilibrium state. That is, the equilibrium state is the state of maximum entropy. Hence one imagines constraining the system to have a particular structure, (structure may mean the arrangement of molecules, or the spatial distribution of energy, density, etc.), and calculating the entropy. The structure with the largest entropy is the equilibrium one. If at any instant the system is not in equilibrium, (i.e. it has a structure with a smaller entropy), then its structure will evolve toward the equilibrium one.
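The counting argument above can be made concrete with a toy model. The following sketch is purely illustrative (a hypothetical two-compartment system, with k_B = 1): the constrained "structure" is the number of particles in the left half of a box, the first entropy of each structure is the logarithm of the number of configurations, and the maximum-entropy structure is the uniform one.

```python
from math import comb, log

# Toy system: N distinguishable, non-interacting particles in a box of two
# equal halves.  The constrained "structure" is the number n in the left
# half; its first entropy is the log of the number of configurations (k_B = 1).
N = 20

def first_entropy(n):
    return log(comb(N, n))

S = {n: first_entropy(n) for n in range(N + 1)}
n_eq = max(S, key=S.get)
print(n_eq)  # -> 10: the uniform structure n = N/2 maximizes the entropy
```

Any constrained structure with n ≠ N/2 has a smaller entropy, and a spontaneous evolution toward n = N/2 is overwhelmingly more probable, which is the content of the second law as stated above.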
In thermodynamics it is quite common to invoke minimisation of a free energy or thermodynamic potential. These can be measured directly, and many find their physical interpretation more intuitive than that of entropy. However, this is not a separate variational principle to that discussed above since it can be shown that the thermodynamic potential is just minus the temperature times the entropy of the total system (sub-system plus reservoir) [4, 5]. Hence minimisation of a constrained thermodynamic potential is mathematically and conceptually identical to maximisation of the entropy.
Variational principles have a long history in science, and they offer many conceptual, mathematical, and computational advantages over other approaches. It hardly needs saying that the principle of maximum entropy embodied in the second law has played a dominant rôle in equilibrium thermodynamics.
For time-dependent or non-equilibrium systems, there have been many attempts to formulate a similar variational principle. In general these have been based upon the second law of thermodynamics. In the opinion of the present author, such approaches are seriously flawed because they fundamentally misinterpret the second law.
To see this one needs to distinguish between a not-in-equilibrium system and a non-equilibrium system. A not-in-equilibrium system is a system constrained to have a non-optimum structure. A non-equilibrium system is a system that changes macroscopically with time. The second law of thermodynamics deals solely with not-in-equilibrium systems. It gives a principle for determining whether or not a system is in equilibrium, the distance from equilibrium, and the direction toward equilibrium. It does not give the speed of motion or the rate of change, or in fact anything of a quantitative nature concerning time.
The fact that the second law of equilibrium thermodynamics, (as henceforth it will be called), is not concerned with time means that the first entropy, (this is the ordinary entropy; the second entropy will be introduced below), also has nothing directly to do with time. It is for this reason that all attempts to base a theory for non-equilibrium thermodynamics solely upon the second law of equilibrium thermodynamics and the first entropy are doomed to failure simply because non-equilibrium systems change with time, and a quantitative theory for them must be able to give their speed or rate of change.

3. Principles of Extreme Dissipation

The distinction made here between a not-in-equilibrium system and a non-equilibrium system has not previously been emphasized, and consequently attempts to use the first entropy for non-equilibrium systems are quite wide-spread. Because time is missing in the equilibrium second law, the only practical way of introducing it into the non-equilibrium theory is via the derivative of the first entropy. Unfortunately, the second law of equilibrium thermodynamics has nothing to say about the rate of change of entropy, which is the fundamental problem with previous approaches, as is now shown.
As above, let x be the structure or state of the system, and let S(1)(x) be the corresponding first entropy. The thermodynamic force is defined as X(x) = ∂S(1)(x)/∂x. In terms of these the general expression for the rate of change of the first or ordinary entropy is
$$ \dot{S}^{(1)}(\dot{x}, X) = \dot{x}\, X. \qquad (3) $$
This is just the formal mathematical definition of the time derivative. It is essential to note that the flux $\dot{x}$ and the force X are independent variables in this expression. That is to say, one can imagine that the non-equilibrium system is constrained to have a particular flux that is not the optimum flux for the given force, (exactly as one can have a constrained non-optimum structure in the equilibrium second law). This expression for the rate of entropy production remains valid for such a constrained system.
The optimum state for a non-equilibrium system is the steady state. This is true for the case of a fixed thermodynamic gradient applied by a reservoir to a sub-system, which case is the focus of the present analysis. In such a steady state, the flux is linearly proportional to the force,
$$ \overline{\dot{x}}(X) = \lambda X. \qquad (4) $$
For example, in Fourier’s law the left hand side would be the energy flux, which is the rate of change of the first energy moment, the force would be the imposed temperature gradient, and λ would be proportional to the thermal conductivity. In general the constant λ is called a linear transport coefficient. The over-line is used consistently here to denote the optimum state. Inserting this into Eq. (3) gives the rate of entropy production in the steady state,
$$ \overline{\dot{S}}^{(1)}(X) = \lambda X^2 \equiv 2\Psi(X). \qquad (5) $$
Alternatively this may be written
$$ \overline{\dot{S}}^{(1)}(\dot{x}) = \lambda^{-1} \dot{x}^2 \equiv 2\Phi(\dot{x}). \qquad (6) $$
These two functions have appeared many times in the literature [6,7,8,9,10]. Onsager denoted the first by 2Ψ(X) and the second by $2\Phi(\dot{x})$ [8].
It must be emphasized that these last two equalities only hold in the steady state. For a system with an arbitrary constrained flux, $\dot{x} \neq \overline{\dot{x}}(X)$, or for a system with an arbitrary constrained force, $X \neq \overline{X}(\dot{x})$, neither equation gives correctly the rate of entropy production. In contrast Eq. (3) gives the correct rate of entropy production in an arbitrary constrained state. That equation is valid at all points in the $(\dot{x}, X)$ plane, whereas Eqs (5) and (6) only hold on the one-dimensional curve corresponding to the steady state in that plane.
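The restriction can be verified numerically. In this sketch (hypothetical scalar values of the transport coefficient λ and force X are assumed), the bilinear dissipation of Eq. (3) coincides with 2Ψ(X) on the steady-state curve, but differs from it for a constrained flux:

```python
lam = 2.0                                   # transport coefficient (assumed value)
X = 1.5                                     # thermodynamic force (assumed value)

S1dot = lambda xdot, force: xdot * force    # Eq. (3): valid for any constrained state
two_Psi = lambda force: lam * force ** 2    # 2Psi(X): valid only in the steady state

xdot_bar = lam * X                          # optimum (steady-state) flux, Eq. (4)
print(S1dot(xdot_bar, X) == two_Psi(X))     # -> True: agreement on the steady-state curve

xdot_constrained = 0.5 * xdot_bar           # an arbitrarily constrained flux
print(S1dot(xdot_constrained, X) == two_Psi(X))  # -> False: 2Psi is not the dissipation here
```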
Those authors who have attempted to formulate a variational theory for non-equilibrium thermodynamics proceeding from the second law of equilibrium thermodynamics have always invoked as the fundamental variational object the rate of first entropy production, which is called the dissipation. Before treating them in detail, two serious objections may be noted: First, and as already stated, since the second law of equilibrium thermodynamics says nothing quantitative about time nor about the rate of change of first entropy, then it is not possible to make a theory for non-equilibrium systems based solely upon that law. Second, even if there were an extremum principle based upon the dissipation, one could not use Eqs (5) or (6) as the variational object because they do not give the dissipation in the constrained, non-optimum state. These functions have no physical meaning away from the steady state, and in consequence any law that invokes Ψ or Φ in a constrained non-optimum state must also be unphysical.
To make the first point more concrete, one need only look at the formally exact function $\dot{S}^{(1)}(\dot{x}, X)$. Since this is a bilinear form, extremization with respect to the flux yields $\overline{\dot{x}}(X) = \pm\infty$, and extremization with respect to the force yields $\overline{X}(\dot{x}) = \pm\infty$, both of which are obviously unphysical. If, as almost all authors do, one avoids the problem by combining the bilinear form with the quadratic forms Eqs (5) or (6), then one faces the second problem that these quadratic forms have no physical meaning in the constrained state. These examples simply demonstrate the fundamental point that one cannot construct a variational principle for non-equilibrium systems based solely upon the rate of production of first entropy.
Onsager gave two variational principles [6,7,8]. The first he names ‘the Principle of the Least Dissipation of Energy’, and he describes it as an extension of Rayleigh’s Principle of the same name. He also states that ‘the rate of increase of entropy plays the role of a potential’ [6]. His first expression is
$$ O_1(\dot{x}|X) \equiv \dot{S}^{(1)}(\dot{x}, X) - \Phi(\dot{x}). \qquad (7) $$
Maximising this with respect to the flux gives the linear transport law, Eq. (4), which is indeed the optimum, steady-state value of the flux. However, as just noted, this variational function has no physical meaning in the constrained state. Obviously this is but one of an infinite number of variational functions that one could construct from $\dot{S}^{(1)}(\dot{x}, X)$, $\Phi(\dot{x})$, and $\Psi(X)$ that have an extremum in the steady state, but which have no physical meaning in constrained states.
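That the maximum of O₁ lies at the linear transport law can be confirmed by a brute-force scan over constrained fluxes (scalar values of λ and X are assumed for illustration):

```python
import numpy as np

lam, X = 2.0, 1.5                          # assumed transport coefficient and force
xdot = np.linspace(-10.0, 10.0, 100001)    # candidate constrained fluxes
O1 = xdot * X - xdot ** 2 / (2.0 * lam)    # O1 = S1dot(xdot, X) - Phi(xdot)
xdot_opt = xdot[np.argmax(O1)]
print(xdot_opt)  # -> 3.0, the steady-state flux lam * X
```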
The significance of this last point is that the first entropy that appears in the second law of equilibrium thermodynamics not only gives a variational principle for the equilibrium state, but it is also physically meaningful in constrained states. For example, the thermodynamic force that is its derivative is a real force that quantitatively drives the system towards the equilibrium state. (Quantitatively here means that the flux is proportional to the force with a known, thermodynamic coefficient.) Onsager’s functional has no such physical meaning in the constrained state, and its derivative is not related to the real force that drives the system to the steady state.
Onsager’s second variational principle is written in terms of a functional of a trajectory over an interval τ = t2 − t1 that passes through x1 and x2 [8],
$$ O_2(x_2|x_1, \tau) = \frac{1}{2} \left\{ \int_{t_1}^{t_2} dt \left[ \Phi(\dot{x}(t)) + \Psi(X(x(t))) - \dot{S}^{(1)}(\dot{x}(t), X(x(t))) \right] \right\} = \text{min}. \qquad (8) $$
This variational functional is rather common in the field of stochastic differential equations and there are a number of thermodynamic Lagrangians that are based upon it [11,12,13,14,15,16,17,18]. In particular, the integrand is the negative of the variational principle used by Gyarmati [19, 20], and it is equal to the thermodynamic Lagrangian given by Lavenda, Eq. (1.17), [15]. As for the first functional, this is extremized when the flux or force are related by the linear transport law, Eq. (4), and so it does indeed give the steady state. But the same criticism may be made of it as of the first functional: it has no physical meaning in the constrained state, and it is but one of an infinite family of variational functionals that could be constructed to yield the steady state, but which are physically meaningless more generally.
Two further principles may be found in the literature: The Principle of Least Dissipation, and the Principle of Maximum Dissipation. Prigogine [9], and de Groot and Mazur [10] amongst others have advocated the former. The second Principle is diametrically opposed to the first and it is the most common non-equilibrium Principle to be found on the internet and in the natural sciences [21,22,23,24,25,26]. Both Principles are concerned with the rate of production of first entropy, and both invoke the Ψ and Φ functions used by Onsager. Accordingly they are subject to the same criticisms that have already been made.
Prigogine minimizes the function Ψ(X) and refers to this as the Principle of Minimum Rate of Entropy Production [9]. The rationale appears to be that Ψ(X) is a positive semi-definite quadratic form in the forces, and therefore if some of the forces are regarded as unspecified, the function may be minimized with respect to them. In so far as Ψ gives half the rate of entropy production in the steady state, Prigogine regards this as minimizing the rate of entropy production. The problem with this nomenclature is that Ψ does not give the rate of entropy production away from the steady state, which is where the variations are carried out. And as already mentioned, Ψ has no physical meaning away from the optimum state, and so the gradients in Ψ have no relation to the thermodynamic forces that drive the system to the steady state. Further, the function Ψ(X) is not particularly useful because it does not give the optimum flux in the steady state, (it deals with forces, not fluxes). In practice, those working with Ψ(X) often invoke the linear transport law to replace some or all of the X by $\overline{\dot{x}}$, which is only valid in the optimum state. However, what is then done is that $\dot{x}$ is treated as if it were a constrained, non-optimized variable, which is obviously mathematically improper. Finally, proofs and manipulations of Ψ generally assume the validity of spatially localized thermodynamic relations, as is particularly evident in the analysis of de Groot and Mazur [10]. It is difficult to argue for a fundamental and general variational principle for non-equilibrium systems if such an approximation needs to be made in its formulation. Complementary critiques of Prigogine’s approach may be found in the literature [15,27,28,29].
The Principle of Maximum Rate of Entropy Production is also popular [21,22,23,24,25,26]. Its proponents assert that it does not really contradict the Principle of Minimum Dissipation because Prigogine’s Principle is restricted to systems near thermodynamic equilibrium with fixed boundary conditions, whereas the Maximum Dissipation Principle is valid in the opposite case, (see p. 5 of Ref. [25]). Whether or not that explains the discrepancy or is consistent with a universal Principle, it remains a fact that the Principle of Maximum Dissipation is still based upon the function Ψ(X), and possible combinations of $\Phi(\dot{x})$ and $\dot{S}^{(1)}(\dot{x}, X)$, and so it is subject to exactly the same criticisms that have already been made: it assumes the linear transport laws rather than deriving them, it generally assumes local thermodynamic equilibrium, it has no physical meaning in the constrained state, and it is based solely upon the rate of production of first entropy.

4. Second Entropy

The second law of equilibrium thermodynamics, Eq. (1), gives the direction of change, but it does not give the rate of change. For non-equilibrium systems it is the rate that is of primary interest, whether it be the quantitative value of a flux, the speed of motion, the response to a time-varying external field, the rate of evolution of a system, or the rate of a chemical reaction, etc. None of these are given by the second law of equilibrium thermodynamics. As has already been emphasized, it therefore follows that one cannot have a quantitative theory for time-dependent systems based solely upon the second law of thermodynamics, or upon the entropy that appears in it.
For this reason the author has introduced the second law of non-equilibrium thermodynamics [30],
The second entropy increases during spontaneous changes in the dynamic structure of the total system.
Dynamic structure is a macroscopic flux or rate; it is a transition between macrostates in a specified time. The new law is deliberately analogous to the familiar second law of equilibrium thermodynamics. Instead of a constrained macrostate, one has a constrained flux, and instead of the equilibrium state in which the macrostate no longer changes and entropy is a maximum, one has the steady state in which the fluxes cease to change and the second entropy is a maximum. The second entropy could also be called the transition entropy, and it is the number of molecular configurations associated with a transition between macrostates in a specified time. Formally it may be written,
$$ S^{(2)}(x', x|\tau, E) = k_B \ln \int_E d\Gamma\, \delta\!\left(x - \hat{x}(\Gamma)\right) \delta\!\left(x' - \hat{x}(\Gamma(\tau|\Gamma, 0))\right), \qquad (9) $$
where Γ(τ|Γ,0) is the trajectory. The exponential of this second entropy is the unconditional probability of observing the transition x → x′ in time τ. This expression shows why the second entropy is the relevant entropy for determining a dynamic state. The flux or rate of change is defined in terms of the coarse velocity as
$$ \overset{\circ}{x} \equiv \frac{x' - x}{\tau}. \qquad (10) $$
For technical reasons that need not be discussed in detail here, τ is generally not an infinitesimal. This detail is not important in the steady state, since then the coarse velocity is equal to the instantaneous velocity anyway [30].
The second entropy theory can be briefly illustrated by applying it to the steady state in the linear regime. The Principle is of course much more general than this, and more detailed analyses and broader applications are given elsewhere [1,2,3, 31].
Fluctuation theory represents one of the earliest applications of equilibrium thermodynamics, and it has provided a bridge between equilibrium thermodynamics and equilibrium statistical mechanics. What follows then represents a fluctuation theory for non-equilibrium thermodynamics.
Consider an isolated, equilibrium system, and let x be the value of a macrostate, (moment of energy, density, etc., or the coordinate of a Brownian particle, or a collection of these). One writes the second entropy as a quadratic form,
$$ S^{(2)}(x', x|\tau) = \frac{1}{2} A(\tau) \left[ x^2 + x'^2 \right] + B(\tau)\, x' x. \qquad (11) $$
Constant and linear terms have been discarded here, which implies that x measures the departure from the optimum or average value, x = 0. This is the most general quadratic form that can be written consistent with the symmetry of the problem. Denoting the most likely state with an over-line, maximizing the second entropy,
$$ \left. \frac{\partial S^{(2)}(x', x|\tau)}{\partial x'} \right|_{\overline{x'}} = 0, \qquad (12) $$
yields
$$ \overline{x'}(x, \tau) = -A(\tau)^{-1} B(\tau)\, x. \qquad (13) $$
This says that the future, τ > 0, or past, τ < 0, state is linearly proportional to the current state. Define the time correlation function,
$$ Q(\tau) \equiv \left\langle x(t+\tau)\, x(t) \right\rangle. \qquad (14) $$
For Gaussian statistics, one may replace x(t + τ) in the averand by $\overline{x'}(x, \tau)$, which allows the quadratic coefficients of the second entropy to be expressed in terms of the time correlation function,
$$ -A(\tau)^{-1} B(\tau) = Q(\tau)\, Q(0)^{-1}. \qquad (15) $$
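The maximization step above is easily checked numerically. This sketch uses hypothetical scalar coefficients (with A < 0, as required for a maximum) and scans candidate future states:

```python
import numpy as np

A, B, x = -3.0, 2.0, 0.7                   # assumed scalar coefficients, A < 0
xp = np.linspace(-5.0, 5.0, 200001)        # candidate future states x'
S2 = 0.5 * A * (x ** 2 + xp ** 2) + B * xp * x
xp_bar = xp[np.argmax(S2)]
print(xp_bar)  # maximum lies at -B/A * x, i.e. about 0.4667
```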
The second entropy must reduce to the first entropy upon integration over x′. Again for Gaussian statistics, this is the same as evaluating it at $\overline{x'}(x, \tau)$, so that one has the reduction condition
$$ S^{(2)}(\overline{x'}(x, \tau), x|\tau) = S^{(1)}(x). \qquad (16) $$
In the fluctuation regime the first entropy may be written
$$ S^{(1)}(x) = \frac{1}{2} S x^2 = \frac{-k_B}{2} Q(0)^{-1} x^2, \qquad (17) $$
where kB is Boltzmann’s constant. Note that fluctuation theory is based upon the fact that S is both negative definite and small, which means that terms quadratic in S are relatively negligible. The well-known result that the fluctuation matrix is the inverse of the correlation matrix is readily proved by an integration by parts, using the fact that the exponential of the entropy is the probability distribution. Combining the last three equations with the quadratic form for the second entropy yields
$$ A(\tau)^{-1} = S^{-1} - k_B^{-2}\, Q(\tau)\, S\, Q(\tau), \qquad (18) $$
and
$$ B(\tau) = k_B^{-1} \left[ 1 - k_B^{-2}\, S\, Q(\tau)\, S\, Q(\tau) \right]^{-1} S\, Q(\tau)\, S. \qquad (19) $$
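The fluctuation–correlation relation invoked above can be verified by direct sampling. This sketch assumes k_B = 1 and a hypothetical scalar fluctuation coefficient S < 0, so that the distribution ∝ exp(S⁽¹⁾(x)/k_B) is a Gaussian of variance Q(0) = −k_B/S:

```python
import numpy as np

# Probability proportional to exp(S1(x)) with S1(x) = 0.5 * S * x**2 is a
# Gaussian of variance Q(0) = -1/S (k_B = 1; S = -2.5 is an assumed value).
S = -2.5
rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(-1.0 / S), size=2_000_000)
Q0 = np.mean(x ** 2)
print(abs(Q0 - (-1.0 / S)) < 5e-3)   # -> True: sample variance is close to -1/S = 0.4
```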
In the intermediate regime the time correlation function goes like [30]
$$ Q(\tau) \sim Q(0) - k_B |\tau| D, \qquad (20) $$
where D is the transport matrix. This has the appearance of an expansion for small τ, but due to the absolute value, which occurs because the time correlation function is an even function of time, it is not a Taylor expansion. This says that on intermediate time scales, the time correlation function decreases linearly from its initial value. Using this result the most likely position is
$$ \overline{x'}(x, \tau) = -k_B^{-1} Q(\tau)\, S\, x = x + |\tau|\, D\, S\, x. \qquad (21) $$
Hence the most likely flux is
$$ \overline{\overset{\circ}{x}} = \hat{\tau}\, D\, S\, x = \hat{\tau}\, D\, X, \qquad (22) $$
since $X \equiv \partial S^{(1)}(x)/\partial x = S x$ is the thermodynamic force. Since in the future $\hat{\tau} \equiv \mathrm{sign}(\tau) = +1$, this says that the flux is linearly proportional to, and in the same direction as, the force. (The present derivation characterizes fluctuations of an isolated system, which are symmetric in time. The steady state is the response to an external force that had been applied in the past, which is the same as the future decay of a fluctuation with the same value of the internal force.) This is the linear transport law, Eq. (4). From the nature of the derivation, for the multi-component case D is a symmetric matrix, and so this result gives the reciprocal relations that were originally discovered by Onsager [6].
The above derivation is valid for an isolated system, and so it says that following a fluctuation away from the equilibrium position, the system returns to equilibrium at a rate determined by the internal thermodynamic force. Onsager invoked the regression hypothesis, which is to say that the flux that develops in an isolated system is the same as that which develops in a system acted upon by an external mechanical or thermodynamic force that is equal to the internal force [6]. In the present second entropy approach it can be shown that in the optimum state the internal force is indeed equal to the external force, and so the present expression holds for the flux in a driven steady state system [1, 2].
It is worth mentioning that the definition of the transport coefficient in terms of the time correlation function is equivalent to
$$ D = \lim_{\tau \to \infty} \frac{-1}{k_B |\tau|} \left[ Q(\tau) - Q(0) \right] = \lim_{\tau \to \infty} \frac{-1}{k_B |\tau|} \int_0^{|\tau|} dt\, \left\langle \dot{x}(t)\, x(0) \right\rangle_0. \qquad (23) $$
This may be recognized as the Green-Kubo expression for the transport coefficient [6, 30, 32,33,34,35,36].
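These relations can be illustrated with an Ornstein-Uhlenbeck model of a fluctuation (an assumption for illustration, not part of the theory above), for which the time correlation function is exponential. On intermediate time scales the exponential decreases linearly from its initial value, so the finite-difference estimate of the transport coefficient matches the exact one (k_B = 1, and the values of γ and Q(0) are assumed):

```python
import numpy as np

# Ornstein-Uhlenbeck fluctuation: Q(tau) = Q(0) * exp(-gamma * |tau|).
# On intermediate times Q(tau) is approximately Q(0) - |tau| * gamma * Q(0),
# a linear decrease, so the transport coefficient is D = gamma * Q(0).
gamma, Q0 = 0.5, 2.0
Q = lambda tau: Q0 * np.exp(-gamma * abs(tau))

tau = 0.01                                  # intermediate regime: gamma * tau << 1
D_est = (Q0 - Q(tau)) / tau                 # -(1/|tau|) [Q(tau) - Q(0)]
D_exact = gamma * Q0
print(abs(D_est - D_exact) / D_exact < 0.01)  # -> True: agreement to better than 1%
```

At much longer times the exponential curvature matters and the linear (intermediate-regime) expansion fails, which is why the time scale in the expansion above is neither infinitesimal nor macroscopically long.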
A physical interpretation of the second entropy is readily attainable. In view of the above results the quadratic form may be written as
$$ S^{(2)}(x_2, x_1|\tau) = \frac{1}{2} A(\tau) [x_2 - x_1]^2 + [x_2 - x_1] \left[ A(\tau) + B(\tau) \right] x_1 + x_1 \left[ A(\tau) + B(\tau) \right] x_1 $$
$$ = \frac{1}{2} A(\tau) [x_2 - x_1]^2 + [x_2 - x_1]\, S\, x_1 + x_1 S x_1 + \mathcal{O}(S^2). \qquad (24) $$
This is written in a form that allows the future point x2 to be predicted given the current point x1. Hence the final term, which is obviously twice the first entropy, is a constant that need not be considered further. The term of order S², (which is what the notation $\mathcal{O}$ means), is neglected because S is small in the fluctuation regime. Since A(τ) < 0, (because it is a fluctuation matrix), the first term is negative and unfavorable. This term is quadratic in the flux, [x2 − x1]/τ, and it reflects the entropy cost of the dynamic order. It is quadratic because it cannot be sensitive to the direction of the flux; whether the flux is positive or negative, the cost in dynamic order is the same. The middle term is the flux times the thermodynamic force, or $\tau \dot{S}^{(1)}$. This is the production of first entropy or dissipation term; it is positive in the steady state, and it is what drives the flux. The dissipation is obviously sensitive to the direction of the flux, and so this term has to be linear in the flux. So one sees that the development of the steady state is a combination of two physical effects: the production of first entropy and the arrangement of dynamic order. The first drives the system from the not-in-equilibrium state back to equilibrium, and it increases linearly with the flux. This linear increase is limited by the quadratic cost of ordering the flux. It is the balance of these two terms that gives rise to the steady or optimum state.
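The competition just described can be displayed directly by maximizing the quadratic-plus-linear form over the step Δ = x2 − x1 (a sketch with assumed scalar values; the constant final term is dropped):

```python
import numpy as np

# Assumed scalar fluctuation coefficients (both negative) and current state.
A, S, x1 = -4.0, -1.0, 2.0
delta = np.linspace(-5.0, 5.0, 400001)      # candidate steps, delta = x2 - x1
S2 = 0.5 * A * delta ** 2 + delta * S * x1  # quadratic order cost + linear dissipation gain
d_opt = delta[np.argmax(S2)]
print(d_opt)  # -> -0.5, the balance point -S*x1/A; the step is toward equilibrium
```

The force S·x1 is negative here (x1 > 0, S < 0), so the optimum step is negative: the dissipation term pulls the system toward equilibrium, while the quadratic cost of dynamic order keeps the step finite.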

5. Conclusion

It is the cost of dynamic order that is missing from previous non-equilibrium Principles based on the second law of equilibrium thermodynamics and the rate of first entropy production. This physical effect is explicitly included in the second entropy theory, which is specifically designed to account for transitions. As such, the second entropy provides a complete basis for the non-equilibrium state.

References

  1. Attard, P. Theory for non-equilibrium statistical mechanics. Phys. Chem. Chem. Phys. 2006, 8, 3585. [Google Scholar] [CrossRef] [PubMed]
  2. Attard, P. The second law of nonequilibrium thermodynamics: how fast time flies. Adv. Chem. Phys. 2008, 140, 1. [Google Scholar]
  3. Attard, P. Statistical mechanical theory for steady state systems. V. Nonequilibrium probability density. J. Chem. Phys. 2006, 124, 224103. [Google Scholar] [CrossRef] [PubMed]
  4. Attard, P. The explicit density functional and its connection with entropy maximisation. J. Stat. Phys. 2000, 100, 445. [Google Scholar] [CrossRef]
  5. Attard, P. Thermodynamics and statistical mechanics: equilibrium by entropy maximisation; Academic Press: London, 2002. [Google Scholar]
  6. Onsager, L. Reciprocal relations in irreversible processes. I. Phys. Rev. 1931, 37, 405. [Google Scholar] [CrossRef]
  7. Onsager, L. Reciprocal relations in irreversible processes. II. Phys. Rev. 1931, 38, 2265. [Google Scholar] [CrossRef]
  8. Onsager, L.; Machlup, S. Fluctuations and irreversible processes. Phys. Rev. 1953, 91, 1505. [Google Scholar] [CrossRef]
  9. Prigogine, I. Introduction to thermodynamics of irreversible processes; Interscience: New York, 1967. [Google Scholar]
  10. de Groot, S. R.; Mazur, P. Non-equilibrium thermodynamics; Dover: New York, 1984. [Google Scholar]
  11. Haken, H. Generalized Onsager-Machlup function and classes of path integral solutions of Fokker-Planck equation and master equation. Z. Phys. B 1976, 24, 321. [Google Scholar] [CrossRef]
  12. Graham, R. Path integral formulation of general diffusion processes. Z. Phys. B 1977, 26, 281. [Google Scholar] [CrossRef]
  13. Grabert, H.; Green, M. S. Fluctuations and nonlinear irreversible processes. Phys. Rev. A 1979, 19, 1747. [Google Scholar] [CrossRef]
  14. Hunt, K. L. C.; Ross, J. Path integral solutions of stochastic equations for nonlinear irreversible processes: the uniqueness of the thermodynamic Lagrangian. J. Chem. Phys. 1981, 75, 976. [Google Scholar] [CrossRef]
  15. Lavenda, B. H. Nonequilibrium statistical thermodynamics; Wiley: Chichester, 1985. [Google Scholar]
  16. Keizer, J. Statistical thermodynamics of nonequilibrium processes; Springer-Verlag: New York, 1987. [Google Scholar]
  17. Eyink, G. L. Dissipation and large thermodynamic fluctuations. J. Stat. Phys. 1990, 61, 533. [Google Scholar] [CrossRef]
  18. Peng, B.; Hunt, K. L. C.; Hunt, P. M.; Suarez, A.; Ross, J. Thermodynamic and stochastic theory of nonequilibrium systems: fluctuation probabilities and excess work. J. Chem. Phys. 1995, 102, 4548. [Google Scholar] [CrossRef]
  19. Gyarmati, I. On the most general form of the thermodynamic integral principle. Z. Phys. Chemie 1968, 239, 133. [Google Scholar]
  20. Gyarmati, I. Nonequilibrium thermodynamics; Springer: Berlin, 1970. [Google Scholar]
  21. Paltridge, G. W. Climate and thermodynamic systems of maximum dissipation. Nature 1979, 279, 630. [Google Scholar] [CrossRef]
  22. Swenson, R.; Turvey, M. T. Thermodynamic reasons for perception-action cycles. Ecological Psychology 1991, 3, 317. [Google Scholar] [CrossRef]
  23. Schneider, E. D.; Kay, J. J. Life as a manifestation of the second law of thermodynamics. Math. Comput. Model. 1994, 19(6), 25. [Google Scholar] [CrossRef]
  24. Dewar, R. Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states. J. Phys. A: Math. Gen. 2003, 36, 631. [Google Scholar] [CrossRef]
  25. Kleidon, A.; Lorenz, R. D. (Eds.) Non-equilibrium thermodynamics and the production of entropy: life, earth, and beyond; Springer: Berlin, 2005.
  26. Martyushev, L. M.; Seleznev, V. D. Maximum entropy production principle in physics, chemistry and biology. Phys. Rep. 2006, 426, 1. [Google Scholar] [CrossRef]
  27. Keizer, J.; Fox, R. F. Qualms regarding the range of validity of the Glansdorff-Prigogine criterion for stability of non-equilibrium states. Proc. Nat. Acad. Sci. USA 1974, 71, 192. [Google Scholar] [CrossRef] [PubMed]
  28. Hunt, K. L. C.; Hunt, P. M.; Ross, J. Dissipation in steady states of chemical systems and deviations from minimum entropy production. Phys. A 1987, 147, 48. [Google Scholar] [CrossRef]
  29. Ross, J.; Vlad, M. O. Exact solutions for the entropy production rate of several irreversible processes. J. Phys. Chem. 2005, 109, 10607. [Google Scholar] [CrossRef] [PubMed]
  30. Attard, P. Statistical mechanical theory for steady state systems. II. Reciprocal relations and the second entropy. J. Chem. Phys. 2005, 122, 154101. [Google Scholar] [CrossRef] [PubMed]
  31. Attard, P.; Gray-Weale, A. Statistical mechanical theory for steady state systems. VIII. General theory for a Brownian particle driven by a time- and space-varying force. J. Chem. Phys. 2008, 128, 114509. [Google Scholar] [CrossRef] [PubMed]
  32. Green, M. S. Markoff random processes and the statistical mechanics of time-dependent phenomena. II. Irreversible processes in fluids. J. Chem. Phys. 1954, 22, 398. [Google Scholar]
  33. Kubo, R. The fluctuation-dissipation theorem. Rep. Progr. Phys. 1966, 29, 255. [Google Scholar] [CrossRef]
  34. Hansen, J.-P.; McDonald, I. R. Theory of simple liquids; Academic Press: London, 1986. [Google Scholar]
  35. Boon, J. P.; Yip, S. Molecular hydrodynamics; Dover: New York, 1991. [Google Scholar]
  36. Zwanzig, R. Non-equilibrium statistical mechanics; Oxford University Press: Oxford, 2001. [Google Scholar]

Attard, P. The Second Entropy: A Variational Principle for Time-dependent Systems. Entropy 2008, 10, 380-390. https://0-doi-org.brum.beds.ac.uk/10.3390/e10030380
