Article

Para-Hamiltonian form for General Autonomous ODE Systems: Introductory Results

Artur Kobus and Jan L. Cieśliński *
Faculty of Physics, University of Bialystok, ul. Ciołkowskiego 1L, 15-245 Białystok, Poland
* Author to whom correspondence should be addressed.
Submission received: 4 December 2021 / Revised: 21 February 2022 / Accepted: 23 February 2022 / Published: 26 February 2022
(This article belongs to the Special Issue Non-Hamiltonian Dynamics, Open Systems and Entropy)

Abstract

We propose a new tool to deal with autonomous ODE systems for which the solution to the Hamiltonian inverse problem is not available in the usual, classical sense. Our approach allows a class of formally conserved quantities to be constructed for dynamical systems showing dissipative behavior and other, more general, phenomena. The only ingredients of this new framework are Hamiltonian geometric mechanics (to sustain certain desirable properties) and the direct reformulation of the notion of the derivative along the phase curve. This seemingly odd and inconsistent marriage of apparently remote ideas leads to the existence of the generator of motion for every autonomous ODE system. Having constructed the generator, we obtained the Lie invariance of the symplectic form ω for free. Various examples are presented, ranging from mathematics, classical mechanics, and thermodynamics, to chemical kinetics and population dynamics in biology. Applications of these ideas to geometric integration techniques of numerical analysis are suggested.

1. Introduction

Physics clearly distinguishes systems in which conservation laws hold, as this simplifies both the treatment of the physical input given by initial or boundary conditions and the formal description in terms of admissible mathematical tools. Hamiltonian systems, which are model examples of how conserved quantities fit into physics, are very convenient, since their nice formal features and the associated structure provided by symplectic geometry additionally reveal several hidden links between the objects engaged in the theory [1]. Nonetheless, analytic methods are often not sufficient, and we need to ponder the possibility of a numerical solution of the problem under consideration, or even some trickier, purely formal methods, such as asymptotic expansion in the vicinity of singularities [2].
This article proposes a form of these “trickier methods” that could be applied to numerical algorithms, as they offer a weak form of the conservation laws (we mean here a conserved charge without the corresponding classical conservation law).
Numerical integration algorithms range from the general-purpose classic Runge–Kutta and multi-step schemes [3] to more specific, geometric methods [4]. The latter of course exhibit better qualitative behavior (e.g., long-time stability [5]), while the former are usually of higher order but adapt less flexibly to a change of the simulation circumstances (e.g., a scheme may fail to maintain high accuracy when the simulation lasts too long). Hence, geometric methods show some superiority when we go to the extremes of bending the simulation parameters.
We find the main motivation for the undertaken research in the geometric numerical treatment of ODEs known as Geometric Numerical Integration (GNI) (see, e.g., [4,6,7,8,9]), although in this paper, we confine ourselves mainly to the construction of the proper continuous counterpart of the discrete framework needed for the treatment of general autonomous, non-conservative systems.
As is well known, geometric methods demand some kind of in-built qualitative structure of the considered problem to be preserved by the scheme. A good example of such a structure, and maybe the simplest one, is a single conserved quantity. This case can be reduced to the usage of a certain discrete gradient algorithm; see, e.g., [10,11,12,13]. The last reference here is an excellent example of accessing a second invariant quantity through discrete gradients. Such a numerical treatment has indeed been applied to systems with first integrals and Lyapunov functions, but for arbitrary systems, it generally ceases to function properly because of the lack of a structural property guiding the evolution of the system. Although several frameworks of interest have been proposed for the qualitative description of such problems (see, e.g., [14,15,16,17,18]), none of them provides a simple and immediate way to gain access to a conserved quantity that would directly help in the performance of a specially crafted numerical algorithm.
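For orientation, the following minimal sketch (our own illustration, not the algorithm of [10,11,12,13]) implements one midpoint-type discrete-gradient step for a skew-gradient system $\dot{\mathbf{x}} = B\,\nabla H$; the pendulum Hamiltonian, the step size, and the simple fixed-point solver are illustrative assumptions.

```python
import numpy as np

# Minimal discrete-gradient step for a skew-gradient system
#   x_{n+1} = x_n + h * B @ dgrad(x_n, x_{n+1}),
# which conserves H exactly (up to the tolerance of the implicit solve),
# because B is skew-symmetric.  Pendulum Hamiltonian used as a test case.

def H(x):                        # H(q, p) = p^2/2 - cos(q)
    q, p = x
    return 0.5 * p**2 - np.cos(q)

def gradH(x):
    q, p = x
    return np.array([np.sin(q), p])

B = np.array([[0.0, 1.0], [-1.0, 0.0]])     # canonical structure matrix

def discrete_gradient(x0, x1):
    """Midpoint-type discrete gradient: H(x1) - H(x0) = dg . (x1 - x0)."""
    d = x1 - x0
    g = gradH(0.5 * (x0 + x1))
    nrm2 = d @ d
    if nrm2 < 1e-14:
        return g
    return g + ((H(x1) - H(x0) - g @ d) / nrm2) * d

def step(x, h, iters=50):
    """One implicit step, solved by fixed-point iteration (enough for small h)."""
    x_new = x.copy()
    for _ in range(iters):
        x_new = x + h * B @ discrete_gradient(x, x_new)
    return x_new

x, h = np.array([1.5, 0.0]), 0.1
H0 = H(x)
for _ in range(1000):
    x = step(x, h)
print("energy drift:", abs(H(x) - H0))      # limited by the solver tolerance
```

Because the discrete gradient satisfies $H(x_{n+1}) - H(x_n) = \bar{\nabla}H \cdot (x_{n+1} - x_n) = h\,(B\,\bar{\nabla}H)\cdot\bar{\nabla}H = 0$, the conservation is exact by construction rather than accidental.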
In this paper, we show that Hamiltonian systems are indeed ubiquitous when we consider some consequences of the theorem on the existence and uniqueness of solutions to ordinary differential equations (ODEs) [19,20,21] and seek for the structure described above even if it is apparently absent. This approach leads to an abundance of so-called effective integrals of motion.
To begin with, let us consider the simple Initial-Value Problem (IVP) for an autonomous ODE:
$\dot{x} = f(x), \qquad x(t_0) = x_0,$
given on some open domain $D \subset \mathbb{R}^d$ with $f : D \to \mathbb{R}^d$; the overset dot is, obviously, shorthand for differentiation with respect to time.
Given $f(x) \in C^0(D)$, the existence and continuity of the solutions are guaranteed. The assumption $f(x) \in C^r(D)$, $r \geq 1$, ensures the uniqueness of the solution and its respective differentiability properties [1,19,21].
McLachlan et al. [12,22] showed that, in a neighborhood of a non-degenerate fixed point of a dynamical system, the existence of a first integral of (1) is equivalent to the existence of a bounded skew-symmetric matrix $B(x)$ such that:
$f(x) = B(x)\,\nabla H(x),$
where $H(x)$ denotes the mentioned integral. Note that B is not determined uniquely, since we can add any solution of the homogeneous equation $0 = B\,\nabla H$ to the particular solution:
$B^{(P)} = \frac{1}{|\nabla H|^2}\, f \wedge \nabla H.$
The exterior product could be defined as:
$(v \wedge w)_{ij} := v_i w_j - v_j w_i, \qquad i, j = 1, 2, \dots, d,$
being explicitly anti-symmetric (note that in $\mathbb{R}^3$, the structure provided by such a product is equivalent to the one given by the usual cross-product). The Formula (3) explicitly demands $|\nabla H| \neq 0$. However, if we are already given the problem in the form of (2), the assumption of $\nabla H \neq 0$ is redundant.
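As a quick numerical illustration of this construction (our own check; the pendulum vector field and the test points are arbitrary choices), the matrix with entries $(f_i\,\partial_j H - f_j\,\partial_i H)/|\nabla H|^2$ applied to $\nabla H$ reproduces $f$ wherever $\nabla H \neq 0$:

```python
import numpy as np

# Check of the particular skew-symmetric solution B = (f ^ grad H)/|grad H|^2:
# for a system with first integral H (here the mathematical pendulum, an
# illustrative choice), B(x) @ grad H(x) reproduces the vector field f(x).

def f(x):                        # qdot = p, pdot = -sin(q)
    q, p = x
    return np.array([p, -np.sin(q)])

def gradH(x):                    # H = p^2/2 - cos(q)
    q, p = x
    return np.array([np.sin(q), p])

def B(x):
    v, g = f(x), gradH(x)
    wedge = np.outer(v, g) - np.outer(g, v)   # (v ^ g)_{ij} = v_i g_j - v_j g_i
    return wedge / (g @ g)

for x in [np.array([0.4, 0.2]), np.array([-1.0, 0.7]), np.array([0.3, -0.8])]:
    print(np.allclose(B(x) @ gradH(x), f(x)))  # True away from grad H = 0
```

The identity holds because $f \cdot \nabla H = 0$ for a first integral, so the extra term proportional to $\nabla H$ drops out.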
This article is meant to extend the classical Hamiltonian approach to non-conservative systems, thereby unifying the perspective of gradient discretization for all autonomous ODEs and their study, an approach that has its origin in [22]. The treatment of even more general systems (non-autonomous, optimal control, stochastic) is still under development.
In Section 2, we provide an overview of geometric mechanics (for the notation and basic terminology, see [1,23]); Section 3 serves in the same way to refresh the basic information on biological population and chemical reaction dynamics. Section 4 introduces the main ideas of the paper, emphasizing a direct, even excessive, use of the existence and uniqueness theorem for ODE systems. Section 5 presents versatile examples of the application of the introduced formalism, and Section 6 gives concluding remarks.

2. Hamiltonian Mechanics

If $B(x)$ in (2) obeys the Jacobi identity, we can term the system Poisson or Hamiltonian (non-canonical). Since a Poisson structure matrix B is always of even rank [24], odd-dimensional systems are necessarily degenerate (some even-dimensional ones are as well; this implies the existence of Casimir functions) [25,26].
On the chosen symplectic leaf with prescribed Clebsch coordinates, treated as the symplectic manifold $(T^*M, \omega)$, we have:
$\omega = dx^i \wedge dp_i$
(the summation convention is assumed throughout the paper), the existence of which is equivalent to the non-degenerate canonical Poisson bracket given by the Poisson bi-vector:
$\pi = \omega^{-1} : \textstyle\bigwedge^2 T^*M \to \mathbb{R},$
where:
$\pi(df, dg) = \{f, g\},$
with $f, g \in \mathcal{F}(T^*M)$, the Lie algebra of functions defined on the phase space.
This also gives rise to the first-order differential operator:
$X_H(\cdot) = \pi(\cdot, dH),$
known as a Hamiltonian vector field.
Now, we call the first integral H the generator of motion if:
$\dot{x} = \{x, H\} = X_H(x),$
understood componentwise. In other words:
$\dot{x}^i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial x^i},$
or simply:
$X_H \lrcorner\, \omega = dH,$
denoting by ⌋ the substitution of a vector field into the form ω (contraction).
Note that the flow of the Hamiltonian vector field preserves the canonical symplectic form on the phase space, which is clearly given by the proper Lie–Ślebodziński derivative:
$\mathcal{L}_{X_H}(\omega) = X_H \lrcorner\, d\omega + d(X_H \lrcorner\, \omega) = d(dH) = 0,$
by the closedness of the symplectic form and the nilpotency of exterior derivative d.
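The preservation of ω by the exact flow is exactly what geometric integrators try to imitate. The sketch below (ours; the harmonic oscillator and the step size are illustrative assumptions) compares the one-step Jacobian determinants of the explicit and the symplectic Euler schemes; for one degree of freedom, a unit determinant means that the area, and hence ω, is preserved.

```python
import numpy as np

# The exact Hamiltonian flow preserves omega = dx ^ dp; a symplectic map shares
# this property (unit Jacobian determinant in one degree of freedom), while an
# arbitrary scheme does not.  Harmonic oscillator H = (x^2 + p^2)/2 assumed.

h = 0.1

def explicit_euler(x, p):
    return x + h * p, p - h * x

def symplectic_euler(x, p):
    p_new = p - h * x            # "kick" with the old position
    return x + h * p_new, p_new  # "drift" with the new momentum

def jacobian_det(step, x, p, eps=1e-6):
    """Central-difference Jacobian determinant of the one-step map at (x, p)."""
    J = np.empty((2, 2))
    for k, (dx, dp) in enumerate([(eps, 0.0), (0.0, eps)]):
        xp, pp = step(x + dx, p + dp)
        xm, pm = step(x - dx, p - dp)
        J[:, k] = [(xp - xm) / (2 * eps), (pp - pm) / (2 * eps)]
    return np.linalg.det(J)

print("explicit Euler  :", jacobian_det(explicit_euler, 0.3, 0.7))    # 1 + h^2
print("symplectic Euler:", jacobian_det(symplectic_euler, 0.3, 0.7))  # 1
```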
The basic hydrodynamic interpretation of the canonical formalism is also of some value. Let us consider a phase fluid of many systems with various initial conditions, moving on the phase space [27]. The velocity field of the fluid is clearly given by the Hamiltonian vector field. Note that, since $H \in C^2(D)$, we have $\operatorname{div} v = 0$.
It surely obeys the continuity law:
$\frac{\partial \rho}{\partial t} + \operatorname{div}(\rho v) = 0,$
or, in other words:
$\frac{d\rho}{dt} + \rho\,(\nabla \cdot v) = 0.$
Now, because the phase fluid is incompressible:
ρ = const
on a target energy level set. Now, let us set $\rho = C$, and then vary the constant as $C = C(x,p)$ to obtain the condition:
$\nabla C \cdot v = 0 \;\Rightarrow\; C = cH,$
where c is a genuine constant.
The fluid should also obey Euler’s equation (the fluid’s particle velocity is denoted by $v$; here, we consider just the fluid of an ensemble of free harmonic oscillators with the equations of motion $\dot{x} = p$, $\dot{p} = -x$):
$\frac{dv}{dt} = -\frac{1}{\rho}\nabla P,$
which in an elementary manner leads to Bernoulli’s law, if we take the inner product of both sides with v and perform trivial integration:
$\frac{1}{2}\,c\,(x^2 + p^2) + P(x) = \mathrm{const},$
from which we can evaluate the pressure function and constant appearing in the formula, as the pressure needs to be non-negative.

3. M -Systems of Ecology and Chemical Kinetics

The Hamiltonian (in general Poisson) structure can be met also in ecology, where evolving populations of different species share resources in a common domain of living. Various environmental factors also can be modeled through some additional terms in differential equations governing the evolution (optimal control problems, etc. [28]).
We point out that a similar approach can be adopted for chemical kinetics problems, where the role of populations is played by the concentrations of various different chemical substances. The resource function here, if it exists, reflects the amount of reacting substances. In both domains, the systems share the properties of non-negativity, realizability, reducibility, and semi-stability [6,29,30].
The mentioned processes may involve a great number of variables if they are to be described exactly. For our purposes, we stick to ODE models, as they are accurate enough to describe the phenomena satisfactorily, yet simple enough not to complicate things unnecessarily; thus, we adopt the following assumptions:
1.
Reagents (species) are well mixed (distributed homogeneously), otherwise the problem would be inhomogeneous in space, hence yielding Partial Differential Equations (PDEs) (e.g., the reaction–diffusion problem; see, e.g., [31]) instead of ODEs;
2.
The concentrations of the substrates (species) are high enough to prevent stochastic behavior during reacting (coexisting; the abandonment of this assumption would lead to Wiener processes). Otherwise, we would obtain Stochastic Differential Equations (SDEs) (e.g., the Kubo oscillator) instead of ODEs.
Generally we could consider a few types of interaction between species: parasitic invasion, competition for resources, etc. For example, the competition of U and V given by the system:
$\dot{u} = f(u,v), \qquad \dot{v} = g(u,v)$
would have to satisfy the conditions $\partial f/\partial u > 0$, $\partial g/\partial v > 0$, $\partial f/\partial v < 0$, $\partial g/\partial u < 0$ to reflect the fact that the consumption of a nutrient (the depletion of a resource) by one species prevents the other from doing the same, as well as to describe the competition between the members of the same population.
Typically considered in this context is the Lotka–Volterra system of the form:
$\dot{u} = -\alpha u + \beta u v, \qquad \dot{v} = -\gamma u v + \delta v,$
with a properly adjusted set of constants (note that the above equations fulfill the previously mentioned restrictions, provided $u, v$ are sufficiently bounded), u being the population of the predator species and v the population concentration of the prey.
A specific kind of interaction appears in systems of the Rosenzweig–MacArthur [32,33] type, involving predator and prey coexistence. In (19), we would obtain:
$f(u,v) = r u \left(1 - \frac{u}{K}\right) - v\, h(u), \qquad g(u,v) = v \big({-\beta} + \alpha\, h(u)\big),$
where h ( u ) denotes the number of prey caught by the predator per unit of time and α , β , r , K are the constant environmental parameters of the system.
The described model may be decomposed into a symmetric and an anti-symmetric part of the evolution, yet it does not readily yield a Poissonian description. The ambitious objective of this paper is to remedy such situations, putting (21) not only into Poissonian, but into altogether canonical form. This kind of impossibility of directly integrating the equations of motion to solve the inverse Hamiltonian problem provides the main motivation for all the efforts considered here.
Functions such as $h(u)$ are known as the functional response of the predator to a variation in the prey population. The basic types of these were classified by C.S. Holling into three categories: I. linear, II. hyperbolic (saturation), and III. so-called θ-sigmoid [33].
Various parameters in the equations of the population dynamics can be made into variable ones, often leading to their re-appearance as new dynamical (dependent) variables. To study these and more of not only single-population models such as, e.g., a chemostat system with continuous/batch cultivated bacteria, the Monod equation, the grazers–vegetation cycles, the Ivlev or Ayala–Gilpin–Ehrenfeld model of population growth, epidemic and endemic (SIR/SIS) models, or even more complicated problems associated with the optimal control of invasive species and the harvesting of populations, the reader is referred to the literature on the subject [28,34,35].
Of course, some of these systems admit Hamiltonian/Poisson representation, where we treat the resource function M ( x , t ) as the basic object (hence the name M -systems). It serves as a generator for the equations of motion:
$\dot{x} = B(x,t)\,\nabla M.$
For instance, in the case of the Lotka–Volterra system (20), we have:
$B = \begin{pmatrix} 0 & uv \\ -uv & 0 \end{pmatrix}$
with the generator:
$M(u,v) = -\alpha \ln v - \delta \ln u + \beta v + \gamma u,$
yielding a fine Poisson system, i.e., a Hamiltonian system of non-canonical form. Sometimes, the term “M-systems” is used to describe molecular systems in biology; however, we stick to the meaning proposed in [36] and accepted by some other authors (e.g., [37]).
Remark 1.
Similarly, we can formulate the Hamiltonian-like M-system by substituting J in place of B and properly transforming the variables. However, a big difference occurs when we ponder both the Poisson and the Hamilton formulations of the same problem: the concentration variables of the different species should take non-negative values, but in the Hamiltonian case they do not have to (an incompressible fluid on $\mathbb{R}^m$). Because of that, we can interpret the compressibility of the phase fluid in the Poisson case as partially arising from constraining the phase space to be $\mathbb{R}_+^m$.
The set of chemical reactions is termed the reaction network. Having:
$A_i + E_i \longrightarrow P_i + B_i, \qquad i = 1, \dots, n,$
we call the species on the left the reactants and the species on the right the products of the reaction. When all stoichiometric coefficients are equal to one, we call such a reaction an elementary reaction. Note that every particular reaction can be written as an elementary reaction when we substitute $A_1 X = X + \dots + X$ ($A_1$ times). As an example, we may consider the Robertson reaction network, mentioned in Section 5, and there should be no confusion in retaining the system in the potential, Poissonian form.
Chemical reactions formulated as a population dynamics problem use the mass action law [29,35]: at constant temperature, for any elementary reaction, its rate is proportional to the concentrations of the reactants.
The matrix formulation is also accessible to problems concerning the mass action law; see, e.g., [6,29]. This simple rule underlies the differential description of such reactions as given in the Michaelis–Menten model, Hill enzymatic equation, or Robertson network.
One of the fundamental features of mass action kinetics is that it produces differential equations with polynomial non-linearities. This also means that when we encounter such a set of equations, we may find the reaction network obeying these. Such a process is referred to as the realizability of the mass action kinetics [29]. An example of this procedure may be the so-called Lotka–Volterra reactions, retrieved from (76); see [6,38].
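A compact sketch of how the mass action law produces such polynomial right-hand sides (the stoichiometric data layout is our own choice, and the rate constants are the textbook values for the Robertson problem, used here purely for illustration):

```python
import numpy as np

# Mass action law as code: every elementary reaction contributes
#   rate = k * prod(conc_i ** (reactant stoichiometry)_i),
# and d(conc)/dt = (products - reactants)^T @ rates.  Robertson network:
#   X -a-> Y,   2Y -b-> Y + Z,   Y + Z -c-> X + Z    (species order X, Y, Z)

reactants = np.array([[1, 0, 0],     # X
                      [0, 2, 0],     # 2Y
                      [0, 1, 1]])    # Y + Z
products  = np.array([[0, 1, 0],     # Y
                      [0, 1, 1],     # Y + Z
                      [1, 0, 1]])    # X + Z
k = np.array([0.04, 3.0e7, 1.0e4])   # illustrative (textbook) rate constants

def mass_action_rhs(conc):
    speeds = k * np.prod(conc ** reactants, axis=1)
    return (products - reactants).T @ speeds

conc = np.array([1.0, 1e-4, 1e-4])
print(mass_action_rhs(conc))
# componentwise: (-a*x + c*y*z,  a*x - b*y**2 - c*y*z,  b*y**2),
# so the total mass H = x + y + z is conserved:
print(np.isclose(mass_action_rhs(conc).sum(), 0.0))
```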

4. Para-Hamiltonian Description of Non-Conservative Systems

Basically speaking, we consider a classical system with energy $E = \tfrac{1}{2}p^2 + V(x)$, where the number of dimensions is irrelevant (for now). However, the Newtonian forces are, in general, essentially non-self-adjoint (simply meaning that they are not integrable [39]):
$\dot{x} = p, \qquad \dot{p} = -\operatorname{grad} V(x) - D(x,p),$
where D stands for dissipative forces.
The theory of ODEs confirms that if we are able to find a unique solution to the problem, then the phase trajectories do not intersect, which keeps the vector field generating the ODE well defined. This means, provided V and D are sufficiently differentiable, that any given instant t of time is in 1:1 correspondence with some point $(x,p)$, and vice versa. The following exposition leans heavily upon this fact and exploits the mentioned correspondence in terms of a particular differentiation procedure along the trajectory. As an effect, we construct a new class of effectively conserved quantities. Hence, we assume that $V \in C^\infty$ and $D \in C^\infty$ in all their arguments in a given domain, in the classical sense.
Definition 1.
Given a Pfaff form $\theta = \kappa(x,p)\,dx + \nu(x,p)\,dp$ (where κ and ν are some functions) and a system of equations whose trajectory (parameterized by t) maps initial data $(x_0, p_0)$ to $(x,p)$, we define:
$\Psi(x,p) = \Psi(x_0,p_0) + \int_{\gamma(t_0,t)} \theta \;\equiv\; \Psi(x_0,p_0) + \int_{(x_0,p_0)}^{(x,p)} \kappa(x,p)\,dx + \nu(x,p)\,dp,$
where the path of the integration is along this trajectory, denoted by $\gamma(t_0,t)$. Then, we denote $đ\Psi := \theta$, and:
$\frac{\partial \Psi}{\partial x}(t) := \kappa(x(t), p(t)), \qquad \frac{\partial \Psi}{\partial p}(t) := \nu(x(t), p(t)).$
Note that the above expressions are not standard partial derivatives, except in the case when the Pfaff form θ is an exact differential. In the following, these operations are referred to as derivatives along the trajectory.
We start with the equations of motion (26) as given. Then (assuming, for simplicity, the one-dimensional case), we seek a “generator of motion”, however, not in the standard sense, but in the sense of Definition 1 (this is what we mean by the “para-Hamiltonian” description):
$\dot{x} = p \overset{?}{=} \frac{\partial K}{\partial p}(t) \;\Rightarrow\; K = \tfrac{1}{2}p^2 + c_1(x), \qquad \dot{p} = -V'(x) - D(x,p) \overset{?}{=} -\frac{\partial K}{\partial x}(t) \;\Rightarrow\; K = V(x) + \int_{\gamma(t_0,t)} D(x,p)\,dx + c_2(p),$
where c 1 ( x ) and c 2 ( p ) can be derived by the comparison of both equations. Hence:
$K = \tfrac{1}{2}p^2 + V(x) + \int_{\gamma(t_0,t)} D(x,p)\,dx.$
In other words, we define in the phase space ( x , p ) one-form đ K , which is neither closed nor exact:
$đK = \frac{\partial K}{\partial x}(t)\,dx + \frac{\partial K}{\partial p}(t)\,dp = \big(V'(x) + D(x,p)\big)\,dx + p\,dp.$
There is ongoing energy exchange with the environment (not necessarily a loss) due to non-potential function D ( x , p ) . We introduce a weird object w called the “reservoir”, dependent on x through the upper limit of integration. Its x-derivative along the trajectory is D ( x , p ) , and theoretically, it is a redundant variable; hence:
$w = \int_{x_0}^{x(t)} D\big(x(t), p(t(x))\big)\,dx(t),$
where all needed inter-dependencies are guaranteed by the existence and uniqueness theorem for ODEs [1]. In the above formula, we perform the Riemann–Stieltjes integral to guarantee there exists some form of 1:1 correspondence between dynamical variables.
In practice, adjoining the reservoir to the system means adjoining non-potential (dissipative) forces as working for the positive account of the energy of the system (meaning they are treated as part of the system). By means of physical analysis, we incorporate the “power continuity law”:
$\frac{dE}{dt} + p\,D(x,p) = 0,$
also giving rise to one-form đ K , which is perceived as a fundamental quantity of our approach (consequently, we vary only the dependent variables).
Meanwhile, it is worth noting that, among the many results aimed at grasping dissipative behavior correctly, the Nosé–Hoover system [13] and its generalizations [40] deserve some amount of attention. While cast in a Hamiltonian-like form (a not-so-canonical Poisson tensor), the equations of motion link the dynamics of the system to the state of the thermostat (through the temperature T), which gives effective access to the thermodynamics of an ensemble of such systems (although not to its statistical dynamics, since the divergence of the vector field is in general non-zero). The importance of this remark rests on the similarities of that description with the one proposed here.
Theorem 1.
The quantity:
K = E + w
is conserved.
Proof. 
We may easily check:
$\dot{K} = \dot{E} + \dot{w} = -D(x,p)\,p + D(x,p)\,p = 0;$
however, it is not a well-defined function of x and p, since it depends on the path taken by the system. □
The above result is important not because of its complicated nature, but because of its simplicity. We have accessed a novel type of conserved quantity, which is of rather little use in purely theoretical considerations; as outlined in the Introduction, however, it is of huge practical/computational value.
Let us observe that K possesses an extremely simple physical interpretation: it is just the initial energy (provided that $w(t_0) = 0$). Since, for general $(x_0, p_0)$, $w(t_0) = \mathrm{const}$, we have $K = E(x_0, p_0) + w(t_0)$; therefore, K is a smooth function of the initial values provided that E is smooth.
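A direct numerical check of Theorem 1 (our own sketch; the linear damping $D(x,p) = b\,p$, the RK4 integrator, and the trapezoidal accumulation of the Riemann–Stieltjes sum are illustrative choices): the energy decays, while $K = E + w$ stays at its initial value up to the integration error.

```python
import numpy as np

# Check of K = E + w for the linearly damped oscillator
#   xdot = p,  pdot = -x - b*p,   with  D(x, p) = b*p.
# The reservoir w = int D dx is accumulated as a trapezoidal
# Riemann-Stieltjes sum along the computed trajectory.

b, h = 0.2, 1e-3

def rhs(y):
    x, p = y
    return np.array([p, -x - b * p])

def rk4(y):
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2)
    k4 = rhs(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(y):
    x, p = y
    return 0.5 * (x**2 + p**2)

y = np.array([1.0, 0.0])
w, K0 = 0.0, energy(y)
for _ in range(20000):
    y_new = rk4(y)
    w += 0.5 * (b * y[1] + b * y_new[1]) * (y_new[0] - y[0])  # dw = D dx
    y = y_new

print("E(t):", energy(y))        # decays towards zero
print("K(t):", energy(y) + w)    # stays close to K0
print("K0  :", K0)
```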
We call a differential form that is not the differential of any function a (pure) Pfaffian form [41]. An example of such a quantity is:
$đw = D(x,p)\,dx.$
In what follows, we will denote by $w_x$ the expression standing next to $dx$ in $đw$.
Therefore, we can define:
$đK = đE + đw,$
as a Pfaffian differential form.
In the above definition, we pay attention to the fact that the Hamiltonian is no longer preserved. Its decrement is exactly the increment of w with the opposite sign. Since the change of the Hamiltonian now obviously depends on the path taken by the system in the phase space, we cannot even claim that the Hamiltonian is still a potential-type function, or a properly defined function at all, although, as a shorthand, we use the term “potential part of the generator of motion” with respect to the Hamiltonian (so, strictly speaking, dE becomes a Pfaffian form).
Since $\dot{E} = -D(x,p)\,p$ (see (33)), the addition of a reservoir with exactly the opposite time derivative yields the conserved quantity. Consequently, we try to describe the problem in two different sets of canonical coordinates in several dimensions: $(x^i, p_i)$, $(u^i, w_i)$, $i = 1, \dots, n$. Motion begins with $K_0 = L_0 = E(x_0, p_0)$; energy as a number at a given time is invariant with respect to the change of coordinates (“frozen” time transformations); since the initial value $K_0 = L_0$ is also invariant, it induces the invariance of the integral $w = \int_{x_0}^{x} D_i(x,p)\,dx^i$ while changing the description of the system, so that:
$đK = p_i\,dp_i + \partial_{x^i} V(x)\,dx^i + D_i(x,p)\,dx^i, \qquad đL = w_i\,dw_i + \partial_{u^i} \tilde{V}(u)\,du^i + F_i(u,w)\,du^i,$
where the summation over repeating indices is assumed.
Demanding the para-Hamiltonian framework to hold in both sets of symplectic coordinates:
$\dot{x}^i = \frac{\partial K}{\partial p_i}(t), \qquad \dot{u}^i = \frac{\partial L}{\partial w_i}(t), \qquad \dot{p}_i = -\frac{\partial K}{\partial x^i}(t), \qquad \dot{w}_i = -\frac{\partial L}{\partial u^i}(t)$
we obtain the condition for action differentials to differ by an exact differential (as usual):
$p_i\,dx^i - w_i\,du^i = d\phi,$
where the time-dependent differentials cancel due to the common initial conditions. Traditionally performing proper Legendre transforms, we can express ϕ as a function of the preferred two subsets of the old/new variables.
Here, the integral parts with the non-potential functions match due to the invariant character of energy; the covectorial character of momenta induces $w_j = p_i(x,w)\,\frac{\partial p_i}{\partial w_j}$; the left-over condition gives:
$\int \Big( D_i(x,p) - F_j(u,w)\,\frac{\partial u^j}{\partial x^i} \Big)\, dx^i = 0,$
where we tacitly assumed $u(x_0) = u_0$. The demand for the integral to vanish is sufficient for the integrand to vanish as well. However, the dependence of D on p (or w) forbids it from being a simple gradient. Hence, D is a covector, but it is not a gradient of any scalar function.
The potential part is invariant due to the symplectic structure; hence, the dissipative form integrated from $x_0$ to x, or from $u(x_0)$ to $u(x)$, has to be invariant in order to ensure that the energy dissipated during the motion does not depend on the description. We call such invariance secondary (or induced) invariance.
From earlier considerations, we may generalize (8) to:
$X_K = \pi(\cdot, đK)$
giving rise to the Poisson bracket:
$\{f, K\} = X_K(f),$
for any function given on the phase space. We can conceive of K as the formal generator of non-potential motion: its Poisson bracket with canonical coordinates gives proper equations of motion:
$\dot{x} = \{x, K\} = \pi(dx, đK) = p, \qquad \dot{p} = \{p, K\} = \pi(dp, đK) = -V'(x) - D(x,p);$
moreover:
$\dot{w} = \{w, K\} = \pi(đw, đK) = D(x,p)\,p,$
which is indeed the case.
We can regard the condition:
$đK(v) = 0$
for any $v \in T(T^*M)$ as a formal substitute for the law of the conservation of energy, determining the trajectory uniquely.
In agreement with the thermodynamic description, we may think of the energy function as an analogue of the internal energy U. Thus, we arrive at two different interpretations:
1.
Adjoining the external forces to the language in which we describe the system, we obtain a closed system; hence, $dU = -P\,dV = -đw = -D(x,p)\,dx$, and so the non-potential force exerts a kind of “pressure” on the ensemble of systems in the phase space (see below);
2.
If $đK = dU = 0$, then we view $T\,dS$ and $P\,dV$ as equal, so reaching for the phase pressure obtained below for the phase fluid and considering the phase volume $dx\,dp$ as dV, we can pick the monotonically growing component of the expression $P\,dV$ and treat it as an infinitesimal increment of the entropy analogue of the system, the remaining factor being the “temperature”.
Summing up all the differentiability conditions, we seek $E(x,p) \in C^n$ and $w(x) \in C^1$ such that $K = E + w \in C^1$, but $K \notin C^m$, with m, n arbitrary.
With this setting in mind, we can derive new formulas for the vector field algebra. Considering K the formal generator of motion and f a well-differentiable function, we obtain:
$[X_f, X_K]_L = X_{\{f,K\}} + \operatorname{div}(v)\, X_f,$
seeing that the compressibility term ($v$ is of course the vector tangent to the phase flow, namely $X_K$), responsible for the discontinuities, causes an anomaly in the vector field algebra to occur.
It is possible to find a similar formula for the pair of reservoir-containing K , L , say. However, it is not very useful in the context of the Hamiltonian description of mechanical systems since there is only one of these needed to govern the dynamics. The situation dramatically changes in Nambu, or generalized Nambu, mechanics (e.g., [42]), where the dynamics is given in terms of a few such vector fields.
We stick to this working approach, especially since it guarantees:
$\omega = dx \wedge dp$
as a symplectic form. During the former considerations, it was preserved by the flow of Hamiltonian vector field X H , and now, it satisfies:
$\mathcal{L}_{X_K}(\omega) = X_K \lrcorner\, d\omega + d(X_K \lrcorner\, \omega) = d(đE + đw) = d(-đw + đw) = 0,$
as provided by the equations of motion: $\dot{p} = -V'(x) - D(x,p)$, hence $dp = -V'(x)\,dt - D(x,p)\,dt$; multiplying both sides by p, we obtain $đE = p\,dp + V'(x)\,dx = -D(x,p)\,dx$.
$X_K$ is defined unambiguously throughout the phase space as a section of $T(T^*M)$, hence ensuring that phase trajectories do not cross each other.
When it comes to the value of K, it is determined by the initial conditions $(q_0, p_0)$. Hence, $K \in \mathcal{F}(T^*M(t_0))$, so its constant value is uniquely determined by the state of the system at the initial moment. As a function of the flow, K may be given as:
$K = \int_{t_0}^{t} v \lrcorner\, \omega,$
making its dependence on the initial conditions much less manifest.
A little bit more sophisticated is the simultaneous use of two reservoirs for the system:
$\dot{x} = p + F(x,p), \qquad \dot{p} = -V'(x) - D(x,p),$
yielding:
$đK = \big(p + F(x,p)\big)\,dp + \big(V'(x) + D(x,p)\big)\,dx$
and, accordingly:
$X_K = \big(p + F(x,p)\big)\,\partial_x - \big(V'(x) + D(x,p)\big)\,\partial_p.$
Since we are playing with an ODE system, when we assume that $A(x)\,\nabla K = f(x) \in C^r(D)$, $r \geq 1$, the theorem on the existence and uniqueness of solutions is in force [1,19,20]. Therefore, there is a 1:1 correspondence between every moment in time and the points of the phase space. Trajectories in the phase space of the system obviously do not cross. Hence, we can perform, in an unambiguous sense, any integral of a function of the variables of the system with respect to some of these variables (or time) as a Riemann–Stieltjes integral.
Therefore, we can write:
$K = \tfrac{1}{2}p^2 + V(x) + w + z,$
where the reservoirs are defined by:
$w = \int_{t_0}^{t} D\big(x(t), p(t)\big)\,dx(t), \qquad z = \int_{t_0}^{t} F\big(x(t), p(t)\big)\,dp(t),$
so that $\dot{K} = 0$.
Now, in order to make the current discussion as similar to the conservative case as possible, we focus for a moment on the hydrodynamical analogy, starting from the continuity equation:
$\frac{d\rho}{dt} + \rho\,\nabla \cdot v = 0,$
hence:
$\rho = C\, e^{\int_{t_0}^{t} D_p(q,p)\,dt},$
where C is a constant and the lower index denotes the derivative with respect to an argument.
The integral in the exponent does not cause any trouble, since all the fluid’s particles obey the equations of motion; hence, we may again apply the Riemann–Stieltjes integral.
Note that we can consider C as depending on the canonical variables; from the continuity equation, we then obtain the constraint $v \cdot \nabla C = 0$, so C may depend on the value of K, i.e., on the phase trajectory of interest. Here, as always, $v = X_K$.
Additionally, we have Bernoulli’s law (from Euler’s equation):
$\tfrac{1}{2}\rho v^2 + P = \mathrm{const}.$
Let us take the example of linearly damped harmonic oscillators with the equations of motion:
$\dot{x} = p, \qquad \dot{p} = -x - b\,p.$
Since all fluid particles obey these equations of motion, the continuity equation yields:
$\rho = c\,K\, e^{bt},$
where c is a constant.
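This exponential growth of the density can be checked directly (our own illustration, not part of the original argument): since $\operatorname{div} v = -b$, Liouville's formula gives $\det J(t) = e^{-bt}$ for the Jacobian of the flow map, so the density along a trajectory indeed behaves like $e^{bt}$.

```python
import numpy as np

# Liouville check for xdot = p, pdot = -x - b*p:  div v = -b, hence the
# flow-map Jacobian satisfies det J(t) = exp(-b*t) and the density along a
# trajectory grows like exp(+b*t).  The variational equation dJ/dt = A @ J
# (A is constant for this linear system) is integrated with the trajectory.

b, h, T = 0.2, 1e-3, 10.0
A = np.array([[0.0, 1.0], [-1.0, -b]])

def rhs(s):                      # s = (x, p, J11, J12, J21, J22)
    y, J = s[:2], s[2:].reshape(2, 2)
    return np.concatenate([A @ y, (A @ J).ravel()])

def rk4(s):
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * h * k1)
    k3 = rhs(s + 0.5 * h * k2)
    k4 = rhs(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.concatenate([[1.0, 0.0], np.eye(2).ravel()])
for _ in range(int(T / h)):
    s = rk4(s)

print("det J(T) :", np.linalg.det(s[2:].reshape(2, 2)))
print("exp(-bT) :", np.exp(-b * T))   # phase volume contracts, rho ~ exp(+bT)
```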
Writing Bernoulli’s law:
$\frac{1}{2}\,c\,K\, e^{bt}\big(x^2 + p^2 + 2bpx + b^2p^2\big) + P = \mathrm{const},$
and remembering that $E \propto e^{-bt}$, we see:
$P = P_0 - c\,K\, e^{bt}\Big(bxp + \tfrac{1}{2}b^2p^2\Big).$
Fortunately, we know the solutions of the damped oscillator, $x \propto e^{-\frac{b}{2}t}\cos(\omega t + \delta)$; hence, $p \propto e^{-\frac{b}{2}t}\big(\cos(\omega t + \delta) - \omega \sin(\omega t + \delta)\big)$. Thus, we see that there is no danger of the variable part of the pressure growing to infinity. Provided that the engaged constants obey:
$P_0 - c\,K\,A_0^2\left[\Big(b + \tfrac{b^2}{2}\Big)\cos^2\delta - \big(\omega b + b^2\big)\sin\delta\cos\delta + \tfrac{1}{2}b^2\omega^2\sin^2\delta\right] \geq 0,$
where A 0 is the initial amplitude, the pressure is always positive. Notice that for different K (initial energy), this demand can somewhat change quantitatively.

5. Particular Non-Potential Systems

Here, we give a sequence of illustrative examples, treatable along the lines of the presented approach. Their objective is to show the applicability of the invented framework to low-dimensional systems and to directly show the presence and surprising preservation of the symplectic structure. As mentioned in the Introduction, the general statement with the proof will be published elsewhere.
Example 1. van der Pol oscillator
 
The van der Pol oscillator is a system that arises as some generalization of an RLC circuit (through the so-called Liénard-form equation of the VdP oscillator [19]), and its equations of motion are:
$\dot{x} = p, \qquad \dot{p} = -x + \varepsilon\,(1 - x^2)\,p.$
The non-potential generator of motion is:
$K = \tfrac{1}{2}(x^2 + p^2) + w,$
where the reservoir variable is given by:
$w = -\varepsilon \int_{t_0}^{t} \big(1 - x^2(t)\big)\,p(t)\,dx(t),$
turning K into an effectively conserved quantity.
Remark 2.
Note that, taking the derivatives of K along the phase curve, we have:
$\begin{pmatrix} \dot{x} \\ \dot{p} \end{pmatrix} = \begin{pmatrix} \frac{\partial K}{\partial p}(t) \\ -\frac{\partial K}{\partial x}(t) \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \nabla K,$
so that the system possesses a symplectic structure. Nevertheless, it may seem that it should not hold along the trajectory in the phase space, since $K \notin C^2(T^*M)$ (more precisely, we have $K \in C^1(T^*M)$). However, due to the vanishing of đK by construction, we have (49), and the symplectic structure is preserved.
Remark 3.
Note that, from now on, to avoid cumbersome writing in integrals such as (66), we often write y instead of y(t), etc.
Example 2. Brusselator
 
The Brusselator is the dynamical system modeling the auto-catalytic reaction network [21]:
$A \longrightarrow X, \qquad 2X + Y \longrightarrow 3X, \qquad B + X \longrightarrow Y + D, \qquad X \longrightarrow E.$
We assume that the substrates A, B are abundant in the environment, so we can treat their concentrations a, b as constant. We identify the concentrations of the species X and Y as the dynamical variables x and y, respectively. From (68), with the use of the mass action principle, we obtain:
$\dot{x} = a + x^2 y - bx - x, \qquad \dot{y} = bx - x^2 y.$
A quick look at these confirms there is no resource function M in the usual sense; the non-potential generator of motion becomes:
$K = a\,y - \tfrac{1}{2}\,b\,x^2 + w + z,$
where the reservoir variables are given by:
$w = \int_{t_0}^{x(t)} x^2\, y\; dx, \qquad z = \int_{t_0}^{y(t)} \big(x^2 y - b x - x\big)\, dy,$
so that $\dot{K} = 0$.
Remark 4.
Note that again, in the x, y variables, the system is in canonical form $\dot{\mathbf{x}} = J\,\nabla K(t)$ with:
$J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},$
which is preserved due to the courtesy of K being constant during the evolution.
Note that (69) has a fixed point at $(a, b/a)$. This equilibrium is unstable when $b > 1 + a^2$; if $b < 1 + a^2$, it is stable. The case $b = 1 + a^2$ presents some doubts: in this situation, the fixed point appears to be a center (from the linearization procedure); however, we know that if the dimension (here: the number of dependent variables) of the ODE system is $n \geq 2$, then the Hartman–Grobman theorem on linearization often fails at predicting the existence of a center [19,20,21].
Example 3. The Nosé–Hoover system [13,40]
We put the thermostatted system in the form:
$\dot{q} = \frac{p}{m} = \frac{\partial K}{\partial p}(t), \qquad \dot{p} = -\frac{\partial \Phi}{\partial q} - \frac{p\,p_\eta}{Q} = -\frac{\partial K}{\partial q}(t), \qquad \dot{\eta} = \frac{p_\eta}{Q} = \frac{\partial K}{\partial p_\eta}(t), \qquad \dot{p}_\eta = \frac{p^2}{m} - kT = -\frac{\partial K}{\partial \eta}(t),$
which gives:
$K = \frac{p^2}{2m} + \Phi(q) + \frac{p_\eta^2}{2Q} + kT\,\eta + \int_{q_0}^{q(t)} \frac{p\,p_\eta}{Q}\,dq - \int_{\eta_0}^{\eta(t)} \frac{p^2}{m}\,d\eta,$
from which, by differentiation, we can recover the equations of motion. Note that the transition to the time parameterization of the reservoirs (integral variables) shows simply that K = H, the Hamiltonian presented, for example, by Ezra [13].
Remark 5.
The almost-Poisson structure provided in [40], given by:
$B(x) = \begin{pmatrix} 0 & 0 & 1 & \frac{\tau p}{m} \\ 0 & 0 & 0 & 1 \\ -1 & 0 & 0 & p \\ -\frac{\tau p}{m} & -1 & -p & 0 \end{pmatrix}$
is obviously of the non-canonical form. The time-dimensional parameter τ governing the evolution of the Nosé variable η must be set to zero, to obtain again the simple dynamics of (73).
Finally, we note that in our formulation of non-potential Hamiltonian mechanics, the system is canonical with two physical degrees of freedom. This property is obviously preserved along the evolution.
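A quick numerical confirmation (our own sketch; the harmonic potential $\Phi(q) = q^2/2$ and unit values of $m$, $Q$, $kT$ are assumptions made only for the test) that the quantity K above is constant along the Nosé–Hoover flow (73):

```python
import numpy as np

# Conservation check for the Nose-Hoover system with Phi(q) = q^2/2 and
# m = Q = kT = 1.  The monitored quantity is
#   K = p^2/(2m) + Phi(q) + p_eta^2/(2Q) + kT*eta.

m = Q = kT = 1.0
h = 1e-3

def rhs(s):
    q, p, eta, peta = s
    return np.array([p / m,
                     -q - p * peta / Q,        # -dPhi/dq - p*p_eta/Q
                     peta / Q,
                     p**2 / m - kT])

def rk4(s):
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * h * k1)
    k3 = rhs(s + 0.5 * h * k2)
    k4 = rhs(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def K(s):
    q, p, eta, peta = s
    return 0.5 * p**2 / m + 0.5 * q**2 + 0.5 * peta**2 / Q + kT * eta

s = np.array([1.0, 0.0, 0.0, 0.0])
K0 = K(s)
for _ in range(50000):
    s = rk4(s)
print("|K(t) - K(0)| =", abs(K(s) - K0))    # small, set by the RK4 error
```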
Example 4. Lotka–Volterra system
 
The Lotka–Volterra model describes the basic predator–prey interaction (with a linear response):
$\dot{u} = u - \alpha\,u\,v, \qquad \dot{v} = -\beta\,v + u\,v,$
where u is the prey concentration in an environment, v is the predator concentration, α is the rate at which the consumption of the prey by a predator proceeds, and β is the death rate of the predator. Note that we chose a unit rate for the birth of the prey and for the predator feeding on the prey.
We can write down these equations as a Poisson system:
$\dot{\mathbf{x}} = B(\mathbf{x})\,\nabla M, \qquad B(\mathbf{x}) = \begin{pmatrix} 0 & uv \\ -uv & 0 \end{pmatrix},$
claiming that $\mathbf{x} = (u, v)^T$, $\nabla = (\partial_u, \partial_v)^T$, and the resource function is:
$M = \beta \ln u + \ln v - u - \alpha v.$
We should observe that the LV problem formulated this way is given on the phase space $\mathbb{R}_+^2$ and, as a Poisson system, has a compressible phase fluid: $\operatorname{div} \dot{\mathbf{x}} = 1 - \beta - \alpha v + u$; compare Remark 1.
The Poisson structure is even-dimensional and non-degenerate, so we can bring the system to canonical form by the transformation $u = e^q$, $v = e^p$, after which the equations of motion become:
$\dot{q} = 1 - \alpha e^p, \qquad \dot{p} = -\beta + e^q,$
with the incompressible phase fluid on the $\mathbb{R}^2$ symplectic phase space, with the separable resource function:
$M = p - \alpha e^p + \beta q - e^q.$
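A short numerical check (ours; the parameter values and the initial data are arbitrary) that M is indeed constant along the Lotka–Volterra flow and that the separable generator in the log variables takes the same values:

```python
import numpy as np

# The resource function M = beta*ln(u) + ln(v) - u - alpha*v is a first
# integral of the Lotka-Volterra system; in the coordinates q = ln(u),
# p = ln(v) it becomes M = p - alpha*e^p + beta*q - e^q and takes the same
# numerical values.  Parameter values below are arbitrary test choices.

alpha, beta, h = 1.5, 1.0, 1e-3

def rhs(y):
    u, v = y
    return np.array([u - alpha * u * v, -beta * v + u * v])

def rk4(y):
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2)
    k4 = rhs(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def M_uv(u, v):
    return beta * np.log(u) + np.log(v) - u - alpha * v

def M_qp(q, p):
    return p - alpha * np.exp(p) + beta * q - np.exp(q)

y = np.array([2.0, 0.5])
M0 = M_uv(*y)
for _ in range(20000):
    y = rk4(y)
u, v = y
print("drift of M(u, v)    :", abs(M_uv(u, v) - M0))
print("M(q, p) == M(u, v) ?:", np.isclose(M_qp(np.log(u), np.log(v)), M_uv(u, v)))
```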
Note that the LV Poisson system would become of the canonical form also with:
$đK = u\,(1 - \alpha v)\,dv + v\,(\beta - u)\,du.$
Moreover, we have:
$(\nabla K)^T B(\mathbf{x})\,\nabla M = 0 = (\nabla M)^T J\,\nabla K;$
hence, the integrals of motion commute in terms of each Poisson structure, but this only preserves the equilibria. What is more important is that we have the following.
Corollary 1.
The transition from the Poisson dynamics governed by the resource function M in (78) to the canonical form, the evolution of which is dictated by (81), preserves the Poisson bracket of a target function with the generator of motion (with its coupled matrix structure).
This remark is easily verifiable on a case-by-case basis, provided that the Poisson bracket of M with coordinates u , v is preserved.
Knowing that the non-potential generator K provokes anomalies of the vector field algebra, we expect, and indeed obtain:
$[X_f, X_K]_L = X_{\{f,K\}} + \operatorname{div}(X_K)\,X_f.$
Example 5. Robertson reactions
 
The reaction network:
$X \xrightarrow{\;a\;} Y, \qquad Y + Y \xrightarrow{\;b\;} Y + Z, \qquad Y + Z \xrightarrow{\;c\;} X + Z$
is a system of auto-catalytic reactions where a , b , c are the reaction rates.
The mass action law gives a system clearly expressible in the gradient form ($\mathbf{x} = (x, y, z)^T$):
$\dot{\mathbf{x}} = B(\mathbf{x})\,\nabla H, \qquad B(\mathbf{x}) = \begin{pmatrix} 0 & cyz + by^2 & -ax - by^2 \\ -cyz - by^2 & 0 & ax \\ ax + by^2 & -ax & 0 \end{pmatrix},$
with conserved H = x + y + z (classical rule of mass conservation). Note additionally that B ( x ) does not obey Jacobi’s identity, although its skew symmetry itself guarantees the conservation property (the system is almost Poisson) [12].
We are able to write down the system in a different form:
$\dot{\mathbf{x}} = \varepsilon\,\nabla K,$
ε being the totally anti-symmetric Cartesian-tensor of order three and:
$K = -\tfrac{1}{2}\,a\,x^2 - \tfrac{1}{3}\,b\,y^3 - a \int_{y_0}^{y(t)} x\, dy - \int_{z_0}^{z(t)} \big(b\,y^2 + c\,y\,z\big)\, dz;$
therefore, we need a pair of reservoirs. In this form, the system is a Poisson one: the anti-symmetric structure matrix obeys the Jacobi identity; moreover, the system admits the Casimir function H, since it is obvious that $\varepsilon\,\nabla H = 0$; hence, we can proceed with the construction of Darboux coordinates on a single symplectic leaf of the system, e.g., $(y, z)$, where $x = m_0 - y - z$, with $m_0$ constant.
The system reduces to:
$\dot{y} = \mu - a\,y - a\,z - b\,y^2 - c\,y\,z = -\frac{\partial K}{\partial z}(t), \qquad \dot{z} = b\,y^2 = \frac{\partial K}{\partial y}(t),$
where $\mu = a\,m_0$. To cast the above system in the gradient form, we need only a single reservoir:
$K = -\mu z + \tfrac{1}{2}\,a\,z^2 + \tfrac{1}{3}\,b\,y^3 + \int_{z_0}^{z(t)} \big(a\,y + b\,y^2 + c\,y\,z\big)\,dz$
and it is explicitly of the canonical form.
It is worth stressing that we can apply the Casimir function to the generator governing the evolution of the system (87) to reduce the number of variables, but the formula will be different from that obtained by applying the given Casimir to the equations of motion and then finding the reduced generator (89). The results are obviously unequal, but their differentials are cohomologically equivalent [1].
Example 6.
To illustrate that the ideas presented here also work in higher dimensions, we consider the five-dimensional system studied in [43]:
$\dot{p} = \alpha(z) - \beta p y + \gamma p q, \qquad \dot{q} = \delta(z)\,q - \varepsilon q x - \gamma p q, \qquad \dot{x} = \varepsilon q x - \zeta x, \qquad \dot{y} = \beta p y - \eta y, \qquad \dot{z} = -\alpha(z)\,p - \delta(z)\,q + \eta y + \zeta x.$
Summing all the equations together, we find:
$\frac{d}{dt}\big(p + q + x + y + z\big) = \alpha(z)\,(1 - p),$
and performing the Riemann–Stieltjes integral, we gain access to:
$K = p + q + x + y + z - \int_{t_0}^{t} \alpha(z)\,(1 - p)\,dt = C_0$
on the target level set.
Eliminating x, we obtain:
$\dot{p} = \alpha(z) - \beta p y + \gamma p q, \qquad \dot{q} = \delta(z)\,q - \varepsilon q \Big( C_0 + \int_{t_0}^{t} \alpha(z)(1 - p)\,dt - p - q - y - z \Big) - \gamma p q, \qquad \dot{y} = \beta p y - \eta y, \qquad \dot{z} = -\alpha(z)\,p - \delta(z)\,q + \eta y + \zeta \Big( C_0 + \int_{t_0}^{t} \alpha(z)(1 - p)\,dt - p - q - y - z \Big),$
which is writable in the Hamiltonian form with four reservoirs, provided we choose canonical pairs, for example ( p , y ) , ( q , z ) .
Observe the interesting novelty: traditional first integrals lead to a reduction of the order of the ODE by one; effectively conserved quantities instead turn the problem into an integro-differential one. The significance of this fact, and our capability to cope with it numerically, will be discussed in future papers. The following example illustrates the possibility of shaping the occurring systems according to our needs and taste.
Example 7.
Consider the pretty general system on $\mathbb{R}^3$:
$\dot{x} = f(x,y,z), \qquad \dot{y} = g(x,y,z), \qquad \dot{z} = h(x,y,z),$
with given initial conditions ( x 0 , y 0 , z 0 ) .
If we would like to render the system Poisson, we can assume:
$f = a\,K_y - b\,K_z, \qquad g = -a\,K_x + c\,K_z, \qquad h = b\,K_x - c\,K_y,$
with:
$B(\mathbf{x}) = \begin{pmatrix} 0 & a & -b \\ -a & 0 & c \\ b & -c & 0 \end{pmatrix}$
being the Poisson structure matrix, the entries of which satisfy the Jacobi identity:
$a_z - b_y - a_x + c_z + b_x - c_y = 0.$
This constraint is the only requirement that a, b, c have to obey; apart from it, they can be truly arbitrary.
Solving the linear system of Equation (95), we can determine the formal generator of motion up to a constant (of course, we demand that $K_x$, $K_y$, and $K_z$ be sufficiently smooth functions). Doing this, we have:
$K_x = \frac{1}{2}\left(\frac{h}{b} - \frac{g}{a}\right), \qquad K_y = \frac{1}{2}\left(\frac{f}{a} - \frac{h}{c}\right), \qquad K_z = \frac{1}{2}\left(\frac{g}{c} - \frac{f}{b}\right),$
thus introducing at most three reservoirs (in fact, the multiplicative factors could be chosen in a much more general way).
Obviously, for a given B ( x ) , we can introduce the formal Casimir function and then put the system in Hamiltonian form. Please notice that this general three-dimensional dynamical system is unique in a very specific sense: for dimensions greater than three, we obtain the whole family of non-potential generators equivalent in terms of gauge-type transformations. This remark leaves the door open for the future development of the presented reservoir approach.

6. Conclusions

Our article had a twofold objective. In Section 2 and Section 3, we reintroduced some notions of the geometric language of classical mechanics in convenient notation and summarized the basic terminology of kinetic chemistry and population dynamics. The considered examples were presented in a convenient Hamiltonian-like or Poisson-like framework.
In the main section of this paper, Section 4, we chose a rather pragmatic, not to say banausic, view of the theorem of the existence and uniqueness of the solutions of ordinary differential equations and exploited this for computational ends, to gain access to the so-called non-potential (or effective) integrals of motion. Here, we emphasize that they can be treated along the lines of a weak formulation of the integrals of motion (level manifolds). As a result, we proposed a new, para-Hamiltonian, description of ODEs.
Section 5 presented versatile examples showing how to use the constructed formalism. Among a variety of formal features, our approach is distinguished by the possibility of imposing the prescribed form of the problem under study, due only to our taste and convenience, which is clearly shown in Example 7.
Apparently, the innocent play with the dependent variables provided us with a tiny bit of additional information about the dynamical system not possessing first integrals in the classical sense. Non-potential, conserved quantities are of no use from the point of view of pure mathematics; yet, they can be very helpful in the numerical analysis of dynamical problems and the construction of new algorithms. In particular, we plan to construct geometric numerical integration schemes of any order for dissipative systems, analogous to those studied in [44]. The work in this direction is in progress.

Author Contributions

Conceptualization, A.K.; methodology, A.K. and J.L.C.; formal analysis, A.K.; investigation, A.K. and J.L.C.; writing—original draft preparation, A.K.; writing—review and editing, A.K. and J.L.C., supervision, J.L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Arnold, V.I. Mathematical Methods of Classical Mechanics; Springer: New York, NY, USA, 1978.
2. Bender, C.M.; Orszag, S.A. Advanced Mathematical Methods for Scientists and Engineers; Springer: New York, NY, USA, 1999.
3. Süli, E.; Mayers, D.F. An Introduction to Numerical Analysis; Cambridge University Press: Cambridge, UK, 2003.
4. Hairer, E.; Lubich, C.; Wanner, G. Geometric Numerical Integration: Structure Preserving Algorithms for Ordinary Differential Equations; Springer: Berlin/Heidelberg, Germany, 2006.
5. Cieśliński, J.L.; Ratkiewicz, B. Long-time behavior of discretizations of the simple pendulum equation. J. Phys. A Math. Theor. 2009, 42, 105204.
6. Bertolazzi, E. Positive and conservative schemes for mass action kinetics. Comput. Math. Appl. 1996, 32, 29–43.
7. Cieśliński, J.L. Locally exact modifications of numerical schemes. Comput. Math. Appl. 2013, 65, 1920–1938.
8. Cieśliński, J.L.; Kobus, A. Locally Exact Integrators for the Duffing Equation. Mathematics 2020, 8, 231.
9. Mickens, R.E. Numerical integration of population models satisfying conservation laws: NSFD methods. J. Biol. Dyn. 2007, 1, 427–436.
10. Itoh, T.; Abe, K. Hamiltonian-conserving discrete canonical equations based on variational difference quotients. J. Comput. Phys. 1988, 76, 85–102.
11. Gonzalez, O. Time integration and discrete Hamiltonian systems. J. Nonlinear Sci. 1996, 6, 449.
12. McLachlan, R.I.; Quispel, G.R.W.; Robidoux, N. Geometric integration using discrete gradients. Philos. Trans. R. Soc. Lond. Ser. A 1999, 357, 1021–1045.
13. Ezra, G.S. Reversible measure-preserving integrators for non-Hamiltonian systems. J. Chem. Phys. 2006, 125, 034104.
14. Schwarz, F.; Steeb, W.-H. Symmetries and first integrals for dissipative systems. J. Phys. A Math. Gen. 1984, 17, L819.
15. Honein, T.; Chien, N.; Herrmann, G. On conservation laws for dissipative systems. Phys. Lett. A 1991, 155, 223–224.
16. Delphenich, D.H. Integrability and the variational formulation of non-conservative mechanical systems. Ann. Phys. 2009, 18, 45–56.
17. León, M.; Sardón, C. A geometric approach to solve time dependent and dissipative Hamiltonian systems. arXiv 2016, arXiv:1607.01239v1.
18. García-Naranjo, L.C.; Marrero, J.C. The geometry of nonholonomic Chaplygin systems revisited. Nonlinearity 2020, 33, 1297.
19. Hale, J.K.; Kocak, H. Dynamics and Bifurcations; Springer: New York, NY, USA, 1996.
20. Hirsch, M.W.; Smale, S. Differential Equations, Dynamical Systems, and Linear Algebra; Academic Press: San Diego, CA, USA, 1974.
21. Kuznetsov, Y.A. Elements of Applied Bifurcation Theory; Springer: New York, NY, USA, 1995.
22. McLachlan, R.; Quispel, G.; Robidoux, N. Unified Approach to Hamiltonian Systems, Poisson Systems, Gradient Systems, and Systems with Lyapunov Functions or First Integrals. Phys. Rev. Lett. 1998, 81, 2399.
23. Babelon, O.; Bernard, D.; Talon, M. Introduction to Classical Integrable Systems; Cambridge Monographs on Mathematical Physics; Cambridge University Press: Cambridge, UK, 2003.
24. Weinstein, A. The local structure of Poisson manifolds. J. Diff. Geom. 1983, 18, 523–557.
25. Weinstein, A. Poisson geometry. Differ. Geom. Its Appl. 1998, 9, 213–238.
26. Karasözen, B. Poisson integrators. Math. Comput. Model. 2004, 40, 1225–1244.
27. Kozlov, V.V. Hamiltonian Systems with Three Degrees of Freedom and Hydrodynamics. In Hamiltonian Systems with Three or More Degrees of Freedom; Simó, C., Ed.; NATO ASI Series; Springer: Berlin/Heidelberg, Germany, 1999; Volume 533.
28. Baker, C.M.; Diele, F.; Lacitignola, D.; Marangi, C.; Martiradonna, A. Optimal control of invasive species through a dynamical systems approach. Nonlinear Anal. Real World Appl. 2019, 49, 45–70.
29. Chellaboina, V.; Bhat, S.; Haddad, W.; Bernstein, D. Modeling and analysis of mass action kinetics. IEEE Control Syst. Mag. 2009, 29, 60–78.
30. Formaggia, L.; Scotti, A. Positivity and Conservation Properties of Some Integration Schemes for Mass Action Kinetics. SIAM J. Numer. Anal. 2011, 49, 1267–1288.
31. Diele, F.; Marangi, C.; Ragni, S. IMSP schemes for spatially explicit models of cyclic populations and metapopulation dynamics. Math. Comput. Simul. 2015, 110, 83–95.
32. Brauer, F.; Castillo-Chavez, C. Mathematical Models in Population Biology and Epidemiology; Springer: New York, NY, USA, 2012.
33. Turchin, P. Complex Population Dynamics; Princeton University Press: Princeton, NJ, USA, 2003.
34. Al-Moqbali, M.K.; Al-Salti, N.S.; Elmojtaba, I.M. Prey-predator models with variable carrying capacity. Mathematics 2018, 6, 102.
35. Strogatz, S.H. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering; Westview Press: Boulder, CO, USA, 2015.
36. Martinez-Linares, J. Phase Space Formulation of Population Dynamics in Ecology. arXiv 2013, arXiv:1304.2324.
37. Diele, F.; Marangi, C. Geometric Numerical Integration in Ecological Modeling. Mathematics 2020, 8, 25.
38. Boros, B.; Hofbauer, J.; Müller, S.; Regensburger, G. The Center Problem for the Lotka Reactions with Generalized Mass-Action Kinetics. Qual. Theory Dyn. Syst. 2018, 17, 403–410.
39. Santilli, R.M. Foundations of Theoretical Mechanics I: The Inverse Problem in Newtonian Mechanics; Springer: New York, NY, USA; Heidelberg/Berlin, Germany, 1983.
40. Sergi, A.; Ferrario, M. Non-Hamiltonian equations of motion with a conserved energy. Phys. Rev. E 2001, 64, 056125.
41. Popescu, P.; Popescu, M. On Pfaff systems. BSG Proc. 2012, 19, 152–162.
42. Nambu, Y. Generalized Hamiltonian Dynamics. Phys. Rev. D 1973, 7, 2405.
43. Hadley, S.A.; Forbes, L.K. Dynamical Systems Analysis of a Five-Dimensional Trophic Food Web Model in the Southern Oceans. J. Appl. Math. 2009, 2009, 575047.
44. Cieśliński, J.L.; Ratkiewicz, B. Discrete gradient algorithms of high order for one-dimensional systems. Comput. Phys. Commun. 2012, 183, 617–627.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
