High-energy physics, and in particular high-energy heavy-ion physics, is an interdisciplinary topic. It uses the theory of relativistic quantum fields, statistical physics, thermo- and hydrodynamics, and even the theory of curved space-times. Earlier studies show that non-extensive statistical physics provides a useful tool to describe particle-particle collisions, where “particle” now stands for an electron/positron, a proton, or a heavy nucleus. The non-extensivity in high-energy physics manifests itself both in the non-exponential energy distributions and in the non-Poissonian multiplicity distributions.
3.1. The Description of the Inclusive Hadron Production
Quantum Chromodynamics (QCD) is the fundamental theory of the strong interaction. Due to the energy-scale-dependent behavior of the strong coupling, the perturbative QCD (pQCD)-based parton model, initiated by Bjorken and Feynman, works extremely well at high energies [29]. In the framework of the pQCD-based parton model, all hadrons are built up from partons (bare, nearly massless quarks and gluons); therefore, the inner structure of the initially colliding and the finally produced hadrons is described by parton distribution functions (PDF) and fragmentation functions (FF), respectively. These non-perturbative distribution functions are defined in momentum space and can be parametrized by a polynomial
ansatz. The PDF, $f_a^h(x, Q^2)$, gives the distribution of parton $a$ inside the hadron $h$ at the energy scale $Q^2$, while $x$ is the momentum fraction carried by that parton. On the other hand, the confinement of the parton $c$ into the final-state hadron $h$ with the momentum fraction $z$ can be described at scale $Q^2$ with the help of the fragmentation functions, $D_c^h(z, Q^2)$.
In this framework the inclusive cross-section of a given hadron $h$ produced in proton-proton collisions can be calculated by the following convolution:

$E_h \frac{\mathrm{d}\sigma_h}{\mathrm{d}^3 p} = \sum_{abc} f_a^p(x_a, Q^2) \otimes f_b^p(x_b, Q^2) \otimes \frac{\mathrm{d}\sigma^{ab \to c}}{\mathrm{d}\hat{t}} \otimes D_c^h(z, Q^2).$  (3)

Here the parton distribution function of a proton is denoted by $f_a^p(x_a, Q^2)$, $\mathrm{d}\sigma^{ab\to c}/\mathrm{d}\hat{t}$ is the differential cross-section of the partonic process, and the variable $\hat{t}$ is related to the 4-momentum exchange of the particles.
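As a rough numerical illustration of such a convolution, the sketch below folds a hypothetical power-law partonic spectrum with a toy fragmentation function. Both functional shapes and the Jacobian convention are illustrative assumptions, not fitted pQCD inputs.

```python
import numpy as np

def parton_spectrum(pt):
    """Hypothetical partonic transverse-momentum spectrum: a pure power law."""
    return pt ** -6.0

def frag_function(z):
    """Toy fragmentation function D(z), peaked at low z (illustrative shape)."""
    return 15.0 * (1.0 - z) ** 2 / z

def hadron_spectrum(pt_h, z_min=0.05, n_steps=2000):
    """Fold the partonic spectrum with D(z): a hadron observed at pt_h
    originates from a parton at pt_h / z, integrated over the momentum
    fraction z (simple rectangle rule, toy 1/z^2 Jacobian)."""
    z = np.linspace(z_min, 1.0, n_steps)
    integrand = frag_function(z) * parton_spectrum(pt_h / z) / z ** 2
    return float(np.sum(integrand) * (z[1] - z[0]))

# The folded hadron spectrum inherits the steeply falling partonic shape:
yields = [hadron_spectrum(pt) for pt in (2.0, 4.0, 8.0)]
```

The example only demonstrates the structure of Equation (3): realistic calculations sum over parton flavors and convolve two PDFs with the hard partonic cross-section as well.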
The hadronization is described within the parton model by the above phenomenological fragmentation functions, for which several parametrizations exist in the literature. These parametrizations are usually fitted to lepton-scattering data; therefore, they describe existing experimental results in a broad range of the parameter space. In Section 5, after investigating the energy dependence, we show the latest results of a new fragmentation function parametrization based on non-extensive phenomena.
3.2. Hadronization Using Non-Extensive Statistics
As we have already mentioned, the transverse energy distribution of the measured hadrons, i.e., the particle yield measured in the midrapidity region, is an important quantity accessible to measurement. In practice, the low-$p_T$ regime is described by exponential-like functions, $\sim e^{-p_T/T}$, as in a thermalized system, while the high-$p_T$ regime behaves like a power law, $\sim p_T^{-n}$. The Tsallis–Pareto-like distributions handle these two regimes simultaneously.
The technical apparatus of high-energy physics shows great advancement; nowadays the statistics of these spectra is larger than ever. It is no surprise that the Tsallis–Pareto distributions are widely used by the high-energy community to describe hadron spectra. The STAR and PHENIX collaborations at RHIC, BNL (USA) and the ALICE, ATLAS, and CMS collaborations at CERN's LHC are using the following form to characterize the particle yield [30,31,32,33,34,35]:

$\frac{1}{2\pi p_T}\frac{\mathrm{d}^2 N}{\mathrm{d}p_T\, \mathrm{d}y} = C \left( 1 + \frac{m_T - m}{nT} \right)^{-n},$  (4)
where $n$, $T$, and $C$ are fit parameters and $m_T = \sqrt{p_T^2 + m^2}$ is the transverse mass, including the rest mass $m$ of the given identified hadron species. We note that this formula is based on the QCD-Hagedorn formula [36,37,38,39,40]. This and other variations of the distribution are exhaustively tested, e.g., in [9,10,11,12,13,14,15,25,26]. Below we theorize about the origin of such Tsallis-type formulas. Contrary to the fixed fit parameters of the Tsallis–Pareto distributions as in Equation (4), we assume that the identified hadron spectra are characterized by a scaling Tsallis distribution, in which an energy scaling of the Tsallis parameters is also present. In the following we refer to these as Tsallis-like distributions.
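To see how a single Tsallis–Pareto shape covers both regimes at once, the short sketch below compares it with a pure exponential at low and high transverse mass. The parameter values are illustrative, not fitted ones.

```python
import math

def tsallis_pareto(mT, m, T, n, C):
    """Tsallis-Pareto yield of the form C * (1 + (mT - m)/(n*T))**(-n)."""
    return C * (1.0 + (mT - m) / (n * T)) ** (-n)

def boltzmann(mT, m, T, C):
    """The n -> infinity limit: a pure exponential (Boltzmann-Gibbs) shape."""
    return C * math.exp(-(mT - m) / T)

# Illustrative parameters (not fitted values): m ~ pion mass, T = 0.15 GeV, n = 7
m, T, n, C = 0.14, 0.15, 7.0, 1.0

# At low transverse mass the two shapes are close (thermal-like regime) ...
low = tsallis_pareto(0.2, m, T, n, C) / boltzmann(0.2, m, T, C)
# ... while at high mT the power-law tail dominates over the exponential.
high = tsallis_pareto(3.0, m, T, n, C) / boltzmann(3.0, m, T, C)
```

The ratio stays near unity at low $m_T$ but grows by orders of magnitude at high $m_T$, which is exactly why one Tsallis–Pareto fit can replace a two-component (exponential plus power-law) description.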
In extensive systems the entropy per particle, $S/N$, is finite in the thermodynamical limit, $N \to \infty$. This is the case with the Boltzmann–Gibbs–Shannon entropy formula, $S = -\sum_i p_i \ln p_i$, where $p_i$ is the probability of being in state $i$. In strongly correlated systems, it turns out that the total entropy of the system is not the sum of the entropies of the subsystems:

$S_{12} \neq S_1 + S_2.$  (5)
For our generalization, we use the well-established terminology of thermodynamics, since we expect to include the classical Boltzmann–Gibbs case too. Let us consider a monotonic, transformed entropy, $K(S)$, which satisfies additivity,

$K(S_{12}) = K(S_1) + K(S_2).$  (6)

Note that, in general terms, $K$ is the formal logarithm of the group of phase-space factors, $e^{S}$. Due to this assumption, applied recursively in ensembles, we arrive at the following general class of entropies [41]:

$S_K = \sum_i p_i\, K(-\ln p_i).$  (7)
Because $K(S)$ is by definition a monotonic function, the most likely state of a heat reservoir and its subsystem at maximum entropy is equivalently at

$\frac{\partial}{\partial \omega}\Big[ K\big(S_1(\omega)\big) + K\big(S_2(E-\omega)\big) \Big] = 0,$  (8)

where $\omega$ is the energy of the subsystem and $E-\omega$ is the energy of the reservoir. While keeping $E =$ const in the entropy maximum Equation (8) we obtain

$K'\big(S_1(\omega)\big)\, S_1'(\omega) = K'\big(S_2(E-\omega)\big)\, S_2'(E-\omega).$  (9)
This makes the use of the usual definition of the thermodynamical temperature expedient,

$\frac{1}{T} = \frac{\partial K(S)}{\partial E} = K'(S)\, S'(E).$  (10)
Assuming now that $\omega \ll E$ in high-energy collisions, we consider the Taylor expansions

$S(E-\omega) \approx S(E) - \omega\, S'(E) + \frac{\omega^2}{2}\, S''(E),$  (11)

and

$K'\big(S(E-\omega)\big) \approx K'(S) - \omega\, K''(S)\, S'(E),$  (12)

in the energy, $\omega$, of the particle. Inserting this into Equation (10), we arrive at the formula

$\frac{1}{T} = K'(S)\, S'(E) - \omega \left[ K''(S)\, S'(E)^2 + K'(S)\, S''(E) \right] + \mathcal{O}(\omega^2).$  (13)
By looking for a universal thermostat, lending to $1/T$ an absolute temperature interpretation, we assume that the energy of the subsystem is independent of the energy of the reservoir, i.e., we require the term linear in $\omega$ to vanish. After ordering we obtain:

$\frac{K''(S)}{K'(S)} = -\frac{S''(E)}{S'(E)^2}.$  (14)
This equality among general functions, $K(S)$ and $S(E)$, is possible only if both sides are equal to a constant,

$\frac{K''(S)}{K'(S)} = -\frac{S''(E)}{S'(E)^2} = \frac{1}{C},$  (15)

where $C$ is the heat capacity of the reservoir.
The solution of this differential equation has all the desired features, $K(0) = 0$ and $K'(0) = 1$:

$K(S) = C \left( e^{S/C} - 1 \right).$  (16)
Replacing it into Equation (7),

$S_K = \sum_i p_i\, C \left( p_i^{-1/C} - 1 \right) = \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad q = 1 - \frac{1}{C},$  (17)

turns out to be the Tsallis entropy, while its additive counterpart, $K^{-1}(S_K) = \frac{1}{1-q} \ln \sum_i p_i^{\,q}$, is the Rényi entropy.
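This Tsallis–Rényi correspondence can be verified numerically. A minimal sketch, assuming a small discrete probability distribution, checks that the $K^{-1}$ transform of the deformed entropy reproduces the Rényi formula:

```python
import math

def tsallis_entropy(p, C):
    """Deformed entropy sum_i p_i * K(-ln p_i) with K(S) = C*(exp(S/C) - 1);
    this equals the Tsallis entropy with q = 1 - 1/C."""
    return sum(pi * C * (pi ** (-1.0 / C) - 1.0) for pi in p)

def renyi_entropy(p, q):
    """Renyi entropy: (1/(1-q)) * ln( sum_i p_i^q )."""
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)

C = 5.0                       # heat capacity of the reservoir (illustrative)
q = 1.0 - 1.0 / C             # the corresponding entropic index
p = [0.5, 0.3, 0.2]           # a toy probability distribution

s_tsallis = tsallis_entropy(p, C)
# K^{-1}(x) = C * ln(1 + x/C) maps the Tsallis entropy onto the Renyi one:
s_additive = C * math.log(1.0 + s_tsallis / C)
```

The agreement is exact up to floating-point precision, illustrating that the two entropies are the same quantity seen before and after the $K$ deformation.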
This argumentation can be used also for microcanonical systems, with the total energy $E$ fixed and $f(\omega)$ being the distribution of the subsystem's states. Using the previously defined generalized entropy, $S_K$, one arrives at the following energy distribution, which maximizes the $q$-entropy:

$f(\omega) = \frac{1}{Z} \left[ 1 + (q-1)\, \frac{\omega}{T} \right]^{-\frac{1}{q-1}}.$  (18)

It is a Tsallis–Pareto distribution in the individual energy, $\omega$, and $Z$ is calculated from the normalization condition $\int f(\omega)\, \mathrm{d}\omega = 1$.
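For the distribution $f(\omega) = \frac{1}{Z}[1 + (q-1)\,\omega/T]^{-1/(q-1)}$ with $1 < q < 2$, the normalization over $\omega \in [0, \infty)$ can be done in closed form, $Z = T/(2-q)$. The sketch below cross-checks this against a simple numerical integration; the cut-off and step size are arbitrary choices.

```python
def z_factor(q, T):
    """Closed-form normalization Z = T/(2-q) of the Tsallis-Pareto
    distribution over [0, inf), valid for 1 < q < 2."""
    return T / (2.0 - q)

def f(w, q, T):
    """Normalized Tsallis-Pareto energy distribution."""
    return (1.0 + (q - 1.0) * w / T) ** (-1.0 / (q - 1.0)) / z_factor(q, T)

# Numerical cross-check: trapezoid rule on 0..200 GeV (illustrative q, T)
q, T = 1.1, 0.15
dw = 0.001
ws = [i * dw for i in range(200000)]
vals = [f(w, q, T) for w in ws]
norm = sum((a + b) / 2.0 * dw for a, b in zip(vals, vals[1:]))
```

With these parameters the numerical integral reproduces unity to well below a per mille, confirming the analytic $Z$.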
In high-energy collisions we also have to deal with event-by-event fluctuations. Following the calculations in references [41,42], one may assume that the multiplicity of the created hadrons follows a negative binomial distribution for bosons and a binomial one for fermions. Due to such general reservoir fluctuations, the $q$ non-extensivity parameter receives a correction [41,42]:

$q = 1 - \frac{1}{C} + \frac{\Delta n^2}{\langle n \rangle^2}.$  (19)
The average number of created particles can vary in a wide range depending on the studied system: typically $\langle n \rangle \sim \mathcal{O}(10)$ in proton-proton collisions, while $\langle n \rangle \sim \mathcal{O}(10^3)$ in nucleus-nucleus collisions. We expect this fluctuation effect to overcome the finite-heat-capacity condition; therefore, one observes $q > 1$. It is also straightforward to see that enlarging the system results in $q \to 1$, and if the fluctuations become sufficiently suppressed, we get back the Boltzmann–Gibbs case with an exponential distribution. On the other hand, the assumption of purely statistical, uncorrelated fluctuations leads to a Gaussian distribution for the $\beta$ values, which can be an approximation, but never the complete truth. This parameter, $q$, is usually called the entropic index or the non-extensivity parameter, and serves as a measure of the deviation from the Boltzmann–Gibbs case, $q = 1$. We note that in high-energy nuclear collisions this value is in the range $1 < q \lesssim 1.2$, which suggests that fluctuations override the system-size effects related to the heat capacity of the reservoir.
The distributions in Equations (4) and (18) behave similarly; they both can be regarded as Tsallis–Pareto-type distributions. The authors in [25,26] investigated how the different types fit the experimental data. Many further useful readings regarding the thermodynamically consistent non-extensive approach can be found in the literature. The first possibility is presented in [22,23,24], representing the case where the power is proportional to $q/(q-1)$. Another kind of approach, where the power is $1/(q-1)$, is discussed in references [43,44,45,46]. For our analysis the chosen form is the following [47]:

$\left. \frac{1}{2\pi p_T} \frac{\mathrm{d}^2 N}{\mathrm{d}p_T\, \mathrm{d}y} \right|_{y \approx 0} = A \left[ 1 + (q-1)\, \frac{m_T - m}{T} \right]^{-\frac{1}{q-1}}.$  (20)
As it was shown in [42,48], the parameters $q$ and $T$ for an ideal case are connected to the mean multiplicity and its variance:

$q = \frac{\langle n(n-1) \rangle}{\langle n \rangle^2}, \qquad T = \frac{\langle E \rangle}{\langle n \rangle}.$  (21)

They may also depend on each other. Since in the case of a Poissonian multiplicity distribution one has $\langle n(n-1) \rangle = \langle n \rangle^2$, i.e., $q = 1$, the parameter $q$ is a measure of non-extensivity (i.e., of the non-Gaussianity of the $\beta$ fluctuations and of the non-Poissonian nature of the multiplicity distribution, $P_n$), while $T$ is like the kinetic temperature.
Based on Equation (21), for a fixed total energy, $E$, one obtains:

$T = \frac{E}{\langle n \rangle}.$  (22)
On the other hand, for an NBD (negative binomial distribution) with a fixed parameter, $k$, one gets

$q = 1 + \frac{1}{k+1}.$  (23)
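These multiplicity-based relations can be checked on sampled distributions. A minimal sketch, using numpy's NBD parametrization and illustrative parameters, estimates $q = \langle n(n-1) \rangle / \langle n \rangle^2$ for Poissonian and negative binomial multiplicities:

```python
import numpy as np

rng = np.random.default_rng(42)

def q_estimate(n):
    """q = <n(n-1)> / <n>^2, estimated from a sample of event multiplicities."""
    n = np.asarray(n, dtype=float)
    return np.mean(n * (n - 1.0)) / np.mean(n) ** 2

# Poissonian multiplicities: <n(n-1)> = <n>^2, hence q = 1 (Boltzmann-Gibbs).
q_poisson = q_estimate(rng.poisson(lam=20.0, size=1_000_000))

# Negative binomial multiplicities (bosonic case): super-Poissonian
# fluctuations push q above 1.  With numpy's (k, p) parametrization the
# variance is <n> + <n>^2/k, so this convention yields q = 1 + 1/k;
# conventions shifting k by one also appear in the literature.
k, mean = 4.0, 20.0
q_nbd = q_estimate(rng.negative_binomial(k, k / (k + mean), size=1_000_000))
```

With a million sampled events the Poissonian estimate sits at $q \approx 1$, while the NBD sample gives $q > 1$, illustrating how multiplicity fluctuations alone generate non-extensivity.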
Our aim in the following is to explore the center-of-mass energy evolution of the parameters $q$ and $T$, especially keeping in mind their physical meaning. Based on the definition of the PDF and FF of the pQCD-based parton model, we expect a logarithmic scaling in $\sqrt{s}$. Since this scaling was observed even in fits of electron-positron data [9], where PDFs do not appear, we connect the non-extensive features with the hadronization (fragmentation) processes only. The argumentation behind this will be explained in the next subsection.
3.3. Motivation for QCD-Like Energy Scaling of the Parameters
Partons, the elementary momentum carriers in strongly interacting matter, are tagged with a quantum number named color, a property that is not observable directly. The quarks, antiquarks, and gluons confine into color-singlet hadrons during the hadronization process. Hadron formation can happen at any energy scale, $Q$. The dependence on it can be factorized into the running of the strong interaction's coupling constant, $\alpha_s(Q^2)$. Since any observable quantity should be independent of the arbitrary fixing of the energy scale appearing in perturbative QCD calculations, the mathematical description ought to be (energy-)scale independent at any fixed order. Satisfying this requirement is not an easy task, due to the non-perturbative nature of non-Abelian fields at low energies.
As we presented in Section 3.1, the hadron production within the pQCD-based parton model, i.e., the convolution in Equation (3), includes the scale parameter, $Q^2$, in its kernel. However, the cross-section, being an observable quantity, should be independent of this inner parameter. Technically, this is achieved with the following method: while calculating the partonic (color) cross-sections at a given order, the scale can be factorized out and merged into the non-perturbative phenomenological functions. These are the parton distribution and fragmentation functions, and they must satisfy a proper scale-evolution equation in order to avoid scale-dependent hadronic yields.
To obtain the scale invariance, the following formula should be fulfilled at any fixed order for any generic form of such phenomenological functions [49,50,51]:

$\left[ \mu \frac{\partial}{\partial \mu} + \beta(g)\, \frac{\partial}{\partial g} + \gamma \right] R = 0,$  (24)

where $R$ can be either the PDF or the FF, taken typically at the momentum fraction of the mother and daughter particles, $x$ or $z$; $\beta(g)$ is the beta function of the coupling $g$ and $\gamma$ is the anomalous dimension. In general, this evolution equation determines the possible form of the quantity $R$, which naively depends on the current energy scale at a given fixed order. Solving this Callan–Symanzik equation, one can obtain the energy-scale dependence of the running coupling at a given order in any theory [52].
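At one loop in QCD, this scale dependence reduces to the familiar running coupling. A minimal sketch, where the value $\Lambda \approx 0.2$ GeV and the fixed $n_f = 5$ are illustrative assumptions rather than fitted inputs:

```python
import math

def alpha_s_one_loop(Q, lam=0.2, nf=5):
    """One-loop QCD running coupling:
    alpha_s(Q^2) = 12*pi / ((33 - 2*nf) * ln(Q^2 / Lambda^2)).
    Lambda = 0.2 GeV and a fixed nf = 5 are illustrative choices;
    a realistic evaluation changes nf across quark-mass thresholds."""
    return 12.0 * math.pi / ((33.0 - 2.0 * nf) * math.log(Q * Q / (lam * lam)))

# Asymptotic freedom: the coupling decreases with the energy scale.
for Q in (2.0, 10.0, 91.2):
    print(f"alpha_s({Q:5.1f} GeV) ~ {alpha_s_one_loop(Q):.3f}")
```

The monotonic decrease with $Q$ is the asymptotic-freedom behavior that makes the pQCD parton model reliable at high energies.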
In the perturbative QCD-based parton model, the parametrized parton distribution and fragmentation ansatz functions are typically given in a power-law form [53]. One can then get the proper scale evolution by solving the Dokshitzer–Gribov–Lipatov–Altarelli–Parisi (DGLAP) equations [49,50,51]. Due to its polynomial power-law form, the predictive power of the calculations gets weaker at low $z$. We expect that a Tsallis-like distribution with the appropriate parameter evolution can resolve this problem, providing a better description. This motivates us to fit the $q$ and $T$ parameters as functions of the center-of-mass energy, $\sqrt{s}$, in a similar fashion as it was done in reference [54]. Here our aim is to test the validity of this approach via investigating the energy evolution of the parameters.
3.4. The Improved Quark-Coalescence Model
Another description of hadron formation is based on the constituent quark scaling. In the quark coalescence model the usual underlying assumption is that the hadronization takes place in a thermal system, where all the participating partons emerge at the same temperature [55,56]. This idea was developed for the description of hadron production in high-energy heavy-ion collisions, where the bulk of the hadron yield closely follows an exponential shape. In larger colliding systems, like central collisions of large nuclei, this idea worked well with a single temperature parameter, especially in the low transverse momentum regime, $p_T \lesssim$ 3–5 GeV/$c$.
In the original approach the energy distributions of the partons follow the Boltzmann–Gibbs statistics. Then, one approximates the formation rate of a hadron containing $k$ constituents as the product of $k$ such Boltzmann–Gibbs distributions:

$f_h(E) \propto \prod_{i=1}^{k} e^{-E_i/T} = e^{-E/T}, \qquad E = \sum_{i=1}^{k} E_i.$  (25)
In the present non-extensive framework we still assume that the partons are part of a simple ensemble, but we replace the Boltzmann–Gibbs exponentials by Tsallis–Pareto distributions. Now the rate is the following:

$f_h(E) \propto \prod_{i=1}^{k} \left[ 1 + (q-1)\, \frac{E_i}{T} \right]^{-\frac{1}{q-1}},$  (26)

with the equal-energy assumption $E_i = E/k$. In order to test this model, we check the fitted $q$ parameters of the identified hadrons. If the equal-energy quark-coalescence picture is correct for both the quark-antiquark containing mesons and the triple-(anti)quark (anti)baryons, then one observes:

$q_M - 1 = \frac{q-1}{2}, \qquad q_B - 1 = \frac{q-1}{3}.$  (27)

The meson-baryon ratio of the Tsallis parameters should then be around 3/2:

$\frac{q_M - 1}{q_B - 1} = \frac{3}{2}.$  (28)
This idea, formulated by Equation (28), can be tested by fitting the hadron spectra and investigating the ratio of the parameter $(q-1)$ for different identified hadron species. As the constituent quark scaling is getting stronger at larger energies, we expect to reach this theoretical value only asymptotically.
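The equal-energy coalescence argument can be checked numerically: the sketch below verifies that a product of $k$ equal-energy Tsallis–Pareto factors is again a Tsallis–Pareto shape with $q_h - 1 = (q-1)/k$, using illustrative $q$ and $T$ values.

```python
def tsallis_factor(E, q, T):
    """Single-parton Tsallis-Pareto weight [1 + (q-1) E/T]^(-1/(q-1))."""
    return (1.0 + (q - 1.0) * E / T) ** (-1.0 / (q - 1.0))

def coalescence_rate(E, k, q, T):
    """Product of k equal-energy (E_i = E/k) partonic Tsallis factors."""
    return tsallis_factor(E / k, q, T) ** k

# The product is again a Tsallis-Pareto shape in the hadron energy E,
# with q_h - 1 = (q - 1)/k and an unchanged T (illustrative q, T values):
q, T = 1.2, 0.15
for k in (2, 3):                       # mesons (k=2) and baryons (k=3)
    q_h = 1.0 + (q - 1.0) / k
    for E in (0.5, 1.0, 3.0):
        assert abs(coalescence_rate(E, k, q, T)
                   - tsallis_factor(E, q_h, T)) < 1e-12

# Hence the meson-baryon ratio of the fitted (q - 1) values:
ratio = ((q - 1.0) / 2.0) / ((q - 1.0) / 3.0)   # expected to be 3/2
```

The identity holds for any hadron energy, so the 3/2 ratio is independent of the fitted partonic $q$, which is what makes it a clean test of the model.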