Article

Diffusion on the Peer-to-Peer Network

Independent Researcher, 59118 Wambrechies, France
J. Risk Financial Manag. 2022, 15(2), 47; https://doi.org/10.3390/jrfm15020047
Submission received: 21 December 2021 / Revised: 3 January 2022 / Accepted: 14 January 2022 / Published: 20 January 2022
(This article belongs to the Special Issue Smart Cities Research in Enabling Technologies and Tools)

Abstract

In a peer-to-peer complex environment, information is permanently diffused. Such an environment can be modeled as a graph carrying flows of information. The interest of such a model is that (1) it describes the exchanges through time from an initial state of the network, (2) it can be fitted to a real-world case and then used for forecasting, and (3) it can be used to trace information through time. In this paper, we review the methodology for describing diffusion processes on a network in the context of information exchange on a crypto (Bitcoin) peer-to-peer network. The necessary definitions are laid out, and the diffusion equation is derived by considering two different types of Laplacian operators. Equilibrium conditions are discussed, and analytical solutions are derived, particularly in the context of a directed graph, which constitutes the main innovation of this paper. Further innovations follow: the inclusion of boundary conditions, the implementation of delay in the diffusion equation, and a discussion of the approximations useful for implementation. Numerous numerical simulations illustrate the theory developed throughout the paper. Specifically, we validate the derived analytic solutions on simple examples, and implement them on more sophisticated graphs, e.g., the ring graph, which is particularly important in crypto peer-to-peer networks. As a conclusion, we further develop a theory useful for fitting purposes, in order to gain more information on the diffusivity of a network, through a type of modeling the scientific community is familiar with.

1. Introduction

This document aims at describing the diffusion of information in the peer-to-peer (P2P) network related to a crypto, for instance Bitcoin, through a fundamental diffusion approach. The information could be anything that is an object of communication between agents (i.e., miners and users); to add context, we take it to be the hash calculation. This designates the processes which calculate hashes for mining blocks, given the relevant remaining transactions to hash. More specifically, miners hash transactions within a list of transactions—the memory pool—which appear in the next block of the blockchain Lipton and Treccani (2021). Each transaction circulates from node to node in the memory pool used for remembering all new transactions, and miners receiving new transaction lists, supposing no double spending, start to generate hashes to create the new block. To this extent, the hash calculation starts as soon as miners receive the list of transactions; thus, the hash calculation can be seen as the information of transactions diffusing along the network (see Shahsavari et al. (2017) for a more theoretical discussion). The more transactions a miner receives to be mined, the more hashes he/she is likely to calculate, i.e., the stronger the hash calculation. On the contrary, if there is a low amount of information, i.e., few transactions to hash, then the hash calculation is low. Conversely, if the hash calculation is high, the miner has likely received a long list of transactions ready to be hashed. The hash calculation is intimately linked to the hash rate, so we will name this type of diffusion the diffusion of hash rate. In another, more economical context, the diffused information could simply be Bitcoin cash flows.
This is true only if all miners have equal power of computing hashes, which we know is not realistic. This is not an issue: the diffusion model should depend on each node, and we will need to incorporate a rate of creation of hashes to assign to each node. In essence, not only do we have a diffusion of hash rate, but also the creation of hashes, increasing the hash rate for each miner. Being able to derive the hash rate with respect to time and node is being able to predict such an evolution for all nodes, at a given time and context, which is one main motivation for this approach.
A simple but fundamental physics model of diffusion is given by the famous diffusion equation,
$$\frac{\partial h}{\partial t} = D\,\Delta h + \Gamma,$$
where h is the hash rate (or, more generally, a 'rate' of information), t is the time variable, D is the diffusion coefficient throughout the network, and Γ is the local hash creation rate. The operator Δ is called the Laplacian operator. One may think we could take this approach and solve the equation for h. However, what do the spatial coordinates represent? The majority of physics fields where this equation applies involve spatial coordinates, which are real numbers with respect to a pre-defined coordinate system. This is not the case for a proper network.
This means we need to handle the diffusion theory on a graph. Several established approaches relate to the description of diffusion on a graph, with many applications (e.g., Bondy and Murty 2007; Pacreau et al. 2021; Thanou et al. 2017; Viriyasitavat and Hoonsopon 2019). We discuss some of these approaches for modeling diffusion on a network.
The paper An et al. (2021) specifies an optimality approach for minimizing the maximum regret due to decisions taken in an uncertain environment, and specifically calculates the maximal regret through a linear program. This program has the advantage of considering the connectivity of the network, and the evolution of the information diffusion should be a relevant complement to describe the structure of the network. The paper Mikhaylov (2021) defends Hayek’s theory of private money, which, once applied to a state, should avoid a significant amount of private money within the population of this state and reduce inequalities by means of further regulations through central bank digital currencies (CBDC). This imposes a certain structure on the network of exchanges: the current Bitcoin P2P network is distributed, and the application of this theory would change it into a more centralized network. The diffusion properties may significantly change from one type to the other (see Section 7 for numerical illustrations), and our approach applies to both types. The authors of Hollanders et al. (2014) model a P2P network as a bipartite graph, where one family is the set of files, and the other is the set of peers. This way of modeling allows one to find an ‘S’ shape in the dynamics of the information diffusion, and such a shape is a strong expectation of the social networks community. The evolution is governed by the susceptible/infectious (SI) equations. There are underlying assumptions, such as the strong assumption of a constant population. The paper Manini and Gribaudo (2008) engages a probabilistic approach to the diffusion of files in a P2P network. In Musa et al. (2018), the diffusion methodologies applied to a P2P network are explicitly reviewed. When not based on the SI approach, the models are essentially probabilistic. Finally, the authors of Li et al. (2019) see the blockchain as a kind of Markov chain.
Interestingly, all the mentioned approaches overall focus on probabilistic modeling for the diffusion processes on a graph. This present study proposes to focus on the more fundamental aspects of the diffusion behavior, mainly governed by an equation of the style of Equation (1). It is worth stressing that the operator Γ is very general, and can be the source of randomness through, for instance, non-linearities, bringing chaos into the system. However, one possibly can also include probability in Γ , which would make the present study very general. More conceptually, Γ can gather any network specificity, such as its consensus, since it is related to sources of information.
However, some definitions already introduced in some of the previous studies, and including the Laplacian operator, may need to be further discussed, especially from a more intuitive and fundamental view of sharing information between two vertices of a graph. This is especially true when the graph is directed. Thus, in the context of diffusion on a P2P network, we need, for deriving the appropriate diffusion equation, to introduce the derivative and the integration on a graph, for a function f defined on each node x at a given time t. The Laplacian operator will follow. One understands that deriving the appropriate diffusion equation is a task in itself. Then, once derived, we need to solve the equation, which represents another task. In addition, we will see that there are different possible Laplacian operators for a diffusion theory (at least two), and we will focus on them separately. We will need some linear algebra tools to do it, as a graph is equivalent to its adjacency matrix, and, conversely, any matrix gives a (weighted) graph.
We now discuss the two hypotheses made (and mitigated) in this study. The first one is that graphs are undirected. This is a strong assumption, as it means that once a vertex agent receives information that it did not previously have, it will give back part of it to the source. Although this might be convenient in the context of bitcoin exchange (Alice sends some bitcoins to Bob, but also to herself, i.e., she generates new addresses she owns to send part of the total amount of bitcoins), a piece of information, whatever its type, has no reason to come back to its source. Considering that a directed graph is essential if we need to describe the interactions within a P2P network, this is what we do in this paper, and this constitutes the main innovation of the present study.
Another hypothesis is made in this modeling: when a vertex agent receives a piece of information, he/she immediately treats it and diffuses it to its neighbors. In other words, the response to the information reception is immediate. This is also a strong assumption, as, for instance, cash flows are performed in a delayed manner. Thus, Alice sends some bitcoins to Bob, but Bob does not immediately send a part (or more) of it to Charlie. In fact, the immediate response is the straightforward consequence of the structure of the diffusion equation, whose solutions are a differentiable function of time. It turns out that one way to make a non-differentiable solution at some given times is to introduce time dependency in the Laplacian operator, and the Heaviside function (whose value is 0 below a given time, and 1 above it) is a relevant candidate for differentiability breaking at selected points in time. Introducing Heaviside functions in the Laplacian operator is a much easier task in the context of graph modeling, for the reason that there are no discontinuity issues for the variable x, usually continuous, and here being discrete (and representing the node index). We engage this point of view at the end of the study.
The rest of this paper is organized as follows. Section 2 is an introduction to the modeling, with many fundamental definitions which will imply the Laplacian ones, as well as the diffusion theory. Section 3 derives and solves the diffusion equation defined on an undirected graph. It is worth pointing out that these two sections are strongly inspired by Chung et al. (2007). The rest of the paper constitutes the main innovations. Section 4 introduces two different notions of equilibrium, the standard one ( t → ∞ ) and the blockchain one ( t = 10 min for Bitcoin), as well as some conceptual links between both. Section 5 shows the implementation of the solution derived in Section 3. Section 6 is an important generalization of the approach taken thus far: it introduces the diffusion equation and its resolution on a directed graph, a very important property of the P2P network as already discussed above. Section 7 shows numerical illustrations of the previous section for several famous networks (including the Erdős–Rényi network, or the binary tree). Section 8 is the direct implication of the previous directed graph case when there are boundary conditions, typically certain vertices constantly sending information to the rest of the network. Finally, Section 9 includes delay in the response of nodes when they receive information. The last section concludes the study.

2. Modelling the P2P Network

2.1. Introduction to Graph Theory

Let G = (E, V) be a graph, that is, a set of nodes E and a set of edges V, composed of pairs of nodes in E. Thus, if x ∈ E, i.e., x is a node of G, and y ∈ E is another node of G, and (x, y) ∈ V, we say that the nodes x and y are connected in G. We suppose that G is connected (that is, there exists a path between any two nodes of G), unoriented (x to y or y to x is the same), and unweighted (all the paths are equally effective); this last hypothesis may be relaxed in the future. An example of such a graph is shown in Figure 1.
There is a matrix equivalence for such a graph. Node 1 is connected to nodes 2 and 3, in Figure 1, but not to any other node. We can represent the graph G by a matrix A, which is named the adjacency matrix. If we arbitrarily label nodes with numbers, the adjacency matrix for the graph G in Figure 1 is given by
$$A = \begin{pmatrix}
0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
\end{pmatrix}.$$
The first row corresponds to the connections of node 1. Node 1 is connected to nodes 2 and 3 only, so the entries in columns 2 and 3 of the first row are 1, and all the other entries are 0, and so on. The diagonal (from top left to bottom right) of this matrix is composed of zeroes. It actually turns out that this matrix is symmetric (i.e., with respect to its diagonal). In this example, E is isomorphic to {1, 2, …, 9}—we will identify both sets if there is no ambiguity—and the element a_{ij} of A is 1 if (i, j) ∈ V, for (i, j) ∈ E².
Studying the matrix A or the graph G is therefore equivalent, and the P2P network can indeed be represented this way. In Figure 1, node 6 spreads the information of new transactions to nodes 3, 5, and 7. Node 6 starts to hash first, and then nodes 3, 5, and 7 start once they receive the information: the ‘hash rate’ propagates. Here, node 6 computes hashes first (purple), and then nodes 3, 5, and 7 compute hashes (they become purple). This generalizes to every node that creates hashes: such a node diffuses the list of transactions to other nodes while starting to compute hashes, and therefore makes its neighbors compute hashes.
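Since a graph and its adjacency matrix are equivalent, the matrix above can be assembled and checked programmatically. A minimal Python/NumPy sketch (the edge list below is read off Figure 1 and is an assumption of this illustration):

```python
import numpy as np

# Edges of the graph of Figure 1 (1-indexed node labels, assumed from the figure).
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (3, 6), (5, 6), (6, 7), (7, 8), (7, 9)]

n = 9
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i - 1, j - 1] = A[j - 1, i - 1] = 1  # unoriented: A is symmetric

assert (A == A.T).all()          # symmetry of the adjacency matrix
assert (np.diag(A) == 0).all()   # no self-loops: zero diagonal
print(A[0])                      # connections of node 1
```

The symmetry and zero-diagonal assertions correspond to the graph being unoriented and without self-loops.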
The arising question is: what is the diffusion process of such hash information, i.e., can we describe it and thus make predictions? The answer is yes.

2.2. Derivative

We need to introduce the derivative and the integration properly before doing anything else. It is worth noting that a significant effort has been made to stay close to the notations of differential calculus. From now on, the remainder of this paper will be mathematical, unless specified otherwise. The diffusive quantity of interest, here the hash rate, is given by the function f, a function of the nodes x ∈ E and of time t ∈ R₊.
In the following, we focus on a graph G = (E, V) with associated adjacency matrix A = (a_{ij})_{1 ≤ i,j ≤ n} (n ∈ N* \ {1} nodes). The graph G is connected, unoriented, and unweighted, so that the matrix A is symmetric and its elements are in {0, 1}. We finally call N(x) the set of all neighbors of x ∈ E, write d_{Ax} = Card N(x) for the degree of x, i.e., the number of neighbors of x, and call Vol(G) = Σ_{x ∈ E} d_{Ax} the volume of the graph G. We write x ∈ E or x ∈ G equivalently in the sequel.
Definition 1
(Derivative). Let f be a function from E × R₊ to R. The directional derivative of f from z ∈ G to y ∈ G is given by
$$\nabla_z f(y) = \big(f(y) - f(z)\big)\, a_{yz}.$$
The operator ∇_z is a linear map in f, for any z ∈ G.
Figure 2 represents the derivative. The rest of the definitions depend on this one, so it is important to understand it well.
Definition 2
(Gradient). Let f : E × R₊ → R. The gradient of f at y ∈ G is the vector given by
$$\nabla_y f = \big(\nabla_z f(y)\big)_{z \in N(y)} = \big(\nabla_{z_1} f(y),\ \nabla_{z_2} f(y),\ \dots,\ \nabla_{z_{d_{Ay}}} f(y)\big),$$
where $z_1, \dots, z_{d_{Ay}}$ enumerate the neighbors of y.
The gradient at a node is the collection of directional derivatives toward this node.
Definition 3
(Laplacians). Let f : E × R₊ → R. The normalized Laplacian of f from x ∈ G to y ∈ G is given by
$$\Delta_x f(y) = \frac{1}{d_{Ay}} \sum_{z \in N(x)} \nabla_z \nabla_z f(y),$$
while the non-normalized Laplacian of f from x ∈ G to y ∈ G is given by
$$L_x f(y) = \sum_{z \in N(x)} \nabla_z \nabla_z f(y).$$
The global normalized Laplacian of f at x is given by
$$\Delta f(x) = \frac{1}{d_{Ax}} \sum_{z \in G} \nabla_z \nabla_z f(x),$$
while the global non-normalized Laplacian of f at x is given by
$$L f(x) = \sum_{z \in G} \nabla_z \nabla_z f(x).$$
L f (or Δ f) is said to be the Laplacian of f.
The Laplacian Δ of f is the average of the second directional derivatives of f at node x, while the Laplacian L of f is their sum. We can then define the following operator:
Definition 4
(Divergence). Let x ∈ G and f_z : E × R₊ → R, z ∈ N(x), be d_{Ax} functions. Define the vectorial function f = (f_z)_{z ∈ N(x)}. The divergence of f at x ∈ G is given by
$$\nabla_x \cdot f = \sum_{z \in N(x)} f_z(z),$$
or
$$\nabla_x \cdot f = \frac{1}{d_{Ax}} \sum_{z \in N(x)} f_z(z),$$
depending on whether we choose L or Δ.
This is the sum (or average) value of f for a given node, the average being taken at each of the considered node’s neighbors.
Proposition 1
(Div of grad is Laplacian). For any x ∈ G, we have
$$\nabla_x \cdot \nabla_x f = L_x f \quad \text{or} \quad \Delta_x f,$$
depending on whether we choose L_x or Δ_x for the definition of the divergence.
Proof. 
Note that, for all y, z ∈ G, we have
$$\nabla_z \nabla_z f(y) = \nabla_z\big((f(y) - f(z))\, a_{yz}\big) = \big(\nabla_z f(y) - \nabla_z f(z)\big)\, a_{yz} = \big((f(y) - f(z))\, a_{yz} - 0\big)\, a_{yz} = (f(y) - f(z))\, a_{yz}^2.$$
Since $a_{yz}^2 = a_{yz} \in \{0, 1\}$, we therefore have that ∇_z ∇_z f(y) = ∇_z f(y). It turns out that ∇_x · ∇_x f = Δ_x f, or ∇ · ∇ = Δ. □
Thus, we note that ∇_z ∇_z = ∇_z, and it turns out that Δ f(x) = Δ_x f(x), for any x ∈ G. A straightforward but essential property of the Laplacian is the following.
Proposition 2.
Let f : E × R₊ → R. The Laplacians of f at x ∈ G satisfy the following property:
$$L f(x) = d_{Ax}\, f(x) - \sum_{z \in N(x)} f(z),$$
$$\Delta f(x) = f(x) - \frac{1}{d_{Ax}} \sum_{z \in N(x)} f(z).$$
Thus, the Laplacian of f at x is the value of f at x, minus the average value of f taken by all its neighbors.
Proof. 
From the definition, and since a_{xz} = 1 if and only if z ∈ N(x), we have
$$\Delta f(x) = \frac{1}{d_{Ax}} \sum_{z \in N(x)} \nabla_z \nabla_z f(x) = \frac{1}{d_{Ax}} \left( \sum_{z \in N(x)} f(x) - \sum_{z \in N(x)} f(z) \right),$$
which implies the results. □
This actually allows us to introduce the Laplacian matrix.
Definition 5
(Laplacian matrix). The non-normalized Laplacian matrix is given by
$$L = D - A,$$
where A is the adjacency matrix, and $D = \mathrm{diag}(d_{Ax}, d_{Ay}, \dots)$ is the diagonal matrix of degrees.
The normalized Laplacian matrix is given by
$$\Delta = D^{-1} L = \mathbb{1} - D^{-1} A,$$
where $\mathbb{1}$ is the identity matrix.
Contrary to the usual differential calculus, there is no differentiability criterion: the function just needs to be defined at the two considered points.
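Both Laplacian matrices are immediate to form once A is known. A short Python/NumPy sketch (rebuilding the Figure 1 adjacency matrix, an assumption of this illustration):

```python
import numpy as np

# Adjacency matrix of the Figure 1 graph (assumed edge list).
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (3, 6), (5, 6), (6, 7), (7, 8), (7, 9)]
A = np.zeros((9, 9))
for i, j in edges:
    A[i - 1, j - 1] = A[j - 1, i - 1] = 1

d = A.sum(axis=1)                 # degrees d_Ax
D = np.diag(d)
L = D - A                         # non-normalized Laplacian L = D - A
Delta = np.linalg.inv(D) @ L      # normalized Laplacian: 1 - D^{-1} A

# Every row of L (and of Delta) sums to zero: the Laplacian of a
# constant function vanishes, consistent with Proposition 2.
assert np.allclose(L.sum(axis=1), 0)
assert np.allclose(Delta.sum(axis=1), 0)
```

The zero row sums express that a constant function is in the kernel of both Laplacians.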

2.3. Integration

After having defined the derivatives, we have to define the integration.
Definition 6.
Let f, h : E → R. We introduce the scalar product of f by h on any sub-graph g ⊆ G of G, as follows:
$$\langle f, h \rangle_g = \sum_{x \in g} f(x)\, h(x).$$
The following proposition is straightforward, but fundamental.
Proposition 3
(Scalar product). The map ⟨·,·⟩_g : (f, h) ↦ ⟨f, h⟩_g is a bilinear form on the functions from E to R, for any g ⊆ G. This justifies its name of scalar product. Thus, the pair (G, ⟨·,·⟩) is said to be a Euclidean graph.
In the following, the graph G is always considered a Euclidean graph.
Definition 7
(Integration). Let f : E → R. We introduce the integration of f on any sub-graph g ⊆ G of G, as follows:
$$\int_g f\, dA = \langle f, d_A \rangle_g = \sum_{x \in g} f(x)\, d_{Ax}.$$
d_A can be seen as a measure on G via ⟨·,·⟩.
Here again, the function, to be integrable in g, just needs to be defined on g. The previous and following properties are still satisfied for a function f : E × R + R .
The above definition is actually consistent with integration by parts.
Proposition 4
(Integration by parts). Let f, h : E → R. We have
$$\int_{N(x)} h\, \Delta_x f\, dA = \int_{N(x)} \Delta_x h\, f\, dA,$$
$$\int_G h\, \Delta f\, dA = \int_G \Delta h\, f\, dA.$$
Proof. 
We calculate
$$\int_{N(x)} h\, \Delta_x f\, dA = \sum_{y \in N(x)} h(y)\, \Delta_x f(y)\, d_{Ay} = \sum_{y \in N(x)} \sum_{z \in N(x)} h(y)\big(f(y) - f(z)\big)$$
$$= \frac{1}{2} \sum_{y \in N(x)} \sum_{z \in N(x)} h(y)\big(f(y) - f(z)\big) + \frac{1}{2} \sum_{y \in N(x)} \sum_{z \in N(x)} h(z)\big(f(z) - f(y)\big)$$
$$= \frac{1}{2} \sum_{y \in N(x)} \sum_{z \in N(x)} \big(h(y) - h(z)\big)\big(f(y) - f(z)\big) = \sum_{y \in N(x)} \sum_{z \in N(x)} \big(h(y) - h(z)\big) f(y)$$
$$= \sum_{y \in N(x)} \Delta_x h(y)\, f(y)\, d_{Ay} = \int_{N(x)} \Delta_x h\, f\, dA.$$
Equation (18) is proven. In addition, we have
$$\int_G h\, \Delta f\, dA = \sum_{x \in G} h(x)\, \Delta f(x)\, d_{Ax} = \sum_{x \in G} \sum_{y \in G} h(x)\big(f(x) - f(y)\big)\, a_{xy} = \frac{1}{2} \sum_{x \in G} \sum_{y \in G} \big(h(x) - h(y)\big)\big(f(x) - f(y)\big)\, a_{xy},$$
and proceeding as previously proves Equation (19). □
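The global identity (19) can be verified numerically, using d_A as the weight in the integral. A small sketch on the Figure 1 graph, with random functions f and h (the edge list and the random draws are assumptions of this illustration):

```python
import numpy as np

# Figure 1 graph (assumed edge list) and its normalized Laplacian.
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (3, 6), (5, 6), (6, 7), (7, 8), (7, 9)]
A = np.zeros((9, 9))
for i, j in edges:
    A[i - 1, j - 1] = A[j - 1, i - 1] = 1
d = A.sum(axis=1)
Delta = np.eye(9) - np.diag(1 / d) @ A

rng = np.random.default_rng(0)
f = rng.normal(size=9)            # two arbitrary test functions on the nodes
h = rng.normal(size=9)

# integral_G h (Delta f) dA  ==  integral_G (Delta h) f dA,
# where the integral weights each node by its degree d_Ax.
lhs = np.sum(h * (Delta @ f) * d)
rhs = np.sum((Delta @ h) * f * d)
assert np.isclose(lhs, rhs)
```

The identity holds because D·Δ = L is a symmetric matrix, which is exactly the content of the integration-by-parts formula.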
We now have all the tools to develop the diffusion theory on a network, which is the object of the next section.

3. The Diffusion Equation

3.1. Derivation of the Equation

In order to derive the diffusion equation, we need the following essential proposition. In the sequel, we call f : E × R₊ → R the number of hashes produced per miner, as a function of node and time.
Proposition 5.
The density of the current of hashes at x ∈ G at time t is given by the following law of diffusion:
$$j(t, x) = -D\, \nabla_x f(t, \cdot),$$
where D > 0 is the diffusion coefficient of information.
We indeed assume that D is constant and is a characteristic of the network. Although this law is very intuitive and is a typical Fick law (information flows toward where there is no information yet, hence the negative sign), further developments are needed to rigorously justify this equation in the context of the network. Let us write j(t, x) = (j_z(t, x))_{z ∈ N(x)} for the vector j at x and t. Thus, j_z(t, x) represents the flow of information from z to x.
Here is the first important result of the paper.
Theorem 1.
Let f : E × R₊ → R represent the information function. If D is the diffusion coefficient and Γ : E × R₊ → R the creation function of the information, the following diffusion equation holds for any x ∈ E and t ∈ R₊:
$$\frac{\partial f}{\partial t}(t, x) = -D\, L f(t, x) + \Gamma(t, x).$$
Separately, the following diffusion equation is also valid:
$$\frac{\partial f}{\partial t}(t, x) = -D\, \Delta f(t, x) + \Gamma(t, x).$$
As long as one remembers that it is local, these equations can simply be written as
$$\frac{\partial f}{\partial t} = -D\, L f + \Gamma, \qquad \frac{\partial f}{\partial t} = -D\, \Delta f + \Gamma.$$
The minus sign is actually a necessity for the system to be diffusive rather than explosive; the sign convention is opposite to that of standard math and physics systems, because the graph Laplacian is positive semi-definite whereas the continuous one is negative.
Proof. 
On the one hand, let δN be the quantity of hashes produced during the infinitesimal variation of time dt, at node x ∈ G. The hash rate is the flux given by ϕ = δN/dt, and the number of such hashes produced per neighbor, for node x ∈ G, between times t and t + dt, is f(t + dt, x) − f(t, x) = δN / d_{Ax}. It turns out that
$$\delta N = \frac{\partial f}{\partial t}(t, x)\, dt\, d_{Ax}.$$
On the other hand, node x (i) receives the diffused information from its neighbors, implying δN_entering hashes, and (ii) can create (or destroy) information, implying δN_create hashes. We therefore must have the following conservation of information:
$$\delta N = \delta N_{\mathrm{entering}} + \delta N_{\mathrm{create}}.$$
On point (i), the quantity of information entering node x is provided by all its neighbors, and thus $\delta N_{\mathrm{entering}} = \sum_{z \in N(x)} j_z(t, x)\, dt$. On point (ii), we introduce the creation rate per node Γ(t, x) at node x and time t, which is the rate per node per second at which hashes are calculated. We therefore have $\delta N_{\mathrm{create}} = \Gamma(t, x)\, dt\, d_{Ax}$.
Bearing all the above in mind, Equation (23) becomes
$$\frac{\partial f}{\partial t}(t, x)\, dt\, d_{Ax} = \sum_{z \in N(x)} j_z(t, x)\, dt + \Gamma(t, x)\, dt\, d_{Ax},$$
or equivalently,
$$\frac{\partial f}{\partial t}(t, x) = \frac{1}{d_{Ax}} \sum_{z \in N(x)} j_z(t, x) + \Gamma(t, x),$$
i.e.,
$$\frac{\partial f}{\partial t}(t, x) = \nabla_x \cdot j(t, x) + \Gamma(t, x).$$
According to Proposition 5, we have ∇_x · j(t, x) = −D Δ f(t, x) or −D L f(t, x), which concludes the proof. □

3.2. Laplacian Matrix: Eigenvalues/Eigenvectors

3.2.1. Quick reminder: Eigenvalues/Eigenvectors

Let G = (E, V) be a graph with n ∈ N nodes and adjacency matrix A, and let F : Rⁿ → Rⁿ be a linear map. We say that λ ∈ C is an eigenvalue of F if the map F − λ Id (Id is the identity map) is not injective, which means that there exists a non-zero vector V ∈ Rⁿ, named an eigenvector, such that F(V) = λ V, or component-wise F(V)(x) = λ V(x) for x ∈ G. The map F has n eigenvalues (counted with multiplicity), which we write λ₁, λ₂, …, λₙ, the index being chosen such that |λ₁| ≥ |λ₂| ≥ … ≥ |λₙ| ≥ 0. To each eigenvalue λᵢ, we associate its eigenvector Vᵢ, for all i ∈ {1, 2, …, n}.
Separately, the function F can be seen as a matrix M, and the above equation is thus written as M V = λ V , or component-wise M V ( x ) = λ V ( x ) . We adapt to the matrix viewpoint in the following. If M is symmetric, then it is diagonalizable and ( V i ) i { 1 , 2 , , n } is an orthonormal basis of R n , that is
$$\langle V_i, V_j \rangle_G = \sum_{x \in G} V_i(x)\, V_j(x) = \delta_{ij},$$
where δ i j is the Kronecker symbol.

3.2.2. Laplacian: Eigenvalues/Eigenvectors

The above applies to the Laplacian matrix (see Definition 5). For instance, the non-normalized Laplacian matrix for Figure 1 reads
$$L = \begin{pmatrix}
2 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1 & -1 & 5 & -1 & -1 & -1 & 0 & 0 & 0 \\
0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 2 & -1 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & -1 & 3 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 & 3 & -1 & -1 \\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 1
\end{pmatrix}.$$
The matrix L is symmetric and positive semi-definite. We write (μᵢ)_{i ∈ {1,2,…,n}} the eigenvalues of the matrix L, such that μ₁ ≥ μ₂ ≥ … ≥ μₙ ≥ 0, with the associated eigenvectors (vᵢ)_{i ∈ {1,2,…,n}} forming an orthonormal basis of Rⁿ. The normalized Laplacian matrix reads
$$\Delta = \begin{pmatrix}
1 & -1/2 & -1/2 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1/2 & 1 & -1/2 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1/5 & -1/5 & 1 & -1/5 & -1/5 & -1/5 & 0 & 0 & 0 \\
0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -1/2 & 0 & 1 & -1/2 & 0 & 0 & 0 \\
0 & 0 & -1/3 & 0 & -1/3 & 1 & -1/3 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -1/3 & 1 & -1/3 & -1/3 \\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 1
\end{pmatrix}.$$
The matrix Δ is only positive semi-definite.
In the sequel, we focus our attention on the matrix $\tilde\Delta = D^{1/2} \Delta D^{-1/2}$. This matrix is positive semi-definite, and also, since $\tilde\Delta = D^{1/2} D^{-1} L D^{-1/2} = D^{-1/2} L D^{-1/2}$, it is symmetric. As an example, we have
$$\tilde\Delta = \begin{pmatrix}
1 & -1/2 & -1/\sqrt{10} & 0 & 0 & 0 & 0 & 0 & 0 \\
-1/2 & 1 & -1/\sqrt{10} & 0 & 0 & 0 & 0 & 0 & 0 \\
-1/\sqrt{10} & -1/\sqrt{10} & 1 & -1/\sqrt{5} & -1/\sqrt{10} & -1/\sqrt{15} & 0 & 0 & 0 \\
0 & 0 & -1/\sqrt{5} & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -1/\sqrt{10} & 0 & 1 & -1/\sqrt{6} & 0 & 0 & 0 \\
0 & 0 & -1/\sqrt{15} & 0 & -1/\sqrt{6} & 1 & -1/3 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -1/3 & 1 & -1/\sqrt{3} & -1/\sqrt{3} \\
0 & 0 & 0 & 0 & 0 & 0 & -1/\sqrt{3} & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & -1/\sqrt{3} & 0 & 1
\end{pmatrix}.$$
We write (λᵢ)_{i ∈ {1,2,…,n}} the eigenvalues of the matrix $\tilde\Delta$, such that λ₁ ≥ λ₂ ≥ … ≥ λₙ ≥ 0, with the associated eigenvectors (Vᵢ)_{i ∈ {1,2,…,n}} forming an orthonormal basis of Rⁿ. The graph G is connected; therefore, λₙ = 0.
It is interesting to further develop the eigenvector Vₙ associated with the eigenvalue λₙ = 0. Let vₙ be the eigenvector of L associated with the eigenvalue λₙ = μₙ = 0 (an eigenvalue common to L and $\tilde\Delta$). We have L vₙ = 0; hence, D vₙ = A vₙ, which implies that vₙ = e/√n with the appropriate normalization, where e denotes the vector whose components are all equal to 1.
Actually, we can easily establish that
$$\tilde\Delta\, D^{1/2} v_n = D^{-1/2} L\, v_n = 0 = \lambda_n\, D^{1/2} v_n.$$
It turns out that $V_n = D^{1/2} v_n$, and with the appropriate normalization, we finally have
$$V_n(x) = \sqrt{\frac{d_{Ax}}{\mathrm{Vol}(G)}}.$$
Two additional points to bear in mind: $\tilde\Delta$ is similar to Δ by construction; therefore, both matrices have the same eigenvalues. Finally, since $\tilde\Delta = D^{-1/2} L D^{-1/2}$, then, according to Sylvester's law of inertia, $\tilde\Delta$ and L, and thus Δ, have the same number of negative, zero, and positive eigenvalues. Further developments could be done here.
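These spectral facts are easy to confirm numerically. A sketch on the Figure 1 graph (the edge list is an assumption of this illustration; note that NumPy's `eigh` returns eigenvalues in ascending order, whereas the paper indexes them in descending order, so the zero eigenvalue appears first):

```python
import numpy as np

# Figure 1 graph (assumed edge list) and its symmetric normalized Laplacian.
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (3, 6), (5, 6), (6, 7), (7, 8), (7, 9)]
A = np.zeros((9, 9))
for i, j in edges:
    A[i - 1, j - 1] = A[j - 1, i - 1] = 1
d = A.sum(axis=1)
L = np.diag(d) - A
Dm12 = np.diag(d ** -0.5)
Dt = Dm12 @ L @ Dm12              # \tilde{Delta} = D^{-1/2} L D^{-1/2}

lam, V = np.linalg.eigh(Dt)       # ascending eigenvalues, orthonormal columns
assert np.isclose(lam[0], 0)      # connected graph: zero is an eigenvalue
assert lam[1] > 1e-10             # ... and it is simple here

# The zero-eigenvalue eigenvector is V_n(x) = sqrt(d_Ax / Vol(G)).
Vn = np.sqrt(d / d.sum())
v0 = V[:, 0] * np.sign(V[0, 0])   # fix the arbitrary sign of the eigenvector
assert np.allclose(v0, Vn)
```

The sign fix is needed because an eigenvector is only defined up to a scalar; the formula of the text picks the positive representative.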

3.3. Resolution of the Equation

We need the following lemma.
Lemma 1.
The operator L is self-adjoint, in the sense that
$$\langle h, L_x f \rangle_{N(x)} = \langle L_x h, f \rangle_{N(x)},$$
$$\langle h, L f \rangle_G = \langle L h, f \rangle_G.$$
In addition, the operator $\tilde\Delta$ is also self-adjoint:
$$\langle h, \tilde\Delta_x f \rangle_{N(x)} = \langle \tilde\Delta_x h, f \rangle_{N(x)},$$
$$\langle h, \tilde\Delta f \rangle_G = \langle \tilde\Delta h, f \rangle_G.$$
Proof. 
The first two equations have proofs very similar to the one for Proposition 4. We thus focus on Δ ˜ x and Δ ˜ .
First we note, for any x G , that
$$\tilde\Delta f(x) = f(x) - \frac{1}{\sqrt{d_{Ax}}} \sum_{z \in N(x)} \frac{f(z)}{\sqrt{d_{Az}}} = \frac{1}{\sqrt{d_{Ax}}} \sum_{z \in N(x)} \left( \frac{f(x)}{\sqrt{d_{Ax}}} - \frac{f(z)}{\sqrt{d_{Az}}} \right).$$
It turns out that
$$\tilde\Delta_x f(y) = \frac{1}{\sqrt{d_{Ay}}} \sum_{z \in N(x)} \left( \frac{f(y)}{\sqrt{d_{Ay}}} - \frac{f(z)}{\sqrt{d_{Az}}} \right) a_{yz}, \quad y \in G.$$
Bearing this in mind, we calculate
$$\langle h, \tilde\Delta_x f \rangle_{N(x)} = \sum_{y \in N(x)} h(y)\, \tilde\Delta_x f(y) = \sum_{y \in N(x)} h(y)\, \frac{1}{\sqrt{d_{Ay}}} \sum_{z \in N(x)} \left( \frac{f(y)}{\sqrt{d_{Ay}}} - \frac{f(z)}{\sqrt{d_{Az}}} \right) a_{yz}$$
$$= \sum_{y \in N(x)} \sum_{z \in N(x)} \frac{h(y)}{\sqrt{d_{Ay}}} \left( \frac{f(y)}{\sqrt{d_{Ay}}} - \frac{f(z)}{\sqrt{d_{Az}}} \right) a_{yz}$$
$$= \frac{1}{2} \sum_{y \in N(x)} \sum_{z \in N(x)} \frac{h(y)}{\sqrt{d_{Ay}}} \left( \frac{f(y)}{\sqrt{d_{Ay}}} - \frac{f(z)}{\sqrt{d_{Az}}} \right) a_{yz} + \frac{1}{2} \sum_{y \in N(x)} \sum_{z \in N(x)} \frac{h(z)}{\sqrt{d_{Az}}} \left( \frac{f(z)}{\sqrt{d_{Az}}} - \frac{f(y)}{\sqrt{d_{Ay}}} \right) a_{zy}$$
$$= \frac{1}{2} \sum_{y \in N(x)} \sum_{z \in N(x)} \left( \frac{h(y)}{\sqrt{d_{Ay}}} - \frac{h(z)}{\sqrt{d_{Az}}} \right) \left( \frac{f(y)}{\sqrt{d_{Ay}}} - \frac{f(z)}{\sqrt{d_{Az}}} \right) a_{yz}$$
$$= \sum_{y \in N(x)} \sum_{z \in N(x)} \left( \frac{h(y)}{\sqrt{d_{Ay}}} - \frac{h(z)}{\sqrt{d_{Az}}} \right) a_{yz}\, \frac{f(y)}{\sqrt{d_{Ay}}} = \sum_{y \in N(x)} \tilde\Delta_x h(y)\, f(y) = \langle \tilde\Delta_x h, f \rangle_{N(x)}.$$
Equation (31) is proven. For Equation (32), we have
$$\langle h, \tilde\Delta f \rangle_G = \sum_{x \in G} h(x)\, \tilde\Delta f(x) = \sum_{x \in G} h(x)\, \frac{1}{\sqrt{d_{Ax}}} \sum_{y \in G} \left( \frac{f(x)}{\sqrt{d_{Ax}}} - \frac{f(y)}{\sqrt{d_{Ay}}} \right) a_{xy}$$
$$= \frac{1}{2} \sum_{x \in G} \sum_{y \in G} \left( \frac{h(x)}{\sqrt{d_{Ax}}} - \frac{h(y)}{\sqrt{d_{Ay}}} \right) \left( \frac{f(x)}{\sqrt{d_{Ax}}} - \frac{f(y)}{\sqrt{d_{Ay}}} \right) a_{xy},$$
and proceeding as previously proves the result. □
Here is the second important result of the paper. We have the following analytical expression for the hash rate per node.
Theorem 2.
The solution for the diffusion Equation (21) is given by
$$f(t, x) = \sum_{i=1}^n \left( C_i^L\, e^{-D \mu_i t} + \int_0^t e^{-D \mu_i (t - \tau)}\, \langle \Gamma(\tau, \cdot), v_i \rangle_G \, d\tau \right) v_i(x),$$
with $(C_1^L, C_2^L, \dots, C_n^L) \in \mathbb{R}^n$.
The solution for the diffusion Equation (22) is given by
$$f(t, x) = \frac{1}{\sqrt{d_{Ax}}} \sum_{i=1}^n \left( C_i^{\tilde\Delta}\, e^{-D \lambda_i t} + \int_0^t e^{-D \lambda_i (t - \tau)}\, \langle D^{1/2} \Gamma(\tau, \cdot), V_i \rangle_G \, d\tau \right) V_i(x),$$
with $(C_1^{\tilde\Delta}, C_2^{\tilde\Delta}, \dots, C_n^{\tilde\Delta}) \in \mathbb{R}^n$.
It is worth pointing out that these analytical solutions are easily implementable on a computer.
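As a check of the first formula, here is a sketch with Γ = 0 on the Figure 1 graph (the edge list, the diffusion coefficient value, and the initial condition are arbitrary assumptions of this illustration); it verifies the initial condition, the conservation of the total information, and the uniform equilibrium:

```python
import numpy as np

# Figure 1 graph (assumed edge list) and its non-normalized Laplacian.
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (3, 6), (5, 6), (6, 7), (7, 8), (7, 9)]
A = np.zeros((9, 9))
for i, j in edges:
    A[i - 1, j - 1] = A[j - 1, i - 1] = 1
L = np.diag(A.sum(axis=1)) - A

Dc = 0.3                          # diffusion coefficient D (arbitrary value)
f0 = np.zeros(9)
f0[5] = 1.0                       # all the information starts at node 6

mu, v = np.linalg.eigh(L)         # eigenpairs of L (orthonormal columns of v)
C = v.T @ f0                      # C_i = <f(0, .), v_i>_G

def f(t):
    """Theorem 2 solution with Gamma = 0: sum_i C_i e^{-D mu_i t} v_i(x)."""
    return v @ (np.exp(-Dc * mu * t) * C)

assert np.allclose(f(0.0), f0)                     # initial condition recovered
assert np.isclose(f(2.0).sum(), 1.0)               # total information conserved
assert np.allclose(f(1e3), np.full(9, 1.0 / 9.0))  # equilibrium: uniform over nodes
```

Conservation follows from the rows of L summing to zero, and the long-time limit is the projection onto the constant eigenvector of L.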
Proof. 
We first focus on the solution for Equation (21). We decompose f(t, x) in the basis $(v_i)_{i \in \{1,2,\dots,n\}}$ of $\mathbb{R}^n$ made of eigenvectors of the matrix L:
$$f(t, x) = \sum_{i=1}^n \alpha_i(t)\, v_i(x),$$
where, for all i ∈ {1, 2, …, n}, we have
$$\alpha_i(t) = \langle f(t, \cdot), v_i \rangle_G.$$
We have
$$\langle f, L v_i \rangle_G = \mu_i\, \langle f, v_i \rangle_G = \mu_i\, \alpha_i(t).$$
Using Lemma 1, we have
$$\mu_i\, \alpha_i(t) = \langle f, L v_i \rangle_G = \langle L f, v_i \rangle_G,$$
and using Equation (21), we have
$$-D \mu_i\, \alpha_i(t) = \left\langle \frac{\partial f}{\partial t}(t, \cdot), v_i \right\rangle_G - \langle \Gamma(t, \cdot), v_i \rangle_G.$$
Plugging in Equation (35), we have
$$-D \mu_i\, \alpha_i(t) = \frac{d}{dt} \left\langle \sum_{j=1}^n \alpha_j(t)\, v_j, v_i \right\rangle_G - \langle \Gamma(t, \cdot), v_i \rangle_G,$$
$$-D \mu_i\, \alpha_i(t) = \frac{d}{dt} \sum_{j=1}^n \alpha_j(t)\, \delta_{ij} - \langle \Gamma(t, \cdot), v_i \rangle_G = \frac{d \alpha_i(t)}{dt} - \langle \Gamma(t, \cdot), v_i \rangle_G,$$
$$\frac{d \alpha_i(t)}{dt} = -D \mu_i\, \alpha_i(t) + \langle \Gamma(t, \cdot), v_i \rangle_G.$$
We can easily solve this differential equation, and we have
$$\alpha_i(t) = \alpha_i(0)\, e^{-D \mu_i t} + \int_0^t e^{-D \mu_i (t - \tau)}\, \langle \Gamma(\tau, \cdot), v_i \rangle_G \, d\tau.$$
We obtain the final result by placing the previous equation to Equation (35).
We now focus on the solution for Equation (22). We decompose ( D 1 / 2 f ) ( t , x ) into the basis ( V i ) i { 1 , 2 , , n } of R n made of eigenvectors of the matrix Δ ˜ . The rest of the calculation remains exactly the same as below, with the additional trick Δ ˜ D 1 / 2 = D 1 / 2 Δ . □
We note that \( C_i^L = C_i^{\tilde\Delta}/\sqrt{d^A_x} \), for any \( i \in \{1,\dots,n\} \) and \( x \in G \). It turns out that if \( \Gamma = 0 \), using L or \( \Delta \) leads to the exact same result.
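Since the spectral solution above is directly implementable, a minimal numpy sketch may be useful (an illustration only, assuming Γ = 0; the 3-node path graph, D = 1, and the initial condition are placeholders, not values from the paper):

```python
import numpy as np

# Non-normalized Laplacian L = D_A - A of a 3-node undirected path graph.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A

D = 1.0                            # diffusion coefficient (placeholder)
f0 = np.array([1.0, 0.0, 0.0])     # initial information, all on node 1

# L is symmetric, so eigh returns an orthonormal eigenbasis (v_i) with
# eigenvalues mu_i; the constants C_i^L are the projections <f(0,.), v_i>_G.
mu, v = np.linalg.eigh(L)
C = v.T @ f0

def f(t):
    # f(t, x) = sum_i C_i e^{-D mu_i t} v_i(x)   (case Gamma = 0)
    return v @ (C * np.exp(-D * mu * t))

assert np.allclose(f(0.0), f0)                 # initial condition recovered
assert np.isclose(f(10.0).sum(), f0.sum())     # total information conserved
```

The two assertions check exactly the properties established above: the expansion reproduces the initial condition at t = 0, and the zero eigenvalue of L guarantees conservation of the total information.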
Corollary 1.
The solutions for the diffusion Equations (21) and (22) are written, respectively, as
\[
f(t,x) = \langle f(0,\cdot), U_L(t,x,\cdot)\rangle_G + \int_0^t \langle \Gamma(t-\tau,\cdot), U_L(\tau,x,\cdot)\rangle_G \, d\tau
\]
\[
f(t,x) = \langle f(0,\cdot), U_{\tilde\Delta}(t,x,\cdot)\rangle_G + \int_0^t \langle \Gamma(t-\tau,\cdot), U_{\tilde\Delta}(\tau,x,\cdot)\rangle_G \, d\tau
\]
where U L : R + × E 2 R is the diffusion propagator for the matrix L, and is such that
\[
U_L(t,x,y) = \sum_{i=1}^n e^{-D\mu_i t}\,v_i(x)\,v_i(y),\qquad \forall(t,x,y)\in\mathbb{R}_+\times E^2.
\]
We also have
\[
U_{\tilde\Delta}(t,x,y) = \sqrt{\frac{d^A_y}{d^A_x}}\sum_{i=1}^n e^{-D\lambda_i t}\,V_i(x)\,V_i(y),\qquad \forall(t,x,y)\in\mathbb{R}_+\times E^2.
\]
The advantage of this last expression is the explicit dependency of f with respect to the initial condition on all the nodes of G, i.e., with f ( 0 , y ) for all y G .
Proof. 
We derive the expression for the matrix Δ ˜ . From the general solution expression, we have
\[
f(0,x) = \frac{1}{\sqrt{d^A_x}}\sum_{j=1}^n C_j\,V_j(x).
\]
Therefore, we have
\[
C_i = \langle V_i, \sqrt{d^A}\,f(0,\cdot)\rangle_G = \sum_{y\in G} V_i(y)\,\sqrt{d^A_y}\,f(0,y).
\]
Implementing into the main solution expression, we have
\[
f(t,x) = \frac{1}{\sqrt{d^A_x}}\sum_{i=1}^n\left(\sum_{y\in G} V_i(y)\sqrt{d^A_y}\,f(0,y)\,e^{-D\lambda_i t} + \int_0^t \sum_{y\in G}\sqrt{d^A_y}\,V_i(y)\,\Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau\right)V_i(x).
\]
Some re-ordering leads to
\[
f(t,x) = \sum_{y\in G}\sqrt{\frac{d^A_y}{d^A_x}}\sum_{i=1}^n V_i(x)V_i(y)\,f(0,y)\,e^{-D\lambda_i t} + \int_0^t \sum_{y\in G}\sqrt{\frac{d^A_y}{d^A_x}}\sum_{i=1}^n V_i(x)V_i(y)\,\Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau.
\]
This is exactly what we are looking for; a change of variables leads to the result. □
Remark 1.
We have
\[
U_{\tilde\Delta}(0,x,y) = \sqrt{\frac{d^A_y}{d^A_x}}\sum_{i=1}^n V_i(x)V_i(y) = \sqrt{\frac{d^A_y}{d^A_x}}\,\delta_{xy} = \delta_{xy},
\]
where δ x y = 1 if x = y and 0 otherwise: the same for matrix L. We also have
\[
\lim_{t\to+\infty} U_{\tilde\Delta}(t,x,y) = \sqrt{\frac{d^A_y}{d^A_x}}\,V_n(x)V_n(y) = \sqrt{\frac{d^A_y}{d^A_x}}\,\frac{\sqrt{d^A_x}}{\sqrt{\operatorname{Vol}(G)}}\,\frac{\sqrt{d^A_y}}{\sqrt{\operatorname{Vol}(G)}} = \frac{d^A_y}{\operatorname{Vol}(G)},
\]
in which case lim t + U Δ ˜ ( t , x , y ) does not depend on x. We also found an interesting interpretation for the eigenvector V n . Indeed, we have
\[
\lim_{t\to+\infty} U_{\tilde\Delta}(t,x,y) = V_n^2(y).
\]
Therefore, V n 2 ( y ) represents the propagation of information due to the diffusion, to node y.
The next two sections deal with further developments on the implications of the facts established above. This also holds for the operator L.

4. Standard Equilibrium and Blockchain Equilibrium

The equilibrium is an important aspect of the current paper. We identified two types of equilibrium, which we further develop here. The zero eigenvalue as well as its eigenvector will play an important role, as we are going to see.

4.1. Standard Equilibrium

4.1.1. Integrable Creation Destruction Function

The standard equilibrium corresponds to the steady state, i.e., when t + . It is expected to have permanent exchanges between nodes at the same rate, which is exactly equivalent to the fact that each node keeps having the same quantity of information, i.e., the same quantity of hash generation.
More specifically, we have the following proposition.
Theorem 3
(Standard Equilibrium Framework). Let f be a solution of the diffusion Equation (1). We assume that \( t \mapsto \Gamma(t,y) \) is integrable on \( \mathbb{R}_+ \) for any \( y \in G \), and
\[
\int_0^t \Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau \underset{t\to+\infty}{\longrightarrow} \gamma_i(y),\quad\text{with } 0\le\gamma_i(y)<+\infty,\ \forall y\in G.
\]
Then the standard equilibrium state corresponds to:
\[
\lim_{t\to+\infty} f(t,x) = \frac{1}{\operatorname{Vol}(G)} \sum_{y\in G} d^A_y \left( f(0,y) + \int_0^{+\infty} \Gamma(\tau,y)\,d\tau \right).
\]
We have a similar result by considering the matrix L.
Proof. 
From the expression
\[
U(t,x,y) = \sqrt{\frac{d^A_y}{d^A_x}}\sum_{i=1}^n e^{-D\lambda_i t}\,V_i(x)\,V_i(y),
\]
we see that, as t → +∞, all the terms vanish except the last one, since λ 1 ≥ λ 2 ≥ ⋯ ≥ λ n−1 > λ n = 0. Thus
\[
\lim_{t\to+\infty} U(t,x,y) = \sqrt{\frac{d^A_y}{d^A_x}}\,V_n(x)V_n(y) = \frac{d^A_y}{\operatorname{Vol}(G)}.
\]
This does not depend on x. We then start from the following equation
\[
f(t,x) = \sum_{y\in G}\sqrt{\frac{d^A_y}{d^A_x}}\sum_{i=1}^n V_i(x)V_i(y)\,f(0,y)\,e^{-D\lambda_i t} + \int_0^t \sum_{y\in G}\sqrt{\frac{d^A_y}{d^A_x}}\sum_{i=1}^n V_i(x)V_i(y)\,\Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau.
\]
Hence,
\[
\lim_{t\to+\infty} f(t,x) = \frac{1}{\operatorname{Vol}(G)}\sum_{y\in G} d^A_y\, f(0,y) + \sum_{y\in G}\sqrt{\frac{d^A_y}{d^A_x}}\sum_{i=1}^n V_i(x)V_i(y)\,\lim_{t\to+\infty}\int_0^t \Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau.
\]
The previous equation is the most general one. We now come to the second term. The previous equation then becomes
\[
\lim_{t\to+\infty} f(t,x) = \frac{1}{\operatorname{Vol}(G)}\sum_{y\in G} d^A_y\, f(0,y) + \sum_{y\in G}\sqrt{\frac{d^A_y}{d^A_x}}\sum_{i=1}^n V_i(x)V_i(y)\,\gamma_i(y).
\]
We now focus on Equation (40). We will prove that γ i ( y ) = 0 , for all i ∈ { 1 , … , n − 1 } and y ∈ G , which will prove Equation (41). A change of variables gives
\[
\int_0^t \Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau = \int_0^t \Gamma(t-\tau,y)\,e^{-D\lambda_i\tau}\,d\tau.
\]
Bearing this in mind, for i = n , we have
\[
\int_0^t \Gamma(\tau,y)\,d\tau \underset{t\to+\infty}{\longrightarrow} \gamma_n(y),
\]
that is
\[
\int_0^{+\infty}\Gamma(\tau,y)\,d\tau = \gamma_n(y) < +\infty.
\]
Hence \( |\Gamma(\tau,y)| \le C(y) < +\infty \), with C a function from G to \( \mathbb{R} \), and \( \lim_{\tau\to+\infty}\Gamma(\tau,y) = 0 \), for any \( y\in G \). It turns out that, for any \( i\in\{1,2,\dots,n-1\} \), we have
\[
\forall \epsilon>0,\ \exists \tilde\tau>0,\ \forall \tau>\tilde\tau,\quad |\Gamma(\tau,y)| < \frac{\epsilon\, D\lambda_i}{C(y)+1},\quad \forall y\in G.
\]
Hence, for any \( i\in\{1,2,\dots,n-1\} \) and \( t > \tilde\tau \), we have
\[
\left|\int_{\tilde\tau}^{t}\Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau\right| \le \int_{\tilde\tau}^{t}\left|\Gamma(\tau,y)\right|e^{-D\lambda_i(t-\tau)}\,d\tau < \epsilon\,\frac{1-e^{-D\lambda_i(t-\tilde\tau)}}{C(y)+1}.
\]
Hence
\[
\left|\int_0^{t}\Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau - \int_0^{\tilde\tau}\Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau\right| < \epsilon\,\frac{1-e^{-D\lambda_i(t-\tilde\tau)}}{C(y)+1}.
\]
By a variable change and using triangular inequality, we have
\[
\left|\int_0^{t}\Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau\right| < \left|\int_{t-\tilde\tau}^{t}\Gamma(t-\tau,y)\,e^{-D\lambda_i\tau}\,d\tau\right| + \epsilon\,\frac{1-e^{-D\lambda_i(t-\tilde\tau)}}{C(y)+1}.
\]
In other words, we have
\[
\left|\int_0^{t}\Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau\right| < C(y)\,\frac{e^{-D\lambda_i(t-\tilde\tau)}-e^{-D\lambda_i t}}{D\lambda_i} + \epsilon\,\frac{1-e^{-D\lambda_i(t-\tilde\tau)}}{C(y)+1}.
\]
Suppose now that t R + * is such that
\[
t = \max\left(-\frac{1}{D\lambda_i}\ln\frac{\epsilon\,D\lambda_i}{C(y)+1},\ \tilde\tau\right).
\]
Then we have
\[
\left|\int_0^{t}\Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau\right| < \epsilon.
\]
Therefore,
\[
\lim_{t\to+\infty}\int_0^{t}\Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau = \gamma_i(y) = 0,
\]
which concludes the proof. □
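The equilibrium value of Theorem 3 can also be checked numerically (a minimal sketch assuming numpy, with Γ = 0, D = 1, and a 3-node path graph; all values are illustrative placeholders):

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
d = A.sum(axis=1)                      # degrees d_A = (1, 2, 1), Vol(G) = 4
L = np.diag(d) - A
Dh, Dhi = np.diag(d ** 0.5), np.diag(d ** -0.5)
Delta = Dhi @ L @ Dhi                  # normalized Laplacian

Dcoef = 1.0
f0 = np.array([1.0, 0.0, 0.0])
lam, V = np.linalg.eigh(Delta)

def f(t):
    # f(t,.) = D^{-1/2} V diag(e^{-D lam_i t}) V^T D^{1/2} f(0,.)
    return Dhi @ (V @ (np.exp(-Dcoef * lam * t) * (V.T @ (Dh @ f0))))

# Theorem 3 with Gamma = 0: lim f(t,x) = (1/Vol G) sum_y d_y f(0,y).
equilibrium = (d * f0).sum() / d.sum()
assert np.allclose(f(50.0), equilibrium, atol=1e-10)
```

The steady state is the degree-weighted average of the initial condition, independent of x, as stated by the theorem.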
For example, set \( \Gamma(t,x) = P(t)\,e^{-\alpha t} \), where P is a polynomial and \( \alpha \in \mathbb{R} \), with some restrictions that we shall now determine.
Let n N . A quick calculation allows to establish that
\[
\int \tau^n e^{a\tau}\,d\tau = \begin{cases} \displaystyle\sum_{j=0}^{n} (-1)^{n-j}\,\frac{1}{a^{n-j+1}}\,\frac{n!}{j!}\,\tau^{j}\,e^{a\tau} + C^{te} & \text{if } a \neq 0,\\[3mm] \displaystyle\frac{\tau^{n+1}}{n+1} + C^{te} & \text{otherwise.} \end{cases}
\]
If we set \( \Gamma(\tau,x) = \tau^n e^{-\alpha\tau} \), then we deduce that
\[
e^{-D\lambda_i t}\int_0^t \tau^n e^{(D\lambda_i-\alpha)\tau}\,d\tau = \begin{cases} \displaystyle\sum_{j=0}^{n} (-1)^{n-j}\,\frac{1}{(D\lambda_i-\alpha)^{n-j+1}}\,\frac{n!}{j!}\,t^{j}\,e^{-\alpha t} - \frac{(-1)^n\, n!}{(D\lambda_i-\alpha)^{n+1}}\,e^{-D\lambda_i t} & \text{if } \alpha \neq D\lambda_i,\\[3mm] \displaystyle e^{-D\lambda_i t}\,\frac{t^{n+1}}{n+1} & \text{otherwise.} \end{cases}
\]
We deduce that, for γ i ( y ) to be finite, we need \( \alpha \in \mathbb{R}_+^* \), while n ∈ N is unrestricted, so the degree of P is arbitrary. However, if α = 0, then we necessarily need n = 0. The least negligible function Γ is thus the constant Γ ( t , x ) = P ( 0 ), which is the only form giving a nonzero γ i ( y ).
Remark 2.
Equation (40) is actually close to the one found by applying the mean value theorem, provided Γ ( · , y ) is continuous for any y G : there exists a i [ 0 , t ] such that
\[
\int_0^t \Gamma(\tau,y)\,e^{-D\lambda_i \tau}\,d\tau = \frac{\Gamma(a_i,y)}{D\lambda_i}\left(1-e^{-D\lambda_i t}\right),\qquad \forall y\in G.
\]
Thus, \( \gamma_i(y) = \Gamma(a_i,y)/(D\lambda_i) \), with \( a_i \in \mathbb{R}_+ \).
Remark 3.
General function theory might be interesting if one decides to break the assumption that Γ ( · , y ) is continuous.

4.1.2. Continuous-by-Parts Creation Destruction Function

If we require the solution f to be continuous at all times, then, according to the diffusion equation, Γ must be at least piecewise continuous. In this case, the function Γ ( · , x ) is continuous by parts, for any x ∈ E, i.e., there are some points on the time axis R + where the function Γ ( · , x ) is not continuous (discontinuities of the first kind). In this case, for any x ∈ G, the function Γ ( · , x ) is still integrable on R + .

4.2. Blockchain Equilibrium (Approximation)

In fact, we have to consider that time is discretized, i.e., subdivided into intervals of approximately 10 min. Ten minutes is the average time needed for a miner to mine a new block in the Bitcoin blockchain Lipton and Treccani (2021), and this would change to about 15 s for the Ethereum blockchain. We will go into deeper detail on this point in the future but, approximately every 10 min, the blockchain equilibrium equation is satisfied. The following are the quantities of interest:
  • r b — block reward for block b;
  • c—cost per hash, supposed to be constant;
  • P t —price of one Bitcoin, in USD, at time t;
  • f i —fee for transaction i I , where I is the set of all transactions that will be stacked into block b;
  • N b —number of hashes in the network, for block b.
From these five variables, we can easily compute the revenues R b for block b, as follows.
\[
R_b = \left(r_b + \sum_{i\in I} f_i\right) P_T.
\]
We can compute the costs, more precisely the expected costs C b , for block b, as well:
\[
C_b = c\,N_b.
\]
We have the blockchain equilibrium equation:
\[
R_b = C_b \iff \left(r_b + \sum_{i\in I} f_i\right) P_T = c\,N_b.
\]
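As a toy numerical illustration of this balance (all figures below are hypothetical placeholders, not real market data), the equilibrium hash count N_b can be backed out from the revenue side:

```python
# Hypothetical figures for one block b (placeholders, not real market data).
r_b = 6.25                        # block reward, in BTC
fees = [0.010, 0.020, 0.005]      # transaction fees f_i, in BTC
P_T = 40_000.0                    # price of one Bitcoin in USD at time T
c = 1e-7                          # cost per hash in USD, assumed constant

R_b = (r_b + sum(fees)) * P_T     # revenues for block b, in USD
N_b = R_b / c                     # equilibrium condition: R_b = C_b = c N_b
C_b = c * N_b

assert abs(R_b - C_b) < 1e-9      # the equilibrium equation holds
```

With these placeholder figures, the equilibrium simply states that the number of hashes adjusts until the expected mining cost matches the block revenues.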
Thus, N b is connected to f ( t , x ), which gives the cost of forming a block. We can, in fact, assume that the circulation of transaction lists among miners is very fast: it takes approximately 1 millisecond for the nodes to receive the transaction list from the originating node. It is thus relevant to assume that the diffusion happens very quickly once one miner has a transaction list to send, and that the diffusion coefficient D is very high. This is the blockchain equilibrium approximation: \( D\lambda_{n-1} t \gg 1 \), for any time t.
In other words, \( D\lambda_{n-1} t \) is so large that \( e^{-D\lambda_i t} \to 0 \) (as \( D\lambda_{n-1}t \to +\infty \)), for any \( i \in \{1,2,\dots,n-1\} \). Only the term corresponding to \( \lambda_n = 0 \) survives.
Proposition 6
(Blockchain Equilibrium Approximation). Let f be solution of the diffusion Equation (1). We focus on t ∈ [ 0 , T ], where T is set to be 10 min. Then the blockchain equilibrium state corresponds to
\[
f(T,x) \underset{D\lambda_{n-1}t\to+\infty}{\longrightarrow} \frac{1}{\operatorname{Vol}(G)}\sum_{y\in G} d^A_y\left(f(0,y) + \int_0^{T}\Gamma(\tau,y)\,d\tau\right).
\]
We have a similar result by considering the matrix L.
It turns out that f ( T , x ) , which does not depend on x, is the same for all the nodes.
Proof. 
We have
\[
e^{-D\lambda_i \tau} \underset{D\lambda_{n-1}t\to+\infty}{\longrightarrow} \begin{cases} 0 & \text{if } i\in\{1,2,\dots,n-1\},\\ 1 & \text{if } i=n,\end{cases}
\]
and
\[
\int_0^t \Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau \underset{D\lambda_{n-1}t\to+\infty}{\longrightarrow} \begin{cases} 0 & \text{if } i\in\{1,2,\dots,n-1\},\\ \displaystyle\int_0^{+\infty}\Gamma(\tau,y)\,d\tau & \text{if } i=n.\end{cases}
\]
We deduce the equation from
\[
f(t,x) = \sum_{y\in G}\sqrt{\frac{d^A_y}{d^A_x}}\sum_{i=1}^n V_i(x)V_i(y)\,f(0,y)\,e^{-D\lambda_i t} + \sum_{y\in G}\sqrt{\frac{d^A_y}{d^A_x}}\sum_{i=1}^n V_i(x)V_i(y)\int_0^t \Gamma(\tau,y)\,e^{-D\lambda_i(t-\tau)}\,d\tau,
\]
which concludes the proof. □
Thus, f ( T , x ) is to be associated to N b . Finally, in essence, the standard equilibrium and blockchain equilibrium give the same results. In the following, ‘ f ( T , · ) ’ refers to the function at the equilibrium state.

5. Undirected Graph: Case Studies

5.1. 2 Nodes

As a quick application, suppose E = { x , y } (only two nodes) and V = { ( x , y ) } (one connection). Suppose Γ = 0, \( f(0,x) = f_0 \in \mathbb{R}_+ \), f ( 0 , y ) = 0. We have \( V_1(x)^2 + V_1(y)^2 = 1 \), \( \lambda_2 = 0 \), and
\[
f(t,x) = f_0\,U(t,x,x) = f_0\,V_1(x)^2\,e^{-D\lambda_1 t} + f_0\,V_2(x)^2,\qquad f(t,y) = f_0\,U(t,y,x) = f_0\,V_1(x)V_1(y)\,e^{-D\lambda_1 t} + f_0\,V_2(x)V_2(y).
\]
We can calculate the eigenvalues and eigenvectors for the network. The non-normalized Laplacian matrix is given by
\[
L = \begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix}
\]
and the Laplacian matrix by \( \Delta = \tilde\Delta = L \). The eigenvalues are \( \lambda_1 = 2 \) and \( \lambda_2 = 0 \), and the eigenvectors are \( V_1 = {}^t(1/\sqrt2,\ -1/\sqrt2) \) and \( V_2 = {}^t(1/\sqrt2,\ 1/\sqrt2) \). We finally obtain
\[
f(t,x) = \frac{f_0}{2}\left(1+e^{-2Dt}\right),\qquad f(t,y) = \frac{f_0}{2}\left(1-e^{-2Dt}\right).
\]
See Figure 3 for a plot of these two solutions.
The equilibrium is therefore given by
f ( T , x ) = f ( T , y ) = f 0 / 2 .
Finally, due to the diffusion effect, both nodes share the same information at the equilibrium.
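These closed forms can be cross-checked against a direct numerical integration of the diffusion equation (a minimal sketch; the forward-Euler scheme, D = 1, and f_0 = 1 are placeholder choices):

```python
import numpy as np

D, f0 = 1.0, 1.0
L = np.array([[1., -1.],
              [-1., 1.]])        # Laplacian of the 2-node graph
f = np.array([f0, 0.0])          # f(0,x) = f0, f(0,y) = 0

dt, T = 1e-4, 2.0                # small step for the explicit Euler scheme
for _ in range(int(T / dt)):
    f = f - dt * D * (L @ f)     # df/dt = -D L f

fx = f0 / 2 * (1 + np.exp(-2 * D * T))   # analytic f(T,x)
fy = f0 / 2 * (1 - np.exp(-2 * D * T))   # analytic f(T,y)
assert np.allclose(f, [fx, fy], atol=1e-3)
```

The numerical trajectory matches the analytic solution, and the total information f(t,x) + f(t,y) remains equal to f_0 at every step.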

5.2. 3 Nodes

We set E = { 1 , 2 , 3 } and V = { ( 1 , 2 ) , ( 2 , 3 ) }. The non-normalized Laplacian of such a graph is
\[
L = \begin{pmatrix}1 & -1 & 0\\ -1 & 2 & -1\\ 0 & -1 & 1\end{pmatrix}
\]
and the normalized Laplacian is
\[
\tilde\Delta = D^{-1/2}\,L\,D^{-1/2} = \begin{pmatrix}1 & -1/\sqrt2 & 0\\ -1/\sqrt2 & 1 & -1/\sqrt2\\ 0 & -1/\sqrt2 & 1\end{pmatrix}.
\]
The eigenvalues of \( \tilde\Delta \) are the roots of the polynomial \( (1-X)\big[(1-X)^2-1\big] \), that is, \( \lambda_1 = 2 \), \( \lambda_2 = 1 \), and \( \lambda_3 = 0 \). The eigenvectors are \( V_1 = {}^t(1/2,\ -1/\sqrt2,\ 1/2) \), \( V_2 = {}^t(1/\sqrt2,\ 0,\ -1/\sqrt2) \), and \( V_3 = {}^t(1/2,\ 1/\sqrt2,\ 1/2) \).
We set Γ = 0 . In order to be very explicit, the propagators are written as, for x = 1 ,
\[
U(t,1,1) = \tfrac14 + \tfrac12 e^{-Dt} + \tfrac14 e^{-2Dt},\qquad U(t,1,2) = \tfrac12 - \tfrac12 e^{-2Dt},\qquad U(t,1,3) = \tfrac14 - \tfrac12 e^{-Dt} + \tfrac14 e^{-2Dt},
\]
for x = 2
\[
U(t,2,1) = \tfrac14 - \tfrac14 e^{-2Dt},\qquad U(t,2,2) = \tfrac12 + \tfrac12 e^{-2Dt},\qquad U(t,2,3) = \tfrac14 - \tfrac14 e^{-2Dt},
\]
and for x = 3
\[
U(t,3,1) = \tfrac14 - \tfrac12 e^{-Dt} + \tfrac14 e^{-2Dt},\qquad U(t,3,2) = \tfrac12 - \tfrac12 e^{-2Dt},\qquad U(t,3,3) = \tfrac14 + \tfrac12 e^{-Dt} + \tfrac14 e^{-2Dt}.
\]
Therefore, we have
\[
\begin{aligned}
f(t,1) &= f(0,1)\left(\tfrac14 + \tfrac12 e^{-Dt} + \tfrac14 e^{-2Dt}\right) + f(0,2)\left(\tfrac12 - \tfrac12 e^{-2Dt}\right) + f(0,3)\left(\tfrac14 - \tfrac12 e^{-Dt} + \tfrac14 e^{-2Dt}\right)\\
f(t,2) &= f(0,1)\left(\tfrac14 - \tfrac14 e^{-2Dt}\right) + f(0,2)\left(\tfrac12 + \tfrac12 e^{-2Dt}\right) + f(0,3)\left(\tfrac14 - \tfrac14 e^{-2Dt}\right)\\
f(t,3) &= f(0,1)\left(\tfrac14 - \tfrac12 e^{-Dt} + \tfrac14 e^{-2Dt}\right) + f(0,2)\left(\tfrac12 - \tfrac12 e^{-2Dt}\right) + f(0,3)\left(\tfrac14 + \tfrac12 e^{-Dt} + \tfrac14 e^{-2Dt}\right)
\end{aligned}
\]
Figure 4 plots the evolution of the three functions.
The equilibrium state corresponds to the following expressions.
f ( T , 1 ) = f ( T , 2 ) = f ( T , 3 ) = 1 4 f ( 0 , 1 ) + 1 2 f ( 0 , 2 ) + 1 4 f ( 0 , 3 ) .
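The propagators of this 3-node case can be verified numerically (a sketch assuming numpy; the values of D and t are arbitrary placeholders):

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
d = A.sum(axis=1)
Dhi = np.diag(d ** -0.5)
lam, V = np.linalg.eigh(Dhi @ (np.diag(d) - A) @ Dhi)

D, t = 1.0, 0.7                   # placeholder diffusion coefficient and time
# U(t,x,y) = sqrt(d_y/d_x) sum_i e^{-D lam_i t} V_i(x) V_i(y)
U = np.sqrt(np.outer(1 / d, d)) * (V @ np.diag(np.exp(-D * lam * t)) @ V.T)

e1, e2 = np.exp(-D * t), np.exp(-2 * D * t)
assert np.isclose(U[0, 0], 1/4 + e1/2 + e2/4)   # U(t,1,1)
assert np.isclose(U[0, 1], 1/2 - e2/2)          # U(t,1,2)
assert np.isclose(U[0, 2], 1/4 - e1/2 + e2/4)   # U(t,1,3)
assert np.allclose(U.sum(axis=1), 1.0)          # each row sums to 1
```

The last assertion reflects conservation: starting from any node, the propagated information always sums to the initial amount.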
We are now going to propose the solutions for specific P2P networks. It is worth stressing that implementation of the above in a prototype is in progress.

6. Generalization to Directed Graphs

This section focuses on the generalization of what has been seen so far to a directed graph. Concretely, information between the different layers of the whole Bitcoin network is directed. A very simple example: the users layer sends information to the P2P network in the form of transaction lists, and does not expect any information back, since users are not miners. The diffusion is thus directed. This breaks the symmetry of the adjacency matrix, which is then no longer necessarily diagonalizable. We therefore need a more general framework to describe the diffusion. One could resort to straight numerical solutions for the implementation, but we are going to see that we can derive an analytical solution based on the singular value decomposition (SVD).
Definition 8
(SVD). Let A M n , m ( R ) be a matrix of size n × m , with n , m N * . Then, A admits the following decomposition:
\[
A = U\,\Sigma\,{}^tV,
\]
where U and V are orthogonal square matrices of size n and m, respectively, and Σ M n , m ( R ) is a 0 matrix, except its diagonal elements, which are non-negative numbers.
If m > n , we have
\[
\Sigma = \begin{pmatrix}\sigma_1 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0\\ 0 & \sigma_2 & 0 & \cdots & 0 & 0 & \cdots & 0\\ 0 & 0 & \sigma_3 & \cdots & 0 & 0 & \cdots & 0\\ \vdots & & & \ddots & \vdots & \vdots & & \vdots\\ 0 & 0 & 0 & \cdots & \sigma_n & 0 & \cdots & 0\end{pmatrix}.
\]
If m = n, Σ is a diagonal matrix, Σ = diag ( σ 1 , … , σ n ). The numbers σ 1 ≥ ⋯ ≥ σ n ≥ 0 are the singular values of A. The columns of the matrices U and V are the singular vectors of A.
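For reference, the SVD of a non-symmetric (directed-graph) Laplacian can be computed with numpy (a minimal sketch; the 2-node matrix below is just an illustrative example):

```python
import numpy as np

# A non-symmetric directed-graph Laplacian, e.g. a single edge x -> y.
L = np.array([[1., -1.],
              [0.,  0.]])

U, s, Vt = np.linalg.svd(L)       # L = U @ diag(s) @ Vt, with Vt = transpose(V)

assert np.allclose(U @ np.diag(s) @ Vt, L)      # decomposition reproduces L
assert np.allclose(U.T @ U, np.eye(2))          # U is orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(2))        # V is orthogonal
assert s[0] >= s[1] >= 0.0                      # sigma_1 >= ... >= 0
```

Note that numpy returns the transpose of V directly, and the singular values already sorted in non-increasing order, matching the convention of the definition above.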

6.1. Matrix Exponential and Adjoint of the Laplacian Operator

As a reminder, we need the following three definitions.
Definition 9
(Exponential of Matrix). Let A M n ( R ) for n N * . The exponential of the matrix A is given by
\[
\exp(A) = \sum_{k=0}^{+\infty}\frac{A^k}{k!}.
\]
We write e A for exp ( A ) .
Definition 10
(Hyperbolic Cosine and Sine of Matrix). Let A M n ( R ) for n N * . The hyperbolic cosine of the matrix A is given by
\[
\operatorname{ch}(A) = \frac{1}{2}\left(e^A + e^{-A}\right).
\]
The hyperbolic sine of the matrix A is given by
\[
\operatorname{sh}(A) = \frac{1}{2}\left(e^A - e^{-A}\right).
\]
It turns out that
\[
\operatorname{ch}(A) = \sum_{k=0}^{+\infty}\frac{A^{2k}}{(2k)!},\qquad \operatorname{sh}(A) = \sum_{k=0}^{+\infty}\frac{A^{2k+1}}{(2k+1)!}.
\]
Definition 11.
Let T be a (linear) operator and g G be a subgraph. The adjoint operator t T of T is such that
\[
\langle h, Tf\rangle_g = \langle {}^tT h, f\rangle_g,
\]
for any f , h : G → R. The matrix of the adjoint operator is the transpose of the matrix associated with T.

6.2. Solution for the Directed Diffusion Involving the Operator L

6.2.1. Intuition

In the case of a directed network, the diffusion is also oriented. To gain intuition, suppose there are two nodes x and y, such that x is connected to y, but not the converse. In this case, the adjacency matrix reads
\[
A = \begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}
\]
so that the non-normalized Laplacian matrix is
\[
L = \begin{pmatrix}1 & -1\\ 0 & 0\end{pmatrix}.
\]
In addition, the node x loses information, while y gains the same amount of information that x loses. Therefore, at any time t R + , the conservation of total information is satisfied:
\[
f(t,x) + f(t,y) = f(0,x) + f(0,y) = C^{te}.
\]
Since x loses information through a diffusive effect (the more information it has, the more it gives), we must have
\[
\frac{\partial f}{\partial t}(t,x) = -D\,f(t,x).
\]
Solving this equation leads to
\[
f(t,x) = f(0,x)\,e^{-Dt},
\]
so that
\[
C^{te} - f(t,y) = \left(C^{te} - f(0,y)\right)e^{-Dt} \iff f(t,y) = C^{te}\left(1-e^{-Dt}\right) + f(0,y)\,e^{-Dt}.
\]
This is the solution of the differential equation:
\[
\frac{\partial f}{\partial t}(t,y) = D\,f(t,x).
\]
Introducing the vector F ( t ) = t ( f ( t , x ) f ( t , y ) ) , we thus obtain the following system of differential equations:
\[
\frac{dF}{dt}(t) = -D\,{}^t\!L\,F(t).
\]
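A quick numerical sanity check of this two-node system (a sketch with D = 1; the forward-Euler integration is an approximation choice, not part of the model):

```python
import numpy as np

D = 1.0
L = np.array([[1., -1.],
              [0.,  0.]])          # Laplacian of the directed edge x -> y
F = np.array([1.0, 0.0])           # f(0,x) = 1, f(0,y) = 0
total = F.sum()

dt, T = 1e-4, 3.0
for _ in range(int(T / dt)):
    F = F - dt * D * (L.T @ F)     # dF/dt = -D tL F

# Closed forms: f(t,x) = f(0,x) e^{-Dt},  f(t,y) = C - f(0,x) e^{-Dt}.
assert np.allclose(F, [np.exp(-D * T), total - np.exp(-D * T)], atol=1e-3)
assert np.isclose(F.sum(), total)  # conservation of total information
```

Node x decays exponentially while node y absorbs exactly what x loses, as derived above.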

6.2.2. The Directed Diffusion Equation Involving L

Definition 12.
Let f : E × R + → R represent the information function for a graph with n nodes. If D is the diffusion coefficient and Γ : E × R + → R the creation function of the information, the following directed diffusion equation holds, for any t ∈ R + :
\[
\frac{\partial f}{\partial t}(t,x) = -D\,{}^t\!L\,f(t,x) + \Gamma(t,x)
\]
We recover Equation (21) if and only if L = ᵗL, that is, if and only if A is symmetric.

6.2.3. Solution Involving L

From Equation (52), we can use the Cauchy theorem to obtain the following solution.
Theorem 4
(Application of the Cauchy Theorem). The solution of the Cauchy problem associated with Equation (52) is given by
\[
f(t,x) = e^{-D\,{}^t\!L\,t}\,f(0,x) + \int_0^t e^{-D\,{}^t\!L\,(t-\tau)}\,\Gamma(\tau,x)\,d\tau.
\]
This expression is straightforwardly implementable in a computer. As a tip, we would rather use the following equation:
\[
f(t+\Delta t, x) = e^{-D\,{}^t\!L\,\Delta t}\,f(t,x) + \int_t^{t+\Delta t} e^{-D\,{}^t\!L\,(t+\Delta t-\tau)}\,\Gamma(\tau,x)\,d\tau.
\]
The advantage of this equation is that it forces a step-by-step dependency on the configuration at the previous time step, for any time t and increment Δt > 0. See Section 8.3 for a practical implementation based on this equation.
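This step-by-step scheme can be sketched as follows (with Γ = 0; the truncated exponential series and the two-node directed Laplacian are illustrative implementation choices, not prescriptions from the paper):

```python
import numpy as np

def expm_series(M, terms=25):
    """Truncated power series for exp(M); accurate when ||M|| is small."""
    E = np.eye(M.shape[0])
    P = np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ M / k
        E = E + P
    return E

D, dt = 1.0, 0.05
L = np.array([[1., -1.],
              [0.,  0.]])
step = expm_series(-D * L.T * dt)          # e^{-D tL dt}, reused at every step

f = np.array([1.0, 0.0])
for _ in range(int(3.0 / dt)):             # propagate up to t = 3
    f = step @ f                           # f(t+dt) = e^{-D tL dt} f(t)

assert np.allclose(f, [np.exp(-3.0), 1.0 - np.exp(-3.0)], atol=1e-10)
```

Because the same matrix exponential is reused at every increment, composing the steps reproduces the exact solution e^{-D tL t} f(0) up to the (negligible) series truncation.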

6.2.4. Analytical Solution Involving L

The reader may find it useful to derive an analytical solution based on SVD.
Theorem 5.
Let L = u S t v be the SVD of L (here S is indeed square), and let ( u i ) i { 1 , 2 , , n } and ( v i ) i { 1 , 2 , , n } be the columns of the matrices u and v, respectively. The solution for the directed diffusion Equation (52) is given by
\[
f(t,x) = {}^t\alpha^u(t)\cdot u(x) = {}^t\alpha^v(t)\cdot v(x),
\]
where
\[
\begin{pmatrix}\alpha^u(t)\\ \alpha^v(t)\end{pmatrix} = \begin{pmatrix}\operatorname{ch}(D\,{}^tv\,u\,S\,t) & -\operatorname{sh}(D\,{}^tv\,u\,S\,t)\,{}^tv\,u\\ -\operatorname{sh}(D\,S\,{}^tv\,u\,t)\,{}^tu\,v & \operatorname{ch}(D\,S\,{}^tv\,u\,t)\end{pmatrix}\cdot\begin{pmatrix}\alpha^u(0)\\ \alpha^v(0)\end{pmatrix} + \int_0^t \begin{pmatrix}\operatorname{ch}(D\,{}^tv\,u\,S\,(t-\tau)) & -\operatorname{sh}(D\,{}^tv\,u\,S\,(t-\tau))\,{}^tv\,u\\ -\operatorname{sh}(D\,S\,{}^tv\,u\,(t-\tau))\,{}^tu\,v & \operatorname{ch}(D\,S\,{}^tv\,u\,(t-\tau))\end{pmatrix}\cdot\begin{pmatrix}\langle\Gamma(\tau,\cdot), u_\cdot\rangle_G\\ \langle\Gamma(\tau,\cdot), v_\cdot\rangle_G\end{pmatrix}d\tau
\]
with α u ( 0 ) , α v ( 0 ) R n , and
\[
\langle\Gamma(\tau,\cdot), u_\cdot\rangle_G = {}^t\big(\langle\Gamma(\tau,\cdot), u_1\rangle_G,\dots,\langle\Gamma(\tau,\cdot), u_n\rangle_G\big),\qquad \langle\Gamma(\tau,\cdot), v_\cdot\rangle_G = {}^t\big(\langle\Gamma(\tau,\cdot), v_1\rangle_G,\dots,\langle\Gamma(\tau,\cdot), v_n\rangle_G\big).
\]
Proof. 
From the SVD L = u S t v , we deduce that L t L = u S t S t u and t L L = v t S S t v . This means that u and v are the matrices of eigenvectors of the symmetric matrices L t L and t L L , respectively, which thus are diagonalizable matrices. Therefore the columns of u and v form an orthonormal basis of R n . We can decompose the function f on those two bases:
\[
f(t,x) = \sum_{i=1}^n \alpha_i^u(t)\,u_i(x) = \sum_{i=1}^n \alpha_i^v(t)\,v_i(x).
\]
Set i { 1 , 2 , , n } . On the one hand, we have
\[
\langle f(t,\cdot), L v_i\rangle_G = \langle {}^t\!L\, f(t,\cdot), v_i\rangle_G.
\]
The left-hand-side gives
\[
\langle f(t,\cdot), L v_i\rangle_G = \langle f(t,\cdot), s_i u_i\rangle_G = s_i\,\alpha_i^u(t).
\]
The right-hand-side gives, after some calculations
\[
-D\,\langle {}^t\!L\, f(t,\cdot), v_i\rangle_G = \frac{d}{dt}\langle f(t,\cdot), v_i\rangle_G - \langle\Gamma(t,\cdot), v_i\rangle_G.
\]
This leads to
\[
\frac{d\alpha_i^v}{dt}(t) = -D\,s_i\,\alpha_i^u(t) + \langle\Gamma(t,\cdot), v_i\rangle_G.
\]
On the other hand, we have
\[
\langle f(t,\cdot), {}^t\!L\, u_i\rangle_G = \langle L f(t,\cdot), u_i\rangle_G.
\]
The left-hand-side gives
\[
\langle f(t,\cdot), {}^t\!L\, u_i\rangle_G = \langle f(t,\cdot), s_i v_i\rangle_G = s_i\,\alpha_i^v(t).
\]
The right-hand-side gives
\[
\langle L f(t,\cdot), u_i\rangle_G = \langle {}^t\!L\, f(t,\cdot), u_i\rangle_G + \langle (L-{}^t\!L)\,f(t,\cdot), u_i\rangle_G.
\]
There is the supplementary term \( \langle (L-{}^t\!L)\,f(t,\cdot), u_i\rangle_G \), which we calculate below.
We have
\[
\langle (L-{}^t\!L)\,f(t,\cdot), u_i\rangle_G = \sum_{j=1}^n \alpha_j^v(t)\,\langle (L-{}^t\!L)\,v_j, u_i\rangle_G.
\]
Let S ˜ = t v L u = ( s ˜ i j ) 1 i , j n . Some calculations lead to
\[
\langle (L-{}^t\!L)\,v_j, u_i\rangle_G = {}^tu_i\,(L-{}^t\!L)\,v_j = s_i\,\delta_{ij} - \tilde s_{ji}.
\]
Finally, we obtain
\[
\frac{d\alpha_i^u}{dt}(t) = -D\sum_{j=1}^n \alpha_j^v(t)\,\tilde s_{ji} + \langle\Gamma(t,\cdot), u_i\rangle_G.
\]
The system to solve is, therefore,
\[
\begin{cases} \dfrac{d\alpha_i^u}{dt}(t) = -D\displaystyle\sum_{j=1}^n \alpha_j^v(t)\,\tilde s_{ji} + \langle\Gamma(t,\cdot), u_i\rangle_G,\\[3mm] \dfrac{d\alpha_i^v}{dt}(t) = -D\,s_i\,\alpha_i^u(t) + \langle\Gamma(t,\cdot), v_i\rangle_G.\end{cases}
\]
We are now facing a system of differential equations, first order, non-homogeneous, with constant coefficients. In fact, we can solve this system of equations using a matrix approach.
First, we note that
\[
S\tilde S = (S\,{}^tv\,u)^2,\qquad \tilde S S = ({}^tv\,u\,S)^2.
\]
Next, we set
\[
M = \begin{pmatrix}0 & \tilde S\\ S & 0\end{pmatrix}\in\mathcal{M}_{2n}(\mathbb{R}),
\]
also
\[
\gamma(t) = \begin{pmatrix}\langle\Gamma(t,\cdot), u_\cdot\rangle_G\\ \langle\Gamma(t,\cdot), v_\cdot\rangle_G\end{pmatrix},
\]
and
\[
\alpha(t) = \begin{pmatrix}\alpha^u(t)\\ \alpha^v(t)\end{pmatrix} = {}^t\big(\alpha_1^u(t),\dots,\alpha_n^u(t),\ \alpha_1^v(t),\dots,\alpha_n^v(t)\big).
\]
The system of differential equations then becomes
\[
\frac{d\alpha}{dt}(t) = -DM\,\alpha(t) + \gamma(t).
\]
The Cauchy theorem leads to
\[
\alpha(t) = e^{-DMt}\,\alpha(0) + \int_0^t e^{-DM(t-\tau)}\,\gamma(\tau)\,d\tau.
\]
We can go further by calculating the powers of the matrix M. In fact, one can prove by mathematical induction that
\[
M^{2l} = \begin{pmatrix}(\tilde S S)^l & 0\\ 0 & (S\tilde S)^l\end{pmatrix},\qquad M^{2l+1} = \begin{pmatrix}0 & (\tilde S S)^l\,\tilde S\\ (S\tilde S)^l\,S & 0\end{pmatrix},\qquad l\in\mathbb{N}.
\]
Therefore
\[
e^{-DMt} = \sum_{l=0}^{+\infty}\frac{(-Dt)^l}{l!}M^l = \sum_{l=0}^{+\infty}\frac{(Dt)^{2l}}{(2l)!}M^{2l} - \sum_{l=0}^{+\infty}\frac{(Dt)^{2l+1}}{(2l+1)!}M^{2l+1} = \begin{pmatrix}\displaystyle\sum_{l=0}^{+\infty}\frac{(Dt)^{2l}}{(2l)!}({}^tv\,u\,S)^{2l} & -\displaystyle\sum_{l=0}^{+\infty}\frac{(Dt)^{2l+1}}{(2l+1)!}({}^tv\,u\,S)^{2l+1}\,{}^tv\,u\\ -\displaystyle\sum_{l=0}^{+\infty}\frac{(Dt)^{2l+1}}{(2l+1)!}(S\,{}^tv\,u)^{2l+1}\,{}^tu\,v & \displaystyle\sum_{l=0}^{+\infty}\frac{(Dt)^{2l}}{(2l)!}(S\,{}^tv\,u)^{2l}\end{pmatrix},
\]
which we can write as
\[
e^{-DMt} = \begin{pmatrix}\operatorname{ch}(D\,{}^tv\,u\,S\,t) & -\operatorname{sh}(D\,{}^tv\,u\,S\,t)\,{}^tv\,u\\ -\operatorname{sh}(D\,S\,{}^tv\,u\,t)\,{}^tu\,v & \operatorname{ch}(D\,S\,{}^tv\,u\,t)\end{pmatrix}.
\]
This achieves the proof. □
As a simple verification, what does this theorem become if L is symmetric? We would then have \( s_i = \mu_i \) for any \( i \in \{1,\dots,n\} \), and u = v, so that \( {}^tv\,u = {}^tu\,v = \mathrm{Id} \) and \( \alpha^u = \alpha^v \). In addition, \( \tilde S \) would be diagonal with \( \tilde s_{ii} = \mu_i \), i.e., \( \tilde S = S \). Then
\[
\operatorname{ch}(D\,{}^tv\,u\,S\,t) = \begin{pmatrix}\operatorname{ch}(D\mu_1 t) & & 0\\ & \ddots & \\ 0 & & \operatorname{ch}(D\mu_n t)\end{pmatrix}
\]
and, if Γ = 0 , we have
\[
\alpha_i(t) = \operatorname{ch}(D\mu_i t)\,\alpha_i(0) - \operatorname{sh}(D\mu_i t)\,\alpha_i(0) = e^{-D\mu_i t}\,\alpha_i(0).
\]
This is consistent with the previous results and confirms the relevance of the theorem.

6.3. Solution for the Directed Diffusion Involving the Operator Δ

6.3.1. The Directed Diffusion Equation Involving Δ

Definition 13.
Let f : E × R + R represent the information function for a graph with n nodes. If D is the diffusion coefficient and Γ : E × R + R the creation function of the information, the following directed diffusion equation holds for any t R + :
\[
\frac{\partial f}{\partial t}(t,x) = -D\,{}^t\Delta\,f(t,x) + \Gamma(t,x)
\]
We recover Equation (22) if and only if Δ = ᵗΔ, that is, if and only if A is symmetric.

6.3.2. Solution Involving Δ

Theorem 6
(Application of the Cauchy Theorem). The solution of the Cauchy problem associated with Equation (57) is given by
\[
f(t,x) = e^{-D\,{}^t\Delta\,t}\,f(0,x) + \int_0^t e^{-D\,{}^t\Delta\,(t-\tau)}\,\Gamma(\tau,x)\,d\tau.
\]
This expression is straightforwardly implementable in a computer.

6.3.3. Analytical Solution Involving Δ

Similarly, we can prove the following theorem.
Theorem 7.
Let Δ ˜ = U Σ t V be the SVD of Δ ˜ (here Σ is indeed square), and let ( U i ) i { 1 , 2 , , n } and ( V i ) i { 1 , 2 , , n } be the columns of the matrices U and V, respectively. The solution for the directed diffusion Equation (57) is given by
\[
f(t,x) = \frac{1}{\sqrt{d^A_x}}\,{}^t\beta^u(t)\cdot U(x) = \frac{1}{\sqrt{d^A_x}}\,{}^t\beta^v(t)\cdot V(x),
\]
where
\[
\begin{pmatrix}\beta^u(t)\\ \beta^v(t)\end{pmatrix} = \begin{pmatrix}\operatorname{ch}(D\,{}^tV U\,\Sigma\,t) & -\operatorname{sh}(D\,{}^tV U\,\Sigma\,t)\,{}^tV U\\ -\operatorname{sh}(D\,\Sigma\,{}^tV U\,t)\,{}^tU V & \operatorname{ch}(D\,\Sigma\,{}^tV U\,t)\end{pmatrix}\cdot\begin{pmatrix}\beta^u(0)\\ \beta^v(0)\end{pmatrix} + \int_0^t \begin{pmatrix}\operatorname{ch}(D\,{}^tV U\,\Sigma\,(t-\tau)) & -\operatorname{sh}(D\,{}^tV U\,\Sigma\,(t-\tau))\,{}^tV U\\ -\operatorname{sh}(D\,\Sigma\,{}^tV U\,(t-\tau))\,{}^tU V & \operatorname{ch}(D\,\Sigma\,{}^tV U\,(t-\tau))\end{pmatrix}\cdot\begin{pmatrix}\langle D^{1/2}\Gamma(\tau,\cdot), U_\cdot\rangle_G\\ \langle D^{1/2}\Gamma(\tau,\cdot), V_\cdot\rangle_G\end{pmatrix}d\tau
\]
with β u ( 0 ) , β v ( 0 ) R n , and
\[
\langle D^{1/2}\Gamma(\tau,\cdot), U_\cdot\rangle_G = {}^t\big(\langle D^{1/2}\Gamma(\tau,\cdot), U_1\rangle_G,\dots,\langle D^{1/2}\Gamma(\tau,\cdot), U_n\rangle_G\big),\qquad \langle D^{1/2}\Gamma(\tau,\cdot), V_\cdot\rangle_G = {}^t\big(\langle D^{1/2}\Gamma(\tau,\cdot), V_1\rangle_G,\dots,\langle D^{1/2}\Gamma(\tau,\cdot), V_n\rangle_G\big).
\]

7. Directed Graph: Case Studies

This section shows simulations for important graphs.

7.1. Erdős–Rényi

The Erdős–Rényi graph is uniformly random: fix the number of nodes and edges; the connections are then drawn at random. Figure 5 shows a simulation.
There might be counter-intuitive facts; e.g., the node with the highest number of neighbors is not necessarily the most influential in the network. The influence depends on connectivity, but also on the given network configuration, such as the distribution of information among all the nodes.

7.2. Binary Tree

The binary tree is the simplest hierarchical structure on graphs: every node has two children. Figure 6 shows a simulation.
In the standard configuration as in Figure 6, the information is circulating from parents to children. We, however, observed (as in Mandala networks) the other possible direction.

7.3. 4-Tree

Figure 7 shows a simulation.

7.4. Ring Graph

The ring graph is the simplest cycle on a network. Cycles are a very important aspect of graphs: they allow one to extract seasonality patterns for a given context. Figure 8 shows a simulation.
The most important characteristic we observe is the envelope: the cycle behaves as two distinct sub-systems which interact continuously up to the dissipation level. One sub-system gives information to the other, which receives it, and vice versa.
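A directed ring can be simulated in a few lines (a minimal sketch with n = 8 and D = 1; the Euler discretization and the time horizon are placeholder choices):

```python
import numpy as np

n, D, dt = 8, 1.0, 1e-3
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = 1.0            # node i sends to node i+1 (mod n)
L = np.diag(A.sum(axis=1)) - A

f = np.zeros(n)
f[0] = 1.0                             # all information starts on node 0
for _ in range(int(60.0 / dt)):
    f = f - dt * D * (L.T @ f)         # directed diffusion dF/dt = -D tL F

# The cycle dissipates toward a uniform distribution over the ring.
assert np.allclose(f, 1.0 / n, atol=1e-6)
assert np.isclose(f.sum(), 1.0)        # total information conserved
```

During the transient, the information travels around the cycle in damped waves before the dissipation level is reached, in line with the envelope behavior described above.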

8. Layers Constantly Giving Information

8.1. Motivations

This section deals with an important aspect of the Bitcoin network. There are different families (layers) of nodes in this network; the P2P network is, in fact, a subnetwork of the whole system. There is, in addition, the layer of users, who may send information (e.g., a transaction) to the miners so that they mine it. There is also the layer of exchanges, which sends to the blockchain information such as public key diffusion, or even collects prices from different exchanges. In particular, an exchange needs to be at the center of the whole network, communicating with nodes of all the layers by sending and receiving all types of information. Such a complex system can in fact be simplified, as illustrated in Figure 9. If the related graph is directed, then the layers are said to be directed, and the layer which receives information from the others, and whose nodes exchange information with each other, is named the main layer. A graph configuration where the main layer is the layer of miners could represent the diffusion of transaction lists from users to miners, and also between miners (see Figure 9), whereas if the main layer is the layer of exchanges, the resulting diffusion model would describe all the interactions (i.e., emission/reception) of data that the exchanges could have with users, other exchanges, and miners. In particular, this graphical representation of an exchange should genuinely reveal, in a clear way, the whole flow of information dealing with the exchange. The diffusion model, for a given time condition, should predict the evolution of information propagation all around the network. Modeling this is the object of the current section. Surprisingly, bearing in mind the previous section, the result is simple.

8.2. Directed Diffusion with Boundary Conditions—Analytical Expression

Let S G be the main layer (the green layer in Figure 9), and S ¯ G (the red layer in Figure 9) such that S S ¯ = G and S S ¯ = . We keep the notations in the previous sections. The following theorem is the Theorem 5, with an additional boundary condition.
Theorem 8.
Suppose L is written as
\[
L = \begin{pmatrix}L_S & *\\ * & L_{\bar S}\end{pmatrix},
\]
where, in particular, L S is the Laplacian of the sub-graph S with singular vectors u S and v S . Suppose f ( t , x ) = B ( t , x ) , fixed and known, for any ( t , x ) R + × S ¯ (boundary condition). Then, Theorem 5 applies to the sub-graph S, by replacing Γ ( τ , · ) , u · G with Γ ˜ ( τ , · ) , u · S S , and Γ ( τ , · ) , v · G with Γ ˜ ( τ , · ) , v · S S , where
\[
\tilde\Gamma(t,x) = \Gamma(t,x) + D\sum_{y\in\bar S} B(t,y)\,a_{yx}
\]
Proof. 
Let μ ˜ 1 μ ˜ s 1 > μ ˜ s = 0 be the singular values for L S , and let u S and v S be the singular vector matrices for L S . In fact, we set u i S ( x ) = 0 if x S , idem for v S , for any i { 1 , 2 , , n } , so that u i S and v i S are n-dimensional vectors.
Bearing in mind the proof for Theorem 5, we need to set
\[
f_{|S}(t,x) = \sum_{i=1}^n a_i^u(t)\,u_i^S(x) = \sum_{i=1}^n a_i^v(t)\,v_i^S(x),\qquad \forall(t,x)\in\mathbb{R}_+\times S.
\]
On the one hand, we have
D μ ˜ j a j u ( t ) = D f | S ( t , · ) , μ ˜ j u j S S = D f | S ( t , · ) , L S v j S S = D f ( t , · ) , L v j S G D f ( t , · ) , L v j S S ¯ = D t L f ( t , · ) , v j S G D y S ¯ f ( t , y ) L v j S ( y ) = d d t a j v ( t ) + Γ ( t , · ) , v j S G D x G y S ¯ f ( t , y ) v j S ( y ) v j S ( x ) a y x = d d t a j v ( t ) + Γ ( t , · ) , v j S G + D x S y S ¯ B ( t , y ) v j S ( x ) a y x .
Thus
\[
-D\tilde\mu_j\,a_j^u(t) = \frac{d}{dt}a_j^v(t) - \left\langle \Gamma(t,\cdot) + D\sum_{y\in\bar S}B(t,y)\,a_{y\cdot},\ v_j^S\right\rangle_G.
\]
Identically, we have
D μ ˜ j a j v ( t ) = D f | S ( t , · ) , t L S u j S S = D t L f ( t , · ) , u j S G + D ( L t L ) f ( t , · ) , u j S G D f ( t , · ) , t L u j S S ¯ .
In particular, the last term is written as
D f ( t , · ) , t L u j S S ¯ = D y S ¯ f ( t , y ) x G u j S ( y ) t a y x u j S ( x ) t a x y = D x G y S ¯ B ( t , y ) u j S ( y ) t a y x u j S ( x ) t a x y = D x S y S ¯ B ( t , y ) u j S ( x ) t a x y = D y S ¯ B ( t , y ) a y · , u j S S .
The same calculations performed as in the proof for Theorem 5 actually allow to conclude. □
Remark 4.
When the exchanges constitute the main layer S, two other layers could be of interest, the layers of users (customers) S ¯ 1 and the layers of miners S ¯ 2 , so that the whole network G is such that G = S S ¯ 1 S ¯ 2 , and S S ¯ 1 S ¯ 2 = . Setting S ¯ = S ¯ 1 S ¯ 2 , we deduce that Theorem 8 applies to the layer S of exchanges, and, considering the two distinct sources { customers , miners } , we specifically have
\[
\tilde\Gamma(t,x) = \Gamma(t,x) + D\sum_{y\in\bar S_1} B_1(t,y)\,a_{yx} + D\sum_{y\in\bar S_2} B_2(t,y)\,a_{yx}.
\]

8.3. Directed Diffusion with Boundary Conditions—Practical Case

The previous section demonstrates that any boundary condition must be considered as a constraint to the nodes whose neighbors are boundary nodes, multiplied by the diffusion coefficient D. Figure 9 thus becomes the object of Figure 10.
Basically, each constrained node has a stronger diffusive effect toward the rest of the network. More specifically, since (supposing Δt sufficiently small)
\[
\int_t^{t+\Delta t} e^{-D\,{}^t\!L\,(t+\Delta t-\tau)}\,\Gamma(\tau,x)\,d\tau = \Gamma(t,x)\,\Delta t + o(\Delta t),
\]
and from Equation (54), we can set
\[
f(t+\Delta t, x) = e^{-D\,{}^t\!L\,\Delta t}\,f(t,x) + \Gamma(t,x)\,\Delta t + E(t,x),
\]
where E ( t , x ) = o ( Δ t ) is the error committed by the approximation. In practice, the approximation is sufficiently good if Δt ≤ 0.05 s; we thus set Δt = 0.05. Provided that Γ is of class \( C^\infty \), we indeed have an error of
\[
E(t,x) = \int_t^{t+\Delta t} e^{-D\,{}^t\!L\,(t+\Delta t-\tau)}\,\Gamma(\tau,x)\,d\tau - \Gamma(t,x)\,\Delta t \approx \sum_{n=2}^{+\infty}\frac{\Delta t^n}{n!}\,\Gamma^{(n-1)}(t,x) \le \sum_{n=2}^{+\infty}\frac{\Delta t^n}{n!}\left|\Gamma^{(n-1)}(t,x)\right|.
\]
If the function Γ is sufficiently smooth, then
\[
E(t,x) \le \sum_{n=2}^{+\infty}\frac{\Delta t^n}{n!}\,\sup_{n\in\mathbb{N}^*}\left|\Gamma^{(n)}(t,x)\right| = \left(e^{\Delta t}-1-\Delta t\right)\sup_{n\in\mathbb{N}^*}\left|\Gamma^{(n)}(t,x)\right|.
\]
In practice, we shall have \( \sup_{n\in\mathbb{N}^*}|\Gamma^{(n)}(t,x)| \le 1 \), so that the error E never exceeds 2% and the equality E ( t , x ) = o ( Δ t ) is justified, for any x.
Figure 11 plots a diffusion without the creation/destruction ( Γ = 0 ) of a graph, where the information is provided by node 1, and diffuses up to node 10 (without ever reaching 11). Note that nodes 2 and 3 (respectively, 5, 6, and 7) play symmetrical roles and have the same information amount through time.
We now add the constraint $\Gamma$ such that
$$\Gamma(t,x) = \begin{cases} \Gamma(t-\Delta t, x) + r_{+}\,\Delta t & \text{if } x = 2 \text{ and } f(t,2) \le 1,\\ \Gamma(t-\Delta t, x) - r_{-}\,\Delta t & \text{if } x = 10 \text{ and } f(t,10) \ge 1,\\ 0 & \text{otherwise.} \end{cases}$$
In words, node 2 is constantly fed by external layers at a rate $r_{+}$ (creation) as long as it does not hold a sufficient resource, while node 10 constantly feeds external layers at a rate $r_{-}$ (destruction) whenever it holds too much of the resource. We choose rates $r_{+} = r_{-} = \Delta t = 0.05$.
With a cubic spline smoothing, we may observe that $\sup_{n\in\mathbb{N}^*} \left\| \Gamma^{(n)}(t,x) \right\| \le 1$, so that $E \le 0.2\%$.
Figure 12 shows the simulation.
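The creation/destruction mechanism can be reproduced with a short simulation. The sketch below is illustrative (assuming NumPy; a simple directed path graph with ten nodes stands in for the graph of Figure 11, which is an assumption, and a single rate r = 0.05 is used for both creation and destruction): node 2 is fed while $f(t,2) \le 1$ and node 10 is drained while $f(t,10) \ge 1$.

```python
import numpy as np

def expm_taylor(M, terms=30):
    # matrix exponential by truncated Taylor series (M has a small norm here)
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

D, dt, r = 1.0, 0.05, 0.05
n = 10
A = np.zeros((n, n))
for i in range(n - 1):            # directed path 1 -> 2 -> ... -> 10 (simplified)
    A[i, i + 1] = 1.0
L = np.diag(A.sum(axis=1)) - A
P = expm_taylor(-D * dt * L.T)    # one-step propagator

f = np.zeros(n); f[0] = 1.0       # information starts at node 1
G = np.zeros(n)                   # the constraint term Gamma
for _ in range(4000):             # integrate up to t = 200
    # creation at node 2 while it lacks resource, destruction at node 10 when full
    G[1] = G[1] + r * dt if f[1] <= 1.0 else 0.0
    G[9] = G[9] - r * dt if f[9] >= 1.0 else 0.0
    f = P @ f + G * dt
```

In the permanent regime, node 2 then oscillates around 1, qualitatively reproducing the behavior discussed for Figure 12.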
It is interesting to note the marked change of behavior induced by such constraints. Node 2 undergoes permanent oscillations around 1, with a transition regime of decreasing frequency, and so does node 10 once its information exceeds 1. The oscillations induced at node 2 are transferred toward node 4, but the diffusion is not strong enough for them to be visible there.
We therefore increase the diffusion coefficient D from 1 to 5, then to 10, and Figure 13 shows the simulations.
For all intermediate nodes, oscillations of information are observed at the same frequency as the one imposed by the reception at node 2. Concretely, this means that a single node providing constant information to the others, without exceeding its own resources, governs the dynamics of the network. The shift of the minimum and maximum values in the permanent regime is due to the delay in obtaining the information, and the shift is larger the more distant the node is from node 2.
In fact, the graph in Figures 12 and 13 behaves as a cycle: nodes 2 and 10 could be connected to each other through an intermediate set of layers. This means that cycles maintain oscillations through the permanent regime. These oscillations are mainly governed by one parameter, the diffusion coefficient D, which therefore appears as a bifurcation parameter (its value determines chaotic behavior in the network).

9. Delay in the Differential Equation

So far, the response to a stimulation of the network has been immediate. However, time lags are essential to consider, since the response to information is never automatic: there is a response time to understand the information and to find an appropriate answer, from the human as well as the machine viewpoint. It is essential to find a way to describe delay in the study.
Generally speaking, a delayed differential equation—an active topic of academic research nowadays—for the above diffusion equation could be written as a delayed-response diffusion equation:
$$\frac{\partial f}{\partial t}(t,x) = -D\,{}^tL\, f(t-\tau, x) + \Gamma(t,x),$$
where $\tau \ge 0$ is the time lag for the answer of the system due to diffusion effects. Usually, the function $f(\cdot,x)$, for any $x \in G$, is defined on $[-\tau, +\infty[$ and is given on $[-\tau, 0]$.
Although there are methods for solving such an equation, we develop in the following some techniques that make the implementation easy. One trick is to make $L$ time dependent, as we are going to see.

9.1. Time Dependent Laplacian L ( t )

The adjacency matrix may be time dependent, as nodes can change their configuration through time. In order to express the time-dependent Laplacian matrix $L(t) = D(t) - A(t)$ with respect to $A(t)$ exclusively, we note that
$$D(t) = \sum_{i,j=1}^{n} E_{i,i}\, A(t)\, E_{j,i},$$
and the Laplacian matrix is rewritten as $L(t) = \sum_{i,j=1}^{n} E_{i,i}\, A(t)\, E_{j,i} - A(t)$, which behaves as a functional of the matrix $A(t)$.
Bearing this in mind, a way to introduce delay in a graph system is to make connections at a given time $\tau > 0$. Without any loss of generality, we suppose that the $r$th node, for a fixed $r \in \{1,\dots,n\}$, is isolated at time $t \in [0,\tau[$, and then gets connected to, say, node $k \in \{1,\dots,n\} \setminus \{r\}$ (from $k$ to $r$) in the network, from time $t = \tau$.
This makes the adjacency matrix $A(t)$ time dependent, since
$$a_{kr}(t) = \begin{cases} 0 & \text{if } t \in [0,\tau[,\\ 1 & \text{if } t \in [\tau,+\infty[, \end{cases} \qquad \text{that is, } a_{kr}(t) = H(t-\tau),$$
where H is the Heaviside distribution.
Thus, we can prove that the $r$th diagonal element of $L$ is $l_{rr}(t) = \sum_{i=1}^{n} a_{ri} = l_{rr}(0) + H(t-\tau)$, while $l_{kr}(t) = -a_{kr}(t) = -H(t-\tau)$. In this configuration, we therefore have
$$L(t) = L(0) + \mathbf{H}(t),$$
where $\mathbf{H}(t)$ is zero everywhere, except at element $(k,r)$, where it is equal to $-H(t-\tau)$, and at its diagonal element $(r,r)$, where it is equal to $H(t-\tau)$. We can actually generalize the above approach into the following proposition.
Proposition 7.
If $p \in \mathbb{N}^*$ nodes $x_1, x_2, \dots, x_p$ are connected to $x_{11}, \dots, x_{1q_1}$, …, and $x_{p1}, \dots, x_{pq_p}$, respectively, at times $\tau_{11}, \dots, \tau_{1q_1}$, …, and $\tau_{p1}, \dots, \tau_{pq_p}$, respectively, then
$$L(t) = L(0) + \mathbf{H}(t),$$
where $\mathbf{H}(t)$ is zero except at element $(x_i, x_i)$, which equals $\sum_{k=1}^{q_i} H(t-\tau_{ik})$ for all $i \in \{1,\dots,p\}$, and at either element $(x_{ik}, x_i)$ if the connection is from $x_{ik}$ to $x_i$, or element $(x_i, x_{ik})$ otherwise, whose value is $-H(t-\tau_{ik})$, for all $i \in \{1,\dots,p\}$, $k \in \{1,\dots,q_i\}$.
Thus, the matrix $\mathbf{H}$ behaves as a Laplacian matrix whose elements are either 0 or signed Heaviside distributions.
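The construction $L(t) = L(0) + \mathbf{H}(t)$ can be sketched directly in code. The following is a minimal sketch (assuming NumPy and the row-sum, out-degree convention $L = D - A$, so the Heaviside entries appear in the Laplacian automatically when an edge switches on):

```python
import numpy as np

def adjacency(t, base, timed_edges):
    """A(t): a base adjacency plus edges (k, r, tau) switched on by H(t - tau)."""
    A = base.copy()
    for k, r, tau in timed_edges:
        A[k, r] += np.heaviside(t - tau, 1.0)
    return A

def laplacian(t, base, timed_edges):
    # row-sum convention L = D - A: the +H(t - tau) bump sits on the sender's
    # diagonal and the -H(t - tau) entry at (sender, receiver)
    A = adjacency(t, base, timed_edges)
    return np.diag(A.sum(axis=1)) - A

base = np.zeros((2, 2))
edges = [(0, 1, 0.5)]                    # node 1 connects to node 2 at tau = 0.5
L_before = laplacian(0.2, base, edges)   # still the empty Laplacian L(0)
L_after = laplacian(1.0, base, edges)    # L(0) + H(t) once the edge is on
```

Before $\tau$ the Laplacian is $L(0)$; after $\tau$ it acquires the $\pm H(t-\tau)$ entries, matching the two-node Laplacian used in Section 9.3.2.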
The idea now is to find a solution with the Heaviside distribution as a kernel.

9.2. Definition Set for the Solution and Its Derivative

From the previous Section 9.1, the diffusion equation is written as
$$\frac{\partial f}{\partial t}(t,x) = -D\,{}^t\big(L(0) + \mathbf{H}(t)\big)\, f(t,x) + \Gamma(t,x).$$
It is worth stressing that the matrix $\mathbf{H}(t)$ is not defined at the times when a connection event occurs, which implies that $f$ is not differentiable at those points. It turns out that $f$ is of class $C^\infty$ almost everywhere (and not everywhere), provided that $\Gamma$ is continuous almost everywhere as well; $f$ is nevertheless still required to be continuous at all times. If the study starts at $t = 0$ and there is initially no connection between the nodes, then $L(0) = 0 \in M_n(\mathbb{R})$ and the equation simplifies to
$$\frac{\partial f}{\partial t}(t,x) = -D\,{}^t\mathbf{H}(t)\, f(t,x) + \Gamma(t,x),$$
which has the usual form.

9.3. Simple Cases

9.3.1. Heaviside Integration

Suppose we would like to calculate
$$I = \int H(t-\tau)\, g(t)\, dt,$$
where g is a continuous and integrable function. Let
$$G(t) = \int g(t)\, dt,$$
up to a constant. Then
$$I = \int H(t-\tau)\, dG(t).$$
By integration by parts, we have
$$I = H(t-\tau)\, G(t) - \int G(t)\, dH(t-\tau) = H(t-\tau)\, G(t) - \int G(t)\,\delta(t-\tau)\, dt = H(t-\tau)\, G(t) - H(t-\tau)\, G(\tau),$$
true up to a constant C te R , and we thus can write
$$\int H(t-\tau)\, g(t)\, dt = H(t-\tau)\,\big(G(t) - G(\tau)\big) + C^{te}.$$
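Equation (67) can be checked numerically on a definite integral. The sketch below (assuming NumPy; the choice $g = \cos$, $G = \sin$ is hypothetical) compares a trapezoidal quadrature of $\int_0^T H(t-\tau)\,g(t)\,dt$ against $G(T) - G(\tau)$ for $T \ge \tau$:

```python
import numpy as np

# For T >= tau, int_0^T H(t - tau) g(t) dt should equal G(T) - G(tau);
# here g = cos and G = sin (an illustrative choice).
tau, T = 1.0, 2.0
t = np.linspace(0.0, T, 200001)
y = np.heaviside(t - tau, 1.0) * np.cos(t)
lhs = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))   # trapezoidal quadrature
rhs = np.sin(T) - np.sin(tau)
```

The two values agree up to the quadrature error at the jump, which is of the order of the grid step.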

9.3.2. Simple Case of Two Nodes

Suppose that, with $E = \{1, 2\}$, the network is composed of two nodes. Without creation/destruction, suppose that node 1 sends information to node 2 from time $\tau > 0$. The Laplacian matrix is written as
$$L = \begin{pmatrix} H(t-\tau) & -H(t-\tau) \\ 0 & 0 \end{pmatrix}.$$
This matrix yields the system of diffusion equations:
$$\begin{cases} \dfrac{df_1}{dt}(t) = -D\, H(t-\tau)\, f_1(t),\\[4pt] \dfrac{df_2}{dt}(t) = D\, H(t-\tau)\, f_1(t). \end{cases}$$
According to Equation (67) with g = 1 , the first equation gives
$$\ln f_1(t) = \ln f_1(\tau) - D\,(t-\tau)\, H(t-\tau),$$
or
$$f_1(t) = f_1(\tau)\, e^{-D (t-\tau) H(t-\tau)}.$$
Note that we can rewrite this function as
$$f_1(t) = f_1(\tau)\,\big(1 - H(t-\tau)\big) + f_1(\tau)\, e^{-D(t-\tau)}\, H(t-\tau) = f_1(\tau)\,\Big( 1 - \big(1 - e^{-D(t-\tau)}\big)\, H(t-\tau) \Big).$$
Then, putting this expression into the second equation and using the fact that $H(t-\tau)^2 = H(t-\tau)$, we have
$$\frac{df_2}{dt}(t) = D\, f_1(\tau)\, H(t-\tau)\, e^{-D(t-\tau)}.$$
Using Equation (67) again, now with $g(t) = e^{-D(t-\tau)}$, we have
$$f_2(t) = f_2(\tau) + f_1(\tau)\,\big(1 - e^{-D(t-\tau)}\big)\, H(t-\tau).$$
This is the expected behavior.
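The closed forms for $f_1$ and $f_2$ can be validated against a direct Euler integration of the system (a sketch assuming NumPy; $D = 1$, $\tau = 0.5$, and the initial values are hypothetical choices):

```python
import numpy as np

# Euler integration of the delayed two-node system vs the closed-form solution
D, tau, dt, T = 1.0, 0.5, 1e-4, 3.0
f1, f2 = 2.0, 0.0                 # f1(0), f2(0); note f1(tau) = f1(0)
for i in range(int(T / dt)):
    h = 1.0 if i * dt >= tau else 0.0
    d = D * h * f1 * dt
    f1, f2 = f1 - d, f2 + d       # node 1 loses what node 2 gains

f1_exact = 2.0 * np.exp(-D * (T - tau))              # H(T - tau) = 1 here
f2_exact = 0.0 + 2.0 * (1.0 - np.exp(-D * (T - tau)))
```

The numerical and analytic values match up to the first-order Euler error, and the total $f_1 + f_2$ is conserved at each step.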

9.3.3. Simple Case of Three Nodes

The case of three nodes, $E = \{1, 2, 3\}$, might be of particular interest as well. We could break the symmetry by supposing that node 1 sends information to node 2 at rate $\alpha$ from time $\tau > 0$, and to node 3 at rate $\beta$ from time $\tau' \ge \tau$. The Laplacian matrix is written as
$$L = \begin{pmatrix} \alpha H(t-\tau) + \beta H(t-\tau') & -\alpha H(t-\tau) & -\beta H(t-\tau') \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
This matrix yields the system of diffusion equations:
$$\begin{cases} \dfrac{df_1}{dt}(t) = -D\,\big(\alpha H(t-\tau) + \beta H(t-\tau')\big)\, f_1(t),\\[4pt] \dfrac{df_2}{dt}(t) = D\,\alpha\, H(t-\tau)\, f_1(t),\\[4pt] \dfrac{df_3}{dt}(t) = D\,\beta\, H(t-\tau')\, f_1(t). \end{cases}$$
Again, according to Equation (67), the first equation gives
$$\ln f_1(t) = \ln f_1(\tau) - D\,\alpha\,(t-\tau)\, H(t-\tau) - D\,\beta\,(t-\tau')\, H(t-\tau'),$$
or
$$f_1(t) = f_1(\tau)\, e^{-D \left( \alpha (t-\tau) H(t-\tau) + \beta (t-\tau') H(t-\tau') \right)},$$
which we can rewrite, since $f_1(\tau) = f_1(0)$, as
$$f_1(t) = f_1(0)\, \Big( 1 - \big(1 - e^{-D\alpha(t-\tau)}\big) H(t-\tau) \Big) \Big( 1 - \big(1 - e^{-D\beta(t-\tau')}\big) H(t-\tau') \Big).$$
Deriving $f_2$ is the most complex task here. First of all, we note that $H(t-\tau)\, H(t-\tau') = H(t-\tau')$, and then we have
$$\begin{aligned} \frac{df_2}{dt}(t) &= D\alpha\, H(t-\tau)\, f_1(0)\, \Big( 1 - \big(1 - e^{-D\alpha(t-\tau)}\big) H(t-\tau) \Big) \Big( 1 - \big(1 - e^{-D\beta(t-\tau')}\big) H(t-\tau') \Big)\\ &= D\alpha\, H(t-\tau)\, f_1(0)\, e^{-D\alpha(t-\tau)}\, \Big( 1 - \big(1 - e^{-D\beta(t-\tau')}\big) H(t-\tau') \Big)\\ &= D\alpha\, f_1(0)\, \Big( e^{-D\alpha(t-\tau)} H(t-\tau) - \big( e^{-D\alpha(t-\tau)} - e^{-D\left(\alpha(t-\tau) + \beta(t-\tau')\right)} \big) H(t-\tau') \Big)\\ &= D\alpha\, f_1(0)\, \Big( e^{-D\alpha(t-\tau)} H(t-\tau) - e^{-D\alpha(t-\tau)} H(t-\tau') + e^{-D\left(\alpha(t-\tau) + \beta(t-\tau')\right)} H(t-\tau') \Big). \end{aligned}$$
We deduce, from Equation (67), that
$$f_2(t) = \begin{cases} f_2(0) & \text{if } t < \tau,\\ C_1^{te} - f_1(0)\, e^{-D\alpha(t-\tau)} & \text{if } \tau \le t < \tau',\\ C_2^{te} - \dfrac{\alpha}{\alpha+\beta}\, f_1(0)\, e^{-D\left(\alpha(t-\tau) + \beta(t-\tau')\right)} & \text{if } \tau' \le t. \end{cases}$$
The continuity of the function $f_2$ at times $\tau$ and $\tau'$ provides one equation for $C_1^{te}$ and one for $C_2^{te}$. We therefore derive the function $f_2$, given by
$$f_2(t) = f_2(0) + f_1(0) \left[ \big(1 - e^{-D\alpha(t-\tau)}\big) H(t-\tau) + \left( e^{-D\alpha(t-\tau)} - \frac{\beta}{\alpha+\beta}\, e^{-D\alpha(\tau'-\tau)} - \frac{\alpha}{\alpha+\beta}\, e^{-D\left(\alpha(t-\tau) + \beta(t-\tau')\right)} \right) H(t-\tau') \right].$$
We now derive the expression for $f_3$. We find that
$$\frac{df_3}{dt}(t) = D\beta\, f_1(0)\, e^{-D\left(\alpha(t-\tau) + \beta(t-\tau')\right)}\, H(t-\tau').$$
The resolution leads to
$$f_3(t) = f_3(0) + \frac{\beta}{\alpha+\beta}\, f_1(0)\, \Big( e^{-D\alpha(\tau'-\tau)} - e^{-D\left(\alpha(t-\tau) + \beta(t-\tau')\right)} \Big)\, H(t-\tau').$$
It is interesting to note that the symmetry is broken in the long run:
$$\lim_{t \to +\infty} f_2(t) = f_2(0) + f_1(0) \left( 1 - \frac{\beta}{\alpha+\beta}\, e^{-D\alpha(\tau'-\tau)} \right), \qquad \lim_{t \to +\infty} f_3(t) = f_3(0) + \frac{\beta}{\alpha+\beta}\, f_1(0)\, e^{-D\alpha(\tau'-\tau)},$$
and, quite importantly, we can demonstrate that
$$f_1(t) + f_2(t) + f_3(t) = f_1(0) + f_2(0) + f_3(0), \qquad \forall t \in \mathbb{R}_+.$$
Conservation of information thus holds at all times.
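Both the long-run limits and the conservation of information can be checked by a direct Euler integration of the three-node system (a sketch assuming NumPy; the parameter values are hypothetical choices):

```python
import numpy as np

# Euler check of the three-node formulas: conservation and long-run limits
D, alpha, beta = 1.0, 0.4, 0.6
tau, tau2 = 0.3, 0.9              # tau2 plays the role of tau' >= tau
dt, T = 1e-4, 12.0
f = np.array([1.0, 0.0, 0.0])     # f1(0) = 1, f2(0) = f3(0) = 0
for i in range(int(T / dt)):
    t = i * dt
    h1 = 1.0 if t >= tau else 0.0
    h2 = 1.0 if t >= tau2 else 0.0
    f = f + dt * np.array([
        -D * (alpha * h1 + beta * h2) * f[0],
        D * alpha * h1 * f[0],
        D * beta * h2 * f[0],
    ])

f2_limit = 1.0 - (beta / (alpha + beta)) * np.exp(-D * alpha * (tau2 - tau))
f3_limit = (beta / (alpha + beta)) * np.exp(-D * alpha * (tau2 - tau))
```

At $T = 12$, the content of node 1 is essentially exhausted, and nodes 2 and 3 sit at the predicted asymmetric limits while the total information is conserved.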

9.4. Delay in the Differential Equation—Practical Case

We will need the following results, which generalize Definition 12 (specifically Equation (52)) and Theorem 4.

9.4.1. The Time-Dependent Directed Diffusion Equation Involving the Operator L ( t )

Definition 14.
Let $f : E \times \mathbb{R}_+ \to \mathbb{R}$ represent the information function for a graph with $n$ nodes. If $D$ is the diffusion coefficient and $\Gamma : E \times \mathbb{R}_+ \to \mathbb{R}$ the creation function of the information, with a time-dependent Laplacian operator $L(t)$, the following directed diffusion equation holds, for any $t \in \mathbb{R}_+$:
$$\frac{\partial f}{\partial t}(t,x) = -D\,{}^tL(t)\, f(t,x) + \Gamma(t,x).$$
Here the main difference is that the Laplacian operator L is a function of time t.

9.4.2. Solution Involving L ( t )

From Equation (12), we can use the Cauchy theorem to obtain the following solution.
Theorem 9
(Application of the Cauchy Theorem). The solution of the Cauchy problem associated with Equation (12) is given by
$$f(t,x) = e^{-D \int_0^t {}^tL(s)\, ds}\, f(0,x) + \int_0^t e^{-D \int_\tau^t {}^tL(s)\, ds}\, \Gamma(\tau,x)\, d\tau.$$
As was done previously, we would rather use the following equation:
$$f(t+\Delta t, x) = e^{-D \int_t^{t+\Delta t} {}^tL(s)\, ds}\, f(t,x) + \int_t^{t+\Delta t} e^{-D \int_\tau^t {}^tL(s)\, ds}\, \Gamma(\tau,x)\, d\tau,$$
which we can reduce again by noticing that $\int_t^{t+\Delta t} {}^tL(s)\, ds = {}^tL(t)\,\Delta t + o(\Delta t)$ and $\int_\tau^t {}^tL(s)\, ds = {}^tL(\tau)\,\Delta t + o(\Delta t)$ since $|t - \tau| \le \Delta t$, so that Equation (64) is also verified:
$$f(t+\Delta t, x) = e^{-D\,{}^tL(t)\,\Delta t}\, f(t,x) + \Gamma(t,x)\,\Delta t + E(t,x).$$

9.4.3. Practical Case—Committed Error

Here, the error $E(t,x)$ committed by the approximation splits into two terms:
$$E(t,x) = E_D(t,x) + E_\Gamma(t,x),$$
where
$$E_D(t,x) = \left( e^{-D \int_t^{t+\Delta t} {}^tL(s)\, ds} - e^{-D\,{}^tL(t)\,\Delta t} \right) f(t,x)$$
is the error due to the approximation at the diffusion term, and
$$E_\Gamma(t,x) = \int_t^{t+\Delta t} e^{-D \int_\tau^t {}^tL(s)\, ds}\, \Gamma(\tau,x)\, d\tau - \Gamma(t,x)\,\Delta t$$
is the error due to the approximation at the creation/destruction term. We derive an upper bound for these two errors. Regarding the error on the diffusion term, we note that
$$\int_t^{t+\Delta t} {}^tL(s)\, ds = \sum_{n=1}^{+\infty} \frac{(\Delta t)^n}{n!}\, {}^tL^{(n-1)}(t),$$
so that
$$\begin{aligned} E_D(t,x) &= \left[ \exp\left( -D \sum_{n=1}^{+\infty} \frac{(\Delta t)^n}{n!}\, {}^tL^{(n-1)}(t) \right) - \exp\left( -D\,{}^tL(t)\,\Delta t \right) \right] f(t,x)\\ &= \left( 1 - D\,\Delta t\,{}^tL(t) - \frac{D}{2} (\Delta t)^2\, {}^tL'(t) + \frac{D^2}{2} (\Delta t)^2 \big({}^tL(t)\big)^2 \right) f(t,x) - \left( 1 - D\,\Delta t\,{}^tL(t) + \frac{D^2}{2} (\Delta t)^2 \big({}^tL(t)\big)^2 \right) f(t,x) + o\big((\Delta t)^2\big)\\ &= -\frac{D}{2}\, (\Delta t)^2\, {}^tL'(t)\, f(t,x) + o\big((\Delta t)^2\big). \end{aligned}$$
This proves that E D ( t , x ) = o ( Δ t ) . Regarding the error on the creation/destruction term, we introduce
$$\gamma(\tau) = e^{-D \int_\tau^t {}^tL(s)\, ds},$$
so that
$$E_\Gamma(t,x) = \int_t^{t+\Delta t} \gamma(\tau)\, \Gamma(\tau,x)\, d\tau - \Gamma(t,x)\,\Delta t.$$
We have
$$\gamma'(\tau) = D\,{}^tL(\tau)\,\gamma(\tau).$$
Bearing all the above in mind, and noting that $\gamma(t) = 1$, we have
$$\begin{aligned} E_\Gamma(t,x) &= \sum_{n=1}^{+\infty} \frac{(\Delta t)^n}{n!}\, \big( \gamma(t)\, \Gamma(t,x) \big)^{(n-1)} - \Gamma(t,x)\,\Delta t\\ &= \gamma(t)\,\Gamma(t,x)\,\Delta t + \frac{(\Delta t)^2}{2} \big( \gamma'(t)\,\Gamma(t,x) + \gamma(t)\,\Gamma'(t,x) \big) - \Gamma(t,x)\,\Delta t + o\big((\Delta t)^2\big)\\ &= \frac{(\Delta t)^2}{2} \big( \gamma'(t)\,\Gamma(t,x) + \Gamma'(t,x) \big) + o\big((\Delta t)^2\big)\\ &= \frac{(\Delta t)^2}{2} \big( D\,{}^tL(t)\,\Gamma(t,x) + \Gamma'(t,x) \big) + o\big((\Delta t)^2\big). \end{aligned}$$
This proves that $E_\Gamma(t,x) = o(\Delta t)$. Finally, we have proven that the committed error satisfies $E(t,x) = o(\Delta t)$. Thus, Equation (72) can be implemented while neglecting $E(t,x)$, as it represents a negligible error (see Section 8.3 for numbers).
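The order of the committed error can be checked on a scalar (1 × 1) example: since the leading term is of order $(\Delta t)^2$, halving $\Delta t$ should divide the one-step error by roughly four. The sketch below (assuming NumPy; the choices ${}^tL(t) = 1 + 0.5 \sin t$ and $\Gamma(t) = \cos t$ are hypothetical) compares the Equation (72)-style step with a fine Runge–Kutta reference:

```python
import numpy as np

def rhs(t, f):
    # scalar stand-in: tL(t) = 1 + 0.5 sin t and Gamma(t) = cos t (hypothetical)
    return -(1.0 + 0.5 * np.sin(t)) * f + np.cos(t)

def rk4(f, t0, t1, n=1000):
    # fine fourth-order Runge-Kutta reference solution on [t0, t1]
    h = (t1 - t0) / n
    for i in range(n):
        t = t0 + i * h
        k1 = rhs(t, f)
        k2 = rhs(t + h / 2, f + h / 2 * k1)
        k3 = rhs(t + h / 2, f + h / 2 * k2)
        k4 = rhs(t + h, f + h * k3)
        f = f + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return f

def one_step_error(dt, f0=1.0):
    # Equation (72)-style step: exp(-D tL(t) dt) f + Gamma(t) dt, here at t = 0
    approx = np.exp(-(1.0 + 0.5 * np.sin(0.0)) * dt) * f0 + np.cos(0.0) * dt
    return abs(rk4(f0, 0.0, dt) - approx)

ratio = one_step_error(0.1) / one_step_error(0.05)
```

The observed ratio is close to 4, consistent with a one-step error of order $(\Delta t)^2$, i.e., $E(t,x) = o(\Delta t)$.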

9.4.4. Practical Case—Implementation

3-Node Case

We implement the model through Equation (72). As a first step, we take a 3-node graph and plot the solutions for the case depicted in Section 9.3.3. We set $f_1(0) = 1$ and $f_2(0) = f_3(0) = 3$, with $D = 3$, and we set the adjacency matrix $A$ as
$$A = \begin{pmatrix} 0 & 0.4 & 0.6 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
Figure 14 shows the information evolution per node (yellow is node 1; orange is node 2; and red is node 3).
We compare these results with the ones obtained when adding delays: node 2 starts receiving the information from time $\tau = 0.3$, and node 3 from time $\tau' = 0.9$. The time-dependent adjacency matrix becomes
$$A(t) = \begin{pmatrix} 0 & 0.4\, H(t-0.3) & 0.6\, H(t-0.9) \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
Figure 15 plots the results.
The results show the expected behaviors, with another interesting aspect: if a node receives the information with too long a delay, it ends up with much less than expected, even if it has a strong affinity. This could modify the whole network configuration and change its dynamical properties.
A final remark concerns the obvious discontinuity of the derivative at the times when nodes start receiving the information.

Random Walk

The random walk is an important model of the propagation of information in a network. It represents the abrupt movement of a bunch of bitcoins (respectively, an epidemic) among different addresses (respectively, people), and is a powerful tool for the traceability of the sub-signals of a signal.
From what we have seen above, we can confidently state that the random walk is actually a particular case of the above delayed differential equation theory. Indeed, a random walk is obtained by taking the limit $D \to +\infty$, with delay times that are multiples of a fixed delay $\tau > 0$ marking the time step of the movement of information from node to node (creating a discretization of time in a continuous-time environment). It is worth pointing out that, in this case, diffusion becomes propagation. Thus, more generally, on a network, a signal propagation is a particular diffusion of it, which is a non-trivial statement (the converse is incorrect, as the sub-signals are not modified in essence through their propagation throughout the network).
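The limit $D \to +\infty$ can be illustrated on the two-node delayed system (a sketch in plain Python; $D = 50$ and the edge-opening time $0.1$ are hypothetical choices): once the edge opens, virtually all of the information of node 1 hops to node 2 within a negligible time, i.e., diffusion degenerates into propagation.

```python
# Large-D limit: once the edge opens at t = 0.1, the content of node 1 hops
# to node 2 almost instantly, i.e., diffusion degenerates into propagation.
D, dt = 50.0, 1e-4
f1, f2 = 1.0, 0.0
for i in range(int(0.5 / dt)):
    h = 1.0 if i * dt >= 0.1 else 0.0
    d = D * h * f1 * dt            # explicit Euler; D * dt = 5e-3 keeps it stable
    f1, f2 = f1 - d, f2 + d
```

For even larger $D$, the transfer time shrinks further, so the continuous diffusion converges to the discrete hop of a random walk.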

10. Conclusions

In this paper, we studied the process of diffusion on a network, undirected and directed, with boundary conditions and response delays. More specifically, we introduced some fundamental definitions in the context of a crypto's P2P network, and then derived the diffusion equation with two different Laplacian operators. The main innovations are the derivation of analytic solutions (by means of singular value decomposition), with or without boundary conditions. We have also characterized delays in the response of the vertices.
It is worth pointing out that a tool, based on the evolution of information within a real blockchain network, could be envisaged through this approach. For instance, a visualization of any kind of exchange in the Bitcoin network could be implemented. The tool could trace any piece of the exchange, from the past to the future, using the forecasting ability, at any instant. The forecast can thus be updated by feeding the system with all historical information up to the present. We can also imagine a parametrized machine learning process which captures the historical configurations of the Bitcoin network and suggests likely future ones. Thus, the traceability of information is a starting point for further transparency among all agents, and implies a reduction of the money-laundering risk. Modeling the P2P network is useful for any exchange, not only for trading and forecasting purposes, but also for compliance duties.
Finally, it is worth stressing that the results developed in this paper generalize to any kind of P2P network, except those of Section 4.2, which are Bitcoin specific, even if the approach could be adapted. No matter what the underlying protocol is, we will systematically find exchanges of information between agents, nodes, and users. The structure of the network, as well as its underlying consensus mechanism (e.g., proof-of-work and proof-of-stake), are additional data for which the general diffusion equation on a directed graph, given by Equation (52), can serve as a fitting model with parameter D and, more importantly, with the operator Γ, a function of the consensus, as Γ is related to the source of information. One could also make the model even more general by implementing a tensor D as a function of the vertices and the edges of the network. This is left for further studies.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Notes

1
We write f : E × R + R . The function has two variables, i.e., time and node, and takes real values.
2
Numbers 1, 2, etc. designate the neighboring nodes of y.

Figure 1. Illustration of a graph. The balls are nodes, which are connected by edges, the segments. Here, nodes have between one and four neighbors. In particular, node 6 diffuses information (e.g., list of transactions) to its neighbors, i.e., nodes 3, 5, and 7.
Figure 2. The directional derivative of f from z to y. This actually represents the (finite) variation of f for the information going from node z to node y. It is positive if y gets more information than z, and negative otherwise. This concept makes sense only if y and z are neighbors; hence, the presence of a y z { 0 , 1 } in the definition.
Figure 3. Illustration of the diffusion information for 2 nodes. The initial conditions are Γ = 0 , f ( 0 , x ) = f 0 R + , and f ( 0 , y ) = 0 . We set D = 1 / 2 , and f 0 = 2 . Blue is for f ( t , x ) , while red is for f ( t , y ) .
Figure 4. Illustration of the diffusion information for 3 nodes. Node 2 is connected to nodes 1 and 3, but nodes 1 and 3 are not connected. The initial conditions are Γ = 0 , f ( 0 , 1 ) = 1 , f ( 0 , 2 ) = 2 and f ( 0 , 3 ) = 0 . We set D = 1 . Blue is for f ( t , 1 ) , red is for f ( t , 2 ) , and yellow is for f ( t , 3 ) .
Figure 5. Diffusion on a directed Erdős–Rényi graph.
Figure 6. Diffusion on a directed binary tree.
Figure 7. Diffusion on a 4-tree.
Figure 8. Diffusion on a directed ring.
Figure 9. Two directed layers, i.e., families of nodes. The red layer sends information toward the green layer, while nodes of the last layer send information to each other. Thus, the green layer could represent the layer of miners, while the red layer could represent the layer of exchanges. In such a model, the miners are the main layer. We could easily change the configuration so that exchanges are the main layer, and the following theory also applies.
Figure 10. Provided that the red-nodes layer plays the role of boundary condition layer in Figure 9, the two directed layers network is mathematically equivalent to one unique network, whose node neighbors with the boundary layer nodes have a reaction term Γ applied for the diffusion of information.
Figure 11. The information diffuses from node 1 ( f ( 0 , 1 ) = 1 while f ( 0 , x ) = 0 for any x 1 ) to node 10. Here, Γ = 0 and D = 1 .
Figure 12. The information diffuses from node 1 ( f ( 0 , 1 ) = 1 while f ( 0 , x ) = 0 for any x 1 ) to node 10 with additional creation (to node 2) and destruction (to node 10) added. Here D = 1 .
Figure 13. The information diffuses from node 1 ( f ( 0 , 1 ) = 1 while f ( 0 , x ) = 0 for any x 1 ) to node 10 with additional creation (to node 2) and destruction (to node 10) added. Here D = 5 (left panel) and D = 10 (right panel).
Figure 14. A 3 node-graph. Node 1 gives information to node 2 (resp. 3) with rate 0.4 (resp. 0.6). There is no delay, and D = 3 .
Figure 15. Same structure as in Figure 14, but with a delay τ = 0.3 for node 2 and τ = 0.9 for node 3.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Riposo, J. Diffusion on the Peer-to-Peer Network. J. Risk Financial Manag. 2022, 15, 47. https://doi.org/10.3390/jrfm15020047
