Article

Low-Complexity Hyperbolic Embedding Schemes for Temporal Complex Networks

Hao Jiang, Lixia Li, Yuanyuan Zeng, Jiajun Fan and Lijuan Shen
1 School of Electronic Information, Wuhan University, Wuhan 430072, China
2 Wuhan Digital Engineering Institute, Wuhan 430074, China
* Authors to whom correspondence should be addressed.
Submission received: 8 September 2022 / Revised: 23 November 2022 / Accepted: 23 November 2022 / Published: 29 November 2022
(This article belongs to the Section Sensor Networks)

Abstract

Hyperbolic embedding can effectively preserve the properties of complex networks. Although several state-of-the-art hyperbolic node embedding approaches have been proposed, most of them are still not well suited to the dynamic evolution process of temporal complex networks. Adapting to networks of different scales and updating the embedding efficiently under moderate variation remain challenging problems. To tackle these challenges, we propose hyperbolic embedding schemes for temporal complex networks covering two dynamic evolution processes. First, we propose a low-complexity hyperbolic embedding scheme based on matrix perturbation, which is well suited to medium-scale complex networks with evolving temporal characteristics. Next, we construct a geometric initialization by merging nodes within a hyperbolic circular domain. To realize fast initialization for a large-scale network, an R-tree is used to narrow down the search range for nodes. Our evaluations are implemented for both synthetic and realistic networks within different downstream applications. The results show that our hyperbolic embedding schemes have low complexity and are adaptable to networks of different scales for different downstream tasks.

1. Introduction

Most real-world networks are complex networks with small-world, scale-free and strong-clustering properties. Complex network embedding is a valid tool for downstream network analysis tasks [1,2,3]. Many network embedding approaches based on Euclidean space have been studied extensively [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]. However, complex networks usually have latent tree-like and scale-free properties [20], which Euclidean space mapping cannot capture well. Accordingly, some researchers have proposed non-Euclidean network embedding [21,22]. They have shown that hyperbolic space is more suitable for representing complex networks with a tree-like hierarchical organization [20]. Hyperbolic space extracts the hierarchical topology by approximating tree-like structures smoothly with constant negative curvature, in contrast to flat Euclidean space [23]. Network hyperbolic embedding theory gives a geometric representation of a complex network while preserving its small-world and scale-free properties, and it can effectively interpret the hierarchical topology characteristics and generation mechanisms of complex networks. Compared with network embedding in Euclidean space, dynamic hyperbolic space embedding is a research area not yet fully studied, and some existing hyperbolic embedding approaches have rather high complexity.
Current hyperbolic embedding approaches are mainly divided into three categories. The first category is based on manifold learning. The research work [24] proposes LaBNE, a data-driven manifold learning approach based on Laplacian network embedding; the approach is similar to Laplacian matrix decomposition in Euclidean space in that it uses the Laplacian matrix for eigenvalue decomposition. The second category is based on maximum likelihood estimation. HyperMap [25] infers angular coordinates by replaying the growth process of a network generation model: all nodes are sorted in descending order, and the possible angle values are traversed to maximize the likelihood function and find the most suitable angular coordinates. HyperMap-CN [26] derives the hidden geometric coordinates of nodes in a complex network based on the number of common neighbors; it utilizes the common-neighbor information in the likelihood function of the HyperMap method to improve the accuracy of node embedding coordinates. The third category is the hybrid approach that combines manifold learning and maximum likelihood estimation. Although LaBNE has high embedding efficiency, it sacrifices embedding performance; conversely, HyperMap has high embedding performance but also high complexity. Accordingly, LaBNE+HM [27] uses LaBNE to obtain initial embedding values of nodes and then utilizes HyperMap to obtain the final embedding coordinates by sampling angles near the initial values.
Nevertheless, most of the aforementioned methods are designed for static networks. In the real world, networks have inherent dynamics with evolving characteristics. For example, nodes in social networks add and delete neighbors as social relations vary, and nodes in brain networks change their neighboring relations as new connections of neurons form. Efficient representation of varying nodes and edges is extremely crucial, especially for application scenarios in which the network evolves steadily [28]; this presents challenges for the embedding of dynamic networks.
Inspired by Node2vec [10], which extended DeepWalk by changing the random walk method, researchers introduced temporal meta-paths [29,30] to modify the sampling method. Both approaches are derivatives of static approaches, which do not capture well the dynamics and high-order proximity of nodes and edges in a local structure. High-order proximity has proven to be valuable in capturing the network structure [31]. The research in [32] proposes to separate the dynamic network into several snapshots and then process static network embedding according to the variation. Inevitably, some complex terms involving global structural information occur in the process of preserving global higher-order approximations, which results in high complexity. Facing this dilemma, some researchers propose conducting dynamic embedding with consideration of network evolution [32,33,34,35]. They capture the characteristics of the variations to reflect network dynamics and then improve the efficiency of application tasks based on these features. Cao et al. [33] reviewed current dynamic network embedding approaches. They point out that current embedding approaches have made breakthroughs in many ways, but problems remain. For example, how to effectively capture the influence of node variations on neighboring nodes and the local network structure is still a key problem. How to overcome problems such as data storage, training efficiency and heterogeneous information fusion [36] for large-scale network embedding is also still not well addressed.
According to the above, the main challenges for hyperbolic space embedding of temporal complex networks include: (1) The embedding complexity of hyperbolic space is a key factor for complex network analysis efficiency. (2) Dynamic network embedding needs to be adaptive toward variations within network evolution.
In this paper, we propose low-complexity hyperbolic embedding schemes for temporal complex networks. First, we propose a low-complexity hyperbolic embedding approach using matrix perturbation with time-evolving features for a medium-scale complex network. Next, we propose a fast update hyperbolic embedding approach with a local maximum likelihood estimation-based geometric initialization and R tree-based local search for large-scale complex networks.
The main contributions of this paper are summarized as follows:
(1)
We propose MpDHE to implement dynamic network hyperbolic embedding with low complexity, i.e., $O(T(n + d_x + l_x) + n^2)$. To the best of our knowledge, we are the first to model the increment of the network using matrix perturbation when inferring hyperbolic coordinates.
(2)
We construct a geometric initialization based on the hyperbolic circular domain to extend dynamic hyperbolic embedding to large-scale networks with newly appearing nodes.
(3)
We implement the proposed schemes in real-world network scenarios with several kinds of downstream application tasks, including community discovery, visualization and routing, which demonstrates their efficiency and effectiveness.
The remainder of the paper is organized as follows. Section 2 gives some preliminaries for hyperbolic embedding. Section 3 proposes a novel low-complexity embedding scheme for dynamic temporal complex networks. Section 4 gives the performance evaluations. Section 5 concludes the paper.

2. Some Preliminaries

2.1. Complex Network and Generation Model

In the real world, many complex systems can be represented by networks with a collection of nodes and edges, i.e., $G = (V, E)$. Different from small-scale networks, most complex systems are large-scale networks following a power-law degree distribution. They are modeled as temporal complex networks within time evolution processes: time is divided into continuous time steps, which form a sequence of network snapshots, one per time step. The temporal complex network can be represented by $G(t) = (V(t), E(t))$. For each time step t, the adjacency matrix is denoted as $A(t)$; the element of $A(t)$ is 1 if there is an edge between nodes i and j, and 0 otherwise. The Laplacian matrix of the graph is $L(t) = D(t) - A(t)$, where D is the matrix with node degrees on its diagonal (and 0 elsewhere). We assume all the networks considered in this paper are connected; for unconnected networks, each connected subnetwork is considered separately. In this case, A, D and L are symmetric matrices, and the degree of each node is positive.
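As a concrete illustration, the following minimal sketch (assuming NumPy and NetworkX; the helper name snapshot_matrices is ours) builds the snapshot matrices $A(t)$, $D(t)$ and $L(t) = D(t) - A(t)$ for one time step:

```python
import networkx as nx
import numpy as np

def snapshot_matrices(G: nx.Graph):
    """Return adjacency A, degree matrix D and Laplacian L = D - A."""
    A = nx.to_numpy_array(G)        # A[i, j] = 1 iff (i, j) is an edge
    D = np.diag(A.sum(axis=1))      # node degrees on the diagonal
    L = D - A                       # combinatorial graph Laplacian
    return A, D, L

# Example: a toy connected snapshot at the initial time step.
G0 = nx.path_graph(5)
A, D, L = snapshot_matrices(G0)
assert np.allclose(L, nx.laplacian_matrix(G0).toarray())
```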
Two commonly used complex network generation models are the popularity-similarity optimization (PSO) model [37] and the nonuniform PSO (nPSO) model [38]. The PSO model keeps a trade-off between node generation time, which is positively related to node popularity, and node similarity. It can generate a complex network of N nodes with real, known hyperbolic coordinates; the model parameters include the average node degree $2m$, the scaling exponent $\gamma$ and the network temperature T. The PSO model simulates how random geometric graphs grow in hyperbolic space, generating realistic networks with small-world, scale-free and strong-clustering features. However, PSO cannot reproduce the community structure of a network, and the nPSO model was proposed to address this. It enables heterogeneous angular node attractiveness by sampling angular coordinates from a tailored nonuniform probability distribution, e.g., a mixture of Gaussians. The nPSO model can explicitly determine the number and size of communities and, by adjusting the network temperature, which controls network clustering, it can generate highly clustered networks efficiently.
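To make the growth mechanism concrete, here is a simplified Python sketch of PSO-style growth under the zero-temperature assumption (an illustration of the mechanism, not a full reimplementation of [37]; the function names and default parameters are ours). Each new node t takes radial coordinate $2\ln t$ and a uniform angle, earlier nodes drift outward through popularity fading with $\beta = 1/(\gamma - 1)$, and node t links to the m hyperbolically closest existing nodes:

```python
import math
import random

def hyp_dist(p, q):
    """Hyperbolic distance between polar points p = (r, theta) and q."""
    (ri, ti), (rj, tj) = p, q
    dtheta = math.pi - abs(math.pi - abs(ti - tj) % (2 * math.pi))
    x = math.cosh(ri) * math.cosh(rj) - math.sinh(ri) * math.sinh(rj) * math.cos(dtheta)
    return math.acosh(max(x, 1.0))

def pso_like_network(n=200, m=4, gamma=2.5, seed=0):
    random.seed(seed)
    beta = 1.0 / (gamma - 1.0)      # popularity-fading parameter
    coords, edges = {}, []
    for t in range(1, n + 1):
        r_t, theta_t = 2.0 * math.log(t), random.uniform(0.0, 2.0 * math.pi)
        # older node s drifts outward: r_s(t) = beta * r_s + (1 - beta) * r_t
        for s in coords:
            coords[s] = (beta * 2.0 * math.log(s) + (1.0 - beta) * r_t, coords[s][1])
        if coords:  # connect to the m hyperbolically closest existing nodes
            nearest = sorted(coords, key=lambda s: hyp_dist(coords[s], (r_t, theta_t)))
            edges.extend((s, t) for s in nearest[:m])
        coords[t] = (r_t, theta_t)
    return coords, edges
```

For T > 0, the PSO model instead connects new nodes probabilistically via a function of the hyperbolic distance, and the nPSO model additionally samples the angles from a nonuniform mixture to plant communities.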

2.2. Hyperbolic Space and Poincare Disk Model

Hyperbolic space is hard to visualize and cannot be embedded isometrically into Euclidean space; in a sense, hyperbolic space is even “larger” than Euclidean space. In this paper, we use the Poincare disk model as the embedding target. The circumference and area of a hyperbolic disk with hyperbolic radius R centered at (0, 0) are given by (1) and (2).
$L = 2\pi \sinh(R)$ (1)

$A = 2\pi (\cosh(R) - 1)$ (2)
where both the circumference and the area increase exponentially with the radius R (here $\sinh x = \frac{e^x - e^{-x}}{2}$ and $\cosh x = \frac{e^x + e^{-x}}{2}$). The hyperbolic space grows rapidly along the radius, and the region near the edge of the disk grows to a fairly large space, which is the most prominent feature of hyperbolic space.
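A quick numeric check of (1) and (2) in Python makes the exponential growth tangible: both quantities scale like $e^R$, so increasing R by a constant multiplies circumference and area by a constant factor:

```python
import math

for R in (2.0, 4.0, 6.0, 8.0):
    circumference = 2 * math.pi * math.sinh(R)
    area = 2 * math.pi * (math.cosh(R) - 1)
    # each step of +2 in R multiplies both by roughly e^2, about 7.39
    print(f"R={R}: circumference={circumference:.2f}, area={area:.2f}")
```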
Hyperbolic space is suitable for representing complex networks, especially structures with tree-like properties. In fact, hyperbolic space can be viewed as the continuous version of a tree-based network. For an n-ary tree, the circumference of the Poincare disk corresponds to the number of nodes exactly s hops from the root, $(n+1)n^{s-1}$, and the area corresponds to the total number of nodes within s hops of the root, $\frac{(n+1)n^s - 2}{n-1}$. If the curvature of the hyperbolic space satisfies $\zeta = \sqrt{|K|} = \ln n$, then the circumference and the area of the hyperbolic space increase at the rate $e^{\zeta r}$, consistent with the growth rate $n^r$ of the n-ary tree. In this case, the tree structure can be regarded as a discrete hyperbolic space, as shown in Figure 1.
Branches of the tree structure need a storage space of exponential magnitude, which hyperbolic geometry supports naturally. The scale-free, tree-like structure of complex networks fits the negative curvature and exponential expansion of hyperbolic space, so hyperbolic embedding approaches are well suited to geometry-based representation learning of complex networks. In hyperbolic embedding, the radial coordinate in the Poincare disk represents the popularity of a node, and the angular coordinate represents node similarity. Moreover, using the Poincare disk model, we can effectively illustrate the evolution of complex networks as the competition between popularity and similarity, and the model is also effective for explaining the topology features of complex networks.

2.3. Initial Static Embedding

To mine the evolution characteristics of the network and reduce complexity, we utilize LaBNE [24], a hyperbolic embedding approach based on Laplacian matrix decomposition, to make an initial static embedding for the network snapshot at time step $t_0$. We then update the network embedding results for the subsequent time steps by capturing the main variations in the topology structure. The complexity of hyperbolic embedding mainly comes from the angular coordinates, so the embedding for temporal complex networks focuses on updating the angular coordinates.
LaBNE for initial static embedding: The common assumption of hyperbolic network models is that the connection probability between nodes is negatively correlated with their angular difference, i.e., connected nodes have similar angles. In LaBNE, the network is embedded into the two-dimensional hyperbolic plane $\mathbb{H}^2$ represented by a Euclidean circle, which gives a matrix Y of shape $N \times 2$ as $Y = [y_1, y_2]$, in which each row is the embedding coordinate of a node. Using the Laplace operator, the objective is to minimize $\mathrm{tr}(Y^T L Y) = \frac{1}{2}\sum_{i,j} a_{ij}\|Y_i - Y_j\|^2$, where the trace is the weighted sum of the distances between adjacent nodes. Minimizing the trace reduces the Euclidean distance between connected nodes; if nodes are distributed around a circle centered at the origin, the Euclidean distance also reflects the angular difference. To avoid arbitrary scaling, the problem includes the additional constraint $Y^T Y = I$. The optimization problem can be described as:
$\min\ \mathrm{tr}(Y^T L Y) \quad \text{s.t.}\ Y^T Y = I$ (3)
By the Rayleigh–Ritz theorem, the solution of this problem consists of the eigenvectors corresponding to the two minimum non-zero eigenvalues of the generalized eigenvalue problem $L(t) Y = \lambda D(t) Y$. Since the minimum eigenvalue is zero, we take the eigenvectors corresponding to the second- and third-smallest eigenvalues.
Embedding the network in a two-dimensional hyperbolic disk requires radial and angular coordinates for each node. By the conformal property, the angular coordinates can be approximated by $\theta = \arctan(y_2 / y_1)$, where $y_1$ and $y_2$ are the first and second entries of the row vector corresponding to the node in Y. In addition, an equidistant adjustment step is needed to distribute nodes evenly on the disk. There are two ways to calculate the radial coordinate: the PSO model and static estimation [39]. We choose the latter; the radius of the hyperbolic disk R and the radial coordinate $r(i)$ are calculated by (4) and (5).
$R = 2\ln\left(\frac{4 n^2 \alpha^2 T}{|E| \sin(\pi T)(2\alpha - 1)^2}\right)$ (4)

$r(i) = \min\left\{ R,\ 2\ln\left(\frac{2 n \alpha T}{\deg(i)\sin(\pi T)\left(\alpha - \frac{1}{2}\right)}\right) \right\}$ (5)
where n is the total number of nodes in the network (the maximum connected subgraph), $\alpha = \frac{\gamma - 1}{2}$ with $\gamma$ the power-law distribution exponent, T is the network temperature, which controls clustering, and $|E|$ is the number of edges.
The radial coordinate calculation has complexity $O(n)$. The angular coordinate embedding requires the first two non-trivial eigenvectors of the generalized eigen decomposition, with complexity $O(n^2)$.
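A compact sketch of this static step (assuming NumPy, SciPy and NetworkX; np.arctan2 is used in place of arctan so that the angles cover the full circle, and the helper name labne_embed is ours) could look as follows:

```python
import networkx as nx
import numpy as np
from scipy.linalg import eigh

def labne_embed(G: nx.Graph, gamma: float, T: float):
    """LaBNE-style embedding: angles from L y = lambda D y, radii from (4)-(5)."""
    A = nx.to_numpy_array(G)
    D = np.diag(A.sum(axis=1))
    L = D - A
    # generalized eigen decomposition; eigenvalues come back in ascending order
    _, Y = eigh(L, D)
    y1, y2 = Y[:, 1], Y[:, 2]       # skip the trivial zero eigenvalue
    theta = np.mod(np.arctan2(y2, y1), 2 * np.pi)

    n, E = G.number_of_nodes(), G.number_of_edges()
    alpha = (gamma - 1) / 2
    R = 2 * np.log(4 * n**2 * alpha**2 * T
                   / (E * np.sin(np.pi * T) * (2 * alpha - 1) ** 2))            # Eq. (4)
    deg = A.sum(axis=1)
    r = np.minimum(R, 2 * np.log(2 * n * alpha * T
                                 / (deg * np.sin(np.pi * T) * (alpha - 0.5))))  # Eq. (5)
    return r, theta
```

The full eigen decomposition here is the expensive part that the dynamic scheme below avoids recomputing at every time step.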

3. Hyperbolic Embedding Schemes for Temporal Complex Networks

We propose dynamic hyperbolic embedding schemes to tackle the challenges of complexity and dynamics in temporal complex networks. First, we propose the matrix perturbation-based dynamic network hyperbolic embedding scheme (MpDHE) to achieve low complexity; it is adaptive as long as the node set of the network stays fixed. We then generalize MpDHE to large-scale networks with newly appearing nodes by means of a geometric initialization in a hyperbolic circular domain. Figure 2 shows the overview of the proposed scheme for dynamic hyperbolic embedding.

3.1. MpDHE Scheme

To reduce the time complexity of dynamic hyperbolic embedding, the MpDHE scheme uses matrix perturbation [32,40] to update the embedding coordinates. Matrix perturbation is also commonly used in dynamic network embedding in Euclidean space. Compared with the matrix at time step t, the matrix at time step $t+1$ contains a perturbation. For a specific feature dimension i with eigenpair $(\lambda_i, y_i)$, $i = 1, 2, \ldots, d$, the generalized eigenvalue problem after the perturbation is shown in (6).
$(L(t) + \Delta L)(y_i + \Delta y_i) = (\lambda_i + \Delta \lambda_i)(D(t) + \Delta D)(y_i + \Delta y_i)$ (6)
According to the matrix perturbation, the approximate solutions of the increments of eigenvalues and eigenvectors are shown in (7) and (8).
$\Delta \lambda_i = y_i^T \Delta L\, y_i - \lambda_i\, y_i^T \Delta D\, y_i$ (7)

$\Delta y_i = -\frac{1}{2}\left(y_i^T \Delta D\, y_i\right) y_i + \sum_{j=2,\, j \neq i}^{3} \left(\frac{y_j^T \Delta L\, y_i - \lambda_i\, y_j^T \Delta D\, y_i}{\lambda_i - \lambda_j}\right) y_j$ (8)
Since the objective function of network embedding in Euclidean space is the same as the objective for solving angular coordinates in hyperbolic embedding, the above scheme can be used directly to incrementally update angular coordinates in dynamic hyperbolic embedding. Specifically, the embedding vectors in Euclidean space are updated as follows:
$y_i(t+1) = y_i(t) + \Delta y_i$ (9)

$\lambda_i(t+1) = \lambda_i(t) + \Delta \lambda_i$ (10)
Then, according to the conformal property, the angular coordinates $\Theta(t) = \{\theta_j(t)\}_{1 \times n}$ at time t can be calculated from the corresponding embedding vectors $\{y_i(t)\}_{i=2,3}$ as follows:
$\Theta(t) = \arctan\left(\frac{y_3(t)}{y_2(t)}\right)$ (11)
Obviously, the differences in the network between continuous time steps induce changes in the embedding vectors, which are analytically formulated with the incremental eigen decomposition. Therefore, the dynamic hyperbolic embedding at a later time step can be implemented in low complexity. Specifically, the time complexity of MpDHE is analyzed as follows, and the framework is summarized as Algorithm 1.
Time complexity analysis: Suppose T is the total number of time steps to be predicted. Each radial coordinate embedding has the same complexity as LaBNE, i.e., $O(n)$, so the total complexity of the radial coordinate calculation is $O(nT)$. For the angular coordinate embedding, the complexity of setting the initial values is $O(n^2)$. The complexity of updating the eigenvalues is $O(d_x + l_x)$, and the complexity of updating the retained eigenvectors is $O(n + d_x + l_x)$, where $d_x$ and $l_x$ are the numbers of non-zero entries in the sparse matrices $\Delta D$ and $\Delta L$, respectively. In general, the complexity of MpDHE is $O(T(n + d_x + l_x) + n^2)$. Since $d_x \ll n$ and $l_x \ll n$, the MpDHE scheme effectively reduces the embedding complexity.
Algorithm 1: MpDHE algorithm
(The pseudocode of Algorithm 1 is presented as a figure in the original article.)
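A minimal sketch of the core MpDHE update (assuming NumPy, with dense perturbation matrices for clarity; the function name mpdhe_update is ours), implementing (7)-(11) for the two retained eigenpairs $i = 2, 3$:

```python
import numpy as np

def mpdhe_update(lams, ys, dL, dD):
    """One perturbation step.
    lams : {i: lambda_i} for i in (2, 3), eigenvalues at time t
    ys   : {i: y_i} length-n arrays, eigenvectors at time t
    dL,dD: dense numpy arrays holding Delta L and Delta D
    """
    new_lams, new_ys = {}, {}
    for i in (2, 3):
        yi, li = ys[i], lams[i]
        d_lam = yi @ dL @ yi - li * (yi @ dD @ yi)              # Eq. (7)
        d_y = -0.5 * (yi @ dD @ yi) * yi                        # Eq. (8), first term
        for j in (2, 3):
            if j != i:
                yj, lj = ys[j], lams[j]
                d_y += ((yj @ dL @ yi - li * (yj @ dD @ yi))
                        / (li - lj)) * yj                       # Eq. (8), sum term
        new_ys[i] = yi + d_y                                    # Eq. (9)
        new_lams[i] = li + d_lam                                # Eq. (10)
    # conformal mapping of the updated eigenvectors to angular coordinates
    theta = np.mod(np.arctan2(new_ys[3], new_ys[2]), 2 * np.pi)  # Eq. (11)
    return new_lams, new_ys, theta
```

When $\Delta L$ and $\Delta D$ are stored as sparse matrices, each update touches only their non-zero entries plus $O(n)$ vector work, which is where the $O(T(n + d_x + l_x))$ term in the complexity analysis comes from.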

3.2. Geometric Initialization

However, the matrix perturbation in MpDHE cannot embed new nodes in the network: the perturbation updates are built on the eigenvectors from the previous time step, so a node that does not exist at time step t but appears at time step t + 1 has no previous eigenvector to update, and MpDHE is not applicable to it directly.
For this reason, we use the coordinates at time step t to calculate initial values for the new nodes appearing at t + 1 and construct a geometric initialization based on a hyperbolic circular domain.
The hyperbolic distance between nodes determines their connection probability in the hyperbolic disk: the shorter the hyperbolic distance between two nodes, the higher their connection probability, as well as their similarity. When a new node appears, original nodes far from it have little effect on its angular coordinate, so the initial angular coordinate can be calculated from the nodes at a small hyperbolic distance from the new node.
Based on the above, we first select the new node's neighbors whose degrees are small and similar to its own and set the mean of their angular coordinates as a basic approximation for the new node. The corresponding computation is shown in (12).
$\hat{\theta}_i(t+1) = \frac{\sum_{j \in \Psi(i)} \theta_j(t)}{m}$ (12)
where $\Psi(i)$ is the set of neighbors of new node i with small, similar degrees, and m is the number of nodes in the set.
Then we construct the hyperbolic circular domain with the basic approximation as its center and select the nodes inside the circle. The distance $d_{ij}$ between nodes $(r_i, \theta_i)$ and $(r_j, \theta_j)$ is given in (13) by the hyperbolic cosine theorem.
$\cosh(d_{ij}) = \cosh(r_i)\cosh(r_j) - \sinh(r_i)\sinh(r_j)\cos(\Delta\theta_{ij})$ (13)
Given the center of the hyperbolic circle of radius R as $(r_0, \theta_0)$, the radial coordinate $r_h$ corresponding to the angle $\theta_h$ on the circle satisfies (14), where $c = \frac{e^R + 1}{e^R - 1}$.
$\left(r_h \cos\theta_h - \frac{r_{0h}\cos\theta_{0h}(c^2 - 1)}{c^2 - r_{0h}^2}\right)^2 + \left(r_h \sin\theta_h - \frac{r_{0h}\sin\theta_{0h}(c^2 - 1)}{c^2 - r_{0h}^2}\right)^2 = \left(\frac{(r_{0h}^2 - 1)c}{r_{0h}^2 - c^2}\right)^2$ (14)
The hyperbolic circle is not easy to query from its equation directly. To quickly find the range inside the hyperbolic circle with an R-tree (rectangle tree) [41], we sample points on the hyperbolic circle and outline a polygon contour that approximates it. The R-tree is a tree-based data structure for storing and quickly querying spatial high-dimensional data; its core strategy is to aggregate adjacent entries and use their minimum bounding rectangles as the nodes of each layer in the tree, so it can quickly query the node set inside a polygon.
Here we take the polygon contour outlining the hyperbolic circle as the input of the R-tree and approximately query the node set inside it. Owing to the error introduced by the $\tanh(x)$ transformation and the bounding-rectangle-based R-tree search, more nodes are returned than in the exact result. However, nodes beyond the search results can still be regarded as being more than distance R from the center of the circle, so they can be ignored; i.e., the R-tree search results cover the initialization range. The geometric initialization is calculated as follows:
$\theta_i(t+1) = \frac{\sum_{j \in GI(\hat{\theta}_i(t+1))} \theta_j(t)}{|GI(\hat{\theta}_i(t+1))|}$ (15)
where $GI(\hat{\theta}_i(t+1))$ is the node set contained within the hyperbolic circular domain and $|GI(\hat{\theta}_i(t+1))|$ is the number of nodes in the set.
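A sketch of this initialization (assuming NumPy and Shapely >= 2.0, whose STRtree stands in for the R-tree of [41]; the function names are ours, and taking the radial coordinate of the domain center as the mean of the neighbors' radii is an assumption of this sketch). Native coordinates are mapped into the Poincare disk, where by (14) the hyperbolic circle is an ordinary Euclidean circle, so it can be buffered into a polygon contour and used as the spatial query:

```python
import numpy as np
from shapely.geometry import Point
from shapely.strtree import STRtree

def to_disk(r, theta):
    """Map native polar coordinates to Poincare-disk Cartesian coordinates."""
    rho = np.tanh(r / 2.0)
    return rho * np.cos(theta), rho * np.sin(theta)

def init_angle(nbrs, coords, R):
    """nbrs: the set Psi(i); coords: {node: (r, theta)} at time t; R: domain radius."""
    theta_hat = np.mean([coords[j][1] for j in nbrs])        # Eq. (12)
    r_hat = np.mean([coords[j][0] for j in nbrs])            # assumed center radius
    # Euclidean center and radius of the hyperbolic circle in the disk, per Eq. (14)
    c = (np.exp(R) + 1.0) / (np.exp(R) - 1.0)
    r0h = np.tanh(r_hat / 2.0)
    cen = r0h * (c**2 - 1.0) / (c**2 - r0h**2)
    rad = (1.0 - r0h**2) * c / (c**2 - r0h**2)
    domain = Point(cen * np.cos(theta_hat), cen * np.sin(theta_hat)).buffer(rad)
    nodes = list(coords)
    pts = [Point(*to_disk(*coords[v])) for v in nodes]
    cand = STRtree(pts).query(domain)   # rectangle-based pruning, a superset of the exact set
    inside = [nodes[k] for k in cand if domain.contains(pts[k])]
    if not inside:
        return theta_hat
    return np.mean([coords[v][1] for v in inside])           # Eq. (15)
```

The tree query returns a bounding-rectangle superset that is then filtered exactly, mirroring the two-stage search described above.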

4. Performance Evaluations

In this section, we verify the performance of our schemes with extensive evaluations. First, we evaluate the reliability of our schemes against the eigen decomposition-based scheme in terms of MSE. Then, we evaluate the embedding precision on synthetic networks in comparison with other static hyperbolic embedding schemes. Afterward, we implement our schemes in different downstream tasks and compare them with the other hyperbolic embedding schemes.

4.1. Scheme Analysis

The proposed MpDHE combines matrix perturbation and conformal mapping to reduce the complexity of dynamic embedding while preserving embedding precision. Conformal mapping transforms a Euclidean coordinate into a hyperbolic coordinate losslessly. However, using matrix perturbation for fast re-embedding inevitably incurs errors.
To analyze the reliability of the Euclidean coordinates obtained by matrix perturbation, we implement ablation experiments on 10 groups of synthetic networks. These groups of networks are constructed by nPSO with network scales ranging from 100 to 1000. The proportion of changed nodes between net0 and net1 is 5%. We calculate the mean square error (MSE) between eigenvectors obtained from eigen decomposition and matrix perturbation. The corresponding results are shown in Figure 3.
All the MSEs are at a low level (under 0.05), and the MSE decreases as the number of nodes increases. This indicates that the proposed method is capable of embedding large-scale networks with a bounded error.

4.2. Embedding Performance Evaluations

4.2.1. Settings

To verify the efficiency and effectiveness of our embedding schemes for complex networks with different parameters, we generate 10 PSO synthetic networks with parameter combinations drawn from $T \in \{0.1, 0.4, 0.7, 1\}$, $2m \in \{4, 6, 8, 10\}$ and $\gamma \in \{2, 2.5, 3\}$. The network scale is set to 10,000. We then implement LaBNE and the two proposed embedding schemes to embed the networks into the Poincare disk. The results are based on the average of the ten networks with the above parameter configurations.
To perform evaluations in dynamic network scenarios, we generate two snapshots of each network with the above generation configurations (the setup extends easily to more snapshots). The first snapshot is the initial “net0”, and the second snapshot, “net1”, has a 1% change in nodes compared to the first: newly added nodes account for a fraction $\frac{1}{m+1}$ of the changes, and varied old nodes account for $\frac{m}{m+1}$, according to the PSO model.
We perform evaluations on our MpDHE scheme with the performance bounds provided by the other static embedding schemes for dynamic network situations. Those static embedding schemes can still be used for dynamic scenarios if we treat each snapshot of the dynamic network as the static embedding scenario. Obviously, this procedure would incur high complexity, and it is not realistic for large-scale network application scenarios. In our evaluations, improved static embedding schemes are used as a precision bound for comparisons.

4.2.2. Baselines

In performance evaluation experiments, we compare the performance of the following static embedding methods to evaluate their effectiveness.
EE [42]: EE is an efficient hyperbolic embedding method with a greedy strategy, which combines common neighbor information with the maximum likelihood of optimizing the embedding.
Coalescent [43]: Coalescent approximates the hyperbolic distance between connected nodes with two manifold-learning-based pre-weighting strategies. The final embedding vectors are adjusted via maximum likelihood.
LPCS [44]: LPCS is a novel hyperbolic embedding method utilizing the community information of the network; it embeds nodes from a common community to preserve the mesoscale structure of the networks.
CHM [45]: CHM detects communities of the network and then constructs a fast index to solve the maximum likelihood with the guide of the obtained communities.
Mercator [46]: Mercator embeds networks into the $\mathbb{S}^1$ model, combining machine learning and maximum likelihood in a fast and precise mode.
LaBNE [24]: LaBNE is a manifold learning method based on the Laplace eigen decomposition. Embedding vectors are transformed into a two-dimensional hyperbolic plane according to conformal mapping.

4.2.3. Metrics

We use two metrics to evaluate the network embedding: the hyperbolic distance correlation and the concordance score.
Hyperbolic distance correlation (HD-corr): HD-corr is the Pearson correlation of the pairwise hyperbolic distance between initial coordinates and embedding coordinates. The Pearson correlation coefficient can measure the linear relation between two objects and estimate whether the linear relation can be fitted to a straight line. The closer the absolute value approaches 1, the stronger the correlation is. Otherwise, the closer the absolute value approaches 0, the weaker the correlation is.
Concordance score (C-score): C-score is the proportion of node pairs arranged in the same rotation direction of the initial network versus the embedding network, which is shown in (16).
$C\text{-}score = \frac{\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} \delta(i,j)}{n(n-1)/2}$ (16)
where n is the number of nodes and i and j represent two nodes. If the direction (clockwise or anticlockwise) of the shortest angular distance between i and j in the initial coordinates is the same as that in the embedding coordinates, then $\delta(i,j)$ is 1; otherwise, $\delta(i,j)$ is 0. Similar to HD-corr, the C-score increases from 0 to 1 as the embedding performance improves. Therefore, these two metrics can guide the parameter selection of MpDHE; specifically, we set T and $\gamma$ to values that yield a high HD-corr and C-score.
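Both metrics are straightforward to compute from polar coordinates; a sketch (assuming NumPy and SciPy, with helper names of our choosing) might be:

```python
import numpy as np
from scipy.stats import pearsonr

def hyp_dists(r, theta):
    """All pairwise hyperbolic distances for polar coordinate arrays r, theta."""
    dt = np.pi - np.abs(np.pi - np.abs(theta[:, None] - theta[None, :]) % (2 * np.pi))
    x = (np.cosh(r[:, None]) * np.cosh(r[None, :])
         - np.sinh(r[:, None]) * np.sinh(r[None, :]) * np.cos(dt))
    return np.arccosh(np.maximum(x, 1.0))

def hd_corr(r_true, t_true, r_emb, t_emb):
    """Pearson correlation of pairwise hyperbolic distances (HD-corr)."""
    iu = np.triu_indices(len(r_true), k=1)
    return pearsonr(hyp_dists(r_true, t_true)[iu], hyp_dists(r_emb, t_emb)[iu])[0]

def c_score(t_true, t_emb):
    """Fraction of node pairs whose shortest rotation direction agrees, Eq. (16)."""
    n = len(t_true)
    iu, jv = np.triu_indices(n, k=1)
    def direction(t):
        # signed shortest angular offset in [-pi, pi); its sign is the direction
        return np.sign((t[jv] - t[iu] + np.pi) % (2 * np.pi) - np.pi)
    agree = direction(t_true) == direction(t_emb)
    return agree.sum() / (n * (n - 1) / 2)
```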

4.2.4. Results

First, we evaluate the embedding performance under different complex network configuration parameters. Figure 4 shows the mean HD-corr for the different parameters, where each row represents one embedding scheme and each column corresponds to a specific power-law index. In each subgraph, the horizontal axis is the temperature coefficient T and the vertical axis is the average degree m; the color of the heat map reflects the HD-corr value, with deeper colors indicating better embedding performance. The results show that the embedding performance of our scheme is close to the LaBNE bound across the different network configurations. In addition, the optimal parameter combination appears at the lower left corner of the heat maps, i.e., $T = 0.1$, $2m = 10$ and $\gamma = 3$. This indicates that the hyperbolic embedding schemes are well suited to low-temperature, dense complex networks.
To further verify the adaptability of our embedding scheme within different networks, we perform evaluations on three groups of experiments in different scenarios.
The first group: We set the complex network configuration to the parameter combination $T = 0.1$, $2m = 10$ and $\gamma = 3$. The initial network size is set to 10,000 nodes. We then choose different proportions of changed nodes in the network and generate 10 groups of networks. The embedding results in terms of HD-corr and C-score are shown in Table 1. Compared with the other static embedding methods, our scheme achieves competitive embedding performance. As the proportion of changed nodes increases, the HD-corr and C-score of the embedding schemes slightly decrease. The C-score is higher than the HD-corr in the same situation, since HD-corr measures hyperbolic distance while C-score only measures the relative angle.
The second group: We keep the complex network configuration parameters of the first group and fix the node variation ratio at 1%. Table 2 shows the HD-corr and C-score for network scales from 1000 to 15,000. The results again show that our scheme achieves good embedding performance compared with the other methods.
The third group: We set the network scale to 10,000 nodes and the node variation ratio to 1%. We then extend the network from two time steps to six, i.e., net0, net1 and so on up to net5. The other network configuration parameters are the same as in the previous two groups. The results are shown in Table 3. The HD-corr and C-score of our method decrease slightly but still stay above 0.96 and 0.99 with more updates.
From the three groups of evaluations, we find that our dynamic embedding scheme is competitive with, and sometimes superior to, many static embedding schemes in terms of embedding precision. Coalescent is the only scheme that clearly exceeds ours. This shows that our dynamic updating process does not incur an obvious loss of embedding precision.

4.3. Embedding Efficiency Evaluations

4.3.1. Settings

We fix the proportion of changed nodes to 1%, change the network scale from 2500 to 17,500 in the initial network net0 and use LaBNE and MpDHE to make the embeddings for net1. The other parameters are set as: T = 0.1 , 2 m = 10 and γ = 3 .
We make embedding efficiency evaluations on our proposed hyperbolic embedding scheme with both a synthetic network generated by the nPSO model and realistic, complex network datasets. To embody the dynamic network scenario, we involve two continuous time steps of network scenarios in the evaluation, which can also be easily extended to more continuous time steps. The dynamic network situation includes two continuous time steps: the initial “net0” and the second time step of “net1”. We compare the embedding time of different hyperbolic embedding schemes to “net1”. The details of the complex networks used for the evaluation are as follows.
nPSO-1: A synthetic network dataset generated by the nPSO model. We set 15 communities, and the network parameters are $n = 1000$, $m = 5$, $T = 0.1$, $\gamma = 3$. The proportion of varied nodes between net0 and net1 is 20%.
Students: Dataset [47] includes the relationship between some students at a French high school for a 5-day duration. The dataset is available at http://www.sociopatterns.org/datasets/high-school-contact-and-friendship-networks/, accessed on 7 September 2022.
BS: Network [48] consists of recorded users’ behavior on the Internet in a Chinese city over two weeks, using base stations (BS) as nodes. If people move between two base stations over a period of time, the two base stations are considered to have an edge.
DBLP: The citation network of DBLP [49], a database of scientific publications. The dataset can be downloaded from http://konect.cc/networks/dblp-cite/, accessed on 7 September 2022.
arXiv-HepPh: A co-reference network of scientific papers from the high-energy physics phenomenology (Hep-Ph) section of arXiv [50]. The dataset can be downloaded from http://konect.cc/networks/ca-cit-HepPh/, accessed on 7 September 2022.
The statistical metrics of the complex network generated by the nPSO model and of the realistic datasets are shown in Table 4, where $|V(G_0)|$ is the number of nodes in the initial network net0, $|E(G_0)|$ is the number of edges in net0, $\gamma(G_0)$ is the power-law distribution index of net0, $|E_{add}|$ is the number of newly added edges in net1 compared with net0 and, correspondingly, $|E_{del}|$ is the number of deleted edges in net1 compared with net0.

4.3.2. Results

The embedding time results are shown in Table 5. Our scheme achieves the best time efficiency on all the datasets: the static hyperbolic embedding schemes need to recompute the coordinates every time the network changes, while our scheme only needs to update the coordinates with low time complexity. In the previous section, Coalescent achieved the best precision, slightly exceeding our scheme; however, our scheme takes far less time and achieves very close precision, so it is effective, with better embedding time efficiency and valid precision.

4.4. Visualization Effect for Downstream Community Discovery

4.4.1. Settings

We perform performance evaluations of our proposed hyperbolic embedding schemes within the downstream community discovery task. Our evaluations are implemented on both complex networks generated by the nPSO model and the realistic complex network datasets.

4.4.2. Results

We implement our proposed scheme and then utilize the Critical Gap Method (CGM) [51] to perform the downstream community discovery task based on the embedding. We compare the visualization of our scheme with that of the classic Louvain community discovery algorithm [52], which forms communities directly from the topology structure without a representation learning approach. The performance of the community discovery is quantified by modularity in Table 6. Evidently, the modularity obtained by MpDHE remains at a high level on most networks.
To conveniently show the visual effect of networks based on our schemes, we choose two medium-scale networks. One is from the synthetic network, and the other is from the real network.
Visualization of the synthetic network: Figure 5 and Figure 6 show the visualization of the community discovery task for net0 of the nPSO-1 dataset. Figure 5 shows the result of community discovery using the Louvain algorithm; the color represents the community division, with the same color indicating the same community. Figure 6a shows the hyperbolic embedding coordinates of the network, and Figure 6b is the result of community discovery using the CGM algorithm based on the embedding coordinates in Figure 6a.
Comparing the above figures, we can see that the Louvain algorithm forms communities based on network topology, so the locations of nodes and the distances between them have no obvious physical meaning. In contrast, community discovery based on hyperbolic embedding yields a good visualization in which each node's location represents the balance between popularity and similarity. Similar colors of two communities (i.e., similar angular coordinates) imply that the two communities are similar. Moreover, the radial coordinates (representing popularity) reveal the key nodes (circled in Figure 6b) in each community.
Visualization for the realistic complex network: For the realistic complex network, we choose the “Students” dataset. Figure 7 and Figure 8 show the visualization results of community discovery for net0 of the “Students” dataset. Figure 7 shows the visualization of community discovery using the Louvain algorithm, where the same color represents the same community. Figure 8c shows the hyperbolic embedding coordinates of the network. As with the nPSO network, the visualization based on our embedding scheme is clearly better than that of the Louvain algorithm: the interaction within and between communities is obvious and clear. If the labels and attributes of nodes were known, such visualizations could be improved further.
We then exhibit the visualization of the “Students” dataset over two continuous time steps. Figure 8a,b shows the evolution from net0 to net1 in the “Students” dataset. The community interactions vary across the two time steps, but the overall community structure remains stable. Changes are more visible for nodes with larger radial coordinates than for nodes with smaller ones; i.e., a node with a large degree does not change much during the evolution, which is consistent with our basic assumption.

4.5. Important Nodes Searching for Downstream Routing

4.5.1. Settings

We perform performance evaluations of the downstream routing task on two datasets: the synthetic dataset nPSO-1 and the realistic dataset DBLP. The other settings are the same as in the previous evaluation.

4.5.2. Metrics

Traffic Load Centrality (TLC): First, we use TLC to calculate the importance of nodes for routing. It assumes each node sends a unit of some commodity (e.g., traffic) to any other node. The commodity is transferred from one node to a neighbor closest to the destination. If more than one such neighbor exists, the commodity would be equally divided among them. TLC is defined as the total amount of commodities passing through a node via these exchanges [53]. The more the commodity passes, the more important the node is.
Hyperbolic Traffic Load Centrality (HTLC): HTLC is an approximation of TLC. It assumes each node sends a unit of some commodity to each other node, and from each node (except the destination), the commodity is equally divided to its greedy neighbors. HTLC is defined as the total amount of commodity passing through a node via these exchanges. Thus, HTLC considers greedy paths over hyperbolic coordinates instead of hop-measured shortest paths [54].
We calculate HTLC using the hyperbolic coordinates obtained from each scheme. Then, we compare the top-k nodes ranked by HTLC with those ranked by TLC, which does not need hyperbolic coordinates. The larger the intersection between the top-k nodes by HTLC and those by TLC, the better the scheme is.
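This comparison is easy to express in code; below is a sketch (assuming NetworkX, whose load_centrality computes the commodity-based load centrality of [53] used here as TLC; the greedy HTLC variant and the helper names are ours). Each unit of commodity is split equally among the neighbors hyperbolically closer to the destination, and because that distance strictly decreases along greedy hops, the propagation always terminates:

```python
import networkx as nx

def htlc(G, dist):
    """dist[u][v]: hyperbolic distance between the embedded nodes u and v."""
    load = {v: 0.0 for v in G}
    for s in G:
        for t in G:
            if s == t:
                continue
            frontier = {s: 1.0}                 # commodity currently in transit
            while frontier:
                nxt = {}
                for u, amount in frontier.items():
                    if u != s and u != t:
                        load[u] += amount       # commodity passing through u
                    if u == t:
                        continue                # delivered
                    greedy = [w for w in G[u] if dist[w][t] < dist[u][t]]
                    if not greedy:
                        continue                # greedy routing got stuck; drop
                    for w in greedy:
                        nxt[w] = nxt.get(w, 0.0) + amount / len(greedy)
                frontier = nxt
    return load

def topk_overlap(G, dist, k=10):
    """Fraction of the TLC top-k that the HTLC top-k recovers."""
    tlc = nx.load_centrality(G)
    by_tlc = sorted(G, key=tlc.get, reverse=True)[:k]
    by_htlc = sorted(G, key=htlc(G, dist).get, reverse=True)[:k]
    return len(set(by_tlc) & set(by_htlc)) / k
```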

4.5.3. Results

The results of the important node search are shown in Figure 9 and Figure 10, where “prediction” means that HTLC is used to predict the top-k nodes calculated by TLC. The results show that our scheme does not achieve the best result, but the difference between our scheme and the other schemes is minor. This is consistent with our target, i.e., reducing the complexity of dynamic hyperbolic embedding while maintaining good precision for downstream tasks.

5. Conclusions

Time complexity is a big challenge for dynamic hyperbolic embedding of temporal complex networks. In this paper, we propose low-complexity hyperbolic embedding schemes aimed at the evolution characteristics of temporal complex networks. To tackle the embedding efficiency problems of different dynamic network evolution processes, we propose two methods, for medium-scale and for large-scale networks. First, we utilize the matrix perturbation approach to solve for eigenvalue and eigenvector increments with low complexity in medium-scale networks. Second, we construct the geometric initialization to realize low-complexity updates for large-scale networks. The performance evaluations cover different application scenarios and show that our schemes can model the smooth increments of a dynamic network very well. Our future work includes a robust model for dealing with dramatic increments caused by noise and outliers.

Author Contributions

Methodology, H.J.; Software, L.S.; Writing—original draft, J.F.; Writing—review & editing, Y.Z.; Visualization, L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by NSFC Key Projects Supported by the Joint Fund for Enterprise Innovation and Development (Grant no. U19B2004), partially supported by Open Funding Project of the State Key Laboratory of Communication Content Cognition (No. 20K05) and partially supported by State Key Laboratory of Communication Content Cognition (Grant No. A02107).

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, T.; Zhang, J.; Yu, S.P.; Zhang, Y.; Yan, Y. Deep Dynamic Network Embedding for Link Prediction. IEEE Access 2018, 6, 29219–29230.
  2. Li, H.J.; Wang, L.; Zhang, Y.; Perc, M. Optimization of identifiability for efficient community detection. New J. Phys. 2020, 22, 063035.
  3. Ye, D.; Jiang, H.; Jiang, Y.; Wang, Q.; Hu, Y. Community preserving mapping for network hyperbolic embedding. Knowl. Based Syst. 2022, 246, 108699.
  4. Hofmann, T.; Buhmann, M.J. Multidimensional Scaling and Data Clustering. In Proceedings of the NIPS, Denver, CO, USA, 27–30 November 1995; pp. 459–466.
  5. Balasubramanian, M.; Schwartz, L.E.; Tenenbaum, B.J.; Silva, D.V.; Langford, C.J. The isomap algorithm and topological stability. Science 2002, 295, 7.
  6. Belkin, M.; Niyogi, P. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems 14; MIT Press: Cambridge, MA, USA, 2002; Volumes 1 and 2, pp. 585–591.
  7. Roweis, T.S.; Saul, K.L. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326.
  8. Cao, S.; Lu, W.; Xu, Q. GraRep: Learning Graph Representations with Global Structural Information. In Proceedings of the ACM International Conference on Information and Knowledge Management, Melbourne, Australia, 18–23 October 2015.
  9. Perozzi, B.; Al-Rfou’, R.; Skiena, S. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’14), New York, NY, USA, 24–27 August 2014; pp. 701–710.
  10. Grover, A.; Leskovec, J. Node2vec: Scalable Feature Learning for Networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16), San Francisco, CA, USA, 13–17 August 2016; pp. 855–864.
  11. Chen, H.; Perozzi, B.; Hu, Y.; Skiena, S. HARP: Hierarchical Representation Learning for Networks. In Proceedings of the National Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018.
  12. Dong, Y.; Chawla, V.N.; Swami, A. Metapath2vec: Scalable Representation Learning for Heterogeneous Networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’17), Halifax, NS, Canada, 13–17 August 2017; pp. 135–144.
  13. Keikha, M.M.; Rahgozar, M.; Asadpour, M. Community Aware Random Walk for Network Embedding. Knowl. Based Syst. 2018, 148, 47–54.
  14. Xue, S.; Lu, J.; Zhang, G. Cross-domain network representations. Pattern Recognit. 2019, 94, 135–148.
  15. Kipf, N.T.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
  16. Wang, D.; Cui, P.; Zhu, W. Structural Deep Network Embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16), San Francisco, CA, USA, 13–17 August 2016; pp. 1225–1234.
  17. Cao, S.; Lu, W.; Xu, Q. Deep Neural Networks for Learning Graph Representations. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 1145–1152.
  18. Tian, F.; Gao, B.; Cui, Q.; Chen, E.; Liu, T.Y. Learning Deep Representations for Graph Clustering. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, Québec, QC, Canada, 27–31 July 2014; pp. 1293–1299.
  19. Huang, F.; Zhang, X.; Xu, J.; Li, C.; Li, Z. Network embedding by fusing multimodal contents and links. Knowl. Based Syst. 2019, 171, 44–55.
  20. Krioukov, D.; Papadopoulos, F.; Kitsak, M.; Vahdat, A.; Boguñá, M. Hyperbolic geometry of complex networks. Phys. Rev. E 2010, 82, 036106.
  21. Kovács, B.; Palla, G. Optimisation of the coalescent hyperbolic embedding of complex networks. Sci. Rep. 2021, 11, 8350.
  22. Wang, L.; Lu, Y.; Huang, C.; Vosoughi, S. Embedding Node Structural Role Identity into Hyperbolic Space. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM ’20), Virtual Event, Ireland, 19–23 October 2020; pp. 2253–2256.
  23. Chamberlain, P.B.; Clough, J.; Deisenroth, P.M. Neural Embeddings of Graphs in Hyperbolic Space. arXiv 2017, arXiv:1705.10359.
  24. Alanis-Lobato, G.; Mier, P.; Andrade-Navarro, M.A. Efficient embedding of complex networks to hyperbolic space via their Laplacian. Sci. Rep. 2016, 6, 30108.
  25. Papadopoulos, F.; Psomas, C.; Krioukov, V.D. Network Mapping by Replaying Hyperbolic Growth. IEEE/ACM Trans. Netw. 2015, 23, 198–211.
  26. Papadopoulos, F.; Krioukov, V.D. Network Geometry Inference using Common Neighbors. Phys. Rev. E 2015, 92, 022807.
  27. Alanis-Lobato, G.; Mier, P.; Andrade-Navarro, A.M. Manifold learning and maximum likelihood estimation for hyperbolic network embedding. Appl. Netw. Sci. 2016, 1, 10.
  28. Li, H.J.; Wang, L.; Bu, Z.; Cao, J.; Shi, Y. Measuring the network vulnerability based on Markov criticality. ACM Trans. Knowl. Discov. Data 2021, 16, 1–24.
  29. Nguyen, G.H.; Lee, J.B.; Rossi, R.A.; Ahmed, N.K.; Koh, E.; Kim, S. Dynamic network embeddings: From random walks to temporal random walks. In Proceedings of the 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 10–13 December 2018; pp. 1085–1092.
  30. Nguyen, G.H.; Lee, J.B.; Rossi, R.A.; Ahmed, N.K.; Koh, E.; Kim, S. Continuous-time dynamic network embeddings. In Proceedings of the Web Conference 2018 (WWW ’18), Lyon, France, 23–27 April 2018; pp. 969–976.
  31. Ou, M.; Cui, P.; Pei, J.; Zhang, Z.; Zhu, W. Asymmetric transitivity preserving graph embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1105–1114.
  32. Zhu, D.; Cui, P.; Zhang, Z.; Pei, J.; Zhu, W. High-order Proximity Preserved Embedding for Dynamic Networks. IEEE Trans. Knowl. Data Eng. 2018, 30, 2134–2144.
  33. Cao, Y.; Dong, Y.H.; Wu, S.Q.; Chen, H.H.; Qian, J.B.; Pan, S.L. Dynamic Network Representation Learning: A Review. Acta Electron. Sin. 2020, 48, 2047–2059.
  34. Mahdavi, S.; Khoshraftar, S.; An, A. dynnode2vec: Scalable Dynamic Network Embedding. In Proceedings of the International Conference on Big Data, Los Angeles, CA, USA, 9–12 December 2019; pp. 3762–3765.
  35. Goyal, P.; Kamra, N.; He, X.; Liu, Y. DynGEM: Deep Embedding Method for Dynamic Graphs. arXiv 2018, arXiv:1805.11273.
  36. Li, H.J.; Wang, Z.; Pei, J.; Cao, J.; Shi, Y. Optimal estimation of low-rank factors via feature level data fusion of multiplex signal systems. IEEE Trans. Knowl. Data Eng. 2022, 34, 2860–2871.
  37. Papadopoulos, F.; Kitsak, M.; Serrano, M.Á.; Boguñá, M.; Krioukov, D. Popularity versus similarity in growing networks. Nature 2012, 489, 537–540.
  38. Muscoloni, A.; Cannistraci, V.C. A nonuniform popularity-similarity optimization (nPSO) model to efficiently generate realistic complex networks with communities. New J. Phys. 2018, 20, 052002.
  39. Boguñá, M.; Papadopoulos, F.; Krioukov, D. Sustaining the Internet with hyperbolic mapping. Nat. Commun. 2010, 1, 62.
  40. Li, J.; Dani, H.; Hu, X.; Tang, J.; Chang, Y.; Liu, H. Attributed Network Embedding for Learning in a Dynamic Environment. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, Singapore, 6–10 November 2017; pp. 387–396.
  41. Guttman, A. R-trees: A dynamic index structure for spatial searching. In Readings in Database Systems, 3rd ed.; MIT Press: Cambridge, MA, USA, 1998; pp. 90–100.
  42. Roy, A. Efficient embedding of scale-free graphs in the hyperbolic plane. Comput. Rev. 2019, 60, 173–174.
  43. Muscoloni, A.; Thomas, J.M.; Ciucci, S.; Bianconi, G.; Cannistraci, C.V. Machine learning meets complex networks via coalescent embedding in the hyperbolic space. Nat. Commun. 2017, 8, 1615.
  44. Wang, Z.; Wu, Y.; Li, Q.; Jin, F.; Xiong, W. Link prediction based on hyperbolic mapping with community structure for complex networks. Phys. A Stat. Mech. Its Appl. 2016, 450, 609–623.
  45. Wang, Z.; Li, Q.; Jin, F.; Xiong, W.; Wu, Y. Hyperbolic mapping of complex networks based on community information. Phys. A Stat. Mech. Its Appl. 2016, 455, 104–119.
  46. Garcia-Perez, G.; Allard, A.; Serrano, M.A.; Boguna, M. Mercator: Uncovering faithful hyperbolic embeddings of complex networks. New J. Phys. 2019, 21, 123033.
  47. Mastrandrea, R.; Fournet, J.; Barrat, A. Contact Patterns in a High School: A Comparison between Data Collected Using Wearable Sensors, Contact Diaries and Friendship Surveys. PLoS ONE 2015, 10, e0136497.
  48. Yi, S.; Jiang, H.; Jiang, Y.; Zhou, P.; Wang, Q. A Hyperbolic Embedding Method for Weighted Networks. IEEE Trans. Netw. Sci. Eng. 2021, 8, 599–612.
  49. Ley, M. The DBLP Computer Science Bibliography: Evolution, Research Issues, Perspectives. In International Symposium on String Processing and Information Retrieval; Springer: Berlin/Heidelberg, Germany, 2002; pp. 1–10.
  50. Leskovec, J.; Kleinberg, J.; Faloutsos, C. Graph evolution: Densification and shrinking diameters. ACM Trans. Knowl. Discov. Data 2007, 1, 2.
  51. García-Pérez, G.; Boguñá, M.; Allard, A.; Serrano, Á.M. The hidden hyperbolic geometry of international trade: World Trade Atlas 1870–2013. Sci. Rep. 2016, 6, 33441.
  52. Blondel, D.V.; Guillaume, J.L.; Lambiotte, R.; Lefebvre, E. Fast unfolding of communities in large networks. J. Stat. Mech. Theory Exp. 2008, 2008, P10008.
  53. Brandes, U. On variants of shortest-path betweenness centrality and their generic computation. Soc. Netw. 2008, 30, 136–145.
  54. Stai, E.; Sotiropoulos, K.; Karyotis, V.; Papavassiliou, S. Hyperbolic embedding for efficient computation of path centralities and adaptive routing in large-scale complex commodity networks. IEEE Trans. Netw. Sci. Eng. 2017, 4, 140–153.
Figure 1. An illustration of tree structure embedding into the hyperbolic space.
Figure 2. The overview of MpDHE.
Figure 3. The MSE between eigenvectors of the Laplacian matrix and matrix perturbation results.
Figure 4. The embedding performance of networks with different parameters.
Figure 5. The community discovery visualization of nPSO-1 based on the Louvain algorithm.
Figure 6. The community discovery visualization of nPSO-1 based on hyperbolic embedding. (a) shows the hyperbolic coordinates of the network, and (b) shows the result of community discovery using the CGM algorithm based on the coordinates in (a). We circled some key nodes which connect with many nodes, and we can see they usually have small radial coordinates in hyperbolic space.
Figure 7. The community discovery visualization of the Students dataset based on the Louvain algorithm.
Figure 8. The community discovery visualization of the Students dataset based on hyperbolic embedding. (a) is the visualization of net0 in “Students” and (b) is the visualization of net1, showing the evolution from net0 to net1. (c) shows the hyperbolic coordinates of the network.
Figure 9. The proportion of correct predictions in top-k nodes on the nPSO-1 dataset.
Figure 10. The proportion of correct predictions in top-k nodes on the DBLP dataset.
Table 1. The hyperbolic embedding performance for different proportions of changed nodes.

| Metrics | Schemes | 1% | 5% | 10% | 15% | 20% | 25% | 30% |
|---|---|---|---|---|---|---|---|---|
| HD-corr | EE | 0.9368 | 0.9435 | 0.9450 | 0.9496 | 0.9466 | 0.9491 | 0.9338 |
| | Coalescent | 0.9755 | 0.9759 | 0.9754 | 0.9759 | 0.9756 | 0.9753 | 0.9759 |
| | LPCS | 0.9548 | 0.9547 | 0.9536 | 0.9549 | 0.9548 | 0.9544 | 0.9544 |
| | CHM | 0.9425 | 0.9429 | 0.9433 | 0.9422 | 0.9429 | 0.9430 | 0.9421 |
| | Mercator | 0.9432 | 0.9435 | 0.9425 | 0.9425 | 0.9428 | 0.9417 | 0.9422 |
| | LaBNE | 0.9681 | 0.9687 | 0.9684 | 0.9665 | 0.9681 | 0.9690 | 0.9680 |
| | MpDHE | 0.9679 | 0.9683 | 0.9676 | 0.9674 | 0.9670 | 0.9684 | 0.9675 |
| C-score | EE | 0.8875 | 0.9035 | 0.9369 | 0.9334 | 0.9327 | 0.9412 | 0.9217 |
| | Coalescent | 0.9942 | 0.9964 | 0.9964 | 0.9963 | 0.9962 | 0.9965 | 0.9958 |
| | LPCS | 0.9762 | 0.9766 | 0.9756 | 0.9763 | 0.9765 | 0.9768 | 0.9763 |
| | CHM | 0.9710 | 0.9720 | 0.9729 | 0.9714 | 0.9720 | 0.9729 | 0.9715 |
| | Mercator | 0.9836 | 0.9891 | 0.9856 | 0.9893 | 0.9874 | 0.9874 | 0.9879 |
| | LaBNE | 0.9920 | 0.9927 | 0.9925 | 0.9893 | 0.9936 | 0.9922 | 0.9917 |
| | MpDHE | 0.9923 | 0.9927 | 0.9918 | 0.9906 | 0.9926 | 0.9919 | 0.9918 |
Table 2. The hyperbolic embedding performance on different network scales.

| Metrics | Schemes | 1000 | 5000 | 10,000 | 12,500 | 15,000 |
|---|---|---|---|---|---|---|
| HD-corr | EE | 0.896560 | 0.944691 | 0.948185 | 0.956089 | 0.947904 |
| | Coalescent | 0.907575 | 0.974702 | 0.975469 | 0.975531 | 0.975494 |
| | LPCS | 0.851357 | 0.945676 | 0.955054 | 0.955483 | 0.958091 |
| | CHM | 0.857466 | 0.936428 | 0.942225 | 0.944796 | 0.946437 |
| | Mercator | 0.891975 | 0.947479 | 0.943099 | 0.939006 | 0.935306 |
| | LaBNE | 0.899086 | 0.967056 | 0.967934 | 0.967032 | 0.968298 |
| | MpDHE | 0.899059 | 0.967048 | 0.967934 | 0.967028 | 0.968298 |
| C-score | EE | 0.908284 | 0.923507 | 0.933810 | 0.941811 | 0.937041 |
| | Coalescent | 0.936283 | 0.992217 | 0.995842 | 0.996492 | 0.996578 |
| | LPCS | 0.885428 | 0.965513 | 0.977327 | 0.977433 | 0.979936 |
| | CHM | 0.888015 | 0.961748 | 0.971367 | 0.974466 | 0.976481 |
| | Mercator | 0.931286 | 0.989662 | 0.986334 | 0.985775 | 0.987029 |
| | LaBNE | 0.932585 | 0.989196 | 0.992568 | 0.991948 | 0.993236 |
| | MpDHE | 0.932524 | 0.989215 | 0.992576 | 0.991945 | 0.993233 |
Table 3. The hyperbolic embedding performance with different time steps.

| Metrics | Schemes | net1 | net2 | net3 | net4 | net5 |
|---|---|---|---|---|---|---|
| HD-corr | EE | 0.949628 | 0.947528 | 0.950742 | 0.945694 | 0.947809 |
| | Coalescent | 0.975497 | 0.975550 | 0.975630 | 0.975471 | 0.975483 |
| | LPCS | 0.953126 | 0.955325 | 0.953265 | 0.953511 | 0.956482 |
| | CHM | 0.942017 | 0.942881 | 0.942082 | 0.942308 | 0.942248 |
| | Mercator | 0.942902 | 0.943105 | 0.942746 | 0.943036 | 0.942983 |
| | LaBNE | 0.966788 | 0.966792 | 0.966794 | 0.966767 | 0.966768 |
| | MpDHE | 0.966784 | 0.966784 | 0.966781 | 0.966752 | 0.966753 |
| C-score | EE | 0.940130 | 0.935073 | 0.941542 | 0.919566 | 0.936673 |
| | Coalescent | 0.996244 | 0.996243 | 0.996215 | 0.996252 | 0.996244 |
| | LPCS | 0.974311 | 0.977519 | 0.974655 | 0.975197 | 0.978727 |
| | CHM | 0.971405 | 0.972027 | 0.970815 | 0.971477 | 0.971230 |
| | Mercator | 0.987539 | 0.987259 | 0.987542 | 0.987485 | 0.987348 |
| | LaBNE | 0.991716 | 0.991711 | 0.991673 | 0.991694 | 0.991691 |
| | MpDHE | 0.991708 | 0.991699 | 0.991668 | 0.991693 | 0.991680 |
Table 4. The statistical metrics of the datasets.

| Network | V(G0) | E(G0) | γ(G0) | E_add | E_del |
|---|---|---|---|---|---|
| nPSO-1 | 1000 | 4985 | 2.67 | 1000 | 0 |
| Students | 323 | 2942 | 5.67 | 1236 | 1474 |
| BS | 8796 | 174,836 | 2.99 | 102,243 | 100,882 |
| DBLP | 4442 | 24,734 | 3.75 | 18,369 | 6872 |
| ArXiv-HepPh | 5934 | 275,668 | 4.54 | 106,180 | 112,524 |
Table 5. The time for network embedding.

| Scheme | nPSO-1 | Students | BS | DBLP | Arxiv-Hepph |
|---|---|---|---|---|---|
| EE | 2.0472 | 1.2613 | 149.1714 | 16.4267 | 158.1508 |
| Coalescent | 6.4592 | 8.1858 | 584.7455 | 14.6385 | 314.1906 |
| LPCS | 26.3493 | 11.0025 | 165.8086 | 98.8830 | 95.1393 |
| CHM | 79.0703 | 0.0596 | 1611.3348 | 509.7389 | 666.9041 |
| Mercator | 36.1439 | 5.6540 | 2001.5367 | 463.9197 | 698.7599 |
| LaBNE | 0.1789 | 0.0346 | 7.7479 | 2.2787 | 3.6021 |
| MpDHE | 0.0515 | 0.0161 | 2.5832 | 0.6754 | 1.0156 |
Table 6. Community discovery on dynamic networks.

| Metrics | Schemes | nPSO-1 | Students | BS | DBLP | Arxiv-Hepph |
|---|---|---|---|---|---|---|
| Modularity | EE | 0.833421 | 0.666448 | 0.727889 | 0.250586 | 0.475920 |
| | Coalescent | 0.379924 | 0.497497 | 0.244465 | 0.177808 | 0.276130 |
| | LPCS | 0.684598 | 0.457470 | 0.401710 | 0.424134 | 0.439378 |
| | CHM | 0.762171 | 0.672836 | 0.758546 | 0.606665 | 0.641630 |
| | Mercator | 0.820491 | 0.631689 | 0.708975 | 0.538743 | 0.517403 |
| | LaBNE | 0.839045 | 0.610584 | 0.702995 | 0.465782 | 0.438718 |
| | MpDHE | 0.825334 | 0.593814 | 0.704195 | 0.438426 | 0.439234 |
Community numberEE1713508162
Coalescent1469533828442
LPCS991769
CHM178191620
Mercator167181911
LaBNE2510573420
MpDHE183424968
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
