Communication

Parallel Communication Optimization Based on Graph Partition for Hexagonal Neutron Transport Simulation Using MOC Method

1 Sino-French Institute of Nuclear Engineering and Technology, Sun Yat-sen University, Zhuhai 519082, China
2 Nuclear Power Institute of China, Chengdu 610041, China
* Author to whom correspondence should be addressed.
Energies 2023, 16(6), 2823; https://doi.org/10.3390/en16062823
Submission received: 4 February 2023 / Revised: 9 March 2023 / Accepted: 10 March 2023 / Published: 18 March 2023
(This article belongs to the Special Issue Mathematics and Computational Methods in Nuclear Energy Technology)

Abstract

OpenMOC-HEX, a neutron transport calculation code with hexagonal modular ray tracing, has the capability of domain decomposition parallelism based on the MPI parallel programming model. In this paper, the optimization of its inter-node communication is studied. Starting from the specific geometric arrangement of hexagonal reactors and the communication features of the Method of Characteristics, the computation and communication of all hexagonal assemblies are mapped onto a graph structure. The METIS library is then used for graph partitioning to minimize inter-node communication under the premise of load balance on each node. Numerical results for an example hexagonal core with 1968 energy groups and 1027 assemblies demonstrate that the communication time is reduced by about 90% and the MPI parallel efficiency is increased from 82.0% to 91.5%.

1. Introduction

The Method of Characteristics (MOC) is a neutron transport method with strong geometric adaptability, high precision, and great parallel potential. It has become the mainstream method for lattice calculation toolkits as well as high-fidelity numerical reactors. There are currently many MOC transport calculation codes for rectangular reactor cores, whereas those for hexagonal reactor cores are less studied.
The hexagonal fuel assembly is widely used in the core design of advanced multi-purpose reactors, such as the lead–bismuth and sodium-cooled fast reactors among the Generation IV candidates, because its compact arrangement increases the coolant flow rate and heat transfer efficiency. Developing neutron transport solvers capable of hexagonal core simulation and large-scale parallel computing therefore has significant practical engineering importance. At present, several MOC codes are being developed for hexagonal core simulation, such as DeCART [1], SONG [2], NECP-X [3], and ANT-MOC [4].
OpenMOC [5] is an open-source deterministic neutronics code developed and maintained by the Massachusetts Institute of Technology (MIT) for MOC simulation of rectangular reactor cores. OpenMOC-HEX [6] is a neutron transport calculation code developed and maintained by Sun Yat-sen University on the basis of OpenMOC. Leveraging OpenMOC's mature framework, neutron transport solver, and ray tracing module, OpenMOC-HEX employs a hexagonal modular ray tracing track laydown and realizes domain decomposition parallelism based on the MPI parallel programming model.
Domain decomposition has been widely used to parallelize numerical computations [7]. The decomposition can be mapped to a graph partitioning problem, which has been studied in depth in computer science [8] and applied in many fields, such as computational fluid dynamics [9] and rectangular reactor neutronics simulation [10,11,12].
Many studies have explored ways to reduce the computation time itself, addressing the program's underlying architecture, mathematical algorithms, and calculation and operation strategies, such as the CMFD acceleration and GMRES algorithms.
As computing resources become increasingly abundant and computations use fewer approximations to improve accuracy, parallel scales gradually expand. This introduces optimization problems, such as load balancing and communication minimization, that were less significant in smaller-scale computing. Studying these problems, especially for the special geometry of hexagonal cores, can effectively enhance program performance and efficiency.
In general, the graph partitioning problem is NP-complete, meaning that finding the optimal partition is computationally intractable. However, there are many well-developed graph partitioning algorithms and libraries, such as METIS, that can be used to minimize MPI communication under the constraint of load balancing [13].
In this paper, the MPI parallel communication of OpenMOC-HEX is optimized. A graph partitioning algorithm is used to map a large number of MPI processes onto multi-core computational nodes so as to achieve both load balance and minimal communication, thereby improving parallel efficiency.

2. Methodologies

2.1. Hexagonal Lattice Processing

In order to define the hexagonal assembly flexibly and quickly, a hexagonal grid (HexLattice) [6] is established first, as shown in Figure 1, and each grid cell can then be filled with different fuel rods. In the MOC solver, the material parameters of the cell in which a track segment lies must be obtained. Therefore, given the coordinates (x, y) of any point in the Cartesian coordinate system, the index of the grid cell containing that point is needed in order to retrieve the cell's material (fuel rod) information. The method used to obtain this index in the hexagonal grid geometry processing is shown in Figure 1.
In Figure 1, the coordinate system (i1, i2) is defined; the angle between the coordinate axes is 2π/3 and the unit length of the coordinates is √3p/2, where p is the center distance of the lattice cells. The calculation of the (i1, i2) coordinates of a point (x, y) in this coordinate system is shown in Equation (1).
α = y − √3x;  i1 = floor(α / (√3p));  i2 = floor(y / (0.75p))  (1)
According to the coordinates (i1, i2), the point (x, y) may lie in cell (i1, i2), (i1 + 1, i2), (i1, i2 + 1), or (i1 + 1, i2 + 1). It is necessary to further determine which of these four grid cells actually contains the point. To do so, the distances between the point (x, y) and the center points of the four candidate hexagonal cells are calculated, and the cell with the shortest distance is the one containing the point.
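The following Python sketch illustrates this candidate-and-nearest-center lookup. The lattice basis vectors, pitch value, and function names are assumptions chosen for illustration (a standard basis with a 2π/3 angle and pitch equal to the nearest-neighbor center distance); they are not the exact convention of OpenMOC-HEX's HexLattice or of Equation (1).

```python
import numpy as np

# Assumed hexagonal-lattice basis for illustration only (2*pi/3 between axes,
# nearest-neighbor center distance P); not the exact convention of Equation (1).
P = 1.0
A1 = np.array([P, 0.0])
A2 = np.array([-P / 2, np.sqrt(3) * P / 2])
BASIS = np.column_stack([A1, A2])          # columns are the lattice basis vectors


def cell_center(i1, i2):
    """Cartesian center of lattice cell (i1, i2)."""
    return i1 * A1 + i2 * A2


def locate_cell(x, y):
    """Return the (i1, i2) index of the hexagonal cell containing (x, y)."""
    # Oblique coordinates via the inverse basis, then floor to get a candidate corner.
    f1, f2 = np.linalg.solve(BASIS, np.array([x, y]))
    i1, i2 = int(np.floor(f1)), int(np.floor(f2))
    # The point lies in one of four neighboring cells; pick the one whose
    # center is closest, as described in Section 2.1.
    candidates = [(i1 + d1, i2 + d2) for d1 in (0, 1) for d2 in (0, 1)]
    return min(candidates,
               key=lambda c: np.linalg.norm(cell_center(*c) - np.array([x, y])))


print(locate_cell(0.1, 0.4))               # -> (0, 0) for this basis
```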

2.2. Correspondence Relationship of MPI Communication

OpenMOC-HEX implements MPI parallelism based on domain decomposition. Each MPI process is responsible for one hexagonal assembly, and the per-assembly calculations are coupled through boundary angular flux communication [6] between assemblies, as shown in Figure 2.
In order to send angular flux data to the correct target process, it is necessary to determine, for a given assembly and each of its edges, the process number of the neighboring assembly. In addition, for assemblies in the outermost ring, it must also be determined whether a neighboring assembly exists on a given edge at all.
In the OpenMOC-HEX code, the process numbering of the hexagonal assemblies starts from the top hexagonal assembly along a clockwise direction and increases from the outermost ring to the innermost ring. For example, in a three-ring hexagonal core, the MPI process numbering of each assembly is shown in Figure 3.
It can be seen from Figure 3 that the adjacency relationships between assemblies depend on the number of rings. To find the process number of the neighbor on each edge, this study transforms the coordinate system and regards the entire core, composed of hexagonal assemblies, as a hexagonal grid. First, the (i1, i2) coordinates corresponding to each process number are calculated using the method of Section 2.1. Second, the (i1, i2) coordinates of the neighbors on the corresponding edges are obtained, as shown in Figure 4. Finally, the process numbers of the neighboring assemblies are calculated from their (i1, i2) coordinates.
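A minimal sketch of this edge-neighbor lookup is given below. It assumes a rank-to-(i1, i2) mapping has already been built from the core layout (the clockwise ring numbering itself is not reproduced here), and the neighbor offsets and their ordering are illustrative choices for a 2π/3 axis convention, not necessarily the edge ordering used in OpenMOC-HEX.

```python
# Hypothetical helper: given rank -> (i1, i2) coordinates of each assembly,
# return the neighbor rank on each of the six edges, or -1 at a core boundary.
NEIGHBOR_OFFSETS = [(1, 0), (1, 1), (0, 1), (-1, 0), (-1, -1), (0, -1)]  # assumed edge order


def neighbor_ranks(rank, rank_to_coord):
    coord_to_rank = {c: r for r, c in rank_to_coord.items()}
    i1, i2 = rank_to_coord[rank]
    return [coord_to_rank.get((i1 + d1, i2 + d2), -1) for d1, d2 in NEIGHBOR_OFFSETS]


# Toy layout with hypothetical coordinates (not the numbering of Figure 3):
layout = {0: (0, 0), 1: (1, 0), 2: (0, 1)}
print(neighbor_ranks(0, layout))   # -> [1, -1, 2, -1, -1, -1]
```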

2.3. MPI Communication Analysis of Hexagonal Core

As mentioned in Section 2.2, the communication between OpenMOC-HEX processes has a strong spatial structure: only adjacent assemblies need to exchange boundary angular flux data. As shown in Figure 5, for process 18 of the 3-ring 19-assembly core, only the processes with ranks 12 to 17 are valid communication neighbors. Even if the MPI processes are evenly distributed over the nodes through Intel MPI environment variable settings [14], the numbering strategy of the hexagonal assemblies and the random allocation of MPI processes to the multi-core computational nodes produce a large amount of unnecessary cross-node communication, which increases communication time and lowers parallel efficiency. Therefore, processes must be allocated to nodes more deliberately, especially for large-scale parallel computations.
With graph partitioning and explicit process placement, MPI processes that need to communicate should be placed on the same computational node as much as possible while keeping the load of all nodes balanced. Exchanging data within nodes rather than across nodes improves data transfer efficiency and reduces the communication time of large-scale parallel computing. Considering the assembly arrangement of hexagonal cores, the assembly layout is mapped to a graph, and the communication optimization is realized using the existing graph partitioning program METIS 5.0 [13].

2.4. Graph Partitioning Program METIS

In order to make full use of existing high-performance computing resources, the parallel development of many application codes pursues load balance among computing tasks and minimal communication between tasks. Such problems can be mapped to highly unstructured graph partitioning problems. To address this need, the Karypis Lab developed METIS 5.0, a powerful serial graph partitioning software package [13].
The METIS partitioning algorithms are mainly based on multilevel recursive bisection, multilevel k-way partitioning, and multi-constraint partitioning. Taking edge-cut minimization as the objective and load balancing as the constraint, METIS produces high-quality partitions: they are 10–50% better than those of spectral clustering, and METIS is one to two orders of magnitude faster than basic partitioning algorithms. A graph with millions of vertices can be partitioned into 256 parts in a few seconds [15]. METIS 5.0 therefore meets the needs of OpenMOC-HEX parallel communication optimization.
METIS multilevel partitioning proceeds as follows. The initial large graph is first coarsened into successively smaller graphs, transforming the large-scale partitioning problem into a small-scale one that can be handled effectively [13]. After the coarsest graph is partitioned, METIS projects the partition back through the levels step by step, refining and improving it at each level, and finally obtains a high-quality partition of the original graph [13].
METIS supports several graph formats. The basic one is the unweighted graph, in which the computational weight of each vertex and the communication weight of each edge default to 1; in this case METIS only needs, for each vertex, the list of vertices it communicates with. On this basis, computational weights for the vertices and communication weights for the edges can be introduced, and multiple balance constraints can be added.
Taking the 3-ring 19-assembly core as an example, the unweighted graph format is adopted, assuming an equal computational burden for each process and an equal amount of data exchanged between adjacent processes. The edge data are generated according to the communication relationships described in Section 2.2. As shown in Table 1, for the assembly with rank 0, each non-negative number in [12, −1, 11, −1, 1, −1] is the rank of the adjacent assembly on the corresponding edge, and −1 marks a core boundary. A Python script processes the edge data into the graph data required by METIS, also shown in Table 1; the extra first line indicates that the graph has 19 vertices and 42 edges.
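A minimal sketch of such a conversion script is shown below, using the edge data of Table 1. The file name is illustrative, and the script is an assumption about the workflow rather than the exact script used for OpenMOC-HEX.

```python
# Convert per-rank edge data (six neighbor ranks per assembly, -1 for a core
# boundary, 0-based as in OpenMOC-HEX) into the METIS graph-file format:
# a "n m" header line, then one line of 1-based neighbor indices per vertex.
EDGE_DATA = [  # index = MPI rank, 3-ring 19-assembly core (Table 1)
    [12, -1, 11, -1, 1, -1], [13, -1, 12, -1, 2, 0], [3, -1, 13, -1, -1, 1],
    [4, 2, 14, -1, -1, 13], [-1, 3, 5, -1, -1, 14], [-1, 14, 6, 4, -1, 15],
    [-1, 15, -1, 5, -1, 7], [-1, 16, -1, 15, 6, 8], [-1, 9, -1, 16, 7, -1],
    [8, 10, -1, 17, 16, -1], [9, -1, -1, 11, 17, -1], [17, -1, 10, 0, 12, -1],
    [18, 0, 17, 1, 13, 11], [14, 1, 18, 2, 3, 12], [5, 13, 15, 3, 4, 18],
    [6, 18, 7, 14, 5, 16], [7, 17, 8, 18, 15, 9], [16, 11, 9, 12, 18, 10],
    [15, 12, 16, 13, 14, 17],
]


def write_metis_graph(edge_data, path):
    """Write an unweighted METIS graph file: 'n m' header, 1-based neighbor lists."""
    n = len(edge_data)
    m = sum(1 for nbrs in edge_data for r in nbrs if r >= 0) // 2  # each edge stored twice
    with open(path, "w") as f:
        f.write(f"{n} {m}\n")
        for nbrs in edge_data:
            f.write(" ".join(str(r + 1) for r in nbrs if r >= 0) + "\n")


write_metis_graph(EDGE_DATA, "core_3ring.graph")  # header line: "19 42"
```

The resulting file can then be partitioned with METIS (for instance, gpmetis core_3ring.graph 2 for two nodes), and the part index assigned to each vertex gives the target node of the corresponding MPI rank.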

3. Results

3.1. Test Environment and the Benchmark

The tests use two supercomputing platforms. Platform 1 is the v6_384 queue of the T-Partition of the Beijing Super Cloud Computing Center (BSCC), using Intel Xeon Platinum 9242 CPUs @ 2.30 GHz (96 cores per node); Platform 2 is the t2 queue of the T-Partition of BSCC, using Genuine Intel CPU 0000% @ 2.20 GHz processors (48 cores per node). The specific configuration is shown in Table 2.
The test problem is a hexagonal core built from a repeated small hexagonal assembly. The benchmark contains two materials, fuel and moderator, whose macroscopic cross-sections are provided by the RMC program [16]. The core uses reflective boundary conditions on all boundaries and is composed of repeated copies of a single assembly. Taking the 2-ring 7-assembly core as an example, the core structure is shown in Figure 6a and consists of seven identical assemblies arranged in two rings. The structure of a single assembly is shown in Figure 6b: it is a 3-ring, 19-cell lattice containing grid elements of two sizes, large and small, which are also shown in Figure 6b.

3.2. Calculation

METIS is used to optimize the distribution of processes over multiple compute nodes based on graph partitioning. Taking the allocation of the 6-ring 91-assembly problem to six nodes as an example, the cross-node allocation of the assemblies (and their corresponding processes) before and after optimization is shown in Figure 7. Different colors indicate that the processes of the corresponding assemblies are allocated to different nodes; adjacent processes of the same color exchange data within a node, whereas adjacent processes of different colors exchange data across nodes.
It can be seen that the number of processes requiring cross-node data exchange is significantly reduced after optimization, and thus the volume of cross-node data exchange is also reduced.
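As an illustration of how a METIS partition can be turned into a rank-to-node assignment, the sketch below reads the partition file written by gpmetis (by convention named <graph file>.part.<number of parts>, one part index per vertex line) and groups the MPI ranks by target node. This post-processing step is an assumption about the workflow, not code from OpenMOC-HEX.

```python
from collections import defaultdict


def ranks_per_node(part_file: str) -> dict[int, list[int]]:
    """Group MPI ranks by the node (part index) METIS assigned to them."""
    groups: dict[int, list[int]] = defaultdict(list)
    with open(part_file) as f:
        for rank, line in enumerate(f):        # vertex order = MPI rank order
            groups[int(line.strip())].append(rank)
    return dict(groups)


# Example for the 6-ring 91-assembly case split over 6 nodes (file name assumed):
# groups = ranks_per_node("core_6ring.graph.part.6")
# for node, ranks in sorted(groups.items()):
#     print(f"node {node}: {len(ranks)} ranks -> {ranks}")
```

The resulting per-node rank lists can then be used to pin processes to nodes, for example through an Intel MPI machine file.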
The monitoring data from the Paramon toolkit, which BSCC uses to monitor the system-level, micro-architecture-level, and function-level performance of its clusters, also support this conclusion. Taking the 19-ring 1027-assembly problem on Platform 1 and the 18-ring 919-assembly problem on Platform 2 as examples, the average per-node communication volume over the InfiniBand network before optimization is 760.37 MB and 1148.14 MB, respectively, which is reduced by 88% to 90.58 MB and by 84% to 178.15 MB after optimization, as shown in Table 3 and Table 4.
Using Intel MPI, the binding of processes to computing nodes is implemented on Platform 1 (96 cores per node) and Platform 2 (48 cores per node), respectively [14]. Furthermore, the weak-scaling parallel efficiency of OpenMOC-HEX is analyzed. The results are shown in Table 5, Table 6, Figure 8, and Figure 9, in which:
Rings #: number of rings of assemblies constituting the core;
MPI #: number of MPI processes, equal to the number of hexagonal assemblies;
Nodes #: ceil(number of MPI processes / number of cores per node);
Cal time: total calculation time;
Comm time: angular flux data communication time;
Comm Speedup: (communication time before optimization) / (communication time after optimization);
MPI-UNPIN: before optimization, processes are not allocated to specified nodes;
MPI-PIN: after optimization, processes are allocated to specified nodes according to the graph partitioning result.
It can be seen from Figure 8 that on Platform 1 (96 cores per node) the communication speedup of MPI-PIN is positively correlated with the number of processes, whereas on Platform 2 (48 cores per node) the communication speedup of MPI-PIN is relatively small and does not change significantly as the number of processes increases. A preliminary explanation is as follows: because each node of Platform 2 has fewer cores, the number of nodes grows much faster with the number of processes than on Platform 1. With this rapid growth of inter-node communication, the optimization achievable by graph partitioning is limited.
The computational times before and after MPI-PIN optimization for the 6- to 19-ring examples measured on supercomputing Platform 1 (96 cores per node) are shown in Figure 9.
This paper mainly discusses MPI communication optimization, and it is assumed that there is no inter-node communication when the code runs on a single node. Therefore, the single-node computational time is taken as the ideal computational time. OpenMOC-HEX implements MPI parallelism through domain decomposition, and the number of MPI processes equals the number of hexagonal assemblies, so in the ideal weak-scaling case the computational time should remain unchanged as the number of MPI processes increases.
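As a concrete check using the Platform 1 values in Table 5, with the single-node 6-ring case (15.002 s) as the reference, the weak-scaling parallel efficiency of the 19-ring case is

η(before) = 15.002 / 18.237 ≈ 82.3%,  η(after) = 15.002 / 16.407 ≈ 91.4%,

which matches the efficiencies reported for this example.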
The test results show that the parallel efficiency after graph partitioning optimization is significantly higher than before: for the 19-ring 1027-assembly example, the parallel efficiency increases from 82.3% to 91.4%. Binding processes to nodes according to an optimized graph partition, which reduces cross-node data exchange under the premise of load balance, is therefore an effective optimization for large-scale parallel computing.

4. Discussion

In this study, with the help of a graph partitioning algorithm and the METIS library, the allocation of OpenMOC-HEX MPI processes across multi-core nodes is optimized to minimize inter-node communication under the constraint of load balance. For the 19-ring 1027-assembly benchmark, the inter-node communication volume is reduced by 88%, the communication time is reduced by 90%, and the parallel efficiency increases from 82% to 91.5% after optimization. This demonstrates that the optimization method proposed in this study is of great importance for improving the efficiency of a high-fidelity neutron transport solver.
Combining the data exchange volumes and the communication speedups shown in Table 4 and Figure 8, it can be observed that on Platform 2 (48 cores per node) the inter-node communication volume is significantly larger while the relative reduction is similar, yet the communication speedup is significantly lower than on Platform 1 (96 cores per node). We speculate that this is because the InfiniBand interconnect used in these supercomputers provides high bandwidth and low latency, so communication time grows more slowly than data volume, which benefits large-scale computing on clusters. The weaker communication optimization on Platform 2 is attributed to its older hardware configuration and higher latency.
This method has already achieved significant results for 2D computations. In the future, when computation capabilities for larger parallel scales and 3D hexagonal geometries are implemented, the use of graph partitioning algorithms will continue to be explored.
Since OpenMOC-HEX implements assembly-based domain decomposition, equal computation and communication weights are assumed here. The actual purpose of graph partitioning, however, is to achieve load balance and minimize communication, and its role is not limited to communication optimization. In future studies, the computation and communication weights should be tuned according to the actual designs of the different assemblies.
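For reference, a hedged sketch of how such weights could enter the METIS input is given below, following our reading of the METIS 5 manual format: the header gains a fmt field of 011 (vertex and edge weights present), each vertex line starts with its weight, and each neighbor index is followed by the weight of that edge. The weight functions and file name are placeholders rather than measured values, and EDGE_DATA refers to the adjacency lists sketched in Section 2.4.

```python
def write_weighted_metis_graph(edge_data, vertex_weight, edge_weight, path):
    """Write a METIS graph file with vertex and edge weights (fmt = 011)."""
    n = len(edge_data)
    m = sum(1 for nbrs in edge_data for r in nbrs if r >= 0) // 2
    with open(path, "w") as f:
        f.write(f"{n} {m} 011\n")
        for rank, nbrs in enumerate(edge_data):
            entries = [str(vertex_weight(rank))]          # per-assembly compute weight
            for r in nbrs:
                if r >= 0:
                    entries += [str(r + 1), str(edge_weight(rank, r))]  # neighbor, traffic weight
            f.write(" ".join(entries) + "\n")


# Placeholder weights: uniform compute load, uniform boundary-flux traffic.
# write_weighted_metis_graph(EDGE_DATA, lambda v: 1, lambda a, b: 1, "core_weighted.graph")
```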
Furthermore, assembly-based domain decomposition limits the parallel potential of the program. The next step is to further optimize the domain decomposition strategy to achieve strong scalability as far as possible. Constrained by the special geometric form of hexagonal cores, improving the strong scalability of OpenMOC-HEX will undoubtedly require a more in-depth study of graph partitioning methods to complete the optimization of load balance and minimum communication and to maximize the parallel efficiency.

Author Contributions

Conceptualization, J.Z. and W.W.; methodology, J.Z., Z.W. and W.W.; software, J.Z. and W.W.; validation, J.Z., Z.X. and Z.W.; investigation, J.Z., X.P. and W.W.; resources, C.Z. and W.W.; data curation, J.Z. and Z.X.; writing—original draft preparation, J.Z.; writing—review and editing, W.W., Z.W., C.Z., Z.X. and X.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Nuclear Power Innovation Center, grant number HDLCXZX-2021-HD-030.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank BSCC for its technical support in testing and monitoring programs on supercomputing platforms.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cho, J.Y.; Kim, K.S.; Shim, H.J.; Song, J.S.; Lee, C.C.; Joo, H.G. Whole Core Transport Calculation Employing Hexagonal Modular Ray Tracing and CMFD Formulation. J. Nucl. Sci. Technol. 2008, 45, 740–751.
  2. Chen, Q.; Si, S.; Zhao, J.; Bei, H. SONG-Development of Transport Modules. Hedongli Gongcheng/Nucl. Power Eng. 2014, 35, 127–130.
  3. Liu, Z.; Chen, J.; Li, S.; Wu, H.; Cao, L. Development and Verification of the Hexagonal Core Simulation Capability of the NECP-X Code. Ann. Nucl. Energy 2022, 179, 109388.
  4. Yang, W.; Hu, C.; Liu, T.; Wang, A.; Wu, M. Research Progress of China Virtual Reactor (CVR1.0). Yuanzineng Kexue Jishu/At. Energy Sci. Technol. 2019, 53, 1821–1832.
  5. Boyd, W.; Shaner, S.; Li, L.; Forget, B.; Smith, K. The OpenMOC Method of Characteristics Neutral Particle Transport Code. Ann. Nucl. Energy 2014, 68, 43–52.
  6. Wu, W.; Wang, Z.; Zheng, J.; Xie, Z.; Zhao, C.; Peng, X. Hexagonal Method of Characteristics with Area Decomposition in Parallel. J. Harbin Eng. Univ. 2022, 43, 1–5.
  7. Farhat, C. A Simple and Efficient Automatic FEM Domain Decomposer. Comput. Struct. 1988, 28, 579–602.
  8. Elsner, U. Graph Partitioning—A Survey. Encycl. Parallel Comput. 1999, 97, 27.
  9. Yao, Y.F.; Richards, B.E. Parallel CFD Computation on Unstructured Grids. In Parallel Computational Fluid Dynamics 1997; Emerson, D.R., Periaux, J., Ecer, A., Satofuka, N., Fox, P., Eds.; North-Holland: Amsterdam, The Netherlands, 1998; pp. 289–296. ISBN 978-0-444-82849-1.
  10. Fitzgerald, A.P. Spatial Decomposition of Structured Grids for Nuclear Reactor Simulations. Ann. Nucl. Energy 2019, 132, 686–701.
  11. Zhao, C.; Peng, X.; Zhang, H.; Zhao, W.; Li, Q.; Chen, Z. Analysis and Comparison of the 2D/1D and Quasi-3D Methods with the Direct Transport Code SHARK. Nucl. Eng. Technol. 2022, 54, 19–29.
  12. Zhao, C.; Peng, X.; Zhao, W.; Feng, J.; Zhao, Y.; Zhang, H.; Wang, B.; Chen, Z.; Gong, Z.; Li, Q. Verification of the Direct Transport Code SHARK with the JRR-3M Macro Benchmark. Ann. Nucl. Energy 2022, 177, 109294.
  13. Karypis, G.; Kumar, V. METIS: A Software Package for Partitioning Unstructured Graphs, Partitioning Meshes, and Computing Fill-Reducing Orderings of Sparse Matrices. Comput. Sci. Eng. 1997. Available online: https://hdl.handle.net/11299/215346 (accessed on 3 February 2023).
  14. Intel MPI Library Developer Reference for Linux OS. 26 August 2022. Available online: https://www.intel.com/content/www/us/en/content-details/740630/intel-mpi-library-developer-reference-for-linux-os.html (accessed on 3 February 2023).
  15. Karypis, G.; Kumar, V. Multilevel k-Way Partitioning Scheme for Irregular Graphs. J. Parallel Distrib. Comput. 1998, 48, 96–129.
  16. Kan, W.; Li, Z.; Ding, S.; Liu, Y.; Yu, G. Progress on RMC: A Monte Carlo Neutron Transport Code for Reactor Analysis. In Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, Rio de Janeiro, Brazil, 8–12 May 2011.
Figure 1. Hexagonal grid geometry processing.
Figure 2. The track laydown and the central assembly's six neighboring assemblies.
Figure 3. MPI process numbering of a 3-ring 19-assembly hexagonal core.
Figure 4. The (i1, i2) coordinate method used to calculate the process numbers of the neighboring assemblies: (a) (i1, i2) coordinates; (b) neighbor coordinates in the (i1, i2) system.
Figure 5. Communication task diagram of rank 18 in a 3-ring 19-assembly core example.
Figure 6. (a) Core of the benchmark; (b) grid elements and assembly.
Figure 7. The cross-node allocation diagram of processes and their corresponding assemblies before and after optimization: (a) before optimization; (b) after optimization.
Figure 8. Speedup of communication time.
Figure 9. Computational time before and after optimization on Platform 1 (96 cores per node).
Table 1. The edge file and graph file of the 3-ring 19-assembly example.
Rank #    Edge Data                     Graph Data
                                        19 42
0         [12, −1, 11, −1, 1, −1]       13 12 2
1         [13, −1, 12, −1, 2, 0]        14 13 3 1
2         [3, −1, 13, −1, −1, 1]        4 14 2
3         [4, 2, 14, −1, −1, 13]        5 3 15 14
4         [−1, 3, 5, −1, −1, 14]        4 6 15
5         [−1, 14, 6, 4, −1, 15]        15 7 5 16
6         [−1, 15, −1, 5, −1, 7]        16 6 8
7         [−1, 16, −1, 15, 6, 8]        17 16 7 9
8         [−1, 9, −1, 16, 7, −1]        10 17 8
9         [8, 10, −1, 17, 16, −1]       9 11 18 17
10        [9, −1, −1, 11, 17, −1]       10 12 18
11        [17, −1, 10, 0, 12, −1]       18 11 1 13
12        [18, 0, 17, 1, 13, 11]        19 1 18 2 14 12
13        [14, 1, 18, 2, 3, 12]         15 2 19 3 4 13
14        [5, 13, 15, 3, 4, 18]         6 14 16 4 5 19
15        [6, 18, 7, 14, 5, 16]         7 19 8 15 6 17
16        [7, 17, 8, 18, 15, 9]         8 18 9 19 16 10
17        [16, 11, 9, 12, 18, 10]       17 12 10 13 19 11
18        [15, 12, 16, 13, 14, 17]      16 13 17 14 15 18
The numbering starts from 0 by default in OpenMOC-HEX but from 1 in METIS. For rank 0, the number 12 in the edge data is the rank of the adjacent assembly directly below, and the corresponding number in the graph data is 13; the value −1 marks a core boundary and is not required by METIS.
Table 2. The specific configuration of the supercomputing platforms.
Test Environment    Platform 1 (v6_384 queue of T-Partition of BSCC)            Platform 2 (t2 queue of T-Partition of BSCC)
CPU                 Intel Xeon Platinum 9242 @ 2.30 GHz (96 cores per node)     Genuine Intel CPU 0000% @ 2.20 GHz (48 cores per node)
Memory              376 GB                                                      187 GB
System              Linux kernel 3.10.0-1127.18.2.el7.x86_64                    Linux kernel 3.10.0-1127.18.2.el7.x86_64
Compiler            ICC 17.0.5                                                  ICC 17.0.5
MPI version         Intel MPI 2017                                              Intel MPI 2017
Table 3. The Paramon monitoring info of data exchange across nodes of 19-ring on Platform 1.
Before Optimization                                After Optimization
Node Name   IB Send (MB)   IB Recv (MB)            Node Name   IB Send (MB)   IB Recv (MB)
cb1205      758.46         763.06                  cb1203      112.09         115.96
cb1203      758.58         763.65                  cb1108      107.10         112.56
cb1108      753.89         757.90                  cb1106      122.12         128.02
cb1106      754.49         757.80                  cb1103      79.07          81.98
cb1103      756.74         761.94                  cb1101      68.41          71.47
cb1101      758.32         769.29                  cb1007      78.69          81.10
cb1007      757.23         758.80                  cb1003      70.19          73.22
cb1003      766.00         770.49                  cb1001      77.59          82.68
cb1001      769.05         774.12                  cb0907      124.80         128.97
cb0807      770.95         775.51                  cb0807      65.76          70.15
Table 4. The Paramon monitoring info of data exchange across nodes of 18-ring on Platform 2.
Before Optimization                                After Optimization
Node Name   IB Send (MB)   IB Recv (MB)            Node Name   IB Send (MB)   IB Recv (MB)
cd0511      1121.69        1132.32                 cd0511      259.36         274.02
cd0510      1160.05        1176.01                 cd0510      210.98         232.36
cd0509      1160.61        1172.60                 cd0509      200.61         219.88
cd0508      1148.82        1158.95                 cd0508      238.68         255.61
cd0507      1157.81        1168.56                 cd0507      186.19         201.42
cd0506      1149.84        1164.02                 cd0506      193.93         207.14
cd0505      1147.87        1155.77                 cd0505      231.25         243.40
cd0415      1140.68        1150.67                 cd0415      169.61         190.58
cd0414      1145.60        1156.05                 cd0414      128.65         151.51
cd0413      1154.64        1164.28                 cd0413      165.99         187.12
cd0412      1148.46        1158.73                 cd0412      151.08         163.02
cd0411      1146.83        1160.91                 cd0411      161.74         180.15
cd0410      1148.74        1159.48                 cd0410      138.20         149.41
cd0408      1153.17        1167.11                 cd0408      152.14         168.90
cd0407      1150.50        1166.40                 cd0407      117.27         139.07
cd0405      1150.15        1164.01                 cd0405      187.36         201.21
cd0404      1153.88        1166.70                 cd0404      118.19         132.46
cd0403      1146.97        1154.43                 cd0403      177.83         188.47
cd0402      1139.48        1150.77                 cd0402      244.91         256.90
cd0401      1137.00        1156.12                 cd0401      129.05         152.29
Platform 2 was built earlier and had only 20 available nodes (960 cores in total), so the testing on Platform 2 was only conducted up to the 18-ring 919-assembly case.
Table 5. Optimization performance on Platform 1 (96 cores per node).
Rings #   MPI #   Nodes #   Before Optimization            After Optimization             Comm Speedup
                            Cal Time (s)   Comm Time (s)   Cal Time (s)   Comm Time (s)
6         91      1         15.002         0.1442          /              /               /
7         127     2         14.264         0.5215          13.691         0.1098          4.75
8         169     2         15.746         0.6687          15.52          0.1329          5.03
9         217     3         15.021         0.6778          15.008         0.1183          5.73
10        271     3         16.836         0.8623          15.329         0.1391          6.2
11        331     4         16.165         0.8924          15.777         0.1275          7
12        397     5         16.343         0.8901          15.024         0.1302          6.84
13        469     5         16.851         1.0596          16.268         0.1408          7.53
14        547     6         16.335         1.0579          15.547         0.1484          7.13
15        631     7         17.594         1.1152          16.076         0.1505          7.41
16        721     8         17.134         1.7117          15.807         0.1609          10.64
17        817     9         17.036         1.7211          16.669         0.1783          9.65
18        919     10        17.849         1.7989          16.609         0.1629          11.04
19        1027    11        18.237         1.8685          16.407         0.1808          10.34
Parallel efficiency of the 19-ring core example: 0.823 (before) / 0.914 (after)
Table 6. Optimization performance on Platform 2 (48 cores per node).
Rings #   MPI #   Nodes #   Before Optimization            After Optimization             Comm Speedup
                            Cal Time (s)   Comm Time (s)   Cal Time (s)   Comm Time (s)
4         37      1         13.339         0.0847          /              /               /
5         61      2         13.617         0.1975          12.691         0.0718          2.75
6         91      2         13.685         0.3603          13.388         0.1272          2.83
7         127     3         13.706         0.3862          13.389         0.0905          4.27
8         169     4         14.37          0.4498          13.395         0.0946          4.75
9         217     5         13.726         0.4847          13.495         0.094           5.16
10        271     6         14.233         0.524           13.795         0.1496          3.5
11        331     7         14.459         0.5713          13.667         0.1105          5.17
12        397     9         13.798         0.5385          13.641         0.162           3.32
13        469     10        14.772         0.7997          14.001         0.2334          3.43
14        547     12        14.371         0.753           13.848         0.2078          3.62
15        631     14        14.188         0.6245          13.692         0.1499          4.17
16        721     16        14.598         0.6917          14.236         0.1896          3.65
17        817     18        14.142         0.687           13.894         0.2084          3.3
18        919     20        14.473         0.846           13.746         0.1804          4.69
Parallel efficiency of the 18-ring core example: 0.922 (before) / 0.97 (after)
Platform 2 was built earlier and had only 20 available nodes (960 cores in total), so the testing on Platform 2 was only conducted up to the 18-ring 919-assembly case.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
