High-Performance Computing Algorithms and Their Applications 2021

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (30 November 2021)

Special Issue Editor


Dr. Julian Kunkel
Guest Editor
Department of Computer Science, Georg-August-Universität Göttingen/GWDG, 37077 Göttingen, Germany
Interests: high-performance computing; storage; application of machine learning methods

Special Issue Information

Dear Colleagues,

Scalable parallel applications are an enabler of research and development in science and industry. For example, they allow us to predict the impact of diseases, weather, or earthquakes on society, and they enable the discovery of products such as drugs or the design of constructions like a fusion reactor. The underlying methods and algorithms evolve with the demands of the applications and with the capabilities of the hardware and software environments that execute them. Revolutionary changes to the application landscape are on the horizon: machine learning and quantum computing are being integrated into traditional HPC, and large-scale workflows span from a single data center to the cloud, fog, and edge. At the same time, numerical algorithms, e.g., for particle- and mesh-based simulations, continue to evolve.

This Special Issue aims to provide an overview of the research frontiers of parallel algorithms and their application in science and industry. We invite you to submit high-quality papers to this Special Issue, “High-Performance Computing Algorithms and Their Applications”, covering the whole spectrum of application domains: astronomy, biosciences, digital humanities, geoinformatics, ... A submitted manuscript should describe the application domain, the principles of the algorithms that address the problem, and the state of the art in the field. The following is a (non-exhaustive) list of algorithms of interest:

  • Particle and mesh-based simulations
  • Machine learning methods (in HPC)
  • Quantum algorithms
  • Applied numerical methods
  • Spectral methods
  • Algebra problems
  • Monte Carlo methods

Dr. Julian Kunkel
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Particle and mesh-based simulations
  • Machine learning methods (in HPC)
  • Quantum algorithms
  • Applied numerical methods
  • Spectral methods
  • Algebra problems
  • Monte Carlo methods

Published Papers (2 papers)

Research

17 pages, 985 KiB  
Article
Efficient and Scalable Initialization of Partitioned Coupled Simulations with preCICE
by Amin Totounferoush, Frédéric Simonis, Benjamin Uekermann and Miriam Schulte
Algorithms 2021, 14(6), 166; https://doi.org/10.3390/a14060166 - 27 May 2021
Cited by 3
Abstract
preCICE is an open-source library that provides comprehensive functionality for coupling independent parallelized solver codes to establish a partitioned multi-physics, multi-code simulation environment. For data communication between the respective executables at runtime, it implements a peer-to-peer concept, which renders the computational cost of the coupling per time step negligible compared to the typical run time of the coupled codes. To initialize the peer-to-peer coupling, the mesh partitions of the respective solvers need to be compared to determine the point-to-point communication channels between the processes of both codes. This initialization effort can become a limiting factor if we either reach memory limits or have to re-initialize communication relations in every time step. In this contribution, we remove two remaining bottlenecks: (i) we base the neighborhood search between mesh entities of two solvers on a tree data structure to avoid quadratic complexity, and (ii) we replace the sequential gather-scatter comparison of both mesh partitions by a two-level approach that first compares bounding boxes around mesh partitions in a sequential manner, subsequently establishes pairwise communication between processes of the two solvers, and finally compares mesh partitions between connected processes in parallel. We show that the two-level initialization method is five times faster than the old one-level scheme on 24,567 CPU cores using a mesh with 628,898 vertices. In addition, the two-level scheme is able to handle much larger computational meshes, since the central mesh communication of the one-level scheme is replaced with a fully point-to-point mesh communication scheme.
(This article belongs to the Special Issue High-Performance Computing Algorithms and Their Applications 2021)
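To make the two-level idea concrete, here is a minimal Python sketch of the bounding-box filtering strategy described in the abstract. It is an illustration under simplifying assumptions (exact vertex matching, a serial loop in place of the parallel per-rank comparison), not preCICE's actual implementation; all function names are hypothetical.

```python
from itertools import product

def bounding_box(vertices):
    """Axis-aligned bounding box (mins, maxs) of a partition's 3D vertices."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def boxes_overlap(box_a, box_b, tol=0.0):
    """True if two axis-aligned boxes overlap, allowing a small tolerance."""
    (amin, amax), (bmin, bmax) = box_a, box_b
    return all(amin[d] - tol <= bmax[d] and bmin[d] - tol <= amax[d]
               for d in range(3))

def two_level_matching(partitions_a, partitions_b, tol=1e-9):
    """Level 1: cheap bounding-box filter over all rank pairs.
    Level 2: detailed vertex comparison only for the surviving pairs
    (done in parallel per connected rank pair in the real scheme)."""
    boxes_a = {rank: bounding_box(verts) for rank, verts in partitions_a.items()}
    boxes_b = {rank: bounding_box(verts) for rank, verts in partitions_b.items()}

    # Level 1: candidate communication channels between ranks of the two solvers.
    candidates = [(ra, rb) for ra, rb in product(boxes_a, boxes_b)
                  if boxes_overlap(boxes_a[ra], boxes_b[rb], tol)]

    # Level 2: per-pair vertex matching (exact coordinate match is a simplification).
    channels = {}
    for ra, rb in candidates:
        shared = set(partitions_a[ra]) & set(partitions_b[rb])
        if shared:
            channels[(ra, rb)] = shared
    return channels
```

Each partition is given as a dict mapping a rank to its list of (x, y, z) vertex tuples; the result maps connected rank pairs to their shared vertices, i.e., the point-to-point channels that would be established.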

15 pages, 731 KiB  
Article
Parallel Delay Multiply and Sum Algorithm for Microwave Medical Imaging Using Spark Big Data Framework
by Rahmat Ullah and Tughrul Arslan
Algorithms 2021, 14(5), 157; https://doi.org/10.3390/a14050157 - 18 May 2021
Cited by 10
Abstract
Microwave imaging systems are currently being investigated for breast cancer, brain stroke, and neurodegenerative disease detection due to their low-cost, portable, and wearable nature. At present, commonly used radar-based algorithms for microwave imaging are based on the delay and sum algorithm. These algorithms use ultra-wideband signals to reconstruct a 2D image of the targeted object or region. Delay multiply and sum is an extended version of the delay and sum algorithm; however, it is computationally expensive and time-consuming. In this paper, the delay multiply and sum algorithm is parallelised using a big data framework. The algorithm uses the Spark MapReduce programming model to improve its efficiency. The most computationally expensive part of the algorithm is the pixel value calculation, where signals need to be multiplied in pairs and summed. The proposed algorithm broadcasts the input data and executes the computation in parallel in a distributed manner. The Spark-based parallel algorithm is compared with a sequential implementation and one based on the Python multiprocessing library. The experimental results on both a standalone machine and a high-performance cluster show that Spark significantly accelerates the image reconstruction process without affecting its accuracy.
(This article belongs to the Special Issue High-Performance Computing Algorithms and Their Applications 2021)
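As a rough illustration of the parallelisation strategy described above, the following PySpark sketch broadcasts the recorded channel signals and computes a per-pixel delay-multiply-and-sum value across the pixel grid in parallel. The function names and the simplified pairwise multiply-and-sum (dot products of integer-shifted signals) are assumptions for illustration, not the authors' implementation.

```python
from itertools import combinations
import numpy as np
from pyspark.sql import SparkSession

def dmas_pixel(pixel, signals, delay_of):
    """Shift every channel's signal towards this pixel, then sum the
    products of all channel pairs (simplified DMAS pixel value)."""
    delayed = [np.roll(sig, delay_of(pixel, ch)) for ch, sig in enumerate(signals)]
    return sum(float(np.dot(a, b)) for a, b in combinations(delayed, 2))

def reconstruct(pixels, signals, delay_of):
    """Distribute the pixel grid over the cluster; the signals are broadcast once."""
    spark = SparkSession.builder.appName("dmas-sketch").getOrCreate()
    sc = spark.sparkContext
    signals_bc = sc.broadcast(signals)          # ship the raw channel data to all workers
    image = (sc.parallelize(pixels)             # one task set over the pixel grid
               .map(lambda p: (p, dmas_pixel(p, signals_bc.value, delay_of)))
               .collect())
    spark.stop()
    return dict(image)
```

Here `pixels` is any iterable of pixel identifiers and `delay_of(pixel, channel)` is a hypothetical helper returning the integer sample shift for that channel at that pixel; in a real system both would be derived from the antenna geometry.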
