Design Space Exploration and Resource Management of Multi/Many-Core Systems

A special issue of Journal of Low Power Electronics and Applications (ISSN 2079-9268).

Deadline for manuscript submissions: closed (30 September 2020) | Viewed by 28815

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Dr. Amit Kumar Singh
Guest Editor
School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, UK
Interests: embedded systems; MPSoC; NoC; design space exploration; run-time mapping

Dr. Amlan Ganguly
Guest Editor
Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY 14623, USA
Interests: interconnection network; Network-on-Chip (NoC); multi-chip system integration; wireless interconnects; data center networks

Special Issue Information

Dear Colleagues,

Computing systems of various scales, from embedded to cloud computing, increasingly rely on multi/many-core chips, mainly to satisfy the high performance requirements of complex software applications. These systems are typically referred to as multi/many-core systems. At the same time, depending upon the application domain, these systems also demand energy efficiency, reliability and/or security. These demands can be fulfilled by exploring the design space spanned by the software applications and the multi/many-core chips to find design points that are efficient in all of the metrics required by the application domain.

Further, considering that system workloads vary over time, efficient resource management methodologies need to be developed to meet the requirements of performance, energy efficiency, reliability and/or security. These methodologies can make resource management decisions online without any prior analysis, or they can exploit an offline-explored design space to take efficient run-time decisions and, thus, achieve efficient management.
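
As a rough illustration of the second approach (not part of the call itself), the following Python sketch shows a run-time manager choosing, from a set of operating points produced by an offline design space exploration, the lowest-power point that still meets the current performance demand; all names and figures are hypothetical.

    # Illustrative sketch: run-time selection over offline-explored design points.
    from dataclasses import dataclass

    @dataclass
    class DesignPoint:
        cores: int          # number of cores allocated
        freq_mhz: int       # operating frequency
        perf: float         # predicted throughput from offline DSE (hypothetical)
        power_w: float      # predicted power from offline DSE (hypothetical)

    # Pareto-optimal points produced by an offline design space exploration.
    OFFLINE_POINTS = [
        DesignPoint(cores=1, freq_mhz=800,  perf=12.0, power_w=0.9),
        DesignPoint(cores=2, freq_mhz=1200, perf=25.0, power_w=2.1),
        DesignPoint(cores=4, freq_mhz=1400, perf=41.0, power_w=4.6),
    ]

    def select_operating_point(required_perf: float) -> DesignPoint:
        """Pick the lowest-power offline point that meets the current requirement."""
        feasible = [p for p in OFFLINE_POINTS if p.perf >= required_perf]
        if not feasible:  # requirement cannot be met: fall back to the fastest point
            return max(OFFLINE_POINTS, key=lambda p: p.perf)
        return min(feasible, key=lambda p: p.power_w)

    if __name__ == "__main__":
        # Workload changes over time; the run-time manager re-selects accordingly.
        for demand in (10.0, 30.0, 50.0):
            p = select_operating_point(demand)
            print(f"demand={demand:>5} -> {p.cores} cores @ {p.freq_mhz} MHz, {p.power_w} W")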

Authors are invited to submit regular papers following the JLPEA submission guidelines, within the remit of this Special Issue call. Topics include but are not limited to:

  • Design space exploration (DSE) techniques for multi/many-core systems;
  • DSE considering optimisation for one or more metrics, such as accuracy, performance, energy consumption, reliability and security;
  • Resource management considering various principles, e.g., machine learning and heuristics;
  • Adaptive and hierarchical resource management;
  • Approximate computing to achieve trade-offs for various metrics.

Dr. Amit Kumar Singh
Dr. Amlan Ganguly
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Low Power Electronics and Applications is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multicore and many-core
  • Design space exploration
  • Resource management
  • Security
  • Reliability
  • Energy efficiency
  • Machine learning
  • Approximate computing

Published Papers (8 papers)


Research

37 pages, 4879 KiB  
Article
Reliability-Aware Resource Management in Multi-/Many-Core Systems: A Perspective Paper
by Siva Satyendra Sahoo, Behnaz Ranjbar and Akash Kumar
J. Low Power Electron. Appl. 2021, 11(1), 7; https://doi.org/10.3390/jlpea11010007 - 25 Jan 2021
Cited by 14 | Viewed by 3925
Abstract
With the advancement of technology scaling, multi/many-core platforms are getting more attention in embedded systems due to the ever-increasing performance requirements and power efficiency. This feature size scaling, along with architectural innovations, has dramatically exacerbated the rate of manufacturing defects and physical fault-rates. As a result, in addition to providing high parallelism, such hardware platforms have introduced increasing unreliability into the system. Such systems need to be well designed to ensure long-term and application-specific reliability, especially in mixed-criticality systems, where incorrect execution of applications may cause catastrophic consequences. However, the optimal allocation of applications/tasks on multi/many-core platforms is an increasingly complex problem. Therefore, reliability-aware resource management is crucial while ensuring the application-specific Quality-of-Service (QoS) requirements and optimizing other system-level performance goals. This article presents a survey of recent works that focus on reliability-aware resource management in multi-/many-core systems. We first present an overview of reliability in electronic systems, associated fault models and the various system models used in related research. Then, we present recent published articles primarily focusing on aspects such as application-specific reliability optimization, mixed-criticality awareness, and hardware resource heterogeneity. To underscore the techniques’ differences, we classify them based on the design space exploration. In the end, we briefly discuss the upcoming trends and open challenges within the domain of reliability-aware resource management for future research. Full article
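
The following toy example (not a method from the survey) illustrates the basic trade-off it covers: picking the lowest-energy redundancy/mapping option that still meets an application-specific reliability target; all options and numbers are invented.

    # Hedged illustration of reliability-vs-energy selection (hypothetical values).
    from dataclasses import dataclass

    @dataclass
    class MappingOption:
        name: str
        energy_mj: float        # estimated energy per job
        success_prob: float     # probability of fault-free execution

    OPTIONS = [
        MappingOption("no-redundancy",  energy_mj=10.0, success_prob=0.95),
        MappingOption("dual-modular",   energy_mj=19.0, success_prob=0.995),
        MappingOption("triple-modular", energy_mj=28.0, success_prob=0.9999),
    ]

    def pick_mapping(reliability_target: float) -> MappingOption:
        """Lowest-energy option whose reliability meets the criticality-dependent target."""
        feasible = [m for m in OPTIONS if m.success_prob >= reliability_target]
        return min(feasible, key=lambda m: m.energy_mj) if feasible else \
               max(OPTIONS, key=lambda m: m.success_prob)

    print(pick_mapping(0.99).name)    # -> dual-modular
    print(pick_mapping(0.9995).name)  # -> triple-modular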

37 pages, 615 KiB  
Article
Hybrid Application Mapping for Composable Many-Core Systems: Overview and Future Perspective
by Behnaz Pourmohseni, Michael Glaß, Jörg Henkel, Heba Khdr, Martin Rapp, Valentina Richthammer, Tobias Schwarzer, Fedor Smirnov, Jan Spieck, Jürgen Teich, Andreas Weichslgartner and Stefan Wildermann
J. Low Power Electron. Appl. 2020, 10(4), 38; https://doi.org/10.3390/jlpea10040038 - 17 Nov 2020
Cited by 9 | Viewed by 3821
Abstract
Many-core platforms are rapidly expanding in various embedded areas as they provide the scalable computational power required to meet the ever-growing performance demands of embedded applications and systems. However, the huge design space of possible task mappings, the unpredictable workload dynamism, and the numerous non-functional requirements of applications in terms of timing, reliability, safety, and so forth impose significant challenges when designing many-core systems. Hybrid Application Mapping (HAM) is an emerging class of design methodologies for many-core systems which address these challenges via an incremental (per-application) mapping scheme: The mapping process is divided into (i) a design-time Design Space Exploration (DSE) step per application to obtain a set of high-quality mapping options and (ii) a run-time system management step in which applications are launched dynamically (on demand) using the precomputed mappings. This paper provides an overview of HAM and the design methodologies developed in line with it. We introduce the basics of HAM and elaborate on the way it addresses the major challenges of application mapping in many-core systems. We provide an overview of the main challenges encountered when employing HAM and survey a collection of state-of-the-art techniques and methodologies proposed to address these challenges. We finally present an overview of open topics and challenges in HAM, provide a summary of emerging trends for addressing them particularly using machine learning, and outline possible future directions. While there exists a large body of HAM methodologies, the techniques studied in this paper are developed, to a large extent, within the scope of invasive computing. Invasive computing introduces resource awareness into applications and employs explicit resource reservation to enable incremental application mapping and dynamic system management. Full article
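
A minimal sketch of the HAM idea summarised above, assuming a simple core-count resource model rather than the invasive-computing implementation: design-time DSE yields a set of operating points per application, and the run-time manager admits each application with the best precomputed mapping that fits the currently free resources. Application names, core counts and quality scores are illustrative.

    # Per-application operating points from design-time DSE: (cores needed, quality score)
    PRECOMPUTED = {
        "app_video": [(8, 0.95), (4, 0.80), (2, 0.60)],
        "app_radar": [(6, 0.90), (3, 0.70)],
    }

    class RuntimeManager:
        def __init__(self, total_cores: int):
            self.free_cores = total_cores
            self.reservations = {}

        def launch(self, app: str) -> bool:
            """Admit `app` with the highest-quality precomputed mapping that fits."""
            for cores, quality in sorted(PRECOMPUTED[app], key=lambda p: -p[1]):
                if cores <= self.free_cores:
                    self.free_cores -= cores          # explicit resource reservation
                    self.reservations[app] = (cores, quality)
                    return True
            return False                              # reject: no feasible mapping right now

        def terminate(self, app: str):
            cores, _ = self.reservations.pop(app)
            self.free_cores += cores

    rm = RuntimeManager(total_cores=10)
    print(rm.launch("app_video"), rm.launch("app_radar"))  # True False: radar waits for resources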

15 pages, 891 KiB  
Article
Framework for Design Exploration and Performance Analysis of RF-NoC Manycore Architecture
by Habiba Lahdhiri, Jordane Lorandel, Salvatore Monteleone, Emmanuelle Bourdel and Maurizio Palesi
J. Low Power Electron. Appl. 2020, 10(4), 37; https://doi.org/10.3390/jlpea10040037 - 03 Nov 2020
Cited by 6 | Viewed by 2971
Abstract
The Network-on-chip (NoC) paradigm has been proposed as a promising solution to enable the handling of a high degree of integration in multi-/many-core architectures. Despite their advantages, wired NoC infrastructures are facing several performance issues regarding multi-hop long-distance communications. RF-NoC is an attractive solution offering high performance and multicast/broadcast capabilities. However, managing RF links is a critical aspect that relies on both application-dependent and architectural parameters. This paper proposes a design space exploration framework for OFDMA-based RF-NoC architecture, which takes advantage of both real application benchmarks simulated using Sniper and RF-NoC architecture modeled using Noxim. We adopted the proposed framework to finely configure a routing algorithm, working with real traffic, achieving up to 45% delay reduction compared to a wired NoC setup under similar conditions. Full article
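
The sketch below gives a rough flavour of such a design space exploration loop, with the Sniper/Noxim simulation step replaced by a stub cost model; the parameters, metrics and numbers are invented and do not reflect the actual framework.

    # Toy DSE sweep over RF-NoC routing parameters (simulator replaced by a stub).
    import itertools

    def simulate(threshold_hops: int, rf_channels: int) -> float:
        """Stand-in for a Sniper+Noxim run returning average packet delay (cycles)."""
        wired_penalty = 2.0 * threshold_hops                    # long flows kept on the wired mesh
        rf_contention = 12.0 / (threshold_hops * rf_channels)   # traffic competing for RF slots
        return 30.0 + wired_penalty + rf_contention

    def explore():
        best = None
        for hops, ch in itertools.product(range(2, 8), range(1, 5)):
            delay = simulate(hops, ch)
            if best is None or delay < best[0]:
                best = (delay, hops, ch)
        return best

    delay, hops, ch = explore()
    print(f"best config: route via RF beyond {hops} hops, {ch} channels -> {delay:.1f} cycles")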

28 pages, 1153 KiB  
Article
Intra- and Inter-Server Smart Task Scheduling for Profit and Energy Optimization of HPC Data Centers
by Sayed Ashraf Mamun, Alexander Gilday, Amit Kumar Singh, Amlan Ganguly, Geoff V. Merrett, Xiaohang Wang and Bashir M. Al-Hashimi
J. Low Power Electron. Appl. 2020, 10(4), 32; https://doi.org/10.3390/jlpea10040032 - 14 Oct 2020
Cited by 2 | Viewed by 3160
Abstract
Servers in a data center are underutilized due to over-provisioning, which contributes heavily toward the high-power consumption of the data centers. Recent research in optimizing the energy consumption of High Performance Computing (HPC) data centers mostly focuses on consolidation of Virtual Machines (VMs) and using dynamic voltage and frequency scaling (DVFS). These approaches are inherently hardware-based, are frequently unique to individual systems, and often use simulation due to lack of access to HPC data centers. Other approaches require profiling information on the jobs in the HPC system to be available before run-time. In this paper, we propose a reinforcement learning based approach, which jointly optimizes profit and energy in the allocation of jobs to available resources, without the need for such prior information. The approach is implemented in a software scheduler used to allocate real applications from the Princeton Application Repository for Shared-Memory Computers (PARSEC) benchmark suite to a number of hardware nodes realized with Odroid-XU3 boards. Experiments show that the proposed approach increases the profit earned by 40% while simultaneously reducing energy consumption by 20% when compared to a heuristic-based approach. We also present a network-aware server consolidation algorithm called Bandwidth-Constrained Consolidation (BCC) for HPC data centers, which can address the under-utilization problem of the servers. Our experiments show that the BCC consolidation technique can reduce the power consumption of a data center by up to 37%. Full article
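
As a hedged illustration of the reinforcement-learning idea (not the authors' scheduler), the sketch below uses a simple epsilon-greedy bandit that learns which node class to assign each job type to, with a reward standing in for measured profit minus an energy cost; node names and reward values are made up.

    # Epsilon-greedy job-to-node allocation with a profit-minus-energy reward (illustrative).
    import random
    from collections import defaultdict

    ACTIONS = ["big_node", "little_node"]      # candidate resources for a job
    q = defaultdict(float)                     # learned value of (job_type, action)
    ALPHA, EPSILON = 0.2, 0.1

    def reward(job_type: str, action: str) -> float:
        """Stub for measured profit minus an energy cost after running the job."""
        table = {("latency_critical", "big_node"): 5.0, ("latency_critical", "little_node"): 1.0,
                 ("batch", "big_node"): 2.0, ("batch", "little_node"): 4.0}
        return table[(job_type, action)] + random.uniform(-0.5, 0.5)

    def schedule(job_type: str) -> str:
        if random.random() < EPSILON:                       # explore
            action = random.choice(ACTIONS)
        else:                                               # exploit learned values
            action = max(ACTIONS, key=lambda a: q[(job_type, a)])
        r = reward(job_type, action)
        q[(job_type, action)] += ALPHA * (r - q[(job_type, action)])   # bandit-style update
        return action

    for _ in range(500):
        schedule(random.choice(["latency_critical", "batch"]))
    print({k: round(v, 2) for k, v in q.items()})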

15 pages, 674 KiB  
Article
PkMin: Peak Power Minimization for Multi-Threaded Many-Core Applications
by Arka Maity, Anuj Pathania and Tulika Mitra
J. Low Power Electron. Appl. 2020, 10(4), 31; https://doi.org/10.3390/jlpea10040031 - 30 Sep 2020
Cited by 2 | Viewed by 2546
Abstract
Multiple multi-threaded tasks constitute a modern many-core application. An accompanying generic Directed Acyclic Graph (DAG) represents the execution precedence relationship between the tasks. The application comes with a hard deadline and high peak power consumption. Parallel execution of multiple tasks on multiple cores results in a quicker execution, but higher peak power. Peak power single-handedly determines the involved cooling costs in many-cores, while its violations could induce performance-crippling execution uncertainties. Less task parallelization, on the other hand, results in lower peak power, but a more prolonged deadline violating execution. The problem of peak power minimization in many-cores is to determine task-to-core mapping configuration in the spatio-temporal domain that minimizes the peak power consumption of an application, but ensures application still meets the deadline. All previous works on peak power minimization for many-core applications (with or without DAG) assume only single-threaded tasks. We are the first to propose a framework, called PkMin, which minimizes the peak power of many-core applications with DAG that have multi-threaded tasks. PkMin leverages the inherent convexity in the execution characteristics of multi-threaded tasks to find a configuration that satisfies the deadline, as well as minimizes peak power. Evaluation on hundreds of applications shows PkMin on average results in 49.2% lower peak power than a similar state-of-the-art framework. Full article
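
The toy example below illustrates the trade-off PkMin optimises, not the PkMin algorithm itself: for a simple chain of multi-threaded tasks sharing a deadline, parallelism is raised only as far as needed, since more threads shorten execution but increase task power; all timing and power figures are hypothetical.

    # (execution time, power) per task and thread count; tasks run back to back.
    TASKS = {
        "decode":  {1: (40, 2.0), 2: (24, 3.6), 4: (15, 6.5)},
        "analyze": {1: (60, 2.5), 2: (34, 4.4), 4: (20, 8.0)},
    }
    DEADLINE = 60

    def makespan(cfg):   # total time of the chain
        return sum(TASKS[t][n][0] for t, n in cfg.items())

    def peak_power(cfg): # one task runs at a time, so peak = max task power
        return max(TASKS[t][n][1] for t, n in cfg.items())

    cfg = {t: 1 for t in TASKS}                 # start with the least parallelism
    while makespan(cfg) > DEADLINE:
        # raise parallelism where it buys the most time per extra watt of power
        def gain(task):
            cur = cfg[task]
            nxt = min((n for n in TASKS[task] if n > cur), default=None)
            if nxt is None:
                return -1, None
            dt = TASKS[task][cur][0] - TASKS[task][nxt][0]
            dp = TASKS[task][nxt][1] - TASKS[task][cur][1]
            return dt / dp, nxt
        task = max(TASKS, key=lambda t: gain(t)[0])
        cfg[task] = gain(task)[1]

    print(cfg, makespan(cfg), peak_power(cfg))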

25 pages, 2203 KiB  
Article
Low-Complexity Run-time Management of Concurrent Workloads for Energy-Efficient Multi-Core Systems
by Ali Aalsaud, Fei Xia, Ashur Rafiev, Rishad Shafik, Alexander Romanovsky and Alex Yakovlev
J. Low Power Electron. Appl. 2020, 10(3), 25; https://doi.org/10.3390/jlpea10030025 - 25 Aug 2020
Cited by 1 | Viewed by 3170
Abstract
Contemporary embedded systems may execute multiple applications, potentially concurrently on heterogeneous platforms, with different system workloads (CPU- or memory-intensive or both) leading to different power signatures. This makes finding the most energy-efficient system configuration for each type of workload scenario extremely challenging. This paper proposes a novel run-time optimization approach aiming for maximum power normalized performance under such circumstances. Based on experimenting with PARSEC applications on Odroid XU-3 and Intel Core i7 platforms, we model power normalized performance (in terms of instructions per second (IPS)/Watt) through multivariate linear regression (MLR). We derive run-time control methods to exploit the models in different ways, trading off optimization results with control overheads. We demonstrate low-cost and low-complexity run-time algorithms that continuously adapt system configuration to improve the IPS/Watt by up to 139% compared to existing approaches. Full article
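
A small sketch of the modelling step described above, under the assumption of a purely linear model in core count and frequency (the paper's model and measurements differ): fit IPS/Watt by least squares from a few made-up samples, then pick the configuration with the best predicted efficiency.

    # Multivariate linear regression of IPS/Watt over (cores, frequency), hypothetical data.
    import numpy as np

    samples = [
        ((1, 0.8), 2.1), ((2, 0.8), 3.5), ((4, 0.8), 5.0),
        ((1, 1.6), 1.8), ((2, 1.6), 3.0), ((4, 1.6), 4.2),
    ]
    X = np.array([[1.0, c, f] for (c, f), _ in samples])     # intercept, cores, freq (GHz)
    y = np.array([v for _, v in samples])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)             # least-squares fit

    def predicted_ips_per_watt(cores, freq):
        return coef @ np.array([1.0, cores, freq])

    candidates = [(c, f) for c in (1, 2, 4) for f in (0.8, 1.2, 1.6)]
    best = max(candidates, key=lambda cf: predicted_ips_per_watt(*cf))
    print("best configuration (cores, GHz):", best)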

Review

31 pages, 6698 KiB  
Review
A Survey of Resource Management for Processing-In-Memory and Near-Memory Processing Architectures
by Kamil Khan, Sudeep Pasricha and Ryan Gary Kim
J. Low Power Electron. Appl. 2020, 10(4), 30; https://doi.org/10.3390/jlpea10040030 - 24 Sep 2020
Cited by 7 | Viewed by 4397
Abstract
Due to the amount of data involved in emerging deep learning and big data applications, operations related to data movement have quickly become a bottleneck. Data-centric computing (DCC), as enabled by processing-in-memory (PIM) and near-memory processing (NMP) paradigms, aims to accelerate these types of applications by moving the computation closer to the data. Over the past few years, researchers have proposed various memory architectures that enable DCC systems, such as logic layers in 3D-stacked memories or charge-sharing-based bitwise operations in dynamic random-access memory (DRAM). However, application-specific memory access patterns, power and thermal concerns, memory technology limitations, and inconsistent performance gains complicate the offloading of computation in DCC systems. Therefore, designing intelligent resource management techniques for computation offloading is vital for leveraging the potential offered by this new paradigm. In this article, we survey the major trends in managing PIM and NMP-based DCC systems and provide a review of the landscape of resource management techniques employed by system designers for such systems. Additionally, we discuss the future challenges and opportunities in DCC management. Full article
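
As a simplified, hypothetical example of the offloading decisions such resource managers face (not a technique from the survey), the sketch below offloads a kernel to the memory-side logic only when the avoided data movement outweighs its slower compute; all bandwidth and throughput figures are invented.

    # Toy offload-or-not decision for a PIM/NMP system (invented cost parameters).
    def host_cost(bytes_moved: int, ops: int) -> float:
        dram_bw = 20e9       # bytes/s available to the host cores
        host_perf = 200e9    # ops/s on the host
        return bytes_moved / dram_bw + ops / host_perf

    def pim_cost(ops: int) -> float:
        pim_perf = 50e9      # ops/s of the in/near-memory logic (no off-chip transfer)
        return ops / pim_perf

    def should_offload(bytes_moved: int, ops: int) -> bool:
        return pim_cost(ops) < host_cost(bytes_moved, ops)

    # memory-bound scan: lots of data, few ops per byte -> offload
    print(should_offload(bytes_moved=8_000_000_000, ops=8_000_000_000))   # True
    # compute-bound kernel: small data, many ops -> keep on the host
    print(should_offload(bytes_moved=100_000_000, ops=60_000_000_000))    # False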

Other

12 pages, 340 KiB  
Opinion
A Case for Security-Aware Design-Space Exploration of Embedded Systems
by Andy D. Pimentel
J. Low Power Electron. Appl. 2020, 10(3), 22; https://doi.org/10.3390/jlpea10030022 - 17 Jul 2020
Cited by 4 | Viewed by 3689
Abstract
As modern embedded systems are becoming more and more ubiquitous and interconnected, they attract worldwide attention from attackers and the security aspect is more important than ever during the design of those systems. Moreover, given the ever-increasing complexity of the applications that run on these systems, it becomes increasingly difficult to meet all security criteria. While extra-functional design objectives such as performance and power/energy consumption are typically taken into account already during the very early stages of embedded systems design, system security is still mostly considered as an afterthought. That is, security is usually not regarded in the process of (early) design-space exploration of embedded systems, which is the critical process of multi-objective optimization that aims at optimizing the extra-functional behavior of a design. This position paper argues for the development of techniques for quantifying the 'degree of secureness' of embedded system design instances such that these can be incorporated in a multi-objective optimization process. Such technology would allow for the optimization of security aspects of embedded systems during the earliest design phases as well as for studying the trade-offs between security and the other design objectives such as performance, power consumption and cost. Full article
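
The sketch below illustrates the position being argued for, assuming a 'degree of secureness' score is already available (quantifying it is precisely the open problem): the score is treated as one more objective and only Pareto-optimal design points are kept; the design points and scores are hypothetical.

    # design point: (performance, power_w, security_score); higher perf/security
    # and lower power are better. All values are made up.
    points = {
        "no_crypto":       (100.0, 2.0, 0.2),
        "sw_aes":          ( 80.0, 2.2, 0.7),
        "hw_aes_isolated": ( 95.0, 2.6, 0.9),
        "hw_aes_shared":   ( 90.0, 2.7, 0.6),   # dominated by hw_aes_isolated
    }

    def dominates(a, b):
        pa, wa, sa = points[a]; pb, wb, sb = points[b]
        no_worse = pa >= pb and wa <= wb and sa >= sb
        better = pa > pb or wa < wb or sa > sb
        return no_worse and better

    pareto = [p for p in points if not any(dominates(q, p) for q in points if q != p)]
    print(sorted(pareto))   # hw_aes_shared is dominated and dropped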
