Microprocessors

A special issue of Micromachines (ISSN 2072-666X). This special issue belongs to the section "A:Physics".

Deadline for manuscript submissions: closed (31 October 2021) | Viewed by 11194

Special Issue Editors


Guest Editor
Department of Physics, Informatics and Mathematics, University of Modena and Reggio Emilia, 41125 Modena, Italy
Interests: optimization; GPU computing; edge computing; image reconstruction; deep learning; real-time scheduling; embedded systems; smart city

Guest Editor
Department of Physics, Informatics and Mathematics, University of Modena and Reggio Emilia, 41125 Modena, Italy
Interests: embedded systems; edge computing; autonomous driving systems; reconfigurable systems; programming model

Special Issue Information

Dear Colleagues,

In recent years, applying big-data technologies to field applications has created several new needs. First, data must be processed across a compute continuum spanning cloud, edge, and devices, each with different capacities, architectures, and so on. Second, some computations must be made predictable, supporting both data-in-motion processing and larger-scale data-at-rest processing. Finally, the computational capabilities of small devices, such as wearables and extremely low-power sensors, must be exploited to the fullest to build a pervasive sensor network for a responsive and energy-efficient smart city.

This Special Issue will cover the latest advances in low-power architectures, programming models for smart city infrastructures, data analytics methods for power or time constraints, and more.

In particular, we encourage submissions showing how this next generation of microprocessors can be used effectively in field applications, making the best use of hardware features such as GPGPU acceleration, reconfigurable logic (e.g., FPGAs), and deep learning compute engines in real applications. Methodologies for system design and software development for such platforms and domains are also welcome.

Dr. Roberto Cavicchioli
Dr. Paolo Burgio
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Micromachines is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Edge Computing
  • Fog Computing
  • IoT
  • Smart City
  • Reconfigurable Accelerators
  • Embedded Processors for Machine Learning

Published Papers (5 papers)


Research

13 pages, 831 KiB  
Article
Application-Oriented Data Migration to Accelerate In-Memory Database on Hybrid Memory
by Wenze Zhao, Yajuan Du, Mingzhe Zhang, Mingyang Liu, Kailun Jin and Rachata Ausavarungnirun
Micromachines 2022, 13(1), 52; https://0-doi-org.brum.beds.ac.uk/10.3390/mi13010052 - 29 Dec 2021
Cited by 2 | Viewed by 1656
Abstract
With the advantage of faster data access than traditional disks, in-memory database systems, such as Redis and Memcached, have been widely applied in data centers and embedded systems. The performance of an in-memory database greatly depends on the access speed of memory. To meet the requirements of high bandwidth and low energy, die-stacked memory (e.g., High Bandwidth Memory (HBM)) has been developed to extend the channel number and width. However, the capacity of die-stacked memory is limited due to the interposer challenge. Thus, hybrid memory systems combining traditional Dynamic Random Access Memory (DRAM) with die-stacked memory have emerged. Existing works have proposed placing and managing data on hybrid memory architectures from the hardware perspective. This paper instead manages in-memory database data in hybrid memory from the application perspective. We first perform a preliminary study on the hotness distribution of client requests on Redis. From the results, we observe that most requests target a small portion of the data objects in the in-memory database. We then propose Application-oriented Data Migration (ADM) to accelerate in-memory databases on hybrid memory. We design a hotness management method and two migration policies to migrate data into or out of HBM. We take Redis under comprehensive benchmarks as a case study for the proposed method. The experimental results verify that our method effectively improves performance and reduces energy consumption compared with the unmodified Redis database.
(This article belongs to the Special Issue Microprocessors)
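The hotness-management idea can be made concrete with a minimal sketch (illustrative Python; the `HybridStore` class, the threshold, the capacity, and the eviction policy are assumptions for illustration, not the authors' ADM implementation):

```python
HBM_CAPACITY = 4          # max number of objects the fast tier can hold
HOT_THRESHOLD = 3         # accesses before an object is considered "hot"

class HybridStore:
    """Toy two-tier store: hot keys migrate into HBM, cold ones stay in DRAM."""

    def __init__(self):
        self.hbm = set()      # keys currently placed in die-stacked memory
        self.dram = set()     # keys in conventional DRAM
        self.hits = {}        # per-key access counter (the hotness metadata)

    def put(self, key):
        self.dram.add(key)
        self.hits[key] = 0

    def get(self, key):
        self.hits[key] += 1
        # Migrate into HBM once the key crosses the hotness threshold.
        if key in self.dram and self.hits[key] >= HOT_THRESHOLD:
            if len(self.hbm) >= HBM_CAPACITY:
                # Evict the coldest HBM resident back to DRAM.
                coldest = min(self.hbm, key=lambda k: self.hits[k])
                self.hbm.discard(coldest)
                self.dram.add(coldest)
            self.dram.discard(key)
            self.hbm.add(key)
        return key in self.hbm   # True if served from the fast tier
```

A real system would track hotness over sliding windows and move object payloads between physical tiers; the sketch only captures the migrate-in/migrate-out decision logic.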

25 pages, 5045 KiB  
Article
Nonlinear Hyperparameter Optimization of a Neural Network in Image Processing for Micromachines
by Mingming Shen, Jing Yang, Shaobo Li, Ansi Zhang and Qiang Bai
Micromachines 2021, 12(12), 1504; https://0-doi-org.brum.beds.ac.uk/10.3390/mi12121504 - 30 Nov 2021
Cited by 8 | Viewed by 2242
Abstract
Deep neural networks are widely used in the field of image processing for micromachines, such as 3D shape detection in microelectronic high-speed dispensing and object detection in microrobots. It is well known that hyperparameters and their interactions impact neural network model performance. Taking advantage of the mathematical correlations between hyperparameters and the corresponding deep learning model to adjust hyperparameters intelligently is the key to obtaining an optimal solution from a deep neural network model. Leveraging these correlations is also significant for unlocking the “black box” of deep learning by revealing the mechanism of its mathematical principles. However, no complete system exists for combining mathematical derivation with experimental verification to quantify the impact of hyperparameters on deep learning model performance. Therefore, in this paper, the authors analyze the mathematical relationships among four hyperparameters: the learning rate, batch size, dropout rate, and convolution kernel size. A generalized multiparameter mathematical correlation model is also established, showing that the interaction between these hyperparameters plays an important role in the neural network’s performance. The proposal is validated by running convolutional neural network experiments on the MNIST dataset. Notably, this research can help establish a universal multiparameter mathematical correlation model to guide the deep learning parameter-adjustment process.
(This article belongs to the Special Issue Microprocessors)
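As a small concrete illustration of how two of these hyperparameters interact, the widely used linear-scaling heuristic couples the learning rate to the batch size. This is a well-known rule of thumb, not the generalized correlation model derived in the paper:

```python
def scaled_learning_rate(base_lr: float, base_batch: int, batch: int) -> float:
    """Linear-scaling heuristic: when the batch size grows by a factor k,
    scale the learning rate by the same factor to keep the per-step
    gradient contribution roughly constant."""
    return base_lr * batch / base_batch

# Quadrupling the batch size quadruples the learning rate.
print(scaled_learning_rate(0.1, 256, 1024))  # → 0.4
```

The paper's contribution is to generalize this kind of pairwise coupling into a multiparameter model that also covers the dropout rate and convolution kernel size.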

15 pages, 3924 KiB  
Article
Identifying Hybrid DDoS Attacks in Deterministic Machine-to-Machine Networks on a Per-Deterministic-Flow Basis
by Yen-Hung Chen, Yuan-Cheng Lai and Kai-Zhong Zhou
Micromachines 2021, 12(9), 1019; https://0-doi-org.brum.beds.ac.uk/10.3390/mi12091019 - 26 Aug 2021
Cited by 2 | Viewed by 1890
Abstract
The Deterministic Network (DetNet) is becoming a major feature of 5G and 6G networks, addressing the issue that conventional IT infrastructure cannot efficiently handle latency-sensitive data. DetNet applies flow virtualization to satisfy time-critical flow requirements, but DetNet flows and conventional flows inevitably interfere with each other when sharing the same physical resources. This raises a hybrid DDoS security issue, in which high-volume malicious traffic attacks not only the centralized DetNet controller itself but also the links that DetNet flows pass through. Previous research focused on DDoS attacks against either the centralized controller or the links. As DDoS attack techniques evolve, Hybrid DDoS attacks can target multiple victims (controllers or links) simultaneously and are difficult to detect with previous DDoS detection methodologies. This study therefore proposes the Flow Differentiation Detector (FDD), a novel approach to detecting Hybrid DDoS attacks. The FDD first applies a fuzzy-based mechanism, Target Link Selection, to determine the links most valuable to a DDoS link/server attacker, and then statistically evaluates the traffic pattern flowing through these links. A further contribution of this study is the deployment of the FDD in the SDN controller OpenDaylight to implement a Hybrid DDoS attack detection system. The experimental results show that the FDD achieves higher detection accuracy (above 90%) than traditional methods under different ratios of Hybrid DDoS attacks and different topology types and scales.
(This article belongs to the Special Issue Microprocessors)
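The fuzzy link-scoring idea behind Target Link Selection can be sketched as follows (illustrative Python; the membership functions, their breakpoints, and the two input features are assumptions for illustration, not the FDD's actual design):

```python
def membership(value, low, high):
    """Piecewise-linear fuzzy membership: 0 at/below `low`, 1 at/above `high`."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def link_value(utilization, det_flow_count):
    # A link is "valuable" to an attacker when it is highly utilized AND
    # carries many deterministic flows; fuzzy AND is taken as the minimum.
    return min(membership(utilization, 0.3, 0.9),
               membership(det_flow_count, 5, 50))

def most_valuable_links(links, top_k=2):
    """links: {name: (utilization, det_flow_count)}; returns top_k link names."""
    return sorted(links, key=lambda n: link_value(*links[n]), reverse=True)[:top_k]
```

Statistically evaluating the traffic pattern on the links such a function returns would then correspond to the detection stage described in the abstract.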

15 pages, 1576 KiB  
Article
Towards Application-Driven Task Offloading in Edge Computing Based on Deep Reinforcement Learning
by Ming Sun, Tie Bao, Dan Xie, Hengyi Lv and Guoliang Si
Micromachines 2021, 12(9), 1011; https://0-doi-org.brum.beds.ac.uk/10.3390/mi12091011 - 26 Aug 2021
Cited by 9 | Viewed by 1754
Abstract
Edge computing is a new paradigm that provides storage, computing, and network resources between the traditional cloud data center and terminal devices. In this paper, we concentrate on the application-driven task offloading problem in edge computing, considering the strong dependencies among sub-tasks for multiple users. Our objective is to jointly optimize the total delay and energy consumed by applications while guaranteeing users' quality of service. First, we formulate the problem for application-driven tasks in edge computing by jointly considering delay and energy consumption. Based on that, we propose a novel Application-driven Task Offloading Strategy (ATOS) based on deep reinforcement learning, which adds a preliminary sorting mechanism to realize the joint optimization. Specifically, we analyze the characteristics of application-driven tasks and propose a heuristic algorithm that introduces a new factor to determine the processing order of parallel sub-tasks. Finally, extensive experiments validate the effectiveness and reliability of the proposed algorithm. Specifically, compared with the baseline strategies, the total cost reduction achieved by ATOS reaches up to 64.5% on average.
(This article belongs to the Special Issue Microprocessors)
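The preliminary sorting idea, i.e., deciding the processing order among sub-tasks whose dependencies are already satisfied, can be sketched as follows (illustrative Python; the priority factor shown here, plain workload size, is an assumption and not the factor proposed in the paper):

```python
def schedule(deps, cost):
    """deps: {task: set(prerequisite tasks)}; cost: {task: workload}.
    Returns a dependency-respecting order, heavier ready sub-tasks first."""
    remaining = {t: set(d) for t, d in deps.items()}
    order = []
    while remaining:
        # "Ready" sub-tasks have all prerequisites completed.
        ready = [t for t, d in remaining.items() if not d]
        # Priority factor: process the heaviest ready sub-task first.
        nxt = max(ready, key=lambda t: cost[t])
        order.append(nxt)
        del remaining[nxt]
        for d in remaining.values():
            d.discard(nxt)
    return order
```

In ATOS this ordering feeds a deep reinforcement learning agent that makes the actual offloading decisions; the sketch covers only the sorting stage.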

17 pages, 2886 KiB  
Article
An Efficient Computation Offloading Strategy with Mobile Edge Computing for IoT
by Juan Fang, Jiamei Shi, Shuaibing Lu, Mengyuan Zhang and Zhiyuan Ye
Micromachines 2021, 12(2), 204; https://0-doi-org.brum.beds.ac.uk/10.3390/mi12020204 - 17 Feb 2021
Cited by 23 | Viewed by 2802
Abstract
With the rapid development of mobile cloud computing (MCC), the Internet of Things (IoT), and artificial intelligence (AI), user equipment (UEs) is facing explosive growth. To effectively address the problem that UEs may lack sufficient capacity for computationally intensive and delay-sensitive applications, we take Mobile Edge Computing (MEC) for the IoT as our starting point and study the computation offloading strategy of UEs. First, we model the application generated by a UE as a directed acyclic graph (DAG) to achieve fine-grained task offloading scheduling, which makes parallel processing of tasks possible and improves execution efficiency. Then, we propose a multi-population cooperative elite genetic algorithm (MCE-GA) based on the standard genetic algorithm, which solves the offloading problem for tasks with dependencies in MEC while minimizing the execution delay and energy consumption of applications. Experimental results show that MCE-GA outperforms the baseline algorithms. Specifically, the overhead reduction achieved by MCE-GA can be up to 72.4%, 38.6%, and 19.3%, respectively, which proves the effectiveness and reliability of MCE-GA.
(This article belongs to the Special Issue Microprocessors)
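To make the genetic-algorithm idea concrete, here is a toy single-population elitist GA for a dependency-free offloading decision, where each bit chooses local versus edge execution. The cost tables are invented, and MCE-GA's multi-population cooperation and DAG handling are deliberately omitted:

```python
import random

LOCAL_COST = [4, 3, 6, 2, 5]   # per-task cost if executed on the device
EDGE_COST  = [2, 4, 1, 3, 2]   # per-task cost if offloaded (incl. transfer)

def cost(plan):
    """Total cost of an offloading plan (list of 0 = local, 1 = edge)."""
    return sum(EDGE_COST[i] if bit else LOCAL_COST[i]
               for i, bit in enumerate(plan))

def evolve(pop_size=20, generations=100, seed=0):
    rng = random.Random(seed)
    n = len(LOCAL_COST)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[:pop_size // 2]          # keep the cheaper half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]        # one-point crossover
            if rng.random() < 0.2:           # occasional bit-flip mutation
                j = rng.randrange(n)
                child[j] ^= 1
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)
```

For these cost tables the optimum is to offload tasks 0, 2, and 4 and run tasks 1 and 3 locally, for a total cost of 10; an elitist GA of this shape converges to it quickly on such a tiny search space.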
