System Software Issues in Future Computing Systems

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 November 2021) | Viewed by 12149

Special Issue Editors


Guest Editor
Prof. Dr. Hyokyung Bahn
Department of Computer Engineering, Ewha University, Seoul 03760, Republic of Korea
Interests: operating systems; real-time systems; memory and storage management; embedded systems; system optimizations

Guest Editor
Prof. Dr. Sungyong Ahn
School of Computer Science and Engineering, Pusan National University, Busan 46241, Republic of Korea
Interests: operating systems; cloud platforms; non-volatile memory storage; non-block-based storage

Special Issue Information

Dear Colleagues,

The objective of this Special Issue is the design and performance analysis of system software for future computing systems and their applications. In particular, we focus on operating system issues (e.g., caching, scheduling, allocation, optimization), with special emphasis on new hardware environments (e.g., persistent/nonvolatile memory, many-core processors) and emerging systems and applications (e.g., IoT, cloud, mobile, real-time, embedded, CPS, healthcare, automotive, smart factory).

Since emerging hardware media (e.g., many-core GPU, phase-change memory, ReRAM, STT-MRAM) have different performance characteristics from traditional system components (e.g., single-core CPU, SRAM cache, DRAM memory, HDD storage), we need to revisit data structures and algorithms for designing appropriate software layers.

Unlike general-purpose computer systems, future systems and their applications have a large variety of I/O devices (e.g., sensor, GPS) and special requirements (e.g., deadline, battery, memory limitations), which should be carefully considered in the design of future operating systems.

Potential topics include but are not limited to: 

  1. Memory/storage management for emerging hardware and applications;
  2. Workload characterization for future applications with respect to OS design;
  3. Caching, scheduling, allocation, and optimization issues in emerging systems and applications;
  4. Operating system issues in various system environments (e.g., IoT, cloud, mobile, embedded, real-time, CPS, healthcare, automotive, smart factory). 

In this Special Issue, extensions of major conference papers, review articles, and development case studies, as well as original research articles, are welcome. We also encourage authors to open the source code or materials of their developed software to the research community.

Prof. Dr. Hyokyung Bahn
Prof. Dr. Sungyong Ahn
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Keywords

  • operating system
  • memory and storage
  • workload characterization
  • caching
  • scheduling...

Published Papers (2 papers)


Research

17 pages, 21705 KiB  
Article
Efficient Key-Value Data Placement for ZNS SSD
by Gijun Oh, Junseok Yang and Sungyong Ahn
Appl. Sci. 2021, 11(24), 11842; https://doi.org/10.3390/app112411842 - 13 Dec 2021
Cited by 7 | Viewed by 4553
Abstract
Log-structured merge-tree (LSM-tree)-based key–value stores are attracting attention for their high I/O (input/output) performance due to their sequential write characteristics. However, excessive writes caused by compaction shorten the lifespan of the solid-state drive (SSD). Several studies therefore aim to reduce garbage collection overhead by using Zoned Namespace (ZNS) SSDs, in which the host can determine data placement. However, the existing studies have limitations in terms of performance improvement because the lifetime and hotness of key–value data are not considered. Therefore, in this paper, we propose a technique to improve the space efficiency and minimize the garbage collection overhead of SSDs by placing key–value data according to their characteristics. The proposed method was implemented by modifying ZenFS of RocksDB and, according to the results of the performance evaluation, the space efficiency could be improved by up to 75%.
(This article belongs to the Special Issue System Software Issues in Future Computing Systems)
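The placement idea behind this paper can be illustrated with a minimal sketch. This is not the paper's ZenFS implementation; the `Zone` and `HotnessPlacer` classes and the hot/warm/cold policy below are illustrative assumptions that only capture the principle of grouping key–value data with similar hotness into the same append-only zones, so that data with similar lifetimes are reclaimed together and garbage collection copy-back is reduced.

```python
# Conceptual sketch (not the paper's ZenFS code): route key-value
# records to per-hotness groups of append-only zones.

class Zone:
    """A simplified ZNS zone: fixed capacity, append-only writes."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.records = []

    def append(self, key, size):
        # ZNS zones are append-only: a write either fits at the
        # write pointer or the zone is full.
        if self.used + size > self.capacity:
            return False
        self.records.append((key, size))
        self.used += size
        return True

class HotnessPlacer:
    """Place records into zone groups keyed by hotness (hypothetical)."""
    def __init__(self, zone_capacity, levels=("hot", "warm", "cold")):
        self.zone_capacity = zone_capacity
        self.groups = {level: [Zone(zone_capacity)] for level in levels}

    def put(self, key, size, hotness):
        zones = self.groups[hotness]
        if not zones[-1].append(key, size):
            # Current zone is full: open a fresh zone for this group,
            # keeping similar-lifetime data physically together.
            zones.append(Zone(self.zone_capacity))
            zones[-1].append(key, size)

placer = HotnessPlacer(zone_capacity=100)
placer.put("k1", 60, "hot")
placer.put("k2", 60, "hot")   # overflows the first hot zone
placer.put("k3", 30, "cold")
```

Because each group holds data with a similar lifetime, a whole zone tends to become invalid at roughly the same time and can be reset without copying live data elsewhere.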

17 pages, 4427 KiB  
Article
Efficient Use of GPU Memory for Large-Scale Deep Learning Model Training
by Hyeonseong Choi and Jaehwan Lee
Appl. Sci. 2021, 11(21), 10377; https://doi.org/10.3390/app112110377 - 04 Nov 2021
Cited by 12 | Viewed by 6873
Abstract
To achieve high accuracy when performing deep learning, it is necessary to use a large-scale training model. However, due to the limitations of GPU memory, it is difficult to train such models within a single GPU. NVIDIA introduced CUDA Unified Memory with CUDA 6 to overcome the limitations of GPU memory by virtually combining GPU memory and CPU memory; CUDA 8 added memory advise options to utilize Unified Memory efficiently. In this work, we propose a newly optimized scheme based on CUDA Unified Memory that uses GPU memory efficiently by applying different memory advice to each data type according to its access pattern in deep learning training. We apply CUDA Unified Memory to PyTorch to evaluate the performance of large-scale learning models through the expanded GPU memory, and we conduct comprehensive experiments on how to utilize Unified Memory efficiently by applying memory advice during training. As a result, when the data used for deep learning are divided into three types and memory advice is applied to each type according to its access pattern, the deep learning execution time is reduced by 9.4% compared to the default Unified Memory.
(This article belongs to the Special Issue System Software Issues in Future Computing Systems)
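The advise-per-access-pattern idea can be sketched as follows. The real mechanism is the `cudaMemAdvise` call in CUDA C/C++, whose flag names are mirrored below as strings; the three data kinds and the mapping from kind to advice are illustrative assumptions, not the paper's exact classification.

```python
# Conceptual sketch: map each class of training data to a CUDA Unified
# Memory advise flag based on its access pattern. The flag strings
# mirror the real cudaMemAdvise enum values; the classification rules
# are hypothetical.

READ_MOSTLY = "cudaMemAdviseSetReadMostly"
PREFERRED_LOCATION = "cudaMemAdviseSetPreferredLocation"
ACCESSED_BY = "cudaMemAdviseSetAccessedBy"

def advise_for(data_kind):
    if data_kind == "weights":
        # Read on every forward pass: read-mostly advice lets the
        # driver keep read-duplicated copies instead of migrating pages.
        return READ_MOSTLY
    if data_kind == "gradients":
        # Written on the GPU every step: prefer placement in GPU memory
        # to avoid repeated page faults and migrations.
        return PREFERRED_LOCATION
    if data_kind == "activations":
        # May spill beyond GPU capacity: establish mappings so the GPU
        # can access the data wherever it resides.
        return ACCESSED_BY
    raise ValueError(f"unknown data kind: {data_kind}")
```

In the real system, the corresponding `cudaMemAdvise` call would be issued once per allocation after `cudaMallocManaged`, so that subsequent training iterations benefit from the placement hint.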
