Cloud-Native Applications and Services

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Network Virtualization and Edge/Fog Computing".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 17433

Special Issue Editor


Prof. Dr. Nane Kratzke
Guest Editor
Department of Electrical Engineering and Computer Science, Lübeck University of Applied Sciences, 23562 Lübeck, Germany
Interests: cloud computing; service computing; microservices; serverless architectures; web-scale information systems; data science; machine learning; volunteer computing
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

Even small companies can generate enormous economic growth and business value by providing cloud-based services or applications. Instagram, Uber, Airbnb, Dropbox, WhatsApp, Netflix, Zoom, and many more had remarkably modest headcounts in their founding days. Just a few years later, however, these "cloud-native" enterprises have all had a remarkable economic and social impact. What is more, they have changed the style in which large-scale applications are built today. What these companies have in common is their cloud-first approach: they make intentional use of cloud resources and are capable of scaling their services globally, as quickly as needed. In times of world-wide COVID-19 shutdowns, these cloud-native companies have emerged as an essential, largely unnoticed backbone that keeps even large economies (at least partly) operating. Their services enabled remote working opportunities, established overnight, for company staff who suddenly found themselves in home offices, and they enable ad-hoc remote teaching for teachers and students at schools and universities. Currently, these cloud-native services are among the things that "keep our heads above water".

This Special Issue aims to collect (and to appraise in light of the current COVID-19 shutdowns) the most recent innovations in cloud-native software and system engineering practices. We want to provide a broad and well-grounded picture of what the increasingly used term "cloud-native" actually means, and what its implications are or could be. We strive to gather research from different disciplines and methodological backgrounds to discuss new ideas, research questions, recent results, and future challenges in this emerging area of research.

Potential topics include, but are not limited to, the following:

  • Cloud-native related use cases, experiences, and evaluations
  • Cloud-native related software and system engineering methodologies (e.g., domain-driven design, microservices, serverless architectures, DevOps, and more)
  • Cloud-native related technologies (e.g., containers, container orchestration, infrastructure as code, service meshes, observability solutions, auto-scaling solutions, and more)
  • Cloud-native related solution proposal studies
  • Cloud-native related systematic mapping studies and systematic literature reviews

Prof. Dr. Nane Kratzke
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cloud computing
  • microservices
  • serverless
  • cloud-native
  • application
  • service
  • CNA
  • CNS
  • service computing

Published Papers (5 papers)


Editorial


2 pages, 155 KiB  
Editorial
Cloud-Native Applications and Services
by Nane Kratzke
Future Internet 2022, 14(12), 346; https://0-doi-org.brum.beds.ac.uk/10.3390/fi14120346 - 22 Nov 2022
Cited by 1 | Viewed by 1121
Abstract
This Special Issue presents some of the most recent innovations in cloud-native software and system engineering practices, providing a broad and well-grounded picture of what the increasingly used term “cloud-native” currently stands for [...]
(This article belongs to the Special Issue Cloud-Native Applications and Services)

Research


13 pages, 867 KiB  
Article
A Cloud-Based Data Collaborative to Combat the COVID-19 Pandemic and to Solve Major Technology Challenges
by Max Cappellari, John Belstner, Bryan Rodriguez and Jeff Sedayao
Future Internet 2021, 13(3), 61; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13030061 - 27 Feb 2021
Cited by 3 | Viewed by 2849
Abstract
The XPRIZE Foundation designs and operates multi-million-dollar global competitions to incentivize the development of technological breakthroughs that accelerate humanity toward a better future. To combat the COVID-19 pandemic, the foundation coordinated with several organizations to make datasets about different facets of the disease available and to provide the computational resources needed to analyze those datasets. This paper is a case study of the requirements, design, and implementation of the XPRIZE Data Collaborative, a cloud-based infrastructure that enables the XPRIZE to meet its COVID-19 mission and host future data-centric competitions. We examine how a cloud-native application can use an unexpected variety of cloud technologies, ranging from containers and serverless computing to older ones such as virtual machines. We also examine and document the effects that the pandemic had on application development in the cloud. We include our experiences of having users successfully exercise the Data Collaborative, detailing the challenges encountered and areas for improvement and future work.
(This article belongs to the Special Issue Cloud-Native Applications and Services)

17 pages, 468 KiB  
Article
Understanding the Determinants and Future Challenges of Cloud Computing Adoption for High Performance Computing
by Theo Lynn, Grace Fox, Anna Gourinovitch and Pierangelo Rosati
Future Internet 2020, 12(8), 135; https://0-doi-org.brum.beds.ac.uk/10.3390/fi12080135 - 11 Aug 2020
Cited by 18 | Viewed by 4857
Abstract
High performance computing (HPC) is widely recognized as a key enabling technology for advancing scientific progress, industrial competitiveness, national and regional security, and the quality of human life. Notwithstanding this contribution, the large upfront investment and technical expertise required have limited the adoption of HPC to large organizations, government bodies, and third level institutions. Recent advances in cloud computing and telecommunications have the potential to overcome the historical issues associated with HPC through increased flexibility and efficiency, and reduced capital and operational expenditure. This study seeks to advance the literature on technology adoption and assimilation in the under-examined HPC context through a mixed methods approach. Firstly, the determinants of cloud computing adoption for HPC are examined through a survey of 121 HPC decision makers worldwide. Secondly, a modified Delphi method was conducted with 13 experts to identify and prioritize critical issues in the adoption of cloud computing for HPC. Results from the quantitative phase suggest that only organizational and human factors significantly influence cloud computing adoption decisions for HPC. While security was not identified as a significant influencer in adoption decisions, qualitative research findings suggest that data privacy and security issues are an immediate and long-term concern.
(This article belongs to the Special Issue Cloud-Native Applications and Services)

Review


20 pages, 318 KiB  
Review
Intelligent and Autonomous Management in Cloud-Native Future Networks—A Survey on Related Standards from an Architectural Perspective
by Qiang Duan
Future Internet 2021, 13(2), 42; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13020042 - 5 Feb 2021
Cited by 19 | Viewed by 3854
Abstract
Cloud-native network design, which leverages network virtualization and softwarization together with the service-oriented architectural principle, is transforming communication networks into a versatile platform for converged network-cloud/edge service provisioning. Intelligent and autonomous management is one of the most challenging issues in cloud-native future networks, and a wide range of machine learning (ML)-based technologies have been proposed for addressing different aspects of the management challenge. It becomes critical that the various management technologies are applied on the foundation of a consistent architectural framework with a holistic vision. This calls for the standardization of a new management architecture that supports the seamless integration of diverse ML-based technologies in cloud-native future networks. The goal of this paper is to provide a big picture of the recent developments of architectural frameworks for intelligent and autonomous management for future networks. The paper surveys the latest progress in the standardization of network management architectures, including works by 3GPP, ETSI, and ITU-T, and analyzes how cloud-native network design may facilitate the architecture development for addressing management challenges. Open issues related to intelligent and autonomous management in cloud-native future networks are also discussed in this paper to identify some possible directions for future research and development.
(This article belongs to the Special Issue Cloud-Native Applications and Services)

Other

20 pages, 749 KiB  
Perspective
Volunteer Down: How COVID-19 Created the Largest Idling Supercomputer on Earth
by Nane Kratzke
Future Internet 2020, 12(6), 98; https://0-doi-org.brum.beds.ac.uk/10.3390/fi12060098 - 6 Jun 2020
Cited by 5 | Viewed by 3969
Abstract
Starting from close to scratch, the COVID-19 pandemic created the largest volunteer supercomputer on earth. Sadly, processing resources assigned to the corresponding Folding@home project cannot be shared efficiently with other volunteer computing projects. Consequently, this largest supercomputer had significant idle times. This perspective paper investigates how the resource sharing of future volunteer computing projects could be improved. Notably, efficient resource sharing has been optimized throughout the last ten years in cloud computing. Therefore, this perspective paper reviews the current state of volunteer and cloud computing to analyze what both domains could learn from each other. It turns out that the identified resource-sharing shortcomings of volunteer computing could be addressed by technologies that have been invented, optimized, and adapted for entirely different purposes by cloud-native companies like Uber, Airbnb, Google, or Facebook. Promising technologies include containers, serverless architectures, image registries, and distributed service registries; all have one thing in common: they already exist and are tried and tested in large web-scale deployments.
(This article belongs to the Special Issue Cloud-Native Applications and Services)
