Artificial Intelligence for the Cloud Continuum

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (30 October 2023) | Viewed by 7076

Special Issue Editors


Dr. Dimitris Apostolou
Guest Editor
Department of Informatics, University of Piraeus, 18534 Piraeus, Greece
Interests: decision support systems; artificial intelligence; information systems

Dr. Yiannis Verginadis
Guest Editor
Department of Business Administration, School of Business, Athens University of Economics and Business, Patission 76, 10434 Athens, Greece
Interests: cloud computing; edge computing; multiclouds; cloud security; topology and orchestration management for cloud applications

Prof. Dr. Gregoris Mentzas
Guest Editor
National Technical University of Athens and Institute of Communication and Computer Systems, Iroon Polytechniou 9, 15780 Zografou, Greece
Interests: decision support systems; social computing; knowledge management; e-governance; web science

Special Issue Information

Dear Colleagues,

Compared to on-premise information systems, systems that leverage the cloud continuum of computing, which spans public clouds, private clouds, and fog and edge computing resources, offer economic, operational, and functional advantages. Economically, cloud computing provides cost flexibility and can reduce overall costs: heavy initial capital investments in IT resources can be avoided, and the cost of deploying and maintaining those resources can be lowered, especially with emerging intelligent methods and algorithms that support DevOps tasks. Operationally, cloud services offer greater scalability.

The opportunities and potential advantages of the cloud continuum come with new challenges. Orchestration and scheduling in the cloud continuum is a complex problem: it must be intelligent enough to guarantee responsiveness despite the uncertainty of the runtime environment, and it requires efficient management of limited, highly segregated, and relatively unreliable processing and storage resources. Moreover, the edge side of the continuum contains nodes that typically have resource and hardware limitations, and these nodes are generally heterogeneous, differing in computing power, memory size, storage capacity, and network bandwidth. These capacity-limited resources must ensure service availability regardless of the number of end-user client devices and must guarantee the required quality of service (QoS).
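
To make the scheduling challenge concrete, the following minimal sketch (in Python) greedily places service replicas onto heterogeneous continuum nodes while respecting CPU, memory, and latency constraints. The node names, resource figures, and the best-fit heuristic are illustrative assumptions, not a reference to any particular orchestrator.

    from dataclasses import dataclass

    @dataclass
    class Node:
        """A hypothetical compute node somewhere in the cloud continuum."""
        name: str
        cpu: float          # available vCPUs
        memory_gib: float   # available memory (GiB)
        latency_ms: float   # measured latency to end users

    @dataclass
    class Service:
        """A service replica that must be placed on some node."""
        name: str
        cpu: float
        memory_gib: float
        latency_budget_ms: float   # QoS requirement

    def place(services, nodes):
        """Greedy best-fit placement: among the nodes that satisfy the
        resource and latency constraints, pick the one with the least slack."""
        placement = {}
        for svc in sorted(services, key=lambda s: s.latency_budget_ms):
            feasible = [n for n in nodes
                        if n.cpu >= svc.cpu
                        and n.memory_gib >= svc.memory_gib
                        and n.latency_ms <= svc.latency_budget_ms]
            if not feasible:
                placement[svc.name] = None   # nothing fits: reject or trigger scale-out
                continue
            best = min(feasible,
                       key=lambda n: (n.cpu - svc.cpu) + (n.memory_gib - svc.memory_gib))
            best.cpu -= svc.cpu              # reserve the capacity
            best.memory_gib -= svc.memory_gib
            placement[svc.name] = best.name
        return placement

    nodes = [Node("edge-1", cpu=2, memory_gib=2, latency_ms=5),
             Node("fog-1", cpu=8, memory_gib=16, latency_ms=20),
             Node("cloud-1", cpu=64, memory_gib=256, latency_ms=80)]
    services = [Service("video-analytics", 1.5, 1.0, latency_budget_ms=10),
                Service("aggregation", 4.0, 8.0, latency_budget_ms=50),
                Service("batch-training", 16.0, 64.0, latency_budget_ms=500)]

    print(place(services, nodes))
    # {'video-analytics': 'edge-1', 'aggregation': 'fog-1', 'batch-training': 'cloud-1'}

Real orchestrators must additionally cope with uncertainty: nodes fail, latencies fluctuate, and workloads shift over time, which is precisely where AI-based prediction and adaptation methods come into play.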

To take advantage of these opportunities, next-generation applications and services that adopt a cloud execution model, deployed on interconnected, heterogeneous computing resources spanning the cloud continuum, are increasingly developed using lightweight virtualization approaches. With microservices, for example, an application may comprise many independently deployable components, and only the microservices running on a resource-constrained server need to be scaled out, which yields resource and cost optimization benefits. The deployment and management of such applications and services on the cloud continuum poses new challenges to DevOps teams, which are, at the same time, opportunities for developing new AI methods, algorithms, and tools that support DevOps operations and decision making.
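
As a simple illustration of this selective scaling benefit, the sketch below (again hypothetical, with made-up service names and an assumed 80% CPU threshold) computes new replica counts only for the microservices that are under resource pressure, leaving the rest untouched.

    # Illustrative threshold; real autoscalers use monitored metrics and policies.
    CPU_THRESHOLD = 0.80

    def scale_decisions(utilization, replicas):
        """Return desired replica counts, scaling out only the microservices
        whose CPU utilization exceeds the threshold; the others keep their
        current replica count, which is where the cost advantage comes from."""
        desired = dict(replicas)
        for service, cpu in utilization.items():
            if cpu > CPU_THRESHOLD:
                desired[service] = replicas.get(service, 1) + 1
        return desired

    utilization = {"frontend": 0.35, "inference": 0.92, "billing": 0.41}
    replicas = {"frontend": 2, "inference": 2, "billing": 1}
    print(scale_decisions(utilization, replicas))
    # -> {'frontend': 2, 'inference': 3, 'billing': 1}: only "inference" gains a replica

In practice such decisions are taken by an orchestrator's autoscaler driven by monitored metrics; AI methods can improve on fixed thresholds by predicting workload changes before they occur.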

This Special Issue will bring researchers, academicians, and practitioners together to discuss new AI methods, algorithms, and tools, as well as new applications of AI for overcoming the challenges of deploying and managing next-generation applications and services on the cloud continuum. Topics of interest include (but are not limited to):

  • AI for DevOps operations on the cloud continuum;
  • AI for Function-as-a-Service (FaaS) provisioning, deployment, and management;
  • AI in cloud serverless architectures;
  • AI for secure communications management across the cloud continuum;
  • AI for data portability, privacy, and security for preserving the confidentiality of data and protecting users’ privacy within the cloud computing continuum;
  • Intelligent techniques to facilitate secure data portability and computation on sensitive data across heterogeneous resources;
  • AI for adaptive deployment on heterogeneous computation resources;
  • Intelligent methods for prediction, detection, and reaction to workload changes;
  • AI for application lifecycle support and optimization;
  • Intelligent workflows, monitoring of execution platforms, application deployment, adaptation, and optimization of execution;
  • Autonomic fault recovery of cloud continuum computing resources;
  • AI for scalability and QoS across the cloud continuum;
  • AI for optimizing energy efficiency of cloud continuum infrastructures.

Dr. Dimitris Apostolou
Dr. Yiannis Verginadis
Prof. Dr. Gregoris Mentzas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and using the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


Research

17 pages, 929 KiB  
Article
Optimization and Prediction Techniques for Self-Healing and Self-Learning Applications in a Trustworthy Cloud Continuum
by Juncal Alonso, Leire Orue-Echevarria, Eneko Osaba, Jesús López Lobo, Iñigo Martinez, Josu Diaz de Arcaya and Iñaki Etxaniz
Information 2021, 12(8), 308; https://doi.org/10.3390/info12080308 - 30 Jul 2021
Cited by 2 | Viewed by 2599
Abstract
The current IT market is more and more dominated by the “cloud continuum”. In the “traditional” cloud, computing resources are typically homogeneous in order to facilitate economies of scale. In contrast, in edge computing, computational resources are widely diverse, commonly with scarce capacities and must be managed very efficiently due to battery constraints or other limitations. A combination of resources and services at the edge (edge computing), in the core (cloud computing), and along the data path (fog computing) is needed through a trusted cloud continuum. This requires novel solutions for the creation, optimization, management, and automatic operation of such infrastructure through new approaches such as infrastructure as code (IaC). In this paper, we analyze how artificial intelligence (AI)-based techniques and tools can enhance the operation of complex applications to support the broad and multi-stage heterogeneity of the infrastructural layer in the “computing continuum” through the enhancement of IaC optimization, IaC self-learning, and IaC self-healing. To this extent, the presented work proposes a set of tools, methods, and techniques for applications’ operators to seamlessly select, combine, configure, and adapt computation resources all along the data path and support the complete service lifecycle covering: (1) optimized distributed application deployment over heterogeneous computing resources; (2) monitoring of execution platforms in real time including continuous control and trust of the infrastructural services; (3) application deployment and adaptation while optimizing the execution; and (4) application self-recovery to avoid compromising situations that may lead to an unexpected failure.
(This article belongs to the Special Issue Artificial Intelligence for the Cloud Continuum)

22 pages, 2315 KiB  
Article
A Semantic Model for Interchangeable Microservices in Cloud Continuum Computing
by Salman Taherizadeh, Dimitris Apostolou, Yiannis Verginadis, Marko Grobelnik and Gregoris Mentzas
Information 2021, 12(1), 40; https://doi.org/10.3390/info12010040 - 18 Jan 2021
Cited by 4 | Viewed by 2644
Abstract
The rapid growth of new computing models that exploit the cloud continuum has a big impact on the adoption of microservices, especially in dynamic environments where the amount of workload varies over time or when Internet of Things (IoT) devices dynamically change their geographic location. In order to exploit the true potential of cloud continuum computing applications, it is essential to use a comprehensive set of various intricate technologies together. This complex blend of technologies currently raises data interoperability problems in such modern computing frameworks. Therefore, a semantic model is required to unambiguously specify notions of various concepts employed in cloud applications. The goal of the present paper is therefore twofold: (i) offering a new model, which allows an easier understanding of microservices within adaptive fog computing frameworks, and (ii) presenting the latest open standards and tools which are now widely used to implement each class defined in our proposed model.
(This article belongs to the Special Issue Artificial Intelligence for the Cloud Continuum)