Special Issue "Machine Learning for Cybersecurity Threats, Challenges, and Opportunities Ⅱ"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 November 2021).

Special Issue Editors

Prof. Dr. Rafael T. de Sousa Jr.
Guest Editor
Decision Technologies Laboratory - LATITUDE, Electrical Engineering Department (ENE), Institute of Technology (FT), University of Brasília (UnB), Brasília-DF, CEP 70910-900, Brazil
Interests: cyber, information, and network security; distributed data services and machine learning for intrusion and fraud detection; signal processing; energy harvesting and security at the physical layer
Dr. Robson de Oliveira Albuquerque
Guest Editor
Department of Electrical Engineering, University of Brasília, Brasília 70910-900, Brazil
Interests: distributed systems; information security; network management; network security; network systems; open source software; wireless networks
Dr. Ana Lucila Sandoval Orozco
Guest Editor
Group of Analysis, Security and Systems (GASS), Universidad Complutense de Madrid (UCM), 28040 Madrid, Spain
Interests: computer and network security; multimedia forensics; error-correcting codes; information theory

Special Issue Information

Dear Colleagues,

Cybersecurity has become a major priority for every organization. The right controls and procedures must be put in place to detect potential attacks and protect against them. However, the number of cyber-attacks will always be greater than the number of people trying to defend against them. New threats are discovered on a daily basis, making it harder for current solutions to cope with the large amount of data that must be analyzed. Machine-learning systems can be trained to find attacks that are similar to known attacks. This way, even the first intrusions of their kind can be detected and better security measures developed.

The sophistication of threats has also increased substantially. Sophisticated zero-day attacks may go undetected for months at a time. Attack patterns may be engineered to take place over extended periods of time, making them very difficult for traditional intrusion-detection technologies to detect. Even worse, new attack tools and strategies can now be developed using adversarial machine-learning techniques, requiring a rapid co-evolution of defenses that matches the speed and sophistication of machine-learning-based offensive techniques. Motivated by this, this Special Issue aims to provide a forum for people from academia and industry to communicate their latest results on theoretical advances and industrial case studies that combine machine-learning techniques, such as reinforcement learning, adversarial machine learning, and deep learning, with significant problems in cybersecurity. Research papers may focus on offensive or defensive applications of machine learning to security. The potential topics of interest to this Special Issue are listed below. Submissions may present original research, rigorous dataset collection and benchmarking, or critical surveys.

Potential topics include but are not limited to:

  • Adversarial training and defensive distillation;
  • Attacks against machine learning;
  • Black-box attacks against machine learning;
  • Challenges of machine learning for cybersecurity;
  • Ethics of machine learning for cybersecurity applications;
  • Generative adversarial models;
  • Graph representation learning;
  • Machine-learning forensics;
  • Machine-learning threat intelligence;
  • Malware detection;
  • Neural graph learning;
  • One-shot learning; continuous learning;
  • Scalable machine learning for cybersecurity;
  • Steganography and steganalysis based on machine-learning techniques;
  • Strengths and shortcomings of machine learning for cybersecurity.

Prof. Dr. Luis Javier Garcia Villalba
Prof. Dr. Rafael T. de Sousa Jr.
Dr. Robson de Oliveira Albuquerque
Dr. Ana Lucila Sandoval Orozco
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research

Article
Multi-Classifier of DDoS Attacks in Computer Networks Built on Neural Networks
Appl. Sci. 2021, 11(22), 10609; https://doi.org/10.3390/app112210609 - 11 Nov 2021
Abstract
The widespread reliance, across many areas of computer science and business, on computer networks that fulfill specific and critical tasks has generated a need for their maintenance and optimal operability. Distributed denial of service (DDoS) is a frequent threat to computer networks because of the disruption it causes to the services they provide, resulting in instability and/or inoperability of the network. There are different classes of DDoS attacks, each with a different mode of operation, so detecting them has become a difficult task for network monitoring and control systems. The objective of this work is to select a dataset that represents DDoS attack events, treat it in a preprocessing phase, and then build a sequential neural network model for multi-class classification in order to identify and classify the various types of DDoS attacks. During this research, the CIC-DDoS2019 dataset was used, and our results were compared with previous works on the same dataset. Those works proposed binary classification approaches, whereas our approach is based on multi-class classification. Our proposed model achieved around 94% in metrics such as precision, accuracy, recall, and F1 score. The added value of multi-class classification is identified and compared with the binary classifications of the models presented in previous works.
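The abstract describes preprocessing flow features from CIC-DDoS2019 and training a sequential neural network for multi-class classification of DDoS attack types. The sketch below is a hypothetical illustration of that pipeline, not the authors' code: the layer sizes, preprocessing steps, and function names are assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's model): a sequential
# multi-class classifier over preprocessed CIC-DDoS2019 flow features.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler

def build_model(n_features: int, n_classes: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),  # one unit per attack class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def train(X: np.ndarray, y: np.ndarray) -> tf.keras.Model:
    """X: numeric flow features; y: string labels (benign + each DDoS attack type)."""
    X = StandardScaler().fit_transform(X)      # simple preprocessing step
    y_enc = LabelEncoder().fit_transform(y)    # map class names to integer ids
    X_tr, X_te, y_tr, y_te = train_test_split(X, y_enc, test_size=0.2, stratify=y_enc)
    model = build_model(X.shape[1], len(np.unique(y_enc)))
    model.fit(X_tr, y_tr, epochs=10, batch_size=256, validation_data=(X_te, y_te))
    return model
```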

Article
Universal Adversarial Attack via Conditional Sampling for Text Classification
Appl. Sci. 2021, 11(20), 9539; https://doi.org/10.3390/app11209539 - 14 Oct 2021
Abstract
Despite deep neural networks (DNNs) having achieved impressive performance in various domains, it has been revealed that DNNs are vulnerable to adversarial examples, which are maliciously crafted by adding human-imperceptible perturbations to an original sample to cause wrong outputs from the DNN. Encouraged by extensive research on adversarial examples in computer vision, there has been growing interest in designing adversarial attacks for Natural Language Processing (NLP) tasks. However, adversarial attacks on NLP are challenging because text is discrete data and a small perturbation can bring a notable shift to the original input. In this paper, we propose a novel method, based on conditional BERT sampling with multiple standards, for generating universal adversarial perturbations: input-agnostic sequences of words that can be concatenated to any input in order to produce a specific prediction. Our universal adversarial attack creates triggers that appear closer to natural phrases yet fool sentiment classifiers when added to benign inputs. Based on automatic detection metrics and human evaluations, the adversarial attack we developed dramatically reduces the accuracy of the model on classification tasks, and the trigger is less easily distinguished from natural text. Experimental results demonstrate that our method crafts higher-quality adversarial examples than baseline methods. Further experiments show that our method has high transferability. Our goal is to show that adversarial attacks are more difficult to detect than previously thought, enabling the design of appropriate defenses.
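To make the notion of a universal (input-agnostic) trigger concrete, the hypothetical snippet below shows only the evaluation step: a fixed word sequence is concatenated to every input and the resulting accuracy drop is measured. It does not implement the paper's conditional BERT sampling procedure; the function names and example trigger are illustrative assumptions.

```python
# Hypothetical evaluation of a universal adversarial trigger: the same fixed
# phrase is prepended to every test input, and we measure how much the
# classifier's accuracy drops compared to the clean inputs.
from typing import Callable, List

def attacked_accuracy(classify: Callable[[str], int],
                      texts: List[str],
                      labels: List[int],
                      trigger: str = "") -> float:
    """Accuracy of `classify` when `trigger` is concatenated to each input."""
    correct = 0
    for text, label in zip(texts, labels):
        perturbed = f"{trigger} {text}".strip()   # input-agnostic: same trigger for all inputs
        if classify(perturbed) == label:
            correct += 1
    return correct / max(len(texts), 1)

# Usage sketch (assumes an existing sentiment classifier `classify`):
# clean_acc = attacked_accuracy(classify, texts, labels)
# trig_acc  = attacked_accuracy(classify, texts, labels, trigger="suddenly wonderful plot")
# The larger the drop (clean_acc - trig_acc), the stronger the universal attack.
```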

Article
Autonomous Penetration Testing Based on Improved Deep Q-Network
Appl. Sci. 2021, 11(19), 8823; https://doi.org/10.3390/app11198823 - 23 Sep 2021
Abstract
Penetration testing is an effective way to test and evaluate cybersecurity by simulating a cyberattack. However, traditional methods rely heavily on domain expert knowledge, which requires prohibitive labor and time costs. Autonomous penetration testing is a more efficient and intelligent way to solve this problem. In this paper, we model penetration testing as a Markov decision process and use reinforcement learning for autonomous penetration testing in large-scale networks. We propose an improved deep Q-network (DQN), named NDSPI-DQN, to address the sparse-reward and large-action-space problems in large-scale scenarios. First, we integrate five extensions to DQN, namely noisy nets, soft Q-learning, dueling architectures, prioritized experience replay, and an intrinsic curiosity module, to improve exploration efficiency. Second, we decouple the action and split the estimators of the neural network to calculate the two elements of an action separately, so as to reduce the action space. Finally, the performance of the algorithms is investigated in a range of scenarios. The experimental results demonstrate that our methods have better convergence and scaling performance.
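The abstract's key scaling idea is decoupling the action into two components scored by separate estimators on top of a dueling architecture. The sketch below is a hypothetical PyTorch illustration of that idea under assumed action components (target host and exploit); it is not the paper's NDSPI-DQN, and the layer sizes are arbitrary.

```python
# Hypothetical sketch of a decoupled dueling Q-network: instead of one Q-head
# over every (host, exploit) pair, two separate estimators score hosts and
# exploits, so the output size is n_hosts + n_exploits rather than their product.
import torch
import torch.nn as nn

class DecoupledDuelingQNet(nn.Module):
    def __init__(self, obs_dim: int, n_hosts: int, n_exploits: int, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)                  # shared state value V(s)
        self.host_adv = nn.Linear(hidden, n_hosts)         # advantages over target hosts
        self.exploit_adv = nn.Linear(hidden, n_exploits)   # advantages over exploits

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        v = self.value(h)
        # Dueling combination per action component: Q = V + (A - mean(A))
        q_host = v + self.host_adv(h) - self.host_adv(h).mean(dim=-1, keepdim=True)
        q_exploit = v + self.exploit_adv(h) - self.exploit_adv(h).mean(dim=-1, keepdim=True)
        return q_host, q_exploit

# Greedy action selection picks argmax over each head independently,
# which keeps the action space linear in the number of hosts and exploits.
```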

Article
Generating Network Intrusion Detection Dataset Based on Real and Encrypted Synthetic Attack Traffic
Appl. Sci. 2021, 11(17), 7868; https://doi.org/10.3390/app11177868 - 26 Aug 2021
Abstract
The lack of publicly available, up-to-date datasets contributes to the difficulty of evaluating intrusion detection systems. This paper introduces HIKARI-2021, a dataset that contains encrypted synthetic attacks and benign traffic. This dataset conforms to two sets of requirements: the content requirements, which focus on the produced dataset, and the process requirements, which focus on how the dataset is built. We compile these requirements to enable future dataset development, and we make the HIKARI-2021 dataset, along with the procedures to build it, available to the public.

Article
Visualized Malware Multi-Classification Framework Using Fine-Tuned CNN-Based Transfer Learning Models
Appl. Sci. 2021, 11(14), 6446; https://doi.org/10.3390/app11146446 - 13 Jul 2021
Abstract
There is massive growth in malicious software (malware) development, which causes substantial security threats to individuals and organizations. Cybersecurity researchers make continuous efforts to defend against these malware risks. This research aims to exploit the significant advantages of Transfer Learning (TL) and Fine-Tuning (FT) methods to introduce efficient malware detection in the context of imbalanced families, without the need for complex feature extraction or data augmentation processes. Therefore, this paper proposes a visualized malware multi-classification framework that addresses the challenges of false positives and imbalanced datasets by using fine-tuned convolutional neural network (CNN)-based TL models. The proposed framework comprises eight different FT CNN models: VGG16, AlexNet, DarkNet-53, DenseNet-201, Inception-V3, Places365-GoogleNet, ResNet-50, and MobileNet-V2. First, the binary files of different malware families are transformed into 2D images and then forwarded to the FT CNN models to detect and classify the malware families. The detection and classification performance was examined on the benchmark Malimg imbalanced dataset using comprehensive evaluation metrics. The evaluation results demonstrate the significance of the FT CNN models in detecting malware types with high accuracy, reaching 99.97%, which outperforms related machine learning (ML)- and deep learning (DL)-based malware multi-classification approaches tested on the same dataset.
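The visualization-plus-transfer-learning idea in this abstract can be sketched in a few lines: read a malware binary as bytes, reshape it into a grayscale image, and fine-tune a pretrained CNN on the resulting images. The code below is a hypothetical sketch using ResNet-50 (one of the eight models the paper evaluates); the image width, resize target, and training details are assumptions, not the authors' configuration.

```python
# Hypothetical sketch: malware binary -> 2D grayscale image -> fine-tuned CNN.
import numpy as np
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

def binary_to_image(path: str, width: int = 256) -> Image.Image:
    """Interpret each byte of the binary as one grayscale pixel."""
    with open(path, "rb") as f:
        data = np.frombuffer(f.read(), dtype=np.uint8).copy()
    data = np.pad(data, (0, (-len(data)) % width))   # pad so the bytes fill whole rows
    img = data.reshape(-1, width)
    return Image.fromarray(img, mode="L").resize((224, 224))

def build_finetune_model(n_families: int) -> nn.Module:
    """Pretrained ResNet-50 with its classification head replaced for malware families."""
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # torchvision >= 0.13
    model.fc = nn.Linear(model.fc.in_features, n_families)
    return model

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # pretrained CNNs expect 3 channels
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Usage sketch:
# x = preprocess(binary_to_image("sample.bin")).unsqueeze(0)
# logits = build_finetune_model(n_families=25)(x)   # Malimg contains 25 families
```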

Article
The Security Perspectives of Vehicular Networks: A Taxonomical Analysis of Attacks and Solutions
Appl. Sci. 2021, 11(10), 4682; https://doi.org/10.3390/app11104682 - 20 May 2021
Abstract
Vehicular networks combine transport systems and internet systems with the main motive of increasing passenger safety, although non-safety applications are also provided. The Internet of Things (IoT) has a subsection called Mobile Ad hoc Networks (MANETs), which in turn has a subsection called Vehicular Ad hoc Networks (VANETs). The Internet of Energy (IoE) is a new domain formed by electric vehicles connected through VANETs. As a large number of transport systems come into operation and various pervasive applications are designed to handle such networks, the increasing number of attacks in this domain is also creating threats. As IoE extends VANETs with electric cars, the future of VANETs may be in question if security measures are not adequate. The present survey attempts to cover various attack types on vehicular networks together with the existing security solutions available to handle these attacks. This study will help researchers obtain in-depth information about the taxonomy of vehicular network security issues, which can be explored further to design innovative solutions. This knowledge will also be helpful for new research directions, which in turn will help in the formulation of new strategies to handle attacks in a much better way.
