Machine Learning for Cybersecurity: Threats, Challenges, and Opportunities II

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 79021

Special Issue Editors


Prof. Dr. Luis Javier Garcia Villalba
Guest Editor
Group of Analysis, Security and Systems (GASS), Universidad Complutense de Madrid (UCM), 28040 Madrid, Spain
Interests: artificial intelligence; big data; computer networks; computer security; information theory; IoT; multimedia forensics

Prof. Dr. Rafael T. de Sousa Jr.
Guest Editor
Decision Technologies Laboratory - LATITUDE, Electrical Engineering Department (ENE), Institute of Technology (FT), University of Brasília (UnB), Brasília-DF, CEP 70910-900, Brazil
Interests: cyber; information and network security; distributed data services and machine learning for intrusion and fraud detection; signal processing; energy harvesting and security at the physical layer

Dr. Robson de Oliveira Albuquerque
Guest Editor
Department of Electrical Engineering, University of Brasília, Brasília 70910-900, Brazil
Interests: distributed systems; information security; network management; network security; network systems; open source software; wireless networks

Dr. Ana Lucila Sandoval Orozco
Guest Editor
Group of Analysis, Security and Systems (GASS), Universidad Complutense de Madrid (UCM), 28040 Madrid, Spain
Interests: computer and network security; multimedia forensics; error-correcting codes; information theory

Special Issue Information

Dear Colleagues,

Cybersecurity has become a major priority for every organization. The right controls and procedures must be put in place to detect potential attacks and protect against them. However, the number of cyber-attacks will always outpace the number of people trying to defend against them. New threats are discovered on a daily basis, making it harder for current solutions to cope with the large amount of data to be analyzed. Machine-learning systems can be trained to find attacks that are similar to known ones. In this way, even the first intrusion of its kind can be detected and better security measures can be developed.

The sophistication of threats has also increased substantially. Sophisticated zero-day attacks may go undetected for months at a time. Attack patterns may be engineered to take place over extended periods of time, making them very difficult for traditional intrusion-detection technologies to detect. Even worse, new attack tools and strategies can now be developed using adversarial machine learning, requiring a rapid co-evolution of defenses that matches the speed and sophistication of machine-learning-based offensive techniques. Based on this motivation, this Special Issue aims to provide a forum for people from academia and industry to communicate their latest results on theoretical advances and industrial case studies that combine machine-learning techniques such as reinforcement learning, adversarial machine learning, and deep learning with significant problems in cybersecurity. Research papers may focus on offensive or defensive applications of machine learning to security. The potential topics of interest to this Special Issue are listed below. Submissions may comprise original research, rigorous dataset collection and benchmarking, or critical surveys.

Potential topics include but are not limited to:

  • Adversarial training and defensive distillation;
  • Attacks against machine learning;
  • Black-box attacks against machine learning;
  • Challenges of machine learning for cybersecurity;
  • Ethics of machine learning for cybersecurity applications;
  • Generative adversarial models;
  • Graph representation learning;
  • Machine-learning forensics;
  • Machine-learning threat intelligence;
  • Malware detection;
  • Neural graph learning;
  • One-shot learning; continuous learning;
  • Scalable machine learning for cybersecurity;
  • Steganography and steganalysis based on machine-learning techniques;
  • Strengths and shortcomings of machine learning for cybersecurity.

Prof. Dr. Luis Javier Garcia Villalba
Prof. Dr. Rafael T. de Sousa Jr.
Dr. Robson de Oliveira Albuquerque
Dr. Ana Lucila Sandoval Orozco
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and using the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (13 papers)


Research


16 pages, 4693 KiB  
Article
An Optimized Gradient Boost Decision Tree Using Enhanced African Buffalo Optimization Method for Cyber Security Intrusion Detection
by Shailendra Mishra
Appl. Sci. 2022, 12(24), 12591; https://0-doi-org.brum.beds.ac.uk/10.3390/app122412591 - 08 Dec 2022
Cited by 6 | Viewed by 1522
Abstract
The cyber security field has witnessed several intrusion detection systems (IDSs) that are critical to the detection of malicious activities in network traffic. Much research has been conducted in this field over the last couple of years; however, network attacks are increasing in both volume and diversity. The objective of this research work is to introduce new IDSs based on a combination of Genetic Algorithms (GAs) and Optimized Gradient Boost Decision Trees (OGBDTs). To improve classification, enhanced African Buffalo Optimizations (EABOs) are used. The OGBDT-IDS comprises data exploration, preprocessing, standardization, and feature rating/selection modules. In high-dimensional data, GAs are appropriate tools for selecting features. Gradient-boosted decision trees (GBDTs) are used as the base learner, and the predictions of each new tree are added to the ensemble. The experimental results demonstrate that the proposed methods improve cyber intrusion detection for unseen and new cases. Based on performance evaluations, the proposed OGBDT-IDS performs better than traditional machine-learning techniques (MLTs). Performance is evaluated by comparing accuracy, precision, recall, and F-score on the UNBS-NB 15, KDD 99, and CICIDS2018 datasets. The proposed IDS achieves the highest attack detection rates and can predict attacks on all datasets in the least amount of time.
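
As a rough illustration of the kind of pipeline the abstract describes (not the authors' code), the sketch below wraps a tiny genetic-algorithm-style feature-selection loop around a gradient-boosted decision tree; the synthetic dataset, population size, generation count and mutation rate are all illustrative assumptions.

```python
# Minimal sketch: GA-style feature selection around a GBDT classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=30, n_informative=10, random_state=0)

def fitness(mask):
    # Fitness = cross-validated accuracy of a GBDT on the selected features.
    if mask.sum() == 0:
        return 0.0
    clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# Tiny genetic loop: truncation selection plus bit-flip mutation.
pop = rng.integers(0, 2, size=(10, X.shape[1]))
for generation in range(5):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-4:]]              # keep the best 4 masks
    children = parents[rng.integers(0, 4, size=6)].copy()
    flips = rng.random(children.shape) < 0.1            # mutate ~10% of the bits
    children[flips] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```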

17 pages, 1194 KiB  
Article
Cascaded Reinforcement Learning Agents for Large Action Spaces in Autonomous Penetration Testing
by Khuong Tran, Maxwell Standen, Junae Kim, David Bowman, Toby Richer, Ashlesha Akella and Chin-Teng Lin
Appl. Sci. 2022, 12(21), 11265; https://0-doi-org.brum.beds.ac.uk/10.3390/app122111265 - 07 Nov 2022
Cited by 6 | Viewed by 2191
Abstract
Organised attacks on a computer system to test existing defences, i.e., penetration testing, have been used extensively to evaluate network security. However, penetration testing is a time-consuming process. Additionally, establishing a strategy that resembles a real cyber-attack typically requires in-depth knowledge of the cybersecurity domain. This paper presents a novel architecture, named deep cascaded reinforcement learning agents, or CRLA, that addresses large discrete action spaces in an autonomous penetration testing simulator, where the number of actions increases exponentially with the complexity of the designed cybersecurity network. Employing an algebraic action decomposition strategy, CRLA is shown to find the optimal attack policy in scenarios with large action spaces faster and more stably than a conventional deep Q-learning agent, which is commonly used to apply artificial intelligence to autonomous penetration testing.
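
A toy sketch of the underlying idea, under assumed sizes and without any environment or training loop (it is not the CRLA implementation): a first Q-network picks a group of actions and a second, conditioned on that choice, picks the concrete action, so neither network has to rank the full combinatorial action set.

```python
# Toy hierarchical action decomposition for large discrete action spaces.
import torch
import torch.nn as nn

STATE_DIM, N_GROUPS, ACTIONS_PER_GROUP = 64, 10, 50   # 500 flat actions overall

def q_net(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

group_q = q_net(STATE_DIM, N_GROUPS)                       # level 1: action group
action_q = q_net(STATE_DIM + N_GROUPS, ACTIONS_PER_GROUP)  # level 2: action in group

def select_action(state, eps=0.05):
    """Epsilon-greedy selection of a (group, action) pair."""
    if torch.rand(1).item() < eps:
        return (torch.randint(N_GROUPS, (1,)).item(),
                torch.randint(ACTIONS_PER_GROUP, (1,)).item())
    with torch.no_grad():
        g = group_q(state).argmax().item()                 # cascade level 1
        one_hot = torch.zeros(N_GROUPS)
        one_hot[g] = 1.0
        a = action_q(torch.cat([state, one_hot])).argmax().item()  # cascade level 2
    return g, a

state = torch.randn(STATE_DIM)
print(select_action(state))
```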

28 pages, 2639 KiB  
Article
Detecting Cryptojacking Web Threats: An Approach with Autoencoders and Deep Dense Neural Networks
by Aldo Hernandez-Suarez, Gabriel Sanchez-Perez, Linda K. Toscano-Medina, Jesus Olivares-Mercado, Jose Portillo-Portilo, Juan-Gerardo Avalos and Luis Javier García Villalba
Appl. Sci. 2022, 12(7), 3234; https://0-doi-org.brum.beds.ac.uk/10.3390/app12073234 - 22 Mar 2022
Cited by 10 | Viewed by 4793
Abstract
With the growing popularity of cryptocurrencies, which are an important part of day-to-day transactions over the Internet, interest in the so-called cryptomining service has attracted investors who wish to quickly earn profits by contributing computing power to the validation of transactional records on the blockchain network. Since most users cannot afford the cost of specialized or standardized mining hardware, new techniques have been developed to make mining easier and to minimize the computational cost required. Developers of large cryptocurrency houses have made available executable binaries and, mainly, browser-side scripts in order to tap into users' collective resources and complete the calculation of the puzzles required for a proof of work. However, malicious actors have taken advantage of this capability to insert malicious scripts and mine cryptocurrency illegally, without the user's knowledge. This cyber-attack, also known as cryptojacking, is stealthy and difficult to analyze; consequently, solutions based on anti-malware extensions, blocklists, or disabling JavaScript, among others, are not sufficient for accurate detection, creating a gap in multi-layer security mechanisms. Although alternative solutions exist in the state of the art, mainly based on machine-learning techniques, an important open issue is still the correct characterization of network and host samples in the face of ever-new tampering and obfuscation techniques. This paper develops a method that fingerprints potentially malicious sites, which are then characterized by an autoencoding algorithm that preserves the most informative features of the infection traces, thus maximizing the classification power of a deep dense neural network.
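
A minimal sketch of the two-stage design described above, with assumed feature dimensions, layer sizes and random stand-in data rather than real cryptojacking traces: an autoencoder compresses the samples and a dense classifier operates on the compressed representation.

```python
# Autoencoder bottleneck feeding a dense classification head (illustrative only).
import torch
import torch.nn as nn

N_FEATURES, LATENT = 120, 16

autoencoder = nn.Sequential(                      # encoder + decoder
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, LATENT), nn.ReLU(),             # bottleneck keeps the "best" information
    nn.Linear(LATENT, 64), nn.ReLU(),
    nn.Linear(64, N_FEATURES),
)
encoder = autoencoder[:4]                         # reuse the encoder half downstream

classifier = nn.Sequential(                       # deep dense classification head
    nn.Linear(LATENT, 32), nn.ReLU(),
    nn.Linear(32, 2),                             # benign vs. cryptojacking
)

x = torch.randn(8, N_FEATURES)                    # fake batch of traffic/host features
reconstruction_loss = nn.functional.mse_loss(autoencoder(x), x)  # AE training signal
logits = classifier(encoder(x))                   # downstream detection
print(reconstruction_loss.item(), logits.shape)
```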

19 pages, 902 KiB  
Article
Password Guessability as a Service (PGaaS)
by Juan Bojato, Daniel Donado, Miguel Jimeno, Giovanni Moreno and Ricardo Villanueva-Polanco
Appl. Sci. 2022, 12(3), 1562; https://0-doi-org.brum.beds.ac.uk/10.3390/app12031562 - 31 Jan 2022
Cited by 2 | Viewed by 3067
Abstract
This paper presents an adaptable password guessability service suited to different password generators, according to what a user might need when using such a service. In particular, we introduce a flexible cloud-based software architecture engineered to provide an efficient and robust password guessability service that benefits from all the features and goals expected of cloud applications. This architecture comprises several components, combining a synthetic dataset generator, realized via a generative adversarial network (GAN) that may learn the distribution of passwords from a given dictionary and generate high-quality password guesses, with a password guessability estimator realized via a password strength estimation algorithm. In addition to detailing the architecture's components, we run a performance evaluation of its key components, obtaining promising results. Finally, the complete application is delivered and may be used to estimate the strength of a password and the time an average computer would take to enumerate it.
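
For flavor, a back-of-the-envelope sketch of the kind of output such a service can return: a naive charset-based guess count and the time an average machine would need to enumerate it. This is a simplification for illustration, not the paper's GAN or strength-estimation algorithm, and the guesses-per-second figure is an assumption.

```python
# Naive guessability estimate: charset size ** length / assumed cracking rate.
import string

GUESSES_PER_SECOND = 1e10        # assumed offline cracking rate of an "average" rig

def naive_guess_count(password: str) -> float:
    charset = 0
    charset += 26 if any(c.islower() for c in password) else 0
    charset += 26 if any(c.isupper() for c in password) else 0
    charset += 10 if any(c.isdigit() for c in password) else 0
    charset += len(string.punctuation) if any(c in string.punctuation for c in password) else 0
    return float(max(charset, 1)) ** len(password)

def time_to_enumerate(password: str) -> str:
    seconds = naive_guess_count(password) / GUESSES_PER_SECOND
    return f"{seconds:.3g} s (~{seconds / 86400:.3g} days)"

for pw in ["123456", "Tr0ub4dor&3"]:
    print(pw, "->", time_to_enumerate(pw))
```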

18 pages, 1533 KiB  
Article
Methodological Framework to Collect, Process, Analyze and Visualize Cyber Threat Intelligence Data
by Lucas José Borges Amaro, Bruce William Percilio Azevedo, Fabio Lucio Lopes de Mendonca, William Ferreira Giozza, Robson de Oliveira Albuquerque and Luis Javier García Villalba
Appl. Sci. 2022, 12(3), 1205; https://0-doi-org.brum.beds.ac.uk/10.3390/app12031205 - 24 Jan 2022
Cited by 9 | Viewed by 5039
Abstract
Cyber attacks have increased in frequency in recent years, affecting small, medium and large companies and creating an urgent need for tools capable of helping to mitigate such threats. With this growing number of cyber attacks comes a large amount of threat data from heterogeneous sources that needs to be ingested, processed and analyzed in order to obtain useful insights for mitigation. This study proposes a methodological framework to collect, organize, filter, share and visualize cyber-threat data to mitigate attacks and fix vulnerabilities, based on an eight-step cyber threat intelligence model with timeline visualization of threat information and analytic data insights. We developed a tool with which a cybersecurity analyst can insert threat data, analyze them and create a timeline to obtain insights and better contextualize a threat. The results show that the framework facilitates understanding of the context in which threats occur, rendering the mitigation of vulnerabilities more effective.
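
A small illustrative sketch of the collect/normalize/timeline portion of such a workflow; the record fields and daily grouping are assumptions, not the paper's schema or eight-step model.

```python
# Tiny CTI data model: normalized threat records grouped into a daily timeline.
from dataclasses import dataclass
from datetime import datetime, timezone
from collections import defaultdict

@dataclass
class ThreatRecord:
    source: str          # feed or analyst that produced the record
    indicator: str       # IoC: hash, domain, IP, CVE id, ...
    category: str        # e.g. "phishing", "ransomware", "vulnerability"
    observed_at: datetime

def timeline(records):
    """Group normalized records by day so an analyst can read them as a timeline."""
    buckets = defaultdict(list)
    for r in sorted(records, key=lambda r: r.observed_at):
        buckets[r.observed_at.date()].append(f"{r.category}:{r.indicator} ({r.source})")
    return dict(buckets)

records = [
    ThreatRecord("osint-feed", "evil.example.com", "phishing",
                 datetime(2022, 1, 3, 9, tzinfo=timezone.utc)),
    ThreatRecord("ids", "CVE-2021-44228", "vulnerability",
                 datetime(2022, 1, 3, 15, tzinfo=timezone.utc)),
]
print(timeline(records))
```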

16 pages, 509 KiB  
Article
On the Detection Capabilities of Signature-Based Intrusion Detection Systems in the Context of Web Attacks
by Jesús Díaz-Verdejo, Javier Muñoz-Calle, Antonio Estepa Alonso, Rafael Estepa Alonso and Germán Madinabeitia
Appl. Sci. 2022, 12(2), 852; https://0-doi-org.brum.beds.ac.uk/10.3390/app12020852 - 14 Jan 2022
Cited by 22 | Viewed by 4336
Abstract
Signature-based Intrusion Detection Systems (SIDS) play a crucial role within the arsenal of security components of most organizations. They can find traces of known attacks in network traffic or host events for which patterns or signatures have been pre-established. SIDS include standard packages of detection rulesets, but only the rules suited to the operational environment should be activated for optimal performance. However, some organizations might skip this tuning process and instead activate default off-the-shelf rulesets without understanding their implications and trade-offs. In this work, we help gain insight into the consequences of using predefined rulesets on the performance of SIDS. We experimentally explore the performance of three SIDS in the context of web attacks. In particular, we gauge the detection rate obtained with predefined subsets of rules for Snort, ModSecurity and Nemesida using seven attack datasets. We also determine the precision and alert rate of each detector in a real-life case using a large trace from a public web server. Results show that the maximum detection rate achieved by the SIDS under test is insufficient to protect systems effectively and is lower than expected for known attacks. Our results also indicate that the choice of predefined settings activated on each detector strongly influences its detection capability and false alarm rate. Snort and ModSecurity scored either a very poor detection rate (activating the less sensitive predefined ruleset) or a very poor precision (activating the full ruleset). We also found that using several SIDS for a cooperative decision can improve the precision or the detection rate, but not both. Consequently, it is necessary to reflect upon the role of these open-source SIDS with default configurations as core elements of protection in the context of web attacks. Finally, we provide an efficient method for systematically determining which rules to deactivate from a ruleset in order to significantly reduce the false alarm rate for a target operational environment. We tested our approach using Snort's ruleset on our real-life trace, increasing the precision from 0.015 to 1 in less than 16 h of work.
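
The tuning step can be pictured with the hedged sketch below: given per-rule alert counts on a trace assumed to be (mostly) benign, deactivate the rules responsible for the bulk of the false alarms. The counts and coverage threshold are invented, and this is not the paper's exact procedure.

```python
# Rank rules by false alarms on a benign trace and pick the noisiest ones to disable.
from collections import Counter

false_alarms_per_rule = Counter({        # rule_id -> alerts raised on benign traffic
    "sid:2101201": 5400,
    "sid:2012887": 1300,
    "sid:2019401": 12,
    "sid:2022028": 3,
})

def rules_to_deactivate(counts: Counter, coverage: float = 0.95):
    """Return the smallest set of rules accounting for `coverage` of the false alarms."""
    total = sum(counts.values())
    selected, accumulated = [], 0
    for rule, n in counts.most_common():
        if accumulated / total >= coverage:
            break
        selected.append(rule)
        accumulated += n
    return selected

print(rules_to_deactivate(false_alarms_per_rule))
```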

15 pages, 2022 KiB  
Article
Multi-Classifier of DDoS Attacks in Computer Networks Built on Neural Networks
by Andrés Chartuni and José Márquez
Appl. Sci. 2021, 11(22), 10609; https://0-doi-org.brum.beds.ac.uk/10.3390/app112210609 - 11 Nov 2021
Cited by 18 | Viewed by 3011
Abstract
The strong commitment in different areas of computer science to the study of computer networks used to fulfill specific, business-critical tasks has generated a need for their maintenance and optimal operability. Distributed denial of service (DDoS) is a frequent threat to computer networks because of the disruption it causes to the services they provide. This disruption results in the instability and/or inoperability of the network. There are different classes of DDoS attacks, each with a different mode of operation, so detecting them has become a difficult task for network monitoring and control systems. The objective of this work is to explore and select a dataset that represents DDoS attack events, to treat it in a preprocessing phase and, subsequently, to generate a multi-class classification model based on sequential neural networks, in order to identify and classify the various types of DDoS attacks. The results were compared with previous works that treat the same dataset, contrasting their classification methods against ours. The CIC DDoS2019 dataset was used in this research. Previous works based on this dataset proposed a binary classification approach, whereas our approach is based on multi-class classification. Our proposed model was capable of achieving around 94% in metrics such as precision, accuracy, recall and F1 score. The added value of multi-class classification in this work is identified and compared with the binary classifications obtained using the models presented in previous works.
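
A minimal sketch of a sequential multi-class classifier in the spirit of the model described above; the layer sizes, class count and random stand-in data are assumptions, and the CIC DDoS2019 preprocessing is not reproduced.

```python
# Sequential multi-class classifier over preprocessed flow features.
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES = 80, 7            # e.g. benign + several DDoS variants

model = nn.Sequential(
    nn.Linear(N_FEATURES, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, N_CLASSES),            # one logit per traffic class
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(256, N_FEATURES)                 # fake preprocessed flow features
y = torch.randint(0, N_CLASSES, (256,))          # fake labels

for epoch in range(3):                           # tiny training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```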

13 pages, 2604 KiB  
Article
Universal Adversarial Attack via Conditional Sampling for Text Classification
by Yu Zhang, Kun Shao, Junan Yang and Hui Liu
Appl. Sci. 2021, 11(20), 9539; https://0-doi-org.brum.beds.ac.uk/10.3390/app11209539 - 14 Oct 2021
Viewed by 1759
Abstract
Despite deep neural networks (DNNs) having achieved impressive performance in various domains, it has been revealed that DNNs are vulnerable to adversarial examples, which are maliciously crafted by adding human-imperceptible perturbations to an original sample in order to cause a wrong output from the DNN. Encouraged by the large body of research on adversarial examples in computer vision, there has been growing interest in designing adversarial attacks for Natural Language Processing (NLP) tasks. However, adversarial attacks on NLP are challenging because text is discrete data and a small perturbation can bring a notable shift to the original input. In this paper, we propose a novel method, based on conditional BERT sampling with multiple criteria, for generating universal adversarial perturbations: input-agnostic sequences of words that can be concatenated to any input in order to produce a specific prediction. Our universal adversarial attack creates triggers that appear closer to natural phrases and yet fool sentiment classifiers when added to benign inputs. Based on automatic detection metrics and human evaluations, the adversarial attack we developed dramatically reduces the accuracy of the model on classification tasks, and the trigger is less easily distinguished from natural text. Experimental results demonstrate that our method crafts higher-quality adversarial examples than baseline methods. Further experiments show that our method has high transferability. Our goal is to show that adversarial attacks are more difficult to detect than previously thought, and to enable the design of appropriate defenses.
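
A highly simplified sketch of the general recipe, not the paper's algorithm: sample natural-looking candidate trigger words from a masked language model and keep the one that most degrades a sentiment classifier when prepended to benign inputs. The prompt, the victim model and the greedy single-word search are all assumptions, and the snippet requires the transformers package (models are downloaded on first use).

```python
# Greedy search for a natural-looking universal trigger word.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
sentiment = pipeline("sentiment-analysis")        # default SST-2 model (assumed victim)

benign_inputs = ["I really enjoyed this movie.", "The acting was wonderful."]

# Step 1: conditionally sample candidate trigger words from BERT.
candidates = [p["token_str"] for p in fill_mask("the movie was [MASK] overall.")]

# Step 2: greedily pick the candidate that lowers positive confidence the most.
def positive_score(text):
    out = sentiment(text)[0]
    return out["score"] if out["label"] == "POSITIVE" else 1.0 - out["score"]

baseline = sum(positive_score(t) for t in benign_inputs)
best = min(candidates, key=lambda w: sum(positive_score(f"{w} {t}") for t in benign_inputs))
print("trigger:", best, "score drop:",
      baseline - sum(positive_score(f"{best} {t}") for t in benign_inputs))
```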

15 pages, 1365 KiB  
Article
Autonomous Penetration Testing Based on Improved Deep Q-Network
by Shicheng Zhou, Jingju Liu, Dongdong Hou, Xiaofeng Zhong and Yue Zhang
Appl. Sci. 2021, 11(19), 8823; https://0-doi-org.brum.beds.ac.uk/10.3390/app11198823 - 23 Sep 2021
Cited by 30 | Viewed by 3804
Abstract
Penetration testing is an effective way to test and evaluate cybersecurity by simulating a cyberattack. However, traditional methods rely heavily on domain expert knowledge, which incurs prohibitive labor and time costs. Autonomous penetration testing is a more efficient and intelligent way to solve this problem. In this paper, we model penetration testing as a Markov decision process and use reinforcement learning for autonomous penetration testing in large-scale networks. We propose an improved deep Q-network (DQN), named NDSPI-DQN, to address the sparse reward and large action space problems in large-scale scenarios. First, we integrate five extensions of DQN (noisy nets, soft Q-learning, dueling architectures, prioritized experience replay, and an intrinsic curiosity model) to improve exploration efficiency. Second, we decouple the action and split the estimators of the neural network to calculate the two elements of an action separately, so as to reduce the action space. Finally, the performance of the algorithms is investigated in a range of scenarios. The experimental results demonstrate that our methods have better convergence and scaling performance.
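
The action-decoupling idea can be sketched as below, with assumed sizes, no training loop and none of the five DQN extensions: two output heads score the two elements of an action separately, and the composite Q-value is their sum.

```python
# Decoupled Q-value heads: Q[host, exploit] = Q_host + Q_exploit.
import torch
import torch.nn as nn

STATE_DIM, N_HOSTS, N_EXPLOITS = 64, 100, 30      # 3000 composite actions

class DecoupledDQN(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU())
        self.host_head = nn.Linear(128, N_HOSTS)        # which machine to target
        self.exploit_head = nn.Linear(128, N_EXPLOITS)  # which action to run on it

    def forward(self, state):
        h = self.trunk(state)
        # Broadcast-add the two heads into a full (host, exploit) Q-table.
        return self.host_head(h).unsqueeze(-1) + self.exploit_head(h).unsqueeze(-2)

q = DecoupledDQN()
state = torch.randn(4, STATE_DIM)                  # batch of 4 states
q_values = q(state)                                # shape: (4, N_HOSTS, N_EXPLOITS)
flat_best = q_values.flatten(1).argmax(dim=1)
host, exploit = flat_best // N_EXPLOITS, flat_best % N_EXPLOITS
print(q_values.shape, host[0].item(), exploit[0].item())
```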

17 pages, 367 KiB  
Article
Generating Network Intrusion Detection Dataset Based on Real and Encrypted Synthetic Attack Traffic
by Andrey Ferriyan, Achmad Husni Thamrin, Keiji Takeda and Jun Murai
Appl. Sci. 2021, 11(17), 7868; https://0-doi-org.brum.beds.ac.uk/10.3390/app11177868 - 26 Aug 2021
Cited by 29 | Viewed by 12142
Abstract
The lack of publicly available up-to-date datasets contributes to the difficulty of evaluating intrusion detection systems. This paper introduces HIKARI-2021, a dataset that contains encrypted synthetic attacks and benign traffic. This dataset conforms to two sets of requirements: the content requirements, which focus on the produced dataset, and the process requirements, which focus on how the dataset is built. We compile these requirements to enable future dataset developments, and we make the HIKARI-2021 dataset, along with the procedures to build it, available to the public.

21 pages, 1798 KiB  
Article
Visualized Malware Multi-Classification Framework Using Fine-Tuned CNN-Based Transfer Learning Models
by Walid El-Shafai, Iman Almomani and Aala AlKhayer
Appl. Sci. 2021, 11(14), 6446; https://0-doi-org.brum.beds.ac.uk/10.3390/app11146446 - 13 Jul 2021
Cited by 29 | Viewed by 3899
Abstract
There is massive growth in malicious software (malware) development, which causes substantial security threats to individuals and organizations. Cybersecurity researchers make continuous efforts to defend against these malware risks. This research aims to exploit the significant advantages of Transfer Learning (TL) and Fine-Tuning (FT) methods to introduce efficient malware detection in the context of imbalanced families, without the need to apply complex feature extraction or data augmentation processes. Therefore, this paper proposes a visualized malware multi-classification framework that addresses the challenges of false positives and imbalanced datasets through fine-tuned convolutional neural network (CNN)-based TL models. The proposed framework comprises eight different FT CNN models, including VGG16, AlexNet, DarkNet-53, DenseNet-201, Inception-V3, Places365-GoogleNet, ResNet-50, and MobileNet-V2. First, the binary files of different malware families are transformed into 2D images and then forwarded to the FT CNN models to detect and classify the malware families. The detection and classification performance was examined on the benchmark, imbalanced Malimg dataset using comprehensive evaluation metrics. The evaluation results demonstrate the significance of the FT CNN models in detecting malware types with high accuracy, reaching 99.97%, which also outperforms related machine learning (ML)- and deep learning (DL)-based malware multi-classification approaches tested on the same malware dataset.
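
A hedged sketch of the two steps the abstract outlines, under assumptions (image width, family count, dummy bytes, ResNet-50 standing in for the eight models): a binary is reshaped into a grayscale image and a pretrained CNN is fine-tuned by replacing its final layer.

```python
# Malware binary -> grayscale image -> fine-tuned pretrained CNN (illustrative).
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms    # requires torchvision >= 0.13
from PIL import Image

def binary_to_image(raw: bytes, width: int = 256) -> Image.Image:
    """Pad the byte stream and reshape it into a 2D grayscale image."""
    data = np.frombuffer(raw, dtype=np.uint8)
    rows = int(np.ceil(len(data) / width))
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[: len(data)] = data
    return Image.fromarray(padded.reshape(rows, width), mode="L")

NUM_FAMILIES = 25                                   # e.g. the Malimg family count
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():                        # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_FAMILIES)   # new trainable head
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),    # replicate gray channel for RGB input
    transforms.ToTensor(),
])

img = binary_to_image(b"\x4d\x5a" + bytes(range(256)) * 64)  # dummy "binary"
logits = model(preprocess(img).unsqueeze(0))
print(logits.shape)                                  # torch.Size([1, 25])
```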

25 pages, 1601 KiB  
Article
The Security Perspectives of Vehicular Networks: A Taxonomical Analysis of Attacks and Solutions
by Amandeep Verma, Rahul Saha, Gulshan Kumar and Tai-hoon Kim
Appl. Sci. 2021, 11(10), 4682; https://0-doi-org.brum.beds.ac.uk/10.3390/app11104682 - 20 May 2021
Cited by 22 | Viewed by 4070
Abstract
Vehicular networks combine transport systems and Internet systems with the main aim of increasing passenger safety, although non-safety applications are also provided by vehicular networks. The Internet of Things (IoT) has a subsection called the Mobile Ad hoc Network (MANET), which in turn has a subsection called the Vehicular Ad hoc Network (VANET). The Internet of Energy (IoE) is a new domain formed by electric vehicles connected with VANETs. As a large number of transport systems come into operation and various pervasive applications are designed to handle such networks, the increasing number of attacks in this domain is also creating threats. As the IoE is connected to the extension of VANETs with electric cars, the future of VANETs may be in question if security measures are not sound. The present survey attempts to cover the various attack types on vehicular networks together with the existing security solutions available to handle these attacks. This study will help researchers obtain in-depth information about the taxonomy of vehicular network security issues, which can be explored further to design innovative solutions. This knowledge will also be helpful for new research directions, which in turn will help in the formulation of new strategies to handle attacks in a much better way.

Review


24 pages, 1935 KiB  
Review
Financial Fraud Detection Based on Machine Learning: A Systematic Literature Review
by Abdulalem Ali, Shukor Abd Razak, Siti Hajar Othman, Taiseer Abdalla Elfadil Eisa, Arafat Al-Dhaqm, Maged Nasser, Tusneem Elhassan, Hashim Elshafie and Abdu Saif
Appl. Sci. 2022, 12(19), 9637; https://0-doi-org.brum.beds.ac.uk/10.3390/app12199637 - 26 Sep 2022
Cited by 28 | Viewed by 25695
Abstract
Financial fraud, considered as the use of deceptive tactics to gain financial benefit, has recently become a widespread menace in companies and organizations. Conventional techniques such as manual verifications and inspections are imprecise, costly, and time-consuming for identifying such fraudulent activities. With the advent of artificial intelligence, machine-learning-based approaches can be used intelligently to detect fraudulent transactions by analyzing large amounts of financial data. Therefore, this paper presents a systematic literature review (SLR) that systematically reviews and synthesizes the existing literature on machine learning (ML)-based fraud detection. In particular, the review employs the Kitchenham approach, which uses well-defined protocols to extract and synthesize the relevant articles, and then reports the obtained results. Based on the specified search strategies, studies were gathered from popular electronic database libraries; after applying inclusion/exclusion criteria, 93 articles were chosen, synthesized, and analyzed. The review summarizes the popular ML techniques used for fraud detection, the most popular fraud types, and the evaluation metrics employed. The reviewed articles show that the support vector machine (SVM) and artificial neural network (ANN) are popular ML algorithms used for fraud detection, and that credit card fraud is the most popular fraud type addressed using ML techniques. The paper finally presents the main issues, gaps, and limitations in the financial fraud detection area and suggests possible directions for future research.
