Detecting and Preventing Deepfake Attacks

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (1 March 2022) | Viewed by 7489

Special Issue Editors


Dr. Yisroel Mirsky
Guest Editor
Cyber Security Research Center, Ben-Gurion University, P.O.B. 653, Beer-Sheva, Israel
Interests: deepfakes; offensive AI; intrusion detection

Prof. Dr. Yuval Elovici
Guest Editor
Cyber Security Research Center, Ben-Gurion University, P.O.B. 653, Beer-Sheva, Israel
Interests: malware detection; smartphone security; AI security; social network security

Special Issue Information

Dear Colleagues,

Today, deep learning can be used to generate virtually flawless media. This has led to the invention of ‘deepfakes’, where media of an individual is synthesized with full control over the individual’s face and voice. The most popular use of deepfakes is entertainment, where the face of a celebrity is swapped onto the body of an actor. Unfortunately, deepfake technologies have also been used for unethical and criminal purposes (e.g., the objectification of women and the impersonation of individuals to spread misinformation and perpetrate scams). Deepfakes have also extended to new domains such as finance and healthcare. We now find ourselves at the start of an era in which we can no longer implicitly trust any form of digital media, regardless of its source. Although this outlook is concerning, it also gives us the opportunity to advance our understanding of the threat and to develop effective countermeasures before the problem becomes mainstream.

We seek innovative works that can help society overcome the threat of deepfakes. Original research, critical reviews, and theoretical papers are all welcome. Topics include, but are not limited to:

  • Detection of
    • Reenactment;
    • Replacement;
    • Enhancement;
    • Voice cloning/conversion;
    • Stylized text (e.g., spear phishing);
    • Scene editing;
    • Real-time attacks;
  • Prevention by
    • Media provenance;
    • Pipeline attacks (anti-deepfakes);
    • Policy;
  • The ethics of deepfakes;
  • Social and psychological aspects of deepfakes.

Dr. Yisroel Mirsky
Prof. Dr. Yuval Elovici
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)

Research

32 pages, 912 KiB  
Article
Process-Driven Modelling of Media Forensic Investigations - Considerations on the Example of DeepFake Detection
by Christian Kraetzer, Dennis Siegel, Stefan Seidlitz and Jana Dittmann
Sensors 2022, 22(9), 3137; https://doi.org/10.3390/s22093137 - 20 Apr 2022
Cited by 6 | Viewed by 1911
Abstract
Academic research in media forensics mainly focuses on methods for detecting the traces or artefacts left by media manipulations in media objects. While the resulting detectors often achieve quite impressive detection performance when tested under lab conditions, hardly any have yet come close to the ultimate benchmark for any forensic method: courtroom readiness. This paper first discusses the different stakeholder perspectives in this field and then partly addresses the apparent gap between the academic research community and the requirements imposed on forensic practitioners. The intention is to facilitate the mutual understanding of these two classes of stakeholders and to assist with first steps towards closing this gap. To do so, a concept for modelling media forensic investigation pipelines is first derived from established guidelines. Then, the applicability of such modelling is illustrated on the example of a fusion-based media forensic investigation pipeline aimed at the detection of DeepFake videos, using five exemplary detectors (hand-crafted, in one case neural network supported) and testing two different fusion operators. At the end of the paper, the benefits of such a planned realisation of AI-based investigation methods are discussed and generalising effects are mapped out.
(This article belongs to the Special Issue Detecting and Preventing Deepfake Attacks)
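The fusion step described in the abstract above can be sketched as score-level fusion over several detectors. The following is a minimal illustrative example, not the paper's implementation: the detector scores, the two fusion operators (mean and max), and the threshold are all invented for illustration.

```python
# Hypothetical sketch of score-level fusion over several deepfake detectors.
# Each detector is assumed to emit a probability that the input is fake.

def fuse_mean(scores):
    """Average-score fusion: mean of per-detector fake-probabilities."""
    return sum(scores) / len(scores)

def fuse_max(scores):
    """Max-score fusion: flag if any single detector is confident."""
    return max(scores)

def decide(scores, fuse, threshold=0.5):
    """Return 'fake' if the fused score crosses the threshold."""
    return "fake" if fuse(scores) >= threshold else "real"

# Five illustrative detector outputs for one video (probability of 'fake'):
scores = [0.2, 0.9, 0.4, 0.3, 0.1]
print(decide(scores, fuse_mean))  # mean = 0.38 -> "real"
print(decide(scores, fuse_max))   # max  = 0.90 -> "fake"
```

The choice of operator changes the trade-off: mean fusion suppresses a single noisy detector, while max fusion catches manipulations that only one specialised detector can see.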

24 pages, 3976 KiB  
Article
Predicting Attack Pattern via Machine Learning by Exploiting Stateful Firewall as Virtual Network Function in an SDN Network
by Senthil Prabakaran, Ramalakshmi Ramar, Irshad Hussain, Balasubramanian Prabhu Kavin, Sultan S. Alshamrani, Ahmed Saeed AlGhamdi and Abdullah Alshehri
Sensors 2022, 22(3), 709; https://doi.org/10.3390/s22030709 - 18 Jan 2022
Cited by 34 | Viewed by 4283
Abstract
Decoupled data and control planes in Software Defined Networks (SDN) allow them to handle an increasing number of threats by limiting harmful network links at the switching stage. Network Function Virtualization (NFV) is designed to replace purpose-built network elements, such as storage, high-end servers, and network devices, with Virtualized Network Functions (VNFs). A Software Defined Network Function Virtualization (SDNFV) network is designed in this paper to boost network performance. Stateful firewall services are deployed as VNFs in the SDN network to offer security and improve network scalability. The SDN controller’s role is to develop a set of guidelines and rules to avoid hazardous network connectivity. Intruder attacks that employ numerous socket addresses cannot be adequately countered by these strategies alone, so machine learning algorithms are trained on conventional network threat intelligence data to identify potentially malicious links and probable attack targets. Bayesian Network (BayesNet), Naive Bayes, C4.5, and Decision Table (DT) algorithms are used to predict the target host that will be attacked. The experimental results show that the Bayesian Network algorithm achieved an average prediction accuracy of 92.87%, the Naive Bayes algorithm 87.81%, the C4.5 algorithm 84.92%, and the Decision Table algorithm 83.18%. The dataset, collected from nine honeypot servers, recorded 451 k login attempts from 178 countries, with over 70 k source IP addresses and 40 k source port addresses.
(This article belongs to the Special Issue Detecting and Preventing Deepfake Attacks)
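The prediction task in the abstract above, inferring which host an attacker will target from past connection records, can be sketched with a small categorical Naive Bayes classifier. This is an illustrative stand-in, not the paper's implementation: the feature names (source country, destination port), the honeypot-style records, and the smoothing scheme are all invented for the example.

```python
# Illustrative sketch: predicting the attacked target host from past
# connection records with a categorical Naive Bayes classifier.
from collections import Counter, defaultdict

def train(records):
    """records: list of (features_tuple, target). Returns NB count tables."""
    priors = Counter(target for _, target in records)
    cond = defaultdict(Counter)  # keyed by (feature_index, target)
    for feats, target in records:
        for i, value in enumerate(feats):
            cond[(i, target)][value] += 1
    return priors, cond

def predict(priors, cond, feats):
    """Pick the target with the highest smoothed Naive Bayes score."""
    best, best_score = None, float("-inf")
    total = sum(priors.values())
    for target, prior in priors.items():
        score = prior / total
        for i, value in enumerate(feats):
            counts = cond[(i, target)]
            # Add-one smoothing so unseen feature values don't zero the score.
            score *= (counts[value] + 1) / (sum(counts.values()) + 1)
        if score > best_score:
            best, best_score = target, score
    return best

# Invented honeypot-style records: (source country, destination port) -> host
data = [(("CN", 22), "hostA"), (("CN", 22), "hostA"),
        (("US", 80), "hostB"), (("RU", 22), "hostA"),
        (("US", 443), "hostB")]
priors, cond = train(data)
print(predict(priors, cond, ("CN", 22)))  # -> hostA
```

A real deployment would compare several such classifiers (as the paper does with BayesNet, C4.5, and Decision Table) and evaluate each on held-out honeypot data rather than the training records.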
