Deepfakes, Fake News and Multimedia Manipulation from Generation to Detection

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Multimedia Systems and Applications".

Deadline for manuscript submissions: closed (20 December 2022) | Viewed by 29921

Special Issue Editor


Dr. Zahid Akhtar, Guest Editor
Department of Network and Computer Security, State University of New York Polytechnic Institute, Utica, NY 13502, USA
Interests: machine learning and computer vision with applications to cybersecurity, biometrics, affect recognition, image and video processing, and perceptual-based audiovisual multimedia quality assessment

Special Issue Information

Dear Colleagues,

Machine-learning-based techniques are being utilized to generate hyper-realistic manipulated facial multimedia content, known as DeepFakes. While such technologies have positive potential for use in entertainment applications, their malevolent use can harm citizens and society as a whole by facilitating the creation of indecent content, the spread of fake news to subvert elections or undermine politics, bullying, and more effective social engineering to perpetrate financial fraud. In fact, it has been shown that manipulated facial multimedia content can deceive not only humans but also automated face-recognition-based biometric systems. The advent of advanced hardware, powerful smart devices, user-friendly apps (e.g., FaceApp and ZAO), and open-source ML code (e.g., Generative Adversarial Networks) has enabled even non-experts to effortlessly create manipulated facial multimedia content. In principle, face manipulation involves swapping two faces, modifying facial attributes (e.g., age and gender), morphing two different faces into one face, adding imperceptible perturbations (i.e., adversarial examples), synthetically generating faces, or animating/recreating facial expressions in face images/videos.
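One of the manipulation classes above, imperceptible adversarial perturbations, can be illustrated with a minimal FGSM-style sketch. Everything here is hypothetical and for illustration only: the "face matcher" is a toy linear score function with random weights, not any real biometric system.

```python
import numpy as np

# Toy linear "face matcher": higher score means "match".
# Weights, input, and epsilon are all made up for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # weights of the toy score function
x = rng.normal(size=64)   # a toy "face feature" vector

def score(v):
    return float(w @ v)

eps = 0.1
# For a linear model the gradient of the score w.r.t. the input is w;
# stepping along -sign(gradient) pushes the score down (an evasion attack)
# while changing each feature by at most eps.
x_adv = x - eps * np.sign(w)

print(score(x), score(x_adv))  # the adversarial score is strictly lower
```

Against a deep face-recognition network the same idea applies, with the gradient obtained by backpropagation; the perturbation stays small per pixel yet can flip the match decision.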

 Topics of interest of this Special Issue include, but are not limited to:

  • The generation of DeepFakes, face morphing, manipulation and adversarial attacks;
  • The generation of synthetic faces using ML/AI techniques, e.g., GANs;
  • The detection of DeepFakes, face morphing, manipulation and adversarial attacks, including generalizable systems;
  • The generation and detection of audio DeepFakes;
  • Novel datasets and experimental protocols to facilitate research in DeepFakes and face manipulations;
  • The formulation and extraction of fingerprints of DeepFake devices, platforms, and software/apps;
  • The robustness of face recognition systems (and humans) against DeepFakes, face morphing, manipulation, and adversarial attacks, including their vulnerabilities to digital face manipulations;
  • DeepFakes in the courtroom and in copyright law.

Dr. Zahid Akhtar
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • DeepFakes
  • digital face manipulations
  • digital forensics
  • fake news
  • multimedia manipulations
  • generative AI
  • security and privacy
  • information authenticity
  • face morphing attack
  • biometrics

Published Papers (3 papers)


Research


32 pages, 27008 KiB  
Article
Multiverse: Multilingual Evidence for Fake News Detection
by Daryna Dementieva, Mikhail Kuimov and Alexander Panchenko
J. Imaging 2023, 9(4), 77; https://doi.org/10.3390/jimaging9040077 - 27 Mar 2023
Cited by 1 | Viewed by 2459
Abstract
The rapid spread of deceptive information on the internet can have severe and irreparable consequences. As a result, it is important to develop technology that can detect fake news. Although significant progress has been made in this area, current methods are limited because they focus on only one language and do not incorporate multilingual information. In this work, we propose Multiverse, a new feature based on multilingual evidence that can be used for fake news detection and can improve existing approaches. Our hypothesis that cross-lingual evidence can be used as a feature for fake news detection is supported by manual experiments based on a set of true (legitimate) and fake news items. Furthermore, we compared our fake news classification system based on the proposed feature with several baselines on two multi-domain datasets of general-topic news and one fake COVID-19 news dataset, showing that (in combination with linguistic features) it yields significant improvements over the baseline models, bringing additional useful signals to the classifier.
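The core idea of cross-lingual evidence as a classification feature can be sketched in a few lines. This is a toy illustration, not the paper's method: real systems would retrieve foreign-language coverage via search APIs and machine translation, whereas here the "translated headlines" are hard-coded and similarity is plain bag-of-words cosine.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words token counts of two texts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in set(ca) & set(cb))
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def cross_lingual_evidence_score(claim_en: str, translated_headlines: list) -> float:
    """Toy cross-lingual evidence feature: how strongly does coverage in
    other languages (already machine-translated to English here) agree
    with the claim?  Returns the best-matching headline's similarity."""
    if not translated_headlines:
        return 0.0
    return max(cosine_similarity(claim_en, h) for h in translated_headlines)

claim = "volcano eruption forces mass evacuation on the island"
evidence = [
    "mass evacuation ordered on the island after volcano eruption",  # e.g. translated coverage
    "local football team wins championship final",                   # unrelated coverage
]
score = cross_lingual_evidence_score(claim, evidence)  # high: claim is corroborated
```

A low score (no corroborating coverage in other languages) becomes one more input signal to the fake-news classifier, alongside linguistic features.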

11 pages, 632 KiB  
Article
Auguring Fake Face Images Using Dual Input Convolution Neural Network
by Mohan Bhandari, Arjun Neupane, Saurav Mallik, Loveleen Gaur and Hong Qin
J. Imaging 2023, 9(1), 3; https://doi.org/10.3390/jimaging9010003 - 21 Dec 2022
Cited by 10 | Viewed by 2932
Abstract
Deepfake technology uses auto-encoders and generative adversarial networks to replace or artificially construct fine-tuned faces, emotions, and sounds. Although there have been significant advancements in the identification of particular fake images, a reliable counterfeit face detector is still lacking, making it difficult to identify fake photos in situations involving further compression, blurring, scaling, etc. Deep learning models address this research gap by correctly recognizing phony images, whose objectionable content might encourage fraudulent activity and cause major problems. To reduce the gap and enlarge the network's field of view, we propose a dual-input convolutional neural network (DICNN) model with ten-fold cross-validation, achieving an average training accuracy of 99.36 ± 0.62, a test accuracy of 99.08 ± 0.64, and a validation accuracy of 99.30 ± 0.94. Additionally, we used SHapley Additive exPlanations (SHAP) as an explainable AI (XAI) technique, applying SHAP values to the model to explain the results and their interpretability visually. The proposed model holds significant importance for forensics and security experts because of its distinctive features and considerably higher accuracy than state-of-the-art methods.
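The "mean ± deviation" accuracies reported above come from aggregating per-fold results of the ten-fold cross-validation. A minimal sketch of that aggregation, using made-up per-fold accuracies (not the paper's actual fold results):

```python
from statistics import mean, stdev

# Hypothetical per-fold test accuracies (%) from a 10-fold cross-validation
# run; the values are illustrative, not those reported in the paper.
fold_accuracies = [99.1, 98.7, 99.5, 99.0, 98.9, 99.3, 99.2, 98.8, 99.4, 99.1]

avg = mean(fold_accuracies)
spread = stdev(fold_accuracies)  # sample standard deviation across folds
print(f"test accuracy: {avg:.2f} \u00b1 {spread:.2f}")
```

Reporting the spread alongside the mean shows how stable the model is across different train/test splits, which matters when claiming robustness to compression, blurring, and scaling.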

Review


16 pages, 2230 KiB  
Review
Deepfakes Generation and Detection: A Short Survey
by Zahid Akhtar
J. Imaging 2023, 9(1), 18; https://doi.org/10.3390/jimaging9010018 - 13 Jan 2023
Cited by 14 | Viewed by 22654
Abstract
Advancements in deep learning techniques and the availability of free, large databases have made it possible, even for non-technical people, to manipulate or generate realistic facial samples for both benign and malicious purposes. DeepFakes refer to face multimedia content that has been digitally altered or synthetically created using deep neural networks. This paper first outlines the readily available face editing apps and the vulnerability (or performance degradation) of face recognition systems under various face manipulations. Next, this survey presents an overview of the techniques and works carried out in recent years for deepfake and face manipulation. In particular, four kinds of deepfake or face manipulation are reviewed, i.e., identity swap, face reenactment, attribute manipulation, and entire face synthesis. For each category, deepfake or face manipulation generation methods as well as the corresponding manipulation detection methods are detailed. Despite significant progress based on traditional and advanced computer vision, artificial intelligence, and physics, an arms race continues between attackers/offenders/adversaries (i.e., DeepFake generation methods) and defenders (i.e., DeepFake detection methods). Thus, open challenges and potential research directions are also discussed. This paper is expected to aid readers in comprehending deepfake generation and detection mechanisms, together with open issues and future directions.
