Special Issue "Image and Video Forensics"

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Biometrics, Forensics, and Security".

Deadline for manuscript submissions: closed (31 May 2021).

Special Issue Editors

Dr. Irene Amerini
Guest Editor
Department of Computer, Control, and Management Engineering A. Ruberti, Sapienza University of Rome, 00185 Rome, Italy
Interests: image and video forensics; digital image processing; machine learning and deep learning for image and video analysis
Dr. Gianmarco Baldini
Guest Editor
European Commission, Joint Research Centre, Ispra, Italy
Interests: machine learning and deep learning in cybersecurity and automotive domains; physical layer identification and authentication
Dr. Francesco Leotta
Guest Editor
Department of Computer, Control, and Management Engineering A. Ruberti, Sapienza University of Rome, 00185 Rome, Italy
Interests: smart space automation and monitoring; ambient intelligence; action and activity recognition; human identification and tracking through video analysis

Special Issue Information

Dear Colleagues,

Nowadays, images and videos have become the main modalities of information exchanged in everyday life, and their pervasiveness has led the image forensics community to question their reliability, integrity, confidentiality, and security more and more. Multimedia content is generated in many different ways through consumer electronics and high-quality digital imaging devices such as smartphones, digital cameras, tablets, wearables, and IoT devices. The ever-increasing convenience of image acquisition has facilitated the instant distribution and sharing of digital images on social platforms, resulting in a vast amount of exchanged data. Moreover, the pervasiveness of powerful image editing tools has allowed the manipulation of digital images for malicious or criminal ends, up to the creation of synthesized images and videos with deep learning techniques.

In response to these threats, the multimedia forensics community has devoted major research efforts to source identification and manipulation detection. In all cases where images and videos serve as critical evidence (e.g., forensic investigations, fake-news debunking, information warfare, and cyberattacks), forensic technologies that help determine the origin, authenticity, and integrity of multimedia content can become essential tools.

In detail, this Special Issue aims to collect a diverse and complementary set of articles that demonstrate new developments and applications in image and video forensics, tackling the new and serious challenges of ensuring media authenticity.

Dr. Irene Amerini
Dr. Gianmarco Baldini
Dr. Francesco Leotta
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image and video forensics
  • multimedia source identification
  • image and video forgery detection
  • image and video authentication
  • image and video provenance
  • electronic device identification (e.g., smartphone) through built-in sensors
  • deepfake detection
  • adversarial multimedia forensics

Published Papers (9 papers)


Research


Article
End-to-End Deep One-Class Learning for Anomaly Detection in UAV Video Stream
J. Imaging 2021, 7(5), 90; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050090 - 19 May 2021
Viewed by 287
Abstract
In recent years, the use of drones for surveillance tasks has been on the rise worldwide. However, in the context of anomaly detection, only normal events are available for the learning process. Therefore, implementing a generative learning method in an unsupervised mode to solve this problem becomes fundamental. In this context, we propose a new end-to-end architecture capable of generating optical flow images from original UAV images and extracting compact spatio-temporal characteristics for anomaly detection purposes. It is designed with a custom loss function defined as the sum of three terms, the reconstruction loss (Rl), the generation loss (Gl), and the compactness loss (Cl), to ensure an efficient classification of the "deep-one" class. In addition, we propose to minimize the effect of UAV motion in video processing by applying background subtraction on the optical flow images. We tested our method on a very complex dataset, the mini-drone video dataset, and obtained results surpassing the performance of existing techniques, with an AUC of 85.3.
(This article belongs to the Special Issue Image and Video Forensics)
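As a rough illustration of the composite objective described in the abstract, the sketch below combines a reconstruction term, a generation term, and a compactness term in PyTorch; the network outputs, loss weights, and the variance-based compactness term are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a composite one-class loss: L = Rl + Gl + Cl.
# Inputs are assumed to come from an encoder/generator pair; weights are illustrative.
import torch
import torch.nn.functional as F

def one_class_loss(frame, flow_target, flow_pred, frame_recon, embedding,
                   w_r=1.0, w_g=1.0, w_c=0.1):
    # Reconstruction loss (Rl): how well the input frame is reconstructed.
    rl = F.mse_loss(frame_recon, frame)
    # Generation loss (Gl): how close the generated optical-flow image is
    # to the reference flow computed from consecutive frames.
    gl = F.l1_loss(flow_pred, flow_target)
    # Compactness loss (Cl): pull embeddings of normal samples together
    # (variance around the batch mean), a common one-class surrogate.
    cl = ((embedding - embedding.mean(dim=0, keepdim=True)) ** 2).mean()
    return w_r * rl + w_g * gl + w_c * cl
```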

Article
Copy-Move Forgery Detection (CMFD) Using Deep Learning for Image and Video Forensics
J. Imaging 2021, 7(3), 59; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030059 - 20 Mar 2021
Viewed by 615
Abstract
With the exponential growth of high-quality fake images in social networks and media, it is necessary to develop recognition algorithms for this type of content. One of the most common types of image and video editing consists of duplicating areas of the image, known as the copy-move technique. Traditional image processing approaches manually look for patterns related to the duplicated content, limiting their use in mass data classification. In contrast, approaches based on deep learning have shown better performance and promising results, but they present generalization problems with a high dependence on training data and the need for appropriate selection of hyperparameters. To overcome this, we propose two deep learning approaches: a model with a custom architecture and a model based on transfer learning. In each case, the impact of the depth of the network is analyzed in terms of precision (P), recall (R), and F1 score. Additionally, the problem of generalization is addressed with images from eight different open access datasets. Finally, the models are compared in terms of evaluation metrics and training and inference times. The transfer learning model based on VGG-16 achieves metrics about 10% higher than the custom-architecture model; however, it requires approximately twice as much inference time.
(This article belongs to the Special Issue Image and Video Forensics)
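For readers unfamiliar with the transfer-learning setup mentioned above, the following is a minimal PyTorch sketch of fine-tuning a pretrained VGG-16 for binary pristine/forged classification; the frozen layers, head size, and optimizer settings are assumptions for illustration, not the configuration used in the paper.

```python
# Minimal VGG-16 transfer-learning sketch for binary copy-move detection.
# Frozen backbone + small classification head; all settings are illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False          # keep the pretrained convolutional features

model.classifier[6] = nn.Linear(4096, 2)   # 2 classes: pristine vs. forged

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # images: (B, 3, 224, 224) tensor, labels: (B,) long tensor
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```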

Article
VIPPrint: Validating Synthetic Image Detection and Source Linking Methods on a Large Scale Dataset of Printed Documents
J. Imaging 2021, 7(3), 50; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030050 - 08 Mar 2021
Viewed by 583
Abstract
The possibility of carrying out a meaningful forensic analysis on printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned. A problem hindering research in this area is the lack of large-scale reference datasets to be used for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with the analysis of the images in the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods for distinguishing natural and synthetic face images fail when applied to printed and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area.
(This article belongs to the Special Issue Image and Video Forensics)

Article
Detecting and Locating Passive Video Forgery Based on Low Computational Complexity Third-Order Tensor Representation
J. Imaging 2021, 7(3), 47; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7030047 - 05 Mar 2021
Viewed by 517
Abstract
Great attention is paid to detecting video forgeries nowadays, especially with the widespread sharing of videos over social media and websites. Many video editing software programs are available and perform well in tampering with video contents or even creating fake videos. Forgery affects video integrity and authenticity and has serious implications; for example, digital videos for security and surveillance purposes are used as evidence in courts. In this paper, a newly developed passive video forgery scheme is introduced and discussed. The scheme is based on representing highly correlated video data with a low-computational-complexity third-order tensor tube-fiber mode. An arbitrary number of core tensors is selected to detect and locate two serious types of forgery: insertion and deletion. These tensor data are orthogonally transformed to achieve further data reduction and to provide good features for tracing forgery along the whole video. Experimental results and comparisons show the superiority of the proposed scheme, with a precision value of up to 99% in detecting and locating both types of attack for static as well as dynamic videos, videos with quick-moving foreground items (single or multiple), and zooming-in and zooming-out datasets, which are rarely tested by previous works. Moreover, the proposed scheme offers a reduction in time and a linear computational complexity: based on the computer configuration used, an average time of 35 s is needed to detect and locate 40 forged frames out of 300.
(This article belongs to the Special Issue Image and Video Forensics)
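The tube-fiber tensor construction is specific to the paper, but the general idea of compressing highly correlated video data with an orthogonal transform and then checking temporal consistency can be loosely sketched as follows; the SVD basis, the number of retained components, and the thresholding rule are illustrative assumptions rather than the authors' pipeline.

```python
# Loose illustration (not the paper's tube-fiber method): represent the video
# as a frames-by-pixels matrix, project it onto an orthogonal SVD basis, and
# flag the temporal discontinuities that frame insertion or deletion tends
# to introduce.
import numpy as np

def temporal_inconsistency_scores(video, k=10):
    # video: (T, H, W) grayscale frames as a NumPy array
    T = video.shape[0]
    X = video.reshape(T, -1).astype(np.float64)   # (T, H*W)
    X -= X.mean(axis=0, keepdims=True)
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    Z = U[:, :k] * S[:k]                          # per-frame features in an orthogonal basis
    # Distance between consecutive frames in the reduced space; spikes
    # suggest a temporal discontinuity worth inspecting.
    return np.linalg.norm(np.diff(Z, axis=0), axis=1)

# Usage sketch:
# scores = temporal_inconsistency_scores(frames)
# suspicious = np.where(scores > scores.mean() + 3 * scores.std())[0]
```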

Article
No Matter What Images You Share, You Can Probably Be Fingerprinted Anyway
J. Imaging 2021, 7(2), 33; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7020033 - 11 Feb 2021
Viewed by 520
Abstract
The popularity of social networks (SNs), amplified by the ever-increasing use of smartphones, has intensified online cybercrime. This trend has accelerated digital forensics on SNs. One of the areas that has received a lot of attention is camera fingerprinting, through which each smartphone is uniquely characterized. Hence, in this paper, we compare classification-based methods for smartphone identification (SI) and user profile linking (UPL) within the same or across different SNs, which can provide investigators with significant clues. We validate the proposed methods on two datasets, our own dataset and the VISION dataset, both including original and shared images on SN platforms such as Google Currents, Facebook, WhatsApp, and Telegram. The obtained results show that k-medoids achieves the best results compared with k-means, hierarchical approaches, and different convolutional neural network (CNN) models in the classification of the images. The results show that k-medoids provides F1-measure values of up to 0.91 for the SI and UPL tasks. Moreover, the results prove the effectiveness of the methods in tackling the loss of image detail caused by the compression process on the SNs, even for images from the same smartphone model. An important outcome of our work is the inter-layer UPL task, which is more desirable in digital investigations as it can link user profiles on different SNs.
(This article belongs to the Special Issue Image and Video Forensics)
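As background on the clustering step the abstract highlights, below is a minimal k-medoids (PAM-style) sketch over a precomputed distance matrix of camera-fingerprint features; the initialization, update rule, and distance choice are generic illustrations, not the exact setup evaluated in the paper.

```python
# Minimal k-medoids sketch over a precomputed distance matrix, e.g. built
# from camera-fingerprint features (1 - normalized correlation as distance).
# Initialization and stopping rule are deliberately simple, for illustration.
import numpy as np

def k_medoids(D, k, n_iter=100, seed=0):
    # D: (n, n) symmetric distance matrix
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)       # assign each point to its nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:
                # the member minimizing total in-cluster distance becomes the new medoid
                new_medoids[c] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids
```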

Article
Factors that Influence PRNU-Based Camera-Identification via Videos
J. Imaging 2021, 7(1), 8; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7010008 - 13 Jan 2021
Viewed by 509
Abstract
The Photo Response Non-Uniformity (PRNU) pattern can be used to identify the source of images or to indicate whether images were taken with the same camera. This pattern is also known as the "fingerprint" of a camera, since it is a highly characteristic feature. However, like a real fingerprint, it is sensitive to many different influences, e.g., camera settings. In this study, several previously investigated factors were reviewed, after which three were selected for further investigation. The computation and comparison methods are evaluated under variation of the following factors: resolution, video length, and compression. For all three studies, images were taken with a single iPhone 6. It was found that a higher resolution ensures a more reliable comparison, and that the length of a (reference) video should always be as long as possible to obtain a better PRNU pattern. It also became clear that compression (in this study, the compression that Snapchat applies) has a negative effect on the correlation value. Many different factors therefore play a part when comparing videos. Due to the large number of controllable and non-controllable factors that influence the PRNU pattern, further research is needed to clarify their individual influences.
(This article belongs to the Special Issue Image and Video Forensics)
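The basic PRNU workflow referred to in the abstract can be sketched as follows; the Gaussian-blur denoiser is a crude stand-in for the wavelet denoising usually employed in PRNU work, and all parameters are illustrative.

```python
# Rough PRNU sketch: estimate a camera fingerprint by averaging noise
# residuals from several reference frames, then compare a query residual
# against it with normalized correlation. The Gaussian blur is a crude
# stand-in for a proper wavelet-based denoiser.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.5):
    # img: (H, W) float array in [0, 1]
    return img - gaussian_filter(img, sigma)

def estimate_fingerprint(frames):
    # frames: iterable of (H, W) float arrays from the same camera
    return np.mean([noise_residual(f) for f in frames], axis=0)

def normalized_correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Usage sketch: a higher correlation suggests the query frame and the
# reference fingerprint come from the same camera.
# rho = normalized_correlation(estimate_fingerprint(ref_frames),
#                              noise_residual(query_frame))
```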

Article
Detecting Morphing Attacks through Face Geometry Features
J. Imaging 2020, 6(11), 115; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6110115 - 29 Oct 2020
Cited by 2 | Viewed by 661
Abstract
Face-morphing operations allow for the generation of digital faces that simultaneously carry the characteristics of two different subjects. It has been demonstrated that morphed faces strongly challenge face-verification systems, as they typically match two different identities. This poses serious security issues in machine-assisted border control applications and calls for techniques to automatically detect whether morphing operations have previously been applied to passport photos. While many proposed approaches analyze the suspect passport photo only, our work operates in a differential scenario, i.e., when the passport photo is analyzed in conjunction with the probe image of the subject acquired at border control to verify that they correspond to the same identity. To this purpose, in this study, we analyze the locations of biologically meaningful facial landmarks identified in the two images, with the goal of capturing inconsistencies in the facial geometry introduced by the morphing process. We report the results of extensive experiments performed on images from various sources and under different experimental settings, showing that landmark locations detected through automated algorithms contain discriminative information for identifying pairs with morphed passport photos. The sensitivity of supervised classifiers to different compositions of the training and testing sets is also explored, together with the performance of different derived feature transformations.
(This article belongs to the Special Issue Image and Video Forensics)
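The differential landmark-geometry idea can be sketched as below; landmark extraction is assumed to be handled by an external detector, and the normalization and classifier choice are illustrative assumptions rather than the paper's exact feature design.

```python
# Illustrative differential feature: normalize two landmark sets (passport
# photo vs. live probe) for translation and scale, then use the per-landmark
# displacements as input to a standard classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def geometry_feature(lm_doc, lm_probe):
    # lm_doc, lm_probe: (N, 2) arrays of facial landmark coordinates
    def normalize(lm):
        lm = lm - lm.mean(axis=0)                  # remove translation
        return lm / (np.linalg.norm(lm) + 1e-12)   # remove scale
    return (normalize(lm_doc) - normalize(lm_probe)).ravel()

# Training sketch on labeled pairs (y = 1 for a morphed document photo):
# X = np.stack([geometry_feature(d, p) for d, p in pairs])
# clf = LogisticRegression(max_iter=1000).fit(X, y)
```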

Review


Review
A Comprehensive Review of Deep-Learning-Based Methods for Image Forensics
J. Imaging 2021, 7(4), 69; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7040069 - 03 Apr 2021
Viewed by 571
Abstract
Seeing is not believing anymore. Different techniques have brought to our fingertips the ability to modify an image. As the difficulty of using such techniques decreases, lowering the need for specialized knowledge has become the focus for companies that create and sell these tools. Furthermore, image forgeries are presently so realistic that it becomes difficult for the naked eye to differentiate between fake and real media. This can lead to various problems, from misleading public opinion to the use of doctored proof in court. For these reasons, it is important to have tools that can help us discern the truth. This paper presents a comprehensive literature review of image forensics techniques, with a special focus on deep-learning-based methods. In this review, we cover a broad range of image forensics problems, including the detection of routine image manipulations, detection of intentional image falsifications, camera identification, classification of computer graphics images, and detection of emerging Deepfake images. The review shows that even though image forgeries are becoming easier to create, several options exist to detect each kind of them. A review of different image databases and an overview of anti-forensic methods are also presented. Finally, we suggest some future research directions that the community could consider to tackle the spread of doctored images more effectively.
(This article belongs to the Special Issue Image and Video Forensics)

Review
A Survey on Anti-Spoofing Methods for Facial Recognition with RGB Cameras of Generic Consumer Devices
J. Imaging 2020, 6(12), 139; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6120139 - 15 Dec 2020
Viewed by 873
Abstract
The widespread deployment of facial-recognition-based biometric systems has made facial presentation attack detection (face anti-spoofing) an increasingly critical issue. This survey thoroughly investigates facial Presentation Attack Detection (PAD) methods from the past two decades that only require the RGB cameras of generic consumer devices. We present an attack-scenario-oriented typology of the existing facial PAD methods, and we review more than 50 of the most influential facial PAD methods of the past two decades and their related issues. We adopt a comprehensive presentation of the reviewed facial PAD methods following the proposed typology and in chronological order. By doing so, we depict the main challenges, evolutions, and current trends in the field of facial PAD and provide insights into its future research. From an experimental point of view, this survey provides a summarized overview of the available public databases and an extensive comparison of the results reported in the reviewed PAD papers.
(This article belongs to the Special Issue Image and Video Forensics)
