Special Issue "Deep Learning for Visual Contents Processing and Analysis"

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: 30 June 2021.

Special Issue Editors

Prof. Dr. Hocine Cherifi
Guest Editor
Laboratoire d’Informatique de Bourgogne, University of Burgundy, UMR 6306 CNRS, Dijon, France
Interests: signal and image processing; computer vision; data compression; multimedia quality; complex networks
Prof. Dr. Mohammed El Hassouni
Guest Editor
LRIT, FLSHR, Mohammed V University in Rabat, Rabat, Morocco
Interests: computer vision; deep learning; complex networks; data modeling and analysis; artificial intelligence

Special Issue Information

Dear Colleagues,

Nowadays, many visualization and imaging systems generate complex visual content in large volumes, making the data extremely difficult to handle. This growing mass of data calls for new strategies for analysis and interpretation. In recent years, particular attention has been paid to deep learning methods for visual content analysis and their applications. Inspired by artificial intelligence, mathematics, biology, and other fields, these methods can uncover relationships between different categories of complex data and provide a set of tools for analyzing and handling visual content.

This Special Issue will provide a forum for original research papers covering the state of the art: new algorithms, methodologies, applications, theories, and implementations of deep learning methods for visual content such as images, videos, stereoscopic images, 3D meshes, point clouds, visual graphs, etc.

This Special Issue is primarily focused on, but not limited to, the following topics:

  • Classification;
  • Retrieval;
  • Restoration;
  • Compression;
  • Segmentation;
  • Visual quality assessment;
  • Convolutional neural networks (CNN);
  • Autoencoders;
  • Generative adversarial networks (GAN);
  • Reinforcement learning.

Prof. Dr. Hocine Cherifi
Prof. Dr. Mohammed El Hassouni
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (1 paper)


Research

Article
No-Reference Quality Assessment of In-Capture Distorted Videos
J. Imaging 2020, 6(8), 74; https://doi.org/10.3390/jimaging6080074 - 30 Jul 2020
Abstract
We introduce a no-reference method for assessing the quality of videos affected by in-capture distortions due to camera hardware and processing software. The proposed method encodes both the quality attributes and the semantic content of each video frame using two Convolutional Neural Networks (CNNs), and then estimates the quality score of the whole video using a Recurrent Neural Network (RNN), which models the temporal information. Extensive experiments conducted on four benchmark databases containing in-capture distortions (CVD2014, KoNViD-1k, LIVE-Qualcomm, and LIVE-VQC) demonstrate the effectiveness of the proposed method and its ability to generalize in a cross-database setup.
(This article belongs to the Special Issue Deep Learning for Visual Contents Processing and Analysis)
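The pipeline described in the abstract (per-frame feature extraction by two CNN branches, followed by an RNN that aggregates frames into one video-level score) can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the real CNN backbones are replaced by fixed linear projections, the RNN is a bare Elman-style recurrence, and all dimensions and weights are made-up placeholders.

```python
import numpy as np

def extract_frame_features(frames, w_quality, w_semantic):
    """Stand-in for the two CNN branches: each frame is mapped to a
    quality-attribute vector and a semantic-content vector, which are
    concatenated. Linear projections replace the real backbones."""
    flat = frames.reshape(frames.shape[0], -1)     # (T, H*W)
    q = np.tanh(flat @ w_quality)                  # (T, Dq) quality attributes
    s = np.tanh(flat @ w_semantic)                 # (T, Ds) semantic content
    return np.concatenate([q, s], axis=1)          # (T, Dq + Ds)

def rnn_quality_score(features, w_in, w_rec, w_out):
    """Minimal Elman-style RNN over the frame features; the final
    hidden state is mapped to a single scalar quality score."""
    h = np.zeros(w_rec.shape[0])
    for x in features:                             # iterate over time steps
        h = np.tanh(x @ w_in + h @ w_rec)
    return float(h @ w_out)

# Toy usage: 10 frames of a 4x4 grayscale "video" with random weights.
rng = np.random.default_rng(0)
T, H, W = 10, 4, 4
Dq, Ds, Dh = 3, 5, 6
frames = rng.random((T, H, W))
feats = extract_frame_features(frames,
                               rng.standard_normal((H * W, Dq)),
                               rng.standard_normal((H * W, Ds)))
score = rnn_quality_score(feats,
                          rng.standard_normal((Dq + Ds, Dh)),
                          rng.standard_normal((Dh, Dh)),
                          rng.standard_normal(Dh))
print(score)  # one scalar quality estimate for the whole video
```

In a trained system the two branches would be learned CNNs and the recurrence a trained GRU/LSTM; the sketch only shows how frame-level features flow into a single video-level prediction.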
