Advances in Signal, Image and Information Processing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 10982

Special Issue Editors


Prof. Constantin Papaodysseus
Guest Editor
Division of Communication, Electronic and Information Engineering, School of Electrical and Computer Engineering, National Technical University of Athens, 15780 Athens, Greece
Interests: pattern recognition; signal, image and information processing; stability (and instability); document and text processing; sound and music computing; computational geometry and object modeling

Dr. Dimitris Arabadjis
Guest Editor
Biomedical Engineering Department, University of West Attica, 11635 Attica, Greece
Interests: image processing; pattern recognition; computational geometry; biomedical engineering; curve/surface matching

Prof. Michail Panagopoulos
Guest Editor
Department of Audio and Visual Arts, Ionian University, 49100 Corfu, Greece
Interests: application of digital signal processing and pattern recognition in archaeology; arts and cultural heritage; digital image processing; pattern recognition; computer vision; artificial intelligence

Prof. Yiannis Boutalis
Guest Editor
Automatic Control Systems and Robotics Laboratory, Electronics and Information Technology Systems Sector, Department of Electrical and Computer Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
Interests: computational intelligence; automatic control; pattern recognition; signal and image processing

Special Issue Information

Dear Colleagues,

Signal and image processing, both fundamental to electrical and computer engineering, incorporate research results from a very broad set of disciplines. Their common scope is that, whenever a real-world problem is to be tackled automatically, it is approached by processing its representation—signals and/or images, which are, as a rule, obtained via measurements. Consequently, considerable research has been carried out in recent years to achieve signal and image processing that offers solutions to important practical, real-world problems. The scope of the present Special Issue is to capture this connection between advances in signal, image and information processing and the real-world problems that triggered them. Specifically, novel contributions are welcome in the following fields:

  • medical imaging/biomedical signal processing;
  • image and signal processing applications in archaeometry;
  • signal/image processing and pattern recognition applications in the arts;
  • document analysis/automatic writer identification;
  • moments for image processing and compression;
  • automated reconstruction of fragmented objects;
  • image content recognition;
  • image segmentation;
  • sound, speech and music automatic recognition;
  • finite precision error in signal processing;
  • image and signal processing for cultural heritage;
  • image enhancement and recovery.

Prof. Constantin Papaodysseus
Dr. Dimitris Arabadjis
Prof. Michail Panagopoulos
Prof. Yiannis Boutalis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical imaging/biomedical signal processing
  • image and signal processing applications in archaeometry
  • signal/image processing and pattern recognition applications in cultural heritage
  • document analysis/automatic writer identification
  • moments for image processing and compression
  • automated reconstruction of fragmented objects
  • image content recognition
  • image segmentation
  • sound, speech and music automatic recognition
  • finite precision error in signal processing

Published Papers (5 papers)


Research

21 pages, 37593 KiB  
Article
A Novel Image Encryption Algorithm Based on Multiple Random DNA Coding and Annealing
by Tianshuo Zhang, Bingbing Zhu, Yiqun Ma and Xiaoyi Zhou
Electronics 2023, 12(3), 501; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics12030501 - 18 Jan 2023
Cited by 2 | Viewed by 1404
Abstract
Improved encryption devices place higher demands on the randomness and security of encrypted images. Existing image encryption optimization methods based on single or multiple objectives concentrate on selecting keys and parameters, resulting in relatively fixed parameters and keys that are susceptible to leakage and cracking. Despite the possibility of increasing security, the DNA coding encryption method does not fully take into account the large capacity of image data and the differences between pixels, resulting in a limited level of randomness. To overcome these problems, this paper proposes a method for generating complex texture features in images using random variation of pixels. With an annealing algorithm that can find an optimal solution in a large search space, the image is optimized in terms of information entropy, pixel correlation and the χ² value. Each iteration involves selecting one of 25632 combinations of DNA coding and operation. In comparison with current encryption algorithms based on optimization algorithms and DNA coding, this method is more secure and more resistant to cracking.
(This article belongs to the Special Issue Advances in Signal, Image and Information Processing)
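
For readers who want to experiment with the general idea, the following is a minimal Python sketch of simulated annealing over per-block DNA-coding rule choices, scored by information entropy and adjacent-pixel correlation. It illustrates the kind of technique named in the abstract, not the authors' algorithm: the block size, the toy diffusion step, the cost function and all parameters are our own assumptions.

```python
import numpy as np

# The 8 standard DNA encoding rules: each maps the four 2-bit values to a
# permutation of the bases A, C, G, T; here they simply index the discrete
# choices explored by the annealer.
DNA_RULES = ["ACGT", "AGCT", "CATG", "CTAG", "GATC", "GTAC", "TCGA", "TGCA"]
BLOCK = 16  # block size (pixels); an assumption, not the paper's value

def entropy(img):
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    hist = hist[hist > 0]
    return -np.sum(hist * np.log2(hist))

def adjacent_correlation(img):
    a = img[:, :-1].ravel().astype(float)
    b = img[:, 1:].ravel().astype(float)
    return abs(np.corrcoef(a, b)[0, 1])

def cipher(img, rules, key_stream):
    """Toy diffusion step: each block is XORed with a key stream whose byte
    order depends on the rule chosen for that block (a stand-in for the
    DNA coding/operation combinations)."""
    out = img.copy()
    blocks_per_row = img.shape[1] // BLOCK
    for i, r in enumerate(rules):
        y, x = divmod(i, blocks_per_row)
        ys = slice(y * BLOCK, (y + 1) * BLOCK)
        xs = slice(x * BLOCK, (x + 1) * BLOCK)
        ks = np.roll(key_stream[:BLOCK * BLOCK], int(r)).reshape(BLOCK, BLOCK)
        out[ys, xs] = out[ys, xs] ^ ks
    return out

def cost(img):
    # Lower is better: high entropy and low adjacent-pixel correlation.
    return -entropy(img) + adjacent_correlation(img)

def anneal(img, key_stream, iters=300, t0=1.0, alpha=0.99, seed=0):
    """Simulated annealing over the per-block rule assignment."""
    rng = np.random.default_rng(seed)
    n_blocks = (img.shape[0] // BLOCK) * (img.shape[1] // BLOCK)
    rules = rng.integers(0, len(DNA_RULES), n_blocks)
    cur = best = cost(cipher(img, rules, key_stream))
    best_rules, t = rules.copy(), t0
    for _ in range(iters):
        cand = rules.copy()
        cand[rng.integers(n_blocks)] = rng.integers(0, len(DNA_RULES))
        c = cost(cipher(img, cand, key_stream))
        if c < cur or rng.random() < np.exp((cur - c) / t):  # Metropolis rule
            rules, cur = cand, c
            if c < best:
                best, best_rules = c, cand.copy()
        t *= alpha
    return best_rules

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    plain = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    key = rng.integers(0, 256, 64 * 64, dtype=np.uint8)
    best_rules = anneal(plain, key, iters=100)
    enc = cipher(plain, best_rules, key)
    print("entropy:", entropy(enc), "correlation:", adjacent_correlation(enc))
```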

18 pages, 4497 KiB  
Article
A Novel Approach for Classifying Brain Tumours Combining a SqueezeNet Model with SVM and Fine-Tuning
by Mohammed Rasool, Nor Azman Ismail, Arafat Al-Dhaqm, Wael M. S. Yafooz and Abdullah Alsaeedi
Electronics 2023, 12(1), 149; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics12010149 - 29 Dec 2022
Cited by 14 | Viewed by 2633
Abstract
Cancer of the brain occurs in both the elderly and the young and can be fatal in either. Brain tumours can be treated more effectively if they are diagnosed early. In medical image processing, deep learning methods play an essential role in aiding the diagnosis of various diseases. Classifying brain tumours is an essential step that relies heavily on the doctor’s experience and training. A smart system for detecting and classifying these tumours is essential to aid in the non-invasive diagnosis of brain tumours using MRI (magnetic resonance imaging) images. This work presents a novel hybrid deep-learning CNN-based structure to distinguish between three distinct types of human brain tumours through MRI scans. The method employs a dual approach to classification. The first approach combines an SVM for pattern classification with a pre-trained CNN (i.e., SqueezeNet) for feature extraction. The second approach combines a supervised soft-max classifier with a fine-tuned SqueezeNet. To evaluate the efficacy of the suggested method, MRI scans of the brain were used to analyse a total of 1937 images of glioma tumours, 926 images of meningioma tumours, 926 images of pituitary tumours, and 396 images of a normal brain. According to the experimental results, the fine-tuned SqueezeNet model obtained an accuracy of 96.5%. However, when SqueezeNet was used as a feature extractor and an SVM classifier was applied, recognition accuracy increased to 98.7%.
(This article belongs to the Special Issue Advances in Signal, Image and Information Processing)
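
The two classification routes described in the abstract can be sketched with standard tools. The snippet below uses a torchvision SqueezeNet as a fixed feature extractor feeding a scikit-learn SVM, and shows how the same backbone would be reshaped for fine-tuning with a four-class soft-max head. Dataset paths, image sizes and hyper-parameters are placeholders, not the authors' settings.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained SqueezeNet used as a fixed feature extractor (approach 1).
# The weights argument may differ depending on the torchvision version.
backbone = models.squeezenet1_1(weights="IMAGENET1K_V1").to(device).eval()

def extract_features(loader):
    feats, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            f = backbone.features(x.to(device))               # conv feature maps
            f = torch.flatten(nn.functional.adaptive_avg_pool2d(f, 1), 1)
            feats.append(f.cpu())
            labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # MRI slices are grayscale
    transforms.ToTensor(),
])

# Hypothetical folder layout: one sub-folder per class
# (glioma, meningioma, pituitary, normal).
train_ds = datasets.ImageFolder("brain_mri/train", transform=tf)
test_ds = datasets.ImageFolder("brain_mri/test", transform=tf)
train_ld = DataLoader(train_ds, batch_size=32)
test_ld = DataLoader(test_ds, batch_size=32)

# Approach 1: SqueezeNet features + SVM classifier.
Xtr, ytr = extract_features(train_ld)
Xte, yte = extract_features(test_ld)
svm = SVC(kernel="rbf").fit(Xtr, ytr)
print("SVM accuracy:", accuracy_score(yte, svm.predict(Xte)))

# Approach 2: fine-tune SqueezeNet end-to-end with a soft-max head reshaped
# for 4 classes (the training loop is omitted for brevity).
finetune = models.squeezenet1_1(weights="IMAGENET1K_V1")
finetune.classifier[1] = nn.Conv2d(512, 4, kernel_size=1)
```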

14 pages, 2886 KiB  
Article
Approximate Nearest Neighbor Search Using Enhanced Accumulative Quantization
by Liefu Ai, Hongjun Cheng, Xiaoxiao Wang, Chunsheng Chen, Deyang Liu, Xin Zheng and Yuanzhi Wang
Electronics 2022, 11(14), 2236; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11142236 - 17 Jul 2022
Cited by 1 | Viewed by 1423
Abstract
Approximate nearest neighbor (ANN) search is fundamental for fast content-based image retrieval, and vector quantization is one key to performing it effectively. To further improve ANN search accuracy, we propose an enhanced accumulative quantization (E-AQ). Building on our former work, we introduce the idea of the quarter point into accumulative quantization (AQ). Instead of using the nearest centroid alone, each vector is quantized with a quarter vector computed from its nearest and second nearest centroids. The error produced through codebook training and vector quantization is thus reduced without increasing the number of centroids in each codebook. To evaluate how accurately vectors are approximated by their quantization outputs, we implemented an E-AQ-based exhaustive method for ANN search. Experimental results show that our approach achieves Recall@100 of up to 0.996 and 0.776 with eight codebooks of size 256 on the SIFT and GIST datasets, respectively, which is at least 1.6% and 4.9% higher than six other state-of-the-art methods. Moreover, the experimental results indicate that E-AQ needs fewer codebooks while still providing the same ANN search accuracy.
(This article belongs to the Special Issue Advances in Signal, Image and Information Processing)
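
As a rough illustration of accumulative (residual) quantization with a quarter-point refinement—our reading of the abstract, not the paper's implementation—the sketch below trains a chain of k-means codebooks on successive residuals and, at quantization time, nudges the reproduction a quarter of the way towards the second nearest centroid whenever that lowers the error. Codebook sizes and the selection rule are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_codebooks(X, n_books=4, k=256, seed=0):
    """Train a chain of codebooks, each fitted on the residual of the last."""
    residual, books = X.copy(), []
    for _ in range(n_books):
        km = KMeans(n_clusters=k, n_init=4, random_state=seed).fit(residual)
        books.append(km.cluster_centers_)
        residual = residual - km.cluster_centers_[km.labels_]
    return books

def quantize(X, books):
    """Accumulate one (possibly quarter-adjusted) reproduction per codebook."""
    residual = X.copy()
    approx = np.zeros_like(X)
    for C in books:
        d = ((residual[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        order = np.argsort(d, axis=1)
        c1, c2 = C[order[:, 0]], C[order[:, 1]]       # nearest, second nearest
        quarter = c1 + 0.25 * (c2 - c1)               # the "quarter point"
        # Keep whichever of {nearest centroid, quarter point} is closer.
        use_q = ((residual - quarter) ** 2).sum(1) < ((residual - c1) ** 2).sum(1)
        rep = np.where(use_q[:, None], quarter, c1)
        approx += rep
        residual = residual - rep
    return approx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 32)).astype(np.float32)
    books = train_codebooks(X, n_books=2, k=64)
    err = np.mean(((X - quantize(X, books)) ** 2).sum(1))
    print("mean squared quantization error:", err)
```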

21 pages, 5817 KiB  
Article
Objective Quality Assessment Metrics for Light Field Image Based on Textural Features
by Huy PhiCong, Stuart Perry, Eva Cheng and Xiem HoangVan
Electronics 2022, 11(5), 759; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11050759 - 1 Mar 2022
Cited by 8 | Viewed by 1714
Abstract
Light Field (LF) imaging is a plenoptic data collection method enabling a wide variety of image post-processing such as 3D extraction, viewpoint change and digital refocusing. Moreover, LF provides the capability to capture rich information about a scene, e.g., texture, geometric information, etc. Therefore, a quality assessment model for LF images is needed, and building one poses significant challenges. Many LF Image Quality Assessment (LF-IQA) metrics have recently been presented based on the unique characteristics of LF images. State-of-the-art objective assessment metrics, such as SSIM and IW-SSIM, take the image content and the human visual system into account; however, most of these metrics are designed for images and video with natural content. Additionally, other models based on LF characteristics (e.g., depth and angular information) trade high performance for high computational complexity and are difficult to implement in LF applications due to the immense data requirements of LF images. Hence, this paper presents a novel content-adaptive LF-IQA metric that improves on conventional LF-IQA performance while remaining low in computational complexity. The experimental results clearly show improved performance compared to conventional objective IQA metrics, and we also identify metrics that are well suited to LF image assessment. In addition, we present a comprehensive content-based feature analysis to determine the most appropriate feature that influences human visual perception among the widely used conventional objective IQA metrics. Finally, a rich LF dataset is selected from the EPFL dataset, allowing the study of light field quality in terms of qualitative factors such as depth (wide and narrow), focus (background or foreground) and complexity (simple and complex).
(This article belongs to the Special Issue Advances in Signal, Image and Information Processing)
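
A content-adaptive weighting of a conventional metric can be sketched as follows: per-sub-aperture-view SSIM scores are weighted by a simple texture-energy measure of the reference view, so that texturally rich views contribute more. The weighting rule and the texture measure below are illustrative assumptions, not the metric proposed in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.metrics import structural_similarity as ssim

def texture_energy(view, win=7):
    """Local variance averaged over the view, as a crude texture measure."""
    mean = uniform_filter(view, win)
    mean_sq = uniform_filter(view ** 2, win)
    return float(np.mean(np.clip(mean_sq - mean ** 2, 0, None)))

def content_adaptive_lf_score(ref_views, dist_views):
    """ref_views, dist_views: arrays of shape (U, V, H, W), values in [0, 1]."""
    scores, weights = [], []
    U, V = ref_views.shape[:2]
    for u in range(U):
        for v in range(V):
            r, d = ref_views[u, v], dist_views[u, v]
            scores.append(ssim(r, d, data_range=1.0))   # per-view SSIM
            weights.append(texture_energy(r))           # content weight
    w = np.asarray(weights)
    w = w / w.sum() if w.sum() > 0 else np.full_like(w, 1.0 / w.size)
    return float(np.dot(np.asarray(scores), w))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((5, 5, 64, 64))                    # toy 5x5 grid of views
    dist = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0, 1)
    print("content-adaptive LF score:", content_adaptive_lf_score(ref, dist))
```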

21 pages, 3442 KiB  
Article
Infrared and Visible Image Fusion Using Truncated Huber Penalty Function Smoothing and Visual Saliency Based Threshold Optimization
by Chaowei Duan, Yiliu Liu, Changda Xing and Zhisheng Wang
Electronics 2022, 11(1), 33; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11010033 - 23 Dec 2021
Cited by 4 | Viewed by 2279
Abstract
An efficient method for infrared and visible image fusion is presented using truncated Huber penalty function smoothing and visual saliency based threshold optimization. The method merges complementary information from multimodality source images into a more informative composite image in a two-scale domain, in which the significant objects/regions are highlighted and rich feature information is preserved. Firstly, source images are decomposed into two-scale image representations, namely, the approximate and residual layers, using truncated Huber penalty function smoothing. Benefiting from its edge- and structure-preserving characteristics, the significant objects and regions in the source images are effectively extracted without halo artifacts around the edges. Secondly, a visual saliency based threshold optimization fusion rule is designed to fuse the approximate layers, aiming to highlight the salient targets in infrared images and retain the high-intensity regions in visible images. A sparse representation based fusion rule is adopted to fuse the residual layers with the goal of acquiring rich detail texture information. Finally, combining the fused approximate and residual layers reconstructs the fused image with more natural visual effects. Extensive experimental results demonstrate that the proposed method achieves comparable or superior performance to several state-of-the-art fusion methods in both visual results and objective assessments.
(This article belongs to the Special Issue Advances in Signal, Image and Information Processing)
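
The two-scale pipeline can be sketched with stand-in components: a Gaussian smoother in place of truncated Huber penalty smoothing, a crude saliency map with a fixed threshold in place of the visual-saliency-based threshold optimization, and a max-absolute rule in place of the sparse-representation fusion of the residual layers. All parameters below are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=5.0):
    """Split an image into an approximate (smooth) and a residual layer."""
    approx = gaussian_filter(img, sigma)
    return approx, img - approx

def saliency(img):
    """Crude saliency: distance from the mean after slight smoothing."""
    s = np.abs(gaussian_filter(img, 2.0) - img.mean())
    return s / (s.max() + 1e-12)

def fuse(ir, vis, tau=0.6):
    """ir, vis: aligned float images in [0, 1] of the same size."""
    a_ir, r_ir = decompose(ir)
    a_vis, r_vis = decompose(vis)
    # Approximate layers: a thresholded IR saliency mask (smoothed to avoid
    # seams) selects salient IR targets; elsewhere the visible approximate
    # layer, including its high-intensity regions, is kept.
    w = gaussian_filter((saliency(ir) > tau).astype(float), 2.0)
    a_fused = w * a_ir + (1 - w) * a_vis
    # Residual layers: keep the stronger detail coefficient at each pixel.
    r_fused = np.where(np.abs(r_ir) >= np.abs(r_vis), r_ir, r_vis)
    return np.clip(a_fused + r_fused, 0, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.random((128, 128))
    vis = rng.random((128, 128))
    fused = fuse(ir, vis)
    print("fused image range:", fused.min(), fused.max())
```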
