Theory and Applications in Digital Signal Processing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Circuit and Signal Processing".

Deadline for manuscript submissions: closed (1 May 2022) | Viewed by 73349

Special Issue Information

Dear Colleagues,

This Special Issue (SI) invites researchers to present their latest achievements in new theories and methods of signal processing. Digital signal processing (DSP) is the use of digital computation, by general-purpose computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. Applying digital computation to signal processing offers many advantages over analog processing in many applications, such as error detection and correction in transmission as well as data compression. Nevertheless, many challenges in digital signal processing remain open regardless of the application, and many limitations persist in various application environments; the latest research addressing these problems is being pursued by many researchers. We look forward to the latest research findings that offer theories and practical solutions for a variety of applications based on Digital Signal Processing (DSP).

Authors are encouraged to submit contributions in any of the following or related areas for DSP:

  • Information theory based on DSP;
  • Algorithms based on DSP;
  • Real-time computing based on DSP;
  • Applications based on DSP;
  • Image & video processing;
  • Display technology based on DSP;
  • Machine learning based on DSP;
  • Data hiding & Watermarking;
  • Pattern recognition;
  • Learning mechanism.

Prof. Dr. Cheonshik Kim
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Information theory based on DSP
  • Algorithms based on DSP
  • Real-time computing based on DSP
  • Applications based on DSP
  • Image & video processing
  • Display technology based on DSP
  • Machine learning based on DSP
  • Data hiding & Watermarking
  • Pattern recognition
  • Learning mechanism

Published Papers (29 papers)


Research


26 pages, 25716 KiB  
Article
Accumulatively Increasing Sensitivity of Ultrawide Instantaneous Bandwidth Digital Receiver with Fine Time and Frequency Resolution for Weak Signal Detection
by Chen Wu, Taiwen Tang, Janaka Elangage and Denesh Krishnasamy
Electronics 2022, 11(7), 1018; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11071018 - 24 Mar 2022
Cited by 4 | Viewed by 1957
Abstract
It is always an interesting research topic for digital receiver (DRX) designers to develop a DRX with (1) ultrawide instantaneous bandwidth (IBW), (2) high sensitivity, (3) fine time-of-arrival-measurement resolution (TMR), and (4) fine frequency-measurement resolution (FMR) for weak signal detection. This is because designers always want their receivers to have the widest possible IBW to detect faraway and/or weak signals. As the analog-to-digital converter (ADC) rate increases, the modern DRX IBW increases continuously. To improve signal detection based on the blocking FFT (BFFT) method, this paper introduces the new concept of accumulatively increasing receiver sensitivity (AIRS) for DRX design. In AIRS, a very large number of frequency bins can be used for a given IBW in the time-to-frequency transform (TTFT), and the DRX sensitivity is cumulatively increased as more samples become available from the high-speed ADC. Unlike traditional FFT-based TTFT, AIRS can have both fine TMR and fine FMR simultaneously. It also inherits all the merits of the BFFT, which can be implemented in an embedded system. This study shows that AIRS-based DRX is more efficient than normal FFT-based DRX in terms of using time-domain samples. For example, with a probability of false alarm of 10⁻⁷, for N = 2²⁰ frequency bins with TMR = 50 ns, FMR = 2.4414 kHz, IBW > 1 GHz and an ADC rate of 2.56 GHz, the AIRS-based DRX detects narrow-band signals at about −42 dB of input signal-to-noise ratio (Input-SNR) using slightly fewer than N/2 real samples, whereas an FFT-based DRX has to use all N samples. Simulation results also show that AIRS-based DRX can detect frequency-modulated continuous wave signals with ±0.1, ±1, ±10 and ±100 MHz bandwidths at about −39.4, −35.1, −30.2, and −25.5 dB of Input-SNR using about 264.6 K, 104.7 K, 40.2 K and 18.3 K real samples, respectively, in 2²⁰ frequency bins for TTFT. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
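The sensitivity gain described in the abstract rests on a standard property of the FFT: a tone's bin magnitude grows linearly with the number of samples, while the per-bin noise grows only as its square root. A minimal numpy sketch of that effect (illustrative only; the FFT size, bin index, and SNR below are assumed values, not taken from the paper):

```python
import numpy as np

# Illustrative sketch, not the authors' AIRS algorithm: a tone at -15 dB
# per-sample SNR is invisible in the time domain, but the FFT concentrates
# it into one bin whose magnitude grows with N while the per-bin noise
# grows only as sqrt(N).
rng = np.random.default_rng(0)

N = 4096                 # FFT size (assumed value)
k0 = 517                 # the tone sits exactly on bin k0 (assumed value)
snr_db = -15.0
amp = np.sqrt(2.0 * 10.0 ** (snr_db / 10.0))   # amplitude vs unit-variance noise

n = np.arange(N)
x = amp * np.cos(2.0 * np.pi * k0 * n / N) + rng.normal(size=N)

spectrum = np.abs(np.fft.rfft(x))
detected_bin = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
```

Doubling N buys roughly 3 dB of detection sensitivity, which is the intuition behind accumulating more ADC samples into more frequency bins.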

12 pages, 1470 KiB  
Article
Robust Burst Detection Algorithm for Distributed Unique Word TDMA Signal
by Kunheng Zou, Peng Sun, Jicai Deng, Kexian Gong and Zilong Liu
Electronics 2022, 11(1), 89; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11010089 - 28 Dec 2021
Cited by 1 | Viewed by 1506
Abstract
In recent years, the distributed unique word (DUW) has been widely used in satellite single-carrier TDMA signals, such as very small aperture terminal (VSAT) satellite systems. Unlike the centralized structure of the traditional unique word, the DUW is uniformly dispersed within a burst signal, so traditional unique word detection methods are no longer applicable. To address this, we propose a robust burst detection algorithm based on the DUW. Firstly, we allocate sliding detection windows with the same structure as the DUW in order to detect it effectively. Secondly, we adopt the method of time-delay conjugate multiplication to eliminate the influence of frequency offset on detection performance. Owing to the uniform dispersion of the DUW, it naturally has two different kinds of time delays, namely the delay within a group and the delay between two groups. We therefore divide the traditional dual-correlation formula into two parts, calculate them separately, and obtain a dual-correlation detection algorithm suitable for the DUW. Simulation and experimental results demonstrate that when the distribution structure of the DUW changes, the detection probability of the proposed algorithm fluctuates little; its variance is 1.56×10⁻⁵, which is 99.83% lower than that of existing DUW detection algorithms. In addition, its signal-to-noise ratio (SNR) threshold is about 1 dB lower than that of existing algorithms at the same missed-detection probability. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
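The time-delay conjugate multiplication step mentioned in the abstract can be illustrated in a few lines of numpy. This is a hedged sketch of the general idea, not the authors' DUW algorithm: the unique word, burst position, and frequency offset below are invented for the example.

```python
import numpy as np

# Correlating delay-conjugate products instead of raw samples turns an
# unknown carrier frequency offset into a constant phase factor, which the
# magnitude of the correlation then removes.
rng = np.random.default_rng(7)

L = 64
uw = (1.0 - 2.0 * rng.integers(0, 2, L)).astype(complex)   # known unique word

start, f0 = 200, 0.03          # burst position and (unknown) offset, assumed
n = np.arange(L)
r = 0.3 * (rng.normal(size=512) + 1j * rng.normal(size=512))   # noise floor
r[start:start + L] += uw * np.exp(1j * 2 * np.pi * f0 * n)

# Delay-conjugate products of the received stream and of the local UW
d_r = r[1:] * np.conj(r[:-1])
d_u = uw[1:] * np.conj(uw[:-1])

# Sliding correlation of the products; the offset contributes only a
# constant exp(j*2*pi*f0), so the peak magnitude is unaffected by f0
metric = np.abs(np.correlate(d_r, d_u, mode="valid"))
detected = int(np.argmax(metric))
```

Even with the large offset of 0.03 cycles/sample, the metric peaks at the true burst start, which is the robustness the differential structure provides.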

13 pages, 1384 KiB  
Article
MPEG and DA-AD Resilient DCT-Based Video Watermarking Using Adaptive Frame Selection
by Jong-Uk Hou
Electronics 2021, 10(20), 2467; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10202467 - 11 Oct 2021
Cited by 4 | Viewed by 1387
Abstract
We present a robust video watermarking scheme and report its detailed robustness assessed based on standard criteria obtained from the Information Hiding and its Criteria (IHC) Committee. Using discrete cosine transform domain spread-spectrum watermarking, our system achieves robustness under various non-hostile video processing techniques, including MPEG compression and digital-to-analog/analog-to-digital (DA-AD) conversion. The proposed system ensures that a 16-bit embedded sequence can be extracted through adaptive frame selection in any 15-s interval, even from a long video clip. To evaluate the performance of the proposed watermarking scheme, we conducted robustness tests in a DA-AD conversion environment based on the MPEG-4 Part 10 (H.264) codec. The experimental results indicate that, in addition to being robust against non-hostile video processes, the proposed method achieves invisibility. The developed watermarking scheme also satisfies the third edition of the IHC video watermarking evaluation criteria. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
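Spread-spectrum embedding in the DCT domain, the core mechanism named in the abstract, can be sketched as follows. This is a one-dimensional toy, not the paper's video scheme; the coefficient band, the strength `alpha`, and the PN length are assumed values.

```python
import numpy as np

# One bit is spread over mid-band DCT coefficients with a pseudo-noise (PN)
# sequence and read back blindly by correlating with the same PN sequence.
rng = np.random.default_rng(1)

N = 64
# Orthonormal DCT-II matrix: C @ x transforms, C.T @ X inverts
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (n + 0.5) * k / N)
C[0] /= np.sqrt(2.0)

x = rng.normal(size=N)                      # stand-in host signal
pn = 1.0 - 2.0 * rng.integers(0, 2, 24)     # PN chips for mid-band bins 10..33
alpha = 1.5                                 # embedding strength (assumed)

def embed(host, bit):
    X = C @ host
    X[10:34] += alpha * (1.0 if bit else -1.0) * pn
    return C.T @ X                          # back to the sample domain

def extract(signal):
    Y = C @ signal
    return float(Y[10:34] @ pn) > 0.0       # sign of correlation with the PN
```

The host's own coefficients act as noise in the correlation, which is why spreading over many coefficients makes blind extraction reliable.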

13 pages, 21016 KiB  
Article
A Novel Zero Watermarking Based on DT-CWT and Quaternion for HDR Image
by Jiangtao Huang, Shanshan Shi, Zhouyan He and Ting Luo
Electronics 2021, 10(19), 2385; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10192385 - 29 Sep 2021
Cited by 3 | Viewed by 1382
Abstract
This paper presents a high dynamic range (HDR) image zero-watermarking method based on the dual-tree complex wavelet transform (DT-CWT) and the quaternion. To be robust against tone mapping (TM), the DT-CWT is used to transform the three RGB color channels of the HDR image to obtain their low-pass sub-bands, since the DT-CWT can extract the contour of the HDR image, and the contour of the HDR image changes little after TM. The HDR image provides a wide dynamic range, and the correlations among its three color channels are high, so the quaternion is used to treat the three color channels as a whole during the transform. The quaternion fast Fourier transform (QFFT) and quaternion singular value decomposition (QSVD) are utilized to decompose the HDR image to obtain robust features, which are fused with a binary watermark to generate a zero watermark for copyright protection. Furthermore, the binary watermark is scrambled for security using the Arnold transform. Experimental results demonstrate that the proposed zero-watermarking method is robust to TM and other image processing attacks and can protect the HDR image efficiently. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

16 pages, 3136 KiB  
Article
A New Blind Video Quality Metric for Assessing Different Turbulence Mitigation Algorithms
by Chiman Kwan and Bence Budavari
Electronics 2021, 10(18), 2277; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10182277 - 16 Sep 2021
Cited by 2 | Viewed by 1519
Abstract
Although many algorithms have been proposed to mitigate air turbulence in optical videos, there do not seem to be consistent blind video quality assessment metrics that can reliably assess different approaches. Blind video quality assessment metrics are necessary because many videos containing air turbulence do not have ground truth. In this paper, a simple and intuitive blind video quality assessment metric is proposed. This metric can reliably and consistently assess various turbulence mitigation algorithms for optical videos. Experimental results using more than 10 videos from the literature show that the proposed metric correlates well with human subjective evaluations. Compared with an existing blind video metric and two other blind image quality metrics, the proposed metric performed consistently better. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

24 pages, 7272 KiB  
Article
Additional Information Delivery to Image Content via Improved Unseen–Visible Watermarking
by Oswaldo Ulises Juarez-Sandoval, Laura Josefina Reyes-Ruiz, Francisco Garcia-Ugalde, Manuel Cedillo-Hernandez, Jazmin Ramirez-Hernandez and Robert Morelos-Zaragoza
Electronics 2021, 10(18), 2186; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10182186 - 07 Sep 2021
Cited by 1 | Viewed by 1772
Abstract
In a practical watermark scenario, watermarks are used to provide auxiliary information; in this way, an analogous digital approach called the unseen–visible watermark has been introduced to deliver auxiliary information. In this algorithm, the embedding stage takes advantage of visible and invisible watermarking to embed an owner logotype or barcodes as watermarks; in the exhibition stage, the built-in functions of display devices are used to reveal the watermark to the naked eye, eliminating the need for any watermark exhibition algorithm. In this paper, a watermark complement strategy for unseen–visible watermarking is proposed to improve the embedding stage, reducing the histogram distortion and the visual degradation of the watermarked image. The presented algorithm makes the following contributions: first, the algorithm can be applied to any class of images with large smooth regions of low or high intensity; second, a watermark complement strategy is introduced to reduce the visual degradation and histogram distortion of the watermarked image; and third, an embedding error measurement is proposed. Evaluation results show that the proposed strategy performs well in comparison with other algorithms, providing a high visual quality of the exhibited watermark and preserving its robustness in terms of readability and imperceptibility against geometric and processing attacks. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

14 pages, 14168 KiB  
Article
Digital Compensation of a Resistive Voltage Divider for Power Measurement
by Martin Dadić, Petar Mostarac, Roman Malarić and Jure Konjevod
Electronics 2021, 10(6), 696; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10060696 - 16 Mar 2021
Cited by 3 | Viewed by 1899
Abstract
The paper presents a method for digital compensation of the ratio and phase angle errors of a resistive voltage divider. The system consists of a separate electrical circuit of a resistive divider and a digital compensation system based on National Instruments (NI) PCI eXtensions for Instrumentation (PXI) PXI-5922 data acquisition (DAQ) cards. A novel approach to real-time compensation using digital signal processing is presented. The algorithm is based on Wiener filtering and finite-impulse-response (FIR) filters. The proposed digital compensation, using FIR digital filtering and NI PXI DAQs, gives a maximum magnitude error below 400 ppm and a phase angle error below 4500 μrad in the frequency band 50 Hz–100 kHz. The algorithm allows fine-tuning of the compensation to adjust to possible changes in the original transfer function due to the aging of the components. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
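The Wiener/FIR combination described above amounts to a least-squares fit of a compensating FIR filter. Below is a minimal sketch under invented conditions (a one-pole model of the divider roll-off; the pole value, filter length, and training signal are assumptions, not the paper's measured system):

```python
import numpy as np

# Fit an FIR compensator h so that compensator(divider(x)) ~= x on a white
# training record -- the discrete-time Wiener (least-squares) solution.
rng = np.random.default_rng(3)

a = 0.5                        # pole of the simulated divider (assumed)
x = rng.normal(size=2000)      # white training excitation
y = np.empty_like(x)           # divider output: y[n] = (1-a)x[n] + a*y[n-1]
acc = 0.0
for i, xi in enumerate(x):
    acc = (1.0 - a) * xi + a * acc
    y[i] = acc

# Least-squares FIR fit: row n of Y holds [y[n], y[n-1], ..., y[n-M+1]]
M = 8
Y = np.column_stack([y[M - 1 - k : len(y) - k] for k in range(M)])
target = x[M - 1 :]
h, *_ = np.linalg.lstsq(Y, target, rcond=None)

# Residual of the compensated output relative to the original signal
err = np.linalg.norm(Y @ h - target) / np.linalg.norm(target)
```

For this one-pole model an exact causal FIR inverse exists, h = [1/(1-a), -a/(1-a), 0, ...], so the fit recovers it to machine precision; for a measured divider response the same fit yields the best FIR approximation instead.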

15 pages, 6798 KiB  
Article
Pixel P Air-Wise Fragile Image Watermarking Based on HC-Based Absolute Moment Block Truncation Coding
by Chia-Chen Lin, Si-Liang He and Chin-Chen Chang
Electronics 2021, 10(6), 690; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10060690 - 15 Mar 2021
Cited by 4 | Viewed by 1603
Abstract
In this paper, we first designed Huffman code (HC)-based absolute moment block truncation coding (AMBTC) and then applied it to design a pixel pair-wise fragile image watermarking method. Pixel pair-wise tampering detection and content recovery mechanisms are applied collaboratively in the proposed scheme to enhance readability even when images have been tampered with. Representative features are derived from our proposed HC-based AMBTC compression codes of the original image and serve simultaneously as authentication code and recovery information during tamper detection and recovery operations. Recovery information is embedded into the two LSBs of the original image with a turtle shell-based data hiding method and a pre-determined matrix; therefore, each non-overlapping pixel pair carries four bits of recovery information. When the recipient suspects that the received image may have been tampered with, the compressed image can be used to locate the tampered pixels, and the recovery information can then be used to restore them. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

22 pages, 7680 KiB  
Article
High-Capacity Reversible Data Hiding in Encrypted Images Based on Hierarchical Quad-Tree Coding and Multi-MSB Prediction
by Ya Liu, Guangdong Feng, Chuan Qin, Haining Lu and Chin-Chen Chang
Electronics 2021, 10(6), 664; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10060664 - 12 Mar 2021
Cited by 9 | Viewed by 1920
Abstract
Nowadays, more and more researchers are interested in reversible data hiding in encrypted images (RDHEI), which can be applied in privacy protection and cloud storage. In this paper, a new RDHEI method based on hierarchical quad-tree coding and multi-MSB (most significant bit) prediction is proposed. The content owner performs pixel prediction to obtain a prediction error image and explores the maximum embedding capacity of the prediction error image by hierarchical quad-tree coding before image encryption. According to the marked bits of vacated room capacity, the data hider can embed additional data into the room-vacated image without knowing the content of the original image. Through the data hiding key and the encryption key, the legal receiver is able to conduct data extraction and image recovery separately. Experimental results show that the average embedding rates of the proposed method reach 3.504 bpp (bits per pixel), 3.394 bpp, and 2.746 bpp on three well-known databases, BOSSBase, BOWS-2, and UCID, respectively, which is higher than some state-of-the-art methods. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

15 pages, 8346 KiB  
Article
Palliation of Four-Wave Mixing in Optical Fibers Using Improved DSP Receiver
by Fazal Muhammad, Farman Ali, Ghulam Abbas, Ziaul Haq Abbas, Shahab Haider, Muhammad Bilal, Md. Jalil Piran and Doug Young Suh
Electronics 2021, 10(5), 611; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10050611 - 05 Mar 2021
Viewed by 2696
Abstract
A long-haul optical communication system (LHOCS) is one of the key resources for fulfilling the higher capacity requirements of future communication networks. An LHOCS mainly faces high-order nonlinear effects, and four-wave mixing (FWM) is one of the major nonlinear effects limiting the transmission distance. Therefore, in this paper, an advanced duo-binary (DB) modulation scheme-based system is evaluated by employing an improved digital signal processing (IDSP) approach at the receiver side to suppress the FWM effect. In addition, an analytical analysis is performed for the proposed system. To observe the difference between the IDSP and conventional digital signal processing (DSP), various performance metrics such as the bit error rate (BER), Q-factor, and optical signal-to-noise ratio (OSNR) are evaluated. Variable channel spacing along with polarization mode dispersion (PMD) is analyzed at several ranges of input powers and fiber lengths. The analytical and simulation-based calculations exhibit the effectiveness of the proposed model; the FWM effect is compensated to achieve a 500 km optical fiber propagation range with a BER below 10⁻⁶. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

15 pages, 5830 KiB  
Article
Design of Cut Off-Frequency Fixing Filters by Error Compensation of MAXFLAT FIR Filters
by Daewon Chung, Woon Cho, Inyeob Jeong and Joonhyeon Jeon
Electronics 2021, 10(5), 553; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10050553 - 26 Feb 2021
Cited by 1 | Viewed by 1970
Abstract
Maximally flat (MAXFLAT) finite impulse response (FIR) filters often face a problem of cutoff-frequency error due to approximation of the desired frequency response by some closed-form solution. So far, there have been plenty of efforts to design such a filter with an arbitrarily specified cutoff frequency, but this filter type requires extensive computation and is no longer MAXFLAT. Thus, a computationally efficient and effective design is needed for highly accurate filters with the desired frequency characteristics. This paper describes a new method for designing cutoff-frequency-fixing FIR filters through cutoff-frequency error compensation of MAXFLAT FIR filters. The proposed method provides a closed-form Chebyshev polynomial containing a cutoff-error compensation function, which can characterize the "cutoff-error-free" filters in terms of the degree of flatness for a given filter order and cutoff frequency. The method also provides a computationally efficient and accurate formula to directly determine the degree of flatness, so that this filter type has a flat magnitude characteristic in both the passband and the stopband. The remarkable effectiveness of the proposed method in design efficiency and accuracy is clearly demonstrated through various examples, indicating that the cutoff-fixing filters exhibit an amplitude distortion error of less than 10⁻¹⁴ and no cutoff-frequency error. This new approach is shown to provide significant advantages over previous works in design flexibility and accuracy. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

20 pages, 6571 KiB  
Article
Reversible Data Hiding for AMBTC Compressed Images Based on Matrix and Hamming Coding
by Chia-Chen Lin, Juan Lin and Chin-Chen Chang
Electronics 2021, 10(3), 281; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10030281 - 25 Jan 2021
Cited by 11 | Viewed by 1674
Abstract
In this paper, we propose a two-layer data hiding method that uses the Hamming code to enhance hiding capacity without significantly increasing computational complexity for AMBTC-compressed images. To achieve our objective, for the first layer, four disjoint sets using different combinations of the mean value (AVG) and the standard deviation (VAR) are derived according to the combination of secret bits and the corresponding bitmap, following Lin et al.'s method. For the second layer, these four disjoint sets are extended to eight by adding or subtracting 1, according to matrix embedding with the (7, 4) Hamming code. To maintain reversibility, we must return any irreversible block to its previous state, which is the state after the first layer of data is embedded. Then, to losslessly recover the AMBTC-compressed images after extracting the secret bits, we use the continuity feature, the parity of pixel values, and the unique number of changed pixels in the same row to restore AVG and VAR. Finally, a comparison with state-of-the-art AMBTC-based schemes confirms that our proposed method provides twice the hiding capacity of six other representative AMBTC-based schemes while maintaining an acceptable file size for stego-images. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
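Matrix embedding with the (7, 4) Hamming code, which the second layer relies on, is compact enough to show directly. The sketch below illustrates only that generic syndrome-coding trick, not the proposed two-layer AMBTC scheme: 3 message bits are hidden in 7 cover bits by flipping at most one of them.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j-1 is the binary
# representation of j (LSB in row 0), for j = 1..7.
H = np.array([[(j >> b) & 1 for j in range(1, 8)] for b in range(3)])  # 3x7

def embed(cover, msg):
    """Return a stego vector whose syndrome is msg, changing <= 1 bit."""
    stego = cover.copy()
    s = (H @ cover + msg) % 2          # wanted change of syndrome
    idx = s[0] + 2 * s[1] + 4 * s[2]   # 1-based column of H equal to s; 0 = none
    if idx:
        stego[idx - 1] ^= 1
    return stego

def extract(stego):
    return (H @ stego) % 2             # the syndrome is the hidden message

cover = np.array([1, 0, 1, 1, 0, 0, 1])
msg = np.array([1, 0, 1])
stego = embed(cover, msg)
```

Because every nonzero syndrome matches exactly one column of H, any 3-bit message is reachable by flipping at most one of the 7 cover bits, which is what keeps the embedding distortion low.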

15 pages, 730 KiB  
Article
OQPSK Synchronization Parameter Estimation Based on Burst Signal Detection
by Zilong Liu, Kexian Gong, Peng Sun, Jicai Deng, Kunheng Zou and Linlin Duan
Electronics 2021, 10(1), 69; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10010069 - 02 Jan 2021
Cited by 1 | Viewed by 2286
Abstract
The fast estimation of synchronization parameters plays an extremely important role in the demodulation of burst signals. In order to solve the problem of high computational complexity in the implementation of traditional algorithms, a synchronization parameter (frequency offset, phase offset, and timing error) estimation algorithm based on Offset Quadrature Phase Shift Keying (OQPSK) burst signal detection is proposed in this article. We first use the Data-Aided (DA) method to detect where the burst signal begins by taking the segment correlation between the received signal and the known local Unique Word (UW). Subsequently, the above results are adopted directly to estimate the synchronization parameters, which clearly differs from conventional algorithms. In this way, the complexity of the proposed algorithm is greatly reduced, and it is more convenient for hardware implementation. The simulation results show that the proposed algorithm has high accuracy and can closely track the Modified Cramer–Rao Bound (MCRB). Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

22 pages, 555 KiB  
Article
Some Algorithms for Computing Short-Length Linear Convolution
by Aleksandr Cariow and Janusz P. Paplinski
Electronics 2020, 9(12), 2115; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9122115 - 10 Dec 2020
Cited by 3 | Viewed by 2006
Abstract
In this article, we propose a set of efficient algorithmic solutions for computing short linear convolutions, focused on hardware implementation in VLSI. We consider convolutions for sequences of length N = 2, 3, 4, 5, 6, 7, and 8. Hardwired units that implement these algorithms can be used as building blocks when designing VLSI-based accelerators for more complex data processing systems. The proposed algorithms are focused on fully parallel hardware implementation, but compared to the naive approach to fully parallel hardware implementation, they require from 25% to about 60% fewer hardware multipliers, depending on the length N. Since a multiplier takes up a much larger area on the chip than an adder and consumes more power, the proposed algorithms are resource-efficient and energy-efficient in terms of their hardware implementation. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
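The flavor of savings the abstract quantifies can be seen already at N = 2: the schoolbook length-2 linear convolution uses 4 multiplications, while a Karatsuba-style rearrangement uses 3. The snippet shows that classic textbook identity for illustration; the paper's constructions for N up to 8 are derived differently.

```python
# Length-2 linear convolution with 3 multiplications instead of 4, at the
# cost of extra additions -- a good trade in VLSI, where multipliers
# dominate chip area and power.
def conv2_fast(a, b):
    """Linear convolution of [a0, a1] and [b0, b1] using 3 multiplications."""
    m0 = a[0] * b[0]
    m2 = a[1] * b[1]
    m1 = (a[0] + a[1]) * (b[0] + b[1])   # the shared product
    return [m0, m1 - m0 - m2, m2]        # [a0*b0, a0*b1 + a1*b0, a1*b1]

a, b = [3, -2], [5, 4]
result = conv2_fast(a, b)
```

Nesting such small modules is the usual route to fast convolution units for longer lengths.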

19 pages, 2505 KiB  
Article
Optical Flow Filtering-Based Micro-Expression Recognition Method
by Junjie Wu, Jianfeng Xu, Deyu Lin and Min Tu
Electronics 2020, 9(12), 2056; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9122056 - 03 Dec 2020
Cited by 11 | Viewed by 1962
Abstract
The recognition accuracy of micro-expressions in the field of facial expressions is still understudied, as current research methods mainly focus on feature extraction and classification. Based on optical flow and decision-theoretic thinking, we propose a novel micro-expression recognition method that can filter out low-quality micro-expression video clips. Using preset thresholds, we develop two optical flow filtering mechanisms: one based on two-branch decisions (OFF2BD) and the other based on three-way decisions (OFF3WD). OFF2BD uses classical binary logic to classify images, dividing them into a positive or negative domain for further filtering. Unlike OFF2BD, OFF3WD adds a boundary domain that defers judgment on the motion quality of the images. In this way, video clips with a low degree of morphological change can be eliminated, directly improving the quality of micro-expression features and the recognition rate. The experimental results verify recognition accuracies of 61.57% and 65.41% on the CASMEII and SMIC datasets, respectively. Comparative analysis shows that the scheme can effectively improve recognition performance.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
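The thresholds and decision logic below are illustrative placeholders, not the paper's values; they merely sketch how a three-way decision splits clips into positive, boundary (deferred), and negative domains by optical-flow quality:

```python
def three_way_filter(flow_magnitudes, alpha, beta):
    """Partition clip indices by mean optical-flow magnitude using two
    preset thresholds (beta < alpha), in the spirit of OFF3WD: positive
    (keep), boundary (judgment deferred), negative (discard).
    OFF2BD corresponds to the special case alpha == beta (no boundary)."""
    positive, boundary, negative = [], [], []
    for i, m in enumerate(flow_magnitudes):
        if m >= alpha:
            positive.append(i)
        elif m >= beta:
            boundary.append(i)
        else:
            negative.append(i)
    return positive, boundary, negative

pos, bnd, neg = three_way_filter([0.9, 0.5, 0.1], alpha=0.8, beta=0.3)
# clip 0 is kept, clip 1 deferred for further filtering, clip 2 discarded
```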

19 pages, 468 KiB  
Article
Robust Room Impulse Response Measurement Using Perfect Periodic Sequences for Wiener Nonlinear Filters
by Alberto Carini, Stefania Cecchi and Simone Orcioni
Electronics 2020, 9(11), 1793; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9111793 - 29 Oct 2020
Cited by 10 | Viewed by 2081
Abstract
The paper discusses a measurement approach for the room impulse response (RIR) that is insensitive to the nonlinearities affecting the measurement instruments. The approach employs as measurement signals the perfect periodic sequences for Wiener nonlinear (WN) filters. Perfect periodic sequences (PPSs) are periodic sequences that guarantee the perfect orthogonality of a filter's basis functions over a period. The PPSs for WN filters are appealing for RIR measurement, since their sample distribution is almost Gaussian and provides a low excitation of the highest amplitudes. RIR measurement using PPSs for WN filters is studied, and its advantages and limitations are discussed. The derivation of PPSs for WN filters suitable for RIR measurement is detailed. Limitations in the identification caused by underestimation of the RIR memory, the order of nonlinearity, and the effect of measurement noise are analysed and estimated. Finally, experimental results, involving both simulations with signals affected by real nonlinear devices and real RIR measurements in the presence of nonlinearities, compare the proposed approach with those based on PPSs for Legendre nonlinear filters, maximal length sequences, and exponential sweeps.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
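The WN-filter PPS construction detailed in the paper is not reproduced here; the following is only a sketch of the identification principle it builds on, assuming a noiseless, purely linear room: a periodic excitation with a flat magnitude spectrum has an impulse-like periodic autocorrelation, so one period of the steady-state output can be circularly deconvolved to recover the impulse response exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
phase = rng.uniform(0, 2 * np.pi, N // 2 + 1)
phase[0] = phase[-1] = 0.0                      # keep the sequence real-valued
x = np.fft.irfft(np.exp(1j * phase), n=N)       # unit-magnitude spectrum

h_true = np.array([1.0, 0.5, 0.25, 0.125])      # toy 4-tap "room"
y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h_true, n=N), n=N)  # periodic output

# circular deconvolution over one period recovers the RIR exactly
h_est = np.fft.irfft(np.fft.rfft(y) / np.fft.rfft(x), n=N)
print(np.allclose(h_est[:4], h_true))           # True
```

The random-phase/flat-magnitude design also keeps the crest factor moderate, loosely mirroring the paper's point that a near-Gaussian sample distribution avoids exciting the largest amplitudes.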

18 pages, 2602 KiB  
Article
A Novel Data-Driven Specific Emitter Identification Feature Based on Machine Cognition
by Mingzhe Zhu, Zhenpeng Feng and Xianda Zhou
Electronics 2020, 9(8), 1308; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9081308 - 14 Aug 2020
Cited by 18 | Viewed by 2495
Abstract
Machine learning is becoming increasingly promising in specific emitter identification (SEI), particularly for feature extraction and target recognition. Traditional features, such as radio frequency (RF), pulse amplitude (PA), and power spectral density (PSD), usually show limited recognition performance when only a slight difference exists between radar signals. Numerous two-dimensional features in transform domains, such as various time-frequency representations and the ambiguity function, are used to increase information abundance, but an unacceptable computational burden usually emerges. To solve this problem, some artfully handcrafted features in transformed domains have been proposed, such as the representative slice of the ambiguity function (AF-RS) and the compressed sensing mask (CS-MASK), to extract representative information that contributes to the machine recognition task. However, most handcrafted features use a neural network only as a classifier; few of them mine deep informative features from the perspective of machine cognition. Feature extraction based on human cognition instead of machine cognition may miss seemingly nominal texture information that actually contributes greatly to recognition, or may collect too much redundant information. In this paper, a novel data-driven feature extraction based on machine cognition (MC-Feature) is proposed, resorting to saliency detection. Saliency detection exhibits positive contributions and suppresses irrelevant contributions in a transform domain with the help of a saliency map calculated from the accumulated gradients of each neuron with respect to the input data. Finally, positive and irrelevant contributions in the saliency map are merged into a new feature. Numerous experimental results demonstrate that the MC-Feature can greatly strengthen slight intra-class differences in SEI and offers a possibility of interpreting CNNs.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

17 pages, 6791 KiB  
Article
Electromagnetic Noise Suppression of Magnetic Resonance Sounding Combined with Data Acquisition and Multi-Frame Spectral Subtraction in the Frequency Domain
by Tingting Lin, Xiaokang Yao, Sijia Yu and Yang Zhang
Electronics 2020, 9(8), 1254; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9081254 - 05 Aug 2020
Cited by 2 | Viewed by 2408
Abstract
As an advanced groundwater detection method, magnetic resonance sounding (MRS) has received more and more attention. Its biggest challenge, however, is that MRS measurements always suffer from a poor signal-to-noise ratio (SNR). Aiming at the problem of noise interference in MRS measurements, we propose a novel noise-suppression approach based on the combination of data acquisition and multi-frame spectral subtraction (DA-MFSS). Pure ambient noise from the measurement area is first collected by the receiving coil, and the noisy MRS signal is then recorded after the pulse moments are transmitted. This acquisition of pure noise and noisy MRS signal is repeated several times, and the recordings are averaged to preliminarily suppress the noise. Secondly, the averaged pure noise and noisy signal are divided into multiple frames. Each framed signal is transformed into the frequency domain, and the spectral subtraction method is applied to further suppress the electromagnetic noise embedded in the noisy MRS signal. Finally, the de-noised signal is recovered by the overlap-add method and inverse Fourier transformation. The approach was examined by numerical simulation and field measurements: after applying it, the SNR of the MRS data improved by 16.89 dB, and both the random noise and the harmonic noise were well suppressed.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
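A minimal single-channel sketch of the spectral-subtraction stage described above, with simplifying assumptions: rectangular, non-overlapping frames (the paper uses overlap-add resynthesis) and a toy narrowband signal standing in for the MRS free-induction decay:

```python
import numpy as np

def spectral_subtract(noisy, noise_ref, frame=256):
    """Subtract the noise magnitude spectrum, averaged over all frames of a
    noise-only recording, from each frame of the noisy signal, keeping the
    noisy phase. A small spectral floor avoids negative magnitudes."""
    def frames(sig):
        n = len(sig) // frame
        return sig[: n * frame].reshape(n, frame)

    noise_mag = np.abs(np.fft.rfft(frames(noise_ref), axis=1)).mean(axis=0)
    out = []
    for f in frames(noisy):
        spec = np.fft.rfft(f)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.05 * np.abs(spec))
        out.append(np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame))
    return np.concatenate(out)

rng = np.random.default_rng(1)
t = np.arange(8192) / 8000.0
clean = np.sin(2 * np.pi * 440 * t)               # toy narrowband signal
noise = 0.5 * rng.standard_normal(t.size)
noise_only = 0.5 * rng.standard_normal(t.size)    # separate noise-only acquisition
denoised = spectral_subtract(clean + noise, noise_only)
```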

21 pages, 5963 KiB  
Article
High Precision Sparse Reconstruction Scheme for Multiple Radar Mainlobe Jammings
by Yuan Cheng, Daiyin Zhu and Jindong Zhang
Electronics 2020, 9(8), 1224; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9081224 - 30 Jul 2020
Cited by 4 | Viewed by 2452
Abstract
Radar mainlobe jamming has attracted considerable attention in the field of electronic countermeasures. When the direction of arrival (DOA) of the jamming is close to that of the target, conventional antijamming methods are ineffective. Moreover, mainlobe antijamming methods based on blind source separation (BSS) can deteriorate the target direction estimation. Thus, in this paper, a high-precision sparse reconstruction scheme for multiple radar mainlobe jammings is proposed that does not suffer from the failure or performance degradation inherent in traditional methods. First, the mainlobe jamming signal and desired signal components are extracted using the joint approximate diagonalization of eigenmatrices (JADE) method. Then, an oblique projection with sparse Bayesian learning (OP-SBL) method is employed to reconstruct the target with high precision. The proposed method can adaptively suppress up to three radar mainlobe jammers while keeping the DOA estimation error below 0.1°. Simulation and experimental results confirm the effectiveness of the proposed method.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

19 pages, 3296 KiB  
Article
Speech Enhancement Based on Fusion of Both Magnitude/Phase-Aware Features and Targets
by Haitao Lang and Jie Yang
Electronics 2020, 9(7), 1125; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9071125 - 10 Jul 2020
Cited by 5 | Viewed by 2981
Abstract
Recently, supervised learning methods, especially deep neural network (DNN)-based methods, have shown promising performance in single-channel speech enhancement. Generally, those approaches extract acoustic features directly from the noisy speech to train a magnitude-aware target. In this paper, we propose to extract acoustic features not only from the noisy speech but also from the pre-estimated speech, noise, and phase separately, and then fuse them into a new complementary feature in order to obtain a more discriminative acoustic representation. In addition to learning a magnitude-aware target, we also utilize the fused feature to learn a phase-aware target, thereby further improving the accuracy of the recovered speech. We conduct extensive experiments, including performance comparisons with typical existing methods, generalization evaluation on unseen noise, an ablation study, and a subjective test with human listeners, to demonstrate the feasibility and effectiveness of the proposed method. The experimental results show that the proposed method improves both the quality and the intelligibility of the reconstructed speech.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

15 pages, 9204 KiB  
Article
Adaptive Weighted High Frequency Iterative Algorithm for Fractional-Order Total Variation with Nonlocal Regularization for Image Reconstruction
by Hui Chen, Yali Qin, Hongliang Ren, Liping Chang, Yingtian Hu and Huan Zheng
Electronics 2020, 9(7), 1103; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9071103 - 07 Jul 2020
Cited by 5 | Viewed by 2552
Abstract
We propose an adaptive weighted high-frequency iterative algorithm for a fractional-order total variation (FrTV) approach with nonlocal regularization to alleviate image deterioration and to eliminate the staircase artifacts that result from the total variation (TV) method. The high-frequency gradients are adaptively reweighted in each iteration after the image is decomposed into high- and low-frequency components by a pre-processing technique. Nonlocal regularization based on nonlocal means (NLM) filtering is introduced into our method, incorporating prior structural information of the image to suppress staircase artifacts. An alternating direction method of multipliers (ADMM) is used to solve the problem combining reweighted FrTV and nonlocal regularization. Experimental results show that both the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) of the reconstructed images are higher than those achieved by the four competing methods at sampling ratios below 25%. At a 5% sampling ratio, the gains in PSNR and SSIM are up to 1.63 dB and 0.0114, respectively, over ten images compared with reweighted total variation with nuclear norm regularization (RTV-NNR). The improved approach preserves more texture details and has better visual effects, especially at low sampling ratios, at the cost of more computation time.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

15 pages, 3261 KiB  
Article
Learning Ratio Mask with Cascaded Deep Neural Networks for Echo Cancellation in Laser Monitoring Signals
by Haitao Lang and Jie Yang
Electronics 2020, 9(5), 856; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9050856 - 21 May 2020
Viewed by 2646
Abstract
Laser monitoring has received more and more attention in many application fields thanks to its essential advantages. Analysis shows that the target speech in laser monitoring signals is often interfered with by echoes, resulting in a decline in speech intelligibility and quality, which in turn affects the identification of useful information. The cancellation of echoes in laser monitoring signals is not a trivial task. In this article, we formulate it as a simple but effective additive echo noise model and propose cascaded deep neural networks (C-DNNs) as the mapping function from the acoustic features of the noisy speech to the ratio mask of the clean signal. To validate the feasibility and effectiveness of the proposed method, we investigated the effect of echo intensity, echo delay, and training target on performance. We also compared the proposed C-DNNs to traditional and newly emerging DNN-based supervised learning methods. Extensive experiments demonstrate that the proposed method can greatly improve the speech intelligibility and quality of the echo-cancelled signals and outperforms the comparison methods.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
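The ratio-mask target itself is simple to state. One common form, an ideal ratio mask computed from magnitudes, is shown below purely as an illustration of this kind of learning target (the paper's exact target definition may differ):

```python
import numpy as np

def ideal_ratio_mask(clean_mag, echo_mag):
    # IRM-style target: per time-frequency bin, the fraction of magnitude
    # belonging to the clean signal (epsilon guards against 0/0).
    return clean_mag / (clean_mag + echo_mag + 1e-12)

S = np.array([1.0, 0.5, 0.0])        # toy clean magnitudes per T-F bin
E = np.array([0.0, 0.5, 1.0])        # toy echo magnitudes
mask = ideal_ratio_mask(S, E)        # ≈ [1.0, 0.5, 0.0]
enhanced = mask * (S + E)            # mask applied to the mixture magnitude
```

At inference time the network predicts such a mask from the noisy features, and the mask is applied to the observed spectrogram before resynthesis.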

16 pages, 3323 KiB  
Article
Specific Emitter Identification Based on Synchrosqueezing Transform for Civil Radar
by Mingzhe Zhu, Zhenpeng Feng, Xianda Zhou, Rui Xiao, Yue Qi and Xinliang Zhang
Electronics 2020, 9(4), 658; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9040658 - 16 Apr 2020
Cited by 15 | Viewed by 2671
Abstract
Time-frequency (TF) signal features are widely used in specific emitter identification (SEI), which commonly arises in many applications, especially for radar signals. Due to data scale and algorithm complexity, it is difficult to obtain an informative representation for SEI with existing TF features. In this paper, a feature extraction method based on the synchrosqueezing transform (SST) is proposed. The SST feature has a dimension equivalent to that of the Fourier transform and retains the most relevant information of the signal, leading to, on average, approximately 20 percent improvement in SEI for complex frequency-modulated signals compared with existing handcrafted features. Numerous results demonstrate that the synchrosqueezing TF feature offers better recognition accuracy, especially for signals with intricate time-varying information. Moreover, a linear relevance propagation algorithm is employed to attest to the SST feature's importance from the perspective of deep learning.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

15 pages, 3092 KiB  
Article
High-Capacity Data Hiding for ABTC-EQ Based Compressed Image
by Cheonshik Kim, Ching-Nung Yang and Lu Leng
Electronics 2020, 9(4), 644; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9040644 - 14 Apr 2020
Cited by 29 | Viewed by 2810
Abstract
We present a new data hiding method based on Adaptive BTC Edge Quantization (ABTC-EQ) using the optimal pixel adjustment process (OPAP) to optimize two quantization levels. The reason we choose ABTC-EQ as the cover medium is that it is superior to AMBTC in maintaining high image quality after encoding. ABTC-EQ represents a block as a trio (Q1, Q2, [Q3], BM), where the Qi are quantization levels and BM is a bitmap; the number of quantization levels is two or three, depending on whether the block contains an edge. Before embedding secret bits, we categorize every block as smooth or complex using a threshold. For a 4×4 smooth block, the sixteen secret bits replace the bitmap directly, embedding the message. Meanwhile, the OPAP method conceals one bit each in the LSB and 2LSB, and maintains image quality by minimizing the errors that occur in the embedding procedure. Sufficient experimental results demonstrate that the performance of the proposed scheme is satisfactory in terms of embedding capacity and image quality.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
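OPAP itself is a small, well-known post-adjustment; a sketch of the classic rule for k-bit LSB embedding (independent of the ABTC-EQ container described above):

```python
def embed_lsb_opap(pixel, bits, k):
    """Replace the k LSBs of an 8-bit pixel with `bits`, then apply the
    Optimal Pixel Adjustment Process: shift the stego pixel by ±2**k when
    that reduces the embedding error without disturbing the hidden bits."""
    stego = (pixel & ~((1 << k) - 1)) | bits
    d = pixel - stego
    if d > (1 << (k - 1)) and stego + (1 << k) <= 255:
        stego += 1 << k
    elif d < -(1 << (k - 1)) and stego - (1 << k) >= 0:
        stego -= 1 << k
    return stego

s = embed_lsb_opap(132, 0b11, 2)   # plain LSB substitution would give 135
print(s, s & 0b11)                 # adjusted pixel keeps the payload in its 2 LSBs
```

Because the adjustment moves the pixel in steps of 2**k, the embedded k LSBs are untouched while the pixel value lands closer to the original, which is how OPAP preserves image quality.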

25 pages, 1384 KiB  
Article
Improved 2D Coprime Array Structure with the Difference and Sum Coarray Concept
by Guiyu Wang, Zesong Fei, Shiwei Ren and Xiaoran Li
Electronics 2020, 9(2), 273; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9020273 - 05 Feb 2020
Cited by 2 | Viewed by 2342
Abstract
Recently, the difference and sum (diff-sum) coarray has attracted much attention in one-dimensional direction-of-arrival estimation for its high degrees of freedom (DOFs). In this paper, we utilize both the spatial and the temporal information to construct the diff-sum coarray for planar sparse arrays. The diff-sum coarray contains both the difference coarray and the sum coarray, which provides much higher DOFs than the difference coarray alone. We take as the array model a planar coprime array consisting of two uniform square subarrays. To fully exploit the aperture-extending ability of the diff-sum coarray, we propose two novel configurations to improve the planar coprime array. The first configuration compresses the inter-element spacing of one subarray, which results in a larger consecutive area in the coarray. The second configuration rearranges the two subarrays and introduces a proper separation between them, which significantly reduces the redundancy of the diff-sum coarray and increases the DOFs. In addition, we derive closed-form expressions for the central consecutive ranges in the coarrays of the proposed array configurations. Simulations verify the superiority of the proposed configurations.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
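The planar configurations are the paper's contribution; the coarray bookkeeping itself is easy to sketch in 1-D (the array positions below are a toy coprime pair, not the proposed layouts), and the same per-axis accounting carries over to the planar case:

```python
def coarray_lags(positions):
    """Unique lags of the difference coarray and of the diff-sum coarray
    (difference ∪ sum ∪ negated sum) for a 1-D sparse array."""
    diff = {a - b for a in positions for b in positions}
    sums = {a + b for a in positions for b in positions}
    return diff, diff | sums | {-s for s in sums}

# toy 1-D coprime pair: multiples of 2 and multiples of 3
p = sorted({0, 2, 4, 6} | {0, 3, 6, 9})
d, ds = coarray_lags(p)
print(len(d), len(ds))   # the diff-sum coarray provides strictly more unique lags
```

The sum lags reach twice the physical aperture (here up to ±18 versus ±9 for differences alone), which is the source of the extra DOFs.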

15 pages, 5610 KiB  
Article
A Low-Complex Frame Rate Up-Conversion with Edge-Preserved Filtering
by Ran Li, Wendan Ma, Yanling Li and Lei You
Electronics 2020, 9(1), 156; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9010156 - 15 Jan 2020
Cited by 1 | Viewed by 2556
Abstract
The improvement in the resolution of digital video requires a continuous increase in the computation invested in Frame Rate Up-Conversion (FRUC). In this paper, we combine the advantages of Edge-Preserved Filtering (EPF) and Bidirectional Motion Estimation (BME) in an attempt to reduce the computational complexity. The inaccuracy of BME results from similar structures in the texture regions, which can be avoided by using EPF to remove the texture details of video frames. Because EPF filters out the high-frequency components, each video frame can be subsampled before BME with minimal loss of accuracy. EPF also preserves edges, which prevents the deformation of objects during subsampling. In addition, we use predictive search to eliminate redundant search points according to the local smoothness of the Motion Vector Field (MVF), speeding up BME. The experimental results show that the proposed FRUC algorithm delivers good objective and subjective quality of the interpolated frames with low computational complexity.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
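As a reference point for what the EPF pre-filtering, subsampling, and predictive search accelerate, here is a plain exhaustive-search block match with the SAD criterion on one block (toy frames and a hypothetical block position, not the paper's bidirectional formulation):

```python
import numpy as np

def block_match(prev, cur, top, left, bsize=8, radius=4):
    """Exhaustive-search motion estimation for one block of `cur` against
    `prev`, minimizing the Sum of Absolute Differences (SAD)."""
    block = cur[top:top + bsize, left:left + bsize].astype(np.int64)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > prev.shape[0] or x + bsize > prev.shape[1]:
                continue
            cand = prev[y:y + bsize, x:x + bsize].astype(np.int64)
            sad = np.abs(cand - block).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(2)
prev = rng.integers(0, 256, (32, 32))
cur = np.roll(prev, (2, -1), axis=(0, 1))   # synthetic global motion
print(block_match(prev, cur, 12, 12))       # recovers the shift: (-2, 1)
```

The cost is quadratic in the search radius per block, which is exactly why removing texture (so smaller, subsampled frames still match reliably) and pruning search points pay off.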

18 pages, 5492 KiB  
Article
Positioning Using IRIDIUM Satellite Signals of Opportunity in Weak Signal Environment
by Zizhong Tan, Honglei Qin, Li Cong and Chao Zhao
Electronics 2020, 9(1), 37; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics9010037 - 27 Dec 2019
Cited by 36 | Viewed by 7536
Abstract
To free navigation and positioning systems from dependence on the global navigation satellite system (GNSS), radio, television, satellite, and other signals of opportunity (SOPs) can be used for receiver positioning. Space-based SOPs from satellites offer better coverage and availability than ground-based SOPs. Building on related research on Iridium SOP positioning in open environments, this paper focuses on occluded settings and studies Iridium SOP positioning in weak-signal environments. A new quadratic square accumulating instantaneous Doppler estimation algorithm (QSA-IDE) is proposed after analysing the orbit and signal characteristics of the Iridium satellites. The new method improves Doppler estimation for weak Iridium signals. Theoretical analysis and positioning results based on real signal data show that positioning based on Iridium SOPs can be realized in weak-signal environments. The research broadens the applicable environments of Iridium SOP positioning, thereby improving the availability and continuity of its positioning.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
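The QSA-IDE algorithm itself is not reproduced here; the quantity it estimates is the instantaneous Doppler shift, whose scale for Iridium-class signals can be sketched as follows (the ~1.6 GHz carrier and the 7 km/s line-of-sight rate are illustrative assumptions, not measured values):

```python
# Back-of-envelope model behind Doppler-based SOP positioning: the observed
# carrier shift is proportional to the satellite-receiver line-of-sight rate.
C = 299_792_458.0        # speed of light, m/s
F_CARRIER = 1.626e9      # Iridium downlink lies in the 1616-1626.5 MHz band

def doppler_shift(radial_velocity_mps, carrier_hz=F_CARRIER):
    return carrier_hz * radial_velocity_mps / C

shift = doppler_shift(7000.0)    # LEO line-of-sight rates can reach ~7 km/s
print(f"{shift / 1e3:.1f} kHz")  # on the order of tens of kHz
```

Tracking how this shift evolves as the satellite passes overhead is what lets a receiver infer its own position from Doppler measurements alone.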

20 pages, 3315 KiB  
Article
Uniform Sampling Methodology to Construct Projection Matrices for Angle-of-Arrival Estimation Applications
by Mohammed A. G. Al-Sadoon, Marcus de Ree, Raed A. Abd-Alhameed and Peter S. Excell
Electronics 2019, 8(12), 1386; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics8121386 - 21 Nov 2019
Cited by 6 | Viewed by 2301
Abstract
This manuscript first proposes a reduced-size, low-complexity Angle of Arrival (AoA) approach, called the Reduced Uniform Projection Matrix (RUPM) method. The RUPM method applies a Uniform Sampling Matrix (USM) criterion to sample certain columns from the obtained covariance matrix in order to efficiently find the directions of the signals incident on an antenna array. The USM methodology reduces the dependency between adjacent sampled columns of the covariance matrix; the sampled matrix is then used to construct the projection matrix. The size of the resulting projection matrix is reduced to minimise the computational complexity of the search-grid stage. A theoretical analysis demonstrates that the USM methodology can increase the Degrees of Freedom (DOFs) for the same aperture size and number of sampled columns compared to the classical sampling criterion. A polynomial-rooting variant is then constructed as an efficient computational alternative to the UPM method for the one-dimensional (1D) array spectrum peak-searching problem. It is found that this distribution increases the number of produced nulls and enhances noise immunity. The advantage of the RUPM method is that it can be applied to any array configuration, while the Root-UPM offers better estimation accuracy with less execution time under a uniform-linear-array condition. Computer simulations over various scenarios confirm the theoretical claims. The proposed direction-finding methods are compared with several AoA methods in terms of required execution time, Signal-to-Noise Ratio (SNR), and different numbers of data measurements. The results verify that the new methods achieve significantly better performance with reduced computational demands.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
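The RUPM/USM column-sampling step is the paper's contribution; what it reduces is the classical projection-matrix grid search. As a baseline sketch (a standard MUSIC-style noise-subspace projection on a uniform linear array, not the RUPM construction), the full-size procedure looks like:

```python
import numpy as np

def steering(m, theta_deg):
    # half-wavelength ULA steering vector for an m-element array
    return np.exp(1j * np.pi * np.sin(np.deg2rad(theta_deg)) * np.arange(m))

rng = np.random.default_rng(3)
M, snaps, true_theta = 8, 200, 20.0
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
noise = 0.1 * (rng.standard_normal((M, snaps)) + 1j * rng.standard_normal((M, snaps)))
X = np.outer(steering(M, true_theta), s) + noise
R = X @ X.conj().T / snaps                        # sample covariance matrix

w, V = np.linalg.eigh(R)                          # eigenvalues in ascending order
En = V[:, :-1]                                    # noise subspace (one source)
P = En @ En.conj().T                              # full projection matrix
grid = np.arange(-90.0, 90.5, 0.5)
spec = [1.0 / np.real(steering(M, t).conj() @ P @ steering(M, t)) for t in grid]
est = grid[int(np.argmax(spec))]
print(est)                                        # near the true 20 degrees
```

Every grid point multiplies through the M×M projection matrix, so shrinking that matrix (as RUPM does) or replacing the grid search with polynomial rooting (as Root-UPM does) directly cuts the dominant cost.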

Review


25 pages, 498 KiB  
Review
Performance of Feature-Based Techniques for Automatic Digital Modulation Recognition and Classification—A Review
by Dhamyaa H. Al-Nuaimi, Ivan A. Hashim, Intan S. Zainal Abidin, Laith B. Salman and Nor Ashidi Mat Isa
Electronics 2019, 8(12), 1407; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics8121407 - 26 Nov 2019
Cited by 38 | Viewed by 5303
Abstract
The demand for bandwidth-critical applications has stimulated the research community not only to develop new ways of communication but also to use the existing spectrum efficiently. Networks have become dynamic and heterogeneous, and receivers must handle various signals that may be modulated differently. Automatic modulation classification (AMC) is a key procedure for present and next-generation communication networks and facilitates the demodulation process at the receiver side. In the presence of channel noise, detecting the modulation scheme of the received signal is a complicated procedure, because parameters such as the carrier frequency, phase offset, signal power, and timing information are unknown at the receiver. Two main methods, namely maximum-likelihood functions and the signal-statistics feature-based (FB) approach, are used for the automatic classification of modulated signals. In this study, a comprehensive survey of modulation-recognition techniques based on the FB approach is conducted. A number of basic features that are usually used to determine and discriminate modulation types were investigated. The classifiers used in the discrimination process are studied in detail and compared, to help the reader determine the limitations associated with the FB approach. Both classifiers and basic features are compared, and their advantages and disadvantages investigated on the basis of previous research, to determine the best type of classifier and set of features for each discrimination environment. This work serves as a guide for AMC researchers in determining suitable features and algorithms.
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)
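As one concrete example of the basic statistical features such FB classifiers use, the magnitude of the normalized fourth-order cumulant C40 separates BPSK from QPSK even at identical power (theoretical values 2 and 1, respectively); the sketch below illustrates the feature, not any specific classifier from the survey:

```python
import numpy as np

def c40_magnitude(x):
    """|C40| of a zero-mean symbol stream, normalized to unit power:
    a classic FB discriminator (theory: BPSK -> 2, QPSK -> 1)."""
    x = x / np.sqrt(np.mean(np.abs(x) ** 2))
    return abs(np.mean(x ** 4) - 3 * np.mean(x ** 2) ** 2)

rng = np.random.default_rng(4)
bpsk = rng.choice(np.array([-1.0, 1.0]), 4096).astype(complex)
qpsk = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), 4096)
print(c40_magnitude(bpsk), c40_magnitude(qpsk))  # ≈ 2.0 and ≈ 1.0
```

A simple threshold on this statistic already discriminates the two constellations; practical FB classifiers feed several such moments and cumulants into the classifiers the survey compares.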
