Machine Learning and Signal Processing

A special issue of Signals (ISSN 2624-6120).

Deadline for manuscript submissions: closed (30 June 2021) | Viewed by 16550

Special Issue Editors

Merchant Venturers School of Engineering, University of Bristol, Bristol BS8 1TH, UK
Interests: machine learning; computational statistics; signal processing
Department of Electrical and Electronic Engineering, University of Bristol, 1 Cathedral Square, Bristol BS1 5DD, UK
Interests: machine learning; behavioural modelling; structural learning; uncertainty modelling

Special Issue Information

Dear Colleagues,

Machine learning, in combination with signal processing, provides powerful solutions to many real-world technical and scientific challenges. Increasingly, the boundaries between the two have blurred: machine learning methods are now used to solve problems once addressed with traditional signal processing techniques, and signal processing methods are often used to develop or enhance new machine learning methods. This Special Issue will present the most recent and exciting advances in machine learning for signal processing. Prospective authors are invited to submit papers on relevant algorithms and applications, including but not limited to:

  • Signal processing and machine learning for sensor networks;
  • Learning theory and modeling;
  • Neural networks and deep learning;
  • Bayesian learning and modeling;
  • Sequential learning, sequential decision methods;
  • Information-theoretic learning;
  • Graphical and kernel models;
  • Bounds on performance;
  • Source separation and independent component analysis;
  • Signal detection, pattern recognition and classification;
  • Tensor and structured matrix methods;
  • Machine learning for big data;
  • Large scale learning;
  • Dictionary learning, subspace, and manifold learning;
  • Semi-supervised and unsupervised learning;
  • Active and reinforcement learning;
  • Learning from multimodal data;
  • Resource-efficient machine learning;
  • Cognitive information processing;
  • Bioinformatics applications;
  • Biomedical applications and neural engineering;
  • Speech and audio processing applications;
  • Image and video processing applications;
  • Intelligent multimedia and web processing;
  • Communications applications;
  • Other applications including social networks, games, smart grid, security, and privacy.
Dr. Tom Diethe
Dr. Niall Twomey
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Signals is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (5 papers)

Research

20 pages, 4893 KiB  
Article
Continuous Adaptation with Online Meta-Learning for Non-Stationary Target Regression Tasks
by Taku Yamagata, Raúl Santos-Rodríguez and Peter Flach
Signals 2022, 3(1), 66-85; https://doi.org/10.3390/signals3010006 - 03 Feb 2022
Viewed by 2180
Abstract
Most environments change over time. Being able to adapt to such non-stationary environments is vital for real-world applications of many machine learning algorithms. In this work, we propose CORAL, a computationally efficient regression algorithm capable of adapting to a non-stationary target. CORAL is based on Bayesian linear regression with a sliding window and offline/online meta-learning. The sliding window makes our model focus on recently received data and ignore older observations. The meta-learning approach allows us to learn the prior distribution of the model parameters, which speeds up model adaptation, compensates for the sliding window’s drawbacks, and enhances performance. We evaluate CORAL on two tasks: a toy problem and a more complex blood glucose level prediction task. Our approach improves prediction accuracy for the non-stationary target significantly while also performing well for the stationary target. We show that the two components of our method work in a complementary fashion to achieve this.
(This article belongs to the Special Issue Machine Learning and Signal Processing)
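
As a rough illustration of the idea described in the abstract above, and not the authors' CORAL implementation, the sketch below fits a Bayesian linear regression on a sliding window of recent observations using a Gaussian prior that could, in principle, be meta-learned offline. The window length, prior parameters, feature map, and toy data are all illustrative assumptions.

    import numpy as np

    # Sketch only (not the published CORAL algorithm): Bayesian linear regression
    # refitted on a sliding window of recent data, with a Gaussian prior
    # (mean m0, covariance S0) standing in for a meta-learned prior.

    def posterior(X, y, m0, S0, noise_var=0.1):
        """Posterior mean and covariance of the weights given the window data."""
        S0_inv = np.linalg.inv(S0)
        S_n = np.linalg.inv(S0_inv + X.T @ X / noise_var)
        m_n = S_n @ (S0_inv @ m0 + X.T @ y / noise_var)
        return m_n, S_n

    def predict(x, m_n, S_n, noise_var=0.1):
        """Predictive mean and variance for a single feature vector x."""
        return x @ m_n, noise_var + x @ S_n @ x

    # Hypothetical usage: keep only the last W points so the fit tracks drift.
    W = 50                                   # assumed window length
    rng = np.random.default_rng(0)
    t = np.arange(200, dtype=float)
    y_all = np.sin(0.05 * t) + 0.1 * rng.standard_normal(t.size)  # toy drifting target
    X_all = np.stack([np.ones_like(t), t], axis=1)                # bias + time feature

    m0, S0 = np.zeros(2), np.eye(2)          # prior; would be meta-learned in CORAL
    m_n, S_n = posterior(X_all[-W:], y_all[-W:], m0, S0)
    mu, var = predict(X_all[-1], m_n, S_n)
    print(f"prediction {mu:.3f} +/- {np.sqrt(var):.3f}")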

18 pages, 5744 KiB  
Article
Development of Surface EMG Game Control Interface for Persons with Upper Limb Functional Impairments
by Joseph K. Muguro, Pringgo Widyo Laksono, Wahyu Rahmaniar, Waweru Njeri, Yuta Sasatake, Muhammad Syaiful Amri bin Suhaimi, Kojiro Matsushita, Minoru Sasaki, Maciej Sulowicz and Wahyu Caesarendra
Signals 2021, 2(4), 834-851; https://doi.org/10.3390/signals2040048 - 12 Nov 2021
Cited by 5 | Viewed by 3057
Abstract
In recent years, surface electromyography (sEMG) signals have been effectively applied in various fields such as control interfaces, prosthetics, and rehabilitation. We propose a neck rotation estimate from sEMG and apply it as a game control interface that can be used by people with disabilities or patients with functional impairment of the upper limb. This paper uses an equation-based estimate and a machine learning model to translate the signals into corresponding neck rotations. For testing, we designed two custom-made game scenes, a dynamic 1D object interception and a 2D maze, in Unity 3D, controlled by the sEMG signal in real time. Twenty-two (22) test subjects (mean age 27.95, SD 13.24) participated in the experiment to verify the usability of the interface. In the object-interception task, subjects demonstrated stable control, intercepting objects with more than 73% accuracy. In the 2D maze, male and female subjects recorded completion times of 98.84 ± 50.2 s and 112.75 ± 44.2 s, respectively, with no significant difference in means by one-way ANOVA (p = 0.519). The results confirmed the usefulness of neck sEMG from the sternocleidomastoid (SCM) as a control interface requiring little or no calibration. Equation-based control models give intuitive direction and speed control, while machine learning schemes offer more stable directional control. Such interfaces can be applied in several areas that involve neck activity, e.g., robot control and rehabilitation, as well as game interfaces, to enable entertainment for people with disabilities.
(This article belongs to the Special Issue Machine Learning and Signal Processing)
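
The abstract above describes translating neck-muscle sEMG into game commands. As a loose, hedged sketch of the standard preprocessing such an interface typically relies on, and not the paper's actual pipeline, the code below rectifies and low-pass filters two sEMG channels to obtain amplitude envelopes and maps the dominant side to a left/right command. The sampling rate, filter settings, threshold, and synthetic signals are assumptions for illustration.

    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 1000.0                                      # assumed sampling rate (Hz)
    b, a = butter(4, 5.0 / (FS / 2), btype="low")    # 5 Hz envelope filter (assumed)

    def envelope(emg):
        """Rectify raw sEMG and low-pass filter it to get an amplitude envelope."""
        return filtfilt(b, a, np.abs(emg))

    def neck_command(left_emg, right_emg, threshold=0.2):
        """Return -1 (turn left), +1 (turn right), or 0 (idle) for a signal block."""
        left = envelope(left_emg).mean()
        right = envelope(right_emg).mean()
        if max(left, right) < threshold:             # neither muscle active enough
            return 0
        return -1 if left > right else 1

    # Hypothetical usage with synthetic noise standing in for two SCM channels.
    rng = np.random.default_rng(1)
    left_channel = 0.5 * rng.standard_normal(500)    # "active" left channel
    right_channel = 0.05 * rng.standard_normal(500)  # quiet right channel
    print(neck_command(left_channel, right_channel)) # expected: -1 (turn left)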

23 pages, 987 KiB  
Article
On the Quality of Deep Representations for Kepler Light Curves Using Variational Auto-Encoders
by Francisco Mena, Patricio Olivares, Margarita Bugueño, Gabriel Molina and Mauricio Araya
Signals 2021, 2(4), 706-728; https://doi.org/10.3390/signals2040042 - 14 Oct 2021
Cited by 2 | Viewed by 3012
Abstract
Light curve analysis usually involves extracting manually designed features associated with physical parameters, together with visual inspection. The large amount of data now collected in astronomy by different surveys makes characterizing these signals a major challenge, so finding a good, informative representation for them is a key, non-trivial task. Some studies have tried unsupervised machine learning approaches to generate such representations, with limited effectiveness. In this article, we show that variational auto-encoders can learn these representations by taking the difference between successive timestamps as an additional input. We present two versions of such auto-encoders: the Variational Recurrent Auto-Encoder plus time (VRAEt) and the re-Scaling Variational Recurrent Auto-Encoder plus time (S-VRAEt). The objective is to obtain the most likely low-dimensional representation of the time series, matched to the latent variables, which should compactly contain the pattern information needed to reconstruct it. In addition, S-VRAEt embeds the re-scaling preprocessing of the time series into the model in order to use the flux standard deviation when learning the light curve structure. To assess our approach, we used the largest transit light curve dataset obtained during the four years of the Kepler mission and compared our methods to similar techniques in signal processing and light curve analysis. The results show that the proposed methods improve the quality of the deep representation of phase-folded transit light curves with respect to their deterministic counterparts. Specifically, they present a good balance between the reconstruction task and the smoothness of the curve, validated with the root mean squared error, mean absolute error, and auto-correlation metrics. Furthermore, the representation is well disentangled, as validated by the Pearson correlation and mutual information metrics. Finally, the usefulness of the representation for distinguishing categories was validated with the F1 score in the task of classifying exoplanets. Moreover, the S-VRAEt model extends all the advantages of VRAEt, achieving classification performance close to its maximum model capacity and generating light curves that are visually comparable to a Mandel–Agol fit. Thus, the proposed methods present a new way of analyzing and characterizing light curves.
(This article belongs to the Special Issue Machine Learning and Signal Processing)
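
As an illustration of the general architecture family discussed above, rather than the authors' VRAEt or S-VRAEt models, the sketch below is a recurrent variational auto-encoder whose per-step input is the flux value concatenated with the time difference to the previous sample, and whose decoder is conditioned on those time gaps. Layer sizes, loss weighting, and the toy batch are illustrative assumptions (PyTorch).

    import torch
    import torch.nn as nn

    # Sketch only (not the published VRAEt/S-VRAEt): a recurrent VAE over
    # irregularly sampled light curves, fed flux plus time differences.

    class RecurrentVAE(nn.Module):
        def __init__(self, hidden=64, latent=8):
            super().__init__()
            self.encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
            self.to_mu = nn.Linear(hidden, latent)
            self.to_logvar = nn.Linear(hidden, latent)
            self.latent_to_h = nn.Linear(latent, hidden)
            self.decoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
            self.to_flux = nn.Linear(hidden, 1)

        def forward(self, flux, dt):
            x = torch.stack([flux, dt], dim=-1)           # (batch, T, 2)
            _, h = self.encoder(x)                        # h: (1, batch, hidden)
            mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
            h0 = self.latent_to_h(z).unsqueeze(0)
            out, _ = self.decoder(dt.unsqueeze(-1), h0)   # decoder conditioned on time gaps
            return self.to_flux(out).squeeze(-1), mu, logvar

    def elbo_loss(recon, flux, mu, logvar):
        """Reconstruction error plus KL divergence to a standard normal prior."""
        rec = ((recon - flux) ** 2).mean()
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()
        return rec + kl

    # Hypothetical usage on a batch of toy "phase-folded" curves.
    flux, dt = torch.randn(4, 100), torch.rand(4, 100)
    model = RecurrentVAE()
    recon, mu, logvar = model(flux, dt)
    print(elbo_loss(recon, flux, mu, logvar).item())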

11 pages, 555 KiB  
Article
Mixture Density Conditional Generative Adversarial Network Models (MD-CGAN)
by Jaleh Zand and Stephen Roberts
Signals 2021, 2(3), 559-569; https://doi.org/10.3390/signals2030034 - 01 Sep 2021
Cited by 3 | Viewed by 3700
Abstract
Generative Adversarial Networks (GANs) have gained significant attention in recent years, with impressive applications highlighted in computer vision, in particular. Compared to such examples, however, there have been more limited applications of GANs to time series modeling, including forecasting. In this work, we present the Mixture Density Conditional Generative Adversarial Model (MD-CGAN), with a focus on time series forecasting. We show that our model is capable of estimating a probabilistic posterior distribution over forecasts and that, in comparison to a set of benchmark methods, the MD-CGAN model performs well, particularly in situations where noise is a significant component of the observed time series. Further, by using a Gaussian mixture model as the output distribution, MD-CGAN offers posterior predictions that are non-Gaussian.
(This article belongs to the Special Issue Machine Learning and Signal Processing)
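
The key idea highlighted in the abstract above is a generator with a Gaussian-mixture output distribution. As a minimal, hedged sketch of a mixture-density output head and its negative log-likelihood, and not the authors' MD-CGAN architecture or adversarial training procedure, the code below maps a conditioning vector to mixture weights, means, and scales (PyTorch); the layer sizes and number of components are assumptions.

    import torch
    import torch.nn as nn

    class MixtureDensityHead(nn.Module):
        """Maps a conditioning vector to Gaussian-mixture parameters (sketch only)."""
        def __init__(self, cond_dim=16, components=3, hidden=32):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(cond_dim, hidden), nn.ReLU())
            self.logits = nn.Linear(hidden, components)      # mixture weights (pre-softmax)
            self.means = nn.Linear(hidden, components)
            self.log_scales = nn.Linear(hidden, components)

        def forward(self, cond):
            h = self.body(cond)
            return self.logits(h), self.means(h), self.log_scales(h)

    def mixture_nll(y, logits, means, log_scales):
        """Negative log-likelihood of y under the predicted Gaussian mixture."""
        log_w = torch.log_softmax(logits, dim=-1)
        dist = torch.distributions.Normal(means, log_scales.exp())
        log_prob = dist.log_prob(y.unsqueeze(-1))            # per-component log-density
        return -torch.logsumexp(log_w + log_prob, dim=-1).mean()

    # Hypothetical usage: condition on a summary of the recent history
    # (e.g. the output of an encoder over past observations).
    cond = torch.randn(8, 16)
    y_next = torch.randn(8)
    head = MixtureDensityHead()
    print(mixture_nll(y_next, *head(cond)).item())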

19 pages, 3200 KiB  
Article
Voice Transformation Using Two-Level Dynamic Warping and Neural Networks
by Al-Waled Al-Dulaimi, Todd K. Moon and Jacob H. Gunther
Signals 2021, 2(3), 456-474; https://doi.org/10.3390/signals2030028 - 14 Jul 2021
Cited by 2 | Viewed by 2174
Abstract
Voice transformation, for example, from a male speaker to a female speaker, is achieved here using a two-level dynamic warping algorithm in conjunction with an artificial neural network. An outer warping process, which temporally aligns blocks of speech (dynamic time warp, DTW), invokes an inner warping process, which aligns magnitude spectra (dynamic frequency warp, DFW). The mapping function produced by the inner dynamic frequency warp is used to move spectral information from a source speaker to a target speaker. Artifacts arising from this amplitude spectral mapping are reduced by reconstructing phase information. The information obtained by this process is used to train an artificial neural network to produce spectral warping information from spectral input data. The performance of the speech mapping is compared, using Mel-Cepstral Distortion (MCD), with previous voice transformation research, and it is shown to perform better than other methods based on their reported MCD scores.
(This article belongs to the Special Issue Machine Learning and Signal Processing)
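
For readers unfamiliar with the building block named in the abstract above, the sketch below implements plain dynamic time warping between two one-dimensional sequences; it is not the paper's two-level DTW/DFW method, whose inner level would apply the same recursion to magnitude spectra rather than samples. The distance function and toy sequences are illustrative.

    import numpy as np

    def dtw(a, b, dist=lambda x, y: abs(x - y)):
        """Return the DTW cost and alignment path between sequences a and b."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i, j] = dist(a[i - 1], b[j - 1]) + min(
                    D[i - 1, j], D[i, j - 1], D[i - 1, j - 1]
                )
        # Backtrack to recover which source frame maps to which target frame.
        path, i, j = [], n, m
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        return D[n, m], path[::-1]

    # Hypothetical usage on two short toy feature sequences of different lengths.
    cost, path = dtw([0.0, 1.0, 2.0, 1.0], [0.0, 0.5, 1.0, 2.0, 1.0])
    print(cost, path)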
