Machine Learning with Label Noise

A special issue of Data (ISSN 2306-5729). This special issue belongs to the section "Information Systems and Data Management".

Deadline for manuscript submissions: closed (31 July 2021) | Viewed by 3924

Special Issue Editors


Guest Editor
Department of Electrical and Computer Engineering, North South University, Dhaka 1212, Bangladesh
Interests: cloud; grid computing; data mining; fuzzy logic; machine learning; deep learning

Guest Editor
Department of Computer Science, University of Calgary, Calgary, AB T0A0B0, Canada
Interests: database systems; cyberprivacy; cybersecurity; distributed computing

Guest Editor
Computer Science Department, University of Brasilia, Brasilia 30413, Brazil
Interests: noise detection; meta-learning; data characterization; data streams

Guest Editor
Department of Electrical and Computer Engineering, North South University, Dhaka 1212, Bangladesh
Interests: computer vision; natural language processing

Guest Editor
Department of Electrical and Computer Engineering, North South University, Dhaka 1212, Bangladesh
Interests: artificial life; swarm intelligence; complex systems; computational modeling; machine learning

Special Issue Information

Machine learning has been applied successfully over the last few decades in a vast range of domains, including computer vision, natural language processing, pattern recognition, and remote sensing. Machine learning techniques aid in developing predictive models, clustering, object segmentation, pattern classification, and so on. The data from which these machine learning systems learn governs several aspects of the outcome, such as performance and model complexity. This data is labeled by humans, by automated methods, or by a combination of both. Errors can be introduced at any stage of labeling, leading to a phenomenon called label noise. This is one of the most challenging issues to deal with, as performance and model complexity are closely related to it. In many cases, label noise is ignored and assumed to be absent. In reality, however, empirical data is most often prone to label noise, and such noise should be considered explicitly.

This Special Issue is intended to present discussions of, and techniques for, dealing with label noise in different types of data (e.g., images, audio, video, and text), in tasks such as dense prediction, classification, regression, and object detection, and across disciplines such as medicine, finance, remote sensing, ecology, and industrial control systems. Topics include, but are not limited to, the following areas:

  • Sources of label noise in data from different disciplines
  • Quantification of label noise
  • Label noise robust learning methods
  • Label noise injection methods
  • Label noise filters
  • Comparison of different labeling strategies, e.g., automated and crowd sourced labeling
  • Effects of label noise on performance and model complexity
  • Self and semi-supervised methods for learning from noisy data
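Several of the topics above, such as label noise injection, lend themselves to a concrete sketch. The snippet below implements symmetric (uniform) label noise, a standard injection scheme in which each label is flipped, with a fixed probability, to a class chosen uniformly among the other classes; the function name and parameters are illustrative only, not taken from any particular paper:

```python
import random

def inject_symmetric_noise(labels, noise_rate, num_classes, seed=0):
    """Flip each label, with probability noise_rate, to a different class
    chosen uniformly at random (symmetric/uniform label noise)."""
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < noise_rate:
            # choose a wrong class uniformly among the remaining classes
            y = rng.choice([c for c in range(num_classes) if c != y])
        noisy.append(y)
    return noisy

clean = [0, 1, 2, 1, 0, 2, 1, 0] * 100           # 800 clean labels
noisy = inject_symmetric_noise(clean, noise_rate=0.3, num_classes=3)
observed_rate = sum(a != b for a, b in zip(clean, noisy)) / len(clean)
```

Class-conditional (asymmetric) noise, where flips follow a class-confusion matrix, is a common variant; it can be obtained by replacing the uniform choice with a draw from a per-class distribution.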

Dr. M. Rashedur Rahman
Dr. Ken Barker
Dr. Luís Paulo F. Garcia
Dr. Nabeel Mohammed
Dr. Sifat Momen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Data is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (1 paper)

Research

26 pages, 809 KiB  
Article
A Framework Using Contrastive Learning for Classification with Noisy Labels
by Madalina Ciortan, Romain Dupuis and Thomas Peel
Data 2021, 6(6), 61; https://0-doi-org.brum.beds.ac.uk/10.3390/data6060061 - 09 Jun 2021
Cited by 5 | Viewed by 3180
Abstract
We propose a framework using contrastive learning as a pre-training task to perform image classification in the presence of noisy labels. Recent strategies, such as pseudo-labeling, sample selection with Gaussian Mixture models, and weighted supervised contrastive learning, have been combined into a fine-tuning phase following the pre-training. In this paper, we provide an extensive empirical study showing that a preliminary contrastive learning step brings a significant gain in performance when using different loss functions: non-robust, robust, and early-learning regularized. Our experiments, performed on standard benchmarks and real-world datasets, demonstrate that (i) the contrastive pre-training increases the robustness of any loss function to noisy labels and (ii) the additional fine-tuning phase can further improve accuracy, but at the cost of additional complexity.
(This article belongs to the Special Issue Machine Learning with Label Noise)
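The sample-selection strategy mentioned in the abstract, fitting a Gaussian Mixture model to per-sample losses, can be sketched as follows. This is a minimal two-component 1-D EM implementation on synthetic losses, not the authors' code; the idea is that correctly labeled samples tend to incur low loss, so the posterior of the low-mean component serves as a per-sample "clean" probability:

```python
import numpy as np

def clean_probability(losses, n_iter=100):
    """Fit a two-component 1-D Gaussian mixture to per-sample losses via EM
    and return each sample's posterior under the low-mean (clean) component."""
    x = np.asarray(losses, dtype=float)
    mu = np.quantile(x, [0.25, 0.75])          # initial component means
    var = np.full(2, x.var() + 1e-8)           # shared initial variances
    pi = np.array([0.5, 0.5])                  # mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -(x[:, None] - mu) ** 2 / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-8
    return resp[:, np.argmin(mu)]              # low-loss component ~ clean

# synthetic per-sample losses: 700 clean (low loss), 300 noisy (high loss)
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.2, 0.05, 700),
                         rng.normal(1.5, 0.3, 300)])
p_clean = clean_probability(losses)
selected = p_clean > 0.5  # samples kept as (probably) correctly labeled
```

In practice the losses would come from a partially trained network, and a library implementation such as scikit-learn's GaussianMixture would typically replace the hand-rolled EM loop.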