Continual Learning in Computer Vision: Theory and Applications

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Computer Vision and Pattern Recognition".

Deadline for manuscript submissions: closed (31 January 2022) | Viewed by 7617

Special Issue Editors


Guest Editor
Department of Electrical, Electronic and Computer Engineering, PeRCeiVe Lab, University of Catania, Catania, Italy
Interests: computer vision; medical image analysis; video understanding

Guest Editor
Department of Electrical, Electronic and Computer Engineering, PeRCeiVe Lab, University of Catania, Catania, Italy

Special Issue Information

Dear Colleagues,

In less than a decade, we have witnessed the rise of deep neural networks as the main paradigm for supervised and unsupervised learning. However, despite the impressive results achieved in a wide variety of fields, current training approaches still struggle to adapt to scenarios different from those seen during training. Such adaptability is a key factor in how people learn, showing that the road to human-like artificial intelligence is still very long. While we can train very good models for individual tasks, it remains unclear how to train models that can learn to perform multiple tasks sequentially.

Continual learning addresses the design and training of models with a built-in capability to adapt to multiple tasks without suffering from catastrophic forgetting and to work well in test conditions different from those seen during training. Successful methods for continual learning would both affect the foundations of machine learning and extend the potential of existing solutions across multiple application fields. Indeed, one can envisage how a model that identifies pulmonary diseases from X-ray images could benefit from re-using features learned for segmentation, and how useful a model performing both tasks would be to a physician.

In this Special Issue, we invite authors to send theoretical and application contributions related to continual learning, including but not limited to the following topics:

  • Critical surveys on continual learning
  • Training approaches for continual learning
  • Model design in continual learning
  • Memory-based techniques
  • Dataset distillation and model distillation
  • Continual learning on real-world datasets
  • New datasets for continual learning
  • Continual learning for expert domains (e.g., medicine)
  • Continual active learning

Dr. Simone Palazzo
Dr. Carmelo Pino
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Continual learning
  • Catastrophic forgetting
  • Continual learning datasets
  • Domain-specific continual learning
  • Class-incremental learning
  • Task-incremental learning

Published Papers (2 papers)


Research

34 pages, 3852 KiB  
Article
Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition
by Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Yongwon Hong and Visvanathan Ramesh
J. Imaging 2022, 8(4), 93; https://doi.org/10.3390/jimaging8040093 - 31 Mar 2022
Cited by 15 | Viewed by 3768
Abstract
Modern deep neural networks are well known to be brittle in the face of unknown data instances and recognition of the latter remains a challenge. Although it is inevitable for continual-learning systems to encounter such unseen concepts, the corresponding literature appears to nonetheless focus primarily on alleviating catastrophic interference with learned representations. In this work, we introduce a probabilistic approach that connects these perspectives based on variational inference in a single deep autoencoder model. Specifically, we propose to bound the approximate posterior by fitting regions of high density on the basis of correctly classified data points. These bounds are shown to serve a dual purpose: unseen unknown out-of-distribution data can be distinguished from already trained known tasks towards robust application. Simultaneously, to retain already acquired knowledge, a generative replay process can be narrowed to strictly in-distribution samples, in order to significantly alleviate catastrophic interference.
(This article belongs to the Special Issue Continual Learning in Computer Vision: Theory and Applications)
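The density-bounding idea in this abstract can be illustrated independently of the authors' model: fit a simple density to latent codes of correctly classified data, set a threshold covering most of them, and keep only generated samples that fall inside that bound. The sketch below is a minimal illustration under strong assumptions (a diagonal Gaussian stands in for the approximate posterior, synthetic data replaces encoder outputs); it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_latent_gaussian(z):
    # Fit a diagonal Gaussian to latent codes of correctly classified data.
    mu, var = z.mean(axis=0), z.var(axis=0) + 1e-6
    return mu, var

def log_density(z, mu, var):
    # Diagonal-Gaussian log-density, summed over latent dimensions.
    return -0.5 * (((z - mu) ** 2 / var) + np.log(2 * np.pi * var)).sum(axis=1)

# Stand-in for encoder outputs on an already-trained ("known") task.
z_known = rng.normal(0.0, 1.0, size=(500, 8))
mu, var = fit_latent_gaussian(z_known)

# Bound the posterior: keep the region covering 95% of correctly
# classified points, via a percentile threshold on log-density.
tau = np.percentile(log_density(z_known, mu, var), 5)

def is_in_distribution(z):
    # Accept samples inside the high-density region; reject the rest.
    return log_density(z, mu, var) >= tau

# Generative replay narrowed to strictly in-distribution samples:
z_generated = rng.normal(0.0, 1.0, size=(200, 8))
replay = z_generated[is_in_distribution(z_generated)]

# Far-away (out-of-distribution) samples are rejected by the same bound.
z_ood = rng.normal(8.0, 1.0, size=(200, 8))
```

The same threshold thus serves both purposes described in the abstract: rejecting unseen out-of-distribution inputs and filtering the replay stream.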

19 pages, 1033 KiB  
Article
Incremental Learning for Dermatological Imaging Modality Classification
by Ana C. Morgado, Catarina Andrade, Luís F. Teixeira and Maria João M. Vasconcelos
J. Imaging 2021, 7(9), 180; https://doi.org/10.3390/jimaging7090180 - 07 Sep 2021
Cited by 3 | Viewed by 2156
Abstract
With the increasing adoption of teledermatology, there is a need to improve the automatic organization of medical records, with dermatological image modality serving as a key filter in this process. Although there has been considerable effort in the classification of medical imaging modalities, little of this work has addressed dermatology. Moreover, as various devices are used in teledermatological consultations, image acquisition conditions may differ. In this work, two models (VGG-16 and MobileNetV2) were used to classify dermatological images from the Portuguese National Health System according to their modality. Afterwards, four incremental learning strategies were applied to these models, namely naive, elastic weight consolidation, averaged gradient episodic memory, and experience replay, enabling their adaptation to new conditions while preserving previously acquired knowledge. The evaluation considered catastrophic forgetting, accuracy, and computational cost. The MobileNetV2 trained with the experience replay strategy, with 500 images in memory, achieved a global accuracy of 86.04% with only 0.0344 of forgetting, which is 6.98% less than the second-best strategy. Regarding efficiency, this strategy took 56 s per epoch longer than the baseline and required, on average, 4554 megabytes of RAM during training. These promising results demonstrate the effectiveness of the proposed approach.
(This article belongs to the Special Issue Continual Learning in Computer Vision: Theory and Applications)
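The experience replay strategy evaluated here keeps a fixed memory of 500 images and rehearses them while training on new data. A minimal, library-free sketch of such a buffer is shown below; reservoir sampling is one common filling policy and is an assumption here, as the paper's exact memory-management details are not given in the abstract.

```python
import random

class ReplayBuffer:
    """Fixed-size episodic memory for experience replay."""

    def __init__(self, capacity=500, seed=0):
        self.capacity = capacity
        self.memory = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        # Reservoir sampling keeps a uniform sample over all examples seen.
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, k):
        # Replayed examples are mixed into each new-task batch,
        # so old classes keep contributing to the loss.
        return self.rng.sample(self.memory, min(k, len(self.memory)))

# Illustrative stream of (image, label) pairs from successive tasks.
buffer = ReplayBuffer(capacity=500)
for i in range(10_000):
    buffer.add(("img_%d" % i, i % 7))

batch = buffer.sample(32)
```

During training, each gradient step would then use the union of a fresh batch and `buffer.sample(k)`, which is what limits catastrophic forgetting at the cost of the extra time and memory reported in the abstract.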
