Methods and Applications for Imaging, Simulation, and Modelling in Biology and Medicine: Artificial Intelligence, Current Research, New Trends

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 20758

Special Issue Editor

Department of Computer Science, Università degli Studi di Milano, 20142 Milan, Italy
Interests: artificial intelligence; pattern recognition; microscopy and histological image analysis; physiology models; biomedical image analysis; computer-aided diagnosis (CAD) systems; color sensation and perception; image enhancement and visualization

Special Issue Information

Dear Colleagues,

Over the past decades, a continuous increase in computational power, together with substantial advances in digital image processing and pattern recognition, has motivated the development of computer-aided diagnosis (CAD) systems, augmented- and virtual-reality applications for surgical planning and simulation, and digital-twin applications for healthcare. Thanks to their effective, precise, and repeatable results, validated CAD systems are nowadays exploited as a valuable aid during diagnostic procedures. Surgical simulation applications have become an essential part of surgical procedures as well as of training, while research on digital-twin applications in healthcare continues to grow, promising significant societal and health benefits.

Similarly important advances in the field of microscopy image analysis have been motivated by the advent of digital microscopes producing high-resolution digital images. Owing to the huge size of such images, the long time required for their manual analysis has spurred the development of computerized systems aimed at, for example, marker (and nuclei) segmentation, quantification (nuclei counting), and marker co-occurrence analysis in images obtained by fluorescence, histochemical, and immunohistochemical staining.

Both medical and biomedical image processing applications are much appreciated in the field for their potential to minimize the inherent subjectivity of manual analysis and to largely reduce the workload of clinicians through high-throughput analysis. Moreover, they contribute to the ultimate aim of developing the so-called Virtual Physiological Human, a human digital twin intended to replicate its physical counterpart and thereby allow fast and prompt medical analysis and prediction.

This Special Issue intends to provide a comprehensive overview of recent theoretical, computational, and/or practical advances in medical, biomedical, and microscopy image enhancement, analysis, and visualization. Applications and recent research interests in the biological and medical fields will also be considered and analyzed with the aim of highlighting new trends. In particular, applications exploiting (deep) learning methods are encouraged.

Dr. Elena Casiraghi
Dr. Barbara Barricelli
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Medical/biomedical/microscopy image enhancement
  • Medical/biomedical/microscopy image quality assessment
  • Medical/biomedical/microscopy image analysis
  • Computational approaches for cytology and histology
  • Computational approaches for medical image analysis
  • Computer-aided diagnosis
  • Neuroimaging
  • Medical/biomedical/microscopy image databases
  • Machine learning and artificial intelligence in medicine and biology
  • Image segmentation, registration, and fusion
  • Interventional imaging systems
  • Image-guided interventions and surgery
  • Surgical and interventional simulation systems
  • Interventional tracking and navigation
  • Surgical planning and simulation
  • Surgical visualization and mixed, augmented and virtual reality
  • Digital twins and end-user development in biomedicine
  • Interventional software and user interfaces
  • Physiological models

Published Papers (4 papers)


Research

14 pages, 2495 KiB  
Article
Comparison of Frontal-Temporal Channels in Epilepsy Seizure Prediction Based on EEMD-ReliefF and DNN
by Aníbal Romney and Vidya Manian
Computers 2020, 9(4), 78; https://doi.org/10.3390/computers9040078 - 29 Sep 2020
Cited by 9 | Viewed by 3614
Abstract
Epilepsy patients who do not have their seizures controlled with medication or surgery live in constant fear. The psychological burden of uncertainty surrounding the occurrence of random seizures is one of the most stressful and debilitating aspects of the disease. Despite the research progress in this field, there is a need for a non-invasive prediction system that helps disrupt the seizure epileptiform. Electroencephalogram (EEG) signals are non-stationary, nonlinear, and vary with each patient and every recording. Full use of the non-invasive electrode channels is impractical for real-time use. We propose two frontal-temporal electrode channels based on ensemble empirical mode decomposition (EEMD) and Relief methods to address these challenges. The EEMD decomposes the segmented data frame in the ictal state into its intrinsic mode functions, and then we apply Relief to select the most relevant oscillatory components. A deep neural network (DNN) model learns these features to perform seizure prediction and early detection of patient-specific EEG recordings. The model yields an average sensitivity and specificity of 86.7% and 89.5%, respectively. The two-channel model shows the ability to capture patterns from brain locations for non-frontal-temporal seizures.
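As a rough illustration of the kind of pipeline this abstract describes (EEMD decomposition, Relief-based feature selection, and a DNN classifier), the Python sketch below uses the PyEMD, skrebate, and scikit-learn packages. The frame length, synthetic data, labels, and hyper-parameters are assumptions for illustration only, not the authors' implementation.

# Minimal illustrative sketch (not the authors' code): EEMD feature extraction,
# Relief-based feature selection, and a small DNN classifier for EEG frames.
# PyEMD (EMD-signal), skrebate, and scikit-learn are assumed to be installed;
# the data, labels, frame length, and hyper-parameters are synthetic stand-ins.
import numpy as np
from PyEMD import EEMD                         # ensemble empirical mode decomposition
from skrebate import ReliefF                   # Relief-family feature ranking
from sklearn.neural_network import MLPClassifier

eemd = EEMD(trials=20)                         # fewer ensemble trials to keep the sketch fast

def eemd_features(frame, n_imfs=5):
    """Decompose one EEG frame into IMFs; summarize each with mean, std, and peak amplitude."""
    imfs = eemd.eemd(np.asarray(frame, dtype=float))
    feats = np.zeros(3 * n_imfs)
    for i, imf in enumerate(imfs[:n_imfs]):
        feats[3 * i:3 * i + 3] = imf.mean(), imf.std(), np.abs(imf).max()
    return feats

rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 256))         # 40 synthetic one-second EEG frames (stand-in data)
y = rng.integers(0, 2, size=40)                # hypothetical pre-ictal / inter-ictal labels

X = np.vstack([eemd_features(f) for f in X_raw])
selector = ReliefF(n_features_to_select=10, n_neighbors=20)
selector.fit(X, y)
X_sel = selector.transform(X)                  # keep the 10 most relevant oscillatory features

dnn = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
dnn.fit(X_sel, y)                              # predicts per-frame seizure risk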

31 pages, 2144 KiB  
Article
Complex Data Imputation by Auto-Encoders and Convolutional Neural Networks—A Case Study on Genome Gap-Filling
by Luca Cappelletti, Tommaso Fontana, Guido Walter Di Donato, Lorenzo Di Tucci, Elena Casiraghi and Giorgio Valentini
Computers 2020, 9(2), 37; https://doi.org/10.3390/computers9020037 - 11 May 2020
Cited by 8 | Viewed by 5184
Abstract
Missing data imputation has been a hot topic in the past decade, and many state-of-the-art works have been presented to propose novel, interesting solutions that have been applied in a variety of fields. In the past decade, the successful results achieved by deep learning techniques have opened the way to their application for solving difficult problems where human skill is not able to provide a reliable solution. Not surprisingly, some deep learners, mainly exploiting encoder-decoder architectures, have also been designed and applied to the task of missing data imputation. However, most of the proposed imputation techniques have not been designed to tackle "complex data", that is, high-dimensional data belonging to datasets with huge cardinality and describing complex problems. Specifically, they often need critical parameters to be manually set or exploit complex architectures and/or training phases that make their computational load impractical. In this paper, after clustering the state-of-the-art imputation techniques into three broad categories, we briefly review the most representative methods and then describe our data imputation proposals, which exploit deep learning techniques specifically designed to handle complex data. Comparative tests on genome sequences show that our deep learning imputers outperform the state-of-the-art KNN-imputation method when filling gaps in human genome sequences.
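The sketch below illustrates, in deliberately simplified form, how an encoder-decoder (autoencoder) imputer of the kind described can be trained to fill gaps in one-hot-encoded DNA windows. The window length, gap placement, network size, and training settings are assumptions, not the architecture proposed in the paper.

# Illustrative sketch (not the paper's model): a small autoencoder that learns
# to fill masked positions in one-hot-encoded DNA windows. All shapes, names,
# and hyper-parameters are assumptions for illustration.
import torch
import torch.nn as nn

WINDOW, BASES = 50, 4                           # 50-nt windows, one-hot A/C/G/T

class GapFiller(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(),
                                     nn.Linear(WINDOW * BASES, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, WINDOW * BASES),
                                     nn.Unflatten(1, (WINDOW, BASES)))
    def forward(self, x):
        return self.decoder(self.encoder(x))

# Synthetic training data: random one-hot windows with a few positions zeroed out ("gaps").
x_true = torch.eye(BASES)[torch.randint(0, BASES, (256, WINDOW))]
x_gapped = x_true.clone()
x_gapped[:, 20:25, :] = 0.0                     # simulate a 5-nt gap in every window

model = GapFiller()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):                            # short training loop for the sketch
    logits = model(x_gapped)                    # (batch, WINDOW, BASES)
    loss = loss_fn(logits.transpose(1, 2), x_true.argmax(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Impute: take the argmax base at the gap positions of a gapped window.
with torch.no_grad():
    filled = model(x_gapped[:1]).argmax(-1)     # indices 0..3 -> A/C/G/T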

15 pages, 1402 KiB  
Article
Design and Validation of a Minimal Complexity Algorithm for Stair Step Counting
by Davide Coluzzi, Massimo W. Rivolta, Alfonso Mastropietro, Simone Porcelli, Marco L. Mauri, Marta T. L. Civiello, Enrico Denna, Giovanna Rizzo and Roberto Sassi
Computers 2020, 9(2), 31; https://doi.org/10.3390/computers9020031 - 16 Apr 2020
Cited by 3 | Viewed by 4565
Abstract
Wearable sensors play a significant role in monitoring the functional ability of the elderly and, in general, in promoting active ageing. One of the relevant variables to be tracked is the number of stair steps (single stair steps) performed daily, which is more challenging than counting flights of stairs and detecting stair climbing. In this study, we proposed a minimal complexity algorithm composed of a hierarchical classifier and a linear model to estimate the number of stair steps performed during everyday activities. The algorithm was calibrated on accelerometer and barometer recordings measured using a sensor platform worn at the wrist by 20 healthy subjects. It was then tested on 10 older people, specifically enrolled for the study. The algorithm was then compared with three other state-of-the-art methods, which used the accelerometer, the barometer, or both. The experiments showed the good performance of our algorithm (stair step counting error: 13.8%), comparable with the best state-of-the-art method (p > 0.05), but with a lower computational load and model complexity. Finally, the algorithm was successfully implemented in a low-power smartwatch prototype with a memory footprint of about 4 kB.
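The sketch below gives a minimal, assumption-laden illustration of the general idea: a hierarchical decision rule on accelerometer and barometer windows followed by a linear mapping from altitude gain to step count. The thresholds, pressure-to-altitude conversion, and per-step riser height are illustrative choices, not the published algorithm.

# Illustrative sketch (not the published algorithm): estimate stair steps from
# wrist accelerometer magnitude and barometric pressure. Thresholds, the
# pressure-to-altitude conversion, and the per-step height are assumptions.
import numpy as np

STEP_HEIGHT_M = 0.17                            # assumed average riser height (m)

def pressure_to_altitude(p_hpa):
    """Barometric formula relative to standard sea-level pressure (1013.25 hPa)."""
    return 44330.0 * (1.0 - (p_hpa / 1013.25) ** 0.1903)

def count_stair_steps(acc_mag, pressure_hpa, fs=25, win_s=4):
    """Hierarchical rule: a window counts as stair climbing only if there is enough
    movement AND enough altitude change; altitude gain is then mapped linearly to steps."""
    n = int(fs * win_s)
    steps = 0
    for start in range(0, len(acc_mag) - n, n):
        acc_w = acc_mag[start:start + n]
        alt_w = pressure_to_altitude(pressure_hpa[start:start + n])
        moving = acc_w.std() > 0.5                       # m/s^2, assumed threshold
        climb = alt_w[-1] - alt_w[0]
        if moving and abs(climb) > 0.4:                  # ignore flat walking / sensor drift
            steps += int(round(abs(climb) / STEP_HEIGHT_M))   # linear step model
    return steps

# Synthetic 1-minute example: periodic arm movement plus a slow pressure drop (~8 m ascent).
fs, t = 25, np.arange(0, 60, 1 / 25)
acc = 9.81 + 1.0 * np.sin(2 * np.pi * 2 * t) + 0.3 * np.random.randn(t.size)
press = 1013.25 - 0.12 * np.linspace(0, 8, t.size) + 0.005 * np.random.randn(t.size)
print(count_stair_steps(acc, press, fs))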

12 pages, 5401 KiB  
Article
Deep Transfer Learning in Diagnosing Leukemia in Blood Cells
by Mohamed Loey, Mukdad Naman and Hala Zayed
Computers 2020, 9(2), 29; https://doi.org/10.3390/computers9020029 - 15 Apr 2020
Cited by 63 | Viewed by 6714
Abstract
Leukemia is a fatal disease that threatens the lives of many patients. Early detection can effectively improve its rate of remission. This paper proposes two automated classification models based on blood microscopic images to detect leukemia by employing transfer learning, rather than traditional approaches that have several disadvantages. In the first model, blood microscopic images are pre-processed; then, features are extracted by a pre-trained deep convolutional neural network named AlexNet and classified using several well-known classifiers. In the second model, after pre-processing the images, AlexNet is fine-tuned for both feature extraction and classification. Experiments were conducted on a dataset consisting of 2820 images, confirming that the second model performs better than the first, achieving 100% classification accuracy.
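The sketch below shows one common way to realize the second model's fine-tuning strategy with torchvision's pretrained AlexNet. The class count, the stand-in data batch, and the hyper-parameters are assumptions and do not reproduce the paper's setup; real blood-cell images would be loaded (e.g., via torchvision's ImageFolder) and normalized to ImageNet statistics.

# Illustrative sketch (not the paper's exact setup): fine-tuning an
# ImageNet-pretrained AlexNet for a two-class blood-cell task.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2                                   # e.g., leukemia vs. normal (assumed)

# Load pretrained AlexNet and replace its final fully connected layer.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

# Stand-in batch of 224x224 RGB images with hypothetical labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for step in range(5):                             # short fine-tuning loop for the sketch
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()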
