Biomimetic and Bioinspired Computer Vision and Image Processing

A special issue of Biomimetics (ISSN 2313-7673). This special issue belongs to the section "Bioinspired Sensorics, Information Processing and Control".

Deadline for manuscript submissions: closed (30 October 2023) | Viewed by 11261

Special Issue Editors


Prof. Dr. Chaoran Cui
Guest Editor
School of Computer Science and Technology, Shandong University of Finance and Economics, No. 7366, East Second Ring Road, Yaojia Sub-District, Jinan 250014, China
Interests: machine learning; data mining; multimedia processing

Prof. Dr. Xiaohui Han
Guest Editor
Shandong Provincial Key Laboratory of Computer Networks, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, China
Interests: machine learning; data mining; multimedia processing

Special Issue Information

Dear Colleagues,

Biomimetics studies living systems and seeks to transfer their properties to engineering applications, an approach that has profoundly influenced human technology. In recent decades, the integration of biomimetics with computational methods has produced strong results across computer vision and image processing, including object detection, semantic segmentation, visual tracking, image denoising, and image super-resolution. This Special Issue aims to explore how to design biomimetic machinery and material models that mimic the properties and structures of organisms, and to present the latest advances in bioinspired algorithms for computer vision and image processing. We welcome original research, meta-analyses, and review articles in these directions. Potential topics include, but are not limited to:

  • Biomimetics of materials and structures;
  • Biomimetic design, construction, and devices;
  • Bioinspired robotics;
  • Bioinspired designs in computer vision and image processing;
  • Brain-inspired computing in computer vision and image processing, e.g., neural networks and deep learning;
  • Swarm intelligence in computer vision and image processing, e.g., particle swarm optimization and ant colony optimization;
  • Evolutionary methods in computer vision and image processing, e.g., genetic algorithms.

Prof. Dr. Chaoran Cui
Prof. Dr. Xiaohui Han
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomimetics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomimetics of materials and structures
  • biomimetic design, construction, and devices
  • bioinspired robotics
  • bioinspired designs in computer vision and image processing
  • brain-inspired computing in computer vision and image processing
  • swarm intelligence in computer vision and image processing
  • evolutionary methods in computer vision and image processing

Published Papers (8 papers)


Research

20 pages, 11394 KiB  
Article
Bio-Inspired Dark Adaptive Nighttime Object Detection
by Kuo-Feng Hung and Kang-Ping Lin
Biomimetics 2024, 9(3), 158; https://doi.org/10.3390/biomimetics9030158 - 3 Mar 2024
Viewed by 1037
Abstract
Nighttime object detection is challenging due to dim, uneven lighting. IIHS research conducted in 2022 shows that pedestrian anti-collision systems are less effective at night. Common solutions rely on costly sensors, such as thermal imaging and LiDAR, to achieve highly accurate detection. In contrast, this study employs a low-cost 2D image approach that draws inspiration from biological dark adaptation mechanisms, simulating functions such as the pupil and photoreceptor cells. Instead of relying on extensive machine learning with day-to-night image conversions, it focuses on image fusion and gamma correction to train deep neural networks for dark adaptation. The study also creates a simulated environment ranging from 0 lux to high brightness to test the limits of object detection and offers a high-dynamic-range testing method. Results indicate that the dark adaptation model developed in this study improves mean average precision (mAP) by 1.5–6% compared to traditional models, and the model functions in both twilight and nighttime conditions. Future developments could include using virtual light in specific image areas or integrating with smart car lighting to enhance detection accuracy, thereby improving safety for pedestrians and drivers.
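For readers who want a concrete starting point, the following is a minimal sketch (not the authors' implementation) of the kind of dark-adaptation preprocessing the abstract describes: several gamma-corrected versions of a low-light frame are fused before detection. The file name, gamma values, and simple average fusion are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): simulate a simple
# "dark adaptation" preprocessing step by fusing several gamma-corrected
# versions of a low-light frame before passing it to a detector.
import cv2
import numpy as np

def gamma_correct(img, gamma):
    """Apply gamma correction to an 8-bit BGR image via a lookup table."""
    lut = np.array([(i / 255.0) ** (1.0 / gamma) * 255 for i in range(256)],
                   dtype=np.uint8)
    return cv2.LUT(img, lut)

def dark_adapt(img, gammas=(1.5, 2.2, 3.0)):
    """Fuse several brightened versions of the frame (simple average fusion)."""
    stack = np.stack([gamma_correct(img, g).astype(np.float32) for g in gammas])
    fused = stack.mean(axis=0)
    return np.clip(fused, 0, 255).astype(np.uint8)

frame = cv2.imread("night_scene.jpg")   # hypothetical low-light input image
enhanced = dark_adapt(frame)            # feed `enhanced` to the object detector
```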

21 pages, 6039 KiB  
Article
Steel Strip Surface Defect Detection Method Based on Improved YOLOv5s
by Jianbo Lu, Mingrui Zhu, Xiaoya Ma and Kunsheng Wu
Biomimetics 2024, 9(1), 28; https://doi.org/10.3390/biomimetics9010028 - 3 Jan 2024
Cited by 1 | Viewed by 1397
Abstract
Steel strip is an important raw material for the engineering, automotive, shipbuilding, and aerospace industries. However, during production, the surface of the steel strip is prone to cracks, pitting, and other defects that affect its appearance and performance. Machine vision technology is therefore important for detecting surface defects and improving strip quality. To address the difficulty of classifying the fine-grained features of strip steel surface images and to improve the defect detection rate, we propose an improved YOLOv5s model called YOLOv5s-FPD (Fine Particle Detection). The SPPF-A (Spatial Pyramid Pooling Fast-Advance) module was constructed to adjust the spatial pyramid structure, and the ASFF (Adaptively Spatial Feature Fusion) and CARAFE (Content-Aware ReAssembly of FEatures) modules were introduced to improve the feature extraction and fusion capabilities for strip images. The CSBL (Convolutional Separable Bottleneck) module was also constructed, and the DCNv2 (Deformable ConvNets v2) module was introduced to make the model more lightweight. The CBAM (Convolutional Block Attention Module) attention module was used to extract key information, further improving the model's feature extraction capability. Experimental results on the NEU_DET (NEU surface defect database) dataset show that YOLOv5s-FPD improves mAP50 accuracy by 2.6% before data enhancement and 1.8% after SSIE (steel strip image enhancement) data enhancement, compared to the YOLOv5s prototype, and improves the detection accuracy of all six defect types in the dataset. Experimental results on the VOC2007 public dataset show that YOLOv5s-FPD improves mAP50 accuracy by 4.6% before data enhancement compared to the YOLOv5s prototype. Overall, these results confirm the validity and usefulness of the proposed model.
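Among the standard building blocks the abstract names, CBAM is the most self-contained. Below is a minimal, generic PyTorch sketch of a CBAM module (channel attention followed by spatial attention); it illustrates the mechanism only and is not the paper's exact configuration.

```python
# Minimal CBAM sketch in PyTorch (generic formulation, not the paper's code).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return torch.sigmoid(avg + mx)[:, :, None, None] * x

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        attn = torch.cat([x.mean(dim=1, keepdim=True),
                          x.amax(dim=1, keepdim=True)], dim=1)
        return torch.sigmoid(self.conv(attn)) * x

class CBAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ca, self.sa = ChannelAttention(channels), SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```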

13 pages, 18520 KiB  
Article
A Novel Image Processing Method for Obtaining an Accurate Three-Dimensional Profile of Red Blood Cells in Digital Holographic Microscopy
by Hyun-Woo Kim, Myungjin Cho and Min-Chul Lee
Biomimetics 2023, 8(8), 563; https://doi.org/10.3390/biomimetics8080563 - 22 Nov 2023
Cited by 1 | Viewed by 1064
Abstract
Recently, research on disease diagnosis using red blood cells (RBCs) has been active because many diseases can be diagnosed from a drop of blood in a short time. Representative approaches combine deep learning techniques with digital holographic microscopy (DHM). However, the three-dimensional (3D) profile obtained by DHM suffers from random noise caused by overlap between the DC spectrum and the sideband in the Fourier domain, which can lead deep learning models to misjudge diseases. To reduce this random noise and obtain a more accurate 3D profile, this paper proposes a novel image processing method that randomly selects the center of the high-frequency sideband (RaCoHS) in the Fourier domain. The proposed algorithm filters while using only the recorded hologram information, thereby preserving high-frequency detail. We compared and analyzed the conventional filtering method and a general image processing method to verify the effectiveness of the proposed method. The proposed algorithm can be applied to all digital holography technologies, including DHM, and is expected to be particularly valuable for improving the accuracy of DHM-based disease diagnosis.
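As background, the conventional filtering the paper improves on isolates the +1-order sideband around a fixed, estimated center in the Fourier domain. The NumPy sketch below shows that baseline pipeline (with an assumed circular filter radius), not the proposed RaCoHS method.

```python
# Sketch of conventional off-axis DHM sideband filtering (the baseline the
# paper improves on, not the proposed RaCoHS method): isolate the +1-order
# sideband in the Fourier domain, re-center it, and recover the phase map.
import numpy as np

def phase_from_hologram(hologram, radius=64):
    H = np.fft.fftshift(np.fft.fft2(hologram))
    cy, cx = np.array(H.shape) // 2

    # Locate the sideband peak after suppressing the central DC region.
    mag = np.abs(H).copy()
    mag[cy - radius:cy + radius, cx - radius:cx + radius] = 0
    sy, sx = np.unravel_index(np.argmax(mag), mag.shape)

    # Keep a circular window around the sideband and shift it to the center.
    yy, xx = np.ogrid[:H.shape[0], :H.shape[1]]
    mask = (yy - sy) ** 2 + (xx - sx) ** 2 <= radius ** 2
    filtered = np.zeros_like(H)
    filtered[mask] = H[mask]
    filtered = np.roll(filtered, (cy - sy, cx - sx), axis=(0, 1))

    field = np.fft.ifft2(np.fft.ifftshift(filtered))
    return np.angle(field)   # wrapped phase; unwrap and scale for a 3D profile
```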

16 pages, 4885 KiB  
Article
Deep Convolutional Neural Network with Symbiotic Organism Search-Based Human Activity Recognition for Cognitive Health Assessment
by Mohammed Alonazi, Haya Mesfer Alshahrani, Fadoua Kouki, Nabil Sharaf Almalki, Ahmed Mahmud and Jihen Majdoubi
Biomimetics 2023, 8(7), 554; https://doi.org/10.3390/biomimetics8070554 - 19 Nov 2023
Cited by 1 | Viewed by 1071
Abstract
Cognitive assessment plays a vital role in clinical care and in research on cognitive aging and cognitive health. Researchers have worked towards solutions for measuring individual cognitive health; however, it remains difficult to apply those solutions in the real world, so using deep neural networks to evaluate cognitive health has become a hot research topic. Deep learning and human activity recognition are two domains that have received considerable attention in recent years: the latter for its relevance to applications such as health monitoring and ambient assisted living, and the former for its excellent performance and recent achievements in fields such as speech and image recognition. This research develops a novel Symbiotic Organism Search with Deep Convolutional Neural Network-based Human Activity Recognition (SOSDCNN-HAR) model for cognitive health assessment. The goal of the SOSDCNN-HAR model is to recognize human activities in an end-to-end way. For noise elimination, the model applies Wiener filtering (WF); for automated feature extraction, it uses a RetinaNet-based feature extractor. The SOS procedure is exploited as a hyperparameter optimizer to enhance recognition efficiency, and a gated recurrent unit (GRU) model is employed as the classifier to assign class labels. The performance of the SOSDCNN-HAR model was validated on a set of benchmark datasets. Extensive experiments show that it outperforms current approaches, reaching precisions of 86.51% and 89.50% on the Penn Action and NW-UCLA datasets, respectively.
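For readers unfamiliar with the optimizer, the sketch below implements the standard Symbiotic Organism Search phases (mutualism, commensalism, parasitism) on a toy objective. The population size, iteration count, and objective are illustrative assumptions; it is not the paper's hyperparameter-tuning code.

```python
# Generic Symbiotic Organism Search (SOS) sketch for minimizing an objective,
# e.g. a validation-loss proxy during hyperparameter tuning.
import numpy as np

def sos(objective, bounds, pop_size=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])

    for _ in range(iters):
        best = pop[fit.argmin()]
        for i in range(pop_size):
            j = rng.choice([k for k in range(pop_size) if k != i])

            # Mutualism: both organisms move toward the best via a mutual vector.
            mutual = (pop[i] + pop[j]) / 2
            bf1, bf2 = rng.integers(1, 3, size=2)
            for idx, bf in ((i, bf1), (j, bf2)):
                cand = np.clip(pop[idx] + rng.random(dim) * (best - mutual * bf), lo, hi)
                if (f := objective(cand)) < fit[idx]:
                    pop[idx], fit[idx] = cand, f

            # Commensalism: organism i benefits from j without affecting it.
            cand = np.clip(pop[i] + rng.uniform(-1, 1, dim) * (best - pop[j]), lo, hi)
            if (f := objective(cand)) < fit[i]:
                pop[i], fit[i] = cand, f

            # Parasitism: a mutated copy of i tries to replace a random host j.
            parasite = pop[i].copy()
            dims = rng.random(dim) < 0.5
            parasite[dims] = rng.uniform(lo, hi)[dims]
            if (f := objective(parasite)) < fit[j]:
                pop[j], fit[j] = parasite, f

    return pop[fit.argmin()], fit.min()

# Toy usage: minimize the sphere function over three dimensions.
best_x, best_f = sos(lambda x: np.sum(x ** 2), bounds=[(-5, 5)] * 3)
```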

22 pages, 7978 KiB  
Article
Otsu Multi-Threshold Image Segmentation Based on Adaptive Double-Mutation Differential Evolution
by Yanmin Guo, Yu Wang, Kai Meng and Zongna Zhu
Biomimetics 2023, 8(5), 418; https://doi.org/10.3390/biomimetics8050418 - 8 Sep 2023
Cited by 3 | Viewed by 1300
Abstract
The Otsu threshold method is a quick and effective way of segmenting images. However, its time complexity grows exponentially as the number of thresholds rises. This study addresses the low segmentation quality and high time complexity of standard threshold-based image segmentation. A double-mutation differential evolution algorithm with adaptive control parameters is presented, combining a twofold mutation strategy with an adaptive control-parameter search mechanism. The algorithm treats Otsu threshold image segmentation as an optimization problem, uses the maximum between-class variance as the objective function, determines the optimal thresholds, and then performs multi-threshold image segmentation. Experimental findings demonstrate the robustness of the enhanced double-mutation differential evolution with adaptive control parameters. Compared to other benchmark algorithms, our algorithm offers superior performance in both segmentation accuracy and time complexity.
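The core objective here is the classic multi-level Otsu between-class variance. The sketch below maximizes it with SciPy's standard differential evolution as a stand-in for the paper's adaptive double-mutation variant; the random histogram is a placeholder for a real image histogram.

```python
# Sketch: multi-threshold Otsu segmentation driven by standard differential
# evolution (SciPy). The paper's adaptive double-mutation DE is not reproduced.
import numpy as np
from scipy.optimize import differential_evolution

def between_class_variance(thresholds, hist):
    """Negative Otsu between-class variance for a threshold set (to minimize)."""
    t = np.sort(thresholds.astype(int))
    edges = np.concatenate(([0], t, [256]))
    p = hist / hist.sum()
    levels = np.arange(256)
    mu_total = (p * levels).sum()
    sigma_b = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        w = p[a:b].sum()
        if w > 0:
            mu = (p[a:b] * levels[a:b]).sum() / w
            sigma_b += w * (mu - mu_total) ** 2
    return -sigma_b

# Example: find 3 thresholds for a 256-bin grayscale histogram.
hist = np.random.randint(0, 1000, 256).astype(float)   # stand-in for a real histogram
result = differential_evolution(between_class_variance, bounds=[(1, 254)] * 3,
                                args=(hist,), seed=0)
thresholds = np.sort(result.x.astype(int))
```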

15 pages, 3742 KiB  
Article
SDE-YOLO: A Novel Method for Blood Cell Detection
by Yonglin Wu, Dongxu Gao, Yinfeng Fang, Xue Xu, Hongwei Gao and Zhaojie Ju
Biomimetics 2023, 8(5), 404; https://doi.org/10.3390/biomimetics8050404 - 1 Sep 2023
Cited by 2 | Viewed by 2053
Abstract
This paper proposes an improved target detection algorithm, SDE-YOLO, based on the YOLOv5s framework, to address the low detection accuracy, false detections, and missed detections of existing single-stage and two-stage algorithms in blood cell detection. First, the Swin Transformer is integrated into the back end of the backbone to extract features more effectively. Then, the 32 × 32 network layer in the path-aggregation network (PANet) is removed to decrease the number of parameters while increasing accuracy on small targets, and PANet substitutes depth-separable convolution for traditional convolution to recognize small targets accurately while maintaining speed. Finally, replacing the complete intersection over union (CIOU) loss function with the EIOU loss function helps address the imbalance of positive and negative samples and speeds up convergence. SDE-YOLO achieves mAP values of 99.5%, 95.3%, and 93.3% on the BCCD blood cell dataset for white blood cells, red blood cells, and platelets, respectively, improving on other single-stage and two-stage algorithms such as SSD, YOLOv4, and YOLOv5s. The algorithm also shows advantages in accuracy and real-time blood cell detection performance compared to YOLOv7 and YOLOv8.
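As one concrete element, the EIoU loss has a standard closed form: an IoU term plus center-distance and width/height penalty terms normalized by the enclosing box. The PyTorch sketch below follows that generic formulation and is not necessarily SDE-YOLO's exact implementation.

```python
# Minimal EIoU loss sketch (generic formulation).
import torch

def eiou_loss(pred, target, eps=1e-7):
    """Boxes in (x1, y1, x2, y2) format, shape [N, 4]."""
    # Intersection and union.
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box (for normalizing the penalty terms).
    enc_lt = torch.min(pred[:, :2], target[:, :2])
    enc_rb = torch.max(pred[:, 2:], target[:, 2:])
    enc_wh = (enc_rb - enc_lt).clamp(min=0)
    c2 = enc_wh[:, 0] ** 2 + enc_wh[:, 1] ** 2 + eps

    # Center-distance and width/height difference penalties.
    center_p = (pred[:, :2] + pred[:, 2:]) / 2
    center_t = (target[:, :2] + target[:, 2:]) / 2
    rho2 = ((center_p - center_t) ** 2).sum(dim=1)
    w_p, h_p = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w_t, h_t = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]

    loss = (1 - iou
            + rho2 / c2
            + (w_p - w_t) ** 2 / (enc_wh[:, 0] ** 2 + eps)
            + (h_p - h_t) ** 2 / (enc_wh[:, 1] ** 2 + eps))
    return loss.mean()
```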

17 pages, 39466 KiB  
Article
Robotic Grasp Detection Network Based on Improved Deformable Convolution and Spatial Feature Center Mechanism
by Miao Zou, Xi Li, Quan Yuan, Tao Xiong, Yaozong Zhang, Jingwei Han and Zhenhua Xiao
Biomimetics 2023, 8(5), 403; https://doi.org/10.3390/biomimetics8050403 - 1 Sep 2023
Cited by 1 | Viewed by 1046
Abstract
In this article, we propose an effective grasp detection network based on an improved deformable convolution and a spatial feature center mechanism (DCSFC-Grasp) to precisely grasp unidentified objects. DCSFC-Grasp includes three key components. First, improved deformable convolution is introduced to adaptively adjust receptive fields for multiscale feature extraction. Second, an efficient spatial feature center (SFC) layer captures global long-range dependencies through a lightweight multilayer perceptron (MLP) architecture, and a learnable feature center (LFC) mechanism gathers local regional features and preserves local corner regions. Finally, a lightweight CARAFE operator is developed to upsample the features. Experimental results show that DCSFC-Grasp achieves high accuracy (99.3% and 96.1% on the Cornell and Jacquard grasp datasets, respectively) and outperforms existing state-of-the-art grasp detection models. Real-world experiments on the six-DoF Realman RM65 robotic arm further demonstrate that DCSFC-Grasp is effective and robust for grasping unknown targets.
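For the deformable-convolution ingredient, a minimal usage sketch with torchvision's DeformConv2d is shown below; the channel sizes and the single offset head are illustrative and do not reproduce the DCSFC-Grasp architecture.

```python
# Sketch of a deformable-convolution block using torchvision's DeformConv2d
# (standard DCN usage; sizes are illustrative).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # The offset head predicts 2 offsets (dx, dy) per kernel sample.
        self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))

feat = torch.randn(1, 64, 80, 80)
out = DeformBlock(64, 128)(feat)   # -> shape [1, 128, 80, 80]
```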

16 pages, 10752 KiB  
Article
An AAM-Based Identification Method for Ear Acupoint Area
by Qingfeng Li, Yuhan Chen, Yijie Pang, Lei Kou, Dongxin Lu and Wende Ke
Biomimetics 2023, 8(3), 307; https://doi.org/10.3390/biomimetics8030307 - 12 Jul 2023
Viewed by 1154
Abstract
Ear image segmentation and identification supports the "observation" step of traditional Chinese medicine (TCM), in which diseases are diagnosed and treated by massaging or pressing corresponding ear acupoints. Image-based ear positioning and regional segmentation can therefore improve intelligent TCM ear acupoint diagnosis and treatment, and image processing technology has been adopted to detect ear acupoint areas and help popularize ear acupoint therapy, which currently depends on well-trained, experienced doctors. Because the ear is small and its acupoints are numerous, it is difficult to locate these acupoints with traditional image recognition methods. We propose an AAM (active appearance model)-based method for ear acupoint segmentation that represents a human ear image with 91 feature points. The method precisely delineates ear regions including the helix, antihelix, cymba conchae, cavum conchae, fossa helicis, fossa triangularis auriculae, tragus, antitragus, and earlobe, and specified acupoints or acupoint areas can be highlighted in ear images. This makes it possible to partition and recognize ear acupoints through computer image processing, approaching the observational ability of experienced doctors. Experiments show that the method is effective and accurate and can be used for the intelligent diagnosis of diseases.
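The statistical shape model at the heart of an AAM can be sketched compactly: landmark sets are similarity-aligned and then reduced with PCA. The code below is a generic illustration of that component under stated assumptions (the 91-point ear annotation scheme and the appearance model are not reproduced).

```python
# Sketch of the shape-model component of an AAM: Procrustes-style alignment
# of landmark sets followed by PCA (generic illustration, not the paper's code).
import numpy as np

def procrustes_align(shape, ref):
    """Least-squares similarity alignment of an (N, 2) landmark set to a
    centered reference (reflections ignored for simplicity)."""
    x = shape - shape.mean(axis=0)
    y = ref - ref.mean(axis=0)
    u, s, vt = np.linalg.svd(x.T @ y)
    rot = u @ vt
    scale = s.sum() / (x ** 2).sum()
    return scale * x @ rot

def build_shape_model(shapes, var_keep=0.95):
    """PCA shape model from a list of (N, 2) landmark arrays."""
    ref = shapes[0] - shapes[0].mean(axis=0)
    aligned = np.array([procrustes_align(s, ref).ravel() for s in shapes])
    mean = aligned.mean(axis=0)
    _, sing, vt = np.linalg.svd(aligned - mean, full_matrices=False)
    var = sing ** 2 / (len(shapes) - 1)
    k = np.searchsorted(np.cumsum(var) / var.sum(), var_keep) + 1
    return mean, vt[:k]   # mean shape and top-k shape modes

# A new shape is then approximated as mean + modes.T @ b for parameters b.
```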
