Article

Impact of Novel Image Preprocessing Techniques on Retinal Vessel Segmentation

1 Department of Electronic Engineering, Science and Technology Larkana Campus, Quaid-e-Awam University of Engineering, Larkana 76221, Pakistan
2 Electrical Engineering Department, Sukkur IBA University, Sukkur 65200, Pakistan
3 Ophthalmology Department, Peoples University of Medical And Health Sciences for Women (PUMHSW), Nawabshah Shaheed Benazirabad, Nawabshah 67459, Pakistan
4 Computer Vision & Remote Sensing, Technische Universität, 10623 Berlin, Germany
5 Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia
6 College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia
7 Department of Automatic Control and Robotics, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, al. A. Mickiewicza 30, 30-059 Kraków, Poland
8 Department of Biocybernetics and Biomedical Engineering, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, al. A. Mickiewicza 30, 30-059 Kraków, Poland
9 School of Computing and Mathematics, Charles Sturt University, Wagga Wagga 2650, Australia
* Author to whom correspondence should be addressed.
Submission received: 19 August 2021 / Revised: 7 September 2021 / Accepted: 14 September 2021 / Published: 18 September 2021
(This article belongs to the Special Issue Novel Technologies on Image and Signal Processing)

Abstract: Segmentation of retinal vessels plays a crucial role in detecting many eye diseases, and its reliable computerized implementation is becoming essential for automated retinal disease screening systems. A large number of retinal vessel segmentation algorithms are available; although these methods improve accuracy, their sensitivity remains low because low-contrast vessels are not properly segmented, and this low contrast requires more attention in the segmentation process. In this paper, we propose new preprocessing steps for the precise extraction of retinal blood vessels. These preprocessing steps are also tested on other existing algorithms to observe their impact. Our suggested module for segmenting retinal blood vessels has two steps. The first step implements and validates the preprocessing module. The second step applies these preprocessing stages to our proposed binarization steps to extract the retinal blood vessels. The proposed preprocessing phase uses traditional image-processing methods to provide a much-improved segmented vessel image. Our binarization steps contain an image coherence technique for the retinal blood vessels. The proposed method performs well on the publicly available DRIVE and STARE databases. The novelty of the proposed method is that it is unsupervised and offers an accuracy of around 96% and a sensitivity of 81% while outperforming existing approaches. Due to the new tactics at each step of the proposed process, this blood vessel segmentation application is suitable for computer analysis of retinal images, such as automated screening for the early diagnosis of eye disease.

1. Introduction

Ocular abnormalities occur in many eye diseases, such as retinal vascular disorders and diabetic retinopathy (DR). These diseases are characterized by geometric changes observed in the blood vessels of the eye [1,2]. Among eye complications, serious diseases such as DR are the leading cause of blindness, especially in the working-age population [3,4]. According to World Health Organization estimates, the number of people with diabetes increased from 108 million in 1980 to 422 million in 2014 [5,6]. Prevalence is rising faster in low- and middle-income countries than in high-income countries. Early treatment may be feasible by diagnosing the changes and tracking their progression, and these measures give an inexpensive treatment alternative. As a result, reconstructing a distinct network of vessels from retinal images aids in either quantifying the severity of the disease or evaluating the impact of routine eye treatment [6]. The analysis takes place using a retinal image [7,8]. Retinal images are captured using digital fundus photography, an optical technique known as fundoscopy. The fundus camera works in different modes, of which the two standard modes are angiography mode and color-filtered fundus mode. The angiography mode, known as fundus fluorescein angiography (FFA), is a standard method of capturing retinal images. However, FFA is an invasive method and is not recommended by doctors or medical experts because the injected fluorescein dye can lead to many complications for patients' health. The color-filtered fundus mode produces the color retinal image; it is non-invasive, but it suffers from varying contrast and noise, as shown in Figure 1.
A fundus camera is a sophisticated lens system that offers a magnified view of the whole retina, including the optic disc, macula, and posterior pole, and it is used to take images of the fundus of the retina. The primary goal is to spot any abnormalities or alterations in the images. Even for experienced physicians, however, this manual procedure takes time [7]: human observers need 1–2 days to report on disease progression, and any delay in results causes a delay in treatment, loss of examination follow-up, and numerous possible communication errors between clinical staff and the patient [12,13].
Manual segmentation of retinal blood vessels is performed by a qualified ophthalmologist, who separates the vessels from their background for subsequent clinical evaluation [14,15,16]. However, manual segmentation is time-consuming and error-prone. Computerized segmentation approaches have made good progress using image processing, computer vision, machine learning, and pattern recognition [17,18]. The main task of a computerized method is to process the color fundus photograph as input and produce a segmented binary image with realistic clinical potential. These approaches can improve accuracy and sensitivity while reducing the amount of time spent manually analyzing retinal images. Such automated approaches for analyzing eye diseases will be a viable tool for large-scale screening [19,20].
The network of retinal vessels is known as the vascular network; it consists of vessels comprising arteries and veins. Figure 2 shows how the vessels resemble trees with roots and branches. The vessels have a tubular shape with widths and orientations that change progressively. Because of these variations, the vessels show low and variable contrast, making them difficult to see. Pre-processing steps are therefore necessary to enhance the retinal vessels and make them coherent for the segmentation process. The main challenges of the segmentation process are [7]:
  • The presence of the central light reflex of the vessel.
  • Uneven background illumination.
  • False vessels near the optic disc’s edge are often detected.
  • Thin vessels with little contrast are seen.
  • Bifurcations, crossing areas, and the fusion of closely parallel vessels.
  • The emergence of pathologies such as microaneurysms (MA), cotton-wool patches, light and dark lesions, and exudates, as shown in Figure 2.
This research work aims to analyze the effect of pre-processing steps on the accurate segmentation of retinal blood vessels. To study correct segmentation of retinal vessels, we apply these pre-processing stages to current techniques and apply post-processing steps in our proposed module. Our pre-processing steps for the retinal fundus image comprise non-uniform background removal, a contrast enhancement technique (morphological techniques versus homomorphic filtering), and Principal Component Analysis (PCA) to obtain a well-contrasted greyscale image. The post-processing steps use different filters and a double-threshold technique to achieve a well-segmented image.
Low and varying contrast and uneven illumination are handled by increasing the contrast level of each channel of the retinal color fundus image and then converting to greyscale to obtain a well-contrasted image. The detailed process is explained in the methodology section. The first step of the proposed method eliminates background noise and irregular lighting (uneven illumination). We tested two ways, one based on morphological techniques and the other based on homomorphic filtering; the better method is selected by comparing the contrast levels through histograms. The second step produces a well-contrasted greyscale image in which tiny vessels are visible and noise is suppressed; we used the traditional PCA technique to achieve this high-contrast greyscale image. We then use our post-processing module to obtain the well-segmented image. The post-processing module is based on making the vessels coherent using a second-order detector and a diffusion filter, followed by a binary double-threshold method. This article presents four main contributions:
  • Implementation of new preprocessing steps that, combined with post-processing methods, give improved performance.
  • Contrast analysis for the observation of small vessels, which leads to improved segmentation and helps diagnose the level of disease.
  • The preprocessing steps improve the performance of existing methods based on conventional techniques.
  • The preprocessing methods may improve the learning process of methods based on machine learning techniques.

2. Related Work

The distribution of blood vessels in the fundus of the retinal image is multidirectional, and it is difficult to extract them accurately. Several filter-based methods have been presented in recent years to increase the visibility of retinal blood vessels and obtain accurate segmentation [21]. Lathen et al. [22] implemented an improved local phase-based filter for proper vessel enhancement; their method is an intensity filtering method for the segmentation of retinal vessels.
Many researchers have implemented methods for the detection of retinal vessels [5,23]. Retinal vessel segmentation methods are divided into two classes: supervised and unsupervised retinal vessel detection methods [24]. Supervised methods require user interaction and labeled samples to train the vessel and non-vessel pixel classifiers [25]. The most widely used classifiers are Gaussian Mixture Models (GMM) [20,26], the K-Nearest Neighbors classifier [25], Artificial Neural Network (ANN) classifiers [27], and Support Vector Machine (SVM) classifiers [28,29]. Unsupervised methods do not require any user interaction; they are based on imaging techniques or mathematical modeling tactics to classify vessel and non-vessel pixels from the image, and they do not require training [24,30,31]. A notable approach was proposed by Yin et al. [32], who classify retinal blood vessels by pattern matching against a retinal vascular network model of the fundus image. Their method can separate vessel pixels from non-vessel pixels using the proposed vascular network model, but it gave false detections on pathological images. Learning approaches have been proposed [33,34] to improve this method, but they lack validation with retinal vessel analysis, and small vessels were not segmented.
Mendonca et al. [35] implemented an unsupervised method based on thresholding a difference of offset Gaussian (DoOG) filter response and multiscale morphological reconstruction. Still, their method did not detect tiny vessels and gave numerous false detections of vessel pixels. The multiscale retinal vessel algorithm [36] was improved in [37] by a detailed analysis of retinal image intensities, relating vessel pixels to particular pixels during the image acquisition process; they calculated a diameter-dependent equalization factor based on multiscale information. However, their method gave false vessel-pixel detections, especially on images with a central light reflex. Later, Al-Diri et al. [31] implemented a retinal vessel segmentation module that extracts a vessel profile with combined segmentation and width measurement based on the ribbon of twins active contour model. This model contains a tramline filter used to locate an initial region of actual vessel pixels by segmenting the centerline pixels. A segment growth algorithm then converts the pixel map of the tramline filter into a set of segments, each containing a series of profiles, and a junction resolution algorithm segments the crossings and joins the various junctions to give segmented vessel images. Another notable method was proposed by Azzopardi et al. [38] for automatic segmentation of retinal blood vessels from fundus images based on a COSFIRE filter. The COSFIRE filter suppressed the irregular illuminations, but many vessels were not detected. The common problem of all of these methods is that they fail to detect tiny vessels and produce false detections of vessel pixels. These problems are due to low and varying contrast, the central light reflex, and noise.
These issues are addressed in our pre-processing module. Our proposed method is unsupervised; still, its pre-processing steps can also be applied to supervised methods to improve training effectiveness.
Achieving the required threshold levels has always been a problem in these unsupervised methods. In particular, it is a significant problem for models based on active contours, because they require mathematically precise modeling and the development of optimization techniques to obtain well-segmented images for each image's properties. This article offers pre-processing and post-processing to solve each problem until we obtain a well-segmented vessel image. The performance of the segmented vessel images is assessed visually and statistically.

3. Material and Methods

Segmentation of retinal blood vessels for diagnosing eye disease remains a challenge due to low contrast, uneven illumination, and noise. Many researchers have implemented automatic segmentation methods for retinal blood vessels, but the precise detection of tiny vessels still needs improvement. We propose a method to solve these problems, illustrated in Figure 3. It includes a pre-processing module for obtaining an improved image and a post-processing module for obtaining a well-segmented image. Each module is explained in detail below.

3.1. Pre-Processing Module

Our pre-processing module contains different steps to obtain a well-contrasted image, and the process is shown in Figure 3. Each step of the pre-processing module is explained below.

3.1.1. Processing Retinal Color Fundus Image

The retinal color fundus images are processed as input to our pre-processing module to achieve a much-enhanced image. These images are captured using a special camera called a fundus camera and are mainly used in hospitals. A fundus image has three channels, Red, Green, and Blue (RGB), and each channel has its own imaging properties, as shown in Figure 4. The red channel is bright and contains noise [19,39], the green channel contains fewer noise pixels and gives better contrast, while the blue channel has more noise and shadow. Our main goal is to manage these color channels and obtain a well-contrasted greyscale output image for further processing by detectors for small-vessel observation. We use greyscale because it takes less processing time; color images take more processing time and can also lose detail in medical images, since processing color increases the amount of data handled in any segmentation or classification method. The next step is the removal of uneven illumination from the retinal fundus image.
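As a minimal sketch of this step (assuming the fundus image is loaded as a NumPy array of shape H × W × 3; the `split_channels` helper name is hypothetical), the channel separation is plain array slicing:

```python
import numpy as np

def split_channels(rgb):
    """Split an RGB fundus image (H x W x 3) into its three planes.

    The green plane usually offers the best vessel/background
    contrast; the red plane is bright and noisy, and the blue plane
    is dark and noisy.
    """
    return rgb[..., 0], rgb[..., 1], rgb[..., 2]
```

Each returned plane is then processed independently in the background-homogenization step below.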

3.1.2. Uneven Illumination Removal or Background Homogenization

The removal of uneven illumination, or background homogenization, is addressed using image-processing tactics. We used morphological operations and homomorphic filtering and selected the better tactic based on image visualization and histogram comparison.
Morphological Operation: Each retinal image channel is treated with morphological techniques to eliminate noise and cope with uneven illumination. It is observed from Figure 5 that each RGB channel contains noise and uneven illumination, which hinders blood vessel observation. The suggested morphological approach uses bottom-hat and top-hat operations, applied to each RGB channel to see how background noise affects the retinal blood vessels in color fundus images. There are variations between the background intensities and the blood vessel intensities: because the intensity levels of the blood vessels are significantly lower than the background level, these variations cause the uneven illumination and noise problem. For the observation of retinal blood vessels, it is critical to eliminate the background illumination. The morphological bottom-hat operation improves the image's background and adds information to the image while lowering the noise level on the retinal blood vessels, making them visible. Equation (1) shows the mathematical form of the bottom-hat operation, where • denotes the closing operation.
$T_b(f) = f \bullet b - f. \qquad (1)$
The top-hat operation increases image contrast and controls the changing contrast of the retinal blood vessels when applied to each RGB channel. The mathematical representation of the top-hat operation is defined in Equation (2), where ∘ denotes the opening operation.
$T_w(f) = f - f \circ b. \qquad (2)$
Uneven illumination and the noise problem are addressed by subtracting the top-hat image from the bottom-hat image. A more improved vessel image is obtained with controlled uneven illumination and noise level; the vessels appear more observable, and the output image of the morphological operation for each channel is shown in Figure 5. However, a single imaging tactic cannot be judged the best enhancement technique, so we also tested homomorphic filtering and then selected the better operation based on a comparison of the two tactics. Homomorphic filtering is discussed in the next paragraph.
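The two hat operations above can be sketched as follows; this is an illustrative implementation using SciPy's greyscale morphology, with a hypothetical helper name and an assumed structuring-element size, not the authors' exact code:

```python
import numpy as np
from scipy import ndimage

def hat_enhance(channel, size=11):
    """Illumination correction via top-hat / bottom-hat filtering.

    top-hat    T_w(f) = f - (f opened by b)  -> bright details
    bottom-hat T_b(f) = (f closed by b) - f  -> dark details (vessels)
    Subtracting the top-hat image from the bottom-hat image suppresses
    the uneven background while keeping dark vessel structures prominent.
    """
    f = channel.astype(float)
    top_hat = f - ndimage.grey_opening(f, size=(size, size))
    bottom_hat = ndimage.grey_closing(f, size=(size, size)) - f
    return bottom_hat - top_hat
```

On a perfectly flat channel both hats are zero, so the output is zero everywhere; any non-zero response marks local bright or dark structure.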
Homomorphic filtering: Homomorphic filtering is an imaging method primarily used to separate the illumination and reflectance components of an image. An appropriately illuminated image is required to overcome the uneven illumination of fundus retinal images. Each image contains two components: the illumination component and the reflectance component. The amount of light incident on the scene is called the illumination component; it is an essential component for overcoming the problems of low, varying contrast and noise. The reflectance component is the light reflected from the scene. The mathematical representation of an image $f(x,y)$ at pixel location $(x,y)$, with illumination component $I(x,y)$ and reflectance component $R(x,y)$, is shown in Equation (3).
$f(x,y) = I(x,y) \times R(x,y). \qquad (3)$
Homomorphic filtering is based on transformation functions that convert the image from the spatial domain to the frequency domain, a conversion based on the Fourier transform. The logarithmic function is applied to the basic homomorphic model (Equation (3)), and the product of illumination and reflectance is transformed into a sum, as shown in Equations (4) and (5).
$\ln f(x,y) = \ln\left[I(x,y) \times R(x,y)\right]. \qquad (4)$
$\ln f(x,y) = \ln I(x,y) + \ln R(x,y). \qquad (5)$
Applying the Fourier transform gives Equations (6) and (7), where $Z(u,v)$, $F_I(u,v)$, and $F_R(u,v)$ are the Fourier transforms of $z(x,y) = \ln f(x,y)$, $\ln I(x,y)$, and $\ln R(x,y)$, respectively.
$F\{z(x,y)\} = F\{\ln I(x,y)\} + F\{\ln R(x,y)\}. \qquad (6)$
$Z(u,v) = F_I(u,v) + F_R(u,v). \qquad (7)$
In homomorphic filtering, the Fourier-transformed image is processed with a high-pass filter, and the illumination and reflectance components are then recovered via the inverse Fourier transform. The output of the homomorphic filtering process for each channel of the retinal fundus image is shown in Figure 6.
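The whole homomorphic chain (log, Fourier transform, high-pass emphasis, inverse transform, exponential) can be sketched as below; the Gaussian filter shape and the gain parameters are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def homomorphic(img, cutoff=0.1, gamma_l=0.5, gamma_h=1.5):
    """Homomorphic filtering: ln -> FFT -> high-pass emphasis -> iFFT -> exp.

    The log turns I(x,y) * R(x,y) into a sum; a Gaussian-shaped
    high-frequency-emphasis filter attenuates the slowly varying
    illumination term (gain gamma_l < 1) and boosts the reflectance
    detail (gain gamma_h > 1).
    """
    z = np.log1p(img.astype(float))           # ln(1 + f), avoids log(0)
    Z = np.fft.fft2(z)
    u = np.fft.fftfreq(img.shape[0])[:, None]
    v = np.fft.fftfreq(img.shape[1])[None, :]
    d2 = u ** 2 + v ** 2                      # squared frequency radius
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-d2 / (2 * cutoff ** 2))) + gamma_l
    return np.expm1(np.real(np.fft.ifft2(H * Z)))
```

The filter `H` tends to `gamma_l` at zero frequency (suppressing illumination) and to `gamma_h` at high frequencies (emphasizing edges and vessels).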
It is observed that the morphological images give more detail than the homomorphic filtering outputs, as seen clearly in Figures 5 and 6. This is further validated by the histogram comparison of the green channel after morphological operations and after homomorphic filtering, shown in Figure 6. Homomorphic filtering gives a very smooth image that loses vessel detail and blurs the background. The morphological image has pixel variation and noise but shows the retinal blood vessels more clearly than the homomorphic filter output. We therefore select the retinal channels obtained through morphological operations for further processing via principal component analysis to obtain a well-contrasted greyscale image.

3.1.3. Converting a Colour (RGB) Image to a Single Greyscale Image

The greyscale image is mainly used to examine the image's details. Observing characteristics in medical images is vital, particularly in the case of the retinal vasculature, where observations are crucial for tracking the progression of eye illness. The background normalization of the RGB retinal channels reduced noise and handled the problem of uneven illumination; converting the RGB channels into one plane, however, yields more promising outcomes. Most research studies employ the green channel of retinal images for post-processing, so the primary objective is to convert RGB to a single greyscale channel; however, the green channel also exhibits changes in contrast. We used PCA to obtain the greyscale image, as shown in Figure 7. The PCA transformation rotates the axes of the color space's intensity values onto three orthogonal axes to produce a more appropriate greyscale image, and a well-contrasted greyscale image is generated for further processing in the post-processing module. Color-to-grey conversion is performed by combining the three previously processed channels after their respective non-uniform background removal. The mathematical representation of the conversion from color to greyscale is explained in detail by Soomro et al. [7]. It is worth noting that PCA produced a considerably more differentiated image of the vessels, and its histogram has a far more comprehensive range of intensity levels than the uniform green-channel image: compared with the green-channel histogram of the morphological processes illustrated in Figure 8b, the PCA histogram is more spread out and reflects more intensity levels (Figure 9).
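A minimal sketch of the PCA-based colour-to-grey conversion (treating each pixel as a 3-vector and projecting onto the dominant eigenvector of the channel covariance matrix; the helper name and the [0, 1] rescaling are assumptions, not the authors' exact formulation):

```python
import numpy as np

def pca_grey(rgb):
    """Project an RGB image onto its first principal component.

    Pixels are treated as 3-vectors; the dominant eigenvector of the
    3x3 channel covariance matrix defines the axis with the largest
    intensity spread, giving a well-contrasted greyscale plane.
    """
    h, w, _ = rgb.shape
    X = rgb.reshape(-1, 3).astype(float)
    X -= X.mean(axis=0)                        # centre each channel
    cov = np.cov(X, rowvar=False)              # 3x3 channel covariance
    vals, vecs = np.linalg.eigh(cov)
    grey = X @ vecs[:, np.argmax(vals)]        # projection on dominant axis
    lo, hi = grey.min(), grey.max()
    return ((grey - lo) / (hi - lo + 1e-12)).reshape(h, w)
```

The output is rescaled to [0, 1] so it can feed directly into the post-processing detectors.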

3.2. Post-Processing Module

The post-processing module performs the segmentation of the retinal vessels. Our main goal in post-processing is to analyze the impact of our pre-processing module and observe the accuracy of vessel segmentation. The post-processing module contains three steps. The first step makes the vessels coherent using a second-order detector; the second step improves the coherent vessels; the third step segments the retinal blood vessels using image reconstruction techniques. Each step is explained in detail below.

3.2.1. Second-Order Gaussian Detector for Coherence of Vessels

The retinal blood vessels in fundus images have geometric shapes, and the pattern of such a geometric shape is known as a ridge structure. Applying a second-order derivative oriented filter in a specific direction is the easiest way to reduce noise and bring out the ridge structure of the retinal blood vessels. The second-order derivative oriented filter has three parameters: length ($\sigma_u$), width ($\sigma_v$), and orientation. The length is tied to the width by a fixed factor to maintain the elongation. The width ($\sigma_v$) of the filtering process is chosen from a predefined set of values, and the method is validated until the best-normalized image is obtained. The best-normalized image over the length, width, and orientation parameters is accepted; such an image is known as the coherence image of the retinal vessels. The generalized Gaussian function used for the coherence image with these parameters is mathematically represented as:
$g(u,v) = \frac{1}{2\pi\sigma_u\sigma_v}\exp\left(-\left(\frac{u^2}{2\sigma_u^2}+\frac{v^2}{2\sigma_v^2}\right)\right).$
It contains two independent parameters, $\sigma_u$ and $\sigma_v$, and taking its second derivative with respect to $u$ gives:
$g_{uu}(u,v) = \frac{1}{2\pi\sigma_u^5\sigma_v}\left(u^2-\sigma_u^2\right)\exp\left(-\left(\frac{u^2}{2\sigma_u^2}+\frac{v^2}{2\sigma_v^2}\right)\right).$
Scale factors are used to obtain a normalized image. In this process, the maximum response at each pixel is taken over all the different lengths, widths, and orientations, resulting in a well-normalized output image, or initial coherent vessels image. The result of this process is shown in Figure 10.
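The detector can be sketched as below: an oriented second-derivative-of-Gaussian kernel ($g_{uu}$) is built per scale, rotated over a set of orientations, and the per-pixel maximum response is kept. The scale set, the elongation factor $\sigma_v = 2\sigma_u$, and the helper name are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def ridge_response(img, sigmas=(1.5, 2.5), angles=8):
    """Maximum response of oriented second-derivative-of-Gaussian filters.

    For each scale, the g_uu kernel (second derivative along u, with
    sigma_v = 2 * sigma_u for elongation) is rotated over a set of
    orientations; the per-pixel maximum over all scales and angles
    gives the initial coherent vessels image.
    """
    f = img.astype(float)
    best = np.full(f.shape, -np.inf)
    for su in sigmas:
        sv = 2.0 * su                          # elongation along the vessel
        half = int(3 * sv)
        u = np.arange(-half, half + 1, dtype=float)
        uu, vv = np.meshgrid(u, u)
        # g_uu(u,v) = (u^2 - su^2)/(2*pi*su^5*sv) * exp(-(u^2/2su^2 + v^2/2sv^2))
        g = ((uu ** 2 - su ** 2) / (2 * np.pi * su ** 5 * sv)
             * np.exp(-(uu ** 2 / (2 * su ** 2) + vv ** 2 / (2 * sv ** 2))))
        for a in np.linspace(0.0, 180.0, angles, endpoint=False):
            kernel = ndimage.rotate(g, a, reshape=False)
            best = np.maximum(best, ndimage.convolve(f, kernel))
    return best
```

Taking the per-pixel maximum over scales and orientations is what makes the response invariant to the local vessel direction and width.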

3.2.2. Final Coherent Vessels: Anisotropic Oriented Diffusion Filter

A more coherent vessel image is required because the initial coherent vessel image contains noise and shows the tiny vessels only faintly. We used the normalized anisotropic diffusion filter scheme [40], whose workflow includes the following steps:
  • Calculate the second-moment matrix for every vessel pixel.
  • Ensure each vessel pixel has its own diffusion matrix.
  • Calculate the intensity change for every vessel pixel as $\mathrm{div}(D\nabla L)$.
  • Update the image using the difference formula below to achieve the coherent vessels image shown in Figure 11:
    $f_{t+\Delta t} = f_t + \Delta t \times \mathrm{div}(D\nabla f).$

3.2.3. Segmented Image

The final coherent vessel images still contain noisy pixels that make it difficult to analyze small vessels and connect vessels. We used a double-threshold method based on a morphological reconstruction operation, which creates the final binary image from a marker image and a mask image. Let A (the mask image) and B (the marker image) be two binary images over the same domain D with $B \subseteq A$, i.e., $\forall p \in D:\ B(p) = 1 \Rightarrow A(p) = 1$; reconstruction of A from B gives a detailed binary image. The marker and mask binary images are derived from the histogram, as shown in Figure 12, and the mask and marker images themselves are shown in Figure 13. Both are obtained using simple mathematical operations: the marker image is created by thresholding at the image's mean value minus 0.9 times the standard deviation, whereas the mask image is obtained by thresholding at the image's mean value, based on the histogram. A segmented image of the retinal vessels is produced when morphological reconstruction is conducted using the marker and the mask. The marker image contains less noise than the mask image, and the primary purpose of scaling the standard deviation is to reduce the background noise; we obtained a better marker image with a factor of 0.9. Tiny vessels are detected by reducing false pixels in the background of the marker image. Another reason for scaling the standard deviation in the marker is to retain the maximum number of edge pixels, based on the marker and mask histograms, in the image reconstruction technique; this also gives more edge pixels for a well-segmented vessel image. However, the reconstructed image still contains isolated noise pixels that produce false vessels. We used a simple image-processing step to eliminate these noisy pixels: once the segmented retinal vessel image is created, regions smaller than 50 pixels are removed.
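The double threshold with binary reconstruction can be sketched as below: the mask keeps everything above the weaker threshold, the marker keeps only confident pixels, reconstruction retains the mask components touching a marker, and small regions are then dropped. The threshold arguments are left generic here (the paper derives the marker from mean − 0.9 × std and the mask from the mean), and the helper name is an assumption:

```python
import numpy as np
from scipy import ndimage

def double_threshold(img, t_low, t_high, min_area=50):
    """Marker/mask double threshold with binary reconstruction.

    mask   : pixels above the weaker threshold (candidate vessels)
    marker : pixels above the stronger threshold (confident vessels)
    Reconstruction keeps only mask components containing at least one
    marker pixel; regions smaller than min_area pixels are discarded.
    """
    mask = img >= t_low
    marker = img >= t_high
    labels, _ = ndimage.label(mask)
    keep = np.unique(labels[marker & (labels > 0)])  # components hit by a marker
    out = np.isin(labels, keep) & mask
    # remove small isolated regions (< min_area pixels)
    labels, n = ndimage.label(out)
    sizes = ndimage.sum(out, labels, index=np.arange(1, n + 1))
    out[np.isin(labels, np.where(sizes < min_area)[0] + 1)] = False
    return out
```

This connected-component formulation is equivalent to greyscale reconstruction restricted to binary images, and is a common way to implement hysteresis-style double thresholding.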

3.3. Overall Algorithm

The retinal blood vessel segmentation method is made up of our proposed pre-processing and post-processing components:
  • The first step manages the processing of the retinal color image: it converts the retinal color image into its three channels (RGB) and then converts each channel into a greyscale image.
  • The second step manages the removal of uneven illumination from each channel. Morphological operations and homomorphic filtering were tested to deal with irregular illumination; morphological operations gave better output images than homomorphic filtering.
  • The third step obtains the image in greyscale. We used the PCA approach to convert the RGB retinal fundus images to a single greyscale image.
  • The fourth step analyzes the normalization of the vessels, especially tiny vessels, which is an essential factor in increasing vessel sensitivity. The second-order detector is used to normalize the vessels; the remaining varying intensity of the vessels and broken ridges are addressed using anisotropic oriented diffusion filtering.
  • The final step combines double thresholding with morphological image reconstruction methods to achieve the segmented image of the vasculature.

3.4. Database and Measuring Parameters

Databases: We used two widely used, publicly available databases, Digital Retinal Images for Vessel Extraction (DRIVE) [25] and Structured Analysis of the Retina (STARE) [13], to validate our proposed method. The DRIVE database includes two sets of images, test and training, with their mask images and ground truth images; the images have a resolution of 768 × 584 pixels. About 25% of the images in the DRIVE database contain pathologies, making it a challenging database for testing any retinal segmentation algorithm. The STARE database contains 20 images, 50% of which show different anomalies, making it one of the most challenging databases for testing retinal segmentation algorithms; its images are 605 × 700 pixels in resolution and include mask and ground truth images. The main advantage of these databases is that they provide ground truth images for validation and have been used by many researchers, which allows us to compare our retinal segmentation method with existing methods.
Measuring Parameters: To assess the effectiveness of the proposed retinal segmentation approach, we employed the most widely used measures: accuracy, sensitivity and specificity. Sensitivity and specificity report the true and false pixel detection of vessels and non-vessels, while accuracy summarises the classification of all pixels in the segmented image. The main objective of this research is to detect true vessel pixels and to analyze the impact of the pre-processing module on our post-processing module and on other existing methods.
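These three measures can be computed directly from a binary segmentation and its ground truth. The sketch below is a generic illustration (the tiny arrays are invented for demonstration), not the evaluation code used in the paper.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    accuracy = (TP+TN)/total pixels."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)  # background correctly rejected
    fp = np.sum(pred & ~truth)   # background marked as vessel
    fn = np.sum(~pred & truth)   # vessel pixels missed
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / pred.size
    return se, sp, acc

# Toy 2x4 ground truth and prediction (illustrative only).
truth = np.array([[1, 1, 0, 0], [0, 0, 1, 0]])
pred  = np.array([[1, 0, 0, 0], [0, 0, 1, 1]])
se, sp, acc = segmentation_metrics(pred, truth)  # se=2/3, sp=0.8, acc=0.75
```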

4. Results and Discussion

This section validates the performance of the proposed method, in particular the effect of the pre-processing steps on the proposed post-processing module and on other existing methods. We also compare our proposed method against existing approaches and test it on challenging images.

4.1. Performance Analysis on the DRIVE and STARE Databases

As indicated in Table 1, we evaluated the performance of our approach on the DRIVE and STARE databases. The accuracy is 0.963 on DRIVE and 0.958 on STARE; the sensitivity is 0.812 on DRIVE and 0.809 on STARE. These results show that the proposed method can detect retinal blood vessels with an accuracy approaching that of manual segmentation. Figure 14 shows the segmented images: tiny vessels are detected, and the outputs are comparable to the corresponding ground truth images.

4.2. Impact of the Pre-Processing Module

This section validates the proposed pre-processing module in two ways. First, we ran our segmentation with and without pre-processing, as shown in Table 2; all measuring parameters improve by up to 50%. Second, we applied our pre-processing to existing retinal vessel segmentation methods such as those of Hou et al. [41] and Nguyen et al. [42]; it improves the overall performance of these methods, as shown in Table 3 and Table 4.

4.3. Analysis on Challenging Images

The DRIVE and STARE databases contain many challenging images with pathologies and other abnormalities, such as central light reflex and uneven illumination, which make it difficult to segment the retinal blood vessels. This procedure, the "analysis of challenging fundus images", demonstrates the suggested algorithm's ability to handle such images. As shown in Table 5, we achieved good performance on the challenging images of both databases, which further validates the precision with which our method segments retinal blood vessels.

4.4. Comparative Analysis

We compared the performance of our proposed method with existing methods on the DRIVE and STARE databases, as indicated in Table 6; this comparative analysis provides further validation of the proposed method. A few methods outperform ours on individual measures: for example, the method of Thangaraj et al. [43] achieves a higher sensitivity of 0.834 on STARE, but it is less accurate than our proposed method. Overall, our proposed method surpasses the other existing methods, demonstrating that it can segment retinal blood vessels with precision.

5. Conclusions

This study investigated the effect of pre-processing stages on retinal vessel segmentation. Many retinal vessel segmentation methods have been proposed to solve the problem of detecting small vessels, but they have failed to increase the sensitivity of small vessel detection. The ability to recognize retinal vessels correctly helps clinical personnel determine the progression of an illness and recommend timely treatment. The suggested pre-processing module, its influence on the post-processing module and the proposed steps show promising results for detecting small vessels. The proposed technique performed better than, or comparably to, other current methods when tested on the DRIVE and STARE databases. There is still room for improvement in future research: retinal vessel techniques based on machine learning have difficulties in training, and the suggested pre-processing procedures may help enhance the training process. Such improvements may turn this research into stand-alone software for detecting eye diseases.

Author Contributions

Conceptualization, T.A.S., A.A. (Ahmed Ali), S.A., R.T. and E.K.; methodology, T.A.S., A.J.A. and M.I.; software, T.A.S.; validation, T.A.S., A.J.A. and L.Z.; formal analysis, T.A.S., A.A. (Ahmed Ali), S.A., R.T., E.K. and L.Z.; investigation, T.A.S. and N.A.J.; resources, N.A.J.; data curation, N.A.J., M.I., A.G. and L.Z.; writing—original draft preparation, T.A.S.; writing—review and editing, T.A.S., A.A. (Ahmed Ali), A.J.A., S.A. and A.G.; visualization, A.A. (Ahmed Ali), A.A. (Ali Alqahtani) and M.I.; supervision, L.Z. and T.A.S.; project administration, T.A.S. and M.I.; funding acquisition, M.I., S.A., A.G., R.T., E.K., A.A. (Ali Alqahtani) and L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the AGH University of Science and Technology, grant No. 16.16.120.773.

Data Availability Statement

We used publicly accessible databases, cited in the Database and Measuring Parameters section. The databases are accessible from https://homes.esat.kuleuven.be/~mblaschk/projects/retina/ (accessed on 17 August 2021).

Acknowledgments

The authors acknowledge the support of Najran University, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Patton, N.; Aslam, T.; MacGillivray, T.; Patti, A.; Deary, I.J.; Dhillon, B. Retinal Vascular Image Analysis As A Potential Screening Tool For Cerebrovascular Disease: A: Rationale Based On Homology Between Cerebral And Retinal Microvasculatures. J. Anat. 2005, 206, 319–348. [Google Scholar] [CrossRef]
  2. Kanaide, H.; Ichiki, T.; Nishimura, J.; Hirano, K. Cellular Mechanism of Vasoconstriction Induced by Angiotensin II It Remains To Be Determined. Circ. Res. 2003, 1, 1089–1094. [Google Scholar]
  3. Grunkin, P.; Ersboll, M.; Madsen, B.; Larsen, K.; Christoffersen, M. Quantitative measurement of changes in retinal vessel diameter in ocular fundus images. Pattern Recogn. 2000, 21, 1215–1223. [Google Scholar]
  4. Heneghana, C.; Flynna, J.; O’Keefec, M.; Cahillc, M. Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis. Med. Image Anal. 2002, 6, 407–429. [Google Scholar] [CrossRef]
  5. Fraza, M.; Remagninoa, P.; Hoppea, A.; Uyyanonvarab, B.; Rudnickac, A.; Owenc, C.; Barmana, S. Blood vessel segmentation methodologies in retinal images. A survey. Comput. Methods Programs Biomed. 2012, 108, 407–433. [Google Scholar] [CrossRef]
  6. Soomro, T.A.; Gao, J.; Khan, M.A.U.; Khan, T.M.; Paul, M. Role of Image Contrast Enhancement Technique for Ophthalmologist as Diagnostic Tool for Diabetic Retinopathy. In Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, QLD, Australia, 30 November–2 December 2016; pp. 1–8. [Google Scholar] [CrossRef]
  7. Soomro, T.A.; Gao, J.; Khan, T.M.; Hani, A.F.M.; Khan, M.A.U.; Paul, M. Computerised Approaches for the Detection of Diabetic Retinopathy Using Retinal Fundus Images: A Survey. J. Pattern Anal. Appl. 2017, 20, 927–961. [Google Scholar] [CrossRef]
  8. Wang, J.J.; Liew, G.; Klein, R.; Rochtchina, E.; Knudtson, M.D.; Klein, B.E.; Wong, T.Y.; Burlutsky, G.; Mitchell, P. Retinal Vessel Diameter and Cardiovascular Mortality: Pooled Data Analysis From Two Older Populations. Eur. Heart J. 2007, 28, 1984–1992. [Google Scholar] [CrossRef] [PubMed]
  9. Hani, A.; Soomro, T.A. Non-invasive contrast enhancement for retinal fundus imaging. In Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE), Penang, Malaysia, 29 November–1 December 2013; Volume 1, pp. 197–202. [Google Scholar]
  10. Soomro, T.A.; Hani, A. Enhancement of colour fundus image and FFA image using RETICA. In Proceedings of the IEEE International Conference on Biomedical Engineering and Sciences (IECBES), Langkawi, Malaysia, 17–19 December 2012; Volume 1, pp. 831–836. [Google Scholar]
  11. Soomro, T.A. Non-Invasive Image Denoising and Contrast Enhancement Techniques for Retinal Fundus Images. Master’s Thesis, Electrical and Electronic Engineering Department, Universiti Teknologi Petronas (UTP), Seri Iskandar, Perak, 2014; pp. 1–175. [Google Scholar]
  12. Chaudhuri, S.; Chatterjee, S.; Katz, N.; Nelson, M.; Goldbaum, M. Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Trans. Med. Imaging 1989, 8, 263–269. [Google Scholar] [CrossRef] [Green Version]
  13. Hoover, A.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19, 203–210. [Google Scholar] [CrossRef] [Green Version]
  14. Soomro, T.A.; Afifi, A.J.; Zheng, L.; Soomro, S.; Gao, J.; Hellwich, O.; Paul, M. Deep Learning Models for Retinal Blood Vessels Segmentation: A Review. IEEE Access 2019, 7, 71696–71717. [Google Scholar] [CrossRef]
  15. Pakter, H.M.; Ferlin, E.; Fuchs, S.C.; Maestri, M.K.; Moraes, R.S.; Nunes, G.; Moreira, L.B.; Gus, M.; Fuchs, F.D. Measuring Arteriolar-To-Venous Ratio in Retinal Photography of Patients with Hypertension: Development and Application of a New Semi-Automated Method. Am. J. Hypertens. 2005, 18, 417–421. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Wong, T.Y.; Knudtson, M.D.; Klein, R.; Klein, B.E.; Meuer, M.S.M.; Hubbard, L.D. Computer-assisted measurement of retinal vessel diameters in the Beaver Dam Eye Study: Methodology, correlation between eyes, and effect of refractive errors. J. Ophthalmol. 2004, 111, 1181–1190. [Google Scholar] [CrossRef] [PubMed]
  17. Soomro, T.A.; Afifi, A.J.; Gao, J.; Hellwich, O.; Zheng, L.; Paul, M. Strided fully convolutional neural network for boosting the sensitivity of retinal blood vessels segmentation. Expert Syst. Appl. 2019, 134, 36–52. [Google Scholar] [CrossRef]
  18. Kocevar, M.; Klampfer, S.; Chowdhury, A.; Kacic, Z. Low-Quality Fingerprint Image Enhancement on the Basis of Oriented Diffusion and Ridge Compensation. Elektron. Elektrotechnika 2014, 20, 49–54. [Google Scholar] [CrossRef]
  19. Soomro, T.A.; Gao, J.; Lihong, Z.; Afifi, A.J.; Soomro, S.; Paul, M. Retinal Blood Vessels Extraction of Challenging Images. In Data Mining. AusDM 2018. Communications in Computer and Information Science; Springer: Singapore, 2019; Volume 996. [Google Scholar]
  20. Soares, J.V.; Leandro, J.J.; Cesar, R.M.; Jelinek, H.F.; Cree, M.J. Retinal Vessel Segmentation Using the 2-D Gabor Wavelet and Supervised Classification. IEEE Trans. Med. Imaging 2006, 9, 1214–1222. [Google Scholar] [CrossRef] [Green Version]
  21. Soomro, T.A.; Afifi, A.J.; Ali Shah, A.; Soomro, S.; Baloch, G.A.; Zheng, L.; Yin, M.; Gao, J. Impact of Image Enhancement Technique on CNN Model for Retinal Blood Vessels Segmentation. IEEE Access 2019, 7, 158183–158197. [Google Scholar] [CrossRef]
  22. Lathen, G.; Jonasson, J.; Borga, M. Blood vessel segmentation using multi-scale quadrature filtering. Pattern Recognit. Lett. 2010, 31, 762–767. [Google Scholar] [CrossRef] [Green Version]
  23. Lesagea, D.; Angelini, E.D.; Bloch, I.; Funka-Leaa, G. A review of 3D Vessel Lumen Segmentation Techniques: Models, Features and Extraction Schemes. Med. Image Anal. 2009, 13, 819–845. [Google Scholar] [CrossRef]
  24. Sun, K.; Chen, Z.; Jiang, S. Local Morphology Fitting Active Contour for Automatic Vascular Segmentation. IEEE Trans. Biomed. Eng. 2012, 59, 464–473. [Google Scholar]
  25. Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509. [Google Scholar] [CrossRef]
  26. Marin, D.; Aquino, A.; Gegundez-Arias, M.E.; Bravo, J.M. A New Supervised Method for Blood Vessel Segmentation in Retinal Images by Using Gray-Level and Moment Invariants-Based Features. IEEE Trans. Med. Imaging 2011, 30, 146–158. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Sinthanayothin, C.; Boyce, J.F.; Cook, H.L.; Williamson, T.H. Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images. Br. J. Ophthalmol. 1999, 83, 890–902. [Google Scholar] [CrossRef]
  28. Xinge, Y.; Qinmu, P.; Yuan, Y.; Yiu-ming, C.; Jiajia, L. Segmentation of Retinal Blood Vessels Using the Radial Projection and Semi-supervised Approach. Pattern Recognit. 2011, 44, 10–11. [Google Scholar]
  29. Ricci, E.; Perfetti, R. Retinal Blood Vessel Segmentation Using Line Operators and Support Vector Classification. IEEE Trans. Med. Imaging 2007, 26, 1357–1365. [Google Scholar] [CrossRef]
  30. Bankhead, P.; Scholfield, C.N.; McGeown, J.G.; Curtis, T.M. Fast retinal vessel detection and measurement using wavelets and edge location refinement. PLoS ONE 2012, 7, e32435. [Google Scholar] [CrossRef] [Green Version]
  31. Al-Diri, B.; Hunter, A.; Steel, D. An Active Contour Model for Segmenting and Measuring Retinal Vessels. IEEE Trans. Med. Imaging 2009, 28, 1488–1497. [Google Scholar] [CrossRef] [PubMed]
  32. Yin, X.; Ng, B.W.H.; He, J.; Zhang, Y.; Abbott, D. Accurate Image Analysis of the Retina Using Hessian Matrix and Binarisation of Thresholded Entropy with Application of Texture Mapping. PLoS ONE 2014, 9, e95943. [Google Scholar]
  33. Liskowski, P.; Krawiec, K. Segmenting Retinal Blood Vessels With Deep Neural Networks. IEEE Trans. Med. Imaging 2016, 35, 2369–2380. [Google Scholar] [CrossRef] [PubMed]
  34. Li, Q.; Feng, B.; Xie, L.; Liang, P.; Zhang, H.; Wang, T. A Cross-Modality Learning Approach for Vessel Segmentation in Retinal Images. IEEE Trans. Med. Imaging 2016, 35, 109–118. [Google Scholar] [CrossRef] [PubMed]
  35. Mendonca, A.; Campilho, A. Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction. IEEE Trans. Med. Imaging 2006, 25, 1200–1213. [Google Scholar] [CrossRef]
  36. Martínez-Perez, M.E.; Hughes, A.D.; Stanton, A.V.; Thom, S.A.; Bharath, A.A.; Parker, K.H. Retinal blood vessel segmentation by means of scale-space analysis and region growing. In Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  37. Martinez-Perez, M.E.; Hughes, A.D.; Thom, S.A.; Bharath, A.A. Segmentation of blood vessels from red-free and fluorescein retinal images. Med. Image Anal. 2007, 11, 47–61. [Google Scholar] [CrossRef] [PubMed]
  38. Azzopardia, G.; Strisciuglioa, N.; Ventob, M.; Petkova, N. Trainable COSFIRE filters for vessel delineation with application to retinal images. Med Image Anal. 2015, 19, 46–57. [Google Scholar] [CrossRef] [Green Version]
  39. Kanan, C.; Cottrell, G.W. Color-to-Grayscale: Does the Method Matter in Image Recognition? PLoS ONE 2012, 7, e29740. [Google Scholar]
  40. Fehrenbach, J.; Mirebeau, J.M. Sparse non-negative stencils for anisotropic diffusion. J. Math. Imag. Vis. 2014, 49, 123–147. [Google Scholar] [CrossRef] [Green Version]
  41. Hou, Y. Automatic Segmentation of Retinal Blood Vessels Based on Improved Multiscale Line Detection. J. Comput. Sci. Eng. 2014, 8, 119–128. [Google Scholar] [CrossRef] [Green Version]
  42. Nguyen, U.T.V.; Bhuiyan, A.; Park, L.A.F.; Ramamohanarao, K. An effective retinal blood vessel segmentation method using multi-scale line detection. Pattern Recognit. 2013, 46, 703–715. [Google Scholar] [CrossRef]
  43. Thangaraj, S.; Periyasamy, V.; Balaji, R. Retinal vessel segmentation using neural network. IET Image Process. 2018, 12, 669–678. [Google Scholar] [CrossRef]
  44. Lupas, C.A.; Tegolo, D.; Trucco, E. Retinal Vessel Segmentation Using AdaBoost. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 1267–1274. [Google Scholar] [CrossRef]
  45. Palomera-Perez, M.A.; Martinez-Perez, M.E.; Benitez-Perez, H.; Ortega-Arjona, J.L. Parallel Multiscale Feature Extraction and Region Growing: Application in Retinal Blood Vessel Detection. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 500–506. [Google Scholar] [CrossRef]
  46. Fraz, M.M.; Remagnino, P.; Hoppe, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, C.G.; Barman, S.A. An ensemble classification-based approach applied to retinal blood vessel segmentation. IEEE Trans. Biomed. Eng. 2012, 59, 2538–2548. [Google Scholar] [CrossRef] [PubMed]
  47. Orlando, J.I.; Blaschko, M. Learning fully-connected CRFs for blood vessel segmentation in retinal images. In Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2014; Volume 17, pp. 634–641. [Google Scholar]
  48. Roychowdhury, S.; Koozekanani, D.D.; Parhi, K.K. Blood Vessel Segmentation of Fundus Images by Major Vessel Extraction and Subimage Classification. IEEE J. Biomed. Health Inform. 2015, 19, 1118–1128. [Google Scholar] [PubMed]
  49. Melinscak, M.; Prentasic, P.; Loncaric, S. Retinal Vessel Segmentation Using Deep Neural Networks. In Proceedings of the 10th International Conference on Computer Vision Theory and Applications (VISAPP-2015), Berlin, Germany, 11–14 March 2015; pp. 577–582. [Google Scholar]
  50. Annunziata, R.; Garzelli, A.; Ballerini, L.; Mecocci, A.; Trucco, E. Leveraging Multiscale Hessian-Based Enhancement With a Novel Exudate Inpainting Technique for Retinal Vessel Segmentation. IEEE J. Biomed. Health Inform. 2016, 20, 1129–1138. [Google Scholar] [CrossRef] [PubMed]
  51. Zhao, Y.; Rada, L.; Chen, K.; Harding, S.P.; Zheng, Y. Automated Vessel Segmentation Using Infinite Perimeter Active Contour Model with Hybrid Region Information with Application to Retinal Images. IEEE Trans. Med. Imaging 2015, 34, 1797–1807. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Soomro, T.A.; Khan, M.A.U.; Gao, J.; Khan, T.M.; Paul, M.; Mir, N. Automatic Retinal Vessel Extraction Algorithm. In Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, QLD, Australia, 30 November–2 December 2016; pp. 1–8. [Google Scholar] [CrossRef]
  53. Khan, T.M.; Khan, M.A.; Kong, Y.; Kittaneh, O. Stopping criterion for linear anisotropic image diffusion: A fingerprint image enhancement case. Eurasip J. Image Video Process. 2016, 2016, 1–20. [Google Scholar] [CrossRef] [Green Version]
  54. Zhang, J.; Dashtbozorg, B.; Bekkers, E.; Pluim, J.P.W.; Duits, R.; ter Haar Romeny, B.M. Robust Retinal Vessel Segmentation via Locally Adaptive Derivative Frames in Orientation Scores. IEEE Trans. Med. Imaging 2016, 35, 2631–2642. [Google Scholar] [CrossRef] [Green Version]
  55. Orlando, J.I.; Prokofyeva, E.; Blaschko, M.B. A Discriminatively Trained Fully Connected Conditional Random Field Model for Blood Vessel Segmentation in Fundus Images. IEEE Trans. Biomed. Eng. 2017, 64, 16–27. [Google Scholar] [CrossRef] [Green Version]
  56. Ngo, L.; Han, J. Multi-level deep neural network for efficient segmentation of blood vessels in fundus images. Electron. Lett. 2017, 53, 1096–1098. [Google Scholar] [CrossRef]
  57. Guo, Y.; Budak, U.; Sengur, A.; Smarandache, F. A Retinal Vessel Detection Approach Based on Shearlet Transform and Indeterminacy Filtering on Fundus Images. Symmetry 2017, 9, 10. [Google Scholar] [CrossRef] [Green Version]
  58. Biswal, B.; Pooja, T.; Subrahmanyam, N.B. Robust retinal blood vessel segmentation using line detectors with multiple masks. IET Image Process. 2018, 12, 389–399. [Google Scholar] [CrossRef]
  59. Soomro, T.A.; Khan, T.M.; Khan, M.A.; Gao, J.; Paul, M.; Zheng, L. Impact of ICA-Based Image Enhancement Technique on Retinal Blood Vessels Segmentation. IEEE Access 2018, 6, 3524–3538. [Google Scholar] [CrossRef]
Figure 1. Explanation of the varying-low contrast and noise in fundus images: (a) shows the FFA image and (b) shows the color fundus image. Note: These images are taken from the FINDeRs UTP Malaysia database [9,10,11].
Figure 2. Abnormalities associated with Retinal Fundus Image.
Figure 3. Our Proposed Model.
Figure 4. Retinal color fundus image and its channels: (a) the original retinal color fundus image; (b) the red channel; (c) the green channel; (d) the blue channel.
Figure 5. Morphological Operation Output (a) Morphological Operation Output: Red Channel. (b) Morphological Operation Output: Green Channel. (c) Morphological Operation Output: Blue Channel.
Figure 6. Homomorphic filtering Operation Output (a) Homomorphic filtering Output: Red Channel. (b) Homomorphic filtering Output: Green Channel. (c) Homomorphic filtering Output: Blue Channel.
Figure 7. Our Proposed Model.
Figure 8. Comparison of Morphological operation Green channel Histogram and Homomorphic filtering Green channel Histogram. (a) Morphological operation Green channel Histogram (b) Homomorphic filtering Green channel Histogram.
Figure 9. PCA Histogram.
Figure 10. Second-order Gaussian derivative detectors output Image: Initial Coherent Vessels Image.
Figure 11. Coherent Vessels image.
Figure 12. The histogram is divided into two threshold levels, which are shown by vertical bars. The lower threshold level T L is determined by subtracting the mean value of each edge of the histogram from the mean value of the picture, whereas the upper threshold level T U is determined by multiplying the standard deviation by 0.9 and subtracting it from the mean value of the image.
Figure 13. The output of the post-processing module. The mask image is shown in (a), the marker image is shown in (b), the morphologically reconstructed image is shown in (c), and the final binary image or segmented image of retinal blood vessels is shown in (d).
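The mask/marker reconstruction of Figure 13, combined with a double threshold in the spirit of Figure 12, can be sketched as follows. The filter response and the two threshold values here are illustrative assumptions, not the paper's exact rule; `scipy.ndimage.binary_propagation` performs the binary morphological reconstruction by dilation.

```python
import numpy as np
from scipy import ndimage

def double_threshold_reconstruction(response, t_low, t_high):
    """Mask = weakly confident pixels, marker = strongly confident pixels;
    reconstruction keeps every weak region that touches a strong seed."""
    mask = response >= t_low    # permissive: all candidate vessel pixels
    marker = response >= t_high  # conservative: certain vessel pixels
    return ndimage.binary_propagation(marker, mask=mask)

# Toy vesselness response (illustrative values only).
response = np.array([
    [0.9, 0.6, 0.6, 0.0, 0.0],
    [0.0, 0.0, 0.6, 0.0, 0.6],  # the right-hand weak blob has no strong seed
    [0.0, 0.0, 0.0, 0.0, 0.6],
])
vessels = double_threshold_reconstruction(response, t_low=0.5, t_high=0.8)
```

The connected run of weak pixels containing the 0.9 seed survives; the isolated weak blob on the right is discarded, which is how the reconstruction suppresses noise while keeping faint vessel continuations.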
Figure 14. Segmented output images of the proposed method: the input image (first column), the segmented image (second column) and the ground truth image (third column).
Table 1. Analysis of Performance on DRIVE and STARE Databases.

Database   Se      Sp      AC      AUC
DRIVE      0.812   0.971   0.963   0.951
STARE      0.809   0.969   0.958   0.949
Table 2. Effectiveness of Pre-processing Module.

           Without Pre-Processing      With Pre-Processing
Database   Se      Sp      AC          Se      Sp      AC
DRIVE      0.491   0.502   0.421       0.812   0.971   0.963
STARE      0.487   0.498   0.437       0.809   0.969   0.958
Table 3. Effectiveness of Pre-processing Module on Existing Method: DRIVE Database.

                     Performance of Method           With Pre-Processing
Method               Se      Sp      AC      AUC     Se      Sp      AC      AUC
Nguyen et al. [42]   -       -       0.940   -       0.732   0.952   0.948   0.951
Hou et al. [41]      0.735   0.969   0.941   -       0.782   0.969   0.949   0.953
Table 4. Effectiveness of Pre-processing Module on Existing Method: STARE Database.

                     Performance of Method           With Pre-Processing
Method               Se      Sp      AC      AUC     Se      Sp      AC      AUC
Nguyen et al. [42]   -       -       0.932   -       0.701   0.949   0.944   0.942
Hou et al. [41]      0.734   0.965   0.933   -       0.779   0.968   0.949   0.953
Table 5. Performance on Challenging Images Analysis.

Database   Se      Sp      AC      AUC
DRIVE      0.812   0.959   0.962   0.945
STARE      0.809   0.962   0.968   0.952
Table 6. Comparison of Proposed Method with Existing Methods.

                             DRIVE                           STARE
Methods                      Se     Sp     AC     AUC        Se     Sp     AC     AUC
Staal et al. [25]            -      -      0.946  -          -      -      0.951  -
Soares et al. [20]           -      -      0.946  -          -      -      0.948  -
Mendonca et al. [35]         0.734  0.976  0.945  0.855      0.699  0.973  0.944  0.836
Martinez-Perez et al. [37]   0.724  0.965  0.934  0.845      0.750  0.956  0.941  0.853
Al-Diri et al. [31]          0.728  0.955  -      0.842      0.752  0.968  -      0.860
Lupas et al. [44]            0.720  -      0.959  -          -      -      -      -
Palomera-Perez et al. [45]   0.66   0.961  0.922  0.811      0.779  0.940  0.924  0.860
Xinge et al. [28]            0.741  0.975  0.943  0.858      0.726  0.975  0.949  0.851
Marin et al. [26]            0.706  0.980  0.945  0.843      0.694  0.981  0.952  0.838
Fraz et al. [46]             0.741  0.981  0.948  0.974      0.754  0.973  0.953  0.977
Nguyen et al. [42]           -      -      0.940  -          -      -      0.932  -
Hou et al. [41]              0.735  0.969  0.941  0.961      0.734  0.965  0.933  0.957
Orlando et al. [47]          0.785  0.967  -      -          -      -      0.951  -
Yin et al. [32]              -      -      0.947  -          -      -      -      -
Roychowdhury et al. [48]     0.725  0.983  0.952  0.962      0.772  0.973  0.951  0.969
Melinscak et al. [49]        -      -      0.946  0.974      -      -      -      -
Annunziata et al. [50]       -      -      -      -          0.713  0.984  0.956  0.965
Li et al. [34]               0.756  0.981  0.952  0.974      0.773  0.984  0.962  0.987
Zhao et al. [51]             0.716  0.978  0.944  0.848      0.776  0.954  0.943  0.865
Soomro et al. [52]           0.713  0.968  0.941  0.841      0.711  0.965  0.942  0.838
Khan et al. [53]             0.734  0.967  0.951  0.850      0.736  0.971  0.95   0.853
Zhang et al. [54]            0.743  0.976  0.947  0.952      0.767  0.976  0.954  0.961
Orlando et al. [55]          0.789  0.968  -      -          0.768  0.973  -      -
Ngo et al. [56]              0.746  0.984  0.953  0.975      -      -      -      -
Guo et al. [57]              -      -      -      0.947      -      -      -      0.946
Thangaraj et al. [43]        0.801  0.975  0.961  0.888      0.834  0.953  0.944  0.894
Biswal et al. [58]           0.71   0.97   0.95   -          0.70   0.97   0.95   -
Soomro et al. [59]           0.752  0.976  0.953  -          0.786  0.982  0.967  -
Soomro et al. [19]           0.745  0.962  0.948  -          0.784  0.976  0.951  -
Proposed Method              0.812  0.971  0.963  0.951      0.809  0.969  0.958  0.949

Soomro, T.A.; Ali, A.; Jandan, N.A.; Afifi, A.J.; Irfan, M.; Alqhtani, S.; Glowacz, A.; Alqahtani, A.; Tadeusiewicz, R.; Kantoch, E.; et al. Impact of Novel Image Preprocessing Techniques on Retinal Vessel Segmentation. Electronics 2021, 10, 2297. https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10182297
