Image Processing Using FPGAs 2021

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Image and Video Processing".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 18345

Special Issue Editor


Prof. Dr. Donald Bailey
Guest Editor
Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Palmerston North 4442, New Zealand
Interests: machine vision; FPGA-based design; digital image processing

Special Issue Information

Dear Colleagues,

Field Programmable Gate Arrays (FPGAs) are increasingly being used for the implementation of image processing applications. This is especially the case for real-time embedded applications where latency and power are important considerations. An FPGA embedded in a smart camera is able to perform much of the image processing directly as the image is streamed from the sensor, providing a processed data stream, rather than images. Modern system-on-chip (SoC) FPGAs allow the design for an application to be appropriately partitioned between hardware and software to exploit the characteristics of both platforms.
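
To make the idea of a "processed data stream, rather than images" concrete, the sketch below is a hypothetical, software-only C++ illustration (not code from any particular camera or from the papers in this issue): pixels are consumed one at a time in raster order, exactly as a sensor delivers them, only three accumulators are kept, and what leaves the device is a few object statistics rather than a frame. The class name, the fixed threshold of 128 and the synthetic test image are illustrative assumptions.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Accumulates simple object statistics from a raster-ordered pixel stream.
// No frame buffer is kept: the camera's "output" is the area and centroid of
// the bright region, i.e. data rather than an image.
struct StreamingCentroid {
    uint64_t area = 0, sum_x = 0, sum_y = 0;

    // One call per pixel, in the order the sensor streams them.
    void push(int x, int y, uint8_t pixel, uint8_t threshold) {
        if (pixel > threshold) {   // on-the-fly segmentation
            ++area;
            sum_x += x;
            sum_y += y;
        }
    }

    // Called once per frame, e.g. on the end-of-frame signal.
    void report() const {
        if (area == 0) { std::cout << "no object\n"; return; }
        std::cout << "area=" << area
                  << " centroid=(" << double(sum_x) / area
                  << ", " << double(sum_y) / area << ")\n";
    }
};

int main() {
    const int W = 8, H = 8;
    std::vector<uint8_t> img(W * H, 10);          // dark background
    for (int y = 2; y < 5; ++y)                   // bright 3x3 blob
        for (int x = 3; x < 6; ++x) img[y * W + x] = 200;

    StreamingCentroid sc;
    for (int y = 0; y < H; ++y)        // raster order: rows, then columns,
        for (int x = 0; x < W; ++x)    // exactly as pixels arrive off-sensor
            sc.push(x, y, img[y * W + x], 128);
    sc.report();                       // prints: area=9 centroid=(4, 3)
}
```

A real smart camera would add segmentation, labelling and filtering stages, but the structure, a fixed amount of per-pixel work with no frame store, is what allows the processing to keep pace with the sensor.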

Simply porting a software algorithm onto an FPGA often gives disappointing results, because many image processing algorithms have been optimised for a serial processor. It is usually necessary to transform the algorithm to efficiently exploit the parallelism and resources available on an FPGA. This can lead to novel algorithms and hardware computational architectures, both at the level of individual image processing operations and at the application level.
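
The classic example of such a transformation is restructuring a neighbourhood operation as a streaming, line-buffered pipeline: instead of random access into a stored frame, only the two previous image rows are kept on chip and a 3×3 window is updated as each pixel arrives, producing one output per input pixel. The sketch below is a minimal software model of that pattern in plain C++; the class name, the 8-bit pixel type and the fixed-point scaling by a right shift are illustrative assumptions, and a real HLS design would add the appropriate stream interfaces and pipeline directives.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Software model of a streaming 3x3 convolution. Two row buffers replace the
// frame store; each push() consumes one pixel and returns one output pixel,
// mirroring the one-pixel-per-clock behaviour of an FPGA pipeline.
class StreamConv3x3 {
public:
    StreamConv3x3(int width, const std::array<int, 9>& kernel, int shift)
        : kernel_(kernel), shift_(shift),
          lines_(2, std::vector<uint8_t>(width, 0)) {}

    // x is the column of the incoming pixel (raster order assumed).
    uint8_t push(int x, uint8_t pixel) {
        // Shift the 3x3 window left and insert the new column, built from
        // the two buffered rows plus the incoming pixel.
        for (int r = 0; r < 3; ++r) {
            win_[r][0] = win_[r][1];
            win_[r][1] = win_[r][2];
        }
        win_[0][2] = lines_[0][x];
        win_[1][2] = lines_[1][x];
        win_[2][2] = pixel;

        // Age the line buffers ready for the following rows.
        lines_[0][x] = lines_[1][x];
        lines_[1][x] = pixel;

        // Multiply-and-accumulate over the window (fully unrolled in hardware).
        int acc = 0;
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                acc += kernel_[r * 3 + c] * win_[r][c];

        if (acc < 0) acc = 0;         // clamp negative results
        acc >>= shift_;               // fixed-point scaling
        if (acc > 255) acc = 255;     // saturate to the 8-bit output range
        return static_cast<uint8_t>(acc);
    }

private:
    std::array<int, 9> kernel_;
    int shift_;
    std::vector<std::vector<uint8_t>> lines_;  // the two buffered rows
    uint8_t win_[3][3] = {};                   // 3x3 sliding window
};
```

Feeding pixels in raster order (out = conv.push(x, img[y][x]) for each row y and column x) mimics one pixel per clock; each output is centred one row and one column behind the input, which is the latency paid for never storing a full frame, and border handling is omitted for brevity.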

The aim of this Special Issue is to present and highlight novel algorithms, architectures, techniques and applications of FPGAs in the domain of image processing. Each submission should clearly demonstrate its novel contribution in one or more of these areas, or provide a comprehensive review of some aspect of image processing on FPGAs.

Prof. Dr. Donald Bailey
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Hardware algorithms for imaging
  • Computational imaging architectures
  • Reconfigurable image processing systems
  • Parallel image processing
  • Hardware acceleration for imaging applications
  • FPGA-based smart cameras
  • FPGA-based deep learning

Published Papers (7 papers)


Research

18 pages, 477 KiB  
Article
Resources and Power Efficient FPGA Accelerators for Real-Time Image Classification
by Angelos Kyriakos, Elissaios-Alexios Papatheofanous, Charalampos Bezaitis and Dionysios Reisis
J. Imaging 2022, 8(4), 114; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8040114 - 15 Apr 2022
Cited by 4 | Viewed by 2441
Abstract
A plethora of image and video-related applications involve complex processes that require hardware accelerators to achieve real-time performance. Notable among these are Machine Learning (ML) tasks using Convolutional Neural Networks (CNNs) to detect objects in image frames. Contributing to CNN accelerator solutions, the current paper focuses on the design of Field-Programmable Gate Array (FPGA) accelerators for CNNs with a limited feature space, to improve performance, power consumption and resource utilization. The proposed design approach targets designs that can utilize the logic and memory resources of a single FPGA device and mainly benefits edge, mobile and on-board satellite (OBC) computing, especially their image-processing-related applications. This work exploits the proposed approach to develop an FPGA accelerator for vessel detection on a Xilinx Virtex 7 XC7VX485T FPGA device (Advanced Micro Devices, Inc., Santa Clara, CA, USA). The resulting architecture operates on RGB images of size 80×80 or on sliding windows; it is trained on the “Ships in Satellite Imagery” dataset and validates the approach by achieving a 270 MHz operating frequency, completing inference in 0.687 ms and consuming 5 W. Full article
(This article belongs to the Special Issue Image Processing Using FPGAs 2021)

21 pages, 2195 KiB  
Article
Adaptive Real-Time Object Detection for Autonomous Driving Systems
by Maryam Hemmati, Morteza Biglari-Abhari and Smail Niar
J. Imaging 2022, 8(4), 106; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8040106 - 11 Apr 2022
Cited by 3 | Viewed by 2369
Abstract
Accurate and reliable detection is one of the main tasks of Autonomous Driving Systems (ADS). While detecting obstacles on the road under various environmental circumstances adds to the reliability of ADS, it results in more intensive computations and more complicated systems. The stringent real-time requirements of ADS, resource constraints, and energy efficiency considerations add to the design complications. This work presents an adaptive system that detects pedestrians and vehicles in different lighting conditions on the road. We take a hardware-software co-design approach on the Zynq UltraScale+ MPSoC and develop a dynamically reconfigurable ADS that employs hardware accelerators for pedestrian and vehicle detection and adapts its detection method to the environmental lighting conditions. The results show that the system maintains real-time performance and achieves adaptability with minimal resource overhead. Full article
(This article belongs to the Special Issue Image Processing Using FPGAs 2021)

21 pages, 399 KiB  
Article
Union-Retire for Connected Components Analysis on FPGA
by Donald G. Bailey and Michael J. Klaiber
J. Imaging 2022, 8(4), 89; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8040089 - 24 Mar 2022
Cited by 1 | Viewed by 1722
Abstract
The Union-Retire CCA (UR-CCA) algorithm started a new paradigm for connected components analysis. Instead of using directed tree structures, UR-CCA focuses on connectivity. This algorithmic change leads to a reduction in required memory, with no end-of-row processing overhead. In this paper we describe a hardware architecture based on UR-CCA and its realisation on an FPGA. The memory bandwidth and pipelining challenges of hardware UR-CCA are analysed and resolved. It is shown that up to 36% of memory resources can be saved using the proposed architecture. This translates directly to a smaller device for an FPGA implementation. Full article
(This article belongs to the Special Issue Image Processing Using FPGAs 2021)

19 pages, 821 KiB  
Article
Hybrid FPGA–CPU-Based Architecture for Object Recognition in Visual Servoing of Arm Prosthesis
by Attila Fejér, Zoltán Nagy, Jenny Benois-Pineau, Péter Szolgay, Aymar de Rugy and Jean-Philippe Domenger
J. Imaging 2022, 8(2), 44; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8020044 - 12 Feb 2022
Viewed by 2691
Abstract
The present paper proposes an implementation of a hybrid hardware–software system for the visual servoing of prosthetic arms. We focus on the most critical part of the system, the vision analysis. The prosthetic system comprises a glasses-worn eye tracker and a video camera, and the task is to recognize the object to grasp. The lightweight architecture for gaze-driven object recognition has to be implemented as a wearable device with low power consumption (less than 5.6 W). The algorithmic chain comprises gaze fixation estimation and filtering, generation of candidates, and recognition, with two backbone convolutional neural networks (CNNs). The time-consuming parts of the system, such as the SIFT (Scale-Invariant Feature Transform) detector and the backbone CNN feature extractor, are implemented in FPGA, and a new reduction layer is introduced in the object-recognition CNN to reduce the computational burden. The proposed implementation is compatible with the real-time control of the prosthetic arm. Full article
(This article belongs to the Special Issue Image Processing Using FPGAs 2021)

16 pages, 2501 KiB  
Article
A Soft Coprocessor Approach for Developing Image and Video Processing Applications on FPGAs
by Tiantai Deng, Danny Crookes, Roger Woods and Fahad Siddiqui
J. Imaging 2022, 8(2), 42; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging8020042 - 11 Feb 2022
Cited by 1 | Viewed by 2383
Abstract
Developing Field Programmable Gate Array (FPGA)-based applications is typically a slow and multi-skilled task. Research into development tools has gradually raised the level of abstraction at which applications can be built. This paper describes an approach which aims to further raise the level at which an application developer works when developing FPGA-based implementations of image and video processing applications. The starting concept is a system of streamed soft coprocessors. We present a set of soft coprocessors which implement some of the key abstractions of Image Algebra. Our soft coprocessors are designed for easy chaining, and allow users to describe their application as a dataflow graph. A prototype implementation of a development environment, called SCoPeS, is presented. An application can be modified even during execution without requiring re-synthesis. The paper concludes with performance and resource utilization results for different implementations of a sample algorithm. We conclude that the soft coprocessor approach has the potential to deliver better performance than the soft processor approach, and can improve programmability over dedicated HDL cores for domain-specific applications while achieving competitive real-time performance and utilization. Full article
(This article belongs to the Special Issue Image Processing Using FPGAs 2021)

16 pages, 5451 KiB  
Article
Design of Flexible Hardware Accelerators for Image Convolutions and Transposed Convolutions
by Cristian Sestito, Fanny Spagnolo and Stefania Perri
J. Imaging 2021, 7(10), 210; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7100210 - 12 Oct 2021
Cited by 7 | Viewed by 1908
Abstract
Nowadays, computer vision relies heavily on convolutional neural networks (CNNs) to perform complex and accurate tasks. Among them, super-resolution CNNs represent a meaningful example, due to the presence of both convolutional (CONV) and transposed convolutional (TCONV) layers. While the former exploit multiply-and-accumulate (MAC) operations to extract features of interest from incoming feature maps (fmaps), the latter perform MACs to tune the spatial resolution of the received fmaps properly. The ever-growing real-time and low-power requirements of modern computer vision applications represent a stimulus for the research community to investigate the deployment of CNNs on well-suited hardware platforms, such as field programmable gate arrays (FPGAs). FPGAs are widely recognized as valid candidates for trading off computational speed and power consumption, thanks to their flexibility and their capability to also deal with computationally intensive models. In order to reduce the number of operations to be performed, this paper presents a novel hardware-oriented algorithm able to efficiently accelerate both CONVs and TCONVs. The proposed strategy was validated by employing it within a reconfigurable hardware accelerator purposely designed to adapt itself to different operating modes set at run-time. When characterized using the Xilinx XC7K410T FPGA device, the proposed accelerator achieved a throughput of up to 2022.2 GOPS and, in comparison to state-of-the-art competitors, it reached an energy efficiency up to 2.3 times higher, without compromising the overall accuracy. Full article
(This article belongs to the Special Issue Image Processing Using FPGAs 2021)

42 pages, 6844 KiB  
Article
iDocChip: A Configurable Hardware Accelerator for an End-to-End Historical Document Image Processing
by Menbere Kina Tekleyohannes, Vladimir Rybalkin, Muhammad Mohsin Ghaffar, Javier Alejandro Varela, Norbert Wehn and Andreas Dengel
J. Imaging 2021, 7(9), 175; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7090175 - 3 Sep 2021
Cited by 1 | Viewed by 2894
Abstract
In recent years, there has been an increasing demand to digitize and electronically access historical records. Optical character recognition (OCR) is typically applied to scanned historical archives to transcribe them from document images into machine-readable texts. Many libraries offer special stationary equipment for scanning historical documents. However, to digitize these records without removing them from where they are archived, portable devices that combine scanning and OCR capabilities are required. An existing end-to-end OCR software system called anyOCR achieves high recognition accuracy for historical documents. However, it is unsuitable for portable devices, as it exhibits high computational complexity resulting in long runtime and high power consumption. Therefore, we have designed and implemented a configurable hardware-software programmable SoC called iDocChip that makes use of anyOCR techniques to achieve high accuracy. As a low-power and energy-efficient system with real-time capabilities, the iDocChip delivers the required portability. In this paper, we present the hybrid CPU-FPGA architecture of iDocChip along with the optimized software implementations of anyOCR. We demonstrate our results on multiple platforms with respect to runtime and power consumption. The iDocChip system outperforms the original anyOCR software by 44× in runtime while achieving 2201× higher energy efficiency and a 3.8% increase in recognition accuracy. Full article
(This article belongs to the Special Issue Image Processing Using FPGAs 2021)
