Image Sensors 2009

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 October 2009) | Viewed by 275440

Special Issue Editor


Editorial Advisor
1. Senior Technology Expert, Hamamatsu Photonics Europe, Dornacherplatz 7, CH-4500 Solothurn, Switzerland
2. Adjunct Professor of Optoelectronics, EPFL, Neuchâtel, Rue de la Maladière 71b, CH-2002 Neuchâtel 2, Switzerland
3. Innovation Sherpa, Innovation and Entrepreneurship Lab, ETH Zürich, Leonhardstrasse 27, CH-8092 Zürich, Switzerland
Interests: semiconductor image sensors; smart pixels; high-performance photosensing; low-noise; high-speed and high-dynamic-range image sensing; photonic microsystems; optical metrology and measurement systems; optical time-of-flight 3D range cameras; organic semiconductors; polymer optoelectronics; monolithic photonic microsystems based on organic semiconductors; entrepreneurship, management, creativity, intellectual property and project management

Keywords

  • image sensors
  • charge-coupled devices (CCD)
  • contact image sensors (CIS)
  • complementary metal–oxide–semiconductors (CMOS)
  • image sensor performance and improvement potential
  • pixel sensors
  • color sensing
  • thermal imaging
  • x-ray sensor arrays

Published Papers (20 papers)


Research


Article
A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms
by Fatiha Meskine, Miloud Chikr El Mezouar and Nasreddine Taleb
Sensors 2010, 10(9), 8553-8571; https://0-doi-org.brum.beds.ac.uk/10.3390/s100908553 - 14 Sep 2010
Cited by 16 | Viewed by 9424
Abstract
Image registration is a fundamental task in image processing used to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find, in a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive, and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying the two techniques of fitness sharing and elitism. Two NSCT-based methods are proposed for registration, and a comparative study is established between these methods and a wavelet-based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its ability to speed up the search. Simulation results clearly show that both proposed techniques are promising methods for image registration compared to the wavelet approach, with the second technique giving the best performance of all. Moreover, to demonstrate their effectiveness, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work perfectly well for multi-temporal satellite images as well, even in the presence of noise. Full article
(This article belongs to the Special Issue Image Sensors 2009)
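As an aside, the core idea of the paper above — a genetic search over rigid transformation parameters scored by an image similarity measure — can be sketched in a few lines. This toy version is not the authors' NSCT-based method: it registers a pure integer translation with a sum-of-squared-differences fitness, and the population size, mutation scale and tournament selection are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "reference" image and a translated copy to register.
ref = np.zeros((32, 32))
ref[10:20, 12:22] = 1.0
true_shift = (3, -2)
mov = np.roll(ref, true_shift, axis=(0, 1))

def fitness(ind):
    """Negative SSD after undoing the candidate translation (higher is better)."""
    dy, dx = int(round(ind[0])), int(round(ind[1]))
    back = np.roll(mov, (-dy, -dx), axis=(0, 1))
    return -np.sum((back - ref) ** 2)

# Minimal generational GA: tournament selection, blend crossover,
# Gaussian mutation, and elitism (the best individual always survives).
pop = rng.uniform(-5, 5, size=(30, 2))
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    new = [pop[scores.argmax()].copy()]
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if scores[i] > scores[j] else pop[j]
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if scores[i] > scores[j] else pop[j]
        new.append(0.5 * (a + b) + rng.normal(0.0, 1.0, 2))
    pop = np.array(new)

scores = np.array([fitness(p) for p in pop])
best = pop[scores.argmax()]
est = (int(round(best[0])), int(round(best[1])))
```

Elitism (copying the best individual into each new generation) is the same safeguard the paper combines with fitness sharing to keep the population diverse without losing the best solution found so far.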

Article
Relaxation Time Estimation from Complex Magnetic Resonance Images
by Fabio Baselice, Giampaolo Ferraioli and Vito Pascazio
Sensors 2010, 10(4), 3611-3625; https://0-doi-org.brum.beds.ac.uk/10.3390/s100403611 - 09 Apr 2010
Cited by 23 | Viewed by 8095
Abstract
Magnetic Resonance (MR) imaging techniques are used to measure biophysical properties of tissues. As clinical diagnoses are mainly based on the evaluation of contrast in MR images, relaxation times assume a fundamental role, providing a major source of contrast. Moreover, they can give useful information in cancer diagnostics. In this paper we present a statistical technique to estimate relaxation times exploiting complex-valued MR images. Working in the complex domain instead of the amplitude one allows us to consider the data to be bivariate Gaussian distributed, and thus to implement a simple Least Squares (LS) estimator on the available complex data. The proposed estimator proves to be simple, accurate and unbiased. Full article
(This article belongs to the Special Issue Image Sensors 2009)
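The complex-domain least-squares idea in the abstract above can be illustrated with a toy mono-exponential decay. The echo times, the grid search over T2 and the closed-form amplitude solve below are illustrative assumptions, not the authors' estimator.

```python
import numpy as np

# Synthetic mono-exponential decay sampled at several echo times (s):
# s(TE) = rho * exp(-TE/T2) * exp(i*phi), observed in the complex domain.
te = np.array([0.01, 0.03, 0.06, 0.10, 0.15])
t2_true, rho_true, phi = 0.08, 1.0, 0.4
s = rho_true * np.exp(-te / t2_true) * np.exp(1j * phi)

def complex_ls_t2(s, te, t2_grid):
    """Separable least squares: for each candidate T2 the complex amplitude
    has a closed-form LS solution; keep the T2 with the smallest residual."""
    best = (np.inf, None)
    for t2 in t2_grid:
        m = np.exp(-te / t2)                 # real-valued decay model
        rho = np.vdot(m, s) / np.vdot(m, m)  # LS complex amplitude
        resid = np.sum(np.abs(s - rho * m) ** 2)
        if resid < best[0]:
            best = (resid, t2)
    return best[1]

t2_hat = complex_ls_t2(s, te, np.linspace(0.01, 0.3, 581))
```

Fitting the complex samples directly (rather than their magnitudes) is what lets the noise be modelled as bivariate Gaussian, which is the abstract's central point.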

Article
Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques
by Octavi Fors, Jorge Núñez, Xavier Otazu, Albert Prades and Robert D. Cardinal
Sensors 2010, 10(3), 1743-1752; https://0-doi-org.brum.beds.ac.uk/10.3390/s100301743 - 03 Mar 2010
Cited by 11 | Viewed by 12845
Abstract
In this paper we show how image deconvolution techniques can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor. Full article
(This article belongs to the Special Issue Image Sensors 2009)
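Image deconvolution of the kind discussed above is commonly done with the Richardson-Lucy iteration; the abstract does not name the authors' algorithm, so the following is a generic sketch on a toy 1D "star field" blurred by a Gaussian PSF.

```python
import numpy as np

def richardson_lucy(d, psf, n_iter=50):
    """Classic Richardson-Lucy deconvolution (1D, 'same'-mode convolutions)."""
    u = np.full_like(d, d.mean())          # flat initial estimate
    psf_m = psf[::-1]                      # mirrored PSF
    for _ in range(n_iter):
        est = np.convolve(u, psf, mode="same")
        ratio = d / np.maximum(est, 1e-12) # avoid division by zero
        u = u * np.convolve(ratio, psf_m, mode="same")
    return u

# A faint "star" blurred by a normalized Gaussian PSF.
x = np.arange(-3, 4)
psf = np.exp(-x**2 / 2.0)
psf /= psf.sum()
scene = np.zeros(64)
scene[30] = 100.0
blurred = np.convolve(scene, psf, mode="same")

restored = richardson_lucy(blurred, psf, n_iter=100)
```

The iteration re-concentrates the flux the PSF spread out, which is exactly the effect the paper quantifies as an equivalent gain in quantum efficiency or aperture.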

Article
Geometric Stability and Lens Decentering in Compact Digital Cameras
by Enoc Sanz-Ablanedo, José Ramón Rodríguez-Pérez, Julia Armesto and María Flor Álvarez Taboada
Sensors 2010, 10(3), 1553-1572; https://0-doi-org.brum.beds.ac.uk/10.3390/s100301553 - 01 Mar 2010
Cited by 15 | Viewed by 10729
Abstract
A study of the geometric stability and decentering present in the sensor-lens systems of six identical compact digital cameras has been conducted. With regard to geometric stability, the variation of internal geometry parameters (principal distance, principal point position and distortion parameters) was considered. With regard to lens decentering, the amount of radial and tangential displacement resulting from decentering distortion was related to the precision of the camera and to the offset of the principal point from the geometric center of the sensor. The study was conducted with data obtained from 372 calibration processes (62 per camera). The tests were performed for each camera in three situations: during continuous use of the cameras, after camera power off/on, and after full extension and retraction of the zoom lens. Additionally, 360 new calibrations were performed in order to study the variation of the internal geometry when the camera is rotated. The aim of this study was to relate the level of stability and decentering in a camera with the precision and quality that can be obtained. An additional goal was to provide practical recommendations on the photogrammetric use of such cameras. Full article
(This article belongs to the Special Issue Image Sensors 2009)

Article
Field Map Reconstruction in Magnetic Resonance Imaging Using Bayesian Estimation
by Fabio Baselice, Giampaolo Ferraioli and Aymen Shabou
Sensors 2010, 10(1), 266-279; https://0-doi-org.brum.beds.ac.uk/10.3390/s100100266 - 30 Dec 2009
Cited by 24 | Viewed by 11232
Abstract
Field inhomogeneities in Magnetic Resonance Imaging (MRI) can cause blur or image distortion, as they produce an off-resonance frequency at each voxel. These effects can be corrected if an accurate field map is available. Field maps can be estimated starting from the phase of multiple complex MRI data sets. In this paper we present a technique based on statistical estimation to reconstruct a field map exploiting two or more scans. The proposed approach implements a Bayesian estimator in conjunction with the Graph Cuts optimization method. The effectiveness of the method has been proven on simulated and real data. Full article
(This article belongs to the Special Issue Image Sensors 2009)
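For orientation, the simplest two-echo field-map estimator (phase difference divided by 2π·ΔTE) is sketched below. The paper's contribution is the Bayesian/Graph-Cuts reconstruction layered on top of this kind of raw estimate, which this sketch does not attempt; the echo times and the toy 2x2 map are illustrative.

```python
import numpy as np

# Two complex scans of the same voxel grid at echo times TE1 < TE2:
# an off-resonance field f (Hz) adds phase 2*pi*f*TE at each voxel.
te1, te2 = 0.005, 0.010                            # echo times (s)
f_true = np.array([[10.0, -25.0], [40.0, 0.0]])    # Hz, toy 2x2 "map"
mag = np.ones_like(f_true)
s1 = mag * np.exp(2j * np.pi * f_true * te1)
s2 = mag * np.exp(2j * np.pi * f_true * te2)

# Phase-difference estimator: the angle of s2 * conj(s1) cancels any phase
# common to both scans, and stays unwrapped as long as |f| < 1/(2*dTE).
f_map = np.angle(s2 * np.conj(s1)) / (2 * np.pi * (te2 - te1))
```

With ΔTE = 5 ms the unambiguous range is ±100 Hz; stronger inhomogeneities wrap, which is one reason a regularized estimator such as the paper's is needed in practice.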

Article
A 3D Sensor Based on a Profilometrical Approach
by Jesús Carlos Pedraza-Ortega, Efren Gorrostieta-Hurtado, Manuel Delgado-Rosas, Sandra L. Canchola-Magdaleno, Juan Manuel Ramos-Arreguin, Marco A. Aceves Fernandez and Artemio Sotomayor-Olmedo
Sensors 2009, 9(12), 10326-10340; https://0-doi-org.brum.beds.ac.uk/10.3390/s91210326 - 21 Dec 2009
Cited by 8 | Viewed by 13686
Abstract
An improved method is presented that uses Fourier- and wavelet-transform-based analysis to infer and extract 3D information from an object by projecting fringes onto it. The method requires a single image containing a projected sinusoidal white-light fringe pattern of known spatial frequency, and this frequency information is used to avoid discontinuities in high-frequency fringes. Several computer simulations and experiments have been carried out to verify the analysis, and the comparison between numerical simulations and experiments has proved the validity of the proposed method. Full article
(This article belongs to the Special Issue Image Sensors 2009)
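The Fourier-transform step of fringe-projection profilometry can be sketched in 1D: isolate the carrier sideband, take the analytic signal, and subtract the carrier phase. The carrier frequency, bandwidth and toy "height" profile below are illustrative assumptions, not the authors' combined Fourier/wavelet scheme.

```python
import numpy as np

# 1D fringe sketch: a sinusoidal pattern of known carrier frequency f0
# is phase-modulated by the object "height" phi(x).
n, f0 = 256, 16                                   # samples, carrier cycles
x = np.arange(n)
phi = 0.8 * np.exp(-((x - 128) / 40.0) ** 2)      # toy height-induced phase
fringe = 128 + 100 * np.cos(2 * np.pi * f0 * x / n + phi)

# Fourier-transform method: keep only the +f0 sideband, invert to get the
# analytic signal, then subtract the carrier phase.
spec = np.fft.fft(fringe)
band = np.zeros_like(spec)
lo, hi = f0 - 8, f0 + 8                           # band around the carrier
band[lo:hi + 1] = spec[lo:hi + 1]
analytic = np.fft.ifft(band)
phi_rec = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x / n
phi_rec -= phi_rec[0]                             # remove constant offset
```

Knowing the carrier frequency is what allows the sideband to be separated cleanly, which is the role the known spatial frequency plays in the abstract above.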

Article
Nonrigid Registration of Brain Tumor Resection MR Images Based on Joint Saliency Map and Keypoint Clustering
by Zhijun Gu and Binjie Qin
Sensors 2009, 9(12), 10270-10290; https://0-doi-org.brum.beds.ac.uk/10.3390/s91210270 - 17 Dec 2009
Cited by 14 | Viewed by 12193
Abstract
This paper proposes a novel global-to-local nonrigid brain MR image registration to compensate for the brain shift and the unmatchable outliers caused by tumor resection. The mutual information between the corresponding salient structures, which are enhanced by the joint saliency map (JSM), is maximized to achieve a global rigid registration of the two images. Detected and clustered at the paired contiguous matching areas in the globally registered images, the paired pools of DoG keypoints, in combination with the JSM, provide a useful cluster-to-cluster correspondence to guide local control-point correspondence detection and outlier keypoint rejection. Lastly, a quasi-inverse consistent deformation is smoothly approximated to locally register the brain images by mapping the clustered control points with compact-support radial basis functions. The 2D implementation of the method can model the brain shift in brain tumor resection MR images, though the theory holds for the 3D case. Full article
(This article belongs to the Special Issue Image Sensors 2009)

Article
Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera
by Filiberto Chiabrando, Roberto Chiabrando, Dario Piatti and Fulvio Rinaudo
Sensors 2009, 9(12), 10080-10096; https://0-doi-org.brum.beds.ac.uk/10.3390/s91210080 - 11 Dec 2009
Cited by 134 | Viewed by 19671
Abstract
3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000, were performed and are reported in this paper. Two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, covering evaluation of the camera warm-up time, evaluation of the distance measurement error, and a study of the influence on distance measurements of the camera orientation with respect to the observed object. The second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets. Full article
(This article belongs to the Special Issue Image Sensors 2009)

Article
Non-Linearity in Wide Dynamic Range CMOS Image Sensors Utilizing a Partial Charge Transfer Technique
by Suhaidi Shafie, Shoji Kawahito, Izhal Abdul Halin and Wan Zuha Wan Hasan
Sensors 2009, 9(12), 9452-9467; https://0-doi-org.brum.beds.ac.uk/10.3390/s91209452 - 26 Nov 2009
Cited by 11 | Viewed by 13258
Abstract
The partial charge transfer technique can expand the dynamic range of a CMOS image sensor by synthesizing two types of signal, namely the long and short accumulation time signals. However, the short accumulation time signal obtained from the partial transfer operation suffers from non-linearity with respect to the incident light. In this paper, an analysis of the non-linearity of the partial charge transfer technique has been carried out, and the relationship between dynamic range and non-linearity is studied. The results show that the non-linearity is caused by two factors: the current diffusion, which has an exponential relation with the potential barrier, and the initial condition of the photodiodes, for which the error in the high illumination region increases as the ratio of the long to the short accumulation time rises. Moreover, raising the saturation level of the photodiodes also increases the error in the high illumination region. Full article
(This article belongs to the Special Issue Image Sensors 2009)
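The dynamic-range synthesis referred to above can be illustrated with ideal linear pixels: below full well the long-accumulation signal is used directly, and above it the short-accumulation signal is scaled by the exposure ratio. The full-well depth and ratio below are invented for the example; the paper's point is precisely that real partial-transfer pixels deviate from this ideal linear model.

```python
import numpy as np

full_well = 1000.0          # saturation level (e-), illustrative
ratio = 16.0                # long/short accumulation-time ratio, illustrative

def synthesize(long_sig, short_sig):
    """Use the long-accumulation signal where unsaturated, otherwise the
    short-accumulation signal scaled by the exposure ratio (ideal pixels)."""
    return np.where(long_sig < full_well, long_sig, short_sig * ratio)

# Ideal pixel response to a range of illuminations (e- per long exposure):
light = np.array([10.0, 500.0, 2000.0, 12000.0])
long_sig = np.minimum(light, full_well)
short_sig = np.minimum(light / ratio, full_well)
wdr = synthesize(long_sig, short_sig)
```

Under this ideal model the synthesized output reproduces the illumination over a range `ratio` times wider than a single exposure; any non-linearity of the short signal, as analyzed in the paper, shows up as error in exactly the high-illumination branch.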

Article
Asphalted Road Temperature Variations Due to Wind Turbine Cast Shadows
by Rafael Arnay, Leopoldo Acosta, Marta Sigut and Jonay Toledo
Sensors 2009, 9(11), 8863-8883; https://0-doi-org.brum.beds.ac.uk/10.3390/s91108863 - 05 Nov 2009
Viewed by 13266
Abstract
The contribution of this paper is a technique that, in certain circumstances, makes it possible to avoid removing dynamic shadows in the visible spectrum by making use of images in the infrared spectrum. This technique emerged from a real problem concerning the autonomous navigation of a vehicle in a wind farm. In this environment, the dynamic shadows cast by the wind turbines’ blades make it necessary to include a shadow removal stage in the preprocessing of the visible spectrum images in order to avoid the shadows being misclassified as obstacles. In the thermal images, dynamic shadows completely disappear, something that does not always occur in the visible spectrum, even when the preprocessing is executed. Thus, a fusion of the thermal and visible bands is performed. Full article
(This article belongs to the Special Issue Image Sensors 2009)

Article
A Novel Morphometry-Based Protocol of Automated Video-Image Analysis for Species Recognition and Activity Rhythms Monitoring in Deep-Sea Fauna
by Jacopo Aguzzi, Corrado Costa, Yoshihiro Fujiwara, Ryoichi Iwase, Eva Ramirez-Llorda and Paolo Menesatti
Sensors 2009, 9(11), 8438-8455; https://0-doi-org.brum.beds.ac.uk/10.3390/s91108438 - 26 Oct 2009
Cited by 43 | Viewed by 15515
Abstract
The understanding of ecosystem dynamics in deep-sea areas is to date limited by technical constraints on sampling repetition. We have elaborated a morphometry-based protocol for automated video-image analysis in which animal movement tracking (by frame subtraction) is accompanied by species identification from animals’ outlines by Fourier Descriptors and standard K-Nearest Neighbours methods. One week of footage from a permanent video station located at 1,100 m depth in Sagami Bay (Central Japan) was analysed. Out of 150,000 frames (1 per 4 s), a subset of 10,000 was analyzed by a trained operator to increase the efficiency of the automated procedure. Error estimation of the automated and trained-operator procedures was computed as a measure of protocol performance. Three moving species were identified as the most recurrent: Zoarcid fishes (eelpouts), red crabs (Paralomis multispina), and snails (Buccinum soyomaruae). Species identification with KNN thresholding produced better results in automated motion detection. The results are discussed in light of the technological bottleneck that to date deeply conditions the exploration of the deep sea. Full article
(This article belongs to the Special Issue Image Sensors 2009)
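Outline classification by Fourier Descriptors plus nearest-neighbour matching, as used in the protocol above, can be sketched on synthetic contours. The shapes, descriptor length and 1-NN rule here are illustrative, not the paper's trained classifier.

```python
import numpy as np

def fourier_descriptors(contour, k=4):
    """Fourier descriptors of a closed contour given as complex points x+iy:
    dropping DC gives translation invariance, taking magnitudes gives
    rotation invariance, normalizing gives scale invariance."""
    c = np.fft.fft(contour)
    c[0] = 0
    mags = np.abs(c) / np.abs(c[1])
    return np.concatenate([mags[1:k + 1], mags[-k:]])  # +/- harmonics

def knn_predict(x, feats, labels):
    """1-nearest-neighbour in descriptor space."""
    d = np.linalg.norm(feats - x, axis=1)
    return labels[d.argmin()]

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.exp(1j * t)
ellipse = 2 * np.cos(t) + 1j * 0.5 * np.sin(t)

feats = np.array([fourier_descriptors(circle), fourier_descriptors(ellipse)])
labels = np.array(["circle", "ellipse"])

# A scaled, shifted, rotated circle should still match the circle template.
query = 3.0 * np.exp(1j * (t + 0.7)) + (5 + 2j)
pred = knn_predict(fourier_descriptors(query), feats, labels)
```

The invariances built into the descriptor are what let animals be recognized regardless of where in the frame, at what range, and at what heading they appear.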

Article
Sensor Calibration Based on Incoherent Optical Fiber Bundles (IOFB) Used For Remote Image Transmission
by José L. Lázaro, Pedro R. Fernández, Alfredo Gardel, Angel E. Cano and Carlos A. Luna
Sensors 2009, 9(10), 8215-8229; https://0-doi-org.brum.beds.ac.uk/10.3390/s91008215 - 19 Oct 2009
Cited by 7 | Viewed by 15486
Abstract
Image transmission through incoherent optical fiber bundles (IOFB) requires prior calibration to obtain the spatial in-out fiber correspondence in order to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table (LUT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a line-scan-based method to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and increased image quality through the introduction of a fiber detection algorithm, an intensity compensation process and, finally, a single interpolation algorithm. Full article
(This article belongs to the Special Issue Image Sensors 2009)
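The in-out LUT principle behind IOFB calibration can be demonstrated with a toy scrambled "bundle". A one-fiber-at-a-time probe stands in for the paper's faster line-scan calibration, and the 16-fiber permutation below is made up for the example.

```python
import numpy as np

n = 16                                   # number of fibers (toy 1D "image")

# The bundle scrambles fiber order: output position j carries input fiber perm[j].
perm = np.array([5, 2, 11, 0, 14, 7, 1, 9, 15, 3, 12, 6, 8, 13, 4, 10])

def transmit(scene_flat):
    """What the camera behind the bundle records for a given input scene."""
    return scene_flat[perm]

# Calibration: light one input fiber at a time and record where it lands
# at the output; this fills the in-out Look-Up Table (LUT).
lut = np.empty(n, dtype=int)
for i in range(n):
    probe = np.zeros(n)
    probe[i] = 1.0
    lut[np.argmax(transmit(probe))] = i   # output position j came from input i

# Reconstruction: reorder the scrambled output using the LUT.
scene = np.arange(n, dtype=float)
received = transmit(scene)
restored = np.empty(n)
restored[lut] = received
```

Once the LUT is filled, reconstruction is a single gather operation per frame, which is why calibration speed (the paper's line-scan contribution) is the expensive part, not reconstruction.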

Article
Fundamentals of in Situ Digital Camera Methodology for Water Quality Monitoring of Coast and Ocean
by Lonneke Goddijn-Murphy, Damien Dailloux, Martin White and Dave Bowers
Sensors 2009, 9(7), 5825-5843; https://0-doi-org.brum.beds.ac.uk/10.3390/s90705825 - 22 Jul 2009
Cited by 31 | Viewed by 13750
Abstract
Conventional digital cameras, the Nikon Coolpix885® and the SeaLife ECOshot®, were used as in situ optical instruments for water quality monitoring. Measured response spectra showed that these digital cameras are essentially three-band radiometers. The response values in the red, green and blue bands, quantified by the RGB values of digital images of the water surface, were comparable to measurements of irradiance levels at the red, green and cyan/blue wavelengths of water-leaving light. Different systems were deployed to capture upwelling light from below the surface while eliminating direct surface reflection. Relationships between the RGB ratios of water surface images and water quality parameters were found to be consistent with previous measurements using more traditional narrow-band radiometers. This paper focuses on the method used to acquire digital images, derive RGB values and relate the measurements to water quality parameters. Field measurements were obtained in Galway Bay, Ireland, and in the Southern Rockall Trough in the North Atlantic, where both yellow substance and chlorophyll concentrations were successfully assessed using the digital camera method. Full article
(This article belongs to the Special Issue Image Sensors 2009)
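The basic measurement in the digital-camera method above is a band ratio computed from the mean RGB values of a water-surface patch. The patch values below are invented, and the regression of such ratios against yellow substance or chlorophyll is calibrated empirically in the paper.

```python
import numpy as np

def band_ratios(rgb_image):
    """Mean R, G, B of an image patch, plus the blue/green and green/red
    ratios typically regressed against water-quality parameters."""
    mean = rgb_image.reshape(-1, 3).mean(axis=0)
    r, g, b = mean
    return {"R": r, "G": g, "B": b, "B/G": b / g, "G/R": g / r}

# Toy 2x2 patch of a greenish water surface (8-bit RGB values).
patch = np.array([[[30, 80, 60], [34, 84, 62]],
                  [[28, 78, 58], [32, 82, 64]]], dtype=float)
ratios = band_ratios(patch)
```

Using ratios rather than absolute RGB values makes the measurement less sensitive to overall illumination, which is what makes a consumer camera usable as a crude three-band radiometer.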

Article
A High Resolution Color Image Restoration Algorithm for Thin TOMBO Imaging Systems
by Amar A. El-Sallam and Farid Boussaid
Sensors 2009, 9(6), 4649-4668; https://0-doi-org.brum.beds.ac.uk/10.3390/s90604649 - 15 Jun 2009
Cited by 5 | Viewed by 13443
Abstract
In this paper, we present a blind image restoration algorithm to reconstruct a high resolution (HR) color image from multiple low resolution (LR), degraded and noisy images captured by thin (&lt;1 mm) TOMBO imaging systems. The proposed algorithm extends our grayscale algorithm reported in [1] to the case of color images. In this color extension, the Point Spread Function (PSF) of each captured image is assumed to differ from one color component to another and from one imaging unit to another. For the task of image restoration, we use all the spectral information in each captured image to restore each output pixel in the reconstructed HR image, i.e., we use the most efficient global category of point operations. First, the composite RGB color components of each captured image are extracted. A blind estimation technique is then applied to estimate the spectra of each color component and its associated blurring PSF. The estimation process is formulated so as to significantly minimize the interchannel cross-correlations and additive noise. The estimated PSFs, together with advanced interpolation techniques, are then combined to compensate for blur and reconstruct an HR color image of the original scene. Finally, a histogram normalization process adjusts the balance between image color components, brightness and contrast. Simulated and experimental results reveal that the proposed algorithm is capable of restoring HR color images from degraded, LR and noisy observations even at low Signal-to-Noise Energy Ratios (SNERs). The proposed algorithm uses the FFT and only two fundamental image restoration constraints, making it suitable for silicon integration with the TOMBO imager. Full article
(This article belongs to the Special Issue Image Sensors 2009)

Article
CMOS Image Sensor with a Built-in Lane Detector
by Pei-Yung Hsiao, Hsien-Chein Cheng, Shih-Shinh Huang and Li-Chen Fu
Sensors 2009, 9(3), 1722-1737; https://0-doi-org.brum.beds.ac.uk/10.3390/s90301722 - 12 Mar 2009
Cited by 4 | Viewed by 16388
Abstract
This work develops a new current-mode mixed-signal Complementary Metal-Oxide-Semiconductor (CMOS) imager, which can capture images and simultaneously produce vehicle lane maps. The adopted lane detection algorithm, which was modified to be compatible with hardware requirements, can achieve a high recognition rate of up to approximately 96% under various weather conditions. Instead of a Personal Computer (PC) based system or an embedded platform equipped with an expensive high-performance Reduced Instruction Set Computer (RISC) or Digital Signal Processor (DSP) chip, the proposed imager, without extra Analog-to-Digital Converter (ADC) circuits to transform signals, is a compact, lower-cost key-component chip. It is also an innovative component device that can be integrated into intelligent automotive lane departure systems. The chip size is 2,191.4 x 2,389.8 μm, and the package is a 40-pin Dual-In-line Package (DIP). The pixel cell size is 18.45 x 21.8 μm and the photodiode core size is 12.45 x 9.6 μm; the resulting fill factor is 29.7%. Full article
(This article belongs to the Special Issue Image Sensors 2009)

Article
1T Pixel Using Floating-Body MOSFET for CMOS Image Sensors
by Guo-Neng Lu, Arnaud Tournier, François Roy and Benoît Deschamps
Sensors 2009, 9(1), 131-147; https://0-doi-org.brum.beds.ac.uk/10.3390/s90100131 - 07 Jan 2009
Cited by 2 | Viewed by 11892
Abstract
We present a single-transistor pixel for CMOS image sensors (CIS). It is a floating-body MOSFET structure, which is used as both photo-sensing device and source-follower transistor, and can be controlled to store and evacuate charges. Our investigation of this 1T pixel structure includes modeling to obtain an analytical description of the conversion gain. Model validation has been done by comparing theoretical predictions and experimental results. In addition, the 1T pixel structure has been implemented in different configurations, including rectangular-gate and ring-gate designs, and with variations of the oxidation parameters of the fabrication process. The pixel characteristics are presented and discussed. Full article
(This article belongs to the Special Issue Image Sensors 2009)

Article
Sparse Detector Imaging Sensor with Two-Class Silhouette Classification
by David Russomanno, Srikant Chari and Carl Halford
Sensors 2008, 8(12), 7996-8015; https://0-doi-org.brum.beds.ac.uk/10.3390/s8127996 - 08 Dec 2008
Cited by 29 | Viewed by 12550
Abstract
This paper presents the design and test of a simple active near-infrared sparse detector imaging sensor. The prototype sensor is novel in that it can capture remarkable silhouettes or profiles of a wide variety of moving objects, including humans, animals, and vehicles, using a sparse detector array comprised of only sixteen sensing elements deployed in a vertical configuration. The prototype sensor was built to collect silhouettes for a variety of objects and to evaluate several algorithms for classifying the data obtained from the sensor into two classes: human versus non-human. Initial tests show that the classification of individually sensed objects into two classes can be achieved with accuracy greater than ninety-nine percent (99%) with a subset of the sixteen detectors, using a representative dataset consisting of 512 signatures. The prototype also includes a Web service interface so that the sensor can be tasked in a network-centric environment. The sensor appears to be a low-cost alternative to traditional, high-resolution focal plane array imaging sensors for some applications. After a power optimization study, appropriate packaging, and testing with more extensive datasets, the sensor may be a good candidate for deployment across vast geographic regions for a myriad of intelligent electronic fence and persistent surveillance applications, including perimeter security scenarios. Full article
(This article belongs to the Special Issue Image Sensors 2009)

Article
Pattern Recognition via PCNN and Tsallis Entropy
by YuDong Zhang and LeNan Wu
Sensors 2008, 8(11), 7518-7529; https://0-doi-org.brum.beds.ac.uk/10.3390/s8117518 - 25 Nov 2008
Cited by 53 | Viewed by 10347
Abstract
In this paper a novel feature extraction method for image processing via the PCNN and Tsallis entropy is presented. We describe the mathematical model of the PCNN and the basic concept of Tsallis entropy in order to find a recognition method for isolated objects. Experiments show that the novel feature is translation and scale independent, while rotation independence is somewhat weaker at the diagonal angles of 45° and 135°. Parameters for the application to face recognition are acquired by bacterial chemotaxis optimization (BCO), and the highest classification rate is 72.5%, which demonstrates acceptable performance and potential value. Full article
(This article belongs to the Special Issue Image Sensors 2009)

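The Tsallis entropy used as a feature above is a one-parameter generalization of Shannon entropy, S_q = (1 − Σᵢ pᵢ^q)/(q − 1), which recovers Shannon entropy in the limit q → 1. A minimal sketch of computing it over a normalized grey-level histogram follows; the tiny "image" is illustrative, and this is of course not the paper's full PCNN pipeline.

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i**q) / (q - 1) of a probability
    distribution p; approaches Shannon entropy (in nats) as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # zero-probability bins contribute nothing
    if abs(q - 1.0) < 1e-12:          # limiting case: Shannon entropy
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

# Illustrative use: entropy of the grey-level histogram of a small "image".
image = np.array([[0, 0, 1, 1],
                  [2, 2, 3, 3]])
hist = np.bincount(image.ravel(), minlength=4).astype(float)
p = hist / hist.sum()                 # uniform over 4 grey levels here
s2 = tsallis_entropy(p, q=2.0)        # 1 - 4 * (1/4)**2 = 0.75
```

In the PCNN setting, a distribution like `p` would instead be derived from the sequence of firing maps the network produces over its iterations, with q acting as a tunable non-extensivity parameter of the resulting feature.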
Review

Jump to: Research

4509 KiB  
Review
Toward 100 Mega-Frames per Second: Design of an Ultimate Ultra-High-Speed Image Sensor
by Dao Vu Truong Son, Takeharu Goji Etoh, Masatoshi Tanaka, Nguyen Hoang Dung, Vo Le Cuong, Kohsei Takehara, Toshiro Akino, Kenji Nishi, Hitoshi Aoki and Junichi Nakai
Sensors 2010, 10(1), 16-35; https://0-doi-org.brum.beds.ac.uk/10.3390/s100100016 - 24 Dec 2009
Cited by 28 | Viewed by 15564
Abstract
Our experience in the design of an ultra-high-speed image sensor targeting the theoretical maximum frame rate is summarized. The imager is the backside-illuminated in situ storage image sensor (BSI ISIS). It is confirmed that the critical factor limiting the highest frame rate is the signal electron transit time from the generation layer on the back side of each pixel to the input gate of the in situ storage area on the front side. The theoretical maximum frame rate is estimated at 100 Mega-frames per second (Mfps) by a transient simulation study. The sensor has a spatial resolution of 140,800 pixels, with 126 linear storage elements installed in each pixel. Very high sensitivity is ensured by the application of backside-illumination technology and cooling. The ultra-high frame rate is achieved by the in situ storage image sensor (ISIS) structure on the front side. In this paper, we summarize the technologies developed to achieve the theoretical maximum frame rate, including: (1) a special p-well design with triple injections to generate a smooth electric field from the backside towards the collection gate on the front side, resulting in a much shorter electron transit time; (2) a design technique to reduce RC delay by dedicating an extra metal layer exclusively to the electrodes responsible for ultra-high-speed image capture; and (3) a CCD-specific complementary on-chip inductance minimization technique using a pair of stacked differential bus lines. Full article
(This article belongs to the Special Issue Image Sensors 2009)


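The figures quoted in the abstract fix the timing budget directly: at the 100 Mfps target, each frame interval is 10 ns, the 126 in-pixel storage elements hold a burst of 1.26 µs, and continuous readout of all 140,800 pixels at that rate would demand on the order of 10^13 pixel values per second, which is why in situ storage is used instead of live readout. A quick check of this arithmetic (only the three numbers from the abstract go in; everything else is derived):

```python
# Back-of-envelope timing for the BSI ISIS figures quoted in the abstract.
frame_rate = 100e6                  # 100 Mega-frames per second (target)
frame_interval = 1.0 / frame_rate   # time budget per frame: 10 ns

storage_per_pixel = 126             # linear storage elements per pixel
record_length = storage_per_pixel * frame_interval  # burst duration: 1.26 us

pixels = 140_800                    # spatial resolution
continuous_rate = pixels * frame_rate  # pixel values/s if streamed out live
```

The 10 ns frame interval is also the scale against which the electron transit time must be compared, which is why the abstract identifies transit time as the critical limiting factor.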
482 KiB  
Review
CMOS Image Sensors for High Speed Applications
by Munir El-Desouki, M. Jamal Deen, Qiyin Fang, Louis Liu, Frances Tse and David Armstrong
Sensors 2009, 9(1), 430-444; https://0-doi-org.brum.beds.ac.uk/10.3390/s90100430 - 13 Jan 2009
Cited by 163 | Viewed by 25295
Abstract
Recent advances in deep-submicron CMOS technologies and improved pixel designs have enabled CMOS-based imagers to surpass charge-coupled device (CCD) imaging technology for mainstream applications. The parallel outputs that CMOS imagers can offer, together with complete camera-on-a-chip solutions made possible by fabrication in standard CMOS technologies, result in compelling advantages in speed and system throughput. Since there is a practical limit on the minimum pixel size (4–5 μm) due to limitations in the optics, CMOS technology scaling allows an increased number of transistors to be integrated into the pixel to improve both detection and signal processing. Such smart pixels truly show the potential of CMOS technology for imaging applications, allowing CMOS imagers to achieve the image quality and global-shuttering performance necessary to meet the demands of ultrahigh-speed applications. In this paper, a review of CMOS-based high-speed imager design is presented, and the various implementations that target ultrahigh-speed imaging are described. This work also discusses the design, layout, and simulation results of an ultrahigh-acquisition-rate CMOS active-pixel sensor imager that can capture 8 frames at a rate of more than a billion frames per second (fps). Full article
(This article belongs to the Special Issue Image Sensors 2009)

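For the burst-mode imager described above, the observation window follows directly from the quoted figures: 8 frames at a billion fps span only 8 ns of real time, which is why such sensors capture short bursts into on-chip storage and are triggered on the event of interest rather than streaming continuously. A simple sanity check, taking the rate at exactly one billion fps for illustration (the abstract only states "more than" a billion):

```python
# Burst-capture window implied by the abstract's figures.
frames_in_burst = 8
frame_rate = 1e9                              # one billion frames per second
burst_window = frames_in_burst / frame_rate   # total observed time: 8 ns

# At rates above 1 Gfps the window is shorter still, so trigger timing
# must be precise enough to place the event inside the burst.
```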