Article

Wearable Structured Light System in Non-Rigid Configuration

by Carmen Paniagua 1,*, Gonzalo López-Nicolás 2 and José Jesús Guerrero 2
1 Instituto Tecnológico de Aragón, Zaragoza, Spain
2 Instituto de Investigación en Ingeniería de Aragón I3A, University of Zaragoza, Zaragoza, Spain
* Author to whom correspondence should be addressed.
Submission received: 11 March 2016 / Revised: 22 April 2016 / Accepted: 3 May 2016 / Published: 9 May 2016

Abstract

Traditionally, structured light methods have been studied in rigid configurations. In these configurations, the position and orientation between the light emitter and the camera are fixed and known beforehand. In this paper, we break with this rigidness and present a new structured light system in non-rigid configuration. This system is composed of a wearable standard perspective camera and a simple laser emitter. Our non-rigid configuration permits free motion of the light emitter with respect to the camera. The point-based pattern emitted by the laser allows us to easily establish correspondences between the image from the camera and a virtual one generated from the light emitter. Using these correspondences, our method computes the rotation and translation (up to scale) of the planes of the scene onto which the point pattern is projected and reconstructs them. This constitutes a very useful tool for navigation applications in indoor environments, which are mainly composed of planar surfaces.

1. Introduction

One of the best-known active methods to extract 3D information from a scene is structured light [1]. In comparison with passive methods, which are based on the extraction of features from textured images and subsequent triangulations [2], structured light can be used with non-textured images in which few features are present.
Structured light systems are formed by a camera and a light emitter which projects a pattern on the scene [3,4,5,6]. To extract 3D information, structured light systems extract the distorted patterns projected on the scene from the image observed by the camera. Traditionally, structured light has been studied in rigid configurations. In these configurations, the camera and the light emitter are fixed and their relative pose is known. Recently, devices such as Kinect or Asus Pro Live have revolutionized this computer vision field. These devices are structured light systems whose main feature is that they capture color and depth information of the scene simultaneously. Many authors have developed applications using these sensors in different fields such as interactive displays [7], robot guidance [8] or gesture recognition [9]. However, both Kinect and Asus Pro Live are structured light systems in a rigid configuration, since the camera and the projector are fixed and their intrinsic and extrinsic calibrations are known a priori. Breaking with some of this rigidness, a few semi-rigid configurations have been proposed in the literature. Semi-rigid configurations are used in robotic systems in which the light emitter or the camera is mounted on a robotic arm [10,11]. These systems provide more flexibility, but the motion between camera and emitter is still limited and a calibration is required.
Matching the synthetic features generated by the projector to those in the image is a difficult task. Traditionally, coded-light projectors have been used to solve this problem. Light pattern codification methods can be classified into two groups: temporal coding [12,13] and spatial coding [14,15]. Temporal coding methods use time-varying patterns to compute depth, whereas spatial coding methods use space-varying patterns. Stripe patterns [16] and grid patterns [17] are examples of methods that project high-frequency information and use a phase unwrapping or line counting step to track depth changes on a surface. Coded-light projectors are generally expensive and heavy devices.
In this paper, we break with the rigidness of traditional structured light systems by exploring a new configuration, which we refer to as a non-rigid configuration. In our approach we use a wearable camera and a hand-held light emitter with free motion with respect to the camera. Both camera and light emitter are low-cost. In Figure 1 we show both devices and the configuration of the system. To the best of our knowledge, only two previous works have considered a structured light system in non-rigid configuration. In [18], a wearable omnidirectional camera and a conic pattern light emitter are used, and in [19] a scanning technique using a hand-held camera and a hand-held projector is presented. Our system differs from [18] and [19] in the use of a traditional perspective camera and a simple light emitter instead of expensive and heavy omnidirectional cameras or projectors with complex coding patterns.
Hence, in this work we present a novel, wearable, wide-baseline and low-cost structured light system in non-rigid configuration. Our proposal works in environments where the scene is formed by more than one planar surface. This assumption is reasonable in human-made environments, in which the majority of the objects of interest are mainly composed of planes. We use the image of the light pattern acquired by the camera and a virtual image generated from the light emitter to perform the reconstruction of the scene. From this reconstruction we compute the orientation and translation of the planar surfaces where the laser pattern has been projected. The 3D reconstruction is obtained up to a scale factor, but with a wide baseline because of the uncalibrated configuration proposed.
This work is a step towards the development of a human navigation assistance tool for visually handicapped people. The development of new technologies in recent years has favored the appearance of powerful mobile devices that make the everyday life of people easier, but these systems are usually developed considering people with normal abilities. However, they also have the potential to help people with special needs. For this reason, the long-term objective of this research is to provide a visually handicapped person with a low-cost wearable system that helps him/her while moving inside a building. A wearable system must be flexible, light and affordable, and must give the person the freedom to explore the environment without restrictions. Our system, in which the camera hangs from the person and the laser can be hand-held, can provide more flexibility and information than the common white cane for blind people. In order to help the person obtain the necessary information to move inside unknown indoor environments, which are mainly composed of planar surfaces, the system must be able to detect these planes and recover relevant information about the scene.
The remaining sections are organized as follows. In Section 2 the problem is formulated. In Section 3 the method to obtain the 3D information of a planar scene is presented. In Section 4 several simulations and experiments are shown. Finally, conclusions and remarks are given in Section 5.

2. Problem Definition

The problem that we tackle in this paper is the reconstruction of two or more planar surfaces onto which the point-based pattern of the laser is projected and seen by the camera. Our approach uses two image projections: one image corresponds to the perspective camera and the second is a virtual one obtained from the light emitter. To compute the reconstruction, we need corresponding points in the two images from which to obtain the rotation and translation between camera and laser. To segment the 3D structure into planar surfaces we use homographies, and to compute the rotation and translation we use an algorithm based on homography decomposition. Therefore, in this section we present the camera and laser models and the concept of homography that will be used throughout the paper.

2.1. Camera Model

We use a standard perspective camera assuming a pin-hole projection model. A point in space with coordinates X = [X, Y, Z]^T is mapped to the point (u, v) on the image plane where the line joining X to the center of projection meets the image plane [2]. This projection is encapsulated in a projection matrix P ∈ ℝ^{3×4}, composed of an intrinsic parameter matrix and an extrinsic one. The intrinsic parameters are the focal distances (f_x, f_y) and the principal point coordinates (C_x, C_y). The extrinsic parameters are the rotation R and translation t between the world and camera frames. Equation (1) defines this projection mathematically, where the parameter s represents a scale factor.
$$\begin{bmatrix} u \\ v \\ s \end{bmatrix} = \begin{bmatrix} f_x & 0 & C_x \\ 0 & f_y & C_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \mathbf{R} & \mathbf{t} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{1}$$
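To make the projection model concrete, the following minimal NumPy sketch applies Equation (1); the intrinsic values and the pose used here are illustrative placeholders, not the calibration of our system.

```python
import numpy as np

def project(X, K, R, t):
    """Project a 3D point X (world frame) to pixel coordinates (u, v) via Equation (1)."""
    P = K @ np.hstack([R, t.reshape(3, 1)])   # 3x4 projection matrix
    x = P @ np.append(X, 1.0)                 # homogeneous image point [u, v, s]
    return x[:2] / x[2]                       # divide by the scale factor s

# Illustrative intrinsics: f_x = f_y = 600 px, principal point at (320, 240).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
print(project(np.array([0.2, 0.1, 2.0]), K, np.eye(3), np.zeros(3)))  # -> [380. 270.]
```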

2.2. Laser Model

Our light emitter is a laser that projects a point-based pattern without any coding and generates synthetic features on the scene. In Figure 2 we show an image of the pattern. The projector can be modeled as a pin-hole camera, since it can be conceptually regarded as an inverse camera projecting rays onto the scene, with its z-axis pointing in the direction of the laser projection. Since we consider the laser as an inverse pin-hole camera, it is possible to create a virtual perspective image from the projected pattern. Virtual images only require projective coordinates, therefore their image coordinates are defined up to a scale. We propose to mesh the virtual image of the point-based pattern using a Delaunay triangulation [20] to facilitate some operations in our method. This meshed virtual image will be used subsequently to obtain the planes of the scene.
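As an illustration, the virtual image can be meshed with SciPy's Delaunay triangulation; the 11 × 11 grid below is only a stand-in for the actual pattern geometry, not the real projector layout.

```python
import numpy as np
from scipy.spatial import Delaunay

# Stand-in virtual image: one 2D point per laser ray, here an 11 x 11 regular grid.
points = np.stack(np.meshgrid(np.arange(11), np.arange(11)), axis=-1).reshape(-1, 2)

mesh = Delaunay(points.astype(float))
print(mesh.simplices.shape)   # (n_triangles, 3): vertex indices of each mesh triangle
# mesh.vertex_neighbor_vertices gives, for each point, its neighbours in the mesh.
```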

2.3. Homography

A planar surface in a 3D scene induces a projective transformation, called homography, that relates the projections in two views of any point belonging to the plane. As a mapping between a pair of image points in homogeneous coordinates, the planar homography is expressed by a matrix H ∈ ℝ^{3×3}. Since it is a general projective transformation defined up to scale (due to the scale ambiguity inherent to perspective projection), the homography matrix has 8 degrees of freedom. The homography mapping of one scene point provides two independent equations. Therefore, from a minimum of four pairs of corresponding points the homography can be estimated linearly by solving the resulting system of equations [2]. Homographies have been extensively studied and different methods for their computation are available in the literature. They can be computed using lines instead of points in the presence of partial occlusions [21], and a second homography can be calculated using only three matches in an image pair [22].
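A minimal sketch of this linear estimation (a basic direct linear transformation without the coordinate normalization usually added for conditioning; the function name is ours):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate H (3x3, up to scale) such that dst ~ H src from N >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])   # two equations per correspondence
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the right null vector of A: the singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / np.linalg.norm(H)   # fix the free scale
```

In practice a robust variant (e.g., with outlier rejection) and point normalization improve stability, but this captures the linear core of the estimation.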

3. Scene Reconstruction

In this section we propose an algorithm to reconstruct the 3D information of the planar surfaces that appear on the scene as well as the pose of the camera and the laser. In the first place, we extract the first plane of the scene using the concept of homography introduced before. Thanks to this extraction, we are able to obtain two solutions for the rotation and translation between the camera and the laser. Then, we segment the second and subsequent planes. This allows us to select the correct solution. Finally, we can compute the pose of the planes with respect to the camera using the previous information. We summarize this procedure in Algorithm 1.
Algorithm 1 Reconstruction of the scene
1: Extraction of the first plane (Section 3.1).
2: Computation of two possible solutions for translation and rotation between camera and laser (Section 3.2).
3: Segmentation of the second and subsequent planes and selection of the appropriate solution (Section 3.3).
4: Calculation of the pose of all planes with respect to the camera (Section 3.4).

3.1. First Plane Extraction

We extract the first plane of the scene by calculating a homography. To find the homography that best fits the plane, we have to find all the points of the pattern that have been projected onto it and their corresponding matches in the virtual image. The following steps summarize the procedure to find such points. Recall that we compute a Delaunay triangulation with the points seen in both images.
  • Initial matching: four matches between the two images have to be established to initialize the homography. To do this, we proceed as follows. The first one is the central point of the pattern, which we assume is coded in both images. In practice, this central point is easily recognizable, as we discuss in the real experiments. The following ones are the points nearest to the horizontal and vertical lines defined by the previous ones, counter-clockwise, i.e., the second is the point nearest to the horizontal line defined by the first one; the third, to the vertical line defined by the second one; and the fourth, to the horizontal line defined by the third one. An example of this initial matching can be seen in Figure 3. Any other selection can be made provided that the same relation between the points is maintained in both images.
  • Neighbouring points search: we search for the points that are neighbours of the points already matched. These are the points which form a triangle in the mesh with the matched points in both images. In Figure 3b the neighbouring points in the laser image are depicted in purple.
  • Expansion of the homography: for each neighbouring point in the camera image, we apply the transformation defined by the calculated homography and check whether that point has a valid correspondence in the laser image. At this point, the algorithm detects whether the four initial points do not belong to the same plane, because in that case the calculated homography cannot be expanded. If so, four different points have to be selected following the same order and relation in both images. This can be done since the central point of the pattern is known.
  • Refinement of the homography: we compute a new homography with all the points which found valid correspondences in the previous step.
The last three steps are repeated until no neighbouring point finds a correspondence and, at that point, the segmentation of the first plane is completed.
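The expansion and refinement loop could be sketched as follows; `neighbours_fn` is a hypothetical helper returning mesh-adjacent (camera index, laser index) candidates, and `estimate_homography` is the DLT sketch from Section 2.3, so this is only our schematic reading of the procedure, not the exact implementation.

```python
import numpy as np

def grow_plane(H, matched, neighbours_fn, cam_pts, las_pts, tol=2.0):
    """Expand H over the Delaunay mesh; returns the matches lying on the plane and the refined H.
    matched: dict camera_index -> laser_index (starts with the 4 initial correspondences)."""
    while True:
        added = False
        for ci, li in neighbours_fn(matched):             # candidates adjacent to matched points
            if ci in matched:
                continue
            q = H @ np.append(cam_pts[ci], 1.0)           # transfer the camera point with H
            q = q[:2] / q[2]
            if np.linalg.norm(q - las_pts[li]) < tol:     # consistent: the point lies on the plane
                matched[ci] = li
                added = True
        if not added:
            return matched, H                             # no neighbour accepted: plane segmented
        src = [cam_pts[i] for i in matched]
        dst = [las_pts[matched[i]] for i in matched]
        H = estimate_homography(src, dst)                 # refinement with all valid matches
```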

3.2. Compute Rotation and Translation between Camera and Laser

The camera motion can be computed from the homography H obtained after the first plane extraction when the camera and laser are calibrated [23]. If the homography is calculated between image points and is expressed in pixels, we obtain an uncalibrated homography H_u. However, if the intrinsic calibration parameters of camera and laser are known, a calibrated homography H_c can be computed as follows:
$$\mathbf{H}_c = \mathbf{K}_c^{-1}\,\mathbf{H}_u\,\mathbf{K}_l \tag{2}$$
where K_c and K_l are the calibration matrices of the camera and laser, respectively. This calibrated homography matrix encapsulates the relative location of the views and the unit normal of the scene plane in the following way:
$$\mathbf{H}_c = \lambda\,(\mathbf{R} + \mathbf{t}\,\mathbf{n}^T) \tag{3}$$
where R ∈ ℝ^{3×3} is the rotation matrix, t ∈ ℝ^3 is the translation vector (scaled by the distance to the plane) between camera and laser, and n ∈ ℝ^3 is the unit normal of the plane. This homography is defined up to a scalar factor λ. In order to extract the laser pose from the homography matrix, it is necessary to compute the Euclidean homography H_e and to decompose it [24]. When computing the homography from image feature correspondences in homogeneous calibrated coordinates, we obtain a calibrated homography according to Equation (3).
As shown in [25], a 3 × 3 Euclidean homography matrix has its second largest singular value equal to one. Multiplying a matrix by a scale factor causes its singular values to be multiplied by the same factor. Then, for a given calibrated homography, we can obtain a unique Euclidean homography matrix (up to sign) by dividing the computed homography matrix by its second largest singular value. The sign ambiguity can be solved by employing the positive depth constraint. When computing the laser pose, R and t, from the Euclidean homography, two physically valid solutions are obtained. A complete procedure for the computation of the laser pose from a calibrated homography is outlined in Algorithm 2.
Algorithm 2 Computation of laser pose from homography
1: Compute $\mathbf{H}_c = \mathbf{K}_c^{-1}\,\mathbf{H}_u\,\mathbf{K}_l$
2: Compute the SVD of $\mathbf{H}_c$ such that $\mathbf{H}_c \propto \mathbf{H}_e = \mathbf{U}\,\mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)\,\mathbf{V}^T$
3: Let $\alpha = \sqrt{\frac{\lambda_3^2 - \lambda_2^2}{\lambda_3^2 - \lambda_1^2}}$ and $\beta = \sqrt{\frac{\lambda_2^2 - \lambda_1^2}{\lambda_3^2 - \lambda_1^2}}$
4: Writing $\mathbf{V} = [\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3]$, compute $\mathbf{v}_v = \alpha\,\mathbf{v}_1 \pm \beta\,\mathbf{v}_3$
5: Compute $\mathbf{R} = \left[\mathbf{H}_e \mathbf{v}_v,\ \mathbf{H}_e \mathbf{v}_2,\ \mathbf{H}_e \mathbf{v}_v \times \mathbf{H}_e \mathbf{v}_2\right] \left[\mathbf{v}_v,\ \mathbf{v}_2,\ \mathbf{v}_v \times \mathbf{v}_2\right]^T$
6: Compute $\mathbf{t} = \mathbf{H}_e \mathbf{n} - \mathbf{R}\,\mathbf{n}$ with $\mathbf{n} = \mathbf{v}_v \times \mathbf{v}_2$
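The NumPy sketch below implements our reading of Algorithm 2; the ± sign of step 4 yields the two physically valid (R, t, n) candidates, and the sign of H_e (resolved in the text by the positive depth constraint) is not handled here.

```python
import numpy as np

def decompose_homography(Hu, Kc, Kl):
    """Return the two candidate (R, t, n) solutions encoded by an uncalibrated homography Hu."""
    Hc = np.linalg.inv(Kc) @ Hu @ Kl                       # step 1: calibrated homography (Eq. 2)
    U, S, Vt = np.linalg.svd(Hc)                           # singular values S[0] >= S[1] >= S[2]
    He = Hc / S[1]                                         # Euclidean homography: 2nd singular value -> 1
    l1, l2, l3 = S ** 2                                    # squared singular values (lambda_i^2)
    V = Vt.T
    alpha = np.sqrt((l3 - l2) / (l3 - l1))                 # step 3 (requires distinct singular values)
    beta = np.sqrt((l2 - l1) / (l3 - l1))
    solutions = []
    for sign in (1.0, -1.0):                               # step 4: the +/- gives the two solutions
        vv = alpha * V[:, 0] + sign * beta * V[:, 2]
        v2 = V[:, 1]
        A = np.column_stack([He @ vv, He @ v2, np.cross(He @ vv, He @ v2)])
        B = np.column_stack([vv, v2, np.cross(vv, v2)])
        R = A @ B.T                                        # step 5
        n = np.cross(vv, v2)
        t = He @ n - R @ n                                 # step 6 (translation scaled by plane distance)
        solutions.append((R, t, n))
    return solutions
```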

3.3. Segmentation of Second and Subsequent Planes

The segmentation of the second plane of the scene is performed in a similar way to the first one, by calculating a homography. To do so, we need to establish four correspondences between both meshes to correctly initialize this homography (Figure 4a). We select two initial points as follows: we compute four straight lines from the first four initial points of the first plane and define four search areas: top, bottom, left and right. Among the points that have not been matched yet, we select the two closest to the straight lines that are situated in the same area (Figure 4b).
Due to the deformation of the pattern when it is projected on the second plane, there are several options to complete the initialization, so it cannot be carried out as for the first plane. For this reason, for each initialization hypothesis we compute two homographies with rotation and translation fixed, each one corresponding to one of the solutions obtained in the previous subsection for the rotation and translation between camera and laser. Next we introduce the calculation of these fixed-pose homographies.

3.3.1. Fixed-Pose Homography

Let p^A and p^B be a pair of selected corresponding points from the calibrated images; R and t, the rotation and translation between camera and laser; n and d, the normal and the distance to the plane; s, a scale factor; and H, the homography matrix. We have
$$s\,\mathbf{p}^B = \mathbf{H}\,\mathbf{p}^A = \left(\mathbf{R} + \frac{\mathbf{t}\,\mathbf{n}^T}{d}\right)\mathbf{p}^A \tag{4}$$
From Equation (4) we can formulate an equation system as follows:
$$\begin{bmatrix} t_z p_x^A p_x^B - t_x p_x^A & t_z p_y^A p_x^B - t_x p_y^A & t_z p_x^B - t_x \\ t_z p_x^A p_y^B - t_y p_x^A & t_z p_y^A p_y^B - t_y p_y^A & t_z p_y^B - t_y \end{bmatrix} \begin{bmatrix} n_x/d \\ n_y/d \\ n_z/d \end{bmatrix} = \begin{bmatrix} r_{11} p_x^A + r_{12} p_y^A + r_{13} - r_{31} p_x^A p_x^B - r_{32} p_y^A p_x^B - r_{33} p_x^B \\ r_{21} p_x^A + r_{22} p_y^A + r_{23} - r_{31} p_x^A p_y^B - r_{32} p_y^A p_y^B - r_{33} p_y^B \end{bmatrix} \tag{5}$$
where
$$\mathbf{R} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}, \quad \mathbf{t} = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}, \quad \mathbf{p}^A = \begin{bmatrix} p_x^A \\ p_y^A \end{bmatrix}, \quad \mathbf{p}^B = \begin{bmatrix} p_x^B \\ p_y^B \end{bmatrix}, \quad \frac{\mathbf{n}}{d} = \begin{bmatrix} n_x/d \\ n_y/d \\ n_z/d \end{bmatrix}$$
According to Equation (5), the two pairs of corresponding points that we have already obtained provide enough equations to solve the system. However, the rank of the resulting system matrix is only two, so the system is underdetermined. From a geometrical point of view this is correct, since two points only define a pencil of planes. Therefore, we have to include an additional correspondence from the initialization hypotheses that is not collinear with the other two points. Using at least three pairs of corresponding points to solve Equation (5), the candidate normal to the second plane and the inverse distance to the plane are obtained for each hypothesis. Using the calculated scaled normal, we can finally compute each candidate fixed-pose homography according to Equation (3).
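A sketch of this step under our assumptions (plain least squares on the stacked rows of Equation (5); variable names are ours): given the fixed R and t and at least three non-collinear correspondences, it recovers the scaled normal n/d and rebuilds the candidate homography of Equation (3).

```python
import numpy as np

def fixed_pose_homography(R, t, pts_A, pts_B):
    """pts_A, pts_B: (N, 2) calibrated corresponding points, N >= 3 and not all collinear."""
    tx, ty, tz = t
    A, b = [], []
    for (xa, ya), (xb, yb) in zip(pts_A, pts_B):
        # Two rows of Equation (5) per correspondence, unknowns (n_x/d, n_y/d, n_z/d).
        A.append([tz * xa * xb - tx * xa, tz * ya * xb - tx * ya, tz * xb - tx])
        A.append([tz * xa * yb - ty * xa, tz * ya * yb - ty * ya, tz * yb - ty])
        b.append(R[0, 0] * xa + R[0, 1] * ya + R[0, 2]
                 - R[2, 0] * xa * xb - R[2, 1] * ya * xb - R[2, 2] * xb)
        b.append(R[1, 0] * xa + R[1, 1] * ya + R[1, 2]
                 - R[2, 0] * xa * yb - R[2, 1] * ya * yb - R[2, 2] * yb)
    n_over_d, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    H = R + np.outer(t, n_over_d)      # fixed-pose homography of Equation (3), up to scale
    return H, n_over_d
```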
Eventually, only the correct initialization hypothesis together with the correct rotation and translation solution allows us to calculate a homography which expands along the second plane. This allows us to resolve the twofold ambiguity in the rotation and translation.
To segment the subsequent planes, the relative pose between camera and laser is already known thanks to the segmentation of the second plane. In the same way, a homography is calculated for each initialization hypothesis, and we can find the correct homography because only the one associated with the right initialization expands along the plane. Note that this process is valid for every subsequent plane.

3.4. Planes Reconstruction

The last step is to compute rotation and translation of all the planes with respect to the camera. Assuming that the calibration matrices of the camera and the laser are known and with the rotation and translation between them calculated in the previous step, the projection matrices for both are computed. With the projection matrices and the correspondences established by the homographies, a triangulation process is used to compute the final reconstruction up to a scale factor.
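As an illustration of this last step, a standard linear (DLT) triangulation can be used; here P_cam and P_las would be the 3 × 4 projection matrices built from the known intrinsics and the recovered relative pose, and the helper below is our generic sketch rather than the exact implementation used in this work.

```python
import numpy as np

def triangulate_point(P_cam, P_las, x_cam, x_las):
    """Linear triangulation of one correspondence; returns a 3D point up to the global scale."""
    A = np.vstack([
        x_cam[0] * P_cam[2] - P_cam[0],   # each view contributes two linear constraints
        x_cam[1] * P_cam[2] - P_cam[1],
        x_las[0] * P_las[2] - P_las[0],
        x_las[1] * P_las[2] - P_las[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                   # dehomogenize the homogeneous solution
```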

4. Experiments

To verify the validity of the proposed method, we performed different experiments using simulated data and real images acquired with our non-rigid structured light system with the laser in hand. First, we performed a sensitivity analysis with synthetic data, adding Gaussian noise to the image coordinates of the points and studying its influence. Second, we tested the reconstruction of a scene both in simulation and with the real system.

4.1. Simulations with Synthetic Data

We simulate a laser of 121 points with a field of view of 53 degrees. The camera has a resolution of 640 × 480 pixels with a field of view of 50 degrees horizontally and 39 degrees vertically. In order to evaluate the results of the simulations, we define the laser translation error as the angle between the estimated and real translation vectors. For rotation, we compute a rotation error matrix, convert it to the axis-angle representation and use the angle as a single measure of the laser rotation error.
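A small sketch of the two error measures as described above (function names are ours): the translation error is the angle between the estimated and real translation vectors, and the rotation error is the rotation angle of the residual rotation matrix in axis-angle form.

```python
import numpy as np

def translation_error_deg(t_est, t_real):
    """Angle (degrees) between the estimated and real translation directions."""
    c = np.dot(t_est, t_real) / (np.linalg.norm(t_est) * np.linalg.norm(t_real))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def rotation_error_deg(R_est, R_real):
    """Angle (degrees) of the rotation error matrix R_est @ R_real.T in axis-angle form."""
    c = (np.trace(R_est @ R_real.T) - 1.0) / 2.0   # cosine of the residual rotation angle
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```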

4.1.1. Sensitivity Analysis

First, we present the evolution of the rotation and translation errors depending on the Gaussian noise introduced in the image points. We tested several configurations of the system in which both floor and wall planes were located one meter away from the camera. In particular, we performed two scans: a horizontal scan in which we varied the azimuth angle of the light emitter from −10° to 10° in intervals of five degrees with a constant elevation of −60° (Figure 5), and a vertical scan with elevation angles of the light emitter from −65° to −45° in intervals of five degrees with a constant azimuth of zero degrees (Figure 6).
In all the resulting configurations of these scans, we added Gaussian noise of zero mean and standard deviation from 0 to 2 pixels, in intervals of 0.1 pixels, to the image point coordinates. The 121 points of the simulated pattern were seen in all the images. Results correspond to the mean errors of 1000 simulations for each configuration and each noise level. The results for the horizontal and vertical scans are shown in Figure 5 and Figure 6, respectively. As expected, both rotation and translation errors grow with the image noise. However, the maximum errors in both variables are reasonable for the studied noise levels.

4.1.2. Reconstruction of a Simulated Scene

Next, we applied the proposed method to synthetic environments and tested our algorithm in three different scenes: the first was composed of two orthogonal planes, the second contained three orthogonal planes and the third was formed by two non-orthogonal planes. We computed the orientation and translation (up to scale) of the planes, which were located one meter away from the camera. The actual value of the translation between camera and laser was (0.3, −0.3, 0.4) meters and that of the rotation, (−45, −10, 3) degrees.
First, we show in Figure 7 the results of the reconstruction of the scene composed of two orthogonal planes. We added noise of varying mean to the image points to study its effect on the results. It can be seen that the obtained reconstruction of both planes is perfect when no noise is added to the image point coordinates (Figure 7a) and that the reconstruction deteriorates as more image noise is applied (Figure 7b–d).
Second, we simulated three orthogonal planes and tested the algorithm (Figure 8). When working with three planes, the number of points projected onto each one is smaller than with two, and the fewer points per plane, the less accurate the reconstruction. Although the result is therefore not as accurate as in the two-plane case, it shows that the three planes are reconstructed correctly. We also show in Figure 8 the effect of noise on the reconstruction of three planes. As in the case of two planes, the reconstruction deteriorates as the noise increases.
Finally, we performed experiments to confirm that the method is not affected by scenes in which the planes are not orthogonal. We show in Figure 9 the reconstruction of two scenes in which the angle between the planes is 75 and 120 degrees, respectively. As can be seen, the angle between the planes does not influence the reconstruction. This was expected, since we do not assume perpendicularity at any point.

4.2. Real Experiments

Real experiments were performed using our wearable non-rigid structured light system, which is composed of a perspective camera held on a belt and a low-cost laser in hand projecting a point-based pattern. The algorithm needs an intrinsic projection model of both camera and laser, which can be computed separately. For the camera we use the open-source software [26], which follows a standard calibration process. On the other hand, the method does not need an accurate calibration of the laser. A simple model assuming equidistant angles between the projecting rays, with the principal point in the image center, has been considered. The angles between laser rays can be computed from the distances between points, which can be easily measured by projecting onto a fronto-parallel plane from a known distance. We also need to define a common reference for the laser and camera meshes. We use the central point of the pattern as the origin of that reference. This point can be easily recognized. At this stage, we are using a red laser pointer coupled to the laser to mark the central point of the pattern. A more elaborate solution to obtain a common reference could be to use a non-symmetric pattern, a coded-light projector or even a projector specifically built for this application. Nevertheless, these alternatives would increase not only the cost of the system but also, in the case of the coded-light projector, its weight.
To extract the point pattern and the red point, we used the HSI (Hue, Saturation, Intensity) color space, since it is compatible with the vision physiology of human eyes [27] and its three components are independent. Using different thresholds on the H and S channels, we binarize the image, and after smoothing, filtering and denoising operations, the pattern and the red point can be extracted.
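An illustrative OpenCV sketch of this kind of extraction (using HSV as a readily available stand-in for HSI; the threshold values and morphology settings are placeholders, not the ones used in our experiments):

```python
import cv2
import numpy as np

def extract_pattern_points(bgr, lower=(40, 80, 80), upper=(90, 255, 255)):
    """Threshold hue/saturation, clean the mask and return one centroid per detected dot."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))  # denoise
    n, _, _, centroids = cv2.connectedComponentsWithStats(mask)
    return centroids[1:]   # drop the background component; one (x, y) per pattern point
```

The red central point can be isolated in the same way with a hue range around red.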
Once the light pattern and its central point are extracted from the image, we apply our method to obtain the planes of the scene. The results for several different scenes are shown in Figure 10. To evaluate the accuracy of the reconstructed planes, we compute the angle between their normal vectors. The results are shown in Table 1. Although the reconstruction is not perfect due to the noise in the images, the reconstructed angles between the planes are close to the actual value of 90 degrees.
We consider that these errors are acceptable since the goal is not to obtain accurate measurements of planes of a scene but to extract useful information to be interpreted by the person.

5. Conclusions

In this paper we have presented a new structured light system in non-rigid configuration which can be used as a personal assistance system. To recover the planes of a scene, this system only requires a single perspective image in which the light pattern is present. To extract the planes, we have proposed the use of homographies, and for the rotation and translation calculation we have presented a method based on their decomposition in which the twofold solution ambiguity is resolved. Our approach has shown good results on simulated and real data. In future work it would be interesting to develop a new system in which the intrinsic parameters of both camera and laser are not known. Related to this, we also expect to improve the current laser calibration. Another interesting research line is the improvement of the image processing in order to deal with more general illumination conditions. Finally, an open issue is to extend our approach to the case of non-planar scenes.

Acknowledgments

The work has been supported by the Ministerio de Economía y Competitividad of Spain (projects DPI2014-61792-EXP and DPI2015-65962-R).

Author Contributions

This work was elaborated by C. Paniagua under the supervision of G. López-Nicolás and J. J. Guerrero.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Posdamer, J.L.; Altschuler, M.D. Surface measurement by space-encoded projected beam systems. Comput. Graph. Image Process. 1982, 18, 1–17. [Google Scholar] [CrossRef]
  2. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  3. Wei, Z.; Zhang, G.; Xie, M. Calibration method for line structured light vision sensor based on vanish points and lines. In Proceedings of the International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 794–797.
  4. Bradley, B.D.; Chan, A.D.; Hayes, J.D. Calibration of a Simple, Low Cost, 3D Laser Light-Sectioning Scanner System for Biomedical purposes. Int. J. Adv. Media Commun. 2009, 3, 35–54. [Google Scholar] [CrossRef]
  5. Kim, D.; Kim, H.; Lee, S. Wide-Angle Laser Structured Light System Calibration with a Planar Object. In Proceedings of the International Conference on Control, Automation and Systems in KINTEX, Gyeonggi-do, Korea, 27–30 October 2010; pp. 1879–1882.
  6. Park, J.B.; Lee, S.H.; Lee, I.J. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System. Sensors 2009, 9, 7550–7565. [Google Scholar] [CrossRef] [PubMed]
  7. Zhang, S.; He, W.; Yu, Q.; Zheng, X. Low-cost interactive whiteboard using the Kinect. In Proceedings of the International Conference on Image Analysis and Signal Processing, Hangzhou, China, 9–11 November 2012; pp. 1–5.
  8. Ramey, A.; González-Pacheco, V.; Salichs, M.A. Integration of a low-cost RGB-D sensor in a social robot for gesture recognition. In Proceedings of the International Conference on Human-Robot Interaction, Lausanne, Switzerland, 6–9 March 2011; pp. 229–230.
  9. Palacios, J.M.; Sagüés, C.; Montijano, E.; Llorente, S. Human-computer interaction based on hand gestures using RGB-D sensors. Sensors 2013, 13, 11842–11860. [Google Scholar] [CrossRef] [PubMed]
  10. Chen, C.H.; Kak, A.C. Modeling and Calibration of a Structured Light Scanner for 3-D Robot Vision. In Proceedings of the International Conference on Robotics and Automation, Raleigh, NC, USA, March 1987; Volume 4, pp. 807–815.
  11. Hu, J.S.; Chang, Y.J. Calibration of an Eye-to-Hand System Using a Laser Pointer on Hand and Planar Constraints. In Proceedings of the IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 982–987.
  12. Saint-Marc, P.; Jezouin, J.L.; Medioni, G. A versatile PC-based range finding system. Trans. Robot. Autom. 1991, 7, 250–256. [Google Scholar] [CrossRef]
  13. Agin, G.J.; Binford, T.O. Computer description of curved objects. Trans. Comput. 1976, 100, 439–449. [Google Scholar] [CrossRef]
  14. Zhang, L.; Curless, B.; Seitz, S.M. Rapid shape acquisition using color structured light and multi-pass dynamic programming. In Proceedings of the International Symposium on 3D Data Processing Visualization and Transmission, Padova, Italy, 19–21 June 2002; pp. 24–36.
  15. Je, C.; Lee, S.W.; Park, R.H. High-contrast color-stripe pattern for rapid structured-light range imaging. In Computer Vision-ECCV; Springer-Verlag: Berlin, Germany, 2004; pp. 95–107. [Google Scholar]
  16. Boyer, K.L.; Kak, A.C. Color-encoded structured light for rapid active ranging. Trans. Pattern Anal. Mach. Intell. 1987, 9, 14–28. [Google Scholar] [CrossRef]
  17. Proesmans, M.; Van Gool, L.; Oosterlinck, A. One-shot active 3D shape acquisition. In Proceedings of the International Conference on Pattern Recognition, Vienna, Austria, 25–29 August 1996; Volume 3, pp. 336–340.
  18. Paniagua, C.; Puig, L.; Guerrero, J.J. Omnidirectional Structured Light in a Flexible Configuration. Sensors 2013, 13, 13903–13916. [Google Scholar] [CrossRef] [PubMed]
  19. Kawasaki, H.; Sagawa, R.; Yagi, Y.; Furukawa, R.; Asada, N.; Sturm, P. One-shot scanning method using an uncalibrated projector and a camera system. In Proceedings of the Computer Vision and Pattern Recognition Workshops, San Francisco, CA, USA, 13–18 June 2010; pp. 104–111.
  20. Lee, D.T.; Schachter, B.J. Two algorithms for constructing a Delaunay triangulation. Int. J. Comput. Inf. Sci. 1980, 9, 219–242. [Google Scholar] [CrossRef]
  21. Guerrero, J.J.; Sagüés, C. Robust line matching and estimate of homographies simultaneously. In Pattern Recognition and Image Analysis; Springer-Verlag: Berlin, Germany, 2003; pp. 297–307. [Google Scholar]
  22. López-Nicolás, G.; Guerrero, J.J.; Pellejero, O.A.; Sagüés, C. Computing homographies from three lines or points in an image pair. In Image Analysis and Processing; Springer-Verlag: Berlin, Germany, 2005; pp. 446–453. [Google Scholar]
  23. Weng, J.; Huang, T.S.; Ahuja, N. Motion and Structure from Image Sequences; Springer-Verlag: Berlin, Germany, 1993. [Google Scholar]
  24. Aranda, M.; Lopez-Nicolas, G.; Sagues, C. Planar motion estimation from 1D homographies. In Proceedings of the 12th International Conference on Control Automation Robotics Vision (ICARCV), Guangzhou, China, 5–7 December 2012; pp. 329–334.
  25. Ma, Y.; Soatto, S.; Kosecka, J.; Sastry, S.S. An Invitation to 3-D Vision: From Images to Geometric Models; Springer Science & Business Media: Berlin, Germany, 2012; Volume 26. [Google Scholar]
  26. Bouguet, J.Y. Camera calibration toolbox for Matlab [Online]. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/ (accessed on 9 May 2016).
  27. Carron, T.; Lambert, P. Color Edge Detector Using Jointly Hue, Saturation and Intensity. In Proceedings of the International Conference on Image Processing, Austin, TX, USA, 13–16 November 1994; pp. 977–981.
Figure 1. Our non-rigid structured light system. (a) Wearable and low-cost camera; (b) Low-cost and hand-held laser projector; (c) Configuration of the system.
Figure 2. Point-based pattern projected by the laser.
Figure 3. Example of extraction of the first plane. (a) Delaunay triangulation of the points in the camera image. The first four points to initialize the first homography for the first plane are marked in red and numbered; (b) Delaunay triangulation of the points in the laser image. Red points represent the initial matchings and purple points represent neighbouring points. Pixel coordinates are used in the virtual image plane.
Figure 4. (a) Relation between points in camera and laser meshes; (b) Selected points for the initial homography of the second plane.
Figure 5. Horizontal Scanning. Sensitivity analysis results for horizontal scanning. (a) Laser rotation error; (b) Laser translation error.
Figure 6. Vertical Scanning. Sensitivity analysis results for vertical scanning. (a) Rotation error; (b) Translation error.
Figure 7. Reconstruction of a simulated scene composed of two orthogonal planes. (a) Result without image noise; (b) Result with image noise of mean 2 pixels; (c) Result with noise of mean 5 pixels; (d) Result with noise of mean 10 pixels.
Figure 8. Reconstruction of a simulated scene composed of three orthogonal planes. (a) Result with image noise of mean 2 pixels; (b) Result with noise of mean 10 pixels.
Figure 9. Reconstruction of a simulated scene composed of two non-orthogonal planes. (a) Result of a scene in which the planes formed an angle of 75 degrees; (b) Result of a scene in which the planes formed an angle of 120 degrees.
Figure 10. (a) Original image for experiment 1; (b) Reconstruction for experiment 1; (c) Original image for experiment 2; (d) Reconstruction for experiment 2.
Table 1. Angles between normal vectors for reconstructed planes.

              | Normal Vector Plane 1 | Normal Vector Plane 2 | Angle between Planes
Experiment 1  | (−0.50, 0.55, −0.67)  | (0.25, 0.88, 0.39)    | 85 degrees
Experiment 2  | (−0.39, −0.55, −0.74) | (0.18, 0.87, 0.46)    | 86 degrees
