Article

High Precision Positioning with Multi-Camera Setups: Adaptive Kalman Fusion Algorithm for Fiducial Markers

by Dragos Constantin Popescu 1,2,*, Ioan Dumitrache 1,3, Simona Iuliana Caramihai 1 and Mihail Octavian Cernaianu 2,*
1 Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Splaiul Independentei No. 313, 060042 Bucharest, Romania
2 Extreme Light Infrastructure-Nuclear Physics (ELI-NP)/Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering (IFIN-HH), Str. Reactorului No. 30, Magurele, 077125 Bucharest, Romania
3 Department of Science and Information Technology of the Romanian Academy, Cal. Victoriei No. 125, 010071 Bucharest, Romania
* Authors to whom correspondence should be addressed.
Submission received: 31 March 2020 / Revised: 5 May 2020 / Accepted: 7 May 2020 / Published: 11 May 2020

Abstract: The paper addresses the problem of fusing measurements from multiple cameras in order to estimate the position of fiducial markers. The objectives are to increase the precision and to extend the working area of the system. The proposed fusion method employs an adaptive Kalman algorithm, which is used both for calibrating the camera setup and for estimating the pose of the marker. Special measures are taken to mitigate the effect of measurement noise. The method is further tested in different scenarios using a Monte Carlo simulation, from which qualitative precision results are determined and compared. The solution is designed for specific positioning and alignment tasks in physics experiments, but it also has a degree of generality that makes it suitable for a wider range of applications.


1. Introduction

The evolution of technology has led to an increasing demand for solving complex problems which may be viewed as attempts to control and direct system behaviours towards desired states. The inherent complexity of problems and processes requires new approaches both in system modelling and in defining the emergent interaction with a highly dynamical and sparsely defined environment.
Cognitive approaches are successfully used in contexts where the boundary between the systems and the environment is fuzzy and where the two exhibit strong interrelation and interconnection, assisted by specific perception mechanisms [1]. Advances in complex control applications can be achieved only by considering adequate design approaches for sensory systems, especially in domains like environmental applications [2], health [3], industrial control, agriculture, etc.
Although new technologies such as wireless sensor networks or Internet of Things (IoT) are providing valuable solutions for appropriate perception mechanisms, complex issues are raised with the inclusion of data fusion, reliability, flexibility, reconfigurability, and cost of the measurement system. If some of these problems can be addressed during the modelling process, there are others (e.g., sensor positioning and sensor dynamics) that have a bigger impact on the generality of the overall solution.
These issues are specific to critical infrastructures, research institutes, power systems, e-health, mining, and traffic control, where the multitude of concurrent dynamics generating a large amount of information requires adequate solving methods that cover aspects such as relevance, efficiency, and cost optimisation.
This paper follows the development of a measurement method used in a very precise and yet flexible and portable positioning system. Firstly, the method is an answer to a specific application: automatic alignment with high precision of various instruments which have to be remotely operated in highly flexible, open, and dynamic configurations for physics experiments. However, it can be used in any application where high precision positioning over a large working area is required, where no absolute reference can be defined, or when it is necessary to simultaneously align multiple features in relative positions, with high reliability. Such applications include large-scale construction sites where multiple components need to be positioned in precise locations, manufacturing facilities where large machines are assembled, or spacecraft docking in zero-gravity environments [4].
The article has the following structure. The specific application and context addressed in this work are presented in Section 2. The theoretical context behind the proposed algorithm is described in Section 3; it is then used to build two essential procedures for the positioning system. The precision of the proposed method is assessed in Section 4 using a Monte Carlo simulation. The conclusions and future developments are detailed in Section 5.

2. Description of the Initial Problem

The large number of high repetition rate, ultrashort pulse, and high power laser facilities that will come online all around the world will require state-of-the-art tools to allow the harnessing of their full potential [5]. The high power lasers of the ELI-NP (Extreme Light Infrastructure-Nuclear Physics) user facility will be employed for a wide range of research topics like studies of nuclear induced reactions, dark matter search, material irradiation, or medical applications [6]. The development of the experimental setups for studying these topics requires specific instrumentation, while also having strict needs in terms of positioning and alignment, in order to ensure optimal experimental conditions.
As the setups are continuously changing, absolute position referencing is hard to achieve. Relative referencing is, however, a necessity, as multiple instruments need to be precisely positioned relative to each other during the experiment. Figure 1 displays an example of a setup for solid target alignment in which multiple optical diagnostics are positioned using motorised manipulators that have 3 to 6 degrees of freedom (DOF).
Apart from the precision requirements, additional ones should be taken into consideration. The alignment should be done remotely, because the experiments take place inside vacuum chambers and behind concrete walls used for radiation shielding. Moreover, to take advantage of the high repetition rate of the lasers and to maximise the beam-time available for the users, the positioning has to be performed in a limited amount of time. In a previous work [7], we addressed this series of requirements and constraints by developing an automatic alignment system based on relative position measurements using imaging cameras and, due to space and visibility constraints, compact and flat fiducial markers. The alignment algorithm was based on a real-time optimisation procedure which is the subject of a patent [8].
Although fiducial markers were initially developed and used in augmented reality applications [9], their pose can be determined in a non-invasive manner (by imaging them with a camera), and this versatility led to their quick adoption in a different range of applications. These include kinematic calibration and visual servoing for industrial robots [10,11], robot localisation and navigation [12,13,14], SLAM (simultaneous localisation and mapping) [15,16,17], and sensor fusion [18,19,20].
The motivation behind this work is to combine the measurements from multiple cameras in order to increase the working area of the system and to maximise the positioning precision. At the same time, the main focus is to present and test the precision of the proposed method and algorithm, comparing the results with those recorded while using only one camera [21].

3. Method

The basic approach behind the proposed method is to use multiple cameras in order to improve the precision of estimating the pose of fiducial markers and to extend the working area of the system. It begins with the assumption that each camera provides measurements that are erroneous and noisy. The problem can be conceived through an analogy with the GPS system, where distance measurements from multiple satellites are used to estimate the position of the receiver module with high precision. The key ingredient in the GPS system is knowing the position of the satellites very precisely. The proposed positioning system needs to meet the same requirement for its cameras, but spending too much effort on this task would diminish its practicality. Consequently, it is only assumed that the cameras have unknown but fixed positions.
A related approach that has similar objectives can be found in [22]. The main benefit of the method is the possibility to calibrate the setup of cameras using a 3D feature with fiducial markers having unknown configuration. However, it is not meant for estimating the pose of single markers. In order to achieve this, additional methods are required.
Our method is developed using the ArUco fiducial markers [23,24]. The ArUco library detects the marker and estimates its pose with respect to the camera using the solvepnp algorithm, in which the pinhole camera model is fitted to the data using a Levenberg–Marquardt non-linear optimisation procedure. The output is represented by the extrinsic camera parameters, which can be expressed in terms of a homogeneous transformation $T_{C}^{M}$ (the transformation between the camera-attached reference frame and the marker-attached one), detailed in [7].

3.1. Adaptive Kalman Fusion Algorithm for Multiple Cameras and Fiducial Markers

The method consists of fusing noisy measurements from multiple cameras, in order to improve the precision of estimating the exact position of the fiducial marker. In the scientific literature, multiple sensor fusion and noise filtering methods have been developed over the years. Among them, the Kalman filter [25] is the most widely used. The noise effect is mitigated by using a dynamical model for the physical process involved and a statistical model (covariance matrix) for the noise in the measurement process.
For a discrete linear state-space dynamical model in Equation (1), the Kalman filter estimates the value of the state vector x using noisy measurements for the input u and the output y.
$$x_k = A \cdot x_{k-1} + B \cdot u_{k-1} + q_{k-1}, \qquad y_k = C \cdot x_k + r_k. \tag{1}$$
In standard data fusion applications that use the Kalman filter, the pose of various objects is estimated using the input from accelerometer, gyroscope, and magnetometer sensors [26,27] and the output from distance measurement devices [28,29]. Our application makes it impossible to use any type of attached sensors; hence, we rely only on the “non-invasive” pose measurement using the solvepnp algorithm and imaging cameras.
Our approach is to employ a state-space model with free dynamics (where u is zero) and with an identity state matrix (A). In this respect, each measurement is made using only one snapshot image from all the cameras that are synchronised with the help of an electrical trigger signal. In this way, the position of the fiducial marker during the measurement is considered fixed. On the same set of images, the solvepnp algorithm is applied multiple times and, thus, the evolution of the measurement is caused only by the noise and not by the movement of the fiducial marker. Furthermore, the measurements are used to iteratively estimate the real position using the proposed algorithm.
In order to improve the results, we propose a procedure designed to update the statistical model of the noise. The Q and R covariance matrices (which are discussed below) are continuously adjusted using the newly acquired data. Any change of the noise behaviour and any existing correlations are captured and thus, the effect of the noise is mitigated more efficiently. Consequently, the proposed algorithm is an adaptive Kalman version.
The homogeneous transformation representation ($T_{C}^{M}$) has numerous practical benefits, especially for pose composition and inversion operations, but it is not suitable in this circumstance because it is redundant (16 numerical values for expressing a 6 DOF pose). The solution is to use an equivalent representation composed of the set of translation coordinates ($X$, $Y$, and $Z$) and the set of Euler angles ($A_X$, $A_Y$, and $A_Z$).
In this particular case, the structure of the state vector is the real pose of the fiducial marker defined in Equation (2), where the subscript k denotes the present discrete-time sample of the state vector.
$$x_k = \begin{bmatrix} X_k & Y_k & Z_k & A_{X_k} & A_{Y_k} & A_{Z_k} \end{bmatrix}^T. \tag{2}$$
The output vector is composed of the pose elements measured using all the cameras available. The structure of the output vector is defined in Equation (3), where the superscript $i$ indicates the index of the camera considered.
$$y_k = \begin{bmatrix} X_k^1 & Y_k^1 & Z_k^1 & A_{X_k}^1 & A_{Y_k}^1 & A_{Z_k}^1 & \cdots & X_k^i & Y_k^i & Z_k^i & A_{X_k}^i & A_{Y_k}^i & A_{Z_k}^i & \cdots \end{bmatrix}^T. \tag{3}$$
The discrete-time state-space model considered is defined in Equation (4), where $I_6$ is the identity matrix of rank 6, $q_{k-1}$ is a random noise signal with normal distribution (white noise) quantifying the false evolution induced by the noisy measurements, $H$ is the measurement model matrix defined in Equation (5), and $r_k$ is the measurement noise, also considered white noise.
$$x_k = I_6 \cdot x_{k-1} + q_{k-1}, \qquad y_k = H \cdot x_k + r_k \tag{4}$$
$$H = \begin{bmatrix} I_6^1 & \cdots & I_6^i & \cdots \end{bmatrix}^T. \tag{5}$$
The equations that describe the Kalman filter are presented in Equation (6). The first two equations give a rough state estimate using the dynamical model, while the last five equations improve the estimate using the newly acquired output sample. $P$ is a covariance matrix expressing the confidence degree of the state estimation, which is updated during the algorithm iterations; $Q$ and $R$ are the covariance matrices associated with the noises $q$ and $r$, respectively; $v_k$ is the rough estimation error (the difference between the measured output and the one predicted using the first two equations); $S_k$ is the covariance matrix associated with the predicted output; and $K_k$ is the Kalman gain used for the state update.
$$\begin{aligned}
&\text{Predict the state:} \\
&\quad 1.\;\; \hat{x}_{k|k-1} = A \cdot \hat{x}_{k-1|k-1} \\
&\quad 2.\;\; P_{k|k-1} = A \cdot P_{k-1|k-1} \cdot A^T + Q_{k-1} \\
&\text{Update the prediction:} \\
&\quad 3.\;\; v_k = y_k - H_k \cdot \hat{x}_{k|k-1} \\
&\quad 4.\;\; S_k = H_k \cdot P_{k|k-1} \cdot H_k^T + R_k \\
&\quad 5.\;\; K_k = P_{k|k-1} \cdot H_k^T \cdot S_k^{-1} \\
&\quad 6.\;\; \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \cdot v_k \\
&\quad 7.\;\; P_{k|k} = P_{k|k-1} - K_k \cdot S_k \cdot K_k^T
\end{aligned} \tag{6}$$
The filter requires good estimates for the initial state $x_0$ and the covariance matrices $Q$ and $R$. In the proposed algorithm, this is achieved using an initialisation procedure. The position of the fiducial marker is measured for $N_i$ sampling times. The result is the set of samples defined in Equation (3) for $k = 1, \ldots, N_i$ (the length of the initialisation) and $i = 1, \ldots, n$ (the number of cameras available).
Averaging the samples gives a good estimate for the initial state, which is built according to Equation (7), where $E[\cdot]$ is the mean operator (expected value). The $Q$ and $R$ matrices are computed from the same set of samples, assuming that there is no correlation between the noises affecting the measurements.
$$x_0 = \begin{bmatrix} X_0 & Y_0 & Z_0 & A_{X_0} & A_{Y_0} & A_{Z_0} \end{bmatrix}^T = \begin{bmatrix} \mathop{E}_{\substack{1 \le k \le N_i \\ 1 \le i \le n}}\big[X_k^i\big] & \mathop{E}_{\substack{1 \le k \le N_i \\ 1 \le i \le n}}\big[Y_k^i\big] & \mathop{E}_{\substack{1 \le k \le N_i \\ 1 \le i \le n}}\big[Z_k^i\big] & \mathop{E}_{\substack{1 \le k \le N_i \\ 1 \le i \le n}}\big[A_{X_k}^i\big] & \mathop{E}_{\substack{1 \le k \le N_i \\ 1 \le i \le n}}\big[A_{Y_k}^i\big] & \mathop{E}_{\substack{1 \le k \le N_i \\ 1 \le i \le n}}\big[A_{Z_k}^i\big] \end{bmatrix}^T \tag{7}$$
$$Q_0 = \mathrm{diag}\Big( \mathop{E}_{1 \le i \le n}\big[\mathop{\mathrm{var}}_{1 \le k \le N_i}(X_k^i)\big], \; \mathop{E}_{1 \le i \le n}\big[\mathop{\mathrm{var}}_{1 \le k \le N_i}(Y_k^i)\big], \; \mathop{E}_{1 \le i \le n}\big[\mathop{\mathrm{var}}_{1 \le k \le N_i}(Z_k^i)\big], \; \mathop{E}_{1 \le i \le n}\big[\mathop{\mathrm{var}}_{1 \le k \le N_i}(A_{X_k}^i)\big], \; \mathop{E}_{1 \le i \le n}\big[\mathop{\mathrm{var}}_{1 \le k \le N_i}(A_{Y_k}^i)\big], \; \mathop{E}_{1 \le i \le n}\big[\mathop{\mathrm{var}}_{1 \le k \le N_i}(A_{Z_k}^i)\big] \Big) \tag{8}$$
$$R_0 = \mathrm{diag}\big(R_0^1, \ldots, R_0^i, \ldots, R_0^n\big) \tag{9}$$
$$R_0^i = \mathrm{diag}\Big( \mathop{\mathrm{var}}_{1 \le k \le N_i}(X_k^i), \; \mathop{\mathrm{var}}_{1 \le k \le N_i}(Y_k^i), \; \mathop{\mathrm{var}}_{1 \le k \le N_i}(Z_k^i), \; \mathop{\mathrm{var}}_{1 \le k \le N_i}(A_{X_k}^i), \; \mathop{\mathrm{var}}_{1 \le k \le N_i}(A_{Y_k}^i), \; \mathop{\mathrm{var}}_{1 \le k \le N_i}(A_{Z_k}^i) \Big). \tag{10}$$
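The initialisation of Equations (7)-(10) can be sketched as follows, again in Python with NumPy as an assumed stand-in for the authors' MATLAB implementation; the `samples` layout (time × camera × pose element) is a hypothetical convention chosen for illustration.

```python
import numpy as np

def initialise(samples):
    """Build x0, Q0 and R0 of Equations (7)-(10).

    samples : (Ni, n, 6) array of pose measurements (X, Y, Z, AX, AY, AZ)
              over Ni sampling times from n cameras.
    """
    # Equation (7): mean over both sampling times and cameras.
    x0 = samples.mean(axis=(0, 1))
    # Per-camera, per-element variance over the sampling times: shape (n, 6).
    var = samples.var(axis=0, ddof=1)
    # Equation (8): Q0 holds the variances averaged across cameras.
    Q0 = np.diag(var.mean(axis=0))
    # Equations (9)-(10): R0 is block diagonal, one 6x6 block per camera.
    R0 = np.diag(var.reshape(-1))
    return x0, Q0, R0
```

The no-correlation assumption of the paper is what allows both matrices to be diagonal here.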
$Q$ is a diagonal matrix defined in Equation (8), built using the mean variance of the pose elements ($X$, $Y$, $Z$, $A_X$, $A_Y$, and $A_Z$) across all the cameras. The $R$ matrix is defined in Equation (9), where $R_0^i$, defined in Equation (10), is built using the variance of each pose element measured by each camera.
After the initialisation is finished, the Kalman algorithm is iterated for $N_e$ sampling times; at each step, a new set of samples $y_k$ in the form of Equation (3) is measured and an estimated state trajectory is built ($\hat{x}_{k|k}$ for $k = N_i+1, \ldots, N_i+N_e$).
Newly measured system outputs can be used to improve the statistical models of the noises for increased performance. Thus, at each iteration, every new set is added to the initialisation set and the covariance matrices $Q$ and $R$ are updated using Equations (11)–(13) for $k = 1, \ldots, N_e$.
$$Q_k = \mathrm{diag}\Big( \mathop{E}_{1 \le i \le n}\big[\mathop{\mathrm{var}}_{1 \le j \le N_i+k}(X_j^i)\big], \; \mathop{E}_{1 \le i \le n}\big[\mathop{\mathrm{var}}_{1 \le j \le N_i+k}(Y_j^i)\big], \; \mathop{E}_{1 \le i \le n}\big[\mathop{\mathrm{var}}_{1 \le j \le N_i+k}(Z_j^i)\big], \; \mathop{E}_{1 \le i \le n}\big[\mathop{\mathrm{var}}_{1 \le j \le N_i+k}(A_{X_j}^i)\big], \; \mathop{E}_{1 \le i \le n}\big[\mathop{\mathrm{var}}_{1 \le j \le N_i+k}(A_{Y_j}^i)\big], \; \mathop{E}_{1 \le i \le n}\big[\mathop{\mathrm{var}}_{1 \le j \le N_i+k}(A_{Z_j}^i)\big] \Big) \tag{11}$$
$$R_k = \mathrm{diag}\big(R_k^1, \ldots, R_k^i, \ldots, R_k^n\big) \tag{12}$$
$$R_k^i = \mathrm{diag}\Big( \mathop{\mathrm{var}}_{1 \le j \le N_i+k}(X_j^i), \; \mathop{\mathrm{var}}_{1 \le j \le N_i+k}(Y_j^i), \; \mathop{\mathrm{var}}_{1 \le j \le N_i+k}(Z_j^i), \; \mathop{\mathrm{var}}_{1 \le j \le N_i+k}(A_{X_j}^i), \; \mathop{\mathrm{var}}_{1 \le j \le N_i+k}(A_{Y_j}^i), \; \mathop{\mathrm{var}}_{1 \le j \le N_i+k}(A_{Z_j}^i) \Big). \tag{13}$$
Considering that the estimated state trajectory ($\hat{x}_{k|k}$) belongs to a system with free dynamics where, in the absence of noise effects, the state should be constant, a final estimate with better precision can be achieved by averaging the values of the estimated state trajectory according to Equation (14). The timeline of all the procedures involved in the proposed algorithm is presented in Figure 2. The proposed procedure is summarised in Algorithm 1.
$$\hat{x} = \mathop{E}_{N_i+1 \le k \le N_i+N_e}\big[\hat{x}_{k|k}\big]. \tag{14}$$
Algorithm 1: The proposed algorithm for multi-camera pose fusion.
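Putting the pieces together, a minimal sketch of Algorithm 1 might look as follows (Python/NumPy; the function name and the `measure` callback are illustrative assumptions): initialise from the first $N_i$ sample sets, iterate the filter for $N_e$ steps while re-estimating $Q$ and $R$ from the growing sample set, and finally average the state trajectory as in Equation (14).

```python
import numpy as np

def fuse_pose(samples_init, measure, Ne):
    """Sketch of Algorithm 1 for a marker pose fused over n cameras.

    samples_init : (Ni, n, 6) initialisation measurements
    measure      : callable returning a fresh (n, 6) measurement set
    Ne           : number of Kalman iterations
    """
    Ni, n, _ = samples_init.shape
    H = np.tile(np.eye(6), (n, 1))             # Equation (5)
    data = list(samples_init)
    x = samples_init.mean(axis=(0, 1))         # Equation (7): initial state
    P = np.eye(6)
    trajectory = []
    for _ in range(Ne):
        data.append(measure())                 # grow the sample set
        var = np.asarray(data).var(axis=0, ddof=1)
        Q = np.diag(var.mean(axis=0))          # Equation (11)
        R = np.diag(var.reshape(-1))           # Equations (12)-(13)
        y = data[-1].reshape(-1)
        # Kalman iteration, Equation (6), with A = I:
        P = P + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (y - H @ x)
        P = P - K @ S @ K.T
        trajectory.append(x)
    return np.mean(trajectory, axis=0)         # Equation (14)
```

In a synthetic run with small additive Gaussian noise, the averaged trajectory lands much closer to the true pose than any single measurement set.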

3.2. Setup Calibration Procedure

Given a set of $n$ cameras in a pre-defined fixed configuration, having unknown absolute positions, the purpose of the setup calibration procedure is to determine their relative positions in a precise manner. As depicted in Figure 3, the goal is to determine the set of homogeneous transformations $T_{C_i}^{C_j}$, where $i = 1, \ldots, n$ and $j = i+1, \ldots, n$.
This requires the use of a precision gauge, which can be manufactured in the form of a cube of fiducial markers, as in [21], or in any 3D shape on which the markers can be viewed from all around. The gauge needs to be manufactured or calibrated with increased precision; it can be seen as the absolute precision reference used to calibrate the entire positioning system. In Figure 4, a conceptual diagram of a gauge containing $m$ fiducial markers is presented. The set of homogeneous transformations between all the markers ($T_{M_i}^{M_j}$, where $i = 1, \ldots, m$ and $j = i+1, \ldots, m$) is precisely known.
The camera setup calibration procedure is done simultaneously for all the camera pairs $(C_i, C_j)$, where $i = 1, \ldots, n$ and $j = i+1, \ldots, n$, with the goal of estimating the homogeneous transformation $T_{C_i}^{C_j}$ (as presented in Figure 5). It starts with placing the precision gauge inside the environment. Depending on the orientation, each camera sees a different portion of the gauge, i.e., from all $m$ fiducial markers, $C_i$ can measure the position of $n_i$ markers and $C_j$ can measure only $n_j$, where $n_i \le m$ and $n_j \le m$. It is preferable that at least one fiducial marker $M_k$ is seen by both cameras; otherwise, its pose can be determined virtually using the gauge transformations.
The proposed algorithm estimates the $T_{C_i}^{M_k}$ homogeneous transformation using the noisy measurements of all $n_i$ markers seen by the $C_i$ camera. This transformation is expressed multiple times in terms of the $T_{C_i}^{M_l}$ pose measurements (where $l = 1, \ldots, n_i$) using Equation (15). The $T_{M_l}^{M_k}$ transformation is precisely known from the gauge.
$$T_{C_i}^{M_k} \approx \big(\widehat{T_{C_i}^{M_k}}\big)_l = T_{C_i}^{M_l} \cdot T_{M_l}^{M_k}. \tag{15}$$
The resulting $\big(\widehat{T_{C_i}^{M_k}}\big)_l$ poses are converted to translations and Euler angles, which are used for building the output measurement vector of Equation (3).
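The pose composition of Equation (15) and the conversion to translations and Euler angles can be sketched as below (Python/NumPy). The Z-Y-X Euler convention used here is an assumption; the paper does not state which convention its implementation uses.

```python
import numpy as np

def compose(T_a, T_b):
    """Chain two 4x4 homogeneous transformations, as in Equation (15)."""
    return T_a @ T_b

def to_pose(T):
    """Convert a 4x4 homogeneous transformation to (X, Y, Z, AX, AY, AZ).

    Assumes the Z-Y-X (yaw-pitch-roll) Euler convention with angles in
    radians, away from the gimbal-lock singularity.
    """
    R = T[:3, :3]
    ay = np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0]))  # pitch
    az = np.arctan2(R[1, 0], R[0, 0])                      # yaw
    ax = np.arctan2(R[2, 1], R[2, 2])                      # roll
    return np.array([T[0, 3], T[1, 3], T[2, 3], ax, ay, az])
```

A pure rotation of 0.3 rad about the camera's optical ($Z$) axis with a translation of (1, 2, 3), for instance, converts to the pose (1, 2, 3, 0, 0, 0.3).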
The proposed algorithm is then applied, and the result is an estimation of $T_{C_i}^{M_k}$ with a higher degree of precision, closer to the real value. For the $C_j$ camera, $T_{C_j}^{M_k}$ is estimated in a similar manner. Consequently, the $T_{C_i}^{C_j}$ transformation is computed using Equation (16).
$$T_{C_i}^{C_j} = T_{C_i}^{M_k} \cdot \big(T_{C_j}^{M_k}\big)^{-1}. \tag{16}$$
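Equation (16) needs the inverse of a homogeneous transformation, which can be computed without a general 4×4 matrix inverse because the rotation block is orthogonal. A sketch (Python/NumPy; `inv_T` is an illustrative helper name, and the commented composition uses hypothetical variable names for the two fused marker poses):

```python
import numpy as np

def inv_T(T):
    """Invert a 4x4 homogeneous transformation.

    The 3x3 rotation block is orthogonal, so its inverse is its
    transpose; the translation is rotated back and negated.
    """
    Ti = np.eye(4)
    Ti[:3, :3] = T[:3, :3].T
    Ti[:3, 3] = -T[:3, :3].T @ T[:3, 3]
    return Ti

# Equation (16), with T_Ci_Mk and T_Cj_Mk as the fused marker poses:
#   T_Ci_Cj = T_Ci_Mk @ inv_T(T_Cj_Mk)
```

Composing a transformation with its inverse returns the identity, which is a convenient sanity check for a calibration implementation.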

3.3. Position Estimation Procedure

Having a calibrated camera setup, the aim of this procedure is to estimate the position of a fiducial marker attached to a specific instrument, according to the application.
In Figure 6, the conceptual diagram of this procedure is depicted. Depending on the configuration, the marker $M$ can be seen by only $n_m$ out of the $n$ cameras ($n_m \le n$). An arbitrary camera ($C_r$) is chosen, which is considered the positioning reference.
The proposed algorithm estimates the $T_{C_r}^{M}$ homogeneous transformation using the noisy measurements from each of the $n_m$ cameras. In a similar manner to the setup calibration procedure, the $T_{C_r}^{M}$ transformation is expressed multiple times in terms of the $T_{C_k}^{M}$ pose measurements (where $k = 1, \ldots, n_m$) using Equation (17). The $T_{C_r}^{C_k}$ transformation is precisely known from the setup calibration.
$$T_{C_r}^{M} \approx \big(\widehat{T_{C_r}^{M}}\big)_k = T_{C_r}^{C_k} \cdot T_{C_k}^{M}. \tag{17}$$
The resulting $\big(\widehat{T_{C_r}^{M}}\big)_k$ poses are converted to translations and Euler angles and used for building the output measurement vector of Equation (3). The proposed algorithm is then applied, and the result is an estimation of $T_{C_r}^{M}$ which has a higher degree of precision and is closer to the real value.
This approach can be used in a similar manner for measuring the relative position of multiple fiducial markers in the environment, which is very useful in alignment tasks as presented in [7].

4. Simulation Results

4.1. Monte Carlo Setup

Two types of factors contribute to and affect the precision of estimating the pose of the fiducial marker. First, there are the physical and environment-related aspects, which include the optical specifications of the imaging system (the sensor and optical resolution, the focal length, the depth of field, and the field of view), the environment illumination conditions (how strong, uniform, and consistent the lighting is), and whether the cameras are positioned optimally so as to achieve good viewing angles and maintain consistent accuracy along the working area. Second, there are factors related to the algorithms: how precisely the pose can be estimated and how much the effect of the noise can be mitigated during the data fusion. In this paper, we decouple the two types of factors and only consider the behaviour of our proposed method, so as to have a first qualitative assessment of the precision.
Consequently, we developed a simulation environment that aims to replicate how our method performs in a real setup, considering the noisy pose estimations it receives from the solvepnp algorithm. For increased confidence in the results, we adopted a Monte Carlo approach in which we statistically analysed how the noise is propagated through our method and how the precision is affected. The simulation is implemented in MATLAB, where positions of multiple cameras, precision gauges, and fiducial markers are virtually defined. In order to simulate the noisy input from solvepnp, each position (that is supposed to be measured in the real environment) is disturbed with additive random noise having normal distribution. The noise is configured considering the precision limits we determined experimentally for one camera in our previous work [21]: 75 μm for $X$ and $Y$, 300 μm for $Z$, and 0.02° for $A_X$, $A_Y$, and $A_Z$. The mean value of the noise is 0, while the standard deviation ($\sigma$) was configured in such a way that 95.45% of the noise values are within the experimental limits (inside $[-2\sigma, 2\sigma]$).
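Mapping the experimental limits onto $2\sigma$ gives the noise generator used conceptually in the simulation. A sketch of this configuration (Python/NumPy rather than the paper's MATLAB; expressing translation limits in metres and angles in degrees is an assumption made here for concreteness):

```python
import numpy as np

# One-camera precision limits from [21] (X, Y, Z in metres; angles in degrees).
limits = np.array([75e-6, 75e-6, 300e-6, 0.02, 0.02, 0.02])

# 95.45% of a zero-mean normal distribution lies in [-2*sigma, 2*sigma],
# so each experimental limit is mapped onto 2*sigma.
sigma = limits / 2

def noisy_measurement(true_pose, rng):
    """Disturb a pose with additive zero-mean Gaussian noise, per element."""
    return true_pose + rng.normal(0.0, sigma)
```

Sampling many such measurements and counting the fraction inside the limits recovers roughly the targeted 95.45%.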
The simulation puts in place three scenarios which are additionally used to assess the different contributions between the number of cameras and the number of the fiducial markers from the gauge:
  • 3 cameras, a precision gauge with 5 markers, and one marker whose position must be estimated;
  • 5 cameras, a precision gauge with 3 markers, and one marker whose position must be estimated;
  • 5 cameras, a precision gauge with 5 markers, and one marker whose position must be estimated.
The results are compared against estimating the pose using only one camera in the same environment. The Monte Carlo simulation performs 5000 iterations where, in each run and for each of the scenarios, the setup is calibrated using the gauge. The calibration is further used to estimate the pose of the fiducial marker, which is compared with the true, predefined one. The $N_i$ and $N_e$ parameters are configured to 20 and 50, respectively. In a real application, the choice of these two parameters is a matter of cost optimisation, considering the computational effort, the resources available, and the required measuring frequency.
Compared with a real setup, we adopted one simplifying hypothesis. The angle of incidence of the marker relative to the camera has an influence on the precision, as we showed in [21]: angles of incidence close to normal tend to bring more noise into the estimation. In this study, we consider the precision of the solvepnp algorithm to be consistent, regardless of the angle. In other respects, however, the simulation considers the worst-case scenario, for the following reasons:
  • Noises from different cameras and from different elements of the pose ($X$, $Y$, $Z$, $A_X$, $A_Y$, and $A_Z$) are considered completely uncorrelated. In real circumstances this might not be the case (e.g., the noise induced by the environment illumination affects all the measurements in a similar manner); thus, any relaxed conditions contribute to an increased precision of the estimation. This additional information is harnessed by the update procedure for the $Q$ and $R$ covariance matrices (which in this case would no longer be diagonal);
  • The precision of one camera measurement along the $Z$ axis is 5 times lower than along the $X$ and $Y$ axes. In the simulation this is taken as-is, but in a real situation, this effect can be diminished by placing the camera setup in an optimal configuration. For instance, the measurement along the $Z$ axis from one camera can be replaced by measurements from two cameras placed in lateral positions, for which the $Z$ axis motion is decomposed into $X$ and $Y$ components that have a higher precision.

4.2. Results

Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 present the simulation results for estimating the $X$, $Y$, and $Z$ coordinates and the $A_X$, $A_Y$, and $A_Z$ orientation angles, given in terms of the probability density function (PDF) and the standard deviation (SD) of the error. The first plot in each figure depicts the estimation error achieved using only one camera. The following three plots give the results achieved using the setups from each of the above-mentioned scenarios.
The results show that the proposed algorithm achieves an increase in precision close to an order of magnitude. It can also be observed that it is of greater importance to have more fiducial markers in the precision gauge than more cameras. In scenario #3, a slight decrease in precision is experienced in comparison to scenario #1. This is to be expected, as each added camera is an additional noise source; however, the benefit of achieving a larger working area is far more important. In Table 1 the results are summarised and compared with regard to the limits of the variation interval $[-2\sigma, 2\sigma]$, within which 95.45% of the errors are expected to occur. This can be considered the precision that the positioning system is expected to achieve when using the proposed method.
In order to further emphasise the simulation results achieved by the proposed method, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18 depict a set of examples from the third scenario which show how the estimation of the state vector elements evolves. Each of the figures presents: the noisy measurements from the five cameras, the value of the state vector computed after the initialisation procedure, the evolution of the state vector during the Kalman estimation, and the final state estimate, which falls close to the true value. Although the noise in the measurements is amplified by the reconstruction of the position of the marker in different cameras, the evolution of the Kalman estimation shows a clear noise damping effect.

5. Conclusions

With respect to the considered case study, the results show that the proposed method and algorithm have managed to successfully meet the objectives. The working area can be increased in accordance with the number of cameras in the setup; this is a decision-making procedure that needs to take into account the cost relative to the working area and the precision required. For our simulation scenarios, the precision increase was close to an order of magnitude: around 10–15 μm for the $X$ and $Y$ coordinates, 30 μm for $Z$, and 0.002° for the $A_X$, $A_Y$, and $A_Z$ orientation angles.
Low cost is an important benefit of the system compared to other solutions, such as laser trackers, which tend to be extremely expensive. In addition, the proposed method can achieve relative and simultaneous positioning of multiple fiducial markers, which supports the development of advanced applications.
Future work will include a complete analysis of the method in a real environment where all physical and algorithm-related factors are considered, and a precision comparison against other methods presented in the literature.

Author Contributions

Conceptualisation, D.C.P. and I.D.; methodology, I.D., S.I.C., and M.O.C.; software, D.C.P. and M.O.C.; validation, D.C.P. and M.O.C.; formal analysis, D.C.P. and I.D.; investigation, D.C.P., S.I.C., and M.O.C.; resources, M.O.C.; writing—original draft preparation, D.C.P., S.I.C., and M.O.C.; visualisation, D.C.P.; supervision, I.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Extreme Light Infrastructure Nuclear Physics Phase II, a project co-financed by the Romanian Government and the European Union through the European Regional Development Fund and the Competitiveness Operational Programme (No. 1/07.07.2016, COP, ID 1334). It was also supported by POC contract 53/05.09.2016, ID 40 270, MySMIS 105976, Ecosystem for ICT Research, Innovation, Product and Service Development for an Internet of Things Interconnected Society—NETIO, subsidiary contract 1265/22.01.2018, Development of an Innovative Solution for Efficient Communication in an Industrial Environment—DIVE, and by the Romanian Ministry of Research through project PN 19 06 01 05/2019.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Solid target alignment setup example where multiple instruments are positioned using motorised manipulators.
Figure 2. The sequence of procedures involved in the proposed algorithm.
Figure 3. The calibration of a set of n cameras requires determining their relative positions T_CiCj.
Figure 4. A setup of cameras is calibrated using a precision gauge: a manufactured part carrying m fiducial markers whose relative positions are all precisely known.
Figure 5. The process of calibrating all pairs of the n cameras, illustrating the homogeneous transformations involved in finding the relative position of cameras C_i and C_j.
Figure 6. The homogeneous transformations involved in estimating the position of a fiducial marker placed in the environment of a calibrated camera setup.
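The transformation chains sketched in Figures 3–6 can be illustrated numerically. In this hedged sketch (the function names and the example poses are illustrative, not taken from the paper), the relative pose T_CiCj of two cameras is recovered from their individual observations of the same calibration marker M, using 4×4 homogeneous transformation matrices:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transformation from a rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_Ci_M, T_Cj_M):
    """Relative pose of camera j expressed in camera i's frame, given each
    camera's observation of the same marker M: T_CiCj = T_CiM * inv(T_CjM)."""
    return T_Ci_M @ np.linalg.inv(T_Cj_M)

# Rotation about the Z axis by angle a (radians).
def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Synthetic example: the same gauge marker seen by two cameras.
T_Ci_M = make_pose(Rz(0.1), [0.0, 0.0, 1.0])   # marker 1 m in front of camera i
T_Cj_M = make_pose(Rz(-0.2), [0.3, 0.0, 1.2])  # same marker seen by camera j

T_Ci_Cj = relative_pose(T_Ci_M, T_Cj_M)

# Consistency check: chaining the relative pose with camera j's observation
# must reproduce camera i's observation of the marker.
assert np.allclose(T_Ci_Cj @ T_Cj_M, T_Ci_M)
```

In a real calibration the observed poses are noisy, which is why the paper averages over the m gauge markers and filters the estimates; the algebra of the chain, however, is exactly the one above.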
Figure 7. The probability density function (PDF) and the standard deviation (SD) of the error for estimating the X coordinate.
Figure 8. The probability density function (PDF) and the standard deviation (SD) of the error for estimating the Y coordinate.
Figure 9. The probability density function (PDF) and the standard deviation (SD) of the error for estimating the Z coordinate.
Figure 10. The probability density function (PDF) and the standard deviation (SD) of the error for estimating the A X angle.
Figure 11. The probability density function (PDF) and the standard deviation (SD) of the error for estimating the A Y angle.
Figure 12. The probability density function (PDF) and the standard deviation (SD) of the error for estimating the A Z angle.
Figure 13. The evolution of the estimation of X.
Figure 14. The evolution of the estimation of Y.
Figure 15. The evolution of the estimation of Z.
Figure 16. The evolution of the estimation of A X .
Figure 17. The evolution of the estimation of A Y .
Figure 18. The evolution of the estimation of A Z .
Table 1. The simulation results for estimating the pose using only one camera and using multiple cameras in three scenarios. Each value is the boundary of the error variation interval (2σ).
Pose Element | Unit | One Camera | #1     | #2     | #3
X            | μm   | 75.68      | 11.12  | 1.04   | 17.4
Y            | μm   | 75.11      | 1.14   | 12.71  | 1.88
Z            | μm   | 298.68     | 30.74  | 30.01  | 28.28
AX           | deg  | 0.019      | 0.0016 | 0.0014 | 0.0013
AY           | deg  | 0.019      | 0.0022 | 0.0024 | 0.0022
AZ           | deg  | 0.019      | 0.0019 | 0.0018 | 0.0016
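The 2σ boundaries reported in Table 1 follow directly from the Monte Carlo error samples: each value is twice the standard deviation of the estimation error for that pose element. A minimal sketch of this computation (using synthetic Gaussian noise rather than the paper's simulation data; the 37.5 μm scale is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Monte Carlo estimation errors (in μm) for one pose coordinate;
# in the paper these come from repeated noisy pose estimations.
errors = rng.normal(loc=0.0, scale=37.5, size=100_000)

# The reported value is the boundary of the +/-2*sigma interval, which for a
# zero-mean Gaussian covers about 95.4% of the samples.
boundary = 2.0 * np.std(errors)
coverage = np.mean(np.abs(errors) <= boundary)
```

With the assumed 37.5 μm noise, `boundary` comes out near 75 μm, comparable to the one-camera X and Y values in Table 1.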

MDPI and ACS Style

Popescu, D.C.; Dumitrache, I.; Caramihai, S.I.; Cernaianu, M.O. High Precision Positioning with Multi-Camera Setups: Adaptive Kalman Fusion Algorithm for Fiducial Markers. Sensors 2020, 20, 2746. https://0-doi-org.brum.beds.ac.uk/10.3390/s20092746

