Article

The Performance Analysis of Space Resection-Aided Pedestrian Dead Reckoning for Smartphone Navigation in a Mapped Indoor Environment

Department of Geomatics, National Cheng-Kung University, 1 University Road, Tainan 701, Taiwan
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2017, 6(2), 43; https://doi.org/10.3390/ijgi6020043
Submission received: 21 December 2016 / Revised: 26 January 2017 / Accepted: 26 January 2017 / Published: 14 February 2017
(This article belongs to the Special Issue 3D Indoor Modelling and Navigation)

Abstract

Smartphones have become indispensable in our daily lives. Their various embedded sensors have inspired innovations in mobile applications, especially indoor navigation. However, the accuracy, reliability and generalizability of navigation remain problematic in environments without Global Navigation Satellite System (GNSS) coverage. Pedestrian Dead Reckoning (PDR) is a popular method for indoor pedestrian navigation, but by its very principle even a small navigation error amplifies itself step by step, so supplementary resources are generally needed to maintain navigation accuracy. Virtually all mobile devices and most robots contain a basic camera sensor, which has made image-based localization popular. However, image-based localization requires continuous image capture for uninterrupted positioning, and the solutions provided by either image-based localization or PDR are usually in a relative coordinate system. Therefore, this research proposes a system that uses space resection-aided PDR with geo-referenced images of a previously mapped environment to enable seamless navigation and overcome the shortcomings of both PDR and image-based localization, and evaluates the performance of space resection under different assumptions using a smartphone. An indoor mobile mapping system (IMMS) is used for efficient production of the geo-referenced images. The preliminary results indicate that the proposed algorithm is suitable for universal pedestrian indoor navigation, achieving the accuracy required for commercial applications.

1. Introduction

A variety of indoor navigation technologies based on different principles and hardware have been developed over the last two decades [1]. Some of these methods depend on intensive or advanced infrastructure to achieve high accuracy, and may require a specific device to transmit or receive the corresponding signal. Similarly, accurate inertial-based methods usually position a sensor on a specific part of the user's body in an attempt to overcome the inaccuracy of consumer sensors [2]. Such requirements are costly and inconvenient, but the proliferation of smartphones has encouraged a technological revolution in indoor navigation. The various sensors in smartphones, such as a Global Navigation Satellite System (GNSS) chip, Wi-Fi, Bluetooth, accelerometer, gyroscope, magnetometer, camera, and even a barometer or an ambient light sensor, can conceivably be harnessed to assist in navigation. In addition to their various sensors, their ubiquity is another reason why smartphones seem ideal as personal mobile navigators. Many who use smartphones for outdoor navigation applications use systems based on GNSS. However, people actually spend 90% of their time indoors, which is a GNSS-denied environment [3]. Fortunately, the characteristics of the various sensors embedded in a smartphone enable advanced navigation technologies, and their complementarity can yield solid estimates for pedestrian indoor navigation. Retscher and Hecht show the feasibility of using different smartphones for tracking in location-based service (LBS) and other navigation applications [4]. Liu et al. propose a smartphone-based indoor positioning engine that relies solely on built-in hardware and computational resources [5]. Their research shows the potential of using smartphones for navigation. However, because smartphones are not dedicated positioning devices, the accuracy of their micro-electro-mechanical system (MEMS) sensors still leaves much to be desired, although better performance and smaller size can be expected in the near future.
Pedestrian dead reckoning (PDR) is one of the most commonly used technologies for pedestrian indoor navigation. PDR estimates the two-dimensional location of a pedestrian based on the accelerometer, gyro and magnetometer. For smartphone-based pedestrian navigation, the accelerometer is generally used for step counting, and the gyro and magnetometer are used for heading estimation, combined with a step length model to determine movement. The advantage of using the accelerometer to count steps is that it avoids the position errors accumulated by double integration of acceleration. To explain further, vibrations and misalignment (between the sensor body frame and the pedestrian frame) from holding the smartphone tend to produce noise and gravity projections on each axis, which lead to inaccurate position estimation by integration, especially with a cheaper inertial sensor.
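To make these mechanics concrete, the following minimal sketch (an illustration under stated assumptions, not the implementation used in this research; the threshold constants are placeholders) shows threshold peak detection on the accelerometer magnitude and the basic dead-reckoning position update from a step length and heading:

```python
import numpy as np

# Placeholder thresholds; real values must be tuned per device and user,
# which is precisely the generalizability problem discussed in the text.
ACC_PEAK_THRESHOLD = 11.0   # m/s^2, slightly above 1 g
MIN_STEP_INTERVAL = 0.3     # s, rejects double peaks within one stride

def detect_steps(acc_norm, t):
    """Threshold peak detection with a minimum time interval.
    acc_norm: accelerometer magnitudes sqrt(ax^2 + ay^2 + az^2);
    t: matching timestamps in seconds. Returns indices of step peaks."""
    steps, last_t = [], -np.inf
    for i in range(1, len(acc_norm) - 1):
        is_peak = acc_norm[i - 1] < acc_norm[i] > acc_norm[i + 1]
        if (is_peak and acc_norm[i] > ACC_PEAK_THRESHOLD
                and t[i] - last_t > MIN_STEP_INTERVAL):
            steps.append(i)
            last_t = t[i]
    return steps

def pdr_step(east, north, step_length_m, heading_rad):
    """One dead-reckoning update: advance the 2D position along the
    heading (measured clockwise from north) by one step length."""
    return (east + step_length_m * np.sin(heading_rad),
            north + step_length_m * np.cos(heading_rad))
```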
However, even though PDR has various advantages, it still suffers from algorithm errors. Firstly, step-count errors arise from an inadequately tuned algorithm combined with user and usage behavior. For example, when threshold peak detection along with a time interval is used, it will generally fail to detect steps at the beginning and end of a walk because the properties of steps at those stages are typically different [6]. These missed steps produce an accumulated position error depending on their number and step length, leading to the second error source: step length. Step length is often derived from empirical formulas based on acceleration, characteristics of individuals, and the corresponding coefficients. Unfortunately, most step length estimates fail to accurately reflect individual users' characteristics and walking habits. Weinberg reports that step length can vary by as much as 40% between individuals at a given walking speed, and by up to 50% across different walking speeds and individuals [7]. Ho et al. propose an adaptive step-length estimator based upon a Fast Fourier Transform smoother and a set of step-detection rules, which accurately estimates step length [8]. Tsai et al. compare three accurate empirical step length models and evaluate their PDR performance [9]. According to their research, an accurate estimation of step length generally requires pre-calibration or post-processing analyses to determine the optimal parameters for each individual, which cannot easily be carried out by the public. Meanwhile, a system that depends on tuned parameters usually has poor generalizability, which leads to the case where the demonstration system performs better than the production system due to the uncertainty of parameters [10]. In addition to tuning parameters, there are some advanced methods for detecting steps and accurately estimating step lengths in real time, such as a gyroscope attached to the knee [11] and an accelerometer attached to the waist [12]. Naturally, this entails mounting accurate sensors (at greater cost) on specific parts of the body, which will hardly appeal to general users who only have smartphones.
In addition to the above errors related to distance, a third error source is heading (azimuth). Magnetometers and gyros are generally used to provide data for heading and rotation, respectively. The heading derived from a magnetometer is based upon the measurement of the Earth's magnetic field, and there is no error accumulation with time. However, the measured magnetic fields are usually affected by hard- and soft-iron effects [13]. These effects can usually be calibrated because the relation between the magnetometer and the other magnetic components in a device is fixed. However, some environments contain many magnetic materials and devices with strong magnetic fields. These cause the magnetometer to lose efficacy and accuracy during sensing, and can even change the internal magnetic field, which invalidates the factory-set default compensation parameters. Ali et al. propose an advanced method called Particle Swarm Optimization (PSO)-based calibration to accurately estimate the bias and scale factor of low-cost magnetometers [14]. In addition, some real-time calibration methods, such as figure-8 rotation, have also emerged. However, they are not easy for the general public with little technical knowledge to adopt. Therefore, a magnetometer-based heading can still lose accuracy in a magnetically hostile environment and produce false values. On the other hand, a gyro is a relatively environment-independent sensor: its measurements are not affected by the environment, but it requires an initial heading, and its errors accumulate with time. Consequently, a magnetometer and a gyro are complementary, and used together they can provide better heading estimates than either one used separately. However, even if an integrated heading is used, the performance still depends on the tuned parameters of the integration algorithm, which is affected by the environmental magnetic field as well as the specifications of the gyro used. As mentioned, in order to maintain navigation accuracy, a PDR integrated with an external positioning system is needed.
There are many kinds of positioning technologies that can be integrated with PDR, such as received signal strength indicator (RSSI)-based Wi-Fi positioning [15], Bluetooth iBeacon [16], and indoor map aiding (including indoor landmarks and digital models) [17,18]. Some researchers further propose composite integrations such as a Magnetic/Wi-Fi/PDR system [19], Wi-Fi/map/PDR [20] and magnetic/landmark/PDR [21]. These studies indicate a consensus on developing integrated systems for indoor navigation. In 2016, one of the most famous mobile games, Pokemon GO, popularized augmented reality (AR) applications on smartphones. Many experts have reported the impact of this phenomenon on LBS. AR connects the real world and virtual information through the camera. Therefore, if users become accustomed to using the smartphone camera for LBS, indoor image-based localization can be considered an ideal positioning system to integrate with PDR.
An indoor image-based localization system is usually based on technologies of computer and robotic vision, such as simultaneous localization and mapping (SLAM) and visual odometry [22]. Stereo images taken by multiple cameras can be used to estimate relative position and attitude; similarly, successive images taken by a single camera can be used to estimate relative movement and rotation. Nunez et al. propose a novel visual odometry system using stereo cameras and compare four kinds of odometry methods [23]. Zhang et al. propose a novel RGB-D SLAM system based upon visual odometry and an extended information filter [24]. Image overlap plays an important role in those systems. On the other hand, some image-based methods match a query image to a reference image in a database and then assign the location of the reference image as the user's location. More advanced methods further estimate the relative position and attitude between the reference image and the user's image. Deretey et al. propose a method that uses images from a single monocular camera matched against a feature database to obtain the camera position in a previously mapped indoor environment [25]. Although there are differences between these image-based methods, all of them use feature recognition and image matching. Meanwhile, they need to take images continuously, which is impractical for general users navigating with their smartphones. Therefore, integration with other positioning systems should be considered to mitigate this inconvenience and increase the speed of image-based methods. Grießbach et al. propose a low-cost, stereo vision-aided inertial navigation system in which inertial measurements are used to constrain the range of image matching while tracking feature points [26]. However, this integrated system, like most image-based localization, navigates in a relative coordinate system. Therefore, some researchers use geo-referenced images to provide absolute coordinates for navigation. Liang et al. propose an image-based localization based upon a geo-referenced image database [27]. Li et al. propose a vision navigation approach based upon a geo-referenced image database to facilitate continuous and robust vehicle navigation, integrated with a Global Positioning System (GPS)/Inertial Navigation System (INS) integration system [28]. Similarly, space resection based on geo-referenced images can provide the position in an absolute coordinate system while requiring only a single query image. It still requires feature recognition and image matching; therefore, space resection integrated with PDR is applied in this research. In conclusion, traditional image-based localization requires multiple cameras or successive images and positions in a relative coordinate system. Space resection-aided PDR requires only one camera and very few images (space resection is invoked only when the PDR solution degrades, and only a single image is required for each position estimate). Meanwhile, the estimated position is in an absolute coordinate system.
Space resection is a photogrammetric method used to determine the six exterior orientation parameters (EOPs) of the exposure center of a single photograph [29]. The EOPs comprise the 3D position (X, Y, Z) and the attitude angles (omega, phi and kappa). They are solved using collinearity equations, a photographic image with a known principal distance, and at least three control points whose 3D ground coordinates are known and which appear in the image. The collinearity equations are based upon the condition that the camera, the control point (object point) and its corresponding image point all lie on a straight line. Since the collinearity equations are nonlinear, they are linearized using Taylor's theorem, so initial values of the exterior orientation parameters are needed for the iterative calculation. Poor initial values cause the space resection to diverge. Space resection using collinearity equations is a purely numerical method, which permits the use of the least squares method with redundant control points for the most probable estimation. Li et al. propose a hybrid image-based localization for seamless navigation based upon GNSS, a compass, a calibrated camera and space resection with geo-referenced images [30]. They believe such technologies have the potential for indoor and outdoor navigation using a smartphone. However, their initial values for space resection are provided by GNSS and an orientation sensor (compass for heading and accelerometer for roll and pitch) in both outdoor and indoor environments: they use smartphones outdoors because GNSS provides good initial values everywhere, and a video recorder on a moving vehicle indoors because only the first set of initial values is provided by GNSS, while the subsequent initial values are provided by the previous space resection solution, which requires frequent image taking. Furthermore, all the devices in their research are calibrated for better position accuracy. As mentioned, successively taking images and calibration are inconvenient for the general user. Therefore, our research proposes a space resection-aided PDR. PDR provides the initial values for space resection during indoor navigation; meanwhile, space resection maintains the PDR accuracy. On the other hand, the accuracy of space resection depends mainly on a calibrated camera. Therefore, this research also analyzes the effect of calibrated parameters with different control point geometries in a professional camera calibration laboratory, and then proposes an adaptive weighted least squares method to minimize the error caused by an un-calibrated smartphone camera.
This research proposes a novel usage of space resection to aid PDR with geo-referenced images in a previously mapped environment. The smartphone is the only device used. The proposed algorithm enables the PDR to provide continuous navigation between images, reducing the need to constantly take and process pictures, while also providing the initial values for the iterative calculation of indoor space resection. In turn, space resection reduces the accumulated error of PDR. Meanwhile, the position error of space resection caused by the use of an un-calibrated smartphone camera is reduced by applying the proposed adaptive weighting method. Finally, the proposed algorithm navigates in an absolute coordinate system, and avoids individual calibration, tuning parameters, environmental infrastructure, and wearable sensors, making it suitable for widespread application. Figure 1 shows the complete flowchart of this research. The first stage is the preparatory work of mapping the environment. The analysis of space resection is the second stage, which serves to better understand the characteristics of space resection using a smartphone camera. This analysis enables better estimation of smartphone-based space resection using the adaptive weighted least squares method, which reduces the error caused by un-calibrated interior orientation parameters (IOPs). The third stage is practical pedestrian indoor navigation implementing the proposed adaptive weighted space resection-aided PDR in a real scene. The geo-referenced images of a real scene in TWD97 (Taiwan Datum 1997) coordinates are obtained using the indoor mobile mapping system (IMMS) instead of a total station; the ground coordinates of the control points are then obtained by measuring the geo-referenced images. The IMMS is more efficient, with mapping accuracy ranging from the sub-meter to the meter level. Therefore, this research also evaluates the performance of space resection using the geo-referenced images of the IMMS. Moreover, a traditional survey with a total station is used for all the check points, which are used to analyze the accuracy of the proposed algorithm.

2. Methods

The core methods used in this research can be divided into three major parts: indoor mapping, space resection, and PDR, each described in one of the following sections. The first section explains the mapping methods used and where they are applied; it includes a traditional survey for check points and mobile mapping for fast generation of geo-referenced images. The use of IMMS technology is particularly complex and is related to GNSS/INS integration and direct geo-referencing. Direct geo-referencing is a technique that can be used for photogrammetry [29]. Geo-referencing means that the coordinate system of a map or image can be related to a ground coordinate system. Therefore, the ground coordinate of a point of interest in an image can be measured directly, and spatial information can be further extracted. The first section therefore provides only a brief description of the mapping procedure, its accuracy, and the specification of the applied system. The second section illustrates the concept and the analysis procedure of space resection. The analysis includes different scenarios to evaluate the effect of the IOPs and different control point geometries. The optimal weighted least squares method of space resection for an un-calibrated smartphone camera can then be determined. The third section illustrates the PDR algorithm used. The algorithm is deliberately simple because the purpose of this research is to evaluate the performance of an integrated positioning system for a smartphone user without any parameter tuning or PDR calibration.
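As a brief illustration of the direct geo-referencing idea, the following minimal sketch (under assumed conventions, not the IMMS production code) maps an image measurement to a ground coordinate given the camera's exterior orientation from GNSS/INS; the depth along the ray is treated as known here, whereas a real system obtains scale from stereo or multi-view geometry:

```python
import numpy as np

def direct_georeference(cam_pos, R_cam_to_ground, xy_image, c, depth):
    """Sketch of direct geo-referencing under assumed conventions.
    cam_pos: (3,) camera position in the ground frame (e.g., TWD97);
    R_cam_to_ground: (3,3) camera-to-ground rotation;
    xy_image: image coordinates relative to the principal point;
    c: principal distance; depth: known scale along the viewing ray."""
    ray_cam = np.array([xy_image[0], xy_image[1], -c])  # ray in camera frame
    ray_ground = R_cam_to_ground @ ray_cam
    ray_ground /= np.linalg.norm(ray_ground)            # unit direction
    return np.asarray(cam_pos) + depth * ray_ground
```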

2.1. Indoor Mapping

The contributions of indoor mapping in this research are the generation of geo-referenced images with absolute coordinates for space resection and the analysis of the accuracy of the proposed algorithm. Two methods of indoor mapping, survey mapping and mobile mapping, are used. The professional camera calibration laboratory must be fitted with accurately surveyed control points so that the analysis of space resection can focus on the effects of the IOPs and the geometry of the control points. To accomplish this, a traditional survey with a total station is performed at centimeter-level accuracy in order to minimize control point error. In addition, all check points used in this research are surveyed by the total station.
Since a traditional survey for geo-referenced images can be very time-consuming, depending on the total number of control points, the IMMS was developed to collect environmental images quickly so that an arbitrary number of control points can be easily measured. This procedure is performed using a photogrammetric technique called direct geo-referencing, which requires accurate camera position and attitude. To accomplish this, self-developed software was used to precisely estimate the trajectory of the IMMS based upon an Extended Kalman Filter (EKF), the constraint algorithms Non-Holonomic Constraint (NHC) and Zero Velocity Update (ZUPT), and a Rauch-Tung-Striebel (RTS) smoother. Because three different KFs are used in this research, the EKF used for the IMMS is called the MEKF for distinction. Although the matrix operations of the prediction and update stages are the same for all three filters, the related matrices differ, such as their state vectors. In the mapping procedure, a large GNSS-denied environment is harsh for an INS/GNSS integration system, even when using the MEKF, smoothing, NHC and ZUPT. Therefore, according to the specification of the INS used, the vehicle velocity and past experience, control points are surveyed by the total station every 200 m. Those control points are measured in an image for space resection, and the position and attitude of the IMMS are then estimated to update the MEKF solution at that epoch. The IMMS was initialized outdoors and driven in two small circles to stabilize the MEKF before entering the experimental indoor field. In the indoor environment, the trajectory of the IMMS can then be accurately estimated for geo-referencing. Information regarding the software can be found in our previous research [31]. With respect to the system components, the IMMS developed by National Cheng Kung University (NCKU) is an INS/GNSS integration system consisting of a GNSS receiver, a navigation-grade Inertial Measurement Unit (IMU), a 360-degree spherical camera system, a power supply and an industrial computer, as shown in Figure 2a. Similar systems can be found in [32,33,34]. A second piece of self-developed software is used to produce the geo-referenced images with TWD97 coordinates, as shown in Figure 2b. The accuracy of the produced geo-referenced images is 1.03 m (Root Mean Square Error, RMSE) in the test field, based on 39 check points.
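As one example of these constraints, the following sketch shows a Zero Velocity Update as a Kalman measurement update (a simplified, assumed formulation for illustration; the actual MEKF state definition and noise values in the self-developed software are not reproduced here):

```python
import numpy as np

def zupt_update(x, P, H_vel, R_zupt=None):
    """Sketch of a Zero Velocity Update (ZUPT): when the vehicle is
    detected as stationary, zero velocity is fed to the filter as a
    pseudo-measurement to bound INS drift.
    x, P: filter state vector and covariance;
    H_vel: (3, n) rows of the design matrix selecting the velocity states."""
    if R_zupt is None:
        R_zupt = 1e-4 * np.eye(3)            # assumed measurement noise
    z = np.zeros(3)                          # pseudo-observation: velocity = 0
    S = H_vel @ P @ H_vel.T + R_zupt
    K = P @ H_vel.T @ np.linalg.inv(S)
    x = x + K @ (z - H_vel @ x)
    P = (np.eye(len(x)) - K @ H_vel) @ P
    return x, P
```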

2.2. Space Resection

Space resection is based on the collinearity equations. An important principle is that the object point, its corresponding image point and the camera lie on a straight line. The collinearity equations with additional parameters are shown in Equations (1) and (2):
$$x_a = x_p - \Delta x - c\left[\frac{m_{11}(X_A - X_L) + m_{12}(Y_A - Y_L) + m_{13}(Z_A - Z_L)}{m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)}\right] \tag{1}$$

$$y_a = y_p - \Delta y - c\left[\frac{m_{21}(X_A - X_L) + m_{22}(Y_A - Y_L) + m_{23}(Z_A - Z_L)}{m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)}\right] \tag{2}$$
where $(X_A, Y_A, Z_A)$ is the ground coordinate of object point A; the matrix $m$ is a 3D rotation matrix whose elements are composed of the three attitude angles omega, phi and kappa; $(X_L, Y_L, Z_L)$ is the ground coordinate of the camera. The three attitude angles and the camera location are the elements of the EOPs. $(x_a, y_a)$ is the measured image coordinate of object point A; $(x_p, y_p)$ and $c$ are the principal point offsets and principal distance of the camera, respectively, which compose the IOPs. Furthermore, the generalized IOPs include the additional parameters $\Delta x$ and $\Delta y$, which represent systematic errors of the camera, such as lens distortion.
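To ground Equations (1) and (2), the following minimal sketch (our illustration, not the implementation used in this research) projects a control point into image coordinates and solves for the six EOPs by Gauss-Newton iteration. For brevity, the Jacobian is formed numerically, whereas the analytic partial derivatives of [29] would be used in practice; the rotation convention follows Equation (4):

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """DCM m from omega, phi, kappa (sequential rotations about the
    x', y', z' axes), consistent with Equation (4)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(eop, ground_pt, c, xp=0.0, yp=0.0):
    """Equations (1)-(2) with delta_x = delta_y = 0 (un-calibrated case):
    predicted image coordinates of one control point."""
    m = rotation_matrix(*eop[3:])
    d = m @ (np.asarray(ground_pt) - np.asarray(eop[:3]))
    return np.array([xp - c * d[0] / d[2], yp - c * d[1] / d[2]])

def space_resection(eop0, ground_pts, image_pts, c, iters=10, h=1e-6):
    """Gauss-Newton iteration for the six EOPs (X, Y, Z, omega, phi, kappa).
    Good initial values eop0 are essential, which is exactly what PDR
    supplies in the proposed system."""
    eop = np.array(eop0, dtype=float)

    def residuals(e):
        return np.concatenate([img - project(e, gp, c)
                               for gp, img in zip(ground_pts, image_pts)])

    for _ in range(iters):
        r = residuals(eop)
        J = np.zeros((r.size, 6))
        for j in range(6):
            e = eop.copy()
            e[j] += h
            J[:, j] = (r - residuals(e)) / h   # = d(predicted)/d(EOP_j)
        eop += np.linalg.lstsq(J, r, rcond=None)[0]
    return eop
```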
Space resection estimates the EOPs of a smartphone camera as the user's location. Each object point with a known ground coordinate serves as a control point, enabling the construction of two equations based on the collinearity equations. Therefore, at least three control points are needed to solve the unknown EOPs (six parameters). Meanwhile, more control points provide redundant measurements for better estimation by least squares. However, the relation between observations and unknowns is nonlinear, so the equations must be linearized using Taylor's theorem. The collinearity equations can then be used to derive partial derivatives with respect to the unknown EOPs, whose exact form can be found in [29]. Initial values of the unknown EOPs are therefore needed for the iterative calculation due to the use of linearized equations. In our indoor research, the initial values of attitude and position are given by the orientation sensor and the PDR algorithm, respectively. A special case is the first location at the beginning of navigation, which cannot be provided by PDR because it performs relative positioning; an initial outdoor location provided by GNSS is therefore needed. The initial position values are in the ground coordinate system for space resection. However, the attitude from the orientation sensor is in the phone frame, as shown on the left of Figure 3. When the user takes an image, the attitude given by the orientation sensor is relative to the local level frame (North, East, down). In addition, the initial values of the rotation angles corresponding to the three axes of the camera frame are required for space resection, as shown on the right of Figure 3. Therefore, the following equations are used to determine the three rotation angles from the orientation sensor:
$$\mathbf{s} = \mathbf{z}(\gamma + 90^{\circ})\,\mathbf{x}(\beta)\,\mathbf{y}(\alpha) \tag{3}$$

$$\mathbf{p} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} = \mathbf{z}'(\kappa)\,\mathbf{y}'(\varphi)\,\mathbf{x}'(\omega) \tag{4}$$

$$\mathbf{p} = \mathbf{s}^{T} \tag{5}$$
where $\mathbf{s}$ is the three-dimensional rotation matrix of the smartphone in the local level frame; $\mathbf{x}$, $\mathbf{y}$ and $\mathbf{z}$ are the rotation matrices about the x, y and z axes of the smartphone, respectively; $\alpha$, $\beta$ and $\gamma$ are the roll, pitch and heading, respectively, given by the orientation sensor. When the user takes the image (with the y axis pointing to the user's left), the heading needs an additional 90 degrees since the sensor's heading is relative to the y axis. $\mathbf{p}$ is the three-dimensional rotation matrix of the camera frame; $r_{ij}$ is an element of the rotation matrix; $\mathbf{x}'$, $\mathbf{y}'$ and $\mathbf{z}'$ are the rotation matrices about the x', y' and z' axes in the camera frame; $\omega$, $\varphi$ and $\kappa$ are the rotation angles about the x', y' and z' axes, respectively. Therefore, omega, phi and kappa can be estimated from the inverse trigonometric functions, $r_{ij}$ and the corresponding definitions in the direction cosine matrix (DCM); details can be found in [29]. Finally, all the initial values required for space resection are obtained.
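Under the same conventions, a sketch of the conversion from orientation-sensor output to the initial omega, phi and kappa via Equations (3)-(5); the DCM-element extraction formulas are standard [29], and angles are in radians:

```python
import numpy as np

def rx(a):  # elementary rotation about the x axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def ry(a):  # elementary rotation about the y axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def rz(a):  # elementary rotation about the z axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def camera_angles_from_orientation(roll, pitch, heading):
    """Build s with the 90-degree heading offset (Equation (3)), take
    p = s^T (Equation (5)), then read omega/phi/kappa from the DCM
    elements of p, whose form follows Equation (4)."""
    s = rz(heading + np.pi / 2) @ rx(pitch) @ ry(roll)   # Equation (3)
    p = s.T                                              # Equation (5)
    omega = np.arctan2(-p[2, 1], p[2, 2])
    phi = np.arcsin(p[2, 0])
    kappa = np.arctan2(-p[1, 0], p[0, 0])
    return omega, phi, kappa
```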
Moreover, the intersection geometry is important for space resection. Figure 4 shows the difference between good and bad intersection geometry, where the red and blue points are assumed to be two control points. Each circle represents the distance between the intersection point and a control point, and the thickness of the circle represents the range of the distance error; in other words, the gap between two circles of the same color represents the distance error. The overlap area of the two control points' circles shows the possible location of the intersection point: the larger the area, the more uncertain the intersection point. In Figure 4, the distance between the two control points in the example on the left is larger than in the example on the right, representing a better intersection angle and geometry. Therefore, the intersection area of the left example is smaller than that of the right example, and the uncertainty of the intersection point location is also smaller and almost the same in every direction. By contrast, the intersection area of the right example is larger, indicating a larger intersection uncertainty. As a result, the dilution of precision (DOP) values of the left case are smaller than those of the right case. For this reason, the analysis of intersection geometry based on position accuracy and DOP is discussed in the results section.
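The DOP values discussed here can be computed from the normal matrix of the linearized collinearity equations. A short sketch, assuming the Jacobian layout of the resection sketch above (first three columns position, last three attitude):

```python
import numpy as np

def dops(J):
    """PDOP and ADOP from the cofactor matrix of the linearized
    collinearity equations. Good intersection geometry makes the
    normal matrix well-conditioned and the DOP values small."""
    Qxx = np.linalg.inv(J.T @ J)              # cofactor matrix of the EOPs
    pdop = np.sqrt(np.trace(Qxx[:3, :3]))     # position dilution of precision
    adop = np.sqrt(np.trace(Qxx[3:, 3:]))     # attitude dilution of precision
    return pdop, adop
```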
An analysis of space resection has been implemented using images from two kinds of smartphone cameras. The professional camera calibration laboratory has a wall with artificial landmarks, as shown in Figure 5; the control points have been surveyed by the total station at centimeter-level accuracy. The results of the present research indicate that control point location contributes differently to space resection, which motivates an adaptive weighted least squares method for more accurate space resection based on the distance between the image points and the principal point. To determine the weight of each image point as a function of this distance, a quartic polynomial regression is used. The optimal weight of each image point can then be decided for a consumer-grade smartphone camera, and the accuracy of space resection is improved without camera calibration.

2.3. Pedestrian Dead Reckoning

In addition to space resection, the other component of the proposed algorithm is PDR. The architecture using space resection as the external aiding resource for PDR is shown in Figure 6. There are two Kalman Filters (KFs), one for heading and one for position estimation, named the heading KF (HKF) and position KF (PKF), respectively, for distinction in this research.
The pedometer uses threshold peak detection along with a time interval to identify steps. In addition to the pedometer, the data for the HKF are based on two kinds of headings, calculated from the magnetometer and gyro, respectively. The heading provided by the gyro requires an initial heading from the magnetometer. The gyro heading is smooth and independent of the environment, but its errors accumulate quickly over time. The magnetic heading, on the other hand, is affected by the environment, which causes unsmooth heading measurements but no accumulated error. Taking these differences into account, the HKF estimates the integrated heading and gyro bias, combining the advantages of both to obtain a smooth heading and reduce error drift. The prediction stage of the HKF is as follows:
$$\mathbf{x}_k^{HKF} = \begin{bmatrix} \delta\psi_k \\ \delta b_{\psi,k} \end{bmatrix} = \Phi_{k-1}^{HKF}\,\mathbf{x}_{k-1}^{HKF} + \mathbf{w}_k^{HKF} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}\begin{bmatrix} \delta\psi_{k-1} \\ \delta b_{\psi,k-1} \end{bmatrix} + \mathbf{w}_k^{HKF}, \quad \mathbf{w}_k^{HKF} \sim N(0,\ Q_k^{HKF}), \quad Q_k^{HKF} = \mathrm{diag}(0.01,\ 0.01) \tag{6}$$

$$P_k^{HKF} = \Phi_{k-1}^{HKF}\,P_{k-1}^{HKF}\,(\Phi_{k-1}^{HKF})^{T} + Q_k^{HKF} \tag{7}$$
where $\mathbf{x}_k^{HKF}$ is the state vector of the HKF at epoch k; $\delta\psi$ is the heading error; $\delta b_{\psi}$ is the gyro bias error; $\mathbf{w}_k^{HKF}$ is the gyro system noise (assumed Gaussian); $Q_k^{HKF}$ is the covariance matrix of the gyro system noise; $\Phi_{k-1}^{HKF}$ is the transition matrix of the HKF, which relates the states at epochs k and k − 1; $P_{k-1}^{HKF}$ is the covariance matrix of the HKF state vector. The update stage of the HKF is:
$$K_k^{HKF} = P_k^{HKF}(H^{HKF})^{T}\left(H^{HKF}P_k^{HKF}(H^{HKF})^{T} + R_k^{HKF}\right)^{-1}, \quad H^{HKF} = [1 \ \ 0], \quad R_k^{HKF} = [0.25 \ \ 0.25] \tag{8}$$

$$\hat{\mathbf{x}}_k^{HKF} = \mathbf{x}_k^{HKF} + K_k^{HKF}\left(z_k^{HKF} - H^{HKF}\mathbf{x}_k^{HKF}\right), \quad z_k^{HKF} = A_m - A_g \tag{9}$$

$$\hat{P}_k^{HKF} = \left(I - K_k^{HKF}H^{HKF}\right)P_k^{HKF}\left(I - K_k^{HKF}H^{HKF}\right)^{T} + K_k^{HKF}R_k^{HKF}(K_k^{HKF})^{T} \tag{10}$$
where $K_k^{HKF}$ is the Kalman gain of the HKF; $H^{HKF}$ is the design matrix of the HKF for the measurement; $R_k^{HKF}$ is the covariance matrix of the measurement at epoch k; $\hat{\mathbf{x}}_k^{HKF}$ is the updated state vector of the HKF at epoch k; $z_k^{HKF}$ is the observation of the HKF, which is the difference between the magnetic heading $A_m$ and the gyro heading $A_g$ at epoch k. Using this system, PDR estimates the position based on the integrated heading from the HKF. Furthermore, the step length is computed from an empirical formula that has shown good performance:
$$L_k = \left(0.7 + a(H - 1.75) + b(F_k - 1.79)\,\frac{H}{1.75}\right)c \tag{11}$$
In this equation, $L_k$ is the step length of the k-th step; a, b and c are tuning parameters; H is the height of the user; and $F_k$ is the walking frequency estimated at step k. However, external aids such as space resection are used in order to reduce the error accumulation of PDR, as well as to avoid model tuning. Therefore, the parameters (a, b and c) in Equation (11) are the default values from reference [35]. Chen et al. determined those parameters from measurements of 33 walking scenarios using 11 people. We also compared different empirical formulas in past research [9], and this formula performed best. For the PKF, the state vector and related matrices are shown in the following equations:
$$\mathbf{x}_k^{PKF} = \begin{bmatrix} E_k & N_k & L_k & b_{E,k} & b_{N,k} & b_{L,k} \end{bmatrix}^{T}, \quad H^{PKF} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix} \tag{12}$$

$$Q_k^{PKF} = \mathrm{diag}(1,\ 1,\ 0.1,\ 1,\ 1,\ 0.1) \tag{13}$$

$$z_k^{PKF} = \begin{bmatrix} E_k^{sr} - E_k^{pdr} \\ N_k^{sr} - N_k^{pdr} \end{bmatrix}, \quad R_k^{PKF} = \mathrm{diag}(0.01,\ 0.01) \tag{14}$$

$$\Phi_{k-1}^{PKF} = \begin{bmatrix} 1 & 0 & \vartheta & 1 & 0 & 0 \\ 0 & 1 & \eta & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \tag{15}$$
where $\mathbf{x}_k^{PKF}$ is the state vector of the PKF; $H^{PKF}$ is the design matrix of the PKF for the measurement; $z_k^{PKF}$ is the observation of the PKF, which is the difference between the positions of space resection and PDR at epoch k; $E_k^{sr}$ and $E_k^{pdr}$ are the east coordinates of space resection and PDR, respectively; $N_k^{sr}$ and $N_k^{pdr}$ are the north coordinates of space resection and PDR, respectively; $R_k^{PKF}$ is the covariance matrix of the measurement at epoch k; $\Phi_{k-1}^{PKF}$ is the transition matrix of the PKF; $L$ is the step length; $\vartheta$ and $\eta$ are second-order Taylor series expansions of the sine and cosine functions; $b_{E,k}$ is the offset for east; $b_{N,k}$ is the offset for north; $b_{L,k}$ is the bias of the step length; and k denotes the k-th step. The matrix operations of the prediction and update stages of the PKF are the same as for the HKF. The PKF estimates the integrated position from space resection and PDR. The innovation is based on the difference between the positions of PDR and space resection; in other words, the position estimated by space resection is used to update the PKF. As shown in these equations, the proposed space resection aids the PDR estimates of the user's location and heading based on the inertial sensors and smartphone camera. Although there are PDR errors caused by the pedometer, step length and heading estimation, the PDR algorithm is kept simple in order to emphasize the effect of integration, and it requires no calibration or parameter tuning for individual users or environments. Once an image is taken, the space resection solution serves as a constraint for PDR by resetting the accumulated error; meanwhile, the PDR provides the initial values for later space resection. Finally, the smartphone user can navigate indoors easily.
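As a compact illustration of Equations (6)-(15), the following sketch (a simplified, assumed formulation, not the authors' production code) implements the HKF, the step length model of Equation (11) with unspecified coefficients a, b, c from [35], and the PKF updated by a space resection fix. Since the HKF measurement is scalar here, R is taken as 0.25, and exact sin/cos stand in for the second-order Taylor terms $\vartheta$ and $\eta$:

```python
import numpy as np

def step_length(height_m, step_freq_hz, a, b, c):
    """Equation (11). a, b, c are the default coefficients from Chen
    et al. [35]; their numeric values are not reproduced here."""
    return (0.7 + a * (height_m - 1.75)
            + b * (step_freq_hz - 1.79) * height_m / 1.75) * c

class HeadingKF:
    """Sketch of the HKF (Equations (6)-(10)): state [delta_psi, delta_b],
    measurement = magnetic heading minus gyro heading."""
    def __init__(self):
        self.x = np.zeros(2)
        self.P = np.eye(2)
        self.Q = np.diag([0.01, 0.01])
        self.R = np.array([[0.25]])      # scalar measurement noise (assumed)
        self.H = np.array([[1.0, 0.0]])

    def predict(self, dt):
        Phi = np.array([[1.0, dt], [0.0, 1.0]])           # Equation (6)
        self.x = Phi @ self.x
        self.P = Phi @ self.P @ Phi.T + self.Q            # Equation (7)

    def update(self, mag_heading, gyro_heading):
        z = np.array([mag_heading - gyro_heading])        # z = A_m - A_g
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)          # Equation (8)
        self.x = self.x + K @ (z - self.H @ self.x)       # Equation (9)
        IKH = np.eye(2) - K @ self.H
        self.P = IKH @ self.P @ IKH.T + K @ self.R @ K.T  # Equation (10)
        return gyro_heading + self.x[0]                   # corrected heading

class PositionKF:
    """Sketch of the PKF (Equations (12)-(15)): state
    [E, N, L, b_E, b_N, b_L]; a space resection fix is the update."""
    def __init__(self, east0, north0, step_len0):
        self.x = np.array([east0, north0, step_len0, 0.0, 0.0, 0.0])
        self.P = np.eye(6)
        self.Q = np.diag([1.0, 1.0, 0.1, 1.0, 1.0, 0.1])  # Equation (13)
        self.R = np.diag([0.01, 0.01])                    # Equation (14)
        self.H = np.zeros((2, 6))
        self.H[0, 0] = self.H[1, 1] = 1.0                 # Equation (12)

    def predict_step(self, heading_rad):
        # exact sin/cos in place of the Taylor terms theta, eta
        Phi = np.eye(6)                                   # Equation (15)
        Phi[0, 2], Phi[1, 2] = np.sin(heading_rad), np.cos(heading_rad)
        Phi[0, 3] = Phi[1, 4] = Phi[2, 5] = 1.0
        self.x = Phi @ self.x
        self.P = Phi @ self.P @ Phi.T + self.Q

    def update_with_resection(self, east_sr, north_sr):
        z = np.array([east_sr, north_sr])                 # space resection fix
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)       # innovation = SR - PDR
        self.P = (np.eye(6) - K @ self.H) @ self.P
```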

3. Results and Discussion

The following sections provide the analysis results of space resection and the proposed integrated algorithm. The smartphones used are an HTC M8 (HTC Inc., New Taipei City, Taiwan) and an iPhone 5S (Apple Inc., Cupertino, CA, USA). The official specifications of these smartphone cameras are shown in Table 1. The analysis of space resection was carried out in a professional camera calibration laboratory. The proposed space resection-aided PDR was implemented in an underground parking lot mapped by the IMMS.

3.1. Analysis of Space Resection Using Smartphone

A smartphone camera located 4 m in front of the center of a wall took horizontal images for the following experiments. The height of the smartphone camera was set by a tripod at about 1.5 m. After the pictures were taken, space resection used control points with different intersection angles and numbers of control points to evaluate their influence in terms of position error and DOPs. The north/south, east/west and height directions correspond to the depth, x and y of the image in this field. Therefore, the east and height directions have similar characteristics, both being parallel to the image plane. However, they also differ somewhat because the horizontal (east direction) and vertical (height) intersection angles are different due to the orientation of the image. In addition, the north direction corresponds to the depth, which has different characteristics from the others. The results indicate the accuracy and influence of un-calibrated smartphone cameras for space resection, and suggest ways to improve it.

3.1.1. Intersection Angle

The control points used to evaluate the effect of intersection angles, which range from 5 to 52 degrees horizontally, are shown as red points in Figure 7. Table 2 and Table 3 show the position errors at different intersection angles using the iPhone and HTC, respectively. The horizontal intersection angles are shown in the second column, with the vertical intersection angles in parentheses beside them. Table 2 and Table 3 also show the position errors for three cases that use different IOP values. The first case uses space resection with un-calibrated IOPs, meaning that the principal distance is provided by the image file and the other IOPs are assumed to be zero. The second case uses the calibrated principal distance and principal point offsets, while the third case additionally uses the additional parameters, such as lens distortion correction. These cases were designed under several assumptions. For the first case, it is difficult for general users to obtain calibrated IOPs, and manufacturers do not publish detailed specifications of their smartphone cameras. For the second case, users can obtain the principal distance recorded in the image file (even if it is inaccurate now, it is expected to become more accurate in the future). Finally, the third case is designed for professional applications with completely calibrated IOPs. These cases therefore allow the performance in different situations to be compared.
Since the results of space resection in different directions are highly correlated with each other, a comparison of the RMSE over all directions is more comprehensive. A comparison between the RMSE of the first and second cases shows a significant improvement once the calibrated principal distance is used. In addition, a comparison between the RMSE of the second and third cases shows a slight improvement due to the use of fully calibrated IOPs with lens distortion correction. It is clear that the error caused by an inaccurate principal distance is larger than the error caused by missing lens distortion correction. Moreover, the RMSEs of the first case show opposite effects for the principal distances of the two smartphones. This is because the differences between the recorded and calibrated principal distances of the two phones have opposite signs (the calibrated principal distances of the iPhone and HTC are 4.19 and 3.79 mm, respectively, while the recorded values are both 4 mm). The RMSEs of the second case are significantly better than those of the first case, and show the same characteristics for both smartphones because the more accurate principal distances are used. However, the RMSEs at a horizontal intersection angle of 23 degrees are the best, because a smaller intersection angle gives poorer intersection geometry, while a larger intersection angle uses control points with larger lens distortion. Control points with larger intersection angles are far from the principal point, where lens distortion is inherently larger. Therefore, the RMSEs of the third case at large intersection angles are much better than those of the other cases because of the lens distortion correction. In conclusion, the principal distance, lens distortion and intersection angle seriously affect the performance of space resection, in that order of importance. However, the accuracy of space resection without any calibrated IOPs is still acceptable for pedestrian indoor navigation, where the required accuracy is only at the meter level.
In addition to position error, the DOPs are mainly dependent on the intersection angle in this analysis, as shown in Figure 8. The PDOP values are about 6, 27 and 127 for the iPhone and 8, 35 and 180 for the HTC phone, corresponding to intersection angles of 52, 23 and 5 degrees, respectively. The ADOP values are about 2, 7 and 34 for the iPhone and 2, 9 and 47 for the HTC phone, for the same angles. The larger the intersection angle, the more significantly the DOP values decrease. However, the intersection angle is not the only factor that affects position accuracy. Even if all the calibrated IOPs are used, better DOPs do not automatically yield better position accuracy, as shown in Table 2 and Table 3. Nevertheless, lower DOPs indicate better geometry for the iterative calculation and collinearity conditions, which results in lower uncertainty for convergence.

3.1.2. Quantity of Control Points

In addition to the intersection angle, the quantity of control points is also an important factor because there may not always be sufficient features in the navigated environment. In order to evaluate the effect of the number of control points on space resection, different numbers of control points are used in this analysis. Tests were done with 4, 8, 18 and 32 control points, all evenly distributed on the image, as shown in Figure 9; this arrangement is intended to exclude the influence of intersection angle and geometry. All conditions, such as the direction, imaging distance and the settings of Cases 1 to 3, are the same as in the previous tests. Therefore, the north, east and height directions correspond to the depth, x and y of the image. The different cases use calibrated IOPs differently. Table 4 and Table 5 show the position errors corresponding to different numbers of control points in the three cases. A comparison of the RMSE of the first and second cases clearly shows a significant improvement due to the use of the calibrated principal distance. Moreover, the comparison of the RMSE between the second and third cases again shows a slight improvement because of the lens distortion correction. However, there is no significant improvement when more control points are used. The comparisons across all cases show that merely adding control points does not reduce the error caused by inaccurate IOPs. Therefore, the DOPs are analyzed further.
Figure 10 shows the values of PDOP and ADOP for the two smartphones. When more control points are used, better DOP values are obtained. However, the position accuracy shows no significant improvement as the DOPs become smaller, because the intersection geometry is already good enough with only four control points (the DOPs are already small, and more control points only slightly improve them). This shows that the few control points used in this analysis suffice for good intersection geometry, which successfully estimates the solution with low uncertainty. Therefore, more control points are only meaningful in practical applications when the added points significantly improve the intersection angle and geometry. However, more control points also provide more redundant observations for least squares, which improves the reliability of the estimation. Therefore, DOPs represent not only better geometry but also better reliability, which makes the iterative calculation converge with lower uncertainty and the best possible estimation.

3.1.3. Adaptive Weighted Least Squares

According to the above analyses, accurate IOPs significantly improve the position accuracy of space resection. The assumption of Case 2 is a plausible situation for future smartphone cameras, in which the principal distance recorded in the image file will be more accurate, while the other IOPs, such as lens distortion, remain difficult to obtain. Therefore, we propose an adaptive weighted least squares approach for space resection using a smartphone to reduce the error caused by lens distortion, and then apply it in further positioning applications. According to Section 3.1.1, the remaining error of Case 2 is mainly from lens distortion after applying the calibrated principal distance. The position error of Case 2 becomes larger when the control points used are far from the principal point; such control points have better intersection angles but larger lens distortion. Finding the best balance between these two factors provides the optimal estimation, reduces the error caused by lens distortion and leads to lower uncertainty. Figure 11 shows two scenarios for selected control points. The selected control points are based on the premise of even distribution and have various distances from the principal point. The following analysis evaluates the optimal weight of control points based on their distance from the principal point without the use of any calibrated IOPs. Furthermore, each smartphone took three images at different times to evaluate the repeatability of the proposed method.
Table 6 and Table 7 show the position RMSEs for the two scenarios using the two smartphones. The three images were taken by each smartphone at different times. The first row shows the power of distance, which determines the weight of each control point as a function of its distance from the principal point: a positive power gives larger weight to image points farther from the principal point, while a negative power gives them smaller weight. Figure 12 shows the quartic polynomial fitting of the optimal weight corresponding to different powers of distance and the normalized position error from the tables, which illustrates the characteristic more clearly. The blue dotted lines show the results for the HTC, and the red dotted lines the results for the iPhone. The solid lines represent the fitting results corresponding to the iPhone, the HTC, and both combined, in red, blue and green, respectively. The results indicate that the power of distance should be −3 or −4 for optimal estimation of space resection when consumer-grade smartphone cameras are used with un-calibrated IOPs. This corresponds with the findings of the previous analyses: the intersection angle has a smaller effect for close-range photogrammetry using a smartphone camera, but more distant control points suffer greater lens distortion, which affects the position accuracy. Therefore, the adaptive weighted least squares method improves the position error by about five to ten centimeters by reducing the effect of lens distortion. However, two cases (Images 1 and 3 of the HTC) do not match this conclusion because their original position errors with equal weights are already quite small. These two cases have already reached the limiting accuracy of about fifteen centimeters RMSE, since the error caused by the inaccurate principal distance and other observational errors still remains.
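A sketch of how the adaptive weights could be formed (an illustration under the premise stated above; the normalization step is our own assumption for numerical stability):

```python
import numpy as np

def adaptive_weight_matrix(image_pts, principal_pt=(0.0, 0.0), power=-3):
    """Weight each control point by (distance to principal point) ** power.
    The analysis above suggests power = -3 or -4 for un-calibrated
    consumer-grade cameras, down-weighting far points whose lens
    distortion is larger. Each point's two collinearity equations
    share one weight."""
    d = np.linalg.norm(np.asarray(image_pts, dtype=float)
                       - np.asarray(principal_pt), axis=1)
    w = d ** power
    return np.diag(np.repeat(w / w.sum(), 2))

# The weighted normal equations then replace J^T J by J^T W J, i.e.,
# delta = inv(J^T W J) @ J^T W r in each Gauss-Newton iteration.
```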
In conclusion, both analyses show the importance of calibrated IOPs from the viewpoint of position accuracy. Meanwhile, the intersection angle and the number of control points affect the DOPs and play important roles in the collinearity equations, least squares and the iterative calculation, all of which enhance the reliability of the estimation. Good control point geometry is more important than using more control points, and the effect of the IOPs is more significant than the intersection geometry. Unfortunately, calibrating each smartphone camera for complete IOPs is not practical for a universal application. However, the image file records the principal distance, and this recorded value is expected to become more accurate in the future, leaving an error caused mainly by lens distortion. The adaptive weighted least squares method is proposed based upon the discovered characteristics of a smartphone camera, balancing intersection angle against lens distortion. The optimal weighting was decided after performing experiments: the weight of control points farther from the principal point should be smaller because of the increased lens distortion of consumer-grade smartphone cameras. After modifying the weighting accordingly, the position error of space resection is reduced by about ten centimeters.

3.2. Proposed Space Resection-Aided PDR

The adaptive weighted space resection is now integrated with PDR, reducing the effect of inaccurate IOPs. Figure 13 shows the experimental route taken by four participants, each using one of two smartphones (iPhone 5S or HTC M8). They held the phone and walked along the experimental route. The total walking distance was around 566 m. The route started outdoors, went into an underground parking lot, and finally returned to the original position. The participants, named A, B, C and D, were three males and one female with heights of 1.70, 1.87, 1.67 and 1.57 m, respectively. The light blue box zooms in on part of the underground parking lot. The black square indicates the starting and ending point; the experimental route was a closed path. The light green triangles are the check points, located at the corners. It is difficult for a user to hold the smartphone while simultaneously carrying accurate sensors with the corresponding power supply and computer, so it is quite difficult to create an accurate reference trajectory. With this in mind, we chose points at the corners as the check points, which can be easily identified on the estimated trajectory. The check points were surveyed by the total station at centimeter-level accuracy. The yellow points are the locations where adaptive weighted space resection with an un-calibrated smartphone camera was implemented; in other words, only six images were used during the navigation. The un-calibrated IOPs used the principal distance from the image file and assumed the principal point offsets and lens distortion to be zero. In addition, the yellow points were also surveyed by the total station for the accuracy analysis of space resection.
Table 8 and Table 9 show the accuracy of space resection in this field (yellow points in Figure 13) with geo-referenced images provided by the IMMS. The errors shown in the tables represent the RMSE of six check points in the east/west and north/south directions. The horizontal error is the RMS value of the east and north RMSEs. Because height and attitude are not considered in the proposed system, they are not discussed. The accuracy of space resection in this field is about one meter, significantly worse than the accuracy in the camera calibration laboratory, because the ground coordinates were measured from the geo-referenced images. In other words, the geo-referenced images provided by the IMMS are less accurate than a traditional survey (laboratory conditions). Furthermore, candidates for control points are rare in underground parking lots, so the best intersection geometry cannot be achieved, causing poor reliability and estimation errors. However, an accuracy of one meter is acceptable and effective for PDR, since the accuracy of the stand-alone PDR algorithm is significantly worse, especially without any model calibration, parameter tuning or accurate sensors: missing just two steps in the PDR can produce an error of over one meter.
Figure 14 and Figure 15 show the estimated trajectories of the four participants using the iPhone 5S and HTC M8, respectively. The coordinate system is TWD97, which can be easily transformed to the World Geodetic System 1984 (WGS84). However, the starting location coordinates are subtracted from all trajectory solutions for a clearer illustration. The proposed space resection-aided PDR is named S-PDR in the following figures and tables. The dark green triangles (the same as the light green triangles and yellow points in Figure 13) represent the true locations surveyed by the total station, which are used as references for comparison. The red line represents the trajectory estimated by the PDR algorithm with the pedometer using threshold peak detection along with time intervals, an empirically formulated step length, and an integrated heading based upon gyro and magnetometer data. The blue dotted line represents the trajectory estimated by the proposed space resection-aided PDR. The red and blue squares are the end locations of PDR and space resection-aided PDR, respectively. The black square is the starting point of both trajectories. The trajectories of the two algorithms overlap at the beginning, and then diverge after the first space resection update. In order to evaluate the improvement of the proposed algorithm and verify whether it is actually more practical, all PDR parameters are default values. Therefore, the position error caused by step length and step count is obvious because the corresponding parameters are not tuned.
Table 10 shows the percentage of loop closure error for the above trajectories. The percentage of loop closure error is the difference between the starting and ending positions divided by the total walking distance of a closed path. For example, a loop closure error of 1% represents one meter of accumulated error for every one hundred meters traveled. The table shows that the S-PDR results have less loop closure error, with an average of 2.6% after traveling 566 m. The percentages of loop closure error using the HTC are worse than when using the iPhone: the readings of the HTC M8's embedded inertial sensors are not accurate enough, and its built-in algorithm constrains the values to zero when movement is slight. However, the improvements after space resection aiding are significant for both smartphones.
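The loop closure metric itself is straightforward to compute; a small sketch follows (the end-point offset below is a made-up number chosen only to reproduce roughly the 2.6% average, not a measured value):

```python
import numpy as np

def loop_closure_error_percent(start_en, end_en, travelled_m):
    """Loop closure error as defined above: start-to-end offset of a
    closed path divided by the total walking distance, in percent."""
    offset = np.linalg.norm(np.asarray(end_en) - np.asarray(start_en))
    return 100.0 * offset / travelled_m

# Illustrative numbers only: an offset of about 14.7 m after the 566 m
# route corresponds to the ~2.6% average reported for S-PDR.
print(loop_closure_error_percent((0.0, 0.0), (10.4, 10.4), 566.0))  # ~2.6
```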
In order to provide an absolute error analysis, the light green triangles shown in Figure 13 are used as check points; these are distinct from the points where the space resection updates were implemented. Table 11 and Table 12 show the step count of each user and the RMSE over the six check points for all trajectories. Table 11 and Table 12 also show the improvement of the PDR once aided by space resection (compare the pure PDR and S-PDR). Because of the inadequately tuned algorithm, about fifteen steps were miscounted, resulting in a substantial position error. However, the space resection-aided PDR makes the whole trajectory coincide more closely with the real walking route, with an average improvement of about 50% and an average RMSE of about 8.8 m at the check points. The ratio of average RMSE to traveled distance is about 1.55%, which is quite small. It is worth mentioning that the largest error usually occurs at a corner, and the four check points in the south come after a relatively long stretch without a space resection update, as shown in Figure 13.
Comparison against the true locations (dark green triangles) shows that the trajectories of the proposed algorithm coincide more closely with the real locations. The results therefore demonstrate that the proposed algorithm works well without any individual calibration, parameter tuning, more accurate wearable sensors, or environmental infrastructure, and that it is more convenient, cheaper, and more generalizable for general users navigating indoors with their smartphones. The integration of PDR solves the problem of obtaining initial values indoors for the convergence of space resection, which avoids frequent image taking, while space resection provides error control for the PDR without any complex calibration or model. A more accurate PDR would, in turn, allow a longer gap between the images taken for space resection; our group is therefore starting to work on integrating the proposed space resection-aided PDR with map-aided PDR (our previous work [36]). In addition, positioning technologies based on signal strength and fingerprinting, such as magnetic fingerprinting and Wi-Fi positioning, can be considered for future integration with space resection, since Wi-Fi access points are common indoors and smartphones usually carry a magnetometer; however, their accuracy must be sufficient for the convergence of space resection. A further challenge is deploying the proposed algorithm in practical, real-time applications on a smartphone, given its hardware limitations. Image processing steps such as feature detection and matching were performed manually in this research, so automating this process will be considered in future work. Cloud servers are one near-term solution, as they can receive the query image and perform feature extraction and image matching; another is to use designed features, such as barcodes, to improve the performance of image processing, which is our ongoing work.
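The coupling described in this paragraph can be summarized in a short sketch: PDR propagates the position at every step, and whenever a geo-referenced image is matched, the space resection fix (which, in the full system, is seeded with the current PDR position so the iteration converges) replaces the drifted estimate. The function below is a hypothetical, simplified illustration of that loop, not the paper's implementation.

```python
import numpy as np

def navigate(steps, image_fixes, start):
    """steps: iterable of (length_m, heading_rad);
    image_fixes: {step_index: absolute (E, N) fix from space resection};
    returns the fused track starting at `start`."""
    pos = np.asarray(start, dtype=float)
    track = [pos.copy()]
    for i, (length, heading) in enumerate(steps):
        # continuous PDR propagation between image fixes
        pos = pos + length * np.array([np.sin(heading), np.cos(heading)])
        if i in image_fixes:
            # space resection update: replaces accumulated PDR drift with
            # the absolute position recovered from a geo-referenced image
            pos = np.asarray(image_fixes[i], dtype=float)
        track.append(pos.copy())
    return track
```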

4. Conclusions

This research proposes space resection-aided PDR in order to develop a low-cost, easy-to-use, highly generalizable, and accurate algorithm for universal indoor pedestrian navigation. With the proposed algorithm, the user only needs a smartphone with an embedded inertial sensor and a camera to estimate location; no advanced calibration or parameter tuning in post-processing is required, nor any environmental infrastructure or additional body-worn sensors. First, the IMMS was used to rapidly map the environment and produce geo-referenced images. Space resection then provided position estimates with an accuracy of about one meter, which served as measurement updates for the PDR. Once the starting location was known, the PDR provided the initial values for the iterative calculation of space resection indoors; it also reduced the need to take images frequently for space resection and provided continuous navigation. Furthermore, an adaptive weighted least squares method was proposed for better estimation of space resection, based on the analysis of space resection using a smartphone camera. If the recorded principal distance becomes more accurate and the adaptive weighted least squares method is applied, the performance of space resection without any camera calibration can approach that achieved with fully calibrated interior orientation parameters (IOPs). In addition, space resection based on the geo-referenced images provides positions in absolute coordinates for seamless navigation. In order to validate the performance of the proposed algorithm, different trials were carried out in this research. After traveling 566 m, the preliminary results presented in this study indicate that the proposed algorithm provides an average loop closure error of about 2.6% and an average check point error of about 8.8 m. Work remains, such as the image processing for space resection and a more advanced PDR; automated image processing, cloud servers, map information, and RSS-based positioning technologies will be considered in the future.
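As a rough illustration of the adaptive weighting idea evaluated in Tables 6 and 7, the sketch below solves a linearized system with a weight matrix built from a power p of each control point's distance; the design matrix, observation vector, and distances are hypothetical placeholders, and the sign and magnitude of p would be chosen from such an analysis.

```python
import numpy as np

def distance_weighted_lsq(A, l, distances, p):
    """Solve the linearized observation equations A x = l with
    weight matrix W = diag(d_i ** p) (distance-based weighting)."""
    W = np.diag(np.asarray(distances, dtype=float) ** p)
    N = A.T @ W @ A                    # weighted normal matrix
    return np.linalg.solve(N, A.T @ W @ l)
```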

Acknowledgments

The authors would like to acknowledge the financial support through project SYC1050121, funded by the Department of Land Administration, Ministry of the Interior, Taiwan, and the field support from National Cheng Kung University.

Author Contributions

Kai-Wei Chiang conceived and guided this research; Jhen-Kai Liao and Shih-Huan Huang developed the algorithm as well as designed and performed the experiments; Hsiu-Wen Chang and Jhen-Kai Liao analyzed the data and wrote the paper; Chien-Hsun Chu supported the related mapping technology.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Mautz, R. Indoor Positioning Technologies. Habilitation Thesis, Eidgenössische Technische Hochschule Zürich, Zurich, Switzerland, 2012.
2. Harle, R. A survey of indoor inertial positioning systems for pedestrians. IEEE Commun. Surv. Tutor. 2013, 15, 1281–1293.
3. Klepeis, N.E.; Nelson, W.C.; Ott, W.R.; Robinson, J.P.; Tsang, A.M.; Switzer, P.; Behar, J.V.; Hern, S.C.; Engelmann, W.H. The national human activity pattern survey (NHAPS): A resource for assessing exposure to environmental pollutants. J. Expo. Anal. Environ. Epidemiol. 2001, 11, 231–252.
4. Retscher, G.; Hecht, T. Investigation of location capabilities of four different smartphones for LBS navigation applications. In Proceedings of the 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sydney, NSW, Australia, 13–15 November 2012; pp. 1–6.
5. Liu, J.; Chen, R.; Pei, L.; Guinness, R.; Kuusniemi, H. A hybrid smartphone indoor positioning solution for mobile LBS. Sensors 2012, 12, 17208–17233.
6. Brajdic, A.; Harle, R. Walk detection and step counting on unconstrained smartphones. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Zurich, Switzerland, 8–12 September 2013; pp. 225–234.
7. Weinberg, H. Using the ADXL202 in Pedometer and Personal Navigation Applications; Application Note; Analog Devices, Inc.: Norwood, MA, USA, 2002; p. 2.
8. Ho, N.H.; Truong, P.H.; Jeong, G.M. Step-detection and adaptive step-length estimation for pedestrian dead-reckoning at various walking speeds using a smartphone. Sensors 2016, 16, 1423.
9. Tsai, G.-J.; Liao, J.-K.; Chu, H.-J.; Chiang, K.-W. The performance analysis of a smartphone based three dimension pedestrian dead-reckoning and map-matching algorithm for indoor navigation applications. In Proceedings of the 27th International Technical Meeting of The Satellite Division of The Institute of Navigation, Tampa, FL, USA, 8–12 September 2014; pp. 2191–2201.
10. Groves, P.D.; Pulford, G.W.; Littlefield, C.A.; Nash, D.L.J.; Mather, C.J. Inertial navigation versus pedestrian dead reckoning: Optimizing the integration. In Proceedings of the 20th International Technical Meeting of The Satellite Division of The Institute of Navigation, Fort Worth, TX, USA, 25–28 September 2007.
11. Liu, Z.; Aduba, C.; Won, C.-H. In-plane dead reckoning with knee and waist attached gyroscopes. Measurement 2011, 44, 1860–1868.
12. Lan, K.C.; Shih, W.Y. On calibrating the sensor errors of a PDR-based indoor localization system. Sensors 2013, 13, 4781–4810.
13. Afzal, M.H. Use of Earth's Magnetic Field for Pedestrian Navigation. Ph.D. Thesis, University of Calgary, Calgary, AB, Canada, 2011.
14. Ali, A.; Siddharth, S.; Syed, Z.; El-Sheimy, N. Swarm optimization-based magnetometer calibration for personal handheld devices. Sensors 2012, 12, 12455–12472.
15. Li, X.; Wang, J.; Liu, C.; Zhang, L.; Li, Z. Integrated WiFi/PDR/smartphone using an adaptive system noise extended Kalman filter algorithm for indoor localization. ISPRS Int. J. Geo-Inf. 2016, 5, 8.
16. Chen, Z.; Zhu, Q.; Soh, Y.C. Smartphone inertial sensor-based indoor localization and tracking with iBeacon corrections. IEEE Trans. Ind. Inform. 2016, 12, 1540–1549.
17. Wang, X.; Jiang, M.; Guo, Z.; Hu, N.; Sun, Z.; Liu, J. An indoor positioning method for smartphones using landmarks and PDR. Sensors 2016, 16, 2135.
18. Shang, J.; Hu, X.; Cheng, W.; Fan, H. GridiLoc: A backtracking grid filter for fusing the grid model with PDR using smartphone sensors. Sensors 2016, 16, 2137.
19. Guo, X.; Shao, W.; Zhao, F.; Wang, Q.; Li, D.; Luo, H. WiMag: Multimode fusion localization system based on Magnetic/WiFi/PDR. In Proceedings of the 2016 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Alcalá de Henares, Spain, 4–7 October 2016; pp. 1–8.
20. Tian, Z.; Jin, Y.; Zhou, M.; Wu, Z.; Li, Z. Wi-Fi/MARG integration for indoor pedestrian localization. Sensors 2016, 16, 2100.
21. Wang, Q.; Luo, H.; Zhao, F.; Shao, W. An indoor self-localization algorithm using the calibration of the online magnetic fingerprints and indoor landmarks. In Proceedings of the 2016 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Alcalá de Henares, Spain, 4–7 October 2016; pp. 1–8.
22. Ben-Afia, A.; Deambrogio, L.; Salós, D.; Escher, A.-C.; Macabiau, C.; Soulier, L.; Gay-Bellile, V. Review and classification of vision-based localisation techniques in unknown environments. IET Radar Sonar Navig. 2014, 8, 1059–1072.
23. Nunez, P.; Vazquez-Martin, R.; Bandera, A. Visual odometry based on structural matching of local invariant features using stereo camera sensor. Sensors 2011, 11, 7262–7284.
24. Zhang, H.; Liu, Y.; Tan, J.; Xiong, N. RGB-D SLAM combining visual odometry and extended information filter. Sensors 2015, 15, 18742–18766.
25. Deretey, E.; Ahmed, M.T.; Marshall, J.A.; Greenspan, M. Visual indoor positioning with a single camera using PnP. In Proceedings of the 2015 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Banff, AB, Canada, 13–16 October 2015; pp. 1–9.
26. Grießbach, D.; Baumbach, D.; Zuev, S. Stereo-vision-aided inertial navigation for unknown indoor and outdoor environments. In Proceedings of the 2014 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Busan, Korea, 27–30 October 2014; pp. 709–716.
27. Liang, J.Z.; Corso, N.; Turner, E.; Zakhor, A. Reduced-complexity data acquisition system for image-based localization in indoor environments. In Proceedings of the 2013 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Montbéliard, France, 28–31 October 2013.
28. Li, Y.; Hu, Q.; Wu, M.; Gao, Y. An imaging sensor-aided vision navigation approach that uses a geo-referenced image database. Sensors 2016, 16, 166.
29. Wolf, P.R.; DeWitt, B.A. Elements of Photogrammetry: With Applications in GIS, 3rd ed.; McGraw-Hill: Boston, MA, USA, 2000; pp. 216–217.
30. Li, X.; Wang, J.; Li, T. Seamless positioning and navigation by using geo-referenced images and multi-sensor data. Sensors 2013, 13, 9047–9069.
31. Chiang, K.W.; Duong, T.T.; Liao, J.K. The performance analysis of a real-time integrated INS/GPS vehicle navigation system with abnormal GPS measurement elimination. Sensors 2013, 13, 10599–10622.
32. Chu, C.H.; Chiang, K.W.; Lin, C.A. The performance analysis of a portable mobile mapping system with different GNSS processing strategies. In Proceedings of the 26th International Technical Meeting of The Satellite Division of The Institute of Navigation, Nashville, TN, USA, 16–20 September 2013; pp. 689–703.
33. Ellum, C.M. The Development of a Backpack Mobile Mapping System. Master's Thesis, University of Calgary, Calgary, AB, Canada, 2001.
34. Haala, N.; Fritsch, D.; Peter, M.; Khosravani, A.M. Pedestrian mobile mapping system for indoor environments based on MEMS IMU and range camera. Arch. Photogramm. Cartogr. Remote Sens. 2011, 22, 159–172.
35. Chen, R.; Pei, L.; Chen, Y. A smartphone based PDR solution for indoor navigation. In Proceedings of the 24th International Technical Meeting of The Satellite Division of The Institute of Navigation, Portland, OR, USA, 20–23 September 2011; pp. 1404–1408.
36. Chiang, K.W.; Liao, J.K.; Tsai, G.J.; Chang, H.W. The performance analysis of the map-aided fuzzy decision tree based on the pedestrian dead reckoning algorithm in an indoor environment. Sensors 2016, 16, 34.
Figure 1. The flowchart of this research.
Figure 2. The developed IMMS: (a) The exterior of IMMS and the specifications of the used high-grade IMU and 360-degree spherical camera system; (b) The developed software for direct geo-referencing.
Figure 3. The spatial relationship: (a) the phone frame; (b) the camera frame.
Figure 4. An example of horizontal intersection with distance error.
Figure 5. The professional camera calibration laboratory.
Figure 6. The proposed space resection-aided PDR.
Figure 7. The used control points of intersection angle analysis for space resection.
Figure 8. The DOP values of intersection angle analysis for two smartphones.
Figure 9. The selected control points of quantity analysis for space resection.
Figure 10. The DOP values of quantity analysis for two smartphones.
Figure 11. The distribution of control points for analysis of their optimal weights.
Figure 12. The fitting results of optimal weight analysis for two smartphones.
Figure 13. The experimental route.
Figure 14. The trajectories of four participants using an iPhone.
Figure 15. The trajectories of four participants using an HTC.
Table 1. The official specifications of the smartphone cameras.

| Specification | iPhone 5S | HTC M8 |
|---|---|---|
| Recording principal distance (mm) | 4 | 4 |
| Calibrated principal distance (mm) | 4.19 | 3.79 |
| Pixel size (mm) | 0.0015 | 0.0020 |
| Image size (pixel) | 3264 × 2248 | 2688 × 1520 |
Table 2. The position error of intersection angle analysis for the iPhone 5S.

| iPhone 5S | Intersection Angle (deg) | E-Error (cm) | N-Error (cm) | H-Error (cm) | RMSE (cm) |
|---|---|---|---|---|---|
| Case 1 | 5 (6) | 3.753 | −26.893 | 1.663 | 27.205 |
| | 23 (11) | −5.171 | −26.742 | −3.405 | 27.449 |
| | 52 (20) | −5.829 | −30.356 | −6.695 | 31.627 |
| Case 2 | 5 (6) | 4.675 | −10.336 | 4.520 | 12.211 |
| | 23 (11) | −1.708 | −10.553 | 0.621 | 10.708 |
| | 52 (20) | −2.017 | −14.356 | −2.406 | 14.696 |
| Case 3 | 5 (6) | 4.481 | −10.207 | 4.680 | 12.090 |
| | 23 (11) | −1.599 | −8.790 | −0.760 | 8.967 |
| | 52 (20) | 0.002 | −9.066 | −1.333 | 9.164 |
Table 3. The position error of intersection angle analysis for the HTC M8.

| HTC M8 | Intersection Angle (deg) | E-Error (cm) | N-Error (cm) | H-Error (cm) | RMSE (cm) |
|---|---|---|---|---|---|
| Case 1 | 5 (6) | 3.895 | 23.606 | −2.870 | 24.097 |
| | 23 (11) | −6.049 | 21.062 | 2.699 | 22.079 |
| | 52 (20) | −5.596 | 16.304 | 0.392 | 17.242 |
| Case 2 | 5 (6) | 2.929 | 2.652 | −4.817 | 6.230 |
| | 23 (11) | −5.785 | 0.267 | 0.043 | 5.792 |
| | 52 (20) | −5.304 | −4.248 | −2.307 | 7.177 |
| Case 3 | 5 (6) | 3.837 | 2.667 | −2.802 | 5.449 |
| | 23 (11) | −4.275 | 2.711 | −0.521 | 5.088 |
| | 52 (20) | −4.142 | 2.600 | −0.476 | 4.914 |
Table 4. The position error of quantity analysis for the iPhone 5S.

| iPhone 5S | Number of Control Points | E-Error (cm) | N-Error (cm) | H-Error (cm) | RMSE (cm) |
|---|---|---|---|---|---|
| Case 1 | 4 | −5.829 | −30.356 | −6.695 | 31.627 |
| | 8 | −5.797 | −30.053 | −6.333 | 31.255 |
| | 18 | −5.887 | −29.450 | −5.981 | 30.622 |
| | 32 | −6.033 | −28.994 | −5.554 | 30.131 |
| Case 2 | 4 | −2.017 | −14.356 | −2.406 | 14.695 |
| | 8 | −2.030 | −14.036 | −2.034 | 14.327 |
| | 18 | −2.120 | −13.398 | −1.688 | 13.669 |
| | 32 | −2.288 | −12.926 | −1.256 | 13.187 |
| Case 3 | 4 | 0.002 | −9.066 | −1.333 | 9.164 |
| | 8 | 0.205 | −9.017 | −1.592 | 9.159 |
| | 18 | −0.372 | −8.901 | −1.471 | 9.029 |
| | 32 | −0.588 | −8.841 | −1.550 | 8.995 |
Table 5. The position error of quantity analysis for the HTC M8.

| HTC M8 | Number of Control Points | E-Error (cm) | N-Error (cm) | H-Error (cm) | RMSE (cm) |
|---|---|---|---|---|---|
| Case 1 | 4 | −5.596 | 16.304 | 0.392 | 17.242 |
| | 8 | −6.759 | 16.660 | 1.345 | 18.029 |
| | 18 | −6.221 | 17.340 | 1.337 | 18.470 |
| | 32 | −6.676 | 17.819 | 1.593 | 19.095 |
| Case 2 | 4 | −5.304 | −4.248 | −2.307 | 7.177 |
| | 8 | −6.373 | −3.897 | −1.482 | 7.615 |
| | 18 | −5.867 | −3.260 | −1.458 | 6.869 |
| | 32 | −6.286 | −2.798 | −1.232 | 6.990 |
| Case 3 | 4 | −4.142 | 2.600 | −0.476 | 4.914 |
| | 8 | −4.305 | 2.612 | −0.416 | 5.053 |
| | 18 | −4.088 | 2.658 | −0.290 | 4.885 |
| | 32 | −4.218 | 2.628 | −0.487 | 4.994 |
Table 6. The position error of the adaptive weight analysis for the iPhone 5S.

| Power of Distance | −4 | −3 | −2 | −1 | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|---|---|---|---|
| Scenario 1 (Unit: cm) | | | | | | | | | |
| Image 1 | 25.992 | 26.724 | 28.007 | 28.797 | 29.089 | 29.169 | 29.174 | 29.172 | 29.185 |
| Image 2 | 14.430 | 14.634 | 15.204 | 15.809 | 16.086 | 16.234 | 16.356 | 16.456 | 16.533 |
| Image 3 | 18.441 | 18.447 | 18.846 | 19.385 | 19.722 | 19.962 | 20.174 | 20.353 | 20.494 |
| Scenario 2 (Unit: cm) | | | | | | | | | |
| Image 1 | 25.090 | 25.377 | 27.116 | 30.943 | 34.318 | 35.719 | 36.115 | 36.283 | 36.441 |
| Image 2 | 15.470 | 14.829 | 15.634 | 18.327 | 20.555 | 21.360 | 21.552 | 21.591 | 21.599 |
| Image 3 | 17.865 | 17.744 | 18.420 | 20.799 | 22.617 | 22.965 | 22.872 | 22.805 | 22.786 |
Table 7. The position error of the adaptive weight analysis for the HTC M8.

| Power of Distance | −4 | −3 | −2 | −1 | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|---|---|---|---|
| Scenario 1 (Unit: cm) | | | | | | | | | |
| Image 1 | 16.399 | 15.361 | 14.693 | 15.802 | 15.716 | 15.298 | 14.865 | 14.515 | 14.257 |
| Image 2 | 20.764 | 24.797 | 36.070 | 42.861 | 38.094 | 21.660 | 29.551 | 54.540 | 75.504 |
| Image 3 | 21.116 | 19.783 | 18.621 | 17.657 | 17.109 | 16.955 | 17.001 | 17.079 | 17.119 |
| Scenario 2 (Unit: cm) | | | | | | | | | |
| Image 1 | 15.802 | 15.493 | 16.431 | 22.731 | 28.694 | 30.283 | 30.644 | 30.900 | 31.110 |
| Image 2 | 23.559 | 23.400 | 24.950 | 27.845 | 29.369 | 28.931 | 27.922 | 27.212 | 26.822 |
| Image 3 | 23.059 | 21.485 | 21.468 | 23.861 | 25.623 | 25.555 | 24.861 | 24.338 | 24.037 |
Table 8. The result of space resection for the iPhone in a real scene.

| User | E-Error (m) | N-Error (m) | Horizontal Error (m) |
|---|---|---|---|
| A | 0.674 | 0.865 | 0.775 |
| B | 0.811 | 0.851 | 0.831 |
| C | 0.846 | 0.926 | 0.887 |
| D | 0.748 | 0.866 | 0.809 |
Table 9. The result of space resection for the HTC in a real scene.

| User | E-Error (m) | N-Error (m) | Horizontal Error (m) |
|---|---|---|---|
| A | 1.032 | 0.837 | 0.940 |
| B | 0.956 | 0.913 | 0.935 |
| C | 1.338 | 0.938 | 1.156 |
| D | 1.267 | 1.007 | 1.145 |
Table 10. The percentage of loop closure error of four participants using two smartphones.

| User | iPhone 5S PDR (%) | iPhone 5S S-PDR (%) | HTC M8 PDR (%) | HTC M8 S-PDR (%) |
|---|---|---|---|---|
| A | 4.803 | 4.713 | 7.897 | 2.495 |
| B | 3.665 | 3.987 | 5.338 | 3.438 |
| C | 2.222 | 1.730 | 5.836 | 2.226 |
| D | 1.292 | 0.878 | 6.035 | 1.531 |
Table 11. Accuracy analysis of trajectories using the iPhone.

| User | Height (m) | True Steps | Measured Steps | RMSE of PDR (m) | RMSE of S-PDR (m) | Improvement (%) |
|---|---|---|---|---|---|---|
| A | 1.70 | 840 | 832 | 11.545 | 5.784 | 49.900 |
| B | 1.87 | 728 | 744 | 13.935 | 9.861 | 29.236 |
| C | 1.67 | 847 | 825 | 9.491 | 3.995 | 57.907 |
| D | 1.57 | 946 | 939 | 18.381 | 5.343 | 70.932 |
Table 12. The accuracy analysis of trajectories using the HTC.

| User | Height (m) | True Steps | Measured Steps | RMSE of PDR (m) | RMSE of S-PDR (m) | Improvement (%) |
|---|---|---|---|---|---|---|
| A | 1.70 | 830 | 816 | 26.458 | 11.352 | 57.094 |
| B | 1.87 | 717 | 724 | 25.636 | 15.752 | 38.555 |
| C | 1.67 | 851 | 853 | 19.112 | 9.235 | 51.680 |
| D | 1.57 | 926 | 917 | 24.714 | 9.336 | 62.224 |
