Article

An Accurate and Efficient Quaternion-Based Visualization Approach to 2D/3D Vector Data for the Mobile Augmented Reality Map

1 Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China
2 SuperMap Software Co., Ltd., Beijing 100015, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2022, 11(7), 383; https://doi.org/10.3390/ijgi11070383
Submission received: 12 May 2022 / Revised: 2 July 2022 / Accepted: 8 July 2022 / Published: 10 July 2022
(This article belongs to the Special Issue Geovisualization and Map Design)

Abstract: Increasingly complex vector map applications and growing multi-source spatial data pose a serious challenge to the accuracy and efficiency of vector map visualization. This is especially true for real-time, dynamic scene visualization in mobile augmented reality, given the rapid development of spatial data sensing and the emergence of AR-GIS. The challenge can be decomposed into three issues: accurate pose representation, fast and precise computation of topological relationships, and high-performance acceleration methods. To solve these issues, a novel quaternion-based real-time vector map visualization approach is proposed in this paper. It focuses on precise position and orientation representation, accurate and efficient spatial relationship calculation, and accelerated parallel rendering in mobile AR. First, a quaternion-based pose processing method for multi-source spatial data is developed. Then, the complex processing of spatial relationships is mapped into simple and efficient quaternion-based operations. With these mapping methods, spatial relationship operations with large computational volumes can be converted into efficient quaternion calculations, whose results are returned to respond to the interaction. Finally, an asynchronous rendering acceleration mechanism is also presented. Experiments demonstrate that the proposed method significantly improves vector visualization of the AR map. Compared to conventional visualization methods, the new approach provides more stable and accurate rendering results, especially when the AR map undergoes strenuous movements and high-frequency variations. The smoothness of the user interaction experience is also significantly improved.

1. Introduction

With the rapid development of computer graphics and geographic information systems (GIS), geovisualization in augmented reality GIS (AR-GIS) has advanced rapidly and has been attracting considerable interest in cartography [1,2]. Instead of being restricted to the traditional graphical user interface (GUI), AR-GIS makes it easier to manipulate 3D data [3] and allows visualization and manipulation of spatial data in a realistic and intuitive manner [4]. In precision farming, AR-GIS is a useful tool for fieldwork navigation and visualization of agricultural information, for example guiding users during soil sampling [5]. In earth science education, AR-GIS serves as an interactive and advantageous learning tool for lessons in hydrology, land management and soil science [6].
Recently, augmented reality (AR) systems running on or presenting augmented views on mobile devices (mobile augmented reality, MAR [7]) have emerged as a powerful and the most widely known solution for AR-GIS applications. One major reason behind this trend is that a single mobile device can offer the entire solution for locating objects and modeling scenes, eliminating the need for several separate sensors and devices (camera, GPS, laptop). In AR-GIS, the use of mobile devices contributes to enhanced flexibility in providing essential information and functionality while adapting to the user's location [8].
To better understand the joint use of GIS and AR systems, Hugues et al. [9] separate AR-GIS applications into augmented territory (AT) and augmented map (AM). The proposed classification is based on the source of geographical information and its representation in AR-GIS. In augmented territories, digital spatial data are placed directly in the real environment; sensing and parsing the semantic structure of the user's environment is the main purpose of this kind of application. In augmented maps, the environment of the map elements is represented in order to facilitate recognition, realism or understanding of the project; spatial or vector data processing is the core of such applications.
Vector map visualization is one of the fundamental components of GIS and plays an important role in AR-GIS [10]. In many MAR-related applications, including ATs and AMs, vector visualization can enhance the presentation of the position and relationships of spatial objects, interactively annotate objects in physical space, and intuitively prompt users to interact and form spatial cognition by overlaying digital information on the physical world [11]. For instance, vector rendering is used to overlay virtual navigation instructions and concepts on the actual road to guide the user along the right path and direction [12,13,14]. In these applications, arrows and lines are intuitive for leading the way [15]. Vector visualization in AR can also assist users in checking information that is difficult to see and improve their spatial cognitive abilities in specific application circumstances. In limited visibility and even at night, AR vector layers represent guidance information elements for safe navigation in unexplored or hidden environments [16]. In civil engineering, underground facilities (e.g., pipes, cables, manholes) are rendered as points and polylines as if the user could see "through" the road with X-ray vision [17].
In agricultural decision support, polygons and points of interest, respectively, express overall subfield and detailed sensor data [18]. In geospatial collaboration tasks, vector visualizations can break the limits of physical space and share jobs and resources among members of virtual teams [19,20]. In spatial data acquisition and scene modeling tasks, vector symbols can interactively capture and interpret entities and features in spatial scene modeling [21,22,23,24,25]. In the geo-design and education [26] domains, vector representations in the AR sandbox [27], such as vantage points [28], polygons illustrating locations [29] and contour lines [30], are used to present environmental dynamics on the surface [31] and geographical concepts for enhancing spatial thinking [32].
A great deal of previous research on vector presentation in GIS has focused on fast rendering of vector points, polylines [33,34] and polygon data [35]. It is well known among GIS researchers that the time-consuming steps of real-time vector mapping are querying spatial data, retrieving them and rendering geometric graphics [36]. A variety of approaches, including generating tiled data [37,38], simplifying the vector data [39] and parallel optimization [40,41], have been used to improve the efficiency of vector data rendering.
The aforementioned studies are unquestionably important to applications such as rendering massive or complex vector data in traditional 2D and 3D GIS, which is well researched. However, relatively little research has been carried out on AR-GIS visualization, and even less on vector visualization in AR-GIS. The main drawback of adopting current rendering techniques for AR-GIS is that almost all existing methods do not address the specific issues of augmented reality and AR-GIS. Vector rendering in AR maps is much more complicated than vector mapping in traditional 2D and 3D GIS software. The specific rendering issues of AR-GIS fall into the following three aspects.
  • Research on the representation of the pose (position and orientation) of spatial objects in GIS is still limited. In a 2D or 3D GIS system, the display system requires only user operations and display parameters. For the display system in AR-GIS, however, the accuracy of pose estimation and tracking becomes the key challenge [42,43]. The correct position and orientation are crucial for retrieving and displaying the correct spatial information. The Euler angle representation, widely applied in traditional GIS, is a common approach to describing 3D rotation, pose and angular motion information [44]. In complicated spatial relations and high-frequency, large-scale motion scenarios, however, Euler angles lead to singularity when the pitch angle is 90°, which results in an infinite number of solutions to the Euler sequence. Furthermore, the Euler angle description approach has calculation issues when handling the pose. Although numerous studies have developed more precise rotation and pose correction methods, the current methods are limited to pose calculation of a single spatial object or single-source spatial data. A general module for multi-source spatial objects in GIS is still scarce.
  • The geometric relationship calculation between 2D and 3D spatial objects is still inefficient. In the real scene of the AR map, the processing of spatial relationships is more complicated than in the traditional 2D and 3D GIS map, mainly because spatial relationships in AR involve complex spatial semantics and specific 3D rendering issues. Different from traditional 2D/3D GIS, AR-GIS needs to determine the correct relationship between real and digital space [45] and the placement of digital elements [25,46], including occlusion [47,48] and avoidance [10]. These tasks are crucial in AR-GIS because they are responsible for the spatial semantic constraints between digital entities and the real environment. Furthermore, computer-generated elements in dynamic AR scenes need to address accessibility, clarity, aesthetics and spatial–temporal continuity, which can be challenging [49]. Although computer vision technology can solve some problems, the results of vision-based relationship calculation are too coarse to enable GIS spatial geometric relationship functions, and the efficiency of processing complex spatial relationships of 3D spatial objects is insufficient in GIS systems [50].
  • The performance of mobile augmented reality (MAR) [7,51] is also important to practical AR-GIS applications. Mobile AR devices are widely deployed in AR-GIS applications because they are equipped with multiple sensors [4,52], but the computation capacity of mobile hardware is very limited [53]. Besides, many applications require real-time dynamic visual representation and high computational capabilities. Visualization in AR requires synchronization with human visual processing [54]. An inefficient visualization would cause delays in AR.
To address these issues, this paper proposes a novel quaternion-based [55] AR vector visualization method. This paper focuses on the accuracy and efficiency of complex and diverse vector visualization for augmented maps using mobile devices. The contribution of this paper lies in three aspects:
  • An entire pipeline for real-time updating of the pose of multi-source spatial objects is presented, together with a quaternion- and SLAM-based method for rendering AR-GIS map objects. Using our method, six-degrees-of-freedom posture data from the AR system's SLAM inertial navigation are incorporated into the GIS system to improve the AR map's synchronization frequency with the actual world, realize real-world alignment and registration, and provide high-precision position tracking.
  • In addition, an efficient and accurate spatial relationship calculation between 2D and 3D spatial objects for AR vector visualization is proposed in this article. The complex spatial relationship calculation of 2D and 3D objects is transformed into a fast quaternion solution, so computationally costly spatial queries and trigonometric functions are avoided. For this purpose, this paper provides a mapping approach between the topological relationships of 2D/3D objects and the quaternion spatial relationship solution.
  • In order to respond quickly to the requirements of real-time updating in the AR real environment, an asynchronous rendering mechanism for real-time AR map visualization is also presented in this article. This paper builds on previous research [25] to provide high-performance multi-threaded scheduling of AR maps, and implements efficient acceleration methods such as hierarchical mesh indexing, GPU-based batch rendering, pre-caching and octree scheduling.
The remainder of this paper is divided into four sections. The next section surveys technologies and applications related to the augmented reality map (AR map). Section 3 highlights the key theoretical concepts behind the rendering of vector data in AR-GIS and the methodology employed in this study. Section 4 then describes the experimental setup and analyzes the method's effectiveness. Finally, the paper concludes with a discussion in Section 5.

2. Related Work

AR interfaces are closely related to GIScience and geovisualization, due to the fact that AR systems deal with large amounts of spatial data. As a result, several GIScience infrastructures and techniques are used in the development of highly efficient AR-GIS mapping systems, including precise positioning, efficient modeling and rendering of spatial data.

2.1. Registration and Tracking for AR-GIS

The fundamental goal of AR is to analyze changes in collected camera frames and to align virtual data into the camera scene correctly depending on tracking results. Although both AR and VR techniques need to track the viewpoint of the user, the tracking is more important to AR applications. AR needs tracking to superimpose a virtual object over physical environment views and seamlessly register virtual information and real-world views in real time. Therefore, the key for the tracking is continuous re-evaluation of poses to accurately align assets and correct the perspective to enhance the user’s experience in the real-virtual environment. The effectiveness of registration is highly dependent on a tracking method’s speed, accuracy, noise tolerance and stability [56].

2.1.1. Sensor-Based Registration and Tracking

Many real-time positioning and georeferencing studies are based on various types of portable position sensors in the outdoor environment, but experimental results indicate that regular portable posture sensors are susceptible to measurement noise and interference [57,58]. Since sensor-based tracking does not require any sort of marker placement, it is more flexible and better suited for outdoor augmented reality [59]. However, there are measurement errors in Global Navigation Satellite System (GNSS) devices such as ordinary GPS, Beidou and the integrated positioning chips in mobile phones [60]. As a result, georeferencing directly with a mobile portable posture sensor often requires off-line processing and correction [61,62] and manual interaction [63]. For AR-GIS applications, tracking is therefore usually achieved by marker-based, feature-based and other vision-based methods.

2.1.2. Marker-Based Registration and Tracking

A marker-based technique allows for precise tracking using visual markers. Marker-based tracking uses a digital camera, computer vision methods and easily recognizable markers placed in indoor or outdoor environments. Most of the existing applications use printed markers.
As shown in Figure 1, the marker image needs to be input into the system and consecutive video frames need to be extracted. The tracking module depicted in Figure 1 is the core of the augmented reality system. It evaluates the camera's relative pose based on appropriately detected and recognized landmarks in the scene. The term "pose" refers to a spatial entity's six-degrees-of-freedom (DoF) state, including the 3D location and 3D orientation.
The more advanced marker-based tracking method using a photo marker is widely applied in existing AR systems [64], particularly in augmented map applications [4]. It eliminates the need to place synthetic fiducial markers in the scene [65,66]: it is efficient to take a photograph of a planar object in a real-world scene and use it as a visual marker.

2.1.3. Feature-Based Registration and Tracking

Feature-based tracking, which is a popular markerless technique in computer vision, tracks camera poses by extracting geometric features in the actual world to find 3D world and 2D image coordinate correspondences [67]. This method can provide accurate real-time camera pose tracking. The basic assumption behind tracking solutions, unlike marker-based methods, is to find a connection between 2D image features and their 3D world frame coordinates. The feature-based tracking method is applicable to both indoor and outdoor AR scenes while marker-based tracking is mainly suitable for the indoor situation. It is not really practical, however, if the site lacks sufficient features. Furthermore, rendering digital objects over the physical space may be slow because the required processing is computationally expensive [68], especially for high-dimensional feature space [69].
However, these vision registration and tracking methods, including marker-based and feature-based approaches, have some drawbacks [63]: (1) Vision tracking can only handle static scenes with little fluctuation. Therefore, the operation is dependent on the user’s stability. (2) Vision tracking necessitates substantial computational power, which imposes a large hardware burden for outdoor AR.

2.2. Quaternion-Based Pose Estimation

Pose estimation is a crucial procedure in many visual applications. In the augmented reality map, a rigid body's spatial state is often represented by six degrees of freedom (6DoF), which describe a translation and an ordered series of rotations (Euler angles). Such series of rotations appear intuitive at first glance, as they represent rotations around the main axes. Ordered series of rotations, however, have certain inherent issues, such as gimbal lock, i.e., the loss of a degree of freedom. Another issue is that there are 12 alternative rotation combinations that can be used to generate a specific spatial orientation. This is not only perplexing, but also causes issues when computing inverse or forward dynamics. Calculating the rates of change of the Euler angles for a given rotation sequence is achievable, but mathematically difficult at best.
In industries as diverse as chemistry, robotics, space shuttle control, 3D games and virtual reality, the quaternion is the preferred rotation operator. It is particularly useful in real-time and key-frame animations and AR applications, and quaternion multiplication is computationally inexpensive, which allows for fast and smooth interpolation of orientations.
Quaternions provide an algebraic representation of arbitrary rotational operations on oriented spatial data, which are difficult to handle with Euler angles in traditional GIS systems. Quaternion operations thus offer a solution for the rotational transformation of complex spatial data. In contrast, traditional GIS graphics techniques involve a series of steps and rules of thumb with little in-depth treatment of the representation or rotation of spatially oriented data.
Quaternions are very useful for pose estimation. Because quaternion-based algorithms can recover a unique solution to camera pose estimation problems, Fathian et al. [70] developed a quaternion-based approach to represent rotation, with which the rotation and translation can be easily recovered. Quaternion-based pose estimation is faster and more accurate than traditional methods [71]. Seo et al. [72] presented a quaternion-based orientation estimation that improves the accuracy of the tracking system and eliminates the drift effect in the static state. GIS lacks the ability to handle the rotation and interpolation of orientation; De Paor et al. [73] addressed this issue by introducing quaternions.
Quaternion-based pose representation is also an effective method for solving general nonlinear least squares optimization problems involving unit quaternion functions [74]. To reduce computational requirements, a Kalman filter using quaternion-based orientation estimation from MARG sensors is presented in [75]. A quaternion-based orientation estimation algorithm can also use an inertial measurement unit (IMU) [76]; it is based on the relationships between the quaternion representing the platform orientation, the measurement of gravity from the accelerometers and the angular rate measurement from the gyros.
Quaternions have the ability to express precise spatial relationships and geometry. For 3D to 2D line and point correspondences, a novel linear pose estimation based on quaternion is provided, which improves the accuracy and running time of the camera pose estimation algorithm [77]. For visualization of indoor navigation, a quaternion-based precise 3D modeling method for path networks is proposed to automatically generate highly recognizable 3D models [78].
Quaternions can describe rotation about any vector axis, support the rotation of oriented spatial data, and express differences at the surface level of directional spatial data. So far, however, previously published studies on quaternion-based pose estimation for spatial data are limited to a single spatial object. There has been little investigation of quaternion-based methods for multi-source, multi-type spatial objects and the vector map.

2.3. Augmented Reality Map

AR is a relatively new technology that can be used to develop new applications [79], such as a graphical user interface for spatial data in mapping. Landscapes and other cartographic objects can be visualized in a graphically impressive and dynamic context in this way.
In order to distinguish between the source of AR map information and its representation, Hugues et al. [9] proposed a generative classification that separates the augmented reality map into the augmented map (AM) and the augmented territory (AT). As shown in Table 1, AM and AT have different augmentation targets, updating and rendering content, and so on [80]. AM responds to user requests and updates the map [81]. Because GIS data are the focus of the augmentations, an AM can also be thought of as an augmented virtuality technique [82]; it is known as an ex-situ AR application since it visualizes a 3D model of an object at a place other than the object's location [83]. AT responds to user instructions and updates the data with location. Because the actual environment is the target of the augmentations, AT (an in-situ AR application) is mainly designed for collecting additional information during exploration of the physical environment.

2.3.1. Augmented Territory

As mentioned above, AT is used to explore and understand the real environment. It is widely applied for underground construction, navigation, scene modelling and environment reconstruction. In [52], a mobile outdoor AR map for geovisualization on a city scale is presented. It made information about destroyed buildings and historical sites that were affected by the earthquakes more easily accessible to the public [84]. Fenais et al. reviewed the use of AR in the underground construction industry [85]. AT can create real-time visualization of 3D models on top of the actual scene. In this way, the risk of damaging buried utilities can be significantly reduced by the augmented information. AT can help to locate the existing utilities and display the critical information. In mine site investigation, AT can help the user to quickly explore underground mine objects, making it easier to understand the subsurface environment [86].
AT can also provide immersive visual feedback to the user, enhancing their 3D spatial perception of georeferenced data [87]. Because the digital contents are directly overlaid and annotated on the site, AT can support planning and designing of infrastructure by directly modifying data to incorporate required changes, without the need for any post-processing.

2.3.2. Augmented Maps

AM provides a novel way to interact with printed maps, to increase user engagement and motivation, and to improve understanding of geospatial data [88]. The advancement of computer hardware and software has enabled the introduction of various types of map presentation, including 3D rendering, in the last decade. For instance, Bobrich [81] presented an AM overlaying a 3D DEM on paper maps with the markers of the ARToolKit. Printed paper maps are also digitally extended with beneficial interactions by using AR technology [89].
AM can also augment electronic maps to display dynamic map content. Electronic maps are easy to update and query during processing and visualization since all the information is in digital format, making them more suitable than paper maps for displaying dynamically changing content. More recently, a new AM application, the augmented reality sandbox, has emerged as a powerful tool in education and geodesign. It consists of a table with an AR box full of sand, whose virtual sand layer serves as the main interface [90]. The AR sandbox provides a novel method to express and research different geographical processes and phenomena in real time. A growing body of evidence suggests that the AR sandbox benefits the educational experience and improves students' spatial thinking [27,31].
Like other AR systems, the augmented reality map can be combined with mobile sensing and GIS technologies to improve the AR map experience. GIS can be used to enhance AR maps, making them more usable and accurate [91], and allows different maps to be registered and augmented using the same reference database. When AR maps are combined with cellphones, new methods using a portable camera projector to engage with and view additional content of interest can be created [92].

3. Materials, Concepts and Methods

3.1. The Process of Vector Visualization for AR Map

As shown in Figure 2, the main process of AR vector map visualization is to start and initialize the AR map system, as well as to read and initialize the context of the AR map via the GIS vector data model. High-precision positioning coordinates are acquired in conjunction with RTK-GPS [93], AR maps are initialized in terms of position and orientation, and the spatial relationship between the spatial context of AR maps and the GIS vector data model is determined. The physical environment and plane are then identified based on the camera and inertial navigation measurement of the MAR device. The scale of the real scene and the matrix conversion relationship of the camera are determined. The coordinate reference of the GIS virtual environment and the real AR scene are then aligned and registered based on the matrix transformation relationship and the requirements of the AR map applications.
Once the multi-source spatial data are loaded, the corresponding AR spatial objects are constructed based on the data type (see Figure 3). The rotation quaternion of each AR spatial object is calculated in relation to the MAR device to produce the rotation matrix from a new quaternion. Using this rotation matrix, the viewport visualization parameters of the vector map are updated in real time.
If the pose is updated during the AR map interaction and collaboration process, the new quaternion of the MAR device will be recalculated. If the location grid has changed, the AR spatial object will be regenerated. After these steps, the result of the AR map interaction and operations will be saved.

3.2. Quaternion-Based Pose Calculation and Transformation

AR-GIS pose modeling and processing consists of real-time acquisition and transformation of the camera pose data, as well as translation of pose information, such as the spatial location and orientation of the AR video camera, into GIS-friendly information.
Quaternions can be used as an alternative to Euler angles for parametrizing spatial rotations in three dimensions [94]. The traditional GIS view change operation is mainly achieved through Euler angles and coordinate displacement [95]. This method cannot resolve two common issues in AR scenes: (1) unlimited 6DoF free movement, i.e., any simultaneous combination of the six operations (rotation, pitch, tumbling, and displacement along the X, Y and Z axes) cannot be described accurately; (2) Euler angles suffer from the gimbal lock problem during movement, which causes the same spatial state to have multiple representations.

3.2.1. Quaternion Angle Conversion

The system uses Euler angles to create a three-angle rotation matrix, then converts it to a quaternion, from which the view angle of the vector map is derived for display. Quaternions have the advantage over the matrix method of saving storage space and facilitating interpolation, whereas AR maps must frequently use the rotated representation of the map in the real environment to ensure that the map always interacts accurately with the real environment at any of the user's observation positions and perspectives.
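As a minimal sketch of this conversion step, the following Python code composes a rotation matrix from three Euler angles and converts it to a quaternion. The Z-Y-X composition order and the function names are illustrative assumptions rather than the paper's implementation; the axis convention must match the AR platform's.

```python
import numpy as np

def euler_to_rotation_matrix(roll, pitch, yaw):
    """Compose a three-angle rotation matrix from Euler angles (radians).
    The Z-Y-X order is an assumed convention."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def matrix_to_quaternion(R):
    """Convert a rotation matrix to a unit quaternion [qw, qx, qy, qz]
    (the relation given later as Equation (10)); assumes 1 + trace(R) > 0."""
    qw = 0.5 * np.sqrt(1.0 + R[0, 0] + R[1, 1] + R[2, 2])
    qx = (R[2, 1] - R[1, 2]) / (4 * qw)
    qy = (R[0, 2] - R[2, 0]) / (4 * qw)
    qz = (R[1, 0] - R[0, 1]) / (4 * qw)
    return np.array([qw, qx, qy, qz])
```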
The quaternion of any position in the relative coordinate system can be obtained by moving the AR device. A unit quaternion can be described as a complex representation or a matrix representation,
$$q = q_w + q_x i + q_y j + q_z k = \begin{bmatrix} q_w & q_x & q_y & q_z \end{bmatrix}^{T} \tag{1}$$
where $q_x$, $q_y$ and $q_z$ are used to represent the axis in vector form, and $q_w$ is the angle of rotation around the axis.
In order to obtain the display parameters required by the vector map in the view transformation operation, Equation (2) is utilized to convert the quaternion of the mobile AR device into the Euler angles at the current position of the camera,
$$\begin{bmatrix} \phi \\ \theta \\ \psi \end{bmatrix} = \begin{bmatrix} \operatorname{atan2}\left(2(q_w q_x + q_y q_z),\; 1 - 2(q_x^2 + q_y^2)\right) \\ \operatorname{asin}\left(2(q_w q_y - q_z q_x)\right) \\ \operatorname{atan2}\left(2(q_w q_z + q_x q_y),\; 1 - 2(q_y^2 + q_z^2)\right) \end{bmatrix} \tag{2}$$
where the Euler angles $(\phi, \theta, \psi)$ are, respectively, the roll (rotation about the Z-axis), pitch (rotation about the new Y-axis) and yaw (rotation about the new X-axis).
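A hedged sketch of Equation (2) in Python follows; the angle names mirror the text, and the sign convention assumes the rotation order stated above.

```python
import numpy as np

def quaternion_to_euler(q):
    """Recover the Euler angles (roll, pitch, yaw) of Equation (2) from a
    unit quaternion [qw, qx, qy, qz]. The clip guards against numerical
    drift pushing the asin argument outside [-1, 1]."""
    qw, qx, qy, qz = q
    roll = np.arctan2(2 * (qw * qx + qy * qz), 1 - 2 * (qx**2 + qy**2))
    pitch = np.arcsin(np.clip(2 * (qw * qy - qz * qx), -1.0, 1.0))
    yaw = np.arctan2(2 * (qw * qz + qx * qy), 1 - 2 * (qy**2 + qz**2))
    return roll, pitch, yaw
```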

3.2.2. Quaternion-Based Visualization of AR Spatial Objects

When the map renders a bitmap, an AR spatial point object is created, and the corresponding quaternion $q_1$ is calculated from its position and orientation; the current posture quaternion of the mobile AR device is denoted $q_2$. The pose of the AR spatial object relative to the mobile AR device can be computed as the product of $q_1$ and $q_2$, which determines a new quaternion $q_n$. From $q_n$, a rotation matrix is created, allowing the map view to be updated and interacted with by users in an augmented reality environment.
The quaternion $q_3$ represents the pose of the mobile AR device, and R represents the mobile AR device's rotation matrix, as depicted in Equation (3). The rotation matrix of the vector map relative to the mobile AR device is calculated from the quaternion using Equation (4).
$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \tag{3}$$

$$R = \begin{bmatrix} 1 - 2q_y^2 - 2q_z^2 & 2q_x q_y - 2q_w q_z & 2q_x q_z + 2q_w q_y \\ 2q_x q_y + 2q_w q_z & 1 - 2q_x^2 - 2q_z^2 & 2q_y q_z - 2q_w q_x \\ 2q_x q_z - 2q_w q_y & 2q_y q_z + 2q_w q_x & 1 - 2q_x^2 - 2q_y^2 \end{bmatrix} \tag{4}$$
As shown in Equation (5), $q_1$ represents the quaternion at location $x_1$, and $q_2$ represents the quaternion at location $x_2$. The rigid body motion transformation of the 6DoF pose at $x_2$ relative to the 6DoF pose at $x_1$ can be performed by multiplication to obtain a new quaternion. Therefore, the initialization position of the AR spatial object added by the user remains unchanged relative to the moving AR device.
Because the vector map is itself an AR spatial object, the quaternion created by MAR device movement can be transformed into a new position quaternion of the vector map, ensuring that the vector map remains stationary in the actual world once registered. According to Equation (5), this method allows the quaternion of the AR device to be transferred directly to any position selected for translation, and the quaternion conversion result is then passed to the AR vector map visualization task via the calculation process of Equation (4).
$$\begin{aligned} q_1 &= q_{w1} + q_{x1} i + q_{y1} j + q_{z1} k \\ q_2 &= q_{w2} + q_{x2} i + q_{y2} j + q_{z2} k \\ q_1 \cdot q_2 &= \left(q_{w1} q_{w2} - q_{x1} q_{x2} - q_{y1} q_{y2} - q_{z1} q_{z2}\right) \\ &\quad + \left(q_{w1} q_{x2} + q_{x1} q_{w2} + q_{y1} q_{z2} - q_{z1} q_{y2}\right) i \\ &\quad + \left(q_{w1} q_{y2} - q_{x1} q_{z2} + q_{y1} q_{w2} + q_{z1} q_{x2}\right) j \\ &\quad + \left(q_{w1} q_{z2} + q_{x1} q_{y2} - q_{y1} q_{x2} + q_{z1} q_{w2}\right) k \end{aligned} \tag{5}$$
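To make Equations (4) and (5) concrete, the following sketch implements the quaternion product and the quaternion-to-rotation-matrix conversion; the example values at the bottom are purely illustrative, not from the paper's experiments.

```python
import numpy as np

def quat_multiply(q1, q2):
    """Quaternion product q1 · q2 of Equation (5); quaternions are [qw, qx, qy, qz]."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quat_to_matrix(q):
    """Rotation matrix of a unit quaternion, Equation (4)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*w*z,     2*x*z + 2*w*y],
        [2*x*y + 2*w*z,     1 - 2*x*x - 2*z*z, 2*y*z - 2*w*x],
        [2*x*z - 2*w*y,     2*y*z + 2*w*x,     1 - 2*x*x - 2*y*y],
    ])

# Illustrative use: compose an object's creation-time quaternion with the
# current device quaternion, then derive the viewport rotation matrix.
q_object = np.array([1.0, 0.0, 0.0, 0.0])        # hypothetical q1
q_device = np.array([0.9239, 0.0, 0.3827, 0.0])  # hypothetical q2 (45° about Y)
q_new = quat_multiply(q_object, q_device)
view_matrix = quat_to_matrix(q_new)              # feeds the map view update
```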

3.3. Quaternion-Based 2D and 3D Spatial Objects Relationship Calculation

Quaternions and SLAM, in combination with inertial navigation, can provide high-frequency, high-precision, continuous positioning and tracking, making AR maps more immersive. In order to update the display viewport and posture rotation matrix easily, however, the GIS rendering engine needs to adapt to the continuous variation of the spatial relationships of virtual spatial objects based on quaternion operations.
The procedure of AR map rendering using asynchronous multi-threading is as follows. The multi-source vector data are retrieved from the GIS spatial database and rendered into bitmaps according to the rules of the grid index division method. The AR spatial object of each bitmap is constructed once the double cache and buffer pool have been built.
The quaternion pose of the AR spatial object is continuously updated in synchrony with the real-time motion of the MAR device. The bitmaps of each render subthread are merged at each updated position, and the merged bitmap is saved and cropped to a size larger than the full screen [10]. The AR spatial object then notifies the main screen to refresh the cropped vector map and display it on the user screen of the MAR device. Additionally, this approach offers a smoother visualization that is closer to the real world by integrating RTK-GPS and eliminating the inaccuracy accumulated during motion.
Because quaternions can represent rotation about any vector axis, quaternion-based arithmetic algorithms can enable unrestricted rotation of oriented spatial data and describe differences at the surface level of directional spatial data. However, the present quaternion literature concentrates on executing quaternion operations on point targets of a single object, and it is not yet possible in GIS to handle multi-source, multi-type spatial objects and vector elements. The basic research units in GIS include points, lines, surfaces, bodies, and so on, and these spatial entities support spatial operations such as querying, selection, editing, snapping, coordinate conversion, topological operations and spatial analysis.
Therefore, establishing the mapping relationship between quaternion operations and 2D/3D spatial objects in GIS determines whether common GIS spatial operations can be performed in the AR map, such as selecting the buildings and roads in the current viewport in reality mode, querying nearby cells and accurately visualizing the results in the real scene, guiding the user to walk in the correct direction, and enabling the real-time interaction and adaptive reaction of 2D/3D spatial objects on the tablet AR sandbox.
In this section, the spatial mapping between polygon objects and quaternions is used as an example. As a fundamental map type, the polygon object of any spatial object (such as a three-dimensional building) in an AR map should support real-time selection, query and editing. One example of a GIS-based data generation process is the acquisition of a closed region as a window, which saves the acquisition results to a spatial database under the map coordinate system. Unlike traditional GIS, which requires large-volume spatial queries and trigonometric function computation before filtering out the target set, quaternions provide a faster way to operate: the quaternions can be constructed directly from the polygon objects, and the operation can be completed by simply determining whether the rays of the AR viewport intersect the polygon's plane (a sketch of this test follows). The topological relationship of 2D and 3D GIS spatial objects is thus mapped to a quaternion spatial relationship calculation.
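As a hedged illustration of that intersection test, the sketch below checks whether a viewport ray hits the plane of a polygon object; the routine and its tolerance parameter are assumptions for illustration, and a point-in-polygon check would still follow in practice.

```python
import numpy as np

def ray_intersects_plane(origin, direction, plane_point, plane_normal, eps=1e-9):
    """Return the hit point where a viewport ray meets a polygon's plane,
    or None if the ray is parallel to the plane or the plane lies behind
    the ray origin."""
    denom = np.dot(plane_normal, direction)
    if abs(denom) < eps:      # ray runs parallel to the plane
        return None
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:                 # intersection is behind the ray origin
        return None
    return origin + t * direction
```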
Specifically, AR map applications are usually above the ground. The process of generating quaternions for any polygon in an AR map can be simplified to constructing the quaternion of the rotation, relative to the horizontal plane, of a plane defined by three non-collinear points.
Because the normal uniquely identifies the direction of each plane, the spatial plane can represent any plane position through a rotation R and translation T, and the problem is thus reduced to the rotation of a linear vector.
The detailed implementation procedure is as follows: A is the initial horizontal plane of the AR map, B is the plane defined by three non-collinear points, and the plane normal vector of B is calculated by the cross product of edge vectors formed by the three points. Following Rodrigues' rotation formula,
$$k = \begin{bmatrix} k_x \\ k_y \\ k_z \end{bmatrix}, \qquad v_0 = \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix} \tag{6}$$
where $k$ is the unit normal vector of the rotation plane (that is, the unit vector of the axis of rotation); note that the counterclockwise direction is positive when rotating. The normal vector of plane A is denoted as $v_0$. The rotated vector can be expressed as
$$v_1 = R_0 v_0 \tag{7}$$
where the normal vector of plane B is denoted as $v_1$, and $\theta$ is the angle of rotation. $R_0$ is the rotation matrix through an angle $\theta$ counterclockwise about the axis $k$, which can be calculated as

$$R_0 = E \cos\theta + (1 - \cos\theta) \begin{bmatrix} k_x \\ k_y \\ k_z \end{bmatrix} \begin{bmatrix} k_x & k_y & k_z \end{bmatrix} + \sin\theta \begin{bmatrix} 0 & -k_z & k_y \\ k_z & 0 & -k_x \\ -k_y & k_x & 0 \end{bmatrix} \tag{8}$$
where E denotes the 3 × 3 identity matrix. Expanding Equation (8) with the unit axis $v = (x, y, z)$ gives the rotation matrix in element form, Equation (9). After acquiring the rotation matrix of the corresponding polygon object, the quaternion information of the polygon object can be obtained through Equation (10).

$$M(v, \theta) = \begin{bmatrix} \cos\theta + (1-\cos\theta)x^2 & (1-\cos\theta)xy - z\sin\theta & (1-\cos\theta)xz + y\sin\theta \\ (1-\cos\theta)yx + z\sin\theta & \cos\theta + (1-\cos\theta)y^2 & (1-\cos\theta)yz - x\sin\theta \\ (1-\cos\theta)zx - y\sin\theta & (1-\cos\theta)zy + x\sin\theta & \cos\theta + (1-\cos\theta)z^2 \end{bmatrix} \tag{9}$$
The conversion between the rotation matrix and the quaternion is as follows. With the rotation matrix R of Equation (3), the conversion to a quaternion can be computed without loss. Equation (10) depicts the quaternion calculation corresponding to the pose,

$$q_w = \frac{1}{2}\sqrt{1 + r_{11} + r_{22} + r_{33}}, \qquad q_x = \frac{r_{32} - r_{23}}{4 q_w}, \qquad q_y = \frac{r_{13} - r_{31}}{4 q_w}, \qquad q_z = \frac{r_{21} - r_{12}}{4 q_w} \tag{10}$$
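Putting Equations (6)–(10) together, the sketch below constructs the quaternion of a polygon's plane from three non-collinear points. It is a minimal illustration under the assumption that the horizontal plane A has the unit normal (0, 0, 1); the degenerate antiparallel case is left unhandled.

```python
import numpy as np

def polygon_plane_quaternion(p0, p1, p2, up=np.array([0.0, 0.0, 1.0])):
    """Quaternion rotating horizontal plane A (normal `up`) onto plane B
    defined by three non-collinear points, via Rodrigues' formula
    (Equation (8)) and the matrix-to-quaternion relation (Equation (10))."""
    n = np.cross(p1 - p0, p2 - p0)
    v1 = n / np.linalg.norm(n)            # unit normal of plane B
    k = np.cross(up, v1)                  # rotation axis (unnormalized)
    s = np.linalg.norm(k)                 # sin(theta), since up and v1 are unit
    c = np.dot(up, v1)                    # cos(theta)
    if s < 1e-9:                          # planes already (anti)parallel
        return np.array([1.0, 0.0, 0.0, 0.0]) if c > 0 else None
    k = k / s
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    R0 = np.eye(3) * c + (1 - c) * np.outer(k, k) + s * K      # Equation (8)
    qw = 0.5 * np.sqrt(1.0 + R0[0, 0] + R0[1, 1] + R0[2, 2])   # Equation (10)
    qx = (R0[2, 1] - R0[1, 2]) / (4 * qw)
    qy = (R0[0, 2] - R0[2, 0]) / (4 * qw)
    qz = (R0[1, 0] - R0[0, 1]) / (4 * qw)
    return np.array([qw, qx, qy, qz])
```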
After constructing the quaternion of the object, the main quaternion operations can be performed, such as interpolation, conjugation, multiplication and inversion, and the operation results are converted into plane objects in GIS. Traditional spatial operations can also be executed in GIS, including spatial data editing, capture, topology calculation and spatial query, and the operation results are then converted into quaternions for management and visualization (an interpolation sketch follows). Figure 4 and Figure 5 depict the arbitrary rotation of quaternion-based spatially oriented data and GIS operations on quaternion-based spatially oriented data.
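Of the operations just listed, interpolation is the least obvious, so below is a hedged sketch of spherical linear interpolation (slerp) between two unit quaternions, together with the conjugate, which doubles as the inverse for unit quaternions. The thresholds are illustrative values, not the paper's.

```python
import numpy as np

def quat_slerp(q1, q2, t):
    """Spherical linear interpolation between unit quaternions, t in [0, 1]."""
    dot = np.dot(q1, q2)
    if dot < 0.0:                  # flip to take the shorter arc
        q2, dot = -q2, -dot
    if dot > 0.9995:               # nearly parallel: fall back to lerp
        out = q1 + t * (q2 - q1)
        return out / np.linalg.norm(out)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q1 + np.sin(t * theta) * q2) / np.sin(theta)

def quat_conjugate(q):
    """Conjugate [qw, -qx, -qy, -qz]; equals the inverse for unit quaternions."""
    return np.array([q[0], -q[1], -q[2], -q[3]])
```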

3.4. High-Speed Asynchronous Method for Vector Visualization

An asynchronous rendering mechanism is proposed in order to respond quickly to the requirements of real-time updates in the AR real environment. It requires independence between the content of the vector map visualization and the dynamic user-input interaction process, so that the real-time visualization of the vector map is unaffected by rendering frequency, data volume and drastic scale changes. At the same time, multi-threaded rendering scheduling is introduced, which allows the vector map in AR to be decomposed into multiple mesh submodules for rendering. Using the quaternion-based AR spatial object visualization method, the rendered bitmaps are asynchronously scheduled and swapped to the AR screen.
The asynchronous multi-threaded AR map rendering process is as follows: obtain multi-source vector data from a GIS spatial database; divide the vector data into bitmaps using the map's grid index rules; initiate the rendering tasks via the AR host screen and control the rendering results of the actively refreshed AR window operations; build a double cache for the bitmaps; and build buffer pools for the grid content and the bitmaps. Figure 6 illustrates the asynchronous update method for AR vector maps on MAR devices, and Figure 7 shows the multi-threaded acceleration method for AR vector map visualization (a compact sketch of the scheduling follows).
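As a minimal sketch of this mechanism, assuming a tile rasterizer and a compositor that the paper does not specify (render_tile and merge_and_crop below are hypothetical placeholders injected by the caller), worker threads rasterize grid tiles into a back buffer while the UI thread swaps buffers and composites a frame on each refresh.

```python
import threading
import queue

class AsyncTileRenderer:
    """Illustrative asynchronous multi-threaded renderer with a double
    buffer; names and structure are assumptions, not the paper's API."""

    def __init__(self, render_tile, merge_and_crop, num_workers=4):
        self.render_tile = render_tile        # hypothetical tile rasterizer
        self.merge_and_crop = merge_and_crop  # hypothetical compositor
        self.tasks = queue.Queue()            # grid tiles awaiting rendering
        self.lock = threading.Lock()
        self.front, self.back = {}, {}        # tile id -> rendered bitmap
        for _ in range(num_workers):
            threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, tile_id, vector_features):
        """Queue a grid tile of vector features for off-screen rendering."""
        self.tasks.put((tile_id, vector_features))

    def _worker(self):
        while True:
            tile_id, features = self.tasks.get()
            bitmap = self.render_tile(features)   # rasterize off the UI thread
            with self.lock:
                self.back[tile_id] = bitmap       # write to the back buffer
            self.tasks.task_done()

    def swap_and_get_frame(self):
        """Called by the UI thread on each refresh: fold finished tiles into
        the front buffer and composite one screen-sized frame."""
        with self.lock:
            self.front.update(self.back)
            self.back.clear()
            return self.merge_and_crop(self.front)
```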

4. Results and Discussion

To verify the adaptability and robustness of our visualization algorithm on real AR map data types such as points, lines, polygons, labels and 3D cubes, the processing time of each step of the algorithm was compared (as shown in Table 2), tested and verified with the large data volumes common in AR-GIS. The camera viewport map vector data query step takes 11–20 ms. The spatial object construction preprocessing step, which is mainly based on quaternion position transformation, takes 10–24 ms. Asynchronous rendering of spatial objects takes 11–15 ms, and splicing of the rendering results and screen display take 5–6 ms, mainly by stitching the asynchronously rendered caches of the AR spatial objects according to the view size and interactively drawing them on the device screen. The whole process takes 40–63 ms; points, lines and polygons are displayed smoothly, and the annotations and 3D cubes show no obvious lag. What stands out in the table is the efficiency of our method. Based on asynchronous multi-threaded drawing task scheduling, AR map refresh no longer suffers from serious stuttering, and the method supports AR visualization of full-element vector maps in real scenes as well as real-time visualization and updating of arbitrary spatial vectors for multiple 2D/3D vector maps.
As shown in Table 3, when representing the transformation of a GIS view, the standard method based on Euler angles shakes dramatically at extreme values of operations such as rotation, pitch and rolling for the AR map. These problems limit the approach in typical AR application scenarios.
The comparison verification results are summarized below:
  • Euler angles often have gimbal lock problems when representing motion, resulting in Euler angle representations of the same spatial state not being unique. When the roll angle in the camera is close to 90°, or the pitch angle is close to 180°, or the rotation angle is close to 180°, the two sets of Euler angle representations with largely different values may indicate the same rotation, resulting in an unstable AR map display.
  • Traditional methods struggle to support unlimited 6DoF free movement, that is, any simultaneous combination of the six operations of rotation, pitch, rolling and X-, Y- and Z-axis displacement; they cannot accurately describe such a state of motion for AR maps.
  • The quaternion-based AR map visualization method proposed in this paper maintains smooth visualization under the extreme conditions of each camera motion axis, without violent shaking. The method expresses the motion state of the mobile AR device as rigid body motion and represents it with quaternions, so that the vector map visualization content always stays aligned with the real scene, without dislocation or fracture-like shaking. Even under severe camera pitch angles, our approach still accurately renders all of the element content in the map viewport.
Based on the quaternion transformation algorithm and asynchronous multi-threaded scheduling, a management and rendering mechanism for complex AR spatial objects is established, so that AR map visualization can respond to human–computer interaction and draw smoothly in real time. The new rendering method adaptively assigns drawing tasks based on the amount of data in a vector map. It generates a list of drawing tasks containing all the feature types in the AR vector map, and visualizes the map in real time based on the screen range and spatial relationships of human–computer interaction. We compared the operation effects of two vector map visualization algorithms. The five types of operations tested are common operations of AR maps in practical applications, covering common operations such as panning, stretching, pitching and positioning of maps in general AR-GIS systems. Figure 8 shows the results of the open-source map visualization engine Mapbox [96] and the method in this article. Figure 9 shows a comparison of the latency of the visualization results during fast panning, a common interaction with vector maps. The experiments show that:
  • The AR vector map visualization algorithm proposed in this paper can adapt to the raw (non-tile) vector map application scenario that Mapbox does not handle well. The proposed method supports normal roaming and fast translation operations, can even carry out continuous cross-scale map stretching operations, and eliminates the obvious display delay phenomenon.
  • In the vector tile AR map scenario at which Mapbox excels, the method proposed in this paper shows almost no delay of map content when the vector tile AR map is panned quickly, whereas Mapbox delays rendering content in real time and blank areas of undrawn map content appear.
As shown in Table 4, the stability of the different visualization methods is compared in an experiment. The results show that our method exhibits no obvious shaking and that its visual performance is efficient. Compared with the similar map visualization engine Mapbox, the Mapbox local refresh response has a delay; it only supports tiles and does not support moving around over a wide range.
The stability of the two visualization techniques is compared in the above visualization experiment. The results demonstrate that our method has no obvious jitter and delay and supports real-time map operations in a large range with complex spatial data.
To assess the visualization efficiency of this method against traditional methods, three AR map visualization methods are examined: the marker-based approach [66], the feature-based approach [68] and our method, for both AT and AM. Figure 10 depicts these two popular AR map application scenes (a YouTube video demonstration can be found at https://youtu.be/jjfmKi8sXkA, accessed on 10 May 2022). Table 5 and Table 6 show the test results.
A number of key metrics for AT and AM are tested. Taking AT applications as an example, the visualization performance is tested for street view tours, rendering of arbitrarily oriented vector route arrows in the real scene, and real-time rendering of large-scale vector data with high-precision positioning. Table 5 shows the results of the three approaches in AT scenarios, and Table 6 shows the results in AM scenarios. The comparison verification results are as follows:
  • The quaternion algorithm in our method can effectively convert the inertial pose of SLAM into the viewport matrix required for a smooth map display, while maintaining the characteristics of the high-frequency rendering frame rate and high-precision posture update. It is more suitable for visualization and interaction tasks of various AR map practical applications, especially large-scale and continuous AR map application scenarios such as map navigation in real-life mode and urban scene exploration in overlook mode.
  • The method based on feature matching has the advantages of rapid extraction and no requirement for prior image information. In actual AR map applications, however, frequent mobile AR device movement and human–computer interaction cause large changes in real environmental information, and map browsing operations produce frequently repeated information, which imposes a large redundant computation burden on feature matching. Besides, it is difficult to complete core operations such as long-term continuous walking and roaming, walking navigation, and panning and zooming of the actual AR map.
  • The marker-based method has obvious disadvantages in large-scale continuous map applications: after the marker image information is lost, the device can no longer maintain the spatial pose information for AR map visualization. In the desktop sand table mode and the small-scale overlook mode, however, the marker method has good robustness and does not require complex calculation or geographical registration of AR spatial objects.
As mentioned in Section 2, the marker-based and feature-based methods have the drawbacks of being computationally intensive and inappropriate for outdoor AR map applications covering distances of hundreds or even tens of meters [63]. As a result, the sensor-based approach is frequently employed in real-scene AR maps (see Figure 11). A comparative analysis of AT applications using the sensor-based method [62] and our method was conducted. The precision of the visualization results obtained by transforming the sensor angle into a rotation matrix is compared with the accuracy of the quaternion-based method.
We analyzed multiple sets of experimental areas for the comparative method. The target areas contain spatial information such as traffic roads, buildings, urban manhole covers and railings with obvious visual characteristics, sky and lane line markings with inconspicuous features, and positioning signals that are blocked to a certain extent. The results show that our method achieves a significant improvement over the sensor-based method.
Limited by the accuracy of initialization registration, the initial error of vector data visualization in the traditional method is large: the error at the initial visualization location is 57.2 cm on average. Because the traditional method can self-correct based on sensor angle calculation, its visualization error at a given distance does not increase significantly as the motion odometry grows. However, there is a large error in the sensor angle calculation, and the error of the sensor method increases with the distance of the displayed object from the AR device's viewing position. In the sensor-based approach, for instance, the display of a building farther away from the current AR device has an error of 104.6 cm.
As shown in Table 7, the method in this paper performs the pose calculation of visual-inertial SLAM based on quaternions, and the visualization error of initialization is reduced to 8.9 cm. Although this method accumulates error with motion, it still achieves better accuracy than the traditional methods; the visualization error for a building under the same conditions as the sensor method is only 16.8 cm. Moreover, the cumulative error of this method can be eliminated by techniques such as multi-sensor fusion [97]. Since the error of the initial registration and tracking process is small, the distance from the AR device's viewing position has little impact on the visualization error of the vector data under different attitudes at the same position. The visualization error of spatial objects observed by the AR device, such as a vector road in front of the eye, a distant building and a manhole cover underfoot, is between 13 cm and 16 cm. In summary, the proposed method can accurately visualize various types of vector data in AT applications, and the visualization effect is not influenced by factors such as the area of the vector data and the distance.
As shown in Table 8, this method supports a variety of AR map visualization application scenarios, including indoor and outdoor AR navigation, underground pipeline inspection, 3D vectorization modeling of buildings, virtual teaching sand tables and immersive real-scene exhibition halls. These scenarios have different visualization characteristics and are classified under the AR map mode of AT, AM, or both. As mentioned earlier, the technical improvements of this method better support these visualization features. The method represents the transformation matrix of small motions more accurately through the smoothing characteristics of quaternions, which ensures the continuity and stability of vector visualizations such as route arrow symbols in AR navigation. The spatial relationship mapping calculation method built on this basis can solve the pose of 2D/3D vector data more quickly and accurately, ensure the rapid and accurate display of AR map vectors, allow any surface in the real scene to be correctly snapped, and improve the map interaction experience in AM mode. This method can quickly extract spatial semantics for vectorization modeling. Based on the rapid and accurate calculation of multiple spatial relationships, the processing and display of spatial relationships such as spatial three-dimensional surface extraction, spatial extension lines, three-dimensional lines and three-dimensional point-line-polygon bodies are realized, and the immersive real-scene exhibition hall that combines virtual information display with the real scene and complex AR applications is fully supported.

5. Conclusions

This study proposed a fast and accurate 2D and 3D vector data visualization method for mobile augmented reality (MAR) mapping applications. Taking AR map navigation of long-distance outdoor blocks as an illustration, the proposed approach achieves higher display accuracy and response efficiency in a real scene than the common methods. Unlike traditional vector data visualization algorithms, which require a lot of auxiliary motion and long processing times for feature points, this method optimizes the dynamic update of vector data and creates rotation constraints for various spatial objects, without relying on visual prior marks or continuous feature information. The method offers continuous enhanced visualization for small devices such as miniature cameras, and is appropriate for the rapid visualization of 2D/3D vector data over a wide region.
Compared with the existing MAR visualization methods, the approach in this paper mainly improves the rendering process of 2D/3D vector spatial objects based on quaternions. It rests on three components: quaternion-based accurate pose representation for multi-source spatial data, fast and accurate 2D/3D spatial relationship calculation of spatial objects, and a high-speed asynchronous method for vector visualization. In the camera's quaternion-based pose transformation, any increment (adjacent pose transformation relation) is calculated in the tangent space of SE(3) [98] at the identity, and the obtained increment is exponentially mapped back to the global spatial pose of the moving AR device. This smoothed difference property of quaternions avoids singularities and ensures that small transformation matrices can also be represented, supporting smooth expression of differences between arbitrary directions. Based on this property, this paper implemented a high-precision mapping algorithm between different AR space objects and combined it with three-dimensional rotation matrix operations to further establish an accurate attitude solution for the lossless transformation of a variety of 2D/3D vector data in the viewport of an AR camera. These solution results can be used directly for view screen rendering, enabling AR-GIS to display vector data quickly and accurately in real-world environments. Based on quaternion theory, this method establishes the relationship between the pose solving and transformation of AR vector spatial objects and RTK-GPS, which gives it significant advantages over other methods in long-distance vector data augmented reality visualization.
This paper provides a new approach and framework for AR-GIS vector data visualization in both augmented map and augmented territory scenes. To verify the proposed approach, we compared the stability and visual performance of different methods. The experimental results showed that our method can render AR-GIS vectors in several real-world application scenes. When rendering the vector data of an AR sandbox map, the proposed method is nearly 10 times more accurate than the traditional method of representing sensor input angles with a transformation matrix. The method also supports real-time visualization of multiple map collections at different spatial locations and can adaptively display 2D/3D vector data according to the spatial mapping rules of different atlases, greatly enhancing the interactive experience of AR map applications. This is achieved by establishing the pose conversion relationship between quaternions and spatial objects. The technique significantly improves vector data visualization in mobile augmented reality: vector rendering remains stable and precise even under extreme motion along each camera axis, and its performance is suitable for real-time applications.
This work provides a fundamental study in the AR-GIS domain that may help promote related visualization research. However, our research still has limitations. First, the quaternion-based representation and manipulation of spatial objects is not intuitive, especially for nonlinear interpolation, and the calculation process is more complex than traditional angle-based calculation. Quaternion-based vector visualization also needs to be adapted to more map application scenarios, including complex large-scale vector maps, terrain with pronounced height variation and high-density models in real scenes. Although the new quaternion-based methods solve some problems of AR-GIS visualization, they also increase the complexity of vector data processing for AR maps, and the spatial mapping of traditional 2D/3D vector data in a geographic coordinate system close to the Earth's surface becomes more complex. Second, while quaternions improve the registration and tracking stages of AR visualization, making the alignment of virtual and physical objects more accurate, some visualization problems still limit the application of AR-GIS, including the difficulty of keeping virtual objects matched to the actual physical environment at all times and interaction errors caused by depth illusion. This method improves display accuracy and stability with respect to the physical environment, but it cannot yet fully meet the durability and human–computer interaction requirements of AR-GIS applications such as vector data mapping and mapping data production. In the future, multi-sensor fusion methods such as IMU and RTK-GPS integration can be introduced to further improve the durability, accuracy and robustness of rendering. Given the advancement of artificial intelligence and multi-source positioning technology, future research combining this technique with intelligent scene understanding will be crucial.
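As an example of the nonlinear interpolation noted among the limitations, spherical linear interpolation (slerp) between two unit quaternions weights the endpoints trigonometrically rather than linearly, which is part of why quaternion pipelines are harder to reason about than simple angle interpolation. The sketch below is a standard textbook slerp in Python/numpy, offered purely for illustration and not drawn from the paper.

```python
import numpy as np

def slerp(q0, q1, u):
    # Spherical linear interpolation between unit quaternions q0 and q1.
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                 # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:              # nearly parallel: fall back to lerp
        q = q0 + u * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - u) * theta) * q0 + np.sin(u * theta) * q1) / np.sin(theta)

# Halfway between the identity and a 90-degree rotation about z is 45 degrees,
# but the blending weights are trigonometric rather than linear in u.
q_id = np.array([1.0, 0.0, 0.0, 0.0])                       # (w, x, y, z)
q_z90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(slerp(q_id, q_z90, 0.5))
```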

Author Contributions

Conceptualization, Chenliang Wang and Kejia Huang; Data curation, Kejia Huang; Funding acquisition, Wenjiao Shi; Investigation, Chenliang Wang; Methodology, Chenliang Wang and Kejia Huang; Project administration, Kejia Huang; Resources, Wenjiao Shi; Supervision, Chenliang Wang, Kejia Huang and Wenjiao Shi; Validation, Kejia Huang; Visualization, Kejia Huang; Writing—original draft, Chenliang Wang and Kejia Huang; Writing—review & editing, Chenliang Wang and Wenjiao Shi. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences, No. XDA23100202, the Youth Innovation Promotion Association, Chinese Academy of Sciences, No. 2018071.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors express thanks to anonymous reviewers for their constructive comments and advice.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Edler, D.; Kersten, T.P. Virtual and Augmented Reality in Spatial Visualization. KN-J. Cartogr. Geogr. Inf. 2021, 71, 221–222. [Google Scholar] [CrossRef]
  2. Dickmann, F.; Keil, J.; Dickmann, P.L.; Edler, D. The Impact of Augmented Reality Techniques on Cartographic Visualization. KN-J. Cartogr. Geogr. Inf. 2021, 71, 285–295. [Google Scholar] [CrossRef]
  3. Romão, T.; Romero, L.; Dias, E.; Danado, J.; Correia, N.; Trabuco, A.; Santos, C.; Santos, R.; Nobre, E.; Câmara, A. Augmenting Reality with Geo-Referenced Information for Environmental Management. In Proceedings of the Tenth ACM International Symposium on Advances in Geographic Information Systems—GIS ’02, McLean, VA, USA, 8–9 November 2002; ACM Press: New York, NY, USA, 2002; p. 175. [Google Scholar]
  4. Liarokapis, F.; Greatbatch, I.; Mountain, D.; Gunesh, A.; Brujic-Okretic, V.; Raper, J. Mobile Augmented Reality Techniques for Geovisualisation. Ninth Int. Conf. Inf. Vis. 2005, 2005, 745–751. [Google Scholar] [CrossRef]
  5. Huuskonen, J.; Oksanen, T. Soil Sampling with Drones and Augmented Reality in Precision Agriculture. Comput. Electron. Agric. 2018, 154, 25–35. [Google Scholar] [CrossRef]
  6. Vaughan, K.L.; Vaughan, R.E.; Seeley, J.M. Experiential Learning in Soil Science: Use of an Augmented Reality Sandbox. Nat. Sci. Educ. 2017, 46, 1–5. [Google Scholar] [CrossRef] [Green Version]
  7. Chatzopoulos, D.; Bermejo, C.; Huang, Z.; Hui, P. Mobile Augmented Reality Survey: From Where We Are to Where We Go. IEEE Access 2017, 5, 6917–6950. [Google Scholar] [CrossRef]
  8. Koegst, L. Potentials of Digitally Guided Excursions at Universities Illustrated Using the Example of an Urban Geography Excursion in Stuttgart. KN-J. Cartogr. Geogr. Inf. 2022, 72, 59–71. [Google Scholar] [CrossRef]
  9. Hugues, O.; Cieutat, J.-M.; Guitton, P. GIS and Augmented Reality: State of the Art and Issues. In Handbook of Augmented Reality; Furht, B., Ed.; Springer: New York, NY, USA, 2011; pp. 721–740. ISBN 978-1-4614-0063-9. [Google Scholar]
  10. Huang, K.; Wang, C.; Wang, S.; Liu, R.; Chen, G.; Li, X. An Efficient, Platform-Independent Map Rendering Framework for Mobile Augmented Reality. ISPRS Int. J. Geo-Inf. 2021, 10, 593. [Google Scholar] [CrossRef]
  11. Wang, Z.; Bai, X.; Zhang, S.; Billinghurst, M.; He, W.; Wang, Y.; Han, D.; Chen, G.; Li, J. The Role of User-Centered AR Instruction in Improving Novice Spatial Cognition in a High-Precision Procedural Task. Adv. Eng. Inform. 2021, 47, 101250. [Google Scholar] [CrossRef]
  12. Narzt, W.; Pomberger, G.; Ferscha, A.; Kolb, D.; Müller, R.; Wieghardt, J.; Hörtner, H.; Lindinger, C. A New Visualization Concept for Navigation Systems. In Proceedings of the User-Centered Interaction Paradigms for Universal Access in the Information Society, Vienna, Austria, 28–29 June 2004; Stary, C., Stephanidis, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 440–451. [Google Scholar]
  13. Narzt, W.; Pomberger, G.; Ferscha, A.; Kolb, D.; Müller, R.; Wieghardt, J.; Hörtner, H.; Lindinger, C. Augmented Reality Navigation Systems. Univers. Access Inf. Soc. 2006, 4, 177–187. [Google Scholar] [CrossRef]
  14. De Haan, G.; Piguillet, H.; Post, F.H. Spatial Navigation for Context-Aware Video Surveillance. IEEE Comput. Graph. Appl. 2010, 30, 20–31. [Google Scholar] [CrossRef] [PubMed]
  15. Liu, B.; Meng, L. Doctoral Colloquium—Towards a Better User Interface of Augmented Reality Based Indoor Navigation Application. In Proceedings of the 2020 6th International Conference of the Immersive Learning Research Network (iLRN), San Luis Obispo, CA, USA, 21–25 June 2020; IEEE: Manhattan, NY, USA, 2020; pp. 392–394. [Google Scholar]
  16. Templin, T.; Popielarczyk, D.; Gryszko, M. Using Augmented and Virtual Reality (AR/VR) to Support Safe Navigation on Inland and Coastal Water Zones. Remote Sens. 2022, 14, 1520. [Google Scholar] [CrossRef]
  17. Stylianidis, E.; Valari, E.; Pagani, A.; Carrillo, I.; Kounoudes, A.; Michail, K.; Smagas, K. Augmented Reality Geovisualisation for Underground Utilities. PFG-J. Photogramm. Remote Sens. Geoinf. Sci. 2020, 88, 173–185. [Google Scholar] [CrossRef]
  18. Zheng, M.; Campbell, A.G. Location-Based Augmented Reality In-Situ Visualization Applied for Agricultural Fieldwork Navigation. In Proceedings of the 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Beijing, China, 10–18 October 2019; IEEE: Manhattan, NY, USA, 2019; pp. 93–97. [Google Scholar]
  19. Jin, Y.; Seo, J.; Lee, J.G.; Ahn, S.; Han, S. BIM-Based Spatial Augmented Reality (SAR) for Architectural Design Collaboration: A Proof of Concept. Appl. Sci. 2020, 10, 5915. [Google Scholar] [CrossRef]
  20. Livingston, M.A.; Ai, Z.; Karsch, K.; Gibson, G.O. User Interface Design for Military AR Applications. Virtual Real. 2011, 15, 175–184. [Google Scholar] [CrossRef] [Green Version]
  21. Ma, W.; Xiong, H.; Dai, X.; Zheng, X.; Zhou, Y. An Indoor Scene Recognition-Based 3D Registration Mechanism for Real-Time AR-GIS Visualization in Mobile Applications. ISPRS Int. J. Geo-Inf. 2018, 7, 112. [Google Scholar] [CrossRef] [Green Version]
  22. Mahmood, B.; Han, S.; Lee, D.-E. BIM-Based Registration and Localization of 3D Point Clouds of Indoor Scenes Using Geometric Features for Augmented Reality. Remote Sens. 2020, 12, 2302. [Google Scholar] [CrossRef]
  23. Vernica, T.; Hanke, A.; Bernstein, W.Z. Leveraging Standard Geospatial Representations for Industrial Augmented Reality. In Proceedings of the 11th Model-Based Enterprise Summit (MBE 2020), Gaithersburg, MD, USA, 31 March–2 April 2020; pp. 184–190. [Google Scholar]
  24. Xiong, H.; Ma, W.; Zheng, X.; Gong, J.; Abdelalim, D. Indoor Scene Texturing Based on Single Mobile Phone Images and 3D Model Fusion. Int. J. Digit. Earth 2019, 12, 525–543. [Google Scholar] [CrossRef]
  25. Huang, K.; Wang, C.; Liu, R.; Chen, G. A Fast and Accurate Spatial Target Snapping Method for 3D Scene Modeling and Mapping in Mobile Augmented Reality. ISPRS Int. J. Geo-Inf. 2022, 11, 69. [Google Scholar] [CrossRef]
  26. McNeal, K.S.; Ryker, K.; Whitmeyer, S.; Giorgis, S.; Atkins, R.; LaDue, N.; Clark, C.; Soltis, N.; Pingel, T. A Multi-Institutional Study of Inquiry-Based Lab Activities Using the Augmented Reality Sandbox: Impacts on Undergraduate Student Learning. J. Geogr. High. Educ. 2020, 44, 85–107. [Google Scholar] [CrossRef]
  27. Sánchez, S.Á.; Martín, L.D.; Gimeno-González, M.Á.; Martín-Garcia, T.; Almaraz-Menéndez, F.; Ruiz, C. Augmented Reality Sandbox: A Platform for Educative Experiences. In Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality, Salamanca, Spain, 2–4 November 2016; ACM Press: New York, NY, USA, 2016; Volume 27, pp. 599–602. [Google Scholar]
  28. Petrasova, A.; Harmon, B.; Petras, V.; Tabrizian, P.; Mitasova, H. Tangible Modeling with Open Source GIS; Springer International Publishing: Cham, Switzerland, 2018; ISBN 978-3-319-89302-0. [Google Scholar]
  29. Afrooz, A.; Ballal, H.; Pettit, C. Implementing Augmented Reality Sandbox in Geodesign: A Future. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 4, 5–12. [Google Scholar] [CrossRef] [Green Version]
  30. Woods, T.L.; Reed, S.; Hsi, S.; Woods, J.A.; Woods, M.R. Pilot Study Using the Augmented Reality Sandbox to Teach Topographic Maps and Surficial Processes in Introductory Geology Labs. J. Geosci. Educ. 2016, 64, 199–214. [Google Scholar] [CrossRef]
  31. George, R.; Howitt, C.; Oakley, G. Young Children’s Use of an Augmented Reality Sandbox to Enhance Spatial Thinking. Child. Geogr. 2020, 18, 209–221. [Google Scholar] [CrossRef]
  32. Carbonell Carrera, C.; Bermejo Asensio, L.A. Augmented Reality as a Digital Teaching Environment to Develop Spatial Thinking. Cartogr. Geogr. Inf. Sci. 2017, 44, 259–270. [Google Scholar] [CrossRef]
  33. She, J.; Zhou, Y.; Tan, X.; Li, X.; Guo, X. A Parallelized Screen-Based Method for Rendering Polylines and Polygons on Terrain Surfaces. Comput. Geosci. 2017, 99, 19–27. [Google Scholar] [CrossRef]
  34. She, J.; Li, C.; Li, J.; Wei, Q. An Efficient Method for Rendering Linear Symbols on 3D Terrain Using a Shader Language. Int. J. Geogr. Inf. Sci. 2018, 32, 476–497. [Google Scholar] [CrossRef]
  35. Wu, M.; Chen, T.; Zhang, K.; Jing, Z.; Han, Y.; Chen, M.; Wang, H.; Lv, G. An Efficient Visualization Method for Polygonal Data with Dynamic Simplification. ISPRS Int. J. Geo-Inf. 2018, 7, 138. [Google Scholar] [CrossRef] [Green Version]
  36. Guo, M.; Huang, Y.; Xie, Z. A Balanced Decomposition Approach to Real-Time Visualization of Large Vector Maps in CyberGIS. Front. Comput. Sci. 2015, 9, 442–455. [Google Scholar] [CrossRef]
  37. Ahmad, W.; Zia, A.; Khalid, U. A Google Map Based Social Network (GMBSN) for Exploring Information about a Specific Territory. J. Softw. Eng. Appl. 2013, 6, 343–348. [Google Scholar] [CrossRef] [Green Version]
  38. Netek, R.; Masopust, J.; Pavlicek, F.; Pechanec, V. Performance Testing on Vector vs. Raster Map Tiles—Comparative Study on Load Metrics. ISPRS Int. J. Geo-Inf. 2020, 9, 101. [Google Scholar] [CrossRef] [Green Version]
  39. Li, L.; Hu, W.; Zhu, H.; Li, Y.; Zhang, H. Tiled Vector Data Model for the Geographical Features of Symbolized Maps. PLoS ONE 2017, 12, e0176387. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Hu, W.; Li, L.; Wu, C.; Zhang, H.; Zhu, H. A Parallel Method for Accelerating Visualization and Interactivity for Vector Tiles. PLoS ONE 2019, 14, e0221075. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Guo, M.; Huang, Y.; Guan, Q.; Xie, Z.; Wu, L. An Efficient Data Organization and Scheduling Strategy for Accelerating Large Vector Data Rendering. Trans. GIS 2017, 21, 1217–1236. [Google Scholar] [CrossRef]
  42. Zhou, Z.; Karlekar, J.; Hii, D.; Schneider, M.; Lu, W.; Wittkopf, S. Robust Pose Estimation for Outdoor Mixed Reality with Sensor Fusion. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Stephanidis, C., Ed.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5616 LNCS, pp. 281–289. ISBN 3642027121. [Google Scholar]
  43. Rabbi, I. Augmented Reality Tracking Techniques: A Systematic Literature Review Protocol. IOSR J. Comput. Eng. 2012, 2, 23–29. [Google Scholar] [CrossRef]
  44. Panchal, K.; Shah, H. 3D Face Recognition Based on Pose Correction Using Euler Angle Method. In Proceedings of the 2013 International Conference on Machine Intelligence Research and Advancement (ICMIRA 2013), Katra, India, 21–23 December 2013; IEEE: Manhattan, NY, USA, 2013; pp. 467–471. [Google Scholar]
  45. Portalés, C.; Lerma, J.L.; Navarro, S. Augmented Reality and Photogrammetry: A Synergy to Visualize Physical and Virtual City Environments. ISPRS J. Photogramm. Remote Sens. 2010, 65, 134–142. [Google Scholar] [CrossRef]
  46. Zhou, Z.; Wang, L.; Popescu, V. A Partially-Sorted Concentric Layout for Efficient Label Localization in Augmented Reality. IEEE Trans. Vis. Comput. Graph. 2021, 27, 4087–4096. [Google Scholar] [CrossRef]
  47. Tian, Y.; Long, Y.; Xia, D.; Yao, H.; Zhang, J. Handling Occlusions in Augmented Reality Based on 3D Reconstruction Method. Neurocomputing 2015, 156, 96–104. [Google Scholar] [CrossRef]
  48. Tian, Y.; Wang, X.; Yao, H.; Chen, J.; Wang, Z.; Yi, L. Occlusion Handling Using Moving Volume and Ray Casting Techniques for Augmented Reality Systems. Multimed. Tools Appl. 2018, 77, 16561–16578. [Google Scholar] [CrossRef]
  49. Jia, J.; Elezovikj, S.; Fan, H.; Yang, S.; Liu, J.; Guo, W.; Tan, C.C.; Ling, H. Semantic-Aware Label Placement for Augmented Reality in Street View. Vis. Comput. 2021, 37, 1805–1819. [Google Scholar] [CrossRef]
  50. Yuan, L.; Yu, Z.; Luo, W.; Yi, L.; Lü, G. Multidimensional-Unified Topological Relations Computation: A Hierarchical Geometric Algebra-Based Approach. Int. J. Geogr. Inf. Sci. 2014, 28, 2435–2455. [Google Scholar] [CrossRef]
  51. Parker, C.; Tomitsch, M. Data Visualisation Trends in Mobile Augmented Reality Applications. In Proceedings of the 7th International Symposium on Visual Information Communication and Interaction—VINCI ’14, Sydney, Australia, 5–8 August 2014; ACM Press: New York, NY, USA, 2014; Volume 2014, pp. 228–231. [Google Scholar]
  52. Lee, G.A.; Dunser, A.; Kim, S.; Billinghurst, M. CityViewAR: A Mobile Outdoor AR Application for City Visualization. In Proceedings of the 11th IEEE International Symposium on Mixed and Augmented Reality 2012—Arts, Media, and Humanities Papers, ISMAR-AMH 2012, Atlanta, GA, USA, 5–8 November 2012; IEEE: Manhattan, NY, USA, 2012; pp. 57–64. [Google Scholar]
  53. Chen, K.; Li, T.; Kim, H.S.; Culler, D.E.; Katz, R.H. MARVEL: Enabling Mobile Augmented Reality with Low Energy and Low Latency. In Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems, SenSys’18, Shenzhen, China, 4–7 November 2018; Ramachandran, G.S., Krishnamachari, B., Eds.; Association for Computing Machinery: New York, NY, USA; pp. 292–304. [Google Scholar] [CrossRef]
  54. Çöltekin, A.; Lochhead, I.; Madden, M.; Christophe, S.; Devaux, A.; Pettit, C.; Lock, O.; Shukla, S.; Herman, L.; Stachoň, Z.; et al. Extended Reality in Spatial Sciences: A Review of Research Challenges and Future Directions. ISPRS Int. J. Geo-Inf. 2020, 9, 439. [Google Scholar] [CrossRef]
  55. Vince, J. Quaternions for Computer Graphics, 2nd ed.; Springer: London, UK, 2011; p. 181. [Google Scholar] [CrossRef]
  56. Holloway, R.L. Registration Error Analysis for Augmented Reality. Presence Teleoperators Virtual Environ. 1997, 6, 413–432. [Google Scholar] [CrossRef]
  57. Min, S.; Lei, L.; Wei, H.; Xiang, R. Interactive Registration for Augmented Reality GIS. In Proceedings of the 2012 International Conference on Computer Vision in Remote Sensing, Xiamen, China, 16–18 December 2012; IEEE: Manhattan, NY, USA, 2012; pp. 246–251. [Google Scholar]
  58. Reitmayr, G.; Schmalstieg, D. OpenTracker-an Open Software Architecture for Reconfigurable Tracking Based on XML. In Proceedings of the IEEE Virtual Reality 2001, Yokohama, Japan, 13–17 March 2001; IEEE: Manhattan, NY, USA, 2001; pp. 285–286. [Google Scholar]
  59. Kasperi, J.; Edwardsson, M.P.; Romero, M. Occlusion in Outdoor Augmented Reality Using Geospatial Building Data. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST, Gothenburg, Sweden, 8–10 November 2017; ACM: New York, NY, USA, 2017; pp. 1–10. [Google Scholar]
  60. Fogliaroni, P.; Mazurkiewicz, B.; Kattenbeck, M.; Giannopoulos, I. Geographic-Aware Augmented Reality for VGI. Adv. Cartogr. GIScience ICA 2019, 2, 1–9. [Google Scholar] [CrossRef]
  61. Newman, J.; Wagner, M.; Bauer, M.; MacWilliams, A.; Pintaric, T.; Beyer, D.; Pustka, D.; Strasser, F.; Schmalstieg, D.; Klinker, G. Ubiquitous Tracking for Augmented Reality. In Proceedings of the ISMAR 2004: Third IEEE and ACM International Symposium on Mixed and Augmented Reality, Arlington, VA, USA, 2–5 November 2004; IEEE: Manhattan, NY, USA, 2004; pp. 192–201. [Google Scholar]
  62. Li, W.; Han, Y.; Liu, Y.; Zhu, C.; Ren, Y.; Wang, Y.; Chen, G. Real-Time Location-Based Rendering of Urban Underground Pipelines. ISPRS Int. J. Geo-Inf. 2018, 7, 32. [Google Scholar] [CrossRef] [Green Version]
  63. Huang, W.; Sun, M.; Li, S. A 3D GIS-Based Interactive Registration Mechanism for Outdoor Augmented Reality System. Expert Syst. Appl. 2016, 55, 48–58. [Google Scholar] [CrossRef]
  64. Zhang, X.; Fronz, S.; Navab, N. Visual Marker Detection and Decoding in AR Systems: A Comparative Study. In Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR 2002), Darmstadt, Germany, 30 September–1 October 2002; IEEE: Manhattan, NY, USA, 2002; pp. 97–106. [Google Scholar]
  65. Khan, D.; Ullah, S.; Rabbi, I. Sharp-Edged, De-Noised, and Distinct (SDD) Marker Creation for ARToolKit. In Communications in Computer and Information Science; Springer: Berlin/Heidelberg, Germany, 2014; Volume 465, pp. 396–407. [Google Scholar]
  66. Nenovski, B.; Nedelkovski, I. Recognizing and Tracking Outdoor Objects by Using Artoolkit Markers. Int. J. Comput. Sci. Inf. Technol. 2019, 11, 21–28. [Google Scholar] [CrossRef]
  67. Han, B.; Roberts, W.; Wu, D.; Li, J. Robust Feature-Based Object Tracking. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XIV, Orlando, FL, USA, 10–11 April 2007; Zelnio, E.G., Garber, F.D., Eds.; SPIE: Bellingham, WA, USA, 2007; Volume 6568, p. 65680U. [Google Scholar]
  68. Fan, L.; Riihimaki, M.; Kunttu, I. A Feature-Based Object Tracking Approach for Realtime Image Processing on Mobile Devices. In Proceedings of the 2010 IEEE International Conference on Image Processing (ICIP), Hong Kong, China, 26–29 September 2010; IEEE: Manhattan, NY, USA, 2010; pp. 3921–3924. [Google Scholar]
  69. Han, B.; Davis, L. Object Tracking by Adaptive Feature Extraction. In Proceedings of the 2004 International Conference on Image Processing (ICIP '04), Singapore, 24–27 October 2004; IEEE: Manhattan, NY, USA, 2004; Volume 3, pp. 1501–1504. [Google Scholar]
  70. Fathian, K.; Ramirez-Paredes, J.P.; Doucette, E.A.; Curtis, J.W.; Gans, N.R. QuEst: A Quaternion-Based Approach for Camera Motion Estimation from Minimal Feature Points. IEEE Robot. Autom. Lett. 2018, 3, 857–864. [Google Scholar] [CrossRef] [Green Version]
  71. Rosa, S.; Toscana, G.; Bona, B. Q-PSO: Fast Quaternion-Based Pose Estimation from RGB-D Images. J. Intell. Robot. Syst. Theory Appl. 2018, 92, 465–487. [Google Scholar] [CrossRef]
  72. Seo, E.-H.; Park, C.-S.; Kim, D.; Song, J.-B. Quaternion-Based Orientation Estimation with Static Error Reduction. In Proceedings of the 2011 IEEE International Conference on Mechatronics and Automation, Beijing, China, 7–10 August 2011; IEEE: Manhattan, NY, USA, 2011; pp. 1624–1629. [Google Scholar]
  73. De Paor, D.G. Computation of Orientations for GIS—the “Roll” of Quaternions. Comput. Methods Geosci. 1996, 15, 447–456. [Google Scholar]
  74. Ude, A. Nonlinear Least Squares Optimisation of Unit Quaternion Functions for Pose Estimation from Corresponding Features. In Proceedings of the Fourteenth International Conference on Pattern Recognition (Cat. No.98EX170), Brisbane, Australia, 16–20 August 1998; IEEE: Manhattan, NY, USA, 1998; Volume 1, pp. 425–427. [Google Scholar]
  75. Marins, J.L.; Yun, X.; Bachmann, E.R.; McGhee, R.B.; Zyda, M.J. An Extended Kalman Filter for Quaternion-Based Orientation Estimation Using MARG Sensors. In Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the Next Millennium (Cat. No.01CH37180), Maui, HI, USA, 29 October–3 November 2001; IEEE: Manhattan, NY, USA, 2001; Volume 4, pp. 2003–2011. [Google Scholar]
  76. Kim, A.; Golnaraghi, M.F. A Quaternion-Based Orientation Estimation Algorithm Using an Inertial Measurement Unit. In Proceedings of the Record—IEEE PLANS, Position Location and Navigation Symposium, Monterey, CA, USA, 26–29 April 2004; IEEE: Manhattan, NY, USA, 2004; pp. 268–272. [Google Scholar]
  77. He, Y.; Jiang, C.; Hu, C.; Xin, J.; Wu, Q.; Wang, F. Linear Pose Estimation Algorithm Based on Quaternion. In Lecture Notes in Computer Science (including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2011; Volume 6838 LNCS, pp. 303–310. ISBN 9783642247279. [Google Scholar]
  78. Jian, H.; Fan, X.; Liu, J.; Jin, Q.; Kang, X. A Quaternion-Based Piecewise 3D Modeling Method for Indoor Path Networks. ISPRS Int. J. Geo-Inf. 2019, 8, 89. [Google Scholar] [CrossRef] [Green Version]
  79. Geetha, S.; Anbarasi, L.J.; Prasad, A.V.; Gupta, A.; Raj, B.E. Augmented Reality Application. In Multimedia and Sensory Input for Augmented, Mixed, and Virtual Reality; Tyagi, A.K., Ed.; IGI Global: Hershey, PA, USA, 2021; pp. 118–133. [Google Scholar]
  80. Cheng, Y.; Zhu, G.; Yang, C.; Miao, G.; Ge, W. Characteristics of Augmented Map Research from a Cartographic Perspective. Cartogr. Geogr. Inf. Sci. 2022, 1–17. [Google Scholar] [CrossRef]
  81. Bobrich, J.; Otto, S. Augmented Maps. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 502–505. [Google Scholar]
  82. Werner, P. Review of Implementation of Augmented Reality into the Georeferenced Analogue and Digital Maps and Images. Information 2018, 10, 12. [Google Scholar] [CrossRef] [Green Version]
  83. Devaux, A.; Hoarau, C.; Brédif, M.; Christophe, S. 3D Urban Geovisualization: In Situ Augmented and Mixed Reality Experiments. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 4, 41–48. [Google Scholar] [CrossRef] [Green Version]
  84. Lee, G.; Billinghurst, M. CityViewAR Outdoor AR Visualization. In Proceedings of the 13th International Conference of the NZ Chapter of the ACM’s Special Interest Group on Human-Computer Interaction—CHINZ ’12, Dunedin, New Zealand, 2–3 July 2012; ACM Press: New York, NY, USA, 2012; p. 97. [Google Scholar]
  85. Fenais, A.S.; Ariaratnam, S.T.; Ayer, S.K.; Smilovsky, N. A Review of Augmented Reality Applied to Underground Construction. J. Inf. Technol. Constr. 2020, 25, 308–324. [Google Scholar] [CrossRef]
  86. Suh, J.; Lee, S.; Choi, Y. UMineAR: Mobile-Tablet-Based Abandoned Mine Hazard Site Investigation Support System Using Augmented Reality. Minerals 2017, 7, 198. [Google Scholar] [CrossRef] [Green Version]
  87. Peña-Rios, A.; Hagras, H.; Gardner, M.; Owusu, G. A Type-2 Fuzzy Logic Based System for Augmented Reality Visualisation of Georeferenced Data. In Proceedings of the IEEE International Conference on Fuzzy Systems, Rio de Janeiro, Brazil, 8–13 July 2018; IEEE: Manhattan, NY, USA, 2018; Volume 2018, pp. 1–8. [Google Scholar]
  88. De Almeida Pereira, G.H.; Stock, K.; Stamato Delazari, L.; Centeno, J.A.S. Augmented Reality and Maps: New Possibilities for Engaging with Geographic Data. Cartogr. J. 2017, 54, 313–321. [Google Scholar] [CrossRef]
  89. Adithya, C.; Kowsik, K.; Namrata, D.; Nageli, V.S.; Shrivastava, S.; Rakshit, S. Augmented Reality Approach for Paper Map Visualization. In Proceedings of the 2010 International Conference on Communication and Computational Intelligence, INCOCCI-2010, Tamil Nadu, India, 27–29 December 2010; pp. 352–356. [Google Scholar]
  90. Reed, S.-E.; Kreylos, O.; Hsi, S.; Kellogg, L.-H.; Schladow, G.; Yikilmaz, M.-B.; Segale, H.; Silverman, J.; Yalowitz, S.; Sato, E. Shaping Watersheds Exhibit: An Interactive, Augmented Reality Sandbox for Advancing Earth Science Education. In Proceedings of the AGU Fall Meeting Abstracts, San Francisco, CA, USA, 15–19 December 2014; Volume 2014, p. ED34A-01. [Google Scholar]
  91. Yang, L.; Normand, J.-M.; Moreau, G. Augmenting Off-the-Shelf Paper Maps Using Intersection Detection and Geographical Information Systems. In Proceedings of the 2015 14th IAPR International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 18–22 May 2015; IEEE: Manhattan, NY, USA, 2015; pp. 190–193. [Google Scholar]
  92. Schöning, J.; Löchtefeld, M.; Rohs, M.; Krüger, A.; Kratz, S. Map Torchlight: A Mobile Augmented Reality Camera Projector Unit. In Proceedings of the Conference on Human Factors in Computing Systems—Proceedings, Boston, MA, USA, 4–9 April 2009; ACM Press: New York, NY, USA, 2009; pp. 3841–3845. [Google Scholar]
  93. Ren, X.; Sun, M.; Jiang, C.; Liu, L.; Huang, W. An Augmented Reality Geo-Registration Method for Ground Target Localization from a Low-Cost UAV Platform. Sensors 2018, 18, 3739. [Google Scholar] [CrossRef] [Green Version]
  94. Diebel, J. Representing Attitude: Euler Angles, Unit Quaternions, and Rotation Vectors. Matrix 2006, 58, 1–35. [Google Scholar]
  95. Xu, H.; Lu, G.; Sheng, Y.; Zhou, L.; Guo, F.; Shang, Z.; Wang, J. 3D GIS Spatial Operation Based on Extended Euler Operators. In Proceedings of the Geoinformatics 2008 and Joint Conference on GIS and Built Environment: Geo-Simulation and Virtual GIS Environments, Guangzhou, China, 28–29 June 2008; Liu, L., Li, X., Liu, K., Zhang, X., Chen, A., Eds.; SPIE: Bellingham, WA, USA, 2008; Volume 7143, p. 71433D. [Google Scholar]
  96. Laksono, A. Utilizing A Game Engine for Interactive 3D Topographic Data Visualization. ISPRS Int. J. Geo-Inf. 2019, 8, 361. [Google Scholar] [CrossRef] [Green Version]
  97. You, S.; Neumann, U. Fusion of Vision and Gyro Tracking for Robust Augmented Reality Registration. In Proceedings of IEEE Virtual Reality 2001, Yokohama, Japan, 13–17 March 2001; pp. 71–78. [Google Scholar]
  98. Teed, Z.; Deng, J. Tangent Space Backpropagation for 3D Transformation Groups. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2021; IEEE: Manhattan, NY, USA, 2021; pp. 10333–10342. [Google Scholar]
Figure 1. Marker-based AR system flow chart.
Figure 2. AR vector map visualization core process based on quaternion.
Figure 3. The pipeline of data processing for AR-GIS vector visualization.
Figure 4. Arbitrary rotation of quaternion-based spatially oriented data: (a) Arrows that lead to the right; (b) Continue to the arrow that points to the right after following the guide; (c) Space objects (arrows) that are based on quaternion operations vanish and are replaced with destinations; (d) Arrow pointing to the left; (e) Continue following the arrow to the left after following the guide.
Figure 5. GIS operations based on quaternions for spatially oriented data: (a) 3D map spatial query; (b) 3D map spatial query (not queried); (c) 2D map spatial query; (d) 2D map spatial query (not queried); (e) Arbitrary plane recognition and extraction; (f) Arbitrary plane recognition and extraction (long-range); (g) Identification and extraction of any plane of complex office areas; (h) Identification and extraction of any plane in complex office areas (long-distance).
Figure 6. Asynchronous update method for AR vector maps with no latency.
Figure 7. The multi-threaded acceleration method for AR vector map visualization.
Figure 8. Two methods of vector data visualization for real world enhancement: (a) Mapbox, (b) the proposed method.
Figure 9. Comparison of the delay of the two methods in the case of fast panning operations: (a) before fast pan (Mapbox), (b) before fast pan (our method), (c) after fast pan (Mapbox), (d) after fast pan (our method).
Figure 10. AR map application scenes: (a) AT for navigation in a real-world scene. (b) AM for AR desktop sandbox.
Figure 11. Two methods of vector data visualization for AT: (a) our method, (b) sensor-based method.
Table 1. Comparison between augmented map and augmented territory.

| Type | Augmentation Target | Updating Content | Spatial Cognitive Method | Geographic Information | In/Ex Site | Rendering Content |
|---|---|---|---|---|---|---|
| Augmented map (AM) | Spatial data and map | Updating the display of the map's virtual data | Map-based cognition | Virtual environment in map | Ex site | 3D model of the virtual environment |
| Augmented territory (AT) | The actual environment | Updating the data with the location of the real scene | Experience-based cognition | Real environment | In site | Ancillary information related to the real scene |
Table 2. Sub-process time test of various rendering algorithms.

| Map Symbol Type | Camera Viewport Map Vector Data Query | Spatial Objects Construction Preprocessing | Spatial Object Asynchronous Rendering | Rendering Results Spliced and Screen Display | The Whole Process | Lags |
|---|---|---|---|---|---|---|
| Points (500 features) | 11 ms | 10 ms | 14 ms | 5 ms | 40 ms | Running smoothly |
| Lines (500 features) | 16 ms | 12 ms | 13 ms | 6 ms | 47 ms | Running smoothly |
| Polygons (500 features) | 15 ms | 16 ms | 11 ms | 6 ms | 48 ms | Running smoothly |
| Annotations (200 features) | 15 ms | 21 ms | 15 ms | 5 ms | 56 ms | No obvious lag |
| 3D cubes (200 features) | 20 ms | 24 ms | 13 ms | 6 ms | 63 ms | No obvious lag |
Table 3. Test results for common operations in AR maps at extremes at different angles.

| Angle | Rotation (Traditional Method) | Pitch (Traditional Method) | Rolling (Traditional Method) | Rotation (Our Method) | Pitch (Our Method) | Rolling (Our Method) |
|---|---|---|---|---|---|---|
| −180° | Shakes violently | Shakes violently | -- | Shakes slightly | Normal | -- |
| −90° | Normal | Normal | Shakes violently | Normal | Normal | Normal |
| 0° | Normal | Normal | Normal | Normal | Normal | Normal |
| 90° | Normal | Normal | Shakes violently | Normal | Normal | Normal |
| 180° | Shaking | Shaking | -- | Normal | Normal | -- |
Table 4. Comparison of two full feature vector visualization methods for AR map.

| Method | Vector Tile Roaming | Vector Tile Continuous Fast Panning | Raw Vector Walkthrough (Non-Tiles) | Raw Vector Continuous Fast Panning (Non-Tiles) | Raw Vector Continuously Stretched across Scale Bars (Non-Tiles) | Raw Vector Continuous Stretching across Scale Bars (Non-Tiles) |
|---|---|---|---|---|---|---|
| Our method | No significant delay | Displaying normally | Displaying smoothly | Shakes slightly | No significant delay and no blank | Slight delay and no blank |
| Mapbox | Significant delay | Having blank | Jitter | Jitter | Jitter and having blank | Jitter and having blank |
Table 5. Rendering results comparison of all methods in AT.

| Method | A Flat-Fit Display of Any Plane | Arrow of Arbitrary Orientation Vector Routes | Large-Range Display of Fused RTK-GPS |
|---|---|---|---|
| Marker-based approach | Lost after moving away from marker images | Normal status | Normal status |
| Feature-based approach | Occasional failures | Occasional failures | Failure after multiple loss of feature points |
| Our method | Route guidance status within 15 circles is normal | Displaying smoothly | Displaying smoothly |
Table 6. Rendering results comparison of all methods in AM.

| Method | Multiple Vector Map Windows | Snapping the Real Surface through the Anchor Point | Vector Spatial Objects That Always Follow the Viewport |
|---|---|---|---|
| Marker-based approach | Lost after moving away from marker images | Normal status | Normal status |
| Feature-based approach | Occasional failures | Occasional failures | Failure after multiple loss of feature points |
| Our method | Normal status | Displaying smoothly | Displaying smoothly |
Table 7. Error comparison of the two visualization methods in a real scene.

| Method | Initial Location | Road | Building | Manhole Cover | POI |
|---|---|---|---|---|---|
| Sensor-based method | 57.3 cm | 71.7 cm | 104.6 cm | 55.9 cm | 65.2 cm |
| Our method | 8.9 cm | 13.1 cm | 16.8 cm | 13.8 cm | 12.7 cm |
Table 8. AR map mode and visualization characteristics of some typical application scenarios supported in our method.

| Application Scenario | AR Map Mode | Visualization Characteristics |
|---|---|---|
| Indoor/outdoor AR navigation | AT/AM | Stable visualization of vectors following the viewport |
| Underground pipeline inspection | AT | 2D/3D vector visualization that snaps to any plane of the real scene |
| 3D vectorization modeling of buildings | AM | Rapid extraction of building 3D spatial semantics; adaptive display of 2D/3D vector data; high-precision map interoperability |
| Virtual teaching sand table | AM | Adaptive display of 2D/3D vector data; high-precision map interoperability |
| Immersive real scene exhibition hall | AM/AT | Rapid and accurate calculation of a variety of spatial relationships; 2D/3D vector visualization that snaps to any plane of the real scene; stable visualization of vectors following the viewport |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
