Article

A Visual Attention Model Based on Eye Tracking in 3D Scene Maps

School of Geoscience and Technology, Zhengzhou University, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Academic Editors: Stanislav Popelka, Zdeněk Stachoň, Peter Kiefer, Arzu Çöltekin and Wolfgang Kainz
ISPRS Int. J. Geo-Inf. 2021, 10(10), 664; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi10100664
Received: 7 July 2021 / Revised: 23 September 2021 / Accepted: 28 September 2021 / Published: 1 October 2021
(This article belongs to the Special Issue Eye-Tracking in Cartography)
Visual attention plays a crucial role in the map-reading process and is closely related to map cognition. Eye-tracking data contain a wealth of visual information that can be used to identify cognitive behavior during map reading. Nevertheless, few researchers have applied these data to quantifying visual attention. This study proposes a method for quantitatively calculating visual attention based on eye-tracking data for 3D scene maps. First, eye-tracking technology was used to capture differences in participants' gaze behavior when browsing a street view map in a desktop environment, and to establish a quantitative relationship between eye-movement indexes and visual saliency. Then, experiments were carried out to determine the quantitative relationship between visual saliency and visual factors, using vector 3D scene maps as stimulus material. Finally, a visual attention model was obtained by fitting the data. The results show that a combination of three visual factors (color, shape, and size) can represent the visual attention value of a 3D scene map, with a goodness of fit (R²) greater than 0.699. This research helps to determine and quantify the allocation of visual attention during map reading, laying a foundation for automated machine mapping.
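The abstract describes fitting a model that predicts visual attention from three visual-factor saliencies (color, shape, size) and reports its goodness of fit (R²). The sketch below illustrates that fitting step only in outline: the data are synthetic, the weights are invented for demonstration, and a linear combination with an intercept is assumed, since the abstract does not state the model's actual functional form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-symbol saliency scores for the three assumed factors.
# Columns: color, shape, size (n map symbols). These are NOT the
# paper's data; they stand in for measured saliency values.
n = 50
X = rng.uniform(0.0, 1.0, size=(n, 3))

# Illustrative "true" weights and a noisy synthetic attention value.
true_w = np.array([0.5, 0.3, 0.2])
y = X @ true_w + rng.normal(0.0, 0.05, size=n)

# Least-squares fit of attention = w1*color + w2*shape + w3*size + b
A = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Goodness of fit (R^2), the statistic the abstract reports.
y_hat = A @ coef
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"fitted weights: {coef[:3].round(3)}, R^2 = {r2:.3f}")
```

With low noise, as here, R² comfortably exceeds the paper's reported 0.699 threshold; on real eye-tracking data the fit quality depends on how well the three factors capture gaze behavior.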
Keywords: visual attention; eye tracking; map cognition; visual cognition
MDPI and ACS Style

Yang, B.; Li, H. A Visual Attention Model Based on Eye Tracking in 3D Scene Maps. ISPRS Int. J. Geo-Inf. 2021, 10, 664. https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi10100664

