ISPRS Int. J. Geo-Inf., Volume 5, Issue 3 (March 2016) – 17 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Article
Semantic Specification of Data Types for a World of Open Data
by Xiaogang Ma, John S. Erickson, Stephan Zednik, Patrick West and Peter Fox
ISPRS Int. J. Geo-Inf. 2016, 5(3), 38; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030038 - 16 Mar 2016
Cited by 5 | Viewed by 6294
Abstract
Data interoperability is an ongoing challenge for global open data initiatives. The machine-readable specification of data types for datasets will help address interoperability issues. Data types have typically been specified at the syntactical level, such as integer, float and string in programming languages. The work presented in this paper is a model design for the semantic specification of data types, such as a topographic map. The work was conducted in the context of the Semantic Web. The model differentiates the semantic data type from the basic data type. The former are instances (e.g., topographic map) of a specific data type class defined in the developed model. The latter are classes (e.g., Image) of resource types in existing ontologies. A data resource is an instance of a basic data type and is tagged with one or more specific data types. The model is implemented within an existing production data portal, which enables one to register specific data types and use them to annotate data resources. Through the specific data types of a dataset, data users can obtain explicit statements of the assumptions and information inherent in that dataset. The machine-readable information of specific data types also paves the way for further studies, such as dataset recommendation.
(This article belongs to the Special Issue Research Data Management)
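The data-type model described above lends itself to a straightforward Semantic Web encoding. Purely as an illustration (the namespace, class and property names below are hypothetical, not the ontology developed in the paper), a Python/rdflib sketch of tagging a data resource with a basic data type and a specific semantic data type might look like this:

# Minimal sketch of semantic data-type tagging with rdflib.
# All namespace, class and property names are hypothetical illustrations.
from rdflib import Graph, Literal, Namespace, URIRef, RDF, RDFS

EX = Namespace("http://example.org/datatypes#")
g = Graph()
g.bind("ex", EX)

# A specific (semantic) data type is an instance of a data-type class ...
g.add((EX.TopographicMap, RDF.type, EX.SpecificDataType))
g.add((EX.TopographicMap, RDFS.label, Literal("Topographic map")))

# ... while the basic data type is an existing resource-type class (e.g., Image).
dataset = URIRef("http://example.org/data/quad-sheet-123")
g.add((dataset, RDF.type, EX.Image))                          # basic data type
g.add((dataset, EX.hasSpecificDataType, EX.TopographicMap))   # semantic tag

print(g.serialize(format="turtle"))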

Article
A Framework for Data-Centric Analysis of Mapping Activity in the Context of Volunteered Geographic Information
by Karl Rehrl and Simon Gröchenig
ISPRS Int. J. Geo-Inf. 2016, 5(3), 37; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030037 - 15 Mar 2016
Cited by 21 | Viewed by 7250
Abstract
Over the last decade, volunteered geographic information (VGI) has become established as one of the most relevant geographic data sources in terms of worldwide coverage, representation of local knowledge and open data policies. Besides the data itself, data about community activity provides valuable insights into mapping progress, which can be useful for estimating data quality, understanding the activity of VGI communities or predicting future developments. This work proposes a conceptual as well as a technical framework for structuring and analyzing mapping activity, building on the concepts of activity theory. Taking OpenStreetMap as an example, the work outlines the steps necessary to convert database changes into user- and feature-centered operations and higher-level actions, which act as a universal scheme for arbitrary spatio-temporal analyses of mapping activity. Examples ranging from continent- to region- and city-scale analyses demonstrate the practicability of the approach. Instead of focusing on the interpretation of specific analysis results, the work contributes on a meta-level by addressing several conceptual and technical questions concerning the overall process of analyzing VGI community activity.
(This article belongs to the Special Issue Big Data for Urban Informatics and Earth Observation)

Article
Towards an Automatic Ice Navigation Support System in the Arctic Sea
by Xintao Liu, Shahram Sattar and Songnian Li
ISPRS Int. J. Geo-Inf. 2016, 5(3), 36; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030036 - 14 Mar 2016
Cited by 21 | Viewed by 6548
Abstract
Conventional ice navigation at sea is operated manually by well-trained navigators, whose experience is heavily relied upon to guarantee the ship's safety. Despite increasingly available ice data and information, little has been done to develop an automatic ice navigation support system to better guide ships at sea. In this study, using vector-formatted ice data and navigation codes for northern regions, we calculate the ice numeral and divide the sea area into two parts: a continuous navigable area and numerous separate unnavigable areas. We generate Voronoi diagrams for the obstacle areas and build a road-network-like graph of connections in the sea. Based on this network, we design and develop a geographic information system (GIS) package to automatically compute the safest-and-shortest routes for different types of ships between origin-destination (OD) pairs. A visibility tool, Isovist, is also implemented to help automatically identify safe navigable areas in emergency situations. The developed GIS package is shared online as an open source project called NavSpace, available for validation and extension, e.g., to indoor navigation services. This work promotes the development of automatic ice navigation support systems and can potentially enhance the safety of ice navigation in the Arctic sea.
(This article belongs to the Special Issue Bridging the Gap between Geospatial Theory and Technology)
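The routing idea sketched in the abstract (a Voronoi diagram of obstacle areas whose edges form a road-network-like graph that is then searched) can be illustrated in Python with scipy and networkx; the points and weighting below are placeholders, not the NavSpace implementation:

# Sketch: build a graph from the Voronoi diagram of obstacle points and
# route along its edges. Placeholder data; not the NavSpace package.
import numpy as np
import networkx as nx
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
obstacles = rng.uniform(0, 100, size=(50, 2))   # hypothetical unnavigable spots
vor = Voronoi(obstacles)

G = nx.Graph()
for v0, v1 in vor.ridge_vertices:
    if v0 == -1 or v1 == -1:                    # skip ridges extending to infinity
        continue
    p0, p1 = vor.vertices[v0], vor.vertices[v1]
    G.add_edge(v0, v1, weight=float(np.linalg.norm(p0 - p1)))

# Shortest path (by edge length) between two vertices of the largest component.
nodes = sorted(max(nx.connected_components(G), key=len))
path = nx.shortest_path(G, nodes[0], nodes[-1], weight="weight")
print(np.round(vor.vertices[path], 1))

In the paper, routes must also be safe with respect to the ice numeral; here the edge weight is plain Euclidean length, so this only illustrates the graph-building and search steps.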

Article
Soil Moisture Mapping in an Arid Area Using a Land Unit Area (LUA) Sampling Approach and Geostatistical Interpolation Techniques
by Saeid Gharechelou, Ryutaro Tateishi, Ram C. Sharma and Brian Alan Johnson
ISPRS Int. J. Geo-Inf. 2016, 5(3), 35; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030035 - 11 Mar 2016
Cited by 18 | Viewed by 7067
Abstract
Soil moisture (SM) plays a key role in many environmental processes and has high spatial and temporal variability. Collecting sample SM data through field surveys (e.g., for validation of remote sensing-derived products) can be very expensive and time consuming if a study area is large, and producing accurate SM maps from the sample point data is a difficult task as well. In this study, geospatial processing techniques are used to combine several geo-environmental layers relevant to SM (soil, geology, rainfall, land cover, etc.) into a land unit area (LUA) map, which delineates regions with relatively homogeneous geological/geomorphological, land use/land cover, and climate characteristics. This LUA map is used to guide the collection of sample SM data in the field, and the field data are finally spatially interpolated to create a wall-to-wall map of SM in the study area (Garmsar, Iran). The main goal of this research is to create a SM map of an arid area, using the LUA approach to obtain the most appropriate sample locations for collecting SM field data. Several environmental GIS layers that influence SM were combined to generate the LUA map, and field surveying was then carried out in each class of the LUA map. A SM map was produced based on the LUA map, remote sensing indexes, and spatial interpolation of the field survey sample data. Several interpolation methods (inverse distance weighting, kriging, and co-kriging) were evaluated for generating SM maps from the sample data. The produced maps were compared to each other and validated using ground truth data. The results show that the LUA approach is a reasonable way to delineate homogeneous units from which representative samples for field soil surveying can be drawn. The geostatistical SM map achieved adequate accuracy; however, trend analysis and the distribution of soil sample point locations within the LUA types should be further investigated to achieve even better results. Co-kriging produced the most accurate SM map of the study area.
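Of the three interpolators compared above, inverse distance weighting is the simplest and can be sketched directly in numpy (synthetic sample points; kriging and co-kriging would normally be delegated to a geostatistics package):

# Sketch of inverse distance weighting (IDW), one of the interpolators
# evaluated in the paper. Coordinates and moisture values are synthetic.
import numpy as np

def idw(xy_samples, values, xy_targets, power=2.0, eps=1e-12):
    d = np.linalg.norm(xy_targets[:, None, :] - xy_samples[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power       # guard against division by zero
    w /= w.sum(axis=1, keepdims=True)
    return w @ values

samples = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
moisture = np.array([0.12, 0.18, 0.25, 0.30])   # volumetric soil moisture
grid = np.array([[x, y] for x in range(11) for y in range(11)], dtype=float)
print(idw(samples, moisture, grid).reshape(11, 11).round(3))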

Article
Characterizing Traffic Conditions from the Perspective of Spatial-Temporal Heterogeneity
by Peichao Gao, Zhao Liu, Kun Tian and Gang Liu
ISPRS Int. J. Geo-Inf. 2016, 5(3), 34; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030034 - 10 Mar 2016
Cited by 23 | Viewed by 5988
Abstract
Traffic conditions are usually characterized from the perspective of travel time or average vehicle speed in the field of transportation, reflecting the degree of congestion of a road network. This article provides a method that characterizes traffic conditions from a new perspective: the heterogeneity of vehicle speeds. A novel measurement, the ratio of areas (RA) in a rank-size plot, is included in the proposed method to capture this heterogeneity. The proposed method can be applied from the perspective of both spatial and temporal heterogeneity, and is able to characterize the traffic conditions not only of a road network but also of a single road. Compared with methods based on travel time, the proposed method can characterize traffic conditions at a higher frequency. Compared with methods based on average vehicle speed, the proposed method takes account of the heterogeneity of vehicle speeds. The effectiveness of the proposed method has been demonstrated with real-life traffic data from Shenzhen (a coastal city in China), and the advantage of the proposed RA has been verified by comparison with similar measurements such as the ht-index and the CRG index.
(This article belongs to the Special Issue Geospatial Big Data and Transport)
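The exact definition of RA is given in the paper itself; as a loose illustration of the rank-size idea behind it (and explicitly not the paper's formula), one can rank observed speeds in decreasing order and compare the area under that curve with a uniform reference:

# Illustrative rank-size computation for vehicle speeds. The area ratio is
# one plausible reading of an RA-style measure, NOT the paper's definition.
import numpy as np

speeds = np.array([62, 58, 55, 50, 47, 45, 30, 28, 22, 15, 12, 8], dtype=float)
ranked = np.sort(speeds)[::-1]                  # rank-size: largest value first

area_observed = ranked.sum()                    # area under the rank-size curve
area_uniform = ranked.max() * ranked.size       # reference: all speeds equal
print("rank-size area ratio:", round(area_observed / area_uniform, 3))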

Article
The Strategy for the Development of the Infrastructure for Spatial Information in the Czech Republic
by Václav Čada and Karel Janečka
ISPRS Int. J. Geo-Inf. 2016, 5(3), 33; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030033 - 10 Mar 2016
Cited by 13 | Viewed by 5086
Abstract
Spatial information is often not handled and used effectively, e.g., in public administration. The key reason is that information about what spatial data exist, and where and under which circumstances they can be used, is missing. This leads to a situation whereby data are gathered and maintained multiple times. In October 2014, the Czech government approved The Strategy for the Development of the Infrastructure for Spatial Information in the Czech Republic to 2020 (GeoInfoStrategy), which serves as a basis for the National Spatial Data Infrastructure (NSDI). Furthermore, in June 2015 the GeoInfoStrategy Action Plan was approved. The vision of the GeoInfoStrategy is that the Czech Republic will use spatial information effectively by 2020. The innovative approach of the GeoInfoStrategy to building the NSDI includes cooperation among all parties, not only public administration but also the private sector, academia, professional associations and user communities. The principles defined in the GeoInfoStrategy are general and can serve as best practice for other countries building an NSDI that should meet the requirements of all target groups working with spatial information.
(This article belongs to the Special Issue Research Data Management)

Article
Open Polar Server (OPS)—An Open Source Infrastructure for the Cryosphere Community
by Weibo Liu, Kyle Purdon, Trey Stafford, John Paden and Xingong Li
ISPRS Int. J. Geo-Inf. 2016, 5(3), 32; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030032 - 09 Mar 2016
Cited by 67 | Viewed by 5679
Abstract
The Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas has collected approximately 1000 terabytes (TB) of radar depth sounding data over the Arctic and Antarctic ice sheets since 1993 in an effort to map the thickness of the ice sheets and ultimately understand the impacts of climate change and sea level rise. In addition to data collection, the storage, management, and public distribution of the dataset are also primary roles of CReSIS. The Open Polar Server (OPS) project developed a free and open source infrastructure to store, manage, analyze, and distribute the data collected by CReSIS in an effort to replace its current data storage and distribution approach. The OPS infrastructure includes a spatial database management system (DBMS), a map and web server, a JavaScript geoportal, and a MATLAB application programming interface (API) for the inclusion of data created by the cryosphere community. Open source software including GeoServer, PostgreSQL, PostGIS, OpenLayers, ExtJS, GeoExt and others are used to build a system that modernizes CReSIS data distribution for the entire cryosphere community and creates a flexible platform for future development. Usability analysis demonstrates that the OPS infrastructure provides an improved end-user experience. In addition, glacier topography interpolation is presented as an example application of the system.
(This article belongs to the Special Issue Research Data Management)

Article
Extracting Stops from Noisy Trajectories: A Sequence Oriented Clustering Approach
by Longgang Xiang, Meng Gao and Tao Wu
ISPRS Int. J. Geo-Inf. 2016, 5(3), 29; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030029 - 09 Mar 2016
Cited by 36 | Viewed by 6037
Abstract
Trajectories, representing the movements of objects in the real world, carry significant stop/move semantics. The detection of trajectory stops poses a critical problem in the study of moving objects and becomes even more challenging due to the noise inevitably recorded along with the true data. To extract stops of various shapes and sizes from single trajectories with noise, this paper presents a sequence-oriented clustering approach, in which noise points within the sequence of a stop can be identified and classified as part of the stop. In our method, two key concepts are first introduced: (1) a core sequence, which defines sequence density based not only on proximity in space but also on continuity and duration in time; and (2) an Eps-reachability sequence, which aggregates core sequences that overlap or meet over time. Then, three criteria are presented to merge Eps-reachability sequences interrupted by noise. Further, an algorithm called SOC (Sequence Oriented Clustering) is developed to automatically extract stops from a single trajectory. In addition, a reachability graph is designed that visually illustrates the spatio-temporal clustering structure and levels of a trajectory. Finally, the proposed algorithm is evaluated against two baseline methods through extensive experiments based on real-world trajectories, some with serious noise, and the results show that our approach is effective in recognizing trajectory stops.
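The SOC algorithm itself is specified in the paper; for contrast, the kind of naive baseline it improves on (a duration-and-radius rule that breaks as soon as a noise point appears) can be sketched as follows, with all thresholds invented for illustration:

# Naive stop detector: a run of consecutive points staying within `radius`
# metres of its first point for at least `min_duration` seconds.
# A simplified baseline for contrast, not the SOC algorithm of the paper.
import math

def naive_stops(points, radius=50.0, min_duration=300.0):
    # points: time-ordered list of (x, y, t) in metres and seconds
    stops, i = [], 0
    while i < len(points):
        j = i
        while (j + 1 < len(points)
               and math.hypot(points[j + 1][0] - points[i][0],
                              points[j + 1][1] - points[i][1]) <= radius):
            j += 1
        if points[j][2] - points[i][2] >= min_duration:
            stops.append((i, j))                # index range of one stop episode
            i = j + 1
        else:
            i += 1
    return stops

track = [(0, 0, 0), (5, 3, 60), (8, 2, 400), (300, 200, 460), (305, 202, 900)]
print(naive_stops(track))                       # -> [(0, 2), (3, 4)]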

Article
Land Cover Extraction from High Resolution ZY-3 Satellite Imagery Using Ontology-Based Method
by Heng Luo, Lin Li, Haihong Zhu, Xi Kuai, Zhijun Zhang and Yu Liu
ISPRS Int. J. Geo-Inf. 2016, 5(3), 31; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030031 - 08 Mar 2016
Cited by 23 | Viewed by 6451
Abstract
The rapid development and increasing availability of high-resolution satellite (HRS) images provide increased opportunities to monitor land cover on a large scale. However, inefficiency and excessive dependence on expert knowledge limit the use of HRS images at that scale. As a knowledge organization and representation method, ontology can help improve the efficiency of automatic or semi-automatic land cover information extraction, especially from HRS images. This paper presents an ontology-based framework used to model land cover extraction knowledge and interpret HRS remote sensing images at the regional level. The land cover ontology structure is explicitly defined, accounting for spectral, textural, and shape features, and allowing for automatic interpretation of the extracted results. With the help of regional prototypes for each land cover class stored in a Web Ontology Language (OWL) file, automated land cover extraction of the study area is then attempted. Experiments are conducted using ZY-3 (Ziyuan-3) imagery acquired over the Jiangxia District, Wuhan, China, in the summers of 2012 and 2013. The method provided good land cover extraction results, with an overall accuracy of 65.07%; results were especially good for bare surfaces, highways, ponds, and lakes, whose producer and user accuracies were both higher than 75%. The results highlight the capability of the ontology-based method to automatically extract land cover from ZY-3 HRS images.
(This article belongs to the Special Issue Advances and Innovations in Land Use/Cover Mapping)

Article
A Semi-Automated Workflow Solution for Data Set Publication
by Suresh Vannan, Tammy W. Beaty, Robert B. Cook, Daine M. Wright, Ranjeet Devarakonda, Yaxing Wei, Les A. Hook and Benjamin F. McMurry
ISPRS Int. J. Geo-Inf. 2016, 5(3), 30; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030030 - 08 Mar 2016
Cited by 1 | Viewed by 6822
Abstract
To address the need for published data, considerable effort has gone into formalizing the process of data publication. From funding agencies to publishers, data publication has rapidly become a requirement. Digital Object Identifiers (DOIs) and data citations have enhanced the integration and availability of data. The challenge facing data publishers now is to deal with the increased number of publishable data products and, most importantly, the difficulty of publishing diverse data products in an online archive. The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC), a NASA-funded data center, faces these challenges as it deals with data products created by individual investigators. This paper summarizes the challenges of curating data and describes a workflow solution that ORNL DAAC research and technical staff have created to deal with publication of these diverse data products. The workflow solution presented here is generic and can be applied to data from any scientific domain and at any data center.
(This article belongs to the Special Issue Research Data Management)

Article
The RADAR Project—A Service for Research Data Archival and Publication
by Angelina Kraft, Matthias Razum, Jan Potthoff, Andrea Porzel, Thomas Engel, Frank Lange, Karina Van den Broek and Filipe Furtado
ISPRS Int. J. Geo-Inf. 2016, 5(3), 28; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030028 - 04 Mar 2016
Cited by 8 | Viewed by 9519
Abstract
The aim of the RADAR (Research Data Repository) project is to set up and establish an infrastructure that facilitates research data management: the infrastructure will allow researchers to store, manage, annotate, cite, curate, search and find scientific data in a digital platform that is available at any time and can be used by multiple (specialized) disciplines. While appropriate and innovative preservation strategies and systems are in place for the big data communities (e.g., environmental sciences, space, and climate), the stewardship of many other disciplines, often called the "long tail research domains", is uncertain. Funded by the German Research Foundation (DFG), the RADAR collaboration project is developing a service-oriented infrastructure for the preservation, publication and traceability of (independent) research data. The key aspect of RADAR is the implementation of a two-stage business model for data preservation and publication: clients may preserve research results for up to 15 years and assign well-graded access rights, or publish data, with DOI assignment, for an unlimited period of time. Potential clients include libraries, research institutions, publishers and open platforms that desire an adaptable digital infrastructure to archive and publish data according to their institutional requirements and workflows.
(This article belongs to the Special Issue Research Data Management)

Article
A Comparative Analysis of the Distributions of KFC and McDonald’s Outlets in China
by Yikang Rui, Huang Huang, Min Lu, Bao Wang and Jiechen Wang
ISPRS Int. J. Geo-Inf. 2016, 5(3), 27; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030027 - 04 Mar 2016
Cited by 8 | Viewed by 17486
Abstract
Mainland China has become one of the most important markets for international fast-food chains over the past decade. To study the regional spread of KFC and McDonald's outlets in Chinese cities, the correlation of their distributions and their degree of market expansion were explored and compared, and both local and global spatial autocorrelation were analyzed. A geographically weighted Poisson regression model was also used to examine the influence of demographic, economic, and geographic factors on their spatial distributions. By studying the differences and similarities in outlet distributions for KFC and McDonald's, this comparative study reveals site selection criteria at the city level. The presented results can help other chains improve business location planning and formulate regional development policy.
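The paper fits a geographically weighted Poisson regression; as a much simpler global baseline (not the GWPR model, and with made-up covariates), an ordinary Poisson GLM on outlet counts could be fitted with statsmodels:

# Global (non-geographically-weighted) Poisson regression on outlet counts.
# Synthetic covariates; a simplified stand-in for the GWPR used in the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200                                          # hypothetical city districts
population = rng.uniform(1, 50, n)               # population, 10k persons
gdp_per_capita = rng.uniform(2, 15, n)           # GDP per capita, 10k CNY
outlets = rng.poisson(np.exp(0.03 * population + 0.10 * gdp_per_capita))

X = sm.add_constant(np.column_stack([population, gdp_per_capita]))
fit = sm.GLM(outlets, X, family=sm.families.Poisson()).fit()
print(fit.params)                                # intercept and coefficients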

Article
Visualizing the Structure of the Earth’s Lithosphere on the Google Earth Virtual-Globe Platform
by Liangfeng Zhu, Wensheng Kan, Yu Zhang and Jianzhong Sun
ISPRS Int. J. Geo-Inf. 2016, 5(3), 26; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030026 - 02 Mar 2016
Cited by 9 | Viewed by 10516
Abstract
While many of the current methods for representing existing global lithospheric models are suitable for academic investigators conducting professional geological and geophysical research, they are not suited to visualizing and disseminating lithospheric information to non-geological users (such as atmospheric scientists, educators, policy-makers, and even the general public), as they rely on dedicated computer programs or systems to read and work with the models. This shortcoming has become more obvious as more and more people from both academic and non-academic institutions struggle to understand the structure and composition of the Earth's lithosphere. Google Earth and the accompanying Keyhole Markup Language (KML) provide a universal and user-friendly platform to represent, disseminate, and visualize the existing lithospheric models. We present a systematic framework to visualize and disseminate the structure of the Earth's lithosphere on Google Earth. A KML generator is developed to convert lithospheric information derived from the global lithospheric model LITHO1.0 into KML-formatted models, and a web application is deployed to disseminate and visualize those models on the Internet. The presented framework and associated implementations can easily be adapted to support the interactive integration and visualization of the Earth's internal structure from a global perspective.
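The core conversion step (model values to KML placemarks for Google Earth) can be illustrated with the simplekml package; the paper's KML generator is its own implementation, and the depth values below are placeholders rather than LITHO1.0 output:

# Sketch: write point placemarks carrying a lithospheric attribute to KML.
import simplekml

samples = [
    (-120.0, 45.0, 32.1),    # (lon, lat, hypothetical crustal thickness in km)
    (10.0, 50.0, 28.4),
    (140.0, 36.0, 35.7),
]

kml = simplekml.Kml()
for lon, lat, depth in samples:
    pnt = kml.newpoint(name=f"Crustal thickness {depth} km", coords=[(lon, lat)])
    pnt.description = f"Placeholder value: {depth} km"
kml.save("lithosphere_sketch.kml")               # open the file in Google Earth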

Article
panMetaDocs, eSciDoc, and DOIDB—An Infrastructure for the Curation and Publication of File-Based Datasets for GFZ Data Services
by Damian Ulbricht, Kirsten Elger, Roland Bertelmann and Jens Klump
ISPRS Int. J. Geo-Inf. 2016, 5(3), 25; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030025 - 02 Mar 2016
Cited by 3 | Viewed by 7020
Abstract
The GFZ German Research Centre for Geosciences is the national laboratory for geosciences in Germany. As part of the Helmholtz Association, providing and maintaining large-scale scientific infrastructures is an essential part of GFZ activities. This includes the generation of significant volumes and numbers of research data, which subsequently become source materials for data publications. The development and maintenance of data systems is a key component of GFZ Data Services in support of state-of-the-art research. A challenge lies not only in the diversity of scientific subjects and communities, but also in the different types and manifestations of how data are managed by research groups and individual scientists. The data repository of GFZ Data Services provides a flexible IT infrastructure for data storage and publication, including the minting of digital object identifiers (DOIs). It was built as a modular system of several independent software components linked together through application programming interfaces (APIs) provided by the eSciDoc framework. The principal application software components are panMetaDocs for data management and DOIDB for logging and moderating data publication activities. Wherever possible, existing software solutions were integrated or adapted. A summary of our experience in operating this service is given. Data are described through comprehensive landing pages and supplementary documents, such as journal articles or data reports, thus augmenting the scientific usability of the service.
(This article belongs to the Special Issue Research Data Management)

Article
Spatio-Temporal Patterns of Urban-Rural Development and Transformation in East of the “Hu Huanyong Line”, China
by Zhichao Hu, Yanglin Wang, Yansui Liu, Hualou Long and Jian Peng
ISPRS Int. J. Geo-Inf. 2016, 5(3), 24; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030024 - 27 Feb 2016
Cited by 47 | Viewed by 8196
Abstract
Urban-rural development and transformation is profoundly changing the socioeconomic system as well as the natural environment. This study uses the AHP (Analytic Hierarchy Process) method to construct a top-down index of human activity based on five dimensions (population, land, industry, society, and environment) to evaluate spatial characteristics in the region east of the Hu Huanyong line, China, in 1994 and 2010. We then investigate the spatio-temporal pattern using hotspot analysis, the local Moran's I index and the Pearson correlation coefficient. The analysis showed that: (1) northeast China experienced an economic recession during the study period, and the implementation of the revitalization plan has not yet reversed this trend; (2) Pearson correlation analysis showed that improvements in population quality significantly promoted the development of the industry and society systems during the study period; and (3) the negative correlation between change in the Population Development Index (PDI) and changes in the Population Transformation Index (PTI), Society Transformation Index (STI) and Industry Transformation Index (ITI) reflected that the region east of the Hu Huanyong line was in a "demographic dividend" period. Finally, with the help of a SOFM (self-organizing feature map) neural network algorithm, we divided the study area into six types of region and found that the municipalities, provincial capitals, the Yangtze River Delta region and cities on the North China Plain experienced the greatest development, while cities in southwest and northeast China developed relatively poorly during the study period.
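The AHP weighting mentioned above reduces to taking the principal eigenvector of a pairwise comparison matrix; a generic sketch (with an invented comparison matrix, not the judgements used in the paper) is:

# AHP weights from the principal eigenvector of a pairwise comparison matrix.
# The 5x5 matrix (population, land, industry, society, environment) is invented.
import numpy as np

A = np.array([
    [1,   2,   3,   3,   4],
    [1/2, 1,   2,   2,   3],
    [1/3, 1/2, 1,   1,   2],
    [1/3, 1/2, 1,   1,   2],
    [1/4, 1/3, 1/2, 1/2, 1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))                 # principal eigenvalue index
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

ci = (eigvals[k].real - len(A)) / (len(A) - 1)   # consistency index
print("weights:", weights.round(3), "CR:", round(ci / 1.12, 3))  # RI = 1.12 for n = 5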

Article
Modeling the Relationship between the Gross Domestic Product and Built-Up Area Using Remote Sensing and GIS Data: A Case Study of Seven Major Cities in Canada
by Kamil Faisal, Ahmed Shaker and Suhaib Habbani
ISPRS Int. J. Geo-Inf. 2016, 5(3), 23; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030023 - 26 Feb 2016
Cited by 12 | Viewed by 5912
Abstract
City/regional authorities are responsible for designing and structuring the urban morphology based on the desired land use activities. One of the key concerns in urban planning is to establish certain development goals, such as the real gross domestic product (GDP). In Canada, the gross national income (GNI) relies mainly on the mining and manufacturing industries. In order to estimate the impact of city development, this study utilizes remote sensing and Geographic Information System (GIS) techniques to assess the relationship between the built-up area and the reported real GDP of seven major cities in Canada. The objectives of the study are: (1) to investigate the use of regression analysis between the built-up area derived from Landsat images and the industrial area extracted from GIS data; and (2) to study the relationship between the built-up area and socio-economic data (i.e., real GDP, total population and total employment). The experimental data include 42 multi-temporal Landsat TM images and 42 land use GIS vector datasets obtained from 2005 to 2010 during the summer season (June, July and August) for seven major cities in Canada. The socio-economic data, including the real GDP, the total population and the total employment, are obtained from the Metropolitan Housing Outlook for the same period. Both the Normalized Difference Built-up Index (NDBI) and the Normalized Difference Vegetation Index (NDVI) were used to determine the built-up areas. High built-up values within the industrial areas were then extracted for further analysis. Finally, regression analysis was conducted between the real GDP, the total population, and the total employment with respect to the built-up area. Preliminary findings showed a strong linear relationship (R2 = 0.82) between the percentage of built-up area and the industrial area within the corresponding city. In addition, a strong linear relationship (R2 = 0.8) was found between the built-up area and the socio-economic data. The study therefore justifies the use of remote sensing and GIS data to model socio-economic data (i.e., real GDP, total population and total employment). The research findings can assist federal/municipal authorities and act as a generic indicator for targeting a specific real GDP with respect to industrial areas.
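Both indices used above are simple band ratios: for Landsat TM, NDVI = (NIR - Red) / (NIR + Red) and NDBI = (SWIR1 - NIR) / (SWIR1 + NIR). A numpy sketch on placeholder reflectance arrays follows; the built-up rule at the end is one common heuristic, not necessarily the thresholding used in the paper:

# NDVI and NDBI from Landsat TM bands (Red = band 3, NIR = band 4, SWIR1 = band 5).
# Random arrays stand in for calibrated reflectance rasters.
import numpy as np

rng = np.random.default_rng(1)
red, nir, swir1 = (rng.uniform(0.01, 0.6, (100, 100)) for _ in range(3))

ndvi = (nir - red) / (nir + red)
ndbi = (swir1 - nir) / (swir1 + nir)

built_up = (ndbi > 0) & (ndvi < 0.2)             # positive NDBI, weak vegetation
print("built-up fraction:", round(float(built_up.mean()), 3))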

Article
An Efficient Parallel Algorithm for Multi-Scale Analysis of Connected Components in Gigapixel Images
by Michael H.F. Wilkinson, Martino Pesaresi and Georgios K. Ouzounis
ISPRS Int. J. Geo-Inf. 2016, 5(3), 22; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5030022 - 25 Feb 2016
Cited by 7 | Viewed by 4913
Abstract
Differential Morphological Profiles (DMPs) and their generalization, Differential Attribute Profiles (DAPs), are spatial signatures used in the classification of earth observation data. Characteristic-Salience-Leveling (CSL) is a model that allows the multi-scale information contained in DMPs and DAPs to be compressed and stored in raster data layers for further analysis. Computing DMPs or DAPs is often constrained by the size of the input data and the complexity of the scene. Addressing gigascale, very high resolution remote sensing images, this paper presents a new concurrent algorithm based on the Max-Tree structure that allows the efficient computation of CSL. The algorithm extends the "one-pass" method for computing DAPs and delivers an attribute zone segmentation of the underlying trees. The DAP vector field and the set of multi-scale characteristics are computed separately, in a manner similar to concurrent attribute filters. Experiments on test images of 3.48 to 3.96 Gpixel showed an average computational speed of 59.85 Mpixel per second, or 3.59 Gpixel per minute, on a single 2U rack server with 64 Opteron cores. The new algorithm could be extended to morphological keypoint detectors capable of handling gigascale images.
(This article belongs to the Special Issue Mathematical Morphology in Geoinformatics)
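Attribute profiles of the kind this algorithm accelerates can be reproduced sequentially on small images with scikit-image's connected filters; the sketch below derives a differential attribute profile from successive area openings and is not the concurrent Max-Tree implementation described in the paper:

# Sequential sketch of a differential attribute profile (DAP) built from
# successive area openings; not the parallel Max-Tree algorithm of the paper.
import numpy as np
from skimage.morphology import area_opening

rng = np.random.default_rng(7)
image = rng.integers(0, 255, size=(256, 256)).astype(np.uint8)

thresholds = [16, 64, 256, 1024]                 # area (attribute) thresholds
profile = [image] + [area_opening(image, area_threshold=t) for t in thresholds]

# DAP layers: differences between consecutive levels of the attribute profile.
dap = [profile[i] - profile[i + 1] for i in range(len(profile) - 1)]
print([int(layer.max()) for layer in dap])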
