Special Issue "Deep Learning Meets GIR: Recent Advances in Geographic Information Retrieval"

Special Issue Editors

Prof. Dr. Davide Buscaldi
Guest Editor
Université Sorbonne Paris Nord, 99 avenue Jean-Baptiste Clément, 93430 Villetaneuse, France
Interests: knowledge extraction and engineering; geographic information retrieval; sentiment analysis; natural language processing; scholarly data
Dr. Eric Kergosien
Guest Editor
Université de Lille, 42 Rue Paul Duez, 59000 Lille, France
Interests: text mining; knowledge organization; geographic information processing; information retrieval

Special Issue Information

Dear Colleagues,

In the last decade, Deep Learning has transformed how we process text: word embeddings have changed the way we measure semantic similarity, language models with astonishing generation capabilities have been introduced, and neural architectures now represent the state of the art in various classification tasks.

Textual information is also an important source for the Geographic Information Retrieval (GIR) task: text must be processed to extract toponyms, determine the geographic scope of a document, disambiguate and geolocate place references, and, of course, to compute document similarities and rank documents with respect to a search query.

Given these premises, it is clear that recent Deep Learning techniques may have a substantial impact on Geographic Information Retrieval. Therefore, to highlight progress in this field, this Special Issue welcomes contributions that show how Deep Learning has been applied to GIR, from toponym resolution to retrieval and classification tasks. Any submission that highlights the integration of geographical and textual knowledge via neural methods is also welcome. Potential topics include, but are not limited to:

  • Methods that leverage Deep Learning for Geographic Information Retrieval, in particular: (1) methods for ranking documents; (2) methods for query expansion
  • Geographical resources built with the help of Deep Learning methods
  • Neural methods for the geolocation of text
  • Methods for determining the geographical scope of documents
  • Toponym disambiguation using deep learning methods

Prof. Dr. Davide Buscaldi
Dr. Eric Kergosien
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. ISPRS International Journal of Geo-Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • geographical information retrieval
  • text mining
  • place name resolution
  • geolocation

Published Papers (2 papers)


Research

Article
An Innovative Intelligent System with Integrated CNN and SVM: Considering Various Crops through Hyperspectral Image Data
ISPRS Int. J. Geo-Inf. 2021, 10(4), 242; https://doi.org/10.3390/ijgi10040242 - 07 Apr 2021
Abstract
Generation of a thematic map is important for scientists and agricultural engineers in analyzing the different crops in a given field. Remote sensing data are well accepted for image classification over vast areas of crop investigation. However, most research has so far focused on the classification of pixel-based image data. This study developed a multi-category crop hyperspectral image classification system to identify the major crops in the Chiayi Golden Corridor. Hyperspectral image data from CASI (Compact Airborne Spectrographic Imager) were used as the experimental data. A two-stage classification was designed to demonstrate the performance of the image classification. More specifically, the study used a multi-class classification combining a support vector machine (SVM) and a convolutional neural network (CNN) for image classification analysis. SVM is a supervised learning model that analyzes data used for classification; CNN is a class of deep neural networks applied to analyzing visual imagery. The image classification comparison was made among four crops (paddy rice, potatoes, cabbages, and peanuts), roads, and structures. In the first stage, the support vector machine handled the hyperspectral image classification through pixel-based analysis. Then, in the second stage, the convolutional neural network improved the classification of image details through various blocks (cells) of segmentation. A series of discussions and analyses of the results is presented. A repair module was also designed to link the usage of CNN and SVM and remove classification errors.
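The two-stage idea described in the abstract (per-pixel classification followed by block-wise refinement) can be sketched in a few lines. This is an illustrative simplification in plain Python, not the authors' implementation: a hypothetical threshold classifier stands in for the SVM, and a majority vote over blocks stands in for the CNN/repair stage; all names and values are invented for illustration.

```python
# Minimal sketch of the two-stage pipeline. A placeholder per-pixel
# classifier plays the role of the SVM (stage 1); block-wise majority
# voting plays the role of the CNN/repair refinement (stage 2).
from collections import Counter

def stage1_pixel_classify(image, classify_pixel):
    """Label every pixel independently (the SVM's role in the paper)."""
    return [[classify_pixel(px) for px in row] for row in image]

def stage2_block_refine(labels, block=2):
    """Re-label each block x block cell with its majority class,
    smoothing isolated per-pixel errors (the refinement stage's role)."""
    h, w = len(labels), len(labels[0])
    refined = [row[:] for row in labels]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cell = [labels[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            majority = Counter(cell).most_common(1)[0][0]
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    refined[y][x] = majority
    return refined

# Toy "image": one reflectance value per pixel; values >= 0.5 are
# called "rice", the rest "road" (a hypothetical decision rule).
image = [[0.9, 0.8, 0.1, 0.2],
         [0.7, 0.4, 0.1, 0.3],   # 0.4 is a stage-1 error in a rice block
         [0.2, 0.1, 0.9, 0.8],
         [0.1, 0.2, 0.6, 0.9]]
labels = stage1_pixel_classify(image, lambda v: "rice" if v >= 0.5 else "road")
refined = stage2_block_refine(labels, block=2)
```

After refinement, the isolated misclassified pixel inside the rice block is repaired by its block's majority vote, which is the role the paper assigns to the second stage.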

Article
High-Resolution Remote Sensing Image Segmentation Framework Based on Attention Mechanism and Adaptive Weighting
ISPRS Int. J. Geo-Inf. 2021, 10(4), 241; https://doi.org/10.3390/ijgi10040241 - 07 Apr 2021
Abstract
Semantic segmentation has been widely used in the basic task of extracting information from images. Despite this progress, two challenges remain: (1) it is difficult for a single-size receptive field to acquire sufficiently strong representational features, and (2) the traditional encoder-decoder structure directly integrates shallow features with deep features. However, because shallow features pass through only a small number of network layers, their representational ability is weak, and noise is introduced that degrades segmentation performance. In this paper, an Adaptive Multi-Scale Module (AMSM) and an Adaptive Fuse Module (AFM) are proposed to solve these two problems. AMSM adopts the idea of channel and spatial attention, adaptively fusing three branches built with different dilation rates and flexibly generating weights according to the content of the image. AFM uses deep feature maps to filter shallow feature maps, obtaining weights for the deep and shallow feature maps that effectively filter noise out of the shallow feature maps. Based on these two symmetrical modules, extensive experiments were carried out. On the ISPRS Vaihingen dataset, the F1-score and Overall Accuracy (OA) reached 86.79% and 88.35%, respectively.
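The gating idea behind the fusion module (deep features deciding how much of the noisier shallow features to let through) can be sketched with scalars. This is a hypothetical simplification, not the paper's AFM: the real module operates on multi-channel feature maps with learned convolutions, whereas here a sigmoid of the deep feature serves as the gate and all values are invented.

```python
# Scalar-per-position sketch of deep-guided feature fusion: the deep
# feature generates a gate in (0, 1), and the fused output blends the
# shallow and deep features according to that gate.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def adaptive_fuse(shallow, deep):
    """Blend shallow and deep features position-wise, with the gate
    derived from the deep feature (an illustrative stand-in for a
    learned gating convolution)."""
    gates = [sigmoid(d) for d in deep]
    return [g * s + (1.0 - g) * d
            for g, s, d in zip(gates, shallow, deep)]

shallow = [0.9, -0.2, 0.5]   # fine detail, but noisy
deep    = [2.0, -3.0, 0.0]   # semantically strong, coarse
fused = adaptive_fuse(shallow, deep)
```

Where the deep feature is strongly negative, the gate shrinks toward zero and the deep feature dominates the blend, suppressing the shallow value at that position; where the gate is near one, the shallow detail passes through.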
