Peer-Review Record

Performance Testing on Marker Clustering and Heatmap Visualization Techniques: A Comparative Study on JavaScript Mapping Libraries

ISPRS Int. J. Geo-Inf. 2019, 8(8), 348; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi8080348
by Rostislav Netek *, Jan Brus and Ondrej Tomecka
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 10 June 2019 / Revised: 6 July 2019 / Accepted: 30 July 2019 / Published: 1 August 2019
(This article belongs to the Special Issue Smart Cartography for Big Data Solutions)

Round 1

Reviewer 1 Report

This is a nicely conducted case study. I especially like the provided web page, which allows readers to investigate the findings directly.
I have some comments, though, that might be addressed:

1) I am wondering if it is really important to explain the different data formats for geospatial point data.
From my understanding, they are not very relevant to carrying out this study.

2) You talk about binning in the introduction, but as far as I can see, you do not include it in your study.
I would therefore drop this part and maybe change the title accordingly so that it only addresses heatmaps and clustering.

3) While the study is interesting for practitioners, it might not be very useful for advancing the underlying algorithms or data structures.
Hence, my main concern regarding the study is how long the results will remain relevant.
Even an update to one of the described libraries may already give different results.
I would discuss this issue somewhere in the paper.

In addition, I am wondering what exactly causes the differences in the performance of these libraries, i.e., which algorithms are used.

It would also be very interesting to know how the heatmaps are actually calculated, as there are different ways to do it.
Depending on the approach (e.g., the kernel and its parameters), the results can vary a lot.
In addition, especially when heatmaps are calculated on the GPU, not only the number of points but also the size/resolution of the screen may determine the rendering speed.
Some explanations of these issues are described, for example, in this paper: https://0-doi-org.brum.beds.ac.uk/10.1007/s41651-017-0004-4.
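As an illustration of the kernel-parameter point (this sketch is not taken from the manuscript; it assumes the Leaflet library and the Leaflet.heat plugin are loaded globally, and the coordinates and intensities are made up), the same point set can produce visually very different heatmaps depending on how the radius and blur options are set:

```typescript
// Leaflet.heat exposes kernel-like parameters (radius, blur) per heat layer.
declare const L: any; // Leaflet global, provided by leaflet.js + leaflet-heat.js

// [lat, lng, intensity] triples; values here are invented for illustration.
const points: [number, number, number][] = [
  [49.59, 17.25, 0.8],
  [49.60, 17.26, 0.5],
];

// Small radius/blur: sharp, localized hotspots.
const sharpHeat = L.heatLayer(points, { radius: 10, blur: 5 });

// Large radius/blur: a smooth, generalized density surface from the same data.
const smoothHeat = L.heatLayer(points, { radius: 40, blur: 30 });
```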

4) Finally, maybe give it to a language-checking service to improve the English a bit.

Author Response

Dear reviewer, thank you for your positive feedback. We went through all your comments and revised the manuscript accordingly, to the best of our knowledge and in good conscience.

 

Ad 1) That is a good point; we have revised and simplified the whole chapter.

Ad 2) We have dropped the section about spatial binning.

Ad 3a) + 3b) We have added a paragraph about the relevancy of the results and the differences in performance at the very end of the article; please see the Conclusion chapter. Moreover, we have added some new information about the algorithms in several places in the text, with the aim of making it more understandable.

Ad 3c) Thank you for your note about the GPU. We have added new paragraphs about the general technical background to the Heatmap chapter as well as to both specific library chapters (Leaflet.heat and OpenLayers). We have also run additional tests comparing loading on a small and a large monitor, so we added some new results and information about this aspect as well.
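A minimal sketch of the kind of client-side timing such a comparison can rely on (this is not the authors' actual test harness; it only assumes the standard browser APIs performance.now() and requestAnimationFrame, plus Leaflet's map.addLayer()):

```typescript
// Measure the time from adding a layer to the next painted frame.
// This approximates how long the client needs to render the layer,
// and can be repeated on different screen sizes/resolutions.
function timeLayerRender(map: any, layer: any): Promise<number> {
  const start = performance.now();
  map.addLayer(layer);
  return new Promise<number>((resolve) => {
    requestAnimationFrame(() => resolve(performance.now() - start));
  });
}

// Hypothetical usage:
// timeLayerRender(map, heatLayer).then((ms) => console.log(`rendered in ${ms} ms`));
```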

Ad 4) We fully understand the concerns about the English level of non-native speakers, but the article has been corrected by a native speaker. We used a professional (paid) proofreading service. We hereby declare that both the original paper and the revised version were corrected by a native speaker. We have already made some changes for better understanding. We will send confirmation of the proofreading to the Editor (Joyce Zhang).

 

 

  


Reviewer 2 Report

This article describes a study in which several JavaScript visualisation libraries were tested and compared in terms of load time and visualisation performance in a web browser.

There is a general lack of tests comparing the performance of these tools. They become more and more important for handling growing datasets in a world where data are growing into Big data.

The authors should work on the first part of the article to make it more readable, and refine and lift the argumentation so that it is better communicated to the readers. The conclusions should also be strengthened in relation to this, so that the important research questions can be answered in the best possible way.

L. 10 - A word is missing. Presumably it is “ago”.

L. 13 - The sentence starting “Exact amount...” is not grammatically well structured, and it is difficult to understand its meaning. Please re-write it and make the message clearer.

L. 23 - The last sentence of the abstract ends in a strange, unanswered way. I think the authors should re-formulate it and give a clear statement about the results of the study performed.

L. 31 - I would use Geographical Information Science instead of just referring to the software. The authors use this term in their own sub-section title (L. 50). See this reference for the long discussion about how the abbreviation is used:
Michael F. Goodchild (2009) Geographic information systems and science: today and tomorrow, Annals of GIS, 15:1, 3-9, DOI: 10.1080/19475680903250715

L. 34 - I cannot understand the argumentation about why big data is expanding. Rising prices or improved developments in IT do not explain anything about why the whole paradigm around data volumes has changed in recent years. Either delete the sentence or make the argumentation clearer. If necessary, use a reference to other work.

L. 50 - Same as in line 34, but now the authors use the argument of decreasing costs. They have just made the opposite argument!

L. 55 - Here the authors give credit to NASA for coining the term Big data, but in their own introduction they said that there was no general or accepted definition of it.

L. 60 - There is broad acceptance in the field of Big data that the 3V have evolved into 5V. The two “missing” V's compared to the 5V are “Veracity” and “Value”. I think the inflation in the number of V's is out of scope and should not be the focus of this article, but it would be more correct to refer to the 5V, which are well known and widely used in data research.

L. 100 - It would be relevant to have a short section explaining the difference between point data and point clouds. In recent years there has been a rising number of studies dealing with point clouds from scanners, satellites and other sensor technology. How does that type of data differ from the point data that are the focus of this article?

L. 100 - The section about formats is not very consistent, and I am not sure why it is necessary to know all of this; in fact it is frustrating that some formats are described in more detail than others. In the end, the authors only work with JSON and GeoJSON.

L. 149 - When the three visualisation methods are described, it would be good to show examples of their use; I would prefer to see an illustration together with each description. Spatial binning is described but not used in the further work within the study. Maybe it is not supported in the libraries that were tested?

L. 218 - Please explain which attributes are unnecessary for the tests. I think it is relevant to know how much information was stripped (filtered) before the tests. Maybe add a table showing the data before and after stripping.

L. 229 - Figure 1 uses very small font sizes and could be designed to be more readable.

L. 243 - The same goes for Figure 2. It is almost impossible to read what is going on.

L. 245 - The sections where the tests are described are well written and explain both the libraries tested and how the tests were performed in a good and comprehensive way. No changes are needed in this part of the article. The tables and the maps are very understandable and communicative.

I find the conclusions of the paper relevant for further work on testing the load time and general visualisation methods for spatial data in larger datasets. However, it would be necessary to relate back to the introductory part, where the main research question should be made clearer.

Generally, the English language of the article needs to be improved and cleared of grammatical errors.


It is an important contribution, and it is very relevant for the community to know more about the performance of the tools that so many people use every day. I want to encourage you to continue your research and re-submit a revised version of the article.

Author Response

Dear reviewer, thank you for your positive feedback. We went through all your comments and revised the manuscript accordingly, to the best of our knowledge and in good conscience.

Ad 1) We have revised the whole article. We have made changes in several places in the text with the aim of making it more understandable.

Ad 2) We have added the word “ago”.

Ad 3) We have revised the sentence: “It could be problematic to visualise easily and quickly a large amount of data via an Internet platform”.

Ad 4) We have revised the last sentence of the Abstract.

Ad 5) That is a good point; we now use Geographical Information Science (GISci) instead of GIS.

Ad 6) and 7) We deleted the sentence.

Ad 8) We have revised the sentence to: … “Big data” was first mentioned by NASA …

Ad 9) That was a mistake; of course we meant to use 5V. We first mention the 3V (L 60) and then the 5V (L 73), so we have revised this part to be more precise.

Ad 10) Actually, we do not focus on data capturing at all, and therefore we do not focus on point clouds. However, we have added a new paragraph at L 92 which discusses this aspect.

Ad 11) We have revised and simplified the whole chapter about data formats.

Ad 12) We have dropped the section about spatial binning.

Ad 13) See L 183: All thematic attributes (taxonomy etc.) except coordinates were removed, as they were irrelevant to the testing.
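A minimal sketch of this preprocessing step (this is not the authors' actual script; it only assumes standard GeoJSON point features):

```typescript
// A GeoJSON point feature with arbitrary thematic properties.
interface PointFeature {
  type: "Feature";
  geometry: { type: "Point"; coordinates: [number, number] };
  properties: Record<string, unknown>;
}

// Keep only the geometry; drop taxonomy and all other thematic attributes.
function stripAttributes(features: PointFeature[]): PointFeature[] {
  return features.map((f): PointFeature => ({
    type: "Feature",
    geometry: f.geometry,
    properties: {},
  }));
}
```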

Ad 14) and 15) We have changed all the images for better readability.

Ad 16) We have revised and extended the conclusion chapter, and we have added the main research questions to the introduction chapter.

Ad 17) We fully understand the concerns about the English level of non-native speakers, but the article has been corrected by a native speaker. We used a professional (paid) proofreading service. We hereby declare that both the original paper and the revised version were corrected by a native speaker. We have already made some changes for better understanding. We will send confirmation of the proofreading to the Editor (Joyce Zhang).


Reviewer 3 Report

The scientific problems to be solved in this paper are not clear. It is only a performance evaluation of existing tools for big data rendering. At the same time, the innovation of this paper is not sufficient.

The introduction focuses on the research background of the paper and does not explain its significance and main innovations.

The figures cannot deliver information effectively.


Author Response

Dear reviewer, thank you for your feedback. We went through all your comments and revised the manuscript accordingly, to the best of our knowledge and in good conscience.

 

Ad 1) We have revised the whole article and have made changes in several places in the text with the aim of making it more understandable. We still hope that the topic of the performance of JS libraries is relevant to the community, and that the innovation lies in a comparison that has never been done before at this level.

Ad 2) We have added the main research questions to the introduction chapter.

Ad 3) That is a good point; we have changed all the images.

We fully understand the concerns about the English level of non-native speakers, but the article has been corrected by a native speaker. We used a professional (paid) proofreading service. We hereby declare that both the original paper and the revised version were corrected by a native speaker. We have already made some changes for better understanding. We will send confirmation of the proofreading to the Editor (Joyce Zhang).

Round 2

Reviewer 2 Report

The article has been considerably improved from the first version, and I note that the authors have reacted positively to my suggestions. I understand that the article has been checked for English language twice; this has also contributed to its better quality. I find no need for further editorial work on this submission.

Reviewer 3 Report

1. This paper does not have enough innovation. Several JavaScript libraries are compared, but the analysis of the results is rather weak, and no meaningful conclusions can be drawn. The last paragraph added after the revision makes the value of the article even more questionable.

2. After the revision, this paper still has very serious problems with its layout. The charts do not comply with the usual specifications, and the graphs in the experimental part have no valuable meaning; they merely show that the authors carried out the experiment.

