Article
Peer-Review Record

The Impact of Shape-Based Cue Discriminability on Attentional Performance

by Olga Lukashova-Sanz 1,*, Siegfried Wahl 1,2, Thomas S. A. Wallis 3,† and Katharina Rifai 1,2
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 1 February 2021 / Revised: 9 April 2021 / Accepted: 13 April 2021 / Published: 15 April 2021

Round 1

Reviewer 1 Report

The impact of shape-based cue discriminability on attentional performance

This study aims to test the effect of cue discriminability on perceptual performance. The authors measured the accuracy and RT of orientation discrimination in Gabor stimuli, as a function of how easily the pre-cue shifted attention to one side. It was found that the easier the cue was to categorize (as pointing to the right or to the left), the greater the validity effect. Additionally, the effect was modulated by ISI, such that the shorter the ISI, the larger the effect.

Evaluation

The question of whether (and how) the discriminability of the cue affects performance may be an interesting one. The results are not surprising by any means: when people see the cue better (i.e., recognize the direction of the cue), the effect is larger. What is a bit surprising is that this does not happen when the cue has 70% discriminability, and that there is a difference between cues that have almost the same discriminability.

I have additional theoretical, empirical, and presentational concerns with this manuscript, which I summarize below.

Empirical issues:

  1. The experiment used many variables, and the rationale of the experimental procedure is not clear: for example, cue discriminability, ISI, a cue validity of 1:2, and, on top of everything, a response cue. The introduction discusses the discriminability issue a bit, but what is the reason, for example, for the response cue? In general, it looks to me that too many variables are manipulated at once, without much theoretical (or empirical) justification.
  2. Secondly, the specific variable levels are not well justified. Why use CD1, CD2, and CD3? This question is even more important once we look at the results of the discriminability test, where it is clear that CD2 and CD3 are very similar.
  3. What is the explanation for CD1 not showing any validity effect, despite having 70% discriminability? And what is the reason for the differences between CD2 and CD3, given that there is little difference in their discriminability levels?
  4. Why not run more participants? This is a full article, and unless I missed something, a pilot experiment should not be published as a full article. This is an editorial question, but also a question to the authors. I understand that the pandemic prevents most face-to-face research, but in that case, my opinion is that we should simply wait until it is possible to add a sufficient number of participants.
  5. Why 11 participants? How was this number determined? Was a power analysis conducted to determine the number of participants required for the study? If not, could you run a post hoc power analysis or a sensitivity analysis to show what effect size the study is able to detect (see the sketch after this list)?
  6. The choice to subtract the results of the line-cue condition is a bit odd, as the line-cue experiment was performed in a separate session (unless I misunderstood this?). You cannot assume that people respond the same way in two different sessions. This is not a good control for your data, and it is difficult to evaluate an analysis that is based on it.
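As a concrete illustration of the sensitivity analysis requested in point 5, one could compute the smallest effect size detectable with 11 participants. The sketch below is hypothetical: it uses statsmodels and assumes a within-subjects (paired) t-test with conventional alpha and power, none of which is taken from the manuscript.

```python
# Hypothetical sensitivity analysis: with n = 11 and standard criteria,
# what is the smallest effect size (Cohen's d) detectable at 80% power?
from statsmodels.stats.power import TTestPower

smallest_d = TTestPower().solve_power(
    nobs=11,                  # the study's sample size
    alpha=0.05,               # conventional significance level (assumed)
    power=0.80,               # conventional target power (assumed)
    alternative="two-sided",
)
print(f"Smallest detectable effect: d = {smallest_d:.2f}")  # roughly d = 0.9
```

Reporting a number of this kind would make explicit how large an effect the design can reliably detect.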
Presentation issues:

Some editing is needed as there are numerous typos. Some examples:

Line 33: discrimination and not dicrimination.

Line 50: discriminability and not dicriminability.

Line 128 “in a different day” or “in different days”, but not “in a different days”.

There is also some unusual use of punctuation.

I stopped noting these typos after a while, but please make sure you proofread carefully. A small tip: load the manuscript into a program that can read it aloud. The program reads exactly what is written, which makes it easy to detect words that are not what you meant, even words that are correctly spelled but wrong in context (for example, if you used trail instead of trial). It also makes it easy to detect punctuation problems.

Minor issues:

I wouldn’t use the word complexity. There is nothing more complex in CD2 than in CD3; the only variable they differ on is discriminability.

Figure 1B is not necessary. You could add these examples to 1A, and it would be even clearer (showing not only the difference between right and left but also the difference between the different cues).

I think the convention now is to use “participants” rather than “subjects”.

Please check the use of terms in the results section. Often you use “main effect” when you actually describe an interaction.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

In a single straightforward experiment, the authors examine the degree to which an abstract shape can shift attention. The shapes had a larger protruding lobe on either the left or the right. The shape was placed within a standard cueing paradigm in which participants were required to indicate the orientation of a Gabor patch that occurred at either the cued or the uncued location. The central results showed that thresholds for target discrimination decreased as a function of cue shape.

Evaluation

I very much like the paper. The work looks to have been carried out very well, the data analysed appropriately, and the manuscript written fairly well (I have pointed out a few things in this respect below). I just have a few small issues.

The authors refer to their shapes as ‘endogenous’ cues. They should be clearer on why they think this is the case. Is it because it is a central cue? Although this is the case, abstract blobs with a slightly protruding lobe are nothing like, for instance, an arrow, which has a large top-down interpretive component. It seems to me that the attentional shift the authors observe is a stimulus-driven effect.

Line 36. The authors state “Fewer, however, looked into the modulation of the properties of the cue itself”. I don’t think this is correct. The whole point of the work on Attentional Control Settings and Contingent Capture (Folk, Remington, & Johnston, 1992) is to examine the influence of the cue and its relationship to the target.

Line 91. It was only after a couple of re-reads that I realised the section entitled “Shape-based spatial cue” was some form of pre-test as part of stimulus generation. I would entitle this section something like “Stimulus generation and pre-test”. Similarly, in the same section, the term “indicated” usually refers to what a participant does: they indicate via a response. But here the authors mean it in a different sense. The authors should therefore change some of the phrasing of this section. For instance, in Line 91, “To indicate cueing direction the radial frequency….” could read “In order to objectively determine cue direction….”

The authors should consider including some measure of effect size. This is becoming standard practice, and at one point the authors state “The main effect of inter-stimulus interval is not significant (p = 0.06).” Providing an effect size here would help the reader interpret this “non-significant” effect.
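As one concrete option, when an ANOVA result is reported only as an F statistic with its degrees of freedom, partial eta squared can be recovered via the standard identity η²p = F·df₁ / (F·df₁ + df₂). The sketch below illustrates this; the example numbers are hypothetical, not the manuscript’s values.

```python
# Convert a reported F statistic to partial eta squared using the
# standard identity: eta_p^2 = F * df_effect / (F * df_effect + df_error).
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Hypothetical example values (not taken from the manuscript):
print(partial_eta_squared(f_value=4.2, df_effect=1, df_error=10))  # ~0.30
```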

Suggestions for small edits:

Line 34. The phrase “Numerous studies investigated the modulation of attention deployment when varying the target, distractors, or the task in attentional paradigm” doesn’t quite work. I would change it to “Numerous studies investigated the modulation of attention deployment when varying the target, distractors, or the task in an array of attentional paradigms”.

Line 36. “Looked into” should be changed to “have examined”.

Line 44. The paragraphs are quite long. “In the present verification study…” should be the beginning of a new one.

Line 59. “This can influence the time course of attentional effect” should read “This can influence the time course of an attentional effect”

“Subjects” should be replaced by “Participants” throughout.

Line 81. As presently written, the sentence states that the stimuli were presented 50 cm from the display monitor (presumably the intended meaning is that participants were seated 50 cm from the monitor).

Line 232. “Based on this data”. “Data” is plural so the phrase could be changed to “Based on these data”.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

Vision MS# 1115026

SUMMARY

The authors present a pilot experiment in which they manipulate the shape and the number of convolutions on the outline of a shape used in a spatial cueing experiment. The results showed that increases in cue discriminability are associated with increases in attentional movement. While I believe this study is methodologically sound and that the data analysis was done properly, I have a number of issues the authors should address before I can recommend publication. The biggest issue is the motivation/purpose behind the study, which is unclear to me from reading the manuscript. Additionally, I believe the discussion/conclusions could also be improved (although this would be made easier by an increased foregrounding of the motivation). Finally, while I applaud the authors for making their data publicly available, at present they are not really available, because clicking the provided URL leads to an OSF page where I need to “Request Access” to the data. This should be amended to make the data truly public. I outline these suggestions below, and then provide a list of more minor revisions I would suggest the authors take into consideration.

MAJOR ISSUES (in order of importance)

  1. The motivation for conducting this study is unclear. In the introduction, the authors discuss optimizing UX on websites, navigational assistance in cars, and similar applications, but I was left asking myself why looking at “worse” cues is important in this instance. For example, using a clearly demarcated arrow leads to a strong cueing effect, so what is the motivation behind using less clear stimuli to induce cueing? By the authors’ own admission (in lines 58-59): “Less discriminable cues…might require more time to process them”. So why use them? To clarify: I am totally in favour of conducting basic research for the sake of basic research, but the way the introduction is currently set up seems to suggest that the importance of the research lies in solving problems that do not exist if you simply use more basic stimuli. My recommendation is that the authors bring out the purpose of this experiment early in the introduction, and then fit the rest of their arguments into that framework to set up the actual experiment itself.
  2. The discussion tends to focus on connecting to previous research and on discussing possible limitations of the study, and it does this quite well. What I would like to see more of (and this relates closely to point 1 above) is a description of how these research findings help to answer the questions asked in the introduction. Further, this section needs to answer the question of why these findings are important! (Again, I am not saying they are not, just that it is not immediately apparent from the current writing.) The authors should provide this information, which would also strengthen the conclusion of the manuscript.
  3. The data are not truly public, because one needs to request access from the authors to access the data on OSF. This would lead to a failure of blinding in the review process, so the authors should make the data freely and publicly available to ensure review of the data is possible before publication.

MINOR ISSUES (in order of appearance)

  1. The introductory paragraph seems like it could use some citations to previous literature to support some of the statements that are made.
  2. The terms “pilot” and “verification” are used together in line 72, but in other places in the manuscript they are used interchangeably. It would be good to choose one term and stick to it throughout the manuscript.
  3. How were the equation and shape described in Section 2.2.2 decided upon? (For reference, a sketch of the standard radial-frequency formulation appears after this list.)
  4. The fractions in line 108 are somewhat challenging to parse in terms of what they mean for the shape being used. Is there a way you could incorporate those values into Figure 1 to show how they are implemented?
  5. It is unclear to me why the angles in the Gabor patches (described in line 124) were manipulated. Was there another hypothesis that was not tested/reported here?
  6. Section 2.3.3 is labelled “training session” – is this demonstrably different from a practice session? I was somewhat confused by the terminology used by the authors, because this is not (at least I don’t think it is) a training study, per se.
  7. The authors should justify why they chose a cue validity of 67%, when Posner’s task (upon which this paradigm was based) had a validity of 75%.
  8. Did the authors have a priori and/or quantitative rules around excluding a participant’s data? While I appreciate that some participants just don’t seem to “get” the task (I have seen this many times in my own tasks), it is good to have clear and objective guidelines to use when excluding data.
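As background to point 3: shape cues of this kind are often constructed as radial frequency (RF) patterns, in which the radius of a base circle is modulated sinusoidally as a function of polar angle. The sketch below shows that standard formulation for illustration only; the equation and the frequency, amplitude, and phase values are assumptions, not the parameters actually used in the manuscript.

```python
# Illustrative radial frequency (RF) contour of the kind often used for
# shape-based cues. All parameter values are assumptions for illustration.
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 1000)  # polar angle around the contour
r0 = 1.0                                 # base radius
amplitude = 0.2                          # modulation depth
frequency = 5                            # number of lobes around the circle
phase = 0.0                              # rotating this can bias one lobe
                                         # toward the left or the right

# Standard RF formulation: r(theta) = r0 * (1 + A * sin(f * theta + phase))
r = r0 * (1 + amplitude * np.sin(frequency * theta + phase))

plt.plot(r * np.cos(theta), r * np.sin(theta))
plt.axis("equal")
plt.title("Illustrative RF contour (not the actual stimulus)")
plt.show()
```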

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

I thank the authors for the extensive work they did on the introduction and on explaining the variables and their decisions.
A couple of issues still remain in my mind. 

  1. I can see that the authors added effect sizes to the analyses; thank you for that. However, no discussion of the effect sizes is provided. This is important given the unusually small number of participants.
  2. Still on the small number of participants: I am sorry, but I cannot see a justification for publishing a study with only a few participants. This is not such an important issue that the community cannot wait a few more months to be sure there is enough power in the study. This will have to be an editorial decision, but that is my opinion.
  3. I am still not clear about the subtraction of a completely separate experiment from the main one. This is statistically, empirically, and theoretically wrong. Two separate studies can be compared to each other, but subtracting one from the other implies that they are dependent, when they are not. Are there any studies you can cite that used this procedure?

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

I think the authors have done a good job on the revision and am happy for it to be published.

Author Response

We thank the reviewer for the kind comments on the manuscript.

Reviewer 3 Report

The authors have attended to the revisions I recommended on the original manuscript with great care, and I believe the manuscript is significantly improved. In particular, the revised introduction is great. I have a couple of additional comments that I would like to see addressed before acceptance of the manuscript.

  1. I appreciate the issues around excluding data when using a curve-fitting procedure; I have encountered them myself as well! I think it is a very positive fact that the statistical results do not change when the full participant set is analysed. It would be great for the authors to include this fact (and the statistical output) in the manuscript, perhaps as a footnote. The more transparency provided, the more credible the results will seem!
  2. I'm sorry I didn't notice this in the first revision of the manuscript, but I would like to see a discussion of how the authors can be certain that they are dealing with covert attentional shifts. Based on the experimental design, I am not certain that the results could not be explained by overt attentional shifts. Including some discussion of this would strengthen the manuscript.

Author Response

Please see the attachment.

Author Response File: Author Response.docx
