Article

Implicit Associations between Adverbs of Place and Actions in the Physical and Digital Space

by
Laila Craighero
1,* and
Maddalena Marini
2
1
Department of Neuroscience and Rehabilitation, University of Ferrara, 44121 Ferrara, Italy
2
Center for Translational Neurophysiology, Istituto Italiano di Tecnologia, 44121 Ferrara, Italy
*
Author to whom correspondence should be addressed.
Submission received: 21 September 2021 / Revised: 10 November 2021 / Accepted: 13 November 2021 / Published: 17 November 2021
(This article belongs to the Special Issue The Role of the Sensorimotor System in Cognitive Functions)

Abstract

Neuropsychological, behavioral, and neurophysiological evidence indicates that the coding of space as near and far depends on the involvement of different neuronal circuits. These circuits are recruited on the basis of functional parameters, not of metrical ones, reflecting a general distinction of human behavior, which alternatively attributes to the individual the role of agent or observer. Although much research in cognitive psychology has been devoted to demonstrating that language and concepts are rooted in the sensorimotor system, no study has investigated the presence of implicit associations between different adverbs of place (far vs. near) and actions with different functional characteristics. Using a series of Implicit Association Test (IAT) experiments, we tested this possibility for both actions performed in physical space (grasp vs. look at) and those performed when using digital technology (content generation vs. content consumption). For both the physical and digital environments, the results showed an association between the adverb near and actions related to the role of agent, and between the adverb far and actions related to the role of observer. The present findings provide the first experimental evidence of an implicit association between different adverbs of place and different actions, and of the fact that adverbs of place also apply to the digital environment.

Graphical Abstract

1. Introduction

The notion of egocentric distance refers to the space between an observer and a reference. All human languages contain special grammatical terms that serve to distinguish between different sectors of space, using the speaker or addressee as a frame of reference. Crosslinguistic research suggests that a large proportion of languages in the world make a fundamental binary distinction between terms that refer to something a short distance away and terms that refer to something a great distance away [1]. Among these terms, the adverbs of place serve to specify the place of action, the position of a person or an object in space, and the distance of a person or object from the speaker or listener. In particular, far refers to someone or something at a great distance, and near to someone or something close to where one is.
The discovery of the neurophysiological basis of the distinction of physical space into near and far has not been trivial. Indeed, at the level of the physiology of vision, no single cue allows a binary distinction between “near” and “far”; it is only possible to obtain visual information about which object is “closest”. Information on the distance between the observer and static objects is based on cues provided by the coordinated use of the two eyes, mainly stereopsis and eye convergence, which compare the parameters coming from different objects simultaneously present [2]. The situation is further complicated, as a series of studies showed that the metrics of perception are influenced by many different variables, such as the energetic costs associated with performing distance-relevant actions [3,4], the observer’s purposes [4], and the behavioral abilities of the observer’s body [5]. Specifically, distant targets looked farther away when participants carried a heavy load than when they carried no load or a lighter load [3,4], but only if they intended to throw the load (e.g., a ball) at the target, not if they intended to walk to it [4]. Furthermore, targets located beyond the boundary of what can be reached without a tool, but within reach with one, appeared closer when the tool was used [5].
Neuropsychological studies investigating neglect patients [6,7] have shed some light on what is special about touching an object, with or without the help of a tool, to change the perception of distance. Neglect patients tend to ignore the left side of their visual field. For example, when they are asked to bisect a line, they bisect only the right half, resulting in responses far to the right of the true center. Some patients show neglect only for near lines and not for far lines [8], whereas other patients show neglect only for far lines [9]. This double dissociation constitutes strong evidence that the brain contains separate neural circuits for near or peripersonal space, on the one hand, and far or extrapersonal space, on the other. In patients who show neglect only in near space, bisecting a distant line using a stick makes the error reappear. However, when these patients are asked to use a laser pointer to bisect a distant line, the far-space error is not present [6,7]. Therefore, the use of the stick induces the appearance of a symptom in far space that is normally present in near space only, indicating that the far space has been remapped as near space as a result of the use of the stick. This remapping, however, does not occur when a laser pointer is used. Thus, these results suggest that the use of the stick and the pointer may involve different neuronal circuits. The stick may require the activation of the damaged circuit, responsible for the presence of peripersonal neglect. The pointer, on the other hand, may rely on an intact circuit that supports accurate performance.
The main difference between the stick and the laser pointer is functional. With the stick, you reach the target, and you can exert such force that you can break the target, rotate it, push it, or make it feel touched in the case of an animated being. In the latter case, you can communicate something to the target, you can catch its attention, push it in a certain direction, stroke it with a light touch, or punish it with a heavy touch. The effects on the target are exactly those that would be produced by the use of the arm/hand. With the laser pointer, in contrast, you can point to something very precisely, but you are not able to physically modify the target. You can neither break it, nor move it, nor draw its attention if it is not looking at you. The laser pointer behaves in a similar way to the eyes; it has the same ability to focus on an extremely precise point of space from which we can obtain the information that is interesting at that moment. Several studies on monkeys, patients, and normal participants also support the view that tool use can modulate peripersonal space [10,11,12,13,14,15] and showed that this modulation has to be attributed to tool functionality, i.e., the possibility to interact with and modify the state of the target, and not to other confounding variables, such as proprioceptive feedback or visual appearance [16]. Indeed, the division of space based on the potential actions that can be performed in it is also reflected in the organization of the sensorimotor system. Separate networks are devoted to the movement programming of the eyes [17,18,19] and to the encoding of reaching movements of the head, arm, and hand [20]. Therefore, the reachable peripersonal space and the unreachable extrapersonal space are anatomically and functionally represented in segregated circuits [21].
Taken together, the neuropsychological, behavioral, and neurophysiological evidence suggests that the binary cognitive/linguistic distinction of space into near and far is not defined by metrical parameters but by functional ones, that is, it depends on the possibility to voluntarily act on the target to modify its physical state. This position is in agreement with embodied cognition theories [22,23], according to which language and concepts are grounded in the sensorimotor system, given the presence of a deep interconnection between cognition, perception, and action. At present, there is very little experimental evidence that indicates a link between proximal and distal linguistic descriptors and the activation of the sensorimotor system. Specifically, it has been shown that, during a task to reach an object, the automatic reading of spatial adverbs (far vs. near) that are inconsistent with the real position of the object influences the kinematics of the reaching movement [24]. More interestingly, a study indicated that when participants were asked to name (for example, “this red triangle”; “that red triangle”) and to point with their hand or a tool to objects at different distances, the use of a stick led to greater use of the proximal demonstratives for objects placed at greater distances [25]. This latter result is in agreement with the hypothesis that the use of spatial demonstratives reflects a distinction between near and far space based on the actual possibility for the individual to act in that space at that moment.
However, as far as we know, there is no experimental evidence that indicates an association at a cognitive level between spatial adverbs and actions with different functional characteristics, in a context that does not involve the participant in the execution of an action. The first objective of this study was to fill this gap by studying the presence of implicit associations between adverbs of space and labels referring to different actions performed in the physical environment.
In addition, today, many of our activities are carried out using smartphones, and the COVID-19 pandemic has helped to increase the imbalance between actions in the physical and digital space. The digital space is an information space designed as a network of more or less static addressable objects, where information is perceived, stored, and retrieved, and where an individual can interact with others. Obviously, digital space is not constrained by metrical parameters, and we access it by clicking the app icons present on the screen. Thus, digital space may be coded as peripersonal or extrapersonal according to the same rules that apply in physical space: peripersonal when objects can be acted upon and a clear interaction is present, and extrapersonal when objects can only be perceived. If so, then app icons may reveal the potential actions to be performed on the target and, consequently, may be associated with near or far space. Indeed, as in physical space, our behaviors in digital space may be divided into perceiving and acting. The terms used to categorize these different online behaviors are, respectively, “content consumption” and “content generation” [26]. Content consumption refers to the act of reading, listening, viewing, and other ways of taking in various forms of digital media. Examples of apps that involve this type of activity are those that allow access to web browsers (e.g., Google, Firefox, Safari), weather information (e.g., 1 Weather, Weather Now, AccuWeather), and news (e.g., Apple News, Google News, The Week, Flipboard). Content generation, instead, describes the various practices that produce any type of digital content, including text and voice messages, video files, photos, etc., to be shared with the digital community via blogs, email apps, and social media sites (e.g., Facebook, WhatsApp, Instagram, Twitter).
The second objective of this work was, therefore, to study for the first time the presence of implicit associations between spatial adverbs and app icons that direct to online actions with different functional characteristics.
Specifically, we propose that in both physical and digital environments the binary cognitive/linguistic distinction of space into near and far reflects an encoding in an operational/functional format (i.e., the boundary between the two regions depends on the possibility to voluntarily act on the target to modify its physical state). We aimed to test whether, in the physical environment, the adverb of space near is associated with actions that require reaching the target (e.g., reaching and grasping the object), and the adverb of space far with actions in which reaching the target is not allowed (e.g., looking at an unreachable object) and, congruently, whether in the digital environment, the adverb of space near is associated with app icons directing to content generation actions (e.g., WhatsApp) and the adverb of space far with those directing to content consumption actions (e.g., Google).
To this end, we used a series of Implicit Association Test (IAT) [27,28] experiments. The IAT is a research tool based on reaction time recordings for indirectly measuring the strength of associations among categories. The task requires sorting stimulus exemplars from four categories using only two response options, each of which is assigned to two of the four categories. The logic of the IAT is that this sorting task should be easier when the two categories that share the same response option are strongly associated than when they are weakly associated. In the twenty years since its initial publication, the IAT has been applied in a diverse array of disciplines including social psychology [29,30], cognitive psychology [31,32], clinical psychology [33,34], developmental psychology [35,36], neuroscience [37,38,39,40,41], marketing [42], and health research [34,43,44].
Specifically, in the present study, four experiments were conducted using the IAT: (i) in the physical environment, to investigate the implicit association between the adverbs of space near and far and the actions grasp and look at (Experiment 1), and (ii) in the digital environment, to investigate the implicit association between the adverbs of space near and far and apps directing to content generation and content consumption actions (Experiments 2–4). We expected an association between near + grasp/far + look at, in the physical environment, and between near + content generation apps/far + content consumption apps, in the digital one.
The data for each experiment are available at the link: https://osf.io/9g235/?view_only=f072f3bb7ea34def93ea3cb07b9f2e78 (accessed on: 15 November 2021)

2. Experiment 1

The aim of Experiment 1 was to test for the presence of an implicit association between the adverbs of space near and far, needed to respectively classify objects at a reachable distance and landscapes, and pictures showing individuals performing a reach and grasp action or a look at action. These actions are characterized by different functional characteristics. The reach and grasp action changes the state of the target as it moves it, whereas the look at action does not. According to the evidence reviewed in the Introduction, the binary distinction of space into near and far is defined by functional parameters, that is, it depends on the possibility to voluntarily act on the target to modify its physical state. Therefore, we expected an association near + grasp/far + look at.

2.1. Materials and Methods

2.1.1. Participants

For all the experiments reported in this article, the following indications apply. Participants were unaware of the purposes of the study. Procedures were in accordance with the guidelines of the Declaration of Helsinki. Data collection was carried out from March to December 2020. Participants were actively recruited by sending an invitation on Google Classroom to all students enrolled in the different teaching courses at the University of Ferrara. Interested students contacted the authors via email, granting informed consent. Each interested student was emailed a private link to the Project Implicit research platform (https://implicit.harvard.edu) to participate in the experiment. Participants were asked not to run the experiment more than once and not to share the link with others. Information about gender and age, but not identity, was requested on the research platform before starting the experiment. All data were collected online, and no information about participants’ identity (neither name nor email address) was recorded. The number of datasets for each experiment corresponded to the number of students who answered the call for that experiment. We assume that no participant took part in more than one experiment, since no student was sent more than one link.
Forty-three participants (M age = 22.53, SD age = 2.32, 58.1% women) volunteered to take part in Experiment 1.

2.1.2. Stimuli and Procedure

Stimuli used in Experiment 1 consisted of 20 colored pictures, subdivided into four categories: (i) for the near category, five images showing objects at a reachable distance; (ii) for the far category, five images showing landscapes; (iii) for the grasp category, images portraying a man or a woman grasping five different objects; (iv) for the look at category, images portraying a man or a woman looking at the same objects, at the same distance, with a Plexiglas panel placed between the actor and the object.
The link opened a webpage where participants were first asked to enter their gender and age and then displayed instructions for performing the task. In particular, a double entry table was presented in which images and associated categories were shown (Figure S1 in Supplementary Materials), accompanied by the information relative to sorting categories and response options. In all experiments, the two answer options were the “E” key (leftmost) and the “I” key (rightmost) on the QWERTY keyboard. Instructions were as follows: “In the center of the screen, one at a time, you will be presented with the IMAGES you see below. You will have to press the button corresponding to the CATEGORY to which the image belongs. When the category to which the image belongs is written at the top LEFT, you will have to quickly press the E key with the index finger of your LEFT hand. When the category to which the image belongs is written at the top RIGHT, you will have to quickly press the I key with the index finger of your RIGHT hand.”
Stimuli appeared one at a time in the middle of the screen, together with the names of the contrasted categories on the left and right at the top of the screen. If the participant made an error, a red X appeared below the stimulus and the trial continued until the correct key was pressed.
The experimental session lasted about 10 min on average and consisted of 180 trials subdivided into seven blocks, following the standard task procedure described by Nosek, Greenwald, and Banaji [45]. In this procedure, some blocks are practice tasks to acquaint subjects with the stimulus materials and sorting rules (Blocks 1, 2, 5). Others are critical blocks, which involve simultaneously sorting stimulus items representing four concepts/categories with two response options (Blocks 3, 4, 6, 7). Ease of sorting is indexed by the speed of responding, with faster responding indicating stronger associations.
In the present experiment, in Block 1 (action categories, 20 trials), participants practiced sorting images belonging to the grasp category (i.e., individuals grasping objects) and images belonging to the look at category (i.e., individuals looking at the objects). They were asked to press the E key for grasp and the I key for look at. In Block 2 (distance categories, 20 trials), participants practiced sorting images belonging to the near category (i.e., objects at a reachable distance) and images belonging to the far category (i.e., landscapes). They pressed the E key for near and the I key for far. In Blocks 3 (20 trials) and 4 (40 trials), participants categorized images belonging to either the action or the distance categories (combined blocks, action + distance; congruent blocks). They pressed the E key for grasp and near and the I key for look at and far. In Block 5 (reverse action categories, 20 trials), participants practiced sorting images belonging to the action categories with the reverse key mapping from Block 1, i.e., grasp with the I key and look at with the E key. In Blocks 6 (20 trials) and 7 (40 trials), participants sorted images from all four categories with the opposite key pairings from Blocks 3 and 4 (combined blocks, reverse action + distance; incongruent blocks), i.e., look at and near with the E key and grasp and far with the I key (Figure 1).
The order of the action categories blocks (Blocks 1 and 5) and the combined blocks (Blocks 3 + 4 and Blocks 6 + 7) was counterbalanced across subjects. That is, half of the participants (Order 1) completed the blocks in the order just outlined, whereas the other half (Order 2) completed the task with Block 1, Block 3, and Block 4 switched with Block 5, Block 6, and Block 7 (Figure 2).
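The seven-block structure and the Order 1/Order 2 counterbalancing described above can be summarized as plain data. The following sketch is only illustrative: block numbers, key assignments, and trial counts are taken from the text, while the data representation itself is an assumption.

```python
# Seven-block IAT structure of Experiment 1, Order 1.
# Each entry: (block number, categories on the left "E" key,
#              categories on the right "I" key, number of trials).
BLOCKS_ORDER_1 = [
    (1, ["grasp"],           ["look at"],        20),  # practice: actions
    (2, ["near"],            ["far"],            20),  # practice: distance
    (3, ["grasp", "near"],   ["look at", "far"], 20),  # combined, congruent
    (4, ["grasp", "near"],   ["look at", "far"], 40),  # combined, congruent
    (5, ["look at"],         ["grasp"],          20),  # practice: reversed actions
    (6, ["look at", "near"], ["grasp", "far"],   20),  # combined, incongruent
    (7, ["look at", "near"], ["grasp", "far"],   40),  # combined, incongruent
]

def order_2(blocks):
    """Counterbalanced order: the action-practice and combined blocks
    swap key mappings (Block 1 with 5, Block 3 with 6, Block 4 with 7),
    while each block keeps its position in the sequence."""
    by_num = {b[0]: b for b in blocks}
    swap = {1: 5, 5: 1, 3: 6, 6: 3, 4: 7, 7: 4}
    return [(n,) + by_num[swap.get(n, n)][1:] for n, *_ in blocks]
```

With this layout, the 20 + 20 + 20 + 40 + 20 + 20 + 40 trials sum to the 180 trials of the session, and `order_2(BLOCKS_ORDER_1)` starts with look at on the E key, as in the swapped sequence.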
Response times and errors were collected online through the Project Implicit research platform (https://implicit.harvard.edu). Response time is the time from the onset of a single stimulus to the categorization of that stimulus.

2.2. Data Analysis

Data were analyzed according to the recommended IAT algorithm described in [46]. That is, we computed the D score for each participant. The D score [46] is a variation of Cohen’s d calculation of effect size used to measure the association strength between categories in the IAT. Research showed that the recommended IAT algorithm strongly outperforms, in terms of psychometric properties, the conventional procedures often used in cognitive and social psychology for RT paradigms, such as comparing latencies or errors in the combined blocks [27,46]. Indeed, this algorithm simultaneously considers both latencies and errors. That is, it uses a metric calibrated by each respondent’s latency variability and includes a latency penalty for errors.
Specifically, to calculate the D score for each participant, we (a) removed responses faster than 350 ms and slower than 10,000 ms, (b) computed the mean of correct latencies for each combined block (Block 3, Block 4, Block 6, and Block 7), (c) computed one pooled standard deviation for all trials in Block 3 and Block 6 (SD3–6) and another for Block 4 and Block 7 (SD4–7), (d) replaced errors with the mean of the correct responses in that response block (computed in Step b) plus a 600 millisecond penalty, (e) averaged the resulting values for each of the four blocks (MBlock3, MBlock4, MBlock6, MBlock7), (f) computed the two mean differences (MBlock6 − MBlock3) and (MBlock7 − MBlock4), (g) divided each difference score by its associated pooled standard deviation, and (h) computed D as the equal-weight average of the two resulting ratios.
D = [(MBlock6 − MBlock3)/SD3–6 + (MBlock7 − MBlock4)/SD4–7]/2
According to the present block sequence, MBlock6 − MBlock3 corresponded to
MBlock6 (Near + Look at/Far + Grasp) − MBlock3 (Near + Grasp/Far + Look at)
and MBlock7 − MBlock4 corresponded to
MBlock7 (Near + Look at/Far + Grasp) − MBlock4 (Near + Grasp/Far + Look at)
Therefore, a positive D score indicated an association near + grasp/far + look at.
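Steps (a)–(h) of the scoring algorithm can be sketched in Python. This is a minimal sketch under stated assumptions: the per-trial record format (`block`, `rt`, `correct`) is hypothetical (the Project Implicit export differs), and the pooled standard deviation is computed here over raw latencies, which is a simplification of the variants discussed in [46].

```python
import statistics

ERROR_PENALTY_MS = 600          # latency penalty added to error trials (step d)
MIN_RT, MAX_RT = 350, 10_000    # responses outside this window are dropped (step a)

def d_score(trials):
    """Compute an IAT D score from a list of trial records.

    Each trial is a dict with keys:
      block   -- 3, 4, 6, or 7 (the combined blocks)
      rt      -- response latency in ms
      correct -- True if the first keypress was correct
    (Hypothetical record format, for illustration only.)
    """
    # (a) drop responses faster than 350 ms or slower than 10,000 ms
    trials = [t for t in trials if MIN_RT <= t["rt"] <= MAX_RT]
    by_block = {b: [t for t in trials if t["block"] == b] for b in (3, 4, 6, 7)}

    # (b) mean of correct latencies per combined block
    correct_mean = {
        b: statistics.mean(t["rt"] for t in ts if t["correct"])
        for b, ts in by_block.items()
    }

    # (c) pooled SD for Blocks 3 + 6 and for Blocks 4 + 7
    sd36 = statistics.stdev([t["rt"] for t in by_block[3] + by_block[6]])
    sd47 = statistics.stdev([t["rt"] for t in by_block[4] + by_block[7]])

    # (d, e) replace error latencies with block mean + 600 ms, then average
    def block_mean(b):
        adjusted = [
            t["rt"] if t["correct"] else correct_mean[b] + ERROR_PENALTY_MS
            for t in by_block[b]
        ]
        return statistics.mean(adjusted)

    m3, m4, m6, m7 = (block_mean(b) for b in (3, 4, 6, 7))

    # (f, g, h) difference / pooled SD, averaged with equal weights
    return ((m6 - m3) / sd36 + (m7 - m4) / sd47) / 2
```

Under the present block sequence, slower responding in the incongruent Blocks 6 and 7 yields a positive D, i.e., the near + grasp/far + look at association.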
The mean and the standard deviation of D scores of all participants were calculated.
To test the significance of the association revealed by the IAT, the mean D score was compared against zero (i.e., no association) using a one-sample t-test.
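The group-level statistics reduce to two small formulas, sketched below. The function names are illustrative; in practice the p-value would come from a statistics package (e.g., `scipy.stats.ttest_1samp`), which is omitted here to keep the sketch dependency-free.

```python
import math
import statistics

def cohens_d(scores):
    """Effect size: mean D score divided by its standard deviation."""
    return statistics.mean(scores) / statistics.stdev(scores)

def one_sample_t(scores, mu0=0.0):
    """One-sample t statistic and degrees of freedom for H0: mean == mu0."""
    n = len(scores)
    se = statistics.stdev(scores) / math.sqrt(n)
    return (statistics.mean(scores) - mu0) / se, n - 1
```

For example, a mean D of 0.31 with SD 0.44 and n = 43 (the summary values later reported for Experiment 1) gives t = 0.31/(0.44/√43) ≈ 4.62 and d = 0.31/0.44 ≈ 0.70, consistent with the reported t(42) = 4.595 and Cohen’s d = 0.70 up to rounding of the mean and SD.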

2.3. Results

Results showed a positive and significant D score (DM = 0.31, DSD = 0.44, Cohen’s d = 0.70, t(42) = 4.595, p < 0.001, 95% C.I. (0.17, 0.45)), indicating an association near + grasp/far + look at. That is, participants associated the adverb of space near more with the action category grasp and the adverb of space far more with the action category look at (Figure 6). The percent error was 5.64%.

2.4. Discussion

Results of Experiment 1 indicated the presence of implicit associations between adverbs of space and different actions performed in the physical environment. Specifically, they showed a significant association between the adverb near and the reach and grasp action and between the adverb far and the look at action.
Therefore, these results suggest that the use of proximal or distal adverbs of space is related to the potential possibility of acting to modify the objects present in that space.

3. Experiment 2

The aim was to use the IAT to study for the first time the presence of implicit associations between spatial adverbs and app icons that direct to online actions with different functional characteristics. The starting hypothesis was that not only the conceptual knowledge of the physical world is mapped within our sensorimotor system [22,23] but also that of the digital world. Digital experiences and the spaces in which they take place, even if they are sometimes called virtual, are quite real and have real, definite consequences [47]. Consequently, just as the sensorimotor system characterizes the semantic content of concepts in terms of the way we function with our bodies in the physical world [23], it may characterize the concepts related to the digital world in terms of the way we act in it. Specifically, we expected that the distinction between the adverbs of space near and far depends on whether the corresponding digital actions do or do not involve a modification of the digital content.
Experiment 2 tested for the presence of an implicit semantic association between the adverbs near and far, needed to respectively classify objects at a reachable distance and landscapes, and social and no social app icons. Social app icons (e.g., Facebook, WhatsApp, Instagram, Twitter) address content generation actions, and no social app icons (e.g., Google, Weather Now, The Week) address content consumption actions. We expected an association near + social/far + no social.

3.1. Materials and Methods

3.1.1. Participants

Thirty-four participants (M age = 23.65, SD age = 4.59, 44.1% women) volunteered to take part in Experiment 2.

3.1.2. Stimuli and Procedure

The stimuli used in Experiment 2 consisted of 20 colored pictures, subdivided into four categories: (i) for the near category, five images showing objects at a reachable distance; (ii) for the far category, five images showing landscapes; (iii) for the social category, images showing App icons addressing five social media sites (i.e., Instagram, WhatsApp, Twitter, Facebook, Snapchat); (iv) for the no social category, images showing App icons addressing five information media sites (i.e., Weather, iTunes, Google Maps, Chrome, Google) (Figure S2 in Supplementary Materials).
Procedure (Figure 3) and data analysis were the same as in Experiment 1.
According to the present block sequence, MBlock6 − MBlock3 corresponded to
MBlock6 (Near + No social/Far + Social) − MBlock3 (Near + Social/Far + No social)
and MBlock7 − MBlock4 corresponded to
MBlock7 (Near + No social/Far + Social) − MBlock4 (Near + Social/Far + No social)
Therefore, a positive D score indicated an association near + social/far + no social.

3.2. Results

Results showed a positive and significant D score (DM = 0.46, DSD = 0.48, Cohen’s d = 0.96, t(33) = 5.609, p < 0.001, 95% C.I. (0.30, 0.63)), indicating an association of near + social/far + no social. That is, participants associated the adverb of space near more with the app category social and the adverb of space far more with the app category no social (Figure 6). The percent error was 5.64%.

3.3. Discussion

Results of Experiment 2 showed an implicit association between spatial adverbs and app icons that direct to online actions with different functional characteristics. Specifically, the difference between social and no social apps concerned the digital actions they addressed, that is, content generation and content consumption. Therefore, present results suggest that adverbs of space also apply to digital space.
However, some linguistics researchers criticize the idea that demonstratives are a particular class of spatial terms grounded in our bodily experience with concrete objects in space and propose that demonstratives are used primarily for social and interactive purposes [48]. This possibility calls our interpretation of the data into question. Indeed, it is possible that the association found between near and social and between far and no social depended on the label used to classify the apps, which explicitly referred to social activity. Unfortunately, no other labels could be found that would allow easy categorization of the icons.

4. Experiment 3

Experiment 3 was designed to test the hypothesis of an implicit association between spatial adverbs and different categories of app icons, without using a category label that refers to social activity. Since we could not find labels to categorize multiple apps, we decided to use only two apps, i.e., WhatsApp and a weather information app. WhatsApp allows users to send text messages and voice messages; make voice and video calls; and share images, documents, user locations, and other content. Undoubtedly, it is an app that requires content generation. In contrast, using an app to check the weather requires content consumption. We used images of the most used weather forecasting apps in Italy (i.e., il Meteo, 3B Meteo, meteo.it).

4.1. Materials and Methods

4.1.1. Participants

Twenty-six participants (M age = 22.42, SD age = 2.18, 53.8% women) volunteered to take part in Experiment 3.

4.1.2. Stimuli and Procedure

The stimuli used in Experiment 3 consisted of 20 colored pictures, subdivided into four categories: (i) for the near category, five images showing objects at a reachable distance; (ii) for the far category, five images showing landscapes; (iii) for the WhatsApp category, images showing five different WhatsApp icons; (iv) for the Weather category, images showing five different weather forecasting Apps (Figure S3 in Supplementary Materials).
Procedure (Figure 4) and data analysis were the same as in Experiment 1.
According to the present block sequence, MBlock6 − MBlock3 corresponded to
MBlock6 (Near + Weather/Far + WhatsApp) − MBlock3 (Near + WhatsApp/Far + Weather)
and MBlock7 − MBlock4 corresponded to
MBlock7 (Near + Weather/Far + WhatsApp) − MBlock4 (Near + WhatsApp/Far + Weather)
Therefore, a positive D score indicated an association near + WhatsApp/far + Weather.

4.2. Results

Results showed a positive and significant D score (DM = 0.22, DSD = 0.48, Cohen’s d = 0.46, t(25) = 2.384, p < 0.05, 95% C.I. (0.03, 0.42)), indicating an association of near + WhatsApp/far + Weather. That is, participants associated the adverb of space near more with the WhatsApp icons and the adverb of space far more with the Weather app icons (Figure 6). The percent error was 5.10%.

4.3. Discussion

Results of Experiment 3 confirmed those of Experiment 2. They showed an implicit association between different adverbs of space and different categories of app icons. Specifically, WhatsApp, an app that requires content generation, was associated with the adverb near, while Weather forecasting apps, requiring content consumption, were associated with the adverb far.
However, even in this case, an alternative interpretation may question whether the results depended on the functional characteristics of the digital actions evoked by the app icons. Indeed, it is possible that the highlighted association depended on the presence of landscapes, a possible destination of a journey, and on the interest in weather that can jeopardize the decision to move.

5. Experiment 4

Experiment 4 aimed to test the hypothesis of an implicit association between spatial adverbs and categories of app icons related to content consumption and content generation, avoiding confounding variables related to category labels and the meaning of the stimuli. For this purpose, we selected the WhatsApp and Google Chrome icons. Google Chrome is a web browser, that is, a software application for accessing information on the World Wide Web; it therefore refers to content consumption.

5.1. Materials and Methods

5.1.1. Participants

Forty-four participants (M age = 21.39, SD age = 3.25, 59.1% women) volunteered to take part in Experiment 4.

5.1.2. Stimuli and Procedure

The stimuli used in Experiment 4 consisted of 20 colored pictures, subdivided into four categories: (i) for the near category, five images showing objects at a reachable distance; (ii) for the far category, five images showing landscapes; (iii) for the WhatsApp category, images showing five different WhatsApp icons; (iv) for the Google category, images showing five different Google icons (Figure S4 in Supplementary Materials).
Procedure (Figure 5) and data analysis were the same as in Experiment 1.
According to the present block sequence, MBlock6 − MBlock3 corresponded to
MBlock6 (near + Google/far + WhatsApp) − MBlock3 (near + WhatsApp/far + Google),
and MBlock7 − MBlock4 corresponded to
MBlock7 (near + Google/far + WhatsApp) − MBlock4 (near + WhatsApp/far + Google).
Therefore, a positive D score indicated an association near + WhatsApp/far + Google.

5.2. Results

Results showed a positive and significant D score (DM = 0.21, DSD = 0.40, Cohen’s d = 0.53, t(43) = 3.549, p < 0.001, 95% C.I. (0.09, 0.33)), indicating an association of near + WhatsApp/far + Google. That is, participants associated the adverb of space near more with the WhatsApp app icon and the adverb of space far more with the Google app icon (Figure 6). The percent error was 4.94%.

5.3. Discussion

The results of Experiment 4 confirmed those of the previous experiments: we observed an implicit association between the adverbs of space near and far and the app icons of WhatsApp (digital content generation) and Google (digital content consumption), respectively.

6. General Discussion

The present study aimed to test the semantic association between adverbs of place and the concepts of acting vs. perceiving. Specifically, we hypothesized that the use of the adverbs near and far depends on the functional characteristics of the actions that can potentially be performed in that space. This position stems from neuroscientific evidence pointing to separate neuronal circuits coding the near peripersonal space, that is, the area of space reachable by body parts, in which objects can be manipulated, and the far extrapersonal space, that is, the area of space out of reach, in which objects can only be perceived [49,50,51,52]. Consequently, we predicted a stronger association between near and actions able to act on a target, and between far and actions devoted to the use or perception of target information. This binary distinction between actions with different functional characteristics is present not only in physical space (e.g., to grasp, to move, to brake, etc. vs. to look at, to observe, to check, etc.) but also in digital space. Digital behaviors are divided into content generation and content consumption: the first produces new digital content to be published online, while the second is limited to the use of content already present on the network [26].
To this end, we used a series of IAT experiments [27,28] to investigate the associations of the category near (images showing objects at a reachable distance) and the category far (images showing landscapes) with actions having different functional characteristics, performed in the physical (Experiment 1) and digital (Experiments 2–4) environments.
In Experiment 1, actions were grasp (images portraying individuals grasping objects) and look at (images portraying individuals looking at objects). Results confirmed the hypothesis, showing a stronger association between near and grasp and between far and look at. Note that in both the grasp and look at stimuli, the objects were placed at the same distance from the agent. In the first case, the agent touched the object with an extended arm; in the second, a transparent Plexiglas barrier was placed between the agent and the object, and the agent’s hand rested on the table. This detail is important for several reasons. First, it rules out the possibility that the IAT results depended on a different distance between the agent and the object; a greater metric distance could easily be associated with the considerable distance from which one views the landscapes used as stimuli for the far category. Second, it is necessary to carefully distinguish the encoding of peri- and extrapersonal space in a metric format (i.e., the boundary between the two regions depends on the distance from the agent’s body) from that in an operational/functional format (i.e., the boundary depends on the agent’s workspace) [53]. We hypothesized that the implicit association between actions and adverbs of space reflects an encoding in a functional format. Indeed, the Plexiglas restricted the workspace, preventing the agent from reaching the object with his hands without modifying its distance from the agent. Furthermore, it is known that, in the absence of barriers, the mere presence of a graspable object in the vicinity of an agent evokes in the observer the idea of an action towards the object [54,55].
In Experiments 2–4, actions were represented by different app icons addressing either content generation or content consumption behaviors. In all these experiments, results confirmed the hypothesis, showing an association between near and content generation apps and between far and content consumption apps. While alternative explanations could apply to Experiments 2 and 3, we found no confounding variable linked to category labels, the meaning of the stimuli, or other factors that could explain the results of Experiment 4 in an alternative way. Specifically, the present results suggest that adverbs of space also apply to digital space and that the distinction into near and far depends on the functional characteristics of the digital actions addressed by the app icons. In future experiments, participants’ actual usage of the apps will also be taken into account: many people may use platforms such as Instagram or Twitter more for content consumption than for content generation. The current results already indicate a strong effect, which would likely be reinforced if only participants who use these apps for content generation were considered. Furthermore, the next experimental steps will aim to generalize these results to contexts that do not contrast social with non-social activities; for example, we will test the hypothesis in a condition in which content generation does not involve a social environment, e.g., a single-player game.
Regarding physical space, several neuroscience findings have shown the crucial role of action consequences in the coding of space, as extensively discussed in the Introduction (for a recent review, see [56]). However, to the best of our knowledge, the results observed in the present study are the first experimental evidence of an implicit association between actions and adverbs of space at a cognitive level. Gallese and Lakoff [23] proposed that conceptual knowledge is mapped within our sensorimotor system, which characterizes the semantic content of concepts in terms of the way we function with our bodies in the world. This position, shared by others [57,58,59,60,61,62], is opposed to that which considers concepts as represented outside of the sensorimotor cortices [63,64,65]. Specifically, this latter position claims that concepts are abstracted away from sensorimotor experience and organized according to conceptual properties; in particular, the processing of action verbs may reflect the retrieval of non-sensory, conceptual, or grammatical information. The present results showed that different action verbs (i.e., to grasp and to look at) are specifically associated with different adverbs of space (i.e., near and far) depending on the possibility of performing that action in that space: grasping is possible only in near space. Therefore, these results favor the first hypothesis and can hardly be explained by the second.
While neuroscience research has devoted many resources to studying the role of the sensorimotor system in the coding of physical space, in the era of smartphones almost no studies exist on the influence that the acquisition of digital skills and the constant use of technological devices have on sensorimotor abilities and processes. The few existing studies suggest that something interesting is happening. For instance, touchscreen users showed larger-amplitude cortical potentials in response to tactile stimulation of the fingertips compared with nonusers, and the amplitude was directly proportional to recent phone-use history [66]. Moreover, patterns of inter-touch intervals differed between content consumption and content generation, and recent intense content-generation activity influenced somatosensory cortical activity [26]. This suggests that digital actions with different functional characteristics can involve the sensorimotor system differently, just like actions performed in the physical world. The present results favor this possibility, indicating that the encoding of the representation of digital space depends on the functions of the actions to be performed when using a specific app. Therefore, it is plausible to expect for digital space the same consequences in terms of behavior, cognition, modes of interaction, and pathologies found for physical space [56]. For example, lesions to different cortical areas might induce neglect-like symptoms selective for content generation or content consumption. However, no current neuropsychological testing specifically considers digital skills among its trials, even though most neuropsychological assessments include at least one measure that is administered, scored, or interpreted by computers or touchscreen apps [67].
Obviously, as this is the first study dealing with the relationship between spatial concepts and different apps, the absence of control conditions may limit its internal validity. Further studies will be needed to rule out the possibility that the results depend on other factors, such as different attitudes towards different apps. Among others, one possibility is that participants perceive WhatsApp as a more trustworthy app than Google; in this case, the near/far association could reflect a figurative closeness rather than a spatial or agency-related effect. We trust that in the future all alternative explanations will be taken into consideration, allowing the correctness of our interpretation to be verified.
A further important limitation of the present study is due to the restrictions caused by the COVID-19 pandemic, which made it impossible to expand the experimental sample to other categories. All participants in the study were students belonging to Generation Z (i.e., people born between 1996 and 2015). The average Gen Z member received their first mobile phone at age 10.3 years, grew up playing with their parents’ mobile phones in a hyper-connected world, and prefers the smartphone as a method of communication. Together with Millennials (born between 1980 and 1996), they are defined as digital natives, as opposed to digital immigrants (i.e., Baby Boomers, 1946–1964, and Gen X, 1965–1980) [68]. Indeed, a characteristic of immigrants worldwide is their accent: second-language learners are readily identified as non-natives, i.e., immigrants, because their phonetic repertoire depends on their native language. The same can be said of digital immigrants, who face the digital environment for the first time while possessing a cognitive and sensorimotor system entirely forged by continuous and exclusive interaction with the physical environment. This can determine a sensorimotor and cognitive “accent” that makes their spatial cognition of the digital environment different from that of digital natives, who can start interacting with the physical and digital environments almost at the same time. To verify the presence of differences between digital natives and immigrants, further studies are needed that consider the generational cohort as a covariate.

7. Conclusions

The present findings suggest that the distinction in the use of proximal or distal adverbs of space depends on the characteristics of the actions that can potentially be performed in that space. The results showed, for the first time, an implicit association between the adverbs near and far and, respectively, actions that produce an effect in space (i.e., grasp) and actions that only allow the perception of objects in space (i.e., look at). The same result was also found for digital space, as even in this environment our behaviors are divided into acting and perceiving. Specifically, the results indicated an implicit association between the adverb near and app icons (i.e., WhatsApp) that refer to content generation actions, and between the adverb far and app icons (i.e., Google) that refer to content consumption actions. For the first time, the present findings suggest that adverbs of space also apply to digital space.

Supplementary Materials

The following are available online at https://0-www-mdpi-com.brum.beds.ac.uk/article/10.3390/brainsci11111523/s1, Figure S1: Double entry table presented in Experiment 1, showing images and associated categories, Figure S2: Double entry table presented in Experiment 2, showing images and associated categories, Figure S3: Double entry table presented in Experiment 3, showing images and associated categories, Figure S4: Double entry table presented in Experiment 4, showing images and associated categories.

Author Contributions

Conceptualization, L.C.; methodology, L.C. and M.M.; software, M.M.; formal analysis, M.M.; investigation, L.C.; writing—original draft preparation, L.C.; writing—review and editing, L.C. and M.M.; supervision, L.C.; project administration, L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data for each experiment are available at the link https://osf.io/9g235/?view_only=f072f3bb7ea34def93ea3cb07b9f2e78 (accessed on: 15 November 2021).

Acknowledgments

We thank all the students who participated in the experiment, allowing us to continue the research activity despite the restrictions due to the COVID-19 pandemic.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kemmerer, D. “Near” and “far” in language and perception. Cognition 1999, 73, 35–63.
  2. Proffitt, D.R.; Caudek, C. Depth Perception and the Perception of Events. In Handbook of Psychology, 2nd ed.; John Wiley & Sons: New York, NY, USA, 2012.
  3. Proffitt, D.R.; Stefanucci, J.; Banton, T.; Epstein, W. The role of effort in perceiving distance. Psychol. Sci. 2003, 14, 106–112.
  4. Witt, J.K.; Proffitt, D.R.; Epstein, W. Perceiving distance: A role of effort and intent. Perception 2004, 33, 577–590.
  5. Witt, J.K.; Proffitt, D.R.; Epstein, W. Tool use affects perceived distance, but only when you intend to use it. J. Exp. Psychol. Hum. Percept. Perform. 2005, 31, 880.
  6. Berti, A.; Frassinetti, F. When far becomes near: Remapping of space by tool use. J. Cogn. Neurosci. 2000, 12, 415–420.
  7. Pegna, A.J.; Petit, L.; Caldara-Schnetzer, A.S.; Khateb, A.; Annoni, J.M.; Sztajzel, R.; Landis, T. So near yet so far: Neglect in far or near space depends on tool use. Ann. Neurol. 2001, 50, 820–822.
  8. Halligan, P.W.; Marshall, J.C. Left neglect for near but not far space in man. Nature 1991, 350, 498–500.
  9. Cowey, A.; Small, M.; Ellis, S. Left visuo-spatial neglect can be worse in far than in near space. Neuropsychologia 1994, 32, 1059–1066.
  10. Farnè, A.; Làdavas, E. Dynamic size-change of hand peripersonal space following tool use. Neuroreport 2000, 11, 1645–1649.
  11. Gamberini, L.; Seraglia, B.; Priftis, K. Processing of peripersonal and extrapersonal space using tools: Evidence from visual line bisection in real and virtual environments. Neuropsychologia 2008, 46, 1298–1304.
  12. Longo, M.R.; Lourenco, S.F. On the nature of near space: Effects of tool use and the transition to far space. Neuropsychologia 2006, 44, 977–981.
  13. Maravita, A.; Husain, M.; Clarke, K.; Driver, J. Reaching with a tool extends visual–tactile interactions into far space: Evidence from cross-modal extinction. Neuropsychologia 2001, 39, 580–585.
  14. Maravita, A.; Spence, C.; Kennett, S.; Driver, J. Tool-use changes multimodal spatial interactions between vision and touch in normal humans. Cognition 2002, 83, B25–B34.
  15. Neppi-Mòdona, M.; Rabuffetti, M.; Folegatti, A.; Ricci, R.; Spinazzola, L.; Schiavone, F.; Ferrarin, M.; Berti, A. Bisecting Lines with Different Tools in Right Brain Damaged Patients: The Role of Action Programming and Sensory Feedback in Modulating Spatial Remapping. Cortex 2007, 43, 397–410.
  16. Gamberini, L.; Carlesso, C.; Seraglia, B.; Craighero, L. A behavioural experiment in virtual reality to verify the role of action function in space coding. Vis. Cogn. 2013, 21, 961–969.
  17. Andersen, R.A.; Gnadt, J.W. Posterior parietal cortex. Rev. Oculomot. Res. 1989, 3, 315–335.
  18. Goldberg, M.E.; Segraves, M.A. The visual and frontal cortices. Rev. Oculomot. Res. 1989, 3, 283–313.
  19. Barash, S.; Bracewell, R.M.; Fogassi, L.; Gnadt, J.W.; Andersen, R.A. Saccade-related activity in the lateral intraparietal area. II. Spatial properties. J. Neurophysiol. 1991, 66, 1109–1124.
  20. Fogassi, L.; Gallese, V.; Fadiga, L.; Luppino, G.; Matelli, M.; Rizzolatti, G. Coding of peripersonal space in inferior premotor cortex (area F4). J. Neurophysiol. 1996, 76, 141–157.
  21. Craighero, L. The role of the motor system in cognitive functions. In The Routledge Handbook of Embodied Cognition; Routledge: New York, NY, USA, 2014; pp. 51–58. ISBN 9781315775845.
  22. Barsalou, L.W. Grounded Cognition. Annu. Rev. Psychol. 2008, 59, 617–645.
  23. Gallese, V.; Lakoff, G. The Brain’s concepts: The role of the sensory-motor system in conceptual knowledge. Cogn. Neuropsychol. 2005, 22, 455–479.
  24. Gentilucci, M.; Benuzzi, F.; Bertolani, L.; Daprati, E.; Gangitano, M. Language and motor control. Exp. Brain Res. 2000, 133, 468–490.
  25. Coventry, K.R.; Valdés, B.; Castillo, A.; Guijarro-Fuentes, P. Language within your reach: Near–far perceptual space and spatial demonstratives. Cognition 2008, 108, 889–895.
  26. Ghosh, A.; Pfister, J.-P.; Cook, M. Optimised information gathering in smartphone users. arXiv 2017, arXiv:1701.02796.
  27. Greenwald, A.G.; McGhee, D.E.; Schwartz, J.L.K. Measuring individual differences in implicit cognition: The implicit association test. J. Pers. Soc. Psychol. 1998, 74, 1464–1480.
  28. Nosek, B.A.; Greenwald, A.G.; Banaji, M.R. The Implicit Association Test at Age 7: A Methodological and Conceptual Review. In Automatic Processes in Social Thinking and Behavior; Psychology Press: New York, NY, USA, 2007.
  29. Fazio, R.H.; Olson, M.A. Implicit Measures in Social Cognition Research: Their Meaning and Use. Annu. Rev. Psychol. 2003, 54, 297–327.
  30. Marini, M.; Rubichi, S.; Sartori, G. The Role of Self-Involvement in Shifting IAT Effects. Exp. Psychol. 2012, 59, 348–354.
  31. Marini, M.; Agosta, S.; Mazzoni, G.; Barba, G.D.; Sartori, G. True and False DRM Memories: Differences Detected with an Implicit Task. Front. Psychol. 2012, 3, 310.
  32. Sartori, G.; Agosta, S.; Zogmaister, C.; Ferrara, S.D.; Castiello, U. How to Accurately Detect Autobiographical Events. Psychol. Sci. 2008, 19, 772–780.
  33. de Jong, P.; Pasman, W.; Kindt, M.; Hout, M.V.D. A reaction time paradigm to assess (implicit) complaint-specific dysfunctional beliefs. Behav. Res. Ther. 2001, 39, 101–113.
  34. Teachman, B.A.; Gapinski, K.D.; Brownell, K.D.; Rawlins, M.; Jeyaram, S. Demonstrations of implicit anti-fat bias: The impact of providing causal information and evoking empathy. Health Psychol. 2003, 22, 68–78.
  35. Baron, A.S.; Banaji, M.R. The Development of Implicit Attitudes. Evidence of Race Evaluations From Ages 6 and 10 and Adulthood. Psychol. Sci. 2006, 17, 53–58.
  36. Dunham, Y.; Baron, A.S.; Banaji, M.R. From American City to Japanese Village: A Cross-Cultural Investigation of Implicit Race Attitudes. Child Dev. 2006, 77, 1268–1281.
  37. Cunningham, W.A.; Johnson, M.K.; Raye, C.L.; Gatenby, J.C.; Gore, J.C.; Banaji, M.R. Separable Neural Components in the Processing of Black and White Faces. Psychol. Sci. 2004, 15, 806–813.
  38. Marini, M.; Agosta, S.; Sartori, G. Electrophysiological Correlates of the Autobiographical Implicit Association Test (aIAT): Response Conflict and Conflict Resolution. Front. Hum. Neurosci. 2016, 10, 391.
  39. Marini, M.; Banaji, M.R.; Pascual-Leone, A. Studying Implicit Social Cognition with Noninvasive Brain Stimulation. Trends Cogn. Sci. 2018, 22, 1050–1066.
  40. Phelps, E.A.; O’Connor, K.J.; Cunningham, W.A.; Funayama, E.S.; Gatenby, J.C.; Gore, J.C.; Banaji, M.R. Performance on Indirect Measures of Race Evaluation Predicts Amygdala Activation. J. Cogn. Neurosci. 2000, 12, 729–738.
  41. Richeson, J.A.; Baird, A.A.; Gordon, H.L.; Heatherton, T.F.; Wyland, C.L.; Trawalter, S.; Shelton, J.N. An fMRI investigation of the impact of interracial contact on executive function. Nat. Neurosci. 2003, 6, 1323–1328.
  42. Maison, D.; Greenwald, A.G.; Bruin, R. The Implicit Association Test as a measure of implicit consumer attitudes. Pol. Psychol. Bull. 2001, 32, 61–69.
  43. Marini, M. Underweight vs. overweight/obese: Which weight category do we prefer? Dissociation of weight-related preferences at the explicit and implicit level. Obes. Sci. Pract. 2017, 3, 390–398.
  44. Marini, M.; Waterman, P.D.; Breedlove, E.; Chen, J.T.; Testa, C.; Reisner, S.L.; Pardee, D.J.; Mayer, K.H.; Krieger, N. The target/perpetrator brief-implicit association test (B-IAT): An implicit instrument for efficiently measuring discrimination based on race/ethnicity, sex, gender identity, sexual orientation, weight, and age. BMC Public Health 2021, 21, 1–14.
  45. Nosek, B.A.; Greenwald, A.; Banaji, M.R. Understanding and Using the Implicit Association Test: II. Method Variables and Construct Validity. Pers. Soc. Psychol. Bull. 2005, 31, 166–180.
  46. Greenwald, A.G.; Nosek, B.A.; Banaji, M.R. Understanding and using the Implicit Association Test: I. An improved scoring algorithm. J. Pers. Soc. Psychol. 2003, 85, 197–216.
  47. Chayko, M. Portable Communities: The Social Dynamics of Online and Mobile Connectedness; SUNY Press: Albany, NY, USA, 2008.
  48. Diessel, H.; Coventry, K.R. Demonstratives in Spatial Language and Social Interaction: An Interdisciplinary Review. Front. Psychol. 2020, 11, 3158.
  49. Bufacchi, R.; Iannetti, G.D. An Action Field Theory of Peripersonal Space. Trends Cogn. Sci. 2018, 22, 1076–1090.
  50. Cléry, J.; Guipponi, O.; Wardak, C.; Ben Hamed, S. Neuronal bases of peripersonal and extrapersonal spaces, their plasticity and their dynamics: Knowns and unknowns. Neuropsychologia 2015, 70, 313–326.
  51. Committeri, G.; Pitzalis, S.; Galati, G.; Patria, F.; Pelle, G.; Sabatini, U.; Castriota-Scanderbeg, A.; Piccardi, L.; Guariglia, C.; Pizzamiglio, L. Neural bases of personal and extrapersonal neglect in humans. Brain 2006, 130, 431–441.
  52. di Pellegrino, G.; Làdavas, E. Peripersonal space in the brain. Neuropsychologia 2015, 66, 126–133.
  53. Caggiano, V.; Fogassi, L.; Rizzolatti, G.; Thier, P.; Casile, A. Mirror Neurons Differentially Encode the Peripersonal and Extrapersonal Space of Monkeys. Science 2009, 324, 403–406.
  54. Cardellicchio, P.; Sinigaglia, C.; Costantini, M. Grasping affordances with the other’s hand: A TMS study. Soc. Cogn. Affect. Neurosci. 2012, 8, 455–459.
  55. Pierno, A.C.; Becchio, C.; Wall, M.B.; Smith, A.T.; Turella, L.; Castiello, U. When Gaze Turns into Grasp. J. Cogn. Neurosci. 2006, 18, 2130–2137.
  56. Serino, A. Peripersonal space (PPS) as a multisensory interface between the individual and the environment, defining the space of the self. Neurosci. Biobehav. Rev. 2019, 99, 138–159.
  57. Allport, D.A. Distributed memory, modular subsystems and dysphasia. In Current Perspectives in Dysphasia; Churchill Livingstone: New York, NY, USA, 1985; pp. 207–244.
  58. Barsalou, L.W.; Simmons, W.K.; Barbey, A.K.; Wilson, C.D. Grounding conceptual knowledge in modality-specific systems. Trends Cogn. Sci. 2003, 7, 84–91.
  59. Martin, A.; Wiggs, C.L.; Ungerleider, L.G.; Haxby, J.V. Neural correlates of category-specific knowledge. Nature 1996, 379, 649–652.
  60. Pulvermüller, F. Words in the brain’s language. Behav. Brain Sci. 1999, 22, 253–279.
  61. Pulvermüller, F. Brain reflections of words and their meaning. Trends Cogn. Sci. 2001, 5, 517–524.
  62. Pulvermüller, F. A brain perspective on language mechanisms: From discrete neuronal ensembles to serial order. Prog. Neurobiol. 2002, 67, 85–111.
  63. Caramazza, A.; Hillis, A.E.; Rapp, B.C.; Romani, C. The multiple semantics hypothesis: Multiple confusions? Cogn. Neuropsychol. 1990, 7, 161–189.
  64. Mahon, B.Z.; Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. J. Physiol. 2008, 102, 59–70.
  65. Rogers, T.T.; Lambon Ralph, M.A.; Garrard, P.; Bozeat, S.; McClelland, J.L.; Hodges, J.R.; Patterson, K. Structure and Deterioration of Semantic Memory: A Neuropsychological and Computational Investigation. Psychol. Rev. 2004, 111, 205–235.
  66. Gindrat, A.-D.; Chytiris, M.; Balerna, M.; Rouiller, E.M.; Ghosh, A. Use-Dependent Cortical Processing from Fingertips in Touchscreen Phone Users. Curr. Biol. 2015, 25, 109–116.
  67. Parsey, C.M.; Schmitter-Edgecombe, M. Applications of Technology in Neuropsychological Assessment. Clin. Neuropsychol. 2013, 27, 1328–1361.
  68. Prensky, M. Digital Natives, Digital Immigrants Part 1. Horizon 2001, 9, 1–6.
Figure 1. Examples of trials in congruent (left panel) and incongruent (right panel) blocks of Experiment 1. Participants were asked to press the E key with their left hand when the category to which the image belonged was written at the top left, and to press the I key with their right hand when it was written at the top right (see the text for details).
Figure 2. Structure of the seven blocks in the IAT administered to the two groups of participants (Order 1, Order 2) in Experiment 1 (see text for details).
Figure 3. Examples of trials in congruent (left panel) and incongruent (right panel) blocks of Experiment 2. Participants were asked to press the E key with their left hand when the category to which the image belonged was written at the top left, and to press the I key with their right hand when it was written at the top right (see the text for details).
Figure 4. Examples of trials in congruent (left panel) and incongruent (right panel) blocks of Experiment 3. Participants were asked to press the E key with their left hand when the category to which the image belonged was written at the top left, and to press the I key with their right hand when it was written at the top right (see the text for details).
Figure 5. Examples of trials in congruent (left panel) and incongruent (right panel) blocks of Experiment 4. Participants were asked to press the E key with their left hand when the category to which the image belonged was written at the top left, and the I key with their right hand when it was written at the top right (see the text for details).
Figure 6. D scores of Experiments 1–4. Each marker indicates the D score of one participant. Short horizontal lines mark the mean D score for each experiment. Positive D scores indicate the presence of the predicted association. Asterisks indicate a statistically significant association, based on a one-sample t-test of the mean D score against a D score of zero (the horizontal line, corresponding to no association).
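The test described in the Figure 6 caption can be sketched as below. The D scores here are illustrative values, not the study's data, and the function is a plain stdlib implementation of a one-sample t-test (equivalent in its t statistic to `scipy.stats.ttest_1samp`):

```python
import math

def one_sample_t(scores, mu=0.0):
    """One-sample t-test of the mean of `scores` against `mu`.

    Returns the t statistic and degrees of freedom. With mu = 0, a
    positive t indicates a mean D score above zero, i.e. the predicted
    IAT association (faster responses in congruent blocks).
    """
    n = len(scores)
    mean = sum(scores) / n
    # Sample variance with n - 1 in the denominator (Bessel's correction).
    var = sum((x - mean) ** 2 for x in scores) / (n - 1)
    t = (mean - mu) / math.sqrt(var / n)
    return t, n - 1

# Hypothetical D scores for one experiment (illustration only).
d_scores = [0.42, 0.15, 0.61, -0.05, 0.33, 0.27, 0.50, 0.12]
t_stat, df = one_sample_t(d_scores)
```

The resulting t statistic would then be compared against the t distribution with `df` degrees of freedom to obtain the significance level marked by the asterisks.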
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Craighero, L.; Marini, M. Implicit Associations between Adverbs of Place and Actions in the Physical and Digital Space. Brain Sci. 2021, 11, 1523. https://0-doi-org.brum.beds.ac.uk/10.3390/brainsci11111523