Review

Toward a Computational Neuropsychology of Cognitive Flexibility

Department of Neurology, Hannover Medical School, Carl-Neuberg-Straße 1, 30625 Hannover, Germany
*
Author to whom correspondence should be addressed.
Submission received: 25 November 2020 / Revised: 10 December 2020 / Accepted: 15 December 2020 / Published: 17 December 2020

Abstract
Cognitive inflexibility is a well-documented, yet non-specific corollary of many neurological diseases. Computational modeling of covert cognitive processes supporting cognitive flexibility may provide progress toward nosologically specific aspects of cognitive inflexibility. We review computational models of the Wisconsin Card Sorting Test (WCST), which represents a gold standard for the clinical assessment of cognitive flexibility. A parallel reinforcement-learning (RL) model provides the best conceptualization of individual trial-by-trial WCST responses among all models considered. Clinical applications of the parallel RL model suggest that patients with Parkinson’s disease (PD) and patients with amyotrophic lateral sclerosis (ALS) share a non-specific covert cognitive symptom: bradyphrenia. Impaired stimulus-response learning appears to occur specifically in patients with PD, whereas haphazard responding seems to occur specifically in patients with ALS. Computational modeling hence possesses the potential to reveal nosologically specific profiles of covert cognitive symptoms, which remain undetectable by traditionally applied behavioral methods. The present review exemplifies how computational neuropsychology may advance the assessment of cognitive flexibility. We discuss implications for neuropsychological assessment and directions for future research.

1. The Neuropsychology of Cognitive Flexibility

Maintaining goal-directed behavior in the face of novel situations is a fundamental requirement for everyday life. The processes that enable individuals to maintain goal-directedness are subsumed under the term executive control (also called executive function or cognitive control) [1,2,3,4,5,6]. Impaired executive control is a well-documented corollary of various neurological diseases as well as an important predictor of disease progression [7,8,9,10,11,12]. Hence, a major aim of contemporary neuropsychological research is to achieve a better understanding of executive control.
The present review focuses on a particular facet of executive control: cognitive flexibility [4,13,14,15]. Cognitive flexibility refers to the ability to adjust behavior to novel situational demands, rules or priorities in an adaptive manner [4,15,16,17]. There are various standardized neuropsychological assessment tools for cognitive flexibility. These include, for example, the Trail Making Test Part B [18,19,20], the intra/extradimensional attentional set-shifting task [21], and the Wisconsin Card Sorting Test (WCST) [22,23,24]. The WCST is probably the most frequently used tool for the neuropsychological assessment of cognitive flexibility [25].
The WCST requires participants to sort stimulus cards to key cards according to categories that change periodically. In order to identify the prevailing category, participants need to adjust card sorting to the examiner’s positive and negative feedback, which follows any card sort. Negative feedback indicates that the previously applied category was incorrect, and, accordingly, that participants should switch the applied category. Positive feedback indicates that the previously applied category was correct, and that participants should repeat the applied category. Perseveration errors (PEs) and set-loss errors (SLEs) represent failures to adjust card sorting to these task demands. PEs refer to erroneous category repetitions following negative feedback, and SLEs refer to erroneous category switches following positive feedback. A typical interpretation of increased PE and/or SLE propensities on the WCST is that the assessed participant shows cognitive inflexibility [11]. WCST error propensities usually refer to conditional PE and/or SLE probabilities, e.g., [26] (e.g., conditional PE probabilities equal the number of committed PEs divided by the number of trials following negative feedback). Figure 1 depicts a representative WCST-trial sequence, which illustrates the two types of errors (PE, SLE) that may occur on the WCST.
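The conditional error probabilities described above can be made concrete with a short computational sketch. This is purely illustrative, assuming a simple per-trial coding; the field names are hypothetical and do not come from any published WCST scoring software.

```python
def conditional_error_rates(trials):
    """Compute conditional PE and SLE probabilities.

    trials: list of dicts with hypothetical keys
      'prev_feedback' ('pos' or 'neg'): feedback on the preceding trial
      'switched' (bool): whether the applied category changed on this trial
    """
    after_neg = [t for t in trials if t["prev_feedback"] == "neg"]
    after_pos = [t for t in trials if t["prev_feedback"] == "pos"]
    # PE: category repetition (no switch) after negative feedback
    pe = sum(not t["switched"] for t in after_neg) / max(len(after_neg), 1)
    # SLE: category switch after positive feedback
    sle = sum(t["switched"] for t in after_pos) / max(len(after_pos), 1)
    return pe, sle
```

As in the definition above, each propensity is the number of committed errors divided by the number of eligible trials (those following negative feedback for PEs, positive feedback for SLEs).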
Beginning with Milner’s [31] seminal work, PE propensities have received the most attention in neuropsychology. Milner [31] investigated the effects of unilateral cortical excisions for the relief of focal epilepsy on PE propensities. Patients with frontal lobe lesions showed massively increased PE propensities when compared to patients with posterior cortical lesions. Two meta-analytical studies confirmed the association between the presence of frontal lobe lesions and increased PE propensities, reporting small (d = −0.32) [32] to large effect sizes (d = −0.97) [33] for elevated PE propensities in patients with frontal lobe lesions when compared to patients with non-frontal brain lesions or healthy controls (HCs) [34]. These meta-analytic findings contributed to the widely held belief that the frontal lobes and related neuroanatomical structures support executive control in general [35,36], and cognitive flexibility in particular [11,32,33].
However, elevated PE propensities do not occur exclusively in patients with frontal lobe lesions [37]. For example, Eslinger and Grattan [38] reported increased PE propensities for patients with focal ischemic lesions in the basal ganglia when compared to patients with posterior cortical lesions. Enhanced PE propensities also occur in various neurological patient groups, such as patients with idiopathic Parkinson’s disease (PD) [39], amyotrophic lateral sclerosis (ALS) [29], Alzheimer’s disease [40], Gilles de la Tourette syndrome [41], or primary dystonia [42]. Increased PE propensities also occur in a number of psychiatric patient groups, such as patients with attention deficit hyperactivity disorder [43], eating disorders [44], major depressive disorder [45], or obsessive-compulsive disorder [46]. The ubiquity of increased PE propensities across many neurological diseases and psychiatric disorders suggests that elevated PE propensities may neither be specific neuropsychological symptoms of frontal lobe lesions nor of various clinical conditions [11,47].
The non-specific finding of increased PE propensities across many neurological diseases and psychiatric disorders may result from the impurity of PE propensities [11,13,48,49]. That is, PE propensities may not represent pure correlates of the efficacy of a particular, well-circumscribed cognitive process. Instead, PE propensities may rather reflect the efficacy of a mixture of multiple, yet covert cognitive processes. An impairment of any of these covert cognitive processes could become behaviorally manifest as increased PE propensities [11]. Due to this process impurity, PE propensities may not achieve nosological specificity across a range of neurological diseases and psychiatric disorders. Thus, neuropsychological assessment of cognitive flexibility via PE propensities should probably be considered a first step, subject to further improvement, rather than a final state of affairs. The purpose of the present article is to review recent progress toward computational modeling of covert cognitive processes that may be related to the commission of overt behavioral errors on the WCST, and to analyze how computational modeling may contribute to the development of next-generation neuropsychological assessment methods.
Based on the assumption that PE propensities reflect the efficacy of a mixture of covert cognitive processes, similar increased PE propensities across various clinical conditions could arise from (partially) separable impairments of covert cognitive processes. However, such covert cognitive symptoms may not yet be detectable by behavioral WCST measures because any impairment of covert cognitive processes may become behaviorally manifest as elevated PE propensities. Figure 2 presents an illustrative example of this reasoning.
We developed our computational research program in the context of two neurological diseases, i.e., PD and ALS. A loss of dopaminergic neurons in nigro-striatal pathways primarily characterizes PD [50,51]. In contrast, a loss of upper and lower motor neurons in the brain and spinal cord characterizes ALS [52]. There is evidence for increased PE propensities in both patients with PD and patients with ALS [29,39]. Despite this neuropsychological commonality between patients with PD and patients with ALS, the neurodegenerative alterations that occur in patients with PD could affect a set of covert cognitive processes that remain spared in patients with ALS, who, in contrast, show impairments in a distinct set of covert cognitive processes [11]. Thus, while patients with PD and patients with ALS remain indiscernible by analyses of overt PE propensities, these patient groups may nevertheless show (partially) dissociable impairments of covert cognitive processes (i.e., covert cognitive symptoms). The assessment of covert cognitive processes in patients with PD and patients with ALS could provide initial progress toward the detection of nosologically specific aspects of cognitive inflexibility.

2. Assessing Covert Cognitive Processes on the WCST

Before we review recent advances with regard to computational modeling of covert cognitive processes on the WCST, we will give an overview of common methodological approaches to covert cognitive processes. Of particular interest for this overview is the utility of the discussed methodological approaches for an individual-based assessment of covert cognitive processes.

2.1. Dissociating Patterns of Erroneous Responses

The dissociation of patterns of erroneous responses represents a common approach to the identification and isolation of covert cognitive processes on the WCST [26,28,53,54]. For example, in a recent behavioral study [26] of neurological inpatients who completed a short paper-and-pencil version of the WCST (the modified WCST; M-WCST) [55], we stratified PEs and SLEs by response demands (see Figure 3). We found reduced PE propensities for PEs that implied a response repetition (i.e., “Demanded Response Alternation” in Figure 3A) when compared to PEs that implied a response alternation (i.e., “Demanded Response Repetition” in Figure 3A). These results suggest a modulation of PE propensities by response demands: PEs become less likely when they imply repeating the response that received negative feedback on the previous trial. We concluded that participants not only learn to avoid re-applying categories after negative feedback, but also learn to avoid re-executing responses after negative feedback.
We replicated the modulation of PE propensities by response demands in a large sample of young volunteers (N = 375) who completed a computerized WCST (cWCST) variant [56]. This successful replication suggests that response demands modulate PE propensities not only on a paper-and-pencil variant of the WCST (i.e., the M-WCST), but also on a computerized variant, and that the phenomenon occurs in neurological inpatients as well as in individuals with no known brain damage.
Analyses of patterns of erroneous responses may allow for the detection of particular behavioral effects on the WCST (e.g., a modulation of PE propensities by response demands), which in turn allow inferences about covert cognitive processes (e.g., learning to avoid re-executions of particular responses following negative feedback). However, analyses of erroneous responses still refer to overt behavioral events, rendering conclusions about actual covert cognitive processes difficult. Thus, the dissociation of patterns of erroneous responses does not represent a satisfactory approach to the assessment of covert cognitive processes.

2.2. Identifying and Isolating Latent Variables

Computational methods provide an alternative approach to covert cognitive processes on the WCST [48,57]. Computational methods identify and isolate latent variables (as opposed to observable variables, such as WCST error propensities) from observed behavior. Latent variables reflect the efficacy of covert cognitive processes that may support WCST responding. In contrast to the dissociation of patterns of erroneous responses, computational methods allow for inferences closer to the level of covert cognitive processes.

2.2.1. Factor Analyses

Factor analyses identify sets of latent variables that explain variance common to WCST scores [58,59,60]. Factor-analytical WCST studies consistently revealed a single latent variable, which could indicate a general executive control ability [57]. However, factor-analytical WCST studies remain inconclusive about the number of additionally identifiable latent variables [57]. Furthermore, it remains difficult to infer which covert cognitive processes are actually reflected by these latent variables and how these covert cognitive processes could interact [57]. Thus, factor analyses are of limited utility for the assessment of covert cognitive processes.

2.2.2. Computational Modeling

In contrast to factor analyses, computational models explicitly formalize covert cognitive processes and the way in which these covert cognitive processes interact by mathematical expressions [61,62,63,64,65,66]. Computational models thereby allow one (1) to systematically test hypotheses about covert cognitive processes and (2) to estimate sets of latent variables that reflect the efficacy of the assumed covert cognitive processes [61,62,63,64,67,68].
In the first case, computational models represent hypotheses about covert cognitive processes [68]. Evaluations of competing computational models allow one to test hypotheses about covert cognitive processes. A common method for the evaluation of computational models is to compare their abilities to predict observed behavior [68,69]. The computational model that provides the best prediction of observed behavior may also give the best conceptualization of covert cognitive processes among the compared computational models. Another method for the evaluation of computational models is to compare their abilities to simulate particular behavioral phenomena, such as observed PE and SLE propensities [68]. If a computational model does not simulate all behavioral phenomena of interest, then that computational model should be considered as falsified [68].
There are several computational models for the WCST [48,70,71,72,73,74,75,76,77,78,79,80]. These computational models typically belong to one of two subclasses: neural network models or mechanistic models [48]. Most computational models of the WCST are neural network models (e.g., [71,72]). Neural network models are biologically inspired sets of computational units (referred to as cells or neurons) [81,82]. Interconnections of computational units usually mirror cerebral structures that instantiate specific covert cognitive processes [48,74]. For example, Caso and Cooper [74] proposed a neural network model of the WCST that incorporates cortical and striatal learning mechanisms. “Lesions” (i.e., alterations of latent variables) to computational units that reflect striatal learning mechanisms were considered as a model of pathophysiological changes in patients with PD (see also [83]). The lesioned neural network model produced PE propensities comparable to those observed in a sample of patients with PD [29]. The authors concluded that the proposed neural network model represents a biologically plausible model of (impaired) striatal learning mechanisms in patients with PD.
Neural network models allow the simulation of general patterns of WCST error propensities, such as increased PE propensities as found in patients with PD [74]. However, neural network models incorporate very large numbers of latent variables, rendering their precise estimation for individual participants difficult [48]. In addition, the enormous number of latent variables complicates their psychological interpretation. Thus, neural network models provide limited utility for the assessment of covert cognitive processes.
The second family of computational models of the WCST are so-called mechanistic models [48,67]. In mechanistic models, straightforward computational mechanisms instantiate the assumed covert cognitive processes. Mechanistic models typically incorporate a small number of latent variables, which can be robustly estimated from individual trial-by-trial WCST responses [48,67]. Thus, in contrast to neural network models, mechanistic models provide sets of latent variables for each assessed participant. Moreover, latent variables obtained from mechanistic models—as opposed to latent variables obtained from factor analyses or neural network models—serve as psychologically interpretable metrics for covert cognitive processes. Against this background, mechanistic models could provide a suitable approach to the assessment of covert cognitive processes on the WCST—an approach which we will refer to as computational neuropsychology [48,67,84,85,86,87,88,89].

3. Toward a Computational Neuropsychology of Cognitive Flexibility

Computational neuropsychology may provide progress toward nosologically specific aspects of cognitive inflexibility. That is, analyses of latent variables of mechanistic models could reveal disease-specific covert cognitive symptoms of neurological conditions, which yet remain undetectable by traditionally applied behavioral methods.
During the remainder of this article, we aim to elucidate whether computational neuropsychology possesses the potential to reveal nosologically specific profiles of covert cognitive symptoms. We will therefore review and compare mechanistic models of the WCST. Having identified the most suitable mechanistic model of the WCST among all models considered, we will discuss exemplary clinical applications of this mechanistic model in patients with PD and patients with ALS. In order to shed light on the nosological specificity of covert cognitive symptoms, we will compare profiles of covert cognitive symptoms of patients with PD and patients with ALS.

3.1. Mechanistic Models of the WCST

3.1.1. The Attentional-Updating Model

The attentional-updating (AU) model by Bishara et al. [48] represents an established mechanistic model of the WCST. Core to the AU model is the assumption that participants form attentional prioritizations (APs) of categories. A high AP of a category results in a high probability of applying that category on a particular trial. APs of categories are updated trial-wise following received feedback. Following positive feedback, the AP of the applied category increases and the APs of the non-applied categories decrease (and vice versa for negative feedback). Thus, repeating a category becomes more likely after positive feedback, whereas switching the applied category becomes more likely after negative feedback. An attentional focus mechanism modulates the strength of AP updating: a high AP of a particular category results in strong updating of that AP, whereas a low AP results in weak updating.
The AU model incorporates four individual latent variables. Sensitivity parameters quantify the overall strengths of updating of AP following received feedback. The AU model includes separate sensitivity parameters for positive and negative feedback, enabling different individual strengths of updating following positive and negative feedback. An attentional focus parameter quantifies the extent to which magnitudes of AP modulate the strength of updating of AP. A response variability parameter quantifies how well responding corresponds to AP. Figure 4 gives a schematic depiction of the AU model.
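The updating and focus mechanisms described above can be illustrated with a short sketch. This is one plausible reading of the verbal description, not the published equations of Bishara et al.; the parameter names (`sens_pos`, `sens_neg`, `focus`) and the renormalization step are assumptions introduced here for illustration.

```python
import numpy as np

def au_update(ap, applied, feedback, sens_pos, sens_neg, focus):
    """One illustrative AU-style updating step (hypothetical functional form).

    ap: attentional prioritizations over categories (non-negative, sums to 1)
    applied: index of the category applied on this trial
    feedback: +1 (positive) or -1 (negative)
    """
    ap = ap.copy()
    sens = sens_pos if feedback > 0 else sens_neg
    # attentional focus: update strength scales with the current AP magnitude,
    # so a high AP yields strong updating and a low AP yields weak updating
    strength = sens * ap[applied] ** focus
    if feedback > 0:
        ap[applied] += strength * (1.0 - ap[applied])  # raise the applied AP
    else:
        ap[applied] -= strength * ap[applied]          # lower the applied AP
    # renormalizing moves the non-applied APs in the opposite direction
    return ap / ap.sum()
```

Because the vector is renormalized, an increase in the applied category's AP after positive feedback automatically decreases the APs of the non-applied categories, as the verbal description requires.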
The AU model successfully contributed to a number of clinical studies [67,78,90]. For example, Bishara et al. [48] applied the AU model to study covert cognitive symptoms in substance dependent individuals. Substance dependent individuals showed a decreased sensitivity for negative feedback as well as increased response variability when compared to a control group. The AU model also contributed to a lesion mapping study [91]. Results of this lesion mapping study suggest an association between lesions in the right prefrontal cortex (PFC) and the sensitivity parameter for negative feedback. In a model evaluation study, the AU model successfully simulated individual PE and SLE propensities of patients with PD and HC participants who completed a cWCST variant [48,67,91].

3.1.2. The Cognitive Reinforcement-Learning Model

The cognitive reinforcement-learning (RL) model [56] is based on the well-established mathematical framework of reinforcement learning [89,92,93,94,95,96,97,98]. Core to the cognitive RL model is the assumption that participants form feedback predictions for the application of categories. A high feedback prediction indicates a strong prediction of positive feedback for the application of a category. High feedback predictions for a category also relate to a high probability of applying that category. Feedback predictions for categories are updated trial-wise in response to received feedback: following positive feedback, the feedback prediction for the applied category increases; following negative feedback, it decreases. Prediction errors, which equal the difference between the received and the predicted feedback, modulate the strength of updating of feedback predictions: large prediction errors result in stronger updating.
The cognitive RL model incorporates two mechanisms that are not inherent parts of canonical RL models [92]. First, a retention mechanism describes the transfer of feedback predictions from one trial to the next [99,100]. Second, a “soft-max” rule gives response probabilities as a function of feedback predictions on a particular trial [92,101,102,103].
The cognitive RL model comprises four individual latent variables. Cognitive learning rates quantify the extent to which prediction errors update feedback predictions. There are separate cognitive learning rates for received positive and negative feedback [89,104,105,106]. A cognitive retention rate quantifies the extent to which feedback predictions transfer from one trial to the next [99,100]. An inverse temperature parameter quantifies how well executed responses correspond to feedback predictions [101,102,103]. Figure 5 gives a schematic depiction of the cognitive RL model.
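A minimal sketch of one update cycle of the cognitive RL model, following the verbal description above. The ordering of the retention and learning steps, the feedback coding (±1), and the parameter names are assumptions for illustration, not the published specification.

```python
import numpy as np

def softmax(q, inv_temp):
    """Soft-max rule: response probabilities from feedback predictions."""
    z = inv_temp * q
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def cognitive_rl_step(q, applied, feedback, lr_pos, lr_neg, retention):
    """One trial of cognitive RL (illustrative functional form).

    q: feedback predictions per category; feedback in {+1, -1}.
    """
    q = retention * q                 # retention: attenuated trial-to-trial transfer
    lr = lr_pos if feedback > 0 else lr_neg
    pe = feedback - q[applied]        # prediction error for the applied category
    q[applied] += lr * pe             # only the applied category is updated
    return q
```

Note that, consistent with the model comparison discussed later, only the applied category's prediction is updated by the prediction error, and the retention rate attenuates all predictions from one trial to the next.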

3.1.3. The Parallel Reinforcement-Learning Model

Based on the finding of a modulation of PE propensities by response demands (see Figure 3), we hypothesized that participants learn at two parallel levels on the WCST [26]. Category-level (putatively cortical) learning implies that participants tend to repeat the applied category on trials following positive feedback, and that they tend to switch the applied category on trials following negative feedback. Participants might also learn at the level of responses. Response-level (putatively striatal) learning implies that participants tend to repeat the execution of a particular response following positive feedback, and that participants tend to avoid the re-execution of a response following negative feedback.
The parallel RL model [56] constitutes a mathematical formalization of category- and response-level learning [26]. Cognitive RL (as in the cognitive RL model) serves as an instantiation of category-level learning. In addition, sensorimotor RL serves as an instantiation of response-level learning. Hence, the parallel RL model constitutes an extended variant of the cognitive RL model (see Figure 5).
Sensorimotor RL is solely concerned with feedback predictions for the execution of responses, irrespective of the associated categories. A high feedback prediction for the execution of a response results in a high probability of executing that response. Feedback predictions for responses are updated trial-wise following received feedback: after positive feedback, the feedback prediction for the executed response increases, whereas it decreases after negative feedback. Thus, repeating a response execution becomes more likely after positive feedback, whereas switching the executed response becomes more likely after negative feedback. Prediction errors (i.e., the difference between the received feedback and the predicted feedback for the execution of a particular response) modulate the strength of updating of feedback predictions for responses. Sensorimotor RL also incorporates a retention mechanism that describes the transfer of feedback predictions for responses from one trial to the next [99,100]. On any trial, the parallel RL model adds feedback predictions for responses to feedback predictions for categories. A soft-max function gives response probabilities as a function of these integrated feedback predictions [92,101,102,103].
The parallel RL model incorporates eight individual latent variables. Separate cognitive and sensorimotor learning rates quantify the extents to which prediction errors update feedback predictions for categories and responses, respectively. There are separate learning rates for received positive and negative feedback at both cognitive and sensorimotor levels [89,104,105]. Separate retention rates at cognitive and sensorimotor levels [99,100] quantify the extents to which feedback predictions for categories and responses transfer from trial to trial. A weighting parameter quantifies the relative strength of cognitive over sensorimotor RL. An inverse temperature parameter [101,102,103] expresses how well executed responses correspond to integrated feedback predictions. Figure 6 gives a schematic depiction of the parallel RL model.
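The integration step of the parallel RL model can be sketched as follows. The convex combination via the weighting parameter `w` is an assumption consistent with the verbal description (feedback predictions for responses are added to those for categories, with a parameter governing their relative strength); it is not the published equation.

```python
import numpy as np

def integrated_choice_probs(q_cat, q_resp, w, inv_temp):
    """Response probabilities from integrated feedback predictions.

    q_cat: feedback predictions for the categories mapped to each key card
    q_resp: feedback predictions for the responses (key-card positions)
    w: relative weight of cognitive over sensorimotor RL (0..1, assumed form)
    inv_temp: inverse temperature of the soft-max rule
    """
    q = w * q_cat + (1.0 - w) * q_resp   # parallel integration of both levels
    z = inv_temp * q
    z -= z.max()                         # numerical stability
    p = np.exp(z)
    return p / p.sum()
```

With `w` close to 1, responding is dominated by category-level learning; with `w` close to 0, it is dominated by response-level learning, which is what allows the model to capture the modulation of PE propensities by response demands.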

3.1.4. Comparison of Mechanistic Models

In a recent model comparison study [56], we evaluated the AU model [48], the cognitive RL model, and the parallel RL model on a large sample of healthy volunteers (N = 375) who completed a cWCST variant [30].
We evaluated mechanistic models by predictive accuracies [107,108]. Predictive accuracies quantify how well a mechanistic model predicts observed trial-by-trial cWCST responses. The cognitive and the parallel RL model showed better predictive accuracies than the AU model for most participants. These results suggest that RL models provide a better conceptualization of trial-by-trial cWCST responses than the AU model.
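One common way to operationalize predictive accuracy is the mean log-likelihood a model assigns to the responses a participant actually made; this is an illustrative assumption, since the cited study may use a different criterion (e.g., cross-validated or penalized likelihoods).

```python
import numpy as np

def mean_log_likelihood(pred_probs, choices):
    """Mean per-trial log-likelihood of the observed responses.

    pred_probs: (n_trials, n_options) model-predicted response probabilities
    choices: index of the option the participant chose on each trial
    """
    pred = np.asarray(pred_probs, dtype=float)
    # probability the model assigned to the actually chosen option per trial
    picked = pred[np.arange(len(choices)), np.asarray(choices)]
    return float(np.mean(np.log(picked)))
```

Under this criterion, the model whose trial-by-trial predictions place higher probability on the observed responses attains the higher (less negative) score.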
RL models differ from the AU model [48] with regard to updating mechanisms. In RL models, prediction errors modulate the strength of the updating of feedback predictions. Prediction errors ensure that updating of feedback predictions is stronger when the correspondence between the received and the predicted feedback is poor. For example, a participant receives positive feedback for the application of a category that had a low feedback prediction (i.e., indicating the prediction of a negative feedback for that category). Thus, the prediction of feedback for this category was poor, resulting in a high prediction error. Hence, updating of feedback prediction for this category will be strong, facilitating the re-application of the category that produced a positive feedback. In the AU model, an attentional focus mechanism ensures that updating of AP of a particular category is less strong when the AP of that category was low. In the example mentioned above, updating of AP will be less strong since the AP of that category was low. Hence, the attentional focus mechanism complicates the re-application of the category that produced a positive feedback. Thus, RL models incorporate more efficient adaptation of card sorting to changing task demands in comparison to the AU model.
RL models further differ from the AU model with regard to retention mechanisms. In RL models, retention mechanisms attenuate feedback predictions from one trial to the next [99,100]. In the AU model, APs transfer from trial to trial without attenuation. RL models also differ from the AU model with regard to the computation of response probabilities. A soft-max rule gives response probabilities in RL models [92,101,102,103]. In contrast, an algorithm that divides each single AP by the overall sum of APs gives response probabilities in the AU model. Lastly, in RL models, prediction errors update single feedback predictions on any trial (i.e., prediction errors only update the feedback predictions for the applied category and/or the executed response). The AU model assumes that all APs of categories are updated on any trial (i.e., after positive feedback, the AP of the applied category increases and all other APs decrease, and vice versa for negative feedback).
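The two response rules contrasted above can be placed side by side. This is a didactic comparison; the consistency exponent in the AU-style rule is an assumed stand-in motivated by the response-variability parameter, not the published formula.

```python
import numpy as np

def softmax_rule(values, inv_temp):
    # RL models: exponentiate scaled feedback predictions, then normalize
    z = inv_temp * np.asarray(values, dtype=float)
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

def au_response_rule(ap, consistency=1.0):
    # AU model: divide each AP by the overall sum of APs; the exponent is an
    # assumed illustration of the response-variability parameter
    w = np.asarray(ap, dtype=float) ** consistency
    return w / w.sum()
```

A practical consequence of the soft-max rule is that the inverse temperature continuously controls how deterministically predictions translate into responses, whereas the proportional rule ties response probabilities directly to the (transformed) AP magnitudes.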
Our model comparison study [56] remains inconclusive about which particular mechanism of RL models gives a better conceptualization of trial-by-trial cWCST responses than the corresponding AU mechanism. Future studies should explicitly compare the discussed model mechanisms. Such studies could evaluate predictive accuracies of mechanistic models that solely differ with regard to one of the contrasted mechanisms.
Suitable mechanistic models of the WCST should account for a wide range of behavioral phenomena [68]. In our model comparison study [56], the benchmark for all mechanistic models was (1) a successful simulation of individual PE and SLE propensities as well as (2) a successful simulation of the modulation of perseveration propensities by response demands (see Figure 3) [26]. The parallel RL model clearly outperformed the cognitive RL model and the AU model with regard to simulations of these behavioral phenomena. All mechanistic models under consideration simulated individual PE and SLE propensities. However, only the parallel RL model simulated the modulation of PE propensities by response demands.
Against this background, the parallel RL model, which incorporates cognitive and sensorimotor RL as computational instantiations of category- and response-level learning, represents a suitable mechanistic model of the cWCST. In contrast, the cognitive RL model and the state-of-the-art AU model are insufficient mechanistic models of the cWCST.

3.2. Assessing Covert Cognitive Symptoms in Neurological Diseases

In order to elucidate whether computational neuropsychology possesses the potential to reveal nosologically specific profiles of covert cognitive symptoms, we will review exemplary applications of the parallel RL model [56] in patients with PD and patients with ALS.

3.2.1. Parkinson’s Disease

In a recent computational study [109], we characterized covert cognitive symptoms associated with PD pathophysiology. To this end, we reanalyzed data from 16 patients with PD and 34 matched HC participants, who completed a cWCST variant [110], by means of the parallel RL model.
Patients with PD showed increased cognitive retention rates when compared to HC participants. With high cognitive retention rates, feedback predictions for categories that produced a negative feedback remain at high levels when transferring to the next trial. Hence, the erroneous repetition of such categories becomes more likely (see Figure 7B), rendering category-level learning inflexible. We concluded that increased cognitive retention rates are an expression of bradyphrenia (i.e., “inflexibility of thought”), which represents a hallmark cognitive symptom of PD, at the level of covert cognitive processes [111,112,113,114].
Patients with PD also showed reduced sensorimotor retention rates when compared to HC participants. Reduced sensorimotor retention rates indicate that feedback predictions for responses transfer less strongly from trial to trial (see Figure 7C). Thus, in patients with PD, responding on a particular cWCST trial is less strongly affected by previous feedback predictions for responses than in HC participants. That is, the responding of patients with PD appears less repetitive (following positive feedback) and less alternating (following negative feedback). The finding of decreased sensorimotor retention rates in patients with PD may correspond to impaired stimulus-response learning (or, with regard to the cWCST, learning to select a key card by executing a particular response), an impairment that has repeatedly been reported in patients with PD [116,117,118].
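The retention-rate mechanism described in the two preceding paragraphs can be sketched in a few lines. This is an illustrative toy implementation, not the published model: the function name, the neutral baseline of 0.5, and the parameter values are all assumptions. The same transfer rule applies to cognitive (category-level) and sensorimotor (response-level) feedback predictions; only the retention parameter differs.

```python
def transfer(prediction, retention):
    """Carry a feedback prediction over to the next trial.

    With retention close to 1, the prediction transfers almost unchanged;
    with retention close to 0, it decays back toward a neutral baseline
    (here assumed to be 0.5).
    """
    baseline = 0.5
    return baseline + retention * (prediction - baseline)

# A category that just produced negative feedback still carries a high
# feedback prediction into the next trial when cognitive retention is
# high, making its erroneous repetition more likely (see Figure 7B):
high_retention = transfer(0.9, retention=0.95)  # ~0.88: prediction stays high
# With low retention, the same prediction decays toward the baseline,
# as assumed for response-level predictions in patients with PD:
low_retention = transfer(0.9, retention=0.20)   # ~0.58: prediction decays
```

In this sketch, increased cognitive retention produces bradyphrenia-like perseveration of category predictions, whereas decreased sensorimotor retention weakens the trial-to-trial influence of response predictions.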

3.2.2. Dopamine Replacement Therapy in Patients with PD

In our recent computational study of patients with PD [109], we also characterized covert cognitive symptoms associated with the administration of dopamine (DA) replacement therapy. To this end, patients with PD were assessed both “on” and “off” DA medication (i.e., after withdrawal of DA medication) [110].
DA replacement therapy aims to alleviate motor symptoms in patients with PD by restoring missing DA in nigro-striatal DA systems. However, titrating systemic DA replacement solely to achieve the best possible motor function may incur cognitive side effects: DA replacement that is optimal for the nigro-striatal DA systems may overdose less affected DA systems, such as the meso-limbic and/or meso-cortical DA systems. Thereby, DA replacement therapy may induce cognitive impairments [12,119,120,121,122,123,124].
The application of the parallel RL model revealed that DA replacement therapy in patients with PD increased cognitive retention rates. Thus, DA replacement therapy seems to induce bradyphrenic side effects (see Figure 7B). DA replacement therapy in patients with PD also reduced cognitive learning rates following positive feedback, indicating that DA replacement therapy in patients with PD induces another covert cognitive symptom: impaired category learning from positive feedback (see Figure 7D).
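The reduced cognitive learning rate following positive feedback can be illustrated with a minimal delta-rule update. The split into positive- and negative-feedback learning rates mirrors the idea above, but the function and parameter names, and all values, are illustrative assumptions rather than the published model equations.

```python
def update(prediction, feedback, alpha_pos, alpha_neg):
    """Delta-rule update of a feedback prediction with feedback-specific
    learning rates (feedback: 1 = positive, 0 = negative)."""
    alpha = alpha_pos if feedback == 1 else alpha_neg
    return prediction + alpha * (feedback - prediction)

# With a reduced positive-feedback learning rate, the prediction for a
# rewarded category rises more slowly, i.e., category learning from
# positive feedback is impaired (cf. Figure 7D):
normal = update(0.5, 1, alpha_pos=0.6, alpha_neg=0.3)   # ~0.80
reduced = update(0.5, 1, alpha_pos=0.1, alpha_neg=0.3)  # ~0.55
```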
The meso-cortical DA systems support cognitive flexibility [125,126,127], whereas the meso-limbic DA systems support anticipation of feedback [128,129]. Thus, distinct DA systems could give rise to the reported iatrogenic cognitive impairments induced by DA replacement therapy [130]. An overstimulation of meso-cortical DA systems might cause bradyphrenic side effects, whereas an overstimulation of meso-limbic DA systems might impair category learning from positive feedback [109].

3.2.3. Amyotrophic Lateral Sclerosis

In another computational study [115], we characterized covert cognitive symptoms associated with ALS pathophysiology. To this end, we used the parallel RL model to reanalyze data from 18 patients with ALS and 21 matched HC participants who completed a cWCST variant [29].
Patients with ALS showed increased cognitive retention rates when compared to HC participants (see Figure 7B). These results suggest that bradyphrenia does not occur specifically in patients with PD. Rather, bradyphrenia may constitute a disease-nonspecific covert cognitive symptom associated with pathophysiological changes in both patients with PD and patients with ALS.
Patients with ALS also showed increased inverse temperature parameters in comparison to HC participants. The inverse temperature parameter expresses how well finally executed responses correspond to integrated feedback predictions for categories and responses [92,101,102,103]. Higher values of the inverse temperature parameter indicate that responding is more independent of integrated feedback predictions. Thus, with high inverse temperature parameters, responding appears more haphazard (see Figure 7E). These results suggest that ALS pathophysiology comprises another covert cognitive symptom: haphazard responding. Haphazard responding may relate to motor impairments in patients with ALS. For example, it could arise from deficient fine motor skills that obstruct successful cWCST responding [115,131].
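A choice rule of this kind can be sketched as a softmax over integrated feedback predictions. This is a generic illustration, not the published model equation: here a single noise parameter controls how strongly responding decouples from the predictions, matching the parameter interpretation given above; all names and values are assumptions.

```python
import math

def choice_probabilities(values, noise):
    """Softmax over integrated feedback predictions for the four key cards.

    As `noise` grows, the probabilities approach a uniform distribution,
    i.e., responding becomes increasingly independent of the predictions
    (haphazard responding; cf. Figure 7E).
    """
    scaled = [v / noise for v in values]
    m = max(scaled)  # subtract the maximum for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

predictions = [1.0, 0.2, 0.1, 0.1]
guided = choice_probabilities(predictions, noise=0.1)      # concentrates on the best option
haphazard = choice_probabilities(predictions, noise=10.0)  # near-uniform responding
```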

3.2.4. Comparison

Traditionally applied behavioral methods for the neuropsychological assessment of cognitive flexibility do not possess sufficient nosological specificity. For example, patients with PD and patients with ALS show increased PE propensities [29,39]. Thus, the finding of increased PE propensities is neither specific to patients with PD nor to those with ALS. We proposed that computational neuropsychology could provide progress with regard to the detection of nosologically specific aspects of cognitive inflexibility.
Our exemplary comparison of profiles of covert cognitive symptoms of patients with PD and patients with ALS corroborates this hypothesis [109,115]. Computational modeling revealed a disease-nonspecific alteration in latent variables. Patients with PD and patients with ALS showed increased cognitive retention rates. These results suggest that bradyphrenia constitutes a disease-nonspecific covert cognitive symptom, which characterizes both patient groups. DA medication in patients with PD further increased cognitive retention rates, indicating that DA medication in patients with PD incurred bradyphrenic side effects.
Computational modeling also revealed PD- and ALS-specific covert cognitive symptoms. Patients with PD, but not those with ALS, showed decreased sensorimotor retention rates when compared to HC participants. Decreased sensorimotor retention rates could indicate impaired stimulus-response learning in patients with PD. DA medication in patients with PD decreased cognitive learning rates after positive feedback. Thus, DA medication in patients with PD could induce impaired category learning from positive feedback. Lastly, only patients with ALS showed increased inverse temperature parameters when compared to HC participants. Increased inverse temperature parameters in patients with ALS may indicate haphazard responding.
The reported covert cognitive symptoms in patients with PD and patients with ALS demonstrate that computational neuropsychology possesses the potential to reveal nosologically specific profiles of covert cognitive symptoms. Figure 8 summarizes profiles of covert cognitive symptoms in patients with PD and patients with ALS [109,115].

4. Implications for Neuropsychological Assessment

The present review demonstrates how computational neuropsychology may provide progress with regard to the neuropsychological assessment of cognitive flexibility [109,115]. First, as delineated above, computational neuropsychology possesses the potential to reveal nosologically specific profiles of covert cognitive symptoms, which remain undetectable by traditional behavioral methods of neuropsychological assessment.
Second, traditional behavioral neuropsychological assessment refers to cognitive assessment, yet the referenced cognitive processes remain unobservable. For example, a typical inference from the presence of enhanced WCST error propensities would be that the assessed participant shows cognitive inflexibility [4,34,49,132]. Hence, behavioral neuropsychological assessment involves drawing inferences that go beyond behavioral observations. In contrast, computational neuropsychology offers a technique for the assessment of latent variables. As latent variables reflect the efficacy of assumed covert cognitive processes, computational neuropsychology may allow for inferences at the level of covert cognitive processes.
Third, behavioral neuropsychological assessment typically refers to vaguely defined cognitive symptoms. That is, cognitive symptoms are often verbal re-descriptions of behavioral observations. For example, Naville [133] observed a lack of voluntary attention, initiative, spontaneous interest, and capacity for effort in patients with encephalitis lethargica, which was also noted in patients with PD [134]. Naville summarized this observation as bradyphrenia [134]. Bradyphrenia literally translates to “slowness of thought”. Hence, a number of studies investigated bradyphrenia by means of response time tasks [112]. However, prolonged response times, when considered as an expression of bradyphrenia, are likely to be confounded with bradykinesia (i.e., “slowness of movement”) [112,135,136]. In other words, response times are not process-pure measures because they intermingle bradyphrenia and bradykinesia.
Another interpretation relates bradyphrenia to cognitive akinesia [134], so that bradyphrenia is better conceived of as “inflexibility of thought”. A number of studies have therefore investigated bradyphrenia by means of neuropsychological tests that target aspects of attentional or cognitive flexibility [137,138]. The example of the bradyphrenia construct illustrates that reliance on vague semantic definitions makes it difficult, or even impossible, to interpret behavioral studies as indicating particular cognitive symptoms.
Computational neuropsychology provides indicators of covert cognitive symptoms along with explicit definitions of their meaning. For example, we considered increased cognitive retention rates as an indicator of bradyphrenia (see above). We showed how increased cognitive retention rates render category-level learning inflexible (see Figure 7B). In the long run, explicit computational definitions may replace the state-of-the-art, yet ambiguous, semantic constructs that typically underpin behavioral neuropsychological assessment.

5. Outlook

The ultimate success of computational neuropsychology for neuropsychological assessment depends on further studies of validity and reliability [139]. A common method for the validation of computational models is to assess their ability to simulate particular behavioral phenomena [68,100,140,141]. In our recent model comparison study [56], we assessed mechanistic models with regard to their ability to simulate PE and SLE propensities as well as the modulation of PE propensities by response demands [26]. The AU model [48], as well as the cognitive RL model, failed to simulate the modulation of PE propensities by response demands. In contrast, the parallel RL model successfully simulated this behavioral effect. Thus, the parallel RL model may represent a valid mechanistic model of the cWCST with regard to the studied behavioral phenomena. However, the parallel RL model remains valid only until it fails to explain hitherto unnoticed behavioral phenomena, or until yet-to-be-specified computational models explain the known behavioral phenomena more parsimoniously [68,140].
Future studies should also validate computational models with regard to their proposed neural underpinnings. For example, cortical brain areas may primarily support cognitive RL, whereas sub-cortical, striatal brain areas may primarily support sensorimotor RL [26]. Confirmatory brain imaging studies should test this hypothesis. Such studies could make use of individual trial-by-trial variables provided by the parallel RL model. For example, individual trial-wise cognitive and sensorimotor prediction errors could correlate with activation patterns in cortical and/or striatal brain areas as revealed by functional magnetic resonance imaging [142,143].
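Extracting such trial-wise prediction errors from a fitted model is straightforward. The following toy function, whose delta-rule learner and names are illustrative assumptions rather than the published model, returns a per-trial prediction-error series of the kind that could serve as a parametric regressor:

```python
def prediction_errors(alpha, choices, feedbacks):
    """Trial-wise prediction errors of a simple delta-rule learner.

    choices: sequence of chosen options (hashable labels)
    feedbacks: sequence of feedback values (e.g., 1 = positive, 0 = negative)
    Returns one prediction error per trial.
    """
    predictions = {}  # feedback prediction per option, starting at 0
    errors = []
    for choice, feedback in zip(choices, feedbacks):
        old = predictions.get(choice, 0.0)
        pe = feedback - old
        errors.append(pe)
        predictions[choice] = old + alpha * pe
    return errors

# Repeatedly rewarded choices yield shrinking prediction errors:
pes = prediction_errors(0.5, ["A", "A", "A"], [1, 1, 1])  # [1.0, 0.5, 0.25]
```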
Future studies should also investigate the clinical validity of computational modeling [139,144]. With regard to the parallel RL model, we found increased cognitive retention rates in patients with PD and patients with ALS, which we considered an expression of bradyphrenia. These results suggest an association between increased cognitive retention rates and brain dysfunctions that are common to patients with PD and patients with ALS [115]. Both PD and ALS pathophysiology affect the premotor cortex and the dorsolateral PFC (Brodmann areas 4, 6, 8, and 9) [145,146,147]. Hence, increased cognitive retention rates may relate to dysfunctions in these cortical areas. Our finding that patients with PD “on” DA medication showed even more exaggerated cognitive retention rates supports this hypothesis. That is, DA replacement therapy in patients with PD may overstimulate meso-cortical DA systems [119,120,122,123].
Alterations in other latent variables of the parallel RL model could specifically relate to pathophysiological characteristics of patients with PD and patients with ALS [115]. Only patients with PD showed decreased sensorimotor retention rates. Striatal brain areas may primarily support sensorimotor RL [26]. Striatal brain areas are also strongly affected in patients with PD [50,51]. Thus, decreased sensorimotor retention rates could relate to striatal dysfunctions in patients with PD. DA replacement therapy in patients with PD decreased cognitive learning rates following positive feedback. As discussed above, decreased cognitive learning rates could relate to an overstimulation of meso-limbic DA systems induced by DA replacement therapy in patients with PD. Lastly, only patients with ALS showed increased inverse temperature parameters. Thus, increased inverse temperature parameters could possibly relate to motor cortex dysfunctions associated with ALS pathophysiology [52]. Future research should explicitly test these hypothesized relationships between alterations in latent variables and pathophysiological characteristics of patients with PD and patients with ALS [115]. Such studies could combine computational modeling with brain imaging and/or lesion-(covert)-symptom mapping [91,143,148].
Computational models should provide reliable latent variable estimation from observed behavior [139,141]. Parameter recovery allows one to assess the reliability of parameter estimation [139,141]. Parameter recovery studies simulate behavior by a mechanistic model using a pre-defined set of latent variables. If latent variable estimation is reliable, there should be a close correspondence between the pre-defined set of latent variables and the latent variables estimated from the simulated behavior. An investigation of parameter recovery [56] suggests that a configuration of the parallel RL model that incorporates a weighting parameter (see Figure 6) did not provide reliable latent variable estimation. However, a configuration of the parallel RL model that does not incorporate a weighting parameter provided reliable parameter estimation [56]. We utilized this less complex configuration of the parallel RL model (i.e., a configuration without a weighting parameter) to study covert cognitive symptoms in patients with PD and patients with ALS. These results of parameter recovery suggest that reducing model complexity (i.e., the number of latent variables) may improve the reliability of latent variable estimation.
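The parameter-recovery logic can be sketched as follows. This is a deliberately simplified illustration, not the published recovery procedure: a two-choice probabilistic learning task stands in for the cWCST, a single learning-rate parameter is recovered by grid search with the choice-stochasticity parameter held fixed, and all names and values are assumptions.

```python
import math
import random

def simulate(alpha, beta, n_trials, rng):
    """Simulate choices of a delta-rule learner with known parameters."""
    q = [0.0, 0.0]
    reward_probs = [0.8, 0.2]  # option 0 is objectively better
    choices, feedbacks = [], []
    for _ in range(n_trials):
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        c = 1 if rng.random() < p1 else 0
        r = 1.0 if rng.random() < reward_probs[c] else 0.0
        choices.append(c)
        feedbacks.append(r)
        q[c] += alpha * (r - q[c])
    return choices, feedbacks

def negative_log_likelihood(alpha, beta, choices, feedbacks):
    """Fit criterion: how well a candidate alpha explains the choices."""
    q = [0.0, 0.0]
    nll = 0.0
    for c, r in zip(choices, feedbacks):
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        p = p1 if c == 1 else 1.0 - p1
        nll -= math.log(max(p, 1e-12))
        q[c] += alpha * (r - q[c])
    return nll

# Step 1: simulate behavior with a pre-defined latent variable.
rng = random.Random(42)
true_alpha, beta = 0.3, 5.0
choices, feedbacks = simulate(true_alpha, beta, n_trials=2000, rng=rng)

# Step 2: re-estimate the latent variable from the simulated behavior.
grid = [i / 100 for i in range(1, 100)]
recovered = min(grid, key=lambda a: negative_log_likelihood(a, beta, choices, feedbacks))
# Step 3: reliable estimation means `recovered` lies close to `true_alpha`.
```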
It could also be advisable to assess other facets of the reliability of latent variables, such as temporal stability and/or internal consistency [139,149]. Studies that assess latent variables repeatedly over time should investigate their temporal stability, as assessed by test–retest reliability [150]. Other studies should investigate the internal consistency of latent variables, as assessed by split-half reliability. Split-half reliability methods apply to any assessment tool that can be split into subsets of trials, such as the cWCST [150,151].
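Split-half reliability of a latent variable can be computed by correlating parameter estimates obtained from two trial subsets (e.g., odd vs. even trials) and applying the Spearman–Brown correction. A minimal sketch, assuming the per-participant estimates are already available (the data below are made up for illustration):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def split_half_reliability(estimates_odd, estimates_even):
    """Spearman-Brown-corrected correlation between the two halves."""
    r = pearson(estimates_odd, estimates_even)
    return 2 * r / (1 + r)

# Hypothetical per-participant retention-rate estimates from odd vs. even trials:
odd = [0.91, 0.75, 0.62, 0.88, 0.70]
even = [0.89, 0.72, 0.66, 0.85, 0.73]
reliability = split_half_reliability(odd, even)  # high, close to 1
```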
The WCST served as an exemplary assessment tool for the present review. However, we would like to highlight that computational neuropsychology is not limited to the WCST. In fact, computational neuropsychology should be applicable to many assessment tools. The sole requirements are (1) that there is a mechanistic model of a participant’s performance, which provides a set of latent variables at the level of individuals, and (2) that these latent variables can be estimated from observed behavior with sufficient precision. The precision of latent variable estimation can be increased with the number of analyzed participants [109,115]. Hence, computational neuropsychology may be particularly suitable for (re-)analyses of large datasets, such as those available from open science approaches [152,153] or multi-lab studies [154].

6. Conclusions

Increased PE propensities are a well-documented behavioral finding in many neurological patient groups. This disease-nonspecific finding suggests that cognitive inflexibility constitutes a cognitive symptom common to all these neurological diseases. However, elevated PE propensities may actually arise from shared and disease-specific impairments of covert cognitive processes supporting cognitive flexibility. The present review demonstrates that computational neuropsychology possesses the potential to reveal such nosologically specific profiles of covert cognitive symptoms, which remain undiscoverable through traditional behavioral neuropsychology. We conclude that computational neuropsychology offers a potential route to the advancement of neuropsychological assessment.

Author Contributions

Conceptualization, A.S. and B.K.; methodology, A.S. and B.K.; investigation, A.S. and B.K.; resources, B.K.; data curation, A.S.; writing—original draft preparation, A.S. and B.K.; writing—review and editing, A.S. and B.K.; visualization, A.S. and B.K.; supervision, B.K.; project administration, B.K.; funding acquisition, B.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Karlheinz-Hartmann Stiftung, Hannover, Germany.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Duncan, J.; Emslie, H.; Williams, P.; Johnson, R.; Freer, C. Intelligence and the frontal lobe: The organization of goal-directed behavior. Cogn. Psychol. 1996, 30, 257–303. [Google Scholar] [CrossRef] [PubMed]
  2. Grafman, J.; Litvan, I. Importance of deficits in executive functions. Lancet 1999, 354, 1921–1923. [Google Scholar] [CrossRef]
  3. MacPherson, S.E.; Gillebert, C.R.; Robinson, G.A.; Vallesi, A. Editorial: Intra- and Inter-individual Variability of Executive Functions: Determinant and Modulating Factors in Healthy and Pathological Conditions. Front. Psychol. 2019, 10, 432. [Google Scholar] [CrossRef] [PubMed]
  4. Diamond, A. Executive functions. Annu. Rev. Psychol. 2013, 64, 135–168. [Google Scholar] [CrossRef] [Green Version]
  5. Miller, E.K.; Cohen, J.D. An integrative theory of prefrontal cortex function. Annu. Rev. Neurosci. 2001, 24, 167–202. [Google Scholar] [CrossRef] [Green Version]
  6. Toba, M.N.; Malkinson, T.S.; Howells, H.; Mackie, M.A.; Spagna, A. Same or different? A multi-method review on the relationships between processes underlying executive control. PsyArXiv Prepr. 2020. [Google Scholar] [CrossRef]
  7. Dirnberger, G.; Jahanshahi, M. Executive dysfunction in Parkinson’s disease: A review. J. Neuropsychol. 2013, 7, 193–224. [Google Scholar] [CrossRef]
  8. Elamin, M.; Bede, P.; Montuschi, A.; Pender, N.; Chio, A.; Hardiman, O. Predicting prognosis in amyotrophic lateral sclerosis: A simple algorithm. J. Neurol. 2015, 262, 1447–1454. [Google Scholar] [CrossRef] [Green Version]
  9. Rapp, M.A.; Reischies, F.M. Attention and executive control predict Alzheimer disease in late life: Results from the Berlin Aging Study (BASE). Am. J. Geriatr. Psychiatry 2005, 13, 134–141. [Google Scholar] [CrossRef]
  10. Beeldman, E.; Raaphorst, J.; Twennaar, M.; de Visser, M.; Schmand, B.A.; de Haan, R.J. The cognitive profile of ALS: A systematic review and meta-analysis update. J. Neurol. Neurosurg. Psychiatry 2016, 87, 611–619. [Google Scholar] [CrossRef] [Green Version]
  11. Lange, F.; Seer, C.; Kopp, B. Cognitive flexibility in neurological disorders: Cognitive components and event-related potentials. Neurosci. Biobehav. Rev. 2017, 83, 496–507. [Google Scholar] [CrossRef] [PubMed]
  12. Seer, C.; Lange, F.; Georgiev, D.; Jahanshahi, M.; Kopp, B. Event-related potentials and cognition in Parkinson’s disease: An integrative review. Neurosci. Biobehav. Rev. 2016, 71, 691–714. [Google Scholar] [CrossRef] [PubMed]
  13. Miyake, A.; Friedman, N.P. The nature and organization of individual differences in executive functions. Curr. Dir. Psychol. Sci. 2012, 21, 8–14. [Google Scholar] [CrossRef] [PubMed]
  14. Friedman, N.P.; Miyake, A. Unity and diversity of executive functions: Individual differences as a window on cognitive structure. Cortex 2017, 86, 186–204. [Google Scholar] [CrossRef] [Green Version]
  15. Miyake, A.; Friedman, N.P.; Emerson, M.J.; Witzki, A.H.; Howerter, A.; Wager, T.D. The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cogn. Psychol. 2000, 41, 49–100. [Google Scholar] [CrossRef] [Green Version]
  16. Braem, S.; Egner, T. Getting a grip on cognitive flexibility. Curr. Dir. Psychol. Sci. 2018, 27, 470–476. [Google Scholar] [CrossRef] [Green Version]
  17. Badre, D.; Wagner, A.D. Computational and neurobiological mechanisms underlying cognitive flexibility. Proc. Natl. Acad. Sci. USA 2006, 103, 7186–7191. [Google Scholar] [CrossRef] [Green Version]
  18. Kortte, K.B.; Horner, M.D.; Windham, W.K. The Trail Making Test, Part B: Cognitive flexibility or ability to maintain set? Appl. Neuropsychol. 2002, 9, 106–109. [Google Scholar] [CrossRef]
  19. Reitan, R.M. The relation of the Trail Making Test to organic brain damage. J. Consult. Psychol. 1955, 19, 393–394. [Google Scholar] [CrossRef]
  20. Kopp, B.; Rösser, N.; Tabeling, S.; Stürenburg, H.J.; de Haan, B.; Karnath, H.-O.; Wessel, K. Errors on the Trail Making Test are associated with right hemispheric frontal lobe damage in stroke patients. Behav. Neurol. 2015, 2015, 309235. [Google Scholar] [CrossRef] [Green Version]
  21. Dias, R.; Robbins, T.W.; Roberts, A.C. Dissociation in prefrontal cortex of affective and attentional shifts. Nature 1996, 380, 69–72. [Google Scholar] [CrossRef] [PubMed]
  22. Grant, D.A.; Berg, E.A. A behavioral analysis of degree of reinforcement and ease of shifting to new responses in a Weigl-type card-sorting problem. J. Exp. Psychol. 1948, 38, 404–411. [Google Scholar] [CrossRef] [PubMed]
  23. Berg, E.A. A simple objective technique for measuring flexibility in thinking. J. Gen. Psychol. 1948, 39, 15–22. [Google Scholar] [CrossRef] [PubMed]
  24. Heaton, R.K.; Chelune, G.J.; Talley, J.L.; Kay, G.G.; Curtiss, G. Wisconsin Card Sorting Test Manual: Revised and Expanded; Psychological Assessment Resources Inc.: Odessa, FL, USA, 1993. [Google Scholar]
  25. Rabin, L.A.; Barr, W.B.; Burton, L.A. Assessment practices of clinical neuropsychologists in the United States and Canada: A survey of INS, NAN, and APA Division 40 members. Arch. Clin. Neuropsychol. 2005, 20, 33–65. [Google Scholar] [CrossRef] [Green Version]
  26. Kopp, B.; Steinke, A.; Bertram, M.; Skripuletz, T.; Lange, F. Multiple levels of control processes for Wisconsin Card Sorts: An observational study. Brain Sci. 2019, 9, 141. [Google Scholar] [CrossRef] [Green Version]
  27. Barceló, F. The Madrid card sorting test (MCST): A task switching paradigm to study executive attention with event-related potentials. Brain Res. Protoc. 2003, 11, 27–37. [Google Scholar] [CrossRef]
  28. Lange, F.; Kröger, B.; Steinke, A.; Seer, C.; Dengler, R.; Kopp, B. Decomposing card-sorting performance: Effects of working memory load and age-related changes. Neuropsychology 2016, 30, 579–590. [Google Scholar] [CrossRef]
  29. Lange, F.; Vogts, M.-B.; Seer, C.; Fürkötter, S.; Abdulla, S.; Dengler, R.; Kopp, B.; Petri, S. Impaired set-shifting in amyotrophic lateral sclerosis: An event-related potential study of executive function. Neuropsychology 2016, 30, 120–134. [Google Scholar] [CrossRef]
  30. Lange, F.; Dewitte, S. Cognitive flexibility and pro-environmental behaviour: A multimethod approach. Eur. J. Pers. 2019, 56, 46–54. [Google Scholar] [CrossRef]
  31. Milner, B. Effects of different brain lesions on card sorting. Arch. Neurol. 1963, 9, 90–100. [Google Scholar] [CrossRef]
  32. Demakis, G.J. A meta-analytic review of the sensitivity of the Wisconsin Card Sorting Test to frontal and lateralized frontal brain damage. Neuropsychology 2003, 17, 255–264. [Google Scholar] [CrossRef]
  33. Alvarez, J.A.; Emory, E. Executive function and the frontal lobes: A meta-analytic review. Neuropsychol. Rev. 2006, 16, 17–42. [Google Scholar] [CrossRef]
  34. MacPherson, S.E.; Sala, S.D.; Cox, S.R.; Girardi, A.; Iveson, M.H. Handbook of Frontal Lobe Assessment; Oxford University Press: New York, NY, USA, 2015. [Google Scholar]
  35. Luria, A.R. Higher Cortical Functions in Man; Tavistock Publications: London, UK, 1966. [Google Scholar]
  36. Stuss, D.T. Functions of the frontal lobes: Relation to executive functions. J. Int. Neuropsychol. Soc. 2011, 17, 759–765. [Google Scholar] [CrossRef] [PubMed]
  37. Nyhus, E.; Barceló, F. The Wisconsin Card Sorting Test and the cognitive assessment of prefrontal executive functions: A critical update. Brain Cogn. 2009, 71, 437–451. [Google Scholar] [CrossRef] [PubMed]
  38. Eslinger, P.J.; Grattan, L.M. Frontal lobe and frontal-striatal substrates for different forms of human cognitive flexibility. Neuropsychologia 1993, 31, 17–28. [Google Scholar] [CrossRef]
  39. Lange, F.; Brückner, C.; Knebel, A.; Seer, C.; Kopp, B. Executive dysfunction in Parkinson’s disease: A meta-analysis on the Wisconsin Card Sorting Test literature. Neurosci. Biobehav. Rev. 2018, 93, 38–56. [Google Scholar] [CrossRef]
  40. Guarino, A.; Favieri, F.; Boncompagni, I.; Agostini, F.; Cantone, M.; Casagrande, M. Executive functions in Alzheimer disease: A systematic review. Front. Aging Neurosci. 2019, 10, 437. [Google Scholar] [CrossRef]
  41. Lange, F.; Seer, C.; Müller-Vahl, K.; Kopp, B. Cognitive flexibility and its electrophysiological correlates in Gilles de la Tourette syndrome. Dev. Cogn. Neurosci. 2017, 27, 78–90. [Google Scholar] [CrossRef]
  42. Lange, F.; Seer, C.; Salchow, C.; Dengler, R.; Dressler, D.; Kopp, B. Meta-analytical and electrophysiological evidence for executive dysfunction in primary dystonia. Cortex 2016, 82, 133–146. [Google Scholar] [CrossRef]
  43. Romine, C. Wisconsin Card Sorting Test with children: A meta-analytic study of sensitivity and specificity. Arch. Clin. Neuropsychol. 2004, 19, 1027–1041. [Google Scholar] [CrossRef]
  44. Roberts, M.E.; Tchanturia, K.; Stahl, D.; Southgate, L.; Treasure, J. A systematic review and meta-analysis of set-shifting ability in eating disorders. Psychol. Med. 2007, 37, 1075–1084. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Snyder, H.R. Major depressive disorder is associated with broad impairments on neuropsychological measures of executive function: A meta-analysis and review. Psychol. Bull. 2013, 139, 81–132. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Shin, N.Y.; Lee, T.Y.; Kim, E.; Kwon, J.S. Cognitive functioning in obsessive-compulsive disorder: A meta-analysis. Psychol. Med. 2014, 44, 1121–1130. [Google Scholar] [CrossRef] [PubMed]
  47. Roca, M.; Parr, A.; Thompson, R.; Woolgar, A.; Torralva, T.; Antoun, N.; Manes, F.; Duncan, J. Executive function and fluid intelligence after frontal lobe lesions. Brain 2010, 133, 234–247. [Google Scholar] [CrossRef] [PubMed]
  48. Bishara, A.J.; Kruschke, J.K.; Stout, J.C.; Bechara, A.; McCabe, D.P.; Busemeyer, J.R. Sequential learning models for the Wisconsin card sort task: Assessing processes in substance dependent individuals. J. Math. Psychol. 2010, 54, 5–13. [Google Scholar] [CrossRef] [Green Version]
  49. Strauss, E.; Sherman, E.M.S.; Spreen, O. A Compendium of Neuropsychological Tests: Administration, Norms, and Commentary; Oxford University Press: New York, NY, USA, 2006; ISBN 9780195159578. [Google Scholar]
  50. Hawkes, C.H.; Del Tredici, K.; Braak, H. A timeline for Parkinson’s disease. Parkinsonism Relat. Disord. 2010, 16, 79–84. [Google Scholar] [CrossRef]
  51. Braak, H.; Del Tredici, K. Nervous system pathology in sporadic Parkinson disease. Neurology 2008, 70, 1916–1925. [Google Scholar] [CrossRef]
  52. Wijesekera, L.C.; Leigh, P.N. Amyotrophic lateral sclerosis. Orphanet J. Rare Dis. 2009, 4, 3. [Google Scholar] [CrossRef] [Green Version]
  53. Barceló, F.; Knight, R.T. Both random and perseverative errors underlie WCST deficits in prefrontal patients. Neuropsychologia 2002, 40, 349–356. [Google Scholar] [CrossRef]
  54. Barceló, F. Electrophysiological evidence of two different types of error in the Wisconsin Card Sorting Test. Neuroreport 1999, 10, 1299–1303. [Google Scholar] [CrossRef] [Green Version]
  55. Schretlen, D.J. Modified Wisconsin Card Sorting Test (M-WCST): Professional Manual; Psychological Assessment Resources Inc.: Lutz, FL, USA, 2010. [Google Scholar]
  56. Steinke, A.; Lange, F.; Kopp, B. Parallel Model-Based and Model-Free Reinforcement Learning for Card Sorting Performance. Sci. Rep. 2020, 10, 15464. [Google Scholar] [CrossRef] [PubMed]
  57. Greve, K.W.; Stickle, T.R.; Love, J.; Bianchi, K.; Stanford, M. Latent structure of the Wisconsin Card Sorting Test: A confirmatory factor analytic study. Arch. Clin. Neuropsychol. 2005, 20, 355–364. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Greve, K.W.; Bianchini, K.J.; Mathias, C.W.; Houston, R.J.; Crouch, J.A. Detecting malingered performance with the Wisconsin Card Sorting Test: A preliminary investigation in traumatic brain injury. Clin. Neuropsychol. 2002, 16, 179–191. [Google Scholar] [CrossRef] [PubMed]
  59. Jewsbury, P.A.; Bowden, S.C. Construct validity has a critical role in evidence-based neuropsychological assessment. In National Academy of Neuropsychology: Series on Evidence-Based Practices. Neuropsychological Assessment in the Age of Evidence-Based Practice: Diagnostic and Treatment Evaluations; Bowden, S.C., Ed.; Oxford University Press: Oxford, NY, USA, 2017; pp. 33–63. [Google Scholar]
  60. Bowden, S.C.; Fowler, K.S.; Bell, R.C.; Whelan, G.; Clifford, C.C.; Ritter, A.J.; Long, C.M. The reliability and internal validity of the Wisconsin Card Sorting Test. Neuropsychol. Rehabil. 1998, 8, 243–254. [Google Scholar] [CrossRef]
  61. Sun, R. (Ed.) The Cambridge Handbook of Computational Psychology; Cambridge University Press: Cambridge, UK, 2001; ISBN 9780511816772. [Google Scholar]
  62. Forstmann, B.U.; Wagenmakers, E.-J. An Introduction to Model-Based Cognitive Neuroscience; Springer: New York, NY, USA, 2015; ISBN 978-1-4939-2235-2. [Google Scholar]
  63. Busemeyer, J.R.; Wang, Z.; Townsend, J.T.; Eidels, A. The Oxford Handbook of Computational and Mathematical Psychology; Oxford University Press: New York, NY, USA, 2015. [Google Scholar]
  64. Hazy, T.E.; Frank, M.J.; O’Reilly, R.C. Towards an executive without a homunculus: Computational models of the prefrontal cortex/basal ganglia system. Philos. Trans. R. Soc. B Biol. Sci. 2007, 362, 1601–1613. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Botvinick, M.M.; Cohen, J.D. The computational and neural basis of cognitive control: Charted territory and new frontiers. Cogn. Sci. 2014, 38, 1249–1285. [Google Scholar] [CrossRef] [PubMed]
  66. Oberauer, K.; Lewandowsky, S. Addressing the theory crisis in psychology. Psychon. Bull. Rev. 2019, 26, 1596–1618. [Google Scholar] [CrossRef] [PubMed]
  67. Steinke, A.; Lange, F.; Seer, C.; Kopp, B. Toward a computational cognitive neuropsychology of Wisconsin card sorts: A showcase study in Parkinson’s disease. Comput. Brain Behav. 2018, 1, 137–150. [Google Scholar] [CrossRef]
  68. Palminteri, S.; Wyart, V.; Koechlin, E. The importance of falsification in computational cognitive modeling. Trends Cogn. Sci. 2017, 21, 425–433. [Google Scholar] [CrossRef]
  69. Busemeyer, J.R.; Diederich, A. Estimation and Testing of Computational Psychological Models. In Neuroeconomics; Glimcher, P., Fehr, E., Eds.; Academic Press: San Diego, CA, USA, 2014; pp. 49–61. [Google Scholar]
  70. D’Alessandro, M.; Lombardi, L. A dynamic framework for modelling set-shifting performances. Behav. Sci. 2019, 9, 79. [Google Scholar] [CrossRef] [Green Version]
  71. Levine, D.S.; Prueitt, P.S. Modeling some effects of frontal lobe damage—Novelty and perseveration. Neural Netw. 1989, 2, 103–116. [Google Scholar] [CrossRef]
  72. Granato, G.; Baldassarre, G. Goal-directed top-down control of perceptual representations: A computational model of the Wisconsin Card Sorting Test. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience; Cognitive Computational Neuroscience, Brentwood, TN, USA, 15 September 2019. [Google Scholar]
  73. D’Alessandro, M.; Radev, S.T.; Voss, A.; Lombardi, L. A Bayesian brain model of adaptive behavior: An application to the Wisconsin Card Sorting Task. arXiv 2020, arXiv:2003.07394. [Google Scholar]
  74. Caso, A.; Cooper, R.P. A neurally plausible schema-theoretic approach to modelling cognitive dysfunction and neurophysiological markers in Parkinson’s disease. Neuropsychologia 2020, 140, 107359. [Google Scholar] [CrossRef] [PubMed]
  75. Amos, A. A computational model of information processing in the frontal cortex and basal ganglia. J. Cogn. Neurosci. 2000, 12, 505–519. [Google Scholar] [CrossRef]
  76. Berdia, S.; Metz, J.T. An artificial neural network simulating performance of normal subjects and schizophrenics on the Wisconsin card sorting test. Artif. Intell. Med. 1998, 13, 123–138. [Google Scholar] [CrossRef]
  77. Dehaene, S.; Changeux, J.P. The Wisconsin Card Sorting Test: Theoretical analysis and modeling in a neuronal network. Cereb. Cortex 1991, 1, 62–79. [Google Scholar] [CrossRef]
  78. Farreny, A.; del Rey-Mejías, Á.; Escartin, G.; Usall, J.; Tous, N.; Haro, J.M.; Ochoa, S. Study of positive and negative feedback sensitivity in psychosis using the Wisconsin Card Sorting Test. Compr. Psychiatry 2016, 68, 119–128. [Google Scholar] [CrossRef]
  79. Kaplan, G.B.; Şengör, N.S.; Gürvit, H.; Genç, İ.; Güzeliş, C. A composite neural network model for perseveration and distractibility in the Wisconsin card sorting test. Neural Netw. 2006, 19, 375–387. [Google Scholar] [CrossRef]
  80. Kimberg, D.Y.; Farah, M.J. A unified account of cognitive impairments following frontal lobe damage: The role of working memory in complex, organized behavior. J. Exp. Psychol. Gen. 1993, 122, 411–428. [Google Scholar] [CrossRef]
  81. Gallant, S.I. Neural Network Learning and Expert Systems; MIT Press: Boston, MA, USA, 1993. [Google Scholar]
  82. Farrell, S.; Lewandowsky, S. Computational Modeling of Cognition and Behavior; Cambridge University Press: Cambridge, UK, 2018. [Google Scholar]
  83. Guest, O.; Caso, A.; Cooper, R.P. On Simulating Neural Damage in Connectionist Networks. Comput. Brain Behav. 2020, 3, 289–321. [Google Scholar] [CrossRef]
  84. Palminteri, S.; Lebreton, M.; Worbe, Y.; Hartmann, A.; Lehéricy, S.; Vidailhet, M.; Grabli, D.; Pessiglione, M. Dopamine-dependent reinforcement of motor skill learning: Evidence from Gilles de la Tourette syndrome. Brain 2011, 134, 2287–2301. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  85. Stout, J.C.; Busemeyer, J.R.; Lin, A.; Grant, S.J.; Bonson, K.R. Cognitive modeling analysis of decision-making processes in cocaine abusers. Psychon. Bull. Rev. 2004, 11, 742–747. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  86. Busemeyer, J.R.; Stout, J.C. A contribution of cognitive decision models to clinical assessment: Decomposing performance on the Bechara gambling task. Psychol. Assess. 2002, 14, 253–262. [Google Scholar] [CrossRef] [PubMed]
  87. Botvinick, M.M.; Plaut, D.C. Doing without schema hierarchies: A recurrent connectionist approach to normal and impaired routine sequential action. Psychol. Rev. 2004, 111, 395–429. [Google Scholar] [CrossRef] [Green Version]
  88. Cooper, R.; Shallice, T. Contention scheduling and the control of routine activities. Cogn. Neuropsychol. 2000, 17, 297–338. [Google Scholar] [CrossRef]
  89. Frank, M.J.; Seeberger, L.C.; O’Reilly, R.C. By carrot or by stick: Cognitive reinforcement learning in Parkinsonism. Science 2004, 306, 1940–1943. [Google Scholar] [CrossRef] [Green Version]
  90. Cella, M.; Bishara, A.J.; Medin, E.; Swan, S.; Reeder, C.; Wykes, T. Identifying cognitive remediation change through computational modelling—Effects on reinforcement learning in schizophrenia. Schizophr. Bull. 2014, 40, 1422–1432. [Google Scholar] [CrossRef]
  91. Gläscher, J.; Adolphs, R.; Tranel, D. Model-based lesion mapping of cognitive control using the Wisconsin Card Sorting Test. Nat. Commun. 2019, 10, 20. [Google Scholar] [CrossRef] [Green Version]
  92. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
  93. Niv, Y. Reinforcement learning in the brain. J. Math. Psychol. 2009, 53, 139–154. [Google Scholar] [CrossRef] [Green Version]
  94. Silvetti, M.; Verguts, T. Reinforcement learning, high-level cognition, and the human brain. In Neuroimaging—Cognitive and Clinical Neuroscience; Bright, P., Ed.; InTech: Rijeka, Croatia, 2012; pp. 283–296. [Google Scholar]
  95. Gerraty, R.T.; Davidow, J.Y.; Foerde, K.; Galvan, A.; Bassett, D.S.; Shohamy, D. Dynamic flexibility in striatal-cortical circuits supports reinforcement learning. J. Neurosci. 2018, 38, 2442–2453. [Google Scholar] [CrossRef] [Green Version]
  96. Fontanesi, L.; Gluth, S.; Spektor, M.S.; Rieskamp, J. A reinforcement learning diffusion decision model for value-based decisions. Psychon. Bull. Rev. 2019, 26, 1099–1121. [Google Scholar] [CrossRef]
  97. Fontanesi, L.; Palminteri, S.; Lebreton, M. Decomposing the effects of context valence and feedback information on speed and accuracy during reinforcement learning: A meta-analytical approach using diffusion decision modeling. Cogn. Affect. Behav. Neurosci. 2019, 19, 490–502. [Google Scholar] [CrossRef] [PubMed]
  98. Caligiore, D.; Arbib, M.A.; Miall, R.C.; Baldassarre, G. The super-learning hypothesis: Integrating learning processes across cortex, cerebellum and basal ganglia. Neurosci. Biobehav. Rev. 2019, 100, 19–34. [Google Scholar] [CrossRef] [PubMed]
  99. Erev, I.; Roth, A.E. Predicting how people play games: Reinforcement learning in experimental games with unique, mixed strategy equilibria. Am. Econ. Rev. 1998, 88, 848–881. [Google Scholar]
  100. Steingroever, H.; Wetzels, R.; Wagenmakers, E.-J. Validating the PVL-Delta model for the Iowa gambling task. Front. Psychol. 2013, 4, 898. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  101. Luce, R.D. Individual Choice Behaviour; John Wiley & Sons Inc.: New York, NY, USA, 1959. [Google Scholar]
  102. Daw, N.D.; O’Doherty, J.P.; Dayan, P.; Seymour, B.; Dolan, R.J. Cortical substrates for exploratory decisions in humans. Nature 2006, 441, 876–879. [Google Scholar] [CrossRef] [PubMed]
  103. Thrun, S.B. The role of exploration in learning control. In Handbook for Intelligent Control: Neural, Fuzzy and Adaptive Approaches; White, D., Sofge, D., Eds.; Van Nostrand Reinhold: Florence, KY, USA, 1992; pp. 527–559. [Google Scholar]
  104. Schultz, W. Reward prediction error. Curr. Biol. 2017, 27, 369–371. [Google Scholar] [CrossRef] [Green Version]
  105. Schultz, W.; Dayan, P.; Montague, P.R. A neural substrate of prediction and reward. Science 1997, 275, 1593–1599. [Google Scholar] [CrossRef] [Green Version]
  106. Palminteri, S.; Lebreton, M.; Worbe, Y.; Grabli, D.; Hartmann, A.; Pessiglione, M. Pharmacological modulation of subliminal learning in Parkinson’s and Tourette’s syndromes. Proc. Natl. Acad. Sci. USA 2009, 106, 19179–19184. [Google Scholar] [CrossRef] [Green Version]
  107. Vehtari, A.; Gelman, A.; Gabry, J. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Stat. Comput. 2017, 27, 1413–1432. [Google Scholar] [CrossRef] [Green Version]
  108. Gronau, Q.F.; Wagenmakers, E.-J. Limitations of Bayesian leave-one-out cross-validation for model selection. Comput. Brain Behav. 2019, 2, 1–11. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  109. Steinke, A.; Lange, F.; Seer, C.; Hendel, M.K.; Kopp, B. Computational modeling for neuropsychological assessment of bradyphrenia in Parkinson’s disease. J. Clin. Med. 2020, 9, 1158. [Google Scholar] [CrossRef] [PubMed]
  110. Lange, F.; Seer, C.; Loens, S.; Wegner, F.; Schrader, C.; Dressler, D.; Dengler, R.; Kopp, B. Neural mechanisms underlying cognitive inflexibility in Parkinson’s disease. Neuropsychologia 2016, 93, 142–150. [Google Scholar] [CrossRef] [PubMed]
  111. Rogers, D.; Lees, A.J.; Smith, E.; Trimble, M.; Stern, G.M. Bradyphrenia in Parkinson’s disease and psychomotor retardation in depressive illness: An experimental study. Brain 1987, 110, 761–776. [Google Scholar] [CrossRef]
  112. Vlagsma, T.T.; Koerts, J.; Tucha, O.; Dijkstra, H.T.; Duits, A.A.; van Laar, T.; Spikman, J.M. Mental slowness in patients with Parkinson’s disease: Associations with cognitive functions? J. Clin. Exp. Neuropsychol. 2016, 38, 844–852. [Google Scholar] [CrossRef] [Green Version]
  113. Revonsuo, A.; Portin, R.; Koivikko, L.; Rinne, J.O.; Rinne, U.K. Slowing of information processing in Parkinson′s disease. Brain Cogn. 1993, 21, 87–110. [Google Scholar] [CrossRef]
  114. Kehagia, A.A.; Barker, R.A.; Robbins, T.W. Neuropsychological and clinical heterogeneity of cognitive impairment and dementia in patients with Parkinson’s disease. Lancet Neurol. 2010, 9, 1200–1213. [Google Scholar] [CrossRef]
  115. Steinke, A.; Lange, F.; Seer, C.; Petri, S.; Kopp, B. A computational study of executive dysfunction in amyotrophic lateral sclerosis. J. Clin. Med. 2020, 9, 2605. [Google Scholar] [CrossRef]
  116. Knowlton, B.J.; Mangels, J.A.; Squire, L.R. A neostriatal habit learning system in humans. Science 1996, 273, 1399–1402. [Google Scholar] [CrossRef] [Green Version]
  117. Yin, H.H.; Knowlton, B.J. The role of the basal ganglia in habit formation. Nat. Rev. Neurosci. 2006, 7, 464–476. [Google Scholar] [CrossRef]
  118. Shohamy, D.; Myers, C.E.; Kalanithi, J.; Gluck, M.A. Basal ganglia and dopamine contributions to probabilistic category learning. Neurosci. Biobehav. Rev. 2008, 32, 219–236. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  119. Cools, R. Dopaminergic modulation of cognitive function-implications for L-DOPA treatment in Parkinson’s disease. Neurosci. Biobehav. Rev. 2006, 30, 1–23. [Google Scholar] [CrossRef] [PubMed]
  120. Gotham, A.M.; Brown, R.G.; Marsden, C.D. ‘Frontal’ cognitive function in patients with Parkinson’s disease “on” and “off” Levodopa. Brain 1988, 111, 299–321. [Google Scholar] [CrossRef] [PubMed]
  121. Thurm, F.; Schuck, N.W.; Fauser, M.; Doeller, C.F.; Stankevich, Y.; Evens, R.; Riedel, O.; Storch, A.; Lueken, U.; Li, S.-C. Dopamine modulation of spatial navigation memory in Parkinson’s disease. Neurobiol. Aging 2016, 38, 93–103. [Google Scholar] [CrossRef]
  122. Vaillancourt, D.E.; Schonfeld, D.; Kwak, Y.; Bohnen, N.I.; Seidler, R. Dopamine overdose hypothesis: Evidence and clinical implications. Mov. Disord. 2013, 28, 1920–1929. [Google Scholar] [CrossRef] [Green Version]
  123. Cools, R.; D’Esposito, M. Inverted-U–shaped dopamine actions on human working memory and cognitive control. Biol. Psychiatry 2011, 69, 113–125. [Google Scholar] [CrossRef] [Green Version]
  124. Li, S.-C.; Lindenberger, U.; Bäckman, L. Dopaminergic modulation of cognition across the life span. Neurosci. Biobehav. Rev. 2010, 34, 625–630. [Google Scholar] [CrossRef]
  125. Floresco, S.B.; Magyar, O. Mesocortical dopamine modulation of executive functions: Beyond working memory. Psychopharmacology 2006, 188, 567–585. [Google Scholar] [CrossRef]
  126. Müller, J.; Dreisbach, G.; Goschke, T.; Hensch, T.; Lesch, K.-P.; Brocke, B. Dopamine and cognitive control: The prospect of monetary gains influences the balance between flexibility and stability in a set-shifting paradigm. Eur. J. Neurosci. 2007, 26, 3661–3668. [Google Scholar] [CrossRef]
  127. Goschke, T.; Bolte, A. Emotional modulation of control dilemmas: The role of positive affect, reward, and dopamine in cognitive stability and flexibility. Neuropsychologia 2014, 62, 403–423. [Google Scholar] [CrossRef]
  128. Shohamy, D.; Adcock, R.A. Dopamine and adaptive memory. Trends Cogn. Sci. 2010, 14, 464–472. [Google Scholar] [CrossRef] [PubMed]
  129. Aarts, E.; Nusselein, A.A.M.; Smittenaar, P.; Helmich, R.C.; Bloem, B.R.; Cools, R. Greater striatal responses to medication in Parkinson’s disease are associated with better task-switching but worse reward performance. Neuropsychologia 2014, 62, 390–397. [Google Scholar] [CrossRef] [PubMed]
  130. Beste, C.; Willemssen, R.; Saft, C.; Falkenstein, M. Response inhibition subprocesses and dopaminergic pathways: Basal ganglia disease effects. Neuropsychologia 2010, 48, 366–373. [Google Scholar] [CrossRef] [PubMed]
  131. Phukan, J.; Pender, N.P.; Hardiman, O. Cognitive impairment in amyotrophic lateral sclerosis. Lancet Neurol. 2007, 6, 994–1003. [Google Scholar] [CrossRef]
  132. Lezak, M.D.; Howieson, D.B.; Bigler, E.D.; Tranel, D. Neuropsychological Assessment, 5th ed.; Oxford University Press: New York, NY, USA, 2012. [Google Scholar]
  133. Naville, F. Études sur les complications et les séquelles mentales de l’encéphalite épidémique. Encéphale 1922, 17, 369–375. [Google Scholar]
  134. Rogers, D. Bradyphrenia in parkinsonism: A historical review. Psychol. Med. 1986, 16, 257–265. [Google Scholar] [CrossRef]
  135. Postuma, R.B.; Berg, D.; Stern, M.; Poewe, W.; Olanow, C.W.; Oertel, W.; Obeso, J.; Marek, K.; Litvan, I.; Lang, A.E.; et al. MDS clinical diagnostic criteria for Parkinson’s disease. Mov. Disord. 2015, 30, 1591–1601. [Google Scholar] [CrossRef]
  136. Bologna, M.; Paparella, G.; Fasano, A.; Hallett, M.; Berardelli, A. Evolving concepts on bradykinesia. Brain 2020, 143, 727–750. [Google Scholar] [CrossRef]
  137. Rustamov, N.; Rodriguez-Raecke, R.; Timm, L.; Agrawal, D.; Dressler, D.; Schrader, C.; Tacik, P.; Wegner, F.; Dengler, R.; Wittfoth, M.; et al. Attention shifting in Parkinson’s disease: An analysis of behavioral and cortical responses. Neuropsychology 2014, 28, 929–944. [Google Scholar] [CrossRef]
  138. Rustamov, N.; Rodriguez-Raecke, R.; Timm, L.; Agrawal, D.; Dressler, D.; Schrader, C.; Tacik, P.; Wegner, F.; Dengler, R.; Wittfoth, M.; et al. Absence of congruency sequence effects reveals neurocognitive inflexibility in Parkinson’s disease. Neuropsychologia 2013, 51, 2976–2987. [Google Scholar] [CrossRef]
  139. Browning, M.; Carter, C.; Chatham, C.H.; Den Ouden, H.; Gillan, C.; Baker, J.T.; Paulus, M.P. Realizing the clinical potential of computational psychiatry: Report from the Banbury Center Meeting, February 2019. Biol. Psychiatry 2020, 88, e5–e10. [Google Scholar] [CrossRef] [PubMed]
  140. Sun, R. Theoretical status of computational cognitive modeling. Cogn. Syst. Res. 2009, 10, 124–140. [Google Scholar] [CrossRef] [Green Version]
  141. Wilson, R.C.; Collins, A.G. Ten simple rules for the computational modeling of behavioral data. Elife 2019, 8, e49547. [Google Scholar] [CrossRef] [PubMed]
  142. Gläscher, J.; Daw, N.; Dayan, P.; O’Doherty, J.P. States versus rewards: Dissociable neural prediction error signals underlying model-based and model-free reinforcement learning. Neuron 2010, 66, 585–595. [Google Scholar] [CrossRef] [Green Version]
  143. Lebreton, M.; Bavard, S.; Daunizeau, J.; Palminteri, S. Assessing inter-individual differences with task-related functional neuroimaging. Nat. Hum. Behav. 2019, 3, 897–905. [Google Scholar] [CrossRef]
  144. Stephan, K.E.; Iglesias, S.; Heinzle, J.; Diaconescu, A.O. Translational perspectives for computational neuroimaging. Neuron 2015, 87, 716–732. [Google Scholar] [CrossRef] [Green Version]
  145. Tsermentseli, S.; Leigh, P.N.; Goldstein, L.H. The anatomy of cognitive impairment in amyotrophic lateral sclerosis: More than frontal lobe dysfunction. Cortex 2012, 48, 166–182. [Google Scholar] [CrossRef]
  146. Abrahams, S.; Goldstein, L.H.; Kew, J.J.M.; Brooks, D.J.; Lloyd, C.M.; Frith, C.D.; Leigh, P.N. Frontal lobe dysfunction in amyotrophic lateral sclerosis. Brain 1996, 119, 2105–2120. [Google Scholar] [CrossRef] [Green Version]
  147. Narayanan, N.S.; Rodnitzky, R.L.; Uc, E.Y. Prefrontal dopamine signaling and cognitive symptoms of Parkinson’s disease. Rev. Neurosci. 2013, 24, 267–278. [Google Scholar] [CrossRef]
  148. McCoy, B.; Jahfari, S.; Engels, G.; Knapen, T.; Theeuwes, J. Dopaminergic medication reduces striatal sensitivity to negative outcomes in Parkinson’s disease. Brain 2019, 142, 3605–3620. [Google Scholar] [CrossRef] [Green Version]
  149. Weidinger, L.; Gradassi, A.; Molleman, L.; van den Bos, W. Test-retest reliability of canonical reinforcement learning models. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience, Brentwood, TN, USA, 14 September 2019. [Google Scholar]
  150. Kopp, B.; Lange, F.; Steinke, A. The Reliability of the Wisconsin Card Sorting Test in Clinical Practice. Assessment 2019. [Google Scholar] [CrossRef] [PubMed]
  151. Steinke, A.; Kopp, B. RELEX: An Excel-Based Software Tool for Sampling Split-Half Reliability Coefficients. Methods Psychol. 2020, 2, 100023. [Google Scholar] [CrossRef]
  152. Klein, O.; Hardwicke, T.E.; Aust, F.; Breuer, J.; Danielsson, H.; Mohr, A.H.; IJzerman, H.; Nilsonne, G.; Vanpaemel, W.; Frank, M.C. A practical guide for transparency in psychological science. Collabra Psychol. 2018, 4, 1–15. [Google Scholar] [CrossRef] [Green Version]
  153. Gelman, A.; Geurts, H.M. The statistical crisis in science: How is it relevant to clinical neuropsychology? Clin. Neuropsychol. 2017, 31, 1000–1014. [Google Scholar] [CrossRef]
  154. Lange, F. Are difficult-to-study populations too difficult to study in a reliable way? Eur. Psychol. 2020, 25, 41–50. [Google Scholar] [CrossRef]
Figure 1. Three consecutive trials on a computerized variant of the Wisconsin Card Sorting Test (cWCST) [11,27,28,29,30]. The stimulus card on Trial t depicts one green cross. Applicable categories are the number category (far left key card, response 1), the color category (inner left key card, response 2), and the shape category (inner right key card, response 3). The execution of response 3 indicates the application of the shape category. A succeeding positive feedback cue (i.e., “REPEAT”) indicates that response 3 was correct and that the shape category should be repeated on the upcoming trials. Yet, on Trial t + 1, the execution of response 3 indicates the application of the number category. Set-loss errors refer to such erroneous switches of the applied category following positive feedback. A subsequent negative feedback cue (i.e., “SWITCH”) indicates that response 3 was incorrect. Hence, the applied category should be switched. However, on Trial t + 2, the execution of response 2 indicates an erroneous repetition of the number category. Perseveration errors refer to such erroneous category repetitions after negative feedback.
Figure 2. Increased perseveration error (PE) propensities may result from separable impairments of covert cognitive processes. (A) An exemplary sequence on a computerized WCST variant. On Trial t − 1, the execution of response 3 indicates the application of the number category. A subsequently presented negative feedback cue (i.e., “SWITCH”) indicates that the application of the number category was incorrect. Thus, a switch away from the number category is requested on Trial t. (B) A successful switch away from the number category on Trial t may rely on a number of covert cognitive processes. For example, participants must retain the assumption about the prevailing category on Trial t − 1 (i.e., “number is correct”) until they receive a feedback cue (i.e., “number was correct”). Next, participants must update the retained assumption about the prevailing category according to the received feedback (i.e., “number is incorrect”). At the level of overt behavior on Trial t, the execution of response 1 indicates the application of the color category, i.e., a successful switch away from the number category. (C) One covert cognitive symptom may consist of impaired updating following received feedback. In this example, impaired updating results in the assumption that the number category is still correct, although the received negative feedback indicates that the application of the number category was incorrect. At the level of overt behavior, the execution of response 2 indicates an erroneous repetition of the number category, i.e., a PE. (D) Another covert cognitive symptom may consist of impaired retention. In this example, impaired retention results in the assumption that the color category was correct. The received negative feedback (i.e., “Color is incorrect”) renders a subsequent application of the number or shape category likely. At the level of overt behavior, the execution of response 2 indicates the application of the number category, i.e., a PE. Please note that we do not wish to imply that these covert cognitive processes are conscious (i.e., the depicted clouds might just as well reflect implicit processes).
Figure 3. A modulation of PE propensities by response demands. (A) In a recent behavioral study [26], we stratified PEs by response demands. With a demanded response repetition, committing a PE (i.e., re-applying the number category by executing response 2 on Trial t) implies an alternation of the previously executed response (i.e., response 3 on Trial t − 1). With a demanded response alternation, committing a PE (i.e., re-applying the number category by executing response 2 on Trial t) implies a repetition of the previously executed response (i.e., response 2 on Trial t − 1). (B) We found a modulation of PE propensities by response demands [26]: participants showed reduced PE propensities on trials with a demanded response alternation compared to trials with a demanded response repetition. Please note that we did not find evidence for a modulation of set-loss error (SLE) propensities by response demands.
Figure 4. A schematic representation of the attentional-updating (AU) model [48]. Top: an exemplary sequence on a computerized WCST. Bottom: central to the AU model are attentional prioritizations (APs) of categories, a(t). APs from the previous trial, a(t − 1), are updated in response to received feedback. Individual sensitivity parameters p quantify the overall strength of updating. There are separate sensitivity parameters for trials following positive and negative feedback (not depicted). An attentional focus mechanism further modulates the strength of updating of APs (i.e., a high AP of a category results in strong updating of that AP and vice versa). An individual attentional focus parameter f quantifies the extent to which the magnitude of an AP modulates updating of that AP. An individual response variability parameter d quantifies the extent to which response probabilities correspond to the updated APs, a(t).
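The caption's verbal description of the AU model can be sketched in code. The following is a minimal illustration, not the exact equations of the AU model [48]: the particular update rule, the form of the focus exponent, and the normalization steps are simplifying assumptions.

```python
import numpy as np

def au_update(a_prev, feedback_signal, p, f):
    """One attentional-updating step (simplified sketch of the AU model).

    a_prev          -- attentional prioritizations a(t-1) over the categories
    feedback_signal -- 1.0 for categories consistent with the feedback, else 0.0
    p               -- sensitivity: overall strength of updating
    f               -- attentional focus: high-AP categories update more strongly
    """
    focus = a_prev ** f                  # focus mechanism: large APs dominate updating
    focus = focus / focus.sum()
    a_new = (1 - p) * a_prev + p * focus * feedback_signal
    return a_new / a_new.sum()           # renormalize APs to sum to 1

def au_response_probs(a, d):
    """Response probabilities from APs; d is the response-variability parameter."""
    q = a ** d
    return q / q.sum()
```

For example, after positive feedback for a response consistent with the number and shape categories, `au_update(np.ones(3) / 3, np.array([1.0, 0.0, 1.0]), p=0.3, f=1.0)` shifts prioritization away from the unsupported color category.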
Figure 5. A schematic representation of the cognitive reinforcement-learning (RL) model. Top: an exemplary sequence on a computerized WCST. Bottom: central to the cognitive RL model are feedback predictions for the application of categories, Qc(t). A prediction error updates feedback predictions from the previous trial, Qc(t − 1), following received feedback. Individual cognitive learning rates αc quantify the strength of the updating of feedback predictions by prediction errors. There are separate individual cognitive learning rates for received positive and negative feedback (not depicted). A soft-max rule gives response probabilities as a function of the updated feedback predictions. The individual inverse temperature parameter τ quantifies how well response probabilities accord with the updated feedback predictions. A retention mechanism gives the extent to which feedback predictions transfer to the next trial. The individual cognitive retention rate γc quantifies the strength of retention of feedback predictions.
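A single trial of the cognitive RL model can be sketched as a delta-rule update with retention followed by a soft-max choice. This is an illustrative simplification, not the authors' exact implementation; the ordering of retention and update is an assumption, and, following the description in Figure 7 that larger τ values flatten response probabilities, τ enters as a divisor here.

```python
import numpy as np

def cognitive_rl_step(q_prev, chosen, feedback, alpha_pos, alpha_neg, gamma_c):
    """One trial of the cognitive RL model (simplified sketch).

    q_prev   -- feedback predictions Qc(t-1) for the categories
    chosen   -- index of the category applied on this trial
    feedback -- +1 for positive, -1 for negative feedback
    """
    q = gamma_c * q_prev                         # retention: gamma_c carries Qc forward
    alpha = alpha_pos if feedback > 0 else alpha_neg
    q[chosen] += alpha * (feedback - q[chosen])  # delta-rule update by prediction error
    return q

def softmax_probs(q, tau):
    """Soft-max rule; larger tau yields flatter response probabilities."""
    z = np.exp((q - q.max()) / tau)
    return z / z.sum()
```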
Figure 6. A schematic representation of the parallel RL model. Top: an exemplary sequence on a computerized WCST. Bottom: the parallel RL model incorporates independent cognitive and sensorimotor RL (upper and lower grey bars, respectively). Central to cognitive and sensorimotor RL are feedback predictions for the application of categories, Qc(t), and the execution of responses, Qs(t), respectively. Cognitive and sensorimotor prediction errors update feedback predictions for categories, Qc(t − 1), and responses, Qs(t − 1), from the previous trial in response to received feedback. Individual cognitive learning rates αc and sensorimotor learning rates αs quantify the strengths of updating by prediction errors. There are separate learning rates for received positive and negative feedback at the cognitive and sensorimotor levels (not depicted). On any trial, the parallel RL model adds feedback predictions for responses to those for categories. A weighting parameter w quantifies the relative strength of cognitive over sensorimotor RL. Response probabilities result from the integrated feedback predictions. An inverse temperature parameter τ quantifies how well response probabilities accord with the integrated feedback predictions. Cognitive and sensorimotor retention mechanisms ensure that feedback predictions for categories and responses transfer from one trial to the next. The cognitive retention rate γc and the sensorimotor retention rate γs quantify the strengths of retention.
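The model mechanics described in the caption above can be sketched in a few lines of code. The following is a minimal illustrative implementation, not the authors' published code: all parameter values are arbitrary placeholders (except the near-zero sensorimotor learning rate for positive feedback, which the text reports), and the exact order of retention and updating within a trial is an assumption.

```python
import numpy as np

def softmax(q, tau):
    # Map integrated feedback predictions onto response probabilities.
    # Larger tau flattens the distribution toward uniform.
    z = np.asarray(q, dtype=float) / tau
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

class ParallelRL:
    """Illustrative sketch of the parallel RL model (hypothetical parameters)."""

    def __init__(self, alpha_c=(0.6, 0.4), alpha_s=(0.0, 0.3),
                 gamma_c=0.9, gamma_s=0.5, w=0.8, tau=1.0, n=3):
        # alpha_* = (positive-feedback rate, negative-feedback rate);
        # alpha_s[0] = 0.0 mirrors the reported near-zero sensorimotor
        # learning rate for positive feedback.
        self.alpha_c, self.alpha_s = alpha_c, alpha_s
        self.gamma_c, self.gamma_s = gamma_c, gamma_s  # retention rates
        self.w, self.tau = w, tau
        self.Qc = np.zeros(n)  # feedback predictions for categories
        self.Qs = np.zeros(n)  # feedback predictions for responses

    def response_probabilities(self, mapping):
        # mapping[r] = category whose application corresponds to response r
        # on the current trial; w weights cognitive over sensorimotor RL.
        q = self.w * self.Qc[mapping] + (1.0 - self.w) * self.Qs
        return softmax(q, self.tau)

    def update(self, category, response, reward):
        # Prediction errors at the cognitive and sensorimotor levels.
        delta_c = reward - self.Qc[category]
        delta_s = reward - self.Qs[response]
        a_c = self.alpha_c[0] if reward > 0 else self.alpha_c[1]
        a_s = self.alpha_s[0] if reward > 0 else self.alpha_s[1]
        # Retention decays all predictions toward zero between trials;
        # learning then updates the applied category and executed response.
        self.Qc *= self.gamma_c
        self.Qs *= self.gamma_s
        self.Qc[category] += a_c * delta_c
        self.Qs[response] += a_s * delta_s
```

After positive feedback for the shape category, the response matching that category becomes the most probable one on the next trial, reproducing the qualitative pattern in panel (B) of Figure 7.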
Figure 7. Exemplary effects of between-group variations in latent variables of the parallel RL model. (A) A showcase trial sequence on the cWCST, as presented in Figure 1. (B) Feedback predictions for the application of the shape category across seven trials (panel A shows the first three of them). Positive feedback followed the application of the shape category on Trial 1, which increased feedback predictions for the shape category. With high values of cognitive retention rates (γc), as seen in patients with Parkinson’s disease (PD) and patients with amyotrophic lateral sclerosis (ALS), feedback predictions for categories remain at high levels when transferring to the next trial. (C) Feedback predictions for the execution of response 3. The execution of response 3 produced positive feedback on Trial 1. Since sensorimotor learning rates for positive feedback were virtually zero in all studies, there was no updating of feedback predictions for the execution of response 3 following received positive feedback. On Trial 2, the execution of response 3 produced negative feedback, which decreased feedback predictions for response 3. With low sensorimotor retention rates (γs), as seen in patients with PD, feedback predictions for the execution of responses retain lower levels of activation from trial to trial. (D) Feedback predictions for the application of the shape category. With low values of cognitive learning rates for positive feedback (αc+), as seen in patients with PD “on” dopamine (DA) medication, feedback predictions for categories receive reduced levels of activation following received positive feedback. (E) Response probabilities on Trial 3. The probability of executing response 3 is the highest (application of the shape category), followed by the probability of executing response 1 (application of the color category) and the probability of executing response 2 (application of the number category). Increased inverse temperature parameters (τ), as seen in patients with ALS, attenuate differences between response probabilities. Hence, increased inverse temperature parameters bias response probabilities toward a uniform probability of 0.33. We computed the presented effects of latent variables by varying exclusively the latent variable of interest at arbitrary values while holding all other latent variables constant. Figure 7 is adapted from [109,115].
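The flattening effect of the inverse temperature parameter described in panel (E) can be verified numerically. This sketch assumes response probabilities proportional to exp(Q/τ), so that larger τ attenuates differences between probabilities, consistent with the caption; the feedback-prediction values are arbitrary illustrative numbers.

```python
import numpy as np

def softmax(q, tau):
    # Response probabilities from integrated feedback predictions;
    # larger tau pushes the distribution toward uniform (1/3 on the WCST).
    z = np.asarray(q, dtype=float) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

# Arbitrary integrated feedback predictions for the three responses.
q = np.array([0.6, 0.3, 0.1])

p_low = softmax(q, tau=0.5)    # low tau: probabilities track predictions
p_high = softmax(q, tau=5.0)   # high tau (as in ALS): differences shrink
p_very_high = softmax(q, tau=50.0)  # near-haphazard responding

spread_low = p_low.max() - p_low.min()
spread_high = p_high.max() - p_high.min()
```

With increasing τ the spread between the three response probabilities collapses and each probability approaches the uniform value of 0.33, mirroring the haphazard responding attributed to patients with ALS.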
Figure 8. Profiles of covert cognitive symptoms in patients with PD and patients with ALS [109,115].
Steinke, A.; Kopp, B. Toward a Computational Neuropsychology of Cognitive Flexibility. Brain Sci. 2020, 10, 1000. https://doi.org/10.3390/brainsci10121000
