
Information Theory in Neuroscience

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (30 April 2018) | Viewed by 67376

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Prof. Stefano Panzeri
Guest Editor
Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, 38068 Rovereto (TN), Italy
Interests: neural coding; information theory; population coding; temporal coding

Dr. Eugenio Piasini
Guest Editor
Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA 19104, USA
Interests: information processing in complex systems; neural coding; structure-function relationships in neural networks; normative models of neural function; perceptual decision-making; neuroinformatics

Special Issue Information

Dear Colleagues,

As the ultimate information-processing device, the brain naturally lends itself to being studied with information theory. The application of information theory to neuroscience has spurred the development of principled theories of brain function, led to advances in the study of consciousness, and driven the development of analytical techniques to crack the neural code, that is, to unveil the language that neurons use to encode and process information. In particular, experimental techniques that allow precise recording and manipulation of neural activity on a large scale now make it possible, for the first time, to formulate precisely and test quantitatively hypotheses about how the brain encodes the information used for specific functions and transmits it across areas.

This Special Issue emphasizes contributions on novel applications of information theory in neuroscience and on the development of new information-theoretic results inspired by problems in neuroscience. Research at the interface of neuroscience, information theory, and other disciplines is also welcome.

Prof. Stefano Panzeri
Dr. Eugenio Piasini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • population coding
  • redundancy
  • synergy
  • optimal codes
  • directed information
  • integrated information theory
  • neural decoders


Published Papers (13 papers)


Editorial


3 pages, 161 KiB  
Editorial
Information Theory in Neuroscience
by Eugenio Piasini and Stefano Panzeri
Entropy 2019, 21(1), 62; https://doi.org/10.3390/e21010062 - 14 Jan 2019
Cited by 15 | Viewed by 4093
Abstract
This is the Editorial article summarizing the scope and contents of the Special Issue, Information Theory in Neuroscience. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)

Research


33 pages, 960 KiB  
Article
Assessing the Relevance of Specific Response Features in the Neural Code
by Hugo Gabriel Eyherabide and Inés Samengo
Entropy 2018, 20(11), 879; https://doi.org/10.3390/e20110879 - 15 Nov 2018
Cited by 1 | Viewed by 3155
Abstract
The study of the neural code aims at deciphering how the nervous system maps external stimuli into neural activity—the encoding phase—and subsequently transforms such activity into adequate responses to the original stimuli—the decoding phase. Several information-theoretical methods have been proposed to assess the relevance of individual response features, such as the spike count of a given neuron or the amount of correlation in the activity of two cells. These methods work under the premise that the relevance of a feature is reflected in the information loss that is induced by eliminating the feature from the response. The alternative methods differ in the procedure by which the tested feature is removed, and the algorithm with which the lost information is calculated. Here we compare these methods, and show that more often than not, each method assigns a different relevance to the tested feature. We demonstrate that the differences are both quantitative and qualitative, and connect them with the method employed to remove the tested feature, as well as the procedure to calculate the lost information. By studying a collection of carefully designed examples, and working on analytic derivations, we identify the conditions under which the relevance of features diagnosed by different methods can be ranked, or sometimes even equated. The condition for equality involves both the amount and the type of information contributed by the tested feature. We conclude that the quest for relevant response features is more delicate than previously thought, and may yield multiple answers depending on methodological subtleties. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
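The central quantity here, the information lost when a response feature is eliminated, can be sketched in a few lines. The toy joint distribution, the choice of spike count and latency as the two features, and the use of plain marginalisation as the removal procedure are illustrative assumptions of mine, not the authors' code; the paper's point is precisely that different removal procedures and loss calculations can give different answers.

```python
import numpy as np


def mutual_information(p_sr):
    """I(S;R) in bits from a joint probability table p_sr[s, r]."""
    p_s = p_sr.sum(axis=1, keepdims=True)
    p_r = p_sr.sum(axis=0, keepdims=True)
    nz = p_sr > 0
    return float(np.sum(p_sr[nz] * np.log2(p_sr[nz] / (p_s @ p_r)[nz])))


# Toy code: 2 stimuli; each response is described by two features,
# a spike count in {0, 1} and a latency bin in {early, late}.
# Columns are ordered (count, latency) = (0,e), (0,l), (1,e), (1,l).
p_sr = np.array([[0.20, 0.15, 0.10, 0.05],
                 [0.05, 0.10, 0.15, 0.20]])

i_full = mutual_information(p_sr)

# "Eliminate" the latency feature by marginalising over it: responses that
# differ only in latency are merged, leaving the spike count alone.
p_s_count = p_sr.reshape(2, 2, 2).sum(axis=2)
i_count = mutual_information(p_s_count)

print(f"I(S; count, latency) = {i_full:.3f} bits")
print(f"I(S; count)          = {i_count:.3f} bits")
print(f"loss from removing latency = {i_full - i_count:.3f} bits")
```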

27 pages, 1900 KiB  
Article
Information-Theoretical Analysis of the Neural Code in the Rodent Temporal Lobe
by Melisa B. Maidana Capitán, Emilio Kropff and Inés Samengo
Entropy 2018, 20(8), 571; https://doi.org/10.3390/e20080571 - 03 Aug 2018
Cited by 2 | Viewed by 4072
Abstract
In the study of the neural code, information-theoretical methods have the advantage of making no assumptions about the probabilistic mapping between stimuli and responses. In the sensory domain, several methods have been developed to quantify the amount of information encoded in neural activity, without necessarily identifying the specific stimulus or response features that instantiate the code. As a proof of concept, here we extend those methods to the encoding of kinematic information in a navigating rodent. We estimate the information encoded in two well-characterized codes, mediated by the firing rate of neurons, and by the phase-of-firing with respect to the theta-filtered local field potential. In addition, we also consider a novel code, mediated by the delta-filtered local field potential. We find that all three codes transmit significant amounts of kinematic information, and informative neurons tend to employ a combination of codes. Cells tend to encode conjunctions of kinematic features, so that most of the informative neurons fall outside the traditional cell types employed to classify spatially-selective units. We conclude that a broad perspective on the candidate stimulus and response features expands the repertoire of strategies with which kinematic information is encoded. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
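A hedged sketch of the kind of comparison described above: estimating, from binned samples, how much information a firing-rate code and a phase-of-firing code each carry about a kinematic variable. The tuning curves, noise model, and bin counts below are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

speed = rng.uniform(0.0, 30.0, n)                    # kinematic variable (cm/s)
rate = rng.poisson(1.0 + 0.2 * speed)                # spike count per window: rate code
phase = (0.2 * speed + rng.vonmises(0.0, 4.0, n)) % (2 * np.pi)  # theta phase: phase code


def plugin_mi(x, y, bins_x, bins_y):
    """Plug-in mutual information (bits) from binned samples."""
    joint, _, _ = np.histogram2d(x, y, bins=[bins_x, bins_y])
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))


print("I(speed; rate)  =", round(plugin_mi(speed, rate, 6, 8), 3), "bits")
print("I(speed; phase) =", round(plugin_mi(speed, phase, 6, 8), 3), "bits")
# A real analysis would add a bias correction (e.g., shuffle subtraction);
# that step is omitted in this sketch.
```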

19 pages, 1097 KiB  
Article
Category Structure and Categorical Perception Jointly Explained by Similarity-Based Information Theory
by Romain Brasselet and Angelo Arleo
Entropy 2018, 20(7), 527; https://doi.org/10.3390/e20070527 - 14 Jul 2018
Cited by 4 | Viewed by 4261
Abstract
Categorization is a fundamental information processing phenomenon in the brain. It is critical for animals to compress an abundance of stimulations into groups to react quickly and efficiently. In addition to labels, categories possess an internal structure: the goodness measures how well any element belongs to a category. Interestingly, this categorization leads to an altered perception referred to as categorical perception: for a given physical distance, items within a category are perceived closer than items in two different categories. A subtler effect is the perceptual magnet: discriminability is reduced close to the prototypes of a category and increased near its boundaries. Here, starting from predefined abstract categories, we naturally derive the internal structure of categories and the phenomenon of categorical perception, using an information theoretical framework that involves both probabilities and pairwise similarities between items. Essentially, we suggest that pairwise similarities between items are to be tuned to render some predefined categories as well as possible. However, constraints on these pairwise similarities only produce an approximate matching, which explains concurrently the notion of goodness and the warping of perception. Overall, we demonstrate that similarity-based information theory may offer a global and unified principled understanding of categorization and categorical perception simultaneously. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)

16 pages, 447 KiB  
Article
A Measure of Information Available for Inference
by Takuya Isomura
Entropy 2018, 20(7), 512; https://doi.org/10.3390/e20070512 - 07 Jul 2018
Cited by 7 | Viewed by 4718
Abstract
The mutual information between the state of a neural network and the state of the external world represents the amount of information stored in the neural network that is associated with the external world. In contrast, the surprise of the sensory input indicates the unpredictability of the current input. In other words, this is a measure of inference ability, and an upper bound of the surprise is known as the variational free energy. According to the free-energy principle (FEP), a neural network continuously minimizes the free energy to perceive the external world. For the survival of animals, inference ability is considered to be more important than simply memorized information. In this study, the free energy is shown to represent the gap between the amount of information stored in the neural network and that available for inference. This concept involves both the FEP and the infomax principle, and will be a useful measure for quantifying the amount of information available for inference. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
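The relation the abstract relies on, namely that the variational free energy upper-bounds the surprise -log p(x) and coincides with it when the approximate posterior is exact, can be checked numerically on a one-dimensional Gaussian model. The model and numbers below are my own illustration, not taken from the paper.

```python
import numpy as np

# Generative model: z ~ N(0, 1), x | z ~ N(z, 1).
# Then p(x) = N(x; 0, 2) and the exact posterior is p(z | x) = N(x/2, 1/2).
x = 1.3


def log_normal(v, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mean) ** 2 / var)


surprise = -log_normal(x, 0.0, 2.0)          # -log p(x)


def free_energy(q_mean, q_var, n_samples=200_000, seed=0):
    """Monte Carlo estimate of F = E_q[log q(z) - log p(x, z)]."""
    rng = np.random.default_rng(seed)
    z = rng.normal(q_mean, np.sqrt(q_var), n_samples)
    log_q = log_normal(z, q_mean, q_var)
    log_joint = log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0)
    return float(np.mean(log_q - log_joint))


print("surprise -log p(x)         :", round(float(surprise), 4))
print("F with the exact posterior :", round(free_energy(x / 2, 0.5), 4))  # matches the surprise
print("F with a mismatched q(z)   :", round(free_energy(0.0, 1.0), 4))    # larger: gap = KL(q || p(z|x))
```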

24 pages, 4605 KiB  
Article
Novel Brain Complexity Measures Based on Information Theory
by Ester Bonmati, Anton Bardera, Miquel Feixas and Imma Boada
Entropy 2018, 20(7), 491; https://doi.org/10.3390/e20070491 - 25 Jun 2018
Cited by 7 | Viewed by 3710
Abstract
Brain networks are widely used models to understand the topology and organization of the brain. These networks can be represented by a graph, where nodes correspond to brain regions and edges to structural or functional connections. Several measures have been proposed to describe the topological features of these networks, but unfortunately, it is still unclear which measures give the best representation of the brain. In this paper, we propose a new set of measures based on information theory. Our approach interprets the brain network as a stochastic process where impulses are modeled as a random walk on the graph nodes. This new interpretation provides a solid theoretical framework from which several global and local measures are derived. Global measures provide quantitative values for the whole brain network characterization and include entropy, mutual information, and erasure mutual information. The latter is a new measure based on mutual information and erasure entropy. On the other hand, local measures are based on different decompositions of the global measures and provide different properties of the nodes. Local measures include entropic surprise, mutual surprise, mutual predictability, and erasure surprise. The proposed approach is evaluated using synthetic model networks and structural and functional human networks at different scales. Results demonstrate that the global measures can characterize new properties of the topology of a brain network and, in addition, for a given number of nodes, an optimal number of edges is found for small-world networks. Local measures show different properties of the nodes, such as the uncertainty associated with the node or the uniqueness of the path to which the node belongs. Finally, the consistency of the results across healthy subjects demonstrates the robustness of the proposed measures. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
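The random-walk construction can be illustrated compactly: build the transition matrix of a walk on a small weighted graph, take its stationary distribution, and compute entropy and one-step mutual information from them. The connectivity matrix and the exact measure definitions below are simplifying assumptions of mine, not the paper's formulas (in particular, the erasure-based measures are omitted).

```python
import numpy as np

# Hypothetical undirected, weighted connectivity between five regions.
A = np.array([[0, 2, 1, 0, 0],
              [2, 0, 2, 1, 0],
              [1, 2, 0, 2, 1],
              [0, 1, 2, 0, 2],
              [0, 0, 1, 2, 0]], dtype=float)

P = A / A.sum(axis=1, keepdims=True)      # random-walk transition matrix
pi = A.sum(axis=1) / A.sum()              # stationary distribution (degree-weighted)


def H(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))


entropy = H(pi)                                            # H(X)
cond_entropy = float(np.sum(pi * [H(row) for row in P]))   # H(X' | X), the entropy rate
mutual_info = entropy - cond_entropy                       # I(X ; X') for one step of the walk

print(f"stationary entropy   H(X)    = {entropy:.3f} bits")
print(f"entropy rate         H(X'|X) = {cond_entropy:.3f} bits")
print(f"one-step mutual info I(X;X') = {mutual_info:.3f} bits")
```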

16 pages, 4389 KiB  
Article
A Moment-Based Maximum Entropy Model for Fitting Higher-Order Interactions in Neural Data
by N. Alex Cayco-Gajic, Joel Zylberberg and Eric Shea-Brown
Entropy 2018, 20(7), 489; https://doi.org/10.3390/e20070489 - 23 Jun 2018
Cited by 4 | Viewed by 5003
Abstract
Correlations in neural activity have been demonstrated to have profound consequences for sensory encoding. To understand how neural populations represent stimulus information, it is therefore necessary to model how pairwise and higher-order spiking correlations between neurons contribute to the collective structure of population-wide spiking patterns. Maximum entropy models are an increasingly popular method for capturing collective neural activity by including successively higher-order interaction terms. However, incorporating higher-order interactions in these models is difficult in practice due to two factors. First, the number of parameters exponentially increases as higher orders are added. Second, because triplet (and higher) spiking events occur infrequently, estimates of higher-order statistics may be contaminated by sampling noise. To address this, we extend previous work on the Reliable Interaction class of models to develop a normalized variant that adaptively identifies the specific pairwise and higher-order moments that can be estimated from a given dataset for a specified confidence level. The resulting “Reliable Moment” model is able to capture cortical-like distributions of population spiking patterns. Finally, we show that, compared with the Reliable Interaction model, the Reliable Moment model infers fewer strong spurious higher-order interactions and is better able to predict the frequencies of previously unobserved spiking patterns. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
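A schematic sketch of the moment-selection step: from binary population spike words, keep only those pairwise and triplet moments whose joint-spike events are observed often enough to be trusted. The simple count threshold used here is a crude stand-in for the paper's confidence-level criterion, and the data are synthetic.

```python
import itertools
from math import comb

import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_samples = 6, 5000
spikes = (rng.random((n_samples, n_neurons)) < 0.08).astype(int)   # synthetic spike words

min_count = 20            # crude reliability criterion: require this many joint-spike events
reliable = []
for order in (2, 3):
    for idx in itertools.combinations(range(n_neurons), order):
        joint_count = int(np.all(spikes[:, list(idx)] == 1, axis=1).sum())
        if joint_count >= min_count:
            reliable.append((idx, joint_count / n_samples))

print(f"kept {len(reliable)} of {comb(n_neurons, 2) + comb(n_neurons, 3)} candidate moments")
for idx, moment in reliable[:5]:
    print(idx, round(moment, 4))
# Only the retained moments would be constrained when fitting the maximum
# entropy model; poorly sampled moments are left unconstrained.
```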

22 pages, 708 KiB  
Article
Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory
by Jun Kitazono, Ryota Kanai and Masafumi Oizumi
Entropy 2018, 20(3), 173; https://doi.org/10.3390/e20030173 - 06 Mar 2018
Cited by 24 | Viewed by 8490
Abstract
The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information ( Φ ) in the brain is related to the level of consciousness. IIT proposes that, to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost for exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has been previously shown that, if a measure of Φ satisfies a mathematical property, submodularity, the MIP can be found in a polynomial order by an optimization algorithm. However, although the first version of Φ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of Φ by evaluating the accuracy of the algorithm in simulated data and real neural data. We find that the algorithm identifies the MIP in a nearly perfect manner even for the non-submodular measures. Our results show that the algorithm allows us to measure Φ in large systems within a practical amount of time. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
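For small systems the MIP can still be found by brute force, which makes the combinatorial problem concrete. In the sketch below the "integrated information" of a bipartition is just the Gaussian mutual information between the two parts, a stand-in rather than any of the Φ measures evaluated in the paper, and the covariance matrix is random.

```python
import itertools

import numpy as np

rng = np.random.default_rng(2)
n = 6
W = rng.normal(0.0, 1.0, (n, n))
cov = W @ W.T + n * np.eye(n)          # a random positive-definite covariance


def gaussian_mi(cov, part):
    """I(X_part ; X_rest) in nats for a zero-mean Gaussian with covariance `cov`."""
    rest = [i for i in range(cov.shape[0]) if i not in part]
    logdet = lambda m: np.linalg.slogdet(m)[1]
    return 0.5 * (logdet(cov[np.ix_(part, part)])
                  + logdet(cov[np.ix_(rest, rest)])
                  - logdet(cov))


best = None
for k in range(1, n // 2 + 1):                       # all bipartitions of the n nodes
    for part in itertools.combinations(range(n), k):
        phi = gaussian_mi(cov, list(part))
        if best is None or phi < best[0]:
            best = (phi, part)

print(f"MIP found by exhaustive search: {best[1]} vs the rest, phi = {best[0]:.4f} nats")
# The number of bipartitions grows exponentially with n; the paper asks whether
# polynomial-time, submodularity-based search still finds the MIP when the
# measure of integrated information is not submodular.
```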

41 pages, 1887 KiB  
Article
The Identity of Information: How Deterministic Dependencies Constrain Information Synergy and Redundancy
by Daniel Chicharro, Giuseppe Pica and Stefano Panzeri
Entropy 2018, 20(3), 169; https://doi.org/10.3390/e20030169 - 05 Mar 2018
Cited by 5 | Viewed by 4756
Abstract
Understanding how different information sources together transmit information is crucial in many domains. For example, understanding the neural code requires characterizing how different neurons contribute unique, redundant, or synergistic pieces of information about sensory or behavioral variables. Williams and Beer (2010) proposed a partial information decomposition (PID) that separates the mutual information that a set of sources contains about a set of targets into nonnegative terms interpretable as these pieces. Quantifying redundancy requires assigning an identity to different information pieces, to assess when information is common across sources. Harder et al. (2013) proposed an identity axiom that imposes necessary conditions to quantify qualitatively common information. However, Bertschinger et al. (2012) showed that, in a counterexample with deterministic target-source dependencies, the identity axiom is incompatible with ensuring PID nonnegativity. Here, we study systematically the consequences of information identity criteria that assign identity based on associations between target and source variables resulting from deterministic dependencies. We show how these criteria are related to the identity axiom and to previously proposed redundancy measures, and we characterize how they lead to negative PID terms. This constitutes a further step to more explicitly address the role of information identity in the quantification of redundancy. The implications for studying neural coding are discussed. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
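A compact way to see what a redundancy measure computes is the original Williams and Beer (2010) I_min, evaluated here on two toy systems: an XOR target (purely synergistic) and a copied target (purely redundant). This is only the reference decomposition mentioned in the abstract, not the identity criteria analysed in the paper; the toy distributions are mine.

```python
from itertools import product

import numpy as np


def specific_information(p, s, source_axis):
    """I(S = s ; A_source) in bits, for p indexed as p[s, a1, a2]."""
    p_sa = p.sum(axis=3 - source_axis)          # marginalise out the other source
    p_s, p_a = p_sa.sum(axis=1), p_sa.sum(axis=0)
    total = 0.0
    for a in range(p_sa.shape[1]):
        if p_sa[s, a] > 0:
            total += (p_sa[s, a] / p_s[s]) * np.log2((p_sa[s, a] / p_a[a]) / p_s[s])
    return total


def i_min(p):
    """Williams & Beer redundancy between the two sources about S."""
    p_s = p.sum(axis=(1, 2))
    return sum(p_s[s] * min(specific_information(p, s, 1),
                            specific_information(p, s, 2))
               for s in range(p.shape[0]) if p_s[s] > 0)


# S = A1 XOR A2 (purely synergistic) vs. S copied into both sources (purely redundant).
p_xor = np.zeros((2, 2, 2))
for a1, a2 in product(range(2), repeat=2):
    p_xor[a1 ^ a2, a1, a2] = 0.25
p_copy = np.zeros((2, 2, 2))
p_copy[0, 0, 0] = p_copy[1, 1, 1] = 0.5

print("I_min redundancy, XOR target :", round(i_min(p_xor), 3), "bits")   # 0.0
print("I_min redundancy, copy target:", round(i_min(p_copy), 3), "bits")  # 1.0
```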

12 pages, 1079 KiB  
Article
Mutual Information and Information Gating in Synfire Chains
by Zhuocheng Xiao, Binxu Wang, Andrew T. Sornborger and Louis Tao
Entropy 2018, 20(2), 102; https://doi.org/10.3390/e20020102 - 01 Feb 2018
Cited by 8 | Viewed by 5457
Abstract
Coherent neuronal activity is believed to underlie the transfer and processing of information in the brain. Coherent activity in the form of synchronous firing and oscillations has been measured in many brain regions and has been correlated with enhanced feature processing and other sensory and cognitive functions. In the theoretical context, synfire chains and the transfer of transient activity packets in feedforward networks have been appealed to in order to describe coherent spiking and information transfer. Recently, it has been demonstrated that the classical synfire chain architecture, with the addition of suitably timed gating currents, can support the graded transfer of mean firing rates in feedforward networks (called synfire-gated synfire chains—SGSCs). Here we study information propagation in SGSCs by examining mutual information as a function of layer number in a feedforward network. We explore the effects of gating and noise on information transfer in synfire chains and demonstrate that asymptotically, two main regions exist in parameter space where information may be propagated and its propagation is controlled by pulse-gating: a large region where binary codes may be propagated, and a smaller region near a cusp in parameter space that supports graded propagation across many layers. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
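Tracking mutual information layer by layer, the bookkeeping used in the paper, can be illustrated with a much simpler feedforward chain than an SGSC. Everything below (population size, noise level, transfer rule) is an invented toy model; only the per-layer MI computation mirrors the analysis described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_layers, n_neurons = 5000, 8, 40


def mi_bits(stim, activity, n_bins=10):
    """Plug-in mutual information (bits) between a binary stimulus and a scalar readout."""
    joint, _, _ = np.histogram2d(stim, activity, bins=[2, n_bins])
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))


stim = rng.integers(0, 2, n_trials)             # binary input on each trial
p_fire = 0.2 + 0.6 * stim                       # layer-0 firing probability
info = []
for layer in range(n_layers):
    spikes = rng.random((n_trials, n_neurons)) < p_fire[:, None]
    frac = spikes.mean(axis=1)                  # fraction of the layer that fired
    info.append(mi_bits(stim, frac))
    # the next layer sees a noisy, clipped version of this fraction
    p_fire = np.clip(frac + rng.normal(0.0, 0.05, n_trials), 0.05, 0.95)

for k, i in enumerate(info):
    print(f"layer {k}: I(stimulus; population activity) = {i:.3f} bits")
# In the paper, pulse gating determines whether such information survives
# propagation (binary vs. graded regimes); this sketch only demonstrates the
# per-layer mutual information bookkeeping.
```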

22 pages, 1254 KiB  
Article
Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains
by Rodrigo Cofré and Cesar Maldonado
Entropy 2018, 20(1), 34; https://doi.org/10.3390/e20010034 - 09 Jan 2018
Cited by 16 | Viewed by 5605
Abstract
The spiking activity of neuronal networks follows laws that are not time-reversal symmetric; the notion of pre-synaptic and post-synaptic neurons, stimulus correlations and noise correlations have a clear time order. Therefore, a biologically realistic statistical model for the spiking activity should be able to capture some degree of time irreversibility. We use the thermodynamic formalism to build a framework in the context of maximum entropy models to quantify the degree of time irreversibility, providing an explicit formula for the information entropy production of the inferred maximum entropy Markov chain. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling the spike train statistics. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
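The information entropy production of a stationary Markov chain has a closed form, EP = Σ_ij π_i P_ij log(π_i P_ij / (π_j P_ji)), which vanishes exactly when detailed balance holds. The sketch below evaluates it for two hand-made transition matrices; it is not the paper's maximum entropy fitting procedure.

```python
import numpy as np


def stationary(P):
    """Stationary distribution of a transition matrix P (rows sum to 1)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()


def entropy_production(P):
    """Sum_ij pi_i P_ij log(pi_i P_ij / (pi_j P_ji)); zero iff detailed balance holds."""
    pi = stationary(P)
    ep = 0.0
    for i in range(P.shape[0]):
        for j in range(P.shape[0]):
            if P[i, j] > 0 and P[j, i] > 0:
                flux = pi[i] * P[i, j]
                ep += flux * np.log(flux / (pi[j] * P[j, i]))
    return ep


# A symmetric chain satisfies detailed balance (reversible, zero entropy
# production); biasing the chain to cycle in one direction makes it irreversible.
P_rev = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])
P_irr = np.array([[0.10, 0.80, 0.10],
                  [0.10, 0.10, 0.80],
                  [0.80, 0.10, 0.10]])

print("entropy production, reversible chain  :", round(entropy_production(P_rev), 4))
print("entropy production, irreversible chain:", round(entropy_production(P_irr), 4))
```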

2150 KiB  
Article
Lifespan Development of the Human Brain Revealed by Large-Scale Network Eigen-Entropy
by Yiming Fan, Ling-Li Zeng, Hui Shen, Jian Qin, Fuquan Li and Dewen Hu
Entropy 2017, 19(9), 471; https://doi.org/10.3390/e19090471 - 04 Sep 2017
Cited by 9 | Viewed by 8410
Abstract
Imaging connectomics based on graph theory has become an effective and unique methodological framework for studying functional connectivity patterns of the developing and aging brain. Normal brain development is characterized by continuous and significant network evolution through infancy, childhood, and adolescence, following specific maturational patterns. Normal aging is related to the disruption of some resting-state brain networks, which is associated with certain cognitive decline. It is a major challenge to design an integral metric to track connectome evolution patterns across the lifespan, which would help us understand the principles of network organization in the human brain. In this study, we first defined a brain network eigen-entropy (NEE) based on the energy probability (EP) of each brain node. Next, we used the NEE to characterize the lifespan orderness trajectory of the whole-brain functional connectivity of 173 healthy individuals ranging in age from 7 to 85 years. The results revealed that during the lifespan, the whole-brain NEE exhibited a significant non-linear decrease and that the EP distribution shifted from concentration to wide dispersion, implying orderness enhancement of the functional connectome over age. Furthermore, brain regions with significant EP changes from the flourishing (7–20 years) to the youth period (23–38 years) were mainly located in the right prefrontal cortex and basal ganglia, and were involved in emotion regulation and executive function in coordination with the action of the sensory system, implying that self-awareness and voluntary control performance significantly changed during neurodevelopment. However, the changes from the youth period to middle age (40–59 years) were located in the mesial temporal lobe and caudate, which are associated with long-term memory, implying that the memory of the human brain begins to decline with age during this period. Overall, the findings suggested that the human connectome shifted from a relatively anatomically driven state to an orderly organized state with lower entropy. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)

14477 KiB  
Article
Life on the Edge: Latching Dynamics in a Potts Neural Network
by Chol Jun Kang, Michelangelo Naim, Vezha Boboeva and Alessandro Treves
Entropy 2017, 19(9), 468; https://doi.org/10.3390/e19090468 - 03 Sep 2017
Cited by 8 | Viewed by 4229
Abstract
We study latching dynamics in the adaptive Potts model network, through numerical simulations with randomly and also weakly correlated patterns, and we focus on comparing its slowly and fast adapting regimes. A measure, Q, is used to quantify the quality of latching in the phase space spanned by the number of Potts states S, the number of connections per Potts unit C and the number of stored memory patterns p. We find narrow regions, or bands in phase space, where distinct pattern retrieval and duration of latching combine to yield the highest values of Q. The bands are confined by the storage capacity curve, for large p, and by the onset of finite latching, for low p. Inside the band, in the slowly adapting regime, we observe complex structured dynamics, with transitions at high crossover between correlated memory patterns; while away from the band, latching transitions lose complexity in different ways: below, they are clear-cut but last so few steps as to span a transition matrix between states with few asymmetrical entries and limited entropy; while above, they tend to become random, with large entropy and bi-directional transition frequencies, but indistinguishable from noise. Extrapolating from the simulations, the band appears to scale almost quadratically in the p–S plane, and sublinearly in p–C. In the fast adapting regime, the band scales similarly, and it can be made even wider and more robust, but transitions between anti-correlated patterns dominate latching dynamics. This suggests that slow and fast adaptation have to be integrated in a scenario for viable latching in a cortical system. The results for the slowly adapting regime, obtained with randomly correlated patterns, remain valid also for the case with correlated patterns, with just a simple shift in phase space. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
