Article

Spatial Computing in Modular Spiking Neural Networks with a Robotic Embodiment

by Sergey A. Lobov 1,2,3, Alexey N. Mikhaylov 1, Ekaterina S. Berdnikova 1, Valeri A. Makarov 4,* and Victor B. Kazantsev 1,2,3
1 Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Avenue, 603022 Nizhny Novgorod, Russia
2 Center for Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, 14 A. Nevskogo Street, 236041 Kaliningrad, Russia
3 Laboratory of Neurobiomorphic Technologies, Moscow Institute of Physics and Technology, 1 A. Kerchenskaya Street, 117303 Moscow, Russia
4 Instituto de Matemática Interdisciplinar, Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, 28040 Madrid, Spain
* Author to whom correspondence should be addressed.
Submission received: 24 October 2022 / Revised: 12 December 2022 / Accepted: 26 December 2022 / Published: 3 January 2023

Abstract

One of the challenges in modern neuroscience is creating a brain-on-a-chip. Such a semiartificial device based on neural networks grown in vitro should interact with the environment when embodied in a robot. A crucial point in this endeavor is developing a neural network architecture capable of associative learning. This work proposes a mathematical model of a midscale modular spiking neural network (SNN) to study learning mechanisms within the brain-on-a-chip context. We show that besides spike-timing-dependent plasticity (STDP), synaptic and neuronal competition are critical factors for successful learning. Moreover, the shortest pathway rule can implement the synaptic competition responsible for processing conditional stimuli coming from the environment. This solution is ready for testing in neuronal cultures. The neuronal competition can be implemented by lateral inhibition acting on the SNN module responsible for unconditional responses. Empirical testing of this approach is challenging and requires the development of a technique for growing cultures with a given ratio of excitatory and inhibitory neurons. We test the modular SNN embedded in a mobile robot and show that it can establish the association between touch (unconditional) and ultrasonic (conditional) sensors. Then, the robot can avoid obstacles without hitting them, relying on the ultrasonic sensors only.
MSC: 37N25; 68T05; 82C32; 92B20; 92B25

1. Introduction

The concept of a living computer or a brain-on-a-chip has been developed since the end of the 20th century. It implies growing dissociated neuronal cultures on multi-electrode arrays and their embodiment in the form of neurorobots or neuroanimats [1,2,3,4,5,6,7,8,9]. In this context, robotic platforms provide neural networks with the ability to learn by interacting with the environment, as happens in brain neural networks. However, despite significant advances [10,11,12], associative learning in neural networks grown in vitro is still limited due to the lack of a universal approach similar to the learning algorithms used in artificial neural networks.
One possible reason for the failure of early attempts to achieve associative learning is the homogeneous structure of neural networks grown in vitro [6,13,14,15]. Emerging experimental approaches could mitigate this problem by building modular networks from separate subnets (neuronal ensembles or layers) connected by unidirectional links. This technology comprises polydimethylsiloxane (PDMS) chips with microfluidic channels for axons connecting subnets [16,17]. The unidirectional growth of axons in such systems is ensured by a delay in cell seeding between different chambers [18,19,20] or by a special shape of the microfluidic channels [21,22,23,24,25]. Typically, modular networks include only two subnets, but the possibility of forming several interacting modules has already been reported [26,27,28]. In addition, to ensure heterogeneity, it has recently been proposed to combine modules of neurons of different types within the same network, for example, cortical and hippocampal modules [29], which differ in their neurodynamic properties [30].
Thus, the learning algorithms are the only missing element for such a living computer based on modular neural networks grown in vitro. Earlier, we proposed an approach to explain learning problems in unstructured neural networks via the competition between different pathways conducting excitation to a neuron or a group of neurons [31,32]. Moreover, our model study proposed an associative learning approach based on “spatial computations”. The method uses the “shortest path rule” [33]:
Local learning mechanisms (e.g., STDP) potentiate the shortest neural pathways and depress alternative longer pathways at the global network scale.
STDP is an experimentally discovered form of Hebbian learning [34,35,36,37]. It potentiates a synaptic connection when a presynaptic spike precedes the postsynaptic one. In the opposite case, the connection is depressed. The shortest path rule can be explained as follows: a spike arriving via the shortest path excites the postsynaptic neuron and elicits the potentiating sequence; then, a spike via the longer route comes late and leads to the depression of the corresponding synapse [33].
Although approaches to recording activity and manipulating individual neurons are being developed (see, for example, [38,39]), they do not yet allow for experimental in vitro verification of the “shortest path rule”. Therefore, we aim to adapt our approach to modular architectures consisting of unidirectionally connected subnets. To simulate neuronal cultures, we use spiking neural networks (SNNs) since they can exhibit the entire neurocomputational spectrum of behaviors [40,41,42]. So far, we have tested this approach at the scale of individual neuronal circuits. Thus, the issue remained open: Can a medium-scale network containing hundreds or thousands of neurons be trained using spatial computations? An affirmative answer to this question would validate the search for possible algorithms and experimental methods for building a living computer.
Earlier, we formulated the basic principles of associative learning in SNNs [33,43,44]. They are (i) Hebbian learning (using STDP), (ii) synaptic competition or competition of SNN inputs, and (iii) neuronal competition or competition of SNN outputs. This work aims to consistently implement these principles in a medium-scale modular SNN consisting of several subnets connected by unidirectional links. Specifically, we first test the shortest path rule in the modular SNN. Then, associative learning based on this rule is studied in modular architectures of growing complexity. Finally, we propose a modular SNN capable of associative learning of stimuli and corroborate its capabilities when embodied in a robot.

2. Models and Methods

To simulate the dynamics of an SNN, we adopt the approach described elsewhere [31]. Briefly, the dynamics of a single neuron is given by [40]:
$$\frac{dv}{dt} = 0.04v^{2} + 5v + 140 - u + I(t), \qquad (1)$$
$$\frac{du}{dt} = a(bv - u), \qquad (2)$$
where $v$ is the membrane potential, $u$ is the recovery variable, and $I(t)$ is the external driving current. If $v \geq 30$, then $v \leftarrow c$ and $u \leftarrow u + d$, which corresponds to the generation of a spike. We set the parameters $a = 0.02$, $b = 0.2$, $c = -65$, and $d = 8$. Then, the neuron is silent in the absence of the external drive and can generate spikes under a constant stimulus, which is a typical behavior of cortical neurons [40,41]. The driving current is given by:
$$I(t) = \xi(t) + I_{syn}(t) + I_{stml}(t), \qquad (3)$$
where $\xi(t)$ is an uncorrelated zero-mean white Gaussian noise with variance $D$, $I_{syn}(t)$ is the synaptic current, and $I_{stml}(t)$ is the external stimulus. As a stimulus, we use a sequence of square-shaped electric pulses of 3 ms duration delivered at a 10 Hz rate, with an amplitude sufficient to excite the neuron.
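For concreteness, Equations (1)–(3) can be integrated with a simple explicit Euler scheme. The following is a minimal sketch in Python for a single stimulated neuron; the integration step, noise variance, and pulse amplitude are illustrative assumptions not specified in the text.

```python
import numpy as np

# Minimal sketch of Equations (1)-(3): one Izhikevich neuron driven by noise
# and a 10 Hz pulse train. The step dt, noise variance D, and pulse amplitude
# are illustrative assumptions; the paper does not specify them.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking cortical parameters
dt, D = 0.5, 1.0                     # ms; noise variance (assumed)
T = 1000.0                           # simulated time, ms

v, u = c, b * c
spikes = []
for step in range(int(T / dt)):
    t = step * dt
    # external stimulus: 3 ms square pulses at 10 Hz (amplitude assumed)
    I_stml = 10.0 if (t % 100.0) < 3.0 else 0.0
    I = np.random.normal(0.0, np.sqrt(D)) + I_stml   # no synaptic input here
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike: reset per the model
        spikes.append(t)
        v, u = c, u + d
print(f"{len(spikes)} spikes in {T:.0f} ms")
```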
The synaptic current is the weighted sum of all synaptic inputs to the neuron:
$$I_{syn}(t) = \sum_j g_j\, w_j(t)\, y_j(t), \qquad (4)$$
where the sum is taken over all presynaptic neurons, $w_j$ is the strength of the synaptic coupling directed from neuron $j$, $g_j$ is the scaling factor equal to 20 or −20 for excitatory and inhibitory neurons, respectively [31], and $y_j(t)$ describes the amount of neurotransmitters released by presynaptic neuron $j$ when a spike arrives at the presynaptic terminal.
To simulate the neurotransmitters, we use the Tsodyks–Markram model, which accounts for short-term depression and facilitation [45]. The model has the following parameters: the decay constant of postsynaptic currents $\tau_I = 10$ ms, the recovery time from synaptic depression $\tau_{rec} = 50$ ms, and the time constant for facilitation $\tau_{facil} = 1$ s.
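As an illustration, below is a minimal event-driven sketch of the Tsodyks–Markram synapse with the time constants listed above. The baseline utilization parameter U and the time step are assumptions, since the text does not report them.

```python
import numpy as np

# Sketch of the Tsodyks-Markram synapse [45] with the time constants from
# the text: tau_I = 10 ms, tau_rec = 50 ms, tau_facil = 1 s. The baseline
# utilization U and the time step dt are assumptions.
tau_I, tau_rec, tau_facil = 10.0, 50.0, 1000.0   # ms
U, dt = 0.5, 0.5                                 # assumed

x, u, y = 1.0, 0.0, 0.0   # resources, utilization, active transmitter
spike_times = np.arange(10.0, 500.0, 100.0)      # presynaptic spikes at 10 Hz
for step in range(int(600.0 / dt)):
    t = step * dt
    if np.any(np.isclose(spike_times, t)):
        u += U * (1.0 - u)        # facilitation jump at a presynaptic spike
        y += u * x                # release a fraction u of available resources
        x -= u * x
    # continuous recovery/decay between spikes
    x += dt * (1.0 - x) / tau_rec
    u += dt * (-u / tau_facil)
    y += dt * (-y / tau_I)
print(f"remaining resources x = {x:.2f}")
```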
The dynamics of the synaptic weight $w_{ij}$ of the coupling from excitatory presynaptic neuron $j$ to postsynaptic neuron $i$ is governed by STDP with two local variables [46,47]. Assuming that $\tau_{ij}$ is the time delay (the so-called axonal delay) of spike transmission between neurons $j$ and $i$, a presynaptic spike fired at time $t_j$ and arriving at neuron $i$ at $t_j + \tau_{ij}$ induces a weight decrease proportional to the value of the postsynaptic trace $s_i$. Similarly, a postsynaptic spike at $t_i$ induces a weight potentiation proportional to the value of the presynaptic trace $s_j$. The weighting functions obey the multiplicative update rule [46,47]. Thus, the weight dynamics is given by:
$$\frac{ds_i}{dt} = -\frac{s_i}{\tau_S} + \sum_{t_i} \delta(t - t_i), \qquad (5)$$
$$\frac{ds_j}{dt} = -\frac{s_j}{\tau_S} + \sum_{t_j} \delta(t - t_j - \tau_{ij}), \qquad (6)$$
$$\frac{dw_{ij}}{dt} = \lambda\left[(1 - w_{ij})\, s_j\, \delta(t - t_i) - \alpha\, w_{ij}\, s_i\, \delta(t - t_j - \tau_{ij})\right], \qquad (7)$$
where $\tau_S = 10$ ms is the time constant of the spiking traces, $\lambda = 0.001$ is the learning rate, and $\alpha = 5$ is the asymmetry parameter.
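A compact way to evaluate Equations (5)–(7) is event-driven: the traces decay exponentially between spikes and are sampled at spike times. The sketch below follows this scheme; the spike trains, axonal delay, and initial weight are illustrative.

```python
import numpy as np

# Sketch of the trace-based multiplicative STDP of Equations (5)-(7) in
# event-driven form. Spike times, the axonal delay tau_ij, and the initial
# weight are illustrative placeholders.
tau_S, lam, alpha = 10.0, 0.001, 5.0

def stdp(pre_spikes, post_spikes, tau_ij=2.0, w=0.5):
    """Update weight w for given pre-/postsynaptic spike trains (times in ms)."""
    events = [(t + tau_ij, 'pre') for t in pre_spikes] + \
             [(t, 'post') for t in post_spikes]
    s_pre = s_post = 0.0
    t_last = 0.0
    for t, kind in sorted(events):
        decay = np.exp(-(t - t_last) / tau_S)    # exponential trace decay
        s_pre, s_post, t_last = s_pre * decay, s_post * decay, t
        if kind == 'post':
            w += lam * (1.0 - w) * s_pre         # potentiation term of Eq. (7)
            s_post += 1.0
        else:
            w -= lam * alpha * w * s_post        # depression term of Eq. (7)
            s_pre += 1.0
    return w

# pre leads post by 5 ms -> net potentiation expected
print(stdp(pre_spikes=np.arange(0, 1000, 100.0),
           post_spikes=np.arange(5, 1000, 100.0)))
```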
The modular SNN contains several subnets (Figure 1), each including 500 neurons. This number is a compromise between the standard size of in vitro neuronal cultures and the speed of numerical simulation of the SNN, which should be high enough for experiments with a robot since, otherwise, the robot’s reaction to obstacles degrades significantly. The ratio of excitatory to inhibitory neurons was 4:1. In each subnet, neurons were randomly distributed on a 1.2 × 0.5 mm rectangular substrate (Figure 1A). The number of synapses per neuron was $N_{in} = 30 \pm 3$. Within a subnet, neurons were randomly coupled, with the probability of interneuron connections decreasing with distance according to a Gaussian distribution (Figure 1A, intranetwork connections):
$$f = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{d^{2}}{2\sigma^{2}}}, \qquad (8)$$
where $d$ is the distance between neurons and $\sigma$ is the standard deviation chosen to obtain an average intranetwork connection length of 50 μm. This architecture captures the essential features of in vitro neuronal cultures and allows the reproduction of their dynamic modes, e.g., network bursting.
Subnets were connected using unidirectional couplings simulating axons of projecting neurons grown through microfluidic channels in PDMS chips (Figure 1A, internetwork connections). The axon length was up to 400 μm. Figure 1B illustrates the global network architectures studied in this work (for further details, see Section 3).
It should be emphasized that the geometric characteristics of the networks largely determine their dynamics. The coordinates of the neurons define the axonal delay $\tau_{ij}$ [see Equations (6) and (7)], which is proportional to the distance between neurons. To evaluate the delays, we assumed a spike propagation velocity of 0.05 m/s [48].
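Putting the geometric ingredients together, the following sketch samples intranetwork connections with the Gaussian distance preference of Equation (8) and converts distances into axonal delays using the 0.05 m/s propagation velocity. The concrete value of σ and the random seed are assumptions, with σ tuned to give a mean link length of about 50 μm.

```python
import numpy as np

# Sketch of the network geometry: 500 neurons on a 1.2 x 0.5 mm substrate,
# ~30 incoming synapses per neuron chosen with the Gaussian distance
# preference of Eq. (8), and axonal delays from a 0.05 m/s velocity.
rng = np.random.default_rng(0)
N, N_in, sigma = 500, 30, 40.0        # sigma in um, assumed (~50 um mean link)

pos = rng.uniform([0, 0], [1200, 500], size=(N, 2))       # positions, um
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)

pre, post, delay = [], [], []
for i in range(N):
    p = np.exp(-dist[i] ** 2 / (2 * sigma ** 2))          # Gaussian preference
    p[i] = 0.0                                            # no self-connections
    sources = rng.choice(N, size=N_in, replace=False, p=p / p.sum())
    pre.extend(sources); post.extend([i] * N_in)
    # delay [ms] = distance [um] / (0.05 m/s = 50 um/ms)
    delay.extend(dist[i, sources] / 50.0)

print(f"mean link length: {np.mean(dist[post, pre]):.1f} um, "
      f"mean delay: {np.mean(delay):.2f} ms")
```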
We implemented the SNN model as the custom software NeuroNet (available at [49]) developed in the Qt C++ environment. On an Intel® Core™ i3 processor, the simulation can be performed in real time for an SNN with tens of neurons.
To monitor the SNN activity, we use the following procedure to detect network bursts of spikes in each subnet. In a 50 ms time window, the total number of spikes generated by the subnet is counted, and the time instant when the number of spikes exceeds 50 is taken as the burst onset. Establishing such a threshold value is similar to the procedures described for neural networks in vitro [50,51,52]. If a source (presynaptic) subnet generates a burst followed within a time window of Δ = 100 ms by a burst in the target (postsynaptic) subnet, we call such a burst synchronous. We then define the connection efficiency between the source and target subnets as:
$$P = \frac{F_{syn} - \alpha F_{src}}{(1 - \alpha)\, F_{src}}, \qquad (9)$$
where $F_{src}$ is the mean frequency of bursts generated by the source subnet, $F_{syn}$ is the mean frequency of synchronous bursts, and $\alpha$ is the by-chance factor. This factor is the probability of a temporal overlap of bursts in the absence of connections between the source and target subnets. Under the hypothesis of a Poisson process, in the first approximation $\alpha \approx \Delta F_{trg}$, where $F_{trg}$ is the mean frequency of bursts in the target subnet. Our direct estimate of $\alpha$ from numerical simulations agrees with this theoretical formula. The connection efficiency is P = 0 if all bursts generated by the target subnet occur by chance (i.e., there is no causal connection) and P = 1 if all of them are evoked by excitation from the source subnet (i.e., all bursts are causal).
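The burst detection and scoring procedure can be summarized in a short sketch. Spike times are assumed to be pooled per subnet, and the 1 ms scan step of the sliding window is an assumption.

```python
import numpy as np

# Sketch of the burst detector and the connection efficiency P of Eq. (9):
# a burst starts when a 50 ms window holds more than 50 spikes; a target
# burst within 100 ms after a source burst counts as synchronous.
WINDOW, THRESH, DELTA = 50.0, 50, 100.0   # ms, spikes, ms

def burst_onsets(spike_times, t_end):
    """Return burst onset times from a subnet's pooled spike times (ms)."""
    spike_times = np.sort(spike_times)
    onsets, t = [], 0.0
    while t < t_end:
        n = np.searchsorted(spike_times, t + WINDOW) - np.searchsorted(spike_times, t)
        if n > THRESH:
            onsets.append(t)
            t += WINDOW          # skip past the detected burst
        else:
            t += 1.0             # scan step (assumed)
    return np.asarray(onsets)

def efficiency(src_bursts, trg_bursts, t_end):
    """Connection efficiency P per Eq. (9), frequencies in bursts per ms."""
    f_src = len(src_bursts) / t_end
    f_trg = len(trg_bursts) / t_end
    syn = sum(np.any((trg_bursts > s) & (trg_bursts <= s + DELTA))
              for s in src_bursts)
    f_syn = syn / t_end
    a = DELTA * f_trg            # by-chance factor, Poisson approximation
    return (f_syn - a * f_src) / ((1.0 - a) * f_src)
```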

3. Results

3.1. Self-Reinforcement of Internetwork Couplings in Neural Circuits

In model studies, the coupling efficiency between neurons is usually understood as the synaptic weight, w, which determines the current arising in the postsynaptic neuron [Equation (4)]. However, direct measurement of w is infeasible in experimental conditions. Therefore, the coupling efficiency is usually estimated indirectly via the postsynaptic potential amplitude or the number of spikes “transmitted” from one neuron to another. The latter measure is used to assess the effectiveness of connections between subnets in vitro [19,53,54].
To address this issue, we study couplings both between individual neurons and between subnets (Figure 2A; this and other figures showing network structures are schematic and do not display all the simulated neurons). By default, the source subnet makes ten synaptic connections to the target subnet. These couplings simulate axons grown along microfluidic channels in PDMS chips [16,17,18,19,20,21,22,23,24,25]. Note that the geometric architecture of the network plays an essential role, and internetwork connections link the closest neurons of the connected subnets, as happens in in vitro experiments.
The connection efficiency P [Equation (9)] is the ratio of individual spikes or bursts fired by the postsynaptic neuron or target subnet within a short time after the activation of the presynaptic neuron or source subnet. Figure 2B shows the results of the simulations. The connection efficiency P has a pronounced sigmoid-like dependence on the coupling strength w for both individual neurons and subnets (in the latter case, we used the mean value of w). A similar result has been observed in in vitro experiments with cultures of cortical neurons. More specifically, the percentage of bursts propagating from the source subnet to the target one increased with the number of axon-containing tunnels in a PDMS chip [19].
STDP in unidirectional connections can lead to a self-reinforcement phenomenon, i.e., the potentiation of the spike transfer through an “effective” coupling (evoked response). Such an effect is most apparent for the paired stimulation protocol, i.e., when a postsynaptic neuron is stimulated with a 10 ms delay after the presynaptic cell [36,37]. Figure 2C illustrates the dynamics of the synaptic weight between two neurons in spontaneous conditions, under stimulation of the presynaptic neuron only (S1), and under the paired stimulation protocol (S1 + S2). We observe that potentiation is significantly slower when only the presynaptic neuron is stimulated (Figure 2C, S1). Notably, an extremely slow potentiation also occurs under spontaneous activity: the presynaptic neuron sometimes excites the postsynaptic one, and the STDP rule increases the coupling (Figure 2C, Spont).
We adapted the stimulation protocols used for single neurons to the modular network. A small region of each subnet, including five excitatory neurons, was stimulated by a sequence of square pulses delivered at 10 Hz (Figure 2A, bottom panel). In the case of the paired stimulation protocol, the time delay (i.e., the time shift between S1 and S2) was increased to 30 ms. The obtained results are similar to those observed for individual neurons (Figure 2D). However, the synaptic weights change much more slowly, and the difference between evoked and spontaneous activity is smaller. We also note a pronounced time lag in the rise of the synaptic weights in the case of evoked activity. It is explained by a relatively slow intranetwork rearrangement of synaptic weights required to initiate changes in the internetwork weights.
The self-reinforcement phenomenon can underlie the formation of neural structures with cyclic activity and, possibly, central pattern generators. Let us consider a system consisting of four subnets closed into a ring by unidirectional connections (Figure 3A). With low weights of internetwork connections and/or an insufficient number of them, the activity of the subnets is almost uncorrelated (Figure 3B). In the presence of sufficient connections between subnets, the self-reinforcement effect leads to their potentiation and the emergence of a circulating activity. In this case, neural activity is transmitted from one subnet to another, and a periodic cycle emerges (Figure 3C). Then, the connection efficiency P can reach 0.8 after learning. Note that this effect occurs only if the number of connections between subnets, Nw, is high enough (Figure 3B). For Nw < 4, no circulating activity is observed, whereas for Nw > 8, the connection efficiency saturates due to STDP.
Although the network structure was predetermined in the previous experiments, circulating waves can be observed in a network with initially bidirectional couplings between subnets (Figure 3D). Self-reinforcement breaks the initial symmetry, and connections that facilitate a wave running either in a clockwise or counterclockwise direction are potentiated (Figure 3E,F).

3.2. The Shortest Pathway Rule

A straightforward stimulus–response association can be achieved in a two-module architecture (Figure 2). The activity of a postsynaptic neuron (or a target subnet) can be considered the system’s output. Then, in Pavlovian conditioning, the stimulation of the pre- and postsynaptic neurons (source and target subnets) can be treated as conditional and unconditional stimuli, respectively. However, here we face the following problems:
  • Spontaneous (without stimuli) potentiation of connections (Figure 2C,D);
  • Potentiation of connections when only the conditional stimulus is applied (no association, Figure 2C,D);
  • The lack of a mechanism for depressing an association when it becomes irrelevant (e.g., when the conditional stimulus is not supported by an unconditional one).
Thus, the main problem of associative learning is not the potentiation of synapses representing associations, but the lack of a mechanism depressing (or controlling) synapses not involved in the stimulus association. Earlier, to solve this problem, we introduced the shortest pathway rule driving the synaptic dynamics and proposed a simple neural circuit with associative learning [33]. Let us now extend this rule to modular SNNs.
Figure 4A shows an architecture with three coupled subnets. Subnet 1 sends excitation to subnet 3 through two pathways: directly and via subnet 2. In this context, the term “shortest pathway” refers both to the minimal geometric length from subnet 1 to subnet 3 (and hence the minimal axonal delay in spike transmission) and to the minimal number of synapses mediating the spike transmission (the minimal synaptic delay). In the presence of two alternative routes, the shortest path (W1 in Figure 4A) is potentiated. At the same time, the synapses involved in the longer pathway via subnet 2 are depressed (W2 in Figure 4A). Figure 4B illustrates the dynamics of the internetwork weights.
The described effect constitutes the shortest pathway rule in midscale modular SNNs and provides synaptic competition. In turn, synaptic competition can solve the problem of the uncontrolled growth of synapses and implements the depression of synapses not involved in relevant associations of stimuli.
In our model, experimentally justified STDP is the only learning rule. It implements a local mechanism of synaptic plasticity, which depends only on the activity of the pre- and postsynaptic neurons. Despite this, on a network scale, STDP can imitate global “meta” learning rules through the interaction of the competing neuronal circuits. The shortest path rule can be attributed to such a metarule. Then, the desired learning algorithm can use a network architecture that provides “spatial neurocomputing” based on STDP.

3.3. Synaptic Competition in SNNs

Let us now use the shortest pathway rule to implement associative learning. Figure 5A shows a three-module SNN receiving three different stimuli. We note that bidirectional links couple subnets 1 and 2. In Pavlovian conditioning, this architecture allows us to model a situation with two conditional stimuli (CS1, CS2) and one unconditional stimulus (US). As in Section 3.2, to induce STDP, we use the paired stimulation protocol: US pulses are applied 30 ms after the CS pulses. However, we start with an “incorrect” pre-training phase, associating the US with CS1; at this stage, CS2 is not applied. Then, we carry out the main learning phase, in which the US is combined with CS2 (Figure 5B). This procedure allows us to analyze the system’s ability to form associations and to retrain once external conditions have changed (i.e., when CS2 replaces CS1).
During learning, the output subnet 3 adjusts the connections coming from subnets 1 and 2 depending on the correlation with their activity. Since both the potentiation and the depression of the incoming connections determine the training quality, we introduce the joint coefficient:
$$Q = \frac{2\, W_{pot}}{W_{pot} + W_{dep}} - 1, \qquad (10)$$
where $W_{pot}$ and $W_{dep}$ are the average values of the connections between subnets that should be potentiated and depressed, respectively. The value Q = 1 corresponds to perfect learning. If learning is poor, Q is close to zero, and Q is negative for “wrong” learning. Below, we conventionally assume that an SNN with Q > 0.5 is properly trained.
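For reference, Equation (10) reduces to a one-line computation; the weight values below are illustrative placeholders.

```python
import numpy as np

# Sketch of the learning-quality coefficient Q of Eq. (10); the weight
# arrays are illustrative placeholders.
def learning_quality(w_pot, w_dep):
    """Q = 1 for perfect learning, ~0 for poor, < 0 for 'wrong' learning."""
    w_pot, w_dep = np.mean(w_pot), np.mean(w_dep)
    return 2.0 * w_pot / (w_pot + w_dep) - 1.0

print(learning_quality([0.9, 0.8], [0.1, 0.2]))   # -> 0.7, properly trained
```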
Bidirectional connections between two input subnets (WC in Figure 5A) play a critical role in synaptic competition. They close the long alternative pathway when one of the inputs is activated (see also Figure 4). As a result, along with the strengthening of the currently relevant associative connections, a weakening of the irrelevant associations occurs, and, accordingly, the learning coefficient Q increases (Figure 5B). The simulations with different weights of the competing connections show a specific range for which optimal learning is observed (Figure 5C, pink area).
In a uniform network, some axons connect neurons in different directions and can implement synaptic competition accordingly. Thus, there is no reason to have subnets one and two separate (Figure 5A). Instead, one can build a single but elongated subnetwork and stimulate its extreme points (see also Section 3.4). We tested this hypothesis by combining input subnets one and two into a single network. As expected, the network learned quite well, and the learning coefficient reached the value Q = 0.63 ± 0.11 (n = 6). In the following sections, we use such elongated networks to form input and output modules.

3.4. Neuronal Competition of SNN Outputs

A neural network can have several input modules that receive conditioned signals and several output modules that respond to unconditioned stimuli. Therefore, for effective learning, it is necessary to implement competition between inputs (i.e., synaptic competition, Section 3.3) and between outputs (neuronal competition).
Let us consider an SNN with one input and two outputs (Figure 6A), where the two output modules are combined into one elongated module, bearing in mind the impossibility of forming exclusively inhibitory connections between subnets of biological neurons. In Pavlovian conditioning, this architecture models a situation with one conditional stimulus (CS) and two unconditional stimuli (US1 and US2). To implement neuronal competition, we use lateral inhibition, which suppresses the activity of neighboring neurons upon a strong excitation of one neuron, the winner [33,43]. As a result of this process, only the winning element undergoes learning and forms an association. Since we do not use a mechanism for removing irrelevant associations, the training protocol cannot contain a phase of “wrong” pre-training.
We study how the parameters of the inhibitory neurons determine the learning quality. To this end, we chose the weight of inhibitory connections $W_I$ and the decay time of the inhibitory postsynaptic current $\tau_I$ as governing parameters. Figure 6B shows that learning fails for $W_I < 0.1$ or $\tau_I < 40$ ms. Thus, lateral inhibition is necessary for learning.

3.5. Robotic Embodiment of Associative Learning

Let us now provide a practical application of the phenomena studied above. We consider the problem of obstacle avoidance using a mobile robot driven by an SNN [33]. The robot has two touch sensors and two ultrasonic sonars (Figure 7A). The activity of a pacemaker neuron drives the right and left motors, which leads to the forward movement of the robot. Figure 7B shows the modular SNN consisting of two elongated subnets. The internal couplings in subnet 1 provide synaptic competition for the internetwork connections WP and WD (Figure 7B). In turn, the inhibitory neurons of subnet 2 provide neuronal competition: if one of the zones of subnet 2 is excited, the other zone is inhibited.
Signals from the touch sensors serve as unconditional stimuli. They are activated if the robot hits an obstacle. Stimulation of the corresponding US zone in subnet 2 (Figure 7B) is transmitted to motoneurons that brake the corresponding wheel. As a result of such an unconditioned reaction, the robot can avoid obstacles upon touching them. Signals from the sonars serve as conditional stimuli; they can detect obstacles at a distance. When one of the CS zones in subnet 1 (CS1 or CS2, Figure 7B) is activated simultaneously with one of the US zones (US1 or US2, Figure 7B), the corresponding connections are potentiated, and the robot learns to go around obstacles in advance without hitting them. Training the robot to mimic Pavlovian conditioning consists of regularly presenting stimuli from both the left and right sides (Figure 7C).
The embodiment of the SNN enables learning two associative CS–US pairs reacting to obstacles on the left and right sides of the robot. Generally, the CS zones in subnet 1 (CS1 and CS2, Figure 7B) can be mapped to the pair of sonars in an arbitrary order (left–right or right–left). Depending on the mapping, there can be two types of associations between the stimuli and motors: either with strong “parallel” or strong “diagonal” internetwork connections (WP or WD in Figure 7B, respectively). Proper learning leads to the potentiation of WP and depression of WD in the case of the “parallel” association and vice versa in the opposite case. Such freedom ensures that no a priori structure is imposed on the SNN; instead, the SNN adapts to the stimuli coming from the environment.
Figure 7D illustrates an example of learning during the training process. After several learning cycles, the learning quality reaches values close to 0.8, and the robot successfully avoids obstacles. We then tested the robot’s performance for different learning qualities (Q) while moving in an arena with several obstacles (Figure 8A). Figure 8B shows the relationship between Q and the robot’s behavior. The dependence of the number of collisions on Q is fitted rather well by an exponential function. As expected, the higher the learning quality, the fewer collisions were registered during the test (10 min).

4. Discussion

We have studied Pavlovian learning in modular SNNs consisting of several synaptically coupled subnets. The numerical study suggests approaches for implementing associative learning in prospective experimental works on neural networks grown in vitro.
STDP in unidirectional connections between two modules can lead to the self-reinforcement of synapses with efficient spike transfer. We have observed this effect under stimulation of the presynaptic subnet, paired stimulation, and spontaneous activity. Consequently, the efficiency of communication by bursts increases, which can be measured experimentally. An attempt to use a two-module SNN for associative learning has identified several negative factors. The main problem was not the potentiation of the connections responsible for the association of conditional and unconditional stimuli, but the insufficient weakening of the couplings not involved in the association.
Modular SNNs with two presynaptic subnets and one postsynaptic subnet (2IN-1OUT architecture) can implement associative learning with two conditional stimuli and one unconditional stimulus. Then, the competition between converging internetwork connections is essential; we call this phenomenon synaptic competition. We used the shortest pathway rule to implement it: STDP potentiates the shortest neural pathways and depresses alternative longer routes. To simplify the network architecture, we proposed an elongated network module merging the input subnets. Simulations confirmed the validity of this approach. It is worth noting that this rule and the SNN architectures are ready for experimental testing in vitro.
Modular SNNs with one input and two outputs (1IN-2OUT architecture) can implement associative learning with one conditional stimulus and two unconditional stimuli. It requires competition between diverging connections, which we call neuronal competition, meaning that neurons in subnets compete, suppressing their counterparts in the neighboring subnet. We implemented neuronal competition using lateral inhibition and employed a single elongated module for two outputs. Currently, in vitro testing of this result is challenging due to a lack of technology enabling the selection of parameters of excitatory and inhibitory elements in cultures of dissociated neurons.
Then, we proposed a modular SNN with a 2IN-2OUT architecture capable of handling two conditional and two unconditional stimuli and providing two associative links. The two-module SNN has been embedded in a mobile robot, and signals from the robot’s sensors innervated certain local areas of the SNN. Initially, the robot could only avoid obstacles when hitting them due to the activation of unconditional stimuli (touch sensors). Learning consisted of presenting obstacles to the robot, which associated conditional stimuli (mediated by the ultrasonic sensors) with unconditional ones. After training, the robot could avoid obstacles without bumping against them by relying on the sonars. Experiments with the robot driven by the SNN revealed an exponential decay in the number of collisions with obstacles with an increase in the learning quality.
Note that one can implement the 2IN-2OUT architecture using two 2IN-1OUT modular networks. However, such an attempt is likely to fail in in vitro experiments. The self-reinforcing effect may form false associations upon presenting a conditioned stimulus without an unconditioned one. Moreover, the proposed network architecture (Figure 7C) requires crossing axon-conducting tunnels without forming synapses at the intersection, which can be an issue for in vitro experiments. A possible solution may be a transition from 2D to 3D network architectures [7,17,55]. In addition, the length of axonal internetwork connections can be crucial for bursting network dynamics. Keren and Marom [56] have shown that a circulating activity in a network of cortical neurons can emerge if either the geometric dimension of the network is sufficiently large (tens of centimeters) or the conduction velocity is low (e.g., due to inhibitors).
However, all these difficulties are technical and can be solved using a neurohybrid approach with memristive devices instead of internetwork axons. Memristors and memristive systems [57], which are implemented in the form of a CMOS-compatible nanostructure with a memory effect, are ideally suited for the role of such connections [44,58]. Recently, the first step in this direction has been taken: commercial memristive devices with a short-term plasticity effect have been used to arrange communication between individual subnets in vitro and to provide synchronous activity of target subnets under the control of the source subnet [59]. Following the general concept of memristive neurohybrid systems [60] and the first experimental results [59,61], we expect that memristive devices could provide a balance in terms of miniaturization, energy efficiency, and computational capabilities required for interfaces between living neurons and their networks.

Author Contributions

Conceptualization: S.A.L., A.N.M., V.A.M. and V.B.K.; methodology: S.A.L., E.S.B. and V.A.M.; software: S.A.L.; robotics experiments: S.A.L. and E.S.B.; writing: S.A.L., A.N.M., V.A.M., and V.B.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Russian Science Foundation (grant No. 21-12-00246, SNN simulation), by the Russian Foundation for Basic Research (grant No. 20-01-00368, spatial neurocomputation concept), by the Ministry of Science and Higher Education of the Russian Federation (project No. 0729-2020-0061, experiments with the neurorobot), by the scientific program of the National Center for Physics and Mathematics (project “Artificial intelligence and big data in technical, industrial, natural and social systems”, neurohybrid systems), by the Spanish Ministerio de Ciencia e Innovación (PID2021-124047NB-I00), and by the Santander-UCM grant PR44/21-29927.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Potter, S.M.; Fraser, S.E.; Pine, J. Animat in a petri dish: Cultured neural networks for studying neural computation. In Proceedings of the 4th Joint Symposium on Neural Computation, San Diego, CA, USA, 17 May 1997; pp. 167–174.
2. Pamies, D.; Hartung, T.; Hogberg, H.T. Biological and medical applications of a brain-on-a-chip. Exp. Biol. Med. 2014, 239, 1096–1107.
3. Meyer, J.A.; Wilson, S.W. From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior; MIT Press: Cambridge, MA, USA, 1991.
4. Reger, B.D.; Fleming, K.M.; Sanguineti, V.; Alford, S.; Mussa-Ivaldi, F.A. Connecting brains to robots: An artificial body for studying the computational properties of neural tissues. Artif. Life 2000, 6, 307–324.
5. Wheeler, B.C. Building a brain on a chip. In Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 1604–1606.
6. Brofiga, M.; Pisano, M.; Raiteri, R.; Massobrio, P. On the road to the brain-on-a-chip: A review on strategies, methods, and applications. J. Neural Eng. 2021, 18, 41005.
7. Forro, C.; Caron, D.; Angotzi, G.N.; Gallo, V.; Berdondini, L.; Santoro, F.; Palazzolo, G.; Panuccio, G. Electrophysiology read-out tools for brain-on-chip biotechnology. Micromachines 2021, 12, 124.
8. Maoz, B.M. Brain-on-a-Chip: Characterizing the next generation of advanced in vitro platforms for modeling the central nervous system. APL Bioeng. 2021, 5, 30902.
9. Aaser, P.; Knudsen, M.; Huse Ramstad, O.; van de Wijdeven, R.; Nichele, S.; Sandvig, I.; Tufte, G.; Bauer, U.S.; Halaas, Ø.; Hendseth, S.; et al. Towards making a cyborg: A closed-loop reservoir-neuro system. In Proceedings of the European Conference on Artificial Life; Knibbe, C., Beslon, G., Parsons, D.P., Misevic, D., Rouzaud-Cornabas, J., Bredèche, N., Hassas, S., Simonin, O., Soula, H., Eds.; MIT Press: Lyon, France, 2017; Volume 2017, pp. 430–437.
10. Bakkum, D.J.; Chao, Z.C.; Potter, S.M. Spatio-temporal electrical stimuli shape behavior of an embodied cortical network in a goal-directed learning task. J. Neural Eng. 2008, 5, 310.
11. Shahaf, G.; Eytan, D.; Gal, A.; Kermany, E.; Lyakhov, V.; Zrenner, C.; Marom, S. Order-based representation in random networks of cortical neurons. PLoS Comput. Biol. 2008, 4, e1000228.
12. Kagan, B.J.; Kitchen, A.C.; Tran, N.T.; Habibollahi, F.; Khajehnejad, M.; Parker, B.J.; Bhat, A.; Rollo, B.; Razi, A.; Friston, K.J. In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron 2022, 110, 3952–3969.e8.
13. Dauth, S.; Maoz, B.M.; Sheehy, S.P.; Hemphill, M.A.; Murty, T.; Macedonia, M.K.; Greer, A.M.; Budnik, B.; Parker, K.K. Neurons derived from different brain regions are inherently different in vitro: A novel multiregional brain-on-a-chip. J. Neurophysiol. 2016, 117, 1320–1341.
14. Pimashkin, A.; Gladkov, A.; Mukhina, I.; Kazantsev, V. Adaptive enhancement of learning protocol in hippocampal cultured networks grown on multielectrode arrays. Front. Neural Circuits 2013, 7, 87.
15. Pimashkin, A.; Gladkov, A.; Agrba, E.; Mukhina, I.; Kazantsev, V. Selectivity of stimulus induced responses in cultured hippocampal networks on microelectrode arrays. Cogn. Neurodyn. 2016, 10, 287–299.
16. Taylor, A.M.; Rhee, S.W.; Tu, C.H.; Cribbs, D.H.; Cotman, C.W.; Jeon, N.L. Microfluidic multicompartment device for neuroscience research. Langmuir 2003, 19, 1551–1556.
17. Habibey, R.; Rojo Arias, J.E.; Striebel, J.; Busskamp, V. Microfluidics for neuronal cell and circuit engineering. Chem. Rev. 2022, 122, 14842–14880.
18. Pan, L.; Alagapan, S.; Franca, E.; Brewer, G.J.; Wheeler, B.C. Propagation of action potential activity in a predefined microtunnel neural network. J. Neural Eng. 2011, 8, 46031.
19. Pan, L.; Alagapan, S.; Franca, E.; Leondopulos, S.S.; DeMarse, T.B.; Brewer, G.J.; Wheeler, B.C. An in vitro method to manipulate the direction and functional strength between neural populations. Front. Neural Circuits 2015, 9, 32.
20. DeMarse, T.B.; Pan, L.; Alagapan, S.; Brewer, G.J.; Wheeler, B.C. Feed-forward propagation of temporal and rate information between cortical populations during coherent activation in engineered in vitro networks. Front. Neural Circuits 2016, 10, 32.
21. le Feber, J.; Postma, W.; de Weerd, E.; Weusthof, M.; Rutten, W.L.C. Barbed channels enhance unidirectional connectivity between neuronal networks cultured on multi electrode arrays. Front. Neurosci. 2015, 9, 412.
22. Malishev, E.; Pimashkin, A.; Arseniy, G.; Pigareva, Y.; Bukatin, A.; Kazantsev, V.; Mukhina, I.; Dubina, M. Microfluidic device for unidirectional axon growth. J. Phys. Conf. Ser. 2015, 643, 012025.
23. Gladkov, A.; Pigareva, Y.; Kutyina, D.; Kolpakov, V.; Bukatin, A.; Mukhina, I.; Kazantsev, V.; Pimashkin, A. Design of cultured neuron networks in vitro with predefined connectivity using asymmetric microfluidic channels. Sci. Rep. 2017, 7, 15625.
24. Forró, C.; Thompson-Steckel, G.; Weaver, S.; Weydert, S.; Ihle, S.; Dermutz, H.; Aebersold, M.J.; Pilz, R.; Demkó, L.; Vörös, J. Modular microstructure design to build neuronal networks of defined functional connectivity. Biosens. Bioelectron. 2018, 122, 75–87.
25. Na, S.; Kang, M.; Bang, S.; Park, D.; Kim, J.; Sim, S.J.; Chang, S.; Jeon, N.L. Microfluidic neural axon diode. Technology 2016, 4, 240–248.
26. Dworak, B.J.; Wheeler, B.C. Novel MEA platform with PDMS microtunnels enables the detection of action potential propagation from isolated axons in culture. Lab Chip 2009, 9, 404–410.
27. Park, J.; Kim, S.; Park, S.I.; Choe, Y.; Li, J.; Han, A. A microchip for quantitative analysis of CNS axon growth under localized biomolecular treatments. J. Neurosci. Methods 2014, 221, 166–174.
28. van de Wijdeven, R.; Ramstad, O.H.; Bauer, U.S.; Halaas, Ø.; Sandvig, A.; Sandvig, I. Structuring a multi-nodal neural network in vitro within a novel design microfluidic chip. Biomed. Microdevices 2018, 20, 9.
29. Chang, C.; Furukawa, T.; Asahina, T.; Shimba, K.; Kotani, K.; Jimbo, Y. Coupling of in vitro neocortical-hippocampal coculture bursts induces different spike rhythms in individual networks. Front. Neurosci. 2022, 16, 873664.
30. Callegari, F.; Brofiga, M.; Poggio, F.; Massobrio, P. Stimulus-evoked activity modulation of in vitro engineered cortical and hippocampal networks. Micromachines 2022, 13, 1212.
31. Lobov, S.A.; Zhuravlev, M.O.; Makarov, V.A.; Kazantsev, V.B. Noise enhanced signaling in STDP driven spiking-neuron network. Math. Model. Nat. Phenom. 2017, 12, 109–124.
32. Lobov, S.; Balashova, K.; Makarov, V.A.; Kazantsev, V. Competition of spike-conducting pathways in STDP driven neural networks. In Proceedings of the 5th International Congress on Neurotechnology, Electronics and Informatics (NEUROTECHNIX 2017); SciTePress: Setúbal, Portugal, 2017; pp. 15–21.
33. Lobov, S.A.; Mikhaylov, A.N.; Shamshin, M.; Makarov, V.A.; Kazantsev, V.B. Spatial properties of STDP in a self-learning spiking neural network enable controlling a mobile robot. Front. Neurosci. 2020, 14, 88.
34. Markram, H.; Gerstner, W.; Sjöström, P.J. A history of spike-timing-dependent plasticity. Front. Synaptic Neurosci. 2011, 3, 4.
35. Markram, H.; Lübke, J.; Frotscher, M.; Sakmann, B. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 1997, 275, 213–215.
36. Bi, G.Q.; Poo, M.M. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 1998, 18, 10464–10472.
37. Sjöström, P.J.; Turrigiano, G.G.; Nelson, S.B. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron 2001, 32, 1149–1164.
38. Jäckel, D.; Bakkum, D.J.; Russell, T.L.; Müller, J.; Radivojevic, M.; Frey, U.; Franke, F.; Hierlemann, A. Combination of high-density microelectrode array and patch clamp recordings to enable studies of multisynaptic integration. Sci. Rep. 2017, 7, 978.
39. Rigby, M.; Anthonisen, M.; Chua, X.Y.; Kaplan, A.; Fournier, A.E.; Grütter, P. Building an artificial neural network with neurons. AIP Adv. 2019, 9, 075009.
40. Izhikevich, E.M. Simple model of spiking neurons. IEEE Trans. Neural Netw. 2003, 14, 1569–1572.
41. Izhikevich, E.M. Which model to use for cortical spiking neurons? IEEE Trans. Neural Netw. 2004, 15, 1063–1070.
42. Izhikevich, E.M. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting; The MIT Press: Cambridge, MA, USA; London, UK, 2007.
43. Lobov, S.A.; Chernyshov, A.V.; Krilova, N.P.; Shamshin, M.O.; Kazantsev, V.B. Competitive learning in a spiking neural network: Towards an intelligent pattern classifier. Sensors 2020, 20, 500.
44. Makarov, V.A.; Lobov, S.A.; Shchanikov, S.; Mikhaylov, A.; Kazantsev, V.B. Toward reflective spiking neural networks exploiting memristive devices. Front. Comput. Neurosci. 2022, 16, 859874.
45. Tsodyks, M.; Pawelzik, K.; Markram, H. Neural networks with dynamic synapses. Neural Comput. 1998, 10, 821–835.
46. Morrison, A.; Diesmann, M.; Gerstner, W. Phenomenological models of synaptic plasticity based on spike timing. Biol. Cybern. 2008, 98, 459–478.
47. Song, S.; Miller, K.D.; Abbott, L.F. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci. 2000, 3, 919.
48. Lobov, S.A.; Zharinov, A.I.; Semenova, O.; Kazantsev, V.B. Topological classification of population activity in spiking neural network. In Proceedings of SPIE: The Saratov Fall Meeting 2020: Computations and Data Analysis: From Molecular Processes to Brain Functions; Postnov, D.E., Ed.; SPIE: Bellingham, WA, USA, 2021; Volume 11847, pp. 1–6.
49. Spiking Neurosimulator NeuroNet with a User-Friendly Graphical Interface. Available online: http://spneuro.net/ (accessed on 29 December 2022).
50. Wagenaar, D.A.; Pine, J.; Potter, S.M. An extremely rich repertoire of bursting patterns during the development of cortical cultures. BMC Neurosci. 2006, 7, 11.
51. Chiappalone, M.; Novellino, A.; Vajda, I.; Vato, A.; Martinoia, S.; van Pelt, J. Burst detection algorithms for the analysis of spatio-temporal patterns in cortical networks of neurons. Neurocomputing 2005, 65–66, 653–662.
52. Stegenga, J.; le Feber, J.; Marani, E.; Rutten, W.L.C. Analysis of cultured neuronal networks using intraburst firing characteristics. IEEE Trans. Biomed. Eng. 2008, 55, 1382–1390.
53. Pigareva, Y.; Gladkov, A.; Kolpakov, V.; Mukhina, I.; Bukatin, A.; Kazantsev, V.B.; Pimashkin, A. Experimental platform to study spiking pattern propagation in modular networks in vitro. Brain Sci. 2021, 11, 717.
54. Pan, L.; Alagapan, S.; Franca, E.; DeMarse, T.; Brewer, G.J.; Wheeler, B.C. Large extracellular spikes recordable from axons in microtunnels. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 453–459.
55. Geramifard, N.; Lawson, J.; Cogan, S.F.; Black, B.J. A novel 3D helical microelectrode array for in vitro extracellular action potential recording. Micromachines 2022, 13, 1692.
56. Keren, H.; Marom, S. Long-range synchrony and emergence of neural reentry. Sci. Rep. 2016, 6, 36837.
57. Chua, L.O.; Kang, S.M. Memristive devices and systems. Proc. IEEE 1976, 64, 209–223.
58. Strukov, D.B.; Snider, G.S.; Stewart, D.R.; Williams, R.S. The missing memristor found. Nature 2008, 453, 80–83.
59. Dias, C.; Castro, D.; Aroso, M.; Ventura, J.; Aguiar, P. A memristor-based neuromodulation device for real-time monitoring and adaptive control of in vitro neuronal populations. ACS Appl. Electron. Mater. 2022, 4, 2380–2387.
60. Mikhaylov, A.; Pimashkin, A.; Pigareva, Y.; Gerasimova, S.A.; Lobov, S.; Gryaznov, E.; Talanov, M.; Lavrov, I.; Demin, V.; Erokhin, V.; et al. Neurohybrid memristive CMOS-integrated systems for biosensors and neuroprosthetics. Front. Neurosci. 2020, 14, 358.
61. Juzekaeva, E.; Nasretdinov, A.; Battistoni, S.; Berzina, T.; Iannotta, S.; Khazipov, R.; Erokhin, V.; Mukhtarov, M. Coupling cortical neurons through electronic memristive synapse. Adv. Mater. Technol. 2019, 4, 1800350.
Figure 1. Model SNN architectures mimicking in vitro neuronal cultures coupled through microfluidic channels in global circuits. (A) Local networks, each consisting of 500 neurons distributed over a rectangular substrate, are coupled using long-scale axons of projecting neurons in a global network structure. Within each network, neurons are linked by predominantly local couplings (red and blue circles correspond to excitatory and inhibitory neurons, respectively). (B) Global network architectures with a different number of inputs and outputs subject to Pavlovian learning studied in this work. Each circle corresponds to a local subnetwork (see (A)). Arrowed links indicate the direction of internetwork couplings. Blue couplings are inhibitory.
Figure 2. Quantification of interneuron couplings in single neurons and network structures. (A) Two model architectures with unidirectionally coupled individual neurons (top) and subnets by ten projecting axons (bottom). (B) The connection efficiency (bursts “passing” from one structure to the other) vs. coupling weight (for subnets the average weight is used). (C) Self-reinforcement of the coupling weight for individual neurons in spontaneous conditions (Spont), under stimulation of the presynaptic neuron (S1), and paired stimulation (S1 + S2). (D) The same as in (C) but for the subnets.
Figure 3. Emergence of cyclic structures and rhythmic activity. (A) Architecture with unidirectional clockwise connections facilitates a clockwise cyclic activity. (B) The connection efficiency (bursts passing from one subnet to another) vs. the number of connections between subnets (A) before and after learning. (C) Raster plots of spiking activity before and after learning. Learning leads to the emergence of a wave circulating in the network. Colors indicate spikes of neurons from the corresponding subnets shown in (A). (D) Initial architecture with bidirectional connections. A cyclic activity running either clockwise or counterclockwise can emerge. (E) An example of the weight’s dynamics: Wckw and Wcckw are the average weights of clockwise and counterclockwise connections, respectively. Clockwise connections are depressed, while counterclockwise couplings are potentiated. (F) Example of raster plots for the bidirectional SNN with a counterclockwise activity after learning.
Figure 4. The shortest pathway rule for modular SNNs solves the problem of depressing synapses not involved in relevant associations of stimuli. (A) The network architecture. Subnet 3 receives input from subnet 1 directly and through subnet 2. The rule states that the shortest path (through W1) is potentiated while the longer (through W2) is inhibited. (B) Dynamics of the internetwork connections. The red arrow indicates the moment when the stimulation is turned on.
Figure 5. Associative learning due to synaptic competition. (A) Network architecture with internetwork connections WC implementing competition between subnets 1 and 2. The SNN provides association of one of the conditional stimuli (CS 1 or CS 2) with unconditional stimulus US. (B) Dynamics of weights and the learning quality under conditional stimuli CS1 and CS2. Blue and orange areas correspond to associations CS1-US and CS2-US, respectively. (C) Average weights of internetwork connections W1 and W2 and the learning quality coefficient vs. average weights of internetwork connections WC.
Figure 6. Neural competition enables associative learning of conditional stimulus with two unconditional ones. (A) Modular SNN with a single conditional stimulus CS and two unconditional stimuli (US1, US2) applied to a single elongated module. Inhibitory interneurons, which suppress the spread of excitation in subnet 2, provide neuronal competition and association of CS either with US1 or US2. (B) Learning quality Q as a function of the parameters of inhibitory connections: coupling weight WI and decay time τI.
Figure 7. Embedding a modular SNN capable of Pavlovian conditioning in a mobile robot. The robot learns collision avoidance. (A) The LEGO robot and mapping of sensory stimuli. (B) Modular SNN consisting of two subnets connected by unidirectional couplings (WP and WD) and the configuration of stimuli. (C) Training the robot. The left touch sensor and left ultrasonic sonar are simultaneously triggered. (D) The dynamics of the connections WP and WD, and the learning quality Q during learning.
Figure 8. Testing the robot’s performance. (A) The experimental arena. The robot’s trace is shown in purple. Obstacles are in white. (B) The number of collisions with obstacles vs. the learning quality Q.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

