Article

Construction of an Infinite Cyclic Group Formed by Artificial Differential Neurons

1
Department of Mathematics, Faculty of Electrical Engineering and Communication, Brno University of Technology, Technická 8, 616 00 Brno, Czech Republic
2
Department of Quantitative Methods, University of Defence, Kounicova 65, 662 10 Brno, Czech Republic
*
Author to whom correspondence should be addressed.
Submission received: 24 February 2022 / Revised: 26 April 2022 / Accepted: 4 May 2022 / Published: 6 May 2022
(This article belongs to the Special Issue Hypergroup Theory and Algebrization of Incidence Structures)

Abstract

Infinite cyclic groups created by various objects belong to the class of basic algebraic structures. In this paper, we construct the infinite cyclic group of differential neurons, which are modifications of artificial neurons in analogy to linear ordinary differential operators of the n-th order. We also describe some of their basic properties.

1. Introduction

In our paper, we study artificial, or formal, neurons. Recall that these are the building blocks of mathematically modeled neural networks, e.g., [1]. The design and functionality of artificial neurons are derived from observations of biological neural networks. Our investigation belongs to a theory which has been developed and applied in various directions in many publications, cf. [2,3,4,5,6]. The bodies of artificial neurons compute the sum of the weighted inputs and the bias and “process” this sum with a transfer function, cf. [1,2,3,4,5,6,7,8,9,10].
In the next step, the information is passed on via the outputs (output functions). Thus, artificial neural networks have a structure similar to that of weighted directed graphs, with artificial neurons being their nodes and the connections between neuron inputs and outputs being directed edges with weights. Recall that in the framework of artificial neural networks there are networks of simple neurons called perceptrons. The basic concept (the perceptron) was introduced by Rosenblatt in 1958. Perceptrons compute a single output (the output function) from multiple real-valued inputs by forming a linear combination according to the input weights and then possibly putting the output through some nonlinear activation function. Mathematically, this can be written as
$$y(t) = \varphi\left(\sum_{i=1}^{n} w_i(t)\,x_i(t) + b\right) = \varphi\left(\mathbf{w}^{T}(t)\,\mathbf{x}(t) + b\right),$$
where $\mathbf{w}(t) = \left(w_1(t), \ldots, w_n(t)\right)$ denotes the vector of time-dependent weight functions, $\mathbf{x}(t) = \left(x_1(t), \ldots, x_n(t)\right)$ is the vector of time-dependent (or time-varying) input functions, $b$ is the bias and $\varphi$ is the activation function. The use of time-varying functions as weights and inputs is a certain generalization of the classical concept of artificial neurons from the work of Warren McCulloch and Walter Pitts (1943); see also [1,2,3,4,5,6,7,8,9,10] and the references mentioned therein.
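To make the role of the time-varying weights concrete, the following Python sketch (our own illustration; the function and variable names are hypothetical, not taken from the cited literature) evaluates $y(t) = \varphi\left(\sum_{i=1}^{n} w_i(t)\,x_i(t) + b\right)$ at a chosen time $t$, with the logistic sigmoid standing in for the activation function $\varphi$.

import math

def neuron_output(t, weights, inputs, bias, phi=lambda s: 1.0 / (1.0 + math.exp(-s))):
    """Output of an artificial neuron with time-varying weights and inputs:
    y(t) = phi( sum_i w_i(t) * x_i(t) + b )."""
    s = sum(w(t) * x(t) for w, x in zip(weights, inputs)) + bias
    return phi(s)

# Example: two inputs, weights and inputs given as functions of time t.
weights = [lambda t: 0.5 + 0.1 * t, lambda t: 2.0]
inputs  = [lambda t: math.sin(t),   lambda t: math.cos(t)]
print(neuron_output(0.3, weights, inputs, bias=-0.2))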

2. Differential Neurons and Their Output Functions

In accordance with our previous papers [1,7,8,9], we regard the above-mentioned artificial neurons in such a way that the inputs $x_i$ and weights $w_i$ are functions of an argument $t$ belonging to a linearly ordered (tempus) set $T$ with the least element $0$. As the index set we use the interval of real numbers $[1, \infty) = \{x \in \mathbb{R};\ 1 \le x\}$, where $\mathbb{R}$ denotes the set of all real numbers. So, denote by $W$ the set of all non-negative functions $w \colon T \to \mathbb{R}$, forming a subsemiring of the ring of all real functions of one real variable $x \colon \mathbb{R} \to \mathbb{R}$. Denote by $\mathrm{Ne}(w_r(t)) = \mathrm{Ne}\left(w_{r,1}(t), \ldots, w_{r,n}(t)\right)$, for $r \in [1, \infty)$, $n \in \mathbb{N}$, the mapping
$$y_r(t) = \sum_{k=1}^{n} w_{r,k}(t)\,x_{r,k}(t) + b_r,$$
which will be called the artificial neuron with the bias $b_r \in \mathbb{R}$; in fact, this is the output function of the corresponding neuron. By $\mathrm{AN}(T)$ we denote the collection of all such artificial neurons.
Neurons are usually denoted by capital letters $X, Y$ or $X_i, Y_i$. However, we also use the notation $\mathrm{Ne}(w)$, where $w = (w_1, \ldots, w_n)$ is the vector of weights.
We suppose, for the sake of simplicity, that the transfer functions (activation functions) $\varphi$, $\sigma$ (or $f$) are the same for all neurons from the collection $\mathrm{AN}(T)$, or that this function is the identity function $f(y) = y$.
Now, similarly as in the case of the collection of linear differential operators, we will construct a cyclic group of artificial neurons, extending their monoid, cf. [1].
Denote by $\delta_{ij}$ the so-called Kronecker delta, $i, j \in \mathbb{N}$, i.e., $\delta_{ii} = \delta_{jj} = 1$ and $\delta_{ij} = 0$ whenever $i \neq j$.
Suppose $\mathrm{Ne}(w_r), \mathrm{Ne}(w_s) \in \mathrm{AN}(T)$, $r, s \in [1, \infty)$, $w_r = \left(w_{r,1}, \ldots, w_{r,n}\right)$, $w_s = \left(w_{s,1}, \ldots, w_{s,n}\right)$, $n \in \mathbb{N}$. Let $m \in \mathbb{N}$, $1 \le m \le n$, be such an integer that $w_{r,m} > 0$. We define
$$\mathrm{Ne}(w_v(t)) = \mathrm{Ne}(w_r(t)) \cdot_m \mathrm{Ne}(w_s(t)),$$
where
$$w_v(t) = \left(w_{v,1}(t), \ldots, w_{v,n}(t)\right),$$
$$w_{v,k}(t) = w_{r,m}(t)\,w_{s,k}(t) + (1 - \delta_{m,k})\,w_{r,k}(t), \quad t \in T,$$
and, of course, the neuron $\mathrm{Ne}(w_v)$ is defined as the mapping $y_v(t) = \sum_{k=1}^{n} w_{v,k}(t)\,x_k(t) + b_v$, $t \in T$, $b_v = b_r b_s$. Further, for a pair $\mathrm{Ne}(w_r(t))$, $\mathrm{Ne}(w_s(t))$ of neurons from $\mathrm{AN}(T)$ we put
$$\mathrm{Ne}(w_r(t)) \le_m \mathrm{Ne}(w_s(t)), \quad w_r(t) = \left(w_{r,1}(t), \ldots, w_{r,n}(t)\right),\ w_s(t) = \left(w_{s,1}(t), \ldots, w_{s,n}(t)\right),$$
if $w_{r,k}(t) \le w_{s,k}(t)$, $k \in \mathbb{N}$, $k \neq m$, and $w_{r,m}(t) = w_{s,m}(t)$, $t \in T$, and with the same bias.
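A minimal computational sketch of the two notions just introduced may be helpful. Assuming the weight functions have already been evaluated at a fixed $t \in T$, the following Python functions (illustrative names of our own, not from the paper) realize the weight recursion of the product $\cdot_m$ and the comparison $\le_m$.

def product_m(wr, ws, m):
    """Weight vector of Ne(w_r) ._m Ne(w_s) (1-based index m):
    w_{v,k} = w_{r,m} * w_{s,k} + (1 - delta_{mk}) * w_{r,k}."""
    n = len(wr)
    return [wr[m - 1] * ws[k] + (0 if k == m - 1 else wr[k]) for k in range(n)]

def leq_m(wr, ws, m):
    """Ne(w_r) <=_m Ne(w_s): w_{r,k} <= w_{s,k} for k != m and w_{r,m} = w_{s,m}."""
    return all(wr[k] <= ws[k] for k in range(len(wr)) if k != m - 1) and wr[m - 1] == ws[m - 1]

# Example at a fixed time t: weights already evaluated to numbers.
wr, ws, m = [1.0, 2.0, 3.0], [0.5, 4.0, 1.0], 2
print(product_m(wr, ws, m))   # [2.0*0.5 + 1.0, 2.0*4.0, 2.0*1.0 + 3.0] = [2.0, 8.0, 5.0]
print(leq_m(wr, wr, m))       # True (the relation is reflexive)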
Remark 1.
There exists a link between formal neurons and linear differential operators of the n-th order. This link is important for our future considerations. Recall the expression of a formal neuron with the inner potential $y_{in} = \sum_{k=1}^{n} w_k(t)\,x_k(t)$, where $x(t) = \left(x_1(t), \ldots, x_n(t)\right)$ is the vector of inputs and $w(t) = \left(w_1(t), \ldots, w_n(t)\right)$ is the vector of weights. Using the bias $b$ of the considered neuron and the transfer function $\sigma$, we can express the output as $y(t) = \sigma\left(\sum_{k=1}^{n} w_k(t)\,x_k(t) + b\right)$.
Now consider a fundamental function $u \colon J \to \mathbb{R}$, where $J \subseteq \mathbb{R}$ is an open interval; the inputs are derived from the function $u \in C^n(J)$ as follows:
$$x_1(t) = u(t),\quad x_2(t) = \frac{du(t)}{dt},\quad \ldots,\quad x_n(t) = \frac{d^{n-1}u(t)}{dt^{n-1}}, \quad n \in \mathbb{N}.$$
Further, the bias is $b = b_0\,\frac{d^{n}u(t)}{dt^{n}}$. As weights we use continuous functions $w_k \colon J \to \mathbb{R}$, $k = 1, \ldots, n$.
Then the formula
$$y(t) = \sigma\left(\sum_{k=1}^{n} w_k(t)\,\frac{d^{k-1}u(t)}{dt^{k-1}} + b_0\,\frac{d^{n}u(t)}{dt^{n}}\right)$$
is a description of the action of the neuron $D_n$, which will be called a formal (artificial) differential neuron. This approach allows us to use the solution spaces of the corresponding linear differential equations.
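Under the assumptions above (identity transfer function, inputs given by derivatives of a fundamental function $u$), the action of a differential neuron can be sketched symbolically. The following illustration uses SymPy; the helper name and the sample data are our own and only serve as an example.

import sympy as sp

t = sp.symbols('t')

def differential_neuron_output(u, weights, b0, n):
    """y(t) = sum_{k=1}^{n} w_k(t) * d^{k-1}u/dt^{k-1} + b0 * d^n u/dt^n
    (identity transfer function sigma)."""
    inner = sum(w * sp.diff(u, t, k - 1) for k, w in enumerate(weights, start=1))
    return sp.expand(inner + b0 * sp.diff(u, t, n))

# Example with n = 3 and a cubic polynomial as the fundamental function u.
u = t**3 + 2*t
weights = [sp.Integer(1), t, sp.Integer(2)]      # w_1(t), w_2(t), w_3(t)
print(differential_neuron_output(u, weights, b0=sp.Rational(1, 2), n=3))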

3. Products and Powers of Differential Neurons

Suppose $w(t) = \left(w_1(t), \ldots, w_n(t)\right)$ is a fixed vector of continuous functions $w_k \colon \mathbb{R} \to \mathbb{R}$ and let $b_0$ be the bias, for any polynomial $p \in \mathbb{R}_s[t]$, $n \le s$, $s \in \mathbb{N}_0$. We consider a differential neuron $D\mathrm{Ne}_p(w)$ given by the action
$$y_1(t) = \sum_{k=1}^{n} w_{1,k}(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + b_0\,\frac{d^{n}p(t)}{dt^{n}}$$
with the identity activation function $\varphi(u) = u$. According to this formula, we can calculate the output function of the differential neuron $D^2\mathrm{Ne}_p(w) = D\mathrm{Ne}_p(w) \cdot D\mathrm{Ne}_p(w)$.
Firstly, we describe the product of neurons $\mathrm{Ne}(w_r) \cdot \mathrm{Ne}(w_s) = \mathrm{Ne}(w_u)$, i.e., of neurons with the outputs
$$y_r(t) = \sum_{k=1}^{n} w_{r,k}(t)\,x_k(t) + b_r, \qquad y_s(t) = \sum_{k=1}^{n} w_{s,k}(t)\,x_k(t) + b_s.$$
The vector of weights of the neuron $\mathrm{Ne}(w_u)$ is of the form $w_u(t) = \left(w_{u,1}, \ldots, w_{u,n}\right)$, where
$$w_{u,k}(t) = w_{r,m}(t)\,w_{s,k}(t) + (1 - \delta_{m,k})\,w_{r,k}(t), \quad t \in T \ \text{and}\ 1 \le m \le n.$$
Then the neuron $\mathrm{Ne}(w_u)$ is defined by its output function $y_u(t) = \sum_{k=1}^{n} w_{u,k}(t)\,x_k(t) + b_r b_s$, $t \in T$.
In greater detail:
$$\begin{aligned}
w_{u,1}(t) &= w_{r,m}(t)\,w_{s,1}(t) + w_{r,1}(t),\\
w_{u,2}(t) &= w_{r,m}(t)\,w_{s,2}(t) + w_{r,2}(t),\\
&\ \ \vdots\\
w_{u,m}(t) &= w_{r,m}(t)\,w_{s,m}(t),\\
&\ \ \vdots\\
w_{u,n}(t) &= w_{r,m}(t)\,w_{s,n}(t) + w_{r,n}(t).
\end{aligned}$$
Application of the above product to the case of differential neurons: suppose $D\mathrm{Ne}_p(w_r)$, $D\mathrm{Ne}_p(w_s)$ are neurons with the output functions
$$y_r(t) = \sum_{k=1}^{n} w_{r,k}(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + b_r\,\frac{d^{n}p(t)}{dt^{n}}, \qquad y_s(t) = \sum_{k=1}^{n} w_{s,k}(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + b_s\,\frac{d^{n}p(t)}{dt^{n}},$$
where $p \in \mathbb{R}_l[t]$, $n \le l$. Denote
$$D\mathrm{Ne}_p(w_u) = D\mathrm{Ne}_p(w_r) \cdot D\mathrm{Ne}_p(w_s).$$
Then the output function of the neuron $D\mathrm{Ne}_p(w_u)$ has the form
$$y_u(t) = \sum_{\substack{k=1\\ k \neq m}}^{n} \left(w_{r,m}(t)\,w_{s,k}(t) + w_{r,k}(t)\right)\frac{d^{k-1}p(t)}{dt^{k-1}} + w_{r,m}(t)\,w_{s,m}(t)\,\frac{d^{m-1}p(t)}{dt^{m-1}} + b_r b_s\left(\frac{d^{n}p(t)}{dt^{n}}\right)^{2}.$$
Now, using the above formula, we can express the output functions of the powers $D^{2}\mathrm{Ne}_p(w_r)$, $D^{\alpha}\mathrm{Ne}_p(w_r)$ (for $\alpha \in \mathbb{N}$) and $D^{0}\mathrm{Ne}_p(w_r)$ (the neutral element, i.e., the unit) of the infinite cyclic group $\{D^{\alpha}\mathrm{Ne}_p(w_r);\ \alpha \in \mathbb{Z}\}$. The output function $y_u^{[2]}(t)$ of the differential neuron $D^{2}\mathrm{Ne}_p(w_r)$ is of the form
$$y_u^{[2]}(t) = \sum_{\substack{k=1\\ k \neq m}}^{n} \left(w_{r,m}(t) + 1\right)w_{r,k}(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + w_{r,m}^{2}(t)\,\frac{d^{m-1}p(t)}{dt^{m-1}} + b_r^{2}\left(\frac{d^{n}p(t)}{dt^{n}}\right)^{2} = \left(w_{r,m}(t) + 1\right)\sum_{\substack{k=1\\ k \neq m}}^{n} w_{r,k}(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + w_{r,m}^{2}(t)\,\frac{d^{m-1}p(t)}{dt^{m-1}} + b_r^{2}\left(\frac{d^{n}p(t)}{dt^{n}}\right)^{2}.$$
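As an informal check of this formula, one can compare the output computed from the composed weight vector with the closed expression for $y_u^{[2]}(t)$ above. The following SymPy sketch (illustrative data and helper names of our own) does this for a small example.

import sympy as sp

t = sp.symbols('t')

def product_m(wr, ws, m):
    """Composed weight vector of the product ._m (1-based index m)."""
    return [wr[m - 1] * ws[k] + (0 if k == m - 1 else wr[k]) for k in range(len(wr))]

def output(weights, p, bias_factor, n, alpha):
    """y(t) = sum_k w_k(t) d^{k-1}p/dt^{k-1} + bias_factor * (d^n p/dt^n)**alpha."""
    inner = sum(w * sp.diff(p, t, k) for k, w in enumerate(weights))
    return sp.expand(inner + bias_factor * sp.diff(p, t, n) ** alpha)

# D^2 Ne_p(w) = DNe_p(w) ._m DNe_p(w): square via the weight recursion ...
n, m, b = 3, 2, sp.Rational(1, 2)
w = [sp.Integer(1), t + 1, sp.Integer(2)]              # w_1, w_2, w_3 with w_m = t + 1 > 0
p = t**4 + t                                           # polynomial input of degree >= n

squared = output(product_m(w, w, m), p, b * b, n, alpha=2)

# ... and directly from the closed formula of the text.
direct = sp.expand((w[m - 1] + 1) * sum(w[k] * sp.diff(p, t, k) for k in range(n) if k != m - 1)
                   + w[m - 1] ** 2 * sp.diff(p, t, m - 1) + b ** 2 * sp.diff(p, t, n) ** 2)
print(sp.simplify(squared - direct) == 0)              # expected: True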
In the paper [1] the following theorem is proved:
Theorem 1.
Consider a differential neuron $D\mathrm{Ne}_p(w)$ with the vector $w(t) = \left(w_1(t), \ldots, w_n(t)\right)$ of time-variable weights and the vector of inputs $x(t) = \left(p(t), \frac{dp(t)}{dt}, \ldots, \frac{d^{n}p(t)}{dt^{n}}\right)$, with a polynomial $p \in \mathbb{R}_l[t]$, $n \le l$, $t \in T$ and $1 \le m \le n$, $n \in \mathbb{N} = \{1, 2, \ldots\}$. The output function $y(t)$ of the above-mentioned neuron is of the form
$$y(t) = \sum_{k=1}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + b\,\frac{d^{n}p(t)}{dt^{n}}$$
with the bias $b\,\frac{d^{n}p(t)}{dt^{n}}$. Suppose $\alpha \in \mathbb{N}$, $2 \le \alpha$. Then the output function of the differential neuron $D^{\alpha}\mathrm{Ne}_p(w)$ has the form
$$y^{[\alpha]}(t) = \sum_{\xi=0}^{\alpha-1} w_m^{\xi}(t)\sum_{\substack{k=1\\ k \neq m}}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + w_m^{\alpha}(t)\,\frac{d^{m-1}p(t)}{dt^{m-1}} + \left(b\,\frac{d^{n}p(t)}{dt^{n}}\right)^{\alpha}.$$
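Theorem 1 can be verified mechanically for small exponents by comparing its closed formula with the $\alpha$-fold repeated product $\cdot_m$; a sketch of such a check (our own, with illustrative data and helper names) follows.

import sympy as sp
from functools import reduce

t = sp.symbols('t')

def product_m(wr, ws, m):
    """Weight recursion of the product ._m (1-based index m)."""
    return [wr[m - 1] * ws[k] + (0 if k == m - 1 else wr[k]) for k in range(len(wr))]

def output(weights, p, b, n, alpha):
    """y(t) = sum_k w_k d^{k-1}p/dt^{k-1} + (b * d^n p/dt^n)**alpha."""
    inner = sum(w * sp.diff(p, t, k) for k, w in enumerate(weights))
    return sp.expand(inner + (b * sp.diff(p, t, n)) ** alpha)

n, m, alpha = 3, 2, 3
b = sp.Rational(1, 3)
w = [sp.Integer(2), t + 1, t]          # w_m = t + 1 > 0 on T = [0, t0)
p = t**5 + t**2                        # polynomial of degree >= n

# alpha-fold product of the weight vector with itself.
w_alpha = reduce(lambda a, _: product_m(a, w, m), range(alpha - 1), w)
repeated = output(w_alpha, p, b, n, alpha)

# Closed formula of Theorem 1.
geom = sum(w[m - 1] ** xi for xi in range(alpha))
closed = sp.expand(geom * sum(w[k] * sp.diff(p, t, k) for k in range(n) if k != m - 1)
                   + w[m - 1] ** alpha * sp.diff(p, t, m - 1)
                   + (b * sp.diff(p, t, n)) ** alpha)
print(sp.simplify(repeated - closed) == 0)   # expected: True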
Now, we discuss a certain type of subgroup which appears in all groups. The following text, up to and including Proposition 2, contains well-known facts taken from the monograph [11] (Chapter 2, §2.4).
Take any group $G$ and any element $a \in G$. Consider all powers of $a$: define $a^0 = e$ (the neutral element), $a^1 = a$, and for $k > 1$, define $a^k$ to be the product of $k$ factors of $a$. (A little more properly, $a^k$ is defined inductively by declaring $a^k = a\,a^{k-1}$.) For $k > 1$ define $a^{-k} = (a^{-1})^k$.
Recall briefly some well-known classical facts.
Definition 1.
Let $a$ be an element of a group $G$. The set of powers of $a$, $\langle a \rangle = \{a^k : k \in \mathbb{Z}\}$, is a subgroup of $G$, called the cyclic subgroup generated by $a$. If there is an element $a \in G$ such that $\langle a \rangle = G$, one says that $G$ is a cyclic group. We say that $a$ is a generator of the cyclic group.
There are two possibilities for $\langle a \rangle$. One possibility is that all the powers $a^k$ are distinct, in which case, of course, the subgroup $\langle a \rangle$ is infinite; if this is so, we say that $a$ has infinite order.
The other possibility is that two powers of $a$ coincide, but this is not our case.
Definition 2.
The order of the cyclic subgroup generated by a is called the order of a . If the order of a is finite, then it is the least positive integer n such that a n = e .
Proposition 1.
Let a be an element of a group G .
(a) 
If $a$ has infinite order, then $\langle a \rangle$ is isomorphic to $\mathbb{Z}$.
(b) 
If $a$ has finite order $n$, then $\langle a \rangle$ is isomorphic to the group $C_n$ of $n$-th roots of $1$.
Proposition 2.
(a) 
Any non-trivial subgroup of Z is cyclic and isomorphic to Z .
(b) 
Let $G = \langle a \rangle$ be a finite cyclic group. Any subgroup of $G$ is also cyclic.
For the construction of a cyclic group of artificial differential neurons, we need to extend the cyclic monoid of differential neurons obtained in the paper [1] by negative powers of differential neurons and, in particular, to describe their output functions. Thus we need to construct the negative powers $D^{-\alpha}\mathrm{Ne}(w)$ of differential neurons, which constitutes the main contribution of this paper. We suppose the existence of such inverse elements, i.e., of negative powers of the generating element of the considered group.
In general, for the construction of the negative power $D^{-\alpha}\mathrm{Ne}(w)$ with $\alpha \in \mathbb{N}$, a suitable way seems to be the use of the equality
$$D^{\alpha+1}\mathrm{Ne}_p(w) \cdot_m D^{-\alpha}\mathrm{Ne}_p(w) = D\mathrm{Ne}_p(w),$$
where on the right-hand side there is an arbitrary general differential neuron with the vector $w(t) = \left(w_1(t), \ldots, w_m(t), \ldots, w_n(t)\right)$ of time-variable weight functions and with the vector of inputs
$$x(t) = \left(p(t), \frac{dp(t)}{dt}, \ldots, \frac{d^{n}p(t)}{dt^{n}}\right),$$
with a polynomial $p \in \mathbb{R}_l[t]$, $n \le l$, $t \in T$ and $1 \le m \le n$, $n \in \mathbb{N} = \{1, 2, \ldots\}$. The neuron $D\mathrm{Ne}_p(w)$ has the output function
$$y(t) = \sum_{k=1}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + b_0\,\frac{d^{n}p(t)}{dt^{n}},$$
with the bias $b = b_0\,\frac{d^{n}p(t)}{dt^{n}}$. However, we will construct the proof by mathematical induction, similarly to the proof of Theorem 1 in [1], which seems to be a more convenient way. So we are going to prove the following theorem.
Theorem 2.
Suppose the existence of inverse elements (i.e., of negative powers of the generating element of the considered group). Let $D\mathrm{Ne}_p(w)$ be a differential neuron with the vector $w(t) = \left(w_1(t), \ldots, w_m(t), \ldots, w_n(t)\right)$ of time-variable weights and with the vector of inputs $x(t) = \left(p(t), \frac{dp(t)}{dt}, \ldots, \frac{d^{n}p(t)}{dt^{n}}\right)$, with a polynomial $p \in \mathbb{R}_l[t]$, $n \le l$, $t \in T$ and $1 \le m \le n$, $n \in \mathbb{N} = \{1, 2, \ldots\}$, i.e., the output function $y(t)$ of the neuron $D\mathrm{Ne}_p(w)$ is of the form
$$y(t) = \sum_{k=1}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + b_0\,\frac{d^{n}p(t)}{dt^{n}},$$
with the bias $b = b_0\,\frac{d^{n}p(t)}{dt^{n}}$. Suppose $\alpha \in \mathbb{N}$. Then the output function of the differential neuron $D^{-\alpha}\mathrm{Ne}_p(w)$ has the form
$$y^{[-\alpha]}(t) = -\frac{1}{w_m^{\alpha}(t)}\sum_{\xi=0}^{\alpha-1} w_m^{\xi}(t)\sum_{\substack{k=1\\ k \neq m}}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + \frac{1}{w_m^{\alpha}(t)}\cdot\frac{d^{m-1}p(t)}{dt^{m-1}} + \left(b_0\,\frac{d^{n}p(t)}{dt^{n}}\right)^{-\alpha}$$
or
$$y^{[-\alpha]}(t) = -w_m^{-\alpha}(t)\sum_{\xi=0}^{\alpha-1} w_m^{\xi}(t)\sum_{\substack{k=1\\ k \neq m}}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + w_m^{-\alpha}(t)\,\frac{d^{m-1}p(t)}{dt^{m-1}} + \left(b_0\,\frac{d^{n}p(t)}{dt^{n}}\right)^{-\alpha}.$$
Proof. 
Consider the equality
$$D\mathrm{Ne}_p(w) \cdot_m D^{-1}\mathrm{Ne}_p(w) = N_1(e)_m,$$
where the output function of the neuron $N_1(e)_m$ (the identity element of the monoid $(S_1, \cdot_m)$ from [1]) is of the form $y_{N_1}(t) = \frac{d^{m-1}p(t)}{dt^{m-1}} + 1$.
Let $y(t) = \sum_{k=1}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + b_0\,\frac{d^{n}p(t)}{dt^{n}}$ be the output function of the neuron $D\mathrm{Ne}_p(w)$ with the bias $b = b_0\,\frac{d^{n}p(t)}{dt^{n}}$ and let
$$y^{[-1]}(t) = \sum_{k=1}^{n} w_{s,k}(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + b_s$$
be the output function of the neuron $D^{-1}\mathrm{Ne}_p(w)$. Since $0 = w_{1,k} = w_m(t)\cdot w_{s,k}(t) + w_k(t)$ for any $k \in \{1, 2, \ldots, n\}\setminus\{m\}$ and $w_m(t)\cdot w_{s,m}(t) = 1$, we have
$$w_{s,m}(t) = \frac{1}{w_m(t)} \quad\text{and}\quad w_{s,k}(t) = -\frac{w_k(t)}{w_m(t)}.$$
Moreover, $1 = b\cdot b_s = b_0\,\frac{d^{n}p(t)}{dt^{n}}\cdot b_s$, which implies that the bias $b_s = \left(b_0\,\frac{d^{n}p(t)}{dt^{n}}\right)^{-1}$. Thus, the output function is of the form
$$y^{[-1]}(t) = -\sum_{\substack{k=1\\ k \neq m}}^{n} \frac{w_k(t)}{w_m(t)}\,\frac{d^{k-1}p(t)}{dt^{k-1}} + \frac{1}{w_m(t)}\cdot\frac{d^{m-1}p(t)}{dt^{m-1}} + b_s = -\frac{1}{w_m(t)}\sum_{\substack{k=1\\ k \neq m}}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + \frac{1}{w_m(t)}\cdot\frac{d^{m-1}p(t)}{dt^{m-1}} + \left(b_0\,\frac{d^{n}p(t)}{dt^{n}}\right)^{-1}.$$
Using Equation (16) we obtain, after some simple calculation, the expression
$$y^{[-\alpha]}(t) = -\frac{1}{w_m^{\alpha}(t)}\sum_{\xi=0}^{\alpha-1} w_m^{\xi}(t)\sum_{\substack{k=1\\ k \neq m}}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + \frac{1}{w_m^{\alpha}(t)}\cdot\frac{d^{m-1}p(t)}{dt^{m-1}} + \left(b_0\,\frac{d^{n}p(t)}{dt^{n}}\right)^{-\alpha}.$$
This function is in fact the output function of the neuron $D^{-\alpha}\mathrm{Ne}_p(w)$.
Now, for $\alpha = 1$ we obtain
$$y^{[-1]}(t) = -\frac{1}{w_m(t)}\sum_{\substack{k=1\\ k \neq m}}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + \frac{1}{w_m(t)}\cdot\frac{d^{m-1}p(t)}{dt^{m-1}} + \left(b_0\,\frac{d^{n}p(t)}{dt^{n}}\right)^{-1},$$
which is in fact Expression (24).
We have
$$y^{[-\alpha-1]}(t) = -\frac{1}{w_m^{\alpha+1}(t)}\sum_{\xi=0}^{\alpha} w_m^{\xi}(t)\sum_{\substack{k=1\\ k \neq m}}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + \frac{1}{w_m^{\alpha+1}(t)}\cdot\frac{d^{m-1}p(t)}{dt^{m-1}} + \left(b_0\,\frac{d^{n}p(t)}{dt^{n}}\right)^{-\alpha-1},$$
which is Equality (25) written for $(\alpha + 1)$ instead of $\alpha$. The other negative powers can be obtained in the same way. □
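The inverse weights derived in the proof, $w_{s,m} = \frac{1}{w_m}$ and $w_{s,k} = -\frac{w_k}{w_m}$ for $k \neq m$, can also be checked directly: composing them with the original weights via the recursion of $\cdot_m$ must return the weight vector of the unit neuron $N_1(e)_m$, i.e., zeros everywhere except the value $1$ at position $m$. A short symbolic sketch (our own, with illustrative data and helper names) follows.

import sympy as sp

t = sp.symbols('t')

def product_m(wr, ws, m):
    """Weight recursion of the product ._m (1-based index m)."""
    return [wr[m - 1] * ws[k] + (0 if k == m - 1 else wr[k]) for k in range(len(wr))]

def inverse_weights(w, m):
    """Weights of D^{-1}Ne_p(w): 1/w_m at position m and -w_k/w_m elsewhere."""
    return [1 / w[m - 1] if k == m - 1 else -w[k] / w[m - 1] for k in range(len(w))]

n, m = 3, 2
w = [sp.Integer(1), t**2 + 1, sp.sin(t) + 2]      # w_m = t**2 + 1 > 0
w_inv = inverse_weights(w, m)

# DNe_p(w) ._m D^{-1}Ne_p(w) should have the unit-neuron weights (0, ..., 1, ..., 0).
unit = [sp.simplify(c) for c in product_m(w, w_inv, m)]
print(unit)                                        # expected: [0, 1, 0]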
Using the output functions of the corresponding differential neurons, we verify the validity of the equalities
$$D^{-\alpha}\mathrm{Ne}_p(w) \cdot_m N_1(e)_m = D^{-\alpha}\mathrm{Ne}_p(w) = N_1(e)_m \cdot_m D^{-\alpha}\mathrm{Ne}_p(w),$$
certifying that the neuron $N_1(e)_m$ is the neutral element also for the negative powers of the neuron $D\mathrm{Ne}_p(w)$.
Denote by $y_u(t)$ the output function of the neuron
$$D\mathrm{Ne}_p(w_u) = D^{-\alpha}\mathrm{Ne}_p(w) \cdot_m N_1(e)_m.$$
Since the output function of the neuron $N_1(e)_m$ (the unit element) has the form
$$y_1(t) = w_{N_1,m}(t)\,\frac{d^{m-1}p(t)}{dt^{m-1}} + 1 \quad\text{with}\quad w_{N_1,m}(t) = 1,$$
we have
$$y_u(t) = -\frac{1}{w_m^{\alpha}(t)}\sum_{\xi=0}^{\alpha-1} w_m^{\xi}(t)\sum_{\substack{k=1\\ k \neq m}}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + 1\cdot w_m^{-\alpha}(t)\,\frac{d^{m-1}p(t)}{dt^{m-1}} + 1\cdot\left(b_0\,\frac{d^{n}p(t)}{dt^{n}}\right)^{-\alpha},$$
which is in fact the output function $y^{[-\alpha]}(t)$ of the differential neuron $D^{-\alpha}\mathrm{Ne}_p(w)$. In a similar way we can verify the second equality.
Remark 2.
In paper [12] the concept of a general n-hyperstructure is defined as follows:
Let $n \in \mathbb{N}$ be an arbitrary positive integer and $\{X_k;\ k = 1, \ldots, n\}$ be a system of non-empty sets. By a general n-hyperstructure we mean the pair
$$\left(\{X_k;\ k = 1, \ldots, n\}, *_n\right),$$
where $*_n \colon \prod_{k=1}^{n} X_k \to \mathcal{P}^{*}\!\left(\bigcup_{k=1}^{n} X_k\right)$ is a mapping assigning to any n-tuple $[x_1, \ldots, x_n] \in \prod_{k=1}^{n} X_k$ a non-empty subset $*_n(x_1, \ldots, x_n) \subseteq \bigcup_{k=1}^{n} X_k$. Here $\mathcal{P}^{*}(M)$ means the power set of $M$ without the empty set $\emptyset$.
Similarly as above, with this hyperoperation there is associated a mapping of power sets
$$*_n \colon \prod_{k=1}^{n} \mathcal{P}^{*}(X_k) \to \mathcal{P}^{*}\!\left(\bigcup_{k=1}^{n} X_k\right)$$
defined by
$$*_n(A_1, \ldots, A_n) = \bigcup\left\{*_n(x_1, \ldots, x_n);\ [x_1, \ldots, x_n] \in \prod_{k=1}^{n} A_k\right\}.$$
This construction is also based on an idea of Nezhad and Hashemi for the case $n = 2$.
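For illustration only, a toy instance of such a hyperoperation and of its extension to power sets can be written down as follows; the concrete choice of $*_n$ below, which simply collects the components of the n-tuple, is just one admissible example of a mapping into non-empty subsets and is not taken from [12].

from itertools import product

def star_n(xs):
    """A toy hyperoperation *_n: assigns to an n-tuple a non-empty subset of the
    union of the X_k (here simply the set of its components)."""
    return set(xs)

def star_n_sets(sets):
    """Extension of *_n to power sets:
    *_n(A_1, ..., A_n) = union of *_n(x_1, ..., x_n) over [x_1, ..., x_n] in A_1 x ... x A_n."""
    out = set()
    for tup in product(*sets):
        out |= star_n(tup)
    return out

X1, X2, X3 = {1, 2}, {2, 3}, {4}
print(star_n((1, 3, 4)))            # {1, 3, 4}
print(star_n_sets([X1, X2, X3]))    # {1, 2, 3, 4}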
At the end of this section we give the following example:
Let $J \subseteq \mathbb{R}$ be an open interval and $C^n(J)$ be the ring (with respect to the usual addition and multiplication of functions) of all real functions $f \colon J \to \mathbb{R}$ with continuous derivatives up to and including the order $n \ge 0$. Now, as in the suppositions of Theorems 1 and 2, we consider a differential neuron $D\mathrm{Ne}_p(w)$ with the vector $w(t) = \left(w_1(t), \ldots, w_n(t)\right)$ of time-variable weights and the vector of inputs $x(t) = \left(p(t), \frac{dp(t)}{dt}, \ldots, \frac{d^{n}p(t)}{dt^{n}}\right)$, with the polynomial $p \in \mathbb{R}_l[t]$, $n \le l$, $t \in T$ and $1 \le m \le n$, $n \in \mathbb{N} = \{1, 2, \ldots\}$. The output function $y(t)$ of the mentioned neuron is of the form
$$y(t) = \sum_{k=1}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + b\,\frac{d^{n}p(t)}{dt^{n}}$$
with the bias $b\,\frac{d^{n}p(t)}{dt^{n}}$ and $w_k \colon T \to \mathbb{R}$, $w_k \in C^n(T)$. In accordance with [13], we put
$$\mathrm{DAN}_k(T) = \left\{D\mathrm{Ne}_p(w_s);\ p \in \mathbb{R}_l[t],\ w_s \in [C^n(T)]^k\right\}.$$
As above, we put $D\mathrm{Ne}_p(w_s) \le D\mathrm{Ne}_p(w_r)$ whenever $w_s(t) = \left(w_{s,1}(t), \ldots, w_{s,n}(t)\right)$, $w_r(t) = \left(w_{r,1}(t), \ldots, w_{r,n}(t)\right)$ and $w_{s,k}(t) \le w_{r,k}(t)$, $t \in T$, $k = 1, 2, \ldots, n$. Defining
$$*_{n,p}\!\left(D\mathrm{Ne}_p(w_1(t)), D\mathrm{Ne}_p(w_2(t)), \ldots, D\mathrm{Ne}_p(w_n(t))\right) = \bigcup_{k=1}^{n}\left\{D\mathrm{Ne}_p(w(t)) \in \mathrm{DAN}_k(T)_p;\ \mathrm{Ne}_p(w_k(t)) \le \mathrm{Ne}_p(w(t))\right\}$$
for any n-tuple $\left[\mathrm{Ne}_p(w_1(t)), \mathrm{Ne}_p(w_2(t)), \ldots, \mathrm{Ne}_p(w_n(t))\right] \in \prod_{k=1}^{n} \mathrm{DAN}_k(T)_p$, we obtain that
$$D_p(n) = \left(\{\mathrm{DAN}_k(T)_p;\ k = 1, 2, \ldots, n\}, *_{n,p}\right)$$
is a general n-hyperstructure for the polynomial $p \in \mathbb{R}_l[t]$.
It should be noted that the concept of the investigated neurons is, in a certain sense, motivated by ordinary differential operators forming the left-hand sides of the corresponding differential equations; see, e.g., [13,14].
Therefore, the construction of differential neurons consists in a certain modification of the concept of an artificial neuron, which is investigated in a certain formal analogy to linear differential operators, as mentioned above. Using the obtained cyclic group of differential neurons, we will construct a certain other hyperstructure of differential neurons. The mentioned relationship is described in [8] by the construction of a homomorphism.
It is to be noted that a hypergroup is a multistructure $(H, *)$, where $H$ is a non-empty set and $* \colon H \times H \to \mathcal{P}^{*}(H)$ is a mapping which is associative, i.e.,
$$(a * b) * c = a * (b * c)$$
for any triad $a, b, c \in H$, where $A * B = \bigcup_{(a,b) \in A \times B} a * b$ for non-empty $A, B \subseteq H$, and $b * A = \{b\} * A$. Further, the reproduction axiom
$$a * H = H = H * a$$
for any element $a \in H$ is satisfied.
The above definition of a hypergroup is in the sense of F. Marty.
Let $J \subseteq \mathbb{R}$ be an open interval (bounded or unbounded) of real numbers and $C^k(J)$ be the ring (with respect to the usual addition and multiplication of functions) of all real functions with continuous derivatives up to and including the order $k \ge 0$. We write $C(J)$ instead of $C^0(J)$. For a positive integer $n \ge 2$ we denote by $A_n$ the set of all linear homogeneous differential equations of the $n$-th order with continuous real coefficients on $J$, i.e.,
$$y^{(n)} + p_{n-1}(x)\,y^{(n-1)} + \cdots + p_0(x)\,y = 0$$
(cf. [14,15,16]), where $p_k \in C(J)$, $k = 0, 1, \ldots, n-1$, and $p_0(x) > 0$ for any $x \in J$ (this is not an essential restriction). Denote by $L(p_0, \ldots, p_{n-1}) \colon C^n(J) \to C(J)$ the linear operator defined by
$$L(p_0, \ldots, p_{n-1})(y) = y^{(n)} + p_{n-1}(x)\,y^{(n-1)} + \cdots + p_0(x)\,y$$
and put
$$\mathrm{LA}_n(J) = \left\{L(p_0, \ldots, p_{n-1});\ p_k \in C(J),\ p_0 > 0\right\}.$$
Further, $\mathbb{N}_0(n) = \{0, 1, \ldots, n-1\}$ and $\delta_{ij}$ stands for the Kronecker delta, $\bar{\delta}_{ij} = 1 - \delta_{ij}$. For any $m \in \mathbb{N}_0(n)$ we denote by $\mathrm{LA}_n(J)_m$ the set of all linear differential operators of the $n$-th order $L(p_0, \ldots, p_{n-1}) \colon C^n(J) \to C(J)$, where $p_k \in C(J)$ for any $k \in \mathbb{N}_0(n)$ and $p_m \in C_{+}(J)$ (i.e., $p_m(x) > 0$ for each $x \in J$). Using the vector notation $\vec{p}(x) = \left(p_0(x), \ldots, p_{n-1}(x)\right)$, $x \in J$, we can write $L(\vec{p})\,y = y^{(n)} + \vec{p}(x)\cdot\left(y, y', \ldots, y^{(n-1)}\right)$, i.e., using a scalar product.
We define a binary operation $\cdot_m$ and a binary relation $\le_m$ on the set $\mathrm{LA}_n(J)_m$ in this way:
For an arbitrary pair $L(\vec{p}), L(\vec{q}) \in \mathrm{LA}_n(J)_m$, $\vec{p} = (p_0, \ldots, p_{n-1})$, $\vec{q} = (q_0, \ldots, q_{n-1})$, we put $L(\vec{p}) \cdot_m L(\vec{q}) = L(\vec{u})$, $\vec{u} = (u_0, \ldots, u_{n-1})$, where
$$u_k(x) = p_m(x)\,q_k(x) + (1 - \delta_{km})\,p_k(x), \quad x \in J,$$
and $L(\vec{p}) \le_m L(\vec{q})$ whenever $p_k(x) \le q_k(x)$, $k \in \mathbb{N}_0(n)$, and $p_m(x) = q_m(x)$, $x \in J$. Evidently, $(\mathrm{LA}_n(J)_m, \le_m)$ is an ordered set.
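A pointwise sketch of the coefficient recursion defining this operator product, together with the action of the resulting operator on a test function, may be given as follows (SymPy-based; the coefficient choices and helper names are illustrative assumptions of our own).

import sympy as sp

x = sp.symbols('x')

def operator_product(p, q, m):
    """Coefficient recursion of the product of L(p) and L(q) at distinguished index m:
    u_k = p_m * q_k + (1 - delta_{km}) * p_k, indices k = 0, ..., n-1 (m in N_0(n))."""
    return [p[m] * q[k] + (0 if k == m else p[k]) for k in range(len(p))]

def apply_operator(p, y, n):
    """L(p)y = y^(n) + p_{n-1} y^(n-1) + ... + p_0 y."""
    return sp.expand(sp.diff(y, x, n) + sum(p[k] * sp.diff(y, x, k) for k in range(n)))

n, m = 3, 1
p = [sp.Integer(2), x**2 + 1, sp.Integer(0)]       # p_0 > 0 and p_m = x**2 + 1 > 0 on J
q = [sp.exp(x), sp.Integer(3), x]
u = operator_product(p, q, m)

y = sp.sin(x)                                      # a test function from C^n(J)
print(apply_operator(u, y, n))                     # action of the composed operator on y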
In paper [14] a sketch of the proof of the following lemma is presented:
Lemma 1.
The triad $(\mathrm{LA}_n(J)_m, \cdot_m, \le_m)$ is an ordered (non-commutative) group.

4. Groups and Hypergroups of Artificial Neurons

As mentioned in the dissertation [2], neurons are the atoms of neural computation; all neural networks are built up out of these simple computational units. For a pair $\mathrm{Ne}(w_r)$, $\mathrm{Ne}(w_s)$ of neurons from $\mathrm{AN}(T)$ we put $\mathrm{Ne}(w_r) \le_m \mathrm{Ne}(w_s)$, $w_r = \left(w_{r,1}(t), \ldots, w_{r,n}(t)\right)$, $w_s = \left(w_{s,1}(t), \ldots, w_{s,n}(t)\right)$, if $w_{r,k}(t) \le w_{s,k}(t)$, $k \in \mathbb{N}$, $k \neq m$, and $w_{r,m}(t) = w_{s,m}(t)$, $t \in T$, and with the same bias. Evidently, $(\mathrm{AN}(T), \le_m)$ is an ordered set. A relationship (compatibility) of the binary operation $\cdot_m$ and the ordering $\le_m$ on $\mathrm{AN}(T)$ is given by an assertion analogous to the one above. In paper [1] it is established that the structure $(\mathrm{AN}(T), \cdot_m)$ is a non-commutative group.
Lemma 2.
The triad $(\mathrm{AN}(T), \cdot_m, \le_m)$ (an algebraic structure with an ordering) is a non-commutative ordered group.
A sketch of the proof is presented in [8]. Denoting
$$\mathrm{AN}_1(T)_m = \left\{\mathrm{Ne}(w);\ w = (w_1, \ldots, w_n),\ w_k \in C(T),\ k = 1, \ldots, n,\ w_m(t) \equiv 1\right\},$$
we get the following assertion, the proof of which with necessary concepts is contained in [1].
Proposition 3.
Let $T = [0, t_0) \subseteq \mathbb{R}$, $t_0 \in \mathbb{R} \cup \{\infty\}$. Then for any positive integer $n \in \mathbb{N}$, $n \ge 2$, and for any integer $m$ such that $1 \le m \le n$, the semigroup $(\mathrm{AN}_1(T)_m, \cdot_m)$ is an invariant subgroup of the group $(\mathrm{AN}(T)_m, \cdot_m)$.
If $m, n \in \mathbb{N}$, $1 \le m \le n-1$, then a certain relationship between the groups $(\mathrm{AN}_n(T)_m, \cdot_m)$ and $(\mathrm{LA}_n(T)_{m+1}, \cdot_{m+1})$ is contained in the following proposition:
Proposition 4.
Let $t_0 \in \mathbb{R}$, $t_0 > 0$, $T = [0, t_0) \subseteq \mathbb{R}$ and let $m, n \in \mathbb{N}$ be integers such that $1 \le m \le n-1$. Define a mapping $F \colon \mathrm{AN}_n(T)_m \to \mathrm{LA}_n(T)_{m+1}$ by the following rule: for an arbitrary neuron $\mathrm{Ne}(w_r) \in \mathrm{AN}_n(T)_m$, where $w_r = \left(w_{r,1}(t), \ldots, w_{r,n}(t)\right) \in [C(T)]^n$, we put $F(\mathrm{Ne}(w_r)) = L(w_{r,1}, \ldots, w_{r,n}) \in \mathrm{LA}_n(T)_{m+1}$ with the action:
$$L(w_{r,1}, \ldots, w_{r,n})\,y(t) = \frac{d^{n}y(t)}{dt^{n}} + \sum_{k=1}^{n} w_{r,k}(t)\,\frac{d^{k-1}y(t)}{dt^{k-1}}, \quad y \in C^n(T).$$
Then the mapping $F \colon \mathrm{AN}_n(T)_m \to \mathrm{LA}_n(T)_{m+1}$ is a homomorphism of the group $(\mathrm{AN}_n(T)_m, \cdot_m)$ into the group $(\mathrm{LA}_n(T)_{m+1}, \cdot_{m+1})$.
Consider $\mathrm{Ne}(w_r), \mathrm{Ne}(w_s) \in \mathrm{AN}_n(T)_m$ and denote $F(\mathrm{Ne}(w_r)) = L(w_{r,1}, \ldots, w_{r,n})$, $F(\mathrm{Ne}(w_s)) = L(w_{s,1}, \ldots, w_{s,n})$. Denote $\mathrm{Ne}(w_u) = \mathrm{Ne}(w_r) \cdot_m \mathrm{Ne}(w_s)$. There holds
$$F\left(\mathrm{Ne}(w_r) \cdot_m \mathrm{Ne}(w_s)\right) = F(\mathrm{Ne}(w_u)) = L(w_{u,1}, \ldots, w_{u,n}),$$
where
$$L(w_{u,1}, \ldots, w_{u,n})\,y(t) = y^{(n)}(t) + \sum_{k=1}^{n} w_{u,k}(t)\,y^{(k-1)}(t).$$
Here $w_{u,k}(t) = w_{r,m+1}(t)\,w_{s,k}(t) + w_{r,k}(t)$, $k \neq m$, and $w_{u,m+1}(t) = w_{r,m+1}(t)\,w_{s,m+1}(t)$. Then $L(w_{u,1}, \ldots, w_{u,n}) = L(w_{r,1}, \ldots, w_{r,n}) \cdot_{m+1} L(w_{s,1}, \ldots, w_{s,n}) = F(\mathrm{Ne}(w_r)) \cdot_{m+1} F(\mathrm{Ne}(w_s))$. The neutral element $\mathrm{Ne}(w) \in \mathrm{AN}_n(T)_m$ is also mapped onto the neutral element of the group $(\mathrm{LA}_n(T)_{m+1}, \cdot_{m+1})$, thus the mapping $F \colon (\mathrm{AN}_n(T)_m, \cdot_m) \to (\mathrm{LA}_n(T)_{m+1}, \cdot_{m+1})$ is a group homomorphism.
Now, using the construction described in Lemma 2, we obtain the final transposition hypergroup (also called a non-commutative join space). Denote by $\mathcal{P}^{*}(\mathrm{AN}(T)_m)$ the power set of $\mathrm{AN}(T)_m$ consisting of all non-empty subsets of the last set and define a binary hyperoperation
$$*_m \colon \mathrm{AN}(T)_m \times \mathrm{AN}(T)_m \to \mathcal{P}^{*}(\mathrm{AN}(T)_m)$$
by the rule
$$\mathrm{Ne}(w_r) *_m \mathrm{Ne}(w_s) = \left\{\mathrm{Ne}(w_u);\ \mathrm{Ne}(w_r) \cdot_m \mathrm{Ne}(w_s) \le_m \mathrm{Ne}(w_u)\right\}$$
for all pairs $\mathrm{Ne}(w_r), \mathrm{Ne}(w_s) \in \mathrm{AN}(T)_m$. In more detail, if $w_u = (w_{u,1}, \ldots, w_{u,n})$, $w_r = (w_{r,1}, \ldots, w_{r,n})$, $w_s = (w_{s,1}, \ldots, w_{s,n})$, then $w_{r,m}(t)\,w_{s,m}(t) = w_{u,m}(t)$ and $w_{r,m}(t)\,w_{s,k}(t) + w_{r,k}(t) \le w_{u,k}(t)$ if $k \neq m$, $t \in T$. Then we have that $(\mathrm{AN}(T)_m, *_m)$ is a non-commutative hypergroup. The above-defined invariant (also termed normal) subgroup $(\mathrm{AN}_1(T)_m, \cdot_m)$ of the group $(\mathrm{AN}(T)_m, \cdot_m)$ is the carrier set of a subhypergroup of the hypergroup $(\mathrm{AN}(T)_m, *_m)$, and it has certain significant properties.
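The membership condition spelled out above can be tested directly once the weights are evaluated at a fixed $t \in T$; the following sketch (our own, with illustrative numbers and helper names) implements the hyperproduct membership test.

def product_m(wr, ws, m):
    """Weight vector of Ne(w_r) ._m Ne(w_s) (1-based index m), evaluated at a fixed t."""
    return [wr[m - 1] * ws[k] + (0 if k == m - 1 else wr[k]) for k in range(len(wr))]

def in_hyperproduct(wu, wr, ws, m):
    """Membership test for Ne(w_u) in Ne(w_r) *_m Ne(w_s):
    the product must lie <=_m below w_u, i.e. be equal at position m and <= elsewhere."""
    wv = product_m(wr, ws, m)
    return wv[m - 1] == wu[m - 1] and all(
        wv[k] <= wu[k] for k in range(len(wv)) if k != m - 1)

# Example at a fixed time t (weights already evaluated to numbers).
wr, ws, m = [1.0, 2.0, 0.5], [0.0, 3.0, 1.0], 2
print(product_m(wr, ws, m))                         # [1.0, 6.0, 2.5]
print(in_hyperproduct([4.0, 6.0, 2.5], wr, ws, m))  # True: dominates componentwise off m
print(in_hyperproduct([4.0, 5.0, 2.5], wr, ws, m))  # False: differs at the position m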
Using a certain generalization of the methods from [8], we obtain, after an investigation of the constructed structures, the following result:
Let $T = [0, t_0) \subseteq \mathbb{R}$, $t_0 \in \mathbb{R} \cup \{\infty\}$. Then for any positive integer $n \in \mathbb{N}$, $n \ge 2$, and for any integer $m$ such that $1 \le m \le n$, the hypergroup $(\mathrm{AN}(T)_m, *_m)$, where
$$\mathrm{AN}(T)_m = \left\{\mathrm{Ne}(w_r);\ w_r = \left(w_{r,1}(t), \ldots, w_{r,n}(t)\right) \in [C(T)]^n,\ w_{r,m}(t) > 0,\ t \in T\right\},$$
is a transposition hypergroup (i.e., a non-commutative join space) such that $(\mathrm{AN}_1(T)_m, *_m)$ is its subhypergroup, which is
-
Invertible (i.e., $\mathrm{Ne}(w_r)/\mathrm{Ne}(w_s) \cap \mathrm{AN}_1(T)_m \neq \emptyset$ implies $\mathrm{Ne}(w_s)/\mathrm{Ne}(w_r) \cap \mathrm{AN}_1(T)_m \neq \emptyset$ and $\mathrm{Ne}(w_r)\backslash\mathrm{Ne}(w_s) \cap \mathrm{AN}_1(T)_m \neq \emptyset$ implies $\mathrm{Ne}(w_s)\backslash\mathrm{Ne}(w_r) \cap \mathrm{AN}_1(T)_m \neq \emptyset$ for all pairs of neurons $\mathrm{Ne}(w_r), \mathrm{Ne}(w_s) \in \mathrm{AN}_1(T)_m$),
-
Closed (i.e., $\mathrm{Ne}(w_r)/\mathrm{Ne}(w_s) \subseteq \mathrm{AN}_1(T)_m$ and $\mathrm{Ne}(w_r)\backslash\mathrm{Ne}(w_s) \subseteq \mathrm{AN}_1(T)_m$ for all pairs $\mathrm{Ne}(w_r), \mathrm{Ne}(w_s) \in \mathrm{AN}_1(T)_m$),
-
Reflexive (i.e., $\mathrm{Ne}(w_r)\backslash\mathrm{AN}_1(T)_m = \mathrm{AN}_1(T)_m/\mathrm{Ne}(w_r)$ for any neuron $\mathrm{Ne}(w_r) \in \mathrm{AN}(T)_m$), and
-
Normal (i.e., $\mathrm{Ne}(w_r) *_m \mathrm{AN}_1(T)_m = \mathrm{AN}_1(T)_m *_m \mathrm{Ne}(w_r)$ for any neuron $\mathrm{Ne}(w_r) \in \mathrm{AN}(T)_m$).
Remark 3.
We can define a certain transformation function which maps the output function $y^{[\alpha]}(t)$ onto the output function $y^{[\alpha+1]}(t)$. This function, denoted by $\rho^{[\alpha]}$, also determines the transformation $S^{[\alpha]}$ of powers of the corresponding differential neurons: $D^{\alpha} \xrightarrow{S^{[\alpha]}} D^{\alpha+1}$. In more detail, let us describe the output functions $y^{[\alpha]}(t)$, $y^{[\alpha+1]}(t)$ and the mentioned transformation function $\rho^{[\alpha]}$:
$$y^{[\alpha]}(t) = \left(1 + w_m(t) + \cdots + w_m^{\alpha-1}(t)\right)\left(w_1(t)\,p(t) + w_2(t)\,\frac{dp(t)}{dt} + \cdots + w_{m-1}(t)\,\frac{d^{m-2}p(t)}{dt^{m-2}} + w_{m+1}(t)\,\frac{d^{m}p(t)}{dt^{m}} + \cdots + w_n(t)\,\frac{d^{n-1}p(t)}{dt^{n-1}}\right) + w_m^{\alpha}(t)\,\frac{d^{m-1}p(t)}{dt^{m-1}} + \left(b\,\frac{d^{n}p(t)}{dt^{n}}\right)^{\alpha},$$
$$y^{[\alpha+1]}(t) = \left(1 + w_m(t) + \cdots + w_m^{\alpha-1}(t) + w_m^{\alpha}(t)\right)\left(w_1(t)\,p(t) + w_2(t)\,\frac{dp(t)}{dt} + \cdots + w_{m-1}(t)\,\frac{d^{m-2}p(t)}{dt^{m-2}} + w_{m+1}(t)\,\frac{d^{m}p(t)}{dt^{m}} + \cdots + w_n(t)\,\frac{d^{n-1}p(t)}{dt^{n-1}}\right) + w_m^{\alpha+1}(t)\,\frac{d^{m-1}p(t)}{dt^{m-1}} + \left(b\,\frac{d^{n}p(t)}{dt^{n}}\right)^{\alpha+1}.$$
The transformation function $\rho^{[\alpha]}$ maps the output function $y^{[\alpha]}(t)$ onto the output function $y^{[\alpha+1]}(t)$ and thereby determines the transformation $D^{\alpha} \xrightarrow{S^{[\alpha]}} D^{\alpha+1}$ of powers of the corresponding differential neurons.
So,
$$\rho^{[\alpha]}\left(\sum_{r=0}^{\alpha-1} w_m^{r}(t)\cdot\sum_{\substack{k=1\\ k \neq m}}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + w_m^{\alpha}(t)\,\frac{d^{m-1}p(t)}{dt^{m-1}} + \left(b\,\frac{d^{n}p(t)}{dt^{n}}\right)^{\alpha}\right) = \sum_{r=0}^{\alpha} w_m^{r}(t)\cdot\sum_{\substack{k=1\\ k \neq m}}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + w_m^{\alpha+1}(t)\,\frac{d^{m-1}p(t)}{dt^{m-1}} + \left(b\,\frac{d^{n}p(t)}{dt^{n}}\right)^{\alpha+1}.$$
Denoting
$$w_m^{[\alpha-1]} = \sum_{r=0}^{\alpha-1} w_m^{r}(t) \quad\text{and}\quad v^{[\alpha]} = \sum_{\substack{k=1\\ k \neq m}}^{n} w_k(t)\,\frac{d^{k-1}p(t)}{dt^{k-1}} + w_m^{\alpha}(t)\,\frac{d^{m-1}p(t)}{dt^{m-1}} + \left(b\,\frac{d^{n}p(t)}{dt^{n}}\right)^{\alpha},$$
we can write
$$\rho^{[\alpha]}\left(w_m^{[\alpha-1]} \cdot v^{[\alpha]}\right) = w_m^{[\alpha]} \cdot v^{[\alpha+1]}.$$

5. Conclusions

We have constructed the infinite cyclic group $(G_{D_n}, \cdot_m)$ of differential neurons, which is isomorphic to the cyclic group $(\mathbb{Z}, +)$ and possesses the neuron $N_1(e)_m$ as the identity element of $(G_{D_n}, \cdot_m)$. Thus,
$$\left(\{N_1(e)_m\} \cup \{D^{\alpha}\mathrm{Ne}_p(w);\ \alpha \in \mathbb{Z},\ \alpha \neq 0\}, \cdot_m\right) = (G_{D_n}, \cdot_m) \cong (\mathbb{Z}, +).$$
It is to be noted that the above-constructed cyclic (infinite) group of artificial differential neurons can also be used for the construction of certain hyperstructures formed by such neurons [17,18,19,20]. So the presented approach enables an additional elaboration of the hyperstructure theory ([8,9,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32]) in connection with time-varying weights and with vectors of differentiable input functions.
The construction of the considered infinite cyclic group of differential neurons can be transferred onto other isomorphic images of it by using other weights and inputs. After such constructions, it is possible to create finitely or infinitely generated abelian groups of artificial differential neurons and to investigate their direct products or sums. Using a suitable ordering, these considerations make it possible to obtain neural networks with prescribed structures.

Author Contributions

Investigation, J.C., B.S.; writing—original draft preparation, J.C., B.S.; writing—review and editing, J.C., B.S., J.V. All authors have read and agreed to the published version of the manuscript.

Funding

J.C. was supported by the FEKT-S-17-4225 grant of Brno University of Technology and J.V. was supported by the FEKT-S-20-6225 grant of Brno University of Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to express their thanks to Dario Fasino and Domenico Freni.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

1. Chvalina, J.; Novák, M.; Smetana, B. Construction on an Infinite Cyclic Monoid of Differential Neurons. In Mathematics, Information Technologies and Applied Sciences 2021, Post-Conference Proceedings of Extended Versions of Selected Papers, Proceedings of the MITAV 2021, Brno, Czech Republic, 17–18 June 2021; Baštinec, J., Hrubý, M., Eds.; University of Defence: Brno, Czech Republic, 2021; pp. 1–10.
2. Koskela, T. Neural Network Methods in Analysing and Modelling Time Varying Processes; Report B, 35; Helsinki University of Technology Laboratory of Computational Engineering Publications, Department of Electrical and Communications: Helsinki, Finland, 2003.
3. Buchholz, S. A Theory of Neural Computation with Clifford-Algebras; Technical Report Number 0504; Christian-Albrechts-Universität zu Kiel, Institut für Informatik und Praktische Mathematik: Kiel, Germany, 2005.
4. Tučková, J.; Šebesta, V. Data Mining Approach for Prosody Modelling by ANN in Text-to-Speech Synthesis. In Proceedings of the IASTED International Conference on Artificial Intelligence and Applications—AIA 2001, Marbella, Spain, 4–7 September 2001; Hamza, M.H., Ed.; ACTA Press: Marbella, Spain, 2001; pp. 164–166.
5. Volná, E. Neuronové Sítě 1 [Neural Networks 1], 2nd ed.; Ostravská Univerzita: Ostrava, Czech Republic, 2008.
6. Waldron, M.B. Time varying neural networks. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, New Orleans, LA, USA, 4–7 November 1988.
7. Chvalina, J.; Smetana, B. Models of Iterated Artificial Neurons. In Proceedings of the 18th Conference on Applied Mathematics Aplimat 2019, Bratislava, Slovakia, 5–7 February 2019; pp. 203–212.
8. Chvalina, J.; Smetana, B. Groups and Hypergroups of Artificial Neurons. In Proceedings of the 17th Conference on Applied Mathematics Aplimat 2018, Bratislava, Slovakia, 6–8 February 2018; pp. 232–243.
9. Chvalina, J.; Smetana, B. Solvability of certain groups of time varying artificial neurons. Ital. J. Pure Appl. Math. 2021, 45, 80–94.
10. Pollock, D.; Waldron, M.B. Phase dependent output in a time varying neural net. In Proceedings of the Annual Conference on EMBS, Seattle, WA, USA, 9–12 November 1989; pp. 2054–2055.
11. Goodman, F.M. Algebra: Abstract and Concrete; Prentice Hall: London, UK, 1998.
12. Chvalina, J.; Hošková-Mayerová, Š.; Dehghan Nezhad, A. General actions of hypergroups and some applications. Analele Stiint. Univ. Ovidius Constanta 2013, 21, 59–82.
13. Chvalina, J.; Chvalinová, L. Action of centralizer hypergroups of n-th order linear differential operators on rings of smooth functions. J. Appl. Math. 2008, 1, 45–53.
14. Chvalina, J.; Chvalinová, L. Modelling of join spaces by n-th order linear ordinary differential operators. In Proceedings of the 4th International Conference APLIMAT 2005, Bratislava, Slovakia, 1–4 February 2005; pp. 279–284.
15. Chvalina, J.; Novák, M.; Smetana, B.; Staněk, D. Sequences of Groups, Hypergroups and Automata of Linear Ordinary Differential Operators. Mathematics 2021, 9, 319.
16. Chvalina, J.; Novák, M.; Staněk, D. Sequences of groups and hypergroups of linear ordinary differential operators. Ital. J. Pure Appl. Math. 2019, accepted.
17. Novák, M. n-ary hyperstructures constructed from binary quasi-ordered semigroups. Analele Stiint. Univ. Ovidius Constanta Ser. Mat. 2014, 22, 147–168.
18. Novák, M. On EL-semihypergroups. Eur. J. Comb. 2015, 44, 274–286.
19. Novák, M. Some basic properties of EL-hyperstructures. Eur. J. Comb. 2013, 34, 446–459.
20. Cristea, I.; Novák, M.; Křehlík, Š. A class of hyperlattices induced by quasi-ordered semigroups. In Proceedings of the 16th Conference on Applied Mathematics Aplimat 2017, Bratislava, Slovakia, 31 January–2 February 2017; pp. 1124–1135.
21. Corsini, P. Prolegomena of Hypergroup Theory; Aviani Editore Tricesimo: Udine, Italy, 1993.
22. Corsini, P.; Leoreanu, V. Applications of Hyperstructure Theory; Kluwer: Dordrecht, The Netherlands; Boston, MA, USA; London, UK, 2003.
23. Cristea, I. Several aspects on the hypergroups associated with n-ary relations. Analele Stiint. Univ. Ovidius Constanta 2009, 17, 99–110.
24. Cristea, I.; Ştefănescu, M. Binary relations and reduced hypergroups. Discrete Math. 2008, 308, 3537–3544.
25. Cristea, I.; Ştefănescu, M. Hypergroups and n-ary relations. Eur. J. Comb. 2010, 31, 780–789.
26. Leoreanu-Fotea, V.; Ciurea, C.D. On a P-hypergroup. J. Basic Sci. 2008, 4, 75–79.
27. Račková, P. Hypergroups of symmetric matrices. In Proceedings of the 10th International Congress of Algebraic Hyperstructures and Applications (AHA), Brno, Czech Republic, 3–9 September 2008; pp. 267–272.
28. Vougiouklis, T. Hyperstructures and their Representations; Hadronic Press: Palm Harbor, FL, USA, 1994.
29. Novák, M.; Cristea, I. Composition in EL-hyperstructures. Hacet. J. Math. Stat. 2019, 48, 45–58.
30. Vougiouklis, T. Cyclicity in a special class of hypergroups. Acta Univ. Carol. Math. Phys. 1981, 22, 3–6.
31. Chvalina, J.; Svoboda, Z. Sandwich semigroups of solutions of certain functional equations and hyperstructures determined by sandwiches of functions. J. Appl. Math. 2009, 2, 35–43.
32. Borzooei, R.A.; Varasteh, H.R.; Hasankhani, A. F-Multiautomata on Join Spaces Induced by Differential Operators. Appl. Math. 2014, 5, 1386–1391.

Share and Cite

Chvalina, J.; Smetana, B.; Vyroubalová, J. Construction of an Infinite Cyclic Group Formed by Artificial Differential Neurons. Mathematics 2022, 10, 1571. https://0-doi-org.brum.beds.ac.uk/10.3390/math10091571

AMA Style

Chvalina J, Smetana B, Vyroubalová J. Construction of an Infinite Cyclic Group Formed by Artificial Differential Neurons. Mathematics. 2022; 10(9):1571. https://0-doi-org.brum.beds.ac.uk/10.3390/math10091571

Chicago/Turabian Style

Chvalina, Jan, Bedřich Smetana, and Jana Vyroubalová. 2022. "Construction of an Infinite Cyclic Group Formed by Artificial Differential Neurons" Mathematics 10, no. 9: 1571. https://0-doi-org.brum.beds.ac.uk/10.3390/math10091571

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.
