Article

Existence, Uniqueness, and Stability Analysis of the Probabilistic Functional Equation Emerging in Mathematical Biology and the Theory of Learning

1 Department of Mathematics and Statistics, Faculty of Science and Technology, Thammasat University Rangsit Center, Pathum Thani 12120, Thailand
2 Department of Mathematics Education, College of Education, Mokwon University, Daejeon 35349, Korea
3 School of Science, Nanjing University of Science and Technology, Nanjing 210094, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 13 July 2021 / Revised: 20 July 2021 / Accepted: 21 July 2021 / Published: 21 July 2021
(This article belongs to the Special Issue Advance in Functional Equations)

Abstract: Probabilistic functional equations have been used to analyze various models in computational biology and learning theory. Notably, such equations are linked to the symmetry of the transformations that define the underlying system. Our objective is to propose a generic probabilistic functional equation that covers most of the mathematical models addressed in the existing literature. Standard fixed-point tools are used to examine the existence, uniqueness, and stability of the suggested equation's solution. Two examples are also given to emphasize the significance of our findings.

1. Introduction

In an animal or human being, the learning phase may often be viewed as a series of choices between multiple possible reactions. Even in basic repetitive experiments under strictly regulated conditions, preference sequences are mostly volatile, suggesting that probability governs the choice of response. It is also helpful to identify structural adjustments in the series of alternatives that reflect changes in trial-to-trial outcomes. From this perspective, most learning analyses describe the probability of a trial-to-trial occurrence by a stochastic mechanism.
Experiments in mathematical learning have long shown that the behavior in a basic learning experiment follows a stochastic model, so the idea itself is not new (for details, see [1,2]). However, after 1950, two crucial characteristics emerged, mainly in the work of Bush, Estes, and Mosteller. First, one of the most important features of the suggested models is the inclusive character of the learning process. Second, such models can be analyzed without obscuring their statistical features.
Symmetries have emerged in mathematical formulations many times, and they have been shown to be important for solving problems or furthering research. It is possible to see high-quality research that uses nontrivial mathematics and related geometries in the context of important issues from a wide range of fields.
In learning theory and mathematical biology, the solution of the following equation is of great importance:

L(x) = x L(u + (1 − u)x) + (1 − x) L((1 − v)x),  x ∈ J (= [0, 1]),    (1)

where L : J → ℝ is an unknown function and 0 < u, v < 1 are the learning-rate parameters that measure the effectiveness of the responses in a two-choice situation.
In 1976, Istrăţescu [3] used the above functional equation to examine the behavior of predatory animals that prey on two distinct types of prey. This behavior was described by Markov transitions that send the state x to u + (1 − u)x and to (1 − v)x, with transition probabilities P(x → u + (1 − u)x) and P(x → (1 − v)x).
Bush and Wilson [1] used such operators to examine the movement of a fish in two-choice circumstances. They claimed that under such behavior, there are four possible events: left-reward, right-nonreward, right-reward, left-nonreward.
It is widely assumed that getting rewarded on one side would increase the probability of that side being selected in the following trial. However, the reasoning for non-rewarded trials is less apparent. According to an extinction or reinforcement theory (see Table 1), the probability of choosing an unrewarded side in the next trial would decrease. In contrast, a model that relies on habit formation or secondary reinforcement (see Table 2) would suggest that simply choosing a side would increase the probability of selecting that side in the upcoming trials.
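To make the stochastic process behind Equation (1) concrete, the following sketch simulates the underlying two-choice Markov chain: with probability x the response is reinforced and the state jumps to u + (1 − u)x, otherwise it decays to (1 − v)x. The parameter values u = 0.3, v = 0.2 and the trial count are illustrative and not taken from the paper.

```python
import random

def trial(x, u=0.3, v=0.2):
    # One trial of the two-choice process behind Equation (1):
    # with probability x the state jumps to u + (1 - u) * x,
    # otherwise it decays to (1 - v) * x.
    if random.random() < x:
        return u + (1 - u) * x
    return (1 - v) * x

def simulate(x0=0.5, trials=1000, seed=0):
    random.seed(seed)
    x = x0
    history = [x]
    for _ in range(trials):
        x = trial(x)
        history.append(x)
    return history

hist = simulate()
# Both operators map [0, 1] into itself, so the state never leaves J.
assert all(0.0 <= s <= 1.0 for s in hist)
```

Both update maps are affine contractions of J into itself, which is exactly the structure the fixed-point arguments below exploit.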
In 2015, Berinde and Khan [4] generalized the above idea by proposing the following functional equation:

L(x) = x L(z₁(x)) + (1 − x) L(z₂(x)),  x ∈ J,    (2)

where L : J → ℝ is unknown and z₁, z₂ : J → J are given contraction mappings with z₁(1) = 1 and z₂(0) = 0.
Recently, Turab and Sintunavarat [5] utilized the above ideas and suggested the functional equation

L(x) = x L(rx + (1 − r)Θ₁) + (1 − x) L(sx + (1 − s)Θ₂),  x ∈ J,    (3)

where L : J → ℝ is unknown, 0 < r, s < 1, and Θ₁, Θ₂ ∈ J. This equation was used to study a specific kind of psychological resistance of dogs enclosed in a small box.
Several other studies on human and animal actions in probability-learning scenarios have produced different results (see [6,7,8,9,10,11,12]).
Here, following the above work with the four possible events (right-reward, right-nonreward, left-reward, left-nonreward) discussed by Bush and Wilson [1], we propose the following general functional equation:

L(x) = [(ν(x) − k)/(ℓ − k)] [(ζ − k)/(ℓ − k)] L(z₁(x)) + [(ν(x) − k)/(ℓ − k)] [1 − (ζ − k)/(ℓ − k)] L(z₂(x)) + [(ζ − k)/(ℓ − k)] [1 − (ν(x) − k)/(ℓ − k)] L(z₃(x)) + [1 − (ζ − k)/(ℓ − k)] [1 − (ν(x) − k)/(ℓ − k)] L(z₄(x))    (4)

for all x ∈ [k, ℓ], where ζ ∈ [k, ℓ], L : [k, ℓ] → ℝ is an unknown function, and z₁, z₂, z₃, z₄ : [k, ℓ] → [k, ℓ] are given mappings. In addition, ν : J → J is a non-expansive mapping with ν(k) = k and |ν(x)| ≤ λ₅ for some λ₅ ≥ 0.
Our objective is to prove existence, uniqueness, and Hyers–Ulam (HU)- and Hyers–Ulam–Rassias (HUR)-type stability results for Equation (4) by using appropriate fixed-point methods. We then provide two examples to demonstrate the importance of our findings.
The following classical result will be needed in what follows.
Theorem 1
([13]). Let (J, d) be a complete metric space and let L : J → J be a Banach contraction mapping (BCM), that is,

d(Lu, Lv) ≤ η d(u, v)

for some η < 1 and all u, v ∈ J. Then L has exactly one fixed point. Furthermore, the Picard iteration (PI) {uₙ} in J, defined by uₙ = L(uₙ₋₁) for all n ∈ ℕ, where u₀ ∈ J, converges to the unique fixed point of L.
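As a quick illustration of Theorem 1 (not part of the paper), the sketch below runs the Picard iteration for the Heron map T(x) = (x + 2/x)/2, which is a Banach contraction on [1, 2]; the iterates converge to its unique fixed point √2.

```python
import math

def picard(T, x0, tol=1e-12, max_iter=10_000):
    # Picard iteration u_n = T(u_{n-1}); for a Banach contraction
    # this converges to the unique fixed point (Theorem 1).
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# Heron's map is a contraction on [1, 2] (|T'(x)| <= 1/2 there)
# whose fixed point is sqrt(2).
root = picard(lambda x: 0.5 * (x + 2.0 / x), x0=1.0)
assert abs(root - math.sqrt(2)) < 1e-9
```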

2. Main Results

Let J = [k, ℓ], where k, ℓ ∈ ℝ with k < ℓ. Let T denote the class of all continuous real-valued functions L : J → ℝ such that L(k) = 0 and

sup_{ξ ≠ χ} |L(ξ) − L(χ)| / |ξ − χ| < ∞.    (5)

Then (T, ‖·‖) is a normed space (for details, see [4,12]), where ‖·‖ is given by

‖L‖ = sup_{ξ ≠ χ} |L(ξ) − L(χ)| / |ξ − χ|    (6)

for all L ∈ T.
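The quantity in (6) is just the Lipschitz seminorm of L over J. As an illustration (not from the paper), it can be approximated on a finite grid:

```python
def lip_norm(L, grid):
    # Discrete version of the norm in (6): the supremum of
    # |L(xi) - L(chi)| / |xi - chi| over distinct grid points
    # (the Lipschitz seminorm of L restricted to the grid).
    best = 0.0
    for i, xi in enumerate(grid):
        for chi in grid[i + 1:]:
            best = max(best, abs(L(xi) - L(chi)) / abs(xi - chi))
    return best

grid = [i / 100 for i in range(101)]
assert abs(lip_norm(lambda t: 3 * t, grid) - 3.0) < 1e-9   # linear map: norm 3
assert abs(lip_norm(lambda t: t * t, grid) - 1.99) < 1e-6  # sup of xi + chi on the grid
```

For L(t) = t² the difference quotients equal ξ + χ, so the grid supremum 1.99 approaches the true value 2 as the grid is refined.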
For computational convenience, we write (4) as

L(x) = u₁(y) w₁ L(z₁(x)) + u₁(y) w₂ L(z₂(x)) + w₁ u₂(y) L(z₃(x)) + w₂ u₂(y) L(z₄(x)),    (7)

where y = ν(x), u₁(y) = (y − k)/(ℓ − k), u₂(y) = (ℓ − y)/(ℓ − k), w₁ = (ζ − k)/(ℓ − k), w₂ = (ℓ − ζ)/(ℓ − k), and L : J → ℝ is an unknown function such that L(k) = 0. In addition, z₁, z₂, z₃, z₄ : J → J are BCMs with contractive coefficients λ₁, λ₂, λ₃, λ₄, respectively, satisfying the boundary conditions

z₃(k) = k = z₄(k).    (8)
The primary goal of this part is to use fixed-point techniques to determine the existence and uniqueness results of (7). We begin with the outcome stated below.
Theorem 2.
Consider the probabilistic functional Equation (7) with (8). Suppose that z₁(ℓ) = ℓ = z₂(ℓ) and that η₁ < 1, where

η₁ := (1 + λ₇)(w₁λ₁ + w₂λ₂) + (1 + λ₈)(w₁λ₃ + w₂λ₄)    (9)

with λ₇ = (λ₅ − k)/(ℓ − k) and λ₈ = (ℓ − λ₅)/(ℓ − k). Assume that there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a Banach space (BS), where ‖·‖ is given in (6), and the mapping M from C to C defined for each L ∈ T by

(ML)(x) = u₁(y) w₁ L(z₁(x)) + u₁(y) w₂ L(z₂(x)) + w₁ u₂(y) L(z₃(x)) + w₂ u₂(y) L(z₄(x))    (10)

for all x ∈ J is a self-mapping. Then M is a BCM with respect to the metric d induced by ‖·‖.
Proof. 
Let d : C × C → ℝ be the metric induced by ‖·‖ on C; then (C, d) is a complete metric space. We work with the operator M on C defined in (10).
Note that M is continuous and ‖ML‖ < ∞ for all L ∈ C, so M is a self-operator on C. Furthermore, it is clear that a solution of (7) is precisely a fixed point of M. Since M is a linear mapping, for L₁, L₂ ∈ C we obtain

‖ML₁ − ML₂‖ = ‖M(ΔL)‖,

where ΔL = L₁ − L₂. Thus, to evaluate ‖ML₁ − ML₂‖, we consider the quantity

Λ_{τ₁,τ₂} := |M(ΔL)(τ₁) − M(ΔL)(τ₂)| / |Δτ|,

where Δτ = τ₁ − τ₂. To this end, let L₁, L₂ ∈ C; for each τ₁, τ₂ ∈ J with τ₁ ≠ τ₂, we obtain
|M(ΔL)(τ₁) − M(ΔL)(τ₂)| / |Δτ|
= (1/|Δτ|) | u₁(ν(τ₁)) w₁ (ΔL)(z₁(τ₁)) + u₁(ν(τ₁)) w₂ (ΔL)(z₂(τ₁)) + w₁ u₂(ν(τ₁)) (ΔL)(z₃(τ₁)) + w₂ u₂(ν(τ₁)) (ΔL)(z₄(τ₁))
  − u₁(ν(τ₂)) w₁ (ΔL)(z₁(τ₂)) − w₂ u₁(ν(τ₂)) (ΔL)(z₂(τ₂)) − w₁ u₂(ν(τ₂)) (ΔL)(z₃(τ₂)) − w₂ u₂(ν(τ₂)) (ΔL)(z₄(τ₂)) |
= (1/|Δτ|) | u₁(ν(τ₁)) w₁ [(ΔL)(z₁(τ₁)) − (ΔL)(z₁(τ₂))] + u₁(ν(τ₁)) w₂ [(ΔL)(z₂(τ₁)) − (ΔL)(z₂(τ₂))]
  + w₁ u₂(ν(τ₁)) [(ΔL)(z₃(τ₁)) − (ΔL)(z₃(τ₂))] + w₂ u₂(ν(τ₁)) [(ΔL)(z₄(τ₁)) − (ΔL)(z₄(τ₂))]
  + w₁ [u₁(ν(τ₁)) − u₁(ν(τ₂))] (ΔL)(z₁(τ₂)) + w₂ [u₁(ν(τ₁)) − u₁(ν(τ₂))] (ΔL)(z₂(τ₂))
  + w₁ [u₂(ν(τ₁)) − u₂(ν(τ₂))] (ΔL)(z₃(τ₂)) + w₂ [u₂(ν(τ₁)) − u₂(ν(τ₂))] (ΔL)(z₄(τ₂)) |.
Our aim is to use the definition of the norm (6) here. Therefore, by utilizing (8) together with the condition z₁(ℓ) = ℓ = z₂(ℓ) and the non-expansiveness of ν, we have
|M(ΔL)(τ₁) − M(ΔL)(τ₂)| / |Δτ|
≤ (1/|Δτ|) w₁ u₁(ν(τ₁)) · [ |(ΔL)(z₁(τ₁)) − (ΔL)(z₁(τ₂))| / |z₁(τ₁) − z₁(τ₂)| ] · |z₁(τ₁) − z₁(τ₂)|
+ (1/|Δτ|) w₂ u₁(ν(τ₁)) · [ |(ΔL)(z₂(τ₁)) − (ΔL)(z₂(τ₂))| / |z₂(τ₁) − z₂(τ₂)| ] · |z₂(τ₁) − z₂(τ₂)|
+ (1/|Δτ|) w₁ u₂(ν(τ₁)) · [ |(ΔL)(z₃(τ₁)) − (ΔL)(z₃(τ₂))| / |z₃(τ₁) − z₃(τ₂)| ] · |z₃(τ₁) − z₃(τ₂)|
+ (1/|Δτ|) w₂ u₂(ν(τ₁)) · [ |(ΔL)(z₄(τ₁)) − (ΔL)(z₄(τ₂))| / |z₄(τ₁) − z₄(τ₂)| ] · |z₄(τ₁) − z₄(τ₂)|
+ (1/(ℓ − k)) w₁ · [ |(ΔL)(z₁(τ₂)) − (ΔL)(z₁(ℓ))| / |z₁(τ₂) − z₁(ℓ)| ] · |z₁(τ₂) − z₁(ℓ)|
+ (1/(ℓ − k)) w₂ · [ |(ΔL)(z₂(τ₂)) − (ΔL)(z₂(ℓ))| / |z₂(τ₂) − z₂(ℓ)| ] · |z₂(τ₂) − z₂(ℓ)|
+ (1/(ℓ − k)) w₁ · [ |(ΔL)(z₃(τ₂)) − (ΔL)(z₃(k))| / |z₃(τ₂) − z₃(k)| ] · |z₃(τ₂) − z₃(k)|
+ (1/(ℓ − k)) w₂ · [ |(ΔL)(z₄(τ₂)) − (ΔL)(z₄(k))| / |z₄(τ₂) − z₄(k)| ] · |z₄(τ₂) − z₄(k)|.
As z₁, ..., z₄ are contraction mappings with contractive coefficients λ₁, ..., λ₄, respectively, we obtain

|M(ΔL)(τ₁) − M(ΔL)(τ₂)| / |Δτ| ≤ w₁ u₁(ν(τ₁)) λ₁ ‖ΔL‖ + w₂ u₁(ν(τ₁)) λ₂ ‖ΔL‖ + w₁ u₂(ν(τ₁)) λ₃ ‖ΔL‖ + w₂ u₂(ν(τ₁)) λ₄ ‖ΔL‖ + w₁ λ₁ ‖ΔL‖ + w₂ λ₂ ‖ΔL‖ + w₁ λ₃ ‖ΔL‖ + w₂ λ₄ ‖ΔL‖.

Hence,

|M(ΔL)(τ₁) − M(ΔL)(τ₂)| / |Δτ| ≤ η₁ ‖ΔL‖,

where η₁ is defined in (9). This gives

d(ML₁, ML₂) = ‖ML₁ − ML₂‖ ≤ η₁ ‖ΔL‖ = η₁ d(L₁, L₂).
As 0 < η₁ < 1, this implies that M is a BCM with respect to the metric d induced by ‖·‖. □
Theorem 3.
Consider the probabilistic Equation (7) with (8). Suppose that z₁(ℓ) = ℓ = z₂(ℓ) and η₁ < 1, where η₁ is defined in (9). Assume that there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping M from C to C defined for each L ∈ T by (10) is a self-mapping. Then the probabilistic Equation (7) with (8) has a unique solution in C. Furthermore, the iteration {Lₙ} in C defined by

Lₙ(x) = u₁(y) w₁ Lₙ₋₁(z₁(x)) + u₁(y) w₂ Lₙ₋₁(z₂(x)) + w₁ u₂(y) Lₙ₋₁(z₃(x)) + w₂ u₂(y) Lₙ₋₁(z₄(x))    (11)

for all n ∈ ℕ, where L₀ ∈ C, converges to the unique solution of (7).
Proof. 
From Theorem 2, it is clear that M : C → C defined for each L ∈ T by (10) is a BCM with respect to the metric d induced by ‖·‖. Thus, by the Banach fixed-point theorem, we obtain the conclusion of this theorem. □
A similar estimation approach has been applied in a group control system (for the detail, see [14]).
We now consider a special case. If z₁, z₂, z₃, z₄ : J → J are contraction mappings with contractive coefficients λ₁ ≤ λ₂ ≤ λ₃ ≤ λ₄, respectively, then Theorems 2 and 3 yield the following results.
Corollary 1.
Consider the probabilistic Equation (7) associated with (8). Assume that z₁(ℓ) = ℓ = z₂(ℓ) with η₁ := 3λ₄(w₁ + w₂) < 1, and there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping M from C to C defined for each L ∈ T by

(ML)(x) = u₁(y) w₁ L(z₁(x)) + u₁(y) w₂ L(z₂(x)) + w₁ u₂(y) L(z₃(x)) + w₂ u₂(y) L(z₄(x))    (12)

for all x ∈ J is a self-mapping. Then M is a BCM with respect to the metric d induced by ‖·‖.
Corollary 2.
Consider the probabilistic Equation (7) associated with (8). Assume that z₁(ℓ) = ℓ = z₂(ℓ) with η₁ := 3λ₄(w₁ + w₂) < 1, and there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping M from C to C defined for each L ∈ T by (12) is a self-mapping. Then the probabilistic Equation (7) with (8) has a unique solution in C. Furthermore, the iteration {Lₙ} in C given by

Lₙ(x) = u₁(y) w₁ Lₙ₋₁(z₁(x)) + u₁(y) w₂ Lₙ₋₁(z₂(x)) + w₁ u₂(y) Lₙ₋₁(z₃(x)) + w₂ u₂(y) Lₙ₋₁(z₄(x))    (13)

for all n ∈ ℕ, where L₀ ∈ C, converges to the unique solution of (7).
The conditions z₁(ℓ) = ℓ = z₂(ℓ) and z₃(k) = k = z₄(k) are sufficient, but not necessary, for the main results above. In the following results, we use different conditions to prove the main conclusion.
Theorem 4.
Consider the probabilistic Equation (7) with (8). Assume that there exists λ₆ ≥ 0 such that

|zᵢ(x)| ≤ λ₆ for all x ∈ J, i = 1, 2, 3, 4,    (14)

and that η₂ < 1, where

η₂ := (1/(ℓ − k)) [ 2λ₆(w₁ + w₂) + (ℓ − k) ( λ₇(w₁λ₁ + w₂λ₂) + λ₈(w₁λ₃ + w₂λ₄) ) ],    (15)

with λ₇ = (λ₅ − k)/(ℓ − k) and λ₈ = (ℓ − λ₅)/(ℓ − k). Suppose that there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping M from C to C defined for each L ∈ T by

(ML)(x) = u₁(y) w₁ L(z₁(x)) + u₁(y) w₂ L(z₂(x)) + w₁ u₂(y) L(z₃(x)) + w₂ u₂(y) L(z₄(x))    (16)

for all x ∈ J is a self-mapping. Then M is a BCM with respect to the metric d induced by ‖·‖.
Proof. 
Let d : C × C → ℝ be the metric induced by ‖·‖ on C; then (C, d) is a complete metric space. We work with the operator M on C defined in (16).
Note that M is continuous and ‖ML‖ < ∞ for all L ∈ C, so M is a self-operator on C. Furthermore, it is clear that a solution of (7) is precisely a fixed point of M. Since M is a linear mapping, for L₁, L₂ ∈ C we obtain

‖ML₁ − ML₂‖ = ‖M(ΔL)‖,

where ΔL = L₁ − L₂. Thus, to evaluate ‖ML₁ − ML₂‖, we consider the quantity

Λ_{τ₁,τ₂} := |M(ΔL)(τ₁) − M(ΔL)(τ₂)| / |Δτ|,

where Δτ = τ₁ − τ₂. To this end, let L₁, L₂ ∈ C; for each τ₁, τ₂ ∈ J with τ₁ ≠ τ₂, we obtain
|M(ΔL)(τ₁) − M(ΔL)(τ₂)| / |Δτ|
= (1/|Δτ|) | u₁(ν(τ₁)) w₁ (ΔL)(z₁(τ₁)) + u₁(ν(τ₁)) w₂ (ΔL)(z₂(τ₁)) + w₁ u₂(ν(τ₁)) (ΔL)(z₃(τ₁)) + w₂ u₂(ν(τ₁)) (ΔL)(z₄(τ₁))
  − u₁(ν(τ₂)) w₁ (ΔL)(z₁(τ₂)) − w₂ u₁(ν(τ₂)) (ΔL)(z₂(τ₂)) − w₁ u₂(ν(τ₂)) (ΔL)(z₃(τ₂)) − w₂ u₂(ν(τ₂)) (ΔL)(z₄(τ₂)) |
= (1/|Δτ|) | u₁(ν(τ₁)) w₁ [(ΔL)(z₁(τ₁)) − (ΔL)(z₁(τ₂))] + u₁(ν(τ₁)) w₂ [(ΔL)(z₂(τ₁)) − (ΔL)(z₂(τ₂))]
  + w₁ u₂(ν(τ₁)) [(ΔL)(z₃(τ₁)) − (ΔL)(z₃(τ₂))] + w₂ u₂(ν(τ₁)) [(ΔL)(z₄(τ₁)) − (ΔL)(z₄(τ₂))]
  + w₁ [u₁(ν(τ₁)) − u₁(ν(τ₂))] (ΔL)(z₁(τ₂)) + w₂ [u₁(ν(τ₁)) − u₁(ν(τ₂))] (ΔL)(z₂(τ₂))
  + w₁ [u₂(ν(τ₁)) − u₂(ν(τ₂))] (ΔL)(z₃(τ₂)) + w₂ [u₂(ν(τ₁)) − u₂(ν(τ₂))] (ΔL)(z₄(τ₂)) |.
Here, we use the norm (6) together with condition (14). Thus, we have
|M(ΔL)(τ₁) − M(ΔL)(τ₂)| / |Δτ|
≤ (1/|Δτ|) w₁ u₁(ν(τ₁)) · [ |(ΔL)(z₁(τ₁)) − (ΔL)(z₁(τ₂))| / |z₁(τ₁) − z₁(τ₂)| ] · |z₁(τ₁) − z₁(τ₂)|
+ (1/|Δτ|) w₂ u₁(ν(τ₁)) · [ |(ΔL)(z₂(τ₁)) − (ΔL)(z₂(τ₂))| / |z₂(τ₁) − z₂(τ₂)| ] · |z₂(τ₁) − z₂(τ₂)|
+ (1/|Δτ|) w₁ u₂(ν(τ₁)) · [ |(ΔL)(z₃(τ₁)) − (ΔL)(z₃(τ₂))| / |z₃(τ₁) − z₃(τ₂)| ] · |z₃(τ₁) − z₃(τ₂)|
+ (1/|Δτ|) w₂ u₂(ν(τ₁)) · [ |(ΔL)(z₄(τ₁)) − (ΔL)(z₄(τ₂))| / |z₄(τ₁) − z₄(τ₂)| ] · |z₄(τ₁) − z₄(τ₂)|
+ (1/(ℓ − k)) w₁ · [ |(ΔL)(z₁(τ₂)) − (ΔL)(0)| / |z₁(τ₂) − 0| ] · |z₁(τ₂) − 0|
+ (1/(ℓ − k)) w₂ · [ |(ΔL)(z₂(τ₂)) − (ΔL)(0)| / |z₂(τ₂) − 0| ] · |z₂(τ₂) − 0|
+ (1/(ℓ − k)) w₁ · [ |(ΔL)(z₃(τ₂)) − (ΔL)(0)| / |z₃(τ₂) − 0| ] · |z₃(τ₂) − 0|
+ (1/(ℓ − k)) w₂ · [ |(ΔL)(z₄(τ₂)) − (ΔL)(0)| / |z₄(τ₂) − 0| ] · |z₄(τ₂) − 0|.
As z₁, ..., z₄ are contraction mappings with contractive coefficients λ₁, ..., λ₄, respectively, we obtain

|M(ΔL)(τ₁) − M(ΔL)(τ₂)| / |Δτ| ≤ w₁ u₁(ν(τ₁)) λ₁ ‖ΔL‖ + w₂ u₁(ν(τ₁)) λ₂ ‖ΔL‖ + w₁ u₂(ν(τ₁)) λ₃ ‖ΔL‖ + w₂ u₂(ν(τ₁)) λ₄ ‖ΔL‖ + (w₁ λ₆/(ℓ − k)) ‖ΔL‖ + (w₂ λ₆/(ℓ − k)) ‖ΔL‖ + (w₁ λ₆/(ℓ − k)) ‖ΔL‖ + (w₂ λ₆/(ℓ − k)) ‖ΔL‖ ≤ η₂ ‖ΔL‖,

where η₂ is defined in (15). This gives

d(ML₁, ML₂) = ‖ML₁ − ML₂‖ ≤ η₂ ‖ΔL‖ = η₂ d(L₁, L₂).

As 0 < η₂ < 1, this implies that M is a BCM with respect to the metric d induced by ‖·‖. □
Theorem 5.
Consider the probabilistic functional Equation (7) with (8). Suppose that (14) holds and η₂ < 1, where η₂ is defined in (15). Assume that there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping M from C to C defined for each L ∈ T by (16) is a self-mapping. Then the probabilistic Equation (7) with (8) has a unique solution in C. Furthermore, the iteration {Lₙ} in C defined by

Lₙ(x) = u₁(y) w₁ Lₙ₋₁(z₁(x)) + u₁(y) w₂ Lₙ₋₁(z₂(x)) + w₁ u₂(y) Lₙ₋₁(z₃(x)) + w₂ u₂(y) Lₙ₋₁(z₄(x))    (17)

for all n ∈ ℕ, where L₀ ∈ C, converges to the unique solution of (7).
Proof. 
From Theorem 4, it is clear that M : C → C defined for each L ∈ T by (16) is a BCM with respect to the metric d induced by ‖·‖. Thus, by the Banach fixed-point theorem, we obtain the conclusion of this theorem. □
We again consider a special case. If z₁, z₂, z₃, z₄ : J → J are contraction mappings with contractive coefficients λ₁ ≤ λ₂ ≤ λ₃ ≤ λ₄, respectively, then Theorems 4 and 5 yield the following results.
Corollary 3.
Consider the probabilistic functional Equation (7) associated with (8). Assume that there exists λ₆ ≥ 0 as in (14) and η₂ < 1, where

η₂ := ((w₁ + w₂)/(ℓ − k)) (λ₄(ℓ − k) + 2λ₆).    (18)

Suppose that there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping M from C to C defined for each L ∈ T by

(ML)(x) = u₁(y) w₁ L(z₁(x)) + u₁(y) w₂ L(z₂(x)) + w₁ u₂(y) L(z₃(x)) + w₂ u₂(y) L(z₄(x))    (19)

for all x ∈ J is a self-mapping. Then M is a BCM with respect to the metric d induced by ‖·‖.
Corollary 4.
Consider the probabilistic Equation (7) associated with (8). Assume that (14) holds and η₂ < 1, where η₂ is defined in (18). Suppose that there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping M from C to C defined for each L ∈ T by (19) is a self-mapping. Then the functional Equation (7) with (8) has a unique solution in C. Furthermore, the iteration {Lₙ} in C defined by

Lₙ(x) = u₁(y) w₁ Lₙ₋₁(z₁(x)) + u₁(y) w₂ Lₙ₋₁(z₂(x)) + w₁ u₂(y) Lₙ₋₁(z₃(x)) + w₂ u₂(y) Lₙ₋₁(z₄(x))    (20)

for all n ∈ ℕ, where L₀ ∈ C, converges to the unique solution of (7).
Remark 1.
Our proposed probabilistic Equation (7) is a generalization of the functional equations discussed in [6,8].
We now offer the following examples to show the significance of our results.
Example 1.
Consider the probabilistic functional equation

L(x) = w₁ ((x − k)/(ℓ − k)) L( ((a − k)/(ℓ − k)) x + (1 − (a − k)/(ℓ − k)) ℓ ) + w₂ ((x − k)/(ℓ − k)) L( ((b − k)/(ℓ − k)) x + (1 − (b − k)/(ℓ − k)) ℓ ) + w₁ (1 − (x − k)/(ℓ − k)) L( ((c − k)/(ℓ − k))(x − k) + k ) + w₂ (1 − (x − k)/(ℓ − k)) L( ((d − k)/(ℓ − k))(x − k) + k )    (21)

for all x ∈ J with k < a, b, c, d < ℓ and L ∈ T. If we define the mappings ν, z₁, z₂, z₃, z₄ : J → J by

ν(x) = x, z₁(x) = ((a − k)/(ℓ − k)) x + (1 − (a − k)/(ℓ − k)) ℓ, z₂(x) = ((b − k)/(ℓ − k)) x + (1 − (b − k)/(ℓ − k)) ℓ, z₃(x) = ((c − k)/(ℓ − k))(x − k) + k, z₄(x) = ((d − k)/(ℓ − k))(x − k) + k

for all x ∈ J, then our Equation (7) reduces to Equation (21). It is easy to see that z₃, z₄ satisfy the boundary conditions (8), and z₁(ℓ) = ℓ = z₂(ℓ). In addition,
|z₁(μ) − z₁(υ)| ≤ ((a − k)/(ℓ − k)) |μ − υ|, |z₂(μ) − z₂(υ)| ≤ ((b − k)/(ℓ − k)) |μ − υ|, |z₃(μ) − z₃(υ)| ≤ ((c − k)/(ℓ − k)) |μ − υ|, |z₄(μ) − z₄(υ)| ≤ ((d − k)/(ℓ − k)) |μ − υ|,

for all μ, υ ∈ J. This implies that z₁, ..., z₄ are contraction mappings with coefficients

λ₁ = (a − k)/(ℓ − k), λ₂ = (b − k)/(ℓ − k), λ₃ = (c − k)/(ℓ − k), and λ₄ = (d − k)/(ℓ − k),

respectively, and ν : J → J is a non-expansive mapping with λ₅ = 1. If

η₁ := ((ℓ + 1 − 2k)/(ℓ − k)) [ w₁ (a − k)/(ℓ − k) + w₂ (b − k)/(ℓ − k) ] + ((2ℓ − 1 − k)/(ℓ − k)) [ w₁ (c − k)/(ℓ − k) + w₂ (d − k)/(ℓ − k) ] < 1,
and there is a nonempty subset C of S := {L ∈ T : L(ℓ) ≤ ℓ} such that (C, ‖·‖) is a BS and the mapping M on C given by (21) is a self-mapping for all x ∈ J, then all the hypotheses of Theorem 2 are fulfilled, and therefore we obtain the existence of a solution to the functional Equation (21).
If we define L₀ = I ∈ C (where I is the identity function) as an initial approximation, then by Theorem 3 the following iteration converges to the unique solution of (21):

L₁(x) = w₁ ((x − k)/(ℓ − k)) L₀( ((a − k)/(ℓ − k)) x + (1 − (a − k)/(ℓ − k)) ℓ ) + w₂ ((x − k)/(ℓ − k)) L₀( ((b − k)/(ℓ − k)) x + (1 − (b − k)/(ℓ − k)) ℓ ) + w₁ (1 − (x − k)/(ℓ − k)) L₀( ((c − k)/(ℓ − k))(x − k) + k ) + w₂ (1 − (x − k)/(ℓ − k)) L₀( ((d − k)/(ℓ − k))(x − k) + k ),
L₂(x) = w₁ ((x − k)/(ℓ − k)) L₁( ((a − k)/(ℓ − k)) x + (1 − (a − k)/(ℓ − k)) ℓ ) + w₂ ((x − k)/(ℓ − k)) L₁( ((b − k)/(ℓ − k)) x + (1 − (b − k)/(ℓ − k)) ℓ ) + w₁ (1 − (x − k)/(ℓ − k)) L₁( ((c − k)/(ℓ − k))(x − k) + k ) + w₂ (1 − (x − k)/(ℓ − k)) L₁( ((d − k)/(ℓ − k))(x − k) + k ),
⋮
Lₙ(x) = w₁ ((x − k)/(ℓ − k)) Lₙ₋₁( ((a − k)/(ℓ − k)) x + (1 − (a − k)/(ℓ − k)) ℓ ) + w₂ ((x − k)/(ℓ − k)) Lₙ₋₁( ((b − k)/(ℓ − k)) x + (1 − (b − k)/(ℓ − k)) ℓ ) + w₁ (1 − (x − k)/(ℓ − k)) Lₙ₋₁( ((c − k)/(ℓ − k))(x − k) + k ) + w₂ (1 − (x − k)/(ℓ − k)) Lₙ₋₁( ((d − k)/(ℓ − k))(x − k) + k )

for all n ∈ ℕ.
Example 2.
Consider the probabilistic functional equation

L(x) = ζx L( (px + 1 − p)/4 ) + (1 − ζ)x L( (qx + 1 − q)/9 ) + ζ(1 − x) L( rx/13 ) + (1 − ζ)(1 − x) L( sx/7 )    (22)

for all x ∈ J = [0, 1] and 0 ≤ ζ ≤ 1, with 0 < p, q, r, s < 1 and L ∈ T. If we define the mappings ν, z₁, z₂, z₃, z₄ : J → J by

ν(x) = x, z₁(x) = (px + 1 − p)/4, z₂(x) = (qx + 1 − q)/9, z₃(x) = rx/13, z₄(x) = sx/7
for all x ∈ J, then our Equation (7) reduces to Equation (22). It is easy to see that z₃ and z₄ satisfy the boundary conditions (8). In addition,

|z₁(μ) − z₁(υ)| ≤ (p/4) |μ − υ|, |z₂(μ) − z₂(υ)| ≤ (q/9) |μ − υ|, |z₃(μ) − z₃(υ)| ≤ (r/13) |μ − υ|, |z₄(μ) − z₄(υ)| ≤ (s/7) |μ − υ|,

for all μ, υ ∈ J. This implies that z₁, ..., z₄ are contraction mappings with coefficients

λ₁ = p/4, λ₂ = q/9, λ₃ = r/13, and λ₄ = s/7,

respectively, and ν : J → J is a non-expansive mapping with λ₅ = 1. Furthermore,

η₂ := pζ/4 + (1 − ζ)q/9 + 1/2 < 1,
and there is a nonempty subset C of S := {L ∈ T : L(1) ≤ 1} such that (C, ‖·‖) is a BS and the mapping M on C given by (22) is a self-mapping for all x ∈ J. Then all the hypotheses of Theorem 4 are fulfilled, and therefore we obtain the existence of a solution to the functional Equation (22).
If we define L₀(x) = x as an initial approximation, then by Theorem 5 the following iteration converges to the unique solution of (22):

L₁(x) = ζx L₀( (px + 1 − p)/4 ) + (1 − ζ)x L₀( (qx + 1 − q)/9 ) + ζ(1 − x) L₀( rx/13 ) + (1 − ζ)(1 − x) L₀( sx/7 ),
L₂(x) = ζx L₁( (px + 1 − p)/4 ) + (1 − ζ)x L₁( (qx + 1 − q)/9 ) + ζ(1 − x) L₁( rx/13 ) + (1 − ζ)(1 − x) L₁( sx/7 ),
⋮
Lₙ(x) = ζx Lₙ₋₁( (px + 1 − p)/4 ) + (1 − ζ)x Lₙ₋₁( (qx + 1 − q)/9 ) + ζ(1 − x) Lₙ₋₁( rx/13 ) + (1 − ζ)(1 − x) Lₙ₋₁( sx/7 )

for all n ∈ ℕ.
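This iteration can also be carried out numerically. The sketch below discretizes Example 2 on a uniform grid and iterates the operator, using linear interpolation between grid nodes; the parameter values p = q = r = s = ζ = 1/2 are illustrative and not taken from the paper. Since for these data M is linear and a contraction, its unique fixed point is the zero function, so the iterates flatten toward 0 while the anchored value L(0) = 0 is preserved.

```python
# Discretized Picard iteration for Example 2, with illustrative
# (assumed) parameters p = q = r = s = 1/2 and zeta = 1/2.
p = q = r = s = 0.5
zeta = 0.5
N = 200                                  # grid resolution on J = [0, 1]
xs = [i / N for i in range(N + 1)]

def z1(x): return (p * x + 1 - p) / 4
def z2(x): return (q * x + 1 - q) / 9
def z3(x): return r * x / 13
def z4(x): return s * x / 7

def interp(vals, x):
    # Piecewise-linear evaluation of the grid function vals at x in [0, 1].
    t = min(x * N, N - 1e-12)
    i = int(t)
    return vals[i] * (1 - (t - i)) + vals[i + 1] * (t - i)

def apply_M(vals):
    # One application of the operator M from Equation (22).
    return [zeta * x * interp(vals, z1(x))
            + (1 - zeta) * x * interp(vals, z2(x))
            + zeta * (1 - x) * interp(vals, z3(x))
            + (1 - zeta) * (1 - x) * interp(vals, z4(x))
            for x in xs]

L = list(xs)                             # initial approximation L0(x) = x
for _ in range(200):
    L_next = apply_M(L)
    diff = max(abs(a - b) for a, b in zip(L, L_next))
    L = L_next

assert L[0] == 0.0                       # the anchor L(0) = 0 is preserved
assert max(abs(v) for v in L) < 1e-6     # iterates approach the zero solution
```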

3. Stability Analysis of the Suggested Functional Equation

In mathematical modeling, the stability of solutions is critical: slight changes in the data, such as those caused by natural measurement errors, should produce only correspondingly slight changes in the conclusion. Hence, it is essential to analyze the stability of solutions of the suggested functional Equation (7). For details on HU and HUR stability, we refer to [15,16,17,18,19].
Theorem 6.
Under the hypotheses of Theorem 2, the equation ML = L, where M : C → C is defined by

(ML)(x) = w₁ u₁(y) L(z₁(x)) + w₂ u₁(y) L(z₂(x)) + w₁ u₂(y) L(z₃(x)) + w₂ u₂(y) L(z₄(x))    (23)

for all L ∈ C and x ∈ J, has HUR stability; that is, for a fixed function φ : C → [0, ∞), every L ∈ C with d(ML, L) ≤ φ(L) admits a unique L* ∈ C such that ML* = L* and d(L, L*) ≤ ς φ(L), where ς := 1/(1 − η₁) > 0 and η₁ is given in (9).
Proof. 
Let L ∈ C be such that d(ML, L) ≤ φ(L). By Theorem 2, there is a unique L* ∈ C such that ML* = L*. Thus, we obtain

d(L, L*) ≤ d(L, ML) + d(ML, L*) ≤ φ(L) + d(ML, ML*) ≤ φ(L) + η₁ d(L, L*),

and hence

d(L, L*) ≤ ς φ(L). □
From the above analysis, we obtain the following result on HU stability.
Corollary 5.
Under the hypotheses of Theorem 2, the equation ML = L, where M : C → C is defined by

(ML)(x) = w₁ u₁(y) L(z₁(x)) + w₂ u₁(y) L(z₂(x)) + w₁ u₂(y) L(z₃(x)) + w₂ u₂(y) L(z₄(x))    (24)

for all L ∈ C and x ∈ J, has HU stability; that is, for a fixed λ > 0, every L ∈ C with d(ML, L) ≤ λ admits a unique L* ∈ C such that ML* = L* and d(L, L*) ≤ ς λ for some ς > 0.
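To see this bound quantitatively, the sketch below (illustrative, assuming parameters p = q = r = s = ζ = 1/2 in Example 2, for which the exact solution is the zero function since the operator is linear and contractive) measures the defect d(ML̃, L̃) of a trial function L̃(x) = εx in the norm (6) and checks the estimate d(L̃, L*) ≤ d(ML̃, L̃)/(1 − η₂).

```python
# Numerical check of the HU-type estimate d(L, L*) <= d(ML, L) / (1 - eta_2)
# for the Example-2 operator with illustrative parameters p = q = r = s = 1/2,
# zeta = 1/2. For these data M is linear and contractive, so L* = 0.
p = q = r = s = 0.5
zeta = 0.5
eta2 = p * zeta / 4 + (1 - zeta) * q / 9 + 0.5   # contraction constant of Example 2
eps = 0.1

def z1(x): return (p * x + 1 - p) / 4
def z2(x): return (q * x + 1 - q) / 9
def z3(x): return r * x / 13
def z4(x): return s * x / 7

def L_tilde(x):               # trial "approximate solution"
    return eps * x

def ML_tilde(x):              # M applied to L_tilde, evaluated exactly
    return (zeta * x * L_tilde(z1(x)) + (1 - zeta) * x * L_tilde(z2(x))
            + zeta * (1 - x) * L_tilde(z3(x)) + (1 - zeta) * (1 - x) * L_tilde(z4(x)))

def lip(f, n=100):
    # Discrete version of the norm (6) on a uniform grid over [0, 1].
    xs = [i / n for i in range(n + 1)]
    return max(abs(f(a) - f(b)) / abs(a - b)
               for i, a in enumerate(xs) for b in xs[i + 1:])

defect = lip(lambda x: ML_tilde(x) - L_tilde(x))  # d(ML, L)
dist = lip(L_tilde)                               # d(L, L*) with L* = 0
assert eta2 < 1
assert dist <= defect / (1 - eta2) + 1e-9         # the HU bound holds
```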

4. Conclusions

The predator-prey analogy is among the most appealing paradigms in a two-choice scenario emerging in mathematical biology. In such models, a predator has two possible choices of prey, and a solution arises when the predator settles on a particular type of prey. In this paper, we proposed a general functional equation that covers numerous learning-theory models in the existing literature. We also discussed existence, uniqueness, and stability results for the suggested functional equation. The functional equations that appeared in [3,4,8] focused on just two cases, while our proposed functional Equation (4) covers all the possible cases discussed by Bush and Wilson in [1]. In addition, in [3,4,12], the authors used the boundary conditions z₁(1) = 1 and z₂(0) = 0 to prove their main results, whereas in Theorem 4 we did not employ such assumptions. Therefore, our method is novel and can be applied to many mathematical models associated with mathematical psychology and learning theory.
To conclude, we propose the following open problem for the interested readers.
Question: Can we use another method to prove the conclusions of Theorems 2 and 3?

Author Contributions

Conceptualization, A.T. and W.-G.P.; methodology, A.T. and W.-G.P.; validation, A.T., W.-G.P. and W.A.; investigation, A.T., W.-G.P. and W.A.; writing—original draft preparation, A.T., W.-G.P. and W.A.; writing—review and editing, A.T., W.-G.P. and W.A.; project administration, A.T., W.-G.P. and W.A.; funding acquisition, A.T., W.-G.P. and W.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bush, A.A.; Wilson, T.R. Two-choice behavior of paradise fish. J. Exp. Psychol. 1956, 51, 315–322.
2. Bush, R.; Mosteller, F. Stochastic Models for Learning; Wiley: New York, NY, USA, 1955.
3. Istrăţescu, V.I. On a functional equation. J. Math. Anal. Appl. 1976, 56, 133–136.
4. Berinde, V.; Khan, A.R. On a functional equation arising in mathematical biology and theory of learning. Creat. Math. Inform. 2015, 24, 9–16.
5. Turab, A.; Sintunavarat, W. On the solution of the traumatic avoidance learning model approached by the Banach fixed point theorem. J. Fixed Point Theory Appl. 2020, 22, 50.
6. Turab, A.; Sintunavarat, W. On a solution of the probabilistic predator-prey model approached by the fixed point methods. J. Fixed Point Theory Appl. 2020, 22, 64.
7. Estes, W.K.; Straughan, J.H. Analysis of a verbal conditioning situation in terms of statistical learning theory. J. Exp. Psychol. 1954, 47, 225–234.
8. Turab, A.; Sintunavarat, W. On the solutions of the two preys and one predator type model approached by the fixed point theory. Sādhanā 2020, 45, 211.
9. Grant, D.A.; Hake, H.W.; Hornseth, J.P. Acquisition and extinction of a verbal conditioned response with differing percentages of reinforcement. J. Exp. Psychol. 1951, 42, 1–5.
10. Humphreys, L.G. Acquisition and extinction of verbal expectations in a situation analogous to conditioning. J. Exp. Psychol. 1939, 25, 294–301.
11. Jarvik, M.E. Probability learning and a negative recency effect in the serial anticipation of alternative symbols. J. Exp. Psychol. 1951, 41, 291–297.
12. Turab, A.; Sintunavarat, W. On analytic model for two-choice behavior of the paradise fish based on the fixed point method. J. Fixed Point Theory Appl. 2019, 21, 56.
13. Banach, S. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fund. Math. 1922, 3, 133–181.
14. Shang, Y. L1 group consensus of multi-agent systems with stochastic inputs under directed interaction topology. Int. J. Control 2013, 86, 1–8.
15. Hyers, D.H.; Isac, G.; Rassias, T.M. Stability of Functional Equations in Several Variables; Birkhäuser: Basel, Switzerland, 1998.
16. Morales, J.S.; Rojas, E.M. Hyers-Ulam and Hyers-Ulam-Rassias stability of nonlinear integral equations with delay. Int. J. Nonlinear Anal. Appl. 2011, 2, 1–6.
17. Rassias, T.M. On the stability of the linear mapping in Banach spaces. Proc. Am. Math. Soc. 1978, 72, 297–300.
18. Bae, J.H.; Park, W.G. A fixed point approach to the stability of a Cauchy-Jensen functional equation. Abstr. Appl. Anal. 2012, 2012, 1–10.
19. Gachpazan, M.; Bagdani, O. Hyers-Ulam stability of nonlinear integral equation. Fixed Point Theory Appl. 2010, 2010, 1–6.
Table 1. Operators for the reinforcement-extinction model.

Fish's Response | Outcome (Left Side) | Outcome (Right Side) | Event
Reinforcement | ux | ux + 1 − u | E₁^RE
Non-reinforcement | vx + 1 − v | vx | E₂^RE
Table 2. Operators for the habit formation model.

Fish's Response | Outcome (Left Side) | Outcome (Right Side) | Event
Reinforcement | ux | ux + 1 − u | E₁^HF
Non-reinforcement | vx | vx + 1 − v | E₂^HF

Share and Cite

MDPI and ACS Style

Turab, A.; Park, W.-G.; Ali, W. Existence, Uniqueness, and Stability Analysis of the Probabilistic Functional Equation Emerging in Mathematical Biology and the Theory of Learning. Symmetry 2021, 13, 1313. https://0-doi-org.brum.beds.ac.uk/10.3390/sym13081313
