Article

A Novel Multi-Objective Hybrid Election Algorithm for Higher-Order Random Satisfiability in Discrete Hopfield Neural Network

by Syed Anayet Karim 1, Mohd Shareduwan Mohd Kasihmuddin 1, Saratha Sathasivam 1,*, Mohd. Asyraf Mansor 2, Siti Zulaikha Mohd Jamaludin 1 and Md Rabiol Amin 3

1 School of Mathematical Sciences, Universiti Sains Malaysia, Penang 11800, Malaysia
2 School of Distance Education, Universiti Sains Malaysia, Penang 11800, Malaysia
3 Department of Computer Science and Engineering, CCN University of Science and Technology, Cumilla 3503, Bangladesh
* Author to whom correspondence should be addressed.
Submission received: 11 April 2022 / Revised: 26 May 2022 / Accepted: 2 June 2022 / Published: 7 June 2022
(This article belongs to the Special Issue Metaheuristic Algorithms)

Abstract:
Hybridized algorithms are commonly employed to improve the performance of existing methods. However, a learning algorithm that combines evolutionary and swarm intelligence operators can radically improve the quality of the final neuron states, and this direction has not yet received due attention. Considering this issue, this paper presents a novel multi-objective metaheuristic, introduced as the Hybrid Election Algorithm (HEA), with strong results in solving optimization and combinatorial problems over a binary search space. The core ideas underpinning the proposed HEA are inspired by socio-political phenomena and consist of creative and powerful mechanisms for reaching the optimal result. In the study of logic programming, a non-systematic logical structure offers richer behavior than a systematic one. In this regard, a non-systematic structure known as higher-order Random k Satisfiability (RANkSAT) is hosted here to overcome the interpretability and dissimilarity limitations of a systematic logical structure in a Discrete Hopfield Neural Network (DHNN). The novelty of this study is a new multi-objective Hybrid Election Algorithm that achieves the highest fitness value and can boost the storage capacity of the DHNN along with a diversified logical structure embedded in the RANkSAT representation. To attain these goals, the proposed algorithm was benchmarked against four other models: an evolutionary type (Genetic Algorithm (GA)), a swarm intelligence type (Artificial Bee Colony (ABC) algorithm), a population-based type (traditional Election Algorithm (EA)) and the Exhaustive Search (ES) model. To assess the performance of the proposed HEA model, several performance metrics, such as training-testing, energy and similarity analysis, together with statistical analysis, such as the Friedman test with convergence analysis, were examined and analyzed. Based on the experimental and statistical results, the proposed HEA model outperformed all four mentioned models.

1. Introduction

The term Artificial Neural Network (ANN) refers to generalizations of mathematical models of the biological nervous system. Over the previous two decades, researchers have enhanced the potential of ANNs, which has added a new dimension to the study of numerous scientific subjects. Hopfield advanced the dynamics of ANNs by proposing a method for integrating feedback loop mechanisms, named the Hopfield Neural Network (HNN) [1]. With a recursive structure and graded response, the HNN, one of the earliest types of neural networks, is akin to a dynamic system with two or more stable points of equilibrium [2]. The HNN is versatile and potentially useful in health [3], the behavioral sciences [4] and energy [5]. However, the HNN does not always converge to an exact pattern, which results in sub-optimal solutions, and it also suffers from limited storage capacity. In such a case, the introduction of logic programming can be a good counter. Following this line of research, we explore how the network connections can be treated as a single symbolic instruction and assembled into logical rules using the HNN.
Satisfiability (SAT) representation is an important means of conveying logical rules with mathematical information in AI through ANNs. In this context, a general query arises: is SAT necessary in a Discrete Hopfield Neural Network (DHNN)? To answer this, SAT is designed to be readily attached as a symbolic instruction to represent the output of the DHNN. Abdullah invented the general concept of logic programming on the DHNN by computing the synaptic weights [6]. Later, a study [7] shed new light on the satisfiability study. This work proposed higher-order Horn-Satisfiability (Horn-SAT) programming, which focused on embedding Horn clauses in a Radial Basis Function Neural Network. Subsequently, several researchers extended the fundamentals of Horn-SAT by proposing different systematic k Satisfiability (kSAT) variants, such as 2SAT and 3SAT. The work by [8] transformed the constraint satisfaction problem into the 2SAT problem, which is more effective in achieving a different number of solutions. Then, the work by [9] combined the systematic 3SAT logical expression with the well-known metaheuristic Genetic Algorithm (GA). This work exploited previous works to direct the search into regions of better performance within the search space, thus reducing the time and space complexity. Maximum k Satisfiability (MAXkSAT), inspired by an expanded version of Boolean SAT, began to gain favor in the ANN research field because it used false/negative output in comparison to other SAT presentations [10]. Following that, [11] developed a hybrid technique that uses an algorithm to optimize the 2SAT logical rule. This work gave rise to a new type of SAT study known as Maximum 2 Satisfiability (MAX2SAT). For its nonredundant literals, the result of this MAX2SAT logical rule was also negative. The proposed approach could display MAX2SAT behavior optimally during the DHNN testing phase. However, no process was involved that could minimize the complexity of the MAX2SAT model in the training phase. Moreover, these studies investigated only the systematic logical rule. The first attempt to represent the symbolic output of the HNN in terms of a non-systematic logical rule was made by [12]. This research showed that Random 2 Satisfiability (RAN2SAT) can be integrated into the DHNN by minimizing a cost function that corresponds to the network inconsistencies. Nonetheless, this work was restricted to k = 1, 2, and no discussion was made of an improvement strategy for k = 1, 2, 3 or Horn Satisfiability. Another type of non-systematic logical rule was sketched by [13]. This study introduced a new variant of the 2 Satisfiability problem. This work revealed that Major 2 Satisfiability (MAJ2SAT) is able to provide a new perspective in representing some NP and probabilistic class problems. The non-systematic logic study was further extended by [14], proposing a new dimension in the non-systematic Random k Satisfiability (RANkSAT) study. This work showed that achieving 100% accurate synaptic weights with a higher-order RANkSAT (k = 3, 2) combination leads to 100% global minima solutions. Though some researchers continue to focus on the usefulness of non-systematic logical rules in the DHNN, the notion has yet to be fully explored.
The effectiveness of the learning phase is a significant issue in the DHNN. Ref. [15] revealed the weakness of the HNN in terms of retrieval capacity as the number of neurons grows. To address this issue, a metaheuristic method was used to determine the best neuron state that minimizes the cost function. Another significant work was proposed by [16], in which a Hybrid Genetic Algorithm (GA) was incorporated into the training phase of the HNN. This hybrid model successfully obtained the best trade-off between solution quality and computational time. Another study conducted by [17] demonstrated the Artificial Immune System (AIS) in the training phase with the 3SAT logical rule. This investigation showed that AIS outperformed the Brute-Force algorithm in terms of the global minima ratio, Hamming distance and computational time. Ref. [18] successfully integrated GA with the HNN for solving combinatorial optimization problems. In their study, the HNN was combined with GA so that each method minimized the other's shortcomings. The combination of both methods reduces the possible local minima in solving various NP problems. Furthermore, [19] introduced propositional satisfiability via a hybrid metaheuristic named the Hybrid Genetic Algorithm (HGA) in the DHNN. This work explained that GA, as a non-biased algorithm, can converge to a global solution compared to conventional learning models. Finally, the authors of [20] developed a hybrid HNN and integrated it with an Imperialist Competitive Algorithm (ICA). This proposed hybrid method successfully found an optimal solution in an acceptable computation time and managed to obtain a high-quality solution with minimum cost. However, these metaheuristics focused exclusively on systematic logical rules. Additionally, they lack a partitioned solution space, for which a more efficient solution is required to identify alternative neuron states that minimize the cost function. Moreover, the mentioned metaheuristics have no specific balance between the exploration and exploitation strategies.
Recently, social and political behaviors have also inspired the development of numerous metaheuristic solvers. The Election Algorithm (EA) is a form of iterative socio-political algorithm that operates on populations of solutions via the 'election' procedure. Ref. [21] addressed the EA as a highly optimized metaheuristic with a powerful solution search space, drawing the attention of many optimization researchers. EA is a population-based iterative algorithm that works with solution groups. First, the population is divided into several parties based on shared beliefs and views; the best member of each party is chosen as its candidate, while the others become voters who support the candidate. However, EA suffers from a fundamental challenge: it often becomes stuck at local optima due to the incapability of its operators. To overcome this issue, the author of [22] later proposed the Chaotic Election Algorithm (CEA), which accelerates the convergence of the existing EA by introducing a migration operator. However, these works offer no justification of several objectives that could enhance the capacity of these metaheuristics.
Recent research indicated that non-systematic logical rules can provide a flexible structure and generate various interpretations that converge to global minimum solutions. Pioneering work by [23] created a bridge between logic programming and the EA metaheuristic. This proposed method combined EA with RAN2SAT logic programming, achieving lower error and higher retrieval capacity with striking computational ability. The work by [24] utilized higher-order RANkSAT (k = 1, 2, 3) implemented with EA in the learning phase of the DHNN to find correct synaptic weights with minimal error, resulting in high retrieval capacity. Notably, these works focused only on accuracy, meaning achieving the maximum fitness value of the EA model. The major shortcoming of the previous studies is that they merely pursued a single objective function, the highest fitness value, with no strategy regarding the logical rule that could create new diversity. Achieving only a fitness value under a single-objective policy cannot fully reveal a metaheuristic's performance. Moreover, no new metaheuristic has concentrated on multiple, or multi-objective, goals, such as accuracy and diversity, that address not only exploration but also exploitation simultaneously in higher-order RANkSAT logical presentations. Notably, the optimal solution to a multi-objective function is a better trade-off solution across several objectives than the optimal solution to a single-objective function [25]. Furthermore, metaheuristics with a multi-objective approach focus on two critical search processes: first, investigation of the entire feasible space (exploration) and, second, examination of a local area of the search space (exploitation) [26]. Excessive exploration strategies frequently decrease algorithm performance [27], and an unbalanced exploration-exploitation assessment leads to slow convergence that suffers from local optima [28]. Hence, we propose a novel Hybrid Election Algorithm (HEA) that employs a multi-objective concept in which both fitness (accuracy) and diversity are placed on the same footing with the higher-order RANkSAT (k = 3, 2) representation in the DHNN. This strengthens our proposed HEA model and tunes the exploration and exploitation strategies in a balanced manner.
Moreover, a method needs to be employed to achieve the trade-off between diversity and accuracy (fitness value), which can systematically improve the storage capacity of the DHNN and the global solutions of a model. For this reason, our proposed article promotes fitness values, addressed as accuracy, and diversity in terms of the logical rule to develop a distinct Hybrid Election Algorithm (HEA) identity. Furthermore, our proposed HEA model introduces the notion of multi-objective functions, which sheds new light on this research area. The following are the novel contributions of our study:
  • To construct a randomly generated second- and third-order Random k Satisfiability logical rule in the training phase that can optimize the correct synaptic weights and the cost function of the Discrete Hopfield Neural Network.
  • To create multi-objective functions that maximize the fitness, and employ k-Ideal solution strings with a diversified logical rule to increase the storage capacity of the Discrete Hopfield Neural Network.
  • To propose a new bipolar Hybrid Election Algorithm with an effective operator that can balance the exploration and exploitation strategy in the training phase of the Discrete Hopfield Neural Network.
  • To compare the performance of our proposed model with benchmark algorithms in terms of storage capacity, training error, testing error, the ratio of global solutions, neuron variations and similarity analysis.
Our novel model will be tested in a series of computer simulations to see how effective it is at reducing the cost function of the DHNN and arriving at useful end states. The remainder of this work is organized as follows: Section 1 gives the introduction, Section 2 explains the basic RANkSAT formulas and Section 3 discusses the DHNN. The proposed methods are explained in Section 4 and Section 5. The experimental setup and performance evaluation metrics are dissected in Section 6 and Section 7. The last parts of the paper present the Results and Discussion (Section 8) and the Conclusion (Section 9). A summary of the related studies is given in Table 1 below.

2. Random k Satisfiability (RANkSAT)

One of the significant breakthroughs in the SAT study is Random k Satisfiability (RANkSAT), which continues to be the preferred choice for ANN researchers due to its independent clause composition [29]. RANkSAT is a non-systematic logical structure that comprises a set of $x$ literals $A_1, A_2, A_3, \ldots, A_x$ grouped into $y$ clauses $J_1^{(k)}, J_2^{(k)}, J_3^{(k)}, \ldots, J_y^{(k)}$ [30].
Generally, a collection of random instances $B$ over $N$ Boolean variables forms the RANkSAT clauses. Every logical clause normally has exactly $k$ variables that are linked with the OR ($\vee$) operator and are negated with a probability of $\frac{1}{2}$ [31]. The literal values are expressed in the bipolar form $\{1, -1\}$, which denotes either true or false. For $k \leq 3$, the probability proportion of the negative to positive form is 1:1, 2:1 or 1:2. Note that RANkSAT can employ $\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,1)}$, $\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,2)}$ and $\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,2,1)}$, whose formulations are shown in Equations (1)-(3):
$$\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,1)} = \bigwedge_{i=1}^{w} J_i^{(3)} \wedge \bigwedge_{i=1}^{u} J_i^{(1)} \tag{1}$$
$$\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,2)} = \bigwedge_{i=1}^{w} J_i^{(3)} \wedge \bigwedge_{i=1}^{v} J_i^{(2)} \tag{2}$$
$$\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,2,1)} = \bigwedge_{i=1}^{w} J_i^{(3)} \wedge \bigwedge_{i=1}^{v} J_i^{(2)} \wedge \bigwedge_{i=1}^{u} J_i^{(1)} \tag{3}$$
where
$$J_i^{(k)} = \begin{cases} G_i, & k = 1 \\ (H_i \vee I_i), & k = 2 \\ (M_i \vee N_i \vee O_i), & k = 3 \end{cases} \tag{4}$$
whereby $w$, $v$ and $u$ are the total numbers of third-, second- and first-order clauses in $\alpha_{\mathrm{RAN3SAT}}^{k}$, respectively. From Equation (4), the literals (positive or negative) are set at random.
Equations (5)-(7) show examples of $\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,1)}$, $\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,2)}$ and $\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,2,1)}$:
$$\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,1)} = (\neg M_1 \vee N_1 \vee O_1) \wedge (M_2 \vee N_2 \vee O_2) \wedge G_1 \wedge G_2 \tag{5}$$
$$\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,2)} = (\neg M_1 \vee N_1 \vee O_1) \wedge (H_1 \vee I_1) \wedge (\neg H_2 \vee I_2) \tag{6}$$
$$\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,2,1)} = (\neg M_1 \vee N_1 \vee O_1) \wedge (M_2 \vee N_2 \vee O_2) \wedge (H_1 \vee I_1) \wedge \neg G_1 \tag{7}$$
From the equations above, the outcome of each logical rule is obtained by substituting the values $\{1, -1\}$ (neuron states) for each literal. For example, $\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,2)}$ is said to be satisfiable when $\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,2)} = 1$, which provides true values. On the other hand, $\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,2)} = -1$ denotes unsatisfiable, which gives false values. In this paper, RANkSAT complies only with $k = 3, 2$, since [14] showed that the $\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,2)}$ structural combination provides a more consistent interpretation. In the next section, we focus on how the $\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{(3,2)}$ logic can be represented via the DHNN.
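To make this bipolar evaluation concrete, the short C++ sketch below checks whether a candidate set of neuron states satisfies a RANkSAT formula. It is a minimal illustration only, not the authors' released implementation; the Literal/Clause layout and the example formula are our own assumptions.

```cpp
#include <iostream>
#include <vector>

// A literal is an index into the neuron state vector plus a negation flag.
struct Literal { int var; bool negated; };
using Clause = std::vector<Literal>;

// A literal is true when the bipolar state agrees with its sign.
bool literalTrue(const Literal& l, const std::vector<int>& s) {
    return l.negated ? (s[l.var] == -1) : (s[l.var] == 1);
}

// alpha = 1 (satisfiable) only if every clause has at least one true literal;
// otherwise alpha = -1 (unsatisfiable), mirroring the bipolar convention above.
int evaluateRANkSAT(const std::vector<Clause>& formula, const std::vector<int>& s) {
    for (const Clause& c : formula) {
        bool clauseTrue = false;
        for (const Literal& l : c)
            if (literalTrue(l, s)) { clauseTrue = true; break; }
        if (!clauseTrue) return -1;
    }
    return 1;
}

int main() {
    // Example alpha(3,2): (~A v B v C) ^ (D v E), variables indexed 0..4.
    std::vector<Clause> formula = {
        {{0, true}, {1, false}, {2, false}},   // third-order clause
        {{3, false}, {4, false}}               // second-order clause
    };
    std::vector<int> state = {1, -1, 1, -1, 1};                 // bipolar neuron states
    std::cout << "alpha = " << evaluateRANkSAT(formula, state)  // prints alpha = 1
              << "\n";
    return 0;
}
```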

3. Random k Satisfiability in Discrete Hopfield Neural Network (DHNN)

The Discrete Hopfield Neural Network (DHNN) is a type of ANN referred to as a feedback network. In most cases, feedback refers to the output being fed back into the network. The DHNN uses one of the most successful storage techniques, termed content addressable memory (CAM), with binary/bipolar threshold units that guarantee convergence to a local minimum. The DHNN is a network with no hidden layers; it comprises interconnected neurons that are updated asynchronously. Here, we used the asynchronous neuron adaptation of Theorem 1 to represent the DHNN units in bipolar values (1, -1). Theorem 1 shows that the DHNN operates asynchronously with respect to its condition.
Theorem 1.
All networks described by Equation (8) in randomized asynchronous mode will converge to a stable state with a probability of one when starting at an initial state in the search space [32].
$$S_i = \begin{cases} 1, & \text{if } \sum_{j}^{n} W_{abc} S_b S_c \geq U_p \\ -1, & \text{otherwise} \end{cases} \tag{8}$$
From Equation (8), $W_{abc}$ is the synaptic weight from unit $a$ to $c$, $S_b$ is the current state of unit $b$, and $U_p$ is the pre-defined threshold. Several studies [33,34] defined $U_p = 0$ to verify that the DHNN always leads to a monotonic decrease in energy. The synaptic weight between neurons $a$ and $b$ corresponds to the intensity of the connection between the two neurons. Likewise, the neuron connections are approached as $W_{abc}^{(3)} = [W_{abc}^{(3)}]_{n \times n}$ with $[U_p]_{n \times 1} = [U_1, U_2, U_3, \ldots, U_n]^T$. The computation of the cost function $C_{\alpha_{\mathrm{RAN3SAT}}^{k}}$ in the DHNN is significant to decrease the logical inconsistency of $\alpha_{\mathrm{RAN3SAT}}^{k}$ $(C_{\alpha_{\mathrm{RAN3SAT}}^{k}} = 0)$. The design of $C_{\alpha_{\mathrm{RAN3SAT}}^{k}}$ in Equations (9) and (10), which adapts all forms of logic combinations $\alpha_{\mathrm{RAN3SAT}}^{k}$, is as follows:
$$C_{\alpha_{\mathrm{RAN3SAT}}^{k}} = \frac{1}{8}\sum_{i=1}^{w}\left(\prod_{j=1}^{3} L_{ij}\right) + \frac{1}{4}\sum_{i=1}^{v}\left(\prod_{j=1}^{2} L_{ij}\right) + \frac{1}{2}\sum_{i=1}^{u}\left(\prod_{j=1}^{1} L_{ij}\right) \tag{9}$$
$$L_{ij} = \begin{cases} \frac{1}{2}(1 + S_i), & \text{if } \neg i \\ \frac{1}{2}(1 - S_i), & \text{otherwise} \end{cases} \tag{10}$$
where $S_i$ is the neuron state with $S_i \in \{1, -1\}$; the term for a literal vanishes exactly when that literal is satisfied. The probability of a consistent interpretation is expressed in Equation (11):
$$P\left(C_{\alpha_{\mathrm{RAN3SAT}}^{k}} = 0\right) = \prod_{i=1}^{3}\left(1 - \frac{1}{2^i}\right)^{\chi(J_i^{(k)})} \tag{11}$$
where $\chi(J_i^{(k)})$ is the number of $J_i^{(k)}$ clauses. The fundamental goal of using $\alpha_{\mathrm{RAN3SAT}}^{k}$ in the DHNN is to successfully minimize the cost function $C_{\alpha_{\mathrm{RAN3SAT}}^{k}}$, which aids in finding proper synaptic weights and producing a good energy profile. When $\alpha_{\mathrm{RAN3SAT}}^{k}$ attains a zero cost function, it provides a satisfied interpretation (all clauses give a truth value).
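The following hedged C++ sketch mirrors the spirit of Equations (9) and (10): each clause contributes the product of its per-literal terms, which vanishes as soon as one literal is satisfied. For simplicity, the normalizing prefactors of Equation (9) are folded into the half-terms of Equation (10), so every fully violated clause contributes exactly 1; the names and layout are illustrative assumptions.

```cpp
#include <iostream>
#include <vector>

struct Literal { int var; bool negated; };
using Clause = std::vector<Literal>;

// Cost of one clause: the product of its per-literal terms. A term vanishes
// as soon as its literal is satisfied, so the clause contributes only when
// every literal in it is false (a logical inconsistency).
double clauseCost(const Clause& c, const std::vector<int>& s) {
    double prod = 1.0;
    for (const Literal& l : c)
        prod *= l.negated ? 0.5 * (1 + s[l.var])   // ~x is false when S = 1
                          : 0.5 * (1 - s[l.var]);  //  x is false when S = -1
    return prod;
}

// Total cost over all clauses; zero iff the interpretation satisfies alpha.
double costFunction(const std::vector<Clause>& formula, const std::vector<int>& s) {
    double cost = 0.0;
    for (const Clause& c : formula) cost += clauseCost(c, s);
    return cost;
}

int main() {
    // (~A v B v C) ^ (D v E) with a satisfying and a violating state.
    std::vector<Clause> formula = {
        {{0, true}, {1, false}, {2, false}},
        {{3, false}, {4, false}}
    };
    std::vector<int> satisfied   = {1, -1, 1, -1, 1};
    std::vector<int> unsatisfied = {1, -1, -1, -1, -1};
    std::cout << costFunction(formula, satisfied)   << "\n"; // 0
    std::cout << costFunction(formula, unsatisfied) << "\n"; // 2 (both clauses violated)
    return 0;
}
```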
The local field of the DHNN is given by Equation (12), where $S_i(t)$ symbolizes the final state of the neurons and $W_{abc}^{(3)}, W_{ab}^{(2)}, W_a^{(1)}$ are the third-, second- and first-order synaptic weights, respectively. The Hyperbolic Tangent Activation Function (HTAF) was used in the testing-phase dynamics of the DHNN to enable the convergence of the final neuron states while avoiding neuron oscillation [17]. The local field of our proposed model is formulated in Equations (12) and (13) as follows:
$$h_p(t) = \sum_{c=1, c \neq b}^{n}\sum_{b=1, b \neq c}^{n} W_{abc}^{(3)} S_b S_c + \sum_{b=1, b \neq a}^{n} W_{ab}^{(2)} S_b + W_a^{(1)} \tag{12}$$
$$S_i(t) = \begin{cases} 1, & \sum_{c=1, c \neq b}^{n}\sum_{b=1, b \neq c}^{n} W_{abc}^{(3)} S_b S_c + \sum_{b=1, b \neq a}^{n} W_{ab}^{(2)} S_b + W_a^{(1)} \geq 0 \\ -1, & \sum_{c=1, c \neq b}^{n}\sum_{b=1, b \neq c}^{n} W_{abc}^{(3)} S_b S_c + \sum_{b=1, b \neq a}^{n} W_{ab}^{(2)} S_b + W_a^{(1)} < 0 \end{cases} \tag{13}$$
Here, Equation (12) is the overall formulation of the local field for $\alpha_{\mathrm{RAN3SAT}}^{k}$ and Equation (13) is a piecewise function generating the final state of the neuron according to the value of Equation (8). In this paper, the Wan Abdullah (WA) method is used, comparing Equation (9) with Equation (14), which is noted as the energy function $H_{\alpha_{\mathrm{RAN3SAT}}^{k}}$. The WA method is therefore an ideal method to find $W_{abc}$ in the case of $\alpha_{\mathrm{RAN3SAT}}^{k}$ in the DHNN:
$$H_{\alpha_{\mathrm{RAN3SAT}}^{k}} = -\frac{1}{3}\sum_{a=1, a \neq b \neq c}^{n}\sum_{b=1, a \neq b \neq c}^{n}\sum_{c=1, a \neq b \neq c}^{n} W_{abc}^{(3)} S_a S_b S_c - \frac{1}{2}\sum_{a=1, a \neq b}^{n}\sum_{b=1, a \neq b}^{n} W_{ab}^{(2)} S_a S_b - \sum_{a=1}^{n} W_a^{(1)} S_a \tag{14}$$
The value $H_{\alpha_{\mathrm{RAN3SAT}}^{k}}$ then attains the absolute final energy, and the minimum energy $H_{\alpha_{\mathrm{RAN3SAT}}^{k}}^{min}$ is obtained from $\alpha_{\mathrm{RAN3SAT}}^{k}$, which is reduced monotonically [24]. Hence, $H_{\alpha_{\mathrm{RAN3SAT}}^{k}}^{min}$ is calculated by Equation (15):
$$H_{\alpha_{\mathrm{RAN3SAT}}^{k}}^{min} = -\frac{a(\psi_i^3) + 2(b(\psi_i^2)) + 4(c(\psi_i^1))}{8} \tag{15}$$
where $\psi_i^1, \psi_i^2, \psi_i^3 \in J_i^{(k)}$, and $c$, $b$ and $a$ symbolize the numbers of one-literal, two-literal and three-literal clauses in $\alpha_{\mathrm{RAN3SAT}}^{k}$, respectively.
Finally, Equation (16) can analyze the quality of the final neuron states by distinguishing between the global and local minimum solutions. Notably, if Equation (16) is satisfied, the final neuron states attain a global minimum solution; otherwise, they are trapped in a local minimum solution.
$$\left| H_{\alpha_{\mathrm{RAN3SAT}}^{k}} - H_{\alpha_{\mathrm{RAN3SAT}}^{k}}^{min} \right| \leq \tau \tag{16}$$
where $\tau = 0.001$ is the pre-defined value, known as the tolerance value. Algorithm 1 summarizes the steps of DHNN-RANkSAT in the pseudocode below.
Algorithm 1 The pseudocode of DHNN-RANkSAT in the logic phase.
1: Start
2: Set the initial parameters: maximum combination (COMBMAX) = 1, trial number
3: Initialize a neuron for each variable, $S_i \in [S_1, S_2, S_3, \ldots, S_n]$
4: While ($i \leq trial$)
5:  Form the initial states by using Equation (8)
[TRAINING PHASE]
6:  Define the cost function $C_{\alpha_{\mathrm{RAN3SAT}}^{k}}$ by using Equation (9)
7:  For $S_i \in [S_1, S_2, S_3, \ldots, S_n]$ do
8:   Check clause satisfaction by Equation (9)
9:   If $C_{\alpha_{\mathrm{RAN3SAT}}^{k}} = 0$
10:    $S_i \leftarrow$ Satisfied
11:   Else
12:    $S_i \leftarrow$ Unsatisfied
13:  End For
14:  Calculate the synaptic weights by using the Wan Abdullah method
15:  Compute $H_{\alpha_{\mathrm{RAN3SAT}}^{k}}$ by using Equation (14)
[TESTING PHASE]
16:  Compute the local field to find the final state by using Equation (12)
17:  For ($i \leq trial$)
18:  End For
19:  Compute $H_{\alpha_{\mathrm{RAN3SAT}}^{k}}^{min}$ by using Equation (15)
20:  Compare the final energy with the tolerance value
21:  Check whether it is the global minimum energy or a local minimum energy
22:  If $\left| H_{\alpha_{\mathrm{RAN3SAT}}^{k}} - H_{\alpha_{\mathrm{RAN3SAT}}^{k}}^{min} \right| \leq$ Tolerance
23:   Assign global minimum energy
24:  Else
25:   Assign local minimum energy
26: End
The schematic diagram for DHNN-RANkSAT, where RANkSAT represents k = 3, 2, is shown in Figure 1. In this diagram, the red lines represent the third-order clauses and the purple lines denote the second-order clauses. Within each main block, the pink, orange and blue lines illustrate the connections of each neuron. The energy filter tests whether the final energy aligns with the tolerance value. The output of the diagram shows that the network reaches either the global minimum or a local minimum energy.
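As an illustration of the testing-phase dynamics, the sketch below computes the local field of Equation (12), applies the bipolar update of Equation (13) and classifies the final energy against the tolerance of Equation (16). The dense weight storage and the tiny example network are assumptions made for demonstration, not the paper's implementation.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Minimal DHNN retrieval sketch for n neurons with third-, second- and
// first-order synaptic weights (dense storage; indices a, b, c in [0, n)).
struct DHNN {
    int n;
    std::vector<double> W3; // W3[(a*n + b)*n + c], third order
    std::vector<double> W2; // W2[a*n + b], second order
    std::vector<double> W1; // W1[a], first order

    // Local field of neuron a, Equation (12).
    double localField(int a, const std::vector<int>& S) const {
        double h = W1[a];
        for (int b = 0; b < n; ++b) {
            if (b == a) continue;
            h += W2[a * n + b] * S[b];
            for (int c = 0; c < n; ++c)
                if (c != a && c != b)
                    h += W3[(a * n + b) * n + c] * S[b] * S[c];
        }
        return h;
    }

    // Asynchronous bipolar update, Equation (13).
    void update(std::vector<int>& S) const {
        for (int a = 0; a < n; ++a)
            S[a] = (localField(a, S) >= 0.0) ? 1 : -1;
    }
};

// Global/local classification, Equation (16): global when |H - Hmin| <= tau.
bool isGlobalMinimum(double H, double Hmin, double tau = 0.001) {
    return std::fabs(H - Hmin) <= tau;
}

int main() {
    DHNN net{3, std::vector<double>(27, 0.0), std::vector<double>(9, 0.0),
             std::vector<double>(3, 0.1)};
    std::vector<int> S = {-1, 1, -1};
    net.update(S); // every local field is +0.1, so each neuron relaxes to +1
    std::cout << S[0] << " " << S[1] << " " << S[2] << "\n";
    std::cout << isGlobalMinimum(-0.3, -0.3) << "\n"; // 1 (within tolerance)
    return 0;
}
```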

4. Proposed Multi-Objective Functions for DHNN

This paper proposes several objective functions that ensure accuracy and diversity. The foremost concept is to introduce a multi-objective nature into our proposed work so that we can maximize the fitness value as well as create diversity in the logical rule while enhancing the storage capacity of the DHNN. The strength of the proposed multi-objective concept is that it embeds all the objectives in perfect alignment while maintaining the RANkSAT strategy. Analyzing accuracy and diversity together constitutes an inventive study of the higher-order RANkSAT representation.
Consequently, the question arises: how can an algorithm achieve both accuracy and diversity in the same flow? To answer this question, a satisfactory equilibrium needs to be set up between two fundamental concepts, exploration and exploitation of the search space, which balance the accuracy and diversity of the proposed algorithm. Additionally, researchers consider that algorithmic/metaheuristic search methods can reach better performance if the exploration and exploitation of the search space maintain an appropriate balance [35].
Researchers need to note that accuracy and diversity do not come as a pair automatically; to achieve both in one algorithm, a multi-objective concept needs to be adopted to obtain the trade-off between diversity and accuracy. To establish the novelty of our proposed work, we need to investigate the performance and robustness of the optimum solutions. Here, the main novelty of our proposed work is to optimize three objectives: (i) the maximum fitness, (ii) the diversity ratio and (iii) the k ideal solution strings. Mathematically, the proposed multi-objective function can be generalized as:
$$f\left(F_{\max}, \gamma, S_{\max}^{(i)}\right) \tag{17}$$
These three objective functions are explained in detail below.

4.1. Maximum Fitness Value

To achieve maximum fitness, the fitness value is estimated through certain steps of an algorithm's individual operation strategy. Strings that obtain the highest fitness value are selected for the next stage. If a certain number of strings of an algorithm cannot achieve the highest fitness value, the trial is repeated from the beginning of the mechanism. The general form of the maximum fitness value of RANkSAT, obtained by summing up the fitness values of each order of clauses, is shown below:
$$F_{\max} = \sum_{i=0}^{p} C_i^{(3)} + \sum_{i=0}^{q} C_i^{(2)} + \sum_{i=0}^{r} C_i^{(1)} \tag{18}$$
where $C_i^{(3)}$, $C_i^{(2)}$ and $C_i^{(1)}$ are the third-order, second-order and first-order clauses of RANkSAT, respectively.

4.2. Each Clause Contains at Least One Negative State $C_i^{(k)}, k \geq 2$

The state of the literals is one of our objectives for addressing diversity in a logical rule. To ensure this objective, we require at least one negative state in each clause of RANkSAT, which makes the logical rule more diverse. In the logical diversity arrangement for RANkSAT, we focus on the literal arrangement of the $\delta_{P_{3SAT}}$ (3SAT) and $\delta_{P_{2SAT}}$ (2SAT) clauses. Since $\delta_{P_{3SAT}}$ and $\delta_{P_{2SAT}}$ can create more than one possible solution, we only evaluate these clauses in our diversity calculation strategy. Note that $\delta_{P_{1SAT}}$ (first-order) clauses have no substitute options for clause satisfaction, so they have no impact on our logical diversity strategy. Hence, the calculation of diversity in terms of the logical structure of $\delta_{P_{3SAT}}$ and $\delta_{P_{2SAT}}$ is expressed in the equation below:
$$\gamma = \sum_{i=0}^{p} C_i^{(3)} + \sum_{i=0}^{q} C_i^{(2)} \tag{19}$$

4.3. k-Ideal Solutions Strings (ISS)

In the DHNN, the solution string is represented in bipolar form. This solution string is stored in an associative memory system known as Content Addressable Memory (CAM). Each solution string from the learning phase spans the initial bipolar vector and becomes the root of the storage. The solution strings are chosen after satisfying Equations (18) and (19); such strings are known as Ideal Solution Strings (ISS). Notably, if the mechanism of an algorithm fails to achieve ideal strings, a further trial occurs from the reservoir of highest-fitness strings. Since the DHNN suffers from limited storage capacity, enhancing the storage capacity of the DHNN is the real focus of our proposed HEA. The arrangement of the ideal strings is shown below:
$$S_{\max}^{(i)} = \left[S^{(1)}, S^{(2)}, S^{(3)}, S^{(4)}, S^{(5)}\right] \tag{20}$$
Overall, these multi-objective functions are written mathematically in Equations (17)-(20):
$$f\left(F_{\max}, \gamma, S_{\max}^{(i)}\right)$$
subject to
$$\text{(For Fitness)} \quad F_{\max} = \sum_{i=0}^{p} C_i^{(3)} + \sum_{i=0}^{q} C_i^{(2)} + \sum_{i=0}^{r} C_i^{(1)}$$
$$\text{(For Diversity)} \quad \gamma = \sum_{i=0}^{p} C_i^{(3)} + \sum_{i=0}^{q} C_i^{(2)}$$
$$\text{(k-Ideal Solution Strings)} \quad S_{\max}^{(i)} = \left[S^{(1)}, S^{(2)}, S^{(3)}, S^{(4)}, S^{(5)}\right]$$
where $C_i^{(3)}$, $C_i^{(2)}$ and $C_i^{(1)}$ are the third-order, second-order and first-order clauses, $\gamma$ represents the combination of a minimum number of negative states in each clause of RANkSAT, and $S^{(1)}, \ldots, S^{(5)}$ are the five (05) total CAM entries, or ideal solution strings, respectively.
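A compact C++ sketch of the three objectives may clarify how they interact: Equation (18) counts satisfied clauses, Equation (19) is enforced here as the Section 4.2 requirement that every clause of order $k \geq 2$ carry at least one negative state, and Equation (20) corresponds to collecting five ideal strings into CAM. The data layout and function names are our own illustrative assumptions.

```cpp
#include <iostream>
#include <vector>

struct Literal { int var; bool negated; };
using Clause = std::vector<Literal>;

bool literalTrue(const Literal& l, const std::vector<int>& s) {
    return l.negated ? (s[l.var] == -1) : (s[l.var] == 1);
}

// Objective 1 (Equation (18)): number of satisfied clauses.
int fitness(const std::vector<Clause>& f, const std::vector<int>& s) {
    int sat = 0;
    for (const Clause& c : f)
        for (const Literal& l : c)
            if (literalTrue(l, s)) { ++sat; break; }
    return sat;
}

// Objective 2 (Equation (19) as a check): every clause with k >= 2 carries
// at least one negative (-1) state, the diversity requirement of Section 4.2.
bool diverse(const std::vector<Clause>& f, const std::vector<int>& s) {
    for (const Clause& c : f) {
        if (c.size() < 2) continue;          // first-order clauses are exempt
        bool hasNegative = false;
        for (const Literal& l : c)
            if (s[l.var] == -1) { hasNegative = true; break; }
        if (!hasNegative) return false;
    }
    return true;
}

// Objective 3 (Equation (20)): keep collecting strings that meet both
// objectives until k = 5 ideal solution strings (CAM entries) are stored.
void storeIfIdeal(const std::vector<Clause>& f, const std::vector<int>& s,
                  std::vector<std::vector<int>>& cam) {
    if (cam.size() < 5 && fitness(f, s) == (int)f.size() && diverse(f, s))
        cam.push_back(s);
}

int main() {
    std::vector<Clause> f = {{{0, true}, {1, false}, {2, false}},
                             {{3, false}, {4, false}}};
    std::vector<std::vector<int>> cam;
    storeIfIdeal(f, {1, -1, 1, -1, 1}, cam);
    std::cout << "stored strings: " << cam.size() << "\n"; // 1
    return 0;
}
```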

5. Proposed Hybrid Election Algorithm (HEA)

The key motivation for a hybridized algorithm is that it can cover a wide range of solutions and generate several distinct computations. Generally, metaheuristics have two segments that allow them to carry out the optimization process: the exploration of search spaces to identify potential regions of good solutions, and the exploitation phase, which intensifies the search in the best regions to find better solutions [36].
Moreover, a logical representation that follows non-systematic logical expressions, such as RANkSAT, is more effective in a hybrid metaheuristic for avoiding overfitted solutions. In this context, the novel Hybrid Election Algorithm (HEA) is introduced: a type of socio-political metaheuristic that combines evolutionary and swarm intelligence operations and handles both exploration and exploitation in a proper manner. No recent work pursues the highest fitness value while also creating diversity in the logical rules. The core reason for proposing the HEA in this paper is to achieve the maximum fitness value and simultaneously create diversity in the logical rules, treating the two as a pair.
In terms of optimization, the HEA introduces another efficient optimizer that can improve the local solutions. Consequently, the characteristics of RANkSAT can rely heavily on the HEA because of its effective and robust mechanism. The HEA is utilized in this paper to find the best RANkSAT assignment that minimizes the cost function during the training phase of the DHNN. The model is elucidated in detail in the next section. The procedure of the Hybrid Election Algorithm in DHNN-RANkSAT is explained in Section 5.1, Section 5.2, Section 5.3, Section 5.4 and Section 5.5.

5.1. Initialization

The population ($N_{POP}$) of individuals, consisting of voters and candidates and comprising potential solutions in the search space of $\alpha_{\mathrm{RAN3SAT}}^{HEA}$, is generated randomly. Let $S_i = \{S_1, S_2, S_3, \ldots, S_N\}$, $S_i \in \{-1, 1\}$, be initialized. The state of each individual is noted as 1 (TRUE) or -1 (FALSE), which aligns with the possible instances of $\alpha_{\mathrm{RAN3SAT}}^{HEA}$. Consider the search space of $\alpha_{\mathrm{RAN3SAT}}^{HEA}$ to be $S_{\alpha_{\mathrm{RAN3SAT}}^{HEA}} = \{S_1, S_2, S_3, \ldots, S_{2^n}\}$.

5.2. Eligibility Assessment

All randomized instances must undergo an eligibility/fitness assessment. There will be a reward for each correct instance that results in a satisfied RANkSAT clause. During the eligibility/fitness assessment process, the number of satisfied clauses is used to determine the eligibility of the individuals. The eligibility or fitness value can be determined from Equation (21):
$$f_{L_j} = \sum_{i=1}^{NC} C_i^{(k)}, \quad k = 1, 2, 3 \tag{21}$$
$$C_i^{(k)} = \begin{cases} 0, & \text{False} \\ 1, & \text{True} \end{cases} \tag{22}$$
where $C_i^{(k)}$ denotes the clauses of RANkSAT of order $k$. The aim is to increase the value of the eligibility/fitness function or, equivalently, decrease the value of the cost function.

5.3. Initial Formation of the Parties

In this stage, the solution space is divided into $N_{Party}$ parties. The number of voters for each party is then calculated as in Equation (23):
$$N_j = \frac{N_{Pop}}{N_{Party}}, \quad j = 1, 2, 3, 4 \tag{23}$$
where $N_{Pop}$ is the size of the population. Equation (21) is used to determine the eligibility of each instance (voter or candidate). In each party $j$, the prospective solution with the highest eligibility value is elected as the candidate $L_w$; the remaining instances are attached as the voters $V_w$ of the candidate. The correlation distance function (CorD) between the candidate $L_w$ and a voter $V_w$ is expressed in Equation (24):
$$CorD(f_{L_w}, f_{V_w}) = \left| f_{L_w} - f_{V_w} \right| \tag{24}$$

5.4. Advertisement Campaign

After organizing the initial parties, we select as the initial candidate the solution with the highest fitness in each party. The main distinction between the standard EA and the proposed HEA lies largely in the progress made in this advertisement campaign. Next, each candidate launches an advertising campaign consisting of four steps: positive advertisement, negative advertisement, coalition and a newly affiliated step, the Caretaker party. These sub-steps of the advertisement campaign are explained below.

5.4.1. Positive Advertisement

The candidates reveal their plans during this stage and attempt to sway voters' voting selections. The number of voters whom a candidate will influence is given as follows:
$$N_{A_j} = N_j \sigma_p, \quad j = 1, 2, 3, 4 \tag{25}$$
where $\sigma_p$ is the positive advertisement rate, $\sigma_p \in [0, 0.5]$. The reasonable effect between the candidate and the voter is expressed as the eligibility correlation distance coefficient in Equation (26):
$$\omega_{v_i} = \frac{1}{1 + CorD(f_{L_w}, f_{V_w})} \tag{26}$$
Each influenced voter $v_i$ determines the number of neuron states that can be updated based on Equation (27):
$$S_{v_i} = N_j \omega_{v_i} \tag{27}$$
where $N_j = 3w + 2v + u$ is the total number of third-, second- and first-order literals of $\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{HEA}$. First, the influenced voter updates its neuron states randomly according to the predetermined number. Then, the eligibility/fitness value of each influenced voter $v_i$ is evaluated based on Equation (21). In each party, there is the possibility that an influenced voter $v_i$ will replace the current candidate due to a higher eligibility/fitness value.
As a result, the candidate is replaced by the solution with the highest fitness value in order to improve the quality of the solutions in the parties (more qualified supporters). If a voter and a candidate have the same eligibility, the candidate position is maintained (no replacement is made). Suppose the best solution is identified in the positive advertisement stage; in that case, it will continue as a candidate in the following steps until the first iteration is completed, at which point it will be announced as the best solution at the election stage.
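The positive advertisement step can be summarized in a few lines of C++, following Equations (24)-(27): compute the correlation distance between candidate and voter, derive the influence coefficient, flip the corresponding number of randomly chosen bipolar states and replace the candidate if the voter becomes fitter. This is a hedged sketch; the toy eligibility function and the flip-based update are assumptions standing in for Equation (21) and the paper's actual update rule.

```cpp
#include <cmath>
#include <cstdlib>
#include <functional>
#include <iostream>
#include <utility>
#include <vector>

using Fitness = std::function<int(const std::vector<int>&)>;

// One positive-advertisement step for a single influenced voter; the
// eligibility function of Equation (21) is passed in by the caller.
void positiveAdvertisement(std::vector<int>& candidate, std::vector<int>& voter,
                           int Nj, const Fitness& f) {
    double corD  = std::fabs((double)(f(candidate) - f(voter))); // Eq. (24)
    double omega = 1.0 / (1.0 + corD);                           // Eq. (26)
    int toUpdate = (int)(Nj * omega);                            // Eq. (27)
    for (int k = 0; k < toUpdate; ++k) {
        int i = std::rand() % voter.size();  // random neuron position
        voter[i] = -voter[i];                // flip the bipolar state
    }
    if (f(voter) > f(candidate))             // replacement rule: the fitter
        std::swap(candidate, voter);         // voter becomes the new candidate
}

int main() {
    // Toy eligibility: number of +1 states (stands in for satisfied clauses).
    Fitness f = [](const std::vector<int>& s) {
        int v = 0; for (int x : s) v += (x == 1); return v;
    };
    std::vector<int> cand  = {1, 1, -1, -1, 1};
    std::vector<int> voter = {-1, -1, -1, 1, 1};
    positiveAdvertisement(cand, voter, 5, f);
    std::cout << "candidate fitness: " << f(cand) << "\n";
    return 0;
}
```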

5.4.2. Negative Advertisement

At this point, candidates use negative advertising to try to entice supporters from other parties to their side and thereby expand the search space. This negative campaign generally benefits popular parties because it leads to an increase in popularity. Equation (28) gives the number of voters a candidate can attract from the other parties' voters with the highest fitness value:
$$N_{A_j} = \sigma_n \left( N_j - N_{A_j} \right) \tag{28}$$
where $A_j$ are the voters from other parties and $\sigma_n$ is the negative advertisement rate, $\sigma_n \in [0, 0.5]$. The correlation or similarity belief between the voters and the candidates is expressed as in Equation (29):
$$CorD(f_{L_w}, f_{V_w}) = \left| f_{L_w} - f_{V_w} \right| \tag{29}$$
The reasonable effect of the candidate on a voter from another party is defined based on the eligibility distance correlation coefficient $\omega_{v_i}$:
$$\omega_{v_i} = \frac{1}{1 + CorD(f_{L_w}, f_{V_w})} \tag{30}$$
Each influenced voter $v_i$ determines the number of neuron states that can be updated based on Equation (31):
$$S_{v_i} = N_j \omega_{v_i} \tag{31}$$
By using Equation (30), we can calculate the eligibility effect for the new supporters. Again, if a voter has a fitness value greater than the candidate's, the candidate will be replaced by this voter.

5.4.3. Coalition

During this stage, parties collaborate and establish a coalition in order to explore additional areas of the search space $\alpha_{\mathrm{RAN}k\mathrm{SAT}}^{HEA}$. The processes and formulations used in the coalition stage are the same as in the previous strategy. First, two parties are randomly combined, and the new candidate is determined after this merger. To begin, the eligibility distance function and distance coefficient are determined using Equations (29) and (30). Then, using Equation (31), the number of variables that need to be updated in each voter from the united party is specified. The fitness values of all voters are then updated. Finally, a comparison is made between the voters' fitness and the previous candidates' fitness. The old candidate is elected into the next stage if it still has the highest fitness value. However, if a voter attains a higher fitness rating than the previous candidate, this fittest voter becomes the new candidate, who will face off against the other parties.

5.4.4. Caretaker Party

After completing the traditional coalition stage, the accuracy of the proposed model is tested. If the proposed model satisfies its accuracy requirement (by finding k strings that achieve the highest fitness value), we can proceed to the next phase, known as the diversity phase. This diversity phase creates a 'best pool' of instances in which only voters with the maximum fitness value can stay. This 'best pool' is named the Caretaker party. In this stage, those who achieved the highest fitness value are selected for the pool. The Caretaker party mainly takes care of all of its best strings; these dynamic steps therefore enhance the diversity of the proposed HEA mechanism. The selection process for choosing the highest fitness value can be calculated by Equation (32):
$$C_{p_{best}} = \theta f_{L_j} \tag{32}$$
where $\theta \in [0.1, 0.4]$ is the ratio of the achieved maximum fitness value.
The mutation insertion concept is also utilized in this Caretaker party to improve its diversity, which creates a new dimension in the study of the proposed HEA model. Notably, the Caretaker party emphasizes the exploitation mechanism; thus, it is better to choose a type of mutation insertion that selects randomly with a more localized search. Moreover, a general mutation mutates randomly, with no mechanism to detect unsatisfied clauses or the inclusion of positive/negative states. In this context, we exploit the shift mutation, which works by shifting a randomly chosen frontier between two adjacent clauses by a single step, either to the right or to the left [37]. The shift mutation not only focuses on non-satisfied clauses but also on the inclusion of a positive-negative state combination in each clause. In retrospect, the shift mutation operator essentially provides the HEA with a local search capability, a phenomenon called intensification; it is a more localized search operation than the swap mutation.
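Our reading of the shift mutation of [37] is sketched below: the solution string is viewed as consecutive clause segments, and a randomly chosen frontier between two adjacent clauses moves a single step to the left or right, so one boundary state changes clause membership. The segment representation is an assumption made for illustration.

```cpp
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>

// Shift mutation sketch: 'boundaries' holds the start index of every clause
// segment after the first. One randomly chosen frontier moves by one step,
// kept within range so that no clause segment becomes empty.
void shiftMutation(std::vector<int>& boundaries, int stringLength) {
    if (boundaries.empty()) return;
    int b = std::rand() % boundaries.size();        // pick a random frontier
    int dir = (std::rand() % 2 == 0) ? -1 : 1;      // one step left or right
    int shifted = boundaries[b] + dir;
    int lo = (b == 0) ? 1 : boundaries[b - 1] + 1;
    int hi = (b + 1 == (int)boundaries.size()) ? stringLength - 1
                                               : boundaries[b + 1] - 1;
    if (shifted >= lo && shifted <= hi) boundaries[b] = shifted;
}

int main() {
    std::srand((unsigned)std::time(nullptr));
    // An 8-state string split as [0,3) [3,5) [5,8): a 3SAT, 2SAT and 3SAT block.
    std::vector<int> boundaries = {3, 5};
    shiftMutation(boundaries, 8);
    std::cout << "new frontiers: " << boundaries[0] << ", " << boundaries[1] << "\n";
    return 0;
}
```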
Once the condition of Equation (32) is satisfied, the fittest voters move to the next stage to participate in the final round: 'Election Day'.

5.5. The Election Day

The best solution (the candidate) in each party is tested at this stage. This solution is announced as the optimal solution if it has attained the maximum fitness value with the desired logical states (at least one negative state in the solution string). Otherwise, a second iteration is carried out. The procedure is repeated until all conditions (Equations (23)-(32)) have been met.
Meanwhile, we provide a real example in Appendix A that may clarify how voters and candidates represent the value of a party. We now present the pseudocode of our proposed Hybrid Election Algorithm (HEA) in Algorithm 2 and sketch the overall flowchart in Figure 2:
Algorithm 2 Pseudocode for the proposed Hybrid Election Algorithm.
1: Start
2: Initialize the population $N_{POP}$ consisting of $S_i \in \{S_1, S_2, S_3, \ldots, S_{N_{POP}}\}$
3: While $i \leq trial$
4:  Form the initial parties by using Equation (23)
5:  For $J \in \{1, 2, 3, \ldots, N_{Party}\}$ do
6:   Calculate the similarity between the voters and the candidates by using Equation (24)
7:  End For
[Positive Advertisement]
8:  For $S_i \in \{1, 2, 3, \ldots, N_{s_i}\}$ do
9:   Evaluate the number of voters by using Equation (25)
10:   Evaluate the reasonable effect of the candidate, $\omega_{v_i}$, by using Equation (26)
11:   Update the neuron state according to Equation (27)
12:   If $f_{v_{ij}} > f_{L_j}$
13:    Assign $v_{ij}$ as the new $L_j$
14:   Else
15:    Retain $L_j$
16:  End For
[Negative Advertisement]
17:  For $S_i \in \{1, 2, 3, \ldots, N_{v_j}\}$ do
18:   Evaluate the similarity between the voters from the other party and the candidate by using Equation (28)
19:   Evaluate the reasonable effect of the candidate, $\omega_{v_i}$, and update the neuron state by using Equation (30)
20:   If $f_{v_i} > f_{L_j}$
21:    Assign $v_i$ as the new $L_j$
22:   Else
23:    Retain $L_j$
24:  End For
[Coalition]
25:  For $S_i \in \{1, 2, 3, \ldots, N_{v_j}\}$ do
26:   Evaluate the similarity between the voters from the other party and the candidate by using Equation (29)
27:   Evaluate the reasonable effect of the candidate, $\omega_{v_i}$, and update the neuron state by using Equation (30)
28:   If $f_{v_i} > f_{L_j}$
29:    Assign $v_i$ as the new $L_j$
30:   Else
31:    Retain $L_j$
32:  End For
33: End While
[Caretaker Party]
34: For $S_i \in \{1, 2, 3, \ldots, N\}$ do
35:  // Apply the mutation operator //
36:  If $f_{v_i}^{N} > f_{L_j}$
37:   // Choose the five highest-fitness voters //
38:   Assign $5v_i$ as the new $5L_j$ (select five voters as five new candidates)
39:  Else
40:   Retain $5L_j$ (five new candidates)
41: End For
42: Return the five final neuron states
43: End

5.6. Model Reproducibility

To reproduce the DHNN-RANkSAT-HEA framework repeatedly on certain data sets and obtain the same results, the same experimental conditions need to be used. The following specific factors should be considered:
i. The logical presentation must be the RANkSAT structure for k = 3, 2, where the threshold iteration is set at 100 to distinguish the maximum capacity of the presented algorithms.
ii. To achieve the optimal synaptic weights in the training phase, the WA method is utilized. According to [24], the WA method is more stable than Hebbian learning. It is worth mentioning that different logic operators have different capabilities in producing negative literals.

6. Experimental Setup

In this section, we verify the effectiveness of the proposed DHNN-RANkSAT-HEA model during the training phase. This experiment considers the multi-objective concept, which requires that the proposed model achieve the highest fitness value, generate the correct number of negative literals in each clause and produce ideal solution strings. To guarantee the reproducibility of the experiment, the setup is as follows:

6.1. Simulation Design and Simulation Datasets

All the simulations were conducted with the same features to avoid biases during experimentation. The features are as follows:
i. Device setting: the simulation was run on a device with 4 GB RAM and an Intel Core i5 processor under a 64-bit Windows 10 operating system, where the CPU time threshold for data generation was 24 h. The proposed model was implemented and analyzed by using Dev C++ version 5.11.
ii. SAT configuration: the number of neurons ranges over $10 \leq NN \leq 120$. The logical framework is based on third- and second-order logic, and the clauses in a logical structure are chosen randomly.
iii. Proposed HEA code (online): the source code of the proposed model can be found at the following link: https://bit.ly/3PEcYl4 (25 May 2022).
The experiment was conducted with simulated data produced at random by the proposed model. The simulated data set's elements are strings of bipolar values (-1, 1) based on the RANkSAT structure for k = (3, 2). Simulated data sets are often utilized in testing and evaluating the capabilities of a newly proposed SAT in the DHNN, according to the research of [12,14,15,23,24]. As a result, when applied to real-life data sets, the results from the simulated data set will project the usefulness of the proposed model.
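A minimal sketch of how such a simulated data set can be generated is given below: each string is a vector of NN bipolar states drawn with equal probability, with a fixed seed for reproducibility. This mirrors the described setup; it is not the code behind the linked repository.

```cpp
#include <iostream>
#include <random>
#include <vector>

// Random bipolar string over NN neurons: each entry is -1 or 1 with equal
// probability; the RANkSAT (k = 3, 2) layout then groups these states into
// third- and second-order clauses.
std::vector<int> randomBipolarString(int NN, std::mt19937& rng) {
    std::bernoulli_distribution coin(0.5);
    std::vector<int> s(NN);
    for (int& x : s) x = coin(rng) ? 1 : -1;
    return s;
}

int main() {
    std::mt19937 rng(42);                     // fixed seed for reproducibility
    for (int NN = 10; NN <= 120; NN += 10) {  // neuron range used in the paper
        std::vector<int> s = randomBipolarString(NN, rng);
        std::cout << "NN = " << NN << ", first state = " << s[0] << "\n";
    }
    return 0;
}
```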

6.2. Parameters Assignment

Table 2, Table 3 and Table 4 show the parameter control considered in this study. The number of ideal strings is set to 5. To evaluate the final energy of the DHNN, the tolerance value is 0.001 [23]. The DHNN undergoes relaxation by applying the Sathasivam relaxation method. The relaxation rate is set to 3 because a lower value causes the neurons to exchange information and relax to the global minimum energy. All parameters are tuned offline, since the parameter values are identified before applying the algorithm.
The optimal parameter setting found by the tuning process is used in solving the problems, and these parameter values remain unchanged during the simulation run. Moreover, our proposed algorithm follows the work of [12,14,15,21,22,23,24], and, to make a fair comparison, all the parameter values are aligned with those of the mentioned studies; otherwise, strongly impairing a non-tuned algorithm could lead to misleading conclusions about the proposed algorithm [38]. The training and testing phases randomly initialize the neuron states. Table 2, Table 3 and Table 4 summarize the parameter assignments.

7. Performance Evaluation Metrics

7.1. Numerical Calculation of the Storage Capacity ($\beta_C$)

An associative memory system, or CAM, serves as the 'storage box' of the Discrete Hopfield Neural Network. Discovering multiple CAM entries in a single model should increase the storage capacity of the DHNN. In the general simulation strategy, each solution string is stored in a single CAM, which is the basic nature of the DHNN. Interestingly, more solution strings are accommodated in our proposed simulation approach.
A string that satisfies both the maximum fitness and the diversity phase is called a 'fully satisfied string' ($\mu$). In a single simulation, if the model generates at least five (05) fully satisfied strings ($\mu$), they are considered 'ideal solution strings' ($\rho$). To predict the storage capacity via CAM, it is necessary to find the relation between the fully satisfied strings ($\mu$) and the ideal solution strings ($\rho$). Equation (33) explains how our model achieves its target in terms of the storage capacity of the DHNN:
$$\beta_C = \begin{cases} 1, & \text{when } \mu \geq \rho \\ 0, & \text{otherwise} \end{cases} \tag{33}$$
where a value of $\beta_C$ equal to 1 means that the model achieved its full storage requirement regarding CAM.

7.2. Diversity Calculation Strategy

In a multi-objective function, the diversification of a model is a vital criterion. In a population-based metaheuristic, the diversity metric is a state-of-the-art metric that can assist in the assessment of exploration and exploitation and can find real solutions for real-world applications [39]. The calculation procedure for diversity depends on various parameters and relevant objects. In this paper, we calculate the diversity ratio of a logical structure. The maximum diversity and the calculation of the diversity rate of a logical structure are shown in Equations (34) and (35):
$$\text{Maximum Diversity}, \ D_{\max} = \xi(3m + 2n) \tag{34}$$
where $\xi = 0.40$ means that the total diversity of the logical rule is calculated from 40%, $m$ is the number of 3SAT clauses and $n$ is the number of 2SAT clauses.
$$\text{Proposed diversity rate} = \frac{\nu_A}{\nu_T} \times \frac{100\%}{40\%} \tag{35}$$
where $\nu_A$ is the achieved number of states and $\nu_T$ is the target number of states.

7.3. Mean Absolute Error

During the training and testing phases, the standard error based on the average difference between the computed fitness value and the expected fitness value of the DHNN-RANkSAT model is measured as the Mean Absolute Error (MAE). According to [40], MAE appears to be a trustworthy statistic for measuring the correctness of a model. Thus, the calculation of MAE in DHNN-RANkSAT (training and testing phases) is recast as follows:
$$MAE = \frac{1}{n}\sum_{i=1}^{n}\left| f_{\max} - f_i \right| \tag{36}$$
where $f_i$ refers to the obtained fitness value, $f_{\max}$ is the maximum fitness value and $n$ denotes the number of iterations corresponding to the RANkSAT logic.

7.4. The Ratio of Global Solutions (RGS)

Energy analysis can be carried out by observing the quality of the solutions to determine whether they are global minima solutions corresponding to the global minimum energy. The ratio of global minima solutions can be calculated as
$$RGS = \frac{1}{N_T N_{CM}}\sum_{i=1}^{n} N_{G}^{H_{P_{\mathrm{RAN}k\mathrm{SAT}}}} \tag{37}$$
where $N_G$ is the number of global minimum energy solutions attained by the proposed model.

7.5. Similarity Index (SI)

This study uses a similarity analysis metric to compare the final states obtained using DHNN-RANkSAT. In theory, the majority of the neuron states retrieved by the DHNN have reached the global minimum energy. Inspired by the work of [23], this similarity metric is extended to examine the RANkSAT logical representation of the network's final states. The comparison is performed by taking the benchmark states $S_{\max}^i$ together with the states attained by the network, $S_i$. The general comparison of the benchmark state and the final state is given in Equation (38):
$$C_{S_{\max}^i, S_i} = \left\{\left(S_{\max}^i, S_i\right); \ i = 1, 2, \ldots, n\right\} \tag{38}$$
The standard specification variables can be defined by considering the following domains:
  • $l$ refers to the total number of occurrences of $(S_{\max}^i = 1, S_i = 1)$ in $C_{S_{\max}^i, S_i}$.
  • $m$ refers to the total number of occurrences of $(S_{\max}^i = 1, S_i = -1)$ in $C_{S_{\max}^i, S_i}$.
  • $n$ refers to the total number of occurrences of $(S_{\max}^i = -1, S_i = 1)$ in $C_{S_{\max}^i, S_i}$.
  • $o$ refers to the total number of occurrences of $(S_{\max}^i = -1, S_i = -1)$ in $C_{S_{\max}^i, S_i}$.
In this paper, we adopt a variant of the similarity index called the Gower-Legendre similarity index (GLI) that considers negative co-occurrences, with a particular emphasis on positive-negative states [41]. The Gower-Legendre similarity index is adjusted in this study to account for the similarity between the simulated final states $S_i$ and the benchmark final states $S_{\max}^i$:
$$GLI = \frac{l + o}{l + 0.5(m + n) + o} \tag{39}$$
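The Gower-Legendre index is straightforward to compute from the four co-occurrence counts, as the hedged C++ sketch below shows; the example vectors are arbitrary.

```cpp
#include <iostream>
#include <vector>

// Gower-Legendre similarity index between the benchmark state Smax and the
// retrieved final state S, counting the four co-occurrence cases l, m, n, o.
double gowerLegendre(const std::vector<int>& smax, const std::vector<int>& s) {
    int l = 0, m = 0, n = 0, o = 0;
    for (size_t i = 0; i < smax.size(); ++i) {
        if (smax[i] == 1 && s[i] == 1)       ++l;
        else if (smax[i] == 1 && s[i] == -1) ++m;
        else if (smax[i] == -1 && s[i] == 1) ++n;
        else                                 ++o;  // (-1, -1)
    }
    return (l + o) / (l + 0.5 * (m + n) + o);
}

int main() {
    std::vector<int> smax = {1, -1, 1, 1, -1};
    std::vector<int> s    = {1, -1, -1, 1, 1};
    // l = 2, m = 1, n = 1, o = 1, so GLI = 3 / (3 + 0.5*2) = 0.75.
    std::cout << gowerLegendre(smax, s) << "\n";
    return 0;
}
```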

7.6. Total Neuron Variation (TNV)

The Total Neuron Variation ($TNV$) of the DHNN model is the number of distinct solutions formed in each neuron combination, and it is calculated by Equations (40) and (41):
$$TNV = \sum_{V=1}^{\lambda} G_V \tag{40}$$
$$G_{V+1} = \begin{cases} 1, & x_{i+1} \neq x_i \\ 0, & x_{i+1} = x_i \end{cases} \tag{41}$$
where $\lambda$ is the total number of solutions produced by the DHNN model and $x_i$ is the solution produced in the $i$-th trial.

7.7. Median Absolute Deviation (MAD)

Median Absolute Deviation (MAD) is a robust measure of variability because it uses the median as an estimate of the distribution's center and the absolute difference rather than the squared difference [42]:
$$MAD = \mathrm{Median}\left(\left| x_i - \bar{x} \right|\right) \tag{42}$$
Here, $x_i$ refers to each value, while $\bar{x}$ denotes the median value corresponding to the RANkSAT models.

7.8. Friedman Statistical Analysis (Fd)

Friedman statistical analysis is used to find significant differences in the outputs of two or more algorithms on the simulated data. By computing the ranking of the observed results for each metaheuristic, the Friedman test can be used to compare several methods. The general form of the Friedman test statistic is given in Equation (43) [43]:
$$F_d = \frac{12N}{K(K+1)}\left(\sum_{j} R_j^2 - \frac{K(K+1)^2}{4}\right) \tag{43}$$
where $K$ is the number of metaheuristics in the test, $j$ represents the associated index, $N$ is the number of runs and $R_j$ stands for the average rank of each algorithm. In addition, the p-value follows a Chi-squared distribution with $(K - 1)$ degrees of freedom. It is common to declare a result significant if the p-value is less than 0.05 or 0.01 [44].
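For reference, the Friedman statistic of Equation (43) can be computed directly from the average ranks, as in the sketch below; the rank values in the example are hypothetical, not taken from Table 5.

```cpp
#include <iostream>
#include <vector>

// Friedman statistic, Equation (43): K algorithms, N runs, Rj the average
// rank of algorithm j across the runs (ranks are assumed precomputed).
double friedman(const std::vector<double>& avgRanks, int N) {
    int K = (int)avgRanks.size();
    double sumSq = 0.0;
    for (double Rj : avgRanks) sumSq += Rj * Rj;
    return 12.0 * N / (K * (K + 1)) * (sumSq - K * (K + 1) * (K + 1) / 4.0);
}

int main() {
    // Hypothetical average ranks of five models over N = 12 runs.
    std::vector<double> ranks = {1.9, 2.8, 3.2, 3.5, 3.6};
    std::cout << "Fd = " << friedman(ranks, 12) << "\n";
    // Compare against a Chi-squared distribution with K - 1 = 4 degrees of
    // freedom; p < 0.05 rejects the hypothesis of equal performance.
    return 0;
}
```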

7.9. Baseline Method

Our paper focuses on investigating the performance of DHNN-RANkSAT-HEA. The proposed DHNN-RANkSAT-HEA model will be compared with [19,23,24,45]. The baseline methods are as follows:
i. The conventional EA model was proposed by [24], utilizing the RAN3SAT representation integrated with the WA method. The determination of $H_{\alpha_{\mathrm{RAN3SAT}}}^{min}$ follows Equation (15), and the populations are selected randomly. However, this EA emphasized accuracy (fitness value) by utilizing Equation (21), and there was no discussion based on the logical rule. Additionally, no further local improvement operator was introduced in that work. Moreover, it focused on only a single objective, which is enhancing the fitness value with fewer iterations.
ii. Ref. [23] also employed the ES technique for checking the efficiency of the ES model. Note that ES has no optimizer to enhance its capacity to improve the local or global solutions, and it has no specific mechanism to increase its solution search competence. In that work, there was no involvement of multiple objectives that could focus on the storage capacity in terms of solution strings.
iii. The study by [19] introduced the GA model for evaluating the competence of the model with the HNN. A qualified initial population can result in fitter answers and a higher convergence rate; this belief encouraged producing the elementary population required for GA using the HNN. Conversely, GA is highly dependent on the mutation operator during the initial stage due to an ineffective crossover operator. In that work, there was no contribution towards improving the performance of the operators or utilizing the concept of multi-objective functions.
iv. Ref. [45] incorporated the HNN with the ABC algorithm in minimizing the 2SAT architecture. This work indicates the credibility of 2SAT in representing the behaviors of neurons in the HNN. On the contrary, there was no involvement of logical diversity, as well as no strategy to enhance the storage capacity of the HNN.
Our proposed work focuses on the learning phase rather than the retrieval phase of the DHNN. Thus, different classes of HNN, such as the Mutation HNN, Kernel HNN and Boltzmann HNN, are not applicable for comparison. Meanwhile, Figure 3 depicts the overall flowchart of the different DHNN-RANkSAT models.

8. Result and Discussions

Estimating the performance of any metaheuristic requires metrics for accuracy in terms of fitness, bias and variability. To check the predictions of our algorithm, we simulated the model and studied its dynamics over various numbers of neurons. This study aims to see how effective our proposed model is by limiting the number of neurons ($NN$). In addition, we examined the solution strings (storage capacity), training error, testing error, energy analysis and similarity index in the interval $10 \leq NN \leq 120$. Moreover, to assess the performance via a statistical metric, we consider MAE as a performance indicator in the training, testing and energy analyses. Notably, a lower MAE value is considered to indicate a higher tendency towards achieving optimal solutions. Finally, a Friedman test analysis is conducted for each part, where the analysis is attached in each table by listing the average (AVG), minimum (MIN) and maximum (MAX) values with the average rank (AVG.RANK) of each model.

8.1. Training Phase

8.1.1. Ideal Solution Strings

Figure 4 depicts the relative performances of α RAN k SAT E S , α RAN k SAT E A , α RAN k SAT G A , α RAN k SAT A B C and α RAN k SAT H E A models in terms of Ideal Solution Strings, which are represented as CAM. Here, we see that for the small number of neurons, α RAN k SAT E S , α RAN k SAT E A , α RAN k SAT G A , α RAN k SAT A B C only can generate five (05) Ideal solution strings. From the interval 50 N N 70 , these models’ performances go down rapidly. For α RAN k SAT E S , the generating number of ideal solution strings is almost zero at N N 90 . This happened due to lower management of synaptic weight calculation, for which this model cannot gather correct stored patterns in the testing phase. Additionally, α RAN k SAT E S has no filtering process, which satisfies our objective functions for subtle desired ideal solution strings. Similarly, for higher N N , α RAN k SAT E A , α RAN k SAT G A , α RAN k SAT A B C is also inefficient in achieving ideal solution strings. This is because α RAN k SAT E A , α RAN k SAT G A , α RAN k SAT A B C has no additional operator except a positive advertisement/ mutation/scout bee operator, respectively, that can improve the local solution, which impacts finding ideal solution strings. These models failed to utilize the Pareto optimality concept to generate ideal solution strings for higher N N . If we look into the α RAN k SAT H E A model, the arranging capacity of ideal solution strings is very impressive. From the initial point to N N = 120 , this model continuously produces five ideal solution strings. This gives clear evidence that the α RAN k SAT H E A model has fully utilized both diversity and fitness phase concepts by which more local solution improving optimizers are involved to achieve optimal synaptic weight that leads to our target ideal solution strings.
According to Deb and Deb [46], introducing a mutation operator into any algorithm can improve its exploration and exploitation strategy. The shift mutation in the caretaker party of the advertisement campaign develops the population and raises its ability to find more satisfied solution strings, which also leads toward global solutions. Hence, we can conclude that the αRANkSAT-HEA model created the desired ideal solution strings, expressed here as CAM, and outperformed the other models.
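As an illustration of this idea, a minimal Python sketch of a shift-style mutation on a bipolar solution string is given below. The cyclic shift followed by a single state flip, and the `shift` parameter, are simplifying assumptions for exposition, not the exact caretaker-party operator of HEA.

```python
import random

def shift_mutation(state, shift=1):
    """Cyclically shift a bipolar (+1/-1) solution string by `shift`
    positions, then flip one randomly chosen neuron state; both steps
    are illustrative stand-ins for the caretaker-party mutation."""
    mutated = state[-shift:] + state[:-shift]  # cyclic shift
    idx = random.randrange(len(mutated))
    mutated[idx] = -mutated[idx]               # single bipolar flip
    return mutated

voter = [1, -1, 1, 1, -1, 1, -1, 1]
print(shift_mutation(voter))
```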

8.1.2. Fitness

Figure 5a and Table 5 show the training errors (the fitness value) in MAE, and Figure 5b presents the MAD-Fitness value, for the αRANkSAT-ES, αRANkSAT-EA, αRANkSAT-GA, αRANkSAT-ABC and αRANkSAT-HEA models. Lower MAE and MAD values indicate a higher degree of accuracy (fitness). In general, it is noticeable that the MAE values of the αRANkSAT-ES, αRANkSAT-EA, αRANkSAT-GA and αRANkSAT-ABC models rise after a certain number of neurons.
A higher MAE thus means a model cannot attain the desired fitness value in the training phase. After NN ≥ 15, the MAE value of the αRANkSAT-ES model rises continuously without any intervention. This occurs because αRANkSAT-ES has a trial-and-error nature that causes a sub-optimal training phase; it was stated by [17] that αRANkSAT-ES exploits this 'trial and error' nature for higher numbers of neurons.
If we examine αRANkSAT-EA, αRANkSAT-GA and αRANkSAT-ABC, the error increases proportionally as the number of neurons rises. This elucidates that these models define no additional optimization phase to achieve the multiple objectives, so there is a high chance of obtaining non-improving solutions. On the other hand, the MAE-fitness of αRANkSAT-HEA remains at the desired level: over the interval 10 ≤ NN ≤ 100, its MAE value is zero. This means that the combination of 3SAT and 2SAT clauses can gain more satisfied interpretations, which helps to achieve the maximum fitness value.
Moreover, the proposed αRANkSAT-HEA model has a strong influencer, namely the caretaker party in the advertisement campaign, which mainly reduces the fluctuation of MAE. After the negative and coalition campaign strategies, each voter's chance of enhanced eligibility increases, and the selected voters with the highest eligibility values form a stronger party. Thus, the individual eligibility of all voters and candidates increases and the absolute error is reduced vividly. This finding agrees well with the study of [14], in which clause arrangement is the key issue in Random Satisfiability. Notably, the 3SAT clauses of αRANkSAT create more satisfied state options, so the MAE value is almost zero in DHNN. Throughout the whole simulation, αRANkSAT-HEA maintains the lowest MAE value in terms of fitness, which shows that it consists of effective optimizers that can achieve an optimal training phase.
A similar pattern applies to Figure 5b, which represents the MAD-Fitness value for the different RANkSAT models. In this figure, it is also clearly visible that the MAD value for αRANkSAT-HEA is consistently zero, meaning that the αRANkSAT-HEA model achieved 100% of the desired fitness value.
Additionally, a Friedman test was conducted, as shown in Table 5, for the αRANkSAT-EA, αRANkSAT-ES, αRANkSAT-GA, αRANkSAT-ABC and αRANkSAT-HEA models. With degrees of freedom df = 4 and significance level α = 0.05, the Chi-Square value for MAE-fitness is χ² = 72.167, so the null hypothesis of equal performance for all the models is rejected. Furthermore, a lower average rank represents a better position for a model. The rank analysis in Table 5 identifies that the lowest average rank for fitness belongs to the αRANkSAT-HEA model, at 1.87.
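The Friedman test itself can be reproduced with standard statistical tools. The sketch below applies scipy.stats.friedmanchisquare to a subset of the MAE-fitness values from Table 5 (the rows NN = 20, 55, 80, 105 and 120); the subset is chosen only to keep the example short, so the resulting statistic differs from the full-grid value reported above.

```python
from scipy.stats import friedmanchisquare

# MAE-fitness values from Table 5 at NN = 20, 55, 80, 105, 120
hea = [0, 0, 0, 4.985, 9.6]
ea  = [0, 2.9, 25.6, 37.8, 45.15]
es  = [6.5, 20.58, 31.679, 43, 48]
ga  = [0, 14.85, 30, 45, 45]
abc = [0, 4.8, 22.77, 41.06, 46.991]

stat, p = friedmanchisquare(hea, ea, es, ga, abc)
print(f"chi-square = {stat:.3f}, p = {p:.4f}")  # reject H0 when p < 0.05
```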

8.1.3. Diversity

Figure 6a and Table 6 depict the training error in MAE with respect to diversity, and Figure 6b represents the error in MAD, for the αRANkSAT-ES, αRANkSAT-EA, αRANkSAT-GA, αRANkSAT-ABC and αRANkSAT-HEA models. Here, lower MAE and MAD values indicate higher model diversity. For all models, it was found that as the number of neurons increases, the MAE value grows and the diversity drops; that is, as NN increases, the population diversity diminishes.
During the interval 10 ≤ NN ≤ 50, αRANkSAT-ES shows a negligible error in terms of diversity. After this interval, its MAE values increase rapidly. This happens because the model has no partitioning of the solution space and therefore cannot create more negative literals when searching for satisfied interpretations. Moreover, αRANkSAT-ES is only workable for lower numbers of neurons because of its weak mechanism. The αRANkSAT-EA, αRANkSAT-GA and αRANkSAT-ABC models perform better than αRANkSAT-ES because they partition the solution space and can thus adopt more negative literals when finding satisfied interpretations. Although these models perform better, the lack of a balanced exploration–exploitation strategy in their operators produces lower diversity as NN increases.
Alternatively, the proposed αRANkSAT-HEA combines the state with the non-benchmark logical state, achieving a maximum diversity rate that sustains a dynamic spread until NN ≤ 100 (the accumulated error is practically zero). αRANkSAT-HEA has more partitioned solution spaces than the other models, which keeps the advertisement campaign strategy of the algorithm balanced and retains a low MAE value. This confirms that, with different numbers of negative literals, αRANkSAT shows additional compatibility as a symbolic instruction in DHNN. Involving a mutation strategy in the diversity phase ensures the algorithm's exploitation and global search abilities [47]; the interaction of the mutation mechanism with the caretaker party of the advertisement campaign improves the final states of the neurons and strengthens the global search.
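To make the diversity criterion concrete, a minimal sketch is given below: it measures the ratio of negative states in a bipolar string against the 40% diversity target listed in Table 2. The acceptance tolerance plays the role of the paper's T_D, but its value here is illustrative.

```python
def diversity_ratio(state):
    """Fraction of negative (-1) states in a bipolar solution string."""
    return state.count(-1) / len(state)

def meets_diversity(state, target=0.40, tol=0.05):
    """Accept a string whose negative-state ratio is within `tol` of the
    target diversity (Table 2 uses 40%); `tol` is illustrative."""
    return abs(diversity_ratio(state) - target) <= tol

print(diversity_ratio([1, -1, 1, 1, -1]))  # 0.4
print(meets_diversity([1, -1, 1, 1, -1]))  # True
```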
The same reading applies to Figure 6b, which represents the MAD value for diversity in the different RANkSAT models. In this figure, it is also clearly visible that the MAD value for αRANkSAT-HEA is absolutely zero, meaning that the αRANkSAT-HEA model fully satisfied the diversity strategy.
Here, a Friedman test was also conducted, as reported in Table 6, for the αRANkSAT-EA, αRANkSAT-ES, αRANkSAT-GA, αRANkSAT-ABC and αRANkSAT-HEA models. With df = 4 and significance level α = 0.05, the Chi-Square value for MAE-diversity is χ² = 42.579, so the null hypothesis of equal performance for all the models is rejected. Again, a lower average rank represents a better position; the rank analysis in Table 6 identifies that the lowest average rank for diversity belongs to the αRANkSAT-HEA model, at 1.48.

8.2. Testing Phase

The error analysis for the testing phase is sketched in Figure 7 and Table 7. In this section, we discuss the behavior of the various models in terms of the synaptic weight management that retrieves the final neuron states and produces global minima solutions. The competence of the testing phase in DHNN indicates whether an αRANkSAT model can successfully achieve optimal synaptic weights to retrieve the final states that produce global minima solutions.
Referring to Figure 7, αRANkSAT-ES yields a zero MAE value for lower NN, whereas its testing MAE sharply rises and reaches the maximum error of MAE = 100 when NN is higher. This happens because αRANkSAT-ES generates wrong synaptic weights for higher NN, consistent with its sub-optimal training phase. The αRANkSAT-EA, αRANkSAT-GA and αRANkSAT-ABC models likewise fail to achieve the correct synaptic weights for higher NN, which creates higher testing MAE values: with more neurons, these models cannot explore the search space properly, which hampers minimizing the cost function. To avoid this issue in the future, these models could adopt a greedy selection operator that avoids sub-optimal solutions [48].
On the other hand, αRANkSAT-HEA exhibits the best performance in the testing phase. From the initial stage to NN = 120, the proposed αRANkSAT-HEA produces the finest result, achieving an almost zero MAE value. This happens because αRANkSAT-HEA has the two filtering phases (fitness and diversity) specified in the multi-objective concept, each comprising several layers. These layers carry a strategy for improving the global search space (exploration) and the local search space (exploitation). This strategy generates a zero-valued cost function for αRANkSAT-HEA, leading to 100% global minimum solutions. Importantly, if the mechanism of αRANkSAT-HEA failed to retrieve optimal synaptic weights, the testing phase would be affected. Hence, it can be validated that αRANkSAT-HEA is a better model for finding global minima solutions than the other mentioned models.
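For illustration, the sketch below evaluates a second-order DHNN energy and checks a retrieved state against the minimum energy; the third-order terms of RANkSAT and the exact tolerance of Equation (16) are omitted, so this is a simplified sketch rather than the full model.

```python
import numpy as np

def hopfield_energy(W2, W1, S):
    """Second-order DHNN energy, H = -1/2 * S^T W2 S - W1 . S
    (third-order synaptic terms omitted for brevity)."""
    S = np.asarray(S, float)
    return -0.5 * S @ W2 @ S - W1 @ S

def is_global_solution(H, H_min, tol=0.001):
    """Declare a global solution when the final energy lies within
    `tol` of the minimum energy; `tol` here is illustrative."""
    return abs(H - H_min) <= tol

W2 = np.array([[0.0, 0.25], [0.25, 0.0]])  # symmetric, zero diagonal
W1 = np.array([0.25, 0.25])
print(hopfield_energy(W2, W1, [1, 1]))     # -0.75
print(is_global_solution(-0.75, -0.75))    # True
```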
Here, a Friedman test is conducted, as reported in Table 7, for the αRANkSAT-EA, αRANkSAT-ES, αRANkSAT-GA, αRANkSAT-ABC and αRANkSAT-HEA models. With df = 4 and significance level α = 0.05, the Chi-Square value for MAE-testing is χ² = 51.399, so the null hypothesis of equal performance for all the models is rejected. The lowest average rank represents the best position; the rank analysis in Table 7 identifies that the lowest average rank occurred for the αRANkSAT-HEA model, at 2.00.

Energy Analysis

In this section, the energy analysis of the different DHNN models is explained. Figure 8 and Table 8 elucidate the ratio of global solutions (RGS) reached by the αRANkSAT-ES, αRANkSAT-EA, αRANkSAT-GA, αRANkSAT-ABC and αRANkSAT-HEA models. Figure 9 and Table 9 represent the difference in energy, examined by measuring the MAE value at different NN.
In Figure 8 and Table 8, we can check whether the number of negative literals in the 3SAT and 2SAT clauses influences RGS. A model's ability to achieve RGS = 1 indicates its effectiveness at producing consistent final neuron states. In this figure, we see that αRANkSAT-ES, αRANkSAT-EA and αRANkSAT-GA approach RGS → 1 over the interval 10 ≤ NN ≤ 60. Conversely, this consistency degrades as NN increases. At NN ≥ 60, the RGS for αRANkSAT-ABC, αRANkSAT-EA and αRANkSAT-GA drops rapidly, and for NN ≥ 100 it approaches zero. This happens due to the presence of more 2SAT clauses, which creates more non-satisfied interpretations, while no logical variety is involved to settle the final neuron states effectively. Although αRANkSAT-ABC and αRANkSAT-EA have good optimizers, they define no specific objectives that could improve the logical rule, and their operators cannot explore and exploit it properly.
Interestingly, our proposed αRANkSAT-HEA consistently produces the highest ratio of global solutions over the interval 10 ≤ NN ≤ 120, maintaining RGS = 1 throughout. This is because αRANkSAT-HEA employs the multi-objective function concept, which brings a different number of solutions by introducing negative states in the diversity phase of the 3SAT and 2SAT clauses. More importantly, the appropriate placement of the local and global search operators drives RGS toward 1. This logical variety includes the ideal neuron states, where the HTAF successfully updates the final neuron state.
In the αRANkSAT-HEA mechanism, both exploration (negative and coalition campaigns) and exploitation (positive advertisement and caretaker party) function equally, for which αRANkSAT-HEA executes only a single iteration in comparison to the other models. Hence, the RGS obtained by the αRANkSAT-HEA model agrees well with the work of [15], where RGS approaching 1 corresponds to achieving 100% global minimum energy.
The Friedman test results for the αRANkSAT-EA, αRANkSAT-ES, αRANkSAT-GA, αRANkSAT-ABC and αRANkSAT-HEA models are shown in Table 8. With df = 4 and significance level α = 0.05, the Chi-Square value for RGS is χ² = 42.041, so the null hypothesis of equal performance for all the models is rejected. For RGS, a higher average rank represents a better position; the rank analysis in Table 8 identifies that the highest average rank occurred for the αRANkSAT-HEA model, at 4.00.
In terms of energy analysis, Figure 9 and Table 9 illustrate how the MAE value between the minimum energy and the final energy is used to examine the energy difference. From the mentioned figure and table, we observe a similar trend for the αRANkSAT-ES, αRANkSAT-EA, αRANkSAT-GA and αRANkSAT-ABC models, in which the MAE value of the energy analysis increases sharply over the interval 60 ≤ NN ≤ 120. This occurs because these models cannot generate enough satisfied solution strings and therefore require more iterations, becoming trapped in local optima. On the other hand, our proposed αRANkSAT-HEA model shows no error, i.e., a zero MAE value, from the initial point until the end of the simulation. This results from generating satisfied solution strings following Equations (18) and (19). After the mutation in the caretaker party, αRANkSAT-HEA has a large capacity to achieve more satisfied solution strings, which yields a zero MAE value for the energy analysis and accurately satisfies Equation (16) within the tolerance value. Besides its intelligent effort throughout the learning phase, this network is endowed with an excellent error-reduction mechanism that prevents non-improving solutions. As a result, the neuron states of αRANkSAT-HEA exhibit minimal state oscillation and are updated properly during the testing phase.
In Table 9, a Friedman test is reported for the αRANkSAT-EA, αRANkSAT-ES, αRANkSAT-GA, αRANkSAT-ABC and αRANkSAT-HEA models. With df = 4 and significance level α = 0.05, the Chi-Square value for Energy-MAE is χ² = 37.536, so the null hypothesis of equal performance for all the models is rejected. A lower average rank represents a better position; the rank analysis in Table 9 recognizes that the lowest average rank for MAE-Energy occurred for the αRANkSAT-HEA model, at 1.93.

8.3. Similarity Index

In Figure 10 and Figure 11, together with Table 10 and Table 11, we examine the similarity and dissimilarity of the obtained final neuron states using the similarity index (SI). This indexing is specific to binary variables and is used for divergence studies [49]. In the similarity index analysis, we studied the Total Neuron Variation (TNV) and the Gower–Legendre Index (GLI) according to Equations (40) and (39), respectively.

8.3.1. Total Neuron Variation ( T N V )

Figure 10 and Table 10 show the evaluation of Total Neuron Variation (TNV) for the various DHNN-RANkSAT models. This section examines the performance of αRANkSAT-ES, αRANkSAT-EA, αRANkSAT-GA, αRANkSAT-ABC and αRANkSAT-HEA with DHNN in terms of the variation of final neuron states obtained from the training phase. According to the findings, the neuron variations generated by αRANkSAT-ES, αRANkSAT-EA and αRANkSAT-GA grow slowly and reach their maximum peaks at NN = 50, with TNV = 94.8, TNV = 98.6 and TNV = 91.8, respectively. The αRANkSAT-ABC model reaches its highest peak at NN = 60, with TNV = 100.
After NN = 50 and NN = 60, the number of solution variations for the αRANkSAT-ES, αRANkSAT-EA, αRANkSAT-GA and αRANkSAT-ABC models drops continuously, and beyond NN ≥ 100 the neuron variation is very low for αRANkSAT-EA and almost zero for αRANkSAT-ES, αRANkSAT-GA and αRANkSAT-ABC. This is because αRANkSAT-ES has no optimization layer and cannot explore the search space. At higher numbers of neurons (NN), αRANkSAT-EA, αRANkSAT-GA and αRANkSAT-ABC become stuck in local optima owing to the inability of their key operators, such as the advertisement campaign for EA, to search the solution space.
On the contrary, our proposed αRANkSAT-HEA model gradually reaches the peak and remains there until the end of the simulation, owing to the efficient synaptic weight management and training provided by αRANkSAT-HEA. Furthermore, αRANkSAT-HEA has two phases with four-layer optimization, which allows the intensification and diversification of a large search space through its optimization operators. This model demonstrates that the proposed logical structure is very effective at creating more solution variation as the number of neurons increases. Moreover, the variation analysis of αRANkSAT-HEA shows the impact of the diversified negative literals in DHNN on the production of global solutions. Consequently, we can say that a lower energy analysis (ratio of global solutions) corresponds to the low TNV performance of αRANkSAT-ES, αRANkSAT-EA, αRANkSAT-GA and αRANkSAT-ABC, while a higher one reflects the increased TNV performance of αRANkSAT-HEA.
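Assuming that TNV counts the distinct final neuron states retrieved across runs (Equation (40) defines the exact measure used in this paper), a minimal sketch reads:

```python
def total_neuron_variation(final_states):
    """Count distinct final neuron states among retrieved solutions;
    a higher count indicates greater solution variation."""
    return len({tuple(s) for s in final_states})

# Hypothetical retrieved states from four runs
runs = [[1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, -1]]
print(total_neuron_variation(runs))  # 3
```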
In Table 10, a Friedman test is reported for the αRANkSAT-EA, αRANkSAT-ES, αRANkSAT-GA, αRANkSAT-ABC and αRANkSAT-HEA models. With df = 4 and significance level α = 0.05, the Chi-Square value for TNV is χ² = 61.511, so the null hypothesis of equal performance for all the models is rejected. For TNV, the highest average rank represents the best position; the rank analysis in Table 10 identifies that the highest average rank occurred for the αRANkSAT-HEA model, at 4.91.

8.3.2. Gower–Legendre Index (GLI)

Figure 11 and Table 11 portray the GLI values attained by the different DHNN-RANkSAT models. GLI measures the similarity of the negative states with respect to the benchmark state; here, a higher GLI value is better for investigating the similarity index. In our analysis, the αRANkSAT-ES, αRANkSAT-GA and αRANkSAT-ABC models recorded poor values of 0.2, 0.198 and 0.126, respectively, for higher NN. This indicates that, for higher NN, the different numbers of negative states in the logical rule produce valleys that push the index down to a lower value. Even at 95 ≤ NN ≤ 120, αRANkSAT-ES, αRANkSAT-GA and αRANkSAT-ABC cannot generate any value, which means that these models are trapped in 100% local minima solutions. Correspondingly, for higher NN, the GLI value of αRANkSAT-EA is stuck at around 0.15, which is also very low. The αRANkSAT-EA model can still generate a minimal GLI value at the end of the simulation because it produces a few global solutions for higher NN, and the variables of each state obtained for αRANkSAT are not all equal to the ideal neuron states.
Nevertheless, the proposed αRANkSAT-HEA sustains a value of 0.65 ≤ GLI ≤ 0.75 until the end of the simulation, a higher GLI value than the other presented models. This demonstrates that, for the proposed αRANkSAT-HEA, different NN do not affect the variation of final neuron states. Moreover, the composition of the multi-objective concept drives αRANkSAT-HEA toward higher GLI values, since the proposed model aligns well with the RANkSAT representation. In addition, the higher GLI value confirms that the mechanism is capable of finding ideal solution strings, which enhances the storage capacity of DHNN. This model successfully explored more diverse states that result in global minimum energy.
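For reference, one common form of the Gower–Legendre coefficient is sketched below with weight θ = 0.5; Equation (39) in this paper defines the exact variant used, so the formula here is an assumption for illustration.

```python
def gower_legendre(benchmark, retrieved, theta=0.5):
    """Gower-Legendre similarity (a + d) / (a + d + theta * (b + c)) for
    bipolar states: a and d count matching (+1,+1) and (-1,-1) pairs,
    b and c count the two kinds of mismatch."""
    pairs = list(zip(benchmark, retrieved))
    a = sum(1 for x, y in pairs if x == 1 and y == 1)
    d = sum(1 for x, y in pairs if x == -1 and y == -1)
    b = sum(1 for x, y in pairs if x == 1 and y == -1)
    c = sum(1 for x, y in pairs if x == -1 and y == 1)
    return (a + d) / (a + d + theta * (b + c))

print(gower_legendre([1, -1, 1, 1], [1, -1, -1, 1]))  # approx. 0.857
```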
In Table 11, a Friedman test is reported for the αRANkSAT-EA, αRANkSAT-ES, αRANkSAT-GA, αRANkSAT-ABC and αRANkSAT-HEA models. With df = 4 and significance level α = 0.05, the Chi-Square value for GLI is χ² = 17.072, so the null hypothesis of equal performance for all the models is rejected. For GLI, the highest average rank represents the best position; the rank analysis in Table 11 identifies that the highest average rank occurred for the αRANkSAT-HEA model, at 1.87.
Figures 4–11 and Tables 5–11 demonstrate the overall performance of the models. The final neuron states attained by the proposed αRANkSAT-HEA model show very little over-fitting and the highest neuron variation at the end of the simulation. More importantly, the storage capability in terms of ideal solution strings, error calculations and energy analysis reveals that the αRANkSAT-HEA model achieved the highest number of global minima solutions, corresponding to the global minimum energy. Considering the compatibility of the mentioned models in terms of storage capacity, training analysis, testing analysis, energy analysis and similarity analysis, our proposed DHNNRANkSAT-HEA model outperformed the traditional EA, ES, GA and ABC models.

8.4. Impact Analysis

The foremost observation from the above discussion and analysis is that αRANkSAT-HEA is clearly superior to the other presented models. The balanced exploration–exploitation mechanism, the proper placement of the local–global search operators and the intelligent mutation mechanism give αRANkSAT-HEA an unmatched strength in reaching optimal solutions compared with the other mentioned models. As shown in the pseudo-code of αRANkSAT-HEA, the diversity phase explores more negative states corresponding to the projected solution string. For the exploration part, the trajectory of the fitness value is driven by the 'maximum fitness phase' of the advertisement campaign (negative advertisement and coalition parts).
This contrasts with the other models (αRANkSAT-ES, αRANkSAT-EA, αRANkSAT-GA, αRANkSAT-ABC), which widen the gap between the desired outcome and the current fitness value of the population. In terms of exploitation, the positive advertisement and caretaker party of the advertisement campaign work continuously to acquire the highest fitness value. The exploration and exploitation strategies combine to achieve our objectives in a single iteration. The other models do not acquire a similar benefit because of the unbalanced placement of their exploration–exploitation mechanisms, which produces lower fitness values. Thus, αRANkSAT-HEA is reported to provide a positive impact even though the initial population was initialized randomly.

8.5. Convergence Analysis

In this experiment, Figure 12a–d shows the convergence behavior of the proposed αRANkSAT-HEA compared with other state-of-the-art algorithms. The combined exploration and exploitation strategy of the proposed αRANkSAT-HEA is more balanced than that of the other algorithms. As shown in Figure 12a–d, the convergence curves for the different RANkSAT models illustrate that αRANkSAT-HEA requires only a single iteration to obtain optimal solutions. With its stronger exploitation mechanisms, the αRANkSAT-HEA model manages to avoid being trapped in local solutions.
Although αRANkSAT-EA and αRANkSAT-ABC showed competitive performance, they require more iterations than the proposed model. Another obvious issue is that αRANkSAT-GA needs even more iterations, since its operator is not particularly effective. The primary weakness of the αRANkSAT-EA, αRANkSAT-ABC and αRANkSAT-GA models lies in how they deploy and utilize their operators for exploration and exploitation. The convergence analysis also shows that αRANkSAT-ES exerts no influence here, since it possesses no exploration or exploitation mechanisms.

8.6. Overall Comparative Overview of the Proposed αRANkSAT-HEA Method with Existing Methods

In this section, we consider different numbers of neurons (NN = 30, 60, 90, 120) to verify the effectiveness of the proposed model against several standard models. Table 12 presents an overall comparison of αRANkSAT-HEA with existing methods across different metrics.
Based on Table 12, for the different numbers of neurons, αRANkSAT-HEA generated superior results in terms of fitness accuracy, diversity accuracy, testing error, energy error, total neuron variation and similarity index compared with the other mentioned models. This means that αRANkSAT-HEA satisfies our desired objectives: a maximum fitness value and a diversified logical structure with the desired ideal solution strings that enhance the storage capacity of DHNN. It is worthwhile to note the intriguing fact revealed by the retrieval capability of DHNN in ensuring final neuron states that lead to global convergence: the robust operators of the proposed αRANkSAT-HEA enhanced its capability of achieving the highest number of global solutions. In a nutshell, our proposed αRANkSAT-HEA model outperformed all the mentioned models.

8.7. Pareto Optimality Analysis

The performance of αRANkSAT-HEA on the multi-objective function can be examined through Pareto front solutions [50]. Here, Figure 13 explains the Pareto frontier of a multi-objective function with two criteria, where most points belong to the Pareto frontier area. This figure delineates several important features of Pareto optimality for a multi-objective function: dominated states, non-dominated states, the Pareto front and the ideal objective state.
Generally, the Pareto front consists of the set of best trade-off points, noted as the non-dominated points, while the ideal states define the upper bounds (optimal points) of the objective function values. Following the study of [51], the neuron states produced by the αRANkSAT-HEA model, as shown in Figure 14, follow the non-dominated states as well as the upper-bound states, thereby achieving the ideal states.
However, our proposed model does not entirely incorporate the concept of Pareto dominance in the selection of ideal strings. Instead, αRANkSAT-HEA relies greatly on the competency of both objective functions: an ideal string must combine maximum fitness with logical diversity before being stored in the CAM of DHNN. When a string achieves maximum fitness, it proceeds to the diversity phase, and the mechanism continues until both phases satisfy the objective functions. Pareto optimality would only need to be invoked if a simulation failed to generate the five ideal solution strings; over our full simulation (10 ≤ NN ≤ 120), the proposed model always achieved five ideal solution strings, so no sub-optimal neuron state of αRANkSAT-HEA could be found and analyzed.
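For completeness, a standard non-dominated filter for two maximized objectives (fitness, diversity) is sketched below with hypothetical points; this is the generic Pareto-front computation, not the exact selection rule of HEA described above.

```python
def dominates(p, q):
    """p dominates q when p is no worse in every objective and strictly
    better in at least one (both objectives are maximized here)."""
    return all(pi >= qi for pi, qi in zip(p, q)) and \
           any(pi > qi for pi, qi in zip(p, q))

def pareto_front(points):
    """Return the non-dominated (fitness, diversity) points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (fitness, diversity) pairs for candidate strings
pts = [(0.9, 0.40), (0.8, 0.50), (0.9, 0.30), (0.7, 0.45)]
print(pareto_front(pts))  # [(0.9, 0.4), (0.8, 0.5)]
```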

9. Conclusions

This paper presents a novel multi-objective DHNNRANkSAT-HEA model and offers new insight into the non-systematic logical rule for high-dimensional decision systems. A higher-order logical structure, addressed as RANkSAT (for k = 3, 2), is formed to optimize the cost function of DHNN. This paper proposes a novel multi-objective function that capitalizes on a maximum fitness value and a diversified ratio of negative literals in each clause, with five ideal solution strings that increase the storage capacity of DHNN.
A new hybrid metaheuristic, named the Hybrid Election Algorithm (HEA), is also proposed by introducing a dynamic 'caretaker party' operator into the mechanism of the advertisement campaign, which improves the quality of local solutions. More importantly, our proposed model is capable of maintaining a balance between exploration and exploitation, by which it avoids sub-optimal solutions. Notably, the proposed DHNNRANkSAT-HEA model successfully minimizes the cost function within a single iteration during the training phase. Finally, the experimental evaluations together with the statistical and impact analyses show that our proposed DHNNRANkSAT-HEA model achieved superior results in comparison with the other models.
According to the famous no-free-lunch theorem [52], no metaheuristic can perform equally well under all conditions. In this regard, our model might not be successfully implementable on other ANNs, such as the Radial Basis Function Neural Network, since each ANN model has its own architecture. On the other hand, the DHNNRANkSAT-HEA model has several shortcomings that could be addressed in future research. We emphasize that our proposed model limits the number of maximum combinations (COMBMAX), which also limits the simulation's ability to generate satisfiable neuron states. Furthermore, this study stops at 120 neurons, in contrast to [23], which takes up to 300 neurons. This discussion also verified the compatibility of αRANkSAT (k = 3, 2) with the proposed αRANkSAT-HEA model in DHNN logic programming, which can be applied in the logic/data mining field [53,54] in future explorations.

Author Contributions

Conceptualization and methodology, S.A.K. and M.R.A.; validation, S.S.; investigation and writing—original draft preparation, S.A.K.; writing—review and editing, M.A.M.; visualization, S.Z.M.J.; and supervision and funding acquisition, M.S.M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Research University Grant (RUI) (1001/PMATHS/8011131) by Universiti Sains Malaysia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to express special dedication to Nur Ezlin Zamri, Siti Syatirah Muhammad Sidik and Shah Mahmud Yasin for their technical support, and special thanks to Muhammad Mushfiqur Rahman, Department of English, Noakhali Science and Technology University, Bangladesh.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviation

ANN: Artificial Neural Network
HNN: Hopfield Neural Network
DHNN: Discrete Hopfield Neural Network
SAT: Boolean Satisfiability
3SAT: 3 Satisfiability
2SAT: 2 Satisfiability
RANkSAT: Random k Satisfiability
DHNNRANkSAT-HEA: Random k Satisfiability with Hybrid Election Algorithm in a Discrete Hopfield Neural Network
DHNNRANkSAT-EA: Random k Satisfiability with Election Algorithm in a Discrete Hopfield Neural Network
DHNNRANkSAT-ES: Random k Satisfiability with Exhaustive Search in a Discrete Hopfield Neural Network
DHNNRANkSAT-GA: Random k Satisfiability with Genetic Algorithm in a Discrete Hopfield Neural Network
DHNNRANkSAT-ABC: Random k Satisfiability with Artificial Bee Colony Algorithm in a Discrete Hopfield Neural Network
αRAN3SAT: Random 3 Satisfiability
αRANkSAT(3,2): Random k Satisfiability for k = 3, 2
RAN2SAT: Random 2 Satisfiability
RAN3SAT: Random 3 Satisfiability
MAJ2SAT: Major 2 Satisfiability
αRANkSAT-HEA: Hybrid Election Algorithm for Random k Satisfiability
αRANkSAT-EA: Election Algorithm for Random k Satisfiability
αRANkSAT-ES: Exhaustive Search Algorithm for Random k Satisfiability
αRANkSAT-GA: Genetic Algorithm for Random k Satisfiability
αRANkSAT-ABC: Artificial Bee Colony Algorithm for Random k Satisfiability
δP3SAT: 3SAT clauses
ES: Exhaustive Search
δP2SAT: 2SAT clauses
MAE: Mean Absolute Error
MAD: Median Absolute Deviation
NN: Number of Neurons
HTAF: Hyperbolic Tangent Activation Function
RGS: Ratio of Global Solutions
J_i(k): Clause combination for 3, 2, 1 literals
f_Lj: Best fitness
S_i: State of the i-th neuron
W_abc: Synaptic weight from unit a to unit c
σ: Advertisement rate
μ: Fully satisfied strings
ρ: Ideal solution strings
h_i: Local field
CorD: Correlation distance function
∧: Conjunction (AND)
∨: Disjunction (OR)
¬: Negation
GLI: Gower–Legendre Index
H(αRAN3SAT): Energy function for Random 3 Satisfiability
Hmin(αRAN3SAT): Minimum energy function for Random 3 Satisfiability
f(Fmax, γ, Smax(i)): Multi-objective function containing the maximum fitness value with the diversity ratio and ideal strings
T_F: Tolerance value for the fitness function
T_D: Tolerance value for the diversity analysis
ν_A: Achieved number of states for diversity
ν_T: Target number of states for diversity
β_C: Numerical calculation of storage capacity
ξ: Diversity of the logical rule in percentage

Appendix A

We provide a concrete example that may clarify how a party, its voters and its candidate represent the values.
Consider an example: αRANkSAT-HEA = (A ∨ B ∨ ¬C) ∧ (D ∨ ¬E ∨ F) ∧ (¬G ∨ ¬H)
Table A1. Calculation of voters and Candidate of a Party P.
Voters | S_A | S_B | S_C | S_D | S_E | S_F | S_G | S_H | Fitness Value
1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2
2 | 1 | -1 | 1 | -1 | 1 | 1 | -1 | 1 | 3
3 | -1 | -1 | 1 | 1 | -1 | 1 | 1 | 1 | 1
4 | -1 | -1 | 1 | -1 | 1 | 1 | -1 | 1 | 2
From the above table, we observe that there are four voters in Party P, each with an individual fitness value. Here, we see that Voter 2 achieved the highest fitness value, and accordingly, Voter 2 will be considered the Candidate of Party P.
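The fitness values in Table A1 can be verified directly; the short Python sketch below counts the satisfied clauses of the example formula for each voter (fitness = number of satisfied clauses, with states in {-1, +1}).

```python
# Clauses of (A v B v ~C) ^ (D v ~E v F) ^ (~G v ~H): a positive literal
# is satisfied by state +1, a negated literal by state -1.
clauses = [[("A", 1), ("B", 1), ("C", -1)],
           [("D", 1), ("E", -1), ("F", 1)],
           [("G", -1), ("H", -1)]]

def fitness(state):
    return sum(any(state[v] == sign for v, sign in clause) for clause in clauses)

voters = [
    dict(zip("ABCDEFGH", [ 1,  1, 1,  1,  1, 1,  1, 1])),  # Voter 1
    dict(zip("ABCDEFGH", [ 1, -1, 1, -1,  1, 1, -1, 1])),  # Voter 2
    dict(zip("ABCDEFGH", [-1, -1, 1,  1, -1, 1,  1, 1])),  # Voter 3
    dict(zip("ABCDEFGH", [-1, -1, 1, -1,  1, 1, -1, 1])),  # Voter 4
]
for i, v in enumerate(voters, 1):
    print(f"Voter {i}: fitness = {fitness(v)}")  # 2, 3, 1, 2
```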

References

1. Hopfield, J.J.; Tank, D.W. Neural computation of decisions in optimization problems. Biol. Cybern. 1985, 52, 141–152.
2. Wan, L.; Zhou, Q.; Fu, H.; Zhang, Q. Exponential stability of Hopfield neural networks of neutral type with multiple time-varying delays. AIMS Math. 2021, 6, 8030–8043.
3. Sani, S.; Shermeh, H.E. A novel algorithm for detection of COVID-19 by analysis of chest CT images using Hopfield neural network. Expert Syst. Appl. 2022, 197, 116740.
4. Chen, H.; Lian, Q. Poverty/investment slow distribution effect analysis based on Hopfield neural network. Future Gener. Comput. Syst. 2021, 122, 63–68.
5. He, M.; Zhuang, L.; Yang, S.; Zhang, J.; Meng, H. Energy-efficient virtual network embedding algorithm based on Hopfield neural network. Wirel. Commun. Mob. Comput. 2021, 2021, 8889923.
6. Wan Abdullah, W.A.T. Logic programming on a neural network. Int. J. Intell. Syst. 1992, 7, 513–519.
7. Hamadneh, N.; Sathasivam, S.; Choon, O.H. Higher order logic programming in radial basis function neural network. Appl. Math. Sci. 2012, 6, 115–127.
8. Batenburg, K.J.; Kosters, W.A. Solving Nonograms by combining relaxations. Pattern Recognit. 2009, 42, 1672–1683.
9. Aiman, U.; Asrar, N. Genetic algorithm-based solution to SAT-3 problem. J. Comput. Sci. Appl. 2015, 3, 33–39.
10. Poloczek, M.; Schnitger, G.; Williamson, D.P.; Van Zuylen, A. Greedy algorithms for the maximum satisfiability problem: Simple algorithms and inapproximability bounds. SIAM J. Comput. 2017, 46, 1029–1061.
11. Xu, Z.; He, K.; Li, C.M. An iterative Path-Breaking approach with mutation and restart strategies for the MAX-SAT problem. Comput. Oper. Res. 2019, 104, 49–58.
12. Sathasivam, S.; Mansor, M.A.; Ismail, A.I.M.; Jamaludin, S.Z.M.; Kasihmuddin, M.S.M.; Mamat, M. Novel Random k Satisfiability for k ≤ 2 in Hopfield Neural Network. Sains Malays. 2020, 49, 2847–2857.
13. Bailey, D.D.; Dalmau, V.; Kolaitis, P.G. Phase transitions of pp-complete satisfiability problems. In Proceedings of the IJCAI 2001, Seattle, WA, USA, 4–10 August 2001; pp. 183–192.
14. Karim, S.A.; Zamri, N.E.; Alway, A.; Kasihmuddin, M.S.M.; Ismail, A.I.M.; Mansor, M.A.; Hassan, N.F.A. Random satisfiability: A higher-order logical approach in discrete Hopfield Neural Network. IEEE Access 2021, 9, 50831–50845.
15. Kasihmuddin, M.S.; Mansor, M.; Md Basir, M.F.; Sathasivam, S. Discrete mutation Hopfield neural network in propositional satisfiability. Mathematics 2019, 7, 1133.
16. Megala, N.; Jawahar, N. Genetic algorithm and Hopfield neural network for a dynamic lot sizing problem. Int. J. Adv. Manuf. Technol. 2006, 27, 1178–1191.
17. Mansor, M.A.; Kasihmuddin, M.S.M.; Sathasivam, S. Artificial immune system paradigm in the Hopfield network for 3-satisfiability problem. Pertanika J. Sci. Technol. 2017, 25, 1173–1188.
18. Khoshahval, F.; Fadaei, A. Application of a hybrid method based on the combination of genetic algorithm and Hopfield neural network for burnable poison placement. Ann. Nucl. Energy 2012, 47, 62–68.
19. Kasihmuddin, M.S.M.; Mansor, M.A.; Sathasivam, S. Hybrid Genetic Algorithm in the Hopfield Network for Logic Satisfiability Problem. Pertanika J. Sci. Technol. 2017, 25, 139–152.
20. Abdechiri, M.; Meybodi, M.R. A Hybrid Hopfield Network-Imperialist Competitive Algorithm for Solving the Satisfiability Problems. Int. J. Comput. Electr. Eng. 2012, 4, 726.
21. Emami, H.; Derakhshan, F. Election algorithm: A new socio-politically inspired strategy. AI Commun. 2015, 28, 591–603.
22. Emami, H. Chaotic election algorithm. Comput. Inform. 2019, 38, 1444–1478.
23. Sathasivam, S.; Mansor, M.; Kasihmuddin, M.S.M.; Abubakar, H. Election algorithm for random k satisfiability in the Hopfield neural network. Processes 2020, 8, 568.
24. Bazuhair, M.M.; Jamaludin, S.Z.M.; Zamri, N.E.; Kasihmuddin, M.S.M.; Mansor, M.; Alway, A.; Karim, S.A. Novel Hopfield neural network model with election algorithm for random 3 satisfiability. Processes 2021, 9, 1292.
25. Qais, M.H.; Hasanien, H.M.; Alghuwainem, S. Enhanced salp swarm algorithm: Application to variable speed wind generators. Eng. Appl. Artif. Intell. 2019, 80, 82–96.
26. Bansal, S.; Gupta, N.; Singh, A.K. Application of bat-inspired computing algorithm and its variants in search of near-optimal golomb rulers for WDM systems: A comparative study. In Applications of Bat Algorithm and Its Variants; Dey, N., Rajinikanth, V., Eds.; Springer: Singapore, 2021; pp. 79–101.
27. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250.
28. Weerasuriya, A.U.; Zhang, X.; Wang, J.; Lu, B.; Tse, K.T.; Liu, C.H. Performance evaluation of population-based metaheuristic algorithms and decision-making for multi-objective optimization of building design. Build. Environ. 2021, 198, 107855.
29. Fu, H.; Wu, G.; Liu, J.; Xu, Y. More efficient stochastic local search for satisfiability. Appl. Intell. 2021, 51, 3996–4015.
30. Mézard, M.; Zecchina, R. Random k-satisfiability problem: From an analytic solution to an efficient algorithm. Phys. Rev. E 2002, 66, 056126.
31. De La Vega, W.F. Random 2-SAT: Results and problems. Theor. Comput. Sci. 2001, 265, 131–146.
32. Ma, J. The stability of the generalized Hopfield networks in randomly asynchronous mode. Neural Netw. 1997, 10, 1109–1116.
33. Gosti, G.; Folli, V.; Leonetti, M.; Ruocco, G. Beyond the maximum storage capacity limit in Hopfield recurrent neural networks. Entropy 2019, 21, 726.
34. Barra, A.; Beccaria, M.; Fachechi, A. A new mechanical approach to handle generalized Hopfield neural networks. Neural Netw. 2018, 106, 205–222.
35. Morales-Castañeda, B.; Zaldivar, D.; Cuevas, E.; Fausto, F.; Rodríguez, A. A better balance in metaheuristic algorithms: Does it exist? Swarm Evol. Comput. 2020, 54, 100671.
36. Lemus-Romani, J.; Becerra-Rozas, M.; Crawford, B.; Soto, R.; Cisternas-Caneo, F.; Vega, E.; García, J. A novel learning-based binarization scheme selector for swarm algorithms solving combinatorial problems. Mathematics 2021, 9, 2887.
37. Ramos-Figueroa, O.; Quiroz-Castellanos, M.; Mezura-Montes, E.; Kharel, R. Variation operators for grouping genetic algorithms: A review. Swarm Evol. Comput. 2021, 60, 100796.
38. Liao, T.; Daniel, M.; Thomas, S. Performance evaluation of automatically tuned continuous optimizers on different benchmark sets. Appl. Soft Comput. 2015, 27, 490–503.
39. Osuna-Enciso, V.; Cuevas, E.; Castañeda, B.M. A diversity metric for population-based metaheuristic algorithms. Inf. Sci. 2022, 586, 192–208.
40. Willmott, C.J.; Matsuura, K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Clim. Res. 2005, 30, 79–82.
41. Albatineh, A.N.; Niewiadomska-Bugaj, M. Correcting Jaccard and other similarity indices for chance agreement in cluster analysis. Adv. Data Anal. Classif. 2011, 5, 179–200.
42. Chung, N.; Zhang, X.D.; Kreamer, A.; Locco, L.; Kuan, P.F.; Bartz, S.; Linsley, P.S.; Ferrer, M.; Strulovici, B. Median absolute deviation to improve hit selection for genome-scale RNAi screens. J. Biomol. Screen. 2008, 13, 149–158.
43. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
44. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. CCSA: Conscious neighborhood-based crow search algorithm for solving global optimization problems. Appl. Soft Comput. 2019, 85, 105583.
45. Kasihmuddin, M.S.M.; Mansor, M.; Sathasivam, S. Robust Artificial Bee Colony in the Hopfield Network for 2-Satisfiability Problem. Pertanika J. Sci. Technol. 2017, 25, 453–468.
46. Deb, K.; Deb, D. Analysing mutation schemes for real-parameter genetic algorithms. Int. J. Artif. Intell. Soft Comput. 2014, 4, 1–28.
47. Ma, L.; Wang, C.; Xie, N.G.; Shi, M.; Ye, Y.; Wang, L. Moth-flame optimization algorithm based on diversity and mutation strategy. Appl. Intell. 2021, 51, 5836–5872.
48. Zou, J.; Sun, R.; Yang, S.; Zheng, J. A dual-population algorithm based on alternative evolution and degeneration for solving constrained multi-objective optimization problems. Inf. Sci. 2021, 579, 89–102.
49. Da Silva Meyer, A.; Garcia, A.A.F.; Pereira de Souza, A.; Lopes de Souza, C. Comparison of similarity coefficients used for cluster analysis with dominant markers in maize (Zea mays L.). Genet. Mol. Biol. 2004, 27, 83–91.
50. Wang, Z.; Rangaiah, G.P. Application and analysis of methods for selecting an optimal solution from the Pareto-optimal front obtained by multi-objective optimization. Ind. Eng. Chem. Res. 2017, 56, 560–574.
51. Li, Y.; Liao, S.; Liu, G. Thermo-economic multi-objective optimization for a solar-dish Brayton system using NSGA-II and decision making. Int. J. Electr. Power Energy Syst. 2015, 64, 167–175.
52. Hu, H.; Kantardzic, M.; Sethi, T.S. No Free lunch theorem for concept drift detection in streaming data classification: A review. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1327.
53. Issad, H.A.; Aoudjit, R.; Rodrigues, J.J. A comprehensive review of data mining techniques in smart agriculture. Eng. Agric. Environ. Food 2019, 12, 511–525.
54. Gordan, M.; Sabbagh-Yazdi, S.R.; Ismail, Z.; Ghaedi, K.; Carroll, P.; McCrum, D.; Samali, B. State-of-the-Art Review on Advancements of Data Mining in Structural Health Monitoring. Measurement 2022, 193, 110939.
Figure 1. Schematic diagram for DHNN-RANkSAT.
Figure 2. Flowchart of Hybrid Election Algorithm in DHNN-RANkSAT.
Figure 3. Flowchart of DHNNRANkSAT-HEA, DHNNRANkSAT-EA, DHNNRANkSAT-ES, DHNNRANkSAT-GA and DHNNRANkSAT-ABC.
Figure 4. Number of Ideal Solution Strings (ISS) for different DHNN-RANkSAT models.
Figure 5. (a) MAE-Fitness value for different DHNN-RANkSAT models. (b) MAD-Fitness value for different DHNN-RANkSAT models.
Figure 6. (a) MAE-Diversity of different RANkSAT models. (b) MAD-Diversity of different RANkSAT models.
Figure 7. MAE-Testing values for different DHNN-RANkSAT models.
Figure 8. Ratio of Global Solutions (RGS) for different DHNN-RANkSAT models.
Figure 9. MAE-Energy values for different DHNN-RANkSAT models.
Figure 10. Total Neuron Variation (TNV) for different DHNN-RANkSAT models.
Figure 11. Gower–Legendre Similarity Index (GLI) for different DHNN-RANkSAT models.
Figure 12. (a) Convergence curves for NN = 30 in different DHNNRANkSAT models. (b) Convergence curves for NN = 60 in different DHNNRANkSAT models. (c) Convergence curves for NN = 90 in different DHNNRANkSAT models. (d) Convergence curves for NN = 120 in different DHNNRANkSAT models.
Figure 13. Sample of Pareto Frontier of a multi-objective function [51].
Figure 14. A trade-off plot of fitness (Fmax) and diversity (γ) for different DHNN.
Table 1. Summary of the related studies.
Author(s) | Detail of the Studies | Summary and Findings
Hopfield and Tank [1] | Nonlinear analogue response of the neurons and energy analysis in HNN. | The proposed work can solve various combinatorial problems.
Wan Abdullah [6] | The first work incorporating SAT with HNN. | The work capitalized on the structure of SAT, and the approach is able to yield connection strengths (synaptic weights) between neurons, resulting in an energy-minimizing dynamic network.
Sathasivam et al. [12] | A non-systematic RAN2SAT was developed to represent the symbolic output in HNN. | The work can locate maximum production of global solutions, which indicates RAN2SAT is successfully embedded in the operations of HNN.
Karim et al. [14] | A higher-order non-systematic RANkSAT representation was developed to present the symbolic output in DHNN. | The work can attain 100% global minima solutions, indicating RANkSAT is successfully embedded in the operations of DHNN for higher orders of k.
Kasihmuddin et al. [15] | An integrated representation of k-satisfiability (kSAT) in a mutation Hopfield neural network (MHNN). | The main purpose is to estimate other possible neuron states that lead to global minimum energy through available output measurements.
Mansor et al. [17] | Artificial Immune System (AIS) integrated with HNN to do 3SAT. | A new algorithm named AIS was proposed with few parameters and compared with existing algorithms.
Emami and Derakhshan [21] | Developed a new socio-politically inspired algorithm named the Election Algorithm (EA). | A comprehensive comparison was made with several benchmark problems. The performance of EA is up to the level in terms of final solution accuracy, convergence speed and robustness.
Sathasivam et al. [23] | This paper utilizes a bipolar EA incorporated with HNN in optimizing the RANkSAT representation. | The effect of bipolar EA in enhancing the learning processes of a Hopfield Neural Network (HNN) to generate global solutions.
Bazuhair et al. [24] | This study proposed a higher-order random k satisfiability representation for k ≤ 3 with EA. | The proposed RANkSAT representation incorporated with EA in HNN is capable of optimizing the learning and retrieval phases compared with the traditional exhaustive search model.
Table 2. Parameter list for the DHNNRANkSAT-HEA model.
Parameter | Parameter Value
Number of Neurons (NN) | 10 ≤ NN ≤ 120 [14]
Number of Learning (NH) | 100 [23]
Number of Trials (NT) | 100 [23]
Neuron Combinations (NCM) | 100 [14]
CPU time threshold | 24 h [12]
Order of Clauses | Z_i(3), Z_i(2) [14]
Total Number of Clauses | 1 ≤ n(Z_i(3) + Z_i(2)) ≤ 50 [14]
Size of Population (NPOP) | 120 [21]
Number of Parties (NParty) | 4 [21]
Positive advertisement (σ_P) | 0.5 [21]
Negative advertisement (σ_n) | 0.5 [21]
Diversity of logical rules (in percentage) | 40%
Activation Function | Hyperbolic Tangent Activation Function [17]
Neuron State Initialization (Training Phase) | Random
Testing Phase Neuron State | Random
Number of Learning Iterations | 100 [24]
Table 3. List of parameters used in DHNNRANkSAT-EA, DHNNRANkSAT-GA and DHNNRANkSAT-ABC.
Parameter | Parameter Value
Number of Neurons (NN) | 10 ≤ NN ≤ 120 [14]
Number of Learning (NH) | 100 [23]
Number of Trials (NT) | 100 [23]
Neuron Combinations (NCM) | 100 [14]
Size of Population (NPOP) | 120 [23]
Number of Parties (NParty) | 4 [24]
Positive advertisement (σ_P) | 0.5 [23]
Negative advertisement (σ_n) | 0.5 [23]
Neuron State Initialization (Training Phase) | Random
Testing Phase Neuron State Initialization | Random
Number of Learning Iterations | 100 [24]
Table 4. List of parameters used for DHNNRANkSAT-ES.
Parameter | Parameter Value
Number of Neurons (NN) | 10 ≤ NN ≤ 120 [14]
Number of Learning (NH) | 100 [15]
Number of Trials (NT) | 100 [17]
Neuron Combinations (NCM) | 100 [23]
Neuron State Initialization (Training Phase) | Random
Testing Phase Neuron State Initialization | Random
Number of Learning Iterations | 100 [24]
Table 5. Tabulated fitness values (MAE) for different DHNN-RANkSAT models.
NN | HEA | EA | ES | GA | ABC
10 | 0 | 0 | 0 | 0 | 0
15 | 0 | 0 | 0 | 0 | 0
20 | 0 | 0 | 6.5 | 0 | 0
25 | 0 | 0 | 9.9 | 0 | 0
30 | 0 | 0 | 11 | 2.173 | 0
35 | 0 | 0 | 13.8 | 3.981 | 0
40 | 0 | 0 | 15.46 | 5.94 | 0
45 | 0 | 0 | 17.94 | 8.91 | 2.98
50 | 0 | 0 | 19.79 | 11.88 | 3.96
55 | 0 | 2.9 | 20.58 | 14.85 | 4.8
60 | 0 | 4.8 | 22.231 | 17.82 | 10
65 | 0 | 12 | 26 | 20.87 | 12
70 | 0 | 15 | 28.4 | 24 | 14.5
75 | 0 | 18 | 29.7 | 27 | 21.25
80 | 0 | 25.6 | 31.679 | 30 | 22.77
85 | 0 | 23.2 | 34 | 33 | 28.79
90 | 0 | 28.79 | 37.25 | 36 | 30.4
95 | 0 | 30.4 | 40 | 39 | 35
100 | 0 | 36 | 40 | 42 | 36.35
105 | 4.985 | 37.8 | 43 | 45 | 41.06
110 | 6.028 | 41.06 | 45 | 44 | 42.25
115 | 7.994 | 44.5 | 46 | 46 | 45.15
120 | 9.6 | 45.15 | 48 | 45 | 46.991
AVG | 1.244 | 15.878 | 25.488 | 21.627 | 17.315
Min | 0 | 0 | 0 | 0 | 0
Max | 9.6 | 45.15 | 48 | 45 | 46.991
Avg. Rank | 1.87 | 2.59 | 3.91 | 3.57 | 3.07
Table 6. Tabulated MAE-Diversity values for different DHNN-RANkSAT models.
NN | HEA | EA | ES | GA | ABC
10 | 0 | 0 | 0 | 0 | 0
15 | 0 | 0 | 0 | 0 | 0
20 | 0 | 0 | 0 | 0 | 0
25 | 0 | 0 | 0 | 0 | 0
30 | 0 | 0 | 0 | 0 | 0
35 | 0 | 0 | 0 | 0 | 0
40 | 0 | 0 | 0 | 0 | 0
45 | 0 | 0 | 0 | 0 | 0
50 | 0 | 0 | 0 | 0 | 0
55 | 0 | 0 | 0 | 0 | 0
60 | 0 | 1.49 | 9.591 | 3.979 | 2.065
65 | 0 | 2.973 | 8.94 | 8 | 7.776
70 | 0 | 4.02 | 8.11 | 8 | 9.21
75 | 0 | 3.666 | 7.2 | 9.6 | 10.11
80 | 0 | 7.68 | 9 | 11 | 9.86
85 | 0 | 8.16 | 13.6 | 12.1 | 10.16
90 | 0 | 11.52 | 14.9 | 13 | 11.52
95 | 0 | 12.16 | 15.25 | 13.73 | 12.26
100 | 0 | 12.79 | 16 | 14.01 | 12
105 | 3.851 | 13.44 | 16.992 | 14.5 | 13.86
110 | 5.551 | 14.08 | 17.54 | 15.02 | 15
115 | 6.024 | 15.74 | 18 | 16 | 15.61
120 | 7.68 | 16.8 | 19.2 | 17.81 | 15.32
AVG | 1.005 | 5.414 | 7.579 | 6.815 | 6.293
MIN | 0 | 0 | 0 | 0 | 0
MAX | 7.68 | 16.8 | 19.2 | 17.81 | 15.61
AVG. RANK | 1.48 | 2.24 | 4.12 | 3.8 | 2.76
Table 7. Tabulated MAE-Testing values for different DHNN-RANkSAT models.
NN | HEA | EA | ES | GA | ABC
10 | 0 | 0 | 0 | 0 | 0
15 | 0 | 0 | 0 | 0 | 0
20 | 0 | 0 | 0 | 0 | 0
25 | 0 | 0 | 0 | 0 | 0
30 | 0 | 0 | 0 | 0 | 0
35 | 0 | 0 | 0 | 0 | 0
40 | 0 | 0 | 0 | 0 | 0
45 | 0 | 0 | 0 | 0 | 0
50 | 0 | 0 | 0 | 0 | 0
55 | 0 | 0 | 11 | 40 | 0
60 | 0 | 0 | 20 | 60 | 0
65 | 0 | 0 | 30 | 80 | 0
70 | 0 | 0 | 40 | 80 | 0
75 | 0 | 40 | 60 | 100 | 0
80 | 0 | 60 | 60 | 100 | 0
85 | 0 | 60 | 60 | 100 | 15
90 | 0 | 80 | 80 | 100 | 20
95 | 0 | 80 | 100 | 100 | 30
100 | 0 | 80 | 100 | 100 | 40
105 | 0 | 80 | 100 | 100 | 40
110 | 0 | 80 | 100 | 100 | 60
115 | 0 | 80 | 100 | 100 | 65
120 | 0 | 80 | 100 | 100 | 70
AVG | 0 | 31.304 | 41.782 | 54.783 | 14.782
MIN | 0 | 0 | 0 | 0 | 0
MAX | 0 | 80 | 100 | 100 | 70
AVG. RANK | 2 | 2.89 | 3.67 | 4.09 | 2.56
Table 8. Tabulated values of RGS for different DHNN-RANkSAT models.
NN | HEA | EA | ES | GA | ABC
10 | 1 | 1 | 1 | 1 | 1
15 | 1 | 1 | 1 | 1 | 1
20 | 1 | 1 | 1 | 1 | 1
25 | 1 | 1 | 1 | 1 | 1
30 | 1 | 1 | 1 | 1 | 1
35 | 1 | 1 | 1 | 1 | 1
40 | 1 | 1 | 1 | 1 | 1
45 | 1 | 1 | 1 | 1 | 1
50 | 1 | 1 | 1 | 1 | 1
55 | 1 | 0.2 | 0.2 | 1 | 1
60 | 1 | 0.2 | 0.2 | 0.2 | 1
65 | 1 | 0.2 | 0.2 | 0.2 | 1
70 | 1 | 0.2 | 0.2 | 0.2 | 1
75 | 1 | 0.2 | 0.2 | 0.2 | 1
80 | 1 | 0.2 | 0.2 | 0.2 | 1
85 | 1 | 0.08 | 0.12 | 0.2 | 1
90 | 1 | 0.04 | 0.04 | 0.12 | 1
95 | 1 | 0.04 | 0 | 0.12 | 1
100 | 1 | 0.04 | 0 | 0.12 | 0.8
105 | 1 | 0.04 | 0 | 0.12 | 0.2
110 | 1 | 0.04 | 0 | 0.04 | 0.2
115 | 1 | 0.04 | 0 | 0.04 | 0
120 | 1 | 0.04 | 0 | 0.04 | 0
AVG | 1 | 0.659 | 0.450 | 0.513 | 0.834
MIN | 1 | 0.04 | 0 | 0.04 | 0
MAX | 1 | 1 | 1 | 1 | 1
AVG. RANK | 4 | 2.86 | 2.13 | 2.85 | 3.57
Table 9. Tabulated MAE-Energy values for different DHNN-RANkSAT models.
NN | HEA | EA | ES | GA | ABC
10 | 0 | 0 | 0 | 0 | 0
15 | 0 | 0 | 0 | 0 | 0
20 | 0 | 0 | 0 | 0 | 0
25 | 0 | 0 | 0 | 0 | 0
30 | 0 | 0 | 0 | 0 | 0
35 | 0 | 0 | 0 | 0 | 0
40 | 0 | 0 | 0 | 0 | 0
45 | 0 | 0 | 0 | 0 | 0
50 | 0 | 0 | 0 | 0 | 0
55 | 0 | 0 | 0 | 0 | 0
60 | 0 | 0 | 0 | 0.895 | 0
65 | 0 | 1.839 | 1.88 | 1.221 | 5.39
70 | 0 | 3.298 | 3.717 | 3.657 | 6.28
75 | 0 | 3.543 | 4.333 | 4.02 | 7.96
80 | 0 | 5.601 | 6.821 | 5.871 | 8.01
85 | 0 | 8.003 | 8.541 | 7.091 | 8.13
90 | 0 | 8.169 | 11.222 | 8.991 | 8.19
95 | 0 | 10.854 | 12.982 | 10 | 12.54
100 | 0 | 11 | 13.079 | 11.473 | 14
105 | 0 | 12.72 | 14.714 | 12.314 | 14.72
110 | 0 | 13.75 | 14.077 | 13 | 14.75
115 | 0 | 14.28 | 15.089 | 14.65 | 15.28
120 | 0 | 15.75 | 16.029 | 16.826 | 15.95
AVG | 0 | 4.731 | 5.325 | 4.783 | 5.704
MIN | 0 | 0 | 0 | 0 | 0
MAX | 0 | 15.75 | 16.029 | 16.826 | 15.95
AVG. RANK | 1.93 | 2.67 | 3.63 | 3.00 | 3.76
Table 10. Tabulated TNV values for different DHNN-RANkSAT models.
NN | HEA | EA | ES | GA | ABC
10 | 21.4 | 17.8 | 15.4 | 6.2 | 13
15 | 36 | 35 | 38.8 | 33 | 22
20 | 68.4 | 36.6 | 55 | 46.2 | 31
25 | 78.8 | 75 | 69.4 | 56.6 | 72
30 | 90.2 | 77.2 | 77 | 64.2 | 81
35 | 96 | 76.5 | 81.8 | 73.6 | 82.8
40 | 96.4 | 90.8 | 87.4 | 81 | 81.6
45 | 98.6 | 95 | 93 | 85 | 94
50 | 99 | 98.6 | 94.8 | 91.8 | 92
55 | 99.2 | 92.8 | 90 | 69.2 | 95
60 | 99.4 | 93 | 78.8 | 48 | 100
65 | 100 | 57 | 59 | 44.2 | 80.5
70 | 100 | 47.4 | 49 | 39 | 55
75 | 100 | 39.8 | 39 | 32 | 35
80 | 100 | 40 | 48 | 23 | 25
85 | 100 | 40 | 40 | 20.5 | 20
90 | 100 | 20 | 12.8 | 13 | 20
95 | 100 | 20 | 0 | 12.5 | 20
100 | 100 | 20 | 0 | 12.5 | 20
105 | 100 | 20 | 0 | 0 | 20
110 | 100 | 20 | 0 | 0 | 0
115 | 100 | 20 | 0 | 0 | 0
120 | 100 | 20 | 0 | 0 | 0
AVG | 90.583 | 50.108 | 44.747 | 37.053 | 46.083
MIN | 21.4 | 17.8 | 0 | 0 | 0
MAX | 100 | 98.6 | 94.8 | 91.8 | 100
AVG. RANK | 4.91 | 3.33 | 2.52 | 1.46 | 2.78
Table 11. Tabulated GLI values for different DHNN-RANkSAT models. The notation * means no value has been generated at that particular point.
NN | HEA | EA | ES | GA | ABC
10 | 0.676 | 0.624 | 0.603 | 0.711 | 0.671
15 | 0.668 | 0.619 | 0.680 | 0.698 | 0.698
20 | 0.694 | 0.687 | 0.677 | 0.724 | 0.667
25 | 0.724 | 0.687 | 0.692 | 0.703 | 0.704
30 | 0.730 | 0.695 | 0.698 | 0.744 | 0.704
35 | 0.705 | 0.728 | 0.701 | 0.731 | 0.697
40 | 0.729 | 0.739 | 0.715 | 0.758 | 0.724
45 | 0.721 | 0.724 | 0.709 | 0.838 | 0.728
50 | 0.750 | 0.738 | 0.746 | 0.812 | 0.719
55 | 0.719 | 0.710 | 0.734 | 0.793 | 0.666
60 | 0.763 | 0.706 | 0.709 | 0.590 | 0.619
65 | 0.777 | 0.579 | 0.677 | 0.551 | 0.606
70 | 0.742 | 0.497 | 0.469 | 0.490 | 0.559
75 | 0.731 | 0.316 | 0.271 | 0.387 | 0.487
80 | 0.760 | 0.311 | 0.291 | 0.298 | 0.341
85 | 0.749 | 0.304 | 0.244 | 0.224 | 0.311
90 | 0.751 | 0.154 | 0.257 | 0.198 | 0.324
95 | 0.749 | 0.167 | * | * | 0.134
100 | 0.760 | 0.146 | * | * | 0.145
105 | 0.770 | 0.151 | * | * | 0.126
110 | 0.748 | 0.159 | * | * | *
115 | 0.751 | 0.156 | * | * | *
120 | 0.749 | 0.159 | * | * | *
AVG | 0.735 | 0.466 | 0.429 | 0.446 | 0.459
MIN | 0.668 | 0.146 | 0.257 | 0.198 | 0.126
MAX | 0.777 | 0.739 | 0.746 | 0.838 | 0.728
AVG. RANK | 1.87 | 1.59 | 1.17 | 1.67 | 1.64
Table 12. Overall performance analysis for different DHNN-RANkSAT models.
Models | Fitness (f_Lj) | Diversity (ν_T) | MAE-Testing | Number of Global Solutions (N_Global) | MAE-Energy | Total Neuron Variations (TNV) | Gower–Legendre Index (GLI)
αRANkSAT-HEA | 97.60% | 98.08% | 0 | 100% | 0 | 97.40 | 74.82%
αRANkSAT-EA (original version) [23] | 80.31% | 92.55% | 40 | 70% | 5.98 | 52.55 | 42.85%
αRANkSAT-ES [12] | 70.38% | 89.07% | 50 | 51% | 6.81 | 42.15 | 41.6%
αRANkSAT-GA [19] | 74.75% | 91.30% | 47.5 | 56% | 6.68 | 38.80 | 38.30%
αRANkSAT-ABC [45] | 78.15% | 92.73% | 22.5 | 75% | 6.04 | 50.25 | 41.17%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
