Article

Networks of Picture Processors with Filtering Based on Evaluation Sets as Solvers for Cryptographic Puzzles Based on Random Multivariate Quadratic Equations

by Karina Paola Jiménez 1,2, Sandra Gómez-Canaval 2,*, Ricardo Villanueva-Polanco 3 and Silvia Martín Suazo 2

1 Facultad de Ingenierías, Universidad Simón Bolívar, Calle 58 No 55-132, Barranquilla 080001, Colombia
2 Departamento de Sistemas Informáticos, Universidad Politécnica de Madrid, Calle Alan Turing s/n, 28031 Madrid, Spain
3 Computer Science Department, Universidad del Norte, Barranquilla 080001, Colombia
* Author to whom correspondence should be addressed.
Submission received: 10 November 2020 / Revised: 25 November 2020 / Accepted: 30 November 2020 / Published: 4 December 2020
(This article belongs to the Special Issue Bioinspired Computation: Recent Advances in Theory and Applications)

Abstract:
Networks of picture processors is a massively parallel and distributed computational model inspired by evolutionary cellular processes, which offers efficient solutions for NP-complete problems. This bio-inspired model computes two-dimensional strings (pictures) using simple rewriting rules (evolutionary operations). The model mimics a community of cells (pictures) that evolve according to these bio-operations via a selection process that filters the valid surviving cells. In this paper, we propose an extension of this model that equips it with a flexible method for selecting the processed pictures based on a quantitative evaluation of their content. To show the versatility of this extension, we introduce a solver for a cryptographic proof-of-work based on the hardness of finding a solution to a set of random quadratic equations over the finite field F_2. This problem is proven to be NP-hard, even for quadratic polynomials over F_2, when the number of equations and the number of variables are of roughly the same size. The proposed solution runs in O(n^2) computational steps for any size (n, m) of the input pictures. In this context, this paper opens up a wide field of research that looks for theoretical and practical solutions to cryptographic problems via software/hardware implementations based on bio-inspired computational models.

1. Introduction

Networks of bio-inspired processors (NBP) is a family of computational models that tackles NP-complete problems by mimicking, from a syntactical perspective, the manner in which cell communities evolve via gene mutations in DNA molecules. Each cell is represented by a word that describes its DNA sequence. Mutation and cellular division are defined by operations based on string rewriting. Cells form colonies, which belong to specific species, and a colony can be interpreted as an evolutionary system that evolves according to these operations. This dynamic abstracts the natural process of evolution and is defined over a highly parallel and distributed computing architecture for symbolic processing. NBP can process one- or two-dimensional arrays of symbols belonging to a predefined alphabet. The specific NBP models processing one-dimensional arrays of symbols (strings) are the networks of [evolutionary/splicing/genetic] processors [1,2,3]. On the other hand, NBP models processing pictures (two-dimensional strings) are named networks of picture processors (NPP) [4]. The NBP model family defines a computational process based on the application of rewriting operations over strings, which can be explained as an abstraction of the behavior of mutating genes in chromosomes. For the NPP model, this process is applied to pictures, generalizing gene mutations in chromosomes to two-dimensional strings [4].
The NBP architecture consists of a set of processors, each of which occupies a node of a virtual graph, employed to set up a network of evolutionary processors. An evolutionary processor is a theoretical device that applies simple rewriting operations to data. These rules, called evolutionary rules, are specialized in deleting a symbol, inserting a symbol or replacing one symbol with another, which simulates operations over a pair of nucleotides. It is important to stress that each processor can apply a single type of rule (insertion, deletion or substitution) to its local data and that all processors in the network apply their predefined rules simultaneously (this is what is known as an evolutionary step).
Data (syntactical representations of cells) may evolve by these rules (evolutionary process). Then, data may be interchanged through the network by means of a predefined protocol. This protocol is a filtering strategy based on enforcing a set of conditions, defined in each processor, in order to control the dispatch and the reception of data (even simultaneously). The movement of data following a filtering strategy is defined as the communication process. It is important to stress that the filtering strategies mimic the process of selection from a Darwinian perspective.
As computing devices, NBP networks work by repeatedly alternating the evolutionary and communication processes. The computation finishes only if a predefined stopping condition is fulfilled. All processors modify their local data at the same time, defining a processing step, and then local data that satisfy some filtering condition are allowed to flow from one processor to another according to the network topology (communication step).
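The alternating two-step regime can be sketched in a few lines of Python (a toy illustration with made-up rules and filters, not the formal model):

```python
# Toy sketch of the generic NBP compute cycle. Processors alternate an
# evolutionary step (every processor rewrites its local strings in parallel)
# and a communication step (strings failing the local filter are expelled
# and may enter neighbouring nodes whose filter accepts them).

def evolutionary_step(contents, rules):
    # Apply every processor's rules to all of its local strings at once.
    return {node: {r(w) for w in words for r in rules[node]}
            for node, words in contents.items()}

def communication_step(contents, edges, passes_filter):
    new_contents = {node: set() for node in contents}
    for node, words in contents.items():
        for w in words:
            if passes_filter(node, w):        # string stays put
                new_contents[node].add(w)
            else:                             # string is expelled...
                for other in edges[node]:     # ...and copies may enter neighbours
                    if passes_filter(other, w):
                        new_contents[other].add(w)
    return new_contents

# Toy network: node 'a' appends '1'; 'a' keeps odd-length strings,
# 'b' keeps even-length strings (illustrative filter, not from the paper).
rules = {'a': [lambda w: w + '1'], 'b': [lambda w: w]}
edges = {'a': ['b'], 'b': ['a']}
contents = {'a': {'0'}, 'b': set()}
passes = lambda node, w: (node == 'a') == (len(w) % 2 == 1)

for _ in range(3):
    contents = evolutionary_step(contents, rules)
    contents = communication_step(contents, edges, passes)
```

After the first cycle the even-length string '01' migrates from 'a' to 'b' and stays there, illustrating how filters route data through the graph.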
NBP models have been extensively studied as language generators, language-accepting devices and problem solvers. Additionally, other properties of theirs have been investigated, e.g., their computational power, their complexity aspects, their efficiency as problem solvers, the existence of universal networks, etc. For a more comprehensive account of these studies, we refer the reader to [5].
Particularly, networks of picture processors (NPP) [4,6,7] is a specialized model for computing two-dimensional strings, called pictures. A picture is a two-dimensional array of elements belonging to a finite set of symbols (an alphabet). Traditionally, the NPP model is applied to problems concerning image processing, picture recognizability or pattern matching [8,9,10].
Recently, an NPP model named networks of polarized evolutionary picture processors (NPEPP) was proposed in [11]. NPEPP defines its communication protocol based on the polarization concept. This filtering strategy generalizes the protocol defined by networks working over strings [12,13] so that it works over rectangular pictures. In particular, the polarization is an abstraction of the electric charge (negative, positive or neutral) that may be assigned to both pictures and nodes. Whereas the polarization of a node is predetermined and cannot change during the entire computation, the polarization of a picture is dynamically computed by a valuation mapping that determines the picture's polarity based on its symbols and their predefined values. The valuation mapping returns only the sign of this value, not its exact value. The movement of a picture from one node to another through the communication protocol (filtering strategy) relies on both the picture polarization and the node polarization. Thus, this flow of pictures between nodes mimics the bio-electrical communication between cells.
A generalization of the polarization concept using evaluation sets, enhancing the valuation mapping to compute the exact value (not only its sign), was proposed in [14] and was used to empower the network of splicing processors [15] and the networks of evolutionary processors [16] models. In these works, the valuation mapping is used to give the precise numerical value of a string. In addition, the polarization is redefined to provide a filtering strategy that allows the communication between two nodes, simulating the movement of molecules or particles from one area to another along a concentration gradient in a solution. Finally, it is demonstrated that this kind of communication protocol is more flexible and is able to deal with hard optimization problems with integer restrictions in an efficient way.
In this context, we propose an extension of the NPEPP model proposed in [11], enhancing it with evaluation sets as a filtering strategy to improve the communication protocol. In particular, we adapted the communication protocol over strings proposed in [14,15] to work over pictures.
With this proposal, we enhanced the general NPP model with a more versatile protocol that includes quantitative features in a simple way, while keeping the rules for picture processing. To show the computational power of our proposed model as well as its flexibility, we show how to leverage it to solve systems of quadratic polynomial equations over finite fields, i.e., the so-called multivariate quadratic (MQ) problem.
Based on the hardness of the MQ problem (or variants of it), several cryptographic primitives have been constructed over the years. For instance, it is the security underpinning of multivariate public key cryptosystems, a family of post-quantum cryptographic schemes designed to withstand quantum computer attacks. In general terms, the hardness of solving the MQ problem over F_2 is directly connected to the security of these cryptographic schemes [17]. Recently, the authors of [18] proposed a proof-of-work (PoW) based on random multivariate quadratic equations for a post-quantum blockchain [19], which requires a solver (miner) to solve a set of random multivariate quadratic equations (RMQE) over the finite field F_2.
The main aim of this research was to provide a solver for the MQ problem over F_2 using our extended and adapted NPP model, named network of picture processors with evaluation sets (NPPES). As far as we know, this is the first time that a bio-inspired computational model, such as the NBP model, has been used to solve systems of random quadratic equations (SRQE/RMQE); therefore, this paper proposes an unconventional solver for the PoW proposed in [18] running in O(n^2) computational (processing and communication) steps, where n is the number of indeterminates. We remark that some results reported in the literature for solving the SRQE/RMQE problem require exponential time [20,21,22,23] for specific n, m input values. Therefore, our results suggest that the NPPES model handles such problems adequately and that the proposed network could potentially be deployed on a massively distributed and highly parallel platform without any modification, extension or adaptation.
This paper is structured as follows. In Section 2, we introduce the background concepts as well as the formal definition of the NPPES model. Section 3 discusses the capability of NPPES to solve the RMQE problem, for which we designed an NPPES algorithm that works in O(n^2) time. In addition, in Section 4.6, we discuss the obtained results, the reasons we believe our solution is well suited for such problems and how the proposed algorithm may be deployed on an ultra-scalable computational platform without any modification, extension or adaptation. Finally, conclusions and future work are presented in Section 5.

2. Networks of Picture Processors with Filtering by Evaluation Sets

In this section, we introduce the proposed network of picture processors with evaluation sets (NPPES) model. First, we summarize the notions used throughout this paper, which were previously introduced in [7]; this is the theoretical background underlying our proposed model. We then introduce the formal definitions of the NPPES model.

2.1. Preliminary Concepts

We define an alphabet as a non-empty finite set of symbols. The number of elements of a finite set S is denoted by card(S) (also known as the cardinality of S). We define a picture as a two-dimensional array of symbols taken from an alphabet V. The set V** denotes the set of all pictures over V and ε denotes the empty picture. The size of a picture π is the pair (rows(π), cols(π)), where rows(π) denotes the number of rows and cols(π) the number of columns of π. The size of the empty picture ε is given by (n, m) with nm = 0. The notation π(i, j) denotes the symbol placed at the intersection of the i-th row with the j-th column of the picture π.
Let π be a picture of size (m, n) over V and let 1 ≤ i ≤ k ≤ m and 1 ≤ j ≤ l ≤ n; then [i,j]π[k,l] is the sub-picture of π that consists of all symbols π(k_i, k_j) with i ≤ k_i ≤ k and j ≤ k_j ≤ l. Following this definition, π is [1,1]π[m,n]. The minimal alphabet containing all visible symbols appearing in a picture π is denoted by alph(π). Also, we define a valuation of V* in Z as a homomorphism from the monoid V* to the monoid of additive integers Z. For any alphabet V and a symbol a ∈ V, ā denotes the invisible copy of a and V̄ = {ā | a ∈ V}. A picture π ∈ (V ∪ V̄)^(m×n) is said to be a well-defined picture if there exist 1 ≤ i ≤ k ≤ m and 1 ≤ j ≤ l ≤ n such that all elements of [i,j]π[k,l] are from V and all the other elements of π are from V̄. In this case, [i,j]π[k,l] is called the maximal visible subpicture of π. Concretely, a well-defined picture π may be interpreted as one having some rows and/or columns on its border hidden but not deleted. We stress that any picture over the alphabet V is a well-defined picture and that, from this point forward, we deal only with well-defined pictures.
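The basic picture notions above can be illustrated with a small Python sketch (illustrative helper names, not from the paper):

```python
# Sketch of pictures as lists of rows. A picture of size (m, n) is an
# m-by-n array of symbols; the sub-picture [i,j]pi[k,l] collects rows
# i..k and columns j..l (1-indexed, inclusive, as in the text).

def size(pic):
    return (len(pic), len(pic[0]) if pic else 0)

def subpicture(pic, i, j, k, l):
    return [row[j - 1:l] for row in pic[i - 1:k]]

def alph(pic):
    # Minimal alphabet: the set of symbols actually appearing in pic.
    return {s for row in pic for s in row}

pi = [['a', 'b', 'c'],
      ['d', 'e', 'f']]

assert size(pi) == (2, 3)
assert subpicture(pi, 1, 2, 2, 3) == [['b', 'c'], ['e', 'f']]
assert subpicture(pi, 1, 1, *size(pi)) == pi   # [1,1]pi[m,n] is pi itself
assert alph(pi) == {'a', 'b', 'c', 'd', 'e', 'f'}
```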
We use the rewriting operations, or evolutionary rules, as introduced in [6,7]. Concretely, we use substitution, mask and unmask rules, whose definitions we summarize below. For the remaining rules (insertion and deletion) as well as the formal definitions of all evolutionary rules, we refer the reader to [7].
An evolutionary rule is defined as r = s₁ → s₂, with s₁, s₂ ∈ V ∪ {ε}. In particular, r is a substitution rule if neither s₁ nor s₂ is ε. The set Sub_V = {s₁ → s₂ | s₁, s₂ ∈ V} is the set of substitution rules. On the other hand, a mask rule is a rule able to hide a column or a row; similarly, an unmask rule is able to make a column or a row visible. The action mode is the element that defines the position in a picture at which a rule r can be applied. Let π ∈ (V ∪ V̄)^(m×n) be a picture and let π_max be the maximal visible subpicture of π; then the actions of r on π are (←, →, ↑, ↓, +). These actions are defined as follows:
  • Substitution rules:
    r^←(π) is the set of all pictures obtained from π by replacing an occurrence of s₁ by s₂ in the leftmost column of π_max. We remark that r is applied to all instances of the symbol s₁ in the leftmost column across the different copies of π_max.
    Similarly, r^→(π) is the set of all pictures obtained by applying r to the rightmost column of π; r^↑(π) is the set of all pictures obtained by applying r to the first row of π; r^↓(π) is the set of all pictures obtained by applying r to the last row of π; and r^+(π) is the set of all pictures obtained by applying r to any column/row of π_max.
  • Mask and unmask rules:
    mask^←(π) returns the picture obtained from π by transforming all visible symbols of the leftmost column of π_max into their invisible copies. The mappings mask^→, mask^↑ and mask^↓ are defined analogously.
    unmask^←(π) returns the picture obtained from π by making the column to the left of π_max visible: if [i,j]π[k,l] is π_max, then all invisible symbols π(s, j−1), i ≤ s ≤ k, become visible. If j = 1, then unmask^←(π) = π. The mappings unmask^→, unmask^↑ and unmask^↓ behave similarly.
For every rule r, symbol β ∈ {←, →, ↑, ↓, +} and L ⊆ (V ∪ V̄)**, the β-action of r on L is defined by r^β(L) = ⋃_{π ∈ L} r^β(π). For a finite set of rules M, we define the β-action of M on the picture π and on the language L by:

M^β(π) = ⋃_{r ∈ M} r^β(π)   and   M^β(L) = ⋃_{π ∈ L} M^β(π)

Analogously, for every β ∈ {←, →, ↑, ↓}, we define mask^β(L) = {mask^β(π) | π ∈ L} and unmask^β(L) = {unmask^β(π) | π ∈ L}.
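As an illustration of the action modes, the following Python sketch applies a substitution rule at each of the five positions (a simplification: it works on fully visible pictures and rewrites all matching cells of the chosen border in one picture, rather than producing the formal set of result pictures):

```python
# Sketch of the beta-actions of a substitution rule s1 -> s2 on a picture.
# 'left' rewrites the leftmost column, 'right' the rightmost, 'up' the
# first row, 'down' the last row, '+' any position (illustrative names).

def substitute(pic, s1, s2, action):
    m, n = len(pic), len(pic[0])
    out = [row[:] for row in pic]
    if action == 'left':
        cells = [(r, 0) for r in range(m)]
    elif action == 'right':
        cells = [(r, n - 1) for r in range(m)]
    elif action == 'up':
        cells = [(0, c) for c in range(n)]
    elif action == 'down':
        cells = [(m - 1, c) for c in range(n)]
    else:  # '+': anywhere
        cells = [(r, c) for r in range(m) for c in range(n)]
    for r, c in cells:
        if out[r][c] == s1:
            out[r][c] = s2
    return out

pic = [['a', 'b'],
       ['a', 'a']]
assert substitute(pic, 'a', 'x', 'left') == [['x', 'b'], ['x', 'a']]
assert substitute(pic, 'a', 'x', 'down') == [['a', 'b'], ['x', 'x']]
```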

2.2. NPPES: Formal Model Definition

We start by defining a new function that is required to extend the NPP model with the filtering strategy based on evaluation sets. This function is necessary to extend, or more precisely generalize, the NPP variant proposed in [11].
Definition 1.
Let V be a finite alphabet and π be a picture over V of size (m, n). We define the projection function Φ : V** × P(V) → V* as follows:

Φ(π, S) = π′_{1,1} π′_{1,2} ⋯ π′_{m,n},   where   π′_{i,j} = π(i, j) if π(i, j) ∈ S, and π′_{i,j} = ε otherwise.
This function is required because we need to calculate, for a picture, the "concentration" of the symbols included in S; therefore, it ignores the symbols that do not belong to S. With this definition and the other components required for the evaluation-sets filtering strategy, we define our extension of a picture processor as follows.
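The projection and its role in measuring "concentration" can be sketched as follows (illustrative names; the valuation is the additive extension of per-symbol integer values):

```python
# Sketch of the projection of Definition 1 and a valuation mapping:
# projection(pi, S) keeps, in row-major order, only the symbols of pi
# that lie in S, and the valuation extends a per-symbol integer value
# additively, so valuation(projection(pi, S)) measures the
# "concentration" of S-symbols in pi.

def projection(pic, S):
    return [s for row in pic for s in row if s in S]

def valuation(word, phi):
    return sum(phi[s] for s in word)

pic = [['a', 'b', 'a'],
       ['c', 'a', 'b']]
phi = {'a': 1, 'b': -2, 'c': 0}

assert projection(pic, {'a', 'b'}) == ['a', 'b', 'a', 'a', 'b']
assert valuation(projection(pic, {'a'}), phi) == 3       # three a's, value 1 each
assert valuation(projection(pic, {'a', 'b'}), phi) == -1  # 3*1 + 2*(-2)
```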
Definition 2.
Let V be an alphabet. A generalized picture processor over V is a triple (M, S, α) such that:
  • M is a set of evolutionary rules containing either substitution or deletion rules over V (M ⊆ Sub_V or M ⊆ Del_V).
  • S ⊆ P(V).
  • α is a set of mutually disjoint intervals of Z, which gives the generalized polarity of the node. In particular, the classical polarity of a node, that is −, 0, +, can be viewed in our generalization as the intervals (−∞, 0), {0}, (0, ∞), respectively.
Definition 3.
A hiding generalized picture processor over V ∪ V̄ is a triple (M, S, α) such that M is a set of either mask or unmask rules over V. The remaining parameters are identical to those in Definition 2.
The set of (hiding) generalized picture processors over V is denoted by GPP_V. Evidently, the generalized picture processor introduced here is a mathematical concept comparable to that of an evolutionary algorithm, both of which are motivated by Darwinism. Compared to evolutionary algorithms, the evolutionary rules may be understood as a two-dimensional generalization of gene mutations, and the evaluation sets (supported by means of the S and α elements, Definitions 1 and 2) may be seen as a selection process similar to, as mentioned in Section 1, the bio-electrical properties of concentration gradients in a solution.
Definition 4.
A network of picture processors with evaluation sets (NPPES) is a 10-tuple
Γ = (V, U, G, R, β, ρ, φ, In, Halt, Accept), where:
  • V is the input alphabet and U is the network alphabet.
  • G = (X_G, E_G) is an undirected graph (called the underlying graph of the network) such that X_G is the set of vertices and E_G is the set of edges.
  • R : X_G → GPP_U is a mapping associating each node x ∈ X_G with the generalized picture processor R(x) = (M_x, S_x, α_x) over U.
  • β : X_G → {←, →, ↑, ↓, +} is a mapping such that β(x) defines the action mode of the rules of node x on the pictures existing in that node.
  • ρ is a mapping associating each subset of U with an interval (possibly empty) of Z, satisfying ρ(X) ∩ ρ(Y) = ∅ for any subsets X ≠ Y of U.
  • φ : U* → Z is a valuation of U* in Z.
  • In, Halt, Accept ∈ X_G are the input, halting and accepting nodes of Γ, respectively.
Definition 5.
A configuration of an NPPES Γ as defined in Definition 4 is a mapping C : X_G → P(U**) linking every node of the network's graph with a set of pictures. Informally, a configuration represents the content (the set of pictures) of each node at a given moment. Let π ∈ V** be a picture; the initial configuration of Γ on π is defined by C_0(π)(In) = {π} and C_0(π)(x) = ∅ for all x ∈ X_G \ {In}.
An NPPES network evolves through two kinds of steps over configurations: processing steps and communication steps. In a processing step, each picture in a configuration C evolves according to the set of rules M_x defined for the node x where it resides. These steps are defined as follows.
Definition 6.
A processing step produces a configuration C′ from the previous configuration C, denoted by C ⇒ C′, iff

C′(x) = M_x^β(x)(C(x)) for all x ∈ X_G
Definition 7.
A communication step, denoted by C ⊢ C′, produces a configuration C′ defined by

C′(x) = (C(x) \ H_x) ∪ ⋃_{{x,y} ∈ E_G} T_y,

where

H_x = {π ∈ C(x) | ∃X ∈ S_x : (φ(Φ(π, X)) ∉ Y for every Y ∈ α_x) ∨ ((∃Y ∈ α_x such that φ(Φ(π, X)) ∈ Y) ∧ (Y ≠ ρ(X)))},
T_y = {π ∈ C(y) | ∃X ∈ S_y such that φ(Φ(π, X)) ∈ ρ(X) and ρ(X) ∈ α_x}.
Assume that a picture within node x has its valuation with respect to some X ∈ S_x in ρ(X); if ρ(X) belongs to α_x, then a copy of the picture is kept within node x, otherwise it is forced out. If ρ(X) ∈ α_y for some y ∈ X_G, then a copy of the picture enters y as long as x is adjacent to y in G. If a picture is forced out of a node and cannot enter any node, then it is lost. We remark that a copy of some picture π can stay in x while, at the same time, other copies of π move from x to nodes adjacent to x.
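The stay/leave decision of the communication step can be sketched as follows (hypothetical helper names; intervals are represented as (lo, hi) pairs and α as a set of such pairs):

```python
# Sketch of the communication filter: a picture's value with respect to
# an evaluation set X in S_x determines whether a copy stays in node x,
# namely when the value falls in rho(X) and rho(X) is one of the
# intervals in alpha_x.

def value(pic, X, phi):
    # phi(Phi(pi, X)): additive value of the X-symbols of pic.
    return sum(phi[s] for row in pic for s in row if s in X)

def stays(pic, X, phi, rho, alpha_x):
    v = value(pic, X, phi)
    lo, hi = rho[X]
    return (lo, hi) in alpha_x and lo <= v <= hi

phi = {'a': 1, 'b': -1}
rho = {frozenset({'a'}): (0, 10), frozenset({'b'}): (-10, -1)}
pic = [['a', 'a'], ['b', 'a']]

# The picture's value w.r.t. {'a'} is 3, inside rho({'a'}) = (0, 10).
# With alpha_x containing that interval, the picture stays in x...
assert stays(pic, frozenset({'a'}), phi, rho, {(0, 10)})
# ...and with alpha_x containing only the interval of {'b'}, it is expelled.
assert not stays(pic, frozenset({'a'}), phi, rho, {(-10, -1)})
```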
Definition 8.
A computation of an NPPES Γ on the input picture π ∈ V** is defined as a sequence of configurations C_0(π), C_1(π), C_2(π), …, such that C_0(π) is the initial configuration and C_{2i}(π) ⇒ C_{2i+1}(π) and C_{2i+1}(π) ⊢ C_{2i+2}(π), for all i ≥ 0, are the processing and communication steps performed alternately. Note that the configuration C_{i−1}(π) determines the configuration C_i(π).
Finally, the stopping condition of an NPPES Γ is met when the halting node is non-empty at some point of the computation. The picture language decided by Γ is then given by

L(Γ) = {π ∈ V** | the computation of Γ on π stops with a non-empty accepting node}.

3. Random Multivariate Quadratic Equations

In this section, we formally describe a well-known NP-hard problem regarding the hardness of solving a set of m random quadratic equations in n indeterminates over a finite field F. This is called the MQ problem [24].
Let F_q be the finite field with q elements. The multivariate quadratic (MQ) problem is defined as follows. Given m quadratic polynomials P^(1), P^(2), …, P^(m) in the n indeterminates x_1, …, x_n of the form

P^(1)(x_1, x_2, …, x_n) = Σ_{i=1}^{n} Σ_{j=i}^{n} p^(1)_{i,j} x_i x_j + Σ_{i=1}^{n} p^(1)_i x_i + p^(1)_0
P^(2)(x_1, x_2, …, x_n) = Σ_{i=1}^{n} Σ_{j=i}^{n} p^(2)_{i,j} x_i x_j + Σ_{i=1}^{n} p^(2)_i x_i + p^(2)_0
⋮
P^(m)(x_1, x_2, …, x_n) = Σ_{i=1}^{n} Σ_{j=i}^{n} p^(m)_{i,j} x_i x_j + Σ_{i=1}^{n} p^(m)_i x_i + p^(m)_0

where the coefficients p^(k)_{i,j}, p^(k)_i, p^(k)_0 ∈ F_q, with 1 ≤ k ≤ m and 1 ≤ i ≤ j ≤ n, are chosen uniformly and independently at random, the MQ problem asks to find x̂ = (x̂_1, …, x̂_n) ∈ F_q^n such that

P^(1)(x̂_1, …, x̂_n) = P^(2)(x̂_1, …, x̂_n) = ⋯ = P^(m)(x̂_1, …, x̂_n) = 0
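For small n, the definition can be made concrete by exhaustive search (an illustrative brute-force sketch over F_2, exponential in n and unrelated to the NPPES construction introduced later):

```python
# Brute-force sketch of the MQ problem over F_2. A polynomial is given
# by its quadratic coefficients p[i][j] (i <= j), linear coefficients
# l[i] and constant c, all in {0, 1}; arithmetic over F_2 is XOR/AND.
from itertools import product

def eval_poly(poly, x):
    p, l, c = poly
    n = len(x)
    v = c
    for i in range(n):
        v ^= l[i] & x[i]
        for j in range(i, n):
            v ^= p[i][j] & x[i] & x[j]
    return v

def solve_mq(polys, n):
    # Try all 2^n assignments; return the first common root, if any.
    for x in product((0, 1), repeat=n):
        if all(eval_poly(poly, x) == 0 for poly in polys):
            return x
    return None

# Toy instance over F_2: x1*x2 + x1 = 0 and x2 + 1 = 0.
p1 = ([[0, 1], [0, 0]], [1, 0], 0)
p2 = ([[0, 0], [0, 0]], [0, 1], 1)
sol = solve_mq([p1, p2], 2)
assert sol == (0, 1)   # x2 = 1 forces the second equation; x1 = 0 works
```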
The MQ problem is proven to be NP-hard, even for quadratic polynomials over the field F_2 (MQ_2) [24]. Moreover, the hardness of the MQ problem is the security underpinning of multivariate public key cryptosystems [25], a family of post-quantum cryptosystems designed to withstand quantum computer attacks. For instance, several MQ-based signature schemes were submitted to the National Institute of Standards and Technology (NIST) post-quantum cryptographic standardization process [23,25,26,27,28]. In particular, Rainbow has been included in the third round of this process [29].
The MQ problem has also been used to construct a proof of work for post-quantum blockchains [19], and it is part of the cryptocurrency called ABC Mint [18]. In this setting, the solver's task is to find a solution to a given instance of the problem within a reasonable period of time in order to include a block of transactions in the blockchain. In particular, this instance of the MQ problem is defined over F_2, with n being a suitable but variable chosen integer and m set to n − 8. The reason for this choice is to guarantee that a solution can be found with overwhelming probability within a reasonable period of time [18]. To perform the task, a solver first generates a bit string of (n − 8)·(n·(n − 1)/2 + n + 1) bits uniformly and independently at random, by applying a cryptographic hash function to a random input formed from the block to be included in the blockchain. This long string is then partitioned into (n − 8) strings of n·(n − 1)/2 + n + 1 bits, each of which represents the coefficients of one polynomial in the equation system. The solver then performs a computational task consisting of finding a vector x̂ = (x̂_1, …, x̂_n) such that Equation (4) holds [18].
As an additional remark, a coefficient-based representation of a polynomial P^(k)(x) requires storing n·(n − 1)/2 + n + 1 elements of F_q. In total, m·(n·(n − 1)/2 + n + 1) elements of F_q are required to represent all polynomials. That is, a complete coefficient-based representation of all polynomials in the equation system requires log(q)·m·(n·(n − 1)/2 + n + 1) bits on a classical computer.
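The instance-generation step of the PoW can be sketched as follows (the hash choice, bit order and counter-based expansion here are assumptions for illustration; the scheme in [18] fixes its own encoding):

```python
# Sketch of deriving a random MQ_2 instance from a block: m = n - 8
# polynomials, each needing n*(n-1)/2 + n + 1 coefficient bits.
import hashlib

def instance_bits(block: bytes, n: int):
    m = n - 8
    per_poly = n * (n - 1) // 2 + n + 1
    total = m * per_poly
    # Expand the block into enough pseudorandom bits by repeated hashing
    # with a counter (an assumed expansion scheme, not the one in [18]).
    bits, counter = [], 0
    while len(bits) < total:
        digest = hashlib.sha256(block + counter.to_bytes(4, 'big')).digest()
        bits.extend((byte >> i) & 1 for byte in digest for i in range(8))
        counter += 1
    bits = bits[:total]
    # Partition into m coefficient strings, one per polynomial.
    return [bits[k * per_poly:(k + 1) * per_poly] for k in range(m)]

polys = instance_bits(b'block header', 12)   # n = 12, so m = 4 polynomials
assert len(polys) == 4
assert all(len(p) == 12 * 11 // 2 + 12 + 1 for p in polys)
```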
Another contribution of this paper is a solver based on NPPES for an instance of the MQ problem, assuming the polynomials P^(1)(x), P^(2)(x), …, P^(n−8)(x) are given in a special form. We introduce this solver, which is able to find a vector x̂ = (x̂_1, …, x̂_n) satisfying Equation (4), in the following section.

4. A Solver for the Random Multivariate Quadratic Equations (RMQE) Problem Using NPPES

In this section, we discuss how to construct a solver for the MQ problem described in the previous section.
Let n, m ∈ N be the number of variables and equations, respectively. Following the definitions introduced in Section 2, we define the components of an NPPES, denoted Γ, for our proposed solver as follows.
  • Let V = P ∪ X ∪ C ∪ B ∪ {$} be the input alphabet, where:
    • P = P_Q ∪ P_L ∪ P_0, where
      P_Q = P_Q̲ ∪ P_Q̄, with P_Q̲ = {p̲^(k)_{i,j} | 1 ≤ k ≤ m, 1 ≤ i ≤ j ≤ n} and P_Q̄ = {p̄^(k)_{i,j} | 1 ≤ k ≤ m, 1 ≤ i ≤ j ≤ n}. The two symbols p̄^(k)_{i,j} and p̲^(k)_{i,j} represent the two values that the coefficient p^(k)_{i,j} can assume: p̄^(k)_{i,j} represents the value 1, while p̲^(k)_{i,j} represents the value 0.
      P_L is the set of all symbols p̄^(k)_i, p̲^(k)_i for 1 ≤ k ≤ m and 1 ≤ i ≤ n, representing, in the same way, the two values that p^(k)_i can assume.
      P_0 = {p̄^(i)_0, p̲^(i)_0 | 1 ≤ i ≤ m}, representing, likewise, the two values that p^(i)_0 can assume.
    • X = {x_1, x_2, …, x_n} represents the n indeterminates x_1, x_2, …, x_n.
    • C = {c_0, c_1, c_2, …, c_n} is the set of control symbols used throughout the computation of the NPPES algorithm.
    • B = {1, 2, …, k, …, m}, with k a signaling symbol for the polynomial P^(k) from Equation (3).
    Let Π be the set of pictures π^(k), for 1 ≤ k ≤ m, where each picture π^(k) has size (n + 3, 3·n). Each picture may be seen as a matrix whose rows are indexed starting from one. The first n rows of π^(k) encode the terms p^(k)_{i,j} x_i x_j for 1 ≤ i ≤ j ≤ n, with the empty entries filled with the symbol $. In particular, the l-th row, with 1 ≤ l ≤ n, has its first 3·(l − 1) entries, from left to right, filled with the symbol $, the next entry filled with a symbol from {p̲^(k)_{l,l}, p̄^(k)_{l,l}}, followed by an entry filled with x_l, then another entry filled with x_l, and so on for the remaining terms of the row. The (n + 1)-th row of π^(k) encodes the terms p^(k)_i x_i for 1 ≤ i ≤ n, with the empty entries filled with $. The (n + 2)-th row encodes the independent term p^(k)_0 and the signaling term k, with the empty entries filled with $. Finally, the last row encodes the control symbols from the set C, which are required for the internal computation of Γ. For example, the polynomial P^(k)(x_1, x_2, x_3) = Σ_{i=1}^{3} Σ_{j=i}^{3} p^(k)_{i,j} x_i x_j + Σ_{i=1}^{3} p^(k)_i x_i + p^(k)_0 is encoded as the picture shown in Table 1.
    We remark that our network receives m input pictures encoding the m polynomials P^(1)(x), …, P^(m)(x) of the problem instance to be solved; then, our network Γ obtains values (if they exist) for the indeterminates x_1, x_2, …, x_n such that Equation (4) holds.
  • Let U = V ∪ C′ ∪ C″ ∪ C̈ ∪ X̲ ∪ X̲′ ∪ X̄ ∪ X̄′ ∪ Ẋ ∪ Ẍ ∪ R ∪ B′ ∪ {$′, ⋄, ⊙, k̈} ∪ {1, 0} ∪ {c_0^0, c_0^1} be the network alphabet, where:
    • C′ = {c′_0, c′_1, c′_2, …, c′_n}.
    • C″ = {c″_0, c″_1, c″_2, …, c″_n}.
    • C̈ = {c̈_1, c̈_2, …, c̈_n}.
    • X̲ = {x̲_1, x̲_2, …, x̲_n}.
    • X̲′ = {x̲′_1, x̲′_2, …, x̲′_n}.
    • Ẍ = {ẍ_1, ẍ_2, …, ẍ_n}.
    • X̄ = {x̄_1, x̄_2, …, x̄_n}.
    • X̄′ = {x̄′_1, x̄′_2, …, x̄′_n}.
    • Ẋ = {ẋ_1, ẋ_2, …, ẋ_n}.
    • R = {R_0, R_1, …, R_{n−1}}.
    • B′ = {1′, 2′, …, m′}.
    In particular, the sets C′, C″, C̈ are needed to control the transitions between the different phases of Γ, namely: generation, linear evaluation, quadratic evaluation and validation. These phases are represented by different subnetworks, as explained below. The set X is the set of symbols representing the indeterminates x_1, x_2, …, x_n. Similarly, the sets X̲, X̲′, X̄, X̄′, Ẋ, Ẍ and the set {1, 0} encapsulate the symbols carrying the values assigned to the indeterminates x_1, x_2, …, x_n when they are evaluated in each phase of Γ. The symbols $′, ⋄, ⊙ and k̈ are trash symbols required for some transformations at different phases of the computation. Finally, c_0^0 and c_0^1 are punctual transformations of the control symbol c_0 used to indicate the final evaluation of Equation (4) for a picture.
  • The underlying graph G of our proposed Γ is depicted in Figure 1. The nodes In, H and A are the input, halting and accepting nodes, respectively, while the boxes Gen, Eval₁, Eval₂ and Val are subnetworks representing the different stages or phases of Γ's computation. In particular, the box Gen represents the subnetwork in charge of the combinatorial task of generating all pictures π containing the different values for the indeterminates x_1, x_2, …, x_n for each input picture. The boxes Eval₁ and Eval₂ represent the subnetworks for the evaluation phases: the former computes the evaluation of the (n + 1)-th row of a given picture, namely the terms p^(k)_i x_i, while the latter computes the evaluation of the first n rows, namely the terms p^(k)_{i,j} x_i x_j. Finally, the box Val represents the subnetwork in charge of selecting the pictures that satisfy Equation (4). Detailed descriptions of each subnetwork as well as of the In, H and A nodes are given in Sections 4.1–4.5. In addition, the internals of each subnetwork are given in Figures 2–5.
  • Regarding the mapping R for Γ , it is specified for each node in Table 2, Table 3, Table 4 and Table 5.
  • Regarding the action mapping β for Γ , it is specified for each node in Table 2, Table 3, Table 4 and Table 5.
  • The mapping function ρ is defined as follows:
    • ρ ( X ̲ X ̲ X ¯ X ¯ { 1 } { 0 } ) = [ 0 , n ( 2 n + 1 ) ]
    • ρ ( C C ) = [ 2 n ( n + 1 ) , 2 n 2 ( n + 1 ) ]
    • ρ ( { } { $ } ) = [ 2 n 2 ( n + 2 ) , ∞ )
    • ρ ( X X ˙ X ¨ { $ } ) = [ ( 2 ( n + 1 ) + 1 ) , 1 ]
    • ρ ( C { } { k ¨ } B ) = [ 2 n ( n + 1 ) ( 2 ( n + 1 ) ) , n ( 2 ( n + 1 ) ) ]
    • ρ ( C ¨ B { c 0 0 } { c 0 1 } ) = { 4 m n ( n + 1 ) ( 2 ( n + 1 ) ) }
    We assume that any symbol in U not appearing in the previous mapping has an empty interval for evaluation since it is not relevant for the proposed algorithm.
  • The function φ is defined as follows:
    • φ ( p i ( k ) ¯ ) = φ ( p i , j ( k ) ¯ ) = φ ( p 0 ( k ) ¯ ) = φ ( x i ¯ ) = φ ( 1 ) = φ ( x i ¯ ) = 1 , for all 1 k m , 1 i , j n
    • φ ( x i ) = φ ( x i ˙ ) = φ ( $ ) = 1 , for all 1 i n
    • φ ( p i ( k ) ̲ ) = φ ( p i , j ( k ) ̲ ) = φ ( p 0 ( k ) ̲ ) = φ ( x i ̲ ) = φ ( x i ̲ ) = φ ( 0 ) = 0 , for all 1 k m , 0 i , j n
    • φ ( x i ¨ ) = ( 2 ( i 1 ) ) , for all 1 i n
    • φ ( c i ) = φ ( c i ) = 2 n ( n + 1 ) , for all 0 i n
    • φ ( k ) = φ ( c i ) = φ ( k ¨ ) = φ ( ) = n ( 2 ( n + 1 ) ) , for all 1 k m , 0 i n
    • φ ( ) = φ ( $ ) = 2 n 2 ( n + 2 )
    • φ ( c i ¨ ) = φ ( k ) = φ ( c 0 0 ) = φ ( c 0 1 ) = 2 k n ( n + 1 ) ( 2 ( n + 1 ) ) , for all 1 i n , 1 k m
    We remark that the values for elements in P are given as input values.
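The filtering role of the pair ( ρ , φ ) can be illustrated with a small sketch. The names below (`evaluate`, `passes_filter`, the toy `phi` table) are illustrative assumptions, not the formal definitions: the idea is only that a picture survives a node's filter when the sum of the φ-values of its visible symbols falls inside the interval that ρ assigns to the relevant symbol set.

```python
# Sketch of filtering based on evaluation sets (illustrative names only;
# the formal model defines phi and rho over the network's alphabet U).

def evaluate(picture, phi):
    """Sum the phi-value of every visible symbol of a picture (a 2D list)."""
    return sum(phi.get(sym, 0) for row in picture for sym in row)

def passes_filter(picture, phi, interval):
    """A picture survives when its evaluation lies inside the node's interval."""
    lo, hi = interval  # hi may be math.inf for unbounded intervals like [a, +inf)
    return lo <= evaluate(picture, phi) <= hi

# Toy example: phi maps "1"-symbols to 1 and "0"-symbols to 0, so the
# evaluation simply counts the ones occurring in the picture.
phi = {"1": 1, "0": 0}
pic = [["1", "0"], ["1", "1"]]
print(evaluate(pic, phi))               # 3
print(passes_filter(pic, phi, (0, 4)))  # True
```

This quantitative test is what replaces the purely syntactic (permitting/forbidding) filters of earlier NPP variants.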

4.1. The Node In

By default, in the initial configuration, the node In holds the input picture coding the instance of the problem to solve. For our network Γ , the node In holds the input pictures representing the polynomials P ( 1 ) ( x ) , P ( 2 ) ( x ) , … , P ( m ) ( x ) . In addition, this node serves as both a receptor and a transmitter of the pictures processed by the subnetwork Gen . For this reason, this node only has a substitution rule that replaces one control symbol with another. In particular, while the subnetwork Gen is generating configurations, i.e., pictures with the corresponding symbols assigned to each indeterminate, the In node lets them pass to the Eval 1 subnetwork for further processing. This node's internals are shown in Table 2.

4.2. Generating Subnetwork

The subnetwork Gen represents the generating phase, namely, the phase in charge of generating m · 2 n configurations from the initial pictures π 1 , π 2 , … , π m encoding an RMQE instance. In particular, this subnetwork replaces each x i with a symbol x i ¯ or x i ̲ , which represent the values 1 and 0, respectively, and hence generates 2 n configurations for each initial picture π k . The graphical representation of the underlying subgraph of the subnetwork Gen is depicted in Figure 2.
According to Figure 2, each indeterminate x i is handled by two nodes, namely, X i 0 and X i 1 . Concretely, the node X i 0 replaces all occurrences of the symbol x i with the symbol x i ̲ (which represents the value 0), while the node X i 1 replaces all occurrences of the symbol x i with the symbol x i ¯ (which represents the value 1). After n processing steps, the subnetwork Gen will have modified all x i symbols and generated 2 n configurations for each initial picture π k . These configurations are then collected by the In node. The internals of each node X i 1 , X i 0 , 1 ≤ i ≤ n , in this subnetwork are defined in Table 2.
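Functionally, the combinatorial task of Gen corresponds to the following enumeration (a plain Python sketch of the outcome, not of the node-level picture rules; `generate_assignments` is an illustrative name):

```python
from itertools import product

def generate_assignments(n):
    """Enumerate all 2**n assignments of {0, 1} values to x_1, ..., x_n,
    mirroring the 2**n configurations that Gen derives from each input
    picture in n processing steps."""
    return list(product((0, 1), repeat=n))

assignments = generate_assignments(3)
print(len(assignments))  # 8 = 2**3
print(assignments[0])    # (0, 0, 0)
```

Applied to each of the m input pictures, this yields the m · 2 n configurations collected by the In node.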

4.3. Linear-Evaluating Subnetwork

The aim of this subnetwork is to evaluate the AND operation for each pair of symbols p i ( k ) x i with 1 ≤ i ≤ n , which are encoded in the row n + 1 of each picture. The graph representing this subnetwork is depicted in Figure 3.
This subnetwork starts off with the node G receiving pictures from the node In . The received pictures are then processed by the node G, which masks their two last rows (at indexes n + 2 and n + 3 ). This makes each processed picture's last visible row the one containing the terms p i ( k ) x i (i.e., the ( n + 1 ) -th row). This row is then processed by the other nodes in this subnetwork, which apply to it substitution rules with the β mode ↓.
The node L serves as a control node to keep track of the substitution of each pair p i ( k ) x i performed by the subnetwork X i depicted in Figure 3 (the internals of these boxes can be seen in the rectangle at the upper-left corner). In a similar way to the subnetwork Gen , the nodes X i 0 ̲ and X i 1 ¯ replace all occurrences of the symbols x i ̲ and x i ¯ with an intermediary symbol in the set X ˙ . With them, this subnetwork is able to evaluate whether the result of the AND operation for the pair p i ( k ) x i is 0 or 1, through the evaluation sets defined in the nodes X i 0 ̲ and X i 1 ¯ , respectively. In these nodes, the symbols belonging to the set P L are substituted with either the symbol x i ̲ or x i ¯ according to the result of the AND operation (representing 0 and 1, respectively). Concretely, in the node X i 1 ¯ , the symbols representing the p i ( k ) values are substituted with the trash symbols $ or ⧫. Subsequently, the nodes X i 1 ¯ and X i 1 ̲ let pass only those pictures whose evaluation represents 1 or 0, respectively. Then, they substitute the trash symbols with the specific result (1 or 0). Therefore, the result of the evaluation is stored in the place of the p i ( k ) symbols in the picture. The processing by L and by the nodes represented in the boxes X i is repeated until all terms p i ( k ) x i in the pictures have been substituted, with the valid resulting pictures leaving the node L and entering the node Q. Finally, the node Q masks the recently evaluated row of all received pictures, making only the first n rows visible, i.e., the rows that encode the terms p i , j ( k ) x i x j . In this way, the pictures are prepared for the following phase (subnetwork Eval 2 ), which evaluates the quadratic terms. The internals of each node in the subnetwork Eval 1 are defined in Table 3.
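The net effect of Eval 1 on the ( n + 1 ) -th row can be sketched as follows (assuming a plain-list encoding of the coefficients and of the generated assignment; `eval_linear_row` is an illustrative name, not a node of Γ ):

```python
def eval_linear_row(coeffs, assignment):
    """Replace each pair (p_i, x_i) by the single bit p_i AND x_i, as
    Eval_1 does for the linear terms p_i^(k) * x_i of one polynomial."""
    return [p & x for p, x in zip(coeffs, assignment)]

# p_1..p_3 = (1, 0, 1) and x = (1, 1, 0): only the term p_1 * x_1 is 1.
print(eval_linear_row((1, 0, 1), (1, 1, 0)))  # [1, 0, 0]
```

In the network itself, each of these AND operations is realized in two substitution steps through the evaluation sets of the corresponding pair of nodes.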

4.4. Quadratic-Evaluating Subnetwork

This subnetwork is responsible for evaluating each term p i , j ( k ) x i x j in the pictures. The result of evaluating the AND operation between these three symbols is represented by the substitution of the p i , j ( k ) symbol with a symbol 0 or 1 (depending on the result). Similarly to the subnetwork Eval 1 , the symbols x i ̲ and x i ¯ are replaced with the trash symbols $ or ⧫. Figure 4 depicts the graph of this subnetwork. The internals of each node in this subnetwork are defined in Table 4.
The processing of this subnetwork starts off with the node U 1 receiving all pictures from the node Q. This node unmasks all invisible rows, making the last rows visible again; more specifically, it unmasks the rows at indexes n + 1 , n + 2 and n + 3 in each received picture. After that, the processed pictures are allowed to move to the node F 1 , which substitutes the control symbol c 0 and the symbol k in each picture with the symbols c 0 and k , respectively. Note that this node performs this substitution only once, and only for controlling the internal processing.
After this substitution is carried out, the process for evaluating each row containing the terms p i , j ( k ) x i x j , 1 ≤ i ≤ j ≤ n , starts, with each term being evaluated one at a time. The node F 2 is responsible for keeping track of the evaluation of each row. Specifically, this node starts with the substitution of the symbol c 0 with c 1 in order to indicate that row 1 will be evaluated next. In addition, this node substitutes all ■ symbols with the trash symbol $, because they are no longer needed.
The next time this node receives pictures from the node F 7 , it can substitute the symbol again, and so on, until all rows are evaluated. The node M masks all columns from the rightmost column to the column containing the x j symbol (exclusive) of the current term p i , j ( k ) x i x j to be evaluated (this is controlled by means of the c i symbol placed at this same column). With this masking process, Γ makes visible only the current three symbols p i , j ( k ) x i x j in the first visible row. Subsequently, the AND operation is calculated for these three symbols in a similar way as in the subnetwork Eval 1 (in the boxes X i ). More concretely, this evaluation is done by the nodes A i 0 , A i 1 ̲ and A i 1 ¯ . Then, the node F 3 substitutes c i with c ¨ i in order to indicate that the current quadratic term has been evaluated. In addition, this node substitutes the $ symbol with the original one. Note that the result of the AND operation is encoded in the place in which the symbol p i , j ( k ) was placed. In order to carry out the evaluation of the rest of the p i , j ( k ) x i x j terms in the current row, the nodes U 2 and M 1 unmask the rightmost three columns and mask the current three visible columns, respectively. M 1 is connected with the nodes A i 0 , A i 1 ̲ and A i 1 ¯ , and it allows the process of evaluating each p i , j ( k ) x i x j term to be repeated until all of them are evaluated. When this occurs, the pictures from the node F 3 migrate to the node F 5 , and the other branch of the subnetwork is processed.
Given that the last control symbol substituted was c n (by c n ¨ ), the pictures can enter the node F 5 . This node restores the c n ¨ symbols to their original form ( c n ). Similarly, it restores the k symbols to their original ones. After that, the pictures migrate to the node F 6 , which substitutes one $ symbol with a k ¨ in order to indicate that the current row has just been evaluated. Then, the pictures enter M 2 and the current row is masked. Subsequently, the node U 3 unmasks all masked columns to the left. The processing of this branch finishes when the node F 7 substitutes all c ¨ symbols with their original ones in the set C . With these substitutions, Γ is able to evaluate the next visible row, and hence the pictures migrate to the node F 2 in order to repeat the processing for the next row. When the pictures no longer contain p i , j ( k ) x i x j symbols, they migrate from the node F 2 to the node S 1 . Note that all pictures with the symbol k ¨ are allowed into the node S 1 . Finally, the nodes S 1 and S 2 are responsible for unmasking all hidden rows and columns in order to prepare the pictures for the next phase.
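The overall effect of Eval 2 on the first n rows can be sketched as the following nested loop (an illustrative Python sketch under an assumed upper-triangular list encoding of the coefficients; it abstracts away the masking/unmasking mechanics):

```python
def eval_quadratic_rows(quad_coeffs, assignment):
    """Evaluate every term p_{i,j} AND x_i AND x_j, row by row, as Eval_2
    does for the first n rows of a picture.  quad_coeffs[i][j] holds
    p_{i,j}^(k) for j >= i (upper-triangular encoding)."""
    n = len(assignment)
    results = []
    for i in range(n):            # one picture row at a time (node F_2)
        row = []
        for j in range(i, n):     # terms p_{i,j} x_i x_j of row i (A-nodes)
            row.append(quad_coeffs[i][j] & assignment[i] & assignment[j])
        results.append(row)
    return results

quad = [[1, 1, 0], [0, 0, 1], [0, 0, 1]]     # upper-triangular p_{i,j}
print(eval_quadratic_rows(quad, (1, 0, 1)))  # [[1, 0, 0], [0, 0], [1]]
```

Each iteration of the inner loop corresponds to one mask/evaluate/unmask cycle through the nodes M, the A-nodes, F 3 , U 2 and M 1 .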

4.5. Validation Subnetwork

The goal of this subnetwork is twofold. First, this subnetwork calculates the final value of each picture, i.e., the evaluation of the polynomial that the picture represents. More specifically, this subnetwork computes the resulting value of each P ( i ) (i.e., 0 or 1), for all 1 ≤ i ≤ m , after performing XOR operations between the terms that compose the picture (the summation of all of them). This result is stored as the value of the control symbol c 0 at the last row of the picture. Second, this subnetwork groups all the pictures sharing the same values for the indeterminates x 1 , x 2 , … , x n . This grouping allows us to obtain the solutions of the problem, that is, to find vectors x ^ satisfying Equation (4). The value of each x i remains represented by the symbols belonging to X ̲ or X ¯ placed in row n (0 or 1, respectively). Figure 5 depicts the graph of this subnetwork.
Concretely, the nodes V 1 , V 0 , V 1 and V 0 perform an XOR operation with the values placed in rows one to n and replace the control symbol c 0 with the result obtained from this XOR operation, i.e., they substitute the symbol c 0 with the control symbol c 0 0 if the result of the XOR operation is 0, or with c 0 1 otherwise. The nodes V 2 i , 0 ≤ i < 2 n , group all pictures belonging to each possible combination of n values assigned to the indeterminates. In particular, each configuration is mapped to an integer in the range [ 0 , 2 n − 1 ] . Previously, the node W has substituted all symbols in the sets X ̲ and X ¯ with their corresponding symbols in the sets X ˙ and X ¨ , respectively. This is necessary in order to evaluate each configuration, filtering each picture according to the value obtained by the nodes V 2 i , 0 ≤ i < 2 n . These nodes substitute the control symbol c 1 with a control symbol R i , 0 ≤ i ≤ 2 n − 1 , where the index i represents the configuration number. Finally, the node Halt substitutes all symbols belonging to the sets X ˙ and X ¨ with the original ones, and the node Accept collects all pictures representing the solutions of the input instance. The internals of each node in this subnetwork are defined in Table 5.
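The two tasks of Val — XOR-summing the evaluated terms of each polynomial and grouping pictures by assignment — can be sketched end to end (illustrative names and an assumed coefficient encoding, not the node-level rules):

```python
from itertools import product

def poly_value(quad, lin, const, x):
    """XOR of all evaluated terms of one polynomial P^(k) over F_2, the
    value that Val stores in the control symbol c_0^0 / c_0^1."""
    n = len(x)
    acc = const
    for i in range(n):
        acc ^= lin[i] & x[i]                 # linear terms p_i * x_i
        for j in range(i, n):
            acc ^= quad[i][j] & x[i] & x[j]  # quadratic terms p_{i,j} x_i x_j
    return acc

def solve_rmqe(polys):
    """Group pictures by assignment (the role of the V_2^i nodes) and keep
    the assignments on which every one of the m polynomials evaluates to 0,
    i.e., those satisfying Equation (4)."""
    n = len(polys[0][1])
    return [x for x in product((0, 1), repeat=n)
            if all(poly_value(q, l, c, x) == 0 for q, l, c in polys)]

# Toy instance with one polynomial over F_2: x1*x2 + x1 + 1 = 0, n = 2.
polys = [([[0, 1], [0, 0]], [1, 0], 1)]
print(solve_rmqe(polys))  # [(1, 0)]
```

A class (assignment) is a solution exactly when all m polynomials evaluate to 0 on it, which corresponds to a class containing m pictures in the network.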

4.6. Discussion

Lemma 1.
The NPPES Γ defined in Section 4 computes a solution for the RMQE problem.
Proof of Lemma 1.
At the beginning of the computation, Γ contains the set of pictures Π = { π ( 1 ) , π ( 2 ) , … , π ( m ) } encoding the polynomials to be evaluated (all of them in the node In ). Thus, the initial configuration is defined by C 0 ( In ) = Π and C 0 ( x ) = ∅ , for all x ∈ X G ∖ { In } . After the first processing step, the nodes X i 0 , X i 1 , 1 ≤ i ≤ n , represented in Figure 2 hold the initial pictures (with the c 0 symbol substituted by c 0 ). After n processing steps, the node In has collected m · 2 n pictures. Note that, for each initial picture π ∈ Π , the subnetwork Gen generates a picture for each possible n -tuple of values for the indeterminates x 1 , x 2 , … , x n in π . Therefore, it generates m · 2 n pictures, which are collected by the node In and then sent out to the subnetwork Eval 1 , more concretely to the node G. This node masks the two last rows in all received pictures and then sends the pictures out to the node L, which includes one new trash symbol indicating the first step of the evaluation of the terms placed in row n + 1 , namely p i ( k ) x i , for all 1 ≤ i ≤ n , 1 ≤ k ≤ m .
Note that the nodes X i 0 ̲ , X i 0 ̲ evaluate, in two processing steps, whether each term is 0. In the same processing steps, the nodes X i 0 ¯ , X i 0 ¯ evaluate whether each term is 1. These evaluations are done by applying simple substitution rules. After n processing steps, each picture will have all p i ( k ) x i terms evaluated and will be collected by the node L. All pictures then migrate to the node Q, which masks the row containing the recently evaluated terms (the last visible row) within one processing step. Then, the pictures are sent out to the subnetwork Eval 2 , more precisely to the node U 1 . This node makes all masked rows in each received picture visible again in two processing steps. Note that F 1 is a control node that performs its task once, taking one processing step. F 2 controls the rows being evaluated by the subnetwork, starting from the first row, by substituting one symbol c i and n symbols ■ in n processing steps. The nodes M , M 1 , F 3 , U 2 , A i 0 , A i 1 ̲ , A i 1 ¯ evaluate each term p i , j ( k ) x i x j in each corresponding row of a picture, by making the three columns occupied by each term visible and then performing an AND operation with the corresponding values. When all terms are evaluated, the hidden columns are made visible again, and a mechanism indicating that the row has been evaluated is used. These nodes thus perform their respective steps n times.
The nodes F 5 , F 6 , F 7 , M 2 and U 3 make the just-evaluated row invisible and mark the index of the evaluated row (substituting the respective symbol in C and preparing the pictures for the evaluation of the next row). After the evaluation of all possible occurrences of the p i , j ( k ) x i x j terms, the node S 1 and subsequently S 2 collect all evaluated pictures and return the masked rows to the visible state after performing n and n − 3 processing steps, respectively. Then, S 2 contains a set of pictures encoding each and every admissible evaluation of each polynomial in Equation (3). However, at this stage, we need to organize all pictures into classes, where each class groups the pictures of each polynomial for one of the 2 n n -tuples of values that can be assigned to the indeterminates, and then determine which classes satisfy Equation (4), if any (if a class has m pictures, then the corresponding n -tuple is a solution).
The node W performs n processing steps in order to do all its substitutions. V 0 and V 1 substitute the independent term in the ( n + 1 ) -th row of a picture with the corresponding symbol, for all pictures, in the same processing step. In addition, these nodes let pass to the nodes V 0 and V 1 only the pictures in which the number of symbols evaluating to 1 is odd or even, respectively. The V 0 and V 1 nodes substitute the control symbols at the last row in the same processing step. The nodes V 2 i group the pictures according to each possible n -tuple of values for the indeterminates that satisfies Equation (4), by marking the picture with an R i symbol, where the index i represents the i -th n -tuple of values, in the same processing step. The transition from H to A is immediate: H restores the symbols of the vector to the original ones, and A puts the final result of the evaluation in the control symbol c 0 0 . Each of these two nodes performs one processing step. Finally, the node A collects the pictures containing the solutions for the given instance. □
Therefore, as a consequence of this result, we obtain the following theorem:
Theorem 1.
The RMQE problem, as defined in Section 3, can be solved by an NPPES Γ in time O ( n 2 ) .
Proof of Theorem 1.
Under the same conditions as mentioned before, the processing steps in Γ are:
  • Node In: n + 1 steps
  • Subnetwork Gen : n steps performed by the nodes X i 0 , X i 1 , 1 ≤ i ≤ n .
  • Subnetwork Eval 1 : 3 n + 5 steps performed as follows:
    Node G: 2 steps
    Node L: ( n + 1 ) steps
    Node Q: 2 steps
    Nodes X i 0 ̲ , X i 1 ¯ , 1 ≤ i ≤ n : n steps
    Nodes X i 0 ̲ , X i 1 ̲ , X i 1 ¯ , 1 ≤ i ≤ n : n steps
  • Subnetwork Eval 2 : 3 n ( n + 3 ) steps performed as follows:
    Node U 1 : 2 steps.
    Node F 1 : 1 step.
    Nodes F 2 , F 3 , F 4 , F 5 , F 6 , M 2 , F 7 : 7 n steps (one processing step performed by each node for each of the n rows)
    Nodes M 1 , U 2 : 6 n steps ( 3 n processing steps performed by each node)
    Nodes M , U 3 : at most n ( n − 3 ) steps for each of these nodes, hence 2 n ( n − 3 ) steps
    Nodes A i 0 , A i 1 ¯ and A i 1 ̲ : at most n processing steps for each of the n rows, hence n 2 steps
    Node S 1 : n steps
    Node S 2 : n − 3 steps
  • Subnetwork Val : n + 3 steps performed as follows:
    Node W: n processing steps
    Nodes V 1 , V 0 , 1 step
    Nodes V 1 , V 0 , 1 step
    Nodes V 2 i , 0 ≤ i < 2 n : 1 step
  • Nodes H and A: 1 step each, hence 2 steps.
Therefore, the total number of processing steps is not more than 3 n 2 + 15 n + 11 . Note that, if a given instance of the problem has k solutions, with k ≥ 0 , then Γ finds the k n -tuples of values for the indeterminates that are solutions for the given instance. Consequently, the overall time for solving an instance of the RMQE problem, as defined in Equation (4), is O ( n 2 ) . □
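The step accounting above can be double-checked numerically. The sketch below sums the per-subnetwork counts listed in the proof (assuming the S 2 and M/U 3 entries are n − 3 and n ( n − 3 ) steps, respectively, which is the reading consistent with the stated total):

```python
def total_steps(n):
    """Sum the per-subnetwork step counts from the proof of Theorem 1."""
    in_node = n + 1
    gen = n
    eval1 = 3 * n + 5
    eval2 = (2 + 1 + 7 * n + 6 * n   # U1, F1, control and mask/unmask nodes
             + 2 * n * (n - 3)       # M and U3
             + n * n                 # A-nodes, n steps per row
             + n + (n - 3))          # S1 and S2
    val = n + 3
    halt_accept = 2
    return in_node + gen + eval1 + eval2 + val + halt_accept

# The closed form claimed in the proof is 3n^2 + 15n + 11.
for n in (4, 8, 16):
    assert total_steps(n) == 3 * n * n + 15 * n + 11
print(total_steps(8))  # 323
```

In particular, the Eval 2 components sum to 3 n ( n + 3 ) , matching the figure stated for that subnetwork.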
We remark that the proposed network Γ may seem to have a significant size in terms of the number of nodes. This can be seen as an aspect to be improved; however, note that this construction allows us to control the exponential growth of the generated strings at any moment. In addition, we obtain a viable solution in an acceptable computation time, given the complexity of the problem we are dealing with. Moreover, Γ sieves the correct pictures in an early identification step, minimizing unnecessary copies. Additionally, note that other NBP models do not offer such flexibility when the problem needs to meet integer constraints; consequently, NPPES opens up an interesting perspective to fill this gap. Another important reason supporting the solution presented here is that, given the high maturity achieved by ultra-scalable computing platforms, which facilitates deploying distributed algorithms in a reasonable way [16], our proposed solution might be deployed using any hardware/software solution such as those reported in [30], without requiring any modification (re-structuring of nodes, rules, mechanisms for workload distribution between computational units, and so on). That is to say, we could potentially run our solution on workers or parallel computing units of any scalable software or hardware platform without re-defining the proposed algorithm, because our distribution of nodes is well suited to being reallocated to parallel processing units within a real computing platform.

5. Conclusions and Final Remarks

We have introduced a solver for the RMQE problem that is based on NPPES and runs in quadratic time, suggesting that the NPPES model can be employed to cope with hard problems in which numerical evaluation plays an essential role. In this context, the extended model is able to deal with a new variety of hard problems thanks to its new features; in particular, its filtering strategy makes the selection of correct pictures easier compared to the previous model. Additionally, the model is efficient and uses fewer resources than similar bio-inspired models facing similarly hard problems. To the best of our knowledge, this is the first time that an NBP model has been used to propose a solver for the RMQE problem. This solver also demonstrates the computational power that may be achieved by employing bio-inspired computational models, since existing non-bio-inspired solutions run in exponential time for specific n , m input values. Furthermore, our proposed solution for the RMQE problem opens an interesting opportunity to propose new solvers for these complex problems, bringing about real computing solutions with a reasonable computing time (which can be understood as “efficient time”).
In previous works [14,31], it is stated that NPEP solutions need an adaptation to allow their deployment on computational platforms for massive distributed processing. Moreover, this adaptation must preserve the solution's complexity in order to achieve practical solutions to real problems on big data computing platforms (which extends to any solution using similar models computing pictures instead of strings). In this context, the solution proposed here does not require any adaptation to be deployed on massively parallel and distributed computing platforms, and therefore we avoid the previously mentioned drawbacks that arise when these algorithms are deployed in practical scenarios.
Our results additionally suggest that the NPPES model may be simpler than other models working on pictures or strings for tackling hard optimization problems with numerical evaluations. We thus think this model shows features worthy of further investigation, both from a theoretical and from a practical point of view. With respect to the latter, we think that studying the limits of software and hardware implementations for attacking complex problems on big data computing platforms deserves effort and attention. Additionally, researching the suitability of extending this model to reinforce it for other related high-processing problems is also interesting. Finally, we shall return to these topics in forthcoming work.

Author Contributions

Conceptualization, R.V.-P., S.G.-C. and K.P.J.; methodology, R.V.-P., S.G.-C. and K.P.J.; theoretical and practical validation, S.M.S.; formal analysis, R.V.-P., S.G.-C. and K.P.J.; investigation, R.V.-P., S.G.-C. and K.P.J.; writing–original draft preparation, R.V.-P., S.G.-C. and K.P.J.; writing–review and editing, R.V.-P., S.G.-C. and K.P.J.; All authors have read and agreed to the published version of the manuscript.

Funding

Ricardo Villanueva-Polanco was funded by Universidad del Norte (Grant 2019-029), and Karina Paola Jiménez was funded by Fundación Carolina (call 2014) and Universidad Simon Bolivar.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NBP: Network of Bio-inspired Models
NPEP: Networks of Polarized Evolutionary Processors
NPP: Networks of Picture Processors
RMQE: Random Multivariate Quadratic Equations
MQ: Multivariate Quadratic problem
NPPES: Network of Picture Processors with Evaluation Sets

References

  1. Castellanos, J.; Martín-Vide, C.; Mitrana, V.; Sempere, J. Networks of evolutionary processors. Acta Inform. 2003, 39, 517–529. [Google Scholar] [CrossRef]
  2. Manea, F.; Martin-Vide, C.; Mitrana, V. Accepting networks of splicing processors: Complexity results. Theor. Comput. Sci. 2007, 371, 72–82. [Google Scholar] [CrossRef] [Green Version]
  3. Campos, M.; Sempere, J. Accepting Networks of Genetic Processors are computationally complete. Theor. Comput. Sci. 2012, 456, 18–29. [Google Scholar] [CrossRef] [Green Version]
  4. Bottoni, P.; Labella, A.; Manea, F.; Mitrana, V.; Sempere, J.M. Networks of Evolutionary Picture Processors with Filtered Connections. In Unconventional Computation; Springer: Berlin/Heidelberg, Germany, 2009; pp. 70–84. [Google Scholar]
  5. Manea, F.; Martín-Vide, C.; Mitrana, V. Accepting networks of evolutionary word and picture processors. A survey. Math. Comput. Lang. Life Front. Math. Linguist. Lang. Theory World Sci. 2010, 2010, 523–560. [Google Scholar]
  6. Bottoni, P.; Labella, A.; Mitrana, V. Networks of Evolutionary Picture Processors. Fundam. Informaticae 2014, 131, 337–349. [Google Scholar] [CrossRef] [Green Version]
  7. Bordihn, H.; Bottoni, P.; Labella, A.; Mitrana, V. Networks of picture processors as problem solvers. Soft Comput. 2017, 21, 5529–5541. [Google Scholar] [CrossRef]
  8. Bordihn, H.; Bottoni, P.; Labella, A.; Mitrana, V. Solving 2D-Pattern Matching with Networks of Picture Processors. In Theory and Practice of Natural Computing; Dediu, A.H., Lozano, M., Martín-Vide, C., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 157–168. [Google Scholar]
  9. Popescu, S. Solving 2-D Pattern Matching using Networks of Polarized Picture Processors with Circular Permutation. In Proceedings of the 2017 19th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Timisoara, Romania, 21–24 September 2017; pp. 338–343. [Google Scholar]
  10. Arroyo, F.; Gómez-Canaval, S.; Mitrana, V.; Sánchez, J. Networks of Picture Processors with Circular Permutation. Proc. Rom. Acad. Ser. 2019, 20, 311–319. [Google Scholar]
  11. Popescu, S. Networks of Polarized Evolutionary Picture Processors. Rom. J. Inf. Sci. Technol. 2015, 18, 3–17. [Google Scholar]
  12. Alarcón, P.; Arroyo, F.; Mitrana, V. Networks of polarized evolutionary processors. Inf. Sci. 2014, 265, 189–197. [Google Scholar] [CrossRef]
  13. Arroyo, F.; Gómez-Canaval, S.; Mitrana, V.; Popescu, S. Networks of Polarized Evolutionary Processors Are Computationally Complete. In Language and Automata Theory and Applications; Lecture Notes in Computer Science; Springer International Publishing: Berlin/Heidelberg, Germany, 2014; Volume 8370, pp. 101–112. [Google Scholar] [CrossRef] [Green Version]
  14. Gómez-Canaval, S.; Jiménez, K.; Ortega, A.; Vakaruk, S. Towards Quantitative Networks of Polarized Evolutionary Processors: A Bio-Inspired Computational Model with Numerical Evaluations. In Trends in Practical Applications of Scalable Multi-Agent Systems, the PAAMS Collection; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 251–259. [Google Scholar] [CrossRef]
  15. Gómez-Canaval, S.; Mitrana, V.; Sánchez-Couso, J. Networks of Splicing Processors With Evaluation Sets as Optimization Problems Solvers. Inf. Sci. 2016, 369, 457–466. [Google Scholar] [CrossRef]
  16. Arroyo, F.; Gómez-Canaval, S.; Jiménez, K.; Ortega, A. A Linear Time Solution for N-Queens Problem Using Generalized Networks of Evolutionary Polarized Processors. Int. J. Found. Comput. Sci. 2020, 31, 7–21. [Google Scholar] [CrossRef]
  17. Faugere, J.C.; Horan, K.; Kahrobaei, D.; Kaplan, M.; Kashefi, E.; Perret, L. Fast Quantum Algorithm for Solving Multivariate Quadratic Equations. Quantum Inf. Comput. 2017, 2017, 1236–1252. [Google Scholar]
  18. Ding, J. A New Proof of Work for Blockchain Based on Random Multivariate Quadratic Equations. In Applied Cryptography and Network Security Workshops; Zhou, J., Deng, R., Li, Z., Majumdar, S., Meng, W., Wang, L., Zhang, K., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 97–107. [Google Scholar]
  19. Fernández-Caramès, T.M.; Fraga-Lamas, P. Towards Post-Quantum Blockchain: A Review on Blockchain Cryptography Resistant to Quantum Computing Attacks. IEEE Access 2020, 8, 21091–21116. [Google Scholar] [CrossRef]
  20. Thomae, E.; Wolf, C. Solving Underdetermined Systems of Multivariate Quadratic Equations Revisited. In Public Key Cryptography–PKC 2012; Fischlin, M., Buchmann, J., Manulis, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 156–171. [Google Scholar]
  21. Courtois, N.; Klimov, A.; Patarin, J.; Shamir, A. Efficient Algorithms for Solving Overdefined Systems of Multivariate Polynomial Equations. In Advances in Cryptology—EUROCRYPT 2000; Preneel, B., Ed.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 392–407. [Google Scholar]
  22. Chen, J.; Ning, J.; Ling, J.; Lau, T.S.C.; Wang, Y. A new encryption scheme for multivariate quadratic systems. Theor. Comput. Sci. 2020, 809, 372–383. [Google Scholar] [CrossRef]
  23. Chen, M.; Hulsing, A.; Rijneveld, J.; Samardjiska, S.; Schwabe, P. MQDSS Specifications. Post-Quantum Cryptography Standardization. 2020. Available online: http://mqdss.org/files/mqdssVer2point1.pdf (accessed on 30 September 2020).
  24. Garey, M.; Johnson, J. Computers and Intractability: A Guide to the Theory of NP-Completeness; W. H. Freeman & Co.: New York, NY, USA, 1990. [Google Scholar]
  25. Ding, J.; Schmidt, D. Rainbow, a new Multivariable Polynomial Signature Scheme. In Applied Cryptography and Network Security; Ioannidis, J., Keromytis, A., Yung, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 164–175. [Google Scholar]
  26. Beullens, W.; Preneel, B. Field Lifting for Smaller UOV Public Keys. In Progress in Cryptology–INDOCRYPT 2017; Patra, A., Smart, N.P., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 227–246. [Google Scholar]
  27. Beullens, W.; Preneel, B.; Szepieniec, A.; Vercauteren, F. LUOV: Signature Scheme proposal for NIST PQC Project (Round 2 version). Post-Quantum Cryptography Standardization. 2020. Available online: https://github.com/WardBeullens/LUOV/blob/master/Supporting_Documentation/luov.pdf (accessed on 30 September 2020).
  28. Chen, M.; Hülsing, A.; Rijneveld, J.; Samardjiska, S.; Schwabe, P. From 5-Pass MQ-Based Identification to MQ-Based Signatures. In Advances in Cryptology–ASIACRYPT 2016; Cheon, J.H., Takagi, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 135–165. [Google Scholar]
  29. Ding, J.; Chen, M.S.; Petzoldt, A.; Schmidt, D.; Yang, B.Y.; Kannwischer, M.; Patarin, J. Rainbow-Algorithm Specification and Documentation. Post-Quantum Cryptography Standardization. 2020. Available online: https://www.pqcrainbow.org/ (accessed on 30 September 2020).
  30. Gómez-Canaval, S.; Ordozgoiti, B.; Mozo, A. NPEPE: Massive Natural Computing Engine for Optimally Solving NP-complete Problems in Big Data Scenarios. In New Trends in Databases and Information Systems; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 207–217. [Google Scholar] [CrossRef]
  31. Gómez-Canaval, S.; Sánchez-Couso, J.; Vinyals, M. Solving optimization problems by using networks of evolutionary processors with quantitative filtering. J. Comput. Sci. 2016, 16, 65–71. [Google Scholar] [CrossRef]
Figure 1. Underlying graph G.
Figure 1. Underlying graph G.
Mathematics 08 02160 g001
Figure 2. Subnetwork Gen .
Figure 2. Subnetwork Gen .
Mathematics 08 02160 g002
Figure 3. Subnetwork Eval 1.
Figure 4. Subnetwork Eval 2.
Figure 5. Subnetwork Val.
Table 1. Input picture for the polynomial $\sum_{i=1}^{3}\sum_{j=i}^{3} p_{i,j}^{(k)} x_i x_j + \sum_{i=1}^{3} p_i^{(k)} x_i + p_0^{(k)}$.

$$
\begin{array}{cccccccccccc}
\overline{p_{1,1}^{(k)}} & \underline{p_{1,1}^{(k)}} & x_1 & x_1 & \overline{p_{1,2}^{(k)}} & \underline{p_{1,2}^{(k)}} & x_1 & x_2 & \overline{p_{1,3}^{(k)}} & \underline{p_{1,3}^{(k)}} & x_1 & x_3 \\
\$ & \$ & \$ & \overline{p_{2,2}^{(k)}} & \underline{p_{2,2}^{(k)}} & x_2 & x_2 & \overline{p_{2,3}^{(k)}} & \underline{p_{2,3}^{(k)}} & x_2 & x_3 \\
\$ & \$ & \$ & \$ & \$ & \$ & \overline{p_{3,3}^{(k)}} & \underline{p_{3,3}^{(k)}} & x_3 & x_3 \\
\overline{p_{1}^{(k)}} & \underline{p_{1}^{(k)}} & x_1 & \$ & \overline{p_{2}^{(k)}} & \underline{p_{2}^{(k)}} & x_2 & \$ & \overline{p_{3}^{(k)}} & \underline{p_{3}^{(k)}} & x_3 & \$ \\
\underline{p_{0}^{(k)}} & \overline{p_{0}^{(k)}} & \$ & \$ & \$ & \$ & \$ & \$ & \$ & k \\
c_0 & \$ & c_1 & \$ & \$ & c_2 & \$ & \$ & c_3
\end{array}
$$
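The layout in Table 1 generalizes to any number of variables n. The sketch below builds such a grid programmatically, using plain ASCII tokens in place of the paper's decorated alphabet ('~pij' and '_pij' stand for the overlined and underlined copies of the coefficient p_{i,j}^{(k)}); the exact number and placement of '$' padding cells is an assumption here, chosen only to keep the picture rectangular:

```python
def table1_picture(n, k):
    """Build a Table 1-style input picture (a rectangular grid of tokens)
    for the k-th quadratic polynomial in n variables over F2.

    Token naming is illustrative: '~pij' = masked (overlined) coefficient,
    '_pij' = unmasked (underlined) coefficient, '$' = padding symbol.
    """
    width = 4 * n  # each of the n monomials in the first row takes 4 cells
    rows = []
    # quadratic terms: row i holds p_{i,j} x_i x_j for j = i..n, left-padded
    for i in range(1, n + 1):
        row = ['$'] * (4 * (i - 1))
        for j in range(i, n + 1):
            row += ['~p%d%d' % (i, j), '_p%d%d' % (i, j), 'x%d' % i, 'x%d' % j]
        rows.append(row)
    # linear terms: each p_i x_i block is followed by one '$'
    rows.append([t for i in range(1, n + 1)
                 for t in ('~p%d' % i, '_p%d' % i, 'x%d' % i, '$')])
    # constant term, padding, and the equation index k in the last cell
    rows.append(['_p0', '~p0'] + ['$'] * (width - 3) + ['k%d' % k])
    # control row c_0 ... c_n, right-padded
    ctrl = ['c%d' % i for i in range(n + 1)]
    rows.append(ctrl + ['$'] * (width - len(ctrl)))
    return rows
```

For n = 3 this yields a 6 × 12 grid with the shape of Table 1: three quadratic rows, one linear row, one constant row, and one control row.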
Table 2. Internals of the In node and nodes of the subnetwork Gen.

Node | M | α | β | S
In { c 0 c 0 } [ 2 n ( n + 1 ) , 2 n 2 ( n + 1 ) ] { } { C C } ,
{ n ( 2 ( n + 1 ) ) } { C { } { k ¨ } B }
X i 0 , 1 i n { x i x i ̲ , c i c i , c 0 c 0 } [ ( 2 ( n + 1 ) + 1 ) , 1 ] , { + } { X X ˙ X ¨ { $ } } ,
{ n ( 2 ( n + 1 ) ) } { C { } { k ¨ } B }
X i 1 , 1 i n { x i x i ¯ , c i c i , c 0 c 0 } [ ( 2 ( n + 1 ) + 1 ) , 1 ] , { + } { X X ˙ X ¨ { $ } } ,
{ n ( 2 ( n + 1 ) ) } { C { } { k ¨ } B }
Table 3. Internals of the nodes of the subnetwork Eval 1.

Node | M | α | β | S
G { mask } [ 2 n ( n + 1 ) ( 2 ( n + 1 ) ) , n ( 2 ( n + 1 ) ) ] { } { C { } { k ¨ } B } ,
{ { } { $ } }
L { $ } [ 2 n 2 ( n + 2 ) , ) , { + } { { } { $ } } ,
[ n 2 ( 2 ( n + 1 ) ) , n ( 2 ( n + 1 ) ) ] { C { } { k ¨ } B } ,
Q { mask } [ 0 , n ( 2 n + 1 ) ] , { } { X ̲ X ̲ X ¯ X ¯ { 1 } { 0 } }
[ n 2 ( 2 ( n + 1 ) ) , n ( 2 ( n + 1 ) ) ] { C { } { k ¨ } B } ,
X i 0 ̲ , 1 i n { x i ̲ x i ˙ } [ 2 n ( n + 1 ) ( 2 ( n + 1 ) ) , n ( 2 ( n + 1 ) ) ] , { } { C { } { k ¨ } B } ,
{ 1 } { X X ˙ X ¨ { $ } }
X i 0 ̲ , 1 i n { p i ( k ) ¯ 0 , p i ( k ) ̲ 0 , [ ( 2 ( n + 1 ) + 1 ) , 1 ] { } { X X ˙ X ¨ { $ } } ,
x i ˙ x i ̲ } { X ̲ X ̲ X ¯ X ¯ { 1 } { 0 } }
X i 1 ¯ , 1 i n { p i ( k ) ¯ , p i ( k ) ̲ $ , [ 2 n ( n + 1 ) ( 2 ( n + 1 ) ) , n ( 2 ( n + 1 ) ) ] , { } { C { } { k ¨ } B } ,
x i ¯ x ˙ } { 2 n 2 ( n + 2 ) } { { } { $ } }
X i 1 ̲ , 1 i n { x ˙ x i ¯ , $ 0 } [ ( 2 ( n + 1 ) + 1 ) , 1 ] , { } { X X ˙ X ¨ { $ } } ,
[ n 2 ( 2 ( n + 1 ) ) , n ( 2 ( n + 1 ) ) ] { C { } { k ¨ } B }
X i 1 ¯ , 1 i n { x ˙ x i ¯ , 1 } [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } }
[ n 2 ( 2 ( n + 1 ) ) , n ( 2 ( n + 1 ) ) ] { C { } { k ¨ } B }
Table 4. Internals of the nodes of the subnetwork Eval 2.

Node | M | α | β | S
U 1 { unmask } [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } }
{ 2 ( n + 1 ) ( 2 ( n + 1 ) ) } { C { } { k ¨ } B }
F 1 { c 0 c 0 , k k } [ 2 n ( n + 1 ) ( 2 ( n + 1 ) ) , n ( 2 ( n + 1 ) ) ] , { } { C { } { k ¨ } B }
1 ≤ k ≤ m { 2 n ( n + 1 ) } { C C }
F 2 { c i c i + 1 , $ } [ 2 n 2 ( n + 2 ) , ) , { + } { { } { $ } }
for all 0 ≤ i ≤ n { ( n 2 ) ( 2 ( n + 1 ) ) } { C { } { k ¨ } B }
M { mask } [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } }
{ n ( 2 ( n + 1 ) ) } { C { } { k ¨ } B }
A 1 0 { p i , j ( k ) ̲ 0 , x i ̲ $ , x i ¯ $ } [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } } ,
for all 1 ≤ i , j ≤ n { 2 } { X X ˙ X ¨ { $ } }
A i 1 ¯ { p i , j ( k ) ¯ 1 , x i ¯ $ , x i ̲ $ } [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } } ,
for all 1 ≤ i , j ≤ n { 2 } { X X ˙ X ¨ { $ } }
A i 1 ̲ { p i , j ( k ) ¯ 0 , x i ¯ $ , x i ̲ $ } [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } } ,
for all 1 ≤ i , j ≤ n [ 2 , 1 ] { X X ˙ X ¨ { $ } }
F 3 { c i c i ¨ , $ $ } [ ( 2 ( n + 1 ) + 1 ) , 1 ] { + } { X X ˙ X ¨ { $ } } ,
for all 1 ≤ i ≤ n { C { } { k ¨ } B }
U 2 { unmask } [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } } ,
{ n ( 2 ( n + 1 ) ) } { C { } { k ¨ } B }
M 1 { mask } { 4 m n ( n + 1 ) ( 2 ( n + 1 ) ) } { } { C ¨ B { c 0 0 } { c 0 1 } } ,
{ { } { $ } }
F 5 { c n ¨ c n , i k } { 4 m n ( n + 1 ) ( 2 ( n + 1 ) ) } { } { C ¨ B { c 0 0 } { c 0 1 } } ,
1 ≤ k ≤ m { { } { $ } }
F 6 { $ k ¨ } [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } } ,
[ ( n + 2 ) ( 2 ( n + 1 ) ) , 3 n ( 2 ( n + 1 ) ) ] { C { } { k ¨ } B }
M 2 { mask } [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } }
{ 3 n ( 2 ( n + 1 ) ) } { C { } { k ¨ } B }
U 3 { unmask } [ 0 , n ( 2 n + 1 ) ] , { } { X ̲ X ̲ X ¯ X ¯ { 1 } { 0 } }
{ 2 n ( n + 1 ) 2 } { C C }
F 7 { c i ¨ c i ,   for   all   1 i n } [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } }
{ n ( n + 2 ) ( 2 ( n + 1 ) ) } { C { } { k ¨ } B }
S 1 { unmask } [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } }
{ ( 2 n + 1 ) n ( 2 ( n + 1 ) ) } { C { } { k ¨ } B }
S 2 { unmask } [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } }
{ 2 n ( n + 1 ) } { C C }
Table 5. Internals of the nodes of the subnetwork Val.

Node | M | α | β | S
W { x i ¯ x i ¨ , x i ̲ x ˙ , [ 2 n 2 ( n + 2 ) , ) , { + } { { } { $ } } ,
k ¨ $ } , 1 i n { n ( n + 1 ) ( 2 ( n + 1 ) ) } { C { } { k ¨ } B }
V 0 { p 0 ( k ) ¯ 1 , p 0 ( k ) ̲ 0 } { [ 2 n ( n + 1 ) ( 2 ( n + 1 ) ) , n ( 2 ( n + 1 ) ) ] , { + } { C { } { k ¨ } B } ,
1 ≤ k ≤ m { 1 , 3 , 5 , 7 , … , ( 2 n + 1 ) } { X ̲ X ̲ X ¯ X ¯ { 1 } { 0 } }
V 1 { p 0 ( k ) ¯ 1 , p 0 ( k ) ̲ 0 } { [ 2 n ( n + 1 ) ( 2 ( n + 1 ) ) , n ( 2 ( n + 1 ) ) ] , { + } { C { } { k ¨ } B } ,
1 ≤ k ≤ m { 0 , 2 , 4 , … , 2 n } { X ̲ X ̲ X ¯ X ¯ { 1 } { 0 } }
V 0 { c 0 c 0 1 , c 1 c 1 ¨ , [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } } ,
c i $ } , 2 i n { n ( n + 1 ) ( 2 ( n + 1 ) ) } { C { } { k ¨ } B }
V 1 { c 0 c 0 0 , c 1 c ¨ 1 , [ 2 n 2 ( n + 2 ) , ) , { } { { } { $ } } ,
c i $ } , 2 i n { n ( n + 1 ) ( 2 ( n + 1 ) ) } { C { } { k ¨ } B }
V 2 i , { c 1 R i }, [ 2 n 2 ( n + 2 ) , ) , { + } { { } { $ } } ,
0 i < 2 n 1 i < 2 n { 2 i } { X X ˙ X ¨ { $ } }
H { x i ¨ x i ¯ , x i ˙ x i ̲ } , [ ( 2 ( n + 1 ) + 1 ) , 1 ] { + } { X X ˙ X ¨ { $ } } ,
1 ≤ i ≤ n { { } { $ } }
A { c 0 0 0 } [ 2 n 2 ( n + 2 ) , ) { } { { } { $ } }
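The rule tables above implement, in the picture-processor formalism, a search over all assignments of n Boolean variables against m random quadratic equations over F2. Stated outside that formalism, the puzzle and a naive exponential-time solver look as follows (the function names and the coefficient representation are illustrative, not from the paper):

```python
import itertools
import random

def random_mq_system(n, m, seed=0):
    """Generate m random quadratic polynomials over F2 in n variables.

    Each polynomial is (Q, L, c): quadratic coefficients Q[(i, j)] with
    i <= j, linear coefficients L[i], and a constant term c, all in {0, 1}.
    """
    rng = random.Random(seed)
    system = []
    for _ in range(m):
        Q = {(i, j): rng.randrange(2) for i in range(n) for j in range(i, n)}
        L = [rng.randrange(2) for _ in range(n)]
        c = rng.randrange(2)
        system.append((Q, L, c))
    return system

def evaluate(poly, x):
    """Evaluate one polynomial at the 0/1 assignment x, over F2."""
    Q, L, c = poly
    v = c
    for (i, j), q in Q.items():
        v ^= q & x[i] & x[j]  # quadratic terms p_ij x_i x_j
    for i, l in enumerate(L):
        v ^= l & x[i]         # linear terms p_i x_i
    return v

def solve_bruteforce(system, n):
    """Try all 2^n assignments; return one zeroing every polynomial, or None."""
    for x in itertools.product((0, 1), repeat=n):
        if all(evaluate(p, x) == 0 for p in system):
            return x
    return None
```

This exhaustive search needs up to 2^n system evaluations, which is what makes the problem suitable as a proof-of-work; the network described above trades that time for massive parallelism and finishes in O(n²) computational steps.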
Jiménez, K.P.; Gómez-Canaval, S.; Villanueva-Polanco, R.; Suazo, S.M. Networks of Picture Processors with Filtering Based on Evaluation Sets as Solvers for Cryptographic Puzzles Based on Random Multivariate Quadratic Equations. Mathematics 2020, 8, 2160. https://0-doi-org.brum.beds.ac.uk/10.3390/math8122160
