
Generalized Concatenated Codes over Gaussian and Eisenstein Integers for Code-Based Cryptography

Institute for System Dynamics (ISD), HTWG Konstanz, University of Applied Sciences, 78462 Konstanz, Germany
* Author to whom correspondence should be addressed.
Academic Editors: Edoardo Persichetti, Paolo Santini, Marco Baldi and Qiang Wang
Received: 1 November 2021 / Revised: 21 November 2021 / Accepted: 25 November 2021 / Published: 29 November 2021
(This article belongs to the Special Issue Public-Key Cryptography in the Post-quantum Era)

Abstract

The code-based McEliece and Niederreiter cryptosystems are promising candidates for post-quantum public-key encryption. Recently, q-ary concatenated codes over Gaussian integers were proposed for the McEliece cryptosystem, together with the one-Mannheim error channel, where the error values are limited to Mannheim weight one. Due to the limited error values, the codes over Gaussian integers achieve a higher error correction capability than maximum distance separable (MDS) codes with bounded minimum distance decoding. This higher error correction capability improves the work factor regarding decoding attacks based on information-set decoding. The codes also enable a low complexity decoding algorithm for decoding beyond the guaranteed error correction capability. In this work, we extend this coding scheme to codes over Eisenstein integers. These codes have advantages for the Niederreiter system. Additionally, we propose an improved code construction based on generalized concatenated codes. These codes extend the rate region where the work factor is beneficial compared to MDS codes. Moreover, generalized concatenated codes are more robust against structural attacks than ordinary concatenated codes.
Keywords: public-key cryptography; McEliece cryptosystem; Niederreiter cryptosystem; maximum distance separable codes; concatenated codes

1. Introduction

Public-key cryptographic algorithms are important for today’s cyber security. They are used for key exchange protocols and digital signatures, e.g., in communication standards like transport layer security (TLS), S/MIME, and PGP. Public-key encryption is based on a trapdoor function, which also defines the system’s security. The most common public-key cryptosystems nowadays are the Rivest–Shamir–Adleman (RSA) algorithm and elliptic curve cryptography (ECC). These are based on the intractability of integer factorization and of the elliptic curve discrete logarithm problem, respectively. Both problems can be solved using quantum algorithms [1,2]. Hence, large-scale quantum computers threaten the security of today’s RSA and ECC cryptosystems.
To cope with this issue, many post-quantum encryption methods were proposed [3], e.g., code-based cryptography. Code-based cryptography is based on the problem of decoding random linear codes, which is known to be NP-hard [4]. The best-known code-based cryptosystems are the McEliece system [5] and the Niederreiter system [6].
For the McEliece system, the public key is a permuted and scrambled version of the generator matrix of an error correcting code. The message is encrypted by encoding the information with the scrambled generator matrix and adding intentional errors. The private key is the original generator matrix and the matrices used for scrambling and permutation. Using the private key, the received vector can be decoded into the original message. Due to the scrambling of the generator matrix, it is not possible to obtain its structure without the knowledge of the private key. Hence, an attacker needs to decode the received vector for a random-looking linear code. The best-known decoding attacks for code-based cryptosystems are based on information-set decoding (ISD) [7], which is therefore the most interesting attack scenario [8,9,10,11]. For information-set decoding, the attacker tries to find an error-free information set, which is then used to re-encode the codeword.
The Niederreiter system is comparable to the McEliece system. However, secure digital signature schemes are only known for the Niederreiter system [12]. Instead of the generator matrix, the scrambled parity check matrix is used as the public key. For encryption, the message is encoded as an error vector, and the ciphertext is the syndrome calculated with the public parity check matrix. The private key consists of the original parity check matrix, as well as the matrices used for scrambling. For decryption, a syndrome decoding algorithm is required, which recovers the error vector from the syndrome. As for the McEliece scheme, the most relevant attacks are based on ISD.
Different code families were proposed for those systems, e.g., Reed–Solomon (RS) codes [13,14], BCH codes [15], LDPC codes [16,17,18,19], or polar codes [20]. For some code families, there exist structural attacks, which make use of the structure of the codes, e.g., the attacks in [21,22].
In [23], product codes of outer RS codes and inner one-Mannheim error correcting (OMEC) codes were proposed for the McEliece system. Those codes are defined over Gaussian integers, which are complex numbers with integers as real and imaginary parts [24,25]. They are able to correct more errors than maximum distance separable (MDS) codes. MDS codes are linear block codes that are optimal with respect to the minimum Hamming distance, i.e., they achieve equality in the Singleton bound. The codes over Gaussian integers achieve a higher error correction capability due to the restriction of the error values. The channel model used allows only errors of Mannheim weight (magnitude) one. The work factor of ISD depends only on the number of errors, not on their values. A higher error correction capability leads to a higher work factor for comparable parameters. On the other hand, the concatenated codes presented in [23] can be attacked with a combination of the structural attacks from [21,22].
In this work, we propose a new code construction based on generalized concatenated (GC) codes. This construction is motivated by the results in [26], which show that GC codes are more robust against structural attacks than ordinary concatenated codes. Furthermore, we adapt the code construction to Eisenstein integers. Eisenstein integers are complex numbers of the form a + bω, where a and b are integers and ω = −1/2 + (√3/2)i is a third root of unity [27]. Eisenstein integers form a hexagonal lattice in the complex plane [28]. While the one-Mannheim error channel has four different error values, a similar channel model for Eisenstein integers has six different error values. In the Niederreiter cryptosystem, the message is encoded as an error vector. Hence, the representation with Eisenstein integers allows for longer messages compared with codes over Gaussian integers. In this work, we additionally derive and discuss the channel capacity of the considered weight-one channel over Eisenstein integers. Moreover, we extend the GC code construction to Eisenstein integers.
This publication is structured as follows. In Section 2, we review the McEliece and the Niederreiter system, as well as the attacks based on information-set decoding. In Section 3, we briefly explain Gaussian and Eisenstein integers, as well as the one-Mannheim error channel. We investigate the weight-one error channel for Eisenstein integers in Section 4. In Section 5, we adapt the product codes from [23] to Eisenstein integers. The new code construction based on generalized concatenated codes is discussed in Section 6. Finally, we conclude our work in Section 7.

2. Code-Based Cryptosystems

In this section, we review the basics of the McEliece and Niederreiter systems, as well as information-set decoding.

2.1. The McEliece System

The McEliece cryptosystem utilizes the problem of decoding random linear codes as a trapdoor function. In the following, we briefly explain the basic concept of this system.
Consider a q-ary code C(n, k, t) of length n, dimension k, and error correction capability t. The code can be represented by its generator matrix G and should enable an efficient decoding algorithm ϕ(·) for up to t errors. The public key is the pair (G′, t), where G′ = SGP is a scrambled generator matrix, with a random non-singular k × k scrambling matrix S and an n × n permutation matrix P. The private key consists of the three matrices (G, S, P).
For encrypting a message u of length k, the message is encoded using the public generator matrix G′, and a random error vector e containing at most t non-zero error values is added, i.e., v = uG′ + e. Using the private key, the message can be decrypted by first computing r = vP⁻¹ = uSG + eP⁻¹. Note that eP⁻¹ is a permuted error vector, and the permutation does not change the number of errors. We decode r as ϕ(r) = ϕ(vP⁻¹) = uS. Finally, the message can be calculated using the inverse scrambling matrix.
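The scramble–permute–encrypt cycle can be illustrated with a toy example. The sketch below is an assumption for illustration only: it uses the binary Hamming(7,4) code with t = 1 instead of the q-ary codes discussed in this paper, and the matrices S and P are tiny examples, far from secure parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hamming(7,4) in systematic form: corrects any single bit error.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

# Private key: scrambling matrix S (invertible over GF(2); this one is its
# own inverse mod 2) and a random permutation matrix P.
S = np.array([[1,1,0,0],[0,1,0,0],[0,0,1,1],[0,0,0,1]])
S_inv = S
perm = rng.permutation(7)
P = np.eye(7, dtype=int)[:, perm]

G_pub = S @ G @ P % 2            # public generator matrix G' = SGP

def encrypt(u):
    """v = u G' + e with a single random error (t = 1)."""
    e = np.zeros(7, dtype=int)
    e[rng.integers(7)] = 1
    return (u @ G_pub + e) % 2

def decrypt(v):
    r = v @ P.T % 2              # undo permutation (P^-1 = P^T)
    s = H @ r % 2                # Hamming syndrome of r = uSG + eP^-1
    if s.any():                  # nonzero syndrome equals the column of H at the error
        j = next(j for j in range(7) if (H[:, j] == s).all())
        r[j] ^= 1
    return r[:4] @ S_inv % 2     # G is systematic: first 4 bits are u·S
```

Without knowledge of S and P, an attacker sees only the random-looking G′ and must decode an unstructured linear code.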

2.2. The Niederreiter System

The Niederreiter system is based on the parity check matrix. Consider a code C(n, k, t) with parity check matrix H and an efficient syndrome decoding algorithm ϕ(·). The public key is (H′, t). The scrambled parity check matrix is calculated as H′ = SHP, where S is a random non-singular (n − k) × (n − k) scrambling matrix, and P is a random n × n permutation matrix. The private key consists of the three matrices (H, S, P).
For encryption, a message is first encoded as an error vector m of length n with at most t non-zero symbols. The ciphertext is the syndrome calculated using the public parity check matrix, i.e., s′ᵀ = H′mᵀ. The legitimate recipient receives s′ᵀ = H′mᵀ = SHPmᵀ and computes S⁻¹s′ᵀ = HPmᵀ. Applying the syndrome decoding algorithm ϕ(·) results in the permuted error vector Pmᵀ. Finally, the message m is obtained using the inverse permutation P⁻¹. As for the McEliece system, this decoding is only feasible with knowledge of the scrambling and permutation matrices S and P.
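The syndrome-based encryption can likewise be sketched with a toy binary example. Again, the Hamming(7,4) code with t = 1 and the small S are illustrative assumptions only, not parameters from this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hamming(7,4) parity check matrix: all 7 nonzero length-3 columns.
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

# Private key: invertible 3x3 scrambling matrix S and permutation P.
S = np.array([[1,1,0],[0,1,1],[0,0,1]])
S_inv = np.array([[1,1,1],[0,1,1],[0,0,1]])   # verified: S @ S_inv = I mod 2
perm = rng.permutation(7)
P = np.eye(7, dtype=int)[:, perm]

H_pub = S @ H @ P % 2            # public parity check matrix H' = SHP

def encrypt(m):
    """Ciphertext is the public syndrome of the message vector m (weight t = 1)."""
    return H_pub @ m % 2

def decrypt(s_pub):
    s = S_inv @ s_pub % 2        # unscramble: s = (HP) m^T
    HP = H @ P % 2
    m = np.zeros(7, dtype=int)   # syndrome decode: s matches one column of HP
    j = next(j for j in range(7) if (HP[:, j] == s).all())
    m[j] = 1
    return m
```

The ciphertext has only n − k = 3 bits here, illustrating why the Niederreiter system transmits syndromes rather than codewords.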

2.3. Information-Set Decoding

The best known attacks on the McEliece system as well as the Niederreiter system are based on information-set decoding (ISD). Those attacks do not rely on any code structure except linearity, i.e., the attacks try to decode a random-looking linear code. Such attacks were proposed in [8,9], and more recently, some improvements were proposed in [10,11]. We only review the basic concept of attacks based on ISD.
For the McEliece system, the attacker tries to recover the information vector u′ = uS from the ciphertext v = uG′ + e. To achieve this, the attacker tries to guess k error-free positions vₖ of v, such that the corresponding columns of the public generator matrix G′ form a non-singular matrix G′ₖ. If such positions are found, the attacker can use Gaussian elimination on the guessed positions of G′ and re-encode a codeword v′ = u′G′ agreeing with v in the guessed positions. If v′ differs in at most t positions from v, there are no errors in vₖ, and the attacker obtains u′ = vₖG′ₖ⁻¹.
For the Niederreiter system, the attacker tries to find an error vector m of weight t, such that H′mᵀ = s′ᵀ. To achieve this, an attacker tries random permutations P̃ on the public key H′ and computes the systematic form H″ = UH′P̃ = (A | I_{n−k}), where U is the matrix that produces the systematic form and I_{n−k} is the (n − k) × (n − k) identity matrix. The attacker searches for a permutation such that the permuted message vector P̃m has all non-zero entries in the rightmost n − k positions. Such a permutation can be detected by the Hamming weight of the scrambled syndrome Us′ᵀ. Due to the systematic form of H″, the permuted message vector is then P̃m = (0, …, 0 | (Us′ᵀ)ᵀ), and the search succeeds when the weight of Us′ᵀ is at most t.
The complexity of information-set decoding attacks is determined by the expected number of trials required to find a permutation fulfilling those criteria. The probability of such a permutation is

$$P_s = \binom{n-k}{t} \Big/ \binom{n}{t}$$

and the expected number of trials is

$$N_{ISD} = \frac{1}{P_s} = \binom{n}{t} \Big/ \binom{n-k}{t}.$$
We use N_ISD to measure the work factor for ISD attacks.
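The work factor can be evaluated directly with Python's `math.comb`; the function name `work_factor` and the example parameters are ours.

```python
from math import comb

def work_factor(n, k, t):
    """Expected number of ISD trials: N_ISD = C(n, t) / C(n - k, t)."""
    return comb(n, t) / comb(n - k, t)
```

At fixed n and k, increasing the number of correctable errors t raises the work factor rapidly, which is why the higher error correction capability of the restricted-error codes pays off against ISD.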

3. Codes over Gaussian and Eisenstein Integers

Next, we review some properties of Gaussian and Eisenstein integers, as well as some known code constructions for these number fields.

3.1. Gaussian and Eisenstein Integers

Gaussian integers are a subset of the complex numbers with integers as real and imaginary parts, i.e., numbers of the form a + bi, where a and b are integers. We denote the set of Gaussian integers by G. The modulo operation in the complex plane is defined as

$$z \bmod \pi = z - \left[ \frac{z \pi^*}{\pi \pi^*} \right] \cdot \pi,$$

where [·] denotes rounding to the closest Gaussian integer, which is equivalent to rounding the real and imaginary parts individually. The set of Gaussian integers modulo π ∈ G with p = ππ* elements is denoted by G_p. For π ∈ G such that p ≡ 1 (mod 4), the set G_p = G mod π is a finite field which is isomorphic to the prime field F_p [24].
We measure the weight wt_M(z) of a Gaussian integer z by the Mannheim weight, which is the sum of the absolute values of its real and imaginary parts, i.e.,

$$wt_M(z) = \min_{a+bi \in K(z)} |a| + |b|,$$

where K(z) is the set of Gaussian integers z′ such that z′ = z mod π. The Mannheim distance between two Gaussian integers is the weight of their difference,

$$d_M(z, y) = wt_M(z - y).$$
The Mannheim weight of a vector is the sum of Mannheim weights of all elements of the vector. The same holds for the Mannheim distance between two vectors.
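These definitions can be sketched directly with Python complex numbers. The brute-force search over representatives is an implementation shortcut adequate only for small fields; the modulus π = 2 + i (p = 5) used in the tests is an example of ours.

```python
def gmod(z, pi):
    """z mod pi over the Gaussian integers: z - [z*conj(pi)/(pi*conj(pi))]*pi,
    where [.] rounds real and imaginary parts to the nearest integer."""
    q = z * pi.conjugate() / (pi * pi.conjugate()).real
    q_rounded = complex(round(q.real), round(q.imag))
    return z - q_rounded * pi

def mannheim_weight(z, pi, search=6):
    """Brute-force Mannheim weight: minimum |a| + |b| over all
    representatives a + bi congruent to z modulo pi."""
    best = None
    for a in range(-search, search + 1):
        for b in range(-search, search + 1):
            if gmod(complex(a, b) - z, pi) == 0:
                w = abs(a) + abs(b)
                best = w if best is None else min(best, w)
    return best
```

For π = 2 + i, the element 2 is congruent to −i, so its Mannheim weight is 1 rather than 2, illustrating why the minimum over K(z) matters.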
Eisenstein integers are similar to Gaussian integers, but of the form x = a + bω, where a and b are integers, and ω = −1/2 + (√3/2)i is a third root of unity. Eisenstein integers form a hexagonal structure in the complex plane and are denoted by E. As for Gaussian integers, a finite field can be defined as the set E_p = E mod π, where π ∈ E and p = ππ*. In contrast to Gaussian integers, the prime p has to fulfill p ≡ 1 (mod 6) due to the hexagonal structure. For such π, the field E_p is isomorphic to the prime field F_p [27].
We measure the weight of an Eisenstein integer by the hexagonal weight, which is defined as the minimum number of unit steps in directions that are multiples of 60°. An Eisenstein integer z can be written as z = g₁ε₁ + g₂ε₂, with ε₁, ε₂ ∈ {±1, ±ω, ±(1 + ω)}. Note that (1 + ω) is a sixth root of unity and ω is a third root of unity. Hence, ε₁ and ε₂ can take the six powers of the sixth root of unity. The weight is defined as

$$wt_{HX}(z) = \min_{\{g_1, g_2 \,:\, g_1\varepsilon_1 + g_2\varepsilon_2 = z\}} |g_1| + |g_2|.$$
As for Gaussian integers, the weight of a vector is the sum of weights of the elements, and the distance between two Eisenstein integers is the weight of the difference.
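Since any minimal decomposition g₁ε₁ + g₂ε₂ corresponds to a shortest walk using the six unit steps, the hexagonal weight can be computed by a breadth-first search in the lattice; this search-based sketch is our own illustration of the definition.

```python
from collections import deque

# The six hexagonal-weight-one steps, written as (a, b) for a + b*omega:
# +-1, +-omega, +-(1 + omega).
UNITS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]

def hex_weight(a, b):
    """Hexagonal weight of a + b*omega: minimum number of unit steps from
    the origin, found by breadth-first search in the Eisenstein lattice."""
    if (a, b) == (0, 0):
        return 0
    seen = {(0, 0)}
    frontier = deque([((0, 0), 0)])
    while frontier:
        (x, y), d = frontier.popleft()
        for (ua, ub) in UNITS:
            nxt = (x + ua, y + ub)
            if nxt == (a, b):
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
```

For example, 1 + ω is itself a unit and has weight one, while 1 − ω needs two steps.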

3.2. One Error Correcting (OEC) Codes

One-error-correcting (OEC) codes over Gaussian and Eisenstein integer fields were proposed in [24] and [27], respectively. The parity check matrix H is defined as

$$H = \left( \alpha^0, \alpha^1, \alpha^2, \ldots, \alpha^{n-1} \right),$$

where α is a primitive element of the field. A vector v = (v₀, v₁, v₂, …, v_{n−1}) is a codeword if and only if Hvᵀ = 0. For codes over Eisenstein integers, we have v_i ∈ E_p, and the length of an OEC code satisfies n ≤ (p − 1)/6. For OEC codes over Gaussian integers, we have n ≤ (p − 1)/4 and v_i ∈ G_p.
The dimension of an OEC code is k = n 1 , and the minimum Hamming distance is d H = 2 . The minimum hexagonal distance is d H X = 3 for OEC codes over Eisenstein integers, and the minimum Mannheim distance is d M = 3 for OEC codes over Gaussian integers. Hence, such codes can detect any single error of arbitrary weight and correct a single error of Mannheim weight one or hexagonal weight one, respectively.
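A minimal syndrome decoder for a single weight-one error can be sketched using the isomorphism to the prime field. All parameters below are a small worked assumption, not values from the paper: p = 13 (p ≡ 1 mod 4), α = 2, n = 3 = (p − 1)/4, and the images 5 and 8 of i and −i in F₁₃ (since 5² = 25 ≡ −1 mod 13).

```python
p = 13          # prime with p = 1 (mod 4), so F_13 is isomorphic to G_13
alpha = 2       # primitive element of F_13
n = 3           # OEC length bound: n <= (p - 1)/4
# Images of the four Mannheim-weight-one errors 1, -1, i, -i in F_13:
err_values = [1, 12, 5, 8]

H = [pow(alpha, j, p) for j in range(n)]   # parity check (alpha^0, alpha^1, alpha^2)

# Precompute the syndrome table: syndrome -> (error position, error value).
# The 12 syndromes e * alpha^j are pairwise distinct, so the table is unambiguous.
table = {(e * H[j]) % p: (j, e) for j in range(n) for e in err_values}

def decode_single(v):
    """Correct one weight-one error in v (length n, symbols in F_13)."""
    s = sum(vi * hi for vi, hi in zip(v, H)) % p
    if s == 0:
        return v
    j, e = table[s]
    v = list(v)
    v[j] = (v[j] - e) % p
    return v
```

The same table lookup idea is what the inner decoders of the concatenated constructions below rely on.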

3.3. Product Codes over Gaussian Integers

In [23], a product code construction from outer Reed–Solomon (RS) codes and inner one-error-correcting (OEC) codes over Gaussian integers was proposed. In the following, we review this code construction. Later on, this construction is extended to codes over Eisenstein integers.
We consider an outer RS code C_o(n_o, k_o, d_o) over F_p and an inner OEC code C_i(n_i, k_i, d_i) over G_p, where p = ππ*. Note that d_o denotes the minimum Hamming distance of the RS code, while d_i denotes the minimum Mannheim distance of the OEC code. The codeword of a product code can be represented as an (n_i × n_o)-matrix. For encoding, first, k_i codewords of the outer RS code are encoded and written to the first k_i rows of the codeword matrix. Next, the symbols are mapped from F_p to the isomorphic G_p, and each column of the codeword matrix is encoded with the inner OEC code. The product code has length n = n_o n_i, dimension k = k_o k_i, and minimum Mannheim distance d = d_o d_i, as shown in [23].
For instance, consider the special case of inner OEC codes of length n_i = 2 and minimum Mannheim distance d_i = 4 [23]. These codes are generated by a field element a of weight at least three. The parity check matrix is H = (1, a) and the generator matrix is G = (−a, 1). Depending on the choice of a, this can result in a code of minimum Mannheim distance d_i ≥ 4. Note that this does not change the Hamming weight; hence, only one error of arbitrary weight can be detected. Like the original OEC codes proposed in [24], this code can only correct one error of Mannheim weight one, but it can detect any error vector of weight two. The product code has length n = 2n_o and minimum Mannheim distance d = 4(n_o − k_o + 1).
In order to develop a low-complexity decoding algorithm that can decode up to half the minimum distance, a new channel model was considered in [23]. This one-Mannheim error channel is a discrete memoryless channel restricting the error values to Mannheim weight one [29]. Given an error probability ϵ, each error symbol is zero with probability 1 − ϵ. Error values are from the set {1, −1, i, −i}, each occurring with probability ϵ/4. Due to this restriction, the error vector in each inner codeword with n_i = 2 can have a Mannheim weight of at most two, and therefore can be detected by the inner OEC codes. While the inner decoder corrects any error vector of Mannheim weight one, it declares an erasure for each error vector of Mannheim weight two. Hence, all error positions are known for the outer RS decoder, and an erasure-only decoding method can be applied. Using the Forney algorithm, this erasure-only decoding can correct up to n_o − k_o erasures.
The restriction of the error values allows for a guaranteed error correction capability of t = 2(n_o − k_o) + 1 = n − 2k + 1 errors, because n_o − k_o erasures can be corrected, and each erasure requires at least two errors. One additional error can be corrected in any inner codeword. For code rates R < 1/3, this error correction capability is higher than that of MDS codes, i.e., t_MDS = ⌊(n − k)/2⌋.
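The rate-1/3 crossover can be checked numerically; the helper names and example parameters below are ours.

```python
def t_restricted(n, k):
    """Guaranteed error correction of the product construction under the
    one-Mannheim error channel: t = n - 2k + 1."""
    return n - 2 * k + 1

def t_mds(n, k):
    """Bounded minimum distance decoding of an MDS code: t = floor((n - k)/2)."""
    return (n - k) // 2
```

For n = 30 and k = 8 (rate about 0.27) the restricted-channel capability is 15 versus 11 for an MDS code, while for k = 15 (rate 1/2) the MDS code is clearly better.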

4. The Weight-One Error Channel

In this section, we extend the concept of the one-Mannheim error channel from [23] to Eisenstein integers. Furthermore, we derive the capacity of this weight-one error channel and discuss its relation to code-based cryptosystems. These results demonstrate that codes over Eisenstein integers are attainable where the expected number of errors exceeds the number of redundancy symbols n − k, which prevents error-free information sets.
While the minimum Hamming distance of codes over Eisenstein integer fields is comparable with other code constructions, they may have a significantly higher minimum hexagonal distance. This leads to an increased error correction capability in terms of hexagonal-weight errors. Hence, a channel model, which restricts the error weight, is advantageous for such codes.
The weight-one error channel is a discrete memoryless channel which restricts the error values to hexagonal weight one. Hence, only error values e_i ∈ {±1, ±ω, ±(1 + ω)} are possible. Note that ω is a third root of unity and 1 + ω is a sixth root of unity. Hence, these six possible values form a hexagon in the complex plane.
Figure 1 illustrates the channel model of the weight-one error channel. For a given channel error probability ϵ, error-free transmission (e_i = 0) occurs with probability 1 − ϵ, while each of the six error values has the same probability ϵ/6.
Proposition 1.
The channel capacity of the weight-one error channel with transmitted symbols v_i ∈ E_p is

$$C = \log_2(p) + (1-\epsilon) \cdot \log_2(1-\epsilon) + \epsilon \cdot \log_2\left(\frac{\epsilon}{6}\right).$$
Proof. 
The channel capacity of a symmetric discrete memoryless channel is [30]

$$C = \log_2 |R| - H(P),$$

where |R| is the cardinality of the output alphabet R = E_p and H(P) is the entropy of a row P of the transition matrix. The cardinality of the output alphabet E_p is p. Each row of the transition matrix has seven non-zero elements: one element (1 − ϵ) for the case that no error happened, and six elements ϵ/6 for the six equally probable error values. Hence, the entropy is

$$H(P) = -\sum_{i=0}^{p-1} P_i \cdot \log_2(P_i) = -(1-\epsilon) \cdot \log_2(1-\epsilon) - 6 \cdot \frac{\epsilon}{6} \cdot \log_2\left(\frac{\epsilon}{6}\right)$$

and thus (8) follows. □
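The capacity expression is easy to evaluate numerically; the function below is a direct transcription of the proposition, with our own function name.

```python
from math import log2

def capacity(p, eps):
    """Capacity of the weight-one error channel over E_p for 0 < eps < 1:
    C = log2(p) + (1 - eps)*log2(1 - eps) + eps*log2(eps/6)."""
    return log2(p) + (1 - eps) * log2(1 - eps) + eps * log2(eps / 6)
```

As ϵ approaches zero, the capacity approaches log₂(p), and even for large ϵ the relative capacity C/log₂(p) can stay above the 1 − ϵ line discussed in the example below.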
Example 1.
Figure 2 shows the relative channel capacity C/log₂(p) of the weight-one error channel. This relative capacity is the supremum of all achievable code rates R. Moreover, the line 1 − ϵ is shown, on which the expected relative number of errors equals the relative amount of redundancy (n − k)/n = 1 − R. In the achievable rate region above this line, the expected number of errors ϵn surpasses n − k, and therefore no error-free information sets exist. As shown in Figure 2, codes which are able to correct more than n − k errors are possible for code rates above 0.3 for p = 43 and above 0.2 for p = 97, respectively.

5. Product Codes over Eisenstein Integers

In this section, we adapt the product code construction from [23] to Eisenstein integers for the Niederreiter system. The adaptation of the product code construction is straightforward, i.e., we simply replace the inner codes over Gaussian integers by codes over Eisenstein integers. The restriction on applicable primes is different for Gaussian and Eisenstein integers. However, there are primes that fulfill both restrictions, leading to the same code parameters. Nevertheless, Eisenstein integers have advantages for the Niederreiter cryptosystem, where the message is encoded as an error vector of weight at most t. The information mapping consists of two parts. One part defines the error positions and can take $\log_2 \binom{n}{t}$ bits of information. The other part defines the error values and can take $\log_2 m^t$ bits of information, where m is the number of possible error values. Codes over Eisenstein integers increase the message length compared to codes over Gaussian integers, because the number of possible error values m is higher for Eisenstein integers than for Gaussian integers.
The Niederreiter cryptosystem requires an adaptation of the decoding method, because only the syndrome is available, and the decoding method needs to find the corresponding error vector. In the following, we devise such a syndrome decoding procedure.

5.1. Syndrome Decoding

For the syndrome decoding, we use look-up tables for the inner OEC codes and erasure decoding for the outer RS codes. We consider a private parity check matrix of the form

$$H = \begin{pmatrix} H_{RS} & 0 \\ I_{n_o} & a \cdot I_{n_o} \end{pmatrix},$$

where H_RS is the parity check matrix of the outer RS code, and the lower part (I_{n_o} | a·I_{n_o}) is the Kronecker product of the parity check matrix (1, a) of the OEC codes and an n_o × n_o identity matrix. With this definition, the first n_o − k syndrome values correspond to the RS code, and the last n_o syndrome values belong to the inner OEC codes. The public key is a scrambled version of the parity check matrix, i.e., H′ = SHP, where S is a random invertible scrambling matrix, and P is a random permutation matrix.
To decode the scrambled syndrome s′ᵀ = SHPmᵀ, one first unscrambles the syndrome as s̃ᵀ = S⁻¹s′ᵀ = HPmᵀ, and then decodes the inner OEC codes using a look-up in a precomputed syndrome table. Since the inner codewords have a length of two, and the OEC codes have minimum hexagonal distance d_i ≥ 4, any single error resulting from the weight-one error channel can be corrected, while any error vector of up to two errors can be detected. The precomputed syndrome table provides the error location and value for each correctable error pattern, i.e., each error pattern with only one error. For each error pattern with two errors, an erasure is declared. These erasures are resolved in the outer decoder. Since s̃ᵀ = HPeᵀ, the inner decoder produces parts of the permuted error vector, which is denoted by Pêᵀ.
After the inner decoding, we update the residual syndrome for the outer decoder. The residual syndrome is the syndrome corresponding to an error vector e − ê of lower weight. The syndrome of the partial error vector ê can be computed using the private matrices H and P. This syndrome is subtracted from the received syndrome,

$$\tilde{s}_{res}^{\,T} = HP(e - \hat{e})^T = \tilde{s}^{\,T} - HP\hat{e}^T.$$
The outer RS code is now decoded using the residual syndrome s̃_res, as well as the erasure positions declared by the inner decoders. Since the inner decoders detect all error vectors, there are no unknown error positions, and erasure-only decoding can be applied to the RS code. This is done using the Forney algorithm [31]. Using the positions j_i, i = 1, …, ν corresponding to the ν erasures, the erasure location polynomial can be calculated as

$$\Lambda(x) = \prod_{i=1}^{\nu} \left(1 - x X_i\right).$$
This polynomial has roots at X₁⁻¹, …, X_ν⁻¹, with X_i = α^{j_i}. Similarly, we represent the residual syndrome as a polynomial, i.e., S_res(x) = s₀ + s₁x + ⋯ + s_{n_o−k−1}x^{n_o−k−1}, and calculate the error-evaluator polynomial Ω(x) using the key equation

$$\Omega(x) = S_{res}(x) \Lambda(x) \bmod x^{n_o - k}.$$
The error values are determined as

$$\hat{e}_i = \frac{\Omega(X_i^{-1})}{\Lambda'(X_i^{-1})},$$

where Λ′(x) is the derivative of Λ(x).
The RS decoder is able to find all error values in the information digits of the OEC codewords if the number of erasures ν does not exceed n o k . Now, the step in (12) can be used again, with an updated error vector e ^ . Hence, the syndrome decoding of the OEC codewords can be repeated to find all remaining errors. The inner codewords have a length of two. Consequently, after correcting one position using the outer code, only a single weight-one error can remain, which is corrected using the syndrome table for the inner code.
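The erasure-only Forney steps can be sketched over a small prime field. This is a toy instance, not the paper's implementation: p = 13, α = 2, n_o − k = 4, and the erasure positions and values are a test fixture of ours. We use the syndrome convention S_j = Σ_i e_i X_i^j with j = 0, …, n_o − k − 1, under which the Forney expression carries an extra factor −X_i; other syndrome conventions shift this factor.

```python
# Toy erasure-only RS decoding with the Forney algorithm over F_13.
p = 13          # prime field size (illustrative)
alpha = 2       # primitive element of F_13
nk = 4          # number of syndromes = correctable erasures (n_o - k)

def poly_mul(a, b):
    """Multiply polynomials (coefficient lists, lowest degree first) mod p."""
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return r

def poly_eval(c, x):
    """Horner evaluation of a polynomial mod p."""
    y = 0
    for coef in reversed(c):
        y = (y * x + coef) % p
    return y

# Test fixture: known erasure positions j_i and true error values e_i.
positions = [1, 4, 7, 10]
values = [3, 5, 1, 9]
X = [pow(alpha, j, p) for j in positions]      # erasure locators X_i = alpha^{j_i}

# Syndromes S_j = sum_i e_i X_i^j for j = 0, ..., nk - 1.
S = [sum(v * pow(Xi, j, p) for v, Xi in zip(values, X)) % p for j in range(nk)]

# Erasure locator Lambda(x) = prod_i (1 - x X_i).
Lam = [1]
for Xi in X:
    Lam = poly_mul(Lam, [1, (-Xi) % p])

# Key equation: Omega(x) = S(x) Lambda(x) mod x^nk.
Om = poly_mul(S, Lam)[:nk]

# Formal derivative Lambda'(x).
dLam = [(i * c) % p for i, c in enumerate(Lam)][1:]

# Forney step: with this syndrome convention,
# e_i = -X_i * Omega(X_i^-1) / Lambda'(X_i^-1).
recovered = []
for Xi in X:
    Xinv = pow(Xi, p - 2, p)                   # inverse via Fermat's little theorem
    num = (-Xi * poly_eval(Om, Xinv)) % p
    den = poly_eval(dLam, Xinv)
    recovered.append((num * pow(den, p - 2, p)) % p)
```

Since ν = n_o − k here, this fixture exercises the maximum number of correctable erasures.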
Next, we estimate the error correcting capability of this decoding procedure. A minimum of 2 ( n o k ) channel errors is required to cause a decoding failure in the outer decoder, because n o k erasures can be corrected by the outer decoder, and an erasure requires two errors in an inner codeword. Additionally, the OEC code corrects all single errors in the inner codewords. Therefore, at least t = 2 ( n o k ) + 1 = n 2 k + 1 errors can be corrected. Depending on the error positions, this decoding procedure can correct some patterns with up to 2 ( n o k ) + k = n k errors. In comparison with MDS codes, which have an error correction capability of ( n k ) / 2 , the proposed construction is advantageous for code rates R < 1 / 3 .

5.2. Code Examples

Table 1 shows a comparison of the proposed code construction with MDS codes. The table provides the field size p, code length n, dimension k, and error correction capability t, as well as the work factor N_ISD for information-set decoding. The left-hand side of the table considers the proposed code construction, while the right-hand side lists comparable MDS codes. In all examples, the work factor for information-set decoding of the proposed construction is significantly higher than for MDS codes.
Table 2 shows a comparison of the proposed code construction over Eisenstein integers, with the same construction over Gaussian integers from [23], where we compare the message lengths for a Niederreiter system. Note that the restrictions of the field sizes are different. For p = 137 , we can construct only codes over Gaussian integers, whereas for p = 139 , we can construct only codes over Eisenstein integers. However, the corresponding codes are comparable. For p = 157 and p = 193 , Eisenstein and Gaussian integer fields exist. The message size with Eisenstein integers is notably increased. This results from the different channel models. Eisenstein integers allow for six different error values, instead of four with Gaussian integers. Due to the same code parameters, the work factor for information-set decoding is the same. Therefore, the codes over Eisenstein integers are only advantageous for Niederreiter systems.

6. Generalized Concatenated (GC) Codes over Gaussian and Eisenstein Integers

While the product code construction shows a significantly increased work factor for information-set decoding, the construction may not be secure against structural attacks. The attack proposed in [22] may allow an attacker to recover the concatenated structure of the code construction. Afterwards, the attack proposed in [21] can recover the structure of the outer Reed–Solomon code.
In [26], it was shown that generalized concatenated codes may withstand the aforementioned structural attacks. Furthermore, those codes enable higher code rates. In the following, we discuss a generalized concatenated code construction which may withstand the structural attacks and has a higher work factor for information-set decoding than MDS codes, as well as than the proposed product codes.
In this section, we propose a generalized concatenated code construction. First, we consider codes over Gaussian integers, which, in combination with the one-Mannheim error channel, are advantageous for use in code-based cryptosystems. We investigate a decoding procedure for those codes. Finally, we demonstrate that the GC construction can be extended to codes over Eisenstein integers.

6.1. Code Construction

Generalized concatenated (GC) codes are multilevel codes with one inner code B(n_i, k_i, d_i) and multiple outer codes A^(l)(n_o, k_o^(l), d_o^(l)) with different dimensions. The basic idea of GC codes is to partition the inner code into multiple levels of subcodes, which are then protected by different outer codes. For the sake of clarity, we only consider GC codes with two outer codes A^(0) and A^(1) of the same length n_o, but different dimensions. Again, we represent a codeword as a matrix, where each column is a codeword of the inner code B.
Figure 3 shows the encoding of GC codewords: first, the outer encoders encode the two codewords a₁ ∈ A^(1) and a₀ ∈ A^(0). Then, each column is encoded by the inner encoder into a codeword b_j ∈ B. The length of the GC code is n = n_o n_i, as can be seen from the construction. The dimension is the sum of the outer dimensions.
For the inner codes, we consider codes over Gaussian integers which achieve a high error correction capability over the one-Mannheim error channel and enable a partitioning into subcodes with increased minimum distance. Table 3 shows some examples of such inner codes, with their field size p, their modulus π, and their generator matrix, as well as the minimum Mannheim distance d of the code and d^(1) of the subcode. These codes are not constructed from one-Mannheim error correcting codes, but were found by computer search. The generator matrix of the code B is chosen in the form

$$G = \begin{pmatrix} 1 & a & b \\ 0 & 1 & c \end{pmatrix},$$

where a, b, and c are elements of G_p. In this case, the first row is the generator matrix of a subcode B^(1)(3, 1, d^(1)) ⊂ B with minimum Mannheim distance d^(1) of at least 7. This distance allows one to correct any possible error pattern introduced by the one-Mannheim error channel. Note that no codes with d ≥ 5 were found for field sizes p < 109.
For the GC construction, we consider inner codes of length n_i = 3 and dimension k_i = 2, i.e., B(3, 2, d_i), where d_i ≥ 5 is the minimum Mannheim distance. Those codes can correct up to two errors of Mannheim weight one. For the first-level outer code A^(0), we apply a Reed–Solomon code of length n_o and dimension k_o. Since the subcodes in Table 3 are able to correct at least three errors of Mannheim weight one, the information digits of the second level need no further protection if the one-Mannheim error channel model is used.
The resulting GC code has length n = 3 n_o and dimension k = n_o + k_o, because the second outer level is uncoded. Figure 4 represents the encoding of a single column of the codeword. The outer code symbol a_{j,0} is encoded with the second row of the generator matrix G of the inner code, which results in a codeword b_j^(0) ∈ B. The outer code symbol a_{j,1} is encoded with the first row of G, which is the generator matrix of the subcode, and results in b_j^(1) ∈ B^(1). The codeword in the j-th column is the sum of the two codewords, i.e., b_j = b_j^(0) + b_j^(1) ∈ B. Note that the upper part of Figure 4 has the same form as the generator matrix (16), where the gray blocks represent the parity symbols.
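A minimal sketch of this column encoding, using the first code of Table 3 (p = 109, π = 10 + 3i) with the generator rows as we read them from that table; gmod is the Gaussian modulo reduction, repeated here so the snippet is self-contained:

```python
PI = complex(10, 3)                           # modulus, p = |PI|^2 = 109
ROW1 = [1, complex(-1, -3), complex(3, 4)]    # generator of the subcode B^(1)
ROW2 = [0, 1, complex(4, -3)]                 # second row of G

def gmod(x, pi=PI):
    # reduce x modulo pi (nearest Gaussian-integer multiple)
    p = (pi * pi.conjugate()).real
    q = x * pi.conjugate() / p
    q = complex(round(q.real), round(q.imag))
    r = x - q * pi
    return complex(round(r.real), round(r.imag))

def encode_column(a_j1, a_j0):
    """Encode one column of the GC codeword: b_j = b_j^(1) + b_j^(0)."""
    b1 = [gmod(a_j1 * g) for g in ROW1]       # codeword of the subcode B^(1)
    b0 = [gmod(a_j0 * g) for g in ROW2]       # codeword from the second row
    return [gmod(x + y) for x, y in zip(b1, b0)]
```

For example, encode_column(complex(2, -4), complex(-1, 3)) returns the column [2-4j, -2-6j, -3+2j].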

6.2. Decoding

For decoding the GC code, we first decode the inner codes B(3, 2, 5). While those codes are able to correct two errors of Mannheim weight one, we only correct single errors, and can therefore detect any possible error pattern generated by the one-Mannheim error channel. A look-up table with precomputed syndromes is used for decoding all error patterns with a single error. In cases where more errors occur, we declare an erasure and store the erasure location. Note that all error patterns are detected. Hence, an erasure-only decoding can be applied for the outer RS code.
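Our own illustration of this look-up decoding for the first code of Table 3: the parity-check vector h below is our derivation from the generator matrix (chosen so that every codeword satisfies b · h = 0); every single error of Mannheim weight one then has a distinct syndrome, while heavier patterns fall outside the table and trigger an erasure.

```python
PI = complex(10, 3)
H = [complex(1, 3), complex(-4, 3), 1]   # parity-check vector, b . h = 0 for b in B

def gmod(x, pi=PI):
    p = (pi * pi.conjugate()).real
    q = x * pi.conjugate() / p
    q = complex(round(q.real), round(q.imag))
    r = x - q * pi
    return complex(round(r.real), round(r.imag))

# Precompute the syndromes of all 12 single errors of Mannheim weight one.
SYNDROMES = {gmod(v * H[pos]): (pos, v)
             for pos in range(3) for v in (1, -1, 1j, -1j)}

def decode_inner(r):
    """Correct a single weight-one error; return None (erasure) otherwise."""
    s = gmod(sum(ri * hi for ri, hi in zip(r, H)))
    if s == 0:
        return list(r)                    # error free
    if s in SYNDROMES:                    # single error: correct it
        pos, v = SYNDROMES[s]
        c = list(r)
        c[pos] = gmod(c[pos] - v)
        return c
    return None                           # two or three errors: declare erasure
```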
Decoding the outer code A ( 0 ) requires the code symbols a j , 0 for all positions where no erasure was declared. Note that the inner codeword in the j-th column is the sum of two codewords of the subcodes, i.e., b j = b j ( 0 ) + b j ( 1 ) . The first digit of b j is the outer code symbol a j , 1 (cf. Figure 4), as the second row of G has a zero in the first position. Hence, this symbol can be used to determine the codeword b j ( 1 ) of the subcode B ( 1 ) . Subtracting b j ( 1 ) from b j results in b j ( 0 ) .
Now, we can decode the row consisting of the symbols a_{j,0}, j = 0, …, n_o − 1, which were obtained by re-encoding. We apply an erasure decoding to the Reed–Solomon code [31,32], which is based on the Forney algorithm, as explained for the outer RS code in Section 5.1. This method can correct up to n_o − k_o erasures.
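Erasure-only decoding of a Reed–Solomon code amounts to polynomial interpolation from the surviving symbols. The following simplified sketch works over an ordinary prime field GF(p) with evaluation points 0, …, n−1; the paper's codes live over G_p and use the Forney algorithm, so this Lagrange-based version only illustrates the principle and is not the authors' implementation:

```python
def rs_erasure_decode(received, p, k):
    """Rebuild an RS codeword (evaluations of a degree-<k polynomial at
    0..n-1 over GF(p)) from a list in which None marks an erasure."""
    known = [(x, y) for x, y in enumerate(received) if y is not None]
    if len(known) < k:
        raise ValueError("more than n - k erasures, decoding fails")
    pts = known[:k]                        # any k correct symbols suffice

    def evaluate(x):
        # Lagrange interpolation of the message polynomial at the point x
        total = 0
        for i, (xi, yi) in enumerate(pts):
            num, den = 1, 1
            for j, (xj, _) in enumerate(pts):
                if j != i:
                    num = num * (x - xj) % p
                    den = den * (xi - xj) % p
            total = (total + yi * num * pow(den, p - 2, p)) % p
        return total

    return [evaluate(x) for x in range(len(received))]
```

With p = 13, k = 2 and the codeword of f(x) = 2 + 5x, any two surviving symbols recover the full codeword, i.e., up to n − k = 5 erasures are corrected.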
The outer decoding determines all symbols a_{j,0} in the codeword of the outer code A^(0). With these symbols, we can calculate the inner codewords b_j^(0) for all columns with erasures. Furthermore, we can determine the inner codewords b_j^(1) = b_j − b_j^(0) ∈ B^(1) in the subcode. Finally, we can decode the resulting codewords in the subcode B^(1), which has a minimum distance d^(1) ≥ 7, and can correct all remaining errors.
We summarize the GC code parameters and the properties of the proposed decoding algorithm in Proposition 2.
Example 2.
Consider the code over G_109 with π = 10 + 3i, as given in the first row of Table 3. In this example, we focus on the decoding of the j-th inner codeword. Let us assume a_{j,1} = 2 - 4i and a_{j,0} = -1 + 3i as the information symbols of the inner codeword. The codewords encoded with the two individual rows of the generator matrix are b_j^(1) = (2 - 4i, -4 + i, -1) and b_j^(0) = (0, -1 + 3i, -2 + 2i). The inner codeword is now b_j = b_j^(0) + b_j^(1) = (2 - 4i, -2 - 6i, -3 + 2i).
We distinguish two cases for the decoding. First, consider the case where at most one error was introduced in the inner codeword. In this case, the inner codeword can be corrected and no errors remain for outer decoding. The first symbol of b_j is equal to the information symbol a_{j,1} = 2 - 4i. This symbol can be used to re-encode the codeword b_j^(1) = (2 - 4i, -4 + i, -1). Subtracting b_j^(1) from b_j gives b_j^(0) = (0, -1 + 3i, -2 + 2i), which has a_{j,0} = -1 + 3i as its second symbol. This symbol is used by the outer RS decoder to correct the symbols corresponding to the erasures.
In the second case, more than one error was introduced in the inner codeword. In this case, the inner decoder declares an erasure. Note that the RS decoder does not need any symbol value for the erasure positions. Hence, no re-encoding is required to obtain a_{j,0}. The value of a_{j,0} is determined by the outer RS decoder. If the RS decoder is successful, we obtain the correct information digit a_{j,0} = -1 + 3i. This value can be used to re-encode the codeword b_j^(0) = (0, -1 + 3i, -2 + 2i), which is subtracted from the received vector b_j + e, resulting in the vector b_j^(1) + e. The error vector is still the one introduced by the channel, with restricted values. Any error vector of Mannheim weight at most three can be corrected in this code, because the inner subcode B^(1) has minimum Mannheim distance d^(1) = 11.
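The re-encoding step of the first case can be sketched as follows; the generator rows are our reading of the first code in Table 3, the helper name is hypothetical, and gmod is the Gaussian modulo reduction repeated for self-containment:

```python
PI = complex(10, 3)
ROW1 = [1, complex(-1, -3), complex(3, 4)]   # subcode generator (first row of G)

def gmod(x, pi=PI):
    p = (pi * pi.conjugate()).real
    q = x * pi.conjugate() / p
    q = complex(round(q.real), round(q.imag))
    r = x - q * pi
    return complex(round(r.real), round(r.imag))

def recover_a0(b):
    """Error-free case: the first symbol of b equals a_{j,1} because the
    second row of G starts with 0. Re-encode b^(1) and subtract."""
    a1 = b[0]
    b1 = [gmod(a1 * g) for g in ROW1]        # re-encoded subcode codeword
    b0 = [gmod(x - y) for x, y in zip(b, b1)]
    return b0[1]                              # second symbol of b^(0) is a_{j,0}
```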
Proposition 2.
The generalized concatenated code with outer Reed–Solomon code A^(0)(n_o, k_o, d_o^(0)) and inner code B(3, 2, 5) over G_p with subcode B^(1)(3, 1, d^(1) ≥ 7) can correct
$$ t \le 2(n_o - k_o) + 1 \qquad (17) $$
errors of Mannheim weight one.
Proof. 
Let b ∈ B(3, 2, 5) be a transmitted codeword of the inner code and e a length-three error vector with up to three errors of Mannheim weight one. For any other codeword b' ∈ B(3, 2, 5), the Mannheim distance to the received sequence is lower bounded by
$$ d_M(\mathbf{b}', \mathbf{b} + \mathbf{e}) = wt_M(\mathbf{b}' - \mathbf{b} - \mathbf{e}) \ge d - wt_M(\mathbf{e}) \ge 2 . $$
Hence, any error pattern of Mannheim weight one can be corrected, and any error pattern of Mannheim weight two or three can be detected. For error patterns of weight greater than one, an erasure is declared. The outer Reed–Solomon code can correct up to n_o - k_o erasures [32], and each erasure requires at least two errors. Hence, 2(n_o - k_o) errors can be corrected in the erasure positions, and at least one additional error in any position. This results in (17) for the first level. If the first-level decoding is successful, the second level is decoded in the inner subcode B^(1)(3, 1, d^(1)) with d^(1) ≥ 7. Note that this subcode is able to correct any possible error pattern with up to three errors, thus no outer decoding is required in the second level. The decoding procedure only fails if the first level fails, i.e., if more than n_o - k_o erasures occur, which requires more than 2(n_o - k_o) + 1 errors. □
The maximum number of errors that can be corrected by this decoding procedure is 3(n_o - k_o) + k_o. For this, we assume that each of the n_o - k_o erasures results from three errors, and that each of the remaining k_o inner codewords contains exactly one error. On the other hand, this requires a very specific distribution of the errors. Nevertheless, the decoder is able to decode many error patterns with more than 2(n_o - k_o) + 1 errors. This is demonstrated in Section 6.4.
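The two bounds can be written out directly. For the length-270, rate-0.5 code of Table 5, the outer parameters n_o = 90 and k_o = 45 are our reading of the construction; with them, the guaranteed capability is 91 errors and the best case is 180:

```python
def t_guaranteed(n_o, k_o):
    # guaranteed capability from Proposition 2: t <= 2(n_o - k_o) + 1
    return 2 * (n_o - k_o) + 1

def t_best_case(n_o, k_o):
    # all n_o - k_o erasures caused by three errors each,
    # plus exactly one error in each of the remaining k_o columns
    return 3 * (n_o - k_o) + k_o
```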

6.3. GC Code Examples

The guaranteed error correction capability of the proposed code construction is t = 2(n_o - k_o) + 1, which, for code rates R ≤ 5/9, is higher than the error correction capability (n - k)/2 of MDS codes. We compare the proposed code construction with the product code construction from [23], as well as with MDS codes, with respect to the work factor for information-set decoding computed according to (2).
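The rate threshold R ≤ 5/9 can be checked numerically: for a GC code of length n = 3 n_o, the guaranteed capability 2(n_o - k_o) + 1 matches or exceeds the MDS capability ⌊(n - k)/2⌋ up to k_o ≈ 2 n_o/3, i.e., up to rate (n_o + k_o)/(3 n_o) ≈ 5/9. A quick check with n_o = 104 (n = 312, as in Table 4):

```python
def t_gc(n_o, k_o):
    return 2 * (n_o - k_o) + 1             # guaranteed GC capability, Eq. (17)

def t_mds(n, k):
    return (n - k) // 2                    # bounded minimum distance decoding

n_o = 104                                   # GC code length n = 3 * n_o = 312
best_ko = max(k_o for k_o in range(1, n_o)
              if t_gc(n_o, k_o) >= t_mds(3 * n_o, n_o + k_o))
rate = (n_o + best_ko) / (3 * n_o)          # largest rate where GC is not worse
```

Here best_ko evaluates to 70, giving a rate of about 0.558, close to 5/9 ≈ 0.556.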
Table 4 shows a comparison of the proposed GC codes with comparable MDS codes. We compare the codes with varying code rate R for a constant code length n = 312. For low code rates, a significant gain is achieved, which decreases for higher code rates. This effect is also shown in Figure 5, where the work factors for ISD of GC codes and MDS codes are plotted over the code rate R for different code lengths n.
In Table 5, we compare the proposed code construction with the product codes over Gaussian integers proposed in [23], since those codes are constructed for the same channel model. Note that those product codes are only applicable for low code rates, and have a higher work factor than MDS codes only for code rates R < 1/3. Hence, we compare rate-0.2 product codes with rate-0.5 GC codes of comparable lengths. While the error correction capability is significantly higher for the product codes, due to the lower code rate, their work factor is much lower.

6.4. Decoding beyond the Guaranteed Error Correction Capability

The guaranteed error correction capability of the proposed generalized concatenated codes is given in (17). Up to this bound, all possible error patterns can be corrected, but some error patterns with more errors are also correctable. In this section, we discuss the error correction capability for decoding beyond the guaranteed error correction capability.
Example 3.
Figure 6 shows the residual word error rate (WER) versus the channel error probability ϵ, with decoding beyond the guaranteed error correction capability. We compare the proposed decoding method with bounded distance decoding up to the guaranteed error correction capability for the GC code of length n = 270 and rate R = 0.5 . As can be seen, the proposed decoding method achieves a significant gain.
On the other hand, decoding beyond the guaranteed error correction capability leads to a residual error rate. Note that this is the case for many decoders that were proposed for McEliece systems [16,17,18,19]. While in some cases this may be undesirable, it allows for an increased number of errors, and therefore an increased work factor for information-set decoding.
Example 4.
As an example, we compare the work factor for information-set decoding with the guaranteed error correction capability against that with an expected number of errors such that the residual error rate is at most 10^-5. Consider the code of length n = 270 from Example 3. The proposed decoding allows for an error fraction of 35%, which corresponds to about 95 errors. According to (2), this results in a work factor of 2^133. The work factor for the guaranteed error correction capability is only 2^125, as shown in Table 5. Note that the work factor increases if a higher residual error rate is allowed. For instance, the work factor is increased to about 2^144 for a residual error rate of 10^-4.
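Assuming that the work factor (2) is the simple Prange-type estimate N_ISD = C(n, t) / C(n - k, t) — an assumption on our part, but one that reproduces the tabulated values within rounding — the comparison can be sketched as follows:

```python
import math

def log2_work_factor(n, k, t):
    """Prange-style information-set-decoding estimate (assumed form of (2)):
    log2 of binomial(n, t) / binomial(n - k, t)."""
    return math.log2(math.comb(n, t)) - math.log2(math.comb(n - k, t))

# n = 270, k = 135: t = 91 (guaranteed) vs. t = 95 (residual WER <= 1e-5)
```

Under this assumption, log2_work_factor(270, 135, 91) is roughly 125 and log2_work_factor(270, 135, 95) is roughly 134, in line with the 2^125 and 2^133 quoted above.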

6.5. Adaptation to Eisenstein Integers

As with the product code construction over Eisenstein integers, which was adapted from the product code construction over Gaussian integers proposed in [23], the generalized concatenated code construction can also be applied to codes over Eisenstein integers. While the restrictions on the primes are different, using the same field size leads to the same code parameters, and therefore the same error correction capability. Hence, for the McEliece system, this would result in the same work factor for attacks based on information-set decoding. However, for the Niederreiter system, the increased number of different error values leads to an increased message length. The adaptation of the GC code construction to Eisenstein integers is straightforward given the partitioning of the inner codes. Table 6 shows some possible inner codes over Eisenstein integer fields, which were found by computer search. For primes less than 223, no codes with d ≥ 5 were found.
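The underlying arithmetic uses ω = e^(2πi/3), i.e., ω² = −1 − ω, and the field size is the norm p = a² − ab + b² of the modulus π = a + bω. A small sketch with our own helper names, checking the moduli of Table 6:

```python
def eisen_mul(x, y):
    """Multiply Eisenstein integers given as pairs (a, b) = a + b*w,
    using the relation w^2 = -1 - w."""
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c - b * d)

def eisen_norm(x):
    # field size p is the norm of the modulus pi
    a, b = x
    return a * a - a * b + b * b

# The six units +-1, +-w, +-(1 + w) are the error values of hexagonal weight one.
UNITS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]
```

The six units explain the "six different error values" used below for the Niederreiter message length, compared to the four units ±1, ±i of the Gaussian integers.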
Example 5.
For a comparison of the message length, we consider codes over fields of size p = 229, because this field size allows for inner codes over Gaussian as well as Eisenstein integers. Using the outer RS code C_o(80, 1, 80) of rate R = 1/80 leads to GC codes of length n = 3 n_o = 240 and rate R ≈ 0.34. Those codes can correct at least t = 2 · (n_o - k_o) + 1 = 159 errors of Mannheim weight one or hexagonal weight one, respectively. The number of bits that can be mapped to the error vector for the Gaussian integer code is
$$ t \cdot \log_2(4) + \left\lfloor \log_2 \binom{n}{t} \right\rfloor \approx 535 . $$
These bits are mapped to the error positions and to the error values. For the code over Eisenstein integers, the error values can carry ⌊t · log_2(6)⌋ bits of information. Hence, the overall number of bits that can be mapped to the error vector is 628, which is about 17% higher than for Gaussian integers. To exploit the increased message length, the error values cannot be mapped independently, but must be mapped as a vector of length t, where each component can take six different values.
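A quick numerical check of these message lengths; math.comb gives the exact binomial coefficient, and the values reproduce the 535 and 628 bits stated above:

```python
import math

n, t = 240, 159
position_bits = math.floor(math.log2(math.comb(n, t)))      # choice of the t error positions
gauss_bits = t * 2 + position_bits                          # 4 weight-one values: 2 bits each
eisen_bits = math.floor(t * math.log2(6)) + position_bits   # 6 values, mapped jointly
```

Here gauss_bits evaluates to 535 and eisen_bits to 628, an increase of about 17%.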

7. Conclusions

In this work, we have proposed a code construction based on generalized concatenated codes over Gaussian and Eisenstein integers for use in code-based cryptosystems. These GC codes can be decoded with a simple decoding method that requires only table look-ups for the inner codes and erasure decoding of the outer Reed–Solomon codes. The proposed construction is a generalization of the ordinary concatenated codes proposed in [23]. The GC codes enable higher code rates. While the number of correctable errors is lower than with the concatenated codes, the work factor for information-set decoding (ISD) is increased with GC codes. For rates R ≤ 5/9, the generalized concatenated codes can correct more errors than MDS codes. Very high work factors are achievable with short codes.
Codes over Eisenstein integers are advantageous for the Niederreiter system due to the increased message length. An investigation of the channel capacity of the weight-one error channel was performed. Capacity-achieving codes over Eisenstein integers can correct more than n - k errors, leading to increased security against information-set decoding attacks.
While we have adapted the GC code construction to Eisenstein integers, the syndrome decoding for the corresponding Niederreiter system is still an open issue. An investigation of suitable decoding methods would be an interesting topic for further research.
The value of the proposed GC code construction can be seen when compared to the classic McEliece key encapsulation mechanism (KEM), which is among the finalists of the NIST standardization [3]. For example, the security against ISD attacks for the parameter set McEliece 348864 is N_ISD = 2^143 (according to (2)), and the public-key size is about 261 kByte. A GC code over p = 109, of length n = 159 and dimension k = 54, results in the same work factor for ISD attacks, but its public-key size is only 7.3 kByte, which is about 3% of the key size of the classic McEliece system. For the longer code McEliece 6688128, the work factor is about N_ISD = 2^262, and the public-key size is approximately 1045 kByte. A comparable GC code over p = 109 has length n = 291, dimension k = 98, work factor N_ISD = 2^264, and a public-key size of only 24.4 kByte. However, the classic McEliece KEM uses Goppa codes, as originally proposed by McEliece in 1978. Goppa codes are still considered to be secure, as no structural attacks on these codes have been found. On the other hand, the proposed GC code construction has no complete security analysis against structural attacks, such as the attacks proposed in [21,26,33]. This security analysis is subject to future work.

Author Contributions

The research for this article was exclusively undertaken by J.-P.T. and J.F. Conceptualization and investigation, J.-P.T. and J.F.; writing—review and editing, J.-P.T. and J.F.; writing—original draft preparation, J.-P.T.; supervision, project administration, and funding acquisition J.F. All authors have read and agreed to the published version of the manuscript.

Funding

The German Federal Ministry of Research and Education (BMBF) supported the research for this article (16ES1045) as part of the PENTA project 17013 XSR-FMC.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in the article.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Shor, P.W. Algorithms for quantum computation: Discrete logarithms and factoring. In Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Washington, DC, USA, 20–22 November 1994; pp. 124–134. [Google Scholar] [CrossRef]
  2. Proos, J.; Zalka, C. Shor’s Discrete Logarithm Quantum Algorithm for Elliptic Curves. Quantum Inf. Comput. 2003, 3, 317–344. [Google Scholar] [CrossRef]
  3. Alagic, G.; Alperin-Sheriff, J.; Apon, D.; Cooper, D.; Dang, Q.; Kelsey, J.; Liu, Y.K.; Miller, C.; Moody, D.; Peralta, R.; et al. Status Report on the Second Round of the NIST Post-Quantum Cryptography Standardization Process; Nistir 8309; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2020. [Google Scholar]
  4. Berlekamp, E.; McEliece, R.; van Tilborg, H. On the inherent intractability of certain coding problems. IEEE Trans. Inf. Theory 1978, 24, 384–386. [Google Scholar] [CrossRef]
  5. McEliece, R. A public-key cryptosystem based on algebraic coding theory. DSN Prog. Rep. 1978, 42–44, 114–116. [Google Scholar]
  6. Niederreiter, H. Knapsack-type cryptosystems and algebraic coding theory. Probl. Control Inf. Theory 1986, 15, 159–166. [Google Scholar]
  7. Prange, E. The use of information sets in decoding cyclic codes. IRE Trans. Inf. Theory 1962, 8, 5–9. [Google Scholar] [CrossRef]
  8. Lee, P.J.; Brickell, E.F. An Observation on the Security of McEliece’s Public-Key Cryptosystem. In Advances in Cryptology—EUROCRYPT ’88, Proceedings of the Workshop on the Theory and Application of Cryptographic Techniques, Davos, Switzerland, 25–27 May 1988; Barstow, D., Brauer, W., Brinch Hansen, P., Gries, D., Luckham, D., Moler, C., Pnueli, A., Seegmüller, G., Stoer, J., Wirth, N., Eds.; Springer: Berlin/Heidelberg, Germany, 1988; pp. 275–280. [Google Scholar]
  9. Stern, J. A method for finding codewords of small weight. In Coding Theory and Applications; Cohen, G., Wolfmann, J., Eds.; Springer: Berlin/Heidelberg, Germany, 1989; pp. 106–113. [Google Scholar]
  10. Bernstein, D.J.; Lange, T.; Peters, C. Attacking and Defending the McEliece Cryptosystem. In Post-Quantum Cryptography; Buchmann, J., Ding, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 31–46. [Google Scholar]
  11. May, A.; Meurer, A.; Thomae, E. Decoding Random Linear Codes in Õ(2^0.054n). In Advances in Cryptology—ASIACRYPT 2011, Proceedings of the 17th International Conference on the Theory and Application of Cryptology and Information Security, Seoul, South Korea, 4–8 December 2011; Lee, D.H., Wang, X., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 107–124. [Google Scholar]
  12. Courtois, N.T.; Finiasz, M.; Sendrier, N. How to Achieve a McEliece-Based Digital Signature Scheme. In Advances in Cryptology—ASIACRYPT 2001, Proceedings of the 7th International Conference on the Theory and Application of Cryptology and Information Security, Gold Coast, Australia, 9–13 December 2001; Boyd, C., Ed.; Springer: Berlin/Heidelberg, Germany, 2001; pp. 157–174. [Google Scholar]
  13. Wieschebrink, C. Two NP-complete Problems in Coding Theory with an Application in Code Based Cryptography. In Proceedings of the IEEE International Symposium on Information Theory, Seattle, WA, USA, 9–14 July 2006; pp. 1733–1737. [Google Scholar] [CrossRef]
  14. Berger, T.P.; Cayrel, P.L.; Gaborit, P.; Otmani, A. Reducing Key Length of the McEliece Cryptosystem. In Progress in Cryptology—AFRICACRYPT, Proceedings of the Second International Conference on Cryptology in Africa, Gammarth, Tunisia, 21–25 June 2009; Preneel, B., Ed.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 77–97. [Google Scholar]
  15. Le Van, T.; Hoan, P.K. McEliece cryptosystem based identification and signature scheme using chained BCH codes. In Proceedings of the International Conference on Communications, Management and Telecommunications (ComManTel), DaNang, Vietnam, 28–30 December 2015; pp. 122–127. [Google Scholar] [CrossRef]
  16. Monico, C.; Rosenthal, J.; Shokrollahi, A. Using low density parity check codes in the McEliece cryptosystem. In Proceedings of the 2000 IEEE International Symposium on Information Theory, Sorrento, Italy, 25–30 June 2000; p. 215. [Google Scholar] [CrossRef]
  17. Shooshtari, M.K.; Ahmadian, M.; Payandeh, A. Improving the security of McEliece-like public key cryptosystem based on LDPC codes. In Proceedings of the 11th International Conference on Advanced Communication Technology, Gangwon-Do, Korea, 15–18 February 2009; Volume 2, pp. 1050–1053. [Google Scholar]
  18. Baldi, M.; Bianchi, M.; Maturo, N.; Chiaraluce, F. Improving the efficiency of the LDPC code-based McEliece cryptosystem through irregular codes. In Proceedings of the IEEE Symposium on Computers and Communications (ISCC), Split, Croatia, 7–10 July 2013; pp. 000197–000202. [Google Scholar] [CrossRef]
  19. Moufek, H.; Guenda, K.; Gulliver, T.A. A New Variant of the McEliece Cryptosystem Based on QC-LDPC and QC-MDPC Codes. IEEE Commun. Lett. 2017, 21, 714–717. [Google Scholar] [CrossRef]
  20. Hooshmand, R.; Shooshtari, M.K.; Eghlidos, T.; Aref, M.R. Reducing the key length of McEliece cryptosystem using polar codes. In Proceedings of the 11th International ISC Conference on Information Security and Cryptology, Tehran, Iran, 3–4 September 2014; pp. 104–108. [Google Scholar] [CrossRef]
  21. Sidelnikov, V.M.; Shestakov, S.O. On insecurity of cryptosystems based on generalized Reed-Solomon codes. Discret. Math. Appl. 1992, 2, 439–444. [Google Scholar] [CrossRef]
  22. Sendrier, N. On the Concatenated Structure of a Linear Code. Appl. Algebra Eng. Commun. Comput. 1998, 9, 221–242. [Google Scholar] [CrossRef]
  23. Freudenberger, J.; Thiers, J.P. A new class of q-ary codes for the McEliece cryptosystem. Cryptography 2021, 5, 11. [Google Scholar] [CrossRef]
  24. Huber, K. Codes over Gaussian integers. IEEE Trans. Inf. Theory 1994, 40, 207–216. [Google Scholar] [CrossRef]
  25. Freudenberger, J.; Ghaboussi, F.; Shavgulidze, S. New Coding Techniques for Codes over Gaussian Integers. IEEE Trans. Commun. 2013, 61, 3114–3124. [Google Scholar] [CrossRef]
  26. Puchinger, S.; Müelich, S.; Ishak, K.; Bossert, M. Code-Based Cryptosystems Using Generalized Concatenated Codes. In Applications of Computer Algebra; Kotsireas, I.S., Martínez-Moro, E., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 397–423. [Google Scholar]
  27. Huber, K. Codes over Eisenstein-Jacobi Integers. In Contemporary Mathematics; American Mathematical Society: Providence, RI, USA, 1994; Volume 168, pp. 165–179. [Google Scholar]
  28. Conway, J.; Sloane, N. Sphere Packings, Lattices and Groups, 3rd ed.; Springer: New York, NY, USA; Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  29. Rohweder, D.; Freudenberger, J.; Shavgulidze, S. Low-Density Parity-Check Codes over Finite Gaussian Integer Fields. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 481–485. [Google Scholar] [CrossRef]
  30. Gallager, R.G. Information Theory and Reliable Communication; John Wiley & Sons, Inc.: New York, NY, USA, 1968. [Google Scholar]
  31. Neubauer, A.; Freudenberger, J.; Kühn, V. Coding Theory: Algorithms, Architectures and Applications; John Wiley & Sons: New York, NY, USA, 2007. [Google Scholar]
  32. Bossert, M. Channel Coding for Telecommunications; Wiley: New York, NY, USA, 1999. [Google Scholar]
  33. Fabsic, T.; Hromada, V.; Stankovski, P.; Zajac, P.; Guo, Q.; Johansson, T. A reaction attack on the QC-LDPC McEliece cryptosystem. In Proceedings of the Post-Quantum Cryptography—8th International Workshop (PQCrypto), Utrecht, The Netherlands, 26–28 June 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 51–68. [Google Scholar] [CrossRef]
Figure 1. Channel model of the weight-one error channel.
Figure 2. Capacity of the weight-one error channel for p = 43 and p = 97.
Figure 3. Encoding of GC codes.
Figure 4. Inner encoding of GCC.
Figure 5. Work factor for information-set decoding over code rate.
Figure 6. WER over channel error probability in comparison to bounded minimum distance decoding.
Table 1. Parameters of codes with work factors between 2^89 and 2^124.

            Codes over E_p           |            MDS codes
  p    n    k    t     N_ISD         |   p    n    k    t     N_ISD
  139  276  55   167   2^89          |   277  276  55   111   2^47
  157  312  63   187   2^101         |   313  312  63   124   2^53
  193  384  77   231   2^124         |   389  384  77   153   2^65
Table 2. Comparison of Eisenstein integers with Gaussian integers.

  p    n    k    t     Message length [bytes]
                       G_p    E_p
  137  272  55   163   73     -
  139  276  55   167   -      86
  157  312  63   187   84     97
  193  384  77   231   103    120
Table 3. Examples for inner codes.

  p    π        G (rows)                              d   d^(1)
  109  10 + 3i  (1, -1 - 3i, 3 + 4i), (0, 1, 4 - 3i)  5   11
  157  11 + 6i  (1, 2 - 2i, 4 - 3i), (0, 1, 2 + 5i)   5   12
  197  14 + i   (1, 4 + 3i, 1 - 5i), (0, 1, -6i)      5   13
Table 4. Comparison of proposed GC codes with MDS codes.

  Reference  p    n    R     t    N_ISD
  proposed   157  312  0.34  207  2^283
  proposed   157  312  0.45  127  2^182
  proposed   157  312  0.55  95   2^104
  MDS        313  312  0.34  103  2^79
  MDS        313  312  0.45  86   2^92
  MDS        313  312  0.55  75   2^100
Table 5. Comparison of proposed GC codes with product codes from [23].

  Reference  p    n    R    t    N_ISD
  proposed   109  270  0.5  91   2^125
  proposed   157  312  0.5  105  2^144
  proposed   197  384  0.5  129  2^177
  [23]       137  272  0.2  163  2^88
  [23]       157  312  0.2  187  2^101
  [23]       193  384  0.2  231  2^124
Table 6. Examples for inner codes over Eisenstein integer fields.

  p    π         G (rows)                               d   d^(1)
  223  11 + 17ω  (1, 1, 1 - 7ω), (0, 1, 4 - 7ω)         5   8
  229  12 + 17ω  (1, 2 - 4ω, 2 - 5ω), (0, 1, 6 + 2ω)    5   10
  271  10 + 19ω  (1, 4 + 5ω, 6), (0, 1, 3 + 2ω)         5   11
  277  12 + 19ω  (1, 2 + 4ω, 2 - 6ω), (0, 1, 6 + ω)     5   10
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.