
Projection Decoding of Some Binary Optimal Linear Codes of Lengths 36 and 40

1 Department of Mathematics, Sogang University, Seoul 04107, Korea
2 Institute of Mathematics, University of the Philippines Diliman, Quezon City 1101, Philippines
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 17 November 2019 / Revised: 13 December 2019 / Accepted: 16 December 2019 / Published: 19 December 2019
(This article belongs to the Special Issue Algebra and Its Applications)

Abstract

Practically good error-correcting codes should have good parameters and efficient decoding algorithms. Some algebraically defined good codes, such as cyclic codes, Reed–Solomon codes, and Reed–Muller codes, have nice decoding algorithms. However, many optimal linear codes have no efficient decoding algorithm other than general syndrome decoding, which requires a lot of memory. A natural question, therefore, is which optimal linear codes admit an efficient decoding. We show that two binary optimal [36, 19, 8] linear codes and two binary optimal [40, 22, 8] codes have an efficient decoding algorithm; previously, no efficient decoding algorithm was known for binary optimal [36, 19, 8] and [40, 22, 8] codes. We project them onto the much shorter linear [9, 5, 4] and [10, 6, 4] codes over GF(4), respectively. This decoding algorithm, called projection decoding, can correct errors of weight up to 3. The [36, 19, 8] and [40, 22, 8] codes have more codewords than any optimal self-dual [36, 18, 8] and [40, 20, 8] codes, respectively, of the same length and minimum weight, which makes them more practical.

1. Introduction

Coding theory, or the theory of error-correcting codes, requires a lot of mathematical concepts but has wide applications in data storage, satellite communication, smartphones, and high-definition TV. Well known classes of (linear) codes include Reed–Solomon codes, Reed–Muller codes, turbo codes, LDPC codes, polar codes, network codes, quantum codes, and DNA codes. One of the most important reasons why these codes are useful is that they have fast and efficient decoding algorithms.
A linear [n, k] code C over GF(q) or F_q is a k-dimensional subspace of F_q^n. The dual of C is C^⊥ = { x ∈ F_q^n : x · c = 0 for any c ∈ C }, where the dot product is either the usual inner product or a Hermitian inner product. A linear code C is called self-dual if C = C^⊥. If q = 2, then C is called binary. A binary self-dual code is called doubly-even if all codewords have weight ≡ 0 (mod 4) and singly-even if some codeword has weight ≡ 2 (mod 4). If q = 4, let GF(4) = {0, 1, ω, ω̄}, where ω̄ = ω² = ω + 1. It is more natural to consider the Hermitian inner product ⟨·,·⟩ on GF(4)^n: for x = (x₁, x₂, …, x_n) and y = (y₁, y₂, …, y_n) in GF(4)^n, ⟨x, y⟩ = Σ_{i=1}^{n} x_i ȳ_i, where ā = a².
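For concreteness, the GF(4) arithmetic above can be sketched in a few lines of Python; the integer encoding 0, 1, 2, 3 for 0, 1, ω, ω̄ is our illustrative choice, not notation from the paper.

```python
# GF(4) = {0, 1, ω, ω̄} with ω̄ = ω² = ω + 1, encoded here as the integers
# 0, 1, 2 (= ω), 3 (= ω̄).  Addition in characteristic 2 is bitwise XOR.

MUL = [[0, 0, 0, 0],   # multiplication table, rows/cols indexed 0, 1, ω, ω̄
       [0, 1, 2, 3],
       [0, 2, 3, 1],   # ω·ω = ω̄,  ω·ω̄ = 1
       [0, 3, 1, 2]]   # ω̄·ω̄ = ω

def gf4_add(a, b):
    return a ^ b

def conj(a):
    """Conjugation ā = a²: fixes 0 and 1, swaps ω and ω̄."""
    return MUL[a][a]

def hermitian(x, y):
    """Hermitian inner product ⟨x, y⟩ = Σ x_i · conj(y_i) over GF(4)."""
    s = 0
    for xi, yi in zip(x, y):
        s = gf4_add(s, MUL[xi][conj(yi)])
    return s

print(gf4_add(1, 2))              # 3, i.e. 1 + ω = ω̄
print(conj(2))                    # 3, i.e. ω̄ = ω²
print(hermitian([2, 1], [2, 1]))  # 0, since ω·ω̄ + 1·1 = 1 + 1 = 0
```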
Researchers have tried to find efficient decoding algorithms for some optimal linear codes. Some well known methods to decode linear codes are permutation decoding [1,2,3] for codes with large automorphism groups, decoding based on error locator polynomials for cyclic or AG codes [3], the message passing algorithm for LDPC codes [4,5], and successive cancellation decoding for polar codes [6].
Projection decoding was used for self-dual codes. For example, Pless [7] showed an efficient decoding of the [24, 12, 8] extended binary self-dual Golay code by projecting it onto the [6, 3, 4] hexacode over GF(4). Later, Gaborit, Kim, and Pless [8,9] showed that a similar projection can be done for some singly-even and doubly-even self-dual binary [32, 16, 8] codes, including the binary Reed–Muller code. Recently, Kim and Lee [10] gave two algorithms for the projection decoding of a binary extremal self-dual code of length 40. The idea was to use the projection of the said code onto a Hermitian self-dual code over GF(4). One of the two algorithms, called syndrome decoding, uses the syndrome decoding of the shorter code over GF(4). Most examples of projection decoding have been extremal self-dual codes. Therefore it is natural to ask whether such a projection decoding can be done for a non-self-dual code, which has larger dimension than a self-dual code of the same length and minimum distance. In this paper, we show that this is possible.
The purpose of this paper is to show how to decode efficiently a binary optimal [36, 19, 8] linear code and a binary optimal [40, 22, 8] code by projecting them onto the much shorter linear [9, 5, 4] and [10, 6, 4] codes over GF(4), respectively. This decoding algorithm, which we call projection decoding, can correct errors of weight up to ⌊(8 − 1)/2⌋ = 3. The algorithm presented in this paper is a generalization of the syndrome projection decoding in [10], since it can be used to decode any linear code with a projection onto an additive or a linear code over GF(4) for errors of weight at most 3. Ours is the first efficient decoding of these two optimal [36, 19, 8] and [40, 22, 8] codes, whose parameters are better than those of any optimal self-dual [36, 18, 8] and [40, 20, 8] codes.
This paper is organized as follows. In Section 2, we introduce a way to project a binary code of length 4n onto an additive code of length n over GF(4). We also mention some properties of this projection, as given in [11,12]. In Section 3, we show explicitly how to construct [36, 19, 8] and [40, 22, 8] binary optimal codes with projections onto additive codes over GF(4). Using this projection, we give a handy decoding algorithm for the said binary optimal codes in Section 4. This decoding algorithm exploits the properties of codes with projection onto an additive code over GF(4) to locate the errors in the noisy codewords, assuming no more than 3 errors occurred. The algorithm has low complexity and can be carried out by hand. In Section 5, we conclude by showing worked examples of how the decoding is carried out.

2. Projection of Binary Linear Codes

Let v ∈ GF(2)^(4m). We associate to v a 4 × m rectangular array of zeros and ones, whose columns are the consecutive blocks of four bits of v. If we label the rows of the array with the elements of GF(4), then the inner product of a column of the array with the row labels is an element of GF(4). Thus, we obtain a corresponding element of GF(4)^m, which we call the projection of v, denoted Proj(v). We show this by the following example. Let v = (1000 0100 0010 0001 0110 1110 0101 0111 1111) ∈ GF(2)^36. Then we write v column-wise as follows.
        1  2  3  4  5  6  7  8  9
  0     1  0  0  0  0  1  0  0  1
  1     0  1  0  0  1  1  1  1  1
  ω     0  0  1  0  1  1  0  1  1
  ω̄     0  0  0  1  0  0  1  1  1

  Proj(v) = (0, 1, ω, ω̄, ω̄, ω̄, ω, 0, 0)
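The projection just illustrated can be computed mechanically. Below is a small Python sketch; the integer encoding 0, 1, 2, 3 for 0, 1, ω, ω̄ is our choice for illustration.

```python
# Projection of a binary vector of length 4m onto GF(4)^m.  GF(4) elements
# are encoded as 0, 1, 2 (= ω), 3 (= ω̄); addition in GF(4) is bitwise XOR.

MUL = [[0, 0, 0, 0],   # GF(4) multiplication table
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

ROW_LABELS = [0, 1, 2, 3]   # the rows of the array are labelled 0, 1, ω, ω̄

def proj(v):
    """Inner product of each 4-bit column of the array with the row labels."""
    out = []
    for j in range(0, len(v), 4):
        s = 0
        for label, bit in zip(ROW_LABELS, v[j:j + 4]):
            s ^= MUL[label][bit]   # GF(4) addition is XOR
        out.append(s)
    return out

v = [int(b) for b in "100001000010000101101110010101111111"]
print(proj(v))   # [0, 1, 2, 3, 3, 3, 2, 0, 0] = (0, 1, ω, ω̄, ω̄, ω̄, ω, 0, 0)
```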
We define the parity of the column to be even (or odd) if an even (or odd) number of ones exists in the column, and the parity of the first row is defined similarly.
We call C4 ⊆ GF(4)^n an additive code of length n over GF(4) if it is closed under addition. Every linear code over GF(4) is therefore additive, but not conversely.
Definition 1.
Let S be a subset of GF(2)^(4m) and C4 an additive code over GF(4) of length m. Then S is said to have projection O onto C4 if for any v ∈ S,
(i) 
Proj(v) ∈ C4.
(ii) 
the columns of v are of the same parity, i.e., the columns are either all even or all odd.
(iii) 
the parity of the first row of v is the same as the parity of the columns.
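Conditions (ii) and (iii) are easy to test mechanically. The following sketch checks them for a binary vector given as a flat 0/1 list of length 4m whose consecutive 4-bit blocks are the columns of the array; condition (i), Proj(v) ∈ C4, depends on the particular code C4 and is omitted here. The function names are ours.

```python
# Checker for conditions (ii) and (iii) of Definition 1 (a sketch).

def columns(v):
    """Split a flat binary vector into the 4-bit columns of its array."""
    return [v[j:j + 4] for j in range(0, len(v), 4)]

def satisfies_parity_conditions(v):
    cols = columns(v)
    col_parities = [sum(c) % 2 for c in cols]
    first_row_parity = sum(c[0] for c in cols) % 2
    all_same = len(set(col_parities)) == 1                    # condition (ii)
    return all_same and first_row_parity == col_parities[0]   # condition (iii)

# The vector v from the example in this section has columns of mixed parity:
v = [int(b) for b in "100001000010000101101110010101111111"]
print(satisfies_parity_conditions(v))  # False
```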
The main advantage of this projection is that we can decode the long binary code by decoding its projection, thus generally decreasing the complexity of the decoding process. Several authors exhibited this fact [7,8,10].
Another similar projection is defined in [12], called projection E.
Definition 2.
Using the same notation as in Definition 1, S is said to have projection E onto C4 if conditions (i) and (ii) are satisfied together with the additional condition:
(iii) 
the parity of the first row of v is always even.
Let C4 be an additive code of length m over GF(4). Consider the map ϕ : GF(4) → GF(2)^4 such that ϕ(0) = 0000, ϕ(1) = 0011, ϕ(ω) = 0101, and ϕ(ω̄) = 0110. Define
ϕ(C4) = { (ϕ(c₁), ϕ(c₂), …, ϕ(c_m)) : (c₁, c₂, …, c_m) ∈ C4 }.
Let d be the code consisting of all even sums of weight-4 vectors whose ones appear in the same column in the projection, together with one additional vector d1 = (1000 1000 ⋯ 1000) if m is odd and d2 = (1000 ⋯ 1000 0111) if m is even. In [12], the following constructions were given:
  • Construction O: ρO(C4) = ϕ(C4) + d,
  • Construction E: ρE(C4) = ϕ(C4) + d′,
where d′ is defined as d, except that the additional vector is d1 if m is even and d2 if m is odd.
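Under the same integer encoding of GF(4) used above (0, 1, 2, 3 for 0, 1, ω, ω̄), the two constructions can be sketched as follows. The helper names and the particular generating set chosen for the code d (adjacent column sums plus the extra vector) are our own illustrative assumptions.

```python
# A sketch of Constructions O and E.

PHI = {0: [0, 0, 0, 0], 1: [0, 0, 1, 1], 2: [0, 1, 0, 1], 3: [0, 1, 1, 0]}

def phi(word4):
    """ϕ applied symbol-wise: a GF(4) word of length m -> binary word of length 4m."""
    return [bit for sym in word4 for bit in PHI[sym]]

def xor_vec(u, v):
    return [a ^ b for a, b in zip(u, v)]

def column(i, m):
    """Binary vector of length 4m with ones exactly in column i of the array."""
    v = [0] * (4 * m)
    v[4 * i:4 * i + 4] = [1, 1, 1, 1]
    return v

def d_generators(m, construction="O"):
    """One choice of generators for d (Construction O) or d' (Construction E):
    the m-1 sums of adjacent all-ones columns, plus the extra vector."""
    gens = [xor_vec(column(i, m), column(i + 1, m)) for i in range(m - 1)]
    d1 = [1, 0, 0, 0] * m
    d2 = [1, 0, 0, 0] * (m - 1) + [0, 1, 1, 1]
    if construction == "O":
        gens.append(d1 if m % 2 == 1 else d2)
    else:
        gens.append(d1 if m % 2 == 0 else d2)
    return gens

# ρO(C4) is then generated by the images phi(g) of binary generators g of C4
# together with d_generators(m, "O"); for an additive (m, 2^r) code this
# gives m + r binary generators, matching the dimension in Theorem 1 below.
```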
The following result was also given in [12].
Theorem 1.
Let C4 be an additive (m, 2^r) code over GF(4). Then
1. 
ρO(C4) and ρE(C4) are binary linear [4m, m + r] codes with projection O and projection E, respectively, onto C4.
2. 
any binary linear code with projection O or E onto C 4 can be constructed in this way.

3. Construction of Binary Optimal [36, 19, 8] and [40, 22, 8] Codes

In this section, we apply the constructions given in the previous section to construct binary optimal codes of lengths 36 and 40. We were able to obtain two inequivalent codes for each length.
Let C4^9 be a (9, 2^10) additive code over GF(4) with the following generator matrix
G(C4^9) =
  1 0 0 0 0 ω̄ 1 1 1
  ω 0 0 0 0 1 ω ω ω
  0 1 0 0 0 1 ω ω̄ 0
  0 ω 0 0 0 ω ω̄ 1 0
  0 0 1 0 0 0 1 ω ω̄
  0 0 ω 0 0 0 ω ω̄ 1
  0 0 0 1 0 ω̄ ω̄ 0 1
  0 0 0 ω 0 1 1 0 ω
  0 0 0 0 1 1 ω 1 1
  0 0 0 0 ω ω ω̄ ω ω
In fact, the odd-indexed rows of G(C4^9) generate a [9, 5, 4] linear code over GF(4), which can be found in MAGMA under the name BKLC(GF(4), 9, 5). The code C4^9 has weight distribution A4 = 51, A5 = 135, A6 = 210, A7 = 318, A8 = 234, A9 = 75.
Let C4^10 be the (10, 2^12) additive code over GF(4) generated by the following matrix
G(C4^10) =
  1 0 0 0 0 0 ω 0 ω ω̄
  ω 0 0 0 0 0 ω̄ 0 ω̄ 1
  0 1 0 0 0 0 ω̄ 1 1 1
  0 ω 0 0 0 0 1 ω ω ω
  0 0 1 0 0 0 1 ω ω̄ 0
  0 0 ω 0 0 0 ω ω̄ 1 0
  0 0 0 1 0 0 0 1 ω ω̄
  0 0 0 ω 0 0 0 ω ω̄ 1
  0 0 0 0 1 0 ω̄ ω̄ 0 1
  0 0 0 0 ω 0 1 1 0 ω
  0 0 0 0 0 1 1 ω 1 1
  0 0 0 0 0 ω ω ω̄ ω ω
The minimum distance of both of these codes is 4.
The odd-indexed rows of G(C4^10) generate a [10, 6, 4] linear code over GF(4), which can be found in the computational algebra system MAGMA under the name BKLC(GF(4), 10, 6) (best known linear code). The code C4^10 has weight distribution A4 = 87, A5 = 258, A6 = 555, A7 = 1020, A8 = 1200, A9 = 738, A10 = 237.
Denote by CO36 and CO40 the binary linear codes obtained by construction O from the additive codes C4^9 and C4^10 over GF(4), respectively. That is, CO36 = ρO(C4^9) and CO40 = ρO(C4^10). Their generator matrices are given below. The two constructed codes are inequivalent, and they have projection O onto C4^9 and C4^10, respectively.
G(CO36) =
  100000010001000100010010010001110010
  010000010001000100010100011101001110
  001000010001000100010001000100101000
  000100010001000100010111001000011011
  000010010000000000000110001101011111
  000001010000000000000101011000110000
  000000110000000000000011010101100000
  000000001001000000000000011000111010
  000000000101000000000000010101100011
  000000000011000000000000001101010110
  000000000000100100000101010100001001
  000000000000010100000011001100000101
  000000000000001100000110011000000011
  000000000000000010010110001101101001
  000000000000000001010101011001010101
  000000000000000000110011010100110011
  000000000000000000001111000000001111
  000000000000000000000000111100001111
  000000000000000000000000000011111111
G(CO40) =
  1000000100010001000100010111001000010100
  0100000100010001000100010010001001001101
  0010000100010001000100010001001001111000
  0001000100010001000100010100001000101110
  0000100100000000000000000101011001101001
  0000010100000000000000000011010101010101
  0000001100000000000000000110001100110011
  0000000010010000000000000110001101011111
  0000000001010000000000000101011000110000
  0000000000110000000000000011010101100000
  0000000000001001000000000000011000111010
  0000000000000101000000000000010101100011
  0000000000000011000000000000001101010110
  0000000000000000100100000101010100001001
  0000000000000000010100000011001100000101
  0000000000000000001100000110011000000011
  0000000000000000000010010110001101101001
  0000000000000000000001010101011001010101
  0000000000000000000000110011010100110011
  0000000000000000000000001111000000001111
  0000000000000000000000000000111100001111
  0000000000000000000000000000000011111111
Similarly, we apply construction E to the codes C4^9 and C4^10. We obtain two inequivalent codes CE36 = ρE(C4^9) and CE40 = ρE(C4^10) with projection E onto C4^9 and C4^10, respectively. Their generator matrices are as follows.
G(CE36) =
  100000010001000100010010010001111101
  010000010001000100010100011101000001
  001000010001000100010001000100100111
  000100010001000100010111001000010100
  000010010000000000000110001101011111
  000001010000000000000101011000110000
  000000110000000000000011010101100000
  000000001001000000000000011000111010
  000000000101000000000000010101100011
  000000000011000000000000001101010110
  000000000000100100000101010100001001
  000000000000010100000011001100000101
  000000000000001100000110011000000011
  000000000000000010010110001101101001
  000000000000000001010101011001010101
  000000000000000000110011010100110011
  000000000000000000001111000000001111
  000000000000000000000000111100001111
  000000000000000000000000000011111111
G(CE40) =
  1000000100010001000100010111001000011011
  0100000100010001000100010010001001000010
  0010000100010001000100010001001001110111
  0001000100010001000100010100001000100001
  0000100100000000000000000101011001101001
  0000010100000000000000000011010101010101
  0000001100000000000000000110001100110011
  0000000010010000000000000110001101011111
  0000000001010000000000000101011000110000
  0000000000110000000000000011010101100000
  0000000000001001000000000000011000111010
  0000000000000101000000000000010101100011
  0000000000000011000000000000001101010110
  0000000000000000100100000101010100001001
  0000000000000000010100000011001100000101
  0000000000000000001100000110011000000011
  0000000000000000000010010110001101101001
  0000000000000000000001010101011001010101
  0000000000000000000000110011010100110011
  0000000000000000000000001111000000001111
  0000000000000000000000000000111100001111
  0000000000000000000000000000000011111111
Proposition 1.
The codes CO36 and CE36 are inequivalent binary optimal [36, 19, 8] linear codes.
Proof. 
Since C4^9 is an additive (9, 2^10) code over GF(4), it follows from Theorem 1 that CO36 = ρO(C4^9) and CE36 = ρE(C4^9) are binary [36, 19] linear codes. It remains to show that the minimum distance is 8; it is known that codes with these parameters are optimal [13].
Note that any codeword c ∈ CO36 can be written as c = a + b + d, where a ∈ ϕ(C4^9), b is an even sum of weight-4 vectors whose ones appear in the same column in the projection, and d is either d1 = (1000 1000 ⋯ 1000) or the zero vector. Since the minimum distance of C4^9 is 4, the subcode ϕ(C4^9) has minimum distance 8; in particular, CO36 contains codewords of weight 8, so its minimum distance is at most 8. Now partition c into blocks of length 4 and call the ith block c_i (which corresponds to the ith column in the projection). For each i = 1, …, 9, write c_i = a_i + b_i + d_i. From the construction, it can be seen that c_i = 0000 if and only if a_i = b_i = d_i = 0000. If d is the zero vector, i.e., d_i = 0000 for all i, then every nonzero block satisfies wt(a_i + b_i) ∈ {2, 4}. In this case, either a ≠ 0, so that Proj(c) is a nonzero codeword of C4^9 of weight at least 4 and hence at least four blocks are nonzero, or a = 0 and c = b is an even sum of at least two weight-4 column vectors; in both cases wt(c) = Σ wt(a_i + b_i) ≥ 8. Suppose now that d_i = 1000 for all i. If b_i = 0000, then wt(c_i) = wt(a_i + d_i) ∈ {1, 3}; if b_i = 1111, then wt(c_i) = wt(a_i + b_i + d_i) ∈ {1, 3}. Hence every one of the nine blocks is nonzero, so wt(c) ≥ 9 > 8. Therefore the minimum distance of this code is 8.
The case of C E 36 is proved similarly.
Finally, the codes CO36 and CE36 have different weight distributions, given in Table 1 and Table 2, which were computed with MAGMA; therefore they are inequivalent. Furthermore, we checked with MAGMA that these two codes are not equivalent to the currently known optimal [36, 19, 8] code denoted by BKLC(GF(2), 36, 19) in the MAGMA database, because BKLC(GF(2), 36, 19) has A8 = 1033. Note that the codes CO36 and CE36 have automorphism groups of order 6, while BKLC(GF(2), 36, 19) has a trivial automorphism group. □
For length 40, we have the analogous result, stated as Proposition 2. We display the weight distributions of CO40 and CE40 in Table 3 and Table 4. Furthermore, we checked with MAGMA that these two codes are not equivalent to the currently known optimal [40, 22, 8] code denoted by BKLC(GF(2), 40, 22) in the MAGMA database, because BKLC(GF(2), 40, 22) has A8 = 1412.
Proposition 2.
The codes CO40 and CE40 are inequivalent binary optimal [40, 22, 8] linear codes.

4. Projection Decoding

Let v ∈ GF(2)^(4m) with its associated array as defined in Section 2. From this, we can partition the elements of GF(2)^4 according to their inner product with the row labels, as shown in Table 5.
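This partition of GF(2)^4 (Table 5) can be generated mechanically. In the sketch below, GF(4) is encoded as 0, 1, 2 (= ω), 3 (= ω̄), our choice for illustration.

```python
# Partition GF(2)^4 by the projection value of each 4-bit column, using the
# row labels 0, 1, ω(=2), ω̄(=3); GF(4) addition is bitwise XOR.
from itertools import product

MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

classes = {0: [], 1: [], 2: [], 3: []}
for bits in product((0, 1), repeat=4):
    p = 0
    for label, bit in zip((0, 1, 2, 3), bits):
        p ^= MUL[label][bit]
    classes[p].append("".join(map(str, bits)))

# Each projection value has exactly four preimages, reproducing Table 5.
print(classes[0])  # ['0000', '0111', '1000', '1111']
```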
From Definitions 1 and 2, we know that if v ∈ C, where C is a code with projection O onto C4, then the columns of the array of v, as well as its first row, have the same parity. Before we give the decoding algorithm, we first note the following observations regarding the error positions in the array of v.
Remark 1. 
1. 
An error in the first row of the array preserves Proj(v).
2. 
An error on a coordinate not in the first row definitely changes Proj(v).
3. 
Two errors in the same column definitely change Proj(v).
4. 
Three errors in the same column preserve Proj(v) if and only if the first entry is not changed.
We now present a simple decoding algorithm, which we call projection decoding, that can be used for any binary linear code with projection O or projection E onto some additive code C4 over GF(4). This decoding algorithm can correct errors of weight up to three. The idea is to use syndrome decoding in C4, which has a shorter length, to locate the errors in the binary codeword. The decoding is then completed by changing the columns of the array corresponding to the corrected entries in the projection, using Table 5.
Let C be a binary linear code of length 4n with projection O or projection E onto C4, an additive code of length n over GF(4). Let y be the received vector, and assume that y is a noisy codeword of C with at most 3 errors. Consider its associated array and let ȳ = Proj(y). Denote by y_i the ith column of the array and by ȳ_i the ith entry of ȳ. Let H be the parity-check matrix of C4 and denote the ith column of H by H_i. The projection decoding of C is carried out as follows.
Projection decoding algorithm
  • Check the parity of the columns and the first row of y .
  • Let y_o and y_e be the numbers of columns of the array with odd and even parity, respectively, and let p = min(y_o, y_e).
  • Depending on p, perform the following:
    (a)
    If p = 0, compute the syndrome s = ȳH^T.
    i.
    If s = 0 , then we say that no error occurred.
    ii.
    If s ≠ 0, the syndrome is a scalar multiple of one of the columns of H, say s = e_i H_i for some nonzero e_i ∈ GF(4). Hence, two errors occurred in the ith column of the array. Replace the ith coordinate of ȳ by ȳ_i + e_i ∈ GF(4) and replace the ith column of the array with the vector corresponding to ȳ_i + e_i (see Table 5) such that the parity conditions are satisfied.
    (b)
    If p = 1, let y_i be the column with the different parity. Compute the syndrome s = ȳH^T.
    i.
    If s = 0 , check the parity of the first row.
    • If the parity of the first row is the same as the parity of y i , then one error occurred on the first entry of y i . Change this entry to decode y.
    • If the parity of the first row is different from the parity of y i , then three errors occurred on the 2nd to 4th entries of y i . Change these entries to decode y.
    ii.
    If s = e_i H_i ≠ 0, then one or three errors occurred in the ith column of the array. Replace the ith coordinate of ȳ by ȳ_i + e_i ∈ GF(4) and replace the ith column with the vector corresponding to ȳ_i + e_i such that the parity conditions are satisfied.
    iii.
    If s = e_j H_j for some j ≠ i, then two errors occurred in column j and one error in the first entry of column i. Replace the jth coordinate of ȳ by ȳ_j + e_j ∈ GF(4) and replace the jth column with the vector corresponding to ȳ_j + e_j having the same parity as y_j and at distance 2 from y_j. Finally, replace the first entry of y_i and check that the parity conditions are satisfied.
    (c)
    If p = 2, let y_i and y_j be the columns with the different parity. Compute the syndrome s = ȳH^T. We know that s = e_i H_i + e_j H_j.
    i.
    If s = 0 , then the two errors occurred on the first coordinates of y i and y j . Replace both coordinates and check if the parity conditions are satisfied.
    ii.
    If e_i ≠ 0 and e_j = 0, then errors occurred in the ith column of the array and in the first coordinate of y_j. Replace the first coordinate of y_j. Then replace the ith coordinate of ȳ by ȳ_i + e_i ∈ GF(4) and replace the ith column with the vector corresponding to ȳ_i + e_i such that the parity conditions are satisfied.
    iii.
    If e_i, e_j ≠ 0, then errors occurred in the ith and jth columns of the array. Replace the ith coordinate of ȳ by ȳ_i + e_i ∈ GF(4) and the jth coordinate by ȳ_j + e_j ∈ GF(4), then replace the ith and jth columns with the corresponding vectors such that the parity conditions are satisfied.
    (d)
    If p = 3, let y_i, y_j, and y_k be the columns with the different parity. Then there is one error in each of these columns. Compute the syndrome s = ȳH^T. We know that s = e_i H_i + e_j H_j + e_k H_k.
    i.
    If s = 0 , then the three errors occurred on the first coordinates of y i , y j and y k . Replace the first coordinates in these columns and check if the parity conditions are satisfied.
    ii.
    If e_i ≠ 0 and e_j = e_k = 0, then one error occurred in the ith column of the array and one error each in the first coordinates of y_j and y_k. Replace the first coordinates of y_j and y_k. Then replace the ith coordinate of ȳ by ȳ_i + e_i ∈ GF(4) and replace the ith column with the vector corresponding to ȳ_i + e_i such that the parity conditions are satisfied.
    iii.
    If e_i ≠ 0, e_j ≠ 0, and e_k = 0, then one error each occurred in the ith and jth columns of the array and one in the first coordinate of y_k. Replace the first coordinate of y_k. Then replace the ith coordinate of ȳ by ȳ_i + e_i ∈ GF(4) and the jth coordinate by ȳ_j + e_j ∈ GF(4), then replace the ith and jth columns with the corresponding vectors such that the parity conditions are satisfied.
    iv.
    If e_i, e_j, e_k ≠ 0, then let ȳ′ = ȳ + e, where e is the vector of length n with e_i, e_j, and e_k in the ith, jth, and kth coordinates and zeros elsewhere. Replace one coordinate in each of the ith, jth, and kth columns of the array so that its projection becomes ȳ′ and the parity conditions are satisfied.
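The core syndrome step of the algorithm (used in cases p = 0 and p = 1) can be sketched as follows. GF(4) is encoded as integers 0, 1, 2 (= ω), 3 (= ω̄), our choice for illustration, and the data at the bottom are taken from Example 1 in Section 5.

```python
# Syndrome computation s = ȳ Hᵀ over GF(4) and location of a column match
# s = e·H_i (a sketch).  GF(4) addition is bitwise XOR under this encoding.

MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

def syndrome(ybar, H):
    """Compute s = ȳ Hᵀ over GF(4), H given as a list of rows."""
    s = []
    for row in H:
        acc = 0
        for a, h in zip(ybar, row):
            acc ^= MUL[a][h]
        s.append(acc)
    return s

def locate(s, H):
    """Return (i, e) with s = e·H_i for a nonzero scalar e, or None."""
    for i in range(len(H[0])):
        col = [row[i] for row in H]
        for e in (1, 2, 3):
            if [MUL[e][c] for c in col] == s:
                return i, e
    return None

# Data from Example 1 in Section 5: H(C4^9) and the projection ȳ of the
# received word, with ω written as 2 and ω̄ as 3.
H9 = [[1, 0, 0, 0, 1, 1, 3, 0, 1],
      [0, 1, 0, 0, 1, 0, 2, 3, 1],
      [0, 0, 1, 0, 2, 3, 1, 2, 1],
      [0, 0, 0, 1, 1, 3, 0, 1, 3]]
ybar = [2, 3, 1, 0, 2, 2, 2, 2, 1]

s = syndrome(ybar, H9)
print(s)              # [2, 2, 3, 2], i.e. s = (ω, ω, ω̄, ω)
print(locate(s, H9))  # (4, 2): s = ω·H_5 (column 5, 0-based index 4)
```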
Remark 2.
We remark that our algorithm for the codes CO36, CE36, CO40, and CE40 is complete and terminates, because we considered all possible numbers of mismatched column parities p = min(y_o, y_e) ≤ 3 and adjusted at most three errors according to the parity of the first row, so that Propositions 1 and 2 are satisfied.

5. Examples

In this section, we provide examples to illustrate how the given decoding algorithm works. Although these are only sample cases, the remaining cases are handled similarly.
The following two examples illustrate how to decode CO36 by hand. As a linear code over GF(4), C4^9 has the following parity-check matrix.

H(C4^9) =
  1 0 0 0 1 1 ω̄ 0 1
  0 1 0 0 1 0 ω ω̄ 1
  0 0 1 0 ω ω̄ 1 ω 1
  0 0 0 1 1 ω̄ 0 1 ω̄
Example 1.
In this example, we illustrate the projection decoding of CO36. Let y ∈ GF(2)^36 be a codeword of CO36 with an error of weight at most three. Consider the following projection of y.
        1  2  3  4  5  6  7  8  9
  0     0  1  0  1  0  1  0  1  0
  1     0  1  1  0  1  0  0  1  1
  ω     1  1  0  0  0  1  1  0  0
  ω̄     0  0  0  0  1  0  0  1  0
  ȳ     ω  ω̄  1  0  ω  ω  ω  ω  1
All the columns have odd parity except for the 5th and 6th columns, and hence p = 2 . Then we have the syndrome
s = ȳ H(C4^9)^T = (ω, ω, ω̄, ω) = ω · (1, 1, ω, 1) = ω H(C4^9)_5,
where H(C4^9)_5 denotes the fifth column of H(C4^9).
Therefore we proceed to ii of Step 3(c) with e_5 = ω and e_6 = 0. We replace ȳ_5 = ω by ȳ_5 + e_5 = ω + ω = 0 and then replace the fifth column y_5 = 0101 by the vector in Table 5 corresponding to 0 that is closest to it, namely 0111. Finally, we replace the first entry of y_6. So there are two errors in y. Therefore, we decode y as
        1  2  3  4  5  6  7  8  9
  0     0  1  0  1  0  0  0  1  0
  1     0  1  1  0  1  0  0  1  1
  ω     1  1  0  0  1  1  1  0  0
  ω̄     0  0  0  0  1  0  0  1  0
  ȳ     ω  ω̄  1  0  0  ω  ω  ω  1
Example 2.
Let y ∈ GF(2)^36 be a codeword of CE36 with an error of weight at most three and with the following projection.
        1  2  3  4  5  6  7  8  9
  0     0  1  0  0  0  1  0  1  0
  1     0  1  1  0  0  0  0  0  1
  ω     1  0  1  0  0  1  0  0  0
  ω̄     0  1  1  1  1  1  1  1  0
  ȳ     ω  ω  0  ω̄  ω̄  1  ω̄  ω̄  1
Note that p = 1 and the parity of the 8th column differs from that of the other columns, i.e., i = 8 in Step 3(b). We then compute the syndrome:
s = ȳ H(C4^9)^T = (ω̄, ω̄, 1, ω̄) = ω̄ · (1, 1, ω, 1) = ω̄ H(C4^9)_5.
So we have j = 5 ≠ i and e_5 = ω̄. Proceeding with iii of Step 3(b), we replace ȳ_5 = ω̄ by ȳ_5 + e_5 = ω̄ + ω̄ = 0 and replace the fifth column y_5 = 0001 by the vector in Table 5 corresponding to 0 that has the same parity as y_5 and is at distance 2 from it, namely 0111, so that the parity conditions are satisfied. Then we change the first entry of y_8. There are three errors in y. We decode y as
        1  2  3  4  5  6  7  8  9
  0     0  1  0  0  0  1  0  0  0
  1     0  1  1  0  1  0  0  0  1
  ω     1  0  1  0  1  1  0  0  0
  ω̄     0  1  1  1  1  1  1  1  0
  ȳ     ω  ω  0  ω̄  0  1  ω̄  ω̄  1
Next we show examples of the projection decoding of CO40 and CE40. We use the following parity-check matrix for C4^10.

H(C4^10) =
  1 0 0 0 1 1 ω̄ 0 1 ω̄
  0 1 0 0 1 0 ω ω̄ 1 ω
  0 0 1 0 ω ω̄ 1 ω 1 0
  0 0 0 1 1 ω̄ 0 1 ω̄ ω
Example 3.
Assume y ∈ GF(2)^40 is a codeword of CO40 with an error of weight at most three and with the following projection.
        1  2  3  4  5  6  7  8  9  10
  0     1  0  1  0  0  0  0  1  1  1
  1     0  0  1  0  1  1  1  0  0  1
  ω     0  0  1  0  1  0  1  0  0  0
  ω̄     0  1  0  1  1  0  1  0  0  1
  ȳ     0  ω̄  ω̄  ω̄  0  1  0  0  0  ω
All the columns and the first row are of odd parity, so p = 0 . Computing the syndrome, we have
s = ȳ H(C4^10)^T = (0, 0, 0, ω̄) = ω̄ · (0, 0, 0, 1) = ω̄ H(C4^10)_4.
Since s ≠ 0, by ii of Step 3(a) with e_4 = ω̄, two errors occurred in the 4th column of the array. We replace ȳ_4 = ω̄ by ȳ_4 + e_4 = ω̄ + ω̄ = 0 and replace the fourth column y_4 = 0001 by the vector in Table 5 corresponding to 0 at distance 2 from it that keeps the parity conditions satisfied, namely 0111. There are two errors in y. Therefore, y is decoded as
        1  2  3  4  5  6  7  8  9  10
  0     1  0  1  0  0  0  0  1  1  1
  1     0  0  1  1  1  1  1  0  0  1
  ω     0  0  1  1  1  0  1  0  0  0
  ω̄     0  1  0  1  1  0  1  0  0  1
  ȳ     0  ω̄  ω̄  0  0  1  0  0  0  ω
Example 4.
Assume y ∈ GF(2)^40 is a codeword of CE40 with an error of weight at most 3. Let y have the following projection.
        1  2  3  4  5  6  7  8  9  10
  0     0  0  1  0  1  0  0  1  0  0
  1     1  0  0  0  0  0  1  1  1  0
  ω     0  1  1  1  0  1  0  1  0  1
  ω̄     1  1  1  1  1  1  1  0  1  0
  ȳ     ω  1  1  1  ω̄  1  ω  ω̄  ω  ω
All the columns except columns 3, 8, and 10 are even, so p = 3. The syndrome is
s = ȳ H(C4^10)^T = (ω, ω, ω, 0) = H(C4^10)_8 + ω̄ H(C4^10)_10.
Hence, from iii of Step 3(d), we have e_8 = 1, e_10 = ω̄, and e_3 = 0. So we replace ȳ_8 by ȳ_8 + e_8 = ω̄ + 1 = ω and y_8 by the vector corresponding to ω in Table 5 closest to 1110, namely 1010. We also replace ȳ_10 by ȳ_10 + e_10 = ω + ω̄ = 1 and y_10 = 0010 by the vector corresponding to 1 in Table 5 closest to it, namely 0011. Finally, we replace the first entry of column 3. There are three errors in y. Therefore, y is decoded as
        1  2  3  4  5  6  7  8  9  10
  0     0  0  0  0  1  0  0  1  0  0
  1     1  0  0  0  0  0  1  0  1  0
  ω     0  1  1  1  0  1  0  1  0  1
  ω̄     1  1  1  1  1  1  1  0  1  1
  ȳ     ω  1  1  1  ω̄  1  ω  ω  ω  1
Remark 3.
The most time-consuming part (the dominating term in the complexity of the algorithm) is decoding the projected vector ȳ in the linear code C4^9 or C4^10 using syndrome decoding. However, since we know which positions (columns) contain errors, this syndrome decoding can be done in at most 3 × 3 × 3 = 27 possible ways, because the syndrome equation in Step (d) of the algorithm involves at most three scalar coefficients, each taking one of the three nonzero values.

6. Conclusions

In this paper, we described how to decode binary optimal [36, 19, 8] and [40, 22, 8] codes by projecting them onto the linear [9, 5, 4] and [10, 6, 4] codes over GF(4), respectively. Even though similar decodings were known for self-dual codes of lengths 24, 32, and 40, there was no known efficient decoding algorithm for these non-self-dual optimal codes. In fact, our algorithm works for any linear code with a projection onto a linear or additive code over GF(4), for errors of weight at most 3.

Author Contributions

Conceptualization, J.-L.K.; methodology, J.-L.K.; validation, L.G.; formal analysis, L.G.; investigation, L.G.; resources, J.-L.K.; writing–original draft preparation, L.G.; writing–review and editing, J.-L.K.; visualization, L.G.; supervision, J.-L.K.; funding acquisition, J.-L.K. All authors have read and agreed to the published version of the manuscript.

Funding

J.-L.K. was supported by Basic Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2016R1D1A1B03933259).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Huffman, W.C.; Pless, V. Fundamentals of Error-Correcting Codes; Cambridge University Press: Cambridge, UK, 2003.
  2. MacWilliams, F.J. Permutation decoding of systematic codes. Bell Syst. Tech. J. 1964, 43, 485–505.
  3. MacWilliams, F.J.; Sloane, N.J.A. The Theory of Error-Correcting Codes; North-Holland: Amsterdam, The Netherlands, 1983.
  4. Gallager, R. Low-density parity-check codes. IEEE Trans. Inform. Theory 1962, 8, 21–28.
  5. Loeliger, H.-A. An introduction to factor graphs. IEEE Signal Proc. Mag. 2004, 21, 28–41.
  6. Arikan, E. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073.
  7. Pless, V. Decoding the Golay codes. IEEE Trans. Inf. Theory 1986, 32, 561–567.
  8. Gaborit, P.; Kim, J.-L.; Pless, V. Decoding binary R(2, 5) by hand. Discret. Math. 2003, 264, 55–73.
  9. Kim, J.-L.; Pless, V. Decoding some doubly even self-dual [32,16,8] codes by hand. In Proceedings of the Conference Honoring Professor Dijen Ray-Chaudhuri on the Occasion of His 65th Birthday, Columbus, OH, USA, 18–21 May 2000; pp. 165–178.
  10. Kim, J.-L.; Lee, N. A projection decoding of a binary extremal self-dual code of length 40. Des. Codes Cryptogr. 2017, 83, 589–609.
  11. Amrani, O.; Be'ery, Y. Reed–Muller codes: Projections on GF(4) and multilevel construction. IEEE Trans. Inform. Theory 2001, 47, 2560–2565.
  12. Kim, J.-L.; Mellinger, K.E.; Pless, V. Projections of binary linear codes onto larger fields. SIAM J. Discret. Math. 2003, 16, 591–603.
  13. Brouwer, A.E. Bounds on the size of linear codes. In Handbook of Coding Theory; Pless, V.S., Huffman, W.C., Eds.; Elsevier: Amsterdam, The Netherlands, 1998; pp. 295–461.
Table 1. Weight distribution of CO36.
A0 = 1        A8 = 444      A9 = 496      A10 = 2160    A11 = 4752
A12 = 8760    A13 = 17,856  A14 = 28,992  A15 = 44,352  A16 = 54,318
A17 = 62,496  A18 = 72,864  A19 = 66,528  A20 = 54,192  A21 = 41,664
A22 = 28,992  A23 = 19,008  A24 = 8844    A25 = 4464    A26 = 2160
A27 = 528     A28 = 408     A32 = 9
Table 2. Weight distribution of CE36.
A0 = 1        A8 = 444      A9 = 528      A10 = 2160    A11 = 4464
A12 = 8760    A13 = 19,008  A14 = 28,992  A15 = 41,664  A16 = 54,318
A17 = 66,528  A18 = 72,864  A19 = 62,496  A20 = 54,192  A21 = 44,352
A22 = 28,992  A23 = 17,856  A24 = 8844    A25 = 4752    A26 = 2160
A27 = 496     A28 = 408     A32 = 9
Table 3. Weight distribution of CO40.
A0 = 1         A8 = 741       A10 = 6144       A12 = 42,736   A14 = 176,640
A16 = 484,890  A18 = 849,408  A20 = 1,073,184  A22 = 849,408  A24 = 484,890
A26 = 176,640  A28 = 42,736   A30 = 6144       A32 = 741      A40 = 1
Table 4. Weight distribution of CE40.
A0 = 1         A8 = 741       A10 = 6208       A12 = 42,096   A14 = 179,520
A16 = 477,210  A18 = 862,848  A20 = 1,057,056  A22 = 862,848  A24 = 477,210
A26 = 179,520  A28 = 42,096   A30 = 6208       A32 = 741      A40 = 1
Table 5. Projection of GF(2)^4 onto GF(4).
0 ↔ 0000, 1111, 1000, 0111        1 ↔ 1100, 0011, 0100, 1011
ω ↔ 1010, 0101, 0010, 1101        ω̄ ↔ 1001, 0110, 0001, 1110

Galvez, L.; Kim, J.-L. Projection Decoding of Some Binary Optimal Linear Codes of Lengths 36 and 40. Mathematics 2020, 8, 15. https://0-doi-org.brum.beds.ac.uk/10.3390/math8010015