Article

Discrete Atomic Transform-Based Lossy Compression of Three-Channel Remote Sensing Images with Quality Control

1 Department of Information-Communication Technologies, National Aerospace University, 61070 Kharkiv, Ukraine
2 Institut d’Electronique et des Technologies du numéRique, IETR UMR CNRS 6164, University of Rennes 1, 22300 Lannion, France
3 Department of Mathematical Modelling and Data Analysis, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, 03056 Kyiv, Ukraine
4 Department of Space Information Technologies and Systems, Space Research Institute of National Academy of Sciences of Ukraine and State Space Agency of Ukraine, 03187 Kyiv, Ukraine
* Author to whom correspondence should be addressed.
Submission received: 3 November 2021 / Revised: 16 December 2021 / Accepted: 26 December 2021 / Published: 28 December 2021

Abstract: Lossy compression of remote sensing data has found numerous applications. Several requirements are usually imposed on the methods and algorithms to be used: a large compression ratio has to be provided, the introduced distortions should not lead to a significant reduction of classification accuracy, compression has to be performed quickly enough, etc. An additional requirement could be to ensure the privacy of compressed data. In this paper, we show that these requirements can be easily and effectively met by compression based on the discrete atomic transform (DAT). Three-channel remote sensing (RS) images that are part of multispectral data are used as examples. It is demonstrated that the quality of images compressed by DAT can be varied and controlled by setting the maximal absolute deviation. This parameter is also strictly related to more traditional metrics such as root mean square error (RMSE) and peak signal-to-noise ratio (PSNR), which can thus be controlled as well. It is also shown that there are several variants of DAT having different depths. Their performances are compared from different viewpoints, and recommendations on transform depth are given. The effects of lossy compression on three-channel image classification using the maximum likelihood (ML) approach are studied. It is shown that the total probability of correct classification remains almost the same for a wide range of distortions introduced by lossy compression, although some variations of correct classification probabilities take place for particular classes depending on the peculiarities of feature distributions. Experiments are carried out for multispectral Sentinel images of different complexities.

Graphical Abstract

1. Introduction

In recent years, remote sensing (RS) has found various applications [1,2,3,4], including agriculture [5,6], forestry, catastrophe and ecological monitoring [5], land cover classification [6,7], and so on. This can be explained by several reasons. First, a great amount of useful information can be retrieved from acquired images, especially if they are high-resolution and multichannel, i.e., represent a set of component images of the same territory obtained in parallel or sequentially and co-registered [3,5,6] (by the term “multichannel”, we mean that component images can be acquired for different polarizations, wavelengths, or even by different sensors). Second, the situation with RS data value and volume becomes even more complicated because many modern RS systems carry out frequent data collection, e.g., once a week or more often. Sentinel-1 and Sentinel-2, recently put into operation, are examples of sensors producing such large-volume multichannel RS data [7,8]. Other examples are hyperspectral data provided by different sensors [9,10,11].
Then, problems of big data arise [9,10], where efficient transmission, storage, and dissemination of RS data are several of them, alongside others relating to co-registration, filtering, and classification. In RS data transmission, storage, and dissemination, compression is helpful [12,13,14,15]. Both lossless [12,13,16] and lossy [14,15,17] approaches are intensively studied. Near-lossless methods have been designed and analyzed as well [18,19]. Lossless techniques produce undistorted data after decompression, but the compression ratio (CR) is often not large enough. Near-lossless methods allow obtaining larger values of CR, and the introduced distortions are controlled (restricted) in one or another manner [20]. However, CR can still be not large enough. In turn, the lossy compression we focus on in this paper is potentially able to produce CR equal to tens and even larger than one hundred [17,21]. This is achieved at the expense of distortions, where larger distortions are introduced for larger CR values. A question is what a reasonable trade-off between an attained CR and introduced distortions is [13,22,23,24,25] and how it can be reached.
An answer depends upon many factors:
(1)
Priority of requirements for compression, restrictions that can be imposed;
(2)
Criteria of compressed image quality, tasks to be solved using compressed images;
(3)
Image properties and characteristics;
(4)
Available computational resources, preference of mathematical tools that can be used as compression basis.
Consider all these factors. Compression can be used for different purposes, including reduction of data size before its downlink transfer from a sensor to a point of data reception via a limited-bandwidth communication line, storage of acquired images for their further use in national or local RS data centers, and transfer of data to potential customers [13,23].
First, providing a given CR with a minimal level of distortions can be of prime importance. Then, the use of efficient spectral and spatial decorrelation transforms is needed [17,23,24], combined with modern coding techniques applied to quantized transform coefficients. Spectral decorrelation and 3D compression allow exploiting the spectral redundancy of multichannel data inherent to many types of images, e.g., multispectral and hyperspectral [25], to increase CR [24]. In this paper, we consider three-channel images composed of visible-range components of Sentinel-2 images [25]. The main reason we consider separate compression of the component images with central wavelengths 492 nm (Blue), 560 nm (Green), and 665 nm (Red) of Sentinel-2 data (https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-2, accessed on 27 October 2021) is that they have the same resolution, which differs from the resolution of most other (except the NIR-band) component images (in other words, we assume that Sentinel-2 images can be compressed in several groups, taking into account the different resolutions of different component images, namely, 10 × 10, 20 × 20, and 60 × 60 m2). Besides, discrete atomic compression (DAC) was earlier designed for color images (i.e., for three-channel image compression).
Second, it is possible that the main requirement is to keep the introduced losses below a given level. “Below a given level” can be described quantitatively or qualitatively. In the former case, one needs some criterion or criteria to measure the introduced distortions (see the brief analysis below). In the latter case, it can be stated that, e.g., lossy compression should not lead to a significant reduction of image classification accuracy (although even in this case “significant” can be described quantitatively). Here, it is worth recalling that considerable attention has been paid to the classification of lossy compressed images [17,26,27,28,29,30,31,32,33]. It has been shown that lossy compression can sometimes improve classification accuracy or, at least, that classification of compressed data provides approximately the same accuracy as classification of uncompressed data [34,35,36]. Thus, if compression is lossy, procedures for providing appropriate quality of compressed data are needed [36].
Third, it is often desired to carry out compression quickly enough. In this sense, it is not reasonable to employ iterative procedures, especially if the number of iterations is random and depends on many factors. It is worth applying transforms that can be easily implemented, have fast algorithms, can be parallelized, and so on [12]. This explains why most efficient methods are based on the discrete cosine transform and wavelets [36,37,38,39]. Here, we consider lossy compression based on the discrete atomic transform (DAT) [40,41], where atomic functions are a specific kind of compactly supported smooth functions. As will be shown below, DAT has a set of properties that are useful in the lossy compression of multichannel images.
Fourth, there can also be other important requirements. They can relate to the visual quality of compressed images [26], the necessity to follow some standards, etc. One specific requirement could be the security and privacy of compressed data [42,43]. There are many approaches to providing security and privacy of images in general and RS data in particular [44,45,46]. Currently, we concentrate on the problem of unauthorized viewing of image content. Usually, a processing procedure that provides both compression and content protection is constructed as follows: the image is compressed and afterward encrypted. This approach requires considerable additional computational resources, especially if a great number of digital images is processed. Another way is to apply a combination of some special image transform at the first step (for example, a scrambling technique [46]) with further data compression. In this case, the following questions arise: (1) what compression efficiency is provided? (2) is it possible to reconstruct an image correctly if a lossy compression algorithm is applied? It has been recently shown that privacy of images compressed by DAT [47] can be provided practically without increasing the size of compressed files (actually, protection is integrated into compression). This is one of its obvious advantages and will be discussed in this paper in more detail.
As has been mentioned above, criteria of compressed image quality and the tasks to be solved using compressed images describe the efficiency of compression and the applicability of compressed data for further use. The maximal absolute deviation is often used in characterizing near-lossless compression [19]. Root mean square error (RMSE), mean square error (MSE), and peak signal-to-noise ratio (PSNR) are conventional metrics used in lossy image compression [13,26]. Different visual quality metrics are applied as well [48,49,50]. Criteria typical for image classification (a total (aggregate) probability of correct classification, probabilities of correct classification for particular classes, confusion matrices) are worth using if RS data classification is the final task of image processing [51]. There is a certain correlation between all these criteria, but these correlations are not strictly established yet [52]. Because of this, it is worth carrying out studies to establish them. Note that correlations between the aforementioned probabilities and compressed image quality characterized by maximal absolute deviation (MAD) or PSNR depend upon the classifier used [36,53,54]. In this paper, we employ the maximum likelihood (ML) method [25,53,54]. This method has shown itself to be efficient in classifying multichannel data [53,54,55]; its efficiency is comparable to that of a neural network classifier [36].
We have stated above that image properties and characteristics influence CR and coder performance. For simpler-structure images, the introduced losses are usually smaller than for complex-structure images at the same CR, where image complexity can be characterized by, e.g., entropy (a larger entropy relates to more complex-structure images). This means that compression should be analyzed for images of different complexity and, desirably, of natural scenes. Besides, noise present in images can influence image compression. First, noisy images are compressed worse than the corresponding noise-free images [25]. Second, if images are noisy, this should be taken into account in compression performance analysis and coder parameter setting [18,56]. Since component images in the visible range of multispectral Sentinel-2 data have a high signal-to-noise ratio (which corresponds to noise invisibility under visual inspection), we further consider these images noise-free.
We assume that available computational resources and preferences regarding the mathematical tools to be used for compression are not of prime importance. Meanwhile, we can state that the DAT-based compression analyzed below possesses high computational efficiency [57].
Summarizing all this, we concentrate on the following:
  • DAT is used as the basis of compression and we are interested in the analysis of its performance since it is rather fast, allows providing privacy of data, and has some other advantages [40,41,47,57];
  • It is worth investigating how compression characteristics of DAT can be varied (adapted to practical tasks) and how the main criteria characterizing compression performance are inter-related;
  • We are interested in how the DAT-based compression influences classification accuracy and, for this purpose, consider the classification of three-channel Sentinel-2 data using the ML method.
Thus, the goal of this paper is to carry out a thorough analysis of DAT application for compressing and further analysis of multichannel RS images using three-channel Sentinel-2 data. The main contributions of the paper are the following. First, we analyze and show which versions of DAT are the most attractive (preferable) for the considered application. Second, we analyze and propose ways to control the distortions introduced by DAC. Third, we study how DAC influences the classification accuracy of RS data and show what parameter values have to be set to avoid significant degradation of classification characteristics.
The paper structure is the following. Section 2 considers DAT-based compression, its variants, and the basics of privacy provision. In Section 3, the dependencies between the main parameters and criteria of DAT compression are investigated. Section 4 describes the classifier used and its training for two test images. Section 5 provides the results of experiments (classification) for a real-life three-channel image. A brief discussion is given in Section 6. Finally, the conclusions follow.

2. DAT-Based Compression and Its Properties

Below, we describe discrete atomic compression, which is the DAT-based image compression algorithm. Then, we consider the DAT procedure, which is its core, and the quality control mechanism.

2.1. Discrete Atomic Compression

In the algorithm DAC, the following classic lossy image compression approach is used: preprocessing → discrete transform → quantization → encoding. The application of DAC to full-color digital image processing is shown in Figure 1. In this case, the input is an RGB matrix. At the first step, luma (Y) and chroma (Cr, Cb) components are obtained. Further, each of the matrices Y, Cr, and Cb is processed separately. The DAT procedure is applied to them, and matrices Ω[Y], Ω[Cr], Ω[Cb] of DAT coefficients are computed. Next, the elements of these matrices are quantized (further, we consider this process in more detail). Finally, quantized DAT coefficients are encoded using a lossless compression algorithm. The range of most of these values is small. Moreover, depending on the choice of quantization coefficients, a significant part of them is equal to zero. The combination of the described features provides effective compression by algorithms such as Huffman coding, often used in combination with run-length encoding, as well as arithmetic coding (AC) [55]. Consider this in more detail.
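For illustration, the preprocessing step can be sketched as follows. The text does not specify the exact luma/chroma transform used by DAC, so the sketch below assumes the full-range BT.601 matrix used, e.g., in JPEG; this is a plausible choice rather than a confirmed detail of the algorithm.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split an RGB image (H x W x 3, float) into Y, Cb, Cr planes.

    Assumption: the BT.601 full-range matrix (as in JPEG) is used;
    the text above does not state which color transform DAC applies.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr
```

Each of the three planes is then transformed, quantized, and encoded independently, as described above.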
In this research, we apply the AC algorithm to compress bit streams obtained from quantized DAT coefficients. Different ways to transform data into a bitstream can be used. For example, a special bit-plane bypass is applied in JPEG2000 [37]. In the current research, we use an approach that is based on Golomb coding [58]. We propose to apply the following bitstream assignment: 0 ↔ 0, 1 ↔ 10, −1 ↔ 110, 2 ↔ 1110, −2 ↔ 11110, etc. In general, the code for 0 is 0, the code for a positive k is a sequence of 2k − 1 bits equal to 1 followed by an end-bit 0, and the code for a negative −k is a sequence of 2k bits equal to 1 followed by an end-bit 0. For example, the binary stream of the sequence {0, 3, 4, −1, −2, 1, 0, 0, 2} is 0111110111111101101111010001110. In addition, we use row-by-row scanning of the coded blocks. The choice of such a scan is based primarily on performance reasons, namely, on the principle of locality, which allows significantly speeding up the data processing by effectively using the features of the memory architecture [59]. Of course, there may be other ways to bypass the blocks, which might provide better compression efficiency.
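The code assignment described above is simple enough to be sketched directly (an illustrative re-implementation, not the DAC source code; the resulting bit string is what is then passed to the arithmetic coder):

```python
def encode_value(k):
    """Code described in the text: 0 -> '0'; a positive k -> (2k - 1)
    ones followed by the end-bit 0; a negative -k -> 2k ones and 0."""
    if k == 0:
        return "0"
    ones = 2 * k - 1 if k > 0 else -2 * k
    return "1" * ones + "0"

def encode_stream(values):
    """Row-by-row scanning yields a flat sequence of quantized
    coefficients; each value is coded independently and concatenated."""
    return "".join(encode_value(v) for v in values)

# Reproduces the example from the text:
# encode_stream([0, 3, 4, -1, -2, 1, 0, 0, 2])
# -> "0111110111111101101111010001110"
```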
The process of reconstructing a compressed image is carried out in the reverse order to that shown in Figure 1.
We note that the algorithm DAC can be used to compress grayscale digital images. In this case, the preprocessing step is skipped, and DAT is applied directly to the matrix of the image processed. After that, DAT-coefficients are quantized and encoded in the same way as above.
In the next subsection, we consider the procedure DAT in more detail.

2.2. Discrete Atomic Transform

Discrete atomic transform is based on the following expansion:
$$f(x) = \sum_{k=1}^{n} \sum_{j \in J_k} \omega_{k,j}\, w_k\!\left(2^{k+1}x - jN\right) + \sum_{j \in J_{n+1}} \upsilon_j\, v_n\!\left(2^{n+1}x - jN\right), \tag{1}$$
where f(x) is a function representing some discrete data D = {d1, d2, …, dm}, n is a positive integer, N is a non-zero constant, and the system $\{w_k(2^{k+1}x - jN),\ v_n(2^{n+1}x - jN)\}$ is a system of atomic wavelets [57]. These wavelets are constructed using the atomic function
$$\mathrm{up}_s(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{itx} \prod_{k=1}^{\infty} \frac{\sin^2\!\left(st(2s)^{-k}\right)}{s^2\, t\, (2s)^{-k} \sin\!\left(t(2s)^{-k}\right)}\, dt, \quad s = 1, 2, 3, \ldots
$$
In this research, we use the function $\mathrm{up}_{32}(x)$ and $N = 2^{n+1}$.
From (1), it follows that $f(x) = m(x) + \sum_{k=1}^{n} g_k(x)$, where $m(x) = \sum_{j} \upsilon_j\, v_n(2^{n+1}x - jN)$ represents the main value of f(x), i.e., a small copy of the data D, and each function $g_k(x) = \sum_{j \in J_k} \omega_{k,j}\, w_k(2^{k+1}x - jN)$ describes the orthogonal component corresponding to the wavelet $w_k(x)$. The function f(x) is defined by the system of atomic wavelet coefficients $\Omega = \{\omega_{k,j}, \upsilon_j\}$, which is equivalent to the description of the discrete data D by $\Omega$. The procedure of atomic wavelet coefficient computation is called the discrete atomic transform of the data D (Figure 2). In addition, the number n is called its depth.
We note that the depth of DAT can be varied. This means that a structure and, therefore, result of DAT can be changed.
There are many ways to construct DAT of multidimensional data. Here, we concentrate on the two-dimensional case, since image processing is considered.
Let D be a rectangular matrix.
The first way to construct a discrete atomic transform of the matrix D is as follows: first, the array transform DAT of depth n is applied to each row of the matrix D and then to each column of the matrix of DAT coefficients obtained at the previous step (see Figure 3). We call this procedure DAT1 of depth n.
Consider another approach. First, the array transform DAT of depth 1 is applied to each row of D and then to each column of the resulting matrix (Figure 4). In this way, a simple matrix transform of depth 1, which is called DAT2, is built. The result is the matrix of DAT coefficients Ω. This matrix has a block structure: the block $\Omega_{0,0}$ contains a small, aggregated copy of the source data D; all other blocks contain DAT coefficients of the corresponding orthogonal layers.
If we apply DAT2 of depth 1 to the block $\Omega_{0,0}$, i.e., the upper-left block, we obtain the matrix transform called DAT2 of depth 2. In the same way, the matrix transform DAT2 of any valid depth n is constructed (Figure 5). Such a transform belongs to the classic wavelet transforms that are widely used in image compression [58,60].
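To make the structural difference between DAT1 and DAT2 concrete, the following sketch applies both construction schemes with the orthonormal Haar transform as a stand-in for the 1D atomic array transform (an assumption made purely for illustration; DAC uses atomic wavelets built from up_s, not Haar):

```python
import numpy as np

def haar_step(a):
    """One level of a 1D wavelet split; the orthonormal Haar transform
    is only a stand-in for the atomic-wavelet array transform."""
    pairs = a.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return approx, detail

def transform_1d(a, depth):
    """Array transform of the given depth: the approximation part is
    split again at every level; detail layers are kept."""
    details = []
    for _ in range(depth):
        a, d = haar_step(a)
        details.append(d)
    return np.concatenate([a] + details[::-1])

def dat1_like(m, depth):
    """DAT1 scheme: depth-n transform of every row, then of every
    column of the result."""
    m = np.apply_along_axis(transform_1d, 1, m, depth)
    return np.apply_along_axis(transform_1d, 0, m, depth)

def dat2_like(m, depth):
    """DAT2 scheme: one-level transform of rows and columns, then
    recursion on the upper-left (approximation) block."""
    m = np.apply_along_axis(transform_1d, 1, m, 1)
    m = np.apply_along_axis(transform_1d, 0, m, 1)
    if depth > 1:
        h, w = m.shape[0] // 2, m.shape[1] // 2
        m[:h, :w] = dat2_like(m[:h, :w], depth - 1)
    return m
```

Both schemes preserve the same information, but, starting from depth 2, they produce differently structured coefficient matrices; this degree of freedom is exactly what is later exploited for content protection.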
It is clear that DAT1 and DAT2 are significantly different. The output matrices have different structures, and their elements have different meanings. For this reason, complete information about the matrix transform applied is required in order to reconstruct the original matrix D.
Note that various mixtures of DAT1 and DAT2 can be applied. For example, first, DAT2 of the depth 1 can be applied, and then each block of the resulting matrix can be transformed by DAT1 (Figure 6). In addition, different combinations of DAT-procedure reuse can be applied to blocks of the matrix obtained at the previous step.
Hence, there is a great variety of constructions of the two-dimensional DAT. We stress that any attempt to correctly reconstruct the source matrix D using the given matrix of DAT-coefficients Ω requires huge computational resources if comprehensive information about DAT-procedure applied is absent. Moreover, the source matrix can be divided into blocks, each of which can be then transformed by DAT. Notice that this approach is used in such algorithms as JPEG [61] and WebP [62].
It is obvious that changes in the structure of DAT affect the DAC efficiency including complexity, memory savings, and metrics of quality loss. For this reason, the following question is of particular interest: what is the dependence of the DAC compression efficiency on the structure of DAT applied? An answer to this question makes it possible to choose such a structure of DAT that provides the best results with respect to different criteria.
In [47], DAT1 of the depth 5 and DAT2 of the depth 5 were considered, and it was shown that they provided almost the same compression ratio with the same distortions measured by RMSE (actually, only one compression mode of DAC, which provides the average RMSE = 2.8913, was considered), i.e., a significant variation of DAT structure does not reduce the efficiency of the DAC. It was proposed to apply this feature in order to provide protection of digital images.
Further, we compare DAT1 and DAT2. In contrast to [47], in the current research, a greater number of DAT structures is considered. Moreover, the analysis is carried out for a wider range of quality loss levels. It will be shown that almost the same results can be obtained using these principally different matrix transforms. In other words, a significant variation of the DAT structure does not significantly affect the processing results. For this reason, it is natural to expect that other, intermediate structures of DAT provide practically the same compression results.
The following combination of features makes the algorithm DAC a promising tool for image protection and compression:
  • a great variety of structures of the procedure DAT, which is a core of DAC;
  • a possibility to reconstruct the source image correctly if and only if the correct inverse transform is applied;
  • almost the same compression efficiency provided by different structures of DAT.
It is obvious that if DAC is used in some software, a key containing information about the structure of the DAT applied should not be stored in the compressed file in order to protect the image content. If this requirement is satisfied, then a high level of privacy protection is guaranteed. If unauthorized persons obtain access to the file and the compression technology, but do not have the key, then correct content reconstruction requires great computational resources. Such an attack can be made even more difficult by encrypting some elements of the file with the compressed image.
Consider some special data structure requirements. The different variants of the matrix-form DAT are based on the one-dimensional DAT, i.e., the DAT of an array. From the functional properties of atomic wavelets [57], which constitute the core of DAT, it follows that the length of the source array A should be equal to $\mathrm{length}(A) = s \cdot 2^n$, where n is the depth of the DAT and s is some integer. This restriction is called the length condition. If it is not satisfied, then it is suggested to extend A such that the equality holds. We propose to fill the extra elements with values equal to the last element of A. In this case, the extended version of A has up to $2^n - 1$ extra elements. When processing a matrix M, its rows and columns should also satisfy the length condition defined by the structure of the DAT. In order to satisfy it, it is proposed to add extra columns, each of which coincides with the last column of M, and after that to add extra rows using the same approach. This provides the possibility to apply the DAT of any structure to a matrix of any size.
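The length condition and the suggested extension can be sketched as follows (a straightforward illustration of the simple case length(A) = s · 2^n; mixed DAT structures may impose a different block size):

```python
import numpy as np

def pad_to_length_condition(a, depth):
    """Extend a 1D array so that its length is a multiple of 2**depth,
    repeating the last element, as suggested in the text."""
    extra = (-len(a)) % (2 ** depth)
    return np.concatenate([a, np.full(extra, a[-1])])

def pad_matrix(m, depth):
    """Pad columns first (repeating the last column), then rows
    (repeating the last row), so that both dimensions satisfy the
    length condition."""
    extra_cols = (-m.shape[1]) % (2 ** depth)
    m = np.hstack([m, np.repeat(m[:, -1:], extra_cols, axis=1)])
    extra_rows = (-m.shape[0]) % (2 ** depth)
    m = np.vstack([m, np.repeat(m[-1:, :], extra_rows, axis=0)])
    return m
```

For example, an array of length 5 processed with depth n = 3 is extended to length 8 by repeating its last element three times, i.e., by 2^n − 3 extra elements (at most 2^n − 1 would be needed).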

2.3. Quality Loss Control Mechanism

DAC is a lossy compression algorithm. The main distortions occur during the quantization of DAT coefficients. It is clear that appropriate quantization coefficients should be used. In the standards of some algorithms (for instance, JPEG), recommended values are given. However, developers of software and devices sometimes apply their own coefficients. We stress that the loss of quality strongly depends on the quantization coefficients. For this reason, their choice should ensure the fulfillment of quality requirements in terms of some given metrics.
In [60], a quality loss control mechanism for the algorithm DAC was introduced. It provides the possibility to bound the distortions measured by the maximum absolute deviation (MAD), which is often used in remote sensing [22,23]. Basically, this metric is defined by the formula
$$\mathrm{MAD} = \max_{i,j} \left| X_{ij} - Y_{ij} \right|,$$
where $X = (X_{ij})$, $Y = (Y_{ij})$ are the source and reconstructed images, respectively. For the case of full-color digital images, the MAD metric is built as follows:
$$\mathrm{MAD} = \max_{i,j} \left\{ \left|X_{ij}^{[R]} - Y_{ij}^{[R]}\right|,\ \left|X_{ij}^{[G]} - Y_{ij}^{[G]}\right|,\ \left|X_{ij}^{[B]} - Y_{ij}^{[B]}\right| \right\},$$
where $(X_{ij}^{[R]}, X_{ij}^{[G]}, X_{ij}^{[B]})$, $(Y_{ij}^{[R]}, Y_{ij}^{[G]}, Y_{ij}^{[B]})$ are the RGB components of the pixels $X_{ij}$ and $Y_{ij}$.
High sensitivity even to minor distortions is a key feature of MAD. Note that if MAD is small, then the quality loss obtained during processing (e.g., due to lossy compression) is insignificant. If MAD is large, then at least one pixel has changed considerably. However, if only a few pixels have significant changes of color intensity, visual quality might remain high, especially when processing high-resolution images. Hence, the MAD metric should be used as a distortion measure in the case of small quality loss or near-lossless compression.
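Computed over arrays, both definitions of MAD above collapse into one line, since taking the maximum over the three channel deviations of every pixel is the same as taking it over all array elements:

```python
import numpy as np

def mad(x, y):
    """Maximum absolute deviation between two images of equal shape;
    for RGB images the maximum runs over all pixels and channels."""
    return int(np.max(np.abs(x.astype(np.int64) - y.astype(np.int64))))
```

The cast to a signed integer type avoids wrap-around when the inputs are stored as 8-bit unsigned arrays.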
Consider the quality loss control mechanism, which was proposed in [60], in more detail. It is based on an estimate concerning the expansion (1).
We start with the transform DAT1 and the case of grayscale image processing. Let D be a source matrix. Using DAT1, we obtain the matrix Ω that consists of blocks $\{\Omega_{i,j}\}_{i,j=0}^{n}$ (see Figure 3). Denote by $\{\delta_{i,j}\}_{i,j=0}^{n}$ a set of positive real numbers. Consider the following quantization procedure:
$$\Psi_{0,0} = \mathrm{Round}\!\left(\frac{\Omega_{0,0}}{\delta_{0,0}}\right), \quad \Psi_{0,j} = \mathrm{Round}\!\left(\frac{2^{2j-1}\,\Omega_{0,j}}{\delta_{0,j}}\right) \ \text{for}\ j = 1, \ldots, n, \tag{2}$$
$$\Psi_{i,0} = \mathrm{Round}\!\left(\frac{2^{2i-1}\,\Omega_{i,0}}{\delta_{i,0}}\right) \ \text{for}\ i = 1, \ldots, n \quad \text{and} \quad \Psi_{i,j} = \mathrm{Round}\!\left(\frac{2^{2(i+j-1)}\,\Omega_{i,j}}{\delta_{i,j}}\right) \ \text{for}\ i, j = 1, \ldots, n. \tag{3}$$
The procedure is presented in matrix form; we assume that all operations are applied to each element of the blocks. Using (2) and (3), the matrix $\Psi = (\Psi_{i,j})_{i,j=0}^{n}$ is computed. In DAC, the blocks of this matrix are encoded using binary AC.
Dequantization is constructed as follows:
$$\tilde{\Omega}_{0,0} = \delta_{0,0}\,\Psi_{0,0}, \quad \tilde{\Omega}_{0,j} = \frac{\delta_{0,j}}{2^{2j-1}}\,\Psi_{0,j} \ \text{for}\ j = 1, \ldots, n, \tag{4}$$
$$\tilde{\Omega}_{i,0} = \frac{\delta_{i,0}}{2^{2i-1}}\,\Psi_{i,0} \ \text{for}\ i = 1, \ldots, n \quad \text{and} \quad \tilde{\Omega}_{i,j} = \frac{\delta_{i,j}}{2^{2(i+j-1)}}\,\Psi_{i,j} \ \text{for}\ i, j = 1, \ldots, n. \tag{5}$$
This procedure provides computation of the matrix $\tilde{\Omega} = \{\tilde{\Omega}_{i,j}\}_{i,j=0}^{n}$ that is further used to obtain $\tilde{D}$, which is the matrix of the decompressed image.
It follows that if (2)–(5) are applied, then
$$\mathrm{MAD} \le \sum_{i,j=0}^{n} \delta_{i,j}. \tag{6}$$
The right part of this inequality is an upper bound of MAD. We denote it by UBMAD. In other words, the proposed quantization and dequantization procedures guarantee that the loss of quality measured by MAD is not greater than UBMAD, which is defined by the quantization parameters $\{\delta_{i,j}\}_{i,j=0}^{n}$. As can be seen, DAT coefficients corresponding to the same wavelet layer are quantized using the same quantization coefficient.
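The per-block quantization (2), (3) and dequantization (4), (5) can be sketched generically; here `factor` stands for the power of two attached to a block (e.g., 2^(2(i+j−1)) for block (i, j) with i, j ≥ 1), and the example only illustrates the mechanism, not the full DAC pipeline:

```python
import numpy as np

def quantize_block(omega, factor, delta):
    """Formulas (2), (3): scale the block by its layer factor (a power
    of two), divide by the quantization parameter delta, and round."""
    return np.round(factor * omega / delta)

def dequantize_block(psi, factor, delta):
    """Formulas (4), (5): invert the scaling."""
    return delta * psi / factor

# The round-trip error of a single coefficient never exceeds
# delta / (2 * factor), which is why the sum of the deltas gives
# an upper bound (UBMAD) on the resulting MAD.
```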
When processing a full-color digital image, we propose to apply the same approach. Consider three sets of positive real numbers $\{\delta_{i,j}^{[Y]}\}_{i,j=0}^{n}$, $\{\delta_{i,j}^{[Cr]}\}_{i,j=0}^{n}$, and $\{\delta_{i,j}^{[Cb]}\}_{i,j=0}^{n}$. Each of these sets is used in (2)–(5) for quantizing and dequantizing the matrices Ω[Y], Ω[Cr], Ω[Cb] of DAT coefficients corresponding to Y, Cr, and Cb, respectively. In this case,
$$\mathrm{MAD} \le \max\left\{ \sum_{i,j=0}^{n} \delta_{i,j}^{[Y]},\ \sum_{i,j=0}^{n} \delta_{i,j}^{[Cr]},\ \sum_{i,j=0}^{n} \delta_{i,j}^{[Cb]} \right\}, \tag{7}$$
where the right part is the maximum of three values, each of which is a sum of the real numbers introduced above. This maximum is denoted by UBMAD.
Further, consider the transform DAT2 used for grayscale image compression. The result of applying DAT2 to the matrix D of the processed image is the matrix Ω that consists of blocks $\{\Omega_{0,0}, \Omega_{k,0}, \Omega_{k,1}, \Omega_{k,2}\}_{k=1}^{n}$ (see Figure 5). Let $\{\delta_{0,0}, \delta_{k,0}, \delta_{k,1}, \delta_{k,2}\}_{k=1}^{n}$ be a set of positive numbers. As before, these values are used in quantizing the blocks of the matrix Ω. This procedure is:
$$\Psi_{0,0} = \mathrm{Round}\!\left(\frac{\Omega_{0,0}}{\delta_{0,0}}\right), \quad \Psi_{k,0} = \mathrm{Round}\!\left(\frac{2^{2k-1}\,\Omega_{k,0}}{\delta_{k,0}}\right), \tag{8}$$
$$\Psi_{k,1} = \mathrm{Round}\!\left(\frac{2^{2(2k-1)}\,\Omega_{k,1}}{\delta_{k,1}}\right) \quad \text{and} \quad \Psi_{k,2} = \mathrm{Round}\!\left(\frac{2^{2k-1}\,\Omega_{k,2}}{\delta_{k,2}}\right) \ \text{for}\ k = 1, \ldots, n. \tag{9}$$
Dequantization is built as follows:
$$\tilde{\Omega}_{0,0} = \delta_{0,0}\,\Psi_{0,0}, \quad \tilde{\Omega}_{k,0} = \frac{\delta_{k,0}}{2^{2k-1}}\,\Psi_{k,0}, \tag{10}$$
$$\tilde{\Omega}_{k,1} = \frac{\delta_{k,1}}{2^{2(2k-1)}}\,\Psi_{k,1} \quad \text{and} \quad \tilde{\Omega}_{k,2} = \frac{\delta_{k,2}}{2^{2k-1}}\,\Psi_{k,2} \ \text{for}\ k = 1, \ldots, n. \tag{11}$$
Computation of the matrix $\tilde{\Omega}$, which is used in order to obtain the decompressed image, is provided by (10) and (11). Quality loss measured by MAD satisfies the following inequality:
$$\mathrm{MAD} \le \delta_{0,0} + \sum_{k=1}^{n} \left( \delta_{k,0} + \delta_{k,1} + \delta_{k,2} \right). \tag{12}$$
By UBMAD, we denote the right part of this expression.
In the case of full-color image compression using DAC with DAT2, the same approach can be used. Three sets $\{\delta_{0,0}^{[Y]}, \delta_{k,0}^{[Y]}, \delta_{k,1}^{[Y]}, \delta_{k,2}^{[Y]}\}_{k=1}^{n}$, $\{\delta_{0,0}^{[Cr]}, \delta_{k,0}^{[Cr]}, \delta_{k,1}^{[Cr]}, \delta_{k,2}^{[Cr]}\}_{k=1}^{n}$, and $\{\delta_{0,0}^{[Cb]}, \delta_{k,0}^{[Cb]}, \delta_{k,1}^{[Cb]}, \delta_{k,2}^{[Cb]}\}_{k=1}^{n}$ are applied as quantization parameters. In order to obtain formulas for quantizing and dequantizing the matrices Ω[Y], Ω[Cr], Ω[Cb], one should substitute these values into (8)–(11). In this case, the following inequality holds:
$$\mathrm{MAD} \le \max\left\{ \delta_{0,0}^{[Y]} + \sum_{k=1}^{n}\left(\delta_{k,0}^{[Y]} + \delta_{k,1}^{[Y]} + \delta_{k,2}^{[Y]}\right),\ \delta_{0,0}^{[Cr]} + \sum_{k=1}^{n}\left(\delta_{k,0}^{[Cr]} + \delta_{k,1}^{[Cr]} + \delta_{k,2}^{[Cr]}\right),\ \delta_{0,0}^{[Cb]} + \sum_{k=1}^{n}\left(\delta_{k,0}^{[Cb]} + \delta_{k,1}^{[Cb]} + \delta_{k,2}^{[Cb]}\right) \right\}. \tag{13}$$
Further, the right part of (13) is denoted by UBMAD.
This implies that (2)–(5) in combination with (6), (7) and (8)–(11) in combination with (12), (13) provide control of quality loss measured by MAD-metric when compressing digital images by DAC with DAT1 and DAT2, respectively.
Obviously, (6), (7), (12), and (13) are upper bounds. The proposed quantization and dequantization methods do not guarantee that MAD equals the desired value. Nevertheless, the following property is guaranteed: quality loss measured by this metric does not exceed the given value. This feature is important when only a minor loss of quality is admissible.
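As an illustration, the quantization (8)–(9), dequantization (10)–(11), and the bound (12) can be sketched as follows. The coefficient blocks are assumed to be already computed by DAT2 (the transform itself is not shown), and the dictionary-based block layout is our simplifying assumption:

```python
import numpy as np

def quantize(omega, delta, n):
    """Quantize DAT2 coefficient blocks per (8)-(9).
    omega, delta: dicts keyed by (0, 0) and (k, j) for k = 1..n, j = 0, 1, 2."""
    psi = {(0, 0): np.round(omega[(0, 0)] / delta[(0, 0)])}
    for k in range(1, n + 1):
        c = 2.0 ** (2 * k - 1)                                    # factor for blocks (k,0), (k,2)
        psi[(k, 0)] = np.round(c * omega[(k, 0)] / delta[(k, 0)])
        psi[(k, 1)] = np.round(c * c * omega[(k, 1)] / delta[(k, 1)])  # 2^{2(2k-1)} for (k,1)
        psi[(k, 2)] = np.round(c * omega[(k, 2)] / delta[(k, 2)])
    return psi

def dequantize(psi, delta, n):
    """Invert the quantization per (10)-(11)."""
    omega = {(0, 0): delta[(0, 0)] * psi[(0, 0)]}
    for k in range(1, n + 1):
        c = 2.0 ** (2 * k - 1)
        omega[(k, 0)] = delta[(k, 0)] / c * psi[(k, 0)]
        omega[(k, 1)] = delta[(k, 1)] / (c * c) * psi[(k, 1)]
        omega[(k, 2)] = delta[(k, 2)] / c * psi[(k, 2)]
    return omega

def ubmad(delta, n):
    """Upper bound (12) on MAD defined by the quantization steps."""
    return delta[(0, 0)] + sum(delta[(k, j)] for k in range(1, n + 1) for j in range(3))
```

Note that each block's round-trip error is at most half of its effective quantization step, which is what makes the MAD bound (12) hold after the inverse transform.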
The choice of the parameters $\{\delta_{i,j}\}$ or $\{\delta^{[Y]}_{i,j},\delta^{[Cr]}_{i,j},\delta^{[Cb]}_{i,j}\}$, when processing grayscale or full-color images, respectively, defines the quality loss settings of DAC. Many lossy compression algorithms, including DAC, share the following feature: if one fixes some quality setting and processes two images of different content complexity, the resulting distortions usually differ. Hence, the following question is of particular interest: how large is the variation of the compression efficiency indicators? We study this question in the next section. Besides, as mentioned above, the MAD metric might be an inadequate measure of distortions if its value is large. In this case, other quality loss indicators, in particular RMSE and PSNR, can be used:
$$\mathrm{RMSE}=\sqrt{\frac{1}{3mn}\sum_{i,j=1}^{m,n}\left(\left(X^{[R]}_{ij}-Y^{[R]}_{ij}\right)^2+\left(X^{[G]}_{ij}-Y^{[G]}_{ij}\right)^2+\left(X^{[B]}_{ij}-Y^{[B]}_{ij}\right)^2\right)},\qquad \mathrm{PSNR}=20\log_{10}\frac{255}{\mathrm{RMSE}},$$
where $X_{ij}=(X^{[R]}_{ij},X^{[G]}_{ij},X^{[B]}_{ij})$ and $Y_{ij}=(Y^{[R]}_{ij},Y^{[G]}_{ij},Y^{[B]}_{ij})$ are pixels of the RGB images X and Y, which are, respectively, the source and the reconstructed images of size $m\times n$.
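For reference, these metrics can be computed for a pair of RGB images as follows (a minimal sketch; the function names are ours):

```python
import numpy as np

def mad(x, y):
    """Maximal absolute deviation over all pixels and channels."""
    return np.abs(x.astype(float) - y.astype(float)).max()

def rmse(x, y):
    """RMSE over the three channels of m-by-n RGB images, as in the formula above."""
    d = x.astype(float) - y.astype(float)
    return np.sqrt(np.mean(d ** 2))     # equals sqrt(sum / (3mn)) for an (m, n, 3) array

def psnr(x, y):
    """PSNR in dB for 8-bit images."""
    return 20.0 * np.log10(255.0 / rmse(x, y))
```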
Further, we investigate the correlation between these metrics and MAD, as well as their dependence on UBMAD. For this purpose, a set of 100 test images is used. Each of them is processed by the DAC algorithm with different quality loss settings and DAT structures.

3. Discrete Atomic Compression of Test Data

In the current research, we used 100 digital images (see Figure 7) of the European Space Agency (ESA). They were downloaded from the official ESA site: https://www.esa.int/ESA_Multimedia/Images, accessed on 27 October 2021. In addition, these test data are available in a Google Drive folder (containing the RGB images in BMP format, short information about them, and tables with the results of their processing): https://drive.google.com/drive/folders/1PSld3GqFQJYfrNs_b4uaxieY4m3AlwW2?usp=sharing, accessed on 27 October 2021. One of the principal features of the test data is the presence of a great number of small details and sharp color changes (Figure 8a).
In other words, mostly images with complex content are used. However, some of them contain domains of relatively constant color in combination with smooth color changes (Figure 8b).
For testing purposes, in situ datasets can be used as well. Such data can be collected during various land surveys, for example along roads (Figure 9), and can be especially useful for crop classification. During data preprocessing, small polygons can be prepared that represent different homogeneous land cover classes.
Each of the test images is processed by the DAC algorithm with DAT1 of depth 5 and with DAT2 of depth n = 1, 2, 3, 4, and 5. Different quality loss settings, defined by the values $\{\delta^{[Y]}_{i,j},\delta^{[Cr]}_{i,j},\delta^{[Cb]}_{i,j}\}$, are used. The following steps are applied:
(1)
fix the structure of DAT and its depth in the case of DAT2;
(2)
fix parameters { δ i , j [ Y ] , δ i , j [ C r ] , δ i , j [ C b ] } and compute UBMAD;
(3)
for each test image perform the following:
compress the current image;
compute the compression ratio (CR): $\mathrm{CR}=\frac{\text{size of source file}}{\text{size of compressed file}}$;
decompress the image;
compute quality loss measured by MAD, RMSE, and PSNR;
store the results in a table.
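The per-image part of the steps above can be sketched as follows. Since the DAC coder itself is not reproduced here, a lossless zlib codec is used as a stand-in to exercise the loop (an assumption for illustration only):

```python
import zlib
import numpy as np

def evaluate(image, compress, decompress):
    """Step (3): compress, measure CR, decompress, measure quality loss.
    `compress` / `decompress` are stand-ins for the DAC coder."""
    packed = compress(image)
    cr = image.nbytes / len(packed)       # CR = source size / compressed size
    restored = decompress(packed, image.shape, image.dtype)
    diff = image.astype(float) - restored.astype(float)
    return {
        "CR": cr,
        "MAD": np.abs(diff).max(),
        "RMSE": np.sqrt(np.mean(diff ** 2)),
    }

# zlib as a lossless stand-in codec (MAD = 0 by construction)
deflate = lambda img: zlib.compress(img.tobytes())
inflate = lambda buf, shape, dtype: np.frombuffer(zlib.decompress(buf), dtype=dtype).reshape(shape)
```

With a lossy codec in place of zlib, PSNR follows from the returned RMSE as $20\log_{10}(255/\mathrm{RMSE})$.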
In this paper, the tables with the obtained results are not given due to their large size (these data are presented in the files Efficiency_indicators.pdf, ESA_data_DAT_1.xlsx, and ESA_data_DAT_2.xlsx, which are available in the Google Drive folder https://drive.google.com/drive/folders/1PSld3GqFQJYfrNs_b4uaxieY4m3AlwW2?usp=sharing, accessed on 27 October 2021). Below, we present their analysis.
First, we study the correlation of the quality loss metrics RMSE, PSNR, and MAD. Since PSNR is a function of RMSE, it is sufficient to investigate the dependence of RMSE on MAD. In Figure 10, scatter plots of RMSE vs. MAD are shown. In addition, we have computed Pearson's correlation coefficient R and Spearman's rank-order correlation coefficient ρ [63]. Moreover, using the least-squares method [63], linear regression equations have been constructed. Their graphs are also given in Figure 10. In Table 1, the values of R and ρ, as well as the coefficients a, b of the linear equation y = ax + b, where y = RMSE and x = MAD, are presented.
Furthermore, using the ANOVA F-test [64], we have checked whether there is a linear regression relationship between RMSE and MAD. For this purpose, the following test statistic has been computed:
$$\bar{F}=\frac{\sum_{i=1}^{n}\left(\hat{y}_i-\bar{y}\right)^2}{\frac{1}{n-2}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2},$$
where n = 100 is the number of analyzed values, $\{y_1,\dots,y_n\}$ is the set of RMSE values, $\bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_i$, and $\hat{y}_i=ax_i+b$, where a, b are the linear regression coefficients and $\{x_1,\dots,x_n\}$ are the MAD values. Here, the point $(x_i,y_i)$ is the pair of quality loss values measured by MAD and RMSE for the i-th test image. In Table 1, the values of $\bar{F}$ are given. This statistic is compared with $F_{1,n-2}$ from the F-table [64]; in our case, $F_{1,n-2}=3.865$. If $\bar{F}>F_{1,n-2}$, then there is a linear regression relationship between y and x, i.e., between RMSE and MAD. It follows from Table 1 that, for each structure of DAT, there is a linear regression between the quality loss indicators MAD and RMSE. This is also evidenced by the fact that the values of both Pearson's correlation and Spearman's rank correlation coefficients are close to 1.
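This analysis (regression coefficients, both correlation coefficients, and the statistic (14)) can be sketched for given MAD and RMSE value sets as follows; the 5% significance level for the F-test is our assumption:

```python
import numpy as np
from scipy import stats

def regression_report(mad_vals, rmse_vals):
    """Pearson R, Spearman rho, linear fit y = a*x + b, and the ANOVA F statistic (14)."""
    x = np.asarray(mad_vals, dtype=float)
    y = np.asarray(rmse_vals, dtype=float)
    n = len(x)
    a, b, r, _, _ = stats.linregress(x, y)       # least-squares fit and Pearson's R
    rho = stats.spearmanr(x, y)[0]               # Spearman's rank correlation
    y_hat = a * x + b
    f_bar = np.sum((y_hat - y.mean()) ** 2) / (np.sum((y - y_hat) ** 2) / (n - 2))
    f_crit = stats.f.ppf(0.95, 1, n - 2)         # F_{1, n-2} at the 5% level (assumed)
    return {"R": r, "rho": rho, "a": a, "b": b, "F": f_bar, "linear": f_bar > f_crit}
```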
Second, we investigate the dependence of MAD, RMSE, PSNR, and CR on UBMAD. For this purpose, we compute the mean (E), minimum, and maximum values and the deviation (σ) of these compression efficiency indicators for each applied value of UBMAD. In addition, we calculate the percentage of the obtained values that belong to the segments $[E-k\sigma, E+k\sigma]$ for k = 1, 2, and 3. In other words, we estimate the scatter of the experimental data with respect to the mean. In Table 2 and Table 3, the results of the computation are given for DAT1 of depth 5 and DAT2 of depth 2 (the results for the other cases are presented in the file Efficiency_indicators.pdf that can be found at the link given above). As can be seen, the difference between the minimum and maximum is large. For instance, when processing the test images "Southern Bavaria" (Figure 8a) and "Jewels of the Maldives" (Figure 8b) by DAC with DAT1 and UBMAD = 155, we obtain, respectively, PSNR = 32.854 dB, CR = 2.146 and PSNR = 42.615 dB, CR = 44.525, which are, respectively, the minimum and maximum values of the corresponding indicators. Nevertheless, the percentage of values that belong to the segments $[E-\sigma, E+\sigma]$ and $[E-2\sigma, E+2\sigma]$ is high. Besides, σ is small when UBMAD is small, but it grows as UBMAD increases.
Hence, the DAC algorithm contains a mechanism for controlling quality loss measured by MAD, RMSE, and PSNR. It does not guarantee specific values of these indicators, but it ensures, with a high level of certainty, that each of them stays within fixed limits.
Next, in Figure 11, the dependences of the mean value of CR on the mean value of PSNR are shown for each structure of DAT. We see that the curves are close to each other, except for the one corresponding to DAT2 of depth 1. This means that DAT2 of depth 1 is not recommended and that the structure of DAT can be changed without significant changes in compression efficiency, which is important in the context of privacy protection requirements (see Section 2.2).
Finally, we verify the results presented above by processing other test data. In Figure 12, the test images SS3 and SS4 are shown. Table 4 and Table 5 contain the results of their compression. We stress that we have used the same quality loss settings as for the previous test data set. Figure 13 shows the dependence of CR on PSNR. It follows that there is no significant difference between the results obtained for different structures of DAT. Furthermore, we see that the indicators MAD, RMSE, PSNR, and CR belong to the segments obtained when processing the ESA images.

4. Considered Approach to Multichannel Image Classification

4.1. Maximum Likelihood Classifier

The process of pixel-by-pixel classification of raster images consists of distributing all pixels into classes in accordance with the value of each of them in one or more zones of the spectrum. Formally, the task is reduced to the creation of an optimal classifier $d(x):X\to A$ that maps the set of observations of class attributes into a set of classes (represented by unique names or numbers). The optimality criterion is usually understood as the requirement that, when elements x from the observation space X are presented during classification, correct decisions are made as often as possible. Variability of spectral features, imperfect characteristics of imaging systems, noise, and interference during registration are sources of stochasticity in decision-making. Since the observations X are realizations of random variables, the transformation d(x) is a random function, and the class number also turns out to be a random variable. Thus, the design of pattern recognition methods is inevitably associated with the study of random mappings and is based on information-statistical methods for forming the feature space, nonparametric estimation of probability densities, and the testing of statistical hypotheses.
Statistical recognition methods, in contrast to heuristic ones, allow making mathematically sound decisions that take into account the available a priori information about the form of the distribution $f(x;\theta\,|\,a_k)$ for all sets of patterns $A=\{a_k\}$ and the probability of appearance of the patterns of each class $P(a_k)$. In this case, the values of the class attributes are considered as realizations of random variables, and their joint probability density functions are used to describe the class etalons.
All statistical decision rules are based on the formation of the likelihood ratio
$$L=\frac{f(x^{*};\theta\,|\,a_u)}{f(x^{*};\theta\,|\,a_v)}$$
and its comparison to a certain threshold, the value of which is determined by the selected criterion. The choice of criterion determines the way of dividing the feature space X into closed non-intersecting decision-making areas $G_k$, k = 1, 2, …, K, each of which contains those values of features $x\in X$ that are most characteristic (probable) for one of the classes. Then, each pixel s of the image with spatial coordinates (i, j) is assigned to the class into whose area its vector of values $x^{*}(i,j)$ falls.
Complete and detailed knowledge of a priori information corresponds to the Bayesian approach to classification. The Bayesian classifier provides minimal error rates and is used as a benchmark for other classification algorithms. In the absence of information about the prior probabilities of the classes and the losses associated with erroneous decisions, the maximum likelihood criterion is used. Following this criterion, the vector of values of the current pixel $x^{*}(i,j)$ is substituted in turn into the probabilistic models of the class etalons. The decision is made in favor of the class for which the likelihood function is maximal:
$$f(x^{*};\theta\,|\,a_v)=\max_{1\le k\le K}\left\{f(x^{*};\theta\,|\,a_k)\right\}\ \Rightarrow\ s\in a_v.$$
The results $f(x^{*}(i,j)\,|\,a_k)$, k = 1, …, K, are compared to each other, the maximum value of the likelihood function is selected, and its index is the number of the class to which the current pixel $s(i,j)$ belongs. Since, in statistical recognition, the densities $f(x;\theta\,|\,a_k)$ are in general not known, their estimates obtained at the stage of training the classifier are substituted into the decision rule.
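A minimal sketch of the decision rule (16) is given below; here, Gaussian class densities stand in for the Johnson $S_B$ models used in the paper, and all names are ours:

```python
import numpy as np
from scipy.stats import multivariate_normal

def ml_classify(pixels, class_models):
    """Assign each pixel to the class whose likelihood f(x*|a_k) is maximal (16).
    class_models: list of (mean, cov) pairs estimated from training samples.
    Gaussian densities are a stand-in for the Johnson S_B models of the paper."""
    log_likelihoods = np.column_stack([
        multivariate_normal(mean, cov).logpdf(pixels)   # log f(x*|a_k) per class
        for mean, cov in class_models
    ])
    return np.argmax(log_likelihoods, axis=1)           # index of the winning class
```

Using log-likelihoods rather than raw densities avoids numerical underflow without changing the argmax.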
Supervised classification procedures (supervised learning) are characterized by the presence of training samples. When classifying remote sensing data, training samples are collections of pixels that represent a recognizable pattern or potential class. Usually, these are some well-defined homogeneous areas in the image, identified based on ground-truth data about the Earth's surface.
In the case of nonparametric estimation of the PDF based on a training sample (N points of the c-dimensional space), it is necessary to restore the form of the a priori unknown surface $f_c(x)$ in the (c + 1)-dimensional feature space. Difficulties in constructing adequate multivariate statistical models are due to the fact that the methodology of data processing in the presence of correlations is based on the assumption that the distributions under consideration are normal. At the same time, the distributions of real multichannel data often have a non-Gaussian form and are defined over a finite interval of admissible values. To approximate such distributions, one can use the multivariate $S_B$-Johnson distribution
$$f_p(x)=(2\pi)^{-p/2}|R|^{-1/2}\prod_{\kappa=1}^{p}\frac{\eta_\kappa\lambda_\kappa}{(x_\kappa-\varepsilon_\kappa)(\lambda_\kappa+\varepsilon_\kappa-x_\kappa)}\times\exp\left[-\frac{1}{2}\sum_{\kappa,\upsilon=1}^{p}R^{-1}_{\kappa\upsilon}\left(\gamma_\kappa+\eta_\kappa\ln\frac{x_\kappa-\varepsilon_\kappa}{\lambda_\kappa+\varepsilon_\kappa-x_\kappa}\right)\left(\gamma_\upsilon+\eta_\upsilon\ln\frac{x_\upsilon-\varepsilon_\upsilon}{\lambda_\upsilon+\varepsilon_\upsilon-x_\upsilon}\right)\right],$$
where ε is the displacement (shift) parameter, λ is the scale parameter, η and γ are the shape parameters of the distribution, and R is the correlation matrix.
The disadvantage of the Johnson distribution is the lack of a direct connection between the estimates of the sample moments and the distribution parameters $\theta=\{\varepsilon,\lambda,\eta,\gamma\}$. Methods for estimating these parameters for each component of the feature vector are iterative and reduce to solving an optimization problem of the form
$$F(\theta)=\int\left[H(x_\kappa)-f(x_\kappa;\theta)\right]^2\,dx_\kappa\to\min,$$
where $H(x_\kappa)$ is the empirical distribution (histogram) of the κ-th component.
Based on the results of constructing one-dimensional statistical models of the components of the feature vector for each class, it is possible to obtain a matrix of parameters of a multivariate distribution
$$\Theta_k=\left\{\varepsilon_{\kappa k},\lambda_{\kappa k},\eta_{\kappa k},\gamma_{\kappa k}\right\}_{p\times 4}.$$
The elements of the sample correlation matrix R are found as
$$\hat{R}_{\kappa\upsilon}=\frac{1}{NM}\sum_{i=1}^{N}\sum_{j=1}^{M}z_\kappa(i,j)\,z_\upsilon(i,j),$$
where z is a normal random variable with zero mean and unit variance, obtained by transforming the original sample x:
$$z=\gamma+\eta\ln\frac{x-\varepsilon}{\lambda+\varepsilon-x}.$$
The obtained multidimensional models of class references are used to assign each analyzed pixel to a particular class based on the values of the likelihood functions.
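The one-dimensional fitting step (17) and the normalizing transform (19) can be sketched as follows. The function names are ours, and the histogram least-squares fit uses a generic Nelder–Mead search rather than the authors' specific iterative procedure:

```python
import numpy as np
from scipy import optimize

def sb_pdf(x, eps, lam, eta, gam):
    """Univariate Johnson S_B density on the interval (eps, eps + lam)."""
    u = (x - eps) / (lam + eps - x)
    z = gam + eta * np.log(u)
    return eta * lam / (np.sqrt(2 * np.pi) * (x - eps) * (lam + eps - x)) * np.exp(-0.5 * z ** 2)

def fit_sb(sample, bins=64):
    """Estimate theta = (eps, lam, eta, gam) by the histogram least-squares fit (17)."""
    h, edges = np.histogram(sample, bins=bins, density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    span = sample.max() - sample.min()

    def loss(theta):
        eps, lam, eta, gam = theta
        inside = (mid > eps) & (mid < eps + lam)   # density is zero outside the support
        f = np.zeros_like(mid)
        f[inside] = sb_pdf(mid[inside], eps, lam, eta, gam)
        return np.sum((h - f) ** 2)

    theta0 = (sample.min() - 0.05 * span, 1.1 * span, 1.0, 0.0)
    return optimize.minimize(loss, theta0, method="Nelder-Mead").x

def to_normal(sample, theta):
    """z = gamma + eta * ln((x - eps) / (lam + eps - x)), per (19); used to estimate R."""
    eps, lam, eta, gam = theta
    return gam + eta * np.log((sample - eps) / (lam + eps - sample))
```

With each component transformed to z, the sample correlation matrix $\hat{R}$ of (18) is just the correlation of the z values across components.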

4.2. ML Classifier Training

To study the effect of the compression procedure on the classification results, we have taken two multichannel images of 512 × 512 pixels obtained from the Sentinel-2 satellite (Figure 12). It has been assumed that each image contains four classes of objects: 1—Urban, 2—Water, 3—Vegetation, and 4—Bare soil. Based on factual data on the territory represented in these images (Kharkiv and its environs, Ukraine), relatively homogeneous fragments of images representing separate classes have been identified. Each of the selected fragments was marked with a conditional color corresponding to a certain class: Urban—yellow, Water—blue, Vegetation—green, and Bare soil—black. The sets of reference marked pixels have been divided into two non-overlapping subsets: training and control (verification) samples.
The marked areas (sets of reference pixels) have been divided into two subsets, which were used for training and assessing the quality of the classifier (Figure 14 and Figure 15). At the same time, it has been assumed that these subsets can partially overlap. The volumes of the training samples have been of the order of (4… 20) × 103 pixels, the volumes of the verification samples have been several times larger ((7… 50) × 103 pixels).
Figure 16 shows the empirical distributions of the spectral features G, B for the four classes of objects in the test image in Figure 12a, together with graphs of the densities of Johnson's SB-distribution that approximate them. As one can see, there is substantial overlap of the classes in the feature space.
After obtaining the reference class descriptions, pixel-by-pixel classification has been carried out according to the maximum likelihood criterion. To assess the reliability of the classification, the control samples have been used. The percentage of correctly recognized patterns of the kth class, $Q_k$, served as an empirical estimate of the probability of correct recognition of the kth class, $P_{kk}$. The estimate of the overall probability of correct recognition (quality criterion) for unknown (i.e., equiprobable) a priori class probabilities was determined as
$$P_{total}=\frac{1}{K}\sum_{k=1}^{K}Q_k.$$
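The empirical estimates $Q_k$ and $P_{total}$ can be computed from control-sample labels as follows (a minimal sketch with our own function names):

```python
import numpy as np

def class_accuracies(true_labels, predicted, n_classes):
    """Q_k: fraction of control pixels of class k that are recognized correctly."""
    q = np.empty(n_classes)
    for k in range(n_classes):
        mask = true_labels == k
        q[k] = np.mean(predicted[mask] == k)
    return q

def p_total(q):
    """Overall probability of correct recognition for equiprobable classes."""
    return q.mean()
```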

5. Analysis of Classification Characteristics

Classification accuracy depends on many factors, including the classifier used and its parameters, the methodology of its training, image properties, and compression parameters. Here, the classifier type, its parameters, and the methodology of its training are fixed. Two images of different complexity are analyzed in this section. The main emphasis is on the impact of the compression parameters.
Let us start by considering the image with the simpler structure (Figure 12a) and analyze in more detail the confusion matrices for compression with DAT of depth 1 for different MAD values. The obtained results are presented in Table 6.
Analysis of the data in Table 6 shows the following. First, the classes are recognized with rather different probabilities. The class Water is usually recognized best, although this is not the case for MAD = 16. The variations of the probability of correct recognition for the class Water (P22) are due to two factors.
First, this class has substantial overlap of feature distributions with other classes, and the probability density functions for this class are "narrow" (see Figure 17). Second, distortions due to lossy compression, in particular the mean shift over large homogeneous areas, can lead to misclassifications. This effect is illustrated by the two classification maps in Figure 18. For MAD = 16, there are many misclassifications (pixels that belong to the class Water are assigned to the class Urban and shown in yellow). The class Urban is recognized with approximately the same probability of correct recognition P11. The probability of correct recognition P33 for the class Vegetation does not change much. Finally, the probability of correct recognition P44 for the class Bare Soil changes little for small MAD values and reduces considerably for the largest MAD = 35.
Second, since there are overlaps in the feature space, there are misclassifications. In particular, pixels belonging to the class Bare Soil are often recognized as Urban and vice versa. This is not surprising and always happens in RS data classification for classes "close" to each other.
Third, the total probability of correct classification Ptotal depends on MAD. Equal to 0.876 for the original image, it equals 0.87 for MAD = 4, 0.866 for MAD = 7, 0.861 for MAD = 10, 0.825 for MAD = 16, 0.875 for MAD = 22, and 0.839 for MAD = 35. Thus, there is some tendency of Ptotal to decrease, with "local variations".
Let us now consider the results for other depths of DAT. The obtained probabilities are presented in Table 7, Table 8, Table 9 and Table 10.
As one can see, the probabilities for particular classes vary slightly depending on the depth of DAT and MAD, but not by much. They remain more stable than in the case of DAT with depth 1. Concerning the total probability of correct classification, a small degradation with increasing MAD is observed for depths 3 and 4, but the reduction can be considered acceptable if it does not exceed 0.02.
Let us now consider the second real-life test image, which has a complex structure (Figure 12b). Its classification maps for the original image and three values of MAD are presented in Figure 19. The comparison shows that there are no essential differences between the classification maps. Another observation is that there are quite a few misclassifications from the Vegetation class to Water (the authors from Kharkiv live or work in this region). The confusion matrix (Table 11) confirms this. Here, it is seen that the class Water is recognized worse than in the previous case. The class Urban is also recognized worse, while the classes Vegetation and Bare Soil are recognized better than in the previous case. Ptotal equals 0.811. For comparison, Table 12 gives an example of a confusion matrix for the compressed image. As one can see, there is no essential difference, at least in the probabilities P11, P22, P33, and P44. Ptotal = 0.787, i.e., a noticeable reduction of Ptotal takes place, and it is worth analyzing the probabilities for different MAD values and DAT depths.
Thus, let us consider data for different depths of DAT and different values of MAD. They are presented in Table 13, Table 14, Table 15, Table 16 and Table 17. As follows from the data analysis in these tables, there is a tendency of Ptotal to decrease as MAD increases. This is especially obvious for depth 1. For MAD = 34, a considerable reduction of P22 and P44 takes place.
The smallest reduction takes place for depth = 2. The results for depths 3, 4, and 5 are, in general, better than for depth = 1 but worse than for depth = 2. For depth = 2, the classification results are acceptable for all considered MAD values (even MAD = 36), because Ptotal for the compressed images is smaller than Ptotal for the original image by no more than 0.03.
Certainly, these two examples are not enough to obtain a complete picture of the classification accuracy of compressed images and to give final recommendations. Besides, classification results usually depend on the classifier applied. To partly overcome these shortcomings of the previous analysis, we have carried out additional experiments for other Sentinel-2 images of different complexity, a Landsat image, and a neural network classifier. The obtained data are briefly presented below (more detailed data can be found at the following link to the Google Drive folder: https://drive.google.com/drive/folders/14m7TLLM7o836yGzJo9NKlaUc9sLsL5VK?usp=sharing, accessed on 27 October 2021).
The experiments have been performed for 512 × 512 pixel fragments of Sentinel-2 and Landsat images (Figure 20). Table 18 presents the total probabilities of correct classification for the image in Figure 20a compressed with depth 2. The three-layer neural network (NN) classifier has been trained using the same fragments as the ML classifier. The verification fragments are also the same to ensure the correctness of the comparisons. The classifiers have been trained on non-compressed data. The analysis shows that the probabilities are practically at the same level for all MAD values except the last one, where a small reduction of Ptotal is observed. The classification accuracy for the NN classifier is slightly better, but not significantly.
Table 19 gives the results (Ptotal) for the image in Figure 20b. As one can see, the image with the more complicated structure is classified worse than the simpler one (compare the data in Table 18 and Table 19). Again, the NN classifier performs a little better than the ML one. Finally, there is a general tendency of Ptotal to decrease as the MAD of the introduced losses increases. Meanwhile, if MAD is less than 30, the reduction of classification accuracy is acceptable.
Finally, Table 20 presents a part of the results obtained for the Landsat image in Figure 20c using the ML classifier. The probabilities for the five particular classes and the total probability of correct classification are given. As one can see, the classification results are quite stable if the introduced distortions are not too large (MAD < 20, PSNR > 38 dB); then, a fast reduction of classification accuracy takes place as MAD increases (PSNR decreases).
In addition, in Figure 21, the results of compressing the images given in Figure 20 are presented. Figure 21a shows almost the same behavior of the dependence of PSNR on MAD for all three images. This means that this dependence depends only slightly on image content. In contrast, compression efficiency measured by CR depends significantly on image content. For instance, the test image shown in Figure 20b contains many small objects and sharp changes of color intensity. As one can see, CR for this image is the smallest for any considered value of PSNR (Figure 21b).

6. Discussion

Below, we discuss the obtained results and present some recommendations concerning their further applications.
First, computational complexity is of particular interest, especially when processing huge amounts of digital images. In [54], it has been shown that the computation of DAT coefficients is linear in the size of the processed data, i.e., the time complexity of DAT is O(N), where N is the number of pixels. However, one specific feature of digital devices and/or computational systems should be taken into account when applying DAC. In terms of time expenses, transferring data from one part of memory to another (needed in the calculation of DAT) can take more time than performing the arithmetic operations [56]. Note that a DAT of depth greater than 1 uses such transfers, and the time needed for them increases with the depth. For this reason, the use of a low-depth DAT can be recommended if high performance is required.
Second, it follows from the results obtained in the previous sections that DAT2 of depth 2 can be recommended, since its performance is the same as that of DAT versions with larger depth. Since this recommendation is consistent with the observation on computational complexity, the use of DAT2 with depth = 2 can be treated as a reasonable practical choice.
In practice, one might need to provide lossy compression of RS data with providing the desired quality characterized, e.g., by the desired PSNR. In this sense, although inequalities (7) and (12) are upper estimates, there is a possibility to obtain compressed data with loss of quality measured by PSNR, MAD, or RMSE which are close to the desired values. Consider this, e.g., for PSNR. Let a structure of DAT be fixed. For instance, let DAT1 of the depth 5 or DAT2 of the depth 2 be chosen. If PSNR = p is desired, then settings of DAC can be found as follows:
compute $\mathrm{RMSE}=r=255\cdot 10^{-p/20}$;
find $\mathrm{MAD}=(r-b)/a$, where a and b are the parameters of the linear regression $y=ax+b$, $x=\mathrm{MAD}$, $y=\mathrm{RMSE}$ (here, the values presented in Table 1 can be used);
find UBMAD, using interpolation methods and data from Table 2 or Table 3 depending on the structure of DAT.
The computed value of UBMAD defines the settings of DAC. These settings provide distortions measured by PSNR that, with high probability, belong to an appropriately narrow neighborhood of p.
The UBMAD can also be found directly, using the dependence of the mean value of PSNR on UBMAD (see the data in Table 2 and Table 3). However, the error in providing a desired PSNR might be greater due to the non-linear dependence between these two parameters if linear interpolation is applied.
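The procedure of choosing the settings from a desired PSNR can be sketched as follows, assuming the regression coefficients of Table 1 and a tabulated UBMAD-to-mean-MAD dependence from Table 2 or Table 3 (the table values used in the test are placeholders, not the published ones):

```python
import numpy as np

def settings_for_psnr(p, a, b, ubmad_grid, mad_means):
    """Find the UBMAD value that, on average, yields the desired PSNR p.
    a, b: regression coefficients of RMSE = a*MAD + b (Table 1);
    ubmad_grid, mad_means: tabulated UBMAD values and corresponding mean MAD
    (Table 2 or Table 3), both assumed monotonically increasing."""
    r = 255.0 * 10 ** (-p / 20.0)          # target RMSE from the desired PSNR
    target_mad = (r - b) / a               # invert the linear regression
    # linear interpolation of the (mean MAD -> UBMAD) table
    return np.interp(target_mad, mad_means, ubmad_grid)
```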
In addition, as mentioned in Section 2.1, any matrix M can be processed by a DAT of arbitrary structure. However, in this case, if its rows or columns do not satisfy the length condition mentioned, the extension procedure must be applied. This leads to an increase in the number of DAT coefficients. Nevertheless, since atomic wavelets have zero mean [57], most of the extra DAT coefficients are equal to zero. Such data are well compressed by the proposed coding. When processing high-resolution images, a wide variety of DAT variants can be applied without a significant increase in additional data. Moreover, if DAC is considered as a data protection coder, then some increase in the compressed file size is insignificant.
Besides, it is shown that there is no significant difference in the mean value of CR provided by DAT1 and DAT2 (except for the case of depth 1) for any distortions measured by PSNR (see Figure 11). This means that a significant variation of the DAT structure does not cause significant changes in compression efficiency. Such a result makes it possible to achieve both compression and protection.
Finally, the algorithm DAC was compared with JPEG in [40,41,47]. It has been shown that, on average, DAC provides a higher compression ratio than JPEG for the same quality measured by PSNR. We note that, previously, Huffman codes and run-length encoding were used to compress the quantized DAT coefficients. In the current research, binary arithmetic coding is applied instead. In [65], it has been shown that this approach provides better compression of quantized DAT coefficients than the combination of Huffman codes with run-length encoding. Hence, the following statement is valid: on average, the DAC algorithm with binary arithmetic coding of quantized DAT coefficients compresses three-channel images better than JPEG with the same distortions measured by PSNR.
Furthermore, the proposed quality loss control mechanism provides distortions measured by MAD that are not greater than UBMAD, which defines the quality loss settings. It is only this value that is varied to obtain different quality losses. In Section 3, using statistical methods, it is shown that there is a linear dependence of RMSE on MAD, and the coefficients of the linear regression are provided. Hence, a mechanism for controlling the loss of quality measured by RMSE and PSNR is obtained. This result is of particular importance since the metric MAD is adequate only if it is small; otherwise, other metrics, especially RMSE and PSNR, should be used. Further, the inequality $\mathrm{MAD}\le\mathrm{UBMAD}$ is an upper bound: if the value of UBMAD is fixed, the actual MAD can be significantly smaller than UBMAD. This feature is shown in Table 2 and Table 3. Nevertheless, these tables provide the dependence of the mean value of MAD (also of RMSE, PSNR, and CR) and its deviation on UBMAD. Moreover, it is shown that a high percentage of the experimentally obtained values belongs to the segments $[E-2\sigma, E+2\sigma]$, where E is the mean value and σ is the deviation. In other words, the limits of the efficiency indicators are obtained for each structure of DAT and each value of UBMAD. This makes it possible to obtain the desired results in terms of MAD, RMSE, PSNR, and CR.
Finally, we have carried out verification of the proposed approach for some other three-channel images acquired by Sentinel-2 and then compressed by DAT2 with depth equal to 2. The obtained results and recommendations are similar to those presented for the two images used in our research above.
In the future, for an additional assessment of the quality and accuracy of the proposed methods, it will be useful to deal with more sensitive classification tasks, such as the classification of different crop types, and other applied problems.

7. Conclusions

In this paper, we have analyzed the task of lossy compression of three-channel images using discrete atomic transform. Two real-life images of different complexity have been considered. The quality of compressed images has been characterized in different ways: using MAD and RMSE (or PSNR) and applying probabilities of correct recognition, both total and for particular classes (ML classifiers have been used).
The following has been demonstrated:
-
Among the many versions of DAC, compression using DAT2 with depth equal to 2 can be recommended for practical use for the following reasons: (a) it has quite low computational complexity; (b) its rate/distortion characteristics are better than for DAC with DAT of depth 1 and practically the same as for DAT with larger depth values; (c) privacy protection is provided.
- Lossy compression based on DAT is controlled by UBMAD, but we have obtained approximations that allow recalculating UBMAD into MAD, RMSE, and PSNR and, thus, providing the desired quality of compressed images quite accurately and without iterations; this property is especially useful if compression has to be performed quickly, e.g., onboard a satellite or airborne carrier compressing large volumes of RS data;
- Classification results (obtained with the ML classifier) for lossy compressed data depend on image complexity; for the image of low complexity, lossy compression has a low negative impact on classification accuracy if MAD is less than 35 (i.e., PSNR is larger than about 34 dB); for the image of high complexity, lossy compression under the same conditions might reduce the total probability of correct classification by about 3%, which seems acceptable in practice;
- DAT-based compression performs better than JPEG, and one of its main advantages is that data privacy can easily be provided.
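Consistent with the values reported in Tables 7–10 and 13–15, the total probability of correct classification appears to be the unweighted mean of the per-class probabilities (the diagonal of the confusion matrix); a minimal sketch under that assumption:

```python
def total_probability(per_class: dict) -> float:
    """Unweighted mean of per-class correct-classification probabilities
    (assumption: classes contribute equally, as the tables suggest)."""
    return sum(per_class.values()) / len(per_class)

# Per-class probabilities from Table 13 (depth 1, MAD = 4):
p = total_probability({
    "Urban": 0.6469, "Water": 0.8424,
    "Vegetation": 0.9058, "Bare soil": 0.7964,
})
# p ≈ 0.798, matching the reported Ptotal for that column
```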
In the future, we plan to analyze more images and consider other classifiers; besides, we plan to extend DAT-based compression to RS data with more than three channels and to other applied tasks, such as crop type classification, monitoring of forest cuts, etc.

Author Contributions

V.M. created different versions of DAC and compared them; I.V. created and trained the ML classifier; V.L. carried out the analysis of dependencies between quality indicators; A.S. performed the analysis of classification results; N.K. formulated requirements to image compression; B.V. was responsible for preparing the paper draft and also carried out editing and supervision. All authors have read and agreed to the published version of the manuscript.

Funding

The authors gratefully acknowledge the funding received from the National Research Foundation of Ukraine (state budget projects 2020.01/0273, 2020.01/0268, and 2020.02/0284) and from the French Ministries of Europe and Foreign Affairs (MEAE) and Higher Education, Research and Innovation (MESRI) through the PHC Dnipro 2021 project no. 46844Z.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors gratefully acknowledge the funding received from the National Research Foundation of Ukraine under state budget projects 2020.01/0273 “Intelligent models and methods for determining land degradation indicators based on satellite data” and 2020.01/0268 “Information technology for fire danger assessment and fire monitoring in natural ecosystems based on satellite data” (NRFU Competition “Science for human security and society”) and 2020.02/0284 “Geospatial models and information technologies of satellite monitoring of smart city problems” (NRFU Competition “Leading and Young Scientists Research Support”). The research performed in this manuscript was also partially supported by the French Ministries of Europe and Foreign Affairs (MEAE) and Higher Education, Research and Innovation (MESRI) through the PHC Dnipro 2021 project no. 46844Z.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Discrete atomic compression of full-color digital images.
Figure 2. Discrete atomic transform of an array.
Figure 3. Discrete atomic transform of a matrix: the procedure DAT1.
Figure 4. Discrete atomic transform of a matrix: the procedure DAT2 of depth 1.
Figure 5. Discrete atomic transform of a matrix: the procedure DAT2 of depth n.
Figure 6. A mixture of DAT1 and DAT2.
Figure 7. Small copies of test images.
Figure 8. Test images “Southern Bavaria” (a) and “Jewels of the Maldives” (b).
Figure 9. Routes for in situ data collection for the territory of Ukraine.
Figure 10. Scatter plots of RMSE vs. MAD for the test images processed by DAC with different structures of DAT: DAT1 of the depth 5 (a), DAT2 of the depth 1 (b), DAT2 of the depth 2 (c), DAT2 of the depth 3 (d), DAT2 of the depth 4 (e), DAT2 of the depth 5 (f).
Figure 11. Dependence of the mean value of CR on the mean value of PSNR.
Figure 12. Test images SS3 (a) and SS4 (b): real life Sentinel-2 images for country-side (a) and city (b) areas in Kharkiv region, Ukraine.
Figure 13. Processing of the test images SS3 (a) and SS4 (b): dependence of CR on PSNR.
Figure 14. Three-channel fragments used for classifier training (a) and ground truth map (b) for the test image in Figure 12a.
Figure 15. Three-channel fragments used for classifier training (a) and ground truth map (b) for the test image in Figure 12b.
Figure 16. Histograms of the brightness features B and G for classes on the test image in Figure 12a: B|Urban (a), B|Water (b), G|Vegetation (c), and G|Bare soil (d).
Figure 17. Approximated distributions of features (R (a), G (b), and B (c) values) for four classes; red color curves—Urban, blue color curves—Water, green color curves—Vegetation, and black color curves—Bare Soil.
Figure 18. Classification maps for images compressed by DAT of depth 1 with MAD = 4 (a) and MAD = 16 (b).
Figure 19. Classification maps for the original image (a) and images compressed by DAT with depth 1 with MAD = 4 (b), MAD = 11 (c), and MAD = 34 (d).
Figure 20. Processed fragments of Sentinel-2 (a,b) and Landsat (c) three-channel images.
Figure 21. Results of compression by DAC with DAT of the depth 2 of test images, shown in Figure 20: dependence of PSNR on MAD (a), the dependence of CR on PSNR (b).
Table 1. Indicators of correlation of MAD and RMSE for different structures of DAT.

| Structure of DAT | Pearson's Correlation Coefficient | Spearman's Rank Correlation Coefficient | Regression Parameter a | Regression Parameter b | Value of Test Statistic F̄ |
|---|---|---|---|---|---|
| DAT1 of the depth 5 | 0.936 | 0.940 | 0.110 | 0.264 | 2802.34 |
| DAT2 of the depth 1 | 0.988 | 0.985 | 0.145 | 0.173 | 23,608.65 |
| DAT2 of the depth 2 | 0.963 | 0.967 | 0.120 | 0.285 | 7624.15 |
| DAT2 of the depth 3 | 0.965 | 0.954 | 0.118 | 0.272 | 8181.95 |
| DAT2 of the depth 4 | 0.960 | 0.967 | 0.116 | 0.270 | 7172.21 |
| DAT2 of the depth 5 | 0.948 | 0.952 | 0.118 | 0.222 | 5292.41 |
Table 2. DAT1 of the depth 5: indicators of compression efficiency.

| Indicator | Statistic | UBMAD = 36 | UBMAD = 63 | UBMAD = 95 | UBMAD = 155 |
|---|---|---|---|---|---|
| MAD | min | 8 | 14 | 20 | 30 |
| | max | 12 | 20 | 30 | 49 |
| | mean (E) | 9.030 | 16.890 | 24.840 | 38.710 |
| | deviation (σ) | 0.688 | 1.154 | 2.135 | 4.103 |
| | % of values in [E − σ, E + σ] | 67 | 81 | 66 | 65 |
| | % of values in [E − 2σ, E + 2σ] | 97 | 98 | 96 | 94 |
| | % of values in [E − 3σ, E + 3σ] | 99 | 100 | 100 | 100 |
| RMSE | min | 0.972 | 1.405 | 1.644 | 1.887 |
| | max | 1.210 | 2.524 | 3.849 | 5.806 |
| | mean (E) | 1.138 | 2.204 | 3.098 | 4.451 |
| | deviation (σ) | 0.056 | 0.272 | 0.500 | 0.856 |
| | % of values in [E − σ, E + σ] | 77 | 69 | 68 | 68 |
| | % of values in [E − 2σ, E + 2σ] | 93 | 96 | 95 | 96 |
| | % of values in [E − 3σ, E + 3σ] | 100 | 100 | 100 | 100 |
| PSNR, dB | min | 46.473 | 40.090 | 36.424 | 32.854 |
| | max | 48.378 | 45.180 | 43.815 | 42.615 |
| | mean (E) | 47.017 | 41.340 | 38.433 | 35.343 |
| | deviation (σ) | 0.444 | 1.155 | 1.517 | 1.856 |
| | % of values in [E − σ, E + σ] | 78 | 78 | 70 | 73 |
| | % of values in [E − 2σ, E + 2σ] | 93 | 96 | 94 | 95 |
| | % of values in [E − 3σ, E + 3σ] | 99 | 99 | 99 | 99 |
| CR | min | 1.236 | 1.595 | 1.817 | 2.146 |
| | max | 4.734 | 14.326 | 24.138 | 44.525 |
| | mean (E) | 2.269 | 3.769 | 4.940 | 7.376 |
| | deviation (σ) | 0.633 | 1.740 | 2.833 | 5.297 |
| | % of values in [E − σ, E + σ] | 75 | 85 | 90 | 92 |
| | % of values in [E − 2σ, E + 2σ] | 95 | 95 | 96 | 97 |
| | % of values in [E − 3σ, E + 3σ] | 98 | 99 | 98 | 98 |
Table 3. DAT2 of the depth 2: indicators of compression efficiency.

| Indicator | Statistic | UBMAD = 7 | UBMAD = 14 | UBMAD = 20 | UBMAD = 25 | UBMAD = 46 | UBMAD = 64 |
|---|---|---|---|---|---|---|---|
| MAD | min | 5 | 9 | 13 | 15 | 27 | 34 |
| | max | 6 | 12 | 17 | 21 | 36 | 52 |
| | mean (E) | 5.030 | 10.180 | 14.480 | 17.860 | 31.300 | 42.360 |
| | deviation (σ) | 0.171 | 0.575 | 0.771 | 1.005 | 1.851 | 2.952 |
| | % of values in [E − σ, E + σ] | 97 | 70 | 85 | 67 | 74 | 68 |
| | % of values in [E − 2σ, E + 2σ] | 97 | 91 | 99 | 96 | 97 | 97 |
| | % of values in [E − 3σ, E + 3σ] | 97 | 98 | 99 | 99 | 100 | 99 |
| RMSE | min | 0.704 | 1.146 | 1.367 | 1.405 | 2.135 | 3.276 |
| | max | 0.817 | 1.543 | 2.503 | 3.093 | 5.194 | 6.915 |
| | mean (E) | 0.774 | 1.452 | 2.172 | 2.569 | 4.043 | 5.287 |
| | deviation (σ) | 0.031 | 0.091 | 0.278 | 0.396 | 0.724 | 0.873 |
| | % of values in [E − σ, E + σ] | 63 | 85 | 69 | 68 | 67 | 66 |
| | % of values in [E − 2σ, E + 2σ] | 96 | 93 | 96 | 96 | 97 | 98 |
| | % of values in [E − 3σ, E + 3σ] | 100 | 99 | 100 | 100 | 100 | 100 |
| PSNR, dB | min | 49.884 | 44.360 | 40.158 | 38.322 | 33.819 | 31.334 |
| | max | 51.179 | 46.943 | 45.414 | 45.172 | 41.541 | 37.827 |
| | mean (E) | 50.356 | 44.909 | 41.471 | 40.046 | 36.146 | 33.788 |
| | deviation (σ) | 0.348 | 0.571 | 1.196 | 1.454 | 1.668 | 1.483 |
| | % of values in [E − σ, E + σ] | 64 | 87 | 77 | 72 | 70 | 68 |
| | % of values in [E − 2σ, E + 2σ] | 96 | 93 | 95 | 95 | 95 | 97 |
| | % of values in [E − 3σ, E + 3σ] | 100 | 98 | 99 | 99 | 99 | 100 |
| CR | min | 1.210 | 1.425 | 1.645 | 1.754 | 2.127 | 2.375 |
| | max | 4.174 | 7.829 | 13.806 | 16.46 | 27.301 | 38.611 |
| | mean (E) | 2.138 | 2.909 | 3.838 | 4.343 | 6.484 | 8.286 |
| | deviation (σ) | 0.533 | 0.983 | 1.624 | 1.941 | 3.328 | 4.659 |
| | % of values in [E − σ, E + σ] | 74 | 77 | 80 | 81 | 78 | 79 |
| | % of values in [E − 2σ, E + 2σ] | 96 | 96 | 96 | 96 | 96 | 96 |
| | % of values in [E − 3σ, E + 3σ] | 98 | 98 | 99 | 99 | 99 | 99 |
Table 4. Results of compressing SS3 and SS4 using DAC with DAT1 of the depth 5.

| Indicator | Image | UBMAD = 36 | UBMAD = 63 | UBMAD = 95 | UBMAD = 155 |
|---|---|---|---|---|---|
| MAD | SS3 | 10 | 14 | 18 | 29 |
| | SS4 | 8 | 14 | 20 | 41 |
| RMSE | SS3 | 1.213 | 2.317 | 3.025 | 4.306 |
| | SS4 | 1.222 | 2.498 | 3.604 | 5.198 |
| PSNR, dB | SS3 | 46.451 | 40.832 | 38.518 | 35.449 |
| | SS4 | 46.389 | 40.178 | 36.994 | 33.815 |
| CR | SS3 | 2.347 | 3.906 | 5.134 | 8.103 |
| | SS4 | 1.898 | 2.827 | 3.529 | 4.863 |
Table 5. Results of compressing SS3 and SS4 using DAC with DAT2 of the depth 2.

| Indicator | Image | UBMAD = 7 | UBMAD = 14 | UBMAD = 20 | UBMAD = 25 | UBMAD = 46 | UBMAD = 64 |
|---|---|---|---|---|---|---|---|
| MAD | SS3 | 5 | 9 | 12 | 15 | 27 | 34 |
| | SS4 | 4 | 9 | 13 | 16 | 29 | 36 |
| RMSE | SS3 | 0.814 | 1.536 | 2.270 | 2.617 | 3.869 | 4.949 |
| | SS4 | 0.817 | 1.552 | 2.460 | 2.964 | 4.663 | 5.919 |
| PSNR, dB | SS3 | 49.920 | 44.405 | 41.009 | 39.774 | 36.379 | 34.240 |
| | SS4 | 49.889 | 44.311 | 40.312 | 38.694 | 34.757 | 32.687 |
| CR | SS3 | 2.242 | 3.060 | 4.164 | 4.840 | 7.806 | 10.398 |
| | SS4 | 1.836 | 2.363 | 2.987 | 3.343 | 4.830 | 6.047 |
Table 6. Probabilities of correct classifications for particular classes depending on image quality.

Original (uncompressed) image:

| Class | Urban | Water | Vegetation | Bare soil |
|---|---|---|---|---|
| Urban | 0.912 | 1.71 × 10−4 | 5.05 × 10−3 | 0.083 |
| Water | 4.39 × 10−3 | 0.995 | 7.67 × 10−4 | 0 |
| Vegetation | 0.02 | 0.056 | 0.889 | 0.035 |
| Bare soil | 0.294 | 0 | 1.57 × 10−4 | 0.706 |

Compressed image, MAD = 4:

| Class | Urban | Water | Vegetation | Bare soil |
|---|---|---|---|---|
| Urban | 0.916 | 1.71 × 10−4 | 5.39 × 10−3 | 0.079 |
| Water | 0.018 | 0.972 | 0.011 | 0 |
| Vegetation | 0.02 | 0.051 | 0.902 | 0.026 |
| Bare soil | 0.31 | 0 | 7.85 × 10−5 | 0.690 |

Compressed image, MAD = 7:

| Class | Urban | Water | Vegetation | Bare soil |
|---|---|---|---|---|
| Urban | 0.921 | 1.28 × 10−4 | 5.05 × 10−3 | 0.074 |
| Water | 0.023 | 0.956 | 0.021 | 1.47 × 10−5 |
| Vegetation | 0.022 | 0.048 | 0.907 | 0.023 |
| Bare soil | 0.32 | 0 | 7.85 × 10−5 | 0.680 |

Compressed image, MAD = 10:

| Class | Urban | Water | Vegetation | Bare soil |
|---|---|---|---|---|
| Urban | 0.918 | 8.56 × 10−5 | 5.48 × 10−3 | 0.077 |
| Water | 0.027 | 0.954 | 0.019 | 2.95 × 10−5 |
| Vegetation | 0.022 | 0.046 | 0.91 | 0.022 |
| Bare soil | 0.338 | 0 | 2.36 × 10−4 | 0.661 |

Compressed image, MAD = 16:

| Class | Urban | Water | Vegetation | Bare soil |
|---|---|---|---|---|
| Urban | 0.918 | 1.71 × 10−4 | 6.04 × 10−3 | 0.076 |
| Water | 0.155 | 0.814 | 0.031 | 0 |
| Vegetation | 0.026 | 0.045 | 0.905 | 0.024 |
| Bare soil | 0.335 | 0 | 7.85 × 10−5 | 0.664 |

Compressed image, MAD = 22:

| Class | Urban | Water | Vegetation | Bare soil |
|---|---|---|---|---|
| Urban | 0.926 | 1.71 × 10−4 | 4.67 × 10−3 | 0.069 |
| Water | 0.035 | 0.945 | 0.020 | 0 |
| Vegetation | 0.025 | 0.039 | 0.925 | 0.011 |
| Bare soil | 0.295 | 0 | 0 | 0.705 |

Compressed image, MAD = 35:

| Class | Urban | Water | Vegetation | Bare soil |
|---|---|---|---|---|
| Urban | 0.915 | 2.14 × 10−4 | 6.76 × 10−3 | 0.078 |
| Water | 0.05 | 0.935 | 0.014 | 1.47 × 10−5 |
| Vegetation | 0.038 | 0.041 | 0.89 | 0.031 |
| Bare soil | 0.385 | 0 | 0 | 0.615 |
Table 7. Probabilities for classes and total probabilities of correct classification for depth 2 and different MAD.

| Classes | MAD = 5 | MAD = 9 | MAD = 12 | MAD = 15 | MAD = 27 | MAD = 34 |
|---|---|---|---|---|---|---|
| Urban | 0.917 | 0.919 | 0.919 | 0.919 | 0.916 | 0.917 |
| Water | 0.967 | 0.951 | 0.960 | 0.965 | 0.965 | 0.970 |
| Vegetation | 0.902 | 0.908 | 0.910 | 0.910 | 0.908 | 0.906 |
| Bare soil | 0.690 | 0.670 | 0.666 | 0.680 | 0.691 | 0.679 |
| Ptotal | 0.869 | 0.862 | 0.864 | 0.869 | 0.870 | 0.868 |
Table 8. Probabilities for classes and total probabilities of correct classification for depth 3 and different MAD.

| Classes | MAD = 5 | MAD = 12 | MAD = 13 | MAD = 15 | MAD = 26 | MAD = 32 |
|---|---|---|---|---|---|---|
| Urban | 0.918 | 0.921 | 0.919 | 0.921 | 0.915 | 0.918 |
| Water | 0.963 | 0.954 | 0.955 | 0.958 | 0.964 | 0.936 |
| Vegetation | 0.905 | 0.908 | 0.909 | 0.905 | 0.895 | 0.895 |
| Bare soil | 0.689 | 0.659 | 0.663 | 0.650 | 0.667 | 0.649 |
| Ptotal | 0.869 | 0.861 | 0.862 | 0.859 | 0.860 | 0.850 |
Table 9. Probabilities for classes and total probabilities of correct classification for depth 4 and different MAD.

| Classes | MAD = 5 | MAD = 11 | MAD = 14 | MAD = 20 | MAD = 29 | MAD = 42 |
|---|---|---|---|---|---|---|
| Urban | 0.917 | 0.919 | 0.919 | 0.918 | 0.918 | 0.917 |
| Water | 0.962 | 0.953 | 0.954 | 0.968 | 0.957 | 0.931 |
| Vegetation | 0.904 | 0.907 | 0.906 | 0.904 | 0.893 | 0.902 |
| Bare soil | 0.688 | 0.659 | 0.657 | 0.663 | 0.663 | 0.648 |
| Ptotal | 0.868 | 0.860 | 0.859 | 0.863 | 0.858 | 0.850 |
Table 10. Probabilities for classes and total probabilities of correct classification for depth 5 and different MAD.

| Classes | MAD = 6 | MAD = 11 | MAD = 13 | MAD = 21 | MAD = 27 | MAD = 35 |
|---|---|---|---|---|---|---|
| Urban | 0.917 | 0.919 | 0.921 | 0.919 | 0.919 | 0.916 |
| Water | 0.964 | 0.954 | 0.954 | 0.966 | 0.966 | 0.975 |
| Vegetation | 0.904 | 0.906 | 0.906 | 0.902 | 0.895 | 0.896 |
| Bare soil | 0.681 | 0.651 | 0.650 | 0.674 | 0.666 | 0.665 |
| Ptotal | 0.866 | 0.857 | 0.858 | 0.865 | 0.862 | 0.863 |
Table 11. Confusion matrix for original image in Figure 12b.

| Class | Urban | Water | Vegetation | Bare soil |
|---|---|---|---|---|
| Urban | 0.645 | 2.9 × 10−2 | 9.6 × 10−2 | 0.230 |
| Water | 4.3 × 10−2 | 0.860 | 8.7 × 10−2 | 9.7 × 10−3 |
| Vegetation | 4.87 × 10−3 | 0.052 | 0.943 | 1.8 × 10−4 |
| Bare soil | 0.160 | 7.54 × 10−3 | 3.8 × 10−2 | 0.795 |
Table 12. Confusion matrix for the image compressed with DAT of depth 1 with MAD = 11 (columns give the probability of decision for each class).

| Class | Urban | Water | Vegetation | Bare soil |
|---|---|---|---|---|
| Urban | 0.647 | 0.0249 | 0.0975 | 0.2307 |
| Water | 0.0534 | 0.8236 | 0.1132 | 9.84 × 10⁻³ |
| Vegetation | 8.71 × 10⁻³ | 0.0911 | 0.90 | 2.32 × 10⁻⁴ |
| Bare soil | 0.1729 | 6.47 × 10⁻³ | 0.0425 | 0.7781 |
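The diagonal of the confusion matrix in Table 12 gives the per-class probabilities of correct classification, and the tabulated Ptotal values agree with their unweighted mean (compare the MAD = 11 column of Table 13). A minimal sketch of this relation; the averaging rule is inferred from the tabulated values rather than stated explicitly in the text:

```python
import numpy as np

# Confusion matrix for DAT of depth 1 with MAD = 11 (Table 12);
# rows are true classes, columns are decisions,
# in the order Urban, Water, Vegetation, Bare soil.
conf = np.array([
    [0.647,    0.0249,   0.0975, 0.2307],
    [0.0534,   0.8236,   0.1132, 9.84e-3],
    [8.71e-3,  0.0911,   0.90,   2.32e-4],
])
conf = np.vstack([conf, [0.1729, 6.47e-3, 0.0425, 0.7781]])

# Per-class probability of correct classification = diagonal entries.
p_class = np.diag(conf)

# Total probability of correct classification as the unweighted mean
# over classes; ≈ 0.787, matching the MAD = 11 column of Table 13.
p_total = p_class.mean()
```

Each row of the matrix sums to (approximately) one, which is a convenient sanity check when transcribing such tables.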
Table 13. Probabilities for classes and total probabilities of correct classification for depth 1 and different MAD.

| Classes | MAD = 4 | MAD = 7 | MAD = 11 | MAD = 16 | MAD = 25 | MAD = 34 |
|---|---|---|---|---|---|---|
| Urban | 0.6469 | 0.6483 | 0.6469 | 0.6488 | 0.6565 | 0.6733 |
| Water | 0.8424 | 0.8316 | 0.8236 | 0.8201 | 0.8428 | 0.6991 |
| Vegetation | 0.9058 | 0.8938 | 0.9000 | 0.8994 | 0.8797 | 0.8650 |
| Bare soil | 0.7964 | 0.7849 | 0.7781 | 0.7652 | 0.7526 | 0.6035 |
| Ptotal | 0.798 | 0.790 | 0.787 | 0.783 | 0.783 | 0.710 |
Table 14. Probabilities for classes and total probabilities of correct classification for depth 2 and different MAD.

| Classes | MAD = 4 | MAD = 9 | MAD = 13 | MAD = 16 | MAD = 29 | MAD = 36 |
|---|---|---|---|---|---|---|
| Urban | 0.6476 | 0.6471 | 0.6479 | 0.6479 | 0.6536 | 0.6603 |
| Water | 0.8409 | 0.8287 | 0.8247 | 0.8232 | 0.8282 | 0.8290 |
| Vegetation | 0.9027 | 0.8952 | 0.9012 | 0.8999 | 0.8951 | 0.9016 |
| Bare soil | 0.7915 | 0.7827 | 0.7729 | 0.7763 | 0.7844 | 0.7799 |
| Ptotal | 0.796 | 0.788 | 0.787 | 0.787 | 0.790 | 0.793 |
Table 15. Probabilities for classes and total probabilities of correct classification for depth 3 and different MAD.

| Classes | MAD = 5 | MAD = 10 | MAD = 14 | MAD = 15 | MAD = 30 | MAD = 40 |
|---|---|---|---|---|---|---|
| Urban | 0.6482 | 0.6459 | 0.6462 | 0.6470 | 0.6540 | 0.6528 |
| Water | 0.8350 | 0.8246 | 0.8230 | 0.8237 | 0.8139 | 0.8036 |
| Vegetation | 0.8997 | 0.8992 | 0.9002 | 0.8996 | 0.8880 | 0.9109 |
| Bare soil | 0.7940 | 0.7809 | 0.7781 | 0.7657 | 0.7651 | 0.7819 |
| Ptotal | 0.794 | 0.788 | 0.787 | 0.784 | 0.780 | 0.787 |
Table 16. Probabilities for classes and total probabilities of correct classification for depth 4 and different MAD.

| Classes | MAD = 6 | MAD = 12 | MAD = 15 | MAD = 21 | MAD = 30 | MAD = 44 |
|---|---|---|---|---|---|---|
| Urban | 0.6482 | 0.6481 | 0.6501 | 0.6496 | 0.6545 | 0.6707 |
| Water | 0.8358 | 0.8247 | 0.8204 | 0.8154 | 0.8189 | 0.7712 |
| Vegetation | 0.8912 | 0.8969 | 0.8975 | 0.8949 | 0.8860 | 0.8866 |
| Bare soil | 0.7923 | 0.7759 | 0.7646 | 0.7661 | 0.7775 | 0.7498 |
| Ptotal | 0.792 | 0.786 | 0.783 | 0.782 | 0.784 | 0.770 |
Table 17. Probabilities for classes and total probabilities of correct classification for depth 5 and different MAD.

| Classes | MAD = 7 | MAD = 11 | MAD = 15 | MAD = 22 | MAD = 32 | MAD = 35 |
|---|---|---|---|---|---|---|
| Urban | 0.6496 | 0.6489 | 0.6505 | 0.6521 | 0.6528 | 0.6668 |
| Water | 0.8431 | 0.8300 | 0.8255 | 0.8121 | 0.7986 | 0.8079 |
| Vegetation | 0.9021 | 0.9017 | 0.8995 | 0.8903 | 0.8854 | 0.8949 |
| Bare soil | 0.7912 | 0.7700 | 0.7618 | 0.7704 | 0.7694 | 0.7598 |
| Ptotal | 0.797 | 0.788 | 0.784 | 0.781 | 0.777 | 0.782 |
Table 18. Total probabilities of correct classification for depth 2 and different MAD for the image in Figure 20a using ML and NN classifiers.

| Classifier | MAD = 5 | MAD = 9 | MAD = 12 | MAD = 15 | MAD = 27 | MAD = 34 |
|---|---|---|---|---|---|---|
| ML | 0.869 | 0.862 | 0.864 | 0.869 | 0.870 | 0.862 |
| NN | 0.878 | 0.873 | 0.875 | 0.880 | 0.877 | 0.871 |
Table 19. Total probabilities of correct classification for depth 2 and different MAD for the image in Figure 20b using ML and NN classifiers.

| Classifier | MAD = 5 | MAD = 9 | MAD = 13 | MAD = 17 | MAD = 30 | MAD = 39 |
|---|---|---|---|---|---|---|
| ML | 0.842 | 0.836 | 0.827 | 0.829 | 0.821 | 0.814 |
| NN | 0.862 | 0.850 | 0.849 | 0.846 | 0.846 | 0.837 |
Table 20. Probabilities for classes and total probabilities of correct classification for depth 2 and different MAD for the image in Figure 20c using the ML classifier.

| Classes | Original | MAD = 4 | MAD = 7 | MAD = 11 | MAD = 18 | MAD = 24 | MAD = 37 |
|---|---|---|---|---|---|---|---|
| Soil | 0.747 | 0.740 | 0.741 | 0.744 | 0.736 | 0.695 | 0.651 |
| Grass | 0.812 | 0.810 | 0.811 | 0.810 | 0.796 | 0.798 | 0.343 |
| Water | 0.967 | 0.967 | 0.970 | 0.970 | 0.968 | 0.967 | 0.974 |
| Urban | 0.989 | 0.989 | 0.989 | 0.989 | 0.986 | 0.985 | 0.982 |
| Bushes | 0.812 | 0.805 | 0.806 | 0.807 | 0.797 | 0.718 | 0.701 |
| Ptotal | 0.865 | 0.862 | 0.863 | 0.864 | 0.857 | 0.833 | 0.730 |
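Tables of this kind can drive quality control directly: one selects the largest MAD whose Ptotal stays within a chosen tolerance of the value for the original image. A minimal sketch using the Ptotal row of Table 20; the 0.01 tolerance is an illustrative choice, not a recommendation from the paper:

```python
# Ptotal from Table 20: original image (key 0) and DAT-compressed
# versions (depth 2) for increasing MAD values.
p_total = {0: 0.865, 4: 0.862, 7: 0.863, 11: 0.864,
           18: 0.857, 24: 0.833, 37: 0.730}

def largest_acceptable_mad(p_total, tolerance=0.01):
    """Largest MAD whose loss of Ptotal w.r.t. the original is within tolerance."""
    p_orig = p_total[0]
    acceptable = [mad for mad, p in p_total.items()
                  if mad > 0 and p_orig - p <= tolerance]
    return max(acceptable) if acceptable else None

print(largest_acceptable_mad(p_total))  # -> 18 (MAD = 24 already loses 0.032)
```

This matches the qualitative behavior in Table 20: Ptotal is nearly flat up to MAD ≈ 18 and degrades sharply beyond it (classes such as Grass collapse at MAD = 37).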
Makarichev, V.; Vasilyeva, I.; Lukin, V.; Vozel, B.; Shelestov, A.; Kussul, N. Discrete Atomic Transform-Based Lossy Compression of Three-Channel Remote Sensing Images with Quality Control. Remote Sens. 2022, 14, 125. https://0-doi-org.brum.beds.ac.uk/10.3390/rs14010125
