Article
Peer-Review Record

Synthetic Minority Oversampling Technique for Optimizing Classification Tasks in Botnet and Intrusion-Detection-System Datasets

Appl. Sci. 2020, 10(3), 794; https://doi.org/10.3390/app10030794
by David Gonzalez-Cuautle 1, Aldo Hernandez-Suarez 1, Gabriel Sanchez-Perez 1, Linda Karina Toscano-Medina 1, Jose Portillo-Portillo 1, Jesus Olivares-Mercado 1, Hector Manuel Perez-Meana 1,* and Ana Lucila Sandoval-Orozco 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 8 December 2019 / Revised: 16 January 2020 / Accepted: 17 January 2020 / Published: 22 January 2020

Round 1

Reviewer 1 Report

The authors conducted a very interesting work related to the classification of botnet network traffic using different machine learning techniques. The authors used a very nice methodology and they used two realistic datasets to evaluate the approach. Nevertheless, some reflections should be taken into account to improve the paper quality:

 

Introduction: Overall, the authors should reinforce the explanation of the problem and what is the solution you provide and the improvements. It is not clear enough what is the problem authors will address and how it will improve that.

Related work: The authors focused on describing the different variants of SMOTE techniques but articles related to the classification network traffic are skipped. For instance, this paper "A Proposal for a New Way of Classifying Network Security Metrics: Study of the Information Collected through a Honeypot".

Feature Extraction and Selection: Formulation must be improved. Try to do an introduction to the concept as a description and then put the equation. Indeed, a tiny example (with data) can help to understand the data structure you handle and the operations applied.

Generation of new samples with SMOTE: I don't understand this: Xmin ∪ Xmaj = Xtrain and Xmin ∩ Xmaj = Xbalanced, because the union of Xmin and Xmaj compose of the complete Xtrain dataset how the intersection compose of the Xbalanced dataset?? Please clarify that maybe a tiny example can help to understand that.

Experimental Results: The experimentation is very nice in the use of a portfolio of algorithms and datasets. However, the discussion of results is very poor. A deeper discussion of results regarding the improvements in classification achieved compared to other approaches. I recommend to include a new section of discussion of results where authors face the results against the expected objectives. In my opinion, the ML application and the discussion of parameters must be also discussed. On the other hand, I need another type of discussion thinking about the usefulness of the approach, the application of the results in other scenarios, real scenarios where network traffic is generated in real-time and the implications on the application of your approach. Thus, I expect to see the time requires to build the datasets and the time of application of the approach if we want to apply that in real-time IDS systems.

Minor issues detected:

In general, I recommend to polish and proofread the complete paper to eliminate typos. For instance:

Change the sentence: "... It is also stated that in order compare a crafted dataset to real traffic, there"

Change intruders: "... works related to Botnet and inruders detection, t"

Change the sentence: "... information contained is by means of the emulation of a small business environmen ..."

Blank space in the brackets: "th a yi ∈ {1, , 2..., C}"

Change neighbourhood: "from in a certain neighborhood of Xi ∈ Xmin, which is compu"

Figure 2 is only a part of Figure 1, hence, the second figure must be skipped.

Change the verb serve: "because they are a projection that serve as an estimation of the performance of each classifier 333 model, when implemented in real data"

Point before however: "on (PCA), however, the impact on the resulting 343 predictive models was negatively affected due to the imbalance of classes, in addition to the training 344 data had a high variance, that is, the model was not able to correctly predict the valid"

Author Response

Response to Reviewers’ Comments

 

Title: Synthetic Minority Oversampling Technique for Optimizing Classification Tasks in Botnet and Intrusion-Detection-System Datasets

 

applsci-676070

 

 

We thank the Editor and the reviewers for providing comments and suggestions. We have made major revisions to the manuscript and addressed all comments provided. In this document, we provide answers and discussions to every question and comment.

1. Introduction: Overall, the authors should reinforce the explanation of the problem and what is the solution you provide and the improvements. It is not clear enough what is the problem authors will address and how it will improve that.

 

According to your comments and suggestions, in Section 1 - Introduction, the following text was modified and extended:

On this basis, Machine Learning (ML)-based solutions offer a sufficiently broad perspective on the early detection of a botnet attack. In this way, robust classification models can discover hidden malicious patterns in network flows, reveal notorious taxonomies, uncover particular attack dynamics, and distinguish unique features; important tasks that prebuilt security appliances may fail to assess [3]. However, for real-time botnet and IDS ML-based detection environments, data acquisition plays a major role, since the trade-off between dataset quality, quantity, and complexity reinforces the discriminative power of the chosen algorithms to effectively solve the initial formulation, thus reducing misclassification rates and increasing the trustworthiness of the implementation, a crucial step for protecting critical assets.

Although there are many publicly available datasets for botnets and IDSs resembling real scenarios [5–7], some drawbacks have been identified regarding the attributes of network captures (traffic redundancy), attack diversity, labeling-procedure reliability, and data dimensionality, more specifically, the compensation between the number of benign and malicious samples [8]. In supervised ML-based problems, when a certain number of classes are not equally distributed, the data are said to be unbalanced, impacting the algorithm's capabilities to aptly learn from the samples of the predominant class and leading to a degradation of classification performance. Considering [8,9], the credibility of an ML-based cyber–physical system depends on the predictive abilities of intelligent agents trained with a wide range of adversarial behavioral patterns, able to protect workloads against malicious activities. Indeed, data distribution has a great effect on the efficiency of different ML models [10,11];

thus, it is important to establish data-level strategies in the preprocessing and training stages. To deal with unbalanced datasets, two methods can be applied: resampling, which balances data by sample aggregation or deletion, and unbalanced learning [12], which seeks to improve the detection rate of minority classes.
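To make the resampling idea concrete, the minimal sketch below (a toy example assuming scikit-learn and the imbalanced-learn package; the data are synthetic, not the botnet or IDS records discussed in the manuscript) shows how random oversampling equalizes class counts:

```python
# Toy illustration of data-level resampling; assumes scikit-learn and
# imbalanced-learn are installed. The data are synthetic, not real traffic.
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import RandomOverSampler

# Roughly 95% "benign" (0) vs. 5% "malicious" (1) samples
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
print("before:", Counter(y))

ros = RandomOverSampler(random_state=0)  # balances by duplicating minority samples
X_bal, y_bal = ros.fit_resample(X, y)
print("after: ", Counter(y_bal))         # both classes now equally represented
```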

In contrast with other state-of-the-art experiments, in this work we developed a deep inspection of botnet- and IDS-related datasets, tackling the unbalancing issues that cause irregular learning rates, and optimizing the training criteria of a set of supervised-learning algorithms, producing strengthened models that can transform data points into actionable knowledge. To achieve this goal, data acquisition must meet the following considerations [16]:

 

Information ought to be from crude network flows as a main source of malicious-packet delivery; incorporation of a considerable collection of bot-based malware attacks obtained through environments closest to near-real contexts; data should cover requirements from operating cycles found in production deployments (working hours); and the proportion of benign traffic packets must outnumber that of malicious ones since, in near-real-time conditions, only a slight portion of packets arise from infected sources.

2. Related work: The authors focused on describing the different variants of SMOTE techniques but articles related to the classification network traffic are skipped. For instance, this paper "A Proposal for a New Way of Classifying Network Security Metrics: Study of the Information Collected through a Honeypot".

 

According to your comments and suggestions, in Section 2 – Related Work, the following text was modified and extended:

A network comprising bot-based malware and persistent intrusion attacks can cause extensive harm in a short period of time if cybersecurity consultants are not ceaselessly aware of what is going through their endpoints. Inspecting botnets and IDS network captures is not a trivial task; classical defensive tactics include honey-based detection, a helpful tool when computational resources are limited, and evasive rules can be quickly written from specific honeynets qualified to analyze malicious entries. Honey-based sensing is, however, prone to being bypassed by advanced cyber attacks, directly affecting scalability and practical responses from the net. IDS-based detection is mostly achieved by security policies triggered by the real-time monitoring of network activity; nevertheless, this kind of recognition cannot properly dissect the anomalous characteristics of malicious attacks, failing to perform persuasive mitigation [17]. As a consequence, botnet and IDS data analysis has attracted alternative areas of research, for instance, ML-based solutions. Prominent ML approaches facilitated the disclosure of underlying patterns of traffic flows, pointing out the importance of feature engineering (feature extraction and selection) and evaluating malicious traces with different assessments to reach higher accuracy in real implementations [18]. The authors in [19] addressed botnet and IDS detection relying on two notable ML ramifications, Supervised (SL) and Unsupervised Learning (UL). SL aims to acquire knowledge by mapping malicious and benign samples into a predefined set of labels, learning from intrinsic features and producing a classification model ready to predict incoming samples [20]. In contrast, UL employs a more rigorous exploration of similar patterns on underlying structures without labeling or categorizing samples, allowing the in-depth scrutiny of similarities between different kinds of samples. The methodology of the present work is arranged by SL techniques, and related works are further described [21].

Because of the increasing number of types and characteristics of botnet and IDS records, classification remains a challenging research topic. According to the input metrics detailed in [22], malicious network flows can be outlined in four principal categories (Login, Inputs, Downloads, and Geo-location), depending on specific missions for attackers. In general, well-known SL algorithms, including Logistic Regression (LR) [24], Support Vector Machine (SVM) [25], Artificial Neural Networks (ANN) [26], Decision Trees (DT) [27], Random Forest (RF) [27], Bayesian Networks (BN) [28], and Deep Learning (DL) Networks [29], have been fitted to overcome different menaces that directly depend on the conditions, circumstances, and settings in which botnet and IDS attacks are monitored and framed. Remarkable evidence is taken from IRC connections, P2P bots, DNS queries, anomalous traffic footprints, blacklisted IP addresses, irregular or malformed packet lengths, and abnormal intervals of multiple requests and responses over various network protocols [21].

We thank the reviewer for this valuable suggestion. In [22], a reference was added to the following manuscript: "A Proposal for a New Way of Classifying Network Security Metrics: Study of the Information Collected through a Honeypot".

3. Feature Extraction and Selection: Formulation must be improved. Try to do an introduction to the concept as a description and then put the equation. Indeed, a tiny example (with data) can help to understand the data structure you handle and the operations applied.

According to your comments and suggestions, in Subsection 3.4. - Feature Extraction and Selection, the following text was modified and reworded:

As indicated in the literature [50–54], it is essential to strengthen the evidence provided by the features described in each dataset, as this positively influences SL training routines by capturing the maximum variability of the inputs and expanding the inference capacity of the resulting models. If at some point the ISCX-Bot-2014 and CIDDS-001 features produce correlation effects, the computation of the algorithm is degraded. With the incorporation of dimensionality reduction, features that produce wrong variability are discarded, and the remaining inputs are represented in a new subspace of lower dimension on the basis of their variability. In this proposal, Xtrain was preprocessed via Principal Component Analysis (PCA) [55], a dimensionality-reduction technique used to describe features in a new set of noncorrelated variables.
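As an illustration of this step, the following minimal sketch (assuming scikit-learn; X_train is a random placeholder matrix, not the actual balanced training set) projects standardized features onto uncorrelated principal components:

```python
# Minimal PCA sketch; assumes scikit-learn. X_train is a random placeholder
# standing in for the balanced training set, and the 95% variance threshold
# is illustrative only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20))            # placeholder feature matrix

X_std = StandardScaler().fit_transform(X_train)  # PCA is sensitive to feature scale
pca = PCA(n_components=0.95)                     # keep components covering 95% of the variance
X_reduced = pca.fit_transform(X_std)

print(X_reduced.shape)                  # samples in the new, lower-dimensional subspace
print(pca.explained_variance_ratio_)    # variability captured by each retained component
```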

4. Generation of new samples with SMOTE: I don't understand this: Xmin ∪ Xmaj = Xtrain and Xmin ∩ Xmaj = Xbalanced, because the union of Xmin and Xmaj compose of the complete Xtrain dataset how the intersection compose of the Xbalanced dataset?? Please clarify that maybe a tiny example can help to understand that.

 

According to your comments and suggestions, the section Generation of new samples with SMOTE was reorganized and renamed as Subsection 3.3  - Synthetic Minority Oversampling. The following text was modified and redefined:

 

Consider each dataset as a representation of N samples, X ∈ R^(N×m), mapped to labels yi ∈ {1, 2, ..., C}. The unbalanced subsets are defined as Xmin ⊂ X for minority-class samples and Xmaj ⊂ X for majority observations, respectively, so that Xmin ∪ Xmaj = X, with |Xmin| < |Xmaj|. Depending upon the amount of oversampling based on the extent of Xmaj, synthetic data are generated by pointing to a space of similar features between instances belonging to xi ∈ Xmin from a certain neighborhood. Then, k-nearest neighbors are computed by considering the smallest Euclidean distance between the neighbor and the rest of the Xmin instances; the closest k neighbors serve as reference points to create new in-between samples. The interpolation of those observations is described in Equation (1):

 

                                                    Xsyn = X + (Xk − X) · e,                                                             (1)

 

                                                                                                                                                 

where Xk ∈ Xmin is one of the K nearest neighbors (k = 1, 2, 3, ..., K) of the selected sample xi, e is a random number in the range [0, 1] that determines where the new instance is placed between xi and Xk, and Xsyn is the new synthetic sample.
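For illustration, the sketch below (assuming scikit-learn for the neighbor search; variable names and toy data are ours, not the manuscript's) generates one synthetic sample according to Equation (1):

```python
# Sketch of the interpolation in Equation (1): Xsyn = X + (Xk - X) * e.
# Assumes scikit-learn; X_min is a toy minority-class matrix, not real traffic.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)
X_min = rng.normal(size=(50, 5))                       # minority-class samples (toy data)

k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)    # +1: each sample is its own closest neighbor
_, idx = nn.kneighbors(X_min)

x = X_min[0]                                           # a selected minority sample x_i
x_k = X_min[rng.choice(idx[0, 1:])]                    # one of its k nearest minority neighbors
e = rng.uniform(0.0, 1.0)                              # random factor in [0, 1]
x_syn = x + (x_k - x) * e                              # synthetic sample on the segment between them
print(x_syn)
```

In practice, a library implementation such as SMOTE from imbalanced-learn repeats this interpolation over the minority class until the desired balance is reached.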

 

 5. Experimental Results: The experimentation is very nice in the use of a portfolio of algorithms and datasets. However, the discussion of results is very poor. A deeper discussion of results regarding the improvements in classification achieved compared to other approaches. I recommend to include a new section of discussion of results where authors face the results against the expected objectives. In my opinion, the ML application and the discussion of parameters must be also discussed. On the other hand, I need another type of discussion thinking about the usefulness of the approach, the application of the results in other scenarios, real scenarios where network traffic is generated in real-time and the implications on the application of your approach. Thus, I expect to see the time requires to build the datasets and the time of application of the approach if we want to apply that in real-time IDS systems.

According to your kind comments and suggestions, Section 6 Results and Discussion was added, with the following explanations:

The experiment results for the botnet and IDS datasets using SMOTE + GS demonstrated significant improvement for the prediction of malicious samples in highly unbalanced datasets as compared to what the authors in [65] and [66] achieved by means of resampling techniques. Indeed, there was a lack of meticulous inspection with reference to sampling engineering, and an absence of algorithm evaluation regarding optimization and future assessment. Even though in [65] the performance of some SL models reached high accuracy rates, it was not explained whether some extraordinary scores were the product of overfitting conditions mainly caused by a balancing factor and the employed type of resampling. In our approach, it is emphasized that, by employing SMOTE and supervising the forthcoming models with k-fold cross-validation, overfitting can be significantly avoided; in this way, during the learning stage, Xtrain was split into a validation set of equal size to the training data, making it possible to correctly verify average accuracy in different trials. As demonstrated in the results, feature extraction and selection played an important role, since manual inspection was not sufficient to exploit the inherent values of each feature, and it was also indispensable for a better outlook on data representations that maximize variability in a feature space. Moreover, algorithms are by default constrained by inner parameters, so in the learning process they must be put through a rigorous search to find optimal values, enhancing a more favorable sensing ratio. To present the improvements of the present work, similar proceedings are compared in Table 10.

 

 

Table 10. Comparative analysis between related works for CIDDS-001 and ISCX-Bot-2014 datasets.

 

Methodology             | Dataset       | Algorithm                 | Accuracy
Verma A. et al. [65]    | CIDDS-001     | KNN                       | 93.87%
SMOTE + GS              | CIDDS-001     | KNN                       | 98.72%
Bijalwan A. et al. [66] | ISCX-Bot-2014 | KNN                       | 93.87%
                        |               | DT                        | 93.37%
                        |               | Bagging with KNN          | 95.69%
                        |               | Ada-Boost with DT         | 94.78%
                        |               | Soft voting of KNN and DT | 96.41%
SMOTE + GS              | ISCX-Bot-2014 | KNN                       | 98.72%
                        |               | SVM                       | 97.35%
                        |               | LR                        | 97.89%
                        |               | DT                        | 98.65%
                        |               | RF                        | 98.84%

 

 

In addition, another significant factor for the improvement of the classification models created in this work on both datasets was the tuning of hyperparameters: configurations external to the model whose values cannot be estimated from the learned patterns, but only from the performance of the algorithm itself during the training stage (Figure 1), helping to improve the classification performance of the model. That is, for each iteration of the hyperparameters in each algorithm of the proposed portfolio, the best combinations were found, strengthening the predictions of each suggested model, as shown in the table above.
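A minimal grid-search sketch is shown below (assuming scikit-learn; the toy data and the abbreviated parameter grid are illustrative stand-ins, not the actual balanced training set or the exhaustive grids used in the manuscript):

```python
# Minimal grid-search sketch; assumes scikit-learn. The data and the parameter
# grid are illustrative, not the full search space reported in the manuscript.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X_bal, y_bal = make_classification(n_samples=2000, n_features=10, random_state=0)

param_grid = {
    "n_neighbors": [2, 3, 4, 5],
    "weights": ["uniform", "distance"],
    "algorithm": ["ball_tree", "kd_tree", "brute"],
}
search = GridSearchCV(KNeighborsClassifier(), param_grid,
                      scoring="accuracy", cv=5, n_jobs=-1)
search.fit(X_bal, y_bal)

print(search.best_params_)   # best hyperparameter combination found
print(search.best_score_)    # mean cross-validated accuracy of that combination
```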

 

The best-performing KNN hyperparameters for the CIDDS-001 dataset used four neighbors, together with the Ball Tree algorithm for computing the closest proximity between neighbors and the Distance function weight used in the prediction; with this configuration, all evaluation metrics oscillated around 98%. For the SVM, the penalty parameter (C), with a margin of error of 0.001, a linear kernel, and the one-vs-rest (o-v-r) decision function, taking into account the number of samples and the binary classes of this study, generated results of 90% on average. For the LR algorithm, the penalty rule showed a very efficient regularization strength (0.0001) and, with the help of the LBFGS optimization algorithm, the evaluation metrics reached above 70%. For the DT, a depth of nine nodes with a best division of twelve per node showed favorable results in the metrics (average of 98%) and reduced the memory consumption, complexity, and tree size of the produced classifier model. Finally, in RF, in the same way as in DT, the depth of its nodes and the best division in the trees were nine and twelve, respectively, because it is a DT derivative. Since the dataset was fully balanced, the sampling of features to be considered for the best division in each tree was disabled (Bootstrap = False) so that its construction took the whole set into account. Only two samples were enough to form a leaf (binary class) inside the tree, achieving results above 98%.

 

For the hyperparameters of the ISCX-Bot-2014 dataset, the KNN neighbor-search algorithm (Ball Tree) determined that the optimal number of nearest neighbors was four, with the Distance function weight taken into account to determine them, achieving results of more than 96% in its evaluation metrics. The one-vs-rest (o-v-r) decision function in SVM, the Linear kernel type, and a penalty parameter of 0.01 had a positive impact when generating the predictive model, reaching metrics above 80% in the classification of botnet samples. The performance metrics of the predictive model with LR were the lowest of all the algorithms in the used portfolio (below 79% in the classification) despite a good regularization strength (0.0001); the penalty rule (A1) affected the results due to the optimization algorithm used (LBFGS). In DT, as for the CIDDS-001 dataset, the depth of the nodes was nine and their best divisions were twelve because this led to the creation of the best tree, with an average performance of 97%. As already mentioned, RF is a derivative of DT, so the depth of its nodes was nine, while its best divisions were seven, taking into account the Gini function to measure split quality, which favors counting unique values and therefore better classification (average of 97%). Since this is a binary classification problem, leaves were composed of only two samples, and a single tree was used.
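For concreteness, the best settings described in the two paragraphs above could be instantiated roughly as follows (a sketch using scikit-learn naming; the mapping from the prose to exact parameter names and values is our interpretation, not taken verbatim from the manuscript):

```python
# Possible scikit-learn instantiation of the best CIDDS-001 configurations
# described above; parameter names and values are our reading of the prose.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

knn_best = KNeighborsClassifier(
    n_neighbors=4,           # four neighbors
    algorithm="ball_tree",   # Ball Tree neighbor search
    weights="distance",      # distance-based function weight
)

rf_best = RandomForestClassifier(
    max_depth=9,             # reported node depth
    min_samples_split=12,    # reported "best division" (our reading)
    min_samples_leaf=2,      # two samples per leaf for the binary classes
    bootstrap=False,         # build each tree on the whole balanced set
    criterion="gini",
)
```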

In future work, a real-time ML-based implementation of SMOTE + GS is proposed by considering the suggestions adopted in [68]. First, a specialized host is indispensable to filter, examine, and transform network packages into useful features; then, a labeling process must map samples through a botnet or intruder category and store them in a knowledge-based database. Furthermore, observations must be trained in an offline fashion for an amount of time; consecutively, an actuator layer is responsible for establishing the ML module to inspect the communication network and identify the type of transmitted package. Finally, a handler must decide whether to mitigate the identified threat, send an alarm to network administrators, or execute a defence mechanism. An important factor for reproducing the proposed approach in real applications is also to gather samples over a four-week timespan because, on average, inactive threats exhibit their functionality within this period.

6. Minor issues detected: In general, I recommend to polish and proofread the complete paper to eliminate typos. For instance:

Change the sentence: "... It is also stated that in order compare a crafted dataset to real traffic, there"

Change intruders: "... works related to Botnet and inruders detection, t"

Change the sentence: "... information contained is by means of the emulation of a small business environmen ..."

Blank space in the brackets: "th a yi ∈ {1, , 2..., C}"

Change neighbourhood: "from in a certain neighborhood of Xi ∈ Xmin, which is compu"

Figure 2 is only a part of Figure 1, hence, the second figure must be skipped.

Change the verb serve: "because they are a projection that serve as an estimation of the performance of each classifier 333 model, when implemented in real data"

Point before however: "on (PCA), however, the impact on the resulting 343 predictive models was negatively affected due to the imbalance of classes, in addition to the training 344 data had a high variance, that is, the model was not able to correctly predict the valid".

 

According to your kind comments and suggestions, the manuscript was proofread and edited by MDPI English Editing Services, with id english-15397.

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper presents the application of a data resampling technique to two datasets that resemble real Botnet and IDS information.
Some experimental results are reported proving that the performances of the final classification models improve.

The contribution is valuable; some points need improvements:

- The workflow in Table 1 does not include the phases described in the following; for example, the grid search of Sec. 6.3 is not included.

- In general, the description should be more coherent; Sec. 4-5-6 must be reorganized.

- Editing of the paper is needed to improve English usage and correct grammar errors.

Some of the numerous errors:

line 213 -> it is essential...

line 224, 230, leave a space after comma

line 239, each feature is detailed

line 256 malicious

line 268 it is unlikely

line 327 that describes

Author Response

Response to Reviewers’ Comments

 

Title: Synthetic Minority Oversampling Technique for Optimizing Classification Tasks in Botnet and Intrusion-Detection-System Datasets

 

applsci-676070

 

We thank the Editor and the reviewers for providing comments and suggestions. We have made major revisions to the manuscript and addressed all comments provided. In this document, we provide answers and discussions to every question and comment.

1. The workflow in Table 1 does not include the phases described in the following; for example, the grid search of Sec. 6.3 is not included.

According to your comments and suggestions, in Section 3 - Proposed Methodology, the workflow was reorganized as depicted in Figure 1 (please see the cover letter).

Attending to your suggestions, in Section 3 - Proposed Methodology, the following lines were extended:

The workflow of the proposed methodology is depicted in Figure 1. First, in the Data Acquisition block, a comprehensive search for datasets related to botnet and IDS traffic flows is conducted by collecting those that resemble real backgrounds, but with balancing issues. Furthermore, in the Feature Examination and Labeling block, datasets are subjected to an examination process aiming to exploit features that are considered useful by most authors in several state-of-the-art approaches [39–42]. Once data are normalized into a set of unique features, each sample is labeled as benign or malicious depending on the structure stated in the original data. During the Synthetic Minority Oversampling block, the percentage of minority-class samples in the training set is inspected to serve as a basis for oversampling the data via synthetic production, resulting in a fully balanced training set. Subsequently, datasets are merged and split into training and testing sets. Next, in the Feature Extraction and Selection block, the balanced training set is preprocessed using Principal Component Analysis (PCA) [55] as the algorithm to extract and select the most informative and relevant features in a new dimensional subspace. In the Supervised Machine Learning Classification Algorithms block, a portfolio of widely used algorithms is proposed to train the fully balanced set, thus enhancing the classification ratio through grid search, which exhaustively tunes preconceived hyperparameters. Finally, the resulting classification models are evaluated by scoring the classification outcomes (predictions) from a testing set in terms of the following performance metrics: Accuracy, Recall, and F1-Score.
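To make the sequence of blocks concrete, a compact end-to-end sketch is given below (assuming scikit-learn and imbalanced-learn; synthetic toy data stand in for the real captures, the parameter grid is abbreviated, and the imblearn pipeline applies oversampling only inside the training folds, which may differ slightly from the exact ordering described above):

```python
# End-to-end sketch of the workflow: oversampling -> PCA -> classifier -> grid search
# -> evaluation. Assumes scikit-learn and imbalanced-learn; toy data replace the
# real network captures, and the parameter grid is abbreviated.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline      # lets SMOTE run only on the training folds

X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),       # Synthetic Minority Oversampling block
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=0.95)),        # Feature Extraction and Selection block
    ("clf", DecisionTreeClassifier()),      # one algorithm from the proposed portfolio
])
param_grid = {"clf__max_depth": [5, 9, 15], "clf__min_samples_split": [2, 12]}
search = GridSearchCV(pipe, param_grid, scoring="accuracy", cv=5)   # Grid Search block
search.fit(X_train, y_train)

print(classification_report(y_test, search.predict(X_test)))  # Accuracy, Recall, F1-Score
```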

2. In general the description should be more coherent, sec. 4-5-6 must be reorganized

Attending to your suggestions, Sections 4-6 were reorganized as follows:

3. Proposed Methodology

3.1. Data Acquisition

3.2. Data Examination and Class Labeling

3.3. Synthetic Minority Oversampling

3.4. Feature Extraction and Selection

3.5. Supervised-Machine-Learning Algorithms

3.6. Grid Search

 

3. Editing of the paper is needed to improve English usage and correct grammar errors.

Some of the numerous errors:

line 213 -> it is essential...

line 224, 230, leave a space after comma

line 239, each feature is detailed

line 256 malicious

line 268 it is unlikely

line 327 that describes

According to your kind comments and suggestions, the manuscript was proofread and edited by MDPI English Editing Services, with id english-15397.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Thanks to the authors since they have made a great effort following all the recommendations. I just want to recommend to consider some new references that would be included in the paper:

Varela-Vaca, Á.J.; Gasca, R.M.; Ceballos, R.; Gómez-López, M.T.; Torres, P.B. CyberSPL: A Framework for the Verification of Cybersecurity Policy Compliance of System Configurations Using Software Product Lines. Appl. Sci. 2019, 9, 5364.

Fernández-Cerero, D.; Varela-Vaca, Á.J.; Fernández-Montes, A.; et al. Measuring data-centre workflows complexity through process mining: the Google cluster case. J. Supercomput. 2019. doi:10.1007/s11227-019-02996-2

Once again, a very good job for the great effort in carrying out the changes. After including the references, I can consider the paper acceptable for publication in the journal.

Author Response

Response to Reviewers’ Comments

Title: Synthetic Minority Oversampling Technique for Optimizing Classification Tasks in Botnet and Intrusion-Detection-System Datasets

applsci-676070

 

Thanks to the authors since they have made a great effort following all the recommendations. I just want to recommend to consider some new references that would be included in the paper.

According to your comments and suggestions, in Section 1 - Introduction, the following text was added:


Besides this, improper configurations in various network devices can also lead to a potential malware infection [2].

 

We thank the reviewer for this valuable suggestion. In [2], a reference was added to the following manuscript: "CyberSPL: A Framework for the Verification of Cybersecurity Policy Compliance of System Configurations Using Software Product Lines".

 

According to your comments and suggestions, in Section 2 – Related Work, the following text was added:

the inherent complexity of data-center workflows [36]

 

We thank the reviewer for this valuable suggestion. In [36], a reference was added to the following manuscript: "Measuring data-centre workflows complexity through process mining: the Google cluster case".

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper has been improved and the comments by the reviewers addressed; it can now be considered for publication.

Author Response

Response to Reviewers’ Comments

Title: Synthetic Minority Oversampling Technique for Optimizing Classification Tasks in Botnet and Intrusion-Detection-System Datasets

applsci-676070

 

The paper has been improved and the comments by the reviewers addressed; it can now be considered for publication.

Thank you for your kind comments. According to your suggestion, we revised the current manuscript in terms of English and layout quality.

Author Response File: Author Response.pdf
