Article

Benchmark Evaluation of Protein–Protein Interaction Prediction Algorithms

by Brandan Dunham and Madhavi K. Ganapathiraju *

Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA 15232, USA

* Author to whom correspondence should be addressed.
Submission received: 5 November 2021 / Accepted: 23 November 2021 / Published: 22 December 2021
(This article belongs to the Special Issue Protein-Protein Interactions 2021)

Abstract

Protein–protein interactions (PPIs) perform various functions and regulate processes throughout cells. Knowledge of the full network of PPIs is vital to biomedical research, but most PPIs are still unknown. As it is infeasible to discover all of them experimentally due to technical and resource limitations, computational prediction of PPIs is essential, and accurately assessing the performance of algorithms is required before further application or translation. However, many published methods compose their evaluation datasets incorrectly, using a higher proportion of positive-class data than occurs naturally, leading to exaggerated performance. We re-implemented various published algorithms and evaluated them on datasets with realistic data compositions and found that their performance is overstated in the original publications, with several methods outperformed by our control models built on ‘illogical’ and random-number features. We conclude that these methods are influenced by the over-characterization of some proteins in the literature and by the scale-free nature of the PPI network, and that they fail when tested on all possible protein pairs. Additionally, we found that sequence-only algorithms performed worse than those that employ functional and expression features. We present a benchmark evaluation of many published algorithms for PPI prediction. The source code of our implementations and the benchmark datasets created here are made available in open source.

1. Introduction

Proteins are large macromolecules that perform a variety of functions necessary for an organism. Biological processes are driven by multiple proteins forming a protein complex or a signaling cascade through biophysical interactions. Understanding which proteins interact with each other is an important first step towards understanding the molecular mechanisms of biological functions and diseases, and towards the design of therapeutics. Knowledge of binary protein–protein interactions (PPIs) is also useful for further research on blocking or enhancing specific interactions to control biological processes and functions. However, given that the human genome contains approximately 20,000 protein-coding genes, and therefore nearly 200 million unique protein pairs even without accounting for multiple isoforms of genes, resolving all pairs of interacting proteins is not a trivial task.
Many experimental biology methods have been devised to find pairs of interacting proteins. Benchtop experiments like co-immunoprecipitation (co-IP) are resource-intensive and low-throughput, and often produce false negatives depending on experimental conditions, for example due to ineffective antibodies or the transient nature of PPIs [1]. In contrast, high-throughput screens such as yeast two-hybrid (Y2H) and tandem affinity purification mass-spectrometry (TAP-MS), and more recently a sequencing approach called PROPER-seq [2], attempt to capture tens to hundreds of thousands of PPIs in a single experiment. In the past, statistical estimates suggested that Y2H had 25–45% false positives, as well as difficulties in detecting PPIs for certain subsets of proteins such as membrane proteins [3]. Later studies showed that extensive filtering can decrease the false positive rate, usually by running multiple screens or comparing them against other data sources to obtain a more precise subset of potential interactions [4]; one such approach recently produced a high-quality protein interaction network of 53,000 PPIs [4]. However, the overlap between PPIs identified by different large-scale techniques remains low, with each method producing large numbers of PPIs unverified by any other experiment [5].
Overall, most PPIs out of an estimated 500,000 to 3 million binary PPIs of the human interactome remain unknown [4,6,7,8], which motivates the development of computational approaches for PPI discovery. At minimum, computational approaches should identify an accurate subset of likely interacting protein pairs that can subsequently be validated with experimental techniques as true PPIs, thereby eliminating much of the cost of testing millions of other pairs. A sizeable body of literature has been published on the computational prediction of PPIs, especially in model organisms like yeast, suggesting its extrapolation to human interactome mapping. These methods represent pairs of proteins by features of amino acid composition, protein functional annotations, or gene co-expression, and develop machine learning models to classify each pair (a data instance) as interacting or non-interacting.
Amino acid sequence-based predictors make up most of the literature on computationally predicting proteome-wide PPIs, except for yeast, where other features have also been widely explored. Sequence-based predictors rely on features computed from amino acid sequences, their physicochemical properties, or unigram/n-gram counts. Examples of such feature representations include auto covariance (AC), pseudo amino acid composition (PSEAAC), and conjoint triads (CT), all of which have been widely used for predicting PPIs [9,10,11,12,13,14,15,16,17,18]. Other approaches make more direct use of amino acid sequences, either by generating per-protein features from PSSMs using PSI-BLAST, or by encoding the amino acids into a numeric vector to use as the features [13,19,20,21,22,23]. Once calculated, these values form a fixed-length numeric vector for each protein, the data representation typically used in machine learning.
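As an illustration, the following is a minimal sketch of the conjoint triad descriptor under one commonly used 7-class amino acid grouping (based on dipole and side-chain volume); exact normalization conventions vary between the cited papers:

```python
import numpy as np

# Conjoint triad (CT) sketch: the 20 amino acids are mapped into 7 classes
# (one commonly used grouping), and a protein is represented by the
# normalized counts of all 7^3 = 343 triads of consecutive residue classes.
CT_GROUPS = {"AGV": 0, "ILFP": 1, "YMTS": 2, "HNQW": 3, "RK": 4, "DE": 5, "C": 6}
AA_TO_GROUP = {aa: g for aas, g in CT_GROUPS.items() for aa in aas}

def conjoint_triad(sequence: str) -> np.ndarray:
    """Return the 343-dimensional CT feature vector for one protein."""
    groups = [AA_TO_GROUP[aa] for aa in sequence if aa in AA_TO_GROUP]
    counts = np.zeros(343)
    for i in range(len(groups) - 2):
        counts[groups[i] * 49 + groups[i + 1] * 7 + groups[i + 2]] += 1
    # scale to [0, 1]; normalization conventions differ between papers
    return counts / counts.max() if counts.max() > 0 else counts

# A protein pair is then typically represented by concatenation:
# np.concatenate([conjoint_triad(seq_a), conjoint_triad(seq_b)])
```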
Non-sequence-based predictors utilize functional annotations, protein subcellular location, structure, other biological annotations, and transcriptomic data to compute features (henceforth referred to as annotation-based predictors, in contrast to sequence-based predictors). These features rely heavily on the Gene Ontology (GO), structural domain databases, and gene expression databases to assemble the features of all protein pairs, although there may be missing values for many pairs. They compute features of protein pairs, including co-occurrence in subcellular locations, co-expression correlation across different experimental conditions or across tissues, and semantic similarity of ontology annotations, as input to machine learning classification models [24,25,26,27,28]. Many previous works rely not on one but on a collection of these annotations, creating a feature vector per protein pair [29,30]. These collection-based classification algorithms have built-in methodologies to handle missing values so that entire protein pairs are not discarded when some features are missing.
Given the large number of unknown PPIs in the human interactome and the potential role of computational methods in discovering them, we must assess how accurately these methods perform in ‘real-world’ application and, if proven successful, compare them against each other to identify the best ones. In their original publications, most predictors are claimed to be highly accurate, with several reporting over 90% accuracy and some reporting accuracies as high as 95–98%. Logically, if multiple classifiers can obtain such high accuracies, predicting the full interactome should be a trivial task. However, the evaluations that these algorithms have been put through do not seem to consider two important aspects of PPIs, namely, (i) the large number of known PPIs involving a few hub proteins, and (ii) the rarity of PPIs among all possible protein pairs. Here, we seek to evaluate and benchmark the performance of these algorithms, taking these aspects into account.
The common evaluation methodology for most PPI prediction methods is to utilize a set of known PPIs from a public repository as positive-label instances, and to use randomly sampled protein pairs (excluding the pairs that are known to interact) in place of negative-label instances. Randomly chosen pairs are utilized due to a lack of known, non-interacting pairs from experimental methods, as a failure to identify a PPI in a biological experiment only suggests that a PPI was not observed under those experimental conditions and does not prove that the two proteins never interact. Therefore, failed experiments do not create negative-labeled data. This lack of ability to experimentally determine non-interacting protein pairs prevents the creation of a traditional gold standard negative dataset. However, taking into account that only 0.325–1.5% of protein pairs have PPIs, computational methods utilize randomly sampled protein pairs as negatively labeled data, with the majority of randomly selected pairs likely being truly non-interacting pairs.
Although it is estimated that there exist about 500,000 to 3 million PPIs out of a total of 200 million protein pairs in humans (0.325–1.5%), most algorithms are trained and tested on datasets containing 50% positive-label data, which makes them a questionable proxy for an algorithm’s real-world application. Additionally, the protein interactome is believed to be a small-world network [31]. Such networks have hubs, i.e., nodes that connect directly to many other nodes, or in this case, proteins with a large number of PPIs. For example, The Biological General Repository for Interaction Datasets (BioGRID) contains ~125,000 unique, non-self-interactions among ~14,500 proteins (represented by their gene names/symbols and not distinguishing isoforms; this convention of referring to proteins by their genes is used throughout this paper except where explicitly mentioned otherwise) [32]. This averages to around 17 interactions per protein; however, 360 proteins currently have more than 100 PPIs each, with one protein, APP, involved in over 2000 PPIs. On the other hand, over 9000 proteins have 10 or fewer PPIs each. While positive-class PPI data are drawn from a scale-free network, randomly paired data (negative class) are sampled uniformly, and thus have a significantly different distribution of proteins. This can lead to biasing problems in machine learning: when a protein appears far more often in the positive dataset, algorithms can simply predict positive labels for pairs containing that protein, generating high accuracy on test datasets drawn from the same distributions. Past experiments in this field have suggested similar findings, with works by Yu as well as Park and Marcotte suggesting that many prediction algorithms primarily predict bias in underlying datasets [33,34].
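Such degree statistics are straightforward to recompute from any PPI edge list; a minimal sketch (the pair list shown is an illustrative placeholder, not real data):

```python
from collections import Counter

# Sketch: per-protein degree from a list of unique, non-self PPI pairs.
def degree_stats(pairs):
    degree = Counter()
    for a, b in pairs:
        degree[a] += 1
        degree[b] += 1
    avg_degree = sum(degree.values()) / len(degree)   # ~17 for BioGRID
    n_hubs = sum(1 for d in degree.values() if d > 100)
    n_sparse = sum(1 for d in degree.values() if d <= 10)
    return avg_degree, n_hubs, n_sparse

print(degree_stats([("APP", "APBB1"), ("APP", "SHC1"), ("SHC1", "GRB2")]))
```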
In addition to dataset creation, evaluation metrics also play a role in correctly assessing whether an algorithm can make good predictions on real data. In evaluating classification algorithms for rare-category data, accuracy and AUC are not suitable metrics, and precision-recall (P-R) curves are recommended [35]. In biological and clinical domains, where the natural distribution of class labels may be highly imbalanced, a P-R curve provides more reliable information and distinguishes models that are practical for real-world applications, whereas AUC may misleadingly convey more impressive accuracy than is realistically achievable on the rare category of interest (e.g., an interacting protein pair or the presence of a disease) [36]. However, most published works have used AUC/accuracy metrics. Both aspects, data composition and evaluation metrics, were taken into account by Qi et al. in the seminal paper on proteome-wide human PPI prediction (i.e., when a protein is compared against all the proteins) for membrane receptor proteins [29]. They set their test data at 1:1000 positive to negative instances and evaluated with a P-R curve. Our own method, High-Precision Protein–Protein Interaction Prediction (HiPPIP) for human interactome prediction, was also evaluated along the lines of Qi et al.’s, both in terms of the rareness of the positive class and in using a P-R curve [37]. Additionally, the HiPPIP method was evaluated for its ability to predict proteome-wide PPIs by computing the cumulative number of true positives versus the increasing number of false positives on hub proteins, with the reasonable assumption that many, if not all, of their PPIs are known (note that in this scenario of proteome-wide evaluation, i.e., where a hub protein is paired with every other protein in the human proteome, the unlabeled data may not all be false positives, and may indeed be true positives that are currently unknown); this metric was used as a reasonable approximation of the number of accurate predictions produced by HiPPIP.
In this study, we evaluate different PPI prediction algorithms to determine how well they perform on realistically proportioned datasets. We first implemented various algorithms, using feature sets and classification models similar to those described in the original publications. Where possible, we downloaded the datasets used by these algorithms to test whether our implementations produce similar results. Next, we created six new evaluation datasets covering three proportions of positive-label instances (50%, 10%, and 0.3%) and two sampling methods: sampling randomly from the full list of proteins, as is commonly done in the literature (henceforth referred to as Full), and using a held-out set of proteins for evaluation (referred to as Held-Out, known as C3 data in Park and Marcotte’s prior work [33]). Finally, we created control models to compare these predictors against, built on illogical features (e.g., the frequency of the proteins in the dataset, or random vectors representing the proteins) with simple, naïve classifiers. These types of predictions do not consider the pairwise compatibility of two proteins, and simply predict PPIs based on the distinct distributions of proteins in the positive and negative classes of the datasets. This allows us to determine how well the algorithms perform relative to these illogical features.

2. Methods

2.1. Data

Datasets used to validate the implementation of various sequence-based interaction predictors were downloaded from previous works. The datasets were adjusted from their original formats where necessary to include only proteins with a sequence length greater than 30 amino acids, as several algorithms have this requirement. For datasets that provided only protein names instead of sequences, protein sequence data were downloaded from UniProt (21 June 2021, 25 June 2021, 6 August 2021, 17 August 2021), with protein pairs removed if the names no longer mapped to valid proteins.
For prediction methods that employ protein annotations, the data were not downloaded from original sources for validation, because most methods do not provide their datasets, and the availability of protein annotations such as domains, functions, expression, and localizations increases over time. Some annotation databases are no longer available (e.g., InterDom), while others have increased their number of annotations since the original publications; thus, we cannot match our results from these methods against the original publications. We treat our implementations of these methods as inspired by the original publications rather than as exact reproductions. To create the necessary features, the Gene Ontology was downloaded from the Gene Ontology Consortium (28 March 2021), and GO annotations of proteins were downloaded from the Gene Ontology Annotation Database (16 February 2021) [38,39]. Domain annotations were downloaded from InterPro (27 March 2021), Prosite (17 September 2021), and Pfam (17 September 2021) to compute domain- and family-related features [40,41,42]. When annotations are provided using UniProt identifiers, the union of all features assigned to a given Entrez Gene ID was used as the features for a given protein [43].
Known PPI data were downloaded from BioGRID (v4.4.198, compiled 25 May 2021). Only direct biophysical protein interactions, encoded by the Proteomics Standards Initiative—Molecular Interaction (PSI-MI) ontology identifier MI:0407 and its descendants, were included. After filtering out pairs containing non-human proteins, self-interactions, and protein–RNA bindings, the Entrez Gene IDs provided by BioGRID were mapped to UniProt IDs [43,44]. Where an Entrez Gene ID mapped to multiple UniProt identifiers, the longest amino acid sequence was assigned to the gene. To ensure compliance with various sequence-based algorithms, protein sequences of 30 or fewer amino acids were removed. A total of 123,626 unique interactions among 14,678 proteins remained as positively labeled protein pairs (i.e., PPIs). A total of 19,115 proteins with UniProt ID to Entrez Gene ID mappings that met the minimum sequence length were used to represent the full set of human proteins, with random pairs drawn from this set to use as negatively labeled protein pairs (i.e., non-interacting protein pairs) (see Supplementary File S1 for details).
Training and testing datasets were created using two methodologies. In the first, a random set of non-overlapping interacting pairs and non-interacting pairs is sampled for training and testing from the full set of all possible proteins (Full). This method is widely used in the literature. We do not take any other precautions when randomly sampling non-interacting pairs, such as selecting proteins in different subcellular locations, as this could induce a bias towards the type of protein pairs chosen and would create a separate bias by limiting the number of proteins available for the non-interacting dataset. The second methodology is based on the work of Park and Marcotte, where some proteins are held out and used exclusively in the test dataset (Held Out) [33]. For this dataset creation method, we separated the proteins into 6 equal-sized bins and held out either 1 or 2 bins to create the test data. No pair using a protein from these bins is used in the training data; training data are created from the remaining bins.
Multiple datasets were created for testing with different percentage compositions of positively labeled protein pairs (known interactions). Specifically, training sets were created with 50% and 20% positive data, while test sets were created with 50%, 10%, and 0.3% positive data. Models tested on the 10% and 0.3% positive test datasets were trained on the 20% positive data, while the 50% positive test datasets utilized the 50% positive training datasets. For data based on random pairs, the five 50% positive train and test sets were created using stratified cross validation on a single set of 125,000 protein pairs. Data for the 20% positive training set and for the 10% and 0.3% positive test sets were chosen randomly from all pairs while ensuring that no test data overlapped with the training data. For the second method, namely holding out proteins used in test data, training and test sets of the same ratios as mentioned earlier were created for each of the 21 combinations of holding out 1 or 2 of the 6 bins. Again, the 10% positive and 0.3% positive test sets shared the same training sets with 20% positive data.
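A minimal sketch of these two building blocks, the held-out protein split and the sampling of random pairs to a target positive fraction, is shown below. It is one simple reading of the procedure described above (e.g., the exact assignment of pairs touching a held-out bin may differ from our implementation), with proteins and PPIs given as plain Python lists:

```python
import random

def held_out_split(proteins, ppis, n_bins=6, held_out_bins=(0,)):
    """Assign proteins to bins; any PPI touching a held-out bin is kept
    out of training and becomes a candidate test pair."""
    proteins = list(proteins)
    random.shuffle(proteins)
    bin_of = {p: i % n_bins for i, p in enumerate(proteins)}
    held = set(held_out_bins)
    train, test = [], []
    for a, b in ppis:
        (test if bin_of[a] in held or bin_of[b] in held else train).append((a, b))
    return train, test

def sample_negatives(pos_pairs, protein_pool, pos_fraction, known=frozenset()):
    """Randomly pair proteins until positives make up pos_fraction of the
    dataset (e.g., 0.003 for the 0.3% positive test sets), skipping known
    PPIs since random pairs stand in for the missing negative gold standard."""
    n_neg = round(len(pos_pairs) * (1 - pos_fraction) / pos_fraction)
    negatives = set()
    while len(negatives) < n_neg:
        a, b = random.sample(protein_pool, 2)  # two distinct proteins
        pair = (min(a, b), max(a, b))
        if pair not in known:
            negatives.add(pair)
    return sorted(negatives)
```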

2.2. Feature Computation

We replicated various sequence-based features as described in prior publications. Some, such as auto covariance and conjoint triads, are used in multiple papers we reimplement, whereas others, such as the Weighted-Skip Sequential Conjoint Triad method, are used in only a single paper among our recreations. While most of the listed methods can be computed from the sequences of the proteins in the pair, two of them require sequence data from other proteins. Skip-gram models compute the similarity of words, or in this case amino acids, by training a neural network on neighboring words to create word embeddings [45]. For protein sequence features, we trained the skip-gram models using all protein sequences of the organism under study. The second method, PSSMs (position-specific scoring matrices), were computed using PSI-BLAST over UniProt’s SwissProt protein sequence database, using 3 iterations and a significance value of 0.001 [46]. If no hits were found, sequences were encoded using BLOSUM62 values. For features relying on physicochemical properties (e.g., hydrophobicity, hydrophilicity, polarity, charge, solvent-accessible surface area, etc.), we employed commonly used amino acid property values unless otherwise specified (see Supplementary File S2 for details). Finally, where possible, we normalized the values produced by different feature computations to a low range, preferably between 0 and 1, or −1 and 1 (see Supplementary File S2 for details).
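For concreteness, a minimal sketch of the auto covariance descriptor is shown below; prop_table is an assumed mapping from each amino acid to its (normalized) physicochemical property values, and max_lag = 30 follows the convention of Guo et al. [9]:

```python
import numpy as np

def auto_covariance(sequence, prop_table, max_lag=30):
    """Auto covariance (AC) sketch: for each (normalized) physicochemical
    property, the covariance of the property signal with itself at lags
    1..max_lag; yields max_lag * n_properties features per protein."""
    X = np.array([prop_table[aa] for aa in sequence if aa in prop_table])
    X = X - X.mean(axis=0)                      # center each property
    n, n_props = X.shape
    ac = np.zeros((max_lag, n_props))
    for lag in range(1, max_lag + 1):
        if n > lag:
            ac[lag - 1] = (X[:-lag] * X[lag:]).sum(axis=0) / (n - lag)
    return ac.ravel()
```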
For annotation-based features, we primarily focused on pairs of domains and GO annotations appearing in known interactions, as well as semantic similarity calculations on GO annotations. As proteins commonly have multiple GO annotations and domains, many algorithms compute these features as a grid of scores between all possible pairs of annotations for a pair of proteins, and convert the grid into a single score through an aggregation such as average, max, sum, product, or the best matching average (see Supplementary File S3 for details). All protein pairs where the necessary features did not exist were scored as zero, to ensure that we could train and test using the same datasets used for sequence-based prediction testing.
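A minimal sketch of this grid-and-aggregate pattern is shown below; term_sim stands for any term-level similarity function (e.g., Resnik), and the best matching average is given under one common definition:

```python
import numpy as np

def aggregate_similarity(terms_a, terms_b, term_sim, method="bma"):
    """Score a protein pair from its annotation sets: build the grid of
    term-level similarities, then collapse it to one number. Pairs with
    missing annotations score 0, as in our datasets."""
    if not terms_a or not terms_b:
        return 0.0
    grid = np.array([[term_sim(a, b) for b in terms_b] for a in terms_a])
    if method == "max":
        return float(grid.max())
    if method == "avg":
        return float(grid.mean())
    if method == "bma":  # best matching average: mean of row and column maxima
        return float((grid.max(axis=1).mean() + grid.max(axis=0).mean()) / 2)
    raise ValueError(f"unknown aggregation: {method}")
```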

2.3. Model Construction

Classification models were constructed using Python (version 3.8.5) libraries Scikit-Learn, PyTorch, LightGBM, and ThunderSVM, or an independently created Rotation Forest library [47,48,49,50,51]. While the selected publications used other programming languages and/or libraries, these libraries should be capable of reproducing similar results.
Models for sequence-based features were constructed to match published methods as closely as possible. For some models, such as support vector machines (SVMs), which are highly sensitive to different scales between features, features were scaled using min-max or standard scaling. Learning rates for neural networks implemented in PyTorch were adjusted to obtain results similar to those reported in prior works, and all models were fitted with a decaying learning rate so that training would conclude quickly once learning plateaued, keeping running times short on larger datasets. For publications that do not clearly state the hyperparameters or other exact details used to construct models, we utilized settings that seemed consistent with the referenced publication or related publications, executed quickly, and performed comparably to the results reported in those publications on their datasets. A total of 36 algorithms utilizing random forests, rotation forests, boosting, neural networks, and support vector machines were implemented based on previously published literature (see Supplementary Files S4 and S5 for details related to hyperparameters and any changes between previous work and our implementations).
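As an illustration of the decaying learning rate described above, here is a minimal PyTorch sketch using a plateau-based scheduler; the data, architecture, and hyperparameter values are toy placeholders, not the settings of any specific reimplemented model:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins: 256 protein pairs, each a 686-dim feature vector
# (e.g., two concatenated 343-dim conjoint triad vectors); labels 0/1.
X = torch.randn(256, 686)
y = (torch.rand(256) < 0.5).float()
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = torch.nn.Sequential(
    torch.nn.Linear(686, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Halve the learning rate whenever the epoch loss stops improving.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=2)

for epoch in range(100):
    epoch_loss = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.binary_cross_entropy_with_logits(
            model(xb).squeeze(1), yb)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    scheduler.step(epoch_loss)
    if optimizer.param_groups[0]["lr"] < 1e-6:
        break  # stop once the decayed rate indicates a plateau
```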
For annotation-based features, a total of 6 models were created using domain and GO data. Two of these models, based on the works of Chen et al. and Maetschke et al., use large binary/ternary feature vectors to represent protein pairs. Chen et al. utilized a pairwise sum of binary features encoding which domains exist in each protein [26]. Maetschke et al. generated binary feature vectors from GO annotations, with non-zero values representing all annotations up to the lowest common ancestor (LCA) common to both proteins [28]. The remaining 4 annotation-based models used significantly smaller feature representations for each protein pair. Guo et al. and Zhang et al. each developed models relying on semantic similarity with maximum aggregation: Guo et al. relied on a single metric per ontology and a logistic regression model, while Zhang et al. used 10 different semantic similarity measurements to train an SVM [24,25]. Another model by Zhang et al. used a score based on different domain databases to predict protein interactions, taking the maximum score across the databases per domain pair and using product aggregation to create a final score [27]. As we are not using pre-computed scores from domain databases, and the lists of domains we obtained do not natively map between databases, we created a variation of their algorithm by computing the product aggregation score for each set of domain annotations and using logistic regression to compute a final score; we refer to this as the Domain Variant model. Finally, inspired by approaches that utilize a large set of features, we created a simple ensemble model using a random forest with features computed from best matching average aggregations of level-2 GO annotations in protein interactions, our 3 domain annotations in protein interactions, and Resnik semantic similarity [29,30]. See Supplementary Files S3 and S5 for full details of each algorithm.
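The following is a minimal sketch of a Domain Variant-style score under one plausible reading of ‘product aggregation’ (the probability that at least one domain pair interacts); pair_prob is an assumed dictionary of domain-pair scores estimated from training PPIs, and details may differ from our actual implementation (see Supplementary File S5):

```python
def domain_pair_score(doms_a, doms_b, pair_prob):
    """Treating pair_prob[(d1, d2)] as the probability that domains d1 and
    d2 mediate an interaction (estimated from training PPIs), return the
    probability that at least one domain pair of the two proteins interacts."""
    p_none = 1.0
    for d1 in doms_a:
        for d2 in doms_b:
            key = (min(d1, d2), max(d1, d2))
            p_none *= 1.0 - pair_prob.get(key, 0.0)
    return 1.0 - p_none

# One such score per domain database (Pfam, InterPro, Prosite) can then be
# stacked into a 3-feature vector and passed to, e.g.,
# sklearn.linear_model.LogisticRegression for the final prediction.
```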

2.4. Illogical/Random Feature Models to Serve as Control

Five different models were created that use either illogical or random features to demonstrate that the models may capture bias, rather than interaction-related properties, in datasets. By illogical feature, we mean that there is no logical reason to expect the feature to distinguish between interacting and non-interacting protein pairs. The expectation is that accurate PPI prediction methods should outperform these illogical/random models. See Supplementary Files S5 and S6 for full implementation details.
Count Bias simply counts how many more times a protein appears in positive training instances than in negative training instances; a prediction score is the sum of these counts for the two proteins in a data instance. Sequence Similarity Bias (Seq Sim Bias) is analogous to Count Bias, except that instead of counting each individual protein’s appearances in the training data, it counts the positive and negative instances of up to the 5 most sequence-similar proteins in the training set and computes a weighted average based on their similarity. A third feature, Sequence Similarity + Protein Bias (Seq Sim + Prot Bias), selects the 5 most similar proteins, and their weights, from a combination of sequence similarity and the number of overlapping proteins shared by a given pair of Entrez IDs. This protein bias check is designed to ensure that non-sequence-based features (i.e., gene annotation features) do not gain an advantage when aggregating features from proteins that map to multiple genes; sequence-based methods are not tested against this spurious feature. Finally, random features for each protein are represented by vectors of 500 random numbers, with pairs of proteins represented by a concatenation of these values. These features are then fed to either a random forest or a simple neural network; the neural network architecture contains multiple shared layers before the protein pair’s values are concatenated and passed through a final prediction layer.
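A minimal sketch of the Count Bias control is shown below; it illustrates how a model with no pairwise information at all can still produce a ranking:

```python
from collections import Counter

class CountBias:
    """Control model: scores a pair by how much more often each of its
    proteins appears in positive than in negative training instances.
    It uses no pairwise information, so any apparent predictive power
    reflects dataset composition rather than biology."""

    def fit(self, pairs, labels):
        self.bias = Counter()
        for (a, b), y in zip(pairs, labels):
            delta = 1 if y == 1 else -1
            self.bias[a] += delta
            self.bias[b] += delta
        return self

    def score(self, pair):
        a, b = pair
        return self.bias.get(a, 0) + self.bias.get(b, 0)
```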

2.5. Evaluation Metrics

For our evaluations, test sets with 50% positive data are compared on accuracy (Acc) and area under the receiver operating characteristic curve (AUC). These metrics allow us to compare our results with previous literature, which most commonly uses accuracy to measure a model’s predictive capability. When data are imbalanced, as is common in the biological domain, with many more negatively labeled instances than positively labeled instances, precision-recall (P-R) curves are recommended, as simple accuracy and AUC calculations may be heavily influenced by frequent prediction of the negatively labeled, non-rare class [35,36]. Therefore, test sets with 10% or 0.3% positive data are compared on precision at 3% recall (Prec) and average precision (Avg P), rather than on accuracy-based measures. A recall of 3% was chosen to determine whether the algorithms are capable of making good predictions on their top-scoring pairs, a necessity for providing laboratories with good sets of protein pairs to test using experimental methods. See Supplementary File S7 for additional notes on implementing these metrics.
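A minimal sketch of these metrics using Scikit-Learn is shown below; the interpolation convention for precision at a fixed recall is one common choice and may differ in detail from the conventions in Supplementary File S7:

```python
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

def precision_at_recall(y_true, y_score, target_recall=0.03):
    """Interpolated precision at a target recall: the best precision
    achievable while recovering at least target_recall of the positives."""
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    return precision[recall >= target_recall].max()

# Example with toy scores; average precision summarizes the whole P-R curve.
y_true = np.array([1, 0, 0, 1, 0, 0, 0, 1])
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])
prec3 = precision_at_recall(y_true, y_score)
avg_p = average_precision_score(y_true, y_score)
```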

3. Results

To illustrate the reasoning presented in the Introduction, we started by testing representative methods, evaluating their predictions on hub proteins against the known PPIs. First, HiPPIP is compared to Qi et al. as originally reported [37], but with recently updated data of known PPIs from HPRD and BioGRID. See Figure 1, where Figure 1A is recomputed on the same hub proteins as in [37], and Figure 1B is computed for hub proteins with degree >100 in current data. In Figure 1B, a sequence-based predictor, SPRINT [52], which in its original publication reported 79–89% AUC on a 1:1 data composition, was trained on BioGRID data; however, this high performance is not reflected at the proteome scale, and it is outperformed both by the seminal work of Qi et al., published in 2009, and by HiPPIP.
We implemented various PPI prediction algorithms and tested most of them on their original datasets and evaluation metrics to verify that the results are reproduced similarly. We created reusable datasets and evaluated the algorithms on them, with commonly reported evaluation metrics as well as metrics more suitable for this domain, to benchmark their performance when predicting proteome-wide PPIs, where interacting protein pairs are very rare. We suggest suitable evaluation metrics and are releasing the datasets and the source code of our re-implementations, so that together these may lead to rapid advancement of computational approaches for PPI prediction and their benchmarking.

3.1. Realistic Datasets for Benchmark Evaluations

We created benchmark training and test datasets at various proportions of positive-class instances (1:1, 1:4, 1:9, and 1:332). In creating them, we employed two different approaches: the first selected pairwise instances while ensuring that the pairs used in training and testing did not overlap (referred to henceforth as Full data); the second held out proteins to be used exclusively in test data (referred to henceforth as Held Out data). The size of each dataset is shown in Table 1.

3.2. Sequence-Based Predictors

We implemented 36 sequence-based PPI prediction algorithms with various feature representations (Table 2) and evaluated them on the same datasets that were used in the original publications (Table 3) to confirm that our implementations are comparable (Table 4). Excluding Tian et al.’s SVM algorithm, our implementations had <1% difference in accuracy on average relative to the values reported in those publications, with a correlation of around 0.89. Large differences (>5%) were primarily observed when testing the smallest datasets (Martin Human and Martin H. Pylori). Overall, we obtained comparable results without extensive tuning or other significant hyperparameter optimization, showing that our implementations provide a good representation of previously produced works.
For each dataset used in previous literature, we compared our implementations of sequence-based predictors against the illogical/random feature control models. These comparisons are shown in Table 5. The illogical/random feature models perform about as well as most previous implementations; in fact, the best result of each control model exceeds, or falls within 4% accuracy of, the sequence-based predictors on all but two datasets. Our explanation is that sequence-based predictors likely capture the fact that some proteins are overrepresented in PPIs compared to random pairs (an aspect that cannot be exploited in real-world interactome prediction). Thus, datasets with far fewer proteins in the random pairs, such as Guo Multi Chen and both Pan Human datasets, are the easiest to obtain high accuracy on, as having several proteins appear in positive pairs but not in negative pairs creates an easily exploitable bias when making predictions. These results strongly suggest that sequence-based prediction models inherently predict biases of individual proteins, or at least have primarily been evaluated on datasets where the biases are easy to exploit.

3.3. Benchmark Evaluation of Sequence-Based Methods

Each sequence-based predictor was trained on each of our 52 training datasets (26 based on Full data and 26 based on Held Out data) and evaluated on the corresponding testing datasets (see Table 1). The results of these experiments are shown in Table 6 and Figure 2. Overall, most algorithms are matched or outperformed by one or more of the control methods. Specifically, when scoring on datasets generated from the Full set of protein pairs with 1:1 class composition, using random numbers as features with a neural network model tied for 3rd in accuracy and scored 5th in AUC in comparison to the dozens of sequence-based methods. On datasets where the class distribution is skewed at 1:9, or more realistically at 1:332, this model placed 6th and 13th in average precision, outperforming over half of the sequence-based methods. Even when measuring precision at 3% recall on the skewed-class data, it outperforms half of the sequence-based predictors.
Table 3. Datasets reported in prior publications. The original publications that created these datasets are referenced in the first column. The next two columns report the species of the data and the publications that report PPI prediction accuracies on these data. The data sizes (number of PPIs and number of unique proteins) are reported after mapping protein names to UniProt and filtering to remove proteins shorter than 31 residues.
Dataset Creator | Species | Dataset-Referencing Paper | Positive Pairs | Random Pairs | Proteins in Positive Data | Proteins in Random Data
Du [56] | Yeast a | Du [56] | 17,257 | 48,594 | 4382 | 2521
Guo [9] | Yeast b | Chen [66] | 5594 | 5594 | 2217 | 2421
Guo [67] | Multi | Chen [66] | 32,959 | 32,959 | 11,527 | 1399
Jia [57] | Yeast e,f | Jia [57] | 17,339 | 33,056 | 4436 | 3260
Liu [68] | Fruit Fly | Liu [68] | 4156 | 4241 | 2463 | 4080
Martin [69] | H. Pylori | Jia [53] | 1420 | 1458 | 1313 | 727
Martin [69] | Human | Pan [10] | 937 | 938 | 828 | 740
Pan [10] | Human | Pan [10] | 36,617 | 36,480 | 9473 | 2184
Pan [10] | Human | Pan [10] | 3899 | 4262 | 2502 | 661
Guo [9] | Yeast b | Tian [15] | 5594 | 5594 | 2521 | 1194
Li [17] | Human c,d | Li [17] | 4096 | 4096 | 2805 | 1865
Richoux [22] | Human c | Richoux [22] | 39,672 | 64,388 | 6676 | 15,869
a Only a random sample of random pairs is used to create a dataset of 50% positive data. b Different datasets based on Guo’s yeast data were found in the literature. Unless specified otherwise, we use Tian’s dataset by default. c Data are provided as individual train and test sets, rather than used for cross validation. Test sets have fewer pairs than train sets. d Data from the Alzheimer’s disease network. e The original dataset contained inter-species pairs; pairs with non-yeast proteins were removed. f Jia’s yeast data are used in two different ways: split into a training/cross-validation set of 50% positive data (Jia Yeast Cross) and a held-out test set of 30% positive data (Jia Yeast Held), or used for full cross validation (Jia Yeast Cross Full).
Table 4. Comparison of previous results to our implementations (the increase or decrease of our implementation relative to the scores from the original publications is listed in parentheses).
Algorithm | Dataset | Accuracy | AUC
Guo 2008 AC SVM [9] | Guo Tian Yeast | 87 (−2) | —
Pan 2010 PSAAC SVM [10] | Martin Human | 68 (−12) | —
Pan 2010 PSAAC SVM [10] | Pan Small | 91 (−18) | 95 (−17)
Pan 2010 PSAAC Rot [10] | Pan Small | 95 (+3) | 97 (+2)
Pan 2010 PSAAC Rand [10] | Pan Small | 96 (+2) | 97 (+2)
Pan 2010 LDA Rot [10] | Pan Large | 97 (+1) | 99 (+0)
Pan 2010 LDA Rot [10] | Pan Small | 96 (+2) | 98 (+1)
Pan 2010 LDA Rand [10] | Pan Large | 98 (+0) | 99 (+0)
Pan 2010 LDA Rand [10] | Pan Small | 96 (+2) | 98 (+1)
Pan 2010 LDA SVM [10] | Martin Human | 69 (−6) | —
Pan 2010 LDA SVM [10] | Pan Large | 95 (+2) | 98 (+1)
Pan 2010 LDA SVM [10] | Pan Small | 91 (+4) | 95 (+3)
Pan 2010 AC SVM [10] | Martin Human | 51 (+15) | —
Pan 2010 AC SVM [10] | Pan Small | 89 (+7) | 94 (+4)
Pan 2010 AC Rot [10] | Pan Small | 95 (+2) | 96 (+3)
Pan 2010 AC Rand [10] | Pan Small | 96 (+2) | 97 (+2)
Zhou 2011 SVM [70] | Guo Tian Yeast | 89 (+2) | 95 (+1)
Zhao 2012 SVM [11] | Liu Fruit Fly | 81 (−4) | —
Zhao 2012 SVM [11] | Martin H Pylori | 89 (−3) | —
Jia 2015 RF [57] | Jia Yeast Held | 87 (−5) | —
Jia 2015 RF [57] | Jia Yeast Cross | 84 (−6) | —
Jia 2015 RF [57] | Martin H Pylori | 91 (−3) | —
You 2015 RF [61] | Guo Tian Yeast | 95 (−1) | —
You 2015 RF [61] | Martin H Pylori | 88 (−2) | —
Ding 2016 RF [63] | Guo Tian Yeast | 95 (−1) | —
Ding 2016 RF [63] | Martin H Pylori | 88 (+2) | —
Ding 2016 RF [63] | Pan Small | 98 (+0) | —
Du 2017 Sep [56] | Du Yeast | 93 (+0) | 97 (−0)
Du 2017 Sep [56] | Guo Tian Yeast | 94 (+1) | —
Du 2017 Sep [56] | Martin H Pylori | 86 (+2) | —
Du 2017 Sep [56] | Pan Small | 98 (+1) | —
Du 2017 Comb [56] | Du Yeast | 90 (+1) | 96 (+0)
Sun 2017 CT Auto [12] | Pan Large | 95 (−1) | —
Sun 2017 AC Auto [12] | Pan Large | 97 (−0) | —
Wang 2017 Rot [19] | Guo Tian Yeast | 90 (−0) | —
Wang 2017 Rot [19] | Martin H Pylori | 88 (−12) | —
Göktepe 2018 SVM [13] | Martin Human | 74 (−6) | 83 (−11)
Göktepe 2018 SVM [13] | Martin H Pylori | 89 (−5) | 94 (−3)
Göktepe 2018 SVM [13] | Pan Small | 94 (+4) | 93 (+6)
Gonzalez-Lopez 2018 [21] | Du Yeast | 93 (−1) | 97 (−0)
Gonzalez-Lopez 2018 [21] | Guo Tian Yeast | 95 (−1) | 98 (−0)
Gonzalez-Lopez 2018 [21] | Martin H Pylori | 85 (+1) | 92 (−0)
Gonzalez-Lopez 2018 [21] | Pan Small | 98 (+1) | 100 (−0)
Hashemifar 2018 CNN [20] | Guo Tian Yeast | 95 (+0) | —
Li 2018 CNN/LSTM [23] | Pan Large | 99 (−0) | —
Chen 2019 LGBM [14] | Guo Tian Yeast | 95 (−0) | —
Chen 2019 LGBM [14] | Martin H Pylori | 89 (−1) | —
Chen 2019 RNN [66] | Guo Chen Yeast | 97 (−1) | —
Jia 2019 RF [53] | Jia Yeast C. Full | 88 (−4) | —
Jia 2019 RF [53] | Martin H Pylori | 93 (−4) | —
Richoux 2019 LSTM [22] | Richoux Strict | 78 (−1) | —
Richoux 2019 Full [22] | Richoux Strict | 76 (−1) | —
Tian 2019 SVM [15] a | Guo Tian Yeast | 96 (−12) | —
Tian 2019 SVM [15] a | Martin H Pylori | 96 (−17) | —
Yao 2019 Net [71] | Guo Tian Yeast | 95 (+1) | —
Yao 2019 Net [71] | Pan Small | 99 (+0) | —
Zhang 2019 Deep [16] | Du Yeast | 95 (−4) | 97 (−2)
Li 2020 Deep [17] | Li AD | 95 (+3) | 95 (+5)
Czibula 2021 Auto SS [18] | Guo Chen Multi | 97 (+0) | 97 (+2)
Czibula 2021 Auto SS [18] | Pan Large | 98 (−1) | 98 (+1)
Czibula 2021 Auto SJ [18] | Guo Chen Multi | 97 (−0) | 97 (+2)
Czibula 2021 Auto SJ [18] | Pan Large | 98 (−1) | 98 (+1)
Czibula 2021 Auto JJ [18] | Guo Chen Multi | 98 (−1) | 98 (+1)
Czibula 2021 Auto JJ [18] | Pan Large | 98 (−1) | 96 (+3)
a Results of Tian are expected to differ due to differences in data preprocessing; see Supplementary File S6 for details.
Table 5. Comparisons of the range of values per dataset in previous literature and our implementations, versus 4 algorithms that rely on simple counting or random numbers. Only 1 dataset was successfully predicted by an algorithm at a rate of over 10% above using random numbers, while most algorithms struggle to predict any better than counting or using random numbers per protein. Even the best results from each of the three datasets used in 10 or more previous implementations exceed predictions using random numbers by less than 3%, with several algorithms falling below the bias-based predictions.
Dataset (Columns 1–3 of Table 3) | Count Bias | Seq Sim Bias | Rand Net | Rand RF | Number of Implementations | Results Reported in Publications | Results of Our Implementations | Max Improvement over Bias Methods
Du Yeast | 87.7 | 87.5 | 92.5 | 88.5 | 4 | 90–95.3 | 90.5–92.6 | 2.8%
Guo Yeast Chen | 81.5 | 81.4 | 84 | 74.4 | 1 | 97.1 | 96.3 | 13.1%
Guo Yeast Tian | 87 | 85.9 | 94.3 | 94 | 10 | 87.3–95.1 | 84.5–95.5 | 1.2%
Guo Multi Chen | 93.5 | 93.1 | 98.7 | 96.4 | 3 | 96.9–98.2 | 96.6–97.3 | −0.5%
Jia Yeast Cross | 78.7 | 78.4 | 75.2 | 76.5 | 1 | 84.4 | 77.9 | 5.7%
Jia Yeast Held | 82.9 | 82.5 | 81.3 | 80.7 | 1 | 86.5 | 82.0 | 3.6%
Jia Yeast C. Full | 83.8 | 83.1 | 85.4 | 81.1 | 1 | 88 | 84.3 | 2.6%
Li AD | 96.7 | 79.1 | 76.2 | 97.3 | 1 | 94.7 | 97.7 | 0.4%
Liu Fruit Fly | 84.1 | 95.7 | 96.6 | 84.2 | 1 | 80.9 | 76.8 | −15.7%
Martin Human | 61.2 | 83.1 | 81.1 | 62.2 | 4 | 51–73.8 | 55.7–67.8 | −9.3%
Martin H Pylori | 83.6 | 61 | 59.2 | 89.6 | 10 | 85.2–93 | 75.9–89.9 | 3.4%
Pan Human Large | 96.3 | 83.1 | 82.2 | 97.8 | 9 | 94.5–99 | 94.3–98.9 | 1.2%
Pan Human Small | 94.5 | 93.1 | 98.8 | 98.6 | 14 | 89.3–98.7 | 72.5–99.4 | 0.6%
Richoux Strict | 79.6 | 94.4 | 96.9 | 79.5 | 2 | 76.3–78.3 | 74.7–76.8 | −18.6%
Table 6. Accuracy, area under the receiver operating characteristic curve, precision, and average precision across all sequence-based and illogical-feature-based models and all datasets.
Algorithm | Full 50% Acc | Full 50% AUC | Full 10% Prec | Full 10% Avg P | Full 0.3% Prec | Full 0.3% Avg P | Held 50% Acc | Held 50% AUC | Held 10% Prec | Held 10% Avg P | Held 0.3% Prec | Held 0.3% Avg P

Control Methods:
Count Bias | 84.2 | 91.5 | 91.9 | 56.4 | 28.0 | 6.5 | 50.0 | 50.0 | 10.0 | 10.0 | 0.3 | 0.3
Seq Sim Bias | 82.3 | 90.0 | 84.5 | 53.0 | 14.8 | 5.2 | 65.6 | 70.7 | 41.8 | 21.6 | 2.1 | 0.9
Random Vec NNet | 84.7 | 92.0 | 92.5 | 60.4 | 29.1 | 7.4 | 51.0 | 50.2 | 11.1 | 10.0 | 0.4 | 0.3
Random Vec RF | 78.1 | 85.9 | 87.3 | 45.9 | 17.9 | 3.8 | 50.9 | 50.5 | 10.5 | 10.1 | 0.3 | 0.3

Sequence-Based Predictors:
Guo 2008 AC SVM [9] | 74.1 | 81.5 | 83.3 | 40.4 | 16.1 | 3.4 | 61.4 | 65.4 | 44.2 | 17.3 | 2.4 | 0.7
Pan 2010 PSAAC SVM [10] | 64.2 | 68.4 | 46.7 | 21.3 | 2.7 | 1.4 | 63.2 | 67.1 | 43.5 | 19.2 | 2.2 | 1.1
Pan 2010 PSAAC Rot [10] | 82.9 | 90.6 | 96.1 | 57.5 | 44.5 | 8.5 | 64.4 | 70.2 | 62.1 | 20.8 | 4.9 | 1.4
Pan 2010 PSAAC Rand [10] | 83.7 | 91.4 | 96.8 | 59.9 | 46.9 | 9.3 | 66.4 | 72.5 | 65.7 | 22.6 | 5.4 | 1.5
Pan 2010 LDA Rot [10] | 82.7 | 90.4 | 93.7 | 54.7 | 33.7 | 6.8 | 59.7 | 63.9 | 38.0 | 16.6 | 2.3 | 0.9
Pan 2010 LDA Rand [10] | 83.5 | 91.1 | 94.2 | 57 | 35.9 | 7.4 | 61.3 | 65.9 | 44.6 | 17.9 | 3.1 | 1.0
Pan 2010 LDA SVM [10] | 77.8 | 85.3 | 88.4 | 45.9 | 19.2 | 4.2 | 58.8 | 61.9 | 27.4 | 14.9 | 1.3 | 0.5
Pan 2010 AC SVM [10] | 80.2 | 87.2 | 85.5 | 47.9 | 13.6 | 3.8 | 59.9 | 64.5 | 41.8 | 18.2 | 2.0 | 0.7
Pan 2010 AC Rot [10] | 83.2 | 91.0 | 94.3 | 55.2 | 37.7 | 6.8 | 57.8 | 61.3 | 30.7 | 14.5 | 1.3 | 0.7
Pan 2010 AC Rand [10] | 83.9 | 91.7 | 94.1 | 57.0 | 39.0 | 7.4 | 59.8 | 64.3 | 37.0 | 15.4 | 2.1 | 0.8
Zhou 2011 SVM [70] | 80.4 | 88.2 | 89.0 | 51.7 | 23.6 | 5.4 | 60.6 | 64.5 | 33.6 | 17.1 | 1.5 | 0.6
Zhao 2012 SVM [11] | 77.9 | 83.5 | 86.6 | 39.6 | 14.9 | 2.9 | 64.4 | 68.5 | 35.0 | 19.1 | 1.7 | 0.7
Jia 2015 RF [57] | 84.6 | 92.2 | 95.9 | 60.7 | 41.8 | 8.7 | 65.1 | 70.4 | 51.1 | 19.5 | 3.3 | 1.2
You 2015 RF [61] | 83.1 | 90.8 | 95.8 | 56.5 | 43.9 | 7.7 | 61.2 | 65.9 | 42.9 | 17.4 | 2.5 | 1.0
Ding 2016 RF [63] | 84.7 | 92.3 | 96.9 | 61.1 | 49.8 | 10.0 | 64.1 | 70.0 | 58.3 | 18.3 | 4.6 | 1.2
Du 2017 Sep [56] | 85.5 | 92.8 | 94.9 | 64.5 | 39.5 | 9.8 | 67.0 | 73.3 | 56.3 | 24.9 | 3.9 | 1.2
Du 2017 Comb [56] | 83.2 | 90.6 | 94.7 | 59.0 | 33.8 | 7.9 | 65.1 | 70.7 | 58.5 | 23.5 | 4.1 | 1.1
Sun 2017 CT Auto [12] | 74.4 | 82.0 | 61.7 | 37.8 | 4.3 | 2.0 | 58.8 | 62.2 | 26.9 | 14.4 | 1.0 | 0.5
Sun 2017 AC Auto [12] | 77.3 | 84.4 | 74.1 | 42.5 | 8.8 | 2.9 | 58.1 | 60.5 | 22.4 | 14.5 | 0.8 | 0.5
Wang 2017 Rot [19] | 70.2 | 73.5 | 33.0 | 21.8 | 1.3 | 0.8 | 56.2 | 57.1 | 16.1 | 11.8 | 0.5 | 0.4
Göktepe 2018 SVM [13] | 82.5 | 90.2 | 93.6 | 57.0 | 29.2 | 7.1 | 65.6 | 71.2 | 59.4 | 24.0 | 4.6 | 1.2
Gonzalez-Lopez 2018 [21] | 83 | 90.5 | 89.9 | 55.6 | 23.9 | 5.9 | 54.1 | 55.4 | 17.8 | 11.9 | 0.7 | 0.4
Hashemifar 2018 CNN [20] | 82.2 | 89.3 | 84.8 | 48.9 | 15.0 | 3.9 | 61.4 | 65.5 | 30.8 | 16.8 | 1.4 | 0.6
Li 2018 CNN/LSTM [23] | 84.3 | 91.8 | 93.6 | 60.9 | 28.3 | 7.3 | 56.0 | 58.5 | 24.7 | 13.6 | 1.1 | 0.5
Chen 2019 LGBM [14] | 81.9 | 89.7 | 96.0 | 57.6 | 45.2 | 8.4 | 62.7 | 67.7 | 43.6 | 19.8 | 2.4 | 1.0
Chen 2019 RNN [66] | 83.9 | 90.4 | 75.4 | 54.7 | 7.7 | 4.0 | 59.8 | 63.2 | 27.1 | 15.7 | 1.2 | 0.5
Jia 2019 RF [53] | 83.2 | 91.0 | 97.1 | 59.5 | 52.7 | 10.0 | 66.0 | 72.0 | 68.2 | 23.2 | 5.6 | 1.6
Richoux 2019 LSTM [22] | 80.0 | 87.0 | 91.6 | 52.8 | 25.2 | 5.8 | 54.3 | 55.5 | 15.6 | 11.9 | 0.6 | 0.4
Richoux 2019 Full [22] | 82.8 | 90.4 | 91.7 | 57.9 | 25.7 | 6.5 | 55.2 | 56.8 | 15.1 | 12.2 | 0.6 | 0.4
Tian 2019 SVM [15] | 76.0 | 83.6 | 86.4 | 44.3 | 18.8 | 4.1 | 65.3 | 70.8 | 57.7 | 22.0 | 4.3 | 1.0
Yao 2019 Net [71] | 83.5 | 90.7 | 89.3 | 55.9 | 20.1 | 5.6 | 57.7 | 60.7 | 27.1 | 14.7 | 1.2 | 0.5
Zhang 2019 Deep [16] | 81.4 | 88.1 | 74.8 | 51.0 | 8.0 | 3.7 | 59.5 | 61.7 | 38.2 | 17.1 | 1.9 | 0.6
Li 2020 Deep [17] | 86.4 | 93.4 | 96.6 | 67.7 | 50.7 | 12.4 | 67.3 | 73.8 | 65.9 | 26.8 | 6.1 | 1.7
Czibula 2021 Auto SS [18] | 76.5 | 84.7 | 66.8 | 36.5 | 5.7 | 2.1 | 53.1 | 54.0 | 11.3 | 11.3 | 0.7 | 0.4
Czibula 2021 Auto SJ [18] | 66.6 | 74.8 | 66.6 | 33.3 | 5.9 | 1.8 | 53.0 | 54.4 | 18.2 | 11.6 | 0.8 | 0.4
Czibula 2021 Auto JJ [18] | 74.7 | 82.6 | 67.9 | 35.8 | 5.8 | 2.1 | 55.7 | 57.2 | 40 | 12.9 | 1.7 | 0.5
When testing on protein pairs containing Held Out proteins, the accuracies and precisions of all published algorithms are much lower, with the best prediction accuracy on 1:1 data falling below 70%, and both precision at 3% recall and average precision on the 1:332 data falling below 10% for all models. Biases that exploit the number of positive and negative training instances in which each test protein appears, such as simple counts and random numbers, are eliminated by our Held Out protein data generation method: their prediction accuracies and precisions drop to nearly the percentage of positive data in the test set, implying random ordering. However, predicting on individual proteins based on sequence similarity still places in the top half of all comparisons, and in the top 10 in accuracy and AUC on 50% positive test data.

3.4. De-Biasing Annotation-Based Predictors

Computations based on gene annotation features (non-sequence-based) can induce biases when dealing with missing data. If missing data are much more prominent in the positive or negative class (which they can be in PPI prediction datasets [72]), algorithms can learn to make predictions based on the number of missing values as a spurious feature. In Table 7, we show the amount of missing data for each dataset. While negative pairs have more missing data, most pairs contain at least one of the three GO features, and some Pfam and InterPro features. Given that more than half of the negative pairs contain information from all 3 GO ontologies, that more than 40% of them contain domain data from Prosite (the feature most often missing), and that negative data heavily dominate the imbalanced datasets, we do not believe that missing data alone would provide a significant advantage in predicting PPIs from these features.
Additionally, many of the published algorithms based on protein domains, and to a lesser extent on GO annotations, utilized information related to interactions either directly, by computing the probability that a pair of annotations belongs to known interactions, or indirectly, by utilizing a domain database that bases its scores on the probability of a pair of domains belonging to protein interactions. When utilizing these data without filtering out interacting pairs in the test data, the results can be biased: rare pairs of domains or GO annotations that occur in an interaction in the test dataset are scored highly solely because that interaction was used when generating the protein pair’s features. Table 8 shows the performance of our two algorithms whose features rely on interactions, when those features are created using all proteins, using non-testing pairs, or using non-held-out proteins exclusively. Overall, there is a significant drop in accuracy when removing test pairs from the feature creation process, with a more moderate drop when holding out entire proteins instead of just the pairs in the test dataset. As test data should not influence feature creation and training, we believe this reveals a significant bias that must be accounted for when calculating features. Compared to sequence-based predictors, however, the further drop when moving from holding out test pairs to holding out entire proteins is much more moderate, suggesting less of a biasing issue. While the difference is smaller, for the purpose of fair comparisons we utilized features calculated with proteins held out when comparing against sequence-based methods on the Held Out datasets.

3.5. Benchmark Evaluation of Annotation-Based Methods

We tested our implementations of annotation-based methods on the benchmark datasets. These results, along with predictions from the illogical/random feature models and some of the best sequence-based predictors, are shown in Table 9. Overall, the results from these methods are worse than the control models in several categories when no data are held out. However, unlike the control and sequence-based methods, the four methods that use small feature vectors not relying heavily on individual protein data (i.e., excluding ‘Chen 2005 Dom RF’ and ‘Maetschke 2012 ULCA’) drop much less when running on held out data, with some even improving their precision at 3% recall. Unlike the sequence-based methods, these 4 methods obtain over 85% precision at 3% recall on 1:9 positive data, and 15–25% precision on 1:332 data. The latter result represents a 50x–80x improvement over random data, over a 7x improvement over predicting based on similar sequences, and a 2x–4x improvement over the best sequence-based methods at low recall levels. While methods using aggregations showed the ability to make good predictions at low recall levels, Chen et al.’s method using the pairwise sum of binary data per protein and Maetschke’s method using a union of GO annotations between two proteins up to and including their lowest common ancestor both struggled, like the sequence-based methods. This may be a reflection of the large number of features used by these models, as well as of the fact that their feature vectors mostly reflect each individual protein’s features or a simple calculation between the individual proteins’ features. A plot of the best annotation-based predictors versus the best sequence-based predictors can be found in Figure 3. In both precision-recall curves, there is a clear gap in precision between the 4 best annotation-based predictors and the sequence-based predictors at low recall levels.

4. Discussion

We implemented different PPI prediction algorithms and evaluated them on benchmark datasets containing different class distributions. Our results show that most published methods perform much worse than originally reported when evaluation data are created with realistic proportions of positive and negative classes. We also showed that many of these methods may be capturing spurious, illogical features that represent the frequency of specific proteins in the data rather than meaningful information about PPIs themselves; such methods will not translate to the real-world application of proteome-wide PPI discovery, where every protein is tested against every other protein in the proteome.
In prior publications, most sequence-based predictors were evaluated on datasets with 50% positively labeled instances, with randomly selected protein pairs serving as negative class data. In those datasets, PPIs (i.e., the positive class data) are drawn from a scale-free network in which some nodes are hubs and therefore appear in many protein pairs, whereas randomly paired negative class data are drawn from a nearly complete graph. Thus, the usage rates of different proteins, particularly hub proteins, in the positively and negatively labeled instances are dramatically different, creating an easily exploitable bias. Some of this can be inferred from Table 3, which shows far more unique proteins in positive class data than in negative class data in many datasets. (If, say, 100 pairs are drawn from a scale-free distribution and there is one hub with 25 PPIs, those 25 pairs contain only 26 unique proteins; whereas, if that hub did not exist and the pairs were drawn from a complete graph, those 25 pairs would contain between 26 and 50 unique proteins.) When evaluated on these datasets, algorithms may assign class labels based on protein frequency rather than the true characteristics of an interacting protein pair, falsely showing higher performance in evaluations; the class label is effectively assigned independently of the second protein in the pair, a premise that is invalid for real-world interactome prediction. Our experiments showed that, both on datasets from original publications and on our newly created Full datasets, control models with illogical features that simply capture protein membership can perform on par with most of the models from the literature. When analyzing the results on test datasets with proteins not utilized in the training set (Held Out), we find that most algorithms’ accuracies drop significantly: accuracies on our 1:1 test data dropped from the original 75–85% down to 55–67%. On the more relevant metric, precision at 3% recall on 1:332 positive class data with held-out test proteins, the best sequence-based algorithm achieved a mere 6.1%. Thus, when both the data ratio and the evaluation metric are suitably chosen, the true ability is revealed to be impractically low.
Some of the previous methods filtered out protein sequences with high sequence similarity, which we have not implemented; however, we mapped our data to gene-level information, so multiple isoforms are not included in our test data, minimizing the number of highly sequence-similar proteins. It is likely that if we removed proteins with similar sequences from our datasets, the results when predicting on held-out proteins would be lower, as exploiting sequence similarity provides results similar to those of sequence-based methods. This may be analyzed in future work.
When training the methods from the literature that were originally designed for 50% positive training and test datasets, we did not adjust the designs or implementations for the different data ratios. Some techniques, such as class weighting, are commonly recommended when training models on imbalanced data. However, to keep the models as similar as possible to those found in previous literature, we decided not to implement adjustments per positive data ratio. We note that we did test all models on 50% positive test data and found the results to be similar to what could be obtained by prioritizing proteins by their number of known interactions. Therefore, we believe it is unlikely that techniques such as class weighting would drastically increase the precision of these methods.
When using annotation features, we found that feature representation has a significant impact on the results of the model. Most methods we tested were unable to outperform the control models made from illogical features when generating data from all pairs of proteins. However, on Held Out protein data, they showed moderate precision at 3% recall even on heavily imbalanced class data, showing that their predictive capabilities do not depend on exploitable, individual-protein-based biases in the underlying data.
Models that use features computed from pairs of protein domains and GO annotations that appear in other interactions performed well at predicting interactions; these are well recognized to be meaningful features for predicting interactions (e.g., two protein domains that are known to interact are highly likely to conserve that interaction/function when they appear in other proteins). Thus, knowledge of interacting protein domains or compatible GO annotations (e.g., specific ligand and receptor annotations in the two proteins), along with other protein features, can be used to learn an effective machine learning classification model that helps shortlist protein pairs for experimental validation. We also note that the two methods using only 3 features, i.e., ‘Domain Variant’ and ‘Guo 2006 Sem’, performed as well as or better than other non-sequence methods based on 10–20 features. Surprisingly, our ensemble method, which contained Resnik semantic similarity, GO annotation frequency, and all 3 domain features, performed worse than the methods using only the 3 domain features or Resnik semantic similarity. This was most likely due to the different aggregation methods, with Domain Variant and Guo 2006 Sem using product and max aggregations, respectively, while our ensemble used best matching average aggregation. This could imply that max or product aggregation is better for predicting protein interactions, or it could suggest a bias whereby protein interactions are primarily known for genes with a high number of annotations. If the latter is true, product, sum, and maximum aggregations could exploit this bias, as all three functions increase monotonically as more data are provided. We leave this analysis to future work.
As for methods whose model features were essentially combinations of features produced for each individual protein, such as Chen 2005 Dom RF and Maetschke 2012 ULCA, we found that their performance mirrored that of the sequence-based methods. This implies that using sequences alone is not the problem for sequence-based methods; rather, any methodology that relies on producing a unique feature set per protein and combining those sets in simple ways appears to make predictions based mostly on the underlying biases in generated PPI datasets. Only when a small number of more complex features is created from pairs of proteins, instead of individual proteins, do models improve significantly beyond bias when positive interactions are the rare class (see the schematic below).
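Schematically, the contrast is between concatenating two per-protein descriptors and computing a feature from the pair itself; the function names and toy domain identifiers below are illustrative, not taken from any re-implemented method.

```python
import numpy as np

def per_protein_combination(f_a: np.ndarray, f_b: np.ndarray) -> np.ndarray:
    # Pattern used by most sequence-based methods: a fixed descriptor is
    # computed per protein, then the two descriptors are combined with a
    # simple operation (concatenation here), letting a model key on the
    # identity of individual proteins.
    return np.concatenate([f_a, f_b])

def pairwise_domain_feature(domains_a: set, domains_b: set,
                            known_interacting_domain_pairs: set) -> int:
    # A genuinely pairwise feature: how many domain pairs (d_a, d_b) of
    # this protein pair also occur in other known interactions.
    return sum(
        (da, db) in known_interacting_domain_pairs
        or (db, da) in known_interacting_domain_pairs
        for da in domains_a for db in domains_b
    )

# Toy usage with hypothetical domain identifiers:
print(pairwise_domain_feature({"PF00001", "PF00002"}, {"PF00010"},
                              {("PF00001", "PF00010")}))  # -> 1
```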
In conclusion, we compared the P-R performance of HiPPIP with the selected sequence-based and annotation-based methods that performed the best (Figure 4A) and found that HiPPIP significantly outperforms all of them. In this evaluation, for all predictors, known PPIs from the HPRD dataset are also included when assessing whether a prediction is a true positive. HiPPIP was evaluated on two sets of test data, each containing 10 datasets: in one set, the data were created in the same fashion as the Full data (‘HiPPIP-a’ in Figure 4A); in the other, the positive test instances were taken from BioGRID as published in January 2017 (‘HiPPIP-b’ in Figure 4A). Figure 4A shows results aggregated across the datasets, and Figure 4B,C show the individual datasets for the two sets. One caveat: the specific pairs used in training HiPPIP are unavailable, so there may be some overlap with its training data. Based on the number of known PPIs in BioGRID and considering that 20,000 PPIs were used in training HiPPIP, we estimate that the HiPPIP-a and HiPPIP-b test datasets may have 24% and 14% overlap for positive instances, respectively, whereas the overlap for negative data is negligible.

5. Conclusions

In this analysis, we compared protein interaction predictions from a variety of algorithms in the literature, including methods based on sequences, Gene Ontology annotations, and domain features. Overall, we found significant biases in several previous studies. When these biases were removed, our results showed that pairwise features, such as semantic similarity and pairs of domains occurring in known protein interactions, predicted interactions in highly imbalanced data at low recall levels and outperformed competing methods, such as those using sequence-based features. With methods that build a feature set per protein, as most sequence-based predictors do, we found that most algorithms fail to predict interactions significantly better than simple illogical features based on individual proteins.
In these implementations, we did not make significant adjustments or tune the algorithms’ hyperparameters, so that our results remain comparable with the originally reported results. We believe that by releasing the source code and the datasets, the scientific community can continue to improve these algorithms, by devising better features and algorithms or by tuning the algorithms to handle the underlying skewed class distribution.

Supplementary Materials

The following are available online. S1: Details of downloading and processing positive protein interactions. S2: Implementation details of sequence-based features. S3: Implementation details of annotation-based features. S4: Details of classification models and hyperparameter processing. S5: List and brief descriptions of published algorithms that are re-implemented here. S6: Implementation details of control models with illogical and random features. S7: Evaluation Metric Notes.

Author Contributions

Conceptualization, methodology, formal analysis, investigation, visualization, resources: B.D. and M.K.G.; software, validation, data curation: B.D.; writing—original draft preparation, B.D.; writing—review and editing, B.D. and M.K.G.; supervision, M.K.G.; funding acquisition, B.D.’s training program. All authors have read and agreed to the published version of the manuscript.

Funding

The research reported in this publication and the article publication charges were supported in part by the National Library of Medicine of the National Institutes of Health under award number T15LM007059.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The benchmark datasets and source code of our implementations of various published algorithms can be downloaded from https://github.com/bmd2007/benchmark_eval.

Conflicts of Interest

The authors declare no conflict of interest.

Sample Availability

Not applicable.

References

1. Tang, Z.; Takahashi, Y. Analysis of Protein–Protein Interaction by Co-IP in Human Cells. In Two-Hybrid Systems; Springer: New York, NY, USA, 2018; pp. 289–296.
2. Johnson, K.L.; Qi, Z.; Yan, Z.; Wen, X.; Nguyen, T.C.; Zaleta-Rivera, K.; Chen, C.J.; Fan, X.; Sriram, K.; Wan, X.; et al. Revealing protein-protein interactions at the transcriptome scale by sequencing. Mol. Cell 2021, 81, 4091–4103.
3. Huang, H.; Jedynak, B.M.; Bader, J.S. Where have all the interactions gone? Estimating the coverage of two-hybrid protein interaction maps. PLoS Comput. Biol. 2007, 3, e214.
4. Luck, K.; Kim, D.-K.; Lambourne, L.; Spirohn, K.; Begg, B.E.; Bian, W.; Brignall, R.; Cafarelli, T.; Campos-Laborie, F.J.; Charloteaux, B. A reference map of the human binary protein interactome. Nature 2020, 580, 402–408.
5. Lund-Johansen, F.; Tran, T.; Mehta, A. Towards reproducibility in large-scale analysis of protein–protein interactions. Nat. Methods 2021, 18, 720–721.
6. Hart, G.T.; Ramani, A.K.; Marcotte, E.M. How complete are current yeast and human protein-interaction networks? Genome Biol. 2006, 7, 120.
7. Stumpf, M.P.; Thorne, T.; de Silva, E.; Stewart, R.; An, H.J.; Lappe, M.; Wiuf, C. Estimating the size of the human interactome. Proc. Natl. Acad. Sci. USA 2008, 105, 6959–6964.
8. Rual, J.F.; Hirozane-Kishikawa, T.; Hao, T.; Bertin, N.; Li, S.; Dricot, A.; Li, N.; Rosenberg, J.; Lamesch, P.; Vidalain, P.O.; et al. Human ORFeome version 1.1: A platform for reverse proteomics. Genome Res. 2004, 14, 2128–2135.
9. Guo, Y.; Yu, L.; Wen, Z.; Li, M. Using support vector machine combined with auto covariance to predict protein–protein interactions from protein sequences. Nucleic Acids Res. 2008, 36, 3025–3030.
10. Pan, X.-Y.; Zhang, Y.-N.; Shen, H.-B. Large-Scale prediction of human protein–protein interactions from amino acid sequence based on latent topic features. J. Proteome Res. 2010, 9, 4992–5001.
11. Zhao, X.-W.; Ma, Z.-Q.; Yin, M.-H. Predicting protein-protein interactions by combing various sequence-derived features into the general form of Chou’s Pseudo amino acid composition. Protein Pept. Lett. 2012, 19, 492–500.
12. Sun, T.; Zhou, B.; Lai, L.; Pei, J. Sequence-based prediction of protein protein interaction using a deep-learning algorithm. BMC Bioinform. 2017, 18, 1–8.
13. Göktepe, Y.E.; Kodaz, H. Prediction of protein-protein interactions using an effective sequence based combined method. Neurocomputing 2018, 303, 68–74.
14. Chen, C.; Zhang, Q.; Ma, Q.; Yu, B. LightGBM-PPI: Predicting protein-protein interactions through LightGBM with multi-information fusion. Chemom. Intell. Lab. Syst. 2019, 191, 54–64.
15. Tian, B.; Wu, X.; Chen, C.; Qiu, W.; Ma, Q.; Yu, B. Predicting protein–protein interactions by fusing various Chou’s pseudo components and using wavelet denoising approach. J. Theor. Biol. 2019, 462, 329–346.
16. Zhang, L.; Yu, G.; Xia, D.; Wang, J. Protein–protein interactions prediction based on ensemble deep neural networks. Neurocomputing 2019, 324, 10–19.
17. Li, F.; Zhu, F.; Ling, X.; Liu, Q. Protein Interaction Network Reconstruction Through Ensemble Deep Learning with Attention Mechanism. Front. Bioeng. Biotechnol. 2020, 8, 390.
18. Czibula, G.; Albu, A.-I.; Bocicor, M.I.; Chira, C. AutoPPI: An Ensemble of Deep Autoencoders for Protein–Protein Interaction Prediction. Entropy 2021, 23, 643.
19. Wang, L.; You, Z.-H.; Xia, S.-X.; Liu, F.; Chen, X.; Yan, X.; Zhou, Y. Advancing the prediction accuracy of protein-protein interactions by utilizing evolutionary information from position-specific scoring matrix and ensemble classifier. J. Theor. Biol. 2017, 418, 105–110.
20. Hashemifar, S.; Neyshabur, B.; Khan, A.A.; Xu, J. Predicting protein–protein interactions through sequence-based deep learning. Bioinformatics 2018, 34, i802–i810.
21. Gonzalez-Lopez, F.; Morales-Cordovilla, J.A.; Villegas-Morcillo, A.; Gomez, A.M.; Sanchez, V. End-to-end prediction of protein-protein interaction based on embedding and recurrent neural networks. In Proceedings of the 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Madrid, Spain, 3–6 December 2018; pp. 2344–2350.
22. Richoux, F.; Servantie, C.; Borès, C.; Téletchéa, S. Comparing two deep learning sequence-based models for protein-protein interaction prediction. arXiv 2019, arXiv:1901.06268.
23. Li, H.; Gong, X.-J.; Yu, H.; Zhou, C. Deep neural network based predictions of protein interactions using primary sequences. Molecules 2018, 23, 1923.
24. Guo, X.; Liu, R.; Shriver, C.D.; Hu, H.; Liebman, M.N. Assessing semantic similarity measures for the characterization of human regulatory pathways. Bioinformatics 2006, 22, 967–973.
25. Zhang, S.-B.; Tang, Q.-R. Protein–protein interaction inference based on semantic similarity of gene ontology terms. J. Theor. Biol. 2016, 401, 30–37.
26. Chen, X.-W.; Liu, M. Prediction of protein–protein interactions using random decision forest framework. Bioinformatics 2005, 21, 4394–4400.
27. Zhang, X.; Jiao, X.; Song, J.; Chang, S. Prediction of human protein–protein interaction by a domain-based approach. J. Theor. Biol. 2016, 396, 144–153.
28. Maetschke, S.R.; Simonsen, M.; Davis, M.J.; Ragan, M.A. Gene Ontology-driven inference of protein–protein interactions using inducers. Bioinformatics 2012, 28, 69–75.
29. Qi, Y.; Dhiman, H.K.; Bhola, N.; Budyak, I.; Kar, S.; Man, D.; Dutta, A.; Tirupula, K.; Carr, B.I.; Grandis, J. Systematic prediction of human membrane receptor interactions. Proteomics 2009, 9, 5243–5255.
30. Thahir, M.; Sharma, T.; Ganapathiraju, M.K. An efficient heuristic method for active feature acquisition and its application to protein-protein interaction prediction. In Proceedings of the Great Lakes Bioinformatics Conference 2012, Ann Arbor, MI, USA, 15–17 May 2012; pp. 1–9.
31. Goldberg, D.S.; Roth, F.P. Assessing experimentally derived interactions in a small world. Proc. Natl. Acad. Sci. USA 2003, 100, 4372–4376.
32. Stark, C.; Breitkreutz, B.-J.; Reguly, T.; Boucher, L.; Breitkreutz, A.; Tyers, M. BioGRID: A general repository for interaction datasets. Nucleic Acids Res. 2006, 34, D535–D539.
33. Park, Y.; Marcotte, E.M. Flaws in evaluation schemes for pair-input computational predictions. Nat. Methods 2012, 9, 1134.
34. Yu, J.; Guo, M.; Needham, C.J.; Huang, Y.; Cai, L.; Westhead, D.R. Simple sequence-based kernels do not predict protein–protein interactions. Bioinformatics 2010, 26, 2610–2614.
35. Pinker, E. Reporting accuracy of rare event classifiers. NPJ Digit. Med. 2018, 1, 56.
36. Saito, T.; Rehmsmeier, M. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS ONE 2015, 10, e0118432.
37. Ganapathiraju, M.K.; Thahir, M.; Handen, A.; Sarkar, S.N.; Sweet, R.A.; Nimgaonkar, V.L.; Loscher, C.E.; Bauer, E.M.; Chaparala, S. Schizophrenia interactome with 504 novel protein-protein interactions. NPJ Schizophr. 2016, 2, 16012.
38. Gene Ontology Consortium. Gene Ontology Consortium: Going forward. Nucleic Acids Res. 2015, 43, D1049–D1056.
39. Huntley, R.P.; Sawford, T.; Mutowo-Meullenet, P.; Shypitsyna, A.; Bonilla, C.; Martin, M.J.; O’Donovan, C. The GOA database: Gene ontology annotation updates for 2015. Nucleic Acids Res. 2015, 43, D1057–D1063.
40. Hunter, S.; Apweiler, R.; Attwood, T.K.; Bairoch, A.; Bateman, A.; Binns, D.; Bork, P.; Das, U.; Daugherty, L.; Duquenne, L. InterPro: The integrative protein signature database. Nucleic Acids Res. 2009, 37, D211–D215.
41. Hulo, N.; Bairoch, A.; Bulliard, V.; Cerutti, L.; De Castro, E.; Langendijk-Genevaux, P.S.; Pagni, M.; Sigrist, C.J. The PROSITE database. Nucleic Acids Res. 2006, 34, D227–D230.
42. Bateman, A.; Coin, L.; Durbin, R.; Finn, R.D.; Hollich, V.; Griffiths-Jones, S.; Khanna, A.; Marshall, M.; Moxon, S.; Sonnhammer, E.L. The Pfam protein families database. Nucleic Acids Res. 2004, 32, D138–D141.
43. The UniProt Consortium. UniProt: The universal protein knowledgebase in 2021. Nucleic Acids Res. 2021, 49, D480–D489.
44. National Center for Biotechnology Information. Available online: https://0-www-ncbi-nlm-nih-gov.brum.beds.ac.uk/ (accessed on 22 November 2021).
45. Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient estimation of word representations in vector space. arXiv 2013, arXiv:1301.3781.
46. Altschul, S.F.; Madden, T.L.; Schäffer, A.A.; Zhang, J.; Zhang, Z.; Miller, W.; Lipman, D.J. Gapped BLAST and PSI-BLAST: A new generation of protein database search programs. Nucleic Acids Res. 1997, 25, 3389–3402.
47. Wen, Z.; Shi, J.; Li, Q.; He, B.; Chen, J. ThunderSVM: A fast SVM library on GPUs and CPUs. J. Mach. Learn. Res. 2018, 19, 797–801.
48. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L. Pytorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32, 8026–8037.
49. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
50. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.-Y. Lightgbm: A highly efficient gradient boosting decision tree. Adv. Neural Inf. Process. Syst. 2017, 30, 3146–3154.
51. Rodriguez, J.J.; Kuncheva, L.I.; Alonso, C.J. Rotation forest: A new classifier ensemble method. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1619–1630.
52. Li, Y.; Ilie, L. SPRINT: Ultrafast protein-protein interaction prediction of the entire human interactome. BMC Bioinform. 2017, 18, 485.
53. Jia, J.; Li, X.; Qiu, W.; Xiao, X.; Chou, K.-C. iPPI-PseAAC (CGR): Identify protein-protein interactions by incorporating chaos game representation into PseAAC. J. Theor. Biol. 2019, 460, 195–203.
54. Shen, J.; Zhang, J.; Luo, X.; Zhu, W.; Yu, K.; Chen, K.; Li, Y.; Jiang, H. Predicting protein–protein interactions based only on sequences information. Proc. Natl. Acad. Sci. USA 2007, 104, 4337–4341.
55. Dubchak, I.; Muchnik, I.; Holbrook, S.R.; Kim, S.-H. Prediction of protein folding class using global description of amino acid sequence. Proc. Natl. Acad. Sci. USA 1995, 92, 8700–8704.
56. Du, X.; Sun, S.; Hu, C.; Yao, Y.; Yan, Y.; Zhang, Y. DeepPPI: Boosting prediction of protein–protein interactions with deep neural networks. J. Chem. Inf. Model. 2017, 57, 1499–1510.
57. Jia, J.; Liu, Z.; Xiao, X.; Liu, B.; Chou, K.-C. iPPI-Esml: An ensemble classifier for identifying the interactions of proteins by incorporating their physicochemical properties and wavelet transforms into PseAAC. J. Theor. Biol. 2015, 377, 47–56.
58. Zhang, Z.-H.; Wang, Z.-H.; Wang, Y.-X. A new encoding scheme to improve the performance of protein structural class prediction. In Proceedings of the International Conference on Natural Computation, Changsha, China, 27–29 August 2005; pp. 1164–1173.
59. Yu, B.; Chen, C.; Wang, X.; Yu, Z.; Ma, A.; Liu, B. Prediction of protein–protein interactions based on elastic net and deep forest. Expert Syst. Appl. 2021, 176, 114876.
60. Yang, L.; Xia, J.-F.; Gui, J. Prediction of protein-protein interactions from protein sequence using local descriptors. Protein Pept. Lett. 2010, 17, 1085–1090.
61. You, Z.-H.; Chan, K.C.; Hu, P. Predicting protein-protein interactions from primary protein sequences using a novel multi-scale local feature representation scheme and the random forest. PLoS ONE 2015, 10, e0125811.
62. You, Z.-H.; Zhu, L.; Zheng, C.-H.; Yu, H.-J.; Deng, S.-P.; Ji, Z. Prediction of protein-protein interactions from amino acid sequences using a novel multi-scale continuous and discontinuous feature set. BMC Bioinform. 2014, 15, 1–9.
63. Ding, Y.; Tang, J.; Guo, F. Predicting protein-protein interactions via multivariate mutual information of protein sequences. BMC Bioinform. 2016, 17, 1–13.
64. Chou, K.C. Prediction of protein cellular attributes using pseudo-amino acid composition. Proteins Struct. Funct. Bioinform. 2001, 43, 246–255.
65. Chou, K.-C. Prediction of protein subcellular locations by incorporating quasi-sequence-order effect. Biochem. Biophys. Res. Commun. 2000, 278, 477–483.
66. Chen, M.; Ju, C.J.-T.; Zhou, G.; Chen, X.; Zhang, T.; Chang, K.-W.; Zaniolo, C.; Wang, W. Multifaceted protein–protein interaction prediction based on Siamese residual RCNN. Bioinformatics 2019, 35, i305–i314.
67. Guo, Y.; Li, M.; Pu, X.; Li, G.; Guang, X.; Xiong, W.; Li, J. PRED_PPI: A server for predicting protein-protein interactions based on sequence data with probability assignment. BMC Res. Notes 2010, 3, 1–7.
68. Liu, L.; Cai, Y.; Lu, W.; Feng, K.; Peng, C.; Niu, B. Prediction of protein–protein interactions based on PseAA composition and hybrid feature selection. Biochem. Biophys. Res. Commun. 2009, 380, 318–322.
69. Martin, S.; Roe, D.; Faulon, J.-L. Predicting protein–protein interactions using signature products. Bioinformatics 2005, 21, 218–226.
70. Zhou, Y.Z.; Gao, Y.; Zheng, Y.Y. Prediction of protein-protein interactions using local description of amino acid sequence. In Advances in Computer Science and Education Applications; Springer: Qingdao, China, 2011; pp. 254–262.
71. Yao, Y.; Du, X.; Diao, Y.; Zhu, H. An integration of deep learning with feature embedding for protein–protein interaction prediction. PeerJ 2019, 7, e7126.
72. Mohamed, T.P.; Carbonell, J.G.; Ganapathiraju, M.K. Active learning for human protein-protein interaction prediction. BMC Bioinform. 2010, 11 (Suppl. 1), S57.
Figure 1. Evaluation of representative methods on hub proteins. Three methods (HiPPIP, the method by Qi et al., and SPRINT) are evaluated on hub proteins. The figures show the cumulative number of true positives versus the number of false positives (a), computed individually for each protein and then averaged for each method at each point on the x-axis. The hub proteins considered are: (A) the same proteins as originally used in the HiPPIP publication, and (B) hub proteins with more than 100 PPIs according to currently known PPIs from HPRD and BioGRID. The labeled data used in evaluation are from HPRD and BioGRID for both (A) and (B). (a) Note that in this scenario of proteome-wide evaluation (i.e., where a hub protein is paired with every other protein in the human proteome), the unlabeled data may not all be false positives, and may indeed be true positives that are currently unknown.
Figure 2. (A–C, left) Results when selecting from the full set of protein pairs for training and testing, for a handful of the best algorithms (dotted lines) and the 4 illogical feature-based algorithms (solid lines). (A) ROC curve with 50% positive data. (B,C) Precision-Recall curves using 10% and 0.3% positive data in the test set. (D–F, right) The same computations performed on held-out proteins instead of selecting from the full set of pairs. When holding out protein pairs, the algorithms exhibit a significant performance drop. Additionally, the algorithms and bias measurements score similarly across all 6 tests.
Figure 3. (A) ROC curve for results on held-out proteins with 50% positive data, comparing sequence-based (dotted lines) and annotation-based methods (solid lines). (B,C) Precision-Recall curves for held-out proteins with 10% and 0.3% positive data, comparing sequence-based and non-sequence-based methods. Non-sequence-based methods (solid lines) perform better at lower recall levels when positive data are the rare class.
Figure 4. Evaluation of HiPPIP. (A) Comparison of HiPPIP with the selected sequence-based and annotation-based methods that performed the best. The two lines HiPPIP-a and HiPPIP-b correspond to whether the known PPIs in the test data were drawn from a recent version of BioGRID (May 2021, the same as used in this paper) or from an earlier version of BioGRID (January 2017). Both lines in (A) show average performance across 10 datasets, whereas performance on the individual datasets in each set is shown in (B) and (C), respectively. For all methods, including sequence-based and annotation-based methods, pairs that appear in the HPRD database are also treated as true positives in this figure; there is no discernible difference between the results of the sequence-based and annotation-based algorithms in Figure 3C and here, despite the inclusion of HPRD PPIs as positive labels. It is estimated that the HiPPIP-a and HiPPIP-b test datasets may have about 24% and 14% overlap for positive instances, respectively, because the specific pairs used in training HiPPIP are unavailable.
Table 1. Sizes of the different datasets created for testing. Pair counts vary for the held-out data because the number of pairs drawn from a single bin (such as 4, 4) is half the number drawn from two bins (such as 3, 4). Held-out test sets for 1:1 (50%) and 1:9 (10%) positive pairs use all positive data per held-out bin.

| Pos Ratio | Usage | Instance Sampling | Number of Datasets | Positive Pairs | Negative Pairs | Proteins in Positive Data | Proteins in Negative Data |
|---|---|---|---|---|---|---|---|
| 1:1 (50%) | Both | Full | 1 (5-fold CV) | 62,500 | 62,500 | 12,895 | 19,082 |
| 1:4 (20%) | Train | Full | 5 | 20,000 | 80,000 | 9250–9345 | 19,105–19,110 |
| 1:9 (10%) | Test | Full | 5 | 10,000 | 90,000 | 6868–6987 | 19,109–19,111 |
| 1:332 (0.3%) | Test | Full | 5 | 1500 | 498,500 | 2099–2170 | 19,112 |
| 1:1 | Train | Held Out | 21 | 50,000 | 50,000 | 8849–10,762 | 12,734–15,899 |
| 1:1 | Test | Held Out | 21 | 3170–7129 | 3170–7129 | 1530–3233 | 2770–5707 |
| 1:4 | Train | Held Out | 21 | 20,000 | 80,000 | 6978–8297 | 12,749–15,927 |
| 1:9 | Test | Held Out | 21 | 3170–7129 | 28,530–64,161 | 1530–3233 | 3185–6372 |
| 1:332 | Test | Held Out | 21 | 300–600 | 99,700–199,400 | 398–846 | 3185–6372 |
Table 2. Different preprocessing algorithms used on amino acid sequences prior to model training: Amino Acid Composition (AAC); Auto Covariance (AC) [9]; Chaos Game Representation [53]; Conjoint Triad (CT) [54]; Composition-Transition-Distribution (CTD) [55]; Dipeptide Composition [56]; Discrete Wavelet Transform Physicochemical [57]; Encoding Based on Grouped Weight (EBGW) [58]; Geary Autocorrelation [59]; Local Descriptor (LD) [60]; Multi-scale Local Descriptor [61]; Multi-scale Continuous and Discontinuous [62]; Multivariate Mutual Information [63]; Moran Autocorrelation [59]; Normalized Moreau-Broto Autocorrelation [59]; Numeric/One-Hot Encoding; Pseudo Amino Acid Composition [64]; PSSM [46]; PSSM(DPC)/PSSM(Bi-gram) [13]; PSSM Discrete Cosine Transform [19]; Quasi Sequence Order Descriptor [65]; Sequence Order [11]; Skip-Gram [45]; Weighted Skip-Sequential Conjoint Triad [13].
Table 7. Percentages of missing data in the positive and negative classes of our benchmark datasets. Overall, 1–15% more negative data than positive data are missing from most datasets in each category. However, most pairs appear to have at least some data, as shown by the GO Any and InterPro columns: at least 94–97% of all protein pairs have some data from GO and domains.

| Dataset | Pos % | Data Type | Class | GO CC | GO BP | GO MF | GO Any | Pfam | Prosite | InterPro |
|---|---|---|---|---|---|---|---|---|---|---|
| Full | 50% | Train | Positive | 7.3 | 13.3 | 2.9 | 0.7 | 8.2 | 47.8 | 1.2 |
| Full | 50% | Train | Negative | 13.6 | 22.2 | 17.9 | 5.9 | 11.9 | 57.7 | 2.3 |
| Full | 50% | Test | Positive | 7.3 | 13.3 | 2.9 | 0.7 | 8.2 | 47.8 | 1.2 |
| Full | 50% | Test | Negative | 13.6 | 22.2 | 17.9 | 5.9 | 11.9 | 57.7 | 2.3 |
| Full | 20% | Train | Positive | 7.2 | 13.2 | 2.8 | 0.7 | 8.2 | 47.6 | 1.2 |
| Full | 20% | Train | Negative | 13.8 | 22.2 | 17.9 | 6.0 | 12.0 | 57.8 | 2.4 |
| Full | 10% | Test | Positive | 7.2 | 13.2 | 2.9 | 0.7 | 8.2 | 48.0 | 1.2 |
| Full | 10% | Test | Negative | 13.7 | 22.0 | 17.9 | 6.0 | 12.1 | 57.8 | 2.4 |
| Full | 0.3% | Test | Positive | 7.3 | 13.3 | 2.7 | 0.7 | 7.9 | 48.6 | 1.1 |
| Full | 0.3% | Test | Negative | 13.7 | 22.1 | 17.9 | 6.0 | 12.0 | 57.8 | 2.4 |
| Held Out | 50% | Train | Positive | 7.2 | 13.3 | 2.8 | 0.7 | 8.2 | 47.9 | 1.2 |
| Held Out | 50% | Train | Negative | 13.7 | 22.0 | 17.8 | 6.0 | 12.0 | 57.8 | 2.4 |
| Held Out | 50% | Test | Positive | 7.2 | 13.2 | 2.8 | 0.7 | 8.1 | 47.9 | 1.2 |
| Held Out | 50% | Test | Negative | 13.9 | 22.1 | 17.9 | 6.1 | 12.0 | 57.6 | 2.3 |
| Held Out | 20% | Train | Positive | 7.2 | 13.3 | 2.8 | 0.7 | 8.1 | 48.0 | 1.2 |
| Held Out | 20% | Train | Negative | 13.7 | 22.0 | 17.9 | 6.0 | 12.0 | 57.8 | 2.4 |
| Held Out | 10% | Test | Positive | 7.2 | 13.2 | 2.8 | 0.7 | 8.1 | 47.9 | 1.2 |
| Held Out | 10% | Test | Negative | 13.6 | 22.0 | 17.9 | 5.9 | 12.0 | 57.8 | 2.4 |
| Held Out | 0.3% | Test | Positive | 6.5 | 12.9 | 2.9 | 0.7 | 7.7 | 47.3 | 1.2 |
| Held Out | 0.3% | Test | Negative | 13.7 | 22.0 | 17.9 | 6.0 | 12.0 | 57.7 | 2.4 |
Table 8. Accuracy of a domain-feature predictor (Zhang 2016 Domain Variant) and an ensemble predictor (Simple Ensemble) when using all interactions, non-test interactions, and non-held-out protein pair interactions to compute features. Held-out protein pairs are not applicable to datasets created by selecting random pairs (blank cells, marked —). The accuracy of these predictors drops significantly when the test interactions are removed from the feature-creation process.

| Algorithm | Acc (50% Pos Rand) | AUC (50% Pos Rand) | Prec (10% Pos Rand) | Avg P (10% Pos Rand) | Prec (0.3% Pos Rand) | Avg P (0.3% Pos Rand) | Acc (50% Pos Held) | AUC (50% Pos Held) | Prec (10% Pos Held) | Avg P (10% Pos Held) | Prec (0.3% Pos Held) | Avg P (0.3% Pos Held) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Dom Var All | 92.0 | 97.3 | 99.0 | 88.4 | 72.9 | 42.4 | 92.1 | 97.3 | 99.1 | 88.3 | 76.2 | 43.7 |
| Dom Var NonTest | 74.2 | 77.4 | 96.7 | 51.5 | 41.3 | 9.0 | 73.8 | 76.9 | 96.4 | 49.9 | 46.4 | 9.6 |
| Dom Var HeldOut | — | — | — | — | — | — | 63.3 | 64.2 | 91.8 | 28.7 | 25.6 | 2.8 |
| Ensemble All | 94.3 | 98.3 | 97.5 | 90.9 | 51.3 | 38.3 | 93.8 | 98.0 | 97.5 | 90.1 | 52.2 | 38.0 |
| Ensemble NonTest | 76.4 | 82.2 | 92.2 | 54.1 | 25.4 | 8.8 | 76.0 | 81.4 | 92.1 | 51.9 | 23.9 | 8.0 |
| Ensemble HeldOut | — | — | — | — | — | — | 64.7 | 67.2 | 88.4 | 27.9 | 16.4 | 2.3 |
Table 9. Results of non-sequence-based (annotation-based and control) predictors versus selected best-performing sequence-based predictors.

| Algorithm | Acc (50% Pos Full) | AUC (50% Pos Full) | Prec (10% Pos Full) | Avg P (10% Pos Full) | Prec (0.3% Pos Full) | Avg P (0.3% Pos Full) | Acc (50% Pos Held) | AUC (50% Pos Held) | Prec (10% Pos Held) | Avg P (10% Pos Held) | Prec (0.3% Pos Held) | Avg P (0.3% Pos Held) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Annotation-Based Methods | | | | | | | | | | | | |
| Chen 2005 Dom RF [26] | 75.9 | 82.7 | 77.1 | 41.8 | 8.8 | 2.8 | 65.1 | 69.6 | 58.8 | 23.1 | 4.3 | 1.0 |
| Guo 2006 Sem [24] | 66.3 | 72.1 | 90.1 | 33.7 | 15.7 | 2.9 | 66.3 | 72.2 | 89.5 | 33.1 | 22.1 | 3.3 |
| Maetschke 2012 ULCA [28] | 71.0 | 77.4 | 46.2 | 28.6 | 2.3 | 1.2 | 69.5 | 75.2 | 42.0 | 25.5 | 2.0 | 1.0 |
| Dom Variant | 74.2 | 77.4 | 96.7 | 51.5 | 41.3 | 9.0 | 63.3 | 64.2 | 91.8 | 28.7 | 25.6 | 2.8 |
| Zhang 2016 Sem [25] | 73.5 | 80.4 | 86.0 | 35.7 | 13.5 | 2.7 | 73.0 | 79.5 | 85.4 | 33.4 | 19.7 | 2.9 |
| Simple Ensemble | 76.4 | 82.2 | 92.2 | 54.1 | 25.4 | 8.8 | 64.7 | 67.2 | 88.4 | 27.9 | 16.4 | 2.3 |
| Control Methods | | | | | | | | | | | | |
| Count Bias | 84.2 | 91.5 | 91.9 | 56.4 | 28.0 | 6.5 | 50.0 | 50.0 | 10.0 | 10.0 | 0.3 | 0.3 |
| Seq Sim Bias | 82.3 | 90.0 | 84.5 | 53.0 | 14.8 | 5.2 | 65.6 | 70.7 | 41.8 | 21.6 | 2.1 | 0.9 |
| Seq Sim Bias + Protein Bias | 82.3 | 89.9 | 84.5 | 52.9 | 14.8 | 5.2 | 65.6 | 70.8 | 41.6 | 21.6 | 2.1 | 0.9 |
| Rand Net | 84.7 | 92.0 | 92.5 | 60.4 | 29.1 | 7.4 | 51.0 | 50.2 | 11.1 | 10.0 | 0.4 | 0.3 |
| Rand RF | 78.1 | 85.9 | 87.3 | 45.9 | 17.9 | 3.8 | 50.9 | 50.5 | 10.5 | 10.1 | 0.3 | 0.3 |
| Selected Best-Performing Sequence-Based Methods | | | | | | | | | | | | |
| Pan 2010 PSAAC Rand [10] | 83.7 | 91.4 | 96.8 | 59.9 | 46.9 | 9.3 | 66.4 | 72.5 | 65.7 | 22.6 | 5.4 | 1.5 |
| Jia 2015 RF [57] | 84.6 | 92.2 | 95.9 | 60.7 | 41.8 | 8.7 | 65.1 | 70.4 | 51.1 | 19.5 | 3.3 | 1.2 |
| Ding 2016 RF [63] | 84.7 | 92.3 | 96.9 | 61.1 | 49.8 | 10.0 | 64.1 | 70.0 | 58.3 | 18.3 | 4.6 | 1.2 |
| Du 2017 Sep [56] | 85.5 | 92.8 | 94.9 | 64.5 | 39.5 | 9.8 | 67.0 | 73.3 | 56.3 | 24.9 | 3.9 | 1.2 |
| Göktepe 2018 SVM [13] | 82.5 | 90.2 | 93.6 | 57.0 | 29.2 | 7.1 | 65.6 | 71.2 | 59.4 | 24.0 | 4.6 | 1.2 |
| Li 2018 CNN/LSTM [23] | 84.3 | 91.8 | 93.6 | 60.9 | 28.3 | 7.3 | 56.0 | 58.5 | 24.7 | 13.6 | 1.1 | 0.5 |
| Jia 2019 RF [53] | 83.2 | 91.0 | 97.1 | 59.5 | 52.7 | 10.0 | 66.0 | 72.0 | 68.2 | 23.2 | 5.6 | 1.6 |
| Li 2020 Deep [17] | 86.4 | 93.4 | 96.6 | 67.7 | 50.7 | 12.4 | 67.3 | 73.8 | 65.9 | 26.8 | 6.1 | 1.7 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
