Bibliometric Keyword Analysis across Seventeen Years (2000–2016) of Intelligence Articles

Bryan Pesta 1,*, John Fuerst 2 and Emil O. W. Kirkegaard 2
1 Department of Management, Cleveland State University, Cleveland, OH 44115, USA
2 The Ulster Institute for Social Research, London AL2 1AB, UK
* Author to whom correspondence should be addressed.
Submission received: 10 August 2018 / Revised: 11 October 2018 / Accepted: 12 October 2018 / Published: 15 October 2018

Abstract

An article’s keywords are distinct because they represent what authors feel are the most important words in their papers. Combined, they can even shed light on which research topics in a field are popular (or less so). Here we conducted bibliometric keyword analyses of articles published in the journal Intelligence (2000–2016). The article set comprised 916 keyword-containing papers. First, we analyzed frequencies to determine which keywords were most (and least) popular. Second, we analyzed Web of Science (WOS) citation counts for the articles listing each keyword, and we ran regression analyses to examine the effect of keyword categories on citation counts. Third, we looked at how citation counts varied across time. In the frequency analysis, “g factor”, “psychometrics/statistics”, and “education” emerged as the keywords with the highest counts. Conversely, the WOS citation analysis showed that papers with the keywords “spatial ability”, “factor analysis”, and “executive function” had the highest mean citation values. We offer tentative explanations for the discrepant results across frequencies and citations. The analysis across time revealed several keywords that increased (or decreased) in frequency over the 17 years. We end by discussing how bibliometric keyword analysis can detect research trends in the field, both now and in the past.

1. Introduction

Bibliometrics is the branch of library science that applies mathematical and statistical techniques to analyze books, articles, and other documents [1,2]. The discipline is becoming increasingly popular. For bibliometric evidence of this claim, note that the correlation between the number of articles published on “bibliometrics” each year (via Google Scholar) and the year of publication (this century, through 2017) is positive and very strong (0.91).
Bibliometric techniques are also convenient, as the relevant data are both quantitative and readily available [3]. Moreover, the field has high utility. Bibliometrics can be used to inform policy decisions [4], allocate research funding [5], assist libraries in prioritizing acquisitions [6], and of course, evaluate scholarly activity [7]. Even a “bibliometric study of literature on bibliometrics” has now been conducted [8]. The field is therefore likely here to stay, and will continue to be a primary means of judging impact for articles, researchers, and journals.
The present article focuses on bibliometrics for the journal Intelligence. The journal was founded in 1977 by Douglas K. Detterman. Since then, it has published 1828 articles [9], including two that feature bibliometric analyses. First, Wicherts [10] analyzed Web of Science (WOS) citation counts for 797 articles published in the journal between 1977 and 2007. The median citation count for these articles was 10, with a mode of 6. Wicherts also showed that the journal’s impact factor had been rising steadily each year, although it has since leveled off (Note 1). Finally, Wicherts reported a list of the 25 most-cited papers in the journal to date. These articles had citation counts (via Google Scholar) ranging from 81 to 492.
Second, Pesta [11] examined articles published after Wicherts’ study. He analyzed 619 papers, written by 1897 authors and published between 2008 and 2015. Pesta reported citation counts for both articles and authors, and found the median citation rate for articles (i.e., 10) to be identical to that calculated by Wicherts (albeit with a mean of 17 and a mode of 6). Pesta also reported a list of the most prolific authors, and an updated list of the top 25 most-cited articles in the journal between 1977 and 2015. These articles had citation counts (via Google Scholar) ranging from 186 to 905.
Here we attempt to build upon the existing bibliometric research on this journal. Our focus, however, is on article keywords rather than on authors or the articles themselves. Note that the journal began using keywords in the year 2000.
Why apply bibliometrics to article keywords? First, keywords represent the author’s opinion of the three to five (or so) most important words in their article. Second, keyword analysis can potentially detect trending research topics, both current and past. Third, bibliometric keyword analysis can answer several interesting questions, including: (1) Which research topics in this journal are the most frequent/popular? (2) Are certain keywords associated with an increased likelihood of a paper being cited? (3) Has the use of specific keywords increased or decreased over time? Our goal is to provide bibliometric answers to these questions, focusing specifically on articles and keywords published in the journal Intelligence.

2. Method

We coded all keyword-containing articles in the journal Intelligence for the years 2000 to 2016. In line with other bibliometric studies of this journal [10,11], we included all articles except book reviews and obituaries. Note that 2000 was the first year in which we found the journal featuring keywords, although only one such article appeared that year (i.e., [12]). We did not code post-2016 articles because we did not think they had had enough time to accumulate a representative number of citations; a prior bibliometric review of Intelligence [11] adopted the same cutoff. Moreover, we excluded some other items published after 2000 (e.g., book reviews, obituaries, and some editorials) because they did not contain keywords. In total, we coded 916 keyword-containing articles.
For each article, we coded the title, the first author’s name, and all listed keywords; in total, the articles in the set yielded 4364 keywords. On 11 July 2018, we then coded each article’s Web of Science (WOS) citation count. We did not also code Google Scholar (GS) counts, as Pesta [11] reported that these correlate 0.97 with WOS citation counts; in fact, GS counts are very close to double those reported by WOS. Note that these values come specifically from analyses of articles published in Intelligence [11].
We coded both overall citations and citations per year for every keyword in the set. The latter values adjust citation counts for the effect of “time since publication” on a paper’s current citation count [11]. In line with a previous review [11], this calculation involved dividing each article’s number of citations by (2018.53 minus the article’s publication year). We used 0.53 as the decimal because 11 July is the 192nd day of the year, and 192 divided by 365 equals 0.53.
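In code, the adjustment looks something like the following minimal Python sketch (the function and variable names are ours, for illustration only):

```python
from datetime import date

def citations_per_year(wos_cites, pub_year, coding_date=date(2018, 7, 11)):
    """Adjust a raw WOS citation count for time since publication."""
    # 11 July is day 192 of 2018, and 192 / 365 ~= 0.53, giving 2018.53.
    fractional_year = coding_date.year + coding_date.timetuple().tm_yday / 365
    return wos_cites / (fractional_year - pub_year)

# A hypothetical 2010 article with 40 WOS citations:
print(round(citations_per_year(40, 2010), 2))  # -> 4.69
```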
Coding the keywords was sometimes not straightforward. For example, authors often used several different keywords to describe the same research topic (e.g., “g”, “g factor”, “general mental ability”, “general cognitive ability”, and “general intelligence”). This required us to form dozens of “keyword categories”, each containing all synonym keywords for the same underlying construct. An example of a keyword category would be “g factor” for the synonyms listed in parentheses above.
Next, within articles, authors often used keywords that were redundant (e.g., “general intelligence” and “g”). In fact, redundant keywords comprised 763 of the 4364 (17.5%) keywords in the article set. Counting redundant keywords within articles would artificially inflate (i.e., double count) overall citation rates for both the articles and the keywords, so we did not analyze them. Finally, a small number of keywords could logically be placed into more than one category (e.g., “speeded and un-speeded testing”). Although only eleven of the 4364 (0.25%) such keywords existed, we nonetheless excluded them from the analyses.
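The categorization and de-duplication step can be sketched as follows; the synonym map here is deliberately abbreviated and hypothetical, whereas the full study used dozens of categories:

```python
# Hypothetical, abbreviated synonym map (the real map covered dozens of categories).
CATEGORY_MAP = {
    "g": "g factor",
    "g factor": "g factor",
    "general mental ability": "g factor",
    "general cognitive ability": "g factor",
    "general intelligence": "g factor",
    "working memory": "working memory",
    "working memory capacity": "working memory",
}

def categorize(keywords):
    """Map an article's raw keywords to categories, dropping within-article
    redundancies (e.g., 'g' and 'general intelligence' count only once)."""
    seen, categories = set(), []
    for kw in keywords:
        cat = CATEGORY_MAP.get(kw.lower())
        if cat is None or cat in seen:
            continue  # uncategorized keyword, or redundant within this article
        seen.add(cat)
        categories.append(cat)
    return categories

print(categorize(["General intelligence", "g", "Working memory capacity"]))
# -> ['g factor', 'working memory']
```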
The first two authors separately coded all keywords into categories. We then compared the codings to identify discrepancies, which were discussed until consensus was reached on every keyword category. Most discrepancies involved disagreements about how finely to delineate categories (e.g., whether to group “general intelligence” separately from “intelligence”). We did not calculate a reliability coefficient, since discordance often reflected a difference in level of analysis, not an inconsistency in classification.
Unexpectedly, judging from our preliminary review of the first couple of years of data, many of the resulting keyword categories turned out to have very small sample sizes (i.e., numbers of articles listing them). We therefore chose to analyze only keyword categories listed by at least 20 articles. There were 38 such unique keyword categories (37 after excluding “intelligence”), with 2699 (2161 after excluding “intelligence”) references to these keywords in the article set.
We ran three separate analyses on the keyword categories. The first was a simple frequency comparison of how many articles listed each keyword, which allowed us to identify the most frequent/popular research topics. The second involved mean-difference tests of WOS citation counts for the articles listing each of the 37 keyword categories, together with regression analyses of the effect of keyword categories on citation counts. The goal here was to determine which categories were associated with highly cited papers.
Third, for the 20 categories associated with the most paper citations (i.e., those with at least 50 paper citations; 54% of the 37 keyword categories), we visualized usage trends across time. Specifically, we created plots with year on the x-axis and proportion on the y-axis. The proportion was simply the number of articles listing the keyword in a given year, divided by the total number of articles listing that keyword. This approach allowed us to visually spot trends in keyword usage over the years.
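A minimal sketch of the proportion calculation for a single keyword, using invented publication years:

```python
from collections import Counter

def yearly_proportions(years):
    """Given the publication years of all articles listing one keyword, return
    {year: share of that keyword's articles that appeared in that year}."""
    counts = Counter(years)
    total = len(years)
    return {year: counts[year] / total for year in sorted(counts)}

# Illustrative data only: years of articles listing a hypothetical keyword.
print(yearly_proportions([2004, 2005, 2005, 2006, 2010]))
# -> {2004: 0.2, 2005: 0.4, 2006: 0.2, 2010: 0.2}
```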

3. Results

Table 1 lists frequency data for the 37 keyword categories (hereafter, “keywords”) that we analyzed. The third column shows how many times each keyword was listed across the 916 articles in the set. Also reported are standardized residuals, which can be interpreted as Z scores [13,14]. Residuals greater than plus or minus two indicate that the keyword’s observed frequency was significantly higher (or lower) than its expected frequency.
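For reference, a standardized residual here is the observed count minus the expected count, divided by the square root of the expected count. A quick sketch, checked against the “g factor” row of Table 1 (observed 141, expected 58.4):

```python
import math

def standardized_residual(observed, expected):
    """Chi-square standardized residual, interpretable as a Z score."""
    return (observed - expected) / math.sqrt(expected)

# "g factor" was listed 141 times against an expected count of 58.4 (Table 1).
print(round(standardized_residual(141, 58.4), 2))  # -> 10.81
```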
Not surprisingly, “intelligence” was the most frequently employed keyword. It was listed as a keyword by 59% of the articles in the set. Because “intelligence” is the primary focus of the journal, we decided not to include it in the analyses that follow.
Next, by a fair margin, the most frequently listed keyword (excluding “intelligence”) was “g factor”, which appeared in 15% of the articles in the set. This is also not surprising, but it exemplifies the field’s sustained, direct focus on general intelligence as opposed to specific cognitive abilities. For example, only 7 of the 916 (0.8%) articles in the set included “non-g abilities” as a keyword (but see also the frequencies for crystallized and fluid intelligence in Table 1).
“Psychometrics/statistics” was the second-most listed keyword in the set. This seems intuitive, as these are the tools researchers must use to get their data published in the journal. Next was the keyword “education”. It ranked third on the list, which seemed surprising, given our perhaps outdated impression that educational researchers tend to eschew intelligence research. On the other hand, at least one seminal article on this topic appears in Intelligence [15]; it is the third-most cited paper of all time for the journal. Moreover, “education” might attract relatively more research interest because the keyword is broadly multi-disciplinary: the pool of researchers able and willing to study this topic may be larger than for many of the other keywords in the article set. This explanation is admittedly speculative.
The fourth most-listed keyword was “IQ/achievement/aptitude tests”. This keyword is a hodgepodge, which likely explains its relatively high count. For example, the category contains 102 keyword synonyms in total, of which 84 (82.4%) are unique (i.e., were listed by only one article in the entire set). Exemplars for this category include “AFQT”, “CAT”, “Draw a Person”, “GATB”, “GMAT scores”, “Stanford Binet”, “TIMSS”, “WAIS III”, “WISC”, and “Wonderlic”. Rounding out the top-five listed keywords in Table 1 was “race/ethnicity”. We attribute this keyword’s high frequency count partly to the work of Richard Lynn and colleagues (see, e.g., [16]), who have mapped out IQs for numerous ethnicities/national origins across the world.
Finally, we are reluctant to discuss the keywords in Table 1 that have relatively low frequencies, as these are misleading. Keywords like “emotional intelligence”, “politics”, and “Spearman’s hypothesis” have “low” frequency counts only relative to the other keywords in Table 1. Recall that Table 1 presents just the top 37 of 384 (9.6%) categories. Any keyword appearing in Table 1 is therefore something multiple researchers have expressed interest in; the keywords with relatively lower impact are those that did not make Table 1.
Table 2 shows WOS citation rates for the top 37 keywords in the set. Specifically, for every article listing a keyword, we coded that article’s WOS citation count, and then averaged these counts within each keyword. We report citation counts both overall and per year.
Interestingly, the top 5 keywords in Table 2 are all different from those in Table 1. Several keywords were associated with more citations than keywords with higher frequencies of usage. For example, “spatial ability” ranked first (55.53 WOS cites) in Table 2, but only 29.5th (32 listings) in Table 1. Conversely, “psychometrics/statistics” ranked second in Table 1, but only 15th in Table 2. This led us to correlate the ranks across frequencies (Table 1) and overall citations (Table 2). Although the resulting correlation was negative (r = −0.192), it was small and not significant (p = 0.369). Still, the safest interpretation is itself interesting: publishing on a frequently researched keyword does not guarantee that an article will yield high citation counts.
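The rank correlation can be reproduced along the following lines. The sketch below pairs frequency counts (Table 1) with mean citation counts (Table 2) for a seven-keyword excerpt only, so its output will not match the full 37-keyword r = −0.192:

```python
from scipy.stats import spearmanr

# Seven-keyword excerpt: frequency counts (Table 1) and mean WOS citation
# counts (Table 2) for the same keywords, e.g., "g factor" is (141, 27.25).
frequencies = [141, 116, 114, 102, 101, 97, 32]
mean_cites = [27.25, 26.05, 28.39, 29.35, 13.84, 39.87, 55.53]

rho, p = spearmanr(frequencies, mean_cites)
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")
```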
In Table 2, the keyword associated with the second-highest mean citation count was “factor analysis”. We see this as paralleling “psychometrics/statistics” in Table 1: both keywords represent tools that researchers use when attempting to publish in this journal. Third, “executive function” is another example of a relatively infrequently used keyword associated with a high number of citations. However, the construct is very similar to “working memory” (we almost grouped these two keywords together; see, e.g., [17]), which is a staple research topic in this journal, with a correspondingly high frequency and citation count (it ranked sixth among all keywords in both Table 1 and Table 2).
“Attention” was the keyword associated with the fourth-highest mean citation count. This is another illustration of a keyword with a relatively low frequency (ranked 25th) but a high number of citations. Moreover, seven of the articles listing it have at least 50 citations each. Two of these [18,19] even fall on Pesta’s [11] all-time top 25 list, with 224 and 479 overall citations, respectively. Finally, “IQ theories” was the keyword associated with the fifth-highest mean citation count. This keyword is also a hodgepodge (made up of, e.g., “multiple intelligences”, “VPR theory”, “practical intelligence”, and “Gf–Gc theory”), but we have no real explanation for why papers using these keywords amassed such high citation counts.
The top-five keywords for citations per year in Table 2 are mostly the same as those for citations overall. One exception is that “working memory” replaced “IQ theories” in the per-year ranking. Also noteworthy is that the correlation between the two citation values across all keywords in Table 2 is 0.91. In sum, no very large differences in ranks occurred when looking at citations overall versus per year.
Turning to statistical analyses, the Table 2 grand mean for citations overall (N = 2161) was 26.28 (SD = 42.53). A one-way ANOVA on these data was significant: F(36) = 2.49, MSe = 1765. Similarly, the grand mean for citations per year was 2.89 (SD = 3.51); this ANOVA was also significant: F(36) = 3.30, MSe = 11.90. For context, recall that both Wicherts [10] and Pesta [11] reported median citation counts of 10 for all articles in their sets. The keywords in Table 2 (i.e., the top 37 in the entire journal) range from being cited 2.63 times to substantially more than that (e.g., papers listing “spatial ability” as a keyword averaged 55.53 citations each).
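A one-way ANOVA of this form can be sketched as follows, with invented citation counts standing in for the real per-article data:

```python
from scipy.stats import f_oneway

# Invented citation counts for articles grouped by keyword category.
spatial_ability = [120, 30, 55, 80, 10]
mental_speed = [12, 25, 8, 40, 19]
item_level_irt = [5, 14, 9, 20, 11]

F, p = f_oneway(spatial_ability, mental_speed, item_level_irt)
print(f"F = {F:.2f}, p = {p:.3f}")
```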
The one-way ANOVAs above each have 37 levels, which creates an awkward scenario for post hoc testing. Ultimately, we decided not to conduct family-wise error rate tests, as the number of multiple comparisons here is too large: (37 × 36)/2 = 666. A correction like Bonferroni’s would have an unduly punitive effect on our statistical power; specifically, the effective alpha level with a Bonferroni correction on these data would be 0.05/666 (0.00008).
Instead, we employed the false discovery rate (FDR), which is particularly useful when researchers conduct many post hoc tests. The FDR is the proportion of Type I errors among all tests that resulted in rejecting the null hypothesis. This contrasts with the typical reliance on the alpha level (i.e., p-value) to control Type I error rates; the FDR instead focuses on the q-value (i.e., the proportion of significant comparisons that are actually Type I errors). A q-value of 0.05 means that 5% of the significant test results are likely Type I errors [20,21].
The Benjamini–Hochberg test [21,22] is a common procedure used to control the FDR. It does so via calculation and interpretation of q-values, and we used it here for our post hoc tests. Following convention, we adopted 0.05 as our value for q.
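For exposition, here is a minimal sketch of the Benjamini–Hochberg step-up procedure (our own illustration, not the code used for the reported analyses):

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Flag p-values significant under the Benjamini-Hochberg procedure at level q."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # BH criterion: the i-th smallest p-value is compared against (i/m) * q.
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    significant = np.zeros(m, dtype=bool)
    if below.any():
        # Reject every hypothesis up to the largest i meeting the criterion.
        cutoff = np.max(np.nonzero(below)[0])
        significant[order[:cutoff + 1]] = True
    return significant

# Six hypothetical pairwise-comparison p-values:
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.60]))
# -> [ True  True False False False False]
```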
Fully 70 of the 666 (11%) post hoc comparisons for overall citations were significant, with q-values of less than 0.05. The results for citations per year were similar, yielding 105 of 666 (16%) significant comparisons. The Supplementary Materials list all pairwise comparisons, both for citations overall and per year.
We next analyzed the effects of keywords on citation counts by running a regression model with keywords, publication year (and nonlinear transformations of it), and keyword count as independent variables; the dependent variable was the citation count. A reviewer suggested that we add a dummy variable for editor, to capture the effect of editorial preference; however, Doug Detterman was editor-in-chief from 2000 to 2016, when Richard Haier took over. As our data extend only to 2016, there is little to no editorial variance (we also do not know who the action editor was for each paper). Results appear in Table 3. Because the analysis involves the full population of Intelligence papers (versus a sample of them), statistical significance is arguably not an important concern. Nonetheless, we asterisked the keywords with p-values < 0.05, and used this threshold to decide which keywords warranted further discussion. Results from other models are shown in the Supplementary Materials; the model selected here was the one with the highest adjusted R2.
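A model of this general form can be sketched in Python as follows; the data file and column names are hypothetical, and a cubic polynomial in year stands in as a simple substitute for the regression splines actually reported in Table 3:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout: one row per article, with 0/1 indicators for each
# keyword category, the publication year, and the article's keyword count.
df = pd.read_csv("intelligence_articles.csv")  # assumed file name

# Only three keyword indicators are shown; the full model used all categories.
model = smf.ols(
    "cites ~ spatial_ability + executive_function + race_ethnicity"
    " + count + year + I(year**2) + I(year**3)",
    data=df,
).fit()
print(model.summary())
```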
The results parallel those observable from the FDR analyses. Papers dealing with “executive function”, “factor analysis”, “fluid intelligence”, “IQ theories”, “working memory”, and “spatial ability” were all cited more than is typical, while those dealing with “health”, “mental speed”, and “race/ethnicity” were cited less than is typical. It is no secret that “race/ethnicity” is an unattractive topic for many researchers, so it is perhaps not surprising that papers focusing on it garner fewer citations. The topic of cognitive epidemiology (“health”) may be similarly less popular. Finally, “mental speed” and “ECTs” were often conceptually overlapping categories, and both of their betas were negative; the reason for these negative effects is unclear.
Our final analysis involved looking at trends in citation counts over the years. That is, does a keyword’s popularity (operationalized as the mean citation count of the articles that listed it) change over time? A preliminary way of testing this is to simply correlate the year each instance of a keyword appeared with its corresponding WOS citation count.
This correlational analysis included every exemplar (N = 2161; “intelligence” excluded) of the top 37 keywords in Table 2. Year of publication correlated moderately (−0.375) with overall WOS citations. However, the scatterplot (not displayed here) showed that the inverse relationship occurred because many of the papers with very high citation counts were published before 2010. Consistent with this, the correlation between year of publication and citations per year (which corrects for time since publication) was only −0.148, though this value was still significant given the large sample size.
A more revealing analysis involves plotting the change in keyword usage across the years. We deemed visual presentation of these data the easiest way to interpret them. Figure 1, Figure 2 and Figure 3 (illustrating flat, parabolic, and increasing/decreasing trends, respectively) show the top-20 keywords, each with n > 50 citations overall. The year of publication is plotted on the x-axis, and the proportion of articles (published in the journal that year) with the specified keyword is plotted on the y-axis.
Note the timespan where each curve peaks. Peaks represent when a keyword received (proportionately, compared with all other articles published that year) the most research attention. For example, “g factor” peaked between 2004 and 2008, after which it steadily declined (it has, however, recovered somewhat over the last few years).
Figure 1 displays keywords that show little trend; these tend to have relatively low frequencies. This set includes “attention”, “EIQ”, “factor analysis”, “fluid intelligence”, “modeling”, “reasoning”, and “spatial ability”.
Figure 2 shows keywords that exhibit a marked nonlinear relation. Notably, “g factor” and “IQ/achievement/aptitude tests” peaked between about 2004 and 2008. Likewise, “brain/neuro” and “sex differences” hit their peaks around 2007–2010 and thereafter trended downward. Finally, “crystallized intelligence” showed a strong drop-off between 2009 and 2011, with a recovery over the last five years.
Figure 3 shows keywords that exhibit somewhat linear trends. “Education” is currently trending strongly upward, as is the “Flynn effect”, except in the most recent years. Conversely, “memory and cognition” is trending downward, as are “mental speed” and possibly “IQ theories”. Additionally, “psychometrics/statistics” has increased somewhat, although it is a hodgepodge category, so little can be made of this trend. We also included “executive function” in Figure 3, since it was increasing until just recently.

4. Discussion

For several reasons, we decided to conduct bibliometric analyses of keywords for articles published in the journal Intelligence (2000–2016). First, we were interested in which keywords were most often employed by authors publishing in this journal. Next, we wondered whether certain keywords were associated with more or fewer citations for the papers that listed them. Lastly, we sought to identify trending keywords, i.e., ones that had increased or decreased in usage over the 17-year span since the journal started featuring them.
In summary, the five most frequently listed keywords (Table 1; “intelligence” excluded) were “g factor”, “psychometrics/statistics”, “education”, “IQ/achievement/aptitude tests”, and “race/ethnicity”. These keywords accounted for 574 of the 2161 (27%) keyword instances in Table 1.
Regarding WOS citations for articles by keyword, the keywords with the highest mean citation counts overall were “spatial ability”, “factor analysis”, “executive function”, “attention”, and “IQ theories”. Articles using these keywords averaged 47.1 citations overall, in contrast to the median citation rate of ten (for all articles in the journal) reported by both Wicherts [10] and Pesta [11]. However, we found it counterintuitive that the top-five most frequently listed keywords were all different from the top-five keywords with the highest mean citation values. We tentatively conclude that there is a low correlation between a keyword’s frequency of use and the mean number of citations received by the papers that list it.
But what could explain the discrepancy? A reviewer suggested a plausible scenario: an article using a low-frequency keyword will tend to be one of only a few addressing the respective research question. If others then conduct research in the area, they are relatively likely to cite that article in their own work; thus, the article might be cited frequently despite the keyword being used infrequently overall. In contrast, an article using a very frequent keyword will probably be only one among many that could be cited, and might therefore have a lower probability of being cited by newer papers in the area.
Our last analysis was an attempt to identify trends across time for the most frequently listed keywords in the article set; we displayed these trends visually in Figure 1, Figure 2 and Figure 3. As might be expected, no keyword’s frequency increased (or decreased) monotonically across the years; instead, all trends were curvilinear. Examples of keywords with notable increases in frequency across recent years included “crystallized intelligence” and “education”. Those with notable decreases were “brain/neuro” and “executive function”.
Regarding study limitations, perhaps the largest is that we had no similar journal’s keyword data to serve as a reference point or control group. We did, however, compare our findings with those in [10,11] where appropriate. Next, coding the keywords was not completely objective: many authors used synonym keywords across articles and/or redundant keywords within articles (Note 2). We attenuated this problem by having multiple raters code and discuss each of the categorizations.
Nonetheless, the high frequency of synonym keywords across papers forced us to use keyword categories rather than the specific keywords themselves. Perhaps the journal should implement a more standardized approach to author keyword selection. One way to achieve this would be for the journal to present a drop-down list of keywords from which authors select when submitting their manuscripts. This approach could provide benefits beyond facilitating bibliometric analyses; for example, standardized keywords would help readers find articles of interest more efficiently.

5. Conclusions

In sum, the present paper was a first step toward illuminating how keyword analysis can shed light on this journal’s focus. Specifically, we reported which research topics authors spent the most time on and, of them, which averaged the most citations. We also identified keywords that trended across the 17-year time span of our article set. It is our hope that periodic bibliometric analyses of publications in this journal will continue to generate big-picture data on where the field has been and where it might be going.

Supplementary Materials

The data set is available online at https://osf.io/q8yj4/.

Author Contributions

Author contributions are as follows: conceptualization: B.P.; methodology: B.P., J.F., and E.O.W.K.; data coding and validation: B.P., J.F., and E.O.W.K.; formal analyses: B.P., J.F., and E.O.W.K.; draft preparation: B.P.; writing (review and editing): B.P., J.F., and E.O.W.K.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Otlet, P. Traité de Documentation: Le Livre sur le Livre, Théorie et Pratique; Editiones Mundaneum: Brussels, Belgium, 1934.
  2. Pritchard, A. Statistical bibliography or bibliometrics? J. Doc. 1969, 25, 348–349.
  3. Bornmann, L.; Leydesdorff, L. Scientometrics in a changing research landscape. EMBO Rep. 2014, 15, 1228–1232.
  4. Russell, J.; Rousseau, R. Bibliometrics and institutional evaluation. In Science and Technology Policy; Arvanitis, R., Ed.; Eolss Publishers: Oxford, UK, 2010.
  5. Cronin, B.; Sugimoto, C. Beyond Bibliometrics: Harnessing Multiple Indicators of Scholarly Impact; The MIT Press: Cambridge, MA, USA, 2014.
  6. Engler, S. Bibliometrics and the study of religions. Religion 2014, 44, 193–219.
  7. Rousseau, R.; Egghe, L.; Guns, R. Becoming Metric-Wise: A Bibliometric Guide for Researchers; Elsevier: Cambridge, MA, USA, 2018.
  8. Patra, S.; Bhattacharya, P.; Verma, M. Bibliometric study of literature on bibliometrics. J. Libr. Inf. Technol. 2006, 26, 27–32.
  9. Web of Science 2018. Available online: https://0-login-webofknowledge-com.brum.beds.ac.uk/ (accessed on 11 July 2018).
  10. Wicherts, J. The impact of papers published in Intelligence 1977–2007 and an overview of the citation classics. Intelligence 2009, 37, 443–446.
  11. Pesta, B. Bibliometric analysis across eight years 2008–2015 of Intelligence articles: An updating of Wicherts (2009). Intelligence 2018, 67, 26–32.
  12. Colom, R.; Espinosa, M.; Abad, F.; Garcia, L. Negligible sex differences in general intelligence. Intelligence 2000, 28, 57–68.
  13. Agresti, A. An Introduction to Categorical Data Analysis; Wiley: Hoboken, NJ, USA, 2007.
  14. Sharpe, D. Your chi-square test is statistically significant: Now what? Pract. Assess. Res. Eval. 2015, 20, 1–10.
  15. Deary, I.; Strand, S.; Smith, P.; Fernandes, C. Intelligence and educational achievement. Intelligence 2007, 35, 13–21.
  16. Lynn, R.; Meisenberg, G. National IQs calculated and validated for 108 nations. Intelligence 2010, 38, 353–360.
  17. McCabe, D.P.; Roediger, H.L.; McDaniel, M.A.; Balota, D.A.; Hambrick, D.Z. The relationship between working memory capacity and executive functioning: Evidence for a common executive attention construct. Neuropsychology 2010, 24, 222–243.
  18. Conway, A.; Cowan, N.; Bunting, M.; Therriault, D.; Minkoff, S. A latent variable analysis of working memory capacity, short-term memory capacity, processing speed, and general fluid intelligence. Intelligence 2002, 30, 163–183.
  19. Oberauer, K.; Süß, H.-M.; Wilhelm, O.; Wittmann, W.W. The multiple faces of working memory: Storage, processing, supervision, and coordination. Intelligence 2003, 31, 167–193.
  20. Genovese, C.; Lazar, N.; Nichols, T. Thresholding of statistical maps in functional neuroimaging using the false discovery rate. NeuroImage 2002, 15, 870–878.
  21. Pike, N. Using false discovery rates for multiple comparisons in ecology and evolution. Methods Ecol. Evol. 2011, 2, 278–282.
  22. Benjamini, Y.; Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B 1995, 57, 289–300.
Notes

Note 1. In 2008, the journal’s impact factor was 3.27. As of 2017, it was 2.79.
Note 2. We are not criticizing authors who used redundant keywords; these are likely needed in any given study to give readers fine-grained information about what the article covers. Such redundancies, however, complicate the coding process when the focus of the article itself (as here) is on the keywords.
Figure 1. Proportion of articles each year that list a given keyword (flat).
Figure 2. Proportion of articles each year that list a given keyword (parabolic).
Figure 3. Proportion of articles each year that list a given keyword (increasing/decreasing).
Table 1. Frequencies, percentages, and residuals for the 37 keywords with the most counts in the article set.

Rank | Keyword | Frequency (%) | Residual | Standardized residual
- | Intelligence/cognitive ability | 538 (58.7%) | - | -
1 | g factor (general intelligence factor) | 141 (15.4%) | 82.6 | 10.81
2 | Psychometrics/statistics | 116 (12.7%) | 57.6 | 7.54
3 | Education | 114 (12.5%) | 55.6 | 7.28
4 | IQ/achievement/aptitude tests | 102 (11.1%) | 43.6 | 5.71
5 | Race/ethnicity | 101 (11.0%) | 42.6 | 5.57
6 | Working memory | 97 (10.6%) | 38.6 | 5.05
7 | Brain/neuro | 84 (9.2%) | 25.6 | 3.35
8 | Nature/nurture | 81 (8.8%) | 22.6 | 2.96
9 | Children/child development | 76 (8.3%) | 17.6 | 2.30
10 | Memory/cognition | 74 (8.1%) | 15.6 | 2.04
11.5 | Sex differences | 73 (8.0%) | 14.6 | 1.91
11.5 | Income/status/SES | 73 (8.0%) | 14.6 | 1.91
13 | Health | 70 (7.6%) | 11.6 | 1.52
14 | Adult/aging | 69 (7.5%) | 10.6 | 1.39
15 | Flynn effect | 61 (6.7%) | 2.6 | 0.34
16 | Fluid intelligence | 60 (6.6%) | 1.6 | 0.21
17 | Modeling | 58 (6.3%) | −0.4 | −0.05
18.5 | ECTs (elementary cognitive tasks) | 57 (6.2%) | −1.4 | −0.18
18.5 | Genes/evolution | 57 (6.2%) | −1.4 | −0.18
20 | Mental speed | 51 (5.6%) | −7.4 | −0.97
21 | IQ theories | 49 (5.4%) | −9.4 | −1.23
22 | Aggregate/regional IQs | 47 (5.1%) | −11.4 | −1.49
23 | Raven’s | 45 (4.9%) | −13.4 | −1.75
24 | Crystallized intelligence | 39 (4.3%) | −19.4 | −2.54
25 | Attention | 36 (3.9%) | −22.4 | −2.93
26 | Personality | 34 (3.7%) | −24.4 | −3.19
27.5 | Reasoning | 33 (3.6%) | −25.4 | −3.32
27.5 | Executive function | 33 (3.6%) | −25.4 | −3.32
29.5 | Factor analysis | 32 (3.5%) | −26.4 | −3.45
29.5 | Spatial ability | 32 (3.5%) | −26.4 | −3.45
31 | Spearman’s hypothesis | 31 (3.4%) | −27.4 | −3.59
32 | Item level/IRT (item response theory) | 28 (3.1%) | −30.4 | −3.98
33 | Politics | 23 (2.5%) | −35.4 | −4.63
34 | Longitudinal designs | 22 (2.4%) | −36.4 | −4.76
35.5 | SLODR (Spearman’s law of diminishing returns) | 21 (2.3%) | −37.4 | −4.89
35.5 | Problem solving/decision making | 21 (2.3%) | −37.4 | −4.89
37 | EIQ (emotional intelligence) | 20 (2.2%) | −38.4 | −5.02

Notes: The frequencies are out of 916 articles and 2699 (2161 without “intelligence”) keyword counts. The resulting expected value for each cell is 58.4. “Intelligence/cognitive ability” was not included in the statistical analyses for Table 1.
Table 2. Mean Web of Science (WOS) citation counts for articles with specific keywords.

WOS rank | Keyword | WOS cites, M (SD) | WOS cites per year, M (SD)
- | Intelligence/cognitive ability | 24.89 (43.93) | 2.82 (3.80)
1 | Spatial ability | 55.53 (71.38) | 4.67 (4.88)
2 | Factor analysis | 54.50 (98.73) | 4.98 (7.42)
3 | Executive function | 42.33 (49.17) | 5.80 (6.03)
4 | Attention | 42.08 (84.58) | 3.96 (5.28)
5 | IQ theories | 40.96 (56.23) | 3.93 (5.24)
6 | Working memory | 39.87 (61.06) | 4.39 (4.72)
7 | Memory/cognition | 31.46 (63.89) | 2.94 (3.98)
8 | Fluid intelligence | 30.82 (39.65) | 3.86 (4.12)
9 | IQ/achievement/aptitude tests | 29.35 (59.20) | 2.88 (5.11)
10 | Modeling | 28.59 (63.27) | 3.24 (4.21)
11 | Sex differences | 28.49 (28.70) | 2.89 (2.31)
12 | Education | 28.39 (58.00) | 3.31 (5.12)
13 | g factor (general intelligence factor) | 27.25 (32.14) | 2.71 (2.49)
14 | Reasoning | 26.24 (35.70) | 2.92 (3.21)
15 | Psychometrics/statistics | 26.05 (41.35) | 2.84 (3.37)
16 | Crystallized intelligence | 25.62 (37.29) | 2.73 (3.25)
17 | Flynn effect | 25.41 (30.18) | 3.14 (2.28)
18 | Mental speed | 25.24 (30.31) | 2.44 (2.24)
19 | EIQ (emotional intelligence) | 24.45 (23.68) | 2.88 (1.98)
20 | Brain/neuro | 23.74 (30.28) | 2.71 (2.77)
21 | Income/status/SES | 22.81 (34.34) | 2.72 (3.22)
22 | Raven’s | 22.80 (30.82) | 2.09 (2.22)
23 | Problem solving/decision making | 22.76 (17.10) | 3.78 (2.38)
24 | Politics | 22.39 (18.90) | 3.23 (2.16)
25 | Aggregate/regional IQs | 22.04 (23.10) | 2.57 (2.30)
26 | ECTs (elementary cognitive tasks) | 21.95 (27.75) | 1.98 (1.78)
27 | Children/child development | 20.11 (23.60) | 2.41 (2.39)
28 | SLODR (Spearman’s law of diminishing returns) | 19.71 (18.78) | 1.95 (1.71)
29 | Genes/evolution | 19.65 (15.64) | 2.57 (1.57)
30 | Adult/aging | 19.62 (22.99) | 2.38 (2.54)
31 | Health | 19.54 (22.71) | 2.17 (1.83)
32 | Longitudinal designs | 17.91 (19.59) | 2.39 (1.90)
33 | Spearman’s hypothesis | 17.35 (14.89) | 2.28 (1.59)
34 | Nature/nurture | 16.81 (18.68) | 1.99 (1.88)
35 | Personality | 15.44 (15.54) | 1.91 (1.93)
36 | Race/ethnicity | 13.84 (17.75) | 1.81 (1.65)
37 | Item level/IRT (item response theory) | 11.68 (8.73) | 1.72 (1.19)

Note: “Intelligence/cognitive ability” was not included in the statistical analyses for Table 2.
Table 3. Regression results for citations by keyword categories.

Predictor | B | SE
Intercept | −588.4 | 284.8
Adult/aging | −0.20 | 0.40
Aggregate/regional IQs | −0.17 | 0.52
Attention | −0.34 | 0.56
Brain/neuro | −0.10 | 0.18
Children/child development | −0.40 | 0.35
Crystallized intelligence | −0.56 | 0.49
ECTs | −0.37 | 0.36
Education | 0.44 | 0.32
Executive function | 1.85 | 0.44 *
Factor analysis | 1.38 | 0.54 *
Fluid intelligence | 1.22 | 0.54 *
Flynn effect | 0.77 | 0.41
Genes/environment | −0.34 | 0.25
Genes/evolution | −0.12 | 0.30
g factor | −0.53 | 0.33
Health | −0.46 | 0.23 *
Income/status/SES | 0.20 | 0.34
Intelligence/cognitive ability | 0.31 | 0.22
IQ/achievement/aptitude tests | 0.20 | 0.27
IQ theories | 1.27 | 0.38 *
Item level/IRT | −0.63 | 0.51
Longitudinal designs | −0.40 | 0.80
Memory/cognition | −0.20 | 0.44
Mental speed | −0.98 | 0.46 *
Modeling | 0.51 | 0.37
Personality | −0.58 | 0.46
Politics | 0.15 | 0.36
Psychometrics/statistics | −0.19 | 0.24
Race/ethnicity | −0.71 | 0.25 *
Raven’s | −0.71 | 0.56
Reasoning | −0.24 | 0.59
Sex differences | −0.26 | 0.41
Spatial ability | 0.96 | 0.43 *
Spearman’s hypothesis | 0.25 | 0.66
Working memory | 0.93 | 0.34 *
Count | 0.20 | 0.11
Year | 0.29 | 0.14 *
Year (nonlinear) | −0.96 | 0.43 *
Year (nonlinear) | 5.49 | 3.06
Year (nonlinear) | −10.52 | 6.78

Notes: B = unstandardized coefficient; SE = standard error; model R2 = 0.14, adjusted R2 = 0.10; * indicates significance at the p < 0.05 level. Year (nonlinear) refers to regression splines.
