Article

Academic Rankings with RePEc

1 Research Division, Federal Reserve Bank of St. Louis, P.O. Box 442, St. Louis, MO 63166-0442, USA
2 Institute for the Study of Labor (IZA), P.O. Box 7240, D-53072 Bonn, Germany
3 CESifo, Poschingerstrasse 5, 81679 Munich, Germany
4 Rimini Centre for Economic Analysis (RCEA), Via Patara, 3, 47921 Rimini (RN), Italy
Submission received: 28 October 2013 / Revised: 29 November 2013 / Accepted: 2 December 2013 / Published: 17 December 2013

Abstract

This article describes how data are collected and used for the computation of rankings within RePEc (Research Papers in Economics). This encompasses the determination of impact factors for journals and working paper series, as well as the ranking of authors, institutions, and geographic regions. The various ranking methods are also compared, using a snapshot of the data.

1. Introduction

RePEc has become an important bibliographic service for economics and related fields. A considerable amount of data has been collected regarding who authored which paper, where it was published, who reads it, and where it is cited. One way to use this wealth of data is to compute rankings of individuals, journals (and series), institutions, and even countries. Along with the growth of the underlying data, these rankings, even though they are still experimental, have grown in importance in the profession. Indeed, there is evidence that they are used more and more for evaluation purposes (promotion and tenure decisions) and even hiring. Also, country-specific rankings have been used in various professional publications and even the popular press.
It is therefore time for the methodology behind these rankings to be explained. While a criterion such as the number of citations may appear to be simple, it is necessary to understand how it is computed. Indeed, for ranking purposes in RePEc, self-citations are not counted, but citations to other versions of an article are counted. It is also important to understand how the citations are extracted, i.e., what citations can be considered in the statistics.
Compared with other ranking exercises, the present one also includes some criteria that are unique, such as those based on readership, those based on the number of authors citing, and those based on centrality among co-authors. It is also rare to find the same source being used both to establish impact factors of publications and rankings of authors or institutions. Finally, no other effort has included working papers, which have now become a very important way to disseminate research in economics, if not the most important.
The RePEc project would never have been possible without the efforts of the many volunteers who have participated in one way or another: the maintainers of the so-called RePEc archives who contribute the basic bibliographic data, and all those who have contributed through their programming skills, by making hardware and/or bandwidth available, by giving advice, or simply by spreading the word about RePEc. RePEc is committed to honoring the work of these volunteers by making sure their work will never be subject to fees, for either publishers or users, and will remain in the public domain.
The rest of the paper is structured in the following way. Section 2 describes how the various components of the data used in the rankings are gathered. Section 3 details the construction of the impact factors. Section 4 describes how articles and working papers can be ranked. The various criteria used to rank authors are introduced in Section 5, which also discusses the various ways these criteria can be aggregated and justifies the choices made for the “official” rankings. Sections 6, 7, and 8 present the procedures to rank institutions, geographic regions, and various other groupings, respectively. Section 9 takes a snapshot of the data and documents the concordance of the various ranking criteria. Section 10 discusses how RePEc rankings differ from other rankings. Section 11 concludes.

2. Data Gathering

This section describes how all the data are gathered to obtain the sources underlying the rankings. All data come from RePEc and other projects related to RePEc. These data are continuously updated, and the rankings are refreshed on a monthly basis.

2.1. Bibliographic Data

The source of all the bibliographic data is RePEc. RePEc (Research Papers in Economics, http://repec.org/) was founded in June 1997 under the leadership of Thomas Krichel as a follow-up project to NetEc, founded in 1993. Under very little central management, publishers (commercial or academic) contribute the bibliographic data (called metadata) themselves using a common format. These data are provided through the servers of the publishers, which anybody can access and use. Thus, RePEc is just a scheme to organize metadata and make it available in the public domain.
At the time of this writing, almost 1,500 archives were contributing metadata to RePEc, thus covering: 3,400+ series with 460,000 working papers, 1,500 journals with 720,000 articles, 15,500 book chapters, 12,000 books, and 2,700 software components, for a total of over 1,200,000 items. Almost 1,100,000 of them are available for download in full text.
So-called RePEc services are then allowed to use these data to freely provide public access to them. Several websites directly display the data collected through RePEc, the most popular being IDEAS [1], EconPapers [2], Inomics [3], and Socionet [4]. An email notification service for new on-line working papers is also available (NEP [5]). Finally, data gathered by RePEc are relayed through the Open Archives Initiative and therefore made available even more widely, but to services that do not specialize in economics, such as Google Scholar and OAIster.
RePEc data are so far relatively under-used for the exploration of publishing in economics; only a few papers have exploited the dataset. [6,7,8] use RePEc data to compute some alternative rankings of economists, and in the last case to categorize them in archetypes. [9] use the RePEc journal impact factors to determine the impact of a publication on academic salaries in California universities. [10,11] use the RePEc author rankings to analyze the distribution of citations across authors within cohorts. Finally, in this issue, [12] compare ISI and RePEc impact factors for econometrics journals and conclude that the 2-year ISI factors are not robust.

2.2. Author Data

For any ranking, one needs to collect information about the publications of an author. One great difficulty is the many ways an author’s name may be indexed. For example, John Maynard Keynes may be listed in the bibliographic metadata as
  • John Maynard Keynes
  • John M. Keynes
  • John Keynes
  • J. M. Keynes
  • J. Keynes
  • Keynes, John Maynard
  • Keynes, John M.
  • Keynes, John
  • Keynes, J. M.
  • Keynes, J.
and one can imagine many other ways, including misspellings. Variations are even more numerous if nicknames, titles, or suffixes (Jr., Sr., III) are used or if accents are involved. In addition, several people may have the same name, especially if the first name is abbreviated. Thus, an automated attribution of works to authors is bound to have a high level of errors. Human intervention is necessary here.
The best people to perform this intervention are the authors themselves. To do this, they register with the RePEc Author Service at http://authors.repec.org/. In doing so, they provide contact details, their affiliations (see next section), and their name variations expected in the metadata. The search engine then suggests to them works from the RePEc metadata that match the name variations—works that the author then can add to their profile.
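To make the matching concrete, here is a minimal sketch in Python of how a registered name could be expanded into the kinds of variants listed above. This is an illustration only, not the actual RePEc Author Service code; the function name and the variant rules are assumptions.

```python
# Hypothetical sketch: expand a registered name into plausible
# bibliographic renderings, similar to the variants listed above.
def name_variants(first, last, middle=None):
    firsts = [first, first[0] + "."]            # John, J.
    middles = [None]
    if middle:
        middles += [middle, middle[0] + "."]    # Maynard, M.
    variants = set()
    for f in firsts:
        for m in middles:
            given = f if m is None else f + " " + m
            variants.add(given + " " + last)    # e.g., "John M. Keynes"
            variants.add(last + ", " + given)   # e.g., "Keynes, John M."
    return sorted(variants)

print(name_variants("John", "Keynes", middle="Maynard"))
```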
One may ask why authors would go through that trouble. There are several incentives; see [13]. First, without being registered in the RePEc Author Service, an author is not ranked and his research output does not count toward the ranking of the institutions he is affiliated with. Second, when registered, an author obtains notification of new citations that are found within RePEc, a compilation of all citations, as well as a detailed ranking analysis every month.
At the time of this writing, over 32,000 authors were registered, claiming over 750,000 works as theirs, somewhat less than half of all the works listed in RePEc once the double-counts (works claimed by several authors) are taken into account.
The RePEc Author Service is based at the Economic Research Division of the Federal Reserve Bank of St. Louis and is monitored by the author of this paper. It runs on open source software written by Ivan Kurmanov and financed by a grant from the Ford Foundation, with extension funding provided by the Federal Reserve Bank of St. Louis.

2.3. Institutional Data

Institutional data are based on the institutional records collected since 1995 in EDIRC (Economics Departments, Institutes, and Research Centers in the World, [14]). This website collects links to academic institutions and government agencies that would principally employ economists. The data are quite accurate; for example, within a university it lists all relevant departments (economics, finance, agricultural economics, business schools, and sometimes public policy and similar departments), research centers, institutes, formal research groups, and some chairs, as long as economists form a substantial part of the staff or economic issues are prominent in the mission of the group. A second condition is that the listed entity has its own website. It does not need to have its own server (virtual or not), but it needs to have a web page that is more substantial than just a listing of classes: there should be at least a listing of faculty by name.
Entities not based in universities can also be listed. The obvious ones are central banks and government agencies directly applying economic policy, say ministries of finance, treasury, labor, and industry, but also statistical agencies and various research agencies. The same applies to international organizations. Finally, independent research institutes and think tanks are also listed, but not most commercial institutions (banks, consultants). The only exceptions are those that have a RePEc archive or that provide substantial research for free through their website. Associations and societies are also listed.
All in all, over 12,700 institutions are listed, almost 6,000 of which are associated with an author registered in the RePEc Author Service (not counting those without claimed works). If they are specialized in a particular field, they are categorized, and almost all governmental agencies are categorized. Institutions are also categorized by country or, in the case of the United States, by state. When authors register with the RePEc Author Service, they have the opportunity to specify which of the institutions listed in EDIRC they are affiliated with (except associations and societies), but they can also suggest new entities. If these do not fit within the criteria of EDIRC, they are still kept in the authors' lists of affiliations without a link to an institution in EDIRC.
EDIRC is housed and managed at the Economic Research Division of the Federal Reserve Bank of St. Louis by the author of this paper.

2.4. Citation Data

Citation counts are often considered to be the most useful metric of the impact of a piece of research. Finding citations is, however, not a trivial matter. It can be performed either manually at great cost or automatically, a process that needs considerable fine-tuning and many exception rules.
All citation data for RePEc ranking purposes are provided by the CitEc project [15] managed by José Manuel Barrueco Cruz, librarian at the University of Valencia. CitEc runs on hardware provided by the Valencian Economic Research Institute.
CitEc downloads all papers that it can find in pdf format, typically those that are not hidden behind a password or some IP protection. Those pdf files are then successively converted to PostScript and text. The text is then parsed to recognize the references, which are then paired with items listed in RePEc with a fuzzy matching algorithm on titles and authors. To prevent erroneous attributions, the level of confidence for a match needs to be set quite high. For somewhat lower levels of confidence, registered authors have the option to check and add appropriate citations.
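As an illustration of the matching step, the following minimal sketch pairs an extracted reference string with RePEc titles using a similarity ratio. It is an assumption-laden stand-in for CitEc's actual algorithm: the data layout, the use of difflib, and the 0.90 threshold are all illustrative.

```python
# Hypothetical sketch of fuzzy reference matching; CitEc's real
# algorithm and confidence thresholds differ.
from difflib import SequenceMatcher

def match_reference(ref_text, repec_items, threshold=0.90):
    """Return the best-matching RePEc item, or None below the threshold.

    A high threshold keeps erroneous attributions rare; lower-confidence
    matches are left for registered authors to confirm by hand.
    """
    best_item, best_score = None, 0.0
    for item in repec_items:
        score = SequenceMatcher(None, ref_text.lower(),
                                item["title"].lower()).ratio()
        if score > best_score:
            best_item, best_score = item, score
    return best_item if best_score >= threshold else None

items = [{"handle": "RePEc:foo:wpaper:1", "title": "Academic Rankings with RePEc"}]
print(match_reference("Academic ranking with RePEc", items))
```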
At the time of this writing, over 360,000 documents have been processed, extracting over eight million references, over three million of which refer to over 400,000 items listed in RePEc. Given that only freely available documents can be analyzed, a large part of those documents are working papers. This has advantages and disadvantages. Working papers are typically more recent than published articles, thus it allows a much more up-to-date analysis than with articles alone. However, citations in published articles are considered to be much more valuable than those in working papers (erroneously, as discussed further in a subsequent section). This is partially corrected in three ways: (1) publishers directly provide information to CitEc about references in their articles, either because their content is gated or because they want to increase the quality of matches; (2) for authors who have both the working paper and published article version of an item in their profile, the references found in one version can be attributed to the other; (3) on an experimental basis, authors can add references to the database, something authors are quite keen to do to increase their citation count. The system requests that all references of a paper be added, so as to provide a positive externality for others as well.

2.5. Abstract Views and Downloads Data

Another measure of the impact of research is how often it has been “looked at.” Abstract view statistics assess the attractiveness of the title, the authors, or the general topic. Download statistics, in addition, indicate how successful the abstract was in convincing the reader to obtain the full document.
Keeping track of abstract views is not difficult using the logs of a web server. The only drawback is that abstracts displayed within search results cannot be counted. Downloads are more difficult to track, given that they typically link to external servers. Thus, some mechanism needs to be put in place to keep track of downloads.
The decentralized nature of RePEc complicates the compilation of these statistics. The participating services first need to keep appropriate logs and second need to make them available in an appropriate format. The LogEc project [16], managed by Sune Karlsson at Örebro University, tries to collect this information. The following RePEc services provide information for downloads and abstract views: EconPapers, IDEAS, NEP, EconomistsOnline, and Socionet. The defunct NetEc also used to provide data. Other services that use RePEc data, in whole or in part, unfortunately do not provide statistics. Among them are EconStor, Inomics, Econlit, OAIster, and any service making use of the RePEc data made available through the Open Archives Initiative (Google Scholar, for example).
Quite obviously, these statistics are subject to manipulation, as one could repeatedly download a paper to increase its count. For this reason, various information about the abstract viewer or downloader is recorded to prevent repeat counts. This is mainly performed through the use of the IP address, also taking into account IP clusters. Also, and this is mostly relevant for abstract views, visits by search engine robots need to be discarded, as they do not represent human readership. Some robots identify themselves, and they can easily be taken care of. Others do not obey standard protocols and need to be recognized as robots. Various identification mechanisms are used to filter these additional robots from the data. Complete details on how all this is performed cannot be given here. But overall, about 80% of abstract views are thus discarded—less for downloads.
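The following minimal sketch illustrates two of the filters just described: dropping self-declared robots and counting at most one view per address and item. The field names and the robot list are illustrative assumptions, not LogEc's actual rules, which remain undisclosed.

```python
# Hypothetical sketch of abstract-view filtering; LogEc's real rules
# (IP clustering, stealth-robot detection) are more elaborate.
KNOWN_ROBOTS = ("googlebot", "bingbot", "crawler", "spider")

def count_abstract_views(log_entries):
    """Count views, at most one per (IP, item) pair, robots excluded."""
    seen = set()
    count = 0
    for entry in log_entries:
        agent = entry["user_agent"].lower()
        if any(robot in agent for robot in KNOWN_ROBOTS):
            continue                      # self-identified robot traffic
        key = (entry["ip"], entry["item"])
        if key in seen:
            continue                      # repeat view from the same address
        seen.add(key)
        count += 1
    return count
```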
Whether this results in an over-count or an under-count of the true readership is unknown. The robot filters may err, though. Also, some downloads are discarded as repeats despite originating from different users, because they came from the same IP cluster. This happens in particular with institutions using a single cache or proxy server. We hope, however, that the counts are sufficiently high for such accidents to even out relatively smoothly across all documents, so that no bias is introduced.
In addition, various checks and balances are implemented to recognize abnormal behavior, mostly from authors trying to manipulate the statistics. Obviously, these safeguards are not revealed here, but let it be known that a human eye has a final look at the server logs in these cases and that several authors have been caught.
Despite all these adjustments, LogEc records over two million abstract views and half a million downloads a month; in other words, every document’s abstract is viewed once or twice a month, and every item available on-line is downloaded once every second month, on average, across reporting RePEc services.

2.6. Further Refinements of the Data

As the works covered in RePEc contain both publications and pre-publications, there is an issue with several versions of the same work being listed. In particular, a working paper may appear in several series. Thus, for any measure that considers the number of works someone has authored, one should count distinct works. For technical reasons, the matching of different versions is done only for works that are listed in a registered author’s profile. The basis is a very similar title and the author’s recognition of authorship. Manual adjustments are done when titles differ, upon request to the RePEc team.
Note that such works may have been cited in their different versions. A citation to any version is counted toward all versions. The same applies to references. Any author statistics involving a count of works or citations aggregates the data from the different versions. Such matching is not performed for abstract views and download counts.
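A minimal sketch of this aggregation, assuming the version matching has already produced groups of version identifiers for each distinct work; the data structures are illustrative.

```python
# Hypothetical sketch: a citation to any version counts once toward
# the whole work, so citing identifiers are pooled across versions.
def citations_per_work(version_groups, citations):
    """version_groups: work id -> list of version ids;
    citations: version id -> set of ids of citing items."""
    totals = {}
    for work_id, versions in version_groups.items():
        citers = set()
        for v in versions:
            citers |= citations.get(v, set())   # union, so no double count
        totals[work_id] = len(citers)
    return totals
```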

2.7. Discussion of Coverage

Quite obviously, only journals and working paper series that are listed in RePEc can be classified, and only authors who registered themselves can be included. Thus, there are omissions. These are avoidable, but the structure of RePEc puts the burden of indexing on the publishers. Unlisted authors can easily correct this by registering themselves. Missing journals and working paper series can be indexed by their publishers, and they will then be fully considered.
Being listed is not sufficient. The listing needs to be maintained, i.e., new items added as they are published. Some publishers are better at this task than others, be it with regard to timeliness, completeness (missing items), coverage (years covered), or data quality (syntax errors, confusing author names). Again, it is up to the publishers to do this work. Registered authors also need to maintain their profile with any additions.
Deceased authors are kept in the database, but their affiliations are removed, the logic being that they can no longer contribute to the academic life of their employer. The RePEc Author Service maintenance team tries to keep their profiles current. Authors whose email addresses no longer function are considered to have either moved or died; hence, their affiliations are discarded from consideration.
Note that while some journals present in other studies are not classified here, our rankings also cover working paper series, which are typically neglected by other studies. There is also a limited number of chapters and books. It turns out that some working paper series have very high impact factors, while many journals have low impact factors. It is thus wrong to believe that research is valued only when it is published in a journal. There are also software components, which are either stand-alone program code or material necessary to replicate some study. Citations to them are currently not considered, as it is often difficult to disentangle them from citations to the original works. More on this later, in the discussion of impact factors.

3. Computation of Impact Factors and Ranking of Series or Journals

Many ranking exercises for institutions or authors rely heavily on impact factors calculated elsewhere, and these impact factors are usually the most controversial issue with these rankings. Here we take a different approach in that the impact factors are determined with the RePEc data. We compute four sets of impact factors.

3.1. Simple Impact Factors

The computation of this simple impact factor is rather straightforward. Just find all citations to items in that particular series or journal, count those citations and divide by the number of items in the series or journal. Several adjustments are performed to the number of citations: (1) Self-citations within the series or journal are discarded, to prevent self-inflation. Self-citations by authors are still counted, though. (2) Considering that a work may have appeared in different series, all versions of the cited and citing work are considered, but only one is counted. This matters: for example, an article may be cited, while its working paper version is not, but the working paper series is still credited with this citation.
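The computation can be summarized in a few lines. The sketch below assumes citations are already de-duplicated across versions and reduced to (citing venue, cited venue) pairs; it is an illustration, not the production code.

```python
# Hypothetical sketch of the simple impact factor for one venue.
def simple_impact_factor(venue, citations, n_items):
    """citations: iterable of (citing_venue, cited_venue) pairs."""
    received = sum(1 for citing, cited in citations
                   if cited == venue and citing != venue)  # drop within-venue self-citations
    return received / n_items
```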

3.2. Recursive Impact Factors

Recursive impact factors are computed in the same way as the simple impact factors, except that every citation carries some weight. That weight is the recursive impact factor. It is thus the fixed point of a function that could be specified in the following way:
R_I = \frac{1}{|I|} \sum_{i \in I} \sum_{J \neq I} c_{J \to i} \, R_J ,
where $R_I$ is the recursive impact factor of series or journal I, which has items i, and $c_{J \to i}$ represents the citations from journal J to item i. To guarantee that a fixed point exists, the weights are normalized such that the average item (article or working paper) has a recursive impact factor of one. Also, when there are several versions of a citing item, the one with the highest impact factor is considered.
These factors are computed by iteration. In the first pass, simple impact factors are used, and in each subsequent pass the recursive impact factors from the previous iteration are taken. This never converges completely, however, as new items and citations are continuously added to the database. The results are relatively stable, though. Concretely, the weights are recomputed every day for all series and journals that are refreshed on IDEAS, that is, those that have had any amendments in the bibliographic data and those that have not been refreshed for thirty days.
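A minimal sketch of this iteration, under simplifying assumptions: citation counts between venues are fixed in a dictionary, within-venue self-citations have already been removed, and a fixed number of passes stands in for the continuous daily updates described above.

```python
# Hypothetical sketch of the recursive impact factor iteration.
# c[(J, I)]: citations from venue J to venue I (self-pairs removed);
# n[I]: number of items in venue I.
def recursive_impact_factors(c, n, n_passes=50):
    venues = list(n)
    r = {I: 1.0 for I in venues}     # uniform start: first pass = simple IFs
    total_items = sum(n.values())
    for _ in range(n_passes):
        raw = {I: sum(r[J] * c.get((J, I), 0) for J in venues) / n[I]
               for I in venues}
        # normalize so the average item has a recursive impact factor of one
        mean = sum(raw[I] * n[I] for I in venues) / total_items
        r = {I: raw[I] / mean for I in venues}
    return r
```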
The recursive impact factor computed here is similar to the Google PageRank [17], which ranks web pages higher if they are linked to by many others, even more so if the linking sites themselves have a high PageRank. The difference is that Google computes a different factor for every page, whereas we compute one for every journal or paper series. The idea of the PageRank is to determine the probability that a web surfer clicking randomly would end up at that page. In our case, this would be the probability, or rather something proportional to it, that a reader randomly following references in articles and papers would end up at a particular journal or working paper series.
The recursive impact factor also bears some similarity with the Article Influence, which is a journal’s Eigenfactor divided by the proportion of articles from that journal (see [18] for details). We have, however, not checked how close to each other the two are.

3.3. Discounted Impact Factors

This factor is similar to the simple impact factor, with one important difference: each citation counts for the inverse of the age in years (plus one) of the citing paper. Thus, if an article is cited in a paper dated in 2009 and we are in 2012, this citation would count for 1/(3+1) = 0.25.
Such a factor gives an edge to what is cited now, and therefore highlights the publication series that are hot now. It does not mean, however, that their most recent publications are well cited, only that some of their items—possibly old ones—are now well cited.

3.4. Recursive Discounted Impact Factors

This factor is the recursive version of the discounted impact factor. It thus uses its own factors as weights, multiplied by the age factor. This highlights publication series currently well cited in series that are currently well cited.

3.5. H-Index

This statistic is typically used for authors, and hence it is more thoroughly discussed when we go through author rankings. A journal has an H-index of h if h of its articles have at least h citations each. Quite obviously, this favors older journals or series that have a good and numerous stock of articles that attract citations. It takes quite a few years for young publication series to rank well.

3.6. Abstract Views

This criterion simply extracts the abstract views statistics from the LogEc project, using the numbers for the past twelve months.

3.7. Downloads

As for the previous criterion, download numbers are used for the past twelve months.

3.8. Aggregation

With this many criteria, rankings are obviously going to differ, and every editor or publisher is going to find a favorite. There is nothing wrong with that, but one may want to have a more authoritative ranking. We suggest that aggregating these rankings may do the trick. For reasons explained below on the aggregation of author rankings, the harmonic mean of the ranks over all criteria is used.

3.9. Discussion

Some other published impact factors differentiate by type of article, for example, by giving different weights to full articles, notes, and book reviews. One may also want to discard corrigenda. The metadata do not contain the type of the article, however, and the title in the vast majority of cases does not allow one to infer the type. Thus, we do not take article types into consideration.
Also, some journal issues are different. For example, the American Economic Review has one issue a year with non-refereed short articles, the Papers and Proceedings of the annual meeting of the American Economic Association. These short papers are less likely to be cited yet add to the article count, thereby diluting the impact factor of the regular articles. One could isolate these special issues, but the task then becomes subjective, as other journals are subject to the same issues to varying degrees. We want to stay objective in our ranking and thus do not adjust. In this particular example, the American Economic Association does not want this distinction to be made anyway.
There are also some small sample issues. Some working paper series, especially, have few items and may as a result have unexpectedly high or low impact factors—high if just one item is often cited. The current solution is not to rank series or journals with fewer than 50 items. The impact factors are, however, used as is.
Finally, there is a problem when a journal changes publishers. Technically, it is then a different journal in RePEc, as its metadata are supplied from a different source. Publishers have the opportunity to record in the journal metadata what the predecessor or successor of a journal was, but few do (or are aware they can). When recognized, this is adjusted by hand by adding pairs, or even triplets, to an exception file. Statistics are then aggregated over them.

4. Ranking of Works

There are six different ways to rank works (working papers, articles, chapters, books). One is to simply count the number of citations it has gathered, again adjusting for different versions of the same item. The second is to discount each citation by its age. The remaining four are to weigh those citations by the impact factors of the citing series or journals.
Thus, if one were to add up all citations to articles in a particular journal, then divide the result by the number of articles, one would obtain the simple impact factor (except that self-citations within the journal need to be excluded). Or if one were to add up the scores of all articles in a journal, with scores using the recursive impact factors and excluding self-citations, one would obtain the recursive impact factor. Doing this with simple impact factors would result in the factors of the first pass in the recursive impact factor computation.
RePEc publicizes rankings for the top 1‰ (one in every thousand) items for each ranking method. In addition, items published five years ago or more recently that are among the top 2‰ are also listed. As there are several criteria, one could also think of aggregating them. Because of the large amount of data, the required computational time and the fact that these rankings are updated daily as abstract pages are refreshed, this has not been implemented.

5. Rankings of Authors

Every person registered in the RePEc Author Service with works listed in the profile is ranked. There are many ways to rank authors and this section discusses those used in the RePEc rankings. The strategy to aggregate the various rankings is then discussed.

5.1. Criteria based on the Number of Works

The simplest of all ways to rank authors is by the number of works they have authored. However, as working papers are also considered, the same work may appear several times, in different versions. These duplicates therefore cannot be counted separately. A ranking including the duplicates is provided, but it is not used in the calculation of the aggregate rankings.
The number of distinct works thus serves as the basis for the following criteria. They are a combination of simple counts and counts with weights from the simple or recursive impact factors, with those counts either divided by the number of authors or not. Thus, the following criteria are used (with their respective labels in bold face):
  • NbWorks: Simple count.
  • DNbWorks: Count divided by number of authors on each work.
  • ScWorks: Count with simple impact factor weights.
  • AScWorks: Count with simple impact factor weights divided by number of authors on each work.
  • WScWorks: Count with recursive impact factor weights.
  • AWScWorks: Count with recursive impact factor weights divided by number of authors on each work.
The first two criteria merely indicate how prolific an author is. The four others measure one characteristic of the quality of one’s work: where it was published. It is an imperfect measure, given that one may simply ride on the coattails of other papers published in the same series or journal that have been frequently cited. But such counts based solely on impact factors are the ones most frequently used, as they do not necessitate the compilation of citations if one simply takes the impact factors from somewhere else.
Note that the discounted impact factors and recursive discounted impact factors are not used here. They could also be considered, but this would put too much weight on criteria based on the number of works in the overall rankings.
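For concreteness, here is a minimal sketch of the six criteria for a single author. It assumes each distinct work carries its venue's simple and recursive impact factors and its number of authors; the data layout is illustrative.

```python
# Hypothetical sketch of the work-count criteria, labels as above.
def work_criteria(works):
    """works: list of dicts with keys simple_if, recursive_if, n_authors."""
    crit = dict.fromkeys(["NbWorks", "DNbWorks", "ScWorks",
                          "AScWorks", "WScWorks", "AWScWorks"], 0.0)
    for w in works:
        crit["NbWorks"] += 1
        crit["DNbWorks"] += 1 / w["n_authors"]
        crit["ScWorks"] += w["simple_if"]
        crit["AScWorks"] += w["simple_if"] / w["n_authors"]
        crit["WScWorks"] += w["recursive_if"]
        crit["AWScWorks"] += w["recursive_if"] / w["n_authors"]
    return crit
```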

5.2. Criteria based on Citation Counts

Here, we have criteria similar to those based on the work counts, but we count citations. Self-citations are eliminated to avoid artificial and in some cases malicious inflation of citation scores. Each citation may be weighted by any of the four impact factors or left unweighted, and all these criteria may be divided by the number of authors or not.
In addition, we provide the h-index introduced by [19]. His definition: A scientist has index h if h of his/her $N_p$ papers have at least h citations each, and the other $(N_p - h)$ papers have no more than h citations each. Thus, this author has at least $h^2$ citations (at least h papers with at least h citations each). Such a criterion puts more emphasis on an important body of work, instead of a few very highly cited papers, by giving a higher score to those who have many cited papers. This index was developed for physics, where scientists write a lot of papers and also cite rather generously. Some physicists have h above 100, but in economics it is very rare to have an h above 20, mainly because economists write fewer, but more involved, papers.
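Computing h from a list of per-paper citation counts is straightforward; a minimal sketch:

```python
# Sketch: largest h such that at least h papers have at least h citations.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    while h < len(counts) and counts[h] >= h + 1:
        h += 1
    return h

assert h_index([10, 8, 5, 4, 3]) == 4   # four papers with >= 4 citations each
```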
A variation of the h-index is provided, the so-called Wu-index, following [20]: A scientist has index w if w of his/her $N_p$ papers have at least $10w$ citations each, and the other $(N_p - w)$ papers have no more than $10w$ citations each.
Finally, two criteria count the number of registered authors citing a particular author: first a simple count, second a count considering the rank of the citing author, giving more points for highly ranked citers. This measures how widely an author is cited; for example, it penalizes those who cite each other repeatedly (“citing clubs”). Note that a co-author counts for these criteria if she has some self-citations; this is the only case where a self-citation may count. It is possible to compute these criteria thanks to the very nature of the RePEc data with author profiles. We are not aware of any other ranking using such criteria.
Thus, we have the following criteria based on citations:
  • NbCites: Simple citation count.
  • ANbCites: Citation count divided by number of authors on each work.
  • ScCites: Citation count with simple impact factor weights.
  • AScCites: Citation count with simple impact factor weights divided by number of authors on each work.
  • WScCites: Citation count with recursive impact factor weights.
  • AWScCites: Citation count with recursive impact factor weights divided by number of authors on each work.
  • DCites: Citation count discounted by age.
  • ADCites: Citation count discounted by age and divided by number of authors on each work.
  • DScCites: Citation count with discounted impact factor weights.
  • ADScCites: Citation count with discounted impact factor weights divided by number of authors on each work.
  • WDScCites: Citation count with recursive discounted impact factor weights.
  • AWDScCites: Citation count with recursive discounted impact factor weights divided by number of authors on each work.
  • HIndex: h-index.
  • WIndex: Wu-index.
  • NCAuthors: Count of citing registered authors.
  • RCAuthors: Rank weighted count of citing registered authors.
Due to scheduling differences between the upload of new citations and the ranking computations, the newest citations are included for only a minority of the authors in the current ranking, but they are included for all authors in the next issue of the rankings. Again, all self-citations by the author are, of course, excluded.

5.3. Criteria based on Journal Page Counts

The following criteria concern only journal articles. Whether one publishes a note, which is shorter, or a full-length article is an indication of how editors feel about the contribution of an article. Also, some argue that editors allow particularly good pieces to run longer, while less important works are cut. Thus, the page count can be an indication of the worth of one’s publication record. Again, the page count can be weighted or not and divided by the number of authors or not.
  • NbPages: Simple page count.
  • ANbPages: Page count divided by number of authors on each work.
  • ScPages: Page count with simple impact factor weights.
  • AScPages: Page count with simple impact factor weights divided by number of authors on each work.
  • WScPages: Page count with recursive impact factor weights.
  • AWScPages: Page count with recursive impact factor weights divided by number of authors on each work.
Thus, publishing a long article in an obscure journal is valued highly by the first two criteria, but barely factors into the four others. Note that these criteria, in contrast to the others, pertain to a subset of all documents (articles). Also, these criteria can sometimes be somewhat misleading. For example, if a journal does not provide page numbers, either because they are missing in the metadata or because the article is online only and not in a paginated format, the number of pages defaults to one. This is justified by the fact that in some cases only the number of the starting page is provided, making the item indistinguishable from a one-page article. In addition, these criteria do not take into account the size of the pages. Some journals publish in A4 or Letter format, whereas most have smaller formats. Font size may vary as well; thus, the actual content of a page could be quite different from one journal to another. No such adjustments are performed, as there is no way to systematically verify those parameters and how they may change through the years, except through intensive manual labor that would count the average number of words per page or something of that order.
Note also that the discounted impact factors are not considered. Adding them would be giving more weight to publications in journals. Given that many journals have impact factors lower than working paper series, there is no particular reason to privilege journals. The market can decide which is the better publication outlet.

5.4. Criteria based on Popularity on Reporting RePEc Services

Here, we measure how many times document abstracts have been viewed and how often the documents have been downloaded. As described in the section on LogEc, these statistics pertain to the subset of RePEc services that report such statistics. Furthermore, as all the metadata collected by RePEc are in the public domain, one cannot track how much they are used elsewhere. Looking at the collected subset can, however, still give a good indication. Note that these statistics are checked for multiple views or downloads, and robot and web spider activity is excluded, as described above.
Again, we provide statistics with the criteria either divided by the number of authors or not. Thus the following four criteria are available in the category:
  • AbsViews: Total abstract views in the past 12 months.
  • AAbsViews: Total abstract views per author in the past 12 months.
  • Downloads: Total downloads in the past 12 months.
  • ADownloads: Total downloads per author in the past 12 months.
Statistics are computed for the past 12 months. On the one hand, including a longer period allows the smoothing out of inherent short-term variability—for example, new papers announced through NEP get a large one-time boost, and authors may not yet have claimed them in their profile. On the other hand, the period considered should not be too long. First, this allows one to take into account what is popular now; second, it corrects for bias stemming from items having been listed for a long time, while even older material may have been added only recently.
Note that counting abstract views and downloads starts as soon as the research item (article, paper, etc.) is added to RePEc, and these numbers are aggregated for registered authors. Thus, when an author creates a profile, the statistics for his/her papers are added also for the period where he/she was not yet registered.
For computational reasons, the criteria with statistics per author are computed with a one-month delay.

5.5. Criteria based on Co-Authorship Networks

These two criteria have been recently included and exploit the new CollEc project, http://collec.repec.org/, run by Thomas Krichel and hosted by the Economics Department at Washington University. CollEc looks at all registered authors and computes a network of all of them, using their ties through co-authorship. Several disconnected networks emerge from this analysis, one of them encompassing the majority of the authors (as of this writing, 23,994 of 32,664). Within this network, the shortest path between any two authors is computed. With this, we can compute two criteria:
  • Close: The average number of degrees of separation through co-authorship with all other registered authors.
  • Betweenness: The frequency with which the author appears on the shortest path through co-authorship between any two other registered authors.
The first measures the average number of hops through the co-authorship network that are necessary to reach all member authors. This is similar to the Erdős number mathematicians use to relate themselves to the most prolific of them, Paul Erdős. In economics, there is no such standout author, and we average over all. This is the only criterion where a smaller number is better.
The second looks at how likely a shortest path (from the set of all shortest paths) is to run through a particular author. Because some authors in the network only appear at the end-points of shortest paths (“dangling nodes”), the number of those that can be ranked with betweenness is smaller than for closeness, currently 16,791. In both cases, authors with null scores are ranked just behind the last author with a score.
Why are these criteria used for rankings? For one, they measure how involved and networked authors may be. Co-authorship is penalized by other criteria, so this may be some redemption for authors who have helped many others with their work. And because these criteria do not merely measure the number of co-authors but also the centrality in the co-authorship network, they are more relevant to measuring the pre-eminence of an author. In addition, they encourage authors to get their co-authors to sign up with RePEc.

5.6. Aggregation of Criteria

Quite obviously, with so many criteria, it is difficult to agree on who the best economists are, especially as the rankings certainly do not correlate perfectly. Some way to aggregate the rankings is required, and unfortunately different ways of doing so give different results. In fact, they emphasize different aspects that all have some relevance. We discuss some of them here and then discuss our choice.

5.6.1. Harmonic Mean of Ranks

The harmonic mean is defined as
M_{-1} = \frac{N}{\sum_{i=1}^{N} \frac{1}{r_i}} ,
where $r_i$ is the rank of an author under criterion i. In such a mean, very good ranks carry a lot of weight; for example, the first rank counts twice as much as the second one. But a difference of one rank carries very little weight at high rank numbers. This aggregation method therefore rewards those who are particularly good in some category, but perhaps rewards them too much. For this reason, the harmonic mean is dampened somewhat by adding a constant (currently one) to each rank and then subtracting it from the mean.

5.6.2. Arithmetic Mean of Ranks

This is the easiest and most frequently used way to aggregate criteria and create indices. It is defined as
M_1 = \frac{1}{N} \sum_{i=1}^{N} r_i .
Doing poorly on one criterion penalizes an author particularly hard. Doing particularly well on one criterion to compensate is much more difficult. Thus, the arithmetic mean rewards those who rank consistently across criteria.

5.6.3. Geometric Mean of Ranks

The geometric mean is defined as
M_0 = \left( \prod_{i=1}^{N} r_i \right)^{1/N} ,
where ∏ symbolizes the product. The geometric mean penalizes poor rankings and emphasizes good rankings. To see this, notice that the geometric mean is the exponential of the arithmetic mean of the logarithms of the ranks, and thus it dramatizes the features of the latter. Or, put another way, given a generalized mean with exponent p defined as
M_p = \left( \frac{1}{N} \sum_{i=1}^{N} r_i^p \right)^{1/p} ,
the geometric mean corresponds to the limit $p = 0$, which is between the arithmetic mean ($p = 1$) and the harmonic mean ($p = -1$).
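The three means discussed above can thus be obtained from a single function; a minimal sketch (the geometric mean is handled as the limit case via logarithms):

```python
import math

# Sketch of the generalized mean of ranks; p = 1, 0, -1 recover the
# arithmetic, geometric, and harmonic means, respectively.
def generalized_mean(ranks, p):
    n = len(ranks)
    if p == 0:                           # limit case: geometric mean
        return math.exp(sum(math.log(r) for r in ranks) / n)
    return (sum(r ** p for r in ranks) / n) ** (1 / p)

ranks = [1, 4, 10]
print(generalized_mean(ranks, 1))    # arithmetic: 5.0
print(generalized_mean(ranks, 0))    # geometric:  ~3.42
print(generalized_mean(ranks, -1))   # harmonic:   ~2.22
```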

5.6.4. Lexicographic Ordering of Ranks

Ranking extremely well for a particular criterion is the most rewarding with this aggregation method. For an author, all ranks are ordered from best to worst, then all authors are ranked in the following way: first all those with their best rank being a first rank, the tie breaker being their second best rank, then third best. Once all authors ranked first for any criterion are exhausted, those with rank two as their best rank are taken, and so on. This is akin to the ordering of words in the dictionary, hence it is named “lexicographic.” This concept is also used in economics to describe some preference classes in utility theory.

5.6.5. Graphicolexic Ordering of Ranks

This method takes the lexicographic method but turns it on its head, hence its newly coined name: authors are ranked by their worst rank under any criterion, then their second worst rank to break ties, and so on. This rewards authors who do not have a slip-up on any criterion.

5.6.6. Sum of Percent of Best in Criterion

All the aggregation methods above consider only how someone is ranked according to the various criteria, but not how far apart the ranks are from each other for each criterion. For example, barely being first is valued in the same way as being first with a large gap over the second-ranked. One way to take the latter into account is to attribute 100% to the first ranked, and then proportional percentages to the lower-ranked authors. All these scores are then added. This aggregation method most benefits those who have criteria where they are significantly better than others, especially criteria where the dispersion of scores is larger.

5.6.7. Exclusion of Extremes

The truncated mean excludes the x largest and smallest values. This reduces the impact of outliers. In particular, if one thinks that the particular aggregation mean one has chosen is influenced too much by such outliers, using truncation can make the mean more credible. There is no particular guideline for choosing the value of x. An alternative is the Winsorized mean, where the truncated ranks are set to the largest and smallest remaining ranks, respectively.

5.6.8. Discussion and Aggregation Choice

We have identified 35 different criteria for ranking authors and could have easily added more. In addition, we presented six aggregation methods, which can even be varied with the number of extremes to exclude and some other degrees of freedom. Each of the criteria can be multiplied by some weight. This is a dismaying array of possibilities, but we need to make choices. Those choices are easier if the criteria or aggregation methods lead to similar results. To some extent they do, as we see in a subsequent section, but there are noticeable differences. We still need to make a choice and take a stand.
Everyone would probably favor a combination of criteria and aggregation method that would favor oneself. We need to find something that is credible, in the sense that a person outside the profession would find it agreeable. We want to highlight particular achievements, say that an author is particularly successful in downloads despite not having published much (yet), or that an author elicited many citations despite not being prolific. The harmonic mean achieves this, but needs to be tempered somewhat, and we thus add a constant of one to each rank. Also, we include all criteria but two in the aggregation: the simple number of works NbWorks (which does not distinguish distinct works, as multiple versions of the same work inflate this count) and the Wu-index (as it leads to a large number of ties and in particular a lot of null scores). For each author, we further truncate by dropping the best and worst ranking. Thus, in summary: we consider for each author 31 rankings from a pool of 33, having dropped NbWorks and the Wu-index from the 35 presented rankings for all authors, with aggregation through an adjusted harmonic mean.
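Putting the pieces together, a minimal sketch of this aggregation for one author follows; the dampening constant and the dropped criteria follow the description above, while the data layout is illustrative. Lower scores are better.

```python
# Hypothetical sketch of the "official" aggregation: drop NbWorks and
# the Wu-index, truncate each author's best and worst remaining rank,
# then take a harmonic mean dampened by adding one to each rank.
def aggregate_score(ranks_by_criterion):
    """ranks_by_criterion: dict mapping criterion label -> rank."""
    kept = sorted(r for label, r in ranks_by_criterion.items()
                  if label not in ("NbWorks", "WIndex"))
    kept = kept[1:-1]                                # truncate extremes
    harmonic = len(kept) / sum(1 / (r + 1) for r in kept)
    return harmonic - 1                              # undo the +1 dampening
```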
These choices can, and should, be argued and we leave the reader the opportunity to try other ways to rank on the website.

6. Ranking of Institutions

When registering, each author has the opportunity to affiliate himself with some institution(s). For those that are listed in EDIRC, the affiliation is recorded with an identifier that can be used to aggregate all authors from that institution. This then allows ranking of institutions.
A few rules apply. Only institutions listed in EDIRC are ranked. An author can affiliate himself with several institutions, and all receive credit for that author. If an institution is a sub-entity of another institution also listed in EDIRC, the latter also receives credit. The ranking scores of sub-entities are computed, but these sub-entities do not increment the ranking counters; this makes it possible to consider a sub-entity as if it were a stand-alone entity. For each criterion, the institution’s score is simply the sum of the scores of its affiliated authors. The only exceptions are the h-index and the Wu-index; see below.
Quite obviously, institutions with many authors have an advantage. Still, taking an average score within an institution would make little sense, as author registration is not mandatory, and potentially lower-ranked authors may be discouraged from registering. On the contrary, adding up all authors’ scores gives the right incentive: everyone should register, including students who have already authored something in RePEc.
One controversial aspect, though, is how to treat authors with multiple affiliations. Until the December 2008 ranking, each affiliation counted equally and fully, which counted some authors multiple times, and some institutions with numerous “courtesy” appointments would rank much higher than expected. Since the January 2009 ranking, the rules for multiple appointments have changed in the following way. For each affiliation i, the number of registered authors is counted; call it $N_i$. Then, the weight of that institution is
w_i = \frac{1}{2} \, \frac{\prod_j N_j / N_i}{\sum_k \prod_j N_j / N_k} = \frac{1}{2 N_i \sum_j \frac{1}{N_j}} .
Note that these weights sum to 0.5. The remaining 0.5 is attributed to the affiliation whose website domain most closely matches the email address or personal website of the author (ties are split equally). If it is impossible to identify a principal affiliation, for example for authors without institutional homepages and with email accounts at Gmail or alumni accounts, all weights are doubled. For affiliations that are not listed in EDIRC, and thus do not have a well-defined $N_i$, the default is the total number of registered authors divided by the number of institutions in EDIRC with authors.
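A minimal sketch of these weights, assuming the principal affiliation has already been identified from the author's email or homepage domain; the handling of non-EDIRC affiliations is omitted.

```python
# Hypothetical sketch of affiliation weights: the w_i above sum to 0.5,
# and the remaining 0.5 goes to the principal affiliation.
def affiliation_weights(author_counts, principal=None):
    """author_counts: dict mapping each affiliation to its N_i."""
    inv_sum = sum(1 / n for n in author_counts.values())
    w = {i: 1 / (2 * n * inv_sum) for i, n in author_counts.items()}
    if principal is None:
        return {i: 2 * wi for i, wi in w.items()}   # no principal: double all
    w[principal] += 0.5
    return w
```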
Of course, authors may disagree with the weights. Since February 2012, authors can specify the weights themselves. In fact, any change in affiliation now requires the author to set these weights, with the hope that system-set weights will gradually fade out. In the end, these weights are supposed to better take courtesy appointments into account by giving them less weight, and to attribute authors to the locations where they mostly work.
Finally, we need to explain how the h-index is computed in the case of institutions. Remember that for authors, h is defined as the number of works with at least h citations. For institutions, we follow [21] and define the institutional h as the number of affiliated authors with an h-index of at least h. As h can only be an integer and the support of its distribution is even smaller than for authors, there are numerous ties. To break these ties, we adapt [22], who augment h by a rational number between zero and one, measuring the distance to the next h-index in terms of how many citations are required to reach it. In our case, we measure a similar distance, but we consider how many authors with appropriate h-indices are necessary to reach the next step. Note that for multiple affiliations, it is impossible to use the weights $w_i$ discussed above. The h of member authors is fully counted toward each institution.
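One possible reading of this tie-breaker, sketched minimally: the fractional part is the share of the authors needed at the next step who are already present. This is an interpretation for illustration, not necessarily the exact formula used.

```python
# Hypothetical sketch of the institutional h-index with a fractional
# tie-breaker: h authors with author-level h-index >= h, plus the
# fraction of the h+1 authors (with h-index >= h+1) already available.
def institutional_h(author_h_indices):
    hs = sorted(author_h_indices, reverse=True)
    h = 0
    while h < len(hs) and hs[h] >= h + 1:
        h += 1
    needed = h + 1                          # authors required for the next step
    have = sum(1 for x in hs if x >= needed)
    return h + have / needed
```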
A final note regarding institutions: due to the nature of these criteria, the measures of centrality in the co-authorship network, Close and Betweenness, cannot be computed.

7. Ranking of Geographic Regions

To rank geographic regions (countries, U.S. states), the same logic is used as for ranking institutions. All authors affiliated with institutions in a particular region are added to the pool of that region. However, authors with multiple affiliations have their scores split among all regions according to the weights discussed in the previous section.
For authors with affiliations not listed in EDIRC, the geographic location of their affiliation is guessed from the address of its web page. If it still cannot be found, the home page of the author and then the email address are used. Obviously, this can still fail, as addresses ending in .com, .net, .org, or .info are not geographically informative. We have at least tried to avoid errors.
Once all these attributions are made, we simply add up the scores, properly weighted. The only exceptions are, again, the h-index, where the same scheme as for institutions is used, and the closeness and betweenness indicators in the co-authorship network. Note that we do not calculate scores for the United States as a whole, as it would obviously be number one in every aspect. Rankings for every state are given, though.

8. Other Rankings

A wealth of data is available, and this allows us to establish various other rankings. A few examples are below, and more will be added once sufficient critical mass is present to display somewhat credible results.

8.1. Ranking within Geographic Regions

Once authors have been attributed to a particular region, it is easy to rank them within that region as well. The same applies to institutions within that region. Publishing rankings with very few entities or authors does not make much sense, though. For this reason, a minimum of five authors or five institutions need to be present. In some regions, there is little hope for authors to be listed, whatever their prestige, due to lack of participation by others in RePEc or, in small countries, due to the lack of economists. Therefore, rankings for regional conglomerates are presented as well, say the Mountain states in the United States, Central America and the Caribbean, or Africa.
Again, we need to mention authors with multiple affiliations here. If those span several geographic regions, their score is multiplied by the appropriate weight $w_i$ as computed above.
A ranking that uses a straight excerpt from the world rankings is also provided for information (take the world ranking, and pick those from the specific region in the same order). But this ranking can differ significantly from the regional ranking for several reasons: first, and as mentioned, authors with multiple affiliations across regions can only count part of their score toward a regional ranking; second, aggregate rankings are computed afresh within the region. This means that an author who is far ahead in the world ranking under some criterion (say, because of very high citation counts) is still ahead under the regional ranking, but not by as much. This can matter for the aggregation of ranks.
The same rules apply for ranking institutions within regions, where author scores (multiplied by relevant weights) are added. And in a similar way, regional rankings may differ from a regional extraction from the world rankings.

8.2. Ranking of Female Economists

Women are, unfortunately, quite underrepresented in the economics profession. It appears, from a limited investigation, that they are also underrepresented within RePEc. One can still try to make a meaningful ranking with data collected within RePEc. Unfortunately, an author registering with RePEc does not declare his or her gender. This needs to be inferred from the first and middle names using a name data bank. There are, however, several difficulties: some names may be used for both females and males, and this may vary by culture. Also, given the international nature of RePEc, there is an incredible diversity in first names.
The following rules are applied for gender attribution: if there is more than 90% confidence that the gender is correct, it is attributed. Ambiguous and unrecognized names are then manually entered into the exception tables—one for names that were not in the original tables, the other for case-by-case attributions. In the end, only 0.4% are left without a gender. Close to 18% are identified as female.
The ranking of female economists is performed solely among female economists, that is, without considering the gender-wide ranking: females are ranked within their group according to each criterion and then the rankings are aggregated. This means it is possible that the order of female economists when considering only gender may be different from the order of female economists among all economists, as also occurs in the regional rankings described above.

8.3. Ranking of Young Economists

It takes a long time for economists to make it into the top ranks; thus, it is of interest to compute rankings limited to young economists so that they have a chance of gaining some visibility. However, the RePEc Author Service does not collect data about birth date or graduation dates. As a proxy for age or professional experience, one can use the date of the first publication, whatever its form. It is commonplace to publish at least a working paper within a year of graduation, if not before finishing studies.
A small percentage of records in RePEc do not carry dates. Nothing can be done about this; we can only hope that those items are not the first works of some authors. For all others, the selection criterion is that the first work be within 5, 10, 15, or 20 years of the current year, counting whole years. Clearly, young economists have fewer papers and citations, so the rankings are much less stable past the top few, especially for the very youngest. For this reason, the ranking is limited to the top 200.
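As an illustration, a filter along these lines could select authors by the year of their earliest dated item; the record layout and function name are hypothetical.

```python
from datetime import date

def is_young(publication_years, window, today=None):
    """True if the earliest dated work falls within `window` whole years.

    `publication_years` lists the years of an author's dated works; items
    without dates are simply absent from the list, as in the text.
    """
    today = today or date.today()
    if not publication_years:
        return False  # no dated items: experience cannot be assessed
    return today.year - min(publication_years) <= window

# An author whose first working paper appeared in 2006 counts as "young"
# under the 10-year window on a 2012 snapshot, but not under the 5-year one.
print(is_young([2006, 2009], 10, date(2012, 7, 9)))  # True
print(is_young([2006, 2009], 5, date(2012, 7, 9)))   # False
```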

8.4. Ranking of Deceased Economists

When we learn about the death of an economist, the author is flagged as deceased and the profile continues to be maintained, as works may still be added (posthumous publications as well as late additions to the database). As noted above, deceased authors have no affiliations and are ranked alongside living authors.
As there are now about 150 deceased authors in the database, it has become possible to rank them. This does not serve any particular purpose, except that it can be done.

8.5. Ranking of Top-Level Institutions

Institution rankings are performed at the department, school, institute, or center level, that is, at whichever unit has a substantial number of economists. Some economists, however, are not affiliated with such units; they may sit, say, in a political science department, and in a few cases they are senior administrators of their university. In addition, many universities have economists dispersed across several independent units listed in EDIRC, for example, the departments of economics, agricultural economics, public policy, and finance. The strength of a university with many such units may not be properly reflected in rankings based on units. The same applies to the Federal Reserve System, whose constituent banks are treated as separate institutions.
For this purpose, a separate ranking is created for “top-level” institutions. For example, anyone affiliated with any unit at Harvard University is counted toward the university. The aggregation is based on EDIRC records; for authors without an EDIRC affiliation, the domain of the institutional webpage is used (authors can submit free-text affiliations, for which an institutional URL is required). The Federal Reserve Banks are aggregated as well. From there, the ranking is performed in the same way as the other institutional rankings.
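A sketch of this aggregation rule follows, under the assumption that each affiliation record carries either an EDIRC handle or a free-text URL; the handle-to-parent mapping and record layout are illustrative, not RePEc's actual tables.

```python
from urllib.parse import urlparse

# Hypothetical mapping from EDIRC unit handles to a top-level institution.
EDIRC_PARENT = {"ecoharus": "harvard.edu", "ksgharus": "harvard.edu"}

def top_level_key(affiliation):
    """Collapse a unit-level affiliation into a top-level institution key."""
    if affiliation.get("edirc"):
        return EDIRC_PARENT.get(affiliation["edirc"], affiliation["edirc"])
    # Free-text affiliation: fall back to the domain of the required URL.
    host = urlparse(affiliation["url"]).hostname or ""
    return ".".join(host.split(".")[-2:])  # crude domain root, e.g. harvard.edu

print(top_level_key({"edirc": "ksgharus"}))                     # harvard.edu
print(top_level_key({"edirc": None,
                     "url": "http://www.frbsf.org/economists"}))  # frbsf.org
```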

8.6. Ranking within Fields

When registering, authors do not declare a field of research. It is therefore difficult to classify them by field, although one could try to infer fields from the JEL codes attached to their papers. However, as it is customary to put several JEL codes on each paper, and as only about 20% of all papers carry such codes, inferred field attributions would not be reliable. Instead, authors can be attributed to fields using data collected by the NEP project [5].
NEP disseminates new working papers by email. At the time of writing, there are 91 field-specific NEP reports, each managed by an editor who selects, from all new papers, those that fit within her field. These assignments are used to classify authors: an author with 75% of his NEP-announced papers in field A would have 75% of his score counted toward his ranking in that field, as sketched below. To be ranked in a field, a minimum threshold of 5 papers or a 25% share is required. As a paper can be announced in several NEP fields, an author's attributions may add up to more than 100%.
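The share computation can be sketched as follows; the thresholds match the text, while the field codes and data layout are hypothetical.

```python
def field_shares(nep_counts, total_announced):
    """Map NEP field -> share of an author's announced papers in that field.

    Because one paper can be announced in several fields, shares may add up
    to more than 100%.
    """
    return {f: n / total_announced for f, n in nep_counts.items()}

def ranked_fields(nep_counts, total_announced, min_papers=5, min_share=0.25):
    """Fields in which the author is ranked: at least 5 papers or a 25% share."""
    shares = field_shares(nep_counts, total_announced)
    return {f: s for f, s in shares.items()
            if nep_counts[f] >= min_papers or s >= min_share}

# 12 announced papers: 9 in 'dge', 2 in 'lab', 6 in 'mac' (overlaps allowed).
print(ranked_fields({"dge": 9, "lab": 2, "mac": 6}, 12))
# {'dge': 0.75, 'mac': 0.5} -- 'lab' misses both thresholds (2 < 5, 17% < 25%)
```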
To rank institutions within fields, the scores of affiliated authors are added, using the appropriate field and affiliation weights. No minimum threshold is applied, the rationale being that institutions are expected to have much more diverse expertise than individuals.
In addition, the field codes in EDIRC can be used for institutions. For example, institutions working in agricultural economics or finance are well identified. Certain institution types are also well documented: central banks, think tanks, international organizations. For others, patterns in their names (or their English translations) are used; this is the case for economics departments and business schools. Separate rankings are released for all of them, including for U.S. economics departments. Note that for economics departments, an effort is made to remove misfits and to add those missed by the automatic categorization.
Note that, as for regional rankings, ranking points are computed within the set of admissible authors or institutions and can thus differ from an excerpt of the world rankings, as for the other subset rankings described above.

9. A Glimpse at Results

We do not want to give detailed rankings here; they are constantly updated and available at http://ideas.repec.org/top/. In what follows, we compare the various criteria and aggregation methods using a snapshot of the data taken on July 9, 2012, with 32,731 registered authors affiliated with 5,825 institutions.

9.1. Impact Factors

How do the impact factors compare? Table 1 provides a summary with rank correlations. All of them are very high. This is quite natural as series with many citations ought also to be cited by series with high impact factors. Overall, it does not seem to matter which criterion is used when it comes to ranking series or journals.
Of particular interest here is the impact of journal articles relative to working papers. Table 2 shows that there is no clear winner, which may surprise many. Keep in mind that some journals have very low impact factors, while some working paper series have impact factors superior to those of most journals. Note also, as explained in the previous sections, that if the article version of a paper is cited, the citation counts toward both versions; these numbers therefore do not reflect where the citing author found the reference.
Table 1. Rank correlations of series. In each panel, columns follow the row order: simple factors, recursive factors, discounted factors, recursive discounted factors, h-index, abstract views, downloads.

Impact factors, all series:
Simple factors                 1     .956  .985  .951  .03   .456  .381
Recursive factors              .956  1     .95   .993  .022  .506  .424
Discounted factors             .985  .95   1     .96   .031  .475  .419
Recursive discounted factors   .951  .993  .96   1     .023  .508  .44
H-index                        .03   .022  .031  .023  1     -.03  -.024
Abstract views                 .456  .506  .475  .508  -.03  1     .904
Downloads                      .381  .424  .419  .44   -.024 .904  1

Impact factors, journals:
Simple factors                 1     .968  .99   .966  -.014 .55   .557
Recursive factors              .968  1     .954  .994  -.017 .533  .535
Discounted factors             .99   .954  1     .965  -.013 .531  .547
Recursive discounted factors   .966  .994  .965  1     -.016 .515  .524
H-index                        -.014 -.017 -.013 -.016 1     -.037 -.03
Abstract views                 .55   .533  .531  .515  -.037 1     .929
Downloads                      .557  .535  .547  .524  -.03  .929  1

Impact factors, working paper series:
Simple factors                 1     .954  .98   .948  .027  .508  .369
Recursive factors              .954  1     .947  .992  .021  .572  .424
Discounted factors             .98   .947  1     .961  .028  .543  .433
Recursive discounted factors   .948  .992  .961  1     .023  .585  .457
H-index                        .027  .021  .028  .023  1     -.024 -.015
Abstract views                 .508  .572  .543  .585  -.024 1     .887
Downloads                      .369  .424  .433  .457  -.015 .887  1

Impact factors w/ 50 items, all series:
Simple factors                 1     .963  .985  .957  -.056 .367  .341
Recursive factors              .963  1     .951  .991  -.065 .359  .335
Discounted factors             .985  .951  1     .964  -.051 .366  .364
Recursive discounted factors   .957  .991  .964  1     -.062 .352  .345
H-index                        -.056 -.065 -.051 -.062 1     -.031 -.023
Abstract views                 .367  .359  .366  .352  -.031 1     .903
Downloads                      .341  .335  .364  .345  -.023 .903  1

Impact factors w/ 50 items, journals:
Simple factors                 1     .957  .988  .956  -.06  .477  .504
Recursive factors              .957  1     .94   .992  -.067 .448  .471
Discounted factors             .988  .94   1     .954  -.061 .45   .489
Recursive discounted factors   .956  .992  .954  1     -.069 .422  .455
H-index                        -.06  -.067 -.061 -.069 1     -.039 -.033
Abstract views                 .477  .448  .45   .422  -.039 1     .917
Downloads                      .504  .471  .489  .455  -.033 .917  1

Impact factors w/ 50 items, working paper series:
Simple factors                 1     .963  .977  .952  -.057 .372  .258
Recursive factors              .963  1     .949  .988  -.065 .379  .272
Discounted factors             .977  .949  1     .966  -.048 .398  .322
Recursive discounted factors   .952  .988  .966  1     -.057 .394  .312
H-index                        -.057 -.065 -.048 -.057 1     -.017 -.010
Abstract views                 .372  .379  .398  .394  -.017 1     .905
Downloads                      .258  .272  .322  .312  -.010 .905  1

9.2. Works

How do the various rankings of individual works compare? Taking all articles and papers ranked in the top 500 in any of the six categories on February 23, 2009, and narrowing them down to those listed in all six categories, we obtain a sample of 416 items. That 83% of the top 500 according to one criterion are listed under all other criteria is already an indication of high correlation. Within this (rather small) sample, the rank correlations are still fairly high, averaging 0.647 (Table 3). Rank correlations over the whole sample would be much larger, as demonstrated in other contexts below, but are much more difficult to compute for technical reasons.

9.3. Authors

We have 34 different ways to rank authors;9 to compare how differently they perform, we thus need to look at 1,122 correlations ($34^2 - 34$). Table 4 reports them, along with correlations with the harmonic average of ranks (excluding extreme ranks), which is used as the default for aggregate rankings. While all these numbers can be overwhelming, the following can be extracted: the average correlation stands at 0.822 and varies between 0.561 and 0.997. The table groups the criteria into categories (number of works, citations, measures derived from citations, article pages, visibility on RePEc); not surprisingly, correlations within these categories tend to be higher than across categories. It is more interesting to see where criteria differ most: article pages versus co-authorship centrality on RePEc, with an average correlation of 0.646. This does not mean that they are orthogonal, though; 0.646 is still a significant correlation. But it is revealing that publishing in journals, or even in good journals more specifically, has relatively little to do with how connected people are to each other. Looking at how the various criteria correlate with the harmonic mean, correlations are again relatively high. No criterion stands out with a particularly high correlation, an indication that the aggregate ranking cannot easily be summarized by a single criterion. The criteria based on co-authorship networks score lower than the others, however.
Speaking of the significance of these correlations, a $\chi^2$ statistic allows one to measure how independent the criteria are from each other. Here,
\[
\chi^2_{n-1} = (n-1)\left((p-1)\,\bar{r} + 1\right),
\]
where $p = 34$ is the number of criteria,10 $\bar{r} = 0.822$ is the average correlation, and $n = 32{,}731$ is the number of authors, giving $\chi^2_{32,730} = 855{,}104$. To be significant at 5%, the statistic would need to be below 32,310. Therefore, we easily reject the null hypothesis that the criteria are independent.
One might expect correlations to be higher across the aggregate ranking criteria than across the individual ranking criteria. This turns out to be wrong; see Table 5. They average 0.771, with a minimum of −0.112 and a maximum of 1. This is because the “percent” aggregator is not well defined for the Close criterion, where a smaller number is better.11 Excluding the best and worst criterion for each author makes a significant difference to the overall picture, however, as the Close criterion is then excluded for most authors in the “percent” aggregation but has little impact otherwise. Indeed, experience shows that the exclusion of extremes can alter the rankings at the very top for a few authors with a large variance in ranks across criteria, but it does relatively little for the others. The exception is the “percent” aggregation, where a strong lead in one category can cause a drastic drop in ranking when that category is excluded. It is also remarkable that the harmonic, arithmetic, and geometric aggregation methods are all very close to each other.
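For concreteness, here is one way the harmonic aggregation with outlier exclusion could be computed over an author's vector of ranks; this is a sketch under stated assumptions, not RePEc's production code.

```python
from statistics import harmonic_mean

def aggregate_rank_score(ranks, exclude_extremes=True):
    """Harmonic average of an author's ranks across criteria.

    Dropping each author's single best and single worst rank mirrors the
    exclusion of extremes described in the text; with ranks, a smaller
    harmonic mean is better.
    """
    ranks = sorted(ranks)
    if exclude_extremes and len(ranks) > 2:
        ranks = ranks[1:-1]  # drop the best and the worst rank
    return harmonic_mean(ranks)

# A single outstanding rank (1st place) pulls the harmonic mean down
# sharply; excluding extremes tempers that effect.
print(round(aggregate_rank_score([1, 40, 45, 50, 900], False), 1))  # 4.7
print(round(aggregate_rank_score([1, 40, 45, 50, 900]), 1))         # 44.6
```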
Table 2. Average impact factors.

                               Papers  Journals
Simple factors                 3.67    2.79
Recursive factors              0.27    0.19
Discounted simple factors      0.86    0.61
Discounted recursive factors   0.31    0.19
Table 3. Rank correlations of scores for top items by criteria. Columns follow the row order.

Number of citations            1     .909  .465  .895  .831  .429
Simple factors                 .909  1     .543  .783  .883  .432
Recursive factors              .465  .543  1     .416  .483  .598
Discounted citations           .895  .783  .416  1     .867  .508
Discounted simple factors      .831  .883  .483  .867  1     .564
Discounted recursive factors   .429  .432  .598  .508  .564  1
Table 4. Rank correlations across criteria for authors, full sample. Columns follow the row order: NbWorks, DNbWorks, ScWorks, WScWorks, ANbWorks, AScWorks, AWScWorks, NbCites, DCites, ScCites, DScCites, WScCites, WDScCites, ANbCites, ADCites, AScCites, ADScCites, AWScCites, AWDScCites, HIndex, NCAuthors, RCAuthors, NbPages, ScPages, WScPages, ANbPages, AScPages, AWScPages, AbsViews, Downloads, AAbsViews, ADownloads, Close, Betweenn, Harmonic.

NbWorks      1 .986 .864 .808 .962 .859 .802 .832 .817 .792 .784 .778 .772 .828 .816 .791 .784 .777 .772 .822 .822 .814 .858 .803 .776 .830 .794 .769 .903 .853 .878 .835 .716 .752 .909
DNbWorks     .986 1 .829 .766 .978 .831 .767 .808 .789 .759 .748 .743 .734 .810 .794 .763 .753 .746 .738 .800 .793 .784 .865 .784 .751 .844 .780 .748 .887 .833 .883 .835 .671 .740 .888
ScWorks      .864 .829 1 .984 .810 .990 .977 .886 .875 .899 .891 .897 .890 .878 .871 .896 .889 .894 .888 .867 .889 .890 .784 .894 .891 .750 .883 .883 .794 .768 .755 .736 .774 .678 .945
WScWorks     .808 .766 .984 1 .750 .974 .992 .847 .839 .878 .871 .884 .877 .838 .833 .874 .869 .881 .875 .825 .856 .860 .730 .862 .874 .697 .850 .866 .745 .725 .702 .691 .752 .637 .910
ANbWorks     .962 .978 .810 .750 1 .835 .770 .779 .758 .733 .719 .718 .706 .798 .779 .749 .737 .732 .722 .768 .758 .750 .860 .777 .746 .862 .787 .754 .854 .796 .889 .833 .596 .687 .869
AScWorks     .859 .831 .990 .974 .835 1 .983 .871 .857 .882 .871 .880 .870 .876 .866 .889 .880 .886 .878 .851 .869 .870 .786 .891 .887 .769 .891 .889 .781 .751 .768 .744 .723 .652 .939
AWScWorks    .802 .767 .977 .992 .770 .983 1 .833 .823 .864 .855 .870 .861 .836 .828 .869 .861 .874 .867 .812 .838 .842 .731 .859 .870 .710 .857 .870 .732 .710 .712 .697 .708 .614 .904
NbCites      .832 .808 .886 .847 .779 .871 .833 1 .982 .974 .963 .959 .950 .991 .977 .969 .96 .954 .948 .956 .981 .978 .802 .863 .845 .765 .848 .834 .789 .756 .752 .727 .758 .697 .946
DCites       .817 .789 .875 .839 .758 .857 .823 .982 1 .961 .976 .947 .962 .970 .991 .954 .970 .941 .957 .937 .971 .968 .784 .850 .834 .744 .833 .821 .785 .761 .742 .726 .759 .693 .936
ScCites      .792 .759 .899 .878 .733 .882 .864 .974 .961 1 .988 .996 .987 .963 .953 .994 .985 .992 .983 .927 .972 .976 .768 .866 .861 .731 .851 .850 .749 .722 .708 .689 .762 .664 .939
DScCites     .784 .748 .891 .871 .719 .871 .855 .963 .976 .988 1 .985 .997 .949 .965 .980 .994 .978 .991 .919 .966 .969 .755 .856 .852 .716 .839 .839 .752 .731 .704 .692 .766 .664 .934
WScCites     .778 .743 .897 .884 .718 .880 .870 .959 .947 .996 .985 1 .990 .948 .939 .991 .982 .995 .987 .915 .961 .966 .755 .861 .861 .718 .846 .850 .736 .711 .694 .676 .756 .651 .932
WDScCites    .772 .734 .890 .877 .706 .870 .861 .950 .962 .987 .997 .990 1 .937 .951 .979 .991 .983 .995 .910 .957 .961 .744 .852 .853 .705 .835 .840 .739 .719 .692 .680 .760 .651 .928
ANbCites     .828 .810 .878 .838 .798 .876 .836 .991 .970 .963 .949 .948 .937 1 .982 .972 .960 .955 .946 .947 .966 .963 .809 .863 .845 .786 .859 .842 .777 .742 .762 .734 .715 .675 .943
ADCites      .816 .794 .871 .833 .779 .866 .828 .977 .991 .953 .965 .939 .951 .982 1 .960 .974 .945 .959 .934 .959 .956 .793 .853 .836 .767 .847 .832 .778 .750 .756 .736 .720 .674 .937
AScCites     .791 .763 .896 .874 .749 .889 .869 .969 .954 .994 .980 .991 .979 .972 .960 1 .988 .996 .986 .925 .963 .967 .774 .869 .863 .748 .861 .858 .743 .713 .717 .695 .732 .650 .940
ADScCites    .784 .753 .889 .869 .737 .880 .861 .960 .970 .985 .994 .982 .991 .960 .974 .988 1 .985 .997 .918 .959 .962 .764 .861 .856 .735 .851 .849 .747 .724 .716 .700 .736 .649 .936
AWScCites    .777 .746 .894 .881 .732 .886 .874 .954 .941 .992 .978 .995 .983 .955 .945 .996 .985 1 .990 .913 .953 .958 .761 .864 .863 .734 .855 .858 .731 .703 .702 .682 .729 .638 .933
AWDScCites   .772 .738 .888 .875 .722 .878 .867 .948 .957 .983 .991 .987 .995 .946 .959 .986 .997 .990 1 .909 .950 .955 .752 .856 .856 .722 .847 .850 .734 .713 .702 .688 .733 .638 .931
HIndex       .822 .800 .867 .825 .768 .851 .812 .956 .937 .927 .919 .915 .910 .947 .934 .925 .918 .913 .909 1 .933 .925 .781 .834 .818 .745 .822 .808 .773 .739 .735 .710 .733 .671 .928
NCAuthors    .822 .793 .889 .856 .758 .869 .838 .981 .971 .972 .966 .961 .957 .966 .959 .963 .959 .953 .950 .933 1 .997 .788 .860 .847 .746 .842 .834 .787 .761 .739 .722 .781 .706 .938
RCAuthors    .814 .784 .890 .860 .750 .870 .842 .978 .968 .976 .969 .966 .961 .963 .956 .967 .962 .958 .955 .925 .997 1 .783 .862 .851 .741 .844 .837 .777 .751 .730 .713 .783 .700 .935
NbPages      .858 .865 .784 .730 .860 .786 .731 .802 .784 .768 .755 .755 .744 .809 .793 .774 .764 .761 .752 .781 .788 .783 1 .896 .862 .985 .897 .863 .758 .706 .761 .712 .626 .687 .858
ScPages      .803 .784 .894 .862 .777 .891 .859 .863 .850 .866 .856 .861 .852 .863 .853 .869 .861 .864 .856 .834 .860 .862 .896 1 .992 .870 .994 .988 .721 .687 .703 .674 .697 .654 .906
WScPages     .776 .751 .891 .874 .746 .887 .870 .845 .834 .861 .852 .861 .853 .845 .836 .863 .856 .863 .856 .818 .847 .851 .862 .992 1 .836 .984 .996 .695 .663 .674 .648 .690 .629 .893
ANbPages     .830 .844 .750 .697 .862 .769 .710 .765 .744 .731 .716 .718 .705 .786 .767 .748 .735 .734 .722 .745 .746 .741 .985 .870 .836 1 .888 .850 .723 .668 .752 .699 .561 .639 .830
AScPages     .794 .780 .883 .850 .787 .891 .857 .848 .833 .851 .839 .846 .835 .859 .847 .861 .851 .855 .847 .822 .842 .844 .897 .994 .984 .888 1 .991 .707 .671 .705 .673 .662 .631 .898
AWScPages    .769 .748 .883 .866 .754 .889 .870 .834 .821 .850 .839 .850 .840 .842 .832 .858 .849 .858 .850 .808 .834 .837 .863 .988 .996 .850 .991 1 .684 .651 .675 .647 .662 .611 .888
AbsViews     .903 .887 .794 .745 .854 .781 .732 .789 .785 .749 .752 .736 .739 .777 .778 .743 .747 .731 .734 .773 .787 .777 .758 .721 .695 .723 .707 .684 1 .957 .966 .933 .677 .711 .871
Downloads    .853 .833 .768 .725 .796 .751 .710 .756 .761 .722 .731 .711 .719 .742 .750 .713 .724 .703 .713 .739 .761 .751 .706 .687 .663 .668 .671 .651 .957 1 .912 .965 .671 .692 .838
AAbsViews    .878 .883 .755 .702 .889 .768 .712 .752 .742 .708 .704 .694 .692 .762 .756 .717 .716 .702 .702 .735 .739 .730 .761 .703 .674 .752 .705 .675 .966 .912 1 .951 .579 .655 .847
ADownloads   .835 .835 .736 .691 .833 .744 .697 .727 .726 .689 .692 .676 .680 .734 .736 .695 .700 .682 .688 .710 .722 .713 .712 .674 .648 .699 .673 .647 .933 .965 .951 1 .582 .643 .821
Close        .716 .671 .774 .752 .596 .723 .708 .758 .759 .762 .766 .756 .760 .715 .720 .732 .736 .729 .733 .733 .781 .783 .626 .697 .690 .561 .662 .662 .677 .671 .579 .582 1 .750 .757
Betweenn     .752 .740 .678 .637 .687 .652 .614 .697 .693 .664 .664 .651 .651 .675 .674 .650 .649 .638 .638 .671 .706 .700 .687 .654 .629 .639 .631 .611 .711 .692 .655 .643 .750 1 .710
Harmonic     .909 .888 .945 .910 .869 .939 .904 .946 .936 .939 .934 .932 .928 .943 .937 .940 .936 .933 .931 .928 .938 .935 .858 .906 .893 .830 .898 .888 .871 .838 .847 .821 .757 .710 1
Table 5. Rank correlations across aggregate criteria for authors, full sample. Rows and columns pair each aggregation method (harmonic, arithmetic, geometric, lexicographic, graphicolexic, percent) with whether outliers are excluded (no/yes); columns follow the row order.

harmonic, no         1      1      .9910  .9912  .9963  .9964  .9568  .9568  .9140  .9140  -.0872  .9129
harmonic, yes        1      1      .9909  .9911  .9963  .9964  .9570  .9570  .9132  .9132  -.0871  .9122
arithmetic, no       .9910  .9909  1      1      .9983  .9982  .9247  .9247  .9369  .9369  -.1038  .9358
arithmetic, yes      .9912  .9911  1      1      .9984  .9983  .9250  .9250  .9352  .9352  -.1034  .9341
geometric, no        .9963  .9963  .9983  .9984  1      1      .9385  .9385  .9271  .9271  -.0961  .9260
geometric, yes       .9964  .9964  .9982  .9983  1      1      .9387  .9387  .9259  .9259  -.0959  .9248
lexicographic, no    .9568  .9570  .9247  .9250  .9385  .9387  1      1      .8459  .8459  -.0915  .8449
lexicographic, yes   .9568  .9570  .9247  .9250  .9385  .9387  1      1      .8459  .8459  -.0916  .8449
graphicolexic, no    .9140  .9132  .9369  .9352  .9271  .9259  .8459  .8459  1      1      -.1112  .9999
graphicolexic, yes   .9140  .9132  .9369  .9352  .9271  .9259  .8459  .8459  1      1      -.1112  .9999
percent, no          -.0872 -.0871 -.1038 -.1034 -.0961 -.0959 -.0915 -.0916 -.1112 -.1112 1       -.1046
percent, yes         .9129  .9122  .9358  .9341  .9260  .9248  .8449  .8449  .9999  .9999  -.1046  1

9.4. Institutions

The concordance of rankings across institutions is higher than that across authors, both for individual criteria and for aggregate criteria;12 see Table 6 and Table 7. Looking at the individual correlations, the patterns also differ somewhat from those for authors. For example, the h-index and citing-author rankings typically correlate less, while page counts and RePEc visibility correlate more. And of course, the “percent” aggregation correlates much more. Comparing the individual criteria with their harmonic average, correlations are significantly lower than for authors, and the h-index stands out with a higher correlation than the others.13 Keep in mind that a correlation is a linear concept while the harmonic mean is not, the more so when the extreme values are removed. One thus cannot conclude from these rather low correlations alone how much each criterion contributes information to the aggregate.

10. Comparison with Other Ranking Methodologies

The goal of this section is not to compare the impact factors or rankings obtained by RePEc with those of other exercises.14 It is rather to highlight some of the conceptual differences: what RePEc may miss and what others may miss.

10.1. What RePEc Can Do and Others Cannot

The rankings described above make use of the many facets of the data collected within the RePEc project. Some of them are quite unique, which certainly gives these rankings some added value when compared with existing rankings:
Table 6. Rank correlations across criteria for institutions, full sample. Columns follow the row order: NbWorks, DNbWorks, ScWorks, WScWorks, ANbWorks, AScWorks, AWScWorks, NbCites, DCites, ScCites, DScCites, WScCites, WDScCites, ANbCites, ADCites, AScCites, ADScCites, AWScCites, AWDScCites, HIndex, NCAuthors, RCAuthors, NbPages, ScPages, WScPages, ANbPages, AScPages, AWScPages, AbsViews, Downloads, AAbsViews, ADownloads, Harmonic.

NbWorks      1 .992 .856 .807 .991 .841 .792 .784 .801 .686 .730 .667 .711 .751 .786 .671 .714 .653 .694 .768 .816 .808 .966 .815 .762 .956 .797 .744 .937 .920 .921 .903 .524
DNbWorks     .992 1 .817 .764 .996 .804 .750 .748 .765 .644 .688 .624 .667 .716 .752 .630 .673 .611 .653 .761 .782 .774 .964 .779 .722 .956 .763 .706 .923 .903 .914 .892 .538
ScWorks      .856 .817 1 .994 .846 .997 .990 .968 .973 .940 .959 .932 .951 .957 .968 .931 .952 .922 .944 .725 .980 .980 .887 .986 .976 .889 .978 .967 .921 .922 .915 .920 .405
WScWorks     .807 .764 .994 1 .796 .993 .997 .968 .971 .959 .972 .955 .969 .962 .968 .951 .966 .945 .962 .697 .976 .978 .846 .984 .985 .849 .978 .977 .889 .893 .884 .893 .378
ANbWorks     .991 .996 .846 .796 1 .837 .787 .777 .791 .679 .720 .660 .700 .749 .781 .668 .708 .650 .689 .770 .808 .801 .969 .810 .757 .967 .797 .745 .930 .912 .927 .906 .538
AScWorks     .841 .804 .997 .993 .837 1 .994 .967 .968 .943 .957 .935 .950 .961 .969 .938 .955 .929 .947 .720 .977 .977 .878 .987 .979 .884 .984 .975 .910 .910 .910 .914 .404
AWScWorks    .792 .750 .990 .997 .787 .994 1 .965 .964 .960 .968 .955 .965 .963 .967 .955 .967 .950 .963 .691 .971 .974 .835 .983 .987 .843 .982 .984 .877 .879 .878 .886 .375
NbCites      .784 .748 .968 .968 .777 .967 .965 1 .993 .977 .986 .969 .980 .990 .993 .971 .982 .963 .976 .679 .991 .992 .831 .971 .971 .834 .966 .964 .900 .908 .900 .914 .365
DCites       .801 .765 .973 .971 .791 .968 .964 .993 1 .975 .989 .966 .981 .988 .996 .966 .982 .957 .974 .684 .995 .994 .841 .969 .968 .841 .96 .957 .915 .925 .911 .926 .368
ScCites      .686 .644 .940 .959 .679 .943 .960 .977 .975 1 .995 .999 .996 .986 .978 .996 .995 .994 .995 .613 .967 .970 .742 .949 .970 .748 .947 .965 .831 .842 .834 .853 .303
DScCites     .730 .688 .959 .972 .720 .957 .968 .986 .989 .995 1 .991 .999 .987 .987 .987 .996 .984 .994 .642 .982 .984 .779 .962 .975 .783 .956 .967 .864 .876 .861 .880 .325
WScCites     .667 .624 .932 .955 .660 .935 .955 .969 .966 .999 .991 1 .995 .980 .970 .995 .992 .996 .995 .601 .958 .961 .725 .942 .967 .732 .941 .963 .814 .826 .818 .838 .295
WDScCites    .711 .667 .951 .969 .700 .950 .965 .980 .981 .996 .999 .995 1 .982 .980 .989 .995 .987 .996 .631 .975 .977 .762 .956 .973 .766 .951 .965 .847 .860 .845 .865 .317
ANbCites     .751 .716 .957 .962 .749 .961 .963 .990 .988 .986 .987 .980 .982 1 .996 .988 .992 .981 .987 .656 .984 .985 .804 .965 .972 .812 .965 .970 .878 .886 .886 .901 .348
ADCites      .786 .752 .968 .968 .781 .969 .967 .993 .996 .978 .987 .970 .980 .996 1 .976 .989 .968 .982 .678 .992 .992 .833 .971 .972 .838 .968 .966 .903 .912 .906 .922 .368
AScCites     .671 .630 .931 .951 .668 .938 .955 .971 .966 .996 .987 .995 .989 .988 .976 1 .995 .999 .996 .604 .959 .962 .732 .945 .967 .742 .948 .967 .817 .827 .826 .845 .301
ADScCites    .714 .673 .952 .966 .708 .955 .967 .982 .982 .995 .996 .992 .995 .992 .989 .995 1 .992 .999 .634 .976 .979 .769 .961 .976 .776 .960 .973 .850 .861 .855 .873 .323
AWScCites    .653 .611 .922 .945 .650 .929 .950 .963 .957 .994 .984 .996 .987 .981 .968 .999 .992 1 .995 .592 .950 .954 .715 .937 .963 .725 .941 .964 .800 .811 .810 .830 .292
AWDScCites   .694 .653 .944 .962 .689 .947 .963 .976 .974 .995 .994 .995 .996 .987 .982 .996 .999 .995 1 .622 .969 .972 .752 .954 .973 .759 .954 .971 .833 .845 .839 .858 .314
HIndex       .768 .761 .725 .697 .770 .720 .691 .679 .684 .613 .642 .601 .631 .656 .678 .604 .634 .592 .622 1 .700 .697 .777 .709 .675 .780 .704 .670 .749 .737 .744 .732 .757
NCAuthors    .816 .782 .980 .976 .808 .977 .971 .991 .995 .967 .982 .958 .975 .984 .992 .959 .976 .950 .969 .700 1 1 .856 .975 .970 .858 .967 .960 .920 .930 .917 .932 .385
RCAuthors    .808 .774 .980 .978 .801 .977 .974 .992 .994 .970 .984 .961 .977 .985 .992 .962 .979 .954 .972 .697 1 1 .851 .976 .973 .853 .969 .964 .914 .924 .912 .927 .381
NbPages      .966 .964 .887 .846 .969 .878 .835 .831 .841 .742 .779 .725 .762 .804 .833 .732 .769 .715 .752 .777 .856 .851 1 .876 .826 .996 .864 .814 .933 .920 .927 .914 .532
ScPages      .815 .779 .986 .984 .810 .987 .983 .971 .969 .949 .962 .942 .956 .965 .971 .945 .961 .937 .954 .709 .975 .976 .876 1 .993 .881 .997 .988 .892 .893 .893 .899 .400
WScPages     .762 .722 .976 .985 .757 .979 .987 .971 .968 .970 .975 .967 .973 .972 .972 .967 .976 .963 .973 .675 .970 .973 .826 .993 1 .834 .992 .997 .859 .862 .862 .871 .361
ANbPages     .956 .956 .889 .849 .967 .884 .843 .834 .841 .748 .783 .732 .766 .812 .838 .742 .776 .725 .759 .780 .858 .853 .996 .881 .834 1 .875 .827 .927 .913 .928 .914 .541
AScPages     .797 .763 .978 .978 .797 .984 .982 .966 .960 .947 .956 .941 .951 .965 .968 .948 .960 .941 .954 .704 .967 .969 .864 .997 .992 .875 1 .994 .877 .877 .883 .889 .4
AWScPages    .744 .706 .967 .977 .745 .975 .984 .964 .957 .965 .967 .963 .965 .970 .966 .967 .973 .964 .971 .670 .960 .964 .814 .988 .997 .827 .994 1 .843 .845 .852 .860 .361
AbsViews     .937 .923 .921 .889 .930 .910 .877 .900 .915 .831 .864 .814 .847 .878 .903 .817 .850 .800 .833 .749 .920 .914 .933 .892 .859 .927 .877 .843 1 .994 .993 .988 .48
Downloads    .920 .903 .922 .893 .912 .910 .879 .908 .925 .842 .876 .826 .860 .886 .912 .827 .861 .811 .845 .737 .930 .924 .920 .893 .862 .913 .877 .845 .994 1 .986 .992 .464
AAbsViews    .921 .914 .915 .884 .927 .910 .878 .900 .911 .834 .861 .818 .845 .886 .906 .826 .855 .810 .839 .744 .917 .912 .927 .893 .862 .928 .883 .852 .993 .986 1 .994 .486
ADownloads   .903 .892 .920 .893 .906 .914 .886 .914 .926 .853 .880 .838 .865 .901 .922 .845 .873 .830 .858 .732 .932 .927 .914 .899 .871 .914 .889 .860 .988 .992 .994 1 .469
Harmonic     .524 .538 .405 .378 .538 .404 .375 .365 .368 .303 .325 .295 .317 .348 .368 .301 .323 .292 .314 .757 .385 .381 .532 .400 .361 .541 .4 .361 .48 .464 .486 .469 1
Table 7. Rank correlations across aggregate criteria for institutions, full sample. Rows and columns pair each aggregation method (harmonic, arithmetic, geometric, lexicographic, graphicolexic, percent) with whether outliers are excluded (no/yes); columns follow the row order.

harmonic, no         1      .9992  .6096  .6105  .7063  .7038  .8849  .8849  .5241  .5241  .5254  .5423
harmonic, yes        .9992  1      .6304  .6315  .7272  .7250  .8966  .8966  .5437  .5437  .5462  .5637
arithmetic, no       .6096  .6304  1      .9998  .9602  .9589  .7533  .7533  .9165  .9165  .9481  .9668
arithmetic, yes      .6105  .6315  .9998  1      .9630  .9619  .7561  .7561  .9138  .9138  .9490  .9677
geometric, no        .7063  .7272  .9602  .9630  1      .9998  .8559  .8559  .8599  .8599  .9048  .9241
geometric, yes       .7038  .7250  .9589  .9619  .9998  1      .8557  .8557  .8579  .8579  .9046  .9239
lexicographic, no    .8849  .8966  .7533  .7561  .8559  .8557  1      1      .6551  .6551  .6904  .7093
lexicographic, yes   .8849  .8966  .7533  .7561  .8559  .8557  1      1      .6551  .6551  .6904  .7093
graphicolexic, no    .5241  .5437  .9165  .9138  .8599  .8579  .6551  .6551  1      1      .8593  .8774
graphicolexic, yes   .5241  .5437  .9165  .9138  .8599  .8579  .6551  .6551  1      1      .8593  .8774
percent, no          .5254  .5462  .9481  .9490  .9048  .9046  .6904  .6904  .8593  .8593  1      .9959
percent, yes         .5423  .5637  .9668  .9677  .9241  .9239  .7093  .7093  .8774  .8774  .9959  1
  • Timeliness: The data in RePEc are constantly updated and the results are continuously refreshed on its websites. For example, a working paper or article is typically listed within 24 hours of the publisher indexing it, its citation analysis is released within a month, and its downloads are continuously monitored.
  • Current Affiliations: Rankings of institutions reflect the current affiliations of authors and can incorporate an author's move from one affiliation to another within a month. Other counts typically consider only the affiliation at the time of publication.
  • Pre-Publications: Established citation aggregators typically consider only citations from journal articles to journal articles. Even the set of journals considered is often severely limited. There are no such restrictions in RePEc. In fact, working papers are a very important means of dissemination in economics (and RePEc may have contributed to this) that should not be neglected. Note that analyzing working papers also significantly contributes to the timeliness of rankings.
  • Certainty about Authorship: Given that authors acknowledge which works they have authored when they maintain their RePEc profiles, one big issue in ranking authors is resolved: name ambiguity. Indeed, many publications provide only the initial of the first name, and there are homonyms in the profession. The use of RePEc data leaves no doubt.
  • New Ranking Criteria: Because authors build profiles in RePEc, it is possible to reliably count how many distinct authors cite a particular author. We do not know of the use of the NCAuthors, RCAuthors, Close, and Betweenn criteria elsewhere. The same applies to the h-index for journals, series, and institutions.

10.2. What RePEc Cannot Do

There is very little human intervention in anything that RePEc does. Various aspects of other ranking analyses thus cannot be replicated here:
  • Errors: Citation analysis is based heavily on automatic reference extraction from texts and pattern matching of titles. Errors can obviously happen, probably more so than with analysis by humans. The most important case is when a list of other working papers in a particular series is printed on the last page of a paper and this list is interpreted as a continuation of the references. This is adjusted when reported, but affected authors have little incentive to report it. Authors can now also remove citations that are not accurate.
  • Adjustments: Criteria based on page counts could be adjusted for page size or average word count in order to truly reflect the length of an article. RePEc does not do this, as the process is completely automated.
  • Stable impact factors: Due to the constant adjustments in RePEc, impact factors change frequently, within bounds. This can make the use of such factors difficult for third parties.
  • Comprehensiveness: Some important publications are still missing from RePEc, and RePEc has no staff to index them. Also, not all authors are registered with RePEc, and some do little to maintain the accuracy of their records.

11. Conclusions

In this paper, we hope to have demonstrated that the ranking exercises performed within RePEc are based on a sound methodology and can be useful. It should also be clear that they are a work in progress, as the data are not yet as comprehensive as they could be, both in terms of listed publications and, especially, registered authors. The citation database is the most experimental15 component at this point, as reference extraction and matching are difficult and error-prone. As more publishers and authors join the RePEc project and as the analysis of the data is perfected, our confidence in the rankings will rise, and we hope the RePEc rankings will come to be regarded as a useful tool in the profession.

Acknowledgments

This paper benefited from discussions and electronic correspondence with Kit Baum, Oded Galor, Bill Goffe, N. Gregory Mankiw, and Ekkehard Schlicht. The data used in these rankings would not exist without the major contributions of José Manuel Barrueco Cruz, Kit Baum, Sune Karlsson, Thomas Krichel, Ivan Kurmanov, and all the other volunteers working on RePEc. This version extends previous ones with several newly added criteria and updated tables. The views expressed are those of the individual authors and do not necessarily reflect official positions of the Federal Reserve Bank of St. Louis, the Federal Reserve System, or the Board of Governors.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Available online at http://ideas.repec.org/ (accessed on 28 October 2013).
  2. Available online at http://econpapers.repec.org/ (accessed on 28 October 2013).
  3. Available online at http://inomics.com/ (accessed on 28 October 2013).
  4. Available online at http://socionet.ru/ (accessed on 28 October 2013).
  5. Available online at http://nep.repec.org/ (accessed on 28 October 2013).
  6. C. Seiler, and K. Wohlrabe. “Ranking Economists and Economic Institutions Using RePEc: Some Remarks.” Ifo Working Paper 96. 2011. Available online at: http://www.econstor.eu/handle/10419/73773 (accessed on 28 October 2013).
  7. C. Seiler, and K. Wohlrabe. “Ranking Economists on the Basis of Many Indicators: An Alternative Approach Using RePEc Data.” J. Informetr. 6 (2012): 389–402. [Google Scholar] [CrossRef]
  8. C. Seiler, and K. Wohlrabe. “Archetypal Scientists.” J. Informetr. 7 (2013): 345–356. [Google Scholar] [CrossRef]
  9. J. Gibson, D. Anderson, and J. Tressler. “Which Journal Rankings Best Explain Academic Salaries? Evidence from the University of California.” Working Paper 12/10. University of Waikato, Department of Economics, 2012. Available online at: http://researchcommons.waikato.ac.nz/handle/10289/6690 (accessed on 28 October 2013).
  10. R.S.J. Tol. “Quantile Kernel Regression for Identifying Excellent Economists.” Working Paper 6013. Department of Economics, University of Sussex, 2013a. Available online at: http://ideas.repec.org/p/sus/susewp/6013.html (accessed on 28 October 2013).
  11. R.S.J. Tol. “The Matthew Effect for Cohorts of Economists.” J. Informetr. 7 (2013): 522–527. [Google Scholar] [CrossRef]
  12. C. Chang, and M. McAleer. “Ranking Leading Econometrics Journals Using Citations Data from ISI and RePEc.” Econometrics 1 (2013): 217–235. [Google Scholar] [CrossRef] [Green Version]
  13. T. Krichel, and C. Zimmermann. “The Economics of Open Source Bibliographic Data Provision.” Economic Analysis and Policy 39 (2009): 143–152. [Google Scholar] [CrossRef]
  14. Available online at http://edirc.repec.org/ (accessed on 28 October 2013).
  15. Available online at http://citec.repec.org/ (accessed on 28 October 2013).
  16. Available online at http://logec.repec.org/ (accessed on 28 October 2013).
  17. S. Brin, and L. Page. “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” Comput. Netw. 30 (1998): 107–117. [Google Scholar] [CrossRef]
  18. Available online at http://www.eigenfactor.org/ (accessed on 28 October 2013).
  19. J.E. Hirsch. “An index to quantify an individual’s scientific research output.” Proc. Natl. Acad. Sci. U.S.A. 102 (2005): 16569–16572. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Q. Wu. “The w-index: A significant improvement of the h-index.” J. Am. Soc. Inf. Sci. Technol. 61 (2010): 609–614. [Google Scholar]
  21. A. Schubert. “Successive h-indices.” Scientometrics 70 (2007): 201–205. [Google Scholar] [CrossRef]
  22. F. Ruane, and R.S.J. Tol. “Rational (Successive) H-Indices: An application to economics in the Republic of Ireland.” Scientometrics 75 (2008): 395–405. [Google Scholar] [CrossRef]
  23. Available online at http://ideas.repec.org/cgi-bin/newrank.cgi (accessed on 28 October 2013).
  • 1NetEc, with its child projects WoPEc and BibEc, displayed RePEc data at one time. NetEc closed because, according to its maintainer, it was not worth the maintenance effort given that competitors within RePEc were offering a superior product. EconLit also uses RePEc data for working papers through an exchange-of-services agreement with RePEc.
  • 2By the time of publication, these numbers increased to over 38,000 and 900,000.
  • 3Strictly speaking this would only be true if we did not account for different versions of the same item. Also, the reader would need to follow all citations, as the impact factor is not divided by the number of cited items. Some versions of PageRank do this, however.
  • 4This is not a misspelling. “Between” is a reserved word in relational database management systems and can thus not be used as a variable name.
  • 5We discuss these correlations below.
  • 6Some errors are unavoidable. For example, at the time of this writing, the Pacific island nations of Niue and Nauru are ranked thanks to two authors using courtesy domains from these micro-nations.
  • 7Before January 2009, the weight was one, which led to distorted rankings in some countries where it is customary to provide courtesy appointments to foreign scholars. The new weighting scheme now ranks true residents on top.
  • 8Thanks to many authors for putting a picture of themselves on their web page!
  • 9We do not consider the Wu-Index.
  • 10We do not consider the Wu-Index.
  • 11For ranking computations, the opposite of Close is used.
  • 12Keep in mind that the Close and Betweenn criteria are not computed for institutions, and the Close criterion is the one that severely drags down the aggregate correlations for authors.
  • 13Note that the h-index is computed differently from the h-index for authors.
  • 14For a list of such ranking exercises, as indexed on RePEc, see http://ideas.repec.org/k/ranking.html.
  • 15All rankings mention their experimental nature. One metric that can be used to remove this label is when the number of items with references exceeds the number of items that are cited.
