Despite their recognized limitations, bibliometric assessments of scientific productivity have become increasingly widely adopted. The authors describe what they believe is “an improved method to quantify the influence of a research article”: they propose using a research article’s co-citation network to “field-normalize” the number of citations it has received.
Article citation rates are divided by an expected citation rate that is (a) derived from the performance of articles in the same field and (b) benchmarked to a peer comparison group. The resulting Relative Citation Ratio (RCR) is article-level and field-independent, and provides an alternative to the invalid practice of using journal impact factors to identify influential papers. To illustrate one application of their method, the authors analyzed 88,835 articles published between 2003 and 2010 and found that the National Institutes of Health awardees who authored those papers occupy relatively stable positions of influence across all disciplines. The authors demonstrate that the values generated by this method correlate very strongly with the opinions of subject-matter experts in biomedical research, and they therefore suggest that the same approach should be generally applicable to articles published in all areas of science.
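The core of the computation described above is a simple ratio. The sketch below is only an illustration of that ratio, not the NIH implementation: the name `field_expected_rate` is a hypothetical stand-in for the co-citation-network-derived, peer-benchmarked expectation that iCite actually computes.

```python
def relative_citation_ratio(citations_per_year, field_expected_rate):
    """Article citation rate divided by a field-normalized expectation.

    RCR = 1.0 means the article is cited at the benchmark rate for its
    field; RCR = 2.0 means twice the benchmark, regardless of field.
    `field_expected_rate` is assumed to come from the article's
    co-citation network, benchmarked to a peer comparison group.
    """
    if field_expected_rate <= 0:
        raise ValueError("expected citation rate must be positive")
    return citations_per_year / field_expected_rate


# A hypothetical article cited 12 times per year in a field whose
# benchmarked expectation is 4 citations per year:
print(relative_citation_ratio(12.0, 4.0))  # 3.0
```

Because the denominator is field-specific, the resulting ratio can be compared across disciplines, which is what makes the metric field-independent.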
A beta version of iCite, their web tool for calculating Relative Citation Ratios of articles listed in PubMed, is available at https://icite.od.nih.gov.
PLoS Biol Sept 2016; 14: e1002541 & e1002452 [Ed.]