HEALTH

A New Metric Could Help Weed Out Junk Science

NIH announces an alternative to the dreaded impact factor—a new metric, which measures the influence of a scientific study regardless of its journal

Sep 09, 2016 at 10:00 AM ET

When you stumble across a suspicious study, one of the best ways to vet the research is to check where it was published. If the study is from a well-regarded publication, like the New England Journal of Medicine, it’s probably worth a gander. If it’s in the International Journal of Advanced Computer Technology (which once offered to publish obscene gibberish for $150), then probably not. And for everything in between, there’s the “impact factor.” A number shrouded in secrecy, the impact factor ranks journals based on their importance to science, and those wary of its arbitrary power have been trying to kill it for decades.

Now, a study in PLOS Biology describes a new metric that may finally dispatch the troublesome impact factor. It’s called the Relative Citation Ratio, and it promises to judge each paper on its own scientific merits, without lumping it together with the other papers published in the same journal. You can take it for a test drive with the NIH’s iCite tool.

The impact factor was once merely a tool that librarians used to select journal subscriptions. But lately it has been terrorizing scientists, having somehow become the de facto way to judge the merits of both a paper and its authors. “It has become a cancer that can no longer be ignored,” Stephen Curry, a biology professor at Imperial College London, once wrote. “We spend our lives fretting about how high an impact factor we can attach to our published research because it has become such an important determinant in the award of the grants and promotions needed to advance a career…retarding the progress of science in the chase for a false measure of prestige.”

The impact factor indicates the average number of times a journal’s recent articles have been cited: roughly, the citations received in a given year to everything the journal published in the previous two years, divided by the number of those articles. For instance, the leading academic journal Nature has a respectable impact factor of about 41, which means that Nature articles are cited roughly 41 times each, on average. This implies that anything published in Nature is likely crucial to the scientific process, since each article it publishes can be expected to influence some 41 other scientific studies.
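The arithmetic itself is trivial. Here is a minimal sketch in Python, using invented citation and article counts (the real calculation draws on Thomson Reuters’ proprietary citation database):

```python
# Hypothetical illustration of the impact factor formula:
# citations received this year to articles from the previous
# two years, divided by the number of those articles.

citations_to_recent_articles = 82_000  # made-up count of 2016 citations
recent_articles_published = 2_000      # made-up count of 2014-15 articles

impact_factor = citations_to_recent_articles / recent_articles_published
print(f"Impact factor: {impact_factor:.1f}")  # -> Impact factor: 41.0
```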

But there are several glaring problems with that assumption. Impact factors are easy to skew: it would take only a handful of high-profile studies in one journal to mask the fact that most of its papers garner barely any citations. And citations themselves can be tricky business. Some papers cite prior research in order to disprove it; others pack in citations simply to pad their introductory sections. Since scientists tend to cite high-impact journals more often than others, there’s also a “rich get richer” component at work. Meanwhile, the citation count itself is kept under lock and key by Thomson Reuters, the private company that, at least for now, manages the citation database.
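A toy example, again with invented numbers, shows how a couple of blockbuster papers can prop up a journal’s average while the typical article goes nearly unread:

```python
from statistics import mean, median

# Invented citation counts for one journal's articles in a year:
# two blockbusters and eight papers that went almost unnoticed.
citations = [400, 350, 3, 2, 2, 1, 1, 1, 0, 0]

print(mean(citations))    # 76.0 -- the impact-factor-style average
print(median(citations))  # 1.5  -- what a typical article actually gets
```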

So scientists at the NIH set out to develop a less broken metric: the Relative Citation Ratio. The RCR is still built on citations, but it takes the extra step of normalizing an article’s citation rate against the rate expected for its field, using the papers it is co-cited with to define that field and NIH-funded research as the benchmark. An RCR of 1 means a paper is performing on par with comparable NIH-funded studies; a high rating (3 or better) means it is being cited at several times the rate of the average paper in its field.
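As a rough sketch of the field-normalization idea (not the NIH’s actual algorithm, which derives the expected rate from each paper’s co-citation network and a regression against NIH-funded benchmark articles), the core ratio looks something like this, with invented numbers:

```python
def relative_citation_ratio(article_citations_per_year: float,
                            expected_field_rate: float) -> float:
    """Toy version of the RCR: an article's citation rate divided by
    the citation rate expected for NIH-funded papers in its field."""
    return article_citations_per_year / expected_field_rate

# Invented numbers: a paper cited 12 times a year in a field where
# comparable NIH-funded papers average 4 citations a year.
print(relative_citation_ratio(12, 4))  # -> 3.0, triple the field average
```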

As part of the study, the NIH researchers compared their new RCR metric to the entrenched impact factor, and found that the latter often missed important, influential research simply because it was published in a less prestigious journal. “Continued use of the [impact factor] as an evaluation metric will fail to credit researchers for publishing highly influential work,” the authors write. “Articles in high-profile journals have average RCRs of approximately 3. However, high-impact-factor journals only account for 11 percent of papers that have an RCR of 3 or above.”

“Using impact factors to credit influential work therefore means overlooking 89 percent of similarly influential papers published in less prestigious venues.”