Bubbles and dots – novel ways of perceiving scientific impact

Science is a very competitive business. We compete with our colleagues for positions, grants and tenure. The main currency is publications – the more the better, and in as good journals as possible. (Teaching is often portrayed as important for your career, but in most cases that is simply not true – just lip service from the system.) But how can we measure quality?

Quantity – the number of articles – is one way to show it. This is probably most important in the early part of your career, when each and every publication counts and competition for postdoc money is fierce. But for established scientists it is less relevant; really, is a scientist with 60 publications better than one with 50? And of course the number of publications is a function of time too, so the old silverback will always win such comparisons.

Everyone agrees that it should be quality, not quantity, that matters most. But we can’t read everything everyone is publishing – it is simply beyond the realm of possibility, given the enormous flow of articles in peer-reviewed fora. So how, then, can we put a quality brand on our work? For the last 10 years, the light from the journal Impact Factor has been the beacon to which scientists have set their course. This is an index of how much the average article in a specific journal is cited by other articles in the years that follow. It is undoubtedly a very crude measure, and an AVERAGE measure of the journal, not a metric of the specific articles that appear in it. (Or in other words: just because an article is published in Nature, it isn’t necessarily a gold nugget.)
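To make that concrete: the classic two-year impact factor is just a ratio of citations to citable items. A minimal Python sketch – the numbers here are invented purely for illustration:

```python
def impact_factor(citations, citable_items):
    """Two-year journal impact factor: citations received in year Y
    to items the journal published in years Y-1 and Y-2, divided by
    the number of citable items published in those two years."""
    return citations / citable_items

# Invented example: 1200 citations in 2012 to articles the journal
# published in 2010-2011, out of 400 citable items in those years.
print(impact_factor(1200, 400))  # 3.0
```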

Thus science has a huge problem in measuring researcher, article and journal quality. The quest to publish in the journals with the highest possible impact factors, rather than in the journal with the best scope for your study, overloads the peer-review system with an ever-increasing number of reviews.

For individual researchers, the total number of citations and the h-index (a value of 3 means the person has 3 articles that have each been cited at least 3 times; a value of 23 means 23 articles cited at least 23 times each, etc.) are becoming more and more widely used.
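The h-index is easy to compute from a list of citation counts: sort them in descending order and find the largest rank that is still covered by its citation count. A quick Python sketch, with a made-up publication record:

```python
def h_index(citation_counts):
    """h-index: the largest h such that the author has
    h papers with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Made-up record of six papers. Here h = 3: at least 3 papers
# have 3+ citations each, but fewer than 4 papers have 4+.
print(h_index([25, 8, 5, 3, 3, 1]))  # 3
```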

But impact can also be at the societal level: how well the work gets across to the public. The journal family PLOS has just released a beta version of a new article-level metrics (ALM) system that measures a range of factors for articles published in their journals. Quickly and easily you can see the number of views of a particular article (and all PLOS articles are open access, by the way), the number of downloads, the number of citations in different databases, the social media impact (Twitter, Facebook, Wikipedia, etc.) and how all these things change over time. You can also play around and compare different articles and journals. A fun exercise, but potentially informative too.
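If you want the raw numbers rather than the graphs, PLOS also exposes the ALM data programmatically. The sketch below is hypothetical: the endpoint URL, parameters, response fields and the placeholder DOI are assumptions for illustration, so check the PLOS ALM documentation for the real API (which likely also requires an API key):

```python
import json
import urllib.request

# Hypothetical endpoint and parameters - consult the PLOS ALM docs
# for the actual API; a registered api_key is likely required.
ALM_URL = "http://alm.plos.org/api/v3/articles?ids={doi}&info=summary"

def fetch_alm(doi):
    """Fetch article-level metrics for a single DOI as parsed JSON."""
    with urllib.request.urlopen(ALM_URL.format(doi=doi)) as response:
        return json.load(response)

# Placeholder DOI - substitute a real PLOS DOI of interest.
metrics = fetch_alm("10.1371/journal.pone.0000000")
print(metrics)  # e.g. views, downloads, shares, citations per source
```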

Five hundred PLOS articles matching the keyword ‘avian influenza’.

The graph above shows the change over time in citations for 500 articles matching the keyword avian influenza. Different journals are shown in different colors: PLOS ONE in yellow, and the high-impact journals PLOS Biology, PLOS Medicine and PLOS Pathogens in green and shades of purple, respectively. And yes, over time the average article seems to do better in the ‘best’ journals, but the spread in PLOS ONE is more interesting – with many articles doing as well as, or better than, those published in the top-notch journals.

You can also gain insights into where science is made. For instance, have a look at where researchers on sexual selection have their headquarters. The dominance of Europe and America is monumental; partly, of course, due to historic reasons, research infrastructure, funding, etc., but likely also because of language (Russians still publish a lot in Russian, Latin American researchers in Spanish, etc.).

Affiliations of researchers on 500 sexual selection papers.

Speaking of sexual selection: guess which article has had the highest ALM impact? The dot in the graph below is an article that appeared in PLOS ONE on fellatio in bats. Perhaps not the most important paper in terms of science, but a curiosity teaser likely picked up by a lot of newspapers. This paper has been cited 6 times, but has more than 9,000 shares on social media and 288,000 views on the journal homepage.

Fellatio in bats.

Finally, what could PLOS do to make it better?

  • It would be awesome if this could be more in Gapminder style, where the user could use combinations of search terms to contrast the results. For instance, if I want to see how well my articles on flu are doing relative to other articles on flu – how can I do that?
  • It would also be interesting to add journal or keyword-based regression lines.
  • The author institution map is very slow when many articles are chosen. Speed it up please!
  • And of course, it would be nice to see a similar system incorporating other journals too. But that’s something for the future.

A good initiative!

Jonas Waldenström