Monday, April 12, 2010

Breakout: Article-level Metrics at PLoS.

Peter Binfield’s breakout session explored how PLoS are tracking alternatives to the journal-level impact factor. A journal’s impact factor doesn’t reflect the quality of every article it contains; article-level metrics (ALM) offer a more granular, specific, and arguably more appropriate way to measure the impact of content. PLoS have 20,000 published articles across 7 journals, and every one of them has metrics published alongside it.

PLoS are tracking much more than citations. As well as article downloads, they are collecting social bookmarking data, ratings from readers, and links from blogs, and they are publishing all of the data they collect alongside the articles on their site. Their mission, as Peter explained, is to find all commentary relating to an article no matter where it occurs, bring it back to the article, and let the reader use it. Beyond providing a measurement, pulling all of this data together improves discoverability: users can navigate not just through citations but also through comments, ratings, blog mentions, and so on.

Peter showed a PLoS ONE article and the various stats that can be accessed by clicking the “Metrics” tab above the HTML article. In addition to citations via CrossRef and Scopus, bookmarks from CiteULike and Connotea can be viewed and followed, users can apply a one-to-five-star rating and leave comments (15-20% of PLoS articles receive comments, which is apparently fairly high among publishers), and there are links to blog coverage. The raw metrics data can also be downloaded as an XML file if you want to do your own analysis.
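For anyone who does take up that offer, something along these lines is all you’d need to pull the numbers into a script. This is a minimal sketch in Python: the download URL, element names, and DOI are my own assumptions rather than PLoS’s documented format, so check them against the XML you actually get back from the Metrics tab.

import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical URL pattern: the real download link is on the article's
# "Metrics" tab, so treat this as an assumption, not a documented API.
ALM_XML_URL = "http://www.plosone.org/article/metrics.xml?doi={doi}"

def fetch_metrics(doi):
    """Download the per-article metrics XML and tally counts by source."""
    with urllib.request.urlopen(ALM_XML_URL.format(doi=doi)) as resp:
        root = ET.fromstring(resp.read())

    # Element and attribute names here are illustrative; inspect the real
    # XML and adjust. The idea is simply one count per data source.
    counts = {}
    for source in root.findall(".//source"):
        counts[source.get("name")] = int(source.get("count", 0))
    return counts

if __name__ == "__main__":
    # Made-up DOI, purely for illustration.
    print(fetch_metrics("10.1371/journal.pone.0000000"))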

After a period of heavy internal development, integrating with third-party APIs to collect data has been relatively straightforward, and the DOI as article identifier ties everything together in PLoS’s ALM database.
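Conceptually that aggregation is straightforward: because every source reports its numbers against the same DOI, you can picture the ALM store as a table keyed by DOI. Here is a toy sketch of the idea; the field names and figures are mine, not PLoS’s schema.

# Toy picture of the ALM store: a table keyed by DOI, with one count per
# source. Source names and numbers below are invented for illustration.
alm = {}

def record(doi, source, count):
    """Merge one source's count into the per-article record for that DOI."""
    alm.setdefault(doi, {})[source] = count

record("10.1371/journal.pone.0000000", "crossref_citations", 12)
record("10.1371/journal.pone.0000000", "citeulike_bookmarks", 7)
record("10.1371/journal.pone.0000000", "html_views", 3400)

print(alm["10.1371/journal.pone.0000000"])
# {'crossref_citations': 12, 'citeulike_bookmarks': 7, 'html_views': 3400}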

There is a lack of benchmarks for article-level metrics, although PLoS provides a user guide to help readers interpret the data. Feedback from authors has been very positive, with academics much preferring ALM to the Impact Factor.

This is a work in progress at PLoS. Binfield hopes to see additional data added, including expert reviews from Faculty of 1000, media coverage (very hard to track as it rarely links to the actual article), Twitter mentions, and so on. Usage data from institutional repositories isn’t counted: an API for PubMed Central data would be useful, for example. Standards should be developed (by NISO? CrossRef?). There’s also a case for author-level and institution-level metrics.

In response to a question from the audience, Binfield said that PLoS had not had any negative feedback from authors or editorial boards.

For the best view of what PLoS are doing in this area, I’d very much recommend that you go and take a look at it for yourself.

1 Comment:

Blogger Binfield said...

Thanks for the write-up. The slides are at: http://tiny.cc/ALM14

10:25 am  
