Use and Abuse of Usage Measures - Ian Bannerman, Managing Director for Journals, Taylor & Francis
COUNTER and the Usage Factor
Launched in 2002, COUNTER is an attempt to make usage data credible and comparable. The Usage Factor was conceived by Herbert Van de Sompel and colleagues in 2006. An invitation to tender is now out.
Thomson Scientific Impact Factor: total cites to items published, divided by total items published.
Usage Factor: total usage of published items, divided by total items published.
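Reduced to arithmetic, the two ratios look like this (a sketch only: the real impact factor uses a two-year citation window, and the Usage Factor was still being defined at the time; the numbers below are invented for illustration):

```python
def impact_factor(total_cites, total_items):
    """Simplified Impact Factor: citations received by published items,
    divided by the number of items published."""
    return total_cites / total_items

def usage_factor(total_usage, total_items):
    """Proposed Usage Factor: total usage (e.g. COUNTER full-text
    downloads) of published items, divided by the number of items."""
    return total_usage / total_items

# Hypothetical journal: 500 citations and 11,400 downloads of 200 articles.
print(impact_factor(500, 200))   # 2.5
print(usage_factor(11400, 200))  # 57.0
```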
Implicit assumptions of usage statistics
- Usage data is consistent, credible and compatible
- Usage factor would be a meaningful indicator of something
Ian brought up some example anomalies: one article accessed at a single Russian institution roughly once every 9 seconds, apparently through some local error (COUNTER would ignore that); every article in a journal accessed about 57 times by a Korean institution, which looks suspiciously like a robot, but the statistics arrive 3 months after the event; and an obscure, uncited article accessed 1,183 times, with no clear explanation. There is a lot of noise in the system and it is hard to identify or understand it all.
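The anomalies above suggest simple heuristics for flagging suspect traffic. These are purely illustrative assumptions of mine, not COUNTER's published rules: flag sessions whose requests arrive implausibly fast, or journals where downloads are spread suspiciously evenly across every article.

```python
from statistics import mean, pstdev

def too_fast(timestamps, min_interval=9.0):
    """Flag a session whose requests arrive faster, on average, than a
    human plausibly could (e.g. one access every 9 seconds or less)."""
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return bool(gaps) and mean(gaps) <= min_interval

def too_uniform(counts_per_article, tolerance=0.05):
    """Flag a journal where every article was downloaded almost exactly
    the same number of times (e.g. ~57 each): human usage is rarely
    that flat, so a very low coefficient of variation is suspect."""
    m = mean(counts_per_article)
    return m > 0 and pstdev(counts_per_article) / m < tolerance
```

Neither test is conclusive on its own (a reading list could also produce flat usage), which is exactly why filtered data needs to be auditable rather than silently cleaned.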
Is usage a meaningful indicator anyway?
Ian Bannerman cites Davis & Price (2006) [eJournal interface can influence usage statistics: implications for libraries, publishers and Project COUNTER. JASIST v.57 n.9, 1243-1248], showing the influence of interface design on usage, which, he claims, undermines the meaning of usage statistics. In particular he points to journals that require viewing the full-text HTML before downloading a PDF: COUNTER would count this twice at present! Bannerman adds that if people's careers depended on the usage (rather than the impact) of their publications, you would clearly have some issues here.
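COUNTER does collapse rapid repeat requests via double-click filtering; a minimal sketch of the idea follows, though the window length and record shape here are my assumptions rather than the Code of Practice verbatim:

```python
def filter_double_clicks(events, window=30):
    """Collapse repeated requests for the same (user, item, format)
    within `window` seconds into a single countable event.
    `events` is a list of (timestamp, user, item, fmt) tuples."""
    last_seen = {}
    counted = []
    for ts, user, item, fmt in sorted(events):
        key = (user, item, fmt)
        if key not in last_seen or ts - last_seen[key] > window:
            counted.append((ts, user, item, fmt))
        last_seen[key] = ts
    return counted
```

Note that an HTML view followed immediately by a PDF download has two different formats, so it survives this filter and still counts twice, which is precisely the double counting complained about above.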
Bannerman is also concerned about the impact of publishing usage statistics, and the lack of transparency that may follow if financial success depends on them - the Observer Effect: "By measuring the literature we may change the literature." Issues at author or publisher level already seen with the impact factor include: self-citing; alerting authors to content they "should" cite; seeking out prolific, high-quality authors (who may self-cite); publishing the most citable articles early in the year (a larger window for citation and hence impact factor); targeting topical areas rather than long-term studies (which affects funding); publishing review articles; etc.
Additional issues for a usage factor may be worse: getting friends, your neighbour etc. to download articles (or writing a bot to do it); the temptation to leave usage data unfiltered; publishing for students rather than researchers (the impact factor rests on prestige among a peer group of citers, usage on sheer numbers); sexing-up titles and keywords; using the abstract to tease rather than inform; stopping printed journals; blogging it, tagging it and posting it; broadcasting metadata but keeping articles where they are counted, not in OA repositories (although your blogger feels that, as things stand, you could equally do the counting from OA repositories).
Impact Factor - not all attempts to change and improve the impact factor are "bad"; citations leave an audit trail, and the act of citing is usually meaningful (you stake your reputation on it). Usage trails are not (as) trackable and carry no reputational cost, being practically anonymous.
Bannerman's conclusions:
- Extreme caution in over-interpreting usage data
- Further research into factors that influence article downloads
- Improved guidelines for identifying and filtering robots
- Awareness of the Observer Effect