Peter Shepherd reports on Usage Factor Study
Separated into two phases, the survey explored the community’s general assessment of the relative value of a standardized comparative measure of online usage, its feasibility, and the overall level of interest in developing such a measure. The first phase consisted of personal interviews with 29 authors, publishers and librarians, while the second phase was a broader online survey of librarians and authors. Among the topics covered during the interviews were: reaction, in principle, to the Usage Factor; what some of the practical implementations might be; what time windows might be appropriate for calculating the Usage Factor; what practical ways there might be to consolidate the information, and who might take on that responsibility; and what the implications might be for participants and for non-COUNTER-compliant titles.
Peter reported that there was broad support for a usage-based quality measure, particularly among authors and librarians. Despite moderate dissatisfaction with the Impact Factor and their desire for new quantitative quality measures, authors appeared to be more reticent about changing their behavior based on a new quality measure. Publishers were more mixed in their support, although their concerns tended to focus on policy and approach rather than on the principle. Of particular interest to the library community is the development of a broader standard by which titles can be qualitatively measured. One librarian at a large university reported being asked by their Dean of Research how the library measures the quality of the 22,000 journal titles not covered by ISI. Interestingly, a Usage Factor, if it existed, would rank highly in librarians’ purchase-decision matrix.
COUNTER is seen as an increasingly accepted basis for usage statistics, and while its application is not yet universal among publishers, it is rapidly being adopted as an accurate representation of online usage. Many of the open questions remain in the details: what precisely will be measured, over what time frame, and who would manage the process. The devil, as they say, is in the details. For example, difficult questions exist about what exactly should be measured – full text only, articles versus other content, and usage of multiple versions (or of versions in multiple locations). Also, because people can see the value of a high ranking and the possibility of misuse or abuse, the need for a trusted third party to establish the criteria, calculate, audit, and disseminate this information to the community will be key to the project’s acceptance.
The meeting ended with a straw poll of the attendees and the vast majority indicated that they supported moving forward to a more specific test development phase. A full report of the research is being provided to UKSG and an article on the results will be released later this year.