Richard "call me Dimbleby" Gedye and the Four Horses of the Research Quality Apocalypse
We're not too thin on the ground for this morning's 9am session, which suggests either that the conference dinner was a washout (it wasn't, as witnessed by my slight queasiness and faint headache) or that this morning's session is a big draw (it is - a new, Question-Time-style format for UKSG).
Jim Pringle from Thomson Reuters opens the show. We're talking about the value / quality of research, and the ways in which it can be measured. Jim suggests:
- attention (citations, but also more generally)
- aggregation (researcher -> institution, article -> journal; see the sketch after this list)
- relation (links, related content, metadata - lots of data stored around the institution can be useful in assessing researcher value / impact).
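To make the aggregation point concrete, here's a minimal sketch of rolling article-level citation counts up to researcher and institution level (my own illustration, not a Thomson Reuters tool; all names and numbers are made up):

```python
from collections import defaultdict

# Hypothetical article records: each carries its own citation count plus
# the researcher and institution it is attributed to.
articles = [
    {"title": "A", "citations": 12, "researcher": "Smith", "institution": "Univ X"},
    {"title": "B", "citations": 3,  "researcher": "Smith", "institution": "Univ X"},
    {"title": "C", "citations": 25, "researcher": "Jones", "institution": "Univ Y"},
]

# Roll article-level counts up to researcher level ...
by_researcher = defaultdict(int)
for a in articles:
    by_researcher[a["researcher"]] += a["citations"]

# ... and up to institution level.
by_institution = defaultdict(int)
for a in articles:
    by_institution[a["institution"]] += a["citations"]

print(dict(by_researcher))   # {'Smith': 15, 'Jones': 25}
print(dict(by_institution))  # {'Univ X': 15, 'Univ Y': 25}
```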
Tools are emerging that make use of metadata relationships (funders, co-authors, patents, etc) to more powerfully and accurately measure value. "But metrics are only as good as the people who use them, and can support judgements but should never be the sole ground for judgements."
COUNTER's Peter Shepherd walks us through PIRUS2, a project that is developing a standard for reporting usage at the individual article level. Journal-level metrics may not be representative of individual articles, and therefore of their authors, and citation data is not adequate for measuring some fields. The h-index is author-centred but can be biased towards older researchers. Overall, reliance on any one metric is misleading and distorts author behaviour.
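For anyone unfamiliar with the h-index Peter mentions: an author has index h if h of their papers have at least h citations each, which is why a long publication record tends to help. A minimal sketch of that calculation (my own illustration, nothing to do with PIRUS2):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# A longer publication record tends to push h upwards, which is the
# age bias Peter alludes to.
print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```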
Article-level usage is becoming more relevant because:
- more journal articles in repositories
- interest from authors and funding agencies
- online usage growing in credibility e.g. PLoS reporting, Knowledge Exchange
- increased practicality - COUNTER, SUSHI (see the sketch after this list)
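As a rough illustration of what article-level usage reporting involves (my own hedged sketch, not the actual PIRUS2 or COUNTER report format), the core task is consolidating usage for the same article, identified by DOI, across publisher platforms and repositories:

```python
from collections import defaultdict

# Hypothetical monthly usage events from different platforms; in practice
# these would arrive as COUNTER-style reports harvested via SUSHI.
usage_events = [
    {"doi": "10.1000/example.1", "platform": "Publisher A",  "downloads": 120},
    {"doi": "10.1000/example.1", "platform": "Repository B", "downloads": 45},
    {"doi": "10.1000/example.2", "platform": "Publisher A",  "downloads": 30},
]

# Consolidate usage per article (DOI), regardless of where it was hosted.
per_article = defaultdict(int)
for event in usage_events:
    per_article[event["doi"]] += event["downloads"]

for doi, total in per_article.items():
    print(doi, total)
# 10.1000/example.1 165
# 10.1000/example.2 30
```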
Alain Peyraube from CNRS picks up the baton, talking about the European Reference Index for the Humanities (ERIH) project. "We need appropriate tools (for measuring impact) or we are going to lose valuable grants." In the humanities in particular, such a methodology needs to address monographs, book chapters, edited volumes, etc., as well as journals. The ERIH steering committee took some time to set up the project: identifying which disciplines would be considered (lots of overlap with the social sciences), selecting peer review as "the only practicable method of evaluation in basic research", setting up panels for each of 15 disciplines, providing guidelines, and soliciting categorised lists of journals. The project suffered from considerable misunderstanding and criticism, especially from the UK scientific community, and particularly around the categorisation of journals. Both the panels and the journal lists have been revised since the project's inception.
Hugh Look takes the stage in a fabulous red blazer (Good Morning Campers!). Trying to attach numbers to things, says Hugh, is a difficult business - numeric targets are always open to abuse, and he points us to the LSE's Michael Power for a wide-ranging analysis of this. What's the benefit of measuring - and who benefits? What are the underlying structures, and what behaviours do they engender? How strong is the link between measurements and real-world impact? We're not currently measuring performance or quality - we're setting up a structure of control, and this risks unproductive use of public money. "The managerial class are the primary beneficiaries of the measurement culture." They manage risk to the institution - but also risk to themselves - so they use metrics to safeguard their own position / avoid bad PR, and coerce practitioners into compliance by making them part of annual reviews. There is a flight from judgement in the way that metrics are used. Managers cannot do peer review themselves, so they mistrust it.
Hugh acknowledges that measuring things can break "who you know" elitism, but both he and Peter refer to "fetishism" around statistics. Some organisations, e.g. the RAF, are stepping back from using metrics to assess research quality - a "dawning of common sense". Metrics divert attention away from "other things that are more useful to do" - "what aren't we doing?"
(I'll post the questions / floor discussion separately, or this will be verging on LiveSerials' longest posting.)
Labels: "research assessment", citations, impact factor, metrics, Quality, Quality Measures, usage, Usage Factor, value