Wednesday, April 14, 2010

Research quality: responses from the floor

Questions from the audience, and answers from the panel, relating to Richard "call me Dimbleby" Gedye and the Four Horsemen of the Research Quality Apocalypse

Ed Pentz (audience): is there a common definition / agreement on what quality or impact actually is?


Jim Pringle: we rarely articulate this. Quality is in the eye of the beholder - subjective. Is the work making a significant contribution to the field, changing it? Is it well constructed, convincing? From the funders' point of view, was it worthwhile?

Hugh Look: ye-es. I'm pessimistic about this. We can come up with words that everyone agrees on, but will it really mean anything? We just manufacture more specialist language. (Rick Anderson summarises this on Twitter: "'Shared language' doesn't necessarily equal 'common understanding'" - exactly.)

Alain Peyraube: who decides what is making a significant contribution to the field? It depends on judgements that inherently don't achieve consensus. And it changes over time.

Peter Shepherd: the easiest way to describe quality is that you know it when you see it! It helps create insight, and enables others to take things further forward.

Hugh (again): if peer review and metrics lead to risk-averse decisions, what do we do? Encourage plurality and diversity. Metrics have a chilling, crushing effect on plurality and diversity. We do need to do riskier things whose outcomes we can't foresee. That is being lost from the system because we are focussing on targets.

Jim: quality is associated with value and worth. We talk about it because of related funding decisions. Challenge: what would be the evidence of worth and value that we should look for? The issues we have discussed are a failure of management to use tools correctly; they don't mean there should be no search for measurable value.

Judy Luther (audience): what's your sense of the future of the journal as a signifier of research quality?

Peter: journals are a service to authors, and a service to readers. On the harvesting side, journals will continue to be an important measure of quality (from the author's point of view) - represented by editors and editorial boards. Within the great mass of information, most of us need something like the journal's personality to trust our research to. From the reader's point of view, the journal as a signifier of quality is "becoming more shady". The collection (database) or the individual article might become more meaningful than the journal as a proxy for quality.

Alain: the journal will continue to exist. The problem is that when a paper is published in a journal with a good reputation, the content of the paper has already been discussed, criticised etc. (e.g. at conferences, several months prior to publication), so the article is too late to improve science - there is nothing new for specialists in the field; they already know the content, and they don't read it again. (On Twitter, Ed Pentz argues that "research may have moved on but in many fields old articles can have a huge impact".)

Hugh: we might start to see metrics relating to interestingness and relevance, rather than long-term quality. "Perhaps we need to look at temporal variation in the significance and meaning of metrics, to understand in a more sophisticated way what these things are doing for us."

Hazel Woodward (audience): the focus for assigning value is the article / journal, but research funds are allocated to researchers (individuals and institutions). Should we move to allocating research quality points to individuals / institutions, or is it not practical to pursue this?

Hugh: I feel uncomfortable - this is an ill-thought-out idea. It seems to be further atomisation, further subjective judgements, further bureaucracy. I instinctively feel that practitioners should take a fairly aggressive stance against this.

Richard Gedye: do you worry that it's already happening (RAE etc)?

Hugh: I worry about its use. In the end, it rewards the compliant - those who are seen to be performing - not the difficult. It fundamentally supports a structure that does not reward those who go against the grain.

Alain: It's already happening (in institutions) - Shanghai rankings etc. Attributing research quality points to researchers would provoke strong reactions.

Jim: if you're a researcher, and you have a body of work that has never been downloaded, never visibly been read, shown or cited, shouldn't we ask some questions about the value of your research?

Peter: is this Question Time or I'm Sorry I Haven't a Clue? Points mean prizes! I don't in principle have a problem with awarding points, but the problem is the scale of the enterprise sitting behind this. A huge bureaucracy outweighs the benefit of the metric. The cost, in many senses, of playing this game has become too much.
