Tuesday, April 08, 2008

The MESUR Project: in search of usage based impact metrics - Herbert Van de Sompel, Los Alamos National Laboratory, Research Library

Herbert Van de Sompel began by explaining that he is standing in for the MESUR team leader, Johan Bollen, who is speaking at another conference today.

"The current digital scholarly communication system is a mere scanned copy of its paper-based ancestor." - Herbert's "standard" provocative statement!

We haven't changed our approach for the digital age, but the MESUR project looks at how we can completely rethink things for it.

The Los Alamos team works on lots of related areas: interoperability, standards etc. But MESUR is all about metrics.

The Thomson Scientific IF (Impact Factor) was about the only metric that could be computed in a paper-based era.

Now that things are no longer paper based, there are many other factors that can be measured in a digital age:

Usage-based metrics: in a digital environment scholarly communication is accessed over networks, which can already (and do already) record a great deal of usage information

Network-based metrics: a bit more complicated. Essentially, scholarly communication generates networks (citation, co-authorship etc.)

Usage data has enormous potential. It can be recorded for all types of scholarly communication: papers, journals, preprints, blogs, datasets, chemical structures, software etc. (and not just 10,000 journals). Recording starts immediately after publication, so it is a very rapid indicator of scholarly trends (no publication delays).

Herbert has examples: a table showing the top 5 ranking journals at the Los Alamos National Laboratory (a physics institute), a table of the California State University top 5 journals (some overlap with Los Alamos but also a local bias), and then the national table of top 5 journals (some overlap with both other tables: clearly some journals are key global journals).

Usage data comes with significant challenges

What exactly is usage?
  • Different types of usage (download pdf, email abstract, etc...)
  • Privacy concerns
  • Aggregating item-level usage across different network systems - standardised recording and aggregation etc. needed
  • How do we deal with robots etc.?

Network-based metrics

  • We have 50 years of network science available to us
  • A wide variety of metrics have been proposed to characterise networks and to assess the importance of nodes in the network (e.g. social network analysis, small world graphs, social modelling etc.).
  • So when defining metrics for scholarly communication we look at the whole context of that scholarly communication using network analysis

Herbert is showing a variety of network-based metrics - e.g. PageRank (Google's recursive algorithm), and shortest path between nodes - a journal might cross over multiple disciplines etc.
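To make those two metrics concrete, here is a quick illustrative sketch of my own (not MESUR's code or data): a toy journal-level citation network in Python, using PageRank as a recursive prestige score and shortest paths / betweenness as a sense of how a journal bridges disciplines. The journal names and counts are made up.

    # Illustrative only: a toy journal-level citation network, not MESUR's data.
    import networkx as nx

    # Directed graph: an edge A -> B means "articles in journal A cite journal B".
    G = nx.DiGraph()
    G.add_weighted_edges_from([
        ("Phys Rev Lett", "Nature", 120),
        ("Nature", "Phys Rev Lett", 95),
        ("Nature", "Science", 200),
        ("Science", "Nature", 180),
        ("J Appl Phys", "Phys Rev Lett", 60),
        ("Science", "PNAS", 75),
        ("PNAS", "Science", 80),
    ])

    # PageRank: a recursive "prestige" score - being cited by well-cited journals
    # counts for more than being cited by obscure ones.
    prestige = nx.pagerank(G, alpha=0.85, weight="weight")

    # Shortest path: how many citation "hops" separate two journals; a journal
    # that lies on many short paths bridges disciplines (high betweenness).
    hops = nx.shortest_path_length(G, source="J Appl Phys", target="PNAS")
    bridging = nx.betweenness_centrality(G)

    print(sorted(prestige.items(), key=lambda kv: -kv[1]))
    print(hops, bridging)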

MESUR

  • Create a very large-scale reference data set - usage, citation and bibliographic data combined
  • Investigate sampling issues - bias? bots? noise...
  • Investigate the validity of usage data and usage-based metrics - we don't expect them to measure exactly the same thing, but some sort of overlap or recognisability.

MESUR is NOT about one metric but a whole range of types and facets of metrics


How to obtain 1,000,000,000 usage events?

  • Working with lots of institutions
  • Scale - over 1,000,000,000 events
  • Covers period 2002 - 2007

Generating network from usage data
  • Same session = document relatedness (same session, same user, common interest; frequency of co-occurrence = degree of relevance) - see the sketch below
  • Usage data is at the article level
  • Note: not something MESUR invented (it underlies Amazon/Netflix recommendations, for instance)
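A minimal sketch of that idea (my own, not MESUR's code): count how often pairs of articles are accessed in the same session and use that count as an edge weight in a relatedness network. The session and article identifiers are invented.

    # Sketch: turning session logs into a document-relatedness network. Articles
    # accessed in the same session get linked; the co-occurrence count becomes
    # the edge weight (relevance).
    from collections import defaultdict
    from itertools import combinations

    # Hypothetical input: session id -> article ids accessed in that session.
    sessions = {
        "s1": ["art:101", "art:102", "art:103"],
        "s2": ["art:101", "art:103"],
        "s3": ["art:102", "art:104"],
    }

    edge_weight = defaultdict(int)
    for articles in sessions.values():
        for a, b in combinations(sorted(set(articles)), 2):
            edge_weight[(a, b)] += 1  # one more session in which a and b co-occur

    # Higher weight = more sessions in common = stronger assumed relatedness.
    for (a, b), w in sorted(edge_weight.items(), key=lambda kv: -kv[1]):
        print(a, b, w)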

Herbert has put up a gargantuan and complex diagram of a first (science) usage map. You can see related nodes by their connections. It also illustrates Muir Gray's point from yesterday: practitioners read in the natural sciences and elsewhere.

The citation network map, up next, shows a much less rich set of connections, especially cross-discipline. People read across disciplines but cite within their own - citations are what orthodox academia/scholarly communication expects, but it is very interesting that reading doesn't match this.

Next up, a list of the immense array of metrics that have been computed on this data, and the tables generated from them. For citations: the network metrics largely agree with each other, but they are all different from the Impact Factor. Same story for the usage metrics. The Impact Factor is therefore a completely different metric.

This leads on to metric correlations, which can be shown via metric maps: metrics are mapped so that the distances between them reflect their correlations.

The map produced is very pretty, with lots of yellow areas plus orange and deep red areas. All the usage metrics cluster together. The citation-based metrics break into two areas - prestige and popularity are different areas, though they overlap. There is a really high correlation between usage and citation metrics. The Impact Factor again sits on the outside of all the trends.
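Roughly how such a map could be built (a sketch under my own assumptions, not the MESUR method, and with fabricated scores): compute pairwise rank correlations between metrics across journals, turn them into distances, and embed the metrics in 2D with multidimensional scaling so that correlated metrics land close together.

    # Sketch only: the metric names and journal scores below are fabricated.
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.manifold import MDS

    metrics = ["usage_pagerank", "usage_indegree", "cit_pagerank", "cit_indegree", "impact_factor"]
    # Rows = journals, columns = metrics (made-up scores for six journals).
    scores = np.array([
        [0.9, 0.8, 0.7, 0.6, 3.1],
        [0.5, 0.6, 0.4, 0.5, 2.0],
        [0.8, 0.7, 0.9, 0.8, 1.2],
        [0.2, 0.3, 0.1, 0.2, 4.5],
        [0.6, 0.5, 0.6, 0.7, 0.8],
        [0.4, 0.4, 0.3, 0.3, 2.9],
    ])

    corr, _ = spearmanr(scores)      # pairwise rank correlations between metrics
    dist = 1.0 - np.abs(corr)        # highly correlated metrics end up close together

    coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)
    for name, (x, y) in zip(metrics, coords):
        print(f"{name}: ({x:.2f}, {y:.2f})")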

MESUR explorer prototype

This is a Java applet to visualise the metrics. It will be released in May on the MESUR website and will allow exploration of journal rankings according to multiple metrics of impact.

Conclusions

"Usage data totally rocks!"

18 months into the project:
  • First scientific exploration of a new paradigm for scholarly assessment
  • Creation of a vast reference data set of usage data
  • Infrastructure for a continuing research programme
  • Beyond discussion of the merits and validity of usage data
Still lots of challenges though.
