Saturday, April 28, 2007

The Best in Show!!

Now that I am back from the UKSG meeting and have had a chance to reflect on it, I wanted to offer a few thoughts. At the first plenary session, where we were waiting to see the two industry giants Google and Microsoft deliver speeches that would change our view of the world, it was the third speaker who really hit the home run. T. Scott Plutchak, the medical library director from the University of Alabama at Birmingham, provided the most valuable insight into where we are heading in this digital world with his talk "The Librarian: Fantastic Adventures in the Digital World".

As librarians we often confuse the terms library and librarian, and Scott made it clear that the two are not synonyms. Many of us are afraid that the library we know and love is being marginalized. With the shift to digital collections and the influence of Google and Microsoft, this decline in the library's importance is accelerating. Many librarians are attempting to do battle with Google, trying to show how misleading its searches can be or how limited its information is in certain fields. Give it up!

In Scott's view it is OK to see the library declining in importance, because it is the librarian who is the important component in this equation. We have always had a major role in the print world and we will continue to have a major role in the digital age. According to Scott, "Librarians' fundamental purpose has been to support the process of research and education of our community." The primary tool in the print world was the library. Now, as we move into the digital world, we need to carry on the same role as before and simply change the tools.

Scott's presentation provided a number of examples of librarians working outside the library, alongside researchers in their labs and offices, and identified this activity as part of the role of the future. As a librarian I can see the fantastic adventure developing in the digital world. Librarians are going to have an even greater opportunity to use our creativity to further the interests of our community.

So it is time to stop putting our energy into trying to stonewall the Googles of the world and begin to use our information-gathering, analysis and organizational skills to help our community make the most of the new tools that technology and creative software developers have provided.

Dan Tonkery

Labels: ,

Friday, April 20, 2007

Framework for Improving Link Resolver Systems

In 2006, UKSG funded a research project to explore the industry context of link resolvers and the chain of serials delivery, in the hope of describing some of the issues and laying the groundwork for their future resolution. James Culling, Online Project Manager at Oxford University Press, presented the report on the research conducted by Scholarly Information Strategies. (NB – the SIS consulting company led by Simon Inger and Chris Beckett recently disbanded, although they are finishing the report on this project for UKSG.)

When a user conducts a search - in a traditional A&I database, a federated search engine or Google Scholar - or clicks through a reference link via the CrossRef system, the essential metadata about the object is passed via an OpenURL to a link resolver system, such as those available from Ex Libris, Serials Solutions, Innovative Interfaces and Openly Informatics. At the heart of each of these systems is a core "Knowledge Base" which provides the context for the OpenURL, comparing it against issue availability data and library holdings information and offering a variety of linking options to the content that was searched. The user is then directed to the content available through his/her institutional subscription.
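To make the mechanics concrete, here is a minimal sketch of the two steps just described: packaging citation metadata as an OpenURL and checking it against a knowledge base of the library's holdings. The resolver address, ISSN, journal title and coverage data are invented for illustration and do not correspond to any real service.

```python
from urllib.parse import urlencode

def build_openurl(resolver_base, citation):
    """Encode citation metadata as an OpenURL 1.0 (KEV) query string."""
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.genre": "article",
    }
    params.update({"rft." + key: value for key, value in citation.items()})
    return resolver_base + "?" + urlencode(params)

# Toy knowledge base: ISSN -> (platform URL, first year of licensed coverage).
KNOWLEDGE_BASE = {
    "1234-5678": ("https://platform.example.com", 1997),
}

def resolve(issn, year):
    """Return a target link if the library's subscription covers this article."""
    entry = KNOWLEDGE_BASE.get(issn)
    if entry and year >= entry[1]:
        platform, _ = entry
        return "%s/issn/%s/year/%d" % (platform, issn, year)
    return None  # no full-text entitlement: offer ILL or document delivery instead

citation = {"issn": "1234-5678", "jtitle": "Example Serials Review",
            "volume": "20", "spage": "83", "date": "2007"}
print(build_openurl("https://resolver.example.edu/openurl", citation))
print(resolve("1234-5678", 2007))
```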

While these systems work extremely well in the vast majority of cases, they are not without significant inefficiencies and inaccuracies. Much of this is due to the complexity of the distributed supply chain for this information. Link resolver providers (each of which has its own data system, structure and ingest methodology) receive information from publishers and subscription agents, who provide data on publication releases, collections, locations, etc. Frequently, this transfer process requires normalization and quality-control review, which adds to the complexity and the opportunity for error. The library, in turn, needs to provide holdings and subscription information from its own library systems in order to customize the resolver to match its holdings.

Through a series of conversations with publishers, resolver-system suppliers and librarians, the research has pointed to some issues and barriers that are inhibiting the deployment and use of these systems. Among the issues identified by SIS were: a lack of awareness that significant issues persist and a lack of cooperation in solving those issues; inaccurate or incomplete data; a lack of procedures for transferring titles; a lack of data format and transfer standards; and the question of communal responsibility for data quality. While OpenURL compliance is growing rapidly, there will need to be broader understanding of the role of OpenURL and how it interacts with the other information transfers necessary to facilitate the discovery and delivery of content.

Initial recommendations were suggested and may be explored by UKSG and the community. Much like Project COUNTER, a code of practice might be developed to address Knowledge Base compliance, specifying what information is provided and in what formats. Such a code of practice might certify compliance in areas of format, delivery method, timing and OpenURL conformance among the key organizations in this process: publishers, subscription agents and resolver suppliers. There are also standards which could be developed or expanded to improve this information exchange, such as the current work led by EDItEUR on ONIX for Serials Holdings, or possibly a NISO SUSHI equivalent for holdings information.

The final report will be provided to the UKSG Board during their May meeting and the report will likely be posted to the UKSG website sometime shortly thereafter. A summary article is also being prepared for the July issue of Serials. Hopefully, as many other similar UKSG research projects have done, this work will lead to significant outcomes that will improve information exchange.

Labels: , , ,

Battle of the Giants: Microsoft vs Google!

The first plenary session at UKSG 2007 provided a rare treat: a chance to see two industry giants in action. Microsoft's Cliff Guren, Director of Publishing Evangelism, and Google's Philippe Colombet, Manager, Strategic Partner Development, presented their take on the world - it was interesting to see Microsoft's embryonic competitors to Google's Scholar and Book Search. Even more interesting was the fact that Google announced a PowerPoint-type offering on the last day of our conference (that was kept very quiet!), which suggests further steps into each other's territory.

However, several speakers at the conference referred to the 'googleization' of students in academia; perhaps the move to 'Microsoftize' them has come too late when it comes to sophisticated search tools and digitisation solutions? Or is it simply a case of 'watch this space'?

Labels: , , ,

Thursday, April 19, 2007

The UK Research Reserve - protecting the UK's research collection

The UK Research Reserve was initiated by the Consortium of Research Libraries (CURL) and the British Library in 2005, with their commissioning of the CHEMS Consultancy to investigate the possibility of a collaborative store for little-used printed materials.

Nicola Wright of Imperial College, the Project Manager for the first stage of the UKRR, gave an overview of progress so far in her briefing session for the conference.

The concept behind the project is deceptively simple. Libraries are running out of space. Certain categories of printed material (mostly journals) take up prime position in most research libraries but are now little used as their online counterparts have taken over as the medium of choice for researchers. Therefore why not discard the printed versions, retaining only sufficient copies to satisfy issues of preservation and access for future researchers, then do much more interesting things with the space that is released? The official description of the UKRR is that it is a "collaborative, co-ordinated & sustainable approach to securing the long term retention, storage and access to low use printed research materials".

However, the devil is in the detail. Much of the work that will go into the project focuses on de-duplicating material to the point where one copy is stored in the British Library's document supply collection at Boston Spa and two further copies are held within Higher Education libraries somewhere (anywhere) in the UK. This will have to happen without inadvertently discarding the last copy of a title, volume or issue.
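As a rough illustration of that retention rule, here is a minimal sketch (invented libraries and titles, not UKRR's actual systems or data) of the check a participating library might run before withdrawing a volume: the British Library must hold a copy, and at least two other HE libraries must retain theirs.

```python
# Holdings records of the form (library, title, volume), as gathered from shelf-checking.
holdings = [
    ("British Library", "Journal of Examples", 12),
    ("Imperial College", "Journal of Examples", 12),
    ("Cardiff", "Journal of Examples", 12),
    ("Liverpool", "Journal of Examples", 12),
    ("Southampton", "Journal of Rarities", 3),   # no BL copy and only one holder
]

def can_discard(library, title, volume, holdings):
    """True if `library` may withdraw this volume under a 'BL copy plus two HE copies' rule."""
    holders = {lib for lib, t, v in holdings if (t, v) == (title, volume)}
    if "British Library" not in holders:
        return False                      # the BL would need to take this copy in first
    other_he = holders - {"British Library", library}
    return len(other_he) >= 2             # two other HE libraries must keep their copies

print(can_discard("Liverpool", "Journal of Examples", 12, holdings))   # True
print(can_discard("Southampton", "Journal of Rarities", 3, holdings))  # False
```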

The British Library will be the principal point of access to the collection, through its existing document supply services. According to the CHEMS report, the BL holds about 80% of the journal titles/volumes held collectively across UK HE, so it will actively take in material being discarded by universities where it is needed to fill gaps in its collection. Participating libraries will pay an annual flat fee for continued access to all of the materials within the UKRR. It is anticipated that this facility will be available to all HE libraries, regardless of whether or not they send printed material into the collection. It is intended that the UKRR will be integrated into the BL's collection as a whole in the long run, so it will eventually be accessible to any researcher through standard docdel services.

There are quite a few challenges inherent in the process of establishing the UKRR. These include:
  • Persuading academics and researchers to part with printed material to which they feel emotionally attached.
  • Collecting enough accurate data about holdings to be able to make informed decisions about what to discard and what to keep. Typically the journal holdings listed on library catalogues are out of date (volumes or issues have been lost, vandalised, or withdrawn earlier and the catalogue not amended) and do not offer the required level of granularity ("incomplete" instead of "lacking Vol 2, issue 1"), therefore a huge amount of shelf-checking has to be carried out. In addition to holdings information, the length of the title runs will have to be measured in order to plan space.
  • Deciding which institution draws the short straw and gets to keep the residual copies in perpetuity, and ensuring that this decision is recorded somewhere useful so that the volumes are not withdrawn somewhere down the line. SCONUL (Society of College, National and University Libraries) will act as the clearing house for these decisions and policies.
  • Making sure that all of the changes in holdings that are going on during the de-duplication process are made clear in university, union and BL catalogues in a timely fashion. This will be crucial, as several institutions will be de-duplicating simultaneously and will be cross-checking holdings against one another and the BL.

This sounds like a huge undertaking, and makes me feel rather glad that my home institution decided not to take part in the project for now. I wholeheartedly wish everyone involved (the initial project partners in the HE sector are Birmingham, Cardiff, Imperial College, Liverpool, St Andrews and Southampton University Libraries) the very best of luck! If all goes well during the initial stage - which runs until June 2008 - the processes and policies developed will be rolled out to all UK research libraries, including special and society collections and national libraries as well as those in Higher Education.

Wednesday, April 18, 2007

It's all over bar the blogging!

All good things must come to an end, and as of roughly 1.05pm this afternoon the UKSG's 2007 Annual Conference is all over. We've been blessed with great weather, great speakers (too many to link here, but check out previous postings), great entertainment (more on that shortly) and great expectations (for next year's event in Torquay!)

But if you, like me, have the post-UKSG blues, then fear not - memories and reports of the conference will continue to be posted right here on LiveSerials for several weeks to come. Not to mention the photos. Those will certainly be worth checking back for...

"The old git slot:": a life in scholarly publishing flashing before our eyes

"I've drawn the old git slot," said John Cox ruefully as he took the stage, and then proceeded to confirm that judgement by listing the plethora of modern office necessities not yet invented when he started in publishing, and bemoaning the "witless wonders" that are our modern youth.

Yet plus ça change, plus c'est la même chose. Scholarly publishing is essentially the same industry as it was when UKSG was founded 30 years ago - and in fact the principles on which publishing rested 300 years ago are still relevant today. Even commercial publishing has been around longer than we think; Cox cited examples from the 18th and 19th centuries. And back at the beginning of Cox's career, journal publishing was rudely healthy. Librarians had ample funding and researchers' appetites for information were not yet overwhelmed. But social changes began to limit the growth of libraries' budgets such that journals began to be cancelled and success for new journals was not so immediate (as Paul Calow noted in yesterday's "Financial Imperatives" session, it now takes 7 years for a new title to break even).

Then along came a spider ... or, in fact, its Web. The shift from print to online may not yet be complete, but about 90% of scholarly journals are now online, and this has changed the way that libraries and publishers do business together - for example, with consortial purchasing. One consequence of online publishing is the hunger for Open Access - an unproven business model which has not yet shown itself to be sustainable, says Cox, particularly across the broader and non-scientific literature. Further, as Sally Morris had noted this morning, the Open Archive movement is potentially damaging to the scholarly journal; the world's 850 institutional repositories may currently be scantly populated (with academics actually admitting they are "distinctly unwilling" to deposit), but they are being supported by a number of major funding agencies, and may yet grow sufficiently to change the current landscape.

Cox took a detour at this point to acknowledge the effect on scholarly communications of Google, "the search engine of choice for most of us" (albeit propounding the common misconception that the search giant has indexed "most journals" in Google Scholar). Google is getting closer and closer to us and will "shape the development of our industry over the next 5 to 10 years", having already revolutionised things with its PageRank algorithm.

The future for publishers, therefore, is in the functionality within which they wrap their content. If the research itself is freely available - and easily discoverable - elsewhere, publishers have to differentiate themselves with truly useful features (e.g. supporting datasets, taxonomies, community facilities). Cox praised OECD's SourceOECD for using the capabilities of online publishing to add massive value over the print, and Alexander Street Press for building communities in the humanities - demonstrating the value across different sectors.

Web 2.0 "will bring further changes", of which user-generated content and folksonomies have most relevance to scholarly publishing. They represent the value-adds which can differentiate publisher platforms from institutional repositories - if publishers are willing or able to make the necessary investment in technology, and to make the transition to being service providers rather than manufacturers.

Labels: , , , , ,

A beginner's guide to mining, and why you shouldn't do it anyway

Geoffrey Bilder contends that, when asked to deliver this session for UKSG, "I knew nothing about text mining". By the end of today's session, I suspected this was purely a comedy opener - either that, or he's really done his homework in the meantime.

Bilder promised to help us understand the concept of text mining and reach the stage where "you can avoid having to do it". He began by clarifying what data mining is *not*:
  • Data mining is not information retrieval. Tools which filter and refine searches to find specific bits of information are retrieving, not mining, data
  • Data mining is not information extraction. Tools that allow you to extract and normalise data from many sources, for further analysis, are extracting, not mining, data
  • Data mining is not information analysis. Tools that allow you to load, manipulate and analyse data are analysing, not mining, data.
However - put these together and you may have something closer to the concept of data mining. Data mining collates information - masses of it, and perhaps seemingly disparate - and looks at it in a new way to reveal something new; something previously unknown. Bilder cited an apocryphal example of data mining which, despite its lack of veracity, captures the spirit: a supermarket discovered that people who buy nappies (sorry, Geoff; I can't bring myself to use the word d*apers) will also often buy beer. More prosaically, data mining helped researchers make the connection between magnesium deficiency and migraine.
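For a flavour of what that collation looks like in practice, here is a minimal sketch (toy shopping baskets, invented for illustration) of the co-occurrence counting behind the nappies-and-beer story: count how often pairs of items appear in the same basket and see which unexpected pairs rise to the top.

```python
from collections import Counter
from itertools import combinations

baskets = [
    {"nappies", "beer", "crisps"},
    {"nappies", "beer"},
    {"bread", "milk"},
    {"nappies", "milk"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):   # every pair bought together
        pair_counts[pair] += 1

# The pairs bought together most often rise to the top.
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```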

Text mining is an extension of data mining. There's a false belief out there that people want to read scholarly articles - yet lots of evidence suggests they are doing everything within their power to avoid reading, because they can't keep up with the literature. Text mining helps us to extract the core facts from data that is designed for human, not machine, reading. It parses texts for data which can be reliably extracted and interpreted to create keyword-type labels for that text.

Bilder showcased the GATE tool (General Architecture for Text Engineering) and noted that its accuracy and value vary depending on the subject area and type of text being mined. But then comes the crunch: "the thing that keeps striking me is: if hiding information in unstructured text is a problem, shouldn't we be exploring new ways to publish?"

So Bilder proposes some new approaches which we could deploy to help users avoid text/data mining in future. He used an initial example of human reading being able to identify the different reasons why words in different types of phrase might be italicised (for emphasis; because the word is foreign; etc). He then showed the machine-readable version of the example, which would require the words not simply to be tagged with italic tags, but to be tagged with more useful, more granular tags denoting the different meanings intended by the italicisation. Bilder cited IngentaConnect's semantic tagging of data which can then be machine read by, for example, social bookmarking tools and RSS readers.
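A hypothetical fragment of markup (the tag names below are invented for illustration, not IngentaConnect's actual schema) makes the point: presentational tagging records only that two phrases are italicised, while semantic tagging records why, which is what a machine reader needs.

```python
import re

# Presentational markup: a machine only knows that two phrases look the same.
presentational = "The <i>raison d'etre</i> of the <i>knowledge base</i> is context."

# Semantic markup: the same sentence tagged so a machine can tell the italics apart.
semantic = ('The <foreign-phrase lang="fr">raison d\'etre</foreign-phrase> '
            'of the <emphasis>knowledge base</emphasis> is context.')

# A crude machine reader can now pull out just the foreign phrases.
print(re.findall(r"<foreign-phrase[^>]*>(.*?)</foreign-phrase>", semantic))
```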

He then introduced Nature Publishing's Open Text Mining Initiative which moves beyond tagging of metadata to tagging of full text, to enable researchers to make use of a full article without necessarily having access to the human-readable full text. An OTMI file pre-identifies the number of times particular words appear in the article, and includes out of order snippets - so that a text mining tool can make use of the text, but humans cannot read it. OTMI thus allows providers to open up paid archives of content to allow machines to mine it, thus making it more useful for users.
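The sketch below illustrates the general idea only - Nature's actual OTMI files are XML with their own schema, which is not reproduced here. It shows how word frequencies and deliberately shuffled snippets let a machine mine a text that a human can no longer read in order.

```python
import random
import re
from collections import Counter

article = ("Magnesium deficiency has been linked to migraine. "
           "Patients with migraine showed lower magnesium levels. "
           "Supplementation reduced migraine frequency in the trial.")

words = re.findall(r"[a-z]+", article.lower())
frequencies = Counter(words)                   # how often each term appears

snippets = re.split(r"(?<=\.)\s+", article)    # sentence-level snippets
random.shuffle(snippets)                       # out of order: minable, but not readable

print(frequencies.most_common(5))
print(snippets)
```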

But oh, says Bilder, so much more is possible (and everyone in the room sits wide-eyed with wonder at this emerging new dawn).

The semantic web, he reminds us, is "web as database", where every item of information is categorised to aid its integration and usage elsewhere in the web. Information items are identified as either subjects (Bill), predicates (is the brother of) or objects (Ben), which are then linked together in a simple data structure called a "triple" (Bill is the brother of Ben). A query language (such as SPARQL) can be pointed at an RDF data file (made up of triples) thus enabling the web to be queried in a way that was previously restricted to databases.
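A minimal sketch of that model, using the third-party rdflib package for Python (the example.org namespace is purely illustrative): assert the "Bill is the brother of Ben" triple, then query the graph with SPARQL.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.Bill, EX.isBrotherOf, EX.Ben))   # subject, predicate, object: one triple

query = """
    PREFIX ex: <http://example.org/>
    SELECT ?sibling WHERE { ?sibling ex:isBrotherOf ex:Ben . }
"""
for row in g.query(query):
    print(row.sibling)                     # -> http://example.org/Bill
```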

Given that we *can* provide data in such a well-tagged and structured way, users shouldn't *have* to data mine. It's like the early evolution of publishing - once we had created the concept of page numbers and tables of contents, wasn't it only logical to then implement these in order to make life as easy as possible? "Before we go out and get everybody text mining, I think we should ask ourselves the question: why are we publishing text? We can also publish data. We don't have to strip it out, we can supplement it and help our users."

For a full moment there was an awestruck silence - and then, as testament to Bilder's ability to make non-technical audiences comprehend densely technical subjects, the questions came.

Where might this RDF data come from - who has to create it? Bilder replies that publishers generally have it and are already doing things with it, e.g. sending it to CrossRef. Plus, Nature's OTMI has a tool that can convert data from the PubMed DTD to OTMI.

How many researchers are attempting to do text analysis in this way - is it a small number that is likely to grow? Bilder says "a lot of organisations [e.g. PubMedCentral] justify what they do on the basis that the data they collate will be data mined". He notes that it's not, of course, necessary for data to be gathered in one place, as machines that can read data can also retrieve it.

What's the typical publisher policy, given that text mining activities have in the past set off the security systems and brought up IP blocks? Bilder notes that agreements may be necessary between miner and provider to ensure the activity can take place. Any interface can create an area for this kind of usage of its data.

Labels: , , , , , ,

Will the King's Horses be there if the parasite kills the host?

Sally Morris, Editor of Learned Publishing, was a welcome last-minute stand-in for the sadly indisposed Peter Banks, to whom the UKSG community sends its best wishes.

Morris' rhetoric is familiar to those who follow lib-license and other discussion lists, and her lobbying on behalf of publishers during her years as Secretary General of ALPSP has shown her to be a staunch defender of existing scholarly communication models. Today she reiterated for the uninitiated many of the points she has made so vociferously in the past, and cited again the wealth of evidence which ALPSP, amongst others, has gathered to help inform the Open Access debate. We were reminded of surveys demonstrating that librarians will cancel journal subscriptions if reliable alternatives are available (Beckett & Inger 2006), a potential outcome also recognised by funding agencies. Morris restated the reasons why the journal model is worth defending: chiefly, its facilitation of the peer review process (for which, thus far, alternatives such as Nature's admirable open peer review experiment have failed to substitute adequately), and the huge contribution made by editing, both for readability and for accuracy (in 5.5% of cases, editorial changes "materially altered the sense of the article" - Wates & Campbell, 2007) and to support interactivity (e.g. improving references so that they will link).

Morris went on to debunk, again, the myths that journal publishing is exclusively a greedy for-profit game, and that journal publication could instead be supported by other publishing programmes or society activities. Raym Crow's 2006 data shows that more than half (55%) of journals are non-profit, while Baldwin (2004) demonstrated that the surpluses from journals support a variety of functions - keeping membership and conference fees down, education and bursaries, and research funding (it is also the case in many publishing houses that journal publication tends to support the lower-profit activity of book publication, rather than vice versa). Since 90% of society publishers have only one journal, they would be at risk if their subscription revenues were cannibalised by OA, as would niche and low-profit journals.

But given the broader mandate of a UKSG plenary paper, and having covered the background, Morris now developed her position further, and towards some conclusions that perhaps one wouldn't have initially suspected. Whilst publishers do need to continue engaging with others in the scholarly community to ensure the risks and consequences of self-archiving are understood, they should also be proactively experimenting (for example with hybrid open access models) and avoiding regressive insistence on retaining the existing model. Morris also picked up on a concern felt by many - that scholarly communication is changing in many ways, and we need to ensure the open access issue is not clouding our awareness of other potential revolutions. She questioned how else publishers can add value, and suggested that we need to be clear about those functions of the journal important enough to retain - even if the model around those functions should evolve. Ultimately, we should allow for the possibility that publishers and journals may cease to exist - but we should be very clear that this is a desirable and practicable future before throwing the baby out with the bathwater. Humpty Dumpty was fine on the way down, cautioned Morris - it was only after he'd hit the ground that it turned out to be impossible to put him together again.

Never mind the fact that we no longer have King's Horses and King's Men to help us should we turn out to regret a careless destruction of the journal model.

Labels: , , ,

Tuesday, April 17, 2007

Author Identification Project in the Netherlands

The key issue in author identification is not whether an author produced a particular work (although the problem of orphan works is a separate issue), but whether this author is the same person who produced works A, B and C. Disambiguation, particularly in cataloging, is a significant problem. Catalog records can contain abbreviations, variant spellings or missing diacritics; authors might change their names, or go by nicknames or pseudonyms; and transliterating languages like Japanese, Chinese or Russian into western Latin script can lead to spelling variations. One project aiming to address this situation is based in the Netherlands and consists of a partnership among 12 universities, SURF, UCI and OCLC Pica.
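As a minimal sketch of what disambiguation involves at the data level (toy name strings, not the DAI project's actual matching rules), the snippet below normalises variant forms - stripping diacritics, case and punctuation - so that records likely to belong to the same author can be clustered under one identifier.

```python
import unicodedata

def normalise(name):
    """Lower-case, strip diacritics and drop punctuation from a name string."""
    decomposed = unicodedata.normalize("NFKD", name)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return "".join(c for c in stripped.lower() if c.isalnum() or c.isspace()).strip()

records = ["Müller, J.", "Muller, J", "MULLER J.", "Jansen, P."]
clusters = {}
for record in records:
    clusters.setdefault(normalise(record), []).append(record)

# Variant forms of the same name fall into one cluster, ready for an ID assignment.
for key, variants in clusters.items():
    print(key, "->", variants)
```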

Daniel van Spanje of OCLC PICA presented a status update on the project. The Digital Author Identification (DAI) project {NOTE: sites are in Dutch} grew out of DARE (Digital Academic REpositories), a Dutch initiative to stimulate the production of digital science online. The project's goal is to uniquely identify all of the approximately 40,000 authors conducting research in the Netherlands. A successful pilot at the University of Groningen in 2005-06 identified approximately 3,000 unique authors and researchers. The project was then rolled out to 13 additional institutions in 2006, with completion expected later this year. Using METIS, a registry of metadata on publications and researchers in the Netherlands, and the GGC, the Dutch national union catalog system, information was gathered on authors in order to distinguish and de-duplicate them for assigning IDs. The project has created a central registry of names covering a wide range of identifying information, such as variant names, nationality, language, date of birth and publications. After the pilot, additional fields such as sex, organizational affiliation, titles and dates of employment were added to the file.

The project certainly has privacy ramifications, although according to Dutch understanding of privacy regulations, the project is justifiable as a library/bibliographic resource. It is unclear that the same methodology would be similarly acceptable in other countries.

There are some other initiatives that relate to this project at an international level: ISO Technical Committee 46 (Information and Documentation) began work on the International Standard Party Identifier (ISPI) in August 2006. ISPI is envisioned as a new international identification system for the parties (persons and corporate bodies) involved in the creation and production of content entities. As envisioned, the work already done by the DAI project would be incorporated into a broader international standard if one is agreed upon. Certainly there will be more work on this important issue and, hopefully, we can learn from the progress made by OCLC PICA, SURF and the DAI project.

Labels: , , , ,

Caution: statistics operating in this area

Jason Price, Claremont Colleges' Life Science Librarian and today's Duke of Hazards, took us on an entertaining journey through the potential pitfalls of over-reliance on journal usage data. He opened by warning us of some general hazards to beware of:
  • narrow definition of use
    • COUNTER JR 1 (full text article requests) is only one dimension of use; others might be:
    • A-Z list click-throughs
    • citations from your faculty
    • impact factor
    • which journals your faculty publish in, and how much
    • surveying faculty/researchers
    • Page Rank-type
  • vagaries of user behaviour
    • did the user actually get any value out of something to which they clicked through
    • Google Accelerator preloads links from pages users visit
  • different dissemination styles of teaching
    • does the lecturer download and circulate (or post internally) the PDF, or circulate a link to it?
  • granularity of usage reports
    • if report is at title-level, there's no indication of whether accesses are e.g. to purchased frontfile or free backfile
Price then moved onto some more specific hazards that may be encountered:
  • determining cost per use
    • take an annual COUNTER report and divide the package fee by the article views (see the sketch after this list). But what package fee? The *stated* annual fee, or the actual cost, factoring in the additional lock-in fee?
  • comparing to ILL cost
    • the views measured in an online environment cannot be directly correlated to what would otherwise be ordered via ILL
  • comparing across publishers
    • different interfaces can affect number of article deliveries, for example if linking to the article immediately renders its HTML version - so a user then choosing to access the PDF as well could count as two full text downloads
      • Price notes that the COUNTER Code of Practice does require providers to de-dupe statistics to provide a "unique article requests" figure
    • exposure in Google Scholar can also skew usage
    • (at this point, Price's laptop popped up a helpful flag to let us all know that it is hazy but warm in California this afternoon, which we all enjoyed)
  • ignoring by-title data
    • Price showed a classic long-tail curve with low use titles having very high cost per use - these would be the ones to be excluded from/replaced in future packages
  • lack of benchmarks
    • your concerns about the price per use you're paying could be amplified or assuaged by finding out what other institutions' cost per use is for the same publisher
    • it's better to evaluate both cross-institution and cross-publisher to get a more general picture for comparison
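Here is the sketch referred to above: a minimal illustration (all figures invented) of the cost-per-use arithmetic, including a naive by-title breakdown that shows how a long tail of little-used titles drives up cost per use. Apportioning the package fee evenly across titles is a simplification, one of the very judgement calls Price warned about.

```python
package_fee = 25000.00                      # actual annual cost of the package
title_downloads = {                         # COUNTER JR1 full-text article requests
    "Journal A": 4200,
    "Journal B": 950,
    "Journal C": 40,
    "Journal D": 6,
}

total_downloads = sum(title_downloads.values())
print("Package cost per use: %.2f" % (package_fee / total_downloads))

# Apportion the fee evenly across titles to see which ones drag the average up.
fee_per_title = package_fee / len(title_downloads)
for title, uses in sorted(title_downloads.items(), key=lambda item: item[1]):
    print("%s: %.2f per use" % (title, fee_per_title / uses))
```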
Price summarised his recommendations to close before ceding the floor to COUNTER's Peter Shepherd, who opened with an overview of the COUNTER codes of practice, its recommendations and processes, and the current level of compliance within the industry. He outlined some projects which have been undertaken using COUNTER-compliant statistics, for example, the NESLi2 analysis which was able to output cross-publisher comparisons of per-article download costs and growth in full-text article downloads.

Shepherd then overviewed global metrics for evaluating e-journals, including the impact factor, and the potential for a new Usage Factor. Preliminary conclusions of UKSG's recent research in this area indicate that there is considerable support from author, librarian and publisher communities. The UKSG project outcomes also suggest that the COUNTER codes of practice may not be adequately robust, and that there remains frustration at the lack of comparable, quantitative data - particularly given that continuing print usage is not included.

Shepherd proceeded to identify some of the current issues faced by COUNTER, including:
  • interface effects on usage statistics as referenced by Price earlier
    • filter project concluded that it's not best practice to render the HTML regardless of the user's potential choice
    • at most, usage statistics are inflated by no more than 30% as a result of multiple formats
  • separate reporting for archives
    • can currently be requested as a supplementary/sub-report
  • usage of content within institutional repositories
  • involvement with SUSHI protocol to automate retrieval of usage data from providers
And he finished with some future challenges:
  • continued evolution of codes of practice - perhaps with respect to federated searching, "pre-fetching" (Google Accelerator), usability, additional data, new categories of content
  • deriving metrics from the codes of practice e.g. cost per use, Usage Factor for journals

Labels: , , , ,

A couple of pictures of our sunny conference venue...


The Warwick Arts Centre - venue for the Plenary Sessions and Exhibition


The courtyard of the Social Sciences Building - venue for the workshops and briefing sessions


E-books: plugging and playing in Toronto

"E-books have become my new passion," says Warren Holder, University of Toronto Libraries. Toronto's recent focus on e-books has been motivated by demand from users, from faculty (more in medicine? but also in social sciences) and also from students, who have grown up with the web and expect its immediacy and simplicity (Holder, like earlier speaker Tom Davy, also played video clips of students talking about their research habits: "The physical library? no ... I get out of there as quickly as I can."). Other factors included:
  • inter-campus borrowing (increasing, and less cumbersome electronically)
  • high-usage of short term loan content
  • the need for new acquisition models.
Furthermore, U Toronto's existing subscriptions to e-book packages show that electronic use exceeds print use in 58% of cases where a book exists in both formats.

So the University of Toronto - listening to its Google generation - decided "it behooves us to create a single interface to our e-books ... most students don't care who the publisher is, or even whether it's a journal article or a chapter of a book - they just want the content. We want them to be able to plug and play with one search." They are piloting their own platform to host their 20,000 e-books from 5 major publishers (and, longer term, their journals and A&I content). The platform will only contain the content to which the university has access (I do find myself wondering - perhaps unreasonably - if this doesn't throw the baby out with the bathwater: avoiding the frustration of non-licensed dead ends, but at the same time restricting students' view of the wider literature?)

U Toronto's analysis of various usage statistics from its pilot platform is beginning to indicate how students and faculty are using the content, and they intend to begin logging and analysing referring links and navigation between books. Again, current usage patterns across days of the week or months of the year reflect those seen for other types of content (which I take as good evidence that users are format agnostic). Interestingly, reading patterns within e-books indicate different types of user with different habits (e.g. some read the front and then skip to the back, others read about half and then skip...), whilst reviewing the evolving top 10 lists demonstrates a real depth and breadth to the range of e-books being used.

During questions, Don Chvatal from Ringgold mentioned the received wisdom that 90% of books in academic libraries are used only once every 10 years, and asked how this would affect U Toronto's e-book purchasing. Holder responded that content is purchased for future possible views, and that such statistics can only become available after you've bought the book. "Our responsibility now", he added, "is to build critical mass, and we're currently getting some good prices for e-books that won't necessarily be available longer-term."

Labels: ,

Thinking outside the books:

Tom Davy of Thomson Learning EMEA proposed to examine how the textbook has evolved to its present form, and to ask whether, given technological advancements, it's still relevant to today's students. He gave an initial definition of a textbook, and noted that whilst textbooks were initially very dry ("Learning wasn't meant to be fun; it was the price you paid for the sex, drugs and rock'n'roll that went with student life in the 70s"), they have been "sexed up" over the years with brighter designs and pedagogical features such as learning objectives and case studies. By the late 80s, everyday students (not just geeks) had come to use computers in their work, so textbooks began to be supplemented with software - from floppy disks through CD-ROMs to websites.

While teachers and students have different philosophies and expectations about the teaching process, publishers' objectives are primarily about winning market share, through creating superior text books and persuading educators to adopt them. "It has become a bit of an arms race", with the UK market suffering from defensive publishing to combat second-hand sales, and growth coming from price increases rather than volume sales. Some campus bookstores are returning up to half of the textbooks they order, as smart students shop around online for the best deal - which may require purchasing an overseas edition (Asian editions may be half the price of UK editions which in turn may be half the price of the US edition).

Other factors include decreasing face time with tutors (students are spending less on text books as no-one is driving them to purchase) and students' increasing habit of seeking "free" answers, for example via Google. Davy showed some compelling video clips which, whilst amusing, demonstrated students' reliance on Google - even when they are aware of the higher value of authoritative resources ("I might look for some scholarly papers in JSTOR ... teachers love that").

As such, the diminishing returns in educational publishing, and the lack of return on investment, are causing several major publishers (e.g. Wolters Kluwer) to leave the sector. Could its future be digital, and if so, in what format?

Davy proceeded to evaluate paper vs digital formats for educational publishing. Textbooks, he noted, are portable, tactile, independent of other equipment or power, and easier to read - but they are linear and can support only a single learning style. Meanwhile, digital textbooks and the media supporting them can also be portable and tactile; the equipment is ubiquitous anyway (iPods, mobile phones) and the format supports interactivity, multiple media and individual learning styles.

Moving from a book-centric model to a learning-objective-centric model transcends the problem of content silos and incorporates other web-based media, e.g. YouTube and Flickr, whilst supporting instant access to authoritative content whenever it is needed. Why isn't there more demand for this kind of next-generation learning resource? And why do providers give away the arguably more useful non-print materials?

Davy suggested that the term "e-book" is a distraction, since it implies a simple digitisation of textbooks for delivery as PDFs. This is inadequate; we need to break the content down further for delivery at much more granular levels - chapters, tables etc. Innovations in web platforms and search technologies make it easier to achieve this; customised learning objectives can be created for individual students, and linked to appropriate resources. Social websites can teach us a great deal about user behaviour in the core market of 16-25 year olds.

Davy closed by presenting three key opportunities for progress:
  • university managers need to compete for students' attention as well as their fee income
  • librarians need to improve marketing services and move into the campus bookshop space to increase the library's status within the institution
  • publishers need to recognise that whilst the textbook won't become extinct, they need to think "outside the book" and begin creating digital learning objects.
During questions David Thomas (ScholarOne) suggested that, whilst we've heard a lot about e-textbooks over the last 5 years, sales show that they haven't really achieved traction. How will this be achieved? Davy responded that this will depend on faculty embracing e-resources as part of the core material that they instruct students to use. He added that the business model (requiring the student to purchase from a bookstore) is a block; we need to be able to license the package to the university, and have the costs rolled into the course fee. (I would reference here the UK HE service Heron, which provides digitisation and copyright clearance services to higher education institutions in the UK; Heron's Packtracker service digitises the materials required for a given course, clears the necessary copyright permissions, and makes the pack available for students to access online.)

Labels:

Google Scholar @ GSK: from discussion to implementation

Jennifer Whitaker, a member of the Published Information Group within the Information Management team at GlaxoSmithKline, gave a succinct presentation at her briefing session, which nevertheless provoked much discussion amongst the attendees.

Jennifer described the thinking behind the decision to promote Google Scholar to researchers as a means of providing a quick search across scientific information on the web. She went on to explain how Scholar was positioned within the company; the information given to researchers; how Scholar has subsequently been used at GSK; and the effects on usage of the standard bibliographic databases subscribed to at the organisation.

The reasons behind the decision to promote Scholar were pragmatic:
  • There was a wish to maximise the use of expensive subscriptions to full-text e-journals.
  • There was a need from researchers to be able to make "quick and dirty" searches for background information on topics which would still yield information from quality-controlled sources.
  • The standard Google search was already a highly used tool, and as such there was high awareness of the Google brand.
  • Google Scholar was easy to use, yet offered some of the standard search features of a bibliographic database (such as journal title search field).
  • Google Scholar offered broad coverage of scientific information across disciplines.

There were several strands to the implementation:

  • An evaluation of scientific search engines was carried out, with users being informed of progress and results via the Library webpages.
  • Once it was decided that Google Scholar would be the preferred choice, the Google Scholar toolbar was added to the primary Library Resources webpage in a prominent position.
  • Much thought was given to communication with users, both in person through roadshows to departments, and through FAQs mounted on Library website.

Information professionals at GSK were at pains to inform users that Google Scholar would not be the answer to all their information needs, and should not replace the use of bibliographic databases for in-depth searching. It does not offer comprehensive coverage of scientific literature, and does not necessarily pick up the most recent publications. The search engine is also still in beta, which means that functions could change, and that there is a possibility that it could be withdrawn or become a chargeable service at any time.

An additional factor at GSK is the commercial sensitivity of the searches that researchers carry out. All employees are trained to be aware that their searches of the open web are insecure and can be tracked, and that standard web search engines should not be used when there is a need for confidentiality. Researchers were reminded of the other online resources on offer, and that the Library staff were also on hand to offer advice on searching and locating full text.

Trends in usage statistics for Scholar at GSK showed a steady increase between the implementation in June 2005 and November 2006, with marked increases in usage at the times when the search toolbar was added to the Library webpages, and also when full text linking to GSK's own full text subscriptions was added. Interestingly, usage statistics for the top 6 bibliographic databases at the company showed static usage over the same period, demonstrating that Scholar was a complementary facility rather than a competing one. Unfortunately, no statistics on the usage of full text journals were offered, so we could not see if one of the objectives of the exercise, to make these resources more visible, was achieved.

Jennifer concluded by stating that the decision to promote the use of Google Scholar at GSK was a successful but pragmatic one, and that it will be kept under review for the foreseeable future, particularly as new scientific search products become available.

There were many questions from the audience - I have paraphrased some of these below, with Jennifer's answers:

Q: Is there a source/coverage list for Google Scholar that GSK users can consult?

A: No there is not.

Q: Does the fact that Google Scholar is still in beta concern you?

A: It is a concern, but users are informed of the beta status and what it implies through face-to-face discussion and online FAQs.

Q: Has there been any research on how researchers at GSK use Google Scholar?

A: Nothing formal, however anecdotal evidence suggests that many are using it as a quick way of locating a paper for which they already have the bibliographic details, and for basic background information on a topic, for example a particular disease.

Q: Have you considered using server log metrics to find out who is using Scholar and how much?

A: Not yet, although we will think about this.

There was some general discussion amongst the audience about the desirability of Google having data on Library holdings in order to provide the OpenURL linking service, although no definite conclusions were drawn.

Q: Would you consider the implementation of a cross-search or federated search engine which could search across your subscribed databases?

A: We are keeping all types of search options under review as they are developed.

Q: If your statistics had shown a drop in use of your bibliographic databases, would you have considered cancelling subscriptions?

A: We would have felt that this was a failure of communication on our part, as Google Scholar is not intended to replace these resources. We would be inclined to re-double our efforts to market the databases rather than consider cancelling them.

It was noted by some members of the audience that users trust (or at least are highly aware of) Google, whilst some information professionals show a marked distrust of Google Scholar. It was also noted that users conflate brand awareness with trust - for example unpublished research has shown that if a set of search results is branded with the Google logo, users will trust the results, even if they have actually been drawn from other search engines.

Altogether this was an interesting session which once again highlighted that speed and ease of use are incredibly important to searchers, even those in the pharmaceutical industry.

Labels: ,

Monday, April 16, 2007

Federated Access Management in the UK

This morning I attended the UKSG workshop on "Federated Access Management in the UK" presented by Nicole Harris of the JISC.

Harris described how federated access management is a key part of the JISC's overall strategy and is currently a high priority. The session therefore served to raise awareness of the project, outlining some of the benefits, the current state of play, and some of the options available to both institutions and service providers.

In a federated system, authentication is devolved to the institution, which is responsible for identifying all of its users who are entitled to access a resource. Access control is then negotiated by sharing metadata about the user between the authenticating institution and the service provider.
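A minimal sketch of that division of labour (invented attribute values and licence table; a real deployment exchanges signed SAML assertions rather than Python dictionaries): the identity provider authenticates the user against its own directory and releases a small amount of metadata, and the service provider makes the access decision from that metadata plus its licence records.

```python
LICENSED_SCOPES = {                      # which resources each institution has licensed
    "uni-a.ac.uk": {"journals", "ebooks"},
    "uni-b.ac.uk": {"journals"},
}

def identity_provider_assertion(username, password):
    """The institution checks its own directory and releases minimal attributes."""
    # A real identity provider would verify against LDAP/Active Directory; faked here.
    if not (username and password):
        return None
    return {"eduPersonScopedAffiliation": "member@uni-a.ac.uk"}

def service_provider_grants(assertion, resource):
    """The service provider trusts the assertion and checks the licence, not the individual."""
    if assertion is None:
        return False
    scope = assertion["eduPersonScopedAffiliation"].split("@", 1)[1]
    return resource in LICENSED_SCOPES.get(scope, set())

assertion = identity_provider_assertion("alice", "s3cret")
print(service_provider_grants(assertion, "ebooks"))    # True: uni-a.ac.uk licenses ebooks
print(service_provider_grants(assertion, "datasets"))  # False: not licensed
```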

The architecture (Shibboleth) that the JISC have adopted was described as "technology neutral" but must be SAML compliant. Harris explained that there was "international convergence" on Shibboleth and SAML for federated authentication and that this wide support would not only secure the future development of the system, but would also create a wider market for suppliers.

At present institutions can choose to deploy open source software to implement federated authentication, or instead license commercial software. Institutions may also outsource the authentication to an "identity provider" much as most institutions already do with the Athens service.

Athens itself is not going away. Institutions will be able to continue to use Athens but will ultimately be charged a fee in order to do so; JISC funding for the Athens service will shortly cease. "Gateways" that bridge between the older Athens service and the newer federated authentication options provide an upgrade path for institutions as well as interoperability between the different services.

Harris stressed that there will be considerable institutional investment (in both time and resources) in order to implement federated authentication and suggested that institutions begin including this in their IT strategy.

The return on this investment will be reaped in several ways. Users will be able to benefit from "single sign-on" across multiple services, not only from external providers but also internal to the institutions. This should address repeated problems with users gaining access to resources as well as the need to manage many different accounts. Ultimately a federated system may also allow institutions to provide collaborative resource sharing, licensing users from other institutions to access internal systems.

The expectation is that by November 2008 around one third of UK institutions will have implemented federated access management, with the majority completed by November 2009. It seems likely that the Athens service will continue to run, as a paid for service, until at least 2011.

Readers interested to learn more about federated access management should visit ukfederation.org.uk.

Labels: , , ,

Peter Shepherd reports on Usage Factor Study

The UKSG sponsored a Project COUNTER study into the feasibility of developing a new metric for assessing journal quality, based on the electronic traffic on a journal's site. The "Usage Factor" is envisioned as the ratio of article downloads (measured by COUNTER-compliant statistics) to the number of articles available. This approach could provide an alternative quantitative measure of quality to the broadly accepted Thomson Scientific ISI Journal Impact Factor (Wikipedia). While the Impact Factor has many benefits - its global, independent and well-established position in the community, the difficulty of manipulating it, and the fact that it broadly reflects consensus views of quality - it also has disadvantages, particularly regarding coverage, its STM focus, and the limited types of usage it reflects. Also of concern are the way the Impact Factor is applied by publishers and the reliance on it as a measure for performance review in the academy. Peter Shepherd, Project Director of COUNTER, conducted the research and reported on the results of the recently completed survey.
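In its simplest envisioned form the calculation is just a ratio; the sketch below uses invented figures and a one-year window, while the actual measurement period, content types and data sources are exactly the open questions the study set out to explore.

```python
def usage_factor(counter_downloads, articles_available):
    """Total COUNTER-compliant full-text downloads divided by the number of articles available."""
    return counter_downloads / articles_available

# e.g. 180,000 downloads recorded over a year against 1,200 available articles
print(round(usage_factor(180000, 1200), 1))   # -> 150.0
```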

Separated into two phases, the survey explored the community's general assessment of the relative value of a standardized comparative measure of online usage, its feasibility and the overall level of interest in developing such a measure. The first phase consisted of personal interviews with 29 authors, publishers and librarians, while the second phase was a broader online survey of librarians and authors. Among the topics covered during the interviews were: reactions, in principle, to the Usage Factor; possible practical implementations; what time windows might be appropriate for calculating the Usage Factor; practical ways to consolidate the information and who might take on that responsibility; and what the implications might be for participants and for non-COUNTER-compliant titles.

Peter reported that there was broad support for a usage-based quality measure, particularly among authors and librarians. Despite moderate dissatisfaction with the Impact Factor and a desire for new quantitative quality measures, authors appeared to be more reticent about changing their behavior on the basis of a new measure. Publishers were more mixed in their support, although their concerns tended to focus on policy and approach rather than on the principle. Of particular interest to the library community is the development of a broader standard by which titles can be qualitatively measured. One large university librarian reported being asked by their Dean of Research how they measure the quality of the 22,000 journal titles not covered by ISI. Interestingly, a Usage Factor, if it existed, would rank highly in librarians' purchase decision matrices.

COUNTER is seen as an increasingly accepted basis for usage statistics, and while its application isn't universal among publishers, it is rapidly being adopted as an accurate representation of online usage. Many of the open questions remain in the details of what precisely will be measured, over what time frame, and who would manage the process. The devil, as they say, is in the details. For example, difficult questions exist about what exactly should be measured - full text only, articles versus other content, and usage of multiple versions (or versions in multiple locations). Also, because people can see the value of a high ranking and the possibility of misuse or abuse, a trusted third party to establish the criteria and to calculate, audit and disseminate this information to the community will be key to the project's acceptance.

The meeting ended with a straw poll of the attendees and the vast majority indicated that they supported moving forward to a more specific test development phase. A full report of the research is being provided to UKSG and an article on the results will be released later this year.

Labels: , , , ,

"The Great Age of Librarians is Just Beginning" or will the "Wisdom of Crowds" Reign?

Monday 16th April, morning session

Today I learned that being a librarian is probably quite an exciting occupation to be in right now, not just a soon-to-be-extinct vocation for those who like the smell of (possibly dusty) printed paper. If Plutchak is right, librarians can take advantage of the digital age and the plethora of search engines to free themselves from standing behind desks and start to work with researchers within their departments, thereby becoming more involved in the reason for the research. He suggested that the librarian can help the researcher find the "joy and delight" in the act of discovering and using research.

On the other hand, you might argue that the librarian's days are numbered and that Google's philosophy of the "wisdom of crowds" will dominate, making the masses the arbiters of what is a high-quality and appropriate resource for a researcher to use. Philippe Colombet from Google confirmed that the "wisdom of crowds" is still their philosophy, but didn't actually say he thought librarians' days were numbered. Clifford Guren at Microsoft reported that Live Search Academic reflects the need for qualification of content that is added to a search index, and that in the future search engines will need to start to rely on rankings based on criteria such as peer review.

Ultimately I think we need to tackle the overwhelming amount of information that researchers are faced with from both directions. Online research communities will intervene and provide the "wisdom of [small] crowds". Librarians should be encouraged, by their institutions and beyond, to go out and spread their advice within research contexts. Indexers of content should continue to capture the breadth of content so that the obscure is available, but should also consider offering quality indicators for the content in their databases. (IMHO)

Labels: ,

Marketing the Library - Using technology to increase visibility, impact and reader engagement.

A Monday afternoon plenary session presented by Melinda Kenneway of TBI Communications

Melinda started by explaining that simply producing a product isn't enough - you need to market it, and this applies to libraries too. Marketing is a skill that librarians need to learn. Digital is going to become the main medium for marketing, because it's where the users are. It's also particularly good for certain things - it can be customised, personalised, shared, two-way, interactive and cost-effective. Importantly, you can also measure the impact of what you do.

Melinda's presentation concentrated on five key areas that need to be taken into consideration when creating a marketing strategy for a library:

1. Create a powerful digital brand. The changing nature of the library user (visiting the physical library less, using search engines, etc.) creates a "brand challenge" for the library. There is a need to shift perception of what the library is - that it's not "just books". The question libraries need to ask is "What's our big idea?". Melinda gave several examples: the Idea Store, a rebranded public library in Tower Hamlets, which has seen previously-declining membership go up since its relaunch; the American Library Association's @Your Library campaign; the Open University's DigiLab. As for action at a more micro level, Melinda advised that librarians should badge library content at a deep level (not all users come through the homepage), use branding tools provided by vendors, and create an engaging online experience through the library website.

2. Personalisation. Users at computers want it all to be about them. Segment your market, by role (student, faculty, etc.) but also by patterns of behaviour. Pinpoint marketing - get the right message to the right people at the right time. Customer Relationship Management (CRM) systems can help you with multi-channel marketing. The Open University has a screen saver that shows different messages targeted at staff, students, etc.

3. Go where your users go, which is no longer just the library building. PDAs and mobiles are a growing way to reach users - 100m ad messages are sent to mobiles every month. Most people will accept relevant advertising on their mobile. An even newer (and perhaps scarier?) area is RFID marketing - Radio Frequency ID tags, which are already in UK passports, can act as tags on the user that allow them to download relevant information as they move around a building or even a city. Some libraries are already moving into mobile marketing: Manchester Metropolitan University is sending text messages to users about overdue books, and has podcast tours of the library. Duke University issues iPods to all users, with orientation info, academic schedules, lectures, etc.

4. Online community networks are another natural marketing channel, although one to be used with caution. Social networks such as YouTube and Flickr, but also virtual worlds such as Second Life. Marketing through networks requires subtlety of touch, and needs to add value to the community.
How could this work in the library? Imperial College London has produced a humorous plagiarism video on DVD, which could be condensed and posted to YouTube. An ALA video entitled "March of the Librarians" - a spoof of March of the Penguins - got 190,000 views on YouTube. Melinda noted that although it is entertaining viewing, it probably doesn't do much to bolster the image of librarians!
Billboards are for sale in Second Life. Cybrary City has recently been created in Second Life, where libraries can showcase their digital resources. Melinda cautioned again that if you're going to try marketing in online communities you must make sure that your users are there, that you add value and that you don't just spam or clutter.

5. Community Marketing. Broadcast your message to a few who will pass it on to the wider community, rather than the old approach of broadcasting your message to all and sundry. Melinda cited the example of the launch of GMail via power users who could send invitations - demand created by limited availability.
What's the best approach? Provide a great service, provide tools, create and support user groups, support communication within those groups, and participate.

In conclusion, Melinda recommended ten things a library could do now:
1. Manage your brand
2. Create an engaging website
3. Segment your customers
4. Tailor communications to key groups
5. Invest in a CRM system
6. Start a library blog
7. Plan for mobiles
8. Post a video to YouTube
9. Post a podcast in Podcast Alley
10. Visit Cybrary City in Second Life

Good marketing is giving people the information they need to make informed choices - isn't this much the same as good librarianship?

T. Scott and the Search Giants: quest for the information industry's future

You'd expect our first two speakers, Microsoft's Cliff Guren, Director of Publishing Evangelism, and Google's Philippe Colombet (Manager, Strategic Partner Development), to share a number of common themes, and indeed they did (and, for that matter, a fast and loose approach to their instruction not to use their prime speaking slots for product reviews). Both referenced the old adage that "if it's not online, it doesn't exist": a myth, said Microsoft, since only 5% of the world's information is currently available online; but one we must acknowledge and act on, said Google, as offline data will effectively cease to exist if it cannot be found in online searches.

Both speakers also noted that "what is at the periphery will be at the centre of our lives". Colombet noted that new formats and the increasingly networked world are revolutionising user activity and the speed at which information is disseminated - whether it's politics (his ability to follow the French elections by downloading podcasts while travelling) or football (the Coupe de Boule ringtone, made available online immediately after Zidane's assault on Materazzi and picked up by Warner for release in 20 countries).

And both talked about the rise of social software as something that will pervade the scholarly communications process, with Guren noting its potential use in the peer review and research process. Prominent opening speakers, to be sure, but perhaps offering us a little less insight than we might have hoped.

T. Scott Plutchak of the University of Alabama at Birmingham followed the search giants on to the floor with his usual mix of good humour and good sense. "How we think about our future is revealed in how we talk about it", says Scott: when we're talking sloppily, it's because we're thinking sloppily. For example, we talk as if "library" were a synonym for "librarian", when in fact we should separate the two (to allow librarians' roles to evolve and change outside of the restrictive concept of the traditional library). Whilst the library is a means by which librarians have connected users to information, it is one of many tools which can be deployed to this end.

Scott referenced the "library 2.0" phenomenon and noted that, although he is not a fan of the label, the concepts it represents are useful to librarians. However, whilst it's great that librarians are utilising Web 2.0 technologies to connect with their users, "second life is not a replacement for first life". Personal relationships with library users are no less critical in this new age - so if users don't come to the library any more (because its services are so readily available digitally), librarians do need to seek out other means of engagement.

During questions, Sirsi-Dynix's Stephen Abram asked Google's Colombet about the company's potential plans for adding advertising to Google Scholar. Colombet said none was currently anticipated, as "we have plenty of other areas where advertising makes sense" - and Google's inventory, unlike print inventory, grows with its users. "We can afford to have products, like Scholar and Google News, that do not have AdWords related to them, because there are many other areas where this is ... less controversial and more user-friendly. There's no urgency [for us] to fill the right-hand side of the page."

David Thomas of Scholar One noted that students are doing all their research on the web, and that web searches return both authoritative and non-authoritative results. How does a student know which sources to go to, and what advice should librarians and faculty give? Plutchak responded that faculty are now clearer that additional instruction is needed, and that librarians' skills can be utilised here. Guren added that the lack of awareness of the authoritativeness of data is cyclical: the general web currently rules, but we can expect brands to reassert themselves as guides to "the authoritative web"; he suggested that something like a "Good Housekeeping Seal" (Kitemark) for information could be usefully implemented. Colombet said bravely that Google believes in the wisdom of crowds, and that while librarians are correctly tasked with giving specific guidance to users, users can also rate information (even by clicking) to create collective intelligence which can be leveraged by the web. "Google doesn't have editorial ambition, and will never rate sites", he said, although Google Scholar's content selection policies could be construed otherwise. "Identification of the original source is ample information" to ascertain the authoritativeness, or otherwise, of the data.

Sheila Cannell of the University of Edinburgh brought a light end to the morning's proceedings by asking Guren and Colombet to comment on what each thinks is good about the other's product (with respect to scholarly communications). Guren took the plunge: "sincerely, Google's crawling-based approach yields great breadth and contributes to their mission to organise the world's information", and then added the inevitable dig, "although casting the net so wide brings in the flotsam and the jetsam...". Colombet responded that he's always been "extremely impressed that Microsoft, coming from the software [as opposed to the publisher] perspective, has been able to build such a good scholarly experience as Encarta."

And we're off!

Paul Harwood, Chairman of UKSG, opened the 30th UKSG annual conference in front of a packed auditorium (650 delegates from 18 countries!) in the Warwick Arts Centre's theatre. Paul gave a tantalising hint about the future of the conference, in terms of its ability to accept more delegates and exhibitors in future. He noted that UKSG founder John Merriman would be back at the conference tomorrow to celebrate its 30th anniversary, and welcomed all first-time delegates and library school students. Finally, Paul introduced Denise Novak, president of NASIG and head of acquisitions at the Carnegie Mellon University Library in Pennsylvania, who expressed great enthusiasm for the forthcoming UKSG quiz!

Wednesday, April 04, 2007

UKSG Annual Conference 2007: read all about it ... right here!

With this year's UKSG Annual Conference fast approaching (it will be held 16-18 April in Warwick, UK), I am delighted to announce the members of the blogging team who will be providing real-time coverage of the Conference here on LiveSerials:
As you can see, we've aimed for a mix of bloggers representing all shades of the serials community spectrum. The team will post reports of the daily conference sessions and hopefully some of the after-hours activities as well.

For all those who were unsuccessful in booking a place at this year's conference, make sure you have LiveSerials on your blogroll to be kept up-to-date with what's hot and happening in the information industry.

Labels: