Conference Buzz: 2010 Charleston Conference

Written for Unlimited Priorities and DCLnews Blog.

Donald T. Hawkins

At the 2010 Charleston Conference (November 3-6), the 30th in the series (surely one of the longest-running information industry events!), one of the highlights was a “discovery systems faceoff” between two of the major players, Serials Solutions and EBSCO. The faceoff grew out of a challenge posed after an exchange of Letters to the Editor of The Charleston Advisor. (See “Conference Buzz” in the previous issue of DCL News for an explanation of discovery systems.) It took the form of two questions posed by a moderator, with a response from each company followed by a rebuttal and summary. After the questions, each company gave a live demonstration of its system. The questions were:

  • Why do libraries need discovery tools and how does your product meet those needs?
  • Why should a library choose your service rather than that of a competitor?

The EBSCO representatives stressed their system’s superiority in the number of sources available and its capability of adding indexes for sources not covered by EBSCO. The Serials Solutions participants described their Summon service’s ease of use, its proven value, and its two-year track record of reliability and scalability. They also noted that many publishers have made the full text of their data available to Serials Solutions for use in indexing. The Summon demonstration was flawless, but EBSCO encountered technical problems with its own. Unfortunately, the time allotted to the session was too short to permit meaningful audience interaction, but it is clear that this faceoff was only the opening salvo in a long battle for supremacy in the discovery systems arena.

Another fascinating presentation at Charleston was by Jon Orwant, Engineering Manager of the Google Books project. So far, Google has scanned about 15 million books, roughly 10% of those available, which amounts to about 4 billion pages and 2 trillion words. Google collects metadata from over 100 sources, parses the records, creates a “best” record for each data cluster, and displays appropriate parts of it on the site. Orwant described how the resulting database can be used not only for searching but also for other interesting projects, such as studying how language changes over time or tracking rates of book publishing as a function of publication date. Google even makes grants available to scientists and linguistic analysts for research projects, because it considers books a corpus of human knowledge and a reflection of cultural and societal trends over time.
