Current Cites

July 2008

Edited by Roy Tennant

Contributors: Charles W. Bailey, Jr., Keri Cascio, Frank Cervone, Brian Rosenblum, Roy Tennant

Hagedorn, Kat, and Joshua Santelli. "Google Still Not Indexing Hidden Web URLs" D-Lib Magazine 14(7/8) (July/August 2008) - This article highlights a long-standing challenge for digital libraries: the digital collections that libraries, museums, and archives create with great effort and expense are not always well-indexed by Web search engines, thus decreasing the potential use and impact of those digital resources. OAIster, a "union catalog of digital resources" developed at the University of Michigan, provides access to over 16 million digital resources by harvesting OAI metadata from over 1,000 repositories worldwide. About 45% of this material, the authors determine, is also indexed by Google, leaving the remaining 55% "hidden" in the deep web, unindexed by Web search engines. Two recent blog posts (and related comments) provide important follow-up discussions to this article. Roy Tennant cites further anecdotal figures from other repositories that support the findings of this article, and suggests that libraries, museums, and archives need many different strategies to get their content to users. Similarly, John Wilkin argues explicitly that it is cultural heritage institutions, rather than companies like Google, that bear the responsibility for making this content more visible: "we must also learn...that a simplified rendering of the content, so that it can be easily found by the search engines, is not an unfortunate compromise, but rather a necessary part of our work." - BR

Hirtle, Peter B. "Copyright Renewal, Copyright Restoration, and the Difficulty of Determining Copyright Status" D-Lib Magazine 14(7/8) (July/August 2008) - Peter Hirtle's chart on "Copyright Term and the Public Domain in the United States" has long been an essential quick reference guide to determining public domain status. In this article Hirtle untangles a particularly complicated strand of copyright law: how does one determine the copyright status of a work published in the United States from 1923 to 1964? The 1996 restoration of US copyrights in foreign works has not only prevented libraries from offering to the public the full text of most foreign works, but has also made it very difficult, if not impossible, to determine with certainty the copyright status of works published in the United States during that period. Using concrete examples, Hirtle outlines several questions that must be asked to determine copyright status. (Among others: was the work solely published in the United States? Is the American work a translation or other derivative work based on a foreign work? Was the work first published outside the United States?) There is no automated way to answer these questions, and in many cases it comes down to the almost impossible task of proving a negative, so libraries that wish to offer material from this period must settle on a strategy that identifies and manages risks. - BR

Kroski, Ellyssa. "On the Move with the Mobile Web: Libraries and Mobile Technologies" Library Technology Reports 44(5) (July 2008) - More and more library users are using their cellphones or other mobile devices (e.g., PDAs, smartphones, etc.) for much more than talking and texting. Many are searching and browsing the web, reading magazines and books, and generally doing things that until recently required a computer. In this issue of Library Technology Reports, Kroski does an excellent job of surveying present mobile device usage: providing an overview of devices, providers, and features; describing the various activities these devices support; highlighting how libraries are responding with services tailored for these devices; and offering good advice for any library wanting to go further. It is well-researched, nicely illustrated, and chock-full of practical assistance with getting started. Highly recommended for any library wanting to better understand mobile users and/or tailor services for them. - RT

Laplante, Philip A. "Open Source: The Dark Horse of Software?" Computing Reviews (15 July 2008) - Frequently we need to explain open source software (OSS) to people who may not have a high level of familiarity with, and perhaps actual skepticism of, the concept. Unfortunately, all too frequently the articles or other informational pieces that could be useful take on a decidedly "rah-rah" tone in support of OSS, which casts serious doubt on the validity and objectivity of the piece. Thankfully, this is not the case with this article. In a well-organized and neutral fashion, based on evidence culled from research into open source projects, the author describes the major issues one faces in evaluating and implementing open source software and gives some practical tips on both topics. Written from the perspective of a researcher, this article could be useful as an "intro piece" for your library's administrative team if you are in the midst of evaluating open source software. - FC

Linoski, Alexis, and Tine Walczyk. "Federated Search 101" netConnect (15 July 2008) - This is a credible, if somewhat superficial, review of the recent state of the library metasearch tool market and how to approach tool selection. Since this is a fast-moving market you may find it useful to take the pulse of the market closer to when you need to select an option, since this piece is based on information already a year old, but the general information probably still applies (e.g., most desired features, etc.). - RT

Oder, Norman. "BiblioCommons Emerges: 'Revolutionary' Social Discovery System for Libraries" Library Journal (19 July 2008) - Those of us on the speaking circuit have seen Beth Jefferson speak about BiblioCommons, a new "social" discovery system for libraries, but few until now have actually seen it in action. And as of this writing, the BiblioCommons website still consists of one splash page with testimonials. Now this brief piece by LJ editor Oder provides a quick introduction to it as it has been released "in the wild" at Oakville Public Library in Ontario. Apparently BiblioCommons is an add-on to your existing library system rather than a replacement for it, and the company claims interoperability with some key vendors. The most interesting part (for me, at least) is that it appears they will be setting up ways that user-contributed content can be shared among libraries, thereby helping to create a critical mass of content faster. - RT

The Library of Congress National Digital Information Infrastructure and Preservation Program, et al. "International Study on the Impact of Copyright Law on Digital Preservation" Library of Congress, Digital Preservation (July 2008) - In a world of ephemeral digital objects, libraries need to be aware of the issues surrounding digital preservation. The Library of Congress National Digital Information Infrastructure and Preservation Program (NDIIPP) created this report with its counterparts from other countries to review the current state of copyright laws and make recommendations for legislative reform. The section that covers US copyright law is very thorough, covering all the laws relevant to digitization and digital preservation activities. Joint recommendations include establishing laws that would apply equally to all categories of copyrighted materials in all media and formats. Without more even laws and policies, we risk losing print and digital materials every day. - KC

Wilbanks, John. "Public Domain, Copyright Licenses and the Freedom to Integrate Science" Journal of Science Communication 7(2) (2008) - In this article, John Wilbanks, Vice President of Science Commons, makes a passionate plea for putting scientific databases in the public domain. He argues strongly against the use of Creative Commons licenses (or other "Free/Libre/Open" licenses) for this purpose. For example, he explains the problem with licenses that require attribution in the context of database integration and federation, which he calls the "cascading attribution" problem: "Would a scientist need to attribute 40,000 data depositors in the event of a query across 40,000 data sets? How does this relate to the evolved norms of citation within a discipline, and does the attribution requirement indeed conflict with accepted norms in some disciplines? Indeed, failing to give attribution to all 40,000 sources could be the basis for a copyright infringement suit at worst, and at best, imposes a significant transaction cost on the scientist using the data." As "open data" moves front and center, these are issues worth thinking carefully about. - CB