If you are a reader of The Chronicle of Higher Education, you may have seen last week’s article about library discovery tools. Marc Parry’s article, “As Researchers Turn to Google, Libraries Navigate the Messy World of Discovery Tools,” sheds light on some of the complications and questions raised by discovery tools and their ability to make library resources more discoverable. Parry opens with this description of discovery tools:
“Instead of bewildering users with a bevy of specialized databases—books here, articles there—many libraries are bulldozing their digital silos. They now offer one-stop search boxes that comb entire collections, Google style.”
As much as we’d like to promise seamless access to our entire collection through a single search box, the discovery tools on the market are far from perfect. The items retrieved in a search, and how they are ranked, are not determined purely by relevance or recency, but also by algorithms and licensing agreements between publishers, database vendors, and the companies creating discovery tools. Parry’s article raises the possibility of bias in discovery tools, which could cause results from one vendor or content provider to be ranked higher than another’s. (Vendors will not explain the algorithms used to rank results for fear of revealing proprietary information.) The article also points to the unfortunate pairing of an imperfect ranking system with a high number of results, so that the “best” sources get lost in the mix. What happens if we’re using discovery tools as a primary access point for research but don’t know exactly how the tool sorts and ranks results? Is that so unlike searching Google without knowing how Google’s algorithms work?
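To make that concern concrete, here is a purely hypothetical sketch (in Python) of how a discovery tool might score results. It does not reflect any vendor’s actual algorithm, which, as noted above, is not disclosed; the provider_boost value is an invented stand-in showing how a weighting tied to a particular content provider could quietly tilt rankings.

```python
# Hypothetical illustration only -- no vendor publishes its actual ranking formula.
from datetime import date

def score(result, query_relevance, provider_boosts):
    """Toy scoring: blend relevance and recency, then apply a per-provider boost.

    query_relevance: 0.0-1.0 match between the query and the item's metadata.
    provider_boosts: assumed mapping of provider name to weight; this is the
                     piece that could favor one content provider over another.
    """
    years_old = date.today().year - result["year"]
    recency = max(0.0, 1.0 - years_old / 50.0)        # newer items score higher
    boost = provider_boosts.get(result["provider"], 1.0)
    return (0.7 * query_relevance + 0.3 * recency) * boost

# Two items with identical relevance and age rank differently
# solely because of the provider boost.
items = [
    {"title": "Article A", "provider": "Vendor X", "year": 2010},
    {"title": "Article B", "provider": "Vendor Y", "year": 2010},
]
boosts = {"Vendor X": 1.2}   # hypothetical preferential weighting
ranked = sorted(items, key=lambda r: score(r, 0.8, boosts), reverse=True)
print([r["title"] for r in ranked])   # Article A first, despite equal relevance
```

Again, the weights and the boost are made up; the point is simply that a researcher looking at the result list has no way to see whether, or how much, such factors shape what appears at the top.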
Another issue, not addressed in the article, is the inconsistent representation of resources. For a source to be included in a discovery tool, the publisher or database vendor responsible for that content has to sign a license agreement with the discovery tool company. Without such a contract, the content will not be part of the discovery tool system and will be completely unavailable in a search. Last summer, we gave the library website a face-lift and had to decide just what to call the search box in the middle of the library’s homepage. We considered labeling the box “Everything,” which is catchy although not entirely accurate, or “OneSearch,” which is less misleading if perhaps less eye-catching. We opted to call our discovery tool OneSearch. We’ve also noticed, and perhaps some of you have as well, that when using OneSearch from off campus, the metadata is not viewable for all results unless you’ve already logged in and authenticated. Why? Content providers signed different licensing agreements with EBSCO, which contain different rules defining how their metadata may appear.
When we teach OneSearch, we have to be clear in explaining just how it works so that students understand how best to use it. During instruction sessions, we answer questions such as: How do I search for only academic journal articles? (There’s a facet for that.) How do I narrow my results? (Use more facets and try a different search term or combination of terms.) The question should really be: How do I best find relevant, high-quality sources that answer my research question? That question requires a larger conversation about more than simply which box to check. Part of the lesson may be recognizing when it’s time to switch from OneSearch to a subject-specific database, or to stop hunting for the ideal source among thousands of results and instead take a moment to learn how to refine a search so it retrieves fewer, more relevant ones.
Discovery tools promise to make research easier and more streamlined, but they’re relatively new and not perfect. If students are relying increasingly on discovery tools for research, we need those tools to be accurate and comprehensive. In the meantime, we are doing our best to provide access to library resources and to educate students on just what they’re finding and where it’s coming from.