If you are a reader of The Chronicle of Higher Education, you may have seen last week’s article about library discovery tools. Marc Parry’s article, “As Researchers Turn to Google, Libraries Navigate the Messy World of Discovery Tools,” sheds light on some of the complications and questions raised by discovery tools and their promise of making library resources more discoverable. Parry opens with this description of discovery tools:
“Instead of bewildering users with a bevy of specialized databases—books here, articles there—many libraries are bulldozing their digital silos. They now offer one-stop search boxes that comb entire collections, Google style.”
As much as we’d like to promise seamless access to our entire collection through a single search box, the discovery tools on the market are far from perfect. The items retrieved in a search, and the order in which they are ranked, are not always determined purely by relevance or recency, but by algorithms and by licensing agreements among publishers, database vendors, and the companies creating discovery tools. Parry’s article raises the possibility of bias in discovery tools, which would cause results from one vendor or content provider to be ranked higher than another. (Vendors will not explain the algorithms used to rank results for fear of sharing proprietary information.) The article also points to an unfortunate pairing: an imperfect ranking system combined with a high number of results, so that the “best” sources get lost in the mix. What happens if we’re using discovery tools as a primary access point for research, but we don’t know exactly how the tool sorts and ranks results? Is that so unlike searching Google without knowing how Google’s algorithms work?
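To make that concern concrete, here is a minimal sketch in Python of how an opaque ranking formula could skew results. Every field name and weight below is hypothetical; no vendor publishes its actual scoring algorithm, which is exactly the problem. The sketch shows how a hidden, per-provider boost can flip the order of two results even when relevance and recency favor the other item.

```python
# Toy illustration of how an opaque ranking formula can reorder results.
# All field names and weights are hypothetical; real discovery tools
# do not disclose their scoring algorithms.

from dataclasses import dataclass

@dataclass
class Result:
    title: str
    relevance: float     # 0.0-1.0, how well the record matches the query
    recency: float       # 0.0-1.0, newer items score higher
    vendor_boost: float  # a hidden weight a tool *could* apply per provider

def score(r: Result, boost_weight: float) -> float:
    # Weighted sum: the first two terms are what users expect;
    # the third is the kind of factor no vendor documents.
    return 0.6 * r.relevance + 0.3 * r.recency + boost_weight * r.vendor_boost

results = [
    Result("Article from Provider A", relevance=0.9, recency=0.7, vendor_boost=0.0),
    Result("Article from Provider B", relevance=0.8, recency=0.6, vendor_boost=1.0),
]

# With no hidden boost, the more relevant item ranks first...
print([r.title for r in sorted(results, key=lambda r: score(r, 0.0), reverse=True)])
# ...but a modest hidden boost flips the order.
print([r.title for r in sorted(results, key=lambda r: score(r, 0.2), reverse=True)])
```

The point is not that any particular tool does this, only that a researcher looking at a results list has no way to tell from the results alone whether such a factor is at work.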