A framework for designing retrieval effectiveness studies of library information systems using human relevance assessments
Purpose: The purpose of this paper is to demonstrate how to apply traditional information retrieval (IR) evaluation methods based on standards from the Text REtrieval Conference and web search evaluation to all types of modern library information systems (LISs), including online public access catalogues, discovery systems, and digital libraries that provide web search features to gather information from heterogeneous sources.

Design/methodology/approach: The authors apply conventional procedures from IR evaluation to the LIS context, considering the specific characteristics of modern library materials.

Findings: The authors introduce a framework consisting of five parts: search queries, search results, assessors, testing, and data analysis. The authors show how to deal with comparability problems resulting from diverse document types (e.g. electronic articles vs printed monographs) and what issues need to be considered for retrieval tests in the library context.

Practical implications: The framework can be used as a guideline for conducting retrieval effectiveness studies in the library context.

Originality/value: Although a considerable amount of research has been done on IR evaluation, and standards for conducting retrieval effectiveness studies do exist, to the authors' knowledge this is the first attempt to provide a systematic framework for evaluating the retrieval effectiveness of twenty-first-century LISs. The authors demonstrate which issues must be considered and what decisions must be made by researchers prior to a retrieval test.
Journal of Documentation – Emerald Publishing
Published: May 8, 2017
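As a purely illustrative sketch, not drawn from the paper itself, the Python snippet below shows how the data analysis step of such a retrieval effectiveness study might aggregate graded relevance assessments for a single query into two common measures, precision at k and nDCG at k. The grade scale, the cutoff k, and all function names are assumptions made for illustration only.

# Illustrative sketch (not from the paper): aggregating graded relevance
# assessments into Precision@k and nDCG@k for one query and one system.
# The grade scale (0 = not relevant .. 3 = highly relevant) is assumed.

import math
from typing import Sequence

def precision_at_k(grades: Sequence[int], k: int, threshold: int = 1) -> float:
    """Fraction of the top-k results judged relevant (grade >= threshold)."""
    top_k = grades[:k]
    if not top_k:
        return 0.0
    return sum(1 for g in top_k if g >= threshold) / k

def dcg_at_k(grades: Sequence[int], k: int) -> float:
    """Discounted cumulative gain over the top-k graded judgments."""
    return sum((2 ** g - 1) / math.log2(i + 2) for i, g in enumerate(grades[:k]))

def ndcg_at_k(grades: Sequence[int], k: int) -> float:
    """DCG normalised by the ideal (descending-grade) ordering of the same judgments."""
    ideal_dcg = dcg_at_k(sorted(grades, reverse=True), k)
    return dcg_at_k(grades, k) / ideal_dcg if ideal_dcg > 0 else 0.0

if __name__ == "__main__":
    # Hypothetical assessor grades for the top 10 results of one query.
    grades = [3, 2, 0, 1, 0, 2, 0, 0, 1, 0]
    print(f"P@10    = {precision_at_k(grades, 10):.3f}")
    print(f"nDCG@10 = {ndcg_at_k(grades, 10):.3f}")

In a full study along the lines the abstract describes, such per-query scores would be computed for each system under comparison and then averaged and tested for statistical significance across the query set.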