Information Retrieval Evaluation in a Changing World: Lessons Learned from 20 Years of CLEF
Description
From Multilingual to Multimodal: The Evolution of CLEF over Two Decades
The Evolution of Cranfield
How to Run an Evaluation Task
An Innovative Approach to Data Management and Curation of Experimental Data Generated through IR Test Collections
TIRA Integrated Research Architecture
EaaS: Evaluation-as-a-Service and Experiences from the VISCERAL Project
Lessons Learnt from Experiments on the Ad-Hoc Multilingual Test Collections at CLEF
The Challenges of Language Variation in Information Access
Multi-lingual Retrieval of Pictures in ImageCLEF
Experiences from the ImageCLEF Medical Retrieval and Annotation Tasks
Automatic Image Annotation at ImageCLEF
Image Retrieval Evaluation in Specific Domains
'Bout Sound and Vision: CLEF beyond Text Retrieval Tasks
The Scholarly Impact and Strategic Intent of CLEF eHealth Labs from 2012-2017
Multilingual Patent Text Retrieval Evaluation: CLEF-IP
Biodiversity Information Retrieval through Large Scale Content-Based Identification: A Long-Term Evaluation
From XML Retrieval to Semantic Search and Beyond
Results and Lessons of the Question Answering Track at CLEF
Evolution of the PAN Lab on Digital Text Forensics
RepLab: An Evaluation Campaign for Online Monitoring Systems
Continuous Evaluation of Large-Scale Information Access Systems: A Case for Living Labs
The Scholarly Impact of CLEF 2010-2017
Reproducibility and Validity in CLEF
Visual Analytics and IR Experimental Evaluation
Adopting Systematic Evaluation Benchmarks in Operational Settings