The primary aim of the PIR-CLEF 2018 laboratory is to use the framework for evaluation of Personalised Information Retrieval (PIR) developed within the PIR-CLEF 2017 workshop to run a benchmark task for researchers responding to an open call for participation. Personalisation and other adaptation of the search experience to the user and search context is an important topic in Information Retrieval (IR).
The PIR-CLEF 2017 pilot workshop at CLEF 2017 created an initial PIR task development test collection. This pilot PIR task consists of a test collection created using the methodology that we developed and described in our EVIA 2016 paper. Construction of this test collection provided valuable experience in the specification and definition of a PIR collection. Pilot evaluations using this test collection are currently underway to fine-tune the design of the task to be used for PIR-CLEF 2018. We are also developing a tool for the evaluation of PIR using test collections of this type. For PIR-CLEF 2018 we will build on the pilot collection developed in PIR-CLEF 2017 to provide participants with a new set of search data for the comparative evaluation of alternative methods for PIR. Participants in PIR-CLEF 2018 will initially receive the PIR-CLEF 2017 pilot collection together with our evaluation tool to enable them to perform development runs on the collection prior to creating submission runs for the PIR-CLEF 2018 task.