
by Nicola Ferro

Since 2000, CLEF has played a successful role in stimulating research and promoting evaluation in a wide range of key areas in the information access and retrieval domain. In 2010, a radical innovation and renewal process led to the establishment of the CLEF Initiative, whose mission is to promote research, innovation, and development of information access systems, with an emphasis on multilingual and multimodal information, by providing an infrastructure for:

  • multilingual and multimodal system testing, tuning and evaluation
  • investigation of the use of unstructured, semi-structured, highly-structured, and semantically enriched data in information access
  • creation of reusable test collections for benchmarking (see the sketch after this list)
  • exploration of new evaluation methodologies and innovative ways of using experimental data
  • discussion of results, comparison of approaches, exchange of ideas, and transfer of knowledge.
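
To make the benchmarking idea concrete: a reusable test collection bundles topics, documents, and relevance judgements (qrels), so that any system's ranked output can be scored against the same ground truth. The following Python sketch, with hypothetical toy data standing in for real CLEF topics and judgements (it is an illustration, not CLEF tooling), computes mean average precision, one of the standard measures in such evaluations:

```python
# Illustration only: how a reusable test collection (topics + relevance
# judgements) supports benchmarking. All data below is hypothetical.

def average_precision(ranking, relevant):
    """Average precision of one ranked result list against the judged relevant set."""
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranking, start=1):
        if doc_id in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(run, qrels):
    """Mean of per-topic average precision over all judged topics."""
    scores = [average_precision(run.get(topic, []), rel)
              for topic, rel in qrels.items()]
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical run (ranked document ids per topic) and qrels (judged relevant sets).
run = {"T1": ["d3", "d1", "d7"], "T2": ["d2", "d5"]}
qrels = {"T1": {"d1", "d7"}, "T2": {"d5", "d9"}}
print(f"MAP = {mean_average_precision(run, qrels):.4f}")  # MAP = 0.4167
```

Because the judgements are fixed, any number of systems, or later versions of the same system, can be compared on identical ground truth, which is what makes the collections reusable.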

The CLEF Initiative is structured in two main parts:

  • A series of Evaluation Labs, i.e. laboratories to conduct evaluation of information access systems and workshops to discuss and pilot innovative evaluation activities.
  • A peer-reviewed Conference on a broad range of issues, including:
    • the activities of the Evaluation Labs
    • experiments using multilingual and multimodal data; in particular, but not only, data resulting from CLEF activities
    • research in evaluation methodologies and challenges.

Due to these changes and the broader scope of the CLEF Initiative, the acronym CLEF, traditionally expanded to Cross-Language Evaluation Forum, now stands for Conference and Labs of the Evaluation Forum.
This renewal process and the organization of the annual CLEF events are partially supported by the EU FP7 PROMISE project (Participative Research labOratory for Multimedia and Multilingual Information Systems Evaluation).

Impressions from the CLEF conference

CLEF 2011: The Second Event of the CLEF Initiative
CLEF 2011 was hosted by the University of Amsterdam, The Netherlands, 19-22 September 2011. Over three and a half days, conference presentations, laboratories and workshops, and community sessions were smoothly interleaved to provide a continuous stream of discussions on the different facets of experimental evaluation.

Fourteen papers (ten full and four short) were accepted for the Conference and published by Springer in its Lecture Notes in Computer Science (LNCS) series. Two keynote speakers highlighted important developments in the field of evaluation. Elaine Toms, University of Sheffield, focused on the role of users. She argued that evaluation has moved from an emphasis on topical relevance to an emphasis on measuring almost anything that can be quantified. Omar Alonso, from Microsoft USA, presented a framework for the use of crowdsourcing experiments in retrieval evaluation.
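
Alonso's framework itself is not detailed in this article, but a core building block of crowdsourced relevance assessment is easy to illustrate: each (topic, document) pair is judged by several workers, and the redundant votes are aggregated, for instance by majority. The sketch below is a generic, hypothetical illustration of that aggregation step, not Alonso's method:

```python
# Generic illustration of aggregating crowdsourced relevance votes by
# majority; not Alonso's framework. All data below is hypothetical.
from collections import Counter

def aggregate_judgements(votes):
    """Map each (topic, doc) pair to its majority relevance label."""
    return {pair: Counter(worker_votes).most_common(1)[0][0]
            for pair, worker_votes in votes.items()}

# Three hypothetical workers judged each pair: 1 = relevant, 0 = not relevant.
votes = {
    ("T1", "d1"): [1, 1, 0],  # majority says relevant
    ("T1", "d3"): [0, 0, 1],  # majority says not relevant
}
print(aggregate_judgements(votes))
# {('T1', 'd1'): 1, ('T1', 'd3'): 0}
```

Redundant voting of this kind is one common way to trade a small amount of extra labelling cost for more reliable judgements from non-expert workers.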

The community sessions at CLEF 2011 were organized around a strategic EU meeting to promote funding opportunities, a networking session on IR for scientific multimedia data organized by the Chorus Network of Excellence, an Evaluation Initiatives session with overviews from other benchmarking fora, and an infrastructure session dedicated to the DIRECT system for handling scientific data resulting from retrieval experiments.

Five benchmarking evaluations ran as labs in CLEF 2011:

  • CLEF-IP: a benchmarking activity on intellectual property
  • ImageCLEF: a benchmarking activity on image retrieval, focusing on the combination of textual and visual retrieval
  • LogCLEF: a benchmarking activity on multilingual log file analysis, covering language identification, query classification, and query success
  • PAN: a benchmarking activity on uncovering plagiarism, authorship, and social software misuse
  • QA4MRE: a benchmarking activity on the evaluation of machine reading systems through question answering and reading comprehension tests.

There were also two exploration workshops:

  • CHiC (new): a workshop aimed at a systematic and large-scale evaluation of cultural heritage digital libraries
  • MusicCLEF (new): a pilot lab/workshop on the evaluation of music search engines using both audio content and textual descriptions

CLEF 2012: Information Access Evaluation meets Multilinguality, Multimodality, and Visual Analytics

CLEF 2012 will be hosted by the Sapienza University of Rome, Italy, 17-20 September 2012. The Call for Lab Proposals was issued at the beginning of November 2011, and ten lab proposals and two workshop proposals were received. Seven labs and one workshop, four of which are new, have been selected to run during 2012:

  • CHiC (new): last year's workshop, now turned into a benchmarking activity for the cultural heritage domain based on Europeana collections
  • CLEF-IP: a benchmarking activity on intellectual property
  • ImageCLEF: a benchmarking activity on image retrieval
  • INEX (new): the well-known Initiative for Evaluation of XML retrieval joins efforts with CLEF to target new synergies between multilingual, multimodal and semi-structured information access
  • PAN: a benchmarking activity on plagiarism detection
  • QA4MRE: a benchmarking activity on the evaluation of machine reading systems through question answering and reading comprehension tests
  • RepLab (new): a benchmarking activity on microblog data for online reputation management
  • eHealth (new): a workshop on new evaluation issues in the health domain, related to the Louhi series of workshops on NLP in Health Informatics.

The Call for Papers for the Conference was released in December 2011; the expected deadline for the submission of papers is late April 2012.

Links:
CLEF 2012: http://www.clef2012.org/
CLEF: http://www.clef-campaign.org/
DIRECT: http://direct.dei.unipd.it/
PROMISE: http://www.promise-noe.eu/

Please contact:
Nicola Ferro
University of Padua, Italy
