
by Carol Peters

The results of the tenth campaign of the Cross Language Evaluation Forum (CLEF) were presented and plans for CLEF 2010 were unveiled at the annual CLEF workshop, held this year in Corfu, Greece.

The objective of the Cross Language Evaluation Forum (CLEF) is to promote research and development in the field of multilingual information access (MLIA). This is done through the organisation of annual evaluation campaigns offering tasks designed to test different aspects of mono- and cross-language information retrieval systems. The aim is to encourage the development of next-generation multilingual and multimodal IR systems.

Participation in the CLEF initiative has increased steadily over the years, growing from just 20 research groups in CLEF 2000 to more than 150 groups this year, mainly but not exclusively from academia. Most of the participants in CLEF 2009 were from Europe, but there was also a good contingent from North America and Asia, plus a few from South America and Africa.

CLEF 2009 Tracks
CLEF 2009 offered eight main tracks designed to evaluate the performance of systems for:

  • multilingual textual document retrieval (Ad Hoc)
  • interactive cross-language retrieval (iCLEF)
  • multiple language question answering (QA@CLEF)
  • cross-language retrieval in image collections (ImageCLEF)
  • multilingual information filtering (INFILE@CLEF)
  • cross-language video retrieval (VideoCLEF)
  • intellectual property (CLEF-IP) - New this year
  • log file analysis (LogCLEF) - New this year.

An experimental pilot task was also offered:

  • Grid Experiments (Grid@CLEF).

In addition, CLEF collaborated in the organisation of Morpho Challenge 2009, an activity of the Pascal Network of Excellence.

A main result of the CLEF campaigns is the creation of a number of valuable, reusable test collections consisting of data in many languages and diverse media (text, image, speech and video), which are made available for system benchmarking purposes.

Workshop
As usual, this year’s workshop was held in conjunction with the European Conference on Digital Libraries. It was attended by 160 researchers and system developers, who presented their experiments and results in lively plenary, parallel, poster and breakout sessions. There were several invited talks: Noriko Kando, National Institute of Informatics, Tokyo, reported on the “Evolution of NTCIR” (an evaluation initiative testing information access technologies for Asian languages), and Jaap Kamps of the University of Amsterdam presented the main outcomes of a SIGIR workshop on the “Future of IR Evaluation”. In a concluding talk, Donna Harman, US National Institute of Standards and Technology, summed up what she felt were the main achievements of CLEF over these ten years of activity.

Scenes from the CLEF 2009 Workshop.

The presentations given at the CLEF 2009 Workshop and detailed reports on the experiments of CLEF 2009 and previous years can be found on the CLEF website.

CLEF 2010 – Conference on Multilingual and Multimedia Information Access Evaluation
CLEF 2009 represents an important milestone for the MLIA community. After ten years of activity focused on stimulating the development of MLIA systems and functionality through the organisation of increasingly complex evaluation tasks, it is now time to assess achievements and to identify priorities for the future. For this reason, it has been decided to change the format for next year: CLEF 2010 will take the form of an independent, peer-reviewed conference organised in conjunction with a set of Evaluation Labs, each running experiments aimed at testing performance in MLIA-related areas.

The Conference will be held in Padua, Italy, in September 2010, as a four-day event. The first two days will consist of plenary sessions in which keynote speeches and peer-reviewed papers will be presented; the goals will be to explore current needs and practices for information access, to study new evaluation metrics and methodologies, and to discuss new directions for future activities in the European multilingual/multimodal IR system evaluation context. On Days 3 and 4, the results of the Labs will be presented in full- and half-day workshops.

CLEF 2008 and 2009 have been sponsored by the TrebleCLEF project: Evaluation, Best Practices and Collaboration in the Multilingual Information Access domain. The first Calls for Proposals for the organisation of Evaluation Labs and the preliminary Call for Papers for the submission of scientific papers to the Conference will be posted on the TrebleCLEF website.

Links:
CLEF: http://www.clef-campaign.org
NTCIR: http://research.nii.ac.jp/ntcir/
TrebleCLEF: http://www.trebleclef.eu

Please contact:
Carol Peters
ISTI-CNR, Italy
