by Gareth J. F. Jones and Martha Larson

MediaEval is an international multimedia benchmarking initiative offering innovative new tasks to the multimedia research community. MediaEval 2013 featured tasks incorporating video analysis, speech search, music recommendation, analysis of affect, and geo-location prediction for images.

MediaEval is an international multimedia benchmarking initiative that offers innovative new content analysis, indexing and search tasks to the multimedia community. MediaEval focuses on social and human aspects of multimedia and strives to emphasize the ‘multi’ in multimedia, including the use of image, video, speech, audio, tags, users, and context. MediaEval seeks to encourage novel and creative approaches to tackling these new and emerging multimedia tasks. Participation in MediaEval tasks is open to any research group that signs up. MediaEval 2013 was the fourth evaluation campaign in its current form, which follows on from the VideoCLEF track at CLEF 2008 and CLEF 2009.

MediaEval 2013 offered 6 Main Tasks and 6 Brave New Tasks coordinated in cooperation with various research groups in Europe and elsewhere. The following Main Tasks were offered in the 2013 season:

  • Placing Task: Geo-coordinate Prediction for Social Multimedia. The Placing Task required participants to estimate the geographical coordinates (latitude and longitude) of images, as well as to predict how “placeable” a media item actually is. The Placing Task integrated all aspects of multimedia: textual meta-data, audio, image, video, location, time, users and context.
  • Search and Hyperlinking of Television Content Task: This task required participants to find video segments relevant to an information need and to provide a list of useful hyperlinks for each of these segments. The task focused on television data provided by the BBC and real information needs created by volunteer home users.
  • Spoken Web Search: Spoken Term Detection for Low-Resource Languages. The task involved searching for audio content within audio content using an audio query. This task is particularly interesting for speech researchers in the areas of spoken term detection and low-resource speech processing.
  • Violent Scenes Detection in Film (Affect Task): This task required participants to automatically detect portions of movies depicting violence. Violence is defined as "physical violence or accident resulting in human injury or pain". Participants were encouraged to deploy multimodal approaches (audio, visual, text) to solve the task.
  • Social Event Detection Task: This task required participants to discover social events and organize the related media items in event-specific clusters. Social events of interest were planned by people, attended by people, and captured in social media by people.
  • Visual Privacy: Preserving Privacy in Surveillance Videos. For this task, participants needed to propose methods for protecting privacy-sensitive elements in videos (i.e., obscuring identifying features of people), rendering them unrecognizable in a manner that is suitable for computer vision tools and perceived as appropriate by human viewers of the footage.

The MediaEval 2013 Brave New Tasks are incubators of potential Main Tasks for future years, and typically have smaller numbers of participants. The MediaEval 2013 Brave New Tasks were: Searching Similar Segments of Social Speech, Retrieving Diverse Social Images, Characterizing Emotion in Music, Crowdsourcing for Social Multimedia, Question Answering for the Spoken Web, and Soundtrack Selection for Commercials (MusiClef Task).

MediaEval 2013 participants. Photo: John Brown.

The MediaEval 2013 campaign culminated in the annual workshop, held at the Reial Acadèmia de Bones Lletres in Barcelona, Spain, from 18th-19th October. The workshop brought together the task participants to report on their findings, discuss their approaches and learn from each other. MediaEval participation increased again in 2013, with a total of 95 papers appearing in the Working Notes and 100 participants attending the workshop. In addition to organizer and participant presentations, the workshop featured invited presentations by Jana Eggink, BBC Research and Development, London, and Frank Hopfgartner, Technische Universität Berlin, Germany. The Working Notes proceedings from the MediaEval 2013 workshop have again been published by CEUR workshop proceedings.

MediaEval 2013 received support from a number of EU and national projects and other organizations including: AXES, Chorus+, CUbRIK, COMMIT/, Promise, Social Sensor, Media Mixer, MUCKE, Quaero, FWF, CNGL, Technicolor and CMU. We are also happy to acknowledge the support of ACM SIGIR for providing a grant to support international student travel.

The MediaEval 2014 campaign is currently getting underway, with task registration opening in March 2014. The tasks will run through the spring and summer, with participants invited to present the results of their work at the MediaEval 2014 Workshop, which will again be held in Barcelona, Spain, in October.

Further details of the MediaEval campaigns are available from the MediaEval website.

MediaEval 2013 online proceedings:

Please contact:
Martha Larson
TU Delft, The Netherlands
