by Gareth J. F. Jones and Martha Larson
MediaEval is an international multimedia benchmarking initiative offering innovative new tasks to the multimedia community. MediaEval 2012 featured tasks incorporating social media search, analysis of affect, and geographical placing of videos.
MediaEval is an international multimedia benchmarking initiative that offers innovative new content analysis, indexing and search tasks to the multimedia community. MediaEval focuses on the social and human aspects of multimedia and strives to emphasize the “multi” in multimedia, including the use of speech, audio, tags, users, and context, as well as visual content. MediaEval seeks to encourage novel and creative approaches to tackling these new and emerging multimedia tasks. Participation in MediaEval tasks is open to any research group that signs up. MediaEval 2012 was the third evaluation campaign in its current form, following on from the VideoCLEF track at CLEF 2008 and 2009.
MediaEval 2012 offered six main tasks coordinated in cooperation with various research groups in Europe and elsewhere. The following tasks were offered in the 2012 season:
- Placing Task: This task required participants to assign geographical coordinates (latitude and longitude) to each of a provided set of test videos in two sub-tasks: placing anywhere in the world, and precise placement within a known city. Participants could make use of metadata, audio and visual features, as well as external resources.
- Spoken Web Search Task: This task involved searching for audio content within audio content using an audio content query. It addressed the challenge of searching across multiple, resource-limited languages, with applications in low-literacy communities in the developing world.
- Affect Task: This task required participants to deploy multimodal features to automatically detect portions of movies containing violent material. Violence is defined as “physical violence or accident resulting in human injury or pain”. Any features automatically extracted from the video, including the subtitles, could be used.
- Social Event Detection Task: This task required participants to discover events and detect media items that are related to either a specific social event or an event-class of interest. Social events of interest were planned by people, attended by people, and captured in social media by people.
- Tagging Task: The task required participants to automatically assign tags to Internet videos using features derived from speech, audio, visual content or associated textual or social information. This year the task focused on labels that reflect the genre of the video.
- Visual Privacy Task: Participants were required to explore methods for obscuring human faces so as to make them unrecognisable in digital imagery, with application in situations where persons may be captured in a video frame but may wish to protect their privacy.
MediaEval 2012 also introduced the idea of Brave New Tasks: activities with smaller participant groups that serve as incubators of potential main tasks for future years. The MediaEval 2012 Brave New Tasks were: User Account Matching, Search and Hyperlinking, and MusiClef: Multimodal Music Tagging.
The MediaEval 2012 campaign again culminated in a workshop, held at Fossabanda Santa Croce in Pisa, Italy from 4 to 5 October. The workshop brought together the task participants to report on their findings, discuss their approaches and learn from each other. MediaEval participation increased again in 2012, with a total of 54 papers appearing in the Working Notes and 60 participants attending the workshop. In addition to organizer and participant presentations, the workshop featured invited presentations by Jana Eggink, BBC Research and Development, London, and Nicola Ferro, University of Padova, coordinator of the PROMISE Network of Excellence. The workshop concluded with a meeting of task organizers and other interested researchers that consisted of presentations and discussions of task proposals for MediaEval 2013. An exciting development at the workshop was the increased collaboration between task participants arising from informal breakout discussions, which is now leading to further experiments with MediaEval datasets and submissions of joint papers to international conferences. The Working Notes proceedings from the MediaEval 2012 workshop have again been published by CEUR Workshop Proceedings.
MediaEval 2012 received support from a number of EU and national projects and other organizations including: AXES, Glocal, WeKnowIt, Chorus+, Quaero, IISSCoS, Technicolor, IBM India and CMU.
The MediaEval 2013 campaign is currently in progress and participants will be presenting results of their work at the MediaEval 2013 Workshop in Barcelona from 18 to 19 October, just before the ACM Multimedia 2013 conference.
Further details of MediaEval are available from the MediaEval website.
Martha Larson, TU Delft, The Netherlands