MSR-Bing Image Retrieval Challenge (IRC) @ ACM Multimedia 2013
  • Outline/Links

  •     Award announcement
  •     Updates
  •     Rules
  •     Dataset download
  •     Related articles
  •     Workshop 2013 (at Microsoft HQ)   
  •     FAQ
  • ***AWARD ANNOUNCEMENT***:

Congratulations to the top five teams!

  • 1st Place: NTU MIRA (National Taiwan University, Taiwan)
  • 2nd Place: Orange/BYRFTRD (France Telecom Orange Labs & Beijing University of Posts and Telecommunications, Beijing, China)
  • 3rd Place: NLPR_MMC (Chinese Academy of Sciences, Beijing, China)
  • 4th Place: YaM (Yandex, Moscow, Russia)
  • 5th Place: USTC-CityU (University of Science and Technology of China & City University of Hong Kong, Hefei & Hong Kong, China)

(last update on 8/11/2013)

***IMPORTANT***: All contestants who achieved a DCG higher than 0.46, please register your team (with the real names of the team members, or at least the team leader, and the country of residence) in the CMT system (or send us an email) by the end of *** Aug 8 (PDT) ***. Contestants who do not submit this information will not be considered as winners, per the contest rules. (posted on 8/6/2013)

***IMPORTANT***: All contestants who achieved a DCG higher than 0.46, please register your team in the CMT system and submit your description (2~4 pages) by Aug 5. Teams will not qualify for the awards if no team information (at least the real name and country of residence of the team leader) is found in the CMT. (posted on 8/1/2013)

These teams have not yet registered in CMT or sent team information: BLXLRousongBread (last update on 8/9/2013)

News: Evaluation End Date has been extended to 9pm PDT July 30!

News: Deadline of Industrial Track has been extended!

Important Note: Extended Deadlines (firm)

Participants, please register your system ID/title/abstract by June 30 here. Please note this registration is for submitting description papers for systems in the industrial track. You are required to register your system title/abstract and team information by June 17. After the evaluation ends, we will send further guidelines for submitting a full description paper. Please note this is different from the ACM Multimedia 2013 Grand Challenge paper (which should follow the guidelines of the ACM Multimedia Conference).

Rules

The Second Microsoft Research (MSR)-Bing Challenge (the “Challenge”) is organized in a dual-track format, one track scientific and the other industrial. The two tracks share exactly the same task and timeline but have independent submission and ranking processes.

For the scientific track, we will follow exactly what the MM13 Grand Challenge outlines at http://acmmm13.org/submissions/call-for-multimedia-grand-challenge-solutions/. Papers will be submitted to MM13 and go through the review process, and the accepted ones will be presented at the conference. At the conference, the authors of the accepted papers will be asked to introduce their solutions, give a quick demo, and take questions from the judges and the audience. Winners of the Multimedia Grand Challenge Award will be selected based on their presentations.

The industrial track of the Challenge will be conducted over the internet through a website maintained by Microsoft. Contestants participating in the industrial track are encouraged to take advantage of recent advancements in cloud computing infrastructure and public datasets, and must submit their entries in the form of publicly accessible REST-based web services (further specified below). Each entry will be evaluated against a test set created by Bing from queries received at Bing Image Search in the EN-US market. Due to the global nature of the Web, the queries are not necessarily limited to the English language as used in the United States.

Note that the “Task” for the two tracks is the same, while the submission, evaluation and awards are different, as summarized in the table below. Teams can participate in either or both of the tracks.

 

|                  | Task      | Measurements and evaluation | Dataset                       | Submission               | Awards                       |
| Scientific Track | See below | See below + research impact | Use entire or partial dataset | To conference as a paper | According to MM13 guidelines |
| Industrial Track | See below | See below                   | Use entire dataset            | See below                | Ranked by measurements       |

Task

The topic of the Challenge is web-scale image retrieval. The contestants are asked to develop systems that assess the effectiveness of query terms in describing images crawled from the web for image search purposes. A contesting system must produce a floating-point score for each image-query pair that reflects how well the query describes the given image, with higher scores indicating higher relevance. The dynamic range of the scores does not play a significant role as long as, for any query, sorting all of its associated images by their scores gives the best retrieval ranking for those images.
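
To make the ranking requirement concrete, the short sketch below (purely illustrative, not part of the Challenge API; the image names and scores are made up) shows that only the per-query ordering induced by the scores matters, since any monotonic rescaling of the scores yields the same ranking.

```python
# Purely illustrative: only the ordering of scores within a query matters.
scores = {"img1": 0.92, "img2": 0.15, "img3": 0.58}  # made-up scores for one query

def ranking(score_by_image):
    """Return the images sorted by descending score, i.e. the retrieval ranking."""
    return sorted(score_by_image, key=score_by_image.get, reverse=True)

# Rescaling every score monotonically (here, multiplying by 100) leaves the
# ranking unchanged, so the absolute dynamic range of the scores is irrelevant.
rescaled = {img: 100 * s for img, s in scores.items()}
assert ranking(scores) == ranking(rescaled) == ["img1", "img3", "img2"]
```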

Measurements

Each entry to the Challenge is ranked by its respective Discounted Cumulated Gain (DCG) measure against the test set. To compute DCG, we first sort, for each query, the images based on the floating-point scores returned by the contesting entry. DCG for each query is calculated as

DCG_25 = 0.01757 × Σ_{i=1}^{25} (2^{rel_i} − 1) / log_2(i + 1),

where rel_i is the manually judged relevance of the image at rank i with respect to the query, and 0.01757 is a normalizer that makes the score for 25 Excellent results equal to 1. The final metric is the average of DCG_25 over all queries in the test set.

In addition to DCG, the average latency in processing each image-query pair will be used as a tie-breaker. For this Challenge, each entry is given at most 12 seconds to assess each image-query pair. Tied or empty (timed-out) results are assigned the least favorable scores so as to produce the lowest DCG.
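
For concreteness, here is a minimal sketch of the per-query computation described above (not the official evaluation code). The numeric relevance grades Excellent = 3, Good = 2, Bad = 0 are an assumption consistent with the 0.01757 normalizer; the ranking depth of 25 and the normalizer come from the formula above.

```python
import math

# Assumed grading: Excellent = 3, Good = 2, Bad = 0 (not stated explicitly above).
NORMALIZER = 0.01757  # makes 25 Excellent results score ~1

def dcg25(scores, relevances):
    """Normalized DCG over the top 25 images of one query.

    scores: floats returned by the contesting system, one per image.
    relevances: manually judged grades for the same images, in the same order.
    """
    # Rank the images for this query by the system's scores, descending.
    ranked = sorted(zip(scores, relevances), key=lambda pair: pair[0], reverse=True)
    gain = 0.0
    for i, (_, rel) in enumerate(ranked[:25], start=1):
        gain += (2 ** rel - 1) / math.log2(i + 1)
    return NORMALIZER * gain

# The final metric averages dcg25 over all queries in the test set.
# Sanity check: 25 Excellent results give a score of approximately 1.
print(dcg25([1.0] * 25, [3] * 25))  # ~1.0
```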

Process

A dataset based on the Bing Image Search index will be made available for offline training purposes. Detailed descriptions of the dataset can be found in the “Datasets” section of this web site. In addition, the organizer will make available a web service, accessible from the “Team” section of this site, for online test runs three months before the final submission deadline. Each contestant can enter the URI of the web service implementing a contending entry on the website. Upon receiving an entry, the Challenge web site will schedule a job that calls the web service, evaluates its responses, and posts the results in the “Team” and “Leaderboard” sections of the web site if the entry is designated to show its results publicly.

Web Service Development Phase

Initially, the web site will evaluate each entry by computing DCG on a trial data set of 10 queries with 50 images each. Contestants can submit as many test runs as necessary during this phase.

Final Challenge

At 8 AM PDT on July 22, 2013 (extended), the web site will switch to the Challenge test set. All contestants must ensure that their entries are properly registered with the website prior to this time and that their web services are up and running for at least one week starting on July 22. No further revisions to the entries are allowed at this point.

Once the winners of the Challenge are determined, the website will resume accepting submissions and evaluating results from the general public. The web site with the Challenge test set will be maintained indefinitely after the Challenge for future researchers to include in their studies as a baseline.

Web Service Interface

Each entry is a URI to a REST-based web service hosted by the team. The web service must be publicly accessible over the internet via HTTP POST with the following parameters:

| Name  | Type          | Description                                                                                                                       |
| runID | UTF-8 string  | A unique identifier naming a particular run when a system is submitted for evaluation. White space is not allowed in the string. |
| query | UTF-8 string  | A text query in the raw form of the user input (all capitalization, punctuation, etc. retained).                                 |
| image | Base64 string | A base64-encoded JPEG image thumbnail, processed so that the larger of its width and height is at most 300 pixels.               |

A contesting system should process the image and respond as soon as possible with an HTTP 200 OK. The response body should be encoded in UTF-8 with the MIME type ‘text/plain’ and contain a floating-point score, with higher scores indicating a more relevant result for the query. The organizer may call each contesting web service multiple times during the Final Challenge week to obtain statistically significant results for determining the winners.
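
As an illustration only, the sketch below shows one way such an endpoint might look, using Flask as the HTTP framework. The framework choice, the URL path, the port, the constant placeholder scorer, and the assumption that the parameters arrive as form fields are ours; the parameter names runID, query, and image and the plain-text response follow the interface above.

```python
import base64
from flask import Flask, request  # assumes Flask is installed

app = Flask(__name__)

def score(query: str, jpeg_bytes: bytes) -> float:
    """Placeholder scorer; a real entry would run its relevance model here."""
    return 0.0

@app.route("/", methods=["POST"])  # path is arbitrary; the team registers its full URI
def evaluate():
    # Parameters defined by the Challenge interface: runID, query, image.
    run_id = request.form.get("runID", "")   # could be logged for bookkeeping
    query = request.form.get("query", "")
    image_b64 = request.form.get("image", "")
    jpeg_bytes = base64.b64decode(image_b64) if image_b64 else b""
    # Plain-text body containing a single floating-point score.
    body = str(score(query, jpeg_bytes))
    return body, 200, {"Content-Type": "text/plain; charset=utf-8"}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # port is a placeholder
```

Before registering its public URI on the Challenge web site, such a service can be exercised locally by POSTing the three fields with any HTTP client.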

Participation and Prizes

The Challenge is a team-based contest. Each team can have one or more members, and an individual can be a member of multiple teams. No two teams, however, can have identical membership. During the Development Phase, teams are encouraged to interact with each other, try out each other’s contending entries, and merge or split teams as appropriate. Other than an email address to receive notifications from the organizer, the contestants can remain anonymous throughout the Development Phase.

The team membership must be finalized and submitted to the organizer prior to the Final Challenge starting date. Each team must select one individual as the lead and the recipient of the prize check if the team wins.

At the end of the Final Challenge (July 29th), all entries will be ranked based on the metrics described above. The top five teams will receive the following cash prizes:

  • 1st place: $10,000
  • 2nd place: $8,000
  • 3rd place: $6,000
  • 4th place: $4,000
  • 5th place: $2,000

Winners will be contacted by us and have 3 business days to claim the prize once the Challenge results are announced at the Challenge web site. Cash prizes are subject to eligibility criteria and the pertinent laws or regulations by the countries of the contestants. View the Official Rules of the Contest at the organizer’s web site for details.

Questions related to this challenge should be directed to MSRBingChallenge@microsoft.com.

Dataset

Please read the license agreement before downloading the datasets.

The following Development Package is provided by MSR and Bing for the participants of the Challenge. Please read this document for descriptions of the datasets. For your convenience, we offer two ways to download the datasets. Please follow the link below and you will be led to a page with two links for the two methods (a Microsoft ID registration/login will be requested; please follow the directions to register an ID if you don't have one, or simply log in with your Microsoft ID):

Click here to download the dataset

Method 1

Download the training and development set in one package. You may also select the data center closest to you for a faster download.

Method 2

If smaller download sizes are preferred, the training dataset can also be downloaded in parts. Please put all files in one folder and use 7-Zip to unzip the z01 file.

Please let us know if you run into issues.

Contributed Articles:

  • Published on March 20, 2013: perspectives from MSR Researcher Dr. Xian-sheng Hua on the Challenge.
  • Published on April 12, 2013: An FAQ answered: Can I use additional data? by Xian-sheng Hua and Linjun Yang.