• Thomas Mensink, Gabriela Csurka, Florent Perronnin, Jorge Sanchez, Jakob Verbeek
CLEF 2010 - Conference on Multilingual and Multimodal Information Access Evaluation, Padua, Italy, 20-23 September 2010
Full paper available on the <a href="">CLEF Website</a>
In this paper we present the common effort of Lear and XRCE for the ImageCLEF Visual Concept Detection and Annotation Task. We first sought to combine our individual state-of-the-art approaches: the Fisher vector image representation and the TagProp method for image auto-annotation. Our second motivation was to investigate annotation performance when using extra information in the form of the provided Flickr-tags.
The results show that using the Flickr-tags in combination with visual features improves on any method using only visual features. Our winning system, an early-fusion linear-SVM classifier trained on visual and Flickr-tag features, obtains 45.53% mean Average Precision (mAP), almost a 5% absolute improvement over the best visual-only system. Our best visual-only system, a late-fusion linear-SVM classifier trained on two types of visual features (SIFT and color), obtains 39.0% mAP, close to the best visual-only system in the task. The performance of TagProp is close to that of our SVM classifiers.
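The early-fusion scheme described above amounts to concatenating the visual and tag feature vectors into a single representation before training one linear SVM per concept. A minimal sketch of that idea, using random stand-in data (the feature dimensions, labels, and `LinearSVC` settings here are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 200

# Hypothetical stand-ins for the two feature views:
# Fisher-vector-like visual features and binary Flickr-tag indicators.
visual = rng.normal(size=(n, 64))
tags = rng.integers(0, 2, size=(n, 20)).astype(float)

# Toy per-concept labels (in the task there is one classifier per concept).
labels = (visual[:, 0] + tags[:, 0] > 0.5).astype(int)

# Early fusion: concatenate the two views into one feature vector per image.
fused = np.hstack([visual, tags])

# One linear SVM trained on the fused representation.
clf = LinearSVC(C=1.0, max_iter=10000).fit(fused, labels)
acc = clf.score(fused, labels)
```

A late-fusion variant would instead train one classifier per feature type and combine their scores afterwards.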
The methods presented in this paper are all scalable to large datasets and/or many concepts. This is due to the fast Fisher kernel (FK) framework for image representation, and to the classifiers. The linear SVM classifier has proven to scale well to large datasets. The k-NN approach of TagProp is interesting in this respect since it requires only 2 parameters per concept.
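The "2 parameters per concept" mentioned above refers to TagProp's per-concept logistic calibration of a weighted nearest-neighbour vote. A hedged sketch of that structure (the distance-based weighting and the `alpha`/`beta` values below are illustrative assumptions, not the paper's learned parameters):

```python
import numpy as np

def tagprop_predict(X_train, Y_train, x, k=5, alpha=1.0, beta=0.0):
    """Weighted k-NN prediction in the spirit of TagProp.

    alpha and beta are the two per-concept parameters of a logistic
    calibration applied to the soft neighbour vote; the values used
    here are illustrative, not learned as in the actual method.
    """
    # Distances from the query to all training images.
    d = np.linalg.norm(X_train - x, axis=1)
    # Keep the k nearest neighbours, weighted by a decaying function of distance.
    nn = np.argsort(d)[:k]
    w = np.exp(-d[nn])
    w /= w.sum()
    # Soft vote in [0, 1] over the neighbours' concept labels.
    score = w @ Y_train[nn]
    # Per-concept sigmoid with slope alpha and offset beta.
    return 1.0 / (1.0 + np.exp(-(alpha * score + beta)))
```

Because prediction only needs the neighbour vote plus two scalars per concept, adding a new concept is cheap compared to retraining a full classifier.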