Publications
Authors:
  • Florent Perronnin, Jorge Sanchez, Yan Liu
Citation:
CVPR (Conference on Computer Vision and Pattern Recognition), San Francisco, CA, USA, 13-18 June 2010
Abstract:
Kernel machines rely on an implicit mapping of the data such that non-linear classification in the original space corresponds to linear classification in the new space. As kernel machines are difficult to scale to large training sets, it has been proposed to perform an explicit mapping of the data and to learn linear classifiers directly in the new space. In this paper, we consider the problem of learning image categorizers on large image sets (e.g. > 100k images) using bag-of-visual-words (BOV) image representations and Support Vector Machine classifiers. We experiment with three approaches to BOV embedding: 1) kernel PCA (kPCA), 2) a modified kPCA we propose for additive kernels and 3) random projections for shift-invariant kernels. We report results on 3 datasets: Caltech101, VOC07 and ImageNet. One conclusion is that simply square-rooting BOV vectors - which corresponds to an exact mapping in the case of the Bhattacharyya kernel - already leads to large improvements, often quite close to the best results obtained with additive kernels. Another important conclusion is that, although it is possible to go beyond additive kernels, the embedding comes at a much higher cost.
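The square-rooting result in the abstract can be sketched in a few lines. The snippet below is an illustrative example, not code from the paper: it checks that the dot product of element-wise square-rooted, L1-normalized BOV histograms equals the Bhattacharyya kernel, so a linear SVM on square-rooted vectors is an exact explicit embedding of that kernel. The histogram values and vocabulary size are arbitrary toy data.

```python
import numpy as np

def bhattacharyya_kernel(p, q):
    # Bhattacharyya kernel between two L1-normalized histograms:
    # K(p, q) = sum_i sqrt(p_i * q_i)
    return float(np.sum(np.sqrt(p * q)))

def sqrt_embedding(h):
    # Explicit mapping: element-wise square root of a BOV histogram.
    # A linear kernel (dot product) in this space equals the
    # Bhattacharyya kernel in the original space.
    return np.sqrt(h)

# Two toy BOV histograms over a hypothetical 8-word visual vocabulary,
# L1-normalized as is standard for BOV representations.
rng = np.random.default_rng(0)
p = rng.random(8); p /= p.sum()
q = rng.random(8); q /= q.sum()

linear_in_new_space = float(sqrt_embedding(p) @ sqrt_embedding(q))
kernel_in_old_space = bhattacharyya_kernel(p, q)
assert np.isclose(linear_in_new_space, kernel_in_old_space)
```

Because the mapping is exact and has the same dimensionality as the input, it adds essentially no cost, which is why the paper finds it a strong baseline against more expensive embeddings.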
Year:
2010
Report number:
2009/090