The limitations of convex optimization approaches for learning-to-rank

27th January 2014

27th March 2014 at 11:00

Nicolas Usunier, maître de conférences (associate professor) at Université de Technologie de Compiègne, Compiègne, France


Part of the research on machine learning algorithms for ranking has been driven by applications to search engines, where the training data consists of user queries and candidate documents for each query, with information on the desired ordering of the documents obtained from user feedback or paid annotators. In that context, the community has put great emphasis on designing algorithms that optimize a convex objective function on the training data. The exact form of the convex objective varies from one algorithm to another, but in all cases it serves as a computationally tractable surrogate for a pre-specified quality measure of the predicted rankings.

The use of convex surrogates is common in machine learning, and it is theoretically well-grounded for classification tasks: optimizing a well-chosen convex objective function asymptotically yields an optimal classifier.

In this talk, I will discuss the extent to which these desirable properties of convex surrogate approaches can, or cannot, be extended to ranking. In particular, I will show that for some of the most common quality measures used to evaluate search engines, it is impossible to obtain an optimal ranking function by optimizing a convex objective function. This result implies that many existing learning-to-rank algorithms cannot optimize the quality measure they are designed for, even in a favorable asymptotic regime.
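To make the surrogate idea concrete in the classification case the abstract refers to, the sketch below minimizes the hinge loss (a standard convex surrogate of the non-convex 0-1 loss) by subgradient descent for a linear scorer on synthetic, linearly separable data. This is a minimal illustration under assumed choices (the data-generating process, step-size schedule, and variable names are not from the talk); the point is only that driving the convex surrogate down also drives the 0-1 loss down here, which is the property shown to fail for common ranking measures.

```python
import numpy as np

def zero_one_loss(scores, labels):
    """Fraction of examples whose predicted sign disagrees with the label (non-convex in the scores)."""
    return float(np.mean(np.sign(scores) != labels))

def hinge_loss(scores, labels):
    """Convex surrogate of the 0-1 loss: mean of max(0, 1 - y * s)."""
    return float(np.mean(np.maximum(0.0, 1.0 - labels * scores)))

# Hypothetical toy data: 2-D points whose label is the sign of the first
# coordinate, shifted to create a margin so the sample is separable.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
X[:, 0] += y

# Subgradient descent on the hinge loss for a linear scorer w.
w = np.zeros(2)
for t in range(1, 501):
    margins = y * (X @ w)
    active = margins < 1.0                                  # examples inside the margin
    grad = -(y[active, None] * X[active]).sum(axis=0) / len(X)
    w -= (1.0 / np.sqrt(t)) * grad

# On this separable sample, minimizing the convex surrogate also brings
# the (non-convex) 0-1 training loss down to essentially zero.
print("0-1 loss :", zero_one_loss(X @ w, y))
print("hinge    :", hinge_loss(X @ w, y))
```

The talk's negative result says that no analogous guarantee can hold for convex surrogates of common ranking quality measures, no matter how the surrogate is chosen.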