Beyond Semantics

Computer vision tasks such as automated object recognition or localization, which relate to the semantic content of images, are well-studied problems. Until recently, however, it was unclear whether image properties such as aesthetic quality, which each viewer experiences differently, were amenable to automatic description and categorization.

This debate has now been settled, with works demonstrating that image saliency, aesthetics, iconicity and memorability can, to a great extent, be predicted by training supervised models on visual data. Automatic prediction of such properties can enable technologies such as personalization of marketing materials, enhancement of learning materials, and database management and compression.

However, several interrelated challenges exist for studying image properties that go beyond semantics. Foremost is the need for rich annotations that capture the range of subjective opinions an image may elicit, as viewers never agree completely on how aesthetically pleasing or memorable an image is. This need in turn may require the acquisition and analysis of large-scale image datasets. In addition, the design of image representations that adequately capture characteristics such as aesthetic quality or iconicity remains an active research domain.

Our group has made important contributions to visual analysis beyond semantics, with a focus on image aesthetics. We have published a dataset of over 200K images with extensive annotations related to their aesthetic quality, including real-valued scores and textual comments given to each image by dozens of photography enthusiasts. We have also provided an extensive analysis of the informational content present in these annotations and explored the wide variety of applications they enable, including aesthetic quality assessment, image re-ranking, and style tagging. We also demonstrated that Fisher vectors (FVs), a state-of-the-art generic image representation developed in our group, can be effectively used to train models of image aesthetics. Our work on visual saliency has shown that FVs are also effective for learning image saliency models.
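
As a concrete illustration of this approach, the sketch below (in Python, using scikit-learn and placeholder data) shows how per-image mean scores of the kind collected in our aesthetics dataset could be binarized and used to train a linear classifier on precomputed Fisher-vector features. The feature dimensionality, score threshold and classifier choice are illustrative assumptions, not the exact published setup.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Placeholder data: in practice each row would be the Fisher vector of one
# image, and each score the mean of its 1-10 ratings from human annotators.
fisher_vectors = rng.standard_normal((1000, 4096))  # FV dimensionality is illustrative
mean_scores = rng.uniform(1.0, 10.0, size=1000)

# One common protocol: images with a mean score >= 5 are labelled "high quality".
labels = (mean_scores >= 5.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    fisher_vectors, labels, test_size=0.2, random_state=0)

# Linear models scale well to high-dimensional Fisher vectors.
clf = LinearSVC(C=1.0)
clf.fit(X_train, y_train)
print("Binary aesthetic-quality accuracy:", clf.score(X_test, y_test))

The same recipe could be adapted to regression on the raw mean scores or to per-style tagging by swapping the targets and the predictor.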

Selected publications:

Learning Beautiful (and Ugly) Attributes
 Luca Marchesotti, Florent Perronnin
 BMVC 2013 (Bristol, U.K.) 
 

Learning to Rank Images using Semantic and Aesthetic Labels
 Naila Murray, Luca Marchesotti, Florent Perronnin
 BMVC 2012 (Guildford, U.K.)
 

AVA: A Large-Scale Database for Aesthetic Visual Analysis
 Naila Murray, Luca Marchesotti, Florent Perronnin
 CVPR 2012 (Providence, U.S.) 
 

Towards Automatic and Flexible Concept Transfer
 Naila Murray, Sandra Skaff, Luca Marchesotti, Florent Perronnin
 Computers & Graphics 2012 
 

Assessing the Aesthetic Quality of Photographs Using Generic Image Descriptors
 Luca Marchesotti, Florent Perronnin, Diane Larlus, Gabriela Csurka
 ICCV 2011 (Barcelona, Spain) 
 

Learning Moods and Emotions from Color Combinations
 Luca Marchesotti, Gabriela Csurka, Craig Saunders, Sandra Skaff
 ICVGIP 2011 (Chennai, India), Honorable Mention
 

Font Retrieval on Large Scale: An Experimental Study
 Luca Marchesotti, Florent Perronnin, Saurabh Kataria
 ICIP 2010 (Hong Kong)
 

A Framework for Visual Saliency Detection with Applications to Image Thumbnailing
 Luca Marchesotti, Claudio Cifarelli, Gabriela Csurka
 ICCV 2009 (Kyoto, Japan)