XRCE organises public scientific seminars on a regular basis, which you are welcome to attend. These seminars are an opportunity to exchange ideas with researchers from diverse backgrounds and to broaden scientific expertise. You can subscribe to our seminar RSS feed for dates, speakers and topics.
The goal of the ANR-KAMELEON project is to capture and process vast amounts of dynamic anatomical data in order to analyze the skeletal structure and locomotion of vertebrates. The current system uses six high-speed cameras to capture the external surface, combined with temporally calibrated x-ray cineradiography for the internal skeleton.
I will present the interesting challenges that arise in the context of this application and our current approaches to addressing them. I will focus on our ongoing research in video segmentation, 3D reconstruction, and statistical analysis, as well as future directions for the project, including learning skeletal movement from the external surface information.
Slides (4.90 MB)
I will present a general framework for Collaborative Filtering (CF), which is the task of learning the preferences of users or customers for products, such as books or movies, from a set of known preferences. A standard approach to CF is to find a low-rank, or low trace norm, approximation to a partially observed matrix of user preferences. We generalize this approach to the estimation of a compact operator, of which matrix estimation is a special case. We develop a notion of spectral regularization which captures both the rank constraint and trace norm regularization. The major advantage of this approach is that it provides a natural way of using side information about the users (or objects) in question, such as age and gender - a well-known limitation of the low-rank approach. We provide a number of algorithms and evaluate them on a standard CF dataset with promising results. This is joint work with Jacob Abernethy (UC Berkeley), Francis Bach (INRIA and ENS Paris) and Theodoros Evgeniou (INSEAD).
Slides (319.38 kB)
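To make the low-trace-norm idea concrete, here is a minimal sketch of matrix completion by iterative singular-value soft-thresholding (the "soft-impute" style of algorithm). This is an illustration of the general trace-norm-regularized approach the abstract describes, not the authors' own method; the function name and parameters are assumptions for the example.

```python
import numpy as np

def soft_impute(R, mask, lam=0.1, n_iters=300):
    """Trace-norm-regularized matrix completion via iterative
    SVD soft-thresholding (a "soft-impute"-style sketch).
    R: preference matrix (arbitrary values where mask == 0).
    mask: 1 where a preference is observed, 0 elsewhere.
    lam: amount subtracted from each singular value; a larger lam
         pushes the estimate toward smaller trace norm / lower rank.
    """
    X = np.zeros_like(R, dtype=float)
    for _ in range(n_iters):
        # Fill the unobserved entries with the current estimate.
        filled = mask * R + (1 - mask) * X
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)   # shrink singular values
        X = (U * s) @ Vt
    return X
```

Spectral regularization, as described in the abstract, generalizes this: soft-thresholding the spectrum is one member of a family that also includes hard rank truncation.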
The output of image segmentation is often represented by a labelled graph, with each vertex corresponding to a segmented region and edges joining neighboring regions. However, such rich representations of images have remained largely underused for learning tasks, partly due to the observed instability of the segmentation process and the inherent difficulty of inexact graph matching and other graph mining problems on uncertain graphs. Recent advances in kernel-based methods have made it possible to handle structured objects such as graphs by defining similarity measures via kernels, which can be used for many learning tasks, such as classification with a support vector machine.
In this talk, I will first review the rapidly developing kernel-based methods in machine learning and then present a new family of kernels between two segmentation graphs. Our kernels are based on soft matchings of subtree patterns of the respective graphs, leveraging the natural structure of images while remaining robust to the uncertainty of the segmentation process. (Joint work with Zaid Harchaoui)
Slides (2.32 MB)
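As a simplified illustration of subtree-pattern kernels between labelled graphs, the sketch below counts *exactly* matching subtree patterns using Weisfeiler-Lehman-style label refinement. The kernel in the talk uses *soft* matchings and is more elaborate; this example, with its hypothetical function names, only shows the general shape of the computation.

```python
from collections import Counter

def wl_relabel(adj, labels):
    """One refinement step: each node's new label combines its own
    label with the sorted multiset of its neighbours' labels,
    encoding the depth-1 subtree rooted at the node."""
    return [(labels[i], tuple(sorted(labels[j] for j in nbrs)))
            for i, nbrs in enumerate(adj)]

def subtree_kernel(adj1, labels1, adj2, labels2, depth=2):
    """Count matching subtree patterns up to `depth` between two
    labelled graphs given as adjacency lists (exact matching, a
    simplified stand-in for soft matching)."""
    k = 0
    l1, l2 = list(labels1), list(labels2)
    for _ in range(depth + 1):
        c1, c2 = Counter(l1), Counter(l2)
        k += sum(c1[p] * c2[p] for p in c1 if p in c2)
        l1 = wl_relabel(adj1, l1)
        l2 = wl_relabel(adj2, l2)
    return k
```

For segmentation graphs, the node labels would come from region descriptors (color, texture), and a soft matching would replace the exact pattern equality with a similarity score, making the kernel robust to segmentation instability.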
Conditional Random Fields (CRFs) are an effective tool for a variety of data segmentation and labelling tasks, including visual scene interpretation, which seeks to partition images into their constituent semantic-level regions and assign an appropriate class label to each region. For accurate labelling it is important to capture the global context of the image as well as local information. We make two contributions. First, we introduce a CRF-based scene labelling model that incorporates both local features and features aggregated over the whole image or large sections of it. Second, traditional CRF learning requires fully labelled datasets, and complete labellings are typically costly and troublesome to produce. We introduce an algorithm that allows CRF models to be learned from datasets where a substantial fraction of the nodes are unlabelled. It works by marginalizing out the unknown labels so that the log-likelihood of the known ones can be maximized by gradient ascent. Loopy Belief Propagation is used to approximate the marginals needed for the gradient and log-likelihood calculations, and the Bethe free-energy approximation to the log-likelihood is monitored to control the step size. Our experimental results show that incorporating top-down aggregate features significantly improves the segmentations and that effective models can be learned from fragmentary labellings. The resulting methods give scene segmentation results comparable to the state of the art on three different image databases.
Slides (1.01 MB)
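The marginalization step can be sketched on a toy model. The code below computes the log-likelihood of the *known* labels of a tiny binary chain CRF by summing over all completions of the unknown ones, using brute-force enumeration in place of the Loopy Belief Propagation approximation used on real image graphs. The energy function and parameter names are illustrative assumptions, not the authors' model.

```python
import itertools
import math

def chain_crf_partial_ll(theta_unary, theta_pair, x, y_partial):
    """Log-likelihood of a partially labelled binary chain CRF.
    x: list of scalar node features.
    y_partial: list of observed labels (0/1) or None for unlabelled nodes.
    Score(y) = sum_i theta_unary * x[i] * y[i]
             + sum_i theta_pair * [y[i] == y[i+1]]   (illustrative energy)
    Unknown labels are marginalized out by summing over all
    completions consistent with the observed ones (brute force here;
    Loopy BP would approximate this on loopy image graphs).
    """
    n = len(x)

    def score(y):
        s = sum(theta_unary * x[i] * y[i] for i in range(n))
        s += sum(theta_pair * (y[i] == y[i + 1]) for i in range(n - 1))
        return s

    log_z = math.log(sum(math.exp(score(y))
                         for y in itertools.product([0, 1], repeat=n)))
    consistent = [y for y in itertools.product([0, 1], repeat=n)
                  if all(o is None or o == y[i]
                         for i, o in enumerate(y_partial))]
    log_num = math.log(sum(math.exp(score(y)) for y in consistent))
    return log_num - log_z
```

Gradient ascent on this quantity reduces to the familiar difference of expected feature counts: expectations clamped to the known labels minus free expectations, which is exactly what the approximate marginals from Loopy BP supply on larger graphs.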
Current NLP-related projects at the Department of Computer Linguistics and AI at Adam Mickiewicz University (Faculty of Mathematics and Computer Science). I will present some results obtained at the Department in the area of language engineering: the POLINT system for computer understanding of Polish texts and its application to a crisis management system (currently under development); the Polish WordNet project PolNet and its integration into the Global WordNet Grid; and the electronic morphological dictionary POLEX.
Status: completed