Non-rigid shape from single images: from linear to deep learning formulations
In this talk, I will first present linear formulations for estimating the shape of 3D deformable objects from a single image. Since monocular non-rigid reconstruction is severely under-constrained, these formulations need to incorporate a number of constraints enforcing 3D-to-2D projectivity, local rigidity, or shading coherence. I will then discuss the major limitations of these linear approaches and describe an alternative stochastic exploration strategy. I will show results for both non-rigid shape and human pose recovery.
In the second part of the talk, I will present a novel solution for estimating 3D human pose from a single image. In contrast to previous formulations, instead of using joint coordinates to represent the shape, we consider a matrix of pairwise distances between joints. We then cast shape retrieval as a 2D-to-3D distance matrix regression, which we solve using very simple neural network architectures. The approach also has the advantage of naturally handling missing observations and of hypothesizing the positions of unobserved joints. Quantitative results on standard datasets demonstrate consistent performance gains over the state of the art.
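As a minimal sketch of the distance-matrix representation mentioned above (an illustration only, not the talk's actual pipeline; the joint coordinates are made up for the example): each pose of N joints becomes an N×N Euclidean distance matrix, which is symmetric, has a zero diagonal, and is invariant to rotation and translation of the pose.

```python
import numpy as np

# Hypothetical pose: N joints given as an (N, 3) array of 3D coordinates.
joints = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0]])

# Pairwise differences via broadcasting: shape (N, N, 3).
diff = joints[:, None, :] - joints[None, :, :]

# Euclidean distance matrix (EDM): entry (i, j) is the distance
# between joints i and j. A regressor can map the 2D EDM (computed
# from image joint detections) to the 3D EDM of the recovered pose.
edm = np.sqrt((diff ** 2).sum(axis=-1))
```

The same construction applied to 2D joint detections yields the input matrix; a missing joint simply leaves its row and column unobserved, which is why this representation handles occlusions gracefully.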