Introducing the SynthCity world: synthetic worlds for urban semantic segmentation and autonomous navigation

13th January 2016

Speaker: German Ros, doctoral candidate at the Computer Vision Center, Barcelona, Spain

Abstract: In this talk we present a new synthetic world, called SynthCity, which recreates an urban environment including the common elements present in real scenarios, such as different types of roads, sidewalks, buildings, and vegetation, as well as dynamic objects such as pedestrians, cars, and cyclists. Some of the most appealing properties of SynthCity are:

  1. cheap generation of all sorts of ground-truth (e.g., pixel-wise class annotations, 3D trajectories of objects, depth, optical flow, etc.);
  2. photo-realistic rendering of objects;
  3. dynamic weather generation, with variable lighting conditions and realistic shadow casting;
  4. and simulation of different seasons.
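To illustrate how pixel-wise class annotations like those in point 1 are typically consumed, the sketch below decodes a color-coded annotation image into integer class indices. The color palette and class set here are hypothetical placeholders, not SynthCity's actual encoding.

```python
import numpy as np

# Hypothetical palette: a real dataset ships its own color-to-class mapping.
PALETTE = {
    (128, 64, 128): 0,  # road
    (244, 35, 232): 1,  # sidewalk
    (70, 70, 70): 2,    # building
    (107, 142, 35): 3,  # vegetation
    (220, 20, 60): 4,   # pedestrian
    (0, 0, 142): 5,     # car
}

def decode_labels(rgb):
    """Map an (H, W, 3) color-coded annotation to an (H, W) class-index map.

    Pixels whose color is not in the palette become 255 ("ignore").
    """
    labels = np.full(rgb.shape[:2], 255, dtype=np.uint8)
    for color, idx in PALETTE.items():
        mask = np.all(rgb == np.array(color, dtype=rgb.dtype), axis=-1)
        labels[mask] = idx
    return labels
```

Because the labels are rendered rather than hand-annotated, every pixel is covered exactly, which is what makes this kind of ground truth cheap to generate at scale.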

SynthCity is conceived to provide users with rich and versatile urban sequences with accurate ground truth for training and validating data-driven state-of-the-art algorithms, such as DeepNets, on tasks related to autonomous navigation. These include critical problems such as semantic segmentation, SLAM, object detection, behavior analysis, depth estimation, and optical flow. Here, we focus on showing how SynthCity helps to improve semantic segmentation results.
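Semantic segmentation results of the kind mentioned above are commonly validated with per-class intersection-over-union (IoU), a standard metric rather than anything specific to SynthCity. A minimal sketch, assuming predictions and ground truth are integer label maps of the same shape:

```python
import numpy as np

def per_class_iou(pred, gt, num_classes, ignore_index=255):
    """Per-class intersection-over-union between two (H, W) label maps.

    Returns a list of length num_classes; classes absent from both maps
    yield NaN so they can be excluded from a mean-IoU average.
    """
    valid = gt != ignore_index  # mask out unlabeled pixels
    ious = []
    for c in range(num_classes):
        p = (pred == c) & valid
        g = (gt == c) & valid
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        ious.append(inter / union if union else float("nan"))
    return ious
```

Averaging the non-NaN entries gives the mean IoU typically reported when comparing models trained with and without synthetic data.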