This repository contains scripts for inspection, preparation, and evaluation of the Cityscapes dataset. This large-scale dataset contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high quality pixel-level annotations of 5 000 frames in addition to a larger set of 20 000 weakly annotated frames.
Details and download are available at: www.cityscapes-dataset.net
The folder structure of the Cityscapes dataset is as follows:
{root}/{type}{video}/{split}/{city}/{city}_{seq:0>6}_{frame:0>6}_{type}{ext}
The meaning of the individual elements is:
 - `root`  the root folder of the Cityscapes dataset. Many of our scripts check if an environment variable `CITYSCAPES_DATASET` pointing to this folder exists and use this as the default choice.
 - `type`  the type/modality of data, e.g. `gtFine` for fine ground truth, or `leftImg8bit` for left 8-bit images.
 - `split` the split, i.e. train/val/test/train_extra/demoVideo. Note that not all kinds of data exist for all splits. Thus, do not be surprised to occasionally find empty folders.
 - `city`  the city in which this part of the dataset was recorded.
 - `seq`   the sequence number using 6 digits.
 - `frame` the frame number using 6 digits. Note that in some cities very few, albeit very long sequences were recorded, while in some cities many short sequences were recorded, of which only the 19th frame is annotated.
 - `ext`   the extension of the file and optionally a suffix, e.g. `_polygons.json` for ground truth files
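For illustration, here is a minimal Python sketch (not one of the provided scripts) that assembles such a path from the elements above; the root, city, sequence, and frame values are placeholder examples, and the optional `{video}` part is omitted:

```python
# Sketch only: build an example Cityscapes file path from its elements.
import os

root  = os.environ.get("CITYSCAPES_DATASET", "/path/to/cityscapes")  # dataset root
type_ = "gtFine"            # type/modality of data
split = "train"             # train/val/test/train_extra/demoVideo
city  = "aachen"            # example city
seq, frame = 0, 19          # example sequence and frame numbers (6 digits each)
ext   = "_labelIds.png"     # suffix + extension

path = os.path.join(
    root, type_, split, city,
    f"{city}_{seq:0>6}_{frame:0>6}_{type_}{ext}",
)
print(path)
# e.g. /path/to/cityscapes/gtFine/train/aachen/aachen_000000_000019_gtFine_labelIds.png
```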
Possible values of `type`
 - `gtFine`  the fine annotations, 2975 training, 500 validation, and 1525 testing. This type of annotations is used for validation, testing, and optionally for training. Annotations are encoded using `json` files containing the individual polygons. Additionally, we provide `png` images, where pixel values encode labels. Please refer to `helpers/labels.py` and the scripts in `preparation` for details.
 - `gtCoarse`  the coarse annotations, available for all training and validation images and for another set of 19998 training images (`train_extra`). These annotations can be used for training, either together with gtFine or alone in a weakly supervised setup.
 - `gtBboxCityPersons`  pedestrian bounding box annotations, available for all training and validation images. Please refer to `helpers/labels_cityPersons.py` as well as the CityPersons publication (Zhang et al., CVPR '17) for more details.
 - `leftImg8bit`  the left images in 8-bit LDR format. These are the standard annotated images.
 - `leftImg16bit`  the left images in 16-bit HDR format. These images offer 16 bits per pixel of color depth and contain more information, especially in very dark or bright parts of the scene. Warning: The images are stored as 16-bit pngs, which is non-standard and not supported by all libraries.
 - `rightImg8bit`  the right stereo views in 8-bit LDR format.
 - `rightImg16bit`  the right stereo views in 16-bit HDR format.
 - `timestamp`  the time of recording in ns. The first frame of each sequence always has a timestamp of 0.
 - `disparity`  precomputed disparity depth maps. To obtain the disparity values, compute for each pixel p with p > 0: d = ( float(p) - 1. ) / 256., while a value p = 0 is an invalid measurement. Warning: the images are stored as 16-bit pngs, which is non-standard and not supported by all libraries. A conversion sketch is shown after this list.
 - `camera`  internal and external camera calibration. For details, please refer to csCalibration.pdf
 - `vehicle`  vehicle odometry, GPS coordinates, and outside temperature. For details, please refer to csCalibration.pdf
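The following is a minimal sketch of the disparity conversion described above, assuming numpy and PIL are available and the png is read with its full 16-bit depth; the filename is a placeholder:

```python
# Sketch: convert a raw 16-bit Cityscapes disparity png into float disparity values.
import numpy as np
from PIL import Image

raw = np.asarray(Image.open("example_disparity.png"), dtype=np.float32)  # placeholder filename

valid = raw > 0                          # p == 0 marks an invalid measurement
disparity = np.zeros_like(raw)
disparity[valid] = (raw[valid] - 1.0) / 256.0
```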
More types might be added over time and also not all types are initially available. Please let us know if you need any other meta-data to run your approach.
Possible values of `split`
 - `train`  usually used for training, contains 2975 images with fine and coarse annotations
 - `val`  should be used for validation of hyper-parameters, contains 500 images with fine and coarse annotations. Can also be used for training.
 - `test`  used for testing on our evaluation server. The annotations are not public, but we include annotations of ego-vehicle and rectification border for convenience.
 - `train_extra`  can be optionally used for training, contains 19998 images with coarse annotations
 - `demoVideo`  video sequences that could be used for qualitative evaluation, no annotations are available for these videos
There are several scripts included with the dataset in a folder named `scripts`
 - `helpers`      helper files that are included by other scripts
 - `viewer`       view the images and the annotations
 - `preparation`  convert the ground truth annotations into a format suitable for your approach
 - `evaluation`   validate your approach
 - `annotation`   the annotation tool used for labeling the dataset
Note that all files contain a short documentation at the top. The most important files are:
 - `helpers/labels.py`  central file defining the IDs of all semantic classes and providing mapping between various class properties (see the sketch after this list).
 - `helpers/labels_cityPersons.py`  file defining the IDs of all CityPersons pedestrian classes and providing mapping between various class properties.
 - `viewer/cityscapesViewer.py`  view the images and overlay the annotations.
 - `preparation/createTrainIdLabelImgs.py`  convert annotations in polygonal format to png images with label IDs, where pixels encode "train IDs" that you can define in `labels.py`.
 - `preparation/createTrainIdInstanceImgs.py`  convert annotations in polygonal format to png images with instance IDs, where pixels encode instance IDs composed of "train IDs".
 - `evaluation/evalPixelLevelSemanticLabeling.py`  script to evaluate pixel-level semantic labeling results on the validation set. This script is also used to evaluate the results on the test set.
 - `evaluation/evalInstanceLevelSemanticLabeling.py`  script to evaluate instance-level semantic labeling results on the validation set. This script is also used to evaluate the results on the test set.
 - `setup.py`  run `setup.py build_ext --inplace` to enable cython plugin for faster evaluation. Only tested for Ubuntu.
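As a rough sketch of how the class definitions in `helpers/labels.py` can be queried (assuming the package has been installed as described below; the attribute names follow that file):

```python
# Sketch: look up class properties and build an ID -> train ID mapping.
# Assumes the cityscapesscripts package is installed (see installation below).
from cityscapesscripts.helpers.labels import labels, name2label, id2label

car = name2label["car"]
print(car.id, car.trainId, car.color)

# Remap label IDs to train IDs, e.g. for converting ground-truth images.
id_to_train_id = {label.id: label.trainId for label in labels}
print(id_to_train_id[26])  # label ID 26 ("car") -> its train ID
```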
The scripts can be installed via pip, i.e. from within the `scripts` folder run:
sudo pip install .
This installs the scripts as a python module named `cityscapesscripts` and exposes the following tools (see above for descriptions):
 - `csViewer`
 - `csLabelTool`
 - `csEvalPixelLevelSemanticLabeling`
 - `csEvalInstanceLevelSemanticLabeling`
 - `csCreateTrainIdLabelImgs`
 - `csCreateTrainIdInstanceImgs`
Note that for the graphical tools you additionally need to install:
sudo apt install python-tk python-qt4
Once you want to test your method on the test set, please run your approach on the provided test images and submit your results:
www.cityscapes-dataset.net/submit/
For semantic labeling, we require the result format to match the format of our label images named `labelIds`.
Thus, your code should produce images where each pixel's value corresponds to a class ID as defined in `labels.py`.
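A minimal sketch of writing such a result image is shown below (this is not the official submission code); the prediction array and output filename are placeholders, and the exact naming convention for submitted files is described on the website:

```python
# Sketch: save a prediction as a png whose pixel values are label IDs from labels.py.
import numpy as np
from PIL import Image

# Placeholder prediction at the Cityscapes resolution, everything set to ID 7 ("road").
pred_ids = np.full((1024, 2048), 7, dtype=np.uint8)

# Placeholder filename; consult the submission page for the required naming.
Image.fromarray(pred_ids, mode="L").save("frankfurt_000000_000294_pred.png")
```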
Note that our evaluation scripts are included in the scripts folder and can be used to test your approach on the validation set.
For further details regarding the submission process, please consult our website.
Please feel free to contact us with any questions, suggestions or comments:
- Marius Cordts, Mohamed Omran
- mail@cityscapes-dataset.net
- www.cityscapes-dataset.net