Official repository for DERD-Net: Learning Depth from Event-based Ray Densities, by Diego Hitzges*, Suman Ghosh*, and Guillermo Gallego.
*Equal contribution.
If you use this work in your research, please cite it as follows:
```bibtex
@InProceedings{Hitzges25neurips,
  title     = {{DERD-Net}: Learning Depth from Event-based Ray Densities},
  author    = {Hitzges, Diego and Ghosh, Suman and Gallego, Guillermo},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2025}
}
```

- Create Disparity Space Images (DSIs) from events and camera pose
- In case of stereo event vision, fuse DSIs from two or more cameras
- Select pixels with sufficient ray counts in the DSI
- Extract a local subregion of the DSI (Sub-DSI) around each selected pixel
- Each Sub-DSI is one data point and is processed independently and in parallel by the network
- Pixel-wise depth estimation for each Sub-DSI:
- Single value at selected pixel for single-pixel network version
- 3x3 grid at selected and 8-neighboring pixels for multi-pixel network version
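The pipeline steps above can be sketched as follows. This is an illustrative NumPy sketch, not the repository's code: the DSI shape, the ray-count threshold, and the Sub-DSI radius are all assumptions made for the example.

```python
import numpy as np

def extract_sub_dsis(dsi, threshold, radius=3):
    """Select pixels whose total ray count exceeds `threshold` and cut out a
    local (2*radius+1) x (2*radius+1) x D subregion (Sub-DSI) around each.
    Each Sub-DSI is then a single, independent input to the network."""
    H, W, D = dsi.shape
    ray_counts = dsi.sum(axis=2)                # total ray density per pixel
    ys, xs = np.where(ray_counts > threshold)   # pixels with sufficient rays
    sub_dsis = []
    for y, x in zip(ys, xs):
        # keep only pixels whose full neighborhood fits inside the DSI
        if radius <= y < H - radius and radius <= x < W - radius:
            sub_dsis.append(dsi[y - radius:y + radius + 1,
                                x - radius:x + radius + 1, :])
    return sub_dsis

# For stereo, DSIs from two cameras can be fused, e.g. by adding ray densities
# (random data stands in for real DSIs here):
rng = np.random.default_rng(0)
dsi_left = rng.poisson(1.0, size=(64, 64, 100)).astype(float)
dsi_right = rng.poisson(1.0, size=(64, 64, 100)).astype(float)
fused = dsi_left + dsi_right
subs = extract_sub_dsis(fused, threshold=150, radius=3)
```

The network then regresses depth per Sub-DSI: a single value at the central pixel (single-pixel version) or a 3x3 grid covering its 8 neighbors as well (multi-pixel version).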
## Drones (MVSEC)
Using three-fold cross-validation on the MVSEC indoor_flying sequences, our approach drastically outperforms comparable methods:
- Using purely monocular data, our method achieves comparable results to existing stereo methods.
- When applied to stereo data, it strongly outperforms all state-of-the-art (SOTA) approaches, reducing the mean absolute error by at least 42%.
- Our method also allows depth completion to be increased more than 3-fold while still reducing the median absolute error by at least 30%.
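The metrics referenced above (mean/median absolute depth error and depth completion) can be computed as in the following sketch. The function and variable names are assumptions for illustration; errors are evaluated only on pixels where both maps carry a valid depth.

```python
import numpy as np

def depth_errors(pred, gt):
    """Mean/median absolute error over valid pixels, plus completion
    (the fraction of pixels for which an estimate exists)."""
    valid = np.isfinite(pred) & np.isfinite(gt) & (gt > 0)
    abs_err = np.abs(pred[valid] - gt[valid])
    completion = valid.mean()
    return abs_err.mean(), np.median(abs_err), completion

# Toy example: ground truth at 2 m, prediction off by 10 cm everywhere
gt = np.full((4, 4), 2.0)
pred = gt + 0.1
mae, medae, comp = depth_errors(pred, gt)
```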
## Driving (DSEC)
The code for our approach is provided in Jupyter Notebooks, each of which contains detailed usage instructions. They are located in the notebooks folder and cover the following functionalities:
To use these notebooks, follow the installation guide below:
```shell
git clone https://github.com/tub-rip/DERD-Net.git
cd DERD-Net
```
We provide an environment.yml file to ensure compatibility with all dependencies. It can be installed using Conda.
```shell
conda env create -f environment.yml
conda activate derdnet_env
```
The following command opens the Jupyter interface in your browser. You can then open and run the notebooks listed above.
```shell
jupyter notebook
```
If you are new to Jupyter, see this quick beginner’s guide to help you get started.
Pretrained models are available in the models folder. These include weights for both the single-pixel and multi-pixel versions of DERD-Net. They can be used directly within the provided Jupyter notebooks:
- Simply place the desired `.pth` file from the `models` directory.
- The model will be automatically loaded as specified in the corresponding notebook.
Disparity Space Images (DSIs) can be obtained by running dvs_mcemvs with the parameter save_dsi=true in the config file, as in this example. Please note that saving DSIs occupies significant disk space.
For a quick start, sample DSIs from the MVSEC flying1 sequence are provided here.
This work is released under the MIT License.




