Official implementation for the paper "SOF: Sorted Opacity Fields for Fast Unbounded Surface Reconstruction."

SOF: Sorted Opacity Fields for Fast Unbounded Surface Reconstruction

Project Page · arXiv · Point Clouds · Meshes

SIGGRAPH Asia 2025

1 Graz University of Technology 🇦🇹
2 Huawei Technologies 🇦🇹🇨🇭
3 University of Stuttgart 🇩🇪

Overview

SOF is a method for rapid surface extraction from unbounded scenes represented with 3D Gaussians. Compared to recent methods, it delivers improved mesh quality with more detail, while significantly accelerating both training and meshing.

Code

Find all instructions for running our code here!

Setup
# Clone the repository
git clone https://github.com/r4dl/SOF.git
cd SOF

# Create a conda environment
conda env create --file environment.yml
conda activate sof

# Install the remaining dependencies
pip install submodules/simple-knn/ --no-build-isolation
pip install submodules/diff-gaussian-rasterization/ --no-build-isolation
pip install git+https://github.com/rahul-goel/fused-ssim/ --no-build-isolation

If you want to extract meshes with Fast Marching Tetrahedra, install Tetra-Triangulation, based on Tetra-NeRF:

cd submodules/tetra-triangulation

conda install cmake conda-forge::gmp conda-forge::cgal
cmake .
# building may require adding the CUDA headers to the include path, e.g.:
# export CPATH=/usr/local/<CUDA_VERSION>/targets/x86_64-linux/include:$CPATH
make
# Note: editable mode is required here
pip install -e . --no-build-isolation

We have tested this implementation on Ubuntu 22.04 with CUDA 12.2.

Data

For our evaluation, we used the following datasets:

Dataset Name Link Note
Tanks & Temples Download ⚠️ See Instructions below!
DTU Download ⚠️ See Instructions below!
Mip-NeRF 360 Download -
TNT/DB (NVS) Download -

Each link redirects you to a download page! Our scripts assume all data lives within the data/ directory. If your data lies somewhere else, modify DATA_DIR in scripts/constants.py.

data
├── TNT_GOF
│   ├── Barn
│   └── ...
├── DTU
├── tnt_db
└── m360
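To catch path issues early, a small sanity check over the layout above can help. The helper below (missing_datasets, a hypothetical name, not part of the repository) simply reports which of the expected top-level dataset folders are absent under your data directory:

```python
from pathlib import Path

# Top-level dataset folders, mirroring the tree above
EXPECTED = ["TNT_GOF", "DTU", "tnt_db", "m360"]

def missing_datasets(data_dir):
    """Return the expected dataset folders not present under data_dir."""
    root = Path(data_dir)
    return [name for name in EXPECTED if not (root / name).is_dir()]
```

If this returns a non-empty list, either download the missing datasets or point DATA_DIR in scripts/constants.py elsewhere.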

Tanks & Temples post-install

Note: For Tanks and Temples, additional care needs to be taken!

First, you need to rename <SCENE>_COLMAP_SfM.log to <SCENE>_traj_path.log for every scene! Afterwards, visit the download page for TNT and, for each scene, download everything and paste it into the corresponding scene folder!
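The renaming step can be scripted. The sketch below (a hypothetical helper, assuming the scene folders live directly under your TNT data directory) renames every matching log file:

```python
from pathlib import Path

def rename_traj_logs(tnt_dir):
    """Rename <SCENE>_COLMAP_SfM.log to <SCENE>_traj_path.log in every scene folder."""
    renamed = []
    for log in Path(tnt_dir).glob("*/*_COLMAP_SfM.log"):
        target = log.with_name(log.name.replace("_COLMAP_SfM", "_traj_path"))
        log.rename(target)
        renamed.append(target.name)
    return sorted(renamed)
```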

Now your setup is good to go!

DTU post-install

Download both the SampleSet and the Points from here. See DTU_GT_DATA in scripts/run_dtu.py.

Scripts

We provide scripts to train, mesh/render and evaluate our method, using the same hyperparameters as reported in the paper.

Note: There may be some noise in the final results; for convenience, we provide the point clouds/meshes we used for evaluation in our paper!

# Training, Meshing (Marching Tets) and Evaluation for Tanks & Temples
python scripts/run_tnt.py
# Training, Meshing (TSDF) and Evaluation for DTU 
python scripts/run_dtu.py    
# Training, Rendering and Evaluation for NVS (Mip-NeRF 360 by default)
python scripts/run_nvs.py 

Note: To show the results, simply use the corresponding show_* script, e.g., python scripts/show_nvs.py.

Training

To train our method, use the train.py script, as in, e.g. StopThePop. To document rasterizer settings, we use .json files, located in the configs/ directory.

# SOF default settings
python train.py --splatting_config configs/hierarchical.json -s <path to dataset>

Note: Every parameter specified in the .json config file can also be individually set or overridden via command line arguments when running train.py. For a full list of all options (including those corresponding to fields in the config), see:

python train.py -h

...
Splatting Settings:
  --sort_mode {GLOBAL,PPX_FULL,PPX_KBUFFER,HIER}
  --sort_order {Z_DEPTH,DISTANCE,PTD_CENTER,PTD_MAX,MIN_Z_BOUNDING}
  --tile_4x4 {64}       only needed if using sort_mode HIER
  ...                  # plus any other config entries...

This means that any field from your configs/hierarchical.json (or other config) can be set/overwritten on the CLI using the --field_name value pattern.
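The override mechanism can be sketched as follows (a minimal stand-in for illustration, not the actual argument handling in train.py): each field of the JSON config becomes an argparse option whose default is the config value, so CLI flags win over the file. The sketch assumes flat string/int/float fields, with no nesting or booleans.

```python
import argparse

def parse_with_config(config, argv):
    """Turn each config field into a CLI option; argv values override config defaults."""
    parser = argparse.ArgumentParser()
    for key, value in config.items():
        # The config value supplies both the default and the expected type
        parser.add_argument(f"--{key}", type=type(value), default=value)
    return vars(parser.parse_args(argv))
```

For example, parse_with_config({"sort_mode": "HIER"}, ["--sort_mode", "GLOBAL"]) yields {"sort_mode": "GLOBAL"}.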

Meshing

Bounded (such as DTU)

For bounded scenes (such as DTU), we use TSDF fusion, which can be run using

python extract_mesh_tsdf.py -m <MODEL_PATH>

Note: By default, we use a voxel_size of 0.002, but it can be modified via --voxel_size.

As a result, you will get the ply-file in <MODEL_PATH>/test/ours_30000/tsdf.ply.
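Conceptually, TSDF fusion stores, per voxel, a running weighted average of truncated signed distances to the observed surface; voxel_size sets the grid resolution, and the truncation band is typically a few voxels wide. A minimal per-voxel update (our own illustration, not the code used by extract_mesh_tsdf.py) looks like:

```python
def tsdf_update(tsdf, weight, sdf, trunc):
    """Fuse one signed-distance observation into a voxel's running average.
    sdf > 0: voxel in front of the surface; sdf < 0: behind it."""
    if sdf < -trunc:
        return tsdf, weight          # far behind the surface: no reliable observation
    d = min(1.0, sdf / trunc)        # truncate to [-1, 1]
    new_weight = weight + 1.0
    return (tsdf * weight + d) / new_weight, new_weight
```

The mesh is then extracted at the zero crossing of the fused field (e.g., via marching cubes).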

Unbounded (such as Tanks & Temples)

Here, we use Fast Marching Tetrahedra, which can be run using

python extract_mesh_tets.py -m <MODEL_PATH>

As a result, you will get the ply-file in <MODEL_PATH>/test/ours_30000/mesh_faster_binary_search_7.ply.

Note: To use the STP bounding mode, or the opacity cutoff, use the CLI (show options with python extract_mesh_tets.py -h).

Note: You can (and should) inspect these meshes using our mesh viewer (python mesh_viewer.py <PATH TO PLY FILE>). See the Visualization & Debugging section below for more details.

Evaluation

Meshing

All evaluation scripts for meshing are contained in mesh_utils/.

Tanks & Temples

To evaluate your meshes for the Tanks & Temples dataset, use

python mesh_utils/eval_TNT.py \
--dataset-dir <DATASET> \
--ply-path <PATH TO MESH> \
--traj-path <TRAJ PATH LOG FILE> \
--out-dir <OUT DIR>

Note: For the <TRAJ PATH LOG FILE>, we used the <SCENE>_COLMAP_SfM.log file you get from the TNT_GOF download; see Data for details.

DTU

To evaluate your meshes for the DTU dataset, use

python mesh_utils/eval_DTU.py \
--instance_dir <PATH TO SCAN> \
--input_mesh <PATH TO MESH> \
--dataset_dir <PATH TO GT DATA> \
--vis_out_dir <OUT DIR>

Note: The GT DATA needs to be downloaded separately from here; see Data for details.

Novel View Synthesis

To evaluate novel view synthesis, run

# render images
python render.py -m <MODEL DIRECTORY> --skip_train
# create metrics
python metrics.py -m <MODEL DIRECTORY>

This is the exact same workflow as in scripts/run_nvs.py.

Alternatively, you can adapt the run_nvs.py script.
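metrics.py reports PSNR, SSIM, LPIPS and FLIP; PSNR, the simplest of these, can be sketched as follows (a pure-Python illustration over flat pixel lists, not the repository's implementation):

```python
import math

def psnr(img_a, img_b, max_val=1.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of pixel intensities in [0, max_val]."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    return 10.0 * math.log10(max_val ** 2 / mse)
```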

Note: By default, we evaluate Mip-NeRF 360 with the default settings; to test a different dataset, edit the script:

# modify these to test a different dataset
scenes = ...
factors = ...
TRAIN_DATA = ...

Metrics

These are the results for the latest run, using this codebase!

Note: The numbers may vary slightly per run, and this is not the original codebase we used, but a cleaned-up version of it!

Meshing

Table: DTU evaluation

Scan 24 37 40 55 63 65 69 83 97 105 106 110 114 118 122 Average
CD 0.557 0.775 0.567 0.395 1.235 0.813 0.708 1.172 1.263 0.670 0.728 0.968 0.503 0.589 0.521 0.764
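For reference, the Chamfer distance (CD) reported for DTU averages two terms, accuracy (mesh to ground truth) and completeness (ground truth to mesh). A brute-force sketch over small point sets (illustration only; the actual evaluation follows the DTU protocol with visibility masks):

```python
def chamfer_distance(pts_a, pts_b):
    """Symmetric Chamfer distance between two point sets given as (x, y, z) tuples.
    Brute-force nearest neighbours; quadratic cost, for illustration only."""
    def mean_nn(src, dst):
        return sum(
            min(sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5 for b in dst)
            for a in src
        ) / len(src)
    return 0.5 * (mean_nn(pts_a, pts_b) + mean_nn(pts_b, pts_a))
```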

Table: TNT evaluation

Metric Barn Caterpillar Courthouse Ignatius Meetingroom Truck Average
F-Score 0.533 0.403 0.293 0.708 0.298 0.553 0.465

Note: Use the show_dtu.py and show_tnt.py scripts to quickly get the metrics (after running the corresponding run script)!
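The TNT F-score at a distance threshold tau is the harmonic mean of precision (fraction of predicted points within tau of the ground truth) and recall (fraction of ground-truth points within tau of the prediction). A sketch over precomputed nearest-neighbour distances (illustration only, not the mesh_utils/eval_TNT.py code):

```python
def f_score(dists_pred_to_gt, dists_gt_to_pred, tau):
    """F-score at threshold tau from precomputed nearest-neighbour distances."""
    precision = sum(d < tau for d in dists_pred_to_gt) / len(dists_pred_to_gt)
    recall = sum(d < tau for d in dists_gt_to_pred) / len(dists_gt_to_pred)
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```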

Novel View Synthesis

Table: Mip-NeRF 360 evaluation

Metric bicycle bonsai counter flowers garden stump treehill kitchen room Average
PSNR 25.409 31.194 28.475 21.633 27.154 26.936 22.461 30.311 29.899 27.052
SSIM 0.784 0.933 0.898 0.636 0.862 0.789 0.643 0.910 0.907 0.818
LPIPS 0.185 0.197 0.204 0.278 0.109 0.197 0.279 0.143 0.222 0.202
FLIPS 0.158 0.091 0.111 0.217 0.125 0.146 0.183 0.108 0.116 0.139

Note: Use the show_nvs.py script to quickly get the metrics (after running run_nvs.py)!

Visualization & Debugging

Our visualization suite is built upon Splatviz, and is fully self-contained within this repository. To use it, first navigate to the splatviz/ directory.

In it, run either

# to attach to a currently running training session
python run_main.py --mode attach {--port <PORT>}


# to inspect a stored point cloud file
python run_main.py --data_path <PATH TO A POINT CLOUD FILE>

In both cases, open the Render tab to check out the different debug visualization modes (e.g., Depth/Normal/Transmittance), change rasterizer settings on the fly, or simply inspect the current scene.

SOF Demo Teaser

Inspecting Meshes

We additionally provide a mesh viewer to inspect triangulated meshes. To run, simply do

python mesh_viewer.py <PATH TO PLY FILE>

By default, normals are displayed. Check the CLI (-h) for more options!

Licensing

This code is built on top of StopThePop and, as such, is primarily licensed under the "Gaussian Splatting License". For more information, we refer to our Notice.

Acknowledgements

This research was supported by the Austrian Science Fund (FWF) [10.55776/I6663], the German Research Foundation (DFG) [contract 528364066] and the Alexander von Humboldt Foundation funded by the German Federal Ministry of Research, Technology and Space.

BibTeX

@inproceedings{radl2025sof,
  author    = {Radl, Lukas and Windisch, Felix and Deixelberger, Thomas and Hladky, Jozef and Steiner, Michael and Schmalstieg, Dieter and Steinberger, Markus},
  title     = {{SOF: Sorted Opacity Fields for Fast Unbounded Surface Reconstruction}},
  booktitle = {SIGGRAPH Asia Conference Proceedings},
  year      = {2025}
}
