Note: I knew some readers might want to see the code as soon as possible, so I uploaded all my experiment code without cleaning it up (some libraries might be missing, etc.). Update 2025-12-30: I am updating the scripts now and hope to finish everything before 2026-01-13. I will try my best to clean up the TBD list here:
- Update the repo README.
- Update the OpenSceneFlow repo for data processing and SeFlow++.
- Successfully test all code in this repo.
- Upload the Scania validation set (w/o gt).
- Set up a leaderboard so users can get their Scania val score.
- Update the READMEs of the two downstream-task repos.
- Publish the author-response file so readers can check the review discussion, future directions, etc.
Clone this repo with submodules:
```bash
git clone --recurse-submodules https://github.com/KTH-RPL/HiMo.git
```
Please refer to OpenSceneFlow to set up the opensf Python environment, or `docker pull` the opensf image.
For the HiMo paper (including the review discussion), we discussed two possible ways to do motion compensation:
- In our paper, we propose HiMo, a modular pipeline that repurposes scene flow to compute distortion correction vectors for each point (see the sketch after this list).
- While it is theoretically possible to attempt such correction with alternative approaches (e.g., dynamic segmentation + ICP alignment across LiDARs), such solutions would require complex ad-hoc heuristics for target selection, tracking, and optimization, and have not been shown to work in practice.
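To make the first bullet concrete, here is a minimal sketch of the idea, not the repository's implementation: each point is shifted by the fraction of its scene-flow vector corresponding to the time between its capture and the sweep's reference time. The variable names, the end-of-sweep reference time, and the 0.1 s sweep duration are assumptions for illustration.

```python
import numpy as np

def compensate_points(points, flow, point_ts, sweep_duration=0.1):
    """Illustrative sketch: points (N, 3), flow (N, 3) over one sweep interval,
    point_ts (N,) per-point timestamps in seconds relative to the sweep start."""
    # Fraction of the sweep remaining after each point was captured (assumed reference: sweep end).
    remaining = (sweep_duration - point_ts) / sweep_duration
    # Per-point distortion-correction vector derived from the scene flow.
    comp_vec = flow * remaining[:, None]
    return points + comp_vec, comp_vec
```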
Here we show how our HiMo pipeline uses scene flow for motion compensation:
Please refer to OpenSceneFlow to output the flow. You can use SeFlow++ or the other optimization-based baselines from our paper; we have also pushed all baseline code into our codebase.
After training, please run [OpenSceneFlow/save.py] to output the flow results:
```bash
# (feed-forward): load ckpt
python save.py checkpoint=/home/kin/model_zoo/seflowpp_ppbest.ckpt dataset_path=/home/kin/data/scania/val

# (optimization-based): change to another model by passing the model name.
python save.py model=fastnsf dataset_path=/home/kin/data/av2/h5py/demo/val
```
For the methods in HiMo, we first run scene flow, and [save.py] saves the compensation distance for each point. For local validation (e.g., on av2), you can directly run [eval.py], which reads the flow and performs the compensation inside the evaluation code; see the evaluation section for more details.
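For reference, a minimal sketch of how the saved per-point compensation could be applied to a sweep. The .h5 layout, the `lidar` dataset, and using the flow-mode name as the key are assumptions for illustration and may differ from what [save.py] actually writes.

```python
import h5py

# Hypothetical example: the path, group layout, and key names are assumptions.
scene_file = "/home/kin/data/av2/h5py/demo/val/scene.h5"  # placeholder scene file
flow_mode = "seflowpp"

with h5py.File(scene_file, "r") as f:
    frame = f[sorted(f.keys())[0]]     # first frame group in the scene (assumed layout)
    points = frame["lidar"][:, :3]     # assumed: xyz of the raw (distorted) sweep
    comp_vec = frame[flow_mode][:]     # assumed: per-point compensation vectors

compensated = points + comp_vec        # motion-compensated point cloud
```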
```bash
python save.py
```
You need to upload your result files to the public leaderboard page (TODO) to get your Scania val score.
As mentioned in the paper, we select scenes with high-speed objects; the evaluated frames are listed in assets/docs/av2/index_eval.pkl. You can download it and put it under the av2 .h5 data folder (13 scenes are provided).
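If you want to inspect which frames are evaluated, the index file is a standard pickle; the structure of the loaded object (e.g., per-scene frame timestamps) is an assumption here, so print it to check.

```python
import pickle

# Load the evaluation index; the structure of the object is not documented here, so inspect it.
with open("assets/docs/av2/index_eval.pkl", "rb") as f:
    index_eval = pickle.load(f)

print(type(index_eval))
print(index_eval)  # e.g., the scene/frame indices used for evaluation
```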
Then run the evaluation code:
```bash
# himo (flow): first run scene flow with a flow mode; the motion compensation is done inside the eval code.
# others: save your comp_dis results as a .zip file under each scene folder, then run the eval code.
python eval.py --data_dir /home/kin/data/av2/h5py/sensor/himo --flow_mode 'seflowpp'
```
In the paper, we present two downstream tasks: Segmentation (WaffleIron) and 3D Detection (OpenPCDet).
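For the "others" case above, here is a minimal sketch of packaging per-frame comp_dis arrays into a .zip under a scene folder. The one-.npy-per-frame naming and the comp_dis.zip file name are assumptions, not a format guaranteed by eval.py; check the evaluation code for the expected layout.

```python
import zipfile
from pathlib import Path

import numpy as np

def save_comp_dis_zip(scene_dir, comp_dis_per_frame, zip_name="comp_dis.zip"):
    """Hypothetical packaging: one .npy per frame timestamp, zipped under the scene folder."""
    scene_dir = Path(scene_dir)
    zip_path = scene_dir / zip_name
    with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for ts, comp_dis in comp_dis_per_frame.items():
            npy_path = scene_dir / f"{ts}.npy"
            np.save(npy_path, np.asarray(comp_dis, dtype=np.float32))
            zf.write(npy_path, arcname=npy_path.name)
            npy_path.unlink()  # keep only the zip
    return zip_path
```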
[TODO] We will add a running README in each repo so users can run HiMo results with the downstream tasks.
To visualize the HiMo results, we use the OpenSceneFlow Open3D visualizer:
```bash
python visualize.py --flow "['raw', 'seflowpp', 'nsfp']"
```
For the paper results, I manually selected some instances for a clearer qualitative comparison:
```bash
python tools/view_instance.py
```
For the project website animation, check tools/animation_video.py:
```python
# first step:
fire.Fire(save_animation_traj)
# second step:
fire.Fire(animation_video)
```
For the video animation example, we use manim. I may also upload this part for tutorial purposes.
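A minimal sketch of how the two steps above could be exposed from tools/animation_video.py with python-fire; the function names follow the snippet above, but the bodies and arguments are placeholders, not the repository's implementation.

```python
# Hypothetical skeleton of tools/animation_video.py; the real functions differ.
import fire

def save_animation_traj(data_dir="/home/kin/data/av2/h5py/demo/val"):
    """Step 1 (placeholder): export the per-frame trajectory/camera data for the animation."""
    print(f"Saving animation trajectory from {data_dir} ...")

def animation_video(traj_file="animation_traj.json", out_file="animation.mp4"):
    """Step 2 (placeholder): render the saved trajectory into a video."""
    print(f"Rendering {traj_file} -> {out_file} ...")

if __name__ == "__main__":
    # Exposing both functions lets you pick the step on the command line:
    #   python tools/animation_video.py save_animation_traj
    #   python tools/animation_video.py animation_video
    fire.Fire({"save_animation_traj": save_animation_traj, "animation_video": animation_video})
```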
```bibtex
@article{zhang2025himo,
  title={{HiMo}: High-Speed Objects Motion Compensation in Point Clouds},
  author={Zhang, Qingwen and Khoche, Ajinkya and Yang, Yi and Ling, Li and Sharif Mansouri, Sina and Andersson, Olov and Jensfelt, Patric},
  year={2025},
  journal={arXiv preprint arXiv:2503.00803},
}
```
💞 Thanks to Bogdan Timus and Magnus Granström from Scania and Ci Li from KTH RPL, who helped with this work. We also thank Yixi Cai, Yuxuan Liu, Peizheng Li and Shenghai Yuan for helpful discussions during revision. We also thank the anonymous reviewers for their useful comments.
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation, and by Prosense (2020-02963), funded by Vinnova. The computations were enabled by the supercomputing resource Berzelius, provided by the National Supercomputer Centre at Linköping University and the Knut and Alice Wallenberg Foundation, Sweden.