1S-Lab, Nanyang Technological University
2SenseTime Research, Singapore
† Project lead
MatAnyone 2 is a practical human video matting framework that preserves fine details by avoiding segmentation-like boundaries, while also showing enhanced robustness under challenging real-world conditions.
🔥 For more visual results, check out our project page.
- [2025.12] This repo is created.
If you find our repo useful for your research, please consider citing our paper:
@InProceedings{yang2025matanyone2,
title = {{MatAnyone 2}: Scaling Video Matting via a Learned Quality Evaluator},
author = {Yang, Peiqing and Zhou, Shangchen and Hao, Kai and Tao, Qingyi},
booktitle = {arXiv preprint arXiv:2512.11782},
year = {2025}
}

If you have any questions, please feel free to reach us at peiqingyang99@outlook.com.



