LoMix

Official repository of NeurIPS 2025 paper LoMix: Learnable Weighted Multi‑Scale Logits Mixing for Medical Image Segmentation.
Md Mostafijur Rahman, Radu Marculescu

The University of Texas at Austin

🔍 Check out our MICCAI 2025 paper! EfficientMedNeXt

🔍 Check out our CVPR 2025 paper! EffiDec3D

🔍 Check out our ICCVW 2025 paper! MK-UNet

🔍 Check out our CVPR 2024 paper! EMCAD

🔍 Check out our CVPRW 2024 paper! PP-SAM

🔍 Check out our WACV 2024 paper! G-CASCADE

🔍 Check out our MIDL 2023 paper! MERIT

🔍 Check out our WACV 2023 paper! CASCADE

LoMix Supervision

Quantitative Results

Qualitative Results

Usage:

Recommended environment:

Please run the following commands.

conda create -n lomixenv python=3.8
conda activate lomixenv

pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11.0/index.html

pip install -r requirements.txt
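
Optionally, a quick sanity check can confirm that PyTorch sees CUDA and that mmcv imported correctly; the expected versions follow the pip commands above.

import torch, mmcv
print(torch.__version__)          # expected: 1.11.0+cu113
print(torch.cuda.is_available())  # should be True on a CUDA 11.3-capable machine
print(mmcv.__version__)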

Data preparation:

  • Synapse Multi-organ dataset: Sign up on the official Synapse website and download the dataset. Then split the 'RawData' folder into 'TrainSet' (18 scans) and 'TestSet' (12 scans) following TransUNet's lists and put them in the './data/synapse/Abdomen/RawData/' folder. Finally, preprocess the data using python ./utils/preprocess_synapse_data.py or download the preprocessed data and save it in the './data/synapse/' folder. Note: If you use the preprocessed data from TransUNet, please make the necessary changes in utils/dataset_synapse.py (i.e., remove the code segment (lines 88-94) that converts the ground-truth labels from 14 to 9 classes). A sketch of this label remapping is given after this list.

  • ACDC dataset: Download the preprocessed ACDC dataset from Google Drive and move it into the './data/ACDC/' folder.

  • Polyp datasets: Download the split polyp datasets from Google Drive and move them into the './data/polyp/' folder.
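
If you need to convert raw 14-class Synapse labels to the 9-class setting yourself, the snippet below is a minimal, illustrative sketch. The organ-index mapping shown here is an assumption based on common TransUNet-style pipelines, not taken from this repository; please verify it against ./utils/preprocess_synapse_data.py before use.

import numpy as np

# Hypothetical raw-label -> training-label mapping (0 = background plus 8 organs).
# NOTE: illustrative only; confirm the exact indices in ./utils/preprocess_synapse_data.py.
RAW_TO_TRAIN = {8: 1, 4: 2, 3: 3, 2: 4, 6: 5, 11: 6, 1: 7, 7: 8}

def remap_labels(label_volume: np.ndarray) -> np.ndarray:
    """Map a raw Synapse label volume to the 9-class setting; unmapped labels become background."""
    remapped = np.zeros_like(label_volume)
    for raw_id, train_id in RAW_TO_TRAIN.items():
        remapped[label_volume == raw_id] = train_id
    return remapped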

Pretrained model:

Download the pretrained PVTv2 model from Google Drive or the PVT GitHub repository, then put it in the './pretrained_pth/pvt/' folder to initialize the encoder.
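
The repository's model code performs this initialization internally; the snippet below is only a minimal sketch of the usual PyTorch pattern for loading such a checkpoint into an encoder (the encoder object and checkpoint path are placeholders, not names from this repo).

import torch

def load_pvt_pretrained(encoder: torch.nn.Module, ckpt_path: str) -> None:
    """Load PVTv2 ImageNet weights into an encoder, keeping only matching keys."""
    state = torch.load(ckpt_path, map_location="cpu")
    # Some checkpoints nest the weights under a 'model' or 'state_dict' key.
    if isinstance(state, dict):
        state = state.get("model", state.get("state_dict", state))
    own = encoder.state_dict()
    # Keep only parameters whose name and shape match the encoder.
    filtered = {k: v for k, v in state.items() if k in own and v.shape == own[k].shape}
    encoder.load_state_dict(filtered, strict=False)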

Training:

cd into the LoMix folder, then run:
python -W ignore train_synapse_lomix.py --root_path /path/to/train/data --volume_path /path/to/test/data --encoder pvt_v2_b2 --supervision lomix        # replace --root_path and --volume_path with your actual data paths.
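
For intuition, here is a minimal sketch of learnable weighted multi-scale logits mixing as the paper's title describes it: per-stage decoder logits are upsampled to a common resolution and combined with learnable, normalized weights before the segmentation loss is applied. This is an illustrative simplification under those assumptions, not the repository's actual implementation (the real code lives in the training script and model definitions).

import torch
import torch.nn as nn
import torch.nn.functional as F

class LogitsMixer(nn.Module):
    """Illustrative sketch: mix multi-scale decoder logits with learnable weights."""
    def __init__(self, num_stages: int):
        super().__init__()
        # One learnable mixing weight per decoder stage.
        self.mix_weights = nn.Parameter(torch.zeros(num_stages))

    def forward(self, multi_scale_logits, out_size):
        # Upsample each stage's logits to the target output resolution.
        ups = [F.interpolate(l, size=out_size, mode="bilinear", align_corners=False)
               for l in multi_scale_logits]
        w = torch.softmax(self.mix_weights, dim=0)  # normalize so the weights sum to 1
        return sum(w[i] * ups[i] for i in range(len(ups)))

The mixed logits can then be supervised with the usual segmentation loss (e.g., cross-entropy plus Dice), alongside or instead of per-stage losses, depending on the supervision mode.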

Testing:

cd into the LoMix folder.

Acknowledgement

We are very grateful for these excellent works: timm, EMCAD, CASCADE, MERIT, G-CASCADE, PP-SAM, PraNet, and TransUNet, which have provided the basis for our framework.

Citations

@inproceedings{rahmanlomix,
  title={LoMix: Learnable Weighted Multi-Scale Logits Mixing for Medical Image Segmentation},
  author={Rahman, Md Mostafijur and Marculescu, Radu},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025}
}
