Unofficial PyTorch implementation of "Deep-E: a fully-dense neural network for improving the elevation resolution in linear-array-based photoacoustic tomography" (IEEE TMI)
git clone https://github.com/thu-bplab/OpenDeepE
cd OpenDeepE
Install the requirements for OpenDeepE first:
pip install -r requirements.txt
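Optionally, verify the environment before moving on. The snippet below is not part of the repository; it simply prints the installed PyTorch version and whether CUDA is visible.

```python
# Optional environment check: print the PyTorch version and CUDA availability.
import torch

print(torch.__version__, torch.cuda.is_available())
```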
The dataset should be organized in the following hierarchical structure:
dataset_root/
└── human_vessel/              # Data category
    ├── train/                 # Training set
    │   ├── sparse_image/      # Sparse image data
    │   └── gt_image/          # Ground truth images
    ├── val/                   # Validation set
    │   ├── sparse_image/
    │   └── gt_image/
    └── test/                  # Test set
        ├── sparse_image/
        └── gt_image/
All data files are in NIfTI format (.nii) with the following naming pattern:
{sample_index}_{slice_index}.nii
- sample_index: 3D data sample number (e.g., 1, 2, 3, ...)
- slice_index: 2D slice index (e.g., 90, 91, 92, ...)
Examples:
- 1_90.nii: slice 90 from 3D sample 1
- 1_91.nii: slice 91 from 3D sample 1
- 1_92.nii: slice 92 from 3D sample 1
- 2_90.nii: slice 90 from 3D sample 2
- sparse_image: Contains input sparse image data
- gt_image: Contains corresponding ground truth image data
- The code defaults to reading NIfTI format files
Ensure that each sample has corresponding files with identical names and counts in both sparse_image and gt_image directories to maintain data alignment during training.
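As a quick illustration of the layout and naming pattern above, the sketch below loads one sparse/ground-truth slice pair. It assumes the nibabel package as the .nii reader, and the load_pair helper is for illustration only; check requirements.txt and the repository's data-loading code for the reader actually used.

```python
# Minimal loading sketch (illustration only). Assumes nibabel for reading .nii
# files; the codebase's own loader may differ.
import os
import numpy as np
import nibabel as nib

def load_pair(dataset_root, split, name):
    """Load one slice pair, e.g. name='1_90.nii' (sample 1, slice 90)."""
    base = os.path.join(dataset_root, "human_vessel", split)
    sparse = nib.load(os.path.join(base, "sparse_image", name)).get_fdata()
    gt = nib.load(os.path.join(base, "gt_image", name)).get_fdata()
    return np.float32(sparse), np.float32(gt)

sparse_img, gt_img = load_pair("dataset_root", "train", "1_90.nii")
print(sparse_img.shape, gt_img.shape)
```

A fully populated layout with concrete file names looks like this: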
human_vessel/
├── train/
│   ├── sparse_image/
│   │   ├── 1_90.nii
│   │   ├── 1_91.nii
│   │   ├── 1_92.nii
│   │   └── ...
│   └── gt_image/
│       ├── 1_90.nii
│       ├── 1_91.nii
│       ├── 1_92.nii
│       └── ...
├── val/
│   ├── sparse_image/
│   └── gt_image/
└── test/
    ├── sparse_image/
    └── gt_image/
This organization ensures proper data handling and compatibility with the provided codebase.
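Before training, it can help to confirm the name-and-count alignment between sparse_image and gt_image described above. The check below is a suggestion rather than part of the codebase; adjust the paths to your own setup.

```python
# Sanity check (illustration only): every split should contain identical file
# names in sparse_image/ and gt_image/.
import os

def check_alignment(dataset_root, category="human_vessel"):
    for split in ("train", "val", "test"):
        base = os.path.join(dataset_root, category, split)
        sparse = set(os.listdir(os.path.join(base, "sparse_image")))
        gt = set(os.listdir(os.path.join(base, "gt_image")))
        unmatched = sorted((sparse - gt) | (gt - sparse))
        if unmatched:
            raise RuntimeError(f"{split}: unmatched files {unmatched}")
        print(f"{split}: {len(sparse)} aligned pairs")

check_alignment("dataset_root")
```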
- The recommended PyTorch version is >=2.0. The code is developed and tested under PyTorch 2.0.1.
- If you encounter CUDA OOM issues, please try reducing the batch_size in the training and inference configs.
- The details of the original network are all in Model Detail. Each processing step is explained with comments corresponding to Figure 1 of the original paper.
- Our sample training defaults to a single GPU with fp32 precision.
- You may modify the configuration to fit your own environment.
- Please replace the data-related paths in the script file or train.py with your own paths and customize the training settings.
- An example training usage is as follows:

cd scripts
bash train.sh
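For orientation, here is a minimal single-GPU fp32 training loop in the spirit of the notes above. It is not the repository's train.py: the stand-in model, dummy tensors, loss, and hyperparameters are placeholders, so use the provided scripts for real training.

```python
# Minimal single-GPU fp32 training sketch (illustrative only, not the repo's train.py).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Dummy tensors standing in for sparse/GT slice pairs (shapes are assumptions).
sparse = torch.rand(64, 1, 128, 128)
gt = torch.rand(64, 1, 128, 128)
loader = DataLoader(TensorDataset(sparse, gt), batch_size=8, shuffle=True)  # lower batch_size on OOM

model = nn.Sequential(                 # trivial stand-in, not the fully-dense U-Net
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
).to(device)                           # parameters stay in fp32 by default
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()               # loss choice is an assumption

for epoch in range(5):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```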
- You need to modify $MODEL_PATH in the testing script.
- An example inference usage is as follows:

bash test.sh
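For reference, the sketch below shows the generic PyTorch pattern of loading a checkpoint from a model path and running inference. It is not the repository's test script; the stand-in model, checkpoint path, and input shape are placeholders.

```python
# Illustrative checkpoint-loading/inference sketch (not the repo's test script).
import torch
import torch.nn as nn

MODEL_PATH = "checkpoints/model.pth"   # stands in for $MODEL_PATH in test.sh
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(                 # replace with the actual Deep-E model
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
).to(device)
state = torch.load(MODEL_PATH, map_location=device)
model.load_state_dict(state)           # assumes the checkpoint stores a state_dict
model.eval()

with torch.no_grad():
    sparse_img = torch.rand(1, 1, 128, 128, device=device)  # dummy input slice
    pred = model(sparse_img)
print(pred.shape)
```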
- We thank the authors of the original paper for their great work!
- This project is supported by the Beijing Academy of Artificial Intelligence, which provided the computing resources.
- This project is advised by Cheng Ma.
@article{zhang2021deep,
title={Deep-E: a fully-dense neural network for improving the elevation resolution in linear-array-based photoacoustic tomography},
author={Zhang, Huijuan and Bo, Wei and Wang, Depeng and DiSpirito, Anthony and Huang, Chuqin and Nyayapathi, Nikhila and Zheng, Emily and Vu, Tri and Gong, Yiyang and Yao, Junjie and others},
journal={IEEE Transactions on Medical Imaging},
volume={41},
number={5},
pages={1279--1288},
year={2021},
publisher={IEEE}
}
@software{di_kong_2025_17364359,
author       = {Kong, Di},
title = {OpenDeepE: Open-Source Deep-E / 2D Fully-Dense U-Net},
month = oct,
year = 2025,
publisher = {Zenodo},
version = {v1.0},
doi = {10.5281/zenodo.17364359},
url = {https://doi.org/10.5281/zenodo.17364359}
}