Status: Open
Labels: enhancement (New feature or request)
Description
In production scripts running on GPU hardware, post-processing takes roughly twice as long as the neural network inference itself.
On an NVIDIA GeForce GTX 1080 Ti (12 GB RAM), for one full-size Sentinel-2 tile with the CRGA OS2 UNet model (tile size 1024):
- With post-processing: ~6 minutes
- Without post-processing: ~2 minutes
A temporary workaround to speed things up would be to skip post-processing entirely by setting `nodatavalues` to `None` in inference.py. However, this would introduce artifacts, especially on images containing NoData.
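To make the trade-off concrete, here is a hypothetical sketch of what such a NoData post-processing step might look like on the CPU; the function name, arguments, and logic are illustrative assumptions, not the actual code in inference.py:

```python
import numpy as np

def postprocess_nodata(prediction: np.ndarray, source: np.ndarray, nodatavalues):
    """Hypothetical CPU post-processing: propagate NoData from the source image
    into the prediction. Passing nodatavalues=None skips the work entirely,
    which is the temporary workaround (at the cost of artifacts)."""
    if nodatavalues is None:
        return prediction
    # Mark pixels where every band of the source is a NoData value.
    mask = np.all(np.isin(source, nodatavalues), axis=-1)
    prediction[mask] = nodatavalues[0]
    return prediction
```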
A longer-term solution would be to include the post-processing inside the Keras model itself, so that it runs on the GPU as part of inference.
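As a rough illustration, a NoData masking step like the one sketched above could be attached to the trained model as an extra layer. The sketch below assumes a single-input, single-output Keras model; `NODATA_VALUE`, `base_model`, and `add_nodata_postprocessing` are placeholder names, not anything from the CRGA OS2 code:

```python
import tensorflow as tf
from tensorflow import keras

NODATA_VALUE = 0.0  # assumed NoData sentinel of the input rasters


def _mask_nodata(tensors):
    """Set prediction pixels back to NODATA_VALUE wherever all input bands are NoData."""
    image, prediction = tensors
    nodata_mask = tf.reduce_all(tf.equal(image, NODATA_VALUE), axis=-1, keepdims=True)
    return tf.where(nodata_mask, NODATA_VALUE * tf.ones_like(prediction), prediction)


def add_nodata_postprocessing(base_model: keras.Model) -> keras.Model:
    """Return a model that runs `base_model` and applies NoData masking on the GPU."""
    inputs = base_model.inputs
    prediction = base_model(inputs)  # assumes a single output tensor
    masked = keras.layers.Lambda(_mask_nodata, name="nodata_postprocessing")(
        [inputs[0], prediction]
    )
    return keras.Model(inputs=inputs, outputs=masked)
```

The wrapped model could then be saved and exported as usual, so the production scripts would get the masked output directly from the GPU without a separate CPU post-processing pass.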
remicres