Add configurable Monte Carlo dropout uncertainty estimation #29
base: master
Conversation
ad12 left a comment
Looking good! Add some unit tests as well to make sure things are working as you'd expect.
```python
with h5py.File(save_name, "w") as h5f:
    h5f.create_dataset("probs", data=output["y_pred"])
    h5f.create_dataset("labels", data=labels)
    h5f.create_dataset("true", data=output["y_true"])
```
I would avoid saving y_true: it should be easily accessible from your input data HDF5 file, and duplicating it here would use up more disk space, which is limited.
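A minimal sketch of what the suggested change could look like, assuming the ground truth already lives in the input HDF5 file; `input_path` and the `"y_true"` key below are hypothetical names for illustration, not from the PR:

```python
import h5py

# Save only the model outputs; ground truth stays in the input file.
with h5py.File(save_name, "w") as h5f:
    h5f.create_dataset("probs", data=output["y_pred"])
    h5f.create_dataset("labels", data=labels)

# When the ground truth is needed later, reload it from the original
# input file instead of duplicating it on disk.
with h5py.File(input_path, "r") as h5f:
    y_true = h5f["y_true"][:]
```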
```python
mc_dropout=False,
mc_dropout_T=100,
```
Seems like these are not used; if that's the case, delete them.
```python
tmp_batch_outputs_mc_dropout = None
if mc_dropout:
    tmp_batch_outputs_mc_dropout = np.stack([model(batch_x, training=True) for _ in range(mc_dropout_T)])
```
I see what you're trying to do here, but it will not be reproducible, which is necessary if we are to add this to the inference loop. There is no random seed being set, so the features that are dropped out will be different if you run inference on the same example twice.
I'm not sure exactly how to account for this; potentially setting a random seed. Write a unit test to verify that this does in fact produce identical outputs when run twice.
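One possible approach, sketched below under the assumption that this is a TF 2.x Keras model: re-seed TensorFlow's global RNG immediately before the Monte Carlo loop so the dropout masks are drawn deterministically. The helper name `mc_dropout_predict` and the toy model in the test are illustrative, not part of the PR:

```python
import numpy as np
import tensorflow as tf


def mc_dropout_predict(model, batch_x, T=100, seed=0):
    """Run T stochastic forward passes with dropout kept active.

    Re-seeding the global TF RNG resets the random-op sequence, so two
    calls with the same seed draw the same dropout masks and return
    identical prediction stacks.
    """
    tf.random.set_seed(seed)
    return np.stack([model(batch_x, training=True) for _ in range(T)])


def test_mc_dropout_reproducible():
    # Hypothetical toy model with a dropout layer, for the test only.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1),
    ])
    x = tf.ones((2, 4))
    out1 = mc_dropout_predict(model, x, T=5, seed=1234)
    out2 = mc_dropout_predict(model, x, T=5, seed=1234)
    np.testing.assert_array_equal(out1, out2)
```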
```python
MC_DROPOUT = False
MC_DROPOUT_T = 100
```
Add a comment indicating what MC_DROPOUT and MC_DROPOUT_T refer to.
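For example, the constants could be documented along these lines (the wording is a suggestion, not the PR author's):

```python
# Whether to estimate predictive uncertainty with Monte Carlo dropout
# (dropout kept active at inference time).
MC_DROPOUT = False

# Number of stochastic forward passes (T) to average over per example
# when MC_DROPOUT is enabled.
MC_DROPOUT_T = 100
```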