Additional information on unet baseline #9
You can find all information on the baseline nnUNet model in our publication.
Hi,
Thanks for clarifying. Well, nnUNet can get very slow if it falls back to a single-threaded process (with all other workers sleeping instead of actually prefetching the data) or if it is not run on the GPU. Are you loading data from a local hard drive, or via a network drive that could bottleneck you? For us, an epoch usually took a few minutes (<5 min).
Thanks for your answer. I am actually training a U-Net, not nnUNet. I am using torchio to manage the patch sampling. The training itself is quite fast; it is the loading of images, sampling of patches, and preparation of dataloaders that is slow. Any advice on how to speed up this part?
In this regard, the only advice is to run multiple workers to pre-fetch and pre-process the data. All tweaks depend on your hardware and actual setup.
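As a minimal sketch of that advice in plain PyTorch: setting `num_workers > 0` on the `DataLoader` moves loading and patch preparation into background worker processes, so batches are prefetched while the GPU trains. The `PatchDataset` below is a hypothetical stand-in for a real volume/patch dataset (torchio's `Queue` exposes a similar `num_workers` argument); the sizes are illustrative, not from the thread.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class PatchDataset(Dataset):
    """Toy stand-in for a 3D patch dataset; real loading would happen here."""
    def __init__(self, n_patches=32, patch_size=16):
        self.n_patches = n_patches
        self.patch_size = patch_size

    def __len__(self):
        return self.n_patches

    def __getitem__(self, idx):
        # In a real setup this would read a volume and sample a patch;
        # with num_workers > 0 this call runs in background worker processes.
        p = self.patch_size
        return torch.rand(1, p, p, p)

# num_workers > 0 enables background prefetching; pin_memory=True can
# additionally speed up host-to-GPU transfers when training on CUDA.
loader = DataLoader(PatchDataset(), batch_size=4, num_workers=2)

for batch in loader:
    pass  # batches arrive prefetched while the GPU is busy training
```

The right number of workers depends on CPU core count and I/O speed, so it is worth benchmarking a few values on your own hardware.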
Thank you for your advice. Indeed, multiple workers help.
You are welcome. |
How long does it take to train a U-Net for 800 epochs with your hyperparameters?
How many samples per volume do you use at each epoch?
Do you use a scheduler?
Best,
Hugo