nnU-Net

This folder contains the scripts for training and inference of the nnU-Net model on medical image data in MedSAM's preprocessed npz format. For details on the data preprocessing pipeline, please refer to the MedSAM repository.

Prerequisites

This codebase uses nnUNetv2. You can install the out-of-the-box version via pip as follows:

pip install nnunetv2

For further details on configuring nnUNetv2, please consult nnU-Net's official documentation.

Training

We converted the training npz/npy files to the nii format widely used by nnU-Net. To incorporate bounding box prompts into the model, we converted each bounding box to a binary mask and concatenated it with the image as an additional input channel. The bounding boxes were simulated from the ground-truth segmentations.
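The box-to-mask step can be sketched as follows. This is a minimal illustration, not the repo's actual code: the function name, the `(x_min, y_min, x_max, y_max)` box convention, and the channel-first layout are all assumptions.

```python
import numpy as np

def bbox_to_mask_channel(image, bbox):
    """Rasterize a bounding box prompt as a binary mask and stack it
    with a 2D image as an extra input channel (illustrative sketch).

    image: 2D array of shape (H, W).
    bbox:  (x_min, y_min, x_max, y_max) in pixel coordinates.
    Returns a (2, H, W) array: [image, box mask].
    """
    h, w = image.shape
    mask = np.zeros((h, w), dtype=image.dtype)
    x_min, y_min, x_max, y_max = bbox
    # Fill the box region with ones; everything else stays zero.
    mask[y_min:y_max, x_min:x_max] = 1
    # Channel-first stack so the mask acts as a second input channel.
    return np.stack([image, mask], axis=0)
```

During training, such a box can be simulated by taking the bounding box of the ground-truth mask, optionally with a small random perturbation of its corners.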

nnUNetv2_train xxx 2d all

Inference

The inference scripts assume that the data is in the npz format generated by the MedSAM preprocessing pipeline. To run inference, download the model here and use the provided inference scripts.

Inference for 2D images

The infer_nnunet_2D.py script can be used for inference on 2D images. Below are the parameters that need to be configured:

  • -checkpoint: Path to the trained model checkpoint.
  • -data_root: Path to the test data.
  • --grey: Whether the input dataset is greyscale.
  • -pred_save_dir: Path to save the output segmented images.
  • --save_overlay: Save the overlay of the segmentation on the original image. (Optional)
  • -png_save_dir: Path to save the overlay images. (Required if --save_overlay is used)
  • -num_workers: Number of workers for multiprocessing during inference. (Optional)

Note that for 2D images, the preprocessing step prior to inference differs between RGB and greyscale inputs. Hence, it is necessary to pass the --grey flag when running inference on greyscale images.
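The reason for the flag is the array shape: greyscale images are stored as (H, W) while RGB images are (H, W, 3), so they need different handling before being fed to the network. The following loader is a hedged sketch of that distinction; the `imgs` npz key and the channel layouts are assumptions for illustration, not the script's exact implementation.

```python
import numpy as np

def load_2d_image(npz_path, grey=False):
    """Load a 2D image from a MedSAM-style npz file (sketch).

    Assumes the image is stored under the 'imgs' key.
    Returns a channel-first float32 array:
      greyscale (H, W)   -> (1, H, W)
      RGB       (H, W,3) -> (3, H, W)
    """
    data = np.load(npz_path)
    img = data["imgs"]
    if grey:
        # Single-channel input: add a leading channel axis.
        img = img[None, ...]
    else:
        # RGB input: move the channel axis to the front.
        img = np.transpose(img, (2, 0, 1))
    return img.astype(np.float32)
```

Passing --grey for an RGB dataset (or omitting it for a greyscale one) would make a loader like this misinterpret the array shape, which is why the flag must match the data.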

For RGB images:

python infer_nnunet_2D.py \
    -checkpoint nnUNet_results/Dataset001_Fundus \
    -data_root path/to/test/data \
    -pred_save_dir path/to/save/results \
    --save_overlay \
    -png_save_dir path/to/save/overlay \
    -num_workers 2

For greyscale images:

python infer_nnunet_2D.py \
    -checkpoint nnUNet_results/Dataset002_X-Ray \
    -data_root path/to/test/data \
    -pred_save_dir path/to/save/results \
    --save_overlay \
    -png_save_dir path/to/save/overlay \
    -num_workers 2 \
    --grey

Inference for 3D images

The infer_nnunet_3D.py script can be used for inference on 3D images. Below are the parameters that need to be configured:

  • -checkpoint: Path to the trained model checkpoint.
  • -data_root: Path to the test data.
  • -pred_save_dir: Path to save the output segmented 3D images.
  • --save_overlay: Save the overlay of the segmentation on the original image. (Optional)
  • -png_save_dir: Path to save the overlay images. (Required if --save_overlay is used)
  • -num_workers: Number of workers for multiprocessing during inference. (Optional)

Example command for 3D image inference:

python infer_nnunet_3D.py \
    -checkpoint nnUNet_results/Dataset003_CT \
    -data_root path/to/test/data \
    -pred_save_dir path/to/save/results \
    --save_overlay \
    -png_save_dir path/to/save/overlay \
    -num_workers 2

Acknowledgement

We would like to thank the authors and the contributors of nnUNet for their great work and for making the code publicly available.