NiftyNet is an open-source library for convolutional networks in medical image analysis.
NiftyNet was developed by the Centre for Medical Image Computing at University College London (UCL).
- Easy-to-customise interfaces of network components
- Designed for sharing networks and pretrained models
- Designed to support 2-D, 2.5-D, 3-D, 4-D inputs*
- Efficient discriminative training with multiple-GPU support
- Implemented recent networks (HighRes3DNet, 3D U-net, V-net, DeepMedic)
- Comprehensive evaluation metrics for medical image segmentation
*2.5-D: volumetric images processed as a stack of 2D slices; 4-D: co-registered multi-modal 3D volumes
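The supported input dimensionalities can be pictured as array shapes; the following NumPy sketch is purely illustrative (the sizes are arbitrary, and NiftyNet itself handles the windowing internally):

```python
import numpy as np

# Illustrative shapes only -- not NiftyNet code.
vol_2d = np.zeros((42, 42))           # 2-D: a single slice
vol_25d = np.zeros((5, 42, 42))       # 2.5-D: a volume as a stack of 2-D slices
vol_3d = np.zeros((42, 42, 42))       # 3-D: a full volume
vol_4d = np.zeros((2, 42, 42, 42))    # 4-D: two co-registered 3-D modalities
```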
- six
- Python
- Tensorflow
- Nibabel
- Numpy
- Scipy
- configparser
- scikit-image
Please run `pip install -r requirements-gpu.txt` to install all dependencies with GPU support (or `pip install -r requirements-cpu.txt` for a CPU-only version).
For more information on installing Tensorflow, please follow https://www.tensorflow.org/install/
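After installation, a quick sanity check (not part of NiftyNet) can confirm that the dependencies listed above are importable; note that scikit-image imports as `skimage`:

```python
import importlib

def check_dependencies():
    """Try to import each dependency; return {module_name: importable?}."""
    status = {}
    for module in ('six', 'tensorflow', 'nibabel', 'numpy',
                   'scipy', 'configparser', 'skimage'):
        try:
            importlib.import_module(module)
            status[module] = True
        except ImportError:
            status[module] = False
    return status

if __name__ == '__main__':
    for module, ok in check_dependencies().items():
        print(module, 'OK' if ok else 'MISSING')
```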
To train a "toynet" specified in `network/toynet.py`:
```sh
cd NiftyNet/
# download demo data (~62MB)
wget https://www.dropbox.com/s/y7mdh4m9ptkibax/example_volumes.tar.gz
tar -xzvf example_volumes.tar.gz
python run_application.py train --net_name toynet \
    --image_size 42 --label_size 42 --batch_size 1
```
(GPU computing is enabled by default; to train with CPU only, please use `--num_gpus 0`.)
After the training process, to do segmentation with a trained "toynet":

```sh
cd NiftyNet/
python run_application.py inference --net_name toynet \
    --save_seg_dir ./seg_output \
    --image_size 80 --label_size 80 --batch_size 8
```
Commandline parameters override the default settings defined in `config/default_config.txt`.
Alternatively, to run with a customised config file:

```sh
cd NiftyNet/
# training
python run_application.py train -c /path/to/customised_config
# inference
python run_application.py inference -c /path/to/customised_config
```

where `/path/to/customised_config` implements all parameters listed by running:

```sh
python run_application.py -h
```
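As an illustration only, such a config file might map the commandline flags shown above to file entries; the section and key names below are hypothetical, so please take the actual names from `python run_application.py -h` and `config/default_config.txt`:

```ini
; hypothetical sketch of a customised config -- check the real key names
[settings]
net_name = toynet
image_size = 42
label_size = 42
batch_size = 1
num_gpus = 1
save_seg_dir = ./seg_output
```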
- Create a `network/new_net.py` inheriting `TrainableLayer` from `layer.base_layer`
- Implement the `layer_op()` function using the building blocks in `layer/` or creating new layers
- Import `network.new_net` to the `NetFactory` class in `run_application.py`
- Train the network with `python run_application.py train -c /path/to/customised_config`
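The first two steps can be sketched as follows. Note that `TrainableLayer` here is a self-contained stand-in stub, not NiftyNet's actual base class, and the pass-through `layer_op()` is purely illustrative; a real network would compose convolutional blocks from `layer/`:

```python
# Stand-in stub for layer.base_layer.TrainableLayer (illustration only).
class TrainableLayer:
    def __init__(self, name='layer'):
        self.name = name

    def __call__(self, *args, **kwargs):
        # Calling the layer dispatches to its layer_op() implementation.
        return self.layer_op(*args, **kwargs)

# Sketch of a new network following the pattern described above.
class NewNet(TrainableLayer):
    def __init__(self, num_classes, name='NewNet'):
        super().__init__(name=name)
        self.num_classes = num_classes

    def layer_op(self, images):
        # A real implementation would stack building blocks from layer/;
        # here the input is passed through unchanged to show the interface.
        return images
```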
Image data in NIfTI format (extension .nii or .nii.gz) are supported.
The basic picture of a training procedure (data parallelism) is:
```
<Multi-GPU training>
(engine/training.py)
|>----------------------+
|>---------------+ |
|^| | |
(engine/input_buffer.py) |^| sync GPU_1 GPU_2 ...
|^| +----> model model (network/*.py)
with multiple threads: |^| | | |
(layer/input_normalisation.py) |^| CPU v v (layer/*.py)
image&label ->> (engine/*_sampler.py) >>> | model <----+------+
(*.nii.gz) (layer/rand_*.py) >>> | update stochastic gradients
```
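The synchronised update in the diagram can be sketched with NumPy standing in for the GPU towers; this uses a toy quadratic loss and is not NiftyNet code:

```python
import numpy as np

def gradient(weights, batch):
    # Toy quadratic loss 0.5 * ||weights - mean(batch)||^2,
    # whose gradient is simply weights - mean(batch).
    return weights - batch.mean(axis=0)

weights = np.zeros(3)
batches = [np.ones((4, 3)), 3 * np.ones((4, 3))]   # one batch per "GPU"

grads = [gradient(weights, b) for b in batches]    # computed in parallel
avg_grad = np.mean(grads, axis=0)                  # synchronised on the CPU
weights -= 0.1 * avg_grad                          # single shared model update
```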
For UCL CMIC members: to run NiftyNet on the CS cluster, follow these instructions:

1. If you do not already have a CS cluster account, get one by following these instructions.
2. Log in to the CS cluster by following these instructions.
3. Set up NiftyNet on your cluster account.
4. Install dependencies by running the following commands from the NiftyNet directory:
```sh
export LD_LIBRARY_PATH=/share/apps/python-3.6.0-shared/lib:$LD_LIBRARY_PATH
/share/apps/python-3.6.0-shared/bin/pip3 install --user -r requirements-gpu.txt
```
5.a) To request a single GPU, create a submission script (`mySubmissionScript.sh` in this example) for the NiftyNet task (`run_application.py train --net_name toynet --image_size 42 --label_size 42 --batch_size 1` in this example):
```sh
#!/bin/bash
#$ -P gpu
#$ -l gpu=1
#$ -l gpu_titanxp=1
#$ -l h_rt=23:59:0
#$ -l tmem=11.5G
#$ -S /bin/bash
# The lines above are resource requests. This script requests 1 Titan X (Pascal) GPU for 24 hours and 11.5 GB of memory, and runs the job with the Bash shell.
# More information about resource requests can be found at http://hpc.cs.ucl.ac.uk/job_submission_sge/

# This line ensures that you only use the 1 GPU requested.
export CUDA_VISIBLE_DEVICES=$(( `nvidia-smi | grep " / .....MiB"|grep -n " ...MiB / [0-9]....MiB"|cut -d : -f 1|head -n 1` - 1 ))

# If CUDA_VISIBLE_DEVICES is set to -1, there were no available GPUs. This is often due to someone else failing to correctly limit their GPU usage as in the line above.
if (( $CUDA_VISIBLE_DEVICES > -1 ))
then
  # These lines run your NiftyNet task with the correct library paths for TensorFlow.
  TF_LD_LIBRARY_PATH=/share/apps/libc6_2.17/lib/x86_64-linux-gnu/:/share/apps/libc6_2.17/usr/lib64/:/share/apps/gcc-6.2.0/lib64:/share/apps/gcc-6.2.0/lib:/share/apps/python-3.6.0-shared/lib:/share/apps/cuda-8.0/lib64:/share/apps/cuda-8.0/extras/CUPTI/lib64:$LD_LIBRARY_PATH
  /share/apps/libc6_2.17/lib/x86_64-linux-gnu/ld-2.17.so --library-path $TF_LD_LIBRARY_PATH $(command -v /share/apps/python-3.6.0-shared/bin/python3) -u run_application.py train --net_name toynet --image_size 42 --label_size 42 --batch_size 1
fi
```
5.b) Similarly, to request two GPUs:

```sh
#!/bin/bash
#$ -P gpu
#$ -l gpu=2
#$ -l gpu_titanxp=2
#$ -l h_rt=23:59:0
#$ -l tmem=11.5G
#$ -S /bin/bash
# The lines above are resource requests. This script requests 2 Titan X (Pascal) GPUs for 24 hours and 11.5 GB of memory, and runs the job with the Bash shell.
# More information about resource requests can be found at http://hpc.cs.ucl.ac.uk/job_submission_sge/

# These lines ensure that you only use the 2 GPUs requested.
n_gpus=2
nvidia-smi
export CUDA_VISIBLE_DEVICES=$(echo $(nvidia-smi pmon -s m -c 1|grep ' - '|sed "s:[^0-9]::g"|sort|uniq|head -n $n_gpus)|sed -e "s:\s:,:g")
echo $CUDA_VISIBLE_DEVICES

# These lines run your NiftyNet task with the correct library paths for TensorFlow.
TF_LD_LIBRARY_PATH=/share/apps/libc6_2.17/lib/x86_64-linux-gnu/:/share/apps/libc6_2.17/usr/lib64/:/share/apps/gcc-6.2.0/lib64:/share/apps/gcc-6.2.0/lib:/share/apps/python-3.6.0-shared/lib:/share/apps/cuda-8.0/lib64:/share/apps/cuda-8.0/extras/CUPTI/lib64:$LD_LIBRARY_PATH
/share/apps/libc6_2.17/lib/x86_64-linux-gnu/ld-2.17.so --library-path $TF_LD_LIBRARY_PATH $(command -v /share/apps/python-3.6.0-shared/bin/python3) -u run_application.py train --net_name toynet --num_gpus $n_gpus --image_size 42 --label_size 42 --batch_size 1
```
6. While logged in to comic100 or comic2, submit the job using `qsub`:

```sh
qsub mySubmissionScript.sh
```
Feature requests and bug reports are collected on the Issues page.
Contributors are encouraged to take a look at CONTRIBUTING.md.
If you use this software, please cite:
```
@InProceedings{niftynet17,
  author    = {Li, Wenqi and Wang, Guotai and Fidon, Lucas and Ourselin, Sebastien and Cardoso, M. Jorge and Vercauteren, Tom},
  title     = {On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task},
  booktitle = {International Conference on Information Processing in Medical Imaging (IPMI)},
  year      = {2017}
}
```
This project was supported through an Innovative Engineering for Health award by the Wellcome Trust and EPSRC (WT101957, NS/A000027/1), the National Institute for Health Research University College London Hospitals Biomedical Research Centre (NIHR BRC UCLH/UCL High Impact Initiative), UCL EPSRC CDT Scholarship Award (EP/L016478/1), a UCL Overseas Research Scholarship, a UCL Graduate Research Scholarship, and the Health Innovation Challenge Fund by the Department of Health and Wellcome Trust (HICF-T4-275, WT 97914). The authors would like to acknowledge that the work presented here made use of Emerald, a GPU-accelerated High Performance Computer, made available by the Science & Engineering South Consortium operated in partnership with the STFC Rutherford-Appleton Laboratory.