
autocuda - Automatically choose the CUDA device with the largest free memory in PyTorch

Usage

Install

pip install autocuda

Ready to use

import os

from autocuda import auto_cuda_info, auto_cuda_index, auto_cuda, auto_cuda_name

cuda_info_dict = auto_cuda_info()  # information about the visible CUDA devices

cuda_device_index = auto_cuda_index()  # index of the CUDA device with the largest free memory; returns 'cpu' if CUDA is unavailable
# os.environ['CUDA_VISIBLE_DEVICES'] = str(cuda_device_index)

cuda_device = auto_cuda()  # the device with the largest free memory (CPU if CUDA is unavailable)
# model.to(cuda_device)  # assuming you have initialized your PyTorch model

cuda_device_name = auto_cuda_name()  # human-readable name of the chosen device
print('Choosing cuda device: {}'.format(cuda_device_name))
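For example, here is a minimal sketch of typical usage in a PyTorch script; the toy torch.nn.Linear model and the tensor shapes are illustrative placeholders, not part of autocuda:

import torch
from autocuda import auto_cuda

device = auto_cuda()  # device with the largest free memory, or CPU if CUDA is unavailable

model = torch.nn.Linear(16, 2).to(device)   # substitute your own nn.Module
inputs = torch.randn(8, 16).to(device)      # inputs must live on the same device as the model
outputs = model(inputs)
print(outputs.shape)  # torch.Size([8, 2])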

Copyright

This package is based on https://github.com/QuantumLiu/tf_gpu_manager, which is released under the MIT license.
