Fix torch multi-GPU --device error (ultralytics#1701)
* Fix torch GPU error

* Update torch_utils.py

single-line device =

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
NanoCode012 and glenn-jocher committed Dec 16, 2020
1 parent 44c84e3 commit 9652ae9
Showing 1 changed file with 3 additions and 2 deletions.
utils/torch_utils.py: 5 changes (3 additions & 2 deletions)
@@ -75,13 +75,14 @@ def time_synchronized():
     return time.time()
 
 
-def profile(x, ops, n=100, device=torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')):
+def profile(x, ops, n=100, device=None):
     # profile a pytorch module or list of modules. Example usage:
     # x = torch.randn(16, 3, 640, 640)  # input
     # m1 = lambda x: x * torch.sigmoid(x)
     # m2 = nn.SiLU()
     # profile(x, [m1, m2], n=100)  # profile speed over 100 iterations
 
+    device = device or torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
     x = x.to(device)
     x.requires_grad = True
     print(torch.__version__, device.type, torch.cuda.get_device_properties(0) if device.type == 'cuda' else '')
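Why the default had to move into the function body (a minimal sketch, not part of the commit): Python evaluates default-argument expressions once, when the def statement runs at import time, so the old signature queried torch.cuda.is_available() as soon as torch_utils.py was imported, before a --device-driven CUDA_VISIBLE_DEVICES assignment (as in ultralytics' select_device) could take effect. The new signature defers the device lookup to call time. The profile_old/profile_new names and the environment-variable line below are illustrative assumptions, not code from the commit.

import os
import torch


def profile_old(x, device=torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')):
    # the default is evaluated at import time, so `device` is frozen before any
    # later GPU selection can influence it
    return x.to(device)


def profile_new(x, device=None):
    # evaluated at call time, after any CUDA_VISIBLE_DEVICES / --device setup has run
    device = device or torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    return x.to(device)


if __name__ == '__main__':
    os.environ['CUDA_VISIBLE_DEVICES'] = '0'  # hypothetical --device selection, set after import
    print(profile_new(torch.randn(2, 3)).device)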
