# GPU Collective Communication (NCCL)

Speaker: Dan Johnson

Date: 5/4/2012

The NVIDIA Collective Communications Library (NCCL) implements fast communication between GPUs, making it a critical part of distributed training. I'll be talking about what NCCL is, why we need it for distributed training, and how it works under the hood.

You can find the code from this lecture here:

- `ddp_simple.py`
- `ddp_example.py`
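
For orientation, here is a minimal sketch of how PyTorch's DistributedDataParallel (DDP) is typically driven with the NCCL backend. This is a generic illustration, not the contents of the files above; the model, sizes, and optimizer are placeholders chosen for brevity.

```python
# Minimal DDP sketch using NCCL as the communication backend (illustrative only).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each spawned process.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # NCCL provides the GPU-to-GPU collectives (all-reduce, broadcast, ...).
    dist.init_process_group(backend="nccl")

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    # DDP wraps the model; gradients are all-reduced across ranks via NCCL.
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    x = torch.randn(32, 1024, device=local_rank)
    loss = ddp_model(x).square().mean()
    loss.backward()   # NCCL all-reduces gradients, overlapped with the backward pass
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

A script like this would be launched with `torchrun --nproc_per_node=<num_gpus> script.py`, which starts one process per GPU and lets NCCL handle the inter-GPU communication.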