Does high-level cuQuantum API work with MPI? #112

Answered by DmitryLyakh
yapolyak asked this question in Q&A

Yes, the high-level tensor network API fully supports distributed parallel execution via MPI on multiple/many GPUs. Distributed parallel execution is activated in exactly the same way as before, for example:

cutn.distributed_reset_configuration(handle, MPI._addressof(cutn_comm), MPI._sizeof(cutn_comm))

Of course, this requires a distributed GPU platform with a CUDA-aware MPI library installed (see https://docs.nvidia.com/cuda/cuquantum/latest/cutensornet/api/functions.html#distributed-parallelization-api), as well as other bookkeeping related to MPI initialization, etc. (like here).
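
For reference, below is a minimal sketch of how this typically fits together with the high-level cuquantum.contract API. It assumes mpi4py, CuPy, and a CUDA-aware MPI installation; the contraction expression, shapes, and seed are arbitrary placeholders, and depending on the cuQuantum version the CUTENSORNET_COMM_LIB environment variable may also need to point at the MPI communicator wrapper library shipped with cuTensorNet.

# Minimal sketch (not an official sample): distributed contraction with the
# high-level cuquantum.contract API, assuming mpi4py, CuPy and CUDA-aware MPI.
from mpi4py import MPI
import cupy as cp
import numpy as np
from cuquantum import contract, cutensornet as cutn

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Bind each MPI process to a GPU (simple round-robin over visible devices).
device_id = rank % cp.cuda.runtime.getDeviceCount()
cp.cuda.Device(device_id).use()

# Create a cuTensorNet handle and hand it a duplicated MPI communicator.
handle = cutn.create()
cutn_comm = comm.Dup()
cutn.distributed_reset_configuration(
    handle, MPI._addressof(cutn_comm), MPI._sizeof(cutn_comm))

# Every rank builds identical operands (fixed seed); broadcasting from a root
# rank would work equally well. Expression and shapes are placeholders.
rng = np.random.default_rng(2023)
expr = 'ijk,jkl,klm->im'
shapes = [(8, 8, 8), (8, 8, 8), (8, 8, 8)]
operands = [cp.asarray(rng.random(s)) for s in shapes]

# Passing the configured handle makes the high-level API distribute the
# contraction (and path finding) across all MPI processes.
result = contract(expr, *operands, options={'device_id': device_id, 'handle': handle})

cutn.destroy(handle)

Launched with something like mpiexec -n 4 python example.py, every rank ends up with the same result.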

Answer selected by yapolyak