-
According to https://docs.nvidia.com/cuda/cuquantum/cutensornet/overview.html#slicing, slicing trades extra computation for a lower memory footprint.

I'm running a QAOA circuit as described in https://arxiv.org/abs/2012.02430 (MaxCut for random graphs of degree 3). I have explicitly ensured that slicing is enabled, and have tweaked various slicing settings as shown in the following snippet:

```python
optimize_options = {
    "samples": 5,  # default is 0 (disabled)
    "slicing": {
        "disable_slicing": 0,
        "memory_model": 1,  # 0 is heuristic, 1 is cutensor (default)
        "min_slices": 10,   # default is 1
        "slice_factor": 2,  # default is 32
    },
    "cost_function": 0,  # 0 for FLOPS (default), 1 for time
    "reconfiguration": {
        "num_iterations": 800,  # default is 500; good values are within 500-1000
        # Higher number means more time spent in reconfiguration; scales exponentially
        "num_leaves": 10,  # default is 8
    },
}
```

I split the contraction step into separate `contract_path` and `contract` calls. At 30 qubits, on an NVIDIA A100 40 GB, I consistently got this error, which means the contraction path is not efficient enough. But even so, shouldn't slicing ensure that the contraction can run, albeit taking longer (as a trade-off for lower memory)?
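For intuition on the trade-off being asked about: slicing an index turns one large contraction into a sum of smaller contractions, one per fixed value of that index, so peak memory shrinks while total work stays roughly the same. A minimal NumPy sketch of the idea (not cuTensorNet's API, just the mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16, 32))   # shares index k (size 32) with B
B = rng.standard_normal((32, 16, 8))

# Full contraction: the intermediate spans the whole k axis at once.
full = np.einsum('ijk,kjl->il', A, B)

# Sliced contraction: fix k one value at a time and accumulate partial
# results. Each step touches a 32x smaller slab of A and B, at the cost
# of 32 separate (smaller) contractions.
sliced = np.zeros_like(full)
for k in range(A.shape[2]):
    sliced += np.einsum('ij,jl->il', A[:, :, k], B[k, :, :])

print(np.allclose(full, sliced))  # True
```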
-
Hello, sorry for the delayed response. A couple of notes here. You can either find the path once and reuse it, or contract directly:

```python
# Approach 1: reuse the path
optimize_options = {"samples": 5, ...}
path, info = contract_path(expr, *operands, optimize=optimize_options)
output = contract(expr, *operands, optimize={'path': path})

# Approach 2: direct
output = contract(expr, *operands, optimize=optimize_options)
```
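The path-reuse pattern can be exercised with NumPy's analogous API (`np.einsum_path` / `np.einsum`) if you just want to verify the mechanics outside cuQuantum; this is an illustrative stand-in, not the cuTensorNet pathfinder:

```python
import numpy as np

rng = np.random.default_rng(42)
expr = 'ab,bc,cd->ad'
operands = [rng.standard_normal((4, 5)),
            rng.standard_normal((5, 6)),
            rng.standard_normal((6, 4))]

# Approach 1: find the contraction path once, then reuse it.
path, info = np.einsum_path(expr, *operands, optimize='optimal')
out1 = np.einsum(expr, *operands, optimize=path)

# Approach 2: let the call re-run the pathfinder every time.
out2 = np.einsum(expr, *operands, optimize='optimal')

print(np.allclose(out1, out2))  # True
```

Reusing the path matters when the same contraction is executed many times (e.g. once per QAOA parameter setting), since pathfinding itself can be expensive.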
Also, to help us reproduce the issue, could you serialize the problem data and share it with us? For example:

```python
import pickle

data = {'expr': expr,
        'shapes': [o.shape for o in operands]}
with open('data.pickle', 'wb') as f:
    pickle.dump(data, f)
```
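For completeness, the dump/load round trip can be checked end to end; a self-contained sketch with placeholder data standing in for the real `expr` and operand shapes:

```python
import os
import pickle
import tempfile

# Placeholder problem data (hypothetical values, for illustration only).
data = {'expr': 'ab,bc->ac',
        'shapes': [(3, 4), (4, 2)]}

# Round-trip through a temporary file to confirm nothing is lost.
path = os.path.join(tempfile.mkdtemp(), 'data.pickle')
with open(path, 'wb') as f:
    pickle.dump(data, f)
with open(path, 'rb') as f:
    loaded = pickle.load(f)

print(loaded == data)  # True
```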
-
After the fix in #63 (reply in thread) and upgrading to 23.06, I occasionally encountered this error with the same circuit I have been using:

The error message is definitely different this time. But I wonder whether this is due to me not gc-ing the …
Upon further investigation, we realized that the number of elements exceeding int32_t was due to slicing information not being passed to `optimize`, so the `contract` call did not slice at all. Two simple fixes: (1) replace `optimize={"path": path}` with `optimize={'path': path, 'slicing': info.slices}`; or (2) if you don't need separate `contract_path` and `contract` calls, you can simply do `contract(..., optimize=optimize_options)` — we internally pass both path and slicing. Let us know if all the issues are resolved with the fixes. BTW, I think with the fix you won't need the custom `optimize_options`.
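For context on why the int32_t limit bites at this scale (my reading of the error, not a statement about cuTensorNet internals): a dense intermediate with n open qubit (dimension-2) indices holds 2**n elements, so anything with 31 or more such indices already overflows a signed 32-bit element count:

```python
INT32_MAX = 2**31 - 1  # largest value a signed 32-bit counter can hold

def elements_for_qubit_indices(n):
    """Element count of a dense intermediate with n open dimension-2 indices."""
    return 2 ** n

print(elements_for_qubit_indices(30) <= INT32_MAX)  # True: 2**30 still fits
print(elements_for_qubit_indices(31) <= INT32_MAX)  # False: 2**31 overflows
```

This is consistent with slicing being the cure: cutting a few indices out of the largest intermediate brings its element count back under the limit.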