This repository has been archived by the owner on Feb 12, 2022. It is now read-only.

There might be a bug in splitcross.py #123

Open

cswhjiang opened this issue Sep 16, 2020 · 0 comments

cswhjiang commented Sep 16, 2020

The following test case fails with PyTorch 1.6.

# This test lives at the bottom of splitcross.py (see the traceback below),
# so numpy as np, torch, and SplitCrossEntropyLoss are already in scope at module level.
if __name__ == '__main__':
    np.random.seed(42)
    torch.manual_seed(42)
    if torch.cuda.is_available():
        torch.cuda.manual_seed(42)

    V = 300   # vocabulary size
    H = 400   # hidden / embedding size
    N = 500   # number of tokens per step
    E = 10    # number of iterations

    embed = torch.nn.Embedding(V, H).cuda()
    crit = SplitCrossEntropyLoss(hidden_size=H, splits=[100, 200]).cuda()
    bias = torch.nn.Parameter(torch.ones(V)).cuda()
    optimizer = torch.optim.SGD(list(embed.parameters()) + list(crit.parameters()), lr=1)

    for _ in range(E):
        # Random token indices in [0, V)
        prev = torch.rand(N, 1) * 0.999 * V
        prev = prev.int().long().cuda()
        x = (torch.rand(N, 1) * 0.999 * V).int().long().cuda()
        y = embed(prev).squeeze()
        c = crit(embed.weight, bias, y, x.view(N))
        print('Crit', c.exp().item())

        logprobs = crit.logprob(embed.weight, bias, y[:2]).exp()
        print(logprobs)
        print(logprobs.sum(dim=1))

        optimizer.zero_grad()
        c.backward()
        optimizer.step()
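
Not part of the original report, but as a quick sanity check one can confirm that the token indices generated by this test are all inside [0, V), which suggests the out-of-bounds index reported below is produced inside SplitCrossEntropyLoss rather than by the inputs. A small sketch along those lines (values mirror the test case above):

# Sanity-check sketch (not from the original report): verify the generated
# token indices are all inside [0, V), so any out-of-range index would have
# to be produced inside SplitCrossEntropyLoss itself.
import torch

V, N = 300, 500
torch.manual_seed(42)

prev = (torch.rand(N, 1) * 0.999 * V).int().long()
x = (torch.rand(N, 1) * 0.999 * V).int().long()

assert prev.min().item() >= 0 and prev.max().item() < V
assert x.min().item() >= 0 and x.max().item() < V
print('all input indices are within [0, %d)' % V)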

Here are the error messages:

......
/pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:86: operator(): block: [0,0,0], thread: [42,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:86: operator(): block: [0,0,0], thread: [43,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:86: operator(): block: [0,0,0], thread: [44,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:86: operator(): block: [0,0,0], thread: [46,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:86: operator(): block: [0,0,0], thread: [48,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:86: operator(): block: [0,0,0], thread: [49,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:86: operator(): block: [0,0,0], thread: [51,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:86: operator(): block: [0,0,0], thread: [54,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:86: operator(): block: [0,0,0], thread: [55,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:86: operator(): block: [0,0,0], thread: [58,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:86: operator(): block: [0,0,0], thread: [60,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:86: operator(): block: [0,0,0], thread: [61,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:86: operator(): block: [0,0,0], thread: [63,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
Traceback (most recent call last):
  File "splitcross.py", line 195, in <module>
    print('Crit', c.exp().item())
RuntimeError: CUDA error: device-side assert triggered
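
Because the device-side assert fires asynchronously, the traceback points at `c.exp().item()` rather than at the call that actually produced the bad index. A hedged suggestion, not part of the original report: re-running with the environment variable CUDA_LAUNCH_BLOCKING=1 set, or moving the same loop to CPU, usually turns this into a Python-level error that names the offending call inside splitcross.py. If the problem is reproducible off the GPU, a minimal CPU variant could look like this:

# Hypothetical CPU repro sketch (assumes splitcross.py is importable as a
# module). On CPU an out-of-range gather/scatter index raises a regular
# Python-level error instead of a device-side assert, which makes it easier
# to see which call in splitcross.py generates the bad index.
import torch
from splitcross import SplitCrossEntropyLoss

V, H, N = 300, 400, 500
torch.manual_seed(42)

embed = torch.nn.Embedding(V, H)
crit = SplitCrossEntropyLoss(hidden_size=H, splits=[100, 200])
bias = torch.nn.Parameter(torch.ones(V))

prev = (torch.rand(N, 1) * 0.999 * V).int().long()
x = (torch.rand(N, 1) * 0.999 * V).int().long()
y = embed(prev).squeeze()

c = crit(embed.weight, bias, y, x.view(N))
print('Crit', c.exp().item())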