[MPS] Fix non-contig to contig tensor copy #86056
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/86056
❌ 3 Failures as of commit 557f10b. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
@@ -186,7 +186,9 @@ void copy_cast_mps(at::Tensor& dst, const at::Tensor& src,
     // For View tensors, the storage offset can be bigger than what's being reported by nbytes
     src_total_size = at::detail::computeStorageNbytesContiguous(src.sizes(), src.element_size(), src.storage_offset());
   } else {
     src = src_;
+    // Views are not the only source of non-contiguous tensors
+    // For example, tensor constructed from ndarray can be non-contagious
Suggested change:
- // For example, tensor constructed from ndarray can be non-contagious
+ // For example, tensor constructed from ndarray can be non-contiguous
nit
fix lint before merge 😛
`Tensor::contiguous()` is a no-op if tensor is contiguous
Fixes #85967
Force-pushed from 7dcd968 to 557f10b
+ # See https://github.com/pytorch/pytorch/issues/85967
+ def test_from_numpy_non_contiguous(self):
+     a = np.arange(9).reshape(3, 3)[:, :2]
+     t_cpu = torch.tensor(a, device="cpu")
Does this tensor constructor copy the passed data and always produce contiguous tensors? Should this not be torch.as_tensor instead?
Indeed it does, but that's the point: the MPS copy operator did not handle this case correctly.
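The exchange above can be illustrated with a short sketch (assuming NumPy and PyTorch are importable) of the difference between the two constructors: `torch.tensor` copies the data into a fresh contiguous tensor, while `torch.as_tensor` shares the ndarray's buffer where possible and thus preserves its non-contiguous layout:

```python
import numpy as np
import torch

a = np.arange(9).reshape(3, 3)[:, :2]  # non-contiguous ndarray view

# torch.tensor always copies, and the copy is contiguous; mutating it
# leaves the source ndarray untouched.
t_copy = torch.tensor(a)
assert t_copy.is_contiguous()
t_copy[0, 0] = 42
assert a[0, 0] == 0

# torch.as_tensor reuses the buffer when dtype and device permit,
# so the tensor keeps the ndarray's non-contiguous strides.
t_shared = torch.as_tensor(a)
assert not t_shared.is_contiguous()
```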
@pytorchbot merge -f "MPS tests are green"
@pytorchbot successfully started a merge job. Check the current status here.
Hey @malfet.
This handles a rare case when MPS tensor is constructed from non-contiguous CPU tensor. Fixes #85967 Pull Request resolved: #86056 Approved by: https://github.com/janeyx99
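A minimal sketch of the regression scenario from issue #85967, mirroring the new test above: constructing an MPS tensor directly from a non-contiguous ndarray and comparing it against the CPU result. The MPS branch is guarded because that backend only exists on Apple-silicon builds of PyTorch:

```python
import numpy as np
import torch

# Non-contiguous ndarray view, as in the regression test.
a = np.arange(9).reshape(3, 3)[:, :2]
t_cpu = torch.tensor(a, device="cpu")

if torch.backends.mps.is_available():
    # Before this fix, copying the strided source to MPS during
    # construction produced incorrect values.
    t_mps = torch.tensor(a, device="mps")
    assert (t_cpu == t_mps.cpu()).all()
```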