
Sparse Compressed Transpose add support for Batch dims and BSR/BSC layouts #82122

Closed
wants to merge 31 commits

Conversation


facebook-github-bot commented Jul 25, 2022


❌ 1 New Failure

As of commit 906e60b (more details on the Dr. CI page):

  • 1/1 failures introduced in this PR

🕵️‍♀️ 1 failure not recognized by patterns:

The following CI failures may be due to changes from the PR
Job                      Step
CircleCI Checks build    Unknown

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


…compressed"

sparse layouts

[ghstack-poisoned]
@amjames amjames changed the title Sparse Compressed Transpose batch dim support for (block) compressed Sparse Compressed Transpose add support for Batch dims and BSR/BSC layouts Jul 26, 2022
@amjames amjames added the module: sparse Related to torch.sparse label Jul 27, 2022
// It should not be possible to reach this code block
TORCH_INTERNAL_ASSERT_DEBUG_ONLY(
    false, "transpose(): Shouldn't have reached this point");
result_vals = AT_DISPATCH_PLAIN_SPARSE_COMPRESSED_LAYOUTS(
@amjames (Collaborator, Author) commented:

I left this code in although support is disabled and the block is unreachable.

Dense dimension support for CSR/CSC is in the immediate future, but I can stash the change locally and bring it back when we are ready to actually add it, if that is preferred.

@amjames amjames requested a review from nikitaved July 29, 2022 22:43
@bhosmer left a comment:

This LGTM, a couple of super minor comments inline but really nice overall. Definitely worth getting @nikitaved's eyes on it too.

};
const auto dim1_type = classify_dim(dim1);
TORCH_CHECK(
    classify_dim(dim1) == transpose_type,
@bhosmer:

s/classify_dim(dim1)/dim1_type?

aten/src/ATen/native/TensorShape.cpp (review thread resolved)
test/test_sparse_csr.py (outdated; review thread resolved)

amjames commented Sep 1, 2022

@pytorchbot merge

@pytorchmergebot

@pytorchbot successfully started a merge job. Check the current status here and land check progress here.
The merge job was triggered with the land checks (-l) flag. If you did not specify this flag yourself, you are likely enrolled in the land checks rollout. This means that your change will be merged once all checks on your PR and the land checks have passed (ETA 4 Hours). If you need to coordinate lands between different changes and cannot risk a land race, please add the ciflow/trunk label to your PR and wait for signal to complete, and then land your changes in proper order. Having trunk, pull, and Lint pre-run on a PR will bypass land checks and the ETA should be immediate. If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

pytorchmergebot pushed a commit that referenced this pull request Sep 1, 2022
@pytorchmergebot

Merge failed

Reason: Failed to merge; some land checks failed: pull, pull / linux-bionic-py3_7-clang8-xla / test (xla, 1, 1, linux.2xlarge)

If you believe this is an error, you can use the old behavior with @pytorchbot merge -g (optionally with the ciflow/trunk to get land checks) or use @pytorchbot merge -f "some reason here". For more information, see the bot wiki.

Please reach out to the PyTorch DevX Team with feedback or questions!



amjames commented Sep 2, 2022

@pytorchbot merge

@pytorchmergebot

@pytorchbot successfully started a merge job. Check the current status here and land check progress here.
The merge job was triggered with the land checks (-l) flag. If you did not specify this flag yourself, you are likely enrolled in the land checks rollout. This means that your change will be merged once all checks on your PR and the land checks have passed (ETA 4 Hours). If you need to coordinate lands between different changes and cannot risk a land race, please add the ciflow/trunk label to your PR and wait for signal to complete, and then land your changes in proper order. Having trunk, pull, and Lint pre-run on a PR will bypass land checks and the ETA should be immediate. If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!


github-actions bot commented Sep 2, 2022

Hey @amjames.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@facebook-github-bot facebook-github-bot deleted the gh/amjames/11/head branch September 6, 2022 14:20
facebook-github-bot pushed a commit that referenced this pull request Sep 6, 2022
…youts (#82122) (#82122)

Summary:
Pull Request resolved: #82122
Approved by: https://github.com/bhosmer

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/9b115c7bd32b4a516f253a217bc8ec47bd07c44d

Reviewed By: mehtanirav, izaitsevfb

Differential Revision: D39277567

fbshipit-source-id: 9b7ae56319bba48becf8df1eb767683b156d81a3
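
For context, here is a minimal usage sketch of what this PR enables. This is illustrative only: it assumes a PyTorch build that includes this change, and uses to_sparse_bsr, which exists independently of this PR.

import torch

# Build a small BSR (block compressed sparse row) tensor from a dense matrix.
dense = torch.eye(6)
bsr = dense.to_sparse_bsr(blocksize=(2, 2))

# With this PR, transposing the two sparse dims of a BSR tensor is supported
# and flips the layout to BSC (and vice versa), mirroring the CSR <-> CSC case.
bsc = bsr.transpose(0, 1)
print(bsc.layout)  # torch.sparse_bsc

# The PR also adds support for transposing batch dims of batched sparse
# compressed tensors; the sparse-dim case above is the simplest to show.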

// We have validated everything; early exit for equal dims (no effect)
if (dim0 == dim1) {
  return self.clone();
@cpuhrsch (Contributor) commented:
Isn't using clone here an issue? I thought transpose was a view. Shouldn't this return an alias? cc @bhosmer @amjames

# correct layout
self.assertEqual(transpose.layout, subject.layout)
# transpose must return a view
_check_transpose_view(subject, transpose)
@amjames (Collaborator, Author) replied:
@cpuhrsch re: self.clone(): why does this check not fail if the result is not a view?

@cpuhrsch (Contributor) replied:
That's a good question. Either the test does not hit that branch, or the check is too weak. I adopted it from existing view checks. Seems worth investigating.
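
To make the concern testable: a stronger aliasing check would compare storage directly rather than rely on branch coverage. A minimal sketch, using a hypothetical helper (this is not the PR's _check_transpose_view) and assuming a true sparse compressed transpose view shares its values storage with its base:

import torch

def assert_transpose_aliases(subject, transposed):
    # A true view must share the underlying values storage with its base,
    # so compare data pointers directly; a clone() lives in fresh storage
    # and fails this check even when all element values match.
    assert subject.values().data_ptr() == transposed.values().data_ptr(), \
        "transpose result does not alias the input (clone instead of a view?)"

a = torch.eye(4).to_sparse_csr()
assert_transpose_aliases(a, a.transpose(0, 1))  # CSR -> CSC view: passes
# a.transpose(0, 0) takes the dim0 == dim1 early exit; with clone() there,
# this check would fail, which is exactly the weakness discussed above.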
