remove RTTI check for TensorImpl shadow copy (pytorch#22773)
Summary:
We introduced RTTI in a recent change: pytorch#21613

The internal mobile build doesn't enable '-frtti' yet, so this diff replaces the
RTTI check with an alternative approach.

According to dzhulgakov, we can compare the two tensors' type_id directly in most
cases. This is stricter than comparing the TensorImpl subclass type (the
TensorImpl -> type_id mapping is 1-to-n), but it is more appropriate for this use case.

The only two cases where we relax the direct type comparison (for legacy reasons) are:
1. CPUTensor <-> CUDATensor;
2. SparseCPUTensor <-> SparseCUDATensor.
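The compatibility rule described above can be sketched as follows. This is a minimal Python sketch, not the actual C++ implementation; string tags stand in for the real `TensorTypeId` values, and the HIP ids are included in the same legacy groups because the C++ check added in `c10/core/TensorImpl.h` groups them with CPU/CUDA.

```python
# Minimal sketch of the shallow-copy compatibility rule. String tags stand in
# for the real TensorTypeId values; this is illustrative, not the C++ code.

# The two legacy equivalence groups (HIP ids are grouped with CPU/CUDA,
# mirroring the check added in c10/core/TensorImpl.h).
COMPATIBLE_GROUPS = [
    {"CPUTensorId", "CUDATensorId", "HIPTensorId"},
    {"SparseCPUTensorId", "SparseCUDATensorId", "SparseHIPTensorId"},
]

def has_compatible_shallow_copy_type(self_id, from_id):
    """True if a TensorImpl tagged `from_id` may be shallow-copied into one tagged `self_id`."""
    if self_id == from_id:
        return True
    return any(self_id in group and from_id in group for group in COMPATIBLE_GROUPS)
```

Any exact match is compatible (which also covers types such as quantized or opaque tensors copying to their own kind), while mixing a sparse and a dense id is rejected.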
Pull Request resolved: pytorch#22773

Differential Revision: D16277696

Pulled By: ljk53

fbshipit-source-id: 043e264fbacc37b7a11af2046983c70ddb62a599
ljk53 authored and facebook-github-bot committed Jul 16, 2019
1 parent c5afdd0 commit 3b1c399
Showing 11 changed files with 35 additions and 18 deletions.
2 changes: 1 addition & 1 deletion aten/src/ATen/OpaqueTensorImpl.h
@@ -104,7 +104,7 @@ struct CAFFE2_API OpaqueTensorImpl : public TensorImpl {
* see NOTE [ TensorImpl Shallow-Copying ].
*/
void shallow_copy_from(const c10::intrusive_ptr<TensorImpl>& impl) override {
- AT_ASSERT(typeid(*(impl.get())) == typeid(OpaqueTensorImpl<OpaqueHandle>));
+ AT_ASSERT(has_compatible_shallow_copy_type(impl->type_id()));
auto opaque_impl = static_cast<const OpaqueTensorImpl<OpaqueHandle>*>(impl.get());
copy_tensor_metadata(
/*src_impl=*/opaque_impl,
2 changes: 1 addition & 1 deletion aten/src/ATen/SparseTensorImpl.h
@@ -209,7 +209,7 @@ struct CAFFE2_API SparseTensorImpl : public TensorImpl {
* see NOTE [ TensorImpl Shallow-Copying ].
*/
void shallow_copy_from(const c10::intrusive_ptr<TensorImpl>& impl) override {
- AT_ASSERT(typeid(*(impl.get())) == typeid(SparseTensorImpl));
+ AT_ASSERT(has_compatible_shallow_copy_type(impl->type_id()));
auto sparse_impl = static_cast<const SparseTensorImpl*>(impl.get());
copy_tensor_metadata(
/*src_impl=*/sparse_impl,
8 changes: 5 additions & 3 deletions aten/src/ATen/native/TypeProperties.cpp
@@ -38,9 +38,11 @@ bool is_quantized(const Tensor& self) {
return self.is_quantized();
}

- // True if `self` has the same derived type of TensorImpl as `other`.
- bool _has_same_tensorimpl_type(const Tensor& self, const Tensor& other) {
-   return typeid(*(self.unsafeGetTensorImpl())) == typeid(*(other.unsafeGetTensorImpl()));
+ // True if `self` and `from` have compatible tensor type so that `from`'s
+ // TensorImpl can be copied to `self`.
+ bool _has_compatible_shallow_copy_type(const Tensor& self, const Tensor& from) {
+   return self.unsafeGetTensorImpl()->has_compatible_shallow_copy_type(
+       from.type_id());
}

Tensor type_as(const Tensor& self, const Tensor& other) {
2 changes: 1 addition & 1 deletion aten/src/ATen/native/native_functions.yaml
@@ -2131,7 +2131,7 @@
- func: type_as(Tensor self, Tensor other) -> Tensor
variants: method

- - func: _has_same_tensorimpl_type(Tensor self, Tensor other) -> bool
+ - func: _has_compatible_shallow_copy_type(Tensor self, Tensor from) -> bool
variants: function

- func: _unique(Tensor self, bool sorted=True, bool return_inverse=False) -> (Tensor, Tensor)
2 changes: 1 addition & 1 deletion aten/src/ATen/quantized/QTensorImpl.h
@@ -57,7 +57,7 @@ struct CAFFE2_API QTensorImpl : public c10::TensorImpl {
* see NOTE [ TensorImpl Shallow-Copying ].
*/
void shallow_copy_from(const c10::intrusive_ptr<TensorImpl>& impl) override {
- AT_ASSERT(typeid(*(impl.get())) == typeid(QTensorImpl));
+ AT_ASSERT(has_compatible_shallow_copy_type(impl->type_id()));
auto q_impl = static_cast<const QTensorImpl*>(impl.get());
copy_tensor_metadata(
/*src_impl=*/q_impl,
15 changes: 15 additions & 0 deletions c10/core/TensorImpl.h
@@ -914,6 +914,21 @@ struct C10_API TensorImpl : public c10::intrusive_ptr_target {
// because `shallow_copy_from()` is used for implementing functions such as `var.set_data(tensor)`, which
// changes `var`'s tensor metadata and expects its `allow_tensor_metadata_change_` to be ignored.

+ /**
+  * One TensorImpl can be copied to another TensorImpl if they have the same
+  * type_id. The only two special cases (for legacy reason) are:
+  * CPUTensorId is compatible with CUDATensorId and SparseCPUTensorId is
+  * compatible with SparseCUDATensorId.
+  */
+ inline bool has_compatible_shallow_copy_type(TensorTypeId from) {
+   TensorTypeId self = type_id();
+   return (self == from) ||
+     ((self == CPUTensorId() || self == CUDATensorId() || self == HIPTensorId()) &&
+      (from == CPUTensorId() || from == CUDATensorId() || from == HIPTensorId())) ||
+     ((self == SparseCPUTensorId() || self == SparseCUDATensorId() || self == SparseHIPTensorId()) &&
+      (from == SparseCPUTensorId() || from == SparseCUDATensorId() || from == SparseHIPTensorId()));
+ }
+
/**
* Return a TensorImpl that is a shallow-copy of this TensorImpl.
*
2 changes: 1 addition & 1 deletion test/test_autograd.py
@@ -3167,7 +3167,7 @@ def test_set_data_tensorimpl_type(self):
# of type `SparseTensorImpl`.
x = torch.randn(1, 2)
x_s = torch.sparse_coo_tensor(torch.zeros([1, 1]), torch.ones([1]))
- with self.assertRaisesRegex(RuntimeError, 'different types of TensorImpl'):
+ with self.assertRaisesRegex(RuntimeError, 'incompatible tensor type'):
x.data = x_s

def test_set_data_preserve_pyobj(self):
2 changes: 1 addition & 1 deletion test/test_mkldnn.py
@@ -372,7 +372,7 @@ def test_set_data_tensorimpl_type(self):
# of type `OpaqueTensorImpl<IDeepTensorWrapperPtr>`.
x = torch.randn((1, 2), dtype=torch.float, device=torch.device('cpu'))
x_mkldnn = x.to_mkldnn()
- with self.assertRaisesRegex(RuntimeError, 'different types of TensorImpl'):
+ with self.assertRaisesRegex(RuntimeError, 'incompatible tensor type'):
x.data = x_mkldnn

def test_empty(self):
8 changes: 4 additions & 4 deletions torch/csrc/autograd/variable.cpp
@@ -86,11 +86,11 @@ void Variable::backward(

void Variable::set_data(const at::Tensor &new_data) const {
// `var.set_data(new_data)` shallow-copies all non-autograd TensorImpl fields
- // from `new_data` to `var`. It requires that `new_data` has the same derived
- // type of TensorImpl as `var`.
+ // from `new_data` to `var`. It requires that `new_data` and `var` have compatible
+ // tensor type.
TORCH_CHECK(
-     _has_same_tensorimpl_type(*this, new_data),
-     "Attempted to call `variable.set_data(tensor)`, but `variable` and `tensor` have different types of TensorImpl.");
+     _has_compatible_shallow_copy_type(*this, new_data),
+     "Attempted to call `variable.set_data(tensor)`, but `variable` and `tensor` have incompatible tensor type.");

// Resets gradient accumulator if metadata is out of date
Variable::AutogradMeta* autograd_meta = get_autograd_meta();
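The guard in `Variable::set_data` above can be illustrated with a small Python mock. All names here are hypothetical stand-ins, not real torch/ATen types; `compatible` plays the role of `_has_compatible_shallow_copy_type`.

```python
# Illustrative mock of the set_data() guard; FakeTensor and set_data are
# hypothetical stand-ins, not the real torch/ATen API.
class FakeTensor:
    def __init__(self, type_id, payload):
        self.type_id = type_id
        self.payload = payload

def set_data(var, new_data, compatible):
    # Mirrors the TORCH_CHECK: refuse shallow-copying between incompatible
    # tensor types, raising the same message the tests match against.
    if not compatible(var.type_id, new_data.type_id):
        raise RuntimeError(
            "Attempted to call `variable.set_data(tensor)`, but `variable` "
            "and `tensor` have incompatible tensor type.")
    # Stands in for copying the non-autograd TensorImpl fields.
    var.payload = new_data.payload
```

With plain equality as the compatibility predicate, assigning a sparse tensor's data to a dense variable raises, which is the behavior the `test_set_data_tensorimpl_type` tests assert via the 'incompatible tensor type' regex.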
6 changes: 3 additions & 3 deletions torch/csrc/autograd/variable.h
@@ -239,8 +239,8 @@ struct TORCH_API Variable : public at::Tensor {
bool create_graph) const;

/// Sets the tensor data held by this `Variable` to be the same as `new_data`.
- /// It requires that `new_data` has the same derived type of TensorImpl as
- /// this `Variable`, by checking `_has_same_tensorimpl_type(this, new_data)`.
+ /// It requires that `new_data` and `Variable` have compatible tensor type, by
+ /// checking `_has_compatible_shallow_copy_type(this, new_data)`.
void set_data(const at::Tensor &new_data) const;

/// Set the gradient edge -- i.e. `grad_fn` and `input_nr` -- of the
@@ -485,7 +485,7 @@ struct TORCH_API Variable::DifferentiableViewMeta : public Variable::AutogradMeta
/// are a lot of call sites to these factory functions that need to change the
/// variable's size or storage afterwards, and they don't expect the original
/// tensor (where the variable is created from) to be updated. Setting
- /// `allow_tensor_metadata_change_` to false by default would unnecessarily
+ /// `allow_tensor_metadata_change_` to false by default would unnecessarily
/// prevent those changes from happening and is undesirable.

// See NOTE [ Autograd View Variables ] for details.
4 changes: 2 additions & 2 deletions torch/nn/modules/module.py
@@ -200,8 +200,8 @@ def _apply(self, fn):
module._apply(fn)

def compute_should_use_set_data(tensor, tensor_applied):
-     if torch._has_same_tensorimpl_type(tensor, tensor_applied):
-         # If the new tensor has the same TensorImpl type as the existing tensor,
+     if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):
+         # If the new tensor has compatible tensor type as the existing tensor,
# the current behavior is to change the tensor in-place using `.data =`,
# and the future behavior is to overwrite the existing tensor. However,
# changing the current behavior is a BC-breaking change, and we want it
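The branch that `compute_should_use_set_data` feeds in `Module._apply` can be sketched as follows. This is a simplified illustration with hypothetical stand-in types; the real code operates on `torch.Tensor` parameters and the real compatibility predicate is `torch._has_compatible_shallow_copy_type`.

```python
# Simplified sketch of Module._apply's per-parameter logic; FakeParam and
# apply_fn_to_param are hypothetical names for illustration only.
class FakeParam:
    def __init__(self, type_id, data):
        self.type_id = type_id
        self.data = data

def apply_fn_to_param(param, fn, compatible):
    """Apply `fn` to `param`, choosing between in-place and out-of-place update.

    `compatible` stands in for torch._has_compatible_shallow_copy_type.
    """
    applied = fn(param)
    if compatible(param.type_id, applied.type_id):
        # Compatible tensor types: change the tensor in place via `.data =`
        # (the current, BC-preserving behavior described in the comment above).
        param.data = applied.data
        return param
    # Incompatible types: the parameter object must be replaced outright.
    return applied
```

When `fn` produces a tensor of a compatible type, the original parameter object survives (preserving references held elsewhere); otherwise a new object takes its place.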
