fix gru, rnn, lstm test cases to match the specification and add some cases #920
Conversation
# Conflicts:
#	onnx/backend/test/case/node/lstm.py
# Conflicts:
#	onnx/backend/test/case/node/gru.py
#	onnx/backend/test/case/node/lstm.py
#	onnx/backend/test/case/node/rnn.py
#	onnx/backend/test/data/node/test_gru_defaults/model.onnx
#	onnx/backend/test/data/node/test_gru_defaults/test_data_set_0/output_0.pb
#	onnx/backend/test/data/node/test_gru_with_initial_bias/model.onnx
#	onnx/backend/test/data/node/test_gru_with_initial_bias/test_data_set_0/output_0.pb
#	onnx/backend/test/data/node/test_lstm_defaults/model.onnx
#	onnx/backend/test/data/node/test_lstm_defaults/test_data_set_0/output_0.pb
#	onnx/backend/test/data/node/test_lstm_with_initial_bias/model.onnx
#	onnx/backend/test/data/node/test_lstm_with_initial_bias/test_data_set_0/output_0.pb
#	onnx/backend/test/data/node/test_lstm_with_peepholes/model.onnx
#	onnx/backend/test/data/node/test_lstm_with_peepholes/test_data_set_0/output_0.pb
#	onnx/backend/test/data/node/test_simple_rnn_defaults/model.onnx
#	onnx/backend/test/data/node/test_simple_rnn_defaults/test_data_set_0/output_0.pb
#	onnx/backend/test/data/node/test_simple_rnn_with_initial_bias/model.onnx
#	onnx/backend/test/data/node/test_simple_rnn_with_initial_bias/test_data_set_0/output_0.pb
✅ Build onnx 0.3.3299 completed (commit f7ad67e7e7 by @fumihwh)
✅ Build onnx 0.3.3301 completed (commit c0ee7c17b9 by @fumihwh)
# Conflicts:
#	onnx/backend/test/case/node/lstm.py
#	onnx/backend/test/data/node/test_lstm_defaults/model.onnx
#	onnx/backend/test/data/node/test_lstm_defaults/test_data_set_0/output_0.pb
#	onnx/backend/test/data/node/test_lstm_with_initial_bias/model.onnx
#	onnx/backend/test/data/node/test_lstm_with_initial_bias/test_data_set_0/output_0.pb
❌ Build onnx 0.3.3393 failed (commit 88f45dc176 by @fumihwh)
onnx/backend/test/case/node/rnn.py
Outdated
H = self.f(np.dot(self.X, np.transpose(self.W)) + np.dot(self.H_0, self.R) + w_b + r_b)
return np.reshape(H, (self.num_directions, self.batch_size, self.hidden_size))
h_list = []
Instead of having this step function/implementation, shall we set the expected output explicitly in the test case, please? Otherwise, this case appears to be testing "test code" (this step function).
@linkerzhang
I didn't follow you. Basically, we need this NumPy-implemented function to calculate the expected output. Maybe the function name step is what reads badly?
haha, I thought this test case was testing the "step" function. I understand now that it's used to generate the data. Ignore this.
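The hunk under discussion computes the RNN output as H = f(X·Wᵀ + H₀·R + w_b + r_b) and reshapes it for the ONNX output layout. A self-contained sketch of that reference computation (the function and argument names are illustrative, not the PR's exact helper; f defaults to tanh, as in the ONNX RNN spec):

```python
import numpy as np

def rnn_step(X, W, R, H_0, w_b, r_b, num_directions, batch_size, hidden_size):
    """One step of the simple-RNN reference computation discussed above.

    Computes H = tanh(X . W^T + H_0 . R + w_b + r_b), then reshapes to
    the ONNX layout [num_directions, batch_size, hidden_size].
    Hypothetical sketch of the test-data generator, not production code.
    """
    H = np.tanh(np.dot(X, np.transpose(W)) + np.dot(H_0, R) + w_b + r_b)
    return np.reshape(H, (num_directions, batch_size, hidden_size))
```

The point raised in the thread is that this code exists only to generate the expected data checked into the test, not as something the test itself exercises.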
self.hidden_size = params[R].shape[-1]
Same as the other comment: is there any specific reason why we are removing these parameters from the class and relying on local variables instead? This may put unnecessary limitations on reusing the values elsewhere.
b = params[B] if B in params else np.zeros(2 * number_of_gates * self.hidden_size)
Is there any specific reason why we are removing these parameters from the class and relying on local variables instead? This may put unnecessary limitations on reusing the values elsewhere.
I don't think we need to reuse hidden_size. Everything that needs it (initializing b, p, h, c) is done when the class is initialized, and the version before last didn't keep it at all. The author of the last version only stored hidden_size on the class so that step could use it to reshape the result.
Thanks, sounds good.
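The two hunks above infer hidden_size from the last axis of R and default the bias B to zeros when it is absent. A minimal sketch of that derivation (the function name and standalone signature are illustrative; in the PR this logic lives inside the helper class constructor):

```python
import numpy as np

def derive_params(R, B=None, number_of_gates=1):
    """Infer hidden_size and default the bias, as in the hunks above.

    R has shape [num_directions, hidden_size, hidden_size], so its last
    axis yields hidden_size. The ONNX bias tensor B concatenates the W
    and R biases, hence the factor of 2 in the default. Sketch only.
    """
    hidden_size = R.shape[-1]
    if B is None:
        # Default bias: zeros for both the input-weight and recurrence biases.
        B = np.zeros(2 * number_of_gates * hidden_size, dtype=np.float32)
    return hidden_size, B
```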
onnx/backend/test/case/node/lstm.py
Outdated
lstm = LSTM_Helper(X=x, W=w, R=r, B=b)
output = lstm.step()
output_precomputed = np.array([[[[-0.09183919, -0.20181324, 0.04475278, -0.12022446,
Where are these numbers coming from, and why do we need them here instead of relying on the reference implementation? Using precomputed numbers may create unnecessary complications going forward. For instance, if a particular framework generates numbers that do not match these in the future, how do we resolve the conflict?
They come from PyTorch. Yes, that makes sense. I will remove this.
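The resolution above — dropping the PyTorch-precomputed array in favour of the reference implementation — can be illustrated with a self-contained single-step LSTM forward pass that *generates* the expected output instead of hard-coding it. This is a hedged sketch, not the PR's LSTM_Helper: initial hidden/cell states and peepholes are omitted, and the gate order follows the ONNX LSTM spec (i, o, f, c):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_reference(X, W, b, hidden_size):
    """Single-step LSTM forward pass used to generate expected outputs.

    X: [batch_size, input_size]; W: [4*hidden_size, input_size];
    b: concatenation of W and R biases, length 8*hidden_size.
    H_0 and C_0 are assumed zero, so the recurrence term drops out.
    """
    # Pre-activation for all four gates; add both bias halves.
    gates = np.dot(X, W.T) + b[:4 * hidden_size] + b[4 * hidden_size:]
    i, o, f, c = np.split(gates, 4, axis=-1)  # ONNX gate order: i, o, f, c
    i, o, f = sigmoid(i), sigmoid(o), sigmoid(f)
    C = i * np.tanh(c)       # f * C_0 vanishes since C_0 is assumed zero
    H = o * np.tanh(C)
    return H
```

Regenerating expected values from such a function keeps the test data tied to the spec rather than to any one framework's numerics.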
# Conflicts:
#	onnx/backend/test/case/node/gru.py
#	onnx/backend/test/case/node/lstm.py
#	onnx/backend/test/case/node/rnn.py
❌ Build onnx 0.3.3468 failed (commit a88545a49f by @fumihwh)
# Conflicts:
#	onnx/backend/test/case/node/gru.py
#	onnx/backend/test/case/node/lstm.py
#	onnx/backend/test/data/node/test_gru_defaults/model.onnx
#	onnx/backend/test/data/node/test_gru_with_initial_bias/model.onnx
#	onnx/backend/test/data/node/test_lstm_defaults/model.onnx
#	onnx/backend/test/data/node/test_lstm_with_initial_bias/model.onnx
#	onnx/backend/test/data/node/test_lstm_with_peepholes/model.onnx
#	onnx/backend/test/data/node/test_simple_rnn_defaults/model.onnx
#	onnx/backend/test/data/node/test_simple_rnn_with_initial_bias/model.onnx
❌ Build onnx 0.3.3536 failed (commit ec730aaec7 by @fumihwh)
❌ Build onnx 0.3.3538 failed (commit 8953279c84 by @fumihwh)
Looks good to me in general.
…h the specification and add some cases (onnx/onnx#920) onnx/onnx@d2a46da
… cases (onnx#920)
* fix lstm test cases to match the specification and add precomputed case
* add rnn and update test cases
* add and update gru test cases
* update doc
* remove lstm precomputed test case
* update test cases
* update doc and bug fix
* fix mypy
Inputs in lstm are cast to np.float32; the default np.float64 causes inconsistency of the input's dtype. The output of lstm, rnn and gru has shape [seq_length, num_directions, batch_size, hidden_size].
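The closing notes mention the output shape [seq_length, num_directions, batch_size, hidden_size] and the float32 cast. A sketch of how per-timestep hidden states (the h_list built in the diff above) might be stacked into that layout — the helper name is illustrative:

```python
import numpy as np

def stack_outputs(h_list, num_directions, batch_size, hidden_size):
    """Stack per-timestep hidden states into the ONNX Y output shape
    [seq_length, num_directions, batch_size, hidden_size].

    Each entry of h_list is one timestep's hidden state of shape
    [batch_size, hidden_size]. Cast to float32 to keep the dtype
    consistent with the inputs, as the notes above describe.
    """
    concatenated = np.concatenate(h_list)  # (seq_length * batch_size, hidden_size)
    Y = concatenated.reshape(len(h_list), num_directions, batch_size, hidden_size)
    return Y.astype(np.float32)
```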