Backward rnnlm #2436
Conversation
@@ -0,0 +1,137 @@
#!/bin/bash
Thanks! Please make it a soft link to tuning/ in case we later want to update it.
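The soft-link convention suggested above keeps the versioned script under tuning/ while exposing a stable, versionless entry point. A minimal sketch of the idea (the paths and script names here are illustrative, not the actual files in this PR):

```shell
# Keep the concrete, versioned recipe under tuning/ ...
mkdir -p /tmp/rnnlm_demo/tuning
cd /tmp/rnnlm_demo
printf '#!/bin/bash\necho rescoring\n' > tuning/run_back_1a.sh
chmod +x tuning/run_back_1a.sh

# ... and point a versionless symlink at it; updating the recipe later
# only means retargeting the link, callers keep invoking run_back.sh.
ln -sf tuning/run_back_1a.sh run_back.sh
./run_back.sh
```

This mirrors how Kaldi recipes typically expose a `run_*.sh` symlink to the current best `tuning/run_*_1x.sh` variant.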
# Lattice rescoring
rnnlm/lmrescore_back.sh \
  --cmd "$decode_cmd --mem 4G" \
  --weight 0.5 --max-ngram-order $ngram_order \
If you are using a weight of 0.5 for both the forward and backward passes, I'd be surprised if this weight was optimal. Because effectively the weight on the n-gram LM would be zero.
I tuned this a bit and it seems 0.5 is the best weight.
I found another small issue in the code which I should fix before merging this PR. Do you think we should just leave 0.5 as it is for now?
Did you try using 0.4 for both forward and backward? It's very surprising if it doesn't want any weight at all on the n-gram LM-- that always helps, as they are so complementary.
Yes. 0.4/0.4/0.2 (forward RNNLM / backward RNNLM / n-gram) is also worse than 0.5/0.5/0.
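The weight triples being compared above amount to a convex combination of the three LM scores, with the n-gram LM receiving whatever weight is left over. A toy sketch of that arithmetic (this is not Kaldi's actual rescoring code; the function name and example log-probabilities are illustrative):

```python
def combine_scores(logp_fwd, logp_bwd, logp_ngram, w_fwd, w_bwd):
    """Weighted combination of per-word LM log-probabilities.

    The n-gram LM implicitly gets the remaining weight, so a
    0.5/0.5 forward/backward setting leaves it with weight 0.
    """
    w_ngram = 1.0 - w_fwd - w_bwd
    return w_fwd * logp_fwd + w_bwd * logp_bwd + w_ngram * logp_ngram

# Illustrative log-probs for one word from the three models:
logp_fwd, logp_bwd, logp_ngram = -2.0, -3.0, -4.0

# 0.5/0.5/0: the n-gram score is ignored entirely.
print(combine_scores(logp_fwd, logp_bwd, logp_ngram, 0.5, 0.5))

# 0.4/0.4/0.2: the n-gram LM contributes with weight 0.2.
print(combine_scores(logp_fwd, logp_bwd, logp_ngram, 0.4, 0.4))
```

This makes concrete why the reviewer found 0.5/0.5/0 surprising: it corresponds to dropping the n-gram LM from the combination altogether.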
lattice rescoring with backward RNNLMs