Improve performance of nvtext::tokenize_with_vocabulary for long strings #14336
Conversation
Performance numbers for long strings from the included benchmark
All looks good. I have one suggested rewrite for some of the counting math.
Thanks for the comment and explanation. +1.
/merge
Fixes a bug introduced in #14336 when simplifying the token-counting logic per this discussion: #14336 (comment). The simplification caused an error that was found when running the nvtext benchmarks. The appropriate gtest has been updated to cover this case.

Authors:
- David Wendt (https://github.com/davidwendt)

Approvers:
- Bradley Dice (https://github.com/bdice)
- Karthikeyan (https://github.com/karthikeyann)

URL: #14393
Description
Improves nvtext::tokenize_with_vocabulary performance for long strings. Also adds additional tests and an nvbench benchmark.