LlamaTokenizer: Slow implementation opts for whitespace-lead token (different from fast) #24569
Comments
Thanks for reporting, will have a look.
Hi @ArthurZucker! Are you currently working on this? If not, I think I could fix it pretty quickly :)
Sure! Feel free to take it! 😉 I'll have a look soon otherwise.
@ArthurZucker @lbeurerkellner I have done some debugging and have a few observations. Firstly, I checked other tokenizers that use
So it seems like it was a deliberate decision to split special tokens like this?
Actually, this is fixed; the output is now correct when using slow = AutoTokenizer.from_pretrained(model, use_fast=False, legacy=False).
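The comment above reports that passing legacy=False to the slow tokenizer resolves the discrepancy. Below is a minimal sketch of that usage, assuming the huggyllama/llama-7b checkpoint named in the report and a hypothetical input containing the piece "uns" mentioned later in the issue:

```python
from transformers import AutoTokenizer

model = "huggyllama/llama-7b"  # checkpoint from the report

# legacy=False opts the slow (SentencePiece-based) tokenizer into the corrected behaviour
slow = AutoTokenizer.from_pretrained(model, use_fast=False, legacy=False)
fast = AutoTokenizer.from_pretrained(model, use_fast=True)

text = "<s>uns"  # hypothetical input; the exact reproduction string is not preserved in this report

print("slow (legacy=False):", slow.tokenize(text))
print("fast:               ", fast.tokenize(text))
# With the fix, the two tokenizations are expected to agree
# (no extra whitespace-led token from the slow tokenizer).
```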
System Info
transformers version: 4.30.2

Who can help?
@ArthurZucker @youn
Information
Reproduction
Comparing slow and fast LlamaTokenizer instances with huggyllama/llama-7b.
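The original reproduction script is not preserved in this report, so the following is a minimal sketch of the comparison, assuming the huggyllama/llama-7b checkpoint and a hypothetical input where the piece "uns" follows a special token (the report mentions both "uns" and special-token splitting):

```python
from transformers import AutoTokenizer

model = "huggyllama/llama-7b"

fast = AutoTokenizer.from_pretrained(model, use_fast=True)
slow = AutoTokenizer.from_pretrained(model, use_fast=False)

# Hypothetical input: placing "uns" right after a special token is where the
# slow tokenizer reportedly opts for a whitespace-led token.
text = "<s>uns"

print("fast tokens:", fast.tokenize(text))
print("slow tokens:", slow.tokenize(text))

# Round-trip check: decoding the encoded ids should reproduce the original text,
# but the slow tokenizer reportedly re-introduces a whitespace before "uns".
ids_fast = fast.encode(text, add_special_tokens=False)
ids_slow = slow.encode(text, add_special_tokens=False)
print("fast round-trip:", fast.decode(ids_fast))
print("slow round-trip:", slow.decode(ids_slow))
```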
Expected behavior
It looks like the slow LlamaTokenizer wrongly tokenises uns. I would not expect the additional whitespace when round-tripping or when tokenising in the first place. Thanks a lot in advance.