[Feature]: Can it load the model locally? #2669
https://towhee.io/sentence-embedding/transformers
Hello, there is a question that may be related to this issue: how can I configure this model to load from my local folder (not from a remote URL)?
I used this, but got an error (see the error logs). I looked at your clip.py, and I'm not sure whether the fault is mine or yours.
So, what's next?
@HarwordLiu, I've updated the operator; please remove your cached operator and try it again.
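For reference, a minimal sketch of clearing the operator cache. The `~/.towhee` location and its `operators` subdirectory are assumptions about Towhee's cache layout, not something this thread confirms; verify the path on your installation first:

```python
import shutil
from pathlib import Path

# Assumption: towhee stores downloaded hub operators under ~/.towhee/operators
# (hypothetical layout; check your installation before deleting anything).
op_cache = Path.home() / ".towhee" / "operators"
if op_cache.exists():
    shutil.rmtree(op_cache)  # the next pipeline run re-downloads the updated operators
```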
I guess my local model cache's root path may be wrong?
@HarwordLiu, it seems you want to use your cached weights; in that case you can use clip directly. The `checkpoint_path` is only needed when you have a saved, customized CLIP model:

```python
from towhee import pipe, ops

img_pipe = (
    pipe.input('url')
        .map('url', 'img', ops.image_decode.cv2_rgb())
        .map('img', 'vec', ops.image_text_embedding.clip(
            model_name='clip_vit_large_patch14_336',
            modality='image'))
        .output('img', 'vec')
)
```

Save the model:

```python
from transformers import CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14-336")
model.save_pretrained('models--openai--clip-vit-large-patch14-336')
```

Then load from it:

```python
from towhee import pipe, ops

img_pipe = (
    pipe.input('url')
        .map('url', 'img', ops.image_decode.cv2_rgb())
        .map('img', 'vec', ops.image_text_embedding.clip(
            model_name='clip_vit_large_patch14_336',
            checkpoint_path='./models--openai--clip-vit-large-patch14-336',
            modality='image'))
        .output('img', 'vec')
)
```
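For completeness, a hedged usage sketch of invoking the pipeline above (the image URL is a placeholder, and the `.get()` accessor assumes Towhee's current pipeline API):

```python
# Placeholder URL; any reachable RGB image works.
res = img_pipe('https://example.com/cat.jpg')
img, vec = res.get()  # decoded image and its CLIP embedding vector
```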
Thanks for all your help.
@wxywb Why does it still download files from Hugging Face after successfully loading the model?
@HarwordLiu, this is because of the tokenizer (that's why we need `model_name`). A saved model folder usually does not contain the tokenizer, so we provide `model_name` to specify the tokenizer explicitly. If you have a folder that contains the tokenizer (something like vocab.json), you can fill `model_name` with that path, and the tokenizer and processor will be loaded from it.
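To illustrate, a minimal sketch (standard transformers API; the local directory name is a hypothetical choice) of saving the processor/tokenizer files alongside the weights, so that `model_name` can point at the same local folder:

```python
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14-336")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

local_dir = "./clip-vit-large-patch14-336-local"  # hypothetical path
model.save_pretrained(local_dir)      # weights + config.json
processor.save_pretrained(local_dir)  # vocab.json, merges.txt, preprocessor_config.json, ...
```

After this, filling `model_name` with `local_dir` should let both the tokenizer and the processor load without a network round trip.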
Amazing, I can't believe it, although it worked. Filling `model_name` with a file path looks like such a hack.
This hack is beyond the initial design, since Hugging Face transformers will connect to the server even though it has cached the tokenizer.
That explains it. No wonder it still connected to Hugging Face to download even after I replaced the cached files on the server (~/.cache).
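As a hedged aside (this is standard transformers/huggingface_hub behavior, not something this thread confirms for the Towhee operator): offline mode can suppress those server checks when everything is already cached:

```python
import os

# Force transformers / huggingface_hub to use only the local cache (no network calls).
# Must be set before transformers is imported or the pipeline is built.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"
```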
```python
.map('img', 'embedding', ops.image_embedding.timm(model_name='resnet50'))
```
@ycqu, network issues are beyond the framework's control.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. Rotten issues close after 30 days of inactivity; stale issues and pull requests are closed after 7 days of inactivity. Reopen the issue with
Is there an existing issue for this?
Is your feature request related to a problem? Please describe.
Our server has trouble accessing the Hugging Face website. Is it therefore possible to download the model to the local machine and then continue working by having Towhee load the model locally? If so, how? Thanks.
Describe the solution you'd like.
No response
Describe an alternate solution.
No response
Anything else? (Additional Context)
No response