Environment issues #40
Hi,

I'm trying to run your excellent code! However, after I downloaded WizardMath-7B-V1.0 from Hugging Face and ran:

python inference_llms_instruct_math_code.py --dataset_name gsm8k --finetuned_model_name WizardMath-7B-V1.0 --tensor_parallel_size 1 --weight_mask_rate 0.0

I got:

ValueError: Model architectures ['LlamaModel'] are not supported for now. Supported architectures: ['AquilaModel', 'BaiChuanForCausalLM', 'BaichuanForCausalLM', 'BloomForCausalLM', 'FalconForCausalLM', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTJForCausalLM', 'GPTNeoXForCausalLM', 'InternLMForCausalLM', 'LlamaForCausalLM', 'LLaMAForCausalLM', 'MPTForCausalLM', 'OPTForCausalLM', 'QWenLMHeadModel', 'RWForCausalLM']

since the architecture of WizardMath-7B-V1.0 is 'LlamaModel'. Do you have any thoughts on this problem? I suspect it may be an issue with my environment... still, I would appreciate any useful information you could provide!

Thanks a lot for your help!

Comments

As an update, the key conflict is: INFO:root:********** Run starts. **********

Hello, maybe you can try to run the following commands step by step?

Thanks a lot for your help and your wonderful work! Your commands work well!

Glad that I can help!
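For readers who hit the same ValueError: since vLLM's supported list includes 'LlamaForCausalLM' but not 'LlamaModel', one common workaround (a sketch only, not a fix confirmed by the maintainers in this thread) is to edit the downloaded checkpoint's config.json so its "architectures" field names the causal-LM class. The function and the local path below are hypothetical illustrations:

```python
import json
from pathlib import Path

def patch_architectures(config_path: str) -> bool:
    """Rewrite 'LlamaModel' to 'LlamaForCausalLM' in a checkpoint's
    config.json so vLLM recognizes the architecture.
    Returns True if the file was modified, False if no change was needed."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    archs = config.get("architectures", [])
    if "LlamaModel" not in archs:
        return False
    # Replace only the unsupported entry; leave any other names intact.
    config["architectures"] = [
        "LlamaForCausalLM" if a == "LlamaModel" else a for a in archs
    ]
    path.write_text(json.dumps(config, indent=2))
    return True

# Hypothetical usage, assuming the model was saved locally:
# patch_architectures("WizardMath-7B-V1.0/config.json")
```

Re-downloading a checkpoint that was exported with the full causal-LM head would achieve the same result without editing the file by hand.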