Hello, the paper only reports experiments on BERT-base. Have you run experiments on large language models, such as llama or chatglm?

Our initial work focused primarily on the encoder architecture, which is why BERT-large was the largest language model we experimented with. In subsequent research we shifted to the decoder architecture and achieved significant protective effects on the llama2-7B model. That work was submitted to ARR in February and is currently under anonymous review, so the new paper and code will be disclosed after the review results are published. Once the results are available, I will update the repository with the relevant links.