
LLM experiments #1

Closed
XDeepAzure opened this issue Dec 7, 2023 · 1 comment

Comments

@XDeepAzure

Hello, the paper only reports experiments on BERT-base. Have you run experiments on large language models, such as LLaMA or ChatGLM?

@flamewei123
Owner

Hello, our initial work focused on encoder-only architectures, so the largest language model we experimented with was BERT-large. In subsequent research we shifted to decoder-only architectures and also achieved strong privacy-protection results on the llama2-7B model. Since our latest work was submitted to ARR in February under anonymous review, the new paper and code can only be disclosed after the review results are published. Once the results are out, I will update this repository with the relevant links.
