Prefix tuning with "gated add" for Roberta #20

Open
GeondoPark opened this issue Sep 27, 2022 · 0 comments

Hi, thanks for publishing the paper and sharing the source code.

I found that "attn_output" is computed but never used after it is defined.
Because of this, when training RoBERTa for parameter-efficient learning, the paper's version of prefix tuning with gated add does not seem to take effect.

Could you please check it out?!

if self.config.attn_mode != "none" and self.config.attn_composition == "gate_add":
    attn_output = context_layer * w_attn + cross_attn_output * w_prefix
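
For reference, a minimal standalone sketch of what I would expect the gated add to do (PyTorch, with illustrative shapes; `w_attn`/`w_prefix` here are just stand-ins for the learned gate, not the repo's actual variables):

```python
import torch

# Illustrative shapes only; in the repo these tensors come from RoBERTa
# self-attention (context_layer) and attention over the prefix (cross_attn_output).
batch, seq_len, hidden = 2, 8, 16
context_layer = torch.randn(batch, seq_len, hidden)
cross_attn_output = torch.randn(batch, seq_len, hidden)

# Hypothetical gate: w_prefix in (0, 1), w_attn its complement.
w_prefix = torch.sigmoid(torch.randn(batch, seq_len, 1))
w_attn = 1.0 - w_prefix

# Gated add of the two attention branches.
attn_output = context_layer * w_attn + cross_attn_output * w_prefix

# The combined tensor has to replace context_layer (or otherwise be passed on),
# since without this the prefix branch never affects the layer output.
context_layer = attn_output
```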

Thanks!

GeondoPark changed the title from "Hi, thanks for sharing the source code." to "Prefix tuning with "gated add" for Roberta" on Sep 27, 2022