
Missing model caching in perplexity metrics #402

Open
daskol opened this issue Jan 11, 2023 · 1 comment

Comments

@daskol
Contributor

daskol commented Jan 11, 2023

The "official" implementation of the perplexity metric does not cache the language model [1]. It seems that the metric instance should fetch the model and prepare it for further use in `_download_and_prepare`. There should also be a clear API for caching and for resetting the cache. Finally, it is unclear how to configure a metric at load time: only `config_name` is honored, while any other kwargs are silently ignored.
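A minimal sketch of the kind of caching the report asks for. The names `get_cached_model` and `reset_model_cache` are hypothetical, not part of the library's API; in the real metric the `loader` argument would be the actual model-loading call (e.g. `AutoModelForCausalLM.from_pretrained`), invoked from `_download_and_prepare` instead of on every `_compute` call.

```python
# Hypothetical module-level cache so that repeated metric computations
# reuse an already-loaded model instead of reloading it on every call.
_MODEL_CACHE: dict[str, object] = {}


def get_cached_model(model_id: str, loader):
    """Return the model for `model_id`, calling `loader` only on a cache miss.

    `loader` stands in for the real loading routine, e.g.
    AutoModelForCausalLM.from_pretrained in the perplexity metric.
    """
    if model_id not in _MODEL_CACHE:
        _MODEL_CACHE[model_id] = loader(model_id)
    return _MODEL_CACHE[model_id]


def reset_model_cache() -> None:
    """Explicit cache-reset entry point, as the issue suggests exposing."""
    _MODEL_CACHE.clear()
```

With this shape, computing perplexity twice against the same `model_id` would load the model once, and callers who need fresh weights can call the reset function explicitly.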

@KangkangStu

> The "official" implementation of the perplexity metric does not cache the language model [1]. It seems the metric instance should fetch the model and prepare it in `_download_and_prepare`. There should be a clear API for caching and cache resetting. It is also unclear how to configure a metric at load time (only `config_name` is used; kwargs are ignored).

+1. Is there any update on this?
