
Fix RLOO checkpointing #2114

Open · wants to merge 5 commits into main

Conversation

bartoszzuk
Contributor

This PR fixes RLOO checkpointing (in the same way as the recent PPOv2 fix in PR #2080).

This is needed after changes to the _save_checkpoint method introduced in transformers v4.45.0.dev. Specifically, we get KeyError: 'TrainerControl' while saving the trainer state (here is the exact line causing the issue). By passing stateful_callbacks to OnlineTrainerState explicitly, the TrainerControl object is stored in the state and can be properly accessed in _save_checkpoint.
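For context, a minimal standalone sketch (not the PR diff itself) of why passing stateful_callbacks matters, assuming transformers >= 4.45 and that OnlineTrainerState is importable from trl.trainer.utils as the RLOO trainer does:

```python
from transformers import TrainerCallback, TrainerControl
from transformers.trainer_callback import ExportableState
from trl.trainer.utils import OnlineTrainerState

# Stand-ins for the trainer's registered callbacks and its control object.
callbacks: list[TrainerCallback] = []
control = TrainerControl()

# Without stateful_callbacks, the state ends up with an empty dict, which is
# what later raises KeyError: 'TrainerControl' in Trainer._save_checkpoint.
print(OnlineTrainerState().stateful_callbacks)  # {}

# Mirroring the PPOv2 fix (PR #2080): pass every callback that can export its
# state, plus the control object; __post_init__ stores them keyed by class name.
state = OnlineTrainerState(
    stateful_callbacks=[cb for cb in callbacks + [control] if isinstance(cb, ExportableState)],
)
print(list(state.stateful_callbacks))  # ['TrainerControl']
```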

@qgallouedec
Member

Nice, thanks @bartoszzuk. Without your fix, do you get an error when running RLOO?

@qgallouedec
Member

By the way, make sure to run make precommit to keep the CI happy.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@bartoszzuk
Contributor Author

@qgallouedec Yes, when using transformers v4.45.0.dev I'm getting:

```
...
File "/usr/local/lib/python3.10/dist-packages/trl/trainer/rloo_trainer.py", line 449, in train
  self._save_checkpoint(model, trial=None, metrics=metrics)
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3016, in _save_checkpoint
  if isinstance(self.state.stateful_callbacks[cb_name], list):
KeyError: 'TrainerControl'
```

This happens because self.state.stateful_callbacks is an empty dict.

Sorry, I totally forgot about make precommit, I'll fix it ASAP.

@qgallouedec
Member

Thanks. I'm not sure I understand why this failure mode doesn't break our CI.

@sahandrez

sahandrez commented Sep 25, 2024

I am not sure if this is related, but I have observed strange behaviour in RLOO checkpointing. For example, I set it to checkpoint every 500 steps and it follows that for some time, but after a while it starts generating checkpoints every 2 steps. Is this intended?

Member

@lewtun left a comment


Hi @bartoszzuk, thanks for the fix! Would you mind writing a regression test in test_rloo_trainer.py that fails on main but passes on your branch? That would help ensure future code changes don't accidentally reintroduce the bug.
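For illustration only, a minimal sketch of what such a test might check. It exercises the state construction directly rather than running a full RLOOTrainer (which needs policy, reference, and reward models plus a dataset), and the test name is made up:

```python
from transformers import TrainerControl
from trl.trainer.utils import OnlineTrainerState


def test_trainer_control_is_tracked_in_state():
    # Passing the control object should register it under its class name,
    # which is the key Trainer._save_checkpoint looks up (the missing key
    # behind the KeyError reported in this PR).
    control = TrainerControl()
    state = OnlineTrainerState(stateful_callbacks=[control])
    assert "TrainerControl" in state.stateful_callbacks
```

An end-to-end variant could instead reuse the small test models already set up in test_rloo_trainer.py, train for a few steps with a small save_steps, and assert that checkpointing completes without raising.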
