improve err msg for PolynomialDecay LR scheduler (#5143)
* improve err msg for PolynomialDecay LR scheduler

* Update CHANGELOG.md

Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>
epwalsh and dirkgr authored Apr 27, 2021
1 parent 530dae4 · commit c71bb46
Showing 2 changed files with 14 additions and 0 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -37,6 +37,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

- The `GradientDescentTrainer` no longer leaves stray model checkpoints around when it runs out of patience.
- Fixed `cached_path()` for "hf://" files.
- Improved the error message for the `PolynomialDecay` LR scheduler when `num_steps_per_epoch` is missing.


## [v2.3.1](https://github.com/allenai/allennlp/releases/tag/v2.3.1) - 2021-04-20
13 changes: 13 additions & 0 deletions allennlp/training/learning_rate_schedulers/polynomial_decay.py
@@ -1,6 +1,7 @@
from overrides import overrides
import torch

from allennlp.common.checks import ConfigurationError
from allennlp.training.learning_rate_schedulers.learning_rate_scheduler import LearningRateScheduler


@@ -41,6 +42,18 @@ def __init__(
    ):
        super().__init__(optimizer, last_epoch)

        # Sanity check here.
        if num_steps_per_epoch is None:
            raise ConfigurationError(
                "'num_steps_per_epoch' is required for this LR scheduler.\n\n"
                "If you know how many batches per epoch for your training data, you can set this value "
                "directly in your config. Otherwise you'll need to use compatible settings with your data loader "
                "so that it can report an accurate number of batches per epoch. "
                "If you're using the MultiProcessDataLoader, "
                "this means you either need to set 'batches_per_epoch' "
                "or leave 'max_instances_in_memory' as None (if your entire dataset can fit into memory)."
            )

        self.power = power
        self.warmup_steps = warmup_steps
        self.total_steps = num_epochs * num_steps_per_epoch
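For context, here is a minimal sketch of how the new check surfaces at runtime. It is not part of the commit: it assumes `PolynomialDecay` is exported from `allennlp.training.learning_rate_schedulers` and accepts the `num_epochs` and `num_steps_per_epoch` constructor arguments shown in the hunk above.

```python
# Hypothetical usage sketch, not part of this commit.
import torch

from allennlp.common.checks import ConfigurationError
from allennlp.training.learning_rate_schedulers import PolynomialDecay

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

try:
    # Before this commit, an unset num_steps_per_epoch surfaced later as an
    # opaque TypeError from `num_epochs * num_steps_per_epoch`.
    PolynomialDecay(optimizer, num_epochs=3, num_steps_per_epoch=None)
except ConfigurationError as err:
    print(err)  # prints the guidance about 'batches_per_epoch', etc.
```

In a real config, the error is avoided by setting `num_steps_per_epoch` on the scheduler, or by configuring the data loader so it can report batches per epoch, as the new message suggests.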
