
v1.2.0

@AkshitaB released this 29 Oct 21:37

What's new

Changed ⚠️

  • Enforced stricter typing requirements around the use of Optional[T] types.
  • Changed the behavior of Lazy types in from_params methods. Previously, if you defined a Lazy parameter like
    foo: Lazy[Foo] = None in a custom from_params classmethod, then foo would never actually be None.
    Now, if no params are given for foo, it will be None.
    You can also set a default value for foo, like foo: Lazy[Foo] = Lazy(Foo).
    Or, if you want a default value but also want to allow None, you can
    write it as foo: Optional[Lazy[Foo]] = Lazy(Foo). See the sketch after this list.
  • Added support for PyTorch version 1.7.
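
A minimal sketch of the new Lazy behavior, assuming allennlp.common.Lazy and FromParams from this release; Foo and Bar are hypothetical example classes, not part of the library:

```python
from typing import Optional

from allennlp.common import FromParams, Lazy


class Foo(FromParams):
    def __init__(self, size: int = 1) -> None:
        self.size = size


class Bar(FromParams):
    # A parameter annotated `foo: Lazy[Foo] = None` is now really None when no
    # params are supplied. Declaring a default of Lazy(Foo) keeps a fallback,
    # and Optional[...] still allows an explicit None.
    def __init__(self, foo: Optional[Lazy[Foo]] = Lazy(Foo)) -> None:
        self.foo = foo.construct() if foo is not None else None
```

With this sketch, Bar() falls back to the Lazy(Foo) default, while Bar(foo=None) leaves foo as None.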

Fixed ✅

  • Made it possible to instantiate TrainerCallback from config files.
  • Fixed the remaining broken internal links in the API docs.
  • Fixed a bug where Hotflip would crash with a model that had multiple TokenIndexers and the input
    used rare vocabulary items.
  • Fixed a bug where BeamSearch would fail if max_steps was equal to 1.

Commits

7f85c74 fix docker build (#4762)
cc9ac0f ensure dataclasses not installed in CI (#4754)
812ac57 Fix hotflip bug where vocab items were not re-encoded correctly (#4759)
aeb6d36 revert samplers and fix bug when max_steps=1 (#4760)
baca754 Make returning token type id default in transformers intra word tokenization. (#4758)
5d6670c Update torch requirement from <1.7.0,>=1.6.0 to >=1.6.0,<1.8.0 (#4753)
0ad228d a few small doc fixes (#4752)
71a98c2 stricter typing for Optional[T] types, improve handling of Lazy params (#4743)
27edfbf Add end+trainer callbacks to Trainer.from_partial_objects (#4751)
b792c83 Fix device mismatch bug for categorical accuracy metric in distributed training (#4744)