
Add "Fine-tune ALBERT for sentence-pair classification" notebook to the community notebooks #7255

Merged 1 commit into huggingface:master on Sep 21, 2020

Conversation

@NadirEM (Contributor) commented on Sep 19, 2020

Hello,

I'm adding to the community notebooks a tutorial on fine-tuning ALBERT and other BERT-based models for sentence-pair classification.
The main features of this tutorial are (illustrative sketches of a few of them follow the list):
[1] End-to-end ML implementation (training, validation, prediction, evaluation)
[2] Easy adaptability to your own datasets
[3] Quick experimentation with other BERT-based models (BERT, ALBERT, ...)
[4] Quick training with limited computational resources (mixed-precision, gradient accumulation, ...)
[5] Multi-GPU execution
[6] Threshold choice for the classification decision (not necessarily 0.5)
[7] Option to freeze the BERT layers and update only the classification-layer weights, or to fine-tune all the weights
[8] Reproducible results with seed settings
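
To make the list concrete, here is a minimal sketch of features [3], [6], [7] and [8]; this is not the notebook's actual code, just the standard transformers Auto classes. The checkpoint name, the example sentence pair and the 0.3 threshold are illustrative choices, not the tutorial's settings.

```python
import random

import numpy as np
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def set_seed(seed: int = 42):
    """Seed Python, NumPy and PyTorch for reproducible runs (feature [8])."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)


set_seed(42)

# Swapping the checkpoint name is all it takes to try another BERT-based
# model (feature [3]); "albert-base-v2" is an illustrative choice.
model_name = "albert-base-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Feature [7]: optionally freeze the encoder and train only the
# classification head.
for param in model.base_model.parameters():
    param.requires_grad = False

# Sentence pairs are passed to the tokenizer together, so the model sees
# both segments in a single input.
inputs = tokenizer("The sky is blue.", "It is sunny today.",
                   return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Feature [6]: decide with a tunable probability threshold instead of a
# fixed argmax; 0.3 is a placeholder that would be chosen on validation data.
prob_positive = torch.softmax(logits, dim=-1)[0, 1].item()
threshold = 0.3
prediction = int(prob_positive >= threshold)
```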
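And a similarly hedged sketch of feature [4], mixed-precision training with gradient accumulation in plain PyTorch (`torch.cuda.amp`), reusing the tokenizer and model from the sketch above. The one-batch dummy loader and the hyperparameter values are placeholders, not the tutorial's settings.

```python
# Dummy stand-in for a real DataLoader: one tokenized, labeled batch
# repeated a few times. Replace with your own dataset in practice.
batch = tokenizer("The sky is blue.", "It is sunny today.",
                  return_tensors="pt", truncation=True, padding=True)
batch["labels"] = torch.tensor([1])
train_batches = [dict(batch) for _ in range(8)]

# Only the unfrozen parameters (the classification head) are optimized.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5)

use_cuda = torch.cuda.is_available()
if use_cuda:
    model.cuda()
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)  # no-op on CPU
accumulation_steps = 4  # illustrative value

model.train()
optimizer.zero_grad()
for step, b in enumerate(train_batches):
    if use_cuda:
        b = {k: v.cuda() for k, v in b.items()}
    with torch.cuda.amp.autocast(enabled=use_cuda):  # FP16 forward pass
        # Dividing by accumulation_steps keeps the accumulated gradient
        # equivalent to that of one large batch.
        loss = model(**b).loss / accumulation_steps
    scaler.scale(loss).backward()
    if (step + 1) % accumulation_steps == 0:
        scaler.step(optimizer)  # unscales gradients, then steps
        scaler.update()
        optimizer.zero_grad()
```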

@LysandreJik (Member) left a comment


Very nice, thank you for your contribution!

@LysandreJik merged commit 4b3e55b into huggingface:master on Sep 21, 2020
fabiocapsouza pushed a commit to fabiocapsouza/transformers that referenced this pull request on Nov 15, 2020
fabiocapsouza added a commit to fabiocapsouza/transformers that referenced this pull request on Nov 15, 2020
2 participants