diff --git a/README.md b/README.md
index c89b086de558d7..6096ee02323ce9 100644
--- a/README.md
+++ b/README.md
@@ -256,7 +256,7 @@ Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
 1. **[XLSR-Wav2Vec2](https://huggingface.co/transformers/model_doc/xlsr_wav2vec2.html)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedbacks before starting your PR.
 
-To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/transformers/index.html#bigtable).
+To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/transformers/index.html#supported-frameworks).
 
 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/transformers/examples.html).
 
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 47eb5ad2aecec7..b24dce5cfd48a4 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -84,7 +84,10 @@ The documentation is organized in five parts:
 - **INTERNAL HELPERS** for the classes and functions we use internally.
 
 The library currently contains Jax, PyTorch and Tensorflow implementations, pretrained model weights, usage scripts and
-conversion utilities for the following models:
+conversion utilities for the following models.
+
+Supported models
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 ..
     This list is updated automatically from the README with `make fix-copies`. Do not update manually!
@@ -267,7 +270,8 @@ conversion utilities for the following models:
     Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
 
 
-.. _bigtable:
+Supported frameworks
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The table below represents the current support in the library for each of those models, whether they have a Python
 tokenizer (called "slow"). A "fast" tokenizer backed by the 🤗 Tokenizers library, whether they have support in Jax (via
diff --git a/examples/pytorch/question-answering/README.md b/examples/pytorch/question-answering/README.md
index 96bed2d06be740..68645fc7d23c84 100644
--- a/examples/pytorch/question-answering/README.md
+++ b/examples/pytorch/question-answering/README.md
@@ -20,7 +20,7 @@ Based on the script [`run_qa.py`](https://github.com/huggingface/transformers/bl
 
 **Note:** This script only works with models that have a fast tokenizer (backed by the 🤗 Tokenizers library) as it
 uses special features of those tokenizers. You can check if your favorite model has a fast tokenizer in
-[this table](https://huggingface.co/transformers/index.html#bigtable), if it doesn't you can still use the old version
+[this table](https://huggingface.co/transformers/index.html#supported-frameworks), if it doesn't you can still use the old version
 of the script.
 
 The old version of this script can be found [here](https://github.com/huggingface/transformers/tree/master/examples/legacy/question-answering).
diff --git a/examples/pytorch/question-answering/run_qa.py b/examples/pytorch/question-answering/run_qa.py
index 57b0cb04e94955..0a48770a6946fe 100755
--- a/examples/pytorch/question-answering/run_qa.py
+++ b/examples/pytorch/question-answering/run_qa.py
@@ -304,7 +304,7 @@ def main():
     if not isinstance(tokenizer, PreTrainedTokenizerFast):
         raise ValueError(
             "This example script only works for models that have a fast tokenizer. Checkout the big table of models "
-            "at https://huggingface.co/transformers/index.html#bigtable to find the model types that meet this "
+            "at https://huggingface.co/transformers/index.html#supported-frameworks to find the model types that meet this "
             "requirement"
         )
 
diff --git a/examples/pytorch/token-classification/README.md b/examples/pytorch/token-classification/README.md
index e78d9bb3934802..fbff0176e93b7a 100644
--- a/examples/pytorch/token-classification/README.md
+++ b/examples/pytorch/token-classification/README.md
@@ -52,7 +52,7 @@ python run_ner.py \
 
 **Note:** This script only works with models that have a fast tokenizer (backed by the 🤗 Tokenizers library) as it
 uses special features of those tokenizers. You can check if your favorite model has a fast tokenizer in
-[this table](https://huggingface.co/transformers/index.html#bigtable), if it doesn't you can still use the old version
+[this table](https://huggingface.co/transformers/index.html#supported-frameworks), if it doesn't you can still use the old version
 of the script.
 
 ## Old version of the script
diff --git a/examples/pytorch/token-classification/run_ner.py b/examples/pytorch/token-classification/run_ner.py
index 81690186bc462b..4ff79088cef3c4 100755
--- a/examples/pytorch/token-classification/run_ner.py
+++ b/examples/pytorch/token-classification/run_ner.py
@@ -306,7 +306,7 @@ def get_label_list(labels):
     if not isinstance(tokenizer, PreTrainedTokenizerFast):
         raise ValueError(
             "This example script only works for models that have a fast tokenizer. Checkout the big table of models "
-            "at https://huggingface.co/transformers/index.html#bigtable to find the model types that meet this "
+            "at https://huggingface.co/transformers/index.html#supported-frameworks to find the model types that meet this "
             "requirement"
         )
 
diff --git a/utils/check_copies.py b/utils/check_copies.py
index db1999d2244791..c1ed7c1a222995 100644
--- a/utils/check_copies.py
+++ b/utils/check_copies.py
@@ -302,7 +302,7 @@ def check_model_list_copy(overwrite=False, max_per_line=119):
     rst_list, start_index, end_index, lines = _find_text_in_file(
         filename=os.path.join(PATH_TO_DOCS, "index.rst"),
         start_prompt="    This list is updated automatically from the README",
-        end_prompt=".. _bigtable:",
+        end_prompt="Supported frameworks",
     )
     md_list = get_model_list()
     converted_list = convert_to_rst(md_list, max_per_line=max_per_line)