From 67a86f9e21353c583904a28eb74b6d164ba95764 Mon Sep 17 00:00:00 2001
From: peterschmidt85
Date: Sun, 26 Nov 2023 10:46:07 +0100
Subject: [PATCH] - Minor bugfixes in the docs

---
 docs/examples/finetuning-llama-2.md        | 2 +-
 docs/examples/text-generation-inference.md | 4 ++--
 docs/examples/vllm.md                      | 2 +-
 mkdocs.yml                                 | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/examples/finetuning-llama-2.md b/docs/examples/finetuning-llama-2.md
index 244029a2f..d0291c27a 100644
--- a/docs/examples/finetuning-llama-2.md
+++ b/docs/examples/finetuning-llama-2.md
@@ -5,7 +5,7 @@
     with QLoRA and your own script, using [Tasks](../docs/guides/tasks.md).
 
     If you'd like to fine-tune an LLM via a simple API,
-    consider using the [Fine-tuning](../docs/guides/task-generation.md) API. It's a lot simpler and
+    consider using the [Fine-tuning](../docs/guides/text-generation.md) API. It's a lot simpler and
     doesn't need your own script.
 
 ## Prepare a dataset
diff --git a/docs/examples/text-generation-inference.md b/docs/examples/text-generation-inference.md
index 04627bbb4..7620fca60 100644
--- a/docs/examples/text-generation-inference.md
+++ b/docs/examples/text-generation-inference.md
@@ -2,11 +2,11 @@
 
 !!! info "NOTE:"
     This example demonstrates how to deploy an LLM
-    using [Services](../docs/guides/services.md) and [Text Generation Inference](https://github.com/huggingface/text-generation-inference) (TGI),
+    using [Services](../docs/guides/services.md) and [TGI](https://github.com/huggingface/text-generation-inference),
     an open-source framework by Hugging Face.
 
     If you'd like to deploy an LLM via a simple API,
-    consider using the [Text generation](../docs/guides/task-generation.md) API. It's a lot simpler.
+    consider using the [Text generation](../docs/guides/text-generation.md) API. It's a lot simpler.
 
 ## Define the configuration
diff --git a/docs/examples/vllm.md b/docs/examples/vllm.md
index 30ddb6d67..904d5c89e 100644
--- a/docs/examples/vllm.md
+++ b/docs/examples/vllm.md
@@ -6,7 +6,7 @@
     an open-source library.
 
     If you'd like to deploy an LLM via a simple API,
-    consider using the [Text generation](../docs/guides/task-generation.md) API. It's a lot simpler.
+    consider using the [Text generation](../docs/guides/text-generation.md) API. It's a lot simpler.
 
 ## Define the configuration
diff --git a/mkdocs.yml b/mkdocs.yml
index 982ca6fb2..401329d55 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -164,9 +164,9 @@ nav:
   - Guides:
     - Fine-tuning: docs/guides/fine-tuning.md
     - Text generation: docs/guides/text-generation.md
+    - Dev environments: docs/guides/dev-environments.md
     - Tasks: docs/guides/tasks.md
     - Services: docs/guides/services.md
-    - Dev environments: docs/guides/dev-environments.md
   - Reference:
     - CLI: docs/reference/cli/index.md
     - API: docs/reference/api/python/index.md