
readthedocs (modelscope#1345)
tastelikefeet committed Jul 10, 2024
1 parent 7758ed0 commit a590aa2
Showing 8 changed files with 87 additions and 119 deletions.
64 changes: 8 additions & 56 deletions README.md
@@ -31,27 +31,26 @@
- [News](#-news)
- [Installation](#%EF%B8%8F-installation)
- [Getting Started](#-getting-started)
- [Documentation](#-documentation)
- [Classroom](#-Classroom)
- [License](#-License)
- [Citation](#-citation)

## 📝 Introduction
SWIFT supports training, inference, evaluation and deployment of **300+ LLMs and 50+ MLLMs** (multimodal large models). Developers can directly apply our framework to their own research and production environments to realize the complete workflow from model training and evaluation to application. In addition to supporting the lightweight training solutions provided by [PEFT](https://github.com/huggingface/peft), we also provide a complete **Adapters library** to support the latest training techniques such as NEFTune, LoRA+, LLaMA-PRO, etc. This adapter library can be used directly in your own custom workflow without our training scripts.

To facilitate use by users unfamiliar with deep learning, we provide a Gradio web-ui for controlling training and inference, as well as accompanying deep learning courses and best practices for beginners.
To facilitate use by users unfamiliar with deep learning, we provide a Gradio web-ui for controlling training and inference, as well as accompanying deep learning courses and best practices for beginners. SWIFT web-ui is available on both [Huggingface space](https://huggingface.co/spaces/tastelikefeet/swift) and [ModelScope studio](https://www.modelscope.cn/studios/iic/Scalable-lightWeight-Infrastructure-for-Fine-Tuning/summary); please feel free to try it!

Additionally, we are expanding capabilities for other modalities. Currently, we support full-parameter training and LoRA training for AnimateDiff.

SWIFT has rich documentation for users; please check [here](https://github.com/modelscope/swift/tree/main/docs/source_en/LLM/index.md).

SWIFT web-ui is available on both [Huggingface space](https://huggingface.co/spaces/tastelikefeet/swift) and [ModelScope studio](https://www.modelscope.cn/studios/iic/Scalable-lightWeight-Infrastructure-for-Fine-Tuning/summary); please feel free to try it!
SWIFT has rich documentation for users; please feel free to check our documentation website:
<p align="center">
<a href="https://swift.readthedocs.io/en/latest/">English Documentation</a> &nbsp | &nbsp <a href="https://swift.readthedocs.io/zh-cn/latest/">中文文档</a> &nbsp
</p>

## ☎ Groups

You can contact and communicate with us by joining our groups:


Discord Group | WeChat Group
[Discord Group](https://discord.gg/qQXTzNUp) | WeChat Group
:-------------------------:|:-------------------------:
<img src="asset/discord_qr.jpg" width="200" height="200"> | <img src="asset/wechat.png" width="200" height="200">

@@ -647,54 +646,7 @@ The complete list of supported models and datasets can be found at [Supported Mo
Other variables such as `CUDA_VISIBLE_DEVICES` are also supported but are not listed here.
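
For example, a hypothetical invocation that pins deployment to a single GPU might look like this (the model name below is only illustrative, not prescribed by this commit):

```shell
# Illustrative sketch: restrict the swift CLI to GPU 0 via an ordinary environment variable.
# The model name is a placeholder; substitute one from the supported models list.
CUDA_VISIBLE_DEVICES=0 swift deploy \
    --model_type qwen-7b-chat
```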


## 📃 Documentation

### Documentation Compiling

```shell
make docs
# Check docs/build/html/index.html in a web browser
```

### User Guide

| Document Name |
| ------------------------------------------------------------ |
| [Using Web-UI](docs/source_en/GetStarted/Web-ui.md) |
| [Using Tuners](docs/source_en/GetStarted/Tuners.md) |
| [LLM Inference](docs/source_en/LLM/LLM-inference.md) |
| [LLM Fine-tuning](docs/source_en/LLM/LLM-fine-tuning.md) |
| [LLM Evaluation](docs/source_en/LLM/LLM-eval.md) |
| [LLM Quantization](docs/source_en/LLM/LLM-quantization.md) |
| [LLM Deployment](docs/source_en/LLM/VLLM-inference-acceleration-and-deployment.md) |
| [AnimateDiff Training](docs/source_en/AIGC/AnimateDiff-train-infer.md) |
| [Human Preference Alignment Training Documentation](docs/source_en/LLM/Human-Preference-Alignment-Training-Documentation.md) |

### Reference Documentation
| Document Name |
| ------------------------------------------------------------ |
| [Command Line Arguments](docs/source_en/LLM/Command-line-parameters.md) |
| [Supported Models and Datasets List](docs/source_en/LLM/Supported-models-datasets.md) |
| [Customizing New Models and Datasets](docs/source_en/LLM/Customization.md) |
| [Runtime Speed and Memory Benchmark](docs/source_en/LLM/Benchmark.md) |


### Best Practices

| Best Practices Name |
| ------------------------------------------------------------ |
| [Agent Fine-Tuning Best Practice](docs/source_en/LLM/Agent-fine-tuning-best-practice.md) |
| [Agent Deployment Best Practice](docs/source_en/LLM/Agent-deployment-best-practice.md) |
| [Self-Cognition Fine-Tuning Best Practice](docs/source_en/LLM/Self-cognition-best-practice.md) |
| [Qwen1.5 Best Practice](docs/source_en/LLM/Qwen1.5-best-practice.md) |
| [Multi-Modal Model Training Best Practice](docs/source_en/Multi-Modal/index.md) |
| [NPU Best Practice](docs/source_en/LLM/NPU-best-practice.md) |
| [DPO Human Alignment Training](docs/source_en/LLM/DPO.md) |
| [ORPO Human Alignment Training](docs/source_en/LLM/ORPO.md) |
| [SimPO Human Alignment Training](docs/source_en/LLM/SimPO.md) |


### Deep Learning Tutorials
## 📚 Classroom

| Tutorial Name |
|-------------------------------------------------------------- |
67 changes: 8 additions & 59 deletions README_CN.md
@@ -32,27 +32,25 @@
- [News](#-新闻)
- [Installation](#-%EF%B8%8F-安装)
- [Getting Started](#-快速开始)
- [Documentation](#-文档)
- [Tutorials](#-教程)
- [License](#-license)
- [Citation](#-引用)

## 📝 Introduction
SWIFT supports training, inference, evaluation, and deployment of **300+ LLMs and 50+ MLLMs** (multimodal large models). Developers can directly apply our framework to their own research and production environments to realize the complete workflow from model training and evaluation to application. In addition to the lightweight training solutions provided by [PEFT](https://github.com/huggingface/peft), we also provide a complete **Adapters library** to support the latest training techniques such as NEFTune, LoRA+, LLaMA-PRO, etc. This adapter library can be used directly in your own custom workflow without our training scripts.

To facilitate use by users unfamiliar with deep learning, we provide a Gradio web-ui for controlling training and inference, as well as accompanying deep learning courses and best practices for beginners.

Additionally, we are expanding capabilities for other modalities; currently, we support full-parameter training and LoRA training for AnimateDiff.

SWIFT has a rich documentation system; if you have any usage questions, please check [here](https://github.com/modelscope/swift/tree/main/docs/source/LLM/index.md).

You can experience the SWIFT web-ui on [Huggingface space](https://huggingface.co/spaces/tastelikefeet/swift) and [ModelScope studio](https://www.modelscope.cn/studios/iic/Scalable-lightWeight-Infrastructure-for-Fine-Tuning/summary).
To facilitate use by users unfamiliar with deep learning, we provide a Gradio web-ui for controlling training and inference, as well as accompanying deep learning courses and best practices for beginners. You can experience the SWIFT web-ui on [Huggingface space](https://huggingface.co/spaces/tastelikefeet/swift) and [ModelScope studio](https://www.modelscope.cn/studios/iic/Scalable-lightWeight-Infrastructure-for-Fine-Tuning/summary).

SWIFT has rich and comprehensive documentation; please check our documentation website:
<p align="center">
<a href="https://swift.readthedocs.io/en/latest/">English Documentation</a> &nbsp | &nbsp <a href="https://swift.readthedocs.io/zh-cn/latest/">中文文档</a> &nbsp
</p>

## ☎ User Groups

Please scan the QR codes below to join our groups:

Discord Group | WeChat Group
[Discord Group](https://discord.gg/qQXTzNUp) | WeChat Group
:-------------------------:|:-------------------------:
<img src="asset/discord_qr.jpg" width="200" height="200"> | <img src="asset/wechat.png" width="200" height="200">

@@ -644,56 +642,7 @@ CUDA_VISIBLE_DEVICES=0 swift deploy \

Other variables such as `CUDA_VISIBLE_DEVICES` are also supported but are not listed here.


## 📃 Documentation

### Documentation Compiling

```shell
make docs
# Check docs/build/html/index.html in a web browser
```

### User Guide

| Document Name |
| ------------------------------------------------------------ |
| [Using Web-UI](https://github.com/modelscope/swift/blob/main/docs/source/GetStarted/%E7%95%8C%E9%9D%A2%E8%AE%AD%E7%BB%83%E6%8E%A8%E7%90%86.md) |
| [Using Tuners](https://github.com/modelscope/swift/blob/main/docs/source/GetStarted/%E4%BD%BF%E7%94%A8tuners.md) |
| [LLM Inference](https://github.com/modelscope/swift/blob/main/docs/source/LLM/LLM%E6%8E%A8%E7%90%86%E6%96%87%E6%A1%A3.md) |
| [LLM Fine-tuning](https://github.com/modelscope/swift/blob/main/docs/source/LLM/LLM%E5%BE%AE%E8%B0%83%E6%96%87%E6%A1%A3.md) |
| [LLM Evaluation](https://github.com/modelscope/swift/blob/main/docs/source/LLM/LLM%E8%AF%84%E6%B5%8B%E6%96%87%E6%A1%A3.md) |
| [LLM Quantization](https://github.com/modelscope/swift/blob/main/docs/source/LLM/LLM%E9%87%8F%E5%8C%96%E6%96%87%E6%A1%A3.md) |
| [LLM Deployment](https://github.com/modelscope/swift/blob/main/docs/source/LLM/VLLM%E6%8E%A8%E7%90%86%E5%8A%A0%E9%80%9F%E4%B8%8E%E9%83%A8%E7%BD%B2.md) |
| [AnimateDiff Training](https://github.com/modelscope/swift/blob/main/docs/source/AIGC/AnimateDiff%E5%BE%AE%E8%B0%83%E6%8E%A8%E7%90%86%E6%96%87%E6%A1%A3.md) |
| [Human Preference Alignment Training](./docs/source/LLM/人类偏好对齐训练文档.md) |


### Reference Documentation
| Document Name |
| ------------------------------------------------------------ |
| [Command Line Arguments](https://github.com/modelscope/swift/blob/main/docs/source/LLM/%E5%91%BD%E4%BB%A4%E8%A1%8C%E5%8F%82%E6%95%B0.md) |
| [Supported Models and Datasets List](https://github.com/modelscope/swift/blob/main/docs/source/LLM/%E6%94%AF%E6%8C%81%E7%9A%84%E6%A8%A1%E5%9E%8B%E5%92%8C%E6%95%B0%E6%8D%AE%E9%9B%86.md) |
| [Customizing New Models and Datasets](https://github.com/modelscope/swift/blob/main/docs/source/LLM/%E8%87%AA%E5%AE%9A%E4%B9%89%E4%B8%8E%E6%8B%93%E5%B1%95.md) |
| [Runtime Speed and Memory Benchmark](https://github.com/modelscope/swift/blob/main/docs/source/LLM/Benchmark.md) |
| [HuggingFace Ecosystem Compatibility](https://github.com/modelscope/swift/blob/main/docs/source/LLM/HuggingFace%E7%94%9F%E6%80%81%E5%85%BC%E5%AE%B9.md) |


### Best Practices
| Best Practices Name |
| ------------------------------------------------------------ |
| [Agent Fine-Tuning Best Practice](https://github.com/modelscope/swift/blob/main/docs/source/LLM/Agent%E5%BE%AE%E8%B0%83%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5.md) |
| [Agent Deployment Best Practice](https://github.com/modelscope/swift/blob/main/docs/source/LLM/Agent%E9%83%A8%E7%BD%B2%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5.md) |
| [Self-Cognition Fine-Tuning Best Practice](https://github.com/modelscope/swift/blob/main/docs/source/LLM/%E8%87%AA%E6%88%91%E8%AE%A4%E7%9F%A5%E5%BE%AE%E8%B0%83%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5.md) |
| [Qwen1.5 Best Practice](https://github.com/modelscope/swift/blob/main/docs/source/LLM/Qwen1.5%E5%85%A8%E6%B5%81%E7%A8%8B%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5.md) |
| [Multi-Modal Model Training Best Practice](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/index.md) |
| [NPU Inference and Fine-Tuning Best Practice](https://github.com/modelscope/swift/blob/main/docs/source/LLM/NPU%E6%8E%A8%E7%90%86%E4%B8%8E%E5%BE%AE%E8%B0%83%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5.md) |
| [DPO Human Alignment Training](https://github.com/modelscope/swift/blob/main/docs/source/LLM/DPO%E8%AE%AD%E7%BB%83%E6%96%87%E6%A1%A3.md) |
| [ORPO Human Alignment Training](https://github.com/modelscope/swift/blob/main/docs/source/LLM/ORPO%E7%AE%97%E6%B3%95%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5.md) |
| [SimPO Human Alignment Training](https://github.com/modelscope/swift/blob/main/docs/source/LLM/SimPO%E7%AE%97%E6%B3%95%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5.md) |


### Deep Learning Tutorials
## 📚 Tutorials

| Tutorial Name |
| ------------------------------------------------------------ |
30 changes: 30 additions & 0 deletions docs/source/.readthedocs.yaml
@@ -0,0 +1,30 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.12"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
configuration: docs/source/conf.py

# Optionally build your docs in additional formats such as PDF and ePub
# formats:
# - pdf
# - epub

# Optional but recommended, declare the Python requirements required
# to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
install:
- requirements: requirements/docs.txt
- requirements: requirements/framework.txt
- requirements: requirements/llm.txt
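
This configuration asks Read the Docs to install the three requirements files and run Sphinx with docs/source/conf.py on Python 3.12. A roughly equivalent local build, sketched here as an assumption rather than an officially documented workflow, would be:

```shell
# Sketch of a local build mirroring the Read the Docs configuration above (assumption, not an official recipe).
pip install -r requirements/docs.txt -r requirements/framework.txt -r requirements/llm.txt
# docs/source/conf.py drives the Chinese site; the English site builds analogously from docs/source_en.
sphinx-build -b html docs/source docs/build/html
```

The sphinx-rtd-theme package added to requirements/docs.txt in this commit is what `html_theme = 'sphinx_rtd_theme'` in conf.py expects.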
6 changes: 4 additions & 2 deletions docs/source/conf.py
@@ -19,9 +19,11 @@
# -- Project information -----------------------------------------------------

project = 'swift'
copyright = '2022-2023, Alibaba ModelScope'
author = 'modelscope Authors'
copyright = '2022-2024, Alibaba ModelScope'
author = 'ModelScope Authors'
version_file = '../../swift/version.py'
html_theme = 'sphinx_rtd_theme'
language = 'zh_CN'


def get_version():
30 changes: 30 additions & 0 deletions docs/source_en/.readthedocs.yaml
@@ -0,0 +1,30 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.12"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
configuration: docs/source_en/conf.py

# Optionally build your docs in additional formats such as PDF and ePub
# formats:
# - pdf
# - epub

# Optional but recommended, declare the Python requirements required
# to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
install:
- requirements: requirements/docs.txt
- requirements: requirements/framework.txt
- requirements: requirements/llm.txt
6 changes: 4 additions & 2 deletions docs/source_en/conf.py
@@ -19,9 +19,11 @@
# -- Project information -----------------------------------------------------

project = 'swift'
copyright = '2022-2023, Alibaba ModelScope'
author = 'modelscope Authors'
copyright = '2022-2024, Alibaba ModelScope'
author = 'ModelScope Authors'
version_file = '../../swift/version.py'
html_theme = 'sphinx_rtd_theme'
language = 'en'


def get_version():
1 change: 1 addition & 0 deletions requirements/docs.txt
@@ -4,4 +4,5 @@ recommonmark
sphinx>=5.3.0
sphinx-book-theme
sphinx-copybutton
sphinx-rtd-theme
sphinx_markdown_tables
2 changes: 2 additions & 0 deletions requirements/llm.txt
@@ -1,8 +1,10 @@
attrdict
charset_normalizer
cpm_kernels
einops
fastapi
gradio>=3.40.0
importlib_metadata
openai
sentencepiece
tiktoken
