
update readme
hiyouga committed Apr 20, 2023
1 parent e846249 commit fe0d921
Showing 2 changed files with 8 additions and 4 deletions.
README.md (6 changes: 4 additions & 2 deletions)
@@ -13,6 +13,8 @@ Fine-tuning 🤖[ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) model with

## Changelog

[23/04/20] Our repo reached 100 stars within 12 days! Congratulations!

[23/04/19] Now we support merging the weights of models fine-tuned with LoRA! Try the `--checkpoint_dir checkpoint1,checkpoint2` argument to continually fine-tune the models.

[23/04/18] Now we support training quantized models with the three fine-tuning methods! Try the `quantization_bit` argument to train the model in 4-bit or 8-bit precision (see the combined example below).
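
The two entries above can be exercised in a single run. The command below is a minimal sketch, not the repository's documented invocation: the script path `src/finetune.py` and the `--output_dir` value are placeholder assumptions, and only the `--checkpoint_dir` and `--quantization_bit` options come from the changelog entries.

```bash
# Hypothetical invocation; the script name and output path are placeholders.
# --checkpoint_dir merges the previously trained LoRA checkpoints before training continues,
# --quantization_bit trains the quantized model in 4-bit precision.
CUDA_VISIBLE_DEVICES=0 python src/finetune.py \
    --checkpoint_dir checkpoint1,checkpoint2 \
    --quantization_bit 4 \
    --output_dir output/continued_lora
```
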
@@ -38,7 +40,7 @@ Our script now supports the following datasets:
- [Firefly 1.1M](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)
- [CodeAlpaca 20k](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
- [Alpaca CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
- [Web QA](https://huggingface.co/datasets/suolyer/webqa)
- [Web QA (Chinese)](https://huggingface.co/datasets/suolyer/webqa)

Please refer to [data/README.md](data/README.md) for details.

@@ -55,7 +57,7 @@ Our script now supports the following fine-tuning methods:

## Requirements

- Python 3.10 and PyTorch 2.0.0
- Python 3.8+ and PyTorch 2.0.0
- 🤗Transformers, Datasets, Accelerate and PEFT (0.3.0.dev0 is required; an install sketch follows this list)
- protobuf, cpm_kernels, sentencepiece
- jieba, rouge_chinese, nltk
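
One way to set up these dependencies is sketched below. It is an assumption rather than the repository's documented setup: only the package names and the PEFT version constraint come from the list above, and installing PEFT from its GitHub main branch is shown merely as one way to obtain a 0.3.0.dev0 build.

```bash
# Assumed environment setup (Python 3.8+); not the official install instructions.
pip install "torch==2.0.0" transformers datasets accelerate
pip install protobuf cpm_kernels sentencepiece jieba rouge_chinese nltk
# PEFT 0.3.0.dev0 is a development build, so install it from source.
pip install git+https://github.com/huggingface/peft.git
```
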
README_zh.md (6 changes: 4 additions & 2 deletions)
@@ -13,6 +13,8 @@

## Changelog

[23/04/20] Our project reached 100 stars within 12 days! Congratulations!

[23/04/20] We added an example of altering the model's self-cognition. Please see [alter_self_cognition.md](examples/alter_self_cognition.md) for details.

[23/04/19] Now we support model weight merging! Try the `--checkpoint_dir checkpoint1,checkpoint2` argument to train a model on top of merged LoRA weights.
@@ -40,7 +42,7 @@
- [Firefly 1.1M](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)
- [CodeAlpaca 20k](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
- [Alpaca CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
- [Web QA](https://huggingface.co/datasets/suolyer/webqa)
- [Web QA (Chinese)](https://huggingface.co/datasets/suolyer/webqa)

Please refer to [data/README.md](data/README.md) for usage details.

@@ -57,7 +59,7 @@

## Requirements

- Python 3.10, PyTorch 2.0.0
- Python 3.8+, PyTorch 2.0.0
- 🤗Transformers, Datasets, Accelerate, PEFT (0.3.0.dev0 or later is required)
- protobuf, cpm_kernels, sentencepiece
- jieba, rouge_chinese, nltk
