Fix routing through relay, default network RPS, --token, logging, readme #399

Merged
merged 10 commits
Jul 22, 2023
Update readme
borzunov committed Jul 22, 2023
commit d35e919511c97e2477ed16dee0a7f4c9ff011970
4 changes: 2 additions & 2 deletions README.md
@@ -40,7 +40,7 @@ Run these commands in an [Anaconda](https://www.anaconda.com) env (requires Linu

```diff
 conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
-pip install --upgrade petals
+pip install git+https://github.com/bigscience-workshop/petals
 python -m petals.cli.run_server enoch/llama-65b-hf --adapters timdettmers/guanaco-65b
```

@@ -101,7 +101,7 @@ Here's how to install Petals with [Anaconda](https://www.anaconda.com/products/d

```diff
 conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
-pip install --upgrade petals
+pip install git+https://github.com/bigscience-workshop/petals
```

If you don't use Anaconda, you can install PyTorch in [any other way](https://pytorch.org/get-started/locally/). If you want to run models with 8-bit weights, please install PyTorch with CUDA 11.x or newer for compatibility with [bitsandbytes](https://github.com/timDettmers/bitsandbytes).
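Not part of this diff, but the "CUDA 11.x or newer" requirement above amounts to a simple version comparison; a minimal sketch (the helper name `cuda_ok` is hypothetical, and `torch.version.cuda` is the string you would pass in from an installed PyTorch build):

```python
from typing import Optional


def cuda_ok(cuda_version: Optional[str], minimum: tuple = (11, 0)) -> bool:
    """Return True if a CUDA version string (e.g. '11.7') meets the minimum.

    A CPU-only PyTorch build reports torch.version.cuda as None, which fails
    the check, since 8-bit weights via bitsandbytes require a CUDA build.
    """
    if cuda_version is None:
        return False
    major, minor = (int(part) for part in cuda_version.split(".")[:2])
    return (major, minor) >= minimum


print(cuda_ok("11.7"))  # True  -- the version pinned in the commands above
print(cuda_ok("10.2"))  # False -- too old for bitsandbytes 8-bit support
```

In a live environment you would call it as `cuda_ok(torch.version.cuda)` after `import torch`.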