Name a CPU-only Dockerfile, since the NVidia one turned out to be complicated to actually get working due to NVidia's own shenanigans.

It seems to be difficult to get OpenCL powered by NVidia working inside a
container. My Linux distribution (Fedora) did not have the necessary packages
in its repositories to expose NVidia GPUs inside Docker.
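
For reference, a minimal sketch of what exposing an NVidia GPU to Docker
normally takes. The package name and the `nvidia-ctk` step are assumptions
based on the NVIDIA Container Toolkit documentation; the toolkit ships from
NVidia's own repository rather than Fedora's, which is exactly the friction
described above. The deleted Dockerfile lines further below were an attempt
to hand-register the OpenCL ICD instead.

```
# Sketch only: assumes NVidia's package repository is already configured,
# since nvidia-container-toolkit is not in Fedora's own repositories.
sudo dnf install nvidia-container-toolkit

# Let the toolkit register its runtime in Docker's config, then restart.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Smoke test: if the toolkit works, the container can see the GPU.
docker run --rm --gpus all ubuntu nvidia-smi
```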
Noeda committed Apr 7, 2023
1 parent 5b76ef4 commit 059c948
Showing 3 changed files with 22 additions and 10 deletions.
10 changes: 0 additions & 10 deletions .docker/nvidia.dockerfile → .docker/cpu.dockerfile
@@ -8,23 +8,13 @@ RUN apt install -y curl \
tar \
curl \
xz-utils \
-ocl-icd-libopencl1 \
-opencl-headers \
-clinfo \
build-essential \
gcc

-RUN mkdir -p /etc/OpenCL/vendors && \
-    echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
-ENV NVIDIA_VISIBLE_DEVICES all
-ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
-
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs > /rustup.sh
RUN chmod +x /rustup.sh
RUN /rustup.sh -y

-RUN apt install -y opencl-dev
-
RUN bash -c 'export LD_LIBRARY_PATH=/usr/lib:/lib:/usr/lib64:/lib64; export PATH="$PATH:$HOME/.cargo/bin";rustup default nightly'

COPY . /opt/rllama
1 change: 1 addition & 0 deletions .dockerignore
@@ -0,0 +1 @@
+target
21 changes: 21 additions & 0 deletions README.md
@@ -54,6 +54,27 @@ RUSTFLAGS="-C target-feature=+sse2,+avx,+fma,+avx2" cargo install rllama
There is a `.cargo/config.toml` inside this repository that will enable these
features if you install manually from this Git repository instead.
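
For illustration, that config plausibly looks like the following (a sketch
assuming cargo's standard `[build].rustflags` key; the actual file in the
repository is authoritative):

```
# Sketch of a .cargo/config.toml enabling the same CPU features as the
# RUSTFLAGS invocation above.
[build]
rustflags = ["-C", "target-feature=+sse2,+avx,+fma,+avx2"]
```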

+## Install (Docker path)
+
+There is a Dockerfile you can use if you'd rather just get started quickly and
+you are familiar with `docker`. You still need to download the models yourself.
+
+```
+docker build -f ./.docker/cpu.dockerfile -t rllama .
+```
+
+```
+docker run -v /models/LLaMA:/models:z -it rllama \
+rllama --model-path /models/7B \
+--param-path /models/7B/params.json \
+--tokenizer-path /models/tokenizer.model \
+--prompt "hi I like cheese"
+```
+
+Replace `/models/LLaMA` with the directory you've downloaded your models to.
+The `:z` in the `-v` flag may or may not be needed depending on your
+distribution (I needed it on Fedora Linux).
+
## LLaMA weights

Refer to https://github.com/facebookresearch/llama/. As of now, you need to be
