Get up and running quickly with these instructions
- Download the llamafile binaries used in the commands below from the docs and place them in this directory (example download commands follow this list)
- Run the commands below
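If you're fetching from the command line, a minimal sketch looks like this. The URL below is a placeholder, not the real link, so substitute the actual download URL from the llamafile docs; the chmod step is needed because llamafiles are self-contained executables.

```sh
# Placeholder URL: replace with the actual download link from the llamafile docs.
curl -L -o mistral-7b-instruct-v0.1-Q4_K_M-main.llamafile \
  "https://example.com/mistral-7b-instruct-v0.1-Q4_K_M-main.llamafile"

# llamafiles are self-contained executables and must be marked executable.
chmod +x mistral-7b-instruct-v0.1-Q4_K_M-main.llamafile
```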
Run these commands from this directory
- Run a prompt on mistral-7b, no logging
./mistral-7b-instruct-v0.1-Q4_K_M-main.llamafile -p "list 5 pros and cons of python" --log-disable
- Run the self-contained web server UI for mistral-7b (a sample API call follows these commands)
./mistral-7b-instruct-v0.1-Q4_K_M-server.llamafile
- Run WizardCoder
./wizardcoder-python-13b-main.llamafile
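For the server command above: llamafile's server mode wraps the llama.cpp server, which serves both the browser UI and a JSON API. A hedged example, assuming the default bind of http://127.0.0.1:8080 and the llama.cpp-style /completion endpoint:

```sh
# Assumes the default address/port; check the server's startup output
# if this does not connect.
curl -s http://127.0.0.1:8080/completion \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "list 5 pros and cons of python", "n_predict": 128}'
```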
Let's build a reusable `lllm` function to call local LLMs from anywhere.
There are many better ways to do this, but here's a simple, quick way to get local LLMs anywhere in your terminal.
I recommend checking out `LLM` for a more complete in-terminal LLM solution.
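Here is a minimal sketch of what local_llm.sh might contain, matching the argument order in the examples below (prompt, then model, then temperature). The install directory and the 0.7 default temperature are assumptions; adjust both to your setup.

```sh
#!/usr/bin/env bash
# local_llm.sh -- minimal sketch; file locations and defaults are assumptions.

lllm() {
  local prompt="$1"
  local model="${2:-mistral}"  # "mistral" (default) or "wizard"
  local temp="${3:-0.7}"       # assumed default sampling temperature

  # Assumed location of the downloaded llamafiles; change to where you put them.
  local dir="$HOME/llamafiles"

  local bin
  case "$model" in
    mistral) bin="$dir/mistral-7b-instruct-v0.1-Q4_K_M-main.llamafile" ;;
    wizard)  bin="$dir/wizardcoder-python-13b-main.llamafile" ;;
    *) echo "lllm: unknown model '$model'" >&2; return 1 ;;
  esac

  "$bin" -p "$prompt" --temp "$temp" --log-disable
}
```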
- Test out the lllm() function with
source local_llm.sh
- Example commands
lllm "Explain LLM architecture"
lllm "list 5 pros and cons of python" mistral 0.9
lllm "count items in list that are str and bool types" wizard
- Move the lllm function into your .bashrc, .zshrc, or .bash_profile (or source local_llm.sh from there; see the snippet after this list)
- Now you can call lllm() from anywhere in your terminal
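One way to do that, sketched here with an assumed path: keep the function in local_llm.sh and source it from your shell profile instead of pasting the function body in directly.

```sh
# The path is an assumption -- point it at wherever you cloned this repo.
echo 'source "$HOME/path/to/this-repo/local_llm.sh"' >> ~/.zshrc  # or ~/.bashrc
```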
- Watch the devlog where we create this repo
- llamafile codebase
- Original llamafile introduction
- Core author - creator of llamafile and Cosmopolitan
- Original Blog Post
- How llamafile works