
# LLLM - Local LLMs with llamafile

Local LLMs in one line of code (thanks to llamafile).

## Quick setup

Get up and running quickly with these instructions.

## Commands

Run these commands from this directory. The llamafiles must be executable first; see the note just below.
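If the llamafiles aren't already executable, mark them as such. This is the standard llamafile setup step on macOS and Linux (on Windows, the llamafile docs have you rename the file with a `.exe` extension instead):

```sh
# Make every llamafile in this directory executable (macOS/Linux).
chmod +x ./*.llamafile
```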

- Run a prompt on mistral-7b, no logging:
  - `./mistral-7b-instruct-v0.1-Q4_K_M-main.llamafile -p "list 5 pros and cons of python" --log-disable`
- Run the self-contained web server UI for mistral-7b (see the sketch after this list for querying it over HTTP):
  - `./mistral-7b-instruct-v0.1-Q4_K_M-server.llamafile`
- Run WizardCoder:
  - `./wizardcoder-python-13b-main.llamafile`
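Once the server llamafile is running, it serves llama.cpp's HTTP API, including an OpenAI-compatible chat endpoint, on port 8080 by default, so you can hit it from scripts as well as the browser UI. A minimal sketch, assuming the default host and port (the `"model"` value is arbitrary; the server runs whatever model is baked into the llamafile):

```sh
# Query the running llamafile server's OpenAI-compatible endpoint.
# Assumes the server llamafile is running on its default port (8080).
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local",
    "messages": [
      {"role": "user", "content": "list 5 pros and cons of python"}
    ],
    "temperature": 0.7
  }'
```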

## Reusable bash function (`local_llm.sh:lllm()`)

Let's build a reusable function to call local LLMs from anywhere.

There are many better ways to do this, but here's a simple, quick way to get local LLMs anywhere in your terminal.

I recommend checking out LLM for a complete in-terminal LLM solution.
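The repo's `local_llm.sh` carries the actual function; here's a minimal sketch of what an `lllm()` like this can look like, assuming the llamafiles above live in one directory (`LLLM_DIR` is a hypothetical variable, and the prompt/model/temperature argument order matches the example commands that follow):

```sh
# Minimal sketch of an lllm() function (assumes the llamafiles above
# live in $LLLM_DIR; adjust the path and filenames to your setup).
lllm() {
  local prompt="$1"
  local model="${2:-mistral}"   # "mistral" (default) or "wizard"
  local temp="${3:-0.7}"        # sampling temperature
  local dir="${LLLM_DIR:-$HOME/lllm}"

  local file
  case "$model" in
    wizard) file="$dir/wizardcoder-python-13b-main.llamafile" ;;
    *)      file="$dir/mistral-7b-instruct-v0.1-Q4_K_M-main.llamafile" ;;
  esac

  "$file" -p "$prompt" --temp "$temp" --log-disable
}
```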

- Test out the lllm() function with `source local_llm.sh`
- Example commands:
  - `lllm "Explain LLM architecture"`
  - `lllm "list 5 pros and cons of python" mistral 0.9`
  - `lllm "count items in list that are str and bool types" wizard`
- Move the lllm function into your `.bashrc`, `.zshrc`, or `.bash_profile` (or source the script, as sketched below)
- Now you can call lllm() from anywhere in your terminal
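One way to wire that up without copying the function body into your shell config, assuming the repo is cloned at `~/lllm` (a hypothetical path; substitute your own clone location):

```sh
# Load lllm() in every new shell (the path is an assumption; use your clone's).
echo 'source ~/lllm/local_llm.sh' >> ~/.zshrc   # or ~/.bashrc / ~/.bash_profile
```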

## Resources
