
Releases: mudler/LocalAI

v2.19.2

24 Jul 17:08
80ae919

This is a patch release that fixes known issues from the 2.19.x series.

What's Changed

Bug fixes 🐛

  • fix: pin setuptools 69.5.1 by @fakezeta in #2949
  • fix(cuda): downgrade to 12.0 to increase compatibility range by @mudler in #2994
  • fix(llama.cpp): do not set anymore lora_base by @mudler in #2999

Exciting New Features 🎉

  • ci(Makefile): reduce binary size by compressing by @mudler in #2947
  • feat(p2p): warn the user to start with --p2p by @mudler in #2993

🧠 Models

📖 Documentation and examples

👒 Dependencies

  • chore: ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2936
  • chore: ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2943
  • chore(deps): Bump grpcio from 1.64.1 to 1.65.1 in /backend/python/openvoice by @dependabot in #2956
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/sentencetransformers by @dependabot in #2955
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/bark by @dependabot in #2951
  • chore(deps): Bump docs/themes/hugo-theme-relearn from 1b2e139 to 7aec99b by @dependabot in #2952
  • chore(deps): Bump langchain from 0.2.8 to 0.2.10 in /examples/langchain/langchainpy-localai-example by @dependabot in #2959
  • chore(deps): Bump numpy from 1.26.4 to 2.0.1 in /examples/langchain/langchainpy-localai-example by @dependabot in #2958
  • chore(deps): Bump sqlalchemy from 2.0.30 to 2.0.31 in /examples/langchain/langchainpy-localai-example by @dependabot in #2957
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/vllm by @dependabot in #2964
  • chore(deps): Bump llama-index from 0.10.55 to 0.10.56 in /examples/chainlit by @dependabot in #2966
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/common/template by @dependabot in #2963
  • chore(deps): Bump weaviate-client from 4.6.5 to 4.6.7 in /examples/chainlit by @dependabot in #2965
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/transformers by @dependabot in #2970
  • chore(deps): Bump openai from 1.35.13 to 1.37.0 in /examples/functions by @dependabot in #2973
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/diffusers by @dependabot in #2969
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/exllama2 by @dependabot in #2971
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/rerankers by @dependabot in #2974
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/coqui by @dependabot in #2980
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/parler-tts by @dependabot in #2982
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/vall-e-x by @dependabot in #2981
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/transformers-musicgen by @dependabot in #2990
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/autogptq by @dependabot in #2984
  • chore(deps): Bump llama-index from 0.10.55 to 0.10.56 in /examples/langchain-chroma by @dependabot in #2986
  • chore(deps): Bump grpcio from 1.65.0 to 1.65.1 in /backend/python/mamba by @dependabot in #2989
  • chore: ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2992
  • chore(deps): Bump langchain-community from 0.2.7 to 0.2.9 in /examples/langchain/langchainpy-localai-example by @dependabot in #2960
  • chore(deps): Bump openai from 1.35.13 to 1.37.0 in /examples/langchain/langchainpy-localai-example by @dependabot in #2961
  • chore(deps): Bump langchain from 0.2.8 to 0.2.10 in /examples/functions by @dependabot in #2975
  • chore(deps): Bump openai from 1.35.13 to 1.37.0 in /examples/langchain-chroma by @dependabot in #2988
  • chore(deps): Bump langchain from 0.2.8 to 0.2.10 in /examples/langchain-chroma by @dependabot in #2987
  • chore: ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2995

Other Changes

  • ci(Makefile): enable p2p on cross-arm64 builds by @mudler in #2928

Full Changelog: v2.19.1...v2.19.2

v2.19.1

20 Jul 07:16
f9f8379


LocalAI 2.19.1 is out! 📣

TLDR; Summary spotlight

  • 🖧 Federated Instances via P2P: LocalAI now supports federated instances with P2P, offering both load-balanced and non-load-balanced options.
  • 🎛️ P2P Dashboard: A new dashboard to guide and assist in setting up P2P instances with auto-discovery using shared tokens.
  • 🔊 TTS Integration: Text-to-Speech (TTS) is now included in the binary releases.
  • 🛠️ Enhanced Installer: The installer script now supports setting up federated instances.
  • 📥 Model Pulling: Models can now be pulled directly via URL.
  • 🖼️ WebUI Enhancements: Visual improvements and cleanups to the WebUI and model lists.
  • 🧠 llama-cpp Backend: The llama-cpp (grpc) backend now supports embeddings (https://localai.io/features/embeddings/#llamacpp-embeddings)
  • ⚙️ Tool Support: Small enhancements to tools with disabled grammars.

🖧 LocalAI Federation and AI swarms

LocalAI is revolutionizing distributed AI workloads by making them simpler and more accessible. No complex setups or Docker and Kubernetes configurations are needed: LocalAI lets you create your own AI cluster with minimal friction. By auto-discovering your existing devices and sharing the work, or the weights of the LLM, across them, LocalAI aims to scale both horizontally and vertically with ease.

How does it work?

Starting LocalAI with --p2p generates a shared token for connecting multiple instances, and that's all you need to create an AI cluster: no intricate network setup is required. Simply navigate to the "Swarm" section in the WebUI and follow the on-screen instructions.

For fully shared instances, start LocalAI with --p2p --federated and follow the Swarm section's guidance. This feature is still experimental and should be considered tech-preview quality.
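For reference, here is a minimal command-line sketch of the two modes described above (the --p2p, --federated and --p2ptoken flags are the ones covered in these notes; check the Swarm tab for the exact one-liners):

# Start a single P2P-enabled instance: a shared token is generated at startup
local-ai run --p2p

# Start a federated instance that also shares incoming requests with the cluster
local-ai run --p2p --federated

# Reuse a previously generated token when restarting
local-ai run --p2p --p2ptoken <TOKEN>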

Federated LocalAI

Launch multiple LocalAI instances and cluster them together to share requests across the cluster. The "Swarm" tab in the WebUI provides one-liner instructions on connecting various LocalAI instances using a shared token. Instances will auto-discover each other, even across different networks.


Check out a demonstration video: Watch now

LocalAI P2P Workers

Distribute weights across nodes by starting multiple LocalAI workers. This is currently available only for the llama.cpp backend, with plans to expand to other backends soon.
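As a rough sketch, mirroring the worker syntax shown in the v2.16.0 notes further down (the exact command for this release may differ, so check the Swarm tab for the current one-liner):

# On each additional machine, start a llama.cpp rpc worker with the shared token
TOKEN=<shared_token> local-ai p2p-llama-cpp-rpc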


Check out a demonstration video: Watch now

What's Changed

Bug fixes 🐛

  • fix: make sure the GNUMake jobserver is passed to cmake for the llama.cpp build by @cryptk in #2697
  • Using exec when starting a backend instead of spawning a new process by @a17t in #2720
  • fix(cuda): downgrade default version from 12.5 to 12.4 by @mudler in #2707
  • fix: Lora loading by @vaaale in #2893
  • fix: short-circuit when nodes aren't detected by @mudler in #2909
  • fix: do not list txt files as potential models by @mudler in #2910

🖧 P2P area

  • feat(p2p): Federation and AI swarms by @mudler in #2723
  • feat(p2p): allow to disable DHT and use only LAN by @mudler in #2751

Exciting New Features 🎉

  • Allows to remove a backend from the list by @mauromorales in #2721
  • ci(Makefile): adds tts in binary releases by @mudler in #2695
  • feat: HF /scan endpoint by @dave-gray101 in #2566
  • feat(model-list): be consistent, skip known files from listing by @mudler in #2760
  • feat(models): pull models from urls by @mudler in #2750
  • feat(webui): show also models without a config in the welcome page by @mudler in #2772
  • feat(install.sh): support federated install by @mudler in #2752
  • feat(llama.cpp): support embeddings endpoints by @mudler in #2871
  • feat(functions): parse broken JSON when we parse the raw results, use dynamic rules for grammar keys by @mudler in #2912
  • feat(federation): add load balanced option by @mudler in #2915

🧠 Models

  • models(gallery): ⬆️ update checksum by @localai-bot in #2701
  • models(gallery): add l3-8b-everything-cot by @mudler in #2705
  • models(gallery): add hercules-5.0-qwen2-7b by @mudler in #2708
  • models(gallery): add llama3-8b-darkidol-2.2-uncensored-1048k-iq-imatrix by @mudler in #2710
  • models(gallery): add llama-3-llamilitary by @mudler in #2711
  • models(gallery): add tess-v2.5-gemma-2-27b-alpha by @mudler in #2712
  • models(gallery): add arcee-agent by @mudler in #2713
  • models(gallery): add gemma2-daybreak by @mudler in #2714
  • models(gallery): add L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF by @mudler in #2715
  • models(gallery): add qwen2-7b-instruct-v0.8 by @mudler in #2717
  • models(gallery): add internlm2_5-7b-chat-1m by @mudler in #2719
  • models(gallery): add gemma-2-9b-it-sppo-iter3 by @mudler in #2722
  • models(gallery): add llama-3_8b_unaligned_alpha by @mudler in #2727
  • models(gallery): add l3-8b-lunaris-v1 by @mudler in #2729
  • models(gallery): add llama-3_8b_unaligned_alpha_rp_soup-i1 by @mudler in #2734
  • models(gallery): add hathor_respawn-l3-8b-v0.8 by @mudler in #2738
  • models(gallery): add llama3-8b-instruct-replete-adapted by @mudler in #2739
  • models(gallery): add llama-3-perky-pat-instruct-8b by @mudler in #2740
  • models(gallery): add l3-uncen-merger-omelette-rp-v0.2-8b by @mudler in #2741
  • models(gallery): add nymph_8b-i1 by @mudler in #2742
  • models(gallery): add smegmma-9b-v1 by @mudler in #2743
  • models(gallery): add hathor_tahsin-l3-8b-v0.85 by @mudler in #2762
  • models(gallery): add replete-coder-instruct-8b-merged by @mudler in #2782
  • models(gallery): add arliai-llama-3-8b-formax-v1.0 by @mudler in #2783
  • models(gallery): add smegmma-deluxe-9b-v1 by @mudler in #2784
  • models(gallery): add l3-ms-astoria-8b by @mudler in #2785
  • models(gallery): add halomaidrp-v1.33-15b-l3-i1 by @mudler in #2786
  • models(gallery): add llama-3-patronus-lynx-70b-instruct by @mudler in #2788
  • models(gallery): add llamax3 by @mudler in #2849
  • models(gallery): add arliai-llama-3-8b-dolfin-v0.5 by @mudler in #2852
  • models(gallery): add tiger-gemma-9b-v1-i1 by @mudler in #2853
  • feat: models(gallery): add deepseek-v2-lite by @mudler in #2658
  • models(gallery): ⬆️ update checksum by @localai-bot in #2860
  • models(gallery): add phi-3.1-mini-4k-instruct by @mudler in #2863
  • models(gallery): ⬆️ update checksum by @localai-bot in #2887
  • models(gallery): add ezo model series (llama3, gemma) by @mudler in #2891
  • models(gallery): add l3-8b-niitama-v1 by @mudler in #2895
  • models(gallery): add mathstral-7b-v0.1-imat by @mudler in #2901
  • models(gallery): add MythicalMaid/EtherealMaid 15b by @mudler in #2902
  • models(gallery): add flammenai/Mahou-1.3d-mistral-7B by @mudler in #2903
  • models(gallery): add big-tiger-gemma-27b-v1 by @mudler in #2918
  • models(gallery): add phillama-3.8b-v0.1 by @mudler in #2920
  • models(gallery): add qwen2-wukong-7b by @mudler in #2921
  • models(gallery): add einstein-v4-7b by @mudler in #2922
  • models(gallery): add gemma-2b-translation-v0.150 by @mudler in #2923
  • models(gallery)...

v2.19.0

19 Jul 17:44
f19ee46


LocalAI 2.19.0 is out! 📣

TLDR; Summary spotlight

  • 🖧 Federated Instances via P2P: LocalAI now supports federated instances with P2P, offering both load-balanced and non-load-balanced options.
  • 🎛️ P2P Dashboard: A new dashboard to guide and assist in setting up P2P instances with auto-discovery using shared tokens.
  • 🔊 TTS Integration: Text-to-Speech (TTS) is now included in the binary releases.
  • 🛠️ Enhanced Installer: The installer script now supports setting up federated instances.
  • 📥 Model Pulling: Models can now be pulled directly via URL.
  • 🖼️ WebUI Enhancements: Visual improvements and cleanups to the WebUI and model lists.
  • 🧠 llama-cpp Backend: The llama-cpp (grpc) backend now supports embeddings (https://localai.io/features/embeddings/#llamacpp-embeddings)
  • ⚙️ Tool Support: Small enhancements to tools with disabled grammars.

🖧 LocalAI Federation and AI swarms

LocalAI is revolutionizing distributed AI workloads by making them simpler and more accessible. No complex setups or Docker and Kubernetes configurations are needed: LocalAI lets you create your own AI cluster with minimal friction. By auto-discovering your existing devices and sharing the work, or the weights of the LLM, across them, LocalAI aims to scale both horizontally and vertically with ease.

How does it work?

Starting LocalAI with --p2p generates a shared token for connecting multiple instances, and that's all you need to create an AI cluster: no intricate network setup is required. Simply navigate to the "Swarm" section in the WebUI and follow the on-screen instructions.

For fully shared instances, start LocalAI with --p2p --federated and follow the Swarm section's guidance. This feature is still experimental and should be considered tech-preview quality.

Federated LocalAI

Launch multiple LocalAI instances and cluster them together to share requests across the cluster. The "Swarm" tab in the WebUI provides one-liner instructions on connecting various LocalAI instances using a shared token. Instances will auto-discover each other, even across different networks.


Check out a demonstration video: Watch now

LocalAI P2P Workers

Distribute weights across nodes by starting multiple LocalAI workers. This is currently available only for the llama.cpp backend, with plans to expand to other backends soon.


Check out a demonstration video: Watch now

What's Changed

Bug fixes 🐛

  • fix: make sure the GNUMake jobserver is passed to cmake for the llama.cpp build by @cryptk in #2697
  • Using exec when starting a backend instead of spawning a new process by @a17t in #2720
  • fix(cuda): downgrade default version from 12.5 to 12.4 by @mudler in #2707
  • fix: Lora loading by @vaaale in #2893
  • fix: short-circuit when nodes aren't detected by @mudler in #2909
  • fix: do not list txt files as potential models by @mudler in #2910

🖧 P2P area

  • feat(p2p): Federation and AI swarms by @mudler in #2723
  • feat(p2p): allow to disable DHT and use only LAN by @mudler in #2751

Exciting New Features 🎉

  • Allows to remove a backend from the list by @mauromorales in #2721
  • ci(Makefile): adds tts in binary releases by @mudler in #2695
  • feat: HF /scan endpoint by @dave-gray101 in #2566
  • feat(model-list): be consistent, skip known files from listing by @mudler in #2760
  • feat(models): pull models from urls by @mudler in #2750
  • feat(webui): show also models without a config in the welcome page by @mudler in #2772
  • feat(install.sh): support federated install by @mudler in #2752
  • feat(llama.cpp): support embeddings endpoints by @mudler in #2871
  • feat(functions): parse broken JSON when we parse the raw results, use dynamic rules for grammar keys by @mudler in #2912
  • feat(federation): add load balanced option by @mudler in #2915

🧠 Models

  • models(gallery): ⬆️ update checksum by @localai-bot in #2701
  • models(gallery): add l3-8b-everything-cot by @mudler in #2705
  • models(gallery): add hercules-5.0-qwen2-7b by @mudler in #2708
  • models(gallery): add llama3-8b-darkidol-2.2-uncensored-1048k-iq-imatrix by @mudler in #2710
  • models(gallery): add llama-3-llamilitary by @mudler in #2711
  • models(gallery): add tess-v2.5-gemma-2-27b-alpha by @mudler in #2712
  • models(gallery): add arcee-agent by @mudler in #2713
  • models(gallery): add gemma2-daybreak by @mudler in #2714
  • models(gallery): add L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF by @mudler in #2715
  • models(gallery): add qwen2-7b-instruct-v0.8 by @mudler in #2717
  • models(gallery): add internlm2_5-7b-chat-1m by @mudler in #2719
  • models(gallery): add gemma-2-9b-it-sppo-iter3 by @mudler in #2722
  • models(gallery): add llama-3_8b_unaligned_alpha by @mudler in #2727
  • models(gallery): add l3-8b-lunaris-v1 by @mudler in #2729
  • models(gallery): add llama-3_8b_unaligned_alpha_rp_soup-i1 by @mudler in #2734
  • models(gallery): add hathor_respawn-l3-8b-v0.8 by @mudler in #2738
  • models(gallery): add llama3-8b-instruct-replete-adapted by @mudler in #2739
  • models(gallery): add llama-3-perky-pat-instruct-8b by @mudler in #2740
  • models(gallery): add l3-uncen-merger-omelette-rp-v0.2-8b by @mudler in #2741
  • models(gallery): add nymph_8b-i1 by @mudler in #2742
  • models(gallery): add smegmma-9b-v1 by @mudler in #2743
  • models(gallery): add hathor_tahsin-l3-8b-v0.85 by @mudler in #2762
  • models(gallery): add replete-coder-instruct-8b-merged by @mudler in #2782
  • models(gallery): add arliai-llama-3-8b-formax-v1.0 by @mudler in #2783
  • models(gallery): add smegmma-deluxe-9b-v1 by @mudler in #2784
  • models(gallery): add l3-ms-astoria-8b by @mudler in #2785
  • models(gallery): add halomaidrp-v1.33-15b-l3-i1 by @mudler in #2786
  • models(gallery): add llama-3-patronus-lynx-70b-instruct by @mudler in #2788
  • models(gallery): add llamax3 by @mudler in #2849
  • models(gallery): add arliai-llama-3-8b-dolfin-v0.5 by @mudler in #2852
  • models(gallery): add tiger-gemma-9b-v1-i1 by @mudler in #2853
  • feat: models(gallery): add deepseek-v2-lite by @mudler in #2658
  • models(gallery): ⬆️ update checksum by @localai-bot in #2860
  • models(gallery): add phi-3.1-mini-4k-instruct by @mudler in #2863
  • models(gallery): ⬆️ update checksum by @localai-bot in #2887
  • models(gallery): add ezo model series (llama3, gemma) by @mudler in #2891
  • models(gallery): add l3-8b-niitama-v1 by @mudler in #2895
  • models(gallery): add mathstral-7b-v0.1-imat by @mudler in #2901
  • models(gallery): add MythicalMaid/EtherealMaid 15b by @mudler in #2902
  • models(gallery): add flammenai/Mahou-1.3d-mistral-7B by @mudler in #2903
  • models(gallery): add big-tiger-gemma-27b-v1 by @mudler in #2918
  • models(gallery): add phillama-3.8b-v0.1 by @mudler in #2920
  • models(gallery): add qwen2-wukong-7b by @mudler in #2921
  • models(gallery): add einstein-v4-7b by @mudler in #2922
  • models(gallery): add gemma-2b-translation-v0.150 by @mudler in #2923
  • models(gallery)...

v2.18.1

01 Jul 20:53
b941732

What's Changed

Bug fixes 🐛

  • fix(talk): identify the model by ID instead of name by @mudler in #2685
  • fix(initializer): do select backends that exist by @mudler in #2694

Exciting New Features 🎉

  • feat(backend): fallback with autodetect by @mudler in #2693

🧠 Models

👒 Dependencies

Full Changelog: v2.18.0...v2.18.1

v2.18.0

28 Jun 14:17
8d9a452


⭐ Highlights

Here’s a quick overview of what’s new in 2.18.0:

  • 🐳 Support for models in OCI registry (includes ollama)
  • 🌋 Support for llama.cpp with vulkan (container images only for now)
  • 🗣️ The transcription endpoint can now also translate, using the translate option
  • ⚙️ Adds repeat_last_n and properties_order as model configurations
  • ⬆️ CUDA 12.5 Upgrade: we are now tracking the latest CUDA version (12.5).
  • 💎 Gemma 2 model support!

🐋 Support for OCI Images and Ollama Models

You can now specify models using oci:// and ollama:// prefixes in your YAML config files. Here’s an example for Ollama models:

parameters:
  model: ollama://...

Start the Ollama model directly with:

local-ai run ollama://gemma:2b

Or download only the model by using:

local-ai models install ollama://gemma:2b

For standard OCI images, use the oci:// prefix. To build a compatible container image, you can use docker, for example.

Your Dockerfile should look like this:

FROM scratch
COPY ./my_gguf_file.gguf /

You can also use this to store other model types (for instance, safetensors files for Stable Diffusion) and YAML config files!
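As an illustrative sketch (the image name and registry are hypothetical), packaging a GGUF file as an OCI artifact and pulling it into LocalAI could look like this:

# Build the scratch image containing the GGUF file and push it to a registry
docker build -t quay.io/youruser/my-model:latest .
docker push quay.io/youruser/my-model:latest

# Reference it from LocalAI using the oci:// prefix (assumed to work like the ollama:// examples above)
local-ai models install oci://quay.io/youruser/my-model:latest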

🌋 Vulkan Support for Llama.cpp

We’ve introduced Vulkan support for Llama.cpp! Check out our new image tags latest-vulkan-ffmpeg-core and v2.18.0-vulkan-ffmpeg-core.

🗣️ Transcription and Translation

Our transcription endpoint now supports translation! Simply add translate: true to your transcription requests to translate the transcription to English.
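As a hedged example, assuming the OpenAI-compatible /v1/audio/transcriptions endpoint and a whisper model named whisper-1 (both placeholders here), a translated transcription request could look like this:

curl http://localhost:8080/v1/audio/transcriptions \
  -H "Content-Type: multipart/form-data" \
  -F file="@audio.mp3" \
  -F model="whisper-1" \
  -F translate="true"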

⚙️ Enhanced Model Configuration

We’ve added new configuration options repeat_last_n and properties_order to give you more control. Here’s how you can set them up in your model YAML file:

# Force JSON to return properties in the specified order
function:
   grammar:
      properties_order: "name,arguments"

And for setting repeat_last_n (specific to Llama.cpp):

parameters:
   repeat_last_n: 64

💎 Gemma 2!


Google has just released Gemma 2 models; you can already install and run them in LocalAI with:

local-ai run gemma-2-27b-it
local-ai run gemma-2-9b-it

What's Changed

Bug fixes 🐛

  • fix(install.sh): correctly handle systemd service installation by @mudler in #2627
  • fix(worker): use dynaload for single binaries by @mudler in #2620
  • fix(install.sh): fix version typo by @mudler in #2645
  • fix(install.sh): move ARCH detection so it works also for mac by @mudler in #2646
  • fix(cli): remove duplicate alias by @mudler in #2654

Exciting New Features 🎉

  • feat: Upgrade to CUDA 12.5 by @reneleonhardt in #2601
  • feat(oci): support OCI images and Ollama models by @mudler in #2628
  • feat(whisper): add translate option by @mudler in #2649
  • feat(vulkan): add vulkan support to the llama.cpp backend by @mudler in #2648
  • feat(ui): allow to select between all the available models in the chat by @mudler in #2657
  • feat(build): only build llama.cpp relevant targets by @mudler in #2659
  • feat(options): add repeat_last_n by @mudler in #2660
  • feat(grammar): expose properties_order by @mudler in #2662

🧠 Models

  • models(gallery): add l3-umbral-mind-rp-v1.0-8b-iq-imatrix by @mudler in #2608
  • models(gallery): ⬆️ update checksum by @localai-bot in #2607
  • models(gallery): add llama-3-sec-chat by @mudler in #2611
  • models(gallery): add llama-3-cursedstock-v1.8-8b-iq-imatrix by @mudler in #2612
  • models(gallery): add llama3-8b-darkidol-1.1-iq-imatrix by @mudler in #2613
  • models(gallery): add magnum-72b-v1 by @mudler in #2614
  • models(gallery): add qwen2-1.5b-ita by @mudler in #2615
  • models(gallery): add hermes-2-theta-llama-3-70b by @mudler in #2626
  • models(gallery): ⬆️ update checksum by @localai-bot in #2630
  • models(gallery): add dark-idol-1.2 by @mudler in #2663
  • models(gallery): add einstein v7 qwen2 by @mudler in #2664
  • models(gallery): add arcee-spark by @mudler in #2665
  • models(gallery): add gemma2-9b-it and gemma2-27b-it by @mudler in #2670

📖 Documentation and examples

👒 Dependencies

Other Changes

New Contributors

Full Changelog: v2.17.1...v2.18.0

v2.17.1

19 Jun 06:56
8142bdc


Highlights

This is a patch release to address issues with Linux single binary releases. It also adds support for Stable Diffusion 3!

Stable Diffusion 3

You can use Stable Diffusion 3 by installing the model from the gallery (stable-diffusion-3-medium) or by placing this YAML file in the model folder:

backend: diffusers
diffusers:
  cuda: true
  enable_parameters: negative_prompt,num_inference_steps
  pipeline_type: StableDiffusion3Pipeline
f16: false
name: sd3
parameters:
  model: v2ray/stable-diffusion-3-medium-diffusers
step: 25

You can then try generating an image:

curl http://localhost:9091/v1/images/generations -H "Content-Type: application/json" -d '{
  "prompt": "A cute baby sea otter", "model": "sd3"
}'

Example result:


What's Changed

Bug fixes 🐛

Exciting New Features 🎉

  • feat(sd-3): add stablediffusion 3 support by @mudler in #2591
  • feat(talk): display an informative box, better colors by @mudler in #2600

📖 Documentation and examples

👒 Dependencies

Other Changes

Full Changelog: v2.17.0...v2.17.1

v2.17.0

17 Jun 18:10
2f29797

Ahoj! This new release of LocalAI comes with tons of updates and enhancements behind the scenes!

🌟 Highlights TLDR;

  • Automatic identification of GGUF models
  • New WebUI page to talk with an LLM!
  • https://models.localai.io is live! 🚀
  • Better arm64 and Apple silicon support
  • More models to the gallery!
  • New quickstart installer script
  • Enhancements to mixed grammar support
  • Major improvements to transformers
  • Linux single binary now supports rocm, nvidia, and intel

🤖 Automatic model identification for llama.cpp-based models

Just drop your GGUF files into the model folders, and let LocalAI handle the configurations. YAML files are now reserved for those who love to tinker with advanced setups.

🔊 Talk to your LLM!

We introduced a new page that allows direct interaction with the LLM using audio transcription and TTS capabilities. This feature is a lot of fun: now you can talk with any LLM just a couple of clicks away.

🍏 Apple single-binary

Experience enhanced support for the Apple ecosystem with a comprehensive single binary that packs all the necessary libraries, ensuring LocalAI runs smoothly on macOS and ARM64 architectures.

ARM64

Expanded our support for ARM64 with new Docker images and single binary options, ensuring better compatibility and performance on ARM-based systems.

Note: currently we support only arm core images, for instance: localai/localai:master-ffmpeg-core, localai/localai:latest-ffmpeg-core, localai/localai:v2.17.0-ffmpeg-core.

🐞 Bug Fixes and small enhancements

We’ve ironed out several issues, including image endpoint response types and other minor problems, boosting the stability and reliability of our applications. It is now also possible to enable CSRF when starting LocalAI, thanks to @dave-gray101.

🌐 Models and Galleries

Enhanced the model gallery with new additions like Mirai Nova, Mahou, and several updates to existing models ensuring better performance and accuracy.

You can now also browse new models at https://models.localai.io, without running LocalAI!

Installation and Setup

A new install.sh script is now available for quick and hassle-free installations, streamlining the setup process for new users.

curl https://localai.io/install.sh | sh

Installation can be configured with Environment variables, for example:

curl https://localai.io/install.sh | VAR=value sh

List of the Environment Variables:

  • DOCKER_INSTALL: Set to "true" to enable the installation of Docker images.
  • USE_AIO: Set to "true" to use the all-in-one LocalAI Docker image.
  • API_KEY: Specify an API key for accessing LocalAI, if required.
  • CORE_IMAGES: Set to "true" to download core LocalAI images.
  • PORT: Specifies the port on which LocalAI will run (default is 8080).
  • THREADS: Number of processor threads the application should use. Defaults to the number of logical cores minus one.
  • VERSION: Specifies the version of LocalAI to install. Defaults to the latest available version.
  • MODELS_PATH: Directory path where LocalAI models are stored (default is /usr/share/local-ai/models).
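
For example, combining a few of the variables listed above (the values here are purely illustrative):

curl https://localai.io/install.sh | DOCKER_INSTALL=true USE_AIO=true PORT=9090 THREADS=4 sh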

We are looking into improving the installer, and as this is a first iteration any feedback is welcome! Open up an issue if something doesn't work for you!

Enhancements to mixed grammar support

Mixed grammar support continues receiving improvements behind the scenes.

🐍 Transformers backend enhancements

  • Temperature = 0 correctly handled as greedy search
  • Handles custom words as stop words
  • Implement KV cache
  • Phi 3 no longer requires the trust_remote_code: true flag

Shout-out to @fakezeta for these enhancements!

Install models with the CLI

Now the CLI can install models directly from the gallery. For instance:

local-ai run <model_name_in_gallery>

This command ensures the model is installed in the model folder at startup.
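For example, with an illustrative model name (browse https://models.localai.io for the exact identifiers; the local-ai models install subcommand mentioned in the changelog below can be used to only download a model):

# Install the model from the gallery and start the API with it
local-ai run hermes-2-theta-llama-3-8b

# Or only install it from the gallery, without starting the API
local-ai models install hermes-2-theta-llama-3-8b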

🐧 Linux single binary now supports rocm, nvidia, and intel

Single binaries for Linux now contain Intel, AMD GPU, and NVIDIA support. Note that you need to install the dependencies separately in the system to leverage these features. In upcoming releases, this requirement will be handled by the installer script.

📣 Let's Make Some Noise!

A gigantic THANK YOU to everyone who’s contributed—your feedback, bug squashing, and feature suggestions are what make LocalAI shine. To all our heroes out there supporting other users and sharing their expertise, you’re the real MVPs!

Remember, LocalAI thrives on community support—not big corporate bucks. If you love what we're building, show some love! A shoutout on social (@LocalAI_OSS and @mudler_it on twitter/X), joining our sponsors, or simply starring us on GitHub makes all the difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Thanks a ton, and.. enjoy this release!

What's Changed

Bug fixes 🐛

Exciting New Features 🎉

  • feat(images): do not install python deps in the core image by @mudler in #2425
  • feat(hipblas): extend default hipblas GPU_TARGETS by @mudler in #2426
  • feat(build): add arm64 core containers by @mudler in #2421
  • feat(functions): allow parallel calls with mixed/no grammars by @mudler in #2432
  • feat(image): support response_type in the OpenAI API request by @prajwalnayak7 in #2347
  • feat(swagger): update swagger by @localai-bot in #2436
  • feat(functions): better free string matching, allow to expect strings after JSON by @mudler in #2445
  • build(Makefile): add back single target to build native llama-cpp by @mudler in #2448
  • feat(functions): allow response_regex to be a list by @mudler in #2447
  • TTS API improvements by @blob42 in #2308
  • feat(transformers): various enhancements to the transformers backend by @fakezeta in #2468
  • feat(webui): enhance card visibility by @mudler in #2473
  • feat(default): use number of physical cores as default by @mudler in #2483
  • feat: fiber CSRF by @dave-gray101 in #2482
  • feat(amdgpu): try to build in single binary by @mudler in #2485
  • feat:OpaqueErrors to hide error information by @dave-gray101 in #2486
  • build(intel): bundle intel variants in single-binary by @mudler in #2494
  • feat(install): add install.sh for quick installs by @mudler in #2489
  • feat(llama.cpp): guess model defaults from file by @mudler in #2522
  • feat(ui): add page to talk with voice, transcription, and tts by @mudler in #2520
  • feat(arm64): enable single-binary builds by @mudler in #2490
  • feat(util): add util command to print GGUF informations by @mudler in #2528
  • feat(defaults): add defaults for Command-R models by @mudler in #2529
  • feat(detection): detect by template in gguf file, add qwen2, phi, mistral and chatml by @mudler in #2536
  • feat(gallery): show available models in website, allow local-ai models install to install from galleries by @mudler in #2555
  • feat(gallery): uniform download from CLI by @mudler in #2559
  • feat(guesser): identify gemma models by @mudler in #2561
  • feat(binary): support extracted bundled libs on darwin by @mudler in #2563
  • feat(darwin): embed grpc libs by @mudler in #2567
  • feat(build): bundle libs for arm64 and x86 linux binaries by @mudler in #2572
  • feat(libpath): refactor and expose functions for external library paths by @mudler in #2578

🧠 Models


v2.16.0

24 May 17:35


Welcome to LocalAI's latest update!

🎉🎉🎉 woot woot! So excited to share this release, a lot of new features are landing in LocalAI!!!!! 🎉🎉🎉

🌟 Introducing Distributed Llama.cpp Inferencing

Now it is possible to distribute the inferencing workload across different workers with llama.cpp models!

This feature has landed with #2324 and is based on the upstream work of @rgerganov in ggerganov/llama.cpp#6829.

How it works: a front-end server (LocalAI) handles the OpenAI-compatible API requests, and workers (llama.cpp) are used to distribute the workload. This makes it possible to run larger models split across different nodes!

How to use it

To start workers to offload the computation you can run:

local-ai llamacpp-worker <listening_address> <listening_port>

Alternatively, you can follow the llama.cpp README and build the rpc-server (https://github.com/ggerganov/llama.cpp/blob/master/examples/rpc/README.md), which is also compatible with LocalAI.

When starting the LocalAI server, which accepts the API requests, you can provide the list of worker addresses with LLAMACPP_GRPC_SERVERS:

LLAMACPP_GRPC_SERVERS="address1:port,address2:port" local-ai run

At this point the workload hitting the LocalAI server is distributed across the nodes!
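Putting the pieces together, a minimal sketch with two workers on separate hosts (the addresses and ports are illustrative):

# On worker host 1 (e.g. 192.168.1.10)
local-ai llamacpp-worker 0.0.0.0 50052

# On worker host 2 (e.g. 192.168.1.11)
local-ai llamacpp-worker 0.0.0.0 50052

# On the node serving the API, point LocalAI at the workers
LLAMACPP_GRPC_SERVERS="192.168.1.10:50052,192.168.1.11:50052" local-ai run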

🤖 Peer2Peer llama.cpp

LocalAI is the first free, open source AI project offering complete, decentralized, private, peer-to-peer LLM inferencing on top of the libp2p protocol. There is no "public swarm" to offload the computation to; instead, it empowers you to build your own cluster of local and remote machines to distribute LLM computation.

This feature leverages llama.cpp's ability to distribute the workload, explained just above, together with features from one of my other projects, https://github.com/mudler/edgevpn.

LocalAI builds on top of the two, allowing you to create a private peer-to-peer network between nodes without centralizing connections or manually configuring IP addresses: it unlocks fully decentralized, private, peer-to-peer inferencing capabilities. It also works across different NAT-ed networks (using DHT and mDNS as discovery mechanisms).

How it works: A pre-shared token can be generated and shared between workers and the server to form a private, decentralized, p2p network.

You can see the feature in action here:


How to use it

  1. Start the server with --p2p:
./local-ai run --p2p
# 1:02AM INF loading environment variables from file envFile=.env
# 1:02AM INF Setting logging to info
# 1:02AM INF P2P mode enabled
# 1:02AM INF No token provided, generating one
# 1:02AM INF Generated Token:
# XXXXXXXXXXX
# 1:02AM INF Press a button to proceed

A token is displayed; copy it and press enter.

You can re-use the same token later by restarting the server with --p2ptoken (or P2P_TOKEN).

  2. Start the workers. Now you can copy the local-ai binary to other hosts and run as many workers as you like with that token:
TOKEN=XXX ./local-ai p2p-llama-cpp-rpc
# 1:06AM INF loading environment variables from file envFile=.env
# 1:06AM INF Setting logging to info
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"config/config.go:288","message":"connmanager disabled\n"}
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"config/config.go:295","message":" go-libp2p resource manager protection enabled"}
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"config/config.go:409","message":"max connections: 100\n"}
# 1:06AM INF Starting llama-cpp-rpc-server on '127.0.0.1:34371'
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"node/node.go:118","message":" Starting EdgeVPN network"}
# create_backend: using CPU backend
# Starting RPC server on 127.0.0.1:34371, backend memory: 31913 MB
# 2024/05/19 01:06:01 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). # See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.
# {"level":"INFO","time":"2024-05-19T01:06:01.805+0200","caller":"node/node.go:172","message":" Node ID: 12D3KooWJ7WQAbCWKfJgjw2oMMGGss9diw3Sov5hVWi8t4DMgx92"}
# {"level":"INFO","time":"2024-05-19T01:06:01.806+0200","caller":"node/node.go:173","message":" Node Addresses: [/ip4/127.0.0.1/tcp/44931 /ip4/127.0.0.1/udp/33251/quic-v1/webtransport/certhash/uEiAWAhZ-W9yx2ZHnKQm3BE_ft5jjoc468z5-Rgr9XdfjeQ/certhash/uEiB8Uwn0M2TQBELaV2m4lqypIAY2S-2ZMf7lt_N5LS6ojw /ip4/127.0.0.1/udp/35660/quic-v1 /ip4/192.168.68.110/tcp/44931 /ip4/192.168.68.110/udp/33251/quic-v1/webtransport/certhash/uEiAWAhZ-W9yx2ZHnKQm3BE_ft5jjoc468z5-Rgr9XdfjeQ/certhash/uEiB8Uwn0M2TQBELaV2m4lqypIAY2S-2ZMf7lt_N5LS6ojw /ip4/192.168.68.110/udp/35660/quic-v1 /ip6/::1/tcp/41289 /ip6/::1/udp/33160/quic-v1/webtransport/certhash/uEiAWAhZ-W9yx2ZHnKQm3BE_ft5jjoc468z5-Rgr9XdfjeQ/certhash/uEiB8Uwn0M2TQBELaV2m4lqypIAY2S-2ZMf7lt_N5LS6ojw /ip6/::1/udp/35701/quic-v1]"}
# {"level":"INFO","time":"2024-05-19T01:06:01.806+0200","caller":"discovery/dht.go:104","message":" Bootstrapping DHT"}

(Note you can also supply the token via args)

At this point, you should see messages in the server logs stating that new workers have been found.

  3. Now you can run inference as usual on the server (the node started in step 1).

Interested in trying it out? While we are still updating the documentation, you can read the full instructions in #2343.

📜 Advanced Function calling support with Mixed JSON Grammars

LocalAI gets better at function calling with mixed grammars!

With this release, LocalAI introduces a transformative capability: support for mixed JSON BNF grammars. It allows you to specify a grammar for the LLM that lets it output both structured JSON and free text.

How to use it:

To enable mixed grammars, set grammar.mixed_mode: true under the function section of the YAML configuration file, for example:

  function:
    # disable injecting the "answer" tool
    disable_no_action: true

    grammar:
      # This allows the grammar to also return messages
      mixed_mode: true

This feature significantly enhances LocalAI's ability to interpret and manipulate JSON data coming from the LLM through a more flexible and powerful grammar system. Users can now combine multiple grammar types within a single JSON structure, allowing for dynamic parsing and validation scenarios.
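With a model configured this way, tools can be passed through the usual OpenAI-compatible chat completions endpoint, and the model may reply either with a tool call or with a plain message. A hedged request sketch (the model name and tool definition are hypothetical):

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "my-function-model",
  "messages": [{"role": "user", "content": "What is the weather like in Rome?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {
        "type": "object",
        "properties": { "city": { "type": "string" } },
        "required": ["city"]
      }
    }
  }]
}'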

Grammars can also be turned off entirely, leaving it to the user to define how the LLM output is parsed so that LocalAI can still interpret it correctly and stay compliant with the OpenAI REST spec.

For example, to interpret Hermes results, one can just annotate regexes in function.json_regex_match to extract the LLM response:

  function:
    grammar:
      disable: true
    # disable injecting the "answer" tool
    disable_no_action: true
    return_name_in_function_response: true

    json_regex_match:
    - "(?s)<tool_call>(.*?)</tool_call>"
    - "(?s)<tool_call>(.*?)"
  
    replace_llm_results:
    # Drop the scratchpad content from responses
    - key: "(?s)<scratchpad>.*</scratchpad>"
      value: ""
    replace_function_results:
    # Replace everything that is not JSON array or object, just in case.
    - key: '(?s)^[^{\[]*'
      value: ""
    - key: '(?s)[^}\]]*$'
      value: ""
    # Drop the scratchpad content from responses
    - key: "(?s)<scratchpad>.*</scratchpad>"
      value: ""

Note that regexes can still be used when mixed grammars are enabled.

This is especially important for models that do not support grammars, such as transformers or OpenVINO models, which can now support function calling as well. While we update the docs, further documentation can be found in the PRs listed in the changelog below.

🚀 New Model Additions and Updates


Our model gallery continues to grow with exciting new additions like Aya-35b, Mistral-0.3, Hermes-Theta and updates to existing models ensuring they remain at the cutting edge.

This release brings major enhancements to tool calling support. Besides making our default models in the AIO images more performant, you can now try an enhanced out-of-the-box function calling experience with the Hermes model family (Hermes-2-Pro-Mistral and Hermes-2-Theta-Llama-3).

Our LocalAI function model!


I have fine-tuned a function call model specifically to fully leverage LocalAI's grammar support; you can already find it in the model gallery and on Hugging Face.

🔄 Single Binary Release: Simplified Deployment and Management

In our continuous effort to streamline the user experience and deployment process, LocalAI v2.16.0 proudly introduces a single binary release. This enha...


v2.15.0

09 May 17:20
f69de3b


🎉 LocalAI v2.15.0! 🚀

Hey awesome people! I'm happy to announce the release of LocalAI version 2.15.0! This update introduces several significant improvements and features, enhancing usability, functionality, and user experience across the board. Dive into the key highlights below, and don't forget to check out the full changelog for more detailed updates.

🌍 WebUI Upgrades: Turbocharged!

🚀 Vision API Integration

The Chat WebUI now seamlessly integrates with the Vision API, making it easier for users to test image processing models directly through the browser interface. This is a very simple and hackable interface, in less than 400 lines of code, built with Alpine.js and HTMX!


💬 System Prompts in Chat

System prompts can now be set in the WebUI chat, guiding interactions more intuitively and making the chat interface smarter and more responsive.


🌟 Revamped Welcome Page

New to LocalAI or haven't installed any models yet? No worries! The updated welcome page now guides users through the model installation process, ensuring you're set up and ready to go without any hassle. This is a great first step for newcomers - thanks for your precious feedback!


🔄 Background Operations Indicator

Don't get lost with our new background operations indicator on the WebUI, which shows when tasks are running in the background.


🔍 Filter Models by Tag and Category

As our model gallery balloons, you can now effortlessly sift through models by tag and category, making finding what you need a breeze.


🔧 Single Binary Release

LocalAI is expanding into offering single binary releases, simplifying the deployment process and making it easier to get LocalAI up and running on any system.

For the moment we have condensed the builds into a variant that disables the AVX and SSE instruction sets. We are also planning to include CUDA builds.

🧠 Expanded Model Gallery

This release introduces several exciting new models to our gallery, such as 'Soliloquy', 'tess', 'moondream2', 'llama3-instruct-coder' and 'aurora', enhancing the diversity and capability of our AI offerings. Our selection of one-click-install models is growing! We carefully pick models from the most trending ones on Hugging Face; feel free to submit your requests in a GitHub issue, hop into our Discord, or contribute by hosting your own gallery, or even by adding models directly to LocalAI!


Want to share your model configurations and customizations? See the docs: https://localai.io/docs/getting-started/customize-model/

📣 Let's Make Some Noise!

A gigantic THANK YOU to everyone who’s contributed—your feedback, bug squashing, and feature suggestions are what make LocalAI shine. To all our heroes out there supporting other users and sharing their expertise, you’re the real MVPs!

Remember, LocalAI thrives on community support—not big corporate bucks. If you love what we're building, show some love! A shoutout on social (@LocalAI_OSS and @mudler_it on twitter/X), joining our sponsors, or simply starring us on GitHub makes all the difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Thanks a ton, and.. enjoy this release!


What's Changed

Bug fixes 🐛

  • fix(webui): correct documentation URL for text2img by @mudler in #2233
  • fix(ux): fix small glitches by @mudler in #2265

Exciting New Features 🎉

  • feat: update ROCM and use smaller image by @cryptk in #2196
  • feat(llama.cpp): do not specify backends to autoload and add llama.cpp variants by @mudler in #2232
  • fix(webui): display small navbar with smaller screens by @mudler in #2240
  • feat(startup): show CPU/GPU information with --debug by @mudler in #2241
  • feat(single-build): generate single binaries for releases by @mudler in #2246
  • feat(webui): ux improvements by @mudler in #2247
  • fix: OpenVINO winograd always disabled by @fakezeta in #2252
  • UI: flag trust_remote_code to users // favicon support by @dave-gray101 in #2253
  • feat(ui): prompt for chat, support vision, enhancements by @mudler in #2259

🧠 Models

📖 Documentation and examples

👒 Dependencies

Other Changes

New Contributors

Full Changelog: v2.14.0...v2.15.0

v2.14.0

03 May 07:29
b58274b

🚀 AIO Image Update: llama3 has landed!

We're excited to announce that our AIO image has been upgraded with the latest LLM model, llama3, enhancing our capabilities with more accurate and dynamic responses. Behind the scenes it uses https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF, which is ready for function calling, yay!

💬 WebUI enhancements: Updates in Chat, Image Generation, and TTS


Our interfaces for Chat, Text-to-Speech (TTS), and Image Generation have finally landed. Enjoy streamlined and simple interactions thanks to the efforts of our team, led by @mudler, who have worked tirelessly to enhance your experience. The WebUI serves as a quick way to debug and assess models loaded in LocalAI; there is much to improve, but we now have a small, hackable interface!

🖼️ Many new models in the model gallery!


The model gallery has received a substantial upgrade with numerous new models, including Einstein v6.1, SOVL, and several specialized Llama3 iterations. These additions are designed to cater to a broader range of tasks, making LocalAI more versatile than ever. Kudos to @mudler for spearheading these exciting updates: now you can select the model you like with a couple of clicks!

🛠️ Robust Fixes and Optimizations

This update brings a series of crucial bug fixes and security enhancements to ensure our platform remains secure and efficient. Special thanks to @dave-gray101, @cryptk, and @fakezeta for their diligent work in rooting out and resolving these issues 🤗

✨ OpenVINO and more

We're introducing OpenVINO acceleration and many OpenVINO models in the gallery. You can now enjoy fast-as-hell speed on Intel CPUs and GPUs. Applause to @fakezeta for the contributions!

📚 Documentation and Dependency Upgrades

We've updated our documentation and dependencies to keep you equipped with the latest tools and knowledge. These updates ensure that LocalAI remains a robust and dependable platform.

👥 A Community Effort

A special shout-out to our new contributors, @QuinnPiers and @LeonSijiaLu, who have enriched our community with their first contributions. Welcome aboard, and thank you for your dedication and fresh insights!

Each update in this release not only enhances our platform's capabilities but also ensures a safer and more user-friendly experience. We are excited to see how our users leverage these new features in their projects; feel free to drop us a line on Twitter or any other social network, we'd be happy to hear how you use LocalAI!

📣 Spread the word!

First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and.. exciting times ahead with LocalAI!

What's Changed

Bug fixes 🐛

  • fix: config_file_watcher.go - root all file reads for safety by @dave-gray101 in #2144
  • fix: github bump_docs.sh regex to drop emoji and other text by @dave-gray101 in #2180
  • fix: undefined symbol: iJIT_NotifyEvent in import torch ##2153 by @fakezeta in #2179
  • fix: security scanner warning noise: error handlers part 2 by @dave-gray101 in #2145
  • fix: ensure GNUMake jobserver is passed through to whisper.cpp build by @cryptk in #2187
  • fix: bring everything onto the same GRPC version to fix tests by @cryptk in #2199

Exciting New Features 🎉

  • feat(gallery): display job status also during navigation by @mudler in #2151
  • feat: cleanup Dockerfile and make final image a little smaller by @cryptk in #2146
  • fix: swap to WHISPER_CUDA per deprecation message from whisper.cpp by @cryptk in #2170
  • feat: only keep the build artifacts from the grpc build by @cryptk in #2172
  • feat(gallery): support model deletion by @mudler in #2173
  • refactor(application): introduce application global state by @dave-gray101 in #2072
  • feat: organize Dockerfile into distinct sections by @cryptk in #2181
  • feat: OpenVINO acceleration for embeddings in transformer backend by @fakezeta in #2190
  • chore: update go-stablediffusion to latest commit with Make jobserver fix by @cryptk in #2197
  • feat: user defined inference device for CUDA and OpenVINO by @fakezeta in #2212
  • feat(ux): Add chat, tts, and image-gen pages to the WebUI by @mudler in #2222
  • feat(aio): switch to llama3-based for LLM by @mudler in #2225
  • feat(ui): support multilineand style ul by @mudler in #2226

🧠 Models

📖 Documentation and examples

👒 Dependencies

Other Changes
