Tags: M3TaLShArK/LocalAI
feat(grpc): backend SPI pluggable in embedding mode (mudler#1621)
  * run server
  * grpc backend embedded support
  * backend providable
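With the backend SPI pluggable, gRPC backends can be provided from outside the binary rather than only compiled in. A minimal sketch of pointing LocalAI at an out-of-process backend, assuming the existing `--external-grpc-backends` flag; the backend name and address are placeholders, not values from the PR:

```bash
# Register a hypothetical external gRPC backend by name and address.
# "my-backend" and 127.0.0.1:50051 are assumptions for illustration.
./local-ai --external-grpc-backends "my-backend:127.0.0.1:50051"
```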
feat(extra-backends): Improvements, adding mamba example (mudler#1618)
  * feat(extra-backends): Improvements
    - vllm: add max_tokens, wire up stream event
    - mamba: fixups, adding examples for mamba-chat
  * examples(mamba-chat): add
  * docs: update
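The mamba-chat example pairs the new backend with a standard LocalAI model definition. A sketch of such a config, assuming the backend is registered as `mamba` and using `havenhq/mamba-chat` as the model reference (both assumptions based on the example's naming, not confirmed by this entry):

```bash
# Hypothetical model definition for the mamba-chat example.
# Backend name and model reference are assumptions.
cat <<'EOF' > models/mamba-chat.yaml
name: mamba-chat
backend: mamba
parameters:
  model: havenhq/mamba-chat
EOF
```

Once the file is in the models directory, the model should be addressable as `mamba-chat` through the usual OpenAI-compatible endpoints.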
⬆️ Update ggerganov/llama.cpp (mudler#1558)
docs: improve getting started (mudler#1553)
  * docs: improve getting started
  * cleanups
  * Use dockerhub links
  * Shrink command to minimum
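With images on Docker Hub (see the CI entry that follows), the getting-started command shrinks to a single `docker run`. A sketch, assuming the `localai/localai` repository and a `latest` tag:

```bash
# Minimal getting-started sketch; image name and tag are assumptions.
docker run -ti -p 8080:8080 localai/localai:latest
```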
ci(dockerhub): push images also to dockerhub (mudler#1542)
fix(download): correctly check for not found error (mudler#1514)
feat(alias): alias llama to llama-cpp, update docs (mudler#1448)
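With the alias in place, existing configs that reference `llama` keep working while resolving to the `llama-cpp` backend. A sketch, with the file and model names as placeholders:

```bash
# `backend: llama` now resolves to llama-cpp; names are placeholders.
cat <<'EOF' > models/my-model.yaml
name: my-model
backend: llama   # alias for llama-cpp
parameters:
  model: ggml-model.gguf
EOF
```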